Dataset columns: entry_id, published, title, authors, primary_category, categories, text
http://arxiv.org/abs/2407.11961v1
20240716175840
On Fourier Asymptotics and Effective Equidistribution
[ "Shreyasi Datta", "Subhajit Jana" ]
math.DS
[ "math.DS", "math.NT", "11J83, 28A80" ]
§ ABSTRACT We prove effective equidistribution of expanding horocycles in SL_2()\SL_2() with respect to various classes of Borel probability measures on having certain Fourier asymptotics. Our proof involves new techniques combining tools from automorphic forms and harmonic analysis. In particular, we prove effective equidistribution with respect to the Hausdorff measures on certain shifted, possibly irrational, missing digit Cantor sets, whose Hausdorff dimensions can be as small as 0.61. This improves upon the dimension and rationality restrictions of the recent breakthrough of Khalil–Luethi [Invent. Math, 232(2): 713–831, 2023]. As an application in Diophantine approximation, a complete Khintchine's theorem follows for a large class of self-similar measures extending a groundbreaking work of Yu [arXiv, 2101.05910]. : Modelling Optimisation to Compute Horndeski in E. Bellini^3,4,5 Received XXX; accepted YYY ================================================= § INTRODUCTION §.§ Equidistribution of horocycle flow Dynamics on :=_2()\_2(), the unit cotangent bundle of the modular surface, is a central topic lying in the interface of homogeneous dynamics and representation theory and has numerous applications in various number theoretic problems. We consider the homogeneous space equipped with the _2()-invariant probability measure m_. Let g_y:=([ √(y); √(y^-1) ]) for y>0 be the geodesic flow and the corresponding expanding horocycle flow is given by n(x):=([ 1 x; 0 1 ]) for x∈. Let μ denote a Borel probability measure on . Let μ_y:=∫δ_n(x)g_yμ̣(x) denote the probability measure on supported on the horocycle of length of y^-1, that is, for ϕ∈ C_c^∞() we write μ_y(ϕ):=∫ϕ(n(x)g_y)μ̣(x). We record the following (folklore) problem which is currently mostly wide open. Given a Borel probability measure μ on , determine whether μ_ym_ as y→ 0 with a polynomial order convergence rate, that is, if there is an η>0 such that μ_y(ϕ)=m_(ϕ)+O_μ,ϕ(y^η), ϕ∈ C_c^∞(), as y→ 0 where m_(ϕ):=∫ϕṃ_. Let μ denote the Fourier transform of μ; see(<ref>). The goal of this paper is to initiate a study of Problem <ref>, depending on the asymptotic behavior of μ. Loosely speaking, we answer Problem <ref> affirmatively for μ with μ having * no pointwise decay, but a certain polynomial decay on average, * arbitrarily slow polynomial decay, but having precise oscillatory asymptotics. Interestingly, both of the above types contain many interesting class of measures. Recently, in <cit.> and <cit.>, a large class of self-similar measures obtained that fall under kind <ref> as above. On the other hand, the push-forward of the Lebesgue measure by analytic non-constant maps fall under kind <ref> as above. Problem <ref> has a long history when μ is the normalized Lebesgue measure supported on [0,1]. In this case, Problem <ref> has been investigated by numerous people over the last decades, starting with Selberg and Zagier; see <cit.> and <cit.> that make the equidistribution result by Dani–Smillie <cit.> (and also <cit.>) effective. For absolutely continuous measures, it is known that equidistribution holds as in Problem <ref>, but nothing can be said, a priori, about the rate of convergence. In fact, the convergence could happen arbitrary slowly if the density function (coming from Radon–Nikodym theorem) does not have enough regularity. Kleinbock–Margulis, in <cit.>, answered Problem <ref> affirmatively for absolutely continuous measures with smooth compactly supported densities in the space of matrices. 
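To make the setup concrete, here is a minimal numerical sketch (an editorial illustration, not part of the argument; the function names, the height threshold T, the sample size, and the value of y are arbitrary choices). Under the identification of the homogeneous space with the modular surface, the horocycle point n(x)g_y projects to x+iy in the upper half-plane, so one can sample x from a chosen measure, reduce x+iy to the standard fundamental domain, and compare the empirical mass above height T with the prediction 3/(pi*T) of the normalized hyperbolic area measure. This only probes the projection to the modular surface; with Lebesgue samples it illustrates the classical effective case discussed above, and other sampleable measures can be swapped in to experiment.

import numpy as np

def reduce_to_fundamental_domain(z, max_iter=10000):
    # Reduce z in the upper half-plane modulo SL_2(Z) to |Re z| <= 1/2, |z| >= 1.
    for _ in range(max_iter):
        z = complex(z.real - round(z.real), z.imag)   # translate by a power of n(1)
        if abs(z) < 1.0:
            z = -1.0 / z                              # apply the inversion z -> -1/z
        else:
            return z
    return z

rng = np.random.default_rng(0)
y, T, N = 1e-4, 2.0, 20000
xs = rng.random(N)                                    # Lebesgue samples on [0,1); swap in any sampleable measure
reduced = [reduce_to_fundamental_domain(complex(x, y)) for x in xs]
empirical = np.mean([z.imag > T for z in reduced])
print("empirical mass above height", T, ":", empirical, " vs  3/(pi*T) =", 3 / (np.pi * T))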
Recently, Björklund–Gorodnik in <cit.> extended <cit.> to continuous densities, among other things. Problem <ref> is significantly difficult when the measure is not absolutely continuous. Such problem emerges from deep works by Kleinbock–Margulis <cit.> and Kleinbock–Lindenstrauss–Weiss <cit.>. In <cit.>, the authors show that if μ is a friendly measures (see <cit.> for definition) then any weak-^∗ limit of -1/logε∫_ε^1 μ_y ^̣× y as ε→ 0 is a probability measure. The class of friendly measures contain the Hausdorff measures on the missing digit Cantor sets (described later). The celebrated work Simmons–Weiss <cit.> implies that for certain self-similar measures μ, one has lim_ε→ 0 -1/logε∫_ε^1μ_y^̣× y=m_, (in weak-^∗ sense) improving <cit.> in this case. Note that (<ref>) is an average version of Problem <ref> in its qualitative form. In a recent breakthrough, Khalil–Luethi <cit.> answered Problem <ref> for a class of measures that are not necessarily absolutely continuous. Below, we explain more in this direction. §.§.§ Average Fourier decay Our first main result is an affirmative solution of Problem <ref> when μ satisfies property Item <ref>. We recall the definition of the ℓ^1-dimension of μ (see (<ref>)) as _ℓ^1μ:=1-inf{ s≥ 0|lim_X→∞X^-s∑_|m|≤ X|μ(m)|=0}. We refer to see <cit.> and <cit.> for more details about this quantity. Let μ be a Borel probability measure on such that μ>0.609375. Then there exists an η>0 and ℓ∈ so that for any ϕ∈ C_^̱2ℓ() we have μ_y(ϕ)=m_(ϕ)+ O_μ(|y|^ηS_∞,ℓ(ϕ)), as y→ 0. We refer the reader to <ref> for the definitions of C_^̱2ℓ() and S_∞,ℓ(·). The number 0.609375=3964=12+764 is related to the spectral gap for ; see <ref> for details. We immediately give existence of a large class of measures that satisfy the ℓ^1-dimension condition in Theorem <ref>. We refer to <ref> for undefined terminologies mentioned below. Let μ_i for i=1, 2 be two Borel probability measures on such that each μ_i is s_i-AD-regular for s_i>0. If s_1+s_2/2>0.609375 then the convolution μ_1∗μ_2 satisfies the conclusion of Theorem <ref>. Let μ be a Borel probability measure on with μ>0.609375. Then for any Borel probability measure ν, the convolution μ∗ν satisfies the conclusion of Theorem <ref>. Moreover, in this case the implied constant in the error term in Theorem <ref> does not depend on ν. In Corollary <ref>, choosing μ satisfying the hypothesis of Theorem <ref> and ν to be the Dirac mass δ_x_0 for some x_0∈ we immediately obtain that ∫ϕ(n(x_0+x)g_y)μ̣(x)=∫_ϕṃ_ + O_μ(|y|^η S_∞,ℓ(ϕ)) for the same η and ℓ as in Theorem <ref> and uniformly in x_0. §.§.§ Missing digit Cantor set We define some minimal notations to state the next theorem. For 2<b∈ and ∅≠ D⊊{0,…,b-1}, let K_b,D denote the missing digit Cantor set with base b and (appearing) digits from D. Let X denote the Hausdorff dimension of a set X. It is known that K_b,D=log#D/log b; see <ref> for details. Let μ_b,D denote log#D/log b-dimensional Hausdorff (probability) measure restricted on K_b,D (which we call by natural missing digit measure). In a recent breakthrough <cit.>, Khalil–Luethi answered Problem <ref> affirmatively for any μ_b,D such that the Hausdorff dimension of K_b,D is sufficiently close to 1; in particular, >0.9992 for any b, and >0.839 if b is prime; see <cit.>[We remark that <cit.> considered more general rational iterated function system IFSs in higher dimensions with restrictions coming from contraction ratios and probability vectors associated to the self similar measure.]. 
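Before moving on, it may help to record where the threshold 0.609375 = 39/64 = 1/2 + 7/64 in the theorem above comes from. The following is a reader's bookkeeping of the "Hölder estimate with (1,∞)-pair" sketched later in the introduction, not a step of the proof itself; here λ(m) denotes the Hecke eigenvalues and ϑ > 7/64 is any admissible spectral gap exponent:
\[
\Bigl|\sum_{1\le m\le X}\lambda(m)\,\widehat{\mu}(m)\Bigr|
\;\le\;\Bigl(\max_{m\le X}|\lambda(m)|\Bigr)\sum_{m\le X}|\widehat{\mu}(m)|
\;\ll_{\epsilon}\; X^{\vartheta+\epsilon}\cdot X^{1-\dim_{\ell^1}\mu+\epsilon},
\]
so the required power saving over $X^{1/2}$ holds as soon as
\[
\dim_{\ell^1}\mu>\tfrac12+\vartheta,\qquad \vartheta=\tfrac{7}{64}
\ \Longrightarrow\ \dim_{\ell^1}\mu>\tfrac{39}{64}=0.609375.
\]
Under the GRC one may take ϑ arbitrarily small, which is the source of the improvement to 0.5 discussed below.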
In this paper, we substantially improve upon the dimension restriction of <cit.>. For every s>0.609375, there exists explicitly computable b( s)∈, as in (<ref>), such that for * any b≥ b(s) and * any D in arithmetic progression, with K_b,D≥ s the measure μ_b,D satisfies the conclusion of Theorem <ref>. We point out that Theorem <ref> is new in this setting even in its qualitative form, i.e., without any rate of convergence. Also, we note that b(s) as given in (<ref>) is not optimal. Any μ_b,D with K_b,D<0.9992 (for general b) considered in Theorem <ref> is not covered by <cit.>. As an easy example, let b=450 and D={0,1,⋯, 446}, then _ℋ(K_b,D)<0.9992 and μ_b,D>0.609375 by (<ref>). Note that the Hausdorff dimension of the middle third Cantor set (that is, K_3,{0,2}) is ≈ 0.6309. Whereas in Theorem <ref> we get an infinite class of natural missing digit measures whose supports have Hausdorff dimension even less than the same of middle third Cantor set. However, our result does not apply to middle third Cantor set. Our method of proof is completely different than that of <cit.>; see <ref> for details and comparison with previous methods. In particular, to prove Theorem <ref>, we do not need any condition (e.g. self-similarity like <cit.>) on the measure μ other than the restriction on ℓ^1 dimension, which will be reflected in <ref>. Recently, Chow–Varju–Yu in <cit.> find a robust way to compute μ for self-similar measures whose attractors are missing digit Cantor sets. Thus, their method gives a large collection of non absolutely continuous measures that satisfy the hypothesis of Theorem <ref>. §.§.§ Irrational IFS For x_0∈ we consider the shifted missing digit Cantor set K_b,D+x_0; see (<ref>). Let μ_b,D,x_0 denote the log#D/log b-dimensional Hausdorff measure on K_b,D+x_0; see (<ref>). Let s,b,D be as in Theorem <ref>. Then for any x_0∈, the measure μ_b,D,x_0 supported on K_b,D+x_0 satisfies the conclusion of Theorem <ref>. Note that if x_0 in Theorem <ref> is an irrational number then the corresponding IFS has irrational translates; see (<ref>). Our next theorem, shows another instance of an IFS (in fact, uncountably many) with irrational contractions such that a self-similar measure satisfies the effective equidistribution; also see Remark <ref>. For every sufficiently small positive ρ there exists a self-similar measure with underlying IFS having (equal) contraction ratio ρ, satisfies the conclusion of Theorem <ref>. In <cit.>, the authors consider IFSs that have rational contractions and translations (albeit, more general than K_b,D). We point out that when x_0∉ in Theorem <ref> and Theorem <ref> are the first instances where the Problem <ref> is answered for self-similar measures associated to irrational IFSs. Moreover, in these cases the methods in <cit.> can not be applied as the rationality assumption of IFS is used crucially in <cit.>. §.§.§ Improvement assuming the Generalized Ramanujan Conjecture (GRC) Upon the assumption of the GRC (see the discussion after (<ref>)) we can replace the number 0.609375 by 0.5 in Theorem <ref>, significantly improving the ℓ^1-dimension restriction. This will yield likewise improvements on e.g. Theorem <ref> and Theorem <ref>. As an example, we describe the following corollary which follows from Theorem <ref>, <cit.>, and the proof of Theorem <ref>. Assume the GRC. Let b≥ 5 and #D=b-1 or let b=4 with D={0,1,2} or D={1,2,3}. Then for any x_0∈ the measure μ_b,D,x_0 (see <ref>) satisfies the conclusions of Theorem <ref>. 
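The b = 450 example above can be checked numerically. A minimal Python sketch, assuming the lower bound dim_{ℓ^1} μ_{b,D} ≥ log l/log b − log(4 + log(2l))/log b recalled later in the proof of the theorem (with natural logarithms, which is our reading of that bound), and assuming nothing else:

import math

def l1_dim_lower_bound(b, l):
    # Lower bound for dim_{l^1} of the natural missing-digit measure, base b,
    # with l digits in arithmetic progression (natural logarithms assumed).
    return math.log(l) / math.log(b) - math.log(4 + math.log(2 * l)) / math.log(b)

b, l = 450, 447                                             # D = {0, 1, ..., 446}
print("Hausdorff dimension log l / log b =", math.log(l) / math.log(b))   # about 0.9989 < 0.9992
print("l^1-dimension lower bound        =", l1_dim_lower_bound(b, l))     # about 0.6095 > 39/64 = 0.609375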
To the best of our knowledge, albeit conditional on the GRC, Corollary <ref> is the first example of missing digit Cantor sets with base 4 where effective equidistribution, as considered above, holds. We end this subsection remarking that Theorem <ref> on the assumption of the GRC is sharp: For every ϵ>0, there exists a measure μ_ϵ, constructed by Kaufman in <cit.>, that is supported on 𝒲(ψ_ϵ+1) (as defined in <ref>), such that μ_ϵ≥1/2-ϵ. Thus combining <cit.>, the conclusion of Theorem <ref> fails for μ_ϵ. §.§.§ Pointwise Fourier decay Our next main theorem answers Problem <ref> for measures that satisfy Item <ref>. Let μ be a Borel probability measure on such that there exist a K∈, sequences {δ_j}_j=1^K⊂(0,12] and δ_0>1/2, and complex numbers {α_j}_j=1^K⊂ and {β_j}_j=1^K⊂ such that μ(ξ)= ∑_j=1^K|ξ|^-δ_jβ_je(ξα_j)+O_μ(|ξ|^-δ_0), e(z):=exp(2π i z), as |ξ|→∞. Then there is an ℓ∈ such that for any 0<η<min{12,δ_0,δ_12…,δ_K2} and ϕ∈ C_^̱2ℓ() we have μ_y(ϕ)=m_(ϕ)+ O_η,μ(|y|^ηS_∞,ℓ(ϕ)), as y→ 0. Note that in Theorem <ref>, the rate of Fourier decay of the measures can be arbitrarily slow, and yet we get an effective equidistribution. Moreover, in Theorem <ref>, in general, the measures may have μ<12, in fact μ can be very close to 0. Thus Theorem <ref> can not be deduced from Theorem <ref>. As a proof of concept for Theorem <ref>, using it we answer Problem <ref> for the push-forward of the Lebesgue measure by a non-constant analytic map, namely Corollary <ref>. However, we think that Corollary <ref> can be deduced more directly (that is, not going via Theorem <ref>). Let f:→ be a non-constant real analytic function and w be a compactly supported non-negative L^1-normalized smooth function on . Let μ^w,f be the Borel probability measure defined by f_⋆(w∘Leb) where Leb denotes the Lebesgue measure, that is μ^w,f(h):=∫_ h(f(x))w(x)x̣, h∈ C(). One can check that μ^w,f is an absolutely continuous measure with possibly non-continuous density. One may analyse the asymptotics of the Fourier transform of μ^w,f via the method of stationary phase; see Proposition <ref>, which also shows that Theorem <ref> is not vacuous. Let Z(f) denote the zero set of f. Note that as f is analytic and w is compactly supported the set Z(f')∩(w) is finite (counted with multiplicity). We define k_f,w:=max{order of vanishing of f' at any z∈ Z(f')∩(w)}. There exists ℓ∈ such that for any 0<η<12(k_f,w+1) and ϕ∈ C_^̱2ℓ() we have μ^w,f_y(ϕ)=m_(ϕ)+ O_η(|y|^ηS_∞,ℓ(ϕ)), as y→ 0. One may check that the rate of equidistribution in Corollary <ref> can not be improved without assuming the Riemann hypotheis (cf. the proof of <cit.> for f(x)=x). On higher rank groups the related problem is significantly difficult and, in general, open; see <cit.> for recent major works. §.§ Diophantine approximation on fractals In this section, we focus on some applications in Diophantine approximation. Given a non-increasing monotonic positive function ψ:→_+, we denote 𝒲(ψ):={x∈[0,1]|| qx-p|<ψ(q) for infinitely many q∈ and p∈}. Khintchine in 1926, <cit.> shows that Leb(𝒲(ψ))= 0 if ∑ψ(q)<∞, 1 if ∑ψ(q)=∞. When ψ_τ(q):=1/q^τ with τ>1, the set ⋃_τ>1𝒲(ψ_τ) is referred as the set of very well approximable numbers. Using the convergence part of Khintchine's theorem, it follows that Lebesgue almost every point in is not very well approximable. The same observations as above for not absolutely continuous measures become highly nontrivial and are topics of research in the past two decades. 
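As a numerical companion to the corollary on push-forward measures (an illustrative sketch only; the bump function, the grid, and the frequencies are arbitrary choices), one can evaluate the Fourier transform of the push-forward measure, namely the oscillatory integral of e(ξ f(x)) against w(x) dx, by quadrature and watch the stationary-phase decay. For f(x) = x^2, f' has a zero of order 1 at 0, so k_{f,w} = 1 and the modulus should decay like |ξ|^{-1/(k_{f,w}+1)} = |ξ|^{-1/2}:

import numpy as np

x = np.linspace(-1, 1, 200001)
dx = x[1] - x[0]
w = np.where(np.abs(x) < 1, np.exp(-1.0 / (1.0 - np.minimum(x ** 2, 1 - 1e-12))), 0.0)
w /= np.sum(w) * dx                               # smooth compactly supported bump, L^1-normalized

def mu_hat(xi, f):
    # Fourier transform of the push-forward f_*(w dx) at frequency xi, by Riemann-sum quadrature.
    return np.sum(np.exp(2j * np.pi * xi * f(x)) * w) * dx

f = lambda t: t ** 2                              # f' vanishes to order 1 at 0, so k_{f,w} = 1
for xi in [10.0, 40.0, 160.0, 640.0]:
    v = abs(mu_hat(xi, f))
    print(f"xi = {xi:6.0f}   |mu_hat| = {v:.5f}   |mu_hat| * sqrt(xi) = {v * np.sqrt(xi):.5f}")

The last column stabilizes as ξ grows, consistent with the leading term of the stationary-phase expansion recorded later in the paper.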
For instance, in higher dimensions, studying very well approximable points inside certain manifolds is the content of 's conjecture, which was famously solved by Kleinbock–Margulis in <cit.>. There are remarkable developments over the last two decades for general non-increasing ψ; see <cit.>. In case of measures on , in 1984 <cit.>, Mahler asked “How close can irrational elements of Cantor’s (middle third) set be approximated by rational numbers not in Cantor’s set?”. Weiss in <cit.> shows that for the middle third Cantor set almost every point is not very well approximable with respect to the natural missing digit measure. This was significantly improved in <cit.>, where it is shown that with respect to friendly measures almost every point is not very well approximable. On the other hand, result in <cit.> by Einsiedler–Fishman–Shapira, and more generally, the result in <cit.> by Simmons–Weiss imply that for ψ(q)=c/q with c>0 one has μ(𝒲(ψ))=1, when μ is a self-similar measure with an irreducible IFS satisfying the OSC in (see <ref> for definition). These results motivate the following problem which was posed by Kleinbock–Lindenstrauss–Weiss in <cit.>: For a friendly measure μ does the analogue of (<ref>) hold? In general, Problem <ref> is wide open. In <cit.>, the authors show under some a strong summability condition on ψ together with monotonicity, with respect to absolutely friendly measure almost every point is not ψ approximable. The only known complete answer to Problem <ref> is given in <cit.>, for self-similar measures with rational irreducible IFS satisfying OSC under a certain extra condition. This extra condition in case of the natural missing digit measures (see the end of <ref> for definition) amounts to restrict the Hausdorff dimension of the missing digit Cantor set to be very close to 1. With a different approach, in a groundbreaking work <cit.>, Yu answers the convergence part of (<ref>) for a class of measures μ with _ℓ^1μ>1/2, namely: <cit.> Let μ be a Borel probability measure with μ>1/2. Also, let ψ be a non-increasing monotonic function with ∑ψ(q)<∞. Then μ(𝒲(ψ))=0. For the special function ψ(q)=1/qloglog q, Yu in <cit.> also proves divergence for self-similar measure with attractor being a missing digit Cantor set. However, his methods do not yield the divergence part in general. In this paper, we address the divergence part, extending the result of <cit.>. As a consequence, we extend the result of Khalil–Luethi <cit.>. Let μ be a self-similar measure whose underlying attractor is a shifted missing digit Cantor set as in <ref>. If μ>0.609375, then for any ψ non-increasing monotonic function, μ(𝒲(ψ))= 1 if ∑ψ(q)=∞. In particular, measures considered in Theorem <ref> satisfy the above conclusion. We remark that assuming the GRC we can improve Theorem <ref> by weakening the hypothesis to μ>0.5; see Corollary <ref> and discussion around it. We note that Theorem <ref> is not enough to imply Theorem <ref>. One needs a stronger effective equidistribution result Proposition <ref> with any starting point, not only the identity. We end the section remarking that when the monotonicity condition on ψ is absent, an interesting question arises where approximating function is supported on a sparse set; see <cit.>. §.§ Effective equidistribution via Fourier asymptotics We end this section with a discussion on our proofs of the effective equidistribution results. Let ϕ∈ C_c^∞() with ∫_ϕṃ_=0 and μ be a Borel probability measure on . 
Our goal is to show that μ_y(ϕ)≪ |y|^η for some η>0. After Plancherel we write μ_y(ϕ)=∑_m∈ϕ_y(m)μ(m), ϕ_y(m):=∫_/ϕ(n(x)a(y))e(-mx)x̣. Integrating by parts with ([ 0 1; 0 0 ])∈Lie(N) one checks that ϕ_y(m) is supported essentially on |m|≤ |y|^-1. We use Weyl's criterion in attempt to prove the effective equidistribution. Using the spectral theory of automorphic forms on _2() we choose a nice basis that we plugin as ϕ to test the effective equidistribution. Thus for such a non-trivial automorphic form we need to essentially show that ∑_|m|≤ |y|^-1ϕ_y(m)μ(m)≪_ϕ |y|^η, η>0 with certain implicit Sobolev-type dependency on ϕ. At this point, applying a major automorphic input, namely, theory of Whittaker function and Hecke theory we may obtain in the region |m|≤ |y|^-1 one has ϕ_y(m) ∼λ(|m|) |y|^1/2 where λ(m) are the Hecke eigenvalues. The GRC predicts that λ(m) are essentially bounded, where as unconditionally, by the spectral gap <cit.> one has λ(m)≪ m^ϑ for any ϑ>7/64. Thus to show effective equidistrbution we need to show a better than square-root cancellation, namely, ∑_m≤ Xλ(m)μ(m)≪ X^1/2-η, η>0, X→∞. Now we list down what can say about the above sum using different techniques: * Trivial bound: Note that trivially |μ(m)|≤ 1. Thus assuming the GRC the above sum is bounded by X (unconditionally, it is even worse). * Cauchy–Schwarz and average GRC: After Cauchy–Schwarz and applying (<ref>) and (<ref>) one can bound the sum by √(∑_m≤ X|λ(m)|^2)√(∑_m≤ X|μ(m)|^2)≪ X^1/2+1-_ℓ^2μ/2. To win, one needs _ℓ^2μ>1 which is impossible! We briefly remark for a missing digit Cantor set the above approach is somewhat close to that of Khalil–Luethi <cit.>, where they start with a self-similar μ (albeit, they have not used a Fourier dualization to start with). They used self-similarity quite cleverly and crucially to start with μ_y(𝒫^nϕ) where 𝒫 is the Markov operator (see <cit.>) underlying the self-similar structure of μ. Heuristically, this replaces λ(m) in the sum in consideration by the “Hecke eigenvalues”, say ν_n(m) of 𝒫^nϕ. Their winning point can be summaraized as that such ν_n has a “better spectral gap on average”. Now we describe the winning strategy that we take in this paper. * Hölder estimate with (1,∞)-pair: We apply Hölder's estimate on the sum in consideration. Applying the bound of Hecke eigenvalues we see the sum is ≪ X^ϑ+1-_ℓ^1μ=X^1/2-(_ℓ^1μ-1/2-ϑ), which is the main idea behind our Theorem <ref>. We remark that this argument follows without any self-similarity assumption on μ, unlike <cit.>. * Fourier decay of μ: Assume that μ(m)≪ m^-δ for some δ>0. Then using average GRC we can see that equidistribution holds if δ>1/2; (cf. Lemma <ref>). * Cancellation of additive twist: A deep automorphic result of Jutila <cit.> gives us ∑_m≤ Xλ(m)e(mx) ≪ X^1/2, uniformly in x, which immediately implies that the sum in consideration is O(X^1/2). This method fails but almost at the border. An important point is that to reach at the border we have not assumed anything on μ. The above gives us a hint that if μ(m) asymptotoically behaves as e(mα)m^-δ for any δ>0 we have a chance of winning. This and the previous point are the main ideas behind our Theorem <ref>. § MEASURE THEORETIC PRELIMINARIES §.§ Self-similar measures We record certain basic definitions about self-similar sets and measures in . We refer to <cit.> for details. Let l∈ and ℱ={f_i}_1≤ i≤ l be a finite collection contracting similarities, i.e., f_i:→, x↦ρ_i x+θ_i, 0<ρ_i<1, θ_i∈. Such an ℱ is called an Iterated Function System (IFS). 
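A minimal Python sketch of the definition just given (the function names and the choice b = 3, D = {0, 2} are illustrative): random iteration of an IFS, the so-called chaos game, produces points on the attractor, and with a probability vector attached its empirical distribution approximates the associated self-similar measure defined just below.

import numpy as np

def chaos_game(maps, probs, n_iter=100000, burn_in=100, seed=0):
    # Random iteration of the IFS maps = [(rho_i, theta_i)]: x -> rho_i * x + theta_i,
    # branch i chosen with probability probs[i]; returns samples lying on the attractor.
    rng = np.random.default_rng(seed)
    x, out = 0.0, []
    for n in range(n_iter + burn_in):
        rho, theta = maps[rng.choice(len(maps), p=probs)]
        x = rho * x + theta
        if n >= burn_in:
            out.append(x)
    return np.array(out)

# Missing-digit IFS F_{b,D} with b = 3, D = {0, 2}:  f_i(x) = (x + i)/b.
b, D = 3, [0, 2]
maps = [(1.0 / b, i / b) for i in D]
pts = chaos_game(maps, probs=[1.0 / len(D)] * len(D))
print("samples lie in [", pts.min(), ",", pts.max(), "]")
print("Hausdorff dimension log(#D)/log(b) =", np.log(len(D)) / np.log(b))   # about 0.6309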
Using Banach fixed point theorem (see <cit.>), we see that there exists a unique compact set K ⊂, known as attractor, such that ⋃_1≤ i≤ l f_i(K)=K. We say ℱ satisfies the Open Set Condition (OSC) if there exists an open set U⊂ such that f_i(U)⊆ U, f_i(U)∩ f_j(U)=∅, 1≤ i≠ j≤ l. In this case, the Hausdorff dimension of K, denoted by K, is the unique solution s of the equation ∑_i=1^n ρ_j^s=1. Finally, we call ℱ to be irreducible if there does not exist an α∈ such that f_i(α)=α for 1≤ i≤ l. Let λ={λ_i}_1≤ i≤ l be a probability vector, i.e., ∑_i=1^lλ_i=1, λ_i>0. Given an IFS ℱ and probability vector λ there exists an unique Borel probability measure μ such that ∑_i=1^l λ_i f_i_⋆μ= μ, where by f_⋆μ we denote the push-forward of μ by a function f. Such a measure is called self-similar measure. If ℱ satisfies OSC then μ has null overlaps, that is, μ(f_i(K)∩ f_j(K))=0, 1≤ i≠ j≤ l, see <cit.>. If ℱ has OSC, then for λ_j=ρ_j^s the corresponding self-similar measure μ is the s-dimensional Hausdorff measure restricted to the attractor K which is s-AD regular; see <cit.>. Let b≥ 3 and ∅≠ D⊂{0,⋯, b-1} with # D=l≥ 2. Let ℱ_b,D denote the IFS {f_i(x):=x+i/b| i∈ D}. The IFS ℱ_b,D satisfies OSC and the attractor of ℱ_b,D is K_b,D. Given any probability vector, there exists an unique self-similar measure whose attractor is K_b,D. We refer to them as self-similar measures for which the attractor is a missing digit Cantor set. In particular, when the probability vector is uniform probability vector, we call the self-similar measure to be natural missing digit measure μ_b,D. Note μ_b,D is also the s=log l/log b-dimensional Hausdorff measure restricted on K_b,D. For x_0∈ it can be easily checked that K_b,D+x_0 is the attractor of the IFS ℱ_b,D,x_0:={f_i(x):=x+i/b+x_0(1-1b)| i∈ D} and the convolution μ_b,D,x_0:=μ_d,D∗δ_x_0 is the self-similar measure on K_b,D+x_0 associated with uniform probability vector. It is easy to check that ℱ_b,D,x_0 satisfies OSC. §.§ Fourier theory Let μ be a Borel probability measure on . Given two Borel probability measures μ_1 and μ_2 we denote their convolution by μ_1∗μ_2(f) = ∫ f(x+y)μ̣_1(x)μ̣_2(y), f∈ C(). We define the Fourier transform of μ by μ(ξ):=∫ e(ξ x)μ̣(x); e(z):=exp(2π i z), z∈. If μ is compactly supported then μ is a bounded Lipschitz continuous function. Note that μ_1∗μ_2=μ_1μ_2. It follows from the classical Riemann–Lebesgue lemma that any absolute continuous measure (w.r.t the Lebesgue measure) μ(ξ)→ 0 as |ξ|→∞. However, it is often extremely difficult to quantify the decay rate of μ in general. On the other hand, when μ is not absolutely continuous, μ may not have any decay. For instance, if μ is the Hausdorff measure supported on the middle-third Cantor set then μ(ξ)↛ 0, i.e., μ has no decay. If μ is a self-similar measure (see <ref>) with underlying IFS having ρ_i=ρ for 1≤ i≤ l then it is easy to compute the Fourier transform of μ, namely μ(ξ)=∏_j=1^∞ g(ρ^jξ), g(ξ):=∑_i=1^lλ_ie(b_iξ); see <cit.>. We refer readers to <cit.> for surveys on behaviour of Fourier transform in case of various self-similar measures. In this paper, we mainly consider the Fourier decay in an ℓ^p-average sense. Following <cit.>, we define, _ℓ^pμ:=1-inf{ℓ|lim_X→∞X^-ℓ∑_|m|≤ X|μ(m)|^p = 0}. The set on the right-hand side above is non-empty as trivially any ℓ>1 belongs to this set. Moreover, this definition is equivalent to those in <cit.>. Indeed, this follows from the fact for any sequence {A(X)}_X∈ of positive reals A(X) = o_ϵ(X^ϵ) ∀ϵ>0 A(X) = O_ϵ(X^ϵ) ∀ϵ>0. 
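The product formula above is easy to evaluate numerically for the natural missing-digit measures. A minimal sketch (the truncation length, base, and digit set are arbitrary choices): it exhibits the non-decay of the Fourier transform along powers of b for the middle-third Cantor measure mentioned above, and gives an empirical probe of the averaged decay behind the ℓ^1-dimension.

import numpy as np

def mu_hat(xi, b, D, terms=60):
    # Truncated product formula mu_hat(xi) = prod_{j>=1} g(xi / b^j),
    # g(t) = (1/#D) * sum_{i in D} e(i t), for the natural missing-digit measure on K_{b,D}.
    val = 1.0 + 0.0j
    for j in range(1, terms + 1):
        t = xi / b ** j
        val *= np.mean(np.exp(2j * np.pi * np.array(D) * t))
    return val

b, D = 3, [0, 2]                                                  # middle-third Cantor measure
print([round(abs(mu_hat(3 ** k, b, D)), 4) for k in range(8)])    # essentially constant: no pointwise decay

X = 3 ** 7
S = 1 + 2 * sum(abs(mu_hat(m, b, D)) for m in range(1, X + 1))    # approximates sum_{|m| <= X} |mu_hat(m)|
print("log S(X) / log X =", round(float(np.log(S) / np.log(X)), 3))   # empirical probe of 1 - dim_{l^1}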
Using Cauchy–Schwarz inequality and definitions, as noted in <cit.>, we have 1/2_l^2μ≤μ≤_l^2μ. For s>0 we call a Borel measure μ to be s-AD regular if there is a C>1 such that for any x∈(μ) and sufficiently small r>0 one has C^-1r^s≤μ((x-r,x+r))≤ Cr^s. By <cit.>, if μ is s-AD regular then, _ℓ^2μ=s. We define a variant of the above ℓ^p dimension, following <cit.>. For 1≤ p <∞ we define ^⋆_ℓ^pμ:=1-inf{ℓ|lim_X→∞X^-ℓsup_0≤θ≤ 1∑_|m|≤ X|μ(m+θ)|^p = 0}. It is immediate to check that ^⋆_ℓ^pμ≤_ℓ^pμ. However, for p=1 a self-similar measure whose attractor is a missing digit Cantor set (see <ref>), the above two dimensions will coincide; see Lemma <ref>. § AUTOMORPHIC THEORY §.§ Convention The letter ϵ will denote placeholders for a small positive number. In particular, the exact value of ϵ may change from line to line. We will make the same convention with the order of the Sobolev norm (denoted by d or ℓ) which is thought as an implicit positive integer. Moreover, we will write ≪ x^O_P(1) to denote ≪ x^d for some unspecified positive d depending on P as x→∞. §.§ Groups and measures Let G:=_2(), Γ:=_2() and [G]:=Γ\ G. Throughout the paper we will identify with [G] by means of, e.g., <cit.>. We denote n(x):=[ 1 x; 1 ], a(y):=[ y; 1 ]. Via the above identification we rewrite μ_y, as in (<ref>), for a Borel probability measure μ as μ_y(ϕ)=∫ϕ(n(x)a(y))μ̣(x), ϕ∈ C(). Let K:=SO_2(), which is a compact subgroup of G. We will abbreviate exp(2π i z) by e(z) for any z∈. We fix measures x̣ on N:={n(x)| x∈} and ^̣× y:=ỵ/|y| on A:={a(y)| y∈^×}. Correspondingly, we can write a Haar measure on G in the Iwasawa coordinates g=n(x)a(y)k as g̣=x̣^̣× y/|y|ḳ where ḳ is the probability Haar measure on K. We use m_[G] (which is the same is m_) to denote the invariant probability measures on on [G]. §.§ Sobolev norm We refer to <cit.> for details. Let {X,Y,Z} be an orthonormal basis of Lie(G) where the orthonormality is with respect to the Killing form on Lie(G) and Z is a non-zero element in Lie(K). We define a Laplacian of G by the formula :=Id-X^2-Y^2-Z^2=Id-C_G+2C_K where C_G:=X^2+Y^2-Z^2 is the Casimir operator on G and C_K:=-Z^2 is the Casimir operator on K Let π be a unitary representation of G. It is known that (see, e.g., <cit.>) |_π is a densely-defined, self-adjoint, invertible, positive definite, second-order elliptic differential operator. For d∈_≥ 0 we define the d'th Sobolev norm on π by S_2,d(v) := ^d v_π, smooth v∈π and extend the definition to π by density of the smooth vectors in π. The Casimir operator C_G lies in the center of the universal enveloping algebra of G. Thus if π is also irreducible then C_G acts on π by a scalar ν_π. On the other hand, if v∈π is a K-type vector (i.e. K acts on v by a character) of weight k∈ then C_K also acts on v by a scalar ν_k. Consequently, it follows from (<ref>) that acts on a weight-k vector v∈π by a scalar. Thus a -eigenbasis of π is the same as a K-isotypic basis of π. For every ℓ>0 there is a ℓ'>0 such that we have trace(^-ℓ'|_π)≪ (1+|ν_π|)^-ℓ for any unitary irreducible π. Let {v_k:k∈} be an orthonormal K-type basis of π where v_k is a weight k-vector. Thus, from the above discussion it follows that trace(^-ℓ'|_π) = ∑_k∈(1-ν_π+2ν_k)^-ℓ'. From <cit.>, we have that ν_k≍ k^2. Thus the above sum is bounded by ≪∑_k∈(1+|ν_π|)^-ℓ'/2(1+|k|)^-ℓ'. Making ℓ' sufficiently large we conclude. Finally, for 0≤ℓ≤∞, we define C_^̱2ℓ([G]):={ϕ∈ C^2ℓ([G])| S_∞,ℓ(ϕ):=^ℓϕ_L^∞([G])<∞}. 
Note that for every ℓ>0 and for every ϕ∈ C_^̱2ℓ([G]) we have S_∞,ℓ(ϕ)≪𝒮_∞,ℓ(ϕ) where 𝒮_∞,ℓ is the Sobolev norm defined in <cit.>. §.§ Automorphic forms and representations We refer to <cit.> for details. In this section, π will denote a standard non-trivial unitary automorphic representation for Γ, that is, π is infinite dimensional and appears in the spectral decomposition of L^2([G]); see Lemma <ref>. Then π is either cuspidal or a unitary Eisenstein series. By φ∈π (cuspidal or an Eisenstein series) we will denote an automorphic form on [G]. §.§.§ Unitary structure If π is cuspidal then we fix a G-invariant inner product on π by φ^2_π:=φ^2_[G]=∫_[G]|φ(g)|^2g̣. If π is unitary Eisenstein then any φ∈π is of the form φ=(f), f∈(s):=Ind_NA^G |·|^s, s∈ i; where for (s)>12 we define (f):=∑_γ∈ N∩Γ\Γf(γ·). The sum converges absolutely for (s)>12 and can be meromorphically continued to all s∈. Note that f satisfies f(n(x)a(y)g) = |y|^1/2+s f(g), x∈, y∈^×,g∈ G. Thus any such f is determined by f|_K by Iwasawa decomposition. If s∈ i then (s) is unitary. In this case, we fix a G-invariant inner product on (s) by f^2_(s):=∫_K|f(k)|^2ḳ and a G-invariant inner product on π by (f)_π = f_(s). Through out the paper we only work with f∈(s) such that f|_K is independent of s. In this paper, if not mentioned otherwise, we assume that f|_K is s-independent and a K-type vector. In this case, (f) is holomorphic in the region 0≤(s)<12. §.§.§ Constant term We define the constant term of φ by φ_0(g):=∫_/φ(n(x)g)x̣. If φ is cuspidal then φ_0=0. If φ=(f) then (see <cit.>) φ_0(g)=f(g)+ζ(2s)/ζ(1+2s)M(s)f(g), where ζ is the Riemann zeta function and M(s) is the standard intertwiner mapping (s)→(-s), defined by M(s)f(g):=∫_f(wn(x)g)x̣, w:=[ 1; 1 ]. The above integral converges absolutely for (s)>0 and has meromorphic continuation to all of . Let φ=(f) for f∈(s). Then for all x∈, y∈^×, and k∈ K we have φ_0(n(x)a(y)k)=φ_0(a(y)k) =|y|^1/2(|y|^sf(k)+ζ(2s)/ζ(1+2s)|y|^-sM(s)f(k)). In particular, there is a d>0 so that for (s)=0 φ_0(n(x)a(y)k)≪ |y|^1/2S_2,d(φ) uniformly in x and k. The formula of φ_0 and left N-invariance of it follow immediately from (<ref>), and the facts that f∈(s) and consequently, M(s)f∈(-s). To see the next estimate, we first use the functional equation of the Riemann zeta function: ζ(2s)/ζ(1+2s)=Γ_(1-2s)/Γ_(2s)ζ(1-2s)/ζ(1+2s), Γ_(s):=π^-s/2Γ(s/2). Note that |ζ(1-2s)/ζ(1+2s)|=1, (s)=0. Then the claim follows from the fact that f(k), Γ_(1-2s)/Γ_(2s)M(s)f(k)≪ S_2,d(φ); see <cit.> and the discussion around it. §.§.§ Fourier coefficients For any cuspidal or unitary Eisenstein series π∋φ, one has the Fourier expansion φ(g) - φ_0(g) = ∑_m≠ 0λ_π(|m|)/√(|m|)W_φ(a(m)g). Let π be cuspidal and φ∈π. In this paper, we always use the normalized φ with λ_π(1)=1. In this case, λ_π are the Hecke eigenvalues attached to π. The Whittaker function W_φ in this case is given by W_φ(g):=∫_/φ(n(x)g)e(-x)x̣. The expression (<ref>) follows from Shalika's multiplicity one theorem; see <cit.>. We have the following bound λ_π(|m|)≪ |m|^ϑ, ϑ >7/64, uniformly in π; see <cit.>. The Generalized Ramanujan Conjecture (GRC) predicts that the above estimate of λ_π can be improved to |m|^ϵ for any ϵ>0. However, this is only known when π corresponds to a modular form (that is, a discrete series representation). If π is an Eisenstein series with parameter s∈ and φ=(f) then the Whittaker function W_φ is given by W_φ(g) := W_f(g):=∫_ f(wn(x)g)e(-x)x̣ which converges absolutely for (s)>0 and can be analytically continued to all of . 
In particular, when f varies over a family such that f|_K is s-independent (so called flat family) then W_f is entire in s. In this case, we have λ_π(|m|)= |m|^-sτ_2s(|m|)/ζ(1+2s), τ_z(m):=∑_d| md^z; see <cit.> for details of the proof. Consequently, for (s)=0 it follows from the lower bound of ζ on (s)=1 that (see <cit.> and <cit.>) λ_π(|m|)≪_ϵ(|m|(1+|s|))^ϵ≍(|m|(1+|ν_π|))^ϵ. Finally, the GRC is known on average for any cuspidal or unitary Eisenstein series. That is, ∑_m≤ X|λ_π(m)|^2 ≪_ϵ X^1+ϵ(1+|ν_π|)^ϵ; see <cit.>. §.§.§ Functional equation of the Eisenstein series We record the functional equation of the Eisenstein series. It follows from (see <cit.>, cf. <cit.>) that W_f = Γ_(1-2s)/Γ_(2s)W_M(s)f =W_M^∗(s) f, M^∗(s):=Γ_(1-2s)/Γ_(2s)Ṁ(s). Moreover, by Schur's lemma and the above it follows that M^∗(-s)∘ M^∗(s)=Id. Thus from the Fourier expansion of Eisenstein series, namely, (<ref>), (<ref>), and (<ref>), and the functional equation of the Riemann zeta function it follows that ζ(1+2s)f(g) + ζ(2s)M(s)f(g) +∑_m≠ 0|m|^-sτ_2s(|m|)/√(|m|)W_f(a(m)g) =ζ(1-2s)M^∗(s) f(g) + ζ(-2s) M(-s)∘ M^∗(s)f(g) + ∑_m≠ 0|m|^sτ_-2s(|m|)/√(|m|)W_M^∗(s)f(a(m)g) Hence, we obtain ζ(1+2s)(f) = ζ(1-2s)(M^∗(s) f) = ζ(2s)(M(s)f) for any s∈. §.§ Whittaker functions In this section, π will denote an abstract unitary irreducible representation of G. We call π to be generic if π has a unique G-invariant embedding into Ind_N^G e(·):={W:G→ smooth : W satisfies (<ref>)}; W(n(x)g) = e(x)W(g), ∀ x∈, g∈ G. The functions W are known as Whittaker functions and the image of π inside Ind_N^G e(·) under the above embedding is known as the Whittaker model of π. In this paper, we always identify a generic representation and its Whittaker model. §.§.§ Mellin Theory From the theory of local (2)×(1) Hecke zeta integral (see, e.g., <cit.>) we know that for ε∈{0,1} and s∈ the zeta integral Z(s,ε,W):=∫_^×W(a(y))|y|^s(y)^ε^̣× y converges absolutely for (s)>-1/2 and hence defines a holomorphic function in this region. The same also follows from the growth of W(a(y)), namely, (y∂_y)^jW(a(y)k) ≪_j,A |y|^-AS_2,d(W), y∈^×, k∈ K; where A can take any value in (-1/2,∞). Here d only depends on j and A. The above follows from <cit.> temperedness of π (at the archimedean place) due to Selberg. Finally, we record the functional equation of the local (2)×(1) zeta integral (see <cit.>) Z(s,ε,W)=Z(-s,ε,W(· w))γ(12-s,π) where γ is a certain complex meromorphic function (implicitly depending on ε) satisfying γ(12-s,π)≪_(s) (1+|s|)^2(s)ν_π^O(1) as long as s is a fixed distance away from any pole of the γ-function; see <cit.>. §.§.§ Unitary structure We fix a G-invariant unitary structure on the Whittaker model of π by W^2_π:=∫_^×|W(a(y)|^2^̣× y. It is known that cuspidal representations and unitary Eisenstein series π of G are generic. For any such φ∈π with Whittaker function W_φ it follows from Schur's lemma that (see <cit.>) φ^2_π=C_πW_φ^2_π. If π is cuspidal then from (<ref>) and <cit.> it follows that C_π^-1≪_ϵ (1+|ν_π|)^ϵ for any ϵ>0. If π is a unitary Eisenstein series with parameter s∈ i then C_π=1 which follows from the fact that W_f_(s) = f_(s); see, e.g., <cit.>. The above combining with (<ref>) implies that f_(s)=M^∗(s)f_(-s). Finally, if φ, cuspidal or unitary Eisenstein, is a -eigenfunction if and only if W_φ is the same with the same eigenvalue. Thus the above estimates of C_π imply that S_2,d(W_φ)≪_ϵ (1+|ν_π|)^ϵ S_2,d(φ)≪ S_2,d(φ) for any -eigenfunction φ. 
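As a quick sanity check of the last two displays (a sketch only; the spectral parameter t = 1.3 and the range X are arbitrary), one can tabulate the Eisenstein Hecke eigenvalues given by the divisor-sum formula above for s = it and confirm that their mean square over m ≤ X stays of moderate size, as the average form of the GRC asserts.

from mpmath import mp, zeta

mp.dps = 20
t = 1.3
s = 1j * t                                   # Eisenstein parameter on the line Re(s) = 0
zeta_factor = complex(zeta(1 + 2 * s))       # zeta(1 + 2s), computed once

def hecke_eigenvalue(m):
    # lambda_pi(m) = m^{-s} * tau_{2s}(m) / zeta(1 + 2s),  tau_z(m) = sum_{d | m} d^z.
    tau = sum(d ** (2 * s) for d in range(1, m + 1) if m % d == 0)
    return m ** (-s) * tau / zeta_factor

X = 2000
mean_square = sum(abs(hecke_eigenvalue(m)) ** 2 for m in range(1, X + 1)) / X
print("mean of |lambda(m)|^2 for m <=", X, ":", mean_square)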
From now on, if not mentioned otherwise, we assume that φ is a -eigenfunction and use the above without anymore explanation. Fix σ>1. For every N>0 sufficiently large there is a d>0 so that for every automorphic form φ we have (φ-φ_0)(n(x)a(y)) = ∑_0≠ |m|≤ |y|^-σλ_π(|m|)/√(|m|)W_φ(a(my))e(mx)+O(|y|^NS_2,d(φ)) as y→ 0. We start with the Fourier–Whittaker expansion of φ is in (<ref>). Using unipotent equivariance of the Whittaker function as in (<ref>) we obtain (φ-φ_0)(n(x)a(y)) = ∑_m≠ 0λ_π(|m|)/√(|m|)W_φ(a(my))e(mx). Thus it is enough to show that the contribution of the terms for |m|>|y|^-σ in the above sum is O(|y|^NS_2,d(φ)) for all large N. We recall from (<ref>) the bound of the Whittaker function W_φ(a(my))≪_A |my|^-AS_2,d(φ). From (<ref>) and (<ref>) we obtain the bound of Hecke eigenvalues λ_π(|m|)/√(|m|)≪ (1+|ν_π|)^ϵ, m≠ 0. Thus using triangle inequality and partial summation we obtain ∑_|m|≥ |y|^-σλ_π(|m|)/√(|m|)W_φ(a(my))e(mx)≪ S_2,d(φ)∑_|m|≥ |y|^-σ|my|^-A≪ |y|^A(σ-1)-σS_2,d(φ). Making A sufficiently large we conclude. There is a d>0 such that for any unitary automorphic form φ we have φ_1 ≪ S_2,d(φ) and φ(n(x)a(y)k)≪_ϵmax(√(|y|) , |y|^-1-ϵ)S_2,d(φ) uniformly in x∈. The second assertion is essentially the Sobolev embedding as stated in <cit.>. However, we give a proof for completion. Applying the Fourier–Whittaker expansion (<ref>) and unipotent equivariance of Whittaker function (<ref>) for any g = n(x)a(y)k we write (φ-φ_0)(g) = ∑_m≠ 0λ_π(|m|)/√(|m|)W_φ(a(my)k)e(mx). Using the bounds (<ref>), (<ref>), (<ref>) and working as in the proof of Lemma <ref> with A=1+ϵ we bound the above by S_2,d(φ)∑_m≠ 0|my|^-1-ϵ≪ |y|^-1-ϵS_2,d(φ) for some fixed d>0. Finally, applying Lemma <ref> we obtain φ(g)≪max(√(|y|) , |y|^-1-ϵ)S_2,d(φ), g=n(x)a(y)k. This proves the second assertion. To prove the first assertion we fix the Siegel domain S:={g=n(x)a(y)k| |x|≤12, y≥√(3)2, k∈ K}⊇ [G]. Then we obtain φ_1≤∫_S |φ|≪ S_2,d(φ)∫_|x|≤1/2x̣∫_|y|≫ 1√(|y|)^̣× y/|y|∫_K ḳ≪ S_2,d(φ) as claimed. For 0≤(s)<12 and f∈(s) it follows from the proof of Lemma <ref> and (<ref>) that (f) is L^1-integrable on [G]. §.§ Spectral decomposition For a unitary representation π (similarly, (s)) of G by (π) we denote an orthogonal basis of π consisting of arithmetically normalized K-type vectors. For the rest of the paper, by (π) we mean a basis of π with the properties as above, unless otherwise mentioned. There exist a sufficiently large ℓ>0 such that for any ϕ∈ C_^̱2ℓ([G]) and any g∈ G we have ϕ(g)=∫_[G]ϕṃ_[G]+∑_π cuspidal∑_φ∈(π)⟨ϕ,φ⟩_[G]/φ^2_πφ(g) +∫_(s)=0∑_f∈((s))⟨ϕ,(f)⟩_[G]/f^2_(s)(f)(g)ṣ/4π i. The right-hand side converges absolutely and uniformly on compacta. Moreover, in each summand the sum over is invariant under a choice of orthogonal basis . Lemma <ref> is essentially given in <cit.>. However, in <cit.> the test function ϕ is chosen to be inside C^∞([G])∩ L^2([G]). Here we give a sketch to show that one may choose the test function from C_^̱2ℓ([G]). The ideas of the proof are from <cit.>. We start with the L^2-version of the equality given in the statement of the lemma, which can be found in <cit.>. Let π be cuspidal and φ∈(π). Thus φ is also an eigenvector of with, say, eigenvalue λ_φ. Integrating by parts with respect to for l times we obtain ⟨ϕ,φ⟩_[G]=λ_φ^-l⟨^lϕ,φ⟩_[G]≪λ_φ^-l^lϕ_∞φ_1. Applying Lemma <ref> we obtain the above is ⟨ϕ,φ⟩_[G]/φ^2_π≪λ_φ^-l+d^lϕ_∞. 
Working similarly, for f∈(s) we obtain ⟨ϕ,(f)⟩_[G]/f^2_(s)≪λ_f^-l+d^lϕ_∞ where λ_f is the -eigenvalue of f (note that (f) is a -eigenfunction if and only if f is). Once again applying Lemma <ref> we have φ(g)≪_g S_d(φ), g∈ [G]. Thus the right-hand side of the claimed equation is bounded by ≪ϕ_1+S_∞,l(ϕ)(∑_π cuspidal∑_φ∈(π)λ_φ^-l+2d+∫_∑_f∈((it))λ_f^-l+2dṭ). The above φ-sum (resp. f-sum) can be realized as trace(^-l+2d|_π) (resp. trace(^-l+2d|_(it))) as is positive definite. We make l sufficiently large. Applying Lemma <ref> for cuspidal π and Weyl's law (see, e.g., <cit.>) that #{π cuspidal | |ν_π|≤ X}≪ X^O(1), we conclude that the above (π,φ)-sum converges absolutely. Similarly, applying Lemma <ref> for (s) and noting that ∫_(1+|t|)^-2ṭ<∞ we conclude that the above t-integral and f-sum converge absolutely. Thus the right-hand side of the claimed equation converges absolutely, hence defines a continuous function of g. Thus the pointwise decomposition follows. § EQUIDISTRIBUTION WITH AVERAGE FOURIER DECAY Let π∋φ be cuspidal or a unitary Eisenstein series, x_0∈/, and q∈. Let μ be a Borel probability measure on . Then for any σ>1 there is a d>0 such that we have ∫φ(n(x_0+xq)a(y))μ̣(x)=∑_0≠|m|≤ |y|^-σλ_π(|m|)/√(|m|)W_φ(a(my))e(mx_0)μ(m/q) +O(|y|^1/2S_2,d(φ)) as y→ 0. We use the equation in Lemma <ref> and integrate both sides in x with respect to μ. As μ is a probability measure and φ_0 is left N-invariant (as seen in Lemma <ref>) we obtain that ∫φ(n(x_0+xq)a(y))μ̣(x)=φ_0(a(y)) + ∑_0≠|m|≤ |y|^-σλ_π(|m|)/√(|m|)W_φ(a(my))e(mx_0)μ(m/q) +O(|y|^1/2S_2,d(φ)). We conclude the proof after an application of the estimate in Lemma <ref>. Recall the definition of μ from (<ref>). Let μ be a Borel probability measure and ℓ<μ. Then for q∈ and X≥ 1 we have ∑_|m|≤ X|μ(m/q)|≪ q^ℓ X^1-ℓ. We may assume that X≥ q, otherwise, the claim is trivial. We see that ∑_|m|≤ X|μ(m/q)| ≤ 1+2∑_a=1^q∑_n=0^[X/q]|μ(n+aq)|≪ qsup_0≤θ≤ 1∑_n=0^[X/q]|μ(n+θ)|. From (<ref>), it follows that the above is ≪ q [Xq]^1-ℓ≤ q^ℓ X^1-ℓ as claimed. Let x_0∈/ and q∈. Assume that there are an η>0 and b,d>0 such that for any π∋φ cuspidal or unitary Eisenstein series, we have ∫φ(n(x_0+xq)a(y))μ̣(x) ≪ q^b|y|^η S_2,d(φ) as y→ 0. Then there is a sufficiently large ℓ>0 so that for any ϕ∈ C_^̱2ℓ([G]) one has ∫ϕ(n(x_0+xq)a(y))μ̣(x) = ∫_[G]ϕṃ_[G]+ O(|y|^η q^bS_∞,ℓ(ϕ)) as y→ 0. We start by spectrally decomposing ϕ as given in Lemma <ref> at g=n(x_0+xq)a(y) and integrate in x with respect to μ. We obtain ∫ϕ(n(x_0+xq)a(y))μ̣(x) -∫_[G]ϕ =∑_π cuspidal∑_φ∈(π)⟨ϕ,φ⟩_[G]/φ^2_π∫φ(n(x_0+xq)a(y))μ̣(x) +∫_(s)=0∑_f∈((s))⟨ϕ,(f)⟩_[G]/f^2_(s)∫(f)(n(x_0+xq)a(y))μ̣(x)ṣ/4π i. The interchanges of above μ integral with spectral sum and integral are justified via absolute convergence of the spectral decomposition that is uniform in x (which follows from Lemma <ref>) and finiteness of μ. Now the proof follows exactly as the proof of Lemma <ref> where we replace the estimates φ(g)≪_g S_2,d(φ) by ∫φ(n(x_0+xq)a(y))μ̣(x) ≪ q^b|y|^η S_2,d(φ), as given in the assumption. The following theorem is a stronger version of Theorem <ref>, under a slightly stronger assumption. Let x_0∈/, q∈ and g_0:=n(x_0)a(q^-1). Let μ be a Borel probability measure such that μ>39/64=0.609375. Then there exist an η>0, b>0, and ℓ∈ so that for any ϕ∈ C_^̱2ℓ(Γ\ G) we have ∫ϕ(g_0n(x)a(y))μ̣(x)=∫_[G]ϕṃ_[G]+ O_μ(q^b|y|^ηS_∞,ℓ(ϕ)), as y→ 0. First, note that g_0n(x)a(y)=n(x_0+xq)a(y)a(q^-1). Let π∋φ be a cuspidal representation or a unitary Eisenstein series. 
We start with the first term of the right-hand side of the equation in Lemma <ref> with φ. Applying (<ref>) with 7/64<ϑ<μ-1/2, (<ref>), and (<ref>) with A=-1/2+ϑ we estimate ∑_0≠|m|≤ |y|^-σλ_π(|m|)/√(|m|)W_φ(a(my))e(mx_0)μ(m/q) ≪ S_2,d(φ)∑_|m|≤|y|^-σ|m|^ϑ-1/2|my|^1/2-ϑ|μ(m/q)| ≪ S_2,d(φ)|y|^1/2-ϑ∑_|m|≤ |y|^-σ|μ(m/q)| for some d>0 independent of φ and q. Fix 1/2+ϑ<b<μ. Then from Lemma <ref> we obtain the last sum above is O(q^b|y|^-σ(1-b)). Hence, from Lemma <ref> we obtain ∫φ(n(x_0+xq)a(y))μ̣(x)≪ S_2,d(φ)(|y|^1/2 + q^b |y|^b-(1/2+ϑ)-(σ-1)(1-b)). Choosing 0<σ-1 sufficiently small we obtain that ∫φ(n(x_0+xq)a(y))μ̣(x)≪ q^b|y|^ηS_2,d(φ) for some η>0. We apply Lemma <ref> with ϕ_q:=ϕ(· a(q^-1)) to obtain ∫ϕ(g_0n(x)a(y))μ̣(x)=∫ϕ_q(n(x_0+xq)a(y))μ̣(x) = ∫_[G]ϕ_qṃ_[G]+ O(|y|^η q^bS_∞,ℓ(ϕ_q)). Finally, noting that (see, e.g., <cit.>) S_∞,ℓ(ϕ_q)≪Ad(a(q^-1))^ℓ S_∞,ℓ(ϕ) ≪ q^O(1)S_∞,ℓ(ϕ) and by measure invariance ∫_[G]ϕ_q = ∫_[G]ϕ, we conclude the proof. The proof follows similarly to the proof of Proposition <ref>, but with x_0=0 and q=1, and replaced by . Note that for any two Borel probability measures μ_1 and μ_2 we have μ_1∗μ_2(ξ)=μ_1(ξ)μ_2(ξ). Hence by Cauchy–Schwarz we have ∑_|m|≤ X|μ_1∗μ_2(m)|≤(∑_|m|≤ X|μ_1(m)|^2)^1/2(∑_|m|≤ Xμ_1(m)|^2)^1/2≪ X^1-l_1+l_2/2 for any l_i<_ℓ^2μ_i. Since each μ_i is s_i-AD regular, by (<ref>) it follows, μ∗μ≥s_1+s_2/2. The corollary now follows from Theorem <ref>. Applying (<ref>) and the fact that |μ_j(ξ)|≤ 1 for all ξ we conclude that μ∗ν≥μ. Now we prove Theorem <ref>. From <cit.> we recall that for D in arithmetic progression with #D=l one has μ_b,D≥log l/log b-log(4+log(2l))/log b. Suppose 3964<s<1. Let b(s)∈ such that for every b≥ b(s) we have s-log (4+log(2b))/log b> 39/64 and b-b^s≥ 2. Now fix b≥ b(s) and let 2≤ l_1∈ (b^s,b)∩. Let K_b,D be such that D is any subset of {0,…,b-1} with l_1 elements and they are in an arithmetic progression. K_b,D. Thus K_b,D=log l_1/log b=:s_1≥ s. Note that s_1-log(4+log(2l_1))/log b≥ s_1-s+ s-log (4+log(2b))/log b>39/64. Thus from (<ref>) we have μ_b,D>3964, consequently, the proof follows from Theorem <ref>. The proof immediately follows from (<ref>), discussion around (<ref>), and proof of Theorem <ref>. Let μ be a self-similar measure whose underlying attractor is a missing digit Cantor set. Then μ=μ. Moreover, the same is true when the attractor is a shifted missing digit Cantor set. We first prove when μ is a self-similar measure on K_b,D. The proof of this is essentially available in <cit.>, although was not pointed out. Note that using (<ref>) and <cit.>, the lemma follows upon showing that for all L∈, f_L:=-log(max_xb^-L∑_i=0^b^L-1S_L(x+i/b^L))/log b^L≤μ, where S_L is as defined in <cit.>. Then by the argument as in <cit.>, it is enough to prove (<ref>) for L=1. Now in the proof of <cit.> authors show that for any N∈_≥ 0 and θ∈, ∑_m=0^b^N-1|μ(m+θ)|≤ b^N(1-f_1). Hence sup_0≤θ≤ 1∑_m=0^b^N-1|μ(m+θ)|≤ b^N(1-f_1), which implies (<ref>). To see the second part, we note if μ is a self-similar measure on K_b,D+x_0 for some x_0∈ then by uniqueness of the self-similar measures μ=μ_1∗δ_x_0 where μ_1 is a self-similar measure on K_b,D; see (<ref>). Thus |μ|=|μ_1|. Let μ be a self-similar measure with the IFS being irreducible and having OSC. If μ>0.609375, then for any ψ non-increasing monotonic function, μ(𝒲(ψ))= 1 if ∑ψ(q)=∞. The proposition follows combining Proposition <ref> and <cit.>. Let μ be a self-similar measure whose underlying attractor is a shifted missing digit Cantor set. 
Therefore the corresponding IFS satisfies OSC and is irreducible. Also by Lemma <ref>, μ=μ. Thus the hypothesis of Proposition <ref> is valid. Hence the proof follows from Propostion <ref>. Let ρ be sufficiently small such that (ρ^-0.609375,ρ^-1]∩_≥ 2≠∅. Choose l from the above interval. Let μ be the self-similar measure associated to the IFS {ρ (x+i), i=0,⋯,l-1} and the uniform probability vector. Since this IFS satisfies OSC, μ is -log l/logρ-AD regular (see <ref>). Note that μ∗μ is a self-similar measure with (equal) contraction ρ; see <cit.>. By (<ref>), μ∗μ=_ℓ^2μ=-log l/logρ>0.609375, therefore the corollary follows from Theorem <ref>. Note that in the proof of Theorem <ref>, when ρ^-1 is the (positive) n'th root of a positive integer then μ∗μ is not absolutely continuous, since μ↛ 0 in this case, which follows from (<ref>). § EQUIDISTRIBUTION WITH POINTIWSE FOURIER DECAY There is a d>0 such that for every δ>0 and 0<η<min(12,δ) we have the following. Let π be either cuspidal representation or unitary Eisenstein series and φ∈π. Then we have ∑_m≠ 0λ_π(|m|)/|m|^1+δW_φ(a(my))≪_η |y|^ηS_2,d(φ) as y→ 0. We use the the bound of Whittaker function (<ref>) with A=-η>-min(12,δ). Then we bound the sum in the lemma by ≪ |y|^ηS_2,d(W_φ)∑_m≥ 1|λ_π(m)|/m^1+δ-η. Thus it suffices to show that the above sum is ν_π^O(1). To see this, first note that for any X>1 applying summation by parts we have ∑_m≤ X|λ_π(m)|/m^1+δ-η =X^-(1+δ-η)∑_m≤ X|λ_π(m)|+∫_1^X∑_m≤ t|λ_π(m)|/t^2+δ-ηṭ. Applying Cauchy–Schwarz and employing (<ref>) it follows that ∑_m≤ t|λ_π(m)|≪_ϵ t^1+ϵν_π^ϵ. As 1+δ-η>1, the proof follows after letting X→∞. Now we will estimate a similar sum as in Lemma <ref> but 1+δ is replaced by 12+δ, namely, ∑_m≠ 0λ_π(|m|)/|m|^1/2+δe(mα)W_φ(a(my)), α∈. To prove a polynomial decay in y for the above sum we can not mimic the proof of Lemma <ref>, as in this proof the absolute convergence of the Dirichlet series ∑_m≠ 0λ_π(|m|)/|m|^s for (s)>1 is crucially used, which will not be available in this case. We will analyse this sum using a Voronoi-type argument. This argument rather uses the meromorphic properties of the Dirichlet series of the additive twisted Hecke eigenvalues. Such an argument is common in literature when π is cuspidal. However, we are unable to find a reference for the same when π is Eisenstein. Below we give a proof for the required estimate for all unitary automorphic π. As a preparation, we first approximate α by a nonzero rational number p/q with q∈ in its reduced form so that |ξ|≤1/√(|y|)q, ξ:=1/|y|(α-p/q), 1≤ q≤ |y|^-1/2. The above is guaranteed because of the Dirichlet's approximation. Finally, we abbreviate W_φ(· n(ξ)) by W^ξ. We start with the following Mellin-theoretic property of W^ξ. If z is away from any pole of Z(·,W^ξ) then Z(z,ε,W^ξ)≪_N (1+|z|)^2min(0,(z))(1+|ξ|/1+|z|)^N S_2,d(W_φ) for some d>0 depending only on N. If (z)≥ 0 then Z(z,ε,W^ξ)=∫_^×W(a(t))e(tξ)|t|^z(t)^ε^̣× t converges absolutely. We integrate by parts the above with (t∂_t)^N and apply (<ref>) to obtain Z(z,ε,W^ξ)≪_N (1+|ξ|/1+|z|)^NS_2,d(W) If (z)<0 and z is not a pole of Z(W^ξ) then applying local functional equation as in (<ref>) we obtain Z(z,ε,W^ξ)=γ(12-z,π)Z(-z,ε,W^ξ). We bound the gamma factor by γ(12-z,π)≪ (1+|z|)^2(z)ν_π^O(1) and apply the above estimate for Z(-z,W^ξ) for -(z)>0. There is a d>0 such that for every 0<δ≤12 and every 0<η<δ2 we have the following. Let π be a cuspidal representation of G and φ∈π be an automorphic form. Let p,q,ξ be as in (<ref>). 
Then ∑_m≠ 0λ_π(|m|)/|m|^1/2+δe(mpq)W^ξ(a(my)) ≪_η |y|^η S_2,d(φ) as y→ 0. We start with the Mellin expansion of W^ξ given by W^ξ(a(y))=1/2∑_ε∈{0,1}(y)^ε∫_(z)=σZ(z,ε,W^ξ)|y|^-zẓ/2π i for some sufficiently positive σ, where Z(⋯) is as in (<ref>). Using this, we write the sum in the lemma as 1/2∑_ε∈{0,1}(y)^ε∫_(z)=σZ(z,ε,W^ξ)|y|^-z∑_m≠ 0λ_π(|m|)(m)^ε e(mpq)/|m|^1/2+z+δẓ/2π i. The above interchange of the m-sum and the z-integral is justified because the m-sum converges absolutely for sufficiently large σ (which follows from (<ref>) and (<ref>)) and Z(z,ε,W^ξ) decays rapidly in z (which follows from Lemma <ref>). Let us consider the ε=0 case to ease the notations; the ε=1 case will be similar. Also, from now on, we will drop ε(=0) from the notations. Moreover, we only consider the m>0 part of the above sum; the m<0 part can be treated similarly by replacing pq by its negative. Correspondingly, we can rewrite the ε=0 summand in the above expression as ∫_(z)=σZ(z,W^ξ)|y|^-zL(12+z+δ,pq,π)ẓ/2π i. where we define L(z,pq,π):=∑_m=1^∞λ_π(m)e(mpq)/m^z. It follows from (<ref>) that the above converges absolutely for (z)>1. As π is cuspidal, it is known that L(z,pq,π) is entire in z; see <cit.>. First, we will bound the L(z,pq,π) for 0≤(z)≤ 1. Note that using (<ref>) we can bound L(z,pq,π)≪_(z) 1, (z)>1. On the other hand, L(z,pq,π) satisfies the functional equation <cit.> of the form L(z,pq,π) = q^1-2z[L(1-z,-pq,π)γ_1(z,π) + L(1-z,pq,π)γ_2(z,π)] for some complex meromorphic functions γ_j(z,π) (given in terms of C^± as in <cit.>) satisfying γ_j(z,π)≪_(z) (1+|z|)^1-2(z)ν_π^O_(z)(1) as long as z is a fixed distance away from any pole of γ_j(·,π). The above bound follows from a standard application of Stirling's estimate of Γ-function; e.g., see <cit.>. Hence, by the functional equation of the L-function and the above bounds of the γ-factor we have L(z,pq,π)≪_(z)(q(1+|z|))^1-2(z)ν_π^O_(z)(1), (z)<0. Thus applying Phragmén–Lindelöf convexity principle, we conclude L(z,pq,π)≪_ϵ (q(1+|z|))^1-(z)+ϵν_π^O(1), 0≤(z)≤ 1. Now using the holomorphic and decay properties of Z(z,W^ξ) for (z)>-1/2 as in Lemma <ref>, we shift the contour of (<ref>) to (z)=σ where -1/2<σ<0 without crossing any poles of the integrand. Applying (<ref>) and Lemma <ref> we estimate the shifted contour integral (<ref>) by ≪_σ,Nν_π^O(1)|y|^-σq^1/2-σ-δ∫_(1+|z|)^1/2+σ-δ(1+|ξ|/1+|z|)^Nẓ≪_σ |y|^-σq^1/2-σ-δ(1+|ξ|)^3/2+σ-δν_π^O(1). We choose σ=-12+ϵ and apply (<ref>) to estimate the above by O(|y|^δ/2-ϵν_π^O(1)). There is an ℓ>0 such that for every 0<δ≤12, and every 0<η<δ2 we have the following. For any 0≤α<1 and any ϕ∈ C^2ℓ_(̱[G]) we have ∑_π cuspidal∑_φ∈(π)⟨ϕ,φ⟩_[G]/φ^2_π∑_m≠ 0λ_π(|m|)/|m|^1/2+δW_φ(a(my))e(mα)≪_η |y|^ηS_∞,ℓ(ϕ) as y→ 0. Using unipotent equivariance of W as in (<ref>) and Dirichlet approximation (<ref>) we deduce W_φ(a(my))e(mα)=W^ξ(a(my))e(mpq). Thus we write the sum in the Lemma as ∑_m≠ 0λ_π(|m|)/|m|^1/2+δW^ξ(a(my))e(mpq). We conclude using Lemma <ref> that ∑_m≠ 0λ_π(|m|)/|m|^1/2+δW_φ(a(my))e(mα)≪ |y|^ηS_2,d(φ). for cuspidal π. Working as in the proof of Lemma <ref> we see that ∑_π cuspidal∑_φ∈(π)⟨ϕ,φ⟩_[G]/φ^2_πS_2,d(φ) converges absolutely and is O(S_∞,ℓ(ϕ)). There is a d>0 such that for every 0<δ≤12 and every 0<η<δ2 we have the following. Let π be a unitary Eisenstein series with parameter s∈ i and φ=(f) with f∈(s). Let p,q,ξ be as in (<ref>). Then ∑_m≠ 0λ_π(|m|)/|m|^1/2+δe(mpq)W^ξ(a(my)) = ℳ+ O_η(|y|^η S_2,d(φ)) where ℳ:=∑_± Z(12-δ± s,W^ξ)|y|^δ(√(|y|)q)^-1∓ 2sζ(1± 2s)/ζ(1+2s), as y→ 0. 
Working as in the proof of cuspidal case (with the same conventions and notations), we write the sum in the lemma as ∫_(z)=σ|y|^-zZ(z,W^ξ)L(12+z+δ,pq,π)ẓ/2π i for some σ>12-δ. Here from (<ref>) it follows that L(z,pq,π)=D(z+s,2s,pq)/ζ(1+2s) where D(z,s,pq):=∑_m=1^∞τ_s(m)e(mpq)/m^z, as defined in <cit.>. As before, we first determine holomorphic and growth properties of L(z,pq,π) for 0≤(z)≤ 1 and (s)=0. Clearly, for (z)>1 we have D(z+s,2s,pq)≪_(z)1. On the other hand, D satisfies a functional equation, namely, D(z+s,2s,pq)=-2/q(q/2π)^2-2zΓ(1-z+s)Γ(1-z-s) ×[cos(π z)D(1-z-s,-2s,-pq)-cos(π s)D(1-z-s,-2s,pq)], which follows from <cit.>. Using Stirling's estimate (see, e.g., <cit.>) Γ(σ+it)≪_σexp(-π/2|t|)|σ+it|^σ-1/2, σ,t∈, σ∉_≤ 0 for (s)=0 we estimate Γ(1-z+s)Γ(1-z-s)≪_(z)exp(-πmax(|(z)|,|s|))(1+|z|^2+|s|^2)^1/2-(z). Moreover, using that cos(π z)≍_(z)exp(π |(z)|), we conclude D(z+s,2s,pq)≪ (q(1+|z|))^1-2(z)(1+|s|)^O_(z)(1), (z)<0,(s)=0. Thus, using Phragmén–Lindelöf convexity principle and ζ(1+s)^-1≪ (1+|s|)^ϵ for (s)=0 (see <cit.>) we obtain L(z,pq,π)≪_ϵ(q(1+|z|))^1-(z)+ϵν_π^O(1), 0≤(z)≤ 1. On the other hand, from <cit.> we deduce that L(z,pq,π) has poles in z exactly at the poles of q^1-2zζ(z-s)ζ(z+s)/ζ(1+2s) which are at z=1± s and of order 1 with residues q^-1∓ 2sζ(1± 2s)/ζ(1+2s), respectively. Now, as in the proof of the cuspidal case, using holomorphic and decay properties of Z(z,W^ξ) for (z)>-1/2 as in Lemma <ref> we shift the contour of (<ref>) to (z)=σ where -1/2<σ<0. Using Cauchy's theorem we can write (<ref>) as the sum of ℳ=∑_± Z(12-δ± s,W^ξ)|y|^δ(√(|y|)q)^-1∓ 2sζ(1± 2s)/ζ(1+2s), and the shifted contour integral (<ref>). Working exactly as in the cuspidal case, in particular, using Lemma <ref> and (<ref>), and choosing σ=-1/2+ϵ we bound the shifted contour integral (<ref>) by O_ϵ(|y|^δ/2-ϵν_π^O(1)). There is an ℓ>0 such that for every 0<δ≤12 and every 0<η<δ2 we have the following. For any 0≤α<1 and any ϕ∈ C^2ℓ_(̱[G]) we have ∫_(s)=0∑_f∈((s))⟨ϕ,(f)⟩_[G]/f^2_(s)∑_m≠ 0λ_s(m)/|m|^1/2+δW_f(a(my))e(mα)ṣ/2π i≪_η |y|^ηS_∞,ℓ(ϕ) as y→ 0. Here λ_s we mean λ_π where π is the Eisenstein series with parameter s. We work as in the proof of Lemma <ref> to reduce to α=pq and W_f to W^ξ where ξ,p,q are as in (<ref>). Then we apply Lemma <ref> for π Eisenstein to write the sum in the lemma as |y|^δ∫_(s)=0(√(|y|)q)^-1-2sH(12-δ+s)ṣ/2π i +|y|^δ∫_(s)=0(√(|y|)q)^-1+2sζ(1-2s)/ζ(1+2s)H(12-δ-s)ṣ/2π i +O(|y|^η∫_∑_f∈((i t))|⟨ϕ,(f)⟩_[G]|/f^2_(it)S_2,d((f))ṭ), where H(z)=H(z;W^ξ,ϕ,s):=∑_f∈((s))⟨ϕ,(f)⟩_[G]/f^2_(s) Z(z,W^ξ). Working as in the proof of Lemma <ref> we see that the integral-sum in the third summand in (<ref>) is convergent and consequently, the third summand is O(|y|^η S_∞,ℓ(ϕ)). We now deal with the second summand in (<ref>). Using (<ref>) and (<ref>) we write ζ(1-2s)/ζ(1+2s)H(12-δ-s) =∑_f∈((s))⟨ϕ,(M^∗(s)f)⟩_[G]/M^∗(s)f^2_(-s) Z(12-δ-s,W^ξ) =∑_f∈((-s))⟨ϕ,(f)⟩_[G]/f^2_(-s) Z(12-δ-s,W^ξ) The last equality follows due to (<ref>) and noting that {M^∗(s)f}_f∈((s) forms an orthogonal basis of (-s). Using the above and changing variable s↦-s we see the second summand in (<ref>) equals |y|^δ∫_(s)=0(√(|y|)q)^-1-2sH(12-δ+s)ṣ/2π i which is the same as the first summand in (<ref>) on which we focus next. We first analytically continue H(z,..,s) in neighbourhood of (s)=0. Note that (f)=(f̅) and f̅∈(-s) for (s)=0. Thus changing s to -s we give a holomorphic realization of ⟨ϕ,(f)⟩_[G]. 
Recalling that f_(s) is s-independent and meromorphic properties of (f) we obtain that ⟨ϕ,(f)⟩_[G]/f^2_(s) is holomorphic in -12<(s)≤ 0. On the other hand, recalling holomorphic properties of Z and W we get that Z(12-δ+s) is holomorphic in -12<(s)≤ 0. Finally, incorporating Remark <ref>, using the bound in Lemma <ref>, and following the proof of Lemma <ref> we obtain that the sum defining H(12-δ+s) converges absolutely in -12<(s)≤ 0 and thus defines a holomorphic function in the same region. Moreover, the same proof yields that H(12-δ+s)≪_N,(s)(1+|s|)^-N^ℓϕ_∞, 0≤(s)<12, -12<(s)≤ 0 for some ℓ>0 depending on N. Now we shift the contour of the first summand in (<ref>) to (s)=-12+ϵ without crossing any pole. Using the last estimate of H with sufficiently large N we bound the shifted integral by ≪ |y|^δ-ϵ^ℓϕ_∞ completing the proof. Without loss of generality we assume K=1 and drop the subscripts from δ_1∈(0,12],α_1∈,β_1∈. Let π be either cuspidal or unitary Eisenstein series and φ∈π. Applying absolute convergent Fourier expansion of φ-φ_0, as in (<ref>), and bound of φ_0(a(y)), as in Lemma <ref> we write ∫φ(n(x)a(y))μ̣(x) = ∑_m≠ 0λ_π(|m|/√(|m|)W_φ(a(my))μ(m) + O(|y|^1/2S_2,d(φ)). Now employing the expression of μ as in Theorem <ref> and applying Lemma <ref> we deduce that the right hand side above is β∑_m≠ 0λ_π(|m|)/|m|^1/2+δW_φ(a(my))e(mα) + O(|y|^ηS_2,d(φ)). Starting with the spectral decomposition as in Lemma <ref> then applying Lemma <ref> and Lemma <ref>, and working as in the proof of Lemma <ref>, we conclude the proof. §.§ Stationary Phase In this subsection, we prove Corollary <ref>. We start with the stationary phase estimate of the Fourier transform of μ^w,f as in Corollary <ref>. Let f,w and μ^w,f be as in Corollary <ref>. Let {x_i}_i=1^n be the set of points in the support of w and let {k_i}_i=1^n be the multi-set of positive integers so that f' has a zero of order k_i-1 at x_i. Then there exist {a_i,j}_1≤ i≤ n^j∈ only depending on f and w so that for any N>0 μ^w,f(ξ) = ∑_i=1^n e(ξ f(x_i))∑_j=0^N-1 a_i,jξ^-j+1/k_i+O_f, w, N((1+|ξ|)^-N+1/max_ik_i), as |ξ|→∞. We first discuss how the above proposition can be reduced to a simpler one. Modifying the support of w and using <cit.>, without loss of generality, we can assume i=1, i.e., there is a unique stationary point of f in the support of w (equivalently, x_1=x_2=⋯= x_n). Also by changing f with x↦ f(x+x_1)-f(x_1), we can also assume without loss of generality, that x_1=0 and f(0)= f'(0)=⋯=f^(k-1)(0)=0, f^(k)(0)≠ 0. To prove Proposition <ref> it is enough to show that for the above f for any N>0 we have μ^w,f(ξ) = ∑_j=0^N-1 a_jξ^-j+1/k+O_f, w, N((1+|ξ|)^-N+1/k), for some a_j∈. The above follows from <cit.>. The proof follows immediately after applying Theorem <ref> and Proposition <ref> with N=⌊max{k_i}2⌋. §.§ Acknowledgements A major discussion took place during the “Analytic Number Theory” program at the Institute Mittag-Leffler (IML) where we were in residence and we want to thank IML for extraordinary hospitality and work condition. SD thanks University of Michigan and Uppsala University, and SJ thanks University of York where significant parts of our work were completed. We want to thank Dmitry Kleinbock, Manuel Luethi, Andreas Strömbergsson, and Barak Weiss for helpful comments on an earlier draft of this paper. We thank Asaf Katz for giving us a proper reference regarding stationary phase. SD wants to thank Han Yu for many interesting discussion and Sam Chow for sending the Mathematica code from <cit.>. 
Finally, we want to thank our family for supporting us while writing the paper. abbrv
http://arxiv.org/abs/2407.12314v1
20240717044652
Charged Particles Capture Cross-Section by a Weakly Charged Schwarzschild Black Hole
[ "A. M. Al Zahrani", "A. Al-Jama" ]
gr-qc
[ "gr-qc", "astro-ph.HE" ]
Vol.0 (20xx) No.0, 000–000 Physics Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia; amz@kfupm.edu.sa 1; ahmad.aljama1905@gmail.com 2 Received 20xx month day; accepted 20xx month day We study the capture cross-section of charged particles by a weakly charged Schwarzschild black hole. The dependence of the maximum impact parameter for capture on the particle's energy is investigated numerically for different values of the electromagnetic coupling strength between the particle and the black hole. The capture cross-section is then calculated. We show that the capture cross-section is independent of the electromagnetic coupling for ultra-relativistic particles. The astrophysical implications of our results are discussed. A. M. Al Zahrani & A. Al-Jama Capture Cross-Section by Weakly Charged Black Hole Charged Particles Capture Cross-Section by a Weakly Charged Schwarzschild Black Hole A. M. Al Zahrani, 1 A. Al-Jama 2 July 22, 2024 ==================================================================================== § INTRODUCTION Studying the capture cross-section of black holes is central to understanding how matter interacts with them. It helps us understand the process of matter accretion by a black hole, which in turn determines how its mass, angular momentum and charge evolve. It can also help us understand the environment near black holes. Moreover, scrutinizing capture cross-sections can be used to test theories of gravity in strong gravitational fields. Astrophysicists generally assume black holes are electrically neutral. This is because they would quickly attract oppositely charged matter to balance out any excess charge. However, there are compelling reasons why weakly charged black holes might exist, as discussed in <cit.> and the references therein. The differences in how a black hole accretes electrons and protons within its plasma environment, influenced by radiation, could render it charged. Also, the spin of a black hole in the presence of a magnetic field can induce the accretion of charged particles. In fact, using the EHT observations, it was inferred that Sgr A* and M87 can be charged <cit.>. The black hole's charge is weak in the sense that it has no tangible effect on spacetime, but its effect on charged particle dynamics is prominent. There are numerous astrophysical scenarios wherein charged particles are drawn into black holes. Stars within the Roche limit near black holes often contribute matter through tidal interactions. Additionally, stars emit streams of charged particles as stellar winds. Highly energetic charged particles, resulting from supernovae, gamma-ray bursts, and bipolar jets from compact objects, frequently find their way into the vicinity of black holes. These processes collectively enrich the environment around black holes with a significant population of charged particles. The concept of capture cross-sections has been explored extensively for various black hole types. Foundational treatments examining photon and neutral particle capture by Schwarzschild black holes were given in several monographs, such as <cit.>. Further work addressed capture cross-sections of charged and neutral particles by Kerr-Newman black holes, including the implications for black hole spin and charge evolution <cit.>. Capture by Reissner-Nordström black holes was also investigated <cit.>. 
In the context of higher-dimensional black holes, studies have focused on calculating photon critical impact parameters for Schwarzschild-Tangherlini black holes <cit.>. The capture cross-section for massive particles was determined in <cit.>. Additionally, research extends to particle capture in the Myers-Perry rotating spacetime, which describes rotating black holes in five dimensions <cit.>. Moreover, wave capture cross-sections have been studied for various black hole configurations (see <cit.> and the references within). In this research, we examine the capture cross-section of charged particles by a weakly charged Schwarzschild black hole and discuss the astrophysical consequences of our findings. The paper is organized as follows: In Sec. <ref>, we review the dynamics of charged particles in the background of a weakly charged black hole. We then review the capture cross-section of neutral particles in Sec. <ref>. The capture cross-section of charged particles is calculated for different coupling strengths and particle energies in Sec. <ref>. Finally, we summarize our main findings and discuss their astrophysical consequences in Sec. <ref>. We use the sign conventions adopted in <cit.> and geometrized units where c=G=k=1, where k is the electrostatic constant. § CHARGED PARTICLES NEAR A WEAKLY CHARGED SCHWARZSCHILD BLACK HOLE Here, we review the dynamics of charged particles near a weakly charged black hole. The spacetime geometry around a black hole of mass M and charge Q is described by the Reissner-Nordström metric, which reads <cit.> ds^2=-hdt^2+h^-1dr^2+r^2dθ^2+r^2sin^2θdϕ^2, where h=1-r_S/r+Q^2/r^2 and r_S=2M is the Schwarzschild radius. The electromagnetic 4-potential is A_μ=-Q/rδ_μ^0. However, when the charge is weak we can ignore the curvature due to it and use the Schwarzschild metric, which reads <cit.> ds^2=-fdt^2+f^-1dr^2+r^2dθ^2+r^2sin^2θdϕ^2, where f=1-r_S/r and r_S=2M is the Schwarzschild radius. This weak charge approximation is valid unless the charge creates curvature comparable to that due to the black hole's mass. This happens when Q^2 ∼ M^2. In conventional units, the weak charge approximation fails when Q ∼G^1/2M/k^1/2∼ 10^20 M/ M_⊙ coulombs. This charge is far greater than the largest estimated charge on any black hole. Although the black hole charge is tiny, its effect on charged particle dynamics is profound because it is multiplied by the charge-to-mass ratio of these particles (∼10^21 m^-1 for electrons and ∼10^18 m^-1 for protons). The Lagrangian describing a charged particle of charge q and mass m in a spacetime described by a metric g_μν and an electromagnetic field produced by a 4-potential A^μ reads <cit.> L=1/2mg_μνu^μ u^ν+qu^μ A_μ, where u^μ≡ dx^μ/dτ is the particle's 4-velocity and τ is its proper time. In our case, the Lagrangian becomes L = 1/2 m[-f(dt/dτ)^2+f^-1(dr/dτ)^2+r^2(dθ/dτ)^2+r^2sin^2θ(dϕ/dτ)^2 ] -(qQ/r)dt/dτ. This Lagrangian is cyclic in t and ϕ, which means that the particle's energy and azimuthal angular momentum are constants of motion. The specific energy and azimuthal angular momentum are, respectively, given by E = -1/m∂ L/∂(dt/dτ)=fdt/dτ+qQ/mr, ℓ = 1/m∂ L/∂(dϕ/dτ)=r^2sin^2θdϕ/dτ. Combining these equations with the normalization condition g_μνu^μ u^ν=-1 and solving for dr/dτ gives (dr/dτ)^2=( E-qQ/mr)^2-f[r^2(dθ/dτ)^2+ℓ^2/r^2sin^2θ+1]. In the equatorial plane, where θ=π/2, the equation becomes (dr/dτ)^2=( E-qQ/mr)^2-f(ℓ^2/r^2+1). Let us rewrite the last equation in a dimensionless form. 
We first introduce the following dimensionless quantities: T=τ/r_S, ρ=r/r_S, L=ℓ/r_S. Equation <ref> then becomes (dρ/d T)^2=(E-α/ρ)^2-f(L^2/ρ^2+1), where α=qQ/mr_S. The parameter α represents the relative strength of the electromagnetic force to the Newtonian gravitational force. We can rewrite Eq. <ref> as (dρ/d T)^2=(E-V_+)(E-V_-), where V_±=α/ρ±√(f(L^2/ρ^2+1)) is an effective potential. It is V_+ that corresponds to physical, future-directed motion and hence will be used in all of the analyses below. Without loss of generality, we will consider L>0 only. It was estimated in Ref. <cit.> that the charge of Sgr A* is 10^8-10^15 coulombs. Using the lower limit of charge, the coupling constants for electrons α_e and protons α_p near Sgr A*, which has a mass of M = 4.3 × 10^6 M_⊙ according to Ref. <cit.>, are α_e ∼ 10^9, α_p ∼ 10^6. § CAPTURE CROSS-SECTION OF NEUTRAL PARTICLES Before we tackle the main problem, let us find the capture cross-section for neutral particles first. Setting α = 0, the effective potential V_+ reduces to V_+= √(f(L^2/ρ^2+1)). Capture occurs whenever the particle's energy is greater than the maximum of V_+. The function V_+ is at an extremum when dV_+/dρ=0 or ρ^2+L^2(3 - 2 ρ)=0, which gives the position of the extrema in terms of L as ρ_± = L^2± L√(L^2-3), where L∈ [√(3),∞). When L = √(3) (≡ L_min), ρ_+ and ρ_- meet at a saddle point. Inspecting d^2V_+/dρ^2 reveals that ρ_- corresponds to the position of the local maximum of V_+. In terms of L, the escape condition E=V_+|_ρ=ρ_- becomes E=√(2/27)[L(√(L^2-3)+L)-3√(L^2-3)/L+9]^1/2. Inverting this equation gives L = [(27 E^4-36 E^2+E(9 E^2-8)^3/2+8)/(8 (E^2-1))]^1/2. The impact parameter b is defined as the perpendicular distance between the center of force and the incident velocity <cit.>. It can be written as b = L/ P = L/√(E^2-1), where P is the specific linear momentum. The maximum impact parameter for capture b_max is given by b_max = [27 E^4-36 E^2+E(9 E^2-8)^3/2+8]^1/2/(2√(2)(E^2-1)). The capture cross-section σ_cap is given by σ_cap = π b_max^2 = (π/8)[27 E^4-36 E^2+E(9 E^2-8)^3/2+8]/(E^2-1)^2. Figures <ref> and <ref> are plots of b_max and the capture cross-section σ_cap vs. E, respectively. For ultra-relativistic particles (E≫ 1), b_max = 3 √(3)/2+√(3)/(2 E^2)+𝒪(1/E^3). The corresponding capture cross-section σ_cap is therefore σ_cap = 27π/4+9π/(2 E^2)+𝒪(1/E^3). For a slowly moving particle with speed v ≪ 1, E ≈ 1+v^2/2, and thus b_max = √(2)/√(E-1)+𝒪(√(E-1)) = 2/v+𝒪(v), and the capture cross-section becomes σ_cap = 4π/v^2+𝒪(v^0). § CAPTURE CROSS-SECTION OF CHARGED PARTICLES We will now follow the same procedure we used for the neutral particle. However, analytic expressions are not viable in this case and we will resort to numerical solutions, except in the ultra-relativistic particle case. The structure of the effective potential V_+ is generically similar to the neutral particle's. The effect of α is to raise (lower) the peak of V_+ for positive (negative) α. The effective potential V_+ is at an extremum when 2 α√((ρ_±-1) (L^2+ρ_±^2) ρ_±)-L^2 (3-2 ρ_±)-ρ_±^2 = 0. The extremum is a maximum when α[L^2 (1-2 ρ_±)+(3-4 ρ_±) ρ_±^2]/√((ρ_±-1) (L^2+ρ_±^2) ρ_±) +2 ρ_± - 2 L^2 < 0. To be consistent with the notation of the previous section, we let ρ_+ correspond to the minimum of V_+ and ρ_- correspond to the maximum. Here, L_min (the value of L at which ρ_- and ρ_+ meet) depends on the value of α. The two parameters are related by the relation -α ^8+6 α ^4 L_min^2 (L_min^2-3)-8 α ^2 L_min^4 (L_min^2+9)+3 L_min^4 (L_min^2-3)^2=0. Figure <ref> is a plot of L_min vs. α. 
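Since analytic expressions are not viable for nonzero α, the curves discussed below are obtained numerically. As an illustration, a minimal sketch of such a computation is given here in Python with NumPy/SciPy; the grid resolution, the bracketing interval, and the function names are ad hoc choices made for this sketch and are not taken from the authors' code.

import numpy as np
from scipy.optimize import brentq

def V_plus(rho, L, alpha):
    # Effective potential V_+ in dimensionless form (rho = r/r_S, f = 1 - 1/rho).
    f = 1.0 - 1.0 / rho
    return alpha / rho + np.sqrt(f * (L**2 / rho**2 + 1.0))

def barrier(L, alpha, rho_max=200.0, n=20000):
    # Height of the potential barrier outside the horizon (coarse grid search).
    rho = np.linspace(1.0 + 1e-6, rho_max, n)
    return V_plus(rho, L, alpha).max()

def b_max(E, alpha):
    # Maximum impact parameter for capture: the largest L whose barrier still equals E.
    # For alpha >= 1/2 this requires E above the threshold energy discussed below;
    # otherwise no root exists and brentq raises an error.
    L_crit = brentq(lambda L: barrier(L, alpha) - E, 1e-3, 100.0 * max(E, 1.0))
    return L_crit / np.sqrt(E**2 - 1.0)

def sigma_cap(E, alpha):
    return np.pi * b_max(E, alpha)**2

# Sanity check: for E >> 1 and alpha = 0 this tends to 27*pi/4, the photon capture cross-section.
print(sigma_cap(20.0, 0.0))

Scanning E for fixed α in this way reproduces the qualitative behavior of the figures described next.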
When α = 1/2, L_min approaches zero. This is because V_+ ceases to have a local minimum for α≥ 1/2. Physically, this limit corresponds to the case when the Coulomb repulsion becomes too strong for stable orbits to exist, as discussed in Ref. <cit.>. Figure <ref> shows how b_max depends on E for several negative values of the coupling parameter α. The effect of increasing |α| is to increase the values of b_max for all energies. This is expected because the Coulomb attraction makes it easier for a charged particle to get captured. In all cases, b_max is a monotonic function of E. In the ultra-relativistic limit, b_max approaches 3√(3)/2, the limit in the neutral particle case, for any finite value of α, provided that α is not too large compared to E. Figure <ref> shows how b_max depends on E for several values of α between 0 and 0.5. In this range, there is competition between the gravitational attraction and the Coulomb repulsion. The curves have a richer structure. They fall quickly as E goes beyond 1 and reach a minimum. After that, the curves rise and reach 3√(3)/2 asymptotically. Figure <ref> shows how b_max depends on E for several positive values of α greater than 0.5. Generally, b_max becomes smaller as α increases. This is expected because the greater the Coulomb repulsion, the more difficult it is for a charged particle to be captured. In fact, there is a threshold energy E_thr below which capture cannot occur. It is given by E_thr = α + 1/(4α). This equation is valid for α≥ 0.5 only. Fig. <ref> shows how E_thr varies with α. The capture cross-section σ_cap corresponding to Figs. <ref>, <ref> and <ref> is shown in Figs. <ref>, <ref> and <ref>, respectively. In all cases, the σ_cap vs. E curves inherit the features of the b_max vs. E curves. For ultra-relativistic particles, we can write b_max as b_max = 3 √(3)/2-√(3)α/E+(9-2α^2)/(6√(3) E^2)+𝒪(1/E^3). The corresponding capture cross-section is then σ_cap = 27π/4-9πα/E+(4α^2+9)π/(2 E^2)+𝒪(1/E^3). These limiting results are in agreement with our numerical findings. § CONCLUSION We have studied the capture cross-section of charged particles by a weakly charged Schwarzschild black hole. We have shown that a trace charge on the black hole can have prominent effects. When the Coulomb force between a charged particle and the black hole is attractive, it enlarges the capture cross-section significantly. This is expected since the Coulomb attraction enhances the capture of charged particles. However, when the Coulomb force between a charged particle and the black hole is repulsive, it shrinks the capture cross-section significantly. When the electromagnetic coupling strength is below a critical value, capture is possible for all values of the particle's energy. When the electromagnetic coupling strength is above the critical value, there is a minimum value of the particle's energy below which capture is impossible. This is because the Coulomb repulsion surpasses the gravitational attraction unless the particle's radial momentum is large enough. Our results emphasize the assertion that charged black holes will favorably accrete charges of the opposite sign. However, it is still possible for the black hole charge to grow if the plunging charged particles are energetic enough that the capture cross-section becomes independent of the sign of the charges. 
Moreover, the fact that the electromagnetic coupling constant is three orders of magnitudes greater for electrons than protons suggests that it is relatively easier for a black hole to accumulate positive charge than negative charge. It will be an astrophysically interesting to study the energies of charged particles near an astrophysical black hole to understand better how the black hole's charge evolves. The problem can be astrophyically more viable when other astrophysical black holes, such as rotating black holes, are studied (in progress). 99 [Ahmedov et al. (2021)]Ahm Ahmedov B., Rahimov O., Toshmatov B., 2021, Universe, 7(8) , 307. [Al Zahrani (2021)]Z1 Al Zahrani A., 2021, Phys. Rev. D, 103. [Al Zahrani (2022)]Z2 Al Zahrani A., 2022, ApJ, 937. [Anacleto, et al. (2023)]Ana Anacleto M., et al., 2023, arXiv:2307.09536v1 [gr-qc]. [Bugden (2020)]Buq Bugden M., 2020, Class. Quantum Gravity, 37, 015001. [Carter (1973)]Car Carter B., 1973, Black Hole Equilibrium States, Black Holes, eds. C. DeWitt and B. S. DeWitt (Gordon and Breach Science Publishers, Inc. New York, p. 57. [Chandrasekhar (1983)]Cha Chandrasekhar S., 1983, The Mathematical Theory of Black Holes, Oxford University Press. [Connell & Frolov (2008)]CoFr Connell P., Frolov V., 2008, Phys. Rev. D, 78, 024032. [Frolov & Zelnikov (2011)]FZ Frolov V., Zelnikov A., 2011, Introduction to black hole physics, Oxford university press. [Ghosh & Afrin (2023)]GA Ghosh S. and Afrin M., 2023, ApJ, 944, 174. [Goldstein et al. (2001)]Gold Goldstein H., Poole C. and Safko J., 2001, Classical Mechanics, third eddition, Pearson. [Gooding & Frolov (2008)]GoFr Gooding C., Frolov A., 2008, Phys. Rev. D, 77, 104026. [GRAVITY Collaboration (2023)]GC GRAVITY Collaboration, 2023, A & A, 677, L10. [Kocherlakota et al. (2021)]EHT Kocherlakota P. et al., 2021, (EHT Collaboration), Phys. Rev. D, 103, 104047. [Misner et al. (1973)]MTW Misner C., Thorne K., Wheeler J., 1973, Gravitation, W. H. Freeman and Co., San Francisco. [Singh & Ghosh (2018)]SiGh Singh B. , Ghosh S., 2018, Annals of Physics 395, 127. [Tsukamoto (2014)]Tsu Tsukamoto N., et al., 2014, Phys. Rev. D, 90, 064043. [Young (1976)]Youn Young P., 1976, Phys. Rev. D, 14, 3281. [Zajaček et al.(2018)]Zaj3 Zajaček M. et al., 2018, Monthly Notices of the Royal Astronomical Society, 480 4, 4408. [Zajaček et al.(2019)]Zaj2 Zajaček M. et al., 2019, J. Phys.: Conf. Ser., 1258, 012031. [Zajaček & Tursunov (2019)]Zaj1 Zajaček M., Tursunov A., arXiv:1904.04654. [Zakharov (1994)]Zak Zakharov A., 1994, Class. Quantum Grav., 11, 1027.
http://arxiv.org/abs/2407.13530v1
20240718140401
Pushing the Limits of Reactive Planning: Learning to Escape Local Minima
[ "Isar Meijer", "Michael Pantic", "Helen Oleynikova", "Roland Siegwart" ]
cs.RO
[ "cs.RO" ]
Pushing the Limits of Reactive Planning: Learning to Escape Local Minima Isar Meijer, Michael Pantic, Helen Oleynikova, Roland Siegwart isarmeijer@gmail.com, mpantic@ethz.ch, helenoleynikova@gmail.com, rsiegwart@ethz.ch All authors are with the Autonomous Systems Lab, ETH Zurich, 8092 Zurich, Switzerland. ====================================================================================================================================================================================================================================================== § ABSTRACT When does a robot planner need a map? Reactive methods that use only the robot’s current sensor data and local information are fast and flexible, but prone to getting stuck in local minima. Is there a middle-ground between fully reactive methods and map-based path planners? In this paper, we investigate feed forward and recurrent networks to augment a purely reactive sensor-based planner, which should give the robot “geometric intuition” about how to escape local minima. We train on a large number of extremely cluttered worlds auto-generated from primitive shapes, and show that our system zero-shot transfers to real 3D man-made environments, and can handle up to 30% sensor noise without degeneration of performance. We also offer a discussion of what role network memory plays in our final system, and what insights can be drawn about the nature of reactive vs. map-based navigation. § INTRODUCTION Robots are increasingly being deployed in complex and cluttered environments. Collision-free navigation is one of the most fundamental and best-studied skills a robot must possess. Most collision avoidance methods fall into one of two categories: map-based or reactive. Map-based methods rely on a processed world representation which can be checked for collisions, while reactive approaches often only use the robot's local information and current sensor data to decide the robot's next action. Local reactive methods can be extremely computationally efficient and provide safety without relying on additional processes, such as mapping frameworks or state estimators. Additionally, strict assumptions about the environment, like static world assumptions, are not needed. However, without memory or a longer-term perspective they are prone to getting stuck in local minima – geometric dead-ends such as large walls, U-shaped features, or long corridors. Ideally, even a purely reactive method would have a certain sense of geometric intuition that allows it to make informed decisions even in absence of a consistent map. How can we build such a system? Where are the limits of purely reactive navigation, in terms of geometric and temporal consistency? When is it better to rely on a map? In this paper, we aim to answer these questions using a succession of novel methods for informed reactive navigation by combining a purely classical, reactive approach with different neural networks. We use simple FFN and RNN with LSTM cells trained in a self-supervised environment to provide geometric “intuition” that is then combined with a classical safety layer. These networks essentially bias the classical reactive method to avoid and escape local minima. Our method is made sensor-agnostic by expressing the sensor data as a set of rays originating at the robot's current position. 
By studying the performance implications of the different variants of our sensor-agnostic method, we provide novel insights into the limits of reactive navigation and to what extent these limitations can be overcome using different networks with varying degrees of temporal consistency. Most importantly, we provide insight into the nature of the different methods – we investigate how different parts of the network play a larger role on the resulting trajectory and what implications can be drawn from it. Our method is loosely inspired by navigation of humans and animals, which are able to anticipate navigation decisions, without a metric or complete map, by using memory and intuition. By training our approaches in auto-generated, unstructured environments with very high obstacle density (<Ref>), many different shapes of local minima are encountered and learned. The resulting approaches are then evaluated on structured, human-made environments (<Ref>). The central goal of this work is to explore and understand how far the limits of reactive planning can be pushed without using an explicit map. To this end, we contribute: * multiple neural network architectures that provide “geometric intuition” to a purely reactive baseline planner, * show how these networks generalize to real 3D environments with different levels of sensor noise, * and a thorough discussion how temporal memory within the network affects the overall planning system's ability to escape local minima. The purely reactive baseline method is reviewed in <Ref>, then combined with an additional feed-forward network in <Ref>, then extended to a recurrent network in <Ref>. For each method we discuss its performance, short-comings, and implications, based on qualitative examples. We then compare all variants with established methods that rely on a full map (i.e. are non-reactive) in <ref> and discuss the results and fundamental findings in detail in <Ref>. § RELATED WORKS Many works on robotic navigation rely on accurate maps of the environment. This allows the community to treat mapping and navigation as separate components with their own respective evaluation metrics. The first works in this direction were potential field methods <cit.>, which treated every obstacle as applying a repulsive force on the robot. Such a potential field was computed over the entire map, allowing the addition of other constraints or optimization objectives on the trajectory such as elastic band models <cit.> or explicit kinematic or dynamic constraints <cit.>. However, even with full map knowledge, methods based on collision potentials get stuck in local minima when the scenes become too cluttered <cit.>, as they often result in non-convex optimization problems. Other map-based methods such as RRT* <cit.> trade-off computation time for asymptotic optimality and probabilistic completeness. Due to their runtime, they are often used for global planning in combination with a faster local planning layer, such as any of the map-based potential field methods. Reactive, map-free navigation still uses this concept of obstacle-induced repulsive forces, as in <cit.>. Newer approaches such as RMP <cit.> allow us to combine multiple “local policies”, encoding different robot objectives based on local information, smoothly. Pantic  <cit.> formulate a pure avoidance policy with RMP which allows fast, safe navigation without requiring any built maps. However, their approach still suffers from occasionally getting trapped in local minima. 
We use the open source implementation of <cit.> as a baseline method. Mattamala  <cit.> use a geodesic field in a reactive navigation setup to avoid local minima, however, computing the geodesic distance field a priori requires a consistent map of the environment. Other approaches solve reactive navigation by learning an end-to-end system: sensor data in, robot action out. This removes the need for an explicit map. Some end-to-end methods end up learning the map implicitly as part of the training process, by training in the same environment as the robot navigates in. For example, using only visual input to navigate through 2.5D mazes <cit.> or flying in 3D through cluttered environments <cit.>. Other works focus on learning policies that work in a variety of environments. Ross  <cit.> learn a mapping from image features to velocity commands from human pilot demonstration using DAgger <cit.> for a drone flying rapidly through a forest; however, requiring human pilot demonstrations makes training for new environments difficult. Loquercio  <cit.> instead train fast dynamic drone flight on a variety of environments in simulation, and show impressive results of zero-shot transfer to the real world, using stereo disparity images as input and a short-horizon dynamic trajectory as output. The main focus of their work is to fly quickly through semi-cluttered environments, while we specifically want to focus on escaping local minima with a mix of classical and learned approaches. Finally, other end-to-end networks have used reinforcement learning with a similar ray-based sensor representation, but limited to 2/2.5D. Tai  <cit.> learn a general navigation policy in 2D from LiDAR scans which map to a velocity output, and generalize across sparse maps. Similarly, Pfeiffer  <cit.> uses a similar set-up in 2.5D with imitating expert demonstrations. Zhang  <cit.> use explicit external memory structures to learn a representation of the visited environment, however, limited to discretized 2D worlds. By mixing a classical reactive avoidance algorithm with a supervised learning component, we are able to handle full 3D trajectories in vastly more cluttered environments than presented in other works. The goal of our work is to see how we can build on the ideas of <cit.> to overcome the downsides of map-free, purely reactive navigation without requiring an explicit map, while leveraging the best advantages of both classical and learned methods. We rely on the RMP-based repulsive field approach to provide collision-free, safe trajectories, while our new learned component can focus on developing a geometric intuition about the environment. The following sections will describe the purely reactive safety layer, followed by feed-forward and then recurrent network architectures. § PURELY REACTIVE NAVIGATION We use the open-source system presented in <cit.> as the base method for purely reactive navigation. Obstacle avoidance is formulated as a combination of obstacle repulsive forces, represented as RMP <cit.>, with each “obstacle” being generated by a ray-cast into a volumetric map <cit.>. Each ray creates one repulsion policy. As it serves as our base method, and for the reader's convenience, we reproduce the most important concepts of RMP and how they are used to perform obstacle avoidance in this chapter. For more details we refer the reader to the respective original work <cit.>. 
§.§ Riemannian Motion Policies An RMP is a policy 𝒫 that consists of an acceleration f(x, ẋ) ∈ℝ^3 coupled with a metric A(x, ẋ) ∈ℝ^3×3, where x, ẋ∈ℝ^3 denote the robot's position and velocity. A single policy 𝒫 = (f, A) describes an acceleration on the system, combined with a metric that captures the directional importance of that acceleration. A set of policies {𝒫_i } can be summed into a single policy 𝒫_c by 𝒫_c = ∑_i 𝒫_i = ( ( ∑_i A_i ) ^+ ∑_i A_i f_i , ∑_i A_i ), where ^+ denotes the pseudoinverse. The result is itself a policy that contains an implicitly metric-optimal, joint behavior of all policies. A policy can be a function of the robot's state, the goal location, and any other type of observation that is available. In the next section, we introduce local observations in the form of ray casts. §.§ Ray-Casting All following policies have access to state information and the goal location. Additionally, we use ray casts from the robot's current pose as a general abstraction of depth sensor data, as shown in <Ref>. For simplicity, we treat our robot as a point, but robot shape can be represented by modifying the distance of the rays; to model a sphere robot, for example, a constant distance would be subtracted from all ray values. Similar to <cit.>, we sample N equally spaced, quasi-random depth rays by using the Halton sequence ℋ(·) <cit.>. We define the i-th Halton direction vector r_ℋ(i), parameterized by the elevation φ_i and azimuth θ_i, sampled from deterministic Halton sequences with bases 2 and 3, respectively: φ_i = arccos(1 - 2 ·ℋ(i, 2)), θ_i = 2π·ℋ(i, 3), r_ℋ(i) = (sinφ_icosθ_i, sinφ_isinθ_i, cosφ_i). The ray obstacle distance function d_r(x, r, L) returns the distance to an obstacle from the position x along a ray direction r, truncated at a maximum distance L. Using this, we can define the ray-cast function ℛ^k : ℕ×ℝ_+ →ℝ^N_+ as ℛ^k (N, L) = [d_r(x^k, r_ℋ(i), L)]^N - 1_i=0. <Ref> samples N equally spaced rays r_ℋ(i) according to the ray-cast function from the robot's position x^k at time step k. Due to the Halton sequence's deterministic nature, for function calls with identical N, the direction vector r_ℋ(i) for each element i in the output remains the same across timesteps. By using a GPU-based mapping environment <cit.>, this function can be evaluated in parallel <cit.> for all rays. §.§ Collision Avoidance The obstacle avoidance policy defined in <cit.> serves as a building block for collision avoidance. The obstacle policy repels the robot from a point obstacle and consists of a repulsive term f_rep(x) and a damping term f_damp(x, ẋ): f_obs( x, ẋ) = f_rep(x) + f_damp(x, ẋ). The repulsive term applies an acceleration away from obstacles based on distance, while the damping term is based on the velocity towards the obstacle. For more details and the definition of the corresponding metric, the reader is referred to the original work <cit.>. The combination of the repulsive term, damping term, and metric matrix results in a smooth, safe, and velocity-dependent avoidance behavior, where only obstacles that are very close to the robot or that the robot is about to approach have an effect. A simple unidirectional repulsor as used in potential field methods would react equally to all obstacles, and would not allow for parallel combination in the same fashion. This method allows us to combine thousands of such policies to perform avoidance in dense and cluttered 3D maps. 
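As a concrete illustration of the two ingredients introduced above — the Halton-sampled ray directions and the metric-weighted summation of policies — a minimal sketch in Python/NumPy follows; the radical-inverse implementation and all names are illustrative and not taken from the authors' implementation.

import numpy as np

def halton(i, base):
    # Radical-inverse (Halton) value of index i for the given base.
    f, h = 1.0, 0.0
    while i > 0:
        f /= base
        h += f * (i % base)
        i //= base
    return h

def halton_ray_directions(n):
    # n quasi-uniform unit directions on the sphere, elevation/azimuth as in the text.
    dirs = np.empty((n, 3))
    for i in range(n):
        phi = np.arccos(1.0 - 2.0 * halton(i, 2))   # elevation from the base-2 sequence
        theta = 2.0 * np.pi * halton(i, 3)          # azimuth from the base-3 sequence
        dirs[i] = (np.sin(phi) * np.cos(theta),
                   np.sin(phi) * np.sin(theta),
                   np.cos(phi))
    return dirs

def combine_policies(policies):
    # Sum a list of (f_i, A_i) pairs into a single RMP ((sum A_i)^+ sum A_i f_i, sum A_i).
    A_sum = sum(A for _, A in policies)
    Af_sum = sum(A @ f for f, A in policies)
    return np.linalg.pinv(A_sum) @ Af_sum, A_sum

In this picture, every ray that hits an obstacle contributes one (f_obs, A_obs) pair to such a summation, alongside the goal-seeking policy introduced below.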
To do so, we follow the approach from <cit.>, where the ray-cast function defined in <Ref> is used to create an obstacle policy for every ray that hits an obstacle. A GPU implementation is leveraged such that not only ray-casting, but also policy creation and summation happens in parallel. §.§ Goal Seeking Similar to <cit.>, we use the goal policy as defined in <cit.> to implement goal seeking behavior. The goal policy pulls the robot towards a goal location x_g and is defined as f_g ( x, ẋ) = α_g s( x_g - x) - β_g ẋ, A_g ( x, ẋ) = 𝕀^3 × 3 , where α_g and β_g are scalar tuning parameters, and s is a soft-normalization function  <cit.>. The robot is then commanded with the sum of all activated policies according to <Ref>. Trajectories are generated by numerical integration of the resulting acceleration, starting from an initial position at rest. §.§ Limitations This purely reactive method performs surprisingly well even in very cluttered and complex 3D maps (see also <cit.>). Its navigation capabilities emerge from the sum of many simple obstacle avoidance policies and the goal seeking behaviour, but it can get trapped in local minima whenever the two parts cancel each other out. Both, natural and man-made environments occasionally contain such local minima. <Ref> visualizes a typical example. However, humans are able to navigate these environments without a full map and do not get stuck indefinitely. Clearly, there are heuristics and intuitions about taking decisions that avoid or escape local minima. This naturally raises the question of how such intuition can be provided to a robotic system, preferably through a self-emergent process. In the next section, we introduce such an add-on to the purely reactive method by using a neural network that is trained in a self-supervised fashion. § NEURAL REACTIVE NAVIGATION We regress a geometric-aware informed goal policy as a function of the robo-centric goal direction and the sensor rays. The sensor rays provide the network with a sampling of the local geometry. As a training signal, we use the geodesic distance field. The geodesic distance field is a global function that captures the shortest distance to the goal from anywhere in the world around obstacles. It can be computed using the FMM <cit.>, a grid-based version of which is implemented in the scikit-fmm[github.com/scikit-fmm/scikit-fmm] library. The geodesic distance field requires perfect world information and is expensive to compute, therefore we only use it as privileged information for self-supervision during training. Doing so, the network should learn to infer near-optimal decisions that mimic the geodesic distance field directly from raw sensor data, effectively obtaining good heuristics to deal with local geometry. A high-level overview of the proposed system is shown in Figure <ref>. More formally, let the (symmetric) scalar function 𝒢(x, x_g) denote the geodesic distance field between locations x and x_g. The shortest direction to the goal x_g from location x can be calculated by taking the (negative) gradient of 𝒢 with respect to x: -∇_x 𝒢(x, x_g) . With this improved goal direction, we can create a goal policy similar to <Ref>: f_g ( x, ẋ) = α_g s( -x_g - x∇_x 𝒢(x, x_g)) - β_g ẋ, where we have changed the direction to the goal into the negative geodesic distance field gradient, while maintaining the original distance to the goal. The aim is to learn a function ϕ_θ parameterized by θ that mimics <Ref>, using only the relative goal location and sensor rays as input. 
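As an illustration of how such a supervision signal can be produced on a voxelized training world, the sketch below uses scikit-fmm; the masked-array handling and grid details reflect one common way of using that library and are assumptions for this sketch rather than the authors' actual pipeline.

import numpy as np
import skfmm  # scikit-fmm

def geodesic_goal_direction(occupancy, goal_idx, query_idx, dx=0.1):
    # Approximate -grad G(x, x_g) on a 3D occupancy grid.
    # occupancy: boolean array (True = obstacle); goal_idx, query_idx: integer cell tuples.
    phi = np.ones(occupancy.shape)
    phi[goal_idx] = -1.0                           # zero level set around the goal cell
    phi = np.ma.MaskedArray(phi, mask=occupancy)   # masked cells are treated as obstacles
    dist = skfmm.distance(phi, dx=dx)              # geodesic distance field G(., x_g)
    gx, gy, gz = np.gradient(dist.filled(dist.max()), dx)
    g = np.array([gx[query_idx], gy[query_idx], gz[query_idx]])
    return -g / np.linalg.norm(g)                  # unit label direction towards the goal

During training, this direction is exactly what the network described in the next subsection is asked to reproduce from the sensor rays and the relative goal location alone.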
§.§ Architecture and Training We regress the function ϕ_θ using a multi-layer perceptron trained in a self-supervised fashion. Figure <ref> illustrates the full network architecture. In this section, we discuss architectural details, data generation and training methods. §.§.§ Encoders The encoder part of the network consists of the ray encoder and the goal direction encoder. All input signals are rescaled to relative quantities with respect to the maximum ray length L. The rays are linearly rescaled with respect to L. The state and goal information are used to calculate the relative goal direction d_g^k = x_g - x^k, which is fed to the network as a unit direction d̂_g^k = d_g^k / d_g^k vector and a scalar distance measure: [ d̂_g^k; 𝒟(d_g^k, L) ]∈ℝ^4 , with the distance normalization function 𝒟 as 𝒟(d, L) = d/2L, d ≤ L σ(2 d - L/L), d > L , where σ is the sigmoid function. This function maintains linearity below the ray truncation distance and saturates large input values to 1. Doing so, the distance to the goal becomes a relative quantity with respect to the ray lengths, which also implicitly encodes if the goal location is in front or behind a perceived obstacle. §.§.§ Information Bottleneck The concatenated state and ray latent representations are passed through four fully connected layers. §.§.§ Decoder Instead of directly regressing the geodesic gradient, we decode the latent space into a weighted output distribution of directional rays, similar to as they are used in the input rays. For calculating the loss from the geodesic distance field that fits this output, the label y is encoded as a one-hot vector e_i with index i determined as, _i{y^T r_ℋ(i) } , i.e. we select the index of the Halton direction vector that is closest in direction to the label. We train this network using a BCE loss between the one-hot encoded label and the output of the decoder. While the supervision signal may be an exclusive class label, interpreting the outputs as independent binary classifiers using BCE gives the model more freedom to encode multi-modal outputs. An additional benefit of using such an output format is that it enables easy introspection by visualizing the output distribution. To obtain an RMP compatible acceleration vector from the output distribution, we pass it through a softmax layer and calculate a weighted sum of the direction vectors, over the max k largest outputs: ŷ = ∑_i ∈𝒴^ke_i^T s(ŷ_ℋ) r_ℋ(i) , where 𝒴^k is the index set of the largest k output entries of ŷ_ℋ, e_i is the i'th standard vector, and s(·) is the softmax function. Higher values of k lead to smoother trajectories, however, we found that very high values somewhat decrease success rates. In all following experiments, the value of k is set at 50, which we found to provide a good trade-off between smoothness and success rates. §.§.§ Training We exclusively train on auto-generated worlds obtained through boolean combinations of primitive shapes. Two classes of 10×10×10^3 worlds are created and used during training, validation, and testing: sphere box worlds (<Ref>) and plane worlds (<Ref>). We use a varying amount of obstacles for each; up to 200 obstacles for the sphere box worlds, and up to 100 obstacles for the plane worlds. These classes of worlds are simple to generate in thousands of variations, without the need for human labeling or manual data collection. 
But more importantly, they resemble human-made structures in terms of local minima and entropy, which should facilitate generalization to unseen, structured environments. We train the network by using densely sampled random data from 6400 of these randomly generated worlds. A random goal location is sampled for each, and subsequently 1024 different locations are sampled within the world, from which the inputs and geodesic field label are collected. §.§ Evaluation and Limitations The learned method outperforms both the baseline method and the comparison method (CHOMP <cit.>) significantly, as shown in <Ref>. The network is able to provide geometric intelligence to avoid local minima, while the baseline method still serves as a safety layer. The learned model implicitly acts in the sort of “null-space" occurring whenever the obstacle avoidance potentials are not overly strong. While the system performs much better than the baseline, there are still obvious cases where it gets stuck, as seen for example in <Ref>: local observations may provide insufficient information about the free space around the perceived obstacle, confirmed by the bimodal output distribution of the model. These cases can be attributed to the purely reactive nature of even the learned method. In some situations, regardless of the sensor used, previously observed data is needed to form a consistent model of the geometry in order to navigate it. Those cases give rise to the need of temporal consistency – which we will explore in the next section by using recurrent neural networks. § RECURRENT NEURAL REACTIVE NAVIGATION We introduce memory by inserting a single LSTM layer into the bottleneck, as depicted in <Ref>. The previous pure-FFN architecture was trained using randomly sampled dense data, an RNN however requires sequences to train the recurrent elements. Therefore, we train the network by using DAgger <cit.>; model rollouts from a random start and goal location are aggregated with previously collected runs during training. We found that the densely sampled dataset, however, proved to be more efficient at training a pure FFN model than using DAgger. To combine the apparent training efficiency of the densely trained FFN, and the expected performance boost of including an LSTM, we utilize a 2-stage training setup. First, the FFN architecture is trained as described in Section <ref>. All weights are then frozen, the LSTM is inserted and subsequently trained using DAgger. §.§ Evaluation and Limitations We evaluate the training performance of our recurrent models by directly comparing against a pure-FFN architecture: During training, a DAgger validation dataset D_DAgger is collected, and both the model under training ℳ, and a benchmark pure-FFN model ℳ_B trained previously are evaluated on that dataset using the loss function ℒ. The pure-FFN model is not influenced by temporal dependencies in the data, and is thus able to act as a reference for evaluation. We use the ratio ℒ(ℳ(D_DAgger)) / ℒ(ℳ_B(D_DAgger)) as a metric to compare different recurrent models. We use the learned model from <Ref> as the reference model. In <Ref> we compare this metric during training for the FFN, RNN and pretrained RNN with frozen weights. We can see that the addition of the LSTM cell provides a performance increase over the FFN when both are randomly initialized, however, a pretrained model with frozen weights is required to achieve a loss ratio below 1.0. 
We speculate that the highly correlated data points within a trajectory sequence and the lack of data diversity due to rollouts naturally staying further away from obstacles contribute to the overall worse performance of only using DAgger for the full network. The LSTM-based navigation system learns to overcome local minima for which using only immediate sensor data is insufficient. We obtain qualitative evidence of the LSTM capability of providing temporal consistency by coloring the trajectory based on the relative influence of the LSTM on the latent representation inside the network. <Ref> shows a situation in which the LSTM is able to go over a wall in which the previous systems got stuck (see also <Ref>). We observe only a single mode remains in the output distribution, and that the LSTM has more influence in the exact locations where the FFN-variant got stuck. <Ref> visualizes the LSTM influence on difficult randomly generated synthetic maps. We also evaluate the RNN on human made environments, shown in <Ref> and <Ref>. The learned system is able to generalize zero-shot to these structured environments from training entirely on synthetic datasets made from primitive shapes. We see that the LSTM increases its influence at key locations for navigation in both the synthetic and real world examples. As with any method, there are still limitations. Local minima that are considerably larger or deeper than observed on training data can still pose a problem, examples are visualized in <Ref> and <Ref>. This can be attributed to (a) the training environments and (b) the multi-modality of the output. Training on potentially synthetic but human-made structured environments is a promising further avenue for improvements. Additionally, choosing a more complex strategy to select the most promising “mode” of the multi-modal output than the weighted sum presented in <Ref> could also improve the influence of the learned goal policy. <Ref> depicts an example of a case where an improved strategy would be beneficial. § QUANTITATIVE EVALUATION In this section, we will quantitatively compare our proposed methods to existing solutions, both on synthetic and real-world scenes. §.§ Synthetic Scenes To evaluate how our reactive method, that uses only current sensor data, compares to methods that use a complete map, we compare against an “expert”, which simply follows the geodesic field gradient, and CHOMP, a well-known local planner <cit.>. We compare planning success rates as a function of obstacle density on a separate test set, depicted in <Ref>. In all following evaluations, “baseline” refers to the system as described in <ref>, “FFN” to the one in <ref>, and “RNN” to <ref>. “CHMP c” corresponds to CHOMP <cit.> with a collision weight c, and “Expert” is a policy that directly uses the gradient of the geodesic distance field as input, as shown in <Ref>. The expert does not always achieve a 100% success rate, as it is still part of a policy-based system; i.e. in certain cases the local avoidance policies do not let the expert pass through very narrow spaces. It is important to note that CHOMP and the expert both use privileged information (the full map), whereas all other systems only have access to immediate sensor rays. The FFN- and RNN-based systems outperform all other local methods for almost all map variants. It seems that despite having access to the full map, CHOMP struggles with the quasi-global planning problems we used here to push local navigation algorithms to their limits. 
We found that CHOMP requires a large collision weight to navigate around the thin walls present in the plane worlds. However, such a high value decimates performance in the sphere box worlds, as the more `bulky' obstacles result in a very strong obstacle gradient, leading to unstable behavior that pushes CHOMP to the edges of, or even outside of, the world. The baseline method and our neural network-based extension methods achieve surprisingly high success rates - especially considering the fact that none of these methods has access to a map. §.§ Real-World Scenes While using very densely occupied simulated worlds gives us an idea of the relative performance of our algorithms, the true test of any learning-based system is how well it generalizes to real problems. We evaluate several variants of proposed reactive planners as shown in the previous section (Baseline, FFN, and RNN and ablations) against each other, as well as CHOMP (CHMP40) and RRT* <cit.>. We use three datasets: a 120-obstacle variant of the generated sphere box world (SB 120), the home_at_scan1_2013_jan_1 sequence of SUN3D <cit.>, shown in Figure <ref>, and the apt0 sequence of BundleFusion <cit.>, shown in <Ref>. We additionally do an ablation on the network size by introducing sFFN, which is a smaller version of the FFN with all learned layers with size ≥ 128 halved. The results are shown in <Ref>. We show that our learned methods are able to zero-shot generalize to real environments, despite never having seen anything similar during training time. It is also again important to emphasize that CHOMP and RRT* have access to the complete map during planning time, while all the reactive methods (Base, sFFN, FFN, and RNN) only have access to the sensor rays at each planning timestep. As a result, our reactive methods also have a much smaller memory footprint: requiring only the ray information to be stored, not a complete volumetric reconstruction. Another interesting result is that while all learned methods outperform the reactive baseline, the RNN only shows improved performance in simulation, not on the real-world datasets. We hypothesize that the RNN “overfits” more to the distribution of environments used during training, as the effective input space of an RNN is much larger. Additionally, the smaller sFFN performs worse than the FFN on the synthetically generated world, but has roughly similar performance on the real world datasets. Similar to the RNN, we believe that the smaller network size may lead to better generalization to completely different environments. As in previous evaluations, all learned methods outperform the CHOMP local planner. RRT*, as a global planning method with a much larger time budget, has the highest success rate – but again, both methods have access to the full map while the reactive methods only have instantaneous sensor rays. Another important aspect is that CHOMP and RRT* only provide output once the solver reaches a stopping criterion and a full trajectory is created, whereas the reactive methods simply need to evaluate the next-best acceleration at each time step. The individual query time highlights this contrast, as the reactive methods can be queried orders of magnitude faster than CHOMP and RRT*. There is additional room for optimization: we found that a version of the RNN compiled using the TensorRT compiler runs inference at 2.7 kHz on an Nvidia Jetson Orin NX, which would lead to a query time of only 0.4 ms. 
§.§ Robustness to Noise We have shown that our learned methods generalize to real environments, but do they also generalize to realistic sensor noise? We evaluate the robustness of the systems by adding noise to the rays (given as input to both the network and the avoidance policies). We choose a multiplicative noise model, where a ray distance d_r is converted to a noisy observation using a normal distribution 𝒩(·) d_r ·(1 + 𝒩(0, σ_n^2) ), parameterized by the noise standard deviation σ_n. This is done independently for every ray. <Ref> shows the performance of the RNN on the SUN3D world with different noise levels. The progression of these numbers is similar for different world types; performance is constant up to 30%, and quickly drops off afterwards. This indicates that the system is extremely robust to noise levels up to 30%, which is far above the noise levels reached by modern distance sensors. § DISCUSSION We show that adding a neural network to a purely reactive planner helps navigation in very cluttered environments. By training on a geodesic field on a fully known map, we give our learned planners a form of geometric intuition on how to escape local minima based only on current sensor rays. The addition of an LSTM component further increased the success rate. We hypothesize that some temporal consistency is important to avoid certain types of local minima. Adding an LSTM component also comes with trade-offs - it generally makes training less efficient and generalization outside of the training distribution is slightly worse. We used the geodesic field as a training supervision signal, as it acts a proxy for a global planner: pointing in the direction of the goal around obstacles throughout the map. However, the geodesic field is constructed such that it always points along the geodesic - the shortest possible path. The drawbacks of this are shown in Figure <ref>, where the geodesic field is pointing in the direction of a very narrow opening above the wall, which might not be the ideal path for a real robot to traverse. While path length is often an important metric, it may not be the ultimate objective. Depending on the map, robot and scenario, it is sometimes better to take a longer path that fulfills other desired properties, e.g. avoiding narrow geometries or using fewer turns. Especially in reactive navigation, finding the shortest path is not necessarily the ultimate objective; progress towards the goal, by going towards large perceived open spaces, is often more useful. Another interesting aspect is the multi-modality of the navigation network output. In many situations there are multiple viable next directions that can be taken, which motivates future research into how to best exploit such output. However, the results of this work provide meaningful and important insight into the difficulty and nature of reactive navigation in potentially cluttered 3D scenes in general. Coming back to our introductory question – when do we need a map? – we here present evidence that for many realistic navigation use-cases, reactive navigation combined with a higher-level geometric intuition suffices. Considering a full robotic system, especially floating-base mobile robots where drift-free state estimation can be difficult to guarantee, the use of a pure reactive method greatly simplifies the operational complexity and state estimation quality requirements. 
By using a purely reactive navigation system that only needs the current sensor input to navigate safely, robot designers can use less accurate odometry sources, have fewer requirements on time synchronization between sensors, and reduce overall computational complexity of their systems. Furthermore, the navigation policies themselves are extremely robust to noise on the input sensor data, hopefully bringing safety even with lower-cost sensors. In this work, we used a generic ray-casting interface to a mapping system to mimic reactive sensor navigation. As was shown for the baseline method <cit.>, the ray-casting interface closely resembles LiDAR data. In future work, we plan to study the effects of using LiDAR data for navigation. § CONCLUSION In this paper we presented two neural-network-based navigation methods, that when combined with a purely reactive safety layer, enable navigation through very densely cluttered 3D worlds using only local sensor data and without a map. Our system outperforms other local, well-known methods and is trained in a fully self-supervised fashion in auto-generated worlds. Additionally, it is capable of zero-shot transfer to real 3D environments, and has high robustness to noise. The modular architecture facilitates the seamless combination of the navigation stack with any other task formulated as an RMP, and enables introspection to gain intuition about how the learned components exert their influence. We exploit the introspectability of the presented system for understanding the challenges and nature of local navigation in 3D spaces. The ability of the system to find its way through extremely cluttered maps with only local data is surprising and highly relevant for practical applications, where robust traversal of our cluttered and semi-structured world with minimal system requirements is highly important. unsrtnat
http://arxiv.org/abs/2407.12141v1
20240716195314
Predicting Emotion Intensity in Polish Political Texts: Comparing Supervised Models and Large Language Models in a Resource-Poor Language
[ "Hubert Plisiecki", "Piotr Koc", "Maria Flakus", "Artur Pokropek" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Predicting Emotion Intensity in Polish Political Texts: Comparing Supervised Models and Large Language Models in a Resource-Poor Language Hubert Plisiecki, Piotr Koc, Maria Flakus, Artur Pokropek =============================================================== § ABSTRACT This study explores the use of large language models (LLMs) to predict emotion intensity in Polish political texts, a resource-poor language context. The research compares the performance of several LLMs against a supervised model trained on an annotated corpus of 10,000 social media texts, evaluated for the intensity of emotions by expert judges. The findings indicate that while the supervised model generally outperforms LLMs, offering higher accuracy and lower variance, LLMs present a viable alternative, especially given the high costs associated with data annotation. The study highlights the potential of LLMs in low-resource language settings and underscores the need for further research on emotion intensity prediction and its application across different languages and continuous features. The implications suggest a nuanced decision-making process to choose the right approach to emotion prediction for researchers and practitioners based on resource availability and the specific requirements of their tasks. § INTRODUCTION §.§ Significance of emotions in psychological science and present advancements in research on emotions Over the past few decades, social scientists have broadened their research to understand the significant role that emotions play in human behavior and societal dynamics. This exploration has yielded important findings in political sciences (Mintz et al., 2022), sociology (Bericat, 2016; Turner & Stets, 2006), economics (Loewenstein, 2000), anthropology (Lutz & White, 1986), organizational research (Diener et al., 2020), as well as other fields of social (Kleef, 2018) and psychological sciences (Derks et al., 2008). While investigating this role of emotions, many researchers concentrate on the question of whether an emotion is present, focusing on the categorical aspects of emotions (Fritz et al., 2009; Saarimäki et al., 2016; Siedlecka & Denson, 2019; Tanaka-Matsumi et al., 1995). However, beyond the sole presence or absence of emotions, there is also their intensity, which was recognized early on as necessary to understand human behaviors (Brehm, 1999; Plutchik, 1965). People often describe emotions like anger, sadness, or happiness in varying degrees, from none at all to very intense, and research indicates that emotion intensity is crucial in cognitive processing, social behavior, and communication within groups (Frijda et al., 1992; Niedenthal & Brauer, 2012; Reisenzein, 1994). There is also a third general approach to studying emotions, i.e., dimensional models. In contrast to categorical and intensity approaches, dimensional models offer a different perspective by suggesting that emotions can be placed within a continuous space defined by specific dimensions representing fundamental properties of emotional states (Gendron & Feldman Barrett, 2009). The most recognized model from this branch is Russell's (1980) circumplex model of affect, which posits that all emotions can be characterized by two fundamental dimensions: valence, which is the degree of pleasure or displeasure, and arousal, the level of activation or deactivation associated with an emotion. 
Access to numerous text data sources, including social media content, responses to open-ended questions in computer-based assessments, political declarations, newspapers, and online forums, offers an unprecedented opportunity to study emotions beyond the traditional settings of psychological laboratories. To capture emotions in text, scholars initially focused on valence, i.e., differentiating between positive and negative sentiment. However, an increasing body of research has demonstrated that emotions of the same valence can affect social processes in different ways (Druckman & McDermott, 2008; Nabi, 2003; Valentino et al., 2011; Vasilopoulos et al., 2019) and that those distinct (discrete) emotions, like anger and happiness, can be identified in text (e.g., Pennebaker & Francis, 1996). As a result, various tools for discrete emotion detection were created, mainly for the English language context, with far fewer tools available for other languages (Mohammad, 2016; Üveges & Ring, 2023). The emotion intensity approach has been largely overlooked in natural language processing (NLP) applications. While some attempts at predicting intensity exist, they are very rare (e.g. Akhtar et al., 2020). We can attribute it to the straightforwardness of the discrete approach. For instance, annotating items regarding the binary occurrence of emotions is way easier than their intensity. Recently, large language models (LLMs) have contributed to advancements in NLP, including emotion classification. These models have demonstrated their effectiveness in accurately identifying discrete emotions in the text by leveraging vast data and complex pattern recognition capabilities (Kocoń et al., 2023). Their success in this domain suggests the potential of LLMs to also aid in predicting emotion intensity, which has not yet been thoroughly investigated. The approach using LLMs is an auspicious direction for “resource-poor languages” (Mohammad 2016, 203), where researchers often encounter the problem of lacking adequate tools for analyzing emotions. A problem, which is difficult to solve as the expression of emotions is language (Bazzanella, 2004) and domain (Haselmayer & Jenny, 2017; Rauh, 2018) specific, which requires researchers to use or create linguistically adapted tools for a particular kind of corpora (Üveges and Ring 2023). In this work, we explore the potential of LLMs to replace human annotators and traditional predictive models in the task of predicting emotion intensity in one of the resource-poor languages, Polish, focusing on political texts. To do so, we build a corpus of political texts using different social media sources and have 10, 000 texts annotated by expert judges, whose reliability in assessing the intensity of emotions is evaluated. Then, we compare the performance of several LLMs to a supervised model that was trained on annotated datasets in predicting the intensity of emotions. The results show that the supervised model trained on the annotated data generally outperforms the LLMs, offering marginally higher accuracy and lower variance. However, this comes at the cost of the resources needed for the data annotation. Overall, the findings hold promise for using LLMs to assess other continuous features in Polish and potentially extend to other resource-poor languages. 
§.§ Emotions and social media - previous research Researching emotions in social media is of high importance as these social platforms have evolved into significant channels for spreading opinions and emotions (Beskow & Carley, 2019) - primarily due to their striking popularity, with 77.8% of people over the age of 18 using them globally (DataReportal, 2023). Research indicates that social media emotions may motivate people to share certain content (Brady et al., 2017), buy commercial products (Lu et al., 2021), change their behaviors (McBride & Ball, 2022), or even divide and disrupt populations (Whitehead, 2016). It also has been demonstrated that a reaction to social media posts may depend on the specific emotions enhanced by this content and the context of its origin. As for the former, for instance, emotions rich in arousal (e.g., anger and anxiety) may increase information sharing, regardless of emotional valence (Berger & Milkman, 2012; Stieglitz & Dang-Xuan, 2013). As for the latter, for example, in political discourse, content sharing is more probable if it includes fear bait or support communication (Walker et al., 2017). However, although there is general scientific consensus regarding the great utility of emotion evaluation in social media texts, the studies that have done so to date have been largely limited. They have either (1) measured emotion to a limited extent (e.g., limiting the number of estimated emotions or keywords to refer to them) or (2) failed to capture the nuances of emotional responses (e.g., their dimensional nature and intensity), ignoring emotions' complexity (Elfenbein & Ambady, 2002). Regarding the first point, current research acknowledges that affective and emotional responses may capture basic emotions ("discrete emotions"; e.g., happiness, sadness, and anger), as well as their combinations, often discussed as "moods" (Ekman, 1992a), and are universal, culturally- and situation-dependent (Russell, 2014). Nevertheless, there is still open discussion regarding the exact number of emotions that should be distinguished, starting from Ekman's (1992a,b) theory of six basic emotions (enlisting happiness, anger, sadness, fear, disgust, and surprise), and ending with his notable critiques (Barrett et al., 2019; Elfenbein & Ambady, 2002). For instance, Barrett et al. (2019) opted for more emotional dimensions (over 20). Correspondingly, other researchers opted for describing emotions using more specific categories (e.g., irritation or rage) rather than its more extensive "umbrella terms", e.g., anger (Cowen & Keltner, 2017), especially because some specific emotions, for instance, awe and wonder (Keltner & Haidt, 2003; Shiota et al., 2007), compassion, sympathy, and empathic pain (Goetz et al., 2010) may not fully comprehend with generalized categories. As there is no clear consensus in this discussion, the researcher's choice regarding the number of emotions may also be seen as a reflection of their theoretical or ontological preferences. Regarding the latter, second point, ignoring emotions' complexity and dimensionality may potentially harm the validity of conclusions, as it "tears down" the measurement of the theoretical nature of emotional responses - contrary to the choice regarding the number of emotions, which may only limit the interpretation of the results to the exact dimensions which were under investigation in a particular study (Paletz et al., 2023). 
Indeed, some of the previously described annotation schemes do not measure emotions' intensity (Alm et al., 2005; Novielli et al., 2018) or operate on exclusive coding, in which only one emotion may be chosen per text (Alm et al., 2005). In the current research we try to capture the emotional content of social media posts in a more robust way, although the list of emotions that we try to map is limited, we believe that it is comprehensive and additionally we fully acknowledge the arbitrary choice of the specific emotions that we have picked. We combine the dimensional, circumplex model with basic emotions, and additionally predict the intensity of the latter, which so far has been rarely attempted. §.§ Emotions intensity - promising research gap or scientific dead end? The intensity of emotion was recognized early as an important and natural extension of the basic classification scheme in both theoretical and practical contexts (Ferrara & Yang, 2015; Qiu et al., 2020). One of the earlier significant attempts to use continuous emotion metrics was made by Strapparava and Mihalcea (2007). However, their results were not entirely valid. The dataset they used consisted of news headlines from major outlets like the New York Times, CNN, and BBC News, as well as from the Google News search engine. They prepared two datasets: a development dataset with 250 annotated headlines and a test dataset with 1,000 annotated headlines. Annotators were provided with six predefined emotion labels (i.e., anger, disgust, fear, joy, sadness, surprise) and asked to classify the headlines with the appropriate emotion label and/or with a valence indication (positive/negative). Additionally, an intensity scale ranging from 0 to 100 was added. The agreement between the six annotators on the emotions (calculated as the Pearson’s correlation of their scores to the averaged scores of the other annotators) was as follows: 0.50 for anger, 0.44 for disgust, 0.64 for fear, 0.60 for joy, 0.68 for sadness, 0.36 for surprise, and 0.78 for valence. Using NLP and a lexicon-based method, they were able to detect emotions with high accuracy (binary classification): 93.6% for anger, 97.3% for disgust, 87.9% for fear, 82.2% for joy, 89.0% for sadness, and 89.1% for surprise. However, the performance of automated systems for emotion intensity prediction, calculated as the correlation between the original scores and the system predictions, was low: 0.32 for anger, 0.19 for disgust, 0.45 for fear, 0.26 for joy, 0.41 for sadness, and 0.17 for surprise. The next significant study on emotion intensity was conducted by Mohammad and Bravo-Marquez in (2017b). They created the first dataset called the “Tweet Emotion Intensity Dataset”, which consisted of 7,097 tweets annotated regarding anger, fear, joy, and sadness intensities. The reliability of annotation for intensity, supported by best-worst scaling (BWS) technique, showed high Pearson correlation coefficients: 0.80 for anger, 0.85 for fear, 0.88 for joy, and 0.85 for sadness. In the shared task using this dataset, 22 teams participated (Mohammad & Bravo-Marquez, 2017a), with the best-performing system achieving a Pearson correlation of 0.747 with the gold intensity scores, indicating that predicting emotion intensity is possible but challenging. Akhtar et al. (2020b) achieved the following correlations for predicted emotions with annotated emotions: 0.75 for anger, 0.71 for joy, 0.76 for sadness, and 0.78 for fear, with an average correlation of 0.75. 
This demonstrates that predicting the intensity of emotions, although difficult, is feasible with a reasonable level of reliability. However, this task is significantly more challenging than simple classification. Despite the apparent success in predicting emotion intensity, further research in this area has been limited, with few exceptions where emotion intensity has been applied to the study of empirical problems (Sharifirad et al., 2019). This leaves the intensity of emotions as a theoretically valid yet rarely explored area in emotion sentiment analysis. §.§ LLMs as a method of classifying emotions - previous research LLMs have been successfully used to predict some dimensions of emotions in text snippets. One of the experiments with OpenAI models, covering both GPT3.5 and GPT4, tested a variety of annotation tasks, including sentiment analysis and emotion recognition, and showed promising results; however, their performance did not match that of the available State of the Art (SOTA) models at the time (Kocoń et al., 2023), and compared to other annotation tasks they fared poorly especially on tasks related to emotion annotation, where the difference between their results and those of the SOTA models ranged from 21.8% to 71.3%. This result has been confirmed by other research projects, where the models developed by OpenAI have also fallen short of the SOTA (Amin et al., 2023; Krugmann & Hartmann, 2024). This, however, should not be interpreted as a rule for all LLMs, as Amin and his team (2023) showed that the Llama model developed by Meta can match the SOTA performance on some benchmarks. While the models developed by OpenAI might not provide the best results with regard to English benchmarks, they have been shown to be superior for the task of cross-lingual sentiment analysis (Přibáň et al., 2024), owing largely to their vast multilingual training data. For example, while the Llama model has been shown to be superior for some English emotion-related tasks, due to its limited training set compared to the OpenAI models it fared worse on multilingual tasks. For that reason, in the current study we chose to focus on the performance of GPT3.5 and GPT4 models. Furthermore, the bar set by the SOTA models for the utilization of LLMs in resource-poor languages is considerably lower, as the lack of resources also leads to lower SOTA performance. As LLMs can accept context alongside the task that they are supposed to complete, the In-Context Learning (ICL) technique has been used repeatedly to enhance their performance (Chochlakis et al., 2024; Kocoń et al., 2023). This method relies on providing examples of the items the LLM is supposed to annotate, alongside their ground truth values, in order to guide the model towards better solutions. It is also often referred to as multi-shot prediction. While this technique has indeed elevated the accuracy of LLM predictions for the most part, deeper analysis has shown that the model in many cases does not learn from the provided ground truth, but rather pays attention to the examples alone, which in turn prime the model towards similar examples that it has learned from its training set, resulting in better performance (Chochlakis et al., 2024). This could mean that multi-shot prompting should be less performant for low-resource languages, as the model has been trained on comparatively fewer texts associated with them. 
While testing this hypothesis directly is beyond the scope of this research, as we lack reliable control groups, we do employ multi-shot prompting in order to push the LLMs to the edge of their performance, whether or not this mechanism works as intended. § MATERIALS AND METHODS §.§ Database preparation Our research utilizes a comprehensive database of Polish political texts from social media profiles (i.e., YouTube, Twitter, Facebook) of 25 journalists, 25 politicians, and 19 non-governmental organizations (NGOs). The complete list of the profiles is available in the Appendix. For each profile, all available posts from each platform were scraped (going back to the beginning of 2019). In addition, we also used corpora consisting of texts written by “typical” social media users, i.e., non-professional commentators of social affairs. Our data consists of 1,246,337 text snippets (Twitter: 789,490 tweets; YouTube: 42,252 comments; Facebook: 414,595 posts). As transformer models impose limits on input length, we implemented two types of modification to the initial dataset. First, since texts retrieved from Facebook were longer than the others, we split them into sentences. Second, we deleted all texts that were longer than 280 characters. The texts were further cleaned of social media artifacts, such as dates scraped alongside the texts. Next, the langdetect (Danilak, 2021) software was used to filter out text snippets that were not written in Polish. Also, all online links and user names in the texts were replaced with “_link_” and “_user_”, respectively, so that the model would not overfit to the sources of information or to specific social media users. Because most texts in the initial dataset were emotionally neutral, we filtered out the neutral texts and included only those snippets with higher emotional content in the final dataset. Accordingly, the texts were stemmed and subjected to a lexicon analysis (Imbir, 2016) using lexical norms for valence, arousal, and dominance - the three basic components of emotions. The words in each text were summed up in terms of their emotional content extracted from the lexical database and averaged to create separate metrics for the three emotional dimensions. These metrics were then summed up and used as weights to choose 8,000 texts for the final training dataset. Additionally, 2,000 texts were selected without weights to ensure the resulting model could process both neutral and emotional texts. The proportions of the texts coming from different social media platforms reflected the initial proportions of these texts, resulting in 496 YouTube texts, 6,105 Twitter texts, and 3,399 Facebook texts. §.§ Annotation Process The final dataset consisting of 10,000 texts was annotated by 20 expert annotators (age: M = 23.89, SD = 4.10; gender: 80% female). All annotators were well-versed in Polish political discourse and were students of Psychology (70% of them were graduate students, which in the case of Polish academic education denotes people studying in their 4th or 5th year). Thus, they underwent at least elementary training in psychology. The entire annotation process lasted five weeks. Each week, every annotator was given five sets of texts (out of 100 sets with 100 randomly assigned sentences each) to be annotated in the given week. The sets were randomly assigned to annotators, considering the general assumption that five different annotators should annotate each set. 
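A minimal sketch of such a set assignment, using the numbers reported above, is given below; the exact randomization procedure is not reported in detail, so this script illustrates the constraints rather than the original implementation:

import random

random.seed(0)
texts = list(range(10_000))              # indices of the selected snippets
annotators = list(range(20))
random.shuffle(texts)

# 100 sets of 100 randomly assigned sentences each.
sets = [texts[i:i + 100] for i in range(0, len(texts), 100)]

# Each set is annotated by 5 different annotators; with 100 sets and 20
# annotators this yields 25 sets per annotator, i.e., 5 sets (500 texts)
# per week over the 5 weeks of the annotation process.
assignments = {a: [] for a in annotators}
for set_id in range(len(sets)):
    # choose the 5 annotators with the fewest sets so far, breaking ties at random
    ranked = sorted(annotators, key=lambda a: (len(assignments[a]), random.random()))
    for a in ranked[:5]:
        assignments[a].append(set_id)

weekly_plan = {a: [assignments[a][week * 5:(week + 1) * 5] for week in range(5)]
               for a in annotators}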
Generally, annotators simultaneously annotated no more than 500 texts each week, protecting them from the negative effects of cognitive depletion. Annotators labeled each text based on the five basic emotions: happiness, sadness, anger, disgust, and fear. In addition, annotators were asked to label the texts with regard to an additional emotion, namely pride, and two general dimensions of emotions: valence and arousal. In all cases, annotators used a 5-point scale (in the case of emotions: 0 = emotion is absent, 4 = very high level of emotion; in the case of valence and arousal, we used a pictographic 5-point scale provided in the Appendix). Since the two additional emotional dimensions might not have been familiar to annotators, before the formal annotation process began, all annotators were informed about the characteristics of valence and arousal (note that we did not provide formal definitions of basic emotions). General annotation guidelines were provided to ensure consistency and minimize subjectivity (all instructions used within the training process are available in the Appendix). §.§ Statistical analyses §.§.§ Annotation Agreement We assessed the agreement between raters using the intraclass correlation coefficient (ICC). The ICC coefficients are based on the random-effects ANOVA model for independent groups (McGraw & Wong, 1996; Shrout & Fleiss, 1979). ICC(1) measures the reliability of single ratings. ICC(1) compares the variability between raters to the total variability across all ratings. It assesses how much of the total variance in the scores is due to the variance between the rated texts. It assumes that a different rater rates each text, and the raters are randomly selected. It determines the consistency of raters' evaluations across texts when a randomly selected rater assesses each text. The ICC(1,k) coefficient extends the concept of single-rating reliability, as measured by ICC(1), to scenarios where each subject is evaluated by the average rating of a set of k raters. Specifically, it assesses the absolute agreement among these raters, considering the mean of their ratings for each text. This approach acknowledges the increased reliability expected when aggregating evaluations from multiple raters. The ICC values range from 0 to 1, with 0 indicating no agreement among raters and 1 indicating perfect reliability. Koo and Li (2016) provide a guideline for interpreting ICC values, categorizing them as follows: values below 0.50 are considered poor; values ranging from 0.50 to 0.75 indicate moderate reliability; values between 0.75 and 0.90 suggest good reliability; and values above 0.90 are deemed excellent. To estimate the ICC, we used the pingouin Python package (Vallat, 2018). §.§.§ Data for training, validation and testing After the annotation steps, we averaged the annotations corresponding to specific emotional metrics for each text. As the emotional load of the texts was still highly skewed towards a lack of emotions, z scores for all of the emotions were computed, summed up, and used as weights to sample the test set, which constituted 10% of the total dataset. We did this to prevent the model from overfitting to the lack of emotions by assigning low emotion scores to every predicted text. The remaining data was split into a training set and a validation set, yielding an overall 8:1:1 split. §.§.§ Model Architecture We considered two alternative base models: the TrelBERT transformer model developed by a team at DeepSense (Szmyd et al., 2023), and the Polish Roberta model (Dadas, 2020). 
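In outline, this corresponds to a transformer encoder with a small regression head, as detailed in the remainder of this subsection; a rough sketch is given below, where the model identifier, pooling choice, and target scaling are illustrative assumptions rather than the exact implementation:

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

BASE = "sdadas/polish-roberta-base-v2"   # illustrative identifier for the Polish Roberta encoder
tokenizer = AutoTokenizer.from_pretrained(BASE)

class EmotionRegressor(nn.Module):
    # Encoder with a sigmoid regression head predicting 8 continuous scores
    # (six emotions plus valence and arousal), assuming targets rescaled to [0, 1].
    def __init__(self, n_outputs=8, dropout=0.6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(BASE)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_outputs)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]                # first-token (CLS-style) representation
        return torch.sigmoid(self.head(self.dropout(pooled)))

model = EmotionRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.3)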
The encoders of both models were each equipped with an additional regression layer with a sigmoid activation function. The maximum number of epochs in each training run was set to 100. At each step, we computed the mean correlation of the predicted metrics with their actual values on the evaluation batch, and the models with the highest correlations were saved to avoid overfitting. We used the MSE criterion to compute the loss alongside the AdamW optimizer with default hyperparameter values. Both of the base models were then subjected to a Bayesian grid search using the WandB platform (Wandb/Wandb, 2017/2024) with the following values: dropout - 0, 0.20, 0.40, 0.60; learning rate - 5e-3, 5e-4, 5e-5; weight decay - 0.10, 0.30, 0.50; warmup steps - 300, 600, 900. The model which obtained the highest correlation relied on the Roberta transformer model and had the following hyperparameters: dropout = 0.6; learning rate = 5e-5; weight decay = 0.3. §.§.§ Robustness Analysis To assess the robustness of the model when trained on different subsets of the data, we performed a k-fold analysis with the same parameters as those chosen through the Bayesian grid search. We split the dataset into ten folds. In each iteration, one partition was held out, and the rest were split into a training and a validation set (an 889:111 ratio, to keep the validation set approximately the same size as the held-out test set). Then, we trained the model using the exact same method as described in the Model Architecture section. §.§.§ LLM Testing To assess the ability of LLMs to annotate the dataset properly, we queried both gpt-3.5-turbo-0125 (GPT3.5) and gpt-4-0613 (GPT4) with the multiple-shot technique. We also tested GPT3.5 on zero-shot, one-shot, and up to five-shot setups to determine the best-performing multiple-shot setup. These tests were carried out on the validation set in order not to overfit the test set. The discrete emotions were tested with the following query (the prompts have been translated for the purpose of presentation): Translation: "To what extent does the text below manifest the emotion 'emotion'? Respond using a 5-point scale, where 1 means the emotion is not present at all and 5 means the emotion is very distinctly present. Please respond with a single number. Text: 'text' Your response:" The dimensions of valence and arousal had these prompts: Valence: "What emotional valence do you read in the following text? Respond using a 5-point scale, where 1 indicates a negative emotion is present and 5 indicates a positive emotion is present. Please respond with a single number." Arousal: "What level of arousal do you read in the following text? Respond using a 5-point scale, where 1 means no arousal and 5 means extreme arousal. Please respond with a single number." Due to the difference in the prompts as well as the qualitative difference between the dimensions and the basic emotions, we conducted two separate tests, one for each type of emotion taxonomy (basic vs dimensional affective metrics). The prompts were created based on the questions that annotators provided during the annotation process. They were structured in accordance with the official OpenAI prompt engineering guidelines (OpenAI Platform, n.d.). For an in-depth description of how the prompts were structured, see the Appendix. The examples for the multiple-shot scenarios were picked in the following manner. First, we vectorized the training set using the text-embedding-3-small model from the OpenAI API. 
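A minimal sketch of this vectorization step is shown below; it uses the current OpenAI Python client, and the client interface and variable names are assumptions rather than the exact code used:

import numpy as np
from openai import OpenAI

client = OpenAI()                         # assumes OPENAI_API_KEY is set in the environment

def embed_texts(texts, batch_size=512):
    # Request text-embedding-3-small vectors in batches and stack them into an array.
    vectors = []
    for i in range(0, len(texts), batch_size):
        response = client.embeddings.create(model="text-embedding-3-small",
                                            input=texts[i:i + batch_size])
        vectors.extend(item.embedding for item in response.data)
    return np.array(vectors)

train_texts = ["..."]                     # placeholder for the training-set snippets
train_vectors = embed_texts(train_texts)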
Based on the resulting vectors, we calculated the centroid of the embeddings to represent the central point of our dataset. We then determined each text's distance from this centroid to assess its representativeness or deviation from the rest of the texts in the dataset. We wanted the example texts to be as representative of the whole dataset as possible. Then, for the one-shot scenario, we calculated the distance of each text from the midpoint of its corresponding emotional scale, for each emotion separately. By combining these two types of metrics, we then picked the texts that were both the most representative in terms of vector similarity and rated as expressing their corresponding emotional constructs at neither a high nor a low level. We repeated the same operation for the two-shot scenario; however, the texts were picked based on the distance from the lowest point (first text) and the highest point (second text) on the emotional scale. The three-shot scenario combined the examples from one-shot and two-shot. In the four-shot scenario, texts were picked based on their distance from the 0.20, 0.40, 0.60, and 0.80 points of the emotional scale. Finally, the five-shot scenario texts were picked based on their distance from the points of the emotional scale corresponding to the following fractions: 1/6, 2/6, 3/6, 4/6, 5/6. The logic was to gradually present the LLM with a more fine-grained representation of the emotional spectrum. There were multiple cases where the LLM did not respond to the request with an intelligible number, either refusing to honor the request on the grounds that the query did not comply with OpenAI regulations or simply stating that it could not assess the emotionality of the specific snippet. We considered this when picking the best multiple-shot scenario for each emotion taxonomy. The test results in the basic emotions condition showed that the three-shot method yielded the best results for this task (see Table 1). The averaged correlation between the actual data and the scores provided by the LLM for all basic emotions achieved the highest level for this setting (r = 0.72). The averaged standard deviation of the scores for all basic emotions for this setting was lower than zero-shot and higher than two-shot (zero-shot: SD = 1.53; one-shot: SD = 1.15; two-shot: SD = 1.10). However, we chose to focus on correlation as the decisive metric. The total number of rejected texts for this scenario was also considerably low, totaling only 47 texts across all emotions. The dimension-oriented tests pointed towards the two-shot scenario as most applicable for their setting (see Table 2). Here, the two-shot scenario had the highest averaged correlation (r = 0.77) while, at the same time, maintaining an acceptable averaged standard deviation of scores (SD = 1.28). The total number of rejected texts for this scenario was also considerably low, totaling 27 texts. In conclusion, the three-shot scenario was chosen for the basic emotion setup, while for the dimensional taxonomy setup we picked the two-shot method. These methods were then used to annotate the test set using both GPT3.5 and GPT4. §.§.§ Costs The participants in the annotation process were paid around $2,400 in total, split equally between them. At the same time, the calls to the API that were required to perform the multiple-shot search totaled $8.38. The test set annotations, on the other hand, cost us $65.60, driven mostly by the GPT4 API calls. § RESULTS The ICC results are presented in Table 3. 
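These agreement coefficients were obtained with the pingouin package, as noted in the Annotation Agreement subsection; a minimal sketch of such a computation is given below, where the long-format layout and the column names are assumptions:

import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per (text, rater) pair, with one
# rating column per emotion or affect dimension. For the one-way design used
# here, raters can be renumbered 1..5 within each text before this step.
ratings = pd.read_csv("annotations_long.csv")

icc_results = {}
for emotion in ["happiness", "sadness", "anger", "disgust", "fear",
                "pride", "valence", "arousal"]:
    icc = pg.intraclass_corr(data=ratings, targets="text_id",
                             raters="rater_id", ratings=emotion)
    # Keep the one-way random-effects coefficients: ICC1 (single rater)
    # and ICC1k (average of k raters).
    icc_results[emotion] = icc.set_index("Type").loc[["ICC1", "ICC1k"], "ICC"]

print(pd.DataFrame(icc_results).round(2))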
The reliability of individual rater's assessments ranged from poor to moderate across the tested emotions, with ICC (1) values extending from 0.29 for arousal to 0.60 for valence. In contrast, the reliability of average ratings from multiple raters indicated moderate to good consistency, with ICC (1, k) values ranging from 0.63 for fear to 0.88 for valence. §.§ Supervised model results The results of the main model, as summarized in Table 4, demonstrated the model's performance across different emotion categories and two general affect dimensions: valence and arousal. The table presents correlation coefficients and standard deviations (SDs) for the model predictions compared to human annotations, along with the original standard deviations observed in the human annotations. The model exhibited strong correlations with human ratings, particularly in predicting happiness and valence, both achieving the highest correlation of 0.87. It indicated a high level of agreement between the model's predictions and human judgments for these emotional dimensions, suggesting that the model was particularly effective at identifying positive emotional content and overall emotional valence. Correlations for other emotions, such as sadness (r = 0.75), anger (r = 0.85), disgust (r = 0.81), fear (r = 0.73), and pride (r = 0.80), also indicated a substantial agreement with human annotations, although to a slightly lesser extent than happiness and valence. These results suggested that the model can generally capture a wide range of emotional states, with varying degrees of effectiveness across different emotions. The SDs of the predictions generated by the model (Model’s SD) were consistently lower than those observed in averaged human annotations (“Annotator’s SD”) for all emotions and affect dimensions. This difference in variability indicated that the model's predictions tend to be more consistent than human ratings. For example, the model's predictions for happiness had an SD of 0.22, compared to the original human annotation SD of 0.26. Similar patterns were observed across all emotions and affect dimensions, with the model's predictions showing less variability than averaged human annotations. In conclusion, the model demonstrated a strong ability to predict human emotional annotations across diverse emotions and affect dimensions. The high correlation values indicated a significant agreement between the model's predictions and human judgments. In contrast, the lower SDs in the model's predictions suggested a higher consistency in the model's performance compared to the variability inherent in human annotations. These results underscored the model's potential in effectively capturing and predicting human emotional responses to textual content. §.§ K-fold validation The results of the k-fold validation are presented in Table 5. In assessing the robustness of the supervised model, we conducted a 10-fold cross-validation, focusing on a spectrum of emotional dimensions and affective states, including happiness, sadness, anger, disgust, fear, pride, valence, and arousal. The results revealed a generally high level of reliability across these dimensions. Specifically, the model exhibited strong performance in identifying happiness (mean correlation of 0.83, for 95% CI see Table 5), anger (r = 0.81), and valence (r = 0.84), indicating a consistent ability to assess these emotional states across different data subsets. 
Moderate to strong correlations were observed for sadness (r = 0.68), disgust (r = 0.75), fear (r = 0.67), pride (r = 0.76), and arousal (r = 0.71), with the confidence intervals suggesting a stable performance across the folds, albeit with some variability, particularly in detecting fear. These outcomes did not undermine the supervised model's overall reliability and generalizability. §.§ LLM Annotation Results The LLM annotation attempts explored two distinct scenarios: a two-shot setup for assessing valence and arousal, and a three-shot setup for discrete emotions including happiness, sadness, anger, fear, disgust, and pride. The two-shot approach involved GPT3.5 two-shot and a variant leveraging GPT4, while the three-shot scenario explored a GPT3.5 three-shot alongside a three-shot GPT4 variant. These setups were selected based on prior tests to optimize the LLMs' performance in emotion annotation. In the two-shot setup for valence and arousal (Table 6), the GPT3.5 two-shot approach yielded correlations of 0.79 for valence and 0.53 for arousal, with standard deviations (SD) of 1.30 and 1.02, respectively. The two-shot GPT4 variant showed slightly improved performance with correlations of 0.79 for valence and 0.55 for arousal, and reduced SDs of 1.23 and 0.93, respectively. Notably, the GPT4 variant exhibited no rejected texts, suggesting enhanced reliability or broader acceptance criteria compared to the GPT3.5 two-shot setup, which had 36 rejected texts for valence and 2 for arousal. The three-shot scenario, focused on discrete emotions (Table 7), demonstrated varied performance across different emotions. The GPT3.5 three-shot approach showed correlations ranging from 0.46 for fear (the lowest) to 0.78 for anger (the highest), with corresponding SDs spanning from 1.22 for pride to 1.73 for fear. The number of rejected texts varied significantly across emotions, with fear seeing the highest rejection at 12 texts, indicating potential challenges in consistently annotating this emotion. The three-shot GPT4 variant, however, marked a noticeable improvement in both correlation and SD across all emotions, with correlations improving to 0.88 for happiness and 0.83 for anger, among others. SDs were generally lower, indicating more consistent annotations, with pride having the lowest SD at 0.86. Remarkably, this variant showed no rejected texts across all emotions, underscoring its enhanced capability in emotion annotation. As can be seen in Figure 1, the distribution of labels varies considerably between the original annotators, the GPT3.5-generated labels, and those generated by GPT4. Starting with the annotators, their labels for basic emotions were highly skewed towards lower values, with the label of 1 being the most popular. In contrast, the distributions for valence and arousal approached a monotonic shape, with the exception of the two most extreme labels, 4 and 5, which were less numerous. In comparison, GPT3.5 exhibited a bimodal distribution for basic emotions, an uneven distribution for valence where “4” was the least used label, and a mostly leptokurtic distribution for arousal centered at the middle of the scale, with a sudden increase in counts at the “5” label. GPT3.5’s distributions were therefore visibly different from the original annotators’ distributions. On the other hand, GPT4’s distributions had a significantly better alignment with the original ones, being similarly skewed towards the lower values for basic emotions, without a pronounced bimodal peak at both ends of the spectrum. 
For valence, GPT-4’s distribution was bimodal with leptokurtic characteristics different from both GPT3.5’s and original annotator’s distributions. Finally, for arousal GPT4’s distribution overlapped with the original annotator’s distribution pretty well, apart from a slight peak at the value of “2” and a drop at the value of “4”. §.§ Direct comparison For the direct comparison we took the best performing LLM model results for both emotion categories, which was the GPT4. As can be seen in Table 8, for happiness, the GPT4 variant slightly outperformed the supervised model with a correlation of 0.88 (SD = 1.12) compared to the supervised model's 0.87 (SD = 0.22). This indicated a marginally higher accuracy in the GPT4 model, albeit with increased variability. In the case of sadness, the supervised model exhibited a higher correlation of 0.75 (SD = 0.15) relative to the GPT4 variant's 0.66 (SD = 1.00), suggesting the supervised model's superior ability to accurately annotate sadness with less variability. For anger, the supervised model also showed a higher correlation of 0.85 (SD = 0.24) against the GPT4 variant's 0.83 (SD = 1.21), indicating a slight edge in accurately capturing expressions of anger, despite the GPT4 variant's broader range of responses. When assessing fear, the supervised model demonstrated a significantly higher correlation of 0.81 (SD = 0.19) compared to the GPT4 variant's 0.65 (SD = 1.09), underscoring the supervised model's enhanced capability in identifying fear-related expressions with greater consistency. For disgust, the correlation values were more similar, with the supervised model at 0.73 (SD = 0.11) and the GPT4 variant at 0.72 (SD = 1.00), suggesting comparable performance levels, though the GPT4 model exhibits greater variability. In evaluating pride, the supervised model's correlation of 0.80 (SD = 0.20) surpassed the GPT4 variant's 0.67 (SD = 0.85), indicating the supervised model's better performance in consistently capturing expressions of pride. Regarding valence, both models showed equivalent top performance with a correlation of 0.87 for the supervised model (SD = 0.22) and 0.88 for the GPT4 variant (SD = 1.12), albeit with the GPT4 variant displaying higher variability. For arousal, the supervised model's correlation of 0.75 (SD = 0.15) was notably higher than the GPT4 variant's 0.66 (SD = 1.00), indicating the supervised model's superior accuracy and consistency in annotating arousal. In summary, while the GPT4 variant demonstrated competitive or slightly superior performance in some respects (particularly for happiness and valence), the supervised model generally exhibited higher accuracy and significantly lower variability across most emotions and affective states, highlighting its robustness and reliability in emotion annotation tasks. The standard deviations of GPT4’s predictions, on the other hand, were more similar to the standard deviations of original annotations, before they were averaged to produce training data, while the standard deviations of the supervised model, mirrored those of the averaged labels on which it was trained. § DISCUSSION As the results indicated, the question of whether researchers should use existing LLM models when annotating political texts in low-resource languages such as Polish is nuanced. On the one hand, the supervised models provided marginally, yet visibly, more accurate results. They were either just as good as GPT-4 (in cases of happiness, disgust, and arousal) or better (for all other emotions). 
While the standard deviation of the LLMs' predictions was more similar to individual, non-aggregated labels, it is not clear whether this should be considered an asset. This is because the standard deviations of arguably more representative, aggregated human emotionality labels were far smaller. These smaller values were mirrored in the distribution of the predictive model’s labels. One significant advantage of the supervised model is its resilience to external circumstances, such as API availability. Once trained, the model can be stored on the researchers' machines and reused at any time for practically free (excluding computing costs). The availability of the API, while not completely uncertain, is less reliable. On the other hand, the supervised models require a costly annotation process that is orders of magnitude more resource intensive. One significant upside of this annotation process is that the data gathered can be opened to the large public as we do so for this paper, and thus reused for different projects. The annotation issue is further complicated by the fact that without gathering at least some annotations it is hard to estimate the reliability of LLMs for the specific task that it is supposed to be used for. Therefore, it is hard to avoid this laborious process. However, for evaluating the performance of the LLM without previous parameter searches with regards to a specific multiple shot setup the size of the annotated dataset can be significantly smaller than that required for supervised learning and can perhaps be carried out by the researchers themselves. Of course, a smaller dataset also makes it difficult to choose examples for the multiple shot setup. It also limits the possibility of the parameter search for the prompting technique which, when not carried out on a separate validation set, can result in overfitting. These considerations imply that the preference for use of either approach largely depends on the availability of resources, both of financial and substantive nature. Research programs as well as commercial projects that have the option to engage in large scale annotation projects and train their own models will be rewarded for doing so by higher accuracy of their predictions, as well as more confidence in the long-term utility and reliability of their predictive solutions. On the other hand, those teams which either do not have the resources necessary or do not want to spend them can opt for the LLM-based approach, which will be marginally worse in performance but at the same time offers a fairly easier and faster-to-implement solution. The nature of the task such research teams strive to accomplish can thus be considered as another guide to choosing which approach works best for the team. Tasks that permit forgoing some accuracy for the sake of fast resolution are therefore best suited for the LLM approach, while those in which small accuracy errors can propagate and multiply should be tackled with the supervised method. Another important issue is the amount of data that has to be assessed. LLM approaches, while simpler and faster to implement, can run into scaling issues. This has to be considered before choosing the approach by estimating the number of predictions that need to be made for the project and checking the current prices of OPENAI calls. While for research purposes the cost of API calls will rarely be higher than the cost of the annotation process, this might be of greater import to commercial projects. 
The study's findings need to be viewed in light of the fast pace at which Large Language Models (LLMs) are evolving, which could affect the relevance of our results over time. As new models are developed, the performance and capabilities of LLMs might change, potentially limiting the applicability of our current conclusions. However, by making our code publicly available, we allow for the replication and updating of this study by others, which helps in maintaining the relevance of the findings despite the rapid advancements in the field. The supervised model training code is available at <https://colab.research.google.com/drive/1ZIMIicDyEUVA-kHNXfH0oiUAPIVXCZyh?usp=drive_link> while the rest of the code, including LLM querying can be found at <https://github.com/hplisiecki/Predicting-Emotion-Intensity-in-Polish-Political-Texts>. We also welcome other researchers to use the pretrained model introduced in this paper. To let them do that we have published it under the following url <https://huggingface.co/hplisiecki/polemo_intensity>. Since the data used to train and validate the model come from social media profiles we choose to not publish it at this stage due to legal concerns, although we are working on making it available in the future. Future research could explore whether the findings of this study also hold for other resource-poor languages and, potentially, other continuous features. Also, one could add another approach to the comparison, involving machine translation into a language with existing labeled data, like English, to see if that is a viable option at least for some problems (Licht et al., 2024). § FUNDING This research is funded by a grant from the National Science Centre (NCN) 'Research Laboratory for Digital Social Sciences' (SONATA BIS-10, No. UMO-020/38/E/HS6/00302). unsrt 1 akhtar2020intense M. S. Akhtar, A. Ekbal, and E. Cambria. How Intense Are You? Predicting Intensities of Emotions and Sentiments using Stacked Ensemble [Application Notes]. IEEE Computational Intelligence Magazine, 15(1):64–75, 2020. <https://doi.org/10.1109/MCI.2019.2954667>. alm2005emotions C. O. Alm, D. Roth, and R. Sproat. Emotions from text: Machine learning for text-based emotion prediction. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing - HLT ’05, pages 579–586, 2005. <https://doi.org/10.3115/1220575.1220648>. amin2023affective M. M. Amin, E. Cambria, and B. W. Schuller. Will Affective Computing Emerge From Foundation Models and General Artificial Intelligence? A First Evaluation of ChatGPT. IEEE Intelligent Systems, 38(2):15–23, 2023. <https://doi.org/10.1109/MIS.2023.3254179>. barrett2019emotional L. F. Barrett, R. Adolphs, S. Marsella, A. M. Martinez, and S. D. Pollak. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(1):1–68, 2019. <https://doi.org/10.1177/1529100619832930>. bazzanella2004emotions C. Bazzanella. Emotions, language, and context. In E. Weigand (Ed.), Current Issues in Linguistic Theory (Vol. 248, p. 59). John Benjamins Publishing Company, 2004. <https://doi.org/10.1075/cilt.248.06baz>. berger2012viral J. Berger and K. L. Milkman. What Makes Online Content Viral? Journal of Marketing Research, 49(2):192–205, 2012. <https://doi.org/10.1509/jmr.10.0353>. bericat2016sociology E. Bericat. The sociology of emotions: Four decades of progress. Current Sociology, 64(3):491–513, 2016. 
<https://doi.org/10.1177/0011392115588355>. brady2017emotion W. J. Brady, J. A. Wills, J. T. Jost, J. A. Tucker, and J. J. Van Bavel. Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28):7313–7318, 2017. <https://doi.org/10.1073/pnas.1618923114>. brehm1999intensity J. W. Brehm. The Intensity of Emotion. Personality and Social Psychology Review, 3(1):2–22, 1999. <https://doi.org/10.1207/s15327957pspr0301_1>. chochlakis2024prior G. Chochlakis, A. Potamianos, K. Lerman, and S. Narayanan. The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition (Version 1). arXiv, 2024. <https://doi.org/10.48550/ARXIV.2403.17125>. cowen2017self A. S. Cowen and D. Keltner. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences, 114(38), 2017. <https://doi.org/10.1073/pnas.1702247114>. dadas2020polish S. Dadas. Sdadas/polish-roberta [Python]. <https://github.com/sdadas/polish-roberta>, 2020. (Original work published 2020). danilak2021langdetect M. M. Danilak. langdetect: Language detection library ported from Google’s language-detection. (1.0.9) [Python; OS Independent]. <https://github.com/Mimino666/langdetect>, 2021. derks2008role D. Derks, A. H. Fischer, and A. E. R. Bos. The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24(3):766–785, 2008. <https://doi.org/10.1016/j.chb.2007.04.004>. diener2020positive E. Diener, S. Thapa, and L. Tay. Positive Emotions at Work. Annual Review of Organizational Psychology and Organizational Behavior, 7(1):451–477, 2020. <https://doi.org/10.1146/annurev-orgpsych-012119-044908>. digital2023global Digital 2023: Global Overview Report—DataReportal – Global Digital Insights. (n.d.). Retrieved June 9, 2024. <https://datareportal.com/reports/digital-2023-global-overview-report>. druckman2008emotion J. N. Druckman and R. McDermott. Emotion and the Framing of Risky Choice. Political Behavior, 30(3):297–321, 2008. <https://doi.org/10.1007/s11109-008-9056-y>. ekman1992argument P. Ekman. An argument for basic emotions. Cognition and Emotion, 6(3–4):169–200, 1992a. <https://doi.org/10.1080/02699939208411068>. ekman1992basic P. Ekman. Are there basic emotions? Psychological Review, 99(3):550–553, 1992b. <https://doi.org/10.1037/0033-295X.99.3.550>. elfenbein2002universality H. A. Elfenbein and N. Ambady. On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2):203–235, 2002. <https://doi.org/10.1037/0033-2909.128.2.203>. ferrara2015measuring E. Ferrara and Z. Yang. Measuring Emotional Contagion in Social Media. PLOS ONE, 10(11):e0142390, 2015. <https://doi.org/10.1371/journal.pone.0142390>. frijda1992complexity N. H. Frijda, A. Ortony, J. Sonnemans, and G. L. Clore. The complexity of intensity: Issues concerning the structure of emotion intensity. In Emotion, pages 60–89. Sage Publications, Inc., 1992. fritz2009universal T. Fritz, S. Jentschke, N. Gosselin, D. Sammler, I. Peretz, R. Turner, A. D. Friederici, and S. Koelsch. Universal Recognition of Three Basic Emotions in Music. Current Biology, 19(7):573–576, 2009. <https://doi.org/10.1016/j.cub.2009.02.058>. gendron2009reconstructing M. Gendron and L. Feldman Barrett. Reconstructing the Past: A Century of Ideas About Emotion in Psychology. Emotion Review, 1(4):316–339, 2009. <https://doi.org/10.1177/1754073909338877>. goetz2010compassion J. 
L. Goetz, D. Keltner, and E. Simon-Thomas. Compassion: An evolutionary analysis and empirical review. Psychological Bulletin, 136(3):351–374, 2010. <https://doi.org/10.1037/a0018807>. haselmayer2017sentiment M. Haselmayer and M. Jenny. Sentiment analysis of political communication: Combining a dictionary approach with crowdcoding. Quality & Quantity, 51(6):2623–2646, 2017. <https://doi.org/10.1007/s11135-016-0412-4>. imbir2016affective K. K. Imbir. Affective Norms for 4900 Polish Words Reload (ANPW_R): Assessments for Valence, Arousal, Dominance, Origin, Significance, Concreteness, Imageability and, Age of Acquisition. Frontiers in Psychology, 7:1081, 2016. <https://doi.org/10.3389/fpsyg.2016.01081>. keltner2003approaching D. Keltner and J. Haidt. Approaching awe, a moral, spiritual, and aesthetic emotion. Cognition and Emotion, 17(2):297–314, 2003. <https://doi.org/10.1080/02699930302297>. kleef2018interpersonal G. A. van Kleef. The interpersonal dynamics of emotion toward an integrative theory of emotions as social information. Cambridge University Press, 2018. kocon2023chatgpt J. Kocoń, I. Cichecki, O. Kaszyca, M. Kochanek, D. Szydło, J. Baran, J. Bielaniewicz, M. Gruza, A. Janz, K. Kanclerz, A. Kocoń, B. Koptyra, W. Mieleszczenko-Kowszewicz, P. Miłkowski, M. Oleksy, M. Piasecki, Ł. Radliński, K. Wojtasik, S. Woźniak, and P. Kazienko. ChatGPT: Jack of all trades, master of none. Information Fusion, 99:101861, 2023. <https://doi.org/10.1016/j.inffus.2023.101861>. koo2016guideline T. K. Koo and M. Y. Li. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. Journal of Chiropractic Medicine, 15(2):155–163, 2016. <https://doi.org/10.1016/j.jcm.2016.02.012>. krugmann2024sentiment J. O. Krugmann and J. Hartmann. Sentiment Analysis in the Age of Generative AI. Customer Needs and Solutions, 11(1):3, 2024. <https://doi.org/10.1007/s40547-024-00143-4>. licht2024translation H. Licht, R. Sczepanski, M. Laurer, and A. Bekmuratovna. No More Cost in Translation: Validating Open-Source Machine Translation for Quantitative Text Analysis. ECONtribute Discussion Papers Series, Article 276, 2024. <https://ideas.repec.org//p/ajk/ajkdps/276.html>. loewenstein2000emotions G. Loewenstein. Emotions in Economic Theory and Economic Behavior. American Economic Review, 90(2):426–432, 2000. <https://doi.org/10.1257/aer.90.2.426>. lu2021cuteness Y. Lu, Y. Liu, L. Tao, and S. Ye. Cuteness or Coolness—How Does Different Anthropomorphic Brand Image Accelerate Consumers’ Willingness to Buy Green Products? Frontiers in Psychology, 12:599385, 2021. <https://doi.org/10.3389/fpsyg.2021.599385>. lutz1986anthropology C. Lutz and G. M. White. The Anthropology of Emotions. Annual Review of Anthropology, 15(1):405–436, 1986. <https://doi.org/10.1146/annurev.an.15.100186.002201>. mcbride2022thesmore S. K. McBride and J. Ball. #TheSmoreYouKnow and #emergencycute: A conceptual model on the use of humor by science agencies during crisis to create connection, empathy, and compassion. International Journal of Disaster Risk Reduction, 77:102995, 2022. <https://doi.org/10.1016/j.ijdrr.2022.102995>. mcgraw1996forming K. O. McGraw and S. P. Wong. Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1(1):30–46, 1996. <https://doi.org/10.1037/1082-989X.1.1.30>. meiselman2016emotion H. L. Meiselman, ed. Emotion measurement. Elsevier; Woodhead Publishing, 2016. mintz2022beyond A. Mintz, N. A. Valentino, and C. Wayne. 
Beyond rationality: Behavioral political science in the 21st century. Cambridge University Press, 2022. mohammad2017emotion S. Mohammad and F. Bravo-Marquez. Emotion Intensities in Tweets. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 65–77, 2017. <https://doi.org/10.18653/v1/S17-1007>. mohammad2016sentiment S. M. Mohammad. Sentiment Analysis. In Emotion measurement, pages 201–237, Elsevier, 2016. <https://doi.org/10.1016/B978-0-08-100508-8.00009-6>. mohammad2017wassa S. M. Mohammad and F. Bravo-Marquez. WASSA-2017 Shared Task on Emotion Intensity (arXiv:1708.03700). arXiv, 2017. <http://arxiv.org/abs/1708.03700>. nabi2003exploring R. L. Nabi. Exploring the Framing Effects of Emotion: Do Discrete Emotions Differentially Influence Information Accessibility, Information Seeking, and Policy Preference? Communication Research, 30(2):224–247, 2003. <https://doi.org/10.1177/0093650202250881>. niedenthal2012social P. M. Niedenthal and M. Brauer. Social Functionality of Human Emotion. Annual Review of Psychology, 63(1):259–285, 2012. <https://doi.org/10.1146/annurev.psych.121208.131605>. novielli2018gold N. Novielli, F. Calefato, and F. Lanubile. A gold standard for emotion annotation in stack overflow. In Proceedings of the 15th International Conference on Mining Software Repositories, pages 14–17, 2018. <https://doi.org/10.1145/3196398.3196453>. openai2024platform OpenAI Platform. (n.d.). Retrieved May 6, 2024. <https://platform.openai.com>. paletz2023social S. B. F. Paletz, E. M. Golonka, N. B. Pandža, G. Stanton, D. Ryan, N. Adams, C. A. Rytting, E. E. Murauskaite, C. Buntain, M. A. Johns, and P. Bradley. Social media emotions annotation guide (SMEmo): Development and initial validity. Behavior Research Methods, 2023. <https://doi.org/10.3758/s13428-023-02195-1>. pennebaker1996cognitive J. W. Pennebaker and M. E. Francis. Cognitive, Emotional, and Language Processes in Disclosure. Cognition and Emotion, 10(6):601–626, 1996. <https://doi.org/10.1080/026999396380079>. plutchik1965what R. Plutchik. What is an Emotion? The Journal of Psychology, 61(2):295–303, 1965. <https://doi.org/10.1080/00223980.1965.10543417>. priban2024comparative P. Přibáň, J. Šmíd, J. Steinberger, and A. Mištera. A comparative study of cross-lingual sentiment analysis. Expert Systems with Applications, 247:123247, 2024. <https://doi.org/10.1016/j.eswa.2024.123247>. qiu2020mutual J. Qiu, L. Xu, J. Wang, and W. Gu. Mutual influences between message volume and emotion intensity on emerging infectious diseases: An investigation with microblog data. Information & Management, 57(4):103217, 2020. <https://doi.org/10.1016/j.im.2019.103217>. rauh2018validating C. Rauh. Validating a sentiment dictionary for German political language—A workbench note. Journal of Information Technology & Politics, 15(4):319–343, 2018. <https://doi.org/10.1080/19331681.2018.1485608>. reisenzein1994pleasure R. Reisenzein. Pleasure-arousal theory and the intensity of emotions. Journal of Personality and Social Psychology, 67(3):525–539, 1994. <https://doi.org/10.1037/0022-3514.67.3.525>. russell1980circumplex J. A. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161–1178, 1980. <https://doi.org/10.1037/h0077714>. russell2014four J. A. Russell. Four Perspectives on the Psychology of Emotion: An Introduction. Emotion Review, 6(4):291–291, 2014. <https://doi.org/10.1177/1754073914534558>. saarimaki2016discrete H. Saarimäki, A. Gotsopoulos, I. P. 
Jääskeläinen, J. Lampinen, P. Vuilleumier, R. Hari, M. Sams, and L. Nummenmaa. Discrete Neural Signatures of Basic Emotions. Cerebral Cortex, 26(6):2563–2573, 2016. <https://doi.org/10.1093/cercor/bhv086>. sharifirad2019mood S. Sharifirad, B. Jafarpour, and S. Matwin. How is Your Mood When Writing Sexist tweets? Detecting the Emotion Type and Intensity of Emotion Using Natural Language Processing Techniques (arXiv:1902.03089). arXiv, 2019. <http://arxiv.org/abs/1902.03089>. shiota2007nature M. N. Shiota, D. Keltner, and A. Mossman. The nature of awe: Elicitors, appraisals, and effects on self-concept. Cognition and Emotion, 21(5):944–963, 2007. <https://doi.org/10.1080/02699930600923668>. shrout1979intraclass P. E. Shrout and J. L. Fleiss. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2):420–428, 1979. <https://doi.org/10.1037/0033-2909.86.2.420>. siedlecka2019experimental E. Siedlecka and T. F. Denson. Experimental Methods for Inducing Basic Emotions: A Qualitative Review. Emotion Review, 11(1):87–97, 2019. <https://doi.org/10.1177/1754073917749016>. socialcybersecurity2024 Social Cybersecurity: An Emerging National Security Requirement. (n.d.). Retrieved June 9, 2024. <https://apps.dtic.mil/sti/citations/AD1108494>. stieglitz2013emotions S. Stieglitz and L. Dang-Xuan. Emotions and Information Diffusion in Social Media—Sentiment of Microblogs and Sharing Behavior. Journal of Management Information Systems, 29(4):217–248, 2013. <https://doi.org/10.2753/MIS0742-1222290408>. strapparava2007semeval C. Strapparava and R. Mihalcea. SemEval-2007 Task 14: Affective Text. In E. Agirre, L. Màrquez, and R. Wicentowski (Eds.), Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70–74. Association for Computational Linguistics, 2007. <https://aclanthology.org/S07-1013>. szmyd2023trelbert W. Szmyd, A. Kotyla, M. Zobniów, P. Falkiewicz, J. Bartczuk, and A. Zygadło. TrelBERT: A pre-trained encoder for Polish Twitter. In J. Piskorski, M. Marcińczuk, P. Nakov, M. Ogrodniczuk, S. Pollak, P. Přibáň, P. Rybak, J. Steinberger, and R. Yangarber (Eds.), Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023), pages 17–24. Association for Computational Linguistics, 2023. <https://doi.org/10.18653/v1/2023.bsnlp-1.3>. tanaka1995context J. Tanaka-Matsumi, D. Attivissimo, S. Nelson, and T. D’Urso. Context effects on the judgment of basic emotions in the face. Motivation and Emotion, 19(2):139–155, 1995. <https://doi.org/10.1007/BF02250567>. turner2006sociological J. H. Turner and J. E. Stets. Sociological Theories of Human Emotions. Annual Review of Sociology, 32(1):25–52, 2006. <https://doi.org/10.1146/annurev.soc.32.061604.123130>. uveges2023hunembert I. Üveges and O. Ring. HunEmBERT: A Fine-Tuned BERT-Model for Classifying Sentiment and Emotion in Political Communication. IEEE Access, 11:60267–60278, 2023. <https://doi.org/10.1109/ACCESS.2023.3285536>. valentino2011election N. A. Valentino, T. Brader, E. W. Groenendyk, K. Gregorowicz, and V. L. Hutchings. Election Night’s Alright for Fighting: The Role of Emotions in Political Participation. The Journal of Politics, 73(1):156–170, 2011. <https://doi.org/10.1017/S0022381610000939>. vallat2018pingouin R. Vallat. Pingouin: Statistics in Python. Journal of Open Source Software, 3(31):1026, 2018. <https://doi.org/10.21105/joss.01026>. vasilopoulos2019fear P. Vasilopoulos, G. E. Marcus, N. A. Valentino, and M. Foucault. 
§ APPENDIX
§.§ LLM Prompts
The prompts used for the LLM annotation process were structured as follows:

Basic emotions (Happiness, Sadness, Anger, Disgust, Fear, Pride):

Translation: "To what extent does the text below manifest the emotion 'emotion'? Respond using a 5-point scale, where 1 means the emotion is not present at all and 5 means the emotion is very distinctly present. Please respond with a single number. Text: 'text' Your response:"

Original: “Na ile przedstawiony poniżej tekst manifestuje emocje "emotion". Odpowiedz używając 5 stopniowej skali, gdzie 1 - emocja wogóle nie występuje a 5 - emocja jest bardzo wyraźnie obecna. Odpowiadaj za pomocą pojedynczego numeru. Tekst: "text" Twoja odpowiedź:”

Valence

Translation: "What emotional valence do you read in the following text? Respond using a 5-point scale, where 1 indicates a negative emotion is present and 5 indicates a positive emotion is present. Please respond with a single number."

Original: “Jaki znak emocji wyczytujesz w poniższym tekście? Odpowiedz używając 5 stopniowej skali, gdzie 1 - obecna jest negatywna emocja a 5 - obecna jest pozytywna emocja. Odpowiadaj za pomocą pojedynczego numeru.”

Arousal

Translation: "What level of arousal do you read in the following text? Respond using a 5-point scale, where 1 means no arousal and 5 means extreme arousal. Please respond with a single number."

Original: “Jaki poziom pobudzenia wyczytujesz w poniższym tekście? Odpowiedz używając 5 stopniowej skali, gdzie 1 - brak pobudzenia a 5 - ekstremalne pobudzenie. Odpowiadaj za pomocą pojedynczego numeru.”

For the multiple-shot scenarios, the exemplars were added by appending them to the end of the queries above. They had the following structure:

Translation: "Text {text number}: """{text}""" Your response: """{score}""" ###"

Original: “Tekst {text number}: """{text}""" Twoja odpowiedź: """{score}""" ###”

The target text was finally appended in the same manner as the exemplars:

Translation: "Text {text number}: """{text}""" Your response: "

Original: “Tekst {text number}: """{text}""" Twoja odpowiedź: ”

(A short illustrative sketch of how these templates can be assembled programmatically is given at the end of this appendix.)

§.§ Annotation Process - instruction for annotators
Translation: You will evaluate the emotional content displayed in some short texts. Your task will be to mark on a five-point scale the degree to which you think that a given sentence is characterized by each of the following emotions: joy, sadness, anger, disgust, fear, and pride. Use a five-point scale as described below:
0 - the emotion does not occur at all
1 - low level of emotion
2 - moderate level of emotion
3 - high level of emotion
4 - very high level of emotion.
Then, we will ask you to estimate the intensity of two additional emotion parameters: the direction of sensations (negative versus positive) and emotional arousal (no arousal versus extreme arousal). On the next screen you will learn the definitions of both parameters and how you will evaluate them. Read the descriptions of two emotion parameters: the sign of sensations and emotional arousal. You can do this several times to make sure you understand them - it will make it easier for you to complete the task ahead of you. You will rate each of the emotion dimensions described above on a five-point scale. To make it easier to imagine the states we have in mind, you can use pictograms symbolizing different directions of experiences and the intensity of the emotional states. For the direction of sensations, use the following scale: The first pictogram shows a person who is visibly depressed - specific experiences may include: panic, irritation, disgust, despair, failure, or crisis. The last image shows a person who is visibly excited - specific experiences may include: fun, delight, happiness, relaxation, satisfaction, or rest. The remaining pictograms represent intermediate states. For emotional arousal, use the following scale: The first pictogram shows a person who is very calm, almost sleepy - specific experiences may include: relaxation, calm, inactivity, meditation, boredom, or laziness. The last image shows a person who is intensely aroused - appropriate emotional states may include: excitement, euphoria, arousal, rage, agitation, or anger. Save the link to this manual for later - you can return to it at any time during the examination. Very important: you can take a break while assessing your statements and return to them at any time - your current work will be saved and you will be able to resume it after the break. If you want to do this, in the upper right corner of the screen you will find the option: "Postpone for later" - click on it, enter the data necessary to save, and confirm the operation. In case you are ready to get back to work: when you enter the study page, an option "Load unfinished survey" will appear in the upper right corner of the screen - select it to load your work. §.§ Social Media profiles In this research we have scraped the posts of following: A) Journalists: Adrian Klarenbach, Agnieszka Gozdyra, Bartosz T. Wieliński, Bartosz Węglarczyk, Bianka Mikołajewska, Cezary Krysztopa, Daniel Liszkiewicz, Dawid Wildstein, Dominika Długosz, Dominika Wielowieyska, Ewa Siedlecka, Jacek Karnowski, Jacek Kurski, Jacek Nizinkiewicz, Janusz Schwertner, Jarosław Olechowski, Konrad Piasecki, Krzysztof Ziemiec, Łukasz Bok, Łukasz Warzecha, Magdalena Ogórek, Magdalena Rigamonti, Marcin Gutowski, Marcin Wolski, Michał Karnowski, Michał Kolanko, Michał Rachoń, Miłosz Kłeczek, Paweł Żuchowski, Piotr Kraśko, Piotr Semka, Radomir Wit, Rafał Ziemkiewicz, Renata Grochal, Robert Mazurek, Samuel Pereira, Szymon Jadczak, Tomasz Lis, Tomasz Sakiewicz, Tomasz Sekielski, Tomasz Sommer, Tomasz Terlikowski, Wojciech Bojanowski, Agaton Koziński, Piotr Witwicki, Jacek Tacik, Magdalena Lucyan, Agata Adamek, Kamil Dziubka, Jarosław Kurski, Dorota Kania, Ewa Bugala, Zuzanna Dąbrowska, Karol Gac, Marcin Tulicki, Marzena Nykiel, Jacek Prusinowski, Paweł Wroński B) Politicians: Donald Tusk, Andrzej Duda, Rafał Trzaskowski, Mateusz Morawiecki, Sławomir Mentzen, Janusz Korwin-Mikke, Grzegorz Braun, Szymon Hołownia, Radosław Sikorski, Krzysztof Bosak, Władysław Kosiniak-Kamysz, Borys Budka, Artur E. 
Dziambor, Marek Belka, Leszek Miller, Mariusz Błaszczak, Roman Giertych, Franek Sterczewski, Konrad Berkowicz, Marek Jakubiak, Michał Szczerba, Przemysław Czarnek, Zbigniew Ziobro, Krzysztof Brejza, Leszek Balcerowicz, Izabela Leszczyna, Klaudia Jachira, Janusz Piechociński, Patryk Jaki, Robert Biedroń, Krystyna Pawłowicz, Katarzyna Lubnauer, Anna Maria Sierakowska, Łukasz Kohut, Marcin Kierwiński, Anna Maria Żukowska, Marian Banaś, Dariusz Joński, Kamila Gasiuk-Pihowicz, Barbara Nowacka, Adrian Zandberg, Krzysztof Śmieszek, Paulina Matysiak, Paweł Kukiz, Michał Wójcik, Sebastian Kaleta, Małgorzata Wassermann, Joachim Brudziński, Maciej Konieczny, Marcelina Zawisza C) NGOs: Polska Akcja Humanitarna, Helsińska Fundacja Praw Człowieka, Polski Czerwony Krzyż, Fundacja Dialog, Fundacja Ocalenie, Fundacja Ogólnopolski Strajk Kobiet, Stowarzyszenie Amnesty International, Fundacja Centrum Praw Kobiet, Stowarzyszenie Sędziów Polskich IUSTITIA, Stowarzyszenie Marsz Niepodległości, Lekarze bez Granic, Fundacja TVN, Fundacja Dzieciom "Zdążyć z Pomocą", Wielka Orkiestra Świątecznej Pomocy, Szlachetna Paczka, Fundacja WWF Polska, Fundacja Greenpeace Polska, Liga Ochrony Przyrody, Związek Stowarzyszeń Polska Zielona Sieć, Młodzieżowy Strajk Klimatyczny, Stowarzyszenie Miłość Nie Wyklucza, Kampania Przeciw Homofobii, Stowarzyszenie Lambda - Warszawa, Fundacja Trans-Fuzja, Stowarzyszenie Grupa Stonewall.
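Returning to the prompt templates listed at the start of this appendix, the following short sketch (illustrative only; the function and variable names are ours and not part of the study's released code, and the zero-shot template is shown in a slightly simplified, always-numbered form) indicates how the zero-shot and multiple-shot queries can be assembled programmatically.

```python
# Illustrative sketch (not from the paper): assembling the zero-shot and
# multiple-shot annotation prompts described in the "LLM Prompts" subsection.
# The wording follows the English translations given there; names are hypothetical.

EMOTION_TEMPLATE = (
    "To what extent does the text below manifest the emotion '{emotion}'? "
    "Respond using a 5-point scale, where 1 means the emotion is not present "
    "at all and 5 means the emotion is very distinctly present. "
    "Please respond with a single number.\n"
)

def build_prompt(emotion: str, target_text: str, exemplars=None) -> str:
    """Build a zero-shot or multiple-shot annotation prompt for one emotion."""
    prompt = EMOTION_TEMPLATE.format(emotion=emotion)
    n = 1
    # Append scored exemplars in the structure quoted above (multiple-shot case).
    for text, score in (exemplars or []):
        prompt += f'Text {n}: """{text}""" Your response: """{score}""" ###\n'
        n += 1
    # Finally append the target text in the same manner, leaving the score open.
    prompt += f'Text {n}: """{target_text}""" Your response: '
    return prompt

if __name__ == "__main__":
    few_shot = [("Example tweet one.", 2), ("Example tweet two.", 4)]
    print(build_prompt("anger", "Target tweet to annotate.", few_shot))
```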
http://arxiv.org/abs/2407.12672v1
20240717155410
A concentration inequality for random combinatorial optimisation problems
[ "Joel Larsson Danielsson" ]
math.CO
[ "math.CO", "math.PR", "60C05" ]
§ ABSTRACT Given a finite set S, i.i.d. random weights {X_i}_i∈ S, and a family of subsets ⊆ 2^S, we consider the minimum weight of an F∈: M():= min_F∈∑_i∈ FX_i. In particular, we investigate under what conditions this random variable is sharply concentrated around its mean. We define the patchability of a family : essentially, how expensive is it to finish an almost-complete F (that is, F is close to in Hamming distance) if the edge weights are re-randomized? Combining the patchability of , applying the Talagrand inequality to a dual problem, and a sprinkling-type argument, we prove a concentration inequality for the random variable M(). § INTRODUCTION §.§ Combinatorial minimum weight problems The class of optimization problems that we are interested in does not necessarily involve graphs, but before giving the general definition we will first discuss them in a graph setting. Suppose we have a finite graph K (typically K_n or K_n,n), a family of subgraphs of K, and a collection of i.i.d. non-negative random edge weights {X_e}_e∈ E(K). We are interested in the random variable M():= min_G∈∑_e∈ E(G)X_e, i.e. the lowest weight of a G ∈. Two famous examples are the random assignment problem and the random minimum spanning tree problem. Let ℳ be the set of perfect matchings on the complete bipartite graph K_n,n, and 𝒯 the set of spanning trees on the complete graph K_n. When both graphs are equipped with i.i.d. U(0,1) edge weights, it has been shown that M(ℳ)→ζ(2)=π^2/6 in probability as n→∞ <cit.>, and similarly M(𝒯)→ζ(3) <cit.>. The spanning tree problem will be a recurring example throughout this paper, and we will prove a slight generalization of <cit.> as an application of our concentration inequality. A proof of convergence in probability of M() typically consists of two parts: First, the convergence of the expected value [M()], and then sharp concentration of M() around its expected value. To answer the first question one often needs a method tailored[In <cit.> a greedy algorithm was analysed to show convergence of the expected value for the minimum spanning tree problem, while in <cit.> a local graph limit approach was used for the random assignment problem.] to the specific family . In this paper, we are concerned only with the second question: When is M() sharply concentrated? That is, under what conditions is it true that the random variable M() is close to its expected value (or median) with high probability? Our aim in this paper is to provide a `user-friendly' concentration inequality for M(), with conditions that are easy to check. Although we mainly have graph applications in mind, we will work in a slightly more general setting.
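To make the spanning tree example concrete, the following minimal Python sketch (not part of the original text) estimates the expected minimum spanning tree weight on K_n with i.i.d. U(0,1) edge weights via Kruskal's algorithm; for moderately large n the average should already be close to ζ(3)≈1.202.

```python
# Minimal illustration (not from the paper): Monte Carlo estimate of the
# expected minimum spanning tree weight on K_n with i.i.d. U(0,1) edge
# weights, which converges to zeta(3) ~= 1.2021 as n grows.
import random

def mst_weight(n: int) -> float:
    """Kruskal's algorithm with a union-find structure on the complete graph K_n."""
    edges = [(random.random(), i, j) for i in range(n) for j in range(i + 1, n)]
    edges.sort()
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, used = 0.0, 0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # edge joins two components: keep it
            parent[ri] = rj
            total += w
            used += 1
            if used == n - 1:
                break
    return total

if __name__ == "__main__":
    n, trials = 200, 20
    est = sum(mst_weight(n) for _ in range(trials)) / trials
    print(f"average MST weight on K_{n}: {est:.3f}  (zeta(3) ~= 1.202)")
```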
Instead of a graph K and a family of subgraphs of K, we will work with a finite ground set S and a family ⊆ 2^S of subsets of S. We will assume |S|=N, and will frequently identify S with [N]={1,2,… ,N}. To the elements i∈ S, we associate i.i.d. random weights X_i, and for each set G⊆ S we let X_G denote the total weight of the elements in G. Then, analogously to M() in the graph setting, we define M():= min_F∈ X_F. Without loss of generality, we may assume that contains only minimal sets: that is, if G⊂ F∈, then G∉. We also define ():=max_F∈|F|. §.§ Concentration inequalities In probabilistic combinatorics, one often needs to show that the distribution of some random variable Z concentrates around some value c: that for any small >0, Z lies in the interval [(1-)c,(1+)c] with high probability.[If such concentration holds, the median has to be close to c, and it is usually easy to show that the expected value is also close to c.] In particular, it is a common situation that one has a product space Ω=∏_i∈ SΩ_i, random variables X_i on the Ω_i's, and a function g:Ω→. One then wants to show concentration of the random variable Z=g(X_1,X_2,… X_N). With some abuse of notation, we can also refer to Z(ω)=g(X_1(ω),X_2(ω),… X_N(ω)) by g(ω). For many such functions g of interest, it turns out that while g does depend on all N coordinates of its input, it only depends sensitively on a smaller number of coordinates. Many concentration inequalities quantify in different ways this intuitive sense of g not depending `too much' on any specific random variable X_i, or on any small set of these random variables. The method of bounded differences considers the Lipschitz constant of g: How much can g change, if only one of its inputs is changed? More precisely, we say that g is K-Lipschitz if |g(ω)-g(ω')|≤ K whenever ω,ω' differ in only one coordinate. The McDiarmid inequality (based on the Azuma-Hoeffding martingale concentration inequality) bounds the size of the fluctuations around the mean by O(K√(N)). This Lipschitz condition considers the worst-case change, which might be very different from the typical change: |g(ω)-g(ω')| could be significantly smaller than K for most pairs ω,ω'. (This tends to be the case for the random variable M(), for families that are not very small.) In a 2016 paper by Warnke <cit.>, several variations on the McDiarmid inequality can be found, involving various typical-case Lipschitz conditions. While these can greatly improve upon the inequalities based on worst-case Lipschitz constants for some functions g, computing the average-case Lipschitz constants for minimum-weight type problems is often not tractable. Another powerful tool is the Talagrand inequality <cit.>, in particular the `certifiability' corollary, as found in <cit.>. This inequality captures the intuition of a function not depending `too much' on any coordinates in a different way, by considering the `certifiability' of g: what is the smallest number of random variables X_i that one can show to an observer to verify that the event {g(X)≤ s} has occurred (for some s)? In the case of minimum weight problems, M() is ()-certifiable: If {M()≤ s}, then by definition there exists an F∈ with X_F=M()≤ s, and it suffices to look at the at most () weights of F to verify that X_F≤ s. A major benefit of the Talagrand inequality is that it does not depend on the dimension N. For M(), it improves on the McDiarmid bound of O(K√(N)), down to order O(K√(())).
As far as we are aware, Talagrand-type inequalities have only been established for worst-case Lipschitz constants. Let's consider a naive application of the bounded difference method and the Talagrand inequality to the minimum spanning tree problem. Here N=n2, (𝒯)=n-1 and K=1, so the bounded difference method gives that M(𝒯) has a standard deviation of O(n), which the Talagrand inequality lowers to O(√(n)). However, [M(𝒯)]=O(1) as n →∞ (since it converges to ζ(3)), so neither bound is useful. Both of these inequalities suffer from using the worst case Lipschitz constant K=1, while the typical change in M(𝒯) when changing one edge weight is of order[Follows from the proof of theorem <ref> with r=1.] 1/n. It is easy to apply our patchability inequality to the minimum spanning tree problem. This gives an upper bound of O(n^-1/4), implying sharp concentration (see section <ref>). The `patchability' criterion (definition <ref>) behaves more like the average case Lipschitz constant than the worst case. However, this only meant as a comparison between these three concentration inequalities. Much stronger results have been obtained previously, for instance a central limit theorem for M(𝒯) was established in <cit.>, with a standard deviation of order n^-1/2. [For instance, in <cit.> a log-Sobolev inequality is used] §.§ Asymptotic notation In addition to the commonly used asymptotic notation of O,o,ω,Ω, we will also use ,, and to denote the probabilistic versions: For sequences X_n,Y_n of random variables, we say that X_n=(Y_n) and Y_n=(X_n) if X_n/Y_n→ 0 in probability as n→∞. We say that X_n=(Y_n) and Y_n=(X_n) if there for any >0 exists a constant C>0 such that X_n≤ CY_n with probability at least 1- for all sufficiently large n. Furthermore, X_n=(Y_n) iff X_n=(Y_n) and X_n=(Y_n). Finally, we use f(n)≪ g(n) to denote f(n)=o(g(n)). And unless otherwise specified, the asymptotics will always be implicitly `as n→∞' (or `as N→∞'). § RESULTS §.§ Patchability condition Loosely speaking, our concentration inequality says that if any `almost-complete' member of (missing on the order of √(()) elements) can be completed at cost o( M()) (whp), then the optimal cost M() has to be sharply concentrated. Before stating the theorem, we need to make this notion more precise. As noted earlier, we can assume without loss of generality that contains only minimal sets. We will let denote the upwards closure of : ={G⊆ S: ∃ F∈: F⊆ G}. Since the weights are non-negative, M()=M(). For any G,P⊆ S, we say that P is a G-patch if G∪ P contains a member of , i.e. G∪ P ∈. Define the function :2^S↦ by (G):=d(G,) = min{|P|: P is a G-patch}, where d denotes Hamming distance: d(G,F)=|GΔ F| and d(G,)=min_F∈ F|GΔ F|. Let =() be the r-neighbourhood of in the Hamming metric, i.e. the set of all G⊆ S with (G)≤ r. For any set G⊆ S, let the random variable Patch(G) be the minimum weight of a G-patch: Patch(G):= min{X_P: P is a G-patch}. We say that a G⊆ S is (,)-patchable (with respect to the random weights X_i) if G can be patched at cost at most with probability at least 1-. That is, (Patch(G)≤)≥ 1-. The family is said to be (r,,)-patchable if every G∈ (that is, G⊆ S with (G)≤ r) is (,)-patchable. Patch(G) can also be seen as a Hamming distance to : if we define the randomly weighted Hamming distance by D(G,F):=X_GΔ F, then Patch(G)=D(G,). In terms of these Hamming distances, is (r,,)-patchable if any G in the r-neighbourhood of w.r.t. the metric d lies in the -neighbourhood of w.r.t. the metric D, with probability at least 1-. 
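As a concrete illustration of the quantities just defined, the following brute-force sketch (illustrative only, not from the source) takes the family of spanning trees of K_4, removes edges from a tree to obtain a set G, and enumerates all candidate patches P to find both the smallest one (giving the Hamming distance to the family) and the cheapest one (giving the minimum patch weight).

```python
# Illustrative brute-force computation (not from the paper) of the patch distance
# and the minimum patch weight for the family of spanning trees of K_4; its upward
# closure consists of all connected spanning subgraphs on the 4 vertices.
import itertools
import random

VERTICES = range(4)
GROUND_SET = [(i, j) for i in VERTICES for j in VERTICES if i < j]  # 6 edges of K_4

def connects_all(edges) -> bool:
    """Check whether the edge set spans a connected graph on the 4 vertices."""
    parent = {v: v for v in VERTICES}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in VERTICES}) == 1

def rho_and_patch(G, weights):
    """Return (patch distance, minimum patch weight) by brute force over patches P."""
    best_size, best_cost = float("inf"), float("inf")
    rest = [e for e in GROUND_SET if e not in G]   # WLOG a patch avoids elements of G
    for k in range(len(rest) + 1):
        for P in itertools.combinations(rest, k):
            if connects_all(list(G) + list(P)):
                best_size = min(best_size, len(P))
                best_cost = min(best_cost, sum(weights[e] for e in P))
    return best_size, best_cost

if __name__ == "__main__":
    weights = {e: random.random() for e in GROUND_SET}
    G = [(0, 1), (2, 3)]  # a spanning tree with one edge removed: two components
    print(rho_and_patch(G, weights))
```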
In other words, is (r,,)-patchable if any F∈ from which an arbitrary r elements has been removed (giving us a G with (G)=r), can be patched at cost at most , with probability at least 1-. Patching G will not necessarily restore the same F, as we only require that our `patched' set G∪ P contains some member of . It is important to note here that the set G is not random, and in particular it is not chosen in a way that depends on the weights X_i. When applying our inequality (theorem <ref>), the main effort will usually be to show that this patchability condition is met, for some r, and . Note that if X_i≤ 1 with probability 1, then any family is trivially (r,r,0)-patchable for any r – simply put back the r elements that were removed, at cost at most r. However, it is only for families that are (r,,)-patchable for some ≪ r that our inequality (theorem <ref>) improves upon a `standard' application of the Talagrand inequality. §.§ Probability distributions of the weights Most commonly, the edge weights X_i are exponential or uniform U(0,1). But the proof of our inequality can easily be generalised to a larger class of edge weight distributions, such as the positive powers of such random variables. The only assumption we will make on the distribution of X_i is the following. For q>0 we say that the distribution of a random variable X satisfies assumption A(q) if the following holds: For any s∈ (0,1), X can be coupled to two copies Yd=Y'd=X following the same distribution, such that Y and Y' are independent and surely X ≤min( Y/(1-s)^1/q,Y'/s^1/q). For instance, the (1/q):th power of exponential or uniform random variables satisfies A(q). The assumption is also closely related to the so-called pseudo-dimension of a random variable. We will discuss this in section <ref>. Note also that <ref> can be iterated, so that for any {s_i}_i=1^k with ∑_i=1^k s_i=1 we have that i.i.d. random variables Y^(i)d=X can be coupled to X in such a way that X ≤min_i∈ [k](Y^(i)/s_i^1/q). In particular, for s_i=k^-1, X ≤ k^1/qmin_i∈ [k](Y^(i)). §.§ A `patchability' concentration inequality We are now ready to state our concentration inequality. Assume the distribution of the X_i's satisfies A(q) in assumption (<ref>). Given >0, let be such that (M()>)≥. If is (r,,)-patchable for some r≥√(8log(^-1)·()) and >0, then with probability at least 1-2, M() ≤(^q/q+1+^q/q+1)^q+1/q. In particular, if m is the median of M(), <1/4, and ≤ m, then for some constant C=C(q), with probability at least 1-3, |M()-m|/m ≤ C (/m)^q/q+1. The theorem gives sharp concentration if = o(m) as N→∞. Note also that r is an increasing function of (), the largest size of a member of , and in turn is a non-decreasing function of r. One therefore want to make () as small as possible. A less general version of theorem <ref>, appeared in a previous paper <cit.> by the author and L. Federico. There we studied a specific family (the family of H-factors in K_n, for some small graph H), for which a property very similar to patchability holds trivially. We discuss this family in section <ref>. Some families have a large variation in the sizes of its members, and while smaller sets F are more likely to have low weight X_F, there might be a much larger number of bigger sets in . If the F achieving the optimal weight M() tends to be a small set, it can sometimes be helpful to sort the members of into two families: _small with small sets and _big with big sets. 
If one can (with high probability) upper bound M(_small)< t and lower bound M(_big)> t with the same t=t(n) (for instance by the methods discussed in sections <ref> and <ref>), then with high probability M()=M(_small) and it suffices to show concentration of M(_small). §.§ Proof strategy for theorem <ref> This proof will follow a similar strategy to that in <cit.>. The main novel ideas (here and in <cit.>) are (i) the patchability condition, and (ii) to apply the Talagrand inequality to a dual problem: Setting a `budget' >0, how close (in Hamming distance) to a member of can we get while staying within budget? More precisely, we define Z_ := min{ρ(G): G⊆ S and X_G≤ L }. Talagrand's concentration inequality is much better suited to this random variable, and we use it to show that Z_ is typically `small' (roughly of order √(())) for a suitable . That is, that there exists a G⊆ S with weight X_G≤, and which can be turned into a member of by adding at most a small number of elements from S. For the next step, given such a G, we would like to find a cheap `patch' P⊆ S such that G∪ P∈. However, here we run in to an obstacle: G is now a random set, and the weights of the elements not in G will not be independent from G, because G was chosen in a way that depends on the weights X_i. To get around this obstacle, we perform a trick originally due to Walkup <cit.>, which we call the red-green split. Split each element x∈ S into two, a green and a red copy. For some small s>0, give these independent random weights Y_i/(1-s)^1/q,Y_i'/s^1/q, where Y_i and Y_i' follow the same distribution as X_i, and couple them to X_i as in assumption A(q). With this coupling, the green weights Y_i are typically close to X_i, while the red weights Y'_i tend to be larger. Crucially, the red weights are independent from the green weights. We then study the dual problem on the green weights, and use Talagrand's inequality as described above to show that there probably exists a green G⊆ S with ρ(G)≤ r and Y_G≤ L, with r of order roughly √(()). Next we use that is (r,,)-patchable to find a red G-patch P with Y'_P≤. Since G∪ P∈, we have that M()≤ X_G+X_P≤Y_G/(1-s)^1/q+Y_P'/s^1/q. Using that Y_G≤ L and Y'_P≤ (with high probability), and optimizing over s gives us the upper bound (<ref>) in theorem <ref>. § APPLICATIONS §.§ Minimum spanning tree Let 𝒯 be the family of spanning trees on K_n. For some q>0, equip K_n with i.i.d. edge weights following the distribution of the (1/q):th power of a uniform U(0,1) random variable. Then there exists a constant c_q such that for :=c_q n^1-1/q, |M(𝒯)-|/=(n^-q/2(q+1))=(1). In particular, for q=1, it is known that [M(𝒯)]→ζ(3)≈ 1.202, and [M(𝒯)]=Θ(1/n), so that the fluctuations of M(𝒯) around its expected value is of order n^-1/2. Our theorem gives a weaker upper bound, of order n^-1/4. For q≥ 1 the theorem follows from <cit.>, but it seems to be novel for q<1. In <cit.> it is shown that [M(𝒯)]→ζ(3) when q=1. Assuming that the edge weights are such that X_i^q∼ U(0,1), it is easy to adapt this argument to show that [M(𝒯)]/n^1-1/q converges to a constant as n→∞. We will prove that (G)=(rn^-1/q) for any G with (G)=r. The theorem then follows by plugging this into theorem <ref>, by noting that (i) since a spanning tree has n-1 edges, r=Θ(√(n)) (for fixed), and (ii) sharp concentration of M(𝒯) around its median implies that the expected value is close to the median. Pick any G with (G)=r=Θ(√(n)). 
A graph with (G)=r has r+1 connected components, as r edges must be added to it in order to connect it. Let C_1,…, C_r+1 be these connected components, sorted in increasing order by their number of vertices, and with ties broken arbitrarily. For each edge in E(K_n)-E(G) that goes between two components, orient it according to the order of the components above: from C_i to C_j when i<j. We will find a G-patch P by, for each 1≤ i≤ r, picking the cheapest outgoing edge from C_i. Note that such a P is indeed a G-patch, because ⟨𝒯⟩ is the family of connected graphs, and there is a path in G∪ P from any C_i (i≤ r) to C_r+1. For 1≤ i≤ r, C_i has at least s:=min(n/2,n^2/4r^2)=Θ(n) outgoing edges. Consider a component C_i with k vertices. If k≤ n/2r, then C_1,… ,C_i all have at most k vertices each, for a total of at most rk≤ n/2 vertices. Hence there are at least n/2 total vertices in C_i+1∪…∪ C_r+1, and every vertex in C_i has at least this many outgoing edges. If instead k>n/2r, C_i+1 also has at least k vertices, so there are at least k^2>n^2/4r^2 edges from C_i to C_i+1. Let W_i be the minimum weight of an outgoing edge from C_i. The W_i's are independent. The minimum of m i.i.d. edge weights X_i such that X_i^q∼ U(0,1) has expected value and standard deviation of order Θ(m^-1/q). Hence the W_i's have expected value and standard deviation uniformly bounded by some O(n^-1/q), and there exists a constant c>0 such that with high probability (G)≤∑_i=1^r W_i ≤ crn^-1/q. Hence 𝒯 is (,r,)-patchable with =crn^-1/q and r=Θ(√(n)). By theorem <ref> we have that |M(𝒯)-m|/m≤((/m)^q/q+1). Here m=Θ(n^1-1/q) and =O(n^1/2-1/q), so that /m = O(r/n)=O(n^-1/2). Hence |M(𝒯)-m|/m ≤(n^-q/2(q+1))=(1). §.§ Minimum H-factor Given a fixed graph H, an H-factor (or tiling) on K_n is a collection of vertex-disjoint copies of H, which together cover all vertices of K_n. For H=K_2, this is the random assignment (also known as minimum perfect matching) problem which we discuss in the next subsection. In a paper by the author and L. Federico <cit.>, an earlier version of theorem <ref> was used to show sharp concentration of the minimum weight of an H-factor for graphs H containing at least one cycle. In the minimum H-factor problem, patches have a particularly nice structure: They are essentially H-factors on smaller vertex sets. If F is an H-factor and we remove r edges, we may as well remove the (at most) r copies of H these edges belonged to. This leaves a partial H-factor G, and any H-factor on the at most r· v(H) uncovered vertices forms a G-patch. For graphs H containing at least one cycle (and random weights satisfying A(1), such as U(0,1)), we showed that the minimum weight M of an H-factor is of order (n^β) w.h.p., for some β=β(H)∈ (0,1) . This immediately implies that H-factors are (r,λ,)-patchable where λ =O(r^β). When applying theorem <ref>, r is of order √(n), so that λ is of order n^β/2. Since the median m of M is of order m=Θ(n^β), the theorem gives us that |M-m| ≤(√(λ m))= (m^3/4). In other words, M is sharply concentrated. §.§ Random assignment In the random assignment problem, is the set of perfect matchings on the complete bipartite graph K_n,n. This problem has been studied when the edge weights satisfy condition A(q) in (<ref>), for q=1 <cit.>, q>1 <cit.> and by the present author for q<1 <cit.>. In all cases it has been shown that M()/n^1-1/q converge in probability to a constant depending only on q. However, a straight-forward application of theorem <ref> only gives sharp concentration in the case q>1. 
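For the random assignment problem just mentioned, a quick numerical illustration (not from the source) can be obtained with an exact assignment solver; with U(0,1) weights (the case q=1) the average optimal cost should approach ζ(2)=π^2/6≈1.645.

```python
# Quick numerical illustration (not from the paper) of the random assignment
# problem: the minimum-weight perfect matching on K_{n,n} with i.i.d. U(0,1)
# weights has expected value approaching pi^2/6 ~= 1.645.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_cost(n: int, rng) -> float:
    cost = rng.random((n, n))                 # i.i.d. U(0,1) edge weights
    rows, cols = linear_sum_assignment(cost)  # exact Hungarian-type solver
    return cost[rows, cols].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, trials = 300, 10
    est = np.mean([assignment_cost(n, rng) for _ in range(trials)])
    print(f"average optimal assignment cost for n={n}: {est:.3f}  (pi^2/6 ~= 1.645)")
```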
Although a perfect matching is also an H-factor (with H=K_2), the minimum weight scales like n^1-1/q, and a similar argument to that in section <ref> only leads to sharp concentration if the exponent is positive. §.§ Minimum spanning d-sphere A combinatorial d-sphere is a (d+1)-regular hypergraph, which – when viewed as the set of maximal faces of an abstract simplicial complex – is homeomorphic to a d-sphere. In an upcoming paper joint with A. Georgakopoulos and J. Haslegrave, we study the minimum weight of a spanning d-sphere in a randomly-weighted complete (d+1)-uniform hypergraph K_n^(d+1) <cit.>, again with weights satisfying A(1). We show concentration of this minimum weight for d=2 and 3, and the proof for d=2 uses theorem <ref>. § PROOFS To prove theorem <ref> we will need the following lemma, which is where the `red-green split trick' is used. Recall that for any r≥ 0, is the set of G⊂ S within Hamming distance r of , i.e. such that there exists a P⊆ S with |P|≤ r and G∪ P∈. Assume a,b>0 and pick c such that c^q/q+1=a^q/q+1+b^q/q+1. Let G^* be the (random) set in with minimal cost, i.e. W_G^*=M(). Then (M()>c) ≤ (M()>a)+(Patch(G^*)>b) ≤ (M()>a)+max_G∈(Patch(G)>b). We will also need the following claim. Let f(s):=a/(1-s)^p+b/s^p, with a≥ b>0 and p≥ 0. Then f has a unique minimum s_0 on (0,1), with f(s_0)= (a^1/p+1+b^1/p+1)^p+1≤ a·(1+C· (b/a)^1/p+1), for some constant C=C(p). In particular if a≫ b, f(t)=a · (1+o(1)). We postpone the proofs of lemma <ref> and claim <ref> until the end of this section. Now, let's instead proceed with the proof of our main theorem. We want to upper bound the probability that M() is large. To do this, we will use lemma <ref> with a=,b=. (M()>c) ≤ (M()>)_Upper bound by Talagrand's inequality+max_G∈(Patch(G)>)_Upper bound by using patchability For the second term of the right-hand side of (<ref>), we use the patchability condition: By assumption, is (r,,)-patchable, or in other words Patch(G)> with probability at most for all G∈. Recall that Z_:=min{(G): X_G≤}, and note that M()> if and only if Z_>r. We now want to apply the Talagrand inequality to the first term on the right-hand side of (<ref>). The way this inequality is stated in <cit.>, we would need to apply it to the random variable -Z_. However, for the sake of clarity and to avoid a clutter of minus signs, we reformulate the inequality and the definition of certifiability so that they apply directly to Z_L. The random variable Z_L has the following two properties: Z_ is 1-Lipschitz: Suppose ω,ω'∈Ω are such that X_i(ω)=X_i(ω'), for all i except some i_0. Consider G⊆ S such that X_G(ω)≤ and ρ(G) attains the minimum Z_(ω). Then G':=G-{i_0} satisfies X_G'(ω')=X_G'(ω)≤ L, and ρ(G') is at most ρ(G)+1, so that Z_(ω')≤ Z_(ω)+1. By interchanging ω and ω', |Z_(ω')- Z_(ω)|≤ 1. Z_ is ()-certifiable: If ω is such that Z_(ω)≤ s, there exists a G with X_G(ω) ≤ and (G)≤ s. Assuming WLOG that G is a minimal such set, it has at most () elements. These are a certificate that Z_≤ s: any ω' which agrees with ω on the set G has X_G(ω')=X_G(ω)≤, and hence Z_L(ω')≤ s too. The Talagrand inequality then states that for any t>0 and b, (Z_≤ b) ·(Z_≥ b+t√(())) ≤ e^-t^2/4. Let t:=√(8log (^-1)), so that e^-t^2/4=^2 and r≥ t√(()), and let b:=0. The first probability on the left-hand side of (<ref>) is (Z_=0)=(M()≤), which is at least by assumption. Hence the second probability is (Z_≥ r)≤. The first term on the right-hand side of (<ref>) is then (M()>)=(Z_>r)≤, and hence (M()>c)≤ 2. 
The `in particular'-statement follows from the second part of claim <ref> with p=q^-1, a=L, and b=λ: For some constant C, c=(^q/q+1+^q/q+1)^q+1/q≤· (1+C· (/)^q/q+1). Then M()> c with probability at most 2, and M()< L with probability at most by assumption. Hence M() lies within an interval of length LC· (λ/L)^q/q+1 with probability 1-3, and in particular the median also lies in this interval. For convenience, set p=q^-1. For a small s>0 to be chosen later, use assumption A(1) in assumption <ref> to couple the weights X_i to a pair of independent random variables Y_i,Y_i'd=X_i such that X_i≤Y_i/(1-s)^p+Y_i'/s^p. We think of the Y_i as `green' weights and the Y'_i as `red' weights. Using the coupling of Y_i,Y_i' in (<ref>) gives that for any F=G∪ P∈, surely M()≤ X_F≤ X_G+X_P≤Y_G/(1-s)^p+Y'_P/s^p. Our strategy is now to find a cheap green G with ρ(G)≤ r, and then find a cheap red G-patch P. Let W be the cost of the cheapest such G w.r.t. the weights Y_i, or in other words W :=min{Y_G:G∈}, and let G^* be the random set G which achieves this minimum. Similarly, for any G∈, let the random variable W'(G) be the minimum of Y'_P over all G-patches P. Then M()≤W/(1-s)^p+W'(G^*)/s^p. Now, (<ref>) is at most a/(1-s)^p+b/s^p, unless W>a or W'(G^*)>b. Note that since Y_i,Y'_i follow the same distrubition as X_i, Wd=M() and W'(G)d=Patch(G). Hence the right-hand side of (<ref>) is α + β, with α:= (M()>a)=(W>a), β:= max_G∈(Patch(G)>b)=max_G∈(W'(G)>b). For the second termof the right-hand side of (<ref>), by the choice of β, (W'(G)>b)≤β for any G. By averaging, it also holds for G picked according to a any probability distribution on which is independent from the red weights. In particular, it holds if G^* is the G that achieves the minimum W=min_G∈Y_G, since G^* only depends on the green weights. Hence (W'(G^*)>b)≤β. So by a union bound on the right-hand side of (<ref>), (M()> a/(1-s)^p+b/s^p)≤α + β. But since s∈ (0,1) was arbitrary, we can minimize a/(1-s)^p+b/s^p over s. Noting that (a^1/p+1+b^1/p+1)^p+1=(a^q/q+1+b^q/q+1)^q+1/q=c, the lemma follows. Since f is smooth and strictly convex on (0,1), there exists a unique local minimum s_0, which is also the global minimum. Let u:=a^1/p+1,v:=b^1/p+1. Then f'(s) = -p(u^p+1/(1-s)^p+1-v^p+1/s^p+1), which is zero iff u/1-s=v/s. Solving for s gives s_0:=v/u+v, so that f(s_0)=(u+v)^p+1. The function φ(x)=(1+x)^p+1 is convex, and hence it lies below the secant line with intersections at x=0 and x=1. This secant line has slope C:=φ(1)-φ(0)=2^p+1-1. In other words, (1+x)^p+1≤ 1+Cx for any x∈ [0,1]. With x:=v/u≤ 1, we get (u+v)^p+1≤ u^p+1(1+Cv/u). § OTHER WEIGHT DISTRIBUTIONS Assumption <ref> is closely related to the so-called pseudo-dimension of a distribution. If the cdf F of X is such that, for some d>0, F(x)/x^d converges to some c∈ (0,∞) as x→ 0, then X is said to be of pseudo-dimension d. The motivation behind the name is that if d is a positive integer and two points are chosen uniformly at random from the d-dimensional unit box, the distribution of the Euclidean distance between these points is of pseudo-dimension d. Since pseudo-dimension is only a condition on the behavior of F(x) near 0, we cannot guarantee that assumption A(q) will hold. It is, however, often the case that the distribution of M() is (asymptotically) the same for any weights of pseudo-dimension q, up to a global rescaling. If one has weights X_i of pseudo-dimension q but not satisfying A(q), it is usually easiest to first show that one can approximate these with e.g. 
the (1/q):th power of U(0,1)-distributed random variables, and then apply theorem <ref>. We will now briefly outline one potential strategy to do this, using a variant of the patchability condition. Start with the F achieving optimality (X_F=M()). Remove expensive elements (weight ≥δ for some δ>0) from F, resulting in some G with (G) fairly small. Rerandomize the edge weights, and search for a G-patch which is both cheap, and uses no edge of cost above δ. If (whp) such a patch can be found, then one can show that there is an F'∈ using only cheap elements, and with X_F' very close to M(). Since F' only uses cheap elements, we can couple the weights of pseudo-dimension q to, for instance, the (1/q):th power of U(0,1)-weights, and thereby show that M() with pseudo-dimension q weights can be well approximated by M() with weights satisfying A(q). For an example of a proof following this strategy, see theorem 5.1 in <cit.>. § BOUNDS ON M() As noted in section <ref>, M() is sharply concentrated if is (r,,)-patchable with small, r at least of order √(()), and =o(m) as N→∞ (where m is the median of M()). To verify that =o(m), one typically needs a lower bound on m. In section <ref> we provide a generic first moment method bound, which in practice often turns out to be within a constant factor of the true m. §.§ Upper bound While not strictly necessary to prove sharp concentration, one is usually also interested in finding a matching upper bound on m (or M()). These tend to require an approach tailored to the specific family one is studying. Here are some examples of approaches that have been successful in the past. * For all F∈, M()≤ X_F. Any algorithm for finding an F with low weight will give an upper bound on M(), and even fairly naive algorithms (e.g. greedy algorithms) can often be within a constant factor of optimal. Or, in the case of minimum spanning tree, actually optimal. * For any ⊆, M()≤ M('). Sometimes one can find such a ' which is significantly easier to analyse, but which still has M(') fairly close to M(). For instance, see remark <ref>. * In a recent breakthrough paper, Frankston, Kahn, Narayanan and Park <cit.> gives an upper bound on M() (and the corresponding threshold problem) in terms of the so-called spread of . A family is said to be κ-spread if no r-set G⊂ S occur as a subset in a more than a fraction κ^-r of the members of .[This is similar to the intuition that a function should not depend `too much' on any small set of coordinates. It would be interesting to see if there are any connections between the spread and the patchability of a family .] * If one has shown that is (r,,)-patchable for some not too large r and , then it suffices to upper bound M() (the minimal cost of a G within Hamming distance r of ). In the case of H-factors which we discuss in section <ref>, M() was essentially already known for r=δ n for any fixed δ >0. §.§ Lower bound on M() In this section we provide a general lemma that gives a lower bound on M() given some bound on the size of and the sizes of members of . For brevity, we do this only for weights X_i with distribution given by the 1/q:th power of a U(0,1)-random variable. For other distributions satisfying A(q)? Let be a family of subsets of S such that each set F∈ has ℓ_0≤ |F|≤ℓ_1 elements for some ℓ_0,ℓ_1 >0. Assume |{ F∈:|F|=m}| ≤ c^m m^β m for some constants c,β > 0 and all m. Then for any t>0 there exists a constant c'>0 such that M() (with respect to i.i.d. 
weigths X_i satisfying X_i^q∼ U(0,1)) is at least c'·min(ℓ_0^1-β/q,ℓ_1^1-β/q) with probability at least 1-exp(- ℓ_0 t). For a set with m elements, the probability that their total cost is below some given value ≪ N:=|S| decays superexponentially fast as a function of m (see claim <ref>). On the other hand, the number of sets in with m elements, might increase superexponentially fast. If β< q, then the former decay rate beats the latter growth rate, so that sets with few elements dominate the expected number of `cheap' sets. In this case, ℓ_0^1-β/q≪ℓ_1^1-β/q, so that M()=(ℓ_0^1-β/q). If instead β≥ q, the large sets dominate, and M()=(ℓ_1^1-β/q). We will apply the first moment method to the number of `cheap' sets in . For some to be determined later, let R_m be the (random) number of F∈ with precisely m elements and with X_F≤. (For m< ℓ_0 or m> ℓ_1, R_m=0.) Then R:= ∑_m R_m is the total number of sufficiently cheap sets, and by Markov's inequality (M()≤) ≤ R. Now, [R_m] ≤c^m m^β m·(X_F≤), where F is any set with m elements. If U_i are i.i.d. uniform [0,1] random variables and q>0 is a constant, then (∑_i=1^m U_i^1/q≤)≤Γ(1+q)^m/Γ(1+qm)^m=2^O(m)(/m)^qm. Let A:={x∈_+^m: ∑_i=1^m |x_i|^1/q≤} be the positive orthant of the (1/q)-norm ball with radius , and let B:=[0,1]^m be the unit box. The probability of the event {∑_i=1^m U_i^1/q≤} equals the m-dimensional volume μ(A∩ B)≤μ(A). The volume of A is known to be μ(A)=Γ(1+q)^m/Γ(1+qm)^qm (see <cit.> for a concise derivation). The asymptotic expression comes from noting that the numerator is 2^O(m), and using Stirling's approximation on the denominator. From the claim we have that (X_F≤) ≤ 2^O(m)(/m)^qm (for F with |F|=m) and hence [R_m] ≤ (c_0 /m^1-β/q)^qm for some c_0. If we pick :=c_1 c_0^-1·min(ℓ_0^1-β/q,ℓ_1^1-β/q) for some c_1>0, we get that c_0/ m^1-β/q≤ c_1 for all m, and hence [R_m] ≤ c_1^qm. Thus [X]=∑_m=ℓ_0^ℓ_1[R_m] < c_1^qℓ_0/(1- c_1^q), which is less than exp(-ℓ_0) if c_1 is sufficiently small. § UPPER TAIL BOUND To apply Theorem <ref> it is sometimes helpful to first have rougher bounds on the tails of M(). Here we provide a simple bound on the upper tail. If μ is the median of M(), then for any t≥ 0, (M()>tμ )≤ 2^1-t^q. Furthermore, M() ≤ C_qμ for some constant C_q only depending on q, and thus <ref> holds with μ replaced with M() and the 2 in the right-hand side replaced with some r>1. Let k:=⌊ t^q⌋, and assume WLOG that k≥ 2: If t^q≤ 1, then 2^1-t^q≥ 1 and there is nothing to prove, and if 1<t^q<2, then tμ>μ and hence (M()>tμ )≤ 1/2<2^1-t^q. For each i∈ S, and j∈ [k], let X_i^(j) be i.i.d random variables following the same distribution as X_i. As noted in remark <ref>, we can couple these random variables X_i^(j) to X_i such that surely X_i≤ k^1/q·min_j∈[k](X_i^(j)). Let M^(j) be defined as M() but with edge weights X_i^(j). Now, if M^(j)= x for some j,x, by definition there exists an F∈ with X_F^(j)= x, and this F then has weight X_F=∑_i∈ FX_i≤∑_i∈ Fk· X_i^(j)= kx. This implies that surely M()≤ X_F≤ k^1/q·min_j∈[k](M^(j)), and k^1/q≤ t by the choice of k. Noting that (min_j∈[k](M^(j))> μ) ≤ 2^-k≤ 2^1-t^q, the first part of the claim follows. For the `furthermore' part, we will use M()=∫_0^∞(M()> x)dx. From the first part the integrand is at most 2^1-(x/μ)^q. After a change of variables z=ln(2)(x/μ)^q, its integral becomes ∫_0^∞ 2^1-(x/μ)^qdx= 2ln(2)^-1/qΓ(1+1/q)·μ. plain
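As a sanity check of the volume estimate used in the lower-bound claim above, the following Monte Carlo sketch (illustrative, not from the source) compares the empirical probability P(∑_{i=1}^m U_i^{1/q} ≤ ℓ) with the standard orthant-volume formula Γ(1+q)^m ℓ^{qm}/Γ(1+qm); for ℓ ≤ 1 the region lies inside the unit box, so the two quantities should agree up to sampling error, while for larger ℓ the formula is only an upper bound.

```python
# Monte Carlo sanity check (not from the paper) of the volume bound:
#   P( sum_i U_i^(1/q) <= l ) <= Gamma(1+q)^m * l^(q*m) / Gamma(1+q*m),
# with equality whenever l <= 1 (the region then sits inside the unit box).
import math
import random

def empirical(m, q, l, trials=200_000):
    hits = sum(
        sum(random.random() ** (1.0 / q) for _ in range(m)) <= l
        for _ in range(trials)
    )
    return hits / trials

def volume_bound(m, q, l):
    return math.gamma(1 + q) ** m * l ** (q * m) / math.gamma(1 + q * m)

if __name__ == "__main__":
    for m, q, l in [(2, 1.0, 1.0), (3, 2.0, 0.8), (4, 0.5, 0.9)]:
        print(m, q, l,
              f"empirical={empirical(m, q, l):.4f}",
              f"bound={volume_bound(m, q, l):.4f}")
```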
http://arxiv.org/abs/2407.12950v1
20240717183241
Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
[ "Qi Huang", "Emanuele Mezzi", "Osman Mutlu", "Miltiadis Kofinas", "Vidya Prasad", "Shadnan Azwad Khan", "Elena Ranguelova", "Niki van Stein" ]
cs.AI
[ "cs.AI", "cs.CV", "cs.LG" ]
Leiden University, Leiden, The Netherlands {q.huang,n.van.stein}@liacs.leidenuniv.nl Vrije Universiteit Amsterdam, Amsterdam, The Netherlands Wageningen Food Safety Research, Wageningen, The Netherlands University of Amsterdam, Amsterdam, The Netherlands Eindhoven University of Technology, Eindhoven, The Netherlands Sorbonne University, Paris, France Netherlands eScience center, Amsterdam, The Netherlands Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI Qi Huang10009-0007-4989-135X Emanuele Mezzi20009-0001-9007-8260 Osman Mutlu30000-0001-6144-5685 Miltiadis Kofinas40000-0002-3392-4037 Vidya Prasad50000-0002-9296-3693 Shadnan Azwad Khan60000-0003-2769-6856 Elena Ranguelova70000-0002-9834-1756 Niki van Stein10000-0002-0013-7969 July 22, 2024 § ABSTRACT We introduce a novel metric for measuring semantic continuity in Explainable AI methods and machine learning models. We posit that for models to be truly interpretable and trustworthy, similar inputs should yield similar explanations, reflecting a consistent semantic understanding. By leveraging XAI techniques, we assess semantic continuity in the task of image recognition. We conduct experiments to observe how incremental changes in input affect the explanations provided by different XAI methods. Through this approach, we aim to evaluate the models' capability to generalize and abstract semantic concepts accurately and to evaluate different XAI methods in correctly capturing the model behaviour. This paper contributes to the broader discourse on AI interpretability by proposing a quantitative measure for semantic continuity for XAI methods, offering insights into the models' and explainers' internal reasoning processes, and promoting more reliable and transparent AI systems. § INTRODUCTION Human intelligence can project objects and events to higher-order semantics. Starting from concrete objects it then generates abstractions that are invariant to the change of the environments in which those objects were initially identified. Over the years, given the growing power of representation shown by deep learning (DL) models, specifically deep neural networks (DNNs), researchers have started to wonder whether this capacity to abstract is exclusive to humans or also embedded in the neural architecture. The field of Explainable AI (XAI) aims to find the answer to this question, highlighting the features contributing to a specific prediction realized by a neural network (or other machine learning model). Over time, one of the concepts that characterize XAI, called continuity, has emerged and was highlighted in <cit.>. It can be framed as the capacity of the XAI method, i.e., the explainer, to behave consistently with the model behaviour. More concretely, the explanations of similar model inputs that result in similar model outputs (confidence) should also correspondingly give similar explanations. In this work, we investigate semantic continuity, which we outline as “similar semantics should have similar explanations”. We first define the scope of this work and a general background in XAI in Section <ref>.
We then list related literature and the motivation of our proposed solution in Section <ref>. We then define the term semantic continuity in Section <ref> and propose a new metric to measure the semantic continuity of a given model and explainer. In Section <ref>, we investigate the semantic continuity in the image domain, and we discuss the results in <ref>. Lastly, in Section <ref> we summarize our findings and provide our final remarks. § EXPLAINABLE AI The field of XAI, motivated by the imperative to understand the inner workings of AI models, has undergone many advancements in recent years. This has resulted in various explanation methods being developed that may be broadly categorized into attribution-based, model-based, and example-based explanations <cit.>. Particularly, attribution-based methods provide insights into the importance of the features in a model by assigning importance values or ranks based on the relevance of these features to final predictions. There are global explanation methods such as Global Sensitivity Analysis <cit.>, which attribute importance to features for a given model on a global level, and there exist various methods for single predictions (local methods), such as perturbation-based, gradient-based, surrogate-based, and propagation-based methods <cit.>. This discussion mainly focuses on perturbation-based and gradient-based methods for single predictions. § RELATED WORK Over the past years, several evaluation frameworks and metrics have emerged to assess the performance and compare different XAI methods with each other. One of these measures is continuity. Continuity is a critical aspect that ensures the stability and generalizability of XAI solutions and is often associated with robustness. In <cit.>, an AutoXAI framework is proposed that automates the selection of XAI solutions where continuity is a property that plays a significant role. Understanding this property becomes even more crucial in scenarios such as Part-Prototype Image classifiers, where continuity directly influences user trust and the model's ability to generalize <cit.>. To evaluate the continuity property in XAI solutions, several tools and methodologies have been developed, with some focusing on image-based tasks <cit.>. Toolkits designed for XAI evaluation in continuity tests on images include: Quantus <cit.>, Safari <cit.>, XAI-Bench <cit.>, and BAM <cit.>. Additionally, more generic XAI toolkits like Captum <cit.> and OmniXAI <cit.> are available. However, the continuity metrics utilized in these tools require further verification. Examining perturbed inputs has revealed complexities surrounding continuity, particularly when the perturbations lead to misinterpretations and inconsistencies in explanation outcomes. Two types of misinterpretations come to light: one where a perturbed input with different highlighted features receives the same prediction label, and another, where a perturbed input with similar highlighted features is assigned a different prediction label <cit.>. Additionally, the assumption that explanations and model outcomes are directly comparable has been brought into question. Such as in <cit.>, where experimental demonstrations with image classification tasks, employing various perturbation techniques (Gaussian noise variation, spatial rotation, spatial translation, and latent sampling) are tested, utilizing distance measures such as Maximum Mean Discrepancy and Lipschitz Continuity. 
The findings support the notion that the outcome is affected by the perturbation techniques and the distance measures employed. It is concluded that the continuity test as it stands, can be easily biased towards desired results by employing a particular combination of perturbation technique and distance measure. This prompts the need for a more rigorous assessment of continuity. The study in <cit.> investigates the effect of Adversarial Perturbations (APs), and subtle disruptions in input data on DNNs. It uses GradCAM to generate explainability maps and shows a decrease in correlation coefficients between Layered GradCAM outputs after a DeepFool attack. Correspondingly, the research in <cit.> is focused on the problem of semantic discontinuity of deep learning models, where small perturbations in the input space tend to cause semantic-level interference to the model output, which is explained by the flaws in choosing the training targets. The need to study continuous perturbations of the input data is also argued in <cit.>, especially for model interpretability. The authors have addressed the causes for the previously observed fragility of many attribution XAI methods and propose enhanced metrics and improving robustness via adversarial training. The focus of those and similar papers is only on studying the DL method itself. While adversarial robustness is very important, in the context of explaining the model's behaviour in semantic terms, the potential influence of an explainer has not been researched. A system for visual analytics and understanding of CNNs, VATUN, is presented in <cit.>. Again, the focus is on studying the sensitivity to and preventing adversarial attacks and is also limited to GradCAM images. Similarly, Perturber, introduced in <cit.>, is a web application offering interactive comparison of base and adversarially trained CNN models. Our proposed approach is generic for any data modality, DL model architecture, and attribution explainer. Also, both VATUN and Perturber are interactive approaches and do not include a metric for quantifying (semantic) continuity, which we propose in this work. As far as we know, here we are the first to define the notion of “semantic continuity" in the XAI context. In the inspirational publication <cit.>, 12 properties of explainers have been defined for their objective and systematic evaluation. Continuity considers how continuous is the explanation function learned by the explainer. A continuous function ensures that small variations in the input lead to small changes in the explanation. Continuity also adds to generalizability beyond a particular input, which is specifically useful for domain experts who are not (X)AI experts. Having primarily their needs in mind, we have extended the notion of continuity to semantic continuity as a valuable property of an explainer. § SEMANTIC CONTINUITY To define semantic continuity, we first look at the definition of continuity in the context of explainable AI from the work <cit.>. The definition of continuity in <cit.> is loosely defined as “similar inputs should have similar explanations". We expand this idea with the notion of “semantic continuity" so that “semantically similar inputs should have similar explanations." This definition brings us to the following hypothesis: * A slight change in the input will correspond to a slight change in the output, which is the result of the explainer. 
* Given the explanation of a reference prediction (the base case), the bigger the input change, the bigger the change between the explanation of the current output and the explanation of the initial (reference) output. This will yield an increasing monotonic correlation between changes in explanations and changes in the original input (images). Let 𝐱_0 denote the reference input data, and f be a function that applies a semantic variation θ with a domain Θ on the input data, resulting in 𝐱_i = f(𝐱_0; θ_i). Let M be a deep learning model and E(M) the explainer of the model. We define semantic continuity as follows: θ_j - θ_0 > θ_i - θ_0⇒D(E(M(𝐱_j)), E(M(𝐱_0))) > D(E(M(𝐱_i)), E(M(𝐱_0))), ∀θ_i, θ_j ∈Θ, where θ_0 corresponds to an identity transformation, i.e. 𝐱_0 = f(𝐱_0; θ_0) and E(M(𝐱_0)) corresponds to its explanation. The function D corresponds to a distance function between the two explanations. To test whether XAI methods are semantically continuous, given that the predictor is a perfect predictor that is also semantically continuous, and thus respects the mathematical definition on which we ground the concept of semantic continuity, we propose the following approach: First, given a trained predictor (for example a DL model) for a boolean classification task, we define one or more semantic variations we can apply to the data. For example, in the case of image classification, we can apply image transformation techniques such as rotation, contrast change and cropping, which do not alter the semantic meaning of the image. On the other hand, we can also define a gradual transformation from one class to the other class (and therefore gradually changing semantics). Next, we apply the predictor to predict images generated using the semantic variations and subsequently apply an explainer to get the feature attribution map for the prediction. Finally, we can measure the distance between the original (non-transformed) image and the semantic variation of the input image and we can measure the distance between the two feature attribution maps given by the explainer. These distances should increase monotonically when applying larger semantic variations and the distances between inputs should be correlated to distances of the explanations derived by the explainer. §.§ Proof-of-concept Experiment To investigate the semantic continuity of XAI methods, the simplest case study consists of binary classification. We will first explore semantic continuity by considering the case in which the machine learning model must distinguish between triangles and circles, where the grayscale images contain only one uniform triangle or circle positioned in the centre of a uniform background. We selected this case study as proof of concept as the task is simple and can be easily manipulated. In addition, we can assume (and verify) that the model on this task is more or less a perfect predictor. Considering that the case study is a binary classification task, we build a Convolutional Neural Network (CNN) of 2 hidden layers, which is sufficient for a model to learn the features that enable it to distinguish between triangles and circles (100% test accuracy). §.§.§ Generation of Datasets with Semantic Variations Once the model has been trained, we generate datasets that enable us to measure the semantic continuity of XAI methods in different scenarios.
As shown in Figure <ref>, we analyze three possible cases of semantic variation and continuity: * Rotation: the explainer is semantically continuous concerning the rotation of triangles. To check this property, we generate a dataset of 100 images of the same triangle on the same background. The dataset is a sequence of images with the triangle rotating clockwise by one degree. * Contrast: the explainer is semantically continuous concerning variations in the background contrast. To check this property, we built a dataset composed of 200 images. The first 100 are images of triangles, and the second 100 are circles. In this case, the progressive change consists of a constantly diminishing contrast of the shape with the background until the shape is no longer recognizable. * Transition: the most complex semantic transformation. The rotation and contrast are fixed. The dataset, composed of 100 images, is a sequence where the starting image depicts a circle and the ending image - a triangle. The shape gradually changes from a circle to a triangle in the in-between images. §.§.§ Comparing Different XAI Methods The (pre)trained model is used in each of the three scenarios above to measure the semantic continuity of different explainers. We test the semantic continuity of RISE <cit.>, LIME <cit.>, GradCAM <cit.>, and SHAP <cit.> explainers, as these are popular and well-established XAI methods. In this proof-of-concept experiment, we assume that the model is a perfect predictor and the output of the model adheres to the semantic continuity definition. Based on this assumption, it is possible to check whether it respects the mathematical assumption in Equation (<ref>) through qualitative analysis and quantitative correlations between the changes in input and explainer output. To understand the extent to which the XAI output under gradual semantic variations respects monotonicity, the verification is composed of two phases, the first consisting of visual inspection and the second consisting of the application of a correlation metric apt for monotonicity checking to investigate whether a positive change in the input is correlated with a positive change in the explainer output. The correlation metrics that we use to quantify semantic continuity are the Pearson <cit.>, Spearman's <cit.> and Kendall's Tau <cit.> metric. §.§ From Perfect Predictor to Imperfect Predictor As the first proof-of-concept experiment relies on a perfect predictor, we refine our definition of semantic continuity to be able to verify semantic continuity of the XAI method not only when the model is a perfect predictor but also when the model output is not semantic continuous. In addition, the binary classification of geometric shapes is fairly straightforward for the classifier being evaluated, and we would like to show the applicability of our proposed approach in a more realistic real-world setting. It is noteworthy that in Definition <ref>, we aim to directly establish a connection between the semantic changes in input data and the changes in post hoc explanations generated by XAI methods. However, recall one of the fundamental requirements of XAI methods, correctness, which aims to achieve high faithfulness of explanations w.r.t. the to-be-explained model <cit.>. Our implicit assumption for Definition <ref>, is the classifier can consistently perceive the semantic changes in test time such that it is feasible to evaluate the semantic continuity using XAI methods. 
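Under this perfect-predictor assumption, the quantitative part of the verification reduces to a few lines: compute the distance of each explanation to the reference explanation and correlate those distances with the magnitude of the applied variation. The sketch below assumes hypothetical model, explain, and distance callables and is meant only to make the procedure concrete, not to reproduce our exact implementation.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def continuity_correlations(model, explain, images, variations, distance):
    """Correlate explanation drift with the magnitude of semantic variation.

    images[0] is the reference input (variation 0); `variations` holds the
    corresponding variation magnitudes; `distance` compares two saliency maps.
    """
    reference_expl = explain(model, images[0])
    expl_dists = np.array([distance(explain(model, x), reference_expl) for x in images[1:]])
    var_magnitudes = np.abs(np.array(variations[1:]) - variations[0])

    return {
        "pearson": pearsonr(var_magnitudes, expl_dists),
        "spearman": spearmanr(var_magnitudes, expl_dists),
        "kendall_tau": kendalltau(var_magnitudes, expl_dists),
    }
```

A semantically continuous explainer should produce strongly positive rank correlations in this check.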
In the next evaluation scenario, this assumption or constraint is lifted. Following a similar vein to the previous proof-of-concept Shape datasets, we consider the binary classification of human facial information, which is a much more challenging task for the machine learning model, and hence, the XAI explainers are exposed to a more noisy situation where the model is not always providing accurate predictions. The concept and the criterion of semantic continuity are now re-formulated. Let θ denote a semantic variation in the range [Θ_A, Θ_B], Θ_A < Θ_B. We let f be a deterministic function that applies such variations θ on 𝐱_0, resulting in 𝐱_i = f(𝐱_0; θ_i), where θ_i is a real number that quantifies the scale of such variations. Here, the function f and the variation indicator θ_i firstly satisfies: P(H(𝐱_i)=Θ_A| x=𝐱_i) + P(H(𝐱_i)=Θ_B| x=𝐱_i) = 1, where H(·) is a fixed (hypothetically) perfect semantic percipient, and P is the probability symbol. Secondly, for any pairs of valid (θ_i, θ_j), the following causal property shall be held: θ_j> θ_i⇒P(H(𝐱_j)=Θ_B| x=𝐱_j) > P(H(𝐱_i)=Θ_B| x=𝐱_i). Definition <ref> establishes the concept for a data-dependent, unique, deterministic, and controlled semantic variation process between the two semantics (or domains or concepts). Based on this definition, we now provide formal definitions of semantic continuity for both predictors and XAI methods (explainers). Let M(x) be a machine learning model that determines the semantics for an input x, and a post-hoc explainer E(M; x) of M. Notably, we only consider explainers that implicitly or explicitly produce a real-valued heatmap over the entire x or a feature importance score for each element in x. Given a reference data point 𝐱_0 of semantic Θ_A and a variation function f(·; θ) that can transform 𝐱_0 from Θ_A to Θ_B as defined in Definition <ref>. We say the model M is semantically continuous between Θ_A and Θ_B shortly, Θ_A, B, on 𝐱_0 if for any valid pair of (θ_i, θ_j): θ_j> θ_i⇒P(M(x)=Θ_B| x=𝐱_j) > P(M(x)=Θ_B| x=𝐱_i), where 𝐱_k = f(𝐱_0; θ_k), a semantic variation following Definition <ref>, and P(M(x)=Θ_B| x=𝐱_k) denotes the probability (confidence) of 𝐱_k to be an instance of domain Θ_B, estimated by model M. Similarly, given with a predictive model M, a semantic variation function f(x;θ) as defined in Definition <ref>, and a reference data point 𝐱_0, we say the explainer E is Θ_A, B regarding M on 𝐱_0 if for any valid pair of (θ_s, θ_t): P_B(M;𝐱_s) > P_B(M;𝐱_t) ⇒D(E(M;𝐱_s), E(M;𝐱_0)) > D((E(M;𝐱_t), E(M;𝐱_0)), where 𝐱_k = f(𝐱_0; θ_k), P_B(M;𝐱_k) is short for P(M(x)=Θ_B| x=𝐱_k), and D(·, ·) is a distance metric that quantifies the discrepancy between outputs of the explainer E. Notably, when testing semantic continuity for explainers as defined in Definition <ref>, we don't assume any size relationship between the paired indicators of variations. Speaking in general, when we relate these new definitions to the previous one, Definition <ref> relies on the assumption that the to-be-explained predictor holds Definition <ref> almost surely, but will contradict the requirement of correctness for XAI methods when this assumption is not upheld. §.§ Synthesis of the human facial dataset With the aforementioned formal definitions of semantic continuity, we propose to design and create a binary classification (class A vs class B) dataset 𝒮 that satisfies Definition <ref> but maximally prohibits a strong baseline model from being Θ_A, B on all reference points in 𝒮. 
In this research, particularly, we consider classification on artificially generated human faces, where the two non-overlapping classes are with glasses and without glasses. The training data is generated using stable diffusion <cit.>, and all unrealistic samples are manually removed. Figure <ref> depicts several randomly chosen examples of our training data. The test dataset is constructed using InterFaceGAN <cit.> and SEGA <cit.>. Both are generative models that are capable of smoothly transforming a source image of one semantic to an image of another non-overlapping target semantic while preserving other semantics of the source image during the transformation. The transformation process can be exclusively controlled by a real-valued indicator showing the degree of likeness to the target semantic. Once we determine a sequence of such variation indicators and a starting reference image, it is feasible to generate test data that follows our Definition <ref>, where the generative model itself serves as both the variation function f and the semantic percipient H. We give an example of a series of consistently generated test images in Figure <ref>, where from left to right, the generative model gradually adds a pair of glasses to the face of a girl. Practically, since both generative models are not perfect (regarding humans), it is important to manually inspect their outcomes and discard all series of images that have an unreliable target image as displayed, e.g., in Figure <ref>. However, we preserve several of these data for analyzing the explainer continuity under the case where Definition <ref> does not hold for human-level semantic percipient. § EXPERIMENTAL SETUP §.§ Shape Dataset To explore the (easier to analyse) proof-of-concept scenario related to the capacity of the explainers to capture the semantic continuity in images, we prepare three gray-scale image datasets, each related to a different semantic scenario: * Rotation: fixing the background contrast, this dataset contains 100 images of equilateral triangles with different degrees of rotation. Starting from a base triangle, we apply a progressive rotation to the shape, allowing it to complete a 120-degree rotation. * Contrast: fixing the rotation, this dataset contains 100 images of triangles and 100 images of circles with different background contrast. Starting from the base case of a triangle and a circle with maximum gray-level contrast, we progressively diminish the contrast with the background resulting in a shape that is indistinguishable from the environment. * Transition: fixing rotation and contrast, this dataset contains x images of shapes. Starting from a triangle, the images show a progressive transition until the final image is one of a circle. The model used <cit.> to test the semantic continuity of XAI methods, tested with the above datasets, has been trained on the Simple geometric shapes dataset <cit.>. §.§ Synthesis facial dataset In section <ref>, the methodology for creating the dataset has been introduced. In total, the training data contains 1000 balanced samples and the test set contains 100 balanced samples. Among the test samples, 48 samples with the label no glasses are chosen to be the reference image for semantic variations, where each of them is gradually shifted towards the with glasses class twenty times uniformly. This results in 1,008 images for testing the explainers' semantic continuity. 
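To illustrate how such a graded test sequence can be produced, the sketch below follows the spirit of InterFaceGAN's linear latent editing: a reference latent code is shifted along a learned semantic direction (here, towards with glasses) with uniformly spaced strengths. The generator wrapper, the direction vector, and the maximum edit strength are placeholders, not the exact settings of the cited generative tools.

```python
import numpy as np

def semantic_variation_sequence(generator, z_reference, glasses_direction,
                                num_steps=20, max_strength=3.0):
    """Generate a reference face plus `num_steps` progressively edited versions.

    generator: callable mapping a latent code to an image (hypothetical wrapper
               around a pretrained face generator).
    glasses_direction: unit vector in latent space pointing towards "with glasses".
    """
    strengths = np.linspace(0.0, max_strength, num_steps + 1)  # strength 0 = reference
    images = [generator(z_reference + s * glasses_direction) for s in strengths]
    return strengths, images

# 48 reference faces, each shifted 20 times, gives 48 * 21 = 1008 test images.
```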
Regarding the classifier, we choose the well-known baseline model ResNet <cit.> with 18 convolution layers and a Sigmoid output layer. The model is fitted exclusively on the training data from scratch using binary cross entropy loss and Adam optimizer <cit.> for about 20 epochs. As a result, the ResNet-18 achieves 100% predictive accuracy on the 100 test samples. For consistency across the setups between experiments, following setups for the Shape dataset, we evaluated instance-wise semantic continuity of RISE, LIME, GradCAM, and KernelSHAP explainers. We choose mean squared deviation and Wasserstein distance as the distance metric (regarding D_I) for Definition <ref>) in quantitative analysis. §.§ Software For the experiments, the uniform implementation of the XAI explainers in the Deep Insight and Neural Network Analysis (DIANNA) <cit.>, <cit.> python library has been used. § RESULTS The results section is organized into two subsections: i) results and insights from the proof-of-concept shape dataset to show the outcome of comparisons of different explainers and correlation metrics and ii) results of semantic continuity of explainers on the complex facial image classification task with realistic images. §.§ Proof-of-concept Results: Shape Dataset To calculate the extent to which changes in the input lead to changes in the output, we calculate the correlation between the independent variable x, which represents the change in the input, and the dependent variable y, which represents how explanations vary. The distance metrics considered are the Pearson correlation, Spearman's correlation and Kendall's Tau correlation. Considering that the ideal result consists of a monotonic function, that would be able to show the increasing change that affects the output once the input is modified, these correlation metrics have been selected on the basis of their capacity to capture monotonicity. For each transformation (contrast, rotation and gradual change from circle to triangle), we perform three types of analysis: * Saliency maps (heatmaps) of the predictor extracted by the XAI methods. These maps highlight the region of interest in the image for a machine learning model, as inferred by the given explainer. * Relational plots that visualize the correlation among the observed properties: * Semantic variations measures the degree of semantic variation between the varied images and the reference image. * Saliency distances denotes the mean squared deviation distances between the saliency maps of varied images and that of the reference image. * Statistical correlations between the changes in the input and the changes in distances between the heatmaps. We report Pearson correlation, Spearman's rank correlation, and Kendall rank correlation (Kendall's τ) to quantify the degree of explainer continuity. The analysis of the first transformation (Rotation) can be seen in Figure <ref>. We can observe that GradCAM focuses on the edges of the triangle and shows a perfect explanation for the triangle class. When looking at the Saliency Distances for GradCAM in Figure <ref>, GradCAM shows an oscillating pattern that matches the fact that after 60 degrees of rotation, the original image is obtained. Also, RISE shows the expected pattern for the rotation case with the exception of a few outliers, although its explanations are less clear. 
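The saliency distances that underlie these relational plots, and the statistics reported next, require a distance between two attribution maps. A minimal sketch of the two measures we rely on, mean squared deviation and the first Wasserstein distance, is given below; flattening the maps and comparing their normalised value distributions for the Wasserstein term is a simplifying assumption of this sketch.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def mean_squared_deviation(saliency_a, saliency_b):
    """Pixel-wise mean squared deviation between two saliency maps."""
    a = np.asarray(saliency_a, dtype=float)
    b = np.asarray(saliency_b, dtype=float)
    return float(np.mean((a - b) ** 2))

def wasserstein_saliency_distance(saliency_a, saliency_b):
    """First Wasserstein distance between the value distributions of two maps.

    Flattening discards spatial structure; this is a deliberate simplification.
    """
    a = np.asarray(saliency_a, dtype=float).ravel()
    b = np.asarray(saliency_b, dtype=float).ravel()
    # Normalise each map so both behave like distributions over saliency values.
    a = (a - a.min()) / (np.ptp(a) + 1e-12)
    b = (b - b.min()) / (np.ptp(b) + 1e-12)
    return float(wasserstein_distance(a, b))
```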
The statistical correlations given in Table <ref> match with these observations, note that for these statistics we only look at the first 30 degrees of change, as for those variations the saliency distances should be monotonically increasing. Lime fails to provide any differences in explanations and can therefore not be calculated. Note that with different hyper-parameters LIME could possibly be improved but that is outside the scope of this work. The analysis of the contrast transformation can be viewed in Figure <ref>. Visually, the explanations of GradCAM seem again to be superior, LIME in this case suffers from its binary nature in providing the explanations. Also, the relation between saliency distances and semantic variations in Figure <ref> and the corresponding correlation metrics in Figure <ref> confirms these observations. In the relation between distances and variations, GradCAM and RISE show the most monotonic pattern (GradCAM seems to be perfectly monotonically increasing), with Kendall's Tau and Spearman giving highest correlation to GradCAM and Pearson correlation is the highest for RISE. The analysis of the circle-to-triangle transformation can be viewed in Figure <ref>. Again GradCAM seems to be superior in terms of visualized explanations, however also RISE and LIME show meaningful and logical explanations in this case. In terms of the relation between the saliency distance and semantic variation, GradCAM suffers from the first few explanations being empty and a slight drop in saliency distance towards the end of the transformation, resulting in overall lower correlation measures than RISE for this case. §.§ Synthesis facial dataset In this section, we discuss the semantic continuity of the four evaluated XAI methods, case by case regarding different relationships between the semantic and the ResNet predictor. For each case, we present three types of analysis: * Saliency maps (heatmaps) of the predictor extracted by the XAI methods. These maps highlight the region of interest in the image for a machine learning model, as inferred by the given explainer. * Relational plots that visualize the correlation among the observed properties: * Semantic variations measures the degree of semantic variation between the varied images and the reference image. * Model confidence represents the probability of the given image to be of the class with glasses. * Saliency distances denotes the first Wasserstein distances between the saliency maps of varied images and that of the reference image. * Confidence changes quantifies the differences in model confidence between the varied to-be-measured images and the reference image. * Statistical correlations between the changes in model confidence and the changes in distances between the heatmaps. We report Pearson correlation and Kendall rank correlation (Kendall's τ) to quantify the degree of explainer continuity as introduced in <ref>. Apart from the first Wasserstein distance, we additionally report the correlation coefficients on saliency distances quantified by mean squared deviation between explanations. We discuss our results regarding the four disjoint model's predictive behaviour, commonly referred to as the confusion matrix. For convenience, interchangeably, we use G to denote GradCAM, R for RISE, L for LIME, and K for KernelSHAP. True positive The first case is a true positive as shown in Figure <ref>. 
The ResNet model correctly identifies the emergence of glasses of images as they are semantically varied from no glasses to with glasses. In Figure <ref>, we display the saliency maps or the heatmaps produced by four explainers. Speaking intuitively based on the saliency maps, the G, R, and K correctly reflect the changes in model confidence regarding semantic variations as their (particularly R and K) highlight areas become darker and more concentrated on the locations of glasses. The qualitative relational plots in Figure <ref> support these observations as G, R, and K possess stably coherent trends with the model's predictive confidence, whereas the plots of LIME have strong stochasticity and do not display relatively monotonicity regarding the confidence changes. Figure <ref> depicts the statistical correlations between the saliency distances and the changes in model confidence. With a significance level of 0.05, i.e., p<0.05, we can find both linear and monotonical correlations between the saliency distances of all explainers and the changes in predictor's confidence where GradCAM and RISE are believed to be more semantically continuous regarding the sizes of coefficients. False positive Figure <ref> illustrates a false positive scenario, wherein the model erroneously predicts the presence of glasses where the lenses are missing. All four explainers offer logical rationales: starting from the third column to the left in Figure <ref>, the regions of interest are dense around the brow ridge and nasal bone, suggesting the model heavily weighs the presence of a glasses frame. The saliency maps contradict the trends of Wasserstein distance metrics. In Figure <ref>, as the model's confidence converges, the distance measurements from all explainers, except those from GradCAM, become stochastic and incoherent, whereas the saliency maps keep their consistency. Furthermore, empirical analysis through relational plots reveals that only GradCAM exhibits certain conformity with the predictor, while the KernalSHAP and RISE explainers behave similarly. The empirical findings are further substantiated by statistical correlation coefficients in <ref>, where the discrepancies in GradCAM's explanations demonstrate a stronger correlation with changes in model confidence. In contrast, it is unlikely that LIME is monotonic with the predictor judging by the correlation coefficients. False negative In Figure <ref>, we present the analysis of false negatives. The model disbelieves the presence of glasses where they exist. Judging by the saliency maps shown in Figure <ref>, RISE and KernelSHAP both exhibit a trend of the ResNet's increasing focus on the right eye. LIME starts to produce a similar inference with KernelSHAP, i.e., highlighting hairs, and it later consistently inferred that the ResNet focuses on both eyes. GradCAM's inference says the predictor is more interested in the left cheek, which reasonably explains the cause of the predictor's underperformance. The tendencies observed in RISE and KernelSHAP are also confirmed through relational plots. Empirically, we find out that GradCAM, KernelSHAP, and RISE each display a certain degree of monotonicity regarding the model confidence, whereas LIME is indifferent. Statistical analysis in Table <ref> supports our empirical findings. It is hard to precisely rank the continuity of RISE, GradCAM, and KernelSHAP based on these tabular results, however, it is clear that LIME is the explainer with the least explainer continuity. 
True negative This scenario in Figure <ref> mirrors that depicted in Figure <ref>, but with a notable difference: ResNet avoids misclassifying images of eyeglasses without lenses (a true negative). Analysis of the heatmaps in Figure <ref> reveals divergent explanatory behaviours: the heatmaps produced by GradCAM primarily shift between two patterns; RISE's interpretations appear randomly distributed; LIME's explanations concentrate on the upper right segments of the images, albeit with some fluctuation; KernelSHAP consistently emphasizes both eyebrows. Empirical observations from relational plots in Figure <ref> and statistical results in Table <ref> indicate that GradCAM and KernelSHAP maintain certain explainer continuity regarding predictors, in contrast to the discontinuity exhibited partially by LIME and particularly by RISE despite low yet stable model confidence. Summarizing our analysis across four case studies, visual inspection of saliency maps shows that KernelSHAP delivers the most semantically continuous and informative explanations. RISE and GradCAM follow, ranked second and third, respectively, while LIME is the least informative, with discontinuity between closely adjacent semantics. Regarding metric studies through relational plots and statistical correlations, GradCAM undoubtedly is the most semantically continuous explainer, with KernelSHAP, RISE, and LIME following in descending order. Besides summarizing findings on an explainer level, we discuss the conformity between qualitative, and our proposed quantitative analysis. Among three of the four studied cases, in Figure <ref>, Figure <ref>, and Figure <ref>, analytical observations on statistical correlation (linearity and monotonicity) between the saliency distances and the changes in model confidence generally match and support our empirical findings on explanations and relational plots. § CONCLUSIONS AND OUTLOOK In this paper, we presented a novel methodology for evaluating semantic continuity for Explainable AI (XAI) methods and subsequently the predictive models. Our focus on semantic continuity emphasizes the importance of consistent explanations for similar inputs. We characterize an explainer as semantically continuous if similar inputs, lead to similar model predictions, having similar explanations. We explored semantic continuity for image classification tasks, assessing how sequential input changes impact the DL model explanations. We explored popular explainers, including LIME, RISE, GradCAM and KernelSHAP. We performed an in-depth instance-based analysis for a realistic and complex image binary classification task and different XAI methods. We found that regarding the relational plots and statistical correlations, GradCAM shows to be the most semantically continuous explainer, with KernelSHAP following as a good second. Visual inspection results of saliency maps are mostly in agreement with the proposed qualitative and quantitative semantic continuity measures. The investigation of semantic continuity extends our understanding of the interpretability of DL models and the capacities of different XAI methods, introducing a crucial dimension to the evaluation of XAI methods. Numerous promising avenues exist for future research in the intersection of semantic continuity and XAI. First, the proposed metric can be further extended to support different types of deep learning tasks beyond image classification and also extend to other domains, such as text or speech. 
Additionally, exploring the impact of semantic continuity on user trust in and acceptance of DL models and XAI methods can provide insights into the practical implications of our findings. To quantify semantic continuity, different metrics could also be explored, such as distance correlation <cit.>. In conclusion, we hope that our treatment of semantic continuity and its nuanced implications not only enhances the evolution of Explainable AI methods but also supports the use of deep learning across diverse domains. §.§.§ Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article. § ACKNOWLEDGMENTS We would like to thank the Lorentz Center for organizing the workshop ICT with Industry, which led to the collaboration and this work. This publication is part of the project XAIPre (project number 19455) of the research program Smart Industry 2020, which is (partly) financed by the Dutch Research Council (NWO).
http://arxiv.org/abs/2407.13596v1
20240718153500
EarthMarker: A Visual Prompt Learning Framework for Region-level and Point-level Remote Sensing Imagery Comprehension
[ "Wei Zhang", "Miaoxin Cai", "Tong Zhang", "Yin Zhuang", "Xuerui Mao" ]
cs.CV
[ "cs.CV" ]
EarthMarker: A Visual Prompt Learning Framework for Region-level and Point-level Remote Sensing Imagery Comprehension Wei Zhang*, Miaoxin Cai*, Student Member, IEEE, Tong Zhang, Student Member, IEEE, Yin Zhuang†, Member, IEEE, and Xuerui Mao† * Wei Zhang and Miaoxin Cai contributed equally to this work. † Co-corresponding authors: Yin Zhuang and Xuerui Mao. Wei Zhang is with the Advanced Research Institute of Multidisciplinary Sciences, Beijing Institute of Technology, Beijing 100081, China, and also with the School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China (e-mail: w.w.zhanger@gmail.com, 3120235339@bit.edu.cn). Xuerui Mao is with the Advanced Research Institute of Multidisciplinary Sciences, Beijing Institute of Technology, Beijing 100081, China, with the School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China, and also with the Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314003, China (e-mail: maoxuerui@sina.com). Yin Zhuang, Miaoxin Cai, and Tong Zhang are with the National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing, Beijing Institute of Technology, Beijing 100081, China (e-mail: yzhuang@bit.edu.cn, 3120220667@bit.edu.cn, bit_zhangtong@163.com). ================================================================================ § ABSTRACT Recent advances in visual prompting in the natural image area have allowed users to interact with artificial intelligence (AI) tools through various visual marks such as boxes, points, and free-form shapes. However, due to the significant differences between natural and remote sensing (RS) images, existing visual prompting models face challenges in RS scenarios. Moreover, RS MLLMs mainly focus on interpreting image-level RS data and only support interaction through language instructions, restricting flexible application in the real world. To address those limitations, a novel visual prompting model named EarthMarker is proposed, which excels in image-level, region-level, and point-level RS imagery interpretation. Specifically, the visual prompts alongside images and text instructions are input into the large language model (LLM), adapting the model toward specific predictions and tasks.
Subsequently, a sharing visual encoding method is introduced to refine multi-scale image features and visual prompt information uniformly. Furthermore, to endow EarthMarker with versatile multi-granularity visual perception abilities, the cross-domain phased learning strategy is developed, and the disjoint parameters are optimized in a lightweight manner by leveraging both natural and RS domain-specific knowledge. In addition, to tackle the lack of RS visual prompting data, a dataset named RSVP featuring multi-modal fine-grained visual prompting instructions is constructed. Extensive experiments are conducted to demonstrate the proposed EarthMarker's competitive performance, representing a significant advance in multi-granularity RS imagery interpretation under the visual prompt learning framework. Our code and dataset are available at https://github.com/wivizhang/EarthMarker. Visual prompting, Remote sensing, Multi-modal large language models (MLLMs). § INTRODUCTION Visual prompting refers to the technique of guiding visual models to focus on regions of interest and improving their finer-grained interaction performance by providing them with visual marks (e.g., boxes, points, masks) or examples <cit.>. Recently, multi-modal large language models (MLLMs) <cit.> have experienced remarkable advancements in the remote sensing (RS) domain. However, those MLLMs only support language instruction and fail to understand images in a visual prompting manner. Considering that high-resolution RS imagery is characterized by scale variation across categories and by tiny objects, fine-grained reasoning is necessary alongside holistic scene interpretation. This is crucial for performing more detailed analyses to make informed decisions in real-world applications <cit.>. Nevertheless, most existing MLLMs achieve visual-language alignment using image-text pairs, lacking fine-grained referring understanding abilities at the region and point levels. At present, leveraging the visual prompting method to enhance the complex visual reasoning capabilities of MLLMs in RS remains under-explored. Notably, prompt engineering <cit.> has been extensively studied in the natural language processing (NLP) community <cit.> and subsequently spread to the computer vision area. A key example is the Segment Anything (SAM) <cit.> model, which utilizes multiple visual prompting marks to realize zero-shot segmentation adapted to various new image distributions. However, SAM lacks semantic information, hindering real-world applications. GPT4RoI <cit.> and RegionBlip <cit.> have enabled MLLMs to complete region-level visual understanding tasks by training on region-text pairs. Nevertheless, they only support bounding boxes as visual prompts, which lacks flexibility. Osprey <cit.> excels in pixel-level visual understanding but relies on pre-attached segmentation models, constraining its application range. Additionally, Ferret <cit.> and SPHINX-V <cit.> support free-shape visual prompting marks to achieve pixel-level image comprehension. However, all these models are trained on natural scene data, leading to inferior performance when handling RS imagery. In the RS field, only a few works have been devoted to region-level imagery interpretation. For example, RSVG <cit.> adopts language prompting to inquire about and localize a specific object, but does not involve the visual prompting technique to realize various visual tasks.
In addition, inspired by SAM, RSPrompter <cit.> introduces an automated prompts generation to develop interactive segmentation in RS imagery. Another representative work is EarthGPT <cit.>, which shows the potential of region-level image comprehension by training on visual grounding datasets. However, EarthGPT only supports language interaction without visual prompts, thereby lacking flexibility. These limitations hinder the development of fine-grained spatial understanding and complex reasoning execution in the RS domain. It is clear that the visual prompts learning in the RS domain is still in its infancy. To bridge this gap, a fine-grained MLLM named EarthMarker is proposed, leveraging visual prompting to extend the capability of MLLMs for region-level and point-level understanding in the RS domain for the first time. Based on the visual prompts learning, as illustrated in Fig. <ref>, EarthMarker excels at the multi-granularity interpretation of RS imagery across image, region, and point levels. Moreover, EarthMarker can complete a wide range of RS visual reasoning tasks, including scene classification, referring object classification, captioning, inter-relationship analysis, etc. Concretely, the visual prompts, i.e., bounding boxes and points, along with the RS images and the text instructions are provided as input to the LLM. Notably, the visual prompting marks are utilized to isolate specific areas and guide the model to interpret regional content in the entire RS image. Considering that the RS imagery is gathered from an overhead perspective by satellites, associated with large-scale variations and cluttered backgrounds, multi-resolution image input processing is necessary. Subsequently, unlike most existing nature scene visual prompting works, which routinely set different visual encoders and visual prompts encoders. In our method, a sharing visual encoding method is developed. Specifically, the visual prompt is processed to RGB images analogously, which shares the same visual encoder with the inputted image. This strategy is beneficial for consistent feature extraction and understanding the relationship between visual prompts regions and the holistic image, enhancing the performance of the model under visual prompts learning. In order to enhance the visual prompts-image-text alignment and to equip the EarthMarker with versatile multi-granularity visual comprehension abilities, the cross-domain phased learning strategy is proposed. In the first stage for multi-domain image-text alignment, EarthMarker is trained on the existing nature scene and RS caption data to obtain general image understanding and enhance the modeling of conceptual diversity. Subsequently, the model is further trained on the nature domain referring data to achieve spatial perception in images, beneficial for subsequent developing referring comprehension ability in the RS domain. Lastly, in the RS visual prompting tuning stage, leveraging RS region-text and point-text instruction data, the proposed EarthMarker is equipped with point-level and region-level RS imagery interpretation capability. Notably, the phased training leverages the natural domain generalized knowledge and the RS domain expert knowledge for developing RS visual prompting MLLM. The multi-domain joint training is advantageous for enhancing the deep interpretation of fine-grained RS imagery and improving open-vocabulary reasoning capabilities. 
In addition, the updatable parameters of the model are disjoint, preventing interference between understanding images at different granularity and the capability to follow the visual prompt instruction. Another challenge lies in the datasets, e.g., existing visual prompting datasets  <cit.> are restricted to the natural scene, lacking RS semantics. It has become indispensable to construct a visual prompting dataset tailored to the RS domain for developing fine-grained MLLM. To this end, a RS visual prompting dataset named RSVP-3M, featuring large-scale fine-grained instruction-following, is developed. In particular, diverse publicly available RS data are transformed and re-annotated into uniform conversation formats. Furthermore, part of the more high-quality caption data is generated from GPT-4V<cit.>. Those captions are uniquely tailored with the distinctive characteristics of each RS imagery, thereby enhancing the richness and diversity of data. Through the data conversion and re-annotation from existing datasets and GPT-4V, over 3M image-point-text and image-region-text pairings are constructed, covering a wide geographic distribution and multiple types of ground targets. Extensive tests are conducted on multi-type RS datasets to evaluate the performance of EarthMarker which is demonstrated to be superior to state-of-the-art (SOTA) specialist models, MLLMs, and visual prompting models in various RS visual tasks at different granularity. Specifically, for the zero-shot scene classification task, EarthMarker shows a significant improvement compared with other existing MLLMs. Notably, for referring object classification, EarthMarker achieves a Semantic Similarity (SS) score of 98.37 % using bounding boxes as visual prompts and 95.96 % using point prompts on DIOR-RSVG dataset<cit.>. Furthermore, for image and region captioning tasks, EarthMarker also far exceeds other MLLMs and visual prompting models on the NWPU-Captions<cit.> dataset. In summary, the experimental results demonstrate that EarthMarker exhibits exceptional performance across a variety of multi-granularity RS image comprehension tasks and excellent zero-shot reasoning capability. Our contributions can be summarized as follows. * The First RS Visual Prompting Dataset, RSVP. A large-scale RS regional instruction dataset named RSVP-3M, containing over 3M image-point-text and image-region-text pairings, is constructed. The construction of RSVP-3M facilitates fine-grained RS imagery interpretation, laying the foundation for the development of visual prompting in the RS domain. * The First RS Visual Prompting MLLM, EarthMarker. Leveraging our newly constructed RSVP, the visual prompting MLLM named EarthMarker is proposed. EarthMarker can interpret RS imagery in the multi-turn conversation at different granularity, including image, region, and point levels, significantly catering to the fine-grained interpretation needs for RS imagery. * The First RS Visual Prompt Learning Framework. A universal region and point-level visual prompting data annotation method is developed. Subsequently, a sharing visual encoding mechanism is proposed, which adapts visual prompts to match the dimensions of the input image, thereby both of them undergo uniform processing by the same visual encoder. This mechanism comprehensively enhances the interplay among visual prompts, holistic images, and text instructions. 
Furthermore, the cross-domain phased learning strategy is designed, and the disjoint parameters are optimized in a lightweight manner by leveraging the multi-domain data, endowing EarthMarker with spatial perception and visual prompting following capabilities. * Superior performance on multi-granularity RS Visual Tasks. Extensive experiments are conducted to demonstrate EarthMarker's competitive performance in multi-granularity RS visual interpretation tasks, compared with the SOTA specialist models, MLLMs, and visual prompting models. The tasks evaluated include scene classification, referring object classification, captioning, and inter-relationship analyses. Therefore, EarthMarker successfully explores the adaptation of the visual prompt learning framework in the RS domain, improving the performance of MLLM and representing a significant step in fine-grained RS imagery interpretation. § RELATED WORK §.§ Multi-modal Large Language Models (MLLMs) Recently, the advancement of large language models (LLMs) has significantly fueled the revolution and innovation in the natural language processing (NLP) field. The representative works including closed-source GPT series <cit.> and open-source LLaMA series <cit.> have achieved powerful generalizable language processing and reasoning ability. Inspired by LLM and by further injecting visual signals, MLLMs are developed for visual-language mutual comprehension and various visual tasks. For example, VisualGPT <cit.>, BLIP <cit.> and Flamingo <cit.> show strong multi-modal reasoning potential after aligning LLMs with visual modality. Notably, LLAMA-Adapter V2 <cit.> and SPHINX <cit.> adopt zero-shot attention mechanism and linear projection layers tuning to mix LLM with visual signal. Those nature scene MLLMs laid the foundation for the extension to the remote sensing (RS) domain. Some pioneer RS MLLMs have emerged, and related studies such as EarthGPT <cit.>, Geochat <cit.>, and SkyEyeGPT <cit.> have enabled MLLMs to interpret RS imagery. Among them, Geochat is the first MLLM targeting solving multiple tasks on optical RS images. Furthermore, EarthGPT has proposed a more universal MLLM that can deal with multi-source RS imagery and a wide range of RS visual tasks. There is no doubt that those models facilitate the development of MLLMs in the RS-specific domain. However, those models complete visual interpretation only through human-like language interactions, but cannot generate responses through visual prompts. Apparently, existing RS MLLMs mainly focus on image-level and visual grounding, but are incapable of referring comprehension. Therefore, this paper aims to enhance the MLLMs for referring fine-grained understanding of vision. §.§ Prompt Engineering Prompt engineering is an emerging research direction in NLP <cit.>. Representation works contain AutoPrompt <cit.> and CoOp  <cit.>, which are designed to automate prompt template generation for language and vision-language models, instead of manual crafting. Additionally, Language prompting has been applied for developing open-vocabulary detection models such as DetPro <cit.> and Promptdet <cit.>. Compared with the extensively developed language prompting technique, visual prompting also needs more exploration. A major development is the Segment Anything (SAM) <cit.> model, which supports multiple segmentation prompts to enhance the zero-shot performance. Due to the lack of semantic labels in SAM, the Semantic-SAM <cit.> is proposed to realize multi-level semantics analysis and prediction. 
Notably, GPT4RoI <cit.> uses spatial boxes, and combines language and region-of-interest for input, enabling regional recognition. Colorful Prompting Tuning (CPT) <cit.> uses color-based markers to improve the performance of pre-trained vision-language models. The aforementioned models are trained on nature scene datasets. Note that Osprey <cit.> incorporates fine-grained mask regions into language instruction, achieving pixel-level visual understanding. Other visual prompting works including RegionBlip <cit.>, Kosmos-2 <cit.>, Shikra <cit.>, and Ferret <cit.>, also have shown promising results in region-based image understanding by leveraging visual prompting techniques. Additionally, the study entitled “Visual Prompting via Image Inpainting"<cit.> shows that various vision tasks can be accomplished well by giving desired task examples. There are pioneering studies in the RS domain on region-level image understanding. For example, RSVG <cit.> can provide the referred object’s bounding box based on images and natural language expression. Moreover, EarthGPT <cit.> also has the visual grounding ability, and it is capable of providing captions for specific areas within images. Inspired by prompt learning, RSPrompter <cit.> designs an automated approach to generate appropriate prompts for SAM input, facilitating RS imagery segmentation. However, RSVG adopts language prompting but without visual prompting, whilst RSPrompter is only tailored to the segmentation task. Apparently, there is no unified visual prompting framework designed for the RS domain to further improve the performance of MLLMs. Those limitations hamper the development of more complex and fine-grained RS imagery understanding, therefore this paper focuses on filling this gap. § METHODOLOGY We first overview the overall model architecture in Section III-A. Subsequently, the three-phase continuous training strategy of the proposed EarthMarker is detailed in Section III-B. §.§ Model Architecture One challenge in the RS domain is the absence of a visual prompts learning framework to endow MLLMs with fine-grained image understanding capabilities, blocking more complex reasoning. To address this challenge, EarthMarker is proposed, utilizing visual prompting for multi-granularity RS imagery comprehension. As illustrated in Fig. <ref>, EarthMarker contains four core components: a sharing visual encoding mechanism, a modality-align projection layer, a text tokenizer module, and a LLM decoder. These components work together to deal with multi-modal information, such as text instruction, images, and diverse visual prompting marks including bounding boxes and points, allowing LLM to generate accurate text responses. Each part is introduced as follows in detail. In particular, the images and corresponding visual prompts share a visual encoding mechanism for feature sharing, enabling the visual encoders to better understand and associate the relationship between images and visual prompts. Specifically, the Mixture of Visual Experts (MoV) <cit.> is designed to encode the visual information. The MoV incorporates two visual encoders, DINOv2-ViT L/14 <cit.> and CLIP-ConvNeXt <cit.>, which are pre-trained on distinct network architectures (ViT and CNN), thus offering complementary visual semantics. To refine the robust multi-scale visual features, the input images I are downsampled to different resolutions denoted as I^i and then fed into the MoV module to encode, respectively. 
Leveraging the strengths of various visual backbones, visual perception is enhanced and key details in images are refined. Subsequently, the encoded visual features are transformed to the same dimension and concatenated channel-wise to obtain the integrated multi-scale feature maps represented as V_img. This process can be formulated simply as V_img = Concat (MoV(I^i)),  i = 1,2,...,N. Notably, a key step in the encoder-sharing mechanism is the “Visual Prompt as Images" strategy. Specifically, the visual prompts of dimension (H×W× 1) are processed to the same dimension (H×W× 3) as the images. Then, the transformed visual prompts P can also be fed into MoV together with the images, and the encoded visual prompts are expressed as V_prompt. Similarly, this process is written as V_prompt = MoV (P). Subsequently, the modality alignment projection layer Φ transforms the visual tokens into the language semantic space. Meanwhile, the text instructions are processed by the tokenizer module, which handles text tokenization and embedding, converting them into text embeddings X_instruct. After obtaining the projected image tokens, visual prompt tokens, and text instruction embeddings, they are integrated into an entire multi-modal input sequence. The LLM decoder takes the multi-modal inputs and generates the response sequence Y, formulated as Y = LLM (Φ (V_img), Φ (V_prompt), X_instruct). We employ Llama 2, a transformer-based decoder-only LLM, as the LLM decoder. §.§ Cross-domain Phased Training To realize fundamental image-level understanding, spatial perception, and region/point-level RS data interpretation abilities, the cross-domain phased training method is designed. The entire training process is divided into three phases: multi-domain image-text alignment, spatial perception tuning, and RS visual prompting tuning. Throughout the training, we keep the tuning lightweight and avoid expensive full-parameter tuning. Furthermore, the disjoint parameters strategy is proposed, namely, the updated parameters of each stage are different. This strategy is conducive to a solid, step-by-step understanding of images, and naturally avoids interference between image-text understanding, visual prompting comprehension, and fine-grained instruction-following. Multi-domain Image-text Alignment. The first phase employs a multi-domain image-text alignment strategy. In this stage, both natural and RS domain image-level data are leveraged for pre-training to bring visual and text knowledge into alignment within a high-dimensional feature space. This strategy enables EarthMarker to deeply understand the holistic semantics of images. Specifically, we utilize the natural scene caption dataset COCO Caption <cit.>, alongside the RS image caption and scene classification subset from the newly constructed RSVP. During this training phase, multi-scale visual features and language representations are integrated into the LLM to develop image-level comprehension capabilities. The MoV module is kept frozen throughout the training, so as to concentrate on refining robust visual features. Only the alignment projection layer, which acts as the visual-language connector, undergoes parameter updates to enhance the multimodal capabilities of the proposed EarthMarker and ensure seamless integration of visual and textual information. Spatial Perception Tuning. In the previous step, EarthMarker achieved image-level comprehension capability.
In this step, to acquire spatial perception and object-level comprehension, the nature scene publicly available datasets, RefCOCO <cit.> and RefCOCO+ <cit.> are transformed into the instruction-following format. Throughout the training, the attention layers of LLM are unfrozen for aligning spatial region features with language embeddings. Specifically, LLM's key module self-attention head is composed of key K, query Q, and the value V, which are transformed by several linear layers. The l-th implementation equations can be expressed as follows Q_l(X) = W_l^q X +b_l^q, K_l(X) = W_l^k X +b_l^k, V_l(X) = W_l^v X +b_l^v, where X represents multi-modal input. The parameters W_q, W_k, W_v, b_q, b_k, and b_v are updated during the training. Then, the l-th single attention scores of Q and K are calculated as Att_l(Q _l, K _l, V _l) = V _l×Softmax(Q _lK _l^T/√(d_K)), where √(d_K) is the dimensionality of the keys. In addition, the other modules are kept frozen. RS Visual Prompting Tuning. The last stage focuses on accurately following user instructions and achieving complex region-level and point-level visual reasoning tasks. The MoV, alignment projection, and LLM are fixed. The LoRA method is adopted for tuning. We load the weights trained in the previous phase and continue training EarthMarker on RSVP-3M region-text and point-text parings, which contain the fine-grained referring object classification and referring brief caption data. Specifically, several learnable low-rank adapter matrices ΔW_l^v, ΔW_l^q, ΔW_l^k are inserted into Transformer layers of LLM. The adapted multi-head attention is denoted as Attn^*_l(Q_l, K _l, V _l), the output of the l-th adapted Transformer attention is formulated as Attn^*_l(Q_l, K _l, V _l) = Softmax((Q_l + Δ𝐖_l^q)×(K_l^T + Δ𝐖_l^k)/√(d_K)) × (V_l +Δ𝐖_l^v). In conclusion, the cross-domain phased training endows EarthMarker with various granular (e.g., image-level, point-level, and region-level) multimodal instruction capabilities in the RS domain. In the first multi-domain image-text alignment stage, the LLM is efficiently converted into an MLLM, which is capable of image-level understanding. Subsequently, by utilizing the nature scene referring datasets, EarthMarker is equipped with the fundamental spatial perception of images. This is beneficial for subsequent developments of referring ability in the RS domain. Furthermore, by leveraging the RS visual prompting datasets RSVP-3M, EarthMarker is endowed with image understanding at both the region level and point level. Notably, different field's datasets are adopted for training, and enhancing open-vocabulary reasoning ability. Notably, during the whole training, our updatable parameters are disjoint, preventing interference between understanding images at different granularity and the capability to follow visual prompts. § RS VISUAL PROMPTING DATASET CONSTRUCTION In this section, a visual prompting dataset named RSVP-3M is presented. RSVP-3M is the first visual prompting instruction dataset in the RS field, designed to advance image-level and fine-grained point-level, region-level RS MLLMs. Specifically, RSVP-3M contains over 3 million multimodal dialogue data with visual prompting marks. Those multi-granularity visual prompting data are restructured and cleaned from existing publicly available RS datasets. Furthermore, the GPT-4V <cit.> is employed for automatic annotation to construct a high-quality complex visual reasoning dataset<cit.>. 
A detailed explanation of the construction of the RSVP-3M dataset is introduced as follows. §.§ Data Conversion and Annotation from Public RS Datasets A part data of the dataset RSVP-3M is constructed by restructuring and relabeling existing RS datasets. A range of visual task types is covered, containing image classification, instance segmentation, object detection, image caption, and region caption, see Tab. <ref>. The image-level, region-level, and point-level data are derived from different RS datasets. Firstly, image-level visual prompting data is converted from image classification and captioning datasets. For the two type datasets, image-level visual instructions are used, with the bounding box [0, 0,width,height] serving as the visual prompt to obtain the image's category or brief caption. Subsequently, the region-level data is based on object detection datasets. The ground truth bounding boxes are used as visual prompts to guide the model to identify the object-level or region-level categories accurately. Additionally, the point-level data is transformed from segmentation datasets. For instance segmentation, the representative points extracted from masks corresponding to instances are used as point-level visual prompts. For semantic segmentation, each image is divided into 32 × 32 patches, and the points are randomly sampled within each patch as the visual prompts, with the category retrieved from the corresponding segmentation map. The selected existing datasets include NWPU-NESISIC45 <cit.>, OPTIMAL 31 <cit.>, RSD46 <cit.>, and WHURS19 <cit.> for optical classification. Vaihingen, Potsdam, Uavid<cit.>, Hi-UCD <cit.>, LoveDA <cit.> is used for optical semantic segmentation; NWPUVHR-10 <cit.>, WHU <cit.>, SAMRS <cit.> for optical instance segmentation; SSDD <cit.> for SAR instance segmentation; DOTA V2 <cit.>, FAIR1M <cit.>, DOSR <cit.>, MAR20 <cit.>, UCAS-AOD <cit.>, VisDrone <cit.>, LEVIR <cit.>, HRSC2016 <cit.>, RSOD <cit.> for optical object detection; AIR-SARShip-2.0 <cit.>, SARDet <cit.>, MSAR <cit.>, SAR-Aircraft <cit.> for SAR detection; and Sea-shipping <cit.>, Infrared-security <cit.>, Aerial-mancar <cit.>, Double-light-vehicle <cit.>, HIT-UAV <cit.> for infrared object detection. For brief image captioning task, the datasets NWPU-captions <cit.>, RSITMD-captions <cit.>, and Sydney-captions <cit.> were selected. For brief region captioning tasks, the OPT-RSVG and DIOR-RSVG <cit.> datasets were selected. In RSVP-3M, each data item consists of visual prompts, user instructions, and images. The visual prompts in the user instructions or model answers are expressed as <Mark i> or <Region i>. For the point-level data, for example, the user instruction which guides referring object classification is “Please identify the category of each marked point in the image". The answer format is “< Mark 1>: Label 1\ n < Mark 2>: Label 2\ n,...,`points:[x_1,y_1],[x_2,y_2],...". In addition, for the region-level data, Take the scene of the airport as an example. The user instruction for airport region captioning is “Please provide the brief caption of each marked region in the image, and the corresponding answer format generated by the model is “< Region 1>: \ n < Region 2>: \ n,...,`bbox':[x_1,y_1,x_2,y_2],...". The data structures of other visual tasks are similar to those explained above. Through the transformation and re-annotation based on public datasets, the visual prompting dataset RSVP-3M is effectively developed, featuring image-point-text and image-region-text pairings. 
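To make the conversion recipe concrete, the sketch below rewrites a single object-detection annotation into a region-level conversation record of the kind described above. The source annotation keys and the conversation wrapper are hypothetical; only the instruction wording and the <Region i>/'bbox' answer layout follow the formats given in the text.

```python
def detection_to_region_instruction(image_path, annotations):
    """Rewrite one detection annotation as a region-level RSVP-style record.

    annotations: list of dicts with hypothetical keys "bbox" ([x1, y1, x2, y2])
    and "label"; the boxes double as the region-level visual prompting marks.
    """
    question = "Please identify the object category of each marked region in the image."
    answer_lines, boxes = [], []
    for i, ann in enumerate(annotations, start=1):
        answer_lines.append(f"<Region {i}>: {ann['label']}")
        boxes.append(ann["bbox"])
    answer = "\n".join(answer_lines) + "\n'bbox': " + str(boxes)
    return {
        "image": image_path,
        "visual_prompts": boxes,
        "conversation": [
            {"from": "human", "value": question},
            {"from": "assistant", "value": answer},
        ],
    }
```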
§.§ GPT4V-assisted Visual Prompting Data Generation The aforementioned public datasets only provide simple classification information and brief captions, which are insufficient for intelligent interpretation of complex RS imagery. To mitigate the limitation and develop a more detailed and explicit RS visual prompting dataset, the language prompts for GPT-4V are carefully crafted for generating data featuring various complex visual reasoning. The complex fine-grained visual tasks involve detailed image captioning, inter-relationship analysis, and grounding captioning. We adopt the Set-of-Marks<cit.> (SoM) prompting, which can effectively unleash the extraordinary visual grounding ability of GPT-4V, to obtain comprehensive and unique characteristics from the RS imagery. The data generated using GPT-4V not only compensates for the lack of information in brief captions but also provides detailed descriptions that reveal the spatial and semantic relationships between different regions in the image. For example, in aerial imagery, it is feasible to identify the general category of the image and provide a simple description. Additionally, detailed descriptions, such as the spatial layout of tennis courts, basketball courts, and playgrounds, the relationships among these areas, and the activities of people on the playground, can be conducted. The RSVP-3M dataset, supplemented by public datasets and data generated by GPT-4V, covers a wide range of fine-grained visual reasoning tasks, enhancing the richness and diversity of the data. The structure of this subset's data is as follows. § EXPERIMENTS In this section, we present extensive experiments to validate the superior performance of EarthMarker. In Section V-A, we introduce the implementation details. Subsequently, we conduct qualitative and quantitative analyses to provide a holistic view of EarthMarker's performance from Sections V-B to V-E. §.§ Implementation Details The proposed EarthMarker adopts the cross-domain phased training strategy, and the parameters updated vary at different stages. In general, we train an off-the-shelf 13B language model Llama 2 and the visual encoder MoV is kept frozen during the training. In the first multi-domain image-text alignment stage, only the alignment projection layer is updated. Then, in the spatial perception tuning phase, only the attention layers of LLM are unfrozen. Furthermore, the trainable LoRA metrics are introduced in the last RS visual prompting tuning stage. We utilized AdamW optimizer<cit.> with weight decay = 0 and betas = (0.9, 0.95), the learning rate is set to 2e-5, and the total training stages are conducted on 8 NVIDIA A100 GPUs. For model evaluation, we select diverse multi-granularity visual tasks to assess the performance of EarthMarker. Image-level tasks include scene classification, image captioning, and region-level tasks contain referring object classification and region captioning. §.§ Scene Classification For scene classification tasks, we use the AID<cit.> and UCMerced<cit.> datasets for evaluation. AID is a large-scale aerial dataset collected from Google Earth, containing 30 categories. Following the setting of GeoChat, we use a 20% split of the AID dataset for testing. The UCMerced dataset consists of 21 categories for scene classification. Following the setting of GeoChat, the entire UCMerced dataset is adopted as a zero-shot test set. We prompt the model with an image-level box [0, 0,width,height] to represent the entire image. 
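A minimal sketch of this zero-shot protocol follows: the whole image is referred to by the box [0, 0, width, height], the model is queried for the category (the exact text instruction is quoted in the next paragraph), and accuracy is the fraction of exact label matches. The `generate` interface and the stub model are placeholders introduced purely for illustration; they are not EarthMarker's actual API.

```python
# Sketch of the zero-shot scene-classification protocol with a full-image box
# prompt. The model interface is a placeholder stub, not EarthMarker's API.

INSTRUCTION = "Please identify the object category of each marked region in the image."

def classify_image(model, image, width, height):
    visual_prompt = [0, 0, width, height]              # the whole image as one region
    answer = model.generate(image=image,
                            visual_prompts=[visual_prompt],
                            text=INSTRUCTION)
    return answer.strip().lower()

def zero_shot_accuracy(model, dataset):
    correct = sum(int(classify_image(model, img, w, h) == label.lower())
                  for img, w, h, label in dataset)
    return correct / len(dataset)

if __name__ == "__main__":
    class StubModel:                                    # stand-in for the real MLLM
        def generate(self, image, visual_prompts, text):
            return "Airport"
    data = [("img0", 600, 600, "airport"), ("img1", 600, 600, "beach")]
    print(zero_shot_accuracy(StubModel(), data))        # 0.5
```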
The text instruction is “Please identify the object category of each marked region in the image.". We calculate the zero-shot accuracy on the AID and UCMerced dataset. EarthMarker significantly outperforms other VLMs, with an accuracy of 86.52% on UCMerced and 77.97% on AID, as presented in Tab. <ref>. In comparison, LLaVa-1.5 and Sphinx, due to the lack of RS domain knowledge, are inferior to the RS MLLM GeoChat and our EarthMarker. Compared to GeoChat, our EarthMarker achieved an accuracy improvement of 5.94% on AID and 2.09% on UCMerced. §.§ Image Captioning To evaluate the image captioning capabilities, we use the NWPU-Captions<cit.> dataset to assess and compare EarthMarker against other expert models in the supervised setting. Created by Northwestern Polytechnical University, The NWPU-Captions dataset includes 31,500 aerial images and 157,500 sentences for RS image description. Following the protocol of MLCA-Net, we employ BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE-L, and CIDErD as evaluation metrics. In the evaluation, we use the sentence “Please provide a brief caption of each marked region in the image." as text instruction and a full-image box [0, 0,width,height] as the visual prompt. As shown in Tab. <ref>, compared to other expert models, EarthMarker demonstrate improvements in BLEU1, BLEU2, BLEU3, BLEU4, METEOR, and ROUGE-L by 7.4%, 8.2%, 8.8%, 6.5%, 3.5%, and 9.9%, respectively, and a 36.5% improvement in CIDErD. §.§ Referring Object Classification The referring object classification task aims to identify the category within the referring region in the image. The metrics used to evaluate this task are two semantic relevance indicators—Semantic Similarity (SS) and Semantic Intersection over Union (S-IOU) to assess a model’s classification ability. Closed-set testing is adopted on the test sets of object-level dataset DIOR-RSVG <cit.>. The text instruction is “Please identify the category of the marked region in the image", which along with the bounding boxes are fed into LLM to predict the category of regions. Due to the former MLLM (e.g., GeoChat, Sphinx, and EarthGPT) only accepting images and text as input, the region prompt for those MLLMs are coordinates information contained in the text instructions. As the results shown in Tab. <ref>, EarthMarker achieves 95.96 % in SS and 93.49 % in S-IoU using point-level visual prompts, and 98.37% in SS and 97.24% in S-IoU based on box-level visual prompts on the DIOR-RSVG dataset. Both sets of results significantly outperform the SOTA method. Furthermore, EarthMarker surpasses the previous SOTA model EarthGPT by 3.73% in SS and 7.08% in S-IoU on the DIOR-RSVG dataset, demonstrating its robust capability in fine-grained box-level classification. Additionally, in Fig. <ref>, there are considerable differences in predictions of regional object categories by EarthMarker and other MLLMs, as well as visual prompting models. It is evident that when faced with complex geographical scenes and blurry tiny objects, EarthMarker's predictions are significantly accurate. §.§ Region Captioning For the brief region captioning, the test set of DIOR-RSVG <cit.> is employed. Specifically, we adopt boxes as the visual prompt and a text prompt, such as “Please provide a brief caption of each marked region in the image," to prompt EarthMarker to concisely describe the content of the specified region using a brief caption. 
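The captioning comparisons above and below rely on n-gram overlap metrics; as a lightweight stand-in for the evaluation toolkit actually used, BLEU-1 through BLEU-4 can be reproduced at the sentence level with NLTK as sketched below (the smoothing choice and whitespace tokenization are our assumptions).

```python
# Sketch: BLEU-1..BLEU-4 for image/region captions with NLTK (a stand-in for
# the actual evaluation toolkit; corpus-level scripts may differ slightly).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_scores(reference: str, hypothesis: str):
    ref, hyp = [reference.lower().split()], hypothesis.lower().split()
    smooth = SmoothingFunction().method1
    weights = [(1, 0, 0, 0), (0.5, 0.5, 0, 0),
               (1/3, 1/3, 1/3, 0), (0.25, 0.25, 0.25, 0.25)]
    return [sentence_bleu(ref, hyp, weights=w, smoothing_function=smooth)
            for w in weights]

if __name__ == "__main__":
    gt = "a large white airplane parked near the terminal"
    pred = "a white airplane parked near a terminal building"
    for n, score in enumerate(bleu_scores(gt, pred), start=1):
        print(f"BLEU-{n}: {score:.3f}")
```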
Similar to the image captioning task, metrics like BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE-L, and SPICE are used to evaluate EarthMarker and other MLLMs and visual promoting models in region target understanding. As displayed in Tab. <ref>, on the DIOR-RSVG test set, compared to other MLLMs such as Qwen-VL-Chat, GeoChat, SPhinx, EarthGPT, and visual prompting models such as Sphinx-V, ViP-LLava, and GLAMM, EarthMarker shows improvements of 7.72%, 9.99%, 11.38%, 12.67%, blue12.49%, 7.92%, 10.97%, 165.67%, and 17.50% in BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE-L, CIDER and SPICE, respectively. Furthermore, we visualize the instances of EarthMarker and other models on the region captioning task, as shown in Fig. <ref>. In complex RS scenarios with numerous targets and extensive geographic coverage, EarthMarker can accurately identify and describe various specified targets, which are challenging tasks for other models. §.§ Complex Visual Reasoning In this part, we present the qualitative experimental result of EarthMarker to demonstrate its proficiency in completing complex RS tasks such as key target inter-relationship analyses. The text instruction for the relationship analyses task is “Please analyze the relationship between all marked regions in the image." As shown in Fig. <ref>, when faced with an airport scenario, four visual models provided different responses. Specifically, the response generated by GPT-4V does not specify the exact categories of each marked region. Additionally, GPT4V incorrectly describes Region 1 as “Region 1 supports all other regions indirectly by housing vehicles and equipment necessary for ground operations, including baggage handling, maintenance, and possibly emergency services", whereas Region 1 actually contains fuel storage tanks. The Vip-LLava model incorrectly identifies all regions as containing airplanes. Note that Sphinx-V correctly identifies the types of objects in each region and analyzes the internal relationship of some regions. For example, the Sphinx-V answer “they are positioned in close proximity to each other, suggesting they are within the same area of the airport. The tanks are similar in appearance and likely serve the same function.” However, it fails to provide a comprehensive analysis of the relationships among all four regions, and incorrectly states that the airplane in Region 4 is in motion. In contrast, our EarthMarker delivers an exemplary response. It firstly summarizes the relationship of all marked regions representing different elements of an airport environment. It then analyzes functionally similar areas in detail, stating that “<Region 1> and <Region 2> are both storage tanks, likely used for fuel or other liquids, and are similar in shape and size, suggesting they are part of the same system or facility. <Region 3> and <Region 4> are both commercial airplanes, indicating the area is used for aircraft operations. The positioning of the planes and the tanks suggests a functional airport with operational infrastructure for aircraft and fuel services.” This response accurately reflects the diverse functionalities within the airport, demonstrating superior comprehension and analysis compared to the other visual prompting MLLMs. § CONCLUSION In this paper, the fine-grained MLLM called EarthMarker, the first visual prompting model specifically designed for the RS domain, is proposed. 
Moreover, the RS visual prompting instruction dataset called RSVP is constructed for the first time, facilitating the development of fine-grained RS imagery comprehension. Furthermore, a visual prompt learning framework is designed. In particular, a shared visual encoding method uniformly refines multi-scale visual features and visual prompt content, which benefits comprehensive understanding of the interplay between visual prompts and the holistic image. Subsequently, the referring areas in the input are replaced by the proposed hybrid representation before being fed into the LLM, instructing the model to comprehend the referring areas and produce the corresponding predictions. Employing RSVP-3M and the visual prompt learning framework, EarthMarker is equipped with multi-granularity visual understanding capability at the image, region, and point levels, enabling comprehensive and intelligent analysis in real-world scenarios. In the future, we plan to incorporate a broader range of visual modalities into EarthMarker to enhance its multi-source imagery comprehension capabilities. In addition, we plan to support free-form shapes as visual marks to adjust the referring granularity flexibly.
http://arxiv.org/abs/2407.12398v1
20240717082316
Polynomial convergence rate at infinity for the cusp winding spectrum of generalized Schottky groups
[ "Yuya Arima" ]
math.DS
[ "math.DS" ]
Quantum beats of a macroscopic polariton condensate in real space * July 22, 2024 ================================================================= § ABSTRACT We show that the convergence rate of the cusp winding spectrum to the Hausdorff dimension of the limit set of a generalized Schottky group with one parabolic generator is polynomial. Our main theorem provides the new phenomenon in which differences in the Hausdorff dimension of the limit set generated by a Markov system cause essentially different results on multifractal analysis. This paper also provides a new characterization of the geodesic flow on the Poincaŕe disc model of two-dimensional hyperbolic space and the limit set of a generalized Schottky group. To prove our main theorem we use thermodynamic formalism on a countable Markov shift, gamma function, and zeta function. § INTRODUCTION In this paper, we consider the Poincaré disc model (𝔻, d) of the two-dimensional hyperbolic space. For the sake of simplicity, we postpone some technical definitions to the Section 2. Let G be a generalized Schottky group with one parabolic generator generated by G_0. We can write G_0=H_0∪Γ_0, where H_0 is the non-empty finite set of hyperbolic generators and Γ_0:={γ^±1} is the set of a parabolic generator with the fixed point p which is in the Euclidean boundary of 𝔻. Note that G is a non-elementary finitely generated free Fuchsian group. We denote by Λ(G) the limit set of G and by Λ_c(G) the conical limit set. In this setting, the conical limit set is given by Λ_c(G)=Λ(G)∖⋃_g∈ G{g(p)}. Thus, the set Λ(G)∖Λ_c(G) is a countable set. Therefore, the Hausdorff dimension of Λ_c(G) coincides with the Hausdorff dimension of Λ(G). Note that we have 1/2<_H(Λ_c(G))<1, where _H denotes the Hausdorff dimension. (see Theorem <ref>) we recall the definition and the motivation of the cusp winding process from <cit.> (see also <cit.> and <cit.>). Let R⊂𝔻 be the Dirichlet fundamental domain for G at centered 0. For a conical limit point x∈Λ_c(G) we can construct the unique infinite sequence ω(x)=ω_0(x)ω_1(x)⋯ in G^ℕ∪{0} which is associated to x as follows: Consider the oriented geodesic ray s_x from 0 to x. The oriented geodesic s_x intersects the infinitely many copies R, g_0(x)R, g_0(x)g_1(x)R,... of R, with g_i(x)∈ G_0 and i∈ℕ∪{0}. Thus, we obtain the infinite sequence ω(x)=g_0(x)g_1(x)g_2(x)⋯∈ G_0^ℕ∪{0}, which is necessarily reduced, that is, g_i-1(x)g_i(x)≠ I for all i∈ℕ, where I denotes the identity map. Using the infinite sequence ω(x) in G^ℕ∪{0}, we can define a block sequence which is used to define the cusp winding process as follows: Let x be a conical limit point. We define a block sequence B_i(x), i∈ℕ, such that ω(x)=B_1(x)B_2(x)⋯, where each B_i(x) is either a hyperbolic generator, or a maximal block of consecutive appearances of the same parabolic generator. By construction, for γ∈Γ_0, l∈ℕ∪{0} and i∈ℕ a block B_i(x)=γ^l+1 means that the projection of s_x onto 𝔻/G winds l times around the cusp p. Motivated by counting the number of windings around the cusp p, we define the cusp winding process (a_i)_i≥1:Λ_ c(G)→ℕ∪{0} by a_i(x)= {[ l if B_i(x)=γ^l+1 or B_i(x)=γ^-(l+1), l≥1; 0 otherwise ].. For α∈[0,∞] we define the level set by J(α):={x∈Λ_ c(G):lim_n→∞1/n∑_i=1^na_i(x)=α}. 
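To make the block decomposition and the cusp winding process concrete at the symbolic level, the following toy sketch (an illustration only, not used in any proof) splits a reduced word over the generators into blocks B_i, evaluates a_i on each block, and forms the Birkhoff average (1/n)∑_{i=1}^n a_i whose limit defines the level sets J(α). The letters are arbitrary labels: 'g' and 'G' stand for the parabolic generator γ and its inverse, while other letters represent hyperbolic generators.

```python
# Toy symbolic illustration of the block decomposition and the cusp winding
# process a_i.  'g'/'G' play the role of the parabolic generator gamma and its
# inverse; any other letter is a hyperbolic generator.  Labels are arbitrary.
from itertools import groupby

PARABOLIC = {"g", "G"}

def blocks(word):
    """Split a reduced word into maximal runs of the same parabolic letter;
    every hyperbolic letter forms its own block."""
    out = []
    for letter, run in groupby(word):
        run = list(run)
        if letter in PARABOLIC:
            out.append(run)                   # maximal block gamma^(l+1)
        else:
            out.extend([[c] for c in run])    # hyperbolic letters stay single
    return out

def cusp_winding(word):
    """a_i = l if B_i = gamma^(l+1) with l >= 1, and 0 otherwise."""
    return [len(b) - 1 if b[0] in PARABOLIC and len(b) >= 2 else 0
            for b in blocks(word)]

def birkhoff_average(word):
    a = cusp_winding(word)
    return sum(a) / len(a)

if __name__ == "__main__":
    word = "h" + "g" * 4 + "k" + "G" * 2 + "h" + "g" * 7 + "k"
    print(cusp_winding(word))          # [0, 3, 0, 1, 0, 6, 0]
    print(birkhoff_average(word))      # (3 + 1 + 6) / 7 ~ 1.43
```

The averages computed here are exactly the quantities whose limiting values α single out the level sets J(α).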
Using the above level set, we can consider the following multifractal decomposition of the conical limit set Λ_c(G): Λ_ c(G) =(⋃_α∈[0,∞]J(α) )∪ J_ir, where J_ir denotes the irregular set, that is, the set of conical limit points x∈Λ_c(G) for which the limit (1/n)∑_i=1^na_i(x) does not exist. To investigate the growth rate of the number of cusp winding by the Hausdorff dimension, we define the cusp winding spectrum as follows: b:[0,∞]→ [0,_HΛ_ c(G)], b(α)=_HJ(α). Also, the function b is simply called the dimension spectrum. Denote by f the Bowen-Series map associated with the Dirichlet fundamental domain R centered at 0 and by f̃ the induced system derived from the Bowen-Series map f as defined in Section 2.1. By the proof of <cit.>, the maximal invariant subset of ∂𝔻 generated by the induced system f̃ is the conical limit set Λ_ c(G). Thus, we can consider the dynamical system (f̃, Λ_c(G)). Moreover, by the definition of the induced system f̃ we have a_1∘f̃^i-1=a_i for all i∈ℕ (see Section 2.1). Hence, the dimension spectrum b can be regarded as a Birkhoff spectrum. We call the triple (f̃, Λ_c(G), a_1) a generalized Schottky system. The detailed analysis of the multifractal decomposition of the conical limit set and dimension spectra is obtained from <cit.> (see also <cit.>). By <cit.>, we have that the dimension spectrum b is strictly increasing and lim_α→∞b(α)=_H(Λ_c(G)). Therefore, it is natural to ask about the convergence rate of b to the Hausdorff dimension of the conical limit set. The following theorem is our main theorem. Let G be a generalized Schottky group. We have lim_α→∞(_H(Λ_c(G))-b(α))α^x= {[ ∞ if 1/(2-2_HΛ_c(G))-1<x; 0 if 1/(2-2_HΛ_c(G))-1>x ]. . Multifractal analysis has been studied in several settings. We refer the reader to <cit.> and <cit.> for basic results on multifractal analysis. By studies on multifractal analysis, it is known that the lack of certain conditions leads to strange results on multifractal analysis. For instance, the lack of compactness of a phase space can cause the existence of a point such that a Birhkoff spectrum is not analytic at this point (see <cit.>). Also, the presence of a neutral fixed point can cause the phenomenon in which a Birhkoff spectrum is completely flat (see <cit.>). However, to our knowledge, there is no known result in which differences in the Hausdorff dimension of the limit set generated by a Markov system cause essentially different results on multifractal analysis. Comparing Theorem <ref> with the result regarding the convergence rate of the Birkhoff spectrum of the arithmetic mean of the continued fraction, we can see that Theorem <ref> exhibits a new phenomenon. To explain this, we introduce the Gauss system. Let T:(0,1]→ (0,1] be the Gauss map defined by T(x)=1/x-[1/x], where [·] denotes the floor function. We define intervals (I_m)_m∈ℕ given by I_m:=(1/(m+1),1/m] for m∈ℕ. The Gauss map T is a Markov map with the countable Markov partition (I_m)_m∈ℕ and the limit set generated by T (i.e. ∪_(m_0,m_1,⋯)∈ℕ^ℕ∪{0}∩_i=0^∞ T^-iI_m_i) is I:=(0,1)∖ℚ. Note that the Hausdorff dimension of I is 1. We define the digit functions (𝔡_i)_i≥ 1 as 𝔡_i(x):=m if T^i-1(x)∈ I_m for i,m∈ℕ. The digit functions are closely related to the continued fraction for a irrational number in (0,1). Note that for i∈ℕ we have 𝔡_i=𝔡_1∘ T by definition of the digit functions (𝔡_i)_i≥ 1. We call the triple (T,I,𝔡_1) the Gauss system. The Gauss system (T,I,𝔡_1) is well-studied from various perspectives. See for example <cit.>, <cit.>, <cit.> and <cit.>. 
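For comparison with the Gauss system just introduced, here is a short numerical illustration (a toy computation, independent of the arguments in this paper): iterating the Gauss map T produces the digits 𝔡_i(x), i.e. the partial quotients of the continued fraction expansion, and their running average is the Birkhoff sum entering the level sets of the spectrum 𝔟.

```python
# Toy illustration of the Gauss map, the digit functions, and their Birkhoff
# averages.  Floating point is used, so only short orbits are meaningful.
import math

def gauss_map(x):
    y = 1.0 / x
    return y - math.floor(y)

def digits(x, n):
    """First n continued fraction digits d_i(x) = floor(1 / T^{i-1}x)."""
    out = []
    for _ in range(n):
        out.append(math.floor(1.0 / x))
        x = gauss_map(x)
    return out

if __name__ == "__main__":
    x0 = (math.sqrt(5) - 1) / 2          # golden ratio minus 1: all digits equal 1
    d = digits(x0, 15)
    print(d, sum(d) / len(d))            # Birkhoff average (1/n) * sum d_i
    x1 = math.pi - 3                     # partial quotients of pi
    print(digits(x1, 10))                # [7, 15, 1, 292, 1, 1, 1, 2, 1, 3]
```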
Let n∈ℕ. We consider the fully constrained set E(n) by n, that is, the set of points x in I such that 𝔡_i(x) does not exceed n for all i∈ℕ. By <cit.>, we can completely understand the asymptotic behavior of the function n↦_H(E(n)) as follows: _H(E(n)) = 1 -6/π^2 n - 72 log n/π^4n^2 + O(1/n^2) as n→∞, where O denotes the Landau's notation. Next, for α∈[1,∞) we consider the average constrained set Ẽ(α), that is, the set of points x in I such that the average (1/n)∑_i=0^n-1𝔡_i(x) does not exceed α for all n∈ℕ. By <cit.>, the asymptotic behavior of the function α↦_H(Ẽ(α)) is also understood as follows: _H(Ẽ(α))=1-O((1/2)^α). For α∈[1,∞] we define the level set by J̃(α):={x∈ I:lim_n→∞1/n∑_i=1^n𝔡_i(x)=α} and the Birkhoff spectrum 𝔟:[1,∞]→ℝ given by 𝔟(α):=_H(J̃(α)). We can also obtain the following proposition regarding the convergence rate of the Birkhoff spectrum of the arithmetic mean of the continued fraction 𝔟. For the Gauss system (T,I,𝔡_1) and α∈(1,∞) we have 𝔟(α)=_H(Ẽ(α)). Moreover, we have 𝔟(α)=1-O((1/2)^α). Proposition <ref> is the immediately consequence from <cit.> and <cit.>. Proposition <ref> states that the convergence rate of the Birkhoff spectrum of the arithmetic mean of the continued fraction 𝔟 to the Hausdorff dimension of the limit set I is exponential. By <cit.>, the generalized Schottky system (f̃, Λ_c(G), a_1) is analogous to the Gauss system (T,I,𝔡_1). Thus, if the convergence rate of b to _H(Λ_c(G)) and the convergence rate of 𝔟 to _H(I)=1 are different, then the essential difference in the result of multifractal analysis is caused by the difference between the Hausdorff dimension of I and the Hausdorff dimension of Λ_c(G) (Note that 1/2<_H(Λ_c(G))<1). In fact, Theorem <ref> state that the convergence rate of the dimension spectrum b to _H(Λ_c(G)) is polynomial. Therefore, we can see the new phenomenon in which differences in the Hausdorff dimension of the limit set generated by a Markov system cause an essentially different result in multifractal analysis. Next, we consider Theorem <ref> in terms of hyperbolic geometry. There are numerous studies on the geodesic flow on hyperbolic surfaces using a Fucshian group. Especially, there are a lot of results on the geodesic flow on a hyperbolic surface and the limit set of a non-elementary finite generated Fuchsian group obtained by performing a multifractal analysis. However, for a non-elementary finite generated Fuchsian group G the relationship between _H(Λ(G)) and the geodesic flow on the hyperbolic surface obtained from these results is that the supremum of a dimension spectrum including information of the geodesic flow on the hyperbolic surface is _H(Λ(G)). For example, by <cit.>, we known that for a non-elementary finite generated free Fuchsian group G with parabolic generators the maximum of the dimension spectrum describing the fluctuation of a certain asymptotic exponential scaling associated to number of winding around a cusp is _H(Λ(G)). On the other hand, Theorem <ref> states that for a generalized Schottky group G with one parebolic generator the convergence rate of the dimension spectrum to _H(Λ(G)) is determined by the _H(Λ(G)). Since the cusp winding spectrum b includes geometric information of the geodesic flow on 𝔻/G, this means that we can also relate geodesic flow on 𝔻/G and the Hausdorff dimension of Λ(G) using the convergence rate of the cusp winding spectrum b to _H(Λ(G)). This is a new characterization of geodesic flow on the 𝔻/G and the limit set of a generalized Schottky group. Methods of proofs. 
To prove Theorem <ref>, we relate to thermodynamic formalism, which is also used in <cit.>, with methods of proofs used in <cit.>. We first define the key function as follows: p:ℝ^3→ℝ, p(α,q,b):=P(q(-a_1+α)-blog|f̃'|), where P denotes the topological pressure (see Definition <ref>). By <cit.>, there exists real-analytic function α∈(0,∞) ↦ q(α)∈(0,∞) such that p(α,q(α),b(α))=0 and ∂/∂ qp(α,q(α),b(α))=0. Using (<ref>) and real-analyticity of the dimension spectrum b (see Theorem <ref>), for α∈(0,∞) large enough we can relate _H(Λ_c(G))-b(α) with ∫_α^∞ q(t) dt. On the other hand, using (<ref>), Ruelle's formula for the derivative of the pressure, and the Gibbs property, for α∈(0,∞) large enough we obtain a relationship between α and the Dirichlet series ∑_l=1^∞exp(-lq(α))l^1-2b(α). By Mellin transformation, Gamma function and zeta function, one can show a comparability between α^1/(2-2b(α)) and q(α) for large enough α∈(0,∞). By those relations, we can show Theorem <ref>. Plan of the paper. In the Section <ref>, we introduce the precise definition on Discrete geometry which is used in this paper. In the Section <ref>, we first describe thermodynamic formalism for Countable Markov Shift and a coding between Countable Markov Shift and the dynamical system (f̃,Λ_c(G)). Then, we explain thermodynamic formalism for the dynamical system (f̃,Λ_c(G)). In the Section <ref>, we recall some results from <cit.>. This result play a fundamental rule in this paper. In the section <ref>, we first show some technical lemma. By these lemmas, we obtain the comparability between α^1/(2-2b(α)) and q(α) for large enough α∈(0,∞). Then, using these lemmas, we prove Theorem <ref>. § PRELIMINARIES §.§ The Bowen-Series map and multi-cusp winding process In this section, we will first give some definitions of hyperbolic geometry and the notation used throughout this paper. We refer to the reader <cit.>,<cit.> and <cit.> for details on discrete geometry. Let (𝔻,d) denote the Poincaré disc model of two-dimensional hyperbolic spaces. We denote by Conf(𝔻) the set of orientation-preserving isometries of (𝔻,d). Recall that, in this setting, Conf(𝔻) is the set of Möbius transformations (see <cit.>) and each element g of Conf(𝔻) is classified as hyperbolic, parabolic and elliptic using fix points of g. A element h of Conf(𝔻) is called a hyperbolic element if h has two fixed points in ∂𝔻 , where ∂𝔻 denotes the Euclidean boundary of 𝔻. A element γ of Conf(𝔻) is called a parabolic element if γ has one fixed point in ∂𝔻. Note that for a parabolic element γ∈ G and its fixed point p∈∂𝔻 we have |γ'(p)|=1, where |·| denotes the Euclidean metric norm of ℝ^2. A element ϕ of Conf(𝔻) is called a elliptic element if ϕ has one fixed point in 𝔻. A subgroup G of Conf(𝔻) is called a Fuchsian group if G is a discrete subgroup of Conf(𝔻). Let G be a Fuchsian group. We define the limit set of G by Λ(G):=⋃_g∈ G{g0}, where {g0} denotes the Euclidean closure of {g0} for g∈ G. G is said non-elementary if the limit set of G is not a finite set. A limit point x∈Λ(G) is called a conical limit point if there exists (g_n)_n∈ℕ⊂ G such that lim_n→∞g_n0=x and the sequence (inf_z∈[0,x)d(g_n0,z))_n∈ℕ is bounded, where [0,x) denotes the geodesic ray connecting 0 and x. We denote by Λ_c(G) the set of conical limit points. Next, we introduce the definition of a generalized Schottky group. For g∈Conf(𝔻) we define the isometry circle of g as Δ(g):={z∈∂𝔻:|g'(z)|≥1}. Let n,m∈ℕ. 
Let H_0:={h_1^±1,⋯,h_n^±1}⊂Conf(𝔻) be the set of hyperbolic generators and let Γ_0:={γ_1^±1,⋯,γ_m^±1}⊂Conf(𝔻) be the set of parabolic generators. We put G_0=H_0∪Γ_0 and write G_0={g_1^±1,⋯,g_n+m^±1}. We assume that Δ(g_i)∪Δ(g_i^-1)∩Δ(g_j)∪Δ(g_j^-1)= ∅ for i,j∈{1,2,⋯,m+n} with i≠ j. Let G be the subgroup of Conf(𝔻)} generated by G_0. G is called a generalized Schottky group with m parabolic generators generated by G_0. Let G be a generalized Schottky group with m≥1 parabolic generators generated by G_0. Note that a generalized Schottky group is a non-elementary finite generated free Fuchsian group with respect to the generator G_0 (see <cit.>). Furthermore, the Dirichlet fundamental domain of G centered at 0 is given by ⋂_g∈ G_0{z∈𝔻:|g'(z)|<1}. Since the generalized Schottky group G is finite generated, the conical limit set Λ_c(G) is given by the following form (see <cit.>): Λ_c(G)=Λ(G)∖⋃_g∈ G⋃_i=1^m{g(p_i)}, where p_i∈∂𝔻 denotes the fixed point of the parabolic generator γ_i∈ G_0 for i∈{1,⋯, m}. Thus, the set Λ(G)∖Λ_c(G) is a countable set. This implies that the Hausdorff dimension of Λ_c(G) is equal to the Hausdorff dimension of Λ(G). For a subset A of ∂𝔻 we denote by _H(A) the Hausdorff dimension of A. We have the following fundamental fact. <cit.> For a non-elementary finite generated Fuchsian group G containing parabolic elements we have 1/2<_H(Λ_c(G))<1. In this paper, we always assume that G is a generalized Schottky group with one parabolic generator generated by G_0. We recall definitions of the Bowen-Series map with respect to the Dirichlet fundamental domain R centered at 0 and the cusp-winding process. To do this, we put Δ:=⋃_g∈ G_0Δ(g). <cit.> The Bowen-Series map with respect to the Dirichlet fundamental domain R centered at 0 is given by f:Δ→∂𝔻, f|_Δ(g)=g (g∈ G_0). The Bowen-Series map f with respect to the Dirichlet fundamental domain R centered at 0 is simply called the Bowen-Series map. Since the definition of a generalized Schottky group, for h,h̃∈ H_0 we have Δ(h)∩Δ(h̃)=∅ and for γ∈Γ_0 we have Δ(γ^-1)∩Δ(γ)={p}, where p is the fixed point of γ∈Γ_0. Thus, the definition of the Bowen-Series map is well-defined. Also, note that Λ(G) is a f-invariant set. By the choice of the fundamental domain, there exist constants W>Z>1 such that for all h∈ H_0 and x∈Δ(h)∩Λ(G) we have Z≤ |f'(x)|≤ W. For all x∈Λ_c(G) there uniquely exists ω(x)=ω_0ω_1⋯∈ G_0^ℕ∪{0} such that f^n(x)∈Δ(ω_n) and if ω_n is parabolic for some n∈ℕ∪{0} then there exists m∈ℕ such that m>n and ω_m≠ω_n. Then, ω(x) defines a sequence of blocks B_i (i∈ℕ) such that ω(x)=B_1(x)B_2(x)⋯, where each B_i(x) is either a hyperbolic generator, or a maximal block of consecutive appearances of the same parabolic generator. <cit.> The cusp winding process is given by (a_i)_i≥1:Λ_c(G)→ℕ∪{0}, a_i(x)= {[ m if B_i(x)=γ^m+1 (γ∈Γ_0 ); 0 otherwise ]. . Next, we describe the definition of the induced system derive from the Bowen-Series map. We define 𝒜:=⋃_l=1^∞{γ^lh:γ∈Γ_0,h ∈ H_0}∪ H_0 and the set Δ(ω):=Δ(ω_0) ∩ f^-1Δ(ω_1) ∩⋯∩ f^-(n-1)Δ(ω_n-1) for ω=ω_0⋯ω_n-1∈𝒜 and n∈ℕ. Define the inducing time τ:𝒜→ℕ by τ(γ^l+1h)=l+1 (γ∈Γ_0, h∈ H_0 and l∈ℕ) and τ|_H_0=1. The induced Markov map with the Markov partition {Δ(ω) }_ω∈ A is given by f̃: ⋃_ω∈ AΔ(ω) →∂𝔻, f̃|_Δ(ω)=f^τ(ω). Note that a_i=a_1∘f̃^i-1 for i≥ 1 and the maximal f̃-invariant set is the conical limit set Λ_c(G) by (<ref>) (see the proof of <cit.>). In this paper, we denote simply f̃|_Λ_c(G) as f̃ and consider the pair (f̃, Λ_c(G)) as a dynamical system. 
Since for the fixed point p∈Λ(G) of γ∈Γ_0 we have f(p)=p and |f'(p)|=|γ'(p)|=1 (i.e. p is a neutral fixed point of f), f is not uniformaly expanding. But, for all h∈ H_0 and all γ∈Γ_0 we have lim_l→∞inf{|f̃'(x)|:x∈Δ(γ^lh)}=∞ (see <cit.>). Therefore, by (<ref>), f̃ is uniformaly expanding. For α∈ [0,∞] we define the level sets by J(α):={ x ∈Λ_c(G):lim_n→∞1/n∑_i=1^na_i(x) = α}, and the dimension spectrum b:[0,∞]→ℝ, b(α):=_HJ(α). §.§ Thermodynamic formalism In this section, we describe the thermodynamic formalism. For details on the thermodynamic formalism we refer to the reader <cit.> and <cit.>. Recall that f̃ is a Markov map. Thus, f̃ determines a 𝒜×𝒜 matrix A by A_a,b=1 if Δ_b⊂f̃Δ_a and A_a,b=0 otherwise. Define Σ_A:={ω∈𝒜^ℕ∪{0} :A_ω_n-1,ω_n=1, n∈ℕ}. A string (ω_0,ω_1,…,ω_n-1)∈𝒜^n is called an admissible word of length n if A_ω_i-1,ω_i=1 for all i=0,…,n-1. We denote by E^n the set of all admissible words of length n for n∈ℕ and by E^* the set of all admissible words which have a finite length (i.e. E^*=∪_n∈ℕE^n). For convenience, put E^0={∅}. For ω∈ E^n we define the cylinder set of ω by [ω]:={τ∈Σ_A:τ_i=ω_i, 0≤ i≤ n-1}. Note that Σ_A is finitely primitive since hg∈ E^*, gh∈ E^* for all h∈ H_0 and g∈𝒜∖{h^-1}. We endow Σ_A with the topology generated by the cylinders. Since Σ_A is finitely primitive and 𝒜 is not a finite set, Σ_A is not locally compact. We define the shift map σ:Σ_A→Σ_A by σ((ω_0,ω_1,ω_2⋯))=(ω_1,ω_2,⋯) for (ω_0,ω_1,ω_2⋯)∈Σ_A Since f̃ is uniformaly expanding and for each ω∈𝒜 the set Δ(ω) is a compact set, for any ω=(ω_0,ω_1,⋯) ∈Σ_A the set ⋂_j=0^∞f̃^-jΔ(ω_j) is a singleton. Thus, we can define the coding map π:Σ_A→π(Σ_A) given by π(ω)∈⋂_n=0^∞f̃^-jΔ(ω_j), (ω=(ω_0,ω_1,⋯) ∈Σ_A). Since for all ω,τ∈𝒜 we have Δ(ω)∩Δ(τ)=∅, the coding map π is homeomorphism. Moreover, the coding map π satisfies f̃(π(ω))=π(σ(ω)) for ω∈Σ_A and π(Σ_A)=Λ_ c(G). Thus, we can consider the dynamical system (Σ_A,σ) instead of (f̃,Λ_c(G)). A function ϕ on Σ_A is called weakly Hölder if there exist Z>0 and t∈(0,1) such that sup{ |ϕ(ω)-ϕ(τ)|:ω,τ∈Σ_A, ω_i=τ_i for 0≤ i≤ n-1}≤ Zt^n. Since f̃ is uniformaly expanding (see Remark <ref>), log|f̃'∘π| (see <cit.>) is weakly Hölder. Since for all n≥ 2, ω∈ E^n and τ_1,τ_2∈[ω] we have a_1∘π(τ_1)=a_1∘π(τ_2), the cusp winding process a_1∘π is weakly Hölder. Thus, for all (x,y)∈ℝ^2 the potential xa_1∘π+ylog|f̃'∘π| is also weakly Hölder. Let ϕ:Σ_A→ℝ be a continuous function. The topological pressure of ϕ is defined by P(ϕ):=lim_n→∞1/nlog∑_ω∈ E^nexp(sup_τ∈[ω]∑_j=0^n-1ϕ(σ^i(τ))). The Topological pressure satisfies the following variational principle. Let ϕ be a continuous function on Σ_A. We have P(ϕ)=sup{h(μ)+∫ϕ dμ}, where the supremum is taken over all σ-invariant ergodic Borel probability measures μ supported by Σ_A satisfying ∫ϕ dμ>-∞ and h(μ) is the measure-theoretic entropy with respect to σ. Let ϕ be a continuous function on Σ_A. A σ-invariant Borel probability measure μ supported by Σ_A is called an equilibrium measure for ϕ if P(ϕ)= h(μ)+∫ϕ dμ. A Borel probability measure μ supported by Σ_A is called a Gibbs measure for the potential ϕ if there exists a constant M>1 such that for all cylinder [ω] (ω∈ E^n, n∈ℕ) and all τ∈ [ω] we have 1/M≤μ([ω])/exp(-nP(ϕ)+∑_j=0^n-1ϕ(σ(τ)))≤ M. Furthermore, the topological pressure satisfies the following basic and a important result. Let ϕ:Σ_A→ℝ be a weakly Hölder function. Suppose that ϕ satisfies P(ϕ)<∞ and ∑_ω∈𝒜inf(-ϕ|_[ω])exp(infϕ|_[ω])<∞ (i.e. ∫ -ϕ dμ<∞ for all Gibbs measure μ). 
Then, there exists a unique equilibrium measure μ_ϕ such that μ_ϕ is a Gibbs measure. Next, we describe the thermodynamic formalism on the dynamical system (f̃,Λ_ c(G)). The topological pressure of a continuous function ϕ : Λ_ c(G)→ℝ is defined by P_f̃(ϕ)=sup{ h(μ) + ∫ϕ dμ}, where the supremum is taken over all f̃-invariant ergodic Borel probability measures μ supported by Λ_c(G) satisfying ∫ϕ dμ>-∞. By Remark <ref>, there exists a bijection between the set of σ-invariant Borel probability measures supported by Σ_A and the sat of f̃-invariant Borel probability measures supported by Λ_c(G). Thus, for a continuous function ϕ on Λ_c(G) we obtain P_f̃(ϕ)=P(ϕ∘π). We will denote both pressures by P. § MULTIFRACTAL ANALYSIS FOR THE CUSP WINDING PROCESS In this section, we introduce some results from <cit.>. Recall that p(α,q,b)=P(q(-a_1+α)-blog|f̃'|) for (α,q,b)∈ℝ^3. Let ℱ:={(q,b)∈ℝ×(0,∞):P(-qa_1-blog|f̃'|)<∞}. By the following lemma, we can determine the set on which the function (α,q,b)∈ℝ^3→ p(α,q,b)∈ℝ is finite. <cit.> We have ℱ=(0,∞)×[0,∞)∪{0}×(1/2,∞). By Lemma <ref> and Ruelle's formula (see <cit.>), for α,q,b∈(0,∞) we have the following useful formulas: ∂/∂ qp(α,q,b)=∫ (-a_1+α)dμ_α,q,b and ∂/∂ bp(α,q,b)=-∫log|f̃'| dμ_α,q,b, where μ_α,q,b denotes the equilibrium measure of the potential q(-a_1+α)-blog|f̃'̃|. On the other hand, using Lemma <ref>, we obtain Bowen's formula. We have P(-_H(Λ_c(G))log|f̃'|)=0. The following lemma will be used later. <cit.> There exists a constant K>1 such that -2t(log K + log l) ≤inf_ [γ^lh](-tlog |(γ^l)'| ∘π) ≤sup_ [γ^lh](-tlog |(γ^l)'| ∘π) ≤ -2t(-log K + log l) for all l≥ 1, t≤0, γ∈Γ_0, and h∈ H_0. To prove the main theorem, the following proposition is important. <cit.> For all α∈(0,∞) there exists q(α)∈(0,∞) such that p(α,q(α),b(α))=0 and ∂/∂ qp(α,q(α),b(α))=0. Moreover, we obtain b(α)=h(μ_q(α))/λ(μ_q(α)), where μ_q(α) denotes the equilibrium state of the potential q(α)(-a_1+α)-b(α)log|f̃'|. By the proof of <cit.> and <cit.> we obtain the following theorem. The functions α↦ b(α) and α↦ q(α) are real-analytic on (0,∞). Moreover, b(α) is strictly increasing and we have lim_α→∞b(α)=_H(Λ_c(G)). § PROOF OF THE MAIN THEOREM We will use the notations which is used in the previous section. Put s:=_H(Λ_c(G)). We consider the asymptotic behavior of the function α↦ q(α) when α goes to ∞. We have lim_α→∞q(α)=0. Put q_∞:=lim sup_α→∞q(α). For a contradiction we assume that 0<q_∞<∞. Then, there exists a strictly increasing sequence {α_n}_n∈ℕ⊂(0,∞) such that lim_n→∞α_n=∞ and lim_n→∞q(α_n)=q_∞. By Proposition <ref>, we have P(-q(α_n)a_1-b(α_n)log|f̃'|)=-q(α_n)α_n for all n∈ℕ. Therefore, lim_n→∞P(-q(α_n)a_1-b(α_n)log|f̃'|)=lim_n→∞(-q(α_n)α_n)=-∞. By Proposition <ref>, we have q(α_n)∈(0,∞) for all n∈ℕ. Hence, by Lemma <ref>, Theorem <ref> and continuity of the topological pressure, we obtain lim_n→∞P(-q(α_n)a_1-b(α_n)log|f̃'|)=P(-q_∞a_1-slog|f̃'|)∈ℝ. This is a contradiction. For a contradiction we assume that q_∞=∞. Then, there exists a strictly increasing sequence {α_n}_n∈ℕ⊂(0,∞) such that lim_n→∞α_n=∞ and lim_n→∞q(α_n)=∞. By proposition <ref>, we have lim_n→∞p(α_n,q(α_n),b(α_n))=0. On the other hand, by the variational principle for the topological pressure, for all n∈ℕ we obtain p(α_n,q(α_n),b(α_n)) ≥∫ (q(α_n)(-a_1+α_n)-b(α_n)log|f̃'|)dδ_x_h(1) =q(α_n)α_n-b(α_n)log|f̃'(x_h(1))|, where x_h(1):=π(h_1h_1⋯) and δ_x_h(1) denotes the point mass measure at x_h(1). Since lim_n→∞q(α_n)α_n=∞, we obtain lim_n→∞p(α_n,q(α_n),b(α_n))=∞. This is a contradiction. 
Thus, since q(α)∈(0,∞) for all α∈(0,∞), we obtain lim_α→∞q(α)=0. We have lim_α→∞q(α)α=0. By Theorem <ref> and Theorem <ref>, there exists M>0 such that for all α≥ M we have b(α)>1/2. By Proposition <ref>, we have -q(α)α=P(-q(α)a_1-b(α)log|f̃'|) for all α∈ (M,∞). Therefore, by Lemma <ref>, Theorem <ref> and continuity of the topological pressure, we obtain lim_α→∞(-q(α)α) =lim_α→∞P(-q(α)a_1-b(α)log|f̃'|) =P(-slog|f̃'|)=0. The asymptotic behavior of the function α↦ b(α) when α goes to ∞ is associated with the asymptotic behavior of the function α↦ q(α) when α goes to ∞. There exist constants B≥1 and W>0 such that for all α∈(W,∞) we have 1/B≤s-b(α)/∫_α^∞ q(t)dt≤ B. By Proposition <ref>, for all α∈(0,∞) we have exp(P(-q(α)a_1-b(α)log|f̃'|))=exp(-α q(α)). Differentiating this equation with respect to α and using (<ref>), we obtain (-b'(α)λ(μ_α)-α q'(α))exp(-α q(α)) =(-q(α)-α q'(α))exp(-α q(α)) and thus, b'(α) =q(α)/λ(μ_α), where μ_α denotes the equilibrium measure of the potential -q(α)(-a_1+α)-b(α)log|f̃'|. Therefore, by Theorem <ref>, for all α∈(0,∞) we obtain s-b(α)=∫_α^∞λ(μ_t)^-1 q(t)dt. Next, we show that there exist constants C≥ 1 and W>0 such that for all α∈( W,∞) we have λ(μ_α)≤ C. By Theorem <ref> and Lemma <ref> and Theorem <ref>, for all α∈ (0,∞) we have p(α,q(α),b(α))=0, lim_α→∞b(α)=s, lim_α→∞α q(α)=0 and μ_α be a Gibbs measure for the potential -q(α)(-a_1+α)-b(α)log|f̃'|. Therefore, by <cit.>, there exist constants C≥ 1 and W_1>0 such that for all α∈( W_1,∞) and ω∈𝒜 we have μ_α(π[ω])/exp(sup_π([ω]){-q(α)a_1-b(α)log|f̃'|})≤ C. By Theorem <ref>, we can take a small ϵ∈(0,1) such that 1/2<s-ϵ. By Lemma <ref> and Theorem <ref>, there exists a constant W_2>W_1 such that for all α∈(W_2,∞) we have 0<q(α)<1 and s-b(α)<ϵ. By Lemma <ref>, there exist constants M≥1 and W>W_2 such that for all α∈(M,∞), h∈ H_0, γ∈Γ_0 and l≥ 1 we have sup_π([γ^lh]){log|f̃'|}/log l^2≤ M and exp(sup_π([γ^lh]){-b(α)log|f̃'|})/l^-2b(α)≤ M. Thus, since s-ϵ>1/2, for all α∈(W,∞) we obtain λ(μ_α) =∑_h∈ H_0,γ∈Γ_0∑_l=0^∞∫_π([γ^lh])log|f̃'|dμ_α ≤∑_h∈ H_0,γ∈Γ_0∑_l=0^∞sup_π([γ^lh]){log|f̃'|}μ_α(π([γ^lh])) ≤ C∑_h∈ H_0,γ∈Γ_0∑_l=0^∞sup_π([γ^lh]){log|f̃'|}exp(-q(α)l)exp(sup_π([γ^lh]){-b(α)log|f̃'|}) ≤ CM^2∑_h∈ H_0,γ∈Γ_0∑_l=0^∞(2log l)exp(-q(α)l)l^-2b(α) ≤ CM^2∑_h∈ H_0,γ∈Γ_0∑_l=0^∞(2log l)l^-2(s-ϵ)<∞. Since f̃ is uniformly expanding, there exists c>0 such that for all α∈(0,∞) we have c<λ(μ_α) . Therefore, the proof is complete. The following lemma is shown by a standard argument using the Gibbs property, but it connects <cit.> and <cit.>. There exist constants C≥ 1 and Y>0 such that for all α∈(Y,∞) we have 1/C≤∑_l=1^∞exp(-lq(α))l^1-2b(α)/α<C By Proposition <ref>, for all α∈(0,∞) we have (∂/∂ q)p(α,q(α),b(α))=∫ (-a_1+α)dμ_α=0, where μ_α denotes the equilibrium measure of the potential q(α)(-a_1+α)-b(α)log|f̃'|. Thus, we have ∫ a_1dμ_α=α. Repeating the argument in the proof of Lemma <ref>, there exist a constants R,V≥ 1 and Y>0 such that for all α∈(Y,∞) and ω∈𝒜 we have 1/R≤μ_α(π[ω])/exp(sup_π([ω]){-q(α)a_1-b(α)log|f̃'|})≤ R and 1/V≤exp(sup_π([γ^l+1h]){-b(α)log|f̃'|})/l^-2b(α)≤ V. Thus, for all α∈(Y,∞) we obtain α= ∫ a_1dμ_α =∑_h∈ H_0,γ∈Γ_0∑_l=1^∞ lμ_α(π([γ^l+1h])) ≤ R∑_h∈ H_0,γ∈Γ_0∑_l=1^∞ lexp(sup_π([γ^l+1h]){-q(α)l-b(α)log|f̃'|}) ≤ RV∑_h∈ H_0,γ∈Γ_0∑_l=1^∞exp(-q(α)l)l^1-2b(α). Repeating the above argument, for all α∈(Y,∞) we obtain α= ∫ a_1dμ_α≥1/RV∑_h∈ H_0,γ∈Γ_0∑_l=1^∞exp(-q(α)l)l^1-2b(α). Since H_0 and Γ_0 are finite sets, the proof is complete. We denote by Γ the Gamma function and by ζ the zeta function. 
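Before stating the next lemma, the following numerical sanity check (an illustration only, not part of the proof) previews the small-q behaviour that the Mellin-transform argument below extracts: for a fixed exponent b ∈ (1/2, 1) playing the role of b(α), the series K_b(q) = ∑_{l ≥ 1} e^{-lq} l^{1-2b} behaves like Γ(2-2b) q^{-2+2b} + ζ(2b-1) as q → 0⁺. The value b = 0.7 and the mpmath-based evaluation are arbitrary choices made only for this illustration.

```python
# Numerical sanity check (illustration only) of the asymptotics
#   K_b(q) = sum_{l>=1} exp(-l q) l^(1-2b)  ~  Gamma(2-2b) q^(2b-2) + zeta(2b-1)
# as q -> 0+, for a fixed exponent b in (1/2, 1).
from mpmath import mp, exp, gamma, zeta, nsum, inf

mp.dps = 30
b = mp.mpf("0.7")                        # plays the role of b(alpha) in (1/2, 1)

def K(q):
    return nsum(lambda l: exp(-l * q) * l ** (1 - 2 * b), [1, inf])

for q in [mp.mpf("0.1"), mp.mpf("0.01"), mp.mpf("0.001")]:
    main_term = gamma(2 - 2 * b) * q ** (2 * b - 2) + zeta(2 * b - 1)
    print(q, K(q), main_term, K(q) / main_term)
# The ratio K(q)/main_term tends to 1, matching the comparability in the lemma.
```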
There exist constants Z≥ 1 and Q>0 such that for all α∈(Q,∞) we have 1/Z≤∑_l=1^∞exp(-lq(α))l^1-2b(α)/q(α)^-2+2b(α)≤ Z By Theorem <ref>, we can take a small ϵ∈(0,1/2) such that 1/2<s-ϵ and s+ϵ<1. By Theorem <ref>, there exists a constant W>0 such that for all α∈(W,∞) we have s-ϵ<b(α). Let α∈(W,∞). For q∈(0,∞) we put K_b(α)(q):= ∑_l=1^∞exp(-lq)l^1-2b(α). The Mellin transform of K_b(α)(q) is K^*_b(α)(u)=Γ(u)ζ(2b(α)-1+u) (see <cit.>). Γ(u) has a pole of order 1 at u=0 and ζ(2b(α)-1+u) has a pole of order 1 at u=2-2b(α). Thus, by Mellin inversion theorem (see <cit.>), for D>2-2b(α) we have K_b(α)(q)=1/2π i∫_D-i∞^D+i∞Γ(u)ζ(2b(α)-1+u)q^-udu. Put δ:=2ϵ-1. Note that -1<δ<0 and for all α∈(W,∞) we have -1<2b(α)-1+δ<0 since δ<2b(α)-1+δ<2(s-(1-ϵ))<0. By the choice of W, we have 2-2b(α)<1. We fix 2-2b(α)<D<1. By <cit.>, there exist constants β, T, C>0 such that for all δ≤ x≤ D and |t|≥ T we have |Γ(x+it)ζ(2b(α)-1+x+it)|≤ Cexp(-β |t|). Moreover, the residue of Γ(u) at u=0 is 1 and the residue of ζ(2b(α)-1+u) at u=2-2b(α) is also 1. Therefore, shifting the integration line to the left, we obtain K_b(α)(q)= Γ(2-2b(α))q^-2+2b(α)+ζ(2b(α)-1) +1/2iπ∫_δ-i∞^δ+i∞Γ(u)ζ(2b(α)-1+u)q(α)^-udu. By Lemma <ref>, Theorem <ref> and (<ref>), we have lim_α→∞q(α)^2-2b(α)|1/2iπ∫_δ-i∞^δ+i∞Γ(u)ζ(2b(α)-1+u)q(α)^-udu |=0. Note that for all α∈(W,∞) we have 0<2(s-ϵ)-1<2b(α)-1<2(1-ϵ)-1<1. By the continuity of ζ on [2(s-ϵ)-1,2(1-ϵ)-1], Lemma <ref> and Theorem <ref>, we have lim_α→∞q(α)^2-2b(α)|ζ(2b(α)-1)|=0. Since 0<2-2s<1, we have Γ(2-2s)>0. We take η>0 such that η<Γ(2-2s). Since lim_α→∞b(α)=s and Γ is continuous on a small neighborhood of 2-2s, there exists Y>W such that for all α∈(Y,∞) we have η<Γ(2-2b(α)). Therefore, by (<ref>), (<ref>) and (<ref>), the proof is complete. By Lemma <ref> and Proposition <ref>, we obtain the following proposition. There exist constants Z≥ 1 and Q>0 such that for all α∈ (Q,∞) we have 1/Z≤q(α)/α^1/(-2+2b(α))≤ Z. By Lemma <ref> and Proposition <ref>, there exist a constants C≥ 1 and L>0 such that for all α∈(L,∞) we have 1/C≤s-b(α)/∫_α^∞ t^1/(-2+2b(t))dt≤ C. By Theorem <ref>, for all x≤ 0 we have lim_α→∞(s-b(α))α^x=0. Let x∈(0,1/(2-2s)-1). Then, there exists ϵ∈(0,2s-1) such that x=1/(2-2s+ϵ)-1. By Theorem <ref>, there exists a constant W>L such that for all α∈(W,∞) we have s-ϵ/2<b(α). Since ϵ∈(0,2s-1), we have 1/2<s-ϵ/2<1 and 1/-2+2(s-ϵ/2)+1 =-1+2(s-ϵ/2)/-2+2(s-ϵ/2)<0. Thus, since for all α∈(W,∞) we have 1/(-2+2b(α))<1/(-2+2(s-ϵ/2)), for all α∈(W,∞) we obtain ∫_α^∞ t^1/(-2+2b(t))dt ≤∫_α^∞ t^1/(-2+2(s-ϵ/2))dt =-(1/-2+2(s-ϵ/2)+1)α^1/(-2+2(s-ϵ/2))+1. Hence, by (<ref>), we obtain lim_α→∞(s-b(α))α^x=0. Let x∈(1/(2-2s)-1,∞). There exists δ∈(0,∞) such that x=1/(2-2s)-1+δ. By Theorem <ref>, we have 1/2s-2-δ/2+1=-1+2s-δ(s-1)/2s-2<-1+2s/2(s-1)<0 Thus, we obtain ∫_α^∞ t^1/(-2+2b(t))dt ≥∫_α^∞ t^1/(-2+2s)-δ/2dt =-(1/-2+2s-δ/2+1)α^1/(-2+2s)-δ/2+1. Hence, by (<ref>), we obtain lim_α→∞(s-b(α))α^x=∞. abbrv *
http://arxiv.org/abs/2407.13496v2
20240718132612
Solvability and Optimal Controls of Impulsive Stochastic Evolution Equations in Hilbert Spaces
[ "Javad A. Asadzade", "Nazim I. Mahmudov" ]
math.OC
[ "math.OC" ]
Solvability and Optimal Controls of Impulsive Stochastic Evolution Equations in Hilbert Spaces. Authors: Javad A. Asadzade (javad.asadzade@emu.edu.tr) [1] and Nazim I. Mahmudov (nazim.mahmudov@emu.edu.tr) [2]. These authors contributed equally to this work. [1] Department of Mathematics, Eastern Mediterranean University, Mersin 10, 99628, T.R., 5380, North Cyprus, Turkey. [2] Department of Mathematics, Eastern Mediterranean University, Mersin 10, 99628, T.R., 5380, North Cyprus, Turkey; Research Center of Econophysics, Azerbaijan State University of Economics (UNEC), Istiqlaliyyat Str. 6, Baku, 1001, Azerbaijan; Jadara University Research Center, Jadara University, Jordan. This paper examines the solvability and optimal control of a class of impulsive stochastic evolution equations in a Hilbert space. First, we investigate the existence and uniqueness of mild solutions for the considered system. Next, we determine the conditions necessary for the existence of optimal control pairs. Finally, we present an example to illustrate the effectiveness of our theoretical results. MSC Classification: 47J35, 60H10, 49J15. § INTRODUCTION Many evolutionary processes show impulsive behavior, experiencing short-term disturbances at specific moments. Such dynamic systems with impulsive traits are common in fields such as artificial intelligence, genetics, biological systems, population dynamics, neural networks, robotics, telecommunications, and computer science. For a detailed analysis of impulsive systems, readers can refer to the monograph by Lakshmikantham et al. <cit.>. The study of stochastic differential equations (SDEs) with impulsive effects has garnered significant attention in recent years due to their wide range of applications in various fields such as finance, engineering, and biological systems. These systems are characterized by sudden changes in the state of the system at specific moments, which can model real-world phenomena more accurately than continuous processes alone. The main advantage of incorporating impulsive effects is that they provide a more realistic representation of systems that undergo abrupt changes, making the models more applicable to real-world scenarios. Recently, many researchers have focused on impulsive differential equations, particularly impulsive evolution equations (see <cit.>). For instance, Mahmudov in <cit.> examines the following linear impulsive evolution equation in Hilbert space: x^'(t)=Ax(t)+Bu(t), t∈[0,T]∖{t_1,…,t_n}, Δ x(t_k+1)=D_k+1x(t_k+1)+E_k+1v_k+1, k=0,…,n-1, x(0)=x_0. Here, the state variable x(·) takes values in a Hilbert space H with the norm ‖x‖ = √(⟨ x, x ⟩). The control function u(·) belongs to L^2([0, T], U), where U is another Hilbert space, and v_k ∈ U for k = 1, …, n. In that article, Mahmudov provided a representation of the solution in terms of semigroup and impulsive operators and presented the necessary and sufficient conditions for the approximate controllability of linear impulsive evolution equations using the concept of an impulsive resolvent operator. The main advantage of the impulsive effect described in <cit.>, Δ x(t_k+1) = D_k+1 x(t_k+1) + E_k+1 v_k+1, is that it allows for the modeling of instantaneous and significant changes in the system state at specific moments.
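In finite dimensions the structure of this linear impulsive system is easy to simulate, which may help fix ideas (an illustration with arbitrarily chosen matrices, not an example taken from the paper): between impulse times the state flows by the matrix exponential, and at each t_{k+1} it jumps to (I + D_{k+1}) x(t_{k+1}) + E_{k+1} v_{k+1}.

```python
# Finite-dimensional illustration (toy matrices, not from the paper) of the
# linear impulsive dynamics: between impulses x' = A x + B u, and at t_{k+1}
# the state jumps to (I + D_{k+1}) x(t_{k+1}) + E_{k+1} v_{k+1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
D = [np.array([[0.1, 0.0], [0.0, -0.2]])] * 3        # D_1, D_2, D_3
E = [np.array([[0.0], [0.5]])] * 3                   # E_1, E_2, E_3
v = [np.array([1.0]), np.array([-1.0]), np.array([0.5])]
impulse_times = [0.5, 1.0, 1.5]

def flow(x, t0, t1, u):
    """Exact step for x' = A x + B u with constant u on [t0, t1] (A invertible)."""
    Phi = expm(A * (t1 - t0))
    Int = np.linalg.solve(A, Phi - np.eye(2))        # integral of e^{A s} over the step
    return Phi @ x + Int @ (B @ u)

x, t_prev, u = np.array([1.0, 0.0]), 0.0, np.array([0.2])
for k, tk in enumerate(impulse_times):
    x = flow(x, t_prev, tk, u)                       # evolve up to the impulse time
    x = (np.eye(2) + D[k]) @ x + E[k] @ v[k]         # jump: (I + D_{k+1}) x + E_{k+1} v_{k+1}
    t_prev = tk
x = flow(x, t_prev, 2.0, u)                          # final segment up to T = 2
print(x)
```

The stochastic system studied below adds the nonlinear drift f and the noise term σ dW between impulses, but the jump mechanism at the times t_k is the same as in this deterministic sketch.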
This form of impulsive effect captures the impact of sudden external inputs or internal adjustments that occur abruptly, enabling a more accurate depiction of systems where such events are frequent and critical. By incorporating these impulses, the model can better represent real-world dynamics where abrupt shifts can dramatically influence the system's behavior. Mathematically, the impulsive effect Δ x(t_k+1) = D_k+1 x(t_k+1) + E_k+1 v_k+1 indicates that at each impulsive moment t_k+1, the state x(t_k+1) is instantaneously altered by a linear transformation D_k+1 and an additional input E_k+1 v_k+1. This formulation is particularly useful for modeling scenarios where the state experiences abrupt changes due to external forces or internal system dynamics. In contrast, standard impulsive effects typically involve sudden changes in the system state at predefined times, usually represented by a jump condition such as: Δ x(t_k) = I_k(x(t_k^-)) where I_k is an impulse function and x(t_k^-) denotes the state just before the impulse at time t_k. These standard impulses are useful for modeling systems with regular or predictable disturbances. However, they may not fully capture the complexity of systems that experience both predictable and unpredictable impulses. The impulse function I_k usually describes a predefined transformation or adjustment applied to the state, which might be less flexible in scenarios involving complex, abrupt changes. Thus, while the standard impulse model Δ x(t_k) = I_k(x(t_k^-)) is suitable for simpler or more predictable impulsive systems, the model Δ x(t_k+1) = D_k+1 x(t_k+1) + E_k+1 v_k+1 offers a more nuanced and flexible approach for representing intricate and frequent abrupt changes in the system state. Therefore, the factors we have mentioned above make the investigation of the qualitative properties of such impulsive evolution equations even more relevant. On the other hand, stochastic systems have always been of great interest to researchers, which is why the stochastic version of this system remains perpetually significant (see <cit.>). The dynamic system under consideration in this paper is governed by a stochastic differential equation with impulsive effects, described by: x^'(t)=Ax(t)+Bu(t)+f(t,x(t))+σ(t,x(t))dW/dt, t∈[0,T]∖{t_1,…,t_n}, Δ x(t_k+1)=D_k+1x(t_k+1)+E_k+1v_k+1, k=0,…,n-1, x(0)=x_0. where, the state variable x(·) takes values in a Hilbert space H with the norm x = √(⟨ x, x ⟩). The control function u(·) belongs to L^2([0, T], U), where U is another Hilbert space, and v_k ∈ U for k = 1, …, n. In this context, A is the infinitesimal generator of a strongly continuous semigroup of bounded linear operators T(t) in H. The linear operators involved are B ∈ L(U, H), D_k ∈ L(H, H), and E_k ∈ L(U, H). At the discontinuity points t_k (where k = 1, …, n and 0 = t_0 < t_1 < t_2 < ⋯ < t_n < t_n+1 = T), the jump in the state variable is given by Δ x(t_k) = x(t_k^+) - x(t_k^-), with x(t_k^±) = lim_h → 0^± x(t_k + h) and the assumption that x(t_k^-) = x(t_k). For operator composition, ∏_j=1^k A_j denotes the composition A_1, A_2, ⋯, A_k. For j = k+1 to k, ∏_j=k+1^k A_j equals 1. Similarly, ∏_j=k^1 A_j denotes the composition A_k, A_k-1, ⋯, A_1, and ∏_j=k^k+1 A_j = 1. This paper focuses on several important aspects of stochastic impulsive systems. First, we explore the existence and uniqueness of mild solutions, which are crucial for understanding how the system behaves over time. 
By using fixed point theorems and the properties of semigroups, we determine the conditions that ensure a unique mild solution exists. Furthermore, we explore the optimal control problem for these systems. The goal is to find control functions that optimize a specific performance criterion. Using methods from stochastic control theory and functional analysis, we identify the necessary conditions for the existence of optimal control pairs. We also show how these conditions apply through an illustrative example. The contributions of this paper provide a comprehensive framework for analyzing and controlling impulsive stochastic systems in Hilbert spaces, extending existing theories and offering new insights into their practical implementations. § MATHEMATICAL PRELIMINARIES Let (Ω, ℱ, {ℱ_t}_t ≥ 0, ℙ) be a filtered complete probability space satisfying the usual conditions, where the filtration {ℱ_t}_t ≥ 0 is a right continuous increasing family and ℱ_0 contains all ℙ-null sets.Let {e_k, k ∈ℕ} denote a complete orthonormal basis of K. We have a cylindrical Brownian motion {W(t) : t ≥ 0} taking values in K, defined on a probability space (Ω, ℱ, {ℱ_t}_t ≥ 0, ℙ). The covariance operator Q ≥ 0 associated with W(t) is finite and nuclear, with a trace Tr(Q) = ∑_k=1^∞λ_k = λ < ∞. This operator satisfies Qe_k = λ e_k for each k ∈ℕ. Let {W_k(t), k ∈ℕ} be a sequence of one-dimensional standard Wiener processes mutually independent on (Ω, ℱ, {ℱ_t}_t ≥ 0, ℙ) such that W(t) = ∑_k=1^∞√(λ_k) W_k(t) e_k, t ≥ 0. Additionally, we assume that ℱ_t = σ{W(s), 0 ≤ s ≤ t}, which represents the sigma-algebra generated by the cylindrical Brownian motion {W(s)}_s ≥ 0. Here, ℱ_b = ℱ, indicating that the filtration is complete. The space L^2_0 = L^2(Q^1/2 K, H) is defined as the set of all Hilbert-Schmidt operators from Q^1/2 K to H. It possesses an inner product given by ⟨φ, ψ⟩ = Tr(φ Q ψ^*), where φ and ψ are elements of L^2_0. This space is notable for being separable and forms a Hilbert space under this inner product. The collection of all ℱ_b-measurable, square-integrable H-valued random variables, denoted L^2(Ω, H), is a Banach space equipped with the norm x_L^2 = ( 𝔼x(ω)^2 )^1/2, where 𝔼 denotes the expectation with respect to the measure ℙ. For more details on stochastic integrals, see the books of <cit.>. Let C([0, T], L^2(Ω, H)) be the Banach space of all continuous mappings from [0, T] to L^2(Ω, H) with the norm x_C = ( sup_t ∈ [0,T]𝔼x(t)^2 )^1/2. Let PC([0, T], L^2(Ω, H)) = { x : [0, T] → L^2(Ω, H), x(t) is continuous at t ≠ t_i, left continuous at t = t_i, and right limit x(t^+_i) exists for i = 1, 2, …, n }. Let PC([0, T], L^2 (Ω, H)) be the space of all ℱ_t-adapted measurable stochastic processes x ∈ PC([0, T], L^2 (Ω, H)) with the norm x_PC = ( sup_t ∈ [0,T]𝔼_k x(t)^2 )^1/2. It is easy to see that (PC, ·_PC) is a Banach space. We suppose that U is a separable reflexive Hilbert space from which the controls u take values. Let L^2_F([0,T], U) = { u : [0,T] ×Ω→ U : u is ℱ_t-adapted measurable stochastic processes and 𝔼∫_0^T u(t)^2 dt < ∞}. Let Y be a nonempty closed bounded convex subset of U. Define the admissible control set U_ad = { u(·) ∈ L^2_F([0,T], U) | u(t) ∈ Y, ∀ t ∈ [0,T] }. We assume that the control function u ∈ U_ad, where U_ad represents the admissible control set, and B ∈ L(U, H). Here, L(U, H) denotes the space of bounded linear operators from the Banach space U to the Hilbert space H. 
This implies that B is a bounded linear operator that maps elements from U to H, ensuring the continuity and boundedness of the operator B in the context of the control and state spaces. Let X be a Banach space and ℒ(X) be the Banach space of bounded linear operators on X. <cit.> A one parameter family {𝒯(t)}_t ≥ 0⊂ℒ(X) is a semigroup of bounded linear operators on X if (i) T(t)T(s) = T(t + s), for t, s ≥ 0; (ii) T(0) = ℐ, where ℐ denotes the identity operator in X. This semigroup property is fundamental in the analysis of operator families in Banach spaces, as it provides a structure for how these operators combine over time. The following result will be used in the sequel of this paper: (see <cit.>) For any p ≥ 1 and for arbitrary L^2_0-valued predictable process χ(·) such that sup_s ∈ [0,t]𝔼∫_0^s χ(τ) dW(τ) ^2p≤ (p(2p - 1))^p {∫_0^t ( 𝔼χ(s)_L^2_0^2p)^1/p ds }^p, for t ∈ [0, ∞). We now consider Krasnoselskii's Fixed Point Theorem, which is very important in mathematical analysis and applications, especially in the study of functional equations and nonlinear situations. This theorem is an effective technique for demonstrating the existence of solutions to problems for which a direct construction or explicit solution is difficult or impossible to find. (Krasnoselskii’s Fixed Point Theorem <cit.>) Let X be a Banach space, let Y be a bounded closed and convex subset of X, and let F_1, F_2 be maps of Y into X such that F_1x + F_2y ∈ Y for every pair x, y ∈ Y. If F_1 is a contraction and F_2 is completely continuous, then the equation F_1x + F_2x = x has a solution in Y. § EXISTENCE AND UNIQUENESS OF MILD SOLUTION In this section, we will prove the existence and uniqueness of the mild solution to (<ref>) by using Krasnoselskii's fixed point theorem. For this purpose, first of all, similarly to Lemma 3 in <cit.>, we define the mild solution of (<ref>) in the following definition. For any given u ∈ U_ad, a stochastic process x is said to be a mild solution of (<ref>) on [0,T] if x ∈ PC([0,T],L^2(Ω,H)) and satisfies the following conditions: (i) x(t) is measurable and adapted to ℱ_t. (ii) x(t) satisfies the integral equation: x(t)= T(t)x(0)+∫_0^t T(t-s)[Bu(s)+f(s,x(s))]ds +∫_0^t T(t-s)σ(s,x(s))dW(s), 0≤ t≤ t_1, T(t-t_k)x(t^+_k)+∫_t_k^t T(t-s)[Bu(s)+f(s,x(s))]ds +∫_t_k^t T(t-s)σ(s,x(s))dW(s), t_k<t≤ t_k+1, k=1,2,…, n, where x(t^+_k)= ∏_j=k^1(ℐ+D_j)T(t_j-t_j-1)x_0 + ∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)Bu(s)ds + ∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)f(s,x(s))ds + ∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)σ(s,x(s))dW(s) + ∑_i=2^k∏_j=k^i(ℐ+D_j) T(t_j-t_j-1) E_i-1v_i-1+E_kv_k. To prove the main results, we list some assumptions that will be utilized in the application of Krasnoselskii's fixed point theorem. We suppose that A generates a compact C_0-semigroup T(t) (t ≥ 0) of uniformly bounded linear operators in H. That is, there exists a positive constant M ≥ 1 such that T(t)≤ M for all t ≥ 0. Let the function f: [0,T] × H → H is continuous. Assume that the following assumptions are satisfied: (i) There exists a constant L_f > 0 such that f(t,x)^2≤ L_f(1+ x ^2) for all t ∈ [0,T] and x ∈ H. (ii) For some r>0, there is a constant L̃_f such that for every t∈ [0,T] and x,y∈ H satisfying ‖ x‖^2≤ r, ‖ y‖^2≤ r, f(t,x) - f(t,y)^2≤L̃_fx - y^2 . The function σ: [0,T] × H → L^2_0 is continuous. 
Assume that the following assumptions are satisfied: (i) There exists a constant L_σ > 0 such that σ(t,x)^2_L^2_0≤ L_σ(1+ x ^2) for all t ∈ [0,T] and x ∈ H. (ii) For some r>0, there is a constant L̃_σ such that for every t∈ [0,T] and x,y∈ H satisfying ‖ x‖^2≤ r, ‖ y‖^2≤ r, σ(t,x) - σ(t,y)^2_L^2_0≤L̃_σx - y^2 . With these assumptions in place, we are now prepared to proceed with the proof of the existence and uniqueness of the mild solution for (<ref>) using Krasnoselskii's fixed point theorem. Suppose that A generates a compact C_0-semigroup T(t) (t ≥ 0) of uniformly bounded operators in a Hilbert space H. If the assumptions (<ref>), (<ref>) and (<ref>) are satisfied, then the impulsive stochastic system (<ref>) has at least one mild solution in PC([0,T],L^2(Ω,H)) provided that max{𝒩,𝒦_0}< 1/9, and max{M^2; k}<1, where 𝒩=M^2+M^2(T^2L_f+ TL_σ), 𝒦_0 = M^2k+2∏_j=1^k (1 + D_j)^2 + (M^4 + M^2) B^2 N (T^2 L_f + T L_σ), k=3M^2k+2∏_j=1^k (1 + D_j)^2+3 M^4 N(T^2 L_f+ TL_σ). For each constant r_0>0, let ℬ_r_0={x∈ PC([0,T], L^2([0,T],H)): ‖ x‖^2_PC≤ r_0}. It is easy to see that B_r_0 is a bounded closed convex set in PC([0, T], L^2 ([0,T], H)). Define operators F_1 and F_2 on B_r_0 as follows: (F_1 x)(t) = T(t)x(0), for t_0 < t ≤ t_1, T(t-t_k)∏_j=k^1(ℐ+D_j)T(t_j-t_j-1)x_0 +T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)Bu(s)ds +T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)f(s,x(s))ds +T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)σ(s,x(s))dW(s) +T(t-t_k)∑_i=2^k∏_j=k^i(ℐ+D_j) T(t_j-t_j-1) E_i-1v_i-1+T(t-t_k)E_kv_k, for t_k < t ≤ t_k+1, k ≥ 1, (F_2 x)(t) = ∫_0^tT(t-s)(Bu(s) + f(s, x(s))) ds +∫_0^tT(t-s)σ(s, x(s)) dW(s), for t_0 < t ≤ t_1, ∫_t_k^tT(t-s)(Bu(s) + f(s, x(s))) ds +∫_t_k^tT(t-s)σ(s, x(s)) dW(s), for t_k < t ≤ t_k+1, k ≥ 1. Clearly, x is a mild solution of (<ref>) if and only if the operator equation x = F_1x + F_2x has a solution. To establish this, we will demonstrate that the operator F_1 + F_2 has a fixed point by applying Krasnoselskii's Fixed Point Theorem (Lemma <ref>). For this, we proceed in several steps. Step 1.To prove that there exists a positive number r_0 such that F_1x + F_2y ∈ℬ_r_0 whenever x, y ∈ℬ_r_0, we proceed as follows: Choose r_0≥max{4[𝒮+M^2‖ B‖^2T∫_0^T𝔼‖ u(s)‖^2ds]/1-4𝒩, 9[𝒦_1+𝒦_2∫_0^T 𝔼u(s)^2 ds ]/1-9𝒦_0} Then, for any pair x, y ∈ℬ_r_0 and t ∈ [0, T], by applying Lemma (<ref>), assumptions (<ref>), (<ref>) and (<ref>), along with Hölder's inequality, and Ito isometry, we obtain the following results: For t_0<t≤ t_1, 𝔼‖ (F_1x)(t)+(F_2x)(t)‖^2≤ 4𝔼‖ T(t)x(0)‖^2+ 4𝔼‖∫_0^t T(t-s)Bu(s)ds‖^2 +4𝔼‖∫_0^t T(t-s)f(s,x(s))ds‖^2+4𝔼‖∫_0^t T(t-s)σ(s,x(s))dW(s)‖^2 ≤ 4 M^2‖ x_0‖^2+4M^2‖ B‖^2T ∫_0^t𝔼‖ u(s)‖^2ds+4M^2(TL_f+L_σ)∫_0^t(1+𝔼‖ x(s)‖^2)ds ≤ 4 M^2 r_0+4M^2‖ B‖^2T ∫_0^T𝔼‖ u(s)‖^2ds+4M^2(T^2L_f+TL_σ)(1+r_0) =4𝒩r_0+4[𝒮+M^2‖ B‖^2T∫_0^T𝔼‖ u(s)‖^2ds]≤ r_0, where 𝒩=M^2+M^2T^2L_f+M^2TL_σ, 𝒮=M^2T^2L_f+M^2TL_σ. Given t_k < t ≤ t_k+1 for k ≥ 1, we aim to derive an inequality using the Jensen inequality for the expectation of the norm square of the sum of two functionals (F_1x)(t) and (F_2x)(t). 
Specifically, we start with: 𝔼‖ (F_1 x)(t) + (F_2 x)(t) ‖^2 ≤ 9 𝔼‖ T(t - t_k) ∏_j=k^1 (ℐ + D_j) T(t_j - t_j-1) x_0 ‖^2 + 9 𝔼‖ T(t - t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) Bu(s) ds ‖^2 + 9 𝔼‖ T(t - t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) f(s, x(s)) ds ‖^2 + 9 𝔼‖ T(t - t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) σ(s, x(s)) dW(s) ‖^2 + 9 𝔼‖ T(t - t_k) ∑_i=2^k ∏_j=k^i (ℐ + D_j) T(t_j - t_j-1) E_i-1 v_i-1‖^2 + 9 𝔼‖ T(t - t_k) E_k v_k ‖^2 + 9 𝔼‖∫_t_k^t T(t - s) Bu(s) ds ‖^2 + 9 𝔼‖∫_t_k^t T(t - s) f(s, x(s)) ds ‖^2 + 9 𝔼‖∫_t_k^t T(t - s) σ(s, x(s)) dW(s) ‖^2. Using the triangle inequality, Ito isometry, Lipschitz conditions, and the boundedness of the semigroup T(t), we get: 𝔼‖ (F_1 x)(t) + (F_2 x)(t) ‖^2 ≤ 9 M^2k+2∏_j=1^k (1 + D_j)^2 x_0^2 + 9M^4 B^2 𝔼(∑_i=1^k C_i ∫_t_i-1^t_iu(s) ds )^2 + 9M^4 B^2 𝔼(∑_i=1^k C_i ∫_t_i-1^t_if(s, x(s)) ds )^2 + 9M^4 B^2 𝔼(∑_i=1^k C_i ∫_t_i-1^t_iσ(s, x(s)) dW(s) )^2 + 9M^2 ∑_i=2^k ∏_j=i^k (1 + D_j)^2 E_i-1^2 𝔼v_i-1^2 + 9M^2 E_k^2 𝔼v_k^2 + 9M^2 B^2 T ∫_0^T 𝔼u(s)^2 ds + 9M^2 T L_f ∫_t_k^t (1 + 𝔼x(s)^2) ds + 9M^2 L_σ∫_t_k^t (1 + 𝔼x(s)^2) ds, where C_i = ∏_j=k^i+1 (1 + D_j) T(t_j - t_j-1) (1 + D_i), N = ∑_i=1^k C_i^2. Using the Cauchy-Schwarz inequality, Ito isometry, and the assumptions (<ref>) and (<ref>), we have: 𝔼‖ (F_1 x)(t) + (F_2 x)(t) ‖^2 ≤ 9 M^2k+2∏_j=1^k (1 + D_j)^2 r_0 + 9M^4 B^2 T ∑_i=1^k C_i^2 ∑_i=1^k ∫_t_i-1^t_i𝔼u(s)^2 ds + 9M^4 B^2 T ∑_i=1^k C_i^2 ∑_i=1^k ∫_t_i-1^t_i𝔼f(s, x(s))^2 ds + 9M^4 B^2 ∑_i=1^k C_i^2 ∑_i=1^k ∫_t_i-1^t_i𝔼σ(s, x(s))^2 ds + 9M^2 ∑_i=2^k ∏_j=i^k (1 + D_j)^2 E_i-1^2 𝔼v_i-1^2 + 9M^2 E_k^2 𝔼v_k^2 + 9M^2 B^2 T ∫_0^T 𝔼u(s)^2 ds + 9M^2 (T^2 L_f + T L_σ)(1 + r_0) ≤ 9 M^2k+2∏_j=1^k (1 + D_j)^2 r_0 + 9M^4 B^2 T N ∫_0^T 𝔼u(s)^2 ds + 9M^4 B^2 N (TL_f + L_σ) ∫_0^T (1 + 𝔼x(s)^2) ds + 9M^2 ∑_i=2^k ∏_j=i^k (1 + D_j)^2 E_i-1^2 𝔼v_i-1^2 + 9M^2 E_k^2 𝔼v_k^2 + 9M^2 B^2 T ∫_0^T 𝔼u(s)^2 ds + 9M^2 (T^2 L_f + T L_σ)(1 + r_0) ≤ 9 M^2k+2∏_j=1^k (1 + D_j)^2 r_0 + 9 (M^4 N + M^2) B^2 T ∫_0^T 𝔼u(s)^2 ds + 9M^4 B^2 N (T^2 L_f +T L_σ)(1 + r_0) + 9M^2 (T^2 L_f + T L_σ)(1 + r_0) + 9M^2 ∑_i=2^k ∏_j=i^k (1 + D_j)^2 E_i-1^2 𝔼v_i-1^2 + 9M^2 E_k^2 𝔼v_k^2 =9 𝒦_0 r_0 + 9 [ 𝒦_1 + 𝒦_2 ∫_0^T 𝔼u(s)^2 ds ] ≤ r_0, where 𝒦_0 = M^2k+2∏_j=1^k (1 + D_j)^2 + (M^4 + M^2) B^2 N (T^2 L_f + T L_σ), 𝒦_1 = M^2 ∑_i=2^k ∏_j=i^k (1 + D_j)^2 E_i-1^2 𝔼v_i-1^2 + M^2 E_k^2 𝔼v_k^2 + (M^4 + M^2) B^2 N (T^2 L_f + TL_σ), 𝒦_2 = (M^4 N + M^2) B^2 T. Consequently, F_1+F_2 maps ℬ_r_0 to ℬ_r_0. Step 2: Show that F_1 is a contraction To show that F_1 is a contraction mapping on the set ℬ_r, we need to prove that there exists a constant 0 < k < 1 such that for all x, y ∈ℬ_r, F_1 x - F_1 y_PC≤ k x - y_PC. Let x, y ∈ℬ_r. We will estimate F_1 x - F_1 y_PC for t_0 < t ≤ t_1 and t_k < t ≤ t_k+1. For t_0 < t ≤ t_1: 𝔼 (F_1 x)(t) - (F_1 y)(t) ^2 = 𝔼 T(t) (x(0) - y(0)) ^2. Using the properties of the C_0-semigroup T(t): 𝔼 T(t) (x(0) - y(0)) ≤ M^2𝔼 x(0) - y(0) . For t_k < t ≤ t_k+1, k ≥ 1: 𝔼 (F_1 x)(t) - (F_1 y)(t) ^2≤ 3𝔼 T(t-t_k) ∏_j=k^1 (ℐ + D_j) T(t_j - t_j-1) (x_0 - y_0) ^2 + 3𝔼 T(t-t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) (f(s, x(s)) - f(s, y(s))) ds ^2 + 3𝔼 T(t-t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) (σ(s, x(s)) - σ(s, y(s))) dW(s) ^2. 
Using the properties of the C_0-semigroup T(t), the boundedness of operators D_j, and assumptions on f and σ: 𝔼 T(t-t_k) ∏_j=k^1 (ℐ + D_j) T(t_j - t_j-1) (x_0 - y_0) ^2≤ M^2k+2∏_j=1^k (1 + D_j)^2 𝔼x_0 - y_0^2 . Since x, y ∈ℬ_r: 𝔼x_0 - y_0^2≤x - y_PC^2. Thus, 𝔼 T(t-t_k) ∏_j=k^1 (ℐ + D_j) T(t_j - t_j-1) (x_0 - y_0) ^2≤ M^2k+2∏_j=1^k (1 + D_j)^2 x - y_PC^2. For the second term, using the properties of T(t) and D_j, and the Lipschitz continuity of f: 𝔼 T(t-t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) (f(s, x(s)) - f(s, y(s))) ds ^2 ≤ M^4𝔼( ∑_i=1^k ∏_j=i+1^k (1 + D_j)‖ T(t_j - t_j-1) ‖ (1 + D_i) ∫_t_i-1^t_if(s, x(s)) - f(s, y(s)) ds )^2 ≤ M^4𝔼( ∑_i=1^k C_i∫_t_i-1^t_if(s, x(s)) - f(s, y(s)) ds )^2 ≤ M^4T N ∑_i=1^k ∫_t_i-1^t_i𝔼f(s, x(s)) - f(s, y(s))^2 ds ≤ M^4T NL_f∑_i=1^k ∫_t_i-1^t_i𝔼 x(s) - y(s)^2 ds ≤ M^4T^2 NL_fx - y_PC^2. For the third term, using the properties of T(t) and D_j, and the Lipschitz continuity of σ: 𝔼 T(t-t_k) ∑_i=1^k ∏_j=k^i+1 (ℐ + D_j) T(t_j - t_j-1) (ℐ + D_i) ∫_t_i-1^t_i T(t_i - s) (σ(s, x(s)) - σ(s, y(s))) dW(s) ^2 ≤ M^4𝔼( ∑_i=1^k ∏_j=i+1^k (1 + D_j)‖ T(t_j - t_j-1) ‖ (1 + D_i) ∫_t_i-1^t_iσ(s, x(s)) - σ(s, y(s)) dW(s) )^2 ≤ M^4𝔼( ∑_i=1^k C_i∫_t_i-1^t_iσ(s, x(s)) - σ(s, y(s)) dW(s) )^2 ≤ M^4 N ∑_i=1^k ∫_t_i-1^t_i𝔼σ(s, x(s)) - σ(s, y(s))^2 ds ≤ M^4 NL_σ∑_i=1^k ∫_t_i-1^t_i𝔼 x(s) - y(s)^2 ds ≤ M^4T NL_σx - y_PC^2. Combining all terms, we get: 𝔼 (F_1 x)(t) - (F_1 y)(t) ^2 ≤(3M^2k+2∏_j=1^k (1 + D_j)^2+3 M^4 N(T^2 L_f +TL_σ)) x - y_PC^2. To show that F_1 is a contraction, we need the right-hand side to be less than x - y_PC^2. Hence, we require: 3M^2k+2∏_j=1^k (1 + D_j)^2+3 M^4 N(T^2L_f+T L_σ) < 1. Given the boundedness conditions on D_j, f, and σ, and assuming that T is sufficiently small, there exists a constant k such that: F_1 x - F_1 y_PC≤ k x - y_PC, where 0 < k < 1. This completes the proof that F_1 is a contraction mapping on ℬ_r_0. Step 3. F_2 is a completely continuous operator. Firstly, we show that the mapping F_2 is continuous on B_r_0. For this purpose, letx_m→ x in B_r_0, then we have f(t,x_m(t))→ f(t,x(t)), σ(t,x_m(t))→σ(t,x(t)), as m→∞. Moreover, for t_0≤ t≤ t_1, by Lebesgue dominated convergence theorem, we can get 𝔼‖∫_0^tT(t-s) (f(s, x_m(s))-f(s,x(s))) ds‖^2 ≤ M^2T ∫_0^t𝔼‖ f(s, x_m(s))-f(s,x(s))‖^2ds→ 0, as m→∞. On the other hand, using the Itô isometry property for stochastic integrals, and the properties of the C_0-semigroup T(t) and the boundedness assumption on σ, we get: 𝔼‖∫_0^tT(t-s) (σ(s, x_m(s)) - σ(s, x(s))) dW(s)‖^2≤ M^2∫_0^t𝔼‖σ(s, x_m(s)) - σ(s, x(s)) ‖^2 ds. By the Lebesgue dominated convergence theorem, since σ(s, x_m(s)) →σ(s, x(s)) almost surely and ‖σ(s, x_m(s)) - σ(s, x(s)) ‖^2 is bounded by an integrable function, we get: M^2∫_0^t𝔼‖σ(s, x_m(s)) - σ(s, x(s)) ‖^2 ds → 0 as m →∞. Therefore, 𝔼‖∫_0^tT(t-s) (σ(s, x_m(s)) - σ(s, x(s))) dW(s)‖^2→ 0 as m →∞. Combining the results for the deterministic and stochastic parts, we obtain: 𝔼‖ F_2 (x_m) - F_2 (x) ‖^2≤ 2𝔼‖∫_0^tT(t-s) (f(s, x_m(s)) - f(s, x(s))) ds‖^2 + 2𝔼‖∫_0^tT(t-s) (σ(s, x_m(s)) - σ(s, x(s))) dW(s)‖^2 → 0 as m →∞. For t_k < t ≤ t_k+1 with k ≥ 1, the process is analogous to that for t_0 < t ≤ t_1. Thus, it follows that F_2 is continuous on B_r_0. Secondly, we prove that for any t ∈ [0, T], 𝒱(t) = {F_2(x)(t) | x ∈ℬ_r_0} is relatively compact in H. It is obvious that 𝒱(0) is relatively compact in H. Let 0 < t ≤ T be given. 
For any ε∈ (0, t), define an operator F^ε on ℬ_r_0 by (F^ε x)(t) = ∫_0^t-ε T(t-s) (Bu(s) + f(s, x(s))) ds + ∫_0^t-ε T(t-s)σ(s, x(s)) dW(s) =T(ε )∫_0^t-ε T(t-s-ε) (Bu(s) + f(s, x(s)))ds +T(ε) ∫_0^t-ε T(t-s-ε)σ(s, x(s)) dW(s), if t_0 < t ≤ t_1, ∫_t_k^t-ε T(t-s) (Bu(s) + f(s, x(s))) ds + ∫_t_k^t-ε T(t-s)σ(s, x(s)) dW(s) =T(ε )∫_t_k^t-ε T(t-s-ε) (Bu(s) + f(s, x(s)))ds +T(ε) ∫_t_k^t-ε T(t-s-ε)σ(s, x(s)) dW(s), if t_k < t ≤ t_k+1, k ≥ 1. Then the set {(F^ε)(t): x∈ℬ_r} is relatively compact in H because T(ε) is compact. This compactness helps us establish the desired continuity properties. Now, let's consider the case for t_0 < t ≤ t_1 : 𝔼‖ (F_2x)(t)-(F^εx)(t)‖^2 ≤3𝔼‖∫_t-ε^t T(t-s) Bu(s)ds‖^2 +3𝔼‖∫_t-ε^t T(t-s) f(s, x(s))ds‖^2 +3𝔼‖∫_t-ε^t T(t-s) σ(s, x(s)) dW(s) ‖^2. To estimate the deterministic component involving Bu(s), we apply the triangle inequality followed by the Cauchy-Schwarz inequality. This yields: 𝔼‖∫_t-ε^t T(t-s) Bu(s) ds ‖^2 ≤ M^2 B^2 ε∫_0^T 𝔼u(s)^2 ds. Using Assumption <ref>, we have 𝔼‖∫_t-ε^t T(t-s) f(s, x(s)) ds ‖^2 ≤𝔼( ∫_t-ε^t T(t-s) f(s, x(s)) ds )^2 ≤ M^2𝔼( ∫_t-ε^t f(s, x(s)) ds )^2 ≤ M^2ε∫_t-ε^t𝔼 f(s, x(s)) ^2 ds . Since x ∈ℬ_r_0 and x(s) ^2 ≤ r_0, and applying the Lipschitz condition, we have: 𝔼‖∫_t-ε^t T(t-s) f(s, x(s)) ds ‖^2 ≤ M^2 ε∫_t-ε^t𝔼 f(s, x(s)) ^2 ds ≤ M^2 L_f ε∫_t-ε^t(1 + 𝔼 x(s) ^2 ) ds ≤ M^2 L_f (1 + r_0 ) ε^2. Given that x ∈ℬ_r_0 and x(s) ^2 ≤ r_0, and applying the Lipschitz condition, we use the Itô isometry to estimate: 𝔼‖∫_t-ε^t T(t-s) σ(s, x(s)) dW(s) ‖^2 ≤𝔼( ∫_t-ε^t T(t-s) σ(s, x(s)) dW(s) )^2 ≤ M^2 ∫_t-ε^t𝔼σ(s, x(s)) ^2 ds ≤ M^2 L_σ∫_t-ε^t (1 + 𝔼 x(s) ^2) ds ≤ M^2 L_σε( 1 + r_0 ). Combining all terms, we get: 𝔼‖ (F_2x)(t)-(F^εx)(t)‖^2≤ 3M^2 L_f (1 + r_0 ) ε^2+3 M^2 L_σε( 1 + r_0 )+3M^2 B^2 ε∫_0^T 𝔼u(s)^2 ds. As ε→ 0: 𝔼‖ (F_2x)(t)-(F^εx)(t)‖^2→ 0. For t_k < t ≤ t_k+1, k ≥ 1, the definition of F_2 and F^ε allows us to obtain analogous results as discussed above. Thus, since F_2x can be approximated by F^εx arbitrarily closely in the mean square norm and F^εx is relatively compact in H, it follows that 𝒱(t) = {F_2(x)(t) | x ∈ℬ_r_0} is relatively compact in H. Finally, we demonstrate that F_2(B_r_0) is equicontinuous on [0,T]. According to the definition of the F_2 operator, demonstrating one case is sufficient as the other follows analogously. For any x∈ℬ_r_0 and t_0 ≤ a<b ≤ t_1, we have 𝔼‖ (F_2x)(b)-(F_2x)(a)‖^2 ≤ 4𝔼‖∫_0^aT(b-a)(Bu(s)+f(s,x(s)))ds‖^2 +4𝔼‖∫_a^bT(b-s)(Bu(s)+f(s,x(s)))ds‖^2 +4𝔼‖∫_0^aT(b-a)σ(s,x(s))dW(s)‖^2 +4𝔼‖∫_a^bT(b-s)σ(s,x(s))dW(s)‖^2 =𝒥_1+𝒥_2+𝒥_3+𝒥_4. To prove 𝔼‖ (F_2x)(b)-(F_2x)(a)‖^2→ 0 as b - a → 0, it suffices to demonstrate that 𝒥_i→ 0 independently of x ∈ℬ_r_0 as b - a → 0, for i = 1, 2, 3, 4. Further, for 𝒥_1 and 𝒥_3, if a=0, 0<b<t_1, it is easy to see 𝒥_1=𝒥_3=0, so for a>0 and 0<ε <a small enough, we have that 𝒥_1 ≤ 8𝔼‖∫_0^a-εT(b-a)(Bu(s)+f(s,x(s)))ds‖^2 +8𝔼‖∫_a-ε^aT(b-a)(Bu(s)+f(s,x(s)))ds‖^2 ≤ 8M^2(a-ε)∫_0^a-ε𝔼‖(Bu(s)+f(s,x(s)))‖^2 ds +8M^2ε∫_a-ε^a𝔼‖(Bu(s)+f(s,x(s)))‖^2 ds ≤ 8M^2(a-ε)^2(2TL_f (1+r_0)+2‖ B‖^2∫_0^T𝔼‖ u‖^2ds ) +8M^2ε^2(2TL_f (1+r_0)+2‖ B‖^2∫_0^T𝔼‖ u‖^2ds ) → 0 as b-a→ 0 ε→ 0, 𝒥_3 ≤ 4 𝔼∫_0^a‖ T(b-a)σ(s,x(s)) ‖^2 ds≤ 4M^2∫_0^a𝔼‖σ(s,x(s)) ‖^2 ds ≤ 4M^2L_σ(1+r_0)a→ 0 as b-a→ 0. 
For 𝒥_2 and 𝒥_4, we obtain by assumptions (<ref>), (<ref>) and (<ref>), Lemma <ref> that 𝒥_2 ≤ 4𝔼(∫_a^b‖ T(b-s)(Bu(s)+f(s,x(s)))‖ ds)^2 ≤ 4M^2𝔼(∫_a^b‖(Bu(s)+f(s,x(s)))‖ ds)^2 ≤ 4M^2(b-a)∫_a^b𝔼‖(Bu(s)+f(s,x(s)))‖^2 ds ≤ 4M^2(b-a)(2TL_f (1+r_0)+2‖ B‖^2∫_0^T𝔼‖ u‖^2ds ) → 0 as b-a→ 0, 𝒥_4 ≤4𝔼∫_a^b‖ T(b-s)σ(s,x(s))‖^2 ds≤ 4M^2(1+r_0)(b-a)→ 0, as b-a→ 0. This suggests that F_2(B_r_0) displays equicontinuity. Consequently, according to the Arzela-Ascoli theorem, F_2 qualifies as a completely continuous operator. Hence, by Lemma <ref>, the operator F_1 + F_2 possesses at least one fixed point x ∈ℬ_r_0, which coincides with the mild solution of system (<ref>). Under the assumption that A generates a compact C_0-semigroup T(t) (t ≥ 0) of uniformly bounded operators in the Hilbert space H, and provided that conditions (<ref>), (<ref>) and (<ref>) are met, the impulsive stochastic system (<ref>) possesses a unique mild solution within PC([0,T],L^2(Ω,H)) given that (<ref>) and the condition k = max{k_1, k_2} < 1, hold true, where k_1 and k_2 are defined as k_1 = 2M^2T^2(L̃_f+L̃_σ), k_2 = 4M^4(N+T^2)(L̃_f +L̃_σ). Consider the mapping F: PC([0,T],L^2(Ω,H)) → PC([0,T],L^2(Ω,H)) defined by (Fx)(t) = (F_1 x)(t) + (F_2 x)(t), t ∈ [0,T] ∖{t_1, …, t_n}. It is evident that the mild solution of the system (<ref>) is equivalent to a fixed point of the operator F. According to Step 1 of Theorem 1, we know that F(ℬ_r_0) ⊆ℬ_r_0. For any x_1,x_2∈ℬ_r_0 and t∈[t_0,t_1], we have 𝔼‖ (Fx_2)(t)-(Fx_1)(t)‖^2≤ 2𝔼‖∫_0^tT(t-s)(f(s,x_2(s))-f(s,x_1(s)))ds‖^2 + 2𝔼‖∫_0^tT(t-s)(σ(s,x_2(s))-σ(s,x_1(s)))dW(s)‖^2 ≤ 2M^2T ∫_0^t𝔼‖ f(s,x_2(s))-f(s,x_1(s))‖^2ds + 2M^2T ∫_0^t𝔼‖σ(s,x_2(s))-σ(s,x_1(s))‖^2ds ≤ 2M^2TL̃_f∫_0^t𝔼‖ x_2(s)-x_1(s)‖^2ds + 2M^2TL̃_σ∫_0^t𝔼‖ x_2(s)-x_1(s)‖^2ds ≤ 2M^2T^2(L̃_f+L̃_σ)‖ x_2(s)-x_1(s)‖^2_PC = k_1‖ x_2(s)-x_1(s)‖^2_PC. For any x_1,x_2∈ℬ_r_0 and t_k < t ≤ t_k+1, k ≥ 1, we have 𝔼‖ (Fx_2)(t)-(Fx_1)(t)‖^2 ≤ 4𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)(f(s,x_2(s))-f(s,x_1(s)))ds‖^2 +4𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i)∫_t_i-1^t_iT(t_i-s)(σ(s,x_2(s))-σ(s,x_1(s)))dW(s)‖^2 +4𝔼‖∫_t_k^tT(t-s)(f(s,x_2(s))-f(s,x_1(s)))ds‖^2+4𝔼‖∫_t_k^tT(t-s)(σ(s,x_2(s))-σ(s,x_1(s)))ds‖^2 =𝒥_1+𝒥_2+𝒥_3+𝒥_4. By applying the Cauchy-Schwarz inequality and utilizing the Lipschitz condition, we can derive a following bound for 𝒥_1: 𝒥_1 =4𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) ×∫_t_i-1^t_iT(t_i-s)(f(s,x_2(s))-f(s,x_1(s)))ds‖^2 ≤ 4M^4𝔼(∑_i=1^k∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖) ×∫_t_i-1^t_i‖ f(s,x_2(s))-f(s,x_1(s))‖ ds)^2 ≤ 4M^4𝔼(∑_i=1^kC_i∫_t_i-1^t_i‖ f(s,x_2(s))-f(s,x_1(s))‖ ds)^2 ≤ 4M^4∑_i=1^kC^2_i∑_i=1^k∫_t_i-1^t_i𝔼‖ f(s,x_2(s))-f(s,x_1(s))‖^2 ds ≤ 4M^4NL̃_f∑_i=1^k∫_t_i-1^t_i𝔼‖ x_2(s)-x_1(s)‖^2 ds ≤ 4M^4NL̃_fT x_2(s) - x_1(s) ^2_PC, where C_i=∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖), N=∑_i=1^kC^2_i. 
By applying the Cauchy-Schwarz inequality, the Itô isometry and utilizing the Lipschitz condition, we can derive a following bound for 𝒥_2: 𝒥_2 =4𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) ×∫_t_i-1^t_iT(t_i-s)(σ(s,x_2(s))-σ(s,x_1(s)))dW(s)‖^2 ≤ 4M^4𝔼(∑_i=1^k∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖) ×∫_t_i-1^t_i‖σ(s,x_2(s))-σ(s,x_1(s))‖ dW(s))^2 ≤ 4M^4𝔼(∑_i=1^kC_i∫_t_i-1^t_i‖σ(s,x_2(s))-σ(s,x_1(s))‖ dW(s))^2 ≤ 4M^4∑_i=1^kC^2_i∑_i=1^k∫_t_i-1^t_i𝔼‖σ(s,x_2(s))-σ(s,x_1(s))‖^2 ds ≤ 4M^4NL̃_σ∑_i=1^k∫_t_i-1^t_i𝔼‖ x_2(s)-x_1(s)‖^2 ds ≤ 4M^4NL̃_σT x_2(s) - x_1(s) ^2_PC Similarly, by using the appropriate mathematical tools, we obtain the following results for 𝒥_3 and 𝒥_4: 𝒥_3 =4𝔼‖∫_t_k^tT(t-s)(f(s,x_2(s))-f(s,x_1(s)))ds‖^2 ≤ 4 M^2𝔼( ∫_t_k^t‖ f(s,x_2(s))-f(s,x_1(s)) ‖ ds )^2 ≤ 4 M^2T ∫_t_k^t𝔼‖ f(s,x_2(s))-f(s,x_1(s)) ‖^2 ds ≤ 4 M^4 TL̃_f∫_t_k^t𝔼 x_2(s) - x_1(s) ^2 ds ≤ 4 M^4 T^2L̃_f x_2(s) - x_1(s) ^2_PC, 𝒥_4 =4𝔼‖∫_t_k^tT(t-s)(σ(s,x_2(s))-σ(s,x_1(s)))dW(s)‖^2 ≤ 4 M^2𝔼( ∫_t_k^t‖σ(s,x_2(s))-σ(s,x_1(s)) ‖ dW(s) )^2 ≤ 4 M^2T ∫_t_k^t𝔼‖σ(s,x_2(s))-σ(s,x_1(s)) ‖^2 ds ≤ 4 M^4 TL̃_σ∫_t_k^t𝔼 x_2(s) - x_1(s) ^2 ds ≤ 4 M^4 T^2L̃_σ x_2(s) - x_1(s) ^2_PC . Combining the estimates, we have: 𝔼‖ (Fx_2)(t)-(Fx_1)(t)‖^2 ≤4M^4(N+T^2)(L̃_f +L̃_σ) x_2(s) - x_1(s) ^2_PC =k_2 x_2(s) - x_1(s) ^2_PC . Then, we get ‖ (Fx_2)(t)-(Fx_1)(t)‖^2≤ k ‖ x_2 - x_1‖^2_PC, where k=max{k_1,k_2}. According to (<ref>), it is established that F acts as a contraction mapping on ℬ_r_0. Consequently, applying the well-established contraction mapping principle confirms that F possesses a sole fixed point within ℬ_r_0. This fixed point x ∈ℬ_r_0 signifies that x(t) stands as the unique mild solution to the system described by (<ref>). § EXISTENCE OF OPTIMAL CONTROLS In this section of the manuscript, we delve into the existence of optimal controls for a given control problem. We begin by defining the framework and assumptions necessary for our analysis. Let x^u represent the mild solution of system (<ref>) associated with the control u ∈ U_ad. Consider the Lagrange problem (P): Our goal is to find an optimal pair (x^0, u^0) ∈ PC([0,T],L^2(Ω,H)) × U_ad such that J(x^0,u^0) ≤ J(x^u,u), ∀ (x^u,u) ∈ PC([0,T],L^2(Ω,H)) × U_ad, where the cost function is defined as J(x^u,u) = 𝔼(∫_0^T l(t,x^u(t),u(t)) dt). Suppose the following assumptions hold: (A_1) The functional l: [0,T] × H × U →ℝ∪{∞} is F_t-measurable. (A_2) For any t ∈ [0,T], l(t,·,·) is sequentially lower semicontinuous on H × U. (A_3) For any t ∈ [0,T] and x ∈ H, l(t,x,·) is convex on U. (A_4) There exist constants d_1≥ 0, d_2 > 0, and a nonnegative function ξ∈ L^1([0,T], ℝ) such that l(t,x,u) ≥ξ(t) + d_1𝔼x^2 + d_2𝔼u^2. With these assumptions in place, we are now in a position to demonstrate the existence of optimal controls for problem (P). Assume the hypothesis of Theorem <ref> and conditions A_1-A_4 are satisfied. Then the Lagrange problem (P) has at least one optimal solution that is, there exists an admissible state-control pair (x^0, u^0) ∈ PC([0,T], L^2(Ω, H)) × U_ad, such that J(x^0, u^0) ≤ J(x^u, u), ∀ (x^u, u) ∈ PC([0,T], L^2(Ω, H)) × U_ad. Without loss of generality, we assume that inf{ J(x^u,u) | u ∈ U_ad} = ε < +∞. If this were not the case, there would be nothing to prove. From assumption A_4, it follows that ε > -∞. By the definition of the infimum, there exists a minimizing sequence of feasible pairs (x^m, u^m) ∈ PC([0,T], L^2(Ω, H)) × U_ad such that J(x^m, u^m) →ε as m →∞, where x^m is a mild solution of system (<ref>) corresponding to u^m∈ U_ad. 
We observe that the sequence {u^m}∈ U_ad for m = 1, 2, …, which implies that {u^m}∈ L^2_F([0,T],U) is bounded. Consequently, there exists a function u^0∈ L^2_F([0,T],U) and a subsequence of {u^m} such that u^m→ u^0 (m →∞). Since U_ad is both convex and closed, by the Marzur theorem <cit.>, we infer that u^0∈ U_ad. Let x^0 denote the mild solution of equation (<ref>) corresponding to u^0. Given the boundedness of {u^m}, {u^0}, we can assert the existence of a positive number r_0 such that x^m^2_PC≤ r_0, x^0^2_PC≤ r_0. For t ∈ [t_0,t_1], we obtain 𝔼‖ x^m(t)-x^0(t)‖^2≤ 3𝔼‖∫_0^t T(t-s)B(u^m(s)-u^0(s))ds‖^2 + 3𝔼‖∫_0^t T(t-s)(f(s,x^m(s))-f(s,x^0(s)))ds‖^2 + 3𝔼‖∫_0^t T(t-s)(σ(s,x^m(s))-σ(s,x^0(s)))dW(s)‖^2 ≤ 3M^2‖ B‖^2∫_0^t𝔼‖ u^m(s)-u^0(s)‖^2 ds + 3M^2𝔼(∫_0^t‖ f(s,x^m(s))-f(s,x^0(s))‖ ds)^2 + 3M^2𝔼(∫_0^t‖σ(s,x^m(s))-σ(s,x^0(s))‖ dW(s))^2 ≤ 3M^2‖ B‖^2‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U) + 3M^2T∫_0^t𝔼‖ f(s,x^m(s))-f(s,x^0(s))‖^2 ds + 3M^2∫_0^t𝔼‖σ(s,x^m(s))-σ(s,x^0(s))‖^2 ds ≤ 3M^2‖ B‖^2‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U) + 3M^2(T^2L̃_f+T L̃_σ)‖ x^m-x^0‖^2_PC, which means ‖ x^m(s)-x^0(s)‖^2_PC ≤3M^2‖ B‖^2‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U)/1-3M^2(T^2L̃_f+T L̃_σ). For t_k < t ≤ t_k+1, k ≥ 1, by using the Lipschitz continuity of σ and the triangular inequality and Cauchy-Schwarz inequality, we have 𝔼‖ x^m(t)-x^0(t)‖^2≤ 6𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) × ∫_t_i-1^t_iT(t_i-s)(f(s,x^m(s))-f(s,x^0(s))) ds‖^2 + 6𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) × ∫_t_i-1^t_iT(t_i-s)(σ(s,x^m(s))-σ(s,x^0(s))) dW(s)‖^2 + 6𝔼‖∫_t_k^t T(t-s)(f(s,x^m(s))-f(s,x^0(s)))ds‖^2 + 6𝔼‖∫_t_k^t T(t-s)(σ(s,x^m(s))-σ(s,x^0(s)))dW(s)‖^2 + 6𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) × ∫_t_i-1^t_iT(t_i-s)B(u^m(s)-u^0(s)) ds‖^2 + 6𝔼‖∫_t_k^t T(t-s)B(u^m(s)-u^0(s))ds‖^2 = ℐ_1+ℐ_2+ℐ_3+ℐ_4+ℐ_5 +ℐ_6. Using the Lipschitz continuity of f and the triangular inequality and Cauchy-Schwarz inequality, we get: ℐ_1 = 6𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) ×∫_t_i-1^t_iT(t_i-s)(f(s,x^m(s))-f(s,x^0(s))) ds‖^2 ≤ 6 M^2𝔼(∑_i=1^k∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖) ×∫_t_i-1^t_i‖ f(s,x^m(s))-f(s,x^0(s))‖ ds)^2 =6 M^2𝔼(∑_i=1^kC_i∫_t_i-1^t_i‖ f(s,x^m(s))-f(s,x^0(s))‖ ds)^2 ≤ 6 M^2∑_i=1^kC^2_i∑_i=1^k𝔼(∫_t_i-1^t_i‖ f(s,x^m(s))-f(s,x^0(s))‖ ds)^2 ≤ 6 M^2T∑_i=1^kC^2_i∑_i=1^k∫_t_i-1^t_i𝔼‖ f(s,x^m(s))-f(s,x^0(s))‖^2 ds ≤ 6 M^2TN ∑_i=1^k∫_t_i-1^t_iL̃_f𝔼‖ x^m(s)-x^0(s)‖^2 ds ≤ 6 M^2TNL̃_f∑_i=1^k∫_t_i-1^t_i𝔼‖ x^m(s)-x^0(s)‖^2 ds ≤ 6 M^2T^2NL̃_f‖ x^m(s)-x^0(s)‖^2_PC, where C_i=∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖), N=∑_i=1^kC^2_i. Using Itô isometry and the Lipschitz continuity of σ, we obtain: ℐ_2 =6𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) ×∫_t_i-1^t_iT(t_i-s)(σ(s,x^m(s))-σ(s,x^0(s))) dW(s)‖^2 ≤ 6 M^2𝔼(∑_i=1^k∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖) ×∫_t_i-1^t_i‖σ(s,x^m(s))-σ(s,x^0(s))‖ dW(s))^2 =6 M^2𝔼(∑_i=1^kC_i∫_t_i-1^t_i‖σ(s,x^m(s))-σ(s,x^0(s))‖ dW(s))^2 ≤ 6 M^2∑_i=1^kC^2_i∑_i=1^k𝔼(∫_t_i-1^t_i‖σ(s,x^m(s))-σ(s,x^0(s))‖ dW(s))^2 ≤ 6 M^2∑_i=1^kC^2_i∑_i=1^k∫_t_i-1^t_i𝔼‖σ(s,x^m(s))-σ(s,x^0(s))‖^2 ds ≤ 6 M^2N ∑_i=1^k∫_t_i-1^t_iL̃_σ𝔼‖ x^m(s)-x^0(s)‖^2 ds ≤ 6 M^2NL̃_σ∑_i=1^k∫_t_i-1^t_i𝔼‖ x^m(s)-x^0(s)‖^2 ds ≤ 6 M^2TNL̃_σ‖ x^m(s)-x^0(s)‖^2_PC. 
Using Itô isometry and the Lipschitz continuity of f and σ: ℐ_3 =6𝔼‖∫_t_k^t T(t-s)(f(s,x^m(s))-f(s,x^0(s)))ds‖^2 ≤ 6M^2T∫_t_k^t𝔼‖ f(s,x^m(s))-f(s,x^0(s))‖^2 ds ≤ 6M^2T∫_t_k^tL̃_f𝔼‖ x^m(s)-x^0(s)‖^2 ds ≤6 M^2T^2L̃_f‖ x^m(s)-x^0(s)‖^2_PC, ℐ_4 =6𝔼‖∫_t_k^t T(t-s)(σ(s,x^m(s))-σ(s,x^0(s)))dW(s)‖^2 ≤ 6M^2∫_t_k^t𝔼‖σ(s,x^m(s))-σ(s,x^0(s))‖^2 ds ≤6 M^2∫_t_k^tL̃_σ𝔼‖ x^m(s)-x^0(s)‖^2 ds ≤6 M^2TL̃_σ‖ x^m(s)-x^0(s)‖^2_PC. Following a similar process as described above, we have ℐ_5 =6𝔼‖ T(t-t_k)∑_i=1^k∏_j=k^i+1(ℐ+D_j)T(t_j-t_j-1) (ℐ+D_i) ×∫_t_i-1^t_iT(t_i-s)B(u^m(s)-u^0(s)) ds‖^2 ≤ 6 M^2‖ B‖^2𝔼(∑_i=1^k∏_j=k^i+1(1+‖ D_j‖)‖ T(t_j-t_j-1)‖ (1+‖ D_i‖) ×∫_t_i-1^t_i‖ u^m(s)-u^0(s)‖ ds)^2 =6 M^2‖ B‖^2𝔼(∑_i=1^kC_i∫_t_i-1^t_i‖ u^m(s)-u^0(s)‖ ds)^2 ≤ 6 M^2‖ B‖^2∑_i=1^kC^2_i∑_i=1^k𝔼(∫_t_i-1^t_i‖ u^m(s)-u^0(s)‖ ds)^2 ≤ 6 M^2‖ B‖^2T∑_i=1^kC^2_i∑_i=1^k∫_t_i-1^t_i𝔼‖ u^m(s)-u^0(s)‖^2 ds = 6 M^2‖ B‖^2TN ∑_i=1^k∫_t_i-1^t_i𝔼‖ u^m(s)-u^0(s)‖^2 ds = 6 M^2‖ B‖^2TN‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U), ℐ_6 =6𝔼‖∫_t_k^t T(t-s)B(u^m(s)-u^0(s))ds‖^2 ≤ 6M^2‖ B‖^2T∫_t_k^t𝔼‖ u^m(s)-u^0(s)‖^2 ds ≤ 6M^2‖ B‖^2T‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U). Combining all the estimates, we obtain: 𝔼‖ x^m(t)-x^0(t)‖^2≤ 6 M^2(N+1)(T^2L̃_f+TL̃_σ)‖ x^m(s)-x^0(s)‖^2_PC +6 M^2T(N+1)‖ B‖^2‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U), which means ‖ x^m(s)-x^0(s)‖^2_PC ≤6 M^2T(N+1)‖ B‖^2‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U)/1-6 M^2(N+1)(T^2L̃_f+TL̃_σ). Using (<ref>) and (<ref>), we get ‖ x^m(s)-x^0(s)‖^2_PC ≤ C^*‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U), where C^* =max{3M^2‖ B‖^2/1-3M^2(T^2L̃_f+TL̃_σ) ,6 M^2T(N+1)‖ B‖^2/1-6 M^2(N+1)(T^2L̃_f+TL̃_σ)}. Since, ‖ u^m(s)-u^0(s)‖^2_L^2_F([0,T],U) 0, (m→∞). Consequently, ‖ x^m(s)-x^0(s)‖^2_PC 0, (m→∞). Thus, by applying conditions A_1-A_4 and using Balder's theorem (see theorem 2.1 <cit.>), we can conclude that the mapping (x, u) ↦𝔼(∫_0^T l(t, x(t), u(t)) dt) is sequentially lower semicontinuous with respect to the strong topology of L^1_F([0,T], H) and the weak topology of L^2_F([0,T], U) ⊂ L^1_F([0,T], U). Consequently, the functional J is weakly lower semicontinuous on L^2_F([0,T], U). Therefore, we have: ε = lim_m →∞𝔼(∫_0^T l(t, x^m(t), u^m(t)) dt) ≥𝔼(∫_0^T l(t, x^0(t), u^0(t)) dt) = J(x^0, u^0) ≥ε, which implies that u^0 ∈ U_ad is a minimizer of J. § APPLICATION Consider the stochastic impulsive system described by: x'(t) = A x(t) + B u(t) + f(t, x(t)) + σ(t, x(t)) dW(t)/dt, t ∈ [0,1] ∖{1/2}, Δ x(1/2) = D_1 x(1/2) + E_1 v_1, Δ x(1) = D_2 x(1) + E_2 v_2, x(0) = 0. where H = U = L^2[0,1]. Define the operators and functions as follows: A = -d^2/d t^2 + d/dt, B = D_k+1 = E_k+1 = ℐ, for k=0,1, f(t, x(t)) = 2/5cos(t) + x(t)/t + 5, σ(t, x(t)) = 1/5(2/1 + e^t + |x(t)|/1 + |x(t)|), v_1 = sin(π t), v_2 = cos(π t). Here, A is defined as an operator A: D(A) ⊂ L^2[0,1] → L^2[0,1] with D(A) = {x ∈ L^2[0,1] : x(0) = x(1) = 0}. Substituting these definitions into (<ref>), we obtain: x^''(t) = u(t) + 2/5cos(t) + x(t)/t + 5 + 1/5(2/1 + e^t + |x(t)|/1 + |x(t)|) dW(t)/dt, t ∈ [0,1] ∖{1/2}, x(1/2^+) = x(1/2^-) + x(1/2) + sin(π t), x(1^+) = x(1^-) + cos(π t), x(0) = 0. The problem (<ref>) can be framed in the abstract form of (<ref>), with the cost functional given by: J(x,u) = 𝔼( ∫_0^1∫_0^1 x(ω) ^2 dω dt + ∫_0^1∫_0^1 u(ω) ^2 dω dt ). The functions f and σ are shown to satisfy the Lipschitz conditions specified in assumptions (<ref>) and (<ref>), with parameters L_f = L_σ = 2/5 and L̃_f = L̃_σ = 1/25. 
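These constants can be checked directly from the above definitions of f and σ. For any x, y ∈ H and t ∈ [0,1], ‖ f(t,x) - f(t,y) ‖ = ‖ x - y ‖/(t+5) ≤ (1/5)‖ x - y ‖, so that ‖ f(t,x) - f(t,y) ‖^2 ≤ (1/25)‖ x - y ‖^2, which gives L̃_f = 1/25. Since the map u ↦ u/(1+u) is 1-Lipschitz on [0,∞), we also have ‖σ(t,x) - σ(t,y) ‖ ≤ (1/5)‖ x - y ‖, and hence L̃_σ = 1/25. Similarly, ‖ f(t,x) ‖ ≤ 2/5 + (1/5)‖ x ‖ yields ‖ f(t,x) ‖^2 ≤ 8/25 + (2/25)‖ x ‖^2 ≤ (2/5)(1 + ‖ x ‖^2), while 2/(1+e^t) ≤ 1 and |x|/(1+|x|) ≤ 1 give ‖σ(t,x) ‖^2 ≤ 4/25 ≤ (2/5)(1 + ‖ x ‖^2), so the linear growth conditions hold with L_f = L_σ = 2/5.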
Given that all assumptions A_1-A_4 are satisfied, Theorem <ref> ensures the existence of at least one optimal pair (x^0, u^0) such that: J(x^0, u^0) ≤ J(x^u, u), ∀ (x^u, u) ∈ PC([0,T], L^2(Ω, H)) × U_ad. Thus, Theorem <ref> is applicable to the given stochastic impulsive system, guaranteeing the existence of an optimal solution to the problem. § CONCLUSION In this paper, we investigated the solvability and optimal control of a class of impulsive stochastic evolution equations in a Hilbert space. Initially, we proved the existence and uniqueness of mild solutions for the system under consideration, using the Krasnoselskii's Fixed Point Theorem. We then established the necessary and sufficient conditions for the existence of optimal control pairs, ensuring the feasibility and optimality of the control solutions. To illustrate the applicability and effectiveness of our theoretical results, we provided a detailed example. This research enhances the comprehension and formulation of control strategies for impulsive stochastic systems, providing a solid foundation for future mathematical studies in this area. 99 1 Mainardi, F., Paradisi, P., & Gorenflo, R. (2007). Probability distributions generated by fractional diffusion equations. arXiv preprint arXiv:0704.0320. 2 Mahmudov, N. I. (2024). A study on approximate controllability of linear impulsive equations in Hilbert spaces. Quaestiones Mathematicae, 1-16. 3 Periago, F., & Straub, B. (2002). A functional calculus for almost sectorial operators and applications to abstract evolution equations. Journal of Evolution Equations, 2(1), 41-68. 4 Asadzade, J.A., & Mahmudov, N.I. (2024). Approximate Controllability of Linear Fractional Impulsive Evolution Equations in Hilbert Spaces. arXiv preprint arXiv:2406.15114 5 Asadzade, J. A., & Mahmudov, N. I. (2024). Finite time stability analysis for fractional stochastic neutral delay differential equations. Journal of Applied Mathematics and Computing, 1-25. 6 Barraez, D., Leiva, H., Merentes, N., & Narváez, M. (2011). Exact controllability of semilinear stochastic evolution equation. 7 Kilbas, A. A., Srivastava, H. M., & Trujillo, J. J. (2006). Theory and applications of fractional differential equations (Vol. 204). elsevier. 8 Zhou, Y., & Jiao, F. (2010). Nonlocal Cauchy problem for fractional evolution equations. Nonlinear analysis: real world applications, 11(5), 4465-4475. 9 Da Prato, G., & Zabczyk, J. (2014). Stochastic equations in infinite dimensions (Vol. 152). Cambridge university press. 10 Mao, X. R. (1997). Stochastic differential equations and their applications, horwood publ. House, Chichester. 11 Heinonen, J., Kipelainen, T., & Martio, O. (2018). Nonlinear potential theory of degenerate elliptic equations. Courier Dover Publications. 12 Ding, Y., & Niu, J. (2024). Solvability and optimal controls of fractional impulsive stochastic evolution equations with nonlocal conditions. Journal of Applied Analysis & Computation, 14(5), 2622-2642. 13 Balder, E. J. (1987). Necessary and sufficient conditions for L1-strong-weak lower semicontinuity of integral functionals. Nonlinear Analysis: Theory, Methods & Applications, 11(12), 1399-1404. 14 Dhayal, R., Malik, M., & Abbas, S. (2021). Solvability and optimal controls of non-instantaneous impulsive stochastic fractional differential equation of order q∈(1, 2). Stochastics, 93(5), 780-802. 15 Bainov, D., & Simeonov, P. (2017). Impulsive differential equations: periodic solutions and applications. Routledge. 16 Balasubramaniam, P., & Tamilalagan, P. (2017). 
The solvability and optimal controls for impulsive fractional stochastic integro-differential equations via resolvent operators. Journal of Optimization Theory and Applications, 174, 139-155. 17 Benchohra, M. (2006). Impulsive differential equations and inclusions (Vol. 2). Hindawi Publishing Corporation. 18 Byszewski, L. (1991). Theorems about the existence and uniqueness of solutions of a semilinear evolution nonlocal Cauchy problem. Journal of Mathematical analysis and Applications, 162(2), 494-505. 19 Yan, Z. (2021). Time optimal control of a Clarke subdifferential type stochastic evolution inclusion in Hilbert spaces. Applied Mathematics & Optimization, 84(3), 3083-3110. 20 Mahmudov, N. I., Vijayakumar, V., & Murugesu, R. (2016). Approximate controllability of second-order evolution differential inclusions in Hilbert spaces. Mediterranean Journal of Mathematics, 13, 3433-3454. 21 Anukiruthika, K., Durga, N., & Muthukumar, P. (2023). Optimal control of non-instantaneous impulsive second-order stochastic McKean–Vlasov evolution system with Clarke subdifferential. International Journal of Nonlinear Sciences and Numerical Simulation, 24(6), 2061-2087. 22 Zhou, J., & Liu, B. (2010). Optimal control problem for stochastic evolution equations in Hilbert spaces. International journal of control, 83(9), 1771-1784. 23 Al-Hussein, A. (2011). Necessary conditions for optimal control of stochastic evolution equations in Hilbert spaces. Applied Mathematics & Optimization, 63, 385-400. 24 Mahmudov, N. I., & McKibben, M. A. (2007). On backward stochastic evolution equations in Hilbert space and optimal control. Nonlinear Analysis Series A: Theory, Methods, and Applications, 67(4), 1262. 25 Subalakshmi, R., & Radhakrishnan, B. (2021). A study on approximate and exact controllability of impulsive stochastic neutral integrodifferential evolution system in Hilbert spaces. International Journal of Nonlinear Analysis and Applications, 12(Special Issue), 1731-1743. 26 Levajković, T., Mena, H., & Tuffaha, A. (2016). The stochastic linear quadratic optimal control problem in Hilbert spaces: A polynomial chaos approach. Evol. Equ. Control Theory, 5(1), 105-134. 27 Chadha, A., & Bora, S. N. (2021). Solvability of control problem for a nonlocal neutral stochastic fractional integro-differential inclusion with impulses. Mathematical Reports, 23(3), 265-294. 28 Carmichael, N., & Quinn, M. D. (1985). Fixed point methods in nonlinear control. In Distributed Parameter Systems: Proceedings of the 2nd International Conference Vorau, Austria 1984 (pp. 24-51). Springer Berlin Heidelberg. 29 Subalakshmi, R., & Balachandran, K. (2009). Approximate controllability of nonlinear stochastic impulsive integrodifferential systems in Hilbert spaces. Chaos, Solitons & Fractals, 42(4), 2035-2046. 30 Mahmudov, N. I. (2003). Approximate controllability of semilinear deterministic and stochastic evolution equations in abstract spaces. SIAM journal on control and optimization, 42(5), 1604-1622. 31 Ahmadova, A. (2023). Approximate controllability of stochastic degenerate evolution equations: Decomposition of a Hilbert space. Differential Equations and Dynamical Systems, 1-23. 32 Sakthivel, R., Suganya, S., & Anthoni, S. M. (2012). Approximate controllability of fractional stochastic evolution equations. Computers & Mathematics with Applications, 63(3), 660-668. 33 Asadzade, J. A., & Mahmudov, N. I. (2024). Euler-Maruyama approximation for stochastic fractional neutral integro-differential equations with weakly singular kernel. 
Physica Scripta, 99(7), 075281. 34 Bashirov, A. E., & Mahmudov, N. I. (1999). On concepts of controllability for deterministic and stochastic systems. SIAM Journal on Control and Optimization, 37(6), 1808-1821. 35 Chang, Y. K., Pei, Y., & Ponce, R. (2019). Existence and optimal controls for fractional stochastic evolution equations of Sobolev type via fractional resolvent operators. Journal of Optimization Theory and Applications, 182, 558-572. 36 Yan, Z., & Jia, X. (2017). Optimal controls of fractional impulsive partial neutral stochastic integro-differential systems with infinite delay in Hilbert spaces. International Journal of Control, Automation and Systems, 15(3), 1051-1068. 37 Li, X., & Liu, Z. (2015). The solvability and optimal controls of impulsive fractional semilinear differential equations. 38 Yan, Z., & Lu, F. (2018). Solvability and optimal controls of a fractional impulsive stochastic partial integro-differential equation with state-dependent delay. Acta Applicandae Mathematicae, 155(1), 57-84. 39 Sheng, L., Hu, W., & Su, Y. H. (2024). Existence and optimal controls of non-autonomous for impulsive evolution equation without Lipschitz assumption. Boundary Value Problems, 2024(1), 17. 40 Samoilenko, A. M., & Perestyuk, N. A. (1995). Impulsive differential equations. World scientific. 41 Lakshmikantham, V., & Simeonov, P. S. (1989). Theory of impulsive differential equations (Vol. 6). World scientific. 42 Mahmudov, N. I., & Almatarneh, A. M. (2020). Stability of Ulam–Hyers and existence of solutions for impulsive time-delay semi-linear systems with non-permutable matrices. Mathematics, 8(9), 1493.
http://arxiv.org/abs/2407.12641v1
20240717151000
Thermal pion condensation: holography meets lattice QCD
[ "Nicolas Kovensky", "Andreas Schmitt" ]
hep-ph
[ "hep-ph", "hep-lat", "hep-th" ]
http://arxiv.org/abs/2407.12486v1
20240717111154
Decoupled Edge Physics algorithms for collaborative XR simulations
[ "George Kokiadis", "Antonis Protopsaltis", "Michalis Morfiadakis", "Nick Lydatakis", "George Papagiannakis" ]
cs.HC
[ "cs.HC", "cs.GR" ]
§ ABSTRACT
This work proposes a novel approach to transform any modern game engine pipeline, for optimized performance and enhanced user experiences in Extended Reality (XR) environments. Decoupling the physics engine from the game engine pipeline and using a client-server N-1 architecture creates a scalable solution, efficiently serving multiple graphics clients on Head-Mounted Displays (HMDs) with a single physics engine on edge-cloud infrastructure. This approach ensures better synchronization in multiplayer scenarios without introducing overhead in single-player experiences, maintaining session continuity despite changes in user participation. Relocating the Physics Engine to an edge or cloud node reduces strain on local hardware, dedicating more resources to high-quality rendering and unlocking the full potential of untethered HMDs. We present four algorithms that decouple the physics engine, increasing frame rates and Quality of Experience (QoE) in VR simulations, supporting advanced interactions, numerous physics objects, and multi-user sessions with over 100 concurrent users. Incorporating a Geometric Algebra interpolator reduces inter-calls between dissected parts, maintaining QoE and easing network stress. Experimental validation, with more than 100 concurrent users, 10,000 physics objects, and softbody simulations, confirms the technical viability of the proposed architecture, showcasing transformative capabilities for more immersive and collaborative XR applications without compromising performance.
§ INTRODUCTION
Immersive collaborative XR experiences demand realistic simulations for optimal QoE <cit.>. In mobile-XR environments, efficient physics algorithms are essential for handling 3D object animations, transformations, and soft-body deformations while maintaining interactivity. Maintaining a minimum frame rate of 60fps is essential for fluid XR experiences, alongside high-resolution, low-latency graphics rendering. Although modern game engines tackle many physics simulation challenges effectively, specific material physics aspects, such as liquid or deformable surfaces in XR scenes, are often overlooked, leading to unsatisfactory rendering. Standalone XR headsets with limited processing power often simplify physics models, affecting behavior alignment with expected norms. Tethering to high-end workstations improves processing power but limits user mobility, disrupting immersion. In pursuit of advancing the efficiency and adaptability of modern game engines for untethered HMDs in immersive XR environments, this paper introduces a novel approach to dissect the native physics simulation engine from the main application. The primary goals of this decoupled physics unit are to minimize the total frame time for XR applications and to support real-time interactivity with multiple objects and enhance multi-player sessions, without imposing user limitations or QoE degradations. Through this approach, we seek to create a seamless and immersive XR gaming experience while addressing the challenges posed by the limitations of untethered HMDs. 
§ RELATED WORK Untethered XR HMDs face significant challenges due to their relatively low processing power, which complicates the real-time simulation of realistic XR scenes and interactions. Immersive XR experiences impose strict requirements on low latency computations for user interactions and high-fidelity rendering for virtual worlds. Previous works analyzed the game engine's architecture <cit.>, the inter-calls between its modules, the CPU/GPU consumption, and resource requirements <cit.>. Furthermore studies have assessed the computational load and power consumption on client devices <cit.>, the trade-offs between video quality and latency <cit.>, and offloading rendering tasks <cit.>. In that respect, micro-services <cit.> were explored in mobile cloud integration and back-end architectures for MMORPGs <cit.>. The offloading of modern game engine processes to cloud-edge necessitates considering their adaptability as service-oriented architecture or micro-service architecture, addressing the monolithic architecture challenges <cit.>. Remote rendering architectures were also utilized to alleviate the computational burden on untethered XR HMDs, that typically involve a monolithic cloud service responsible for performing all rendering, game logic and physics computations, with the encoded video streamed to the lightweight HMD via high-speed networks. Notable solutions include NVIDIA's CloudXR [<www.nvidia.com/en-us/design-visualization/solutions/cloud-xr/>] and the open source ALVR[<github.com/alvr-org/ALVR>]. While monolithic remote rendering architectures have shown promising, their the data-intensive nature imposes substantial network and edge-cloud infrastructure requirements. Each HMD user necessitates a corresponding GPU-enabled workstation on the edge or cloud, amplifying resource demands, especially in multi-user gaming scenarios. To accommodate such significant network challenges, often requires leveraging high-speed 5G/6G networks, that minimize latency and increase the available bandwidth. Considering these challenges, there's growing interest in dissecting modern game engines and utilize a distributed pipeline. As interactive XR scenes involve intense physics computations, offloading the physics engine as a edge-cloud service is a promising solution. In that respect, edge-physics frameworks as in <cit.> propose streaming at the scene-graph level, reporting significantly lower computational overhead, bandwidth, and latency compared to video streaming. Another approach <cit.> involves utilizing the open source Bevy game engine in conjunction with a remote dedicated physics server, such as Rapier, to simulate non-XR scenarios of simple scenes without real-time user interactions. Moreover, established physics engines, like NVIDIA PhysX and Havok, offer support for distributed physics simulation, allowing developers to distribute physics computations across multiple machines. These alternative solutions often introduce great complexity in the development process of an XR solution, as they are not generalized, so this involves hard-coded scene specific details. Decoupling the physics engine from monolithic game engines like Unity and Unreal Engine poses a formidable challenges. The tight integration of the physics engine with core subsystems such as rendering, input handling, and game logic and the numerous inter-calls between them complicates the separation process. 
Synchronization of the simulation state in real-time for multi-user interactive scenarios is a complex task, that imposes strict requirements for QoE, especially under degraded network conditions. Achieving physics engine decoupling requires careful consideration and potentially extensive modifications to the engine's internals. § GOALS AND CONSTRAINTS In contemporary computational architectures, particularly within the realm of game engines and XR systems, a monolithic design is commonly employed, wherein both physics and graphics computations are conducted on the same hardware. This conventional approach can often impose significant computational burdens on devices, especially on XR HMDs. This research work aims to decouple physics computations from any modern game engine to revolutionize immersive experiences, particularly for untethered HMDs, by addressing several critical goals while adhering to constraints inherent in distributed XR pipelines. By offloading physics computations to edge or cloud nodes, the final system endeavors to achieve optimal QoE to maintain user immersion, even during interactions with softbodies or scenes with a high number of objects, which traditionally strain the processing power of standalone devices. This research not only enables the realization of complex simulations but also scales the virtual environment to accommodate a vast number of physics objects, enriching the depth and interactivity of immersive scenes. This optimization requires low application latency, facilitated by low network latency and high-bandwidth networks to ensure seamless interaction and rendering of complex virtual environments. The decoupling strategy serves a dual purpose of reducing the CPU load on untethered XR HMDs and enhancing performance in scenarios involving intensive physics simulations. By alleviating the computational burden on onboard processors, the research aims to not only boost frame rates but also to elevate the quality of experience for users, fostering smoother visuals and more responsive interactions. Moreover, this distributed XR pipeline aims to offer tangible benefits such as increased battery life and enhanced user mobility, thereby ensuring prolonged and uninterrupted immersive experiences. An overloaded CPU acts as a bottleneck[<https://www.intel.com/content/www/us/en/gaming/resources/what-is-bottlenecking-my-pc.html>] in the overall system performance, even if the GPU is capable of handling more tasks. Offloading physics computations from the primary device reduces the computational load on HMDs, optimizing the overall pipeline and ultimately the utilization of the GPU, allowing it to operate at full capacity and improving the overall graphical performance. Additionally, segregating physics computations allows for more sophisticated simulations, such as soft body dynamics and managing a large number of physics objects, without adversely impacting the frame rate, as these computations can run on a dedicated server or processor. A high frame rate is crucial in XR environments, as low frames per second (FPS) can lead to motion sickness and break the user's sense of immersion. By ensuring that the physics calculations do not interfere with the graphical frame rate, users are less likely to experience nausea and other XR-induced discomforts. The Metaverse has made the requirement for dynamic and robust multi-user and interactive experiences in XR environments obvious. 
In this research we aim to maintain the XR session's physics state independently from the users' devices, to support up to 100 collaborating interactive MR users in the same scene (see figure <ref>). To enrich the gameplay experience and enhance the realism and immersiveness of the environment, the distributed pipeline will allow users to real-time interact with objects, in ways that mimic real-world interactions, such as grabbing objects or interacting with softbodies. Synchronization is paramount for user interactions and multi-user sessions, enabling seamless collaboration and dynamic interactions within XR environments. Network usage should remain acceptable for average home/business networks to ensure widespread adoption and accessibility. Compatibility with various modern game engines is essential to ensure the versatility and accessibility of the designed solution. In that respect, the developer experience must be streamlined, with minimal hindrance and seamless integration of the decoupled physics component. The distributed XR pipeline must be adaptable to the game engine development cycle, ensuring seamless integration of the decoupled physics engine in the game development workflow. Ultimately, the research endeavors to leverage edge-cloud infrastructure to offload all physics computations, transcending the limitations of standalone XR HMDs and enabling the realization of realistic operations that include heavy physics simulations. Overall, by meeting these goals and adhering to the constraints of distributed XR pipelines, this research work seeks to advance the capabilities and accessibility of XR applications, enabling richer, more immersive, and collaborative virtual experiences for users. § ALTERING THE GAME ENGINE PIPELINE The Entity-Component-System (ECS) pattern <cit.> has been widely adopted by modern game engines to facilitate the construction of complex game systems, empowering diverse and immersive interactive experiences while maintaining flexibility and scalability. Modern game engines also utilize the scene graph, a hierarchical structure that organizes entities and components. Several modern game engines, including Unity's DOTS (Data-Oriented Technology Stack), Unreal Engine's ECS, Bevy[<https://bevyengine.org/>], and the open-source Godot Engine[<https://godotengine.org/>], have embraced the ECS pattern to streamline game development and enhance performance. Most of these engines tightly integrate a physics engine into their core functionality under the hood, providing accurate and realistic physics simulations for a wide range of applications. Specifically, Unity and Unreal Engine utilize NVIDIA's PhysX physics engine, Bevy cooperates with the Rapier[<https://rapier.rs/>] physics engine, and Godot uses the open-source Bullet physics engine[<https://github.com/bulletphysics>]. §.§ Multiple Users and N-1 Architecture In multi-user virtual environments, each instance of a Multi-user Graphics Engine application traditionally operates its own dedicated physics engine. However, an innovative modification to this setup involves implementing a centralized physics engine micro-service. This micro-service, hosted on a server within the cloud-edge continuum, can be utilized in a 1-N relationship by multiple graphics applications engaged in the same game session. Consequently, this centralized model allows the physics micro-service to perform simulations and then stream the results to all connected graphics rendering applications (see figure <ref>). 
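To make the 1-N relationship concrete, the following schematic sketch (Python-style pseudocode) shows a single authoritative physics loop streaming its results to every connected graphics client; the class, method and parameter names such as PhysicsServer, step and send_unreliable are illustrative placeholders for this sketch, not the actual engine or Riptide API.

TICK_RATE = 60                    # physics steps per second (assumed value)
DT = 1.0 / TICK_RATE

class PhysicsServer:
    def __init__(self, world, transport):
        self.world = world            # the single authoritative simulation
        self.transport = transport    # network layer used to reach the GHosts
        self.clients = set()          # connected GHost instances

    def on_client_connected(self, client_id):
        self.clients.add(client_id)       # a new GHost joins the running session

    def on_client_disconnected(self, client_id):
        self.clients.discard(client_id)   # the session continues for the remaining users

    def tick(self):
        self.world.step(DT)                          # simulate once ...
        snapshot = self.world.changed_transforms()   # entity_id -> (position, rotation)
        for client_id in self.clients:               # ... and stream the result to every client
            self.transport.send_unreliable(client_id, snapshot)

Because the simulation state lives on the server rather than on any one client, users can join or leave without interrupting the physics of the shared session.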
This architecture not only facilitates the collaboration of a significant number of concurrent users but also enhances the efficiency and scalability of resource utilization. The modification of a game engine pipeline delineates two key units with bidirectional communication capabilities: the Graphics Host (GHost) and the Physics Server (PhyS) (see figure <ref>). The GHost unit encompasses the entirety of the game engine graphics pipeline and operates without any active Physics Components during game play. GHost is responsible for maintaining the game logic and performing the rendering. Conversely, the PhyS unit solely manages the entirety of physics computations on behalf of the game engine. The PhyS unit runs in optimal Headless mode, where all processing, communication and calculations are performed without the rendering pipeline, to be as lightweight as possible. The primary objective of this distributed game engine pipeline is to enable every Entity within the scene to be fully simulated by the PhyS, thereby alleviating the computational load on the untethered HMD operated by the GHost. To ensure coherence and real-time performance between these two units, network communication and synchronization is achieved through Riptide networking[<https://github.com/RiptideNetworking/Riptide>]. The lightweight and open-source nature of this networking library allows the exchange of only the absolutely necessary messages, with minimal overhead or added processing from Riptide itself. We predominantly utilize Riptide's "unreliable" connection type, which, akin to UDP, is strategic for minimizing network latency. This type of connection is suitable for streaming applications where occasional packet loss is tolerable because subsequent messages quickly follow. However, for crucial communications that must be reliably delivered - such as object initialization or significant Entity state updates, or object deletion — we use Riptide's "reliable" connection types to ensure these important packets reach their destination. §.§ Dissecting an Entity The scene graph is the fundamental building block of a game scene, representing a hierarchy of entities in the game world, that serve as containers for components that define its properties, behavior, and appearance, ultimately contributing to the overall interactivity and visual representation of the game. Entities can have a variety of components attached to them, each serving a specific purpose. An entity can interact with the physics engine through the use of certain components and features that allow the simulation of physical behaviors. The development of a game application with decoupled physics behavior from an entity involves the usual entity creation process in GHost with an additional "PhysComponentObtainer" (PCO) component. This component is tasked with extracting and packaging all physics-related scripts from the respective entity, along with the object's current transformation data, into a data structure named "Phys Component Container" (PCC). The PCC holds all the necessary information required to initialize the respective entity in the PhyS. When all physics-related information has been extracted from the Entity, the components are removed from the specific entity, to prevent Physics simulations from being computed on the GHost unit. After this initialization process is complete, in the GHost, each PCO component creates a "Graphics Object" (GrO) Component, attaches it to its Entity, and then destroys itself. 
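A minimal sketch of this dissection step is given below (Python-style pseudocode); the PCC field layout and the helper calls such as is_physics_related, serialize and deserialize_component are assumptions made for illustration rather than the engine's actual component API.

from dataclasses import dataclass, field

@dataclass
class PhysComponentContainer:            # "PCC": everything PhyS needs to rebuild the entity
    entity_id: int
    transform: tuple                     # position and rotation at extraction time
    physics_components: list = field(default_factory=list)

@dataclass
class GraphicsObject:                    # "GrO": stays on the GHost entity, positioned by PhyS updates
    entity_id: int

def dissect_entity(entity):
    """GHost side: performed by the PhysComponentObtainer (PCO) during initialization."""
    pcc = PhysComponentContainer(entity.id, entity.transform)
    for comp in list(entity.components):
        if comp.is_physics_related():            # rigidbodies, colliders, joints, ...
            pcc.physics_components.append(comp.serialize())
            entity.remove_component(comp)        # nothing physics-related is simulated locally
    entity.add_component(GraphicsObject(entity.id))
    return pcc                                   # collected by the GHost Controller and sent to PhyS

def build_physics_entity(scene, pcc):
    """PhyS side: performed by the Physics Object (PO) for each received PCC."""
    entity = scene.create_entity(pcc.entity_id, pcc.transform)
    for data in pcc.physics_components:
        entity.add_component(deserialize_component(data))   # inverse of serialize(), engine-specific
    return entity

The shared entity_id is what later allows each GrO on the GHost to apply the transform computed for its corresponding PO on the PhyS.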
The initialization process also ensures that all entities in the GHost scene graph, containing physics components, are hierarchically replicated in the PhyS with only its physics components present (see Algorithm <ref>). This requires any parent that may not contain physics components to still be present in the PhyS. In this case, a PCO must also be placed in the parent, which will generate a PCC which only contains transform information. The system also integrates both a "GHost Controller" (GHC) and a "PhyS Controller" (PhC). Each controller is responsible for managing its respective domain: the GHC oversees all Graphics Objects, while the PhC handles all Physics Objects in the application's scene. These components are also responsible for exchanging messages with each other, to allow for the synchronization of the scene graph in both units. During the initialization stage, the GHC is aware of all PCO Components, gathers all PCCs from the respective PCOs, and transmits them to the PhC. (see Algorithms <ref>, <ref>, Figure <ref>) For each PCC received in the PhyS from the GHost, a new Entity is created. In this new Entity, a "Physics Object" (PO) Component is initialized, and the respective PCC is passed to it. The PO is responsible for doing the reverse operation that the PCO does, creating all Physics Components included in the PCC. As a result, we effectively create two distinct representations of each entity in the scene graph: the Graphics object in the GHost and the Physics object on the PhyS. This dual representation ensures unmodified development process for developers while seamlessly integrating with the advanced simulation capabilities of the physics engine. During the initialization of this process, a unique Entity ID is assigned to each Graphics and Physics object, which is consistent between them. This unique ID ensures that each GrO and its corresponding PO can synchronize accurately, maintaining a consistent state across both systems throughout the game play. After the initialization process, the GHost has no physics components to simulate, leaving all physics simulations to the PhyS. The GrO's transformations are controlled by the PhyS, allowing the GHost to focus on accurately placing the rendered entities in their correct positions. In specific instances, the GHost can send a "MoveToTransform" command to the PhyS to manually adjust an object's transform. However, these transformations are still subject to the laws of physics within the simulation. For example, if a user tries to move an object through another, the objects will collide instead of overlapping, due to their physical properties. The resulting physics simulated transform is what the GHost receives and presents in the HMD to the user. §.§ Multi-user Session initiation & runtime In our game environment, the PhyS micro service can be deployed either by one of the GHosts or through an automated cloud service provider. Once deployed, each GHost connects to the server using its IP and port, establishing a Riptide connection. In a multi-user session, GHost applications, served by the same PhyS, will contain an identical scene graph. During the initialization phase of the Dissection, the GHost scene graph structure is replicated and transmitted to the PhyS (see figure <ref>). The PhyS unit is initialized by the first GHost unit in the multi-user session. Its scene graph is not hard-coded, but replicated from the first GHost's scene graph. 
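The session-initiation logic just described can be summarised by the following sketch; the message names and server fields are illustrative, and in the actual system these exchanges would travel over Riptide's reliable channel.

def on_ghost_connected(server, ghost_id):
    if server.scene_initialized:
        # Subsequent GHosts: the scene graph already lives on the PhyS, so they
        # only start receiving transform updates for the running session.
        server.subscribers.add(ghost_id)
    else:
        # First GHost: its scene graph is used to replicate the PhyS scene graph,
        # so request the full list of PCCs over the reliable channel.
        server.send_reliable(ghost_id, "request_scene_graph")

def on_scene_graph_received(server, ghost_id, pcc_list):
    for pcc in pcc_list:
        build_physics_entity(server.scene, pcc)   # as in the dissection sketch above
    server.scene_initialized = True
    server.subscribers.add(ghost_id)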
This functionality allows compatibility with any GHost unit, promoting versatility and adaptability across different use cases and applications. Subsequent GHosts joining the session, immediately start their session participation by receiving entities' transform updates. §.§ User Avatars In XR Multi-User environments, user avatars are essential for representing players within the digital space. Each avatar consists of several Entities, each designated by a unique identifier—combining a specific EntityID with the respective PlayerID from the session. This identification system ensures that each avatar's interactions and movements are distinctly linked to the corresponding player, enabling precise control over individual avatars in a multi-user setting. Control over avatars and their movements traditionally comes through input devices like controllers and HMDs. The input transformations received from these devices are processed in the GHost and then communicated to the PhyS, through the transmission of MoveToTransform commands. The outcome of these commands is different for each case. The hands interactor of the user's avatar are subject to physics. This means that if a user moves his hand into a space where it collides with an interactable object, a collision will occur. The real position of the user's hand in physical space may differ from the respective position in digital space due to these collisions. Should a user try to move their hand through an object, the hand and object will collide instead of overlapping, due to their physical properties. The user's avatar follows the scene Camera, where the user's point of view is located. The avatar itself does not undergo physics simulations, since this would lead to interfering with an XR user's point of view, which can cause nausea, discomfort and disorientation. For this reason it only contains a kinematic rigid body component, that allows it to be moved in digital space without being subject to physical forces. While avatars do not have collider components, they must still be updated in the PhyS to ensure all players see consistent avatar positions. §.§ Collision Events & Interaction Collision events are integral to fostering interactive game play and crafting engaging narratives within the game environment. Since the GHost has no way of detecting collisions, since no Physics Components exist, all collision detection occurs in the PhyS. When an Entity collides with another in the PhyS, the PO Component collects the EntityID of the collided Entity, and uses the PhC to send it is to the GHost where the GrO Component is updated with it. This allows the GHost to keep track of which entities are colliding with each other. This is important for implementing game logic, interactions and an overall responsive environment. To enhance the system's functionality, a new event-handling system was implemented in GHost, that handles "CollisionEnter" and "CollisionExit" events. These custom events are triggered accordingly, allowing for specific responses when Entities begin or end their interaction through a collision. This method of handling collision events allows the design of complex scenarios and story-driven game play, as it integrates dynamic events and interactions seamlessly into the narrative. Collision events are fundamental for the interaction system within the game environment, particularly in how they facilitate object manipulation by the user. Every interactable entity, contains an "Interactable" component in the GHost. 
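Before turning to the hand interactors, a small sketch of the collision forwarding described above: on the PhyS the PO reports the EntityIDs it is in contact with, and on the GHost the reported set is compared against the previous frame to raise the custom CollisionEnter and CollisionExit events. The callback and class names below are illustrative only.

def report_collision(po, other_entity_id, send):
    # PhyS side: the Physics Object forwards the pair of EntityIDs to the GHost.
    send("collision", {"entity": po.entity_id, "other": other_entity_id})

class ContactTracker:
    """GHost side: kept by the GrO to turn reported contacts into events."""
    def __init__(self):
        self.current = set()

    def apply_update(self, reported_ids, on_enter, on_exit):
        incoming = set(reported_ids)
        for eid in incoming - self.current:
            on_enter(eid)        # CollisionEnter: contact just began
        for eid in self.current - incoming:
            on_exit(eid)         # CollisionExit: contact just ended
        self.current = incoming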
Each of the user's hands entity in the GHost contains an "Interactor" Component and Trigger Collider Components in the PhyS (see figure <ref>). When the user presses the "grab" button on their controller, the Interactor Component accesses the GrO Component, to acquire the Entity that the Interactor is currently hovering, as reported by the PO to the GrO. If the Entity contains an Interactable component, interaction starts. The interaction ends when the user releases the the "grab" button (see Algorithm <ref>). Once an object is successfully grabbed, GHost performs calculations to determine the correct position the interactable entity should have, in relation to the user's hands. The position of the Interactable must be such that it appears within the user's hand. These computations, performed in the GHost at every frame, are required for realistic interactions. The resulting position is sent to the PhyS using a MoveToTransform command. Similarly to how the movement of the hands is achieved, the position of the interactable is with respect to physics, so if a user tries to move an object through another, the objects will collide and not overlap. §.§ Simulating Softbodies To simulate soft bodies in our system, we use a particle-based method <cit.> where the mesh vertices of the model are clustered in particles (see figure <ref>). Each particle controls a set of vertices within a specific range, which allows for realistic deformation when forces are applied. Neighboring particles are interconnected with each other, forming a particle map, allowing force exertion in nearby particles. This approach ensures that movement and deformations are realistically portrayed while maintaining system performance. This processing of soft bodies is handled within the PhyS. To achieve a coherent simulation, the positions of the particles are synchronized with the GHost. This synchronization ensures that both the physical interactions and the visual representations are consistent and accurate across the system, providing a seamless and realistic experience in the simulated environment. §.§ Relay server for local-physics compatibility A relay server in collaborative XR environments acts as an intermediary that facilitates communication between XR clients. This server centralizes data traffic by receiving information from one client and redistributing it to others. This model is particularly beneficial in scenarios where direct peer-to-peer communication is impractical due to network constraints or when uniform data handling is critical for maintaining a cohesive virtual environment. In our setup, the relay server and the physics server coexist to fit a diverse range of client capabilities and ensure a uniform experience across all participants in the session (see figure <ref>). The physics server is available for clients that needs to offload intensive physics calculations, particularly beneficial for lower-end devices that might struggle with complex simulations. This server handles the heavy computational load, allowing these devices to maintain high performance without local resource strain. Meanwhile, the relay server maintains its role as the central coordinator for all client interactions, managing data synchronization. It ensures that updates, whether processed locally or computed by the physics server, are consistently distributed to all clients, keeping the virtual environment synchronized. 
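The coexistence of the two servers can be sketched as follows; whether a client offloads physics is modelled here as a simple per-client flag, whereas in practice this capability would be negotiated when the client joins, and the attach/collect_updates calls are placeholders for illustration.

def attach_client(client, relay_server, physics_server):
    if client.offload_physics:            # e.g. a low-end standalone HMD
        physics_server.attach(client)     # the PhyS simulates on its behalf
    relay_server.attach(client)           # every client is synchronized through the relay

def relay_tick(relay_server):
    # State updates, whether simulated locally by a client or computed by the
    # physics server, are redistributed so that all participants see the same scene.
    for update in relay_server.collect_updates():
        for client in relay_server.clients:
            if client.id != update.sender_id:
                relay_server.send_unreliable(client.id, update)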
This dual-server setup maximizes the efficiency of network and computational resources and provides flexibility and scalability, supporting a wide array of devices. §.§ Network optimisations for Relaying The described architecture involves two relaying servers. The PhyS, that besides physics simulations, relays transformation data to all GHosts in the session, and the relay server that we described in section <ref>. Optimizing a relay server for sending involves minimizing the data sent across the network while ensuring game state consistency and responsiveness. In our solution we've developed strategies to improve how our relay server handles transformation data, ensuring our simulation stays fast and consistent. Selective Synchronization: This method scrutinizes the positional and rotational states of XR objects since their last update, transmitting only those changes that exceed a set significance threshold. This ensures that only impactful alterations are communicated, conserving bandwidth by omitting minor updates. Bitmasking Strategy: When changes are detected, we use a specific coding system to identify the kind of change—whether it's in position, rotation, or both. This method allows us to pack our data more efficiently, sending only the necessary data. This reduces the size and amount of data we send. State Update Mechanism: On the other end, our system reads the incoming codes to figure out what has changed. It then updates only those parts of the local XR objects that need it, which keeps everything running smoothly without unnecessary work. Grouped updates: The relay server transmits transform updates in groups rather than as separate messages for each entity. This grouping strategy reduces the cumulative overhead caused by the multiple header bytes included in each individual network message. Dynamic message size: The size of the message the group is sent in is dynamic, meaning that only the absolutely necessary level of network usage is reached. By consolidating updates into one message, we significantly decrease the network load and enhance the efficiency of data transmission. Sending at Intervals: The Relay Server employs a strategy of transmitting transform updates to the GHosts at fixed intervals for all Entities within the dissected environment. To further refine this approach, entities deemed as critical by the developer of the Application, such as interactable objects or user avatars, are updated at a higher frequency than non-critical items. This increased rate ensures a smoother experience by providing more frequent updates for Entities that significantly influence game play and user interaction. Despite the benefits of interval-based updating and batching, this method can introduce visual artifacts, such as stuttering or perceptible lags in the movement of synchronized objects. To address these issues and improve visual continuity, we have implemented a Dual Quaternion interpolator <cit.>. This tool effectively smooths out the motion of objects between received updates, creating a more fluid and natural appearance. The interpolator calculates intermediate states by considering previous and current transform data, thus mitigating the impact of network-induced delays and providing a seamless visual experience. This combination of strategic update intervals, batch processing of updates, and advanced interpolation techniques ensures that our network optimization efforts enhance the user's experience by creating a more responsive and engaging virtual environment. 
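A compact sketch of how the selective synchronization, bitmasking, and grouped updates described above can fit together is given below. The bit layout, significance thresholds, and struct format are assumptions chosen for illustration; they are not the actual wire format used by the relay server.

```python
# Minimal sketch of relay-side packing: threshold filtering + change bitmask +
# one grouped message with a single header. Layout and thresholds are assumed.
import struct

POS_CHANGED = 0b01
ROT_CHANGED = 0b10
POS_EPS = 1e-3   # assumed significance threshold for position components
ROT_EPS = 1e-3   # assumed significance threshold for quaternion components


def diff_mask(prev, curr):
    """Selective synchronization: flag only the components that changed enough."""
    mask = 0
    if max(abs(a - b) for a, b in zip(prev["pos"], curr["pos"])) > POS_EPS:
        mask |= POS_CHANGED
    if max(abs(a - b) for a, b in zip(prev["rot"], curr["rot"])) > ROT_EPS:
        mask |= ROT_CHANGED
    return mask


def pack_group(entities, cache):
    """Grouped updates: one message, one header, only significant changes."""
    payload = bytearray()
    count = 0
    for eid, state in entities.items():
        if eid in cache:
            mask = diff_mask(cache[eid], state)
        else:
            mask = POS_CHANGED | ROT_CHANGED          # first sighting: send all
        if not mask:
            continue                                  # nothing worth sending
        payload += struct.pack("<HB", eid, mask)      # EntityID + change bitmask
        if mask & POS_CHANGED:
            payload += struct.pack("<3f", *state["pos"])
        if mask & ROT_CHANGED:
            payload += struct.pack("<4f", *state["rot"])   # quaternion x, y, z, w
        cache[eid] = state
        count += 1
    return struct.pack("<H", count) + bytes(payload)       # dynamic message size


# Example: only entity 2 moved enough to be included in the outgoing batch.
cache = {1: {"pos": (0, 0, 0), "rot": (0, 0, 0, 1)},
         2: {"pos": (1.00, 0, 0), "rot": (0, 0, 0, 1)}}
frame = {1: {"pos": (0, 0, 0), "rot": (0, 0, 0, 1)},
         2: {"pos": (1.05, 0, 0), "rot": (0, 0, 0, 1)}}
print(len(pack_group(frame, cache)))   # small: one batch header plus one record
```

Because every entity record shares the single batch header, the per-message header overhead mentioned above is paid once per group rather than once per entity.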
§ EXPERIMENTATION To accurately assess the device load and performance enhancements, our experiment will compare the effects of local versus decoupled physics processing. We will use a controlled setup with the HMD and a more powerful server running PhyS over a local area network. The goal is to determine if decoupled physics simulations improves performance and reduces load on the HMD. Our experimental process (figure <ref>) includes three experimentation scenarios that will assess the distributed XR pipeline performance: a) the "Softbody" scenario deals with complex physics computations while the user interacts with varying numbers of soft body models (Bunny1 with 500 particles, Bunny3 with 1500 particles) (see figure <ref>) b) the "MultiObject" scenario includes physics computations on a scene with varied the number of objects (500, 1000, 2000, 5000 and 10000) with active physics simulations (see figure <ref>), and c) the "CCU" (concurrent users) scenario that involves an interactive scene with multiple (100+) CCUs, gradually joining the XR session (see figure <ref>). This approach allows us to assess how offloading physics computations impacts user device performance across different levels of complexity and user load, while preserving acceptable QoE. Finally, we will experiment with the proposed relay server and compare its metrics with those of a commercial relay server. In all scenarios, we used a Meta Quest 2 VR HMD networked over a 5GHz Wi-Fi connection. In the "Softbody" and "MultiObject" scenarios, the PhyS was hosted in an AMD Ryzen 9 5900X CPU server with 64GB of 3200MHz RAM, and NVIDIA GeForce RTX2070S GPU. The PhyS server was connected to a ZTE H1600 Router via Ethernet cable. In the "CCU" scenario, the PhyS was hosted on an Intel Core i7-6700K CPU server with 32GB of 2133MHz RAM and an NVIDIA GeForce GTX1070 GPU. We spawned special bot-users in the XR session that ran on local computers connected via Ethernet cable to the ZTE H1600 router, but connected to the PhyS over the internet using Ethernet. MultiObject experimentation showcased that our remote physics approach decreases significantly the total frame time in both HMD (see figure <ref>) and desktop PC scenarios (see figure <ref>) for all cases. For scenes with more than 7500 objects, we notice that although the Graphics object update process increases significantly for our approach, as the HMD has to perform a great number of updates, it is almost half compared to the local physics case. The breakdown of frame times (see figure <ref> right) shows that most of the frame time in local-physics case is consumed by physics calculations, which explains the great improvement in the decoupled physics case. The PhyS simulates (see figure <ref> right) up to 2000 objects in less than 10ms, while the rest of the cases are far below the local physics case. Additionally, table <ref> shows a steady increase of outgoing throughput from the PhyS as objects increase. In the decoupled physics case of the Softbody experimentation on HMD (see figure <ref> left and figure <ref>) we see similar values with MultiObject scenario (cases of 500 and 1500 objects), where in the local we notice a great reduction in the total frame time, as all complex softbody computations are handled by the HMD. Although the softbodies have around 500 particles each, they are not equal in performance with 500 objects locally, due to the added computational complexity from spring joints and multiple rigidbodies. 
In the dissected case, the computational power required from the HMD is the same for the softbodies and for the 500-object scene, since the GHost only synchronizes transforms. We also notice (figure <ref> left) a higher rendering and game logic percentage, as the physics offloading allows for more complex rendering without having to worry about the physics overhead. The PhyS performance (see figure <ref>) shows that both softbodies are simulated in under 10ms, far below the physics simulation times of the local physics case. In the respective Softbody and MultiObject experimentations with a desktop PC (see figure <ref>) we see a much greater improvement in overall performance due to the more powerful CPU and GPU of the desktop computer. The trends remain the same. In particular, the 10,000-object scenario is a typical case of this result, where the frame time of the dissected physics case is 60ms compared to 750ms for the local physics case. Experimentation outcomes in the CCU scenario showcase the capability of the PhyS to successfully serve 100 concurrent users in the same VR session (see figure <ref>). Due to the small number of available XR HMDs, most of the 100 users were bot-users, deployed with the same physics objects as an HMD user. The bot-users moved constantly in the scene, so that they generated the same load on the physics server as an HMD user. In that respect, we artificially generated a reasonably representative standard XR session for the CCU experimentation scenario. As users gradually join the session, we notice a steady rise in outbound network usage on the PhyS (see Figure <ref>), which stabilizes after all users have joined the session and are constantly moving in the scene. The total latency in multiuser scenarios refers to the delay experienced by UserA in viewing an interactable object that UserB interacts with. The computation of the worst-case total latency consists of five components, grouped as: a) the frame rendering time of both UserA and UserB, b) the network latency between the PhyS and each of the two users, and c) the PhyS computation time. To reduce network load, the PhyS sends updates for interacted objects at a rate of 48 times per second, introducing a delay of roughly 20ms between updates. A sub-experiment for total latency was conducted with two users in the same session moving a virtual object around. In this experiment, the HMDs were connected to the same Wi-Fi network, while the PhyS was situated remotely and accessed via the Internet. The resulting average total latency was measured to be approximately 68ms (see figure <ref>), which provides evidence that the decoupled physics server can be used without compromising QoE in interactive multiuser scenarios. Experimentation with the relay server, with cases involving up to 512 physics objects, showcased a constant improvement in outgoing bandwidth compared to the commercial Photon Cloud relay server (free plan) (see Table <ref>). Our experimentation scenario involves multiple objects (cubes), each constantly rotating and translating, to produce extreme cases of network bandwidth per object. Both solutions were set up with a send rate of 12 times per second per object. The host for each session is responsible for sending the latest transform data to the relay server. In the performance analysis, our relay server proves noticeably efficient, as it consistently exhibits lower outbound rates compared to the alternative Photon solution, and the gap between the two keeps widening (see figure <ref>).
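Returning to the total-latency breakdown above, a short back-of-the-envelope reconstruction is given below. Only the 48 Hz send rate (roughly 20 ms between updates of interacted objects) comes from the text; every other component value is an assumed placeholder, included only to show how the individual terms add up.

```python
# Back-of-the-envelope worst-case latency budget (all component values assumed).
frame_time_user_a = 14      # ms, assumed rendering time on UserA's device
frame_time_user_b = 14      # ms, assumed rendering time on UserB's device
net_user_b_to_phys = 15     # ms, assumed network latency UserB -> PhyS
net_phys_to_user_a = 15     # ms, assumed network latency PhyS -> UserA
phys_compute = 5            # ms, assumed PhyS simulation step
send_interval = 1000 / 48   # ~20.8 ms between updates for interacted objects

worst_case = (frame_time_user_a + frame_time_user_b
              + net_user_b_to_phys + net_phys_to_user_a
              + phys_compute + send_interval)
print(f"worst-case total latency ~ {worst_case:.0f} ms")
```

With these assumed inputs the worst case sums to roughly 84 ms, which is consistent with the measured average of about 68 ms sitting below the worst-case bound.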
We assume each user requires four distinct physics objects: two hands, one head (avatar), and one interactable object, which allows us to determine the maximum number of users our relay server can support. Specifically, given a total system capacity of M objects, the number of supported users N is derived from the formula N = M/4. This resource allocation aligns precisely with our relay server's capability to efficiently manage multiple user interactions for up to 128 users. § CONCLUSIONS AND FUTURE WORK We presented a novel system that transforms a modern game engine's pipeline, optimizing XR performances and enhancing user experience in XR environments. By decoupling the physics engine from its tight connection with the game engine pipeline and implementing a client-server N-1 architecture, we establish a scalable solution that efficiently serves multiple graphics clients (HMDs) with a single physics engine application running on edge/cloud infrastructure. This single point of truth for physics computations not only fosters better synchronization in multi-player scenarios, without introducing unnecessary overhead in single-player experiences. The maintenance of the Physics state at the dedicated engine inside the PhyS, regardless of users joining, leaving, or participating in the session, ensures the continuity of the XR session. Additionally, the introduction of a relay server facilitates seamless and optimized collaboration between users utilizing local physics and those connected to the physics server, enabling diverse participation in shared virtual environments. The decoupling of the Physics Engine from the HMD and relocating it to an edge/cloud node alleviates strain on local hardware, empowering it to dedicate more resources to rendering high-quality visuals. This strategic offloading of heavy tasks from the CPU unlocks the full potential of untethered HMDs, allowing their powerful GPUs to produce more impressive and complex visuals without being bottlenecked by CPU limitations. Moreover, this approach yields numerous benefits including increased frame-rate and QoE in highly interactive and realistic VR simulations, support for advanced interactions on softbodies, scenes with a great number of physics objects, and multi-user scenes with more than 100 CCUs. The modular pipeline with one physics server for all clients ensures efficient resource utilization, facilitates potential gains in HMD battery life and increased user mobility. Also, the higher frame rate achieved translates to less nausea and enables more realistic physics simulations, enhancing the effectiveness of VR medical training applications. Additionally, the incorporation of a Geometric algebra interpolator minimizes inter-calls between dissected parts, preserving an equivalent QoE while alleviating network stress. Collectively, these design decisions contribute cohesively to the successful achievement of the stated goals, showcasing a well-thought-out and effective approach to optimizing XR gaming experiences. We demonstrated the feasibility of Physics engine offloading through experimental validation with 100 concurrent users, 10,000 objects and softbody simulations. This capability not only confirms the technical viability of the proposed architecture but also opens up new avenues for enhancing XR experiences, promising more immersive and collaborative XR applications without compromising HMD or desktop performance. 
In our future work, we plan to optimize PhyS further by enhancing and evaluating QoE even under degraded network conditions. To fully harness edge-cloud resources, we plan to integrate multi-threaded physics algorithms and GPU compute shaders, aiming to significantly reduce physics computation times within PhyS. Exploring Pixar's USD physics schemas presents an exciting opportunity to streamline the initialization of the decoupled PhyS. Additionally, to further improve frame times on XR HMDs, we will investigate methods to optimize the update process of Graphics objects. Finally, expanding PhyS functionality to support persistent, always-on sessions will also be a key focus, aiming to allow seamless and uninterrupted user experiences across extended periods of interaction in the Metaverse.

§ ACKNOWLEDGMENTS
This work was partially funded by the EU research and innovation programmes CHARITY (H2020 GA No 101016509) and FIDAL (Horizon Europe GA No 101096146), and by the Innovation project Swiss Accelerator supported by Innosuisse. We would like to thank Manos Kamarianakis and Maria Pateraki for their valuable comments.

§ AUTHOR BIOGRAPHY
George Kokiadis is a PhD candidate at the University of Crete and a member of the Human-Computer Interaction Lab at FORTH-Hellas. His research revolves around the use of 5G and cloud infrastructure to enhance XR technologies and applications. Contact him at george.kokiadis@oramavr.com.

Dr. Antonis Protopsaltis is a computer scientist and the Lead Research Scientist at ORamaVR. He is a Special Teaching Fellow in Computer Graphics at the University of Western Macedonia (UoWM) and an affiliated researcher at the ITHACA-UOWM lab, specializing in extended reality and CAD methods. Contact him at antonis.protopsaltis@oramavr.com.

Michalis Morfiadakis is an MSc student at the Computer Science Department of the University of Crete and a networking developer at ORamaVR. His thesis was about developing a relay-server-based network solution for VR collaborative applications. Contact him at michael.morfiadakis@oramavr.com.

Nick Lydatakis is Head of Platform at ORamaVR and a PhD candidate at the University of Crete, exploring application partitioning frameworks for high-fidelity edge-cloud collaboration in extended reality with soft mesh deformations. Contact him at nick.lydatakis@oramavr.com.

Dr. George Papagiannakis is a distinguished computer scientist with a specialization in computer graphics, extended reality, and geometric algebra. After earning his PhD from the University of Geneva in 2006, he has established a notable academic and entrepreneurial career. Currently, he is a professor at the University of Crete and holds positions at FORTH-ICS and the University of Geneva. He has made significant contributions to human-computer interaction and virtual reality, particularly in medical training and virtual heritage, using advanced computational and geometric computer graphics models. As a co-founder and CEO of ORamaVR, he pioneers the development of VR solutions for medical training. He has published over 120 papers and is involved in various professional societies, including IEEE and ACM. Notably, his work has been recognized through awards such as the Marie Curie Fellowship, and he has held key roles in prominent conferences such as CGI. His book on Mixed Reality and Gamification has been highly influential, evidencing his impact in the field. Dr.
Papagiannakis continues to lead research projects, supervising numerous doctoral and master’s students, and has attracted significant R&D funding. Contact him at george.papagiannakis@oramavr.com.
Information Flow in the FTX Bankruptcy: A Network Approach
Riccardo De Blasis (Department of Management, Marche Polytechnic University, Piazzale R. Martelli 8, 60121 Ancona, Italy), Luca Galati (corresponding author, luca.galati@unibo.it, personal page: https://www.unibo.it/sitoweb/luca.galati/en; Department of Management, Alma Mater Studiorum - University of Bologna, Via Capo di Lucca 34, 40126 Bologna, Italy, and School of Business, University of Wollongong, Northfield Ave, Wollongong NSW 2500, Australia), Rosanna Grassi (Department of Statistics and Quantitative Methods, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milano, Italy), and Giorgio Rizzini (Faculty of Sciences, Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy).

§ ABSTRACT
This paper investigates the cryptocurrency network of the FTX exchange during the collapse of its native token, FTT, to understand how network structures adapt to significant financial disruptions, by exploiting vertex centrality measures. Using proprietary data on the transactional relationships between various cryptocurrencies, we construct the filtered correlation matrix to identify the most significant relations in the FTX and Binance markets. By using suitable centrality measures - closeness and information centrality - we assess network stability during FTX's bankruptcy. The findings document the appropriateness of such vertex centralities in understanding the resilience and vulnerabilities of financial networks. By tracking the changes in centrality values before and during the FTX crisis, this study provides useful insights into the structural dynamics of the cryptocurrency market. Results reveal how different cryptocurrencies experienced shifts in their network roles due to the crisis. Moreover, our findings highlight the interconnectedness of cryptocurrency markets and how the failure of a single entity can lead to widespread repercussions that destabilize other nodes of the network.

Highlights:
* Centrality measures trace network structure dynamics in FTX's crisis
* Network metrics reveal cryptocurrencies' vulnerabilities
* More sophisticated altcoins show their centrality over popular cryptocurrencies
* Network analysis is crucial for risk assessment and crisis management in DeFi

Keywords: Bankruptcy, Centrality Measures, Cryptocurrency Market, Decentralized Finance, Complex Networks

§ INTRODUCTION
In recent years, the cryptocurrency market has attracted considerable attention from academics and practitioners <cit.>. According to <cit.>, since 2017, the number of publications on cryptocurrencies has rapidly increased, with more than 140 papers published in 2021 alone. Almost half of this research focuses on the prediction of returns and volatility, while 20% studies the relationship between pairs and portfolios, and only about 7% of researchers are interested in bubbles and extreme conditions of this emerging market. Moreover, the majority of this research (almost 70%) employs statistical and machine learning methods, and only limited attention is devoted to studying the cryptocurrency market from a network perspective.
As such, this study aims to fill these critical gaps in the literature by examining the network market structure of the bankrupted cryptocurrency exchange FTX during the collapse of its token, FTT. Cryptocurrency markets, as traditional financial markets, are characterized by a high number and possibly different types of interactions among various market participants. Due to the intricate nature of such markets, exploiting complex network tools can be highly effective in revealing hidden attributes and determining the importance of each cryptocurrency within the market as a whole. In particular, by the use of centrality measures, it is possible to assess how central a cryptocurrency is. This allows for the identification of the relevant cryptocurrencies not only on the basis of the connections with other cryptocurrencies in the market but also on the information that each can transmit to others. Notably, the use of these measures is crucial to reveal the signs of the arrival of a crisis on the market. In the complex network literature, cryptocurrency markets have been analyzed following two main approaches. Part of the literature directly analyses the blockchain public data constructing the cryptocurrency transaction networks, which are the largest real-world networks with publicly accessible data (see, e.g., <cit.>). Conversely, other studies concentrate on the network constructed from the analysis of the price series. Among possible approaches to construct networks from time series (see <cit.> for a review), a particularly powerful method involves the construction of the correlation matrix: where nodes represent the assets, and weighted edges capture the relationship among couples of nodes by means of the correlation coefficient. However, this approach constructs a full matrix that does not allow the identification of the informative structure of the network and the filtering of relevant information. In the literature, there are two main approaches to reducing information's redundancy in the correlation matrix and building sparse networks containing only relevant edges: (i) the minimum spanning tree (MST) and (ii) the planar maximally filtered graph (PMFG). The approach proposed in the seminal work <cit.> belongs to the first stream of research. The author proposes a method filtering the most important n-1 links from a n × n correlation matrix to construct the MST by introducing an ultrametric that transforms correlations into distances. <cit.> follows a similar approach, first building the network of stocks from the New York Stock Exchange (NYSE) and then extracting the MST from the distance matrix. Regarding the second filtering approach, the authors in <cit.> extend the methodology in <cit.> proposing a heuristic algorithm to construct the Planar Maximally Filtered Graph (PMFG) filtering the correlation matrix. <cit.> analyzes the topological features of a class of PFMG networks from the returns of the 300 most capitalized stocks traded at NYSE during the period 2001–2003 at different time horizons. Recently, <cit.> proposes a new efficient algorithm to filter the correlation matrix based on triangulation called Triangulated Maximally Filtered Graph (TMFG). Among the network applications to cryptocurrencies, <cit.> examines the MST network built from the correlations of daily returns of 16 cryptocurrencies following the approach in <cit.>. 
From a static analysis of the network, the authors identify the Ethereum currency in a central position of the MST, then as the benchmark within the market, leaving Bitcoin in a peripheral position. From a dynamic network perspective, <cit.> analyzes the effects of information flows in cryptocurrency markets built from the Granger causality[Granger causality networks are graphs in which two nodes are connected if one of them causes in Granger meaning (see <cit.>) the other one.] among weekly log-returns and find a quite stable network structure over time. On the contrary, <cit.> examines the stability of the PMFG cryptocurrency network around critical events using a function of neighbours' influence strength. The author shows that critical events lead to significant changes in the structure of the network from stability to fragility, going back to stability once the critical time has passed. <cit.>, instead, explores the dynamic downside risk among digital financial assets (cryptocurrencies, DeFi tokens, and NFTs) and traditional financial assets (stock indices and commodities) by constructing a daily network based on the CoVaR measure. They document significant tail risk spillovers, bidirectional between digital assets and commodities, and unidirectional from stocks to digital assets and from commodities to stocks. To this extent, it is worth noting that a major critical event has occurred in the cryptocurrency industry, the collapse of the FTX market in November 2022. This event has been analyzed in the literature from different viewpoints. From a market microstructure perspective, <cit.> contributes to understanding the systemic implications of major disruptions in the cryptocurrency markets. In particular, the authors examine how the halt in withdrawals at a major exchange affects market liquidity, traders' behavior, and asset pricing dynamics. Their findings indicate significant liquidity deterioration and effectively highlight the critical vulnerabilities within the cryptocurrency market infrastructure, especially under stress scenarios like bankruptcy and operational disruptions. Similar to <cit.> for the collapse of the Terra-Luna token and <cit.> for the Silicon Valley bank bankruptcy, <cit.> uses a BEKK Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model to examine the intraday volatility spillover effects of the FTX collapse to other cryptocurrency exchanges. The authors found evidence consistent with their proposition, and also examined the information cascade effects of other cryptocurrency assets on FTX when nearly all withdrawals were prohibited. Abnormal returns for major assets indicated a flight to safety from less to more authoritative digital assets. <cit.> further corroborates these findings, showing that the FTX collapse increased overall intraday volatility in the cryptocurrency markets, with stablecoins being the most affected. Their use of the Multiplicative Component GARCH (MCGARCH) model and Time-Varying Parameter VAR model (TVP-VAR) methodology revealed FTT's role as a primary volatility transmitter, particularly impacting stablecoins like USD Coin and altcoins like Polygon, thus highlighting the interconnectedness and systemic risk in the cryptocurrency market. The recent scholarly examinations of the FTX collapse shed light on its broader implications as well. <cit.> finds surprisingly stable systemic risks and liquidity despite the collapse, suggesting inherent market resilience against such shocks. 
<cit.> identifies specific contagion effects propagated by FTX-associated tokens like FTT and Serum, which significantly influenced related financial products. The study <cit.> reveals that while cryptocurrencies like Bitcoin, Ethereum, and Binance experienced significant negative returns due to the FTX collapse, traditional asset markets remained unaffected, indicating limited contagion. In a similar vein, <cit.> finds that the FTX collapse significantly impacted cryptocurrency markets but not traditional financial assets, showing traditional investors' indifference to cryptocurrency fluctuations during bear markets. <cit.> examines how market sentiment and investor behavior changed in response to news releases related to FTX, while <cit.> argues for enhanced regulatory oversight in crypto markets to mitigate financial instability risks. From a network perspective, the FTX collapse has been studied by <cit.>, which examines the cross-exchange risk in a high-frequency setting employing a Multivariate Heterogeneous AutoRegression (MHAR) model to build the network. The network is constructed by filtering the correlations which exceed in absolute value a fixed threshold. The authors find that FTX bankruptcy triggered a chain effect among exchanges, i.e., increased partial correlations with other exchanges, and highlighted the spillover risk and its persistent effect across centralized exchanges. Additionally, <cit.> investigates the causes and consequences of FTX’s failure. Their results reveal that leveraging and misuse of its native token, FTT, exacerbated FTX's financial fragility highlighting the Terra-Luna collapse as a pivotal trigger event. The study exploits the TMFG to model the evolutionary dependency structures of 199 cryptocurrencies, showing the systemic impact of Binance's actions on FTX's downfall and emphasizing the trend toward centralization in the crypto market. To the best of our knowledge, this study is the first that combines market microstructure theory and network analysis to investigate the FTX collapse. It provides insights into the evolving patterns of suitable centrality measures throughout the crisis of one the largest cryptocurrency centralized exchanges, FTX, and unveils the intricate relationships between market behaviors and investor responses, which, in turn, modify the cryptocurrency network structures before and during financial disruptions. In addition, our study provides suitable measures for assessing network resilience and stability. In particular, it highlights the crucial roles that different cryptocurrencies play in sustaining the informational architecture of the market during crises. The results offer a comprehensive picture of resilience and vulnerability inherent in financial networks, especially within such significant markets as FTX and Binance, the industry leader (see, e.g., <cit.>). This also hints at potential strategic pivot points for stakeholders within the cryptocurrency ecosystem. Indeed, understanding the centrality dynamics can equip market participants, analysts, and regulators with deeper insights into the critical nodes within the networks. This is particularly useful because it potentially drives investment decisions, risk assessments, and regulatory considerations. This study, thus, contributes to a better comprehension of the structural complexities within digital asset markets, revealing the intricate interplay of connections, resilience, and influence. The rest of the paper proceeds as follows. 
Section <ref> details the methodology adopted through a theoretical background on network theory, centrality measures and filtered graphs. A toy example is proposed to show the appropriateness of the proposed centrality measures in cryptocurrency networks. Section <ref> describes the data and the empirical findings. Section <ref> concludes. § METHODOLOGY §.§ Preliminary definitions on network theory In this section, we briefly review some mathematical network definitions. A network is formally represented by a graph G=(V,E), where V is the set of n nodes (or vertices) and a set E of m edges (or links). Two nodes i and j are adjacent if there exists an edge connecting them, i.e., if (i,j) ∈ E. G is undirected if (j,i)∈ E whenever (i,j)∈ E. G is simple if loops – a link joining a vertex to itself – and multiple edges – more edges incident to the same pair of vertices – are not allowed. In this paper, we assume that graphs are undirected and simple. A subgraph G'=(V',E') of G is a graph such that V' ⊆ V and E' ⊆ E. The degree d_i of a node i is the number of links incident to it. A complete graph K_n is a graph of n nodes such that each node degree is equal to n-1. A graph is weighted if a non-negative real number w_ij is associated with each edge (i,j) of G. The adjacency relations can be represented by a n-square matrix 𝐀, the adjacency matrix, whose elements a_ij are equal to 1 if (i,j) ∈ E, and 0 otherwise. As graphs are simple and undirected, the diagonal entries of 𝐀 are null, and 𝐀 is symmetric. The adjacency relations for a weighted graph are represented by a n-square matrix 𝐖, with elements w_ij if the weighted link (i,j)∈ E, and 0 otherwise. An i-j path is a sequence of distinct adjacent vertices from i to j. The distance d(i,j) between i and j is the length of the shortest path joining them when such a path exists, and it is set to +∞ otherwise. A cycle of length k is a path of k edges in which the first and the last vertices coincide. A graph G is connected if there is a path between every pair of vertices. A connected component of G is a maximal set of nodes such that each pair of nodes is connected by a path. A connected graph has exactly one connected component. For a more detailed treatment of graphs and networks, we refer the reader, for instance, to <cit.> and <cit.>. Since the mathematical object underlying the network is the graph, in the rest of the text, we will use the words graph and network interchangeably. Finally, we introduce the concepts of planar graphs and maximally planar graphs. A planar graph is a graph that can be drawn in such a way that no edges cross each other, see <cit.>. A necessary but not sufficient condition for a graph G to be planar is that m ≤ 3n-6 for n≥ 3. Notice that this condition implies that planar graphs are sparse networks characterized by a number of edges of order O(n). Planar graphs are useful in extracting relevant information from a complex database represented by a weighted graph. Indeed, in such a complex structure, it can be meaningful to filter data by unveiling the most significant information. This can be done by searching for the largest possible subgraph satisfying some topological constraints. In the related literature about networks, this problem is known as the Weighted Maximal Planar Graph problem (see <cit.>). 
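As a quick illustration of these notions, the planarity of a graph and the necessary edge-count condition m ≤ 3n-6 can be checked directly. The snippet below uses networkx and is purely illustrative.

```python
# Illustrative check of planarity and of the necessary condition m <= 3n - 6.
import networkx as nx

G = nx.complete_graph(6)                 # K_6: n = 6, m = 15 > 3n - 6 = 12
n, m = G.number_of_nodes(), G.number_of_edges()
print("necessary condition m <= 3n - 6:", m <= 3 * n - 6)   # False, so K_6 cannot be planar

is_planar, _ = nx.check_planarity(G)
print("planar:", is_planar)              # False

H = nx.complete_graph(4)                 # K_4 is planar: m = 6 = 3n - 6
print(nx.check_planarity(H)[0])          # True
```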
In the financial context, a first attempt to solve such a problem is presented in <cit.> in which the information's filtering procedure is performed by extracting the Minimum Spanning Tree (MST), that is a subgraph on n nodes, connected and without cycles, with n - 1 edges, such that the sum of all edge weights is minimized. A more sophisticated procedure to extract relevant information consists of constructing the Planar Maximally Filtered Graph (PMFG), which belongs to the class of Information Filtering Networks (see <cit.>). Specifically, given the weighted adjacency matrix of a complete graph K_n, the authors propose an algorithm to extract the maximal planar subgraph, with n vertices and 3(n-2) edges, ensuring the highest sum of edge weights. The construction of PMFG is based on the following procedure: all the edges of the initial dense network are sorted in non-increasing order, and one edge per time is added to the PMFG. Edges that violate the planarity constraint are discarded. Edges are added until the PMFG has exactly 3(n-2) edges. Recently <cit.> proposed a new algorithm to determine the solution of the Weighted Maximal Planar Graph problem. The resulting subgraph is based on a triangulation obtained by maximizing a score function linked to the information withheld by the starting network. §.§ Centrality measures based on paths In network theory, centrality is one of the key issues. Broadly speaking, any element of the network (nodes, edges or groups of nodes) plays a role with respect to the global network structure. However, to assess their relevance in terms of connections, the most studied aspect of centrality is the assignment of a score to the vertices. The degree centrality of a node i is the most intuitive centrality measure as it counts the number of neighbours of i, and it is formally represented by the normalized degree d_i=1/n(n-1)∑_j=1^n a_ij. With reference to the information flow, an interesting class of centrality measures is those based on the concept of distance in the network, namely paths between pairs of nodes. A relevant role in this framework is certainly played by the closeness centrality (see <cit.>). This measure is based on the length of the paths from a node i to all other nodes in the network, and formally, it is defined as the reciprocal of the sum of the distance between i and all other nodes (multiplied by n-1 to obtain a normalized measure): c(i)=n-1/∑_j d(i,j). A meaningful weighted version has been proposed by <cit.>, based on the idea of weighted shortest paths. The identification of the shortest paths is quite simple for unweighted networks. Indeed, the geodesic distance between two nodes i and j is the length of the path with the minimum number of edges connecting i and j. The matter is more complicated when links are weighted. The problem of identifying the weighted shortest path has been analyzed in many papers, e.g., see <cit.> among others, most of them based on Dijkstra's algorithm, where weights on the links are interpreted as costs of transmission (<cit.>). In the milestone papers of <cit.> and <cit.> the authors propose to invert the link's weight before applying the Dijkstra's algorithm. Indeed, a low weight of the link makes the passage through it more costly than passing through a link with a high weight.[Indeed, given two links weights w_ij and w_hk such that w_ij>w_hk then 1/w_ij< 1/w_hk.] 
In such a framework, the weighted shortest path distance between nodes i and j is defined as d^w(i,j)=min(1/w_ih+…+1/w_hj), and the normalized weighted closeness centrality measure as c^w(i) = n-1/∑_j d^w(i,j). Another centrality path-based measure is information centrality. This measure is based on the concept of efficiency introduced by <cit.> to assess how much nodes in a network exchange information. The authors assume that the information between two nodes i and j is spread through one of the possible weighted shortest paths between the involved nodes. In this way, the efficiency ϵ^w(i,j) is defined as the reciprocal of the shortest distance. Indeed, the higher the geodesic distance between i and j, the lower their efficiency in the information's transmission. The efficiency ϵ^w(i,j) between i and j is therefore defined as ϵ^w(i,j) = 1/d^w(i,j), so that if d^w(i,j)= + ∞, i.e. there is no a shortest path between i and j, then ϵ^w(i,j) = 0 and therefore no information can travel between those nodes. From (<ref>), the efficiency of a graph G naturally arises: ε^w(G) = 1/n(n-1)∑_i,j∈ V i jϵ^w(i,j). Giving the previous definition, the information centrality of a node i is defined as (<cit.>): c_I(i) = 1 - ε^w(G’)/ε^w(G), where G' is the subgraph obtained by removing from G all the edges incident to node i, i.e., broadly speaking, the node i in G’ is an isolated node. This measure is suitable to capture how a network reacts when a node i is deactivated, i.e., when node i cuts all its links. Thus, it measures the relative reduction in the network efficiency after the node i's removal. Information centrality ranges in [0,1]: if c_I(i) = 0 it means that the removal of node i does not affect the efficiency of G', i.e., ε^w(G)=ε^w(G’) while c_I(i) = 1 means that G' is an empty graph, i.e. the edge set of G' is empty. Therefore, among the centrality measures described in this section, the latter is the most significant in terms of information transmission. Moreover, in Section <ref>, we use a global centrality measure by averaging nodes' information centrality c_I(G) = 1/n∑_i c_I(i). §.§ A toy example In this section, by means of a simple example, we show the appropriateness of the proposed centrality measures in capturing the role of each node in the transmission of information within the network. The network G is plotted in Figure <ref> whose weighted adjacency matrix is: 𝐖 = [ 0.0 0.5 0.2 0.3 0.4; 0.5 0.0 0.1 0.2 0.0; 0.2 0.1 0.0 0.0 0.0; 0.3 0.2 0.0 0.0 0.0; 0.4 0.0 0.0 0.0 0.0; ]. We first compute the centrality measures degree and closeness; then the information centrality of G by using Formula (<ref>). Notice that the closeness has been computed applying Formula (<ref>), which is based on the reciprocal of the link's weight. Table <ref> reports node rankings based on the three centrality measures. We observe that the node a is the one with the highest information centrality, highlighting its strong presence in paths connecting nodes and, therefore, confirming its central role in spreading information in the network. Consider in the network G the four weighted shortest paths connecting nodes d and c (i.e., d-a-c, d-b-c, d-a-b-c, d-b-a-c). Among these paths, the shortest one, i.e. the one with the smallest cost, is b-a-c with d^w(d,c)= 8.33. We notice that the node a is present in three out of four paths connecting d and c, in line with its central role in the network. After the node a's removal (graph G' in Figure <ref>), there exists only one path connecting d and c with a cost of 15. 
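A short script reproducing the toy computation is given below; it is an illustrative sketch, not the authors' code. Edge lengths are taken as the reciprocal weights 1/w_ij, closeness and information centrality follow the definitions above, and the shortest d-c path recovered is d-a-c with cost 1/0.3 + 1/0.2 ≈ 8.33.

```python
# Illustrative reproduction of the toy example with networkx.
import networkx as nx

W = {("a", "b"): 0.5, ("a", "c"): 0.2, ("a", "d"): 0.3,
     ("a", "e"): 0.4, ("b", "c"): 0.1, ("b", "d"): 0.2}

G = nx.Graph()
for (i, j), w in W.items():
    G.add_edge(i, j, weight=w, length=1.0 / w)   # length = cost used by Dijkstra

def weighted_closeness(G, i):
    n = G.number_of_nodes()
    dist = nx.single_source_dijkstra_path_length(G, i, weight="length")
    return (n - 1) / sum(d for j, d in dist.items() if j != i)

def global_efficiency(G, nodes=None):
    nodes = list(G.nodes()) if nodes is None else nodes
    n = len(nodes)
    eff = 0.0
    for i in nodes:
        dist = nx.single_source_dijkstra_path_length(G, i, weight="length")
        eff += sum(1.0 / dist[j] for j in nodes if j != i and j in dist and dist[j] > 0)
    return eff / (n * (n - 1))

def information_centrality(G, i):
    Gp = G.copy()
    Gp.remove_edges_from(list(G.edges(i)))        # deactivate node i
    return 1.0 - global_efficiency(Gp, list(G.nodes())) / global_efficiency(G)

# Shortest d-c path and its cost: d-a-c with 1/0.3 + 1/0.2 = 8.33
print(nx.dijkstra_path(G, "d", "c", weight="length"),
      round(nx.dijkstra_path_length(G, "d", "c", weight="length"), 2))

for v in sorted(G.nodes()):
    print(v, round(weighted_closeness(G, v), 3), round(information_centrality(G, v), 3))
```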
This result confirms that the higher the information centrality of the deactivated node, the higher the cost of information transmission in the network. Table <ref> reports all the weighted paths with their costs both in G (Panel A) and G' (Panel B). Similar considerations can be done by deactivating one node of G per time and the impact in terms of graph efficiency is shown in Table <ref>. The results confirm the predominant role of node a in the network: indeed, the deepest drop in the network efficiency is observed when the node a is deactivated, while the less deep efficiency is associated with the deactivation of the node e. § RESULTS §.§ Data This study uses cryptocurrency trading price data spanning from October 16^th to November 16^th, 2022, gathered from FTX,[<https://ftx.com/>] the bankrupted exchange of reference, and Binance,[<https://www.binance.com/>] the major exchange in cryptocurrency markets. However, data from FTX were only available up until November 12^th, 03:28 a.m., due to operational disruptions. Proprietary data were collected from Refinitiv,[<https://www.lseg.com/en>] a London Stock Exchange Group (LSEG) business, and sourced from the DataScope Select database.[<https://www.lseg.com/en/data-analytics/products/datascope-select-data-delivery>] The initial dataset comprises a total of 442 cryptocurrencies on FTX and 344 cryptocurrencies on Binance. In order to maintain focus on purely cryptocurrency movements and exclude external financial influences, several categories of assets were removed from the initial dataset. Specifically, we excluded leveraged tokens (bear and bull), tokens with partial exposure (half), and hedging options (hedge), as well as all fiat currencies, which are not the focus of our study. In addition, to avoid repetition and high correlations between similar crypto-assets, we excluded the cryptocurrency pairs against Tether (USDT) or other digital assets and focused exclusively on trading pairs against the US dollar (USD), being the most liquid and used instrument.[For Binance, instead, we focus exclusively on trading pairs against the USDT as the exchange does not allow customers to trade against the fiat currency USD.] After these exclusions, the dataset was consolidated into 195 cryptocurrencies traded on FTX and 324 cryptocurrencies traded on Binance.[The full lists of the cryptocurrencies included in this study are available in the appendix] Nonetheless, as in <cit.>, the dataset collected from Binance is used as an overall market comparison only, and the focus of this study remains on the actual bankrupted exchange, FTX. The full list of the cryptocurrencies analyzed is in <ref>. For high-frequency analytical granularity, in the empirical analysis we processed the data to include 1-minute logarithmic returns for each of the remaining cryptocurrencies, computed as r_i,t = ln(P_i,t/P_i,t - 1) where P_i,t is the price of the cryptocurrency i at time t and P_i,t - 1 is the price of the cryptocurrency i at time t-1. This high-frequency data allows for a detailed analysis of price dynamics and volatility within the specified period across both trading platforms. The visual representations provided in Figure <ref> effectively capture the normalized price trajectories of major cryptocurrencies, including Bitcoin (BTC), Ethereum (ETH), Polkadot (DOT), Solana (SOL), Avalanche (AVAX), and the FTX Token (FTT). 
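The return and price series used throughout the analysis are obtained with a standard preprocessing step; a minimal sketch is shown below, where the file name and column layout are assumptions, while the log-return formula and the first-observation normalization of the plotted trajectories follow the description above.

```python
# Illustrative preprocessing sketch (file name and column layout are assumed).
import numpy as np
import pandas as pd

prices = pd.read_csv("ftx_1min_prices.csv", index_col="timestamp", parse_dates=True)

# 1-minute logarithmic returns: r_{i,t} = ln(P_{i,t} / P_{i,t-1})
log_returns = np.log(prices / prices.shift(1)).dropna(how="all")

# Normalized price trajectories (each series rescaled to 1 at its first observation),
# as used for the visual comparison of BTC, ETH, DOT, SOL, AVAX and FTT.
normalized = prices / prices.iloc[0]
```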
These plots are particularly illustrative of the market dynamics in response to the unfolding events during the bankruptcy timeline, as reported in Table <ref>. The price data reveal significant volatility, especially visible in the sharp decline of FTT’s value, which directly correlates with the escalated phases of the FTX crisis. The graphical depictions allow for a clear observation of the cascading effects on other cryptocurrencies, underscoring the interconnectedness of the crypto market. Finally, based on the observation of the price series, we divide the sample into two periods: one preceding the collapse of the FTX’s token, FTT, which arose from the main exchange competitor, Binance, moving a large amount of the token (Event (c) in Table <ref>); and one during the collapse period after that event. §.§ Network construction We construct the network as follows. Firstly, we consider n= 195 cryptocurrencies and we construct the correlation matrix 𝐂 in which the element c_ij is the Pearson correlation coefficient between cryptocurrencies i and j returns time series. The correlation matrix 𝐂 is a full matrix, then its associated weighted graph is a complete K_n graph containing loops. Moreover, the entries can be either positive or negative, as c_ij∈ [-1,1]. According to the literature (<cit.>), to overcome these issues we consider the absolute value of the entries, focusing only on the intensity of the assets co-movements, and we set the diagonal elements of 𝐂 to zero. We obtain a new non-negative matrix 𝐂̃ such that the associated complete graph becomes simple. At this stage, we need to filter the information in 𝐂̃ to unveil the hidden relevant structure. Therefore, we filter the correlation matrix by constructing the Triangulated Maximally Filtered Graph (TMFG) proposed in <cit.> which is a computationally efficient algorithm to construct the PMFG (see Sect. <ref>). §.§ Descriptive statistics In this study, we focus on the major cryptocurrencies (BTC, DOT, ETH, FTT, AVAX, SOL) showing the highest logarithmic returns correlations in the FTX market during the period preceding the collapse of FTT (Event (c) in Table <ref>), which caused the bankruptcy of the FTX exchange, given the fraudulent behaviors with the sister trading company Alameda Research. We analyse market conditions in three distinct phases: pre-collapse, during the collapse, and over the entire observed period. Table <ref> shows the top 20 correlations between the cryptocurrency returns analyzed in the FTX market, highlighting significant variations across the different periods examined. The major cryptocurrencies we refer to are those at the top 5 positions in the first column of Table <ref>. In the pre-collapse phase, higher correlations were observed among major cryptocurrencies, such as a correlation of 0.80 between ETH and BTC. This indicates a closely knit market where the movements of these major currencies were largely synchronous. However, during the collapse, the correlation figures decreased substantially, such as between ETH and BTC dropping to 0.40, reflecting a market dislocation where individual cryptocurrencies responded differently to the crisis. Even stronger evidence is reported for the correlation between SOL and BTC, going from a level of 0.69 in the pre-event to 0.26 within the collapse period, and other cryptocurrency trading pairs. This pattern suggests that the market structure was significantly altered by the event, affecting how asset prices moved in relation to each other. 
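For concreteness, the network construction described in the Network construction subsection above can be sketched as follows; the code is illustrative and not the authors' implementation. The greedy, planarity-constrained filter shown here follows the PMFG recipe from the methodology section and serves only as a stand-in for the faster TMFG algorithm actually used.

```python
# Compact sketch of the construction: |Pearson correlations|, zero diagonal,
# then a greedy planarity-constrained filter in the spirit of the PMFG.
import networkx as nx
import numpy as np

def filtered_network(log_returns):
    C = log_returns.corr().abs()              # absolute Pearson correlations
    np.fill_diagonal(C.values, 0.0)           # drop self-loops
    names = C.columns
    n = len(names)

    # candidate edges sorted by decreasing correlation strength
    edges = sorted(((C.iloc[i, j], names[i], names[j])
                    for i in range(n) for j in range(i + 1, n)),
                   key=lambda e: e[0], reverse=True)

    G = nx.Graph()
    G.add_nodes_from(names)
    for w, u, v in edges:
        G.add_edge(u, v, weight=w)
        if not nx.check_planarity(G)[0]:      # reject edges that break planarity
            G.remove_edge(u, v)
        if G.number_of_edges() == 3 * (n - 2):
            break                             # maximal planar graph reached
    return G
```

For a panel of 195 assets, the TMFG construction used in the paper is substantially faster than this greedy sketch, which re-checks planarity after every candidate edge.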
The summary statistics presented in Table <ref> delineate the market conditions in the three periods. In the pre-collapse phase, the cryptocurrencies exhibited relatively low volatility and modest positive mean returns, except for FTT which showed a minimum negative return of -3.8596, indicating early signs of distress specific to the FTX token. The collapse phase presents a stark contrast, with all cryptocurrencies experiencing increased volatility and negative mean returns, highlighted by FTT’s dramatic mean return of -0.0277 and an exceptionally high standard deviation of 1.5709, which is around 22 times the volatility observed in the pre-collapse phase. This period clearly reflects the acute market stress and investor panic triggered by the unfolding crisis. The full period combines these diverse phases, showing generally subdued mean returns and heightened volatility, which underscores the long-term impact of the crisis on market dynamics. §.§ The network structure surrounding the collapse of the FTT node To highlight changes in topological properties in the FTX market due to FTT's collapse, we construct two networks: one preceding the FTT's collapse and one during the collapse period after that event. Figure <ref> shows the filtered networks of the FTX market in both periods pre and post event. We highlighted with different colors the nodes corresponding to the major cryptocurrencies that resulted in the highest correlations. However, the main topological changes cannot be detected by a simple visual network inspection. The analysis of the centrality measures computed for the cryptocurrency network particularly focusing on the FTX platform, reveals significant insights into the structural dynamics and influence patterns across the sample period. Analyzing the normalized centrality measures — degree, closeness, and information centrality — provides an understanding of the node's importance, both prior to and following the market collapse. Table <ref> presents the ranking of centrality measures of the top 20 cryptocurrencies in the FTX market, divided into the pre-collapse (Panel A) and collapse (Panel B) periods. We first observe that the centrality measures are close to zero, due to the network sparsity. However, the centrality measures well reflect the results of Table <ref> revealing the most informative cryptocurrencies in the FTX market analysis. In the period preceding the FTT collapse (Panel A), AVAX and DOT emerged prominently across the centrality measures. AVAX, the token of Avalanche, is one of the most promising emerging proof-of-stake (PoS) blockchain projects and perhaps the one that is followed with the greatest attention by developers and cryptocurrency investment experts. It shows a notable number of direct connections (normalized degree score 0.20103) compared to other cryptocurrencies, suggesting a pivotal role in the network’s structure. Similarly, its closeness score (0.00102) underscores its centrality within the FTX network, being the most followed among cryptocurrencies as per its shortest distances from all the other nodes. However, in terms of information centrality, DOT takes precedence with a score of 0.06773, reflecting its critical role in sustaining network efficiency in terms of rapid communication between nodes despite potential disruptions. 
Arguably, DOT is the native token of the Polkadot blockchain, which is considered among the most ambitious and innovative projects in the cryptocurrency arena, aiming to establish a decentralized web that enables various blockchains to interact and collaborate seamlessly. BTC, consistently recognized for its market dominance, maintained notable centrality, reflective of its enduring influence and robust integration within the network. Interestingly, the presence of lesser-known cryptocurrencies like NEAR, a PoS protocol based on the concept of sharding, and SOL, a competitor of the Polkadot and Ethereum blockchains, in the top ranks across different measures highlights their emerging significance within the network’s architecture. ETH, the second-largest cryptocurrency in terms of market capitalization and liquidity, follows the rank among the most central nodes of the FTX network in terms of degree, closeness, and information centrality. Apart from NEAR, the centrality measures in the pre-collapse period show consistent results to the correlations ranking of Table <ref> in Section <ref>. The collapse phase delineated a stark transformation in network structure (see Figure <ref>), with Dogecoin (DOGE), one of the first altcoins[i.e., alternative coins to Bitcoin.] created as a fast and instant payment system based on the Litecoin blockchain architecture, leading in all centrality measures. Its degree centrality (0.21134) notably increased, paired with the highest closeness (0.00276) and information centrality (0.05907) scores, indicating a surge in its network influence post-collapse. This shift might suggest a realignment of network structures where market participants pivot towards alternative nodes during periods of instability. Ripple (XRP) and Litecoin (LTC), absent in the centrality ranking of the pre-collapse period, also gained more importance in terms of network centrality and information dissemination during the collapse period, a signal of investors’ reliance on well-established cryptocurrencies. Indeed, BTC maintained its predominant position, albeit with reduced scores compared to DOGE, signaling its resilience and foundational role within the network even amidst market upheavals. Other cryptocurrencies like AVAX and DOT also remained influential, though with adjusted rankings, reflecting a reconfiguration of inter-node relations and dependencies. Before the collapse of the FTX market, FTT was solidly positioned within the top 20 in degree centrality, signifying a robust number of direct connections, i.e., strong correlations within a high number of cryptocurrencies, and an important role within the network’s structure. This initial placement highlighted its prominence and influence among investors and traders on the platform. However, after the collapse of FTX market, we observe that FTT experienced a significant reduction in its degree centrality, indicating a loss of its direct connections and, therefore, high correlations with other cryptocurrencies within the network. Another interesting result emerges from the ranking of the closeness. Indeed, results in Table <ref> show that the collapse of FTT has led to an increase in closeness centralities as a consequence of the average decrease of the geodesic distances between the nodes of the network. This shift suggests a decrease in its active integration and participation within the trading environment, likely due to diminished investor confidence and a restructuring of the network’s dynamics. 
Despite this decline in degree and information centralities, FTT is still present in the closeness centrality's ranking. This result derives from the fact that FTT keeps few connections in the network and its neighbors are central in terms of degree, closeness and information centrality.[The FTT's neighbors after its collapse are: BTC, SOL, and DOGE] This could be economically interpreted as a sign of investors’ need to readjust their positions during times of crisis, reflecting a scenario of ongoing adjustments and liquidations by investors trying to navigate the market’s volatility. Interestingly, the information centrality for FTT post-collapse suggests an absence of significant information flow through the FTX's token, underscoring a scenario where investors were likely in a rush to disassociate from the token amid its collapse. This lack of flowing information indicates that while FTT retained some network closeness—perhaps due to its previous importance and lingering transactions—the effective communication and utility of FTT within the network had substantially diminished, mirroring the broader crisis impacts on its value and operational significance. §.§ Pattern of information flow surrounding the collapse of the FTT node To deepen the analysis of the previous Sections, we capture the evolving patterns in the FTX network by means of rolling windows. In particular, we construct daily rolling window networks as in Sect. <ref> with 24-hour windows at intervals of one hour. Based on the results of the previous Sections, we focus on the closeness and the information centrality as the most informative centrality measures. We plot in Figure <ref> the daily rolling closeness centrality measure[The normalized version is represented by Figure <ref> in <ref>)] over time for the major cryptocurrencies of the FTX network. This measure is crucial for understanding how well connected (in terms of reachability) a node is, not only through direct links but with respect to the entire network’s structure. Figure <ref> illustrates that Bitcoin (BTC) consistently maintained the highest score, indicative of its central role in the network. The prominence of BTC throughout the period illustrates its foundational status within the cryptocurrency network, maintaining efficient information dissemination capabilities despite market upheavals. Overall, the average closeness centrality (pink line) of the FTX network saw a slight increase after the announcement of Binance dropping out from the acquisition deal (Event (i)). This signals a reduction of the distance between pairs of nodes and therefore, an improvement in the reachability[In graph theory, the reachability refers to the ability of a node i to reach all the other nodes in the network.] between cryptocurrencies after the disruption of the FTT. Each remaining colors correspond to the other main cryptocurrencies analyzed, while the three ranges comprise the percentiles as described in the Figure legend. As previously discussed, specific events had pronounced impacts on the centrality measures of other cryptocurrencies, especially FTT. The revelation of compromised balance sheet of Alameda Research, containing significant amounts of FTT, precipitated an immediate reaction: it can be easily noticed by the drop in FTT’s centrality on November 2^nd. This initial impact was compounded by subsequent market movements and public statements from key market players. 
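The rolling-window analysis described at the beginning of this subsection can be sketched as follows; filtered_network refers to the planarity-constrained filter sketched earlier, and the window bookkeeping, column handling, and tickers are illustrative assumptions.

```python
# Illustrative sketch of the hourly rolling 24-hour window centrality series.
import networkx as nx
import pandas as pd

def centrality_series(log_returns, window="24h", step="1h"):
    rows = []
    times = pd.date_range(log_returns.index[0] + pd.Timedelta(window),
                          log_returns.index[-1], freq=step)
    for t in times:
        chunk = log_returns.loc[t - pd.Timedelta(window): t]
        G = filtered_network(chunk)                    # filter the 24h correlation matrix
        for u, v, d in G.edges(data=True):
            d["length"] = 1.0 / d["weight"]            # distances from |correlation|
        closeness = nx.closeness_centrality(G, distance="length")
        rows.append({"time": t,
                     "FTT": closeness.get("FTT"),
                     "BTC": closeness.get("BTC"),
                     "network_mean": sum(closeness.values()) / len(closeness)})
    return pd.DataFrame(rows).set_index("time")
```

Replacing the closeness computation with the information-centrality routine from the toy example yields the corresponding information-flow series and the network average c_I(G) discussed below.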
The series of events from November 5^th through 6^th, involving the large-scale movement of FTT by Binance and Binance’s CEO’s public warnings, sparked notable fluctuations in FTT’s centrality. This is even more evident in Figure <ref> of <ref> when normalizing by the average centrality of the entire FTX network. These changes underscored the market’s sensitivity to news and actions by major stakeholders, reflecting real-time adjustments within the network’s structure. The heightening uncertainty following conflicting public statements about the insolvency of FTX and Alameda Research and the dramatic fallout from the failed Binance acquisition deal further influenced the network dynamics. Closeness centrality values for all the cryptocurrencies analyzed, particularly after Event (i) in Table <ref>, displayed erratic movements that mirrored the tumultuous information environment. The operational disruptions at FTX, highlighted by the halting and partial reopening of withdrawals, followed by the collapse of the FTT token, were critical as well. These events led to a pronounced and sustained decline in FTT’s closeness centrality, but also to a major centrality volatility in other major cryptocurrencies within the network. This decline was stark against the backdrop of FTX filing for bankruptcy and closing operations, where FTT’s centrality reached its nadir by November 11^th, showing its reduced role in the network. Figure <ref> shows, instead, the patterns of nodes' information centrality measure. Information centrality, a measure of the efficiency of information flow through the shortest paths in the network, showcases distinct patterns in response to the events leading to the collapse of FTT. The plot shows spikes in DOT’s information centrality around the time when concerns about Alameda Research’s balance sheet surfaced (Events (a) and (d) in Table <ref>). This highlights the importance of DOT within the network as a safe-haven vehicle through which to disseminate information. The same happens to AVAX, which saw major spikes throughout the FTT collapse period as well. On the other hand, this could also indicate temporary scrutiny of FTT within the network. The reason can likely be due to increased transactions and information flow as market participants assessed the implications of the news. Perhaps cryptocurrencies like DOT and AVAX are likely traded less for their complexities than more popular ones, such as BTC and ETH, but they still represent central points for network structure, suggesting the importance of more stable protocols over crypto-assets popularity driven by less sophisticated investments. Throughout this turbulent period, other cryptocurrencies such as BTC, ETH, and SOL also experienced shifts in their information centrality measures, though less dramatically than FTT. This pattern underscores the broader impact of the crisis, affecting the entire cryptocurrency market as participants adjusted their strategies in response to the unfolding events. Finally, in Figure <ref> we plot the average information centrality measure of the entire FTX network c_I(G), showing the overall trend of the information flow during the sample period. As it is readily apparent, there is a structural break after the Event (h) in Table <ref>, showing that the impossibility of withdrawing digital assets from the exchange led investors to cautiously divest and re-allocate their positions in the face of the crisis, a result consistent with the findings in <cit.>. 
While the weeks before the first announcement and the period during the collapse of FTX’s native token show a stable flow of information within the network, there was substantial informed trading after the exchange halted funds withdrawals to avoid a “bank run”. The observed centrality trends from the figures provide detailed insights into how information is disseminated and processed within the cryptocurrency network in response to crisis events. These dynamics are indicative of the market’s reactive nature and the critical role of information centrality in understanding the mechanisms of crisis management and response in decentralized financial systems. It is interesting to see how well the centrality measures used in this study reflect the market behaviors surrounding each of the events considered, signaling the appropriateness of the methodology employed for the proposed investigation. §.§ The market responses in the Binance network Consistent with the market comparison in <cit.> and the dominance results of <cit.>, we lastly explore the centrality measures across major cryptocurrencies within the Binance network during the FTX crisis. In particular, similarly to the previous section, we construct daily rolling window networks at intervals of one hour. Each network has n=324 nodes and it is constructed as in Sect. <ref>. Our aim is to provide a better overall understanding of the decentralized financial system dynamics that underpin the distribution and reception of information in times of financial upheaval. Given Binance’s significant market presence, this analysis serves as a proxy for broader market behaviors and offers insights into how cryptocurrencies interact and respond within one of the largest trading environments. Figure <ref> (as well as its normalized version in Figure <ref> in <ref>) clearly depicts the shifts in closeness centrality that occurred in response to the pivotal events mentioned above. Throughout the observed period, BTC demonstrates relatively high and stable closeness centrality, indicative of its central role in the cryptocurrency market. This stability reflects Bitcoin’s broad acceptance within the whole market as a central node through which information and transactions are disseminated. Besides, we notice the resilience of the overall Binance network shown after the Event (h), as all the closeness reversed and started to increase again. This is consistent with the findings of <cit.> documenting an improvement of Binance market quality upon the halt of funds withdrawal on FTX. Particularly for FTT, the figures show a distinct pattern in closeness centrality, peaking around critical events before exhibiting a dramatic fall as the crisis escalated. This trend is even more visible on the Binance network, within which the FTT closeness is downward below the overall closeness average of the network. This is crucial as it reflects how such crises can influence related cryptocurrencies within a large trading platform like Binance, which is supposedly not connected with the bankrupted FTX exchange. Contrastingly, most popular cryptocurrencies, like BTC and ETH, displayed more stable closeness centrality during the crisis, suggesting a retained level of trust and stability despite the surrounding turmoil. While FTT shows significant centrality spikes (before the first announcement) and subsequent declines (during the collapse), other cryptocurrencies like AVAX and SOL display varying levels of centrality changes. 
These variations could be indicative of differing levels of exposure to systemic shocks. Additionally, DOT shows similar patterns, albeit seemingly less volatile, in the Binance network compared to FTX. Before the onset of the crisis, the closeness centrality of all tracked cryptocurrencies exhibits a somewhat synchronized pattern, suggesting a tightly interlinked market structure. As the crisis unfolds, the divergence in centrality measures becomes more pronounced, underscoring a shift in how these cryptocurrencies interact and influence each other within the network. Figure <ref> reveals noticeable fluctuations in information centrality as the FTX crisis unfolded. Interestingly, as in Figure <ref>, FTT exhibited sharp centrality peaks coinciding with a market event that supposedly anticipated the crisis. The same happens for ETH, showing what can arguably be described as insider trading, or more generally, an anticipation of information leading to the overall crisis.[ETH is arguably the most central node within the Binance network in terms of the measures considered.] Its spike, just before the first public announcement of FTX's crisis, might be seen as a signal of the information spreading through Binance to the overall market. ETH and AVAX had the highest information centrality on Binance during the sample period, particularly after Event (a). Such spikes in centrality reflect their role in information spreading due to a surge in transaction activities associated with these tokens. Overall, the patterns of the other major cryptocurrencies do not show significant changes, highlighting their non-primary role in spreading information. This emphasizes the role of Binance's network in mediating these dynamics even though the crisis occurred on another exchange. § CONCLUSIONS This study examined the anatomy of information flow in the bankrupted FTX cryptocurrency exchange by underscoring the distinct role of centrality measures in capturing the critical dynamics within financial networks during periods of market instability. Through an exploration of degree, closeness, and information centrality, our findings reveal how the market structure and the efficacy of information dissemination within such networks evolve in response to substantial financial disruptions. In this study, we also elucidate the dynamic nature of centrality within the cryptocurrency markets more broadly, influenced by both internal network adjustments and external market conditions. Our analysis illustrates that path-based centrality measures in financial networks not only offer insights into the immediate impacts of such crises but also highlight the broader structural vulnerabilities and resilience within the cryptocurrency market. The rapid shifts in centrality metrics observed in response to the unfolding crisis at FTX reflect the market's sensitivity to both internal network changes and external economic pressures. This study exposes the interconnected nature of cryptocurrency markets, where the failure of a single node such as FTT can lead to widespread repercussions across different networks, affecting investor behavior and market stability. Importantly, the resilience of certain cryptocurrencies like Bitcoin, which maintained high centrality throughout the crisis, contrasts sharply with the volatility in centrality measures of other tokens directly implicated in the crisis, like FTT. 
This disparity underscores the importance of robust network positions and the dangers of over-reliance on individual market players within decentralized financial systems. Finally, the insights derived from this research have significant implications for stakeholders across the financial spectrum—from investors and analysts to regulators and policymakers. Understanding the centrality dynamics within such networks can aid in developing more resilient financial structures and inform strategic decisions aimed at mitigating risks associated with market centralization and the cascading effects of financial crises. This study, therefore, not only contributes to the academic debate on complex networks and financial markets but also provides practical frameworks for assessing and enhancing the stability of the cryptocurrency ecosystem. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § ACKNOWLEDGEMENT Luca Galati thanks Refinitiv, an LSEG business, which provided him with access to data and technical assistance. Riccardo De Blasis is member of the Gruppo Nazionale Calcolo Scientifico-Istituto Nazionale di Alta Matematica (GNCS-INdAM). Rosanna Grassi acknowledges financial support from the European Union – NextGenerationEU. Project PRIN 2022 “Networks: decomposition, clustering and community detection” code: 2022NAZ0365 - CUP H53D23002510006. Rosanna Grassi is a member of GNAMPA-INdAM. Giorgio Rizzini gratefully acknowledges financial support from “SoBigData.it”, which receives funding from the European Union – NextGenerationEU – PNRR – Project: “SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics” – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021. Giorgio Rizzini acknowledges partial support by the European Program scheme “INFRAIA-01-2018-2019: Research and Innovation action”, grant agreement n. 871042 “SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics”. § ADDITIONAL FIGURES § LIST OF CRYPTOCURRENCIES
http://arxiv.org/abs/2407.12451v1
20240717095952
Across Platforms and Languages: Dutch Influencers and Legal Disclosures on Instagram, YouTube and TikTok
[ "Haoyang Gui", "Thales Bertaglia", "Catalina Goanta", "Sybe de Vries", "Gerasimos Spanakis" ]
cs.CY
[ "cs.CY", "cs.CL", "cs.SI" ]
Dutch Influencers and Legal Disclosures on Instagram, YouTube and TikTok Gui et al. Utrecht University, Utrecht, the Netherlands {h.gui,t.costabertaglia,e.c.goanta,s.a.devries}@uu.nl Maastricht University, Maastricht, the Netherlands jerry.spanakis@maastrichtuniversity.nl Across Platforms and Languages: Dutch Influencers and Legal Disclosures on Instagram, YouTube and TikTok Haoyang Gui1 Thales Bertaglia1 Catalina Goanta1 Sybe de Vries1 Gerasimos Spanakis2 July 22, 2024 § ABSTRACT Content monetization on social media fuels a growing influencer economy. Influencer marketing remains largely undisclosed or inappropriately disclosed on social media. Non-disclosure issues have become a priority for national and supranational authorities worldwide, who are starting to impose increasingly harsh sanctions on them. This paper proposes a transparent methodology for measuring whether and how influencers comply with disclosures based on legal standards. We introduce a novel distinction between disclosures that are legally sufficient (green) and legally insufficient (yellow). We apply this methodology to an original dataset reflecting the content of 150 Dutch influencers publicly registered with the Dutch Media Authority based on recently introduced registration obligations. The dataset consists of 292,315 posts and is multi-language (English and Dutch) and cross-platform (Instagram, YouTube and TikTok). We find that influencer marketing remains generally underdisclosed on social media, and that bigger influencers are not necessarily more compliant with disclosure standards. § INTRODUCTION Social media is undergoing fundamental changes due to the presence of users who rely on monetization, known as influencers or content creators. Influencers engage in various monetization business models, the most popular being influencer marketing, which consists of brands hiring influencers to deliver advertising services in exchange for money, goods and/or services. Such ads tend to look like content rather than advertising. As a result, influencer marketing remains largely undisclosed or inappropriately disclosed on social media <cit.>. Despite an exponential interest in influencer studies across various computer science communities in the past years <cit.>, the resulting body of academic work in this field has faced three main problems. First is the problem of the evasiveness of influencer definitions and classifications. In academic literature, influencers are defined either in terms of size <cit.>, network influence <cit.>, or based on manual curation by researchers <cit.>. These approaches remain unrelated to legal standards. Second, not all monetized posts can be objectively identified. Thus, measuring hidden advertising generally suffers from an inherent degree of subjectivity in the perception of which content is monetized. 
Third, laws worldwide establish abstract disclosure obligations but often do not include practical standards. This leads researchers to propose their own (non-legal) disclosure standards. This study proposes a transparent methodology for measuring influencer disclosure compliance based on legal standards. We focus on the Netherlands, where both authorities and the advertising industry have been very active in setting clear disclosure standards. We introduce a novel dataset of influencers registered with the Dutch Media Authority based on a legal registration obligation imposed in 2022 by Dutch media law <cit.>. We collect and analyze a multi-language and cross-platform dataset to measure and characterize the advertising disclosures by Dutch influencers. Our research makes several contributions. First, it provides a comprehensive, multi-language (English and Dutch), cross-platform (Instagram, YouTube, and TikTok) measurement of influencer marketing disclosures based on legal standards. Second, it proposes and applies an original disclosure taxonomy that distinguishes between legally sufficient (green) disclosures and legally insufficient (yellow) disclosures. Finally, it identifies a sub-dataset of affiliate marketing based on a simple and effective methodology and uses it to measure different disclosure practices across different platforms, languages, and sizes of influencers. § RELATED WORK Research on content monetization has primarily focused on: monetization effectiveness <cit.>, influencer marketing strategies <cit.>, the impact of disclosures and regulation <cit.>, and the detection of undisclosed sponsored content <cit.>. In this context, <cit.> compiled a dataset of 35,000 posts and 99,000 stories from Instagram, categorizing influencers by their audience size and employing deep neural networks to distinguish between disclosed and undisclosed sponsored posts. <cit.> compiled a large dataset of 1.6 million Instagram posts and employed network features, including brand mentions and connections between posts, to train deep learning models for detecting hidden advertisements. Additionally, <cit.> investigated the reliability of human annotators in detecting undisclosed ads, highlighting the implications of such inconsistencies for machine learning models.  <cit.> also applied web measurement methods and identified only 10% AM content as disclosed out of 3,472 YouTube videos and 18,273 Pinterest pins. While these studies offer substantial insights, they exhibit a notable gap in connecting computational findings with legal standards within specific jurisdictions. § CONTENT MONETIZATION AND LEGAL DISCLOSURES Influencer Marketing and Dutch law. Based on the contractual transaction models, influencer marketing practices include Endorsements, where money is exchanged for advertising services; Barters, which involve goods or services being provided in return for advertising services; and Affiliate Marketing (AM), where each sale results in a referral commission <cit.>. In the Netherlands, media and consumer law determine applicable disclosure standards. Laws are generally vague and principle-based. However, self-regulatory organizations such as the Dutch Advertising Organization (Stichting Reclame Code) have proposed more specific rules, such as which hashtags should be clear enough for disclosure purposes. These rules are included in the Dutch Advertising Code, which theoretically must be aligned with Dutch law. 
In this study, we therefore focus on the more specific rules of the Dutch Advertising Organization to computationally model legally required disclosures. In parallel, the Dutch Media Authority is a state organization that has adopted specific national guidelines relating to identifying influencers. As a result, starting with 1 July 2022, Dutch influencers must register in the Video-Uploader Registry if they: (a) have more than 500k followers on Instagram, YouTube or TikTok; (b) make regular video content (at least 24 videos in the past 12 months); (c) make revenue based on the content; and (d) are registered with the Dutch Chamber of Commerce. A Legal Framework for Measuring Influencer Marketing. These legal developments allow us to propose a simple and effective approach to measuring disclosures and overcome the research gaps identified above. First, the Dutch Video-Uploader Registry provides a means to identify influencers accurately based on legal criteria. This public registry, mandated by the government, includes influencers who have formalized their monetization activities through registration, offering a formal list that avoids definitional subjectivities. Second, we focus on the legal standards for disclosure as outlined in the Dutch Advertising Code. We categorize disclosures into green disclosures, which follow legal standards (e.g., specific hashtags and words in Dutch and their English translations), and yellow disclosures, which are more inconspicuous and commonly used by influencers (e.g., #ambassador, #partner). Lastly, we propose a method for identifying affiliate marketing (AM) as a benchmark for hidden advertising. § METHODOLOGY Data Collection and Cleaning. Between August and October 2023, we collected textual data from the Dutch Video-Uploader Registry. We focus on text data as monetization disclosures remain largely communicated in writing. 209 registrations were officially made by 1 July 2023. However, this number included not only influencers but also other online media companies. We filtered out all the non-influencer accounts through annotations made by the research team, leading to a total of 150 influencers. Out of these 150 influencers, 133 are active on Instagram, 141 on YouTube, 131 on TikTok and 105 are on all three platforms. We used each of the respective platform's API (Instagram's Crowdtangle <cit.>, YouTube Data API v3 <cit.> and TikTok Research API <cit.> to collect all the available data of the respective influencers. Due to API limitations or bugs (especially for the TikTok Research API), we could only retrieve data from 132 influencers from Instagram, 136 from YouTube and 127 from TikTok. The collected data features a total of 300,199 posts. We used  <cit.> to identify the language of each post. The resulting dataset reflects 292,315 posts recognized as either English or Dutch text. The relevant text data consists of 122,913 Instagram posts from 2011 to 2023, 128,444 YouTube video descriptions from 2007 to 2023 and 48,842 TikTok video descriptions from 2016 to 2023) Detecting Legal Disclosures. We identify disclosures as follows. Green Disclosures are legal disclosures made in compliance with the Dutch Advertising Code. The Code specifies that platform toggles must be used (e.g., the Paid partnership) and that word disclosures must be positioned at the beginning of the text. We consider disclosure words in the first five words of each post (after tokenization and removing all punctuation) to be compliant. 
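As an illustration of how this rule translates into code, the snippet below sketches the position check for green disclosures. The disclosure word list shown is an illustrative placeholder rather than the exact list derived from the Dutch Advertising Code, and the paid-partnership toggle flag is only available for Instagram posts.

```python
import re

# Illustrative placeholder; the study uses the word list derived from the
# Dutch Advertising Code (Dutch terms and their English translations).
GREEN_DISCLOSURE_WORDS = {"reclame", "advertentie", "gesponsord",
                          "ad", "advertisement", "sponsored"}

def tokenize(text: str) -> list:
    """Lowercase, strip punctuation and split into words."""
    return re.sub(r"[^\w\s]", " ", text.lower()).split()

def is_green_disclosure(text: str, paid_partnership_toggle: bool = False) -> bool:
    """A post is compliant if the platform toggle is used or a disclosure word
    appears within the first five words of the caption/description."""
    if paid_partnership_toggle:
        return True
    first_five = tokenize(text)[:5]
    return any(word in GREEN_DISCLOSURE_WORDS for word in first_five)

# Disclosed too late in the text, hence not a green disclosure:
print(is_green_disclosure("Loving this new lipstick! Thanks @brand #sponsored"))  # False
# Disclosure word within the first five words:
print(is_green_disclosure("#ad Loving this new lipstick from @brand"))            # True
```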
While all platforms in our study use the disclosure toggle, we only managed to collect disclosure toggle information from Instagram. Yellow Disclosures are disclosures which are not legally sufficient but are still used by influencers. We identify them based on a list created using observations from the dataset and expert insights from the author team. Detecting Affiliate Marketing. Based on AM textual cues observed in the dataset, we compiled a list of terms whose co-occurrence signals AM; the list includes variations of these relevant words. When such terms co-occur in one post, the content is categorized as AM. We checked the accuracy of this approach by manually annotating 10% of the 13,917 AM posts across the dataset, where we only found 2 false positives. § FINDINGS We focus on three main research questions: First, what are the practices of Dutch influencers with respect to complying with legal standards? Second, how do Dutch micro-, macro-, and mega-influencers disclose content on different platforms? Third, what is the engagement difference between disclosed and non-disclosed content across different platforms and influencer sizes? Legal Disclosure Practices. Overall, the amount of content voluntarily disclosed by influencers (green and yellow disclosures aggregated) shows that registered influencers only flag a marginal amount of their content as being monetized (5.63%) and, therefore, needing disclosure. Table <ref> shows a general breakdown of the overall dataset and a distribution of disclosure practices and AM content across three platforms and two languages. Besides Dutch, the influencers also post content in English (43.5%). Within the disclosed content category, we note a very low usage of green disclosures in general, with YouTube having the lowest proportion (0.009%), where yellow disclosures are exclusively used (12.843%). One possible explanation is that green disclosures require strict positioning, so influencers may be placing them at the end of the text. On YouTube, this is additionally problematic since text on the platform tends to be longer than Instagram or TikTok posts. Overall, this finding reveals a preference of Dutch influencers for using popular disclosure cues that do not comply with Dutch law. Table <ref> illustrates the overall amount of AM content in the dataset per platform and language (total 4.76%), as well as how much AM is disclosed using green and yellow disclosures (total 0.43%). While green and yellow disclosures only allow us to track disclosures that were voluntarily made by influencers, they do not reveal non-disclosed advertising. Using the AM sub-dataset as a benchmark, it is possible to identify hidden advertising as non-disclosed AM (total 4.43%). Moreover, the green disclosure of AM content is meagre across all platforms and languages (even the highest is just 0.07% on Instagram English). Except for YouTube English, most AM content from the other venues remains undisclosed (especially for YouTube Dutch, with 12.346% of undisclosed AM content). Moving to disclosure positions, we calculate the position within the sentence (in # of words) where the first disclosure word is shown. Although sentence length varies across different platforms, none of them has a median number lower than the first five words. Moreover, Instagram and YouTube have relatively different medians in English and Dutch, whereas the difference between TikTok's English and Dutch is small. 
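The affiliate marketing rule described in the methodology can be sketched in a similarly simple way. The cue groups below are illustrative placeholders for the co-occurrence list compiled from our dataset observations, and the two-group structure (a referral cue co-occurring with a link/code cue) is one possible instantiation of the rule rather than the exact pattern we used.

```python
# Illustrative cue groups; the actual co-occurrence list was compiled from the
# dataset and includes variations (e.g. Dutch spellings) of these terms.
REFERRAL_CUES = {"discount", "korting", "kortingscode", "use my code", "promo code"}
LINK_CUES = {"http://", "https://", "link in bio", "linktr.ee", "bit.ly"}

def is_affiliate_marketing(text: str) -> bool:
    """Flag a post as AM when a referral cue and a link/code cue co-occur."""
    lowered = text.lower()
    return (any(cue in lowered for cue in REFERRAL_CUES)
            and any(cue in lowered for cue in LINK_CUES))
```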
While all platforms in our study use the disclosure toggle, we could only collect disclosure toggle information from Instagram. Fig. <ref> presents the distribution of different disclosure types in Instagram data. Green disclosures are divided into three categories: Words & position, which refers to the right words at the beginning of the text (first five words); Toggle, which indicates the use of the platform toggle in the platform interface; and Toggle, words, & position, which involves using the right words at the beginning of the text along with the platform toggle. We find that disclosures are used insufficiently across both English (more than 60%) and Dutch (around 80%). Overall, there are more toggle disclosures in English than in Dutch, and for both languages, there is an insignificant amount of legal disclosures placed sufficiently early in the text. Influencer Size and Disclosures. We further investigate whether influencers with more followers disclose more monetized content and if this disclosure is legally sufficient. We determine size using the number of followers on the day when the data was collected. We divide the dataset into three audience size categories: micro-influencers (less than 500K followers), macro-influencers (more than 500K but less than 1M followers), and mega-influencers (more than 1M followers). Here, we hypothesise that the bigger the following, the more professional the influencer is. We start by looking into the general distribution of influencers by size. Table <ref> shows the distribution of different sizes of influencers across platforms. The second and third rows in Table <ref> present the disclosure distribution for each platform. The results show that macro-influencers from YouTube disclose most advertising across all platforms using yellow disclosures. This corresponds with findings from Table <ref>, showing the higher prevalence of disclosures and AM content on YouTube compared to the other two platforms. Moreover, macro- and mega-influencers have a similar distribution of disclosure content on Instagram and TikTok, generally disclosing more than micro-influencers. As suggested by Table <ref>, almost no green disclosures are found, and the majority (around 91%) of AM content stays undisclosed. These findings suggest that more disclosures originate from influencers with a large audience, whether in terms of overall disclosures or AM specifically. To further investigate this pattern, we analyze the top five influencers on each platform with the most AM disclosures. We then measure the proportion of disclosed AM among all AM created by each of the selected influencers, showing their compliance with disclosures. Fig. <ref> presents the results. The left y-axis indicates the scale of the bars, showing the proportion of disclosed AM for each examined user out of all disclosed AM posts on each platform. As a result, the 5 influencers with the highest proportions of disclosed AM on all platforms are either macro- or mega-influencers, but none are micro-influencers. Instagram's accounts are more representative than the other two platforms, and it would not be reasonable to infer that influencers from YouTube and TikTok are more likely to disclose AM because of the skewed distribution. Finally, the right y-axis of Fig. <ref> shows the proportion of AM that is disclosed for each user (indicated by dots). Four accounts from TikTok disclose all their AM content, showing their high compliance. 
However, none of them have more than five AM posts in total, which makes the result not representative. In comparison, the results from Instagram and YouTube show that macro-influencers tend to disclose more AM content than mega-influencers from the same platform. These findings do not support the hypothesis that the bigger the influencers are, the more compliant they tend to be. Engagement and Disclosures. To understand how disclosures affect engagement, we conduct a series of comparative experiments on AM posts from all three platforms. For each post, we define engagement as the sum of the number of likes and comments. Fig. <ref> shows box plots of audience engagement in AM posts for different disclosure word positions. The engagement score is normalized by the Z-score so that the results between different platforms are comparable. This plot suggests that for the micro-influencer group, no disclosure words are found positioned in the first five words of the sentence, which corresponds to the findings from Table <ref>. A general tendency for a higher median in disclosed AM content than in undisclosed ones can be observed except for micro-influencers on YouTube and mega-influencers on TikTok. Overall, these observations suggest that disclosures can benefit engagement but the results vary with the different positioning of disclosure words. Lastly, we extend the experiment of the composition of different disclosures on Instagram from Fig. <ref> and explore differences in engagement. “Green disclosures: word & position” are rarely found in all categories in Fig. <ref> due to its few occurrences. Except for it and “green disclosures: toggle, words & position” in mega-influencers, green disclosures tend to perform better than yellow disclosures regarding the median. The variance of green disclosures is also better than other practices in micro- and mega-influencers. Moreover, in micro- and macro-influencers, “green disclosure: toggle, words & position” also performs better regarding the median than those only using toggle. The more compliant the AM posts are, the higher engagement they tend to attract. However, findings from mega-influencers contradict this assumption, as those using both toggle and words & position perform the worst. § DISCUSSION AND FUTURE RESEARCH This paper presents granular information on how disclosures are done on social media using Dutch law as a starting point to measure legal compliance. Our analysis shows that the general volume of disclosed content is astonishingly low. The content voluntarily disclosed by influencers, whether with green or yellow disclosures, amounts to a mere 5.63% out of the overall dataset. According to our results, in the case of affiliate marketing, only up to 9% is disclosed, leaving 91% of influencer marketing undisclosed. This result aligns with the low disclosure rates found in previous research on English YouTube and Pinterest affiliate marketing by <cit.>, which was, on average, around 10%. The growing popularity of content monetization has led to an ecosystem where influencers must be present on multiple platforms and often create content for different language audiences. It is important to understand the particularities of content creation on each of these platforms. Further research should investigate platform-specific disclosure affordances. Limitations Although we used the TikTok Research API, our data retrieval was incomplete due to API problems. We reported the issue to TikTok and used the partial data we retrieved. 
Data incompleteness is often seen more in earlier data points than in later ones. § ACKNOWLEDGMENTS This research has been supported by funding from the ERC Starting Grant HUMANads (ERC-2021-StG No 101041824).
http://arxiv.org/abs/2407.13743v1
20240718174909
Optimistic Q-learning for average reward and episodic reinforcement learning
[ "Priyank Agrawal", "Shipra Agrawal" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Optimistic Q-learning for average reward and episodic reinforcement learning Priyank Agrawal Shipra Agrawal § ABSTRACT We present an optimistic Q-learning algorithm for regret minimization in average reward reinforcement learning under an additional assumption on the underlying MDP that for all policies, the expected time to visit some frequent state s_0 is finite and upper bounded by H. Our setting strictly generalizes the episodic setting and is significantly less restrictive than the assumption of bounded hitting time for all states made by most previous literature on model-free algorithms in average reward settings. We demonstrate a regret bound of Õ(H^5 S√(AT)), where S and A are the numbers of states and actions, and T is the horizon. A key technical novelty of our work is to introduce an operator ℒ defined as ℒv = 1/H∑_h=1^H L^h v, where L denotes the Bellman operator. We show that under the given assumption, the operator ℒ has a strict contraction (in span) even in the average reward setting. Our algorithm design then uses ideas from episodic Q-learning to estimate and apply this operator iteratively. Therefore, we provide a unified view of regret minimization in episodic and non-episodic settings that may be of independent interest. § INTRODUCTION Reinforcement Learning (RL) is a paradigm for optimizing the decisions of an agent interacting sequentially with an unknown environment over time. RL algorithms must carefully balance exploration, i.e., collecting more information, and exploitation, i.e., using the information collected so far to maximize immediate rewards. The mathematical model underlying any RL formulation is a Markov Decision Process (MDP). The algorithmic approaches for RL are typically categorized as model-based or model-free, depending on whether they learn the underlying MDP model explicitly or implicitly to learn the optimal decision policy. Model-free approaches such as Q-learning and policy gradient work by directly learning the optimal values or policy. They have gained popularity in practice because of their simplicity and flexibility, and underlie most successful modern deep RL algorithms (e.g., DQN <cit.>, DDQN <cit.>, A3C <cit.>, TRPO <cit.>, etc.). Technically, an algorithm is declared to be model-free if its space complexity is o(S^2A), preferably O(SA), with S,A being the number of states and actions respectively <cit.>. At a more conceptual level though, the aim is to design algorithms that enjoy the structural simplicity and ease of integration of methods like Q-learning, value iteration, policy iteration, etc. On the theoretical side, sample complexity and regret bounds for model-free approaches have often lagged behind the corresponding model-based approaches. The existing literature is divided based on whether they consider settings with repeated episodes of fixed horizon H (aka episodic setting) or average reward under a single thread of experience with no restarts (aka average reward setting). For episodic settings, recent work <cit.> presented variants of optimistic Q-learning with near-optimal regret upper bounds. In average reward settings, however, simple UCB-based extensions of Q-learning have only been able to achieve an Õ(T^2/3) regret bound <cit.>. Recent √(T) regret bounds with model-free algorithms do so by either introducing some elements of model-based learning (e.g. 
tracking pairwise state-visit counters in <cit.>, storing exploration data in <cit.>) and/or by introducing strong assumptions like worst-case hitting time and mixing times (e.g., <cit.>, <cit.>). Furthermore, (arguably) these cleverly designed algorithms often do not enjoy the same simplicity and flexibility that make model-free approaches like Q-learning attractive in the first place. In this paper, we present an optimistic Q-learning algorithm for regret minimization in tabular RL that is applicable in both episodic and average reward settings. The contribution of our work is threefold. First, we introduce a novel formulation of the problem through an assumption of "bound H on the expected time to visit a frequent state s_0". This assumption is naturally satisfied by episodic settings and is significantly less restrictive than the commonly made assumption of bounded worst-case hitting time for all states in average reward settings. Furthermore, it admits many practical settings like repeated runs of a game or robotic task where each `episode' is unlikely to last beyond a certain maximum number of steps, but episode lengths may vary significantly across runs and policies. Second, we introduce a novel operator defined as v = 1/H∑_h=1^H L^h v where L denotes the standard Bellman operator with discount factor 1. We show that under the given assumption, the operator has a strict contraction (in span) even in the average reward setting. Our algorithm design then uses ideas from episodic Q-learning to estimate and apply this operator iteratively. This new way to achieve strict contraction in average reward settings may be of independent interest. Finally, we use the above insights to design a model-free algorithm that improves the existing literature both in terms of regret bounds and simplicity of algorithmic design. Specifically, we present an optimistic Q-learning algorithm with a regret bound of Õ(H^5 S√(AT)) in average reward setting[Here, Õ hides logarithmic factors in H,S,A, T, and additive lower order terms in T have been omitted.] A regret bound of Õ(H^6 S√(AT)) in episodic setting follows as a corollary of this result. In the next section, we formally present our setting, main results, and comparison to related work. Algorithm design along with an overview of challenges and novel techniques is presented in Section <ref>. Section <ref> gives an outline of our regret analysis. All missing proofs are in the appendix. § OUR SETTING AND MAIN RESULTS §.§ Our setting: average reward weakly communicating MDP We consider a Reinforcement Learning (RL) problem with an underlying Markov Decision Process (MDP) described by tuple M=(,,P,R), where , are state space and action space of finite size S,A respectively, P is the transition matrix and R is the reward function. The problem proceeds in discrete and sequential time steps t=1,…, T. At each time step t, the RL agent observes a state s_t∈ S and takes an action a_t ∈. The agent then observes a new state s_t+1 generated according to transition probability (s_t+1=s'|s_t,a_t) = P_s_t,a_t(s'), and the receives a bounded reward[Can be extended to Gaussian or sub-Gaussian rewards using standard techniques in the literature.] r_t∈ [0,1] with [r_t|s_t,a_t]=R(s_t,a_t). The transition and reward models P,R of the underlying MDP M are apriori unknown to the agent. Our main assumption on the MDP M is the following. 
[Expected time H to hit a frequent state] There exists a state s_0 ∈ S such that under any policy (stationary or non-stationary), starting from any state s∈ S, the expected time to visit the state s_0 is upper bounded by H. Given Assumption <ref>, it is easy to see that the MDP M is weakly communicating <cit.>. We provide a proof in Appendix <ref> for completeness. Intuitively, this assumption also generalizes the episodic setting with episode length H since the terminal state/starting state s_0 is visited every H steps. Later, we will use this observation to provide a formal reduction from episodic to average reward setting so that our algorithm will apply to both paradigms. We assume that the RL agent knows the upper bound H satisfying Assumption <ref> but not necessarily the identity of state s_0. The goal of the agent is to optimize total reward over a horizon T, or equivalently minimize regret which compares the algorithm's total reward to the optimal reward. Specifically, in average reward settings, regret is defined with respect to the optimal asymptotic average reward (aka gain ρ^*) of the MDP. For weakly communicating MDPs, the gain ρ^* is independent of the starting state and achieved by a stationary policy (see Theorem 8.3.2 of <cit.>). That is, ρ^* = max_π:→Δ^ρ^π(s); where ρ^π(s) lim_T→∞1/T∑_t=1^TR(s_t,a_t)|s_1=s; a_t=π(s_t), for all s. Then, in line with the existing literature, we define regret as (T) ∑_t=1^T (ρ^*-R(s_t,a_t)). Let π^* denote optimal stationary policy. Then, along with the so called `bias vector' V^*, defined as V^*(s) = lim_T→∞1/T∑_t=1^T(R(s_t,a_t)-ρ^*)|s_1=s; a_t=π^*(s_t). ρ^* satisfies following Bellman equations (see <cit.>, Chapter 8) which connects it to dynamic programming based algorithms like Q-learning: ρ^* + V^*(s) = max_a R(s,a) + P_s,a· V^* , ∀ s. Or equivalently, using the Bellman operator L defined as [Lv](s) = max_a R(s,a) + P_s,a· v, ρ^* + V^* = LV^*. Here denotes the vector of all ones. The bias vector (or value vector) V^* is unique up to any constant shift. Also, under Assumption <ref> it is easy to show (see Lemma <ref>) that the span of vector V^* is bounded, specifically (V^*) ≤ 2H where (V^*) := max_s V^*(s) - min_s V^*(s). §.§ Episodic setting as a special case We show that the problem of regret minimization in episodic MDPs forms a special case of our setting. Consistent with the literature (e.g., <cit.>), we define the episodic setting using a time-inhomogeneous MDP M = (,, P, R, H), where (P, R)= {P^h,R^h}_h=1^H. Under any policy, after exactly H steps, the MDP reaches the terminal state (say s_0) which is an absorbing state with reward 0. The value function V^π_h(s) of (possibly non-stationary) policy π=(π_1,…, π_H) at step h is defined as the H-h+1 step expected reward starting from state s. Then optimal value for any h=1,…, H is given by, V^*_h(s) = max_π V^π_h(s); where V^π_h = [∑_j=h^H R^h(s_j,π_h(s_j)) | s_h=s]. Unlike weakly communicating MDPs, here the optimal value depends on the starting state and the optimal policy is non-stationary. The long-term regret minimization problem in episodic setting seeks to minimize total regret over T/H episodes i.e., ^(T) = T/H V^*_1(s_1) - ∑_k=1^T/H∑_h=1^H R^h(s_k,h,a_k,h), where s_k,h, a_k,h denote the state, action at step h in k^th episode. It is easy to reduce the above problem to regret minimization in an average reward setting with MDP M' that has a slightly larger (HS) state space. 
Simply construct MDP M' by augmenting each state s with indices h=1,…, H, and modifying the transition model so that s_0 is not an absorbing state but instead transitions to the starting state s_1 with probability 1. Then, we show (Appendix <ref>) that _M^(T) = _M'(T) and M'. Importantly, M' satisfies Assumption <ref>, so that our algorithm and regret analysis directly applies to the episodic setting with S replaced by HS. In Appendix <ref>, we also illustrate the connection between the Bellman equations for the two settings which may be of pedagogical interest. §.§ Main results Our main contribution is a model-free optimistic Q-learning algorithm for the average reward setting as defined in Section <ref>. A main difficulty in designing algorithms for average reward setting is that the L operator does not have a strict contraction property when discount factor is 1. Our key novel insight is that under Assumption <ref>, an operator that we define as v := 1/H (L^H v + L^H-1 v + ⋯ + L v), has a strict contraction property in span. Specifically, we prove (see Lemma <ref>)[Obtained by substituting (H,p) by (2H, 1/2) in Lemma <ref>. For completeness, in Lemma <ref>, we also prove a more standard form of span contraction property: for all v_1,v_2, ( v_1- v_2) ≤1-1/4H(v_1 - v_2).] that for any v∈ℝ^S, v - V^*≤1-1/4H v - V^*. This result forms the basis of our algorithm design that uses ideas from episodic Q-learning to estimate operator as an average of L^h operators for h=1,…, H. Our algorithm assumes the knowledge of T, S, A, H but does not need to know the identity of the frequent state s_0. Our main result is the following regret bound for our algorithm. [Average reward setting]theoremthmMain1 Given Assumption <ref>, there exists an optimistic Q-learning algorithm (specifically, Algorithm <ref> with input (2H,1/2,2H)) that achieves a regret bound of (T) = O(H^5 S √(AT log(SAT/δ))log^2(T) + H^9S^2A√(log(SAT/δ))log^4.5(T) ) = Õ(H^5S√(AT) + H^9S^2A), for any starting state s_1, with probability 1-δ. Note that when T≥ H^9S^2A, we get an Õ(H^5S√(AT)) regret bound. Following the discussion in Section <ref>, we obtain the following corollary for the episodic setting. [Episodic setting]corollarythmEpisodic For episodic setting with episode length H, our algorithm (i.e., Algorithm <ref> with input (H,1,H) and states augmented by indices h=1,…, H) achieves a regret bound of Reg^Episodic(T) = Õ(H^6S√(AT) + H^11S^2A). Admittedly, the above regret bound is not optimal for the episodic setting where optimistic Q-learning algorithms have achieved optimal regret bounds of Õ(H√(SAT)) <cit.>. Those algorithms, however, fundamentally rely on the fixed length episodic structure and cannot be applied to the average reward setting even with Assumption <ref>. On the other hand, most algorithms for average reward settings make assumptions like diameter and worst-case hitting time (for all states) that are not satisfied by episodic settings (see Section <ref> for details). Our work provides a unified view and an algorithm that covers both these paradigms. Furthermore, as we discuss next, it significantly improves the state-of-the-art model-free algorithms for average reward settings, both in terms of regret bounds and simplicity of algorithm design. Finally, our regret bounds also imply a PAC guarantee. Specifically, let π_1,…, π_T denote the policy used by our algorithm (Algorithm <ref>) in each of the T time steps. 
Then, we show that picking a policy π randomly from this set (and repeating this experiment multiple times for a high probability guarantee) provides a policy that is ϵ-optimal (i.e., ρ^*-ρ^π≤ϵ) with probability 1-δ, where ϵ = 3(T)/T + O(H^2 √(Slog(T)log(1/δ)/T)). Its proof is deferred to Appendix <ref>. Substituting the Õ(H^5S√(AT)) regret bound from Theorem <ref>, this provides a way to get (ϵ,δ)-PAC policy using Õ(H^10S^2A/ϵ^2) samples. §.§ Comparison to related work Our work falls under the umbrella of online reinforcement learning, specifically on regret minimization in tabular average reward settings with a weakly communicating MDP. <cit.> proved a regret lower bound of Ω(√(DSAT)) for this setting where D, referred to as the diameter of the MDP, bounds the time to reach any recurrent state from another state under some policy. Most of the earlier works on this topic focus on model-based algorithms with near-optimal regret guarantee of Õ(DS√(AT)) <cit.>, or Õ(H^*S√(AT)) . Recently, several papers(e.g., <cit.>, <cit.>) improve the dependence on S and D, with <cit.> closing the gap to achieve an Õ(√(H^*SAT)) regret bound where H^* = (V^*) ≤ D, however their algorithm is not efficiently implementable. Very recently, <cit.> claimed to have an algorithm that is tractable and achieves the minimax optimal regret of Õ(√(H^*SAT)). More recently, increased interest has been in designing model-free algorithms with provable regret bounds. However, unlike episodic MDP, where variants of Q-learning have shown to achieve near-optimal regret bounds <cit.>, there is still a significant gap between model-free and model-based algorithms in average reward settings. Table <ref> lists state-of-the-art regret bounds for model-free algorithms (when applied to the tabular MDP case). The table may not be comprehensive but highlights the most relevant related results. <cit.> presented a simple extension of episodic optimistic Q-learning from <cit.> to the average reward case with regret that grows as T^2/3. Most subsequent works make more restrictive assumptions in order to achieve a √(T) regret bound. These include bounds on the mixing time for all policies (t_mix) and the time to reach any state from any other state under any stationary policy (t_hit). Specifically, <cit.> assume a bound of η on max_π∑_s μ^*(s)/μ^π(s) where μ^π,μ^* are the stationary distributions of policy π and optimal policy respectively, so that η≥ S. Other works for the linear function approximation setting <cit.> involve a parameter σ when applied to the tabular case. This parameter σ lower bounds the probability to visit any state and action under any stationary policy, so that 1/σ≥ SA. In comparison to the above literature, our work only assumes a bound H on hitting one frequent state (s_0) and does not require uniform mixing. Given t_mix=t_mix(ϵ) for ϵ≤1/2t_hit, it is easy to show that our Assumption <ref> is strictly weaker and holds in these settings with H=t_mix+2t_hit, and similarly for t_hit replaced by η, 1/σ. In practice, H can be much smaller than t_hit, η, 1/σ especially when the state space is large and not all policies explore all states uniformly. One exception to the above literature is <cit.> that requires only a bound H^* on (V^*), the weakest possible assumption in average reward settings. However, their regret bound suffers with a high dependence (S^5) on the size of the state space. 
Furthermore, while in terms of memory usage their algorithm qualifies as model-free, some features like pair-wise state-visit counters arguably make the algorithm design closer to a model-based algorithm. In contrast, our algorithm keeps the basic structure of optimistic Q-learning from <cit.> intact with some key intuitive modifications (based on the Bellman equations) to handle the average reward setting. Finally, a concurrent work by <cit.> provides an optimistic value-iteration algorithm with Õ(H^*√(S^3A^3T)) regret bound. However, their algorithm can be best described as a combination of model-based and model-free design since they need to keep track of the model (covariate matrix Λ_t in linear function approximation setting reduces to tracking an estimate P̂ of the transition matrix in the tabular setting). 1 § ALGORITHM DESIGN Our algorithm (Algorithm <ref>) extends the Optimistic Q-learning algorithm of <cit.> to a more general setting that includes episodic settings and non-episodic settings satisfying Assumption <ref>. For technical convenience, we present our algorithm design and analysis under the following assumption instead, which in fact holds whenever Assumption <ref> holds. There exists a state s_0 such that under any policy (stationary or non-stationary), starting from any state, the probability of visiting state s_0 in time H at least p. More precisely, for any (non-stationary) policy π=(π_1,…, π_H, π_H+1…), let P_π_i denote the transition probability matrix for policy π_i, and μ be any starting state distribution. Then, we have μ^⊤P_π_1 + P_π_1 P_π_2 + ⋯ + ∏_i=1^H P_π_i≥ p_s_0^⊤. Using Markov inequality, it is easy to derive the following. Given Assumption <ref>, under any policy starting from any state, the probability of visiting state s_0 in time 2H at least p=1/2. Therefore, Assumption <ref> holds with parameters (2H, 1/2) whenever Assumption <ref> holds with parameter H. Following the above observation, in the remaining paper, we work with Assumption <ref> only. The main result under Assumption <ref> will be derived by simply substituting (H,p) by (2H,1/2). §.§ Challenges and techniques To understand the challenges and ideas used for our algorithm design, let us first consider the Q-learning algorithm for episodic setting with a fixed episode length H. The Q-learning update rule makes the following update to the (H-h+1)-step value (and Q-value) functions Q^h,V^h for h∈{1,…, H} on making an observation (s,a, s', r) (state, action, next state, reward): * Q^h(s,a) ← (1-α) Q^h(s,a) + α (r+ V^h+1(s')) * V^h(s) ←max_a Q^h(s,a) where the terminal value V^H+1(s)=0; and α is referred to as the learning rate. Convergence of episodic Q-learning is based on the observation that with enough samples for all states and actions, V^h converges to L^H-h V^H+1, so that . <cit.> extend this algorithm to include UCB-based exploration and careful choice of α so that its episodic regret is bounded. A natural way to extend this algorithm to a non-episodic setting satisfying Assumption <ref> is to consider steps between two consecutive visits of s_0 as an episode. However, then Q^h(s,a) represents Q-value of a state and action when the expected number of remaining steps in the episode is at most H-h+1. This means we no longer have V^H+1(s) =0. With enough samples, V^h would still converge to L^H-h V^H+1, but in order to ensure convergence to V^*, V^H+1 needs to be carefully updated. Novel ideas. 
The main new innovation in our algorithm is to include an update step for V^H+1, so that it converges (in span) to V^*. This update step ( in Algorithm <ref>) essentially sets V^H+1 as a (running) average of V^h, h=1,…, H, and is of the form: * V^H+1(s) ← (1/N(s))· V^H+1(s) + (1-1/N(s))·(1/H)∑_h=1^H V^h(s) where N(s) is the number of visits of state s so far. To see why this kind of update may lead to convergence of V^H+1 to V^*, observe that if V^h ≈ L^H-h V^H+1, then after our update step V^H+1 ≈ 1/H∑_h=1^H V^h ≈ 1/H∑_h=1^H L^H-h+1 V^H+1 =: ℒV^H+1, where we define the operator ℒ as ℒ := 1/H∑_h=1^H L^H-h+1. Therefore, our algorithm essentially performs V^H+1 ← ℒV^H+1 repeatedly (in epochs ℓ=1,2,…). Our main new technical insight is that under Assumption <ref>, the operator ℒ has the following span contraction property. Proof is in Appendix <ref>. [Span Contraction]lemmaspanContractionCorollary Define the operator ℒ:ℝ^S →ℝ^S as: for any v∈ℝ^S, ℒv := 1/H∑_h=1^H L^h v. Then, given any V^* ∈ℝ^S such that sp(V^* - ℒV^*)=0, and any v∈ℝ^S, under Assumption <ref>, we have sp(ℒv - V^*) ≤ (1-p/H) sp(v - V^*). Therefore, by repeatedly applying V^H+1 ← ℒV^H+1, our algorithm ensures that V^H+1 gets closer and closer to V^* in span. Another subtle difference in our algorithm compared to the episodic setting is that in every step, the algorithm uniformly samples an h∈{1,…, H} and picks the arg max action arg max_a Q^h(s,a). This uniform sampling is important to ensure that for each state s all values of h are explored. In the next section, we provide algorithmic details that include setting exploration bonus and learning rates similar to <cit.>, and careful epoch design for efficient estimation of V^H+1. §.§ Algorithm details Algorithm <ref> provides the detailed steps of our algorithm. It takes as input parameters (H,p) satisfying Assumption <ref> and an upper bound H^* on sp(V^*). We assume that the time horizon T and the size of state space and action space S,A are also fixed and known. The algorithm uses these along with parameters (H,p,H^*) to define quantities C,K, b_n in . The algorithm proceeds in epochs ℓ=1,2,… of geometrically increasing duration. The epoch break condition (see ) is such that the number of epochs is upper bounded by ζ:= CSlog(T) = O(1/p H^2 S log^2(T)). In each epoch, the algorithm resets and re-estimates Q-values and V-values Q^h(s,a), V^h(s) for all s,a and h=1,…, H. The vector V^H+1, on the other hand, is updated across epochs. Specifically, in the beginning of every epoch ℓ, Q^h(s,a), V^h(s) are reset to large enough values for all s,a, h∈{1,…, H} (). The initialization is chosen to be large enough to maintain optimism. Then, in every round t of epoch ℓ, the algorithm observes state s_t and uniformly samples h_t∈{1,…, H}. Action a_t is picked as the arg max action: a_t = arg max_a Q^h_t(s_t,a). On playing action a_t, the reward r_t and next state s_t+1 are observed (). The tuple (s_t,a_t,s_t+1, r_t) is then used to update Q^h(s_t,a_t), V^h(s_t) for all h∈{1,…, H} (and not just for h_t). For each h, a Q-learning style update is performed (). A subtle but important point to note is that Q^h, V^h are updated in the reverse order of h, i.e., h=H,H-1,…, 1 (see ). This ensures that the latest updated value for h+1 is used to construct the target for h. The updated V^h(s_t), h=1,…, H are then used to obtain an updated value of V^ℓ+1,H+1(s_t) (). This update is such that at the end of the epoch V^ℓ+1,H+1(s_t) is set as the average of V^h(s_t) over all h and all rounds t in epoch ℓ. 
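For concreteness, the following numpy sketch implements the Bellman operator L, the averaged operator ℒ and the span seminorm for a known tabular MDP, so the contraction above can be checked numerically. It is an illustration of the operator only: the function names and the known-model setting are ours for exposition, whereas the algorithm itself never accesses (P,R) and instead emulates ℒ through the sample-based Q-learning updates and the epoch-wise averaging of V^h described above.

```python
import numpy as np

def bellman(P: np.ndarray, R: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Undiscounted Bellman operator: [Lv](s) = max_a R(s,a) + P_{s,a} . v.
    P has shape (S, A, S); R has shape (S, A); v has shape (S,)."""
    return np.max(R + P @ v, axis=1)

def averaged_bellman(P: np.ndarray, R: np.ndarray, v: np.ndarray, H: int) -> np.ndarray:
    """The averaged operator: (1/H) * (L v + L^2 v + ... + L^H v)."""
    total, w = np.zeros(len(v)), v.astype(float)
    for _ in range(H):
        w = bellman(P, R, w)
        total += w
    return total / H

def span(v: np.ndarray) -> float:
    return float(np.max(v) - np.min(v))

# Under the assumption above (parameters (H, p)), the span-contraction lemma gives
#   span(averaged_bellman(P, R, v, H) - V_star) <= (1 - p/H) * span(v - V_star),
# so iterating the averaged operator shrinks the span distance to V^*; re-centering
# is harmless because the bias vector is only defined up to a constant shift.
def approximate_bias_vector(P: np.ndarray, R: np.ndarray, H: int, iters: int = 100) -> np.ndarray:
    v = np.zeros(P.shape[0])
    for _ in range(iters):
        v = averaged_bellman(P, R, v, H)
        v -= v.min()
    return v
```

On a small randomly generated MDP this can be used to sanity-check the contraction factor, keeping in mind that the guarantee only applies when the frequent-state assumption actually holds for the chosen (H, p).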
At the end of epoch ℓ, a projection operation V^ℓ+1, H+1← V^ℓ+1,H+1 () is performed occasionally (roughly every 2H/plog(T) epochs) defined as: for any v∈ℝ^S, [ v](s) := min{2H^*, v(s)-min_s∈ v(s)} + min_s∈ v(s). This projection trims down the span of vector V^ℓ+1, H+1 to at most 2H^*. § REGRET ANALYSIS In this section we analyze the regret of Algorithm <ref>. Specifically, we prove the following theorem. theoremthmMain Under Assumption <ref>, with probability at least 1-δ, the T round regret of Algorithm <ref> is upper bounded by the following (T) = O1/p^2H^4H^*S√(ATlog(SAT/δ))log^2(T)+1/p^4.5H^8H^*S^2A√(log(SAT/δ)log^4.5(T) = O1/p^2H^*H^4 S√(AT) + 1/p^4.5H^* H^8 S^2A. Following the observation in Lemma <ref> and the span bound on bias vector (V^*)≤ 2H under Assumption <ref>, Theorem <ref> is a simple corollary of Theorem <ref> on substituting (2H, 1/2, 2H) for (H,p, H^*). In the rest of the section we provide a proof outline for Theorem <ref> along with some important intermediate lemmas. All the missing proofs from this section are in Appendix <ref>, with some supporting lemmas in Appendix <ref>. Let Q^t,h, V^t,h, N^t denote the value of Q^h, V^h , N at the beginning of time step t in epoch ℓ (i.e., before the updates of round t) in the algorithm. And, for n≥ 1, 1≤ i≤ n, define α_n^i := α_i ∏_j=i+1^n (1-α_j), where as defined in the algorithm, α_n=(C+1)/(C+n). For notational convenience, we also define α_n^0 as 1 for n=0 and 0 otherwise. Then, by algorithm construction, for any t≥τ_0+1, and any s,a with n:=N^t(s,a) ≥ 1, we have Q^t,h(s,a) = R(s,a) + ∑_i=1^n α_n^i(V^t_i+1,h+1(s_t_i+1) + b_i); otherwise (i.e., when n=0), Q^t,h(s,a)=(s)+ as initialized in the beginning of epoch. And, V^t,h(s_t) = Q^t,h(s_t,a_t,h), where a_t,h:=max_a Q^t,h(s_t,a). Also, define for ℓ≥ 2, and all s such N_ℓ-1(s)≥ 1, v^ℓ(s) :=1/N_ℓ-1(s)∑_t_i∈ epoch ℓ-1: s_t_i=s1/H∑_h=1^H V^t_i,h(s). Then, by algorithm construction, we have for such ℓ,s, V^ℓ,H+1(s) = {[ [ v^ℓ](s), if (ℓ-1 K)=0,; v^ℓ(s), otherwise. ]. (For other s, V^ℓ,H+1(s)=(s)+ as initialized in the beginning of epoch). As discussed in the algorithm design overview, each epoch of our algorithm attempts to update V^H+1 as V^H+1← V^H+1 so that it gets closer and closer to V^* due to the span contraction property of (see Lemma <ref>). And the Q-learning update attempts to maintain . Therefore, at any given epoch ℓ and k≤ℓ-1, we expect V^ℓ, H+1 to be close to ^k V^ℓ-k, H+1 and V^ℓ, h to be close to L^H-h+1^k V^ℓ-k, H+1. We show that this observation holds but with some errors due to exploration bonus and sampling estimation errors. Below, we recursively define two quantities g^k(t,h) and G^k(ℓ,s), for any k≥ 1, aimed at capturing these errors in estimates V^t,h(s_t), h=1,…, H and in V^ℓ,H+1(s), respectively. Here, ζ=CSlog(T) denotes an upper bound on the number of epochs in the algorithm. * g^k(t,h) is defined as follows: g^0(t,H+1):=0, ∀ t; and for k≥ 1, h≤ H, g^k(t,h):= b_n_t,h+∑_i=1^n_t,hα^i_n_t,hg^k(t_i+1,h+1) + α_n_t,h^0() where n_t,h=N^t(s_t,a_t,h), a_t,h=max_a Q^t,h(s_t,a), and g^k(t,H+1):= G^k-1(ℓ_t,s_t), k≥ 1. where ℓ_t denote the epoch where times step t appears. * G^k(ℓ,s) is defined as follows: G^0(ℓ, s):=0, ∀ℓ, s; and for k ≥ 1, N_ℓ-1(s)≥ 1, G^k(ℓ,s):=1/N_ℓ-1(s)∑_t_i∈epoch ℓ-1: s_t_i=s1/H∑_h=1^H g^k(t_i,h), For s,ℓ with N_ℓ-1(s)=0, we set G^k(ℓ,s)=. Given these definitions, our first main lemma is the following optimism property which formalizes the intuition discussed earlier about convergence of V^H+1 to ^k V^ℓ-k and V^h to L^H-h^k V^ℓ-k. 
[Optimism]lemmaoptimism With probability at least 1-δ, we have for all epochs ℓ, t∈ℓ, 0 ≤ V^t,h(s_t) -[L^H-h+1L^k V^ℓ-k,H+1](s_t) ≤ 4 g^k+1(t,h), for all h=1,…, H+1, and 0 ≤ Q^t,h_t(s_t,a_t) - (R(s_t,a_t) + P_s_t,a_t· L^H-h_tL^k V^ℓ-k,H+1) ≤ 4 g^k+1(t,h_t), where k=ℓ-1 for ℓ≤ K+1 and k=ℓ-K⌊ℓ-K-1/K⌋-1, otherwise. Here K= as defined in Algorithm <ref>. The proof of above lemma is in Appendix <ref>. Now, fix an ℓ, t∈ℓ, and set k as in Lemma <ref>. Denote v=L^H-h_tL^k V^ℓ-k,H+1. Then, Lemma <ref> provides that V^t,h_t(s_t) ≥ [Lv](s_t), and Q^t,h_t(s_t,a_t) ≤ R(s_t,a_t) + P_s_t,a_t· v +4 g^k+1(t,h_t). By the choice of action a_t in the algorithm, we also have V^t,h_t(s_t) = max_a Q^t,h_t(s_t,a) = Q^t,h_t(s_t,a_t). Therefore, subtracting the above two inequalities we get R(s_t,a_t) ≥ [Lv](s_t) -P_s_t,a_t v - 4 g^k+1(t,h_t). Now, using the Bellman equations for average reward MDP (see Section <ref>), we have ρ^* = [LV^*](s_t) - V^*(s_t) where ρ^* denote the optimal asymptotic average reward. Subtracting the last two inequalities and using the definition of span, we can obtain ρ^* - R(s_t,a_t) ≤ 2 (v-V^*) + 4 g^k+1(t,h_t) + (P_s_t,a_tV^*-V^*(s_t)) Then, the following bound on the per-round regret of Algorithm <ref> follows by applying the span contraction property from Lemma <ref> to bound (v-V^*). Detailed Proof is in Appendix <ref>. [Per round regret]lemmalemRegretDecomposition Under Assumption <ref>, with probability at least 1-δ, for all epochs ℓ, t∈ℓ , the per round regret of Algorithm <ref> is bounded as: ρ^* - R(s_t,a_t) ≤ 8H^*(1-p/H)^k + 4 g^k+1(t,h_t) + (P_s_t,a_tV^*-V^*(s_t)) where k=ℓ-1 for ℓ≤ K+1 and k=ℓ-K⌊ℓ-K-1/K⌋-1, otherwise. We obtain our cumulative regret bound by summing the above per-round regret bound over all t∈ℓ for all ℓ. Let k_ℓ denote the value of k defined in Lemma <ref> for epoch ℓ. Then, (T) = Tρ^* - ∑_t=1^T R(s_t,a_t) ≤ ∑_ℓ, t∈ℓ(8H^*(1-p/H)^k_ℓ + 4g^k_ℓ+1(t,h_t) ) + ∑_t δ_t where δ_t := (P_s_t,a_tV^*-V^*(s_t)). We use the careful setting of k_ℓ to bound the above expression. Note that a large k_ℓ is desirable for the first term which may be interpreted as `bias' of our estimates compared to V^*. But for the second term that captures the `variance' in our estimates and accumulates the exploration bonus over k_ℓ+1 epochs, a small k_ℓ is desirable. The choice of k in Lemma <ref> ensures that k_ℓ∈ [K,2K-1] for ℓ≥ K+1. When k_ℓ≥ K =, clearly the first term is small enough. For bounding the second term, we prove the following lemma (Appendix <ref>). lemmagksummation Given any sequence of 1≤ k_ℓ+1 ≤ 2K for all epochs ℓ. Then, 1/H∑_h=1^H ∑_ℓ, t∈ℓ g^k_ℓ+1(t,h) ≤ 2e K ∑_h=1^H ∑_t=1^T b_n_t,h + , where n_t,h=N^t(s_t,a_t,h), a_t,h=max_a Q^t,h(s_t,a), and ζ=CSlog(T) denotes an upper bound on the number of epochs in Algorithm <ref>. The sum of all bonuses ∑_t,h b_n_t,h can then be bounded using standard algebraic arguments (see Lemma <ref>). The proof of regret bound stated in Theorem <ref> then follows by applying the above observations along with a standard concentration inequality to bound the ∑_t δ_t term. All the missing steps of this proof are provided in Appendix <ref>. § CONCLUSION We presented an optimistic Q-learning algorithm for online reinforcement learning under a setting that unifies episodic and average reward settings. Specifically, we consider MDPs with some (unknown) frequent state s_0 such that the expected time to reach this state from any other state is bounded by a known constant H under all policies. 
A main technical contribution of our work is to introduce an operator = 1/H∑_h=1^H L^h and demonstrate its strict span contraction property in our setting. Using this property, we demonstrate an average reward regret bound of Õ(H^5 S√(AT)) for our algorithm, with a corollary of Õ(H^6 S√(AT)) for the episodic setting. An avenue for future research is to improve the dependence on H in our regret bound. Such an improvement was not a focus of this work, but may be possible by employing techniques in some recent work on improving dependence on H for episodic Q-learning, particularly <cit.>. plainnat § PRELIMINARIES: SOME IMPLICATIONS OF ASSUMPTION <REF> §.§ Weakly communicating and bounded span For any stationary policy π, define bias vector V^π∈ℝ^S as follows, V^π(s) = lim_T→∞1/T∑_t=1^T(R(s_t,a_t)-ρ^π(s_t))|s_1=s; a_t=π(s_t). where ρ^π(s) lim_T→∞1/T∑_t=1^TR(s_t,a_t)|s_1=s; a_t=π(s_t) is the asymptotic average reward of policy π starting from state s. Then, under Assumption <ref>, the span of the vector V^π is upper bounded by 2H. Let J^π_n(s) be the n-step value of playing policy π starting from the state s. Let state s_0 is reached in τ steps. Then, by our assumption [τ]≤ H. Therefore, since the reward in each time step is upper bounded by H, we have for every s, J^π_n(s_0) - H≤ J^π_n(s) ≤ J_n^π(s_0)+H, so that (J^π_n) ≤ 2H. Now, using the known result <cit.> that for all s_1,s_2, V^π(s_1) - V^π(s_2) = lim_n→∞ J_n^π(s_1) - J_n^π(s_2) we have that the bound on (J^π_n) implies bound on the bias vector as (V^π)≤ 2H. Under Assumption <ref>, the MDP is weakly communicating and the span of optimal bias vector V^* is upper bounded by 2H. The weakly communicating property can be seen by applying Proposition 8.3.1(b) of <cit.>. Consider the Markov chain induced by any stationary policy. Under Assumption <ref> there is a positive probability to go from s to s_0 for every state s in n steps for some finite n≥ 1. Now, for any recurrent state s there must be a positive probability of going from s_0 to s in n steps for some n≥ 1 because otherwise the probability of revisiting s would be strictly less than 1. Therefore all recurrent states are reachable from each other and form a single irreducible class. All the remaining states are transient by definition. For a weakly communicating MDP, the optimal policy is known to be stationary <cit.>. Therefore, applying Lemma <ref>, we obtain (V^*)≤ 2H §.§ Episodic MDP as a special case of Average reward MDP Here we provide a reduction that shows that the setting of regret minimization in episodic MDP as a special case of the weakly communicating MDPs satisfying Assumption <ref>. This reduction provides a unified view of regret minimization across episodic and non-episodic settings, and serves as the basis for our algorithm design as an extension of an episodic Q-learning algorithm. We describe an episodic setting consistent with the recent literature (e.g., <cit.>). In the episodic setting with finite horizon H, we have a time-inhomogeneous MDP described by the tuple (,, P, R, H), where (P, R)= {P^h,R^h}_h=1^H. At each time step h=1,…, H, the learner observes the state s_h, and takes an action a_h ∈. The MDP transitions to a new random state s_h+1∼ P^h_s_h,a_h and the agent receives a reward R^h(s_h,a_h). After H steps, under any policy, the MDP reaches terminal state (say s_0) which is an absorbing state with reward 0. 
Optimal policy aims to optimize the value function V^π_h(s) at every step h of the episode, defined as the H-h+1 step expected reward starting from state s under possibly non-stationary policy π=(π_1,…, π_H): V^π_h(s) = [∑_j=h^H R^h(s_j,π_h(s_j)) | s_h=s], where the expectation is taken over the sequence s_j+1∼ P^j_s_j,a_j, a_j ∼π_h(s_j) for j=h,…, H. Then, by dynamic programming, the optimal value is given by V^*_h(s) = max_π V^π_h(s) = [LV^*_h+1](s) = [L^H-h+1 0](s) And, regret of an episode is defined as V^*_1(s_1) - ∑_h=1^H R^h(s_h, a_h), where s_h, a_h are the state,action at step h in the episode. Unlike weakly communicating MDPs, the optimal reward in an episode depends on the starting state and the optimal policy is non-stationary. The long-term regret minimization problem in episodic setting seeks to minimize total regret over a large number (T/H) of episodes, i.e., ^(T) = T/H V^*_1(s_1) - ∑_k=1^T/H∑_h=1^H R^h(s_k,h,a_k,h) where s_k,h, a_k,h denote the state, action at step h in kth episode. We show that this problem of regret minimization in episodic MDPs is in fact equivalent to the regret minimization problem in a weakly communicating average reward (homogenous) MDP with slightly bigger state space (HS states). Specifically, we show the following result: Given any episodic MDP M=(, , P, R, H), there exists a weakly communicating MDP M'=(', ', P', R') satisfying Assumption <ref> with |'|=H| S|, |'|=| A| such that _M^(T) = _M'(T), where _M^(T) and _M'(T) denote the episodic and average reward regrets under the MDP M and M' respectively. We prove this by the following simple reduction. Given episodic MDP M=(,,P,R H), construct (time-homogeneous) MDP M'=(',,P',R') where corresponding to every state s ∈, the new state space ' contains H states denoted as {s^h,h=1,…, H}. A visit to state s at step h in the episodic setting is then a visit to s^h in MDP M'. And for all s ∈ S,a∈ A ,h, we define P'(s^h,a) = P^h(s,a), R'(s^h, a) = R^h(s,a). Further, the transition model P' is modified so that s_0 is not an absorbing state but instead transitions to starting state s_1 with probability 1. Then any non-stationary policy in the episodic MDP M is equivalent to a stationary policy π' in the new MDP M', with π'(s^h) =π_h(s). Therefore, optimal non-stationary policy for the episodic MDP corresponds to a stationary policy for MDP M'. Because s_0 is visited every H steps, the constructed MDP M' trivially satisfies Assumption <ref> Therefore, M' is weakly communicating with span of optimal bias vector bounded by H. Since V^*_1(s_1) is the maximum reward obtainable per episode of fixed length H, clearly, the optimal asymptotic average reward ρ^* for M' isρ^* = 1/H V^*_1(s_1). Construct vector V^* as V^*(s^h) = V^*_h(s) - (H-h+1)ρ^*; then the dynamic programming equation for the episodic MDP implies the average reward Bellman optimality equations for ρ^*, V^*. To see this recall that we have by dynamic programming V^*_1=LV^*_2 = ⋯ = L^H-1V^*_H = L^H 0 so that for every state s^h in M', we have [LV^*](s^h) -V^*(s^h) = [LV^*_h+1](s) - (H-h)ρ^* -V^*_h(s) + (H-h+1)ρ^* = ρ^* Therefore, V^* is the optimal bias vector (up to span) for MDP M'. Now, in the expression for ^(T) above, substitute ρ^*=V_1^*(s_1)/H and R^h(s_k,h, a_k,h)=R'(s_t,a_t) where s_t ∈ S' is the corresponding state to s_k,h and a_t=a_k,h. We obtain ^_M(T) = Tρ^* - ∑_t=1^T R'(s_t,a_t) = _M'(T) i.e., the average (over episodes) of episodic regret is the same as the average reward of the constructed homogenous MDP. 
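As an illustration of this reduction, the following hedged sketch builds the homogeneous MDP M' from the arrays (P^h, R^h) of an episodic MDP. The array shapes and names are our own, and for clarity the terminal/reset state s_0 is kept as one explicit extra state.

```python
import numpy as np

# Sketch of the episodic-to-average-reward reduction described above (illustrative only).
# Input: P with shape (H, S, A, S) and R with shape (H, S, A) for the episodic MDP.
# Output: a time-homogeneous MDP on H*S + 1 states, where state h*S + s plays the role of
# s^h and the last state is the terminal/reset state s_0, which returns to the start state.
def episodic_to_homogeneous(P, R, start_state=0):
    H, S, A, _ = P.shape
    s0 = H * S                                   # index of the terminal/reset state
    P2 = np.zeros((H * S + 1, A, H * S + 1))
    R2 = np.zeros((H * S + 1, A))                # reward 0 at s_0, as in the episodic setting
    for h in range(H):
        for s in range(S):
            i = h * S + s
            R2[i] = R[h, s]
            for a in range(A):
                if h < H - 1:
                    P2[i, a, (h + 1) * S:(h + 2) * S] = P[h, s, a]   # move to the step-(h+1) copies
                else:
                    P2[i, a, s0] = 1.0                               # step H always ends in s_0
    P2[s0, :, start_state] = 1.0                 # s_0 restarts the episode at s_1 with probability 1
    return P2, R2
```

Running an average-reward algorithm on (P2, R2) then amounts to running it over back-to-back episodes of the original MDP, with s_0 visited once per episode.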
This discussion demonstrates that any algorithm constructed for weakly communicating MDP under Assumption <ref> can be seamlessly applied to time in-homogeneous episodic MDP setting. The regret bound obtained will hold almost as it is (only difference being that the size of state space S will be changed to SH). Thus, our setting provides a unified view of the episodic and non-episodic analysis. § SPAN CONTRACTION: PROOF OF LEMMA <REF> Define operator L:ℝ^S →ℝ^S as: for any v∈ℝ^S, Lv := 1/H∑_h=1^H L^hv. Then, given any v_1,v_2∈ℝ^S, under Assumption <ref>, we have (L v_1-L v_2) ≤ (1-p/H)( v_1 - v_2). For this proof, first we show by induction: for i=1, 2, … L^i v_1 - L^i v_2 ≤ (∏_j=1^i P_π^1_i) (v_1-v_2), L^i v_1 - L^iv_2 ≥ ( ∏_j=1^i P_π^2_i) (v_1-v_2). where π^1_i _π∈Πr_π + P_π· L^i-1v_1, and π_i^2_π∈Πr_π + P_π· L^i-1 v_2. For i=1, consider Lv_1-Lv_2: Lv_1- Lv_2 = _π∈Πr_π + P_π v_1 - _π∈Πr_π + P_π v_2 ≥ P_π^2_1(v_1-v_2), where in the last inequality we use π^2_1_π∈Πr_π + P_π v_2. For the upper bound, we have: Lv_1 - Lv_2 = _π∈Πr_π + P_π v_1 - _π∈Πr_π + P_π v_2 ≤ P_π^1_1(v_1-v_2), where in the last inequality we use π^1_1_π∈Πr_π + P_π v_1. Assume inequalities (<ref>), (<ref>) are true for i-1, then we have: L^iv_1 - L^iv_2 = L(L^i-1v_1) - L(L^i-1 v_2) (applying the upper bound for i=1) ≤ P_π^1_1( L^i-1 v_1 - L^i-1 v_2) (applying the upper bound for i-1) ≤ P_π^1_1⋯ P_π^1_i ( v_1 - v_2), where π^1_i _π∈Πr_π + P_π· L^i-1v_1. Similarly, we can show the lower bound statement by induction. L^iv_1 - L^i v_2 = L(L^i-1v_1) - L(L^i-1 v_2) (applying the lower bound for i=1) ≥ P_π^2_1( L^i-1 v_1 - L^i-1 v_2) (applying the lower bound for i-1) ≥ P_π^2_1⋯ P_π^2_i ( v_1 - v_2), where π_i^2_π∈Πr_π + P_π· L^i-1 v_2. This completes the proof of inequalities (<ref>),(<ref>). We have by Assumption <ref>, for all starting state distributions μ over states in S, μ^T (∑_i=1^H P_π^1_1⋯ P_π^1_i ) ≥ p _s_0^T, μ^T(∑_i=1^H P_π^2_1⋯ P_π^2_i) ≥ p_s_0^T. In particular using above with μ=_s, the Dirac delta distribution for state s, and substituting the inequalities (<ref>), we have for all s, ∑_i=1^H [L^i v_1 - L^i v_2](s) = _s^T (∑_i=1^H (L^i v_1 - L^i v_2)) ≤ _s^T (∑_i=1^H P_π^1_1⋯ P_π^1_i)(v_1-v_2) ≤ p _s_0^T (v_1-v_2) + (H-p) max_s' (v_1(s')-v_2(s') = p(v_1(s_0) - v_2(s_0)) + (H-p) max_s'{v_1(s')-v_2(s')}, and similarly substituting the inequality (<ref>) ∑_i=1^H [L^i v_1 - L^i v_2](s) = _s^T (∑_i=1^H (L^i v_1 - L^i v_2)) ≥ _s^T (∑_i=1^H P_π^2_1⋯ P_π^2_i)(v_1-v_2) ≥ p _s_0^T (v_1-v_2) + (H-p) min_s' (v_1(s')-v_2(s') = p(v_1(s_0) - v_2(s_0)) + (H-p) min_s'{v_1(s')-v_2(s')}. Therefore, on subtracting the above two inequalities we have, (1/H∑_i=1^H (L^i v_1 - L^iv_2)) ≤(1-p/H) (v_1-v_2). As a corollary to Lemma <ref>, we obtain Lemma <ref>, stated again here for easy reference. * We use (v_1+v_2)≤(v_1)+(v_2) and (c v) = c(v), to get, (L v -V^*) ≤(L v - L V^*) + 1/H∑_i=1^H (L^i V^* - V^*), Then, substituting the result from Lemma <ref> for the first term, along with the observation that (L^iV^*-V^*)≤∑_j=1^i (L^jV^*-L^j-1 V^*)≤∑_j=1^i (LV^*-V^*) = 0, we get the lemma statement. § MISSING PROOFS FROM SECTION <REF> In below, we consider modification that for an epoch ℓ≥ K+1, we set (s)=+max{V^Km+1(s), V^K(m-1)+1} where m=⌊ℓ-1/K⌋. This is same as setting (s)=+max{V^Km+K+1(s), V^Km+1} with m=⌊ℓ-K-1/K⌋ for ℓ≥ K+1. (Priyank) bound on |V^Km+1,H+1(s)-V^Km+K+1,H+1(s)|: Consider any m≥ 0. ???We show |V^Km+1,H+1(s)-V^Km+K+1,H+1(s)|≤ 5Km(H+1)b_0 ≤ 5 ζ (H+1) b_0 ??? 
To see that induction holds for m=0, note that we want to prove |V^1,H+1(s)-V^K+1,H+1(s)|≤ 5K(H+1)b_0. Let ℓ' be the largest epoch such that 1<ℓ'≤ K+1 and N_ℓ'-1(s)=0. If no such ℓ' exist then |V^K+1,H+1(s)-V^2K+1,H+1(s)|≤ K(H+1)(b_0). If such ℓ' exists, then V^ℓ',H+1 has its initial value +(s)= +V^1,H+1(s). And, |V^K+1,H+1(s)- V^1,H+1(s)| ≤ |V^K+1,H+1(s)- V^ℓ',H+1(s)|+|V^ℓ',H+1(s)- V^1,H+1(s)| ≤ (K+1-ℓ')(H+1)(b_0) + ≤ 5K(H+1)(b_0) For m≥1, let ℓ' be the largest epoch such that Km+1<ℓ'≤ Km+K+1 and N_ℓ'-1(s)=0. If no such ℓ' exist then |V^Km+1,H+1(s)-V^Km+K+1,H+1(s)|≤ K(H+1)(b_0). If such ℓ' exists, then V^ℓ'(s),H+1(s) has initial value (s)+=max{V^Km+1,H+1(s), V^K(m-1),H+1(s)} |V^Km+1,H+1(s)-V^Km+K+1,H+1(s)| ≤ (Km+K+1-ℓ')(H+1)(b_0) + +|V^Km+1,H+1(s)-V^K(m-1)+1,H+1(s)| ≤ 5K(H+1)(b_0) + |V^Km+1,H+1(s)-V^K(m-1)+1,H+1(s)| ≤ 5Km(H+1)b_0, where the last inequality is due to induction. §.§ Optimism: Proof of Lemma <ref> Here, we prove Lemma <ref>. We restate it below: * We prove the first inequality (<ref>) by induction on ℓ, h. The second inequality (<ref>) about Q-values will follow from the proof of (<ref>). In order to facilitate our induction-based proof, we prove the statement in (<ref>) for two values of k (denoted as k_ℓ) for large ℓ. Specifically, we show that the statement holds for k=ℓ-1 for ℓ≤ K+1, and for ℓ>K+1 it holds both for k=ℓ-K⌊ℓ-K-1/K⌋-1 and k=ℓ-K⌊ℓ-1/K⌋-1. Note that for all these values of k, we have k+1≤ 2K and ℓ-k=mK+1 for some integer m≥ 1. Our induction-based proof involves proving two inductive statements that depend on each other. Induction statement 1 proves inequality (<ref>) for ℓ, h given that it holds for ℓ, h+1,…, H+1, and Induction statement 2 proves the inequality for ℓ, H+1 given that it holds for ℓ-1, h=1,…, H. Base case: consider ℓ=1,h=H+1, and k=0, then (<ref>) reduces to V^1,H+1(s_t) ≤ V^t,H+1(s_t) ≤ V^1,H+1(s_t) + 4g^1(t,H+1), for t in epoch 1. This is trivially true since g^1(t,H+1)=G^0(1,s_t) = 0 and V^t,H+1(s_t) = V^1,H+1(s_t). §.§.§ Induction statement 1: Induction hypothesis 1. Fix an epoch ℓ≥ 1, and an h∈{1, …, H}. The induction hypothesis 1 is for pair (ℓ,h+1) and for the (potentially two) feasible values of k_ℓ defined above: assume that with probability at least 1-(ℓ-1)δ/T- (H-h)δ/HT, (<ref>) holds for h'= h+1,…, H+1 for t∈ℓ, k_ℓ, and for h'=1,…,H+1, t∈ℓ', k_ℓ' for ℓ'≤ℓ-1. Then, above base case provides the base case for this hypothesis (for ℓ=1, h+1=H+1). Induction step 1. We show that the inequality (<ref>) holds for h and for the above-mentioned (potentially two) values of k, for every t∈ℓ with probability at least 1-δ/T^3 H. By taking union bound over t,k, this will establish the inductive statement 1 for pair (ℓ,h), i.e., the inequality (<ref>) holds with probability at least 1-(ℓ-1)δ/T- (H-h) δ/TH - 2τ_ℓδ/T^3H≥ 1-(ℓ-1)δ/T- (H-h+1) δ/HT for all epochs ℓ'≤ℓ-1, t∈ℓ', h'=1,…,H+1 and for t∈ℓ, h'=1,…,H+1. Let Q^t,h, V^t,h, N^t denote the value of Q^h, V^h, N at the beginning of time step t in epoch ℓ (i.e., not counting the sample at time t). Fix a t∈ℓ. First, let us consider the case that N^t(s_t,a)=0 for some a. In this case V^t,h(s_t) = max_a Q^t, h(s_t,a) ≥(s_t)+ Now, consider the values of k that are of interest: for ℓ<K+1 we consider k=ℓ-1, and for ℓ>K+1, we consider two potential values k_ℓ=ℓ-Km-K-1,ℓ-Km-1 where m=⌊ℓ-K-1/K⌋. Note that for ℓ>K+1, we have that ℓ-k_ℓ can take two values Km+1 and Km+K+1. For both these values, we have k_ℓ≤ 2K and (V^ℓ-k_ℓ,H+1)≤ 4H^* (Lemma <ref>). 
By algorithm construction, for such ℓ, =max{V^Km+1,H+1, V^Km+K+1,H+1}. Therefore, if N^t(s_t,a)=0 for some a, we have the lower bound V^t, h(s_t) ≥ +(s_t) ≥ + V^ℓ-k_ℓ,H+1(s_t) ≥ H-h+1+2K + 4H^* + V^ℓ-k_ℓ,H+1(s_t) ≥ H-h+1+2K + max_s' V^ℓ-k_ℓ,H+1(s') ≥ [L^H-h+1^k_ℓ V^ℓ-k_ℓ,H+1](s_t) where in the last inequality we used that since rewards are bounded in [0,1], each L operator can add at most 1 to the max value of the vector. Similarly, we can show the upper bound for the case when n_t,h=N^t(s_t,a_t,h)=0, where a_t,h=max_a Q^t,h(s_t,a). In Lemma <ref>, we show that for any m≥ 0, s, |V^Km+1,H+1- V^Km+K+1,H+1|≤ 5(m+1)KH b_0 ≤ 6ζ Hb_0, where ζ is an upper bound on the number of epochs in the algorithm. Therefore, V^t, h(s_t) = Q^t,h(s,a_t,h)= +(s_t) = + max{V^Km+1,H+1(s_t), V^Km+K+1,H+1(s_t)} ≤ 10ζKH b_0 + V^ℓ-k_ℓ,H+1(s_t) ≤ 10ζKH b_0+4H^*+ min_s' V^ℓ-k_ℓ,H+1(s') ≤ 10ζKH b_0 + 4H^* + H-h+1+2K + [L^H-h+1^k_ℓ V^ℓ-k_ℓ,H+1](s_t) ≤ 16ζKH b_0 + [L^H-h+1^k_ℓ V^ℓ-k_ℓ,H+1](s_t) = 4g^k_ℓ+1(t,h) + [L^H-h+1^k_ℓ V^ℓ-k_ℓ,H+1](s_t) where last inequality follows by initialization of g^k(t,h) for the case when n_t,h=0. To summarize, the lower bound in induction statement 1 holds trivially if N^t(s,a)=0 for some a, and the upper bound holds trivially if n_t,h = N^t(s_t,a_t,h)=0. Let us now show the lower bound for the case N^t(s_t,a)≥ 1 for all a. By algorithm design, after the update, for any s,a with N^t(s,a)≥ 1, the Q estimate available at the beginning of time step t is Q^t,h(s,a) = ∑_i=1^n α_n^i(r_t_i+V^t_i+1,h+1(s_t_i+1) + b_i), where n=N^t(s,a), and ∑_i=1^n α_n^i=1. Then (with some abuse of notation, in below we use n N^t(s_t,a) where identity of action a is clear from the context). k_ℓ is whichever feasible value of k (out of potentially two values) we are proving the inequality for. V^t,h(s_t) = max_a Q^t,h(s_t,a) = max_a (∑_i=1^n α_n^i(r_t_i+V^t_i+1,h+1(s_t_i+1) + b_i) ), where n= N^t(s_t,a) (since s_t_i=s_t, using Lemma <ref> with σ = 1, we get with probability 1-δ/4T H^3,) ≥ max_a (∑_i=1^nα_n^i (R(s_t,a) + V^t_i+1,h+1(s_t_i+1) + b_i) - b_n/2) (using b_i ≥ b_n for i≥ 1, and ∑_i=1^nα^i_n=1 since n≥ 1) ≥ max_a (∑_i=1^nα_n^i(R(s_t,a) + V^t_i+1,h+1(s_t_i+1) + b_i/2) ) (using the induction hypothesis) ≥ max_a ∑_i=1^n α_n^i ( R(s_t,a) + L^H-hL^k_ℓV^ℓ-k_ℓ,H+1(s_t_i+1)+ b_i/2) ≥ max_a R(s_t,a) + ∑_i=1^n α_n^i (L^H-hL^k_ℓV^ℓ-k_ℓ,H+1(s_t_i+1) - P(s_t,a)L^H-hL^k_ℓV^ℓ-k_ℓ,H+1+ b_i/2) + P(s_t,a)L^H-hL^k_ℓV^ℓ-k_ℓ,H+1 (Lemma <ref> (σ≤ 4H^* from Lemma <ref>) give with probability 1-δ/4T H^3) ≥ max_a R(s_t,a)+P(s_t,a)L^H-hL^k_ℓV^ℓ-k_ℓ,H+1 (by definition of L(·)) = [L^H-h+1L^k_ℓV^ℓ-k_ℓ,H+1](s_t). For applying the concentration bound from Lemma <ref> in above, we use that V^ℓ-k_ℓ,H+1= V^Kj+1,H+1 for some integer j and due to projection step, in Lemma <ref> we can show that σ=(V^Kj+1)≤ 4H^*. This step explains the significance of the projection operator. Similarly, for the upper bound, restrict to the case with n_t,h≥ 1. 
Then, V^t,h(s_t) = Q^t,h(s_t, a_t,h) = ∑_i=1^n_t,hα_n_t,h^i(r_t_i+V^t_i+1,h+1(s_t_i+1) + b_i) = R(s_t,a_t,h) + ∑_i=1^n_t,hα_n_t,h^i(V^t_i+1,h+1(s_t_i+1) + b_i)+∑_i=1^n_t,hα_n_t,h^ir_t_i - R(s_t,a_t,h) (using the induction hypothesis, ∑_i=1^nα^i_nb_i ≤ 2b_n, and Lemma <ref>, with probability 1-δ/4T^3 H) ≤ R(s_t,a_t,h) + ∑_i=1^n_t,hα_n_t,h^i ([L^H-hL^k_ℓV^ℓ-k_ℓ,H+1](s_t_i+1) + 4g^k_ℓ+1(t_i+1,h+1)) + 3b_n_t,h (Lemma <ref> (σ≤ 4H^* from Lemma <ref>) gives with probability 1-δ/4T^3 H) ≤ R(s_t,a_t,h) + P(s_t,a_t,h)L^H-hL^k_ℓV^ℓ-k_ℓ,H+1 + 4b_n_t,h+∑_i=1^n_t,hα_n_t,h^i ( 4g^k_ℓ+1(t_i+1,h+1)) = R(s_t,a_t,h)+P(s_t,a_t,h)L^H-hL^k_ℓV^ℓ-k_ℓ,H+1 +4(b_n_t,h+∑_i=1^n_t,hα_n_t,h^i g^k_ℓ+1(t_i+1,h+1)_= g^k_ℓ+1(t,h)) ≤ L^H-h+1L^k_ℓV^ℓ-k_ℓ,H+1+ g^k_ℓ+1(t,h), Therefore, we have that (<ref>) holds for h, (potentially two feasible values of) k_ℓ, and every t in epoch ℓ with probability at least 1-δ/T^3 H. §.§.§ Induction statement 2: Induction hypothesis 2: Fix an epoch ℓ≥ 2, and an h∈{1, …, H}. Assume that with probability at least 1-(ℓ-1) δ/T, inequality (<ref>) holds for all epochs ℓ'≤ℓ-1, h'=1,…,H+1, t∈ℓ' and (potentially two values of) k_ℓ' defined earlier. The base case of this statement is established by applying Induction statement 1 for pair (ℓ-1, 1). Induction step 2: We show that (<ref>) holds for all t ∈ ℓ, h=H+1, and k=k_ℓ∈{ℓ-K⌊ℓ-K-1/K⌋-1, ℓ-K⌊ℓ-1/K⌋-1} for ℓ> K+1, and k=k_ℓ=ℓ-1 otherwise. This statement will provide the base case of inductive statement 1 for pair (ℓ,H+1) for ℓ≥ 2. For h=H+1, k=k_ℓ, (<ref>) reduces to 0 ≤ V^t,H+1 (s_t) - [L^k_ℓ V^ℓ-k_ℓ,H+1](s_t) ≤ 4 g^k_ℓ+1(t,H+1). By construction, for t∈ℓ, we have V^t,H+1 (s_t) = V^ℓ,H+1 (s_t). Also, by definition g^k+1(t,H+1) = G^k(ℓ, s_t), and therefore above statement is same as: for ℓ≥ 2, 0 ≤ V^ℓ,H+1(s_t) - [L^k_ℓ V^ℓ-k_ℓ,H+1](s_t) ≤ 4 G^k_ℓ(ℓ,s_t). In fact, we prove above for all s. First consider the case when N_ℓ-1(s)=0. In this case V^ℓ,H+1(s) does not get updated during epoch ℓ-1, and will take the initial value V^ℓ,H+1(s)=(s)+, where =max{V^Km+1,H+1,V^Km+K+1,H+1} with m= ⌊ℓ-K-1/K⌋. Then, we can use the arguments similar to (<ref>) and (<ref>) to show that the above statement holds trivially, since (s)+ ≥ V^ℓ-k_ℓ,H+1(s) + ≥ max_s' V^ℓ-k_ℓ,H+1(s') -4H^* + ≥ [^k_ℓ V^ℓ-k_ℓ,H+1](s), and using Lemma <ref>, we have (s) + ≤ min_s' V^ℓ-k_ℓ,H+1(s') + 6ζ KH b_0+4H^*+4(H+K) ≤ 4G^k_ℓ(ℓ,s) + [^k_ℓ V^ℓ-k_ℓ,H+1](s) where in the last equality we used that for ℓ,s with N_ℓ-1(s)=0 we defined G^k(ℓ,s)=. Therefore, we can now restrict to the case when N_ℓ-1(s)≥ 1. For such s, and ℓ≥ 2, we have v^ℓ(s) :=1/N_ℓ-1(s)∑_t_i∈ epoch ℓ-1: s_t_i=s1/H∑_h=1^H V^t_i,h(s) And, by algorithm construction, we have V^ℓ,H+1(s) = {[ [ v^ℓ](s), if (ℓ-1 K)=0; v^ℓ(s), otherwise ]. Fix any ℓ≥ 2. First consider ℓ such that ℓ mK+1 for some integer m>0. For such ℓ, V^ℓ,H+1=v^ℓ, and k_ℓ-1=k_ℓ-1 for both the definitions of k_ℓ. Then, since induction hypothesis holds for (ℓ-1, 1≤ h≤ H,), we can use it for V^t_i,h, t_i ∈ ℓ-1 to conclude v^ℓ(s) ≥ 1/N_ℓ-1(s)∑_t_i∈ℓ-1: s_t_i=s1/H∑_h=1^H L^H-h+1L^k_ℓ-1 V^ℓ-1-k_ℓ-1,H+1(s) = [L^k_ℓ V^ℓ-k_ℓ,H+1](s). Similarly, for the upper bound, using the induction hypothesis, we have v^ℓ(s) ≤ 1/N_ℓ-1(s)∑_t_i∈ℓ-1: s_t_i=s1/H∑_h=1^H ([L^H-h+1L^k_ℓ-1 V^ℓ-1-k_ℓ-1,H+1](s) + 4g^k_ℓ-1+1(t_i,h)) = [L^k_ℓ V^ℓ-k_ℓ,H+1](s) + 4G^k_ℓ(ℓ,s). It remains to prove for ℓ of form ℓ=Km+1 for some m≥ 1. In this case we want to prove for k_ℓ=ℓ-(m-1)K-1=K and k_ℓ=ℓ-(m)K-1=0. 
In particular, for such ℓ=Km+1 the statements we need to prove are for k_ℓ=K: 0 ≤ V^ℓ,H+1(s) - [L^K V^ℓ-K,H+1](s) ≤ 4 G^K(ℓ,s) , for k_ℓ=0: 0 ≤ V^ℓ,H+1(s) - [V^ℓ,H+1](s) ≤ 4 G^0(ℓ,s) =0 , The second statement in above is trivial. For the first statement we use the induction statement 1 with k_ℓ-1=ℓ-1 - K⌊ℓ-2/K⌋-1, so that k_ℓ-1 = K-1 = k_ℓ-1. Therefore, we can use induction hypothesis for ℓ-1, h=1,…, H, and (similar to the previous derivation for ℓ mK+1) to obtain, [L^K V^ℓ-K,H+1](s) ≤ v^ℓ(s) ≤[L^K V^ℓ-K,H+1](s) + 4G^K(ℓ,s). Now, for ℓ = Km+1, we have V^ℓ,H+1= v^ℓ. For the upper bound, we use the property v ≤ v (see Lemma <ref> (b)), so that V^ℓ,H+1(s)=[ v^ℓ] (s) ≤ v^ℓ(s). Therefore, the upper bound in above holds for V^ℓ,H+1(s) as well. For the lower bound we use the monotonicity property (v ≥ u, v≥ u) and property that v = v, when (v) ≤ 2H^* (see Lemma <ref> (c,d)). To apply the latter property observe that since ℓ-K is of form Kj+1 for some j, using Lemma <ref>, we have (L^K V^ℓ-K,H+1)≤ 2H^*. And therefore, V^ℓ,H+1 = v^ℓ≥ (L^K) V^ℓ-K,H+1 = L^K V^ℓ-K,H+1. Finally, applying the induction statement 1 for the last epoch (say ℓ=L) and h=1, we obtain the lemma statement with probability at least 1-L/Tδ≥ 1-δ. §.§.§ Inequality (<ref>) about Q-values: Now we extend the above proof to derive statement (<ref>) about Q-values. First consider the case when N^t(s_t,a_t)=0. Then Q^t,h_t(s_t,a_t) = (s_t)+ and g^k+1(t,h_t)=. Therefore, using the same derivation as that of the lower and upper bounds in (<ref>) and (<ref>), we can derive that (<ref>) holds trivially. Assume now that N^t(s_t,a_t)≥ 1. Let k_ℓ denote any value of k for which inequality (<ref>) was proven for epoch ℓ. Note that by the choice of action a_t, we have V^t,h_t(s_t) = max_a Q^t,h_t(s_t,a) = Q^t,h_t(s_t,a_t), where h_t∈{1,…, H}. Therefore, for the lower bound, we can use the inequality (<ref>) in the above derivation to get Q^t,h_t(s_t,a_t) = V^t,h_t(s_t) ≥ R(s_t,a_t)+P(s_t,a_t)L^H-h_tL^k_ℓV^ℓ-k_ℓ,H+1. And the upper bound on Q^t,h_t follows from (<ref>) by substituting h=h_t, a_t,h_t=a_t, so that Q^t,h_t(s_t,a_t) = V^t,h_t(s_t) ≤ R(s_t,a_t)+P(s_t,a_t)L^H-h_tL^k_ℓV^ℓ-k_ℓ,H+1 +4g^k_ℓ+1(t,h_t). §.§ Proof of Lemma <ref> We prove Lemma <ref> restated here for convenience. * Fix an ℓ, t∈ℓ. Define k = {[ ℓ-1, if ℓ≤ K; ℓ-K⌊ℓ-K-1/K⌋-1 ∈ [K, 2K-1], if ℓ≥ K+1 ]. Let v=L^H-h_tL^k V^ℓ-k,H+1. Lemma <ref> proves that V^t,h_t(s_t) ≥ [Lv](s_t) Q^t,h_t(s_t,a_t) ≤ R(s_t,a_t) + P_s_t,a_t· v +4 g^k+1(t,h_t) By algorithm design, we have V^t,h_t(s_t) = max_a Q^t,h_t(s_t,a) = Q^t,h_t(s_t,a_t). Therefore, subtracting the above two inequalities we get R(s_t,a_t) ≥ [Lv](s_t) -P_s_t,a_t v - 4 g^k+1(t,h_t) On the other hand using Bellman equation, we have ρ^* = [LV^*](s_t) - V^*(s_t) = [LV^*](s_t) - P_s_t,a_t V^* + δ_t where we define δ_t:= P_s_t,a_t V^* - V^*(s_t). Subtracting the last two equations, we get ρ^* - R(s_t,a_t) ≤ [LV^*](s_t) - Lv(s_t) -P_s_t,a_t (v- V^*) + 4 g^k+1(t,h_t) + δ_t ≤ 2 (v-V^*) + 4 g^k+1(t,h_t) + δ_t where we used that by the definition of L operator, for any two vectors V,V' see that for any two vectors LV-Lv'≤(V-V'), and also P_s_t,a_t(v-V^*) ≤(v-V^*). Now, substituting the value of v, and using that (V^*-LV^*)=0, along with the span contraction property from Lemma <ref> we can derive (v-V^*) = (L^H-h_tL^k V^ℓ-k,H+1 - V^*) = (L^H-h_tL^k V^ℓ-k,H+1 - L^H-h_tL^k V^*) ≤ (1-p/H)^k(V^ℓ-k,H+1-V^*) ≤ (1-p/H)^k (4H^*) where we used that (V^ℓ-k,H+1) ≤ 4H^* by Lemma <ref>. 
This lemma is applicable since by above definition of k, we have ℓ-k=Kj+1 for some integer j. Substituting the bound on (v-V^*) back in (<ref>), we get ρ^* - R(s_t,a_t) ≤ 8H^*(1-p/H)^k + 4 g^k+1(t,h_t) + δ_t for all ℓ, t∈ℓ. §.§ Proof of Lemma <ref> In this section, we prove Lemma <ref>. The lemma is restated below. * We first prove the following lemma for bounding the summation over one epoch. For any epochs ℓ, h=1,…, H and all k∈[1,ℓ], the following holds ∑_t∈ℓ g^k(t,h) ≤ ∑_j=h^H (1+1/C)^j-h( + ∑_t∈ℓ b_n_t,j) + (1+1/C)^H-h+2(∑_t∈ epoch ℓ-11/H∑_h'=1^H g^k-1(t,h') ). The proof follows by induction on h=H,H-1,… 1. Base Case: consider h=H, then by definition g^k(t,H) = b_n_t,H+ α_n_t,H^0 () + ∑_i=1^n_t,Hα^i_n_t,H G^k-1(ℓ,s_t_i+1). Therefore, ∑_t ∈ℓ g^k(t,H) = ∑_t∈ℓ b_n_t,H+ ∑_t∈ℓ(α_n_t,H^0 () +∑_i=1^n_t,Hα^i_n_t,H G^k-1(ℓ,s_t_i+1) ) ≤ ∑_t∈ℓ b_n_t,H+ ∑_t: t+1∈ℓ(∑_n≥ N^t(s_t,a_t)α_n^N^t(s_t,a_t)) G^k-1(ℓ,s_t+1) + SA (using that for all i, ∑_n≥ iα^i_n ≤ 1+1/C from Lemma <ref> (c)) ≤ ∑_t∈ℓ b_n_t,H+ ∑_t: t+1∈ℓ (1+1/C)G^k-1(ℓ,s_t+1) + SA ≤ ∑_t∈ℓ b_n_t,H+ ∑_t∈ℓ (1+1/C)G^k-1(ℓ,s_t) + SA ≤ ∑_t∈ℓ b_n_t,H+ (1+1/C) ∑_s N^ℓ(s)G^k-1(ℓ,s)+ SA (note that for s such that N_ℓ-1(s)=0, by definition G^k-1(ℓ,s)=) (also by epoch break condition ∑_s:N_ℓ-1(s)=0 N_ℓ(s)≤ 1) = ∑_t∈ℓ b_n_t,H+ (1+1/C) ∑_s:N^ℓ-1(s)>0N^ℓ(s) /N_ℓ-1(s)∑_t∈ℓ-1: s_t=s1/H∑_h=1^H g^k-1(t,h) + SA+ (using the epoch break condition in Algorithm <ref>) ≤ ∑_t∈ℓ b_n_t,H+ (1+1/C)^2 ∑_s∑_t∈ℓ-1: s_t=s1/H∑_h=1^H g^k-1(t,h) + ≤ ∑_t∈ℓ b_n_t,H+ (1+1/C)^2 ∑_t∈ℓ-11/H∑_h=1^H g^k-1(t,h)+. Induction step: Assume (<ref>) holds for h+1. We show that it holds for h. ∑_t∈ℓ g^k(t,h) = ∑_t∈ℓ b_n_t,h + ∑_t∈ℓ(α_n_t,h^0 () + ∑_i=1^n_t,hα^i_n_t,h g^k(t_i+1,h+1)) ≤ ∑_t∈ℓ b_n_t,h + ∑_(t+1)∈ℓ∑_n≥ N^t(s_t,a_t)α^N^t(s_t,a_t)_n g^k(t+1,h+1) + SA (Lemma <ref> (c) gives) ≤ ∑_t∈ℓ b_n_t,h + ∑_(t+1)∈ℓ (1+1/C)g^k(t+1,h+1) + SA ≤ ∑_t∈ℓ b_n_t,h + ∑_t∈ℓ (1+1/C)g^k(t,h+1) + SA (applying induction hypothesis for h+1) ≤ ∑_t∈ℓ b_n_t,h+ (1+1/C) ∑_j=h+1^H (1+1/C)^j-(h+1) ( +∑_t∈ℓ b_n_t,j) + + (1+1/C) (1+1/C)^H-h+1∑_t∈ℓ-11/H∑_h'=1^H g^k-1(t,h') = ∑_j=h^H (1+1/C)^j-h (+∑_t∈ℓ b_n_t,j) + (1+1/C)^H-h+2∑_t∈ℓ-11/H∑_h'=1^H g^k-1(t,h') For any epochs ℓ and all k∈[1,ℓ], the following holds 1/H∑_h=1^H ∑_t∈ℓ g^k(t,h) ≤ (1+1/C)^k(H+2)∑_h=1^H ( k+ ∑_t∈ℓ…, ℓ-k b_n_t,h) Lemma <ref> gives 1/H∑_h=1^H ∑_t∈ℓ g^k(t,h) ≤ (1+1/C)^H∑_h=1^H(+∑_t∈ℓ b_n_t,h) + (1+1/C)^H+2 (∑_t∈ℓ-11/H∑_h'=1^H g^k-1(t,h') ) Upon recursively applying the above for k-1, k-2, …, 1 we get the required statement. 0 Given k_ℓ =min{ℓ-1,K} for each epoch ℓ; and C=. Then, 1/H∑_ℓ∑_t∈ℓ∑_h=1^H g^k_ℓ(t,h) ≤ e K ∑_h=1^H ∑_t=1^T b_n_t,h We sum the statement in the previous corollary over all ℓ while substituting k by k_ℓ in the statement for epoch ℓ. Then, note that in the summation in the right hand side the term (∑_t∈ℓ∑_h=1^H b_n_t,h) for each epoch ℓ can appear multiple times for at most K future epochs: ℓ+1 ≤ℓ' ≤ℓ+K, with coefficients (1+1/C)^k_ℓ' (H+2)≤ e since where k_ℓ'≤ K and C=. Therefore each epoch term is counted at most K times, and we get 1/H∑_ℓ∑_t∈ℓ∑_h=1^H g^k_ℓ(t,h) ≤ K(1+1/C)^∑_ℓ∑_t∈ℓ∑_h=1^H b_n_t,h≤ eK ∑_t=1^T ∑_h=1^H b_n_t,h Lemma <ref> then follows as a corollary of above results. We sum the statement in Corollary <ref> over all ℓ while substituting k by k_ℓ+1 ≤ 2K in the statement for epoch ℓ. Then, note that in the summation in the right hand side, the term (∑_t∈ℓ∑_h=1^H b_n_t,h) for each epoch ℓ can appear multiple times (in the corresponding terms for some ℓ' ≥ℓ such that ℓ≥ℓ'-(k_ℓ'+1)). 
Indeed, because k_ℓ'+1≤ 2K for all ℓ', each epoch term can appear in at most 2K terms which gives a factor 2K, and we get 1/H∑_ℓ∑_t∈ℓ∑_h=1^H g^k_ℓ+1(t,h) ≤ 2K(1+1/C)^2K (H+2)∑_h=1^H ( ∑_ℓ∑_t∈ℓ b_n_t,h + 10ζK^2Hb_0 SA) Finally, we use that (1+1/C)^2K (H+2) = (1+1/C)^C≤ e using C= to get the lemma statement. §.§ Proof of Theorem <ref> We restate the theorem for completeness * Note that for any t∈ℓ, Lemma <ref> gives a per-round regret with k=k_ℓ, where k_ℓ := {[ ℓ-1, for ℓ≤ K; ℓ-K⌊ℓ-K-1/K⌋-1 ∈ [K, 2K], for ℓ≥ K+1 ]. Therefore, using Lemma <ref> with k=k_ℓ for t∈ℓ summing over all ℓ, t∈ℓ, we get (T) = Tρ^* - ∑_t=1^T R(s_t,a_t) ≤ ∑_ℓ, t∈ℓ 8H^*(1-p/H)^k_ℓ + ∑_ℓ, t∈ℓ 4g^k_ℓ+1(t,h_t) + ∑_t (P_s_t,a_tV^*-V^*(s_t)). Since k_ℓ+ 1=ℓ-K⌊ℓ-K-1/K⌋≤ 2K, we can use Lemma <ref> to bound the expected sum of the g^k(·,·) terms in the above. Using Lemma <ref>, we get, [∑_h=1^H ∑_ℓ, t∈ℓ g^k_ℓ+1(t,h)] ≤ 2eK ∑_t=1^T ∑_h=1^H b_n_t,h + . Further using Lemma <ref> we have, ∑_t=1^T ∑_h=1^H b_n_t,h≤ 24HH^*√(SACζ (T+ζ SA)log(8 SAT/δ)). And using Lemma <ref>, we have (note that k_ℓ≥min{ℓ-1, K} where K=), ∑_ℓ, t∈ℓ 8H^*(1-p/H)^k_ℓ≤8H^2H^*/p^2 + 8H^*/T. Further from Lemma <ref> and using |P(s_t,a_t)V^*-V^*(s_t+1)| ≤ H^*, we have with probability at least 1-δ, |∑_t=1^T (P(s_t,a_t) V^*-V^*(s_t))| ≤ |∑_t=1^T-1 (P(s_t,a_t) V^*-V^*(s_t+1))| + H^* ≤ H^*√(Tlog(2/δ)) + H^*. Therefore, [|∑_t=1^T (P(s_t,a_t) V^*-V^*(s_t+1))|] ≤ H^*√(Tlog(2/δ))+δ T + H^*. Substituting back in (<ref>), we get that (T) ≤ 24eKHH^*√(SACζ (T+ζ SA)log(8 SAT/δ)) + H^*√(Tlog(2/δ)) +10H^2H^*/p^2 + 2δ T + 4(), which gives a high probability regret bound with probability 1-δ as, (T) ≤ 24eKHH^*√(SACζ (T+ζ SA)log(8 SAT/δ)) + 10H^2H^*/p^2 + 2 δ T+2H^*√(Tlog(1/δ))+4(). Replacing δ by δ/2T, we complete the proof of the lemma. Therefore, with probability at least 1-δ (T) ≤ 26eKHH^*√(SACζ (T+ζ SA)log(16 SAT/δ)) + 10H^2H^*/p^2 +4(). Then, substituting ζ=CSlog(T) where C=, K=, and b_0=, we obtain that with probability at least 1-δ: (T) = OH^4H^*/p^2S√(ATlog(SAT/δ))log^2(T)+S^2AH^8H^*/p^4.5√(log(SAT/δ))log^4.5(T). § SUPPORTING LEMMA §.§ Properties of Projection Operator Define operator :ℝ^S →ℝ^S as: for any v∈ℝ^S [ v](s) := min{2H^*, v(s)-min_s∈ v(s)} + min_s∈ v(s). We show that this operator satisfies the following properties, which will be useful in our analysis later. For any vector v ∈ℝ^S, (a) ( v)≤ 2H^*, (b) v ≤ v, and (c) for any vector u≤ v, u ≤ v. (d) for any v with (v)≤ 2H^*, v = v. These statements are trivially true by definition of the operator. Fix an s. For any vector v, we say that v(s) is `clipped' if v(s) v(s), i.e., iff v(s) - min_s' v(s') > 2H^*, so that v(s) = 2H^* + min_s' v(s') < v(s). We compare u(s), v(s) by analyzing all possible four cases: * If both u(s) & v(s) are not clipped, then u(s) = u(s) ≤ v(s)=v(s). * If both u(s) & v(s) are clipped, then u(s) = 2H^*+min_s u(s) ≤ 2H+min_s v(s) = v(s). * If u(s) is clipped but v(s) is not clipped, i.e. u(s) < u(s) but v(s)=v(s) then clearly u(s) < u(s) ≤ v(s) = v(s). * Finally, if v(s) is clipped, i.e. v(s) < v(s) but u(s)=u(s), then we have u(s) ≤ 2H^*+min_s u(s) and v(s) = 2H^* +min_s v(s). Therefore, v(s) ≥ u(s). §.§ Some properties of V^ℓ,H+1 In this subsection, we prove some properties of the quantities V^ℓ, H+1, for any ℓ of form ℓ=Kj+1 for some integer j. For any integer j≥ 0, L^H-h+1L^kV^Kj+1,H+1≤ 4H^*, h∈{1, …, H+1}. And, further for k≥ K, we have L^H-h+1L^kV^Kj+1,H+1≤ 2H^* , h∈{1, …, H+1}. Fix an h ∈ 1, …,H+1. 
Consider, L^H-h+1L^kV^Kj+1,H+1 ≤ L^H-h+1L^kV^Kj+1,H+1-V^*+ V^* (from Lemma <ref>) ≤ 1-p/H^kV^Kj+1,H+1-V^* + H^* ≤ V^Kj+1,H+1-V^* + H^* ≤ 4H^*, where in the last inequality we used that either from initialization (j=0) or from projection operation (j≥ 1), we have (V^Kj+1) ≤ 2H^*, so that (V^Kj+1,H+1-V^*) ≤ 3H^*. For k≥ K, we can extend above to obtain L^H-h+1L^kV^Kj+1,H+1 ≤ 1-p/H^kV^Kj+1,H+1-V^* + H^* ≤ 1-p/H^K (3H^*) + H^* ≤ 2H^*, where in the last inequality we used that K= (assuming T ≥ 3 H^*). For all s∈, and any integer m≥ 0, |V^Km+1,H+1(s)-V^Km+K+1,H+1(s)| ≤ 5K(m+1)Hb_0 Fix a state s∈. We prove by induction on m. For the base case, consider m=0. Then, (<ref>) reduces to: |V^1,H+1(s)-V^K+1,H+1(s)| ≤ 5KHb_0. Let ℓ' be the largest epoch such that 1<ℓ'≤ K+1 and N_ℓ'-1(s)=0. If no such ℓ' exists, then in all these epochs, V^ℓ,H+1(s) is an average of V^h(s) at steps where s_t=s, which itself is some weighted average of targets r_t+V^h+1(s_t+1) and b_n_t, where n_t:=N^t(s_t,a_t). Therefore, since rewards are bounded to be in [0,1] and b_n_t≤ b_0, we have a crude upper bound V^h(s)≤ (H-h)+ V^ℓ-1,H+1(s)+b_0 for each h, yielding |V^ℓ,H+1(s)-V^ℓ-1,H+1(s)|≤ H b_0. And applying this for ℓ=2,…, K+1 we have |V^K+1,H+1(s)-V^1,H+1(s)|≤ K H b_0. On the other hand, if such ℓ' exists, then V^ℓ',H+1(s) doesn't get updated and remains at its initial value +(s)= +V^1,H+1(s) by the choice of initialization in the reset step of Algorithm <ref>. And, |V^K+1,H+1(s)- V^1,H+1(s)| ≤ |V^K+1,H+1(s)- V^ℓ',H+1(s)|+|V^ℓ',H+1(s)- V^1,H+1(s)| ≤ (K+1-ℓ')H b_0 + ≤ 5KHb_0. where the second last inequality follows a similar argument as the previous case when no intermediate epochs had N_ℓ(s)=0. Next, for induction, we assume that (<ref>) is true for all 0≤ m'≤ m-1, and prove it for m. Let ℓ' be the largest epoch such that Km+1<ℓ'≤ Km+K+1 and N_ℓ'-1(s)=0. If no such ℓ' exists, then using same reasoning as in the base case, we have, |V^Km+K+1,H+1(s)-V^Km+1,H+1(s)|≤ KHb_0. If such an ℓ' exists, then V^ℓ',H+1 remains as initialized in the reset step, i.e., V^ℓ',H+1(s)=(s)+ with (s)=max{V^Km+1,H+1(s), V^K(m-1),H+1(s)}. And, |V^Km+K+1,H+1(s)-V^Km+1,H+1(s)| ≤ (Km+K+1-ℓ')Hb_0 + +|V^Km+1,H+1(s)-V^K(m-1)+1,H+1(s)| ≤ 5K(H+1)(b_0) + |V^Km+1,H+1(s)-V^K(m-1)+1,H+1(s)| ≤ 5K(m+1)H b_0, where the last inequality is by using the induction hypothesis. §.§ Bonus term summation We have the following, ∑_t=1^T ∑_h=1^H b_n_t,h≤ 24HH^*√(SACζ (T+ζ SA)log(8 SAT/δ)) where n_t,h= N^t(s_t,a_t,h) with a_t,h=max_a Q^t,h(s_t,a), and ζ is an upper bound on the number of epochs (ℓ=1,2,… in the algorithm. From Lemma <ref> We have, ∑_t=1^T ∑_h=1^H b_n_t,h = ∑_t=1^T ∑_h=1^H 24H^*√(Clog(8 SAT/δ)/n_t,h+1) To bound the above, we need a bound on ∑_t=1^T ∑_h=1^H 1/√(n_t,h+1). Let n_t=N^t(s_t,a_t). Then, observe that n_t=n_t,h_t; and given s_t and history before time t, h_t=h w.p. 1/H for all h∈ [1,H]. Therefore, 1/H∑_t=1^T ∑_h=1^H 1/√(n_t,h+1) = ∑_t=1^T [1/√(n_t+1)]. Define N^ℓ(s,a) as the number of visits to (s,a) by the end of the ℓ_th epoch. Now, ∑_t1/√(n_t+1) = ∑_ℓ=1^ζ∑_t∈ℓ1/√(n_t+1) = ∑_ℓ=1^ζ∑_s,a∑_t:s_t=s,a_t=a1/√(n_t+1) ≤ ∑_ℓ∑_s,a∑_i=1^N_ℓ(s,a)+11/√(i) ≤ ∑_ℓ∑_s,a√(N_ℓ(s,a)+1) ≤ √(ζ SA(T+ζ SA)). Substituting back we get, ∑_t=1^T ∑_h=1^H b_n_t,h = 24H^*√(Clog(8 SAT/δ))∑_t=1^T ∑_h=1^H 1/√(n_t,h+1) = 24 HH^*√(Clog(8 SAT/δ))[∑_t=1^T 1/√(n_t+1)] ≤ 24 HH^*√(Clog(8 SAT/δ)) (√(ζ SA(T + ζ SA)). §.§ Bound on the summation ∑_ℓ, t∈ℓ 8H^*(1-p/H)^k_ℓ For k_ℓ≥min{ℓ-1, K} with K ≥, we have the following ∑_ℓ, t∈ℓ 8H^*(1-p/H)^k_ℓ≤8H^2H^*/p^2 + 8H^*/T. 
Recall that denotes the number of epochs ℓ in the algorithm, and τ_ℓ is the number of time steps in epoch ℓ. For ℓ≤ K, we have k=ℓ-1, then ∑_ℓ, t∈ℓ 8H^*(1-p/H)^k_ℓ = 8H^*∑_ℓ=1^K τ_ℓ (1-p/H)^k_ℓ = 8H^*∑_ℓ=1^K τ_ℓ(1-p/H)^ℓ-1 ≤ 8H^*∑_ℓ=1^K (1+p/H)^ℓ-1(1-p/H)^ℓ-1, where we used that τ_ℓ≤ (1+p/H)^ℓ-1 due to the epoch break condition of τ_ℓ≤ (1+1/C) τ_ℓ-1≤ (1+p/H) τ_ℓ-1. 8H^*∑_ℓ=1^∞ (1-p^2/H^2)^ℓ-1≤8H^2H^*/p^2. Using K=, we have (1-p/H)^K ≤ 1/T^2. Therefore, for ℓ >K, ∑_ℓ, t∈ℓ 8H^*(1-p/H)^k_ℓ ≤ ∑_t=1^T8H^*/T^2 ≤ 8H^*/T. §.§ Concentration results For a collection of random trajectory acrrued by Algorithm <ref> G_n = (s_1, a_1, s_2, …, s_n) and a given fixed vector V∈^S with (V) ≤σ and for all i=1,… n such that, [V(s_i+1) | s_1, a_1, …, s_i, a_i] = P_s_i,a_i· V, where n is a stopping time with respect to filtration F_i composed of σ-field generated by G_i and all other randomness until time index of (s_i,a_i), then the following holds with probability at least 1-δ, |∑_i=1^nα_n^iV(s_i+1)-P_s_i,a_i· V| ≤ σ√(2Clog(SAT/δ)/n) ≤ σ√(4Clog(SAT/δ)/n+1). Define x_i = α_n^i(V(s_i+1)-P_s_i,a_i· V) and consider filtration _i. (V)(s_i+1)-P_s_i,a_i· (V)≤ (V). Using Lemma <ref> (b), we have ∑_i^n x_i≤ 2Cσ^2/n. We apply Azuma-Hoeffding inequality (see Lemma <ref> combined with an union bound over all (s,a)∈× and all possible values of n ≤ N^t(s,a) to get the following with probability at least 1-δ, ∑_i=1^n x_i ≤ σ√(2Clog(SAT/δ)/n). We complete the proof by using the observation that 1/n+1≥1/2n. § TECHNICAL PRELIMINARIES The following holds: * 1/√(n)≤∑_i=1^n α^i_n/√(i)≤2/√(n). * max_i∈ nα_n^i ≤2C/t and ∑_i=1^n(α_n^i)^2 ≤2C/t. * ∑_n=i^∞α^i_n ≤ 1+1/C. Let ({A_i,ℱ_i}_i=1^∞) be a martingale difference sequence, and suppose |A_i|≤ d_i almost surely for all i≥ 1. Then for all η≥ 0, ℙ[|∑_i=1^nA_i| ≥η] ≤ 2 exp-2η^2/∑_i=1^n d_i^2. In other words, with probability at most δ, we have, |∑_i=1^nA_i| ≥√(ln2/δ∑_i=1^nd_i^2/2) § PAC GUARANTEE In addition to regret guarantees, our results imply that Algorithm <ref> is also a PAC-learning algorithm. For PAC-guarantee of (ϵ,δ), we seek a policy π such that with probability 1-δ, the policy is ϵ-optimal in the sense that ρ^* - ρ^π≤ϵ. We show that we can use Algorithm <ref> to construct a policy with ϵ, δ)-PAC guarantee using T samples where ϵ = 3(T)/T + O(H √(ζ/Tlog(1/δ))), with ζ being the number of epochs in the algorithm. Substituting ζ=O(H^2 Slog(T)) and the Õ(H^5S√(AT)) regret bound from Theorem <ref>, this provides a way to get (ϵ,δ)-PAC policy using Õ(H^10S^2AT/ϵ^2) samples. The desired policy can simply be constructed at the end of T time steps by picking one out of the T policies π_1,…, π_T used by the algorithm uniformly at random. As we show in Lemma <ref> in the appendix, such a policy π is ϵ-optimal with probability at least 2/3. Then, repeating this experiment 3 log(1/δ) times and picking the best policy (by estimating ρ^π of each policy, which by Lemma 3 from <cit.> can be done efficiently under Assumption <ref>), we can obtain the desired (ϵ,δ)-PAC-guarantee. Let π_1,…π_T denote the policy used by Algorithm <ref> at time step t=1,…, T. Consider the policy π constructed by picking one of these T policies uniformly at random. That is, π∼{π_1,π_2,…,π_T}. Then, with probability at least 2/3, ρ^* - ρ^π≤ϵ, where ϵ = (T)/T+O(H√(ζ Tlog(1/δ)/T)), and ζ=CSlog(T) denotes an upper bound on the number of epochs in Algorithm <ref>. By Lemma <ref>, we have that ρ^π(s_1) = ρ^π(s_2) =: ρ^π for all s_1,s_2 (see <cit.>). 
To prove that π is ϵ-optimal with probability 2/3, we first prove that for any state s, [ρ^π] ≥ρ^*-ϵ/3. Then, by Markov inequality (ρ^*-ρ^π(s) ≥ϵ) ≤1/3, and we get the desired result. Now, observe that by construction of Algorithm <ref>, we have [ρ^π] = 1/T∑_t=1^T ρ^π_t = 1/T∑_ℓ=1^L τ_ℓρ^π_ℓ, where π_ℓ denotes the (stationary) policy used by the algorithm in epoch ℓ. In Lemma <ref>, we show that for every stationary policy the span of bias vector is bounded by 2H under Assumption <ref>. Therefore, applying the result from <cit.> (Lemma 3, restated in our notation as Lemma <ref>) that provides a bound on the difference between asymptotic average and empirical average reward of any stationary policy that has bounded bias, we get that with probability 1-δ, |τ_ℓρ^π_ℓ - ∑_t∈ℓ R(s_t, a_t)| ≤ O(H√(τ_ℓlog(1/δ))) Summing over (at most) ζ epochs, and substituting back in the expression for [ρ^π], we get T [ρ^π] ≥∑_t=1^T R(s_t, a_t) - O(H√(ζ T log(1/δ))) Then, substituting the definition of regret (T)=Tρ^* - ∑_t=1^T R(s_t,a_t), we get the desired bound on [ρ^π]: [ρ^π] ≥ρ^* - (T)/T - O(H√(ζ Tlog(1/δ)/T)) Given any policy π with bias bounded by H, let s_1,a_1,…, s_τ, a_τ denote the states visited and actions taken at time steps k=1,…, τ on running the policy starting from state s_1 for time τ which is a stopping time relative to filtration F_k={s_1,a_1, …, s_k, a_k}. Then, |τρ^π - ∑_k=1^τ R(s_k,a_k)| ≤ 2H√(2τlog(2/δ))
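For reference, a schematic sketch (helper names hypothetical) of the policy-selection procedure from the PAC argument above: run the algorithm, draw one of its T per-step policies uniformly at random, repeat the experiment about 3log(1/δ) times, and keep the candidate with the best estimated average reward.

```python
import numpy as np

# Hypothetical sketch of the PAC policy selection described above; `run_algorithm` and
# `estimate_avg_reward` stand in for running the algorithm for T steps (returning the
# per-step policies pi_1, ..., pi_T) and for estimating rho^pi of a stationary policy.
def pac_policy(run_algorithm, estimate_avg_reward, T, delta, seed=0):
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(int(np.ceil(3 * np.log(1.0 / delta)))):
        policies = run_algorithm(T)                         # independent run of the algorithm
        candidates.append(policies[int(rng.integers(T))])   # uniform draw over pi_1, ..., pi_T
    return max(candidates, key=estimate_avg_reward)         # keep the best estimated candidate
```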
http://arxiv.org/abs/2407.13345v1
20240718094321
Topological Hall effect enhanced at magnetic transition fields in a frustrated magnet EuCd$_2$
[ "S. Nishihaya", "Y. Watanabe", "M. Kriener", "A. Nakamura", "M. Uchida" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.str-el" ]
Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan [Author to whom correspondence should be addressed: ]m.uchida@phys.titech.ac.jp Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan § ABSTRACT Emergent magnetic fields exerted by topological spin textures of magnets lead to an additional Hall response of itinerant carriers called the topological Hall effect (THE). While THE as a bulk effect has been widely studied, THE driven by magnetic domain boundaries (DBs) has been elusive. Here, we report rich Hall responses characterized by multiple peak structures and a hysteresis loop in films of EuCd_2, where Eu layers form a geometrically frustrated lattice of Heisenberg spins. We uncover a THE component sharply enhanced at magnetic transition fields, indicating a giant contribution from non-trivial spin textures possibly formed at the DBs. Topological Hall effect enhanced at magnetic transition fields in a frustrated magnet M. Uchida July 22, 2024 ======================================================================================= Hall effects which are neither proportional to the magnetic field nor to the magnetization have been one of the most vital research topics surrounding quantum transport phenomena in condensed-matter physics. Such non-monotonic Hall responses have been associated mostly with a non-monotonic modulation of the Berry phase rooted in either momentum space or real space. The momentum-space case originates from drastic changes of the band structure accompanied by the formation or shift of singular band features such as Weyl points during the magnetization process, and can be understood in the same framework as the intrinsic anomalous Hall effect (AHE) <cit.>. On the other hand, real-space-based Berry phase is induced when itinerant carriers couple with non-coplanar magnetic orderings <cit.>. As exemplified by skyrmion phases realized in chiral and geometrically-frustrated magnets <cit.>, a non-coplanar arrangement of spins is characterized by the scalar spin chirality defined by the solid angle spanned by three spins (S_1·S_2×S_3). The scalar spin chirality acts as an emergent magnetic field on charged carriers, leading to additional contributions to the Hall effect usually termed as topological Hall effect (THE). In addition to the above-mentioned Berry-phase origins, recent studies have also revealed that the extrinsic skew scattering can lead to an additional giant Hall response when combined with local scalar spin chirality of fluctuating spins, overcoming the upper limit of the Hall conductivity set by the intrinsic band structure <cit.>. Thus, the observation of the non-monotonic AHE or THE provides experimental evidence of unique electronic and magnetic structures as well as novel scattering mechanisms. While the non-monotonic AHE and THE as bulk effects within a single magnetic domain have been widely reported in various systems, the observation of Hall contributions from a domain boundary (DB) has been rare. DBs in magnets generally serve as an additional scattering source or conduction path depending on the details of the magnetic orderings <cit.>, and are expected to give finite contributions to the Hall responses when coupled with a non-zero Berry phase. 
In ferromagnets, for example, conventional DBs between the ferromagnetic domains exhibit topologically trivial spin textures leading to no THE, whereas chiral domain walls (skyrmion bubbles) induced by interfacial Dzyaloshinskii-Moriya (DM) interaction in films and heterointerfaces, or topological defects such as vertical Bloch lines are predicted to contribute to finite THE <cit.>. When the system possesses Weyl points near the Fermi level, enhanced skew-scattering of Weyl fermions at the DB is proposed to lead to a giant AHE even comparable to the bulk contribution <cit.>. The recent observation of a non-monotonic AHE accompanied by a unique hysteresis loop in the ferromagnetic Weyl semimetal CeAlSi and related compounds have been attributed to the scattering at DBs <cit.>. However, DB contributions in magnets with more complex spin orderings such as non-colinear or non-coplanar configurations have been elusive, except for one known example of pyrochlore-type antiferromagnets where the all-in-all-out ordering forms conductive DBs responsible for unconventional AHE <cit.>. The DBs in systems with complex magnetic orderings can lead to unique spin configurations formed locally, and thus serve as a fertile playground for exploring novel Hall responses. In this Letter, we demonstrate a giant THE triggered by magnetic transitions in films of a frustrated magnet . takes the -type hexagonal structure, which can be viewed as a lower-symmetry derivative of the -type structure with an alternate stacking of triangular and honeycomb lattices <cit.>. In the -type structure of , the Eu triangular lattice is anisotropically distorted in the ac-plane and the Cd honeycomb lattice is buckled along the b-axis as shown in Fig. 1. Several - and -type compounds with Heisenberg-type spins located at the triangular sites have been known to host non-colinear/non-coplanar orderings such as the skyrmion phase in Gd_2PdSi_3 <cit.>, the spiral spin texture in EuZnGe <cit.>, or the non-colinear antiferromagnetic ordering in GdCu_2 <cit.>. For , on the other hand, only magnetization measurements on polycrystalline samples have been reported so far, suggesting a possible antiferromagnetic ordering <cit.>. Here, we reveal that exhibits rich non-monotonic Hall responses, including THE characterized by multiple peak structures and a hysteresis loop. In particular, one of the THE components is sharply enhanced at the magnetic transition, indicating a Hall contribution derived from possible non-trivial spin configurations formed at the magnetic DBs. Thin films of were epitaxially grown on (0001) substrates by molecular beam epitaxy. The growth temperature was 350^∘C. Eu and Cd were supplied by an effusion cell with a Cd-rich flux ratio (Cd/Eu = 15-44) due to the highly volatile nature of Cd as compared to Eu. The film thickness is designed to be 60 nm. As summarized in the Supplemental Material<cit.>, structural characterization by x-ray diffraction (XRD) reveals that the b-axis of is oriented along the surface normal, while the ac-plane Cd honeycomb lattice is aligned to that of the c-plane. Depending on the orientation of the distorted Eu hexagons with respect to the substrate, there are three types of in-plane domains as confirmed by reciprocal space map measurements <cit.>. Figure 1(c) shows a cross sectional image of the film taken along the a-axis by high-angle annular dark-field scanning transmission electron microscopy. 
The periodic arrangement of Eu and Cd atoms characteristic to the -type structure is clearly resolved. Figures 1(d)-1(i) summarize magnetization and transport properties of the films. The temperature dependence of the magnetization measured with the magnetic field applied out-of-plane (B_out) and in-plane (B_in, parallel to [112̅0]) exhibits an onset of antiferromagnetic order at = 37 K which is consistent with the previous report on polycrystalline bulk samples <cit.>. Below , the magnetization M keeps increasing, indicating that the magnetic structure is not a simple colinear antiferromagnetic ordering. Magnetization curves measured as functions of B_out and B_in at 2 K are presented in Fig. 1(f). develops ferromagnetic moments at lower temperature as indicated by a small hysteresis loop around zero field as seen in the inset. This ferromagnetism is suppressed above 20 K (see Supplemental Material <cit.>). While the magnetization monotonically increases when the field is applied in-plane, a metamagnetic transition is observed around 1.3 T when the field is applied along the out-of-plane hard axis. The metamagnetic transition is observed up to irrespective of the low-temperature out-of-plane ferromagnetism, implying the occurrence of a spin reorientation or a spin-flop transition from the antiferromagnetic ground state. The temperature dependence of the resistivity presented in Fig. 1(e) exhibits a kink structure at reflecting that the metallic conduction is strongly coupled with the localized spins of Eu^2+. Figures 1(g) and 1(h) show longitudinal resistivity and Hall resistivity data measured at 2 K in out-of-plane fields. exhibits a peak at the field where the metamagnetic transition occurs, and further increasing B_out leads to a negative magnetoresistance until the spins are fully polarized at around 3.8 T. During such a magnetization process, exhibits rich peak features with a hysteresis loop in stark contrast to a rather monotonic change of M against B_out in Fig. 1(f). Figure 1(i) displays the non-monotonic Hall signal after subtracting the sum of the ordinary Hall term and the M-proportional anomalous Hall term . We note that the longitudinal conductivity of lies in the so-called intrinsic region of the conventional AHE scaling plot <cit.>, and here is calculated via r_sρ_xx^2M with r_s being determined as a fitting parameter. As discussed below, the non-monotonic Hall signal in films consists mainly of THE originating from the real-space spin configuration, and hereafter we denote it as . Figure 2(a) presents a color map of as a function of temperature and out-of-plane field (for the Hall resistivity data at different temperatures, see also the Supplemental Material <cit.>). For clarity, we label the phases below (above) the metamagnetic transition around 1.3 T as Phase I (II) and the forced-ferromagnetic phase above the saturation field around 3.8 T as FM, and the magnetic phase boundaries separating different phases are indicated by dotted lines which are drawn based on the results of the magnetization (marked by solid triangles)and magnetoresistance measurements. Importantly, the Hall response of the films is characterized by two unique features. Firstly, there are multiple positive and negative peaks appearing on both sides across zero field as indicated by circles with labels P1, P2, and P3 in Figs. 2(b) and 2(c). These peak structures of start to develop clearly below , reflecting their magnetic origin. 
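For concreteness, the decomposition used here amounts to ρ_yx(B) = R_0B + r_sρ_xx(B)^2M(B) + ρ_yx^T(B). A minimal sketch of such an extraction is given below; the least-squares determination of R_0 and r_s from the field-polarized region is our own simplifying assumption and need not match the actual fitting procedure.

```python
import numpy as np

# Illustrative sketch (not the authors' analysis code) of extracting the topological Hall
# term: rho_yx = R0*B + r_s*rho_xx**2*M + rho_yx_T.  R0 and r_s are fitted here by least
# squares on the field-polarized region, where rho_yx_T is assumed to vanish.
def extract_topological_hall(B, rho_yx, rho_xx, M, saturated):
    # B, rho_yx, rho_xx, M: 1D arrays over magnetic field; `saturated` is a boolean mask
    X = np.column_stack([B[saturated], (rho_xx[saturated] ** 2) * M[saturated]])
    coeffs, *_ = np.linalg.lstsq(X, rho_yx[saturated], rcond=None)
    R0, r_s = coeffs
    rho_yx_A = r_s * rho_xx ** 2 * M            # M-proportional anomalous Hall term
    rho_yx_T = rho_yx - R0 * B - rho_yx_A       # residual attributed to the topological Hall effect
    return rho_yx_T, R0, r_s
```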
Moreover, among the peaks, the P1 peak appears sharply at the metamagnetic transition field (marked with a solid triangle), and this tendency is observed in the entire temperature range below . A Hall resistivity with multiple peaks has been identified as non-monotonic AHE for various Eu-based magnetic semimetals and semiconductors <cit.>. In those systems, the continuous change of the momentum-space Berry curvature is induced mainly by the formation of Weyl points or their shifting with respect to the Fermi energy during the magnetization process. However, this is not the case for . As shown in the Supplemental Material <cit.>, exhibits a sign change of AHE with developing the out-of-plane ferromagnetism below 20 K. This observation itself is interesting because it originates from a drastic modulation of the momentum-space band structure taking place in the presence of ferromagnetism in , which may imply the appearance of topologically non-trivial band features near the Fermi level. We note that a similar temperature-dependent sign change of AHE has been reported for the ferromagnetic Weyl metal SrRuO_3 <cit.>. Focusing on the non-monotonic Hall term, on the other hand, the overall peak structures of P1, P2, and P3 remain unchanged even across 20 K <cit.>. The absence of sign changes for those peaks indicates that they do not share the same origin as the intrinsic AHE. Therefore, we can reasonably conclude that the non-monotonic Hall effect is originating in the real-space spin Berry phase rather than in momentum space, and hence, it can be ascribed to THE (see also Supplemental Material <cit.> for additional discussions). Having clarified the origin of the unique Hall responses as topological spin structures, we now focus on the second feature of in , i.e., the pronounced hysteresis loop. Generally, the spin chirality of field-induced spin textures changes its sign upon field reversal <cit.>, leading to a field-antisymmetric THE ((B)=-(-B)). A hysteretic THE has been reported only in special cases such as hysteretic formation of skyrmions in FeGe thin films <cit.>, transitions between skyrmion and anti-skyrmion phases in Mn_2RhSn <cit.>, and ferromagnet-based systems with interfacial Dzyaloshinskii-Moriya interaction <cit.>. In , on the other hand, ferromagnetism develops below 20 K, and the hysteretic behaviour of can be straightforwardly associated with the polarized out-of-plane moments. However, we should emphasize that the hysteresis cannot be explained solely by the additional AHE induced by the out-of-plane ferromagnetic moments. As seen in Fig. 2(b), the hysteresis loop is particularly enlarged around the P1 THE peak due to its asymmetric appearance depending on the field scan directions; in the field-decreasing (increasing) sweep at 2 K, the P1 peak is suppressed on the positive (negative) field side, and it is sharply enhanced on the negative (positive) field side, respectively. This highlights the sensitive coupling between the P1 THE peak and the remanent ferromagnetic moment. Interestingly, such coupling between THE and ferromagnetism is not observed for the other THE peaks (P2 and P3), which appear on both sides of the field regardless of the scan direction and the metamagnetic transition. These two different THE components can be also distinguished by measuring their field-angle dependence <cit.>. 
We observe that the amplitude of the P1 peak is quickly diminished as the metamagnetic transition is suppressed with tilting the field toward the in-plane direction, while the P2 peak survives up to high tilting angles (see the Supplemental Material <cit.> for details). From these findings, the complex Hall responses of can be separated into three components as illustrated in Figs. 2(d) and 2(e); (i) the THE component appearing without hysteresis before the full polarization of the moments along the out-of-plane direction has been reached (P2 and P3), (ii) the THE component which is sharply enhanced at the metamagnetic transition with coupling to the ferromagnetism (P1), and (iii) the ferromagnetism-induced additional AHE component appearing below 20 K. As discussed in the following, we propose that the former THE component represents the bulk contribution and the latter component the DB contribution. To clarify the DB-derived nature of the P1 THE peak, we have performed minor loop measurements. Figure 3 presents the Hall term - measured by scanning the field from -5 T to a certain maximum value B_max and then back to -5 T. B_max is varied from -2.2 T to 2.2 T so that the minor loops cover the peak and hysteretic features of . To evaluate the hysteretic behavior, we have also extracted the loop term defined as the difference between the field-increasing sweep and the field-decreasing sweep as shown in Figs. 3(d)-3(f). corresponds to the summation of the hysteresis contribution from the P1 THE term and the additional AHE term as illustrated in Fig. 2(d). A striking observation is that there is a region of negative in the minor loops of B_max = 0.9, 1.3, 1.5, and 1.6 T as shown in the inset in Figs. 3(e) and 3(f). The negative indicates an increase of THE during the field-decreasing sweep as compared to the field-increasing sweep. In particular, the amplitude of negative is the largest for B_max = 1.3 T where the magnetic field is reversed during the metamagnetic transition. These observations reveal that the promoted formation of DBs by the minor loop scans leads to a larger amplitude of THE, highlighting the THE contribution of the DBs. We also note that the constant discrepancy (indicated by a two-headed arrow) in the peak amplitude between the full loop and the minor loops with B_max = 0, 0.5, 0.9, and 1.3 T corresponds to the term. is suppressed in the minor loops for B_max > 1.3 T as shown in Fig. 3(c), which means that the metamagnetic transition also promotes the reversal of the out-of-plane moments. To further demonstrate the DB-driven nature of the P1 THE peak, we have also examined its dependence on different field cooling processes, which can effectively modulate the DB density. Figure 4 presents - measured after experiencing two different field cooling paths from above to 2 K; in Path 1 the out-of-plane field was gradually increased from 0 T at to at 2 K, while in Path 2 the constant field was applied from above to 2 K as shown in the top panels. After reaching at 2 K, field scans towards +5 T and -5 T were performed. - of the = 0 T case presented in Fig. 4(a) reflects the initial magnetization curve after zero field cooling, and exhibits the P1 peak on both positive and negative field sides, in contrast to the full loop shown in gray for comparison. This can be interpreted to reflect the absence of the ferromagnetic component suppressing the DB-driven THE under zero field cooling.
For evaluating the modulation of DB-driven THE for each we have specifically taken the change of the Hall resistivity from that of the = 0 T case (-_,B_FC = 0 T) as presented in the lower panels. When is much lower than the metamagnetic transition field around 1.3 T such as in the case of = 0.7 T in Fig. 4(b), field cooling along both Path 1 and Path 2 results in an enhanced amplitude of the P1 peak. When is 1.3 T, on the other hand, Path 1 and Path 2 exhibit a contrasting behaviour. As shown in Fig. 4(c), Path 1 for = 1.3 T following closely along the magnetic phase boundary between I and II leads to significant enhancement of the P1 peak, which even exceeds that of the full loop. In contrast, Path 2 following the outside of the phase boundary results in suppression of the P1 peak. These observations in the = 1.3 T case clearly indicate the strong relevance of THE to the DB density formed during different field cooling paths in the films. Further increase of strengthens the development of the out-of-plane moments, suppressing DB-driven THE for both Path 1 and Path 2 as shown in Fig. 4(d) (see also the Supplemental Material <cit.> for the results of other cases, and also the field cooling dependence of the magnetization curve). Finally, we would like to discuss the possible origin of the THE signals in . THE appears as a bulk effect when non-coplanar spin textures with finite scalar spin chirality are stabilized by the external field. If we simply assume a non-colinear ordering such as 120^∘-spin ordering for the Eu triangular lattice of , however, the field-induced spin canting generates scalar spin chirality only locally and cancels it out globally due to the contribution of opposite signs from the adjacent triangles. To realize a non-vanishing scalar spin chirality as a bulk effect in a triangular lattice system, either strong coupling to the spin-orbit interaction or an incommensurate non-coplanar ordering is required to break the balance of this cancellation <cit.>. As proposed in the theory <cit.> and experimentally verified by the realization of a skyrmion phase in -type <cit.>, the presence of further neighbor interactions on a triangular network of Heisenberg-type spins can lead to incommensurate spiral textures with multiple-Q vectors. It is worth noting that the nearest-neighbor Eu sites in -type are the interlayer Eu-Eu sites rather than the intralayer triangular sites. The ferromagnetism below 20 K also indicates the presence of ferromagnetic interaction between some Eu-Eu sites. Altogether, it is highly likely that not just a simple two-dimensional antiferromagnetic ordering within the triangular lattice plane but a more complex three-dimensional non-coplanar ordering is realized in , accounting for the observations of the P2 and P3 THE peaks. As opposed to the bulk THE, the particularly sharp THE at the metamagnetic transition is attributed to the contribution from the magnetic DBs. In contrast to the bulk scalar spin chirality on the triangular lattice, that of the local DBs is expected to survive owing to the broken symmetry. The fact that the peak amplitude of P1 is suppressed when the out-of-plane moments remain polarized under higher fields, and that it is enhanced when the moments are weakened at the switching field, proves that the spin configuration realized at the DBs is crucial for the appearance of the THE. Such a dependence of THE on ferromagnetism actually resembles the THE observed in ferromagnet-based heterostructures with interfacial DM interaction <cit.>.
There, a THE peak structure appears only at the coercive field of the ferromagnetic layer, where the DM interaction comes in to realize chiral DBs or skyrmions. Since interfacial or bulk DM interaction is not expected for the present case, we speculate that instead the presence of frustration-induced non-coplanar spin textures in plays a vital role in realizing non-trivial DBs hosting finite scalar spin chirality. In summary, we have succeeded in the film growth of the -type frustrated magnet and have revealed its rich Hall responses, including a temperature-dependent sign change of AHE and multiple THE peaks accompanied by a pronounced hysteresis loop. One of the THE peaks is sharply enhanced at the metamagnetic transition field, indicating the formation of magnetic DBs hosting finite scalar spin chirality. Importantly, compared to the bulk AHE and THE, the DB contribution appears as the leading term. For the formation of non-trivial DBs which give such a dominant Hall response, it is expected that the presence of possible non-coplanar spin textures within the frustrated Eu network is essential. The determination of the actual magnetic structure realized in is highly desired to further clarify its unique Hall responses. We thank M. Kawasaki for fruitful discussions and also the help in parts of the low-temperature magnetotransport measurements. We also thank N. Kanazawa, H. Ishizuka, H. Oike, H. Sakai, T. Nakajima, Y. Yamasaki, K. Matsuura, and F. Kagawa for fruitiful discussions. This work was supported by JST FOREST Program grant no. JPMJFR202N, Japan; and by Grant-in-Aids for Scientific Research JP21H01804, JP22H04471, JP22H04501, JP22K18967, JP22K20353, and JP23K13666 from MEXT, Japan. 100 AHE M. Onoda and N. Nagaosa, J. Phys. Soc. Jpn. 71, 19-22 (2002). AHE_scaling S. Onoda, N. Sugimoto, and N. Nagaosa, Phys. Rev. B 77, 165103 (2008). THE K. Ohgushi, S. Murakami, and N. Nagaosa, Phys. Rev. B 62, R6065(R) (2000). Nd2Mo2O7 Y. Taguchi, Y. Oohara, H. Yoshizawa, N. Nagaosa, and Y. Tokura, Science 291, 2573-2576 (2001). MnSi S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Böni, Science 323, 915-919 (2009). MnGe N. Kanazawa, Y. Nii, X.-X. Zhang, A. S. Mishchenko, G. De Filippis, F. Kagawa, Y. Iwasa, N. Nagaosa, and Y. Tokura, Nat. Commun. 7, 11622 (2016). FeGe X. Z. Yu, N. Kanazawa, Y. Onose, K. Kimoto, W. Z. Zhang, S. Ishiwata, Y. Matsui, and Y. Tokura, Nat. Mater. 10, 106-109 (2011). GdPdSi T. Kurumaji, T. Nakajima, M. Hirschberger, A. Kikkawa, Y. Yamasaki, H. Sagayama, H. Nakao, Y. Taguchi, T. Arima, and Y. Tokura, Science 365, 914-918 (2019). EuAl4 R. Takagi, N. Matsuyama, V. Ukleev, L. Yu, J. S. White, S. Francoual, J. R. L. Mardegan, S. Hayami, H. Saito, K. Kaneko, K. Ohishi, Y. O̅nuki, T. Arima, Y. Tokura, T. Nakajima, and S. Seki, Nat. Commun. 13, 1472 (2022). Chiralityscattering H. Ishizuka and N. Nagaosa, Sci. Adv. 4, eaap9962 (2018). MnGe_scattering Y. Fujishiro, N. Kanazawa, R. Kurihara, H. Ishizuka, T. Hori, F. S. Yasin, X. Yu, A. Tsukazaki, M. Ichikawa, M. Kawasaki, N. Nagaosa, M. Tokunaga, and Y. Tokura, Nat. Commun. 12, 317 (2021). EuAs M. Uchida, S. Sato, H. Ishizuka, R. Kurihara, T. Nakajima, Y. Nakazawa, M. Ohno, M. Kriener, A. Miyake, K. Ohishi, T. Morikawa, M. S. Bahramy, T. Arima, M. Tokunaga, N. Nagaosa, and M. Kawasaki, Sci. Adv. 7, eabl5381 (2021). DWscattering P. M. Levy, and S. Zhang, Phys. Rev. Lett. 79, 5110 (1997). NIO E. Y. Ma, Y.-T. Cui, K. Ueda, S. Tang, K. Chen, N. Tamura, P. M. Wu, J. Fujioka, Y. 
Tokura, and Z.-X. Shen, Science 350, 538-541 (2015). EIO T. C. Fujita, M. Uchida, Y. Kozuka, W. Sano, A. Tsukazaki, T. Arima, and M. Kawasaki, Phys. Rev. B 93, 064419 (2016). Bubble C. Moutafis, S. Komineas, and J. A. C. Bland, Phys. Rev. B 79, 224429 (2009) DWTHE K.-J. Kim, M. Mochizuki, and T. Ono, Appl. Phys. Express 12, 053006 (2019). DWWeyl S. Sorn and A. Paramekanti, Phys. Rev. B 103, 104413 (2021). CeAlSi_1 H.-Y. Yang, B. Singh, J. Gaudet, B. Lu, C.-Y. Huang, W.-C. Chiu, S.-M. Huang, B. Wang, F. Bahrami, B. Xu, J. Franklin, I. Sochnikov, D. E. Graf, G. Xu, Y. Zhao, C. M. Hoffman, H. Lin, D. H. Torchinsky, C. L. Broholm, A. Bansil, and F. Tafti, Phys. Rev. B 103, 115143 (2021). CeAlSi_2 M. M. Piva, J. C. Souza, V. Brousseau-Couture, S. Sorn, K. R. Pakuszewski, J. K. John, C. Adriano, M. Côté, P. G. Pagliuso, A. Paramekanti, and M. Nicklas, Phys. Rev. Res. 5, 013068 (2023). CeAlGe2 X. He, Y. Li, H. Zeng, Z. Zhu, S. Tan, Y. Zhang, C. Cao, and Y. Luo, Sci. China-Phys. Mech. Astron. 66, 237011 (2023). DWAHE W. J. Kim, J. H. Gruenewald, T. Oh, S. Cheon, B. Kim, O. B. Korneta, H. Cho, D. Lee, Y. Kim, M. Kim, J.-G. Park, B.-J. Yang, A. Seo, and T. W. Noh, Phys. Rev. B 98, 125103 (2018). AlB2_strc R.-D. Hoffman and R. Pöttgen, Z. Kristallogr. 216, 127-145 (2001). EuZnGe T. Kurumaji, M. Gen, S. Kitou, H. Sagayama, A. Ikeda, and T. Arima, Phys. Rev. Mater. 6, 094410 (2022). GdCu2_1 M. Rotter, A. Lindbaum, E. Gratz, G. Hilscher, H. Sassik, H. E. Fischer, M. T. Fernandez-Diaz, R Arons, and E. Seidl, J. Mag. Mag. Mater. 214, 281-290 (2000). EuCd2 K. H. J. Buschow, and F. J. van Steenwijk, Physica B 85, 122-126 (1977). SM See Supplemental Material at SM-URL for details about the reciprocal space mapping measuremements, temperature dependent measurements of the magnetization and the Hall resistivities, field-angle dependent measurements,field cooling dependence measurements, and additional discussion on the origin of the Hall response of the films. The Supplemental Material also includes Refs. <cit.>. S1 G. Kimbell, C. Kim, W. Wu, M. Cuoco, and J. W. A. Robinson, Commun. Mater. 3, 19 (2022). S2 D. Kan, T. Moriyama, K. Kobayashi, and Y. Shimakawa, Phys. Rev. B 98, 180408(R) (2018). EuTiO3 K. S. Takahashi1, H. Ishizuka, T. Murata, Q. Y. Wang, Y. Tokura, N. Nagaosa, and M. Kawasaki, Sci. Adv. 4, eaap7880 (2018). EuP3 A. H. Mayo, H. Takahashi, M. S. Bahramy, A. Nomoto, H. Sakai, and S. Ishiwata, Phys. Rev. X 12, 011033 (2022). EuCd2As2 X. Cao, J.-X. Yu, P. Leng, C. Yi, X. Chen, Y. Yang, S. Liu, L. Kong, Z. Li, X. Dong, Y. Shi, M. Bibes, R. Peng, J. Zang, and F. Xiu, Phys. Rev. Res. 4, 023100 (2022). ECS M. Ohno, S. Minami, Y. Nakazawa, S. Sato, M. Kriener, R. Arita, M. Kawasaki, and M. Uchida, Phys. Rev. B 105, L201101 (2022). EuMg2Bi2 M. Kondo, M. Ochi, R. Kurihara, A. Miyake, Y. Yamasaki, M. Tokunaga, H. Nakao, K. Kuroki, T. Kida, M. Hagiwara, H. Murakawa, N. Hanasaki, and H. Sakai, Phys. Rev. B 107, L121112 (2023). SrRuO3 Z. Fang, N. Nagaosa, K. S. Takahashi, A. Asamitsu, R. Mathieu, T. Ogasawara, H. Yamada, M. Kawasaki, Y. Tokura, and K. Terakura, Science 302, 92-95 (2003). FeGefilm N. Kanazawa, M. Kubota, A. Tsukazaki, Y. Kozuka, K. S. Takahashi, M. Kawasaki, M. Ichikawa, F. Kagawa, and Y. Tokura, Phys. Rev. B 91, 041122(R) (2015). Mn2RhSn P. K. Sivakumar, B. Göbel, E. Lesne, A. Markou, J. Gidugu,J. M. Taylor, H. Deniz, J. Jena, C. Felser, I. Mertig, and S. S. P. Parkin, ACS Nano 14, 13463-13469 (2020). SROSIO J. Matsuno, N. Ogawa, K. Yasuda, F. Kagawa, W. Koshibae, N. Nagaosa, Y. Tokura, and M. 
Kawasaki, Sci. Adv. 2, e1600304 (2016). MagTI K. Yasuda, R. Wakatsuki, T. Morimoto, R. Yoshimi, A. Tsukazaki, K. S. Takahashi, M. Ezawa, M. Kawasaki, N. Nagaosa, and Y. Tokura, Nat. Phys. 12, 555-559 (2016). MagTI2 J. Jiang, D. Xiao, F. Wang, J.-H. Shin, D. Andreoli, J. Zhang, R. Xiao, Y.-F. Zhao, M. Kayyalha, L. Zhang, K. Wang, J. Zang, C. Liu, N. Samarth, M. H. W. Chan, and C.-Z. Chang, Nat. Mater. 19, 732-737 (2020). MagTI3 W. Wang, Y.-F. Zhao, F. Wang, M. W. Daniels, C.-Z. Chang, J. Zang, D. Xiao, and W. Wu, Nano Lett. 21, 1108-1114 (2021). MultiQ T. Okubo, S. Chung, and H. Kawamura, Phys. Rev. Lett. 108, 017206 (2012).
http://arxiv.org/abs/2407.13366v1
20240718101829
Circumbinary Disk Spectra Irradiated by Two Central Accretion Disks in a Binary Black Hole System
[ "Yunewoo Lee", "Atsuo T. Okazaki", "Kimitake Hayasaki" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
Kimitake Hayasaki yunewoo.lee@chungbuk.ac.kr, kimi@chungbuk.ac.kr Department of Astronomy and Space Science, Chungbuk National University, Republic of Korea Hokkai-Gakuen University, Toyohira-ku, Sapporo 062-8605 Japan Department of Astronomy and Space Science, Chungbuk National University, Republic of Korea Department of Physical Sciences, Aoyama Gakuin University, Sagamihara 252-5258, Japan § ABSTRACT We study the effect of irradiation from two accretion disks (minidisks) around respective black holes of stellar to intermediate masses in a circular binary on the spectrum of a circumbinary disk (CBD) surrounding them. We assume the CBD to be a standard disk and adopt the orbit-averaged irradiation flux because the viscous timescale is much longer than the orbital period. We then solve the energy equation both analytically and numerically to compute the CBD temperature distribution and the corresponding disk spectrum. We find that the analytically calculated spectra are in good agreement with the numerical ones. The CBD spectrum is almost independent of the binary mass ratio. We also find that the combined spectra of two minidisks and the CBD have double peaks, one peak in the soft X-ray band and the other in the infrared (IR) band. The former peak comes from the two minidisks, while the latter peak from the CBD. The observed flux density increases with frequency as ν^1/3 towards the soft X-ray peak, while it decreases with frequency away from the IR peak as ν^-5/3. The latter feature is testable with near-IR observations with Subaru and JWST. § INTRODUCTION Binary black holes (BBHs) are composed of two black holes in a binary system. They are of particular interest in astrophysics because they provide key insights into black hole mergers and the growth from a seed black hole to a supermassive black hole <cit.>. The gravitational interactions within BBHs lead to the emission of gravitational waves, as predicted by general relativity (GR), and their detection has been a major breakthrough in observational astronomy <cit.>. BBHs are typically formed by the stellar collapse of massive binary systems and by dynamical interactions in dense stellar environments such as globular clusters <cit.>. Sub-solar-mass BBHs can be formed by three-body interactions between the primordial black holes <cit.>. The entire binary system is surrounded by the circumbinary disk (CBD), which orbits the common center of mass of the two black holes. The tidal-resonant interactions between the CBD and the binary black hole lead to the formation of gaps or cavities within the CBD <cit.>, and lead to the orbital decay of the BBH by angular momentum transfer <cit.>. In addition, the CBD material falls inward from the two points at the inner edge of the CBD, forming a circumprimary disk (CPD) around the more massive (primary) black hole and a circumsecondary disk (CSD) around the less massive (secondary) black hole, and eventually accretes onto the black holes through these minidisks (e.g., ). The thermal emission from the CBD has been studied by <cit.> based on the long-term evolution of the CBD, and <cit.> has studied the non-thermal X-ray emission due to the shock formed by the interaction between the CBD and two minidisks. The disk-binary interaction can also cause periodic accretion <cit.>. 
Recently, GR effects have also been found to give the light curve variations of BBH systems: a special relativistic Doppler boost in the emission from each minidisk of a rapidly orbiting binary at relativistic speeds <cit.>, the quasi-periodic modulation of the structure of two minidisks in a coalescing black hole <cit.>, and the GR precession of the semi-major axis of the binary <cit.>. These signals can help to identify BBH candidates in electromagnetic surveys. The study of X-ray self-irradiated disks has advanced our understanding of the accretion dynamics and radiative processes of binary systems. The pioneering work of <cit.> explored the effects of X-ray irradiation on accretion disks and established a basic model for how X-ray photons from a central source of the disk can heat the outer regions of the disk, thereby influencing its emission properties. <cit.> and <cit.> extended this model to include the effects of self-irradiation, where X-rays emitted by the disk itself are reprocessed by the disk material, leading to thermal and emission profiles that differ from the standard disk spectrum (see also for a review). Subsequent studies, such as that of <cit.>, have provided detailed analyses of the light curves of soft X-ray transients, showing that self-irradiation can significantly alter the observational properties of these systems. Furthermore, <cit.> studied the Galactic supersoft X-ray source RX J0019.8+2156 and showed that reprocessed radiation from the accretion disk and companion star could account for the observed fluxes in UV and optical wavelengths, highlighting the role of irradiation in shaping the spectral energy distributions of such systems. Recently, a long-term multi-wavelength study of the microquasar GRS 1915+105 revealed that the soft X-ray photons coming from the inner region of the disk are reprocessed to thermalize the outer part of the disk <cit.>. It has also been argued that, in ultraluminous X-ray sources, the X-ray flux from the inner source is reprocessed in the outer regions of the accretion disk, dominating the spectrum at optical and UV wavelengths (e.g., ) While supermassive BBHs naturally form CBDs in the galactic nuclei, BBHs composed of stellar-mass black holes and intermediate-mass black holes (IMBHs) must exist in a gaseous environment to accompany the CBDs <cit.>. If such BBHs were to plunge into a gas cloud, such as a molecular cloud, they would end up with a triple disk, consisting of two minidisks and a CBD surrounding them. In this case, X-ray photons from the CPD and the CSD would irradiate the outer part of the CBD, modifying the emission properties of the CBD. Figure 1 shows a schematic representation of X-ray irradiation of the outer part of the CBD from near the inner edge of these two minidisks. It is assumed that the orbital plane of the binary and the CBD are aligned. However, little is known about how X-rays from the inner region of the CPD and CSD theoretically affect the CBD surface in a binary black hole system. In this paper, we study analytically and numerically the effect of irradiation from the two minidisks on the CBD. In Section 2, we construct the basic model of the irradiated CBD. Note that the detailed descriptions are given in the Appendix. Section 3 provides the method to solve the energy equation of the CBD analytically and numerically. 
In Section 4, we present their solutions and the combined spectra of two minidisks and the CBD, called triple disk spectra, and describe how our models are tested by near-infrared (NIR) observations with Subaru and JWST. Sections 5 and 6 are devoted to discussion and conclusions, respectively. § MODEL We aim to investigate the effect of irradiation from the two minidisks, which is absorbed and re-emitted at the surface of the CBD, on the disk spectrum of the CBD. First, we assume that the two minidisks and the CBD are one-dimensional, axisymmetric, and steady-state standard disks. The binary has a mass M and a circular orbit with a semi-major axis a = a_0 r_ S, where a_0 is a dimensionless parameter measuring the semi-major axis. Also, r_ S =2GM/c^2 =3.0×10^7 cm(M/100 M_⊙) is the Schwarzschild radius of the black hole of mass M, where G is the gravitational constant and c is the speed of light. We note that irradiation from near the inner edge of each minidisk is unlikely to affect the spectrum of the outer part because the radius where the irradiation heating overcomes the viscous heating is much larger than the outer radius of the disk. Also, the energies of the photons generated at the inner edge of the CBD are at optical wavelengths and are much lower than those of the photons from the inner edge of the two minidisks. This is because the disk temperature of each minidisk is higher than the CBD temperature because of T∝r^-3/4 for the standard disk model <cit.>. Therefore, we consider the effect of irradiation from the inner edge of the CBD on the CBD spectrum to be tiny, and thus we neglect the irradiation heating rate due to the CBD inner edge in the energy equation. In this work, as shown in Figure <ref>, we examine the effect of irradiating the surface of the CBD from the inner edges of the two minidisks. We then calculate the spectrum of the entire binary system, including the minidisks and the CBD. Since the CBD is optically thick, the radiative flux from both sides of the disk surface is locally proportional to the fourth power of the disk temperature according to the Stefan-Boltzmann law, i.e., the radiative cooling rate is given by Q_ rad=2σ T^4, where σ is the Stefan-Boltzmann constant. According to the standard disk theory, the heating rate due to the viscous heating of the disk is given by Q_ vis = 3GMṀ/4π r^3, for r≫r_ ISCO in the CBD, where r_ ISCO=6GM/c^2 is the radius at the innermost stable circular orbit (ISCO). Here we take the mass accretion rate as Ṁ=ṁL_ Edd/c^2, where L_ Edd=4π GMc/κ is the Eddington luminosity with the opacity of the gas κ and ṁ is the ratio of the mass accretion rate to L_ Edd/c^2. We take ṁ=1 throughout the paper. Since the CBD viscous timescale τ_ vis is much longer than the binary orbital period P_ orb (see equations <ref>-<ref> for details), the binary system with a triple disk composed of the CBD, CPD, and CSD can be in a quasi-steady state, which thus allows us to impose the following relation on the accretion rates of the CBD, CPD, and CSD: Ṁ=Ṁ_1+Ṁ_2. Also, we assume that the mass accretion rate ratio is equal to the binary mass ratio q, i.e., Ṁ_2/Ṁ_1=q, where Ṁ_1 and Ṁ_2 are the CPD and CSD accretion rates, respectively. These relations provide Ṁ_1= 1/1+qṀ, Ṁ_2=q/1+qṀ. The energy equation Q_ vis + Q_ irr=Q_ rad is given by 3GMṀ/4π r^3 + A_1L_1/2π r[ d/dr(H/r) - β_1(r_ in/r)^2 [ H/r^2 - 1/2d/dr(H/r) ] ] + A_2L_2/2π r[ d/dr(H/r) - β_2(r_ in/r)^2 [ H/r^2 - 1/2d/dr(H/r) ] ] = 2σT^4, where we used equations (<ref>), (<ref>), and (<ref>) for the derivation.
In equation (<ref>), H=H(r) is the scale height of the CBD, L_1 and L_2 are the bolometric luminosities of the primary and secondary black holes, respectively, given by the equation (<ref>) as L_1=1/6Ṁ_1c^2, L_2=1/6Ṁ_2c^2, β_1 and β_2 are given in equation (<ref>), and r_ in =C_ gapa =6.0×10^10 cm(C_ gap/2) (a_0/1000) (M/100 M_⊙) is the CBD inner-edge radius with 1.6≲ C_ gap≲4 <cit.>, where C_ gap=2 is adopted as a fiducial value throughout this paper. The CBD is also assumed to be in hydrostatic equilibrium for the direction perpendicular to the disk plane: c_ s=ΩH, where Ω=√(GM/r^3) is the Keplerian angular frequency and c_ s=√(R_ g/μT_ c) is the sound speed with the molecular weight for an ionized plasma with solar abundances being μ=0.615 and the gas constant R_ g, giving the CBD mid-plane temperature: T_ c=μ/R_ g(H/r)^2 GM/r. In what follows, we assume that the CBD mid-plane temperature is approximately equal to the surface temperature, i.e., T_ c≈ T. The validity of this assumption will be discussed later in the Discussion section. § METHOD Solving the energy equation yields the disk aspect ratio H/r, which can then be substituted into the hydrostatic equation to finally obtain the radial distribution of the CBD temperature. In the following, we describe the method, divided into two ways: one is to approximate the energy equation to obtain an analytical solution, and the other is to solve the energy equation numerically. To solve the energy equation prospectively, we introduce the following dimensionless variables: ξ ≡ r/r_ in, Y ≡ H/r. §.§ Analytical solutions The energy equation is approximated into two separate equations, depending on the dominant heating mechanism: Q_ vis=Q_ rad for Q_ vis≫ Q_ irr and Q_ irr=Q_ rad for Q_ vis≪ Q_ irr. The former case gives the temperature profile in the viscous heating dominated region directly without using equation (<ref>): T=T_ in,vis ξ^-3/4, where T_ in,vis = ( 3/8πGMṀ/σ r_ in^3)^1/4 ∼ 3.7×10^4 K(M/100 M_⊙)^-1/4 (ṁ/1.0)^1/4(C_ gap/2)^-3/4(a_0/1000)^-3/4. Next, we consider the latter, irradiation-heating dominated case. Applying equation (<ref>) to equations (<ref>) and (<ref>) gives irradiation heating rates with dimensionless variables: ⟨Q_ irr,1⟩ =A_1L_1/2π r_ in^21/ξ[dY/dξ-β_1/ξ^2(Y/ξ-1/2dY/dξ)], ⟨Q_ irr,2⟩ =A_2L_2/2π r_ in^21/ξ[dY/dξ-β_2/ξ^2(Y/ξ-1/2dY/dξ)], where β_1 and β_2 are given by equation (<ref>). Then, total irradiation heating rate is given as the sum of ⟨Q_ irr,1⟩ and ⟨Q_ irr,2⟩ by Q_ irr =A_1L_1/2π r_ in^21/ξ[ (1+Q_12)dY/dξ- β_1+Q_12β_2/ξ^2(Y/ξ -1/2dY/dξ)], where Q_12=A_2L_2/(A_1L_1). Since A_1 and A_2 are unlikely to have much different values for two minidisks of the similar physical state, we assume that A_1=A_2=A, resulting in Q_12=q. Equation (<ref>) is rewritten with equation (<ref>) as T=T_0Y^2/ξ, where we define the dynamical temperature at r_ in as T_0 = μ/R_gGM/r_ in ∼ 1.66×10^9 K(C_ gap/2)^-1(a_0/1000)^-1. Combining equation (<ref>) with equation (<ref>), we write the radiative cooling rate as Q_ rad =2σT_0^4Y^8/ξ^4. Equating equation (<ref>) with equation (<ref>) gives the following differential equation: dY/dξ =[α Y^8+βY] [ξ^3+β/2ξ]^-1, where we introduce the following two parameters: α = 1/1+Q_121/AL_0/L_1 = 1/AL_0/L_1+L_2 , β = β_1+β_2Q_12/1+Q_12 = β_1L_1/L_1+L_2 + β_2L_2/L_1+L_2 with the normalization luminosity defined by L_0 ≡ 4π r_ in^2 σT_0^4 ∼ 1.9×10^55 ergs^-1(M/100 M_⊙)^2(C_ gap/2)^-2(a_0/1000)^-2. 
Here, α is the ratio of the blackbody luminosity at the inner edge of the CBD to the total irradiation luminosity from the two minidisks, whereas β is an average of β_1 and β_2, weighted by the ratio of the total irradiation to respective irradiation luminosities. Equation (<ref>) can be easily integrated analytically (see Appendix <ref> for details), and its special solution is given by equation (<ref>). Now, it is noted that β/ξ^2≪1, since the region where irradiation heating dominates is the outer part of the disk. Expanding equation (<ref>) in a Taylor series of β/ξ^2 and keeping up to the first-order terms yields the following approximate solution: Y ≈ (2/7α)^1/7ξ^2/7(1-3/14β/ξ^2). This solution reduces to a simple power-law solution of ξ in the β=0 case: Y=(2/7α)^1/7ξ^2/7, which is consistent with the solution of the single black hole case <cit.>, except for the value of α. Substituting equation (<ref>) into equation (<ref>) gives T= T_0(2/7α)^2/7ξ^-3/7(1-3/14β/ξ^2)^2. This is the radial dependence of the CBD temperature in the irradiation-dominated region. §.§ Numerical solutions Next, we derive the radial profile of the CBD temperature numerically. With dimensionless variables, the energy balance equation Q_ vis + Q_ irr=Q_ rad is expressed as 3/4πGMṀ/r_ in^31/ξ^3 +AL_1/2π r_ in^21/ξ[ (1+Q_12)dY/dξ- β_1+Q_12β_2/ξ^2(Y/ξ -1/2dY/dξ)] =2σT_0^4Y^8/ξ^4. This leads to the differential equation to determine the evolution of the disk aspect ratio as dY/dξ =[αY^8/ξ^3+βY/ξ^3 -γ1/ξ^2] [1+β/21/ξ^2]^-1, where γ is defined as γ =3/21/AL_11/1+Q_12GMṀ/r_ in =9/21/A1/C_ gapa_0 ∼ 2.3×10^-2(A/0.1)^-1(C_ gap/2)^-1(a_0/1000)^-1. We solve equation (<ref>) numerically with the outer boundary condition: Y_ out= (2/7α)^1/7ξ_ out^2/7(1-3/14β/ξ_ out^2), where equation (<ref>) is adopted at ξ_ out=r_ out/r_ in with r_ out being the outer radius of the CBD. The radial temperature profile is then obtained by substituting the numerical solution of Y into equation (<ref>). To sum up at the end, our model has seven parameters: M, ṁ, a_0, C_ gap, q, A, and ξ_ out. Among them, the following four parameters are fixed through the paper as ṁ=1, a_0=1000, C_ gap=2, and A=0.1. The dependencies of the CBD temperature on the remaining three parameters and spectra are examined in the next section. § RESULTS In this section, we compare the analytical and numerical solutions for the radial temperature distribution of the CBD and discuss how it depends on some parameters. §.§ Comparison of circumbinary disk temperature between analytical and numerical solutions Figure <ref> compares the radial temperature profiles between the analytical and numerical solutions for three different mass ratios. Panels (a), (b), and (c) represent the radial profiles of the CBD temperature for the cases of q=1.0, q=0.1, and q=0.01, respectively, while panel (d) shows the comparison between the numerical solutions of these cases. Panels (a)-(c) of Figure <ref> exhibit that the analytical and numerical solutions are in good agreement. It can also be seen from panel (d) that there is almost no dependence on the mass ratio. This is because the CBD temperature due to viscous heating depends only on the total BH mass and not on the mass ratio. In addition, α in equation (<ref>) is also independent of the mass ratio. In contrast, the binarity effect of irradiation heating appears in β of equation (<ref>). However, panel (d) indicates that the CBD temperature negligibly depends on the mass ratio because of β≪1. 
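To make this comparison concrete, the boundary-value integration of the aspect-ratio equation can be reproduced with standard tools. The sketch below is an illustrative reconstruction, not the authors' code: it adopts the fiducial parameters quoted in the text (M=100 M_⊙, ṁ=1, a_0=1000, C_gap=2, q=1, A=0.1, ξ_out=10^3), while the cgs constants and the choice of SciPy integrator are our own assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# cgs constants and fiducial parameters (assumed as in the text)
G, c, sigma_sb, Rgas = 6.674e-8, 2.998e10, 5.670e-5, 8.314e7
Msun, kappa, mu = 1.989e33, 0.4, 0.615
M, mdot, a0, Cgap, q, A, xi_out = 100*Msun, 1.0, 1.0e3, 2.0, 1.0, 0.1, 1.0e3

r_in = Cgap*a0*2*G*M/c**2                     # CBD inner edge, r_in = C_gap * a
Mdot = mdot*4*np.pi*G*M/(kappa*c)             # mdot * L_Edd / c^2
Ltot = Mdot*c**2/6.0                          # L_1 + L_2
T0   = (mu/Rgas)*G*M/r_in                     # dynamical temperature at r_in
L0   = 4*np.pi*r_in**2*sigma_sb*T0**4

alpha = L0/(A*Ltot)
beta  = 1.5/Cgap**2*q/(1+q)**2                # (beta_1 L_1 + beta_2 L_2)/(L_1 + L_2)
gamma = 4.5/(A*Cgap*a0)

def dYdxi(xi, Y):
    return (alpha*Y**8/xi**3 + beta*Y/xi**3 - gamma/xi**2)/(1 + 0.5*beta/xi**2)

# integrate inward from the analytic outer boundary condition
Y_out = (2/(7*alpha))**(1/7)*xi_out**(2/7)*(1 - 3*beta/(14*xi_out**2))
sol   = solve_ivp(dYdxi, (xi_out, 1.0), [Y_out], dense_output=True, rtol=1e-8, atol=1e-12)

xi    = np.logspace(0, 3, 300)
T_num = T0*sol.sol(xi)[0]**2/xi                          # hydrostatic relation T = T0*Y^2/xi
T_irr = T0*(2/(7*alpha))**(2/7)*xi**(-3.0/7.0)           # analytic irradiation-dominated law
T_in  = (3*G*M*Mdot/(8*np.pi*sigma_sb*r_in**3))**0.25    # viscous inner-edge temperature (~3.7e4 K)
T_vis = T_in*xi**(-0.75)                                 # analytic viscous law
```

With these inputs the numerical profile should follow T_vis inside ξ_b of a dozen or so and T_irr outside, in the spirit of panels (a)-(c) of the figure.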
§.§ Circumbinary disk spectra Since the CBD is highly optically thick in the vertical direction, its surface locally radiates blackbody radiation with the spectral intensity: I_ν=2h/c^2ν^3/exp(hν/kT)-1, where h is the Planck constant, k is the Boltzmann constant, and ν is a frequency. The flux density to be emitted from the whole of the CBD is then given by <cit.> S_ν = ∫ I_ν dΩ = 4πh/c^2cosδ/D^2ν^3 ∫_r_ in^r_ out r/e^hν/(kT)-1 dr, where δ is an inclination angle of the disk and D is the distance between the source and the Earth. In the following we adopt δ=0 unless otherwise noted. In our analytical models, since the disk's inner part is dominated by viscous heating, the disk temperature obeys T/T_ in=(r/r_ in)^-3/4 in the viscous-heating dominated region. In contrast, the disk's outer part is dominated by irradiation heating. Since Q_ vis=Q_ irr gives the boundary radius between the two regions, we estimate it as ξ_ b = (7α/2)^1/9( 21/4α/L_0GMṀ/r_ in)^7/9 ∼ 1.2×10 (ṁ/1.0)^-1/9 (M/100 M_⊙)^1/9(A/0.1)^-8/9(C_ gap/2)^-1(a_0/1000)^-1, where equations (<ref>), (<ref>), (<ref>), and (<ref>) are used for the derivation. The boundary radius ξ_ b is independent of the mass ratio and is insensitive to both the binary mass and the mass accretion rate. We also note that ξ_ b is a dozen times larger than the inner edge radius of the CBD. Combining equations (<ref>), (<ref>), and (<ref>) gives T/T_ b=(r/r_ b)^-3/7 as the CBD temperature of the irradiation-heating dominated region, where T_ b = (7α/2)^-1/12(3/8πGMṀ/σ r_ in^3)^1/4( 21/4α/L_0GMṀ/r_ in)^-7/12 ∼ 5.7×10^3 K(ṁ/1.0)^1/3(M/100 M_⊙)^-1/3(A/0.1)^2/3. We note that the temperature at ξ_ b is independent of the inner edge radius of the CBD. The disk spectrum of the viscous-heating dominated region is given by S_ν, vis = 16π/3h/c^2 ( r_ in/D)^2 ( kT_ in/hν)^8/3ν^3 ∫_ζ_ in^ζ_ b ζ^5/3/e^ζ-1 dζ, where ζ=hν/kT, ζ_ in=hν/kT_ in, and ζ_ b=hν/kT_ b. On the other hand, the disk spectrum of the irradiation-heating dominated region is given by S_ν, irr = 28π/3h/c^2 ( r_ in/D)^2 ( kT_ in/hν)^14/3ν^3 ∫_ζ_ b^ζ_ out ζ^11/3/e^ζ-1 dζ, where ζ_ out=hν/kT_ out and T_ out=T_ b (ξ_ out/ξ_ b)^-3/7. Figure <ref> shows the spectral energy distributions of the CBD. For our fiducial model, we adopt a binary mass of 10^2 M_⊙, a mass ratio of q=1.0, and an outer boundary radius of r_ out=10^3 r_ in. In all four panels, the thick solid black line represents the fiducial model. Panel (a) compares the analytical solutions 4πD^2S_ν, vis, 4πD^2S_ν, irr, and 4πD^2(S_ν, vis+S_ν, irr) with the numerical solutions 4πD^2S_ν. It is noted from the panel that the CBD spectrum generally has two peaks, one peak arising from the viscous-heating dominated region and the other from the irradiation-heating dominated region. We also find that the numerical solution is in good agreement with the analytical solution. Panel (b) displays the model spectra for three different mass ratios, demonstrating that the CBD spectra hardly depend on the mass ratio as predicted in panel (b) of Figure <ref>. Panel (c) demonstrates the dependence of the spectral luminosity on the binary mass. We note that the spectral luminosity increases with the black hole mass. The spectral luminosity also depends on the outer radius of the CBD. Panel (d) shows the effect of the outer radius. The lower-energy peak becomes higher as the outer radius increases, because of the increase in the emitting area.
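As a rough cross-check of the spectral shape discussed above, the flux-density integral can also be evaluated by direct quadrature over a broken power-law temperature profile. The snippet below is a simplified sketch with assumed fiducial numbers (ξ_b ≈ 12, T_in ≈ 3.7×10^4 K, D = 10 Mpc), not the exact model behind the figures.

```python
import numpy as np

h, kB, c, pc = 6.626e-27, 1.381e-16, 2.998e10, 3.086e18        # cgs
r_in, xi_b, xi_out, T_in, D = 6.0e10, 12.0, 1.0e3, 3.7e4, 10e6*pc

def T_of_r(r):
    """Viscous r^-3/4 profile inside xi_b, irradiated r^-3/7 profile outside."""
    xi  = r/r_in
    T_b = T_in*xi_b**(-0.75)
    return np.where(xi < xi_b, T_in*xi**(-0.75), T_b*(xi/xi_b)**(-3.0/7.0))

def S_nu(nu, incl=0.0):
    """Face-on CBD flux density (erg s^-1 cm^-2 Hz^-1) by quadrature of the integral above."""
    r = np.logspace(np.log10(r_in), np.log10(xi_out*r_in), 4000)
    x = np.clip(h*nu/(kB*T_of_r(r)), 1e-12, 700.0)              # guard against over/underflow
    return 4*np.pi*h/c**2*np.cos(incl)/D**2*nu**3*np.trapz(r/np.expm1(x), r)

nu      = np.logspace(12.0, 16.5, 200)                          # IR through UV
nu_L_nu = nu*np.array([4*np.pi*D**2*S_nu(v) for v in nu])       # double-peaked CBD spectrum
```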
§.§ Triple disk spectra This section describes the spectral energy distributions of the triple disk system consisting of the two minidisks and the CBD surrounding them. The spectral luminosities of the CPD and the CSD are calculated by equation (<ref>) as L_ν,i= 16π^2 h/c^2 ν^3 ∫_r_ in,i^r_ out,i r/e^hν/(kT_i)-1 dr, where the inner boundary radius is set to the ISCO radius of each black hole, i.e., r_ in,i=6GM_i/c^2 with i=1 for the CPD and i=2 for the CSD. We also assume that the outer radius of each minidisk is equal to the Roche radius of the corresponding black hole <cit.>: r_ out,1 =0.49q^-2/3a/0.6q^-2/3+ln(1+q^-1/3), r_ out,2 =0.49q^2/3a/0.6q^2/3+ln(1+q^1/3), and the temperature of each disk is calculated by T_i =( 3/8πGM_iṀ_i/σ r_ in,i^3)^1/4(r/r_ in,i)^-3/4. Figure <ref> compares the spectral energy distributions of the triple disk between two different cases. Panel (a) represents the triple disk spectra with the irradiated CBD, while panel (b) shows the triple disk spectra with the non-irradiated CBD, where the blue solid lines denote the CBD spectrum. In deriving these spectra, the same parameters are adopted as in Figure <ref>. It is also noted from panel (a) that the spectral luminosity increases with frequency as ν^1/3 toward the high-frequency peak, while it decreases with frequency as ν^-5/3 in the NIR to optical wavebands. Figure <ref> shows the triple disk spectra for different black hole masses and outer boundary radii. Panel (a) of Figure <ref> shows how the triple disk spectra depend on the black hole mass, indicating that the luminosity increases with the black hole mass. Panel (b) shows how the spectra depend on the outer boundary radius. From panel (b) it can be seen that only the low-frequency peak of the spectral luminosity increases as r_ out becomes larger. This is simply because the area over which the low-frequency photons are emitted is larger. §.§ Observational implications In this section we discuss the observability of our model spectra by comparing them with the flux limits of the Hyper Suprime-Cam (HSC) Subaru Strategic Program survey and the James Webb Space Telescope (JWST) in the IR to optical wavelength range, and Swift/XRT in the X-ray waveband. The sensitivity data of these instruments are taken from their web pages. Figure <ref> plots the frequency distribution of the triple disk flux density for different binary masses (panel a), distances (panel b), and CBD's outer radii (panel c) superimposed on the observational flux limits. The reference model is the flux density with M=100M_⊙, D=10 Mpc, and r_ out/r_ in=1000. Panel (a) shows that a triple disk of stellar to intermediate-mass black holes is detectable in the IR, optical, and X-ray at a distance of 10 Mpc from Earth. As expected, the spectral luminosity is higher with increasing binary mass. Panel (b) indicates that the reference model is too faint to be observed in any waveband at 100 Mpc, but the 10^3 M_⊙ case is detectable even at 100 Mpc. Panel (c) demonstrates that the brightness of the low-frequency peaks varies with the outer radius of the CBD, while that of the high-frequency peaks does not at all. This makes sense because the high-frequency peak comes from the minidisks and is therefore independent of the outer radius of the CBD. In particular, an outer radius as small as 10^2 r_in makes the low-frequency peak weaker than the high-frequency peak. Nevertheless, even this smallest CBD case is sufficiently observable with JWST/NIRCam and Swift/XRT.
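The minidisk ingredients entering the triple-disk spectra of this section can be sketched in the same spirit. In the snippet below the Roche-lobe fit and the ISCO radii follow the expressions quoted above, while the standard-disk temperature normalization and the numerical values are our own illustrative assumptions.

```python
import numpy as np

G, c, h, kB = 6.674e-8, 2.998e10, 6.626e-27, 1.381e-16           # cgs
sigma_sb, kappa, Msun = 5.670e-5, 0.4, 1.989e33
M, q, a0, mdot = 100*Msun, 1.0, 1.0e3, 1.0

a        = a0*2*G*M/c**2
M1, M2   = M/(1+q), M*q/(1+q)
Mdot     = mdot*4*np.pi*G*M/(kappa*c)
Md1, Md2 = Mdot/(1+q), Mdot*q/(1+q)

def roche(a, x):
    """Eggleton-type fit for the Roche radius, with x the mass ratio of the disk host
    to its companion (x = 1/q for the primary, x = q for the secondary)."""
    return 0.49*x**(2/3)*a/(0.6*x**(2/3) + np.log(1 + x**(1/3)))

def L_nu_minidisk(nu, Mi, Mdi, r_out):
    r_isco = 6*G*Mi/c**2
    T_i    = (3*G*Mi*Mdi/(8*np.pi*sigma_sb*r_isco**3))**0.25      # assumed normalization
    r = np.logspace(np.log10(r_isco), np.log10(r_out), 3000)
    T = T_i*(r/r_isco)**(-0.75)
    x = np.clip(h*nu/(kB*T), 1e-12, 700.0)
    return 16*np.pi**2*h/c**2*nu**3*np.trapz(r/np.expm1(x), r)

nu     = np.logspace(13, 18, 150)
L_mini = np.array([L_nu_minidisk(v, M1, Md1, roche(a, 1/q)) +
                   L_nu_minidisk(v, M2, Md2, roche(a, q)) for v in nu])
# adding the CBD contribution from the previous sketch yields the double-peaked triple-disk SED
```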
Let us briefly outline a strategy for Target-of-Opportunity observations to test our model: If Swift/XRT finds a bright flare in X-rays (within 100 Mpc if the distance to the source is available), we prompt NIR telescopes to look into that region. If a point source is found, Subaru and JWST will be pointed at it to examine the detailed frequency dependence of the flux density. This will allow us to estimate the power-law index of the flux density in the NIR band and compare it with our model. § DISCUSSION Considering the conservation of radiative flux in the vertical direction of the CBD, the vertical flux at the disk mid-plane is equivalent to the flux at the surface, giving <cit.> T = (2/3)^1/4(1/τ)^1/4 T_ c, where, according to the standard disk model, the optical depth is given by τ =κΣ/2 ∼ 4.6 (M/100 M_⊙)^1/2(κ/0.4 cm^2 g^-1) (α_ SS/0.1)^-1(H/r/0.01)^-1 (ṁ/1.0)^-1(r/r_ b)^-1/2 with the surface density Σ=Ṁ/(3πν) and the disk viscosity ν=(2/3)α_ SSc_ sH with the Shakura-Sunyaev viscosity parameter α_ SS (see also equation <ref>). Here, we used equation (<ref>) for the derivation. Assuming that the CBD opacity source is electron scattering, i.e., κ=0.4 cm^2 g^-1, the optical depth is of the order of unity for r ≥ r_ b, resulting in T≈ T_ c in the irradiation heating dominated region from equation (<ref>). In this case, our assumption of T≈T_ c is justified. However, as the CBD mid-plane temperature drops to several thousand kelvin, free-free absorption comes into play, and the difference between the disk mid-plane temperature and the surface temperature is expected to become significant. At lower temperatures, bound-free absorption is also an important source of opacity. In other words, the spectra of the irradiated part of the CBD will vary with these opacities. Since, for example, the opacity due to free-free absorption has a complicated dependence on the density and temperature as κ∝ρT_ c^-3.5, the differential equation for the disk aspect ratio becomes too complicated to solve analytically, unlike equation (<ref>). In a forthcoming paper, we will numerically study the CBD spectra by taking these opacities into account. Next, we discuss the effect of the CBD inner edge radius on the spectrum. Note that the inner edge radius is proportional to a_0 and C_ gap. As the inner-edge radius increases, the inner-edge temperature will be lower, shifting the high-frequency peak of the CBD toward lower frequencies. In contrast, the low-frequency peak will become relatively more prominent. If the inner-edge radius is smaller, the inner-edge temperature will be higher. This would make the high-frequency peak larger. The timescale for the binary orbital decay due to gravitational wave radiation is given by <cit.> t_ gw = 5/8(a/r_ S)^4 (r_ S/c) g(q) ∼ 7.8×10 yr(M/100 M_⊙) (a_0/1000)^4(g(q)/4), where g(q)=(1+q)^2/q and g(q=1)=4. This suggests that the binary with a=1000r_ S is on its way to merging via gravitational wave radiation. Now consider a situation where the CBD and binary are dynamically coupled; if the CBD viscous timescale estimated at the inner edge of the CBD is longer than the coalescence time, the binary will move rapidly toward coalescence, leaving the CBD behind, i.e., the CBD decouples. For our fiducial model, the semi-major axis of the decoupled binary is estimated to be about 100r_ S. If the difference between the pre- and post-decoupled spectra is pronounced, we can distinguish binaries with and without dynamically coupled CBDs.
This is also a future topic, including scaling the black hole mass to the supermassive black holes (SMBHs). § CONCLUSIONS We have studied the effect of irradiation from the two minidisks on the circumbinary disk (CBD). We have derived the irradiation heating rate on the CBD surface and then, by considering the energy balance equation, a first-order differential equation for the radial profile of the disk aspect ratio. Assuming the hydrostatic equilibrium of the CBD in the vertical direction and adopting a consistent outer boundary condition, we have solved the differential equation analytically and numerically. Using these solutions, we have calculated the CBD spectra and studied their dependence on the black hole mass, the binary mass ratio, and the CBD outer radius. We also computed the combined disk spectrum, which is the sum of the CBD spectrum and two minidisk spectra, the so-called triple disk spectra. Our conclusions are summarized as follows: * The analytically calculated spectra are in good agreement with the numerical spectra. * The CBD spectrum is almost independent of the binary mass ratio. * Triple disk spectra show double peaks. The high frequency peak arises from the two minidisks, while the low frequency peak from the irradiated CBD. Note that the latter peak does not appear when the CBD is not irradiated. For the stellar-mass to intermediate-mass range of binary black holes, the high frequency peak appears in the soft X-ray band, while the low frequency peak appears in the IR band. * The observed flux increases with frequency as ν^1/3 towards the high-frequency peak, while it decreases with frequency away from the low-frequency peak as ν^-5/3. The latter feature can be tested with near-IR observations with Subaru and JWST. § ACKNOWLEDGMENTS Y.L. and K.H. have been supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1A5A1013277 to K.H. and 2020R1A2C1007219 to K.H. and Y.L.). This research was also supported in part by grant no. NSF PHY-2309135 to the Kavli Institute for Theoretical Physics (KITP), and also supported by Grant-inAid for Scientific Research from the MEXT/JSPS of Japan, JP21K03619 to A.T.O. § IRRADIATED FLUXES FROM TWO MINIDISKS We calculate the irradiation flux directed toward the CBD from the accretion disks (i.e., minidisks) around the primary and secondary black holes. We assume that the two sources are point sources and orbit in a circular binary. This is because they are close to the inner edge of each disk and much smaller than the orbital semi-major axis of the binary black hole. The flux from the two black holes to the CBD surface is expected to vary with the orbital motion of the binary as their distance varies. To properly calculate the flux, the positional relationship between the binary and the CBD surface is shown in Figure <ref>. The vectors of Figure <ref> are expressed in polar coordinates as follows: r⃗ = rsinθcosϕ x̂+rsinθsinϕ ŷ+rcosθ ẑ r⃗_1 =r_1cos f x̂+r_1sin f ŷ r⃗_2 =r_2cos (π+f) x̂+r_2sin(π+f) ŷ=-r_2cos f x̂-r_2sin f ŷ d⃗_1 =(rsinθcosϕ-r_1cos f) x̂+(rsinθsinϕ-r_1sin f) ŷ+rcosθ ẑ d⃗_2 =(rsinθcosϕ+r_2cos f) x̂+(rsinθsinϕ+r_2sin f) ŷ+rcosθ ẑ n⃗ =[1+(dH/dR)^2]^-1/2(-dH/dRcosϕ, -dH/dRsinϕ, 1 ) with the assumption H≪ R and f is the true anomaly of the binary orbit, where r_1 = q/1+qa r_2 = 1/1+qa with the mass ratio q=M_1/M_2 and the semi-major axis a. 
The unit vectors of d⃗_1 and d⃗_2 are given by l⃗_1 =-d⃗_1/|d⃗_1|, l⃗_2 = -d⃗_2/|d⃗_2|, respectively, where |d⃗_1| = [ r^2 + r_1^2 - 2rr_1 sinθcos(ϕ - f) ]^1/2 |d⃗_2| = [ r^2 + r_2^2 - 2rr_2 sinθcos(ϕ - f) ]^1/2. §.§ Irradiated fluxes From the geometrical relation of Figure <ref>, the irradiated flux on the CBD surface from each BH is given by F_ irr,i=L_i/4π D^2(l⃗_i·n⃗)(n⃗·e⃗_z), where i=1 and i=2 denote the primary and secondary black hole, respectively, D is the distance from Earth. Here, the bolometric luminosity for each black hole L_i is given by L_i≡GM_iṀ_i/r_ ISCO,i, where Ṁ_i, M_i, and r_ ISCO,i=6GM_i/c^2 are the mass accretion rate, black hole mass, and ISCO radius of each black hole. From equations (<ref>) and (<ref>), we get l⃗_1 ·n⃗ = [1 + ( dH/dR)^2 ]^-1/2 d_1^-1( r dH/dr - r_1 dH/drcos(ϕ - f)/sinθ - r cosθ) l⃗_2 ·n⃗ = [1 + ( dH/dR)^2 ]^-1/2 d_2^-1( r dH/dr + r_2 dH/drcos(ϕ - f)/sinθ - r cosθ) n⃗·e⃗_z = [1 + ( dH/dR)^2 ]^-1/2, dH/dR = dH/dr1/sinθ. Substituting equations (<ref>)-(<ref>) into the equation (<ref>) gives F_ irr,1 = F_1(l⃗_1 ·n⃗)(n⃗·e⃗_z) = L_1/4π d_1^3[1+(dH/dR)^2]^-1(rdH/dr-r_1dH/drcos(ϕ-f)/sinθ-rcosθ) = L_1/4π r^2[1-(dH/dr)^2] [dH/dr-r_1/rdH/drcos(ϕ-f)/sinθ -H/r. . +3r_1/rdH/drsinθcos(ϕ-f)-3(r_1/r)^2dH/drcos^2(ϕ-f)-3/2r_1/rsin2θcos(ϕ-f). . +3/2(r_1/r)^2(5cos^2(ϕ-f)-1)(dH/dr-r_1/rdH/drcos(ϕ-f)/sinθ-H/r)], F_ irr,2 = F_2(l⃗_2 ·n⃗)(n⃗·e⃗_z) = L_2/4π d_2^3[1+(dH/dR)^2]^-1(rdH/dr+r_2dH/drcos(ϕ-f)/sinθ-rcosθ) = L_2/4π r^2[1-(dH/dr)^2][ dH/dr+r_2/rdH/drcos(ϕ-f)/sinθ-H/r. . -3r_2/rdH/drsinθcos(ϕ-f)-3(r_2/r)^2dH/drcos^2(ϕ-f)+3/2r_2/rsin2θcos(ϕ-f) . . +3/2(r_2/r)^2 (5cos^2(ϕ-f)-1) (dH/dr+r_2/rdH/drcos(ϕ-f)/sinθ-H/r)], where the term d_i is approximately obtained in a Taylor series of r_i/r as 1/d_1^3 = [r^2+r_1^2-2rr_1sinθcos(ϕ-f)]^-3/2 ≈ r^-3[1+3r_1/rsinθcos(ϕ-f) +3/2(r_1/r)^2(5sin^2θcos^2(ϕ-f)-1)], 1/d_2^3 = [r^2+r_2^2+2rr_2sinθcos(ϕ-f)]^-3/2 ≈ r^-3[1-3r_2/rsinθcos(ϕ-f) +3/2(r_2/r)^2(5sin^2θcos^2(ϕ-f)-1)], repectively. § IRRADIATION HEATING RATES The irradiated heating rate is given by Q_ irr,i=2A_i F_ irr,i, where the factor 2 indicates that the irradiated flux heats the two sides of the CBD surface, and A_i represents the ratio of re-emitted to incident photons. The CBD viscous timescale at r_ b and binary orbital period are evaluated as τ_ vis =r^2/ν≈3/2r^2Ω/α_ SS c_ s^2 ∼ 1.9 × 10^9 s (r/r_ b)^1/2(α_ SS/0.1)^-1(M/100 M_⊙)^25/18(ṁ/1.0)^-7/18(A/0.1)^-10/9, P_ orb =2π/Ω ∼ 2.8 × 10^2 s (a_0/1000)^3/2(M/100 M_⊙), where the disk viscosity is given by ν = 2/3α_ SSc_ sH. with the Shacla-Sunayev viscosity parameter α_ SS <cit.>. Since the viscous timescale of the CBD is much longer than the binary orbital period, and also the binary is in a circular orbit, the effect of the binary motion on the irradiation heating to the CBD can be considered quasi-stationary. Thus, the irradiation heating rate is approximately equal to the azimuthally- and orbit-averaged irradiation heating rate: <Q_ irr,i>=1/2π∫^2π_0 Q_ irr,i dϕ', where (ϕ'=ϕ-f). From equations (<ref>), (<ref>), (<ref>), and (<ref>), we get <Q_ irr,1> and <Q_ irr,2> as <Q_ irr,1> = A_1L_1/2π r^2[1-(dH/dr)^2] [ dH/dr-H/r-3/2(r_1/r)^2dH/dr-3/2(r_1/r)^2 (dH/dr-H/r) . . +15/4(r_1/r)^2(dH/dr-H/r) ] ≈ A_1L_1/2π r[ d/dr(H/r)+ 3/2(r_1/r)^2[3/2d/dr(H/r)-1/rdH/dr] ] = A_1L_1/2π r[ d/dr(H/r) - β_1(r_ in/r)^2 [ H/r^2 - 1/2d/dr(H/r) ] ], <Q_ irr,2> = A_2L_2/2π r^2[1-(dH/dr)^2][dH/dr-H/r-3/2(r_2/r)^2dH/dr-3/2(r_2/r)^2(dH/dr-H/r). 
.+15/4(r_2/r)^2(dH/dr-H/r) ] ≈ A_2L_2/2π r[ d/dr(H/r)+ 3/2(r_2/r)^2 [ 3/2d/dr(H/r)-1/rdH/dr] ] = A_2L_2/2π r[ d/dr(H/r) - β_2(r_ in/r)^2 [ H/r^2 - 1/2d/dr(H/r) ] ], where β_1 and β_2 are given by β_1=3/2(r_1/r_ in)^2 =3/21/C_ gap^2q^2/(1+q)^2 , β_2=3/2(r_2/r_ in)^2 =3/21/C_ gap^21/(1+q)^2, respectively, and β_1 and β_2 are less than unity for q≤1.0. From equations (<ref>) and (<ref>), we obtain the total irradiated heating as Q_ irr = A_1L_1/2π r[ d/dr(H/r) - β_1(r_ in/r)^2 [ H/r^2 - 1/2d/dr(H/r) ] ] + A_2L_2/2π r[ d/dr(H/r) - β_2(r_ in/r)^2 [ H/r^2 - 1/2d/dr(H/r) ] ]. § ANALYTICAL SOLUTIONS FOR Β≠0 Equation (<ref>) can be integrated analytically. By separating the variables, we obtain ∫_Y_ b^Y1/(α Y^8+βY) dY = ∫_ξ_ b^ξ1/(ξ^3+(β/2)ξ) dξ, where Y_ b=Y(ξ_ b) and ξ_ b is the normalized boundary radius, which can be taken arbitrarily in this formulation. Both sides are integrated as logY/(αY^7+β)^1/7 - logY_ b/(αY_ b^7+β)^1/7 = logξ^2/2ξ^2+β - logξ_ b^2/2ξ_ b^2+β. This equation gives the general solution as Y=β^1/7(J ξ^2/β+2ξ^2) [ 1-α(J ξ^2/β + 2ξ^2)^7]^-1/7, where the integral constant J is given by J=2ξ_ b^2+β/ξ_ b^2Y_ b/(αY_ b^7+β)^1/7. It follows from equation (<ref>) that the global (1≤ξ≤∞) solutions exist only if the following condition is satisfied: α^1/7/2J≤1. Here, substituting J=2/α^1/7 into equation (<ref>) yields the following particular solution: Y=(β/α)^1/7[1+1/2β/ξ^2]^-1[ 1-(1+β/2ξ^2)^-7]^-1/7. Taking the limit β/ξ^2 → 0, equation (<ref>) consistently connects to the power-law solution of equation (<ref>).
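For completeness, the last statement can be checked with a short expansion; the following lines are our own verification (not part of the original derivation), writing ε ≡ β/(2ξ^2) ≪ 1:

```latex
Y=\left(\frac{\beta}{\alpha}\right)^{1/7}\bigl[1+\epsilon\bigr]^{-1}
   \bigl[1-(1+\epsilon)^{-7}\bigr]^{-1/7}
 \simeq\left(\frac{\beta}{\alpha}\right)^{1/7}\bigl(7\epsilon\bigr)^{-1/7}
 =\left(\frac{\beta}{\alpha}\right)^{1/7}\left(\frac{2\xi^{2}}{7\beta}\right)^{1/7}
 =\left(\frac{2}{7\alpha}\right)^{1/7}\xi^{2/7},
```

which reproduces the β-independent power-law solution quoted earlier.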
http://arxiv.org/abs/2407.12117v1
20240716185949
Efficiently Training 7B LLM with 1 Million Sequence Length on 8 GPUs
[ "Pinxue Zhao", "Hailin Zhang", "Fangcheng Fu", "Xiaonan Nie", "Qibin Liu", "Fang Yang", "Yuanbo Peng", "Dian Jiao", "Shuaipeng Li", "Jinbao Xue", "Yangyu Tao", "Bin Cui" ]
cs.LG
[ "cs.LG", "cs.DC" ]
^1Peking University, ^2Tencent {pinxue.zhao, z.hl, ccchengff, xiaonan.nie, bin.cui}@pku.edu.cn {brendenliu, youngfyang, yuanbopeng, focusjiao, shuaipengli, jinbaoxue, brucetao}@tencent.com § ABSTRACT Nowadays, Large Language Models (LLMs) have been trained using extended context lengths to foster more creative applications. However, long context training poses great challenges considering the constraint of GPU memory. It not only leads to substantial activation memory consumption during training, but also incurs considerable memory fragmentation. To facilitate long context training, existing frameworks have adopted strategies such as recomputation and various forms of parallelisms. Nevertheless, these techniques rely on redundant computation or extensive communication, resulting in low Model FLOPS Utilization (MFU). In this paper, we propose , a novel LLM training framework designed for fine-grained activation memory management. Given the quadratic scaling of computation and linear scaling of memory with sequence lengths when using FlashAttention, we offload memory-consuming activations to CPU memory after each layer’s forward pass and fetch them during the backward pass. To maximize the swapping of activations without hindering computation, and to avoid exhausting limited CPU memory, we implement a token-wise activation recomputation and swapping mechanism. Furthermore, we tackle the memory fragmentation issue by employing a bi-level Mixed Integer Programming (MIP) approach, optimizing the reuse of memory across transformer layers. Empirical results demonstrate that achieves an average of 2.42× and 2.26× MFU compared to Megatron-LM and DeepSpeed, respectively. This improvement is attributed to 's ability to minimize memory fragmentation, reduce recomputation and intensive communication, and circumvent the delays associated with the memory reorganization process due to fragmentation. By leveraging fine-grained activation memory management, facilitates efficient training of 7B LLM with 1 million sequence length on just 8 A800 GPUs, achieving an MFU of 52.30%. Efficiently Training 7B LLM with 1 Million Sequence Length on 8 GPUs Pinxue Zhao^1, Hailin Zhang^1, Fangcheng Fu^1, Xiaonan Nie^1, Qibin Liu^2, Fang Yang^2, Yuanbo Peng^2, Dian Jiao^2, Shuaipeng Li^2, Jinbao Xue^2, Yangyu Tao^2, Bin Cui^1 July 22, 2024 =============================================================================================================================================================================== § INTRODUCTION Since the advent of ChatGPT <cit.>, Large Language Models (LLMs) have demonstrated remarkable proficiency in comprehending and generating natural language texts. Besides revolutionizing the field of language processing, which encompasses translation <cit.>, coding <cit.>, etc., transformer-based LLMs have also found applications in multi-modal scenarios, such as image processing <cit.>, video stream analysis <cit.>, and AI for science <cit.>. To accommodate novel applications that require lengthy contexts, LLMs have developed to support long context input, from 2K-4K <cit.> to 32K <cit.>, 128K <cit.>, or even millions of tokens <cit.>. Considering the extrapolation problem <cit.>, which refers to the decline in LLM performance when input sequences exceed the training length, it is necessary to conduct long context training <cit.> or fine-tuning <cit.> to facilitate long sequence inference. 
Beyond natural language processing, increasing the context length is also essential across diverse domains, including video processing <cit.>, protein property prediction <cit.>, weather forecasting <cit.>, and health care <cit.>. However, training LLMs with long sequence lengths poses a significant challenge for GPU memory. During training, a large amount of activations (i.e., the intermediate results computed in the forward pass) must be stored for gradient computation during the backward pass, resulting in substantial memory consumption. It is well known that the self-attention module in the transformer architecture has a quadratic computation and memory complexity w.r.t. the sequence length. FlashAttention <cit.>, now a standard technique for attention computation in LLM training, accelerates computation and shrinks the memory complexity to be linear w.r.t. the sequence length by scheduling memory I/O and recomputing necessary components during the backward pass. Beyond attention, the remaining activation memory also scales proportionally with the sequence length, which can become quite large in long context scenarios. For instance, training a GPT model with 7B parameters on a sequence length of 1 million can lead to an activation memory of 4096GB, far exceeding the memory capacity of commonly used accelerators (e.g. 80GB for an NVIDIA H100/A100 GPU). Moreover, the memory fragmentation issue makes this situation even worse. Besides storing the skeletal activations for the backward pass, there are also a tremendous number of transient activations that are temporarily generated during computation (we will formally categorize the two kinds of activations in Section <ref>). Such transient activations have distinct data life cycles and usually lead to frequent allocation and deallocation of GPU memory. Currently, most LLM training systems are built on top of PyTorch <cit.>, including Megatron-LM <cit.> and DeepSpeed <cit.>. PyTorch employs a caching memory allocator designed to reduce the costly “cudaMalloc” and “cudaFree” operations by caching and reusing allocated memory blocks. However, the frequent memory (de)allocation requests in the caching allocator result in significant memory fragmentation <cit.>. This issue becomes more severe in long context training, considering the fact that the (de)allocated memory blocks are significantly larger than those in normal tasks. Memory fragmentation not only leads to Out-Of-Memory (OOM) errors but also significantly hinders training efficiency because of the frequent triggering of the PyTorch memory reorganization process, which involves calls to expensive “cudaFree” and “cudaMalloc” to release cached blocks and reclaim GPU memory. Figure <ref> illustrates an example of GPU memory fragmentation. At the peaks of the curves, there is more than 4GB of memory reserved but not allocated. However, when the training task tries to allocate 4GB of memory, the allocator fails to find a contiguous memory space to fulfill the allocation request. Consequently, the allocator must invoke a series of “cudaFree” and “cudaMalloc” operations to reorganize memory, which blocks GPU computation. In this paper, we aim to tackle the memory challenges encountered during long context LLM training. Specifically, we propose and implement an LLM training framework to address the activation data management problem. There are several key observations that inspire our design. Observation 1: Opportunity for activation swapping. 
In the field of deep learning training, to reduce the peak memory consumption caused by skeletal activations, activation recomputation <cit.> and swapping <cit.> are two well-known memory reduction techniques that trade time to save memory.[Parallelism techniques like sequence parallelism <cit.> and context parallelism <cit.> are also compelling approaches to reduce memory at the price of extra communication overhead. Our work is compatible with these parallelism techniques.] Both reduce memory consumption at the price of extra time. For one thing, the activation recomputation technique discards the skeletal activations in the forward pass and later recomputes them in the backward pass, leading to extra computation cost. For another, the swapping technique offloads the activations to CPU memory in the forward pass to relieve the GPU memory pressure, and later fetches them back to GPU memory in the backward pass, incurring the overhead of data transmission between CPU and GPU memory. Contemporary mainstream LLM training frameworks such as Megatron-LM and DeepSpeed prefer activation recomputation to swapping, because GPU computing capability has grown far more rapidly than the connectivity between CPU and GPU memory in the past few years (see Section <ref> for details). However, we find that the situation is different in long context training of LLMs. Denote s as the sequence length. The computation complexity of one transformer layer is O(s^2), while the activation memory complexity is O(s) thanks to FlashAttention. During GPU computation, we can leverage the idle CPU-GPU bandwidth, offloading activations to CPU memory during the forward pass, and fetching the activations back during the backward pass. As the sequence length increases, there is greater potential for overlapping computation and communication, given that their time requirements scale quadratically and linearly with the sequence length, respectively. As shown in Figure <ref>, eventually, after reaching a specific sequence length (192K in this case), the transmission of activations can be fully overlapped with GPU computation. However, in practice, there is limited chance to completely swap all activations. On the one hand, extremely long training sequences are rare, and most of the time we need to train on data whose computation is not long enough to fully hide the activation transmission. On the other hand, offloading all activations may cause CPU OOM issues: the CPU memory is responsible for storing all activations from all GPUs on the same machine, but current CPU memory is typically only a few terabytes, which is insufficient for very long sequence lengths. Considering the above challenges, we introduce a fine-grained activation recomputation and swapping mechanism to manage the skeletal activations. We consider both tensor-level and token-level activation management. For each layer, following previous works <cit.>, we consistently offload two kinds of activation tensors, the input of each transformer layer and the output of FlashAttention, to CPU memory. For other activation tensors, we only offload a fraction (denoted as α) of tokens, and recompute the remaining part during the backward pass. We model the time cost of activation recomputation and transmission and determine the fraction α through a well-formulated linear programming problem, which aims to maximize the amount of offloaded activations without impeding GPU computation or causing CPU OOM issues. 
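As a rough illustration of how such a fraction might be chosen, the sketch below clips α against the two requirements described above: the per-layer offload must finish within one layer's forward computation, and the offloaded activations of all layers must fit in CPU memory. The variable names and the closed-form clipping are illustrative assumptions rather than the framework's actual implementation.

def choose_offload_fraction(s_input, s_attn, s_others,
                            bandwidth, t_layer_fwd,
                            n_layers, cpu_mem_budget):
    """Largest token fraction alpha in [0, 1] such that (1) offloading one
    layer's skeletal activations fits within that layer's forward time, and
    (2) the offloaded activations of all layers fit in CPU memory.
    Sizes in bytes, bandwidth in bytes/s, times in seconds."""
    # Constraint 1: (s_input + s_attn + alpha * s_others) / bandwidth <= t_layer_fwd
    by_overlap = (bandwidth * t_layer_fwd - s_input - s_attn) / s_others
    # Constraint 2: n_layers * (s_input + s_attn + alpha * s_others) <= cpu_mem_budget
    by_cpu_mem = (cpu_mem_budget / n_layers - s_input - s_attn) / s_others
    return max(0.0, min(by_overlap, by_cpu_mem, 1.0))

When the first term already allows α = 1, the transmission is fully hidden behind computation, which corresponds to the crossover sequence length (192K in the example above).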
During the backward pass, prefetching activations can also overlap with GPU computation, because the backward computation is typically twice that of the forward computation. With both tensor-level and token-level activation management, we make full use of the idle bandwidth and minimize the recomputation overhead to improve the overall efficiency. Observation 2: Deterministic memory (de)allocation pattern across iterations and layers. The memory fragmentation mainly comes from frequent and irregular memory (de)allocation requests. However, we observe that typical LLM training adheres to a deterministic computation process across iterations and layers. All transformer layers in an LLM are identical, and each training iteration involves the same computation. While the general-purpose caching allocator is designed for dynamic computation routines, training LLMs can be conceptualized as static computation graphs <cit.>, which have identical structures across layers. This provides an opportunity to design static planning for each layer and reuse the allocated memory of each layer, thereby mitigating memory fragmentation. To enhance memory utilization while minimizing fragmentation, we leverage a hierarchical Mixed Integer Programming (MIP) technique to tackle the memory planning problem. Before training, we profile the memory (de)allocation requests of one training iteration, then use MIP to solve an optimized memory plan for a single transformer layer. Since each layer's memory requests are identical, the entire memory block for one layer can be directly reused for the subsequent identical layer. Considering each transformer layer's memory block as a single memory allocation request, we further solve another MIP problem that plans memory allocation for the entire LLM training, including the initial embedding layer, all transformer layers, and the final classifier layer. We only need to solve the problem once before the actual training, since all iterations can utilize the same memory plan. The near-optimal memory plan eliminates the fragmentation issue and avoids PyTorch's time-consuming memory reorganization mechanism. Putting these together, in response to the activation memory challenge in long context training, we propose , an LLM training framework with fine-grained tensor memory management. We consider the challenge as an activation data management problem. To make full use of the idle CPU-GPU bandwidth during training with different sequence lengths, we introduce a token-wise fine-grained activation recomputation and swapping strategy. We employ a bi-level hierarchical MIP technique to solve the memory planning problem and eliminate memory fragmentation. To the best of our knowledge, this is the first training framework that enables efficient training of a 7B LLM on 8 GPUs with a sequence length of 1 million. We summarize our contributions as follows: * We propose and implement an LLM training framework to address the activation data management problem in long context LLM training. * We introduce a fine-grained activation recomputation and swapping mechanism to fully utilize the idle CPU-GPU communication bandwidth during time-consuming GPU computation. * We employ a bi-level hierarchical MIP technique to solve the memory planning problem and significantly mitigate memory fragmentation. * We evaluate through extensive experiments, and demonstrate an average of 2.42× and 2.26× MFU compared to Megatron-LM and DeepSpeed, respectively. 
Additionally, is the first framework that enables the efficient training of a 7B LLM with 1 million context length on only 8 A800 GPUs. § PRELIMINARY In this section, we present an overview of the architecture and training process of LLMs, along with memory reduction strategies and distributed training techniques. Commonly used notations are listed in Table <ref>. §.§ Large Language Models §.§.§ Architecture As shown in Figure <ref>, the architecture of a typical LLM comprises an input embedding layer, multiple decoder-only transformer layers, and a final classifier layer. The embedding layer converts input tokens into continuous representations. Each decoder-only transformer layer comprises a multi-head self-attention module with a causal mask and a Feed-Forward Network (FFN) module containing Fully-Connected (FC) layers. The classifier layer takes the hidden states produced by the transformer layers as input, and generates a probability distribution over the vocabulary. §.§.§ The Training Process The training process of an LLM involves two phases: the forward pass and the backward pass. During the forward pass, the model processes the input data through its layers, and finally generates predictions. The output tensors of the operators in the forward pass are called activation tensors, some of which are stored for backward pass computation according to gradient-based learning. The backward pass, on the other hand, computes the gradients with regard to the model parameters. These gradients are used to update the model's parameters. Following the chain rule in gradient computation, the backward pass relies on the activation tensors from the forward pass to compute gradients. §.§.§ The Challenge of Huge Memory Requirement in Long Context Training In the original form of self-attention, which is the most critical module in LLMs, input tokens are initially projected into queries, keys, and values. The queries and keys are multiplied and then softmaxed to create an s× s matrix that represents attention weights, which are subsequently used to compute a weighted sum of the values. Thus, it exhibits a memory complexity of O(s^2). As the sequence length increases, storing the entire s× s tensor for the backward pass becomes infeasible. FlashAttention <cit.> has emerged as a solution for efficient attention computation in long context training. It minimizes memory I/O and carries out the attention computation in a streaming fashion in both the forward and backward pass, and thereby gets rid of the huge s× s matrix. With such designs, FlashAttention accelerates the overall computation speed (while maintaining a time complexity of O(s^2)), and eliminates the O(s^2) memory requirement. Currently, FlashAttention has become the de-facto strategy for self-attention computation. Thus, we assume FlashAttention is employed throughout this work. Although FlashAttention has reduced the memory complexity of LLM training from O(s^2) to O(s), the linearly scaling activation memory remains the primary challenge in long context training. For example, as we will elaborate in Section <ref>, when training a 7B GPT model with 32 layers and a hidden size of 4096, using a single 1 million length sequence, the forward activation tensors required by the backward pass consume 4096GB (when using half-precision numbers), whereas the typical memory capacity of a GPU is much smaller. To cope with this issue, there are two lines of effort: memory reduction techniques and distributed parallelism strategies. 
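As a back-of-the-envelope illustration of the quadratic blow-up that FlashAttention avoids, the snippet below estimates the size of a single half-precision s × s attention-score matrix; the numbers are illustrative and ignore the per-head and per-layer multiplicity.

def attn_score_matrix_bytes(seq_len, dtype_bytes=2):
    # Memory for one s x s attention-score matrix (per head, per layer).
    return seq_len * seq_len * dtype_bytes

for s in (4_096, 131_072, 1_048_576):   # 4K, 128K, and 1M tokens
    print(f"s = {s:>9,}: {attn_score_matrix_bytes(s) / 2**30:8.2f} GiB per head")
# 4K -> 0.03 GiB, 128K -> 32 GiB, 1M -> 2048 GiB; this is why FlashAttention
# never materializes the full matrix and keeps the attention memory linear in s.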
In the rest of this section, we introduce these two lines in turn. It is worth noting that although our work primarily concentrates on the memory reduction techniques, the proposed framework is compatible with a wide range of parallelism strategies. §.§ Memory Reduction Techniques Limited GPU memory has become the bottleneck of long context LLM training. To alleviate the memory pressure, there are two notable memory reduction techniques, namely activation recomputation and swapping. Activation recomputation <cit.> (a.k.a. activation checkpointing) selectively stores only the inputs of certain layers rather than storing all intermediate activations. During the backward pass, the required activations are recomputed on-the-fly. While this approach reduces the activation memory footprint required for LLM training, it introduces additional computation, which impacts efficiency. Swapping <cit.>, also known as CPU offloading, aims to relieve the GPU memory pressure by offloading GPU tensors to CPU memory, and fetching them back to the GPU when needed. Although swapping does not consume GPU computation units, it still slows down the training if the data transmission cannot be overlapped by GPU computation. In general, both memory reduction techniques release the memory of activations in the forward pass, but need to rematerialize them in the backward pass, at the price of extra computation or data transmission overhead, respectively. In the past few years, GPU computing capability has improved by over 100× (e.g., the half-precision performance of P100 and H100 are 18.7 and 1979 TFLOPs, respectively), while the improvement of CPU-GPU bandwidth is only 4× (from PCIe 3.0 to PCIe 5.0). As a consequence, mainstream LLM training frameworks favor the activation recomputation technique.[Both Megatron-LM and DeepSpeed have supported activation recomputation for a long time. Nevertheless, Megatron-LM did not support swapping until the release of TransformerEngine v1.3 in Feb 2024. Besides, DeepSpeed primarily focuses on swapping of model states, encompassing model parameters, gradients and optimizer states <cit.>, as they constitute the most significant portion of the memory footprint in short context training tasks. However, in long context training scenarios, the memory consumption of activations has surpassed that of model states.] In practice, when training LLMs with long context input, full activation recomputation is often employed, which involves storing only the input tensor of each transformer layer and recomputing the required activations during backward propagation. §.§ Distributed Parallelism Strategies Distributed training is essential for efficiently training LLMs, especially in the scenario of long context training. To facilitate training with large-scale data and models, several distributed parallelism strategies have been proposed. Data Parallelism (DP) <cit.> duplicates model parameters and distributes the input data across multiple devices. Each device holds a complete copy of the model and processes its input data independently. After backward propagation, the devices synchronize parameter gradients to ensure consistency across the model copies. Zero-Redundancy Optimizer (ZeRO) <cit.> is a series of variants built upon DP, aiming to alleviate memory pressure. In naive DP, model parameters, gradients and optimizer states are replicated among all devices. ZeRO is designed in three stages to reduce these memory requirements respectively. 
First, ZeRO-1 partitions the optimizer states among all DP workers. Next, ZeRO-2 extends ZeRO-1 by also partitioning gradients, further reducing memory footprint. Finally, ZeRO-3, based on ZeRO-2, partitions model parameters among DP workers, further mitigating memory pressure but introducing additional communication to gather parameters during forward and backward pass. Tensor Parallelism (TP) <cit.> partitions the self-attention and feed-forward modules of transformer layers across multiple devices along either the column or row dimension. It addresses the problem that LLMs can not fit into the memory of a single device. It involves extra collective communication operations (i.e. AllReduce) to synchronize the intermediate results. Therefore, TP is usually applied within a computing node, where intra-node GPUs are connected via high-bandwidth NVLink. Pipeline Parallelism (PP) <cit.> is also proposed to address the problem that LLMs cannot be fit onto a single device. Different from TP, PP partitions different model layers into several stages, then distributes the stages to different devices. The input data is processed through these stages in a pipeline fashion. Given the peer-to-peer communication style, the PP stages are often distributed across nodes. However, PP introduces a phenomenon known as “bubble”, which corresponds to GPU idle time. The issue becomes more severe when the number of micro-batches is small. To facilitate efficient long context training, several novel parallelism strategies have been recently proposed. Sequence Parallelism (SP) <cit.> is built upon TP to further reduce activation memory overhead. It splits the sequence dimension in the part of the model that does not apply TP. The original AllReduce communication now transitions to AllGather and ReduceScatter communication. DeepSpeed-Ulysses <cit.>, built upon ZeRO, is another form of sequence parallelism. During self-attention computation, it splits the head dimension, whereas in other model components, it partitions the sequence dimension. For transitioning between modules, it utilizes AllToAll communications, theoretically reducing communication overhead compared to SP. However, its SP degree is limited by the number of heads in self-attention. To further relieve the memory pressure, DeepSpeed-Ulysses leverages ZeRO to distribute model parameters. Context Parallelism (CP) <cit.> proposes sharding the query, key, value matrices within the self-attention module along the sequence dimension across different devices. During attention computation, necessary communications are involved to ensure consistent results. The communications can be overlapped with computation by careful scheduling. In practice, these parallelism strategies and memory reduction techniques can be integrated and employed simultaneously to facilitate efficient training of LLMs. § ANATOMY AND SYSTEM DESIDERATA Given the fact that the memory consumption of activations scales proportionally w.r.t. the sequence length in LLM training, we first provide an in-depth anatomy of the key characteristics of different activations in this section. Based on this analysis, we present the design desiderata that motivate the development of . §.§ Categorization of Activation Tensors To be specific, according to their life cycles, we categorize activations generated during the forward propagation into two classes, which are the skeletal activations and the transient activations, where the former is necessary for the backward propagation while the latter is not. 
For illustration, in Figure <ref>, tensors 13, 14, 17, 18, and 19 are produced during the forward propagation of a transformer layer, and are discarded before the completion of this layer's forward pass. Similarly, tensors 20, 21, 22, 23, and 24 are generated during the backward propagation of this layer, and are discarded after the corresponding computation. We term them “transient tensors” because they are both created and discarded within a single layer's forward or backward pass. Transient tensors usually serve as temporary results. Conversely, tensors 15 and 16 are generated during the forward propagation and are needed for backward propagation, so they are only discarded during this layer's backward pass. We refer to these tensors as “skeletal tensors” because they are produced during the forward pass, and are essential for the gradient calculation during the backward pass. §.§ Analysis of Skeletal Activations Figure <ref> presents all skeletal tensors generated within a transformer layer's forward propagation, along with their sizes. We can see that the total size of all skeletal activations in a single transformer layer amounts to 16bsh. To exemplify, when training the GPT-7B model (h = 4096, 32 layers) with a sequence length (s) of 1 million, if we store the skeletal activations in half-precision floating-point numbers, it would take 4096 GB for only one sequence (b = 1), exceeding the memory capacity of even 50 A100/H100 GPUs. The most important characteristic of skeletal activations is that they are needed by the backward computation, so they must be present in GPU memory no later than when the backward propagation of the corresponding transformer layer begins. However, there is no doubt that maintaining all skeletal activations in GPU memory for backward propagation is infeasible. To this end, memory-saving techniques like activation recomputation and swapping become necessary for long context training. These techniques first release the skeletal activations of a transformer layer in the forward propagation, and later rematerialize them before the corresponding backward propagation. In essence, both techniques trade time for memory: the activation recomputation technique incurs extra computation overhead, while the swapping technique necessitates transmitting the activations from CPU memory to GPU memory. For extremely long context lengths, using either technique would take significant time to rematerialize the skeletal activations, causing performance degradation. As a result, we require a meticulous orchestration of the two memory-saving techniques to manage the skeletal activations, so that we can minimize the extra overhead while accommodating the huge memory requirement in long context training of LLMs. To achieve this goal, we develop a token-wise activation recomputation and swapping mechanism, which will be demonstrated in Section <ref>. §.§ Analysis of Transient Activations Transient activations are intermediate results generated and discarded during the forward (or backward) pass of a transformer layer. In fact, there are more transient activations than skeletal activations in a transformer layer. Specifically, we observe that the number of transient activations can exceed 5 times that of skeletal activations. Without careful management, the frequent allocation and deallocation can lead to memory fragmentation and degrade system performance. 
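As a quick sanity check on the 16bsh estimate above, the following sketch reproduces the quoted 4096GB figure for the GPT-7B configuration; the half-precision assumption and the factor of 16 come from the analysis in this subsection, while the code itself is purely illustrative.

def skeletal_activation_bytes(b, s, h, n_layers, dtype_bytes=2):
    # Total skeletal activation memory: 16*b*s*h values per transformer layer.
    return 16 * b * s * h * dtype_bytes * n_layers

total = skeletal_activation_bytes(b=1, s=1_048_576, h=4096, n_layers=32)
print(f"{total / 2**30:.0f} GiB")   # 4096 GiB, matching the figure quoted above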
Fortunately, the most important characteristic of transient activations is that they are identical across transformer layers, which provides us with the opportunity to manage and reuse their memory regions to minimize fragmentation. In particular, the memory addresses of a single transformer layer's transient activation tensors can be reused by all other transformer layers' corresponding transient activation tensors. However, in practice, memory reuse is not fulfilled because the PyTorch caching allocator lacks prior information about the memory request sequence during training iterations. This inspires us to statically plan the memory addresses of each transformer layer's transient tensors, which will be described in detail in Section <ref>. § DESIGN In this section, we propose for fine-grained activation memory management. Our proposed method leverages fine-grained and structured activation management, akin to concise memos that share vital information. The main challenge of long context training is the large activation size, which scales linearly w.r.t. sequence length. We propose token-wise activation recomputation and swapping, along with bi-level memory planning, to address the issue; the two components target skeletal activation tensors and transient activation tensors, respectively. The overview of is depicted in Figure <ref>. §.§ Token-wise Recomputation and Swapping Skeletal tensors, generated during the forward pass of a transformer layer, must reside in GPU memory for the subsequent backward propagation. In practice, as the sequence length grows, the size of skeletal activations increases linearly, which can easily exceed the capacity of GPU memory. As introduced in Section <ref>, currently the most widely-used technique to tackle this issue is activation recomputation, which stores only the input of each transformer layer and discards the remaining skeletal activation tensors of this layer. Prior to the backward propagation of each layer, an additional forward pass of the layer is conducted to reconstruct all skeletal tensors so that the backward computation can be carried out. However, we note that the vanilla activation recomputation strategy is not an optimal choice to handle the challenge of linearly increasing skeletal activation memory, considering the following two reasons: (1) activation recomputation introduces redundant computation, thus diminishing training efficiency; and (2) the memory overhead of retaining the input tensor of each transformer layer can still be expensive, especially when the sequence length is too long or the number of layers is too large. Take the training of GPT-7B with a context length of 1 million as an example again. For only one sequence, the input tensors of all 32 transformer layers together consume 128GB. Even with an SP degree of 8, it takes 16GB for each GPU to store the input tensors of all 32 transformer layers, which already takes up 20% of the total GPU memory capacity. As explained in Observation <ref>, the computation complexity of FlashAttention w.r.t. sequence length is O(s^2), while the size of skeletal activations within a transformer layer scales linearly with sequence length. This provides us with the opportunity to offload skeletal activations to CPU memory, thereby saving GPU memory. We can prefetch them back to the GPU before the backward propagation of the corresponding transformer layer. 
The swapping of skeletal activations can overlap with GPU computation in long context training, since the CPU-GPU data transmission does not consume GPU computation units. To facilitate the overlapping, we utilize two rounding GPU buffers to store the skeletal activations for all transformer layers. The two rounding buffers are allocated before the actual training iterations begin. As shown in Figure <ref>, transformer layers with even layer indices place their skeletal activation tensors in rounding buffer 0, while layers with odd layer indices use rounding buffer 1. After the computation of transformer layer i, rounding buffer (i%2) is offloaded to CPU memory using a separate CUDA stream. This happens simultaneously with the computation of transformer layer (i+1). Before the forward computation of transformer layer (i+2), a CUDA event is employed to ensure the content of rounding buffer (i%2) has been fully offloaded to CPU memory, so that transformer layer (i+2) can safely overwrite rounding buffer (i%2). For backward propagation, after the backward pass of transformer layer (i+2) ends, the contents within rounding buffer (i%2) become useless, and we start prefetching the skeletal activations of transformer layer i to rounding buffer (i%2) using another CUDA stream. The prefetching of transformer layer i's skeletal activations happens simultaneously with the backward propagation of transformer layer (i+1). When the sequence length is sufficiently long, with careful computation-transmission overlapping and synchronization, CPU swapping can substitute for activation recomputation without incurring additional computation overhead. However, there are two constraints that prevent us from offloading all skeletal activations to CPU memory. * For sequence lengths that are not sufficiently long, the time required to offload all skeletal activations to CPU memory surpasses the computation time for a single transformer layer. This discrepancy forces the computation of transformer layer (i+2) to be delayed until the offloading of the rounding buffer to CPU memory is completed, thereby blocking the normal GPU computation workflow. For instance, as illustrated in Figure <ref>, when training a 7B GPT model on 8 GPUs with a TP size of 8, ideal overlap between the computation of a transformer layer and the offloading of its skeletal activations occurs only for sequence lengths exceeding 192K. In practice, the sequence lengths of most training datasets for LLMs are moderate and may not be sufficient to ensure an ideal overlap between computation and transmission. * In theory, a longer sequence length provides more opportunities for overlapping CPU offloading with GPU computation. However, in practice, the CPU memory in GPU servers is often limited. For a typical GPU server with several terabytes of CPU memory (e.g. 2TB in our environment), this is insufficient to store all skeletal activations when the sequence length is excessively long or the number of transformer layers is too large. For instance, when training the 7B model on a server equipped with 8 GPUs using a sequence length of 1 million, the skeletal activations amount to a total size of 4096GB, which is double the CPU memory capacity. Therefore, instead of simply offloading all skeletal activations to CPU memory, we employ selective activation swapping to ensure perfect overlap of computation and transmission for short sequences as well as to avoid depleting CPU memory for extremely long context lengths. 
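The forward-pass half of the rounding-buffer schedule described above can be sketched in PyTorch as follows. This is a simplified illustration rather than the framework's actual code: the buffer size and the forward_one_layer helper are hypothetical, and a real implementation also needs the symmetric prefetch path and synchronization with the backward pass.

import torch

n_layers, buf_elems = 32, 1 << 20                      # illustrative sizes
device = torch.device("cuda")
offload_stream = torch.cuda.Stream()                    # dedicated copy stream
gpu_bufs = [torch.empty(buf_elems, device=device) for _ in range(2)]
cpu_bufs = [torch.empty(buf_elems, pin_memory=True) for _ in range(n_layers)]
offload_done = [torch.cuda.Event() for _ in range(2)]

for i in range(n_layers):
    buf = gpu_bufs[i % 2]
    # Layer i may only overwrite its rounding buffer after the previous
    # offload from that buffer has finished.
    torch.cuda.current_stream().wait_event(offload_done[i % 2])
    forward_one_layer(i, out=buf)                       # hypothetical helper filling `buf`
    # Launch the D2H copy on a separate stream so it overlaps with layer i+1.
    offload_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(offload_stream):
        cpu_bufs[i].copy_(buf, non_blocking=True)
        offload_done[i % 2].record(offload_stream)

The prefetch path mirrors this pattern: a third stream copies layer i's activations back into the rounding buffer released by layer (i+2)'s backward pass, guarded by the corresponding events.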
manages to determine the selection of swapping at both the tensor and token granularities, as depicted in Figure <ref>. At the tensor granularity, we compare the benefits of applying the swapping technique rather than the recomputation technique to different modules. As depicted in Figure <ref>, FlashAttention constitutes the most substantial portion of the forward computation of a transformer layer. Notably, when the sequence length exceeds 576K, FlashAttention accounts for more than 90% of the computation involved in a single transformer layer. However, as illustrated in Figure <ref>, the output of FlashAttention only accounts for 6.25% of the total skeletal activation size. This inspires us to offload the entire output tensor of FlashAttention to CPU memory, since recomputing its output is very time-consuming. Besides, since LLMs have a layered structure, in order to reconstruct the "input_norm", "q", "k", "v" tensors, we also store the input of each transformer layer to CPU, following the common recomputation strategy <cit.>. At the token granularity, we develop the token-wise activation recomputation and swapping technique to reduce the memory consumption of all skeletal activation tensors other than the output of FlashAttention and the input of each transformer layer. To be specific, as shown in Figure <ref>, for each of these skeletal activation tensors, we only offload a fraction (denoted as α) to CPU, while the remaining part is discarded, ensuring perfect overlapping and avoiding CPU OOM errors. Before the backward propagation, the discarded part is rematerialized via recomputation while the offloaded part is prefetched. To determine the fraction α (constrained to [0,1]), we solve the following problem: max α, s.t. (S_input + S_attn + α· S_others)/B ≤ T_layer and (n-2)(S_input + S_attn + α· S_others) ≤ M_CPU, where S_input, S_attn, and S_others stand for the size of the layer input tensor, the size of the FlashAttention output tensor, and the total size of the other skeletal activation tensors, respectively, B is the PCIe bandwidth between GPU and CPU, T_layer is the forward time of a single transformer layer, n is the total number of transformer layers, and M_CPU stands for the capacity of CPU memory. It is worth noting that the last two transformer layers can initiate the backward pass immediately after the forward pass, obviating the need for swapping, hence the factor (n-2). These variables can be easily obtained through profiling before training, so we can determine an appropriate α without much effort. As a special case, when the determined fraction is 0, the skeletal buffers except the layer input and attention output can share one GPU buffer, as they are fully recomputed and do not need to be offloaded. Thus there is no need to use two rounding buffers to avoid data corruption. §.§ Bi-level Memory Planning In the previous subsection, we have tackled the management of skeletal activations by the fine-grained recomputation and swapping technique. However, frequent allocation and deallocation of the transient activation tensors still lead to GPU memory fragmentation, which forces the allocator to frequently reorganize GPU memory using time-consuming “cudaFree” and “cudaMalloc” operations. To address the issue, and to achieve full reuse of GPU memory across all transformer layers, we design a bi-level Mixed Integer Programming (MIP) method. In practice, our initial step involves profiling the sequence of memory requests during a single training iteration. 
Given the memory request sequence, the challenge lies in determining the address of each requested tensor while at the same time minimizing the peak memory usage. This task aligns with the well-established offline Dynamic Storage Allocation (DSA) problem <cit.>, which can be formulated as a Mixed Integer Programming (MIP) problem. A concise overview of this formulation is shown as follows. The offline DSA problem assumes a predefined sequence of memory allocations and deallocations, and aims to determine the address of each allocated memory block while minimizing the peak memory usage. The parameters of the offline DSA problem include: * n, the number of requested tensors. * S_i, the size of requested tensor i, for i ∈ {1,2,...,n}. * E = {(i,j) | tensors i and j have overlapping lifespans}. The problem can be written as

min M
s.t.  A_i + S_i ≤ M,  for i ∈ {1,2,...,n},
      A_i + S_i ≤ A_j + z_ij · M_cap,  for (i,j) ∈ E,
      A_j + S_j ≤ A_i + (1 - z_ij) · M_cap,  for (i,j) ∈ E,
      0 ≤ M ≤ M_cap,
      A_i ≥ 0,  for i ∈ {1,2,...,n},

where A_i stands for the address of requested tensor i, M stands for the peak memory usage, M_cap is the memory capacity, and z_ij is a binary variable that equals 0 when tensor i is placed below tensor j (A_i + S_i ≤ A_j) and 1 when tensor j is placed below tensor i (A_j + S_j ≤ A_i), for (i,j) ∈ E. Here the first constraint and the last two constraints define and limit the peak memory, while the second and third constraints ensure that tensors with overlapping lifespans do not overlap in memory. Following this formulation, the solution for each tensor's address is optimal. However, modern LLM training involves thousands of allocation and deallocation requests, which makes this MIP problem computationally intractable. Fortunately, all transformer layers have identical structures and memory request sequences, which presents repetitive substructures within the MIP problem. By leveraging this inherent repetitiveness, we can devise a bi-level hierarchical MIP optimization algorithm, which is both computationally feasible and effective. As discussed in Section <ref>, a typical LLM consists of an embedding layer, n consecutive transformer layers, and a final classification layer. As shown in Figure <ref>, each layer has a forward memory request sequence and a backward memory request sequence. The memory request sequence is in the form of a sequence of "malloc tensor_id size" and "free tensor_id size". Since all transformer layers in an LLM are identical, they have the same forward/backward pass memory request sequence. As shown at the bottom of Figure <ref>, we first solve the offline DSA sub-problem for just one transformer layer's forward (backward) pass, which is called the first-level MIP. This offline DSA problem can be simply solved by any MIP solver. After this step, the peak memory needed for the forward (backward) propagation of a single transformer layer, as well as the address of each transient tensor within a transformer layer, is determined. After solving the sub-problem for one transformer layer, all other transformer layers can reuse the same memory addresses for (de)allocation. Subsequently, we can replace the original fine-grained memory request sequence of a transformer layer's forward (backward) propagation with a "pseudo" large memory request pair, as shown in Figure <ref>. After the substitution, this reformulated memory request sequence also satisfies the formulation of an offline DSA problem, with a size small enough to be efficiently solved. We then leverage the MIP solver again to solve this second-level MIP problem. 
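To make the formulation concrete, here is a minimal sketch of the first-level problem using the open-source PuLP modeling library; the toy request list, the lifespan-overlap test, and the absence of alignment constraints are illustrative assumptions, and the actual system may use a different solver or additional constraints. The second-level problem has exactly the same shape, with each whole transformer layer treated as a single pseudo request.

import pulp

# Toy profiled requests: (size_bytes, alloc_step, free_step)
reqs = [(512, 0, 3), (256, 1, 2), (512, 2, 4), (256, 3, 4)]
M_cap = 4096
overlap = [(i, j) for i in range(len(reqs)) for j in range(i + 1, len(reqs))
           if reqs[i][1] < reqs[j][2] and reqs[j][1] < reqs[i][2]]  # lifespans intersect

prob = pulp.LpProblem("offline_dsa", pulp.LpMinimize)
M = pulp.LpVariable("peak", lowBound=0, upBound=M_cap)
A = [pulp.LpVariable(f"addr_{i}", lowBound=0) for i in range(len(reqs))]
z = {(i, j): pulp.LpVariable(f"z_{i}_{j}", cat="Binary") for (i, j) in overlap}

prob += M                                               # objective: minimize peak memory
for i, (size, _, _) in enumerate(reqs):
    prob += A[i] + size <= M
for (i, j) in overlap:                                  # live tensors must not overlap
    prob += A[i] + reqs[i][0] <= A[j] + z[(i, j)] * M_cap
    prob += A[j] + reqs[j][0] <= A[i] + (1 - z[(i, j)]) * M_cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(a.value()) for a in A], "peak =", int(M.value()))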
After this step, the addresses of all activation tensors, as well as the peak memory needed for all transient activation tensors, can be determined. §.§ System Implementation §.§.§ Overview Figure <ref> illustrates the overall architecture of . First, the job profiler takes in the model configuration and user-defined settings, then executes a training iteration to profile the memory requests directed to the PyTorch CUDA allocator during the training phase. The job profiler also determines the offloading fraction α by solving the optimization problem in Section <ref>. These memory requests comprise a sequence of allocation and deallocation instructions. Afterwards, the memory planner receives the memory requests, executes the bi-level MIP optimization algorithm, and generates a memory plan, which specifies the addresses of all activation tensors during one training iteration. Finally, the runtime executor reads the memory plan and conducts the training process. §.§.§ Job Profiler The job profiler is designed to profile the memory request sequence during a training iteration. To implement the module, we have extended the PyTorch CUDA allocator with extra interfaces that log each memory request it receives, in the format of "malloc tensor_id size" and "free tensor_id size". However, naively recording all memory requests may lead to OOM errors. For example, profiling a GPT-7B model with a sequence length of 512K on 8 GPUs can result in an OOM error since the proposed techniques are not employed during profiling. Fortunately, as the transformer layers have identical memory footprints, we leverage this property by only profiling a single transformer layer's memory footprint and then applying it to all transformer layers. When the sequence is too long, we cannot even profile one single transformer layer. In such extreme cases, we turn to the CUDA Unified Memory feature, which enables swapping between GPU memory and CPU memory under the hood, effectively creating an illusion of unlimited GPU memory. By integrating CUDA Unified Memory support into the PyTorch CUDA allocator, we have successfully managed to profile the training of extremely long context lengths without encountering OOM errors. The profiler also gathers the basic information needed to determine α in Section <ref>, including the size of each skeletal activation tensor and the forward time of a single transformer layer. Subsequently, it solves for the optimal α to maximize the overlapping of computation and transmission as well as to avoid CPU OOM errors. §.§.§ Memory Planner Given the memory request sequence generated by the job profiler, the memory planner executes the bi-level MIP optimization algorithm introduced in Section <ref> to generate a memory plan, which includes the address of each activation tensor and the peak memory usage needed during training. In all our experiments, memory planning takes less than 5 minutes, which is negligible compared to the training time of LLMs. §.§.§ Runtime Executor The runtime executor takes the memory plan as input, and executes the training process. It is built on top of Megatron-LM <cit.> and TransformerEngine <cit.>, which are among the most popular LLM training frameworks. The runtime executor utilizes two rounding buffers for the storage of skeletal activations, as introduced in Section <ref>. Meanwhile, the transient activation tensors are allocated and discarded according to the memory plan. 
Three CUDA streams are employed for efficient overlapping of data transmission and GPU computation, which are for GPU computation, activation offloading from GPU to CPU, and activation prefetching from CPU to GPU, respectively. Figure <ref> shows the scheduling of computation and transmission. After the computation of a transformer layer's forward pass, the skeletal activations of this layer are scheduled to be transferred to the CPU memory, which can overlap with the computation of the next layer. Before the backward computation of a transformer layer, the forward skeletal activations of the previous layer are scheduled to be fetched back to GPU. In addition, token-wise tensor recomputation is also scheduled before the layer's backward pass. By hiding the activation swapping with computation and enabling the fractional, token-wise activation recomputation, minimizes the overhead of activation rematerialization at full stretch. § EXPERIMENTS In this section, we conduct experiments across various model sizes and input sequence lengths to show that achieves superior efficiency in longer context training of LLMs. §.§ Setup Hardware: Our experiments are conducted on an A800 GPU cluster, with each node equipped with 8 NVIDIA A800 GPUs (80GB). The GPUs within each node are interconnected via NVLinks (400GB/s), while the nodes are interconnected through Infiniband (200GB/s). Each node has 2TB CPU memory, and the GPU-CPU communication bandwidth is 32GB/s. Baselines: We select two widely-used LLM training frameworks as baselines for our experiments. The first is Megatron-LM (commit id: ccfeda47cb) <cit.> in conjunction with TransformerEngine (v1.3) <cit.>. Megatron-LM, maintained by NVIDIA, is renowned for its comprehensive support of hybrid parallelisms, including DP, TP, PP, SP, and CP. The other baseline is Megatron-Deepspeed (commit id 7eb36a11b3) paired with DeepSpeed (v0.14.3) <cit.>, which is recognized for ZeRO optimizers and DeepSpeed-Ulysses <cit.>, a novel parallel training strategy designed for long context LLM training. Metrics: We use two important evaluation metrics to measure the training efficiency, which are Model FLOPs Utilization (MFU) and Tokens per GPU per Second (TGS). MFU is defined as the ratio of model FLOPs per second to the theoretical peak FLOPs per second of the GPU (e.g. 312 TFLOPS for NVIDIA A800 GPUs) <cit.>. Based on FlashAttention <cit.> and considering the causal mask, the exact formula for calculating model FLOPS per sample is: 6 · s · P + 6 · n · h · s^2 . MFU is a standard metric that measures the training efficiency of how model FLOPs utilize computational resources. On the other hand, TGS directly measures training throughput, providing a clear view of how quickly a model can be trained using a given amount of training samples. Both metrics are crucial for LLM researchers and engineers, enabling comparisons among various training strategies (including distributed parallelisms and activation recomputation). Workloads: Our experiments cover a wide range of workloads to examine the strength of . In particular, we consider training the 7B, 13B, 30B and 65B GPT models on 8, 16, 32, and 64 GPUs, respectively, with various sequence lengths ranging from 64K to 1408K. The detailed model configurations are shown in Table <ref>. §.§ End-to-end Evaluation We compare the end-to-end training efficiency of and two baselines. Table <ref> shows the MFU and TGS of DeepSpeed-Ulysses, Megatron-LM and under different training workloads. 
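For reference, the sketch below shows how MFU and TGS can be computed from the per-sample FLOPs formula above; the iteration time, batch size, and the 7B example configuration are assumed values for illustration, with the 312 TFLOPS A800 peak taken from the Metrics paragraph.

def model_flops_per_sample(s, n_params, n_layers, hidden):
    # 6*s*P for the weight matmuls plus 6*n*h*s^2 for causal FlashAttention
    return 6 * s * n_params + 6 * n_layers * hidden * s ** 2

def mfu_and_tgs(s, n_params, n_layers, hidden, iter_time_s, n_gpus,
                samples_per_iter=1, peak_flops=312e12):
    flops = model_flops_per_sample(s, n_params, n_layers, hidden) * samples_per_iter
    mfu = flops / (iter_time_s * n_gpus * peak_flops)
    tgs = s * samples_per_iter / (iter_time_s * n_gpus)  # tokens per GPU per second
    return mfu, tgs

# Assumed example: the 7B model (32 layers, h = 4096) at 1M tokens on 8 GPUs
# with a hypothetical ~700 s iteration time yields an MFU of roughly 0.52.
print(mfu_and_tgs(1_048_576, 7e9, 32, 4096, iter_time_s=700.0, n_gpus=8))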
During evaluation, we manually adjust the distributed parallelism strategies for each system and each workload to achieve optimal training performance for fair comparisons. Detailed parallelism strategies are provided in Appendix <ref>. Overall, is capable of training longer sequences than the competitors. Across the training of 7B, 13B, 30B, and 65B models on 8, 16, 32, and 64 GPUs, DeepSpeed-Ulysses supports sequence lengths of 256K, 256K, 128K, and 1280K, while Megatron-LM supports sequence lengths of 640K, 640K, 384K, and 512K. In comparison, achieves superior performance in all scenarios, enabling training sequence lengths of 1024K, 1408K, 1280K, and 1408K. Megatron-LM only supports up to a 640K sequence length, even though we leverage a high model parallel degree and enable the memory reduction techniques. This is unsurprising since it overlooks the memory fragmentation issue, leading to OOM for large sequence lengths. DeepSpeed, thanks to its support of DeepSpeed-Ulysses sequence parallelism and the ZeRO-3 optimizer, is capable of training a 1280K sequence length when training the 65B model on 64 GPUs. When training smaller models, DeepSpeed supports only very small sequence lengths. This is because it can only utilize a small SP size of 8, which either aligns with the number of GPUs or divides the number of attention heads (40 and 56). In contrast, by token-wise recomputation/swapping and memory planning, is able to train sequences over 1 million tokens in all scenarios. Furthermore, when compared to the baselines at aligned sequence lengths, achieves superior MFU and TGS. Across all evaluated workloads, achieves an average MFU of 51.33%. In contrast, Megatron-LM and DeepSpeed only achieve an average MFU of 23.91% and 23.26%, respectively. On average, achieves 2.42× and 2.26× MFU compared to Megatron-LM and DeepSpeed, respectively. The deficiencies of the baselines are not surprising: due to their unsatisfactory memory management, Megatron-LM and DeepSpeed require a high model parallel size (i.e. a large TP size and/or a large SP size) and full activation recomputation to avoid OOM errors, which leads to low training efficiency. For example, in order to train the 65B model with a 1024K sequence length on 64 GPUs, DeepSpeed has to simultaneously use an SP size of 64, ZeRO-3, and activation recomputation to avoid OOM errors. The large SP/TP size and ZeRO degree introduce significant communication overheads, while the vanilla memory reduction techniques incur extra overhead, further diminishing training efficiency. Memory fragmentation is another factor leading to the low training efficiency of these systems. When the GPU memory becomes highly fragmented and cannot provide sufficient space for new tensors, the allocator will call expensive "cudaMalloc" and "cudaFree" operations to reorganize the GPU memory, which blocks GPU computation substantially. When training the 7B model on 8 GPUs using Megatron-LM, the memory reorganization operation is triggered 6 times and 16 times per iteration for sequence lengths of 128K and 256K, respectively. In contrast, successfully addresses these issues. On the one hand, the fine-grained activation recomputation and swapping technique significantly reduces the computation burden compared to vanilla full recomputation. This advantage is even more pronounced during long-context training. On the other hand, adopts optimized memory planning for activations, which avoids GPU memory fragmentation and the time-consuming memory reorganization process. 
Reducing memory fragmentation also allows for more efficient model parallelism configurations. For example, when training the 30B model on 32 GPUs with a 256K sequence length, adopts a TP size of 8, a CP size of 2, and a DP size of 2, which is more efficient than Megatron-LM with a TP size of 8 and a CP size of 4. Additionally, does not trigger any memory reorganization since the memory is already managed. As a result, consistently achieves an MFU of approximately 50% across all model sizes and sequence lengths, enabling the efficient training of significantly longer sequences compared to the baselines. §.§ Ablation Studies Next, we assess the effectiveness of the proposed techniques in . All experiments in the ablation studies are conducted by training the 7B model on 8 GPUs, keeping the parallelism configuration fixed at a TP size of 4 and a CP size of 2. §.§.§ Effectiveness of Memory Planning To evaluate the effectiveness of memory planning, we compare two variants of with full recomputation, one with and one without memory planning. As shown in the first two rows of Table <ref>, without memory planning, the longest sequence supported is only 384K, achieving an MFU of 25.67%. After applying memory planning, the longest supported sequence length increases to 640K, with an MFU of 42.15%. The results are reasonable since full recomputation without memory planning suffers from severe memory fragmentation, resulting in OOM errors in large sequence length scenarios. Additionally, the frequent GPU memory reorganization process further impairs training efficiency. By employing memory planning, the fragmentation issue can be minimized, providing more memory for longer context training. By getting rid of GPU memory reorganization, memory planning brings an average of 1.51× MFU at the same context length. §.§.§ Effectiveness of Token-wise Recomputation For token-wise recomputation and swapping, we compare and its variants, one with full recomputation and another with full swapping. The results are shown in the last two rows of Table <ref>. When training with an appropriate sequence length, which is 256K in this scenario, the computation time of one transformer layer can fully overlap with the offloading time of a layer's activations. Therefore, full swapping with memory planning can achieve an MFU of 53.62% at a 256K sequence length, far exceeding the 42.05% achieved by full recomputation with memory planning. However, for short sequence lengths, such as 64K, the offloading time of one layer's activations blocks the GPU computation, resulting in a lower MFU than full recomputation. Full swapping presents another challenge as the sequence length grows longer: the host memory is rapidly depleted by offloaded activations, leading to OOM errors. By employing token-wise recomputation together with swapping, consistently improves training efficiency for both short and long context lengths. For short sequence lengths like 64K, our tensor-level design only offloads the input tensor of the transformer layer and the FlashAttention output tensor to CPU memory, enabling efficient overlap of GPU computation and data transmission. For long context lengths, our token-level management successfully avoids depleting the CPU memory, and incurs only minimal recomputation overhead. Among all the methods, supports the longest sequence length. Considering MFU, achieves an average of 1.22× MFU compared to full recomputation with memory planning, and an average of 1.13× MFU compared to full offloading with memory planning. 
§.§ Scalability To demonstrate the scalability of , we train the 7B model on 8, 16, 32, and 64 GPUs respectively, and report the maximum supported sequence length. As shown in Figure <ref>, when the number of GPUs increases, the maximum sequence length supported by increases linearly. When training on 8, 16, 32, and 64 GPUs, is capable of training 1, 2, 4, and 8 million sequence lengths, respectively, which demonstrates ideal scalability. also consistently maintains an MFU of over 50% across different numbers of GPUs, as shown in Figure <ref>. For DeepSpeed, as the number of GPUs increases, it can enlarge the SP size, leading to longer supported sequence lengths. Note that because the 7B model has 32 attention heads, and thereby the maximum SP size is 32, DeepSpeed achieves the same maximum sequence length of 1536K on both 32 and 64 GPUs. Megatron-LM supports context parallelism, which has better scalability than DeepSpeed. When the number of GPUs increases, the longest sequence length it can handle grows sublinearly. Compared to the baselines, introduces fine-grained activation memory management, achieving not only ideal scalability but also better efficiency. We also evaluate the MFU metrics of the three systems when training the 7B model on 64 GPUs with sequence lengths varying from 1024K to 8192K. In Figure <ref>, as the sequence length increases, the MFU of consistently stays above 50%, surpassing the competitors significantly. §.§ Convergence of To demonstrate the correctness of our system implementation, we conduct a convergence experiment. Specifically, we train the 7B model with a 128K sequence length on 8 GPUs and compare the convergence of and Megatron-LM for 1000 iterations. For both systems, we fix the parallelism strategy to a TP size of 4 and a CP size of 2, and for , we enumerate the value of α in {0, 0.125, 0.25, 0.5, 1}. As shown in Figure <ref>, the loss curves of with different α values all align with Megatron-LM, confirming the correctness of our system implementation. § RELATED WORK Parallelism strategies for long context training: To tackle the challenge of long context training, DeepSpeed-Ulysses <cit.> employs a novel AllToAll communication to facilitate the partitioning of the input sequence among GPUs, achieving lower communication overhead compared with Megatron-LM sequence parallelism <cit.>. LightSeq <cit.>, Ring Attention <cit.>, and Megatron-LM context parallelism <cit.> propose to split the sequence within the self-attention computation, achieving better scalability. Recent efforts in the realm of sequence and context parallelisms <cit.> aim to integrate multiple strategies and enhance existing distributed settings. It is worth noting that the fine-grained memory management of is orthogonal to these distributed parallelism strategies. Activation recomputation and swapping: Capuchin <cit.> proposes to combine recomputation and swapping to reduce the memory footprint during training. The swapping decision is made by considering the tensor access pattern. In addition to tensor recomputation, MegTaichi <cit.> also proposes to co-optimize the tensor partition. Coop <cit.> notices that naive tensor recomputation leads to severe memory fragmentation, and proposes heuristics to reduce memory fragmentation during tensor recomputation. While these works offer solutions for common deep learning models, they do not take advantage of the specific characteristics of LLM training to achieve full overlapping and fragmentation minimization. 
Memory planning for deep learning models: The memory allocation problem in deep learning models can be regarded as a DSA problem and solved by MIP <cit.>. OLLA <cit.> proposes to optimize the lifetime and memory location of tensors during the training process by solving a joint ILP problem, reducing the peak memory during training. However, these methods do not exploit the repetitive substructure in LLMs and rely on heuristics to simplify the integer programming problem. § CONCLUSION In this paper, we proposed to address the memory challenges in long context LLM training. We designed a fine-grained activation recomputation and swapping strategy to fully utilize the idle PCIe bandwidth during GPU computation, thereby reducing the activation rematerialization cost in long context LLM training. We employed a bi-level MIP technique to solve the problem of memory allocation within one transformer layer, and reused the same memory space for each identical layer so as to eliminate memory fragmentation. Through extensive experiments, we demonstrated that achieved an average of 2.42× MFU compared to Megatron-LM. By leveraging fine-grained tensor memory management, achieved 52.30% MFU when training a 7B LLM with a 1 million sequence length on only 8 A800 GPUs. § DETAILED PARALLELISM STRATEGY IN EVALUATION Tables <ref>, <ref>, and <ref> present the detailed parallelism training strategies for end-to-end evaluation.
http://arxiv.org/abs/2407.13226v1
20240718072421
Towards a complete picture of the Sco-Cen outflow
[ "M. Piecka", "S. Hutschenreuter", "J. Alves" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
University of Vienna, Department of Astrophysics, Türkenschanzstrasse 17, 1180 Vienna, Austria martin.piecka@univie.ac.at University of Vienna, Research Network Data Science at Uni Vienna, Kolingasse 14-16, 1090 Vienna, Austria Previous studies have shown strong evidence that the Sun is crossing an outflow originating from the Sco-Cen OB association. Understanding this outflow's origin and structure illuminates how massive star formation shapes the interstellar medium (ISM) and helps predict future Galactic conditions affecting our Solar System. We analysed H i emission and optical ISM absorption lines towards 47 early-type stars around the Upper Sco region to refine the map of the Sco-Cen outflow. Combined with data for nearby stars, we find that the outflow has at least two components: a faster, low-density component traced by Ca ii, and a slower, possibly lower-density component traced by Mg ii and Fe ii in the UV that is passing through the Earth. A constant flow model successfully describes both components with (l,b,|v⃗|) = (335.4^∘, -6.8^∘, 14.0 km s^-1) and (305.5^∘, +17.6^∘, 21.2 km s^-1), respectively. The origin of the faster component points towards the Sco-Cen 15 Myr population, which is consistent with the origin of the slower component within 2 σ. A simple model comparison indicates that a constant flow is favoured over a spherical flow geometry, implying an extended distribution of feedback sources within Sco-Cen. We also found that a poorly studied 25 pc long H i cloud at a distance of 107 pc belongs to the established Sco-Cen flow. Towards a complete picture of the Sco-Cen outflow M. Piecka1 S. Hutschenreuter1 J. Alves1,2 Received X XX, XXXX; accepted X XX, XXXX ============================================================================ § INTRODUCTION The Scorpius-Centaurus OB association (Sco-Cen) is known to produce blue-shifted interstellar absorption lines in the spectra of nearby stars, suggesting an outflow of material from its young stellar population <cit.>. Since then, research in the local interstellar medium (ISM) has revealed several warm, low-density clouds within 30 pc, whose kinematics suggest an association with this outflow <cit.>. Likely related, the detection of ^60Fe in deep-sea sediments suggests a recent influx of supernova ejecta into the Solar System, likely originating from Sco-Cen or the Tucana-Horologium association <cit.>. As noted in <cit.>, there is evidence for a gaseous outflow from Sco-Cen that encompasses the Ophiuchus and the Lupus clouds. However, due to a limited number of sightlines probing the ISM, they were unable to identify any specific spatial sub-structure within the flow, though their high-resolution spectra did reveal kinematic sub-structure. Such spatial and kinematic sub-structures are expected due to the complex nature of outflows, as demonstrated in simulations <cit.>. More recently, <cit.> conducted a detailed study of the spectrum towards HD 102065, identifying kinematic sub-structure with the most negative component at -20 km s^-1 (w.r.t. the local standard of rest, LSR), similar to the most negative radial velocities found by <cit.>. Sco-Cen is the closest OB association to Earth, containing stars as old as ∼20 Myr and as young as protostars <cit.>. Recently, <cit.> constructed a high-resolution star formation history map of Sco-Cen showing the existence of a dominant formation peak about 15 Myr ago, when most of the stars and clusters were formed.
This work also revealed chains of ordered star-forming regions, likely formed from the feedback of Sco-Cen's massive stars over the last 10 Myr <cit.>. This feedback flow is likely to also be responsible for the enrichment of star-forming clouds in Ophiuchus with short-lived radionuclides like ^26Al <cit.>. In this Paper, we attempt to connect previous observations of the Sco-Cen outflow with archival data to begin an investigation on the structure of the Sco-Cen flow. The region of interest is the thin ISM between Sco-Cen and Earth, as represented in Fig. <ref>. We make use of the combination of spectral information obtained with ESO telescopes to map ISM absorption lines, combined with astrometric Gaia Data Release 3 <cit.> and Hipparcos <cit.> to constrain ISM distances. We also use H i data <cit.> in our analysis. Unless stated otherwise, all velocities are presented in LSR. This study aims to illuminate the interactions between the massive stars and the gas surrounding the Sco-Cen association and improve our understanding of the source of the feedback that shapes the local ISM, contributing to a better understanding of the environmental conditions affecting the Solar System. § ARCHIVAL DATA We make use of the spectra available in the ESO archives. Two instruments are associated with data products that have wavelengths already shifted to the barycentric rest frame: FEROS <cit.> and HARPS <cit.>. Since FEROS observations cover a larger number of Sco-Cen stars, specifically within and around the Upper Sco population, we use only these spectra for extracting information about spectral lines. We use HARPS only for a visual inspection of spectral lines at a higher resolution. UVES <cit.> is used for investigating an interstellar titanium line in the near-UV. Our primary aim is to make use of two interstellar spectral lines within the 3800-8000 Å region: Ca ii K line (3933.66 Å) and K i line (7698.96 Å). Ca ii and K i are both doublets, but we are forced to ignore Ca ii H line at 3968 Å and K i at 7665 Å due to blending with a stellar (Balmer-ϵ) and a telluric feature (O_2 A-band), respectively. The Ca ii H line is used to validate the source of features around 3933 Å attributed to calcium, together with the Ti ii line at 3384 Å, which is considered as a tracer of comparable ISM conditions as those probed by Ca ii <cit.>. To be able to detect the interstellar calcium line, the spectrum of the studied star must contain a very weak (or ideally no) stellar feature at this wavelength. This forced us to focus on studying only lines of sight towards hot-type stars (O, B, possibly early A), with the spectral class cut-off depending on the projected rotational velocity of the star. We identify 45 feasible targets with available FEROS spectra (see Appendix <ref>). These objects are located at distances between 100 and 300 pc. In the sky, the targets are spread around the ρ Oph region within the radius of about 17^∘. The central regions of this area are more densely covered when compared to the outskirts, which is the result of an observational bias – the hot stars in Upper Sco are usually located closer to the high-extinction star-forming region. To investigate the detailed sub-structure of calcium <cit.>, we also extracted the HARPS spectra for 9 stars of our sample. Additionally, the spectra of HD 158427 (α Ara) and HD 116658 (Spica) were also obtained to check the profiles of optical interstellar lines towards B-type stars at lower distances. 
Identifying an outflow within 70 pc from the Sun using hot-type stars is impossible due to the lack of such objects. Instead, one needs to rely on different methods. <cit.> focused on later-type stars (mostly cooler than B) at closer distances, bridging the gap between more distant and nearby probes of the ISM. In this work, we use their published heliocentric radial velocities extracted from HST UV spectra. To investigate a potential link between the dense ISM around the studied OB stars and the flow traced by calcium, we incorporate additional data into our analysis. We use the HI4PI survey <cit.>, which offers the most comprehensive all-sky coverage of neutral hydrogen (H i), to determine the velocities of denser structures. This survey offers a spatial resolution of 16.2 arcmin and a kinematic resolution better than 2 km s^-1. Finally, we use the 3D dust map of <cit.>, which will play a critical role in allowing a distance determination towards structures identified in H i. We note the availability of the Ca ii (and the Na i) map from <cit.>. However, we find that this map is of limited use, as it does not distinguish between the velocity components seen in the profiles of the spectral lines. § ANALYSIS OF CA II AND K I SPECTRAL LINES The continuum normalisation was performed by masking the interstellar and stellar features and fitting a cubic spline, similar to the approach described in <cit.>. Afterwards, stellar/telluric features were fitted with a combination of several generalised Gaussians and subtracted from the spectrum, while keeping the interstellar line of interest masked. The reduced spectrum was finally fitted with a combination of standard Gaussians. All of the components of the observed spectrum (continuum, stellar/telluric features, interstellar lines) were fitted using . At least two Gaussian components are required to properly fit the profiles of the Doppler-split calcium line in the spectra of our targets. We chose not to fit more complicated profiles due to the limited resolution of FEROS. By detecting the same splitting in the profiles of the Ca ii H line and the near-UV Ti ii line (if a UVES spectrum is available), we can confirm that the complex profile of the calcium line is the result of Ca ii Doppler-splitting and not an overlap of unrelated features. The effect of splitting was observed in potassium only in the case of HD 142184, where a blue-shifted component (ΔRV≈ -7 km s^-1) appears in both of the K i doublet lines. For this specific case, the blue-shifted component was masked during the fitting procedure. The resulting radial velocities are presented in Table <ref> – for an easier comparison with the literature, RVs were shifted to the LSR using , where (U_⊙,V_⊙,W_⊙) = (11.10, 12.24, 7.25) km s^-1 was obtained by <cit.>. Additional information about the extracted values and the corresponding errors is provided in Appendix <ref>. We find no prominent intervening cloud in front of Spica and α Ara. This is revealed by the lack of potassium absorption towards these two lines of sight. The spectra of both stars show two calcium components – one located at ∼ -7 km s^-1, and an offset component that is either red-shifted (Spica) or blue-shifted (α Ara). The RV distributions (Fig. <ref>) of the blue and the main calcium components are centred at around -15 km s^-1 and +3 km s^-1, respectively.
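As a rough illustration of the two-component Gaussian fit described above, the sketch below fits a synthetic, continuum-normalised Ca ii K profile and converts the fitted centroids to radial velocities. The synthetic data, initial guesses, and the use of scipy are our assumptions for illustration only, not the pipeline actually used in this work.

# Minimal sketch: fit two Gaussian absorption components to a normalised
# Ca II K profile and convert the fitted centroids to radial velocities.
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
CA_K = 3933.66  # Ca II K rest wavelength [Angstrom]

def two_gauss(wl, d1, mu1, s1, d2, mu2, s2):
    """Normalised flux = 1 minus two Gaussian absorption components."""
    return (1.0
            - d1 * np.exp(-0.5 * ((wl - mu1) / s1) ** 2)
            - d2 * np.exp(-0.5 * ((wl - mu2) / s2) ** 2))

# Synthetic spectrum: main component near +3 km/s, blue component near -15 km/s.
wl = np.linspace(3932.8, 3934.5, 300)
truth = (0.5, CA_K * (1 + 3.0 / C_KMS), 0.05, 0.2, CA_K * (1 - 15.0 / C_KMS), 0.05)
flux = two_gauss(wl, *truth) + np.random.default_rng(0).normal(0, 0.01, wl.size)

p0 = (0.4, CA_K + 0.05, 0.06, 0.1, CA_K - 0.15, 0.06)  # rough initial guesses
popt, pcov = curve_fit(two_gauss, wl, flux, p0=p0)

for mu in (popt[1], popt[4]):
    rv = C_KMS * (mu - CA_K) / CA_K   # radial velocity of the component
    print(f"component at {mu:.3f} A  ->  RV = {rv:+.1f} km/s")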
Comparing with the CO velocity of the ρ Ophiuchus cloud <cit.>, we find that the velocity distribution of the main calcium component fits well with the velocity of the Ophiuchus complex. We also note a relatively strong one-to-one relation between the RVs of potassium and of the main calcium component (Pearson coefficient ρ=0.78). The identified velocity of the blue component coincides with the velocity of the outflow, as suggested in the literature (see Section <ref>). HARPS spectra reveal that the blue velocity component is not a single line but consists of about two or three sub-components (see Fig. <ref> for an example) that are unresolved in most of the FEROS spectra (see Appendix <ref>). § MODELLING THE SCO-CEN OUTFLOW By taking into account the results from <cit.> and the results of our analysis of the Ca ii lines, we have enough information to make a statement about the structure of the observed flow in the ISM. We attempt to estimate the 3D velocity vector field using simple flow models. In what follows, we treat the UV spectral lines probing the local ISM separately from the Ca ii lines observed towards the Upper Sco. The first model used to determine the origin of the flow and its amplitude is a constant (and uniform) vector field, or a constant flow for brevity. This model is parameterised by the vector v⃗ = (v_x, v_y, v_z), defined in the same coordinate system as the map in Fig. <ref> (z points out of the plane, the Sun is at the origin). From this, the radial velocity of the flow between us and a star can be calculated by projecting this vector on the respective normalised position vector in heliocentric coordinates. This model is likely the simplest model to fit an outflow, and can at best constrain the large-scale velocity amplitude and angular origin of the flow. The second model is an infinitely thin expanding spherical shell, parameterised by the position of its origin in spherical coordinates at a distance r, o⃗ = (l, b, r), the sphere radius R, and the velocity amplitude perpendicular to the surface of the shell, v_⊥. Such a model should work well in the context of past supernova events occurring in Sco-Cen <cit.>. The respective radial velocity in the line of sight towards each star is calculated by projecting the normalised position vector onto the normal vector on the sphere at the intersection of the line of sight of the star with the sphere. While this model is slightly more complex than the simple constant flow, it has the advantage of being able to constrain the distance to the origin of the flow. We only use this model for the Ca ii data, as the data set from <cit.> is difficult to fit with a thin expanding shell. This is due to the fact that distinguishing between the two models becomes impossible on sufficiently small scales (local flatness) given the magnitude of the measurement uncertainties. Assuming the diameter of the local complex of clouds to be 30 pc <cit.>, a distance of 145 pc from the source <cit.>, and an expansion velocity of about 25 km s^-1 <cit.>, we estimate a maximum deviation of 0.5 km s^-1 between the two models – a value below the limit set by the turbulence <cit.> and the expected RV errors <cit.>. Furthermore, previous works already showed that a simple cosine model fits the relation between the measured RVs and the angular separation of the line of sight from the coordinates of the flow's origin well <cit.>. To fit the data from <cit.>, we make use of all of their presented radial velocities.
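Because the constant-flow model predicts the radial velocity towards a star as the projection of a single velocity vector onto the unit vector of that sight line, it is linear in (v_x, v_y, v_z). The sketch below illustrates this with a plain weighted least-squares fit on made-up sight lines; the analysis in this paper instead constrains the parameters with nested sampling, so this is only a conceptual illustration.

# Conceptual sketch of the constant-flow model: RV_model = n_hat . v,
# fitted here by weighted linear least squares on fake data.
import numpy as np

def unit_vectors(l_deg, b_deg):
    l, b = np.radians(l_deg), np.radians(b_deg)
    return np.column_stack([np.cos(b) * np.cos(l), np.cos(b) * np.sin(l), np.sin(b)])

def fit_constant_flow(l_deg, b_deg, rv, rv_err):
    """Returns the flow vector, its amplitude, and the Galactic direction the
    vector points towards (the flow origin lies in the opposite direction)."""
    A = unit_vectors(l_deg, b_deg) / rv_err[:, None]
    v, *_ = np.linalg.lstsq(A, rv / rv_err, rcond=None)
    speed = np.linalg.norm(v)
    l0 = np.degrees(np.arctan2(v[1], v[0])) % 360.0
    b0 = np.degrees(np.arcsin(v[2] / speed))
    return v, speed, (l0, b0)

# Fake sight lines drawn from an assumed flow, to show the recovery.
rng = np.random.default_rng(1)
l = rng.uniform(330, 360, 30); b = rng.uniform(5, 35, 30)
v_true = np.array([-10.0, 5.0, -3.0])            # km/s, arbitrary
rv = unit_vectors(l, b) @ v_true + rng.normal(0, 3.0, 30)
print(fit_constant_flow(l, b, rv, np.full(30, 3.0)))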
Based on their description, we assume a constant error of 3 km s^-1 for each data point. Unlike in <cit.>, we do not aim to analyse the sub-structure of the local ISM but rather focus on the mean kinematic properties. We use the nested sampling algorithm <cit.> to constrain the posterior distributions of the respective model parameters and choose wide, uninformative prior distributions for all parameters, summarised in Table <ref>. In Table <ref>, the mean and the standard deviations of the respective marginal posterior distributions for the model parameters are summarised for all three cases. For the constant flow models (Fig. <ref>), we also show the posterior mean and standard deviations of the absolute flow velocity |v⃗| and the angular origin of the flow (l, b), all calculated from posterior samples. It should be noted that the origin of the local flow probed in the UV is located in the upwind direction. The model constrained by the <cit.> data shows much smaller uncertainties, which can be attributed to the larger data set and the full sky coverage. Both the origin and the velocity amplitude of the flow are notably different when comparing the ISM traced by Ca ii and the UV features studied by <cit.>, specifically Mg ii and Fe ii. While the difference in the velocity amplitudes is statistically significant, the high uncertainty in the determined coordinates of the origin of the Ca ii flow makes a definitive statement impossible. We note that some of these distributions are significantly skewed, implying that these numbers have to be interpreted with care. The marginal posteriors of the Ca ii sky position (thin shell case) are shown in Fig. <ref>. The comparison of the constant flow models reveals notable differences in both the velocity amplitude and the flow origin. The latter plot reveals that the radius and distance of the sphere are strongly correlated, with R being only slightly smaller than r in most cases and the mode of the R posterior close to the maximum value allowed by the prior (400 pc). This indicates that the fit prefers the sphere to be as flat as possible, and hence puts its origin far beyond Sco-Cen, rendering these results unlikely, as an origin beyond Sco-Cen would imply a past interaction of the flow with Sco-Cen, making the geometry of a perfect sphere extremely unlikely. We hence prefer the constant flow model as the best explanation of the observed flow, with the caveat that significant residuals remain in the case of Ca ii. These results suggest that a single feedback source or event is unlikely to explain the available observed data. In summary, we confirm that the flows probed by Ca ii and UV features originate from Sco-Cen, specifically from regions somewhere between the Lupus and Chamaeleon clouds, towards the general location of the Sco-Cen 15 Myr population <cit.>. The two identified origins coincide only within 2 σ uncertainties, with the posterior based on the calcium spectra covering a much larger area in the sky. It is worth noting that the local flow appears to be slower than the flow identified in Ca ii. § CONNECTING CALCIUM AND HYDROGEN KINEMATICS Under the right conditions, H i can be a very useful tracer of an outflow. Since the ISM density is expected to decrease as a function of the distance from Sco-Cen (origin of the flow), one might also expect a gradual decline of the intensity of the 21-cm line. We noticed the presence of two clearly distinguishable structures towards the Sco-Cen massive stars in the HI4PI data.
* We note the presence of an H i elongated cloud visible at velocities between -18 km s^-1 and -10 km s^-1. It is about 20^∘ long, extending from (l,b) = (357.6^∘, +32.8^∘) southwards to (l,b)=(346.4^∘, +16.3^∘). <cit.> first identified this cloud and estimated its distance to be 170 pc. <cit.> later showed that the cloud is a part of the gas expansion in this region, likely driven by stellar winds. * An H i ring with a radius of ∼ 9^∘ at velocities as low as -30 km s^-1, and as high as 0 km s^-1, centred close to the massive stars β Sco and δ Sco. The H i emission projected onto the sky and overlaid with the H ii emission <cit.> can be seen in Fig. <ref> (black-white-red and black-blue colour-maps, respectively). The elongated filamentary cloud and the ring-like structure can be clearly identified in H i. Furthermore, we note that the H i ring appears to enclose the prominent H ii region Sharpless 7, or Sh2-7, which is being primarily ionised by δ Sco. Given the narrow edge of the ring, its large extension in velocities, and it being centred close to β Sco and δ Sco, we suggest a possible connection to the stellar feedback. We are unable to identify a similar connection for the elongated cloud. The ionised regions in Fig. <ref> located to the east and to the south are related to ζ Oph and τ Sco, respectively. Using the 3D dust map from <cit.>, we are able to identify a dust feature at the coordinates of the H i cloud (l,b)=(353.8^∘, +25.5^∘). We determine the distance towards this cloud to be between 95 pc and 120 pc. The nominal distance is identified at the distance channel showing the highest extinction, specifically at 107 pc. Since the determined distance to the closest star (β Sco) is about 125 pc, the hydrogen outflow material must extend to at least 20 pc away from the source of the flow. We note that the distances to the most important outflow drivers in this region (δ Sco, β Sco) based on the properties of their hosting clusters <cit.> suggest that they are located further away from the cloud, between 140 and 155 pc away from the Sun. The contours of the dust cloud are displayed in Fig. <ref> – the eastern side of the dust cloud is located on top of the northern half of the H i cloud (close to its centre). On the other hand, the western portion of the dust cloud has a shape resembling Sh2-7. The contours of the dust map (A_V = 9 mmag) at various distances are displayed in Fig. <ref>, indicating that the western part of the dust cloud is less extended in the distance than its eastern part. The positional and the kinematic structures of the H i cloud and the ring display variations that we presently ignore. A quick look at various velocity bins (from -30 to +10 km s^-1) of the H i map reveals sudden changes (∼ 1 km s^-1, ∼ 1^∘) in the morphology of the structures in the sky. Additional complexity is introduced when one starts to look at very high absolute value of the RVs <cit.>. Furthermore, the bright endpoints of the H i cloud remain obvious at negative velocities as high as -30 km s^-1, in contrast with the rest of the elongated cloud that appears to span ≈ 8 km s^-1. To investigate a potential transition from the hydrogen flow to the previously described calcium flow, we analysed H i spectra from the HI4PI data cube towards our calcium-probing targets listed in Table <ref>. A comparison of the H i and Ca ii line profiles (Fig. <ref>) reveals a striking resemblance in most cases, particularly towards the sources probing the H i cloud. 
This similarity extends to the hydrogen line's overall shape and potential sub-structure, suggesting a strong connection between the two flows. This connection is critical in establishing the distance to the flow traced by Ca ii. We also note a possible existence of small systematic differences in the velocities of calcium and hydrogen – in most lines-of-sight, the differences are smaller than 5 km s^-1 (between 2 and 3 km s^-1 in Fig. <ref>). § COMPARISON WITH LITERATURE There are many excellent published papers addressing the same topic as this paper. Although we do not intend to provide a comprehensive review <cit.>, several studies closely related to our analysis warrant special attention. Below, we compare our results with those from five other key studies. §.§ Crawford (1991) This paper was already mentioned in Section <ref>. While the most negative velocity component of Ca ii and Na i was found to be at around -20 km s^-1, the outflow models analysed by <cit.> suggest a much smaller flow velocity amplitude between 7 and 10 km s^-1. When compared with the local ISM, this would suggest an acceleration of the flow as a function of the distance from the source of the outflow. On the contrary, our analysis suggests a possibility of flow deceleration. The type of the fitted model (an expanding shell and a constant flow) does not significantly change the value of the velocity amplitude, a result achieved in both works. It should be pointed out that unlike in our work, <cit.> used stars across the whole Sco-Cen (80^∘× 30^∘) in their analysis. Despite this, our results agree in terms of the origin of the outflow, which seems to be located westwards from the Upper Sco and above the Galactic disk. The relatively low number (23) of studied lines of sight limits the precision of the flow modelling presented by <cit.>, while in our cases the limiting factor is the relatively low sky coverage (30^∘× 20^∘). Extending the sky coverage to the whole Sco-Cen (or beyond) and keeping the high number of studied sources per area presented in our work should provide the best possible data set for further investigations of the Sco-Cen outflow when making use of the optical interstellar absorption lines. §.§ Frisch (1995) and Frisch et al. (2011) <cit.> and <cit.> presented outstanding reviews of the past studies focused on the local ISM. While the main focus of these works was put on the <20 pc distant local clouds, this topic connects to our work via kinematics. As was already mentioned, the RVs obtained for the local ISM put the origin of the local ISM to a region somewhere in Sco-Cen. Since an outflow can be connected to both, the region very close to the Sun and to the regions within Sco-Cen, it is clear that there must be either continuity of the outflow or that the outflow parts are the results of multiple flow-driving events. Our work confirms many of the results referenced in the mentioned review papers. For example, both papers show that the observed RVs (in LSR) follow a cosine law when plotted as a function of the Galactic longitude or the angular distance from the source of the flow. This is true for both, Ca ii <cit.> and the local ISM probing UV features <cit.>. This behaviour is expected from a flow that can be best described by using a constant vector field, as was discussed in <cit.>. Furthermore, <cit.> discussed the neutral hydrogen kinematics. In this case, we are able to provide a more precise distance measurement to the H i cloud originally presented by <cit.>. 
In future works, this distance determination should help to better understand the processes that drive the gas flow, especially when it comes to the interaction of the Sco-Cen outflow driving force with the relatively low-density dust clouds. §.§ Krause et al. (2018) <cit.> used observations and theory to try to characterise the established Sco-Cen outflow. The authors compared the H i velocities with the Na i spectra combined with the stellar parallaxes, similar to our analysis based on Ca ii. Unlike us, the authors' primary use of the optical ISM line was to constrain gas distances. This was done by associating components of the H i line profile with the components observed within the Na i profile, yielding an upper distance limit for each line of sight. The region probed by <cit.> includes the H i cloud and ring analysed in this work. It is precisely the multiple blue-shifted components that draw our attention. In their Figure 5 (right panel), <cit.> suggest that these components are "co-spatial but separated in the velocity space". Comparing results from both works, it seems that their blue-shifted (tube-like) structures might be related to the gas ring identified in this work. However, the prominent filamentary cloud appears to be missing in their results – due to its size, it should appear as a large (in dimensions similar to their Upper Sco loop) structure at around -13 km s^-1. As was mentioned above, the existence of a local ISM that is kinematically connected to the gas flow observed near/within Upper Sco points towards a continuity of the Sco-Cen outflow. Re-examining the results of our work, we find no clear evidence that would support (or contradict) the claim of a co-spatial structure mentioned by <cit.>. The discussion regarding the hydrogen ring presented in Section <ref> might hint at this possibility but we prefer to avoid speculating about this topic. Further research is certainly required to provide additional information about this property of the flow. In general, we find an agreement with the results obtained by <cit.>. Their main point is the interaction of two super-bubbles, with the interface between them containing a denser gas (and dust) structure. This interface seems to coincide with the intervening material located around 110 pc away from the Sun, visible in front of ζ Oph and β Sco in our Fig. <ref>. The complex kinematic properties of this gas component observed in H i suggest an interaction between the outflow driving force and the interface material. The sub-structure identified in the Ca ii line profile in some lines of sight might be partially related to the same process, but this needs to be confirmed with higher resolution spectra (obtained, for example, using HARPS). §.§ Linsky et al. (2022) <cit.> give an update on the information about the local clouds that are part of the local flow, building up primarily on the results presented in <cit.>. The updated data show hints of variations in the velocity dispersion and temperatures of the local ISM. If proven to be a statistically significant result, it would be interesting to identify the processes leading to these conditions – could these variations be a general property of the Sco-Cen outflow? § DISCUSSION AND CONCLUSIONS In this study, we connected observational evidence from nearby stars and those in the Sco-Cen association, indicating the presence of a general interstellar outflow originating from the OB association.
We utilised two different probes of the ISM: the Ca ii line profiles obtained from FEROS spectra and the radial velocities (RVs) derived from the UV (Mg ii, Fe ii) spectra by <cit.>. By fitting a constant flow model (Fig. <ref>), we confirm that both flows appear to originate from a region between the Lupus and the Chamaeleon clouds, or towards the direction of the major star formation event in Sco-Cen, where most of the massive stars and largest clusters in the association formed ≈ 15 Myr ago <cit.>. We also found significant differences in the exact origin and flow velocity amplitudes, suggesting the existence of distinct kinematic (and possibly spatial) components of the Sco-Cen outflow. Within the flow, we identified a previously understudied H i cloud with the concurrent detection in H i and Ca ii at about the same radial velocity. However, small systematic differences (<5 km s^-1) in the velocities of Ca ii and H i are noted. This suggests a more intricate flow structure than previously assumed. We speculate that the calcium flow may be interacting with the H i cloud. A representation of the Sco-Cen outflow projected onto the Galactic plane is displayed in Fig. <ref> – the dust feature that we identify with the H i (discussed in Section <ref>) is highlighted. We find supporting evidence that the Sco-Cen outflow is an ongoing process and can be linked to the massive stars in Sco-Cen. The local flow (as traced in the UV absorption lines) and the flow probed by Ca ii, at a more uncertain distance, together with the properties of the H i cloud all indicate the existence of an ISM reaching from Sco-Cen to the current position of the Sun within the Local Bubble. However, there are many questions that remain to be answered, including: * Why is the observed local flow so uniform? Our modelling yielded the surprising result that the local flow is extremely uniform and that a single spherical-flow model cannot adequately explain the global observations. This points to a complexity that needs to be further explored. * Do the observed flows originate from a common region? We cannot rule in or out a common flow origin based on the calcium and the local UV absorption measurements. However, using higher-resolution spectra (R>100 000) together with additional calcium observations at higher angular separation from Upper Sco would significantly constrain the posterior distribution of the calcium flow model and provide an answer to this question. * Has the flow been primarily shaped by supernovae or stellar winds and radiation? Presently, we cannot determine which drivers dominate the observed flows. The driving force behind the H i cloud may differ from the one that shaped the motion of the local ISM. Answering these questions requires further research. For example, we should be able to gain an additional insight into the structure of the outflow by including a larger number of observations such as the one based on HARPS and presented in Fig. <ref>. This should lead to uncovering of the kinematic sub-structure of the outflow, extending the number of components beyond the two presented in this Paper. The Sco-Cen flow feeds the Local Bubble <cit.>. This work takes a critical step towards exploring the closest large-scale ISM outflow. It confirms that the flow found in the very local (d < 30 pc) ISM is part of a larger flow from Sco-Cen. 
Our analysis raises more questions than answers but calls attention to an important and exciting aspect of the ISM in the solar neighbourhood, the one the Sun is currently crossing. The presence of supernova radioisotopes on Earth <cit.>, a strong argument in favour of a large-scale ISM outflow from Sco-Cen, is a strong motivation for further studies of this flow. Finally, large 100-pc scale outflows, like the one studied here, are the natural consequence of massive star formation events. The common occurrence of feedback-driven bubbles <cit.>, suggests these outflows are an important component of the ISM in spiral galaxies. Kinematic studies of other local ISM flows powered by massive star formation can provide important information about the evolution of the galactic environment. Co-funded by the European Union (ERC, ISM-FLOW, 101055318). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This paper made use of data obtained from the ESO Science Archive Facility, specifically from the following ESO programmes: 0106.A-9009(A), 073.C-0337(A), 076.C-0164(A), 077.C-0575(A), 078.C-0403(A), 081.C-2003(A), 082.B-0610(A), 082.D-0061(A), 083.D-0034(A), 086.D-0236(A), 086.D-0449(A), 087.A-9005(A), 089.D-0153(A), 090.D-0358(A), 091.C-0713(A), 094.A-9012(A), 097.A-9024(A), 099.A-9029(A), 179.C-0197(A), 179.C-0197(C), 183.C-0972(A), 60.A-9036(A), and 60.A-9700(G). This work made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The following Python libraries were used in this work: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. The authors are grateful to M. Kajan for his valuable suggestions regarding the treatment of stellar spectra. Furthermore, we express our gratitude to C. Zucker and J. Linsky for the discussion about the local flow. aa § DETERMINED RADIAL VELOCITIES The results of our Ca ii line-fitting procedure are presented in Table <ref> and in Fig. <ref>. The values were obtained by making use of a bootstrapping method with N=10^4 repetitions. The derived values correspond to the 50th percentiles of the determined distributions. The numerical errors, ϵ, were calculated as half of the difference between the 84th and the 16th percentiles. The reported errors (Δ = √(ϵ^2 + σ^2)) include a minimum error limit, σ, which was evaluated as the distance between the neighbouring points of a spectrum at a given wavelength (and thus represents the spectral resolution). We find that these errors tend to be more accurate than those derived by using a single-iteration . We note that Δ≈σ when the noise is low enough for the feature to be clearly distinguishable. § SUB-STRUCTURE IN THE STUDIED CA II LINES We noticed a significant difference from the analysed FEROS spectra in all of the 11 available HARPS spectra. The main issue lies in the presence of unresolved kinematic components. Fig. <ref> shows an example of a line of sight where resolving the kinematic sub-structure would provide additional information about the structure of the intervening ISM. 
Another interesting example is the β Sco system (HD 144217/144218), where a split in the blue-shifted component is missed when using FEROS spectra, producing a systematic offset of a few km s^-1.
http://arxiv.org/abs/2407.12203v1
20240716221014
Semantic Communication for the Internet of Sounds: Architecture, Design Principles, and Challenges
[ "Chengsi Liang", "Yao Sun", "Christo Kurisummoottil Thomas", "Lina Mohjazi", "Walid Saad" ]
eess.AS
[ "eess.AS" ]
Semantic Communication for the Internet of Sounds: Architecture, Design Principles, and Challenges Chengsi Liang, Yao Sun, Christo Kurisummoottil Thomas, Lina Mohjazi, and Walid Saad Chengsi Liang, Yao Sun (corresponding author), and Lina Mohjazi are with the James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK (e-mail: 2357875l@student.gla.ac.uk; {yao.sun, lina.mohjazi}@glasgow.ac.uk). Christo Kurisummoottil Thomas and Walid Saad are with the Bradley Department of Electrical and Computer Engineering at Virginia Tech, Arlington, VA 22203, USA. (e-mail: {christokt, walids}@vt.edu). Received: date / Revised version: date ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The Internet of Sounds (IoS) combines sound sensing, processing, and transmission techniques, enabling collaboration among diverse sound devices. To achieve perceptual quality of sound synchronization in the IoS, it is necessary to precisely synchronize three critical factors: sound quality, timing, and behavior control. However, conventional bit-oriented communication, which focuses on bit reproduction, may not be able to fulfill these synchronization requirements under dynamic channel conditions. One promising approach to address the synchronization challenges of the IoS is through the use of semantic communication (SC) that can capture and leverage the logical relationships in its source data. Consequently, in this paper, we propose an IoS-centric SC framework with a transceiver design. The designed encoder extracts semantic information from diverse sources and transmits it to IoS listeners. It can also distill important semantic information to reduce transmission latency for timing synchronization. At the receiver's end, the decoder employs context- and knowledge-based reasoning techniques to reconstruct and integrate sounds, which achieves sound quality synchronization across diverse communication environments. Moreover, by periodically sharing knowledge, SC models of IoS devices can be updated to optimize their synchronization behavior. Finally, we explore several open issues on mathematical models, resource allocation, and cross-layer protocols. § INTRODUCTION The Internet of Sounds (IoS) represents a confluence of sound and music systems with the Internet of Things (IoT) <cit.>. In an IoS, dedicated sound devices are designed and used for sensing, capturing, processing, actuating, and sharing sounds and sound-related information via acoustic signal processing and deep learning (DL) techniques. The IoS integrates co-located or remotely connected sound-centric devices to work together efficiently and seamlessly. Furthermore, the IoS promises to support novel sound-based applications including smart home, smart healthcare such as sound-based therapies, and wildlife monitoring, among others. One significant challenge in the IoS is the need for synchronization, particularly for time-sensitive applications such as live concerts. 
To guarantee high-quality auditory perception for IoS listeners, these applications demand rapid capture, exchange, and synchronization of sounds from a variety of IoS senders within a short timeframe. Synchronization in the IoS should take into account three major factors: sound quality, timing, and behavior control. Sound quality synchronization focuses on maintaining the fidelity and clarity of each sound stream and enhancing auditory perception during the sound synchronization process. Meanwhile, timing refers to the fact that the IoS requires a precise alignment of multiple sounds transmitted from different IoS senders to guarantee that they are played at the intended moments, despite their different geographical locations. Finally, synchronization extends to the behavior control of IoS devices. “Behavior” refers to IoS devices' role-specific actions in synchronization tasks, such as live concerts. Behavior control strategies define the roles of IoS devices and dictate which types of sounds they should prioritize based on their assigned roles. Moreover, these strategies should dynamically adapt to changing performance requirements and listener needs. Therefore, there is a need for intelligent, adaptable communication paradigms that can support the IoS and overcome its synchronization challenges. In this regard, semantic communication (SC) <cit.> could be a viable approach. Unlike traditional bit-oriented communication, SC enables a network to convey the meanings embedded within messages. SC leverages a semantic encoder and decoder with background knowledge to ensure meanings are interpreted accurately, even in the presence of significant bit errors that would typically disrupt traditional communications. Based on these features, we therefore envision that SC can improve all three factors of IoS sound synchronization. First, SC can reconstruct and interpolate missing or corrupted sound segments by leveraging semantic-aware reasoning based on context and background knowledge <cit.>. Even when channels suffer from significant noise and interference, SC helps preserve sound quality during synchronization by filling in the gaps through intelligent inference. Furthermore, SC senders can convey only “essential” semantic information, so that SC receivers can reconstruct sounds promptly. This approach compensates for the transmission latency experienced by remote devices and ensures precise timing synchronization. Finally, SC systems can acquire new knowledge from new sensing data and listeners' feedback, and then update their knowledge bases. By sharing the new knowledge among multiple IoS devices, these IoS devices can update their models and automatically adjust their behavior. Instead of using conventional control messages, this knowledge-based control strategy allows for system-wide improvements and dynamic optimization based on user experiences. However, leveraging SC in the IoS requires meeting several critical challenges that fall into three categories. Robust Semantic Encoding and Decoding: Unlike traditional encoder/decoder design in SC, which focuses solely on a single transmission pair, the encoder/decoder design in IoS-centric SC should consider the synchronization of the entire network. How to design the encoder/decoder to improve both the transmission quality of individual links and synchronization in a multi-user IoS is a nontrivial challenge.
Latency Synchronization across Geographically Heterogeneous Devices: In IoS-centric SC networks, IoS listeners receive sounds from multiple geographically heterogeneous IoS senders. Since these senders may be located in different regions, the latency experienced by each device can vary significantly, which makes it challenging to achieve precise sound synchronization. Thus, how to implement advanced semantic-aware communication techniques to achieve latency synchronization across geographically heterogeneous devices is another challenge. Knowledge-based Control Strategy with Semantic Effectiveness Evaluation: To provide users with a seamless and immersive auditory experience, IoS senders should adapt their behavior based on the listeners' feedback in real time. In SC networks, senders can update their models periodically by learning from the knowledge (e.g., history and sensing data) without the need for control messages. However, a knowledge-based behavior control strategy is required to manage how and when to share knowledge among diverse devices in the dynamic IoS. Moreover, traditional communication metrics are no longer applicable for evaluating semantic effectiveness in sound synchronization. Recently, the authors in <cit.> conducted a comprehensive survey of the IoS, exploring synchronization issues in classical communication schemes and discussing semantic-aware audio processing. However, their work did not delve into the specifics of semantic-aware communications in the IoS. Moreover, although some recent works such as <cit.> and <cit.> investigate the use of SC for speech signals, those prior works do not extend to an IoS system. Furthermore, those works do not consider cooperation among multiple users for time-sensitive sound synchronization. In contrast to this prior art, the main contribution of this paper is a novel SC framework that enables robust and seamless sound synchronization in the IoS. We first articulate the synchronization challenges in terms of sound quality, timing, and behavior control within the IoS. We then delve into how SC can address these challenges by reasoning from context and background knowledge and facilitating knowledge sharing among IoS-related devices. Next, we propose an IoS-centric SC framework that encompasses sound devices, SC coordinators, base stations (BSs), and cloud servers. Within this framework, we introduce the transceiver design, which includes semantic encoding, semantic importance-aware transmission, and semantic decoding. Finally, we discuss open research questions for future work at the intersection of IoS and SC. § SYNCHRONIZATION CHALLENGES IN THE IOS AND HOW SC CAN HELP In this section, we explore the specific synchronization challenges in the IoS in terms of sound quality, timing, and behavior control. We then discuss how SC can help address these challenges. §.§ Synchronization Challenges in the IoS The IoS extends the principles of the IoT to the auditory domain. It encompasses a wide range of technologies and techniques for sound processing and transmission while leveraging sophisticated algorithms and large online sound-based repositories. As defined in <cit.>, the IoS represents “the ensemble of sound devices, network infrastructures, protocols, and representations of sound-related information that enable services and applications for the communication of sound-related information in physical and/or digital realms”.
Sound devices are networked computing devices equipped with sensors and actuators capable of capturing, processing, sharing, or producing sounds and sound-related information. The term “sounds” is used hereinafter to indicate the union of music, speech, and other audio signals. Sound-related information involves perceived and processed data that assist IoS devices in exchanging sounds efficiently and precisely. Synchronization is a critical challenge in the IoS, particularly for time-sensitive applications that require seamless and coherent sound communication across different channels and devices. In such applications, ensuring precise synchronization is essential for delivering a high-quality and immersive auditory experience to users. Current IoS networks mainly face synchronization challenges in sound quality, timing, and behavior control. §.§.§ Sound Quality IoS listeners receive sound streams from multiple IoS senders and integrate them. If the quality of certain sound streams is unacceptable, it may divert listeners' attention, detracting from the overall auditory experience. Imperfect sound integration can result in a fragmented and unsatisfying auditory experience for listeners. Therefore, sound quality synchronization means that the IoS must ensure that all the sound streams delivered to listeners are well-orchestrated and maintain a similar level of fidelity and accuracy. However, the IoS spans a large geographical area, involving multiple wireless links, heterogeneous devices, and varying network conditions. Consequently, sound quality is often compromised due to distortion and interference introduced by poor-quality channels and long-distance transmission. Moreover, the IoS has a complex sound environment, in which multiple sound sources may coexist, such as speech, music, and background noise. Hence, this complex sound environment further makes it challenging to fulfill the diverse service requirements for various IoS sound types. §.§.§ Timing IoS listeners expect seamless coordination of multiple sounds. Even a slight variation in timing can lead to dissonance, echoes, or a disjointed auditory perception. Thus, precise timing alignment is crucial for maintaining the integrity and coherence of the overall auditory experience. To achieve accurate timing, the end-to-end delay consisting of sound processing time and sound transmission time should be considered. The processing latency is determined by the devices' computing capabilities, and thus it falls outside the scope of our discussion. Our goal is to reduce sound transmission time and mitigate jitter, as both timely and consistent reception of each sound stream by the receiver are crucial for achieving precise timing synchronization. However, channels with varying transmission rates can introduce gaps in sound streams and unpredictable packet arrival times, which pose key challenges. Consequently, developing low-latency, low-jitter communication schemes that minimize transmission delay and variability is essential to ensure robust timing synchronization. §.§.§ Behavior Control In a specific sound synchronization task, such as a live concert, each device may exhibit different behavior based on its assigned role. For instance, the lead vocalist's microphone would typically be given a higher priority, as the clarity and timing of the vocals are crucial for song delivery.
However, during a guitar solo, the IoS devices may need to dynamically shift their focus to prioritize the synchronization of the guitarist's amplifier output. Therefore, IoS devices should adjust their behavior to accommodate IoS listeners' requirements in a timely manner to deliver a high-quality and immersive auditory experience. Current IoS networks heavily depend on control messages generated from listeners' feedback to guide device behavior control. However, relying exclusively on listeners' feedback can be unreliable, as listeners may require considerable time to provide input. Moreover, this feedback often targets a specific sender, which hinders the system's capacity to learn from diverse experiences. If the same issues occur again, transmitting redundant feedback would lead to a waste of resources. Therefore, introducing a novel behavior control strategy is essential to enhance sound synchronization in the IoS, addressing the limitations of the current user feedback-based approach. §.§ How Can SC Help with the Synchronization Challenges? SC diverges from conventional Shannon communication by integrating human-like “comprehension" and “reasoning" into the data encoding, transmission, and decoding processes, rather than striving for precise bit duplication <cit.>. To be concrete, SC systems extract the semantic information from a source based on the background knowledge, transmit it through a physical channel, and reconstruct the source data by reasoning from background knowledge and context. Due to its superior understanding and reasoning capabilities, SC could address the sound synchronization challenges in the IoS. Specifically, SC offers significant advantages in sound quality synchronization, timing synchronization, and synchronization behavior control as follows. * SC enables sound quality synchronization by facilitating semantic-aware sound prediction and error correction, particularly for those devices suffering from poor channel conditions. As shown in Fig. <ref>, semantic features, which contain contextual and knowledge-based information, are extracted from sound sources. However, these features may become distorted or lost during the transmission process due to severe channel noise and interference. Those distorted or missing features can be inferred from the remaining features by leveraging the contextual relationships and predefined patterns in listeners' knowledge bases. This semantic-aware error resilience approach can improve the quality of sound streams that experience poor channel quality. Furthermore, by analyzing semantic information, listeners' device setup, and their preferences, SC systems can adjust the relative levels, panning, and spatial positioning of different sound elements while filtering out irrelevant or conflicting elements. * SC enables timing synchronization by reducing the transmission latency for delayed sounds through prioritizing the transmission of important semantic information. As shown in Fig. <ref>, SC encoders can extract and transmit important semantic features. Once these features are received, the context can be inferred based on their semantic relationships. Even if listeners do not receive the complete message, they can still begin synchronizing the sound streams while inferring the remaining content based on the available semantic information and context. As a result, this approach compensates for transmission delays, which is particularly beneficial for IoS devices suffering from low-transmission-rate channels.
* SC enhances behavior control by updating senders' models and sharing knowledge among listeners and senders. New knowledge can be derived from listeners' feedback, sensing data, and history. Edge servers equipped with advanced computing ability can fine-tune SC models using the updated knowledge bases, and distribute them to senders. The updated knowledge bases ensure that senders can react accurately and efficiently if they encounter repetitive problems. However, updating knowledge bases and models will incur overhead and consume time. Hence, edge servers must determine when and how to perform updates based on timing requirements, resource usage, and network conditions in the IoS. § IOS-CENTRIC SEMANTIC COMMUNICATION FRAMEWORK As shown in Fig. <ref>, an IoS-centric network consists of sound devices (senders and listeners), SC coordinators, BSs, and cloud servers. Sound devices are equipped with sensors or actuators that enable them to capture, process, share, and produce sounds and sound-related information. SC coordinators, which can be edge servers deployed on BSs, play a crucial role in pre-training SC models, sharing knowledge among sound devices and cloud servers, as well as updating knowledge bases to control sound devices' behavior. BSs are responsible for exchanging sounds and sound-related information among sound devices, SC coordinators, and cloud servers. Cloud servers facilitate the global interconnection of SC coordinators and provide global sound repositories. To achieve effective sound synchronization, a tight collaboration among these entities is needed. Initially, sound devices monitor their surroundings and capture environmental sounds to gain valuable insights into the acoustic context. For instance, if a microphone consistently detects piano sounds, it will prioritize capturing and processing these specific sounds to improve its performance in synchronization. The sensing data collected from sound devices is a kind of private knowledge which will be uploaded to SC coordinators. SC coordinators collect this private knowledge, download global knowledge from cloud servers, and then merge them into a structured form, i.e., a knowledge graph (KG) <cit.>. Subsequently, SC coordinators use KGs to pre-train SC models, and then distribute the pre-trained SC model and a local KG to each sound device. Due to the limited storage and computing abilities of sound devices, local KGs are oriented towards device behavior and the preferences of the users while filtering out irrelevant and unnecessary global knowledge <cit.>. Both SC coding models and KGs are updated periodically, which will not generate an additional delay for particular sound transmissions. Given the SC coding models and KGs, senders extract semantic features from the source and encode them into symbols for wireless transmission to listeners. Next, listeners decode the received symbols and reconstruct the original sound signals from semantic features based on their KGs. In scenarios involving multiple sound sources, listeners also integrate and refine the received sounds, aiming to enhance the quality of synchronization based on the preferences of the users. For instance, if users have a preference for piano sounds, the devices will prioritize and amplify the piano sounds during the sound synchronization process. Subsequently, SC coordinators analyze listeners' feedback, sensing data, and history to extract valuable insights and update their KGs by integrating new knowledge.
By leveraging the updated KGs, SC coordinators fine-tune the SC models. This fine-tuning process involves adjusting the model parameters and architecture to better capture the semantic relationships and optimize the behavior performance of sound devices. The fine-tuned SC models are then distributed to the sound devices along with the relevant local KGs, tailored to their specific behavior and requirements. § TRANSCEIVER DESIGN IN IOS-CENTRIC SC NETWORKS Building upon the IoS-centric SC networks framework, we present a transceiver design, including semantic encoding, semantic importance-aware transmission, and semantic decoding. In particular, we present a symbolic semantic encoding module designed to represent sounds accurately and efficiently. This robust semantic encoding facilitates high-fidelity sound reconstruction, contributing significantly to sound quality synchronization. Concurrently, our semantic decoding process maintains sound quality by reasoning through distorted or missing sound elements and rendering sounds based on their semantic information. To achieve timing synchronization, we introduce a semantic importance-aware transmission system. This system prioritizes the transmission of important semantic information while filtering out irrelevant data, thereby reducing overall transmission latency. §.§ Semantic Encoding Contextual and knowledge-assisted sound prediction, coupled with semantic-aware sound rendering plays a crucial role in achieving sound quality synchronization. To effectively implement these functions, it is essential to accurately represent sound in a logical manner leveraging contextual relationships and external knowledge. Although there are various types of sounds with diverse nature in IoS networks, such as speech, music, and background sounds, unified semantic representation methods can be developed for representing them. Neurosymbolic AI techniques, which stem from neural networks and symbolic AI, offer feasible solutions by combining reasoning with complex representations of knowledge, such as KGs and ontologies <cit.>. However, there are slight differences in the symbolic units among speech, music, and background sounds. Speech can be transcribed into a sequence of word tokens, where each token represents a discrete unit of meaning derived from a predefined vocabulary or alphabet. Similarly, borrowing ideas from <cit.>, music can be symbolized by notes, which include pitch, duration, and onset information as fundamental music tokens. The semantic representations for music are then derived from these tokens. For sounds with less informative content, such as wildlife or machine noises, semantic representations can be extracted based on key attributes including the sound's source, environmental context, temporal characteristics, and other relevant acoustic features. Before encoding different types of sounds, it is necessary to detect and extract them from a source. To achieve this, a few traditional sound processing steps should be conducted. As shown in Fig. <ref>, a sound signal sequence is first converted into a spectrogram <cit.>, which is a visual representation of the frequencies present in the sound signal and how they change over time. Next, acoustic features are extracted from the spectrogram. These features may include spectral characteristics, temporal patterns, and other relevant information that can help identify and distinguish different types of sounds <cit.>. 
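As a rough illustration of the spectrogram and feature-extraction step just described, the sketch below computes a short-time Fourier transform of a sound segment and derives two basic per-frame features (energy and spectral centroid). The sampling rate, window length, and choice of features are our simplifying assumptions, not the encoder actually used in the framework.

# Simplified sketch of the spectrogram / acoustic-feature step (illustration only).
import numpy as np
from scipy.signal import stft

fs = 16000                                   # assumed sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t)         # toy sound segment: a pure 440 Hz tone

# Spectrogram: time-frequency magnitude of the sound segment.
f, frames, Z = stft(signal, fs=fs, nperseg=512, noverlap=256)
spec = np.abs(Z)

# Two simple acoustic features per frame.
rms = np.sqrt((spec ** 2).mean(axis=0))                                   # energy envelope
centroid = (f[:, None] * spec).sum(axis=0) / (spec.sum(axis=0) + 1e-12)   # spectral centroid [Hz]

print(spec.shape, float(centroid.mean()))    # the centroid sits near 440 Hz for this pure tone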
By comparing the extracted features with predefined sound templates, the category of each sound segment can be detected. Subsequently, a filtering process is applied to remove sounds that are not pertinent to the sender's intended behavior. The filtered sound segments then undergo semantic representation and encoding. By querying acoustic features in the sender's KG and searching for their corresponding semantic representations, the speech, music, and other segments are transformed into word-based, note-based, and ontology-based semantic representations, respectively. Finally, all the semantic representations extracted from the different sound types are fused together. This fusion process combines the semantic information from speech, music, and background sounds into a unified representation. The fused semantic representations are then fed into the semantic importance-aware transmission module. §.§ Semantic Importance-aware Transmission Module To enhance timing synchronization, particularly for low-bandwidth scenarios, a semantic importance-aware transmission module can be implemented. This module adapts semantic representations based on the semantic importance of the source data and the channel conditions. For instance, for low-rate links, there may be no need to transmit background sounds that a listener may not be interested in. In particular, in a fixed-length semantic representation, each token is reordered based on its semantic importance. A token that is more closely related to the listener's preferences is considered more important. Unlike classical priority-based queuing, which relies on bit-oriented rules for packet prioritization, the semantic importance-aware transmission approach prioritizes sound segments based on their perceived relevance to the listener. As shown in Fig. <ref>, the semantic importance of a token is quantified by calculating the relevance between this token and the listener's preferences. The relevance calculation follows the True Logic, where the relation in a triple (head, relation, tail) indicates the truth value that the tail is true if the head is true (head → tail) <cit.>. The semantic representations of a token and target entities related to listeners' preferences have been located in the sender's KG during semantic encoding and model training. To reduce querying delay, advanced graph traversal algorithms are employed to efficiently search for target entities from tokens along the shortest paths within this KG. The probability calculated along the shortest path represents the semantic relevance between the token and the listener's preferences, indicating the token's importance. Meanwhile, the less important tokens can be selectively filtered out if messages are transmitted through low-transmission-rate channels. When the listener receives important semantic data, they can begin sound reconstruction promptly to enable faster sound processing and playback.
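As a toy illustration of this shortest-path relevance computation (not from the article; the graph, entities, and truth values below are invented for the example), tokens could be scored against a listener-preference entity as follows.

import math
import networkx as nx

# Hypothetical knowledge graph: edges carry a truth value p = P(tail | head) in (0, 1].
kg = nx.DiGraph()
kg.add_edge("piano_note", "piano", p=0.9)
kg.add_edge("piano", "classical_music", p=0.8)
kg.add_edge("classical_music", "user_preference", p=0.85)
kg.add_edge("bird_chirp", "background_sound", p=0.95)
kg.add_edge("background_sound", "user_preference", p=0.1)

# Turn products of probabilities into additive costs so that the most probable
# chain head -> ... -> preference corresponds to a weighted shortest path.
for u, v, d in kg.edges(data=True):
    d["cost"] = -math.log(d["p"])

def semantic_importance(token: str, target: str = "user_preference") -> float:
    """Relevance of a token = probability accumulated along the shortest path to the target."""
    try:
        cost = nx.shortest_path_length(kg, token, target, weight="cost")
        return math.exp(-cost)
    except nx.NetworkXNoPath:
        return 0.0  # tokens unreachable from the preference entity carry no importance

tokens = ["piano_note", "bird_chirp"]
ranked = sorted(tokens, key=semantic_importance, reverse=True)
print([(tok, round(semantic_importance(tok), 3)) for tok in ranked])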
However, the recovered sound quality may be poor due to adverse communication conditions, such as noise, interference, or packet loss. Inspired by <cit.>, the plausible hypotheses for the missing or distorted sound segments can be predicted from context and predefined relationships between entities in the listener's KG. Moreover, we note that this KG-based approach may not be suitable for handling entirely new or unseen information that cannot be found in the existing KG. In such cases, the system relies solely on the contextual relationships to reason about the missing content. Furthermore, to achieve sound quality synchronization, the reconstructed sound segments undergo precise adjustment and rendering during the sound integration process. This procedure harmonizes the acoustic resonance and filters out discordant elements, thereby enhancing the listener's auditory experience. First, the reconstructed sound segments are seamlessly fused into a unified spectrogram. Then, the semantic decoder analyzes the semantic content within the sounds, taking into account factors such as the listener's device setup and personal preferences. Based on this analysis, the semantic decoder intelligently adjusts the relative levels, panning, and spatial positioning of different sound elements, creating a balanced and immersive soundscape <cit.>. It prioritizes the most semantically relevant sound elements based on listeners' preferences, while filtering out irrelevant or conflicting elements that may detract from the desired listening experience. Finally, the spectrogram is converted back into sound waves using techniques like inverse Fourier transform or wavelet synthesis. These sound waves can then be played through the listener's audio output devices.
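The final rendering step, converting the adjusted spectrogram back into a playable waveform, can be sketched as below; this is only an assumed minimal realization using the inverse short-time Fourier transform, and the parameter values are illustrative.

import numpy as np
from scipy import signal

def render_waveform(Zxx: np.ndarray, sr: int = 16000, nperseg: int = 512) -> np.ndarray:
    """Minimal sketch of the last decoding step: complex spectrogram -> waveform."""
    _, waveform = signal.istft(Zxx, fs=sr, nperseg=nperseg)
    return waveform

# Round-trip example on a synthetic tone.
sr, nperseg = 16000, 512
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
_, _, Zxx = signal.stft(tone, fs=sr, nperseg=nperseg)
recovered = render_waveform(Zxx, sr, nperseg)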
Cross-layer Protocol Design: To enhance IoS applications, particularly in terms of sound quality synchronization performance, the joint design of application layer protocols and wireless data transmission protocols is important yet challenging. Cloud-based applications in IoS-centric SC networks often rely on diverse global sound repositories containing data in various formats and types. This heterogeneity between global repositories and local knowledge can lead to semantic ambiguity between the application layer and the wireless transmission layer, potentially degrading the accuracy and efficiency of sound quality synchronization. The semantic inconsistencies across layers can result in misinterpretation of acoustic features, tonal characteristics, and contextual cues essential for precise sound quality synchronization. Hence, a novel cross-layer protocol incorporating a unified semantic language is required to ensure that semantic data can be effectively and accurately represented, transmitted, and interpreted across the different layers of IoS networks. § CONCLUSIONS In this article, we have delved into integrating SC into the IoS, with a particular emphasis on addressing the synchronization challenges in the IoS. Besides offering visions at a conceptual level, we have proposed a new framework of IoS-centric SC networks and illustrated the transceiver design to address the synchronization challenges. Finally, we have discussed several open issues in terms of mathematical models, resource allocation strategies, and cross-layer communication protocol design for the IoS. We hope this research serves as a pioneer in exploring SC for synchronization and the IoS, paving the way for advanced semantic-driven wireless sound delivery.
http://arxiv.org/abs/2407.13390v1
20240718105729
GeometrySticker: Enabling Ownership Claim of Recolorized Neural Radiance Fields
[ "Xiufeng Huang", "Ka Chun Cheung", "Simon See", "Renjie Wan" ]
cs.CV
[ "cs.CV" ]
Department of Computer Science, Hong Kong Baptist University NVIDIA AI Technology Center, NVIDIA xiufenghuang@life.hkbu.edu.hk, {chcheung, ssee}@nvidia.com, renjiewan@hkbu.edu.hk GeometrySticker: Enabling Ownership Claim of Recolorized Neural Radiance Fields Xiufeng Huang1,2 Ka Chun Cheung2 Simon See2 Renjie Wan1 (Corresponding author). July 22, 2024 ================================================================================== § ABSTRACT Remarkable advancements in the recolorization of Neural Radiance Fields (NeRF) have simplified the process of modifying NeRF's color attributes. Yet, with the potential of NeRF to serve as shareable digital assets, there's a concern that malicious users might alter the color of NeRF models and falsely claim the recolorized version as their own. To safeguard against such breaches of ownership, enabling original NeRF creators to establish rights over recolorized NeRF is crucial. While approaches like CopyRNeRF have been introduced to embed binary messages into NeRF models as digital signatures for copyright protection, the process of recolorization can remove these binary messages. In our paper, we present GeometrySticker, a method for seamlessly integrating binary messages into the geometry components of radiance fields, akin to applying a sticker. GeometrySticker can embed binary messages into NeRF models while preserving the effectiveness of these messages against recolorization. Our comprehensive studies demonstrate that GeometrySticker is adaptable to prevalent NeRF architectures and maintains a commendable level of robustness against various distortions. Project page: https://kevinhuangxf.github.io/GeometrySticker/. § INTRODUCTION Significant progress <cit.> has been made in the recolorization of Neural Radiance Fields <cit.>, allowing people to adjust NeRF's color properties easily. However, as NeRF, a key representation of 3D scenes, becomes a shareable digital asset <cit.>, there is a risk that ill-intentioned users could recolorize NeRF models and illegitimately assert ownership over the modified versions. To prevent NeRF models from such ownership breaches, it is important to allow original NeRF creators to claim ownership for a recolorized NeRF. A convenient way to assert ownership of digital assets is to embed invisible binary messages as digital watermarking into digital assets. Then, once the digital assets are maliciously edited, the owners can extract binary messages from NeRF <cit.> to claim ownership. For example, CopyRNeRF <cit.> has been introduced to safeguard the copyright of NeRF. Their method embeds binary messages in a way that aligns with the color representation within NeRF. Then, they utilize a message decoder to extract binary messages from images rendered from NeRF to verify the copyright. Yet, as CopyRNeRF <cit.> deeply relies on the combination between binary messages and color representation, such messages directly become “irretrievable” on rendered images when the color representation is simply altered. Thus, the cornerstone of ensuring ownership claim over the recolorized NeRF model lies in making binary messages robust to alterations of the color representation. If recolorization is mainly conducted on color representation, a straightforward solution would be to hide messages into cover media <cit.> unrelated to color representation.
Then, the envisioned ownership claim and recolorization can be achieved in a concurrent way. In general, NeRF <cit.> relies on color and geometry to finish volume rendering. As the two key components are represented separately via two MLPs, geometry components become suitable cover media for hiding binary messages. Then, we can balance the target for claimability and recolorization. However, three challenges stand in the way of achieving the above goals. First, since its introduction, several NeRF variants <cit.> have already been developed. Their internal structures used for geometry representation also have significant differences. The watermarks should be scalable to NeRF variants. Second, NeRF <cit.> and its variations <cit.> utilize neural networks to learn implicit neural representations for reconstructing 3D scenes. Additional messages can easily disrupt the geometry structure represented in the networks. Then, it is essential to ensure that the changes made by the embedded binary messages are minimal. At last, an effective watermarking method should extract the copyright messages from such minimal changes. We introduce GeometrySticker for the ownership claim of recolorized NeRF. GeometrySticker is with the following characteristics to address the above challenges: 1) Scalability. Rather than embedding messages into geometry representation, we propose to attach messages onto geometry like stickers. Besides a simple network used for message representation, the attachment of such a sticker does not rely on any additional structure-specific optimization, which can be scalable to various variants. 2) Subtlety. The cover media selected for embedding binary messages occupy only small sections within the NeRF model, ensuring that modifications introduced by the message embedding remain subtle. 3) Ubiquity. The cover media are accessible from every viewpoint, ensuring NeRF owners can retrieve binary messages from each perspective. In fig:algo_overview, we display our framework for implementing GeometrySticker. Our GeometrySticker is a lightweight Multilayer Perceptron (MLP) capable of translating binary messages into formats compatible with the geometry representation used in NeRF and its variants <cit.>. Then, we can seamlessly integrate this compatible form of message representation with the chosen cover media, irrespective of the NeRF architectures, thus guaranteeing scalability. Then, we employ the 3D points sampled throughout the ray marching process as the cover media, and we specifically choose the 3D points near the objects' surfaces to affix our binary messages. These selected 3D points around the objects' surfaces represent a minor fraction of the total representations, thus ensuring subtlety. Ultimately, we guarantee that these cover media are accessible from every viewpoint, ensuring that the messages can be accessed from each perspective, thereby ensuring ubiquity. As shown in fig:proposed_scene, after the message attachment, we can easily recolorize the watermarked NeRF with GeometrySticker for authorized recolorization. However, if unauthorized recolorization is triggered, NeRF creators can easily retrieve binary messages from 2D images rendered from recolorized NeRF for ownership verification. 
The proposed GeometrySticker has the following key characteristics: * Safeguarding the ownership claim of NeRF even when the color attributes have been altered. * Using the geometry components associated with selected 3D points to achieve the subtlety and ubiquity of the embedded ownership messages. * Designing a message sticker for ownership message attachment to achieve the scalability of our proposed solution. Our GeometrySticker, exhibiting high scalability, is capable of being generalized across various NeRF variants <cit.> that use neural representations for geometry. Based on our experiments, the use of GeometrySticker does not impair the effectiveness of current recolorization approaches <cit.> designed for NeRF. Furthermore, besides the recolorization approaches for NeRF, the binary messages can still be reliably extracted even when the 2D images rendered from NeRF are subjected to direct image-level color modifications. § RELATED WORK NeRF and its variants. NeRF <cit.> and its variants <cit.> show the capability to create realistic 3D representations of objects and scenes from 2D images with different perspectives. To improve the efficiency of scene representation, Plenoxel <cit.> reconstructs the scene in a sparse voxel grid and renders each ray sample via trilinear interpolation of the neighboring voxel coefficients. TensoRF <cit.> factorizes the radiance fields by vector-matrix decomposition for efficient scene modeling. InstantNGP <cit.> optimizes the input encoding with a multi-resolution hash table to reduce the number of floating point and memory access operations. As the creation of NeRF becomes more accessible, individuals are more inclined to easily recreate their preferred 3D scenes and share them with the public. It's important to address potential breaches of ownership in the process of such sharing. Recolorization of NeRF. The recolorization of NeRF <cit.> has achieved remarkable performance. CLIP-NeRF <cit.> uses text prompts to alter the color, supervised by a CLIP-based <cit.> matching loss. PaletteNeRF <cit.> decomposes the appearance of 3D points into a linear combination of palette-based bases across the scene for photorealistic color editing. Similarly, RecolorNeRF <cit.> also decomposes the scene into a set of color layers to form a palette for color altering based on the TensoRF <cit.> architecture. As the recolorization of NeRF has become effective in recolorizing the 3D scene, it is important to explore an effective approach to protect the intellectual property of NeRF models when their color properties are modified. Ownership assertion of NeRF. Traditional 2D watermarking methods embed information in the least significant bits of image pixels <cit.>. Significant progress has been made in deep-learning-based image watermarking <cit.>. HiDDeN <cit.> is one of the first deep image watermarking methods that outperforms traditional methods. 3D watermarking approaches are usually designed for explicit 3D models <cit.>. However, these methods are not applicable to the copyright protection of NeRF due to its implicit property. Recently, StegaNeRF <cit.> designs a framework for steganographic information embedding in NeRF renderings. CopyRNeRF <cit.> generates watermarked color representations to ensure the invisibility of hidden copyright messages. However, when faced with recolorization, the hidden information embedded in the two methods might become unrecoverable. This prompts us to investigate ensuring ownership claims over recolorized NeRF models.
§ PRELIMINARIES ON THE WATERMARKING OF NERF The goal of watermarking NeRF is to safeguard its copyright and ownership by integrating binary messages into this burgeoning digital asset for 3D scenes. In general, NeRF <cit.> builds a function f: (𝐱, 𝐝) → (σ, c) to map the position 𝐱 and viewing direction 𝐝 to the point's density σ and color c. Vanilla NeRF <cit.> uses an MLP Θ_σ and the encoding function γ_𝐱 to map the 3D location 𝐱 into the density value σ and the intermediate geometry feature 𝐳: [σ, 𝐳]=Θ_σ(γ_𝐱(𝐱)). Another MLP Θ_c and the encoding function γ_𝐝 are used to map the geometry feature 𝐳 and viewing direction 𝐝 into the color value 𝐜=Θ_c(𝐳, γ_𝐝(𝐝)). Once the optimization settles down, an implicit scene representation can be obtained, and all scene information can be stored in the MLP as its network weights. Given the challenge of retrieving binary messages from the implicit representation of NeRF, existing strategies <cit.> typically design a method to convey copyright messages from the implicit neural representation to the 2D rendered images. Following that, a CNN-based message extractor is commonly employed to retrieve binary messages from the 2D images. The binary message embedding can be represented as f_𝐌, where 𝐌 is the binary message with length N_𝐌. The message embedding is fused with the implicit neural representation for rendering the 2D watermarked images 𝐈_w. The corresponding message extraction process can be denoted as 𝐌̂=D(𝐈_w), where D denotes a CNN-based message extractor used to extract the hidden information 𝐌̂ from the 2D watermarked image 𝐈_w. § PROPOSED METHOD We have shown the scenario for our GeometrySticker in fig:proposed_scene, where people can claim ownership over the recolorized NeRF. In our scenario, for a NeRF model established via public resources such as NeRFStudio <cit.>, creators can effortlessly integrate binary messages into these established NeRF models using GeometrySticker for potential ownership claims. Once ownership messages are embedded into NeRF models, authorized recolorization can still be easily performed on the watermarked NeRF for legitimate uses. Yet, in the event of unauthorized recolorization, NeRF creators can utilize their available message extraction tool to retrieve binary messages from the rendered images, thereby affirming their ownership of their digital assets. §.§ GeometrySticker Choosing cover media. Choosing suitable cover media to conceal ownership messages is crucial in digital watermarking <cit.>. CopyRNeRF <cit.> has shown that embedding binary messages directly into geometry components can readily result in visible artifacts, thereby degrading the quality of scene representation. In fact, these artifacts partly originate from 3D points in empty spaces that have lower geometry values. In volume rendering <cit.>, N_p points are sampled along the camera marching rays with color and geometry values {(𝐜_i, σ_i)}_i=1^N_p. These points exhibit low geometry values in empty spaces and high values at the surfaces of objects <cit.>. Incorporating additional information into low geometry value 3D points located in empty spaces can lead to easily detectable alterations. Thus, rather than directly incorporating the hidden messages into the whole geometry representation, we consider those 3D points located on the object surfaces as the cover media. These 3D points on the object surfaces often exhibit high values, making the attachment of messages into them result in less conspicuous changes.
To precisely pinpoint the 3D points located on the surfaces of objects, we utilize the Laplace Cumulative Distribution Function (CDF) equipped with a learnable parameter. This approach helps determine an appropriate threshold for selecting 3D points that exhibit high geometry values: ψ = 1/2 + 1/2·sign(σ - μ) ·(1 - exp(-|σ - μ|/β)), where μ and β are the average and deviation of the geometry field, and ψ∈ [0,1] is the probability that a geometry value in the geometry field is less than or equal to the given value σ. The probability ψ can be used as an importance value to indicate whether the geometry value is large. While utilizing fixed parameters for thresholding points on object surfaces is feasible, adopting a naive strategy for determining mean and standard deviation values for such thresholding might lead to inflexible cutoffs. This could unintentionally cover too many points, causing notable distortions, or on the flip side, too few points, thus hindering the efficient embedding of messages for future extraction by a message extractor. Rather than using a fixed threshold for cover media generation, we consider β to be a learnable parameter to be optimized during the cover media generation. Then, the range used for the cover media generation can be adaptively adjusted according to rendered contents and message attachment efficiency to ensure the invisibility of embedded messages. The outcomes illustrated in fig:algo_overview demonstrate that the chosen 3D points, serving as the cover media, effectively form sparse point clouds that capture the essential information of the target objects. We optimize the importance value ψ with a sparsity loss <cit.> as follows: ℒ_sparse=1/|N_p|∑_i[log(ψ_i)+log(1-ψ_i)], which forces the importance value ψ to be close to either zero or one. The importance values ψ close to one indicate the 3D points with high geometry values. These points are selected as the cover media and only occupy small sections of the NeRF geometry to ensure subtlety. Message sticker. To maintain scalability, we avoid the data hiding techniques used in previous methods, such as CopyRNeRF <cit.>. CopyRNeRF <cit.> inherently alters the underlying NeRF structure, which depends on particular configurations and exhibits reduced scalability. Instead, we propose a message sticker that can attach messages to the selected cover media by summation like a sticker. The message sticker Θ_𝐦 can be achieved via an MLP as: m =Θ_𝐦(γ_x(𝐱), 𝐌), where 𝐌 is the binary message with length N_b and m is the one-dimensional message embedding. Then, we can directly attach the message embedding m into the geometry σ via ψ defined in eq:learnable_cdf: σ̃ = σ + ψm, where σ̃ is the watermarked geometry value. During volume rendering, the information attached via the message sticker can be incorporated into the rendered pixel values C as follows: C=∑_i=1^Nexp(-∑_j=1^i-1σ̃_jδ_j)(1-exp(-σ̃_iδ_i))𝐜_i, where δ is the distance between adjacent sample points, σ̃ is the watermarked geometry with a message attachment and c is the color sampled along the ray. During training, to guarantee ubiquity, we repeat the above operations at each viewpoint, which ensures that such 3D points exist at each perspective. During message extraction, the binary messages incorporated on the cover media can be easily extracted from the watermarked image 𝐈_w via a message extractor D_χ as 𝐌̂ = D_χ(𝐈_w), where 𝐌̂ is the binary message extracted by the message extractor and χ is a trainable parameter.
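As a rough illustration of how the learnable Laplace-CDF gate ψ and the additive attachment σ̃ = σ + ψm could be realized, the following PyTorch sketch mirrors the formulas above; it is not the authors' released code, and the layer sizes, the scalar parameterization of β, and the input shapes are our own assumptions.

import torch
import torch.nn as nn

class GeometryStickerSketch(nn.Module):
    """Hedged sketch: a learnable Laplace-CDF gate selects high-density points
    and an MLP-generated message embedding is added to their densities."""
    def __init__(self, msg_bits: int = 48, pe_dim: int = 32):
        super().__init__()
        self.log_beta = nn.Parameter(torch.zeros(()))  # learnable deviation beta (log-space)
        self.sticker = nn.Sequential(                  # message sticker MLP (assumed sizes)
            nn.Linear(msg_bits + pe_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, sigma, pe_x, message):
        # Laplace CDF of sigma with mean mu and deviation beta.
        mu, beta = sigma.mean(), self.log_beta.exp()
        psi = 0.5 + 0.5 * torch.sign(sigma - mu) * (1 - torch.exp(-(sigma - mu).abs() / beta))
        # Message embedding m from positional encoding and binary message.
        msg = message.expand(pe_x.shape[0], -1)
        m = self.sticker(torch.cat([pe_x, msg], dim=-1)).squeeze(-1)
        return sigma + psi * m, psi                    # watermarked density and the gate

# Example with random per-ray samples (shapes are illustrative).
sticker = GeometryStickerSketch()
sigma = torch.rand(1024) * 50            # densities of sampled 3D points
pe_x = torch.randn(1024, 32)             # positional encodings of the points
bits = torch.randint(0, 2, (1, 48)).float()
sigma_w, psi = sticker(sigma, pe_x, bits)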
Optimization. The message attachment in eq:encode_watermark is based on a simple addition operation, with the training just focused on enabling the message sticker to adapt binary messages into a form that aligns with the diverse architectures employed by NeRF <cit.> and its variants. Our optimization contains three components: 1) we optimize the learnable variable β defined in eq:learnable_cdf to find an appropriate threshold for identifying 3D points with high geometry values as the cover media; 2) we train the message sticker in eq:message_sticker for embedding the binary messages into the 3D points; 3) we also train the message extractor D_χ to extract the hidden message from the watermarked rendered images 𝐈_w. The above three components can be simply optimized via a message loss as ℒ_msg=BCE(D_χ(𝐈_w), 𝐌), where BCE is the binary cross entropy loss, 𝐈_w is the watermarked rendered image and 𝐌 is the ground truth message. Then, the classical MSE loss adopted by NeRF <cit.> is employed to ensure that the GeometrySticker does not compromise the visual quality of rendered contents as ℒ_cont = ‖𝐈_w - 𝐈_o‖_2^2, where ℒ_cont is the content loss and 𝐈_o is the original image. We also train a CNN-based classifier C_ϕ to classify whether the rendered images contain watermarks as ℒ_cls = BCE(C_ϕ(𝐈_w), C_ϕ(𝐈_u)), where 𝐈_u is the unwatermarked rendered image and ϕ is a trainable parameter. The overall objective for our GeometrySticker is obtained by combining the above loss functions: ℒ_total=ℒ_cont + ℒ_msg + ℒ_cls + ℒ_sparse. §.§ Recolorization Once the binary messages have been attached to the geometry components via GeometrySticker, watermarked NeRF models can be easily recolorized in a claimable manner via off-the-shelf approaches for recolorization. We consider two off-the-shelf recolorization approaches. The first one is CLIP-based recolorization proposed in CLIPNeRF <cit.>. In CLIP-based recolorization, the color representation within a NeRF model is modified using CLIP <cit.> features derived from a specified text prompt. We adhere to the protocols set forth in CLIPNeRF <cit.>, employing a CLIP <cit.> feature loss to guide the update of the color representation. Our second recolorization strategy is the palette-based recolorization. This approach begins by establishing a color palette that encompasses all fundamental color components. Subsequent precise recolorization is accomplished by adjusting the color palette, specifically by allotting distinct RGB values to designated color layers. We randomly select 10 reference colors from the Standard sRGB / Rec.709 color gamut to recolorize the NeRF model. Additionally, users have the option to apply image-level recolorization techniques to directly alter the colors in rendered images using traditional methods (e.g., color jittering). Some recolorization results are displayed in fig:nerf_recoloring. Due to page limitations, more details about this part can be found in the supplementary materials. §.§ Implementation details Our GeometrySticker is implemented in PyTorch. The whole pipeline can be easily combined with popular NeRF architectures like InstantNGP <cit.> or TensoRF <cit.>. Besides, we also implement it on vanilla NeRF <cit.>. NeRF and InstantNGP <cit.> utilize MLP layers, and TensoRF <cit.> utilizes a density grid to predict volume density. We train our GeometrySticker to find the important geometry component for attaching the copyright messages. The patch size is set to 400 × 400 for each rendered image during training.
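A hedged sketch of how the four loss terms from the optimization paragraph above might be combined in PyTorch is given below; the equal weighting of the terms and the interpretation of the classifier loss as a watermarked-vs-unwatermarked classification are assumptions on our part, not details taken from the paper.

import torch
import torch.nn.functional as F

def total_loss(I_w, I_o, msg_logits, msg_gt, cls_logit_w, cls_logit_u, psi, eps=1e-6):
    """Sketch of L_total = L_cont + L_msg + L_cls + L_sparse (equal weights assumed)."""
    l_cont = F.mse_loss(I_w, I_o)                                   # content loss
    l_msg = F.binary_cross_entropy_with_logits(msg_logits, msg_gt)  # message loss
    cls_logits = torch.stack([cls_logit_w, cls_logit_u])
    cls_labels = torch.tensor([1.0, 0.0])                           # watermarked vs. unwatermarked
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_labels)
    # Sparsity term pushing the gate psi toward 0 or 1.
    l_sparse = (torch.log(psi + eps) + torch.log(1 - psi + eps)).mean()
    return l_cont + l_msg + l_cls + l_sparse

# Example call with dummy tensors (shapes are illustrative).
I_w, I_o = torch.rand(3, 400, 400), torch.rand(3, 400, 400)
msg_logits, msg_gt = torch.randn(48), torch.randint(0, 2, (48,)).float()
loss = total_loss(I_w, I_o, msg_logits, msg_gt, torch.randn(()), torch.randn(()), torch.rand(1024))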
During training, we apply several types of 2D distortions on the watermarked rendered images to achieve robustness, including Gaussian noise, random rotation, random cropout, and Gaussian blur. As our motivation is to protect the copyright of NeRF that has already been created, we first create several NeRF models by training them on Blender and LLFF datasets <cit.> following standard settings. Then, we use GeometrySticker to attach binary messages on established NeRF and apply the aforementioned recolorization methods in ssec:recolorization on the NeRF watermarked by GeometrySticker to test the watermarking robustness against recolorizations. Specifically, we employ the VGG16 <cit.> network as the backbone of the CNN-based message extractor. An average pooling is then performed, followed by a final linear layer with a fixed output dimension N_b to produce the continuous predicted message 𝐌̂. Training typically takes 5000 steps and can be completed within 45 minutes. All experiments are conducted on a single V100 GPU. § EXPERIMENTS §.§ Experimental settings Dataset. We use two benchmark datasets Blender <cit.> and LLFF <cit.> for evaluation. For Blender, we directly follow the dataset splitting to use 100 viewpoints from the training set to train our GeometrySticker and then render 200 views from the testing set to validate whether the binary messages can be extracted under different viewpoints and color editing conditions. For LLFF, we follow the dataset splitting in NeRF<cit.>. In general, 1/8 images in each scene are used to test the visual quality and bit accuracy of binary message extraction, and others are used to train our GeometrySticker. We report average values across all testing viewpoints in our experiments. All testing viewpoints are used for computing the average values during the evaluation session. Baselines. Our evaluations consist of three parts to verify claimability, recolorization, and scalability. For claimability, we compare our proposed GeometrySticker with four baselines for a fair comparison: 1) HiDDeN <cit.> + NeRF <cit.>. We process images with HiDDeN <cit.>, a classical image watermarking method, before the training of NeRF; 2) CopyRNeRF <cit.>. A state-of-the-art method for protecting the copyright of NeRF <cit.> by using digital watermarking; 3) StegaNeRF <cit.>. A state-of-the-art data hiding method for steganographic information embedding of NeRF. We adapt StegaNeRF <cit.> to embed binary messages with the CNN-based message extractor for message retrieval; 4) Unwatermarked NeRF. We also compare the rendered results from NeRF watermarked by GeometrySticker with the rendered results from the non-watermarked version of NeRF to evaluate whether GeometrySticker undermines visual quality. For recolorization, we compare the differences between the watermarked recolorized images with the corresponding unwatermarked recolorized images to investigate whether our GeometrySticker undermines recolorization. For scalability, we validate whether GeometrySticker can be easily adapted into various NeRF architectures, including vanilla NeRF <cit.>, InstantNGP <cit.>, and TensoRF <cit.>. We also evaluate whether GeometrySticker is compatible with the existing recolorization schemes mentioned in ssec:recolorization. Evaluation methodology. We evaluate the performance of GeometrySticker compared with other digital watermarking methods using the standard of capacity, invisibility, and robustness. 
For capacity, we set the hidden message bit length to 48 bits, aligning with the maximum length previously employed in 3D model watermarking methods <cit.>. For invisibility, we evaluate the visual quality with PSNR, SSIM, and LPIPS <cit.> by comparing the visual quality of the rendered images before and after GeometrySticker watermarking. For robustness, we evaluate whether the hidden messages remain consistent under various distortions and recolorizations. Besides normal situations, we consider different distortions including Gaussian noise, rotation, cropout, and Gaussian blur. Different recolorization pipelines are employed to ensure adequate comparisons. §.§ Experimental results Can we claim ownership over recolorized NeRF models? We first assess whether our GeometrySticker can maintain its effectiveness under various recolorization operations. We consider model-level recolorization, including CLIP-based and palette-based recolorization. We also apply color jittering to randomly change images' hues as a general image-level color alteration. As shown in table:recolorization-robustness, HiDDeN <cit.> + NeRF <cit.> completely fails to perform well under both image- and model-level recolorization. CopyRNeRF <cit.> also shows degraded performance since CopyRNeRF <cit.> uses the color representation for hiding the binary message. Since StegaNeRF <cit.> depends on the complete geometry and color representation for data hiding, the hidden messages are susceptible to being compromised under both image- and model-level recolorization. Moreover, the intrinsic architectures of CopyRNeRF <cit.> and StegaNeRF <cit.> do not fully support palette-based recolorization, which leads to “N.A.” in table:recolorization-robustness. In contrast, our approach is uniquely adaptable to all three existing recolorization schemes, further underscoring the scalability of our method. Furthermore, the binary messages embedded by our GeometrySticker remain effective against both image- and model-level recolorization and achieve high bit accuracy. Does GeometrySticker undermine recolorization? We discuss whether our GeometrySticker undermines recolorization in table:recolorization-robustness and fig:residual-maps. We evaluate the variations between recolored samples from NeRF models that are not watermarked and those that have been watermarked. In table:recolorization-robustness, as it is difficult to obtain identical recolorized pairs via CLIP-based recolorization, we use palette-based recolorization for our approach and HiDDeN <cit.> + NeRF <cit.>. We utilize color-jittering to change the images' hue for CopyRNeRF <cit.> and StegaNeRF <cit.>, since the two methods are not compatible with palette-based recolorization. The better quantitative values in table:recolorization-robustness show that our samples can achieve higher similarity to the unwatermarked recolorized images. This is further supported by the qualitative results shown in fig:residual-maps. Moreover, our technique uniquely maintains a balance between recolorization quality and bit accuracy post-recolorization, unlike other methods, which show significantly reduced bit accuracy following recolorization. Can GeometrySticker function properly without recolorization? We evaluate if GeometrySticker functions properly in standard scenarios without recolorization.
We also evaluate its robustness by applying several types of 2D distortions to rendered images, including Gaussian noise with deviation ν, random rotation with parameters α, random cropout with a parameter s, and Gaussian blur with deviation ξ. As shown in table:distortion-attacks, HiDDeN <cit.> fails to extract the binary messages from the renderings of NeRF. CopyRNeRF <cit.> can only partially extract hidden messages from the renderings and shows undermined robustness to different image distortions. Although StegaNeRF <cit.> can extract the hidden messages, it shows vulnerability to different types of image distortions. Our GeometrySticker reliably extracts hidden messages and shows robustness to different image distortions. Is GeometrySticker scalable? We have shown that our method can achieve scalability over the three main recolorization schemes in table:recolorization-robustness. We further evaluate the scalability of our proposed GeometrySticker on three typical NeRF architectures, including vanilla NeRF <cit.>, InstantNGP <cit.>, and TensoRF <cit.>. From tab:scalability, GeometrySticker can achieve high invisibility and bit accuracy in these conditions, which reconfirms its scalability. Ablation study. The selection strategy of 3D points is a key component of our framework. We focus on investigating this part in our ablation study. As shown in fig:ablation_study, attaching messages to all geometry components can cause obvious distortion, which is aligned with the previous findings in CopyRNeRF <cit.>. Applying a simple Laplace CDF with fixed thresholds for message attachment can reduce the perturbation of the NeRF geometry but still causes noticeable distortion. Our learnable Laplace CDF can find an optimal threshold for message attachment, making the visual distortion imperceptible. Potential threat analysis. The experiments in table:distortion-attacks have highlighted the robustness of our method to common image distortions. Besides, as recolorization is also a very powerful modification operation, our previous experiments also demonstrate that GeometrySticker can show robustness to different recolorizations. We further investigate the robustness of the embedded watermarks against various possible deliberate interferences and security threats including the adversarial attack and model purification. (Figure: Robustness to model purification and adversarial attacks, showing the correlation between PSNR and bit accuracy.) Adversarial attack. We consider the situation in which the message extractor has been leaked. A malicious user can use an adversarial attack such as PGD <cit.> to remove the hidden messages by optimizing the rendered images via a PSNR constraint. The goal is to minimize the Euclidean distance between a pre-sampled random binary message and the message extractor's output, which could replace the original hidden message with a random one. As shown in fig:threat_model_analysis, adversarial attacks can indeed result in a reduction of bit accuracy while causing only minimal visual distortion. Thus, it's essential to keep the message extractor private. Model purification. Another threat is model purification, i.e., fine-tuning the model with non-watermarked images. We consider an extreme situation in which the attackers can directly access the original non-watermarked images used for the NeRF creation.
Based on this assumption, we implement this attack on GeometrySticker by eliminating the message loss and fine-tuning the model solely through the perceptual loss. As shown in fig:threat_model_analysis, the bit accuracy starts to decrease only when the model purification sacrifices the rendering quality, and it remains relatively high as long as the rendered image quality is preserved. These results show that model purification can hardly reduce the bit accuracy significantly without sacrificing the image quality. § CONCLUSION In our research, we introduce a novel approach for asserting ownership of recolorized NeRF models through the innovative GeometrySticker. By embedding binary messages within the high geometry value components of NeRF, we ensure that these messages remain robust even after recolorization. Comprehensive testing demonstrates that our method establishes ownership claims on recolorized NeRF models, which guarantees the safe application of NeRF recolorization across various scenarios, thereby ensuring positive societal impacts for protecting the copyrights of artists and creators. Limitations and future work. Our method is an effective technical solution for copyright protection against the recolorization of NeRF models. However, as we discussed before, our mechanism may still face threats from some malicious operations. We will consider further enhancing the adversarial robustness through adversarial learning approaches <cit.>. Besides, we will explore enhancing the robustness of GeometrySticker towards geometry editing scenarios, enabling the manipulation of the NeRF model and rendered images through techniques such as cage-based deformation <cit.> or motion transfer <cit.> in future work. Moreover, we will improve the GeometrySticker to align with the emerging 3D Gaussian Splatting (3DGS) method <cit.>, which utilizes an explicit point cloud representation, distinguishing itself from the NeRF implicit neural representation. Our aim is to elevate the versatility of GeometrySticker to be compatible with different 3D representation baselines. Acknowledgement This work was done at Renjie's Research Group at the Department of Computer Science of Hong Kong Baptist University. Renjie's Research Group is supported by the National Natural Science Foundation of China under Grant No. 62302415, Guangdong Basic and Applied Basic Research Foundation under Grant No. 2022A1515110692, 2024A1515012822, and the Blue Sky Research Fund of HKBU under Grant No. BSRF/21-22/16. Department of Computer Science, Hong Kong Baptist University NVIDIA AI Technology Center, NVIDIA xiufenghuang@life.hkbu.edu.hk, {chcheung, ssee}@nvidia.com, renjiewan@hkbu.edu.hk Supplementary Material: GeometrySticker: Enabling Ownership Claim of Recolorized Neural Radiance Fields Xiufeng Huang1,2 Ka Chun Cheung2 Simon See2 Renjie Wan1 (Corresponding author). July 22, 2024 ========================================================================================================= § OVERVIEW This supplementary document provides more discussions, implementation details, and further results that accompany the paper: * sec:uniqueness explains the uniqueness of our method by comparing with the current NeRF ownership claiming methods under NeRF recolorizations. * sec:laplace_cdf explains the effectiveness of applying the Laplace Cumulative Distribution Function (CDF) with learnable parameters.
* sec:recolorization_methods introduces the details of our reference colors and visualizes their corresponding recolorization results for NeRF. These recolorization methods are applied to different NeRF architectures to validate ownership for the recolorized NeRF. * sec:implementation_details presents the implementation details of our method, including the network architectures and the training process. * sec:additional_results provides additional results, including additional qualitative results of the main paper. § UNIQUENESS As shown in fig:uniqueness, we demonstrate the uniqueness of using our GeometrySticker to claim ownership of a recolorized NeRF model. The current ownership protection methods such as CopyRNeRF <cit.> and StegaNeRF <cit.> can only claim the ownership when recolorization is not conducted. However, given the recent developments of NeRF recolorization methods <cit.>, if a model owner Bob creates a NeRF model and watermarks it with CopyRNeRF <cit.> or StegaNeRF <cit.>, the hidden ownership information could be vulnerable when a malicious user applies unauthorized recolorization on the NeRF model. Our GeometrySticker can be robust under different recolorizations. A model owner Alice can watermark her NeRF model by GeometrySticker, which keeps the hidden information consistent under different recolorizations and allows the binary message to be reliably extracted from the recolorized NeRF renderings. § LEARNABLE LAPLACE CDF We provide more ablation studies for our learnable Laplace CDF used for the selection of cover media. As shown in fig:laplace_cdf, we calculate the mean μ and deviation β of the geometry values and use the Laplace distribution to model the geometry value distribution of a selected scene. As shown in fig:laplace_cdf (a), attaching messages to all NeRF geometry values can cause obvious distortion since the low geometry values take up the majority of the entire NeRF geometry. We apply the Laplace CDF with the fixed parameters μ and β and the CDF value ψ=0.99 as the threshold to filter large geometry values for message attachment. As shown in fig:laplace_cdf (b), applying the Laplace CDF with calculated parameters can reduce the perturbation but still shows noticeable distortion. As shown in fig:laplace_cdf (c), our learnable Laplace CDF can adaptively find an optimized deviation parameter β to adjust the CDF threshold (ψ=0.99) for the selection of cover media and finally make the perturbation caused by the attached messages imperceptible. § MORE DETAILS ON RECOLORIZATION We select 10 reference colors from the Standard sRGB / Rec.709 color gamut including green, yellow, orange, red, pink, magenta, purple, blue, dodger blue, cyan. We recolorize the NeRF models by using the colors' names for the CLIP-based method or assigning RGB values for the palette-based method. We also convert the NeRF renderings into HSV format to recolorize the images by changing the hue channel. As shown in fig:palette-based-recolorization, the palette-based method can precisely recolorize NeRF by editing the palette's colors to the reference colors. As shown in fig:clip-based-recolorization, though the CLIP-based method can roughly conduct the recolorization via the text prompts, the results are uncontrollable since the recolorization under the same prompts may have some differences as shown in fig:color-clip-diff. Thus, it is hard to get the same results for an unwatermarked NeRF model and a watermarked NeRF model.
As shown in fig:color-jittering, color-jittering is an image-level recolorization by converting images into HSV format and shifting the intensity of the hue channels in a scale of [-0.5, 0.5]. For a fair comparison across different baselines, we only use color-jittering in our reconstruction quality computation for PSNR/SSIM and LPIPS in the main manuscript Table 1, since CLIP-based recolorization is uncontrollable and palette-based recolorization is not applicable to CopyRNeRF <cit.> and StegaNeRF <cit.>. All the testing set images in the main manuscript Section 5.1 are recolorized for computing reconstruction quality or message extraction bit accuracies. § IMPLEMENTATION DETAILS §.§ Network architectures In our proposed GeometrySticker, the message sticker Θ_𝐦 is an MLP layer. Specifically, it has 80 input channels, which are a concatenation of the message 𝐌 in 48 dimensions and the positional encoding γ_x(𝐱) in 32 dimensions. The message sticker Θ_𝐦 has two hidden layers with 64 dimensions and a 1-dimensional output for the message embedding m. For the message extractor D_χ, we use the VGG16 network <cit.> as the backbone feature extractor. An average pooling is then performed, followed by a final linear layer with a fixed output dimension N_b to produce the continuous predicted message 𝐌̂. For the watermark classifier C_ϕ, we use a similar architecture to the message extractor D_χ, with the VGG16 network <cit.> as the feature extractor followed by an average pooling layer and a final 1-dimensional layer for classification. §.§ Training process The training process consists of two stages. In the first stage, we establish a NeRF scene by optimizing Θ_σ and Θ_c to get the geometry and color values of the scene according to ℒ_cont. In the second stage, we keep the geometry MLP Θ_σ and color MLP Θ_c unchanged and train the message sticker Θ_m and the Laplace CDF with the learnable deviation parameter β for message attachment and key point selection. Meanwhile, we train a message extractor D_χ to extract the hidden message from the 2D watermarked renderings. In addition, we also train the watermark classifier C_ϕ to classify whether the NeRF renderings contain watermarks or not. The ℒ_cont is measured by the mean squared error between the watermarked rendering images and the ground truth images. The ℒ_msg is a binary cross entropy loss calculated between the embedded messages 𝐌 and the extracted messages 𝐌̂. The ℒ_cls is a binary cross entropy loss calculated between the watermarked rendering image 𝐈_w and the unwatermarked rendering images 𝐈_u. ℒ_sparse is the sparsity loss <cit.> to force the CDF value ψ to be close to either zero or one. The network Θ_m and parameters χ, ϕ and β are optimized with the objective functions ℒ_cont, ℒ_msg, ℒ_cls and ℒ_sparse. In every training loop, we attach the message 𝐌 with a random camera pose and apply 2D distortions on the watermarked rendering images. § ADDITIONAL RESULTS We provide additional results to validate the effectiveness of our GeometrySticker. As shown in fig:residual-maps-normal, we evaluate the qualitative and quantitative results of the reconstruction quality and bit accuracies of our GeometrySticker on the selected scene. The watermarked rendered images have high reconstruction quality with minimal discrepancies compared with the original rendered images. From the residual maps, we can observe that the hidden messages are sparsely embedded into the geometrical structure of the object or scene.
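Based on the network architecture description above, a possible PyTorch realization of the VGG16-based message extractor could look as follows; this is our own sketch rather than the released implementation, and the use of adaptive average pooling and untrained VGG16 weights are assumptions.

import torch
import torch.nn as nn
from torchvision.models import vgg16

class MessageExtractorSketch(nn.Module):
    """Sketch of the extractor D_chi: VGG16 features, global average pooling,
    and a linear head with N_b outputs (48 bits assumed)."""
    def __init__(self, n_bits: int = 48):
        super().__init__()
        self.backbone = vgg16().features          # convolutional part only, no pretrained weights
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512, n_bits)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.pool(self.backbone(image)).flatten(1)
        return self.head(feat)                     # continuous predicted message logits

# Example on a rendered 400x400 patch (batch of one).
extractor = MessageExtractorSketch()
logits = extractor(torch.rand(1, 3, 400, 400))
bits = (torch.sigmoid(logits) > 0.5).int()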
We further validate the consistency of our GeometrySticker under different recolorizations. As shown in fig:residual-maps-recoloring, the message perturbation attached by GeometrySticker remains consistent from non-recolorized NeRF models to recolorized NeRF models. These results show our method successfully embeds secret messages into the geometry representation and disentangles them from the color representation, thus claiming ownership under various NeRF recolorizations.
http://arxiv.org/abs/2407.12459v1
20240717101249
Noncommutative Lightcones from Quantum SO(2,1) Conformal Groups
[ "Martina Adamo", "Angel Ballesteros", "Flavio Mercati" ]
hep-th
[ "hep-th", "gr-qc", "math-ph", "math.MP" ]
http://arxiv.org/abs/2407.12737v1
20240717165832
Tutorial on Quantum Error Correction for 2024 Quantum Information Knowledge (QuIK) Workshop
[ "Priya J. Nadkarni", "Narayanan Rengaswamy", "Bane Vasić" ]
quant-ph
[ "quant-ph", "cs.IT", "math.IT" ]
Tutorial on Quantum Error Correction for 2024 Quantum Information Knowledge (QuIK) Workshop Priya J. Nadkarni, Narayanan Rengaswamy, and Bane Vasić P. J. Nadkarni, N. Rengaswamy, and B. Vasić are the Program Chairs for the First Quantum Information Knowledge (QuIK) Workshop held during the 2024 IEEE International Symposium on Information Theory at Athens, Greece. Email: narayananr@arizona.edu ================================================================================================================================================================================================================================================================================================================================== § ABSTRACT We provide a brief review of the fundamentals of quantum computation and quantum error correction for the participants of the first Quantum Information Knowledge (QuIK) workshop at the 2024 IEEE International Symposium on Information Theory (ISIT 2024). While this is not a comprehensive review, we provide many references for the reader to delve deeper into the concepts and research directions. Quantum error correcting codes, stabilizer codes, CSS codes, fault-tolerance, quantum computation § INTRODUCTION Quantum technologies exploit the laws of quantum mechanics, the most precise physical description of the world, to enable fundamentally new information processing capabilities. The primary quantum technologies are quantum computers, quantum communications and networks, and quantum sensors. While these technologies are all developed from the same concepts, their goals and tasks vary significantly. For this workshop, we will primarily focus on quantum computing, where the goal is to store and process information in quantum-mechanically behaving carriers such as atoms, ions, superconducting circuits, and photons. When isolated from their environments, these carriers behave ideally and can keep the information intact indefinitely. However, in reality, they continuously interact with the environment and cause the stored information to decohere. Similarly, the external manipulation of these carriers to compute on the information is also far from ideal, suffering from lack of precision, background noise etc. Therefore, it is essential to protect the stored information from decoherence as well as ensure that its processing remains tolerant to faults in the apparatus. The most systematic approach to such fault tolerant information processing in quantum systems is through the use of quantum error correcting codes. In this document, we provide a brief overview of the fundamentals of quantum error correction and fault tolerance. We assume that the reader is familiar with classical error correction or channel coding but perhaps not with quantum information. The goal is to provide sufficient background for the QuIK'24 workshop attendees to follow the invited talks, posters and discussions. While this is not a comprehensive review of the field, we will provide ample references for the readers to expand on the fundamentals discussed here. For a historical review of quantum computing and quantum error correction, we recommend the readers to refer to <cit.>. § BASICS OF QUANTUM COMPUTATION §.§ Postulates of quantum mechanics The theory of quantum mechanics involves a mathematical formulation describing the behaviour of physical systems at submicroscopic scales with a set of postulates that associate experimental observations to the mathematical formulation. 
The four postulates of quantum mechanics are <cit.>: * State of a quantum mechanical system: A normalized state vector, a unit vector in the state space, completely describes an isolated physical system. The state space is mathematically described by a Hilbert space, a complete complex vector space with an inner product. The fundamental unit of quantum information is an m-dimensional quantum state called a quantum digit (qudit). When m = 2, the two-dimensional unit of quantum information in a two-level quantum system is termed the quantum bit (qubit), whose state is represented by the “ket” |ψ⟩ = a|0⟩ + b|1⟩ = a [ 1; 0 ] + b [ 0; 1 ], where a,b ∈ℂ and |a|^2+|b|^2=1. The normalization constraint is referred to as Born's rule <cit.>. The states |0⟩ and |1⟩ form the computational basis of the state space. The state |ψ⟩ is said to be in a superposition of |0⟩ and |1⟩. The Hermitian transpose of a ket is the “bra”: ⟨ψ| ≜ |ψ⟩^† = a^* ⟨0| + b^* ⟨1| = a^* [ 1 0 ] + b^* [ 0 1 ]. The (complex) inner product between two quantum states |ψ⟩ and |ϕ⟩ is denoted by ⟨ψ|ϕ⟩. The bra-ket notation is also termed the Dirac notation, named after Paul Dirac who introduced it [His intention was likely to make the inner product ⟨ψ|ϕ⟩ look similar to the common bracket notation (|ψ⟩,|ϕ⟩), but specific to quantum mechanics.]. Similar to qubits, the state of an m-dimensional qudit is represented by |ψ⟩ = ∑_j=0^m-1 a_j |j⟩, where a_0,a_1,…,a_m-1∈ℂ and ∑_j=0^m-1 |a_j|^2 = 1. The states |0⟩,|1⟩,…,|m-1⟩ form the computational basis of the qudit state space. * Evolution of a quantum mechanical system: The evolution of a closed (or isolated) quantum system is completely described by a unitary operator. Recall that a complex square matrix U ∈ℂ^2 × 2 is unitary if and only if its inverse is the same as its Hermitian transpose, i.e., U^-1 = U^†. The states |ψ_1⟩ and |ψ_2⟩ of a quantum system at times t_1 and t_2 are related by a unitary operator U that depends only on the time instances t_1 and t_2, i.e., |ψ_2⟩ = U|ψ_1⟩. The most basic unitary operators are the single-qubit Pauli operators I_2 = [ 1 0; 0 1 ] , X = [ 0 1; 1 0 ], Y = [ 0 -i; i 0 ] , Z = [ 1 0; 0 -1 ]. Note that these are also Hermitian; the non-identity Pauli matrices have determinant -1. The non-identity Pauli operators have order 2, zero trace, and eigenvalues ± 1. These generate the single-qubit Pauli group 𝒫 ≜ ⟨ iI_2, X, Y, Z ⟩ = {± I_2, ± iI_2, ± X, ± iX, ± Y, ± iY, ± Z, ± iZ}. The Pauli matrices are also represented as σ_0 ≡ I_2, σ_1 ≡ X, σ_2 ≡ Y, and σ_3 ≡ Z. Two distinct non-identity Pauli matrices anticommute with each other, e.g., XZ = -ZX, and are related as XY=iZ, YZ=iX, and ZX=iY. The Pauli matrices are orthogonal with respect to the Hilbert-Schmidt (or trace) inner product ⟨ A,B ⟩_HS ≜ Tr(A^†B). They form an orthonormal basis via normalization: {1/√(2)σ_i ; i=0,1,2,3 }. The Hermitian Pauli matrices are represented by a two-bit vector [a,b] based on the representation of the operator as i^ab X^a Z^b, i.e., I ≡ [0,0], X ≡ [1,0], Y ≡ [1,1], Z ≡ [0,1]. By extension to 𝒫, this defines a homomorphism γ: 𝒫→𝔽_2^2 whose kernel is ⟨ iI_2 ⟩. Similarly, the evolution of qudits is described by unitary operators in ℂ^m × m, which are usually represented in terms of the generalized Pauli basis <cit.>. For qudits of prime-power dimension m=p^l, the unitary operators are represented in terms of the clock operators Z^(p^l)(γ) = ∑_θ∈𝔽_p^lω^Tr_p^l/p(γθ)|θ⟩⟨θ| and shift operators X^(p^l)(β) = ∑_θ∈𝔽_p^l|β + θ⟩⟨θ|, where p is prime, ω=e^i2π/p, β∈𝔽_m, and the field trace Tr_p^l/p(β)=∑_i=0^l-1β^p^i <cit.>.
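The single-qubit Pauli relations listed above are easy to verify numerically; the following Python snippet (our own illustration, not part of the tutorial) checks the anticommutation, product, trace, eigenvalue, and Hilbert-Schmidt orthonormality properties.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Basic identities quoted in the text.
assert np.allclose(X @ Z, -Z @ X)          # distinct non-identity Paulis anticommute
assert np.allclose(X @ Y, 1j * Z)          # XY = iZ
assert np.allclose(Y @ Z, 1j * X)          # YZ = iX
assert np.allclose(Z @ X, 1j * Y)          # ZX = iY
for P in (X, Y, Z):
    assert abs(np.trace(P)) < 1e-12        # zero trace
    assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [-1, 1])  # eigenvalues ±1

# Hilbert-Schmidt orthonormality of {P / sqrt(2)}.
basis = [I2, X, Y, Z]
gram = np.array([[np.trace(A.conj().T @ B) / 2 for B in basis] for A in basis])
assert np.allclose(gram, np.eye(4))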
* Measurement on a quantum mechanical system: A set of operators {M_m}_m satisfying ∑_m M_m^† M_m = I, called the measurement operators, describe a quantum measurement, where the index m denotes the possible measurement outcomes. The measurement outcome and the post-measurement state are probabilistic in nature: if the state of the system being measured is |ψ⟩, then the outcome m is obtained with probability p(m) = ⟨ψ| M_m^† M_m |ψ⟩. The completeness condition ∑_m M_m^† M_m = I ensures that the probabilities sum to 1 for any initial state |ψ⟩. The post-measurement state of the system is given by |ψ_m⟩ = M_m |ψ⟩/√(p(m)) = M_m |ψ⟩/√(⟨ψ| M_m^† M_m |ψ⟩). Thus, measurement destroys superposition unless the state is an eigenstate of a measurement operator. The most common measurement is a projective measurement, where M_m = P_m are projection operators satisfying P_m P_m' = P_m if m = m' and P_m P_m' = 0 if m ≠ m'. It is common to describe a projective measurement as the measurement of an observable, i.e., a Hermitian operator. The outcomes are the eigenvalues and the measurement operators are given by the projectors onto the different eigenspaces, obtained by diagonalizing the observable. As an example, consider the measurement of Z on |ψ⟩ = a|0⟩ + b|1⟩. It is easy to verify that the projectors are P_+1 = |0⟩⟨0| and P_-1 = |1⟩⟨1|. The probabilities (resp. post-measurement states) of outcomes +1 and -1 are |a|^2 and |b|^2 (resp. |0⟩ and |1⟩), respectively. It is important to note that no quantum measurement can distinguish a state |ψ⟩ from its scalar multiple e^iθ|ψ⟩. Hence, global phase never matters.

* Composite quantum mechanical systems: The state space of a composite physical system is the tensor product of the state spaces of the component physical systems. For example, the state space of an n-qubit system is ℂ^2^n = ℂ^2 ⊗ ℂ^2 ⊗ ⋯ ⊗ ℂ^2 (n factors). For a composite physical system in which the i^th system is prepared in state |ψ_i⟩, where i ∈ {1, …, n}, the state of the complete system is |ψ_1⟩ ⊗ |ψ_2⟩ ⊗ ⋯ ⊗ |ψ_n⟩ ∈ ℂ^2^n.

The evolution of a closed quantum system with n qubits is described by a unitary operator U ∈ ℂ^2^n × 2^n. The Pauli group 𝒫_n on an n-qubit system is defined as the n-fold tensor product of the single-qubit Pauli group 𝒫. The homomorphism γ is extended to map 𝒫_n to 𝔽_2^2n: E(a, b) ≔ i^a_1 b_1 X^a_1 Z^b_1 ⊗ ⋯ ⊗ i^a_n b_n X^a_n Z^b_n ↦ [a_1, a_2, …, a_n, b_1, b_2, …, b_n]. Since any two non-identity Pauli matrices anti-commute, we can define the symplectic inner product between their binary representations to capture commutativity: symp([a,b],[c,d]) ≔ cb^T + ad^T (mod 2). The corresponding Pauli operators E(a, b) and E(c, d) commute if and only if the above symplectic inner product is 0 <cit.>. The 2^2n Hermitian n-qubit Pauli matrices {1/√(2^n) E(a,b) ; a, b ∈ 𝔽_2^n} form an orthonormal basis under the Hilbert-Schmidt inner product. Similarly, the Pauli group of an n-qudit system can be defined as an n-fold tensor product of the single-qudit generalized Pauli group <cit.>.

§.§ Mixed states and entanglement

Entanglement is a physical phenomenon unique to quantum mechanics due to which the definite state of a component system of a composite physical system cannot be described independently of the other component systems, irrespective of the distance between them. Entanglement and superposition enable quantum systems to perform better compared to their classical counterparts.
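To make the postulates above concrete, the following short sketch (ours, not part of the original tutorial; all helper names are illustrative) builds the single-qubit Pauli matrices in NumPy, checks the relations XZ = -ZX and XY = iZ, reproduces the Born-rule statistics of a Z-basis measurement, and evaluates the binary symplectic inner product that captures commutation.

```python
import numpy as np

# Single-qubit Pauli matrices (Postulate 2)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Non-identity Paulis anticommute and satisfy XY = iZ
assert np.allclose(X @ Z, -Z @ X)
assert np.allclose(X @ Y, 1j * Z)

# Born rule for a Z-basis measurement of |psi> = a|0> + b|1> (Postulate 3)
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)               # any pair with |a|^2 + |b|^2 = 1
psi = np.array([a, b], dtype=complex)
P_plus = np.array([[1, 0], [0, 0]], dtype=complex)   # projector |0><0|
P_minus = np.array([[0, 0], [0, 1]], dtype=complex)  # projector |1><1|
print(np.real(psi.conj() @ P_plus @ psi),            # p(+1) = |a|^2 = 0.333...
      np.real(psi.conj() @ P_minus @ psi))           # p(-1) = |b|^2 = 0.666...

# Binary symplectic representation: X -> [1|0], Z -> [0|1], Y -> [1|1]
def symp(u, v, n=1):
    """Symplectic inner product of u=[a|b] and v=[c|d]: a.d + b.c (mod 2)."""
    return int(u[:n] @ v[n:] + u[n:] @ v[:n]) % 2

print(symp(np.array([1, 0]), np.array([0, 1])))      # 1 -> X and Z anticommute
```

The same symplectic test extends directly to n-qubit Paulis by concatenating the X- and Z-parts of their binary representations, as described in the composite-system postulate above.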
For every quantum system in state |ψ⟩ with sub-systems A and B, there exists a Schmidt decomposition in terms of the orthonormal basis states {|j_A⟩}_j for the sub-system A and orthonormal basis states {|j_B⟩}_j for the sub-system B, respectively, such that |ψ⟩ = ∑_j=1^r λ_j |j_A⟩|j_B⟩, where the λ_j are non-negative numbers called the Schmidt coefficients such that ∑_j=1^r λ_j^2 = 1, and r is called the Schmidt rank <cit.>. The state |ψ⟩ is entangled if and only if r > 1. In an entangled system, the sub-system is viewed as being in an ensemble of states {(p_i, |ψ_i⟩)}_i, meaning that the sub-system is in the state |ψ_i⟩ with probability p_i. When the ensemble contains only one element, the sub-system is said to be in a pure state; else, it is in a mixed state. The state of the sub-system can alternatively be represented by a density matrix ρ = ∑_i p_i |ψ_i⟩⟨ψ_i|, which is a positive operator whose trace is 1. For a pure state, Tr(ρ^2) = 1, while for a mixed state, Tr(ρ^2) < 1. An example of an entangled state is (|00⟩ + |11⟩)/√(2) ≡ (|0⟩⊗|0⟩ + |1⟩⊗|1⟩)/√(2), represented by the ensemble {(1, (|00⟩ + |11⟩)/√(2))}. However, each of its single-qubit sub-systems is described by the ensemble {(1/2, |0⟩), (1/2, |1⟩)} and, hence, the density matrix (1/2) I_2. This is one of the four famous Bell states or EPR pairs (EPR stands for Einstein-Podolsky-Rosen): |Φ^±⟩ = (|00⟩ ± |11⟩)/√(2), |Ψ^±⟩ = (|01⟩ ± |10⟩)/√(2). Together, these form an entangled basis for ℂ^4, the state space of two qubits. They play a critical role in quantum information. We note that two different ensembles could correspond to the same density matrix, where the states of one ensemble are linear combinations of states of the other and the coefficients of the linear combinations form a unitary matrix <cit.>.

According to the four postulates of quantum mechanics, the unitary evolution of a mixed state ρ under a unitary operator U is described by U ρ U^†, where U^† is the conjugate transpose of U. Performing a measurement on ρ described by measurement operators {M_m}_m collapses ρ to the state ρ_m = M_m ρ M_m^† / Tr(M_m^† M_m ρ) with probability p(m) = Tr(M_m^† M_m ρ). Sometimes, it suffices to know only the statistics of the measurement and not the post-measurement states. For such cases, we can define a POVM (positive operator-valued measure) using positive operators {E_m}_m such that ∑_m E_m = I, where E_m plays the role of M_m^† M_m and p(m) = Tr(E_m ρ).

§.§ Quantum gates and measurements

A quantum gate is the same as a unitary operator. The basic gates are the Pauli gates and their tensor products on n qubits. An important set of gates is the Clifford group 𝒞_n <cit.>, which is defined as the normalizer of the Pauli group 𝒫_n, i.e., 𝒞_n ≔ { U ∈ 𝕌^2^n : U P U^† ∈ 𝒫_n ∀ P ∈ 𝒫_n }, where 𝕌^2^n denotes the group of unitary matrices of size 2^n. In other words, Clifford gates conjugate Pauli gates to Pauli gates. The Clifford group is generated by three gates: Hadamard (H), Phase (S), and controlled-NOT (CNOT). Their matrix representations are provided below: H ≔ 1/√(2) [ 1 1; 1 -1 ], S ≔ [ 1 0; 0 i ] = √(Z), CNOT ≔ [ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ] = [ I_2 0; 0 X ] = |0⟩⟨0| ⊗ I_2 + |1⟩⟨1| ⊗ X ≡ CX. Here, the notation CX refers to the fact that CNOT is a “controlled-X” gate: if the first qubit (control) is in state |0⟩, then it does nothing to the second qubit (target), but if the control qubit is in state |1⟩, then it applies X to the target qubit. Since X is the “bit flip” gate, i.e., X|0⟩ = |1⟩ and X|1⟩ = |0⟩, the effect of CNOT is the same as the reversible XOR generalized to quantum states via linearity.
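As a numerical illustration of the density-matrix formalism above (again a sketch of ours using plain NumPy, not code from the tutorial), the snippet below forms the Bell state |Φ^+⟩, traces out sub-system B, and confirms that the reduced state is the maximally mixed state I_2/2 with purity Tr(ρ^2) = 1/2, while the Schmidt rank 2 certifies entanglement.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) and its (pure) density matrix
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())
print(np.real(np.trace(rho @ rho)))            # purity 1.0: the joint state is pure

# Partial trace over sub-system B: reshape to (2,2,2,2) and contract the B indices
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(np.real(rho_A), 3))             # 0.5 * I_2, the maximally mixed state
print(np.real(np.trace(rho_A @ rho_A)))        # purity 0.5 < 1: the sub-system is mixed

# Schmidt rank from the 2x2 coefficient matrix: rank > 1 certifies entanglement
print(np.linalg.matrix_rank(phi_plus.reshape(2, 2)))   # -> 2
```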
Note that Z is commonly called the “phase flip” gate, since Z|0⟩ = |0⟩ and Z|1⟩ = -|1⟩, and Y is called the “bit-phase flip” gate. The Clifford group can be extended to a universal gate set by including any non-Clifford gate. Here, universality means that any unitary operator can be decomposed into a sequence of gates from this finite set with arbitrarily small approximation error in the spectral norm <cit.>. The most common non-Clifford gate included in the universal set is T ≔ [ 1 0; 0 e^iπ/4 ] = √(S) = Z^1/4, called the “T gate”. Hence, a common universal gate set for quantum computing is {H, T, CNOT}. The circuit notations for single-qubit gates U and CNOT_1 → 2 (i.e., first qubit as control and second qubit as target) are shown below:

[Circuit: a single wire carrying |ψ⟩ through a box labeled U, producing U|ψ⟩; and a two-qubit register in state |ψ⟩ in which the first (control) wire is marked with a dot joined by a vertical line to the target symbol on the second wire, denoting CNOT_1 → 2.]

The only other ingredient in quantum circuits is the quantum measurement. It can be shown that general quantum measurements can be realized through additional ancillary qubits, joint unitary evolution, and projective measurements <cit.>. Hence, it suffices to only consider projective measurements in quantum circuits. The standard measurement is the measurement of Pauli Z, often called the Z-basis measurement. Other measurements can be realized through suitable unitary operations before Z-measurement, e.g., X-measurement is equivalent to applying H followed by Z-measurement since HZH^† = X. The circuit representation for such measurements is:

[Circuit: measuring |ψ⟩ with an X-basis meter yields a classical ±1 outcome; equivalently, |ψ⟩ is passed through H and then measured with a Z-basis meter, yielding the same classical ±1 outcome.]

The double wire represents classical information whereas a solid wire represents quantum information.

§ PHYSICAL REALIZATION OF QUBITS

Most of the quantum computing systems currently use qubits. The physical implementation of these qubits can be based on various technologies such as photonics, superconducting circuits, ion traps, quantum dots, neutral atoms, etc. <cit.>. At the moment, there is no particular technology considered the standard for implementation of quantum computers, unlike classical computers for which semiconductor technology is considered the standard.

Photonic qubit encodings are usually based on using either a photon's degree of freedom, such as its polarization, or using continuous-variable codes, such as bosonic codes, based on states of light to encode a qubit <cit.>. Photonic quantum computers are easy to network, usually have minimal cryogenics requirements, are scalable, have flexibility in the choice of quantum error correction code used, and use measurement-based quantum computing (MBQC) <cit.> approaches. Their main challenges are the probabilistic nature of photonic qubit generation and gates, and combating photon loss.

Superconducting qubit encodings use superconducting electronic circuits to encode qubits within artificial atoms <cit.>. The basis states of a qubit are mapped to the energy levels that correspond to the integer number of pairs of electrons called Cooper pairs (for charge qubits), or to the integer number of magnetic flux quanta (for flux qubits), or different charge oscillation amplitudes across a Josephson junction (for phase qubits/qudits) <cit.>. Superconducting qubits have fast gate times, and methods/processes used for implementing classical computers can be utilized. However, their architectures need to be designed with quantum error correction codes whose operations act on neighboring qubits, as the qubits are usually laid out on a surface and have only limited nearest-neighbor coupling. Due to this restriction, scaling on superconducting quantum systems is a challenge.
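Returning to the gates introduced above, a few lines of NumPy (a hedged sketch; the assertions simply restate standard identities and are not taken from the tutorial) verify that H, S, and CNOT conjugate Pauli operators to Pauli operators, that T^2 = S and S^2 = Z, and that T is not a Clifford gate because T X T^† is a nontrivial linear combination of Paulis.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# Clifford gates conjugate Paulis to Paulis
assert np.allclose(H @ Z @ H.conj().T, X)                                  # H exchanges X and Z
assert np.allclose(S @ X @ S.conj().T, Y)                                  # S maps X to Y
assert np.allclose(CNOT @ np.kron(X, I2) @ CNOT.conj().T, np.kron(X, X))   # X on the control propagates to the target

# T is the square root of S (and S of Z), but T is not Clifford:
assert np.allclose(T @ T, S) and np.allclose(S @ S, Z)
print(np.round(T @ X @ T.conj().T, 3))   # (X + Y)/sqrt(2): not a single Pauli up to phase
```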
Ion-trap qubit encodings use ions or charged particles confined and suspended in free space using electromagnetic fields <cit.>. The basis states are the stable energy levels of these ions. Ion-trap qubits have long coherence times and high-fidelity quantum operations. The main challenge for ion-trap based quantum computing is scaling it to hundreds or thousands of qubits, which is required for quantum advantage. Neutral atom encodings use two different energy states of the atom to encode the qubit <cit.>. The atoms have long coherence time and are easier to trap and control as they are neutral in charge, enabling scalable quantum computing architectures. The gate operations are slower compared to superconducting circuits and need more preparation time at the beginning of the computation. § QUANTUM NOISE CHANNELS A general representation of a quantum channel is the so-called Kraus representation <cit.>: ℰ(ρ) = ∑_i E_i ρ E_i^†, where{ E_i }_iare called Kraus operators and satisfy the completeness condition∑_i E_i^† E_i = I. The effect of noise on a quantum system can be captured in this representation using suitable Kraus operators that describe the noise. The dephasing channel either leaves the qubit unchanged or appliesZ: ℰ_ dephasing(ρ) = (1-ε) ρ + ε Z ρ Z. The bit flip channel can be described similarly as ℰ_ flip(ρ) = (1-ε) ρ + ε X ρ X. The quantum equivalent of the classical binary symmetric channel is the depolarizing channel which either leaves the qubit unchanged or applies one of the flip operators: ℰ_ dep(ρ) = (1-ε) ρ + ε/3 [ X ρ X + Y ρ Y + Z ρ Z] = ( 1 - 4ε/3) ρ + 4 ε/3 I_2. The second equality can be shown by expandingρin the Pauli basis <cit.>. This is the most common noise channel considered in quantum computing as it represents the worst-case scenario where the quantum state can be replaced with the completely mixed state1/2I_2(which retains no information aboutρ). It can be simulated by a4-sided coin flip and applying eitherI_2, X, Y, orZaccording to the result. § QUANTUM ERROR CORRECTION Quantum error correction (QEC) involves incorporating redundancy in quantum information which enables the system to retrieve the quantum information in the presence of noise. The no cloning theorem <cit.> forbids copying of arbitrary quantum states, so a naïve quantum repetition code does not exist. The QEC code is a subspace of the state space over which the quantum states are defined. A code is able to correct a set of errorsE = {E_i}if and only ifPE_i^†E_jP = c_ij P, wherePis the code space projector andc_ij∈ℂform a Hermitian matrix <cit.>. This is known as the Knill-Laflamme condition for QEC <cit.>. The stabilizer framework <cit.> based on the Pauli group is commonly used to construct quantum codes. The Calderbank-Shor-Steane (CSS) framework <cit.>, a sub-framework of the stabilizer framework, constructs quantum codes from pairs of classical codes satisfying a dual-containing constraint. In this section, we review these two frameworks and discuss some of the latest codes of interest in the field of quantum computation. The framework of subsystem codes <cit.> generalizes the stabilizer framework and has proven very useful, but we will not discuss them here. §.§ Stabilizer codes Let𝒮be an abelian subgroup of the Pauli group𝒫_nthat does not contain-I_2^n. Let the minimal generators of𝒮beS_1,S_2,…,S_r. The stabilizer code <cit.>𝒬_𝒮is the subspace ofℂ^2^ndefined as 𝒬_𝒮{|ψ⟩∈ℂ^2^n S_i |ψ⟩ = |ψ⟩ ∀ i ∈{1,…,ρ}}. 
The group𝒮is called the stabilizer group of𝒬_𝒮because it stabilizes the codeword|ψ⟩, i.e.,|ψ⟩is a simultaneous eigenstate of all elements of𝒮with eigenvalue+1. The minimal generatorsS_is are called the stabilizer generators. LetS_i = E(a_i, b_i), wherea_i, b_i ∈𝔽_2^n. The check matrix of𝒬_𝒮isH_𝒮 = [ H_X| H_Z], where H_X = [ a_1; a_2; ⋮; a_r ] , H_Z = [ b_1; b_2; ⋮; b_r ]∈𝔽_2^r × n. As the stabilizers commute, their symplectic inner productsymp([a_i,b_i],[a_j,b_j]) = a_jb_i^T + a_ib_j^T = 0(mod2) for alli,j∈{1,2,…,r}<cit.>. Equivalently, we have the constraint H_X H_Z^T + H_Z H_X^T = 0. The dimension of the stabilizer code defined byrminimal stabilizer generators overnqubits is2^(n-r)<cit.>. The code is said to encodek = n-rlogical qubits of information intonphysical qubits. The normalizer𝒩(𝒮)of𝒮in𝒫_nis the set of operators in𝒫_nthat commute with the group𝒮, i.e.,∀ E ∈𝒩(𝒮)andS ∈𝒮, we haveS E = E S. The commonly considered errors on stabilizer codes are Pauli errors, e.g., the depolarizing channel. Through appropriate syndrome measurement circuits that only depend on the stabilizers, one can detect if the error commutes or anti-commutes with each stabilizer <cit.>. This provides anr-bit syndrome, where the bit is0if the error commutes with that stabilizer and1if it anti-commutes with that stabilizer. Note that errors that are stabilizers leave the state unchanged and, hence, are trivial errors. These are called degenerate errors. The minimum distance of the code,d, is the minimum weight of an undetectable error, i.e., a non-trivial error whose syndrome is trivial. Hence,dis the minimum Pauli weight of an element in𝒩(𝒮) ∖𝒮, where Pauli weight refers to the number of non-identity components in then-qubit Pauli operator. The size of𝒩(𝒮)is2^(2n-r). Overall, the stabilizer code has parameters n,k,d . The stabilizer codes can be viewed to be analogous to classical additive[All additive codes over a prime field are linear codes.] codes <cit.>. The check matrix is analogous to the parity check matrix of a classical code. As measurement of a quantum state collapses the superposition of the state, quantum error correction needs to be performed without any knowledge about the state. A syndrome is anr-bit binary vector obtained using the eigenvalues of the stabilizers for the erroneous quantum state, mapping+1to bit0and-1to bit1. In other words, ifE ∈𝒫_nis the error andS ∈𝒮, then for any initial code state|ψ⟩we have the eigenvalue equation S (E |ψ⟩) = (SE) |ψ⟩ = (± ES) |ψ⟩ = (± E) (S |ψ⟩) = ± E |ψ⟩. Hence, mathematically, the syndrome is obtained from the symplectic inner product of the error with the stabilizer generators. This is analogous to obtaining a syndrome in the classical case based on the parity checks. Based on the syndrome, a recovery operator is deduced and used to correct the error. Due to the existence of degenerate errors in the quantum setting, it suffices to find a recovery operator that is a product of the actual error and any stabilizer. Thus, degeneracy is a uniquely quantum phenomenon which provides many ways to correct the same error. IfEis the actual error andÊis the error estimate from a decoder (that has the same syndrome asE), then there are two possible scenarios:Ê E ∈𝒮(correct decoding) orÊ E ∈𝒩(𝒮) ∖𝒮(logical error). A stabilizer code is said to be degenerate if there exists a stabilizer in𝒮whose Pauli weight is strictly less thand. For a degenerate code, correct decoding may be possible withÊ≠ Esuch that bothÊandEhave the same Pauli weight. 
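The syndrome extraction just described can be emulated classically with the binary symplectic representation. The following toy sketch (our own illustration, using the 3-qubit bit-flip code with stabilizer generators Z_1Z_2 and Z_2Z_3 rather than any code from the tutorial; all function names are ours) checks that the generators commute and computes the syndrome of a randomly sampled depolarizing-style Pauli error.

```python
import numpy as np

def symplectic(u, v, n):
    """0 if the Paulis with binary representations u=[a|b], v=[c|d] commute, 1 otherwise."""
    return int(u[:n] @ v[n:] + u[n:] @ v[:n]) % 2

def syndrome(H, e, n):
    """Syndrome of the Pauli error e=[a|b] against the rows of the check matrix H=[Hx|Hz]."""
    return np.array([symplectic(row, e, n) for row in H])

# Toy example: 3-qubit bit-flip code with stabilizer generators Z1Z2 and Z2Z3
n = 3
H = np.array([[0, 0, 0, 1, 1, 0],    # Z1 Z2  -> [000 | 110]
              [0, 0, 0, 0, 1, 1]])   # Z2 Z3  -> [000 | 011]
assert symplectic(H[0], H[1], n) == 0          # stabilizer generators must commute

# Sample a depolarizing-style error: each qubit independently suffers X, Y or Z w.p. eps
rng = np.random.default_rng(7)
pauli_bits = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}
eps = 0.2
labels = [rng.choice(["X", "Y", "Z"]) if rng.random() < eps else "I" for _ in range(n)]
e = np.array([pauli_bits[p][0] for p in labels] + [pauli_bits[p][1] for p in labels])

print("error:", "".join(str(p) for p in labels), " syndrome:", syndrome(H, e, n))
# e.g. X on qubit 1 gives syndrome [1 0]; Z errors commute with both generators (trivial syndrome)
```

Note that Z errors leave a trivial syndrome for this toy code, which is one way to see why the Shor construction discussed later concatenates a bit-flip code with a phase-flip code.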
Qudit stabilizer framework is a generalization of the qubit stabilizer framework where the quantum code is also simultaneously stabilized by an abelian group. The check matrices are defined similarly with the elements of the matrices either defined over a ringℤ_mor a finite field𝔽_m. The symplectic inner product with respect to the generalized Pauli basis issymp([a_i,b_i],[a_j,b_j]) = a_jb_i^T - a_ib_j^Tand with respect to the finite field-based clock and shift operator issymp([a_i,b_i],[a_j,b_j]) = Tr_p^l/p(a_jb_i^T - a_ib_j^T)<cit.>. §.§ CSS (Calderbank-Shor-Steane) codes Calderbank and Shor <cit.>, and independently Steane <cit.>, proposed a framework to construct quantum error correction codes from two classical codesC_1andC_2that satisfy the dual-containing criterionC_1^⊥⊂ C_2. The quantum codes constructed using this framework are called the CSS codes and form a class of stabilizer codes. When the codesC_1andC_2used to construct a CSS code are the same, i.e.,C_1=C_2=C, the codeCis a dual-containing classical code, i.e.,C^⊥⊂ C. LetH_1andH_2be the parity check matrices of the classical codesC_1[n,k_1,d_1]andC_2[n,k_2,d_2], respectively. WhenC_1^⊥⊂ C_2, we obtainH_2H_1^T = 0. The basis codewords of the CSS code𝒬_CSSare the normalized superposition of all the elements in a particular coset ofC_1^⊥inC_2<cit.>. Thus, the CSS code obtained fromC_1andC_2is an n,(k_1+k_2-n), d ≥min(d_1,d_2) quantum code. The minimum distance is equal tomin(d_1,d_2)when the code is non-degenerate <cit.>, i.e., when the minimum Pauli weight of any stabilizer is at least the minimum distance of the code. The check matrix of the CSS code is H_ CSS = [[ H_1 0; 0 H_2 ]], where theX- andZ-stabilizers based onC_1andC_2correct theZ- andX-errors, respectively. This is because the syndrome of an errorE(e_X, e_Z)is s = [[ H_1 0; 0 H_2 ]] [ e_Z^T; e_X^T ] = [ H_1 e_Z^T; H_2 e_X^T ] = [ s_Z^T; s_X^T ]. Two independent decoders can be run forH_1andH_2with respective syndromess_Zands_Xto correct the complete error. While this is the commonly used strategy, whenX- andZ-errors are correlated, it is suboptimal. §.§ Logical operators An n,k,d code encodesklogical qubits intonphysical qubits such that the minimum (Pauli) weight of an undetectable error isd. The undetectable errors are precisely the logical operators of the code since they non-trivially change the encoded information while keeping it within the code space. This is similar to the classical case where undetectable errors are the codewords of the code — adding a codeword to the transmitted codeword non-trivially changes the encoded message but keeps it within the code space. Formally, we can identify the logical Pauli operators of the code with𝒩(𝒮), where the operators (stabilizers) within𝒮⊆𝒩(𝒮)are trivial logical operators as they leave any encoded state unchanged <cit.>. Each logical qubit is identified by a pair of logicalXand logicalZoperators, denotedX_jandZ_jrespectively for thej^ thlogical qubit, each of which act on thenphysical qubits of the code. Naturally,X_j, Z_j ∈𝒩(𝒮) ∖𝒮and satisfy the conditions X_i Z_j = - Z_j X_i if i = j, Z_j X_i if i ≠ j, for alli,j ∈{ 1,2,…,k }. For a CSS code defined by classical codesC_1andC_2as explained before, the logicalX(resp. logicalZ) operators are defined by the cosets inC_2/C_1^⊥(resp.C_1/C_2^⊥) under the homomorphismγ𝒫_n →𝔽_2^2n<cit.>. As an example, consider the 7,1,3 Steane code <cit.> defined by settingC_1 = C_2 = Cto be the classical[7,4,3]Hamming code. 
ThenH_1 = H_2 = His the parity check matrix of the Hamming code. The dual codeC^⊥is the[7,3,4]binary simplex code which is a subcode of the Hamming code, i.e.,C^⊥⊂ C. There is a single non-trivial coset inC/C^⊥with coset leaderc = 1111111. Hence, the single logical qubit has logical Pauli operator generators X = E(c,0) = X_1 X_2 X_3 X_4 X_5 X_6 X_7, Z = E(0,c) = Z_1 Z_2 Z_3 Z_4 Z_5 Z_6 Z_7. It is easily verified thatX Z = - Z Xsincec c^T = 1(mod2) guarantees that the symplectic inner product of[c,0]and[0,c]is1. Since stabilizers do not modify the action of an operator, we can multiplyXorZwith a weight-4X- orZ-type stabilizer (from the rows ofH), respectively, to reduce the weight to3. Hence, the code has minimum distance3because logical operators are non-trivial undetectable errors. For universal computation on the logical qubits of a code, logical Pauli operators alone are insufficient. It is necessary to synthesize a universal set of logical operators, such asH_i, T_i, andCNOT_i → jon all logical qubitsi,j ∈{ 1,2,…,k }. Besides correcting errors, this is an important aspect of QEC. Typically, these logical operators are synthesized for individual codes or families of codes by leveraging their structural properties <cit.>. There are systematic ways to approach this too for arbitrary codes <cit.>, but the resulting circuits might be suboptimal in terms of their circuit complexity. It is essential to ensure that errors do not spread during the execution of these logical circuits. This is the requirement of fault tolerance that we will discuss briefly later. Construction of fault tolerant logical gates is unique to QEC and is critical for reliable and useful quantum computing. §.§ Quantum low-density parity check codes Quantum codes with high rate and good error correction capability are considered ideal candidates for fault-tolerant quantum computing (FTQC). In several architectures <cit.>, the noise over the syndrome measurement scales with the number of qubits on which a stabilizer acts non-trivially and the number of stabilizers acting on a qubit. Thus, classes of quantum low-density parity check (QLDPC) codes <cit.> with asymptotically constant rate and distance scaling linearly withnare preferred candidates for FTQC. Surface codes <cit.> are well-studied n, 𝒪(1), 𝒪(√(n)) QLDPC codes that have good logical error rate performance, good distance scaling, and require only nearest-neighbor connectivity of qubits. However, they have asymptotically zero rate, leading to large overheads as the size of the system increases. There have been efforts in moving beyond the surface code and exploring codes with constant non-zero asymptotic rates and with distance scaling linearly with code size. The hypergraph product codes <cit.> and the lifted product codes <cit.> are two important classes of CSS codes that are currently being explored, besides other codes <cit.>. We provide a brief review of surface codes, hypergraph product codes, lifted product codes, and concatenated quantum codes (which are more general than QLDPC codes). §.§.§ Surface codes and Toric codes < g r a p h i c s > The standard 41,1,5 surface code with side length 5. Qubits are represented by circles. The orange boxes highlight weight-4X-type and Z-type stabilizers represented by red and blue squares, respectively. Note that there are weight-3 stabilizers in the boundaries of the lattice. 
All connecting lines represent local connectivity natively made available in the hardware; solid lines show the faces (or plaquettes) of the lattice and dashed lines connect the blue Z-checks to qubits incident to a face. A possible choice for the logical Z and logical X operators is shown, both of minimum weight 5.

The surface codes are well-studied QLDPC codes whose qubits can be laid out on a 2D surface and the stabilizer measurements involve qubits within a particular neighborhood on the surface. Thus, they require only nearest-neighbor connectivity of qubits, which is essential for superconducting architectures. The most studied surface code is defined over a square lattice with qubits represented by edges, X-stabilizers represented by vertices, and Z-stabilizers represented by faces. A Z-stabilizer acts non-trivially on all the qubits defined by the edges defining the face. An X-stabilizer acts non-trivially on all the qubits incident to the vertex representing the stabilizer. The logical X and Z operators span the length and breadth of the lattice and the distance of the code is based on the side length of the square lattice. This standard surface code is shown in Fig. <ref>.

The toric code <cit.> is defined on a square lattice on the surface of a torus with the edges of the lattice depicting qubits, the faces representing Z-stabilizers, and vertices representing X-stabilizers. The logical operators are represented by topologically non-trivial loops in the lattice, which also span the length and the breadth of the lattice. The toric code encodes two logical qubits and has distance based on the side length of the lattice. The square lattice for the toric code is the same as for the surface code but with opposite boundaries identified with each other, i.e., it has no boundaries.

§.§.§ Hypergraph product codes

[Figure: The surface code constructed as the hypergraph product of classical repetition codes. The intersection of bits and checks of the repetition codes determine the qubits and stabilizers of the hypergraph product code.]

Tillich and Zémor <cit.> proposed the hypergraph product (HGP) code construction based on any two classical codes, called the component codes. The Tanner graph of the HGP code is the graph product of the Tanner graphs of the component codes. For i = 1, 2, let C_i [n_i, k_i, d_i] be the two component codes with parity check matrix H_i. Let their transposed codes C_i^T be the codes with parity-check matrices H_i^T and parameters [m_i, k_i^T, d_i^T]. The hypergraph product code HGP(C_1, C_2) obtained is a [[n_1 n_2 + m_1 m_2, k_1 k_2 + k_1^T k_2^T, min(d_1, d_2, d_1^T, d_2^T)]] CSS code with X- and Z-check matrices H_X = [ H_1 ⊗ I   I ⊗ H_2^T ], H_Z = [ I ⊗ H_2   H_1^T ⊗ I ]. When the component codes are chosen appropriately, the parameters follow the scaling [[n, 𝒪(n), 𝒪(√(n))]]. The surface code can be constructed as the hypergraph product of classical repetition codes as shown in Fig. <ref>. There are closed-form expressions for the logical Pauli operators of these codes <cit.>.

§.§.§ Lifted product codes

Panteleev and Kalachev <cit.> first proposed the lifted product (LP) codes, which are the lifted versions of the hypergraph product codes. The LP codes are based on two matrices A_1 and A_2 defined over R_l = 𝔽_2[x]/(x^l - 1). Let A_i be of size (m_i × n_i). For a(x) = a_0 + a_1 x + ⋯ + a_l-1 x^l-1 ∈ R_l, the lift 𝔹(a(x)) is the l × l circulant matrix whose first column has the coefficients of a(x) and the rest of the columns are obtained as the previous columns shifted down by one element.
We note thata^T(x) = a_0 + a_l-1x + ⋯ + a_1x^l-1. The lifted product codeLP(A_1, A_2)is thel(n_1m_2+n_2m_1)-qubit CSS code withX- andZ-check matrices H_X = 𝔹([A_1 ⊗ I  I ⊗ A_2]), H_Z = 𝔹([I ⊗ A_2^T    A_1^T ⊗ I]). When the component codes are chosen appropriately, the parameters follow the scaling n, 𝒪(n), 𝒪(n) <cit.>. §.§.§ Concatenated quantum codes Concatenated quantum codes are obtained by concatenating a quantum code, called the inner code, with another quantum code, known as the outer code <cit.>. The inner code is first used to encode aK_1-dimensional quantum system intoN_1-dimensional quantum states. TheN_1-dimensional quantum states are further encoded using an outer code by considering theN_1-dimensional quantum states as logical information. The distance of the concatenated code is the product of the distance of the outer and inner code, improving its error correction ability. Multiple outer codes could be used to encode the logical information over theN_1-dimensional quantum states. The quintessential example of a concatenated code is the 9,1,3 Shor code, which was constructed famously by Peter Shor <cit.> to show that QEC even works. Until then, the continuous nature of quantum errors was thought to be a fundamental bottleneck to construct reliable quantum systems. The Shor code first encodes a single qubit in the 3,1,1 phase flip code defined by the stabilizer group𝒮_ phase = ⟨X_1 X_2, X_2 X_3 ⟩and logical operatorsX = X_1, Z = Z_1 Z_2 Z_3. Then it encodes each of those3qubits into the bit flip code defined by the stabilizer group𝒮_ bit = ⟨Z_1 Z_2, Z_2 Z_3 ⟩and logical operatorsZ = Z_1, X = X_1 X_2 X_3. Overall, the concatenated code has the stabilizer group 𝒮_ Shor = ⟨ Z_1 Z_2 , Z_2 Z_3 , Z_4 Z_5 , Z_5 Z_6 , Z_7 Z_8 , Z_8 Z_9 , X_1 X_2 X_3 X_4 X_5 X_6 , X_4 X_5 X_6 X_7 X_8 X_9 ⟩. A valid pair of logical Pauli operators are X = X_1 X_2 ⋯X_9 , Z = Z_1 Z_2 ⋯Z_9. Clearly, by multiplying with stabilizers, one can make them weight-3so that the minimum distance of the code is3. §.§ Bosonic codes Bosonic encoding, also known as continuous-variable encoding, encodes quantum information into electromagnetic signals. Bosonic encoding can be viewed analogous to the modulation codes used in communication systems where bitstrings are encoded into the in-phase and quadrature carrier electromagnetic waves. Bosonic codes are classified as bosonic stabilizer codes and bosonic Fock-state codes. The bosonic encodings inherently have error correction ability embedded into them. The logical performance of qubit codes can be improved by concatenating them with bosonic codes <cit.>. The hardware can be utilized more efficiently using these codes and certain gates forbidden over qubits can be performed using the continuous-variable operations. In bosonic stabilizer encoding, the carrier electromagnetic waves correspond to the position and momentum quadratures. Gottesman-Kitaev-Preskill (GKP) encoding <cit.> is the most commonly used bosonic stabilizer encoding. A commuting set of displacement operators across position and momentum quadratures form the stabilizers of the code. A square GKP encoding can be viewed as a comb of evenly spaced momentum states with a spacing of2√(π). Thus, a displacement of2√(π)corresponds to a stabilizer and a displacement of√(π)is a logical operation that transforms logical|0⟩to logical|1⟩. The GKP code protects the quantum information from large displacements upto√(π)/2. 
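Tying together the CSS, Steane/Shor, and hypergraph-product constructions from the preceding subsections, the sketch below (ours; gf2_rank and css_params are illustrative helper names, and the [7,4,3] Hamming parity-check matrix used is the standard one) verifies the commutation condition H_X H_Z^T = 0 (mod 2) and counts logical qubits as k = n - rank(H_X) - rank(H_Z), first for the Steane code and then for the hypergraph product of two 3-bit repetition codes, which reproduces a small distance-3 surface code.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = (M.copy() % 2).astype(int)
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return r

def css_params(Hx, Hz):
    """Verify Hx @ Hz.T = 0 (mod 2) and return (n, k) = (#qubits, #logical qubits)."""
    assert not np.any((Hx @ Hz.T) % 2), "X- and Z-stabilizers do not commute"
    n = Hx.shape[1]
    return n, n - gf2_rank(Hx) - gf2_rank(Hz)

# Steane code: C1 = C2 = [7,4,3] Hamming code, so Hx = Hz = H_hamming (dual-containing)
H_hamming = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]])
print(css_params(H_hamming, H_hamming))        # -> (7, 1), i.e. the [[7,1,3]] Steane code

# Hypergraph product of two 3-bit repetition codes: a small distance-3 surface code
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
m, nn = H_rep.shape
Hx = np.hstack([np.kron(H_rep, np.eye(nn, dtype=int)), np.kron(np.eye(m, dtype=int), H_rep.T)])
Hz = np.hstack([np.kron(np.eye(nn, dtype=int), H_rep), np.kron(H_rep.T, np.eye(m, dtype=int))])
print(css_params(Hx, Hz))                      # -> (13, 1), a [[13,1,3]] surface-code-like code
```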
Using the concepts of QEC with Fock states or number states, cat encoding, binomial encoding, rotor GKP encoding etc. are developed. The square GKP encoding and rotor encoding can be viewed to be analogous to amplitude-shift and phase-shift keying techniques, respectively. See <cit.> for an extensive review of bosonic encoding. §.§ Decoding of quantum codes The decoder for quantum codes is still a classical algorithm that takes the quantum check matrixH_𝒮and the syndrome as input and outputs an estimate of the error that caused the syndrome. While the principle is similar to decoding of classical codes, there are multiple equivalent errors for the same syndrome due to degenerate stabilizer errors. The optimal decoder on the depolarizing channel is not the maximum likelihood decoder but the maximum likelihood coset decoder that determines the most likely logical cosetE + L + 𝒮in𝒫_nthat matches the syndrome <cit.>. Here,Eis the actual error andLis a logical Pauli operator. For fixedEandL, all elements of the cosetE + L + 𝒮have the same effect on the code since elements ofL + 𝒮∈𝒩(𝒮)have a trivial syndrome. As discussed earlier, ifÊis the error estimate from a decoder (that has the same syndrome asE), then there are two possible scenarios:Ê E ∈𝒮(correct decoding) orÊ E = L ∈𝒩(𝒮) ∖𝒮(logical error). The plot of logical error rate versus noise parameter shows the block error rate performance of the code-decoder pair, just as in classical channel coding. For CSS codes, one can execute separate decoders forX-errors andZ-errors usingH_ZandH_X, respectively. In particular, QLDPC codes can be decoded using efficient message passing algorithms such as belief propagation or min-sum executed on the Tanner graphs ofH_ZandH_X<cit.>. Message passing can also be performed in the GF(4) domain by constructing a combined Tanner graph that includes all the stabilizers <cit.>. In either case, short cycles and trapping sets, especially uniquely quantum ones from degeneracy <cit.>, cause challenges in effective decoding that remain to be addressed. Many families of QLDPC codes have a threshold, which is the noise parameter beyond which increasing the code size within the family monotonically improves logical error rate on one side of the threshold and worsens logical error rate on the other side. While QEC theorists strive to improve the threshold, experimentalists work hard to reduce the noise parameter as much as possible. The threshold theorem <cit.> states that if every component in the hardware has an error rate within the threshold, then scalable and reliable quantum computers can be built through appropriate QEC schemes. §.§ Fault tolerance While decoding ensures that the most likely errors are corrected, quantum computers must also perform computation on the encoded information. The circuits used to perform such universal computation must not spread errors catastrophically and overwhelm the QEC scheme. This is ensured by imposing fault tolerance constraints on the circuits <cit.>. There are several ways to define the requirement of fault tolerance, so let us consider a common one <cit.>. Assume that every logical operator circuit is followed by a block of ideal syndrome measurement and error correction. If the code can correct Pauli errors on up totqubits, then fault tolerance can be ensured by requiring that any combination oftfaults in the input and the logical operator circuit does not cause more thanterrors at the output of the circuit. 
Of course, the syndrome measurement and error correction block could itself introduce noise. But this is captured as errors in the input of the next logical operator circuit. Since iterative decoders do not necessarily correct up to t errors, fault tolerance can take a more subtle role. Constructing fault-tolerant gates on good QLDPC codes is an active and open area of research today.

Typically, logical Clifford gates are easier to construct on quantum codes than logical non-Clifford gates. There are a variety of methods to construct logical Clifford gates. It is common to design codes where a transversal physical operation induces the necessary action on the logical qubits. A transversal gate is one that acts as a tensor product on the n physical qubits, thereby not introducing any interaction between two or more qubits <cit.>. Hence, by design, it is a fault-tolerant circuit. For example, the Steane code realizes the logical H and S transversally as H^⊗ 7 and S^⊗ 7, respectively. If two logical qubits are encoded separately in two Steane code blocks, then a transversal CNOT, i.e., 7 CNOTs between corresponding qubits of the two blocks, realizes the logical CNOT. However, the logical T gate cannot be realized transversally on the Steane code to complete a universal logical gate set. In fact, the Eastin-Knill theorem <cit.> states that no error-detecting quantum code can realize a universal logical gate set using only transversal gates.

An innovative strategy to implement logical non-Clifford gates is using magic states <cit.>. These are specific resource states that enable one to implement the gate without directly applying it on the data qubits. For example, the magic state for the T gate is |T⟩ = (|0⟩ + e^iπ/4 |1⟩)/√(2). Given this state, the following circuit applies the T gate on the input data qubit |ψ⟩ using only Clifford operations and Pauli measurements:

[Circuit: the magic state |T⟩ = T|+⟩ on the top wire acts as the control of a CNOT whose target is the data qubit |ψ⟩ = α|0⟩ + β|1⟩ on the bottom wire; the bottom qubit is then measured in the Z basis, a classically controlled SX correction is applied to the top wire, and the top wire is left in the state T|ψ⟩.]

The double wire indicates a classically-controlled SX gate which is applied if and only if the measurement result is -1. Hence, it is desirable to generate T magic states of high fidelity. This is achieved through a process called magic state distillation (MSD) <cit.>. The most common approach to MSD is the Bravyi-Haah protocol using triorthogonal codes <cit.>. These codes realize logical T gates on all k logical qubits via a transversal T gate on the n physical qubits. Once these codes are used to distill higher-fidelity magic states from lower-fidelity magic states natively produced by hardware, the resulting states are injected into the data using the above circuit. When the data is itself encoded into a different code, such as a QLDPC code, the magic state must also be encoded to allow a fault-tolerant execution of the above circuit. A major portion of the resource consumption of a quantum computer comes from magic state distillation and injection, since T gates are a critical component of most non-trivial quantum algorithms <cit.>. Triorthogonal codes have been generalized to CSS-T codes <cit.> in the hope of reducing the overhead of implementing logical non-Clifford gates. This has generated much interest among algebraic coding theorists recently <cit.>. The realization of logical non-Clifford gates with low overhead on good QLDPC codes is an important and exciting area of research.

§ CONCLUSION

In this short article, we have briefly reviewed the fundamentals of quantum computation and quantum error correction. We hope that this is informative to researchers who are new to the field.
There are several challenges to be addressed in the pursuit of scalable, fault-tolerant, quantum computing. We firmly believe that classical coding theorists have a lot to offer in addressing these challenges. 94 100 url@samestyleNielsenChuang M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition, 10th ed. 1em plus 0.5em minus 0.4em USA: Cambridge University Press, 2011. lidar2013quantum D. A. Lidar and T. A. Brun, Quantum error correction. 1em plus 0.5em minus 0.4em Cambridge university press, 2013. QC_HistoricalReview_Shor P. W. Shor, “The early days of quantum computation,” 2022. QC_HistoricalReview_Preskill J. Preskill, “Quantum computing 40 years later,” 2021. wilde2013quantum M. Wilde, Quantum information theory. 1em plus 0.5em minus 0.4em Cambridge university press, 2013. WeylHeisenberg I. Bengtsson and K. Zyczkowski, Geometry of Quantum States. 1em plus 0.5em minus 0.4em Cambridge University Press, 2017. Ketkar06 A. Ketkar, A. Klappenecker, S. Kumar, and P. Sarvepalli, “Nonbinary stabilizer codes over finite fields,”IEEE Transactions on Information Theory, vol. 52, no. 11, pp. 4892–4914, nov 2006. NBEASC2021 P. J. Nadkarni and S. S. Garani, “Non-binary entanglement-assisted stabilizer codes,”Quantum Information Processing, vol. 20, no. 8, p. 256, Aug 2021. [Online]. Available: <https://doi.org/10.1007/s11128-021-03174-1>rengaswamy2020classical N. Rengaswamy, “Classical coding approaches to quantum applications,” Ph.D. dissertation, Duke University, 2020. bennett1996mixed C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, “Mixed-state entanglement and quantum error correction,”Physical Review A, vol. 54, no. 5, p. 3824, 1996. calderbank1997quantum A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. Sloane, “Quantum error correction and orthogonal geometry,”Physical Review Letters, vol. 78, no. 3, p. 405, 1997. Gottesman97 D. Gottesman, “Stabilizer codes and quantum error correction,” Ph.D. dissertation, California Institute of Technology, CA, USA, 1997. Calderbank_GF4_97 A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane, “Quantum error correction via codes over gf(4),” in Proceedings of IEEE International Symposium on Information Theory, 1997. boykin2000new P. O. Boykin, T. Mor, M. Pulver, V. Roychowdhury, and F. Vatan, “A new universal and fault-tolerant quantum basis,”Information Processing Letters, vol. 75, no. 3, pp. 101–107, 2000. PreskillNotes J. Preskill. (2020) Lecture notes for quantum computation. [Online]. Available: <http://theory.caltech.edu/ preskill/ph219/index.html#lecture>XanaduBlueprint J. E. Bourassa, R. N. Alexander, M. Vasmer, A. Patil, I. Tzitrin, T. Matsuura, D. Su, B. Q. Baragiola, S. Guha, G. Dauphinais, K. K. Sabapathy, N. C. Menicucci, and I. Dhand, “Blueprint for a scalable photonic fault-tolerant quantum computer,”Quantum, vol. 5, p. 392, Feb. 2021. [Online]. Available: <http://dx.doi.org/10.22331/q-2021-02-04-392>slussarenko2019photonic S. Slussarenko and G. J. Pryde, “Photonic quantum information processing: A concise review,”Applied Physics Reviews, vol. 6, no. 4, 2019. romero2024photonic J. Romero and G. Milburn, “Photonic quantum computing,”arXiv preprint arXiv:2404.03367, 2024. killoran2019strawberry N. Killoran, J. Izaac, N. Quesada, V. Bergholm, M. Amy, and C. Weedbrook, “Strawberry fields: A software platform for photonic quantum computing,”Quantum, vol. 3, p. 129, 2019. kolarovszki2024piquasso Z. Kolarovszki, T. Rybotycki, P. Rakyta, Á. Kaposi, B. Poór, S. Jóczik, D. 
T. Nagy, H. Varga, K. H. El-Safty, G. Morse et al., “Piquasso: A photonic quantum computer simulation software platform,”arXiv preprint arXiv:2403.04006, 2024. raussendorf2001one R. Raussendorf and H. J. Briegel, “A one-way quantum computer,”Physical review letters, vol. 86, no. 22, p. 5188, 2001. raussendorf2003measurement R. Raussendorf, D. E. Browne, and H. J. Briegel, “Measurement-based quantum computation on cluster states,”Physical review A, vol. 68, no. 2, p. 022312, 2003. raussendorf2006fault R. Raussendorf, J. Harrington, and K. Goyal, “A fault-tolerant one-way quantum computer,”Annals of physics, vol. 321, no. 9, pp. 2242–2270, 2006. briegel2009measurement H. J. Briegel, D. E. Browne, W. Dür, R. Raussendorf, and M. Van den Nest, “Measurement-based quantum computation,”Nature Physics, vol. 5, no. 1, pp. 19–26, 2009. SuperconductingQCReview H.-L. Huang, D. Wu, D. Fan, and X. Zhu, “Superconducting quantum computing: a review,”Science China Information Sciences, vol. 63, no. 8, p. 180501, Jul 2020. [Online]. Available: <https://doi.org/10.1007/s11432-020-2881-9>rasmussen2021superconducting S. E. Rasmussen, K. S. Christensen, S. P. Pedersen, L. B. Kristensen, T. Bækkegaard, N. J. S. Loft, and N. T. Zinner, “Superconducting circuit companion—an introduction with worked examples,”PRX Quantum, vol. 2, no. 4, p. 040204, 2021. kwon2021gate S. Kwon, A. Tomonaga, G. Lakshmi Bhai, S. J. Devitt, and J.-S. Tsai, “Gate-based superconducting quantum computing,”Journal of Applied Physics, vol. 129, no. 4, 2021. bravyi2022future S. Bravyi, O. Dial, J. M. Gambetta, D. Gil, and Z. Nazario, “The future of quantum computing with superconducting qubits,”Journal of Applied Physics, vol. 132, no. 16, 2022. Superconducting_Qudit18 W. Y. Liu, H. K. Xu, F. F. Su, Z. Y. Li, Y. Tian, S. Han, and S. P. Zhao, “Coupled superconducting qudit-resonator system: Energy spectrum, state population, and state transition under microwave drive,”Phys. Rev. B, vol. 97, p. 094513, Mar 2018. [Online]. Available: <https://link.aps.org/doi/10.1103/PhysRevB.97.094513>IonTrapReview C. D. Bruzewicz, J. Chiaverini, R. McConnell, and J. M. Sage, “Trapped-ion quantum computing: Progress and challenges,”Applied Physics Reviews, vol. 6, no. 2, May 2019. [Online]. Available: <http://dx.doi.org/10.1063/1.5088164>bernardini2023quantum F. Bernardini, A. Chakraborty, and C. R. Ordóñez, “Quantum computing with trapped ions: a beginner’s guide,”European Journal of Physics, vol. 45, no. 1, p. 013001, 2023. moses2023race S. A. Moses, C. H. Baldwin, M. S. Allman, R. Ancona, L. Ascarrunz, C. Barnes, J. Bartolotta, B. Bjork, P. Blanchard, M. Bohn et al., “A race-track trapped-ion quantum processor,”Physical Review X, vol. 13, no. 4, p. 041052, 2023. NeutralAtomReview K. Wintersperger, F. Dommert, T. Ehmer, A. Hoursanov, J. Klepsch, W. Mauerer, G. Reuber, T. Strohm, M. Yin, and S. Luber, “Neutral atom quantum computing hardware: performance and end-user perspective,”EPJ Quantum Technology, vol. 10, no. 1, p. 32, Aug 2023. [Online]. Available: <https://doi.org/10.1140/epjqt/s40507-023-00190-1>wurtz2023aquila J. Wurtz, A. Bylinskii, B. Braverman, J. Amato-Grill, S. H. Cantu, F. Huber, A. Lukin, F. Liu, P. Weinberg, J. Long et al., “Aquila: Quera's 256-qubit neutral-atom quantum computer,”arXiv preprint arXiv:2306.11727, 2023. young2022architecture C. Young, A. Safari, P. Huft, J. Zhang, E. Oh, R. Chinnarasu, and M. Saffman, “An architecture for quantum networking of neutral atom processors,”Applied Physics B, vol. 128, no. 8, p. 151, 2022. 
KnillLaflamme E. Knill and R. Laflamme, “Theory of quantum error-correcting codes,”Phys. Rev. A, vol. 55, pp. 900–911, Feb 1997. [Online]. Available: <https://link.aps.org/doi/10.1103/PhysRevA.55.900>CSS_CS A. R. Calderbank and P. W. Shor, “Good quantum error-correcting codes exist,”Physical Review A, vol. 54, no. 2, pp. 1098–1105, aug 1996. CSS_Steane A. M. Steane, “Error correcting codes in quantum theory,”Physical Review Letters, vol. 77, no. 5, pp. 793–797, jul 1996. kribs2005unified D. Kribs, R. Laflamme, and D. Poulin, “Unified and generalized approach to quantum error correction,”Physical review letters, vol. 94, no. 18, p. 180501, 2005. poulin2005stabilizer D. Poulin, “Stabilizer formalism for operator quantum error correction,”Physical review letters, vol. 95, no. 23, p. 230504, 2005. aly2006subsystem S. A. Aly, A. Klappenecker, and P. K. Sarvepalli, “Subsystem codes,”arXiv preprint quant-ph/0610153, 2006. bacon2006operator D. Bacon, “Operator quantum error-correcting subsystems for self-correcting quantum memories,”Physical Review A—Atomic, Molecular, and Optical Physics, vol. 73, no. 1, p. 012340, 2006. breuckmann2011subsystem N. P. Breuckmann, “Quantum subsystem codes: Their theory and use,” 2011, Bachelor's Thesis. Ashikhmin01 A. Ashikhmin and E. Knill, “Nonbinary quantum stabilizer codes,”IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 3065–3072, 2001. Nadkarni_TQE21_Qudit_CSS_Codes P. J. Nadkarni and S. S. Garani, “𝔽_p-linear and 𝔽_p^m-linear qudit codes from dual-containing classical codes,”IEEE Transactions on Quantum Engineering, vol. 2, pp. 1–19, 2021. wilde2009logical M. M. Wilde, “Logical operators of quantum codes,”Physical Review A—Atomic, Molecular, and Optical Physics, vol. 79, no. 6, p. 062322, 2009. Shor95 P. W. Shor, “Scheme for reducing decoherence in quantum computer memory,”Phys. Rev. A, vol. 52, pp. R2493–R2496, Oct 1995. [Online]. Available: <https://link.aps.org/doi/10.1103/PhysRevA.52.R2493>horsman2012surface D. Horsman, A. G. Fowler, S. Devitt, and R. Van Meter, “Surface code quantum computing by lattice surgery,”New Journal of Physics, vol. 14, no. 12, p. 123011, 2012. litinski2019game D. Litinski, “A game of surface codes: Large-scale quantum computing with lattice surgery,”Quantum, vol. 3, p. 128, 2019. vuillot2019code C. Vuillot, L. Lao, B. Criger, C. G. Almudéver, K. Bertels, and B. M. Terhal, “Code deformation and lattice surgery are gauge fixing,”New Journal of Physics, vol. 21, no. 3, p. 033028, 2019. kubica2015universal A. Kubica and M. E. Beverland, “Universal transversal gates with color codes: A simplified approach,”Physical Review A, vol. 91, no. 3, p. 032330, 2015. cohen2022low L. Z. Cohen, I. H. Kim, S. D. Bartlett, and B. J. Brown, “Low-overhead fault-tolerant quantum computing using long-range connectivity,”Science Advances, vol. 8, no. 20, p. eabn1717, 2022. krishna2021fault A. Krishna and D. Poulin, “Fault-tolerant gates on hypergraph product codes,”Physical Review X, vol. 11, no. 1, p. 011023, 2021. rengaswamy2020logical N. Rengaswamy, R. Calderbank, S. Kadhe, and H. D. Pfister, “Logical clifford synthesis for stabilizer codes,”IEEE Transactions on Quantum Engineering, vol. 1, pp. 1–17, 2020. XanaduPassiveArch I. Tzitrin, T. Matsuura, R. N. Alexander, G. Dauphinais, J. E. Bourassa, K. K. Sabapathy, N. C. Menicucci, and I. Dhand, “Fault-tolerant quantum computation with static linear optics,”PRX Quantum, vol. 2, p. 040353, Dec 2021. [Online]. 
Available: <https://link.aps.org/doi/10.1103/PRXQuantum.2.040353>breuckmann2021quantum N. P. Breuckmann and J. N. Eberhardt, “Quantum low-density parity-check codes,”PRX Quantum, vol. 2, no. 4, p. 040101, 2021. dennis2002topological E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, “Topological quantum memory,”Journal of Mathematical Physics, vol. 43, no. 9, pp. 4452–4505, 09 2002. fowler2012surface A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, “Surface codes: Towards practical large-scale quantum computation,”Phys. Rev. A, vol. 86, p. 032324, Sep 2012. tillich2013quantum J.-P. Tillich and G. Zémor, “Quantum ldpc codes with positive rate and minimum distance proportional to the square root of the blocklength,”IEEE Transactions on Information Theory, vol. 60, no. 2, pp. 1193–1202, 2013. Panteleev2021 P. Panteleev and G. Kalachev, “Degenerate Quantum LDPCCodes With Good Finite Length Performance,”Quantum, vol. 5, p. 585, Nov. 2021. [Online]. Available: <https://doi.org/10.22331/q-2021-11-22-585>leverrier2022quantum A. Leverrier and G. Zémor, “Quantum tanner codes,” in 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS). 1em plus 0.5em minus 0.4em IEEE, 2022, pp. 872–883. mostad2024generalizing O. Å. Mostad, E. Rosnes, and H.-Y. Lin, “Generalizing quantum tanner codes,”arXiv preprint arXiv:2405.07980, 2024. kitaev2003fault A. Y. Kitaev, “Fault-tolerant quantum computation by anyons,”Annals of physics, vol. 303, no. 1, pp. 2–30, 2003. burton2021limitations S. Burton and D. Browne, “Limitations on transversal gates for hypergraph product codes,”IEEE Transactions on Information Theory, vol. 68, no. 3, pp. 1772–1781, 2021. Panteleev2022 P. Panteleev and G. Kalachev, “Quantum LDPC codes with almost linear minimum distance,”IEEE Transactions on Information Theory, vol. 68, no. 1, pp. 213–229, 2022. panteleev2022asymptotically——, “Asymptotically good quantum and locally testable classical ldpc codes,” in Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, 2022, pp. 375–388. knill1996concatenated E. Knill and R. Laflamme, “Concatenated quantum codes,”arXiv preprint quant-ph/9608012, 1996. jochym2014using T. Jochym-O’Connor and R. Laflamme, “Using concatenated quantum codes for universal fault-tolerant quantum gates,”Physical review letters, vol. 112, no. 1, p. 010505, 2014. yoshida2024concatenate S. Yoshida, S. Tamiya, and H. Yamasaki, “Concatenate codes, save qubits,”arXiv preprint arXiv:2402.09606, 2024. noh2020fault K. Noh and C. Chamberland, “Fault-tolerant bosonic quantum error correction with the surface–gottesman-kitaev-preskill code,”Physical Review A, vol. 101, no. 1, p. 012316, 2020. noh2022low K. Noh, C. Chamberland, and F. G. Brandão, “Low-overhead fault-tolerant quantum error correction with the surface-gkp code,”PRX Quantum, vol. 3, no. 1, p. 010315, 2022. grimsmo2021quantum A. L. Grimsmo and S. Puri, “Quantum error correction with the gottesman-kitaev-preskill code,”PRX Quantum, vol. 2, no. 2, p. 020101, 2021. raveendran2022finite N. Raveendran, N. Rengaswamy, F. Rozpędek, A. Raina, L. Jiang, and B. Vasić, “Finite rate qldpc-gkp coding scheme that surpasses the css hamming bound,”Quantum, vol. 6, p. 767, 2022. gottesman2001encoding D. Gottesman, A. Kitaev, and J. Preskill, “Encoding a qubit in an oscillator,”Physical Review A, vol. 64, no. 1, p. 012310, 2001. BosonicEncodingReview V. V. Albert, “Bosonic coding: introduction and use cases,” 2022. pelchat2013degenerate E. Pelchat and D. 
Poulin, “Degenerate viterbi decoding,”IEEE transactions on information theory, vol. 59, no. 6, pp. 3915–3921, 2013. iyer2015hardness P. Iyer and D. Poulin, “Hardness of decoding quantum stabilizer codes,”IEEE Transactions on Information Theory, vol. 61, no. 9, pp. 5209–5223, 2015. fuentes2021degeneracy P. Fuentes, J. E. Martinez, P. M. Crespo, and J. Garcia-Frias, “Degeneracy and its impact on the decoding of sparse quantum codes,”IEEE Access, vol. 9, pp. 89 093–89 119, 2021. poulin2008iterative D. Poulin and Y. Chung, “On the iterative decoding of sparse quantum codes,”arXiv preprint arXiv:0801.1241, 2008. babar2015fifteen Z. Babar, P. Botsinis, D. Alanis, S. X. Ng, and L. Hanzo, “Fifteen years of quantum ldpc coding and improved decoding strategies,”iEEE Access, vol. 3, pp. 2492–2519, 2015. roffe2020decoding J. Roffe, D. R. White, S. Burton, and E. Campbell, “Decoding across the quantum low-density parity-check code landscape,”Physical Review Research, vol. 2, no. 4, p. 043423, 2020. du2022stabilizer J. Du Crest, M. Mhalla, and V. Savin, “Stabilizer inactivation for message-passing decoding of quantum ldpc codes,” in 2022 IEEE Information Theory Workshop (ITW). 1em plus 0.5em minus 0.4em IEEE, 2022, pp. 488–493. du2023layered J. Du Crest, F. Garcia-Herrero, M. Mhalla, V. Savin, and J. Valls, “Layered decoding of quantum ldpc codes,” in 2023 12th International Symposium on Topics in Coding (ISTC). 1em plus 0.5em minus 0.4em IEEE, 2023, pp. 1–5. kuo2020refined K.-Y. Kuo and C.-Y. Lai, “Refined belief propagation decoding of sparse-graph quantum codes,”IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 2, pp. 487–498, 2020. raveendran2021trapping N. Raveendran and B. Vasić, “Trapping sets of quantum ldpc codes,”Quantum, vol. 5, p. 562, 2021. pradhan2023learning A. K. Pradhan, N. Raveendran, N. Rengaswamy, X. Xiao, and B. Vasić, “Learning to decode trapping sets in qldpc codes,” in 2023 12th International Symposium on Topics in Coding (ISTC). 1em plus 0.5em minus 0.4em IEEE, 2023, pp. 1–5. aharonov1997fault D. Aharonov and M. Ben-Or, “Fault-tolerant quantum computation with constant error,” in Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, 1997, pp. 176–188. shor1996fault P. W. Shor, “Fault-tolerant quantum computation,” in Proceedings of 37th conference on foundations of computer science. 1em plus 0.5em minus 0.4em IEEE, 1996, pp. 56–65. gottesman1998theory D. Gottesman, “Theory of fault-tolerant quantum computation,”Physical Review A, vol. 57, no. 1, p. 127, 1998. steane1999efficient A. M. Steane, “Efficient fault-tolerant quantum computing,”Nature, vol. 399, no. 6732, pp. 124–126, 1999. gottesman2010introduction D. Gottesman, “An introduction to quantum error correction and fault-tolerant quantum computation,” in Quantum information science and its contributions to mathematics, Proceedings of Symposia in Applied Mathematics, vol. 68, 2010, pp. 13–58. gottesman2013fault——, “Fault-tolerant quantum computation with constant overhead,”arXiv preprint arXiv:1310.2984, 2013. chamberland2018flag C. Chamberland and M. E. Beverland, “Flag fault-tolerant error correction with arbitrary distance codes,”Quantum, vol. 2, p. 53, 2018. eastin2009restrictions B. Eastin and E. Knill, “Restrictions on transversal encoded quantum gate sets,”Physical review letters, vol. 102, no. 11, p. 110502, 2009. bravyi2005universal S. Bravyi and A. 
Kitaev, “Universal quantum computation with ideal clifford gates and noisy ancillas,”Physical Review A—Atomic, Molecular, and Optical Physics, vol. 71, no. 2, p. 022316, 2005. bravyi2012magic S. Bravyi and J. Haah, “Magic-state distillation with low overhead,”Physical Review A—Atomic, Molecular, and Optical Physics, vol. 86, no. 5, p. 052329, 2012. litinski2019magic D. Litinski, “Magic state distillation: Not as costly as you think,”Quantum, vol. 3, p. 205, 2019. chamberland2020very C. Chamberland and K. Noh, “Very low overhead fault-tolerant magic state preparation using redundant ancilla encoding and flag qubits,”npj Quantum Information, vol. 6, no. 1, p. 91, 2020. rengaswamy2020optimality N. Rengaswamy, R. Calderbank, M. Newman, and H. D. Pfister, “On optimality of css codes for transversal t,”IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 2, pp. 499–514, 2020. rengaswamy2020csst——, “Classical coding problem from transversal t gates,” in 2020 IEEE International Symposium on Information Theory (ISIT). 1em plus 0.5em minus 0.4em IEEE, 2020, pp. 1891–1896. andrade2023css E. Andrade, J. Bolkema, T. Dexter, H. Eggers, V. Luongo, F. Manganiello, and L. Szramowski, “Css-t codes from reed muller codes for quantum fault tolerance,”arXiv preprint arXiv:2305.06423, 2023. berardini2024structure E. Berardini, A. Caminata, and A. Ravagnani, “Structure of css and css-t quantum codes,”Designs, Codes and Cryptography, pp. 1–23, 2024. camps2024algebraic E. Camps-Moreno, H. H. López, G. L. Matthews, D. Ruano, R. San-José, and I. Soprunov, “An algebraic characterization of binary css-t codes and cyclic css-t codes for quantum fault tolerance,”Quantum Information Processing, vol. 23, no. 6, pp. 1–24, 2024. camps2024toward E. Camps-Moreno, H. H. López, G. L. Matthews, and E. McMillon, “Toward quantum css-t codes from sparse matrices,”arXiv preprint arXiv:2406.00425, 2024.]
http://arxiv.org/abs/2407.12163v1
20240716204008
Bellman Diffusion Models
[ "Liam Schramm", "Abdeslam Boularias" ]
cs.LG
[ "cs.LG", "cs.RO" ]
§ ABSTRACT Diffusion models have seen tremendous success as generative architectures. Recently, they have been shown to be effective at modelling policies for offline reinforcement learning and imitation learning. We explore using diffusion as a model class for the successor state measure (SSM) of a policy. We find that enforcing the Bellman flow constraints leads to a simple Bellman update on the diffusion step distribution. § INTRODUCTION The successor state measure is a central object of study in reinforcement learning (RL). A common statement of the objective is to find the policy that induces the state occupancy measure with the highest expected reward <cit.>. The state occupancy measure (SOM) has also received considerable attention in the RL theory community, as a number of provably efficient exploration schemes revolve around regularizing the state occupancy measure <cit.>. We explore a closely related concept, the successor state measure (SSM), which is the probability distribution over future states, given that the agent is currently at state s and takes action a. Despite their utility, the problem of learning the successor measure or the state occupancy measure has received relatively little attention in the empirical RL community. While the full reasons for this are difficult to pin down, we argue that it is in large part due to the lack of an expressive and learnable representation that can be easily normalized. We argue that diffusion models can address this deficiency. § BACKGROUND §.§ Diffusion Models Diffusion models are a form of generative model that has shown significant success in image generation [cite]. More recently, there has been significant interest in using diffusion models as a policy class for reinforcement learning [cite]. In our work, we are primarily concerned with the loss function and how it can be used to derive a Bellman update for diffusion models. For this reason, we begin with a review of diffusion models and the derivation of the standard diffusion model loss. Diffusion models are trained using a forward process and a backward process. In the forward process, noise is gradually added to a data point until only noise remains and the data point is distributed as a multivariate unit Gaussian. Let D be a dataset and x_0 be a data point in D. We express the result of the K-step forward process as a distribution q where q(x_0:K | D) = q(x_0 | D) ∏_i=1^K q(x_i | x_i-1, x_0), and q(x_0 | D) is defined to be 1/| D | for each point in D and 0 for all other x_0. In the reverse process, a neural network parameterized by weights θ outputs a Gaussian distribution, predicting what noise was added during the forward process. The backward process samples a predicted noise from this distribution and this noise is subtracted from the data point. This process repeats for the same number of steps as the forward process.
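To make the forward corruption and the standard noise-prediction objective concrete, the following is a minimal PyTorch-style sketch. It uses the usual DDPM parameterization x_i = √(ᾱ_i) x_0 + √(1-ᾱ_i) ε with a linear β schedule, drops the per-step weights η_i, and assumes a placeholder network eps_model(x_i, i); it is an illustration, not the authors' implementation.

import torch

def make_schedule(K, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule; alpha_bars[i] is the cumulative product of (1 - beta) up to step i.
    betas = torch.linspace(beta_start, beta_end, K)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    return betas, alpha_bars

def ddpm_loss(eps_model, x0, alpha_bars):
    # Sample a noise level per example, corrupt x0, and regress the injected noise.
    K = alpha_bars.shape[0]
    i = torch.randint(0, K, (x0.shape[0],), device=x0.device)
    a_bar = alpha_bars.to(x0.device)[i].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_i = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps
    return ((eps - eps_model(x_i, i)) ** 2).mean()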
The probability of a sequence of points x_0:K in the reverse diffusion process is p(x_0:K | θ) = p(x_K) ∏_i=1^K p(x_i-1 | x_i, θ) In this work, we seek to learn a diffusion model p(x | θ) that minimizes KL(q(x)|| p(x | θ)). By the convexity of the KL divergence, we see that KL(q(x | D) || p(x | θ)) ≤ KL(q(x_0:K | D) || p(x_0:K | θ)) = KL(q(x_K| x_0) || p(x_K)) + ∑_i=2^K KL(q(x_i-1 | x_i, x_0) || p(x_i-1 | x_i, θ)) - log p(x_0 | x_1, θ) If p(x_i-1 | x_i, θ) is Gaussian with a fixed variance σ_i^2, then the portion of the loss L that depends on θ can be written as follows L_i-1 - C = E_x_0, ϵ[β_i^2/(2 σ_i^2 α_i(1-ᾱ_i)) || ϵ - ϵ_θ(√(ᾱ_i) x_0 + √(1-ᾱ_i) ϵ, i)||^2] where ᾱ_i denotes the cumulative product ∏_j ≤ i (1-β_j). For brevity, we write η_i = β_i^2/(2 σ_i^2 α_i(1-ᾱ_i)). § DERIVATION Let M be a Markov Decision Process with state space S, action space A, transition distribution T, reward function R, and discount rate γ. We consider the successor measure of a state and action d^π(x | s, a), where x is some future state. This describes the probability that an agent following the policy π will stop at the state x if it begins in state s, takes action a, and has a (1-γ) chance of stopping after taking each action. The action-conditioned value function Q_π(s,a) is the expected reward of the distribution d^π( · | s, a), E_x ∼ d^π( · | s, a)[R(x)]. The successor measure of a given policy is the unique probability distribution satisfying the Bellman flow constraints. These constraints are as follows: d^π(s_f | s, a) = (1-γ) T(s' = s_f | s, a)+ γ E_a' ∼π(s'), s' ∼ T( · | s, a)[d^π(s_f | s', a')] We hope to learn a representation of d^π(x | s, a) that approximately satisfies this constraint. We do this by deriving an upper bound on the KL divergence between the left- and right-hand sides of the equation. First, we note that the KL divergence is convex in both arguments: KL_Bellman = KL(E_s' ∼ T(· | s, a), a' ∼π(s')[(1-γ) δ(s' = x) + γ d^π(x | s', a')] || d^π(x | s, a)) ≤ (1-γ)KL( E_s' ∼ T(· | s, a), a' ∼π(s')[ δ(s' = x)] || d^π(x | s, a)) + γ KL(E_s' ∼ T(· | s, a), a' ∼π(s')[d^π(x | s', a')] || d^π(x | s, a)) ≤ (1-γ)KL( T(x|s,a) || d^π(x | s, a)) + γ KL(E_s' ∼ T(· | s, a), a' ∼π(s')[d^π(x | s', a')] || d^π(x | s, a)) ≤ (1-γ)KL( T(x|s,a) || d^π(x | s, a)) + γ E_s' ∼ T(· | s, a), a' ∼π(s')[KL(d^π(x | s', a') || d^π(x | s, a))] Now, recall that diffusion models generate sequences of increasingly noised variables x_i from noiseless (x_0) to fully random x_K. Let x_0:K be the full trajectory of noised points from x_0 to x_K and observe that P(x_0) = ∫_x_1:K P(x_0, x_1:K). Again by convexity, the KL divergence between the noising trajectories x_0:K is at least the divergence between the unnoised points x_0: KL_Bellman≤ (1-γ)KL( q(x_1:K | x_0)T(x_0 | s, a) || d^π(x_0:K | s, a)) + γ E_s' ∼ T(· | s, a), a' ∼π(s')[KL(d^π(x_0:K | s', a') || d^π(x_0:K | s, a))] Now, recall that the KL divergence of a Markov chain such as q(x_1:K | x_0) can be expressed as the sum of the divergences of each step. Therefore we have KL_Bellman≤ E_s' ∼ T(· | s, a), a' ∼π(s')[(1-γ)E_x_0:K, x_0=s'[ KL( q(x_K) || d^π(x_K | s, a)) + ∑_i=1^K KL( q(x_i-1 | x_i, x_0) || d^π(x_i-1 | x_i, s, a))] + γ E_x_0:K, x_0 ∼ d^π(· | s', a')[KL(d^π(x_K | s', a') || d^π(x_K | s, a)) + ∑_i=1^K KL( d^π(x_i-1 | x_i, s', a') || d^π(x_i-1 | x_i, s, a))]] Now, we wish to approximate d^π with a neural network. As with DDPM [cite], we do this by having the network predict ϵ, the noise that was added to x_0.
We refer to the output of the network as ϵ_θ and the distribution generated by this diffusion model as d_π, θ. Additionally, since this loss function requires measuring the divergence between two neural networks, we use the target network trick common in reinforcement learning. We refer to the target network as ϵ_target and the distribution generated by the target network as d_π, target. Applying these changes, we obtain KL_Bellman≤ E_s' ∼ T(· | s, a), a' ∼π(s')[(1-γ)E_ϵ, x_0=s', i[η_i || ϵ - ϵ_θ(√(ᾱ_i) x_0 + √(1-ᾱ_i) ϵ, s, π(s), i)||^2] + γ E_x_i∼ d^π(· | s', a'), i[η_i || ϵ_target(x_i, s', π(s'), i) - ϵ_θ(x_i, s, π(s), i)||^2]] + const The first term is a standard denoising diffusion loss. The second term is similar, but instead sets the target to be the deterministic output of the target network at the next state. A key difference between this loss and the standard diffusion loss is that the second term is deterministic and suffers from no variance (at the cost of some bias when ϵ_target is inaccurate). This is very similar to the use of temporal difference updates in Q-learning – biased, low-variance estimates of the value at the next time step are known to converge faster than unbiased, high-variance estimates like those used in vanilla policy gradient. A major benefit of this formulation is that it allows us to make the same tradeoff when learning the state occupancy measure. There are two ways to sample x_i∼ d^π(· | s', a'). The obvious way would be to run the backward process for i steps, but this is computationally inefficient. Instead, we propose a heuristic method to approximate this distribution with less compute time. First, we use a modified version of the Bellman flow constraints for finite horizons. Suppose there are n steps remaining in the episode. Then, d^π(x | s, a, n) = E_s' ∼ T(· | s, a), a' ∼π(s')[1/n δ(s' = x) + (n-1)/n d^π(x | s', a', n-1)] Repeating the above derivation with the modified constraint gives us the loss: L = E_s' ∼ T(· | s, a), a' ∼π(s')[1/n E_ϵ, x_0=s', i[η_i || ϵ - ϵ_θ(√(ᾱ_i) x_0 + √(1-ᾱ_i) ϵ, s, π(s), i, n)||^2] + (n-1)/n E_x_i∼ d^π(· | s', a', n-1), i[η_i || ϵ_target(x_i, s', π(s'), i, n-1) - ϵ_θ(x_i, s, π(s), i, n)||^2]] For convenience, we write this as a sum of two losses: L_1 = E_s' ∼ T(· | s, a)[E_ϵ, x_0=s', i[η_i || ϵ - ϵ_θ(√(ᾱ_i) x_0 + √(1-ᾱ_i) ϵ, s, π(s), i, n)||^2]] L_2 = E_s' ∼ T(· | s, a), a' ∼π(s')[E_x_i∼ d^π(· | s', a', n-1), i[η_i || ϵ_target(x_i, s', π(s'), i, n-1) - ϵ_θ(x_i, s, π(s), i, n)||^2]] L = (1/n) L_1 + ((n-1)/n) L_2 In this finite-horizon setting, we calculate the state occupancy measure uniformly discounted over a finite number of steps, instead of exponentially discounted over an infinite number. This allows us to sample the future states of d^π(· | s, a) from the future trajectory of (s, a) instead of using the network itself. We then minimize the loss in expectation. We sample a state s from the replay buffer with n steps remaining in its trajectory. Then, we sample a second state x from the future trajectory of s. If x is the state immediately after s, we apply L_1. Otherwise, we apply L_2. This gives us a 1/n chance of sampling L_1 and an (n-1)/n chance of sampling L_2. A limitation of this approach is that it only gives the correct on-policy loss, and may be biased in the off-policy case.
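As a rough sketch of how the finite-horizon loss L = (1/n) L_1 + ((n-1)/n) L_2 could be sampled in practice, the snippet below draws a transition (s, a, s', a') together with a future state x from the replay buffer and applies either the denoising term or the bootstrapped target-network term; the coin flip with probability 1/n mirrors sampling x uniformly from the n remaining future states. The network signatures eps_model(x_i, s, a, i, n) and eps_target(...), the batch layout, the uniform η_i weighting, and the way x_i ~ d^π is approximated by noising a future trajectory state are all assumptions for illustration, not the authors' code.

import torch

def bellman_diffusion_loss(eps_model, eps_target, batch, alpha_bars):
    # batch: s, a, s_next, a_next, x_future, n, where n is the number of steps remaining
    # and x_future is sampled uniformly from the future trajectory of (s, a).
    # State tensors are assumed to have shape (batch, state_dim).
    s, a, s_next, a_next, x_future, n = batch
    K = alpha_bars.shape[0]
    i = torch.randint(0, K, (s.shape[0],), device=s.device)
    a_bar = alpha_bars.to(s.device)[i].view(-1, *([1] * (s_next.dim() - 1)))

    # With probability 1/n use L_1 (denoise the immediate next state),
    # otherwise use L_2 (match the frozen target network at the next state, n-1 steps left).
    use_l1 = torch.rand(s.shape[0], device=s.device) < 1.0 / n.float()

    eps = torch.randn_like(s_next)
    x_i_1 = torch.sqrt(a_bar) * s_next + torch.sqrt(1.0 - a_bar) * eps
    l1 = ((eps - eps_model(x_i_1, s, a, i, n)) ** 2).flatten(1).mean(dim=1)

    # Heuristic: approximate x_i ~ d^pi(. | s', a', n-1) by noising a future trajectory state.
    eps2 = torch.randn_like(x_future)
    x_i_2 = torch.sqrt(a_bar) * x_future + torch.sqrt(1.0 - a_bar) * eps2
    with torch.no_grad():
        target = eps_target(x_i_2, s_next, a_next, i, n - 1)
    l2 = ((target - eps_model(x_i_2, s, a, i, n)) ** 2).flatten(1).mean(dim=1)

    return torch.where(use_l1, l1, l2).mean()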
http://arxiv.org/abs/2407.12973v1
20240717193844
Temporal Label Hierachical Network for Compound Emotion Recognition
[ "Sunan Li", "Hailun Lian", "Cheng Lu", "Yan Zhao", "Tianhua Qi", "Hao Yang", "Yuan Zong", "Wenming Zheng" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Temporal Label Hierachical Network for Compound Emotion Recognition. Sunan Li (School of Information Science and Engineering, Southeast University, Nanjing, China, 230189473@seu.edu.cn), Hailun Lian (School of Information Science and Engineering, Southeast University, Nanjing, China, lianhailun@seu.edu.cn), Cheng Lu (School of Biological Science and Medical Engineering, Southeast University, Nanjing, China, cheng.lu@seu.edu.cn), Yan Zhao (School of Information Science and Engineering, Southeast University, Nanjing, China, zhaoyan@seu.edu.cn), Tianhua Qi (School of Biological Science and Medical Engineering, Southeast University, Nanjing, China), Hao Yang (School of Information Science and Engineering, Southeast University, Nanjing, China), Yuan Zong (School of Biological Science and Medical Engineering, Southeast University, Nanjing, China, xhzongyuan@seu.edu.cn), Wenming Zheng (Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China, wenming_zheng@seu.edu.cn). ^*Corresponding author. ^†Both authors contributed equally to this research. § ABSTRACT Emotion recognition has attracted increasing attention in recent decades. Although significant progress has been made in recognizing the seven basic emotions, existing methods still struggle with compound emotion recognition, which occurs commonly in practical applications. This article introduces our submission to the 7th Affective Behavior Analysis in-the-wild (ABAW) competition. In the competition, we selected the widely validated pre-trained ResNet18 and Transformer as the basic network framework. Considering the continuity of emotions over time, we propose a temporal pyramid network for frame-level emotion prediction. At the same time, in order to address the lack of data for compound emotion recognition, we utilized fine-grained labels from the DFEW database to construct training data for the emotion categories in the competition. Taking into account the valence-arousal characteristics of various compound emotions, we constructed a coarse-to-fine classification framework in the label space. CCS Concepts: Human-centered computing → Human computer interaction (HCI); Computing methodologies → Artificial intelligence. § INTRODUCTION Emotion recognition is a technology aimed at endowing machines with the ability to identify, process, and understand human emotions. For example, previous work often used elaborately designed hand-crafted features such as LBP and IS09 together with machine learning methods such as support vector machines (SVM) <cit.>, Gaussian mixture models (GMM) <cit.>, supervised dictionary learning <cit.> and sparse representation <cit.> <cit.> to classify emotion classes. In recent years, with the rapid advancement of deep learning techniques, various emotion recognition methods have been proposed. For example, in <cit.> Li et al. use a label revision method to cope with emotion recognition in noisy environments. In <cit.> Lu et al. impose a sparse constraint on the reconstruction matrix to select more effective features. However, these methods often focus on the recognition of the seven basic emotions. In practical applications, complex emotions composed of combinations of these basic emotions are more commonly encountered.
There is relatively little research in this domain, and the lack of high-quality databases for complex emotions hinders further development in this field. To facilitate the development of compound emotion recognition, the 7th Affective Behavior Analysis in-the-wild (ABAW) workshop held the Compound Expression (CE) Recognition challenge based on the C-EXPR-DB database <cit.>. The remainder of this paper is organized as follows. The framework of our model, including training dataset preparation and the backbone of our method, is described in Section 2. In Section 3, we show our experimental results on the challenge dataset to evaluate the effectiveness of our proposed method. Finally, in Section 4, we conclude this paper. § THE PROPOSED METHOD §.§ Feature Extraction and Fusion We first employed the OpenFace toolkit for face detection in every frame of the video. For frames where faces were difficult to detect, we substituted the closest temporally adjacent face so as to obtain a face image corresponding to every original video frame. Considering the temporal continuity of emotional states, and despite the task requiring an emotion prediction for each frame, we constructed a temporal pyramid of image sequences to acquire more robust emotional features. Three sets of image sequences at different temporal scales were composed as follows: a sequence of 15 frames starting from the current frame, a sequence of 15 frames sampled from the quarter-length segment of the video where the current frame resides, and a sequence of 15 frames sampled from the entire video. These three hierarchical image sequences were fed in parallel into a spatiotemporal feature extraction network consisting of ResNet18 and a Transformer. For each frame, we averaged the classification results obtained from the different image sequences to derive the final classification result. Additionally, due to the inherent data imbalance in the training set, as depicted by the emotion wheel in Fig. 1, we utilized the DFEW database to train the network for positive/negative classification of valence and arousal to assist the final classification. Specifically, for each frame, if both valence and arousal were positive, it was directly categorized as the compound emotion happiness-surprise; if both were negative, a judgment was made among the three compound emotions carrying a sadness component. Since the other compound emotions do not exhibit mutual exclusivity in valence and arousal, valence and arousal were not used as assisting information for their recognition. The overall network framework is illustrated in Fig. 2. § EXPERIMENT §.§ Dataset and Preprocessing To select appropriate training and validation data, we utilized the DFEW database, which features detailed emotion labels. Established by Jiang et al. in 2020, this database initially collected over 1500 high-resolution movie clips depicting near-real scenarios, yielding 16,372 facial expression videos. Each video segment was independently labeled by 10 annotators with one of the basic emotions (happiness, sadness, neutral, anger, surprise, disgust, fear). The final label for each video segment was determined by the emotions chosen by more than 6 annotators. Ultimately, 12,059 video segments were selected. For the competition task, we curated a training set comprising 1864 samples, ensuring each component of the composite emotions was represented by ratings from at least 3 annotators.
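Returning to the temporal pyramid described in the Feature Extraction and Fusion subsection above, the following is a rough sketch of how the three 15-frame index sets for a given frame might be constructed; the exact windowing and boundary handling are our assumptions, since the paper does not specify them precisely.

import numpy as np

def temporal_pyramid_indices(t, num_frames, seq_len=15):
    # Scale 1: seq_len consecutive frames starting at the current frame t.
    local = np.clip(np.arange(t, t + seq_len), 0, num_frames - 1)
    # Scale 2: seq_len frames sampled uniformly from the quarter-length segment containing t.
    quarter = max(num_frames // 4, seq_len)
    q_start = min((t // quarter) * quarter, max(num_frames - quarter, 0))
    segment = np.linspace(q_start, min(q_start + quarter, num_frames) - 1, seq_len).astype(int)
    # Scale 3: seq_len frames sampled uniformly over the whole video.
    whole = np.linspace(0, num_frames - 1, seq_len).astype(int)
    return local, segment, whole

The per-frame predictions produced from the three sequences are then averaged, as described above.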
Considering the significant sample imbalance and the mutual exclusivity of happiness and disgust across the seven composite emotions, additional single-emotion data from the DFEW database were included to balance the dataset. Furthermore, recognizing the unique positions of happiness and sadness on the emotion wheel, we performed positive/negative (polarity) classification of valence and arousal using the DFEW database to assist in determining the final emotion categories. §.§ Training settings All images used in this paper are resized to a resolution of 224 × 224. During training, the number of epochs is set to 50. Cross-entropy is utilized as the classification loss function, Adam is selected as the optimizer, the learning rate is set to 3e-4 according to experimental performance, and the batch size is 90. §.§ Result and Discussion According to the performance assessment rules of the competition, the performance of compound expression recognition is evaluated by the average F1 score across all 7 compound expressions. Therefore, the evaluation criterion is P = (1/7) ∑_i=1^7 F1_i, where F1_i is the F1 score of the i-th compound expression. § CONCLUSION In this paper, we propose a hierarchical compound emotion recognition network in both the temporal and label spaces. The emotion category for each frame is determined by aggregating classification information from image sequences across different time spans. Simultaneously, a hierarchical classification strategy is designed based on the differences in emotional dynamics across the compound emotions. The final results demonstrate promising performance on ABAW7.
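For reference, the challenge criterion described in the Result and Discussion subsection — the average F1 score over the 7 compound expressions — corresponds to a macro-averaged F1. The snippet below is a small illustration using scikit-learn, not part of the authors' pipeline; the label encoding (classes 0 through 6) is an assumption.

from sklearn.metrics import f1_score

def competition_score(y_true, y_pred, num_classes=7):
    # Macro F1: the unweighted mean of the per-class F1 scores over the 7 compound expressions.
    return f1_score(y_true, y_pred, average="macro", labels=list(range(num_classes)))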
http://arxiv.org/abs/2407.11945v1
20240716173530
Min-max theory and existence of H-spheres with arbitrary codimensions
[ "Rui Gao", "Miaomiao Zhu" ]
math.DG
[ "math.DG", "math.AP", "49J35, 53A10, 53C42, 58E20" ]
Min-max theory and existence of H-spheres with arbitrary codimensions. Rui Gao, School of Mathematical Sciences, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, 200240, P. R. China, gaorui0416@sjtu.edu.cn. Miaomiao Zhu, School of Mathematical Sciences, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, 200240, P. R. China, mizhu@sjtu.edu.cn. We would like to thank Professor Xin Zhou for valuable conversations and helpful comments. 2020 Mathematics Subject Classification: 49J35; 53A10; 53C42; 58E20. Dedicated to Stefan Oscar Walter Hildebrandt (1936-2015). § ABSTRACT We demonstrate the existence of branched immersed 2-spheres with prescribed mean curvature, with controlled Morse index and with arbitrary codimensions in closed Riemannian manifolds N admitting finite fundamental group, where π_k(N) ≠ 0 and k ≥ 2, for certain generic choices of prescribed mean curvature vector. Moreover, we enhance this existence result to encompass all possible choices of prescribed mean curvatures under a certain Ricci curvature condition on N when N has dimension 3. When N has dimension at least 4, we establish a Morse index lower bound when N satisfies some isotropic curvature condition. As a consequence, we can leverage the latter strengthened result to construct 2-spheres with parallel mean curvature when N has positive isotropic curvature and dimension at least 4. At last, we partially resolve the homotopy problem concerning the existence of a representative surface with prescribed mean curvature type vector field in some given homotopy classes. § INTRODUCTION §.§ Open Problems and Main Results Surfaces with Constant Mean Curvature (CMC) and with Prescribed Mean Curvature (PMC) are important in mathematics, physics and biology. They arise naturally in partitioning problems, isoperimetric problems, general relativity, two-phase interface problems, tissue growth, etc. Around the early 1980s, Yau posed the following [<cit.>]: Let h be a real valued function on ^3. Find reasonable conditions on h to ensure that one can find a closed surface with prescribed genus in ^3 whose mean curvature is given by h. At almost the same time, Yau also posed an open problem for the existence of closed PMC surfaces in closed 3-manifolds[This was confirmed by communication between Yau and Zhou-Zhu, see comments in <cit.>.]. It is natural to pose the following extension to the higher codimensional setting: Let H ∈Γ(∧^2(N)⊗ TN) be a tensor field on a closed n-dimensional Riemannian manifold N. Find reasonable conditions on H to ensure that one can construct a closed surface with prescribed genus in N whose mean curvature vector is given by H. The general existence of PMC surfaces (also called μ-bubbles) in all codimensions is a challenging problem, see for instance comments by Gromov in a recent series of four lectures <cit.>. In this paper, we derive a resolution to the generalized Yau's open problem for branched immersed 2-spheres in closed Riemannian manifolds having finite fundamental groups, with arbitrary codimensions, with controlled Morse index and with a certain generic choice of prescribed mean curvature vector field H.
Now, we state our first result: Let (N,h) be a closed n-dimensional (n ≥ 3) Riemannian manifold with finite fundamental group, then given any ω∈ C^2(∧^2(N)) with induced mean curvature type tensor field H ∈Γ(∧^2(N)⊗ TN) determined by (<ref>), for almost every constant λ > 0, there exists a non-trivial branched immersed 2-sphere in N with prescribed mean curvature vector λ H and Morse index at most k-2, where k is the least integer such that π_k(N)≠ 0. By imposing some curvature constraint in the target N, we can improve the Morse index estimates to get the following: Let (N,h) be a closed n-dimensional Riemannian manifold with finite fundamental group and given ω∈ C^2(∧^2(N)) with induced mean curvature type vector field H ∈Γ(∧^2(N)⊗ TN) determined by (<ref>), the following holds: * If N = 3 with π_3(N) ≠ 0 and the Ricci curvature of N satisfies |H|^2h + Ric_h/2 > |∇ H|h, then there exists a non-trivial branched immersed 2-sphere in N with prescribed mean curvature vector field H determined by (<ref>) and Morse index exactly 1. * If N≥ 4 and N has positive isotropic curvature (PIC), namely, on any totally isotropic two plane σ⊂ TN⊗ℂ the complex sectional curvature fulfils 𝒦(σ) > 0, then, for 𝒦 being the isotropic curvature and for almost every constant λ > 0 satisfying 𝒦 - λ|∇ H|h > 0, there exists a non-trivial branched immersed 2-sphere in N with prescribed mean curvature vector field λ H and Morse index satisfying [(n-2)/2] ≤Index≤ k-2, where k is the least integer such that π_k(N)≠ 0. Consequently, for the existence of surfaces with parallel mean curvature vector field (see Definition <ref> below) we have Let (N,h) be a closed n-dimensional (n≥ 4) Riemannian manifold with positive isotropic curvature and finite fundamental group, then given any ω∈ C^2(∧^2(N)) with induced parallel mean curvature type vector field H ∈Γ(∧^2(N)⊗ TN) determined by (<ref>), for almost every constant λ > 0, there exists a non-trivial branched immersed 2-sphere in N with prescribed parallel mean curvature vector field λ H and with Morse index satisfying [(n-2)/2] ≤Index≤ k-2, where k is the least integer such that π_k(N)≠ 0. Indeed, stronger existence results than Theorem <ref>, Theorem <ref> and Theorem <ref> still hold true, namely, we can remove the assumption that N has finite fundamental group, see step <ref> in Subsection <ref> and Proposition <ref> below for more details. Considering the completeness of our min-max theory and technical issues arising from the compactness in Section <ref>, we added the assumption that N has finite fundamental group in these Theorems. In general, CMC 2-spheres in Riemannian 3-manifold need not being embedded, see comments by Meeks-Mira-Pérez-Ros <cit.>. In fact, for certain ambient Berger 3-spheres with positive sectional curvature and for some specific choice of H, Torralbo <cit.> proved that every immersed CMC 2-sphere with mean curvature H always exists self-intersecting points. See also comments by Zhou <cit.>. In a recent work <cit.>, Cheng-Zhou settled the existences of branched immersed CMC 2-spheres in 3-manifolds with constant H>0 by developing a min-max theory for a fourth-order perturbation of the action functional. In the present paper, we develop a new min-max theory for a perturbed functional of Sacks-Uhlenbeck-Moore type, it achieves a natural extension to the more general setting of PMC 2-spheres in Riemannian manifolds with arbitrary codimensions. 
Theorem <ref> and Theorem <ref> can be viewed as a natural generalization of the existence results about minimal 2-spheres by Sacks-Uhlenbeck <cit.> and Micallef-Moore <cit.>. As an application of Morse index estimates of minimal 2-spheres described as in Theorem <ref>, Micallef-Moore <cit.> proved that any closed n-dimensional (n ≥ 4) PIC manifold admits π_i(N) = 0 for 2 ≤ i ≤ [n/2]. In particular, if N is simply connected, then N is homeomorphic to a sphere. In 1964, Eells-Sampson proposed a fundamental homotopy problem for the existence of harmonic maps: [<cit.>[See also <cit.> and <cit.>]] Is it possible for every homotopy class of smooth maps u: M → N between closed Riemannian manifolds M and N have a harmonic representative? Eells-Sampson <cit.> answered this question when M is an oriented closed m-dimensional Riemannian manifold and N has non-positive sectional curvature. For M a closed surface and N a general target with π_2(N) = 0, Sacks-Uhlenbeck <cit.> proved that there exists a minimizing harmonic map in every homotopy class of the mapping space C^0(M,N) and moreover, the generating set of π_2(N,p) can be represented by area minimizing 2-spheres by viewing π_2(N,p) as a ℤ(π_1(N,p))-module for some base point p ∈ N. Transforming into the H-surface setting, it is natural to impose the following: Can every homotopy class of smooth map u from a closed surface M into a closed Riemannian manifold N be represented by a H-surface? In this paper, we answer this homotopy problem by imposing ω_L^∞(N) : = max_p ∈ Nω(p)_T_pN < 1. Let N be a closed Riemannian manifold with π_2(N) = 0 and ω_L^∞(N) < 1, then there exists a minimizing H-surface u:M→ N in every homotopy class of maps in C^0(M, N). Note that the assumption π_2(N) = 0 is necessary as claimed in <cit.>. Moreover, by considering π _2(N,p) as a ℤ ( π _1(N,p))-module, whose action orbits represent the free homotopy classes of maps from ^2 into N, we obtain: Let N be a closed Riemannian manifold and ω_L^∞(N) < 1. Then, each generating set for π_2(N,p) acted by π_1(N,p) can be represented by minimizing H-spheres. In particular, if N is simply connected, then every set of generators of H_2(N,ℤ) can be represented by minimizing H-spheres. Theorem <ref> and Theorem <ref> extend the existence theory of minimizing harmonic maps with arbitrary codimensions in <cit.> to the H-surface setting. Moreover, when (N) = 3, by Micallef-White's descriptions <cit.> on local behavior of branch points for minimizing almost conformal H-surfaces, the H-surfaces obtained in Theorem <ref> and Theorem <ref> are immersed and free from branch points if they are almost conformal. 5pt §.§ Backgrounds 5pt The searching for CMC surfaces and PMC surfaces is a long standing problem. There has been extensive and substantial works dedicated to this problem. The construction of CMC surfaces with boundary started at Plateau's experiment with soap films and soap bubbles enclosed by various contours, and the natural existence questions arose from these experiments are called Plateau problem. The classical Plateau problem for disk-type minimal surfaces in ^3, that is, with mean curvature H=0, was solved independently by Douglas <cit.> and Radó <cit.>. In 1954, Heinz <cit.> investigated the Plateau problem for CMC surfaces in ^3 and obtained an existence result provided that |H| < 1/8(√(17) - 1). Werner <cit.> optimized the configuration of Heinz to improve the existence result for |H| < 1/2. 
Later, Hildebrandt <cit.> proved the existence of CMC surface for |H| ≤ 1 and subsequently extended this result to variable mean curvature H satisfying ||H||_L^∞≤ 1 in <cit.>. It is worth mentioning that the upper bound |H|≤ 1 is optimal in the sense that there is no solution when |H| > 1 for the Plateau contour Γ = {(cosθ,sinθ,0) : 0≤θ < 2π}⊂^3, see comments by Heinz <cit.> and Jost <cit.>. For further developments of the Plateau problem for CMC surfaces, see e.g. Wente <cit.>, Hildebrandt-Kaul <cit.>, Steffen <cit.>. Brezis-Coron <cit.> and Struwe <cit.> proved the existence of at least two solution for the Plateau problem of CMC surfaces which confirmed the Rellich's conjecture, see also Steffen <cit.>. For existence of PMC surfaces with Plateau boundary, see <cit.>. The study for the existence of CMC surfaces or PMC surfaces with free boundary initiated from Struwe <cit.> by applying heat flow method, and also studied by <cit.>. The Dirichlet problem for the minimal surface equation was studied by Jenkins-Serrin <cit.> for domain contained in ^2 and Spruck <cit.> considered the same problem when mean curvature H > 0, see also <cit.> for recent developments. For closed CMC surfaces, Hopf <cit.> proved that the round sphere is the only CMC surface with genus zero in ^3. Later, Barbosa-Do Carmo <cit.> showed that the standard sphere is the only closed stable CMC hypersurfaces in ^n+1. Wente <cit.> constructed an immersed CMC torus in ^3 which also gives a counterexample of Hopf's conjecture <cit.>. Moreover, Kapouleas <cit.> constructed a series of immersed closed CMC surfaces in ^3 with arbitrary genus, see also Briener-Kapouleas <cit.> for higher dimensional CMC hypersurfaces setting. The Almgren-Pitts min-max theory, which was introduced in <cit.>, is a significant breakthrough in the field of constructing closed minimal hypersurfaces. In recent years, this theory has been further developed and refined, starting with the confirmation of the Willmore conjecture by Marques-Neves <cit.>, followed by the resolution of Yau's conjecture <cit.> on the existence of infinitely many closed minimal surfaces in any closed 3-manifold, also by Marques-Neves <cit.> under the assumption of positive Ricci curvature. Later, Zhou <cit.> confirmed the multiplicity one conjecture of Marques-Neves <cit.> and Song <cit.> proved the Yau's conjecture <cit.> in the general case where the dimension of ambient manifold is relaxed from 3 to 7. Additionally, there have been several recent works on this topic, such as <cit.>. In the context of H≠0, closed CMC hypersurfaces in an ambient manifold were initially constructed by minimizing the area functional among all volume-preserving variations. For a more detailed understanding, we refer to <cit.>. However, this approach provides little information about the mean curvature value or the topology of CMC surfaces in ambient manifolds. There is another series of deformation approaches to construct CMC hypersurfaces, one can generate foliations by closed CMC hypersurfaces with small mean curvature H from a closed non-degenerate minimal surface. Moreover, Ye <cit.>, Mahmoudi-Mazzeo-Pacard <cit.>, and others have constructed foliations by closed CMC hypersurfaces from minimal submanifolds of strictly lower dimensions, see also <cit.>. The obtained CMC hypersurfaces by this method have a mean curvature that tends to be either very small or very large. 
Besides, a degree theory established by Rosenberg-Smith <cit.> constructed many important examples of CMC hypersurface when mean curvature is greater than some constant. Zhou-Zhu's construction <cit.> on the min-max method has led to the establishment of a comprehensive existence theory for closed CMC hypersurfaces in closed Riemannian manifolds of dimension between 3 and 7. Furthermore, Zhou-Zhu <cit.> extended their min-max theory for CMC hypersurfaces into the PMC setting, allowing for any prescribed mean curvature h lying in an open dense subset of smooth function space. This was later generalized to higher dimensions by Dey <cit.>, allowing a singular set of codimension 7, see also <cit.> for more recent development of min-max theory developed from <cit.>. Very recently, Mazurowski-Zhou <cit.> introduced the half-volume spectrum w_p_p ∈ N which also satisfies a Weyl law and they developed an Almgren-Pitts type min-max theory for finding closed CMC hypersurfaces associated to the half-volume spectrum in <cit.>, hence showed that there exists infinitely many geometrically distinct closed CMC hypersurfaces in closed manifold M^n of dimension 3 ≤ n ≤ 5. The existence theory of CMC and PMC surfaces with prescribed topology in general closed Riemannian manifold is less understood. For minimal 2-spheres, Simon-Smith <cit.> explored the existence of embedded minimal 2-spheres in any Riemannian 3-sphere, utilizing the Almgren-Pitts min-max theory <cit.>. The existence of branched immersed minimal 2-spheres in general Riemannian manifold with arbitrary codimensions was firstly studied in the pioneering work by Sacks-Uhlenbeck <cit.> and then explored in greater depth by Micallef-Moore <cit.>, who obtained branched immersed minimal 2-spheres with controlled Morse index. For CMC 2-spheres in homogeneous 3-spaces, Meeks-Mira-Pérez-Ros proved the existence and uniqueness of immersed CMC 2-spheres with any prescribed mean curvature in homogeneous 3-spheres <cit.> and later in homogeneous 3-manifolds <cit.>. For CMC 2-spheres in closed Riemannian 3-manifolds, a recent breakthrough was made by Cheng-Zhou <cit.> who established the existence of branched immersed CMC 2-spheres in arbitrary Riemannian 3-spheres (𝕊^3,h) for almost every positive constant mean curvature H>0 and with Morse index at most 1. Moreover, if (𝕊^3,h) has positive Ricci curvature, then the existence result can be enhanced to encompass any choice of constant mean curvature H>0 and guarantee the Morse index exactly 1 <cit.>. Here, we need to point out that, due to the third De Rham cohomology group H_dR^3(^3) ≅≠ 0, by our choices of functional E^ω we can not obtain the existence of branched immersed CMC 2-spheres in 3-manifolds, see Theorem <ref>. However, we believe that, if we modify our functional E^ω as following E_H = 1/2∫_M |∇ u |^2 dV_g + H· V(f_u) and adapt a similar Sacks-Uhlenbeck type perturbation E_α,H := 1/2∫_M 1 + |∇ u |^2 ^α dV_g + H· V(f_u) to study the existence of branched immersed 2-spheres with constant mean curvature in Riemannian 3-spheres (^3,h), utilizing a similar proof as Theorem <ref> and Theorem <ref> we will provide an alternative proof for Cheng-Zhou's results <cit.>. Here, H>0 is some positive constant and V(f_u) is the enclosed volume of the map u : (^2,g) → (^3,h), see <cit.> and <cit.> for more detailed descriptions about E_H. 
Very recently, Sarnataro-Stryker <cit.> constructed an embedded PMC 2-sphere in the round 3-sphere for generic set of prescribed mean curvature functions h with L^∞ norm at most 0.547 and obtained an embedded 2-sphere with constant mean curvature H when the metric on (𝕊^3,h) is sufficiently close to the round metric and H is below some threshold. As a higher codimensional generalization of CMC surfaces, the study of surfaces with parallel mean curvature vectors (see definition <ref> below) can be traced back to approximately 1940s, see e.g. Schouten-Struik <cit.>, Coburn <cit.> and Wong <cit.>. From the early 1970s, numerous studies have been conducted on the rigidity theory of such surfaces in homogeneous ambient manifolds, such as, the characterization of the spherical immersion of 4-dimensional space form by Ferus <cit.>, rigidity results for submanifolds with parallel mean curvature vector in spaces of constant sectional curvature by Yau <cit.> which involves the classification of surfaces with parallel mean curvature vector in 4-dimensional real space form, see also independent and related works by Hoffman <cit.> and Chen <cit.>, a structure theorem into higher dimensional space form by Alencar-do Carmo-Tribuzy <cit.>. And for more classification results see Kenmotsu-Zhou <cit.> in 2-dimensional complex space form, Fetcu <cit.> in general complex space forms and Fectu-Rosenberg <cit.> in Sasakian space forms. In contrast with the theory of CMC and PMC hypersurfaces, the existence of closed surfaces with parallel mean curvature vector, or more generally, with prescribed mean curvature vector, admitting prescribed topology and controlled Morse index in general n-dimensional compact Riemannian manifold is not widely understood. In this paper, our existence theory of CMC spheres (Theorem <ref>), PMC spheres (Theorem <ref> and Theorem <ref>), and partial resolution of homotopy problem (Theorem <ref> and Theorem <ref>) with arbitrary codimensions serves a supplement in such area. §.§ Settings 5pt Before describing our main ideas about proving main results, we recall some basic notions about H-surfaces with arbitrary codimensions, and introduce some notations, see Grüter <cit.>, Jost <cit.> and Rivière <cit.> for more details. Let (M,g) be a closed surface and (N,h) a closed Riemannian manifold of dimension n that is isometrically embedded into some ℝ^K. Take a C^2 2-form ω on N and consider the functional E^ω(u) = 1/2∫_M |∇ u|^2 dV_g + ∫_M u^* ω acting on maps u∈ C^2(M, N). It is easy to see that the functional E^ω is conformally invariant. Surprisingly enough, Grüter <cit.> showed that any coercive conformally invariant functional with quadratic growth has the form of (<ref>) for some appropriately chosen metric on M and ω on N. Critical points of the functional E^ω are called H-surfaces and one can verify that H-surface satisfies the Euler-Lagrange equation Δ u + A(u)(∇ u, ∇ u) = H(u)( u, ∇ u). Here, A is the second fundamental form of embedding N⊂ℝ^K, and the mean curvature type vector field H ∈Γ(∧^2(N)⊗ TN) is determined by ∀ U, V, W∈Γ(TN), dω(U, V, W):= U, H(V, W)_TN = U· H(V,W) where “·" is the standard scalar product on ^K. We write H(u)( u,∇ u) := H(u_x^1,u_x^2) - H(u_x^2,u_x^1) = 2H(u_x^1,u_x^2) for notation simplicity. If a solution u to (<ref>) is conformal, that is, |u_x^1|^2 - |u_x^2|^2 = u_x^1,u_x^2_u^*(TN) = 0, then H(u) is the mean curvature vector of the surface determined by u:M → N, see <cit.>. 
In particular, due to the uniqueness of conformal structure on ^2, all H-spheres are conformal automatically. Naturally, we assume H is non-degenerate in the sense that H(u)(u_x^1,u_x^2) ≢0, otherwise, the problem is reduced to a harmonic map setting. Then, we define ∇ H ∈Γ(∧^3(N)⊗ TN) to be the convariant differential of H with respect to vector field component of H, more precisely, ∇ H is determined by ∇ H(U,V), W _TN = (∇_W H)(U,V) ∈Γ(T(N)) for all U,V, W ∈ TN. In particular, when N = 3, the 3-form dω defined on N can be identified with a function on N, that is, there exists H ∈ C^1(N,) such that dω = H dz^1∧ dz^2 ∧ dz^3 where (z^1,z^2,z^3) is some local coordinates of N. In this case, the equation (<ref>) can be written as Δ u + A(u)(∇ u, ∇ u) = 2H(u) (u_x^1∧ u_x^2). A solution to (<ref>) and (<ref>) determines a PMC surface, which is a CMC surface when H is constant. A natural extension of CMC surfaces in higher codimensions is the concept of surfaces with parallel mean curvature vector field. For detailed description of surfaces with parallel mean curvature vector field in some homogeneous spaces, see e.g. classical works <cit.>, <cit.>, <cit.>. For surfaces in general Riemannian manifolds with prescribed parallel mean curvature vectors and with arbitrary codimensions, based on our previous settings it is natural to define the following: [Parallel H-surfaces] We call a C^2 map u : M → N a parallel H-surface if u is a solution to (<ref>) with the mean curvature type vector field H satisfying ∇ H ≡ 0. From the perspective of submanifold theory, it is important to note that the mean curvature vector H is a section of the normal bundle and parallelism is referred to the mean curvature vector H is parallel in the normal bundle. We would like to mention that the mean curvature of the parallel conformal H-surface, as described in Definition <ref>, is parallel in the usual sense (see e.g. <cit.>). 5pt §.§ Basic Ideas of Seeking Lg-spheres 5pt Seeking critical points of E^ ω by directly applying methods from calculus of variations is a challenging task due to several technical difficulties: * The conformally invariant functional E^ω does not satisfy the Palais-Smale condition. * Due to the appearance of the term involving ω in E^ω, some classical methods developed for harmonic maps can not be applied. To this end, we consider a perturbation of E^ω, denoted by E^ω_α:W^1,2α(M,N)→, as follows, called the Sacks-Uhlenbeck-Moore approximation: E^ω_α(u) = 1/2∫_M 1 + |∇ u|^2^α dV_g + ∫_M u^*ω where α > 1 and ω is a C^2 2-form on N. In his book <cit.>, Moore wrote down the above perturbed functional and pointed out that it satisfies Palais-Smale condition and indicated the regularity of critical points for E^ω_α. Sacks-Uhlenbeck type perturbations have been effectively employed in various other settings, for instance, by utilizing these approximations, Cheng-Zhou <cit.> established the existence of curves with constant geodesic curvature in Riemannian 2-spheres, and Cheng <cit.> demonstrated the existence of free boundary disks with constant mean curvature in ^3. In this paper, we demonstrate that this perturbed functional E^ω_α is a feasible one to derive the existence of branched immersed H-spheres in Riemannian manifolds with arbitrary codimensions. More precisely, we develop a min-max theory for the functional E^ω_α, then deduce a compactness theory for non-trivial critical points of E^ω_α as α↘ 1, and finally achieve the desired existence results. 
5pt §.§ Outline of Proof 5pt Perturbing the functional E^ω also brings in many new challenges. To this end, we develop a method to construct sequences of non-trivial critical points {u_α_j}_j ∈ℕ of E^ω_α_j with uniformly bounded α_j-energy and uniform Morse index upper bound. Also, we implement a convergence scheme to produce non-constant H-spheres and we related the existence of prescribed mean curvature sphere with the continuity of α-energy E_α as α↘ 1. * Firstly, thanks to the fact that the functional E^ω_α satisfies the Palais-Smale condition on Banach manifold W^1,2α(^2,N), inspired by a monotonicity technique by Struwe <cit.> and an argument by Colding-Minicozzi <cit.>, for almost every λ∈_+, we exploit the notion of Width with higher dimensional parameter spaces in our setting to construct a sequence of non-constant critical points {u_α_j} of the functional E^λω_α_j with uniformly bounded α_j-energy E_α_j(u_α_j). Here, we utilized the monotonicity technique in <cit.> with respect to the parameter λ∈_+ to obtain some α-energy upper bound which depends on the choice of λ but is uniform for the sequence {u_α_j} as α_j ↘ 1, see Proposition <ref>. To establish the Morse index upper bound for our approximated sequence, we draw inspiration from the Morse index upper bound estimates within the framework of Almgren-Pitts min-max theory as explored by Marques-Neves <cit.>, Song <cit.>, and Li <cit.>. Additionally, we refer to the works of Cheng-Zhou <cit.> and Cheng <cit.> for insights into a newly devised min–max theory setting. Leveraging a homotopical deformation approach for the min-max sequences of sweepouts, we construct a sequence {u_α_j}_j ∈ℕ with prescribed mean curvature type vector field λ H that simultaneously satisfies the desired Morse index upper bound and uniformly α_j-energy bound, see Theorem <ref> and Theorem <ref> for detailed descriptions. * Next, we investigate the limit u_α_j as α_j ↘ 1. Standard tools for blow-up analysis for α-harmonic maps developed in <cit.> still hold in our case. We also have an alternative: * If Dirichlet energy E(u_α_j) is nowhere concentrated as α_j ↘ 1, then u_α_j converges strongly in C^∞(^2, N) to some (λ H)-sphere with same Morse index upper bound k-2. * If Dirichlet energy E(u_α_j) concentrates somewhere x_1 ∈^2, then the rescaled sequence v_α_j(x) :=u_α_j(x_1 + λ_α_j x) for some λ_α_j↘ 0 also converges smoothly to a limit v ∈ W^1,2(^2,N). But due to the absence of conformally invariance for functional E^ω_α, v solves a new equation Δ v + A(v)(∇ v, ∇ v) = 1/μλ H(v)( v, ∇ v) where μ = lim inf_α_j↘ 1λ_α_j^2 - 2α_j∈ [1,∞). We call it the blow-up spectrum of v_α_j which characterizes the competition between the extent of energy dissipation |∇ u_α_j| ↗∞ and the speed of (α_j - 1) ↘ 0 as j →∞ during blow-up process. For α-harmonic maps, such type of quantity was introduced by Li-Wang <cit.> to investigate the generalized energy identity. The second challenge in our paper is to establish μ = 1. An intriguing observation is that μ = 1 if and only if there is no energy loss during the blow-up process for a general sequence { u_α_j}_α_j ↘ 1 of critical points of E^ω_α around each energy concentration point. Through a meticulous neck analysis and leveraging Gromov's <cit.> estimates on the length of geodesics by its Morse index, we demonstrate that the energy identity holds, hence μ = 1, for sequences of min-max type critical points { u_α_j}_α_j ↘ 1, see Section <ref> for more details. 
Therefore, when energy concentrates at a particular point, a non-constant (λ H)-sphere with Morse index bounded from above by k-2 is also obtained. In general, as a consequence of Theorem <ref>, u_α_j converges to some (λ H)-sphere u weakly in W^1,2(M,N) and strongly in C^2(^2\x_1,x_2,⋯, x_l,N) for some l ≥ 0. Surprisingly, we observe that the weak limit u of u_α_j is always non-constant, as shown in Proposition <ref>. This suggests the possibility of a second non-trivial H-sphere being produced when the bubbling phenomenon occurs. As a result, the proof of Theorem <ref> is completed in both scenario. To prove the part (<ref>) of Theorem <ref>, we firstly modify the calculation of Ejiri-Micallef <cit.>, which is also applied in the proof of <cit.> in CMC setting, to obtain a new bi-linear form whose Morse index is controlled by E^ω. Then, we combine the Ricci curvature assumption on target N with the conformal balance argument (see for instance Li-Yau <cit.>) to get a uniform energy upper bound and exclude the possibility of existence of non-trivial stable H-sphere. Then Theorem <ref> when (N) = 3 follows by a convergence argument for sequences of H-spheres. For the part (<ref>) of Theorem <ref>, by adapting the calculation and counting argument of <cit.>, we can get the Morse index lower bound of non-constant H-sphere. The proofs of Theorem <ref> and Theorem <ref> follow by adapting the scheme of Sacks-Uhlenbeck's <cit.> resolutions on homotopy problem of harmonic map setting and an observation that the assumption (<ref>) implies the lower boundedness of functional E^ω. 5pt §.§ Organizations 5pt The paper is structured as follows: In Section <ref>, we provide the necessary notations and discuss the variational properties of the perturbed functional, E^ω_α:W^1,2α(M,N) →. Note that the results presented in this section are applicable to general closed Riemann surfaces (M,g). Section <ref> is dedicated to the construction of a sequence of non-trivial critical points {u_α_j}_j ∈ℕ of E^ω_α_j with uniformly bounded α_j-energy E_α_j and a Morse index bounded from above by k-2. Section <ref> focuses on investigating the limits of u_α_j as α_j ↘ 1. More precisely, we establish a generalized energy identity and unveil a direct convergence relationship between our blow-up spectrum and the energy identity for a general sequence u_α_j from a closed Riemann surface M to a compact n-manifold N. In Section <ref>, we prove the main results Theorem <ref>, Theorem <ref>, Theorem <ref> and Theorem <ref>. 2cm § VARIATIONAL PROPERTIES OF PERTURBED FUNCTIONAL LG 10pt In this section, we shall review some notations and variational properties of the perturbed functional E^ω_α. It is worth noting that the restriction of M=^2 is not a necessary condition for the results to hold and consequences presented in this section can apply to general Riemann surfaces (M,g). §.§ Some Preliminaries 5pt Recall that we assumed that N is isometrically embedded into ^K for some K ∈ℕ. In order to utilize the coordinate of ℝ^K to locate the point of N, we choose a tubular neighborhood N of N equipped with canonical Euclidean coordinate (y^1, y^2,…, y^K) in ^K. Furthermore, N can be chosen to be close to N enough such that ω can be extended to a C^2 2-form defined on N which is also denoted by ω. Hence, utilizing this local coordinate of N, we can write ω = ω_ij dy^i∧ dy^j and H = H^i_kldy^k∧ dy^l⊗∂/∂ y^i∈Γ(∧^2(N)⊗ TN). 
Similar to the extension procedure of ω, we also extend H to a small neighborhood N of N and using the coordinate of N to represent H. In the following content of paper, we will always use coordinate of N to represent ω and H defined on N unless giving other specific convention. In particular, taking U = ∂/∂ y^k, V = ∂/∂ y^i and W = ∂/∂ y^j, by the correspondence (<ref>) we can write dω(∂/∂ y^k, ∂/∂ y^i, ∂/∂ y^j) = ∂ω_ij/∂ y^k + ∂ω_jk/∂ y^i + ∂ω_ki/∂ y^j := H^k_ij . Then coefficients of H are anti-symmetric in the indices i,j and k, i.e. H^k_ij = - H^k_ji and H^k_ij = - H^i_kj . Let u:(M,g) → (N,h) be a critical point of E^ω_α in W^1,2α(M,N), which is called the α-H-surface. By Sobolev embedding W^1,2α(M,^K) ↪ C^0(M,^K), the mapping space W^1,2α(M,N) is a smooth, closed, infinite dimensional submanifold of W^1,2α(M,^K). For each u ∈ W^1,2α(M,N), the tangent space 𝒯_u of Banach manifold W^1,2α(M,N) at u can be identified with 𝒯_u := {V ∈ W^1,2α(M,^K) : V(x) ∈ T_u(x)N for all x∈ M} which is a closed subspace of W^1,2α(M,^K). §.§ First and Second Variation of Perturbed Functional Lg 5pt The differential of E_α^ω at u or the first variation of E_α^ω at u , denoted by δ E^ω_α(u) : 𝒯_u →, is defined as following δ E^ω_α(u)(V) := .d/dt|_t = 0 E^ω_α(u_V, t) for all u_V, t = exp_u(x)t V(x) and V ∈𝒯_u. Moreover, if δ E^ω_α(u) = 0, then the Hessian of E^ω_α at u or the second variation of E^ω_α at u, denoted by δ^2 E^ω_α(u) : 𝒯_u ×𝒯_u → is defined by δ^2 E^ω_α(u)(V,V) := .d^2/dt|_t = 0 E^ω_α(u_V, t) for all u_V, t = exp_u(x)t V(x) and V ∈𝒯_u. And by parallelogram law, for any V,W∈𝒯_u we have δ^2 E^ω_α(u)(V,W) = 1/4δ^2 E^ω_α(u)(V + W,V + W) - δ^2 E^ω_α(u)(V - W,V - W) To begin, we compute the first and second variations of E^ω_α. Although we will carry out the computation in choosing a local version, i.e. for compactly supported variations, the choice turns out to be not crucial and the outcome makes sense globally. Let u ∈ W^1,2α(M,N) and V∈𝒯_u. Then, the first variation formula of E^ω_α is δ E^ω_α(u)(V) = ∫_M α1 + ∇ u^2^α - 1∇ u, ∇ V dV_g + ∫_M H( u, ∇ u), V dV_g, where 𝒫_u : T_uW^1,2α(M,^K) ≅ W^1,2α(M,^K) →𝒯_u is the orthogonal projection, and the second variation formula of E^ω_α is δ^2 E^ω_α(u)(V,V) = α∫_M 1 + ∇ u^2^α - 1( ∇ V, ∇ V - R(V,∇ u, V, ∇ u) ) d V_g + 2α(α - 1)∫_M 1 + ∇ u^2^α - 2⟨∇ u, ∇ V⟩^2 dV_g + 2 ∫_M ⟨ H( u, ∇ V), V⟩ dV_g + ∫_M ⟨∇_V H( u, ∇ u), V⟩ dV_g, where A is the second fundamental form of embedding N ↪^K. As mentioned before, we only need to compute the variation formula for compact supported section V ∈𝒯_u. On the one hand, for the first variation of α-energy E_α, it is well known that .d/dt|_t = 0 E_α(u_V, t) = ∫_Mα1 + ∇ u^2^α - 1⟨∇ u, ∇ V⟩ d V_g. On the other hand, in local coordinate {y^1, y^2, …, y^K} of N and integration by parts we compute that .d/dt|_t = 0∫_M (u_V, t)^*ω = .d/dt|_t = 0∫_M ω_ij(u_V, t) u_V, t^i ∇ u_V, t^j dx = ∫_M∂ω_ij/∂ y^kd u_V, t^k/dt u^i ∇ u^j dx + ∫_Mω_ij(u) V^i ∇ u^j dx + ∫_M ω_ij u^i ∇ V^j dx = ∫_M ∂ω_ij/∂ y^k + ∂ω_jk/∂ y^i + ∂ω_ki/∂ y^j u^i ∇ u^j V^k dx = ∫_M ⟨ H( u, ∇ u), V ⟩ dV_g. Consequently, we obtain the following first variation formula for E_α^ω δ E^ω_α(u)(V) = ∫_M α1 + ∇ u^2^α - 1⟨∇ u, ∇ V⟩ dV_g + ∫_M ⟨ H( u, ∇ u), V⟩ dV_g . 
Next, we turn to compute the Hessian of δ^2 E^ω_α, from the definition we compute the δ^2 E^ω_α(u)(V,V) by taking the derivative of the following expression with respect to t at t = 0: ∫_Mα1 + ∇ u_V, t^2^α - 1∇ u_V, t, ∇d u_V, t/dt d V_g + ∫_M H( u_V, t, ∇ u_V, t), d u_V,t/dt dV_g To begin with, we can differentiate the first term in (<ref>) to obtain .d/dt|_t = 0 ∫_Mα1 + ∇ u_V, t^2^α - 1∇ u_V, t, ∇d u_V, t/dt d V_g = 2α(α - 1)∫_M1 + ∇ u_V, t^2^α - 2⟨∇ u, ∇ V⟩^2 d V_g + ∫_M2α1 + ∇ u^2^α - 1⟨∇ V, ∇ V⟩ + ∇ u, ∇_V ∇_∇ u Vd V_g = 2α(α - 1)∫_M1 + ∇ u^2^α - 2⟨∇ u, ∇ V⟩^2 d V_g + ∫_Mα1 + ∇ u^2^α - 1⟨∇ V, ∇ V⟩ - R(V,∇ u, V, ∇ u) d V_g, where R is the Riemann curvature tensor on N. Next, we consider the mean curvature type vector field part in (<ref>). Before penetrating into details, we note that .d^2 u_V, t/dt^2|_t = 0 = .d^2/dt^2|_t = 0exp_u(x)t V(x) = A(V,V), which is perpendicular to the tangent bundle TN. Utilizing this observation and integrating by parts, we can compute .d/dt|_t = 0 ∫_M H( u_V, t, ∇ u_V, t), d u_V,t/dt dV_g = ∫_M ∂ H^k_ij/∂ y^l V^l V^k u^i ∇ u^j dx + ∫_M H^k_ij V^k V^i ∇ u^j dx +∫_M H^k_ij V^k u^i ∇ V^j dx = ∫_M ⟨∇_V H( u, ∇ u), V⟩ dV_g + 2 ∫_M ⟨ H( u, ∇ V), V⟩ dV_g. Combining computations (<ref>) and (<ref>) will yield the second variation formula of E^ω_α formulated in Lemma <ref>. For α > 1, by a similar computations as Lemma <ref>, the Euler-Lagrange equation of critical points of E^ω_α, which are called α-H-surfaces, can be written as Δ u_α + (α - 1)∇ |∇ u_α|^2·∇ u_α/1+|∇ u_α|^2 + A(u_α)(∇ u_α, ∇ u_α) = H(u_α)( u_α, ∇ u_α) /α(1 + |∇ u_α|^2)^α - 1, or equivalently in divergence form ((1+|∇ u_α|^2)^α - 1∇ u_α) + (1 + |∇ u_α|^2)^α -1A(u_α)(∇ u_α,∇ u_α) = 1/αH(u_α)( u_α, ∇ u_α). Using parallelogram law, from Lemma <ref>, we can write the Hessian of E^ω_α as following. For V,W ∈_u, we can compute the Hessian for E^ω_α(u) at some critical point u ∈ W^1,2α(M,N) δ^2 E^ω_α(u)(V,W) = α∫_M 1 + ∇ u^2^α - 1( ⟨∇ V, ∇ W⟩ - R(V,∇ u, W, ∇ u) ) d V_g + 2α(α - 1)∫_M 1 + ∇ u^2^α - 1⟨∇ u, ∇ V⟩⟨∇ u, ∇ W⟩ dV_g + ∫_M ( H( u, ∇ V), W + H( u, ∇ W), V) dV_g + 1/2∫_M ( (∇_V H)( u, ∇ u), W + (∇_W H)( u, ∇ u), V) dV_g where A(·,·) is the second fundamental form of embedding N↪^K. In order to derive a priori estimate that will be used in later sections, especially when it comes to proving Lemma <ref>, we need a formula for δ E^ω_α(𝒫_u(φ)) with φ∈ W^1,2α(M,^K). To simplify the notation, we set G_α^ω(u) : = δ E^ω_α∘𝒫_u and define the norm of G_α^ω(u) : W^1,2α(M,^K) → by G_α^ω(u) := sup{G_α^ω(u)(φ) : φ∈ W^1,2α(M,^K) with φ_W^1,2α(M,^K)≤ 1}. Based on above conventions, we can obtain the following estimates: The following properties for G^ω_α holds: * Given u ∈ W^1,2α(M,N), for any φ∈ W^1,2α(M,^K) we have that G^ω_α(u)(φ) = ∫_Mα1 + ∇ u^2^α - 1(∇ u ·∇φ - A(u)(∇ u, ∇ u)·φ) dV_g +∫_M H( u,∇ u)·φ dV_g ; * For all L > 0, there exists constant C_L > 0 such that G^ω_α(u)≤ C_L whenever α > 1 and u ∈ W^1,2α(M,N) satisfying u_W^1,2α(M,N)≤ L; * For all L > 0, there exists constant C_L > 0 such that G^ω_α(u_1) - G^ω_α(u_2)≤ C_L u_1 - u_2_W^1,2α(M,N) whenever α > 1, and u_1_W^1,2α(M,^K), u_2_W^1,2α(M,^K)≤ L. For the part (<ref>), using the first variation formula in Lemma <ref>, we get δ E^ω_α(u)(𝒫_u(φ)) = ∫_M α1 + ∇ u^2^α - 1∇ u, ∇𝒫_u(φ) dV_g + ∫_M H( u, ∇ u), 𝒫_u(φ) dV_g = - α∫_M ⟨1 + ∇ u^2^α - 1∇ u, 𝒫_u(φ) ⟩ dV_g + ∫_M H( u, ∇ u), 𝒫_u(φ) dV_g. 
Recalling that 𝒫_u1 + ∇ u^2^α - 1∇ u = 1 + ∇ u^2^α - 1∇ u - 1 + ∇ u^2^α - 1A(∇ u, ∇ u), we decompose φ = 𝒫_u(φ) + φ^⊥ where φ^⊥ is normal component of φ in TN^⊥ and plug this into (<ref>) to obtain δ G^ω_α(u)(φ)=δ E^ω_α(u)(𝒫_u(φ)) = - α∫_M 1 + ∇ u^2^α - 1∇ u·φ dV_g + ∫_M H( u, ∇ u)·φ dV_g - α∫_M 1 + ∇ u^2^α - 1 A(∇ u, ∇ u) ·φ dV_g = α∫_M 1 + ∇ u^2^α - 1⟨∇ u, ∇φ⟩ dV_g + ∫_M H( u, ∇ u)·φ dV_g - α∫_M 1 + ∇ u^2^α - 1A(∇ u, ∇ u) ·φ dV_g, which is exactly the desired of part (<ref>). For part (<ref>), using the formula (<ref>) we straightforward estimate that G^ω_α(u)(φ) ≤∫_M α1 + ∇ u^2^α - 1∇φ·∇ u+ α1 + ∇ u^2^α·φ d V_g + ∫_M C ∇ u^2·φ dV_g ≤ C_L^'( ∇φ_L^2α(M,^K) + φ_L^∞(M,^K)) ≤ C_L φ_W^1,2α(M,^K). Here, in the last inequality we used the Sobolev embedding W^1,2α(M,^K) ↪ C^0(M,^K). Next, we consider the part (<ref>). To begin, we use formula (<ref>) obtained in part (<ref>) to get G^ω_α(u_1)(φ) - G^ω_α(u_2)(φ) = ∫_Mα(1 + ∇ u_1^2^α - 1∇ u_1 ·∇φ - 1 + ∇ u_2^2^α - 1∇ u_2 ·∇φ)dV_g - ∫_Mα1 + ∇ u_1^2^α - 1( A(u_1)(∇ u_1, ∇ u_1) -A(u_2)(∇ u_2, ∇ u_2))·φ dV_g -∫_Mα(1 + ∇ u_2^2^α - 1 - 1 + ∇ u_1^2^α - 1)A(u_2)(∇ u_2, ∇ u_2)·φ dV_g + ∫_M (H( u_1,∇ u_1)- H( u_2,∇ u_2))·φ dV_g . Based on part (<ref>) and Sobolev embedding W^1,2α(M,^K) ↪ C^0(M,^K), we can simplify the estimates by focusing on the case when u_1 - u_2_C^0(M,N)≤δ_0 for some small δ_0 > 0. Here, we choose small enough δ_0 >0 to ensure that tu_1 + (1 - t)u_2 ∈ W^1,2α(M,N), hence the formula obtained in part (<ref>) can be applied to such convex combination. we can estimate the integrand of the first integral on the right hand side of (<ref>) to obtain α1 + ∇ u_1^2^α - 1∇ u_1 ·∇φ - α1 + ∇ u_2^2^α - 1∇ u_2 ·∇φ = ∫_0^1 α(α - 1)1 + |∇ (t u_1 + (1 - t)u_2)|^2^α - 2· [∇t u_1 + (1 - t)u_2·∇φ] [∇(u_1 - u_2) ·∇φ]dt + ∫_0^1 α(1 +|t u_1 + (1 - t)u_2|^2)^α - 1∇(u_1 - u_2)·∇φ dt. From this identity and Hölder's inequality, the first integral on the right-hand side will be bounded by C_L u_1 - u_2_W^1,2α(M,^K)·φ_W^1,2α(M,^K). The remaining terms in (<ref>) can be estimated in a complete similar manner by using Sobolev embedding W^1,2α(M,^K) ↪ C^0(M,^K) and Hölder's inequality. §.§ Palais-Smale Condition and Regularity of Critical Points for Functional Lg 5pt The Palais-Smale conditions are a set of compactness conditions that are essential in variational analysis. Our first objective in this subsection is to confirm that the functional E^ω_α: W^1,2α(M,N) →ℝ satisfies a version of the Palais-Smale condition, which was also pointed out by Moore <cit.>. For α > 1, the functional E^ω_α: W^1,2α(M,N)→ satisfies the Palais-Smale condition, more precisely, let {u_j} be a sequence in W^1,2α (M,N) satisfying * E^ω_α(u_j) ≤ C for some universal constant C independent of j ∈ℕ; * δ E^ω_α(u_j)→ 0 as j →∞, then, after passing to a subsequence if necessary, u_j converges strongly in W^1,2α(M,N) to a critical point of E^ω_α as j →∞. Let {u_j}_j ∈ℕ be a sequence in W^1,2α(M,N) ⊂ W^1,2α(M,^K) such that E^ω_α(u_j) is uniformly bounded and δ E^ω_α(u_j)→ 0 for fixed α > 1. It follows that the α-energy of u_j, E_α(u_j), is also uniformly bounded. Otherwise if, after choosing a subsequence, E_α(u_j) →∞ as j→∞, then E^ω_α(u_j) ≥1/2∫_M1 + ∇ u_j^2^α dV_g - 1/2ω_L^∞(N)∫_M ∇ u_j^2 dV_g ≥1/4∫_M1 + ∇ u_j^2^α dV_g →∞, as j →∞, which contradicts to the uniformly boundedness of E^ω_α. From this observation, by Sobolev embedding W^1,2α(M,N) ↪ W^1,2α(M,^K) ↪ C^0,1 - 1/α(M,^K), every u_j is Hölder continuous. And together with the compactness of N, it follows that {u_j}_j ∈ℕ is equi-continuous. 
Therefore, thanks to the Arzela-Ascoli Theorem, there exists a subsequence of {u_j}, also denoted by {u_j}, converges uniformly to a continuous map u_0 : M → N. To complete the proof of the Lemma <ref>, it is sufficient to demonstrate that {∇ u_j } is a Cauchy sequence in L^ 2 α (M,N). Recall that 𝒫 : ^K ×^K ≅ T^K→ TN is the pointwise orthogonal projection from ^K ×^K ≅ T^K onto TN and simplify the notation of (𝒫∘ u) (V) to 𝒫_u(V ) for any V∈ T_uW^1,2α(M,ℝ^K). This allows us to make the following estimates d 𝒫_u(V)_L^2α(M,^K) ≤d 𝒫∘ u(V)_L^2α(M,^K) + 𝒫_u( ∇ V)_L^2α(M,^K) ≤ C V_L^∞(M, ^K)∇ u_L^2α(M,^K) + C ∇ V_L^2α(M,^K) which further implies that 𝒫_u(V)_W^1,2α(M,^K) ≤ CV_L^∞(M, ^K) E_α(u) + ∇ V_L^2α(M,^K) + V_L^2α(M,^K) ≤ C 1 + E_α(u)V_W^1,2α(M,^K). Since E_α (u_j) is uniformly bounded and u_j is also uniformly bounded, then {u_j} is uniformly bounded in W^1,2α(M, ^K) and (<ref>) tells us that 𝒫_u(u_i - u_j) is bounded in W^1,2α(M, ^K) for each u ∈ W^1,2α(M,^K). Because δ E^ω_α(u_j)→ 0 and recall part <ref> Lemma <ref> , we have δ G^ω_α(u_i)(u_i - u_j) - δ G^ω_α(u_j)(u_i - u_j)⟶ 0 as i, j →∞. By part (<ref>) of Lemma <ref>, replacing φ with u_i - u_j to obtain G^ω_α(u)(u_i - u_j) = ∫_Mα1 + ∇ u^2^α - 1(∇ u ·∇ (u_i - u_j) - A(u)(∇ u, ∇ u)· (u_i - u_j) ) dV_g +∫_M H( u,∇ u)· (u_i - u_j) dV_g . Plugging this identity into (<ref>), we obtain |∫_M α[ 1 + ∇ u_i^2^α - 1∇ u_i·∇(u_i - u_j).. - ..1 + ∇ u_j^2^α - 1∇ u_j·∇(u_i - u_j)] dV_g . + ∫_M [ H( u_i, ∇ u_i)· (u_i - u_j) - H( u_j, ∇ u_j)·(u_i - u_j) ]dV_g - ∫_M1 + ∇ u_i^2^α - 1 A(∇ u_i, ∇ u_i) · (u_i - u_j) dV_g .- ∫_M 1 + ∇ u_j^2^α - 1 A(∇ u_j, ∇ u_j) · (u_i - u_j) dV_g| ⟶ 0 as i, j→∞. We consider the above asymptotic quantity (<ref>) term by term, first we note that | ∫_M 1 + ∇ u_i^2^α - 1 A(∇ u_i, ∇ u_i) · (u_i - u_j) dV_g| ≤ 2u_i - u_j_L^∞(M,^K)A_L^∞(N) E_α(u_i) ⟶ 0 as i,j→∞. And similarly, ∫_M H( u_i, ∇ u_i)· (u_i - u_j) dV_g ≤ 2u_i - u_j_L^∞(M,^K)H_L^∞(N) E(u_i) ⟶ 0 as i,j→∞. Thus, (<ref>) is equivalent to |∫_M α[ 1 + ∇ u_i^2^α - 1∇ u_i·∇(u_i - u_j) .. .. - 1 + ∇ u_j^2^α - 1∇ u_j·∇(u_i - u_j) ] dV_g | ⟶ 0 as i,j→∞, and this further implies that ∫_M ∇ u_i - ∇ u_j^2α d V_g ≤ C ∫_M ∫_0^1 ∇ u_j + t (∇ u_i - ∇ u_j)^2α - 2∇ u_i - ∇ u_j^2 dt d V_g ≤ C ∫_M∫_0^1 ∫_0^1 d^2(1 + ∇ u^2)^α(∇ u_j + t (∇ u_i - ∇ u_j))(∇ u_i - ∇ u_j) dt dV_g ≤ C|∫_M α[ 1 + ∇ u_i^2^α - 1∇ u_i·∇(u_i - u_j) .. .. - 1 + ∇ u_j^2^α - 1∇ u_j·∇(u_i - u_j) ] dV_g | ⟶ 0 as i,j→∞, that is, ∇ u_j is Cauchy in L^2α(M,^K). In summary, by combining the fact that u_j → u_0 in C^0(M, ℝ^K) with that {∇ u_j}_j ∈ℕ is a Cauchy sequence in L^2α(M, ℝ^K) and the completeness of W^1,2α(M, ℝ^K) , we can conclude the desired assertion of Lemma <ref>. Based on the beginning part of the proof for Lemma <ref>, it can be argued that the functional E^ω_α is bounded from below, although the lower bound may be dependent on the choice of α > 1. Thus, by a classical consequence of variational analysis in <cit.>, we have For functional E^ω_α : W^1,2α(M,N) →, the followings hold * E^ω_α attains its minimum value in every component of W^1,2α(M,N); * If there are no critical points of E^ω_α in the interval [a,b], then there exists a deformation retraction ϱ : (E^ω_α)^-1(-∞, b] → (E^ω_α)^-1(-∞, a]. We will now examine the smoothness of the critical points u ∈ W^1,2α(M,N) with respect to the functional E^ω_α by following the argument in <cit.>, which was also indicated by Moore <cit.>. To ensure the comprehensiveness of our content, we provide an outline of the proof here. 
For sufficiently small α_0 - 1, all critical points of E^ω_α: W^1,2α(M,N) →ℝ are smooth. By the Sobolev embedding W^1,2α(M,N) ⊂ W^1,2α(M,^K) ↪ C^0,1 - 1/α(M,^K), the critical point u∈ W^1,2α(M,N) is Hölder continuous. The Euler-Lagrange equation for u can be written as ((1+|∇ u|^2)^α - 1∇ u) + (1 + |∇ u|^2)^α -1A(u)(∇ u,∇ u) = 1/αH(u)( u, ∇ u) in the weak sense. This is a quasi-linear uniformly elliptic system when α - 1 is small enough. Based on standard results in variational analysis, such as in <cit.> or <cit.>, it can be concluded that u belongs to the space W^1,2α(M,N). Thus, the Euler-Lagrange equation (<ref>) can be written pointwise as Δ u + (α - 1)∇|∇ u|^2·∇ u/1+|∇ u|^2 = H(u)( u, ∇ u)/α(1 + |∇ u|^2)^α - 1 - A(u)(∇ u, ∇ u). The right-hand side of the above equation belongs to L^α(M,^K), α > 1, for u ∈ W^1,2α(M,N). From the L^p theory of uniformly elliptic equations, we can conclude that u belongs to W^2,2α(M,N). Then, the conclusion of Lemma <ref> follows by applying a standard elliptic bootstrapping argument. § EXISTENCE OF NON-TRIVIAL CRITICAL POINTS OF THE PERTURBED FUNCTIONAL E^ω_α In this section, we combine the analytic preliminaries established in the previous Section <ref> with the min-max theory for the perturbed functional E^ω_α, described in this section, to establish the existence of a sequence of non-trivial critical points u_α_j of E^ω_α_j. More precisely, for each fixed ω∈ C^2(∧^2(N)), we can find a generic choice of λ∈_+ to construct a sequence of non-constant critical points u_α_j of E^λω_α_j (see (<ref>) below for the definition of E^λω_α) with α_j-energy bounded uniformly with respect to j ∈ℕ, see Proposition <ref>, and with Morse index bounded from above, see Theorem <ref>. The main result of this section is summarized in Corollary <ref>. §.§ Construction of Min-Max Type Critical Value In this subsection, we construct the min-max type critical value by introducing a higher dimensional version of the width, in the spirit of <cit.> for finding minimal 2-spheres in a Riemannian 3-sphere. Furthermore, we always assume that N is a closed Riemannian manifold with π_k(N) ≠ 0 for some k ≥ 2 and that M is the standard 2-sphere ^2, which is the same as the assumptions of our main Theorem <ref> and Theorem <ref>. Then, we choose the least integer k ≥ 2 such that π_k(N) ≠ 0. Let I^k-2 = {t=(t^1,t^2,…,t^k-2) : 0≤ t^i ≤ 1, 1≤ i≤ k-2} be the (k-2)-dimensional unit cube. We define a sweepout as a continuous map σ : I^k-2→ W^1,2α(^2,N), which can also be viewed as a map σ:^2× I^k-2→ N, such that σ maps ∂ I^k-2 to constant maps, which can be identified with points of N, and such that the map f_σ : ^k → N induced by σ represents a non-trivial free homotopy class in π_k(N). Considering that W^1,2α(𝕊^2,N)↪ C^0(𝕊^2,N) when α>1, it can be inferred that the induced map f_σ is continuous. Then, taking a non-trivial homotopy class [ι] ∈π_k(N), we define the set of admissible sweepouts as 𝒮 = {σ∈ C^0(I^k-2,W^1,2α(^2,N)) : σ(t) is constant for t∈∂ I^k-2 and f_σ∈ [ι]}. Thus, the corresponding min-max value for E^ω_α is defined as 𝒲_α,ω := inf_σ∈𝒮sup_t ∈ I^k-2 E^ω_α (σ(t)), which is called the width of the functional E^ω_α on W^1,2α(^2, N). It is important to note that for every admissible sweepout σ, the boundary ∂ I^k-2 of the cube is mapped to constant maps. This implies that 0≤𝒲_α, ω < ∞, so the width is a well-defined real number depending on the choice of α > 1 and ω. 
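For instance, the finiteness of the width can be checked directly (a routine observation recorded here for convenience, with the notation above; σ_0 denotes any fixed element of 𝒮): the map t ↦ E^ω_α(σ_0(t)) is continuous on the compact cube I^k-2, and therefore 𝒲_α,ω≤sup_t ∈ I^k-2 E^ω_α(σ_0(t)) < ∞. 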
5pt §.§ Lg-Energy Estimates for Min-Max Type Critical Points 5pt In this subsection, we apply the idea of Struwe's monotonicity technique <cit.> to obtain uniformly α_j-energy estimates for some specific subsequence of critical points {u_α_j}_j∈ℕ of the perturbed functional E^λω_α_j for generic choice of λ > 0, see (<ref>) below for the definition of E^λω_α. This is essential in the process of obtaining the existence of H-sphere by letting α_j ↘ 1 for suitable choice of sequences α_j ↘ 1. In order to apply the monotonicity technique, we need to bring in a scalar parameter in our perturbed functional as follows E^λω_α(u) : = E_α(u) + λ∫_^2 u^*ω for given ω∈ C^2(∧^2(N)) and positive number λ > 0. And we use abbreviated notation 𝒲_α,λ to denote the min-max value 𝒲_α,λω constructed in previous Section <ref> and u_α is denoted to be the critical points of E^λω_α. Equipped with these notations for functional E^λω_α and corresponding min-max value 𝒲_α,λ we have Viewing 𝒲_α,λ as a two variable function of α∈ (1,∞) and λ∈ (0,∞), the following properties hold * For each α > 1, the function λ↦𝒲_α,λ/λ is non-increasing; * For each λ > 0, the function α↦𝒲_α,λ is non-decreasing; Moreover, given any sequences α_j ↘ 1, then, for almost every λ > 0, there exists a subsequence of {α_j}_j ∈ℕ, also denoted by α_j, and a constant C > 0 which is independent of j, such that 0 ≤d/dλ- 𝒲_α_j,λ/λ≤ C, for all j ∈ℕ. We first consider part (<ref>), for any u ∈ W^1,2α(^2,N) and 0 < λ_1 < λ_2, by the expression (<ref>) of functional E^λω_α we have E_α^λ_1ω(u)/λ_1 - E^λ_2ω_α(u)/λ_2 = λ_2 - λ_1/λ_1·λ_2E_α(u)≥ 0. By the construction of min-max value 𝒲_α,λ, for any ε > 0, there exists σ∈𝒮 such that 𝒲_α,λ_1≤max_t ∈ I^k-2 E^λ_1ω_α(σ(t)) ≤𝒲_α,λ_1 + ε. Thus, using the monotone formula (<ref>) we can estimate 𝒲_α,λ_2/λ_2≤max_t ∈ I^k-2E^λ_2ω_α(σ(t))/λ_2≤max_t ∈ I^k-2E^λ_1ω_α(σ(t))/λ_1≤𝒲_α,λ_1/λ_1 + ε/λ_1. Since the choice of ε > 0 is arbitrary, we can obtain the conclusion of (<ref>). For part (<ref>), it is worth noting that the function (1 + |x|^2)^α is increasing with respect to α > 1, hence we have E_α_2^λω(u) - E_α_1^λω(u) = ∫_^21 + |∇ u|^2^α_2 - 1 + |∇ u|^2^α_1 dV_g ≥ 0 for any u∈ W^1, 2α(^2, N) and 1 < α_1 < α_2. The remaining argument is essentially identical to the process outlined in part (<ref>) and therefore leads to the conclusion as stated in (<ref>). For the last statement, since 𝒲_α_j,λ/λ is a monotone function with respect to λ by part (<ref>), the derivative d/d λ- 𝒲_α_j,λ/λ exists for almost every λ∈ (0,∞) and is non-negative. Furthermore, given any 0 < λ_1 < λ_2 < ∞, we have ∫_λ_1^λ_2d/dλ- 𝒲_α_j,λ/λ dλ≤𝒲_α_j, λ_1/λ_1 - 𝒲_α_j,λ_2/λ_2 < ∞ Then, by Fatou's Lemma, we have ∫_λ_1^λ_2lim inf_j→∞d/dλ- 𝒲_α_j,λ/λdλ ≤lim inf_j→∞∫_λ_1^λ_2d/dλ- 𝒲_α_j,λ/λ dλ ≤lim inf_j→∞𝒲_α_j, λ_1/λ_1 - 𝒲_α_j,λ_2/λ_2 ≤𝒲_2, λ_1/λ_1 - 𝒲_1,λ_2/λ_2 < ∞, where in the last inequality we used the monotonicity of 𝒲_α,λ with respect to α obtained in part (<ref>). It can be inferred that for almost every λ within the range of [λ _1, λ _2] there holds lim inf_j→∞d/dλ- 𝒲_α_j,λ/λ < ∞. Therefore, the last assertion of Lemma <ref> follows by the arbitrary choices of λ_1 and λ_2. Next, in Lemma <ref> below, we show that based on the conclusion of Lemma <ref> for certain choice t_0 ∈ I^k-2 there exists a α-energy control for some sweepouts valued at t_0, with upper bounds depending on the constant C obtained in last assertion of Lemma <ref>. Let λ > 0 and α_0 >α > 1 for some small enough α_0 - 1. 
And assume there exists a constant C > 0, independent of α↘ 1, such that 0 ≤d/dλ- 𝒲_α,λ/λ≤ C. Then, there exists a sequence of sweepouts σ_j:I^k-2→ W^1,2α(^2,N) such that max_t ∈ I^k-2 E_α^λω(σ_j(t)) ≤𝒲_α,λ + λε_j, and E_α(σ_j(t_0)) ≤ 8 λ^2 C as long as t_0 ∈ I^k-2 satisfying E_α^λω(σ_j(t_0)) ≥𝒲_α,λ - λε_j, for any sequence ε_j ↘ 0 with 0 < ε_j ≤λ /2. Given any sequence ε_j ↘ 0 with 0 < ε_j ≤λ /2 for each j ∈ℕ, we set λ_j = λ - ε_j/(4C). By the assumption of Lemma <ref>, there exists a large j_0 ∈ℕ such that for all j ≥ j_0 there holds 1/λ - λ_j𝒲_α,λ_j/λ_j - 𝒲_α,λ/λ≤ 2C. This is equivalent to 𝒲_α,λ_j/λ_j≤𝒲_α,λ/λ + ε_j/2, for all j≥ j_0. Next, recalling that the functional E^λω_α has min-max value 𝒲_α,λ, there exists a sequence of sweepouts σ_j ∈𝒮 such that 1/λ_jmax_t ∈ I^k-2 E_α^λ_jω(σ_j(t)) ≤1/λ_j𝒲_α,λ_j + ε_j/2. Thus, combining with the monotone formula (<ref>) we can obtain max_t ∈ I^k-2 E_α^λω(σ_j (t)) ≤λ𝒲_α,λ_j/λ_j + λε_j/2≤𝒲_α,λ + λε_j. For the another side of inequality, we pick t_0 ∈ I^k-2 satisfying 1/λ E_α^λω(σ_j(t_0)) ≥1/λ𝒲_α,λ - ε_j, then, for j ≥ j_0, by subtracting (<ref>) from (<ref>) and utilizing (<ref>), we can obtain 1/λ·λ_j E_α(σ_j(t_0)) = 1/λ - λ_jE_α^λ_jω(σ_j(t_0))/λ_j - E^λω_α(σ_j(t_0))/λ ≤1/λ - λ_j𝒲_α,λ_j/λ_j - 𝒲_α,λ/λ + 3ε_j/2 ≤1/λ - λ_j𝒲_α,λ_j/λ_j - 𝒲_α,λ/λ + 6C ≤ 8 C. Therefore, we have E_α(σ_j(t_0)) ≤ 8 λ^2 C provided t_0 ∈ I^k-2 satisfying (<ref>). Next, we show that the min-max value 𝒲_α,λ is a critical value and construct a critical point {u_α} for the functional E^λω_α with uniformly bounded α-energy, where the α-energy upper bound depends on the value of derivatives d/dλ- 𝒲_α,λ/λ, the choice of λ > 0 in view of Lemma <ref>. Given λ > 0 and α > 1 such that there exists a sequence of sweepouts σ_j ∈𝒮 satisfying max_t ∈ I^k-2 E_α^λω(σ_j(t)) ≤𝒲_α,λ + λε_j, and E_α(σ_j(t_0)) ≤ 8 λ^2 C for any t_0 ∈ I^k-2 admitting E_α^λω(σ_j(t_0)) ≥𝒲_α,λ - λε_j and for any sequence ε_j ↘ 0 with 0 < ε_j ≤λ /2. Then, after passing to a subsequence, there exists t_j ∈ I^k-2 so that the following holds: * E^λω_α(σ_j(t_j)) - 𝒲_α,λ≤λε_j, which further implies E_α^λω(σ_j(t_j)) →𝒲_α,λ as j →∞; * σ_j(t_j) converges strongly in W^1,2α(^2,N) to some u_α with uniformly bounded energy E_α(u_α) ≤ 8 λ^2 C; * The limiting map u_α obtained in part (<ref>) is non-constant. Moreover, there exists a positive constant δ(α,λω) > 0 depending on α > 1, λ > 1 amd ω∈ C^2(∧^2(N)) such that E_α(u_α) ≥1/2Vol(^2) + δ(α,λω). Given a sequence of admissible sweepouts {σ_j}_j ∈ℕ⊂𝒮, we call σ_j is a min-max sequence if lim sup_j →∞max_t ∈ I^k-2 E^λω_α(σ_j(t)) = 𝒲_α,λ. Therefore, combining the Lemma <ref>, Lemma <ref> and Proposition <ref>, we can conclude that, for almost every choice of λ∈_+ given any min-max sequence {σ_j}_j ∈ℕ⊂𝒮, after passing to certain subsequences, there exists a sequence of t_j ∈ I^k-2 such that σ_j(t_j) converges strongly in W^1,2α(^2, N) to a non-constant α-H-surface u_α with E^λω_α(u_α) = 𝒲_α,λ. Moreover, Given any α_j ↘ 1 there exists a subequence of α_j ↘ 1 such that the α_j-energy of u_α_j is uniformly bounded. In the Lemma <ref> below in Section <ref>, we can actually show that when α_0 - 1 is small enough there exists a constant δ(λω) > 0 independent of α∈ (1, α_0) such that E_α(u_α) ≥1/2Vol(^2) + δ(λω) for the non-constant critical point u_α obtained in (<ref>) of Proposition <ref>. We first consider part (<ref>) and define U_j = {t ∈ I^k-2 : E^λω_α(σ_j (t)) > 𝒲_α,λ - λε_j }⊂ I^k-2. 
Since E_α^λω satisfies the Palais-Smale condition, see Lemma <ref>, considering the assumption of Proposition <ref>, it suffices to show that the first variation acting on σ_j(U_j) is not bounded away from zero. More precisely, we claim that: For any ε > 0, there exists j_0 ∈ℕ such that inf_t ∈ U_jδ E_α^λω(σ_j(t)) < ε, for all j ≥ j_0. We prove the Claim <ref> by contradiction, that is, suppose that there exists some δ > 0 and a subsequence of σ_j ∈𝒮, which is also denoted by σ_j, such that δ E_α^λω(σ_j(t))≥δ, for all t ∈ U_j and all j ∈ℕ. The following existence of pseudo-gradient vector field is essential for us and the detailed proof can be founded in <cit.>. There exists a locally Lipschitz continuous map X: V→ TW^1,2α(^2,N)⊂ TW^1,2α(^2,^K), where V = { u ∈ W^1,2α(^2,N) : δ E^λω_α(u)≠ 0}, such all the following holds: * X(u) ∈𝒯_u for each u ∈V; * X(u)_W^1,2α(^2,^K) < 2 minδ E^λω_α(u), 1; * δ E^λω_α(u)(X(u)) < - minE^λω_α(u), 1·E^λω_α(u). Then, we consider the continuous 1-parameter family of homeomorphisms associated with X, denoted by Φ : {(u,s) : u ∈V, 0 ≤ s < T(u) }→ W^1,2α(^2, N) ⊂ W^1,2α(^2,^K), where T(u) is the maximal existence time of the integral curve from u along X. Next, we show that T(u) has a uniformly positive lower bound which is independent of u ∈V, following the outline of the proof presented in <cit.>. For all L > 0 and 0 < δ < 1, there exists T = T(δ,L) > 0 such that if δ E^λω_α(u)≥δ and E_α(u) ≤ L, then the maximal existence time T(u) satisfies T(u) ≥ T(δ, L). In particular, when s ≤ T(δ , L) there holds δ E^λω_α(Φ(u,s))≥δ/2. By part (<ref>) of Lemma <ref> and general ODE theory on Banach manifold, we see that if T(u) < ∞, then lim inf_s ↗ T(u)δ E^λω_α(Φ(u,s)) = 0. Thus, it suffices to obtain a lower bound for E^λω_α(Φ(u,s)) when s ∈ [0,T(u)] in order to obtain a lower bound of T(u). To this end, given s < min{1/2, T(u)}, we use property (<ref>) in Lemma <ref> to see Φ(u,s) - u_W^1,2α(^2,N) ≤∫_0^s X(Φ(u,t))_W^1,2α(^2,^K) dt ≤∫_0^s 2 minδ E^λω_α(Φ(u,t)), 1dt≤ 2s < 1. This further implies Φ(u,s)_W^1,2α(^2,N)≤ C_N(L + 1), whenever s < min{1/2, T(u)}, for some constant depending only on geometries of N. By the estimates ||u||_W^1,2α(^2,N)≤ C_N E_α(u) ≤ C_N L and ||Φ(u,s)||_W^1,2α(^2,N)≤ C_N(L + 1), we can apply part (<ref>) of Proposition <ref> to get δ E^λω_α(Φ(u,s)) - δ E^λω_α(u)≤ C_LΦ(u,s) - u_W^1,2α(^2,N)≤ 2C_L s. Utilizing this inequality, we can get δ E^λω_α(Φ(u,s)) ≥δ/2, whenever s ≤min{T(u), δ/4(C_L + 1)} which implies that T(u) ≥δ/4(C_L + 1) : = T(δ , L) and the second conclusion is also followed. Then we come back to the proof of Claim <ref>. Recalling that we assumed by contradiction that δ E^λω_α(σ_j(t))≥δ, for all t ∈ U_j, and by the assumption of Proposition <ref>, there exists a universal constant C_λ:= 8λ^2 C > 0 such that E_α(σ_j(t)) ≤ C_λ, when t∈ U_j. So, we can apply Lemma <ref> to obtain a lower bound of T(σ_j(t)) ≥ T(δ ,C_λ) for all t ∈ U_j and δ E^λω_α(Φ(σ_j(t), s))≥δ/2 for all (t,s) ∈ U_j × [0,T(δ,C_λ)]. In order to construct a new sweepout from σ_j and Φ, we define a compact subset V_j of U_j as follows V_j = {t ∈ I^k-2 : E^λω_α(σ_j(t)) ≥𝒲_α,λ - λε_j/2}. By the continuity of t ↦ E^λω_α(σ_j(t)), V_j is a compact subset of U_j. So, there exists a smooth cut-off function φ_j : I^k-2→ such that φ_j ≡ 1 on V_j and vanishes outside of U_j. Then, we set Φ_j(t,s) := Φ(σ_j(t), φ_j(t) T(δ, C_λ)s) for (t,s) ∈ I^k-2× [0,1]. We observe that, when t ∈∂ I^k-2 and j is large enough, φ_j(t) = 0 and Φ_j(t,s) = σ_j(t) is a constant map for all s ∈ [0, 1]. 
Hence, if we let σ_j(t) = Φ_j(t,1), σ_j ∈𝒮 is an admissible sweepout. Then differentiating E^λω_α(Φ_j(t,s)) with respect to s at (t_0,s_0)∈ I^k-2×[0,1] yields that .d/ds|_s = s_0 E^λω_αΦ_j(t_0,s) = φ_j(t_0)T(δ,C_λ) δ E^λω_αΦ_j(t_0,s_0)XΦ_j(t_0,s_0). Then, we integrate the above identity with respect to s from 0 to 1 by changing variables to get E^λω_α(σ_j(t)) = E^λω_ασ_j(t) + ∫_0^φ_j(t) T(δ,C_λ)δ E^λω_αΦ_j(x,s) XΦ_j(x,s) ds. Next, combining the identity (<ref>) with estimate (<ref>) and part (<ref>) of Lemma <ref>, we can get E^λω_α(σ_j(t)) < E^λω_α(σ_j(t)) - δ^2/4 T(δ,C_λ) < 𝒲_α,λ + λε_j - δ^2/4 T(δ,C_λ) for all t ∈ V_j. Thus, when t ∈ V_j and j is large enough, we have max_t ∈ I^k-2 E^λω_α(σ_j(t)) ≤𝒲_α,λ - δ^2 T(δ,C_λ)/8 < 𝒲_α,λ, which is a contradiction to the definition of min-max value 𝒲_α,λ. Therefore Claim <ref> holds. Let us now return to the proof of the Proposition <ref>. Consequently, by Claim <ref>, there exists a subsequence of σ_j(t_j) for t_j ∈ U_j, which are still denoted by same symbols, such that E_α(σ_j(t_j)) ≤ C_λ and δ E^λω_α(σ_j(t_j)) → 0 as j →∞. In view of Lemma <ref>, after passing to a subsequence, σ_j(t_j) converges strongly in W^1,2α(^2, N) to some u_α satisfying δ E^λω_α(u_α) = 0 and E_α(u_α) ≤ C_λ. The conclusion of part (<ref>) and (<ref>) of Proposition <ref> follows directly. For part (<ref>) of Proposition <ref>, we first note that the α-energy for critical points u_α is strictly larger than 1/2Vol(^2), that is, there exists a δ(α, λω) > 0 such that E_α(u_α) ≥1/2Vol(^2) + δ(α, λω). Otherwise, assume for any ε > 0 there exists a sweepout σ_ε∈𝒮 such that max_t ∈ I^k-2 E_α(σ_ε(t)) < 1/2Vol(^2) + ε. Then, the map f_σ_ε: ^k→ N, induced by σ_ε, is homotopy to some constant map, by directly applying Poincaré's inequality and Sobolev Embedding W^1,2α(^k,N) ↪ C^0(^k,N), which contradicts to the choice of σ_j ∈ [ι]≠ 0. Thus, u_α is a non-constant critical point for E^λω_α and the proof of Proposition <ref> is now complete. §.§ Morse Index Upper Bound for Min-Max Critical Points Lg 5pt In this subsection, we are devoted to construct a sequence of non-constant critical points {u_α_j}_j ∈ℕ of E^λω_α_j for α_j ↘ 1 as j →∞ that admits an uniformly α_j-energy upper bound together with a Morse index upper bound: Ind_E^λω_α_j(u_α_j) ≤ k-2. The main obstruction in constructing the critical points u_α_j with desired Morse index upper bound is the dependence of α_j-energy upper bound obtained in Proposition <ref> with the choices of sequence {u_α_j}_j ∈ℕ and λ∈_+. Such dependence prevents us to apply Morse theory to obtain the Morse index upper bound estimates by perturbing the functional E^λω_α_j further to a Morse one. To overcome this obstacle, inspired by the Morse index upper bound estimates in the setting of Almgren-Pitts min-max theory by Marques-Neves <cit.>, Song <cit.>, and Li <cit.>, see also Cheng-Zhou <cit.> and Cheng <cit.> for the setting of a newly devised min–max theory, we design a homotopical deformation for the min-max sequences of sweepouts σ_l: I^k-2→ W^1,2α(^2, N) obtained in Proposition <ref> to construct a sequence {u_α_j}_j ∈ℕ with the desired Morse index upper bound and α_j-energy bound simultaneously, for more details see Theorem <ref> and Theorem <ref> below. Note that the main result—Theorem <ref> in this subsection holds for all choice of λ∈_+ and any α > 1 in the definition of functional E^ω_α, so we simply write α for α_j when j ∈ℕ fixed, ω for λω and E^ω_α for E^λω_α. 
Before penetrating into the detailed description of homotopical deformation Theorem <ref> and Theorem <ref>, we prepare some essential notions and estimates. To begin, recall that the second variation formula of E^ω_α:W^1,2α(^2, N) → is written as following, for more details see Lemma <ref>, δ^2 E^ω_α(u)(V,V) = α∫_^21 + ∇ u^2^α - 1( ⟨∇ V, ∇ V ⟩ - R(V,∇ u, V, ∇ u) ) d V_g + 2α(α - 1)∫_^21 + ∇ u^2^α - 2⟨∇ u, ∇ V⟩^2 dV_g + 2 ∫_^2 H( u, ∇ V), V dV_g + ∫_^2 (∇_V H)( u, ∇ u), V dV_g, for V ∈𝒯_u. For any ω∈ C^2(∧^2(N)), the Morse index of a critical point u ∈ W^1,2α(^2,N) for E^ω_α is the maximal dimension of linear subspace of 𝒯_u on which δ^2 E^ω_α(u) restricted to be a negative definite symmetric bilinear form. By Lemma <ref>, every critical point u ∈ W^1,2α(^2, N) of E^ω_α is smooth for small enough α > 1. Then, we can extend δ^2 E^ω_α(u): 𝒯_u ×𝒯_u → to a bounded symmetric bilinear form on the Hilbert space 𝒯_u := V ∈ W^1,2(^2, ^K) : V(x) ∈ T_u(x)N for a.e. x ∈^2. The Morse index of critical point u of E^ω_α on 𝒯_u is defined exactly the same manner with Definition <ref>. At each 𝒯_u, it follows from the Riesz representation theorem that there exists a bounded linear operator Ł_u such that δ^2 E^ω_α(u)(V,W) = ⟨Ł_u(V),W⟩_𝒯_u for V, W ∈𝒯_u, which is called the Jacobi operator of E^ω_α at u. Here, the inner product ·, ·_𝒯_u is induced from inclusion 𝒯_u ⊂ W^1,2(M, ^K). Furthermore, in the Lemma <ref> below, we demonstrate that the Morse index defined on 𝒯_u is equivalent to the one extended on 𝒯_u and we establish a spectral decomposition on 𝒯_u using the standard uniformly elliptic operator theory. Given a critical point u ∈ W^1,2α(^2, N) of E^ω_α, when α - 1 is small enough, the following properties holds: * The Jacobi operator Ł_u is a self-adjoint second order elliptic differential operator, hence a Fredholm operator on 𝒯_u. * There exists a sequence of real eigenvalues λ_j ↗∞ of Ł_u and a sequence of corresponding eigenfunctions {ϕ_j}_j ∈ℕ which forms a basis of 𝒯_u such that δ E^ω_α(u)(ϕ_i ,ϕ_j) = Ł_u(ϕ_i), ϕ_j_𝒯_u = λ_i ϕ_i ,ϕ_j_L^2 = λ_iδ_ij. * The Morse index defined on 𝒯_u is finite and is identical with the standard one defined as in Definition <ref>. It suffices to show <ref>, <ref> follows directly from the Sobolev compact embedding 𝒯_u ↪V ∈ L^2(^2, ^K) : V(x) ∈ T_u(x)N for a.e. x ∈^2, and the application of standard compact operator theory to Ł_u. And <ref> follows from the observation that each eigenfunction ϕ_j actually is smooth by applying the elliptic bootstrapping to the eigenequations of ϕ_j. Note that the self-adjointness of Ł_u follows from the symmetry of bilinear form δ^2 E^ω_α, and that by the smothness of critical point u, when α - 1 is small enough, Ł_u is uniformly elliptic. To verify Ł_u is a Fredholm operator on 𝒯_u, it is enough to establish the following Garding-type inequality: δ^2 E^ω_α(u)(V,V) ≥ C_1(u, H, N)∫_M |∇ V|^2 d V_g - C_2(u,H,N) ∫_M |V|^2 d V_g for some constants C_1(u, H, N), C_2(u, H, N) depending on u, mean curvature type vector field H and geometries of N. In fact, since the integrand in second line of (<ref>) is positive, it is not difficult to see δ^2 E^ω_α(u)(V,V) ≥∫_M ∇ V^2 - C(u,N) V^2 d V_g - C(u,H) ∫_M ∇ V·V dV_g - C(u,H)∫_M |V|^2 dV_g. Then, applying the Cauchy-Schwartz inequality with ε to the first integrand in the second line of (<ref>) will yield (<ref>), hence completes the proof of Lemma <ref>. 
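For completeness, the ε-Cauchy-Schwarz step alluded to above can be spelled out as follows; this is a routine computation and the constants below are not optimized. For any ε > 0 one has C(u,H)∫_M ∇ V·V dV_g ≤ε∫_M ∇ V^2 dV_g + C(u,H)^2/4ε∫_M V^2 dV_g, and inserting this with ε = 1/2 into the previous displayed inequality gives δ^2 E^ω_α(u)(V,V) ≥1/2∫_M ∇ V^2 dV_g - (C(u,N) + C(u,H) + 1/2C(u,H)^2)∫_M V^2 dV_g, which is the claimed Garding-type inequality with C_1(u,H,N) = 1/2 and C_2(u,H,N) = C(u,N) + C(u,H) + 1/2C(u,H)^2. 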
Utilizing the conclusions <ref> and <ref> of Lemma <ref>, the Morse index of critical point u of δ^2 E^ω_α can also be defined as The Ind_E^ω_α(u) equals to the number of negative eigenvalues of Ł_u on 𝒯_u counted with multiplicity. Furthermore, by the spectral decomposition <ref> of Lemma <ref> and regularity of ϕ_j, we have 𝒯_u = 𝒯_u^- ⊕𝒯_u^+ where 𝒯_u^- is the direct sum of negative eigenspaces of Ł_u with (𝒯_u^-) = Ind_E^ω_α(u) and 𝒯_u^+ is the L^2-orthogonal complement of 𝒯_u^- in 𝒯_u. Based on this decomposition, we write V = V^- + V^+ for each V ∈𝒯_u. For α > 1, W^1,2α(^2,N) is a Banach manifold. Then, for each u ∈ W^1,2α(^2,N), taking a small enough ball ℬ_u(0,r_u) = V ∈𝒯_u : V_𝒯_u < r_u⊂𝒯_u center at the origin of 𝒯_u such that .δ^2 E^ω_α(Φ_u(w))|_𝒯_u^-: 𝒯_w^- ×𝒯_w^- ≅𝒯_u^- ×𝒯_u^- → is negatively definite for all w ∈ℬ_u(0,r_u), we define the coordinate map as below Φ_u : ℬ_u(0,r_u) → W^1,2α(^2,N) by [Φ_u(V)](x) = exp_u(x)(V(x)), for x ∈^2. Here, the norm ·_𝒯_u is induced from inclusion 𝒯_u ⊂ W^1,2α(^2, ^K). The collection ℬ_u(0,r_u), Φ_u_u ∈ W^1,2α(^2,N) consists of a smooth structure for W^1,2α(^2,N). Note that in the sequel, we use ℬ^-_u(0,r_u) and ℬ^+_u(0,r_u) to represent the balls in 𝒯_u^- and 𝒯_u^+, respectively. In Lemma <ref> below, around each critical point u of E^ω_α we establish some local estimates of E^ω_α∘Φ_u on ℬ_u(0,r_0(u)) for small enough 0 < r_0(u) < r_u /3. With the same notations as above, given ω∈ C^2(∧^2 (N)) and α > 1, for each critical point u of E^ω_α there exists 0 < r_0 := r_0(u) < r_u/3 such that the following holds: * There exists a constant 0 < κ := κ(u) < 1 and a constant C := C(u) > 0 such that for all V ∈𝒯_u with V ∈ℬ_u(0,r_0) and V^+_𝒯_u≤κV^-_𝒯_u, we have E^ω_α(Φ_u(V)) - E^ω_α(Φ_u(0)) ≤ - C V^-_𝒯_u^2. * There exists a constant C := C(u) > 0 such that for all V, W ∈𝒯_u with V ∈ℬ_u(0,r_0) W^-_𝒯_u = 1 and δ E^ω_α(Φ_u(V))(W^-) ≤ 0, we have E^ω_α(Φ_u(V + r W^-)) - E^ω_α(Φ_u(V)) ≤ - C r^2, for 0 ≤ r ≤ r_0. Since δ^2 E^ω_α(Φ_u(0)) = δ^2 E^ω_α(u): 𝒯_u ×𝒯_u → is a bounded bilinear form, there exists C_1(u) := δ^2 E^ω_α(u) > 0 such that δ^2 E^ω_α(Φ_u(0))(V,V)≤ C_1(u) V_𝒯_u^2, for all V ∈𝒯_u. In particular, observe that (𝒯_u^-) = Ind_E^ω_α(u) < ∞, the induced norms ·_𝒯_u and ·_𝒯_u restricted on 𝒯_u^- are equivalent. There exists a constant C_2(u) > 0 such that δ^2 E^ω_α(Φ_u(0))(V^-,V^-) ≤ -C_2(u) V^-^2_𝒯_u for all V^- ∈𝒯_u^-. By the continuity of δ^2 E^ω_α∘Φ_u on ℬ_u(0,r_u), we can choose small enough 0 < r_0 < r_u/3 such that for any W ∈ℬ_u(0,2r_0) and any V ∈𝒯_u there holds δ^2 E^ω_α(Φ_u(W))(V,V) - δ^2 E^ω_α(Φ_u(0))(V,V)≤κ^2 C_1(u)/4V^2_𝒯_u, where κ > 0 is a constant that will be determined later. Therefore, for V ∈ℬ_u(0,r_0) with V^+_𝒯_u≤κV^-_𝒯_u, utilizing the Taylor formula with integral remainder at critical point u and keeping in mind (<ref>) and (<ref>) we obtain E^ω_α(Φ_u(V)) - E^ω_α(Φ_u(0)) = 1/2δ^2 E^ω_α(Φ_u(0))(V^-,V^-) + δ^2 E^ω_α(Φ_u(0))(V^+,V^+) + ∫_0^1 (1-s) δ^2 E^ω_α(Φ_u(s V))(V,V) - δ^2 E^ω_α(Φ_u(0))(V,V) ds ≤ - C_2(u)/ 2V^-^2_𝒯_u + C_1(u)/2V^+^2_𝒯_u + κ^2 C_1(u)/8V^2_𝒯_u ≤ - 1/2C_2(u) - κ^2 C_1(u) - κ^2 C_1(u)V^-_𝒯_u^2, where in the first equality we used δ^2 E^ω_α(Φ_u(0))(𝒯_u^-, 𝒯_u^+) = 0 in viewing of part <ref> in Lemma <ref> and we used the inequality V_𝒯_u≤ 2 V^-_𝒯_u in the last inequality which comes from our assumption V^+_𝒯_u≤κV^-_𝒯_u for 0 < κ < 1. Then, letting κ > 0 small enough such that κ^2 < minC_2(u)/4 C_1(u),1 will yield the conclusion of <ref> by taking C(u) := 1/4 C_2(u). 
Next, in order to prove <ref> we apply (<ref>) and the Taylor formula with integral remainder for E^ω_α∘Φ_u at u acting on V ∈ℬ_u(0,r_0) and V + t W^- ∈ℬ_u(0, 2r_0) satisfying W^-_𝒯_u = 1 and δ E^ω_α(Φ_u(V))(W^-) ≤ 0, to obtain E^ω_α(Φ_u(V + t W^-)) - E^ω_α(Φ_u(V)) = t δ E^ω_α(Φ_u(V))(W^-) + t^2/2δ^2 E^ω_α(Φ_u(V))(W^-,W^-) +∫_0^t (t-s) (δ^2 E^ω_α(Φ_u(V+ s W^-))(W^-, W^-) - δ^2 E^ω_α(Φ_u(0))(W^-, W^-)) ds ≤ - t^2/2C_2(u) - κ^2 C_1(u)/8W^-_𝒯_u^2 ≤ -31 C_2(u)/64 t^2. This leads to the conclusion of <ref> in Lemma <ref> by letting C(u) := 31 C_2(u)/64. In order to describe the main result in this subsection more precisely, given λ >0 and α > 1, writing ω as λω we define 𝒜_ε, C to be the collection of admissible sweepouts σ∈𝒮 satisfying max_t ∈ I^k-2 E^ω_α(σ(t)) ≤𝒲_α,λ + ε and E_α(σ(t))≤ C as long as t ∈ I^k-2 satisfying E_α^ω(σ(t)) ≥𝒲_α,λ - ε. Note that 𝒜_ε, C is exactly the set of sweepouts that fulfills the assumptions of Proposition <ref> by replacing constant C > 0 by 8λ^2 C. Moreover, for C > 0 we define 𝒰_C ⊂ W^1,2α(^2, N) to be the set of critical points u ∈ W^1,2α(^2, N) for functional E^ω_α satisfying δ E^ω_α(u) = 0 with E_α(u) ≤ C and E_α^ω(u) = 𝒲_α,λ. Since E^ω_α satisfies the Palais-Smale condition, see Lemma <ref>, it is not difficult to see that 𝒰_C ⊂ W^1,2α(^2, N) is a compact set. Equipped with these notations at our disposal, we are prepared to present the main result in this subsection: Given ω∈ C^2(∧^2 (N)), λ > 0 writing ω as λω, α > 1 and C > 0, let 𝒰_0 be a closed subset of 𝒰_C+1. If Ind_ E^ω_α(u) ≥ k-1 for all u ∈𝒰_0, then for each sequence of sweepouts {σ_j}_j ∈ℕ⊂𝒜_ε_j, C with ε_j ↘ 0 there exists another sequence of sweepouts {σ_j}_j ∈ℕ⊂𝒮 such that * σ_j ∈𝒜_ε_j, C + 1 when j is large enough. * For any u ∈𝒰_0, there exists j_0(u) ∈ℕ, ε_0(u) > 0 such that for all j ≥ j_0(u), inf_t ∈ I^k-2, j ≥ j_0σ_j(t) - u_W^1,2α(^2, N) : E^ω_α(σ_j(t)) ≥𝒲_α,λ- ε_j≥ε_0(0). Firstly, we observe that there exists δ_0 > 0 such that if σ∈𝒮 is an admissible sweepout then max_t ∈ I^k-2 E_α(σ(t)) ≥1/2Vol(^2) + δ_0. In fact, suppose that for any δ > 0 there exists a σ∈𝒮 such that max_t ∈ I^k-2 E_α(σ(t)) < 1/2Vol(^2) + δ. By Poincaré's inequality, we see that there exists a C(α) > 0 such that the oscillation of σ(t) ∈ W^1,2α(^2, N) satisfies Osc_^2(σ(t)) ≤ C(α) E_α(σ(t)) - 1/2Vol(^2)^1/2α≤ C(α) δ^1/2α, for all t ∈ I^k-2, which means σ:I^k-2→ W^1,2α(^2, N) can be deformed onto a map that assigns each points in (k-2)-dimensional complex cube I^k-2 to constant maps, hence the induced map f_σ: ^k → N is null-homotopic, contradicting to the definition of 𝒮. Equipped with this observation, the construction of new sweepouts σ_j ∈𝒮 stated in Theorem <ref> splits into four steps. We construct a finite subset u_i_i = 1^m ⊂𝒰_0 and their associated open balls B^1,2α(u_i, r_1(u_i)) ⊂ W^1,2α(^2, N) for 1≤ i ≤ m satisfying the following: * 𝒰_0 ⊂⋃_i = 1^m B^1,2α(u_i, r_1(u_i))⊂⋃_i = 1^m𝒟_u_i(1) where 𝒟_u(1) is defined in (<ref>); * E^ω_α(v) ≤ E^ω_α(u) - (r_0(u))^2/4C(u) for any v ∈∂^-𝒟_u(2) where r_0(u) and C(u) are obtained in <ref> of Lemma <ref> and ∂^-𝒟_u(2) is defined in (<ref>). * For each 1≤ i ≤ m and any v, w ∈ B^1,2α(u_i, 2r_1(u_i)), there holds |E_α(u) - E_α(v)| ≤δ_0/4 where δ_0 >0 is obtained in (<ref>). Here, B^1,2α(u_i,r_1(u_i)) := v ∈ W^1,2α(^2, N) : v-u_i_W^1,2α(^2, N) < r_1(u_i) are balls defined with respect to the topology induced by the Finsler structure of W^1,2α(^2, N). 
For each u ∈𝒰_0, by Lemma <ref> we can find constants 0 < r_0(u) < r_u/3, 0 < κ(u) < 1 and C(u) > 0 such that the conclusions of <ref> and <ref> in Lemma <ref> can be applied in the neighborhood Φ_u(ℬ_u(0,r_u)) of u. By the continuity of E_α:W^1,2α(^2, N) →, after shrinking r_0(u) if necessary, we can further assume that |E_α(v) - E_α(w)| ≤δ_0/4 for any v, w ∈Φ_u(ℬ_u(0,r_0(0))). Then, we define 𝒟_u(ρ):= Φ_uV ∈𝒯_u : V^-_𝒯_u≤r_0(u)/4ρ, V^+_𝒯_u≤κ(u) r_0(u)/4ρ, and ∂^-𝒟_u(ρ) := Φ_uV ∈𝒯_u : V^-_𝒯_u = r_0(u)/4ρ, V^+_𝒯_u≤κ(u) r_0(u)/4ρ for ρ∈ [1,4]. Thus, by the estimates obtained in <ref> of Lemma <ref> we see that E^ω_α(v) ≤ E^ω_α(u) - r_0(u)^2/4C(u) for any v ∈∂^-𝒟_u(2) Then, for each u ∈𝒰_0 we can choose small enough 0 < r_1(u) < r_0(u) such that B^1,2α(u, 2r_1(u)) ⊂𝒟_u(1). Since the collection B^1,2α(u, r_1(u))_u ∈𝒰_0 consists of an open covering of compact set 𝒰_0, there exists a finite subcovering B^1,2α(u,r_1(u))_i =1^m satisfying all the assertions of Step <ref> by the choice of r_1(u). In the following, for the notation simplicity, we write r_i := r_1(u_i) and b_i := r_0(u_i)^2/4 C(u_i), and denote r := min_1 ≤ i ≤ m r_i, b := min_1≤ i≤ m b_i, C:= min_1 ≤ i ≤ m C(u_i). We construct a constant η > 0 such that the following properties hold * 𝒩_η^1,2α := ⋃_u ∈𝒰_0 B^1,2α(u,η) ⊂⋃_i = 1^m B^1,2α(u_i,r_i). * For any u ∈𝒰_0 and any v ∈ B(u,η), we have |E^ω_α(v) - 𝒲_α,λ| ≤1/4b. * For any p ∈0,1,⋯,k-2, any ϑ∈ (0,b/4), any 1≤ i ≤ m and any continuous map ς : I^p →𝒩_η^1,2α∩ B^1,2α(u_i, r_i), there exists a continuous homotopy H_p,i^ς,ϑ:I^p × [0,1] →𝒟_u_i(3) such that the following holds * H_p,i^ς,ϑ(τ,0) = ς (τ) for τ∈ I^p. * E^ω_αH_p,i^ς,ϑ(τ,t) - E^ω_ας(τ)≤ϑ, for all τ∈ I^p and t ∈ [0,1]. * H_p,i^ς,ϑ(τ,t) - ς(τ)_W^1,2α(^2,N)≤ϑ, for all τ∈ I^p and t ∈ [0,1/2]. * H_p,i^ς,ϑ(τ,1) ∉𝒩_η^1,2α, for all τ∈ I^p. Given η > 0 we define 𝒩_η^1,2α : = ⋃_u ∈𝒰_0 B^1,2α(u,η) and by the compactness of 𝒰_0 we can choose small enough η > 0 such that 𝒩_η^1,2α⊂⋃_i = 1^m B^1,2α(u_i, r_i) and the second assertion <ref> of Step <ref> can also be satisfied by the continuity of w ↦ |E^ω_α(w) - 𝒲_α,λ|. For the part <ref> of Step <ref>, firstly, we are devoted to construct a continuous homotopy Ĥ^ς,ϑ_p,i inductively on the l-dimensional skeleton I^p_(l) of I^p for 0 ≤ l ≤ p such that * Ĥ^ς,ϑ_p,i(τ,0) = ς(τ), for τ∈ I^p. * For t ∈ [0,1] and τ∈ I^p, Ĥ^ς,ϑ_p,i(τ,t) - ς(τ)_W^1,2α(^2,N)≤ϑ. * There exists some δ(ϑ,ς,p,i) > 0 satisfying inf_τ∈ I^psup_V^- ∈𝒯_u_i^- with V^-_𝒯_u_i = 1δ E^ω_αΦ_u_i^-1Ĥ^ς,ϑ_p,i(τ,1)(V) ≥δ(ϑ,ς,p,i) > 0. If . δ E^ω_αΦ_u_i^-1(ς(τ))|_𝒯^-_u_i≢0 for each τ∈ I^p, then by the continuity of ς on compact complex cube I^p we can simply choose Ĥ^ς,ϑ_p,i≡ς for t ∈ [0,1] being the constant homotopy and let δ(ϑ,ς,p,i) to be inf_τ∈ I^psup_V^- ∈𝒯_u_i^- with V^-_𝒯_u_i = 1δ E^ω_αΦ_u_i^-1(ς(τ))(V) > 0. Thus, we assume that .δ E^ω_αΦ_u_i^-1(ς(τ))|_𝒯^-_u_i≡ 0 for some τ∈ I^p. Observe that δ E^ω_α(Φ_u_i(0)) = 0 and δ^2 E^ω_α(Φ_u_i(w))|_𝒯_u^- is negatively definite for all w ∈ℬ_u_i(0,r_u_i) by the choice of r_u_i > 0, which implies that δ E^ω_α(Φ_u_i(V^-)) ≢0 for V^- ∈ℬ_u_i^-(0, r_i)\0. Since ℬ^-_u_i(0,r_i) is a convex set of dimension at least k-1 by the assumption of Theorem <ref>, it suffices to define Ĥ^ς,ϑ_p,i(τ,1) through the homeomorphism Φ_u_i such that Φ_u_i^-1Ĥ^ς,ϑ_p,i(τ,1) - Φ_u_i^-1ς(τ)_𝒯_u_i≤ϑ^', for all τ∈ I^p and Φ_u_i^-1Ĥ^ς,ϑ_p,i(τ,1)^- ≠ 0, for all τ∈ I^p. Here, by the continuity of Φ_u_i, ϑ^' > 0 is chosen to satisfy the property <ref> in Step <ref> of Ĥ^ς,ϑ_p,i as long as (<ref>) holds. 
Then, for each τ∈ I^p_(0) which is a finite discrete set, we can choose (Φ_u_i^-1(Ĥ^ς,ϑ_p,i(τ,1)))^- ≠ 0 satisfying (<ref>). Suppose we have defined the homotopy Ĥ^ς,ϑ_p,i(τ,1) on I^p_(l) for some 0 ≤ l≤ p-1 such that (<ref>) and (<ref>) are satisfied. Let 𝒪(Φ_u_i^-1ς(τ), ϑ^') be the ϑ^'-neighborhood of Φ_u_i^-1(ς(I^p_(l))), that is, 𝒪 Φ_u_i^-1ςI^p_(l), ϑ^' := w ∈𝒯_u_i : min_τ∈ I^p_(l)w - Φ_u_i^-1ς(τ)_𝒯_u_i< ϑ^' which is also equal to ⋃_τ∈ I^qℬ_u_iΦ_u_i^-1ς(τ),ϑ^'. And we use 𝒪^-Φ_u_i^-1ς(τ), ϑ^' to denote the L^2-orthogonal projection of 𝒪(Φ_u_i^-1ς(τ), ϑ^') into 𝒯_u_i^-. Since 𝒯_u_i^-≥ k-1 > p ≥ l+1, we have that π_l𝒪^-Φ_u_i^-1ς(I^p_(l)), ϑ^'\0 = π_l𝒯_u_i^- \0 = 0, there exists a continuous extension of H^ς,ϑ_p,i(τ,1) from I^p_(l) onto I^p_(l+1) such that (<ref>) and (<ref>) are also satisfied. Therefore, we complete the induction construction of the homotopy Ĥ^ς,ϑ_p,i. Now, we construct the desired homotopy H^ς,ϑ_p,i satisfying the properties stated in <ref> of Step <ref>. For each w ∈ℬ_u_i(0,r_i) \0, we pick up a ξ_u_i(w) ∈𝒯_u_i^- with ξ_u_i(w)_𝒯_u_i = 1 satisfying δ E^ω_αΦ_u_i (ξ_u_i(w)) := inf_V^- ∈𝒯_u_i^- with V^-_𝒯_u_i = 1δ E^ω_αΦ_u_i(w^-)(V^-) < 0. Because .δ E^ω_αΦ_u_i|_𝒯_u_i^-: 𝒯_u_i^- → is a linear function defined on a finite dimensional vector space 𝒯_u_i^-, the minimum point ξ_u_i(w) ∈𝒯_u_i^- on unit sphere of 𝒯_u_i^- is unique and well defined. Then, we pick 0 < ϱ_u_i(w) ≤ r_u_i such that w^- + ϱ_u_i(w) ξ_u_i(w)_𝒯_u_i = r_0(u_i)/2 and that w^- + t ξ_u_i(w)_𝒯_u_i≤r_0(u_i)/2 for any 0≤ t ≤ϱ_u_i(w). Thus, for w ∈ℬ^-_u_i(0,r_i) \0, we can define a path γ_w : [0,1]→𝒟_u_i(3) as below γ_w(t) = Φ_u_i w^0 + w^- + tϱ_u_i(w)/2ξ_u_i(w)_𝒯_u_iξ_u_i(w) + w^+ which starts at Φ_u_i(w) and terminates at Φ_u_i w^0 + w^- + ϱ_u_i(w)/2·ξ_u_i(w)/2ξ_u_i(w)_𝒯_u_i + w^+∈∂^- 𝒟_u_i(2) . By the <ref> in Lemma <ref> for E^ω_α∘Φ_u_i and the choice of ξ_u_i(w), we see that E^ω_α(γ_w(1)) - 𝒲_α,λ = E^ω_α(γ_w(1)) - E^ω_α(γ_w(0)) + E^ω_α(γ_w(0)) - 𝒲_α,λ ≤ -r_0(u_i)^2/4 C(u_i) + 1/4b≤ - 3/4b. Therefore, we define H^ς,ϑ_p,i(τ,t) := { Ĥ^ς,ϑ_p,i(τ,2t), for t ∈[0,1/2], γ_Ĥ^ς,ϑ_p,i(τ,1)(2t-1), for t ∈[1/2,1 ]. . By the construction of Ĥ^ς,ϑ_p,i, we see that .δ E^ω_αΦ_u_i^-1Ĥ^ς,ϑ_p,i(τ,1)|_𝒯_u_i^-≢0 for each τ∈ I^p, which means that ξ_u_i(Ĥ^ς,ϑ_p,i(τ,1)) depends continuously on τ∈ I^p. Hence, H^ς,ϑ_p,i(τ,t) is a well-defined continuous homotopy. Next, we show that H^ς,ϑ_p,i satisfies the properties stated in <ref> of Step <ref>. When t ∈ [0,1/2], the <ref>, <ref> and <ref> follow straightforwardly from the construction of Ĥ^ς, ϑ_p,i corresponding to properties <ref> and <ref>. When t ∈ [1/2,1], thanks to <ref> of Lemma <ref> and the definition of H^ς,ϑ_p,i, the <ref> of Step <ref> also holds. At last, by the choice of ϑ≤b/4 and the construction of Ĥ^ς,ϑ_p,i, we see that for all τ∈ I^p E^ω_αH^ς,ϑ_p,i(τ,1) - E^ω_α(ς(τ)) ≤ -3/4b + ϑ≤ -1/2b. Thanks to the choice of η > 0 in <ref> of Step <ref>, we see that H_p,i^ς,ϑ(s,1) ∉𝒩_η^1,2α, for all s ∈ I^k-2. We would like to show that for any u ∈𝒰_0, there exists finite many positive numbers e_p(u)_p = 1^k-1⊂_+ and θ_p(u)_p = 0^k-2⊂_+ such that for any p ∈1,⋯,k-1, any i ∈1,⋯, m with B^1,2α(u,η) ⊂ B^1,2α(u_i, r_i) and any τ∈ I^p with ς(τ) ∉ B^1,2α(u,η/e_p(u)) and E^ω_α(ς(τ)) - 𝒲_α,λ≤θ_p(u), we have H^ς, ϑ_p,i(τ,t) ∉ B^1,2α(u,η/e_p + 1(u)) for all t ∈ [0,1] and ϑ < minη/(4e_p(u)), θ_p(u). Furthermore, viewing e_p(u) and θ_p(u) as functions e_p(u) : 𝒰_0 →_+ and θ_p(u) : 𝒰_0 →_+, e_p(u) has an uniform upper bound e_p on 𝒰_0 and θ_p(u) has a positive lower bound θ_p > 0 on 𝒰_0. 
We construct e_p(u) and θ_p(u) by induction on p. For u ∈𝒰_0, we take e_1(u) = 2, θ_0(u) = 0 and suppose that we have defined e_p(u) and θ_p-1(u) for some p ∈2,⋯,k-1. For any natural number 1 ≤ i ≤ m with B^1,2α(u,η) ⊂ B^1,2α(u_i, r_i), we define d_i,p(u):= dist𝒯_u_i^- ⋂Φ_u_i^-1B^1,2α(u, η/2 e_p(u)),𝒯_u_i^- ⋂Φ_u_i^-1∂ B^1,2α(u, 3η/4 e_p(u)). Note that (𝒯_u_i^-) < ∞ which means that 𝒯_u_i^- ⋂Φ_u_i^-1B^1,2α(u, η/2 e_p(u)) and 𝒯_u_i^- ⋂Φ_u_i^-1∂ B^1,2α(u, 3η/4 e_p(u)) are two disjoint compact set, hence, d_i,p(u): B^1,2α(u_i, r_i - η) →_+ is a positive continuous function. Moreover, we see that d_i,p:= infd_i,p(u) : u ∈𝒰_0 with B^1,2α(u,η) ⊂ B^1,2α(u_i, r_i) > 0. Otherwise, suppose that d_i,p = 0, we can find a sequence {u_j}_j ∈ℕ⊂𝒰_0 such that lim_j →∞ d_i,p(u_j) = 0. Since E^ω_α : W^1,2α(^2,N) → satisfies the Palais-Smale condition, see Lemma <ref>, by the definition of 𝒰_0 after passing to certain subsequence we can assume u_j converges to some u_0 ∈𝒰_0 in W^1,2α(^2,N). This leads to the contradiction 0 < d_i,p(u_0) = lim_j →∞ d_i,p(u_j) = 0. Then we define θ_p(u) := minC(u_i)/4d_i,p^2 : 1 ≤ i ≤ m with B^1,2α(u,η) ⊂ B^1,2α(u_i, r_i) > 0 and take e_p + 1(u) ≥ 2 e_p(u) + 1 to be the smallest number such that for any w ∈B^1,2α(u, η/e_p + 1(u)), there holds E^ω_α(w) ≥𝒲_α,λ - θ_p(u). Note that θ_p(u) ≥θ_p := min_1 ≤ i ≤ mC(u_i)/4d^2_i,p>0, for any u ∈𝒰_0, has a positive uniform lower bound, where C(u_i) > 0 is a constant determined in Step <ref> and Lemma <ref>. Moreover, we claim that There exists e_p + 1 > 0 such that sup_v ∈𝒰_0 e_p + 1(v) ≤e_p + 1. In fact, since η≤ 1/2, it suffices to show there exists a constant c_p > 0 such that for any u ∈𝒰_0 and w ∈B^1,2α(u, η/c_p) there holds E^ω_α(w) ≥𝒲_α,λ - min_1 ≤ i ≤ mC(u_i)/4d^2_i,p. Then, we can conclude that e_p + 1(u) ≤ c_p for any u ∈𝒰_0. By contradiction, suppose that there exists two sequences u_j_j ∈ℕ⊂𝒰_0 and w_j : w_j ∈B^1,2α(v_j, 1/j)_j ∈ℕ such that E^ω_α(w_j) < 𝒲_α,λ - min_1 ≤ i ≤ mC(u_i)/4d^2_i,p. Then similarly to the previous argument, by the compactness of 𝒰_0, after passing to a subsequence, we can conclude that u_j converges strongly in W^1,2α(^2, N) to some u_0 ∈𝒰_0 and w_j converges strongly in W^1,2α(^2, N) to u_0. However, the identity E^ω_α(u_0) = 𝒲_α,λ leads to the contradiction 𝒲_α,λ < 𝒲_α,λ - min_1 ≤ i ≤ mC(u_i)/4d^2_i,p. Therefore, we complete the proof of Claim <ref>. Now, the sequences e_p(u)_p = 1^k-1⊂_+ and θ_p(u)_p = 1^k-2⊂_+ are defined by our induction argument. In the end, we prove the following Claim to finish the proof of Step <ref>. H^ς, ϑ_p,i(τ, t) ∉ B^1,2α(u,η/e_p + 1(u)) for all t ∈ [0,1] provided that ς(τ) ∉ B^1,2α(u, η/e_p(u)), E^ω_α(ς(τ)) - 𝒲_α,λ≤θ_p(u) and ϑ < minη/(4e_p(u)), θ_p(u). For t∈ [0,1/2], since ϑ < minη/(4e_p(u)), θ_p(u) and ς(τ) ∉ B^1,2α(u, η/e_p(u)), consulting the <ref> of item <ref> in Step <ref> we see that H^ς,ϑ_p,i(τ,t) - u_W^1,2α(^2,N) ≥ς(τ) - u_W^1,2α(^2,N) - H^ς,ϑ_p,i(τ,t) - ς(τ)_W^1,2α(^2,N) ≥3η/4 e_p(u)≥η/e_p + 1(u), which implies H^ς, ϑ_p,i(τ, t) ∉ B(u,η/e_p + 1(u)) for all t ∈ [0,1/2] Next, we show that H^ς,ϑ_p,i(τ,t) - u_W^1,2α(^2,N)≥η/e_p + 1(u) when t∈ (1/2,1]. To see this, we note that E^ω_αH^ς,ϑ_p,i(τ,t) = E^ω_α(H^ς,ϑ_p,i(τ,t)) - E^ω_α(ς(τ)) + E^ω_α(ς(τ)) ≤ϑ + 𝒲_α,λ + θ_p(u) ≤𝒲_α,λ + 2θ_p(u) for ϑ < θ_p(u). Next, we proceed the proof by contradiction. Suppose that there exists some t ∈ (1/2,1] such that H^ς,ϑ_p,i(τ,t) ∈B^1,2α(u,η/e_p+1(u)). 
Observe that H^ς,ϑ_p,i(τ,t) ∈𝒟_u_i(3) ⊂Φ_u_i(ℬ_u_i(0,r_u_i)) and by the definitions of e_p+1(u) and d_i,p(u) we have that Φ^-1_u_iH^ς,ϑ_p,i(τ,t)^- - Φ^-1_u_iH^ς,ϑ_p,i(τ,1/2)^-_𝒯_u_i ≥dist𝒯_u_i^- ⋂Φ_u_i^-1B^1,2α(u, η/e_p + 1(u)),𝒯_u_i^- ⋂Φ_u_i^-1∂ B^1,2α(u, 3η/4 e_p(u)) ≥ d_i,p(u) ≥min_1 ≤ i ≤ md_i,p > 0. Next, by the construction of H^ς,ϑ_p,i(τ,t) for t ≥ 1/2 in Step <ref>, we see that E^ω_αH^ς,ϑ_p,i(τ,t) - E^ω_αH^ς,ϑ_p,i(τ,1/2) = E^ω_αγ_H^ς,ϑ_p,i(τ,1/2)(2t-1) - E^ω_αH^ς,ϑ_p,i(τ,1/2) ≤ E^ω_αH^ς,ϑ_p,i(τ,1/2) - C(u_i)min_1 ≤ i ≤ md_i,p^2 ≤𝒲_α,λ + 2θ_p(u) - 4 θ_p(u) = 𝒲_α,λ - 2 θ_p(u), which contradicts to the definition of e_p+1(u). This completes the proof of Claim <ref>. Therefore, we finish the proof of Step <ref>. Before penetrating to the detailed description of next step, for the notation simplicity we take d:= min_1 ≤ p ≤ k-1inf_ u ∈𝒰_0minη/4 e_p(u), θ_p(u) > 0. After passing to some subsequence, we construct a sequence of desired continuous homotopies H_j:I^k-2× [0,1] → W^1,2α(^2, N) such that H_j(·, 0) = σ_j and H_j(·, 1) = σ_j satisfies all the properties asserted in Theorem <ref>. To begin, we choose a subsequence σ_j_l∈𝒜_ε_j_l,C of σ_j such that max_τ∈ I^k-2 E^ω_α(σ_j_l(τ)) ≤𝒲_α,λ + d/2. where d is defined in (<ref>). For notation simplicity, we still write {σ_j}_j ∈ℕ to represent {σ_j_l}_l ∈ℕ. For fixed σ_j: I^k-2→ W^1,2α(^2, N), to show Step <ref>, recalling that 𝒰_0 ⊂𝒩^1,2α_η = ⋃_u ∈𝒰_0 B^1,2α(u,η) for some fixed η > 0, it suffices to construct the homotopy H_j from σ_j to σ_j such that σ_j(I^k-2) ⋂𝒩^1,2α_η/2 e_k-1 = ∅ where the constant e_k-1 > 0 is defined in Step <ref> and η is obtained in Step <ref>. To this end, firstly we denote I(1,n) to be the cell complex on the unit interval I whose 1-cells are the intervals [0,1· 3^-n], [1· 3^-n, 2· 3^-n],…, [1-3^-n,1] and whose 0-cells are the end points 0, 3^-n, 2· 3^-n,…, 1. Then, I(k-2,n) denotes the cell complex of I^k-2 as below I(k-2, n) = I(1,n)⊗ I(1,n) ⊗⋯⊗ I(1,n). Then, for each fixed j ∈ℕ we take n large enough to obtain a sufficiently fine subdivision of I^k-2 such that for each closed face F of I(k-2,n) the followings are fulfilled: * If σ_j(F) ⋂𝒩^1,2α_η/2≠∅, then σ_j(F) ⊂𝒩^1,2α_η. * Each F ∈ℱ can be covered by single B^1,2α(u_i,r_i) for certain 1 ≤ i ≤ m. Here, ℱ is the set of faces F satisfying σ_j(F) ⋂𝒩^1,2α_η/2≠∅. To see <ref>, since I(k-2,n) is compact and σ_j:I(k-2,n) → W^1,2α(^2,N) is continuous, we can take finitely many times barycentric subdivision upon I^k-2 such that the oscillation of σ_j on each face F ∈ℱ is less than η/4. Moreover, by Step <ref>, 𝒩^1,2α_η is covered by the union of finite collection B^1,2α(u_i, r_i)_i = 1^m, after further taking finitely many times barycentric subdivision, by the notion of Lebesgue's number, we can arrange that each maximal F ∈ℱ (here, F ∈ℱ is maximal means that there is no F^'∈ℱ with F ⫋ F^' ) σ_j(F) can be covered by exactly one B^1,2α(u_i,r_i). Then <ref> follows by an induction argument by decreasing dimensions of F ∈ℱ. In the following, we are devoted to construct the desired continuous homotopy H_j inductively on dimension 1 ≤ l ≤ k-2 of the l-skeletons I^k-2_(l) for I^k-2. For l = 0, we can apply the homotopy constructed in <ref> of Step <ref>. More precisely, for the 0-cells outside 𝒩^1,2α_η/e_1, we simply take the H^(0)_j to be the constant homtopy on them. For the 0-cells X in 𝒩^1,2α_η/e_1, we choose the homotopy H^(0)_j := . 
H^σ_j, ϑ_0,i|_X defined on X × [0,1], where H^σ_j, ϑ_0,i is constructed in Step <ref>, i ∈1,2,…, m is chosen to be the smallest positive integer such that σ_j(X) ⊂ B(u_i,r_i) and ϑ is chosen to satisfies ϑ < d/4. In summary, by Step <ref> and Step <ref> we obtain a homotopy H_j^(0) such that H_j^(0)(I^k-2_(0)×1)⋂𝒩_η/e_1 = ∅ and H_j^(0)(X,t) ∈𝒟_u_i(3) for any X ∈ I^k-2_(0) and any t∈ [0,1]. Now, suppose that we have constructed H_j^(l-1) on I^k-2_(l-1)× [0,1] for some l ≥ 1 such that H_j^(l-1)(I^k-2_(l-1)×1)⋂𝒩_η/e_l-1 = ∅, H_j^(l-1)(I^k-2_(l-1)× [0,1])⋂𝒩_η/e_l = ∅ by consulting the conclusion of Step <ref> and such that H_j^(l-1)(X,t) ∈𝒟_u_i(3) for any X ∈ I^k-2_(l-1) and any t∈ [0,1]. Then, we consider the l-cells in I^k-2. For F_l ∈ I^k-2_(l)\ I^k-2_(l-1), we see that ∂ F_ l ∈ I^k-2_(l-1) and F_l:= F_l ∪∂ F_l × [0,1] is homeomorphic to F_l by concatenating F_l and ∂ F_l × [0,1] along the ∂ F_l. This implies that we can construct the continuous map ς : F_l ≅F_l → W^1,2α(^2,N) by gluing H^(l-1)_j:∂ F_l × [0,1] → W^1,2α(^2,N) and σ_j: F_l → W^1,2α(^2,N) along the ∂ F_l. Next, we construct the homotopy H_j^(l) from ς on F_l ≅ F_l conditionally depending on whether F_l belongs to ℱ or not. * If F_l ∈ℱ, then by the definition of ℱ there is also no cells of ∂F_l belongs to ℱ. Therefore, we define H^(l)_j: F_l × [0,1] → W^1,2α(^2,N) ≡ς to be the constant homotopy. Note that in this case ς|_∂F_l satisfies the assumption of Step <ref>, so we have that H^(l)_jF_l×1⋃∂F_l× [0,1]⋂𝒩_η/e_l+1 = ∅. * If F_l ∉ℱ, by the induction assumption on H^(l-1)_j and the choice of fine subdivision on I^k-2, see <ref>, there exists 1 ≤ i_l ≤ m such that ς (F_l) ⊂ B(u_i_l,r_i_l). However, it is important to point out that ς (F_l) may not be contained in 𝒩_η which means that the construction of continuous homotopy obtained in <ref> of Step <ref> can not be applied directly. Based on this consideration, we further take finer subdivision on F_l such that each l-dimensional face f_l of F_l satisfying ς(f_l) ∩𝒩_η/e_p≠∅ must fulfill ς(f_l) ⊂𝒩_η. Then, similarly, we denote ℱ_l to be the union of all l-dimensional faces f_l of F_l with ς(f_l) ∩𝒩_η/e_l≠∅. By induction assumption, we see that ς(∂F_l) ∩𝒩_η/e_l = ∅ which implies that ς(∂ℱ_l) ∩𝒩_η/e_l = ∅. Then, we can construct a homotopy map Ĥ_j^(l): F_l × [0,1/2] ⋃ℱ_l× [1/2,1]→ W^1,2α(^2, N) such that Ĥ(x,t) = ς(x) when t ∈ [0,1/2] and x ∈F_l and that Ĥ(x,t) = H^ς,ϑ_l,i_l(x,2t-1) for t ∈ [1/2,1], where H^ς,ϑ_l,i_l(x,2t-1) is defined in <ref> of Step <ref> with ϑ < d/4. By the definition of ℱ_l, we see that Ĥ_j^(l)(F_l \ℱ_l)×1/2⋃∂F_l ×[0,1/2]⋂𝒩_η/e_l+1 = ∅. And by the construction of Ĥ_j^(l) and the <ref> in Step <ref>, we have that Ĥ_j^(l)(ℱ_l×1) ⋂𝒩_η/e_l+1 = ∅. Moreover, by Step <ref>, we have Ĥ_j^(l)(∂ℱ_l× [1/2,1]) ⋂𝒩_η/e_l+1 = ∅. Then by the homeomorphisms F_l ×[0,1] ≅F_l ×[0,1/2]⋂ℱ_l × [1/2,1], that is induced from the homeomorphism F_l×1≅F_l \ℱ_l×1/2⋃ℱ_l ×1⋃∂ℱ_l × [1/2,1], and the identification ∂F_l ×[0,1] = ∂F_l ×[0,1], we can derive a continuous homotopy H^(l)_j : F_l× [0,1] → W^1,2α(^2, N) from Ĥ_j^(l) which satisfies H^(l)_jF_l×1⋃∂F_l × [0,1]⋂𝒩_η/e_l+1 = ∅. In summary, whenever F_l ∈ℱ or not, we can construct a continuous homotopy H^(l)_j : F_l × [0,1] → W^1,2α(^2, N) induced from the homeomorphisms F_l×[0,1] ≅F_l × [0,1] and F_l ×1≅F_l ×1⋃∂F_l × [0,1], which satisfies that H^(l)_j|_∂ F_l × [0,1] = H^(l-1)_j|_∂ F_l × [0,1] and H^(l)_j (x,0) = σ_j(x) for all x ∈ F_l. 
Then, we glue all together such continuous homotopy H^(l)_j defined on F_l × [0,1] when F_k runs through I^k-2_(l)\ I^k-2_(l-1) to obtain the desired homotopy H^(l)_j : I^k-2_(l)× [0,1] → W^1,2α(^2, N). Keeping in mind that H^(l)_j (F_l ×1) = H^(l)_jF_l ×1⋃∂F_l × [0,1] on each F_l ∈ I^k-2_(l)\ I^k-2_(l-1) for any 1≤ l ≤ k-2 and recalling (<ref>), we have that H^(k-2)_jI^k-2_(k-2)×1⋂𝒩_η/e_l+1 = ∅. To complete the construction, we let σ_j:= H^(k-2)_j(·,1) which satisfies that σ_j(I^k-2) ⋂𝒩_η/e_k-1 = ∅. In the following, we show that there exists large enough j_0 ∈ℕ such that σ_j satisfies the first property <ref> when j ≥ j_0, and for each u ∈𝒰_0 there exists j_0(u) ∈ℕ such that σ_j satisfies the second property <ref> asserted in Theorem <ref> when j ≥ j_0(u). To see <ref> of Theorem <ref>, by our choice of δ_0 and covering B^1,2α(u_i,r_i)_i=1^m, in particular see (<ref>), we find that σ_j(τ) ∉⋃_i=1^m B^1,2α(u_i,r_i) for any τ∈∂ I^k-2. This implies that the continuous homotopy H^(k-2)_j restricted to ∂ I^k-2 is constant, hence σ_j|_∂ I^k-2 = σ_j|_∂ I^k-2. Thus, σ_j ∈ℐ is an admissible sweepout. Moreover, to show σ_j ∈𝒜_ε_j, C+1, we first observe that the homotopy H^(k-2)_j is constructed from σ_j by gluing Ĥ^(l)_j finitely many times and that Ĥ^(l)_j is constructed from the continuous homotopy obtained in <ref> of Step <ref> for 1 ≤ l ≤ k-2. By the arbitrariness of ϑ in <ref> of Step <ref>, choosing large enough j_0 ∈ℕ with ε_j ≤d/8 and small enough ϑ we can conclude that max_τ∈ I^k-2 E^ω_α( σ_j(τ)) = max_τ∈ I^k-2 E^ω_α(H^(k-2)_j(τ,1)) ≤𝒲_α,λ + ε_j for all j ≥ j_0. Next, we verify the second part of definition of 𝒜_ε_j,C+1. Let τ∈ I^k-2 be the point such that E^ω_α(σ_j(τ)) ≥𝒲_α,λ - ε_j. If τ∉⋃_F ∈ℱ F, then we have E^ω_α(σ_j(τ)) = E^ω_α(σ_j(x)) ≥𝒲_α,λ - ε_j which implies that E_α(σ_j(x)) = E_α(σ_j(x)) ≤ C < C+1. On the other hand, if τ∈⋃_F ∈ℱ F, then σ_j(τ) ∈𝒩_η and there exists 1 ≤ i ≤ m such that σ_j(τ) ∈ B^1,2α(u_i, r_i). By the construction of σ_j and the choice of τ satisfying (<ref>), we know that σ_j(τ) ∈ B^1,2α(u_i, 2r_i) which implies that E_α(σ_j(τ)) ≤E_α(σ_j(τ)) - E_α(u_i) + E_α(u_i) ≤ C + 1. thanks to (<ref>) and the choice u_i ∈𝒰_0 ⊂𝒰_C. Therefore, we showed that σ_j ∈𝒜_ε_j, C+1, that is, the first assertion <ref> of Theorem <ref>. To show <ref> of Theorem <ref>, we argue by contradiction. Suppose that there exists subsequences of σ_j, ε_j (still denoted by σ_j, ε_j), a sequence of points u_j ∈𝒰_0 and a sequence τ_j ∈ I^k-2 such that E^ω_α(σ_j(τ_j)) ≥𝒲_α,λ - ε_j and that lim_j →∞σ_j(τ_j) - u_j_W^1,2α(^2, N) = 0. for any j_0 ∈ℕ. Then, by the compactness of 𝒰_0, after passing some subsequences we have lim_j →∞σ_j(τ_j) - u_W^1,2α(^2, N) = 0 for some u ∈𝒰_0. But this contradicts to (<ref>), hence we finish the proof of Step <ref>. Therefore, we complete the proof of Theorem <ref>. Equipped with the deformation of sweepouts Theorem <ref>, we can construct the desired non-trivial critical points in 𝒰_C with Morse index bounded from above by k-2. Given ω∈ C^2(∧^2 (N)), λ > 0 writing ω as λω, α > 1 and a sequence of sweepouts σ_j ∈𝒜_ε_j, C for some constant C > 0 with ε_j ↘ 0. Then, there exists a non-trivial critical point u ∈𝒰_C satisfying E_α(u) ≥1/2Vol(^2) + δ(α, λω) and Ind_E^ω_α(u) ≤ k-2, for some constant δ(α, λω) > 0 depending on the choice of α > 0, ω∈ C^2(∧^2(N)) and λ >0 which is obtained in (<ref>) of Proposition <ref>. Let 𝒰_0 = u ∈𝒰_C+1 : E^ω_α(u) ≥1/2Vol(^2) + δ(α, ω) which is a non-empty closed subset of 𝒰_C+1 by Proposition <ref>. 
By contradiction, if Ind_E^ω_α(u) ≥ k-1 for all u ∈𝒰_0, then by <ref> of Theorem <ref> we can obtain a sequence of sweepouts σ_j∈𝒜_ε_j,C+1 constructed from σ_j. Then, combining Theorem <ref> with Proposition <ref>, after passing to some subsequences of σ_j and ε_j, there exists a sequence τ_j ∈ I^k-2 such that E^λω_α(σ_j(τ_j)) ≥𝒲_α,λ - ε_j and that lim_j →∞σ_j(τ_j) - u_W^1,2α(^2, N) = 0 for some u ∈𝒰_0, which contradicts <ref> of Theorem <ref>. Therefore, there exists u ∈𝒰_0 with Ind_E^ω_α(u) ≤ k-2, completing the proof of Theorem <ref>. As a summary of this section, for almost every λ∈_+ we have constructed a sequence of non-trivial α_j-λ H-spheres with the desired properties described below. Given ω∈ C^2(∧^2(N)), for almost every λ∈_+, there exists a constant C > 0, a sequence α_j ↘ 1 and a sequence of positive constants δ(α_j, λω) > 0 such that for each j ∈ℕ there exists an α_j-λ H-sphere u_α_j∈ W^1,2α_j(^2,N) satisfying δ E^λω_α_j(u_α_j) = 0, 1/2Vol(^2) + δ(α_j, λω) ≤ E^λω_α_j(u_α_j) ≤ C+1 and Ind_E^λω_α_j(u_α_j) ≤ k-2. Given a sequence α_j ↘ 1, combining Lemma <ref> and Lemma <ref>, for almost every λ∈_+ we can find a constant C > 0 and a subsequence of α_j ↘ 1, still denoted by α_j, such that for each j ∈ℕ there exists a sequence of sweepouts {σ_l^j}_l ∈ℕ⊂𝒜_λε_l^j, 8λ^2 C for some sequence {ε_l^j}_l ∈ℕ with ε^j_l ↘ 0 as l →∞, provided l ∈ℕ is large enough. Then, for fixed j ∈ℕ we can apply Theorem <ref> to obtain a sequence u_α_j satisfying all the properties asserted in Corollary <ref>. § COMPACTNESS FOR CRITICAL POINTS OF THE FUNCTIONAL E^ω_α In Section <ref>, in particular Corollary <ref>, we constructed a sequence of non-trivial critical points {u_α_j}_j ∈ℕ of the functional E^λω_α_j with uniformly bounded α_j-energy and with Morse index uniformly bounded from above by k-2. Next, in order to obtain the existence of a non-constant H-sphere, we need to study the behavior of the sequence u_α_j as α_j ↘ 1, which is the primary task of this section. It is crucial to point out that the compactness result developed in this section is applicable to a broad range of sequences of α-H-surfaces {u_α}_α > 1 (that is, critical points of E_α^ω) from a closed Riemann surface M into a compact Riemannian n-manifold N with uniformly bounded α-energy and with Morse index uniformly bounded from above, where H ∈Γ(∧^2(N)⊗ TN) is induced from any given ω∈ C^2(∧^2(N)) as in (<ref>). Therefore, we simply write α for α_j to represent the general sequence α↘ 1, ω for λω and E^ω_α for E^λω_α, respectively. Considering that the asymptotic analysis for general sequences of α-H-surfaces is involved and complicated, we summarize the main results of this section in Subsection <ref>, with the detailed proofs provided in the subsequent subsections. §.§ Main Results on Asymptotic Analysis for Sequences of α-H-surfaces §.§.§ Description of the Bubbling Procedure Let {u_α}_α↘ 1: M → N be a sequence of α-H-surfaces with uniformly bounded α-energy, sup_α > 1E_α(u_α) ≤Λ < ∞. Then, from Lemma <ref>, Lemma <ref> and Lemma <ref> below, and adapting the rescaling argument of Sacks-Uhlenbeck <cit.>, after passing to a subsequence, u_α converges to an H-surface u_0 : M → N smoothly away from at most finitely many singular points (that is, energy concentration points) {x^i}_i = 1^n_0 as α↘ 1. Around each point x^i for 1 ≤ i ≤ n_0, we assume that there are n_i bubbles (that is, non-trivial H-spheres) arising at x^i during bubbling. 
Therefore, there exists sequences of points {x^ij_α}_α↘ 1 for 1 ≤ i ≤ n_0 and 1 ≤ j ≤ n_i, and sequences of positive numbers {λ^ij_α}_α↘ 1 such that x^ij_α→ x^i for 1 ≤ j ≤ n_i and λ^ij_α→ 0 as α↘ 1. By a standard scaling argument, see for instance <cit.>, for 1 ≤ j_1, j_2 ≤ n_i and j_1≠ j_2 at least one alternative of the following two statements holds: * for any fixed R > 0, B^M(x^ij_1_α, λ^ij_1_α R) ⋂ B^M(x^ij_2, λ^ij_2_α R) = ∅ whenever α - 1 is sufficiently small. * λ^ij_1_α/λ^ij_2_α + λ^ij_2_α/λ^ij_1_α→∞, as α↘ 1. Moreover, after taking a conformal transformation ^2 ∪∞≅^2 and applying the removability of isolated singularities Lemma <ref> the rescaled maps v^ij_α : =u_α(x^ij_α + λ^ij_α x) for 1≤ i ≤ n_0, 1≤ j ≤ n_i converges strongly in C^∞_loc^2\{p^ij_1, …, p^ij_s_ij} to a non-trivial H-sphere w^ij: ^2 → N for some finite energy concentration points {p^ij_1, …, p^ij_s_ij}⊂^2. Then we chosoe small enough r_i > 0 such that B(x_i,r_i) ⋂{x^1, x^2, ⋯, x^i-1, x^i+1, ⋯, x^n_0} = ∅, and hence v_α^ij : B(0, (λ^ij_α)^-1 r_i) → N is a critical points of E^ω_α, ij := 1/2∫_B(0, (λ^ij_α)^-1r_i)(λ^ij_α)^2 + |∇ v^ij_α|^2^αdV_g_α,ij + (λ_α^ij)^2α - 2∫_B(0, (λ_α^ij)^-1 r_i)(v_α^ij)^*ω, where g_α,ij := e^φ(x_α + λ_α^ij x)((dx^1)^2 + (dx^2)^2) arising from the metric g : = e^φ(x) ((dx^1)^2 + (dx^2)^2) on M under a conformal coordinate (x^1, x^2) and v_α^ij solves the Euler-Lagrange equation Δ v_α^ij + (α - 1)∇|∇_g_α,ij v_α^ij|^2·∇ v_α^ij/(λ_α^ij)^2+|∇_g_α,ij v_α^ij|^2 + A(v_α^ij)(∇ v_α^ij, ∇ v_α^ij) = (λ_α^ij)^2α - 2H(v_α^ij)( v_α^ij, ∇ v_α^ij)/α((λ_α^ij)^2 + |∇_g_α,ij v_α^ij|)^α - 1. The deficiency of conformally invariance for functional E^ω_α leads to distinct formulations of E^ω_α and E^ω_α,ij, hence the distinct formulation of Euler Lagrange equations (<ref>) and (<ref>). Based on this consideration, we will employ the following more general functional to show our main Theorem <ref>, Theorem <ref> and Theorem <ref> in this section E^ω_α = 1/2∫_M τ_α + ∇_g_α u_α^2^α dV_g_α + τ^α - 1_α∫_M u_α^* ω where 0 < τ_α≤ 1 satisfying 0 < β_0 ≤lim inf_α↘ 1τ^α -1 _α≤ 1 and g_α is a sequence of metrics on (M,g) that is conformal to g and converges smoothly to the standard g as α↘ 1. Critical points u_α : M → N of generalized functional E^ω_α are also called α-H-surfaces for simplicity, similarly to (<ref>) and (<ref>), it solves the following generalized Euler Lagrange equation Δ_g_α u_α + (α - 1)∇_g_α|∇_g_α u_α|^2·∇_g_α u_α/τ_α+|∇_g_α u_α|^2 + A(u_α)(∇_g_α u_α, ∇_g_α u_α) = τ_α^α -1H(u_α)(_g_α u_α, ∇_g_α u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1 or equivalently in divergence form -((τ_α+|∇_g_α u_α|^2)^α - 1∇_g_α u_α) + (τ_α + |∇_g_α u_α|^2)^α -1A(u_α)(∇_g_α u_α,∇_g_α u_α) = τ_α^α -1/αH(u_α)(_g_α u_α, ∇_g_α u_α) Considering the (<ref>) and (<ref>), the following quantities arise naturally in the process of studying the energy identity and asymptotic analysis of necks μ_ij := lim inf_α↘ 1λ^ij_α^2 - 2α and ν_ij := lim inf_α↘ 1λ^ij_α^- √(α - 1) indicating the comparison with expansion speed of blow-up radius and the speed of α↘ 1. It is easy to check that μ_ij, ν_ij∈ [1,∞] as λ_α^ij→ 0 as α↘ 1. Moreover, we can see that all μ_ij are finite, that is, there exists a positive constant 1 ≤μ_max < ∞ such that μ_ij∈ [1,μ_max]. Indeed, without loss generality we can assume there is only one blow-up point x_1 ∈ M and there are n_1 bubbles arising at this point, which implies, there exists a sequence of points {x_α^j}_α > 1 and a sequence of positive numbers {λ^j_α}_α > 1 satisfying one of <ref> and <ref>. 
For simplicity, we assume lim sup_α↘ 1λ^1_α/λ^j_α < ∞ for all 2≤ j ≤ n_1 which implies w^1_α(x) := u_α(x_α^1 + λ^1_α x) converges strongly to w^1 in C^∞_loc(^2), namely, w^1 is the first non-trivial H-bubble. Therefore, we have Λ > lim_R→∞lim_α↘ 1∫_B(x^1_α, λ^1_α R)∇ u_α^2α dx = lim_R→∞lim_α↘ 1λ_α^1^2-2α∫_B(0,R)∇ w^1_α dx = μ_1 E(w^1). By the energy gap Lemma <ref>, there is a positive constant ε_0 := infE(w) = 1/2∫_^2 |∇ w|^2 dV_^2: 10emw is a non-constant harmonic sphere in N > 0 such that E(w^1) ≥ε_0 hence μ_j ≤μ_1 ≤Λ_1/E(w^1)≤Λ_1/ε_0 := μ_max < ∞. §.§.§ Generalized Energy Identity 5pt Now, we are in a position to state our first main compactness result of generalized energy identity for sequences of α-H-surfaces. Let (M,g) be a closed Riemann surface, (N,h) be a n-dimensional closed Riemannian manifold that is isometrically embedded in ^K for some K ∈ℕ. Assume that {u_α}_α↘ 1⊂ C^∞(M,N) is a sequence of α-H-surfaces with uniformly bounded generalized α-energy, that is, sup_α↘ 1E_α(u_α)≤Λ < ∞. We define the blow-up set 𝔖:={x∈ M : lim inf_k→∞1/2∫_B^M(x,r)∇ u_α dV_g ≥ε_0^2, for all r > 0} where B^M(x,r) = {y∈ M : dist^M(x,y) < r} denotes the geodesic ball in M and ε_0 is determined in (<ref>). Then 𝔖 is finite, written as 𝔖 = {x^1, ⋯, x^n_0}. After choosing a subsequence, there exists a smooth H-surface u_0:M→ N and finitely many bubbles, that is, a finite set of H-spheres w^ij, 1 ≤ j ≤ n_i such that u_α→ u_0 weakly in W^1,2(M,^K) and strongly in C^∞_loc(M\𝔖, N). Moreover, the following generalized energy identity holds lim_k→∞E_α(u_α) = E(u_0) + 1/2Vol(M) + ∑_i=1^n_0∑_j = 1^n_iμ^2_ij E(w^ij). §.§.§ Asymptotic Behavior on Necks 5pt Now, we present our second main result about asymptotic analysis on neck, which provides a complete geometric picture of all possible limiting behaviors of the necks occurring in the blow-up process for sequences of α-H-surfaces. We show that all necks between bubbles and the base map converge to geodesics and we provide a scheme to calculate the length of these geodesics, see Remark <ref> below. More precisely, we have Let (M,g) be a closed Riemann surface, (N,h) be a n-dimensional closed Riemannian manifold that is isometrically embedded in ^K for some K ∈ℕ. Assume that {u_α}_α↘ 1⊂ C^∞(M,N) is a sequence of α-H-surfaces with uniformly bounded α-energy, that there is only one blow up point 𝔖 = {x_1} and there is only one bubble in B^M(x_1,r) ⊂ M, for some r>0, denoted by w^1:^2→ N. Let ν^1 = lim inf_α↘ 1λ^1_α^-√(α - 1). Then one of following statement holds * when ν^1 = 1, the set u_0B^M(x_1,r)⋃ w^1(^2) is a connected subset of N where u_0 is the weak limit of u_α in W^1,2(M,N) as α↘ 1; * when ν^1∈ (1,∞), the set u_0B^M(x_1,r) and w^1(^2) are connected by a geodesic Γ⊂ N with length L(Γ) = √(E(w^1)/π)logν^1; * when ν^1 = ∞, the neck contains at least a geodesic of infinite length. It is important to note that, although we state the Theorem <ref> under the assumption that there is only one bubble w^1 occurring the single blow up point x_1, it is not difficult to obtain a general version by an induction argument following the proofs in Section <ref>. The length formula looks quite complicated and needs to be discussed by case splitting. For instance, if there are two H-spheres, w^1 and w^2, occurring the blow up point x_1, namely, there exists sequences of positvies numbers λ_α^1 ↘ 1, λ_α^2 ↘ 1 with λ_α^1/λ_α^2 → 0 and sequences of points x_α^1 → x_1, x_α^2 → x_1 satisfying w^1 = lim_α↘ 1 u_α(λ_α^1 x + x_α^1) and w^2 = lim_α↘ 1 u_α(λ_α^2 x + x_α^2). 
Then, the length formula for the geodesic connecting u_0(B^M(x_1, r)) and w^2(^2) is given by L(u_0, w^2) = √((E(w^1) + E(w^2))/π)logν^2, and the length formula for the geodesic connecting w^2(^2) and w^1(^2) is given by L(w^2, w^1) = √(E(w^1)/π)log(ν^1/ν^2). Here, ν^1 = lim inf_α↘ 1λ^1_α^-√(α - 1) and ν^2 = lim inf_α↘ 1λ^2_α^-√(α - 1). §.§.§ Energy Identity Under Topological and Curvature Conditions 5pt The topology and geometry of the target manifold (N,h) play a critical role in the convergence properties of α-H-surfaces from a compact surface, and in particular in the comparison between α - 1 and the rate of the scalings λ_α^ij→ 0 as α↘ 1, that is, in the values of μ_ij and ν_ij. From the point of view of differential geometry, it is natural to look for geometric and topological conditions on the target (N, h) which ensure that the energy identity holds, or equivalently that the necks converge to geodesics of finite length. To this end, utilizing Gromov's estimate <cit.> (see also <cit.>) on the length of a geodesic in terms of its Morse index, we have the following. Let (M,g) be a closed Riemann surface and (N,h) an n-dimensional compact Riemannian manifold that is isometrically embedded in ^K for some K ∈ℕ and has finite fundamental group. Assume that {u_α}_α↘ 1⊂ C^∞(M,N) is a sequence of α-H-surfaces with uniformly bounded α-energy and uniformly bounded Morse index, that is, Ind_E^ω_α(u_α) ≤ C for some universal constant C > 0. Then 𝔖 is finite, written as 𝔖 = {x^1, …, x^n_0}. After choosing a subsequence, there exist a smooth H-surface u_0:M→ N and finitely many bubbles, that is, a finite set of H-spheres w^ij, 1 ≤ i ≤ n_0, 1 ≤ j ≤ n_i, such that u_α→ u_0 weakly in W^1,2(M,^K) and strongly in C^∞_loc(M\𝔖, N). Moreover, the limiting necks consist of geodesics of finite length, and hence the following energy identity holds: lim_α↘ 1 E_α(u_α) = E(u_0) + 1/2Vol(M) + ∑_i=1^n_0∑_j = 1^n_i E(w^ij). By Myers' theorem from Riemannian geometry, see <cit.>, the diameter of a complete Riemannian manifold (N,h) with Ric(N) ≥κ > 0 satisfies diam(N) ≤π/√(κ), and any geodesic Γ⊂ (N,h) with length L(Γ) ≥π/√(κ) is unstable. Moreover, the fundamental group π_1(N) is finite. Utilizing this fact, as a corollary of Theorem <ref> we obtain the following consequence. If we assume that (N,h) is an n-dimensional complete Riemannian manifold with strictly positive Ricci curvature, that is, Ric(N) > κ > 0, and keep the remaining assumptions the same as in Theorem <ref>, then the energy identity stated in Theorem <ref> for sequences of α-H-surfaces with bounded Morse index still holds. In the context of α-harmonic maps, Moore <cit.> (see also <cit.>) demonstrated bubble tree convergence under conditions akin to those of Theorem <ref>. A similar result for α-harmonic maps under Ricci curvature assumptions, as in Corollary <ref>, was established by Li-Liu-Wang <cit.>. §.§.§ Non-constancy of Weak Limit 5pt There is a key insight about sequences of non-trivial α-H-spheres, which exhibits one of the main advantages of the α-energy approximation of the Dirichlet energy. More precisely, we can show that if u_α is a sequence of non-trivial α-H-spheres with uniformly bounded α-energy, then the weak limit of u_α is non-constant. For the corresponding statement in the context of α-harmonic maps, see <cit.>. The following Lemma <ref> plays a crucial role in reaching the desired result.
Let ι : ^2 →^3 be the standard isometric embedding, that is, ι^1(p)^2 + ι^2(p)^2 + ι^3(p)^2 = 1, for p ∈^2 If u_α∈ C^2(^2,N) is a critical point for E_α^ω for α > 1, then ∫_^2ι^i(x) Ψ_α|∇ u_α(x)|^2 dV_g = 0, i = 1,2,3, where Ψ_α:[0,∞) → is a strictly increasing smooth function defined by Ψ_α(r) = α(1 + r)^α- 1r - (1 + r)^α + 1/α - 1. Without loss of generality, we take the standard metric on ^2 such that it admits constant curvature one. Moreover, by the rotational symmetric of ^2 and (<ref>), it suffices to show ∫_^2ι^3(x) Ψ_α|∇ u_α(x)|^2 dV_g = 0. Utilizing the stereographic projection from ^2 to ^2 we can write the metric on (^2,ds^2) with the polar coordinate (ρ,θ) as ds^2 = 4/1 + ρ^2^2dρ^2 + ρ^2 dθ^2. Then, taking a conformal transformation (ρ,θ)↦ (φ,η) by ρ = e^-φ and θ = η, we can rewrite the metric ds^2 as ds^2 = 1/cosh^2φdφ^2 + dη^2. Note that, since stereographic projection is a conformal coordinate, (φ,η) is also a conformal coordinate of ^2. Then, using this coordinate we can define a collection of conformal transformation {ϕ_t}_t∈ with each ϕ_t :^2 →^2 expressed as φ(ϕ_t(x)) = φ(x) + t, η(ϕ_t(x)) = η(x). Thus, the function E_α^ω acts on u∘ϕ_t can be expressed as E_α^ω(u∘ϕ_t) = 1/2∫_^21 + ∂ u/∂φ^2 + ∂ u/∂η^2cosh^2(φ + t)^α dV_g + ∫_^2(u∘ϕ_t)^*ω = 1/2∫_^21 + ∂ u/∂φ^2 + ∂ u/∂η^2cosh^2(φ + t)^αdφ dη/cosh^2(φ + t) + ∫_^2u^*ω. Here, in the second identity we use the conformally invariance of the integral of u^*ω. Then, we take the derivative in the identity (<ref>) with respect to t at t = 0 to get d/dt|_t=0 E^ω_α(u∘ϕ_t) = α∫_^21 + |∇ u|^2^α - 1∂ u/∂φ^2 + ∂ u/∂η^2tanhφ dφ dη - ∫_^21 + |∇ u|^2^αtanhφ/cosh^2φ dφ dη =α∫_^21 + |∇ u|^2^α - 1∇ u^2 tanhφ dV_g -∫_^21 + |∇ u|^2^αtanhφ dV_g. If u is a critical point of E_α^ω, then by the regularity Lemma <ref> u is stationary with respect to ϕ_t. Hence, we have ∫_^2α1 + |∇ u|^2^α - 1∇ u^2 - 1 + |∇ u|^2^αtanhφ dV_g = 0. In the stereographic projection coordinate, we have tanhφ = sinhφ/coshφ = ρ^2 - 1/ρ^2 + 1 = ι(φ,η)^3 and ∫_^2tanhφ dV_g = ∫_^2ι^3(φ,η) dV_g = 0. Plugging these two identities into (<ref>), we finally obtain ∫_^2α1 + |∇ u|^2^α - 1∇ u^2 - 1 + |∇ u|^2^α + 1ι^3(φ,η) dV_g = 0 which is exactly (<ref>). We need to mention that Ψ_α converges to a smooth function Ψ_1 as α↘ 1, more precisely, it can be expressed as Ψ_1(r) = r- log(1 + r) which is also a strictly increasing smooth function. Now, we can prove the main consequence of this subsection. Let u_α:^2→ N be a sequence of non-constant critical points for E^ω_α that converges strongly in C^2^2\{x_1,x_2,…,x_l} to u for some l ∈ℕ as α→ 1. Then the limit u:^2→ N is also non-constant. Let (φ,θ) be the geographic coordinates of ^2 with 0≤φ≤π and 0≤θ≤ 2π. And denote ^+ = {(φ,θ) : 0≤φ≤π/2} and ^- = {(φ,θ) : π/2≤φ≤π}. Since the set of points that fails to convergence is finite, after taking a fractional linear transformation of ^2, we can assume {x_1,x_2,…,x_l}⊂int(^+) such that φ(x_i) < π/3 for 1≤ i ≤ l. Then splitting the integral domain ^2 into ^+ and ^- in identity (<ref>) obtained in Lemma <ref> gives ∫_^+ι^3(x) Ψ_α|∇ u_α(x)|^2 dV_g = ∫_^-ι^3(x)Ψ_α|∇ u_α(x)|^2 dV_g. If the limit u is constant, then by Theorem <ref> the energy must concentrate at some point, say x_1, and one can construct a rescaling map v_α that converges strongly in C^2_loc to a non-constant bubble v:^2→ N. 
Then, utilizing (<ref>) we can estimate 0 < E(v)/2≤1/2lim inf_α↘ 1E(u_α,^+) ≤lim inf_α↘ 1∫_^+ι^3(x) |∇ u_α|^2 dV_g ≤ 2lim inf_α↘ 1∫_^+ι^3(x) Ψ_α|∇ u_α(x)|^2 dV_g =2lim inf_α↘ 1∫_^-ι^3(x)Ψ_α|∇ u_α(x)|^2 dV_g = 0 which is a contradiction. Here, we note that Ψ(r)/r → 1 as r →∞ which implies the second inequality of the above estimates (<ref>). Therefore, we reach the conclusion of Proposition <ref>. 5pt §.§ Preparations for the Proof of Main Theorem 5pt In this subsection, we will derive some basic Lemmas for α-H-surfaces, such as small energy regularity, energy gap and removability of isolated singularities of H-surfaces that will be described in Subsubection <ref>, and we will establish several Pohozaev type identities see Lemma <ref> in Subsubection <ref>. By Riemann mapping theorem, for each p ∈ M there exists an isothermal coordinate system in a neighborhood U(p) of p such that the metric g can be written as g = e^φ(dx^1)^2 + (dx^2)^2 where x = (x^1,x^2) ∈B(0,1) ⊂^2 and φ is a smooth function satisfying φ(p) = 0. Therefore, it suffices to restrict our analysis on unit ball B(0,1) ⊂^2 equipped with the metric g_α := e^φ_α(dx^1^2 + dx^2^2) with φ_α(0) = 0 and φ_α→φ∈ C^∞(B(0,1)) in order to investigate the local bubbling behavior for α-H-surfaces. Hence, under these isothermal coordinates the Euler Lagrange equation (<ref>) and (<ref>) are equivalent to the following Δ u_α + (α - 1)∇|∇_g_α u_α|^2·∇ u_α/τ_α+|∇_g_α u_α|^2 + A(u_α)(∇ u_α, ∇ u_α) = τ_α^α -1H(u_α)( u_α, ∇ u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1 and in divergence form -((τ_α+|∇_g_α u_α|^2)^α - 1∇ u_α) + (τ_α + |∇_g_α u_α|^2)^α -1A(u_α)(∇ u_α,∇ u_α) = τ_α^α -1/αH(u_α)( u_α, ∇ u_α) §.§.§ Small Energy Regularity, Energy Gap and Removability of Isolated Singularities 5pt Similarly to blow-up phenomenon for sequences of α-harmonic maps, that was developed by Sacks-Uhlenbeck <cit.>, by showing the small energy regularity Lemma <ref>, energy gap Lemma <ref> and the removability of isolated singularities Lemma <ref> for H-surfaces, we can establish a similar convergence theory for general sequence of α-H-surfaces {u_α}_α↘ 1 (as critical points of generalized functional E^ω_α) with uniformly bounded α-energy. In the H-surface context, compared with the case of harmonic maps, the following inequality H(u)( u, ∇ u)/α(τ_α + |∇_g_α u|^2)^α - 1_L^1(M) ≤1/αβ_0H(u)( u, ∇ u) _L^1(M) ≤1/2αβ_0H_L^∞(N)∇ u^2_L^2(M) implies that the new quadratic growth part arising from the mean curvature type vector field H( u,∇ u) actually plays a complete similar role with the original second fundamental form term A(∇ u, ∇ u) in the proof of small energy regularity for α-harmonic maps, see <cit.>. Based on this fact, it is not difficult to establish the following small energy regularity for α-H-surfaces: Let {u_α}_α > 1 be a sequence of critical points of E^ω_α in W^1,2α(B(0,1),N) where B(0,1) is equipped with metric g_α := e^φ_α(dx^1^2 + dx^2^2) with φ_α(0) = 0 and φ_α→φ∈ C^∞(B(0,1)) as α↘ 1. Then, there exists constants ε_0 > 0 and α_0 > 1 such that if sup_1 < α < α_0 E(u_α, B) ≤ε_0^2, where B := B(0,1) for simplicity, then for any B^'⊂⊂ B we have ||∇ u_α(x)||_W^2,p(B^',N)≤ C(p,B^', N)∇ u_α_L^2(B(0,1), N), for all 1 < α≤α_0 and 1 < p < ∞, where C(p,B^',N) is a constant depending only on 1 < p < ∞, B^'⊂ B and geometries of N. Since the desired estimates holds locally and g_α→ g smoothly as α↘ 1, it suffices to prove the Lemma <ref> for sequence u_α : B⊂^2 → N with Euclidean metric on B by choosing small enough α_0 -1. 
Let φ be a smooth function which is 1 on B^' and supports in B, then multiplying the Euler-Lagrange equation (<ref>) for E^ω_α by φ and writing the terms arising from the derivatives on φ in the right-hand side yield |Δ(φ u_α) + (α - 1)∇^2 (φ u_α),∇ u_α·∇ u_α/τ_α+|∇ u_α|^2. + .A(u_α)(∇(φ u_α), ∇ u_α) + τ_α^α -1 H(u_α)( u_α,∇(φ u_α))/ατ_α + |∇ u_α|^2^α - 1| ≤ C (φ,∇φ, N, ||A||_L^∞, ||H||_L^∞) |u_α| + |∇ u_α|, where C(φ,∇φ, N, ||A||_L^∞, ||H||_L^∞) is a constant that depends on the φ, ∇φ, geometries of target N, second fundamental form A and mean curvature type vector field H. For notation simplicity, we denote it by C_0. Keeping in mind (<ref>) and applying L^p estimates for Laplace operators, we obtain (C_p)^-1φ u_α_W^2,p(B,N) ≤ (α - 1) φ u_α_W^2,p(B,N) + (||A||_L^∞(N) + H_L^∞(N)) |∇ (φ u_α)|· |∇ u_α|_L^p(B,N) + C_0 u_α_W^1,p(B,N), where C_p is the constant arising from operator norms of Laplace operator. Now, let p = 4/3 and take 2(α_0 - 1)< (C_p)^-1, using Hölder's inequality we have (C_4/3)^-1 - 2(α - 1)φ u_α_W^2,4/3(B,N) ≤ C(A,H)|∇ (φ u_α)|· |∇ u_α|_L^4/3(B,N) + C_0 u_α_W^1,4/3(B,N) ≤ C(A,H) E(u_α,B)∇(φ u_α)_L^4+ C_0 u_α_W^1,4/3(B,N). By Sobolev embedding W^2,4/3(B,N) ↪ W^1,4(B,N), we conclude that from (<ref>) (C_4/3)^-1 - 2(α - 1) - C_e C(A,H) E(u_α,B) φ u_α_W^2,4/3(B,N) ≤ C_0 u_α_W^1,4/3(B,N) where C_e is the norm of the embedding W^2,4/3(B,N) ↪ W^1,4(B,N) and C(A,H) := ||A||_L^∞(N) + H_L^∞(N). Note that, after replacing u_α with u_α -1/Vol(B) ∫_B u_α, we can assume ∫_B u_α = 0. So, the right-hand side of (<ref>) is controlled by E(u_α,B) by Poincaré's inequality. We take ε_0 is small enough such that (C_4/3)^-1 - 2(α - 1) - C_e C(A,H) ε_0^2 > 0. Then, in estimate (<ref>), we take p = 2 to obtain (C_2)^-1 - 2(α - 1) φ u_α_W^1,2α(B,N) ≤ C(A,H)φ u_α_W^1,4(B,N) + C_0 u_α_W^1,2(B,N). By Sobolev embedding W^1,2α(B,N)↪ W^1, p(B,N) for all 1 < p < ∞. (<ref>) will give the estimates of ||φ u_α||_W^1, p(B,N) and plugging this estimates into (<ref>) gives ||φ u_α||_W^2, p(B,N)≤ C^' (φ,∇φ, N, ||A||_L^∞, ||H||_L^∞) ∇ u_α_L^4(B,N). Then, by W^2,4/3(B,N) ↪ W^1,4(B,N), plugging (<ref>) into above inequality will give the desired estimates of the Lemma <ref>. By a similar argument to <cit.>, we can obtain the following globally energy gap Lemma for α-H-surfaces u_α from M to N. There exists ε_0 > 0 and α_0 > 1 such that if E(u_α)< ε_0^2, 1≤α < α_0 and u_α : M → N is a critical map of E^ω_α, then u_α is constant and E(u_α) = 0. If we replace the smooth function φ with φ≡ 1 and do the estimates globally on M, then C_0 ≡ 0 arising in Lemma <ref>. Thus, (<ref>) becomes (C_4/3)^-1 - 2(α - 1) - C_e C(A,H) E(u_α,B)φ u_α_W^2,4/3(M,N)≤ 0. Therefore, when E(u_α,M) < ε_0^2 is small enough, every critical point u_α of E^ω_α is constant. Moreover, when α = 1, we have the following removability of isolated singularities for H-surfaces by combining the proof in <cit.> and the regularity result in <cit.>. Suppose that u ∈ C^2(B(0,1) \{0},N) where B(0,1) equipped with metric g= e^φ(dx^1)^2 + (dx^2)^2 for some smooth function φ, E(u,B(0,1)) <∞ and that u satisfies the Euler-Lagrange equation (<ref>), then u can extends to a smooth H-surface u:B(0,1)→ N. §.§.§ Pohozaev type Identities 5pt As a corollary of Lemma <ref>, we can establish the following boundedness estimates lim sup_α↘ 1τ_α + ∇_g_αu_α^2^α - 1_C^0(B(0,1))≤ C < ∞. Let (B(0,1),g_α) be a unit disk in ^2 equipped with a metric g_α = e^φ_α((dx^1)^2 + (dx^2)^2) where φ_α(0) = 0 and φ_α is a sequence of smooth function such that φ_α→φ strongly in C^∞(B(0,1)). 
If u_α is a sequence of α-H-surfaces with uniformly bounded generalized α-energy sup_α > 1E_α(u_α, B(0,1)) < ∞ and lim_α↘ 1τ_α^α - 1 > β_0 > 0, then there exists a positive β_1 > 0 which is independent of α↘ 1 such that β_0 ≤lim inf_α↘ 1τ_α + ∇_g_αu_α^2^α - 1_C^0(B(0,1)) ≤lim sup_α↘ 1τ_α + ∇_g_αu_α^2^α - 1_C^0(B(0,1))≤β_1. It suffices to prove the upper bound part of (<ref>). If the energy concentrate set 𝔖:={x∈ B(0,1) : lim inf_k→∞1/2∫_B(x,r)∇_g_α u_α^2 dV_g_α≥ε_0^2, for all r > 0} is empty, then by Lemma <ref> u_α converges to some H-surface u_0 smoothly which implies lim sup_α↘ 1∇_g_α u_α_C^0(B(0,1))≤ C <∞. Hence, (<ref>) follows directly. Thus, we assume that 𝔖 is non-empty. Without loss of generality, we further assume that 0 ∈𝔖 is the only energy concentration point. Then, there exists finitely many bubbles occurring around 0, hence there exists sequences of positive numbers λ_α^i↘ 0 and sequences of points x_α^i ↘ 0 as α↘ 1, for 1 ≤ i ≤ n_0 satisfying the alternative <ref> or <ref>. We choose the smallest λ_α^i_0 satisfying lim sup_α↘ 1λ_α^i_0/λ_α^i≤ C < ∞ for any 1≤ i ≠ i_0 ≤ n_0. Therefore, the energy concentration 𝔖 set of rescaled sequences w_α(x) := u_α(x_α^i_0) is empty, hence by Lemma <ref> we have lim sup_α↘ 1τ_α + ∇_g_αu_α^2^α - 1_C^0(B(0,1))≤ C lim sup_α↘ 11 + λ_α^i_0^2 -2α≤ C(1 + μ_max) which yields the estimate (<ref>) by letting β_1 := C(1 + μ_max). Next we are devoted to derive some general variational formulas for the functional E^ω_α, to obtain some critical estimates of the energy of α-H-surfaces on the neck domains. We adapt the idea introduced in <cit.> and hence some reduplicative computational details are omitted. Let (B(0,1),g_α) be a unit disk in ^2 equipped with a metric g_α = e^φ_α((dx^1)^2 + (dx^2)^2) where φ_α(0) = 0 and φ_α is a sequence of smooth function and φ_α→φ strongly in C^∞(B(0,1)). If u_α is a critical point of E^ω_α(u, B(0,1)), then for any 0 < t < 1 there holds 1-1/2α ∫_∂ B(0,t)τ_α + ∇_g_α u_α^2^α - 1∂ u_α/∂ r^2 ds - 1/2α∫_∂ B(0,t)τ_α + ∇_g_αu_α^2^α - 11/|x|^2∂ u_α/∂θ^2ds =1 - 1/α1/t∫_B(0,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2 dx + O(t). and 1-1/2α ∫_∂ B(0,t)τ_α + ∇_g_α u_α^2^α - 1∇ u_α^2 ds - ∫_∂ B(0,t)τ_α + ∇_g_αu_α^2^α - 11/|x|^2∂ u_α/∂θ^2ds =1 - 1/α1/t∫_B(0,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2 dx + O(t). Taking a 1-parameter family of transformations group {ϕ_s} that is generated by the vector field supported in B(0,1)⊂^2, we compute E^ω_α(u∘ϕ_s, B(0,1)) = ∫_B(0,1)τ_α + ∇_g_α(u∘ϕ_s)^2^α dV_g_α + ∫_B(0,1)(u∘ϕ_s)^*ω =∫_B(0,1)τ_α + ∑_i = 1^2du(ϕ_s)_*(e_i(x))^2^α dV_g_α + ∫_B(0,1)ω_ij(u∘ϕ_s)(u∘ϕ_s)^i ∇(u∘ϕ_s)^j dx^1∧ dx^2 = ∫_B(0,1)τ_α + ∑_i = 1^2du(ϕ_s)_*(e_i(ϕ^-1_s(x)))^2^α J(ϕ^-1_s) dV_g_α + ∫_B(0,1)ω_ij(u∘ϕ_s)∂ u^i/∂ x^k∂ϕ_s^k/∂ z^1∂ u^j/∂ x^l∂ϕ_s^l/∂ z^2- ∂ u^j/∂ x^k∂ϕ_s^k/∂ z^1∂ u^i/∂ x^l∂ϕ_s^l/∂ z^2 dx^1∧ dx^2 := A + B where {e_i} is a local orthonormal basis of TB(0,1) and J(ϕ^-1_s) is the Jacobian of ϕ^-1_s. Utilizing the first variational formula for area functional d/dsJ(ϕ_s^-1)dV_g_α|_s = 0 = -(X)dV_g_α, differentiating E^ω_α yields d/dsE^ω_α(u ∘ϕ_s )|_s = 0 = δE^ω_α(u)du(X) = - ∫_B(0,1)τ_α + ∇_g_αu^2^α(X)dV_V_g_α + 2α∑_i∫_B(0,1)τ_α + ∇_g_αu^2^α - 1⟨ du(∇_e_i X), du(e_i)⟩ dV_g_α +d/dsB |_s = 0. Next, we focus on d/dsB |_s = 0 = ∫_B(0,1)∂ω_ij/∂ y^p∂ u^p/∂ x^qd ϕ^q_s/ds|_s = 0 u^i ∇ u^j dx^1∧ dx^2 + ∫_B(0,1)ω_ij(u)(∂^2 u^i/∂ x^p ∂ x^1d ϕ^p_s/ds|_s=0∂ u^j/∂ x^2 + ∂ u^i/∂ x^k∂ X^k/∂ z^1∂ u^j/∂ x^2+∂ u^i/∂ x^1∂^2 u^j/∂ x^p ∂ x^2dϕ^p_s/ds|_s= 0. 
+ ∂ u^i/∂ x^1∂ u^j/∂ x^l∂ X^l/∂ z^2 - ∂^2 u^j/∂ x^p ∂ x^1d ϕ^p_s/ds|_s=0∂ u^i/∂ x^2 - ∂ u^j/∂ x^k∂ X^k/∂ z^1∂ u^i/∂ x^2 -.∂ u^j/∂ x^1∂^2 u^i/∂ x^p ∂ x^2dϕ^p_s/ds|_s = 0 - ∂ u^j/∂ x^1∂ u^i/∂ x^l∂ X^l/∂ z^2) dx^1∧ dx^2 =∫_B(0,1)∂ω_ij/∂ y^p∂ u^p/∂ x^qX^q u^i ∇ u^j dx^1∧ dx^2 + ∫_B(0,1)ω_ij(u)(∂^2 u^i/∂ x^p ∂ x^1X^p∂ u^j/∂ x^2 + ∂ u^i/∂ x^k∂ X^k/∂ z^1∂ u^j/∂ x^2+∂ u^i/∂ x^1∂^2 u^j/∂ x^p ∂ x^2X^p . + ∂ u^i/∂ x^1∂ u^j/∂ x^l∂ X^l/∂ z^2-∂^2 u^j/∂ x^p ∂ x^1X^p∂ u^i/∂ x^2 - ∂ u^j/∂ x^k∂ X^k/∂ z^1∂ u^i/∂ x^2 .-∂ u^j/∂ x^1∂^2 u^i/∂ x^p ∂ x^2X^p - ∂ u^j/∂ x^1∂ u^i/∂ x^l∂ X^l/∂ z^2)dx^1∧ dx^2 : = C + D Rearranging terms in D and integrating by parts yields D = -∫_B(0,1)∂ u^i/∂ x^1∂/∂ x^pω_ij(u)X^p∂ u^j/∂ x^2dx + ∫_B(0,1)ω_ij(u)∂ u^i/∂ x^1∂ u^j/∂ x^2 - ∂ u^j/∂ x^1∂ u^i/∂ x^2∂ X^1/∂ z^1dx + ∫_B(0,1)ω_ij(u)∂ u^i/∂ x^1∂ u^j/∂ x^2 - ∂ u^j/∂ x^1∂ u^i/∂ x^2∂ X^2/∂ z^2dx + ∫_B(0,1)ω_ij(u)∂^2 u^j/∂ x^p∂ x^2X^p∂ u^i/∂ x^1dx + ∫_B(0,1)∂ u^j/∂ x^1∂/∂ x^pω(u)_ijX^p∂ u^i/∂ x^2dx - ∫_B(0,1)ω_ij(u)∂^2 u^i/∂ x^p ∂ x^2X^p∂ u^j/∂ x^1dx = ∫_B(0,1)ω_ij(u) u^i ∇ u^j(X)dx - ∫_B(0,1)∂ω_ij/∂ x^p∂ u^i/∂ x^1∂ u^j/∂ x^2 X^pdx - ∫_B(0,1)ω_ij(u)∂ u^i/∂ x^1∂ u^j/∂ x^2(X)dx - ∫_B(0,1)ω_ij(u)∂ u^i/∂ x^1∂^2 u^j/∂ x^p ∂ x^2X^p dx + ∫_B(0,1)ω_ij(u)∂^2 u^j/∂ x^p∂ x^2X^p∂ u^i/∂ x^1dx + ∫_B(0,1)∂ω_ij/∂ x^p∂ u^i/∂ x^2∂ u^j/∂ x^1 X^p dx +∫_B(0,1)ω_ij(u)∂ u^i/∂ x^2∂ u^j/∂ x^1(X)dx + ∫_B(0,1)ω_ij(u)∂ u^j/∂ x^1∂^2 u^i/∂ x^p ∂ x^2X^p dx - ∫_B(0,1)ω_ij(u)∂^2 u^i/∂ x^p ∂ x^2X^p∂ u^j/∂ x^1dx = ∫_B(0,1)ω_ij(u) u^i ∇ u^j(X)dx - ∫_B(0,1)ω_ij(u) u^i ∇ u^j(X)dx + ∫_B(0,1)∂ω_ij/∂ x^p∂ u^i/∂ x^2∂ u^j/∂ x^1 - ∂ u^i/∂ x^1∂ u^j/∂ x^2dx = - ∫_B(0,1)∂ω_ij/∂ x^p X^p u^i∇ u^j dx which implies dB/ds|_s = 0 = 0. Now if u_α is the critical point of E^ω_α, for any vector field X supported in unite disk B(0,1) we have 2α∑_i∫_B(0,1) τ_α + ∇_g_αu^2^α - 1⟨ du(∇_e_i X), du(e_i)⟩ dV_g_α = ∫_B(0,1)τ_α + ∇_g_αu^2^α(X)dV_V_g_α To obtain (<ref>), we choose a vector field X supported in B_ρ by X = η(r)r∂/∂ r = η(|x|)x^i∂/∂ x^i where η(r) is defined by η(r) = { 1 if r≤ t^', t - r/t - t^' if t^'≤ r ≤ t, 0 if r≥ t, . for 0 < t^' < t≤ρ < 1. Plugging this vector field into (<ref>), we obtain 0= (2 α-2) ∫_B(0,t)η(τ_α+|∇_g_α u_α|^2)^α-1|∇_0 u_α|^2 d x +∫_B(0,t) O(|x|)(τ_α+|∇_g_α u_α|^2)^α-1|∇_0 u_α|^2 d x -2 τ_α∫_B(0,t)η(τ_α+|∇_g_α u_α|^2)^α-1 d V_g_α +τ_α/t-t^'∫_B(0,t) \ B_t^' r(τ_α+|∇_g_α u|^2)^α-1 d V_g_α +1/t-t^'∫_B(0,t) \ B_t^'(τ_α+|∇_g_α u_α|^2)^α-1[|∇_0 u_α|^2 r-2 α r|∂ u_α/∂ r|^2] d x -∫_B(0,t)τ_α(τ_α+|∇_g_α u_α|^2)^α-1 r η∂φ/∂ r d V_g_α In equation (<ref>) taking t^'↗ t yields estimation (<ref>) ∫_∂ B(0,t)τ_α + ∇_g_α u_α^2^α - 1∂ u_α/∂ r^2 ds - 1/2α∫_∂ B(0,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2ds =1 - 1/α1/t∫_B(0,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2 dx + O(t). where we used co-area formula and Lemma <ref>. Since under the polar coordinates the metric tensor can be written as g_α = e^φ_αdr^2 + r^2 dθ^2, hence ∇ u_α^2 = ∂ u_α/∂ r^2 + 1/|x|^2∂ u_α/∂θ^2. Therefore, (<ref>) is obtained from above observation and (<ref>). Next, compared with previous Lemma <ref> we proceed to derive an alternative form of the Pohozave-type identity, which directly connects the angular component of the energy function with the radial component of the energy functional. Let (B(0,1),g_α) be the unit disk in ^2 with metric g_α = e^φ_α((dx^1)^2 + (dx^2)^2) where φ_α∈ C^∞(B(0,1)) and φ_α(0) = 0 for α > 1. If u_α is a α-H-surface being a critical point of E_α^ω(u, B(0,1)), then for any 0 < t < 1 the following holds ∫_∂ B(0,t)(∂ u_α/∂ r^2. 
- .1/r^2∂ u_α/∂θ^2)ds = -2(α - 1)/t∫_B(0,t)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2 r ∂ u_α/∂ r dx. Multiplying the Euler Lagrange equation (<ref>) by r ∂ u_α/∂ r written as polar coordinate of B(0,1) and integrating over B(0,t) to yield ∫_B(0,t) r ∂ u_α/∂ rΔ u_α dx = - (α - 1)∫_B(0,t)∇|∇_g_α u_α|^2·∇ u_α/τ_α+|∇_g_α u_α|^2 r ∂ u_α/∂ r dx + τ_α^α -1∫_B(0,t)H(u_α)( u_α, ∇ u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1 r ∂ u_α/∂ rdx. Integration by parts to lefthand integral of (<ref>) gets ∫_B(0,t) r ∂ u_α/∂ rΔ u_α dx = ∫_∂ B(0,t) t ∂ u_α/∂ r^2 ds - ∫_B(0,t)∇r ∂ u_α/∂ r·∇ u_α dx. The second integral of righthand of (<ref>) can be further computed as ∫_B(0,t)∇r ∂ u_α/∂ r·∇ u_α dx = ∑_i =1^2 ∫_B(0,t)∇x^i ∂ u_α/∂ x^i·∇ u_α dx = ∫_B(0,1)∇ u_α^2 dx + ∫_B(0,1)r/2∂∇ u_α^2/∂ r dx = ∫_B(0,1)∇ u_α^2 dx + t/2∫_∂ B(0,t)∇ u_α^2 ds - ∫_B(0,1)∇ u_α^2 dx = t/2∫_∂ B(0,t)∇ u_α^2 ds. On the other hand, we take a polar coordinate transformation, letting ∂ u_α/∂ x^1 = cosθ∂ u_α/∂ r - sinθ/r∂ u_α/∂θ and ∂ u_α/∂ x^2 = sinθ∂ u_α/∂ r + cosθ/r∂ u_α/∂θ, we can rewrite the mean curvature type vector term in (<ref>) as ∫_B(0,t)H(u_α)( u_α, ∇ u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1 r ∂ u_α/∂ rdx = ∫_B(0,t)H(u_α)( u_α, ∇ u_α)·(x ∇ u)/α(τ_α + |∇_g_α u_α|^2)^α - 1 = ∑_i,j,k = 1^K∫_B(0,t)1/α(τ_α + |∇_g_α u_α|^2)^α - 1 H^k_ij∂ u_α^k/∂ r∂ u^i/∂ r∂ u^j/∂θ - ∂ u^j/∂ r∂ u^i/∂θ dx The antisymmetric of H^k_ij in indices i, j and k, see (<ref>), tells us that the above quantity vanishes identically. Hence, combining (<ref>), (<ref>) and (<ref>) with (<ref>), we have ∫_∂ B(0,t)∂ u_α/∂ r^2 - 1/2∇ u_α^2 ds = - α - 1/t∫_B(0,t)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2 r∂ u_α/∂ r dx which leads to (<ref>) keeping in mind that |∇ u|^2 = ∂ u/∂ r^2 + 1/|x|^2∂ u/∂θ^2. §.§ Proof of Generalized Energy Identity — Theorem <ref> 5pt In this subsection, our goal is to establish the generalized energy identity for sequences of α-H-surfaces (being the critical points of E_α^ω) with uniformly bounded generalized α-energy. We will adapt the approach outlined by Ding-Tian <cit.> in showing the energy identity for a sequence of approximate harmonic maps with uniformly L^2-norm bounded tension field and Li-Wang <cit.> for sequences of α-harmonic maps. And it is important to emphasize the significance of the Pohozaev identity (<ref>) and (<ref>) in the proof. To prove Theorem <ref>, it is sufficient to focus on the simpler case of a single blow-up point, stated as below: Let (B(0,1),g_α)⊂^2 be the unit disk in ^2 equipped with sequence of conformal metric g_α = e^φ_α(x)((dx^1)^2 + (dx^2)^2) and g = e^φ(x)((dx^1)^2 + (dx^2)^2) where φ_α∈ C^∞(B(0,1)), φ_α(0) = 0 for α > 1 and φ_α→φ strongly in C^∞(B(0,1)) as α↘ 1. Let u_α∈ C^∞(B(0,1), N) be a sequence of α-H-surfaces satisfying * sup_α > 1E_α(u_α) ≤Λ < ∞ and 0 < β_0 ≤lim_α↘ 1τ_α^α - 1≤ 1, * u_α→ u_0 strongly in C^∞_locB(0,1)\{0}, ^K as α↘ 1. Then there exists a subsequence of u_α still denoted by u_α and a nonnegative integer n_0 such that for any i = 1, … , n_0 there exists a sequence of points x^i_α, positives number λ^i_α and a non-trivial H-sphere w^i such that all following statements hold: * x^i_α→ 0 and λ^i_α→ 0, as α↘ 1; * lim_α↘ 1(r^i_α/r^j_α + r^j_α/r^i_α + |x^i_α - x^j_α|/r^i_α + r^j_α) = ∞ for any i≠ j; * w^i is the weak limit of u_α(x_α^i + λ^i_α x) in W^1,2_loc(^2) * Generalized Energy Identity: lim_δ↘ 0lim_α↘ 1E_α(u_α, B(0,δ)) = ∑_i = 1^n_0μ_i^2E(w^i) where μ_i = lim_α↘ 1 (λ^i_α)^2- 2α. 
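To illustrate the factors μ_i appearing in the identity above, it may help to keep in mind the following model scalings; they are purely illustrative and are not claimed to occur for actual α-H-surfaces. If λ^i_α = (α - 1)^m for some m > 0, then (α - 1)logλ^i_α→ 0 and hence μ_i = 1; moreover λ^i_α^-√(α - 1)→ 1, which corresponds to the no-neck case of Theorem <ref>. If λ^i_α = e^-c/√(α - 1) for some c > 0, then again μ_i = 1, while λ^i_α^-√(α - 1) = e^c ∈ (1,∞), which corresponds to the finite-length geodesic case of Theorem <ref>. If λ^i_α = e^-c/(α - 1), then μ_i = e^2c > 1 and λ^i_α^-√(α - 1)→∞. In general, since λ^i_α→ 0, one always has μ_i ≥ 1; and if √(α - 1)|logλ^i_α| stays bounded along a subsequence, then (α - 1)|logλ^i_α| → 0 along it, so μ_i = 1. In other words, a non-trivial factor μ_i^2 in the generalized energy identity can only appear in the regime ν = ∞ of Theorem <ref>.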
§.§.§ Proof of Single Bubble Case for Theorem <ref> 5pt As a first step, we establish Theorem <ref> under the assumption of a single bubble, i.e., when n_0 = 1. The proof for the scenarios involving multiple bubbles will be presented in the next subsubsection. To begin, since 0 ∈ B(0,1) is the only energy concentration point as stated in Theorem <ref>, we can assume that the only bubble w is produced by the sequence w := lim_α↘ 1 w_α(x) = lim_α↘ 1 u_α(x_α + λ_α x) where λ_α = 1/max_B(0, 1/2)|∇_g_α u_α| and x_α is the point where the maximum is attained, that is, |∇_g_αu_α(x_α)| = max_x ∈ B(0, 1/2)|∇_g_αu_α(x)|. Then, ∇ w_α_L^∞ = ∇ w_α(0) =1. So, by Lemma <ref>, w_α converges strongly in C^∞_loc(^2) to a non-trivial H-surface w from ^2 to N. Moreover, by Lemma <ref> and identifying ^2 ∪∞≅^2, w actually extends to an H-sphere. Note that for any 0 < δ < 1/2, u_α converges to its weak limit u_0 strongly in C^∞(B(0,1)\ B(0,δ), N). For any large enough R > 0 and small enough α -1 such that λ_α R < δ, w_α converges to the H-sphere w strongly in C^∞(B(0,R), N). Therefore, the proof of our main Theorem <ref> reduces to studying the asymptotic behavior of u_α on the neck domain B(0,δ)\ B(x_α, λ_α R) as α↘ 1. Under the one-bubble hypothesis and by an argument similar to that of Ding-Tian <cit.>, we claim the following: For any ε > 0 there exist δ > 0 and R > 0 such that ∫_B(x_α,2t)\ B(x_α,t)∇_g_α u_α^2 dV_g_α≤ε for any t ∈ [λ_α R/2, 2δ] when α - 1 is small enough. We argue by contradiction. Suppose that Claim <ref> fails; then there exist a sequence α_j ↘ 1 and a sequence λ_α_j^'↘ 0 satisfying λ_α_j^'/λ_α_j→∞ such that ∫_B(x_α_j,2λ_α_j^')\ B(x_α_j,λ_α_j^')∇_g_α_j u_α_j^2 dV_g_α_j≥ε. Then, the rescaled map w^'_α_j(x) := u_α_j(λ_α_j^' x + x_α_j) converges strongly in C^∞_loc(^2\{0,x^1,…,x^m}, N) to some H-surface w^' for some energy concentration set {0,x^1,…,x^m}. If m = 0, by (<ref>) and the fact that λ_α_j^'/λ_α_j→∞, we can conclude that w^' is a non-constant H-surface that is different from w. This contradicts the one-bubble assumption. If m ≥ 1, then, arguing as before around some energy concentration point x^i of w^'_α_j, after passing to a subsequence we can construct a sequence of points x_α_j→ x^i and a sequence of radii λ_α_j→ 0 such that the second rescaled map w_α_j^'(x_α_j + λ_α_jx) converges to a non-trivial H-sphere w^i. Hence w_α_j^'(x_α_j + λ_α_jx) = u_α_j(x_α_j + λ_α_j^'(x_α_j + λ_α_jx)) →w^i as j →∞, which means that w^i is a second non-constant H-sphere, contradicting the one-bubble assumption again. In conclusion, Claim <ref> always holds. In the sequel, for any 0 < a < b <∞ and x∈ B(0,1) we will use the notation A(a,b,x) := { y ∈^2 : a ≤ |y - x| ≤ b } to denote the annulus centered at x with inner radius a and outer radius b. By the small energy regularity Lemma <ref>, we have the following. Assume the same hypotheses as in Theorem <ref> and that there is only one bubble for u_α as α↘ 1. Let δ > 0 be small enough, R > 0 large enough and α - 1 small enough such that α -1 ≤α_0 -1, where α_0 - 1 is chosen as in Lemma <ref>. Then for any λ_α R < a < b ≤δ, we have ∫_A(a,b,x_α)∇^2_g_α u_α·|x - x_α| ·∇_g_α u_α dV_g_α≤ C ∫_A(a/2, 2b, x_α)∇_g_α u_α^2 dV_g_α, where C is a constant independent of α as α↘ 1. Since both the left-hand and right-hand sides of (<ref>) are conformally invariant, it suffices to show ∫_A(a,b,x_α)∇^2 u_α·|x - x_α| ·∇ u_α dx ≤ C ∫_A(a/2, 2b, x_α)∇ u_α^2 dx. Without loss of generality, we can assume b = 2^Ia for some positive integer I ∈ℕ.
Let f : ×^1 →^2 be the mapping such that r = e^-ρ and θ = φ where ×^1 equipped with product metric f^*g = dρ^2 + dφ^2. Here, we use notation (r,θ) to represent the polar coordinates centered at x_α and (ρ,φ) be coordinate of cylinder ×^1. Then f is a conformal map from ×^1 into ^2. Let v_α(t,φ) = u_α(f)(t,φ) = u_α(e^-t,φ) which satisfies Euler Lagrange equation (<ref>) and hence fulfills Lemma <ref>. Thus after conformal transformation f the annulus A(2^i-1a, 2^i a, x_α) maps to be [i - 1 + log1/a, i + log1/a]×^1. Since we have assumed there is only one bubble, in view of (<ref>) we can apply small energy regularity Lemma <ref> to v_α on (<ref>) and transform the estimates back to u_α to derive |∇ u_α (x)|·x -x_α≤ C∫_A(2^i - 2a, 2^i + 1a, x_α)∇ u_α^2 dx^1/2, 1≤ i ≤ I, for any x ∈ A(2^i-1a, 2^i a, x_α) and constant C which is independent of 1≤ i ≤ I and α as α↘ 1. And similarly, we have |∇^2 u_α (x)|·x -x_α^2 ≤ C∫_A(2^i - 2a, 2^i + 1a, x_α)∇ u_α^2 dx^1/2, 1≤ i ≤ I. Then, combining (<ref>) and (<ref>) and taking summation with respect to i from 1 to I, we can get ∫_A(a,b,x_α)∇^2 u_α·x - x_α·∇ u_α dx = ∑_i = 1^I∫_A(2^i -1a , 2^ia, x_α)∇^2 u_α·x - x_α^2 ·∇ u_α·x - x_αdx/x - x_α^2 ≤∑_i = 1^Isup_x∈ A(2^i-1a, 2^ia, x_α)∇^2 u_α·x - x_α^2· sup_x∈ A(2^i-1a, 2^ia, x_α)∇ u_α·x - x_α∫_A(2^i-1a, 2^ia, x_α)dx/x - x_α^2 ≤ C∑_i = 1^I ∫_A(2^i - 2a, 2^i + 1a, x_α)∇ u_α^2 dx^1/2 = C ∫_A(a/2, 2b, x_α)∇ u_α^2 dx. This completes the proof of Lemma <ref>. In the polar coordinate, the energy functional has two components, namely, the radical part and the angular part. ∫∇ u_α^2 dx = ∫∂ u_α/∂ r^2 dx + ∫1/|x|^2∂ u_α/∂θ^2 dx. To show the generalized energy identity stated in Theorem <ref>, considering the Remark <ref> we first establish the following energy decay of the angular component: With the same assumption as Lemma <ref>, there holds lim_δ↘ 0lim_R→∞lim_α↘ 1∫_A(λ_α R, δ, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx = 0. Here, we always use the same (r,θ) to represent the polar coordinate systems centered at x_α as α↘ 1. Combining the deduction (<ref>) of one bubble assumption and small energy regularity Lemma <ref> with the conformal transformation argument described in the proof of Lemma <ref>, we have Osc_A(t,2t,x_α)(u_α) :=sup_x,y ∈ A(t,2t,x_α) u_α(x) - u_α(y) ≤ C ∇ u_α_L^2(A(t/2,4t,x_α)) for any t∈ (λ_α R, δ). Let u^*_α(t) := 1/2π t∫_∂ B(x_α,t)u_α = 1/2π∫_0^2π u_α(x_α + te^iθ) dθ Then, using (<ref>) we have u_α(x) - u^*_α(|x|)_L^∞(A(λ_α R, δ, x_α)) ≤sup_λ_α R ≤ t≤δu_α(x) - u^*_α(|x|)_L^∞(A(t, 2t, x_α)) ≤sup_λ_α R ≤ t≤δOsc_A(t,2t,x_α)(u_α) ≤ C ∇ u_α_L^2(A(t/2,4t,x_α))≤ Cε for any t∈ (λ_α R, δ). Next, by integration by part and recall the Euler Lagrange equation (<ref>) of α-H-surface we can estimate the energy on neck domain ∫_A(λ_α R, δ, x_α)∇ u_α^2 dx = ∫_A(λ_α R, δ, x_α)∇ u_α·∇u_α - u^*_αdx + ∫_A(λ_α R, δ, x_α)∇ u_α·∇ u^*_α dx = - ∫_A(λ_α, δ, x_α)Δ u_α·u_α - u^*_αdx + ∫_∂ A(λ_α, δ, x_α)∂ u_α/∂ r·u_α - u^*_α ds + ∫_A(λ_α R, δ, x_α)∇ u_α·∇ u^*_α dx = ∫_A(λ_α R, δ, x_α) A(u_α)(∇ u_α, ∇ u_α)·u_α - u^*_αdx + (α - 1)∫_A(λ_α R, δ, x_α)∇|∇_g_α u_α|^2·∇ u_α/τ_α+|∇_g_α u_α|^2·u_α - u^*_αdx + τ_α^α - 1∫_A(λ_α R, δ, x_α)H(u_α)( u_α, ∇ u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1u_α - u^*_αdx + ∫_∂ A(λ_α R, δ, x_α)∂ u_α/∂ r·u_α - u^*_α ds + ∫_A(λ_α R, δ, x_α)∇ u_α·∇ u^*_α dx. In the following, we estimate every terms obtained in last equation of (<ref>). 
For the last integral of (<ref>), by Jensen's inequality we see that ∫_A(λ_α R, δ, x_α) ∇ u_α·∇ u^*_α dx = ∫_A(λ_α R, δ, x_α)∂ u_α/∂ r·∂ u^*_α/∂ r dx ≤∫_A(λ_α R, δ, x_α)∂ u_α/∂ r^2dx^1/2∫_A(λ_α R, δ, x_α)∂ u^*_α/∂ r^2dx^1/2 = ∫_A(λ_α R, δ, x_α)∂ u_α/∂ r^2dx^1/2∫_A(λ_α R, δ, x_α)1/2π∫_0^2π∂ u_α/∂ r dθ^2 dx^1/2 ≤∫_A(λ_α R, δ, x_α)∂ u_α/∂ r^2dx = ∫_A(λ_α R, δ, x_α)∇ u_α^2dx - ∫_A(λ_α R, δ, x_α)1/|x - x_α|^2∂ u_α/∂θ^2dx. Next, for the boundary term in (<ref>) using trace theorem in Sobolev spaces and (<ref>) we have ∫_∂ A(λ_α, δ, x_α)∂ u_α/∂ ru_α - u^*_α ds ≤ C ε∫_∂ A(λ_α, δ, x_α)∇ u_α^2^1/2 ≤ Cε(∇ u_α_L^2A1/2λ_α R, 2λ_α R, x_α⋃ A1/2δ, 2δ, x_α. +.|x - x_α|·∇^2 u_α_L^2A1/2λ_α R, 2λ_α R, x_α⋃ A1/2δ, 2δ, x_α) ≤ Cε∇ u_α_L^2A1/2λ_α R, 2λ_α R, x_α⋃ A1/2δ, 2δ, x_α≤ Cε where the last inequality is obtained from the small energy regularity, Lemma <ref>. Furthermore, for the second integral in (<ref>), by Lemma <ref>, we can estimate (α - 1)∫_A(λ_α R, δ, x_α)∇|∇_g_α u_α|^2·∇ u_α/τ_α+|∇_g_α u_α|^2u_α - u^*_αdx ≤ 2(α - 1)C∫_A(λ_α R, δ, x_α)∇^2 u_α· |x - x_α|·|∇ u_α| dx ≤ 2(α - 1)C ∫_A(λ_α R/2, 2δ, x_α) |∇ u_α|^2 dx. At last, by (<ref>) it is easy to see that ∫_A(λ_α R, δ, x_α) A(u_α)(∇ u_α, ∇ u_α)u_α - u^*_αdx + τ_α^α - 1∫_A(λ_α R, δ, x_α)H(u_α)( u_α, ∇ u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1·u_α - u^*_αdx ≤ Cε∫_A(λ_α R, δ, x_α) |∇ u_α|^2 dx ≤ Cε Plugging the estimates (<ref>), (<ref>), (<ref>) and (<ref>) to (<ref>), we can obtain ∫_A(λ_α R, δ, x_α)1/|x - x_α|^2∂ u_α/∂θ^2dx ≤ Cε + C (α - 1) ∫_A(λ_α R/2, 2δ, x_α) |∇ u_α|^2 dx which yields lim_δ↘ 0lim_R→∞lim_α↘ 1∫_A(λ_α R, δ, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx = 0. Hence, we complete the proof of Lemma <ref>. As a corollary of Lemma <ref> and the uniformly boundedness of lim sup_α↘ 1τ_α + ∇_g_α u_α^2^α - 1_C^0(B(0,1))≤β_0 < ∞, see Lemma <ref>, we have With the same hypothesis as Lemma <ref>, there holds lim_δ↘ 0lim_R→∞lim_α↘ 1∫_A(λ_α R, δ, x_α)τ_α + ∇_g_α u_α^2^α - 11/|x - x_α|^2∂ u_α/∂θ^2 dx = 0. To simplify the notation, considering the Pohozaev identity (<ref>), for 0 < t< 1 we define the quantities ℰ_α(t) = ∫_B(x_α, λ_α^t)τ_α + |∇_g_α u_α|^2^α - 1∇ u_α^2 dx. Besides, for fixed 0 < t_0 < 1 and 0<t < t_0 < 1 we define ℰ_r,t_0,α(t) = ∫_A(λ_α^t_0, λ_α^t, x_α)τ_α + |∇_g_α u_α|^2^α - 1∂ u_α/∂ r^2 dx and ℰ_θ,t_0,α(t) = ∫_A(λ_α^t_0, λ_α^t, x_α)τ_α + |∇_g_α u_α|^2^α - 11/|x - x_α|^2∂ u_α/∂θ^2 dx. Therefore, for t ∈ (0,t_0) the Pohozaev identity (<ref>) can be rewritten as 1 - 1/2αℰ^'_r,t_0, α(t) - 1/2αℰ^'_θ,t_0, α(t) = 1 - 1/αlogλ_αℰ_α(t) + O(λ_α^t logλ_α). Integrating with respect to t to get 1 - 1/2αℰ_r,t_0, α(t) - 1/2αℰ_θ,t_0, α(t) = 1/2∫_t_0^t 1/αlogλ_α^2(α - 1)ℰ_α(s) + Oλ_α^s logλ_α ds On the one hand, since the generalized α-energy E_α is uniformly bounded and using (<ref>) and (<ref>) one can see that 1 -1/2αℰ_r,t_0, α(·) - 1/2αℰ_θ,t_0, α(·)_C^1([τ, t_0]) is uniformly bounded for any 0 < τ < t_0/2. On the other hand, by Lemma <ref> ℰ_θ,t_0, α(·)_C^1(δ, t_0)→ 0 as α↘ 1 for any τ > 0. Therefore, we can conclude that the sequences ℰ_α(t)_α↘ 1, ℰ_r,t_0, α(t)_α↘ 1 and ℰ_θ,t_0, α(t)_α↘ 1 are compact in C^0([τ, t_0]) norm for any 0 < τ < t_0/2, which implies there exists functions ℰ: (0,t_0) →_+ and ℰ_r,t_0: (0,t_0) →_+ such that for any τ > 0 ℰ_α→ℰ and ℰ_r,t_0,α→ℰ_r,t_0 in C^0([τ,t_0]) as α↘ 1. 
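In the proof of the next lemma we will also use the following elementary observation: if c ≥ 0 and μ≥ 1 are constants, then the unique continuous solution of the linear integral equation F(t) = -logμ∫_t_0^tF(s) + c ds, 0 < t ≤ t_0, is F(t) = c(μ^t_0 - t - 1). Indeed, differentiating both sides yields F^'(t) = -logμF(t) + c together with F(t_0) = 0, and the stated formula is the unique solution of this initial value problem.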
Based on Lemma <ref> and the construction above quantities, we can establish the following With the same hypothesis as Theorem <ref> , for any t∈(0,1), there holds lim_α↘ 1∫_B(x_α, λ_α^t)τ_α + ∇_g_α u_α^2^α - 1|∇_g_α u_α|^2 dV_g_α = μ^1 - tΛ where Λ : = lim_R→∞lim_α↘ 1∫_B(x_α, λ_α R)∇_g_α u_α^2α dV_g_α = lim_R→∞lim_α↘ 1∫_B(0,R)∇_g_α w_α^2αλ_α^2 -2α dV_g_α = μ E(w) To begin, we decompose the integral of (<ref>) as below ∫_B(x_α, λ_α^t)τ_α + ∇_g_α u_α^2^α - 1|∇_g_α u_α|^2 dV_g_α = ∫_B(x_α, λ_α^t)τ_α + ∇_g_α u_α^2^α - 1|∇ u_α|^2 dx = ℰ_r,t_0,α(t) + ℰ_θ,t_0,α(t) + ℰ_α(t_0) By Corollary <ref>, we know that lim_α↘ 1ℰ_θ,t_0,α(t) = 0 for any 0< t_0 < 1. Then, in (<ref>) letting α↘ 1 we see that lim_α↘ 1ℰ_r,t_0,α(t) = ℰ_r,t_0(t) = -∫_t_0^tlogμℰ(s) ds. Recalling that ℰ_α(t) - ℰ_α(t_0) = ℰ_r,t_0,α - ℰ_θ,t_0,α and (<ref>), letting α↘ 1 we have ℰ(t) - ℰ(t_0) = ℰ_r,t_0. Plugging this identity into (<ref>), we have that ℰ_r,t_0(t) = -logμ∫_t_0^tℰ_r,t_0(s) + ℰ(t_0) ds. Solving this integral equation, we see that ℰ_r,t_0(t) = μ^t_0 - tℰ(t_0) - ℰ(t_0). Now, taking α↘ 1 in (<ref>) and utilizing (<ref>) we have lim_α↘ 1∫_B(x_α, λ_α^t)τ_α + ∇_g_α u_α^2^α - 1|∇ u_α|^2 dx = μ^t_0- tℰ(t_0) Therefore, to show (<ref>) it suffices to prove lim_t_0 → 1ℰ(t_0) = Λ. To see this, integrating the Pohozaev identity (<ref>) with respect to t from λ_α R to λ_α^t_0, we get 0 ≤ℰ_α(t_0) - ∫_B(x_α, λ_α R)τ_α + ∇_g_α u_α^2^α - 1∇ u_α^2 dx ≤ C∫_A(λ_α R ,λ_α^t_0,x_α)τ_α + ∇_g_α u_α^2^α - 11/|x - x_α|^2∂ u_α/∂θ^2 dx +C ∫_λ_α R^λ_α^t_0α - 1/r dr + C(λ_α^t_0 - λ_α R). Note that for the second integral on the righthand of (<ref>) we have lim_t_0 → 1lim_R →∞lim_α↘ 1∫_λ_α R^λ_α^t_0α - 1/r dr = lim_t_0 → 11 - t_0/2logμ = 0. Moreover, keeping in mind that (<ref>) and the definition of w_α, see (<ref>), we see that lim_t_0 → 1 F(t_0) = lim_t_0 → 1lim_R →∞lim_α↘ 1∫_B(x_α, λ_α R)τ_α + ∇_g_α u_α^2^α - 1∇ u_α^2 dx = lim_R →∞lim_α↘ 1∫_B(0, R)λ_α^2τ_α + ∇_g_α w_α^2^α - 1λ_α^2 - 2α∇ w_α^2 dx = μ E(w) = Λ. Therefore, we prove the (<ref>) and complete the proof of Lemma <ref>. Now we are in a position to prove the generalized energy identity—Theorem <ref> when there is only one bubble during the blow-up procedure. Integrating the Pohozaev identity (<ref>) with respect to t over the interval [λ^t_α, δ] for some t ∈ (0,1), we have ∫_A(λ_α^t, δ, x_α) τ_α + ∇_g_α u_α^2^α - 1|∇ u_α|^2 dx ≤ C ∫_A(λ_α^s, δ, x_α)τ_α + ∇_g_α u_α^2^α - 11/|x - x_α|^2∂ u_α/∂θ^2 dx + C∫_λ_α^s^δα - 1/rdr + C(δ - λ_α^s). By Corollary <ref>, we have lim_δ↘ 0lim_s→ 0lim_α↘ 1∫_A(λ_α^s, δ, x_α)τ_α + ∇_g_α u_α^2^α - 11/|x - x_α|^2∂ u_α/∂θ^2 dx = 0 Moreover, by direct computations we have lim_δ↘ 0lim_s→ 0lim_α↘ 1∫_λ_α^s^δα - 1/rdr = lim_s→ 0s/2logμ = 0 Therefore, plugging (<ref>) and (<ref>) into (<ref>) we have lim_δ↘ 0lim_s→ 0lim_α↘ 1∫_A(λ_α^s, δ, x_α)τ_α + ∇_g_α u_α^2^α - 1|∇ u_α|^2 dx = 0. Moreover, utilizing Lemma <ref> we know that lim_s→ 0lim_α↘ 1∫_B(x_α,λ_α^s)τ_α + ∇_g_α u_α^2^α - 1|∇ u_α|^2 dx = μΛ = μ^2 E(w) In the end, by Lemma <ref>, we can conclude that lim_δ↘ 0lim_α↘ 1∫_B(x_α, δ)τ_α + ∇_g_α u_α^2^αdV_g_α = μ^2 E(w). which completes the proof of Theorem <ref> when n_0 = 1. §.§.§ Proof of General Case for Theorem <ref>. 5pt In this subsubsection, we employ an induction argument on the number of bubbles n_0 to complete the proof of Theorem <ref>. Since we have proved the Theorem <ref> when n_0 = 1 in Subsubsection <ref>, now suppose that the generalized energy identity asserted in Theorem <ref> holds when there are n_0 - 1 many bubbles for sequence u_α as α↘ 1. 
Firstly, recall that the first bubble w^1 for α-H-surfaces u_α are constructed by sequence w^1 := lim_α↘ 1 w^1_α(x) = lim_α↘ 1 u_α(x_α^1 + λ^1_α x) where λ^1_α = 1/max_B(0, 1/2)|∇_g_α u_α| and x_α^1 is the point where the maximum is taken on, that is, |∇_g_αu_α(x_α)| = max_x ∈ B(0, 1/2)|∇_g_αu_α(x)|. Then, ∇ w^1_α_L^∞(^2) = ∇ w^1_α(0) =1. So, by Lemma <ref>, w^1_α converges strongly in C^∞_loc(^2) to the first non-trivial H-sphere w^1 modulo a conformal transformation from ^2 ∪∞ onto ^2 and removing the singularity ∞, see Lemma <ref>. Then, similarly, we assume that the remaining n_0-1 many bubbles are produced by sequences w^i : = lim_α↘ 1w^i_α= lim_α↘ 1 u_α(x_α^i +λ_α^i x) strongly in C^∞_loc(^2\𝔖^i) for some sequences of points x_α^i → 0 and λ_α^i → 0 as α↘ 1 satisfying the alternative <ref> or <ref>, 2 ≤ i ≤ n_0. Here, 𝔖^i ⊂^2 are finite sets consisting of energy concentration points for sequences w_α^i as α↘ 1. By our choice of first bubble, we see that λ_α^1 = min_1≤ i ≤ n_0λ_α^1, λ_α^2, …, λ_α^n_0. For notation simplicity, we assume that λ_α^n_0 = max_1≤ i ≤ n_0λ_α^1, λ_α^2, …, λ_α^n_0 and define λ_α = λ_α^n_0 + ∑_i = 1^n_0 - 1x_α^n_0 - x^i_α/n_0 -1. Thanks to the choice of λ_α and through a complete similar argument as Claim <ref>, we also have For any ε > 0 there exists δ > 0 and R > 0 such that ∫_B(x_α^1,2t)\ B(x_α^1,t)∇_g_α u_α^2 dV_g_α≤ε for any t ∈λ_α R/2, 2δ when α - 1 is small enough such that λ_α R ≤δ. Then, we can apply the argument of subsubsection <ref> in proving the Theorem <ref> to conclude that lim_α↘ 1∫_B(x_α^1,δ)τ_α + ∇_g_α u_α^2^α dV_g_α = lim_R →∞lim_α↘ 1∫_B(0,R)τ_αλ_α^2 + ∇_g_αu_α^2^αλ_α^2 - 2α dV_g_α + (B(0, δ)) lim_α↘ 1τ_α + ∫_B(0, δ)∇ u_0^2 dx, where u_α(x) :=u_α(x_α^1 + λ_α x), g_α(x) = e^φ_α(x_α^1 + λ_α x)dx^1^2 + dx^2^2 and u_0 is the weak limit of u_α. Note that as a corollary of Claim <ref> there exists a large R > 0 such that all the energy concentration points 𝔖 of u_α belongs to B(0, R). Then, u_α converges to some H-surface w strongly in C^∞_loc(^2\𝔖). Then, we proceed the induction argument conditionally depending on whether w is trivial or not. * w is a non-constant H-sphere. Then, there must holds lim sup_α↘ 1∑_i = 1^n_0 - 1x_α^n_0- x_α^i/(n_0 - 1)λ_α^n_0 < ∞ Otherwise, there will exists one more bubble which is distinct from w^i for 1 ≤ i ≤ n_0 around 0 contradicting the assumption of Theorem <ref>. Thus, we see that w is exactly the w^n_0 after formulating a conformal transformation and hence lim_α↘ 1λ_α^2 -2 α = lim_α↘ 1λ_α^n_0^2 -2α = μ_n_0. Furthermore, by our choice of λ_α we know that the later one of the alternatives <ref> and <ref> must hold, that is, λ_α^i / λ_α^n_0→ 0 as α↘ 0 for any 1 ≤ i ≤ n_0 - 1. Observe that u_αx_α^i + λ_α^i/λ_α^n_0x - x_α^i = w_α(x)→ w^i as α↘ 1, where x_α^i = 1/λ_α - λ_α^ix_α^i - x_α^i. This means w^1, …, w^n_0 - 1 are exactly all bubbles of u_α. Now, we consider the functional E^ω_α, n_0(w) = 1/2∫_B(0, R)τ_αλ_α^2 + ∇_g_α w^2^α dV_g_α + τ^α - 1_αλ_α^2α -2∫_B(0,R) w^* ω. And we can apply the induction assumption to this functional and for sequence u_α to get lim_α↘ 1∫_B(0,R) τ_αλ_α^2 + ∇_g_αu_α^2^α - 1∇_g_αu_α^2 dV_g_α = E(w, B(0, R)) + ∑_i = 1^n_0 - 1lim_α↘ 1λ_α^i/λ_α^4 - 4α E(w^i). Since λ_α^2 - 2αlim_α↘ 1∫_B(0,R) τ_αλ_α^2 + ∇_g_αu_α^2^α - 1∇_g_αu_α^2 dV_g_α = ∫_B(x_α^1, λ_α R)τ_α + ∇_g_α u_α^2 ^α - 1∇_g_α u_α^2 dV_g_α, we can conclude that lim_R →∞lim_α↘ 1∫_B(x_α^1, λ_α R)τ_α + ∇_g_α u_α^2 ^α - 1∇_g_α u_α^2 dV_g_α = μ_n_0^2 E(w) + ∑_i = 1^n_0 - 1μ_i^2 E(w^i). 
Combining (<ref>) and (<ref>), we will get lim_α↘ 1∫_B(x_α,δ)τ_α + ∇_g_α u_α^2^α dV_g_α = (B(0,δ))lim_α↘ 1τ_α + E(u_0) + ∑_i = 1^n_0μ_i^2 E(w^i). * w is constant. Then, there are at least two distinct energy concentration points for sequence u_α. This means at each energy concentration point there at most have n_0 - 1 many bubbles. And one can apply the induction assumption and utilize a similar argument as the previous case to conclude the desired generalized energy identity stated in Theorem <ref>. Therefore, whether w is trivial or not, both cases contribute to the completion of the proof for Theorem <ref>. §.§ Proof of Asymptotic Behavior on Necks—Theorem <ref> 5pt In this subsection, we will examine the convergent behaviors of necks for sequence u_α as α↘ 1 and present the proof of our second main consequence, Theorem <ref>. As in the previous Subsection <ref>, it suffices to consider the following simple case to prove the Theorem <ref>. Let (B(0,1),g_α)⊂^2 be the unit disk equipped with sequence of conformal metric g_α = e^φ_α((dx^1)^2 + (dx^2)^2) and g = e^φ((dx^1)^2 + (dx^2)^2) where φ_α∈ C^∞(B(0,1)), φ_α(0) = 0 for α > 1 and φ_α→φ strongly in C^∞(B(0,1)) as α↘ 1. Let u_α∈ C^∞(B(0,1), N) be a sequence of α-H-surfaces satisfying * sup_α > 1E_α(u_α) ≤Λ < ∞ and 0 < β_0 ≤lim_α↘ 1τ_α^α - 1≤ 1, * u_α→ u strongly in C^∞_locB(0,1)\{0}, ^K as α↘ 1. We further assume there is only one bubble w^1: ^2 → N around 0 ∈ B(0,1) for sequence u_α as α↘ 1. Let ν^1 = lim inf_α↘ 1λ^1_α^-√(α - 1). Then there exists a subsequence of u_α still denoted by u_α, a sequence of points x_α and a sequence of positive numbers λ_α such that the following statements hold: * when ν^1 = 1, the set u_0B^M(x_1,1)⋃ w^1(^2) is a connected subset of N where u_0 is the weak limit of u_α in W^1,2(M,N) as α↘ 1; * when ν^1∈ (1,∞), the set u_0B^M(x_1,1) and w^1(^2) are connected by a geodesic Γ⊂ N with length L(Γ) = √(E(w^1)/π)logν^1; * when ν^1 = ∞, the neck contains at least an infinite length geodesic. §.§.§ No Neck Property for the case Lg 5pt In this subsubsection, we focus on the case where ν = 1 in Theorem <ref> to demonstrate that the base map and all bubbles are directly connected. Considering the Remark <ref>, similarly to the construction of first bubble described in Subsubsection <ref>, let x_α∈ B(0, δ) be the maximum point of ∇_g_α u_α on B(0, δ). Since 0 is the only blow-up point, there must have lim_α↘ 1 x_α = 0. The first bubble w for α-H-surfaces u_α is constructed by sequence w := lim_α↘ 1 w_α(x) = lim_α↘ 1 u_α(x_α + λ_α x). And 1 ≤μ≤ν =1, Theorem <ref> tells us that the energy identity holds, that is, lim_δ↘ 0lim_R→∞lim_α↘ 1∫_A(λ_α R, δ, x_α)∇ u_α^2 dx = 0. Without loss of generality, we can assume for each α > 1 there is a positive integer k_α such that δ = 2^k_αλ_α R. For k = 1, …, k_α -1 and 0 ≤ t ≤min{k_α - k, k}, we define Q(k,t) = A2^k-tλ_α R,2^k+tλ_α R,x_α, and ℱ_α,k(t) = ∫_Q(k,t)∇ u_α^2 dx. By the same estimate techniques as in Lemma <ref>, as a consequence of (<ref>) we can obtain ∫_Q(k,t)∇ u_α^2 dx ≤ Cε∫_Q(k,t)∇ u_α^2dx+ C(α - 1)∫_Q(k,t+1)∇ u_α^2dx + ∫_∂ Q(k,t)∂ u_α/∂ ru_α - u^*_α ds + ∫_Q(k,t)∂ u_α/∂ r^2dx for any small enough ε > 0 that will be determined later. Next, we want to utilizing Pohozaev identity (<ref>) obtained in Lemma <ref> to control the term ∫_Q(k,t)∂ u_α/∂ r^2dx occurring in righthand of (<ref>). Integrating ∫_∂ B(x_α,s)(∂ u_α/∂ r^2. - .1/|x - x_α|^2∂ u_α/∂θ^2)ds = -2(α - 1)/s∫_B(x_α,s)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2· |x - x_α|·∂ u_α/∂ r dx. 
with respect to s from 2^k-tλ_α R to 2^k+ tλ_α R, we can get ∫_Q(k,t) (∂ u_α/∂ r^2. - .1/x - x_α^2∂ u_α/∂θ^2)ds = - ∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s∫_B(x_α,s)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2· |x - x_α|·∂ u_α/∂ r dx ds ≤ C ∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s∫_A(λ_α R, δ, x_α)x - x_α·∇_g_α^2 u_α·∂ u_α/∂ rdxds + ∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s∫_B(x_α, λ_α R)x - x_α·∇_g_α^2 u_α·∂ u_α/∂ rdxds For the first integral in the righthand of (<ref>), utilizing the energy identity (<ref>) and together with the small energy identity Lemma <ref>, when α - 1 is small we have that ∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s ∫_A(λ_α R, δ, x_α)x - x_α·∇_g_α^2 u_α·∂ u_α/∂ rdxds ≤∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s∫_A(λ_α R/2, 2δ, x_α)∇ u_α^2 dx ds ≤ C(α - 1)∫_2^k-tλ_α R^2^k+ tλ_α R1/sds≤ C (α - 1)t. For the second integral in the righthand of (<ref>), we see that ∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s ∫_B(x_α, λ_α R)x - x_α·∇_g_α^2 u_α·∂ u_α/∂ rdxds = ∫_2^k-tλ_α R^2^k+ tλ_α R2(α - 1)/s∫_B(0, R)x·∇_g_α^2 w_α·∇ w_αdxds ≤ C(α - 1)∫_2^k-tλ_α R^2^k+ tλ_α R1/sds≤ C (α - 1)t. Here, we used that the α-energy of u_α is uniformly bounded in estimates (<ref>) and (<ref>). Plugging (<ref>) and (<ref>) into (<ref>) and keeping in mind that (<ref>), we obtain ∫_Q(k,t)∂ u_α/∂ r^2 dx≤1/2∫_Q(k,t)∇ u_α^2dx + C(α - 1)t for small enough α - 1. Combining inequalities (<ref>) and (<ref>), we obtain 1/2 - C ε∫_Q(k,t)∇ u_α^2dx ≤∫_∂ Q(k,t)∂ u_α/∂ ru_α - u^*_α ds + C(α - 1)(t + 1) where we choose ε > 0 such that C ε < 1/4. Furthermore, for the boundary term in (<ref>) we observe that ∫_∂ Q(k,t) ∂ u_α/∂ r·u_α - u^*_α ds = ∫_∂ Q(k,t)√(|x - x_α|)·∂ u_α/∂ r·u_α - u^*_α·1/√(|x - x_α|) ds ≤1/2∫_∂ Q(k,t)∂ u_α/∂ r^2· |x - x_α| ds + ∫_∂ Q(k,t)u_α - u^*_α^2·1/|x - x_α| ds ≤1/2∫_∂ Q(k,t)∂ u_α/∂ r^2· |x - x_α| ds + ∫_∂ Q(k,t)∂ u_α/∂θ^2·1/|x - x_α| ds ≤1/2∫_∂ Q(k,t)∇ u_α^2· |x - x_α| ds = 2^k + t -1λ_α R ∫_∂ B(x_α, 2^k + tλ_α R)∇ u_α^2 ds - 2^k -t -1λ_α R ∫_∂ B(x_α, 2^k-tλ_α R)∇ u_α^2 ds. Recall the definition of ℱ_α,k(t), (<ref>) tells us that ∫_∂ Q(k,t)∂ u_α/∂ ru_α - u^*_α ds ≤1/2log 2ℱ^'_α,k(t). Thus, plugging the (<ref>) into (<ref>) we get (1 - C ε)ℱ_α,k(t) ≤1/log 2ℱ^'_α,k(t) + C(α - 1)(t + 1) Let σ = 1 - Cε∈ (0,1), multiplying the both side of above inequality by 2^-σ t yields 2^-σ t F_α,k(t)^'≥ - 2^-σ tC(α -1)(t + 1). Integrating from 2 to T for some T ∈ℕ, we have ℱ_α, k(2) ≤ C 2^-σ Tℱ_α, k(T) + C(α - 1). Let T = T_k := min{k, k_α -k }, then we get ∫_Q(k,2)∇ u_α^2 dx ≤ C2^-σ T_k∫_A(λ_α R/2, 2δ, x_α)∇ u_α^2dx + C(α - 1). Thus, by small energy regularity Lemma <ref>, we have Osc_Q(k,1)(u_α) ≤∫_Q(k,2)∇ u_α^2 dx^1/2 ≤ C2^-σ/2 T_k∫_A(λ_α R/2, 2 δ, x_α)∇ u_α^2^1/2 + C√(α -1). which implies Osc_A(λ_α R,δ,x_α)(u_α) ≤∑_k = 1^k_α Osc_Q(k,1)(u_α) ≤ C∫_A(λ_α R/2, 2 δ, x_α)∇ u_α^2^1/2 + C√(α - 1)logδ - log (λ_α R) Therefore, keeping in mind that lim_α↘ 1λ_α^- √(α - 1) = ν = 1 we can conclude that lim_δ↘ 0lim_R→∞lim_α↘ 1Osc_A(λ_α R ,δ,x_α)(u_α) = 0 which shows that the set u_0(B(0,1))⋃ w^1(^2) is a connected subset of N, as desired in part <ref> of Theorem <ref>. §.§.§ Asymptotic Neck Analysis and Length Formula for Lg 5pt In this subsubsection, if 1 < ν < ∞ we demonstrate that the neck domain converges to a geodesic of finite length in N, allowing us to derive the formula for the length of this geodesic. And if ν = ∞, we prove that the neck domain converges to a geodesic of infinite length. These cases present a higher level of complexity, necessitating the introduction of several preliminary lemmas before proving the main consequences. 
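Before turning to the detailed estimates, the following heuristic computation, which anticipates the radial derivative asymptotics established in Corollary <ref> below, may help to explain where the length formula of Theorem <ref> comes from. On the circle |x - x_α| = λ_α^t with 0 < t < 1 one expects |∂ u_α/∂ r| ≈√(α - 1)μ^1-t√(E(w^1)/π)1/r, the angular derivative being of lower order. Hence the length of the image of the neck region λ_α^t_2≤ |x - x_α| ≤λ_α^t_1, where 0 < t_1 < t_2 < 1, is approximately ∫_λ_α^t_2^λ_α^t_1√(α - 1)μ^1-t√(E(w^1)/π)dr/r = √(E(w^1)/π)√(α - 1)|logλ_α| ∫_t_1^t_2μ^1-t dt after the substitution r = λ_α^t. Since √(α - 1)|logλ_α| = logλ_α^-√(α - 1)→logν, and since μ = 1 whenever ν < ∞ (indeed, boundedness of √(α - 1)|logλ_α| forces (α - 1)|logλ_α| → 0), letting t_1 → 0 and t_2 → 1 suggests the value √(E(w^1)/π)logν asserted in Theorem <ref>, while for ν = ∞ the same computation suggests a neck of infinite length. The remainder of this subsection makes this heuristic rigorous.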
First, we note that With same hypothesis as Theorem <ref> and the assumption ν > 1, we have lim_α↘ 1(τ_α + ∇_g_α u_α^2)^α - 1_C^0(B(0, 1/2)) = μ. Since we have assumed there is only one bubble, there exists x_α∈ B(0, 1/2) such that 1/λ_α := max_x ∈B(0, 1/2)∇_g_α u_α (x) = ∇_g_α u_α (x_α) for small enough α - 1. On the one hand, lim_α↘ 1(τ_α + ∇_g_α u_α^2)^α - 1_C^0(B(0, 1/2)) ≥lim_α↘ 1∇_g_α u_α^2α - 2_C^0(B(0, 1/2)) = lim_α↘ 1λ_α^2-2α = μ. On the other hand, recalling we have assumed τ_α≤ 1 for α > 1 we estimate lim_α↘ 1(τ_α + ∇_g_α u_α^2)^α - 1_C^0(B(0, 1/2)) ≤lim_α↘ 1 2 ∇_g_α u_α^2α - 2_C^0(B(0, 1/2)) = lim_α↘ 1λ_α^2-2α = μ. First, similarly to the estimates in Lemma <ref>, we establish a more delicate decay estimates of angle component of the energy functional of u_α as α↘ 1, more precise description is following. With same hypothesises as Theorem <ref>, we further assume ν > 1. Then for any sequence t_α∈ [t_1 ,t_2] where 0 < t_1 ≤ t_2 < 1 and any R > 0, after choosing a subsequence, we have lim_α↘ 11/α - 1 ∫_A(λ_α^t_α/R, λ_α^t_αR, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx = 0. The proof splits into two steps. Firstly, we prove a weaker version of this Proposition <ref>, that is, we show that for any positive integer k, there exists a constant C that is independent of k such that lim_α↘ 11/α - 1 ∫_A(2^-kλ_α^t_α, 2^kλ_α^t_α, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx≤ C. Taking a small enough positive number γ < min{t_1, 1 -t_2} and t≤logλ_α^-γ/log 2 we can define Q(t): = A2^-tλ_α^t_α, 2^tλ_α^t_α,x_α. Transforming the integral domain of (<ref>) over Q(t), we can obtain (1 - Cε)∫_Q(t)∇ u_α^2 dx ≤ C(α - 1)∫_Q(t+1)∇ u_α^2dx + ∫_∂ Q(t)∂ u_α/∂ ru_α - u^*_α ds + ∫_Q(t)∂ u_α/∂ r^2dx. Next, similarly to the proof in the proof of Theorem <ref> for case ν =1 in Subsubsection <ref>, we want to utilizing Pohozaev identity (<ref>) obtained in Lemma <ref> to establish a more delicate estimates of the term ∫_Q(t)∂ u_α/∂ r^2dx occurring in righthand of (<ref>). Integrating ∫_∂ B(x_α,s) ∂ u_α/∂ r^2 - ∇ u_α^2 ds = -(α - 1)/s∫_B(x_α,s)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2·x - x_α·∂ u_α/∂ r dx with respect to s from 2^-tλ_α^t_α to 2^ tλ_α^t_α with respect to t, we can get ∫_Q(t) ∂ u_α/∂ r^2 - ∇ u_α^2 dx = - ∫_2^-tλ_α^t_α^2^tλ_α^t_αα - 1/s∫_B(x_α,s)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2·x - x_α·∂ u_α/∂ r dx ds. Combining (<ref>) and (<ref>) we have (1/2- Cε)∫_Q(t)∇ u_α^2 dx ≤ C(α - 1)∫_Q(t+1)∇ u_α^2dx + ∫_∂ Q(t)∂ u_α/∂ ru_α - u^*_α ds + ∫_2^-tλ_α^t_α^2^tλ_α^t_αα - 1/sI_α(s) ds, where we choose small enough ε > 0 such that Cε≤ 1/4 and we denote I_α(t) := -∫_B(x_α,t)∇∇_g_αu_α^2∇ u_α/τ_α + ∇_g_αu_α^2·x - x_α·∂ u_α/∂ r dx. For any 2^-tλ_α^t_α≤ r ≤ 2^t λ_α^t_α, utilizing Lemma <ref> we get that I(r) - I(λ_α^t_α) ≤ C ∫_A(2^-tλ_α^t_α,2^tλ_α^t_α,x_α)∇^2_g_α u_α· |x - x_α|·∇ u_α dx = C∫_A(2^-t - 1λ_α^t_α,2^t+ 1λ_α^t_α, x_α)∇ u_α^2 dx ≤ C∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α)∇ u_α^2 dx := η(γ,α), where in the last inequality we used the choice of t≤logλ_α^-γ/log 2. Next, we show that η(γ,α) → 0 as α↘ 1 and γ→ 0. To see this, by transforming the integral domain from [2^k-tλ_α R,2^k+tλ_α R ] into [λ_α^t_α + γ, λ_α^t_α - γ] in (<ref>), we have ∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α) τ_α + ∇_g_α u_α^2^α - 1∂ u_α/∂ r^2 dx ≤1/2α∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2dx + α - 1/α∫_λ_α^t_α + γ^λ_α^t_α - γ1/t∫_B(x_α,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2dxdt + C λ^2t_α - 2γ_α ≤1/2α∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2dx + C(α -1)γlog(λ_α)+Cλ^2t_α - 2γ_α. 
Keeping in mind that Lemma <ref>, given 0<ε < 1/8 choosing α - 1 small enough such that τ_α + ∇_g_αu_α^2^α - 1_C^0(B(x_α, δ)) - μ≤ε/2, plugging this into (<ref>) we have η(γ,α)= ∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α)∇ u_α^2 dx ≤1/2 + ε∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α)∇ u_α^2dx + C(α -1)/μγlog(λ_α) + ∫_A(λ_α^t_α + γ, λ_α^t_α - γ, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx + C/μλ^2t_α -2γ_α, which implies that lim_γ→ 0lim_α↘ 1I(r) - I(λ_α^t_α)≤lim_γ→ 0lim_α↘ 1η(γ,α) = 0, by recalling the Lemma <ref>. Therefore, by (<ref>) and (<ref>) we have 1/2 - Cε∫_Q(t)∇ u_α^2 dx ≤∫_∂ Q(t)∂ u_α/∂ ru_α - u^*_α ds + C(α - 1)∫_Q(t+1)∇ u_α^2dx + I(λ_α^t_α) + η(γ,α)∫_2^-tλ_α^t_α^2^tλ_α^t_αα - 1/t dt ≤∫_∂ Q(t)∂ u_α/∂ ru_α - u^*_α ds + C(α - 1)η(γ,α) + 2log(2) (α -1) I(λ_α^t_α) + η(γ,α)t. Define ℱ_α(t) : = ∫_Q(t)∇ u_α^2 dx, then by (<ref>) we can rewrite (<ref>) as (1 - Cε) ℱ_α(t) ≤1/log 2ℱ^'_α(t) + 4log(2) (α -1)I(λ_α^t_α)t + C(α - 1)η(γ,α)(t+1), which implies 2^-σ tℱ_α(t)^' ≥ - 4log^2(2)(α -1)2^-σ tI(λ_α^t_α)t - C(α - 1)2^-σ tη(γ,α)(t+1), where σ = 1 - C ε∈ (0,1) is a constant. Letting 2^T = λ_α^-γ and integrating above inequality (<ref>) from k to T yields ℱ_α(k) ≤ 2^σ(k - T)ℱ_α(T) + 4log(2)/σ(α - 1)I(λ_α^t_α)2^σ k∫_k^T 2^-σ tt dt +C(α - 1)η(γ,α) 2^σ k∫_k^T (t+1)2^-σ tdt ≤ 2^σ(k - T)ℱ_α(T)+ 4log(2)k/σ(α - 1)I(λ_α^t_α) +C(α - 1)I(λ_α^t_α) + η(γ,α)(k + 1), where we used the estimates ∫_k^T 2^-σ tt dt ≤k/σlog (2)2^-σ k + 1/σlog (2)^2 2^-σ k. On the other hand, utilizing the Pohozaev identity (<ref>), we obtain ∫_Q(k)∂ u_α/∂ r^2 - 1/|x - x_α|^2∂ u_α/∂θ^2 dx = 2 ∫_2^-kλ_α^t_α^2^kλ_α^t_αα - 1/t I_α(t) dt ≥ 4log (2) k(α - 1)(I_α(λ_α^t_α) - η(γ,α)). Next, subtracting (<ref>) by (<ref>) yields 2∫_Q(k)1/|x - x_α|^2∂ u_α/∂θ^2 dx ≤ 2^σ kλ_α^γσℱ_α(T) + (α - 1)4log(2)I_α(λ_α^t_α)1/σ - 1k + C(α - 1)I(λ_α^t_α) + C(α - 1)η(γ,α)k. Since ν = lim_α↘ 1λ_α^- √(α - 1) > 1, one have λ_α^γσ = o(α - 1)^m as α↘ 1, for any positive integer m > 0. Then in (<ref>), taking α↘ 1 first, then ε→ 0 and γ→ 0, yields lim_α↘ 11/α - 1 ∫_A(2^-kλ_α^t_α, 2^kλ_α^t_α, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx ≤ C lim_α↘ 1 I(λ_α^t_α) ≤ C, for some universal constant C > 0 independent of k. Next, we prove the assertion of Proposition <ref>, that is, for any k ∈ℕ lim_α↘ 11/α - 1 ∫_A(2^-kλ_α^t_α, 2^kλ_α^t_α, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx= 0. Utilizing Fubini's theorem we rewrite (<ref>) as 1/α - 1 ∫_A(2^-kλ_α^t_α, 2^kλ_α^t_α, x_α) 1/|x - x_α|^2∂ u_α/∂θ^2 dx = ∫_2^-kλ_α^t_α^2^kλ_α^t_α1/α - 1∫_0^2π∂ u_α/∂θ(r,θ)^2 dθdr/|x - x_α|≤ C, thus given any small ε > 0 there will always exist a large enough positive integer k_0, which is independent of α > 1, and L_α∈ [2^k_0, 2^k_0 + 1] such that 1/α - 1∫_0^2π∂ u_α/∂θ(L_αλ_α^t_α,θ)^2 dθ < ε and 1/α - 1∫_0^2π∂ u_α/∂θ1/L_αλ_α^t_α,θ^2 dθ < ε. From these two estimates (<ref>) and (<ref>), we can obtain a more delicate estimate of (<ref>) ∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α, x_α ) ∂ u_α/∂ ru_α - u^*_α ds ≤∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α , x_α ) |x - x_α|·∂ u_α/∂ r^2 ds^1/2· ∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α , x_α)∂ u_α/∂θ^2 ds^1/2 ≤√((α - 1)ε)∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α , x_α ) |x - x_α|·∂ u_α/∂ r^2 ds^1/2. Moreover, using (<ref>) and Corollary <ref> we have ∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α , x_α ) |x - x_α|·∂ u_α/∂ r^2 ds ≤ C ∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α , x_α )1/|x - x_a|∂ u_α/∂θ^2 ds + C(α - 1) + C λ_α^t_α, which implies ∫_∂ A(1/L_αλ_α^t_α, L_αλ_α^t_α, x_α )∂ u_α/∂ ru_α - u^*_α ds ≤ C√(ε)(α - 1). 
Applying (<ref>) to (<ref>) , we can obtain a more refined estimate (1 - Cε)∫_ A(1/L_αλ_α^t_α, L_αλ_α^t_α, x_α )∇ u_α^2 dx ≤ C√(ε)(α - 1) + C(α - 1)η(γ,α)log(L_α) + 1 + 4log(L_α)t (α -1)I(λ_α^t_α). Similar to (<ref>), we have ∫_A(1/L_αλ_α^t_α, L_αλ_α^t_α, x_α) ∂ u_α/∂ r^2 - 1/|x - x_α|^2∂ u_α/∂θ^2 dx ≥4(α - 1)logL_α I(λ_α^t_α) - C(α - 1)η(γ,α)log(L_α). Subtracting (<ref>) by (<ref>) yields 2 ∫_A(1/L_αλ_α^t_α, L_αλ_α^t_α, x_α) 1/|x - x_α|^2∂ u_α/∂θ^2 dx ≤ C√(ε)(α - 1) + 1 - 1/1 - Cε4(α - 1)logL_α I(λ_α^t_α) + C(α - 1)η(γ,α)log(L_α) + 1. Thus, by the choice of log(L_α) ∈ [log(2)k_0, log(2)(k_0 +1)] and the fact (<ref>), taking α↘ 1 firstly and then letting ε↘ 0 will yield the assertion of Proposition <ref> immediately. As a corollary, we have the following observation which will be used later. With the same hypothesis as Proposition <ref>. For any fixed R > 0 and 0 < t_1 < t_2 < 1, we have lim_α↘ 1sup_t ∈ [t_1, t_2]1/α - 1 ∫_A1/Rλ_α^t_α, λ_α^t_α R, x_α1/|x - x_α|^2∂ u_α/∂θ^2 dx = 0. We prove by contradiction. If the assertion fails, then after choosing a subsequence there exists ε > 0 and t_α_k→ t_0 for some t_0 ∈ [t_1, t_2] such that 1/α - 1 ∫_A(1/Rλ_α_k^t_α_k, λ_α_k^t_α_k R, x_α_α)1/|x - x_α_k|^2∂ u_α/∂θ^2 dx ≥ε_0 However, Proposition <ref> tells us that lim_α↘ 11/α - 1 ∫_A(1/Rλ_α^t_α, λ_α^t_α R, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx = 0. for any sequence {t_α}_α↘ 1⊂ [t_1, t_2]. This is a contradiction. Note that by Lemma <ref>, we find that for any 0< t_1 ≤ t ≤ t_2 < 1 Osc_∂ B(x_α, λ^t_α) (u_α) ≤ C ∫_A(1/2λ^t_α, 2λ^t_α, x_α)|∇ u_α|^2 dx ^1/2 ≤ C ∫_A(1/2λ^t_α, 2λ^t_α, x_α)τ_α + ∇_g_α u_α^2^α - 1|∇ u_α|^2 dx → 0 as α↘ 1. which implies the u_α∂ B(x_α, λ_α^t_α) converges to some point of N as α↘ 1. With same hypothesises as Theorem <ref>, we further assume ν > 1. Then for any sequence t_α∈ [t_1 ,t_2] where 0 < t_1 ≤ t_2 < 1 and any R > 0, after choosing a subsequence, we have 1/√(α - 1)(u_αx_α + λ_α^t_αx - u(x_α + (λ_α^t_α,0))) →alog|x| strongly in C^2A1/R, R, 0,^K for any R > 0 and any integer k∈ℕ, here y = lim_α↘ 1 u_α∂ B(x_α, λ_α^t_α) and a∈ T_yN ⊂ T_y^K≅^K is a vector in ^K satisfying |a| = μ^1 - lim_α↘ 1 t_α√(E(w^1)/π). Let u^'_α(x) := u_α(x_α + λ_α^t_α x) and v_α(x) := 1/√(α - 1)(u_αx_α + λ_α^t_αx - u(x_α + (λ_α^t_α,0))). By (<ref>) and small energy regularity Lemma <ref>, recalling λ_α^γ = o(α - 1)^m for all γ > 0 and m∈ℕ we have ∇ u^'_α_C^0(A(2^-k, 2^k,0)) + ∇^2 u^'_α_C^0(A(2^-k, 2^k,0))≤ C(k)√(α - 1) which further implies ∇ v_α_C^0(A(2^-k, 2^k,0)) + ∇^2 v_α_C^0(A(2^-k, 2^k,0))≤ C(k), for some constant C(k) depending on k. Since v_α(1,0) = 0, the above estimate implies v_α_C^0(A(2^-k, 2^k,0))≤ C(k). By the Euler-Lagrange equation (<ref>) of u_α, one can check that v_α satisfies the following equation Δ v_α + √(α - 1)A(∇ v_α, ∇ v_α) + (α - 1)O(|∇^2 v_α|) = √(α - 1)H( v_α,∇ v_α) + √(α - 1)o(1) where o(1) tends to 0 as α↘ 1. By the compactness of PDE's theory, there exists a subsequence of v_α, which is still denoted by v_α, such that v_α→ v_0 in C^2_loc^2\{0} where v_0 is a harmonic function on ^2. Moreover, by Proposition <ref>, the angel component energy of v_0 vanishes, that is, v_0(x) = v_0(|x|). 
Thus, v_0 is a fundamental solution of Laplacian equation over ^2, without loss of generality we can write v_0 as v_0 = alogr = (a_1,…,a_K)logr, for some vector a∈ T_yN ⊂^K From (<ref>), we know that v_α satisfies ∫_∂ B(0,t) τ_α + ∇_g_α u_α^2^α - 1∇ v_α^2 ds = 2α/(2α - 1)∫_∂ B(0,t)τ_α + ∇_g_αu_α^2^α - 11/|x|^2∂ v_α/∂θ^2ds +2/(2α - 1)t∫_B(0,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2 dx + O(t)/α - 1, which implies lim_α↘ 1∫_ A(λ_α^t_α, 2λ_α^t_α, x_α)τ_α + ∇_g_α u_α^2^α - 1∇ v_α^2 dx = lim_α↘ 12α/(2α - 1)∫_A(λ_α^t_α, 2λ_α^t_α, x_α)τ_α + ∇_g_αu_α^2^α - 11/|x|^2∂ v_α/∂θ^2dx +lim_α↘ 12/(2α - 1)∫_λ_α^t_α^2λ_α^t_α1/t∫_B(0,t)τ_α + ∇_g_αu_α^2^α - 1∇ u_α^2 dx dt + lim_α↘ 1O(λ_α^t_α)/α - 1 = 2 log 2 μ^1 - lim_α↘ 1t_αΛ. Here, we used Lemma <ref> and Proposition <ref>. On the other hand, we observe that lim_α↘ 1∫_ A(λ_α^t_α, 2λ_α^t_α, x_α)τ_α + ∇_g_α u_α^2^α - 1∇ v_α^2 dx = lim_α↘ 1∫_A(1,2,0)τ_α + ∇_g_αv_α^2 α - 1/λ_α^2t_α^α - 1∇ v_α^2 dx = 2πlog 2 |a|^2 μ^lim_α↘ 1t_α. Therefore, combining above two identities (<ref>) and (<ref>) we have a^2 = Λ/πμ^1 - 2lim_α↘ 1 t_α. This completes the proof the Proposition <ref>. As a corollary of above Proposition <ref>, we can obtain the following result. Under the same assumption of Proposition <ref>, the following holds * For the radical direction, we have ∫_λ_α^t^2λ_α^t1/√(α - 1)∂ u_α/∂ r dr →log 2 μ^1-t√(E(w)/π) in C^0([t_1,t_2]), and 1/√(α - 1)r∂ u_α/∂ r(λ_α^t,θ) →μ^1-t√(E(w)/π) in C^0([t_1,t_2]); * For the angular direction, we have ∫_0^2π1/√(α - 1)1/r∂ u_α/∂θ(λ_α^t,θ) dθ→ 0 in C^0([t_1,t_2]) and 1/√(α - 1)1/r∂ u_α/∂θ(λ_α^t,θ) → 0 in C^0([t_1,t_2]) We only prove the angular direction case <ref>, the other statements in <ref> can be argued similarly and the proof can be found in <cit.>. For the case <ref>, it suffices to show the second assertion 1/√(α - 1)1/r∂ u_α/∂θ(λ_α^t,θ) → 0 in C^0([t_1,t_2]), since the first one of <ref> is a direct corollary of the second one. By contradiction, if it fails, then there exists a sequence t_α∈ [t_1,t_2] and θ_α∈ [0,2π] such that 1/√(α - 1)1/r∂ u_α/∂θ(λ_α^t_α,θ_α)≥ε_0 > 0 for some ε_0 > 0. But, Proposition <ref> tells us that for any θ∈ [0,2π] 1/√(α - 1)1/r∂ u_α/∂θ(λ_α^t_α,θ) → 0 in C^2 as α↘ 1 which contradicts to (<ref>) by the compactness of θ_α∈ [0,2π] modulo some subsequences. Thus, we complete the proof of Corollary <ref>. Next, we show the necks of α-H-surfaces u_α converges to a geodesic, that is, the base map u_0 and single bubble w are connected by some geodesic. To this end, we define the following curve γ_α(r) : =1/2π∫_0^2π u_α(r,θ) dθ : [λ_α^t_2, λ_α^t_1] →^K where (r,θ) is the polar coordinate around x_α. We denote the image of γ_α by Γ_α⊂ N. For convenience, we use the following notation γ̇_α := d γ_α/dr, γ̈_α := d^2 γ_α/dr^2. We directly compute that γ̈_α = 1/2π∫_0^2π∂^2 u_α/∂ r^2 dθ = 1/2π∫_0^2π∂^2 u_α/∂ r^2 + 1/r∂ u_α/∂ r + 1/r^2∂^2 u_α/∂θ^2dθ - 1/2π∫_0^2π1/r∂ u_α/∂ r dθ = 1/2π∫_0^2πΔ u_α dθ - 1/2π∫_0^2π1/r∂ u_α/∂ r dθ = - 1/2π∫_0^2π A(u_α)(∇ u_α, ∇ u_α)dθ - (α - 1)/2π∫_0^2π∇|∇_g_α u_α|^2·∇ u_α/τ_α +|∇_g_α u_α|^2 dθ +τ_α^α -1/2π∫_0^2πH(u_α)( u_α, ∇ u_α)/α(τ_α + |∇_g_α u_α|^2)^α - 1dθ - γ̇_α/r. We use h_α to denote the induced metric upon Γ_α in ^K and A_Γ_α to denote the second fundamental form restricted on Γ_α. 
Equipped with these notations, we have For any λ_α^t_α∈ [λ_α^t_2, λ_α^t_1], after choosing a subsequence, there holds γ̇_α(λ_α^t_α) = √(α - 1)/λ_α^t_α(a + o(1)), h_α(d/dr,d/dr) = γ̇_α^2 = α - 1/λ_α^2t_αa^2 + o(1) A_Γ_α(∇γ_α, ∇γ_α) = α - 1/λ_α^2t_αA(y)(a,a) + o(1), where a and y are constructed in Proposition <ref> and o(1) → 0 as α↘ 1. Moreover, for any t∈ [t_1, t_2], there exists a positive constant C > 0, such that A_Γ_α_h_α(λ_α^t) ≤ C. For any λ_α^t_α∈ [λ_α^t_2, λ_α^t_1], by Proposition <ref>, we have 1/√(α - 1)(u_α(x_α + λ_α^t_αx) - u(x_α + (λ_α^t_α,0))) →alog|x|, as α↘ 1 where a∈ T_yN ⊂ T_y^K = ^K is a vector in ^K satisfying |a| = μ^1 - lim_α↘ 1 t_α√(E(w^1)/π) and y = lim_α↘ 1 u_α(∂ B(x_α, λ_α^t_α)) = lim_α↘ 1 u_α(x_α + λ_α^t_αe^iθ). Then we have γ̇_α(λ_α^t_α) = 1/2π∫_0^2π∂ u_α/∂ r(λ_α^tα,θ)dθ = √(α - 1)/λ_α^t_α(a + o(1)) and hence h_α(d/dr,d/dr) = γ̇_α^2 = α - 1/λ_α^t_αa^2 + o(1) where o(1) → 0 as α↘ 1. Let G_α = - γ̈_α - γ̇_α/r and by equation(<ref>), Proposition <ref>, Corollary <ref> and the assumption 0<β_0 ≤λ_α^α - 1≤ 1 we can further compute that G_α(λ_α^t_α) = 1/2π∫_0^2π A(u_α)(∇ u_α, ∇ u_α)dθ + (α - 1)/2π∫_0^2π∇|∇_g_α u_α|^2·∇ u_α/τ_α+|∇_g_α u_α|^2 dθ -1/2απτ_α^α -1∫_0^2πH(u_α)( u_α, ∇ u_α)/(τ_α+ |∇_g_α u_α|^2)^α - 1dθ =α - 1/λ_α^2t_α1/2π∫_0^2πA(y)(a,a)dθ + o(1) + (α - 1)∫_0^2πO∇^2_g_αu_αdθ + 1/2π∫_0^2π OH(u_α)( u_α, ∇ u_α) dθ = α - 1/λ_α^2t_α1/2π∫_0^2πA(y)(a,a)dθ +√(α - 1)∫_0^2πOλ_α^2t_α∇^2_g_αu_α/√(α - 1)dθ + o(1) + α - 1/2πλ_α^2t_α∫_0^2πλ_α^2t_α/α - 1 O∂ u_α/∂ r1/λ_α^t_α∂ u_α/∂θ dθ = α - 1/λ_α^2t_α1/2π∫_0^2πA(y)(a,a)dθ +√(α - 1)∫_0^2πO∇^2_g_αv_αdθ + o(1) + α - 1/λ_α^2t_α∫_0^2π O∂ v_α/∂ r(λ_α^t_α,θ)·1/r∂ v_α/∂θ(λ_α^t_α,θ)dθ = α - 1/λ_α^2t_αA(y)(a,a) +O√(α - 1) + o(1) = α - 1/λ_α^2t_αA(y)(a,a) + o(1). Because ⟨ A(y)(a, a), a⟩ = 0, we have -A_Γ_α(∇γ_α,∇γ_α) = γ̈_α - ⟨γ̈_α, γ̇_α⟩/γ̇_α^2γ̇_α = - G_α + ⟨ G_α, γ̇_α⟩/γ̇_α^2γ̇_α = - α - 1/λ_α^2t_αA(y)(a,a) + o(1). which implies A_Γ_α_h_α(λ_α^t_α) < ∞. Since t_α∈ [t_1, t_2] is a arbitrary sequence, by a contradiction argument similar to Corollary <ref> we will obtain that for any t ∈ [t_1, t_2] A_Γ_α_h_α(λ_α^t) ≤ C, which completes the proof of Lemma <ref>. After choosing a subsequence, the sequence of curves Γ_α⊂^K, which is defined by γ_α and parametrized by its arc length, converges to geodesic a γ : [0, L] → (N,h) for some L ∈_+, that is, γ satisfies the following equation d^2 γ/ds^2 + A(γ)d γ/ds, d γ/ds = 0. Let s be the arc length parameter of γ_α(t) with s(λ_α^t_α) = 0 for t_α∈ [t_1, t_2] and y_α = γ_α(λ_α^t_α) = 1/2π∫_0^2πu_α(λ_α^t_α,θ)dθ. We know that the sequence {γ_α(λ_α^t_α)} = {y_α} is convergent and γ_α(s) satisfies equation d^2γ_α/ds^2 + A_Γ_α(γ_α)(d γ_α/ds,d γ_α/ds) = 0 for α > 1. Then, by the uniformly boundedness of A_Γ_α_h_α(λ_α^t) obtained in Lemma <ref> as α↘ 1, γ_α(s) converges locally in C^1([0,s_1], ^K) to a vector valued function for some small s_1 > 0, denoted by γ(s) which also parameterized by arc length. To show γ is a geodesic, that is, to show γ(s) solves d^2 γ/ds^2 + A(γ)d γ/ds, d γ/ds = 0, it suffices to show A_Γ_α(γ_α)(d γ_α/ds,d γ_α/ds) ⟶ A(γ)d γ/ds, d γ/ds strongly in C^0([0,s_1], ^K) for some small enough s_1 > 0. By contradiction, if not, then for any arbitrary small s_1, there always exists a subsequence of {u_α} still denoted by {u_α} and a sequence of λ_α^t^'_α such that s_α^' := s(λ_α^t^'_α) = ∫_λ_α^t_α^λ_α^t^'_αω̇_α(r)dr → s^'∈ (0,s_1) and A_Γ_α(γ_α)(d γ_α/ds,d γ_α/ds) - A(γ)d γ/ds, d γ/ds_s = s_α^' > ε_0 > 0, for some ε_0 > 0. 
Furthermore, we can choose small enough s_1 and small enough α - 1 such that t_α^'∈ [t_1/2, t_2]. In fact, without loss of generality, we assume there exists an integer T_α such that λ_α^t_1/2 = 2^T_αλ_α^t_α where T_α→∞ as α↘ 1. Utilizing Corollary <ref>, when α - 1 is small enough we have ∫_λ_α^t_α^λ_α^t_1/2γ̇_̇α̇(r) dr = ∑_k = 1^T_α∫_2^k - 1λ_α^t_α^2^kλ_α^t_αγ̇_̇α̇(r) dr ≥ T_α√(α - 1)log 2 √(E((w^1)/π) + o(1) ≥ C(t_α - t_1/2) logλ_α^-√(α - 1 ) ≥ C t_1/2logν > 0. Thus, if we let s_1 ≤ C t_1/2logν, we can make t_α^'∈ [t_1/2, t_2] when α - 1 is small enough. Therefore, we can apply Proposition <ref> and Lemma <ref> to yield that d γ_α/ds(s^'_α) = γ̇_α(λ_α^t^'_α)/|γ̇_α(λ_α^t^'_α)|⟶d γ/d s(s^') as α↘ 1 which furthermore implies that . A_Γ_α(γ_α)(d γ_α/ds,d γ_α/ds)|_s = s^'_α = .1/γ̇_αλ_α^t^'_α^2A_Γ_α(γ_α)( γ̇_α(r), γ̇_α(r))|_r = λ_α^t_α^' ⟶.A(γ)d γ/ds, d γ/ds|_s = s^' as α↘ 1. This contradicts to the choice of s_α^' asserted in (<ref>) and (<ref>), hence (<ref>) holds. Therefore, from the equation (<ref>) of γ_α and the convergence properties induced from Lemma <ref>: d γ_α/ds(r) = γ̇_α(r)/|γ̇_α(r)|⟶d γ/d s(s) as α↘ 1 and (<ref>) we will get d γ/ds(s) - d γ/ds(0) = - ∫_0^s A(γ)d γ/ds, d γ/ds ds for all s∈ [0,s_1] which is the integral formation of geodesic equation. Thus, γ(s) is a geodesic which completes the proof of Lemma <ref>. Now, we are in a position to prove the remaining cases of main Theorem <ref> Without loss of generality, we assume k_α = t_1 - t_2/log 2logλ_α is an integer, that is equivalent to λ_α^t_1 = 2^k_αλ_α^t_2, which tends to infinity as α↘ 1. Case 1: We first consider the case ν = ∞. For any 0 < t_1 < t_2 < 1, by Corollary <ref> there holds L(Γ_α|_A(2^kλ_α^t_2, 2^k + 1λ_α^t_2, x_α)) := ∫_2^kλ_α^t_2^2^k + 1λ_α^t_2 |γ̇_α (r)| dr ≥√(α - 1)log 2√(E(w^1)/π) + o(1) . Then, we can estimate L(Γ_α) ≥ C k_α√(α - 1)log 2√(E(w^1)/π) + o(1) ≥ C logλ_α^- √(α - 1)→∞ as α↘ 1. which means in this case the length L(Γ) of γ(s) is infinite. Case 2: Now, we consider the case 1 < ν < ∞. Note that 1 < ν < ∞ implies μ = 1 which is equivalent to the energy identity shown in Theorem <ref>, that is lim_δ↘ 0lim_R→∞lim_α↘ 1∫_A(λ_α R, δ, x_α)∇ u_α^2 dx = 0. Then, we can use same estimates as the proof for the case ν = 1, just replacing δ by λ_α^t for any 0 < t_1 ≤ t≤ t_2 < 1, to obtain Osc_A(λ_α R,λ_α^t,x_α)(u_α) ≤ C∫_A(λ_α R/2, 2 δ, x_α)∇ u_α^2^1/2 + C√(α - 1)t - 1logλ_α - C√(α - 1)log R ⟶ 0 letting α↘ 1 then R →∞, δ↘ 0 and t → 1. And similarly, replacing λ_α R by λ_α^t for any 0 < t_1 ≤ t≤ t_2 < 1, we obtain Osc_A(λ_α^t,δ,x_α)(u_α) ≤ C∫_A(λ_α R/2, 2 δ, x_α)∇ u_α^2^1/2 + C√(α - 1)logδ - tlogλ_α ⟶ 0 letting α↘ 1 then R→∞, δ↘ 0 and t → 0. Also, by Corollary <ref>, we have L(Γ_α|_A(2^kλ_α^t_2, 2^k + 1λ_α^t_2, x_α)) = √(α - 1)log 2√(E(w)/π) + o(1) , which implies L(Γ) = lim_α↘ 1 k_α√(α - 1)log 2√(E(w)/π) + o(1) = (t_2 - t_1)logν√(E(w)/π). Now, letting t_1 → 0 and t_2 → 1 and keeping in mind that (<ref>), (<ref>) and Lemma <ref>, we know that the neck converges to a geodesic of length L = logν√(E(w)/π) which complete the proof of Theorem <ref> for the case ν > 1. §.§ Energy Identity for Lg-surfaces with Bounded Morse Index 5pt In this subsection, we prove another main consequence — Theorem <ref>. Before giving the detailed proof, some lemmas are needed. Let s be the arc length parameter of γ_α(r) such that s(λ_α^t) = 0 for some fixed 0 < t < 1. 
Then as a corollary of the proof of Lemma <ref> and Theorem <ref>, we have the following result Let {u_α}_α↘ 1 be a sequence satisfying hypothesis of Theorem <ref>. Assume that the limiting neck of {u_α} is a geodesic of infinite length. Then, for any given l > 0 and θ∈ [0,2π], u_α(s,θ) = u_α(x_α + s(cosθ, sinθ)) converges to γ in C^1([0,l]). Furthermore, we have r(s)/√(α - 1)∂ s/∂ r - μ^1 - t√(E(w^1)/π)_C^0([0,l])⟶ 0 as α↘ 1. where r(s) is the inverse of the arc length parameter s(r) with s(λ_α^t) = 0. Since the limiting neck of u_α converges to a geodesic of infinite length, we can choose a real number ι∈ such that sλ_α^t^ι_α = l By Corollary <ref>, we can estimate l = ∫_λ_α^t^ι_α^λ_α^td γ_α(r)/dr dr = ∫_λ_α^t^ι_α^λ_α^t1/2π∫_0^2πd u_α (r,θ) /dr dθdr ≥ C ∫_λ_α^t^ι_α^λ_α^t√(α - 1)/r dr = C(t^ι_α - t)logλ_α^-√(α - 1) But, notice that logλ_α^-√(α - 1)→∞ as α↘ 1, there must holds t_α^ι→ t as α↘ 1. We prove the Lemma <ref> by contradiction, suppose that u_α(s, θ) does not converge to γ in C^1([0,l]), then, after choosing a subsequence, there exists ε_0 > 0 and a sequence {s_α}_α↘ 1⊂ [0,l] such that sup_θ∈ [0,2π]∂ u_α/∂ s(s_α, θ) - d γ_α(s_α)/ds > ε_0 Write s(λ_α^t̃_α) = s_α, then t̃_α∈ [t, t_α^ι] which implies t_α→ t. Thus, applying Corollary <ref> yields λ_α^t̃_α/√(α - 1)∂ u_α/∂ rλ_α^t̃_α, θ - d γ_α/drλ_α^t̃_α⟶ 0 as α↘ 1. Note that by Corollary <ref> d s/drλ_α^t̃_α = d γ_α/drλ_α^t̃_α≥C λ_α^t̃_α/√(α - 1 ), which implies ∂ u_α/∂ s(s_α, θ) - d γ_α(s_α)/ds = d r/d s·∂ u_α/∂ r(s_α, θ) - d γ_α(s_α)/dr ≤λ_α^t̃_α/√(α - 1)∂ u_α/∂ r(λ_α^t̃_α, θ) - d γ_α/dr(λ_α^t̃_α)⟶ 0 as α↘ 1. This is a contradiction to (<ref>), thus we obtain the converges of first derivatives ∂ u_α/∂ s(s_α, θ) - d γ_α(s_α)/ds_C^0([0,l]) The C^0 converges of γ_α can be obtained by a similar argument using Proposition <ref>. Next, the convergence (<ref>) is the direct result of radical part of Corollary <ref>. Let us recall the definition of stability of a geodesic in Riemannian manifold. A geodesic γ(s) : [0,l]→ N is unstable if its index form is not non-negative definite, that is, there exists V_0 ∈𝒱_γ such that I_γ(V_0,V_0) = ∫_0^l ⟨∇_γ^' V_0 ,∇_γ^' V_0 ⟩ - R(V_0, γ^', V_0, γ^') ds < 0 where R is the Riemann curvature tensor on N and 𝒱_γ is the vector space formed by vector fields V along γ which are piecewise differentiable and vanish at the end points of γ, that is, V(0) = V(l) = 0. The following Lemma <ref> is of vital importance in the proof of Theorem <ref> Let {u_α}_α↘ 1 be a sequence satisfying hypothesis of Theorem <ref>. If the necks of {u_α} converges to an unstable geodesic γ(s) : [0,l]→ N parameterized by arc length, then for small enough α - 1, there exists a vector field V_α along u_α on N, that is, V_α∈ u_α^*(TN), which vanishes outside of A(λ_α^t_α^ι, λ_α^t,x_α), such that the second variation of E^ω_α acting on V_α is strictly negative, i.e. δ^2 E^ω_α(V_α, V_α) < 0. By the assumption of Lemma <ref>, γ(s) : [0,l]→ N is an unstable geodesic, then there exists a vector field V_0 ∈𝒱_γ such that I_γ(V_0, V_0) < 0. Recall that 𝒫 be the projection from T^K onto TN, more precisely for y∈ N 𝒫_y is the orthogonal projection from T_y^K = ^K onto T_yN ⊂ T_y^K . Then we define V_α as V_α(u_α(s,θ)) = V_α( u(x_α + r(s)(cosθ,sinθ))) := 𝒫_u_α(s,θ)(V_0(s)), where r(s) is the inverse function of arc parameter of γ_α(r) with s(λ_α^t) = 0. Here, we put V_0(s) as a vector in ^K which is identified with T_u_α(s,θ)^K, thus the expression 𝒫_u_α(s,θ)(V_0(s)) is well defined. 
Then, V_α is a piecewise smooth vector field along u_α which also vanishes outside of A( λ_α^t_α^ι, λ_α^t, x_α). From Lemma <ref>, for fixed θ∈ [0,2π] V_α(u_α(s,θ)) converges to V_0(γ(s)) in C^1([0,l]). Next, we will compute δ^2 E_α^ω(V_α,V_α) and judge its negativity by showing that lim_α↘ 11/√(α - 1)δ^2 E_α^ω(V_α,V_α) = 4 πμ√(E(w^1)/π) I_γ(V_0, V_0). The right hand of identity is negative by our assumption which implies the conclusion of Lemma <ref>. To this end, we first split the computation into some different parts using second variation formula obtained in Lemma <ref> and the conformal coordinates of M δ^2 E^ω_α(u_α)(V_α,V_α) = α∫_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 1( ∇ V_α, ∇ V_α - RV_α, ∇ u_α, V_α, ∇ u_α) d x + 2α(α - 1)∫_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 1⟨∇ u_α, ∇ V_α⟩^2 dx + 2 ∫_A(λ_α^t_α^ι, λ_α^t,x_α) H( u_α, ∇ V_α), V_α dx + ∫_A(λ_α^t_α^ι, λ_α^t,x_α) (∇_V_αH)( u_α, ∇ u_α), V_α dx = α∫_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 1( ∇_∂ u_α/∂ r V_α, ∇_∂ u_α/∂ r V_α -RV_α, ∇_∂ u_α/∂ r u_α, V_α, ∇_∂ u_α/∂ r u_α) d x + α∫_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 11/r^2( ∇_∂ u_α/∂θ V_α, ∇_∂ u_α/∂θ V_α -RV_α, ∇_∂ u_α/∂θ u_α, V_α,∇_∂ u_α/∂θ u_α) dx + 2α(α - 1)∫_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 2∇ u_α, ∇ V_α^2 dx + ∫_A(λ_α^t_α^ι, λ_α^t,x_α) 2 H( u_α, ∇ V_α), V_α + (∇_V_αH)( u_α, ∇ u_α), V_α dx := I_1 + I_2 + I_3 + I_4 where I_i represents the i-th integral of above identity. First, we consider I_1 and observe that μ^t⟵sup_A( λ_α^t_α^ι, λ_α^t, x_α)(C 1/λ_α^2t_α^ι)^α - 1 =sup_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 1 ≤sup_A( λ_α^t_α^ι, λ_α^t, x_α)(τ_α + C 1/λ_α^2t_α^ι)^α - 1⟶μ^t as α↘ 1. since in Lemma <ref> we have concluded that t_α^ι→ t as α↘ 1. Utilizing (<ref>), we can estimate I_1 as below lim_α↘ 1I_1/√(α - 1) = lim_α↘ 12α/√(α - 1)∫_A(λ_α^t_α^ι, λ_α^t,x_α)τ_α + ∇_g_α u_α^2^α - 1( ∇_∂ u_α/∂ r V_α, ∇_∂ u_α/∂ r V_α -RV_α, ∇_∂ u_α/∂ r u_α, V_α, ∇_∂ u_α/∂ r u_α) d x =lim_α↘ 1∫_0^2π∫_0^l ( ∇_∂ u_α/∂ r V_α, ∇_∂ u_α/∂ r V_α -RV_α,∇_∂ u_α/∂ r u_α, ∇_∂ u_α/∂ r u_α, V_α) 1 + ∇_g_α u_α^2^α - 1∂ s/∂ r r(s)/√(α - 1) ds dθ = μ√(E(w^1)/π)lim_α↘ 1∫_0^2π∫_0^l ( ∇_∂ u_α/∂ r V_α, ∇_∂ u_α/∂ r V_α -RV_α, ∇_∂ u_α/∂ r u_α, V_α, ∇_∂ u_α/∂ r u_α) ds dθ = 2πμ√(E(w^1)/π) I(V_0,V_0). Here, we used the fact that V_α(u_α(s,θ)) converges to V_0(γ(s)) in C^1([0,l]) and u_α(s,θ) converges to γ in C^1([0,l]) for fixed θ∈ [0,2π], see Lemma <ref>. Before calculating I_2, we note that ∇_∂ u_α/∂θ V_α = 𝒫_u_α(s,θ)(∂ V_α/∂θ) = 𝒫_u_α(s,θ)∂/∂θ𝒫_u_α(s,θ)(V_0) = 𝒫_u_α(s,θ)∂/∂θ𝒫_u_α(s,θ)(V_0) where ∂ V_α/∂θ is taken in ^K. This implies ∇_∂ u_α/∂θ V_α≤ C_l ∂ u_α/∂θ for some constant C_l depending on l. Given R > 0, we take T_α = [ logλ_α^|t - t_α^ι|/log R] + 1 and by choice of T_α, one can see that Aλ_α^t_α^ι,λ_α^t, x_α⊂⋃_i = 1^T_α AR^i - 1λ_α^t_α^ι, R^iλ_α^t_α^ι, x_α. Then, we can compute I_2 lim_α↘ 1I_2/√(α - 1) = lim_α↘ 1∫_A(λ_α^t_α^ι,λ_α^t, x_α)τ_α + ∇_g_α u_α^2^α - 1( ∇_∂ u_α/∂θ V_α, ∇_∂ u_α/∂θ V_α -RV_α, ∇_∂ u_α/∂θ u_α, V_α, ∇_∂ u_α/∂θ u_α) dr/r dθ ≤lim_α↘ 1C/√(α - 1)∫_A(λ_α^t_α^ι,λ_α^t, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx ≤lim_α↘ 1C/√(α - 1)∑_i = 1^T_α∫_A(R^i - 1λ_α^t_α^ι, R^iλ_α^t_α^ι, x_α)1/|x - x_α|^2∂ u_α/∂θ^2 dx ≤lim_α↘ 1 C T_α√(α - 1)sup_τ∈ [t-ε, t+ε]1/α - 1∫_A1/Rλ_α^τ, λ_α R^τ1/|x - x_α|^2∂ u_α/∂θ^2 dx. By the choice of T_α, we have lim_α↘ 1 T_α√(α - 1)≤ C(R)lim_α↘ 1t - t_α^ιlogλ_α^- √(α - 1) + √(α - 1) which is bounded by (<ref>). Thus, by Corollary <ref> we can conclude lim_α↘ 1I_2/√(α - 1) = 0. 
For the third integral I_3, by Cauchy-Schwartz inequality we can straightforward estimate that I_3/√(α - 1) = 2α√(α - 1)∫_A(λ_α^t_α^ι,λ_α^t, x_α)τ_α + ∇_g_α u_α^2^α - 2⟨∇ u_α, ∇ V_α⟩^2 dx ≤ 2α√(α - 1)∫_A(λ_α^t_α^ι,λ_α^t, x_α)τ_α + ∇_g_α u_α^2^α - 2∇ u_α^2 ∇ V_α^2 dx ≤ C √(α - 1)∫_A(λ_α^t_α^ι,λ_α^t, x_α)τ_α + ∇_g_α u_α^2^α ≤ C√(α - 1)⟶ 0 as α↘ 1. Here, similar to derivation of (<ref>) and (<ref>), we used the estimate ∇ V_α≤ C_l ∇ u_α. At last, for the fourth integral I_4, we split I_4 into I_41 an I_42 and consider I_41 firstly I_41/√(α - 1) := 1/√(α - 1)∫_A(λ_α^t_α^ι, λ_α^t, x_α) 2 H( u_α, ∇ V_α), V_α dx = 1/√(α - 1)∫_A(λ_α^t_α^ι, λ_α^t, x_α) 2H^k_ij u_α^i ∇ V_α^j V_α^k dx ≤2/√(α - 1)H_L^∞(N)V_C^0(N)∑_i,j^K∫_A(λ_α^t_α^ι, λ_α^t, x_α)1/|x - x_α|∂ u^i_α/∂ r∂ V^j_α/∂θ - ∂ V_α^j/∂ r∂ u_α^i/∂θ dx ≤ C 1/√(α - 1)∫_A(λ_α^t_α^ι, λ_α^t, x_α)1/|x - x_α|∂ V_α/∂θ∂ u_α/∂ r + ∂ u_α/∂θ∂ V_α/∂ r d x ≤ C 1/√(α - 1)∫_A(λ_α^t_α^ι, λ_α^t, x_α)1/|x - x_α|∂ u_α/∂θ∂ u_α/∂ r dx ≤ C ∫_A(λ_α^t_α^ι, λ_α^t, x_α)1/|x - x_α|^2∂ u_α/∂θ dx = o(1) √(α - 1)∫_λ_α^t_α^ι^λ_α^t1/r dr = -(t_α^ι - t)√(α - 1)logλ_α o(1) = o(1) Here, we have used the estimate (<ref>) and Corollary <ref>. For the second part I_42 of I_4 we have similarly computations I_42/√(α - 1) = ∫_A(λ_α^t_α^ι, λ_α^t,x_α) (∇_V_αH)( u_α, ∇ u_α), V_α dx =1/√(α - 1)∫_A(λ_α^t_α^ι, λ_α^t,x_α)∂ H^k_ij/∂ y^l + ∂ H_jl^k/∂ y^i + ∂ H^k_il/∂ y^l u_α^i ∇ u_α^j V_α^l V_α^k dx ≤ C 1/√(α - 1)∇ H_L^∞(N)V_0_L^∞(N)^2· ∫_A(λ_α^t_α^ι, λ_α^t, x_α)1/|x - x_α|∂ u_α/∂θ∂ u_α/∂ r + ∂ u_α/∂θ∂ u_α/∂ r d x = o(1) Combining all computations for I_1, I_2, I_3 and I_4 = I_41 + I_42, we conclude that lim_α↘ 11/√(α - 1)δ^2 E_α^ω(V_α,V_α) = 4 πμ√(E(w^1)/π) I_γ(V_0, V_0). which implies the conclusion of Lemma <ref>. Under the assumption of Theorem <ref>, (N,h) has finite fundamental group. Then, the Gromov's estimates <cit.> (See also <cit.>) on the length of geodesic γ and its Morse index hold: Length(γ) ≤ C_0 Ind(γ) + 1≤ C C_I + 1 for some universal constant C_0 > 1 and C_I is the uniformly Morse index upper bound of γ. Thus. any geodesic γ⊂ (N,h) with length Length(γ) > C_0 is unstable. Therefore, if we choose l > C_0 , then any geodesic γ(s): [0,l] → N with arc length parameter is unstable and under the assumption that γ is of infinite length we can apply above Lemma <ref> successively to finish the proof of our Theorem <ref>. More precisely, we have: By contradiction, suppose that the neck of u_α converges to a geodesic γ with infinite length. Then, we take l > C_0 and u_α(s,θ) converges to an unstable geodesic γ : [0,l]→ N in C^1([0,l]) for any θ∈ [0,2π] by Lemma <ref>. Using same notation as Lemma <ref> and Lemma <ref>, let s(λ_α^t) = 0 and δ_1 > 0 such that s(λ_α^t + δ_1) = l. Since t_α^ι→ t as α↘ 1, for arbitrary small ε > 0 when α tends to 1 close enough there holds that t^ι - t < ε. Therefore, by Lemma <ref> there exists a vector fields V_α^1 along u_α, that is vanishes outside A(λ_α^t + δ_1, λ_α^t,x_α), such that δ^2 E^ω_α(V^1_α,V^1_α) < 0 for all α≤α_1 for some small enough α_1 - 1. Since the limiting neck is a geodesic with infinite length, then we replace t by t + δ_1 and apply Lemma <ref> again on A(λ_α^t+δ_1 + δ_2, λ_α^t+δ_1, x_α) for some δ_2 > 0 to find second vector field V^2_α, that vanishes outside A(λ_α^t+δ_1 + δ_2, λ_α^t+δ_1, x_α), such that δ^2 E^ω_α(V^2_α,V^2_α) < 0 for all α≤α_2 for some small enough α_2 - 1 ≤α_1 - 1. 
This process can keep going continuously and for any integer L > 0 we can construct a collection of vector fields {V^1_α, V^2_α, … , V^L_α} and small α_L -1 satisfying δ^2 E^ω_α(V^i_α,V^i_α) < 0 for all α≤α_L and 1 ≤ i ≤ L. Since the support of V^i_α are disjoint each other, V^1_α, V^2_α, … , V^L_α are linearly independent which implies that Ind_E^ω_α(u_α) ≥ L, for any L≥ 0. Thus, Ind_E^ω_α(u_α) →∞ as α↘ 1 which contradicts to the uniformly bounded assumptions of Ind_E^ω_α(u_α). We conclude that the limiting necks of {u_α} consists of geodesics with finite length and by Theorem <ref> we know that the energy identity holds, completing the proof of Theorem <ref>. 2cm § EXISTENCE OF LG-SPHERE OF BOUNDED MORSE INDEX 10pt In this section, we prove our main results using the convergence schemes developed in Section <ref> and combining with the existence results obtained in Section <ref>. In the following, ε_0 is a uniform constant depending on the geometries of N and mean curvature vectors λ H, which is assumed to be the minimum of the constants appearing in the previous results. §.§ Existence of Minimizing Lg-Surfaces. 5pt In the first instance, let us consider a relatively simpler scenario where ω _ L^ ∞ (N) < 1. In such a case, the surgery construction appeared in <cit.> can be applied in the H-surface setting to rule out the occurrence of bubbles. Since E^ω_α satisfies the Palais-Smale condition, see Lemma <ref> and Corollary <ref>, and by the upper bound of ω _ L^ ∞ (N) < 1, we can take a minimizing map u_α:M→ N for E^ω_α in a fixed non-trivial homotopy class in W^1,2α(M,N) with E_α(u_α) ≤ C E^ω_α(u_α) ≤ C 1 + B^2^α + C ω_L^∞(N) B^2 Vol(M) where B = max_x ∈ M |∇ u(x)| and u is a smooth map in that homotopy class. By Theorem <ref>, we can choose a subsequence, which still denoted by u_α, such that u_α→ u in C^2(M-{x_1,⋯, x_l}, N) for some l ∈ℕ and u:M→ N is a H-surface. Next we prove that there is actually no energy concentration point for {u_α}, that is, u_α→ u in C^2(M,N). Take a small ball centering at x_i in M of radius ρ where ρ is small enough such that x_j∉ B(x_i,ρ) for 1≤ j≠ i ≤ l and will be determined more precisely later. Let φ(r) be a smooth function which is 1 on r≥ 1 and 0 on r≤1/2 and exp:TN→ N be the exponential map on N. Then we can define û_α(x) = { u(x) if 0 ≤ |x| ≤ρ/2 exp_u(x)(φ(|x|/ρ)exp^-1_u(x)∘ u_α(x)) if ρ/2 < |x| < ρ u_α(x) if |x| ≥ρ. which agrees with u_α near the boundary of B(x_i,ρ) and with u near the center x_i. Then u_α→ u in C^2(φ(|x|/ρ)∩ B(x_i,ρ),N), and we have û_α→ u in C^2(B(x_i,ρ), N) which implies lim_α↘ 1 E^ω_α(û_α,B(x_i,ρ)) -1/2Vol(M) = 1/2lim_α↘ 1∫_B(x_i,ρ)1 + |∇û_α|^2^α - 1 dV_g + lim_α↘ 1∫_B(x_i,ρ) (û_α)^*ω = E(u, B(x_i,ρ)) + ∫_B(x_i,ρ) u^*ω By assumption π_2(N) = 0, which implies that every u_1, u_2 ∈ C^0(B(x_i,ρ),N) with u_1|_∂ B(x_i,ρ) = u_2|_∂ B(x_i,ρ) are homotopic, u_α and û_α are homotopic. Since u_α is a minimizing map for E^ω_α in its homotopy class, we have E^ω_α(u_α, B(x_i,ρ))≤ E^ω_α(û_α,B(x_i,ρ)) Applying (<ref>) we get lim sup_α→ 1 E(u_α, B(x_i,ρ)) + 1/2Vol(M) ≤lim sup_α→ 1 E^ω_α(u_α, B(x_i,ρ)) - lim inf_α↘ 1∫_B(x_i,ρ) (u_α)^*ω ≤lim sup_α→ 1 E_α^ω(û_α, B(x_i,ρ)) - lim inf_α↘ 1∫_B(x_i,ρ) (u_α)^*ω = E^ω(u,B(x_i,ρ)) + 1/2Vol(M) - lim inf_α↘ 1∫_B(x_i,ρ) (u_α)^*ω ≤ C πρ^2∇ u_∞ + ω_L^∞(N)lim inf_α↘ 1 E(u_α, B(x_i,ρ)) + Vol(M). 
If we initially choose ρ small enough such that Cπρ^2∇ u_∞≤ε_0/2(1 - ω_L^∞(N)) and keeping in mind that ω_L^∞(N) < 1, then we can utilize small energy regularity Lemma <ref> to conclude that u_α converges to u in C^2(B(x_i, ρ), N), given that E(u_α, B(x_i, ρ)) < ε_0^2 for α sufficiently close to 1. Therefore, an induction argument tells us that the convergence can be extended over the points {x_1,⋯,x_l} and hence we can conclude u_α→ u in C^2(M,N). Since u_α minimizes E^ω_α, u must minimize the E^ω in the same homotopy class. Now, we are in a position to prove the Theorem <ref>. Let 𝒞 be the set of free homotopy classes containing minimizing H-sphere and G be the subgroup of π_2(N) generated by the elements of 𝒞. If G ≠π_2(N), then there exists a homotopy class that does not contain the minimizing H-sphere. Let 𝒞_1 = {u ∈ C^1(^2, N) : the corresponding free homotopy class [u] ∉G} then by Corollary <ref> we can find a sequence of maps {u_α}_α > 1 which are minimizers for each E^ω_α in 𝒞_1. By a similarly argument as proof of Theorem <ref>, there is a constant C such that E_α(u_α)≤ C. Then by small energy regularity Lemma <ref> either there is a subsequence converges strongly in C^2 to a non-constant H-sphere u : ^2 → N such that u ∈𝒞_1 and E^ω(u) = inf_v ∈𝒞_1 E(v), or there exists some energy concentration point, saying x_1 ∈^2. Let us consider the second case. Pick a small disk B(x_1,r) near blow-up point x_1, by energy gap Lemma <ref>, there exists a ε_0 > 0 such that E(u_α) ≥ε_0^2 provided that α - 1 is small enough. Then, we define s_α(x) = { u_α(x), when x ∈^2 \ B(x_1,r) û_α(x) when x ∈ B(x_1,r) . w_α(x) = { û_α∘ f(x), when x ∈^2 \ B(x_1,r) u_α(x) when x ∈ B(x_1,r) . where û_α is constructed as (<ref>) and f : ^2 \ B(x_1,r) → B(x_1,r) is the conformal reflection preserving the boundary ∂ B(x_1,r) fixed. Thus, s_α agrees with u_α outside B(x_1,r) while w_α agrees with u_α inside B(x_1,r). Next, by conformality of f, we have lim_α↘ 1 E_α^ω(s_α) = lim_α↘ 1E_α^ω(u_α, ^2 \ B(x_1,r)) + E(u, B(x_1,r)) + 1/2Vol(B(x_1,r)), lim_α↘ 1 E_α^ω(w_α) = lim_α↘ 1E_α^ω(u_α, B(x_1,r)) + E(u, B(x_1,r)) + 1/2Vol(B(x_1,r)). Therefore, we can choose small enough r > 0 and small enough α - 1 such that E_α^ω(s_α) ≤ E_α^ω(u_α,^2 \ B(x_1,r)) + δ/3 E_α^ω(w_α) ≤ E_α^ω(u_α, B(x_1,r)) + δ/3 for some δ > 0. Let [s_α] and [w_α] be the free homotopy classes of s_α and w_α, respectively. Then, [u_α] ⊂ [s_α] + [w_α] and we can conclude that inf_v ∈ [s_α] E_α^ω(v) + inf_v ∈ [w_α] E_α^ω(v) ≤ E_α^ω(s_α) + E_α^ω(w_α) < E_α^ω(u_α) + 2δ/3 = inf_v ∈𝒞_1 E_α^ω(v) + 2δ/3, which implies that inf_v ∈ [s_α] E_α^ω(v) ≤inf_v ∈𝒞_1 E_α^ω(u_α) - ε_0^2/4 and inf_v ∈ [w_α] E_α^ω(v) ≤inf_v ∈𝒞_1 E_α^ω(u_α) - ε_0^2/4, where we used the Proposition <ref> to conclude that E(s_α) ≥ε_0^2 and E(w_α) ≥ε_0^2. Thus, [u_α] ≠ [s_α] and [u_α] ≠ [w_α], in particular, [s_α] and [w_α] are both non-trivial. Furthermore, by the choice of 𝒞_1, the free homotopy classes [s_α] and [w_α] must belong to G which further implies [u_α] ∈ G for [u_α] ⊂ [s_α] + [w_α]. This contradicts to the choice of u_α, hence the conclusion of Theorem <ref> holds. §.§ Existence of Lg-Sphere of Bounded Morse Index for Generic Choice of Lg 5pt In this subsection, we complete the proof of main Theorem <ref>. 
Before presenting into the detailed proofs, we would like to demonstrate that it is possible to modify the values of a finite number of points on a non-constant H-sphere u: ^2 → N without affecting its Morse index, see <cit.> for the setting of α-harmonic maps and Gulliver-Lawson <cit.> for more general consequences. Let m be the Morse index of a non-constant H-sphere u:^2 → N. For any finite points {x_1, x_2, ⋯, x_l} in N, there exists a m-dimensional linear subspace 𝒱 of Γ(u^*TN) such that * The index form I_u(V,V) = ∫_^2( ⟨∇ V, ∇ V ⟩ - R(V,∇ u, ∇ u, V) ) d V_g + 2 ∫_M H( u, ∇ V), V dV_g + ∫_^2(∇_VH)( u,∇ u),VdV_g, for V ∈𝒱, of u is negative definite on 𝒱. * Given any V ∈𝒱, V vanishes in some neighborhood of x_i, for every 1 ≤ i ≤ l. By assumption, we can find a m-dimensional linear subspace 𝒱_0 of Γ(u^*TN) on which the index form I_u is negative definite. Then, choose a small enough ρ > 0 such that B(x_i,ρ) ∩ B(x_j,ρ) = ∅ for 1≤ i≠ j≤ l and the distant function r_i : B(x_i,ρ)\{x_i}→_+ is smooth. Take 0 < ε < min{ρ, 1} and a series of piecewise smooth functions φ_i(r_i):^2 → [0,1] can be defined as following φ_i(r_i) = { 0 if 0 ≤ r_i < ε^2, 2logε - logr_i/logε if ε^2 ≤ r_i ≤ε , 1 otherwise. . Then, a straightforward computation yields that ∫_^2∇φ_i dV_g ≤ C ∫_0^2π∫_ε^2^ε1/|logε| dr d θ≤ Cε/|logε| and ∫_^2∇φ_i^2 dV_g ≤C/|logε|. Let φ = Π_i = 1^l φ_i and for each V ∈𝒱 we estimate that I_u(φ V,φ V) = ∫_^2∇(φ V), ∇(φ V) - φ^2 R(V,∇ u, ∇ u, V) dV_g + 2 ∫_^2 H( u, ∇ (φ V)), φ V dV_g + ∫_^2(∇_φ VH)( u,∇ u), φ VdV_g ≤∫_^2φ^2 ( ⟨∇ V, ∇ V ⟩ - R(V,∇ u, ∇ u, V) ) d V_g + 2 ∫_^2φ^2 H( u, ∇ V), V dx + ∫_^2φ^2(∇_V H)( u,∇ u),VdV_g + C ε/|logε|sup_x ∈^2|V(x)| + |∇ V(x)| + C/|logε|sup_x ∈^2|V(x)|^2 + C √(E(u))sup_x ∈^2|V(x)|^2 /|logε|^1/2. Noting that by the construction of φ which equals to 1 away from x_i and vanishes in each small neighborhood of x_i, thus we must have I_u(φ V,φ V) < 0 for each V ∈𝒱_0 as long as ε > 0 is small enough. And 𝒱 := φ𝒱_0 is a m-dimensional linear space of Γ(u^*TN) satisfying the desired conclusions (<ref>) and (<ref>) of Lemma <ref>. We are now ready to complete the proof for the main Theorem <ref>. As per Theorem <ref>, we only need to establish the upper bound for the Morse index of H-spheres mentioned in Theorem <ref>. We adapt the convergence scheme established in <cit.> where the authors proved a similar Morse index upper bound for sequences of α-harmonic maps. By Corollary <ref>, for almost every λ∈ (0,∞), we can find a sequence of non-constant critical points {u_α_j}_j ∈ℕ for E^λω_α_j such that the Morse index of E^λω_α_j at u_α_j bounded from above by k-2 and the α_j-energy of u_α_j is uniformly bounded as j →∞. Then, by Theorem <ref>, after passing to a subsequence, u_α_j converges strongly in C^2(^2,N) except a finite many singular points {x_1, x_2, ⋯ , x_l} to a smooth λ H-sphere u:^2 → N. In the following part of proof Theorem <ref>, without ambiguity we simply write H to denote λ H and ω to denote λω for notation simplicity. Then, it suffices to show that the Morse index of limit map u is at most k-2 if l = 0, or to show that the Morse index of bubbles is no more than k-2 if l ≥ 1. To this end, we split the proof into two steps. 5pt The Morse index of weak limit u is at most k-2. 5pt In this step, it suffices to show Ind_E^ω(u) : = m ≤ k-2. Note that our argument is non-vacuous, since in viewing of Proposition <ref> the weak limit u is always non-constant. 
By previous Lemma <ref>, there exists m linearly independent vector fields V_1, V_2, … , V_m such that the index form of u is negative definite on subspace Span{V_1, V_2, … , V_m} and V_1, V_2, … , V_m ∈ u^*TN vanish in neighborhoods of singular points {x_1, x_2, ⋯ , x_l}. We then consider the commutative diagram of vector bundles u^*TN rd Π_2^* TN dr TN d ^2 r(i,u) ^2× N rΠ_2 N where Π_2 is the projection to the second variable and i : ^2→^2 is the identity map. Then we extend V_1,V_2, … ,V_m to smooth vector fields V_1,V_2, …, V_m ∈Π_2^*TN that supported in a tubular neighborhood of (i,u)(^2), and set W^l_α_j = (i,u_α_j)^*(V_l), for 1 ≤ l ≤ m. The sequence W_α_j^l can be regarded as a map from ^2 to the tangent bundle of N such that W^l_α_j(p) ∈ T_u_α_j(p)N, then W_α_j^l = (i,u_α_j)^*(V_l) ⟶ (i,u)^*(V_l) = V_l, in C^2(^2,u_α_j^*(TN)). Now we apply the second variation formula of E^ω_α_j to study the asymptotic properties of Morse index Ind_E^ω_α_j(u_α_j) as α_j↘ 1. By Corollary <ref>, we have δ^2 E_α_j^ω(u_α_j)(W_α_j^p,W_α_j^q) = α_j∫_^21 + ∇ u_α_j^2^α_j - 1( ∇ W_α_j^p, ∇ W_α_j^q - RW_α_j^p,∇ u_α_j, W_α_j^q, ∇ u_α_j) d V_g + 2α_j(α_j - 1)∫_^21 + ∇ u_α_j^2^α_j - 2∇ u_α_j, ∇ W_α_j^p ∇ u_α_j, ∇ W_α_j^q dV_g + ∫_^2 H( u_α_j, ∇ W_α_j^p), W_α_j^q + H( u_α_j, ∇ W_α_j^q), W_α_j^p dV_g + 1/2∫_^2∇_W^p_α_jH( u_α_j,∇ u_α_j), W^q_α_jdV_g + 1/2∫_^2∇_W^q_α_jH( u_α_j,∇ u_α_j), W^p_α_jdV_g for 1≤ p , q ≤ m. By the choice of W^l_α_j and (<ref>), we can estimate 2α_j(α_j - 1) ∫_^21 + ∇ u_α_j^2^α_j - 2∇ u_α_j, ∇ W_α_j^p ∇ u_α_j, ∇ W_α_j^q dV_g ≤ 2α_j(α_j - 1)∫_^21 + ∇ u_α_j^2^α_j - 1∇ W^p_α_j_L^∞(^2)∇ W^q_α_j_L^∞(^2) dV_g ≤ C (α_j - 1) ⟶ 0 as α_j↘ 1, which implies that δ^2E_α_j^ω(u_α_j)(W_α_j^p ,W_α_j^q) ⟶ ∫_^2( ∇ V_p , ∇ V_q - R(V_p,∇ u, V_q, ∇ u) ) d V_g + ∫_^2( H( u, ∇ V_p), V_q + H( u, ∇ V_q), V_p ) dV_g + 1/2∫_^2(∇_V^pH)( u,∇ u), V^q dV_g + 1/2∫_^2(∇_V^qH)( u,∇ u), V^p dV_g = I_u(V_p,V_q). Here, I_u is the index form of E^ω at u. Therefore, when α_j - 1 is small enough, δ^2 E^ω_α_j(u_α_j) is also negatively definite on span{W^1_α_j,W^2_α_j, … , W^m_α_j}, that is, m = Ind_E^ω(u) ≤ k-2, as desired. 5pt The Morse index of bubbles is less than k-2 if the number of energy concentration points l ≥ 1. 5pt By energy gap Lemma <ref>, we have E(u_α_j) ≥ε_0^2, for all j∈ℕ, which means the set of singular points is non-empty, say x_1 is an energy concentration point. Thus, by Theorem <ref>, there exists a non-constant H-sphere v : ^2 → N obtained by rescaling the sequence v_α_j(x) = u_α_j(x_α_j + λ_α_j x) where 1/λ_α_j = max_x ∈ B(x_1, r_0) |∇ u_α_j| for some small r_0 > 0 and x_α_j is the point such that the maximum is take on. Then v_α_j : B(0,λ_α_j^-1r_0) → N is a critical points of a new functional E^ω_α_j(u_α_j) = 1/2∫_B(0,r_0 λ_α_j^-1)λ_α_j^2 + |∇ v_α_j|^2^α_j dV_g_α_j+ λ_α_j^2α_j - 2∫_B(0,r_0 λ_α_j^-1)(v_α_j)^*ω where g_α_j := e^φ(x_α_j + λ_α_j x)((dx^1)^2 + (dx^2)^2) converges to the Euclidean metric on ^2 as α_j↘ 1. Near the energy concentration point x_1, we note that Ind_E^ω_α_j(u_α_j) = Ind_E^ω_α_j(v_α_j) for E^ω_α_j(u_α_j) = λ_α_j^2 - 2α_jE^ω_α_j(v_α_j). As in Step <ref>, it follows from the Lemma <ref> that there exists m linearly independent vector fields V_1, V_2, … , V_m on ^2 such that the Morse index of v is negative definite on subspace Span{V_1, V_2, … , V_m} and vanish in neighborhoods of the branch points of v and vanish around the infinite point ∞∈^2. 
Then we extend V_1,V_2, … ,V_m to smooth vector fields V_1,V_2, …, V_m ∈Π_2^*TN that supported in a tubular neighborhood of (i,v)(^2), and set W^l_α_j = (i,v_α_j)^*(V_l), for 1 ≤ l ≤ m. Similar to the computation of Lemma <ref>, we obtain the second variation of the functional E^ω_α_j δ^2 E_α_j^ω(v_α_j)(W_α_j^p,W_α_j^q) = α_j∫_ B(0,λ_α_j^-1r_0)λ_α_j^2 + ∇ v_α_j^2^α_j - 1( ∇W_α_j^p, ∇W_α_j^q - RW_α_j^p,∇ v_α_j, W_α_j^q, ∇ v_α_j) d V_g_α_j + 2α_j(α_j - 1)∫_ B(0,λ_α_j^-1r_0)λ_α_j^2 + ∇ v_α_j^2^α_j - 2∇ v_α_j, ∇W_α_j^p ∇ v_α_j, ∇W_α_j^q dV_g_α_j + λ_α_j^2α_j - 2∫_ B(0,λ_α_j^-1r_0)( H( v_α_j, ∇W_α_j^p), W_α_j^q + H( v_α_j, ∇W_α_j^q), W_α_j^p ) dV_g_α_j + λ_α_j^2α_j - 2/2∫_ B(0,λ_α_j^-1r_0)∇_W^p_α_jH( v_α_j,∇ v_α_j),W^q_α_jdV_g_α_j + λ_α_j^2α_j - 2/2∫_ B(0,λ_α_j^-1r_0)∇_W^q_α_jH( v_α_j,∇ v_α_j),W^p_α_jdV_g_α_j and observe that 2α_j (α_j - 1)∫_ B(0,λ_α_j^-1r_0)λ_α_j^2 + ∇ v_α_j^2^α_j - 2∇ v_α_j, ∇W_α_j^p e ∇ u_α_j, ∇W_α_j^q dV_g_α_j ≤ 4α_j(α_j - 1)∫_ B(0,λ_α_j^-1r_0)1 + ∇ v_α_j^2^α_j - 1∇W^i_α_j_L^∞(^2)∇W^j_α_j_L^∞(^2) dV_g_α_j ≤ C (α_j - 1) ⟶ 0 as α_j↘ 1. Because max_x ∈ B(0,λ_α_j^-1r_0)∇ v_α_j = 1 and λ_α_j ^2 - 2α_j→μ = 1 as α_j↘ 1, see Theorem <ref>, we conclude that δ^2 E^ω_α_j(v_α_j) ⟶ ∫_^2( ⟨∇V_i , ∇V_j⟩ - R(V_i,∇ v, V_j, ∇ v) ) dV_g + ∫_^2( ⟨ H( v, ∇V_i), V_j⟩ + ⟨ H( v, ∇V_j), V_i⟩) dV_g + 1/2∫_^2∇_V^p H( v,∇ v), V^q dV_g + 1/2∫_^2∇_V^q H( v,∇ v), V^p dV_g = I_v(V_p,V_q), where we used the conformally invariance of index form I_v to change the integral domain from ^2 to ^2. Therefore, when α_j - 1 is small enough, δ^2 E^ω_α_j(v_α_j) is negatively definite on spanW^1_α_j,W^2_α_j, … , W^m_α_j, that is, m = Ind_E^ω(v) ≤ k-2. Therefore, we complete the proof of the main Theorem <ref>. §.§ Existence of Lg-Sphere under Ricci Curvature Assumption When Lg Lg 5pt In this subsection, we prove the part (<ref>) of Theorem <ref>, more precisely, we aim to show that there exists a H-sphere in N for every choice of prescribed mean curvature H satisfying (<ref>) with Morse index at most 1. To this end, we first combine the Ricci curvature condition with Morse index estimates to obtain an energy bound for H-spheres which is uniformly for H, see Proposition <ref> and Proposition <ref> and then we can pass the limit to obtain the desired H-sphere for every H sastisfying (<ref>). In order to get the uniformly energy bound, we aim to build an index comparison for variable prescribed mean curvature H, see <cit.> for the case of minimal surfaces and <cit.> within the context of CMC surfaces. Before stating the detailed results, we recall some fundamental concepts about complex vector bundle and introduce our notations which inherits from <cit.> and <cit.>. Let u: ^2 → N be a H-sphere and denote the pull back bundle u^*TN simply by E. We represent the Riemannian metric on TN by ·,·, which can be complex bi-linearly extended to T_ℂN:=TN⊗_ℂ. The Levi-Civita connection ∇ on TN is also extended complex linearly to T_ℂN and is compatible with the Hermitian metric ·,· on T_ℂN. Here, we use the same notations, ·, · and ∇, regardless of whether they are defined on TN or T_ ℂ N. When the dimension of N is three, it is worth noting that the mean curvature type vector field H ∈Γ(∧^2(N) ⊗ TN) can be identified with a function defined on N. 
Therefore, the Euler-Lagrange equation (<ref>) of H-sphere is written as (<ref>) and the corresponding second variation formula becomes δ^2 E^ω(u)(V,V) = ∫_^2∇ V,∇ V - R(V,∇ u,V,∇ u) dV_g + 2∫_^2 H∇_∂_x^1V∧ u_x^2 + u_x^1∧∇_∂_x^2V,V dx^1dx^2 +2∫_^2 (∇_V H)u_x^1∧ u_x^2,V dx^1dx^2 in this scenario. Then, let z = x^1 + √(-1)x^2 be a local complex coordinate of ^2, we can rewrite the conformal H-sphere equations (<ref>) and (<ref>) as ∇_∂_z̅ u_z = √(-1)H(u_z∧ u_z), u_z,u_z = 0. The solution u : ^2→ N to (<ref>) and (<ref>) is a branched immersion and at each branch point p using coordinate z = x^1 + √(-1)x^2 u_z can be locally represented as u_z = z^b_p V where b_p ∈ℕ is the order of branching at p and V is a local section of E⊗ℂ := E with V(p) ≠ 0. This allows us to define the ramified tangent bundle ξ on ^2, that is, ξ is the tangent bundle of ^2 twisted at the branch points by the amount equal to b_p such that E = ξ⊕ν where ν is the normal bundle of ^2 in N. The complex structure on ^2 gives ξ the structure of a complex line bundle and induces the splitting ξ_ℂ: = ξ⊗_ℝℂ = ξ^1,0⊕ξ^0,1 where the fibres of ξ^1,0 and ξ^0,1 are locally spanned by u_z and u_z away from the branch points of u. The connection ∇ on E gives rise to metric compatible connections ∇ ^ ⊤ on ξ and ∇ ^ ⊥ on ν. Inspired by the construction of compared bi-linear functional described in <cit.> and <cit.>, we define the following bi-linear form to compared with δ^2 E^ω(u). For any s ∈Γ(ν) we let B_ω(u)(s,s) =∫_^2|∇ f|^2 - |∇ u|^2( |H|^2 + Ric( n, n)/2 - |∇ H| )f^2 dV_g where n∈Γ(ν) is the unit normal section of ν, f = s,n∈ C^∞(^2,), Ric_h(N) (in the following abbreviated as Ric) is the Ricci curvature tensor of target manifold (N,h). The index of B_ω(u) is naturally defined to be the maximum of dimension of the linear subspace of Γ(ν) on which B_ω(u) is negative definite. The following proposition is the first main result in this subsection. Let (N,h) be a 3-dimensional Riemannian manifold, then for non-constant solution u to (<ref>) and (<ref>), the index of B_ω(u) is no more than Ind_E^ω(u). Before proving the Proposition <ref>, we modify computations in <cit.> and <cit.> to obtain the following relations between δ^2 E^ω(u) and B_ω(u): For any σ∈Γ(ξ) and s ∈Γ(ν), we define η∈Γ(∧^1,0(^2)⊗ξ^0,1) by η = (∇_∂_z s)^0,1 + ∇^⊤_∂_zσ^0,1dz. Then δ^2 E^ω(u)(s + σ, s+σ) ≤ B_ω(u)(s, s) + 4∫_^2 |η|^2 dV_g. First, recalling Lemma <ref> it suffices to prove (<ref>) in the case where s and σ are supported away from the set of branch points ℬ. And, the Riemann uniformization theorem enables us to simplify our computations by assuming that they are always performed in a isothermal coordinate, denoted as (x^1, x^2), with a metric of ds^2 = κ^2((dx^1)^2 + (dx^2)^2). To begin, keeping in mind that |∇_∂_x^1 v|^2 + |∇_∂_x^2 v|^2 = 4 |∇_∂_z v|^2, u_x^1∧ u_x^2 = 2√(-1) u_z̅∧ u_z and ∇_∂_x^1v∧ u_x^2 + u_x^1∧∇_∂_x^2 v = 4 (u_z∧∇_∂_z̅ v) = 4-√(-1)u_z∧∇_∂_z̅ v, we rewrite the second variational formula (<ref>) of E^ω by representing δ^2 E^ω(u)(v, v) in terms of complex coordinates as follows: δ^2 E^ω(u)(v, v) = 4∫_^2 |∇_∂_z v|^2 - ⟨ R(v, u_z)u_z̅, v ⟩ dx^1 dx^2 +8∫_^2∇_∂_zv, √(-1)H(u_z∧ v) dx^1 dx^2 + 4 ∫_^2v,(∇_v H)(u_z∧ u_z̅) dx^1dx^2 where v:=s+σ. Taking derivative to the identity s,u_z̅ = 0 and utilizing equation (<ref>) to yields ∇_∂_zs + √(-1)H(u_z∧ s),u_z̅ =∇_∂_zs + √(-1)H(u_z∧ s),u_z = 0 which is equivalent to (∇_∂_zs)^1,0 + √(-1)H(u_z∧ s) = 0 since clearly (∇_∂_zs)^1,0 + √(-1)H(u_z∧ s), u_z = 0 and (∇_∂_zs)^1,0 + √(-1)H(u_z∧ s), n = 0. 
Similarly, from identity σ^0,1, u_z̅ = 0 and equation (<ref>), we can obtain (∇_∂_zσ^0, 1)^⊥ + √(-1)H(u_z, σ^0, 1) =0. Then, we decompose v = s + σ^1, 0 + σ^0, 1, and keeping in mind that u_z ∧σ^1,0 = 0, (<ref>) and (<ref>), we can rewrite the integrand in the second line of(<ref>) as ∇_∂_zv, √(-1)H(u_z∧ v) = ∇_∂_zv, √(-1)H(u_z∧σ^0, 1) + ∇_∂_zv, √(-1)H(u_z∧ s) =- (∇_∂_zv)^⊥, (∇_∂_zσ^0, 1)^⊥ - (∇_∂_zv)^⊤, (∇_∂_z s)^1, 0. Combining the above calculation with the first term in the first line of (<ref>) and writing it as |∇_∂_z v|^2 = |(∇_∂_z v)^⊥|^2 + |(∇_∂_z v)^⊤|^2, we get |∇_∂_z v|^2 + 2 ∇_∂_zv, √(-1)H(u_z ∧ v) = (|(∇_∂_z v)^⊥|^2 -2 (∇_∂_zv)^⊥, (∇_∂_zσ^0, 1)^⊥) + (|(∇_∂_z v)^⊤|^2 -2 (∇_∂_zv)^⊤, (∇_∂_z s)^1, 0) = ( |(∇_∂_z v)^⊥ - (∇_∂_zσ^0, 1)^⊥|^2 - |(∇_∂_zσ^0, 1)^⊥|^2) + (|(∇_∂_z v)^⊤ - (∇_∂_z s)^1, 0|^2 - |(∇_∂_z s)^1, 0|^2). We split (∇_∂_z v)^⊥ = (∇_∂_z s)^⊥ + (∇_∂_zσ^1, 0)^⊥ + (∇_∂_zσ^0, 1)^⊥ for the normal component, and recalling η = (∇_∂_z s)^0,1 + ∇^⊤_∂_zσ^0,1, for the tangent component, we have (∇_∂_z v)^⊤ = (∇_∂_z s)^1, 0 + (∇_∂_z s)^0, 1 + (∇_∂_zσ^1, 0)^⊤ + (∇_∂_zσ^0, 1)^⊤ =(∇_∂_z s)^1, 0 + η + (∇_∂_zσ^1, 0)^⊤ . Then plugging these terms into (<ref>) and expanding the square terms yield |∇_∂_z v|^2 + 2 ∇_∂_zv, √(-1)H(u_z ∧ v) = |(∇_∂_z s)^⊥ + (∇_∂_zσ^1, 0)^⊥ |^2 - |(∇_∂_zσ^0, 1)^⊥|^2 + | (∇_∂_z s)^0, 1 + (∇_∂_zσ^0, 1)^⊤ + (∇_∂_zσ^1, 0)^⊤ |^2 - |(∇_∂_z s)^1, 0|^2 =|(∇_∂_z s)^⊥|^2 + |(∇_∂_zσ^1, 0)^⊥|^2 + 2⟨ (∇_∂_z s)^⊥, ∇_∂_zσ^1, 0⟩ - |(∇_∂_zσ^0, 1)^⊥|^2 + μ^2/2|η|^2 + |(∇_∂_zσ^1, 0)^⊤|^2 - |(∇_∂_z s)^1, 0|^2. Next we combine the following identities into (<ref>) |(∇_∂_zσ^1, 0)^⊥|^2 + |(∇_∂_zσ^1, 0)^⊤|^2 = |∇_∂_zσ^1, 0|^2 (∇_∂_z s)^⊥, ∇_∂_zσ^1, 0 = ∇_∂_z s, ∇_∂_zσ^1, 0 - (∇_∂_z s)^1, 0, ∇_∂_zσ^1, 0 |(∇_∂_z s)^1, 0|^2 = |(∇_∂_z s)^⊤|^2 - |(∇_∂_z s)^0, 1|^2 and plugging the result into second variation formula (<ref>) to get 1/4δ^2 E^ω(u)(v,v) = 1/2∫_^2|η|^2dV_g +∫_^2 |(∇_∂_z s)^⊥|^2 - |(∇_∂_z s)^⊤|^2 + |(∇_∂_z s)^0, 1|^2 dx^1dx^2 + ∫_^2 |(∇_∂_zσ^1, 0)|^2 - |(∇_∂_zσ^0, 1)^⊥|^2 dx^1dx^2 + 2∫_^2∇_∂_z s, ∇_∂_zσ^1, 0 - (∇_∂_z s)^1, 0, ∇_∂_zσ^1, 0 dx^1dx^2 + ∫_^2v,(∇_v H)(u_z∧ u_z̅) - ⟨ R(v, u_z)u_z̅, v ⟩ dx^1dx^2. Then we consider the integral in (<ref>) term by term. First, we compute the first two integrands in the second line of (<ref>). Integration by parts gives ∫_^2∇_∂_zσ^1, 0^2 - |(∇_∂_zσ^0, 1)^⊥|^2 dx^1 dx^2 = ∫_^2 |(∇_∂_zσ^0, 1)^⊤|^2 + R(u_z, u_z̅)σ^1,0, σ^0, 1 dx^1 dx^2. Similarly, for the first integrand in the third line of integral (<ref>), we integrate by parts again to get 2 ∫_^2∇_∂_z s, ∇_∂_z̅σ^0, 1 dx^1 dx^2 =∫_^2-2 s, R(u_z, u_z̅)σ^0, 1 + 2∇_∂_z s, ∇_∂_z̅σ^1, 0 dx^1dx^2 = ∫_^2-2 s, R(u_z, u_z̅)σ^0, 1 + 2 (∇_∂_z s)^0, 1, (∇_∂_z̅σ^1, 0)^⊤ dx^1 dx^2 + ∫_^22 (∇_∂_z s)^⊥, (∇_∂_z̅σ^1, 0)^⊥ dx^1 dx^2. Inserting equations (<ref>) and (<ref>) into (<ref>), we get 1/4δ^2 E^ω(u)(v,v) = ∫_^2|η|^2 dV_g + ∫_^2|(∇_∂_z s)^⊥|^2 - |(∇_∂_z s)^⊤|^2 dx^1dx^2 + ∫_^2 R(σ^1, 0, u_z̅)u_z, σ^0, 1 -2 s, R(u_z, u_z̅)σ^0, 1 dx^1dx^2 + ∫_^22∇_∂_z s, (∇_∂_z̅σ^1, 0)^⊥ - 2(∇_∂_z s)^1, 0, ∇_∂_zσ^1, 0 dx^1dx^2 + ∫_^2v,(∇_v H)(u_z∧ u_z̅) - ⟨ R(v, u_z)u_z̅, v ⟩ dx^1dx^2. Here we used the integral identity μ^2/2|η|^2 = |(∇_∂_z s)^0, 1|^2 + |(∇_∂_zσ^0, 1)^⊤|^2 + 2 (∇_∂_z s)^0, 1, (∇_∂_z̅σ^1, 0)^⊤. 
By (<ref>) and (<ref>), utilizing integration by parts we consider the integral in the third line of (<ref>) to see ∫_^22∇_∂_z s, (∇_∂_z̅σ^1, 0)^⊥ - 2 (∇_∂_z s)^1, 0, ∇_∂_zσ^1, 0 dx^1dx^2 =∫_^22∇_∂_z s,-√(-1)H(u_z ∧σ^0,1) - 2 (-√(-1)H(u_z∧ s), ∇_∂_zσ^1, 0 dx^1dx^2 = ∫_^22H ∂_z s,-√(-1)u_z ∧σ^0,1 dx^1dx^2 = -∫_^2s,-√(-1) (∇_σH) (u_z ∧ u_z̅) dx^1dx^2. For the term about derivatives of H in (<ref>), we use the ant-symmetric of wedge product to get that ∫_^2v,(∇_v H)(u_z ∧ u_z̅)dx^1dx^2 = ∫_^2s,(∇_s H)(u_z∧ u_z̅) dx^1dx^2 + ∫_^2s,-√(-1) (∇_σH) (u_z ∧ u_z̅) dx^1dx^2. Furthermore, conjunction the second line in (<ref>) with the Riemann curvature tensor term in the final line of equation (<ref>) results in ∫_^2 R(σ^1, 0, u_z̅)u_z, σ^0, 1-2 s, R(u_z, u_z̅)σ^0, 1 - R(v, u_z)u_z̅, v dx^1dx^2 = -∫_^2 R(s, u_z)u_z̅, s dx^1dx^2. Therefore, by substituting (<ref>), (<ref>), (<ref>) into (<ref>), we obtain δ^2 E^ω(u)(v, v) = 4∫_^2 |η|^2 dV_g + 4∫_^2 |(∇_∂_z s)^⊥|^2 - |(∇_∂_z s)^⊤|^2dx^1dx^2 + 4∫_^2s,√(-1)(∇_s H)(u_z,u_z̅) - R(s, u_z)u_z̅, s dx^1dx^2 ≤ 4∫_^2 |η|^2 dV_g + ∫_^2|∇ f|^2 - |∇ u|^2( |H|^2 + Ric( n, n)/2 - |∇ H| )f^2 dV_g where s = f n, ⟨ R(s, u_z)u_z̅, s ⟩ = μ^2 |∇ u|^2/8f^2 Ric( n, n) |(∇_∂_z s)^⊤|^2 ≥ |(∇_∂_z s)^1, 0|^2 = |H|^2f^2 |u_z|^2 = μ^2 |∇ u|^2/4|H|^2f^2, and s,√(-1)(∇_s H)(u_z∧ u_z̅)≤1/4|∇ H|· |∇ u|^2 f^2, which gives the inequality (<ref>) as asserted. We are now ready to give the proof of Proposition <ref>. Based on Lemma <ref>, the task at hand can be accomplished by finding the solution to the following equation for σ^0,1∈Γ(ξ^0,1) (∇_∂_zσ^0, 1)^⊤ dz = -(∇_∂_z s)^0, 1 dz. But from the proof of genus zero part of <cit.> or <cit.>, for each s ∈Γ(ν) there exists a solution to (<ref>). Then, pick any linearly independent sections s_1, …, s_d of ν such that B_ω(u) is negative definite on their 𝒱 : = Span {s_1, …, s_d}. For each 1 ≤ i ≤ d we choose a solution σ_i^0, 1∈Γ(ξ^0, 1) to (<ref>) with s placed by s_i and define σ_i := σ_i^0, 1 + σ_i^0, 1. Next, we define a linear map T: V →Γ(E) by assigning each s_i to s_i + σ_i. By substituting s + σ with T(s) in Lemma  <ref>, it can be inferred that δ^2 E^ω(u)(T(s), T(s)) ≤ B_ω(u)(s, s) < 0 for all s ∈ V. The fact that T is injective leads to the conclusion that the index of B_ω(u) is no greater than Ind_E^ω(u). Using the index comparison mentioned in Proposition <ref>, in conjunction with the standard conformal balancing argument (as described in <cit.> and also see <cit.>), we are able to derive a uniform energy bound. The main result is the following. Suppose |H|^2 h + Ric_h/2 - |∇ H| h > C_0 h, and let u:^2 → N be a solution to (<ref>) and (<ref>) with Morse index at most 1. Then E(u) ≤C_0/8π. To establish this proposition, it is necessary to assume that u is non-constant. By assumption and Proposition <ref>, we see that B_ω(u) also has Morse index at most 1. Moreover, let f be a constant function in B_ω(u)(f, f), it follows that B_ω(u) admits index exactly one, which further implies B_ω(u)(f, f) ≥ 0, for any non-constant f. Suppose φ > 0 is a smallest eigenfunction for the elliptic operator -Δ - (|H|^2 + Ric( n, n)/2 - |∇ H|)|∇ u|^2. A mapping degree argument described in <cit.> tells us that there exists a conformal map F: ^2 →^2 ⊂^3 such that ∫_^2 (F^i) φ dV_g = 0, for i = 1, 2, 3. which implies F^i can not be constant. Here x^1, x^2, x^3 is the coordinates of the standard embedding ^2 ↪^3 and we use the abbreviation F^i as x^i(F) to simplify the notation. 
Hence by (<ref>) we have ∫_^2|∇ (F^i)|^2 - |∇ u|^2 ( |H|^2 + Ric( n, n)/2 - |∇ H| )(F^i)^2 dV_g ≥ 0. Rearranging the above inequality and taking summation to conclude C_0∫_^2|∇ u|^2 dV_g = C_0∫_^2|∇ u|^2∑_i = 1^3 (F^i)^2 dV_g ≤∑_i = 1^3 ∫_^2 |∇ (F^i)|^2 dV_g = ∑_i = 1^3∫_^2 |∇ x^i|^2 dV_g = 8π, where the inequality is obtained by the conformally invariance of the energy. This gives (<ref>). Firstly, by Theorem <ref> there exists a strictly increasing sequence λ_j ↗λ such that we can find a sequence of corresponding non-constant H-spheres u_j with prescribed mean curvature λ_j H and Morse index at most 1. By compactness of N and assumption (<ref>), there exists C_0 > 0 such that (<ref>) holds, hence E(u_j) ≤C_0/8π. By the energy gap Lemma <ref>, there exists ε_0 > 0 such that E(u_j) ≥ε_0^2. By a similar estimate as Lemma <ref> and Theorem <ref>, we also have an alternative: either, after passing to a subsequnece, u_j converges strongly to a H-sphere u with E(u) ≥ε_0^2, or the the energy E(u_j) of sequence u_j concentrates at some points, in which case we can also obtain a H-sphere v satisfying E(v) ≥ε_0^2 by a rescaling argument and applying Lemma <ref>. In both cases, the index upper bound is established exactly as in the proof of corresponding part of Theorem <ref> and the Ricci curvature condition (<ref>) implies that the Morse index of a non-constant H-sphere is positive. In a word, we complete the proof of Theorem <ref>. §.§ Existence of Lg-Sphere under Isotropic Curvature Assumption When Lg 5pt In this subsection, we prove the second assertion of Theorem <ref>. We first recall that an element z∈ T_pN⊗ℂ is called isotropic if z,z = 0 for complex linearly extended metric ·,· from h defined on T_pN and a complex linear subspace Z ⊂ T_pN⊗ℂ is called totally isotropic if z,z = 0 for any z ∈ Z. The Riemannian manifold (N,h) is said that has positive isotropic curvature if the complexified sectional curvature R satisfies 𝒦(σ) := R(z,w,z̅,w̅)/|z ∧ w|^2 > 0 whenever σ⊂ T_pN⊗ℂ is a totally isotropic two plane at p ∈ N. Moreover, recall ξ^1,0 and ξ^0,1 are locally spanned by u_z and u_z̅ which are isotropic line bundles within 𝐄, let ν⊗ℂ : = ν_ℂ be the complexified normal bundle of ξ_ℂ in 𝐄. Since ^2 can be viewed as an 1-dimensional complex manifold, by <cit.> there exists a unique holomorphic structure on ν_ℂ such that ∇^'' = ∂̅ where ∇^' : ∧^0,0(ν_ℂ) ⟶∧^1,0(ν_ℂ), ∇^'' : ∧^0,0(ν_ℂ) ⟶∧^0,1(ν_ℂ) are two component of complex linear extended connection ∇^⊥ on ν_ℂ. Equipped with above notions, we have: We will actually prove a stronger assertion: 5pt Let (N,h) be an n-dimensional Riemannian manifold with isotropic curvature satisfying (<ref>). Then any non-constant conformal H-sphere has Morse index at least [(n-2)/2]. 5pt By Grothendick's theorem, see <cit.>, which says that any holomorphic vector bundle over ^2 can be represented as a direct sum of holomorphic line bundles, we can decompose ν_ℂ as ν_ℂ = L_1 ⊕ L_2 ⊕⋯⊕ L_n-2 which is unique up to a permutation of the order for L_i. So after changing the order of L_i, we can assume that c_1(L_1)≥c_1(L_2)≥⋯≥c_1(L_n-2) where c_1(L_i) is the first Chern class of L_i evaluated on the fundamental class of ^2. The Levi-Civita connection ∇ preserves the Riemannian metric parallelly, resulting in a complex linearly extended bi-linear form denoted as ·,·:ν_ℂ×ν_ℂ→ℂ. This bi-linear form is holomorphic and establishes a holomorphic isomorphism between ν_ℂ and its dual ν_ℂ^*. Thus, by the invairance of Chern class, we have c_1(L_i) + c_1(L_n-i-1) = 0. 
Let V_i be a meromorphic section of L_i for 1 ≤ i ≤ n-2. If ⟨ V_i, V_j ⟩≢ 0, then we must have c_1(L_i) + c_1(L_j) = 0; that is, ⟨ V_i, V_j ⟩≡ 0 whenever c_1(L_i) + c_1(L_j) ≠ 0, for any sections V_i ∈Γ(L_i) and V_j ∈Γ(L_j). Denote by N_0 the direct sum of the line bundles with vanishing first Chern class and by N_+ (N_-) the direct sum of the line bundles with positive (negative) first Chern class. Then N_+ is an isotropic sub-bundle of ν_ℂ and ⟨ V_0, V_+ ⟩ = 0 for each V_0 ∈Γ(N_0) and V_+ ∈Γ(N_+). It follows from the Riemann-Roch theorem that dim_ℂ(𝒪(L_i)) = c_1(L_i) + 1 if c_1(L_i) ≥ 0, and dim_ℂ(𝒪(L_i)) = 0 if c_1(L_i) < 0, where 𝒪(L_i) is the space of holomorphic sections of L_i. For any holomorphic sections W_i, W_j of N_0, the pairing ⟨ W_i, W_j ⟩ is a holomorphic function on ^2 and hence constant, so one can choose the W_i such that {W_i}_1 ≤ i ≤dim(N_0) forms an orthonormal basis of each fibre of N_0 with respect to the bi-linear form ⟨·,·⟩. Let 𝒪 be the complex linear space of holomorphic sections of ν_ℂ spanned by the holomorphic isotropic sections W_1 + √(-1) W_2, W_3 + √(-1) W_4, ⋯ , W_2m-1 + √(-1)W_2m, where m = [dim_ℂ(N_0)/2] ∈ℕ, together with the holomorphic sections of N_+. Then, we conclude that dim_ℂ(𝒪) ≥ [(n-2)/2]. Note that, by the choice of ν_ℂ, u_z is linearly independent of the elements of 𝒪; that is, u_z and V span a totally isotropic two-plane in E for any V ∈𝒪. Thus, by the complex form of the second variation formula (<ref>) for E^ω, for any V ∈𝒪 we have δ^2 E^ω(u)(V, V) = 4∫_^2 - ⟨ R(V, u_z)u_z̅, V ⟩ dV_g + 4 ∫_^2⟨ V,√(-1)(∇_V H)(u_z∧ u_z̅)⟩ dV_g < 0, provided that (<ref>) holds. Therefore, the Morse index of u is greater than or equal to [(n-2)/2]. The proof of part (<ref>) of Theorem <ref> follows directly from Claim <ref>.
http://arxiv.org/abs/2407.11957v1
20240716174923
Segregation to grain boundaries in disordered systems: an application to a Ni-based superalloy
[ "Dominik Gehringer", "Lorenz Romaner", "David Holec" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
inst1]Dominik Gehringer inst1]Lorenz Romaner inst1]David Holec [inst1]organization=Department of Materials Science, Montanuniversität Leoben, addressline=Franz-Josef-Straße 18, city=Leoben, postcode=8700, country=Austria § ABSTRACT Segregation to defects, in particular to grain boundaries (GBs), is an unavoidable phenomenon leading to changed material behavior over time. With the increase of available computational power, unbiased quantum-mechanical predictions of segregation energies, which feed classical thermodynamics models of segregation (e.g., McLean isotherm), become available. In recent years, huge progress towards predictions closely resembling experimental observations was made by considering the statistical nature of the segregation process due to competing segregation sites at a single GB and/or many different types of co-existing GBs. In the present work, we further expand this field by explicitly showing how compositional disorder, present in realistic engineering alloys (e.g. steels or Ni-based superalloys), gives rise to a spectrum of segregation energies. With the example of a Σ 5 GB in a Ni-based model alloy (Ni-Co-Cr-Ti-Al), we show that the segregation energies of Fe, Mn, W, Nb and Zr are significantly different from those predicted for pure elemental Ni. We further use the predicted segregation energy spectra in a statistical evaluation of GB enrichment, which allows for extracting segregation enthalpy and segregation entropy terms related to the chemical complexity in multi-component alloys. density functional theory segregation grain boundaries Ni-based superalloy multi-component alloy § INTRODUCTION Grain boundary (GB) segregation is a key phenomenon that needs to be understood and controlled when developing novel materials. This issue has been addressed experimentally for many decades. With ever-growing computational power, quantum-mechanical methods have become routinely used in the last two decades to predict GB segregation energies. Nevertheless, ab initio methods, such as e.g. density functional theory (DFT), remain computationally very demanding. Consequently, three major approximations are commonly imposed on the atomistic models. Firstly, only high symmetry grain boundaries are treated by DFT due to restrictions on the model size. This limitation can be overcome by applying some special methods such as a combination of quantum mechanics and molecular mechanics <cit.>; however, such techniques are still computationally costly and not yet widely used. A more common approach is to employ (semi-)empirical interatomic potentials (IP) together with molecular mechanics to investigate segregation to structurally complex GBs <cit.> or even simulating polycrystals <cit.>. A recent publication of <cit.> demonstrated that indeed the spectral treatment stemming from the structural variety of grain boundary sites in polycrystals is crucial for quantitative modeling of segregation. Secondly, most studies—as is the case also of the present contribution—restrict themselves to the dilute limit. When studying concentration dependence <cit.> or co-segregation <cit.>, many combinations of GB states need to be sampled which leads to a drastic increase in the number of calculations involved. The third simplification is that segregation phenomena in alloys are always—except for a work by <cit.> and an even very recent study <cit.>—modeled with a system of the corresponding pure metal. 
For example, pure iron is used as a surrogate model to steels for predicting the segregation behavior <cit.>. Similarly, pure Ni is used for ab initio modelling of segregation in Ni-based superalloys <cit.>. While the spectral nature of segregation energies due to structural variety in realistic materials has been heavily advocated in the recent literature <cit.>, the statistical distribution of segregation energies due to complex-chemistry in the matrix has been touched upon only scarcely and only for binary systems <cit.>. Therefore, in the present study, we address segregation of solute species in the dilute limit in a multi-component matrix. We have chosen a high symmetry Σ5(210)[001] GB (Fig. <ref>) as a structural model for our first-principles study. This is motivated by the vast existing literature on segregation in Ni to this GB, which helps discuss the effect of the complex chemical composition. The matrix composition, inspired by realistic Ni-based superalloy compositions <cit.>, e.g. the Udimet 720 alloy[<https://www.matweb.com/search/datasheet.aspx?matguid=205ba4b2490a481c95908800f21b7bc8>, accessed on 2024-05-30.], is summarized in Tab. <ref>. We considered 5 majority (matrix) elements, Ni, Cr, Co, Ti, and Al, and focused on the segregation behavior of five minority Fe, Mn, W, Nb, and Zr species. We model the matrix as a solid solution using special quasirandom structures (SQS) <cit.>. The paper is structured as follows: We start with revising the standard methodology for computing the segregation energy in pure elemental systems. We continue by introducing the necessary theory for the spectral representation of segregation energy, similar to Refs. <cit.>, but point out the major difference—the origin of the spectra (chemical vs. structural complexity). The theory part is concluded by generalizing the McLean model <cit.> for a multi-component system. In the next part, we describe our atomistic models and the DFT setup. The results section starts by comparing segregation phenomena in the pure system with our multi-component setup. We continue by comparing the predictions of the McLean model for the two cases. The final section is related to extracting enthalpy and entropy of segregation, which have been phenomenologically used before to explain experimental observations <cit.>. We conclude with a summary of the main results. § THEORY §.§ Segregation energy in disordered systems §.§.§ Segregation energy in a pure elemental system The segregation energy quantifies the thermodynamic driving force for segregation to a GB. This is evaluated as a difference between the formation energy of a point defect of the solute species X in bulk (B), E^f, X_B with the corresponding formation energy at the grain boundary (GB), E^f, X_GB. For a GB model of pure system of species M where the solute atom X sits at the GB site i, the segregation energy reads: Δ E^X,i_seg = (E_GB^X → M_i - E_GB + μ^M - μ^X)_E^f, X_GB - (E_B^X → M - E_B + μ^M - μ^X)_E^f, X_B = E_GB^X → M_i - E_B^X → M - E_GB + E_B . Here, E_GB and E_B refer to the total energies of the undecorated systems. μ^M and μ^X refer to the chemical potentials of matrix and solute species, respectively. E^X → M_B and E^X → M_i_GB are the energies of the decorated system, where one M atom is replaced by an X atom. We note that in a steady state, the chemical potential of each species is the same in bulk and at the GB <cit.>. It is a common practice to define a set of GB sites (sites which are belonging to the GB). 
Index i labels symmetry-inequivalent GB sites. For example, for the pure Ni Σ 5 GB in Fig. <ref>, i can take values in {1, 2, 3}. All other sites have a bulk-like nearest-neighbor environment. §.§.§ Disorder in the bulk: E^f,X_B In the following two sections, we shed light on the implications of chemical disorder for the meaning of the segregation energy. Firstly, the matrix of a multi-component system is composed of many species M∈𝐌 = {Ni, Cr, Co, Ti, Al}, in contrast to a pure Ni matrix. Secondly, even for one species M, the bulk formation energy is not a single-valued scalar since the chemical disorder breaks the symmetry and introduces a variety of local environments. In Eq. (<ref>), we implicitly considered a single bulk site that we compare with many GB sites. In the case of a multi-component matrix, we also get a spectrum of bulk formation energies, E^f, X →M_j_B, denoting a solute X replacing the matrix species M at the site j: E^f, X →M_j_B = E_B^X →M_j - E_B + μ^M - μ^X . §.§.§ Disorder at the GB: E^f,X_GB Already in the pure elemental case, there are several different GB sites (indexed with i, Fig. <ref> left) differing by the spatial arrangement of their neighboring sites. As in the bulk, the chemical disorder also breaks the symmetry at these GB sites, so that each of them may have a different set of neighboring species. We note that the GB zone (red region in Fig. <ref>) containing the GB sites was chosen in accordance with Refs. <cit.>. As a consequence, the site index for the GB state in the multi-component case may correspond to any of these GB sites (instead of three symmetry-inequivalent ones in the pure elemental case, Fig. <ref>). In short, both formation energies E^f,X_B and E^f,X_GB are sets of values rather than single values: E^f, X_B/GB = {E^X →M_i_B/GB - E_B/GB + μ^M - μ^X }_∀ i ∈ 1, …, N_B/GB where N_B and N_GB are the numbers of bulk and GB states, respectively. Consider now that the solute atom can occupy any bulk state, and from there can reach each GB state. Consequently, we obtain the segregation energy spectrum by creating all possible combinations of sites and elements in the sets in Eq. (<ref>): Δ E^X_ij^MM'_seg = E_GB^X →M_i - E_GB - E_B^X →M'_j + E_B + μ^M - μ^M' = E_GB^X →M_i - E_GB - E_B^X →M'_j + E_B + Δμ^MM', with Δμ^MM' := μ^M - μ^M'. In order to avoid any ambiguity related to a particular choice of chemical potentials (see later discussion of Fig. <ref>), which is a non-trivial task in the case of a compositionally complex alloy, we will further treat only situations where Δμ^MM'=0, i.e. when M=M'. This also helps to preserve the compositions of our bulk and GB simulation boxes as similar as possible due to their rather small size (100–200 atoms, as restricted by the DFT calculations). We also note that in the framework of this work, we do not consider any co-segregation of minority species, nor do we aim to discuss segregation competition between solute (minority) and matrix (majority) elements; segregation of the latter is not considered at this level of simplification. We thereby finally arrive at: Δ E^X_seg = {Δ E^X_ij^MM'_seg | M = M', i=1,…,N_GB, j=1,…,N_B} . The above Eq. (<ref>) defines the segregation energy in a multi-component matrix. It considers all possible swaps between bulk and grain boundary sites occupied by the same chemical species.
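To make the bookkeeping behind this definition concrete, the following minimal Python sketch assembles the segregation energy spectrum from precomputed total energies. The data layout (dictionaries keyed by site index and by the matrix species originally occupying that site) is a hypothetical container for the DFT results, not part of the original workflow.

# Minimal sketch (hypothetical data layout): build the segregation energy
# spectrum by combining every GB state with every bulk state occupied by
# the same matrix species M, so that the chemical potentials cancel
# (Delta mu^MM' = 0).
def segregation_spectrum(E_GB_X, E_B_X, E_GB, E_B):
    """E_GB_X[(i, M)] / E_B_X[(j, M)]: total energies of the cells with the
    solute X placed on GB site i / bulk site j originally occupied by M.
    E_GB, E_B: total energies of the undecorated GB and bulk cells."""
    spectrum = []
    for (i, M_gb), e_gb in E_GB_X.items():
        for (j, M_b), e_b in E_B_X.items():
            if M_gb != M_b:          # restrict to same-species swaps, M = M'
                continue
            dE_seg = (e_gb - E_GB) - (e_b - E_B)
            spectrum.append(dE_seg)
    return spectrum                  # N_GB x N_B(M) values per solute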
Such an approach was suggested already by <cit.> and got attention again more recently. For example <cit.>, sampled the fundamental zones <cit.> (orientations of the cutting boundary plane <cit.>) of a Σ 5 grain boundary. Therein, they used a Gaussian distribution to describe the energy spectra. <cit.> discussed the impact of energy spectra more intensively and tried to connect it with the measurable enthalpy and entropy. They found a Gumbel distribution fitting their energy spectra best. Similarly, <cit.> studied the grain boundary segregation in a polycrystal and investigated the impact of the segregation energy spectra on the stability of nanocrystalline materials. In the present work we follow the approach of <cit.> and use a skew-normal distribution to represent the the segregation energies. For a solute X, the skew-normal distribution reads F̂^X(Δ E_seg^X) = 1σ√(2π)exp(-(Δ E_seg^X - μ)^2/2σ^2)erfc(-α(Δ E_seg^X - μ)/√(2σ^2)) where μ and σ are the parameters of a Gaussian distribution, and α the skewness parameter. The sign of α determines the side of the skew; α = 0 yields a Gaussian distribution. These parameters are obtained by fitting Eq. (<ref>) to a histogram of the (discrete) segregation energies from Eq. (<ref>). We further define the mean value of the distribution as: <Δ E_seg^X> = <F̂^X> = ∫_-∞^∞F̂^X(Δ E_seg^X) Δ E_seg^X dΔ E_seg^X . The width of a spectrum is quantified by its full-width at half maximum (FWHM). We want to point out a qualitative difference between the energy spectra discussed in literature <cit.> and the present study. The spectra in previous studies originated from sampling many structurally different GBs in a pure metal. In contrast, we focus only on a single GB (Σ 5(210)[001]), however, in a compositionally disordered system. The two main implications are as follows: Firstly, the chemical disorder gives rise to a distribution of bulk formation energies, too, compared to a single state in the chemically pure bulk case. This is illustrated by Fig. <ref>. For each solute (different color), Fig. <ref> shows a pair of formation energy distributions (exhibiting the same color). The bulk states' energy distribution on the left, and the GB states' distribution on the right. These distributions further split into subsets depending on the substituted matrix species M∈{Ni, Co, Cr}. Secondly, the segregation energy distribution arises from chemically different local environments of bulk and GB states. In other words, the distributions in literature <cit.> are caused by the structural variety of grain boundaries, while here, they are caused by the chemical complexity of the alloy model. Finally, we point out a subtle difference in the meaning of F̂^X. In a pure system (i.e. a single bulk state), F̂^X(Δ E_seg^X) is proportional to the number of GB states that correspond to energy Δ E_seg for the solute X. In a solid solution, were Δ E_seg consists of bulk and GB energies, F̂^X is proportional to the number of GB and bulk state pairs that yield segregation energy of Δ E_seg. §.§ Segregation energy distribution in thermodynamic models The McLean isotherm <cit.> relates the equilibrium solute concentration for a species X at the GB[We recall that our model is formulated under the assumption of constant matrix/GB composition and does not treat any co-segregation phenomena.], X_GB^X, with its bulk concentration, X_B and the corresponding Gibbs free-energy of segregation, Δ G_seg^X: X_GB1-X^GB = X^B1-X^Bexp(-Δ G^X_segk_B T) . 
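As a concrete illustration of the spectral representation described above (which also feeds the isotherm averaging discussed below), the following minimal Python sketch fits the skew-normal form to a set of segregation energies and extracts the mean value and FWHM. It fits the raw energies by maximum likelihood via scipy.stats.skewnorm, whose shape, location and scale parameters play the roles of α, μ and σ; this is a simplification of the histogram fit used in the paper, and the FWHM is read off numerically from the fitted density.

import numpy as np
from scipy.stats import skewnorm

def fit_spectrum(dE_seg):
    # Maximum-likelihood fit of the skew-normal form to the raw energies;
    # skewnorm uses (a, loc, scale), corresponding to (alpha, mu, sigma).
    dE_seg = np.asarray(dE_seg, dtype=float)
    alpha, mu, sigma = skewnorm.fit(dE_seg)
    mean = skewnorm.mean(alpha, loc=mu, scale=sigma)   # <Delta E_seg>
    # FWHM evaluated numerically from the fitted density.
    x = np.linspace(dE_seg.min() - 1.0, dE_seg.max() + 1.0, 4001)
    pdf = skewnorm.pdf(x, alpha, loc=mu, scale=sigma)
    above_half = x[pdf >= 0.5 * pdf.max()]
    fwhm = above_half[-1] - above_half[0]
    return alpha, mu, sigma, mean, fwhm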
In 0 K first-principles calculation corresponding to ambient conditions, it is common to substitute Δ E_seg^X for Δ G_seg^X due to the relation Δ G = Δ H -T Δ S = Δ U^Δ E + Δ p V + p Δ V^0 + TΔ S^0 K⇒ 0 . Note that Δ S only includes vibrational entropy as the configurational part is already included in McLean equation. Due to its computational complexity, the vibrational term is usually neglected and Δ G_seg^X≈Δ E_seg^X. For example, for the substitutional segregants in ferrite, this approximation was shown to be reasonable <cit.>. In general, vibrational entropy can be expected to cause a further splitting of segregation energies and to reduce segregation trends as discussed recently, e.g., in <cit.>. We do not discuss these effects here but leave them to future investigations. To account for the spectral nature of segregation energies, we compute an “averaged” isotherm by a convolution of the McLean isotherm (Eq. (<ref>)) with the distribution from Eq. (<ref>): <X̂^GB(T)> = ∫_-∞^∞ X^GB(Δ E^X_seg, T) F̂^X(Δ E^X_seg) dΔ E^X_seg . We note that the same effective isotherm has been used for also the polycrystalline spectral models <cit.>. In the present work, we compare three different levels of approximation for each solute. Firstly, we compute the isotherm for a pure Ni system (i.e., using Δ E_seg determined in a pure Ni matrix). In a second step we replace the segregation energy distribution of a real alloy with its mean value (Eq. (<ref>)) and then use it in single-value McLean isoterm, Eq. (<ref>). Finally, we compare both with the effective isotherm calculated using Eq. (<ref>). §.§.§ Determining the enthalpy and entropy of segregation The original purpose of the McLean isotherm was to determine the segregation energy from a set of measured concentrations at different temperatures <cit.>. By rearranging Eq. (<ref>) we obtain: Δ G^eff_seg = -k_B Tln(X̂^GB(1 - X̂^GB)X^B(1-X^B)) . By substituting Eq. (<ref>) for X_GB in Eq. (<ref>) we obtain a temperature dependence of Δ G^eff_seg(T). We note that the temperature dependence does not relate to an entropy of clear physical meaning but merely originates from averaging the multitude of segregation scenarios due to the distribution of local chemistries (spectrum of segregation energies). A similar concept has been recently discussed in the literature for the case where the spectrum originated from the geometrical variety of GB structures <cit.>. This is in agreement with the linear temperature dependence of the segregation energy <cit.> often observed in experiments. Therefore, it is the spectral nature of segregation energies that gives rise to the temperature dependence Δ G^eff_seg. In the present case of a multicomponent alloy, Eq. (<ref>) yields the temperature dependence for a given bulk concentration X^B. The slope and intercept of a linear fit will yield an estimate for Δ H and Δ S according to Eq. <ref>. For more details, we refer the reader to  <ref>. § COMPUTATIONAL METHODS §.§ Atomistic model generation The chosen calculation setup involves separate atomistic models for bulk and grain boundary regions. This is primarily motivated by minimizing the needed computational resources, while maximizing the variety of local environments in the bulk region (i.e. region unaffected by the grain boundary). We note that many works involving pure metal matrix often employ a single supercell for both regions, e.g., Ref.  <cit.>. Bulk models We used a 4× 4× 4 supercell of fcc-nickel. 
The resulting 108 lattice sites were populated according to the composition shown in Tab. <ref>. The atoms were distributed using sqsgenerator <cit.> with a Monte-Carlo approach, optimizing the short-range parameters on the first seven coordination shells with interaction weights w^i = 1/i. We checked 10^10 configuration to choose a single special quasi-random structure (SQS) <cit.> representing the bulk Ni-based superalloy. Subsequently, we placed a solute atom X at each lattice position to sample the bulk states (cf. Sec. <ref>). GB models To make our setup comparable with previous literature, we used similar GB cell geometries as reported by <cit.>. The cell vectors a⃗=[12̅0], b⃗=[001] and c⃗=[210] refer to the axes in Fig. <ref>. We used a vacuum padding of 9.5 Å in c⃗ direction. In-plane, we created a 2×3 (a⃗×b⃗) supercell (slightly larger than 2×2 used in Refs. <cit.>). Thereby, our GB cells contain 114 atoms. Each such model contains 30 different GB sites (red region in the left panel of Fig. <ref>). In the pure elemental setting, 12 of them correspond to sites 2 and 3 (marked orange and green in Fig. <ref>), while 6 are site 1 (blue in Fig. <ref>). Therefore, we generated five different SQS to sample the GB states in order to sample as many different local environments as possible. Those were chosen using sqsgenerator <cit.> by probing 10^11 configurations. For the procedure on how we select the five SQS, we refer the reader to <ref>. We note that the computational complexity for sampling all states of the disordered cell (258 calculations in our particular setup: 108 bulk and 150 GB states) in comparison to a pure metal (one bulk and three GB states) is drastically increased. Finally, because of the slightly different numbers of atoms in the bulk (108) and GB (114) cells, the composition do not match exactly. However, the last row of Tab. <ref> shows that the maximum deviation is < 0.5 at.%. §.§ DFT setup The quantum-mechanical calculations were carried out with the Vienna Ab initio Simulation Package (VASP) <cit.>. For treating exchange and correlation effects, we employed the general gradient approximation as parametrized by Perdew-Burke-Ernzerhof and revised for solids (GGA-PBEsol) <cit.>. For the electronic self-consistent cycle, we set a convergence criterion of Δ E_SCF =10^-4 eV/cell. All calculations were carried out in spin-polarized mode. Spins with an initial magnitude of 2μ_B were ferromagnetically arranged. The projector augmented wave (PAW) method <cit.> was used to describe the electron-ion interactions. For the k-mesh sampling of the Brillouin zone, we used a Monkhorst-Pack <cit.> scheme with 4 × 4 × 4 for the bulk, and 4 × 4 × 1 k-points for the GB cells. We employed a first-order Methfessel-Paxton smearing <cit.> with a smearing width of 0.2 eV. The calculated data are available under the Creative Commons license in the NOMAD archive <cit.>. § RESULTS AND DISCUSSION §.§ Pure Ni vs. disordered alloy Owing to its methodological simplicity, pure elemental Ni is usually taken as a representative model for segregation in Ni-based superalloys. Therefore, we also use it as a reference in our study. Figure <ref> shows the segregation energy for each solute, Fe, Mn, W, Nb, and Zr, to each GB site in the pure system (Fig. <ref>, left). Our data show systematically smaller (more positive) segregation energy than the literature. 
We attribute this to the different choice of the XC functional, namely PBEsol in the present study as compared with PBE in all other calculations <cit.>. This, for example, leads to a strongly reduced segregation tendency predicted here. Importantly, the ordering of the site preference, i.e., the most preferable segregation site (with the exception of Mn) is site 3, which is the one further away from the GB plane, is fully consistent with previous reports <cit.>. We therefore conclude that our calculations qualitatively agree with the previous reports, and we can proceed in discussing the impact of real alloy composition on segregation. We now turn our attention to a model of the real disordered alloy. Figure <ref> shows the spectra of segregation energies (Eq. (<ref>)) together with the skew-normal fits for all five solutes. Each of the plots shows three spectra, one for each of the three types of GB sites as defined in pure Ni (S1, S2, and S3). The dashed horizontal lines in Fig. <ref> represent the pure Ni reference values (cf. <ref>). All values (mean, fit parameters, and pure segregation energies) for Fig. <ref> are summarized in the Appendix, Table <ref>. Figs. <ref>a–<ref>e reveal three major insights. Firstly, a comparison of the expectation values <Δ E^X_seg> (solid colored horizontal line) of the disordered alloy with the corresponding segregation energies in pure Ni case (colored dashed lines) yields a drastically enhanced (more negative) tendency in the former. While for iron, this enhancement is ≈ 0.25 eV for all three segregation spectra (cf. Fig. <ref>a), we find up to ≈ 1 eV for Nb (Fig. <ref>d). Despite this enhancement, the qualitative behavior of individual sites is preserved. This is particularly obvious in the cases of W, Nb, and Zr, where the mean value corresponding to the S3 sites is clearly lower than that of the S1 and S2 sites, where the latter is the least favored scenario. In order to explain the enhanced segregation for the complex matrix compared to pure Ni we consider phenomenological segregation models. The segregation energy is determined chiefly by two terms, a bonding contribution and an elastic contribution. The former arises mainly from the change in cohesive energy between solute and matrix and to a weaker extent from interaction of solute and matrix. The elastic contribution arises from the volume difference between solute and matrix. The two contributions seem to cancel almost exactly for W in pure Ni. The higher cohesive energy of W causes anti-segregation (positive segregation energy) while the higher atomic radius would cause segregation (negative segregation energies). The Ni-Cr-Co-Ti-Al solid solution has a higher volume (V = 10.737Å^3 vs. V=10.253 Å^3 for pure Ni) and therefore, the elastic contribution is reduced. Since, overall, the opposite is observed we conclude that the bonding contribution mainly causes the enhanced segregation. This would imply that the cohesive energy of the complex matrix is higher than the one of Ni, and the bonds formed between Ni and W are stronger than the bonds between the complex matrix and W. Similar considerations should apply for the other solutes. Secondly, for Fe, Mn, and Zr, we find nearly the same FWHM irrespective of the segregation site S1, S2, or S3 (all three values are within 0.1 eV for each species). W exhibits ≈ 0.3 eV broader distribution for S1 as compared with those of S2 and S3; contrarily, the S2 spectrum of Nb is ≈ 0.2 eV narrower than the S1 and S3 spectra. 
Thirdly, the fitted skew values α listed in Tab. <ref> clearly reveal that a Gaussian distribution (as used in Ref. <cit.>) is not sufficient to describe any of the spectra. We recall that according to Eq. (<ref>), the skew-normal distribution becomes a Gaussian distribution for α=0, while all our fits yield |α|>0. Similarly, also a Gumbel distribution (as used in Ref. <cit.>) does not have enough degrees of freedom. This is demonstrated, e.g., by a sign change of the skew values for Fe of S1 (α < 0) and S2 (α > 0). In other words, the skewness as a degree of freedom is needed to describe the left-skewed S1 and right-skewed S2 spectrum. However, the differentiation between S1, S2, and S3 has been made solely for better comparison with pure Ni. In a disordered system, every site in the GB zone (red region in Fig. <ref>) is generally surrounded by different matrix species, leading to largely overlapping spectra for the sites S1–S3 (Fig. <ref>a–e). Consequently, we consider only a single spectrum for each solute. Those are shown in Fig. <ref>f, where each spectrum is computed by merging the three spectra in the corresponding panel. For example, the blue spectrum for Fe in Fig. <ref>b is obtained by merging the three spectra from Fig. <ref>a. Again, we fitted the resulting spectra with skew-normal distributions (shown in Fig. <ref>f) and the resulting fitting parameters present in Tab. <ref>. Comparison with the (lowest) segregation energies in pure Ni (first row) confirms the significant enhancement due to the chemical disorder. This enhancement is up to an order of magnitude for Fe, W, and Nb. For example, while Nb exhibits nearly no tendency to segregate to the GB in pure Ni (Δ E_set^pure= -0.07 eV), a mean value of the alloy segregation spectrum is <Δ E_seg^Nb> = -0.83 eV. In contrast, for Zr we report Δ E_seg^pure= -0.95 eV and hence already a strong segregation tendency in pure Ni, but we still predict an enhancement to <Δ E_seg^Zr> = -1.41 eV for the alloyed system. Finally, we note that the spectral properties of the segregation energy cannot be ignored. The distributions shown in Fig. <ref>f are too broad to be replaced with a mean value. In particular, the FWHM of all the spectra is in the range or larger than its mean value. For example, for W, we obtain a mean segregation energy of <Δ E_seg^W> = -0.42 eV, whereas its FWHM is 1.49 eV. Furthermore, our (limited) data do not suggest any trend between the mean values and the FWHM. For example, the mean value <Δ E_seg> is nearly twice as low for Nb compared to W, the FWHM of Nb increases only slightly w.r.t. W (cf. Tab. <ref>). §.§ Thermodynamics of segregation The segregation energetics discussed in the previous section serve as inputs to the thermodynamic assessment of grain boundary segregation using McLean isotherms described in Sec. <ref>. These predict the fraction of GB sites a segregating species occupies at a given temperature. The results are summarized in Fig. <ref>. For each species, the black dashed line is McLean isotherm corresponding to the minimum segregation energy in pure Ni (Fig. <ref> and Tab. <ref>). The McLean isotherms based on mean values of the segregation spectra (Eq. (<ref>)) of Ni-based disordered alloy are shown with colored dotted lines. We recall that those values are significantly lower (i.e., representing stronger segregation tendency) than for the case of pure Ni (cf. Tab. <ref>). 
Consequently, significantly higher solute concentrations in the GB sites are predicted for the disordered alloy compared with the pure Ni case, and the GB sites retain their full occupancy by the solutes (X^GB≈1) to higher temperatures. In contrast, the single isotherm computed from the mean value spectrum (colored dotted line) overestimates the GB concentration at lower temperatures but drops below the averaged isotherms, as those show a significantly flatter slope. The flattening of the averaged isotherms becomes more pronounced for solutes with an increased segregation tendency (e.g., compare Fe and Mn on the one hand, with W, Nb, and Zr on the other hand) while the crossover between the mean and the averaged isotherm shifts to higher temperature. Consequently, this crossover is not in the shown temperature range for Nb and Zr anymore. In summary, all three isotherms are significantly distinct from each other, e.g., for Nb we find at 1500 K concentrations ranging from X^GB(Δ E_seg^pure) = 6.9 % over X^XB(<Δ (E_seg)>) = 69.4 % to <X^GB(F(Δ E_seg)) > = 95.0 %. The lower panels show the effective segregation energy (solid color lines) according to Eq. (<ref>). For each graph, the dotted line gives a mean value of the distribution as a reference. The black dashed line is a linear fit of Δ G^eff_seg(T) to the low-temperature range (slightly different for each species, with an upper limit between 1000 and 2000 K, see <ref>). This linear fit allows extraction of enthalpy, Δ H, and entropy, Δ S, of segregation. On the example of Fe and Mn, we now elucidate the deviation of Δ G^eff_seg(T) from the linear behavior. A “simple” McLean isotherm (at constant X^B) is characterized by a single segregation energy, which is constant throughout a temperature range. The temperature dependence hence arises from the spectral nature of the segregation energy. We find a strong (almost linear) temperature dependence for Δ G^eff_seg for Fe and Mn for up to T ≈ 750 K. Above this temperature, a flattening towards a constant level (which would be achieved at extremely high temperatures, though) means that for high T, it is more appropriate to describe the isotherm using a single-valued McLean isotherm. However, we also show that a constant regime is never really reached for temperatures up to (an likely much above) 2000 K, and hence the spectrum-based description is unavoidable. We are unaware of any experimentally measured data of segregation enthalpies and/or entropies for pure Ni or a Ni-basedalloy to be used for validation of our predictions. A comprehensive overview of these quantities has been collected for Fe-based systems by Lejcek and co-workers <cit.>. Our DFT-based predictions are in the same order of magnitude as those found for substitutional solutes in α-iron, thereby indirectly supporting their correctness. § CONCLUSIONS In the present article, we elaborated on the meaning of a “segregation energy” in a multi-component disordered solid solution. We proposed a novel approach, that is based on well-established models and allows to calculate the segregation energy distributions for solutes in the dilute limit in compositionally complex systems. We applied this methods to the segregation of Fe, Mn, Nb, W and Zr in a Ni-based superalloy. Importantly, we showed that first-principles predictions for disordered models lead to qualitatively different results than for pure Ni. To quantify the differences, we extensively discussed the segregation energy spectra, thereby highlighting their essential importance. 
In the second part, we compared the impact on the predictions based on the McLean model. We showed that even when replacing the distribution with a single value—the mean of the distribution—we predicted qualitatively different behavior compared to pure Ni. Next, we presented a complete spectrum of isotherms based on the McLean model, corresponding to the spectrum of the segregation energies. This allowed us to obtain a physically more realistic Gibbs free energy of segregation, which, in turn, allowed for a fully ab initio determination of the entropy and enthalpy of segregation for the solutes. We reiterate that the here-reported segregation enthalpy and entropy are consequences of the chemical complexity of the matrix material; further level of complexity would stem from the geometrical variety of grain boundary structures. § ACKNOWLEDGEMENTS D.G. greatly appreciates the support (DOC scholarship) from the Austrian Academy of Sciences (ÖAW). D.G. and D.H. acknowledge financial support by Öster­rei­chi­sche For­schungs­för­der­ungs­ge­sell­schaft mbH (FFG), project number FO999888151, “AMnonWeldSuperAlloys”. The computational results were in part achieved by using the VSC computing infrastructure. The authors also sincerely thank V. I. Razumovskiy and D. Scheiber from the Materials Center Leoben (MCL) Forschung GmbH, for their input and helpful discussions. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § SELECTION PROCESS FOR THE FIVE SQS The SQS optimization routine <cit.> will result in multiple candidate structures. We want to minimize the number GB states that are coordinated similarly and so to maximize the range of different local environments. Therefore, let i ∈{1,…,30} be a GB site in a candidate SQS GB structure. Then, we represent the local environment x_i by a histogram of the neighboring species x_i = {N^M_i }_∀M∈𝐌 , where N^M_i denotes the number of M atoms in the first coordination shell of the i^th site. The set of all local environments X_ξ is then X_ξ = {x_i}_∀ i = 1,…, 30 , where ξ is the index of a site in the SQS structure. i can take 30 values since for our particular setup (Sec. <ref>), the GB (red) region in Fig. <ref> contains that many sites. Hence, we want to find several (in our case 5, ξ=1,…,5) different SQSs, that maximize | ⋃_ξ X_ξ| →max . By sampling 150 (5 × 30) GB states, we could identify 5 SQS cells that yield together 133 differently coordinated sites in the first coordination shell. The local chemical compositions, re-calculated to only the GB zone, are summarized in Tab. <ref>. § FITTING PARAMETERS FOR FIG. <REF> § LINEAR FIT TO EXTRACT ENTHALPY AND ENTROPY OF SEGREGATION All McLean isotherms in Figs. <ref>a–e are shown for a temperature range from 50 to 2500 K. This is because, for very low temperatures, a finite-sized (in terms of bits) floating point arithmetic reaches its accuracy limit. Furthermore, Figs. <ref>f–j present Δ G^eff_seg as a function of temperature accompanied by a linear fit. We find a strong non-linear behavior of the effective segregation energy for high temperatures. Therefore, we have manually fixed the temperature range for the linear fit, from which Δ H and Δ S are extracted. For all solutes, the lower border is T_min = 50 K, while the upper border for Fe and Mn is T_max^Fe = T_max^Mn = 750 K. For W we have used T_max^Fe = 1250 K. 
Finally for Nb and Zr the upper limits are T_max^Nb = T_max^Zr = 2000 K.
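The appendix procedure just described (convolving the McLean isotherm with the fitted skew-normal spectrum, back-computing the effective segregation free energy, and fitting it linearly over the chosen temperature window) can be sketched in Python as follows. The integration bounds, the temperature grid and the default fitting window are illustrative choices, not the exact numerical settings of the paper.

import numpy as np
from scipy.stats import skewnorm
from scipy.integrate import quad

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def mclean_single(dE, T, x_bulk):
    # Single-valued McLean isotherm solved for the GB concentration X^GB.
    r = x_bulk / (1.0 - x_bulk) * np.exp(-dE / (K_B * T))
    return r / (1.0 + r)

def averaged_isotherm(T, x_bulk, alpha, mu, sigma):
    # Convolution of the McLean isotherm with the fitted skew-normal spectrum.
    integrand = lambda e: (mclean_single(e, T, x_bulk)
                           * skewnorm.pdf(e, alpha, loc=mu, scale=sigma))
    val, _ = quad(integrand, mu - 10 * sigma, mu + 10 * sigma)
    return val

def effective_dG(T, x_bulk, alpha, mu, sigma):
    # Effective segregation free energy back-computed from the averaged isotherm.
    x_gb = averaged_isotherm(T, x_bulk, alpha, mu, sigma)
    return -K_B * T * np.log(x_gb / (1.0 - x_gb) * (1.0 - x_bulk) / x_bulk)

def enthalpy_entropy(x_bulk, alpha, mu, sigma, T_min=50.0, T_max=750.0, n=30):
    # Linear fit dG_eff(T) ~ dH - T*dS over the chosen low-temperature window.
    T = np.linspace(T_min, T_max, n)
    dG = np.array([effective_dG(t, x_bulk, alpha, mu, sigma) for t in T])
    slope, intercept = np.polyfit(T, dG, 1)
    return intercept, -slope        # estimates of (dH, dS)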
http://arxiv.org/abs/2407.12208v1
20240716224835
Computing $k$-means in mixed precision
[ "Erin Carson", "Xinye Chen", "Xiaobo Liu" ]
math.NA
[ "math.NA", "cs.NA", "65G50, 68Q25, 68R10, 68U05" ]
label1]Erin Carsoncarson@karlin.mff.cuni.cz label1]Xinye Chenxinye.chen@matfyz.cuni.cz [label1]organization=Department of Numerical Mathematics, Charles University, city=Prague, postcode=186 75, country=Czech Republic label2]Xiaobo Liu xliu@mpi-magdeburg.mpg.de [label2]organization=Max Planck Institute for Dynamics of Complex Technical Systems, city=Magdeburg, postcode=39106, country=Germany § ABSTRACT The k-means algorithm is one of the most popular and critical techniques in data mining and machine learning, and it has achieved significant success in numerous science and engineering domains. Computing k-means to a global optimum is NP-hard in Euclidean space, yet there are a variety of efficient heuristic algorithms, such as Lloyd's algorithm, that converge to a local optimum with superpolynomial complexity in the worst case. Motivated by the emergence and prominence of mixed precision capabilities in hardware, a current trend is to develop low and mixed precision variants of algorithms in order to improve the runtime and energy consumption. In this paper we study the numerical stability of Lloyd's k-means algorithm, and, in particular, we confirm the stability of the widely used distance computation formula. We propose a mixed-precision framework for k-means computation and investigate the effects of low-precision distance computation within the framework. Through extensive simulations on various data clustering and image segmentation tasks, we verify the applicability and robustness of the mixed precision k-means method. We find that, in k-means computation, normalized data is more tolerant to the reduction of precision in the distance computation, while for nonnormalized data more care is needed in the use of reduced precision, mainly to avoid overflow. Our study demonstrates the potential for the use of mixed precision to accelerate the k-means computation and offers some insights into other distance-based machine learning methods. k-means mixed-precision algorithm normalization low precision clustering [2008] 65G50 68Q25 68R10 68U05 § INTRODUCTION The k-means algorithm has been one of the most studied algorithms in data mining and machine learning for decades, and it is still widely in use due to its simplicity. It aims to partition data into a selected number of groups such that the Sum of Squared Errors () is minimized. The k-means algorithm plays an important role in vector quantization <cit.>, bioinfomatics <cit.>, computer vision <cit.>, anomaly detection <cit.>, database management <cit.>, documents classification <cit.>, and nearest neighbor search <cit.>. As stated in <cit.>, with superpolynomial runtime, k-means is often very slow in practice. Many techniques have therefore been proposed to enhance the k-means algorithm regarding speed and scalability, for example, parallelization, batch computations, etc.; associated variants include  <cit.>, k-means|| <cit.>, mini-batch k-means <cit.>, I-k-means-+ <cit.>, sparse kernel k-means <cit.>, and parallel k-means <cit.>. With the increasing availability of and support for lower-precision floating-point arithmetic beyond the IEEE Standard 64-bit double (fp64) and 32-bit single (fp32) precisions <cit.> in both hardware and software simulation, low-precision arithmetic operations as well as number formats, e.g., 16-bit half precision (fp16) <cit.>, have been widely studied and exploited in numerous numerical algorithms. 
Low-precision floating point arithmetic offers greater throughput, reduced data communication, and less energy usage. Low-precision floating point formats have successfully been used in a number of existing works to improve computational speed, energy efficiency, and data storage costs, e.g., in solving systems of linear equations <cit.>, least squares problems <cit.>, and unsupervised learning <cit.>. There is thus great potential for exploiting mixed precision (particularly, lower-than-working precision) computations within algorithms to mitigate the cost of data transfers and memory access in computer cores or clusters, which can improve speed and reduce energy consumption. However, low precision computations also bring greater rounding error, and an increased risk of overflow and underflow due to the resulting narrower range of representable numbers (see  <ref> for parameters for floating point formats used in this paper), which can lead to significant loss of stability and accuracy. To avoid or mitigate these potential issues, a scaling strategy is often necessary in the use of low precision arithmetic <cit.>. Also, it is often crucial to perform rounding error analysis for the algorithm in order to recognize which parts of the algorithm can be performed in low precision—we need to identify the algorithmic components where mixed precisions can be employed judiciously to make the optimal use of our computational resources and quantitatively determine the precisions to be used, so as to maintain the quality of the algorithmic outputs at a much lower cost. Much effort has therefore been devoted to the study of mixed precision algorithms; we refer to <cit.> and <cit.>, for example, for surveys on the recent development of methods in numerical linear algebra and deep neural network quantization. In this work, we develop a mixed-precision k-means algorithm that has the potential to accelerate the k-means computation without degrading the output. Instead of developing a high performance implementation, our focus is on the analysis of the use of mixed precision; low-precision computations are simulated in software. Specifically, our contributions mainly include: * We study the numerical stability of Lloyd’s algorithm for k-means and confirm that the widely adopted distance computation scheme is stable. * We propose a mixed-precision framework for k-means computation. With some theoretical support, we investigate the effects of low-precision distance computations within the framework and develop a mixed-precision k-means method that works well on both normalized and nonnormalized data. The mixed-precision distance computing approach can be extended to other Euclidean distance-based algorithms. The paper is organized as follows. In Section <ref> we review some recent work on computing k-means with extra compute resources. In Section <ref> we recall the definitions of a commonly-used normalization technique in machine learning. The classical k-means algorithm is then presented in Section <ref>, where we briefly discuss the computational properties of two Euclidean distance computation formulae. In Section <ref> we discuss the numerical stability of Lloyd’s k-means method, including the stages of distance computation and center update. Subsequently, in Section <ref> we propose a mixed-precision k-means computation framework and identify through experiments a suitable mixed-precision distance computation scheme. 
Numerical simulations of our algorithms on various datasets against the working-precision algorithm are provided in Section <ref>. Conclusions and future directions of research are given in Section <ref>. § RELATED WORK Although the strategy of computing in mixed-precision has been widely used and become prevalent in neural network training (see, e.g., <cit.>), it appears the approach has not been exploited in k-means computation, and our work attempts to take the first step and provide some insight into its use via rounding error analysis. Despite the lack of study of mixed precision k-means algorithms, there are a number of publications for accelerating k-means algorithms. Due to the limited space of the paper we only mention a few that are most related to the focus of our work, e.g., those seeking performance improvement resort to extra computing resources. The k-means algorithm as well as the trending seeding of D^2 weighting are known to be inherently sequential, which makes it tricky to implement the algorithm in a parallel way, and numerous novel approaches have been proposed to enhance scalability and speed. The authors of <cit.> proposed a distributed computing scheme for coreset construction <cit.> that enables an acceptable low communication complexity and distributed clustering. In addition to coreset, many parallel and distributed implementations have been proposed. An initialization approach of D^2 weighting with MapReduce <cit.> is proposed in <cit.> to enable the use of only one job for choosing the k centers, which can significantly reduce the communication and I/O costs. A k-means algorithm based on GPUs using single instruction multiple data architectures is presented in <cit.>; the algorithm performs the assignment of data points as well as cluster center updates on the GPU, which enables a speedup by orders of magnitude. The GPU power is also harnessed in <cit.> to update centroids efficiently; the k-means algorithm therein also executes a single-pass strategy to remove data transfers—up to 2 and 18.5 times higher throughput is achieved compared to multi-pass and cross-processing strategies, respectively. More recently, Li et al. proposed in <cit.> a batched fashion GPU-based k-means, which achieves up to a 15× speedup versus a standard GPU-based implementation while generating the same output. § THE Z-SCORE NORMALIZATION Normalization tricks are very important and commonly used in machine learning applications, especially in statistical machine learning methods <cit.>. Recent advances show that normalization tricks play an important role in deep neural network training, e.g., batch normalization <cit.> and layer normalization <cit.>, which increase either general ability or data regularization. There are numerous ways to normalize the data to cater to specific applications. One of the most-widely used normalization method is the z-score method, which is almost a panacea in that is can be applied to all applications, and therefore we will use this method for data normalization in this paper. For P ∈ℝ^r × n, the z-score normalization of a vector p_i ∈ P is given by p_i = p_i - μ/σ, where μ=1/|P|∑_p ∈ Pp and σ=√(1/|P|∑_p ∈ P(p - μ)^2). The z-score normalization is a two-step procedure: the operation of p_i - μ is referred to as a data shift and the subsequent division by σ is referred to as data scaling. After the normalization, the data have a standard normal distribution. 
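As a minimal illustration, the z-score normalization above can be written in a few lines of NumPy; here it is applied feature-wise to data stored with one point per column (one common reading of the formula above, matching the layout of the data matrix P used later in the paper).

import numpy as np

def zscore(P):
    # Shift each feature by its mean (data shift) and divide by its
    # standard deviation (data scaling), cf. the two-step procedure above.
    mu = P.mean(axis=1, keepdims=True)
    sigma = P.std(axis=1, keepdims=True)
    return (P - mu) / sigma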
§ THE K-MEANS METHOD Known as a classical problem in machine learning and computational geometry, k-means seeks a codebook of k vectors, i.e., C=[c_1, …, c_k] ∈R^r × k, in n vectors P=[p_1, …, p_n] ∈R^r× n, such that k ≪ n, where each c_i is associated with a unique cluster S_i ⊆ P. Letting S_1, S_2, …, S_k ⊆ P be the k computed clusters, k-means aims to minimize the Sum of Squared Errors (SSE) given by SSE = ∑_i=1^kϕ (c_i, S_i) = ∑_i=1^k∑_p∈ S_idist(p, c_i)^2, where ϕ denotes some energy function, c_i denotes the center of cluster S_i, and dist(p_i, p_j) often denotes the Euclidean norm ‖p_i - p_j‖_2:=√((p_i-p_j)^T(p_i-p_j)). In Euclidean space, Lemma <ref> below ensures that choosing the mean center μ_i = 1/|S_i|∑_p ∈ S_i p as c_i always leads the iterations to monotonically decrease the SSE in (<ref>), which can thus be written as SSE = ∑_i=1^k |S_i| Var S_i. Lloyd's algorithm <cit.> (also known as the k-means algorithm) is a suboptimal solution of vector quantization to minimize the SSE. For simplicity, we will henceforth denote the Euclidean norm as dist(·). Given an arbitrary data point p' in the cluster S whose mean center is denoted by μ, we have ϕ (p',S) = ϕ (μ, S) + |S|dist(p', μ)^2. As a local improvement heuristic, it is well known that the k-means algorithm converges to a local minimum <cit.> and that no worst-case approximation guarantee can be given <cit.>. Yet it has been observed and verified that the initialization of centers, known as seeding, has a great impact on the final clustering result (see, e.g., <cit.> for surveys), and one of the most successful such methods is “seeding by D^2 weighting”, introduced by Arthur and Vassilvitskii <cit.> and presented as Algorithm <ref>. This optimal seeding by D^2 weighting combined with Lloyd's algorithm (often referred to as the classic k-means algorithm) can improve the stability, and achieve a competitive SSE while accelerating the classical k-means algorithm, and it has been proved to be O(log k)-competitive with the optimal clustering. The overall algorithm is known as the “k-means++” algorithm and is presented as Algorithm <ref>. §.§ Distance computation The computationally dominant step in the k-means algorithm (even when the initialization of clusters is included) is the distance computation, i.e., computing the probability in initializing the centers in Step <ref> of Algorithm <ref> and computing the closest center in Step <ref> of Algorithm <ref>. An obvious way of forming the squared distance between a point p_i in P and a center c_j is via dist(p_i, c_j)^2 := (p_i-c_j)^T (p_i-c_j), or dist(p_i, c_j)^2 := p_i^Tp_i-2c_j^T p_i + c_j^T c_j, where the distance computing scheme (<ref>) is of greater computational interest and is the prevailing choice in algorithmic implementations, for example, in the k-means algorithm of the machine learning library <cit.>. If the distance formula (<ref>) is used, it requires about kn distance computations in each iteration of executing Steps <ref>–<ref> of Algorithm <ref> (which we henceforth simply refer to as an iteration), since there are n distinct points p_i and k different centers c_j that are not necessarily from P, and this accounts for about 3r kn floating-point operations (flops). Therefore, if it takes the algorithm a total of N iterations before termination, then the overall cost of distance computing is approximately 3Nr kn flops.
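To make the preceding description concrete, the sketch below strings together seeding by D^2 weighting and a run of Lloyd iterations, with the squared distances formed via the second formula above: the point norms p_i^T p_i are computed once and the cross terms come from a single matrix product. This is an illustrative NumPy transcription under simplifying assumptions (e.g., empty clusters are not handled), not the implementation used in the experiments.

import numpy as np

def dsquared_seeding(P, k, rng=None):
    # Seeding by D^2 weighting: P is r x n with one data point per column.
    rng = np.random.default_rng(rng)
    n = P.shape[1]
    centers = [P[:, rng.integers(n)]]                # first center uniformly at random
    d2 = np.full(n, np.inf)
    for _ in range(1, k):
        diff = P - centers[-1][:, None]
        d2 = np.minimum(d2, np.einsum('ij,ij->j', diff, diff))   # D(p)^2 to nearest chosen center
        centers.append(P[:, rng.choice(n, p=d2 / d2.sum())])
    return np.column_stack(centers)

def squared_distances(P, C, p_norms):
    # dist(p_i, c_j)^2 = p_i^T p_i - 2 c_j^T p_i + c_j^T c_j; the cross term is one GEMM.
    c_norms = np.einsum('ij,ij->j', C, C)
    return p_norms[:, None] - 2.0 * (P.T @ C) + c_norms[None, :]

def lloyd(P, k, n_iter=100, rng=None):
    C = dsquared_seeding(P, k, rng)
    p_norms = np.einsum('ij,ij->j', P, P)            # precomputed once, reused every iteration
    for _ in range(n_iter):
        labels = squared_distances(P, C, p_norms).argmin(axis=1)                 # assignment step
        C = np.column_stack([P[:, labels == j].mean(axis=1) for j in range(k)])  # center update
    return C, labels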
On the other hand, the formula (<ref>) enables the n inner products p_i^Tp_i from all data points to be precomputed and stored, which takes only approximately 2r n flops and can be reused in different iterations in Algorithm <ref>. The center points c_j are potentially updated in a new iteration so recomputing is required for the c_j^Tc_j term in different iterations. Therefore, the overall cost of distance computing via (<ref>) in a total of N iterations is about 2r n+2Nr k + 3Nr kn=(3Nkn+2n+2Nk)r flops, which is approximately 3Nr kn flops in N iterations if n≫ 1 and Nk≫ 1, which is the case in practical applications. More importantly, computing the distance through (<ref>) for multiple vectors exploits better the level-3 basic linear algebra subprograms (BLAS) regarding matrix-matrix multiplication (GEMM) (see <cit.> for details)[<https://www.netlib.org/blas/>]. § NUMERICAL STABILITY OF THE K-MEANS METHOD To study the numerical stability of the k-means method, we need to look at its two main computational steps: distance computation and center update. We will use the standard model of floating-point arithmetic introduced in <cit.> for our stability analysis: (x y) = (x y) (1 + δ), |δ| ≤ u, where u denotes the unit roundoff associated with the floating point number system and x and y are floating-point numbers and denotes addition, subtraction, multiplication, or division. For vector addition and scalar multiplication, we have, for x,y∈ℝ^r and α∈ℝ <cit.>, (x+y) = x+y + v, (α x) = α x + w, where |v|≤ u|x+y| and |w|≤ u|α x|. Moreover, for vector inner products, we have <cit.>, (x^Ty) = x^Ty + s, |s|≤γ_r|x|^T|y|, where γ_r:= r u/(1-r u), assuming r u<1. The process of computing vector inner products is backward stable <cit.> but the forward accuracy depends also on the conditioning. Consider the inner product f(x):=x^Ty, where y is a fixed vector. This function has the relative condition number <cit.> (in the 2-norm) _2(f,x) := lim_ϵ→ 0sup_Δ x≤ϵxf(x+Δ x) - f(x)/ϵ |f(x)|, which is given explicitly by _2(f,x) = ∇ f x/f(x) = y^Tx/x^Ty = 1/cosθ, where θ∈[0,π] is the angle between x and y. This shows that the vector dot product is more sensitive (has larger condition number) if the two vectors are close to being orthogonal. Therefore, we can expect that the relative forward error in (x^Ty) will be large when x and y are close to being orthogonal.[In view of the rule of thumb that the forward error is approximately bounded above by the conditioning of the problem times the backward error; see <https://nhigham.com/2020/03/25/what-is-backward-error/>] On the other hand, if y=x, then we have |(x^Tx)-x^Tx|≤γ_r|x|^T|x|=γ_r x^Tx, which shows high relative accuracy is obtained. §.§ Distance computation We start with showing that the distance computation formula (<ref>) can be calculated to high relative accuracy in floating-point arithmetic, while there is no analogous bound on the relative forward error for the other formula (<ref>). The squared distance d_r ≡dist(x,y)^2 between x,y∈ℝ^r computed via d_r = (x-y)^T(x-y) in floating-point arithmetic satisfies d_r = d_r + Δ d_r where |Δ d_r| ≤γ_r+2 |d_r|. Using the bounds (<ref>)–(<ref>) from the standard model (<ref>), we have d_r≡((x-y)^T(x-y)) = (x-y+Δ s)^T(x-y+Δ s) +Δ p, where |Δ s|≤ u |x-y|, |Δ p|≤γ_r |x-y+Δ s|^T|x-y+Δ s|. Using <cit.>, we have |Δ d_r| = |d_r - d_r| ≤ 2|Δ s|^T|x-y| + γ_r |x-y|^T|x-y| + O(u^2) ≤ (2u + γ_r) |x-y|^T|x-y| + O(u^2) ≤γ_r+2 |x-y|^T|x-y| + O(u^2), and the result follows by ignoring second order terms in u. 
The squared distance d_r ≡dist(x,y)^2 between x,y∈ℝ^r computed via d_r = x^T x - 2 x^T y + y^T y in floating-point arithmetic with precision u satisfies d_r = d_r + Δ d_r where |Δ d_r| ≤γ_r+2 (x^T x + 2|x|^T |y| + y^T y). Similar to the proof of Theorem <ref>, we have d_r ≡(x^T x - 2 x^T y + y^T y) = (x^T x) - (2 x^T y) + (y^T y) + Δ s_1 + Δ s_2, where |Δ s_1| ≤ u |(x^T x) - (2 x^T y)|, |Δ s_2|≤ u |(x^T x) - (2 x^T y) + Δ s_1 + (y^T y)|, and (x^T x) = x^T x+ Δ p_1, (y^T y) = y^T y+ Δ p_2, (2 x^T y) = 2x^T y+ Δ p_3 , where |Δ p_1| ≤γ_r x^T x, |Δ p_2| ≤γ_r y^T y, |Δ p_3| ≤ 2 γ_r |x|^T |y|. By ignoring second order terms in u, we have |Δ d_r| = |d_r - d_r| ≤γ_r x^T x + 2γ_r |x|^T |y|+ γ_r y^T y + u|x^Tx - 2x^Ty| + u|d_r| ≤γ_r(x^T x + 2|x|^T |y| + y^T y) + u|x^Tx - 2x^Ty| + u|d_r|, and the result easily follows from further weakening the bound. Obviously, after the z-score normalization (<ref>) the pairwise distance between the scaled points x and y satisfy dist(x, y)^2 = dist(x, y)^2/σ^2, showing that the pairwise distance is scaled to 1/σ^2 of the original size. So for datasets that have points with large magnitude, normalization can avoid the overflow issue in computing the distance in low precision arithmetic. Also, the bounds given in Theorem <ref> and Theorem <ref> obviously scale with the data points, which means the absolute error in the distance computation will scale in data normalization. In general, the bound from Theorem <ref> is not satisfactory, but it says that if the two vectors x, y differ significantly in magnitude, then the squared distance d_r can be computed to high relative accuracy. Suppose x≪y (the case of y≪x is similar). Then from the Cauchy–Schwarz inequality, |x^Ty|≤ |x|^T|y| ≤|x||y|=xy≪y^2 = y^Ty, and therefore, d_r≈y^2 with |Δ d_r|≲γ_r+2y^2. Consider, on the other hand, when the two vectors x, y are close in magnitude, x≈y. By (<ref>), the bound from Theorem <ref> becomes |Δ d_r| ≤γ_r+2 (x^T x + 2xy + y^T y) ≈ 4γ_r+2y^2, and so the only worrying case implied from the bound is when (x-y)^T(x-y)= x^T x - 2 x^T y + y^T y =d_r^2 ≪y^2, x≈y, which can happen only when the angle θ between x and y is close to 0 (the two vectors are aligned). However, we know from (<ref>) that the forward error in computing x^Ty will be small if θ is close to zero, and from this argument it follows that the relative forward error in the squared distance computed via d_r = x^T x - 2 x^T y + y^T y is small. The conclusion from the analysis above is that the distance computing formula (<ref>) is easily shown to be forward stable, and the potential instability in evaluating the x^Ty term does not hinder the high relative accuracy of the other formula (<ref>) for distance computation. For a given dataset {p_i}_i=1^n, the pairwise distance computed by (<ref>) and (<ref>) can be stored into a kernel matrix with the sum of the squared errors represented in the Frobenius norm of the matrix. Let D=(d_ij) with d_ij = (p_i, p_j)^2 be the distance matrix obtained by using (<ref>) and let D=(d_ij) denote the computed values by using (<ref>). Then D - D_F = (∑_i,j=1^n(d_ij - d_ij)^2)^1/2, which can be viewed as an overall measure of the difference in terms of accuracy for two methods on the dataset. We generate data from the normal distribution and present in  <ref> the difference between the two formulae (<ref>) and (<ref>) implemented in double precision. 
In general, the difference is quite minor for data from the normal distribution with mean zero and various deviations, though this difference is slightly enlarged as the standard deviation increases. Overall, the formula (<ref>) should be preferred versus (<ref>) in terms of computational efficiency because it enables a precomputing paradigm which avoids a great deal of repetitive computation, and therefore it will be the focus henceforth. §.§ Cluster center update After the selection of the k initial centers, the k-means algorithm proceeds with updating cluster centers iteratively. In each iteration, a cluster center μ_i is updated by computing μ_i=1/|S_i|∑_p ∈ S_ip, which involves only the summation of points in the cluster S_i and a scalar division, and thus is highly accurate, as the following lemma shows. The μ_i computed in floating-point arithmetic is given by μ_i = μ_i + Δμ_i, where Δμ_i≤γ_m_iμ_i, Δμ_i≤γ_m_iμ_i, where m_i is the cardinality of the associated cluster S_i. The proof resembles that of Theorem <ref> and so is omitted. We have, from Lemma <ref> and Lemma <ref>, Δϕ_i:= ϕ (μ_i,S_i) - ϕ (μ_i,S_i)= m_i(μ_i,μ_i)^2 = m_iΔμ_i^2 ≲ m_i^3u^2μ_i^2, where u is the unit roundoff of the precision at which the center update (<ref>) is carried out. This bound tells us the energy deviation associated with cluster S_i to its minimal energy due to the presence of rounding errors. Since the dataset { p_i}_i=1^n in a clustering problem is usually z-score standardized to have zero mean and unit standard deviation, μ_i, the 2-norm of the mean center, is expected to be close to the origin, and so the energy deviation Δϕ_i should satisfy Δϕ_i≪ 1 if m_iu≤ nu ≪ 1 holds, which is almost always true in practical applications for u taken from the IEEE standard double precision or single precision. As mentioned in the previous section, the convergence of the k-means algorithm relies on its property as a local improvement heuristic, that is, reassigning the data points to their closest center and then updating the cluster centers by the mean can only decrease the of (<ref>). For the former it is important to compute the distances accurately, which is discussed in the previous subsection, and for the latter we have the following result regarding the convergence that is most relevant towards the end of the iterations when the correction made in the center update is small. Denote the computed mean centers in two successive iterations of Algorithm <ref> as (the previous) and (the current), respectively. Then the convergence of the algorithm will not be terminated due to the presence of rounding errors if the unit roundoff of the precision at which the center update in Step <ref> of Algorithm <ref> is done satisfies u≲| -|^T| -|/2| - |^T||. Consider the set of points assigned to a cluster S with cardinality m and compare ϕ (μ,S), the energy function evaluated at the mean center of cluster S computed in precision u, with ϕ (,S). For the convergence of k-means we require ϕ (μ,S) < ϕ (,S), which says the center update decreases the energy function. Using Lemma <ref> and equation (<ref>), we have ϕ (,S) - ϕ (μ,S) = ∑_p∈ S((p,)^2 - (p,μ+Δμ)^2 ) = ∑_p∈ S((p,)^2 - (p,μ)^2 + 2(p-μ)^TΔμ - Δμ^TΔμ) = |S|(,μ)^2 - 2∑_p∈ S(-p)^TΔμ - mΔμ^TΔμ. Using (<ref>) and after some manipulation, we arrive at ϕ (,S) - ϕ (μ,S) = m(,)^2 + 2m( - )^T Δμ - 2mΔμ^TΔμ ≥ m(,)^2 - 2m| - |^T |Δμ| - 2mΔμ^TΔμ, which, from Lemma <ref> and Δμ^TΔμ=Δμ^2≤γ_m^2μ^2=O(u^2), implies ϕ (,S) - ϕ (μ,S) ≥ m(,)^2 - 2mu| - |^T || + O(u^2). 
Therefore, absorbing higher order terms in u, we now obtain a sufficient condition for (<ref>) as dist(μ_prev, μ_curr)^2 ≳ 2u|μ_prev - μ_curr|^T|μ_curr|, where the left-hand side is the squared distance between the previous (μ_prev) and current (μ_curr) computed mean centers and the right-hand side accounts for the incurred rounding errors due to the floating-point computations. Rearranging the inequality completes the proof. The vector |μ_prev - μ_curr| measures the center movement, whose norm can be large at the start of the k-means algorithm but tends to decrease as the iterations proceed. The precision bound (<ref>) is easily computable, and it indicates that higher precision is required as the distance between the previous and next centers gets smaller, which in general happens as the iteration proceeds. § MIXED-PRECISION K-MEANS The goal of k-means algorithms is to assign all data points to their closest cluster centers, which at the beginning are chosen coarsely via Algorithm <ref>. The subsequent iterations successively refine the positions of the centers in order to diminish the sum of squared errors (SSE). From this aspect we can perform the initialization of the centers wholly in a lower-than-working precision in the mixed-precision k-means algorithm. Then the distance computation for finding the cluster of points associated with a center is proposed to be performed in low precision. Finally, the centers are updated via (<ref>) in the working precision, which is required to satisfy the bound (<ref>). The mixed-precision k-means framework we propose is presented in Algorithm <ref>, where mixed precisions are used in different steps of the algorithm. Based on this framework, later we will introduce a second level of the use of mixed precisions: computing the pairwise distance in line <ref> of Algorithm <ref> in a mixed-precision fashion, and this gives the overall mixed-precision algorithm (with two different levels), presented in Algorithm <ref>. §.§ Utilizing mixed precision in distance computation As discussed in Section <ref>, the formula (<ref>) is expected to deliver high relative accuracy when there is a great difference in the magnitude of the two vectors, despite the potential instability in evaluating the x^Ty term. This motivates our idea of exploiting mixed precision in the distance computation, which comes from the fact that in the computation of v = z + x^Ty, where |x^Ty| ≪ |z|, the inner product x^Ty can be computed in lower precision than the subsequent addition without significantly deteriorating the overall accuracy; this idea has proven to be effective in <cit.>, <cit.>. We have the following theorem which bounds the error in the mixed-precision computing scheme.
The squared distance d_r = x^T x - 2 x^T y + y^T y computed in floating-point arithmetic, where x^Ty is computed in precision u_ℓ≥ u and the other parts are computed in precision u, satisfies d_r = d_r + Δ d_r where |Δ d_r| ≲ (r+2)u (x^T x + y^T y) + 2 (r+2)u_ℓ|x|^T |y|. The proof is very similar to that of Theorem <ref>, and we have used the approximation γ_r≈ r u in stating the result. Theorem <ref> says that, if x≪y or x≫y, we can safely compute x^Ty in precision u_ℓ chosen by u_ℓ≈δ u, δ := max{x^Tx, y^Ty}/|x|^T|y|≥max{x^2, y^2}/xy = max{x/y, y/x}, such that the relative error in the computed squared distance is approximately bounded above by 3(r + 2)u, which is only three times the uniform-precision bound in Theorem <ref>. Putting the discussion in the context of the k-means algorithm, the computation of p_i^Tc_j produces the dominant term in the flop count and requires the most data communication in the distance computation, so the use of lower precisions therein can improve the performance of the algorithm by reducing both flops and memory requirements (particularly the latter, as the vector inner product is typically a memory-bound process). The choice of the lower precision u_ℓ should in principle follow the relation (<ref>), but in practice we do not have the ability to choose an arbitrary precision, and, instead, we are more interested in the case when u_ℓ is a precision implemented in hardware. Therefore, in the k-means algorithm, before calculating the product p_i^Tc_j in (<ref>) we check the condition (at the negligible extra cost of one scalar division) max{p_i^Tp_i/c_j^Tc_j, c_j^Tc_j/p_i^Tp_i}≥δ^2, δ≥ 1, and if this condition does not hold, then the working precision will be used for the dot product p_i^Tc_j; otherwise, we form the dot product p_i^Tc_j in the lower precision u_ℓ. The parameter δ should be determined by δ≈ u_ℓ/u according to (<ref>), but we found that in practice this choice is very pessimistic for combinations of precision pairs (u,u_ℓ) that are of practical interest. This is probably a compound effect of the fact that the bound given in Theorem <ref> and the Cauchy–Schwarz inequality can be arbitrarily pessimistic. As a consequence, our choice for the value of δ has to become more empirical and less rigorous, making this approach difficult to extend to employing multiple lower precisions. Before moving to the discussion of the practical choice of δ in (<ref>), we present the mixed-precision distance computing scheme in Algorithm <ref>, where we have also incorporated a trivial scaling scheme aiming to prevent overflow from the use of the low precision u_ℓ, though it makes the low-precision computation more prone to underflow. §.§ Simulations with various δ To gauge how many computations have been performed in a lower precision in computing the distance between a center and data points via Algorithm <ref>, we define and report when necessary in experiments the ratio η = The number of triggered low precision computations/Total number of distance computations. Obviously a larger chosen value for δ makes it more stringent for low precision to be used and therefore results in a reduction in the triggered rate η of low precision computations. For δ=1 the algorithm computes the distance fully in the low precision.  <ref> shows that the triggering rate for low precision η decreases as δ increases on both z-score normalized and nonnormalized data sets, and the triggering rate with δ=80 is already close to 0.
Since the formula (<ref>) is independent of data scaling, the η-δ curves on normalized and nonnormalized data sets are almost identical. Also, in order to study the performance of the mixed-precision distance computation scheme, we embed this scheme into Algorithm <ref>, the mixed-precision k-means framework, to obtain a variant of the mixed-precision k-means algorithm that is presented as Algorithm <ref>. We test Algorithm <ref> with varying δ, where the working precision u is double precision and the low precision is chosen as quarter precision (q52) and half precision (fp16), respectively. The experimental results are presented in  <ref>. We observe that for u_ℓ being half precision (fp16), the performance of the algorithm in terms of Adjusted Rand Index (ARI) <cit.> and Adjusted Mutual Information (AMI) <cit.> (see Section <ref> for a detailed description of these measures) is quite stable as δ varies from 1 to around 80, showing that computing the distance in fp16 does not sacrifice the clustering quality, and in fact, the resulting quality is not much different from that of the algorithm with the distance computation almost fully in double precision (with δ=80); we note that the AMI and ARI for double precision are almost identical to the AMI and ARI in the mixed-precision setting with δ=80, as indicated by the convergence tendency of the curves shown in  <ref>. On the other hand, when the low precision is further reduced to be quarter precision (q52), we see that the use of the mixed-precision distance computation does bring benefits compared to the case of distance computation fully in quarter precision on both normalized and nonnormalized data. We have further observed that the performance of the mixed-precision distance computation scheme is in general not monotonic, and in many cases a larger δ with more computation done in double precision somehow deteriorates the resulting quality. We do not have an explanation for this phenomenon, but it indicates that there is no optimal value for δ in general and its most suitable choice is clearly problem-dependent and probably also depends on the choice of precisions. From δ≈ 2 the mixed-precision distance computation scheme starts to achieve better or similar performance as the scheme with larger δ and more working-precision computations, and it only carries out approximately half of the distance computations in the working precision (cf.  <ref>). Therefore, we will set δ= 2 in all our later experiments.
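To make the preceding description concrete, the following is a minimal Python sketch of the δ-triggered mixed-precision squared-distance computation just described. It is an illustration only, not the implementation released with the paper: the function name, the scaling choice, and the use of NumPy's float16 as a stand-in for the low-precision format are our own assumptions.

import numpy as np

def mixed_precision_sq_dist(p, c, delta=2.0, low=np.float16):
    # ||p - c||^2 = p^T p - 2 p^T c + c^T c, with p^T c formed in low precision
    # whenever the magnitudes of p and c differ sufficiently (the delta trigger).
    pp = float(np.dot(p, p))          # p^T p in the working precision
    cc = float(np.dot(c, c))          # c^T c in the working precision
    tiny = np.finfo(float).tiny
    ratio = max(pp, cc) / max(min(pp, cc), tiny)
    if ratio >= delta ** 2:
        # scale both vectors to mitigate overflow in the narrow format,
        # then form the inner product in low precision and undo the scaling
        s = max(float(np.max(np.abs(p))), float(np.max(np.abs(c))), 1.0)
        pc = float(np.dot((p / s).astype(low), (c / s).astype(low))) * s * s
        triggered = True
    else:
        pc = float(np.dot(p, c))      # inner product in the working precision
        triggered = False
    return pp - 2.0 * pc + cc, triggered

Counting how often the low-precision branch is taken and dividing by the total number of distance evaluations gives an estimate of the triggering rate η in (<ref>).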
§ NUMERICAL EXPERIMENTS In this section, we present numerical tests with the mixed-precision k-means algorithms. All the experiments were performed with native Python on a Linux compute server equipped with 2x Intel Xeon Silver 4114 2.2G (of a total of 20 cores, 40 threads) and 1.5 TB Random Access Memory (RAM). All algorithms are run in a single thread. Since all the k-means algorithms discussed in the paper invoke Algorithm <ref> to initialize the cluster center pseudo-stochastically, we simulated five random states for the algorithms on all experiments except the image segmentation test to alleviate the effect of randomness; the segmentation test is based on cluster membership (the cluster label for each data point) and the color map is assigned based on the computed centers. The code and data to reproduce all experimental results are publicly available on GitHub.[<https://github.com/open-sciml/mpkmeans>] In the experiments, we will use by default double precision (fp64) as the working precision u and half precision (fp16) or quarter precision (q52) as the low precision u_ℓ. The lower precisions are simulated by the function <cit.>. The following algorithms are compared in the simulations. * : the algorithm with Algorithm <ref> that uses (<ref>) in the working precision for the distance computation. * : the mixed-precision algorithm described in Algorithm <ref> in which all the distances are computed entirely in low precision. * : the mixed-precision algorithm described in Algorithm <ref> in which all the distances are computed via Algorithm <ref> with δ = 2. Note that both and employ the mixed-precision k-means framework; the only difference is that always uses the low precision in distance computation, while the low-precision distance computation in is triggered by Algorithm <ref>. All tests below are performed provided ground-truth clustering numbers and labels. The performance is measured by , ARI, AMI, Homogeneity, Completeness, and the V-measure <cit.>, which we briefly review in the following: * ARI: The Rand Index is a metric that defines the similarity between two clusterings in terms of the counting pairs assigned in the same or distinct clusters in the clustering and ground-truth clustering. Rand Index, however, tends to be larger when the two clusterings have a larger number of clusters. To fix this, the Adjusted Rand Index (ARI) is a form of Rand Index defined considering the adjustment for chance <cit.>. The ARI is a symmetric measure with an upper bound of 1; it has a value equal to 1 when the clusterings are identical, and close to 0 (can be negative) for random labeling independently of the number of clusters and samples. * AMI: Mutual Information computes a measure of the mutual dependence between two clusterings. Adjusted Mutual Information (AMI) is an adjustment of the Mutual Information to account for the adjustment for chance <cit.>. The Adjusted Mutual Information equals 1 if two clusterings are identical and it approaches 0 (can be negative) if two random partitions are evaluated. * Homogeneity: Homogeneity refers to all clusters of a clustering containing only data that belong to a single class such that the class distribution within each cluster has zero entropy <cit.>, which describes the closeness of the clustering to the ground-truth clustering and ranges over [0,1]. When there is only a single class, homogeneity is defined to be 1. 
* Completeness: Completeness requires that all the data points belonging to a given class are assigned to the same cluster; it ranges from 0 to 1 <cit.>. A Completeness of 1 indicates a perfectly complete (successful) cluster labeling. Note that the completeness score calculated by using Scikit-learn[<https://scikit-learn.org/stable/>] will also be equal to 1 when the clustering completely fails and all data points are assigned to the same cluster, and, in this case, we will denote the score as NA (not applicable) to distinguish it from a perfectly complete cluster labeling. * V-measure: It is known that Homogeneity and Completeness have an inverse relationship, i.e., Completeness is symmetrical to Homogeneity; an increasing Homogeneity often indicates a decreasing Completeness. The V-measure is computed as the harmonic mean of distinct Homogeneity and Completeness values, which can be weighted to emphasize the contributions of Homogeneity or Completeness <cit.>. §.§ Results for S-sets To evaluate our mixed-precision algorithms over data with clusters of various degrees of overlap, we use in this simulation the four two-dimensional datasets from <cit.>, which are labeled and provided with ground-truth centroids; each dataset contains 5000 vectors and 15 clusters with a varying degree of overlap and varying complexity in terms of spatial data distributions. The visualization is plotted in  <ref>. The results with the low precision being fp16 or q52 are presented in  <ref> and  <ref>, respectively. For nonnormalized data the SSE is naturally high in all cases, and so we should look at the other reported measures for the performance of the algorithms. We can see from the two tables that, for the normalized data, even using fp16 achieves about the same performance as the double precision algorithm, but completely fails on all nonnormalized data of S-sets with the low precision being fp16 or q52. In contrast, achieves competitive performance against the standard working-precision k-means algorithm with a low precision triggering rate η of (<ref>) ranging approximately from 33% to 57%. Comparing  <ref> and  <ref>, we find that retains similar performance with a similar low precision arithmetic triggering rate, while its benefit over is only clearly visible when the low precision is set to be q52, in which case the performance of can degrade significantly compared with the low precision being fp16; we found that this is mainly because suffers severely from overflow problems, which is largely avoided by through the use of the scaling technique. §.§ Real-world datasets To further verify the utility and performance of the mixed-precision k-means algorithms on real-world data, we evaluate the algorithms on the selected real-world datasets listed in  <ref>. The test results with the low precision arithmetic chosen as fp16 and q52 are presented in  <ref> and  <ref>, respectively. Similarly to the results for the S-sets, we found that, with the low precision being fp16 or q52, also achieves competitive performance versus the standard working-precision k-means algorithm. On the other hand, the simulation also shows that straightforwardly executing the distance computation of the mixed-precision k-means framework in a low precision can fail on nonnormalized datasets or result in performance degradation on normalized datasets.
Again, we see that low precision is in most cases successfully exploited by , and in several cases more than 70% of distance computation is safely done in a much-lower-than-working precision. §.§ Image segmentation application We now present clustering results for an application in image segmentation for two images from ImageNet <cit.>. The images used for tests are as shown in  <ref>, where all images have been resized to be (300,280). The purpose of performing clustering in this task is to identify connected pixel regions of similar color scale in an image such that the nearby pixels (with each pixel represented as a three-dimensional vector comprised of RGB colors) tend to connect together in 2D regions in an image. Our image segmentation task follows similarly from <cit.>. The segmentation is completed by clustering pixels of images and the associated image is reconstructed via the cluster centers. Therefore, the reconstruction quality can be evaluated by both the perception and . All data in this task is normalized by dividing each channel of the image data by 255. We evaluate , , and in clusters of 5, 10, 20, and 50 and report the scores. By comparing  <ref> with  <ref> and  <ref>, we see, when various numbers of clusters are targeted and generated, the mixed-precision k-means algorithms, i.e., and , attain an score close to that obtained by the standard working precision k-means algorithm . When the low precision is taken to be q52, as  <ref> and  <ref> show, the performance of both mixed-precision k-means algorithms dropped significantly, particularly when the number of targeted clusters increases, though the reduction in the low precision has lesser impact on the performance of . Clearly, neither of the mixed-precision algorithms succeeded when the targeted number of clusters is set to be 20 or 50, and we found this failure is largely due to underflow in the distance computation, which resulted in a classification of many data points that are supposed to be separate in different clusters to end up in the same group. § CONCLUDING REMARKS In this paper we studied the numerical stability of the k-means method including the distance computation and center update therein, and we showed that the widely used distance computation formula is stable in floating point arithmetic. We proposed a mixed-precision framework for computing k-means, and, combining this framework with a new mixed-precision Euclidean distance computing scheme, we develop a mixed-precision algorithm for computing k-means, where the most computationally expensive part – the distance computation – is safely done in mixed precision. We observed that the normalized data is fairly robust to the use of low precision in the distance computation, while for nonnormalized data, a reckless use of low precision often results in poor clustering performance or complete failure, which is largely due to overflow when using the low precision format. The success of Algorithm <ref> on nonnormalized data seems to imply that underflow is often less problematic than overflow in the distance computation in k-means. On the other hand, we see the mixed-precision k-means algorithm can deliver comparable numerical results to the standard working-precision k-means algorithm on both normalized and nonnormalized data, typically via executing around half of the distance computations in a much lower precision (half or quarter precision), which reveals great potential for the use of mixed precision to accelerate k-means computations. 
By performing numerical simulations of low precision arithmetic on various datasets from data science tasks including data clustering and image segmentation, we showcase that appropriate reduced-precision computation for k-means only results in a minor increase in the SSE and does not necessarily lead to worse clustering performance. Our paper is the first attempt towards exploiting mixed precision in computing k-means and, in particular, the Euclidean distance. The study may provide some insight into other Euclidean distance-based machine learning methods, e.g., data clustering <cit.>, manifold learning <cit.>, and k-nearest neighbors search <cit.>. As it is often observed in the machine learning literature that normalizing the data can somehow improve numerical precision and make it more tolerant to the reduction of precision, our study adds another such example, though a better understanding of this phenomenon in the context of Euclidean distance computation is desired. Another future research direction could be providing stronger theoretical support for the use of mixed precision in distance computation and developing a more systematic low-precision switching mechanism. § ACKNOWLEDGEMENTS The first and second authors acknowledge funding from the European Union (ERC, inEXASCALE, 101075632). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. The first author additionally acknowledges funding from the Charles University Research Centre program No. UNCE/24/SCI/005.
http://arxiv.org/abs/2407.13147v1
20240718041914
DFMSD: Dual Feature Masking Stage-wise Knowledge Distillation for Object Detection
[ "Zhourui Zhang", "Jun Li", "Zhijian Wu", "Jifeng Shen", "Jianhua Xu" ]
cs.CV
[ "cs.CV" ]
1]Zhourui Zhang zzrnnupg@nnu.edu.cn 1]Jun Li cor1 lijuncst@njnu.edu.cn 2]Zhijian Wu zjwu_97@stu.ecnu.edu.cn 3]Jifeng Shen shenjifeng@ujs.edu.cn 1]Jianhua Xu xujianhua@njnu.edu.cn [1]School of Computer and Electronic Information, Nanjing Normal University, Nanjing 210023, China [2]School of Data Science and Engineering, East China Normal University, Shanghai 200062, China [3]School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China [cor1]Corresponding author § ABSTRACT In recent years, current mainstream feature masking distillation methods mainly function by reconstructing selectively masked regions of a student network from the feature maps of a teacher network. In these methods, attention mechanisms can help to identify spatially important regions and crucial object-aware channel clues, such that the reconstructed features are encoded with sufficient discriminative and representational power similar to teacher features. However, previous feature-masking distillation methods mainly address homogeneous knowledge distillation without fully taking into account the heterogeneous knowledge distillation scenario. In particular, the huge discrepancy between the teacher and the student frameworks within the heterogeneous distillation paradigm is detrimental to feature masking, leading to deteriorating reconstructed student features. In this study, a novel dual feature-masking heterogeneous distillation framework termed DFMSD is proposed for object detection. More specifically, a stage-wise adaptation learning module is incorporated into the dual feature-masking framework, and thus the student model can be progressively adapted to the teacher models for bridging the gap between heterogeneous networks. Furthermore, a masking enhancement strategy is combined with stage-wise learning such that object-aware masking regions are adaptively strengthened to improve feature-masking reconstruction. In addition, semantic alignment is performed at each Feature Pyramid Network (FPN) layer between the teacher and the student networks for generating consistent feature distributions. Our experiments for the object detection task demonstrate the promise of our approach, suggesting that DFMSD outperforms both the state-of-the-art heterogeneous and homogeneous distillation methods. Feature Masking Heterogeneous Knowledge Distillation Stage-wise Adaptation Learning Masking Enhancement Semantic Feature Alignment Object Detection § INTRODUCTION It is well-known that knowledge distillation (KD) can help transfer knowledge from a complex model (teacher) to a compact network (student), so that the latter can achieve improved performance at a much lower cost. It is considered to be an effective means of model compression for a variety of downstream tasks including object detection and semantic segmentation <cit.>. Primarily focusing on the output head of the network, early distillation algorithms aim at transferring implicit knowledge learned in the complex teacher network to the lightweight student model. This distillation scheme is also known as logit-based classification distillation <cit.>. In addition, the feature-based distillation approach has received increasing attention. It helps the student network to mimic feature maps from the teacher model in the distillation process, allowing the generated student features to enjoy improved representational capability <cit.>. More recently, a popular distillation paradigm has emerged as feature-masking distillation. 
In contrast to feature distillation in which the student's feature directly mimics the counterpart of the teacher <cit.>, feature-masking distillation operates by masking selective regions of the student feature map and reconstructing the masked regions for distillation <cit.>. In this sense, feature-masking distillation essentially reconstructs the transferred knowledge from the teacher instead of transferring knowledge directly. Consequently, it can help the student learn better from the teacher. In particular, recent efforts are devoted to taking advantage of feature attention for uncovering object-aware spatially important regions and channel-wise clues such that the student features are reconstructed with sufficient descriptive power comparable to teacher features <cit.>. As a result, this attention-directed feature masking strategy enormously contributes to improving the performance of the student model <cit.>. Although dramatic progress has been made in recent years, most feature-masking distillation methods are developed mainly to address homogeneous distillation, which assumes that teacher and student models share roughly similar structures except that the former usually adopts a stronger backbone. For example, RetinaNet-ResNet101 <cit.> and RetinaNet-ResNet50 <cit.> are used as the teacher and the student model, respectively, within the homogeneous distillation framework <cit.>. They fail to fully take into account heterogeneous distillation scenario which is more challenging due to significant diversity of the teacher and the student frameworks <cit.>. In terms of detection task, different heterogeneous detectors exhibit significant variances in object perception capability. As can be observed in Fig. <ref>, different detectors, including Faster R-CNN <cit.>, RetinaNet <cit.>, and FCOS <cit.> with the same ResNet50 backbone, exhibit substantial differences in activation maps and variations when transformed into feature masks <cit.>. Despite sharing the same backbone architecture, the teacher and student detectors have diverse representation capabilities due to heterogeneous network structures <cit.>. Consequently, heterogeneous detector heads encode different object-aware semantic clues. Directly transferring the knowledge learned from a teacher model to another heterogeneous student model leads to limited performance improvement, suggesting that a huge gap in semantic-aware capability makes it difficult for the student to learn the useful knowledge from the teacher <cit.>. Thus, the reconstructed student features do not improve the model performance. To address the above-mentioned drawbacks <cit.>, we have proposed a dual feature masking stage-wise distillation framework termed DFMSD for object detection in this study. Following an attention-guided dual feature masking framework, we integrate a stage-wise adaptation learning module into the dual masking framework for addressing heterogeneous distillation. Since it is not beneficial to directly transfer knowledge from the teacher to the student, we perform stage-wise distillation by firstly allowing the student to learn from a “weaker” teacher and subsequently adapting the improved student to a “stronger” teacher for distillation refinement <cit.>. In this way, the student model can be better adapted to teachers through progressive distillation, which is conducive to bridging the gap between them <cit.>. 
Furthermore, we embed the masking enhancement strategy into the stage-wise distillation, such that the “stronger” teacher in latter distillation stage can benefit from strengthened object-aware masking regions for improved feature-masking reconstruction <cit.>. In addition, we further perform semantic alignment using Pearson correlation coefficients <cit.> to generate consistent teacher-student feature distributions <cit.>. Through the above improvements, we can handle heterogeneous networks within a dual feature-masking distillation framework. Extensive experiments for detection tasks have demonstrated the superiority of our proposed method in both heterogeneous and homogeneous distillation scenarios <cit.>. The contributions of this study can be summarized as follows: * We have developed a dual feature-masking stage-wise distillation framework (DFMSD) by integrating a stage-wise adaptation learning (SAL) module into the dual masking network for bridging the semantic gap between heterogeneous teacher and student models. It enables the student to firstly learn from a “weaker” teacher and refines the adapted student with a “stronger” teacher, such that the knowledge can be better transferred to the student with improved adaptability. * We further introduce a masking enhancement module into our DFMSD, which can adaptively enhance the object-aware masking regions. In terms of the frequency distribution of the semantic regions, adaptive data enhancement strategy is adopted such that the corresponding masking regions can be strengthened for improving masking feature reconstruction. * For better aligning the heterogeneous networks, we further perform semantic alignment between layer-wise features with Pearson correlation coefficients, yielding consistent teacher-student feature distributions. * Extensive experiments for detection tasks demonstrate the promise of our method in both homogeneous and heterogeneous distillation settings. The remainder of this paper is structured as follows. After reviewing related work in Section  <ref>, we will elaborate on our method in Section  <ref>. In Section  <ref>, we conduct extensive experimental evaluations before the paper is finally concluded in Section  <ref>. § RELATED WORK In this section, we comprehensively review recent advances in object detection and knowledge distillation, both of which are closely related to our approach. §.§ Object Detection It is widely acknowledged that current object detection methods based on deep models can be roughly classified into three categories: anchor-based detectors <cit.>, anchor-free detectors <cit.>, and end-to-end detectors <cit.>. Anchor-based detectors, which consist of two-stage detectors <cit.> and one-stage detectors <cit.>, usually rely on predefined anchor boxes to achieve accurate object detection and localization. In particular, one-stage detectors enjoy a preferable trade-off between efficiency and accuracy by directly classifying and regressing anchors without generating object proposals in advance. Unlike anchor-based detectors, anchor-free approaches including keypoint-based CornerNet <cit.> and center-based CenterNet <cit.>, avoid predefined anchor boxes and can directly predict the object location with desirable flexibility. With the boom of Transformer architecture, recent years have witnessed great success of advanced end-to-end Transformer-based detectors such as DETR <cit.>. 
They enjoy unparalleled long-range global modeling capability, whereas expensive computational resources and costs are inevitable. In object detection, there is a huge gap between heavyweight and lightweight detectors. In particular, the heavyweight models, which are in pursuit of high performance, typically require complex backbone structures and significant computational resources <cit.>. Consequently, designing lightweight and efficient detectors with lower complexity and real-time performance is sought-after in practical applications. Since knowledge distillation techniques enable the transfer of stronger representation power from large networks to smaller ones, it facilitates the design of lightweight backbone networks with performance close to that of larger networks <cit.>. §.§ Knowledge Distillation Serving as an effective means of model compression, knowledge distillation maintains the compact structure of lightweight models with significantly improved performance. The earliest work dates back to <cit.> where soft labels obtained by a teacher network are incorporated into the loss of a student network, allowing the student network to learn probability distribution consistent with the teacher network for classification. In recent years, dramatic progress has been made in knowledge distillation, and we will comprehensively review different approaches to knowledge distillation. §.§.§ Feature-based knowledge distillation Feature-based distillation methods help the student model mimic the teacher counterpart to generate features with improved representation power. The first feature-based distillation method is known as FitNets in <cit.> which demonstrated that semantic information from intermediate layers can also be learned by the student network as implicit knowledge. Hence, distillation techniques have been widely applied to various downstream tasks. Li et al. <cit.> utilized region proposals from the larger network to assist the smaller network in learning higher-level semantic information. Dai et al. <cit.> developed the GID framework which selects specific distillation regions based on differences between student and teacher networks. Yang et al. <cit.> proposed FGD, which separates foreground and background for allowing the student model to learn from regions of interest and global knowledge distilled from the teacher network through simultaneous local and global distillation. §.§.§ Masked feature generative distillation Different from feature distillation techniques, masked feature distillation approaches enable the student model to reconstruct features from selectively masked areas instead of directly learning from the teacher feature. The first masked distillation framework is MGD <cit.> which randomly masks the feature maps of the student model and then reconstructs them from the teacher network. However, random masking may introduce additional noise, leading to biased feature map with impaired representation capability. To identify the importance of the masked areas, attention-driven masked feature distillation methods have been proposed to improve the object-aware perception of the student model. Yang et al. <cit.> proposed an adaptive masking distillation method, termed AMD, for object detection. On the one hand, AMD encodes the importance of specific regions by performing spatially adaptive feature masking, allowing the student model to learn more significant object-aware features from the teacher network. 
On the other hand, to enhance target perception capabilities, AMD employs a simple yet efficient SE block to generate helpful channel-adaptive cues for the student model. Based on AMD, Yang et al. <cit.> further proposed a dual masking knowledge distillation method, termed DMKD <cit.>. Unlike previous masking-based algorithms, DMKD <cit.> simultaneously focuses on both spatial and channel dimensions, which respectively characterize important spatial regions and channel-wise semantic information. Therefore, it significantly benefits student feature reconstruction and helps to improve the distillation performance, demonstrating superior performance compared to the previous methods. Compared to the aforementioned methods which are essentially one-stage distillation methods, our proposed approach performs stage-wise distillation so that the student can be progressively adapted to multiple teachers in different stages for bridging the gap between heterogeneous networks. To our knowledge, this is the first dual feature-masking stage-wise learning framework for addressing heterogeneous distillation. §.§.§ Heterogeneous Knowledge Distillation In knowledge distillation, the diversity between the teacher and the student networks poses a great challenge to knowledge transfer and is detrimental to distillation performance, especially when they have heterogeneous network architectures. To address this challenge, MimicDet <cit.> introduced a refinement module that mimics the workflow of two-stage detectors and performs feature alignment between the heads of the teacher and student networks for distillation. G-DetKD <cit.> was the first work to propose a universal distillation framework applicable to object detection. It performs soft matching at all pyramid levels to provide guidance. However, combining different levels of student features by learning similarity scores before feature imitation does not fundamentally bridge the semantic gap. In HEAD <cit.>, an assistant network, which has the same detection head as the teacher detector and learns directly from the teacher, is introduced into the knowledge distillation framework for connecting the teacher-student detectors. Since the assistant and teacher share the same detection head, the semantic feature gap in heterogeneous teacher-student detectors is effectively bridged for better knowledge transfer. Cao et al. <cit.> developed a knowledge distillation method PKD based on the Pearson correlation coefficient <cit.>, which uncovers the linear correlation between teacher and student features. To eliminate the negative effects of amplitude differences between different Feature Pyramid Network (FPN) stages and channels within and between the teacher-student detectors, the feature maps are firstly normalized to have zero mean and unit standard deviation before the mean square error (MSE) loss between these normalized features is minimized. Wang et al. <cit.> proposed an innovative cross-head distillation pipeline termed CrossKD to mitigate the target conflict issue. This method transfers intermediate features of the student network to the detection head of the teacher network, thereby generating cross-head predictions. Then, knowledge distillation is performed between these newly generated cross-head predictions and the original predictions generated from the teacher model. 
It guarantees that the KD loss does not influence the weight updates in the detection head of the student network, avoiding conflicts between the original detection loss and the KD loss. In addition, since both cross-head predictions and teacher predictions are generated from sharing parts of the detection head in the teacher network, the cross-head predictions are relatively consistent with the predictions obtained by the teacher. This significantly reduces the discrepancies between the teacher and student detectors, enhancing the stability of training during prediction imitation <cit.>. While these methods can achieve successful heterogeneous distillation, they do not explore feature masking in an adaptive stage-wise distillation manner, leading to the student features with limited boost in representation power. Consequently, the student still far lags behind the teacher and the large gap still exists between the heterogeneous networks. In contrast, our method addresses heterogeneous distillation by performing dual feature-masking stage-wise learning, thereby steadily improving the student feature and effectively reducing the gap between heterogeneous networks. § PROPOSED METHOD Since our proposed method essentially falls into the category of masked feature distillation, we will firstly introduce the formulation of feature distillation. Based on the feature distillation formulation, we will present an attention-directed dual masking distillation framework followed by our method. Furthermore, we will elaborate on our proposed dual feature masking stage-wise distillation (DFMSD) framework with three crucial components. §.§ Problem Formulation Feature distillation allows feature-level knowledge transfer from the teacher model to the student model to generate sufficiently descriptive features that are competitive with the teacher counterpart. Mathematically, it can be achieved with the following distillation loss function: L_Fea = ∑_l=1^L1/N_l∑_c^C∑_h^H∑_w^WF_c,h,w^T -Φ (F_c,h,w^S ) _2^2 where L denotes the number of layers in the FPN after the backbone networks, N_l represents the feature size of l-th layer, while C, H, and W indicate the number of channels, height, and width of the feature map, respectively. F^T and F^S denote respective features generated from the teacher and the student model. Φ(·) indicates the linear projection layer, which is capable of aligning F^S with F^T in the feature resolution. Recent studies have suggested that learning and reconstructing student features from the teacher model are considered to be a preferable alternative to feature imitation in the conventional feature distillation paradigm <cit.>. More specifically, expressive features can be reconstructed from selectively masking regions on the feature maps of the student network, which is also known as masked feature distillation. In particular, attention-directed masked feature distillation has improved the prototype masked generative distillation framework in which masked regions are randomly generated <cit.>. Recently, a dual masking knowledge distillation framework, termed DMKD <cit.>, is proposed to comprehensively encode object-aware semantics into the student network. 
More specifically, the dual attention maps derived from the teacher networks to capture both spatially important and informative channel-wise clues are formulated as: A^c =Sigmoid ( 1/HWτ∑_h=1^H∑_w=1^W⟨ F_h,w,1^T ,..., F_h,w,C^T⟩ ) A^s =ϕ _align ( Sigmoid ( 1/Cτ⟨ F_1^T_2^2 ,..., F_n^T_2^2⟩ ) ) where A^c ∈ R^C×1×1 and A^s ∈ R^1× H × W represent the channel and spatial attention maps, respectively. Then, attention-guided feature masking is performed before improved masked feature reconstruction is achieved via SE and generation modules <cit.>. §.§ Our DFMSD Framework Although the above-mentioned dual-masked feature distillation scheme is capable of reconstructing student features with improved representation power, it fails to transfer knowledge well from a teacher model to a student model when they have diverse network architectures, thereby achieving deteriorating performance for the heterogeneous distillation task. To alleviate this problem, we propose a dual feature-masking stage-wise knowledge distillation method for object detection, termed DFMSD, in this study. Fig. <ref> illustrates the framework of our proposed DFMSD model. Built on DMKD, the stage-wise adaptive learning strategy is integrated into the dual-masked distillation framework to progressively adapt the student to the teacher in separate stages, which contributes to bridging the gap between heterogeneous networks. Meanwhile, a masking enhancement module is also introduced to adaptively enhance object-aware masking regions according to the frequency distribution characteristics, so that stage-wise distillation is further improved with enhanced feature masking. In addition, semantic alignment is performed between teacher-student FPNs via Pearson Correlation Coefficient <cit.> for generating consistent feature distributions. Thus, our DFMSD network is capable of narrowing the teacher-student discrepancy with improved heterogeneous distillation performance. Next, we will elaborate on the three critical components mentioned above within our DFMSD network. §.§ Stage-wise Adaptive Learning Module The conventional masked distillation paradigm adopts a one-stage knowledge transfer strategy, in which a student model directly learns from one teacher model via single one-stage learning. However, this “one-stage learning” usually makes it difficult for the student model with limited capacity to learn sufficiently from a highly complex teacher model, let alone a heterogeneous teacher model with an entirely different network structure. To narrow the gap between heterogeneous teacher and student networks, we have integrated the stage-wise adaptive learning (SAL) mechanism into the dual masked distillation framework for improving the adaptability of the student model. Different from the previous methods in which only one teacher model is used in the distillation process, our strategy takes advantage of several advanced detectors and allows the student network to adaptively learn from the teachers in separate stages. More specifically, the student model can initially learn from relatively weaker teacher networks in the preceding stages, yielding suboptimal results. Subsequently, the adapted student is utilized as a new student to learn from a stronger teacher network in the latter stages, facilitating a more complete knowledge transfer. 
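To make the stage-wise procedure explicit, the following PyTorch-style sketch mirrors the two-stage adaptation just described: the student is first distilled from the weaker teacher, and the adapted student then continues with the stronger one. It is only an illustration of the training loop; the helper methods (extract_fpn_features, forward_train) and the plain mean-squared feature loss are assumptions on our part, and the full method additionally applies the attention-guided dual masking and feature reconstruction described above.

import torch

def stage_wise_adaptive_learning(student, weak_teacher, strong_teacher,
                                 train_loader, epochs_per_stage=12, alpha=1.0):
    # Stage 1: distill from the weaker teacher; Stage 2: refine with the stronger one.
    for teacher in (weak_teacher, strong_teacher):
        teacher.eval()
        optimizer = torch.optim.SGD(student.parameters(), lr=0.01,
                                    momentum=0.9, weight_decay=1e-4)
        for _ in range(epochs_per_stage):
            for images, targets in train_loader:
                with torch.no_grad():
                    t_feats = teacher.extract_fpn_features(images)          # assumed helper
                s_feats, det_loss = student.forward_train(images, targets)  # assumed helper
                # simplified feature-level distillation term (cf. Eq. (1)); the actual
                # method reconstructs dual-masked student features instead
                distill_loss = sum(torch.mean((tf - sf) ** 2)
                                   for tf, sf in zip(t_feats, s_feats))
                loss = det_loss + alpha * distill_loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return student

In this view, the only change between the two stages is the teacher providing the target features, so the student adapted in the first stage simply serves as the initialization for the second.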
With the help of this SAL mechanism, the student network can be better adapted to the teacher model with the progressive distillation stages, and thus the gap between heterogeneous networks can be dramatically bridged. The beneficial effects of our SAL module can be illustrated in Fig. <ref>. It can be observed that the SAL module significantly benefits improving the distillation performance when heterogeneous Swin Transformer <cit.> and Faster R-CNN <cit.> are respectively used as the teacher and the student detectors. To be specific, the two-stage adaptive learning allows the Swin-Transformer-T <cit.> to boost the performance of the Faster R-CNN model from 38.4% to 42.2%, and further improves 0.9% with a stronger Swin-Transformer-S teacher detector <cit.>, achieving 43.1% mAP accuracy. This surpasses the traditional one-stage distillation method in which Faster R-CNN directly learns from the Swin-Transformer-S model and reports suboptimal 42.3% accuracy, which is only on par with the first-stage distillation performance within our SAL. Fig. <ref> intuitively compares the feature maps generated from the student network in different stages using SAL strategy. It can be clearly observed that the student network can capture more object-aware semantic regions after consecutive distillation stages. For example, compared with the original feature map of the student network, more semantically important regions corresponding to the zebras' heads and necks can be uncovered after the first-stage distillation. When the second-stage distillation is completed, the zebra-specific regions can be comprehensively characterized by discriminative feature maps close to the teacher counterparts and readily distinguished from the background regions. This fully suggests that our SAL module not only progressively improves the representation power of the student model but also significantly bridges the gap between heterogeneous teacher and student networks. §.§ Masking Enhancement module Prior research explores scale-aware object perception capability of CNN-based models from a frequency perspective <cit.>. It demonstrates that the same detector exhibits diverse detection performance in different frequency domains. More specifically, a CNN-based detector is likely to successfully identify larger objects while missing smaller ones in the low-frequency domain of an image, and vice versa in the high-frequency domain. Thus, when performing attention-directed feature masking on both frequency domains, different attention maps are obtained depending on the variance in object-aware frequency distribution. To be specific, the masked regions of smaller objects corresponding to the high-frequency components are endowed with higher attention scores, while the low-frequency masked regions are usually downplayed in the high-frequency domain. Conversely, low-frequency masked regions corresponding to larger objects tend to receive more attention and outweigh high-frequency regions in the low-frequency domain. However, within our SAL module, a “weaker” teacher with limited object-aware capability fails to generate accurate attention maps encoding spatial importance, especially when object-specific frequency distribution in an image is diverse. For example, as shown in Fig. <ref>, the RetinaNet detector generates a low attention score in some high-frequency regions corresponding to smaller objects in the high-frequency domain of the image. 
The low-scored regions are not identifiable for feature masking, which is detrimental to accurate detection of smaller objects, including the football and the far-end partially occluded referee in black. To further benefit subsequent distillation, we have introduced a masking enhancement module into our SAL module to improve the object-aware perception capability. In terms of our masking enhancement strategy, data augmentation methods are adaptively applied to an image according to its object-specific frequency distribution, generating enhanced masking regions for feature reconstruction. For example, a proper augmentation method should strengthen the high-frequency information in an image dominated by small objects, such that more regions corresponding to the high-frequency small objects are identified as semantically important for feature masking. In contrast, when most objects in an image are medium-size or large-size, more low-frequency regions should be enhanced by the adaptive data augmentation scheme to identify larger objects in the image. To investigate the frequency attributes of different data augmentation <cit.> methods, including random flipping <cit.>, random cropping <cit.>, and Gaussian noise perturbation <cit.>, we have performed detailed analyses to explore the effects of various augmentation approaches on the original images in the frequency domain. More specifically, we performed the two-dimensional Discrete Fourier Transform (DFT) <cit.> on images including an original unaltered image and its variants processed with different data augmentation methods, yielding a variety of Fourier spectra used to intuitively demonstrate the frequency characteristics of different augmentation methods. As shown in Fig. <ref>, flipping the image produces a Fourier spectrum that resembles the original one without essentially changing its attribute characteristics. When adding Gaussian noise to the image, however, it can be observed that the close-to-center frequency amplitude is suppressed in the frequency spectrum, which implies that Gaussian noise perturbation could benefit uncovering high-frequency small objects in the image. In contrast, images subjected to random cropping exhibit higher amplitude in the close-to-center region of the Fourier spectrum, suggesting that the low-frequency information of the image is strengthened. Since different augmentation strategies can boost specific frequency information, we attempt to perform an adaptive data augmentation technique on an image according to its object-aware frequency characteristics, such that the corresponding masking regions can be enhanced for feature reconstruction with improved representation power. On the one hand, we adopt a cropping augmentation approach to enhance the low-frequency components in an image hardly containing any small objects. To be specific, a randomly proportional cropping strategy is employed to adjust the edges of the image, which enhances the low-frequency clues of the image, thereby allowing the model to accurately identify and localize large-object regions. On the other hand, we add high-frequency Gaussian noise to an image predominantly featuring small objects for enhancing the high-frequency information. Specifically, high-frequency noise is sampled from a normal distribution with a mean of 0 and a variance of σ², denoted as 𝒩(0, σ²), and added to the original clean image with a certain probability.
In this way, we can enhance the high-frequency object-aware regions of the images while maintaining primary feature information, thereby helping the detector to capture small objects more accurately. The resulting adaptively augmented data are delivered to the “stronger” teacher detector in the last stage of our SAL module for generating enhanced attention masks. Fig. <ref> demonstrates our introduced data augmentation strategy. For images with different object-aware distributions used as input, candidate object regions can be derived from the “weaker” teacher model in the previous stage. Then, adaptive augmentation approaches are employed depending on the object-specific frequency characteristics. Mathematically, our proposed feature masking adaptive data augmentation method can be formulated as: K_mask^size = k^big(x) if Area(x) ≥ λ, and K_mask^size = k^small(x) if Area(x) < λ, where Area(x) represents the summed area of all the candidate bounding boxes in image x derived from the teacher detector in the first distillation stage. λ denotes the predefined threshold that can help to distinguish whether an image predominantly contains relatively smaller or larger object-aware regions. When an image x predominantly contains relatively smaller objects, indicated by Area(x)<λ, Gaussian noise is added to the image for enhancing the high-frequency masking regions corresponding to smaller objects. In contrast, low-frequency object-aware masking regions can be enhanced via the cropping mechanism such that the larger-object masking regions receive more attention. Thus, adaptively enhanced masking regions can be obtained for improved feature reconstruction. In addition, following <cit.>, adversarial examples are introduced for further mining inconsistent knowledge within the teacher model, which is conducive to improving the semantic perception capability of the student network <cit.>. Most feature-based knowledge distillation methods primarily focus on extracting consistent knowledge from the teacher model to ensure that the teacher model's outputs align with the true labels, that is: L_t(x) = y. In this formula, L_t(x) represents the predicted label from the teacher model, and y denotes the true label. However, the feature distillation loss (as well as the feature mask distillation loss) solely compels the student model to mimic the deep representations of the teacher model: F_mask^t(x_i) = F_mask^s(x_i) (i = 1, 2, …, k). In this equation, F_mask^t and F_mask^s represent the feature masks of the teacher and student, respectively, and i indexes the FPN layers. Due to the constraints imposed by Equation (6), we often overlook inconsistent knowledge in the teacher model. Previous research has indicated that CNN-based models leverage adversarial features for predictions, which are highly predictive but imperceptible to humans. Therefore, to extract more inconsistent knowledge from the teacher model, we employ a transfer-based adversarial attack method to generate adversarial samples. Using both adversarial samples and clean images as inputs, we compel the student model to emulate the teacher model's deep adversarial representations. The formula is as follows: A_adv(x) = x + η_t, such that M^T(A_adv(x)) ≠ M^T(x) and ‖η_t‖ ≤ ϵ. In this formula, A_adv represents the generated inconsistent knowledge samples, η denotes the adversarial perturbation, t is the number of iterations, set to 5 in this experiment, M^T denotes the teacher model, and ϵ represents the maximum amplitude of the adversarial perturbation, set to 8 in this experiment.
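As an illustration of the frequency-aware augmentation choice formulated above, the following Python sketch selects between Gaussian noise injection and random cropping based on the summed area of the candidate boxes returned by the weaker teacher. The parameter values, helper name, and the assumption that image intensities are normalized to [0, 1] are our own illustrative choices rather than the settings used in the paper.

import numpy as np

def adaptive_masking_augmentation(image, boxes, lam, sigma=0.05,
                                  noise_prob=0.5, crop_ratio=0.9, rng=None):
    # Choose the augmentation according to the object-aware frequency distribution:
    # Gaussian noise for small-object images, random cropping otherwise.
    # `boxes` are (x1, y1, x2, y2) candidates from the weaker teacher.
    rng = np.random.default_rng() if rng is None else rng
    area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    h, w = image.shape[:2]
    if area < lam:
        # predominantly small objects: add zero-mean Gaussian noise N(0, sigma^2)
        # with a certain probability to strengthen high-frequency regions
        if rng.random() < noise_prob:
            image = np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)
        return image
    # predominantly medium/large objects: randomly proportional cropping of the
    # image edges to strengthen the low-frequency clues
    ch, cw = int(h * crop_ratio), int(w * crop_ratio)
    top = int(rng.integers(0, h - ch + 1))
    left = int(rng.integers(0, w - cw + 1))
    return image[top:top + ch, left:left + cw]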
The loss function for our masking enhancement component is as follows: L_ME(x̂) = ∑_l=1^L 1/N_l ∑_c^C ∑_h^H ∑_w^W ‖ F_c,h,w^T(x̂) - Φ(F_Mask^S(x̂)) ‖_2^2. In this formula, most of the symbols have the same meanings as in Equation (1). x̂ represents the masking-enhanced (adaptively augmented) input. F^T(x̂) and F^S(x̂) represent the enhanced masking features generated from the teacher model and the corresponding features generated from the student model, respectively. In this study, we allow the "stronger" teacher and the student to focus on the semantically consistent object perception regions enhanced by the "weaker" teacher from the previous stage. Simultaneously, we mine additional inconsistent knowledge from the teacher model to enhance the feature masks learned by the student. This approach effectively narrows the semantic gap between the student and teacher networks, thereby enhancing the student's semantic perception abilities, as well as the model's generalization capability and overall performance, enabling the student to learn more effectively from the teacher. §.§ Semantic Feature Alignment Module Due to the teacher-student gap, there is also a significant variance in the feature semantic awareness at each FPN level between heterogeneous networks. As shown in Fig. <ref>, there is a significant disparity between the feature distributions of the student and teacher models, and, more specifically, the features in the second layer (P2) of the FPN in both the teacher and student networks exhibit different object perception capabilities. To further bridge this gap, we propose performing semantic alignment at each FPN level between the teacher and the student so that the heterogeneous models generate a consistent feature distribution. More specifically, the features of both networks are firstly standardized to have zero mean and unit variance. Meanwhile, the mean squared error between the standardized features is minimized to better uncover the teacher-student correlation. In addition, this standardization strategy can somewhat reduce the cross-layer difference, allowing both teacher and student networks to comprehensively characterize high-level semantics with consistent representation power. Mathematically, our semantic alignment can be achieved by calculating Pearson correlation coefficients formulated as follows: P(s,t) = ∑_i=1^n (s_i - μ_s)(t_i - μ_t) / ( √(∑_i=1^n (s_i - μ_s)^2) √(∑_i=1^n (t_i - μ_t)^2) ), where P quantifies the degree of correlation between the teacher and student models. s and t represent the student and the teacher feature at each level, respectively, while μ_s and μ_t denote their respective means. In addition, n denotes the number of FPN levels. Through the feature standardization formulated as above, the teacher and the student features are well aligned to maximize the similarity between the pre-standardized features of the student and the teacher. §.§ Loss Function The overall loss function for training our DFMSD can be formulated as: L = L_GT + α L_distill, where L_GT is the original detection loss whilst L_distill denotes the stage-wise distillation loss as follows: L_distill = ∑_i=1^S-1 ∑_c=1^C ∑_h=1^H ∑_w=1^W (F_c,h,w^T - φ^S(F_Mask^S))^2 + β L_ME, where S is the number of distillation stages while C, H, and W represent the channel number, height, and width of the feature maps. F^S_Mask denotes the masked student feature map.
In addition, L_ME stands for the distillation loss imposed on adaptively augmented data in the last distillation stage: L_ME(x̂ ) = 1/N_l∑_c^C∑_h^H∑_w^WF_c,h,w^T(x̂ ) -Φ (F_Mask^S(x̂ ) ) _2^2 where F^T_c,h.w(x̂) and F^S_Mask(x̂) represent the enhanced masking features generated from the teacher and the student model, respectively. N_l is the total number of elements in the feature map at layer l used for normalization. With the help of Eq. (<ref>), our distillation is refined for further improving the performance of the student model. In the above equations, α and β are trade-off hyperparameters balancing different terms. § EXPERIMENTS In this section, we will present comprehensive experiments to evaluate our proposed DFMSD framework after briefly introducing the dataset and experimental setup. §.§ Dataset and experimental setup Our proposed DFMSD method is evaluated in the popular COCO dataset <cit.> which comprises over 320k images of 80 different object categories with abundant annotations. It is extensively applied to various tasks, including object detection, image segmentation, and scene understanding. In practice, we use 120k training images for training and 5k validation images for testing. Within our distillation framework, a variety of detectors are involved in our experiments, including RetinaNet <cit.>, FCOS <cit.>, Cascade Mask R-CNN <cit.>, Faster R-CNN <cit.>, GFL <cit.>, RepPoints <cit.>, and Swin-Transformer <cit.>, are involved in our experiments. In particular, we have evaluated our method for heterogeneous distillation in two cases, namely distillation between ViT and CNN architectures and distillation among different CNN detectors. In terms of our SAL strategy, the number of stages are set as S=2 for efficiency, which suggests two teacher models are involved for respective distillation stages. For performance measure, we follow <cit.> to adopt Average Precision (AP) and Average Recall (AR) as metrics. All the experiments are conducted on a desktop with an Intel(R) Core(TM) i9-10900K CPU and a 3090 GPU under the PyTorch framework. During the training process, SGD optimizer is used for training all the detectors within 24 epochs. Meanwhile, momentum is set as 0.9 whilst weight decay is set to 0.0001. In addition, a single-scale training strategy is utilized in our experiments. To demonstrate the superiority of our DFMSD model, numerous state-of-the-art (SOTA) masked feature distillation methods are involved in our comparative studies, including FKD <cit.>, FGD <cit.>, MGD <cit.>, AMD <cit.>, DMKD <cit.>, PKD  <cit.> and crossKD <cit.>. §.§ Heterogeneous distillation between ViT and CNN Models In this study, we have conducted extensive experiments in which the advanced Swin-Transformer (ST) model and different categories of CNN detectors are involved. More specifically, ST is used as the teacher framework, while a CNN student model is progressively adapted to the “weaker” ST-T model and “stronger” ST-S model via our SAL module. All student CNN detectors utilize ResNet50 as the backbone network. According to the CNN detector categories, our experiments for heterogeneous distillation between ViT and CNN models can be categorized into the following three groups. §.§.§ Distillation between ST and two-stage CNN detector In this group of experiments, the Faster R-CNN detector with ResNet50 backbone serves as the student model. As demonstrated in Table <ref>, our DFMSD method significantly improves baseline by 4.7% mAP, reporting the highest precision at 43.1%. 
Moreover, it surpasses the SOTA methods MGD and DMKD by 1.2% and 0.8%, respectively. Similar performance improvements are also observed in the mAR metric. These results fully demonstrate that our method can take advantage of stage-wise distillation to achieve more performance gains for the student model compared to the single-stage distillation approaches like MGD and DMKD. §.§.§ Distillation between ST and one-stage CNN detector Different from the first-group experimental setup, the student Faster R-CNN framework is replaced by the RetinaNet framework. Similar to the results of the first group, our method provides a significant improvement over baseline in mAP performance by 3.8% and mAR performance by 4.0%. Furthermore, the proposed DFMSD consistently outperforms the other two competitors and particularly beats its predecessor DMKD by 0.9% mAP, which suggests considerable advantages of our model. §.§.§ Distillation between ST and anchor-free CNN detector To further assess the generalizability of our proposed method, the anchor-free FCOS detector is used as the student network. Although our model reports less performance improvements compared with the previous two groups, it still exhibits consistent performance advantages. §.§ Heterogeneous distillation among CNN models In addition to the distillation between the ViT and CNN architectures, we have carried out additional heterogeneous distillation experiments among different categories of CNN detectors, namely two-stage models, one-stage models, and anchor-free models. The experiments are presented as the following three groups. Consistent with the above experiments, all the student CNN detectors adopt ResNet-50 as the backbone network. §.§.§ Distillation using two-stage detectors as the teachers In this group of experiment, two-stage Cascade Mask R-CNN is used for the teacher framework while the other CNN models for the students. In particular, the “weaker” and the “stronger” teacher models are Cascade Mask R-CNN with backbone networks used as respective ResNet-50 and ResNext-101. As revealed in Table <ref>, our distillation method significantly improves the one-stage student detector RetinaNet by 2.7%, reporting highest 40.1% mAP. Meanwhile, our method outperforms MGD and DMKD by 1% and 0.4% mAP respectively, which demonstrates our distillation scheme is more helpful for improving the student model. When the anchor-free FCOS detector is used for the student model while the Cascade Mask R-CNN remains the teacher network, the proposed DFMSD improves the baseline by 1.5% mAP with fewer performance gains compared to the above experiments, whereas the best results are still achieved by our method. §.§.§ Distillation using one-stage detectors as the teachers When using a one-stage detector as the teacher model, the RetinaNet frameworks with a “weaker” backbone ResNet-101 and a “stronger” backbone ResNeXt-101 are firstly used for the successive distillation stages. As demonstrated in Table. <ref>, our proposed DFMSD achieves respective performance boosts of 2.8% and 1.5% over the baseline student models of Faster R-CNN and FCOS, and outperforms the other two single-stage distillation approaches with consistent performance advantages in mAP and mAR. 
When the teacher framework is replaced by a more powerful GFL detector <cit.> while the student network is used as the FCOS <cit.>, similar improvements can also be observed over both the baseline and the other competitors, which implies that the student models can benefit from our distillation scheme with effective knowledge transfer. §.§.§ Distillation using anchor-free detectors as the teachers When adopting the anchor-free detector as the teacher network, FCOS is used as the teacher model, while three different types of detectors are used as the student models, namely two-stage Faster R-CNN, as well as one-stage GFL and RetinaNet. With Faster R-CNN as the student model, it is shown in Table <ref> that the performance boosts over the baseline achieved by our method reach 2.4% mAP and 1.9% mAR, which consistently beats the other distillation approaches. When one-stage student detectors are involved, including GFL and RetinaNet, our distillation method still achieves the best results. In particular, the proposed DFMSD elevates the mAP accuracy of RetinaNet from 37. 4% to 40. 2% and the mAR accuracies from 53. 9% to 56. 9%, demonstrating significant performance improvements. In addition, our DFMSD is also superior to MGD and DMKD with consistent improvements exceeding 0.5%. The results unanimously showcase the framework-independent advantages of our method in various cases, suggesting that more crucial information can be learned from diverse heterogeneous teacher models with the help of our distillation paradigm for improving the student performance. §.§ Comparison with SOTA Heterogeneous Knowledge Distillation Methods To further demonstrate the superiority of our method, we compare the proposed DFMSD with the other heterogeneous distillation approaches including PKD and crossKD. In particular, crossKD adopts a similar adaptive cross-head approach which aims at facilitating the prediction imitation to bridge the gap between teachers and students. In practice, our DFMSD performs stage-wise distillation such that the RetinaNet student detector with ResNet50 backbone network can adaptively learn from the original “weaker” Swin-Transformer-T (ST-T) to the “stronger” Swin-Transformer-S (ST-S). In contrast, PKD and crossKD, which are single-distillation methods without feature masking, function by directly transferring knowledge from Cascade Mask R-CNN to ST-T. As revealed in Table <ref>, our method outperforms both PKD and crossKD by respective 1.3% and 0.6% mAP accuracies, which indicates that a simple cross-head strategy is insufficient to reduce the difference between heterogeneous teacher and student models, and thus exhibits suboptimal performance. §.§ Experiments of homogeneous distillation In addition to the aforementioned heterogeneous distillation experiments, we have also evaluated our method in the case of homogeneous distillation for detection and compared it with the other SOTA schemes in the COCO, including FKD, FGD, MGD, AMD, and DMKD. In homogeneous distillation, the teacher and student models share the same detection framework, whereas the former has a more powerful backbone network than the latter. As shown in Table <ref>, four different detectors, including RetinaNet, RepPoints, GFL and FCOS are involved in our comparative studies. In addition, the backbone networks of the teacher and student frameworks are used as ResNeXt101 and ResNet50, respectively. 
The only exception is our DFMSD framework which incorporates two teacher models with respective ResNet101 and ResNeXt101 backbones in the process of stage-wise adaptive learning. The pre-trained models for the teacher are directly borrowed from the MMDetection toolbox <cit.>. It can be observed from the results that our DFMSD achieves consistent superiority to all the competing methods. For example, when the RetinaNet is used as the detection framework, our approach outperforms its predecessor DMKD by 0.5% mAP and beats the other signle-distillation methods. When using a more advanced GFL detector, the performance advantage against DMKD reaches 1.4%, which demonstrates the substantial benefits of integrating the stage-wise distillation mechanism into the feature masking framework. §.§ Ablation Studies In this section, extensive ablation experiments are conducted to gain a deeper insight into different module and configurations on the performance of our proposed distillation framework. Similar to the settings in the above-mentioned experiments, various ViT and CNN detectors are involved in our ablation studies. §.§.§ SAL module We have carried out different groups of experiments to explore the effect of distillation stages and different teacher detection frameworks on the model performance. More specifically, the teacher detectors include Cascade Mask R-CNN, FCOS, RetinaNet, and ST-T while RetinaNet with ResNet50 is used as the student model. As illustrated in Table <ref>, the highest 40.1% mAP accuracy is reported when the student successively learns from the Cascade mask R-CNN with ResNet101 and ResNext101 backbones. Interestingly, this result is even identical to the case when three teachers with successive ResNet50, ResNet101, and ResNeXt101 backbone networks are incorporated into our SAL module, which suggests that excessive distillation stages may not benefit improving the student performance due to the limitation of the representation power of similar teacher models. In addition, it is shown that deteriorating performance is reported when the teacher and the student detectors have diverse network architectures. For example, when the Cascade Mask R-CNN remains the “weaker” teacher framework and the “stronger” counterpart is used as an even more advanced ST-T framework, a slightly lower 40.0% mAP score is achieved, which lags behind the case when both teacher models simultaneously utilize the Cascade Mask R-CNN framework. This implies that the gap among multiple teacher models may be detrimental to the distillation performance. §.§.§ Masking enhancement module To explore the effect of the masking enhancement (ME) module on different distillation stages within our SAL module, we conduct a series of experiments in which the module is introduced into the first stage, the second stage, and both stages simultaneously. Specifically, the teacher Cascade Mask R-CNN detector successively leverages ResNet101 and ResNeXt101 for backbone networks and the RetinaNet-ResNet50 is used as the student model. As revealed in Table <ref>, integrating the masking enhancement module in both stages can not bring further performance improvement, since extra enhancement may generate repeatedly identified object-aware regions, and thus produce biased detection results. In contrast, our method achieves slightly superior performance of 40.1% mAP by only introducing masking enhancement into the second distillation stage. 
This suggests that “stronger” teacher with more powerful representation capability can benefit from the masking enhancement for better identifying the enhanced object-aware regions. §.§.§ Semantic Feature Alignment module To investigate the effect of Semantic Feature Alignment (SFA) module on the model performance, we perform semantic alignment at different feature layers between the teacher and the student backbone networks within our DFMSD model using different configurations. Consistent with the aforementioned setup, the Cascade Mask R-CNN with ResNet101 and ResNeXt101 backbones are used as the dual teachers and the student detector is RetinaNet-ResNet50. As shown in Table <ref>, performing semantic alignment at each FPN layer from P1 to P3 between the teacher and the student helps to generate consistent feature distribution and thus achieves the best result. This also indicates that the teacher-student gap is manifested in the variance in feature distribution at each feature layer. §.§.§ Ablating each module within our DFMSD framework In this section, we have comprehensively explored the three modules mentioned above by ablating each one in our experiments. The ablation studies fall into two groups according to the distillation setting, namely heterogeneous and homogeneous distillation. For heterogeneous distillation, the teachers are Transformer-based ST-T and ST-S models with the student detector used as RetinaNet-ResNet50. As demonstrated in Table <ref>, suboptimal result is reported when any one module operates independently. In particular, when a single SAL produces promising 40.8% mAP, combining it with ME and SFA modules further improves from 40.8% to 41.2%, substantially suggesting the benefits of integrating the complementary modules into the dual masking feature distillation framework. Similar results are also obtained in the ablation studies for homogeneous distillation where RetinaNet-ResNet101 and RetinaNet-ResNeXt101 are teacher models, while RetinaNet-ResNet50 is the student counterpart, demonstrating that the highest result of 42.0% mAP is obtained when all three modules are integrated as shown in Table <ref>. §.§ Parameter Analysis In this section, we discuss the setup of the hyperparameters involved in our DFMSD model. Firstly, various experimental evaluations are carried out using different threshold values λ, which indicates the scale distribution characteristics of the object-aware regions in Eq. (<ref>). As shown in Fig. <ref>, the best result is achieved when λ=0.5. This is reasonable since it is very likely that an image contains smaller objects when object-aware region areas account for less than half of the image size. In contrast, an image may constitute larger objects if λ>0.5. In addition, we explore the impact of the hyperparameters α and β in Eqs. (<ref>) and (<ref>) on the model performance. As shown in Fig. <ref>, it is shown that the highest 42.9% mAP accuracy is achieved when α and β are respectively set to 5.0×10^-7 and 2.5×10^-7, suggesting that different terms are balanced for desirable tradeoff. § CONCLUSION In this study, we have proposed a dual feature masking stage-wise distillation paradigm termed DFMSD to address heterogeneous distillation. More specifically, we propose integrating stage-wise learning into the dual feature masking framework such that the student can be progressively adapted to different teachers in various distillation stages. 
Meanwhile, masking enhancement is also introduced into the stage-wise learning such that the object-aware masking regions are enhanced for improved masking feature reconstruction. In addition, semantic alignment is also performed at different FPN layers between the teacher and the student network to generate consistent feature distributions. With all the above-mentioned modules incorporated, the gap between the teacher and the student models can be bridged for boosted distillation performance. Extensive experiments on the COCO dataset for object detection with different setups demonstrate the promise of our proposed method and its superiority to the SOTA, particularly in the heterogeneous distillation scenario. § ACKNOWLEDGEMENT The authors greatly appreciate the valuable and constructive comments of the editors and all the anonymous reviewers. This work was supported by the National Natural Science Foundation of China under Grants 62173186, 62076134, and 62303230, and the Jiangsu Provincial Colleges Natural Science General Program under Grant 22KJB510004. § DECLARATIONS The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Conflict of Interests. The authors declare that they have no conflict of interest. Data availability statement. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
http://arxiv.org/abs/2407.12365v1
20240717074253
Self-similar solutions, regularity and time asymptotics for a nonlinear diffusion equation arising in game theory
[ "Marco Antonio Fontelos", "Francesco Salvarani", "Nastassia Pouradier Duteil" ]
math.AP
[ "math.AP" ]
A nonlinear diffusion equation arising in game theory]Self-similar solutions, regularity and time asymptotics for a nonlinear diffusion equation arising in game theory M.A.F.: Instituto de Ciencias Matemáticas,(ICMAT, CSIC-UAM-UCM-UC3M),Campus de Cantoblanco, 28006 Madrid, Spain marco.fontelos@icmat.es N.P.D.: Sorbonne Université, Inria, Université Paris-Diderot SPC, CNRS, Laboratoire Jacques-Louis Lions, Paris, France nastassia.pouradier_duteil@sorbonne-universite.fr F.S.: Léonard de Vinci Pôle Universitaire, Research Center, 92916 Paris La Défense, France & Dipartimento di Matematica “F. Casorati”, Università degli Studi di Pavia, Via Ferrata 1, 27100 Pavia, Italy francesco.salvarani@unipv.it § ABSTRACT In this article, we study the long-time asymptotic properties of a non-linear and non-local equation of diffusive type which describes the rock-paper-scissors game in an interconnected population. We fully characterize the self-similar solution and then prove that the solution of the initial-boundary value problem converges to the self-similar profile with an algebraic rate. [ Francesco Salvarani July 22, 2024 ======================= § INTRODUCTION The rock-paper-scissors game is not only one of the classical examples in game theory, but it arises also in other contexts, such as bacterial ecology and evolution, where it has been extended to the scale of an entire population. In several situations, indeed, the rock-paper-scissors game allows to model cyclic competition between species and the stabilization of bacteria populations <cit.>, i.e. when three species coexist and there is cyclic domination of the first species on the second one, of the second species on the third one, and of the third species on the first one. Moreover, some applications of this game have been proposed in evolutionary game theory, for example to explain the coexistence or extinction of species <cit.> or male reproductive strategies <cit.>. This justifies the importance of having a description of the rock-paper-scissors dynamics at the mesoscopic (i.e. kinetic) and macroscopic levels, where the population is described by a density function: it allows the description of the global dynamics without needing to take into account the individual situations, and is therefore well adapted for population with a high number of individuals. A kinetic version of the rock-paper-scissors game has been studied in <cit.>. This situation involves a population of players who form temporary pairs through random encounters. The two members of a pair play the game once, then look for another contestant to play with, and so on. The independent variables are the time t∈_+ and an individual exchange variable x (which may correspond to the wealth of individuals, if the game involves agents exchanging a certain amount of money). In the case of a fully interconnected population, assuming that there are no forbidden pairs and that players continue to play as long as their wealth allows, the corresponding kinetic model introduced in <cit.> has the form of an integro-differential equation on the half-line _+= [0,+∞), with a boundary condition in x=0. By assuming that players increase the frequency of the game by a factor of ε^-1, (with ε >0) and, at the same time, reduce the amount played in each iteration of the game by a factor of ε, in the limit ε→ 0 the authors of <cit.> obtain a non-linear and non-local partial differential equation at the classical macroscopic level. 
In particular, the limiting initial-boundary value problem for the unknown u : _+×_+→, which represents the density of agents with wealth x∈_+ at time t ∈_+, is the following: ∂_t u(t,x) = (∫__+ u(t,z) dz) ∂^2_xu(t,x) for a.e. (t,x)∈^*_+×^*_+ [4pt] u(t,0) = 0 for a.e. t∈^*_+ u(0,x) = u^in(x) for a.e. x∈_+, where u^in∈ L^1(_+)∩ L^∞(_+) and ^*_+=(0,+∞). Existence and uniqueness of a very weak solution of (<ref>)- (<ref>) have been proven by means of a compactness argument in <cit.>. However, several open questions on this problem are still waiting for an answer. In this article we study two open questions about the initial-boundary value problem (<ref>)-(<ref>), namely the regularity of the problem and the intermediate asymptotics with respect to a suitable self-similar solution, which we will precisely identify. We stress that the asymptotic behavior is one of the main questions on diffusion equations – see the review article <cit.> and the references therein. Equation (<ref>) has a mathematical structure that is essentially non-local. It can be interpreted as a heat equation, whose diffusivity coefficient depends on the integral of the solution itself (i.e. the total mass, in the case of non-negative solutions), which is a typical global quantity of the system. Because of the peculiar structure of the nonlinearity in (<ref>), our methods of proof are sometimes close to those used in the study of linear equations <cit.> but, in several points, the need of approaches designed for studying non-linear equations are necessary (see, for example, <cit.>). More specifically, in this article we prove that, similarly to the heat equation, there exists an instantaneous gain in regularity. We moreover characterize the self-similar solutions of the problem and identify the convergence speed to the intermediate asymptotic profile under some conditions on the initial condition which we precisely characterize. We note that the algebraic convergence speed is a consequence of the non-local structure of the problem. The structure of this article is the following. The study of the regularity of the problem, together with other basic properties of its solution, are detailed in Section <ref>. Then, in Section <ref>, we study the long-time convergence of the solution toward the self-similar solution. We illustrate our study numerically in Section <ref> and, in the Appendix, we treat the convergence to the self-similar solution in the case of a bounded interval. § BASIC RESULTS In this section we deduce and collect some basic results about the initial-boundary value problem (<ref>)-(<ref>). §.§ Weak formulation We first define the very weak formulation of (<ref>)- (<ref>) as follows: Let T>0. A measurable function u∈ L^1([0,T]×_+) is said to be a very weak solution of the initial-boundary value problem (<ref>)-(<ref>) if it satisfies [ ∫_0^T ∫__+ u(t,x) ∂_tφ(t,x) d x d t + ∫_0^T (∫__+ u(t,x_*) d x_*) ∫__+ u(t,x) ∂^2_xφ(t,x) d x d t; + ∫__+u^in(x)φ(0,x) d x=0 ] for all φ∈ C^1([0,T];C_c^2())∩ L^∞([0,T]×), such that φ(T,x)=0 for all x∈_+, φ_x(t,0)=0 for all t∈ [0,T], where C_c^2() is the space of C^2 compactly supported functions on . Existence and uniqueness of a very weak solution to (<ref>)-(<ref>) was proven in <cit.>. Moreover, one can prove that the solution is bounded by the L^∞ norm of the initial data, and if the initial data is non-negative, the solution remains non-negative for all time. 
The precise results are recalled in the following theorem (see <cit.>): Consider the initial-boundary value problem (<ref>) -(<ref>), with initial condition u^in∈ L^1(_+)∩ L^∞ (_+) and such that u^in≥ 0 for a.e. x∈_+. Let T > 0. Then, it has a unique very weak solution, which belongs to L^1((0,T)×_+)∩ L^∞ ((0,T)×_+). Moreover, ‖ u(t, · ) ‖_L^∞(_+)≤‖ u^in‖_L^∞(_+) for a.e. t∈ (0,T). Lastly, the solution is non-negative, i.e. u(t,x)≥ 0 for a.e. t∈(0,T) and for a.e. x∈ _+. §.§ Improved regularity and positivity Let u be the very weak solution of the initial-boundary value problem (<ref>)-(<ref>). Then, it is possible to consider its antisymmetric extension v, defined for all x∈, such that v(t,x)= u(t,x)1_x≥ 0 - u(t,-x)1_x≤ 0, for a.e. t∈ (0,T). Consequently, v solves (in the very weak sense) the following auxiliary initial value problem for the unknown v : _+×→ ∂_t v(t,x) = (∫_^+ v(t,z) d z) ∂^2_xv(t,x) for a.e. (t,x)∈ _+× v(0,x) = v^in(x) for a.e. x∈, where v^in(x)=u^in(x)1_x≥ 0 - u^ in(-x)1_x≤ 0 for a.e. x∈. Because of the regularity conditions on the initial data, we immediately deduce that v^in∈ L^1()∩ L^∞ (). This antisymmetric extension of u will allow us to prove the following result: Let u be the very weak solution of the initial-boundary value problem (<ref>)-(<ref>), with initial and boundary conditions satisfying the hypotheses of Theorem <ref>. Then, u∈ C^∞((0,T)×^*_+). Moreover, u admits the following semi-explicit representation: [ u(t,x)= (4π∫_0^t∫__+ u(θ,z) d z dθ)^-1/2× ; ∫__+ u^in (y) {exp [-(x-y)^2(4∫_0^t∫__+u(θ,z) d z dθ)^-1] -exp [-(x+y)^2(4∫_0^t∫__+u(θ,z) d z dθ)^-1] } d y. ] We consider the auxiliary problem (<ref>). We have not yet proven that v is the unique solution of (<ref>), but we know, by construction, that it exists and belongs to L^1((0,T)× )∩ L^∞ ((0,T)× ), because of the results proved in <cit.>. We hence introduce the spatial Fourier transform v̂ : L^1((0,T)×)∩ L^∞ ((0,T)×) → L^2((0,T)×), which is meaningful because of the regularity hypotheses on v. We use the following convention for the Fourier transform of a function and for its inverse: ∀ ξ∈ℝ, v̂(t,ξ )=∫ _ v(t,x) e^-2π i ξ xx̣, and ∀ x ∈ℝ, v(t,x )=∫ _ v̂(t,ξ ) e^2π i ξ xξ̣. By applying the Fourier transform with respect to the x variable to all terms in the Cauchy problem (<ref>), we deduce a problem for the Fourier transform v̂ of the solution, i.e. we obtain ∂_t v̂(t,ξ) =-4π^2 ξ^2(∫__+ v(t,z) d z) v̂(t,ξ) for a.e. (t,ξ)∈_+× v(0,ξ) = v̂^in (ξ)= ℱ(v^in)(ξ) for a.e. x∈. This auxiliary problem can be integrated in time, allowing to deduce the integral form of the initial-boundary value problem (<ref>): v̂ (t,ξ)= v̂^in (ξ) exp [-4π^2 ξ^2∫_0^t (∫__+ v( θ ,z) d z) θ̣] for all ξ∈. Thanks to the regularity of u^in, we have that v̂^in∈ L^∞(). Hence, by Formula (<ref>) the decay to zero of v̂ when ξ tends to +∞ is faster than polynomial, for any degree of the polynomial. Consequently, v∈ L^1((0,T); C^∞()) (see, for example, <cit.>). By applying the inverse Fourier transform to the second factor of Equation (<ref>), we find ℱ^-1 (exp (-4π^2 ξ^2∫_0^t (∫__+ v( θ ,z) d z) θ̣)) = (4π∫_0^t ∫__+ v(θ,z) d z dθ)^-1/2exp [-x^2(4∫_0^t∫__+ v(θ,z) d z dθ)^-1], so that v (t,x) = (4π∫_0^t ∫__+ v(θ,z) d z dθ)^-1/2∫_ v^in (y) exp [-(x-y)^2(4∫_0^t∫__+ v(θ,z) d z dθ)^-1] d y. for all x∈. We note that, if v∈ L^1((0,T); C^∞()), then the right-hand side of the previous equation is, in fact, a quantity belonging to C((0,T); C^∞()). By a bootstrap argument <cit.>, we immediately deduce that v∈ C^∞((0,T)×). 
In particular, when x> 0, we can write the previous expression in the following way: v (t,x)= (4π∫_0^t∫__+ v(θ,z) d z dθ)^-1/2× ∫__+ u^in(y) {exp [-(x-y)^2(4∫_0^t∫__+ v(θ,z) d z dθ)^-1] -exp [-(x+y)^2(4∫_0^t∫__+ v(θ,z) d z dθ)^-1]} d y. Moreover, for x=0, we have that v (t,0)=0 for all t∈(0,T). Furthermore, v is clearly strictly positive for all x>0 provided that u^in is non-negative for a.e. x∈_+. By comparing (<ref>) and (<ref>)-(<ref>), we deduce that ũ= v 1_x≥ 0 satisfies the initial-boundary value problem (<ref>)-(<ref>) with initial value ũ(0,·)=1_x≥ 0. Because of the uniqueness of the very weak solution of (<ref>)-(<ref>) (see Theorem <ref>), we deduce that ũ(t,x)= v(t,x)1_x≥ 0=u(t,x) for a.e. (t,x)∈(0,T)×_+. Consequently, the previous computation allows to obtain a semi-explicit representation of u: [ u(t,x)= (4π∫_0^t∫__+ u(θ,z) d z dθ)^-1/2× ; ∫__+ u^in (y) {exp [-(x-y)^2(4∫_0^t∫__+u(θ,z) d z dθ)^-1] -exp [-(x+y)^2(4∫_0^t∫__+u(θ,z) d z dθ)^-1] } d y. ] Finally, we underline that u∈ C^∞((0,T)×^*_+) because u inherits the regularity properties of v. §.§ Some quantitative bounds The first step in our analysis consists in proving some uniform estimates. In all that follows, we will assume that ≥ 0. Let u be the solution of the initial-boundary value problem (<ref>)-(<ref>) and let M :t↦∫__+ u(t,x) d x. Then M is a decreasing function of time. In particular, M∈ C^∞((0,T)) and, for all t∈_+, M (t) ≤ M (0)= ∫__+ u^in(x) d x. The result is a direct consequence of the regularity proven in Proposition <ref>. Integrating in x Equation (<ref>), it holds M^'(t) = (∫_0^+∞∂_x^2 u(t,z) dz ) M(t) = (lim_x→+∞∂_x u(t,x) - ∂_x u(t,0)) M(t). By differentiating both sides of Equation (<ref>) with respect to x , we obtain [ ∂_x u(t,x) = -2/√(π) (4∫_0^t∫__+ u(θ,z) d z dθ)^-3/2∫__+u^in (y)(x-y) exp [-(x-y)^2(4∫_0^t∫__+u(θ,z) d z dθ)^-1] d y ; +2/√(π) (4 ∫_0^t∫__+ u(θ,z) d z dθ)^-3/2∫__+ u^in (y)(x+y) exp [-(x+y)^2(4∫_0^t∫__+u(θ,z) d z dθ)^-1] d y. ] We deduce that ∂_x u(t,0)≥ 0 and lim_x→+∞∂_x u(t,x) =0 for all t∈(0,T). The thesis follows directly. For future purposes, we introduce the spatial first moment M_1:_+→ such that, for u solution of (<ref>)-(<ref>), it holds M_1(t):=∫__+x u (t,x) d x, for all t∈ _+. Moreover, we introduce the spatial second moment M_2:_+→ such that, for u solution of (<ref>)-(<ref>), it holds M_2(t):=∫__+x^2 u (t,x) d x, for all t∈ _+. From here onward, we consider initial data with bounded spatial first moment, i.e. we suppose that the following property is satisfied. The initial condition u^in∈ L^1(_+)∩ L^∞(_+) is admissible if and only if M_1(0)=∫__+xu^in(x) d x<+∞. The following result holds. Let u be a (strong) solution of (<ref>)-(<ref>), and suppose that u^in is admissible (see Definition <ref>). Then the spatial first moment of u is conserved, i.e. for all t∈_+, ∫__+ x u(t,x) d x= ∫__+ x u^in(x) d x. Consider u:_+×_+→ a solution to (<ref>)- (<ref>), and let M be the total mass defined in Proposition <ref>. Let v:_+×→ be the antisymmetric extension of u defined in Equation (<ref>) and studied in Subsection <ref>. Notice that ∫_ x v(t,x) d x = ∫__+ x u(t,x) d x - ∫__- x u(t,-x) d x = ∫__+ x u(t,x) d x + ∫__+ x u(t,x) d x = 2 ∫__+ x u(t,x) d x. Now, let a:_+→_+ be defined by a:t↦∫_0^t M (τ) dτ, and define ṽ:_+×→ as follows: ṽ (a(t),x)=v(t,x) for all (t,x)∈_+×. Then ṽ is a solution to ∂_aṽ(a,x) = ṽ_xx(a,x) (a,x)∈𝒯×, ṽ(0,x) = v^in(x) for a.e. x∈, where 𝒯 =( 0, ∫_0^+∞ M (τ) dτ). 
Hence, ṽ satisfies the Cauchy problem for the heat equation on the real line, at least in the time interval 𝒯. Since M(0)>0 and M is continuous with respect to t∈_+, we deduce that ∫_0^t M (τ) dτ>0 for all t>0. At this point, we do not know if lim_t→+∞∫_0^t M (τ) dτ =+∞ or lim_t→+∞∫_0^t M (τ) dτ <+∞. However, this is not a problem in our case. It is enough to know that the first moment of ṽ, i.e. ∫_ xṽ(·,x) d x, is conserved at least in 𝒯. Hence, the first moment of v is also conserved, and ∫__+ x u(t,x) d x = 1/2∫_ x v(t,x) d x = 1/2∫_ x ṽ(τ(t),x) d x is also conserved. Because the first moment is conserved, from here onwards, we will denote by M_1 its value, defined by M_1=M_1(0)=M_1(t) for all t∈ _+. § SELF-SIMILAR SOLUTIONS AND LARGE-TIME ASYMPTOTICS In this section, we consider the initial-boundary value problem (<ref>)-(<ref>), and always suppose that the initial data are admissible (i.e. we suppose that u^in satisfies Definition <ref>). Note that the final time T appearing in the statement of Theorem <ref> is finite, but arbitrary, so that the large-time asymptotics of the problem makes sense. Let μ∈. We look for self-similar solutions g_μ of the form u(t,x)=t^μ-1g_μ(x/t^μ), so that the mass of the solution u satisfies for all t∈_+: ∫__+ u(t,z)dz = t^2μ-1∫__+ g_μ(ξ)dξ. Then g_μ(η ) satisfies the following non-local differential equation: for all η∈_+, (μ-1)g_μ(η)-μη g_μ^'(η)=( ∫_0^∞g_μ(s) d s) g_μ^''(η). We apply the rescaling η :ξ↦( ∫_0^+∞g_μ(s) d s) ^1/2ξ, and denote by f_μ:ξ↦ g_μ((∫_0^+∞ g_μ(s) ds)^1/2ξ) the solution to the simplified differential equation: (μ-1)f_μ(ξ )-μξ f_μ^'(ξ )=f_μ^''(ξ ). Its relation with u is given by: u(t,x)=t^μ-1f_μ((∫__+ u(t,z)dz)^-1/2x/t^1/2) = t^μ-1f_μ((∫__+ f_μ(s)ds)^-1x/t^μ), where we used the following relations: ∫__+g_μ(η) dη = (∫__+f_μ(ξ) dξ)^2 = t^-2μ+1∫__+u(t,x)dx. The following Proposition guaranties the existence of self-similar solutions of the form (<ref>) for any μ∈ [1/3,1) . For μ∈[ 1/3,1) there exists a solution to (<ref>) such that f(0)=0 and f(ξ )→ 0 as ξ→∞. The solution is positive and such that if μ∈( 1/3,1 ), f_μ(ξ )=O(ξ ^1-1/μ) as ξ→∞, and for μ =1/3, f_1/3(ξ )=ξ e^-1/6ξ ^2. Let μ∈[ 1/3,1). We look for an analytic solution to (<ref>), of the form f_μ(ξ )=∑_k=0^+∞b_kξ ^k, where b_k∈ for all k∈. The boundary condition in x=0 implies that f_μ(0)=b_0=0. From (<ref>), we obtain ∑_k=0^+∞[ (μ-1) b_k - μ k b_k - (k+2)(k+1) b_k+2 ] ξ ^k = 0. Thus, for any k∈, it holds b_k+2 = μ - 1 - μ k/(k+1)(k+2)b_k. In particular, the condition b_0=0 implies that b_2n=0 for all n∈ . Thus, denoting w_n:=b_2n+1 we can rewrite f as f_μ(ξ )=∑_n=0^+∞w_nξ ^2n+1, where (w_n)_n∈ satisfy the following relation (since μ>0): a_n+1=-μ/2n+1/2μ/( n+3/2) (n+1)w_n and hence w_n=(-1)^n( μ/2) ^nΓ( 3/2 ) Γ (1)/Γ( 1/2μ) Γ( n+1/2μ) /Γ( n+3/2) Γ (n+1)a_0. The solution can be written in terms of classical hypergeometric confluent functions _1F_1: f_μ(ξ ) =ξ _1F_1( 1/2μ,3/2;-μ/2 ξ ^2) =ξ e^-μ/2ξ ^2._1F_1( 3/2-1/2μ,3/2; μ/2ξ ^2 ) . In the particular case μ =1/3 one has f_1/3(ξ )=ξ e^-1/6ξ ^2 while, for μ∈ (1/3,1) (cf. <cit.> formula 13.7.1) ._1F_1( 1/2μ,3/2;-μ/2ξ ^2) ∼Γ( 3/2) /Γ( 3/2-1/2μ) ( μ/2ξ ^2) ^-1/2μ as ξ→∞ , that is f_μ(ξ )=O(ξ ^1-1/μ). The positivity of f_μ(ξ ) follows from the positivity of the integrand in the following representation formula for the confluent hypergeometric function of the first kind: ._1F_1( α,β;z) =Γ ( β)/Γ (β-α)Γ (α) ∫_0^1e^ztt^α-1(1-t)^β-α-1 d t with α=1/2μ, β=3/2. This concludes the proof of the Lemma. 
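As a quick sanity check of the Proposition, the profile f_μ can be evaluated numerically through its confluent hypergeometric representation, verifying both the closed form at μ=1/3 and the differential equation it satisfies. The sketch below does this with SciPy; the grid and the test value μ=1/2 are arbitrary illustrative choices.

import numpy as np
from scipy.special import hyp1f1

def f_mu(xi, mu):
    # f_mu(xi) = xi * 1F1(1/(2*mu), 3/2; -mu*xi^2/2)
    return xi * hyp1f1(1.0 / (2.0 * mu), 1.5, -0.5 * mu * xi**2)

xi = np.linspace(0.0, 12.0, 4001)
h = xi[1] - xi[0]

# Special case mu = 1/3: the profile collapses to xi * exp(-xi^2/6)
print(np.max(np.abs(f_mu(xi, 1.0 / 3.0) - xi * np.exp(-xi**2 / 6.0))))

# Residual of (mu-1) f - mu xi f' = f'' for a generic exponent, via finite differences
mu = 0.5
f = f_mu(xi, mu)
fp = np.gradient(f, h)
fpp = np.gradient(fp, h)
print(np.max(np.abs((mu - 1.0) * f - mu * xi * fp - fpp)[5:-5]))

Both printed quantities are small (up to discretization error), confirming the representation.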
For μ<1/3, f_μ(ξ ) is not positive and has a zero, ξ^0_μ, that comes from infinity as μ decreases and approaches ξ^0 _0(0)=π (since f_0(ξ )=sin (ξ) for μ=0). Proposition <ref> thus provides us with a family of solutions f_μ to equation (<ref>), for μ∈[1/3 ,1). Recall that by definition, the solution to (<ref>)- (<ref>) must have finite mass. The relation between u and f_μ given by (<ref>) implies that for all t∈_+, ∫__+ u(t,x) dx = t^2μ-1(∫__+ f_μ(ξ) dξ)^2. From Lemma <ref>, f_μ is not integrable for any μ∈( 1/3,1), which means that the only admissible solution to (<ref>) giving a self-similar solution to (<ref>)- (<ref>) with finite mass is given by μ=1/3. We then postulate that the self-similar solution f_1/3 for μ = 1/3 is an attractor, in the sense that solutions tend to the self-similar solution in a suitable norm as t→∞, for all initial data that decay sufficiently fast. The remainder of this article aims to prove that this is indeed the case. We begin by defing a quantity that plays an important role in the definition of u and in the analysis of its asymptotic behavior. Given a solution u to (<ref>)-(<ref>), and its first moment M:t→ M(t) , we define a(t):= ∫_0^t M(s) d s. The question now is the identification of a(t) given by (<ref>). Note that from (<ref>), u(t,x)=1/2√(π a(t))∫_0^+∞( e^- (x-s)^2/4a(t)-e^-(x+s)^2/4a(t)) u^in (s) d s so that ∂_x u(t,0)=1/2√(π)a^-3/2(t)∫_0^+∞ se^-s^2/4 a(t)u^in(s) d s. Integrating Equation (<ref>) in ℝ_+, as seen in the proof of Proposition <ref>, it holds d M(t)/d t=-M(t)∂_x u(t,0), which allows us to conclude that a(t) satisfies the following integro-differential equation: a^''(t)=-a^'(t)1/2√(π)a^-3/2 (t)∫_0^+∞ se^-s^2/4a(t)u^in(s) d s. We define now G(a):=1/2√(π)a^-3/2∫_0^+∞ se^-s^2/4au^in(s) d s. The quantity G(a) is bounded provided that u^in is linear at the origin and has its first moment M_1(0) bounded. Let u^in∈ L^1(_+)∩ L^∞(_+) a positive and admissible initial condition. The integro-differential equation (<ref>), with initial condition a(0)=0 and a^'(0)=M(0), has a solution a: _+→ such that a(t)∼( 3/2√(π)M_1) ^2/3t^2/ 3 as t→∞ and a^'(t)∼2/3( 3/2√(π)M_1) ^ 2/3t^-1/3 as t→∞. Since a^''(t)=-d/ d t∫_0^a(t)( 1 /2√(π)(a^∗)^-3/2∫_0^+∞se^-s^2/ 4a^∗u^in(s) d s) d a^∗ Integrating once in time and using that a^'(0)=M(0), it holds a^'(t)+∫_0^a(t)( 1/2√(π)(a^∗)^-3/ 2∫_0^+∞se^-s^2/4a^∗u^in(s) d s) d a^∗=M(0), which we rewrite as a^'(t)+F(a(t))=M(0), denoting F(a):=∫_0^aG(a^∗). But now F(a)= ∫_0^aG(a^∗) d a^∗ =∫_0^a( 1/2√(π) (a^∗)^-3/2∫_0^+∞se^-s^2/4a^∗u^ in(s) d s) d a^∗ = 1/2√(π)∫_0^+∞( ∫_0^a(a^∗)^-3 /2e^-s^2/4a^∗ d a^∗) su^in(s) d s =1/2√(π)∫_0^+∞( ∫_0^as^-2(a^∗)^-3/2e^-1/4a^∗ d a^∗) u^in(s) d s and, since ∫_0^+∞a^-3/2e^-1/4ada=2√(π), we obtain lim_a^⋆→+∞ F(a^⋆ )=M(0). We conclude that as t tends to infinity, if a(t)→ +∞, then a^'(t)→ 0. Notice that from its definition, a is an increasing function, hence it has a limit when t goes to infinity. Let a_∞:=lim_t→+∞ a(t), and suppose that a_∞<+∞. Then lim_t→+∞ a^'(t) = 0, from which we get F(a_∞)=M(0). However, F(a) is the primitive of a strictly positive function and hence is strictly growing as a function of a , which contradicts lim_a^⋆→+∞ F(a^⋆ )=M(0). We then conclude that a(t)→ +∞ as t→ +∞. Then, as a(t)→ +∞, G(a(t))∼1/2√(π) a(t)^-3/2M_1 and a^''(t)∼ -a^'(t)a^-3/2(t)1/2√(π)M_1, so that a(t)∼ ct^2/3 as t→∞, with - 2/9c=-1/3c^-1/2√(1/π)M_1, that is c=( 3/2√(π)M_1) ^2/3. This concludes the proof of the Lemma. 
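The asymptotics of the Lemma are also easy to check numerically. The sketch below takes the indicator initial datum u^in=χ_[1,2] (the same datum used later in the numerical tests), for which M(0)=1, M_1=3/2 and G(a) admits a closed form, integrates a''=-a'G(a), and compares a(t) and a'(t) with the predicted power laws; the time horizon is an illustrative choice.

import numpy as np
from scipy.integrate import solve_ivp

M0, M1 = 1.0, 1.5     # mass and first moment of u_in = indicator of [1, 2]

def G(a):
    # (1/(2 sqrt(pi))) a^(-3/2) * int_1^2 s exp(-s^2/(4a)) ds
    #   = (exp(-1/(4a)) - exp(-1/a)) / sqrt(pi * a)
    return (np.exp(-0.25 / a) - np.exp(-1.0 / a)) / np.sqrt(np.pi * a)

def rhs(t, y):
    a, ap = y                      # y = (a, a')
    return [ap, -ap * G(a)]        # a'' = -a' G(a), a(0) = 0, a'(0) = M(0)

T = 1.0e5
sol = solve_ivp(rhs, (0.0, T), [1e-9, M0], rtol=1e-8, atol=1e-12)
c = (3.0 * M1 / (2.0 * np.sqrt(np.pi))) ** (2.0 / 3.0)
print(sol.y[0, -1] / T ** (2.0 / 3.0), c)              # a(T)/T^(2/3) vs predicted c
print(sol.y[1, -1] * T ** (1.0 / 3.0), 2.0 * c / 3.0)  # a'(T)*T^(1/3) vs 2c/3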
On the other hand, if u^in does not have its first moment bounded but u^in(x)=O(x^-δ), as x→ +∞ , 1 <δ <2 then G(a)= 1/2√(π) a^-3/2∫_0^+∞ se^-s^2/4au^ in(s) ds∼ 1/√(π) a^-1/2∫_0^+∞ se^-s^2/2u^in(√(2)a^1/2s) ds∼ Ca^- 1/2-δ/2∫_0^+∞ e^-s^2/2 s^1-δ ds and hence, a^''(t)∼ -Ca^'a^-1/2-δ/2 implying a(t)=O(t^2/δ+1 ) so that μ =δ +1. Lemma <ref> thus gives us the asymptotic behavior of a as t goes to infinity. It has two consequences. In Proposition <ref>, we prove that the L^∞-norm of the solution to (<ref>)-(<ref>) decays like t^-2/3. We can then compare the asymptotic behavior of u to that of the candidate self-similar profile. Proposition <ref> shows that this profile indeed also decays like t^-2/3. Let u^in∈ L^1(_+)∩ L^∞(_+) be a positive and admissible initial condition. Then for all x∈_+, u(t,x) ≤M_1/√(2 e π) a(t)∼C M_1^1/3 /t^2/3 as t→+∞, where the value of C can be computed explicitely and does not depend on the initial data. From Equation (<ref>), u(t,x)=1/2√(π a(t))∫_0^+∞ f_a(t)(s) u^ in (s) d s, where f_a(s) := e^-(x-s)^2/4a-e^-(x+s)^2/4a. Notice that f_a(s) = g_a(s)-g_a(-s) = ∫_-s^s g_a^'(ξ) dξ, where g_a(ξ) = e^-(x-ξ)^2/4a. One easily sees that g_a^'(ξ) = 1/√(a)( x-ξ/√(4a)e^- (x-ξ)^2/4a) ≤1/√(2 e a), in which we used the property: |ze^-z^2|≤ (2e)^-1/2 for all z∈_+. Hence, f_a(s)≤2s/√(2 e a), which implies that u(t,x) ≤1/√(2 e π) a(t) M_1. Lemma <ref> allows us to conclude. Let u^in∈ L^1(_+)∩ L^∞(_+) be a positive and admissible initial condition. Let u be the solution to (<ref>)-(<ref>), and let a:t↦∫_0^+∞ u(t,x) dx. Then for all x∈_+, M_1x/2√(π)a^3/2(t)e^-x^2/4a(t)∼C/t^2/3 as t→+∞, for some positive constant C. For all x∈_+, M_1x/2√(π)a^3/2(t)e^- x^2/4a(t) = M_1/√(π)a(t)x/2√(a(t) )e^-(x/√( 4a(t)))^2≤M_1/√(2 e π)a(t)∼C/t^ 2/3, using the asymptotic behavior of a shown in Lemma <ref>. Propositions <ref> and <ref> show that the L^∞ -norms of the solution u to (<ref>)-(<ref>) and of the candidate self-similar solution decay with the same order. In the final theorem of this article we show that u asymptotically approaches the self-similar solution as soon as the second moment of the initial data is bounded. If the initial data u^in has a bounded second moment M_2(0), then there exists C>0 such that | u(t,x)-M_1x/2√(π)a^3/2(t)e^- x^2/4a(t)|≤CM_2(0)/t for t>1. Since u(t,x)=1/2√(π a(t))∫_0^+∞( e^-(x-y)^2 /4a(t)-e^-(x+y)^2/4a(t)) u^in(y) d y, denoting v (x)=1/2√(π a)∫_0^+∞( e^- (x-y)^2/4a-e^-(x+y)^2/4a/y) yv^in(y) d y, we have v (x)-M_1x/2√(π)a^3/2e^-x^2/4a = 1/2√(π a)∫_0^+∞( e^-(x-y)^2 /4a-e^-(x+y)^2/4a/y-x/ae^-x^2/4a) yv^in(y) dy. We write now e^-(x-y)^2/4a-e^-(x+y)^2/4a/y-x/ae^- x^2/4a≡1/a^1/2Φ( x/a^1 /2,y/a^1/2) with Φ( X,Y) =e^-(X-Y)^2/4-e^-(X+Y)^2/4/ Y-Xe^-X^2/4. It is simple to show that there exists a constant C such that |Φ( X,Y) |≤ CY so that | v(x)-M_1x/2√(π)a^3/2e^-x^2/4a |≤C/a^3/2∫_0^+∞y^2u^ in(y)dy. Note that the previous result can be rewritten as t^2/3| u(t,x)-M_1x/2√(π)a^3/2(t)e^- x^2/4a(t)|≤CM_2(0)/t^1/3, which means that the convergence of the solution to the self-similar profile takes place at a faster rate than the decay of their L^∞-norms, which is to be expected. § NUMERICAL TESTS In this section we perform some numerical experiments in order to verify the theoretical results obtained above. At the numerical level, we worked with the finite space interval [0,400], which is sufficiently wide to minimize the boundary effects on the numerical solution, especially for initial data having a fast decay when x→ +∞. 
We have introduced a fixed space step Δ x>0 and a time step Δ t>0. Then, we have divided the interval [0,400] into N sub-intervals of measure Δ x=400/N. We have then used an explicit finite differences scheme where the diffusion coefficient (i.e. the mass) at each time step is taken as the mass in the previous time step. The method is stable under the standard stability condition Δ t ≤ (Δ x)^2/(2M(0)). We have taken, as initial data, u_0(x)=χ_[1,2](x), i.e. u_0(x)=1 for x∈[1,2] and u_0(x)=0 otherwise. In Figure <ref>, we plot the numerical approximation of the solution u for t=50, 500, 5000, 50000 and, in Figure <ref>, we show the time evolution of the quantity log (M_0(t)) (i.e. the logarithm of the mass of u). As we can see, log (M_0(t)) tends to follow a straight line with slope -1/3, which indicates an asymptotics of type M_0(t)=O(t^-1/3) as t→∞. Finally, in Figure <ref> we rescale the profiles in Figure <ref> by multiplying them by a(t) and representing them as a function of η=x/a^1/2(t). As we can see, they approach the self-similar profile f(η )=(M_1/√(4π)) η e^-η^2/4 (dashed line). § APPENDIX We consider here the case of the finite domain Ω=(0,π), with homogeneous Dirichlet boundary conditions. This setting describes the diffusive limit of the kinetic rock-paper-scissors game by supposing that only individuals with wealth x∈Ω play the game, and can be deduced from the kinetic model described in <cit.> by adapting the same arguments. The problem studied in this Appendix is hence the following. We consider the equation w_t= [∫_0^π w(t,ξ) dξ] w_xx, (t,x)∈_+×Ω with initial data w(0,x)= w^in∈ L^2( 0,π) x∈Ω and boundary conditions w(t,0)=w(t,π)=0, t∈_+, where w^in≥ 0 for a.e. x∈ (0,π). Note that, by parabolic theory <cit.>, w(t,x)≥ 0. An explicit solution of (<ref>)-(<ref>), when w^in =M sin(x), is the following: w^*(t,x)= (M/2) sin(x)/(1+Mt), (t,x)∈_+×Ω, where M>0 is a given constant. The function w^* also turns out to be a self-similar solution with the similarity exponent μ=0. Its initial mass is ∫_0^π w^*(0,ξ) dξ = M. We will show that, indeed, if the initial data w^in is in L^2(Ω ), the solution will tend to the explicit solution, i.e. w(t,x)∼ M sin(x)/(2(1+Mt)) as t→ + ∞, and the rate of convergence is 𝒪(t^-2). Clearly, w∈ C( _+;L^2(Ω )) and we can write w(t,x) in terms of Fourier series which, because of the boundary conditions, takes the form w(t,x)=∑_n=1^+∞ w_n(t) sin (nx). By simple inspection in (<ref>)-(<ref>), we deduce that w solves (<ref>), with initial condition w^in=∑_n=1^+∞ w_n(0) sin (nx), and boundary conditions (<ref>). Let M(t)≡∫_0^π w(t,x)dx. Then M(t)=∑_n odd 2w_n(t)/n so that dw_n/dt=-n^2 M(t) w_n. Hence w_n(t)=w_n(0) e^-n^2∫_0^t M(t^')dt^', and we can compute M(t)=∑_n odd (2w_n(0)/n) e^-n^2∫_0^t M(t^')dt^'. Denoting, as before, a(t)=∫_0^t M(t^')dt^', we have then the ordinary differential equation a^'(t)=∑_n odd (2w_n(0)/n) e^-n^2 a(t) so that, since w(t,x)≥ 0 and hence M(t)>0, we can integrate explicitly to obtain G(a)≡∫_0^a e^a^'/[∑_n odd (2w_n(0)/n) e^(1-n^2)a^'] da^'=t. Note that G(a) = (e^a-1)/(2w_1(0)) - (1/(2w_1(0)))∫_0^a [∑_n=3,5,... (w_n(0)/(n w_1(0))) e^(2-n^2)a^']/[ 1+∑_n=3,5,... (w_n(0)/(n w_1(0))) e^(1-n^2)a^'] da^' = e^a/(2w_1(0)) - K+O(e^-7a), as a→ +∞ with K=1/(2w_1(0))+(1/(2w_1(0)))∫_0^∞ [∑_n=3,5,... (w_n(0)/(n w_1(0))) e^(2-n^2)a^']/[ 1+∑_n=3,5,... (w_n(0)/(n w_1(0))) e^(1-n^2)a^'] da^' We have then a∼log (2w_1(0)(t+K)+O(t^-7)), as t→ + ∞ , and hence M(t)=a^'(t)∼ 1/(t+K), as t→ +∞ . Therefore w_n(t)∼ w_n(0)/[2w_1(0)(t+K)]^n^2 as t→ +∞ .
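Before proving the convergence Lemma below, the decay rates just derived can be checked numerically: the following sketch truncates the Fourier series, integrates a^'(t)=∑_n odd (2w_n(0)/n) e^-n^2 a(t), and verifies that t M(t) and 2w_1(0) t e^-a(t) both approach 1. The initial datum w^in(x)=x(π-x), whose odd sine coefficients are w_n(0)=8/(π n^3), and the truncation order are illustrative choices only.

import numpy as np
from scipy.integrate import solve_ivp

n = np.arange(1, 52, 2)                 # odd modes of the truncated series
w0 = 8.0 / (np.pi * n**3)               # sine coefficients of w_in(x) = x*(pi - x)

def mass(a):
    # M = a' = sum over odd n of (2 w_n(0)/n) exp(-n^2 a)
    return np.sum(2.0 * w0 / n * np.exp(-n**2 * a))

T = 1.0e4
sol = solve_ivp(lambda t, y: [mass(y[0])], (0.0, T), [0.0], rtol=1e-10, atol=1e-12)
aT = sol.y[0, -1]
print(T * mass(aT))                     # t * M(t) -> 1
print(2.0 * w0[0] * T * np.exp(-aT))    # 2 w_1(0) t w_1(t)/w_1(0) -> 1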
We can prove then the following Lemma: Let w^in be the initial condition of the initial-boundary value problem (<ref>)-(<ref>) and suppose that w^in∈ L^1(Ω )∩ L^∞(Ω ). Then there exists a constant C, depending on w^in, and a time T>0 such that, for any t>T, | w(t,x)- M sin(x)/(2(1+Mt)) |≤ C/t^2. We note w_n(0)=(2/π)∫_0^π w^in(x) sin (nx) dx, so that | w_n(0)|≤ (2/π)∫_0^π| w^in(x)| dx=(2/π)‖ w^in‖ _L^1(Ω ) and use (<ref>), (<ref>). Acknowledgments: This article was written while the third author was visiting the Instituto de Ciencias Matemáticas (ICMAT) in Madrid. FS deeply thanks ICMAT for its hospitality. The first author is supported by the project PID2020-113596GB-I00. The second author benefited from the Emergence grant EMRG-33/2023 of Sorbonne University. The third author acknowledges the support of INdAM, GNFM group, and of the COST Action CA18232 MAT-DYN-NET, supported by COST (European Cooperation in Science and Technology). The authors thank the anonymous referee for his/her remarks and suggestions which helped us in improving our paper.
http://arxiv.org/abs/2407.12523v1
20240717130643
A Scheduler for Real-Time Service in Wi-Fi 8 Multi-AP Networks With Parameterized Spatial Reuse
[ "Kirill Chemrov", "Dmitry Bankov", "Andrey Lyakhov", "Evgeny Khorov" ]
cs.NI
[ "cs.NI" ]
A Scheduler for Real-Time Service in Wi-Fi 8 Multi-AP Networks with Parameterized Spatial Reuse Kirill Chemrov, Dmitry Bankov, Andrey Lyakhov, Evgeny Khorov Kirill Chemrov, Dmitry Bankov, Andrey Lyakhov, and Evgeny Khorov are with the Institute for Information Transmission Problems of the Russian Academy of Sciences, Moscow 127051, Russia (emails: lastname@wireless.iitp.ru). July 22, 2024 ============================================================================================================================================================================================================================================================================================== § ABSTRACT Real-time applications (RTAs) require low delays and impose a significant challenge to Wi-Fi. In Wi-Fi, high delays are often caused by waiting for the channel to become idle. This problem can be solved with Parameterized Spatial Reuse (PSR), which allows a station (STA) to transmit its frame with reduced power simultaneously with a triggered uplink transmission in an overlapping network. The PSR opportunity depends on the pathloss between involved STAs, so the same transmission may allow PSR for one STA but not for another one. Thus, to satisfy tight delay constraints in dense overlapping networks, access points (APs) in the same area shall often allow PSR for every STA with RTA traffic. This letter proposes a fast scheduler enabling frequent PSR transmissions for RTA traffic. The scheduler uses Multi-AP coordination, the feature of upcoming Wi-Fi 8. With simulations, we show that it almost halves the delay for RTA traffic and does not deteriorate the quality of service for other traffic compared with an airtime fairness scheduler. Wi-Fi 8, Real-Time Applications, Channel Resource Scheduling, Parameterized Spatial Reuse, Multi-AP § INTRODUCTION The traffic of real-time applications (RTAs), such as virtual and augmented reality, remote control, industrial automation, cloud gaming, etc, requires low delays and high reliability. For example, Wi-Fi beyond 2030 shall support immersive communications with 99.9% of packets being served with delays below 20ms <cit.>. These requirements are challenging for modern Wi-Fi. First, Wi-Fi uses unlicensed spectrum, suffering from interference, which induces packet losses. Second, Wi-Fi uses random channel access, which results in high delays caused by waiting for the end of ongoing transmissions and collisions <cit.>. Third, the density of Wi-Fi networks (or Basic Service Sets, BSSs) is growing rapidly, leading to many overlapping BSSs, and inducing more interference. Recently launched 802.11bn (known as Wi-Fi 8 <cit.>), considers RTA as a key use case and studies how to improve the worst-case latency for RTA traffic. A promising way is extending several features introduced in Wi-Fi 6 <cit.> with multi-AP coordination, a killer feature of Wi-Fi 8. The first feature of Wi-Fi 6 is trigger-based (TB) transmission. This mechanism allows an Access Point (AP) to send a trigger frame (TF) and allocate channel resources for uplink transmissions of some stations (STAs). TB transmissions help to avoid contention between client STAs of the same BSS. The second feature is Parameterized Spatial Reuse (PSR), which improves the performance of overlapping BSSs <cit.>. PSR allows a STA to transmit during an ongoing TB transmission from another BSS. For that, the AP initiates a TB transmission with a TF, notifying the surrounding STAs about the PSR opportunity. 
If a STA wants to use the PSR opportunity, it reduces its transmit power according to information in the TF to avoid high interference with the TB transmission. PSR can be extremely useful for RTA because it significantly accelerates the access to the channel when it is busy <cit.>. PSR should only be used if the power restriction is not too strong and the delivery is reliable. However, this restriction depends on the pathloss between STAs, so not all TB transmissions are favorable for the PSR transmission of a particular STA. As a result, the efficiency of PSR in reducing delays depends on the schedule of TB transmissions. Therefore, APs need to use multi-AP coordination for information exchange to adjust the schedule, taking into account the PSR opportunity for STAs of overlapping BSSs with RTA traffic. Note that although existing works, e.g., <cit.>, show that PSR is favorable for RTA, they do not provide an algorithm to arrange transmissions so that the requirements of RTA STAs are satisfied. Moreover, the approach from <cit.> implies simple alternation of TB transmissions that allow and disallow PSR, which leads to unfairness in resource allocation. For example, if too many STAs allow PSR, then each of them receives more channel time than each of those that disallow PSR. In this letter, we design a new scheduling approach that, in comparison with the airtime fairness scheduler, reduces delays for several RTA STAs without degradation of throughput and fairness for STAs with delay-insensitive traffic. The rest of the letter is structured as follows. In Section <ref>, we describe the scenario and state the problem. Section <ref> contains the proposed scheduling approach with the optimization problem statement and its greedy solution. In Section <ref>, we discuss numerical results. Section <ref> concludes the letter. § SCENARIO AND PROBLEM STATEMENT Consider two overlapping BSSs sharing the same channel. The first BSS (hereinafter referred to as non-RTA BSS) includes N non-RTA STAs and a non-RTA AP, each of which generates non-RTA broadband traffic. The AP controls uplink transmissions with TF to prevent collisions: after a portion of downlink data, it transmits a TF to start an uplink transmission. The second one (hereinafter referred to as RTA BSS) consists of an RTA AP and M RTA STAs, which generate uplink RTA traffic. As the AP can access the channel with high priority, serving RTA downlink traffic is not as challenging as serving uplink RTA one. Thus, we consider only uplink traffic in the RTA BSS. As the RTA AP does not know about the presence of RTA data at STAs, it does not use TB transmissions. The non-RTA AP provides the PSR opportunity during some TB uplink transmissions. Note that the PSR opportunity cannot be provided during downlink transmissions. The PSR opportunity may set too stringent power restrictions for some RTA STAs, which makes successful delivery impossible. Let us call a non-RTA STA PSR-favorable for a particular RTA STA if, during its TB transmissions, PSR is allowed with such power restriction that allows signal-to-interference-plus-noise ratio (SINR) of the RTA STA's transmission to exceed some threshold SINR_th. Otherwise, if a non-RTA STA disallows PSR or allows only unreliable PSR transmission for an RTA STA, we call it PSR-unfavorable for this RTA STA. Without coordination between APs, the non-RTA AP is unaware of RTA STAs in the overlapping BSS and does not know which non-RTA STAs are PSR-favorable for them. 
So, the non-RTA AP performs some scheduler, e.g., the airtime fairness one, which on average allocates equal channel time for each STA <cit.>. As a result, the AP may schedule STAs that are PSR-unfavorable for some RTA STAs several times in a row, which increases worst-case channel access delays. Thanks to multi-AP coordination considered for Wi-Fi 8, we can design better scheduling algorithms. Specifically, APs perform the following PSR-aware classification for each RTA STA. The non-RTA AP communicates with associated STAs using TB transmissions. During this data exchange, the non-RTA AP provides the RTA AP with an allowed interference level using TF for each STA that allows PSR. Note that the allowed interference level can vary per non-RTA STA as it depends on the used modulation and coding scheme (MCS). At the same time, each RTA STA measures the received power level of the TF from non-RTA AP and reports it to the RTA AP, aggregating this information with data. Then, the RTA AP measures the received power level of TB transmission from each non-RTA STA and calculates the expected SINR of considered RTA STA transmission with the PSR opportunity granted during each non-RTA STA TB transmission. Note that the procedure does not require additional regular signaling, as it is enough to measure the received power of data transmissions and TF. A non-RTA STA is considered PSR-favorable if it meets the following condition: SINR > SINR_th. This procedure is repeated several times to make the classification more robust in mobile scenarios, e.g., when a VR player moves. A non-RTA STA is considered PSR-favorable only if the PSR criterion is met for all recent measurements. Finally, the RTA AP reports to the non-RTA AP the set of PSR-favorable STAs, via wireless or wired backhaul. While wireless backhaul is under development <cit.>, some multi-AP solutions already use the wired one <cit.>. In any case, the procedure does not induce much control traffic, as only significant changes in SINR affecting the classification generate new reports. We state a problem to develop a scheduler for non-RTA BSS that uses multi-AP coordination to improve the quality of service (QoS) for RTA BSS without QoS degradation for non-RTA BSS. As a QoS metric for RTA traffic, we consider the Q (0.999) quantile of delay among all STAs, where the delay is the time between a frame enqueuing and its delivery. For non-RTA traffic, we consider the average throughput and Jain's fairness index <cit.> of throughput: the ratio of the squared average throughput to the average of the squared throughput. § PROPOSED SCHEDULING APPROACH To solve the stated problem, we design a scheduler that determines the order of uplink non-RTA transmissions. To guarantee the fairness of resource allocation for non-RTA STAs, this order is repeated but in every repetition, each non-RTA STA transmits only once. Among all possible permutations of non-RTA STAs, the scheduler minimizes the worst-case distance between PSR-favorable transmissions for each RTA STA, which reduces the Q-quantile of RTA delays. This approach makes the scheduler independent of the type of RTA traffic, e.g., whether it is sporadic or regular. Moreover, since the scheduler only reorders the non-RTA STA transmissions, it also guarantees that the average delay does not notably degrade for non-RTA STAs. In Section <ref>, we state an optimization problem to find the order of non-RTA STAs. Then, in Section <ref>, we design a greedy solution for it. 
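To make the classification described above concrete, the following sketch builds the per-STA favorability flags from recent SINR estimates; the array names, shapes, and the 10 dB threshold are assumptions made for illustration (in practice, each SINR estimate combines the interference level advertised in the TF, the TF power measured at the RTA STA, and the TB transmission power measured at the RTA AP, as explained above).

import numpy as np

def favorability(sinr_db, allows_psr, sinr_th_db):
    # sinr_db[j, i, k]: k-th recent estimate of the SINR that RTA STA j would
    # achieve during a TB transmission of non-RTA STA i under the PSR power
    # restriction; allows_psr[i]: whether TFs soliciting STA i permit PSR at all
    passes = np.all(sinr_db > sinr_th_db, axis=2)            # robust to mobility:
    return (passes & allows_psr[np.newaxis, :]).astype(int)  # all recent checks must pass

# Toy usage: 2 RTA STAs, 3 non-RTA STAs, 4 recent measurements each
rng = np.random.default_rng(0)
sinr = rng.normal(12.0, 4.0, size=(2, 3, 4))
allows = np.array([True, True, False])
print(favorability(sinr, allows, sinr_th_db=10.0))

Each column of the returned matrix is the favorability vector of one non-RTA STA, which is exactly the input used by the scheduler formalized next.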
§.§ System Analysis Let us formalize the problem stated in Section <ref>. To characterize a non-RTA STA i, we use a favorability vector f_i = {f_ij}_j=1^M, i ∈ 1..N. We set f_ij = 1 if non-RTA STA i is PSR-favorable for RTA STA j; otherwise, we set f_ij = 0. Let the scheduler make an order of active STAs, i.e., STAs with traffic, {i_1, i_2, ..., i_N}. To describe the scheduling of non-RTA BSS transmissions for one period (column order) and the corresponding PSR opportunities for RTA STAs, we use the vectors f_i in this order to build a matrix F = [f_i_1|f_i_2|...|f_i_N], where [𝐀|𝐁] denotes concatenation of matrices 𝐀 and 𝐁. Let Z_i(F) be the maximum number of consecutive zeros in the i-th row of the periodically repeated matrix [...|F|F|F|...]. For example, for a row (0, 1, 0, 1, 0, 0), Z_i = 3. Note that it is enough to only consider [F|F] instead of [...|F|F|F|...]. If the row i contains k consecutive zeros, the RTA STA i cannot make a PSR transmission for the time of k non-RTA transmissions, which results in delays as long as the duration of k non-RTA transmissions. Thus, to minimize the worst-case channel access delays for the RTA STA i, we minimize Z_i(F) over all column permutations of the matrix [f_1|f_2|...|f_N]. We denote the set of all such permuted matrices as {F_σ}. As it is not always possible to find a permutation that is optimal for all RTA STAs, we state a problem to minimize max_i∈1..M Z_i(F) over all F ∈ {F_σ}. If several permutations give the same max_i∈1..M Z_i(F), we minimize the next maximum, etc. Let the vector z(F) contain {Z_i(F)}_i=1^M in descending order. We state the lexicographic optimization problem: z(F) → lex min over F ∈ {F_σ}. Note that the rows of F that consist of all zeros or all ones do not change with column permutation. They do not affect the solution of the optimization problem and thus can be omitted while solving it. We further assume that all the M rows of F have at least one 0 and one 1. The optimization problem (<ref>) can be solved with a brute-force approach, but its complexity is 𝒪(M× N!), so we propose a fast greedy algorithm. §.§ Greedy Scheduler The idea of the greedy algorithm is to take the vectors one by one and insert each of them at the position in the result matrix that minimizes the objective function z(·) for the current matrix, see Algorithm <ref>. The algorithm takes as input a set of favorability vectors of active STAs {f_i}_i=1^N and outputs the greedy-optimal sequence of vectors concatenated in the matrix F^*. We initialize the resulting matrix F^* by concatenating the first two vectors (line <ref>). For each remaining vector f_i, we find the best position in the matrix F^* (lines <ref>–<ref>). Let the matrix G^* store the current best placement of the vector f_i. We initialize G^* with the matrix F^* and insert the vector f_i after column 1 (lines <ref>–<ref>). Then, we iterate through the remaining possible positions j of the vector f_i (line <ref>). The variable T stores the copy of the matrix F^* with the vector f_i inserted after column j (lines <ref>–<ref>). If the objective function z(T) is lexicographically less than z(G^*), i.e., we have found a better position for f_i, we overwrite G^* with T (lines <ref>–<ref>). When all the positions are exhausted, G^* stores the matrix with the best position of f_i. So, we overwrite the resulting matrix F^* with G^* and consider the next vector f_i+1 (line <ref>). When all the vectors from {f_i}_i=1^N are exhausted, we get the final greedy-optimal matrix F^*, whose column order determines the scheduling order of non-RTA STAs. Note that the vectors {f_i}_i=1^N can be added to the matrix F^* in an arbitrary order. 
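The objective z(F) and the greedy insertion procedure can also be sketched compactly in Python. The notation mirrors the text above (favorability vectors f_i as columns, per-row maximum zero run Z_i, descending-sorted vector z); this is a simplified reimplementation for illustration rather than the exact Algorithm <ref>, and the brute-force reference is included only to show the 𝒪(M×N!) alternative.

```python
# Illustrative reimplementation of the objective and the greedy scheduler described above.
from itertools import permutations

def max_zero_run(row):
    """Z_i: longest run of zeros in the periodically repeated row (row concatenated with itself)."""
    doubled, best, run = list(row) + list(row), 0, 0
    for v in doubled:
        run = run + 1 if v == 0 else 0
        best = max(best, run)
    return best

def objective(columns):
    """z(F): per-RTA-STA maximum zero runs, sorted in descending order (lexicographic key)."""
    m = len(columns[0])
    rows = [[col[i] for col in columns] for i in range(m)]
    return sorted((max_zero_run(r) for r in rows), reverse=True)

def greedy_schedule(vectors):
    """Insert each favorability vector at the position that minimizes z(.) lexicographically."""
    result = list(vectors[:2])                      # F*: start from the first two vectors
    for f in vectors[2:]:
        best_cols, best_obj = None, None
        for j in range(1, len(result) + 1):         # candidate insertion positions
            trial = result[:j] + [f] + result[j:]   # T: F* with f inserted after column j
            obj = objective(trial)
            if best_obj is None or obj < best_obj:  # Python compares lists lexicographically
                best_cols, best_obj = trial, obj    # G*: current best placement
        result = best_cols
    return result                                   # column order = non-RTA scheduling order

def brute_force_schedule(vectors):
    """O(M x N!) reference solution of the lexicographic problem."""
    return min(permutations(vectors), key=objective)

# Example: 3 RTA STAs (rows), 4 non-RTA STAs (columns given as favorability vectors).
f = [(1, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 1)]
print(objective(greedy_schedule(f)), objective(brute_force_schedule(f)))
```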
We have tried several ways to sort {f_i}_i=1^N, e.g., by their Hamming weights. However, the numerical results for all of them turned out to be indistinguishable from the case without the preliminary sorting considered in Fig. <ref>. The complexity of the lexicographical comparison of vectors of size M is 𝒪(M). As it is performed in a double nested loop, the final complexity of the greedy algorithm is 𝒪(M× N^2), which is significantly better than brute force. Although we consider only two overlapping BSSs to simplify the explanation, the developed scheduler is valid for several overlapping BSSs. In this case, it creates a joint schedule for these BSSs, which can be implemented with multi-AP coordination, e.g., using the extended TXOP sharing considered for Wi-Fi 8 <cit.>. A controller that manages the joint schedule collects the favorability vectors of STAs in all BSSs and runs the scheduler for all the received vectors. Thus, adding more BSSs only increases N in the above algorithm. § RESULTS We study the efficiency of the proposed scheduler with extensive experiments in ns-3 <cit.>. Consider an apartment building, and let the Wi-Fi networks of two apartments on the same floor use the same band. Specifically, the devices of two overlapping BSSs are located in two apartments seven meters from each other, see Fig. <ref>, which is reasonable as neighboring apartments likely use different channels. The non-RTA AP is near the right wall, and the positions of N=8 non-RTA STAs are randomly scattered inside the 10 × 7 meter apartment. Non-RTA devices have saturated traffic of packets with a size of 1000 bytes, which are aggregated and transmitted with MCS 8. In fact, not all the STAs may have saturated traffic, so we consider only those STAs that have data to transmit. The non-RTA AP gains a transmission opportunity of ≈5 ms and uses it equally for downlink and uplink transmissions. The RTA AP is located near the left wall of the right apartment, and the positions of the M RTA STAs are fixed inside the building; we specify them for each particular set of experiments. We choose the RTA STA locations so that the favorability vectors are highly likely to differ greatly, which challenges the algorithm the most. The RTA STAs generate 256-byte packets periodically with period T_RTA. For greater reliability, RTA STAs use MCS 0. We use the channel model and penetration losses for the Residential Scenario described in <cit.>. We list the other modeling parameters in Table <ref>. We run two sets of experiments. For the first set of experiments, we place M=2 RTA STAs as shown in Fig. <ref>. We vary T_RTA and compare QoS metrics for the RTA and non-RTA BSSs for the baseline airtime fairness scheduler and two RTA-aware schedulers: the brute-force and greedy solutions of problem (<ref>). As shown in Fig. <ref> (solid lines), for T_RTA>15 ms, the RTA-aware schedulers almost halve the 0.999-quantile of delay. Moreover, for the delay limit of 20 ms, the RTA-aware schedulers reduce the packet loss ratio by two orders of magnitude. We average the performance over all arrangements, see Fig. <ref>. The RTA-aware schedulers do not worsen the average throughput of the non-RTA BSS compared to the baseline scheduler because RTA STAs use PSR opportunities more often and consume less channel time. Jain's fairness index of throughput is close to 1 for all the studied schedulers because they guarantee an equal distribution of resources. Let us compare the two RTA-aware schedulers. 
According to all metrics, the results for the brute-force and the greedy algorithms are almost the same. The greedy algorithm loses at most 2% in the delay quantile. Thus, the results confirm that although the greedy algorithm cannot guarantee the optimal solution to problem (<ref>), it manages to find the best non-RTA transmission schedule in the vast majority of cases. For the second set of experiments, we place M=4 RTA STAs as shown in Fig. <ref> and measure the same metrics (see Fig. <ref>, dashed lines). Despite the higher number of RTA STAs, the RTA-aware schedulers still give a 40% gain in the quantile of delay and decrease the ratio of frames not served within 20 ms by almost 100 times. In this scenario, the greedy algorithm again shows results close to the brute-force one. The results show that the baseline scheduler does not allow serving RTAs such as immersive communications, which require 99.9% of packets to be served with delays below 20 ms <cit.>. At the same time, the developed scheduler handles this task without compromising the QoS for non-RTA traffic. § CONCLUSION In this letter, we tune PSR with multi-AP coordination and an RTA-aware scheduler for delay-tolerant traffic to serve several RTA STAs in overlapping Wi-Fi 8 networks. To develop the RTA-aware scheduler, we state an optimization problem to minimize the worst-case channel access delays and propose a greedy solution. Extensive simulation shows that, with our scheduler, the average throughput and the fairness index of throughput for non-RTA traffic do not change in comparison with the airtime fairness scheduler, while the 0.999-quantile of delay for RTA traffic is almost halved. Moreover, we show that our fast greedy algorithm gives a solution that differs from the optimal one by less than 2% in terms of the delay quantile.
http://arxiv.org/abs/2407.13058v1
20240718000719
ALICE FoCal overview
[ "Jonghan Park" ]
hep-ex
[ "hep-ex" ]
University of Tsukuba The Forward Calorimeter (FoCal) is a new sub-detector in ALICE to be installed during the LHC Long Shutdown 3 for LHC Run 4. It consists of a highly-granular Si+W electromagnetic calorimeter combined with a conventional metal-scintillator hadronic calorimeter, covering a pseudorapidity interval of 3.2<η<5.8. The FoCal is optimised to measure various physics quantities in the forward region, allowing exploration of the gluon density in hadronic matter down to x∼10^-6, thus providing insights into non-linear QCD evolution at the LHC. These proceedings introduce the FoCal physics program and its corresponding performance. Additionally, the performance of the FoCal prototype will be presented. ALICE FoCal overview Jonghan Park1jonghan@cern.ch for the ALICE Collaboration July 22, 2024 ============================================================ § INTRODUCTION The main scientific objective of the ALICE Forward Calorimeter (FoCal) is to study gluon saturation. In hadronic matter, the gluon density increases linearly with decreasing momentum fraction x, following a power law xg(x) ∼ x^-δ/2, where the exponent δ is determined by data fitting. However, at sufficiently small values of x, the gluon density becomes so high that partons start to overlap and recombine, causing a reduction in gluon density relative to linear projections <cit.>. Consequently, the growth of the gluon density cannot continue indefinitely as x decreases; instead, the gluon density eventually saturates due to gluon self-interaction, forming a new state of gluon-saturated matter. This state is characterised by a saturation momentum Q_ sat; gluons with momentum below Q_ sat experience saturation effects, while those with higher momentum follow linear QCD evolution. The saturation effects are more pronounced in heavy nuclei, as Q_ sat^2∼ A^1/3, where A represents the nuclear mass <cit.>. The FoCal will explore these novel phenomena through forward measurements of multiple electromagnetic and hadronic observables in hadronic pp and p–Pb collisions, as well as in ultra-peripheral p–Pb and Pb–Pb collisions. The data provided by FoCal will probe the partonic structure of hadronic matter and the nature of QCD evolution, reaching unprecedented momentum fraction down to x ∼ 10^-6 for small momentum transfer Q^2≈4, GeV/c <cit.>. FoCal will be installed during the LHC Long Shutdown 3 for data-taking in Run 4 (2029–2032). It will be positioned at 7 m from the interaction point of the ALICE detector, close to the beam pipe, enabling coverage of a very forward rapidity range, 3.2<η<5.8. FoCal consists of two parts: the electromagnetic calorimeter (FoCal-E), optimised for measuring direct photons, neutral and vector mesons, and the hadronic calorimeter (FoCal-H), designed for photon isolation and jet measurements. These proceedings report on the FoCal physics program and its performance. Additionally, the performance of the FoCal detector prototype will be presented. § FOCAL PHYSICS PERFORMANCE The FoCal physics performance is studied through simulated pp, p–Pb and Pb–Pb collision events, utilising PYTHIA8 <cit.> and HIJING <cit.>, along with an idealised FoCal geometry implemented in GEANT3 <cit.>. Additionally, the performance for measurements in Ultra-Peripheral Collisions (UPCs) is assessed using STARlight simulations <cit.>. 
These performance projections assume that the integrated luminosity ℒ_ int delivered during Run 4 will be 100 pb^-1 for pp collisions at √(s)=14 TeV, 300 nb^-1 for p–Pb collisions at √(s)=8.8 TeV split into p–Pb and Pb–p configurations, and 7 nb^-1 for Pb–Pb collisions at √(s)=5.5 TeV. This section will present several observables sensitive to the gluon saturation effect. - Direct photon measurements Direct photons are primarily produced at the parton interaction vertex via the Compton process qg→γ q at the LHC, providing access to the gluon density because they directly couple to incoming quarks and are not affected by final state effects. One of the key challenges in this measurement is distinguishing signal photons originating directly from the hard scattering process from decay photons. Enhancing the contribution of direct photons in FoCal can be achieved through three techniques: 1) isolation energy within a cone of a specified radius around the photon candidate in FoCal, 2) invariant mass of cluster pairs to reject photons from π^0 mesons (π^0→γγ), and 3) the long axis distribution of the shower shape ellipse originating from decay photons with small opening angles. Using these techniques, the signal fraction can be increased up to 72% at p_ T=14 GeV/c (by a factor approximately 11) <cit.>. The physics impact of direct photon measurements is assessed by constructing FoCal pseudo-data based on existing PDFs. Statistical and systematic uncertainties for the data were estimated from NLO calculations of production cross section. Figure <ref> shows the nuclear modification factor R_ pPb of inclusive direct photons at √(s_ NN)=8.8 TeV compared to QCD calculations at NLO using nPDFs without constraints from D^0 (gray), with re-weighting by D^0 data (blue) or FoCal pseudo-data (red). The theoretical prediction without the re-weighting procedure (gray) has PDF uncertainties of approximately 30% at p_ T=5 GeV/c. By applying the re-weighting procedure with FoCal pseudo-data, the PDF uncertainties are reduced by about 50%, demonstrating the potential of FoCal measurements to constrain global PDF fits. The inclusion of D^0 production at forward rapidity, measured by LHCb <cit.>, results in a notable reduction of PDF uncertainties. Although this reduction is significant, exploring the low-x phase-space requires a multi-messenger approach that includes a global analysis of all available data. Thus, incorporating FoCal direct photon data into global PDF fits will provide insights into factorisation and universality in nuclear exvironments. - Neutral meson measurements The FoCal can reconstruct neutral mesons (i.e. π^0, η, ω, etc.) decaying into photons or electrons using the electromagnetic showers in FoCal-E. The reconstruction performance was studied using simulated pp collision events that include the underlying event, which contributes to the combinatorial background. Thus, the reconstructed invariant mass distribution includes several components: signal, cluster splitting, and combinatorial background. The distribution is fitted using a cocktail: the signal component from the distribution of cluster pairs matching true photon pairs from π^0 decays, the cluster-splitting component from an invariant mass analysis of single-photon events where no signal contribution is expected. Three different approaches were tested for handling the combinatorial background: a polynomial function fit, event mixing, and rotational background method. 
The first two methods showed similar performance in describing the combinatorial background, but they did not accurately capture the distribution below π^0 peak (<0.1 GeV/c^2) due to unaccounted correlated backgrounds, such as photon pairs from the splitting of a single π^0 or secondary clusters from other π^0 photons <cit.>. In contrast, the rotation method demonstrated better performance, as shown in Fig. <ref> Figure <ref> shows an example of signal extraction using the rotation method, where the distribution is well described by both the signal component and rotational background, indicating that an additional template for cluster splitting is not necessary. After background subtraction, a clear signal distribution is fitted well by the Crystal Ball function. With the FoCal detector, the π^0 measurements will provide insights into gluon PDFs. - Jet measurements Forward jet measurements are sensitive to saturation effects at small-x, as there are three momentum scales involved in the process: Q_ sat, which characterises gluon-saturated matter at small-x, p_ T^ jet of the individual jets, and the momentum imbalance k_ T of the jet pair corresponding to the transverse momentum of the small-x gluons involved in the hard scattering. The performance of jet reconstruction is assessed using the anti-k_ T clustering algorithm <cit.> at both particle and detector levels. At the particle level, jet utilise particles within the FoCal acceptance, while at the detector level, jets use FoCal-E clusters and FoCal-H tower signals. Reconstruction performance is quantified by evaluating the relative energy and p_ T difference between particle-level and detector-level jets, defined as Δ E=E^ det-E^ part/E^ part,    Δ p_ T=p_ T^ det-p_ T^ part/p_ T^ part. Two key metrics, Jet Energy Scale (JES) and Jet Energy Resolution (JER), are evaluated using the mean and RMS of the Δ E and p_ T distributions. The JES and JER for jets with R=0.6 and centroid 4.0<η^ jet<4.9, calculated using Δ E as a function of the particle-level energy of the reconstructed jet, are shown in Fig. <ref> and Fig. <ref>, respectively. A negative JES value indicates that a fraction of the total jet energy escapes the jet reconstruction, resulting in a deficit in jet energy. The JES value decreases rapidly up to 600 GeV and remains relatively stable above this energy range. The JER is determined from Gaussian fits and numerical integration of the Δ E distributions separately. It remains below 15% for energies up to 3 TeV, although the value from the numerical integration exceeds 15% below 400 GeV. Further studies on the jets have been conducted, revealing potential improvements by accounting for biases in the neutral energy fraction of the jet population <cit.>. - Photon–hadron correlations Direct photon-hadron (γ_ dir–h) correlations in the forward region of pA collisions are sensitive to small-x gluon dynamics. The gluon dynamics at small-x are expected to modify the shape of the azimuthal distribution of γ_ dir–h correlations. The γ_ dir–h correlations can be determined from γ_ iso–h correlation by suppressing the non-γ_ dir–h correlation components. Thus, we performed a FoCal physics performance study for this channel by measuring the accuracy of the widths of γ_ iso–h correlation functions. Figure <ref> shows the raw isolated cluster–π^0 candidate correlation function at the detector level (red marker) and the pseudo-data for the raw correlation function (black marker) in pp collisions at √(s)=14 TeV. 
The pseudo-data are obtained from a fit of the raw isolated cluster–π^0 correlation function using a Gaussian function. In the raw correlation function, there is no peak in the near-side region, but rather a small dip around Δφ=0 due to the isolation cut. On the other hand, no dip behaviour appears in the pseudo-data since the fit does not account for the behaviour at Δφ=0. The fit function also does not describe the simulation for the highest trigger and associated p_ T bin, resulting in discrepancies between the simulation and pseudo-data as shown in the bottom-right panel of Fig. <ref>. The pseudo-data are refit using the aforementioned function, and the width of the correlation function and its uncertainties are extracted from the fit. Figure <ref> shows the width of the correlation function from the fit and its corresponding uncertainties in selected trigger and associated p_ T bins. The distributions become narrow for higher trigger and associated p_ T bins, reflecting the collimated recoil jet peak. According to the simulation study, the relative statistical error is less than 1% for the projected Run 4 integrated luminosity, ℒ_ int=100 pb^-1. - Vector meson photoproduction in ultra-peripheral collisions The photoproduction in ultra-peripheral collisions (UPC) extends the kinematic reach of current ALICE measurements and complement the EIC program <cit.>. The photoproduction cross sections of heavy vector mesons are sensitive to gluon dynamics as per LO pQCD calculations. The FoCal provides unique kinematic coverage, significantly enhancing measurements of J/ψ and ψ(2S) photoproduction cross sections in p–Pb collisions at center-of-mass energies W_γ p=2 TeV of the emitted photon and the proton projectile, extending up to 2 TeV. Figure <ref> shows the ratio of the ALICE data and NLO BFKL projection to the power-law used to fit the ALICE data. It demonstrates how FoCal's UPC measurements will reveal deviations from power-law growth at high energies, particularly when saturation occurs. Figure <ref> displays the invariant mass distribution for cluster pairs obtained from coherent J/ψ and ψ(2S) STARlight simulations <cit.>. The signal is extracted using a sum of double-sided Crystal Ball functions, clearly distingushing between the J/ψ and ψ(2S) states in FoCal. This illustrates successful measurements of J/ψ and ψ(2S) in ultra-peripheral Pb–Pb collisions. § FOCAL DESIGN CONCEPT As mentioned, FoCal consists of electromagnetic (FoCal-E) and hadronic (FoCal-H) components. FoCal-E is a Silicon(Si)+Tungsten(W) sampling calorimeter with fine lateral granularity readout. The W absorber material has a small Molière radius R_ M≈0.9 cm and radiation length X_ 0=3.5 mm, resulting in a total radiation length of approximately 20X_0. FoCal-E comprises 18 silicon pad layers and two silicon pixel layers. The pad sensor has a transverse cell size of 1 cm^2, which are read out by the High Granularity Calorimeter ReadOut Chip (HGCROC) <cit.>, allowing individual readout for channels in each layer. The HGCROC provides Analog to Digital Converter (ADC), Time Of Arrival (AOT) and Time Over Threshold (TOT). The ADC samples at 40 MHz with a configurable phase shift to match the LHC bunch collision timing. TOA measures signal arrival time relative to the interaction, facilitating TOT computation, which extends the dynamic range of the ADC. The pixel sensor, with a pixel size of approximately 30×30 μ m^2, is the ALICE Pixel Detector (ALPIDE) <cit.>, based on Monolithic Active Pixel Sensor (MAPS) technology. 
These sensors are located at the 5th and 10th layer. The readout chain for the pixel layers is similar to ITS2, but with modifications: the ALPIDEs are bonded in multi-chip strings using the SpTAP (Single-point Tape Automated Bonding) technique <cit.>, instead of wire bonding used in ITS2. The readout rates for the pixel layers are 1.2 Gbps and 400 Mbps for the Inner Barrel (IB) and Outer Barrel (OB), respectively. The FoCal-E module is followed by FoCal-H, a conventional hadronic sampling calorimeter with an effective nuclear interaction length of approximately ∼5λ_ int. FoCal-H lacks longitudinal segmentation and will be constructed using 2.5 mm outer diameter Cu capillary tubes filled with plastic scintillating fibers. The readout for FoCal-H utilise the H2GCROC (HGCROC for silicon photomultipliers(SiPMs)) in conjunction with SiPMs. Most functionalities of the H2GCROC are similar to those of the HGCROC used for the FoCal-E pads. The main difference lies in the use of a current conveyor in the analog input to attenuate the signal with a programmable gain. Additionally, the chip can adjust the bias voltage individually for each channel, tailoring it to the specific SiPMs to ensure a more uniform response. § PERFORMANCE OF THE FOCAL PROTOTYPE The performance of a full-length prototype of the FoCal detector has been studied through extensive test beam experiments at CERN PS and SPS from 2021 to 2023. The data were collected using hadron beams with energies up to 350 GeV and electron beams with energies up to 300 GeV. The performance of FoCal-E was studied by analysing the pad response to minimum ionising particles and quantifying the transverse shower width for shower separation in FoCal-E pixels. The performance of FoCal-H is studied by analysing the response to hadron beams and by comparing the data to simulation. To evaluate the performance of the FoCal-E pads, the linearity and resolution were assessed using electron beams. The energy response to electrons was calibrated by summing the charge signals from all active pad layers. The distributions of summed pad signals were fitted with Gaussian curves, and the mean and width of these distributions were extracted from the fit parameters. The mean value as a function of electron energy was described by a linear fit, Q(E)=q× E+Q_0, with the data showing less than 5% deviation from the fit <cit.>. The relative resolution is defined as r(E)=σ_Q(E)/(Q(E)-Q_0) where Q_0 is obtained from the linear fit. Figure <ref> presents the relative energy resolution of the FoCal-E pad layers, compared with simulations. Both experimental and simulated resolution values agree within uncertainties for energies above 80 GeV, and below 3% for 100 GeV, meeting the physics requirement of approximately 5%. The FoCal-E pixel layers enable shower separation, and their performance was evaluated using particle showers at sub-millimeter scales. As depicted in Fig. <ref>, the hit density distribution from a two-electron shower event with an energy of 300 GeV exhibits clear separation over approximately 1 cm. The distribution was projected onto a lateral axis, and the Full Width Half Maximum (FWHM) was measured to analyse the shower profile. The FWHM measures approximately 1.2ṁm (2.4 mm) for an electron energy of 20 GeV, reducing to 0.8 mm (1.2 mm) for an electron energy of 300 GeV in layer 5 (layer 10) <cit.>. Comparison with simulations show the measured data and simulations are within 0.5 mm of each other. 
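As a rough illustration of the calibration chain described above — Gaussian fits of the summed pad charge per beam energy, the linear response fit Q(E) = q·E + Q_0, and the relative resolution r(E) = σ_Q(E)/(Q(E) − Q_0) — a NumPy/SciPy sketch is given below. The data layout and fitting tools are our own assumptions; the actual test-beam analysis is documented in the cited prototype paper.

```python
# Hypothetical sketch of the FoCal-E pad energy calibration and resolution extraction.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def gaussian_mean_sigma(charge_sums):
    """Fit the per-event summed pad charge at one beam energy with a Gaussian."""
    mu, sigma = norm.fit(np.asarray(charge_sums, dtype=float))
    return mu, sigma

def calibrate(energies_gev, charge_sums_per_energy):
    """Linear response Q(E) = q*E + Q0 and relative resolution r(E) = sigma_Q / (Q - Q0)."""
    fits = [gaussian_mean_sigma(c) for c in charge_sums_per_energy]
    mus = np.array([f[0] for f in fits])
    sigmas = np.array([f[1] for f in fits])
    energies = np.asarray(energies_gev, dtype=float)

    def line(E, q, Q0):
        return q * E + Q0

    (q, Q0), _ = curve_fit(line, energies, mus)
    linearity_dev = (mus - line(energies, q, Q0)) / mus  # reported to stay below ~5%
    resolution = sigmas / (mus - Q0)                     # reported below 3% at 100 GeV
    return q, Q0, linearity_dev, resolution
```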
The performance of FoCal-H was assessed by analysing the ADC signal sum distribution with a beam energy scan up to 350 GeV. The ADC distribution was fitted using a Gaussian curve to extract the mean and width, which characterise the detector response. Figure <ref> shows the energy resolution of FoCal-H as a function of beam energy. The resolution remains below 20% across the entire energy range, with discrepancies from simulations less than 5%. § SUMMARY AND OUTLOOK FoCal aims to study gluon saturation. The capabilities of FoCal were verified through years of physics simulations and test beam experiments. These proceedings present projections of the performance for γ_ dir, π^0 mesons, jets, γ-h correlations, and J/ψ in UPCs. Regarding detector performance, a comprehensive study of the detector prototypes was conducted using test beam experiments at CERN PS and SPS. FoCal is a project within ALICE, approved by the LHCC in March 2024, and is expected to be installed at the beginning of 2028. Several institutes involved in the project plan to proceed with mass production and module assembly from this year to mid-2027, and the detector installation is scheduled in the early 2028. Through commissioning in 2028, the FoCal will begin data-taking from 2029. The FoCal data will allow us to explore an unprecedented region of momentum fraction, down to x∼10^-6. 99 Armesto_2006 Armesto, Néstor, Nuclear shadowing. J. Phys. G 32, R367-R394 (2006). <http://dx.doi.org/10.1088/0954-3899/32/11/R01> Gelis_2010 F. Gelis et al, The Color Glass Condensate. Ann. Rev. Nucl. Part. Sci. 60, 463–489 (2010). <http://dx.doi.org/10.1088/0954-3899/32/11/R01> loi ALICE Collaboration, Letter of Intent: A Forward Calorimeter (FoCal) in the ALICE experiment. CERN-LHCC-2020-009, LHCC-I-036 (2020). <https://cds.cern.ch/record/2719928> ALICE:2023fov ALICE Collaboration, Physics of the ALICE Forward Calorimeter upgrade. ALICE-PUBLIC-2023-001 (2023). <https://inspirehep.net/literature/2661418> its2tdr ALICE Collaboration, Technical Design Report for the Upgrade of the ALICE Inner Tracking System. J. Phys. G 41, 087002 (2014). <https://inspirehep.net/literature/1305021> pythia8 C. Bierlich et al, A comprehensive guide to the physics and usage of PYTHIA 8.3. arXiv:2203.11601 [hep-ph]. <https://doi.org/10.48550/arXiv.2203.11601> hijing X. Wang and M. Gyulassy, hijing: A Monte Carlo model for multiple jet production in pp, pA, and AA collisions. Phys. Rev. D 44, 3501 (1991). <https://doi.org/Article-DOI-number> geant B., René et al, GEANT: Detector Description and Simulation Tool. doi:10.17181/CERN.MUHF.DMJ1. <https://cds.cern.ch/record/1082634?ln=en> starlight P. Aurenche et al, A critical phenomenological study of inclusive photon production in hadronic collisions. Eur. Phys. J. 9, 107-119 (1999). <https://doi.org/Article-DOI-number> focalphysicsperformance ALICE Collaboration, Physics performance of the ALICE Forward Calorimeter upgrade. ALICE-PUBLIC-2023-004 (2023). <https://cds.cern.ch/record/2869141?ln=en> D0LHCb LHCb Collaboration, Study of prompt D^0 meson production in pPb collsions at √(s_ NN)=5 TeV. JHEP. 10, 090 (2017). <http://dx.doi.org/10.1007/JHEP10(2017)090> antikt M. Cacciari et al, The anti-k_t jet clustering algorithm. JHEP. 04, 063 (2008). <https://doi.org/10.1088/1126-6708/2008/04/063> eic R. Abdul Khalek et al, Science Requirements and Detector Concepts for the Electron-Ion Collider. Nucl. Phys. A. 1026, 122447 (2022). <http://dx.doi.org/10.1016/j.nuclphysa.2022.122447> upc A. 
Bylinkin et al, Vector meson photoproduction in UPCs with FoCal. J. Phys. G: Nucl. Part. Phys. 50, 055105 (2023). <http://dx.doi.org/10.1088/1361-6471/acc419> hgcroc F. Bouyjou et al, HGCROC3: the front-end readout ASIC for the CMS High Granularity Calorimeter. JINST. 17, C03015 (2022). <https://dx.doi.org/10.1088/1748-0221/17/03/C03015> alpide ALICE Collaboration, The ALPIDE pixel sensor chip for the upgrade of the ALICE Inner Tracking System. Nucl. Instrum. Meth. A. 845, 583-587 (2017). <https://dx.doi.org/10.1088/1748-0221/17/03/C03015> focaltdr ALICE Collaboration, Technical Design Report of the ALICE Forward Calorimeter (FoCal). CERN-LHCC-2024-004 ; ALICE-TDR-022. <https://cds.cern.ch/record/2890281?ln=en> focaltb M. Aehle et al, Performance of the electromagnetic and hadronic prototype segments of the ALICE Forward Calorimeter. arXiv:2311.07413 (2023). <https://arxiv.org/abs/2311.07413>
http://arxiv.org/abs/2407.12563v1
20240717134717
Audio Conditioning for Music Generation via Discrete Bottleneck Features
[ "Simon Rouard", "Yossi Adi", "Jade Copet", "Axel Roebel", "Alexandre Défossez" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Audio Conditioning for Music Generation via Discrete Bottleneck Features Simon Rouard, Yossi Adi, Jade Copet, Axel Roebel, Alexandre Défossez ================================================================================================== § ABSTRACT While most music generation models use textual or parametric conditioning (e.g. tempo, harmony, musical genre), we propose to condition a language-model-based music generation system with audio input. Our exploration involves two distinct strategies. The first strategy, termed textual inversion, leverages a pre-trained text-to-music model to map audio input to corresponding "pseudowords" in the textual embedding space. For the second model, we train a music language model from scratch jointly with a text conditioner and a quantized audio feature extractor. At inference time, we can mix textual and audio conditioning and balance them thanks to a novel double classifier free guidance method. We conduct automatic and human studies that validate our approach. We will release the code, and we provide music samples on https://musicgenstyle.github.io in order to show the quality of our model. § INTRODUCTION In the field of music generation, prior research has predominantly focused on producing brief musical segments <cit.> or on MIDI generation <cit.>, while generating long and coherent waveforms (around 30 seconds) has only recently been tackled <cit.>. Specifically, most of these recent models have been designed to perform text-to-music generation, providing a fascinating tool for creators. Other types of high-level conditioning have been used in previous work, such as tempo and harmony <cit.>. For lower-level and aligned conditioning, the authors of <cit.> use melody, while <cit.> uses chords, piano rolls, or the drum stem. However, music is hard to describe textually, and the scarcity of text-music pair datasets makes it challenging to generate music in the style of a specific artist or song, since the artist is probably not represented in the training dataset. A common use case would thus be to generate music in the style of a reference segment. This gives more control to the user since they do not have to find a textual prompt that describes the music they want to generate. In the computer vision domain, the authors of <cit.> introduced textual inversion to extract visual concepts that can then be used to generate new images with a text-to-image model. Given a few images (3-5) of a concept or object, one sets them as outputs of a frozen text-to-image model with a randomly initialized learnable text embedding. Backpropagating the generative model loss into the text embedding allows learning new "pseudowords" in the textual embedding space of the model that match the common concept depicted in the images. One can then compose this learnt pseudoword S^* in a textual prompt to generate an image of the learnt concept (for instance "a painting of S^* in the style of Picasso"). We first adapted this method by using the text-to-music model MusicGen <cit.>, using crops of a song to depict a concept, and optimizing the cross-entropy loss of the music language model. This approach does not need to retrain a model from scratch. However, its inference is very slow since it requires hundreds of optimization steps of the textual prompt, including gradient computation through the language model, before generating music. 
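The textual-inversion recipe sketched above (freeze the language model, backpropagate its cross-entropy loss into a learnable text embedding) can be written schematically as follows. The interfaces `lm.forward_logits`, `lm.text_embedding_dim` and `encode_audio_tokens` are placeholders rather than MusicGen's actual API, and the default hyperparameters simply mirror the values reported later in the experimental section.

```python
# Hedged sketch of textual inversion for an autoregressive text-to-music model;
# model interfaces are placeholders, only the overall recipe follows the text.
import torch
import torch.nn.functional as F

def textual_inversion(lm, song_chunks, n_tokens=12, steps=200, lr=2.5e-2):
    """Learn a pseudoword embedding c so the frozen LM assigns high likelihood to the song."""
    lm.eval()
    for p in lm.parameters():
        p.requires_grad_(False)                       # the language model stays frozen

    c = torch.randn(n_tokens, lm.text_embedding_dim, requires_grad=True)
    opt = torch.optim.Adam([c], lr=lr)

    for step in range(steps):
        chunk = song_chunks[step % len(song_chunks)]  # crops sampled from the reference song
        tokens = encode_audio_tokens(chunk)           # [batch, time] discrete audio tokens
        logits = lm.forward_logits(audio_tokens=tokens, text_embedding=c)
        # next-token cross-entropy; gradients flow only into the text embedding c
        loss = F.cross_entropy(logits[:, :-1].reshape(-1, logits.shape[-1]),
                               tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return c                                          # the learnt "pseudoword" conditioning
```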
To tackle this issue, we present another method where we design a style conditioner module that we jointly train with a text-to-music MusicGen model <cit.>. This style conditioner takes a few seconds of audio and extracts features out of it. As a result this new model can generate music using two modalities as input: waveforms and textual descriptions. Our conditioning is high level even if it can retain some lower level content such as melodic patterns or rhythm. Designing this style conditioner is challenging as we need to extract enough features to have a meaningful conditioning but not too much, to prevent the generative model to copy and loop the conditioning audio. We thus need to introduce and tune information bottlenecks in our conditioning module. Our contributions are the following: 1) We adapt the textual inversion method of <cit.> to a pretrained text-to-music MusicGen model. This allows to perform audio conditioning for music generation without training a model from scratch. 2) We present our style conditioner method which is based on a frozen audio feature extractor (Encodec <cit.>, MERT <cit.> or MusicFM <cit.>) followed by a transformer encoder <cit.>, Residual Vector Quantizer (RVQ) <cit.> and temporal downsampling. The number of residual streams used by RVQ is adjustable at inference time which gives the user the ability to change the strength of the style conditioning. To our knowledge, we are the first to explore this approach for music generation. 3) Since the model is trained with both textual and audio conditioning inputs, we can combine both to generate music. However, audio contains much more information, so that text is ignored by the model at inference. We propose to balance them with a new double classifier free guidance <cit.> which is a general method for merging conditions with various degrees of information. 4) We introduce novel objective metrics for style conditioning, based on nearest neighbors search in the latent space, validated with human evaluations. We compare our method to baselines which are: a MusicGen trained with CLAP embeddings <cit.> as conditioning, a text-to-music MusicGen used with text prompts, and a MusicGen model without conditioning used in continuation mode. We perform as well some ablation studies in order to justify the architecture of our style encoder. Based on results, we show the practicality of our methods and the musical quality of the generated music. § RELATED WORK §.§ Generative models for music Music generation models can be categorized into two types: autoregressive models and non autoregressive ones. Autoregressive ones are motivated by the successful work done in natural language modeling. Recent successful models use a compression model taking the form of a multi stream quantized autoencoder <cit.> in order to convert audio into K parallel discrete streams of tokens. The K streams are obtained by performing Residual Vector Quantization (RVQ) <cit.> on the latent space of an autoencoder, making the first stream contain coarse information and following ones refine the approximation of the latent space. Then, an autoregressive transformer <cit.> is used to model these audio tokens. MusicLM <cit.> and MusicGen <cit.> are built on this principle. MusicLM uses a multi-stage approach with different models to predict the K streams, while MusicGen models them in parallel using a delay pattern <cit.>. 
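For concreteness, the delay pattern mentioned above can be illustrated as follows: each of the K codebook streams is shifted by its index so that a single autoregressive pass predicts all streams jointly. This is a schematic reconstruction of the interleaving idea; the padding convention and array layout are our own choices, not MusicGen's exact implementation.

```python
# Schematic delay pattern: codebook k is shifted right by k frames; PAD marks empty slots.
import numpy as np

PAD = -1

def apply_delay(codes: np.ndarray) -> np.ndarray:
    """codes: [K, T] token grid -> delayed grid of shape [K, T + K - 1]."""
    K, T = codes.shape
    out = np.full((K, T + K - 1), PAD, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]
    return out

def remove_delay(delayed: np.ndarray, K: int, T: int) -> np.ndarray:
    """Invert the pattern after generation."""
    return np.stack([delayed[k, k:k + T] for k in range(K)])

# Round-trip check with K=4 streams and T=6 frames.
codes = np.arange(24).reshape(4, 6)
assert np.array_equal(remove_delay(apply_delay(codes), 4, 6), codes)
```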
Non-autoregressive models such as AudioLDM2 <cit.>, MusicLDM <cit.>, and Stable Audio <cit.>, are latent diffusion models operating in the latent space of a continuous variational autoencoder. Some other models use cascaded diffusion such as Noise2Music <cit.> to progressively increase the sampling rate of the audio. Moûsai <cit.> uses a first diffusion model to compress the music and a second one to generate music from this representation and textual descriptions. MusTango <cit.> uses a latent diffusion model conditioned on textual description, chord, beat, tempo and key. Jen-1 <cit.> combines a diffusion model and a masked autoencoder trained with multi-tasks objectives. It can perform music generation, continuation and inpainting. A second version <cit.> uses source separation<cit.> over their dataset to allow the user to generate and edit music stem by stem. VampNet <cit.> is a masked modeling approach to music synthesis that uses masking at training and inference time in order to generate discrete audio tokens. MAGNeT <cit.> is based on the same masking principle. It can also combine autoregressive and masking to reach the same quality as the autoregressive baseline (MusicGen) but with a 7x faster inference. In MeLoDy <cit.>, a language model is used to model coarse semantic tokens and a dual path diffusion model is then used for acoustic modeling. The authors claim faster than real time generation. §.§ Jointly trained conditioners for music generative models Regarding the conditioning, most of the models focused on text-to-music <cit.>. Since pairs of text-music data are rare, most models use a pre-trained contrastive text-music model such as CLAP <cit.> or MuLan <cit.>, to condition their text-to-music models. Then, massive amount of non-annotated audio data can be used at training time and text is used at inference time. However, these text-to-music models never exploit the fact that audio can be used as conditioning. For other types of conditioning, MusTango <cit.> is trained with text, beat tempo, key and chords as conditioning, StableAudio <cit.> takes timing embeddings to control the length and structure of the generated music. Some models generate stems while being conditioned on other stems. For instance, SingSong <cit.> generates musical accompaniments from singing and Jen-1 Composer <cit.> handles multi-track music generation on 4 different stems (bass, drums, instrument and melody). MusicGen <cit.> and Music ControlNet <cit.> can handle melody as conditioning and the latter can also use dynamics and rhythm. Both papers use chromagrams extraction for melody conditioning. §.§ Conditioning a pretrained generative model With finetuning: In Coco-Mulla <cit.>, the authors use parameter-efficient fine-tuning (PEFT) to specialize a text-to-music MusicGen model on chords and rhythm. They finetune on a number of parameter that is 4% the amount of parameters of the original network with only 300 songs. Music ControlNet <cit.> is a finetuned text-to-music diffusion model that operates in the spectral domain. The finetuning strategy comes from the text-to-image method ControlNet <cit.> and allows to handle melody, dynamics and rhythm conditioning. The pixel-level control that allows ControlNet on images gives a pixel-level control on the mel-spectrogram. Without finetuning: In <cit.>, the authors use AudioLDM <cit.> as a backbone to perform textual inversion <cit.>. For each textual inversion they use a group of 5 excerpts of 10 seconds. 
They also try an experiment where they optimize the pseudoword S^* as well as the diffusion neural network which gives them better results. In <cit.>, the authors use a diffusion model trained on musical data with no conditioning and perform various interactive tasks at inference which are infilling, continuation, transition (smooth a transition between two songs) and guidance. The one that is the most similar to our audio conditioning is the guidance where the diffusion model is guided by the PaSST classifier <cit.> embedding of an audio prompt. However the model only generates 5 seconds excerpts of music. Some other papers involve new control with no finetuning such as in DITTO <cit.> where the authors use a pre-trained text-to-music diffusion model and control its inference by optimizing the initial noise latent. They apply this to several applications that are inpainting, outpainting, looping, intensity, melody and musical structure control. § TEXTUAL INVERSION METHOD We first present our textual inversion method in the case of autoregressive modeling (see Fig. <ref>). It is based on previous work in the image domain <cit.> with diffusion models. Autoregressive modeling aims to estimate the conditional distribution of the next token y_t given the preceding tokens y_<t and a conditioning context c, such as a textual embedding. In the framework of transformer decoder neural networks parameterized by θ, denoted as p_θ, this conditional distribution is typically modeled as a product of individual probabilities: p_θ(y_1:T | c) = ∏_t=1^T p_θ(y_t | y_<t, c) Here, y_1:T represents the sequence of tokens, and p_θ(y_t | y_<t, c) denotes the probability of observing token y_t given the preceding tokens and the conditioning context. During training, with a given sequence y_1:T and its associated textual description c, we compute the cross-entropy loss: ℒ_CE(θ, y_1:T, c) = - ∑_t=1^Tlog p_θ(y_t | y_<t, c) It is minimized by taking a gradient descent step on ∇_θℒ_CE(θ, y_1:T, c). This loss quantifies the dissimilarity between the predicted conditional distribution and the true distribution of the next token, serving as the optimization objective for training autoregressive models. For the textual inversion method, we take a pretrained text-to-music MusicGen for the transformer decoder. We initialize the textual embedding (for instance with the textual embedding of the word "music") c. Given a song Y, we cut it into random chunks {y_1:T^i}_i and optimize the textual embedding c by taking successive gradient steps on ∇_cℒ_CE(θ, y_1:T^i, c). After a few hundreds iterations the learnt c is fed into the text-to-music MusicGen model to generate a song in the style of Y. § STYLE CONDITIONING METHOD §.§ General Architecture The general architecture, depicted on the left of Fig. <ref>, is based on the text-to-music model MusicGen <cit.> with the addition of a style conditioner that is jointly trained with the language model. At train time, a 30 seconds music excerpt paired with a textual description is input to the model. The textual description is fed into a frozen T5 tokenizer and transformer encoder <cit.>. The style encoder takes a random subsample (between 1.5 and 4.5 seconds) of the input audio and encodes it. The text and style latent representations are both projected with linear layers to have the same dimension as the transformer language model, and provided as prefix to the sequence to model. 
The input audio is encoded by a pretrained EnCodec <cit.> model and the language model is trained in a autoregressive manner with a cross-entropy loss. In addition, the tokens that correspond to the random subsample fed into the style encoder are masked in the loss, as we noticed this reduces the tendency of the model to just copy the style audio input. At inference time, we can use text or/and a short excerpt of music as a conditioning to generate music. §.§ Architecture of the Style Conditioner Our style conditioner is designed with bottlenecks (RVQ <cit.> and downsampling) to prevent transmitting all the information of the conditioning audio excerpt to the model. Without these bottlenecks, the generative models retrieves easily the excerpt and copies it (see the ablation study in Sec. <ref>). The style conditioner depicted on the right of Fig. <ref> takes an audio input of length 1.5 to 4.5 seconds, passes it through a frozen feature extractor followed by a trainable transformer encoder and a residual vector quantization (RVQ) module with 6 codebooks. After quantization, we downsample on the temporal axis to obtain a conditioning with a 5Hz frame rate which gives a similar length as a text description (8 to 25 tokens). Finally a linear layer outputs the same dimension as the language model. The candidates for the audio encoder are a Encodec followed by trainable embeddings for each codebook that are summed, a transformer based music foundation model from <cit.> (we now name it MusicFM for the rest of the paper) where the authors claim state of the art on several downstream tasks specific to music information retrieval and a MERT model <cit.>, a transformer based music model trained in a self-supervised manner. The first one has a frame rate of 50Hz and 60M parameters, the second one has a frame rate of 25Hz and 620M parameters and the third one has a frame rate of 75Hz and 95M parameters At training time, we use dropout on the conditioning, keeping both conditions 25% of time, one of the two conditions 25% of time for each (no text or no style) or no condition 25% of time. There is also a dropout on the number of the codebooks used by the RVQ of the style conditioner: at each step of the training, the number of used codebooks is uniformly sampled between 1 and 6. Then, at inference time, we can control the bottleneck of the style conditioner. Setting the number of codebooks to 1 gives more flexibility to the generative model whereas using 6 levels of quantization constraints it more. In practice, this means that music generated with 6 streams of quantization will sound more similar to the input condition compared to music generated with 1 stream of quantization. §.§ Double Classifier Free Guidance When doing next token prediction, let’s denote l_style, text the logits of the model conditioned on style and textual description. Classifier free guidance <cit.> consists of pushing the logits in the direction predicted with the conditioning, to increase its importance: l_CFG = l_∅ + α (l_style, text - l_∅), with α>1, typically, α=3 is used in previous work <cit.>. When generating music with a textual description that contradicts the audio of the style conditioning, we observe that the description is ignored by the model. This is explained by the fact that audio is more informative conditioning compared with the text, so that the model weights it more during training. 
To counteract this effect, we introduce a double classifier free guidance in which we iterate the CFG formula: we first push from style only towards style and text, and we then push a second time from the unconditional logits towards the result. l_double CFG = l_∅ + α [l_style + β(l_text, style - l_style) - l_∅] We retrieve the simple CFG with β = 1. For β > 1, we boost the importance of the text conditioning (see Sec. <ref>). §.§ Objective Metrics The difficulty with generating samples in the same style as a song is that we want to generate something that is similar enough but not too close. This is something that can be subjectively evaluated. To ease the comparison of various approaches and hyperparameters, we also introduce a novel set of objective metrics. Nearest Neighbours in Common: Let us denote by x_C ∈ℝ^D × T (D=1 for mono music) the audio that we input to the style conditioner and by x_G ∈ℝ^D × T' the generated sequence. We use an encoder E: ℝ^D × T→ℝ^N which outputs a single vector whatever the input length T is. In practice, this is done by taking a MusicFM model and averaging on the time dimension. Then, for each song of our valid and test sets, we cut it into chunks of 30 seconds and store the embeddings {E_i,j}, i being the index of the song and j the chunk number. For E_C = E(x_C), we compute the cosine similarities cos(E_C, E_i,j), ∀ i, j and the set of its K nearest neighbors: {i_1^C, ..., i_K^C}. We do the same for E_G = E(x_G) and obtain a set of K values {i_1^G, ..., i_K^G}. We have then found the nearest songs in the dataset. We define our metric KNN_common(x_C, x_G) for a song x_G that has been generated after being conditioned by x_C: KNN_common(x_C, x_G) = |{i_1^C, ..., i_K^C}∩{i_1^G, ..., i_K^G}|/K ∈ [0, 1]. The intuition behind this metric is that a model performs well at recreating a song in the style of another if the generated song and its conditioning audio have embeddings that are close enough to share neighbors in the dataset. However, if a model copies the conditioning (i.e. x_G ≈ x_C) the metric will tend to 1; we thus need a second metric to avoid x_G and x_C being too similar. G is the Nearest Neighbor of C: We want E_G and E_C to be close while being different. One way to be sure that the corresponding audios are not too similar is to check that, if we add E_G to the set of embeddings {E_i,j}, E_G is not the nearest neighbor of E_C. Ensuring that another song from the dataset is closer to the conditioning means that the model is creative enough and not just copying its input. Formally, denoting E_∪ = {E_i,j}∪{E_G}, we define KNN_overfit(x_C, x_G) = 1 if argmax_E ∈ E_∪ cos(E_C, E) = E_G, and KNN_overfit(x_C, x_G) = 0 otherwise. For our evaluations, we take 1000 samples of 3 seconds x_C from our test set, generate the corresponding x_G and average the two KNN metrics. Intuitively, the two metrics are positively correlated, but for a similar value of KNN_common we will favor the model that minimizes KNN_overfit. Other Objective Metrics: To evaluate the quality of the generated music, we also use the official implementation of the Fréchet Audio Distance defined in <cit.>, which uses a VGGish model, and the KL-divergence based metric introduced in <cit.>, which computes the KL-divergence between the label probabilities of a pretrained audio classifier for the conditioning and the generated music. We noticed that a high FAD (> 2) often indicates a poor quality of the generated samples. 
The CLAP score <cit.> computes the cosine similarity between the description and the audio embeddings obtained with the CLAP model. A higher score indicates that the generated audio aligns well with the textual description of the conditioning audio. §.§ Human studies metrics We follow a similar protocol as in <cit.> for the human studies. We ask human raters to evaluate three different aspects of the generated audio: (1) How would you rate the overall quality of this excerpt [OVL]? (2) Without considering audio quality, how similar are these two excerpts in terms of style [SIM]? (3) Without considering audio quality, how likely do you think these two excerpts are from the same song [VAR]? We believe that the SIM and VAR scores are the subjective versions of KNN_common and KNN_overfit. § EXPERIMENTAL RESULTS §.§ Hyperparameters for the textual inversion For the textual inversion method we test different parameters sets and retain these ones: we use a 12 tokens sentence for initialization, a batch size of 8 with 5 seconds segments randomly sampled from a 30 second excerpt with 200 optimization steps, a learning rate of 0.025 with a vanilla Adam optimizer. Finally the main issue that we encounter with this method is its instability. It is hard to find a set of hyperparameters that works well for any song. Some songs seem to be easier to invert for different sets of hyperparameters. For some song, we never achieve to obtain hearable music as the result suffers from glitches, and tempo instabilities. Finally, it seems beneficial to augment the length of the text embedding, as well as performing the inversion over chunks taken from a 30 seconds excerpt rather than the entire song. The results are shown in Tab. <ref>. §.§ Hyperparameters for the style conditioner All the models that we train are medium size (1.5B parameters) MusicGen models built on top of the 4 stream 32kHz music version of EnCodec <cit.>. All models are trained for 400K steps on 64 V100 GPUs with the AdamW optimizer using β_1=0.9, β_2=0.95, a batch size of 192, and music sequences of 30 seconds. For the style conditioner, excerpts between 1.5 and 4.5 seconds are subsampled from the original sequence, the transformer encoder has 8 layers, 8 heads, a dimension of 512 and is non-causal, the residual vector quantizer has a codebook size of 1024, 6 streams and a variable number of streams is sampled at each training step, hence allowing the language model to train on all the levels of quantization. The style tokens are downsampled to 5Hz. All our evaluations are done on 1000 samples of the test set. Similarly to the MusicGen Melody model, both the textual description and the style condition are provided as prefix to the language model. §.§ Datasets We use 20K hours of licensed music as in <cit.>. The training dataset is composed of 25K and 365K songs from the ShutterStock and Pond5 music data collections, as well as 10k tracks of an internal dataset. Each song comes with textual description, and is downsampled to 32kHz mono. §.§ Comparison with baselines and model selection Apart from the closed-source model udio <cit.>, there is no other audio conditioned music generative model. We use as a baseline a MusicGen model in the continuation setting: given 3 seconds of music, we ask MusicGen to continue the music with no textual prompt. For the second one we train a MusicGen model with a pretrained CLAP audio encoder <cit.> as conditioning, also taking 3 seconds of audio as input. In Tab. 
<ref>, we compare these two baselines with our model with the EnCodec feature extractor for the style conditioner, a quantization level of 2 and with a textual inversion model. We notice that the FAD correlates well with the quality metric (OVL) since the textual inversion model has the worst OVL and FAD scores. Thus excluding this approach, we observe that the KNN_common and the SIM metrics ranks the models in the same orders as well as the KNN_overfit and VAR metrics. Regarding the baselines, the textual inversion method provides results of poor quality (FAD). The continuation method provides music that has a high similarity to the conditioning (high KNN_common and SIM) but that is too similar to it (high KNN_overfit and VAR). However, the CLAP conditioning captures a more vague style of the conditioning and generates music that is too far from it (low KNN_common, KNN_overfit, SIM and VAR). Our model with the EnCodec feature extractor and 2 levels of quantization strikes the right balance between these two baselines. In order to strengthen our claim that our KNN metrics correlates well with human perception of closeness between musical excerpts, we showcase a second study in Tab. <ref>. In this study we compare the metrics of the MERT feature extractor with 3 quantization levels 1, 2, 4 (we recall that the models can go up to 6) as well as the EnCodec and MusicFM feature extractors with a quantization level of 2. All models generate music of similar quality (FAD and OVL). We notice that when the bottleneck is larger (i.e. when the quantization level is higher), the KNN_common augments. This follows the intuition that if the conditioner transmits more information to the language model, the generated music will be closer to the input condition. The models follows similar orders for KNN_common and SIM as well as for KNN_overfit and VAR. §.§ Ablation Study We perform an ablation study in Tab. <ref> on the components of the style conditioner with MERT as a feature extractor, and 4 RVQ streams. When reducing the size of the transformer encoder from 8 layers and 512 dimensions to 4 layers and 256 dimensions, the quality of the generated audio is worse. When removing the transformer encoder, the model generates audio that is far from music (high FAD). When we don't mask the music that is input to the style conditioner in the cross-entropy loss at training time, the audio quality is slightly worse and the model generates music that is too close to the conditioning and tends to loop. The very high KNN_overfit indicates it since for a KNN_common lower than the best model the KNN_overfit is twice its value. §.§ Tuning the Classifier Free Guidance When style and text conditioning are both used and are not consistent, it is necessary to use double CFG instead of simple CFG so that the text is not ignored. To tune the parameters α, β of the double classifier free guidance given by (<ref>), we rely on the following protocol. For 1000 samples of our test set, we randomly shuffle text descriptions and generate music while conditioning both on text and music. We track the FAD <cit.>, the KNN_common and the CLAP score. In Tab <ref> we observe the intuitive fact that the KNN_common and CLAP score are negatively correlated: if the balancing favors the text condition the CLAP score is higher, if it favors the audio condition the KNN_common is higher. The double CFG thus works as expected. 
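The guidance formulas used throughout this section can be transcribed directly into code; the default α = 3 follows the value quoted earlier, while β below is illustrative and would be set by the tuning procedure above.

```python
# Direct transcription of the simple and double classifier-free guidance formulas
# (the l_* arguments are next-token logit tensors of identical shape).
def cfg(l_uncond, l_cond, alpha=3.0):
    """Simple CFG: push from the unconditional logits towards the conditional ones."""
    return l_uncond + alpha * (l_cond - l_uncond)

def double_cfg(l_uncond, l_style, l_text_style, alpha=3.0, beta=2.0):
    """Double CFG: first push from style-only towards style+text (weight beta),
    then push the result away from the unconditional logits (weight alpha).
    beta = 1 recovers the simple CFG; beta > 1 boosts the text conditioning."""
    return l_uncond + alpha * (l_style + beta * (l_text_style - l_style) - l_uncond)
```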
§ CONCLUSION In this paper we introduced style conditioning for language-model-based music generative models: given a few seconds of a musical excerpt, one can generate music in the same style using our proposed audio encoder with an information bottleneck. We introduced new metrics to assess the balance between generating music that maintains a style similar to the condition and music that is sufficiently different from it. We validated these metrics with human studies. Finally, we can also combine this style conditioning with an inconsistent textual description and balance the two thanks to a new double classifier-free guidance method. This method could be applied in other generative models with multiple conditions. Ethical statement: Improving music generation brings ethical challenges. Through carefully chosen bottlenecks in our style extractor (RVQ, downsampling), we aim for the right balance between increasing model controllability and enabling creative use while ensuring the model does not copy existing works, and we provide new metrics to measure this. Finally, we only used music we licensed.
http://arxiv.org/abs/2407.12286v1
20240717031522
Narrowband, Fast, and Autonomous Drone Radio Mapping for Localization
[ "Paul S. Kudyba", "Haijian Sun" ]
cs.NI
[ "cs.NI", "cs.RO" ]
Narrowband, Fast, and Autonomous Drone Radio Mapping for Localization Paul S. Kudyba and Haijian Sun School of Electrical and Computer Engineering, University of Georgia, Athens, GA, USA paul.kudyba@uga.edu, hsun@uga.edu July 22, 2024 =============================================================================================================================================================== § ABSTRACT This paper explores how a flying drone can autonomously navigate while constructing a narrowband radio map for signal localization. As flying drones become more ubiquitous, their wireless signals will necessitate new wireless technologies and algorithms to provide robust radio infrastructure while preserving radio spectrum usage. A potential solution for this spectrum-sharing localization challenge is to limit the bandwidth of any transmitter beacon. However, location signaling with a narrow bandwidth necessitates improving a wireless aerial system's ability to filter a noisy signal, estimate the transmitter's location, and self-pilot toward the beacon signal. By showing results through simulation, emulation, and a final drone flight experiment, this work provides an algorithm using a Gaussian process for radio signal estimation and Bayesian optimization for drone automatic guidance. This research supports advanced radio and aerial robotics applications in critical areas such as search-and-rescue, last-mile delivery, and large-scale platform digital twin development. Radio Mapping, Gaussian Process, Wireless Localization, Autonomous Aerial Robotics § INTRODUCTION Advances in aerial robotics give engineers new opportunities to collect radio information with a much increased speed and cost efficiency <cit.>. Within this revolution in capability lies unique engineering insights that could prove foundational to providing the next generation of wireless infrastructure. Such insights are already well established as areas of active wireless research, such as efficient and accurate radio cartography and radio mapping, which can be used to drive improved communication coverage and spectrum occupancy <cit.>. Furthermore, radio data, once collected or during collection, can be used to drive further valuable inferences, such as transmitter location <cit.>. However, these spatial radio mapping techniques rely on broadband transmissions, leaving the receiver to decompose the channel response, which compounds the effects of large- and small-scale fading <cit.>. In contrast to a broadband transmitter, an extremely narrowband transmission (ratio of bandwidth to carrier frequency ≪ 1) can be leveraged to simplify the channel modeling, but also increase the spectrum efficiency of a localization inference. These combined radio mapping and localization inferences can be used in applications such as last-mile delivery, search-and-rescue, and development of digital twins <cit.>. A common element in many of these radio insights and inferences is the societal demand to simultaneously produce novel radio technologies and robust standards as infrastructure <cit.>. This nexus of aerial robotics, wireless technology and machine learning allowing for researchers to explore new solution spaces and gain new insights, that can in turn, create new radio algorithm frameworks to be easily and rapidly deployed after thorough testing. 
Our experiment used the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW) to quickly iterate our algorithm design within an emulated digital twin environment and collect real-world data on the testbed. This paper brings forward a combined system implementation of aerial mobility, Software-Defined Radio (SDR), and machine learning into a fully deployable top-level algorithm that uses a Gaussian Process (GP) to produce a nonparametric 2D tomographic narrowband channel radio map to be deployed on AERPAW. This radio map is generated as the rover collects samples to form a GP best approximation, or surrogate, of the unknown radio map spatial field. Bayesian Optimization (BO) is then leveraged to infer a best next location to collect a new radio sample. A pipeline methodology was created to simulate robot behavior with various pathloss functions, noise, and BO Activation Functions (AF). A laboratory testbed was constructed to provide signal analysis on the narrowband receiver and develop a filter to reject the deep fading encountered. A final algorithm was then coded to include all previous heuristics and analysis to run within the AERPAW emulator and testbed. The rest of the paper is structured as follows: in Section <ref>, the impetus challenge which supported much of this effort is described. In Section <ref>, a general structure of GP and how it is used to construct an evolving radio map as samples are collected. Section <ref> gives a relation to how the kernel function and its optimization impact the GP radio map. Section <ref> shows how once the GP has established a pathloss gradient, inference can be determined with BO. The methods used to create and deploy the final algorithm are established in Section <ref> with the results shown in Section <ref>. Finally, concluding remarks are given in Section <ref>. §.§ AERPAW Find A Rover Challenge Participation in the first AERPAW challenge provided an opportunity to address some of the active research questions mentioned above. The goal of the challenge was to use the platform emulator and testbed to locate a rover within three and ten minutes across an area of 19.7 acres using only a repeating BPSK pseudorandom pilot at 3.24 GHz with a 125 KHz bandwidth. A USRP B-210 mini SDR with a radio front end and filters was used. The objective was to provide the best (lowest) mean error in three randomly chosen locations. Drone log estimates determined two results at the three- and ten-minute, marked from takeoff. Figure <ref> illustrates the boundaries and each of the final locations of the rover transmitter. The final submissions were qualified by running them within the AERPAW emulator. If the emulator showed safe flight operation, the algorithm was used on the testbed to determine the final result. To facilitate radio-based localization, the AERPAW team provided a channel-sounding script that used GNU Radio to interface with the SDR. This script ran a cross-correlation with the transmitted pseudorandom pilot and provided an output channel power in dB and a normalized SNR quality value. §.§ Gaussian Process for Spatial Coverage Radio Mapping Kriging, emerging from practical use in geostatistics, was formalized into a canonical method called Gaussian Process (GP) <cit.>. As is used here, a typical application employs a time-invariant surrogate spatial field mapping as stochastic processes or a Gaussian distribution of functions. 
The effectiveness of reducing a time dimension in a geostatistical context can be easily apparent by producing a cartography or spatial map. However, this particular dimension reduction assumption (e.g., a time-invariant channel) can cause issues in a mobility context. Knowingly, we use a more traditional movement-agnostic signal-processing receiver provided by AERPAW. Nevertheless, this exercise remains important in elucidating an assumption of static dimensionality in a novel system design context and its upstream implications for the system inference task. Any channel impairment processes not directly accounted for within the receiver design must be considered noise for upstream system insights. This includes temporal noise, such as multipath, Doppler, and sample jitter; even if these alone are relatively small noise sources, they are combined noise sources within our receiver. In this sense, the radio map we construct will disregard the time-varying aspect of a channel for a direct spatial alignment with radio communication as coverage. This is especially important for mobile communications, where a channel can rapidly encounter fluctuations attributable to many environmental and systemic processes, and having knowledge of such mapping priors can inform downstream reasoning for throughput estimation in a space and time-critical context. The channel process p_i is a 2D spatial field radio map consisting of a GP with additive noise as shown in Eq. <ref>. The GP itself describes a probability distribution over a set of radio map functions f(·) shown in Eq. <ref> and is characterized by a mean and covariance (or kernel) function. From the domain 𝒳, a 2D spatial distribution prior f(·) is described with a mean function m(·)=c and a covariance function or kernel k(·,·). In this context c is a constant representing the default channel gain corresponding to no signal or receiving incoherent noise. These process descriptor functions (μ,k) correspond to a high-level prior over which the entire GP distribution of functions can resemble. p_i_channel process = f(𝐗_𝐢)_GP + v_i_Gaussian noise f(·) ∼𝒢𝒫(m(·), k(·, ·)) A vector of radio channel estimations, 𝐗_𝐢, is sampled at locations within the domain 𝒳⊂ℝ^2. In our case, the prior f(·) resembles a pathloss, and our expectation is that it should match an exponential decay, but the actual decay exponent remains an unknown parameter (due to the novel system and environment). Thus, this pathloss exponent will be found at run-time as a hyperparameter. With the correct kernel and hyperparameters, the true spatial field channel, p_i, will be drawn from the GP distribution of radio maps as the mean. This regression, with a prior indicating the most probable radio map and associated uncertainty, is how we generate a rigorous and robust atemporal channel estimate map. Additionally and intuitively, the location of the signal peak within the most probable radio map, when all the dimensional reduction assumptions and noise rejections hold, is the true transmitter's location. §.§ Kernel Functions and Maximum Likelihood Algorithm The kernel function k(·,·) serves as a unified covariance between the prior functions f(·). In this case, the spatial relationships k(X, X') correspond to increasing uncertainty as the distance increases from the channel samples. The kernel function must be positive and semi-definite. In this case, the commonly used Squared Exponential (SE) kernel function is applied as a prior. 
k_SE(x,x^')=σ^2exp(-(x-x^')^2/(2ℓ^2)) This inherently produces a distribution of infinitely smooth functions that decay exponentially, as expected from any free-space pathloss function. However, the SE, used alone, would also fit any noise and apply its effects incorrectly to the posterior as part of the pathloss. Fortunately, kernels can be combined with other kernels because they are positive semi-definite. With the addition of a Gaussian noise or white noise kernel, the posterior now accounts for a noise hyperparameter within each collected sample, allowing the entire radio map distribution to have more flexibility for inconsistencies in the input sampling. Combined, the starting kernel is a Radial Basis Function (RBF, another name for the SE) plus a white noise term, with an ℓ value of 0.00276 bounded between 0.001 and 0.004. Each kernel comes with a specific set of hyperparameters, which require marginal likelihood optimization to ensure a correct fit that balances the complexity across all the samples collected. For example, the SE kernel requires two hyperparameters: σ, a scaling factor, and ℓ, the 'lengthscale', seen in Eq. <ref>. These hyperparameters are optimized via the Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Bound constraints (L-BFGS-B) algorithm. In this case, the bounding is used to ensure an applicable resulting lengthscale at all times during the flight. §.§ Navigation with Bayesian Optimization The resulting radio map is a distribution of functions and does not have a closed-form solution. Another consideration is the time penalty, opportunity cost, or regret to produce a reliable sample <cit.>; the drone simply cannot sample the entire field, and yet it needs to know the best next location to fly towards and sample. The GP is also very likely to change as samples are added, especially at the beginning of any flight. These properties disallow many traditional forms of optimization; however, because our radio map has an associated uncertainty provided by our GP kernel function, Bayesian Optimization (BO) can be used to provide an optimal future sample location. This future optimal location is provided by a closed-form surrogate built from the GP output's mean (μ) and uncertainty (σ), from which a maximum can be chosen. BO does this by using an Activation Function (AF) to balance sampling at uncertain positions (exploration) against sampling near the best known mean (exploitation) in the search for the global optimum. UCB(X, Γ_n) = μ(X) + Γ_n σ(X) Equation <ref> gives the activation function used in the final AFAR implementation. It is known as the Upper Confidence Bound (UCB). Γ is a non-negative, exponentially decreasing sequence updated after collecting each sample. Specifically, Γ_n = d^s, where 0 < d < 1. This, in effect, reduces the exploration term as the flight progresses and samples are taken. § EXPERIMENTAL METHODOLOGY The production of a final AFAR submission consisted of three stages of research and development: simulation via Robotarium <cit.>, receiver testing with an in-lab setup, and emulation with AERPAW. The results of each stage gave valuable insight into the next, building toward a cohesive and successful deployment. A 2D robotic simulator was used to analyze the expected self-guided robot behavior in controlled scenarios, and a path loss functionality was added to facilitate autonomous development.
For example, a 2D normal distribution as a basic pathloss function allowed control of the mean as a transmitter's location and the standard deviation to control the decay. The platform enabled many tests with varying pathloss models and signal-to-noise levels. It was also seen that two predefined routines before and after autonomous BO were beneficial to stability, efficiency, and accuracy. The robot would start with a predefined circle routine wide enough to establish a stable gradient and resolve any noise for BO to give stable guidance. Secondly, after a stable global maximum was repeatedly chosen as the optimal location, a routine could be triggered to circle that point, further refining the transmitter's location. Lastly, the simulation allowed experimentation with different AF, leading to the selection of UCB for the final deployment. Figure <ref> shows the mean and uncertainty radio maps resulting from BO with uniform noise added to the robot's receiver. A 2D grid was constructed to facilitate getting discrete spatial field points of the GP mean and maximum squared error. The AF could be computed from this grid, and the maximum would then be selected as the new target waypoint. To ensure that the GP receives no outlier radio data that could not be described as Gaussian noise, a non-linear filter was shown to be necessary from laboratory testing and data from a previous testbed run. A large discriminator was observed by binning the receiver quality signal and taking the variance of these readings. By removing any readings with an excessive quality-variance, large signal noise swings could be rejected as invalid samples and excluded from the GP. However, this threshold needed to be determined at flight time due to limited empirical data. Laboratory testing also indicated that receiver movement influenced the signal's tendency to encounter noise. The binning of the quality metric in a lab setup where the receiver periodically moves towards and away from the transmitter is shown in Figure <ref>. A final AERPAW deployment was then created to perform emulation testing and for the final deployment on AERPAW. Two grids are created using the latitude and longitude for both the flying boundary and the rover location boundary. The flying boundary grid is then used for all navigational inputs, and the rover's final estimates are given from the second rover grid. Both grids are sampled from the same GP. The drone then takes off to 40 m for the entire trial. Once at the correct altitude, the drone attempts to take the first radio sample binning data for 6 s while stationary and always facing northwest. If the quality-variance of that sample is higher than the threshold, the sample is retaken with an exponentially higher threshold until this first sample is accepted. The drone then flies to three predefined waypoints shown in figure <ref>. These waypoint samples give a stable GP and BO for autonomous flight. The drone then flies its mission according to the same AF UCB policy chosen in the simulation. Two possible criteria can trigger an auxiliary routine that creates a circle of waypoints from the drone's current position (observing the boundary limits). The first criterion is that the AF has repeatedly chosen very spatially similar points to investigate. This is similar to the simulation ending routine which indicates that drone has estimated it is within close proximity to the channel gain maximum. The second is if the drone rejects a maximum number of measurements throughout the mission. 
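To summarize the loop described in this section, a minimal sketch of the grid-based GP surrogate, quality-variance filter, and UCB waypoint selection is shown below using scikit-learn. The kernel lengthscale and its bounds follow the values stated earlier; the variance threshold, the decay constant d, and the helper names are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Combined kernel: RBF (squared exponential) for the pathloss shape plus a
# white-noise term for per-sample measurement noise.
kernel = (ConstantKernel(1.0)
          * RBF(length_scale=0.00276, length_scale_bounds=(0.001, 0.004))
          + WhiteKernel(noise_level=1.0))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)  # L-BFGS-B tunes hyperparameters

def accept_sample(quality_bins, threshold=35.0):
    """Quality-variance filter: reject readings whose binned quality variance is excessive."""
    return np.var(quality_bins) < threshold

def next_waypoint(samples_xy, samples_db, grid_xy, d=0.9):
    """Fit the GP on accepted samples and pick the UCB-maximizing grid point."""
    gp.fit(samples_xy, samples_db)
    mu, sigma = gp.predict(grid_xy, return_std=True)
    gamma = d ** len(samples_db)        # exponentially decaying exploration weight, Gamma_n = d^s
    ucb = mu + gamma * sigma            # UCB(X, Gamma_n) = mu(X) + Gamma_n * sigma(X)
    return grid_xy[np.argmax(ucb)]
```

With the exploration weight decaying geometrically, later iterations increasingly exploit the current best estimate of the channel-gain maximum.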
The latter criterion is a failsafe that was seen to benefit the mission while testing within the AERPAW emulator. If the drone cannot navigate autonomously to points that provide valid radio samples for the GP, a circular routine of waypoints ensures that the radio will be given a broad spatial set of sample locations, which might recover a gradient towards the transmitter. During the mission, a mission timer tracks the duration and logs the final predicted location from the rover grid space for the three- and ten-minute estimates. § RESULTS AND ANALYSIS Table <ref> presents the results from the AERPAW emulator and testbed environments. The first mission shows how quickly the GP can fit a transmitter that is relatively close (125 m) to the drone's starting location. However, the drone could not resolve a better 10-minute location estimate due to receiver gain saturation. While a consistently high gain is generally desired for communication, corresponding to an ideal channel, in this case, the signal 'clipping' creates a spatial 'plateau.' This spatial feature makes resolving a transmitter location nearly impossible when within this region. Seeing this within the emulator, the radius for the auxiliary circle was set expecting to encounter this signal behavior; however, in this case, the auxiliary path was not wide enough to provide sufficient localization information. Additionally, critical points were missed on the northwest side of the auxiliary path due to excessive noise, correctly excluded by the quality-variance filter. The second trial, shown in Figure <ref>, gives the team's best 10-minute result of 27.8 m. The 3-minute result did not provide quite enough time for the drone to establish a proper gradient and travel to the transmitter 330 m from takeoff. However, the drone was able to establish a proper optimal trajectory toward the transmitter and perform the auxiliary path maneuver while staying within the boundary conditions. The auxiliary path was wide enough to provide additional localization to the GP, and the quality-variance filter was working appropriately to filter excessive noise. This result shows the ideal behavior of the drone in this situation. To provide a better three-minute GP estimate with this starting routine, a kernel that supplied more information about the signal from this starting distance (as a prior) would need to be considered. During the third trial, the drone encountered a confluence of issues, which resulted in degraded localization estimation performance. However, the mission was not a failure; the drone was able to produce locations for both timed estimates from a starting distance of 286 m. Log analysis revealed that the quality-variance filter threshold (set with the first sample) was set to exclude any variance above 35. This was different from the second run, which allowed samples with a less strict value of 67. This stricter threshold prevented many of the first critical direction-finding waypoints from being included in the GP. The drone then generated the auxiliary path due to missing too many sample measurements. From this waypoint path, two samples were accepted. These samples greatly increased the accuracy of the location estimate. The drone reestablished an optimal path but was unable to collect any more samples due to the time limit. § CONCLUSION The results of the three trials show significant robustness in the GP and BO architecture for drone-based location finding. The quality-variance filter identified and removed outlier noise values from the narrowband receiver.
Unfortunately, the drone was unable to find the correct filter parameter value from the start of the third trial. This alone shows a necessity for improved performance of the AERPAW emulator and a need for radio data collection. If the emulation noise and the testbed had a reliable digital-twin agreement, the filter parameter could be safely studied and set within the emulator, similarly to the auxiliary path settings. This would save mission time and prevent hazardous in-mission calibrations. spectrumcartography D. Romero and S.-J. Kim, “Radio Map Estimation: A data-driven approach to spectrum cartography,” IEEE Signal Process. Mag., vol. 39, no. 6, pp. 53–72, Nov. 2022, doi: 10.1109/MSP.2022.3200175. shresthaRadioMapEstimation2023 R. Shrestha et al., “Radio Map Estimation in the Real-World: Empirical Validation and Analysis,” in IEEE Conference on Antenna Measurements and Applications, pp. 169–174, 2023. kwonRFSignalSource2023b H. Kwon and I. Guvenc, “RF Signal Source Search and Localization Using an Autonomous UAV with Predefined Waypoints,” in 2023 IEEE 97th Vehicular Technology Conference (VTC2023-Spring), Jun. 2023, pp. 1–6, doi: 10.1109/VTC2023-Spring57618.2023.10200783. matzFundamentalsTimeVaryingCommunication2011 G. Matz and F. Hlawatsch, “Fundamentals of Time-Varying Communication Channels,” in Wireless Communications Over Rapidly Time-Varying Channels. Elsevier, pp. 1–63, 2021. kudybaThesis P. Kudyba, “Rapid Autonomous Narrow-band Wireless Localization via Gaussian Process and Bayesian Optimization,” M.S. thesis, Sch. of Electr. and Comput. Eng., University of Georgia, Athens, GA, June 2024. AERPAW V. Marojevic et al., “Advanced wireless for unmanned aerial systems: 5G standardization, research challenges, and AERPAW architecture,” IEEE Vehicular Technology Magazine, vol. 15, no. 2, pp. 22–30, 2020. kudybaUAVChallAERPAW P. Kudyba et al., “A UAV-assisted wireless localization challenge on AERPAW,” submitted for publication in IEEE Communications Magazine, Special Call on Experimentation in Large-Scale Wireless Community Testbeds, May 2024. rasmussenGaussianProcessesMachine2006 C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning. Cambridge, MA: MIT Press, 2006. santosMultirobotLearningCoverage2021 M. Santos et al., “Multi-robot Learning and Coverage of Unknown Spatial Fields,” in International Symposium on Multi-Robot and Multi-Agent Systems, pp. 137–145, 2021. pickemRobotariumRemotelyAccessible2017 D. Pickem et al., “The Robotarium: A remotely accessible swarm robotics research testbed,” in IEEE International Conference on Robotics and Automation. IEEE, pp. 1699–1706, 2017.
http://arxiv.org/abs/2407.12588v2
20240717141234
Benchmarking Robust Self-Supervised Learning Across Diverse Downstream Tasks
[ "Antoni Kowalczuk", "Jan Dubiński", "Atiyeh Ashari Ghomi", "Yi Sui", "George Stein", "Jiapeng Wu", "Jesse C. Cresswell", "Franziska Boenisch", "Adam Dziedzic" ]
cs.CV
[ "cs.CV", "cs.AI" ]
[ Benchmarking Robust Self-Supervised Learning Across Diverse Downstream Tasks equal* Antoni Kowalczukequal,cispa Jan Dubińskiequal,wut,ideas Atiyeh Ashari Ghomiequal,L6 Yi SuiL6 George SteinL6 Jiapeng WuL6 Jesse C. CresswellL6 Franziska Boenischcispa Adam Dziedziccispa cispaCISPA Helmholtz Center for Information Security wutWarsaw University of Technology ideasIDEAS NCBR L6Layer 6 AI, Toronto, Canada Adam Dziedzicadam.dziedzic@sprintml.com 0.3in ] § ABSTRACT Large-scale vision models have become integral in many applications due to their unprecedented performance and versatility across downstream tasks. However, the robustness of these foundation models has primarily been explored for a single task, namely image classification. The vulnerability of other common vision tasks, such as semantic segmentation and depth estimation, remains largely unknown. We present a comprehensive empirical evaluation of the adversarial robustness of self-supervised vision encoders across multiple downstream tasks. Our attacks operate in the encoder embedding space and at the downstream task output level. In both cases, current state-of-the-art adversarial fine-tuning techniques tested only for classification significantly degrade clean and robust performance on other tasks. Since the purpose of a foundation model is to cater to multiple applications at once, our findings reveal the need to enhance encoder robustness more broadly. Our code is available at https://github.com/layer6ai-labs/ssl-robustnessgithub.com/layer6ai-labs/ssl-robustness. § INTRODUCTION Foundation models trained through self-supervised learning (SSL) have become the backbone of many applications due to their versatility; one foundation model can be adapted to many downstream tasks with a small amount of data and training (or fine-tuning). Foundation models in the vision domain have even outperformed dedicated models on several tasks <cit.>. Despite their broad utility, the adversarial robustness of these models has only been explored for classification tasks with linear probing <cit.> while other common downstream tasks, such as semantic segmentation <cit.> and depth estimation <cit.>, remain unexplored. Recently,  <cit.> showed that non-robust features extracted from adversarial examples for supervised models (and useful for classification) become largely useless when transferred to self-supervised learning paradigms. They advocated for a cross-paradigm examination of robustness, yet focused their analysis solely on classification. A major outstanding question is whether adversarial robustness transfers across downstream tasks. We present an in-depth empirical evaluation of the adversarial robustness of self-supervised vision encoders <cit.> for downstream tasks beyond classification. We use attacks that operate in the encoder's embedding space () and those that leverage direct access to the downstream task outputs (), for classification <cit.> or for semantic segmentation <cit.>. Our main observation is that the state-of-the-art adversarial full fine-tuning of encoders <cit.>: (1) substantially lowers clean performance, (2) increases robustness only against the , and (3) remains ineffective in improving robustness against the task-specific . We observe only a slight improvement against the for classification when the adversarial fine-tuning dataset and downstream dataset come from the same distribution. This indicates a need to rethink what it means for a foundation model to be robust. 
Finally, we offer potential approaches to bolster the cross-task robustness of SSL encoders. § BACKGROUND AND RELATED WORK Self-Supervised Learning. SSL aims to extract a representation of data which is useful for downstream tasks specified at test-time <cit.>. In many frameworks, an input x is first modified by two semantic-preserving augmentations yielding x_1 and x_2, which are subsequently passed to an encoder f. The training objective aligns the output representations by minimizing a distance metric d (Euclidean distance) as L(f, x) = d(f(x_1),f(x_2)) <cit.>. Once trained, the encoder's representations are then used for various downstream tasks, such as classification, semantic segmentation, or depth estimation by fine-tuning adaptor networks. In this work, we focus on a state-of-the-art SSL framework, DINO <cit.>. DINO utilizes two encoder networks, the teacher f_t, and student f_s. The student network is optimized to minimize the cross-entropy between f_s(x_1) and the soft labels f_t(x_2), as a form of knowledge distillation <cit.>. To prevent collapse, the gradients are only passed through f_s. Parameters of f_t are updated using the moving average of the student's parameters. DINOv2 <cit.> improves over DINO in terms of scale and efficiency of training, rather than proposing a new SSL method. <cit.> showed substantial improvements on dense (pixel-wise) downstream tasks like semantic segmentation and depth estimation compared to DINO encoders. Adversarial Robustness in SSL. In this work, we focus on the state-of-the-art Decoupled Adversarial Contrastive Learning (DeACL) framework by <cit.> to obtain robust SSL encoders. For an overview on other methods for robust SSL and a thorough discussion on the advantages of DeACL, see <Ref>. DeACL fine-tunes existing encoders for increased robustness using knowledge transfer from a pre-trained encoder to a robust one. The objectives for the distillation are to: (1) match the distilled encoder representations to those of the pre-trained encoder (high cosine similarity), and (2) bring the distilled encoder's representations of adversarial examples (examples generated with the pre-trained encoder that maximize the distance to their original samples) close to their clean counterparts. By decoupling the encoder pre-training from increasing its robustness, DeACL provides high computational efficiency in comparison to prior methods and obtains state-of-the-art robust performance. Downstream Tasks. To evaluate the quality of representations learned by SSL methods, we consider three common downstream tasks. (1) Linear Classification assesses the quality of the learned representations by training a downstream classifier and measuring classification performance. (2) Semantic Segmentation is a common computer vision task that categorizes every pixel in an image into a class or object. While downstream-agnostic adversarial examples against SSL encoders can be used to fool segmentation models, <cit.> show with that tailoring the attack to the segmentation task is even more effective. aims at manipulating all pixel classifications of an image by introducing a weighted loss term between correctly classified and misclassified pixels. (3) Depth Estimation is another prevalent computer vision task aimed at estimating distances of objects in an image relative to the camera location, where each pixel is assigned a depth value. Targeted adversarial attacks against depth estimation can lead to strong deviations between actual and predicted depth <cit.>. 
At the same time, they can also be leveraged for depth estimation-specific adversarial training to improve robustness <cit.>. § ATTACK AND DEFENSE METHODS We propose a framework to assess the robustness of foundation models at both the embedding level and for downstream tasks, as described in <Ref>. The goal of benchmarking the robustness of foundation models across diverse downstream tasks restricts our possible selection of encoder models. Specifically, the encoder must generate representations that are applicable to a variety of tasks beyond classification. In our preliminary experiments, we evaluated the performance of SimCLR <cit.>, SimSiam <cit.>, and DINO encoders. We observed that the representations produced by SimCLR and SimSiam were insufficient to achieve high-quality downstream segmentation or depth estimation. For that reason, we use the foundation models DINO and DINOv2 as examples, and train a linear adaptor for each downstream task. For the embedding attack, we target the model at the representation layer. For downstream attacks, we evaluate three different tasks: classification, semantic segmentation, and depth estimation. Each attack is detailed in the following sections. §.§ Embedding-level Attack The operates directly on the underlying encoder's embeddings <cit.>. The objective behind the approach is to make imperceptibly small modifications to an input image such that the resulting representation from the SSL encoder is changed substantially. More concretely, for a clean input image x, we find its adversarial perturbation x_adv=x+δ such that ‖δ‖_∞ < ε, where ε is the maximum allowed input distortion measured in the ℓ_∞-norm. Given an encoder f, the objective is to find x_adv such that the ℓ_2 distance between the representations of the original image f(x) and the adversarial image f(x_adv) is maximized: max_x_adv‖ f(x) - f(x_adv)‖_2. For sparse downstream tasks (classification) we target the CLS token embedding, while for dense tasks (semantic segmentation and depth estimation) we target patch embeddings. We leverage the projected gradient descent () attack <cit.> with the objective defined in representation space to find adversarial examples x_adv. We set the maximum perturbation to ε = 8/255, start with x_adv initialized from x with uniform noise added (defined as 𝒰(-ε,ε)), and perform 20 steps of with step size 2/255. To ensure that the distance from the original image x is within the ε-ball, we clip the perturbation to [-ε,ε] at every step of . §.§ Downstream Attacks Classification. For the standard classification tasks, we use the attack <cit.> with settings similar to those above: ε = 8/255, 20 steps with step size 2/255, and initialization from randomly perturbed images. The target is to maximize cross-entropy loss for the perturbed images. Semantic Segmentation. To attack semantic segmentation we leverage the attack <cit.> which calculates a weighted average of the loss over correctly and incorrectly classified pixels, L(f_seg(x^t_adv), y) = (1 - λ_t)/(HW) ∑_j ∈ P_T L_j + λ_t/(HW) ∑_k ∈ P_F L_k. Here L_j represents the cross-entropy loss, λ_t is a hyperparameter, H and W denote the height and width of the image, while P_T and P_F are the sets of correctly and incorrectly classified pixels respectively. is used to find adversarial examples with this loss, and we use similar settings as mentioned previously. The weight λ_t starts from zero and increases linearly each iteration.
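A minimal PyTorch sketch of one attack step with this weighted segmentation loss is given below; the linear λ_t schedule shown (λ_t = t/2T) and the function names are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, y, lam):
    """Weighted average of per-pixel cross-entropy over correct / incorrect pixels."""
    ce = F.cross_entropy(logits, y, reduction="none")          # (B, H, W)
    correct = (logits.argmax(dim=1) == y).float()
    hw = y.shape[-2] * y.shape[-1]
    per_image = ((1.0 - lam) * (ce * correct).flatten(1).sum(1)
                 + lam * (ce * (1.0 - correct)).flatten(1).sum(1)) / hw
    return per_image.mean()

def attack_step(model, x, x_adv, y, t, num_steps, eps=8 / 255, alpha=2 / 255):
    """One PGD-style ascent step on the weighted loss, projected onto the eps-ball."""
    lam = t / (2.0 * num_steps)                                # assumed linear schedule from 0
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = weighted_seg_loss(model(x_adv), y, lam)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + alpha * grad.sign()
    x_adv = x + torch.clamp(x_adv - x, -eps, eps)              # project the perturbation
    return torch.clamp(x_adv, 0.0, 1.0)
```

Because λ_t is small during the first iterations, the ascent direction is dominated by the loss on correctly classified pixels, which is exactly the behavior described next.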
The main insight behind the attack is to fool correctly classified pixels in the first attack iterations and then treat the correct and incorrect pixel classifications roughly equally in the later iterations. As a result, the attack can achieve similar attack effectiveness as but with substantially fewer iterations. Depth Estimation. Similarly to semantic segmentation, we compute the average loss per pixel and then apply a attack targeting this loss, referred to as . The loss terms used for depth estimation and its attack are akin to those in <cit.>, incorporating the multi-scale gradient matching loss <cit.> and pixel-wise depth loss <cit.>. For more details, refer to <Ref>. §.§ DeACL Defense We combat the above attacks using the state-of-the-art method of obtaining robust encoders, DeACL <cit.>. We select DeACL for several compelling reasons. Firstly, unlike many other methods aimed at enhancing robustness, it does not rely on any specific downstream task and instead improves robustness at the representation layer in a self-supervised manner. Moreover, it is a state-of-the-art method with superior robustness compared to other techniques. Lastly, the proposed adversarial fine-tuning approach is significantly more computationally efficient compared to training models from scratch using traditional adversarial training methods. These advantages make DeACL feasible and practical, particularly given the substantial computational resources required to train state-of-the-art encoder models. We start from a pretrained encoder f and create its robust version f_R using fine-tuning with the following objective: L(f_R, f) = d(f_R(x), f(x)) + γ d(f_R(x_adv), f_R(x)). Here we set d as the standard cosine similarity, and x as the input image. <Ref> aims to preserve representation quality and improve robustness against adversarial examples. γ=2 is a parameter used to balance the impact of each goal on the final objective function. § EMPIRICAL EVALUATION §.§ Setup We present the results for encoders trained using the DINO and DINOv2 SSL frameworks, utilizing ViT <cit.> backbones. The underlying encoders are either Standard, provided by the SSL frameworks, or DeACL, further fine-tuned to enhance robustness. We present the hyperparameters that we use to train the linear layers for the various downstream tasks. These hyperparameters are uniform across encoders and datasets, and vary only between different types of tasks: classification, semantic segmentation, and depth estimation. Full insights are presented in <Ref>. Classification. We use a learning rate of 0.5, batch size 16, and train the linear classifiers for 5 epochs using the Adam <cit.> optimizer. As a train-time augmentation we use random horizontal flips. Semantic segmentation. We follow the setup from the DINOv2 framework, and use a learning rate of 0.0001, batch size 16, weight decay 0.001, and train for 50 epochs using the AdamW <cit.> optimizer. For training as well as evaluation on non-uniformly sized images (PASCAL VOC 2012) we utilize sliding window inference: we divide the image into parts of uniform size, compute logits for all of the parts, and then combine them into one final logit map. Overlap between the parts is handled by averaging the logits in the overlap regions. We use random cropping and random horizontal and vertical flips as training-time augmentations. Depth estimation. Since DINOv2 has achieved state-of-the-art performance in depth estimation, we adopt their settings.
For training, we use their combination of gradient matching loss and pixel-wise depth loss. For the remaining hyperparameters, we use a learning rate of 0.0001, batch size 128, weight decay 0.01, and train for 20 epochs using AdamW. All hyperparameters are listed in <Ref>. §.§ Results Classification. We follow the widely used linear evaluation protocol <cit.>, where a linear classifier is trained on top of the frozen base SSL encoder, and test accuracy is used as a proxy for representation quality. We compare the classification accuracy after linear probing for the standard vision benchmarks: CIFAR10 <cit.>, CIFAR100 <cit.>, and STL10 <cit.>. The evaluation is presented in <Ref>. Contrary to the results shown by <cit.>, we observe no improvement in robustness against tailored attacks (right column) for the encoder fine-tuned using DeACL, with the only exception being on the STL10 dataset. We argue that the discrepancy between our results and those reported by <cit.> stems from the underlying training sets of the fine-tuned encoder. <cit.> utilized encoders trained on CIFAR10, then fine-tuned and evaluated them on CIFAR10 as well. In contrast, we focus on ImageNet-trained encoders, use ImageNet for fine-tuning, and evaluate them on various datasets including CIFAR10. We assume that the discrepancy between training, fine-tuning, and evaluation sets leads directly to the inefficacy of DeACL in obtaining robust encoders against stronger adversarial attacks than , like . This idea is supported by the improved adversarial accuracy against attacks on STL10 with the fine-tuned encoder, as it is a subset of ImageNet. We observe an increase (above random guessing) in accuracy compared to the standard encoder (see the second-to-last and last rows of <Ref>, rightmost column), from 0 to 0.23. Semantic segmentation. Similarly to classification, a single linear layer is trained on patch embeddings to obtain a low-resolution logit map. Next, we interpolate the logits to obtain a logit map of a resolution matching the size of x. The minimized objective is a pixel-wise cross-entropy loss. We evaluate encoders on ADE20k <cit.>, CityScapes <cit.>, and PASCAL VOC 2012 <cit.>, and report mean Intersection over Union (mIoU↑) scores in <Ref>. proves to be a potent downstream task-agnostic method of obtaining adversarial examples for the segmentation task, achieving mIoU of 0 for all clean encoders across all datasets. Similarly to the linear classification task, we note that fine-tuning with DeACL improves robustness against ; however, it fails to achieve significant improvements for the downstream attack . Depth estimation. For depth estimation, following <cit.>, we extract the final layer of the frozen transformer and concatenate the CLS token with each patch token. Then we apply bilinear upsampling to the tokens to enhance the resolution. Finally, we train a linear layer on top to estimate the depth of each pixel. We evaluate the quality of the depth estimation using the standard Root Mean Square Error (RMSE) metric on the NYU-Depth-v2 dataset <cit.>. Our results in <Ref> show that the and attacks significantly increase the RMSE. The only instance where the RMSE remains below 1 after an attack is with DeACL fine-tuning against the ; however, this fine-tuning fails to provide a notable improvement in robustness against the attack, similarly to the classification and semantic segmentation tasks.
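A minimal sketch of the linear depth probe described above is shown below; the tensor shapes and the 1x1-convolution formulation of the per-pixel linear layer are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearDepthHead(nn.Module):
    """CLS token concatenated to each patch token, bilinear upsampling, linear probe."""
    def __init__(self, embed_dim, out_size):
        super().__init__()
        self.out_size = out_size                                   # (H, W) of the target depth map
        self.linear = nn.Conv2d(2 * embed_dim, 1, kernel_size=1)   # per-pixel linear layer

    def forward(self, patch_tokens, cls_token, grid_hw):
        # patch_tokens: (B, N, D) from the frozen encoder; cls_token: (B, D); N = h * w
        b, n, d = patch_tokens.shape
        h, w = grid_hw
        cls = cls_token.unsqueeze(1).expand(-1, n, -1)             # broadcast CLS to every patch
        x = torch.cat([patch_tokens, cls], dim=-1)                 # (B, N, 2D)
        x = x.transpose(1, 2).reshape(b, 2 * d, h, w)              # rearrange tokens into a grid
        x = F.interpolate(x, size=self.out_size, mode="bilinear", align_corners=False)
        return self.linear(x).squeeze(1)                           # (B, H, W) depth prediction
```

Only this probe is trained with the losses and hyperparameters listed above; the encoder itself stays frozen.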
Evolution of Robustness During DeACL Fine-Tuning. <Ref> presents the dynamics of model robustness for different downstream tasks during DeACL fine-tuning. Notably, robustness against -based attacks exhibits minimal improvement, remaining unchanged during this process. The only exception is the improvement in robustness of linear classification on the STL10 dataset (which is a subset of ImageNet) observed during the first 20 epochs of training. We also observe that a relatively short period of fine-tuning (around 10 epochs) leads to noticeable improvements in robustness against . However, further fine-tuning iterations show diminishing returns, with robustness metrics plateauing. Performance on clean data remains relatively stable throughout fine-tuning after a drop during the first 10 epochs. The simultaneous increase in robustness against and decrease in performance on clean data observed at the start of the fine-tuning process confirms the trade-off between clean and adversarial model performance. The observed dynamics hold true across all downstream tasks. Our findings indicate that the adversarial fine-tuning method proposed by <cit.> exerts its greatest impact during the initial epochs, with little to no benefit from prolonging training to a larger (e.g., 100) number of epochs. § DISCUSSION AND CONCLUSIONS SSL encoders are foundation models leveraged for a myriad of downstream vision tasks in critical domains, like autonomous driving <cit.> or medical imaging <cit.>. This motivates the necessity of ensuring the encoders' robustness. In this work, we argue that prior work on SSL encoder robustness mainly evaluates downstream classification tasks while leaving other popular tasks, such as semantic segmentation or depth estimation, under-explored. Through our experimentation, we show that encoders are highly vulnerable to adversarial attacks on multiple downstream tasks, which pose a significant risk. Our results also highlight that the defenses that were developed with downstream classification in mind also harm the downstream performance on classification and other tasks. This suggests that more fundamental work is required to make foundational SSL encoders robust and effective for a wide variety of tasks. Future directions for improving robustness. We observe that defenses against adversarial examples in SSL are effective only for a single attack type, namely . However, they remain ineffective for other perturbations, especially task-specific attacks like , , and . To train SSL models that are simultaneously robust to multiple perturbation types, a potential solution is to apply multi-perturbation adversarial training, similar to the approach used for enhancing robustness in supervised models against various perturbations <cit.>, which involved concurrent adversarial training with first-order ℓ_1, ℓ_2, and ℓ_∞ attacks. Therefore, to enhance the robustness of SSL encoders, we should not only fine-tune them on adversarial examples in the embedding space but also potentially perform robust tuning for each intended downstream task. § SOCIETAL IMPACT Prior work on SSL encoder robustness has primarily focused on classification tasks, leading to a false sense of security among users. Our findings reveal that encoders are also susceptible to attacks on other downstream tasks, underscoring the need for more comprehensive defenses. This paves the way for the development of robust solutions, thereby enhancing the trustworthiness and reliability of foundational SSL encoders for broader societal applications.
§ EXTENDED RELATED WORK §.§ Adversarial Robustness in SSL For supervised tasks, adversarial attacks produce imperceptible changes δ to an input x that result in the model predicting an incorrect label y <cit.>. To increase robustness, adversarial training <cit.> incorporates the perturbed data with the correct label into the training data. Since SSL operates without labels, this approach is not directly applicable. The initial method towards robust SSL proposed by <cit.> introduces a purifier network to defend against adversarial examples, which attempts to recover the original input from an adversarially perturbed version before inputting it to the encoder. Robust contrastive learning (RoCL) <cit.> instead aims to make the encoder itself robust by maximizing the similarity between a random augmentation of a data point and its instance-wise adversarial perturbation. RoCL translates instance-level robustness to class-level robustness, at the cost of substantial degradation in clean performance. <cit.> propose adversarial examples specifically designed to challenge contrastive learning methods. Using these adversarial examples, they develop a novel adversarial training algorithm for self-supervised learning, which they call Contrastive Learning with Adversarial Examples (CLAE). Compared to standard contrastive learning, CLAE creates more difficult positive pairs by using adversarial examples. Additionally, by optimizing over all images in a batch, CLAE produces more challenging negative pairs through adversarial training. In essence, CLAE strengthens contrastive learning models by exposing them to tailored adversarial attacks during training. <cit.> introduce adversarial contrastive learning (ACL) to improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations. They extend SimCLR <cit.> to learn robust representations by maximizing feature consistency between differently augmented views. <cit.> build on top of ACL and propose AdvCL, which leverages labels in addition to instance-level robustness to further boost robust performance. <cit.> propose Dynamic Adversarial Contrastive Learning (DYNACL) as an extension that uses pseudo-labels directly generated by the pre-trained encoder. All these methods require retraining the large SSL encoders from scratch to improve robustness which is highly impractical and computationally expensive. To solve the problem, <cit.> propose a two-stage framework called Decoupled Adversarial Contrastive Learning (DeACL) which fine-tunes existing encoders for increased robustness. Therefore, the knowledge of a pre-trained encoder is distilled to a robust one. The objective for the distillation are to: (1) match the distilled encoder representations to those of the pre-trained encoder, and (2) bring the distilled encoder's representations of adversarial examples close to their clean counterparts. Closeness is defined by cosine similarity, and adversarial examples are just those examples generated on the original trained encoder to maximize the distance to the original samples. A compelling aspect is that the decoupling approach of DeACL is not limited to contrastive learning - the original encoder could potentially leverage other self-supervised learning (SSL) methods. Only the distillation loss may need adaptation for SSL frameworks like MAE <cit.>, where cosine similarity may not be optimal. 
Through this approach, DeACL sets a new state-of-the-art by effectively and efficiently improving encoder robustness. This is achieved by decoupling the SSL pre-training stage from the adversarial fine-tuning stage. The flexibility of DeACL leaves room for exploring different SSL methods in the first pre-training stage. Given the many advantages of DeACL demonstrated thus far, we focus our evaluation on this approach. § HYPERPARAMETERS §.§ Further Insights on Depth Estimation The multi-scale gradient matching loss <cit.> encourages smoother transitions in depth predictions and penalizes differences in log-depth gradients across multiple scales: L_grad = 1/n∑_k ∑_i ( |∇_x R^k_i | + |∇_y R^k_i | ). The loss is computed at multiple scales, where R_i^k represents the value of the log-depth difference at position i and scale k. ∇_x and ∇_y denote the gradients in the x and y directions, respectively. The pixel-wise depth loss <cit.> measures the difference between the predicted and ground truth depth values in a scale-invariant manner: L_pixel = α√(1/T∑_i g_i^2 - ρ/T^2( ∑_i g_i )^2). Here, g_i = logd̃_i - log d_i, with d̃_i representing the predicted depth and d_i the ground truth depth. The parameters α and ρ are set to 1 and 0.85 in our experiments. The final loss we use is 1/2L_grad + L_pixel. §.§ DeACL fine-tuning In this section, we describe the hyperparameters we adopt to perform the adversarial fine-tuning proposed by <cit.> on DINOv1 with a ViT-B/16 backbone. We use a learning rate of 0.05 with a cosine scheduler and 10 epochs of warmup. We fine-tuned the model for 100 epochs with an SGD optimizer (momentum 0.9) and a batch size of 128. The adversarial perturbation budget ε was set to 4/255. We did not use weight decay. We employed random crops and random horizontal and vertical flips as training-time augmentations.
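A minimal sketch of the DeACL fine-tuning objective stated in the main text, with γ = 2 as above, is given below; the sign convention (minimizing the negative cosine similarities) and the function names are illustrative assumptions, and the inner routine that produces x_adv is omitted.

```python
import torch
import torch.nn.functional as F

def deacl_loss(student, teacher, x, x_adv, gamma=2.0):
    """DeACL distillation objective: keep the fine-tuned (student) encoder close to the
    frozen pre-trained (teacher) encoder on clean images, and pull its representation of
    adversarial images towards its own clean representation."""
    with torch.no_grad():
        t_clean = teacher(x)                  # frozen pre-trained encoder
    s_clean = student(x)                      # robust encoder being fine-tuned
    s_adv = student(x_adv)

    distill = F.cosine_similarity(s_clean, t_clean, dim=-1).mean()
    robust = F.cosine_similarity(s_adv, s_clean.detach(), dim=-1).mean()
    return -(distill + gamma * robust)        # maximize both similarities
```

The adversarial inputs x_adv would be produced by an inner PGD-style loop with the ε = 4/255 budget listed above; the remaining optimizer and augmentation settings follow the hyperparameters in this section.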
http://arxiv.org/abs/2407.12576v1
20240717140201
IICPilot: An Intelligent Integrated Circuit Backend Design Framework Using Open EDA
[ "Zesong Jiang", "Qing Zhang", "Cheng Liu", "Huawei Li", "Xiaowei Li" ]
cs.AR
[ "cs.AR", "cs.AI" ]
IICPilot: An Intelligent Integrated Circuit Backend Design Framework Using Open EDA Zesong Jiang^1,2, Qing Zhang^1, Cheng Liu^1,311 Corresponding author., Huawei Li^1,3, Xiaowei Li^1,3 ^1SKLP, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China ^2Institute of Advanced Technology, University of Science and Technology of China, Hefei, China ^3Dept. of Computer Science, University of Chinese Academy of Sciences, Beijing, China {liucheng}@ict.ac.cn This work is supported by the National Key R&D Program of China under Grant (2022YFB4500405), and the National Natural Science Foundation of China under Grant 62174162. July 22, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Open-source EDA tools are rapidly advancing, fostering collaboration, innovation, and knowledge sharing within the EDA community. However, the growing complexity of these tools, characterized by numerous design parameters and heuristics, poses a significant barrier to their widespread adoption. This complexity is particularly pronounced in integrated circuit (IC) backend designs, which place substantial demands on engineers' expertise in EDA tools. To tackle this challenge, we introduce IICPilot, an intelligent IC backend design system based on LLM technology. IICPilot automates various backend design procedures, including script generation, EDA tool invocation, design space exploration of EDA parameters, container-based computing resource allocation, and exception management. By automating these tasks, IICPilot significantly lowers the barrier to entry for open-source EDA tools. Specifically, IICPilot utilizes LangChain's multi-agent framework to efficiently handle distinct design tasks, enabling flexible enhancements independently. Moreover, IICPilot separates the backend design workflow from specific open-source EDA tools through a unified EDA calling interface. This approach allows seamless integration with different open-source EDA tools like OpenROAD and iEDA, streamlining the backend design and optimization across the EDA tools. LLM, Multi-Agent System, Integrated Circuit, IC Backend Design, Open EDA, Design Space Exploration. § INTRODUCTION Electronic Design Automation (EDA) occupies an important position in chip design and affects the PPA (performance, power, and area) of the resulting designs substantially <cit.>. As a critical interface between the chip design and fabrication, it converts the gate-level netlist generated in front-end to manufacturable GDSII data. However, with the continuous increase in chip design complexity and the accelerated pace of technological innovation, the backend design of integrated circuits faces multiple-folded challenges. On one hand, the backend design of chips involves a set of complex design procedures that require the use of various EDA software and design tools, each with complex programming interfaces and data formats, making automated chip backend design highly challenging. 
On the other hand, advanced backend design tools typically contain a large number of adjustable design parameters, and exploring the design space across a large number of options remains a pressing issue to be resolved. In addition, unlike commercial EDA tools, open-source EDA tools such as iEDA <cit.> and OpenROAD <cit.> are usually less robust due to limited manpower and financial support, and are unfamiliar to most designers, which further discourages the use of open EDA tools. Hence, we argue that automating the use of open EDA tools can substantially lower the barrier to using these EDA tools and encourage more feedback for continuous improvement. Although many scripts have already been developed to invoke these EDA tools conveniently, they remain insufficient for automating backend design, as they still require users to tune the configurations and parameters to suit different designs and constraints. Motivated by the successful adoption of large language models (LLMs) on more and more complex tasks such as robotics and autonomous driving, we also attempt to leverage the powerful reasoning and natural language understanding capabilities of LLMs to fully automate the use of open EDA tools and make them accessible to more designers without backend design expertise. In this context, we propose IICPilot, an LLM-based automatic IC backend design framework using open EDA. Essentially, it is a multi-agent system based on LLMs, with each agent specialized for a relatively independent task, such as floorplanning or routing, so that each agent can focus on a relatively short context and be updated without affecting the other agents. Particularly, it has a user proxy agent to understand the user requirements through multi-round interaction using natural language. Then, it has a control agent to leverage chain-of-thought reasoning and automatically produce a task sequence based on user requirements. Hence, the framework can be adapted to various backend designs. For instance, it can conduct design space exploration of the entire backend design flow or a specific backend design process. Basically, IICPilot can autonomously generate scripts, execute EDA tasks, and optimize chip PPA through design space exploration tools, lowering the barrier to using open EDA tools. Additionally, it can allocate appropriate computing resources through containers to sustain various complex EDA tasks, which may involve a set of time-consuming yet dependent EDA procedures. This not only effectively breaks the barrier to using open EDA tools but also scales complex backend design tasks over a distributed computing system for higher performance. The major contributions of this work are as follows: * We introduce IICPilot, the first intelligent backend design framework offering full-stack automation including backend design and distributed deployment for open EDA. * We propose a container agent that can automatically allocate appropriate computing resources for various EDA tasks and RTL designs, and provide scalable runtime optimization on distributed computing systems. * We propose a DSE agent that can automatically extract and adjust the parameters of open EDA tools for the sake of better PPA. § BACKGROUND & RELATED WORK §.§ LLM-based Design Automation LLMs such as GPT-3.5 and GPT-4 have achieved significant milestones in the field of natural language processing, offering robust support for research and applications with their exceptional performance and broad applicability.
Multi-agent systems, comprised of collaborative agents, possess the ability to independently perceive, make decisions, and interact with each other. Following the advent of cutting-edge LLMs, the utilization of intelligent agents has been propelled into a new era of prominence and significance. Recently, multi-agent systems based on LLMs have been applied across various fields, including software design<cit.>, robotics<cit.>, social simulation<cit.>, and game simulation<cit.>, significantly enhancing project efficiency and simulation outcomes. Concurrently, there is also significant research on LLMs in the context of IC backend development. HDL debugger<cit.> uses large models and RAG technology to debug hardware description languages. VeriGen<cit.> has improved by scaling up the model size and expanding the hardware dataset. RTLLM<cit.> and VerilogEval<cit.> have introduced larger-scale open benchmarks for designing RTL generation based on natural language, evaluating prompt and fine-tuned models on these benchmarks. Chip-Chat<cit.> aims to assess the collaborative effectiveness of GPT-4 with hardware designers in generating processors and completing tape-outs. ChatEDA <cit.> automates EDA tools using LLMs. RTLCoder <cit.>, CodeGen <cit.>, VeriAssist <cit.>, and AutoChip <cit.> achieve RTL code generation through LLMs. More studies on LLMs for EDA can be referenced in this survey <cit.>. Despite the abundance of related research <cit.>, the integration of LLMs with multi-agent systems for intelligent work in IC backend design remains unexplored. By integrating LLMs and multi-agent systems, we can create a more intelligent and efficient dedicated IC backend system that can deeply analyze user needs and automate complex tasks, thus alleviating the burden on engineers and enhancing design efficiency and quality. §.§ Design Space Exploration of CAD Tools In IC backend design, Design Space Exploration (DSE) systematically evaluates various design options and parameter combinations to determine the optimal design that meets specific performance, power consumption, area, and cost requirements. The application of DSE offers significant advantages: it enables designers to quickly identify the optimal solution within a complex design space, eliminating the need for blind trials and lengthy iterations characteristic of traditional design processes. Additionally, DSE provides a range of alternative design options, allowing designers to select the most suitable design based on their specific needs. With the ongoing evolution of artificial intelligence and machine learning technologies, many efforts<cit.> have significantly boosted chip performance by integrating these advanced techniques into DSE. Currently, DSE plays a crucial role in IC backend design, driving the continuous advancement of the integrated circuit design field. This article aims to leverage open-source tools to complete the DSE and achieve the best combination of backend parameters. §.§ Kubernetes and Containers Kubernetes (K8s) is a powerful open-source container orchestration system designed for automating container deployment, scaling, and management. As a lightweight virtualization technology, containers encapsulate applications and their dependencies into portable units, enabling seamless migration and operation across different environments. In our research, the automation capabilities of K8s are crucial. 
It automatically handles container deployment, scaling, scheduling, and fault recovery, ensuring optimal task performance within the cluster. Furthermore, the intelligent agent in our system can allocate and adjust resources based on the specific needs of IC backend tasks through K8s, maximizing design throughput and efficiency. These features make K8s an ideal choice for managing IC backend operations. § IICPILOT FRAMEWORK In this study, we introduce IICPilot, a multi-agent-driven end-to-end EDA optimization framework. The aim of this framework is to implement intelligent solutions for EDA tasks. IICPilot can automate IC backend tasks through the collaborative efforts of multiple agents. Unlike previous LLM-based approaches to EDA tasks, this framework not only addresses script generation for running EDA tasks, but also enhances chip performance using DSE tools and reduces resource costs through containerization. It proposes solutions for runtime failures in practical scenarios and, importantly, supports two open-source platforms, demonstrating practical utility. Additionally, the framework can monitor agent operations in real time, obtain status reports, and access historical records of executions. §.§ LLM-based Agent Construction As illustrated in <Ref>, for each agent within the IICPilot system, we have carefully designed its architecture, selected and developed various tools, and ensured that the agent can effectively utilize these tools. Firstly, the user agent acts as the interface between the entire framework and the user and is responsible for translating user requirements into feasible IC backend tasks. It can also gather information from users when other agents deem the available information insufficient. The control agent, on the other hand, transforms user requirements into a task list and assigns tasks to other agents. The EDA agent generates or modifies the EDA task scripts according to the task list and executes the different backend design processes. To enhance its generation capability for various EDA tasks, we have also equipped the EDA agent with the ability to use tools and understand EDA documentation. Furthermore, we observed that the agent can become confused when processing and interpreting large volumes of information. To address this, we classify the information in EDA tasks into essential and optional categories, so that the agent extracts the effective information before completing the task. Essential information refers to the data necessary to complete the EDA task, while optional information encompasses additional optimization details provided by the user to improve task outcomes. In this system, essential information includes the RTL design, the specific EDA stage, the selected technology node, and the constraint files. Optional information includes the clock period, core area utilization, placement density, etc. Essential information generally has well-defined paths and details, and we only need to articulate the requirements. However, optional information involves multiple dimensions and requires users to provide file paths, filenames, and specific modification values to facilitate task completion. Figure <ref> illustrates an example of how the EDA agent works based on this method. The DSE agent conducts design space exploration to optimize IC backend configuration parameters, enhancing chip performance. We define the role and objectives of the DSE agent and provide it with tools for modifying JSON-formatted parameter configuration files; a minimal sketch of such a tool is given below.
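As an illustration of the configuration-modification tool mentioned above, the following minimal Python sketch loads a JSON parameter file, applies the requested changes, and writes it back. The file name and parameter keys (e.g., "place_density", "clock_period_ns") are hypothetical examples; the actual parameter space depends on the chosen platform (iEDA or OpenROAD) and the DSE tool.

```python
# A minimal sketch of a JSON configuration-modification tool for the DSE agent.
# File names and parameter keys are illustrative assumptions only.
import json

def update_dse_config(config_path, updates):
    """Load a JSON parameter-configuration file, apply the requested parameter
    changes (new values or search ranges), and write the file back."""
    with open(config_path, "r") as f:
        config = json.load(f)
    for key, value in updates.items():
        config[key] = value                  # overwrite or add a parameter entry
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Usage sketch: tighten the placement-density search range and fix the target
# clock period before the next DSE iteration.
# update_dse_config("dse_params.json",
#                   {"place_density": {"min": 0.6, "max": 0.8},
#                    "clock_period_ns": 2.0})
```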
To facilitate the process of design space exploration, we equip the agent with design space exploration tools (Autotuner <cit.> and Hypermapper <cit.>), and the DSE agent invokes the EDA agent through these tools to complete EDA runs during each iteration. Additionally, to address potential issues where unreasonable parameter ranges hinder exploration, the DSE agent can consult the fault list to find solutions. This fault list is introduced in Section <ref>. Furthermore, for enhanced compatibility across the two platforms, we have developed the iEDA agent and the OpenROAD agent. Each agent holds comprehensive resources pertaining to iEDA and OpenROAD, respectively. When the EDA agent needs to accomplish a task using a specific open-source toolset, it can request specific information and interfaces from these agents, thereby facilitating seamless execution of EDA tasks. The container agent creates a container with suitable machine resources for EDA or DSE tasks, optimizing resource utilization and reducing costs. To achieve this, we provide it with the K8s API for flexible container creation and resource allocation. Since machine resource cost is calculated as the product of the cost per unit time of a machine configuration and the runtime, we aim to predict the time required to complete specific tasks under various machine configurations. In general, the runtime of EDA tasks correlates with the scale of the RTL design, the computational resources, and the nature of the tasks involved. To address this, we employ machine learning techniques to forecast EDA task runtimes across the various stages, considering the RTL design size and the machine configuration. The container agent can invoke the trained model to allocate resources efficiently for a specific design and EDA stage. Furthermore, for EDA problems involving multiple stages or entire flows with time constraints (a scenario of significant practical value, given that most chip design-to-tapeout processes are not only constrained by resources but also have deadlines for completion), we adapt the approach from <cit.>, <cit.>, mapping the problem to the Multiple-Choice Knapsack Problem (MCKP) and obtaining a solution. Specifically, each EDA subtask can be run on different machine configurations (i.e., numbers of vCPUs), each of which completes the job in time t at cost p. Let y_m(C) be an optimal solution defined on m subtasks with a total time constraint C: y_m(C) := max ∑_i=1^m∑_j=1^K_i s_ij/c_ij such that ∑_i=1^m∑_j=1^K_i s_ij t_ij ≤ C, ∑_j=1^K_i s_ij = 1, i=1,…,m, s_ij∈{0, 1}, i=1,…,m, j=1,…,K_i, where s_ij denotes whether we select configuration j for subtask i, K_i is the number of configurations available for subtask i, t_ij is the predicted runtime of subtask i under configuration j, and c_ij represents the expenditure incurred for executing stage i with configuration j, which we obtain from the price list of the designated cloud service provider. To find the optimal configuration, we implemented a pseudopolynomial solution using dynamic programming based on the method of Dudzinski and Walukiewicz <cit.>: y_m(C) = max{[ y_m-1(C - t_m1) + 1/c_m1 if 0 ≤ C - t_m1,; y_m-1(C - t_m2) + 1/c_m2 if 0 ≤ C - t_m2,; ⋮ ; y_m-1(C - t_mK_m) + 1/c_mK_m if 0 ≤ C - t_mK_m. ]. Using this strategy, we can determine the optimal CPU configuration selection for multiple EDA tasks with time constraints. Considering practical application scenarios, we have also designed the monitor agent and the memory agent. The monitor agent monitors the real-time operation of the agents, providing timely feedback to users in case of issues.
Additionally, users can request status reports from specific agents to understand their operational status. The memory agent maintains system operation records, allowing users to access historical data. §.§ The Workflow of IICPilot As illustrated in <Ref>, we provide two descriptive examples to elaborate on the system's workflow. In the context of <Ref>, our multi-agent system initially captures user requirements through the user proxy agent and forwards them to the control agent, which decomposes these requirements into a series of subtasks executable by the intelligent agents. Specifically, EDA tasks are first assigned to the EDA agent. Upon receiving such tasks, the agent first retrieves process-specific information from the corresponding platform agent (typically the iEDA agent). The EDA agent then executes the required EDA tasks accordingly. Next come the DSE tasks, for which the DSE agent is activated to generate or modify parameter configuration files and to iterate through the EDA process via the EDA agent. It is noteworthy that, to optimize machine resource utilization and minimize task completion costs, these tasks are executed within containers. Specifically, upon completion of each subtask, the commands and configuration files are seamlessly transferred to the container agent, which determines the machine resources required for container allocation. This comprehensive process covers the entire automated workflow, from user input to IC backend task generation, culminating in execution at the container level. <Ref> shows that if the user does not provide specific constraints or detailed information, the agent will proactively request additional information from the user. Furthermore, we provide a fault list: upon encountering errors during execution, the internal agents of the system are expected to refer to this documentation, autonomously identify solutions, and self-correct based on its contents. This mechanism primarily targets the DSE agent because DSE involves iterative optimization, in which numerous parameter-related issues may arise, making it relatively complex. Users can contribute to this documentation by adding encountered errors and their corresponding solutions, gradually achieving comprehensive coverage of error scenarios. We have found that, with the help of this list, the agents can effectively address the issues shown in <Ref>. § EXPERIMENT In this section, we evaluate the IICPilot framework, examining its capabilities ranging from the automated execution of EDA tasks to optimizing chip performance and reducing machine resource costs. Furthermore, we delve into the key techniques employed in IICPilot and substantiate their benefits through thorough experimental validation. §.§ Experiment Setup To evaluate the effectiveness of the IICPilot framework, we conducted a series of experiments using open-source EDA tools. Prior to the experimental phase, we selected iEDA and OpenROAD as our platforms and utilized Autotuner and Hypermapper for design space exploration. Additionally, we configured an appropriate K8s environment and deployed four nodes on Alibaba Cloud. We also gathered 400 data points from open-source websites such as Opencores <cit.> to support the container-related experiments. §.§ Evaluation on EDA Tasks In the first set of experiments, we aim to evaluate the efficacy of the multi-agent system in executing EDA tasks. Due to space constraints, we only demonstrate the effectiveness of the framework on iEDA.
Without the need to invoke the planning agent, these tasks can be completed smoothly by relying on the EDA agent and the user proxy agent. As shown in <Ref>, an exemplary experiment demonstrates the successful completion of the corresponding tasks executed by the EDA agent and the user proxy agent. §.§ Evaluation on DSE Tasks In the second group of experiments, we aim to evaluate the capability of the multi-agent system to execute DSE tasks. We conduct multiple experiments using the Autotuner from OpenROAD, validating the system's effectiveness through empirical results. As illustrated in <Ref>, the experiments explore the backend parameter spaces of picorv32, ibex, gcd, and aes to assess the system's impact on chip performance optimization. The multi-agent system is tasked with using the DSE agent to complete the design space exploration for specific designs, focusing on optimizing metrics such as area, power consumption, and critical path delay. The results demonstrate that the multi-agent system effectively improves the performance metrics across different designs. §.§ Evaluation on Resource Allocation In this experiment, to illustrate the advantages of containerization and multi-node deployment over non-deployed scenarios, we first deployed four nodes in the cloud. We conducted multiple experiments running eight different RTL designs on configurations utilizing 1, 2, 3, and 4 nodes, respectively. As shown in <Ref>, compared to non-deployed environments, the multi-node container deployment architecture based on Kubernetes enables concurrent execution of multiple tasks, demonstrating significant speed advantages. We then test the capability of the multi-agent system to perform intelligent resource allocation using containers. To achieve this, we take vCPU configuration selection as an example and verify the system's ability through experimental results. In this experiment, we first complete the EDA subflow for 400 benchmarks from open-source websites (e.g., OpenCores) under different settings (1 vCPU, 2 vCPUs, 4 vCPUs, 8 vCPUs) to measure the runtime under the various configurations. Based on these measurements, a simple dataset is created, and a random forest model is employed for prediction, using features such as the number of cells, which reflects the design size, and the machine configuration. The container agent can invoke this trained model to predict the runtime of a specific design under a particular configuration. Then, the cost of each configuration for each EDA stage is calculated based on the unit-hour running cost of the different CPU configurations on Alibaba Cloud, yielding the lowest-cost machine configuration. Finally, the container agent allocates the corresponding resources to the container and completes the EDA tasks within it based on this result. <Ref> provides an example of using machine learning to predict runtime and obtain containers at minimum cost. We can observe that when a user needs to complete the placement task for the RTL design picorv32, a four-vCPU configuration is the most cost-effective choice. In addition, <Ref> illustrates the accuracy of the invoked model in predicting runtime. Furthermore, for EDA problems involving multiple subtasks or entire flows with time constraints, as depicted in <Ref>, the container agent provides a solution based on the MCKP formulation to achieve optimal resource allocation within the system; a minimal sketch of this runtime-prediction and configuration-selection procedure is given below.
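To make the resource-selection step above concrete, the following hedged Python sketch combines the two ingredients described in this section: a random-forest runtime predictor and an MCKP-style dynamic program over per-stage vCPU configurations. The vCPU options, unit prices, feature encoding, and time discretization are illustrative assumptions, and for readability the sketch minimizes total cost under the deadline directly rather than maximizing the sum of reciprocal costs used in the formulation above.

```python
# Hedged sketch of the container agent's configuration selection: a random
# forest predicts per-stage runtimes, and a knapsack-style dynamic program
# picks one vCPU option per stage so the flow finishes within a deadline at
# minimum cost. Prices, options, and features are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

VCPU_OPTIONS = [1, 2, 4, 8]                              # candidate configurations
PRICE_PER_HOUR = {1: 0.05, 2: 0.10, 4: 0.20, 8: 0.40}    # assumed cloud unit prices

def train_runtime_model(features, runtimes_h):
    """features: rows of (cell_count, vcpus, stage_id); runtimes_h: hours.
    stage_id is a numeric encoding of the EDA stage (e.g., 0=floorplan, 1=place)."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features, runtimes_h)
    return model

def select_configs(model, cell_count, stage_ids, deadline_h, step_h=0.1):
    """Return (per-stage vCPU plan, total cost) meeting the deadline, or None."""
    n_bins = int(deadline_h / step_h) + 1
    INF = float("inf")
    best = np.full(n_bins, INF)
    best[0] = 0.0                                        # zero stages, zero time, zero cost
    choice = [dict() for _ in stage_ids]                 # backtracking table
    for i, stage in enumerate(stage_ids):
        new_best = np.full(n_bins, INF)
        for v in VCPU_OPTIONS:
            t = float(model.predict([[cell_count, v, stage]])[0])
            cost = PRICE_PER_HOUR[v] * t
            bins = max(1, int(round(t / step_h)))        # discretized runtime
            for c in range(bins, n_bins):
                cand = best[c - bins] + cost
                if cand < new_best[c]:
                    new_best[c] = cand
                    choice[i][c] = (v, c - bins)
        best = new_best
    c = int(np.argmin(best))
    if not np.isfinite(best[c]):
        return None                                      # deadline is infeasible
    total_cost, plan = float(best[c]), []
    for i in range(len(stage_ids) - 1, -1, -1):
        v, c = choice[i][c]
        plan.append((stage_ids[i], v))
    plan.reverse()
    return plan, total_cost
```

In practice the training data would come from the measured OpenCores benchmark runs mentioned above, and the price table from the cloud provider's price list.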
§ CONCLUSION This paper presents a multi-agent system dedicated to IC backend design that completes a variety of backend tasks, assisting engineers and substantially reducing their manual effort. The effectiveness of the system has been verified using the open-source tools iEDA and OpenROAD. We believe that, with continuous optimization, this system will play an even greater role in IC backend design in the future.
http://arxiv.org/abs/2407.12471v1
20240717104947
Characterization of Political Polarized Users Attacked by Language Toxicity on Twitter
[ "Wentao Xu" ]
cs.CY
[ "cs.CY", "cs.CL", "91, 94", "J.4" ]
Characterization of Political Polarized Users Attacked by Language Toxicity on Twitter Wentao Xu July 22, 2024 ========================================================================================================================= § ABSTRACT Understanding the dynamics of language toxicity on social media is important for investigating the propagation of misinformation and the development of echo chambers in political scenarios such as U.S. presidential elections. Recent research has used large-scale data to investigate these dynamics across social media platforms. However, research on toxicity dynamics remains limited. This study aims to provide a first exploration of the potential flow of language toxicity among Left, Right and Center users. Specifically, we aim to examine whether Left users were more likely to be attacked with toxic language. In this study, more than 500M Twitter posts were examined. It was discovered that Left users received far more toxic replies than Right and Center users. § INTRODUCTION Social media has become an indispensable daily element of contemporary social life <cit.>. While social media platforms bring people freedom of communication, studies have identified language toxicity emerging across platforms such as Twitter (now X) <cit.>, Facebook <cit.>, Reddit <cit.>, YouTube <cit.>, Telegram <cit.>, and Whisper <cit.>. These studies have identified various forms of toxic content, including violence, obscenity, threats, insults, and abusive language. Toxic language in online social networks is more prevalent between users with no connection than between mutual friends, and mildly offensive terms are used more frequently to express hostility between these two groups <cit.>. The nature and extent of toxic language can vary by platform. For instance, Reddit has been found to contain a higher frequency of posts with insults, identity attacks, threats of violence, and sexual harassment <cit.>. Additionally, research indicates that while most studies on offensive language detection have focused on English, toxic content has been identified in other languages, such as Greek and Indonesian, as well <cit.>. In addition, the presence of toxic language on Twitter can have significant negative impacts on individuals and communities, even affecting mental health <cit.>. In political scenarios, online conversations during U.S. presidential elections can indeed exhibit toxicity <cit.>. While social media platforms are praised for enhancing democratic discussions, the presence of social bots can distort political discourse, potentially influencing public opinion and election integrity negatively <cit.>. The behaviour of social bots aggravates the propagation of toxic content. Toxicity in online political talk is often linked to incivility, challenging the perception that such talk is beneficial for elections. Moreover, the study of online chatter surrounding elections is crucial for ensuring evidence-based political discourse and free and fair elections <cit.>. Therefore, while online conversations can provide a platform for political discussions, the toxicity and manipulation through bots underscore the importance of monitoring and studying these interactions to safeguard the election process. However, the political process is severely affected by polarization. Polarization is prevalent on Twitter, as the platform serves as a significant space for political discourse, which can influence public opinion and democratic processes.
Studies have shown that Twitter can both facilitate cross-ideological exchanges and contribute to the clustering of users around shared political views, potentially reinforcing partisan loyalties and contributing to polarization <cit.>. The impact of Twitter on political polarization is also significant in fragmented political systems, where the platform's role in shaping communication among political entities can affect collaboration between parties and the overall political landscape <cit.>. Interestingly, while some research supports the “echo chambers” view, suggesting that social media platforms like Twitter foster political polarization by creating fragmented and niche-oriented spaces for like-minded individuals, other studies highlight the presence of cross-cutting interactions that could mitigate this effect <cit.>. Moreover, the influence of social media, on political polarization has been demonstrated through simulations, indicating that the type of political views presented in social media can shape the political orientation of a population <cit.>. The investigation into political polarization on Twitter is crucial due to the platform's ability to shape political communication and influence public opinion. While there is evidence of both echo chambers and cross-ideological interactions, the overall effect of Twitter on political polarization varies and is a subject of ongoing scholarly debate. Language on Twitter can potentially exacerbate political polarization <cit.>. The language of toxicity can be bared in the echo chambers on Twitter, leading to more extreme and toxic language, reinforcing divisions<cit.>. Moreover, language on Twitter can also be affected by traditional media, such as broadcast media language, which contributes to toxic online interactions<cit.>. Additionally, Twitter is vulnerable to manipulation by malicious actors who use polarized and toxic language to sow discord. This can significantly distort the political landscape and influence public opinion <cit.>. The most recent extensive research on language toxicity shows that toxicity does not always increase as online discussions progress, suggesting more rounds of conversation may not lead to higher toxicity <cit.>. However, the detailed dynamics of these toxic politically polarized conversations are still unclear. Here we focus on the replies to politically polarized users on Twitter. In most cases, any social networking service (SNS) account can freely engage with another one with texts, which could lead to further negative and harmful online social behaviours, such as rounds of toxic replying. <cit.> discovered that anti-vaxxers are more aggressive in replying by analyzing toxic replies of English and Japanese tweets. <cit.> found that ideological extremity is more associated with the conservatives than the liberals through network analysis. <cit.> identified toxic replies diffusing patterns on Twitter based on news outlets diffused on Twitter. These studies helped understand the mechanism of replies on Twitter, but the language toxicity patterns of politically polarized Twitter replies were not investigated. This study examined the correlation between political polarization and language toxicity of Right, “Center,” and Left replies. § DATA & METHODS §.§ Data It is well known that the COVID-19 pandemic is a worldwide healthcare crisis, during which political polarization was intensified <cit.>. 
Such a catastrophic global situation provides a time window to examine the association between political polarization and online language toxicity. In this study, 542,212,429 English tweets were collected from February 20 2020 to May 30 2022 by querying COVID-19-related keywords: “corona virus”, “coronavirus", “covid19”, “2019-nCoV”, “SARS-CoV-2”, “wuhanpneumonia” using the Twitter Search API. A total of 25,370,268 replies of English tweets were used for this study. §.§ User annotation A politically-leaning URL domain list of news websites was then obtained by requesting from Allsides[www.allsides.com] for academic research purposes, which contains 160 Left and Lean Left URLs, 98 Right and Lean Right URLs and 180 Center URLs. Based on the list, each reply was labelled as Right if its domain of the Twitter URL object was identified in the Right or Lean Right domain list; the other replies were labelled as Left and Center, accordingly. To examine the degree to which a user engages with labelled replies, we categorized users according to their replies' domain labels. For example, the Right user category includes users whose reply URL objects contain the Right domains, exclusively. It happens that a reply does not contain any URLs. Please, keep in mind that this study only looked at replies that met two criteria: * The user to whom were replied (“in_reply_to_screen_name” in the standard Twitter object [<https://developer.twitter.com/en/docs/twitter-api/v1/data-dictionary/object-model/tweet>]) is not a “Null” value. * The reply contains at least, one domain in the Twitter URL object. Meanwhile, the study further considered the frequency with which each user was replied to in each politically-leaning category. For example, if the Left domains occurred in a replied-to user's reply URL object three times without Center and Right domains occurring, this user was considered to be a three-time-replied-to user in the Left category and was called a three-time-replied-to Left user. As a result, a user who was replied more frequently, in this study, is regarded as a more engaged user with a politically-leaning domain category. To this end, Twitter users were annotated as Left, Right and Center categories. §.§ Toxicity Calculation The Perspective API [<https://www.perspectiveapi.com/>] is considered suitable for toxicity calculation due to its machine learning-based approach to detecting and moderating toxic content on social media platforms <cit.>. It has been adopted for content moderation, monitoring, and research purposes. It aligns well with human ratings of toxicity <cit.> and disrespectfulness, especially for highly toxic comments <cit.>, indicating the capability of language toxicity measurement for Perspective API is robust. For the text input into the Perspective API, a probability score scaling within [0,1] is calculated. The higher the score is, the more toxic the input text is. Some research uses a threshold for classifying “toxic” and “nontoxic” texts. Here, this strategy was not adopted, as I need to characterise the toxicity of all users. To measure the toxicity of each user, the replied texts for each user were aggregated, and then sent to Perspective API. Since each category of users possesses various statistical indicators for toxicity, here, the analysis for maximum and median toxicity scores of Left, Right and Center was reported. § RESULTS §.§ The Left received much more toxic replies. 
The overall negative correlation between maximum toxicity and the replied times was identified in this study(Figure.<ref>). The maximum toxicity is the highest value of language toxicity of the category with specific replied times. Figure 1 illustrates the maximum toxicities of each category replied at different times, indicating that more-replied-to users were less likely to receive replies with high toxicity. The Right and Center categories replied-to users shared a similar maximum toxicity distribution (Kolmogorov-Smirnov test, p>0.05), while the Left category showed a different distribution (p<0.05). However, the statistical difference does not change the overall trend of the three categories. In general, more frequently replied-to users shared lower maximum toxicities, regardless of user category. The Left category differs from the others, possibly due to the higher toxicity values of several outliers. For instance, some Left category outliers (indicated by arrows in Figure <ref>) shared larger toxicities and some of them even reached more than 0.8. The outliers could be top toxic repliers. By contrast, the Right category users' maximum toxicities were less than 0.4, when they were replied more than, approximately 1,000 times. The maximum toxicity is compared between categories in Figure <ref>. This reveals that the maximum toxicities of the Left category users are significantly higher than those of the other two categories (Mann-Whitney U test, p < 0.005). §.§ The Left and Center outliers received much more toxic replies. The median can be used to represent the centre tendency of a dataset. In contrast to the maximum scenario, the level of median toxicity did not exhibit a negative correlation with replied times. Most of the median toxicity values were concentrated between around 0.05 to 0.4. This overall tendency showed that the toxicity of replies was less aggressive, but fluctuated as the replied times increased. Specifically, when we looked at the Right category users, the median toxicities were below 0.5, but the outlier values for Left and Center users reached over 0.7. No statistical significance was identified across the Left, Right, and Center, suggesting the three categories shared a similar distribution for median toxicity (Figure <ref>), and no significant median toxicity group was identified out of the three categories (Figure <ref>). § DISCUSSION This study shows that Left users could receive more toxic replies than Right and Center users. This pattern of toxicity propagation is important for understanding misinformation propagation and echo chamber development, as toxicity in online interactions can lead to a decrease in user activity, ultimately impacting the collaborative nature of platforms  <cit.>. Previous research confirmed that the left group was more distant from the neutral group than the right group <cit.>. However, this study found that Left users were much closer to Right users than the Center user, in terms of maximum toxicities. This “toxicity distance” might suggest that right and left users were sending toxicities to each other, but Left users received much more. Although there was no significant difference in the language toxicity across the replied-to users of the Left, Right, and Center categories, the replied users targeted by toxic repliers in each category cannot be neglectable, especially the Left users. 
Careful consideration should be given to the precautions needed to protect users from language toxicity attacks, especially during political discussions such as U.S. presidential elections. When users engage with the Left, attention should be paid to toxic comments and replies, which might further pollute the SNS ecosystem and make users more emotional. Future work will examine the interaction and engagement dynamics of the Left, Right, and Center. In addition, more intelligent tools need to be developed to combat aggressive toxic language and keep the SNS ecosystem healthier. This study has implications for other platforms, such as Facebook and Reddit.
http://arxiv.org/abs/2407.13555v1
20240718142831
PetFace: A Large-Scale Dataset and Benchmark for Animal Identification
[ "Risa Shinoda", "Kaede Shiohara" ]
cs.CV
[ "cs.CV" ]
: A Large-Scale Dataset and Benchmark for Animal Identification R. Shinoda and K. Shiohara ^*Equal contribution ^1Kyoto University, Japan ^2The University of Tokyo, Japan : A Large-Scale Dataset and Benchmark for Animal Identification Risa Shinoda^1* 0009-0006-3965-7933 Kaede Shiohara^2* 0009-0005-0603-3377 ============================================================================= § ABSTRACT Automated animal face identification plays a crucial role in the monitoring of behaviors, conducting of surveys, and finding of lost animals. Despite the advancements in human face identification, the lack of datasets and benchmarks in the animal domain has impeded progress. In this paper, we introduce the dataset, a comprehensive resource for animal face identification encompassing 257,484 unique individuals across 13 animal families and 319 breed categories, including both experimental and pet animals. This large-scale collection of individuals facilitates the investigation of unseen animal face verification, an area that has not been sufficiently explored in existing datasets due to the limited number of individuals. Moreover, also has fine-grained annotations such as sex, breed, color, and pattern. We provide multiple benchmarks including re-identification for seen individuals and verification for unseen individuals. The models trained on our dataset outperform those trained on prior datasets, even for detailed breed variations and unseen animal families. Our result also indicates that there is some room to improve the performance of integrated identification on multiple animal families. We hope the dataset will facilitate animal face identification and encourage the development of non-invasive animal automatic identification methods. Our dataset and code are available at <https://dahlian00.github.io/PetFacePage/>. § INTRODUCTION Animal identification plays a crucial role in animal studies and applications such as monitoring animal behavior, conducting habitat surveys, locating missing animals, and performing health checks. Traditional identification techniques, including ear tags, tattoos, ear punching, and toe clipping, continue to be utilized mainly for experimental animals and livestock <cit.>. However, given that these methods have the potential to cause stress and pain <cit.>, their use should be minimized to prioritize animal welfare. Therefore, there is a pressing need for the development and adoption of identification technologies that are not only effective and efficient but also minimally invasive, thereby mitigating the ethical concerns associated with traditional methods. While advanced tools such as digital IDs <cit.> have been introduced, their applications involve a laborious process, , attaching the devices to each animal individually. The process is costly and potentially stressful for the animals. Moreover, these physical tags can identify only pre-defined individuals, which makes them impractical for use in real world scenarios. In the human domain, digital face recognition is one of the effective approaches for the identification. It has been developed for use in smartphones, airport security, and systems for finding missing people. Therefore, the research community made great efforts to develop sophisticated deep learning-based face recognition models <cit.>, empowered by large-scale datasets and benchmarks, ,  <cit.>. 
Despite the promise of human face recognition, the research progress towards automatic animal face individual recognition has been impeded, primarily because of the lack of extensive datasets and benchmarks for animal face recognition. Previous openly available datasets mostly include less than 100 individuals <cit.>, which makes it far from generalized and discriminative identification models and precise evaluation for unseen individuals. In this paper, we introduce a large-scale animal face recognition dataset called that contains 257,484 individuals in total across 13 species with 319 breeds with 1,012,934 images. We show the example images of our in Fig <ref>. The number of individuals in our dataset is over 110 times that in the previous largest animal face dataset <cit.>. We sourced images and related information from the internet, with automated and manual filtering processes applied to ensure the dataset is not only large but also finely detailed and clean. Moreover has fine-grained annotations including sex, breeds, and colors and patterns of their skin, which allows further investigation for fine-grained recognition and evaluation. offers two benchmarks: one for recognizing known (seen) individuals and the other for recognizing unknown (unseen) ones. We also conduct the verification of the fine-grade breeds and unseen animal categories. Our main contributions are as follows: (i) We establish a new dataset for animal face recognition called , which contains a total of 257,484 individuals across 13 types of animal families and 319 breeds with fine-grade annotations including sex, breed, color of animals. (ii) We set the benchmarks on recognizing known (seen) individuals and unknown (unseen) ones. (iii) We show that the model trained using our dataset shows the generalization capabilities for unseen individuals and even for unseen animal categories. § RELATED WORK Building datasets and benchmarks is an important step in advancing animal re-identification through deep models. While earlier efforts have established the basics of animal face recognition, there is considerable potential for further development. Compared to the dataset for human face recognition <cit.>, those for animal faces <cit.> have much fewer individuals. The evaluation scenarios represented in these datasets often fall short of real-world applicability; most of the datasets focus on closed-set re-identification, rather than recognizing unseen individuals that is a critical requirement for practical applications. Our work enables the training and evaluation on a large number of individuals across a wider range of animal families and breeds. Human Face Identification is the process of identifying an individual's identity using their unique facial characteristics. The huge demands for individual identification have grown mainly in the human domain. In the identification of human faces, which is known as face recognition, considerable efforts have been expended in the research community <cit.>. Because of their domain-agnostic frameworks, most of the state-of-the-art methods in face recognition can be exported into other domains, such as animal identification. Furthermore, these advancements have been supported by the introduction of large-scale datasets and benchmarks <cit.> that have played a crucial role in facilitating the exploration of powerful models. Motivated by this, we create the dataset to fill the gap between human face recognition and animal face recognition. 
Animal Identification is the process of identifying an individual's identity using their unique body or facial characteristics, which is an important task across various scenarios, including monitoring animals, conducting habitat surveys, and finding missing animals. With the advance of computer vision technology, various openly available datasets contribute to computer vision for animals <cit.>. Various methods are used to create the dataset, such as recording images <cit.> and videos <cit.>, and using aerial image <cit.>. Recording individual information is labor intensive; therefore, most of the previous datasets contain only a limited number of individuals and are focused on one species. Recently, WildlifeDatasets <cit.>, which gathers previously openly available datasets  <cit.> combined, was introduced. This research creates the benchmarks using existing available datasets. The advances in human face recognition raise an interest in animal face recognition. One notable challenge is the variation in facial structures between animals and humans. This has led to studies like AnimalWeb <cit.> and CatFLW <cit.> that propose specialized methods for animal facial key points detection. Automatic animal face identification has been studied, which can help humans to monitor animals <cit.>. We review several publicly available face identification datasets in Table <ref>. Apes, such as Chimpanzees <cit.> and Gorillas <cit.>, are one of the species that have been studied. However, as the dataset size is relatively limited, the datasets are insufficient to evaluate new unseen individual faces. In addition to primates, research on facial (head) identification has also extended to turtles, with the ZindiTurtleRecall <cit.> dataset offering a substantial number of images of individual turtles in a controlled environment. To address the limitations of in-the-wild data collection, the SeaTurtleID2022 <cit.> dataset includes wild data, albeit with a reduced number of individuals, owing to the extensive effort required for data collection. Among datasets containing mammals, the DogFaceNet dataset <cit.> featuring 1,393 individual dogs stands out for its size. Nevertheless, the significant variation in appearance across dog breeds suggests that even this larger dataset may not be sufficiently comprehensive. In addition to the limited number of individuals, previous animal face datasets often focus on only single animal families. They can contribute to specific animal family research but can not explore animal identification across many families and breeds. § DATASET is a large animal face identification dataset that expands research on animal face recognition, which has been impeded by a scarcity of suitable datasets and benchmarks. This section details the construction of the dataset, including our labor-efficient methods for collecting animal face images and a semi-automated filtering process to ensure quality fine-grained categorization and statistics of the dataset. §.§ Dataset Statistics The dataset encompasses 1,012,934 images spanning 257,484 unique individuals. Detailed distributions of animal families are illustrated in Fig <ref>(a). Images are cropped to 224 × 224 pixels around their faces. Our also has fine-grained annotations. Sex distribution across different animal families is depicted in Fig <ref>(b), with sex information available for 240,861 individuals, accounting for 94% of the dataset. 
The dataset includes annotations for 319 breeds, with examples of breed annotations shown in Fig <ref>(c). Furthermore, as detailed in Fig <ref>(d), the dataset provides in-depth color information through two-tier hierarchical annotations. Please see the supplemental for the detailed information per each animal family. §.§ Data Sourcing Collecting images of animal faces via photography is labor and time intensive, which impedes the creation of large-scale datasets. In contrast, the human face recognition field benefits greatly from the availability of images sourced from the Internet. In this section, we outline our approach to assembling the collection of animal face images through the Internet, where each is associated with unique individual identifiers. Curation of images. In contrast to the human domain, acquiring multiple images for individual animals is more challenging. Unlike human datasets where a large number of celebrity images can be readily sourced, animal images require alternative approaches for curation. We utilized two primary sources: (i) pet shops' websites and (ii) animal adoption websites. The advantage of pet shops lies in their provision of high-quality, diverse images, capturing individuals from various angles and offering detailed information about each animal, including color, sex, and specific breed details. On the other hand, animal adoption sites offer images set against a variety of backgrounds and conditions that are often provided by pet owners, thus ensuring each individual is presented in a unique setting. These sources provide images that are highly suitable for animal recognition tasks and that are especially useful for recognizing animals in varied wild environments. For Chimps, we additionally use images from the webpage of a collaborative research institution. To ensure the dataset's quality, we were selective in choosing websites for curation. We only use the websites introducing each animal on one page to gain the individual IDs. Aware of the potential for pet owners to upload the same images to multiple websites, we chose a single pet adoption website from each region (, one per country) to minimize duplicates. For pet shops, we confirmed that the animals listed were unique to a certain shop to ensure the uniqueness of our dataset entries. Using these sources, we collected 1,443,737 images from 325,420 individuals. §.§ Face Alignment and Filtering Face Detection. We detect facial landmarks to align and crop the images. We adopt the AnyFace <cit.> that is trained on mixed face datasets including AnimalWeb <cit.>. Because the positions of animal facial parts are sometimes very different from those of humans, we use different reference points for different species in face alignment. After landmark detection, we select one frontal image as a reference for each species. We compute the average landmarks over all the landmarks aligned with the reference. Then, we define the target positions of the landmarks in the images; all the images are aligned to the target positions. Data Filtering. Fig <ref> shows the overview of our data filtering process. Because the fully automatic face detection above sometimes fails, we adopt a two-stage data filtering process to filter out the following cases: 1) Images contain multiple animals simultaneously. 2) Images are unrelated to animals, such as advertisements. 3) Images focus on non-animal elements, such as random patterns in backgrounds, toys, or people, rather than the animals themselves. 
First, we automatically remove images where multiple faces are detected in an image. Secondly, annotators manually assess all the images and remove those that do not depict the target animals or where the face alignment differs from our intended criteria. Because checking whether the faces are properly aligned or not requires some expertise, all the images are filtered by the authors to ensure the quality of the dataset. This manual filtering process took about 100 man hours. After this stage, we keep 1,012,934 images, which is approximately 70% of the initial image number. Detailed distributions of animal families are illustrated in Fig <ref>(a). §.§ Fine-Grained Annotations Fine-grained categorization enhances its usefulness for downstream applications, , the creation of challenging datasets based on animal's individual attributes. However, collecting extensive individual animal data poses a major challenge in assembling large datasetse owing to the significant effort involved. In response, we collect as much individual animal information corresponding to the images as possible. Our annotations include sex, color, and specific breed details, depending on their availability on the source websites. The annotation process involves: 1) extracting individual information displayed on websites alongside images, 2) refining raw text data to develop individual data tables, 3) manually verifying that all attributes are refined and free of unrelated information, and 4) manually annotating individuals with missing attributes on the website. As a result, we include a sex category for all classes, with breed annotations for eight animal classes (Cat, Chimp, Dog, Guinea pig, Hamster, Parakeet, Pig, and Rabbit), and color annotations for eleven animal classes (Cat, Chinchilla, Degus, Ferret, Guinea pig, Hamster, Hedgehog, Parakeet, Java sparrow, Pig, and Rabbit). Note that the manual annotations are performed only on colors and patterns because images alone do not allow us to accurately determine an animal's sex or breed. Detailed distributions of sex per animal families are illustrated in Fig <ref>(b). In Fig <ref>(c) and (d), we illustrate the examples of breed annotations and color and pattern annotations. § EXPERIMENTAL METHODOLOGY We introduce two principal evaluation protocols: 1) re-identification for seen faces, and 2) verification for unseen faces. To establish these benchmarks, we have curated two distinct types of test sets. Specifically, for unseen face verification, we carefully separate the dataset into training, validation, and test sets to prevent data leakage that can arise from shared backgrounds. §.§ Evaluation Protocols We adopt two types of evaluation. * Re-identification: This evaluation selects images from the test data that match the identities present in the training data. This procedure involves identifying test images that correspond to the same identities used during model training. The objective is to assess the model's ability to accurately recognize and associate test images with the correct identities from the training dataset. * Verification: This evaluation checks if the model can identify unseen faces. This means that we use different identities during the training and testing phases. For each identity, we select one image that matches the identity and one image randomly chosen from a different identity. This process creates pairs of images where one pair consists of images from the same identity, and another pair consists of images from different identities. 
We then task the model with predicting whether the faces in each pair belong to the same identity or different identities. This approach allows us to comprehensively evaluate the model's ability to recognize and differentiate between individual identities. §.§ Data Split To establish benchmarks for seen individuals re-identification and unseen individuals verification, we use specific split protocols, creating two test sets designed for each task. For seen individuals re-identification, the process involves verifying images of the same identities used in training; thus, we select test images from the same individuals for the training set. We, therefore, ensure that each individual is represented by at least three photos , a minimum of two for training and one for testing. For unseen individuals verification dataset, we ensure that it contains no images from sources that are also represented in the training or validation sets per animal family to prevent bias from similar environmental conditions. We divide the dataset into training, validation, and each testing sets following a 7:1:2 ratio as closely as possible, given the outlined criteria. §.§ Models We train state-of-the-art models based on deep neural networks on our to build the benchmarks. For the re-identification task, we approach the training and testing phases as classification problems; an individual is assigned to a class. On the other hand, for the verification task, we train models in the same manner as for the re-identification task but evaluate the models by computing the cosine similarity between pairs of images to be identified if they are the same individual. Specifically, we focus more on the loss functions that are crucial for identification tasks than network architectures. We refer to three important loss functions in addition to the basic Softmax-based classification model: Triplet loss <cit.> is designed to take a triplet of samples, , an anchor x_a, a positive x_p (another image of the same identity as the anchor), and a negative x_n (an image of a different identity) - and learn embeddings in such a way that the x_p is closer to the x_a than the x_n by a margin. This loss function is particularly beneficial for face Re-ID as it directly targets the relative distances between different and same identity pairs, encouraging the model to learn a feature space where embeddings of the same identity are clustered together while being far from other identity clusters. Center loss <cit.> works alongside Softmax Loss to enhance the discriminative power of the learned features. While Softmax Loss focuses on inter-class separability, Center Loss aims to minimize the intra-class variations. It does this by penalizing the distance between the deep features of each class and their corresponding class center. Center Loss ensures that the embeddings of the faces of the same individual are closer together, thus making the feature distribution more compact for each identity. ArcFace loss <cit.> introduces an angular margin between classes to enforce a discriminative feature space. It modifies the Softmax loss by adding a margin penalty to the angle between the feature vector and the corresponding class center in the angular space. This angular margin encourages models to learn more distinguishable embeddings to separate classes. 
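To make the ArcFace formulation above concrete, the following is a minimal PyTorch sketch of an ArcFace-style margin head. It is an illustration rather than the third-party implementation used in the experiments; the margin and scale values follow the implementation details given later in the paper (m = 0.5, s = √2·log(C−1)).

```python
# A minimal ArcFace-style margin head: an angular margin m is added to the
# target-class angle and the logits are scaled by s before cross-entropy.
# This is an illustrative sketch, not the paper's exact implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    def __init__(self, feat_dim, num_classes, margin=0.5, scale=None):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.m = margin
        # Scale s = sqrt(2) * log(C - 1), as in the implementation details.
        self.s = scale if scale is not None else math.sqrt(2.0) * math.log(num_classes - 1)

    def forward(self, features, labels):
        # Cosine similarity between L2-normalized embeddings and class centers.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin only to the ground-truth class.
        one_hot = F.one_hot(labels, num_classes=self.weight.size(0)).float()
        logits = torch.cos(theta + self.m * one_hot) * self.s
        return F.cross_entropy(logits, labels)

# Usage sketch: 2048-d embeddings from a ResNet-50 backbone over C identities.
# head = ArcFaceHead(feat_dim=2048, num_classes=C)
# loss = head(backbone(images), identity_labels)
```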
We use ResNet-50 <cit.>, which we found performs better than recent Transformerbased models in Table <ref>, as our base backbone for all applied loss functions to simplify our experiments and make them easier to grasp. In addition to the comparison on the models trained on , we evaluate state-of-the-art models trained on other datasets including: ImageNet <cit.> is a conventional image classification dataset including 1000 general object classes. We use the ResNet-50 trained on the dataset. CLIP <cit.> learns semantic relationships between images and texts in a cross-modal contrastive learning manner empowered by webscale imagecaption datasets. We use ResNet-50 backbone. MegaDescriptor <cit.> is a state-of-the-art model for animal identification trained on a unified animal identification dataset including 33 existing animal reidentification datasets <cit.>. We use the officially provided SwinTransformer-B <cit.> backbone. § EXPERIMENTAL RESULT §.§ Benchmark on Animal Face Re-identification We show the re-identification results in terms AUC in Table <ref>. We train models with the baseline loss functions independently on each class. We can see that ArcFace performs consistent results, 51.23% of average accuracy, compared to the other loss functions (41.88% for Softmax and 9.81% for Center). On the other hand, Center loss does not learn sufficient discriminative features for animal face re-identification, especially on Cat and Dog where the number of the test classes are 113,592 and 46,755, respectively. These results encourage the community to explore more effective representation learning methods for re-identification tasks. Moreover, motivated by MegaDescriptor, we train an ArcFace model on the entire , denoted as Joint-Trained on PetFace in the table. We observe that the joint-training strategy provides some improvements, , from 54.29% to 70.30% for Cat and from 29.08% to 41.49% for Pig although in some classes the results get worse, , from 43.27% to 34.30% for Chimp and from 62.19% to 44.78% to Parakeet. This result indicates that there is some room to improve the performance of integrated identification on imbalance and wide-range datasets. In Fig <ref>, we display the Top-k (k=1,3,5) accuracy metrics. While accuracy naturally increases with larger k values, the relative ranking of accuracy across different animal families remains largely consistent. ArcFace maintains the best performance across these evaluations. §.§ Benchmark on Animal Face Verification Next, we show the verification results in Table <ref>. We use the trained models in Table <ref> and evaluate them on unseen individuals. Similar to our findings in the re-identification task, ArcFace proved to be effective, achieving the best results with an average AUC of 92.17% when jointly trained on the dataset. Similar to face re-identification in Sec <ref>, in some classes, the results get worse when jointly trained across animal families , Chimp, Degus, Dog, Guinea pig, Parakeet, and Rabbit. We also examined models pre-trained on other datasets including ImageNet, CLIPand MegaDescriptor in order to make more comparisons. Among these models, MegaDescriptor, which trained on the animal re-identification dataset, showed the highest AUC of 83.70%. We show the similarity distribution on the our cat dataset in Fig <ref>. Comparing the performance of CLIP, MegaDescriptor, and our joint-trained models, our findings indicate that our model achieves the most distinct separation between positive and negative samples. 
In contrast, the MegaDescriptor model displays a closer distribution of these samples, and with CLIP, the overlap between positive and negative distributions is significantly more pronounced. §.§ Comparison with Previous Datasets We conduct cross-dataset evaluations, where models are tested on datasets other than their training datasets, to demonstrate the generality of our dataset. Here, we compare our dataset with 1) CTai <cit.> and CZoo <cit.> for Chimpanzee and 2) DogFaceNet <cit.> and Flickr-dog <cit.> for Dog. All images of the compared datasets are aligned in the same manner as ours for fair comparison. We split the identities of the compared datasets into training and test sets in a ratio of 7:3 and then uniformly pick the positive and negative pairs. Because CTai, CZoo, and Flickr-dog have only a few test identities, we pick five positive and five negative pairs for each test identity for robust evaluation. We show the verification results in Tables <ref>(a) and (b). We exclude the results trained on the same dataset as the test set, denoted in gray, for fair comparison. For the Chimpanzee datasets, the model trained on our dataset outperforms those trained on the previous datasets. For the case tested on CTai, our dataset outperforms CZoo (67.33% vs. 66.76%). For the other case, tested on CZoo, our dataset also surpasses CTai (71.27% vs. 69.31%). This result demonstrates that our Chimpanzee dataset is more effective and general than the previous datasets. For the Dog datasets, owing to the significant scale of our dataset, the model trained on our dog dataset outperforms not only the cross-dataset results but also the in-dataset ones denoted in gray. These results support the quality of our dataset. §.§ Analysis Different networks. To examine the impact of different network architectures on the performance on PetFace, we additionally train two backbones, VisionTransformer-32-B <cit.> and SwinTransformer-B <cit.>, with the same training configuration as our base model (ResNet-50) and report the verification results in Table <ref>. We observe that ResNet-50 outperforms the additional transformer-based models in most cases and achieves the best average AUC of 91.30%. Fine-grained verification. In real-world scenarios, animal verification is sometimes conducted within a specific breed rather than across different breeds. To meet this demand, we introduce fine-grained verification, where positive and negative pairs are chosen only from the same breed. We use the top-10 dog breeds by number of individuals in our dataset: Chihuahua, Dachshund, French bulldog, Golden retriever, Miniature dachshund, Pomeranian, Shiba inu, Shih tzu, Toy poodle, and Yorkshire terrier. We construct the fine-grained test sets and evaluate the models on each breed; a sketch of the pair construction is given below. The results are visualized in Fig. <ref>. It can be seen that our model achieves good performance on all the breeds (98.30% on average). The results achieved by the baseline models are lower in comparison (84.99%, 80.78%, and 86.25% for ImageNet, CLIP, and MegaDescriptor, respectively). Moreover, we find that the results achieved with MegaDescriptor degrade significantly more than its cross-breed verification result in Table <ref> (from 93.75% to 86.25%), while our model maintains a high level of performance (from 99.01% to 98.39%). This result indicates that our dataset overcomes the limitation of the previous dog dataset <cit.>, which suffers from poor performance on specific-breed verification owing to its insufficient number of individuals (192 individuals).
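To make the fine-grained protocol concrete, the following is a small sketch, written by us and not taken from the released evaluation code, of how same-breed positive and negative pairs can be sampled and scored with cosine similarity. The field names id, breed and embedding are ours, the breed labels are assumed to come from the dataset's fine-grained annotations, and each breed is assumed to contain at least two individuals.

import random
import numpy as np
from sklearn.metrics import roc_auc_score

def same_breed_verification_auc(images, n_pairs=1000, seed=0):
    """images: list of dicts with 'id', 'breed', 'embedding' (L2-normalised)."""
    rng = random.Random(seed)
    by_id, by_breed = {}, {}
    for im in images:
        by_id.setdefault(im["id"], []).append(im)
        by_breed.setdefault(im["breed"], []).append(im)
    pairs, labels = [], []
    multi = [i for i, ims in by_id.items() if len(ims) >= 2]
    for _ in range(n_pairs):
        # Positive pair: two images of the same individual.
        ident = rng.choice(multi)
        a, b = rng.sample(by_id[ident], 2)
        pairs.append((a, b)); labels.append(1)
        # Negative pair: a different individual of the same breed
        # (assumes the breed has more than one individual).
        c = rng.choice(by_breed[a["breed"]])
        while c["id"] == a["id"]:
            c = rng.choice(by_breed[a["breed"]])
        pairs.append((a, c)); labels.append(0)
    scores = [float(np.dot(x["embedding"], y["embedding"])) for x, y in pairs]
    return roc_auc_score(labels, scores)

Cross-breed verification follows the same pattern with the breed restriction on negatives removed.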
Generality to Unseen Families. Lastly, we evaluate the verification generality to unseen animal families. To achieve this, we additionally collect 100 identities each of Parrot, Lacertilia, and Squirrel, which are not contained in either our dataset or WildLifeDataset <cit.>. We compare our model, which is jointly trained on our dataset across families, with the baseline models in Fig. <ref>. Our model outperforms the baselines on all the species. For Parrot, particularly, our model achieves an AUC of 88.99%, revealing its superiority over MegaDescriptor by a large margin. This result supports the generality and variety of our dataset. § CONCLUSION We introduced PetFace, a comprehensive animal identification dataset encompassing 13 families, featuring 257,484 unique individuals across 319 breed categories. We collected images and their associated information and conducted automated and manual filtering to ensure the quality of the large-scale dataset. This dataset also includes detailed annotations of sex, breed, colors, and patterns to facilitate further investigation and application in real-world scenarios. We establish two main benchmarks: 1) re-identification of seen individuals and 2) verification of unseen individuals. Our experiments show the generality of the models trained on our dataset to verification on other datasets and unseen animal families. We also found that there is still room for improvement in the integrated identification of multiple animal families. To promote further research, we will make this dataset, the experiment code, and the models available to the research community. This dataset will enable computer vision researchers to tackle animal face identification across a wide range of breeds and push forward progress on animal face identification tasks. § ACKNOWLEDGEMENTS This work was partially supported by the Cooperative Research Program of the Primate Research Institute, Kyoto University. We thank Dr. Takuma Yagi from AIST for comments on paper proofreading. We also thank Prof. Toshihiko Yamasaki from the University of Tokyo for providing computation resources. § FINE-GRAINED ANNOTATIONS We list breed annotation statistics for eight animal classes (Cat, Chimp, Dog, Guinea pig, Hamster, Parakeet, Pig, and Rabbit) in Fig. <ref>. Similarly, color and pattern annotation statistics for eleven animal classes (Cat, Chinchilla, Degus, Ferret, Guinea pig, Hamster, Hedgehog, Parakeet, Java sparrow, Pig, and Rabbit) are presented in Fig. <ref>. Our dataset provides color and pattern information through two-tier hierarchical annotations, especially for animal families whose colors commonly distinguish individuals. § IMPLEMENTATION DETAILS Pre-processing. We input images at 224×224 pixels for our models. We use horizontal flips to augment the training images; a minimal sketch of this pipeline is given at the end of this appendix. For the other models, we strictly follow their defined pre-processing steps. Triplet loss. We use a 3rd-party implementation <cit.>. We set the margin parameter to 0.5. Center loss. We use a 3rd-party implementation <cit.>. ArcFace loss. We use a 3rd-party implementation <cit.>. We set the margin and scale parameters to 0.5 and √(2)log(C-1), where C denotes the number of individuals, respectively. ImageNet. We use the torchvision version <cit.>. CLIP. We use the OpenCLIP version <cit.>. MegaDescriptor. We use the official implementation <cit.>.
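A minimal torchvision-style sketch of the pre-processing described above is given here for reference; it is our reading of the text rather than the released training code, the exact resize policy and any normalisation constants are not specified in the paper, and the variable names are ours.

from torchvision import transforms

# Training-time pipeline: 224x224 inputs with horizontal-flip augmentation.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Evaluation-time pipeline: same input size, no augmentation.
eval_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])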
http://arxiv.org/abs/2407.12181v1
20240716212006
Non-semisimple topological field theory and $\widehat{Z}$-invariants from $\mathfrak{osp}(1 \vert 2)$
[ "Francesco Costantino", "Matthew Harper", "Adam Robertson", "Matthew B. Young" ]
math.QA
[ "math.QA", "math.GT", "math.RT", "81T45 (Primary), 20G42 (Secondary)" ]
§ ABSTRACT We construct three dimensional non-semisimple topological field theories from the unrolled quantum group of the Lie superalgebra 𝔬𝔰𝔭(1 | 2). More precisely, the quantum group depends on a root of unity q=e^2 π√(-1)/r, where r is a positive integer greater than 2, and the construction applies when r is not congruent to 4 modulo 8. The algebraic result which underlies the construction is the existence of a relative modular structure on the non-finite, non-semisimple category of weight modules for the quantum group. We prove a Verlinde formula which allows for the computation of dimensions and Euler characteristics of topological field theory state spaces of unmarked surfaces. When r is congruent to ± 1 or ± 2 modulo 8, we relate the resulting 3-manifold invariants with physicists' Z-invariants associated to 𝔬𝔰𝔭(1 | 2). Finally, we establish a relation between Z-invariants associated to 𝔰𝔩(2) and 𝔬𝔰𝔭(1 | 2) which was conjectured in the physics literature. § INTRODUCTION The first goal of this paper is to develop the representation theory of an unrolled quantization of the orthosymplectic Lie superalgebra . The second goal is to connect this representation theory to the non-semisimple quantum topology of 3-manifolds. We achieve the second goal in two ways. First, we construct a three dimensional non-semisimple topological field theory (TFT) using the framework of relative modular categories. Second, we study the ^𝔤-invariants of 3-manifolds, which were recently introduced by Gukov, Pei, Putrov and Vafa. Here 𝔤 is a complex semisimple Lie superalgebra. More precisely, we establish direct relations between ^ and ^ and between ^ and the 3-manifold invariants defined by . In the remainder of the introduction, we explain our results in more detail and explain the general context into which they fit. Let be the complex orthosymplectic Lie superalgebra associated to the super vector space ^1 | 2 with the canonical non-degenerate supersymmetric bilinear form. The Lie superalgebra is in many respects the simplest Lie superalgebra. For example, it is basic[Recall that a Lie superalgebra is basic if it admits a non-degenerate invariant (even) bilinear form and its even subalgebra is reductive.]
classical of rank one and its category of finite dimensional complex representations is semisimple.[In fact, the category of finite dimensional representations of a Lie superalgebra 𝔤 is semisimple if and only 𝔤 is a semisimple Lie algebra or is isomorphic to 𝔬𝔰𝔭(1 | 2n) for some n ≥ 1 <cit.>.] The quantum group of and its representation theory were originally studied in <cit.> with applications to quantum topology following shortly thereafter. A modified version of the Reshetikhin–Turaev construction of link and 3-manifold invariants <cit.> which applies to certain ribbon Hopf superalgebras was introduced and studied in the context of the small quantum group of at primitive roots of unity of odd order <cit.>. This approach utilizes only a small class of representations of , requiring, in particular, that all quantum dimensions are non-vanishing. In various settings, relations between link invariants associated to quantizations of 𝔬𝔰𝔭(1 | 2n) and 𝔰𝔬(2n+1)—again coloured by only a small class of representations—are known <cit.>. The constructions of this paper are fundamentally different than those of the previous paragraph. The key new feature is the consideration of a much larger class of representations of quantum , in particular those of quantum dimension zero. Not only does this allow the definition of a larger class of link invariants, the resulting 3-manifold invariants are shown to be the top level of a three dimensional TFT. Another feature of our approach is the systematic treatment of all roots of unity e^2 π/r, r ≥ 3, without regard for the parity of r. In this way, interesting behaviour depending on the congruence class of r modulo 8 is revealed. We now explain our results in more detail. The central algebraic object of the paper is the restricted unrolled quantum group , an infinite dimensional Hopf superalgebra obtained as a semi-direct product of the restricted quantum group of with the group algebra of a Cartan lattice. Here q=e^2 π/r for an integer r ≥ 3. Section <ref> is devoted to studying the representation theory of and is presented so as to ease comparison with the representation theory of U_q^H(𝔰𝔩(2)), as developed in <cit.>. The category of finite dimensional -weight modules is rigid monoidal abelian but is neither finite nor semisimple. Motivated by the known universal R-matrix for the ħ-adic quantum group of <cit.>, we construct in Proposition <ref> a braiding on the category . Surprisingly, with respect to the natural class of pivotal structures, the category is ribbon only when r ≢4 8; see Proposition <ref>. In future work, we give an independent, Hopf algebraic, perspective on the lack of ribbon structure when r ≡ 4 8. Our first main result, which is the culmination of our study of the representation theory of , can now be stated as follows. See Section <ref> for recollections on relative modular categories. If r ≢4 8, then the category admits a relative modular structure. The details of the relative modular structure, which includes a grading = ⊕_g ∈_g by an abelian group and an abelian group ≃_0 × together with a ribbon functor σ: →_0, depend on the congruence class of r modulo 8. As explained below, the cases r ≡ 1 2 and r ≡ 2 4 should be seen as belonging to a single family while that of r ≡ 0 8 is a distinct family. By general results of De Renzi <cit.>, a relative modular structure on gives rise to a symmetric monoidal functor _ : _→_, the three dimensional oriented non-semisimple TFT associated to . 
The domain _ is a category of decorated surfaces and their admissible bordisms. For example, morphisms include the data of a cohomology class on the underlying 3-manifold with coefficients in . The codomain _ is the monoidal category of -graded vector spaces with a symmetric braiding which, upon restricting to underlying -graded vector spaces, recovers that of the category of super vector spaces. The 3-manifold invariants defined by _ are, up to normalization, the CGP invariants introduced by first author, Geer and Patureau-Mirand <cit.>. Theorem <ref> adds to the growing list of relative modular categories, and so non-semisimple TFTs, which so far includes weight modules over unrolled quantum groups associated to simple Lie algebras <cit.>, the Lie superalgebras 𝔰𝔩(m | n), m ≠ n, <cit.>, 𝔤𝔩(1| 1) <cit.> and Lie superalgebras with abelian bosonic subalgebra <cit.>. The resulting TFTs are less developed, with calculations being limited to <cit.>, 𝔤𝔩(1 | 1) <cit.> and Lie superalgebras with abelian bosonic subalgebra <cit.>. In Section <ref> we perform a number of calculations for _. We begin by proving a Verlinde formula, Theorem <ref>, which relates the partition function of a trivial circle bundle over a surface Σ_g of genus g ≥ 1, with having holonomy β∈ along the circle fibre, to a β-dependent specialization of the generating function of -graded dimensions of _(Σ_g). Our derivation of the Verlinde formula follows previous derivations <cit.>, relying on an explicit surgery presentation of the 3-manifold in question and the computation of modified quantum dimensions of projective -modules. The Verlinde formula is a key tool to establish the following result, which we state for g ≥ 2 for simplicity. The Euler characteristic with respect to the parity subgroup of and total dimension of the state space _(Σ_g) are given by χ(_(Σ_g)) = 0 r ≡ 1 2, 0 r ≡ 2 4, r^3g-3/2^2g-3 r ≡ 0 8 and __(Σ_g) = r^3g-3· 2^2g-2 r ≡ 1 2, 1/2^g-1 r ≡ 2 4, 1/2^2g-3 r ≡ 0 8. The computation of the Euler characteristic follows by taking the limit β→ 0 in the Verlinde formula. The total dimension is more difficult to access since it does not obviously arise as a limit of the Verlinde formula. To resolve this, we provide in Theorem <ref> a combinatorially defined spanning set of _(Σ_g) which, in particular, constrains the -support of _(Σ_g). Using these support conditions, the total dimension is seen to arise as a limit of the Verlinde formula, leading to the above result. In Section <ref> we shift attention to the -invariants (also called homological blocks) recently introduced in the physics literature in the context of 3d 𝒩=2 supersymmetric gauge theory with the goal of categorifying the Reshetikhin–Turaev invariants of 3-manifold <cit.>. Physically, ^𝔤(M;) is the BPS index of a system of intersecting fivebranes wrapping M in M-theory. These are formal series _𝔰^𝔤(M;) in an indeterminate which depend on a complex Lie superalgebra 𝔤 and geometric structure 𝔰 on the 3-manifold M. At present, there is a mathematical definition of _𝔰^𝔤(M;) only for weakly negative definite plumbed 3-manifolds M <cit.>; see Section <ref>. While the case 𝔤= is most studied, more general 𝔤 have recently received more attention <cit.>. In this paper we are primarily interested in the case 𝔤=, where again 𝔰 is a structure. Physically, this results from the inclusion of orientifold planes in the M-theoretic interpretation above. Our next result is a precise relation between -invariants for and . 
Let Γ be a weakly negative definite plumbing graph with non-degenerate plumbing matrix, M the rational homology 3-sphere obtained from integral surgery on the framed link defined by Γ and 𝔰 a structure on M. There exist constants C_𝔰∈ and Δ_𝔰∈ such that the -invariants of M take the form ^_𝔰(M;) = ^Δ_𝔰∑_n=-∞^∞ a_n (-)^n and ^_𝔰(M;) = C_𝔰^Δ_𝔰∑_n=-∞^∞ a_n ^n. Theorem <ref>, whose proof is a direct calculation using the surgery definition of -invariants, confirms the expected relation between ^ and ^ <cit.> and continues a long line of known relations between quantum invariants associated to and and, more generally, 𝔰𝔬(2n+1) and 𝔬𝔰𝔭(1 | 2n) <cit.>. Finally we establish a relation between the 3-manifold invariants associated to the TFT _ and ^-invariants. More precisely, we use a slight renormalization of the former, which we denote by N_r^; see Definition <ref>. Let δ∈{± 1}. Let Γ be a weakly negative definite plumbing graph with non-degenerate plumbing matrix, M the rational homology 3-sphere obtained from integral surgery and ∈ H^1(M; ) a sufficiently generic cohomology class. Under the technical assumptions of Hypothesis <ref>, there is an equality N_r^(M,) = lim_→ e^4π/r∑_𝔰∈(M) c^_,𝔰^_𝔰 (M;), where c^_,σ(b,s) = e^πμ(M,s)(M,[4 ])/| H_1(M;) |∑_a,f e^2 π(-r-δ/8(a,a) - (a,b+f) +2 (f,f)-1/2(a) ) if r ≡δ 8 and c^_,σ(b,s) = e^-δπ/2μ(M,s)(M,[2 ])/| H_1(M;) |∑_a,f e^2 π(-r-2δ/8(a,a) - (a,b+δ f) + δ(f,f) - 1/2(a) ) if r ≡ 2 δ 8. In the above formulae, * ∑_a,f denotes summation over a,f ∈ H_1(M;), * : H_1(M;) × H_1(M;) → is the linking pairing, * σ(b,s) is the structure associated to b ∈ H_1(M;) and structure s, * μ is the Rokhlin invariant, and * is the appropriately normalized Reidemeister torsion. Analogues of Theorem <ref> are known for <cit.> and 𝔰𝔩(2 | 1) <cit.>, again for particular roots of unity. As in these cases, the proof of Theorem <ref> is a delicate sequence of applications of reciprocity of Gauss sums. As discussed in Remark <ref>, the cases r ≡± 2 8 of Theorem <ref> bear a strong resemblance to the corresponding cases for <cit.>. The cases r ≡± 1 8 do not have a counterpart in <cit.> since in loc. cit. only roots of unity of even order are considered. As explained in Sections <ref> and <ref>, when r ≡ 0, ± 3 8 the calculations involved in proving Theorem <ref> do not lead to a universal topological relation between N_r^ and ^. The remaining case studied in <cit.>, namely r ≡ 4 8, does not relate to the present work since it is in precisely this case that the category is not ribbon. In view of <cit.>, it is natural to expect Theorem <ref> to hold, with minor modifications, for more general closed oriented 3-manifolds, although we do not pursue this generalization. Theorem <ref> is another instance of the relation between -invariants and previously defined quantum invariants of 3-manifolds. A relation between Reshetikhin–Turaev invariants for the small quantum group of and ^-invariants was conjectured in <cit.>. For proofs of this conjecture, in various special cases, and its generalization to other 𝔤, see <cit.>. To close this introduction, we mention two interesting problems which we do not address in this paper. The first is to provide a conceptual reason for the observed close similarity between the CGP and -invariants associated to and . In the semisimple setting, the quantum covering groups of Clark, Hill and Wang <cit.> provide such an reason <cit.>. 
The second is to find a physical realization of the TFT _ using as a guide recent successes in this direction for other Lie (super)algebras <cit.>. §.§ Acknowledgements The authors thank Nathan Geer, Thomas Kerler, and Cris Negron for discussions. A. R. is partially supported by National Science Foundation grants DMS-2104497 (Geer) and DMS-2302363 (Young). F. C. is supported by CIMI Labex ANR 11-LABX-0040 at IMT Toulouse within the program ANR-11-IDEX-0002-02. M. B. Y. is partially supported by National Science Foundation grant DMS-2302363 and a Simons Foundation Collaboration Grant for Mathematicians (Award ID 853541). § PRELIMINARY MATERIAL The ground field is unless stated otherwise. §.§ Superalgebra Our superalgebra conventions match <cit.>. The parity of a homogeneous element v of a super vector space V=V_ 0⊕ V_ 1 is denoted v ∈. Morphisms of super vector spaces are parity preserving linear maps. The tensor product of super vector spaces V and W is the tensor product of the underlying vector spaces with -grading given by (V ⊗ W)_ p = ⊕_ a + b = p V_ a⊗ W_ b. A (left) module over a (associative) superalgebra A is a super vector space M with an A-module structure for which the action map A ⊗ M → M is a morphism of super vector spaces. Given homogeneous elements a,b ∈ A, set [a,b] = ab -(-1)^ a b ba. §.§ Monoidal categories Our conventions for monoidal categories match <cit.>. Let be a -linear abelian monoidal category. We assume that the functor ⊗ : ×→ is -bilinear, the monoidal unit ∈ is simple and the -algebra map →_(), k ↦ k ·𝕀_, is an isomorphism. If is in addition rigid, braided and has a compatible twist, then is called a -linear ribbon category. The left and right duality structure maps are _V:V^∨⊗ V →, _V:→ V ⊗ V^∨ and _V:V⊗ V^∨→, _V: → V^∨⊗ V, respectively, the braiding is c={c_V,W: V ⊗ W → W ⊗ V}_V,W ∈ and the twist is θ={θ_V: V → V}_V ∈. An object V∈ is regular if _V is an epimorphism. In diagrammatic computations with ribbon categories, we read diagrams from left to right and bottom to top. Basic morphisms are 𝕀_V = [anchorbase] [->,thick] (0,0) – node[left] V (0,1); , 𝕀_V^∨ = [anchorbase] [<-,thick] (0,0) – node[left] V (0,1); _V = [anchorbase] [->,thick] (0,0) arc (0:180:0.5 and 0.75); at (-1.4,0) V; , _V = [anchorbase] [<-,thick] (0,0) arc (180:360:0.5 and 0.75); at (-0.5,-0.1) V; _V = [anchorbase] [<-,thick] (0,0) arc (0:180:0.5 and 0.75); at (0.4,0) V; , _V = [anchorbase] [->,thick] (0,0) arc (180:360:0.5 and 0.75); at (1.5,-0.1) V; c_V,W = [anchorbase] [->,thick] (0.5,0) – node[right,near start] W (0,1); [->,thick,cross line] (0,0) – node[left,near start] V (0.5,1); , θ_V = [anchorbase] [->,thick,rounded corners=8pt] (0.25,0.25) – (0,0.5) – (0,1); [thick,rounded corners=8pt,cross line] (0,0) – (0,0.5) – (0.25,0.75); [thick] (0.25,0.75) to [out=30,in=330] (0.25,0.25); at (-0.2,0.2) V; . Let ℛ_ be the ribbon category of -coloured ribbon graphs in ^2 × [0,1] and F_ : ℛ_→ the associated Reshetikhin–Turaev functor <cit.>. Two formal -linear combinations of -coloured ribbon graphs are skein equivalent if their images under F_ agree. The corresponding equivalence relation is denoted =̇. §.§ Relative modular categories We recall a number of definitions from <cit.>. Let be a -linear abelian ribbon category. Let V , W ∈. Recall that right partial trace along W is the map _W : _(V ⊗ W) → _(V) f ↦ (𝕀_V ⊗_W) ∘ (f ⊗𝕀_W^∨) ∘ (𝕀_V ⊗_W). A full subcategory ℐ⊂ is an ideal if it has the following properties: * If U∈ℐ and V ∈, then U⊗ V ∈ℐ. 
* If U ∈ℐ and V ∈ and there exist morphisms f:V→ U and g:U→ V satisfying g ∘ f=𝕀_V, then V∈ℐ. * A modified trace on an ideal ℐ⊂ is a family of -linear functions ={_V:_(V) →| V ∈ℐ} with the following properties: * Cyclicity: For all V,W ∈ℐ and f ∈_(W,V) and g ∈_(V,W), there is an equality _V(f ∘ g)=_W(g ∘ f). * Partial trace property: For all V ∈ℐ, W ∈ and f∈_(V⊗ W), there is an equality _V⊗ W(f)=_V(_W(f)). * The modified dimension of V∈ℐ is (V)=_V(𝕀_V). * A set ℰ={ V_i | i ∈ J } of objects of is dominating if for any V ∈ there exist indices {i_1,…,i_m }⊆ J and morphisms ι_k ∈_(V_i_k,V), s_k ∈_(V,V_i_k) such that 𝕀_V=∑_k=1^m ι_k ∘ s_k. * A dominating set ℰ is completely reduced if __(V_i,V_j)=δ_ij for all i,j ∈ J. Let (,+) be an abelian group. We often view as a discrete monoidal category with object set . A free realization of in is a monoidal functor σ: →, k ↦σ_k, such that * σ_0=𝕀, * θ_σ_k=𝕀_σ_k for all k ∈, and * if V ⊗σ_k ≃ V for a simple object V ∈, then k=0. We often identify σ: → with the set of objects σ_ := {σ_k | k ∈}, omitting from the notation the monoidal coherence data. Let (,+) be an abelian group. * A subset ⊂ is symmetric if =- and small if ⋃_i=1^n (g_i+) ≠ for all g_1,… ,g_n∈. * A -grading on is an equivalence of -linear abelian categories ≃⊕_g ∈_g, where {_g | g ∈} are full subcategories of , which has the following properties: * 𝕀∈_0. * If V∈_g, then V^∨∈_-g. * If V∈_g and V^'∈_g^', then V⊗ V^'∈_g+g^'. * A -graded abelian category is generically semisimple if there exists a small symmetric subset ⊂ such that each subcategory _g, g ∈∖, is semisimple. Let and be abelian groups, ⊂ a small symmetric subset and a -linear abelian ribbon category. Suppose that the following data is given: * A -grading on . * A free realization σ : →_0. * A non-zero modified trace on the ideal of projective objects of . Call a pre-modular -category relative to (,) if it has the following properties: * Generic semisimplicity: For each g ∈∖, there exists a finite set of regular simple objects Θ(g):={ V_i | i ∈ I_g } such that Θ(g) ⊗σ_:={ V_i ⊗σ_k | i ∈ I_g, k∈} is a completely reduced dominating set for _g. * There exists a bicharacter ψ: ×→^× such that c_σ_k,V∘ c_V,σ_k= ψ(g,k) ·𝕀_V ⊗σ_k for all g∈, V ∈_g and k ∈. Let be a pre-modular -category relative to (,). * The Kirby colour of index g ∈∖ is Ω_g:= ∑_V ∈Θ(g)(V) · V. * The stabilization coefficients Δ_±∈ are defined by the skein equivalences Fig-nondeg15ex for any g ∈∖ and V∈_g. * The pre-modular -category is non-degenerate if Δ_+Δ_-≠ 0. A modular -category relative to (,) is a pre-modular -category 𝒞 relative to (,) for which there exists a scalar ζ∈^×, the relative modularity parameter, such that relative_modularity10ex for all g,h ∈∖ and i,j ∈ I_g. The relative modularity parameter satisfies ζ = Δ_+ Δ_- <cit.>. In particular, relative modular categories are non-degenerate relative pre-modular. §.§ and structures on 3-manifolds Following <cit.>, we recall basic topology of 3-manifolds obtained by surgery on links in S^3. Let L ⊂ S^3 be a framed oriented link with connected components V and V × V linking matrix B. Let M be the closed oriented 3-manifold obtained by performing integral surgery on L. For an abelian group, there are canonical isomorphisms H_1(M; ) ≃^V B ^V and H^1(M;) ≃{ϕ∈^V| B ϕ =0 }. Throughout the paper we assume that B is non-singular, so that M is a rational homology sphere. 
In this case, | B | = | H_1(M;) | and, with respect to the isomorphism (<ref>), the linking pairing is : H_1(M;) ⊗ H_1(M;) →, a ⊗ b ↦ a^t B^-1 b 1 where (-)^t is transposition. Denote by (M) and (M) the sets of equivalence classes of and structures on M, respectively. There are identifications (M) = {s ∈( 2 )^V|∑_j ∈ V B_ij s_j ≡ B_ii 2 i ∈ V} and (M) = {K ∈^V 2 B ^V| K_i ≡ B_ii 2 i ∈ V}. There is a canonical surjective map σ: H_1(M;) ×(M) →(M), σ(b,s) = 2b+ i(s), where i(s) = B s̃ for any lift s̃∈^V of s ∈( 2 )^V. Finally, given s ∈(M), the Rokhlin invariant μ(M,s) ∈ 16 of M with spin structure s satisfies μ(M,s) ≡σ - s^t B s 4, where σ is the signature of B. § THE CATEGORY OF WEIGHT -MODULES §.§ Super quantum integers We introduce super variants of the standard quantum integers. Our conventions differ slightly from those in the literature (cf. <cit.>, <cit.>). Let v be an indeterminate and 𝒜=[,^-1]. For an integer n ≥ 0, set n = ^-n - (-)^n/+^-1 = ∑_i=0^n-1 (-1)^n+1+i^n-1-2i∈𝒜 and n != ∏_i=1^n i n ≥ 1, 1 n =0 , nk = n!/n-k!k! 0 ≤ k ≤ n, 0 k <0. For integers 1 ≤ k ≤ n, there are equalities n+1k=(-)^n-k+1nk-1+^-knk, n+1k=^k-n-1nk-1+(-)^knk. This is a direct verification. For any integer n ≥ 0, there is an equality ∑_k=0^n (-1)^k(k+1)/2^k(n-1)nk = 1 n=0, 0 n >0. The equality is proved using Lemma <ref> and induction on n. §.§ The quantum group of Let be the unital superalgebra over () with generators K, K^-1 of parity 0 and E, F of parity 1 and relations KK^-1=K^-1K=1, KE=^2 EK, KF=^-2 FK, [E,F]=K-K^-1/-^-1. There is a unique Hopf superalgebra structure on with counit ϵ, coproduct Δ and antipode S defined on generators by ϵ(K)=1, ϵ(E)=ϵ(F)=0, Δ (K) = K ⊗ K, Δ (E) = 1 ⊗ E + E ⊗ K, Δ (F) = K^-1⊗ F + F ⊗ 1 , S(K)=K^-1, S(E)= -E K^-1, S(F)= -KF. For i,n ∈ with i ≥ 1, define elements of by K;n = K ^n-K^-1 (-)^-n/-^-1, K;ni=∏_j=1^i K;n+j-ij. Set K;n0 = 1. A direct computation shows that (-1)^kjK;i+k + kK;i-j = j+kK;i for all i,j,k ∈. Compare with <cit.>. [cf. <cit.>] For each integer n ≥ 1, there are equalities [E,F^n] = n F^n-1K;-n+1, [F,E^n] = (-1)^n n E^n-1K;n-1. We prove only equation (<ref>). We proceed by induction on n. The case n=1 is a defining relation of . Assuming that equation (<ref>) holds for n, we compute EF^n+1 = (-1)^nF^nEF+nF^n-1K;-n+1F = (-1)^n+1F^n+1E+(-1)^nF^nK;0+nF^n^-2K;-n+1 so that [E,F^n+1] = F^n((-1)^nK-K^-1/-^-1+ ^-n - (-1)^n ^n/+^-1·K ^-n-1+(-1)^n K^-1^n+1/-^-1), which is seen to equal F^nn+1K;-(n+1)+1 by a direct calculation. [cf. <cit.>] For each integer n ≥ 1, there are equalities Δ (E^n) = ∑_j=0^n nj(-)^j(n-j) E^j ⊗ E^n-j K^j, Δ (F^n) = ∑_j=0^n nj^-j(n-j) K^-j F^n-j⊗ F^j. We prove equation (<ref>) by induction on n. The case n=1 holds by the definition of Δ. Assuming equation (<ref>) holds for n, we compute Δ (E^n+1) = (1 ⊗ E + E ⊗ K) ∑_j=0^n nj(-)^j(n-j) E^j ⊗ E^n-j K^j = ∑_j=0^n nj(-)^j(n-j)(-1)^j E^j ⊗ E^n+1-j K^j + ∑_j=0^n nj(-)^j(n-j)^2(n-j) E^j+1⊗ E^n-j K^j+1 = 1⊗ E^n+1+E^n+1⊗ K^n+1 + ∑_j=1^n (-)^j(n+1-j)( nj^-j + nj-1(-)^(n+1-j))E^j ⊗ E^n+1-j K^j = ∑_j=0^n+1n+1j(-)^j(n+1-j)E^j⊗ E^n+1-jK^j, the final equality following from Lemma <ref>. §.§ The unrolled quantum group of Let r ≥ 3 be an integer and q = e^2 π/r. We henceforth consider only super quantum integers specialized to = q, for which we use the same notation as Section <ref>. Given z ∈, set q^z = e^2 z π/r. If z ≠ 0, define z;n∈ as in Section <ref>, but with K replaced by z. 
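Before specializing further, we record a quick machine check, our own sanity check rather than part of the paper, of the vanishing identity for the super quantum binomials stated above. The following SymPy sketch works over a generic variable v and uses the Laurent-polynomial form of the super quantum integer, so no division by v + v^-1 is needed; all function names are ours.

from sympy import symbols, simplify

v = symbols('v')

def sqint(n):
    # [n] = (v**(-n) - (-v)**n) / (v + 1/v), written as the Laurent-polynomial
    # sum from the definition, so it also makes sense when v + 1/v = 0.
    return sum((-1)**(n + 1 + i) * v**(n - 1 - 2*i) for i in range(n))

def sqfact(n):
    out = 1
    for i in range(1, n + 1):
        out *= sqint(i)
    return out

def sqbinom(n, k):
    return simplify(sqfact(n) / (sqfact(n - k) * sqfact(k)))

# Vanishing identity for the super quantum binomials: for every n >= 1,
# sum_{k=0}^{n} (-1)^(k(k+1)/2) v^(k(n-1)) [n choose k] = 0.
for n in range(1, 7):
    total = sum((-1)**(k * (k + 1) // 2) * v**(k * (n - 1)) * sqbinom(n, k)
                for k in range(n + 1))
    assert simplify(total) == 0, f"identity fails at n = {n}"
print("vanishing identity verified symbolically for n = 1, ..., 6")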
The unrolled quantum group is the unital superalgebra over with generators H, K, K^-1 of parity 0 and E, F of parity 1 and relations [H, K] = [H, K^-1] = 0, [H,E]=2E, [H,F]=-2F, KK^-1=K^-1K=1, KE=q^2 EK, KF=q^-2 FK, [E,F]=K-K^-1/q-q^-1. The Lie superalgebra is generated by vectors J, X^+, X^- of parity 0 and ψ^+,ψ^- of parity 1 with defining relations [J,X^±]=± X^±, [J,ψ^±]=±1/2ψ^±, [X^+,X^-] = 2 J, [X^±,ψ^±]=0, [X^±,ψ^∓]=- ψ^±, [ψ^±,ψ^±]= ± 2 X^±, [ψ^+,ψ^-]= 2 J. In fact, the relations involving only J, ψ^± together with the super Jacobi identify already determine . Setting H = 4 J, E = ψ^+ and F = 2ψ^-, this smaller set of relations becomes [H,E] = 2 E, [H,F]=-2F, [E, F] = H. The superalgebra is the unrolled quantization of this presentation of . This also explains Definition <ref>, which agrees with <cit.> but differs from <cit.> and <cit.>. Our normalizations are chosen to ease comparison with the representation theory of the unrolled quantum group of . Give a Hopf superalgebra structure by supplementing the specialized Hopf structure of , given in Section <ref>, by the definitions ϵ(H)=0, Δ (H) = H ⊗ 1 + 1 ⊗ H, S(H)=-H. Lemmas <ref> and <ref> continue to hold for . If r=4, then n≠ 0 for all n ≥ 1. If r ≠ 4, then the minimal positive integer r which satisfies r=0 is r= 2r r ≡ 1 2, r r ≡ 2 4, r/2 r ≡ 0 8, r/4 r ≡ 4 8. If r=4, then n = (-)^n-1 n. If r ≠ 4, then the denominator of q^-n - (-q)^n/q+q^-1 is non-zero so that n=0 if and only if the numerator vanishes. This is equivalent to setting q^2n=(-1)^n, which holds if and only if n ∈r/4 (2 + p_n) where p_n ∈{0,1} is the parity of n. Note that if n is odd, then r ≡ 0 4. We proceed in cases: * If r ≡ 1 2, then n is even, so that n ∈r/2. It follows that n=2r is the minimal solution. * If r ≡ 2 4, then n is even, so that n ∈r/2. It follows that n=r is the minimal solution. * If r ≡ 0 8, then r=8k for k ∈, so that n ∈ 2k(2 + p_n). The minimal solution is n = 4k = r/2. * If r ≡ 4 8, then r=4k with k ∈ odd, so that n ∈ k(2 + p_n). The minimal solution is n = k = r/4. The peculiar behaviour of r=4 in Lemma <ref> obstructs the definition of a restricted Hopf algebra quotient of with q=√(-1); see the proof of Lemma <ref> below. For this reason, we henceforth exclude the case r=4 from consideration. For later use, we record that q^2r = (-1)^r = -1 r ≡ 4 8, 1 . Let and be the left ideals of generated by E^r and F^r, respectively, and let =+. The left ideal is a two-sided Hopf ideal. Since r=0, equation (<ref>) gives [F,E^r]=0. The defining relations of give K E^r = q^2 r E^r K and we conclude that E^r is central up to scalars. It follows that is a two-sided ideal. Similarly, is a two-sided ideal. Regarding the Hopf condition, note that the defining property of r implies rj= 1 j∈{0,r}, 0 0<j< r. In view of this, Lemma <ref> gives Δ(E^r)=E^r⊗ K^r+1⊗ E^r∈⊗ + ⊗. Using that S is a superalgebra anti-homomorphism, we compute S(E^r)=(-1)^r(r+1)/2 q^-r(r-1)E^rK^-r∈. Similarly, is Hopf. It follows that is a two-sided Hopf ideal. The restricted unrolled quantum group is the Hopf superalgebra = . The following result asserts that Definition <ref> is the only restricted quantum group associated to , by which we mean that both E and F are nilpotent, no additional relations are imposed on Cartan generators, and the resulting superalgebra is Hopf. Let J⊂ be an ideal which contains E^k_1 and F^k_2 for some k_1,k_2> 0 and does not contain K^l for l≠ 0. Suppose that n_1 and n_2 are the least such k_1 and k_2 for which E^k_1,F^k_2∈ J. 
Then J is a Hopf ideal if and only if n_1,n_2∈{1,r}. One can check that an ideal containing some E^k_1 and F^k_2 does not necessarily contain a nonzero power of K. To prove the proposition, it suffices to show that Δ(E^n_1)∈ J⊗ + ⊗ J if and only if n_1∈{1,r}. The argument involving the coproduct of F^n_2 is similar. The case n_1=1 is verified immediately. The case n_1=r is proven in Lemma <ref>. Suppose then that n_1∉{0,1,r} and write n_1=a r+b where a≥ 0 and 0≤ b <r. A straightforward induction shows that Δ(E^a·r)= ∑_i=0^aai E^ir⊗ K^irE^(a-i)r . Thus, by Lemma <ref>, we have Δ(E^n_1) =Δ(E^a·r)Δ(E^b) = (∑_i=0^aai E^ir⊗ K^irE^(a-i)r)( ∑_j=0^b bj(-)^j(b-j) E^j ⊗ E^b-j K^j ) = ∑_i=0^a∑_j=0^baibj(-)^j(b-j) (-1)^(a-i)jrE^ir+j⊗ K^irE^(a-i)r+b-jK^j . For any 0<i<a and 0<j<b, the exponents ir+j and (a-i)r+b-j are less than n_1 and nonzero. Moreover, bj is non-zero. Since nonzero powers of K do not belong to the ideal, the corresponding terms in the expansion of Δ(E^n_1) do not belong to J⊗ + ⊗ J. Let q be a primitive rth root of unity and set r̃= 2r r ≡ 1 2, r r ≡ 0 4, r/2 r ≡ 2 4. In <cit.>, it is claimed that E (denoted by e_n in loc. cit.) raised to the power r̃ (denoted by N in loc. cit.) generates a Hopf ideal of U_q(). The discrepancy with Proposition <ref> appears to be due to a misapplication of <cit.> which incorrectly assumes that (E⊗ K)(1⊗ E) is equal to -q(1⊗ E)(E⊗ K) in the computation of Δ (E)^n=(E⊗ K+1⊗ E)^n. This leads to the incorrect conclusion that Δ (E)^r̃ is equal to ∑_i=0^r̃r̃i_q E^r̃-i⊗ E^iK^r̃-i= E^r̃⊗ K^r̃+1⊗ E^r̃. If one instead uses that (E⊗ K)(1⊗ E)=-q^2(1⊗ E)(E⊗ K) and computes s̃ for q^2, a primitive root of unity of order s= r/(2,r), then one recovers s̃=r as in Lemma <ref>. §.§ Weight modules A finite dimensional -module V is a weight module if H acts semisimply on V and K v=q^λv for any H-weight vector v of weight λ∈. All -modules in this paper are assumed to be weight. The category of weight -modules and their -linear maps (of parity 0) is -linear, abelian and locally finite. The coproduct Δ induces a monoidal structure on with monoidal unit the trivial module =. Given V ∈, let V^∨∈ be the -linear dual of the underlying super vector space of V with -module structure given by (x · f)(v) = (-1)^ f x f(S(x)v), v ∈ V, f ∈ V^∨, x ∈. Let {v_i}_i be a homogeneous basis of V with dual basis {v_i^∨}_i. For each integer s ∈, define putative left and right duality structures on by the -linear maps _V(f ⊗ v) = f(v), _V (1) = ∑_i v_i ⊗ v_i^∨, ^(s)_V (v ⊗ f) = (-1)^ f vf(K^1-s v), ^(s)_V (1) = ∑_i (-1)^ v_iv_i^∨⊗ K^s-1 v_i. If q^2s=1, then the above maps define a pivotal structure on . That _V and _V define a left duality is clear. The -linearity of ^(s)_V and ^(s)_V is verified directly. For example, we find ^(s)_V(E ·( v ⊗ f)) = (1-q^2s)(-1)^( v + 1) f f(K^-sEv) which vanishes, as required, if q^2s=1. The remaining statements are verified by similar calculations. The pivotal isomorphism p^(s): 𝕀_⇒ (-)^∨∨ corresponding to Lemma <ref> has components p^(s)_V = K^1-sev_V, where ev_V: V → V^∨∨ is the canonical evaluation isomorphism of underlying super vector spaces. It is a consequence of Lemma <ref> that projective and injective objects of coincide; see <cit.>. §.§.§ Simple modules Let be the subalgebra of generated by H, K^± 1 and E. For each λ∈ and p ∈, let ℂ_(λ, p) be the one dimensional super vector space with parity p and -module structure H · 1 = λ, K · 1 = q^λ, E · 1 = 0. The Verma module of highest weight (λ+r-1, p) is V_(λ, p) = ⊗__(λ+r-1, p). 
The module V_(λ, p) has a basis {v_n=F^n · 1 ⊗ 1 | n=0, …, r-1} with v_n of weight[For convenience, we often view the parity of a homogeneous weight vector as part of its weight.] (λ+r-1-2n, p + n) ∈×. Comparison of weights and the observation that the dual of weight module is a weight module yields an isomorphism V_(λ, p)^∨≃ V_(-λ, p + r + 1). With respect to the pivotal structure of Lemma <ref>, the quantum dimension is qdim^(s) V_(λ, p) = ∑_i=0^r-1 (-1)^ p + i q^(1-s)(λ+r-1-2i) = (-1)^ p q^(1-s)(λ+r-1)1-(-1)^rq^-2r1+q^-2 =0, the final equality following from equation (<ref>). For later use, define the Verma module of lowest weight (λ-r+1, p) by V^-_(λ, p) = ⊗_^-_(λ-r+1, p), where is the subalgebra generated by H, K^± 1 and F and ^-_(λ, p) is the -module defined as above, but with the relation F · 1 = 0 replacing E · 1 = 0. The module V_(λ, p) is a lowest weight Verma module if and only if it is simple (see Lemma <ref> below), in which case a comparison of weights gives V_(λ, p)≃ V^-_(λ, p+r_0 + 1). We emphasize that λ is the average weight of V_(λ, p) and is as the weight of a vector in V_(λ, p) if and only if r≡ 1 2 if and only if r ≡ 4 8. Recall that p_n ∈{0,1} denotes the parity of n ∈. The module V_(λ, p) is simple if and only if λ∈∖{r/4 (2m+p_n+1) -r + n | m ∈, 1 ≤ n ≤r-1}. We search for singular vectors of V_(λ, p). Let 1 ≤ n ≤r-1. Lemma <ref> gives E v_n = nq^λ + r-1; -n+1 v_n-1. The restriction 1 ≤ n ≤r-1 ensures that n≠ 0 so that Ev_n=0 if and only if q^λ + r-1; -n+1 =0. The latter equation has no solutions if and only if λ is as in the statement of the lemma. Call λ∈ typical if V_(λ, p) is simple and atypical otherwise. If λ is atypical, then it can be written uniquely in the form λ =r/4 (2m+p_n+1) -r + n, m ∈, 1 ≤ n ≤r-1. For atypical λ, the unique non-trivial quotient S_(λ, p) of V_(λ, p) is simple of highest weight λ + r-1 and dimension n. The non-split short exact sequence 0 → S_(λ - 2n, p + n)→ V_(λ, p)→ S_(λ, p)→ 0 implies that S_(λ, p) is neither projective nor injective. A simple object of is isomorphic to V_(λ, p) for λ typical and p ∈ or S_(λ, p) for λ atypical and p ∈. This follows from the preceding discussion and the observation that any weight module has a homogeneous highest weight vector. If λ∈ is typical, then V_(λ, p) is projective and injective. Let f: W ↠ U be an epimorphism in . A non-zero morphism g: V_(λ, p)→ U is determined by a highest weight vector g(v_0)=u_0 ∈ U of weight (λ+r-1, p). Let w_0 ∈ W be a preimage of u_0 of the same weight which satisfies Ew_0 ∈ f. For all constants c_0=1,c_1, …, c_r-1∈ℂ, the weight vector w_0^' = ∑_i=0^r-1 c_i F^i E^i w_0 satisfies f(w_0^')=u_0. Using Lemma <ref>, we find that E w_0^'=0 if and only if i+1q^λ+r-1+2i+2; -i c_i+1 = (-1)^i c_i i=0, …, r-2. Since λ is typical, the coefficient of c_i+1 is non-zero and the system has a unique solution. This gives the desired lift of g as V_(λ, p)→ W, v_0 ↦ w_0^'. Let p_1, p_2 ∈ and λ_1, λ_2 ∈ such that λ_1, λ_2 and λ_1 + λ_2 are typical. Proposition <ref> and consideration of characters yields V_(λ_1, p_1)⊗ V_(λ_2, p_2)≃⊕_k ∈ H_r V_(λ_1+λ_2 - r+1, p_1 + p_2) +k, where H_r={(-2 i, i) | 0 ≤ i ≤r-1}. See <cit.> for a similar computation. §.§ Braiding Motivated by the universal R-matrix of the ħ-adic quantum group of <cit.>, in this section we construct a braiding on the category . Let V, W ∈. Define Υ_V,W∈_(V ⊗ W) by Υ_V,W (v ⊗ w) = q^1/2λ_v λ_w v ⊗ w, where v∈ V and w ∈ W are of weight λ_v and λ_w, respectively. 
Let R̃_V,W∈_(V ⊗ W) be the action (incorporating Koszul signs) of R̃ = ∑_l=0^r-1 (-1)^l q^l(l-1)/2 (q-q^-1)^l/l!E^l ⊗ F^l ∈^⊗ 2. Finally, let c_V,W =τ_V,W∘Υ_V,W∘R̃_V,W, where τ is the Koszul symmetry on the monoidal category of super vector spaces. The maps {c_V,W : V ⊗ W → W ⊗ V}_V,W ∈ define a braiding on . Using Lemma <ref>, the inverse of R̃ is seen to be R̃^-1 = ∑_l=0^r-1 (-q)^- l(l-1)/2(q-q^-1)^l/l! E^l ⊗ F^l. It follows easily from this that c_V,W is a -linear isomorphism. Naturality of c is clear. Next, we prove -linearity of c_V,W. Let v ∈ V and w ∈ W be homogeneous of weight λ_v and λ_w, respectively. Since E^l ⊗ F^l has H-weight zero, the action of H commutes with each term in R̃_V,W. It follows that c_V,W is H-linear. We verify E-linearity of c_V,W; verification of F-linearity is analogous. Setting A_l = q^l(l-1)/2 (q-q^-1)^l/l!, we find that E · c_V,W(v ⊗ w) is equal to ∑_l=0^r-1 (-1)^ w ( v + l) A_l q^1/2(λ_v + 2l) (λ_w - 2l) ((-1)^ w + l F^l w ⊗ E^l+1 v + q^λ_v + 2l EF^l w ⊗ E^l v ). By applying Lemma <ref>, we can rewrite this as E · c_V,W(v ⊗ w) = ∑_l=0^r-1 (-1)^ w ( v + l + 1) + l A_l q^1/2(λ_v + 2l) (λ_w - 2l) F^l w ⊗ E^l+1 v + ∑_l=0^r-1 (-1)^ w ( v + l) + l A_l q^1/2(λ_v + 2l) (λ_w - 2l) + λ_v + 2l F^lE w ⊗ E^l v + ∑_l=1^r-1 (-1)^ w ( v + l) A_l q^1/2(λ_v + 2l) (λ_w - 2l) + λ_v + 2llF^l-1K;-l+1w ⊗ E^l v. On the other hand, we compute c_V,W(E · v ⊗ w) = ∑_l=0^r-1 (-1)^l + l w + v w A_l q^1/2(λ_v + 2l)(λ_w - 2l + 2) F^l Ew ⊗ E^l v + ∑_l=0^r-1 (-1)^l w + v w + w A_l q^λ_w + 1/2(λ_v + 2l + 2)(λ_w - 2l) F^l w ⊗ E^l+1v. The coefficients of F^l Ew ⊗ E^l v in the previous two expressions are both equal to (-1)^l + w ( v + l) A_l q^1/2(λ_v + 2l)(λ_w - 2l + 2) whereas the coefficients of F^l w ⊗ E^l+1v are (-1)^ w ( v + l + 1) A_l q^λ_w + 1/2(λ_v + 2l + 2)(λ_w - 2l) and (-1)^ w( v + l + 1) + l A_l q^1/2(λ_v + 2l)(λ_2 - 2l) + (-1)^ w( v + l + 1) A_l+1 q^1/2(λ_v + 2l + 2)(λ_w - 2l - 2) + λ_v + 2l + 2l+1q^λ_w - l - (-1)^l q^-λ_w + l/q - q^-1, respectively, which are seen to be equal. This proves E-linearity of c_V,W. Verification of the hexagon axioms is similar and so is omitted. §.§ Ribbon structure Let = 2. Define a grading = ⊕_λ∈_λ, where _λ⊂ is the full subcategory of modules whose weights are congruent to λ. Let ⊂ be the image of the set {λ∈|λ - r +1 }. The -graded category is generically semisimple with small symmetric subset . For λ∈∖, a completely reduced dominating set of _λ is {V_(λ-r+1, p)|λ∈λ, p ∈}. The set is readily verified to be small and symmetric. Let λ∈∖ and V ∈_λ non-zero. Since λ∈∖, a homogeneous highest weight vector of V generates a submodule isomorphic to V_(λ-r+1, p) for some lift λ∈ of λ and p ∈. Injectivity of typical Verma modules (Proposition <ref>) ensures the existence of a splitting V≃ V^'⊕ V_(λ-r+1, p) with V^'∈_λ of dimension strictly less than V. Iterating this argument shows that _λ is semisimple with the claimed completely reduced dominating set. Given V, V^'∈, consider the open Hopf link invariant Φ_V^',V = (𝕀_V⊗_V^')∘ (c_V^',V⊗𝕀_V^'∨)∘ (c_V,V^'⊗𝕀_V^'∨)∘ (𝕀_V⊗_V^')∈_(V). When _(V) ≃, write ⟨Φ_V^',V⟩∈ for the scalar by which Φ_V^',V acts. With respect to the pivotal structure of Lemma <ref>, there is an equality ⟨Φ^(s)_V_(λ^', p^'),V_(λ, p)⟩ = (-1)^ p^' q^(λ + r - s) (λ^'+r-1) +(1-r) λ·r if λ∈r/4(2 + r+1), q^rλ-q^-rλ/q^λ+(-1)^r q^-λ otherwise. Since _(V_(λ, p)) =·𝕀_V_(λ, p), it suffices to compute the image of the highest weight vector under Φ^(s)_V_(λ^', p^'),V_(λ, p), to which only the swap and diagonal part Υ of the braiding contribute. 
We find ⟨Φ^(s)_V_(λ^', p^'),V_(λ, p)⟩ = (-1)^ p^' q^(λ + r - s) (λ^'+r-1)∑_i=0^r-1 (-(-1)^rq^-2λ)^i, and the claimed equality follows. Assume that r ≢4 8 and give the pivotal structure of Lemma <ref> determined by s ∈. The natural automorphism θ : 𝕀_⇒𝕀_, {θ_V = _R(c_V,V)}_V ∈ gives the structure of a -linear ribbon category if and only if s = r. It is automatic that θ satisfies the balancing conditions of a ribbon structure. By generic semisimplicity and <cit.>, to prove that θ is a ribbon structure it suffices to verify that θ_V^∨ = θ_V^∨ for all typical Verma modules. The twist of a Verma module is determined by its value on the highest weight vector, to which only the swap and diagonal part Υ of the braiding contribute. We find θ_V_(λ, p) = q^(λ+r-1)(λ + r-2s+1)/2𝕀_V_(λ, p). Using the isomorphism (<ref>), we see that θ_V^∨_(λ, p) = θ_V_(λ, p)^∨ if and only if q^2λ(r-s) =1. This equality holds for all typical λ if and only if s = r. Since we assume that r≢48, equation (<ref>) ensures that we can indeed set s=r in Lemma <ref>. We henceforth write and for ^(r) and ^(r). Using the relation [E,F] = K -K^-1/q-q^-1, the weight λ∈ of a one dimensional -module is seen to satisfy q^2λ=1, equivalently, λ∈r/2. Given (k, p) ∈×, let ^H_(kr/2, p) be the one dimensional module of weight kr/2 and parity p. In the notation of Theorem <ref>, we have ^H_(kr/2, p) = S_(kr/2-r+1, p). Computing as in the proof of Proposition <ref>, we find θ_^H_(kr/2, p) = q^1/2(kr/2)^2 + (1-r)kr/2𝕀_^H_(kr/2, p). Using the “standard" form of the R-matrix—as in Section <ref>—it is claimed in <cit.> that a version of the small quantum group of 𝔬𝔰𝔭(1|2n) is a ribbon Hopf superalgebra for any root of unity of order at least 3 . However, explicit details in the computation of a ribbon element are omitted. Proposition <ref> shows that when r ≡ 4 8, the “standard" braiding and pivotal structure—as in Section <ref>—do not induce a ribbon structure on the restricted quantum group. Note also that, by Proposition <ref>, there is no other restricted unrolled quantum group associated to which might resolve this issue. In future work, we give a Hopf algebraic perspective on this problem. For other examples of quantum groups which have braidings without compatible ribbon structures, see <cit.> and <cit.>. §.§ Non-degenerate relative pre-modularity [cf. <cit.>] The category is unimodular. Since projectivity and injectivity in coincide, it suffices to prove self-duality of the injective hull of the trivial module =. Let λ∈ be typical, so that V_(λ, 0) is projective (Proposition <ref>). Write V_(λ, 0)⊗ V_(λ, 0)^∨≃⊕_i=0^n P_i as a direct sum of projective indecomposables. Adjunction and simplicity of V_(λ, 0) give _(, V_(λ, 0)⊗ V_(λ, 0)^∨) ≃_(V_(λ, 0), V_(λ, 0)) ≃ so that the injective hull of appears in V_(λ, 0)⊗ V_(λ, 0)^∨ with multiplicity one; call it P_0. Since v_+:=v_0 ⊗ v_r-1^∨ and v_-:=v_r-1⊗ v_0^∨ span the weight spaces of V_(λ, 0)⊗ V_(λ, 0)^∨ of weights 2(r-1) and -2(r-1), respectively, we have v_+ ∈ P_i and v_- ∈ P_j for some i and j which satisfy P_i^∨≃ P_j. On the other hand, using Lemma <ref> we verify that F^r-1 v_+ and E^r-1v_- are non-zero -invariant vectors and so are elements of P_0. It follows that i=j=0 and P_0 is self-dual. Up to a global scalar, there exists a unique modified trace on the ideal of projective objects of . The category is a locally finite pivotal -linear tensor category with enough projectives. 
Since is unimodular, <cit.> applies and we conclude that the ideal of projectives has a unique non-trivial right modified trace. Since is braided (Proposition <ref>), this right modified trace is a modified trace. Let λ, λ^'∈ be typical. Cyclicity of the modified trace implies _V_(λ, p)(Φ_V_(λ^', p^'),V_(λ, p))=_V_(λ^', p^')(Φ_V_(λ, p) ,V_(λ^', p^')). Evaluating each side of this equation using Lemma <ref> and equation (<ref>) gives (-1)^ p^' q^λλ^'q^rλ-q^-rλ/q^λ+q^-λ(V_(λ, p)) = (-1)^ p q^λλ^'q^rλ^'-q^-rλ^'/q^λ^'+q^-λ^'(V_(λ^', p^')). We may therefore normalize the modified trace so that (V_(λ, p)) = (-1)^ pq^λ+q^-λ/q^rλ-q^-rλ. With this normalization, we have ⟨Φ_V_(λ^', p^'),V_(λ, p)⟩ = (-1)^ p + p^'q^λ^'λ/(V_(λ, p)). Define = (2,r), = 2/, r=r/. Let I_r = {-r+1+2 i | 0 ≤ i ≤r-1} r ≡ 1 2 r ≡ 2 4, {-r+1+i | 0 ≤ i ≤ r-1} r ≡ 0 8. For λ∈∖ with chosen lift λ∈, set λ+ I_r = {λ +i | i ∈ I_r}. Assume that r ≡ 1 2 or r ≡ 2 4. Give the generic semisimple structure of Proposition <ref> and ribbon structure of Proposition <ref>. Let =×. The monoidal functor σ: →_ 0, (k, p) ↦^H_( k r, p), is a free realization and gives the structure of a non-degenerate pre-modular -category relative to (,). Note that ^H_( k r, p) has weight k r and so, since r is even, lies in -degree 0. Equation (<ref>) gives θ_^H_( k r, p) = 𝕀_^H_( k r, p). Let λ∈ with lift λ∈. We compute for the required bicharacter ψ(λ, (k, p)) = q^ k r λ. If λ∈∖, then, in view of the completely reduced dominating set of Proposition <ref>, we can take Θ(λ) = {V_(j, 0)| j ∈λ + I_r}. For signs a, b ∈{±}, define the generalized quadratic Gauss sum G_a,b = ∑_i=0^r-1 q^a2 i + b2 i^2. Let λ∈ be typical. Using that endomorphisms a Verma module consist of scalars, we find for the stabilization coefficients Δ_b = ∑_k ∈ I_r(V_(λ+k, 0)) ⟨θ^b_V_(λ-r+1, 0)⟩⟨θ^b_V^∨_(λ+k, 0)⟩⟨Φ_V^-b_(λ+k, 0),V_(λ-r+1, 0)⟩, where we have used the notation V^+=V and V^- = V^∨. Using equation (<ref>), we compute Δ_b = q^-b(r-1)^2q^λG_+,b + q^-λG_-,b/q^λ+q^-λ = q^-b (r-1)^2 G_b, the second equality following from the observation that G_a,b is independent of a; its common value is denoted by G_b. Evaluating the Gauss sum, we find Δ_+ = √(r) q^-3/2· -1 r ≡ 1 8, - r ≡ 2 8, - r ≡ 3 8, -1 r ≡ 5 8, -1 r ≡ 6 8, r ≡ 7 8 and Δ_- = -Δ_+. When r ≡ 0 8, direct modifications of the computations above show that, with the =1 free realization, is relative pre-modular but degenerate: Δ_±=0. This motivates the following modification of Proposition <ref>. In particular, note that the grading group is modified. Assume that r ≡ 0 8. Give the ribbon structure of Proposition <ref>. Let = with subset = and = ⊕_λ∈_λ the grading by H-weight modulo . Let =×. The monoidal functor σ: →_ 0, (k, p) ↦^H_(kr, p), is a free realization and gives the structure of a non-degenerate pre-modular -category relative to (,). Repeating the proof of Proposition <ref> shows that is generically semisimple with small symmetric subset . The required bicharacter is ψ(λ,(k, p))=q^k r λ. A completely reduced dominating set of _λ, λ∈∖, is {V_(λ-r+1, p)|λ∈λ, p ∈} so that Θ(λ) = {V_(j, 0)| j ∈λ + I_r}. As in Proposition <ref>, we find Δ_b = q^-b (r-1)^2 G_b, where G_b= ∑_i=0^r-1 (-1)^i q^i + b i^2/2. Evaluating this expression gives Δ_+ = √(r) e^-3 π/4 q^18 and Δ_- = -Δ_+. §.§ Relative modularity Let W ∈_ 0. Recall that a morphism f ∈_(W) is transparent in _ 0 if 𝕀_U ⊗ f = c_W,U∘ (f ⊗𝕀_U) ∘ c_U,W and f ⊗𝕀_V = c_V,W∘ (𝕀_V ⊗ f) ∘ c_W,V for all U,V ∈_ 0. [cf. <cit.>, <cit.>] Let W ∈_ 0. 
A transparent morphism f ∈_(W) factors through a finite direct sum of one dimensional modules in _ 0. Let v_0 be a highest weight vector of V_(-r+1, 0)∈_ 0 and w ∈ W a weight vector. Because Ev_0=0, the explicit form of the braiding implies that c_V_(-r+1, 0),W(v_0 ⊗ w) is proportional to w ⊗ v_0. Because f is transparent, c_W,V_(-r+1, 0)(f(w) ⊗ v_0) is proportional to v_0 ⊗ f(w). Since {F^i v_0 | 0 ≤ i ≤r-1} is a basis of V_(-r+1, 0), we conclude that E f(w) =0. Arguing in the same way with V_(-r+1, 0) and v_0 replaced with V^-_(-r+1, 0) and its lowest weight vector v_0^-, we conclude that F f(w) =0. It follows that K-K^-1/q-q^-1 = [E,F] annihilates f(w). Writing λ∈ for the weight of f(w), we conclude that q^2λ =1 so that, by Example <ref>, each homogeneous weight vector in the image of f spans a one dimensional module. Since W is in degree 0 and the image of f is a direct sum of its weight spaces, we conclude that f factors through a direct sum of one dimensional modules of degree 0. Recall the notion of relative modular category from Definition <ref>. Assume that r ≢4 8. With the structures of Propositions <ref> and <ref>, the category is -modular relative to (,) with relative modularity parameter ζ = -r r ≡ 1 2 r ≡ 2 4, -r r ≡ 0 8. It remains to establish the existence of a relative modularity parameter. Consider Definition <ref> with h = γ and g = λ with V_i=V_(α, 0),V_j= V_(β, 0)∈Θ(λ). Denote by f_γ; α, β∈_(V_(α, 0)⊗ V_(β, 0)^∨) the morphism determined by the left hand side of diagram (<ref>). The handle slide property of <cit.> guarantees that f_γ; α, β is transparent in _ 0. By Lemma <ref>, we can write f_γ; α, β = ∑_i=1^m h_γ; α, β, i∘ g_γ; α, β, i for some g_γ; α, β, i∈_(V_(α, 0)⊗ V_(β, 0)^∨,^H_(k_i r/2, p_i)) and h_γ; α, β, i∈_(^H_(k_i r/2, p_i),V_(α, 0)⊗ V_(β, 0)^∨) with ^H_(k_i r/2, p_i)∈_ 0; see Example <ref>. Assume first that r ≡ 1 2 or r ≡ 2 4 so that contains all one dimensional modules in degree 0. The set I_λ is defined so that _(V_(α, 0)⊗ V^∨_(β, 0), ^H_( k_i r, p_i)) ≃_(V_(α, 0), V_(β, 0)⊗^H_( k_i r, p_i)) is zero unless α= β and (k_i, p_i) = (0, 0). We may therefore assume that α=β. In this case, f_γ; α, α factors through the trivial module and f_γ; α, α = ζ_V_(α, 0)∘_V_(α, 0) for some constant ζ. Applying _V_(α, 0)⊗ V_(α, 0)^∨ to the right hand side of equation (<ref>) gives ζ(V_(α, 0)) while applying it to the left hand side gives _V_(α, 0)⊗ V_(α, 0)^∨ (f_γ; α, α) = ∑_δ∈γ+ I_r(V_(δ, 0)) _V_(α, 0)⊗ V_(α, 0)^∨(f_δ; α, α) = (V_(α, 0)) ∑_δ∈γ+ I_r_V_(δ, 0)(Φ_V^∨_(α, 0),V_(δ, 0)) _V_(δ, 0)( Φ_V_(α, 0),V_(δ, 0)) = (V_(α, 0)) ∑_δ∈γ+ I_r(V_(δ, 0))^2 ⟨Φ_V^∨_(α, 0),V_(δ, 0)⟩⟨Φ_V_(α, 0),V_(δ, 0)⟩ = -(V_(α, 0)) r. The second equality follows from isotopy invariance, defining properties of and simplicity of V_(δ, 0). The final equality follows from the isomorphism (<ref>) and equation (<ref>). In particular, the sign in the expression reflects the fact that the highest weight vector of V^∨_(α, 0) is of odd parity, since r is even. We conclude that ζ = -r. If instead r ≡ 0 8, then _(V_(α, 0)⊗ V^∨_(β, 0), ^H_(k_i r/2, p_i)) ≃_(V_(α, 0), V_(β, 0)⊗^H_(k_i r/2, p_i)) is non-zero for a unique ^H_(k_i r/2, p_i) with k_i ∈{0,1} and p_i = 0; call it ^H_(k r/2, 0). It follows that f_γ; α, β = d_α,β𝕀_^H_(k r/2, 0)⊗_V_(α, 0)∘_V_(α, 0) for some d_α,β∈. The modified trace of the right hand side of equation (<ref>) is ± d_α,β(V_(α, 0)) and so vanishes precisely when d_α,β=0. 
Computing as in the previous paragraph, we find the modified trace of the left hand side of equation (<ref>) to be ∑_δ∈γ+ I_r(V_(δ, 0)) _V_(α+k, 0)⊗ V_(α, 0)^∨(f_δ; α+k,α) = -(V_(α, 0)) q^k γ∑_i=0^r-1 q^kr/2 i. When k=0 the sum over i is equal to r. When k=1 we have q^r/2=-1 and the sum over i vanishes. The constant d_α,β therefore vanishes unless α = β, in which case it is equal to -r. It follows that ζ = -r. § TOPOLOGICAL FIELD THEORY FROM In this section, we use the representation theoretic results of Section <ref> to construct family of decorated three dimensional TFTs. All manifolds are assumed oriented. §.§ Non-semisimple TFTs from relative modular categories Fix a relative modular category . Let _ be the category of decorated surfaces and their diffeomorphism classes of admissible decorated bordisms, as defined in <cit.>. An object of _ is a tuple =(Σ, {x_i}, , ℒ) consisting of * a closed surface Σ with a choice * of basepoint for each connected component, * a finite set {x_i}⊂Σ∖ * of oriented framed -coloured points, * a cohomology class ∈ H^1(Σ∖{x_i}, * ;) such that (_i) = g_i is the degree of the colour of x_i, where _i is the oriented boundary of a regular neighbourhood of x_i, and * a Lagrangian subspace ℒ⊂ H_1(Σ; ). A morphism in _ is a tuple ℳ = (M,T,,m) : _1 →_2 consisting of * a bordism M: Σ_1 →Σ_2, * a -coloured ribbon graph T ⊂ M whose colouring is compatible with those of the marked points of _1 and _2, * a cohomology class ∈ H^1(M∖ T, *_1 ∪ *_2; ) which restricts to _j on Σ_j, j=1,2, and such that the colour of each connected component T_c of T has degree (_c) ∈, where _c is an oriented meridian of T_c, and * an integer m ∈, the signature defect. Moreover, it is required that ℳ is admissible: for each connected component M_c of M which is disjoint from Σ_1, at least one edge of T ∩ M_c is coloured by a projective object of or there exists an embedded closed oriented curve γ⊂ M_c such that (γ) ∈∖. Disjoint union gives _ the structure of a symmetric monoidal category. The unique pairing γ: ×→{± 1} which makes the diagram [baseline= (a).base] [scale=1.0] (a) at (0,0)[column sep=7.0em,row sep=2.0em] σ_k_1⊗σ_k_2r[above]c_σ_k_1,σ_k_2d[left]≀ σ_k_2⊗σ_k_1d[right]≀ σ_k_1+k_2r[below]γ(k_1,k_2) ·𝕀_σ_k_1+k_2 σ_k_2+k_1; commute for all k_1, k_2 ∈ induces a symmetric braiding on the monoidal category _ of -graded vector spaces and their degree preserving linear maps. [<cit.>] A modular -category relative to (,) together with a choice 𝒟 of square root of the relative modularity parameter ζ defines a symmetric monoidal functor _: _→_. We refer to _ as the TFT associated to . The values of _ on closed bordisms ∅→∅ coincide with a (renormalization of the) 3-manifold invariants N_ of <cit.>; see Definition <ref> below. The former values are defined as follows. Let R be a -coloured ribbon graph in S^3. Suppose that an edge of R is coloured by a generic simple object V ∈. Let R_V be the (1,1)-ribbon graph obtained from R by cutting an edge labelled by V. Then F^'_(R) :=_V(R_V) ∈ is an isotopy invariant of R <cit.>. With this notation, the partition function of a closed admissible bordism ℳ=(M,T,,m) with computable surgery presentation L ⊂ S^3 is _(ℳ) = ^-1-l( /Δ_-)^m-σ(L) F'_(L∪ T) ∈. Here L has l connected components, σ(L) is the signature of the linking matrix of L and each component L_c of L is coloured by the Kirby colour Ω_(_c). We note for later that the construction of _ requires the choice of an element g_0 ∈ and a simple projective V_g_0∈_g_0. 
Up to equivalence, _ is independent of these choices. In the following sections, we take to be one of the relative modular categories of Theorem <ref>. The pairing γ: ×→{± 1} is γ((k_1, p_1),(k_2, p_2)) = (-1)^ p_1 p_2. The category is TFT finite in the sense of <cit.>, as follows easily from the observation that is a Krull–Schmidt category whose isomorphism classes of indecomposable projectives are in bijection with isomorphism classes of simple objects. It follows from <cit.> that all state spaces of _ are finite dimensional. §.§ Verlinde formulae In this section, we compute the value of :=_ on trivial circle bundles over surfaces. Together with a combinatorial description of spanning sets of state spaces of , this allows for the computation of Euler characteristics and total dimensions of state spaces. These results can be seen as Verlinde formulae for . Let[We omit the required Lagrangian subspace of H_1(Σ;) since it is not used in what follows. Similarly, we omit the signature defect required to define morphisms in _.] =(Σ,{x_1,…, x_s},) be a decorated admissible connected surface of genus g ≥ 1. For each λ∈, define the decorated 3-manifold × S^1_λ=(Σ× S^1,T={x_1, …, x_s}× S^1,⊕λ), where ⊕λ∈ H^1(Σ∖{x_i}; ) ⊕≃ H^1((Σ× S^1) ∖ T; ). The partition function of × S^1_λ can be computed via an explicit surgery presentation, as in <cit.>, <cit.>. Writing V_(μ_i-r+1, p_i) for the colour of x_i and setting μ = ∑_i=1^s (μ_i -r+1) and p = ∑_i=1^s p_i, we find (× S^1_λ) = (-1)^ pζ^g-1∑_k ∈ I_r q^μ (λ+k)(q^r(λ+k)-q^-r(λ-k)/q^λ+k+q^-λ-k)^2g-2-s. For (k, p) ∈, let _(k, p)() ⊂() be the subspace of -degree (k, p). Define the generating function of -graded dimensions of () to be _(x,y)() = ∑_(k, p) ∈ (-1)^ p__(k, p)() x^k y^ p∈[x^± 1,y^± 1]. Here we treat the parity subgroup as the multiplicative group {1, -1}. [Verlinde formula] There is an equality (× S^1_λ) = _(q^ r λ,1)(). The equality can be proved by a direct modification of the proofs of <cit.> and <cit.>. The only change is the explicit form of the bicharacter ψ, given in the present setting in Propositions <ref> and <ref>, which determines the appropriate specialization of the variables x and y. When has no marked points, that is, s=0, the Euler characteristic of () with respect to the parity subgroup is χ(()) = r r ≡ 1 2, r r ≡ 2 4, r r ≡ 0 8. if g =1 and χ(()) = 0 r ≡ 1 2, 0 r ≡ 2 4, r^3g-3/2^2g-3 r ≡ 0 8 if g ≥ 2. Theorem <ref> gives χ(()) = lim_λ→ 0(× S^1_λ). Since q^r is a sign (see equation (<ref>)), when s=0 equation (<ref>) becomes (× S^1_λ) = ζ^g-1∑_k ∈ I_r(q^rλ-q^-rλ/q^λ+k+q^-λ-k)^2g-2. When g=1, we obtain χ(()) = | I_r | = r r ≡ 1 2 r ≡ 2 4, r r ≡ 0 8. Consider then the case g ≥ 2. If r ≡ 1 2 or r ≡ 2 4, then lim_λ→ 0q^rλ-q^-rλ/q^λ+k+q^-λ-k vanishes for all k ∈ I_r. Indeed, the limit of the numerator vanishes while that of the denominator does not. It follows that χ(())=0. If instead r ≡ 0 8, then lim_λ→ 0 (q^λ+k+q^-λ-k) is zero if and only if k ∈{±r/2}, in which case an application of l'Hôpital's rule gives for the limit (<ref>) the value r and we arrive at χ(()) = r^3g-3/2^2g-3. Next, we turn to the computation of the total dimension _(). In isolation, Theorem <ref> is insufficient to do so. Indeed, without constraints on the -support of (), the total dimension cannot be accessed as a limit of (× S^1_λ). A similar issue was encountered and resolved in <cit.> in the context of TFTs constructed from the unrolled quantum group of . We explain the modifications for . 
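To make the specialization above concrete, the following short numerical sketch evaluates ζ^g-1∑_k ∈ I_r ((q^rλ-q^-rλ)/(q^λ+k+q^-λ-k))^2g-2 for odd r. It assumes q = exp(2πi/r), I_r = {-r+1, -r+3, …, r-1} and ζ = -r, which is our reading of the conventions above; it is only a sanity check on the genus one and genus two Euler characteristics, not part of the argument.

import numpy as np

def verlinde_sum(r, g, lam):
    # zeta^(g-1) * sum over I_r of ((q^(r*lam) - q^(-r*lam)) / (q^(lam+k) + q^(-lam-k)))^(2g-2), s = 0
    q = np.exp(2j * np.pi / r)        # assumption: q is a primitive r-th root of unity
    zeta = -r                         # assumption: relative modularity parameter for odd r
    ks = np.arange(-r + 1, r, 2)      # assumption: I_r = {-r+1, -r+3, ..., r-1}
    num = q ** (r * lam) - q ** (-r * lam)
    den = q ** (lam + ks) + q ** (-lam - ks)
    return zeta ** (g - 1) * np.sum((num / den) ** (2 * g - 2))

r = 7
print(verlinde_sum(r, g=1, lam=0.01))        # equals r = 7, matching chi = r in genus one
print(abs(verlinde_sum(r, g=2, lam=1e-3)))   # small ...
print(abs(verlinde_sum(r, g=2, lam=1e-4)))   # ... and smaller: consistent with chi = 0 for g >= 2 and odd r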
Consider the following oriented trivalent graph Γ: [Figure omitted: an oriented trivalent graph with loop edges e_1, …, e_g, connecting edges f_1, …, f_2g-3, edges e_g+1, e_g+1^', e_1^', and three parallel edges e^-1_-1, e^0_-1, e^1_-1 at the top right.] A colouring of Γ of degree k ∈ is a function : Edge(Γ) →Ob() such that (e_-1^1) = V_g_0 = (e_-1^-1), (e_-1^0)=σ_k and assigning to each of the remaining edges e a simple module V_(α_e-r+1, p_e), where (α_e, p_e) ∈× is congruent to (_e) ∈, such that the following Balancing Condition holds: at each trivalent vertex, the algebraic sum of the colours of incident edges is consistent with the isomorphism (<ref>). Fix a lift ∈ H^1(Σ; ×) of ∈ H^1(Σ; ) and let ℭ_k = {|(e_i) = V_(_e_i) +j j ∈ I_r, i=1,…, g }. Set ℭ = _k ∈ℭ_k. As explained in <cit.>, each colouring c ∈ℭ_k defines a vector v_c =(η̃,Γ_c,,0) ∈_-k(). The construction is based on the fact that Γ is a modification of an oriented spine of a genus g handlebody with an open ball removed; the modification is required to construct vectors of non-zero degree. Let =(Σ, ) be a decorated connected surface of genus g ≥ 1 without marked points. Assume that 2 is not in the image of the canonical map H^1(Σ; ) → H^1(Σ; ) and (_e_1)+g_0 ∉. Then {v_c | c ∈ℭ_k} spans _-k(). Moreover, _-k() is trivial unless k=(d, d) for some d ∈ which, if r ≡ 0 8, must be even. A direct generalization of the argument from the proof of <cit.> establishes the spanning statement; this uses the stated assumptions on . Next, we study the set ℭ. Assume that g ≥ 2; the slightly degenerate case g=1, where e_1=f_0, is dealt with similarly. An element of ℭ can be constructed as follows. For each i=1, …, g, colour e_i by a simple module V_(α_i-r+1, p_i) satisfying the constraint in the definition of ℭ_k.
Next, colour f_1, f_2, …, f_2g-3 recursively so as to satisfy the Balancing Conditions. The colours {V_(β_i-r+1, q_i)} of {f_i} are determined by (ϵ_1, …, ϵ_2g-3) ∈{-i | 0 ≤ i ≤r-1}^× 2g-3 through the initial condition β_1 = α_1 - α_2 -2ϵ_1 and the recursive system β_2i =β_2i-1 +α_i+1 + 2ϵ_2i 1 ≤ i ≤ g-2, β_2i+1 + α_i+2 + 2ϵ_2i+1=β_2i 1 ≤ i ≤ g-2. The solution is β_2i=α_1+ 2∑_j=1^2i (-1)^j 2ϵ_j, 1 ≤ i ≤ g-2, β_2i+1 = α_1 - α_i+2 +2∑_j=1^2i+1 (-1)^j ϵ_j 1 ≤ i ≤ g-2. Similarly, we find for the parities q_2i= p_1+ ∑_j=1^2iϵ_j 1 ≤ i ≤ g-2, q_2i+1 = p_1 + p_i+2+ ∑_j=1^2i+1ϵ_j 1 ≤ i ≤ g-2. Writing σ_k=^H_( k r, p), the colours of the edges e_g+1, e_g+1^' and e_1^' are subject only to the Balancing Conditions associated to the four top right vertices of Γ: (α_1, p_1) + g_0 +2 ϵ^' = (α_1^', p_1^') , (α_g+1^', p_g+1^') + ( k r, p) = (α_1^', p_1^'), (α_g+1^', p_g+1^') = (α_g+1, p_g+1) + g_0 + 2 ϵ^'', (α_g+1, p_g+1) = (α_g, p_g) + (β_2g-3, q_2g-3) + 2 ϵ^'''. Here ϵ^', ϵ^'', ϵ^'''∈{-i | 0 ≤ i ≤r-1}. These equations hold if and only if k r=2ϵ^' - 2 ϵ^'' -2 ϵ^''' - 2∑_j=1^2g-3 (-1)^j ϵ_j, p = ϵ^' + ϵ^'' + ϵ^'''+ ∑_j=1^2g-3ϵ_j. and α_g+1=α_1 + 2ϵ^'''+2∑_j=1^2g-3 (-1)^j ϵ_j, p_g+1= p_1+ ϵ^''' + ∑_j=1^2g-3ϵ_j. Setting d = ϵ^' - ϵ^'' - ϵ^''' - ∑_j=1^2g-3 (-1)^j ϵ_j gives ( k r, p)=(2d, d). If r ≡ 1 2 or r ≡ 2 4, then r/2 is odd and we conclude that this colouring is of degree (d, d) ∈ for some d ∈. If r ≡ 0 8, then r/2 is even so that d is even. It is immediate that the above construction recovers each element of ℭ. When s=0, the total dimension of () is _() = r^3g-3· 2^2g-2 r ≡ 1 2, 1/2^g-1 r ≡ 2 4, 1/2^2g-3 r ≡ 0 8. When r ≡ 0 8, Theorem <ref> implies that () has even parity, whence _() is equal to the Euler characteristic computed in Corollary <ref>. When r ≡ 1 2 or r ≡ 2 4, Theorem <ref> implies that the generating function of graded dimensions simplifies to _(x,y)() = ∑_d∈ (-1)^ d__(d, d)() x^d y^ d. Since lim_λ→1/2 q^λ r = -1, Theorem <ref> and equation (<ref>) lead to the equality _() = lim_λ→1/2 ζ^g-1∑_k ∈ I_r(q^rλ-q^-rλ/q^λ+k+q^-λ-k)^2g-2. We have lim_λ→1/2 (q^rλ-q^-rλ) = 0. Writing k = -r+1+2n and noting that q^r=1, we see that lim_λ→1/2 (q^λ+1+2n+q^-λ-1-2n) =0 if and only if q^4n+ 2+ 1/=-1. The latter holds if and only if 4n+2+1/≡r/2 r. The unique element -r+1+2n_* ∈ I_r which satisfies the latter equation has n_* = 5r-5/8 r ≡ 1 8, 3r+2/8 r ≡ 2 8, 7r-5/8 r ≡ 3 8, r-5/8 r ≡ 5 8, r-6/8 r ≡ 6 8, 3r-5/8 r ≡ 7 8. An application of l'Hôpital's rule now gives the claimed total dimension. § COMPARISON BETWEEN N_R^ AND ^-INVARIANTS The goal of this section is to prove Theorems <ref> and <ref> from the Introduction. §.§ Plumbed 3-manifolds Let Γ=(Γ,f) be a plumbing graph, that is, Γ is a tree with (finite) sets V(Γ) and E(Γ) of vertices and edges, respectively, and vertex weighting f: V(Γ) →. We view E(Γ) as a collection of unordered pairs of distinct elements of V(Γ). Let B be the V(Γ) × V(Γ) matrix with entries B_v v^' = 1 {v, v^'}∈ E(Γ), f_v v=v^', 0 . We assume that B is invertible. Let b_+ (resp. b_-) be the number of positive (resp. negative) eigenvalues of B and σ=b_+ - b_- the signature of B. Note that b_+ + b_- = | V(Γ) |. The number of edges incident to a vertex v is its degree (v). The plumbing graph (Γ,f) is called weakly negative definite if the restriction of B^-1 to vertices of degree greater than two is negative definite <cit.>. 
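The linking matrix B and the weak negative definiteness condition just defined are straightforward to check by computer. The sketch below is only an illustration of the definitions; the E8-shaped tree with all framings equal to -2 is used as the example because its linking matrix is minus the E8 Cartan matrix, hence negative definite (it is a standard plumbing presentation of the Poincaré homology sphere).

import numpy as np

def linking_matrix(num_vertices, edges, framings):
    # B_{vv'} = 1 if {v, v'} is an edge, f_v if v = v', and 0 otherwise
    B = np.zeros((num_vertices, num_vertices))
    for v, f in enumerate(framings):
        B[v, v] = f
    for v, w in edges:
        B[v, w] = B[w, v] = 1
    return B

def is_negative_definite(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) < -tol))

def is_weakly_negative_definite(B, edges, tol=1e-9):
    # restriction of B^{-1} to the vertices of degree greater than two
    deg = np.zeros(len(B), dtype=int)
    for v, w in edges:
        deg[v] += 1
        deg[w] += 1
    high = np.where(deg > 2)[0]
    if len(high) == 0:
        return True
    Binv = np.linalg.inv(B)
    return is_negative_definite(Binv[np.ix_(high, high)], tol)

# E8-shaped plumbing tree (arms of lengths 1, 2 and 4 attached to a single trivalent vertex),
# every framing equal to -2
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]
framings = [-2] * 8
B = linking_matrix(8, edges, framings)
eig = np.linalg.eigvalsh(B)
print("b_+ =", int(np.sum(eig > 0)), " b_- =", int(np.sum(eig < 0)))       # 0 and 8, so sigma = -8
print("negative definite:", is_negative_definite(B))                       # True
print("weakly negative definite:", is_weakly_negative_definite(B, edges))  # True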
Let L ⊂ S^3 be the framed link with an unknot component for each vertex v ∈ V(Γ) with framing f_v and with distinct components v and v^' Hopf linked whenever B_v v^'=1. The linking matrix of L is therefore B. Let M be the closed oriented 3-manifold obtained by integral surgery along L. §.§ -invariants for Let Γ be a plumbing graph. Define formal power series F^±({x_v}_v ∈ V(Γ)) = ∏_v ∈ V(Γ)( x_v ±1/x_v)^2 - v∈{x_v, x_v^-1}_v ∈ V(Γ). There is an equality F^-({ x_v}_v ∈ V(Γ)) = - F^+({x_v}_v ∈ V(Γ)). We have F^-({ x_v}_v ∈ V(Γ)) = ^2 | V(Γ) | - ∑_v v F^+({x_v}_v ∈ V(Γ)). Now use that, since Γ is a tree, 2 | V(Γ) | - ∑_v v = 2. Define complex numbers {F_l^±}_l ∈^V(Γ) through the equality F^±({x_v}_v ∈ V(Γ)) = ∑_l ∈^V(Γ) F^±_l ∏_v ∈ V(Γ) x_v^l_v. It is immediate that F^+_l = 0 unless l ≡ v 2, and similarly for F^-_l. Moreover, Lemma <ref> gives F^+_l = - ^l^t F^-_l, l ∈^V(Γ) where =(1, …, 1)∈^V(Γ). Let be an indeterminate. [<cit.>] Let Γ be a weakly negative definite plumbing graph with linking matrix B and associated closed oriented 3-manifold M. The ^-invariant of M with structure 𝔰 is ^_𝔰 (M;)= (-1)^b_+^3 σ- B/4∑_l ∈^V(Γ) l ≡𝔰 2 B ^V(Γ) F^+_l ^- 1/4l^t B^-1 l∈^Δ^_𝔰[^-1], for some scalar Δ^_𝔰∈. Keeping the notation of Definition <ref>, we recall from <cit.> that the -invariant for is ^_𝔰 (M;)= (-1)^b_+^3 σ- B/4∑_l ∈^V(Γ) l ≡𝔰 2 B ^V(Γ) F^-_l ^- 1/4l^t B^-1 l∈^Δ^_𝔰[^-1], for some Δ^_𝔰∈. * Definition <ref> generalizes the ^-invariants of <cit.> from negative definite to weakly negative definite plumbing graphs; note that b_+=0 in the former case. The additional overall factor of 2^-| V(Γ) | which appears in <cit.> is absorbed into the coefficients F_l^- used in this paper; compare equations (<ref>) and <cit.>. * Weak negative definiteness of Γ ensures convergence of the -series ^_𝔰(M; ) in the unit disk {||| < 1 } <cit.>. A direct modification of this argument applies to ^_𝔰(M; ). * That the -series (<ref>) is independent of the plumbing presentation of M, and hence an invariant of M, was verified in <cit.>. We expect a similar calculation to apply to ^_𝔰(M; ). §.§ ^ versus ^ The following result is motivated by physical calculations of Chauhan and Ramadevi <cit.>. For each structure 𝔰 on M, there exist constants C_𝔰∈ and Δ_𝔰∈ such that ^_𝔰(M;) = ^Δ_𝔰∑_n=-∞^∞ a_n (-)^n and ^_𝔰(M;) = C_𝔰^Δ_𝔰∑_n=-∞^∞ a_n ^n. In the notation of Section <ref>, write 𝔰 = σ(b,s) for some b ∈ H_1(M;) and s ∈(M). Writing the summation index l in the definition of ^_𝔰 as 2b+B(s-) + 2 Bk for k ∈^V(Γ) and using equation (<ref>) to replace F_l^+ with F_l^-, we find ^_σ(b,s)(M(Γ);) = (-1)^1+b_+^3 σ- B/4 -(2b+B(s-))^t B^-1 (2b+B(s-))/4^(2b+B(s-))^t ∑_k ∈^V(Γ) (-1)^k^t B F^-_l(k)^-k^t B k + k^t(2b+B(s-)). The definitions give k^t B k + k^t Bs = 2∑_1 ≤ i<j ≤ V(Γ) B_ij k_i k_j + ∑_i=1^V(Γ) B_ii k^2_i + ∑_i=1^V(Γ)(∑_j=1^V(Γ) B_ij s_j ) k_i. Since ∑_j=1^V(Γ) B_ij s_j ≡ B_ii 2, we have k^tB k + k^t Bs ≡ 0 2, whence -k^t B k + k^t(2b+B(s-)) ≡ k^t B 2. We conclude that ∑_k ∈^V(Γ) (-1)^k^t B F^-_l ^-k^t B k + k^t(2b+B(s-)) = ∑_k ∈^V(Γ) F^-_l (-)^-k^t B k + k^t(2b+B(s-)). ^_σ(b,s)(M(Γ);) = (-1)^1+b_+^3 σ- B/4 -(2b+B(s-))^t B^-1 (2b+B(s-))/4^(2b+B(s-))^t ∑_k ∈^V(Γ) F^-_l (-)^-k^t B k + k^t(2b+B(s-)). Making the same substitution as above for l in ^_σ(b,s)(M;), we see that we may take Δ_σ(b,s) = 3 σ- B/4 -(2b+B(s-))^t B^-1 (2b+B(s-))/4 and C_σ(b,s) = -^(2b+B(s-))^t. §.§ Relation to CGP invariants To compare -invariants and CGP invariants, it is useful to slightly renormalize the latter, as defined by equation (<ref>). 
Fix a relative modular category . [<cit.>] Let L ⊂ S^3 be an oriented framed link and M the closed oriented 3-manifold obtained by integral surgery along L. Let ∈ H^1(M; ) such that (_c) ∈∖ for each component L_c of L. The CGP invariant of (M,) is N_(M,) = 1/Δ_+^b_+Δ_-^b_- F^'_(L) ∈, where each component L_c of L is coloured by the Kirby colour Ω_(_c) and b_+ (resp. b_-) is the number of positive (resp. negative) eigenvalues of the linking matrix of L. Take now to be one of the relative modular categories of Theorem <ref>. We write N_r^(M,) for N_(M,) to emphasize the dependence on the order r of q. Given λ, λ^'∈, set (λ) = q^λ+q^-λ/q^rλ-q^-rλ, S(λ^',λ) = q^λ^'λ, T(λ) = q^λ^2-(r-1)^2/2. By equations (<ref>) and (<ref>), we have ⟨θ_V_(λ, p)⟩ = T(λ) and, for λ typical, (V_(λ, p)) = (-1)^ p(λ), ⟨Φ_V_(λ^', p^'),V_(λ, p)⟩ = (-1)^ p^'S(λ, λ^')/(λ). Let Γ be a weakly negative definite plumbing graph and ∈ H^1(M; ). Write α_v for the value of on the homology class of the oriented meridian of the vth component of Γ. For k ∈ I_r^V(Γ), write α_k = {α_v + k_v}_v ∈ V(Γ). With this notation, the CGP invariant (<ref>) becomes N_r^(M,) = 1/Δ_+^b_+Δ_-^b_-∑_k ∈ I_r^V(Γ)∏_v ∈ V(Γ)(α_k_v)^2 - (v) T(α_k_v)^f_v∏_{v_1,v_2}∈ E(Γ) S(α_k_v_1, α_k_v_2). In <cit.>, a regularization of the function via an additional parameter t is presented so that the limit → e^2π/r of particular sums involving ^_𝔰 (M;) may be computed as the evaluation at = e^2π/r followed by the limit at t=1. Similar techniques should be applied for more general Lie (super)algebras, like , but we do not pursue this. Instead, we work in the following setting. We assume that a similar regularization exists for ^_𝔰 (M;), so that the limit in Theorem <ref> may be computed as the evaluation at = e^4π/r followed by the limit t→ 1. The goal of the remainder of the paper is to prove the following comparison result. We use the topological notation of Section <ref>. Let δ∈{± 1} and assume that r is congruent to δ or 2 δ modulo 8. Let (Γ,f) be a weakly negative definite plumbing graph. There is an equality N_r^(M,) = lim_→ e^4π/r∑_𝔰∈(M) c^_,𝔰^_𝔰 (M;), where c^_,σ(b,s) = e^πμ(M,s)(M,[4 ])/| H_1(M;) |∑_a,f e^2 π(-r-δ/8(a,a) - (a,b+f) +2 (f,f)-1/2(a) ) if r ≡δ 8 and c^_,σ(b,s) = e^-δπ/2μ(M,s)(M,[2 ])/| H_1(M;) |∑_a,f e^2 π(-r-2δ/8(a,a) - (a,b+δ f) + δ(f,f) - 1/2(a) ) if r ≡ 2 δ 8. Here ∑_a,f denotes summation over a,f ∈ H_1(M;) and is the appropriately normalized Reidemeister torsion of M (defined below). Before proving Theorem <ref>, we record a number of comments. * The radial limit → e^4π/r appearing in Theorem <ref> should be interpreted as in the case of <cit.>. The subtlety is that the function ^_𝔰 (M;) is multivalued because of the overall factor of ^Δ^_𝔰. * From the perspective of Theorem <ref>, Hypothesis <ref> allows the limit → e^4π/r to be computed as an evaluation at = e^4π/r. This is the form in which Theorem <ref> is proved below. The -analogue of Hypothesis <ref> is verified for Y-shaped plumbing graphs in <cit.>. * The analogue of Theorem <ref> for is <cit.>, where the cases r ≡ 2 δ 8 and r ≡ 4 8 are treated. In the former case, the coefficients c^_,σ(b,s) and c^_,σ(b,s) are very closely related. Relabelling f with -f, a summand of ∑_a,f in c^_,σ(b,s) becomes e^2 π(-r-2δ/8(a,a) + δ(a,f - δ b) - 1/2(a)+ δ(f,f)), which is the corresponding summand in c^_,σ(b,s). The overall prefactors differ by the replacement of (M,[2 ]) in c^_,σ(b,s) with -(M,[]) in c^_,σ(b,s). It follows that we have -(M,[]) c^_,σ(b,s) = (M,[2]) c^_,σ(b,s). 
* The invariants N_r^ when r ≡ 1 2 were recently defined by Detcherry <cit.>. Based on the calculations of Section <ref>, where it is explained why the computation involved in the proof of Theorem <ref> does not extend to the case r ≡± 3 8, we expect N_r^ to be closely related to ^ when r ≡± 1 8 but not when r ≡± 3 8. * When r ≡ 0 8 the computation from the proof of Theorem <ref> again fails, although for a different reason than for r ≡± 3 8; see Section <ref>. The reason is at least partially due to the -grading of of Proposition <ref>, as opposed to the 2-grading used when r ≢0 8. The former grading leads to non-topological terms in the putative relation between N_r^ and ^. * When r ≡ 0 8 the invariant N_r^ is not defined since the category of -weight modules with its standard 2-grading and free realization is degenerate. Instead, there exist spin-refined CGP invariants <cit.> which, as shown in <cit.>, are related to ^-invariants. The spin-refinement uses the same 2-grading with a modified free realization. Motivated by this, we expect Theorem <ref> to extend to the case r ≡ 0 8 when N_r^ is replaced with its (not yet defined) spin-refinement which results from the category with its natural 2-grading, as opposed to the -grading of Proposition <ref>. The key tool used to prove Theorem <ref>—as in the case for —is the following reciprocity of quadratic Gauss sums. [<cit.>] Keeping the notation of Section <ref>, there is an equality ∑_n ∈^V(Γ) r ^V(Γ) e^2 π/r (n^t B n + p^t n) = e^πσ/4 (r/2)^| V(Γ) |/2/| B |^1/2∑_ã∈^V(Γ) 2B ^V(Γ) e^-π r/2(ã + p/r)^t B^-1(ã + p/r). §.§ Factorization when r ≢0 8 We begin with the case r ≢0 8. Using equation (<ref>), we see that the CGP invariant factorizes as N_r^(M,)=𝒜ℬ𝒞, where 𝒜 = q^-1/2(r-1)^2 B/Δ_+^b_+Δ_-^b_-, ℬ = F^-( {e^2 πr/rα_v}_v ∈ V(Γ))^-1 and 𝒞 = ∑_k ∈ I_r^V(Γ) F^+({q^α_k_v}_v ∈ V(Γ)) q^1/2(α +k)^t B (α + k). The existence of the factorization (<ref>) stems from the fact that r/r =2 (resp. r/r=1) when r ≡ 1 2 (resp. r ≡ 2 4), so that the exponents 2 πr/rα_v appearing in the arguments of F^- are invariant under the shifts in α_v by 2 which occur in the indexing set I_r of Kirby colours. When r ≡ 0 8—discussed in Section <ref> below—there is invariance only up to a sign, leading to a mild correction in 𝒞. Fix l ∈^V(Γ). Expanding the function F^+, the contribution of the monomial ∏_v ∈ V(Γ) x_v^l_v to 𝒞 is 𝒞_l = ∑_k ∈ I_r^V(Γ) q^1/2(α +k)^t B (α + k) + l^t (α+k). Defining α̃=α -(r-1) and inserting the explicit definitions of q and I_r gives 𝒞_l = e^π/rα̃^t B α̃ + 2 π/r l^t α̃∑_n ∈^V(Γ)r^V(Γ) e^2 π/r n^t (2/B) n + 2π/r2/(l+Bα̃)^t n. Applying Proposition <ref>, we obtain 𝒞_l=e^- π/r l^t B^-1 le^πσ/4(r/4)^| V(Γ) |/2/| B |^1/2𝒞^'_l, where we have introduced 𝒞^'_l = ∑_ã∈^V(Γ)4/B ^V(Γ) e^-π r/4ã^t B^-1ã - πã^t B^-1(l + B α̃). Setting ã = BA + a for A ∈^V(Γ)4/^V(Γ) and a ∈^V(Γ) B ^V(Γ), we have 𝒞^'_l = ∑_a ∈^V(Γ) B ^V(Γ) A ∈^V(Γ)4/^V(Γ) e^-π r/4 A^t B A- π r/4 a^t B^-1 a - π r/2 A^t a - π A^t(l + B α̃) - π a^t B^-1(l + B α̃). Write l ≡ 2b + B(s- ) 2 B ^V(Γ), where b ∈^V(Γ) B ^V(Γ) and s ∈( 2 )^V(Γ) satisfies ∑_j ∈ V(Γ) B_ij s_j ≡ B_ii 2. The final two terms in the exponent of a summand of 𝒞^'_l become - 2π A^tb - 2π a^t B^-1 b - π a^t(s- + α̃) - π A^t B(s-) -π A^t B α̃. Note that -2π A^tb ∈ 2 π. Since α is a cohomology class valued in = 2 and r is even, we have A^t B α̃≡ A^t B 2; see equation (<ref>) and Remark <ref>. Note also that a^t(s- + α̃) =a^t(s + α -r) is congruent to a^t (s+α) modulo 2, again because r is even. 
We therefore have 𝒞^'_l = ∑_a ∈^V(Γ) B ^V(Γ) e^- π r/4 a^t B^-1 a - 2π a^t B^-1 b - π a^t(s+ α)𝒞^''_l, where we have set 𝒞^''_l = ∑_A ∈^V(Γ)4/^V(Γ) e^-π r/4 A^t B A - π r/2 A^t a - π A^t Bs. A more refined case-by-case analysis is now required. §.§.§ The case r ≡± 1 8 Suppose that r ≡δ 8 with δ = ± 1. Since =(r,2) =1, we have 𝒞^''_l = ∑_A ∈^V(Γ) 4 ^V(Γ) e^2π/4 A^t (-δ/2B) A + 2π/4(-δ a-2Bs)^t A. Applying Proposition <ref> to 𝒞^''_l, we obtain 𝒞^'_l = e^-δπσ/4 2^| V(Γ) |/2/| B |^1/2∑_a, f ∈^V(Γ) B ^V(Γ) e^- π r/4 a^t B^-1 a - 2π a^t B^-1 b - π a^tα· e^4δπ f^t B^-1 f - 2 π f^t B^-1 a + δπ/4 a^t B^-1 a + δπ s^t B s. In terms of the formulae of Section <ref>, this gives 𝒞_l = e^-π/r l^t B^-1 le^πσ (1-δ)/4r^| V(Γ) |/2/| B |· ∑_a,f ∈^V(Γ) B ^V(Γ) e^2π(-r-δ/8(a,a) -(a,b) -1/2(a) + 2 δ(f,f) - (f,a) +δ/2 s^t B s ). Direct computations give 𝒜 = r^-| V(Γ) | q^3 σ - B/2· e^πσ δ = 1, e^π/2σ δ = -1. and ℬ = (-1)^b_+(M,[4 ]); see <cit.> for the latter. Combining the above computations, we find N_r^(,) = q^3σ- B/2 e^π(σ-s^t B s)(-1)^b_+(M,[4 ])/| H_1(M;) |· ∑_l ∈^V(Γ) a,f ∈^V(Γ) B ^V(Γ) F_l e^- π/r l^t B^-1 l e^2π(-r-δ/8(a,a) -(a,b+f) +2 δ(f,f) - 1/2(a)). Noting that e^- π/r l^t B^-1 l = q^- 1/2l^t B^-1 l and using equation (<ref>) we arrive at the claimed equality. §.§.§ The case r ≡± 2 8 Suppose now that r ≡ 2δ 8 with δ = ± 1. Starting from the end of Section <ref> and using that =2, we have 𝒞^''_l = ∑_A ∈^V(Γ) 2 ^V(Γ) e^2π/2 A^t (-δ/2B ) A + 2 π/2 (-a-Bs)^tA. Applying Proposition <ref>, we obtain 𝒞^'_l = e^-δπσ/4 2^| V(Γ) |/2/| B |^1/2∑_a,f ∈^V(Γ) B ^V(Γ) e^- π(r-2δ)/4 a^t B^-1 a - 2π a^t B^-1 b - π a^t α + 2δπ f^t B^-1 f -2δπ f^t B^-1 a + δπ/2 s^t B s. Recombining terms, we conclude that 𝒞_l = e^-π/r l^t B^-1 le^πσ/4 (1-δ)(r/2)^| V(Γ) |/2/| B |· ∑_a,f ∈^V(Γ) B ^V(Γ) e^2π(-r-2 δ/8(a,a) - (a,b+δ f) - 1/2(a) + δ(f,f) + δ/4 s^t B s ). Direct computations give 𝒜 = (r/2)^-| V(Γ) |/2 q^3 σ- B/2· e^-π/2σ δ = 1, 1 δ = -1 and ℬ = (-1)^b_+(M,[2 ]). Putting the above calculations together gives N_r^(M,) = e^-δπ/2μ(M,s)(M,[2 ])/| H_1(M;) |· lim_→ e^4π/r∑_a,b,f ∈ H_1(M;) e^2 π(-r-2δ/8(a,a) - (a,b+δ f) - δ(f,f) -1/2(a) )^_σ(b,s) (M;). §.§.§ The case r ≡± 3 8 While the calculations in the previous two sections can be repeated when r ≡ 3 δ 8, they do not lead to similar conclusions. Indeed, the first step is to write 𝒞^''_l = ∑_A ∈^V(Γ) 4 ^V(Γ) e^2π√(-1)/4 A^t (-3 δ/2B) A + 2π√(-1)/4(-3 δ a-2Bs)^t A. Unfortunately, due to the coefficient -3 δ/2 of the bilinear form B, an application of Proposition <ref> leads to an expression which involves a sum over ^V(Γ) 3 B ^V(Γ), which does not have an obvious topological interpretation in terms of M. For this reason, we do not arrive at a universal topological relation between N_r^ and ^. §.§ The case r ≡ 0 8 We require the following result. Given signs n ∈{± 1}^V(Γ), there is an equality F^-({n_v x_v}_v ∈ V(Γ)) = (-1)^∑_v ∈ V(Γ) n_v v F^-({x_v}_v ∈ V(Γ)). This is a direct calculation. Set α̃=α -(r-1). Since r/r = 1/2, Lemma <ref> leads to the following modification of the factorization (<ref>): 𝒜 = r^-V(Γ)/2 e^π√(-1)/4(3σ +(2-r) B) q^-36 σ - B/2, ℬ = (-1)^b_+(M,[]), 𝒞 = ∑_n ∈ ( r )^V(Γ) (-1)^∑_i ∈ V(Γ) n_i i F^+({q^α̃_n_j}_j ∈ V(Γ)) q^1/2(α̃ +n)^t B (α̃ + n). Fix l ∈^V(Γ) and write l = 2b + B(s - ) with ∑_j B_ijs_i ≡ B_ii 2. We have ∑_v ∈ V(Γ) n_v v ≡ n^t B(s+ ) 2. The contribution of ∏_v ∈ V(Γ) x_v^l_v to 𝒞 is 𝒞_l = ∑_n ∈ ( r )^V(Γ) q^1/2(α̃+n)^t B (α̃ + n) + l^t (α̃+n)+r/2n^t B(s+ ). 
Proceeding as in the previous sections, we arrive at the expression N_r^(M,) = q^-36 σ - B/2 e^π√(-1)/4(-σ +(6-r) B)(-1)^b_+ e^-πμ(M,s)(M,[])/| H_1(M;) | ∑_l ∈^V(Γ)· a ∈^V(Γ) B ^V(Γ) F_l e^2π(-r/2(a,a)-2(a,b)-(a)-1/2α^t B(s+) ). This has a number of problems from the perspective of generalizing Theorem <ref>. First, the term e^πα^t B(s+) is not topological. Second, the r-dependent factor q^-36 σ - B/2 e^π√(-1)/4(-σ +(6-r) B) is of a rather different form than the desired ^3 σ- B/4.
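As a closing sanity check, the reciprocity of quadratic Gauss sums used throughout the factorization arguments above can be verified numerically. The sketch below reflects our reading of the statement, with the imaginary units written explicitly; for simplicity it enumerates the quotient Z^V/2BZ^V only for diagonal B, so it illustrates the identity rather than testing the general case.

import itertools
import numpy as np

def gauss_lhs(B, p, r):
    # sum over n in (Z/rZ)^N of exp((2*pi*i/r) * (n^t B n + p^t n))
    N = len(B)
    total = 0j
    for n in itertools.product(range(r), repeat=N):
        n = np.array(n, dtype=float)
        total += np.exp(2j * np.pi / r * (n @ B @ n + p @ n))
    return total

def gauss_rhs(B, p, r):
    # exp(pi*i*sigma/4) * (r/2)^(N/2) / |det B|^(1/2) times the sum over Z^N / 2B Z^N
    # of exp(-pi*i*(r/2) * (a + p/r)^t B^{-1} (a + p/r)); B is assumed diagonal here
    N = len(B)
    eig = np.linalg.eigvalsh(B)
    sigma = int(np.sum(eig > 0) - np.sum(eig < 0))
    Binv = np.linalg.inv(B)
    total = 0j
    reps = itertools.product(*(range(2 * abs(int(B[i, i]))) for i in range(N)))
    for a in reps:
        v = np.array(a, dtype=float) + p / r
        total += np.exp(-1j * np.pi * r / 2 * (v @ Binv @ v))
    return np.exp(1j * np.pi * sigma / 4) * (r / 2) ** (N / 2) / np.sqrt(abs(np.linalg.det(B))) * total

B = np.diag([3.0, -2.0])
p = np.array([1.0, 0.0])
for r in (2, 3, 5):
    print(r, gauss_lhs(B, p, r), gauss_rhs(B, p, r))   # the two values agree up to rounding error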
http://arxiv.org/abs/2407.13168v1
20240718051524
SciCode: A Research Coding Benchmark Curated by Scientists
[ "Minyang Tian", "Luyu Gao", "Shizhuo Dylan Zhang", "Xinan Chen", "Cunwei Fan", "Xuefei Guo", "Roland Haas", "Pan Ji", "Kittithat Krongchon", "Yao Li", "Shengyan Liu", "Di Luo", "Yutao Ma", "Hao Tong", "Kha Trinh", "Chenyu Tian", "Zihan Wang", "Bohao Wu", "Yanyu Xiong", "Shengzhu Yin", "Minhui Zhu", "Kilian Lieret", "Yanxin Lu", "Genglin Liu", "Yufeng Du", "Tianhua Tao", "Ofir Press", "Jamie Callan", "Eliu Huerta", "Hao Peng" ]
cs.AI
[ "cs.AI", "cs.CL" ]
§ ABSTRACT Since language models (LMs) now outperform average humans on many challenging tasks, it is becoming increasingly difficult to develop challenging, high-quality, and realistic evaluations. We address this by examining LM capabilities to generate code for solving real scientific research problems. Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields, including mathematics, physics, chemistry, biology, and materials science, we create a scientist-curated coding benchmark, SciCode. The problems naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems, and it offers optional descriptions specifying useful scientific background information and scientist-annotated gold-standard solutions and test cases for evaluation. Claude3.5-Sonnet, the best-performing model among those tested, can solve only 4.6% of the problems in the most realistic setting. We believe that SciCode demonstrates both contemporary LMs' progress towards realizing helpful scientific assistants and sheds light on the building and evaluation of scientific AI in the future. [Data, code, and leaderboard available at <https://scicode-bench.github.io/>] § INTRODUCTION The development of evaluations in tandem with language models (LMs) has substantially contributed to the rapid advancement of these models <cit.>. Because LMs now surpass the performance of most humans except domain experts, evaluating them becomes increasingly challenging. Many established benchmarks struggle to keep pace with the advancements in LM performance and have quickly become saturated <cit.>, leading to discrepancies between the models' perceived and actual capabilities <cit.>. As a consequence, researchers are developing synthetic challenging benchmarks, often involving models in the construction of evaluation instances. For example, some subsample instances from existing benchmarks that cannot be solved by current models <cit.>, or augment them to construct more challenging evaluations <cit.>. However, it is unclear whether such efforts accurately reflect real-world applications and the models' performance in practical scenarios. Realistic, high-quality, and challenging evaluations are crucial for the continued advancement of LMs. We therefore propose SciCode, a benchmark containing code generation problems drawn from diverse natural science fields, including mathematics, physics, chemistry, biology, and materials science. SciCode contains 80 main problems, each decomposed into multiple subproblems, totaling 338. Each problem provides the scientific background when necessary as well as detailed instructions. To solve it, the model must implement multiple Python functions—one for each subproblem—and then integrate them into a complete solution for the main problem. For every main problem and subproblem, SciCode provides gold-standard solutions and multiple test cases, facilitating easy and reliable automatic evaluation. Figure <ref> shows an example. SciCode aims to overcome the challenges of current LM evaluations by introducing the following value-added design choices.
* Intentional focus on natural science fields, such as computational mechanics, quantum information and computing, quantum chemistry, ecology, and molecular modeling. * Abundant high-quality data not usually made available to current LMs <cit.>, enabling a more robust evaluation of the models' ability to generalize to less familiar scenarios. * High annotation quality, with all problems, including gold solutions and test cases, annotated, revised, and verified by at least two senior researchers (PhD student level or above) in represented scientific domains. * Realistic and current problems sourced from scientists' everyday research tasks or influential papers. This ensures 's relevance to real-world applications. * Problems curated to have zero overlap with publicly available datasets to prevent potential data contamination.[In addition to addressing data contamination, we find that most problems are too challenging for even the best models. Therefore, we often simplify problem settings and provide more background during revisions.] * Problems that test LM's comprehensive and all-around capabilities. Solving the main problems requires deep scientific background knowledge, strong analytical capabilities to decompose complex problems into simpler ones and correctly solve each, and the ability to integrate partial into complete solutions. * Opportunities to evaluate various model capabilities in varied setups by toggling options, e.g., whether to provide scientific background information or to condition on gold or generated solutions to previous subproblems. Further, we believe that the availability of this well-designed benchmark can motivate research into developing new AI methods for accelerating scientific research, an area that has thus far benefited less from recent LM advancements partly due to a lack of commercial incentive. We use to evaluate state-of-the-art proprietary and open models. Results show that is a very challenging benchmark: in the most realist evaluation setup, Claude3.5-Sonnet, the best-performing model in our experiments, can solve only 4.6% of the main problems, while other strong models, such as Claude3-Opus and GPT-4o, solve only 1.5%. Similarly, the best open source model under test, Deepseek-Coder-v2, can only solve 3.1% of the problems. The other open-source LLMs under test (e.g., Llama-3-70B-Instruct and Mixtral-8x22B-Inst) fail to complete any problems despite successfully solving some subproblems correctly. Our analysis finds that all models can benefit from the background knowledge written by our scientist annotators, achieving substantial and consistent improvements. However, even with background, the best model can solve only 12.3% of the main problems. § This section examines the design principles and annotation process we chose for , describing: research-level coding problems from various natural science fields (<ref>); how we decomposed main problems into multiple, simpler subproblems (<ref>); our design choices for the annotation process (<ref>); and various evaluation setups that facilitates (<ref>). §.§ Challenging and Realistic Scientific Coding Problems sources challenging and realistic research-level coding problems across natural science disciplines, including mathematics, physics, chemistry, biology, and material science, covering a total of 16 subfields. This diverse selection ensures a comprehensive representation of the natural sciences, where extensive code development is essential. 
SciCode is mainly drawn from the scripts that scientists use in their everyday workflow. Many of these have been used in one or more publications, demonstrating their robustness and correctness. However, they are primarily for internal use, which means that they are seldomly open-sourced and often poorly annotated. Consequently, unlike general-domain coding problems, natural science problems have less exposure in most current LMs' training data. This offers a unique opportunity to evaluate the models' ability to generalize to less familiar contexts. In total, consists of 80 main problems, decomposed into 338 subproblems. Table <ref> lists the subfields covers along with the number of main problems in each. Each main problem has a median of 3 subproblems, with a maximum of 15. We reserve 15 main problems (50 subproblems) for the development split and use the remaining 65 main problems (288 subproblems) as the test data. The 15 main development problems cover all five domains; over half of these have less than 4 subproblems each for easier few-shot settings. §.§ A Main Problem with Multiple Subproblems In their everyday workflow, scientists often decompose a complex problem into multiple smaller, more manageable parts. They may write relatively independent code for each part and then integrate these parts into a complete solution to the main problem. In developing our dataset, we leverage this natural and intuitive structure and further standardize our dataset by instructing the scientists to adhere to the following format. Main Problem A main problem is a primary task that needs to be addressed. It defines the overall objective of the research and guides the direction of the study. The main problem encompasses all subproblems, with detailed instructions on required inputs and expected outputs articulated in a docstring block. With the main problem defined, scientists have sufficient guidance to solve the task. Subproblem Decomposition Subproblems focus on questions derived from the main problem. They decompose the complex main problem into smaller, more manageable parts, enabling a more detailed and systematic investigation. Detailed docstrings for each subproblem describe the required input and expected output, ensuring clarity and aiding in accurate code generation. This structured decomposition simplifies problem-solving and facilitates a more granular evaluation of the models' scientific coding capabilities. §.§ Data Annotation This process consists of three main stages: (1) Problem selection: Deciding on question topics related to the research domain (<ref>). (2) Evaluation design: Designing both numerical and domain-specific test cases to ensure the problem's validity (<ref>). (3) Problem validation: Iterating on the problems through three rounds of revisions to further enhance question design (<ref>). We now examine the design choices for each stage. §.§.§ Problem Selection Throughout the research project cycle, various coding needs arise, such as data processing, fitting, and plotting. To use , scientists select the problems that require intense scientific knowledge and reasoning to optimally test LM's science capability. This approach ensures that both the breadth and depth of frontier research are addressed. We focus on: * Numerical methods. Analytical forms are usually impossible to achieve for very complicated systems. 
Therefore, scientists must derive numerical models and algorithms that describe physical phenomena <cit.>, chemical reactions <cit.>, biological systems <cit.>, or statistical behaviors<cit.>. * Simulation of systems. In fields of natural science, scientists write code to simulate systems and processes. These simulations are based on theoretical principles and empirical data, reflecting deep scientific insights into the system being studied  <cit.>. * Scientific calculation. During data post-processing and visualization, scientists often perform many transformations based on scientific formulas to get physical observable of interest instead of raw experimental data  <cit.>. We also include several research problems that are built upon or reproduce methods used in Nobel Prize-winning studies to highlight current trends in scientific research: the self-consistent field (SCF) method for density functional theory (DFT) calculations <cit.> (The Nobel Prize in Chemistry 1998), the PMNS matrix for neutrino oscillation in matter <cit.> (The Nobel Prize in Physics 2015), the Haldane model for the anomalous quantum Hall effect <cit.> (The Nobel Prize in Physics 2016), optical tweezer <cit.> simulations for microscopic thermodynamics <cit.> (The Nobel Prize in Physics 2018), and the replica method for spin glasses <cit.> (The Nobel Prize in Physics 2021). §.§.§ Evaluation Design To facilitate evaluation, we have scientist annotators use only widely adopted and well-documented packages such as NumPy, SciPy, and SymPy when writing the solution code for their problems, as shown in Figure <ref>. Our test suite involves two key components. (1) Numerical tests list input-output pairs to check if the generated code produces the same outputs as ground truth. (2) Domain-specific test cases, introduced as an additional stage, evaluate whether model-generated solutions align with scientists' practical needs and further ensure the correctness and applicability of each solution within its specific field. These tests are extracted from real scientific workflows: scientists must design domain-specific test cases to verify code accuracy by reproducing results published in academic papers or matching analytical solutions derived from theoretical models. For example, we reproduce the phase transition at around kT/J=2.269 for the 2D square Ising model problem <cit.>, derive the surface plasmon mode in a 2D layered electron gas <cit.>, verify the ballistic Brownian motion in optical tweezer <cit.>, etc. By doing so, we validate that the code not only functions correctly but also accurately represents the underlying scientific problem. Overall, the evaluation design aims to balance the fidelity of the scientific problem with the practicality of the evaluation process, ensuring that the solutions are both accurate and accessible. §.§.§ Problem Validation for Quality Control We conduct three rounds of validation and revision for each problem: (1) In-domain scientist validation. At least two scientists in the same research domain cross-check the question design, solution code, and domain-specific test cases, providing detailed feedback. The scientists who design the workflows iterate on them based on this feedback to ensure the problems are scientifically accurate. (2) Out-of-domain scientist validation. One scientist from a different domain reviews the question design to ensure it is clear and that the information provided is precise and sufficient to solve the problem (e.g., all scientific constants are given). 
This helps to identify any assumptions that might be unclear to those outside the immediate field of study. (3) GPT-4 validation. GPT-4 assists with the final review round. The previously validated sub-questions are input to GPT-4 to generate code solutions. Scientists perform error analysis for the generated solutions and redesign the numerical test cases if necessary to prevent false positives. Based on the code solutions from GPT-4, the scientist may also revise the entire workflow a third time to addressany potential ambiguity. This multi-round validation approach ensures that the problems are scientifically rigorous, clear, and unambiguous, facilitating accurate and effective evaluation. §.§ Various Types of Evaluations offers unique opportunities for evaluating LMs across diverse settings, comprehensively testing their coding capabilities. * Without vs. with scientific background. A subproblem can provide scientific background knowledge to guide LMs in solving the coding task. 's scientific background for each problem offers two modes of evaluation. (1) When models are evaluated without scientific background, it tests their inherent scientific knowledge and reasoning along with their coding capability. (2) For models not designed to handle scientific problems, background provides the necessary knowledge and reasoning steps to solve the problems, shifting the evaluation's focus towards the models' coding and instruction-following capabilities. As we show in the experiments (<ref>), all models substantially improve performance when background is provided, indicating their lack of knowledge and reasoning capability in these natural science fields. * Gold vs. generated solutions to previous subproblems. Each main problem in factorizes into multiple subproblems, and solutions to previous problems provide vital information for solving the current one. enables use of gold or generated solutions to previous subproblems. Gold solutions focus only on the current problem, while generated ones provide a more realistic evaluation setting and are more challenging due to error accumulation. * Main vs. subproblem levels. (1) The LM is considered to have successfully solved the main problem when all subproblem solutions are correct and the integrated solution to the main problem is correct. (2) Alternatively, can assess at a subproblem level, evaluating a subproblem independently of other subproblems or its main problem. Among these setups, evaluation without background carrying over generated solutions to previous problems is the closest to scientists' real use case of LMs. Therefore, we dub this the standard setup. Our experiments indicate that this setup is very challenging for even the best models available today: Claude3.5-Sonnet, the best performing one, can solve only 4.6% of the main problems. To make useful for evaluating less capable or developing models, we also consider less challenging settings in our experiments. § EXPERIMENTS Prompts. We evaluate our model using zero-shot prompts. We keep the prompts general and design different ones for different evaluation setups only to inform the model about the tasks. We keep prompts the same across models and fields, and they contain the model's main and sub-problem instructions and code for previous subproblems. We also instruct the model to recall useful knowledge when gold background knowledge is not provided. <ref> presents an example. 
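To make the execution-based protocol concrete, the sketch below shows one way a generated subproblem solution could be checked against gold input-output pairs and aggregated into a pass@1 rate. The function names, the use of numpy.allclose, and the toy test case are our own illustrative choices and are not taken from the benchmark's actual harness.

import numpy as np

def run_candidate(candidate_source, func_name, test_cases, rtol=1e-6, atol=1e-8):
    # Execute model-generated code and compare its outputs with the gold outputs.
    # candidate_source : str, generated Python code for one subproblem
    # func_name        : str, name of the function the docstring asks for
    # test_cases       : list of (args, expected_output) pairs
    namespace = {}
    try:
        exec(candidate_source, namespace)   # run the generated definition
        fn = namespace[func_name]
    except Exception:
        return False                        # code does not even define the requested function
    for args, expected in test_cases:
        try:
            out = fn(*args)
        except Exception:
            return False
        if not np.allclose(out, expected, rtol=rtol, atol=atol):
            return False
    return True

def pass_at_1(results):
    # fraction of problems whose single generated solution passes all tests
    return sum(results) / len(results)

# toy usage with a hypothetical subproblem "add two arrays elementwise"
gold_tests = [((np.array([1.0, 2.0]), np.array([3.0, 4.0])), np.array([4.0, 6.0]))]
generated = "import numpy as np\ndef add_arrays(a, b):\n    return a + b\n"
print(run_candidate(generated, "add_arrays", gold_tests))   # True
print(pass_at_1([True, False, True, True]))                 # 0.75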
§.§ Evaluated Models Since is a challenging benchmark, we mainly consider strong language models.[For instance, CodeLlama-7B-Instruct achieves only 0.4% pass@1 in our main setting.] * GPT-4o <cit.>: An optimized version of GPT-4  <cit.> by OpenAI with multi-modal capability. * GPT-4-Turbo: A faster and more cost-effective variant of GPT-4 <cit.>. We use the `gpt-4-turbo-2024-04-09' snapshot. * Claude3.5-Sonnet  <cit.>: The latest model from the Claude 3.5 family from Anthropic. * Claude3-Opus  <cit.>: The most capable model from the Claude 3 family from Anthropic. * Claude3-Sonnet  <cit.>: The second most capable model from the Claude 3 family. * Gemini 1.5 Pro  <cit.>: A model from the Gemini 1.5 family by Google and the largest with open access at the time of writing. * Llama-3-70B-Instruct <cit.>: The instruction-tuned version of the largest available model from the Llama-3 family. * Mixtral-8x22B-Instruct <cit.>: The instruction-tuned version of Mistral AI's largest publicly accessible Mixture-of-Expert Model. * Deepseek-Coder-v2 <cit.>: Mixture-of-Experts (MoE) code language model continue pre-trained on DeepSeek-V2 * Qwen2-72B-Instruct <cit.>: The largest instruction-tuned Qwen-2 model. §.§ Main Results <ref> presents results under the standard setup.[ Without background and carrying over generated subproblem solutions. See <ref> for a more detailed discussion. ] For the easier subproblem-level evaluation, the state-of-the-art models we test solve 14%-26% of the subproblems. Among them, Claude3.5-Sonnet achieves the best performance, with a 26.0% pass@1 rate. However, all models perform much worse on the more realistic and challenging main problem evaluation. Claude3.5-Sonnet still performs the best in this setting, but with only a 4.6% pass@1 rate. These results show that is a difficult benchmark for current LMs. Consistent with our observations on proprietary models, open-weight LMs under test also showed their lack of capabilities in solving any main problem despite being able to solve a number of sub-problems correctly. §.§ Additional Results with Other Evaluation Settings Providing gold scientific background knowledge. <ref> presents results when background text authored by scientists is provided to the LMs and generated solutions to previous subproblems are used. This setting evaluates both the models' capabilities to faithfully follow the instructions provided in the background as well as their code-generation performance. The Δ columns indicate performance differences compared to the standard setup. All models substantially improve performance for both subproblem and main problem evaluations when given scientific background knowledge. For the subproblem evaluation, Claude3.5-Sonnet and GPT-4o perform the best, both with a 35.4% pass@1 rate. GPT-4-Turbo benefits the most from the provided scientific background and reasoning with an increase of 10.8 %. Open models improve less compared to proprietary models which might indicate weaker Instruction following capability. Interestingly, the comparison between Llama-3-70B-Instruct and Mixtral-8x22B-Instruct reveals a trend that differs from the standard setup: Llama-3-70B-Instruct benefits more from the scientific background knowledge and reaches the performance of Mixtral-8x22B-Instruct in this setting. For the main problem evaluation, the trend remains similar to the standard setup. Claude3.5-Sonnet performs best, with a 12.3% pass@1 rate, followed closely by GPT-4o and GPT-4-Turbo at 9.2%. 
GPT-4o, GPT-4-Turbo, and Claude3.5-Sonnet improve most from background content, at 7.7%. Nonetheless, all models still fall short of satisfactory performance even with the background knowledge provided. This reaffirms that is challenging even when focusing on code generation rather than testing the models' scientific knowledge. With gold subproblem solutions. <ref> plots the subproblem pass@1 rates conditioning on various numbers of previous subproblems and their gold solutions. Background knowledge is not provided. The intuition behind this analysis is that later steps can leverage gold solutions from previous steps to gain a richer understanding of the problem. Instructions and solutions from earlier steps serve as in-context demonstrations, enabling the model to rely less on its instruction-following capability. By focusing on later steps, we can more precisely assess the models' inherent capabilities. Overall, all three models show similar trends, with their performance generally improving as they condition on more gold solutions from previous steps. However, there is a notable exception when conditioning on 7 previous gold subproblem solutions. Additionally, performance starts to decline when models condition on more than 9 previous solutions, possibly due to the increased difficulty of managing long contexts. § RELATED WORK Language models for code. Code has long been an active field of research, and code LMs have co-evolved with foundation LMs since the era of BERT  <cit.>. Earlier works include CodeBert  <cit.> and CodeT5  <cit.>, while Codex  <cit.> arguably kick-started the LLM era for code-generation models. Since Codex, the field has experienced rapid growth in quantity and quality of large code generation models, including specially trained models like Codegen <cit.>, StarCoder models  <cit.>, and generalist models with code adapation  <cit.> such as CodeLlama  <cit.>, CodeQwen  <cit.>, and DeepSeek-Coder  <cit.>. As code generation gains more attention and becomes increasingly useful, contemporary generalist models often include non-trivial coding capabilities  <cit.>. Evaluating code generation. Before the emergence of very capable code synthesis models, when most models struggled to produce executable code, datasets like CoNaLa typically included n-gram-based metrics <cit.>. Soon after model capabilities improved, execution-based evaluation gained in popularity <cit.>. While n-gram or general text-based evaluation still exists, we opted to omit them from SciCode due to obvious limitations of surface form matching in scientific coding. Code generation benchmarks now take various forms. For simple function completion, MBPP <cit.> and HumanEval <cit.> are two widely used benchmarks that contain basic programming questions, mainly evaluating LMs' ability to turn natural language instructions into Python programs. Other benchmarks assess the models' competence in real-world programming scenarios, such as writing data science code <cit.>, repository-level code completion <cit.>, and more complex tasks in real-world software engineering <cit.>. Though our work is similar to MTPB <cit.> in terms of using a multi-turn setup, our subproblem instructions correspond to a high-level task, while theirs correspond to specific code actions (e.g., replace X with Y in the string). Language models for science. Scientific tasks are complex due to their demands for reasoning and knowledge. 
However, recent advances in general and specialized language models have revolutionized the processing of text and other data modalities, such as molecules and proteins, in scientific fields. Galactica <cit.>, a general-purpose scientific model, can perform tasks like citation prediction, scientific reasoning, document generation, and molecular property prediction. Many models focus on a single domain or task, like math (e.g., Minerva <cit.> and Deepseek-Math <cit.>), protein structure prediction (e.g., ESM-2 <cit.>), medical reasoning (e.g., Med-PaLM <cit.>, BioGPT <cit.>), and others. § CONCLUSION We introduce SciCode, a scientific research benchmark curated by professional natural scientists. We designed SciCode for scientific problem evaluation and collected problems representing 16 diverse domains. By assessing SciCode with ten contemporary state-of-the-art AI models, we demonstrated that our benchmark is within reach but remains very challenging. We believe SciCode will serve as a helpful guideline for building future code language models for varied scientific applications. § APPENDIX §.§ Prompt §.§ Python libraries used in SciCode §.§ Full Problem Example §.§.§ Example Main Problem §.§.§ Example Subproblems §.§.§ Example Domain Specific Test Cases Both the k-space and sweeping grid sizes are set to very rough values to make the computation faster; feel free to increase them for higher accuracy. At zero on-site energy, the Chern number is 1 for ϕ > 0, and the Chern number is -1 for ϕ < 0. For the complementary plots <ref>, we can see that these phase diagrams are similar to the one in the original paper: Fig. 2 in Haldane, F. D. M. (1988). To achieve a better match, decrease all grid sizes. Compare the following three test cases. We find that the phase diagram is independent of the value of t_1 and of the ratio t_2/t_1, which is consistent with our expectations.
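The Haldane-model test described in this appendix can be reproduced with a short standalone script. The sketch below computes the Chern number of the lower band with the discretized Berry curvature (Fukui-Hatsugai-Suzuki) method; the lattice conventions, parameter values and function names are our own choices, so the overall sign may differ from the convention used in the problem, but the two signs of ϕ give Chern numbers of magnitude one and opposite sign, and a large on-site energy drives the model to the trivial phase, as described above.

import numpy as np

# primitive lattice vectors of the honeycomb lattice (nearest-neighbour distance 1)
A1 = np.array([np.sqrt(3) / 2, 1.5])
A2 = np.array([-np.sqrt(3) / 2, 1.5])
G = 2 * np.pi * np.linalg.inv(np.array([A1, A2])).T   # rows are the reciprocal vectors

def haldane_bloch(s1, s2, t1=1.0, t2=0.1, phi=np.pi / 2, M=0.0):
    # Bloch Hamiltonian in a gauge that is exactly periodic on the Brillouin-zone torus
    k = s1 * G[0] + s2 * G[1]
    v = [A1, A2 - A1, -A2]                    # oriented next-nearest-neighbour vectors
    h_ab = t1 * (1 + np.exp(-1j * k @ A1) + np.exp(-1j * k @ A2))
    h_aa = 2 * t2 * sum(np.cos(k @ vi - phi) for vi in v) + M
    h_bb = 2 * t2 * sum(np.cos(k @ vi + phi) for vi in v) - M
    return np.array([[h_aa, h_ab], [np.conj(h_ab), h_bb]])

def chern_number(phi, M=0.0, N=40, **kw):
    # Chern number of the lower band via the Fukui-Hatsugai-Suzuki lattice method
    u = np.empty((N, N, 2), dtype=complex)
    for m in range(N):
        for n in range(N):
            _, vecs = np.linalg.eigh(haldane_bloch(m / N, n / N, phi=phi, M=M, **kw))
            u[m, n] = vecs[:, 0]              # lower-band eigenvector
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    total = 0.0
    for m in range(N):
        for n in range(N):
            m1, n1 = (m + 1) % N, (n + 1) % N
            plaq = (link(u[m, n], u[m1, n]) * link(u[m1, n], u[m1, n1])
                    * np.conj(link(u[m, n1], u[m1, n1])) * np.conj(link(u[m, n], u[m, n1])))
            total += np.angle(plaq)
    return total / (2 * np.pi)

print(round(float(chern_number(phi=+np.pi / 2))))          # magnitude one; sign depends on conventions
print(round(float(chern_number(phi=-np.pi / 2))))          # the opposite sign
print(round(float(chern_number(phi=+np.pi / 2, M=1.0))))   # 0 once |M| exceeds 3*sqrt(3)*t2*|sin(phi)|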
http://arxiv.org/abs/2407.13460v1
20240718123546
SA-DVAE: Improving Zero-Shot Skeleton-Based Action Recognition by Disentangled Variational Autoencoders
[ "Sheng-Wei Li", "Zi-Xiang Wei", "Wei-Jie Chen", "Yi-Hsin Yu", "Chih-Yuan Yang", "Jane Yung-jen Hsu" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Semantic Alignment via Disentangled Variational Autoencoders S.-W. Li et al. Graduate Institute of Networking and Multimedia, National Taiwan University Department of Computer Science and Information Engineering, National Taiwan University Department of Artificial Intelligence, Chang Gung University Artificial Intelligence Research Center, Chang Gung University {r11944004,r12922147,r12922051,r12922220,yjhsu}@csie.ntu.edu.tw, cyyang@cgu.edu.tw SA-DVAE: Improving Zero-Shot Skeleton-Based Action Recognition by Disentangled Variational Autoencoders Sheng-Wei Li10009-0002-9091-8036Zi-Xiang Wei20009-0002-8214-3226Wei-Jie Chen20009-0001-0557-8106Yi-Hsin Yu20009-0008-6707-8589Chih-Yuan Yang3,40000-0002-8989-501XJane Yung-jen Hsu2,30000-0002-2408-4603 July 22, 2024 =================================================================================================================================================================================================================== § ABSTRACT Existing zero-shot skeleton-based action recognition methods utilize projection networks to learn a shared latent space of skeleton features and semantic embeddings. The inherent imbalance in action recognition datasets, characterized by variable skeleton sequences yet constant class labels, presents significant challenges for alignment. To address the imbalance, we propose SA-DVAE—Semantic Alignment via Disentangled Variational Autoencoders, a method that first adopts feature disentanglement to separate skeleton features into two independent parts—one is semantic-related and another is irrelevant—to better align skeleton and semantic features. We implement this idea via a pair of modality-specific variational autoencoders coupled with a total correction penalty. We conduct experiments on three benchmark datasets: NTU RGB+D, NTU RGB+D 120 and PKU-MMD, and our experimental results show that SA-DAVE produces improved performance over existing methods. The code is available at <https://github.com/pha123661/SA-DVAE>. § INTRODUCTION Action recognition is a long-standing active research area because it is challenging and has a wide range of applications like surveillance, monitoring, and human-computer interfaces. Based on input data types, there are several lines of studies on human action recognition: image-based, video-based, depth-based, and skeleton-based. In this paper, we focus on the skeleton-based action recognition, which is enabled by the advance in pose estimation <cit.> and sensor <cit.> technologies, and has emerged as a viable alternative to video-based action recognition due to its resilience to variations in appearance and background. Some existing skeleton-based action recognition methods already achieve remarkable performance on large-scale action recognition datasets <cit.> through supervised learning, but labeling data is expensive and time-consuming. For the cases where training data are difficult to obtain or prevented by privacy issues, zero-shot learning (ZSL) offers an alternative solution by recognizing unseen actions through supporting information such as the names, attributes, or descriptions of the unseen classes. Therefore, zero-shot learning has multiple types of input data and aims to learn an effective way of dealing with those data representations. For skeleton-based zero-shot action recognition, several methods have been proposed to align skeleton features and text features in the same space. 
However, to the best of our knowledge, all existing methods assume that the group of skeleton sequences are well captured and highly consistent so their ideas mainly focus on how to semantically optimize text representation. After carefully examining the source videos in two widely used benchmark datasets NTU RGB+D and PKU-MMD, we found the assumption is questionable. We observe that for some labels, the camera positions and actors' action differences do bring in significant noise. To address this observation, we seek an effective way to deal with the problem. Inspired by an existing ZSL method <cit.> which shows semantic-irrelevant features can be separated from semantic-related ones, we propose SA-DVAE for skeleton-based action recognition. SA-DVAE tackles the generalization problem by disentangling the skeleton latent feature space into two components: a semantic-related term and a semantic-irrelevant term as shown in <ref>. This enables the model to learn more robust and generalizable visual embeddings by focusing solely on the semantic-related term for action recognition. In addition, SA-DVAE implements a learned total correlation penalty that encourages independence between the two factorized latent features and minimizes the shared information captured by the two representations. This penalty is realized by an adversarial discriminator that aims to estimate the lower bound of the total correlation between the factorized latent features. The contributions of our paper are as follows: * We propose a novel SA-DVAE method. By disentangling the latent feature space into semantic-related and irrelevant terms, the model addresses the asymmetry existing in action recognition datasets and improves the generalization capability. * We leverage an adversarial total correlation penalty to encourage independence between the two factorized latent features. * We conduct extensive experiments that show SA-DVAE achieves state-of-the-art performance on the ZSL and generalized zero-shot learning (GZSL) benchmarks of the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD datasets. § RELATED WORK The proposed SA-DAVE method covers two research fields: zero-shot learning and action recognition, and it uses feature disentanglement to deal with skeleton data noise. Here we discuss the most related research reports in the literature. §.§.§ Skeleton-Based Zero-Shot Action Recognition. ZSL aims to train a model under the condition that some classes are unseen during training. The more challenging GZSL expands the task to classify both seen and unseen classes during testing <cit.>. ZSL relies on semantic information to bridge the gap between seen and unseen classes. Existing methods address the skeleton and text zero-shot action recognition problem by constructing a shared space for both modalities. ReViSE <cit.> learns autoencoders for each modality and aligns them by minimizing the maximum mean discrepancy loss between the latent spaces. Building on the concept of feature generation, CADA-VAE <cit.> employs variational autoencoders (VAEs) for each modality, aligning the latent spaces through cross-modal reconstruction and minimizing the Wasserstein distance between the inference models. These methods then learn classifiers on the shared space to conduct classification. SynSE <cit.> and JPoSE <cit.> are two methods that leverage part-of-speech (PoS) information to improve the alignment between text descriptions and their corresponding visual representations. 
SynSE extends CADA-VAE by decomposing text descriptions by PoS tags, creating individual VAEs for each PoS label, and aligning them in the skeleton space. Similarly, JPoSE <cit.> learns multiple shared latent spaces for each PoS label using projection networks. JPoSE employs uni-modal triplet loss to maintain the neighborhood structure of each modality within the shared space and cross-modal triplet loss to align the two modalities. On the other hand, SMIE <cit.> focuses on maximizing mutual information between skeleton and text feature spaces, utilizing a Jensen-Shannon Divergence estimator trained with contrastive learning. It also considers temporal information in action sequences by promoting an increase in mutual information as more frames are observed. While JPoSE and SynSE demonstrate the benefits of incorporating PoS information, they rely heavily on it and require additional PoS tagging effort. Furthermore, the two methods neglect the inherent asymmetry between modalities, aligning semantic-related and irrelevant terms to the semantic features and missing the chance to improve recognition accuracy further. In contrast, our approach uses simple class labels without the need of PoS tags, and uses only semantic-related skeleton information to align text data. Feature Disentanglement in Generalized Zero-Shot Learning. Feature disentanglement refers to the process of separating the underlying factors of variation in data <cit.>. Because methods of zero-shot learning are sensitive to the quality of both visual and semantic features, feature disentanglement serves as an effective approach to scrutinize either visual or semantic features, as well as addressing the domain shift problem <cit.>, thereby generating more robust and generalized representations. SDGZSL <cit.> decomposes visual embeddings into semantic-consistent and semantic-unrelated components using shared class-level attributes, and learns an additional relation network to maximize compatibility between semantic-consistent representations and their corresponding semantic embeddings. This approach is motivated by the transfer of knowledge from intermediate semantics (e.g., class attributes) to unseen classes. In contrast, SA-DVAE addresses the inherent asymmetry between the text and skeleton modalities, enabling the direct use of text descriptions instead of relying on predefined class attributes. § METHODOLOGY We show the overall architecture of our method as <ref>, which consists of three main components: a) two modality-specific feature extractors, b) a cross-modal alignment module, and c) three classifiers for seen/unseen actions and their domains. The cross-modal alignment module learns a shared latent space via cross-modality reconstruction, where feature disentanglement is applied to prioritize the alignment of semantic-related information (z^r_x and z_y). To improve the effectiveness of the disentanglement, we use a discriminator as an adversarial total correlation penalty between the disentangled features. Problem Definition. Let 𝒟 be a skeleton-based action dataset consisting of a skeleton sequences set 𝒳 and a label set 𝒴, in which a label is a piece of text description. The 𝒳 is split into a seen and unseen subset 𝒳_s and 𝒳_u where we can only use 𝒳_s and 𝒴 to train a model to classify x ∈𝒳_u. By definition, there are two types of evaluation protocols. The GZSL one asks to predict the class of x among all classes 𝒴, and the ZSL only among 𝒴_u = {y_i : x_i ∈𝒳_u}. Cross-Modal Alignment Module. 
We train a skeleton representation model (Shift-GCN <cit.> or ST-GCN <cit.>, depending on experimental settings) on the seen classes using standard cross-entropy loss. This model extracts our skeleton features, denoted as f_x. We use a pre-trained language model (Sentence-BERT <cit.> or CLIP <cit.>) to extract our label's text features, denoted as f_y. Because f_x and f_y belong to two unrelated modalities, we train two modality-specific VAEs to adjust f_x and f_y for our recognition task and illustrate their data flow in <ref>. Our encoders E_x and E_y transform f_x and f_y into representations z_x and z_y in a shared latent space via the reparameterization trick <cit.>. To optimize the VAEs, we introduce a loss as the form of the Evidence Lower Bound ℒ = 𝔼_q_ϕ(z|f) [log p_θ(f|z)] - β D_ KL(q_ϕ(z|f) p_θ(z)), where β is a hyperparameter, f and z are the observed data and latent variables, the first term is the reconstruction error, and the second term is the Kullback-Leibler divergence between the approximate posterior q(z|f) and p(z). The hyperparameter β balances the quality of reconstruction with the alignment of the latent variables to a prior distribution <cit.>. We use multivariate Gaussian as the prior distribution. Feature Disentanglement. We observe that although two skeleton sequences belong to the same class (they share the same text description), their movement varies substantially due to stylistic factors such as actors' body shapes and movement ranges, and cameras' positions and view angles. To the best of our knowledge, existing methods never address this issue. For example, Zhou . <cit.> and Gupta  <cit.> neglect this issue and force f_x and f_y to be aligned. Therefore, we propose to tackle the problem of inherent asymmetry between the two modalities to improve the recognition performance. We design our skeleton encoder E_x as a two-head network, of which one head generates a semantic-related latent vector z^r_x and the other generates a semantic-irrelevant vector z^v_x. We assume each of z^r_x and z^v_x has its own multivariant normal distribution N(μ^r_x, Σ^r_x) and N(μ^v_x, Σ^v_x), and our text encoder E_y generates a latent feature z_y, which also has a multivariant normal distribution N(μ_y, Σ_y). Let z_x = z^v_x ⊕ z^r_x where ⊕ means concatenation. We define the losses for the VAEs as ℒ_x = 𝔼_q_ϕ(z_x|f_x) [log p_θ(f_x|z_x)] - β_x D_ KL(q_ϕ(z^r_x|f_x) || p_θ(z^r_x)) - β_x D_ KL(q_ϕ(z^v_x|f_x) || p_θ(z^v_x)), ℒ_y = 𝔼_q_ϕ(z_y|f_y) [log p_θ(f_y|z_y)] - β_y D_ KL(q_ϕ(z_y|f_y) || p_θ(z_y)), where β_x and β_y are hyperparameters, p_θ(z^r_x), p_θ(z^v_x), p_θ(f_x|z_x), p_θ(z_y), and p_θ(f_y|z_y) are the probabilities of their presumed distributions, q_ϕ(z_x|f_x), q_ϕ(z^r_x|f_x) and q_ϕ(z^v_x|f_x) are the probabilities calculated through our skeleton encoder E_x, and q_ϕ(z_y|f_y) is the one through our text encoder E_y. We set the overall VAE loss as ℒ_ VAE = ℒ_x + ℒ_y. To better understand our method, we present the t-SNE visualization of the semantic-related and semantic-irrelevant terms, z^r_x and z^v_x in <ref>. <Ref> displays the t-SNE results for z^r_x, showing clear class clusters that demonstrate effective disentanglement. In contrast, <Ref> shows the t-SNE results for z^v_x, where class separation is less distinct. This indicates that while our method effectively clusters related semantic features, the irrelevant features remain more dispersed as they contain instance-specific information. Cross-Alignment Loss. 
Because we want our latent text features z_y to align with semantic-related skeleton features z^r_x only, regardless of the semantic-irrelevant features z^v_x, we regulate them by setting up a cross-alignment loss ℒ_C = ‖ D_y(z^r_x) - f_y ‖_2^2 + ‖ D_x(z^v_x ⊕ z_y) - f_x ‖_2^2 to train our VAEs for skeleton and text respectively. This loss enforces skeleton features to be reconstructable from text features and vice versa. To reconstruct skeleton features from text features, z^v_x is employed to incorporate necessary style information to mitigate the information gap between the class label and the skeleton sequence. Adversarial Total Correlation Penalty. We expect the features z^r_x and z^v_x to be statistically independent, so we impose an adversarial total correlation penalty <cit.> on them. We train a discriminator D_T to predict the probability of a given latent skeleton vector z^v_x ⊕ z^r_x whether the z^v_x and z^r_x come from the same skeleton feature f_x. In the ideal case, D_T will return 1 if z^v_x and z^r_x are generated together, and 0 otherwise. To train D_T, we design a loss ℒ_T = log D_T(z_x) + log(1 - D_T(z_x)), where z_x is an altered feature vector. We create z̃_x as the following steps. From a batch of N training samples, our encoder E_x generates N pairs of z_x,i^v and z_x,i^r, i = 1 … N. We randomly permute the indices i of z_x,i^v but keep z_x,i^r unchanged, and then we concatenate them as z_x. D_T is trained to maximize L_T, while E_x is adversarially trained to minimize it. This training process encourages the encoder to generate latent representations that are independent. Combining the three losses, we set the overall loss ℒ = ℒ_ VAE + λ_1ℒ_C + λ_2ℒ_T, where we balance the three losses by hyperparameters λ_1 and λ_2. Seen, Unseen and Domain Classifier. Because there are two protocols, ZSL and GZSL, to evaluate a zero-shot recognition model, we use two different settings for the two protocols. For the ZSL protocol, we only need to predict the probabilities of classes 𝒴_u from a given skeleton sequence, so we propose a classifier C_u as a single-layer MLP (Multilayer Perception) with a softmax output layer yielding the probabilities to predict probabilities of classes 𝒴_u from z_y by 𝐩_u = C_u(z_y) = C_u(E_y(f_y)), where dim(𝐩_u) = |𝒴_u|. During inference and given an unseen skeleton feature f^u_x, we get z^u_x = E_x(f^u_x), separate z^u_x into z^v,u_x and z^r,u_x, and generate 𝐩_u = C_u(z^r,u_x) to predict its class as y_î and î = _i = 1,…,|𝒴_u| p^i_u, where p^i_u is the i-th probability value of 𝐩_u. For the GZSL protocol, we need to predict the probabilities of all classes in 𝒴 = 𝒴_u ∪𝒴_s where 𝒴_s = {y_i : x_i ∈𝒳_s}. We follow the same approach proposed by Gupta  <cit.> to use an additional class classifier C_s for seen classes and a domain classifier C_d to merge two arrays of probabilities. Gupta first apply Atzmon and Chechik's idea <cit.> to a skeleton-based action recognition problem and outperform the typical single-classifier approach. The advantage of using dual classifiers is reported in a review paper <cit.>. Our C_s is also a single-layer MLP with a softmax output layer like C_u, but it uses skeleton features f_x rather than latent features to produce probabilities 𝐩_s = C_s(f_x), where dim(𝐩_s) = |𝒴_s|. We train C_s and C_u first, and then we freeze their parameters to train C_d, which is a logistic regression with an input vector 𝐩'_s ⊕𝐩_u where 𝐩'_s is the temperature-tuned <cit.> top k-pooling result of 𝐩_s and the number k = dim(𝐩_u). 
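To make the training objective concrete, the following PyTorch sketch assembles the pieces described above: the two-head skeleton encoder E_x, the β-weighted KL terms of ℒ_x and ℒ_y, the cross-alignment loss ℒ_C, and the adversarial total-correlation penalty ℒ_T built from batch-permuted latent pairs. The feature and latent dimensions, the Gaussian (mean-squared-error) reconstruction terms, and the discriminator width are illustrative assumptions; only the structure of the losses follows the text, and D_T is in practice updated separately to maximize ℒ_T while the encoders minimize it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadSkeletonEncoder(nn.Module):
    """E_x: one head for the semantic-related latent z_x^r, one for the
    semantic-irrelevant latent z_x^v (each head outputs mean and log-variance)."""
    def __init__(self, feat_dim=256, latent_dim=64):
        super().__init__()
        self.head_r = nn.Linear(feat_dim, 2 * latent_dim)
        self.head_v = nn.Linear(feat_dim, 2 * latent_dim)

    def forward(self, f_x):
        return self.head_r(f_x).chunk(2, -1), self.head_v(f_x).chunk(2, -1)

def reparameterize(mu, logvar):
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def kl_term(mu, logvar):
    # D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), -1).mean()

class TCDiscriminator(nn.Module):
    """D_T: probability that a concatenated pair (z_x^v, z_x^r) was produced
    from the same skeleton feature f_x."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, z_v, z_r):
        return self.net(torch.cat([z_v, z_r], -1)).squeeze(-1)

def sa_dvae_losses(f_x, f_y, E_x, E_y, D_x, D_y, D_T,
                   beta_x=0.023, beta_y=0.011, lambda_1=1.0, lambda_2=1.0):
    """Overall objective L = L_VAE + lambda_1 * L_C + lambda_2 * L_T, written
    as a loss to be minimized by the VAEs (defaults for beta follow the
    supplementary hyperparameter search; lambda_2 is a placeholder)."""
    (mu_r, lv_r), (mu_v, lv_v) = E_x(f_x)
    z_r, z_v = reparameterize(mu_r, lv_r), reparameterize(mu_v, lv_v)
    mu_y, lv_y = E_y(f_y).chunk(2, -1)          # E_y assumed to output mean/logvar
    z_y = reparameterize(mu_y, lv_y)

    # L_VAE = L_x + L_y: reconstruction plus beta-weighted KL terms
    l_vae = (F.mse_loss(D_x(torch.cat([z_v, z_r], -1)), f_x)
             + beta_x * (kl_term(mu_r, lv_r) + kl_term(mu_v, lv_v))
             + F.mse_loss(D_y(z_y), f_y) + beta_y * kl_term(mu_y, lv_y))

    # L_C: text reconstructable from z_x^r, skeleton from z_x^v ⊕ z_y
    l_c = (F.mse_loss(D_y(z_r), f_y)
           + F.mse_loss(D_x(torch.cat([z_v, z_y], -1)), f_x))

    # L_T: altered pairs mix z_x^v of one sample with z_x^r of another sample;
    # D_T itself is updated separately (once every n_d VAE steps) to maximize this
    perm = torch.randperm(z_v.size(0), device=z_v.device)
    l_t = (torch.log(D_T(z_v, z_r) + 1e-8).mean()
           + torch.log(1.0 - D_T(z_v[perm], z_r) + 1e-8).mean())

    return l_vae + lambda_1 * l_c + lambda_2 * l_t
```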
C_d yields a probability value p_d of whether the source skeleton belongs to a seen class. We use the LBFGS algorithm <cit.> to train C_d and use it during inference to predict the probability of x as 𝐩(y|x) = C_d(𝐩'_s ⊕𝐩_u) 𝐩_s ⊕ (1-C_d(𝐩'_s ⊕𝐩_u)) 𝐩_u = p_d 𝐩_s ⊕ (1-p_d) 𝐩_u and decide the class of x as y_î and î = _i = 1,…,|𝒴| p^i, where p^i is the i-th probability value of 𝐩(y|x). § EXPERIMENTS Datasets. We conduct experiments on three datasets and show their statistics in Table <ref>. We adopt the cross-subject split, where half of the subjects are used for training and the other half for validation. We use NTU-60 and NTU-120 as synonyms for the NTU RGB+D and NTU RGB+D 120 datasets. Due to discrepancies in class labels between the official website[Official website: <https://rose1.ntu.edu.sg/dataset/actionRecognition/>] and the GitHub codebase[GitHub link: <https://github.com/shahroudy/NTURGB-D>] of NTU-60 and NTU-120 datasets (the label of class 18 is “put on glasses” in their website but “wear on glasses” in GitHub), we follow existing methods by using the class labels provided in their codebase. Implementation Details. We implement the discriminator D_T as a two-layer MLP with ReLU activation and a Sigmoid output layer, and the encoders E_x, E_y, decoders D_x, D_y, seen and unseen classifiers C_s, C_u as single-layer MLPs. During training, we alternatively train VAEs and D_T. We train VAEs first, and after training VAEs n_d times, we train D_T once. We use the LBFGS implementation from Scikit-learn <cit.> to train C_d and divide our training set into a validation seen set and a validation unseen set. As the training of C_d requires seen and unseen skeleton features (f^s_x, f^u_x), we re-train other components using the validation seen set and use the validation unseen set to provide unseen skeleton features to train C_d. Finally, the trained C_d is used to make inferences on the testing set. The number of classes in the validation unseen set is the same as the original unseen class set | 𝒴_u |. We use the cyclical annealing schedule <cit.> to train our VAEs because cyclical annealing mitigates the KL divergence vanishing problem. At the beginning of each epoch, we set the actual training hyperparameters λ'_2, β'_1, and β'_2 as 0 until we use one-third training samples. Thereafter, we progressively increase λ'_2, β'_1, and β'_2 to λ_2, β_x, and β_y based on the number of trained samples, e.g, λ'_2={[ 0 ;; 3/2(k/n-1/3)λ_2 ,; ]. where k and n are the index and total number of training samples in an epoch. We set λ_1 as 0 in our first epoch and 1 for all subsequent epochs. We conduct our experiments on a machine equipped with an Intel i7-13700 CPU, an NVIDIA RTX 3090 GPU, and 32GB RAM. We implement our method using PyTorch 2.1.0, scikit-learn 1.3.2, and scipy 1.11.3. It takes 4.6 hours to train our model for a 55/5 split of the NTU RGB+D 60 dataset, and 8.7 hours for a 110/10 split of the NTU RGB+D 120 dataset. We determine the hyperparameters through random search, as listed in Tables <ref> and <ref>. The hyperparameter search space is detailed in Supplementary Materials Section A. Comparison with SOTA methods. We compare our method with several state-of-the-art zero-shot action recognition methods using the setting shown in <Ref> and report their results in Tables <ref> and <ref>. We use the same feature extractors and class splits as the one used by SynSE, and the only difference lies in the network architecture. 
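Two of the implementation details above are compact enough to sketch directly: the within-epoch annealing of λ'_2 (and analogously β'_x and β'_y), and the inference-time fusion of seen and unseen class probabilities through the domain probability p_d. The snippet assumes NumPy arrays and that 𝐩_s has already been temperature-tuned and top-k pooled into 𝐩'_s before p_d is computed; it is a sketch rather than the exact implementation.

```python
import numpy as np

def annealed_weight(k, n, target):
    """Within-epoch schedule: the weight stays 0 for the first third of the
    training samples, then rises as 3/2 * (k/n - 1/3) * target, reaching
    `target` at the end of the epoch (k: sample index, n: samples per epoch)."""
    ratio = k / n
    return 0.0 if ratio <= 1.0 / 3.0 else 1.5 * (ratio - 1.0 / 3.0) * target

def gzsl_predict(p_s, p_u, p_d):
    """GZSL fusion: p(y|x) = p_d * p_s  ⊕  (1 - p_d) * p_u, followed by an
    argmax over the concatenated seen+unseen class probabilities."""
    p = np.concatenate([p_d * p_s, (1.0 - p_d) * p_u])
    return int(np.argmax(p)), p
```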
The results show that SA-DVAE works well, in particular for unseen classes. Furthermore, for the more challenging GZSL task, SA-DVAE even improves more over existing methods. On the NTU RGB+D 60 dataset, SA-DVAE improves the accuracy of (+7.25% and +6.23%) in the GZSL protocol, greater than the (+4.39% and +1.2%) in the ZSL one. Random Class Splits and Improved Feature Extractors. The setting of class splits is crucial for accuracy calculation and Tables <ref> and <ref> only show results of a few predefined splits, which can not infer the overall performance on a complete dataset. Thus, we follow Zhou 's approach <cit.> to randomly select several unseen classes as a new split, repeat it three times, and report the average performance. In addition, we use improved skeleton feature extractor ST-GCN <cit.> and text extractor CLIP <cit.>, chosen for their broad applicability and robust performance across different domains. We also tested different feature extractors, which can be found in Supplementary Materials Section B. <Ref> shows our settings and Tables <ref> and <ref> show the results, where naive alignment means that we disable D_T and remove the extra head for z^v_x, and FD means that we disable D_T. The results show that both feature disentanglement and total correlation penalty contribute to accuracy improvements, and feature disentanglement is the major contributor, , +12.95% on NTU-60 compared to naive alignment in Table <ref>. The adversarial total correlation penalty (TC) slightly reduces the accuracy for seen classes but significantly improves unseen and overall accuracy. This is because TC enhances the embedding quality by reducing feature redundancy, making the domain classifier less biased towards seen classes. Consequently leading to improved generalization. The results in <ref> highlight this trade-off, where the improved harmonic mean indicates a more balanced and robust performance across both seen and unseen classes. From our three runs of the random-split experiment on the NTU-60 dataset (average results is shown in Table <ref>), we pick the most challenging run and show its per-class accuracy in <ref> and the t-SNE visualization of skeleton features (f_x) in <ref>. The labels of classes 16 and 17 are “wear a shoe” and “take off a shoe” and their movements are acted as a person sitting on a chair who bends down her upper body and stretches her arm to touch her shoe. The skeleton sequences of the two classes are highly similar so are their extracted features. In <ref>, samples of classes 16 and 17 are overlapped, and naive alignment generates poor accuracy on class 16. Similarly, naive alignment generates near-zero accuracy on classes 9 and 29. Since both classes 9 and 16 share similar skeleton sequences and were unseen during training, their features appear highly similar. This similarity leads naive alignment to misclassify samples belonging to class 9 as class 16. We can see significant improvements with the addition of FD and TC. These techniques allow the model to prioritize semantic-related information and improve classification performance. Impact of Replacing Skeleton Feature f_x with Semantic-Related Latent Vector z^r_x in Seen Classifier We replace the input skeleton feature f_x of the seen classifier with the disentangled semantic-related latent vector z^r_x under the random-split setting listed in <ref> and report results in <ref>. 
Notably, since the semantic-irrelevant terms also contain information that is beneficial for classification but not necessary related to the text descriptions, f_x retains both semantic-related and irrelevant details. This dual retention enhances performance compared to z^r_x, which focuses solely on semantic-related information. We incorporate zero-shot learning and action recognition techniques, including pose canonicalization <cit.> and enhanced action descriptions <cit.>, with additional experimental results in Supplementary Materials Section C. § CONCLUSION ZSL study aims to leverage knowledge from one domain to help solve problems in another domain and has been proven useful for action recognition tasks, in particular for 3D skeleton data because it is expensive and labor-consuming to build accurately labeled datasets. Although there are several existing methods in the literature, they never address the asymmetry problem between skeleton data and text description. In this paper, we propose SA-DVAE, a cross-modality alignment model using the feature disentanglement approach to differentiate skeleton data into two independent representations, the semantic-related and irrelevant ones. Along with an adversarial discriminator to enhance the feature disentanglement, our experiments show that the proposed method generates better performance over existing methods on three benchmark datasets in both ZSL and GZSL protocols. § ACKNOWLEDGMENTS This research was supported by the National Science and Technology Council of Taiwan under grant number 111-2622-8-002-028. The authors would like to thank the NSTC for its generous support. splncs04 § HYPERPARAMETER SEARCH SPACE AND SENSITIVITY We show our search space and initial values in <ref>. We first fix No. 2∼6 and randomly sample No. 1 in uniform distribution 5 times. We choose the one generating the highest GZSL harmonic mean on the validation set. Then we fix No. 1 and randomly sample No. 2∼6 100 times. <Ref> shows the influence of β_x and β_y on the experiments of Tables 6 and 7 in the main paper. As reported in Table 5 in the main paper, we use β_x as 0.023 and β_y as 0.011 because they perform best on the validation set. We leave out β_x and β_y ≥ 0.2 because their performance is low. § FEATURE EXTRACTORS We show an example by re-organizing Tables 6 and 7 in the main paper as <ref>. Their dataset, splits, and hyperparameters are the same and the only difference lies in feature extractors. Experimental results show that extractors matter and our proposed ST-GCN+CLIP works best. § COMBINING WITH EXISTING METHODS To potentialy improve our performance, we combine our method with pose canonicalization on skeleton data <cit.> and enhanced class descriptions by a large language model proposed in SMIE <cit.>. We will discuss the details and experimental results in the following sections. §.§ Pose Canonicalization on Skeleton Data The difference in the forward direction of the skeleton data introduces additional noise into the training process. Therefore, we implement the method proposed by Holden  <cit.> to canonicalize the skeleton data by rotating them so that they face the same direction. We compute the cross product between the vertical axis and the average vector of the left and right shoulders and hips to determine the new forward direction of the body. We then apply a rotation matrix to canonicalize the pose. Tables <ref> and <ref> present the experimental results under random split settings listed in Table 5 of the main paper. 
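As a rough illustration of this canonicalization step, the sketch below rotates a skeleton sequence about the vertical axis so that the body's forward direction, taken as the cross product of the vertical axis with the averaged shoulder/hip across-body vector following Holden et al., points along +z. The joint indices, the choice of y as the vertical axis, and applying a single rotation per sequence are assumptions made for the example.

```python
import numpy as np

def canonicalize_pose(joints, l_sh, r_sh, l_hip, r_hip):
    """Rotate a skeleton sequence so that it faces a common direction.
    joints: (T, J, 3) array; l_sh, r_sh, l_hip, r_hip: joint indices
    (dataset-dependent, assumed here)."""
    # Across-body vector from shoulders and hips (scale is irrelevant after
    # normalization), averaged over the whole sequence.
    across = ((joints[:, l_sh] - joints[:, r_sh])
              + (joints[:, l_hip] - joints[:, r_hip])).mean(axis=0)
    up = np.array([0.0, 1.0, 0.0])          # assumed vertical axis
    forward = np.cross(up, across)
    forward /= np.linalg.norm(forward) + 1e-8

    # Yaw angle that maps `forward` onto the +z axis, then rotate about y.
    theta = np.arctan2(forward[0], forward[2])
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return joints @ R.T
```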
In zero-shot settings, we observe that canonicalization of the skeleton data has little effect on model performance. In generalized zero-shot settings, we note a slight decrease in both seen and unseen accuracies. We hypothesize that canonicalization reduces the variation in the skeleton dataset; this reduction in diversity limits the range of examples the model encounters during training, which may ultimately impair its ability to generalize effectively. §.§ Enhanced Class Descriptions by a Large Language Model (LLM) Zhou <cit.> propose to use an LLM to augment class descriptions with richer action-related information, and we directly compare our method with theirs by using their augmented descriptions. We report results under the same random-split setting, list our hyperparameters in Table <ref>, and present results in Tables <ref> and <ref>. The results show that SA-DVAE outperforms SMIE when both use the augmented descriptions in the ZSL and GZSL protocols, and that LLM-augmented descriptions significantly improve unseen accuracy while marginally decreasing seen accuracy. This is consistent with the pattern observed in the ablation study, indicating that the models achieve a more balanced prediction with minimal bias toward seen or unseen classes.
http://arxiv.org/abs/2407.12579v1
20240717140410
The Fabrication of Reality and Fantasy: Scene Generation with LLM-Assisted Prompt Interpretation
[ "Yi Yao", "Chan-Feng Hsu", "Jhe-Hao Lin", "Hongxia Xie", "Terence Lin", "Yi-Ning Huang", "Hong-Han Shuai", "Wen-Huang Cheng" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Scene Generation with LLM-Assisted Prompt Interpretation Y. Yao et al. National Yang Ming Chiao Tung University, Taiwan, {leo81005.ee10, cfhsu311510211.ee11, hhshuai}@nycu.edu.tw Jilin University, China, National Taiwan University, Taiwan {wenhuang@ntu.edu.tw} *These authors contributed equally to this workfootnote The Fabrication of Reality and Fantasy: Scene Generation with LLM-Assisted Prompt Interpretation Yi Yao1*0000-0001-8227-5662 Chan-Feng Hsu1* Jhe-Hao Lin1 Hongxia Xie20000-0002-5652-4327 Terence Lin1 Yi-Ning Huang1 Hong-Han Shuai1 0000-0003-2216-077XWen-Huang Cheng3 0000-0002-4662-7875 Received February 2024; Accepted July 2024 ================================================================================================================================================================================================= § ABSTRACT In spite of recent advancements in text-to-image generation, limitations persist in handling complex and imaginative prompts due to the restricted diversity and complexity of training data. This work explores how diffusion models can generate images from prompts requiring artistic creativity or specialized knowledge. We introduce the Realistic-Fantasy Benchmark (RFBench), a novel evaluation framework blending realistic and fantastical scenarios. To address these challenges, we propose the Realistic-Fantasy Network (RFNet), a training-free approach integrating diffusion models with LLMs. Extensive human evaluations and GPT-based compositional assessments demonstrate our approach's superiority over state-of-the-art methods. Our code and dataset is available at <https://leo81005.github.io/Reality-and-Fantasy/>. § INTRODUCTION Considerable advancements have been made in the field of text-to-image generation, especially with the introduction of diffusion models, e.g., Stable Diffusion <cit.>, GLIDE <cit.>, DALLE2 <cit.> and Imagen <cit.>. These models exhibit remarkable proficiency in generating diverse and high-fidelity images based on natural language prompts. However, despite their impressive capabilities, diffusion models occasionally face challenges in accurately interpreting complex prompts that demand a deep understanding or specialized knowledge <cit.>. This limitation becomes particularly apparent with creative and abstract prompts, which require a nuanced grasp of context and subtleties. For example, in scenarios where the prompt involves unconventional scenarios like “a rat is hunting a lion”, traditional diffusion models might not accurately represent the intended dynamics or relationships between entities (as shown in Fig.<ref>). A significant obstacle for traditional diffusion models in creating abstract images is the bias present within their training datasets <cit.>. These datasets often do not include images of scenarios that defy conventional reality, such as a mouse hunting a lion. Traditionally, mitigating these challenges has required costly data collection and complex filtering, as well as model retraining or fine-tuning <cit.>. Research costs significantly increase due to these long and labor-intensive processes. Moreover, fine-tuning neural networks and model editing can lead to catastrophic forgetting and overall performance degradation <cit.>. Recently, it has been demonstrated that utilizing Large Language Models (LLMs) to aid in the generation process ensures the production of accurate details <cit.>. Directly integrating these models during the generation phase signifies a more efficient strategy. 
In this paper, we want to address the question: how can generative models be improved to better capture imaginative and abstract concepts in images? In response to the existing gap in benchmarks for abstract and creative text-to-image synthesis, our work introduces a novel benchmark, Realistic-Fantasy Benchmark (RFBench). This benchmark is designed to evaluate both Realistic & Analytical and Creativity & Imagination interpretations in generated images. The Realistic & Analytical category includes four sub-categories, focusing on the models' ability to adhere to realism and analytical depth. Images are generated in response to prompts that require not only precision in science but also cultural sensitivity and nuanced expression of symbolic meaning. On the other hand, Creativity & Imagination, is segmented into five specific sub-categories based on attribute distinctions, challenges models to navigate the complexities of generating images from prompts that necessitate a high degree of creativity and abstract reasoning. To empower diffusion models with the capability to generate imaginative and abstract images, we introduce an innovative training-free approach Realistic-Fantasy Network (RFNet) that integrates diffusion models with LLMs. Given a prompt describing the desired image, the LLM generates an image layout, which includes bounding boxes for main subjects and background elements, along with textual details to support logic or interpret scientific data. To refine image generation, we further propose the Semantic Alignment Assessment (SAA), ensuring consistency with the scene's objects. This crucial step improves the final image quality. The enhanced details direct the diffusion model, enabling precise object placement through guidance constraints. Our method, leveraging pre-trained models, is compatible with independently trained LLMs and diffusion models, eliminating the need for parameter adjustments. In summary, our key contributions are: * We have collected a novel Realistic-Fantasy Benchmark (RFBench), which is a meticulously curated benchmark that stands out for its rich diversity of scenarios. It challenges and extends the boundaries of generative model creativity and inference capabilities, establishing a new standard for assessing imaginative data processing. * To empower diffusion models with the capability to generate imaginative and abstract images, we introduce an innovative training-free approach, Realistic-Fantasy Network (RFNet), that integrates diffusion models with LLMs. * Through our proposed RFBench, extensive human evaluations coupled with GPT-based compositional assessments have demonstrated our approach's superiority over other state-of-the-art methods. § RELATED WORK §.§ Text-guided diffusion models Diffusion models, utilizing stochastic differential equations, have emerged as effective tools for generating realistic images <cit.>. DALL-E 2 <cit.> pioneered the approach of converting textual descriptions into joint image-text embeddings with the aid of CLIP <cit.>. GLIDE <cit.> demonstrated that classifier-free guidance <cit.> is favored by human evaluators over CLIP guidance for generating images based on text descriptions. Imagen <cit.> follows GLIDE but uses pretrained text encoder instead, further reducing negligible computation burden to the online training of the text-to-image diffusion prior, and can improve sample quality significantly by simply scaling the text encoder. 
Although text-to-image capabilities have seen significant development, there has been limited focus on generating images involving high levels of creativity, scientific principles, cultural references, and symbolic meanings. The primary reason is the data bias in the training dataset <cit.>. Several studies <cit.> have investigated the impact of data bias on diffusion models, particularly in the context of Text-to-Image generation. Perera et al.<cit.> investigates the bias exhibited by diffusion models across various attributes in face generation. Luccioni et al.<cit.> evaluates bias levels in text-to-image systems regarding gender and ethnicity. In this work, we introduce a new task: reality and fantasy scene generation. Recognizing the absence of a dedicated evaluation framework for such tasks, we introduce a new benchmark, the Realistic-Fantasy Benchmark (RFBench), which blends scenarios from both realistic and fantastical realms. §.§ LLMs for image generation Recently, researchers have explored using LLMs to provide guidance or auxiliary information for text-to-image generation systems <cit.>. In LMD <cit.>, foreground objects are identified using LLMs, and then images are generated based on the layout determined by the diffusion model. Phung et al. <cit.> proposes attention-refocusing losses to constrain the generated objects on their assigned boxes generated by LLMs. LVD <cit.> requires LLMs to generate continuous spatial constraints to accomplish video generation. Besides using LLMs to generate spatial layout from user prompts, some studies <cit.> investigate integrating LLMs directly into the image generation pipeline. SLD <cit.> integrates open-vocabulary object detection with LLMs to enhance image editing. RPG <cit.> integrates LLMs in a closed-loop manner, allowing generated images to continuously improve through LLMs feedback, and uses Chain-of-Thought <cit.> to further improve generation quality. As a result of these developments, LLMs can be incorporated into pipelines for the generation of images. In this work, we use LLMs to uncover and elaborate upon the complexities embedded within complex and abstract prompts. § OUR PROPOSED REALISTIC-FANTASY BENCHMARK In this study, we explore how diffusion models can effectively process and generate imagery from prompts that pose significant challenges due to their reliance on creative thinking or specialized knowledge. Recognizing the absence of a dedicated evaluation framework for such tasks, we introduce a new benchmark, the Realistic-Fantasy Benchmark (RFBench), which blends scenarios from both realistic and fantastical realms. Benchmark Collection. We focus on two main categories, each with distinct subcategories, Realistic & Analytical and Creativity & Imagination, totaling nine subcategories. Each sub-category is meticulously crafted with around 25 text prompts, leading to an aggregate of 229 unique compositional text prompts designed to test the models against both conventional and unprecedented creative challenges. The collection process, outlined in <ref>, employs a hybrid method combining in-context learning and predefined rules, leveraging powerful language models such as ChatGPT and Bard for diverse text prompts creation. By alternating between these models, we achieve a diverse set of responses, capitalizing on the distinct advantages of each LLM. It boosts the variety and complexity of prompts while reducing the reliance on manual labeling. Realistic & Analytical Category. 
There are four sub-categories: Scientific and Empirical Reasoning, Cultural and Temporal Awareness, Factual or Literal Descriptions, and Conceptual and Metaphorical Thinking (details are shown in the upper part of <ref>). These sub-categories are anchored in real-world contexts, emphasizing logical reasoning, accurate data, and an understanding of cultural or historical contexts. They contain scientific exploration, realistic descriptions, and culturally symbolic narratives. This demands that the models not only draw from an extensive knowledge pool but also demonstrate an ability to grasp and articulate underlying concepts. Creativity & Imagination Category. It consists of five sub-categories: Common Objects in Unusual Contexts, Imaginative Scenarios, Counterfactual Scenarios, Role Reversal or Conflicting, and Anthropomorphic Scenarios (details are shown in the lower part of <ref>). This evaluation focuses on the model's capacity to innovatively repurpose familiar objects, attribute human-like characteristics to inanimate objects, and generate novel environments for everyday items. This category tests the model's out-of-the-box thinking and imaginative capabilities. § OUR PROPOSED REALISTIC-FANTASY NETWORK In this section, we propose a Realistic-Fantasy Network (RFNet) for the benchmark scenario we proposed in the previous section. To thoroughly interpret the details from the input prompt, we divide our approach into two stages, as shown in <ref>. In the first stage, we transform the initial input prompt into a refined version specifically tailored for image generation by LLMs. In the second stage, we utilize a diffusion model through a two-step process to generate outputs with extraordinary details. §.§ LLM-Driven Detail Synthesis In the first stage of our methodology, we concentrate on utilizing LLMs to uncover and elaborate on the intricacies embedded within the user's input prompt. This process involves specifying task requirements to more accurately define the task and incorporating in-context learning to enhance understanding for LLMs. The enriched response from the LLM encompasses additional information, such as layout, detailed descriptions, background scenes, and negative prompts [One detailed sample can be found in our supplementary material.]. This step is crucial as it aims to mitigate the primary challenge we seek to overcome: the training data bias inherent in current diffusion models. By leveraging the pre-trained LLM for logical reasoning and conjecture, we aim to compensate for the gaps left by these biases, ensuring a more accurate and coherent image generation process. §.§ Semantic Alignment Assessment As we proceed with generating images using the diffusion model using the details generated by the previous step, there is a critical challenge: the description lists generated by LLMs for one object usually overlook the relationships among them. For example, interpretations of “a lion” could range from being “unaware and asleep” to “frightened and trying to escape.” Although both depictions are valid, descriptions such as “unaware” and “trying to escape” can lead to conflicting interpretations, thus complicating the image generation process. To overcome this challenge, we introduce the Semantic Alignment Assessment (SAA) module. This module calculates the relevance between different object vectors, thereby selecting the candidate description that best fits the current scenario. 
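One way to realize this selection, sketched below under the assumption that every candidate description has been embedded by some text encoder, is to keep for each object the candidate whose mean cosine similarity to the other objects' candidate descriptions is highest; the greedy per-object choice is an illustrative simplification rather than the exact procedure.

```python
import numpy as np

def select_compatible_descriptions(candidate_embs):
    """candidate_embs[i]: (n_i, d) array of embeddings for object i's candidate
    descriptions. Returns the index of the selected description per object."""
    normed = [e / (np.linalg.norm(e, axis=1, keepdims=True) + 1e-12)
              for e in candidate_embs]
    chosen = []
    for i, cand in enumerate(normed):
        others = [normed[j] for j in range(len(normed)) if j != i]
        if not others:                       # single-object prompt: keep the first
            chosen.append(0)
            continue
        sims = cand @ np.concatenate(others).T   # pairwise cosine similarities
        chosen.append(int(sims.mean(axis=1).argmax()))
    return chosen
```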
By conducting the cosine similarity among different descriptions, we can navigate the complexities introduced by the LLM's output, selecting the most compatible details for the diffusion model. This step is crucial for maintaining the coherence and accuracy of the generated images, highlighting our novel approach to mitigating the risk of conflicting descriptions. Through this module, we ensure textual precision and compatibility, and provide clear, consistent instructions for the subsequent diffusion model to generate visually coherent representations. §.§ Comprehensive Image Synthesis In the second stage of our proposed RFNet, following LMD <cit.>, we propose a two-step generation process for imaginative and abstract concepts. As shown in <ref>, in the first step, we focus on generating each foreground object with comprehensive details. In the second step, we integrate the objects generated in the first step into corresponding background derived from the initial prompt. This structured approach ensures a cohesive integration of detailed foreground objects into a contextually relevant background, enhancing the overall effectiveness of our framework. Step 1: In-Depth Object Generation. We re-organize the SAA description lists to consider both the layout of specific objects and their descriptions. By concatenating the background prompt with the target object and its relevant descriptions, we set up the input prompt as “[background prompt] with [target object], [descriptions]” (e.g., “A grassland scene with a rat, roaring, with big mouth and sharp teeth, leap out at...”). Following LMD <cit.>, the initial latent representation for each target object is fixed to facilitate the fusion of various objects into a cohesive background scene. During generation, the diffusion model uses cross-attention layers to manage the influence of textual information on the visual output, allowing precise control over image details. The cross-attention map's constraint function integrates objects within the bounding box by enhancing cross-attention inside the box for accurate object representation while minimizing it outside the box. This function guides the update of the noised latent vector during denoising to ensure spatial conditions match predefined specifications. The constraint function is defined as: ℒ_obj(A, i, v) = [1-Topk_u(A_uv·m_i)] + [Topk_u(A_uv· (1-m_i))], where m_i denotes the binary mask of the bounding box associated with object i, performing element-wise multiplication over the cross-attention map A. The cross-attention map A is aggregated by summing the contributions across all layers. For object i, the operation Topk_u computes the mean of the top-k values within the spatial dimension u. Prior to each denoising step, the latent is refined by minimizing the constraint function: z_t^'← z_t - α·∇_z_t∑_v ∈ Vℒ(A, i, v), z_t-1← Diffusion Step(z_t^', 𝒫^(i)), where α denotes the hyperparameter that controls the magnitude of the gradient update, and V contains the set of token indices for the target object in the prompt. As in the diffusion step, the updated z_t^' along with the modified prompt 𝒫^(i) of object i, are served as the inputs to the diffusion model. After the generation, the cross-attention map derived from each target object is then converted into a saliency mask. This mask is applied to the latent representation of the target object through element-wise multiplication at each step of the denoising process. 
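A minimal sketch of this guided denoising step is given below. The helpers get_cross_attention and diffusion_step, the top-k size, and the (H, W, tokens) attention layout are placeholders standing in for the model internals; only the form of the constraint and of the latent refinement follows the equations above.

```python
import torch

def topk_mean(x, k):
    # mean of the k largest cross-attention values over the spatial dimension u
    return x.flatten().topk(k).values.mean()

def loss_obj(attn, box_mask, token_ids, k=10):
    """Constraint: concentrate each object token's attention inside its
    bounding-box mask and suppress it outside.  attn: (H, W, n_tokens)."""
    loss = attn.new_zeros(())
    for v in token_ids:
        A_v = attn[..., v]
        loss = loss + (1.0 - topk_mean(A_v * box_mask, k)) \
                    + topk_mean(A_v * (1.0 - box_mask), k)
    return loss

def guided_denoise_step(z_t, prompt, box_mask, token_ids, alpha,
                        get_cross_attention, diffusion_step):
    """z_t' <- z_t - alpha * grad of the constraint, followed by one ordinary
    denoising step with the object-specific prompt P^(i)."""
    z = z_t.detach().requires_grad_(True)
    attn = get_cross_attention(z, prompt)        # aggregated over layers
    grad, = torch.autograd.grad(loss_obj(attn, box_mask, token_ids), z)
    return diffusion_step(z_t - alpha * grad, prompt)   # z_{t-1}
```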
Both the cross-attention map and the masked latent representation of the target object between each denoising step are transmitted to the next step for background integration. Step 2: Seamless Background Integration. This step involves fusing the generated results with the background while preserving the high-quality generation achieved in the first step. To accomplish this, we first replace the generated latent z_t^' with the masking latent z_t^(masked, i) for each object i: z_t^'← Replacement(z_t^', z_t^(masked, i)), ∀ i. Following the approach established in Step 1, the initial latent representation z_T^' is also fixed with the initial latent of each target object in the first step. This alignment ensures a seamless integration of the object into the background. The replacement approach is performed within timestep rT, where r∈[0,1], reserving principles from LMD <cit.>. According to LMD, the diffusion model determines the position of the objects in the early denoising steps, while adjusting details in the later steps. This helps us to preserve exceptional control over the layout. Furthermore, we incorporate a specialized constraint function designed to enhance the integration of generated objects with their background, distinguished by two key components: guidance constraint and suppression constraint. As shown in <ref>, the guidance constraint is engineered to reduce the cross-attention within each bounding box relative to the original object's attention. With the purpose of seamlessly integrating with the detailed object generated in the first step. Conversely, the suppression constraint works to minimize cross-attention outside the bounding box, thereby mitigating interference among multiple objects when processed together, as illustrated in <ref>. These constraint functions mark a departure from conventional methods that predominantly use loss to fix the layout. ℒ_bg(A^', A^(i), i, v) = β·∑_u| (A_uv^' - A_uv^(i)) ·m_i |_guidance constraint + γ·Topk_u(A_uv^'· (1-m_i))_suppression constraint, ∀ i, where A^' represents the cross-attention map post the substitution procedure and A^(i) is the cross-attention map of object i extracted from the diffusion step in <ref>. The hyperparameters β and γ indicate the intensity of guidance constraint and suppression constraint, respectively. It is noteworthy that adjusting β amplifies the importance of the guidance loss within the overall loss function. This adjustment ensures the latent representation is precisely aligned with the object generated in the initial phase, thereby guaranteeing a high level of accuracy and consistency in the integration process. Upon completion of all denoising steps, the latent z_0 is fed into the decoder to produce the final image. Our strategy focuses on maintaining the integrity and coherence of the foreground objects generated, emphasizing the preservation of their quality and interaction with the background. By doing so, the generated visual elements are fidelity-aware and contextually appropriate. § EXPERIMENTS §.§ Implementation Details Experimental Setup. In this work, we choose versions 1.4 and 2.1 of Stable Diffusion <cit.> as the text-to-image baseline model. The number of denoising steps is set as 50 with a fixed guidance scale of 7.5, and the synthetic images are in a resolution of 512 × 512. All experiments are conducted on the NVIDIA RTX 3090 GPU with 24 GB memory. Evaluation Metrics. We generate 32 images for each text prompt in RFBench for automatic evaluation. 
We selected the following two metrics: (1) GPT4-CLIP [We adopt GPT4-CLIP due to BLIP-CLIP's <cit.> limitations in accurately capturing image meanings through generated captions.]. By utilizing GPT4 for captioning and calculating CLIP text-text cosine similarity, GPT4-CLIP ensures a more precise reflection of the intended meanings between images and prompts. (2) GPT4Score. Inspired by  <cit.>, we adopt GPT4Score to evaluate image alignment with text prompts, where GPT4 rates images on a 0-100 scale based on their fidelity to the prompts, enabling precise assessment of model-generated visuals against specified criteria[The widely recognized metric, CLIPScore <cit.> exhibits limitations in evaluating our task. For detailed examples, please see the supplementary materials.]. Comparison with Existing Methods. We benchmark our proposed RFNet against various open-source scene generation methods, including Stable Diffusion <cit.>, Attend and Excite <cit.>, LMD <cit.>, BoxDiff <cit.>, MultiDiffusion <cit.>, and SDXL <cit.>. Notably, all methods, including ours, utilize Stable Diffusion 2.1 as the foundational model, ensuring a fair comparison. §.§ Quantitative Evaluation Evaluation on RFBench. As evidenced in <ref>, our approach significantly outperforms other methods for both Realistic & Analytical and Creativity & Imagination tasks. For Realistic & Analytical task, our method seamlessly integrates LLM-based insights, achieving a remarkable accuracy improvement. Unlike Attend-and-excite, which focuses on semantic guidance, our method ensures precise adherence to detailed and complex prompt requirements. For the Creativity & Imagination, which demands high degrees of creativity and abstract conceptualization, our method outperforms others by not only adhering to the imaginative aspects of prompts but also maintaining coherent structure and contexts. For instance, SDXL, while adept at high-resolution image synthesis, occasionally lacks in capturing the nuanced creativity intended in prompts; our method fills this gap effectively. Similarly, LMD, though enhancing prompt understanding through LLMs, sometimes struggles with the scientific reasoning required for Realistic& Analytical tasks. Notably, for Realistic & Analytical category, our approach shows a 61% performance increase over MultiDiffusion on GPT4Score. Meanwhile, in Creativity & Imagination task, we observe a substantial enhancement, outperforming Stable Diffusion by over 43%. In light of the above, our method is unique in its ability to bridge the gap between realistic reasoning and imagination, creating a new benchmark for text-to-image generation. r0.5 GPT4Score comparison with Imagen on DrawBench subset. Prompt Imagen Ours A bird scaring a scarecrow 0.069 0.275 A blue coloured pizza 0.425 0.125 A fish eating a pelican 0.000 0.000 A horse riding an astronaut 0.000 0.000 A panda making latte art 0.050 0.250 A pizza cooking an oven 0.700 0.831 A shark in the desert 0.194 0.713 An elephant under the sea 0.300 0.900 Hovering cow abducting aliens 0.025 0.144 Rainbow coloured penguin 0.394 0.519 Evaluation on DrawBench. We also evaluate our method on DrawBench <cit.>, a comprehensive and challenging benchmark for text-to-image models. Similar to us, DrawBench also includes some Creativity & Imagination prompts, and we evaluate our method with Imagen <cit.> on these prompts. As shown in <ref>, our approach significantly outperforms Imagen on most prompt settings, demonstrating the generalization ability of our model. 
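For reference, the GPT4-CLIP score described above reduces to a text–text cosine similarity between the input prompt and the caption GPT-4 produces for the generated image. The sketch below assumes the Hugging Face transformers CLIP implementation and that the caption has already been obtained from the GPT-4 API (the captioning call itself is not shown).

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def gpt4_clip_score(prompt, gpt4_caption):
    """Cosine similarity between the CLIP text embeddings of the prompt and of
    the caption GPT-4 generated for the image."""
    inputs = _tokenizer([prompt, gpt4_caption], padding=True, return_tensors="pt")
    feats = _model.get_text_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return float(feats[0] @ feats[1])
```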
§.§ Qualitative Evaluation In the qualitative comparison of text-guided image generation, we select some advanced baseline methods, including Attend-and-excite <cit.>, BoxDiff <cit.>, LMD <cit.>, MultiDiffusion <cit.>, and SDXL <cit.>. Attend-and-excite focuses on enhancing the semantic understanding of prompts through attention mechanisms, while BoxDiff introduces a novel approach to text-to-image synthesis with box-constrained diffusion without the need for explicit training. MultiDiffusion proposes a method for fusing multiple diffusion paths to achieve greater control over the image generation process, and SDXL aims at improving the capabilities of latent diffusion models for synthesizing high-resolution images. As shown in Fig. <ref>, our method, produces more precise editing results than the aforementioned methods. This is attributed to our In-Depth Object Generation and Seamless Background Integration strategy. It ensures outstanding fidelity in outcomes and flawlessly retains the semantic structure of the source image, highlighting our approach's superior capability in complex editing tasks. §.§ User study Through an extensive user study, we benchmarked our model against other methods to assess real human preferences for the generated images. Utilizing our newly proposed benchmark, the RFBench, we selected a diverse set of 27 prompts and generated six images per prompt to ensure a broad representation of the model's capabilities. Detailed feedbacks were collected from 120 participants, evaluating each image for visual quality and text prompt fidelity [Details of survey samples can be found in our supplementary material.]. These criteria are critical, which measure the image's quality and correctness of semantics in the synthesized image. Participants rated images on a scale from {1, 2, 3, 4, 5}, with scores normalized by dividing by 5. We calculated the average score across all images and participants. As illustrated in Fig. <ref>, participants uniformly favored our model's output, recognizing it as superior in both quality and alignment with the textual descriptions. §.§ Ablation study r0.5 Ablation studies on various components on RFBench. SAA guidance suppression GPT4Score 0.295 0.407 0.554 0.572 0.719 Impact of Various Constraints. To validate the impact of guidance constraint and suppression constraint, we perform ablation studies on different combinations of constraints, and the results are listed in <ref>. As shown, the baseline model (Stable Diffusion) achieves a 0.295 in terms of GPT4Score without any constraints. As guidance constraint and suppression constraint work complementary to restrict the cross-attention of objects inside the conditional boxes, a higher GPT4Score of 0.572 is achieved on the generated images. Both proposed constraints are effective in controlling image quality and layout of synthesized foreground objects. Impact of Semantic Alignment Assessment (SAA) Module. As aforementioned, using conflict descriptions in the denoise step may potentially affect image synthesis. The quantitative evaluation is presented in <ref>. In the absence of SAA, the model attains a GPT4Score of 0.572. Similarly, with SAA, the model reaches a GPT4Score of 0.719. This indicates a lack of consistency between the semantics generated in the images and the provided text prompts, leading to a reduction in image quality. It is important to note that the inclusion of SAA significantly enhances the clarity of the images obtained. One visual illustration can be found in  <ref>. 
§ CONCLUSION AND FUTURE WORK In this research, we present a novel challenge: generating scenes that blend reality and fantasy. We investigate the capacity of diffusion models to create visuals from prompts that demand high levels of creativity or specific knowledge. Noting the lack of a specific evaluation mechanism for such tasks, we establish the Realistic-Fantasy Benchmark (RFBench), combining elements of both realistic and imaginary scenarios. To address the task of generating realistic and fantastical scenes, we introduce a unique, training-free, two-tiered method, Realistic-Fantasy Network (RFNet), that combines diffusion models with large language models (LLMs). Our approach, evaluated through the RFBench using thorough human assessments and GPT-based compositional evaluations, has proven to be superior to existing cutting-edge techniques. Given the novelty of our task, future research could develop additional evaluation metrics beyond those used in this study, enhancing the assessment of generated scenes. splncs04 § LLM-DRIVEN DETAIL SYNTHESIS In this work, as described in the Sec. 4.1 of the main paper, we emphasized that by leveraging LLMs, we have significantly enriched responses to encompass additional information, such as layout, detailed descriptions, background scenes, and negative prompts. To achieve this, we facilitated an interaction with a LLM as shown in <ref>. The input given to the LLM, depicted on the left side of the figure, includes detailed task specifications and in-context learning examples to enhance the LLM's comprehension. The response from the LLM, shown on the right, is rich with details extracted from the prompt. Notably, the descriptions are particularly crucial for our work, serving as indispensable information for the later image generation stage. § QUALITATIVE COMPARISON ON RFBENCH In <ref> and <ref>, we present additional qualitative examples to showcase the exceptional outcomes of our work. <ref> shows the results under the category Realistic and Analytical, while <ref> shows the category Creativity and Imagination. Both figures demonstrate that our method achieves more accurate editing results compared to other approaches. § GPT4SCORE We follow the approach of T2I-Compbench, using Multimodal LLM (MLLM) to measure the similarity between generated images and input prompts. The key deviation lies in our observation that MiniGPT4, employed in T2I-Compbench, struggles to comprehend the surreal aspects of the images effectively. Therefore, we employ GPT4, a more powerful MLLM, as our new benchmarking model for evaluation, as mentioned in the Sec. 5.1 of the main paper. Specifically, given a generated image and its prompt, we input both the image and prompt into GPT4. Subsequently, we pose two questions to the model: “Describe the image” and “Predict the image-text alignment score”, the generated image is then assigned the final output score predicted by GPT4. For detailed prompts, please refer to the appendix of T2I-Compbench. § HUMAN EVALUATION In the human evaluation process, as introduced in the Sec. 5.4 of the main paper, we request annotators to assess the correspondence between a produced image and the textual prompt employed to create the image.  <ref> show the interfaces for human evaluation. The participants can choose a score from {1, 2, 3, 4, 5} and we normalize the scores by dividing them by 5. We then compute the average score across all images and all participants. 
§ HUMAN CORRELATION OF THE EVALUATION METRICS We adopt the methodology from T2I-Compbench, calculating Kendall's tau (τ) and Spearman's rho (ρ) to evaluate the ranking correlation between CLIPScore, GPT4Score, and human evaluation. For better comparison, the scores predicted by each evaluation metric are normalized to a 0-1 scale. The human correlation results are presented in <ref>. These results indicate that CLIPScore underperforms in both categories, as discussed in Section 5.1 of the main paper. This underperformance may be due to CLIP's approach to image understanding, which is often too simplistic. Nevertheless, both metrics encounter challenges with Creativity and Imagination, highlighting that although GPT4Score offers a broader understanding of images, accurately assessing creativity remains a difficult task for both. § VISUALIZATION OF ABLATION STUDY In addition to the quantitative results presented in our ablation study, we include visual examples to showcase the impact of the different components of our work. As shown in <ref>, removing either the guidance constraint or the suppression constraint causes the diffusion model to become muddled when dealing with multiple objects. Moreover, eliminating the SAA module leads to unclear outcomes for the generated objects. §.§ Effect of the hyperparameter β of the guidance constraint In our paper, we emphasize the critical role of the guidance constraint in integrating multiple objects into the background. To underscore its significance, we perform an additional ablation study on the hyperparameter β, which controls the strength of the guidance constraint. As shown in <ref>, we vary β from 0.1 to 30 to observe the effects on the generated results. The findings reveal that an optimal β value (e.g., setting it to 15) ensures objects are accurately aligned with the layout and are of high quality. In contrast, extreme β values, such as 0.1 or 30, disrupt the layout and diminish the overall quality of the generated images.
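The rank-correlation computation used for the human-correlation study above is a short SciPy call; note that min–max normalizing the scores to [0, 1] beforehand, as done for reporting, does not change either coefficient, since both are invariant under monotone rescaling.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def human_correlation(metric_scores, human_scores):
    """Kendall's tau and Spearman's rho between an automatic metric and the
    averaged human ratings for the same set of images."""
    m = np.asarray(metric_scores, dtype=float)
    h = np.asarray(human_scores, dtype=float)
    tau, _ = kendalltau(m, h)
    rho, _ = spearmanr(m, h)
    return tau, rho
```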
http://arxiv.org/abs/2407.13537v2
20240718140903
GlobalPointer: Large-Scale Plane Adjustment with Bi-Convex Relaxation
[ "Bangyan Liao", "Zhenjun Zhao", "Lu Chen", "Haoang Li", "Daniel Cremers", "Peidong Liu" ]
cs.CV
[ "cs.CV" ]
11(2.5, -0.1) This paper has been accepted for publication at the European Conference on Computer Vision (ECCV), 2024. Springer GlobalPointer B. Liao, Z. Zhao 1Zhejiang University 2Westlake University 3The Chinese University of Hong Kong 4Dreame Technology (Suzhou) 5Hong Kong University of Science and Technology (Guangzhou) 6Technical University of Munich GlobalPointer: Large-Scale Plane Adjustment with Bi-Convex Relaxation Bangyan Liao1,2⋆0009-0007-7739-4879 Zhenjun Zhao3⋆0009-0000-6551-4537 Lu Chen40009-0003-5779-4673 Haoang Li50000-0002-1576-9408 Daniel Cremers60000-0002-3079-7984Peidong Liu20000-0002-9767-6220 ======================================================================================================================================================================================================== ^⋆ Equal contribution. ^ Corresponding author: Peidong Liu (liupeidong@westlake.edu.cn). § ABSTRACT Plane adjustment (PA) is crucial for many 3D applications, involving simultaneous pose estimation and plane recovery. Despite recent advancements, it remains a challenging problem in the realm of multi-view point cloud registration. Current state-of-the-art methods can achieve globally optimal convergence only with good initialization. Furthermore, their high time complexity renders them impractical for large-scale problems. To address these challenges, we first exploit a novel optimization strategy termed Bi-Convex Relaxation, which decouples the original problem into two simpler sub-problems, reformulates each sub-problem using a convex relaxation technique, and alternately solves each one until the original problem converges. Building on this strategy, we propose two algorithmic variants for solving the plane adjustment problem, namely GlobalPointer and GlobalPointer++, based on point-to-plane and plane-to-plane errors, respectively. Extensive experiments on both synthetic and real datasets demonstrate that our method can perform large-scale plane adjustment with linear time complexity, larger convergence region, and robustness to poor initialization, while achieving similar accuracy as prior methods. The code is available at https://github.com/wu-cvgl/GlobalPointergithub.com/wu-cvgl/GlobalPointer. § INTRODUCTION With the widespread adoption of LiDAR technology in applications such as 3D reconstruction and LiDAR SLAM <cit.>, tasks involving localization and scene modeling have gained significant attention. As a fundamental building block, there is an increasing demand for a more efficient, robust, and accurate multi-frame point cloud registration algorithm for downstream tasks. Although the two-frame point cloud registration problem <cit.> has been extensively studied in the computer vision community for decades, transitioning to multi-frame scenarios introduces new challenges. Specifically, the relative poses obtained from pairwise point cloud registration often result in the well-known pose drift problem <cit.>. Many efforts, such as pose graph optimization <cit.> and rotation averaging <cit.>, have been made to address this challenge by introducing additional pose observations to average out pose errors. However, these methods often yield sub-optimal and biased results due to the complex pose noise models. Inspired by the success of bundle adjustment <cit.>, researchers have explored its replication in multi-frame point cloud registration, leading to the recent development of plane adjustment techniques <cit.>. 
Similar to bundle adjustment, plane adjustment simultaneously optimizes the camera poses and plane parameters (as the counterpart of 3D points in bundle adjustment) by minimizing point-to-plane <cit.> or plane-to-plane <cit.> errors, as shown in <ref>. Prior state-of-the-art methods explore both nonlinear least square methods <cit.> and spectral-based methods <cit.> to improve the performance of plane adjustment. While existing works on plane adjustment have shown promising results, these methods are limited to small-scale problems, and their accuracy depends on the quality of initialization. Achieving both large-scale and globally optimal plane adjustment remains challenging. To address these challenges, we exploit a novel optimization strategy termed Bi-Convex Relaxation for the large-scale plane adjustment problem. This strategy decouples the original complex formulation into two sub-problems. Each sub-problem is reformulated using convex relaxation techniques <cit.> and solved alternately until the overall problem converges. The advantages of our method are two-fold: 1) the convex sub-problem enlarges the convergence region, enhancing robustness to poor initialization; and 2) decoupling the high-dimensional problem into multiple low-dimensional sub-problems avoids solving intractable large Semidefinite Programming (SDP) problem, enabling efficient optimization for large-scale scenarios. Building upon this framework, we present two algorithmic variants, namely GlobalPointer and GlobalPointer++, based on point-to-plane and plane-to-plane errors, respectively. The former algorithm exhibits a larger convergence region and better stability, while the latter demonstrates superior efficiency. Although there is no theoretical globally optimal guarantee, exhaustive empirical evaluations in both synthetic and real experiments demonstrate that our method can provide an empirical globally optimal solution, as shown in <ref>. In summary, our contributions are as follows: * We exploit a novel optimization strategy termed Bi-Convex Relaxation, which combines the advantages of both alternating minimization <cit.> and convex relaxation techniques <cit.>; * Building on this novel optimization strategy, we develop two algorithmic variants for plane adjustment, namely GlobalPointer and GlobalPointer++, which depend on point-to-plane and plane-to-plane errors, respectively; * Extensive synthetic and real experimental evaluations demonstrate that our method can perform large-scale plane adjustment with linear time complexity and robustness to poor initialization, while achieving similar accuracy as prior methods. § RELATED WORK Kaess <cit.> exploiting plane-to-plane error to formulate the plane adjustment problem as a nonlinear least squares problem, which they solved efficiently using the Gauss-Newton optimization method. Hsiao  <cit.> extend this approach to a keyframe-based SLAM system. Zhou  <cit.> later demonstrate that using point-to-plane error in the energy function formulation improves stability and efficiency over plane-to-plane error formulations. They utilize the matrix factorization trick to solve the resulting nonlinear least squares problem, avoiding the accumulation of a large number of point clouds. This method has been widely adopted in plane-based SLAM systems <cit.>. While these formulations are very efficient for small-scale problems (, with a small number of camera poses and planes), they struggle in large-scale problems due to the inherent time complexity. 
To address this issue, Ferrer  <cit.> propose to use the minimum eigenvalue of the covariance matrix to obtain a surrogate energy function. This approach avoids the explicit plane updates and requires only eigenvalue decomposition at each iteration. Analytical gradients are derived, and a first-order solver is used for solving this problem. To further improve the convergence speed, they derive the analytical Hessian matrix, enabling more efficient optimization with a second-order solver <cit.>. Similarly, Liu derive a similar analytical Hessian matrix and further improve overall efficiency in <cit.>. While methods based on eigenvalue decomposition can avoid explicit plane parameter updates, constructing the Hessian matrix itself is time-consuming, and performing eigenvalue decomposition at each iteration further increases the computational burden. Recently, Zhou <cit.> proposes a novel approach to exploit implicit constraints of eigenvalues to derive analytical Hessian matrices and gradient vectors. However, direct implicit function differentiation would potentially pose a numerical stability issue. More recently, the convex relaxation technique has been increasingly employed to solve challenging non-convex optimization problems in computer vision tasks <cit.>. Nevertheless, the efficiency of SDP, as a core computational tool, is highly related to the size of the state matrix <cit.>, making it intractable for large-scale plane adjustment. To address these challenges, we propose to hybridize two techniques, , alternating minimization <cit.> and convex relaxation <cit.>, to decouple the original complex plane adjustment problem into two simpler sub-problems. This technique, termed Bi-Convex Relaxation, avoids solving high-dimensional SDP problem, significantly reduces time complexity, and enlarges the convergence region. Although there is no theoretical guarantee for the entire problem, our empirical results indicate that our formulation can converge to a global minimum. § NOTATIONS AND BACKGROUND §.§ Notation We use MATLAB notation to denote sub-matrix operations. Specifically, [𝐚;𝐛] denotes the vertical concatenation of vectors of 𝐚 and 𝐛, while [𝐚^⊤,𝐛^⊤] denotes the horizontal concatenation of the transpose vectors of 𝐚 and 𝐛. The operator ⊗ denotes the Kronecker product, and × denotes the cross product of vectors. The operator vec(.) denotes vertical vectorization. For the efficiency of the algorithm, we use a unit quaternion parameterized rotation matrix in some tasks. §.§ Point-to-Plane Error Given the rotation matrix 𝐑∈𝐒𝐎(3) and translation vector 𝐭∈ℝ^3 , we can transform each 3D point from local coordinates 𝐏^l∈ℝ^3 to world coordinates 𝐏^g ∈ℝ^3 as 𝐏^g = 𝐑𝐏^l + 𝐭. For each 𝐏^g, the corresponding plane parameters 𝐧∈ℝ^3, 𝐪∈ℝ^3 are defined as the normal vector and an arbitrary point on the plane, respectively. To simplify, we introduce an auxiliary scalar d=-𝐧^⊤𝐪. We then establish a point-to-plane distance as a metric for measuring the registration error, illustrated in <ref>, as 𝐧^⊤ (𝐏^g - 𝐪)_2^2=𝐧^⊤𝐏^g + d_2^2. §.§ Plane-to-Plane Error Similar to the point-to-plane error, we can define the plane-to-plane error. Since LiDAR point clouds are typically dense, we can always find many LiDAR points in a single frame that correspond to the same 3D plane. We can use plane segmentation and fitting algorithms to obtain the parameters of these planes 𝐧_k^l and 𝐪_k^l, in the local LiDAR coordinate system. 
Given the rotation matrix 𝐑, translation vector 𝐭, normal vector 𝐧^l, and an arbitrary point 𝐪^l, we can readily transform them to the world coordinate as 𝐧^g = 𝐑𝐧^l and 𝐪^g = 𝐑𝐪^l + 𝐭, respectively. Given the corresponding global plane parameters 𝐧 and 𝐪, we can establish a plane-to-plane distance as a metric for measuring the registration error, illustrated in <ref>, as 𝐧^g - 𝐧_2^2 + 𝐧^g⊤𝐪^g - 𝐧^⊤𝐪_2^2=𝐧^g - 𝐧_2^2 + d - d^g _2^2. §.§ Convex Relaxation Quadratically Constrained Quadratic Program (QCQP), as a general but NP-hard optimization problem, has various applications across computer vision and machine learning <cit.>. It can be defined as: min_𝐱∈ℝ^n 𝐱^⊤𝐂𝐱 s.t. 𝐱^⊤𝐀_i 𝐱 = b_i, i = 1, …, m, where 𝐂, 𝐀_1,...,𝐀_m ∈𝒮^n and 𝒮^n denotes the set of all real symmetric n × n matrices. The convex relaxation technique serves as a general tool to reformulate the original QCQP problem into a new convex problem, allowing it to be solved to a global minimum. Specifically, we first rewrite 𝐱^⊤𝐂𝐱 as trace(𝐂𝐱𝐱^⊤ ) and subsequently replace 𝐱𝐱^⊤ with a new symmetric positive semidefinite (PSD) matrix 𝐗. This leads to a new Semidefinite Programming (SDP) problem: min_𝐗∈𝒮^n trace(𝐂𝐗) s.t. trace(𝐀_i𝐗) = b_i, i = 1, …, m, 𝐗≽0. The primary benefit of employing convex relaxation instead of directly solving the original problem lies in the convex nature of the SDP formulation. Although relaxation is applied, researchers have found that strong duality properties still hold for many problems <cit.>. This implies that solving the relaxed problem is often equivalent to solving the original problem <cit.>. With any off-the-shelf SDP solver <cit.>, we can always find the global minimum within a polynomial time. § METHODOLOGY In this section, we first define the formal plane adjustment algorithm in <ref>. Then, a new optimization strategy called Bi-Convex Relaxation will be established in <ref>. Building on this strategy, we propose the GlobalPointer algorithm, based on the point-to-plane error, in <ref>. In <ref>, we accelerate the original algorithm with two closed-form solvers in GlobalPointer++ based on the plane-to-plane error. We further incorporate some empirical insights and intuitive remarks on our solvers in <ref>. §.§ Plane Adjustment (PA) Let us consider m LiDAR frames and n reconstructed planes. For the i^th (i ∈{1,2,…,m}) LiDAR frame, we define its absolute pose in the global world coordinates by a rotation matrix 𝐑_i ∈𝐒𝐎(3) and a translation vector 𝐭_i ∈ℝ^3. For the j^th (j ∈{ 1,2,…,n}) reconstructed plane, we define 𝐧_j as its normal vector and 𝐪_j as its origin point. PA with Point-to-Plane Error. When utilizing the point-to-plane error in <ref>, the plane adjustment problem with point-to-plane error can be defined as: {𝐑^* , 𝐭^*, 𝐧^*, 𝐪^*} = arg min_𝐑, 𝐭, 𝐧, 𝐪 ∑_i = 1^m∑_j = 1^n[ 𝐧_j; d_j ]^⊤𝐓_i ℬ(i, j)𝐓_i^⊤[ 𝐧_j; d_j ] s.t. 𝐑_i ∈𝐒𝐎(3), 𝐧_j = 1, ℬ(i, j) = ∑_k ∈ obs(i, j)(𝐏̃_k^l 𝐏̃_k^l⊤), i = 1, …, m, j = 1, …, n, where obs(i, j) denotes the observation index set related to the j^th reconstructed plane and the i^th LiDAR frame, 𝐓_i = [𝐑_i, 𝐭_i; 0 ,1] represents the transformation matrix for LiDAR frame i, and 𝐏̃_k^l = [𝐏_k^l;1] denotes homogeneous coordinates of point 𝐏_k^l . Remark. The matrix ℬ(i, j) accumulates all relevant local points in advance, avoiding time-consuming point cloud accumulation operations. Nonetheless, solving this problem is still hard due to the non-convex nature of both the objective function and constraints, making it highly reliant on a good initialization. 
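To make the relaxation template recalled above concrete, the toy sketch below lifts a small unit-norm QCQP, min_x xᵀCx s.t. ‖x‖²=1, to the SDP min trace(CX) s.t. trace(X)=1, X ⪰ 0. This is our own illustration in Python with cvxpy (the paper's sub-problems are assembled in Matlab and solved through Yalmip/Mosek); for this particular instance the relaxation is tight and simply returns the smallest eigenvalue of C.

```python
import numpy as np
import cvxpy as cp

# Toy QCQP:  min_x  x^T C x   s.t.  ||x||^2 = 1   (non-convex for indefinite C).
# Lifted SDP (the relaxed template above):  min trace(C X)  s.t.  trace(X) = 1,  X PSD.
n = 4
rng = np.random.default_rng(1)
M = rng.normal(size=(n, n))
C = M + M.T                                   # symmetric, generally indefinite cost

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(X) == 1.0])
prob.solve(solver=cp.SCS)

# For this instance the relaxation is tight: the optimum equals the smallest
# eigenvalue of C, and a rank-one minimizer x x^T can be read off from the
# leading eigenvector of the returned X.
print(prob.value, np.linalg.eigvalsh(C)[0])
```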
§.§.§ PA with Plane-to-Plane Error. When utilizing the plane-to-plane error in <ref>, the plane adjustment problem with plane-to-plane error can be defined as: {𝐑^* , 𝐭^*, 𝐧^*, 𝐪^*} = arg min_𝐑, 𝐭, 𝐧, 𝐪 ∑_i = 1^m∑_j = 1^n𝐧_j - 𝐑_i𝐧_ij^l*_2^2 + 𝐧_ij^l*⊤𝐑_i^⊤𝐭_i + d_j - d_ij^l*_2^2 s.t. 𝐑_i ∈𝐒𝐎(3), 𝐧_j = 1, i = 1, …, m, j = 1, …, n, where the optimal local plane parameters [𝐧_ij^l*; d_ij^l*] is obtained by minimizing the point-to-plane error for LiDAR points observed in the i^th LiDAR frame associated with the j^th reconstructed plane as: (𝐧_ij^l*, d_ij^l*) = arg min_𝐧_ij^l, d_ij^l[ 𝐧_ij^l; d_ij^l ]^⊤ℬ(i, j)[ 𝐧_ij^l; d_ij^l ]. Remark. As detailed in the subsequent section, the formulation akin to a nonlinear least squares problem simplifies <ref>. However, this also comes at a price. The direction of the normal vector for each local plane introduces ambiguity. Failure to unify these directions can lead to divergence in the entire optimization process. §.§ Bi-Convex Relaxation As shown above, whether based on point-to-plane or plane-to-plane error, direct plane adjustment presents a non-convex optimization problem. Fortunately, when either the planes or the poses are fixed, the remaining optimization problem becomes a QCQP problem, which can be reformulated with convex relaxation and solved to the global minimum. Thus, we adopt an alternating minimization approach to achieve convergence in the original plane adjustment. We term this strategy, which combines alternating minimization and convex relaxation, as Bi-Convex Relaxation, shown in <ref>. Although the non-convex nature of the original problem prevents Bi-Convex Relaxation from guaranteeing global convergence, extensive experiments demonstrate that this algorithm significantly enlarges the convergence region and even provides empirical global optimality guarantees under certain conditions. Additionally, fixing a subset of parameters decouples the inter-dependence among the remaining parameters for optimization (fixing p1, p2, p3 makes l1, l2 independent in <ref>). This allows for parallel optimization of each plane or pose, avoiding high-dimensional SDP optimization and significantly reducing time complexity. §.§ GlobalPointer In this subsection, we will follow the Bi-Convex Relaxation technique to derive the SDP formulation for each sub-problem based on point-to-plane error. The algorithm is summarized in Alg. <ref>. For simplicity, we omit the derivation from primal to QCQP and QCQP to SDP. For detailed derivation, please refer to the supplementary material. Pose-Only Optimization. After fixing the plane parameters, we can readily convert the original plane adjustment problem in <ref> to a pose-only SDP as: {𝐗^*} = arg min_𝐗 ∑_i=1^m trace ( 𝒞(i) 𝐗_i) s.t. 𝒞(i) = ∑_j = 1^n(𝐛_j𝐛_j^⊤) ⊗ℬ(i, j), 𝐗_i ≽0, {redundant rotation constraints}, i = 1, …, m, j = 1, …, n, Pose-Only SDP where the auxiliary vector 𝐛_j∈ℝ^4 is defined as 𝐛_j = [𝐧_j ;-𝐧_j^⊤𝐪_j] and rank one symmetric PSD matrix 𝐗_i = vec(𝐓_i) vec(𝐓_i)^⊤ represents the i^th primal pose variable. The redundant rotation constraints are defined as: {redundant rotation constraints} : { 𝐑_i^⊤𝐑_i = 𝐈_3, 𝐑_i𝐑_i^⊤ = 𝐈_3, (𝐑_i𝐞_i) × (𝐑_i𝐞_j) = (𝐑_i𝐞_k), ∀ (i, j, k) ∈{ (1, 2, 3), (2, 3, 1), (3, 1, 2)}. . Plane-Only Optimization. Similar to pose-only optimization, we can conduct plane-only optimization by fixing the pose parameters. This reformulates the original plane adjustment problem in <ref> into a plane-only SDP as: {𝐘^*} = arg min_𝐘 ∑_j=1^ntrace(𝒟(j)𝐘_j) s.t. 
𝒟(j) = ∑_i = 1^m𝐓_i ℬ(i, j)𝐓_i^⊤, 𝐘_j ≽0, 𝐧_j = 1, i = 1, …, m, j = 1, …, n, Plane-Only SDP where the rank one symmetric PSD matrix 𝐘_j = 𝐛_j𝐛_j^⊤ denotes the j^th primal plane variable. §.§ GlobalPointer++ In this subsection, we introduce a new variant of the plane adjustment algorithm, GlobalPointer++, which accelerates the original GlobalPointer method. Similar to GlobalPointer, GlobalPointer++ decouples the original formulation and alternately solves each sub-problem in closed form until convergence. Unlike GlobalPointer, GlobalPointer++ relies on the plane-to-plane error defined in <ref> for registration instead of the point-to-plane error. Although this formulation can be solved in closed form, the ambiguity of the normal direction prevents practical usage, as discussed in <ref>. In the following, we first address the ambiguity issue by introducing global normal direction calibration and then derive the closed-form solvers. The algorithm is summarized in Alg. <ref>. Global Normal Direction Calibration. To resolve the ambiguity of the normal direction, we reformulate the simultaneous rotation and normal direction search problem as a new SDP problem. Firstly, we rewrite the cross term -2 𝐧_j^⊤𝐑_i𝐧_ij^l* in 𝐧_j - 𝐑_i𝐧_ij^l*_2^2 (<ref>) using quaternion-based rotation representation as 𝐪_i^⊤𝐌_ij𝐪_iθ_ij, where 𝐪_i is the unit quaternion of rotation 𝐑_i, 𝐌_ij is the corresponding auxiliary matrix, and θ_ij={+1,-1} represents the normal direction sign. We then define a single column vector 𝐪̅_i = [𝐪_i; 𝐪_i θ_i1; …; 𝐪_i θ_in] and the corresponding rank one symmetric PSD matrix 𝐐_i=𝐪̅_i𝐪̅_i^⊤. This can be relaxed to a rotation-only SDP as: {𝐐^*} = arg min_𝐐 ∑_i = 1^m trace(𝐌̅_i𝐐_i) s.t. 𝐐_i ≽0, trace([𝐐_i]_00) = 1, [𝐐_i]_00 = [𝐐_i]_jj i = 1, …, m, j = 1, …, n, Rotation-Only SDP where 𝐌̅_i includes all 𝐌_ij as [𝐌̅_i]_0j = 0.5 𝐌_ij, [𝐌̅_i]_j0 = 0.5 𝐌_ij^⊤, j = 1,…, n. Remark. Following this calibration, all local plane normal directions are made consistent. It is worth noting that this calibration can also be applied to two-frame registration. Consequently, we employ this solver to sequentially initialize the normal directions by incrementally registering point clouds, thereby reducing computational time. Once the normal directions are corrected, we then apply the following closed-form solvers to each sub-problem. Pose-Only Closed-Form Solver. After fixing the plane parameters, we can convert the original plane adjustment problem in <ref> to a pose-only closed-form solver as: {𝐑^* , 𝐭^*} = arg min_𝐑, 𝐭 ∑_i = 1^m∑_j = 1^n𝐧_j - 𝐑_i𝐧_ij^l*_2^2 + 𝐧_ij^l*⊤𝐑_i^⊤𝐭_i + d_j - d_ij^l*_2^2 =arg min_𝐪, 𝐭 ∑_i = 1^m∑_j = 1^n𝐪_i^⊤𝐌_ij𝐪_i+𝐧_ij^l*⊤𝐑_i^⊤𝐭_i + d_j - d_ij^l*^2 s.t. 𝐪_i = 1, i = 1, …, m, where the auxiliary matrix 𝐌_ij is defined as in <ref>. The quaternion can be analytically determined as the eigenvector corresponding to the minimum eigenvalue. Subsequently, by substituting the quaternion into the second term, the translation vector can be obtained in closed form using linear least squares. Plane-Only Closed-Form Solver. Similar to the pose-only closed-form solver, by fixing the plane parameters, we can convert the original plane adjustment problem in <ref> to a plane-only closed-form solver as: {𝐧^* , d^*} = arg min_𝐧, d ∑_i = 1^m∑_j = 1^n𝐧_j - 𝐑_i𝐧_ij^l*_2^2 + 𝐧_ij^l*⊤𝐑_i^⊤𝐭_i + d_j - d_ij^l*_2^2 s.t. 𝐧_j = 1, j = 1, …, n. The optimal 𝐧_j can be determined through eigen decomposition. Subsequently, the optimal d_j can be analytically obtained by substituting the optimal 𝐧_j. 
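The control flow shared by Alg. <ref> (GlobalPointer) and Alg. <ref> (GlobalPointer++) can be summarised in a few lines. The sketch below is our own schematic, with the pose-only and plane-only sub-solvers (the SDPs or the closed-form solvers derived above) abstracted as callables; the iteration cap and relative tolerance are chosen to mirror the experimental settings reported later (200 iterations, 10^-4 relative stop tolerance).

```python
def bi_convex_relaxation(poses0, planes0, solve_poses, solve_planes, objective,
                         max_iters=200, rel_tol=1e-4):
    """Schematic alternating driver shared by GlobalPointer / GlobalPointer++.

    solve_poses(planes)   -> poses  : pose-only sub-problem (SDP or closed form)
    solve_planes(poses)   -> planes : plane-only sub-problem (SDP or closed form)
    objective(poses, planes) -> float : total registration error
    """
    poses, planes = poses0, planes0
    prev = objective(poses, planes)
    for _ in range(max_iters):
        poses = solve_poses(planes)       # planes fixed -> convex sub-problem in the poses
        planes = solve_planes(poses)      # poses fixed  -> convex sub-problem in the planes
        cur = objective(poses, planes)
        if abs(prev - cur) <= rel_tol * max(abs(prev), 1.0):
            break                         # relative decrease below tolerance
        prev = cur
    return poses, planes
```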
§.§ Performance Analysis In this section, we provide some intuitive remarks on our solver. For more detailed discussions, please refer to the supplementary material. Remark 1. For general scenes, the proposed Bi-Convex Relaxation optimization strategy consistently ensures convergence to a local minimum of the original plane adjustment problem. Intuitively, the two sets of parameters are not mutually constrained by equalities or inequalities, making the overall problem more stable and straightforward. Remark 2. In the absence of outliers and noise, each SDP sub-problem (, <ref>, <ref>, and <ref>) converges to a global minimum with zero duality gap. Moreover, even in the presence of low noise levels, our algorithm maintains empirical global optimality. Remark 3. Although our original problem is non-convex, experimental results demonstrate that our algorithm achieves empirical global optimality. While this global optimality is empirical, further analysis can be conducted through other means, which is beyond the scope of this paper. Remark 4. The overlap between planes and poses is a critical factor for our algorithm. Problems with a larger overlap result in a more stable and efficient optimization convergence process. From a practical perspective, a larger overlap is preferred. Remark 5. The time complexity of the proposed GlobalPointer is linear with respect to the number of planes and poses. The time complexity of the proposed GlobalPointer++ is linear when the number of planes and poses is small, and quadratic when both are extremely large. § EXPERIMENTS §.§ Testing Setup In our experiments, the proposed method is compared to the state-of-the-art methods including BALM2 <cit.>, EF <cit.>, ESO-Full <cit.>, ESO-BFGS (the first-order version of <cit.>), PA-Full <cit.>, and PA-Decoupled (the decoupled version of <cit.>). We implement our method in Matlab and run it on a laptop with an i9-13900HX CPU and 32 GB RAM. The maximum number of iterations is set to 200 for all second-order solvers, and the relative stop tolerance is set to 10^-4. We use Yalmip as our solver's interface and Mosek as the core SDP solver. §.§ Synthetic Data Testing Setup. We generate numerous virtual planes and virtual observation poses. Each pose is set to be inside a box with a maximum size of 50 meters, and each plane is set to be observed from any pose. Runtime. We evaluate the runtime of GlobalPointer and GlobalPointer++. The number of planes and poses increases gradually, using well-initialized parameters. All statistics are computed over 50 independent trials. As illustrated in <ref>, the runtime of GlobalPointer exhibits a linear growth trend with increasing numbers of planes and poses, while GlobalPointer++ achieves a 5-10x speedup compared to GlobalPointer. Interestingly, as the number of planes and poses reaches a certain magnitude, the most time-consuming part in the GlobalPointer++ is neither the eigen decomposition nor the linear least squares, but rather the data preparation for each iteration. Due to the quadratic growth complexity of data preparation time, the entire algorithm demonstrates quadratic time complexity when the number of planes and poses is large. Accuracy. We extensively compare our proposed algorithms with other methods in terms of accuracy under varying levels of point cloud noise and pose initialization noise. We choose the total point-to-plane error, defined as e_total = (𝐧^⊤(𝐑𝐏+𝐭) + d)^2, as the evaluation metric. All statistics are computed over 50 independent trials. 
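For reference, this metric is straightforward to evaluate. The numpy sketch below (our own code, names ours) computes e_total for one (frame, plane) observation both directly and through the pre-accumulated matrix ℬ(i,j) of the point-to-plane objective above, confirming that the two forms agree; this is what allows the raw points to be dropped during optimization.

```python
import numpy as np

def accumulate_B(P_local):
    """B(i,j) = sum_k [P_k; 1][P_k; 1]^T over the local points of one (frame, plane) pair."""
    P_h = np.hstack([P_local, np.ones((len(P_local), 1))])    # homogeneous, shape (K, 4)
    return P_h.T @ P_h                                        # 4 x 4

def e_total(R, t, n, d, P_local):
    """Direct metric: sum_k ( n^T (R P_k + t) + d )^2."""
    residuals = (P_local @ R.T + t) @ n + d
    return float(np.sum(residuals ** 2))

rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))                     # local LiDAR points of one observation
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))                 # a proper rotation (det = +1)
t = rng.normal(size=3)
n = rng.normal(size=3); n /= np.linalg.norm(n)    # unit plane normal
d = rng.normal()

T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t        # T = [R t; 0 1]
b = np.append(n, d)                               # b = [n; d]

direct = e_total(R, t, n, d, P)
via_B = float(b @ T @ accumulate_B(P) @ T.T @ b)  # b^T T B T^T b
print(np.isclose(direct, via_B))                  # True: the accumulated form matches
```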
As shown in <ref>, GlobalPointer consistently converges to the global optimum across all settings, while GlobalPointer++ is more sensitive to point cloud noise. ESO <cit.> performs poorly due to unstable numerical stability in Hessian matrix derivation. Other methods fail to converge to the global minimum above a certain level of noise. Time Complexity. We also evaluate the growth of time complexity for the algorithms. Starting with 5 planes and 5 poses, we incrementally increase the number of poses and planes. Time complexity is measured as the multiple of the optimization time relative to the initial setting. We conduct 50 independent trials and use the median time as the metric. Experimental results in <ref> demonstrate that our two algorithms exhibit similar time complexity. BALM2 <cit.>, EF <cit.>, and ESO-BFGS <cit.> show less sensitivity to increasing numbers of planes. However, ESO-Full <cit.> and PA-Full <cit.> exhibit time complexities approaching cubic. PA-Decoupled <cit.>, which also employs alternating minimization, demonstrates time complexity similar to our method. §.§ Real Data Accuracy. For real datasets, we select sequences from the Hilti dataset <cit.> with a higher number of indoor planes. Initially, we transform the local point clouds of each frame into the global coordinate system using ground truth poses. Subsequently, we employ plane segmentation and fitting algorithms to extract point clouds <cit.> associated with the same plane, with a RANSAC fitting threshold of 0.01. The segmentation results can be found in the supplementary material. Furthermore, to align with our hypothesis that each plane should be observed by as many LiDAR frames as possible, we randomly divide all point cloud frames into 50 subsets and accumulate them into individual sub-cloud frames. All statistics are computed over 50 independent trials. In <ref>, we compare the accuracy of our algorithms with other methods. It is evident that our proposed GlobalPointer and GlobalPointer++ consistently demonstrate global convergence performance across all settings, while other methods require well-initialized values to converge to the ground truth. § CONCLUSIONS In this paper, we exploit a novel optimization strategy, Bi-Convex Relaxation, and apply it to solve the plane adjustment problem with two variants of errors, resulting in the proposals of GlobalPointer and GlobalPointer++, respectively. Extensive synthetic and real experiments demonstrate that our method enables large-scale plane adjustment with linear time complexity, larger convergence region, and without relying on good initialization, while achieving similar accuracy as prior methods. § ACKNOWLEDGMENTS This work was supported in part by NSFC under Grant 62202389, in part by a grant from the Westlake University-Muyuan Joint Research Institute, and in part by the Westlake Education Foundation § SUPPLEMENTARY MATERIAL § OVERVIEW In this supplementary material, we further provide the following content: * A complete derivation of GlobalPointer (<ref>). * Detailed discussions on the remarks presented in in <ref> of the main paper (<ref>). * Additional experiment analysis of our solvers (<ref>). § SUPPLEMENTARY DERIVATION In this section, we derive the primal QCQP and the corresponding SDP with convex relaxation for each sub-problem in GlobalPointer. Pose-Only Optimization. After fixing the plane parameters, we can readily convert the original plane adjustment problem defined in Eq. 
3 to a pose-only QCQP as: (vec(𝐓^*)) = arg min_vec(𝐓) ∑_j = 1^n∑_i = 1^m𝐛_j^⊤𝐓_i ℬ(i, j)𝐓_i^⊤𝐛_j = arg min_vec(𝐓) ∑_i = 1^m vec(𝐓_i)^⊤𝒞(i) vec(𝐓_i) s.t. 𝒞(i) = ∑_j = 1^n(𝐛_j𝐛_j^⊤) ⊗ℬ(i, j), 𝐑_i ∈𝐒𝐎(3), i = 1, …, m, j = 1, …, n, Pose-Only QCQP where the auxiliary vector 𝐛_j is defined as 𝐛_j = [𝐧_j; -𝐧_j^⊤𝐪_j]. This pose-only QCQP is a standard block-wise QCQP and can be decoupled into m sub-problems, each of which can be converted into m low-dimensional SDPs. Let the rank one symmetric PSD matrix 𝐗_i = vec(𝐓_i) vec(𝐓_i)^⊤ represent the i^th primal pose variable. We can convert this pose-only QCQP to a pose-only SDP as: {𝐗^*} = arg min_𝐗 ∑_i=1^m trace ( 𝒞(i) 𝐗_i) s.t. 𝒞(i) = ∑_j = 1^n(𝐛_j𝐛_j^⊤) ⊗ℬ(i, j), 𝐗_i ≽0, {redundant rotation constraints}, i = 1, …, m, j = 1, …, n, Pose-Only SDP where the redundant rotation constraints are defined as: {redundant rotation constraints} : { 𝐑_i^⊤𝐑_i = 𝐈_3, 𝐑_i𝐑_i^⊤ = 𝐈_3, (𝐑_i𝐞_i) × (𝐑_i𝐞_j) = (𝐑_i𝐞_k), ∀ (i, j, k) ∈{ (1, 2, 3), (2, 3, 1), (3, 1, 2)}. . Plane-Only Optimization Similar to pose-only optimization, we can conduct plane-only optimization by fixing the pose parameters. We can readily convert the original plane adjustment problem in Eq. 3 to a plane-only QCQP as: (𝐛^*) = arg min_𝐛 ∑_j = 1^n∑_i = 1^m𝐛_j^⊤𝐓_i ℬ(i, j)𝐓_i^⊤𝐛_j = arg min_𝐛 ∑_j = 1^n𝐛_j^⊤(∑_i = 1^m𝐓_i ℬ(i, j)𝐓_i^⊤)𝐛_j s.t. 𝐧_j = 1, i = 1, …, m, j = 1, …, n. Plane-Only QCQP Similar to pose-only SDP, we introduce the rank one symmetric PSD matrix 𝐘_j = 𝐛_j𝐛_j^⊤ and convert this plane-only QCQP to a plane-only SDP as: {𝐘^*} = arg min_𝐘 ∑_j=1^ntrace(𝒟(j)𝐘_j) s.t. 𝒟(j) = ∑_i = 1^m𝐓_i ℬ(i, j)𝐓_i^⊤ 𝐘_j ≽0, 𝐧_j = 1, i = 1, …, m, j = 1, …, n. Plane-Only SDP § DETAILED DISCUSSIONS In this section, we further discuss all the remarks claimed in <ref> of the main paper. §.§ Remark 1 Remark 1. For general scenes, the proposed Bi-Convex Relaxation optimization strategy consistently ensures convergence to a local minimum of the original plane adjustment problem. Intuitively, the two sets of parameters are not mutually constrained by equalities or inequalities, making the overall problem more stable and straightforward. Considering a multi-variable constrained optimization problem defined as: (𝐚^*, 𝐛^*) = arg min_𝐚, 𝐛 f(𝐚, 𝐛) s.t. c_i(𝐚, 𝐛) ≤ 0, i = 1,…,m c_j(𝐚, 𝐛) = 0, j = 1,…,n. Primal Constrained Problem By fixing one variable, we can solve the remaining sub-problem until convergence. The two sub-convex problems at iteration n are defined as: (𝐚^*) = arg min_𝐚 f(𝐚, 𝐛) s.t. c_i(𝐚, 𝐛) ≤ 0, i = 1, …, m, c_j(𝐚, 𝐛) = 0, j = 1, …, n, (𝐛^*) = arg min_𝐛 f(𝐚, 𝐛) s.t. c_i(𝐚, 𝐛) ≤ 0, i = 1, …, m, c_j(𝐚, 𝐛) = 0, j = 1, …, n. Bi-convex Problem   Lemma 1. The partial optimum <cit.> of <ref> is a local minimum of <ref> if the partial derivatives c_i(𝐚, 𝐛) and c_j(𝐚, 𝐛) with respect to 𝐚 and 𝐛 are linearly independent.   Proof. Due to the lack of constraint correlation between 𝐚 and 𝐛 in our optimization problem, this sufficient condition can be easily verified as: ∂ c_i(a,b)/∂ a linearly independent of ∂ c_i(a,b)/∂ b ∂ c_j(a,b)/∂ a linearly independent of ∂ c_j(a,b)/∂ b Therefore, for each sub-problem, we can further simplify it as: (𝐚^*) = arg min_𝐚 f(𝐚, 𝐛) s.t. c_i(𝐚) ≤ 0, i = 1, …, m, c_j(𝐚) = 0, j = 1, …, n, (𝐛^*) = arg min_𝐛 f(𝐚, 𝐛) s.t. c_i(𝐛) ≤ 0, i = 1, …, m, c_j(𝐛) = 0, j = 1, …, n. 
Simplified Bi-convex Problem Once we achieve the partial optimum (where each sub-problem solver reaches a local minimum), we can combine the Karush-Kuhn-Tucker (KKT) conditions <cit.> from each sub-problem into the larger KKT conditions. These larger KKT conditions are exactly the KKT conditions of the original problem.□   Hence, once the bi-convex optimization problem converges to the partial optimum, we conclude that this solution converges to a local minimum of the primal constrained problem. §.§ Remark 2 Remark 2. In the absence of outliers and noise, each SDP sub-problem (, <ref>, <ref>, and <ref>) converges to a global minimum with zero duality gap. Moreover, even in the presence of a low noise level, our algorithm maintains empirical global optimality. To begin with, we define the primal QCQP as: min_𝐱∈ℝ^n 𝐱^⊤𝐂𝐱 s.t. 𝐱^⊤𝐀_i 𝐱 = b_i, i = 1, …, m. Primal QCQP The Lagrange dual of the primal QCQP can be derived as: max_𝐲∈ℝ^m 𝐛^⊤𝐲 s.t. 𝐂 - ∑_i = 1^m y_i𝐀_i ≽ 0. Dual SDP The dual of the Lagrange dual of the primal QCQP can be derived as: min_𝐗∈𝒮^n trace(𝐂𝐗) s.t. trace(𝐀_i𝐗) = b_i, i = 1, …, m, 𝐗≽0. Dual Dual SDP Based on Lemma 2.1 in <cit.>, we define the following lemmas:   Lemma 2. Let ℋ(𝐲)=𝐂 - ∑_i = 1^m y_i𝐀_i. 𝐱 is proven to be optimal for <ref>, and strong duality holds between <ref> and <ref> if 𝐱∈ℝ^n, 𝐲∈ℝ^m satisfy: {𝐱^⊤𝐀_i 𝐱 = b_i, i = 1, …, m (Primal Feasibility) ℋ(𝐲) ≽ 0 (Dual Feasibility) ℋ(𝐲)𝐱 = 0 (Stationary Condition) .   Lemma 3. In addition to Lemma 2, if ℋ(𝐲) has corank one, then 𝐱𝐱^⊤ is the unique optimum of <ref>, and 𝐱 is the unique optimum of <ref>.   Based on these two lemmas, we can prove the strong duality of our methods.   Theorem 1. The duality gap of Pose-Only SDP is zero under the noise-free and outlier-free condition.   Proof. Let 𝐱_i^* be the ground truth solution, and let the corresponding Lagrange multipliers 𝐲_i reduce to 0. Then ℋ_i(𝐲_i) = 𝒞(i). The primal feasibility is always satisfied. For the dual feasibility, it is straightforward to show that: 0 = 𝐱_i^*⊤ℋ_i(𝐲_i)𝐱_i^*≤𝐱_i^⊤ℋ_i(𝐲_i)𝐱_i ∀𝐱_i ∈{𝐱|𝐱^⊤𝐀_i 𝐱 = b_i}. which guarantees the dual feasibility condition. For the stationary condition, we can perform Cholesky decomposition as ℋ_i(𝐲_i) = 𝐋_i 𝐋_i^⊤. Since 𝐋_i^⊤𝐱_i=0, then ℋ_i(𝐲_i)𝐱_i = 𝐋_i 𝐋^⊤_i𝐱_i = 0, satisfying the stationary condition. Finally, given the plane correspondence, the ground truth rigid transformation 𝐱^*_i𝐱^⊤*_i is the unique non-zero solution that satisfies trace(ℋ_i(𝐲_i)𝐱_i𝐱_i^⊤) = 0 up to scale. We can conclude that ℋ_i(𝐲_i) is corank one, and thus proving Theorem 1. □   Theorem 2. The duality gap of Plane-Only SDP is zero under the noise-free and outlier-free condition.   Proof. Let 𝐛_j^* be the ground truth solution, and let the corresponding Lagrange multipliers 𝐲_j reduce to 0. Then ℋ_j(𝐲_j) = 𝒟(j). The primal feasibility is always satisfied. For the dual feasibility, it is straightforward to show that: 0 = 𝐛^*⊤_jℋ_j(𝐲_j) 𝐛^*_j ≤𝐛^⊤_jℋ_j(𝐲_j)𝐛_j ∀𝐛_j ∈{𝐱|𝐱^⊤𝐀_i 𝐱 = b_i}. which guarantees the dual feasibility condition. For the stationary condition, we can perform Cholesky decomposition as ℋ_j(𝐲_j) = 𝐋_j 𝐋_j^⊤. As 𝐋_j^⊤𝐛_j=0, then ℋ_j(𝐲_j)𝐛_j = 𝐋_j 𝐋^⊤_j𝐛_j = 0, satisfying the stationary condition. Finally, given the pose correspondence, the ground truth plane 𝐛^*_j𝐛^⊤*_j is the unique non-zero solution that satisfies trace(ℋ_j(𝐲_j)𝐛_j𝐛_j^⊤) = 0 up to scale. We can conclude that ℋ_j(𝐲_j) is corank one, thus proving Theorem 2. □   Theorem 3. 
The duality gap of Rotation-Only SDP is zero under the noise-free and outlier-free condition.   Proof. Let 𝐪^* be the ground truth solution, and set the corresponding Lagrange multipliers 𝐲 to -2. Then ℋ(𝐲) = 𝐌 + 2𝐈. The primal feasibility is always satisfied. For the dual feasibility, it is straightforward to show that: 0 = 𝐪^*⊤(𝐌 + 2𝐈)𝐪^*≤𝐱^⊤(𝐌 + 2𝐈)𝐱 ∀𝐱∈{𝐱|𝐱^⊤𝐀_i 𝐱 = b_i} which also verifies the stationary condition. As 𝐪^* denotes the unique non-zero solution of 𝐪^⊤(𝐌 + 2𝐈)𝐪 = 0 up to scale, we can conclude that ℋ(𝐲) is corank one, thus proving Theorem 3. □ §.§ Remark 3 Remark 3. Although our original problem is non-convex, experimental results demonstrate that our algorithm achieves empirical global optimality. While this global optimality is empirical, further analysis can be conducted through other means, which is beyond the scope of this paper. In general, when the original problem is convex, utilizing the alternating minimization ensures global convergence. However, for the plane adjustment problem, the non-convex nature of the original problem prevents our algorithms from guaranteeing convergence to the global minimum. Nevertheless, our proposed algorithms exhibit several advantages: * By transforming the sub-problem into a convex SDP problem, the convergence region is significantly enlarged, especially when utilizing alternating minimization. * In specific scenarios, particularly under fully observed scenarios where each plane is observed by every LiDAR frame, our algorithm demonstrates global optimality, empirically validated in <ref>. §.§ Remark 4 Remark 4. The overlap between planes and poses is a critical factor for our algorithm. Problems with a larger overlap result in a more stable and efficient optimization convergence process. From a practical perspective, a larger overlap is preferred. As shown in <ref>, the computational time increases from 3s to 10s as the overlap ratio decreases from 100% to 20% while achieving consistent accuracy. This confirms our remark that a small overlap ratio may cause vibration in the convergence process. §.§ Remark 5 Remark 5. The time complexity of the proposed GlobalPointer is linear with respect to the number of planes and poses. The time complexity of the proposed GlobalPointer++ is linear when the number of planes and poses is small, and quadratic when both are extremely large. For GlobalPointer, assuming a total of I iterations with m poses and n planes, where the optimization time of each single SDP is fixed, the total SDP optimization time complexity for pose-only and plane-only optimization is 𝒪(Im) and 𝒪(In), respectively. The accumulation of ℬ(i, j) is performed once before optimization, which can be considered negligible. Assuming a fixed calculation time for (𝐛_j𝐛_j^⊤) ⊗ℬ(i, j) and 𝐓_i ℬ(i, j)𝐓_i^⊤, the time complexity for data preparation is 𝒪(Imn). Furthermore, since the SDP optimization time is several orders of magnitude longer than the data preparation time, the total computational time complexity is 𝒪(Imn+Im+In)≈𝒪(Im+In). Thus, we conclude that the time complexity of GlobalPointer is linear with respect to the number of poses and planes. For GlpbalPointer++, assuming a total of I iterations with m poses and n planes, the time complexity for global normal direction calibration is 𝒪(mn^2). The pose-only and plane-only closed-form solvers have fixed calculation time per pose or plane, resulting in time complexities of 𝒪(Im) and 𝒪(In), respectively. 
Similar to GlobalPointer, the time complexity for data preparation remains 𝒪(Imn). Thus, the total computational time complexity is 𝒪(Imn+Im+In). When mn is not excessively large, the dominant time complexity of GlobalPointer++ is linear with respect to the number of poses and planes. However, as mn increases, the quadratic increase in data preparation time makes GlobalPointer++ quadratic with respect to the number of poses and planes. Despite this quadratic complexity, the absolute computation time does not become significantly large. § SUPPLEMENTARY EXPERIMENT In this section, we present additional experimental results on Global Optimality Analysis <ref>, Synthetic Data Analysis <ref>, Real Data Analysis <ref>, Absolute Time Analysis <ref>, and Overlap Analysis <ref>. §.§ Evaluation Metrics Given the estimated rotation 𝐑, translation 𝐭, normal vector 𝐧, and d (where d=-𝐧^⊤𝐪 as introduced in the point-to-plane error), we define the following metrics as: {[ e_total = (𝐧^⊤(𝐑𝐏+𝐭) + d)^2; e_R = arccos((trace(𝐑_g^⊤𝐑)-1)/2)^2; e_t = ||𝐭-𝐭_g||^2_2; e_n = ||abs(𝐧)-abs(𝐧_g)||^2_2; e_d = ||abs(d)-abs(d_g)||^2_2 ]. where [.]_g denotes the ground truth. §.§ Global Optimality Analysis In this section, we validate the global convergence performance of GlobalPointer, GlobalPointer++, pose-only GlobalPointer, and pose-only GlobalPointer++. We conduct 200 independent trials under three different point cloud noise levels with random initialization. As shown in <ref>, GlobalPointer consistently maintains global convergence across varying noise levels. In contrast, GlobalPointer++ achieves global convergence only when the point cloud noise is small, as the plane adjustment with plane-to-plane error relies on well-estimated local normal vectors. However, GlobalPointer++ converges with fewer iterations, demonstrating its efficiency. Similarly, pose-only GlobalPointer consistently achieves global convergence, while pose-only GlobalPointer++ achieves global convergence only under low point cloud noise levels. To further validate the global convergence of our proposed GlobalPointer, we conduct multiple independent trials with random initialization in a synthetic environment consisting of 10 planes and 10 LiDAR poses, fully observed. As shown in <ref>, each column box represents a unique random setting. Under two point cloud noise levels, we perform 9 random settings, and for each setting, we run 1000 independent trials with random initialization. The experimental results confirm that our method achieves global optimality even under a high point cloud noise level (σ_p =0.1). §.§ Synthetic Data Analysis We extensively compare our proposed algorithms with other methods in terms of accuracy under varying levels of point cloud noise and pose initialization noise. All statistics are computed over 50 independent trials. As shown in <ref>, GlobalPointer consistently converges to the global optimum across all settings, while GlobalPointer++ is more sensitive to point cloud noise. ESO <cit.> performs poorly due to unstable numerical stability in Hessian matrix derivation, and other methods fail to converge to the global minimum above a certain level of noise. §.§ Real Data Analysis For real datasets, we select sequences from the Hilti dataset <cit.> with a higher number of indoor planes. Initially, We transform the local point clouds of each frame into the global coordinate system using ground truth poses. 
Subsequently, we employ plane segmentation and fitting algorithms to extract point clouds <cit.> associated with the same plane, with a RANSAC fitting threshold of 0.01. The segmentation results are shown in <ref>. Furthermore, to align with our hypothesis that each plane should be observed by as many LiDAR frames as possible, we randomly divide all point cloud frames into 50 subsets and accumulate them into individual sub-cloud frames. All statistics are computed over 50 independent trials. In <ref>, we compare the accuracy of our algorithm with other methods. It is evident that our proposed GlobalPointer consistently demonstrates global convergence performance across all settings, while other methods require well-initialized values to converge to the ground truth. §.§ Absolute Time Analysis We also evaluate the absolute time of our algorithms. Starting with 5 planes and 5 poses, we incrementally increase the number of poses and planes. We conduct 50 independent trials and use the median time as the metric. Experimental results in <ref> demonstrate that the second-order methods BALM2 <cit.> and ESO-Full <cit.> exhibit superior efficiency, while our proposed GlobalPointer++ achieves the highest efficiency. In contrast, the first-order methods EF <cit.> and ESO-BFGS <cit.> show lower efficiency due to their slower convergence rates. PA-Full <cit.> exhibits the worst efficiency because of its cubic time complexity. PA-Decoupled <cit.>, which also employs alternating minimization, demonstrates time complexity similar to our method. §.§ Overlap Analysis As highlighted in Remark 4, the overlap between planes and poses is a critical factor for our algorithm's performance. We conduct a study to evaluate the impact of this factor on the accuracy and efficiency of our algorithm. The evaluation is performed by decreasing the overlap ratio from 100% to 20% in a synthetic environment comprising 100 planes and 100 LiDAR poses. Random initialization and varying degrees of random overlap are employed for each experiment. All statistics are computed over 100 independent trials. As shown in <ref>, our method achieves consistent accuracy across different overlap levels, while requiring more time to converge as overlap decreases. splncs04
http://arxiv.org/abs/2407.13124v1
20240718032555
Moments of the derivative of the characteristic polynomial of unitary matrices
[ "Brian Conrey", "Michael O. Rubinstein", "Nina C. Snaith" ]
math-ph
[ "math-ph", "math.MP", "math.NT", "60B20" ]
Thanks to the American Institute of Mathematics and the NSF FRG grant DMS-1854398 for supporting several visits of the authors during the course of this work. § ABSTRACT Let Λ_X(s)=(I-sX^†) be the characteristic polynomial of the unitary matrix X. It is believed that the distribution of values of Λ_X(s) models the distribution of values of the Riemann zeta-function ζ(s). This principle motivates many avenues of study. Of particular interest is the behavior of Λ_X'(s) and the distribution of its zeros (all of which lie inside or on the unit circle). In this article we present several identities for the moments of Λ_X'(s) averaged over U(N), for x ∈ℂ as well as specialized to |x|=1. Additionally, we prove, for positive integer k, that the polynomial ∫_U(N) |Λ_X(1)|^2k of degree k^2 in N divides the polynomial ∫_U(N) |Λ_X'(1)|^2k which is of degree k^2+2k in N, and that the ratio, f(N,k), of these moments factors into linear factors modulo 4k-1 if 4k-1 is prime. We also discuss the relationship of these moments to a solution of a second order non-linear Painlevé differential equation. Finally we give some formulas in terms of the _3F_2 hypergeometric series for the moments in the simplest case when N=2, and also study the radial distribution of the zeros of Λ_X'(s) in that case. § INTRODUCTION Ever since work at the turn of the millennium connecting mean values of the Riemann zeta function with averages of characteristic polynomials of unitary matrices selected at random with respect to Haar measure from the unitary group (see for example <cit.>), there has accumulated much literature on averages of products and ratios of characteristic polynomials and their derivatives over U(N) with Haar measure. Here we define the characteristic polynomial associated to X∈ U(N) to be Λ_X(s)=∏_j=1^N(1-se^-iθ_j)=(I-sX^†), where e^iθ_1, e^iθ_2, …, e^iθ_N are the eigenvalues of X and X^† is the conjugate transpose of X. This is not the usual way that we teach students to write the characteristic polynomial, but this way of writing the characteristic polynomial is akin to the Hadamard product of the zeta function. Furthermore, statistics involving Λ are easily related to statistics involving the traditional characteristic polynomial by pulling out exponentials from the product. For the purpose of the moments of the characteristic polynomial or its derivative, both our way and the traditional way of writing the characteristic polynomial lead to identical results. The origin of the methods used in the current work is <cit.>, where in 2006 the authors introduced a k-fold contour integral expression for averages over U(N) with respect to Haar measure of a product of 2k characteristic polynomials (reproduced in Lemma <ref> below). From this they obtain an asymptotic formula for large N with integer k: ∫_U(N)|Λ_X'(1)|^2k dX ∼ b_k N^k^2+2k, where dX is Haar measure on U(N) normalized so as to be a probability measure (see (<ref>)), with b_k = (-1)^k(k+1)/2∑_h=0^k k h(d/dx)^k+h(e^-x x^-k^2/2_k× k( I_i+j-1(2√(x)) ))|_ x=0, and I_ν(z) denotes the modified Bessel function of the first kind. The logarithmic derivative of the above determinant was shown by Forrester and Witte <cit.> to be a solution of a Painlevé differential equation.
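For orientation, the quantity on the left of the asymptotic above is easy to estimate numerically. The following Python sketch (our own code) samples Haar-random unitary matrices with scipy, evaluates Λ_X'(1) directly from the eigenvalues, and forms the empirical 2k-th moment; for k=1, N=6 the estimate should be close to the exact value (N+1)N(2N+1)/6 = 91 that follows from the finite-N formulas derived later in the paper.

```python
import numpy as np
from scipy.stats import unitary_group

def lambda_prime(s, eigs):
    """Lambda'_X(s) for Lambda_X(s) = prod_j (1 - s e^{-i theta_j}), e^{i theta_j} the eigenvalues."""
    w = np.conj(eigs)                              # the e^{-i theta_j}
    total = 0.0 + 0.0j
    for j in range(len(w)):
        total += -w[j] * np.prod(1.0 - s * np.delete(w, j))
    return total

N, k, trials = 6, 1, 20000
acc = 0.0
for _ in range(trials):
    eigs = np.linalg.eigvals(unitary_group.rvs(N))
    acc += abs(lambda_prime(1.0, eigs)) ** (2 * k)
print(acc / trials)   # for k = 1, N = 6 this should be close to (N+1)N(2N+1)/6 = 91
```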
In this current work we derive several identities for the 2k-th moments of |Λ_X^'(x)| for complex x and also specialized to |x|=1, i.e. on the unit circle. In the last section, we also study the radial distribution of the zeros of Λ_X^'. Our first theorem, in Section <ref>, gives our first formula for the moments of Λ_X^'(x), for x on the unit circle. [Theorem <ref> below] For k a non-negative integer, and x ∈ℂ with |x|=1, ∫_U(N)Λ_X^'(x)^2k = (-1)^k2∑_m=0^k kmN^k-m (-1)^m ∑_∑_j=1^k t_j = k+mk+mt_1, …, t_k ×[ N+k+i+j-22k+t_j -1]_1 ≤ i ≤ k 1 ≤ j ≤ k. Note that the right hand side does not depend on the argument of x since Haar measure on U(N) is invariant under rotation. For instance, looking ahead to (<ref>), letting x=re^iθ, and changing variables ω_j=θ_j-θ, gives the 2k-th moment of |Λ'(r)|. While computationally challenging, (<ref>) is explicit and simple enough to work out the first few moments, as functions of N, by summing the terms. Additionally, by studying the determinants in the inner sum, we are able to prove the following theorem which shows a connection to the moments of |Λ_X(1)|. [Theorem <ref> below] For k a non-negative integer, ∫_U(N)Λ_X^'(1)^2k = ∫_U(N)Λ_X(1)^2k× f(N,k), where f(N,k) is a polynomial in N of degree 2k. Note that the 2k-th moment of |Λ_X(1)| is known explicitly <cit.>. They prove: ∫_U(N)Λ_X(1)^2k = ∏_j=1^N Γ(j)Γ(2k+j)/Γ(k+j)^2 = ∏_j=0^k-1( j!/(j+k)!∏_i=0^k-1 (N+i+j+1) ). The first equality is valid for k > -1/2 while the second is for non-negative integer k. Equation (<ref>) is also explicit enough to discover a curious factorisation (see Section <ref>) in ℤ_p[N], if p=4k-1 is prime. [Theorem <ref> below] For k a non-negative integer, if 4k-1 is prime, (4k-1) ∫_U(N)|Λ_X'(1) |^2k≡ (-2) (N-2k+1) (N-2k+2)⋯ N/ (k-1)! (k-1)!∫_U(N)Λ_X(1)^2k (mod 4k-1). Note, as part of our proof, it will emerge that the rational coefficients of powers of N of ∫_U(N)|Λ_X'(1) |^2k dX have a single power of 4k-1 in their denominators. In interpreting the left hand side, the factor 4k-1 should first be cancelled with the 4k-1 in the denominators before reducing modulo 4k-1. Apart from that, all arithmetic in this expression is modulo 4k-1. So all other integers appearing in denominators are to be interpreted as inverses mod 4k-1. In Section <ref>, we derive formulas for the 2k-th moment of |Λ_X^'(x)| for any x ∈ℂ. [Theorem <ref> below] Let x ∈ℂ, and k be a non-negative integer. Then ∫_U(N)Λ_X^'(x)^2k =(-1)^(k+1)k/2d^k/dt_1^kd^k/dt_2^k e^-t_1 N[ F_N+k+i+j-1,k(t_1,t_2,x) ]_1 ≤ i ≤ k 1 ≤ j ≤ k|_t_1=t_2=0 where F_a,k(t_1,t_2,x) = 1/2π i∮w^a-1/(w-1)^k (w-|x|^2)^kexp(t_1/(w-1) + t_2/(w-|x|^2)) dw, and the contour is a circle centred on the origin enclosing the points 1 and |x|^2. To clarify, the right hand side of the above is evaluated, after carrying out the derivatives, at t_1=t_2=0. Furthermore, if |x| ≠ 1 and a is a positive integer, then F_a,k(t_1,t_2,x) is also equal to ∑_0 ≤ m+n+2k ≤ at_1^m/m!t_2^n/n! ( |x|^2(a-n-k)/(|x|^2-1)^m+k∑_l=0^n+k-1a-1 n+k-1-l -m -k l|x|^2l/(|x|^2-1)^l + 1/(1-|x|^2)^n+k∑_l=0^m+k-1a-1 m+k-1-l -n -k l1/(1-|x|^2)^l). [Theorem <ref> below] Let |x|=1, and k be a non-negative integer. Then ∫_U(N)Λ_X^'(x)^2k = (-1)^k∑_h=0^k k h N^k-h (d/dt)^k+h_k× k[L_N+i-j^(2k-1)(t) ] |_t=0. A related formula featuring this determinant was given in <cit.>, but for the characteristic polynomial Λ rotated so as to be real on the unit circle. See formula (4-4) of their paper, with their h equal to k.
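As a quick consistency check on the last formula above, the following sympy sketch (our own code) evaluates it at k=1, where the k×k determinant reduces to the single entry L^(1)_N(t). The result, (N+1)N(2N+1)/6, is exactly the second moment N+1 of |Λ_X(1)| multiplied by f(N,1)=N(1+2N)/6 in the notation introduced below.

```python
import sympy as sp

N = sp.Symbol('N', positive=True)
k = 1   # the snippet hard-codes k = 1, where the determinant is 1 x 1

# From the series L_n^{(a)}(t) = sum_i (-1)^i binom(n+a, n-i) t^i / i!, and the
# symmetry binom(n+a, n-i) = binom(n+a, a+i), one has
# (d/dt)^p L_n^{(a)}(t) |_{t=0} = (-1)^p binom(n+a, a+p).
def dL_at_zero(p, n, a):
    return (-1) ** p * sp.binomial(n + a, a + p)

# Theorem above: (-1)^k sum_h C(k,h) N^{k-h} (d/dt)^{k+h} det[ L^{(2k-1)}_{N+i-j}(t) ] at t = 0.
moment = (-1) ** k * sum(
    sp.binomial(k, h) * N ** (k - h) * dL_at_zero(k + h, N, 2 * k - 1)
    for h in range(k + 1)
)
moment = sp.expand(sp.expand_func(moment))
print(moment)                                                    # N**3/3 + N**2/2 + N/6
print(sp.simplify(moment - N * (N + 1) * (2 * N + 1) / 6) == 0)  # True
```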
Curiously, if instead of proceeding via the k-fold contour integral of <cit.> we start with the Weyl integration formula for Haar measure on U(N) (an N-fold integral) and proceed with similar steps, a closely related expression appears, but featuring an N× N determinant instead of k× k, exposing a duality between the parameters k and N. [Theorem <ref> below] For positive integer k and |x|=1 ∫_U(N)Λ_X^'(1)^2k = (-1)^kN∑_h=0^k k h (-1)^h N^k-h (d/dt)^k+h_N× N[L_k+i-j^(-2k-1)(t) ] |_t=0. We can obtain more explicit expressions, see Section <ref>, if we restrict to N=2: [Theorem <ref> below] For positive integer k and any complex x we have ∫_U(2) |Λ_X'(x)|^2k  = ∑_m=0^k km^2 (4x^2)^m2k-2mk-m/k-m+1 = 2kk/k+1_3F_2(-1-k,-k,-k;1,1/2 -k; |x|^2). Finally, allowing k to be non-integer, [Theorem <ref> below] For all k ∈, and x ∈ with |x|>1 we have ∫_U(2) |Λ_X'(x)|^2k dX = 2^2 k |x|^2 k _3F_2(1/2,-k,-k;1,2;|x|^-2). Additionally, this formula extends to |x|=1 if k > -1. Interestingly, when k is not an integer, numerical investigation shows that these expressions do not agree, see Section <ref>. Finally, we end with a theorem for the logarithmic average of Λ'_X(r) for N=2. [Theorem <ref> below] For 0 ≤ r<1 we have ∫_U(2)log|Λ_X'(r)|  dX =2 r _3F_2(1/2,1/2,1/2;3/2,3/2;r^2) +r √(1-r^2) +sin ^-1(r)/π-1/2. The history of these questions in random matrix theory originated in the thesis of Hughes <cit.> and also includes derivatives of a closely related function (the analogue of the Hardy's Z-function in number theory) which is real on the unit circle and defined by Z_X(s)=e^iπ N/2e^i∑_n=1^Nθ_n/2 s^-N/2Λ_X(s). Hughes (although with slightly different notation) looked at moments of the form (note that for k=h this reduces to (<ref>)) ∫_U(N)|Λ_X(1)|^2k-2h|Λ_X'(1)|^2h and F_N(h,k):= ∫_U(N)|Z_X(1)|^2k-2h|Z_X'(1)|^2h, with h>-1/2 and k>h-1/2. Hughes shows that the limit lim_N→∞1/N^k^2+2hF_N(h,k)=F(h,k) exists and conjectured a form for F(h,k) that continues to non-integer k but not non-integer h. This work was extended and proved by Dehaye <cit.>. The method of <cit.> was extended in <cit.> to the mixed moment (<ref>), including the connection to a Painlevé equation via a determinant of Bessel functions. The first step towards anything other than an even integer exponent on the derivative of the characteristic polynomial is the work of Winn <cit.>, who has a concrete formula for F_N(h,k) when h=(2m-1)/2 for m∈ℕ. Assiotis, Keating and Warren <cit.>, however, establish that the limit (<ref>) exists for real h and k and relate the leading order coefficient to the expectation value of a particular random variable. Averages of higher derivatives, quantities of the form of (<ref>), but instead of the zeroth and first derivative, with two arbitrary orders of differentiation, n_1 and n_2, are addressed by Keating and Wei for integer k and h in <cit.> where they find the moment is asymptotically, for large N, of order N^k^2+2(k-h)n_1+2hn_2 and they give explicit forms for the leading order coefficients. Barhoumi-Andréani <cit.> addresses combinations of more than two different derivatives, but still asymptotically and always evaluated at the point 1 and expresses the leading order coefficient as a multiple contour integral. There are a wealth of interesting results concerning the average (<ref>) in <cit.>. They express F_N(h,k), for integer h and k, in terms of the same determinant of Laguerre polynomials that arises in our Theorem <ref>. 
They relate the logarithmic derivative of this determinant to the solution of a Painlevé differential equation. This allows them to solve recursively for F_N(h,k), resulting in expressions that continue to non-integer k. They explicitly write out the first few F_N(h,k) for small integer h values and general k, in the form F_N(0,k) multiplied by what looks like a polynomial in N. In Section <ref> we describe the differential equation and its use in our setting for the fast calculation of (<ref>) for specific values of k. There is also literature studying average values of the logarithmic derivative of the characteristic polynomial of X∈ U(N), Λ_X'(s)/Λ_X(s). Whereas most of the results in the literature for moments (as opposed to ratios with characteristic polynomials in the denominator) including derivatives of characteristic polynomials evaluate the derivative at the point 1, averages of (<ref>) must be evaluated away from the unit circle (|s|≠ 1) due to singularities at the eigenvalues of X from the characteristic polynomial in the denominator, see <cit.> and <cit.>. One of the motivations for studying the moments of derivatives of characteristic polynomials of random matrices drawn from U(N) with Haar measure is that information about the distribution of the zeros of derivatives of characteristic polynomials can be retrieved from the moments. With the analogy relating the characteristic polynomial to the Riemann zeta function, this has the potential to shed light on the distribution of zeros of the derivative of the zeta function <cit.> and hence on the Riemann Hypothesis by the ideas of Levinson <cit.>. In Section <ref> this motivates an investigation into the radial distribution of the zeros of the derivative of the characteristic polynomial through moments of the derivative evaluated on the interior of the unit circle. All of the above has prompted similar calculations in other ensembles of unitary matrices. For matrices in the classical compact groups SO(2N), SO(2N+1) and USp(2N), Snaith and Alvarez <cit.> consider asymptotic formulae for moments of the logarithmic derivative, where the exponent is an integer, evaluated at a point on the real axis approaching 1 faster than 1/N as the matrix size N goes to infinity. This is extended to non-integer exponent for SO(2N+1) in <cit.> and by Ge <cit.> to the other classical compact groups. Asymptotic formulae for low derivatives of characteristic polynomials evaluated at the point 1 can be found in <cit.>, and for joint higher derivatives (in analogy to Keating and Wei) in <cit.>. Mixed moments of characteristic polynomials and their derivatives for the circular β-ensemble are studied by Forrester <cit.>. § A FINITE-N DETERMINANT FORMULA FOR MOMENTS OF THE DERIVATIVE Λ'(1) In this section we will prove the following theorem on derivatives of characteristic polynomials over U(N) with Haar measure. The result is exact for finite matrix size N. For k a positive integer, ∫_U(N)Λ_X^'(1)^2k = (-1)^k2∑_m=0^k kmN^k-m (-1)^m ∑_∑_j=1^k t_j = k+mk+mt_1, …, t_k ×[ N+k+i+j-22k+t_j -1]_1 ≤ i ≤ k 1 ≤ j ≤ k. The proof starts with a variant of Lemma 3 from <cit.>. ∫_U(N)∏_j=1^k Λ_X(1/a_j)Λ_X^†(a_j+k) = 1/k!(2π i)^k∮∏_j=1^k (u_j/a_j)^N ∏_1≤ i ≤ k 1≤ j ≤ 2k1/1-a_j/u_i∏_i≠ j( 1-u_j/u_i) ∏_j=1^k1/u_jdu_j = (-1)^k2/k!(2π i)^k∮∏_j=1^k u_j^N-k/a_j^N∏_1≤ i≤ k 1≤ j≤ 2ku_i/u_i-a_j∏_i<j(u_j-u_i)^2 ∏_j=1^k du_j, with each of the k contours in this k-dimensional integral being simple closed contours enclosing all the poles of the integrand.
The precise contours are not important since we will eventually compute our integrals via residues, but one can take them to be sufficiently large circles centred on the origin. The proof of this Lemma is the same as in <cit.>. The difference here is that we are stating the Lemma with the variable u which corresponds to their e^w. This makes no difference if one restricts to sufficiently small a_j, as they do, by eventually setting all the a_j=1. However, in Section <ref>, we will need to allow the a_j's to be more general, and we would run into complications if we were to use e^w, due to the periodic nature of the exponential function counting the same poles repeatedly. Hence we have stated our Lemma in this form. The idea is the same as previous work on moments of derivatives of characteristic polynomials, or joint moments of derivatives and the characteristic polynomial itself that start from Lemma <ref> (for example <cit.>) with the difference that we do not scale variables with 1/N and make a large N approximation. This allows us to calculate exactly for finite N. We will take one derivative with respect to each a_j, then set them all equal to 1, thus finding the 2k^th moment of the derivative of the characteristic polynomial evaluated at 1. After taking the derivative, we have a product of two multinomial expansions, so we introduce two new parameters, t and s to factorise those terms as the derivative of an exponential. We then apply Andréief's identity to express the multiple contour integral as a determinant. For simplicity, we now replace a_j with 1/a_j for 1 ≤ j ≤ k. These first k of the a_j appear in the factors a_j^N ∏_1 ≤ i ≤ ku_i/u_i - 1/a_j = a_j^N ∏_1 ≤ i ≤ ku_ia_j/u_ia_j - 1 whose derivative with respect to a_j is Na_j^N-1∏_1 ≤ i ≤ ku_ia_j/u_ia_j - 1 + a_j^N ∑_n=1^k -u_n/(u_na_j-1)^2∏_m≠ nu_ma_j/u_ma_j-1. Evaluated at a_j=1, each of these k factors gives ∏_1≤ i ≤ ku_i/u_i -1( N - ∑_m=1^k 1/u_m-1). Next, we look at the derivatives with respect to a_j for k < j ≤ 2k. These a_js appear only in the factors ∏_1 ≤ i ≤ ku_i/u_i - a_j and each of these derivatives is given by ∑_n=1^k u_n/(u_n-a_j)^2∏_m ≠ nu_m/u_m - a_j which, evaluated at a_j = 1, yields ∏_1 ≤ i ≤ ku_i/u_i-1( ∑_m=1^k 1/u_m-1). We combine all 2k derivatives to get ∫_U(N)Λ^'_X(1)^2k = (-1)^k2/k!(2π i)^k∮∏_j=1^k u_j^N-k( u_j/u_j-1)^2k( N- ∑_m=1^k 1/u_m-1)^k (∑_m=1^k 1/u_m-1)^k Δ^2( u) ∏_j=1^k du_j = d^k/dt^kd^k/ds^k(-1)^k2/k!(2π i)^k∮∏_j=1^k u_j^N+k/(u_j-1)^2kexp( Nt- (t-s)∑_m=1^k 1/u_m-1) Δ^2( u) ∏_j=1^k du_j |_t=s= 0 , where the Vandermonde determinant is Δ( u) = Δ(u_1,…,u_k) := ∏_1≤ i<j ≤ k(u_j - u_i) = _k × k[ u_j^i-1]. The last equality in (<ref>) uses d/dtexp( Nt-t∑_m=1^k1/u_m-1)|_t=0=(N-∑_m=1^k1/u_m-1) and is designed to move the k-th powers involving the sum over m into an exponent so that the variables can be separated. A form of Andréief's identity says 1/n!∫_J^n(∏_j=1^n f(u_j)) _n × n(ψ_i(u_j)) _n × n (ϕ_i(u_j)) d u_1 ⋯ d u_n =1/n!∫_J^n_n × n(f(u_j)ψ_i(u_j)) _n × n (ϕ_i(u_j)) d u_1 ⋯ d u_n = _n × n( ∫_J f(u)ψ_i(u) ϕ_j(u) du ), for some interval or contour J and a sequence of functions ϕ and ψ. With the introduction of the variables s and t in (<ref>), we have worked the integrand multiplying the Vandermonde determinants into the multiplicative form ∏_j=1^kf(u_j), so applying Andréief, we now have ∫_U(N)Λ^'_X(1)^2k = (-1)^k2d^k/dt^kd^k/ds^kexp(Nt) [ 1/2π i∮u^N+k+i+j-2/(u-1)^2kexp(s-t/u-1) du]_1 ≤ i ≤ k 1 ≤ j ≤ k|_t=s=0. 
Next, we apply the product rule to the differentiation with respect to t, on account of the extra exp(Nt) in front of the determinant, and the above becomes ∫_U(N)Λ^'_X(1)^2k =(-1)^k2∑_m=0^k km N^k-m ×(d^m/dt^md^k/ds^k[ 1/2π i∮u^N+k+i+j-2/(u-1)^2kexp(s-t/u-1) du]_1 ≤ i ≤ k 1 ≤ j ≤ k)|_t=s=0 =(-1)^k2∑_m=0^k km N^k-m ×(-1)^m (d^k+m/dt^k+m[ 1/2π i∮u^N+k+i+j-2/(u-1)^2kexp(t/u-1) du]_1 ≤ i ≤ k 1 ≤ j ≤ k)|_t=0, where in the final line we have combined the s and t derivatives, noting that they have exactly the same effect on the integrand, collecting the extra minus signs from the t differentiations in the factor (-1)^m. Finally, when differentiating a determinant one time with respect to the variable t we get, by the product rule, a sum of determinants d/dt(a_1, a_2, …, a_n) = (d/dta_1, a_2, …, a_n) + … + (a_1, a_2, …, d/dt a_n). Note that, by writing u=(u-1)+1 in the numerator and using the binomial expansion in the middle expression (or else by Cauchy's Integral Formula for derivatives), we compute the residue d/dt1/2π i∮u^N+k+i+j-2/(u-1)^2kexp(t/u-1) du|_t=0=1/2π i∮u^N+k+i+j-2/(u-1)^2k+1du=N+k+i+j-22k. Repeating this process, differentiating k+m times with respect to t, results in a sum involving many determinants. We index according to how many times we have differentiated the ℓ-th column with respect to t, say t_ℓ≥ 0, with ∑ t_ℓ = k+m. This results in the following equation. ∫_U(N)Λ^'_X(1)^2k =(-1)^k2∑_m=0^k kmN^k-m (-1)^m ∑_∑_ℓ=1^k t_ℓ = k+mk+mt_1, … , t_k ×[ 1/2π i∮u^N+k+i+j-2/(u-1)^2k+t_j du]_1 ≤ i ≤ k 1 ≤ j ≤ k =(-1)^k2∑_m=0^k kmN^k-m (-1)^m ∑_∑_ℓ=1^k t_ℓ = k+mk+mt_1, …, t_k ×[ N+k+i+j-22k+t_j -1]_1 ≤ i ≤ k 1 ≤ j ≤ k. The multinomial coefficient arises because we can arrive at the same number of differentiations of each column in multiple ways by doing the differentiations in different orders, eg. first differentiating the first column and then the second, versus differentiating the second and then the first. So in the sum arising from applying (<ref>) k+m times, we group all terms with the same number of differentiations of column 1, column 2, etc, and account for the multiplcity using the multinomial coefficient. §.§ Factoring out moments of the characteristic polynomial Using equation (<ref>), we can obtain the exact moment values for small k. Doing so, we can clearly see that the 2k^th moment of Λ_X^'(1) factors into two parts: the 2k^th moment of Λ_X(1), i.e. the non-differentiated characteristic polynomial, as well as some polynomial in N of degree 2k. We therefore define f(N,k) = ∫_U(N)Λ_X^'(1)^2k/∫_U(N)Λ_X(1)^2k, and list these functions, below, for k ≤ 6: f(N,1) = N (1 + 2 N)/6 f(N,2) = N (12 + 27 N + 40 N^2 + 61 N^3)/840 f(N,3) = N (840 + 2174 N + 2829 N^2 + 2980 N^3 + 3933 N^4 + 6648 N^5)/388080 f(N,4) = N ( 211680 + 605724 N + 828464 N^2 + 835627 N^3 + 831344 N^4 . . + 915970 N^5 + 1279520 N^6 + 2275447 N^7 )/544864320 f(N,5) = N ( 544864320 + 1680129432 N + 2440884600 N^2 + 2498415180 N^3 + 2320167235 N^4 + 2266635142 N^5 . . + 2448916150 N^6 + 2872062460 N^7 + 4060136575 N^8 + 7401505546 N^9 )/7190496593280 f(N,6) = N ( 222615993600 + 727617496320 N + 1115985182112 N^2 + 1176700689444 N^3 + 1073389052700 N^4 . . + 988586333095 N^5 + 978075305136 N^6 + 1034426527167 N^7 + 1167375408300 N^8 . . + 1398326972685 N^9 + 1974154070952 N^10 + 3654712923689 N^11)/14333056542604800 We will prove the following: For k∈, f(N,k) is a polynomial in N of degree 2k. 
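Before turning to the proof, we note that Theorem <ref> is easy to evaluate exactly on a computer, which gives a quick sanity check on the table of f(N,k) above. The following short Python script is our own illustration (the helper names moment_deriv, moment_char, det and compositions are ours, not from the text): it evaluates the sum of determinants with exact rational arithmetic and checks that the ratio of the two moments reproduces the listed polynomials for k=1,2 and small N.

from fractions import Fraction
from itertools import permutations
from math import comb, factorial

def det(M):
    # Exact determinant via the Leibniz expansion (fine for the small k used here).
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(1)
        for i in range(n):
            term *= M[i][perm[i]]
        total += sign * term
    return total

def compositions(total, parts):
    # All tuples (t_1, ..., t_parts) of non-negative integers summing to total.
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def moment_char(N, k):
    # 2k-th moment of |Lambda_X(1)| over U(N) as the k x k determinant of binomials.
    M = [[Fraction(comb(N + k + i + j - 2, 2 * k - 1)) for j in range(1, k + 1)]
         for i in range(1, k + 1)]
    return (-1) ** comb(k, 2) * det(M)

def moment_deriv(N, k):
    # 2k-th moment of |Lambda'_X(1)| over U(N) via the determinant sum of Theorem <ref>.
    total = Fraction(0)
    for m in range(k + 1):
        for t in compositions(k + m, k):
            multinom = factorial(k + m)
            for tj in t:
                multinom //= factorial(tj)
            M = [[Fraction(comb(N + k + i + j - 2, 2 * k + t[j - 1] - 1))
                  for j in range(1, k + 1)] for i in range(1, k + 1)]
            total += comb(k, m) * N ** (k - m) * (-1) ** m * multinom * det(M)
    return (-1) ** comb(k, 2) * total

for N in range(1, 6):
    assert moment_deriv(N, 1) / moment_char(N, 1) == Fraction(N * (1 + 2 * N), 6)
    assert moment_deriv(N, 2) / moment_char(N, 2) == \
        Fraction(N * (12 + 27 * N + 40 * N ** 2 + 61 * N ** 3), 840)
print("f(N,k) matches the tabulated polynomials for k = 1, 2 and N = 1, ..., 5")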
The data above together with the plot of the roots below in Figure <ref> suggests that f(N,k) may be irreducible over the rationals. As in Section <ref> We start with the average of the characteristic polynomial itself, letting a_j = 1 for all j in (<ref>), we have, using the methods of the previous section, ∫_U(N)Λ_X(1)^2k = (-1)^k2/k!(2π i)^k∮∏_j=1^k u_j^N-k∏_1 ≤ i ≤ ku_i^2k/(u_i-1)^2k∏_i < j (u_j-u_i)^2 ∏_j=1^kdu_j = (-1)^k2_k× k[ 1/2π i∮u^N+k+i+j-2/(u-1)^2k du ] = (-1)^k2_k× k[ N+k+i+j-22k -1]. The quantity on the left hand side is the familiar 2kth moment of the characteristic polynomial, many forms for which are well-known, but we continue to evaluate it using the above determinant as the method will help us complete this proof of Theorem <ref>. First, we note how one can compute this determinant simply by factoring out common terms and using row reduction. Indeed, [ N+k+i+j-22k -1] = [ N+k2k-1 N+k+12k-1 N+k+22k-1 … N+2k-12k-1; N+k+12k-1 N+k+22k-1 N+k+32k-1 … N+2k2k-1; N+k+22k-1 N+k+32k-1 N+k+42k-1 … N+2k+12k-1; ⋮ ⋮ ⋮ ⋮ ⋮; N+2k-12k-1 N+2k2k-1 N+2k+22k-1 … N+3k-22k-1 ] = 1/((2k-1)!)^k× [ (N+k)…(N-k+2) (N+k+1)…(N-k+3) … (N+2k-1) … (N+1); (N+k+1)…(N-k+3) (N+k+2)…(N-k+4) … (N+2k) … (N+2); ⋮ ⋮ ⋮ ⋮; (N+2k-1)…(N+1) (N+2k)… (N+2) … (N+3k-2)… (N+k) ]. We notice that, for every column, the first k factors of the entry in the first row are factors of every row therefore we can pull them out; from the first column we pull out (N+k)(N+k-1)…(N+2)(N+1), from the second column we pull out (N+k+1)(N+k)…(N+3)(N+2) and so on until the last column where we pull out (N+2k-1)…(N+k+1)(N+k). Therefore, (<ref>) equals (N+1)…(N+k-1)^k-1(N+k)^k(N+k+1)^k-1(N+k+2)^k-2…(N+2k-1)/((2k-1)!)^k×, where = [ N(N-1)…(N-k+2) (N+1)N…(N-k+3) … (N+k-1)… (N+1); (N+k+1)N…(N-k+3) (N+k+2)(N+1)…(N-k+4) … (N+2k)(N+k-1)… (N+2); ⋮ ⋮ ⋮ ⋮; (N+2k-1)…(N+k+1) (N+2k)…(N+k+2) … (N+3k-2)…(N+2k-1) ]. We pause here to explain the structure of the above matrix, which we will call : the first row contains the last k-1 original factors, since we factored out the first k. The second row however, will still have its first original factor followed by its k-2 last factors, since we factored out k factors starting with the second factor. The third row will have its first 2 factors followed by its k-3 last factors, since we factored out k factors starting from the third factor, and so on until the last row which simply has its first k-1 original factors since we factored out the last k factors. We illustrate this as well as the row reduction process with an example where k=4 below. For the row reduction, the important observation is that every pair of rows _j and _j+1 will have k-2 factors in common; the only difference will be between the last factor in _j and the first factor in _j+1, and their difference will always be 2k-1. Therefore, starting with the k^th row, we remove the k-1^th row from the k^th, then remove the k-2^th row from the k-1^th, and so on, leaving the first row fixed. Now we can pull out a factor of (2k-1) from every row except the first, so we pull out (2k-1)^k-1. We note that we are not changing the determinant since we are simply taking linear combinations of rows. We repeat this process, this time fixing both the first and second rows, and now the difference between entries from one row to the one above will be 2k-2, therefore we pull out a factor of (2k-2)^k-2. 
By reiterating this process, each time fixing one more row, we eventually have a matrix that looks like [ N(N-1)…(N-k+2) (N+1)N…(N-k+3) … (N+k-1)… (N+1); N…(N-k+3) (N+1)…(N-k+4) … (N+k-1)… (N+2); ⋮ ⋮ ⋮ ⋮; N N+1 … N+k-1; 1 1 … 1 ], and we've pulled out (2k-1)^k-1(2k-2)^k-2…(k+1) from the determinant. Now, we will repeat this process but on the columns. We first subtract the (k-1)^th column from the k^th, then the (k-2)^th from the (k-1)^th, and so on fixing the first column. We repeat the process but fixing the second column, then we repeat again, each time fixing one more column. The result is a rotated upper triangular matrix, which looks like [ N(N-1)…(N-k+2) (k-1)N…(N-k+3) … (k-1)(k-2)… 2; N…(N-k+3) (k-2)N…(N-k+4) … (k-2)… 2 0; ⋮ ⋮ ⋮ ⋮; N 1 … 0 0; 1 0 … 0 0 ], and whose determinant is (-1)^k2(k-1)(k-2)^2(k-3)^3 … 2^k-2. Finally, after all this manipulation, we have that = (-1)^k2(2k-1)^k-1(2k-2)^k-2…(k+1)(k-1)(k-2)^2(k-3)^3 … 2^k-2, therefore ∫_U(N)Λ_X(1)^2k =(-1)^k2[ N+k+i+j-22k -1] = (N+1)…(N+k-1)^k-1(N+k)^k(N+k+1)^k-1…(N+2k-1)/((2k-1)!)^k = 2!·3!…(k-1)!(N+1)…(N+k-1)^k-1(N+k)^k(N+k+1)^k-1…(N+2k-1)/(2k-1)!(2k-2)!… k!, where here we have arrived at the familiar polynomial form of the moment, i.e. equation (<ref>). To illustrate the process above, we take for example k=4. Then, the determinant we are trying to compute is [ N+47 N+57 N+67 N+77; N+57 N+67 N+77 N+87; N+67 N+77 N+87 N+97; N+77 N+87 N+97 N+107 ]. Factoring out 7! from each column as well as the first 4 factors of the first row entry from each column, we have that the determinant is equal to (N+1)(N+2)^2(N+3)^3(N+4)^4(N+5)^3(N+6)^2(N+7)/(7!)^4× [ N(N-1)(N-2) (N+1)N(N-1) (N+2)(N+1)N (N+3)(N+2)(N+1); (N+5)N(N-1) (N+6)(N+1)N (N+7)(N+2)(N+1) (N+8)(N+3)(N+2); (N+6)(N+5)N (N+7)(N+6)(N+1) (N+8)(N+7)(N+2) (N+9)(N+8)(N+3); (N+7)(N+6)(N+5) (N+8)(N+7)(N+6) (N+9)(N+8)(N+7) (N+10)(N+9)(N+8) ]. Now, we subtract the third row from the fourth, then the second from the third, and finally the first from the second. Now the matrix is given by [ N(N-1)(N-2) (N+1)N(N-1) (N+2)(N+1)N (N+3)(N+2)(N+1); 7N(N-1) 7(N+1)N 7(N+2)(N+1) 7(N+3)(N+2); 7(N+5)N 7(N+6)(N+1) 7(N+7)(N+2) 7(N+8)(N+3); 7(N+6)(N+5) 7(N+7)(N+6) 7(N+8)(N+7) 7(N+9)(N+8) ]. We repeat this process, but fixing the second row. So, we remove the third row from the fourth and the second row from the third. Now our matrix is [ N(N-1)(N-2) (N+1)N(N-1) (N+2)(N+1)N (N+3)(N+2)(N+1); 7N(N-1) 7(N+1)N 7(N+2)(N+1) 7(N+3)(N+2); 7· 6N 7· 6(N+1) 7· 6(N+2) 7· 6 (N+3); 7· 6(N+5) 7· 6(N+6) 7· 6(N+7) 7· 6(N+8) ]. Finally, we remove the third row from the fourth, and so the determinant of (<ref>) is given by (N+1)(N+2)^2(N+3)^3(N+4)^4(N+5)^3(N+6)^2(N+7)/(7!)^4× 7^3 6^2 5 [ N(N-1)(N-2) (N+1)N(N-1) (N+2)(N+1)N (N+3)(N+2)(N+1); N(N-1) (N+1)N (N+2)(N+1) (N+3)(N+2); N (N+1) (N+2) (N+3); 1 1 1 1 ]. Now, we subtract the third column from the fourth, then the second column from the third, and the first column from the second. Now the matrix looks like [ N(N-1)(N-2) 3N(N-1) 3(N+1)N 3(N+2)(N+1); N(N-1) 2N 2(N+1) 2(N+2); N 1 1 1; 1 0 0 0 ]. We repeat the process, but next fixing both the first and second columns. This yields [ N(N-1)(N-2) 3N(N-1) 3· 2N 3· 2(N+1); N(N-1) 2N 2 2; N 1 0 0; 1 0 0 0 ]. Finally, subtracting the third column from the fourth, we get [ N(N-1)(N-2) 3N(N-1) 3· 2N 3· 2; N(N-1) 2N 2 0; N 1 0 0; 1 0 0 0 ], whose determinant is 12 = 3· 2^2 = 3! · 2!. 
Putting this all together, we have computed the determinant of the matrix (<ref>), which is given by (N+1)(N+2)^2(N+3)^3(N+4)^4(N+5)^3(N+6)^2(N+7)/(7!)^4× 7^3 6^2 5 · 3! 2! = 3! 2! (N+1)(N+2)^2(N+3)^3(N+4)^4(N+5)^3(N+6)^2(N+7)/7!6!5!4!. Now, with this method in mind, we will factor the determinant (<ref>) out of (<ref>). We do this by showing it is a factor of every determinant in the sum. This is true for the simple fact that Na+b = Na f(N,a,b), where f(N,a,b) = (N-a)(N-a-1)⋯(N-a-b+1)/((a+b)(a+b-1)⋯(a+1)) is a polynomial in N of order b. Since t_j ≥ 0, we can rewrite N+k+i+j-22k +t_j -1 = N+k+i+j-22k -1(N+i+j-k-1)(N+i+j-k-2)… (N+i+j-k-t_j)/(2k + t_j -1)(2k+t_j - 2) … (2k). Therefore, every entry in the matrices of the determinant from (<ref>) is a factor of the corresponding entry of matrices from (<ref>). Therefore, we can pull out the factors of (N+1)…(N+k-1)^k-1(N+k)^k(N+k+1)^k-1(N+k+2)^k-2…(N+2k-1)/((2k-1)!)^k as before. Equation (<ref>) is the 2kth moment (<ref>) up to a constant factor. What is left is a polynomial in N. It is of order 2k because we know from previous literature (for example <cit.>, but see the introduction for the full history) that the 2kth moment of the derivative is of order k^2+2k in N. Therefore what is left (after we factor out the order k^2 polynomial that is the moment (<ref>)) is a polynomial in N of order 2k.

§.§ Roots of f(N,k)

Finally, it is interesting to note that f(N,k) has small roots, which are distributed in an ellipse-like shape very near to the origin. Below we include the plots of the roots of f(N,k) for small k. In order to align the roots onto roughly the same ellipse, we find the appropriate scaling for the real roots, and then scale the complex roots accordingly. That is, since every f(N,k) has two real roots, one equal to zero and the other non-zero, denote the non-zero real root of f(N,k) by r(k). Then we find the coefficient, call it C_k, that is the ratio r(k)/r(6); we scale by the root of f(N,6) simply because it is the largest one in our data, otherwise we scale by the largest available. Then we scale both the real and imaginary parts of the roots of each f(N,k) by C_k: we then see that the scaled roots of the f(N,k) are pairwise interlacing in the upper and lower half planes.

§.§ Moments modulo 4k-1

In the case where 4k-1 is a prime we can determine the polynomial f(N,k) modulo 4k-1 explicitly. For k a positive integer, if 4k-1 is prime, (4k-1) ∫_U(N)|Λ_X'(1) |^2k dX = (-2) (N-2k+1) (N-2k+2)⋯ N/ (k-1)! (k-1)!∫_U(N)Λ_X(1)^2k (mod 4k-1). See the introduction for a comment about the factor of 4k-1 on the left hand side. Before starting the proof, we develop a determinantal identity. We consider k× k determinants. Here a_1, …, a_k and A are column vectors of length k. The notation (a_1,a_2,…,a_k) represents the determinant of a matrix that has columns a_1 to a_k. The following relation is true: (A,a_2-a_1,a_3-a_2,…,a_k-a_k-1)=(A,a_2,a_3,… a_k) + (a_1, A,a_3, …, a_k) +(a_1,a_2,A,a_4,…,a_k) +⋯ + (a_1, a_2, a_3, …, a_k-1,A) We will start with the 3× 3 example and then prove the Lemma in general.
Using standard properties of determinants: (A,b,c)+(a,A,c)+(a,b,A) = (A,b,c)+(A,-a,c)+(a,b,A) =(A,b-a,c)+(a,b,A) = (A,b-a,c)+(a,b-a,A) = (A,b-a,c)+(A,b-a,-a) = (A, b-a, c-a) = (A,b-a,c-b) And now the same process in general: (A,a_2,a_3,a_4,…,a_k) + (a_1,A,a_3,a_4,…,a_k) + (a_1,a_2,A,a_4,…,a_k)+⋯ +(a_1,a_2,a_3,a_4,…,A) =(A,a_2,a_3,a_4,…,a_k) + (A,-a_1,a_3,a_4,…,a_k) + (a_1,a_2,A,a_4,…,a_k) +⋯ +(a_1,a_2,a_3,a_4,…,A) =(A,a_2-a_1,a_3,a_4,…,a_k) + (A,a_2-a_1,-a_1,a_4,…,a_k) +(a_1,a_2,a_3,A,a_5…,a_k)+ ⋯ +(a_1,a_2,a_3,a_4,…,A) =(A,a_2-a_1,a_3-a_1,a_4,…,a_k) + (A,a_2-a_1,a_3-a_1,-a_1,a_5,…,a_k)+⋯ +(a_1,a_2,a_3,a_4,…,A) ⋮ =(A,a_2-a_1,a_3-a_1,…,a_k-1-a_1,a_k) +(a_1,a_2,a_3,a_4,…,A) =(A,a_2-a_1,a_3-a_1,…,a_k-1-a_1,a_k) +(A,a_2-a_1,a_3-a_1,a_4-a_1,…,-a_1) = (A,a_2-a_1,a_3-a_1,…,a_k-1-a_1,a_k-a_1) =(A,a_2-a_1,a_3-a_2,…,a_k-1-a_k-2,a_k-a_k-1) where the final line follows by subtracting the k-1th column from the kth column, then the k-2th column from the k-1th, and so on. Now we are ready to prove the main theorem of this section. We start with the moment of the determinant of the derivative of the characteristic polynomial written as a sum of determinants from Theorem <ref> ∫_U(N)Λ_X^'(1)^2k = (-1)^k2∑_m=0^k kmN^k-m (-1)^m ∑_∑_j=1^k t_j = k+mk+mt_1, …, t_k ×[ N+k+i+j-22k+t_j -1]_1 ≤ i ≤ k 1 ≤ j ≤ k, where the t sum is over all possible non-negative integer values of the variables t_j, j=1,…,k, that sum to k+m. We are going to work modulo 4k-1 with this polynomial in N. Note that k m and k+m t_1, …, t_k are integers and do not involve N. However, the binomial coefficients in the determinants, when viewed as polynomials in N, have rational coefficients, and some of these entries have 4k-1 as a factor of the denominator, so some care is needed with these. Specifically, if t_j=2k for some j, then each entry of the jth column of the determinant for that term will be of the form N+ν4k-1 for some integer ν (where ν varies from row to row). Thus, as a polynomial in N, the relevant determinant in (<ref>) has rational coefficients that all have exactly one power of 4k-1 in the denominator coming from the (4k-1)! of the binomial coefficient. Unless t_j=2k, a given entry will not have any 4k-1's in their denominators. Furthermore, we are assuming in this theorem that 4k-1 is prime, and so cannot be constructed as products of other factors. The condition that t_j=2k for some j implies that we are only considering terms where m=k, where all the t are zero except for t_j=2k. Therefore, multiplying the determinants by ((2k-1)!)^k-1(4k-1)! to clear the denominators of the corresponding binomial coefficients, the only terms which survive 4k-1 are the terms on the right hand side of the following: ((2k-1)!)^k-1(4k-1)!∫_U(N)|Λ_X'(1) |^2k dX = (-1)^k2 (-1)^k ×∑_n=1 ^k [(N+k+i+j-2)(N+k+i+j-3)… (N-k+i+j-t_j)]_ 1≤ i≤ k 1≤ j ≤ k t_j=0, except t_n=2k 4k-1, since all the other determinants vanish 4k-1 when multiplied by (4k-1)!. As an example, when k=2, the various t's that occur in the sums in (<ref>) are m (t_1,t_2) 0 (2,0) (1,1) (0,2) 1 (3,0) (2,1) (1,2) (0,3) 2 (4,0) (3,1) (2,2) (1,3) (0,4) We get a determinant in (<ref>) for every (t_1, t_2) pair. From this example we see that the only place that we get t_j=4 is when m=2 and (t_1,t_2)=(4,0) or (0,4). 
Thus the only determinants that will survive the process of clearing the denominator and calculating mod 4k-1=7 are [ [ (N+2)(N+1)N(N-1)(N-2)(N-3)(N-4) (N+3)(N+2)(N+1); (N+3)(N+2)(N+1)N(N-1)(N-2)(N-3) (N+4)(N+3)(N+2) ]] and [[ (N+2)(N+1)N (N+3)(N+2)(N+1)N(N-1)(N-2)(N-3); (N+3)(N+2)(N+1) (N+4)(N+3)(N+2)(N+1)N(N-1)(N-2) ]]. Note that because N+3=N-4 7 and N+4=N-3 7, (N+2)(N+1)N(N-1)(N-2)(N-3)(N-4) = (N+3)(N+2)(N+1)N(N-1)(N-2)(N-3) 7 =(N+4)(N+3)(N+2)(N+1)N(N-1)(N-2) 7 and thus both entries of the first column of (<ref>) and both entries of the second column of (<ref>) are all identical, meaning that they can be factored out, leaving a column of ones. In general, ((2k-1)!)^k-1(4k-1)!∫_U(N)|Λ_X'(1) |^2k dX= (-1)^k2 (-1)^k(N+2k-1)(N+2k-2)…(N-2k+1) ×∑_n=1 ^k[ (N+k+i+j-2)(N+k+i+j-3)… (N-k+i+j)]_1≤ i≤ k 1≤ j ≤ k column n → 1 4k-1, where the nth determinant in the sum has its nth column replaced by a column with 1 in every entry. Now we are in a setting where we can apply Lemma <ref>. In that Lemma, we end up subtracting one column from an adjacent column, so we need the simplification that: (N+m)(N+m-1)⋯(N+m-2k+2) -(N+m-1)(N+m-2)⋯(N+m-2k+2)(N+m-2k+1) = (N+m-(N+m-2k+1))×(N+m-1)⋯(N+m-2k+2) = (2k-1)(N+m-1)⋯(N+m-2k+2). So we have ((2k-2)!)^k-1(4k-1)!∫_U(N)|Λ_X'(1) |^2k dX= (-1)^k2 (-1)^k (N+2k-1)(N+2k-2)…(N-2k+1) ×([ 1 (N+k)⋯ (N-k+3) (N+k+1)⋯(N-k+4) ⋯ (N+2k-2)⋯ (N+1); 1 (N+k+1)⋯(N-k+4) (N+k+2)⋯ (N-k+5) ⋯ (N+2k-1) ⋯ (N+2); ⋮ ⋮ ⋮ ⋱ ⋮; 1 (N+2k-1)⋯ (N+2) (N+2k)⋯(N+3) ⋯ (N+3k-3)⋯ (N+k) ]) 4k-1, If k=1, the determinant is just equal to 1 and we have 3!∫_U(N)|Λ_X'(1) |^2 dX=-(N+1)N(N-1)=-N(N-1)∫_U(N)|Λ_X(1) |^2 dX 3. If k>1, we can return to (<ref>) and subtract the k-1th row from the kth, then row k-2 from row k-1 and so forth until we have subtracted the first row from the second, noting that the subtractions collapse to a single product of consecutive integers, in exactly the same way as (<ref>): (N+m+1)⋯ (N+m-2k+4)-(N+m)⋯(N+m-2k+3)=(2k-2)(N+m)⋯(N+m-2k+4), where m is an integer. ((2k-3)!)^k-1(4k-1)!∫_U(N)|Λ_X'(1) |^2k dX= (-1)^k2 (-1)^k (N+2k-1)(N+2k-2)…(N-2k+1) ×([ 1 (N+k)⋯ (N-k+3) (N+k+1)⋯(N-k+4) ⋯ (N+2k-2)⋯ (N+1); 0 (N+k)⋯(N-k+4) (N+k+1)⋯ (N-k+5) ⋯ (N+2k-2) ⋯ (N+2); ⋮ ⋮ ⋮ ⋱ ⋮; 0 (N+2k-2)⋯ (N+2) (N+2k-1)⋯(N+3) ⋯ (N+3k-4)⋯ (N+k) ]) 4k-1. Here we have pulled a factor of (2k-2) from row 2 to k and we note that the first column is now an identity column, effectively reducing the size of the determinant by one. A determinant of this form has been evaluated at (<ref>), once a factor of 1/(2k-1)! has been pulled from each of the k rows of that determinant. In (<ref>), the matrix size is k-1, so we replace k in (<ref>) by k-1 and replace N in (<ref>) with N+1, allowing us to use that result without further modification: (-1)^k-12([ (N+k)⋯(N-k+4) (N+k+1)⋯ (N-k+5) ⋯ (N+2k-2) ⋯ (N+2); ⋮ ⋮ ⋱ ⋮; (N+2k-2)⋯ (N+2) (N+2k-1)⋯(N+3) ⋯ (N+3k-4)⋯ (N+k) ]) =((2k-3)!)^k-12· 3! ⋯ (k-2)!(N+2)⋯ (N+k-1)^k-2(N+k)^k-1(N+k+1)^k-2⋯ (N+2k-2)/(2k-3)! (2k-4)!⋯ (k-1)!. Substituting this into (<ref>) and cancelling ((2k-3)!)^k-1 yields (4k-1)!∫_U(N)|Λ_X'(1) |^2k dX= - (N+2k-1)(N+2k-2)⋯ (N-2k+1) ×2· 3! ⋯ (k-2)!(N+2)⋯ (N+k-1)^k-2(N+k)^k-1(N+k+1)^k-2⋯ (N+2k-2)/(2k-3)! (2k-4)!⋯ (k-1)!. In anticipation of our final formula, we gather factors on the right hand side: - 2· 3! ⋯ (k-2)!/(2k-3)! (2k-4)!⋯ (k-1)! (N-2k+1) (N-2k+2)⋯ N × (N+1)(N+2)^2⋯ (N+k-1)^k-1(N+k)^k(N+k+1)^k-1⋯ (N+2k-2)^2(N+2k-1). By (<ref>), the above equals - (2k-1)!(2k-2)!(N-2k+1) (N-2k+2)⋯ N/ (k-1)! (k-1)∫_U(N)Λ_X(1)^2k 4k-1. 
Finally, since 4k-1 is prime, we can use Wilson's Theorem to simplify on the left hand side: (4k-2)! = -1 4k-1, cancelling with the -1 on the right hand side and leaving 4k-1 as a factor on the left (in the above proof, this factor of 4k-1 cancels, ahead of reducing mod 4k-1, with a factor of 4k-1 that appears in the denominator of the 2k^th moment of |Λ'_X(1)|). Also, by Wilson's Theorem, (2k-1)!^2 = 1 4k-1 so that, using (2k-1)^-1 = -2 4k-1, we simplify on the right hand side: (2k-1)! (2k-2)! = -2 4k-1. § DETERMINANTAL FORMULAS FOR MOMENTS OF THE DERIVATIVES Λ'(1) AND Λ'(X) In this section we give several formulas for the moments of Λ'(1) and Λ'(x) and also describe a related differential equation satisfied by the main function that appears in the moments of Λ'(1). Our first theorem expresses the moments of Λ'(x) in terms of the derivatives of a k× k determinant. Let x ∈, and k be a non-negative integer. Then ∫_U(N)Λ_X^'(x)^2k =(-1)^(k+1)k/2d^k/dt_1^kd^k/dt_2^k e^-t_1 N[ F_N+k+i+j-1,k(t_1,t_2,x) ]_1 ≤ i ≤ k 1 ≤ j ≤ k|_t_1=t_2=0 where F_a,k(t_1,t_2,x) = 1/2π i∮w^a-1/(w-1)^k (w-|x|^2)^kexp(t_1/(w-1) + t_2/(w-|x|^2)) du, and the contour is a circle centred on the origin enclosing the points 1 and |x|^2. To clarify, the right hand side of the above is evaluated, after carrying out the derivatives, at t_1=t_2=0. Furthermore, if |x| ≠ 1 and a is a positive integer, then F_a,k(t_1,t_2,x) is also equal to ∑_0 ≤ m+n+2k ≤ at_1^m/m!t_2^n/n! ( |x|^2(a-n-k)/(|x|^2-1)^m+k∑_l=0^n+k-1a-1 n+k-1-l -m -k l|x|^2l/(|x|^2-1)^l + 1/(1-|x|^2)^n+k∑_l=0^m+k-1a-1 m+k-1-l -n -k l1/(1-|x|^2)^l). We derive the formulas in this theorem in the next subsection. We also give, in (<ref>), another formula for the above expression in parentheses in terms of the _2F_1 hypergeometric function. The fact that the right hand side of this theorem depends on the norm of x but not its argument again follows from the rotational invariance of Haar measure on U(N). However, we first note that, if |x|=1, the theorem simplifies. To begin, setting |x|=1, we have F_a,k(t_1,t_2,1) = 1/2π i∮w^a-1/(w-1)^2kexp((t_1+t_2)/(w-1)) dw. Notice that the dependence of the integrand on t_1 or t_2 appears in the exponential, and, when x=1, appears symmetrically. If we expand the determinant as a permutation sum whose summands are products of the entries of the matrix, we see that carrying out the differentiations with respect to t_1 and t_2 involves multiple applications of the product rule. Each time we differentiate a particular entry with respect to either variable, the effect is, on differentiating under the integral sign, to pull down, in the integrand, 1/(w-1) from the exponential. Furthermore, after carrying out all the differentiations, we set t_1=t_2=0. Thus any specific multiple derivative of a given entry of the matrix in the determinant with respect to t_1 and t_2, followed by setting t_1=t_2=0, can be achieved instead by letting t=t_1+t_2, and differentiating the determinant with respect to t the same total number of times (as with respect to t_1 and t_2 combined) and setting t=0. Also, note the presence of the factor e^-t_1 N in front of the determinant which is only impacted by differentiation with respect to t_1, and also figures in the application of the product rule. Each differentiation with respect to t_1 of this factor pulls down one power of -N. 
Hence, with |x|=1, we have d^k/dt_1^kd^k/dt_2^k e^-t_1 N[ F_N+k+i+j-1,k(t_1,t_2,1) ]_1 ≤ i ≤ k 1 ≤ j ≤ k|_t_1=t_2=0 = ∑_h=0^k k h (-N)^k-h (d/dt)^k+h[ F_N+k+i+j-1,k(t) ]_1 ≤ i ≤ k 1 ≤ j ≤ k|_t=0, where the entries in the second determinant are F_a,k(t) := 1/2π i∮w^a-1/(w-1)^2kexp(t/(w-1)) dw, with the contour counter clockwise, centred on the origin and enclosing the point u=1. This matches equation (<ref>), once we include the extra (-1)^k in each summand and the (-1)^k+1 2 in front of the right hand side of Theorem <ref>. The binomial coefficient k h arises from applying the product rule with respect to the t_1 variable, differentiating k-h times, for 0 ≤ h ≤ k. The (-N)^k-h accounts for the number of times we differentiate exp(-t_1 N) with respect to t_1, namely k-h. The (d/dt)^k+h comes about from differentiating the first determinant h times with respect to t_1 and k times with respect to t_2 and, and consolidating the t_1 and t_2 variables in the entries as explained above. Furthermore, the function F_a,k(t) can be expressed in terms of the generalized Laguerre polynomials, whose generating function is given by: exp(-zt/(1-z))/(1 - z)^α + 1 = ∑_n=0^∞ L_n^(α)(t) z^n. The L_n^(α)(t) can be hence written as a contour integral of the generating function, around a circle of radius <1 taken counter clockwise about z=0: L_n^(α)(t)= 1/2π i∮exp(-zt/(1-z))/(1 - z)^α + 1 z^-(n+1) dz. Substituting z=1/w, L_n^(α)(t)= 1/2π i∮exp(t/(1-w))/(w - 1)^α + 1 w^n+α dw, where the contour is again counter clockwise along a circle centred on the origin of radius >1. Therefore, F_a,k(t) = L_a-2k^(2k-1)(-t). Substituting this into (<ref>) and then into Theorem <ref>, gives the following theorem. Let |x|=1, i..e on the unit circle, and k be a non-negative integer. Then ∫_U(N)Λ_X^'(x)^2k = (-1)^k+1 2∑_h=0^k k h (-N)^k-h (d/dt)^k+h_k× k[L_N-k+i+j-1^(2k-1)(-t) ] |_t=0 = (-1)^k∑_h=0^k k h N^k-h (d/dt)^k+h_k× k[L_N+i-j^(2k-1)(t) ] |_t=0. The last equality follows by reversing the columns of the determinant in the line above, introducing an extra factor of (-1)^k(k-1)/2. Additionally, replacing -t with t in each entry of the determinant introduces an extra k+h powers of -1 when we carry out the k+h derivatives with respect to t ahead of setting t=0, and these cancel with the k-h powers of -1 that appear in (-N)^k-h. In terms of implementing this formula for the purpose of calculation, the following explicit expansion of the Laguerre polynomials is handy: L_n^(α)(x) = ∑_i=0^n (-1)^in+αn-i x^i/i! if n ≥ 0, 0 if n < 0. Interestingly, a similar formula holds in the N aspect. We again allow x ∈, not necessarily on the unit circle. We first write Λ'(x) = Λ(x) Λ'(x)/Λ(x) = -Λ(x) ∑_1^N e^-i θ_j/1-xe^-i θ_j. Therefore, averaging over U(N), using the Haar measure in terms of the eigenangles, 1/N! (2π)^N∏_1≤ j< l≤ N |e^iθ_l-e^iθ_j|^2, and applying (<ref>), we have, for positive integer k and x ∈: ∫_U(N) |Λ_X'(x)|^2k  = 1/N! (2π)^N∫_[0,2π]^N |Λ(x)|^2k( ∑_1^N e^-i θ_j/1-xe^-i θ_j)^k ( ∑_1^N e^i θ_j/1-xe^i θ_j)^k ∏_1≤ j< l≤ N |e^iθ_j-e^iθ_l|^2 ∏_1^N dθ_j. We write |Λ(x)|^2 = ∏_1^N (1-xe^-iθ_j) (1-xe^iθ_j) and also note, for example, as before, ( ∑_1^N e^-i θ_j/1-xe^-i θ_j)^k = d^k/dt_1^kexp( t_1 ∑_1^N e^-i θ_j/1-xe^-i θ_j) |_t_1=0. The purpose of expressing the right hand side as derivatives of an exponential is to make the integrand more separable so as to apply Andreief's identity. 
Substituting the above two identities into (<ref>), and something similar for the conjugate of (<ref>), we have that (<ref>) equals d^k/dt_1^kd^k/dt_2^k1/N! (2π)^N∫_[0,2π]^N∏_1^N (1-xe^-iθ_j)^k (1-xe^iθ_j)^k exp( ∑_1^N t_1 e^-i θ_j/1-xe^-i θ_j + ∑_1^N t_2 e^i θ_j/1-xe^i θ_j) ∏_1≤ j< l≤ N |e^iθ_j-e^iθ_l|^2 ∏_1^N dθ_j, evaluated at t_1=t_2=0. Applying the Andreief identity (<ref>) (recognizing the above double product as a product of a Vandermonde determinant and its conjugate) to express the N-dimensional integral as an N× N determinant, the above becomes d^k/dt_1^kd^k/dt_2^k_N× N[ 1/2π∫_0^2π (1-xe^-iθ)^k (1-xe^iθ)^k exp( t_1 e^-i θ/1-xe^-i θ + t_2 e^i θ/1-xe^i θ +i θ(j-l) ) dθ] |_t_1=t_2=0. Specializing again to x=1, this becomes d^k/dt_1^kd^k/dt_2^k_N× N[ 1/2π∫_0^2π (1-e^-iθ)^k (1-e^iθ)^k exp( t_1 e^-i θ/1-e^-i θ + t_2 e^i θ/1-e^i θ +i θ(j-l) ) dθ] |_t_1=t_2=0. We can rewrite the first factor in the integrand as (1-e^-iθ)^k = (-1)^k e^-i k θ (1-e^i θ)^k. Furthermore, e^-i θ/(1-e^-i θ) = -1/(1-e^i θ), and e^i θ/(1-e^i θ) = -1 + 1/(1-e^i θ). Using these, and pulling out (-1)^k e^-t_2 from each row of the determinant,  (<ref>) equals (-1)^kNd^k/dt_1^kd^k/dt_2^k e^-t_2 N_N× N[ 1/2π∫_0^2π (1-e^iθ)^2kexp( (t_2-t_1)/1-e^i θ +i θ(j-l-k) ) dθ] |_t_1=t_2=0 = (-1)^kNd^k/dt_1^kd^k/dt_2^k e^-t_2 N_N× N[ L_j-l+k^(-2k-1)(t_2-t_1) ] |_t_1=t_2=0. As in (<ref>), we apply the product rule, but with respect to t_2. Furthermore, we can consolidate the t_2-t_1 as t, taking care to include a factor (-1)^k to account for the effect of the chain rule each of the k times we differentiate t_2-t_1 with respect to t_1. Below, that factor cancels with part of the (-1)^k-h that occurs when we differentiate exp(-t_2N) k-h times with respect to t_2. We thus have For positive integer k ∫_U(N)Λ_X^'(1)^2k = (-1)^kN∑_h=0^k k h (-1)^h N^k-h (d/dt)^k+h_N× N[L_j-l+k^(-2k-1)(t) ] |_t=0. As before, this formula also holds for the average of Λ_X^'(x)^2k for any |x|=1, by rotational invariance of Haar measure on U(N). §.§ Proof of Theorem <ref> As in Section <ref>, we start with Lemma <ref>, first replacing a_j by 1/a_j for 1≤ j≤ k, and then introduce the moments of Λ' by differentiating the formula in that Lemma with respect to each of the a_j's, 1 ≤ j ≤ 2k. But unlike Section 2, where we then set all the a_j equal to 1, here we set a_j=x, and a_j+k=x, for 1 ≤ j ≤ k. Instead of (<ref>) we get x^(N-1)( ∏_1≤ i ≤ ku_i/u_i -1/x) ( N + ∑_m=1^k 1/1-u_m x), and instead of (<ref>) we get -1/x( ∏_1 ≤ i ≤ ku_i/u_i-x) ( ∑_m=1^k 1/1-u_m/x). Combining all the derivatives with respect to all the a_j's gives ∫_U(N) |Λ'_X(x)|^2k dX = (-1)^kx^kN|x|^-2k/k!(2π i)^k∮∏_j=1^k u_j^N+k( N + ∑_i=1^k 1/1-u_i x)^k ( ∑_i=1^k 1/1-u_i/x)^k /∏_1^k (u_i-1/x)^k (u_i-x)^k ∏_i≠ j (u_i-u_j) ∏_j=1^k du_j where each of the k-contours of this k-dimensional contour integral is around simple closed contours enclosing the points 1/x and x. We wish to make the integrand more separable so as to apply the Andreief identity, see (<ref>). To this end, we introduce parameters t_1 and t_2, and notice that d^k/dt_1^kexp(t_1 N + t_1 ∑_i=1^k 1/1-u_i x) |_t_1=0 = ( N + ∑_i=1^k 1/1-u_i x)^k, and d^k/dt_2^kexp(t_2 ∑_i=1^k 1/1-u_i/x) |_t_2=0 = ( ∑_i=1^k 1/1-u_i/x)^k. We substitute the left hand sides of these two formulas for the numerator of the displayed fraction in the integrand in (<ref>). We also note that ∏_i≠ j (u_i-u_j) is, up to sign, the square of a Vandermonde determinant, specifically equal to (-1)^k(k-1)/2∏_1 ≤ j<i≤ k(u_i-u_j)^2. 
Other than this factor the rest of the integrand separates (after carrying out the above two substitutions), and we can apply Andreief's identity (see (<ref>)) to get ∫_U(N) |Λ'_X(x)|^2k dX = (-1)^(k+1)k/2 x^kN|x|^-2kd^k/dt_1^kd^k/dt_2^k e^t_1 N_k × k[ 1/2π i∮u^N+k+i+j-2/(u-1/x)^k (u-x)^kexp(t_1/(1-ux) + t_2/(1-u/x)) du ]_1 ≤ i ≤ k 1 ≤ j ≤ k|_t_1=t_2=0. Let a=N+k+i+j-1, and also substitute w=ux into the entry of the determinant, which thus becomes x^2k-a/2π i∮w^a-1/(w-1)^k (w-|x|^2)^kexp(-t_1/(w-1) - t_2 |x|^2/(w-|x|^2)) dw. But, in carrying out the derivatives with respect to t_1 and t_2, we can, by the chain rule, replace -t_1 and -t_2 with t_1 and t_2 since an even number of derivatives are carried out (k derivatives with respect to each). Similarly we can replace t_2 |x|^2 above with t_2 by dropping the |x|^-2k that appears in (<ref>). Additionally, 2k-a = -N+k-i-j+1. We can pull out x^-N+k-i+1 from the i-th row of the determinant, and x^-j from the j-th column. Altogether this pulls out, after using ∑_i=1^k i = k(k+1)/2 (and, likewise, for j) and simplifying, x^-Nk which cancels with the x^Nk in (<ref>). We have thus arrived at the first formula of Theorem <ref>. Next, we derive formula (<ref>) for the function F_a,k(t_1,t_2,X). From (<ref>) we see that F_a,k(t_1,t_2,X) an entire function of t_1 and t_2, thus we may expand it in a two dimensional Maclaurin series about the origin, valid for all t_1 and t_2: F_a,k(t_1,t_2,x) = ∑_m,n≥ 0 F_a,k^(m,n)(0,0,x) t_1^m t_2^n/m! n!, where, F_a,k^(m,n) is the m-th derivative of F_a,k(t_1,t_2,x) with respect to t_1 followed by the n-th derivative with respect to t_2. But F_a,k^(m,n)(0,0,x) = 1/2π i∮w^a-1/(w-1)^m+k (w-|x|^2)^n+k dw. Now, as in the proof of Liouville's theorem in complex analysis, the contour intergral on the rhs vanishes if m+n+2k≥ a+1, which we can see by replacing the contour with ever larger circles of radius R centred on the origin, with circumference growing proportionately to R, whereas the integrand is at most O(1/R^2). Thus the sum over m and n involves finitely many terms, and we can truncate the sum in (<ref>) at m+n+2k≤ a. To evaluate the above contour integral we evaluate the residues at the points 1 and |x|^2. Note that we are assuming a to be a positive integer so the factor w^a-1 is entire. We first evaluate the residue at the pole |x|^2. We need to determine the Laurent series for each factor of the integrand about the point |x|^2. Now, w^a-1 = (|x|^2+w-|x|^2)^a-1 = ∑_l=0^a-1a-1 l |x|^2(a-1-l) (w-|x|^2)^l, and, if |x| ≠ 1, 1/(w-1)^m+k = 1/((w-|x|^2)+(|x|^2-1))^m+k = 1/(|x|^2-1)^m+k1/(1+(w-|x|^2)/(|x|^2-1))^m+k = ∑_l=0^∞-m-k l(w-|x|^2)^l/(|x|^2-1)^m+k+l, the last step being the binomial expansion for negative exponents. And, recalling complex analysis, we note, for convergence of this expansion, that when we compute the residue at a given point, we replace our contour by a small circle surrounding that point (in this case the point |x|^2), so that w-|x|^2 can be made as small as we wish on that contour. Thus, because of the factor (w-|x|^2)^n+k in the denominator of the integrand, we determine the residue of (<ref>) at |x|^2 as the coefficient of (w-|x|^2)^n+k-1 in the series about |x|^2 of w^a-1/(w-1)^m+k, namely as equal to |x|^2(a-1)/(|x|^2-1)^m+k∑_l_1+l_2= n+k-1a-1 l_1 -m -k l_2|x|^-2l_1/(|x|^2-1)^l_2, where l_1 and l_2 run over non-negative integers summing to n+k-1. The residue at 1 can similarly be evaluated, and equals 1/(1-|x|^2)^n+k∑_l_1+l_2= m+k-1a-1 l_1 -n -k l_21/(1-|x|^2)^l_2. 
Summing the two residues and replacing l_1 with n+k-1-l_2 in the first sum, and m+k-1-l_2 in the second sum, we get F_a,k^(m,n)(0,0,x) = |x|^2(a-n-k)/(|x|^2-1)^m+k∑_l=0^n+k-1a-1 n+k-1-l -m -k l|x|^2l/(|x|^2-1)^l + 1/(1-|x|^2)^n+k∑_l=0^m+k-1a-1 m+k-1-l -n -k l1/(1-|x|^2)^l = a - 1 n + k - 1|x|^2(a-n-k)/(|x|^2-1)^m+k _2F_1(m + k, -n - k + 1; a - n - k + 1;|x|^2/(|x|^2-1)) + a - 1 m + k - 11/(1-|x|^2)^n+k _2F_1(n + k, -m - k + 1; a - m - k + 1;1/(|x|^2-1)) We have thus arrived at formula (<ref>) of Theorem <ref>. §.§ An associated differential equation Let f_N,k(t): = t (_k× k[L_N+i-j^(2k-1)(t) ])' /_k× k[L_N+i-j^(2k-1)(t) ] , i.e. t times the logarithmic derivative of the displayed determinant. Then, f_N,k(t) satisfies the differential equation. t^2f”(t)^2+4tf'(t)^3-(4k^2-4Nt+t^2+4f(t))f'(t)^2 -(2kN(2k+t)+(4N-2t)f(t))f'(t)-(kN-f(t))^2 = 0. We initially found this differential equation for f_N,k(t) experimentally. In <cit.>, f_N,k(t) is shown to satisfy an equivalent differential equation identified as a σ-Painlevé V equation with three parameters. Specifically, our (<ref>) matches their equation (3-88), with f(t) equal to their σ̃(t)-Nt/2. The differential equation allows one, for example, to efficiently determine the moments of Λ'(1) for specific values of k and arbitrary N, in comparison to expanding the determinant in Theorem <ref> and differentiating, or summing the terms in Theorem <ref>. For one, Theorem <ref> only requires us to determine the coefficients of powers of t in the Maclaurin series of _k× k[L_N+i-j^(2k-1)(t) ] up to terms of degree 2k. Writing the Maclaurin series of f_N,k(t) up to this term as ∑_1^2k c_j t^j, with the coefficients c_j depending on N and k, the differential equation gives a recursion for the coefficients c_j. Note that c_0=0 since (<ref>) has no constant term. Section 3 of  <cit.> lists the first few coefficients of their σ̃(t), hence the first few coefficients of f_N,k(t) are: c_1 = -N/2 c_2 = -N ( N+2 k ) /4(2k-1)(2k+1) c_3 = 0 c_4 = ( 2N+2k+1 ) ( 2N+2k-1 ) ( N+2k ) N/ 16 ( 2k-3 ) ( 2k+3 ) ( 2k-1 ) ^2( 2k+1 ) ^2 c_5 = 0 c_6 = -( 6N^2+12Nk+4k^2-1 ) ( 2N+2k+1 ) ( 2N+2k-1 ) ( N+2k ) N/ 32 ( 2k-5 ) ( 2k+5 ) ( 2k-1 ) ^3 ( 2k+1 ) ^3( 2k-3 ) ( 2k+3 ) c_7 = 0 ⋮ To recover _k× k[L_N+i-j^(2k-1)(t) ] from the series expansion of f_N,k(t), we divide (<ref>) by t and then integrate with respect to t. This gives log_k× k[L_N+i-j^(2k-1)(t) ] = ∑_1^∞ c_j t^j/j + C where C is the constant of integration. We can determine C by setting t=0, specifically C = log_k× k[L_N+i-j^(2k-1)(0) ] = log_k× k[ N+2k+i-j-1 2k-1 ]. We can reverse the columns of the above determinant to get C = log((-1)^k 2_k× k[ N+k+i+j-2 2k-1 ] ), which we recognize as being equal to the log of the 2k-th moment of |Λ(1)| (see equation (<ref>)). This is consistent with Theorem <ref> which includes those moments as a factor on its right hand side. However, it is not immediately obvious from the differential equation that the function f(N,k) is a polynomial in N of degree 2k. Rather the recursion of the the differential equation only immediately yields that f(N,k) is, for given k, a rational function of N. We therefore get _k× k[L_N+i-j^(2k-1)(t) ] = exp(∑_1^∞ c_j t^j/j) ∫_U(N)Λ_X(1)^2k. Finally, we can compose the Maclaurin series for exp with that of the exponent to get the Maclaurin series for the determinant (with coefficients rational functions of N and k), and substitute this into the right hand side of Theorem <ref> to determine, for given k, the moments as functions of N. 
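As a sketch of this procedure (our own illustration; the function name f_from_cj and the use of SymPy are ours, not the authors'), one can rebuild the Maclaurin series of the Laguerre determinant from the coefficients c_1,…,c_6 listed above and then apply the determinant-derivative formula for the moment from the previous section; the result should reproduce the tabulated f(N,1) and f(N,2):

import sympy as sp

N, t = sp.symbols('N t')

def f_from_cj(k):
    # First few Maclaurin coefficients c_j of f_{N,k}(t), as listed above (c_3 = c_5 = 0).
    c = {
        1: -N / 2,
        2: -N * (N + 2 * k) / (4 * (2 * k - 1) * (2 * k + 1)),
        3: sp.Integer(0),
        4: (2 * N + 2 * k + 1) * (2 * N + 2 * k - 1) * (N + 2 * k) * N
           / (16 * (2 * k - 3) * (2 * k + 3) * (2 * k - 1) ** 2 * (2 * k + 1) ** 2),
        5: sp.Integer(0),
        6: -(6 * N ** 2 + 12 * N * k + 4 * k ** 2 - 1)
           * (2 * N + 2 * k + 1) * (2 * N + 2 * k - 1) * (N + 2 * k) * N
           / (32 * (2 * k - 5) * (2 * k + 5) * (2 * k - 1) ** 3 * (2 * k + 1) ** 3
              * (2 * k - 3) * (2 * k + 3)),
    }
    # The Laguerre determinant equals exp(sum_j c_j t^j / j) times the 2k-th moment
    # of |Lambda_X(1)|, so the ratio f(N,k) needs only the exponential factor g(t).
    g = sp.exp(sum(c[j] * t ** j / j for j in range(1, 2 * k + 1)))
    total = sum(sp.binomial(k, h) * N ** (k - h) * sp.diff(g, t, k + h)
                for h in range(k + 1))
    return sp.expand((-1) ** k * total.subs(t, 0))

print(sp.factor(f_from_cj(1)))   # expect N*(2*N + 1)/6
print(sp.expand(f_from_cj(2) - N * (12 + 27 * N + 40 * N ** 2 + 61 * N ** 3) / 840))  # expect 0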
For example, employing the differential equation, we thus calculated the functions f(N,7) and f(N,8) that appear in Theorem <ref> in a few seconds. We found that f(N,7)=N (395850216912899348 N^13+211532624477224855 N^12+150409183615071976 N^11 +124529753766572861 N^10+108717221805362394 N^9+99500444626471665 N^8 +94746015810816508 N^7+94787692493435963 N^6+100709159551410998 N^5 +112720739347604080 N^4+123320249823386616 N^3+113230079581194576 N^2 +70456770368487360 N+20564820256780800)/6249929305402823040000 and f(N,8)=N (294731809494409081373 N^15+156236163525907760000 N^14+111548545120295422636 N^13 +92004150627732094528 N^12+79833050367269223318 N^11+72103908989822633280 N^10 +67069494832732475668 N^9+64329209879764227904 N^8+63853927671987675845 N^7 +66311663923553088320 N^6+72711963466727700696 N^5+82589042563096637568 N^4 +89406760833815044464 N^3+79567109646364454400 N^2+47560703381244144000 N +13348437875764992000)/18702760476120263262720000 Figure <ref> shows a plot of the zeros of f(N,7) and f(N,8). Notice, as in our previous plots, that the arguments of the zeros are interlacing. § THE CASE N=2 In the case N=2 we can work out more explicit formulas for the moments in terms of the _3F_2 hypergeometric function. Consider ∫_U(2)Λ_X'(x)^k Λ_X^†'(x) ^k . For our first result in this section, we assume that k is a positive integer and x is complex. For a matrix X∈ U(2) with eigenvalues e^iθ_1 and e^iθ_2 we have Λ_X(x)=(1-xe^-iθ_1)(1-xe^-iθ_2)=1-x(e^-iθ_1+e^-iθ_2)+x^2e^-i(θ_1+θ_2) so that Λ_X'(x)=-e^-iθ_1-e^-iθ_2+2xe^-i(θ_1+θ_2). Using the fact that the joint probability density function for eigenvalues of matrices from U(N) with Haar measure is 1/N! (2π)^N∏_1≤ j< k≤ N |e^iθ_k-e^iθ_j|^2= 1/N! (2π)^N|Δ(e^iθ_1,…,e^iθ_N)|^2, our integral is ∫_U(2) |Λ_X'(x)|^2k  = 1/8π^2∫_[0,2π]^2(2x-(e^iθ_1+e^iθ_2))^k (2x-(e^-iθ_1+e^-iθ_2))^k |e^iθ_1-e^iθ_2|^2  dθ_1  dθ_2 = ∑_m,n=0^k kmkn (2x)^m (2x)^n (-1)^m+nF(k-m,k-n) where F(A,B):= 1/8π^2∫_[0,2π]^2 (e^iθ_1+e^iθ_2)^A (e^-iθ_1+e^-iθ_2)^B |e^iθ_1-e^iθ_2|^2  dθ_1  dθ_2. Writing out F(A,B), with binomial expansions for the quantities in brackets, F(A,B) =1/8π^2∫_[0,2π]^2(∑_a=0^A Aae^i a θ_1 e^i(A-a)θ_2) (∑_b=0^B Bb e^-i bθ_1e^-i(B-b)θ_2)(2-e^iθ_1-iθ_2-e^iθ_2-iθ_1) dθ_1 dθ_2 =1/8π^2∑_a=0^A ∑_b=0^B AaBb∫_[0,2π]^2[ 2e^i(a-b)θ_1 e^i(A-a-B+b)θ_2. -e^i(a-b+1)θ_1 e^i(A-a-B+b-1)θ_2 . -e^i(a-b-1)θ_1e^i(A-a-B+b+1)θ_2]dθ_1 dθ_2. For the θ_1 integral to be non-zero, we require a=b in the first term in the square brackets, a=b-1 in the second term and a=b+1 in the third term. Each of these then requires A=B for the θ_2 integral to be non-zero. This results in F(A,A) = ∑_a=0^A Aa^2-∑_a=1^A AaAa-1. The first sum can be identified as the coefficient of z^A in the squared binomial expansion of (1+z)^A, that is the coefficient of z^A in the expansion of (1+z)^2A. So, the first sum equals 2A A. Furthermore, by writing A a-1 = A A-a+1, we recognize the second sum as the coefficient of z^A+1 in, as before, the square of the binomial expansion of (1+z)^A. Thus the second sum equals 2A A+1. However, 2A A - 2A A+1 = 1/A+12A A. In summary, F(A,B)={[ 1/A+12AA ; 0 ]. Thus, we have For positive integer k and any complex x we have ∫_U(2) |Λ_X'(x)|^2k  = ∑_m=0^k km^2 (4x^2)^m2k-2mk-m/k-m+1 = 2kk/k+1_3F_2(-1-k,-k,-k;1,1/2 -k; |x|^2). 
To prove that the two right-hand sides in the statement of the theorem are equal, we use the series for the _3F_2 hypergeometric function: _3F_2(a_1,a_2,a_3;b_1,b_2;z)=∑_n=0^∞(a_1)_n (a_2)_n (a_3)_n/(b_1)_n (b_2)_nz^n/n!, where (a)_n is the rising factorial (Pochhammer symbol) defined by (a)_0=1, (a)_n=a(a+1)⋯(a+n-1), for n≥ 1. The idea is to compare the coefficients of like powers of x in both expressions. This comparison reduces to proving that (2k)!(-1-k)_m(-k)^2_m (k-m)!^3(k-m+1)!=k!^3(k+1)!(2k-2m)!4^m(1/2-k)_m. Also, it helps to turn the Pochhammer symbols into factorials: (-k)_m=(-1)^m k!/(k-m)! and 4^m (1/2-k)_m=(-1)^m (2k)!(k-m)!/(2k-2m)! k!. For our next result, we will also allow k to be complex, but require that |x|>1. For a matrix X∈ U(2) with eigenvalues e^iθ_1 and e^iθ_2 we have from (<ref>) that Λ_X'(x)^k= e^2 π i β k (2x)^ke^-ik(θ_1+θ_2)(1-e^iθ_1+e^iθ_2/2x )^k, where the factor exp(2 π i β k) is to account, for complex exponentiation, for the fact that we need to pay attention to the particular branch of the logarithm being used when exponentiating complex numbers with a non-integer exponent. We take the branch to be principal with argument lying in (-π,π]. Therefore, the extra factor exp(2 π i β k) has β∈ℤ, depending on θ_1, θ_2 and x, selected to ensure that the imaginary part of the logarithm of the rhs above, before multiplying by k, lies in (-π,π]. However, we will be multiplying by the k-th power of the conjugate Λ_X^†'(x) in (<ref>), and that formula similarly requires an extra factor but with opposite argument, i.e. exp(-2 π i β k). These two factors thus cancel and we focus our attention away from it. We have assumed that |x|>1; therefore we may expand the last factor into an absolutely convergent binomial series and have Λ_X'(x)^k= e^2 π i β k (2x)^ke^-ik(θ_1+θ_2)∑_m=0^∞km(-e^iθ_1+e^iθ_2/2x )^m. Now we use the ordinary binomial theorem and have Λ_X'(x)^k= e^2 π i β k (2x)^ke^-ik(θ_1+θ_2)∑_m1,m2=0^∞km_1+m_2(m_1+m_2)!/m_1!m_2!(-1)^m_1+m_2(e^m_1iθ_1e^im_2θ_2)/(2x)^m_1+m_2. This expression simplifies to Λ_X'(x)^k= e^2 π i β k (2x)^ke^-ik(θ_1+θ_2)∑_m1,m2=0^∞Γ(k+1)/Γ(k-m_1-m_2+1)m_1!m_2!(-1)^m_1+m_2(e^m_1iθ_1e^im_2θ_2)/(2x)^m_1+m_2. Similarly, we have Λ_X^†'(x)^k= e^-2 π i β k (2x)^ke^ik(θ_1+θ_2)∑_m3,m4=0^∞Γ(k+1)/Γ(k-m_3-m_4+1)m_3!m_4! (-1)^m_3+m_4(e^-m_3iθ_1e^-im_4θ_2)/(2x)^m_3+m_4. In preparation for computing the average over U(2) we observe that if we integrate the product of the above two expressions over [0,2π]^2 we get 1/(2π)^2∫_[0,2π]^2Λ_X'(x)^k Λ_X^†'(x) ^k dθ_1dθ_2 = (2|x|)^2kΓ(k+1)^2∑_m_1,m_2(2|x|)^-2m_1-2m_2/m_1!^2m_2!^2Γ(k+1-m_1-m_2)^2 = 4^k |x|^2 k _3F_2(1/2,-k,-k;1,1;1/|x|^2). Now we include the factor |Δ(e^iθ_1,e^iθ_2)|^2/2=(e^iθ_1-e^iθ_2)(e^-iθ_1-e^-iθ_2)/2=1-e^i(θ_1-θ_2)+e^i(θ_2-θ_1)/2 in the integrand. We get 2^2 k |x|^2 k _3F_2(1/2,-k,-k;1,1;1/|x|^2)-2^2 k-2 k^2 |x|^2 k-2 _3F_2(3/2,1-k,1-k;2,3;1/|x|^2). Upon using (<ref>) for all three of the _3F_2 functions involved, we have For all k,x ∈ with |x|>1 we have ∫_U(2) |Λ_X'(x)|^2k dX=2^2 k |x|^2 k _3F_2(1/2,-k,-k;1,2;|x|^-2). While our derivation was for |x|>1, we can extend it by continuity to |x|=1 when k > -1. This is because of the standard fact that the sum defining the hypergeometric function _3F_2(a_1,a_2,a_3;b_1,b_2;z) converges at z=1 if (b_1 + b_2 - a_1 -a_2 -a_3) > 0. Here this condition reads k > -5/4. Furthermore, on the left hand side, we can specialize to x=1, by rotational invariance. For given k, the integrand is bounded away from the origin (mod 2π). 
Near the origin, on examining (<ref>) and (<ref>), we can compare the moment to the integral ∫_[0,2π]^2 (θ_1+θ_2)^2k dθ_1 dθ_2, which converges and is continuous for k > -1. Now let us compare our results. Let G_0(k,x) be the result of numerical integration of |Λ_X'(x)|^2k; let G_1(k,x)=2kk/k+1_3F_2(-1-k,-k,-k;1,1/2 -k; x^2); and let G_2(k,x)=2^2 k x^2 k _3F_2(1/2,-k,-k;1,2;x^-2). We compare the triples (G_0,G_1,G_2) for various k and x. First of all, we observe that these are all the same if k is a positive integer:
(k,x)=(3,5/4)→ (G_0,G_1,G_2)=(713.203,713.203,713.203)
(k,x)=(3,1/3)→ (G_0,G_1,G_2)=(14.8656,14.86,14.86)
Next, if k is not an integer and x>1, then G_0 and G_2 agree:
(k,x)=(5/4,9/5)→ (G_0,G_1,G_2)=(27.5617, 14.4-.04 i, 27.5617)
Finally, if k is not an integer and 0<x<1 then none of them agree:
(k,x)=(3/4,1/5)→ (G_0,G_1,G_2)=(1.01969, 1.0409, 1.15548 - 0.13579 i)
So, Carlson's theorem does not apply here, in that, for x>1, the two functions G_1 and G_2 agree for all positive integers k, yet do not always agree for k > 0, implying that the needed growth conditions for Carlson's Theorem do not hold. Indeed, the function _3F_2(-1-k,-k,-k;1,1/2 -k; x^2) seems to grow too quickly along the negative imaginary axis. It remains to find a formula when k is not an integer and 0<x<1.

§ RADIAL DISTRIBUTION OF THE ROOTS OF Λ_X'(X)

In this section we obtain a formula for a logarithmic average of Λ'_X(r) for N=2. For 0≤ r<1 we have ∫_U(2)log|Λ_X'(r)|  dX = 2 r/π _3F_2(1/2,1/2,1/2;3/2,3/2;r^2) + ( r √(1-r^2) + sin^-1(r) )/π - 1/2. One interest in a logarithmic average such as this is that it is intimately connected with the distribution of the zeros of Λ_X'(z) inside the unit circle. This question has been studied numerically in <cit.>. In <cit.>, for large matrix size, the tails of the distribution are explicitly determined. This question is also connected to the distribution of the zeros of the derivative of the Riemann zeta function to the right of the half-line, which in turn is at the heart of the method developed by Levinson when he proved that at least one-third of the zeros of the Riemann zeta-function are on the critical line. We are able to derive our theorem from Jensen's formula together with a calculation about the radial distribution of the roots of Λ'_X(r). Jensen's formula asserts that 1/2π∫_0^2πlog |f(re^iθ)| dθ= log|f(0)|+log( r^n/|z_1… z_n| ) for any function f which is analytic in the disc |z|≤ r, with f(0)≠ 0, and zeros z_1,…, z_n inside that disc, counted with multiplicity. The last term in this expression can be written as a Stieltjes integral as ∫_0^r log(r/u) d N_f(u) where the radial distribution of the zeros is given by N_f(u)=∑_z_n, f(z_n)=0, |z_n|≤ u 1, i.e. it is the counting function of the zeros of f in |z|≤ u. Jensen's formula, integrated over U(2), becomes ∫_U(2)1/2π∫_0^2πlog |Λ_X'(re^iθ)| dθ  = ∫_U(2)log|Λ_X'(0)|  + ∫_U(2)∫_0^r (log(r/u)) N'_Λ'(u) du dX = ∫_U(2)log|Λ_X'(0)|  + ∫_U(2)∫_0^r N_Λ'(u)/u du dX, with the last equality by integration by parts. In (<ref>), the invariance of the unitary group under rotation implies we can replace the inner integrand on the left hand side with log|Λ_X'(r)|. By (<ref>), the first integral on the right hand side is ∫_U(2)log|Λ_X'(0)|  = 1/4π^2∫_[0,2π]^2 (log| e^i(θ_1 -θ_2) + 1|) (1-cos(θ_1-θ_2))  dθ_1  dθ_2, where we have pulled out |-e^-iθ_1|=1 from the absolute values without affecting it. The integrand is therefore a function of θ_1 - θ_2. Moreover, the integrand is periodic with period 2π.
For any given θ_2, the inner integral with respect to θ_1 thus evaluates the same on substituting θ_1 = θ_2+θ. Therefore, the right hand side above reduces to a one-dimensional integral and we have ∫_U(2)log|Λ_X'(0)|  = 1/2π∫_0^2π (log| e^iθ + 1|) (1-cos(θ)) dθ = 1/2π∫_0^2πlog| e^iθ + 1| dθ -1/2π∫_0^2π (log| e^iθ + 1|) cos(θ) dθ. The first integral on the right hand side can be evaluated using Gauss' Mean Value Theorem, for the function log(1+re^iθ), with r<1, which we may do since the function log(1+z) is analytic in the unit circle, and then taking the real part which is log|1+re^iθ|. Letting r → 1^-, this gives a value, by Gauss' Mean Value Theorem, of log(1)=0 for the first integral. For the second integral we introduce an extra factor of 1/2 in front of the integral so as to square the absolute value inside the logarithm, and also use |e^iθ + 1|^2 = 2+2 cos(θ). Integrating by parts we get -1/4π∫_0^2πlog( 2 + 2cos(θ)) cos(θ) dθ = -1/4π∫_0^2π sin(θ)^2/(1 + cos(θ)) dθ = -1/2, with the last step on replacing sin(θ)^2 = 1- cos(θ)^2 = (1-cos(θ))(1+cos(θ)), and cancelling the last factor. Putting this together, (<ref>) becomes ∫_U(2)log|Λ_X'(0)|  = -1/2. Next we determine, for given u, the average over U(2) of N_Λ_X'(u) so as to swap order of integration in the last integral in (<ref>). From (<ref>) above, we see that the zero of Λ_X'(z) for a matrix X∈ U(2) is at (e^iθ_1+e^iθ_2)/2. Therefore, ∫_U(2)N_Λ_X'(u) dX=1/8π^2∫_[0,2π]^2 |e^iθ_1+e^iθ_2|≤ 2u|e^iθ_1-e^iθ_2|^2  dθ_1 dθ_2. As before, we can reduce the integral on the right hand side to a one-dimensional integral by pulling out e^iθ_2 from the absolute values in the above expression and substituting θ_1 = θ_2+θ while exploiting periodicity of the exponential function, so that ∫_U(2) N_Λ_X'(u) dX=1/4π∫_[0,2π] |1+e^iθ|≤ 2u|e^iθ-1|^2  dθ. The integrand simplifies to 2-2cos(θ). Furthermore, squaring the inequality |1+e^iθ| ≤ 2u, and using cos(θ)^2+ sin(θ)^2=1, we have 1+cosθ≤ 2u^2, or cos^-1 (2u^2-1)≤θ≤π. The requirement θ≤π is on account of the cos^-1 function, but our integral is over [0,2π]. Hence we need to include an extra factor of 2 to take into account θ∈ (π,2π]. Thus, we have ∫_U(2)N_Λ_X'(u) dX = 1/2π∫_cos^-1(2u^2-1)^π (2-2cosθ)  dθ = ( 2 u√(1-u^2) + cos^-1(1-2 u^2) )/π. After swapping the order of integration of the last double integral in (<ref>), we substitute the above. But 2/π∫_0^r √(1-u^2) du = ( r√(1-r^2) + sin^-1(r) )/π. Furthermore, 1/π∫_0^r cos^-1(1-2u^2)/u du can be expressed in terms of the series cos^-1(1-2u^2) = 4 ∑_0^∞1/(2n+1)2n n( u/2)^2n+1, so that (<ref>) equals 4/π∑_0^∞1/(2n+1)^22n n( r/2)^2n+1, which can also be expressed in terms of the _3F_2 hypergeometric function as 2r/π_3F_2(1/2,1/2,1/2;3/2,3/2;r^2), thus completing the proof of the theorem. This calculation may give a (very!) small amount of insight into what is going on in the case of general N× N matrices.
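As a closing sanity check (ours, not part of the original text), the theorem just proved can be tested numerically: the closed form, with the 3F2 term summed via the series derived at the end of the proof, is compared against a grid quadrature of log|Λ'_X(r)| over the U(2) eigenangle density. The two printed columns should agree to roughly two or three decimal places (the quadrature converges slowly because of the logarithmic singularity along the zero set).

import numpy as np
from math import comb, asin, sqrt, pi

def log_average_closed_form(r, terms=200):
    # (2r/pi) 3F2(1/2,1/2,1/2;3/2,3/2;r^2) + (r sqrt(1-r^2) + arcsin r)/pi - 1/2,
    # with the 3F2 piece summed as (4/pi) sum_n binom(2n,n) (r/2)^{2n+1} / (2n+1)^2.
    hyp = (4 / pi) * sum(comb(2 * n, n) * (r / 2) ** (2 * n + 1) / (2 * n + 1) ** 2
                         for n in range(terms))
    return hyp + (r * sqrt(1 - r ** 2) + asin(r)) / pi - 0.5

def log_average_quad(r, M=1200):
    # Average of log|Lambda'_X(r)| over U(2) by a grid sum over the eigenangles.
    th = 2 * np.pi * (np.arange(M) + 0.5) / M
    t1, t2 = np.meshgrid(th, th)
    deriv = -np.exp(-1j * t1) - np.exp(-1j * t2) + 2 * r * np.exp(-1j * (t1 + t2))
    weight = np.abs(np.exp(1j * t1) - np.exp(1j * t2)) ** 2
    vals = np.log(np.abs(deriv)) * weight
    return vals.sum() * (2 * np.pi / M) ** 2 / (8 * np.pi ** 2)

for r in (0.2, 0.5, 0.8):
    print(r, log_average_closed_form(r), log_average_quad(r))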
http://arxiv.org/abs/2407.12180v1
20240716211819
A UAV-assisted Wireless Localization Challenge on AERPAW
[ "Paul Kudyba", "Jaya Sravani Mandapaka", "Weijie Wang", "Logan McCorkendale", "Zachary McCorkendale", "Mathias Kidane", "Haijian Sun", "Eric Adams", "Kamesh Namuduri", "Fraida Fund", "Mihail Sichitiu", "Ozgur Ozdemir" ]
cs.NI
[ "cs.NI", "cs.RO" ]
A UAV-assisted Wireless Localization Challenge on AERPAW

Paul Kudyba, Jaya Sravani Mandapaka, Weijie Wang, Logan McCorkendale, Zachary McCorkendale, Mathias Kidane, Haijian Sun, Eric Adams, Kamesh Namuduri, Fraida Fund, Mihail Sichitiu, Ozgur Ozdemir

P. Kudyba and H. Sun are with the School of Electrical and Computer Engineering, University of Georgia, Athens, GA, USA. J. S. Mandapaka, L. McCorkendale, Z. McCorkendale, M. Kidane, and K. Namuduri are with the Department of Electrical Engineering, University of North Texas, Denton, TX, USA. W. Wang and F. Fund are with the Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA. E. Adams is with Delmont Systems LLC, Hurst, TX, USA. M. Sichitiu and O. Ozdemir are with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, USA.

July 16, 2024
==========================================================
§ ABSTRACT

As wireless researchers are tasked with enabling wireless communication as infrastructure in more dynamic aerial settings, there is a growing need for large-scale experimental platforms that provide realistic, reproducible, and reliable experimental validation. To bridge the research-to-implementation gap, the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW) offers open-source tools, reference experiments, and hardware to facilitate and evaluate the development of wireless research in controlled digital twin environments and live testbed flights. The inaugural AERPAW Challenge, “Find a Rover,” was issued to spark collaborative efforts and test the platform's capabilities. The task involved localizing a narrowband wireless signal, with teams given ten minutes to find the “rover” within a twenty-acre area. By engaging in this exercise, researchers can validate the platform's value as a tool for innovation in wireless communications research within aerial robotics. This paper recounts the methods and experiences of the top three teams in rapidly locating a wireless signal by automating and controlling an aerial drone in a realistic testbed scenario.

Wireless Localization, Autonomous Aerial Vehicles, Drones, Large-scale testbed

§ INTRODUCTION

Localizing wireless radio transmissions has numerous use cases, ranging from search and rescue operations and wildlife tracking to finding jammers (intentional or not) and tracking intruding Unmanned Aerial Vehicles (UAVs) <cit.>. Many methods of localizing wireless transmitters are common, including using Radio Frequency (RF) sensors at fixed locations, using vehicles or manned aircraft with RF receivers, or searching on foot (commonly referred to as “fox hunting”). Among these methods, using UAVs for localization of RF sources has the potential to alleviate many of the drawbacks of the other methods, resulting in a cost-effective solution that can quickly cover a large geographical area with the potential for very accurate results. Since optimal solutions are computationally intractable, the methods for wireless localization in the research literature often try to strike a balance between computational complexity and their ability to handle noisy measurements and heterogeneous environments; in the end, the proof is in the pudding, and an experimental approach to performance evaluation is often the best indication of the performance of an approach in the real world for this type of research.
A significant hurdle for experimentation with UAVs and wireless communications equipment is the considerable effort involved in building the UAVs and their payloads, obtaining RF permits (such as FCC approvals), and the availability of FAA safety pilots for the UAVs. The Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW) <cit.> has been designed and built to facilitate the development and testing of this type of research. In the AERPAW platform, researchers can develop their experiments involving wireless communications, UAVs, and Unmanned Ground Vehicles (UGVs) in a digital twin emulation environment and then transfer their experiments to the physical testbed. The experiments are then executed using real UAVs, UGVs, and radio transceivers, and the results are subsequently returned to the researchers in the emulator, where a new iteration can take place. In the Fall of 2023, AERPAW organized a student competition named AERPAW Find a Rover (AFAR) that challenged teams of students to program a drone equipped with a wireless receiver to find the location of a hidden transmitter located on a UGV on the ground. This paper details the approaches taken by the top three finalists as well as lessons learned about the AERPAW platform in particular and digital twins in general. The remainder of the paper is organized as follows. Section <ref> and <ref> present details about the AERPAW platform and the AFAR challenge. Section <ref> describes the approaches taken by each of the three top teams in the challenge. Section <ref> shows and comments on the results of the experiments in the field and compares them with the results from the digital twin. Section <ref> concludes the paper. §.§ The AERPAW Platform AERPAW is the third of the original four Platforms for Advanced Wireless Research (PAWR), which are a set of experimental platforms sponsored by the National Science Foundation (NSF) in partnership with an industry consortium. The four platforms allow wireless researchers from industry and academia to perform wireless experiments at scale, in a real outdoor environment. While all four platforms enable advanced wireless research, only AERPAW enables controlled mobility, by allowing the researchers to program autonomous aerial and ground vehicles. At its core, AERPAW consists of a physical testbed and its supporting facilities. The physical testbed consists of fixed and mobile nodes. The fixed nodes comprise eight 20m tall fixed towers (or pole/roof-top mounts) with an enclosure at the base and radio equipment installed at each location. The fixed nodes, in addition to a common set of USRPs (Universal Software Radio Peripheral), feature a heterogeneous collection of radio equipment that includes a 4G/5G NSA Ericsson cellular network, RF sensors, and LoRa Gateways. The mobile nodes are comprised of a vehicle and a portable node. In general, (with one exception) any portable node can be mounted on any vehicle depending on the experiment's needs. The vehicles relevant for the AFAR challenge are a Large AERPAW Multicopter (LAM), and the first AERPAW rover. Both vehicles can handle the Large Portable Nodes (LPN), and during the experiments in the AFAR challenge, each vehicle carried one such node. The LAM is a capable UAV designed by the AERPAW team to carry the 3-4 kg of a typical LPN for up to 40 minutes. The maximum payload of the LAM is limited by FAA regulations to 13 kg (with a corresponding reduction in endurance). 
All the components of an LPN are centered around the USRP (for this competition, a B205 mini was used): an Intel NUC 10 (Intel i7-5550U with 64 GB RAM and 1 TB SSD) is used as a companion computer, and a 1 W wide-band low-noise power amplifier with filters is used at the front ends of the USRP. Power for the LPN is provided by the vehicle batteries. Each LAM is controlled by an autopilot that is capable of executing the commands sent by the companion computer of the attached LPN. The operating frequency during the AFAR challenge was 3.4 GHz. An essential supporting system for the AERPAW physical testbed is its digital twin. The AERPAW digital twin environment is designed to allow users to develop experiments without requiring direct access to the AERPAW UAVs and radios. While other testbeds allow their users direct access to the physical resources of the testbed, due to the inclusion of programmable UAVs in the testbed, AERPAW users have to use the digital twin to develop their experiments. In the digital twin, three hardware elements are virtualized: the USRPs, the channel propagation, and the UAVs. Standing in for the USRPs in the testbed, the AERPAW team has developed virtual USRPs that can be discovered by the USRP Hardware Driver (UHD) and used like hardware-based USRPs by any software that uses UHD. The AERPAW channel emulator is designed to forward (after considering fading and channel impairments) the signals between the virtual USRPs of all participating nodes in an experiment. Finally, a software-in-the-loop setup is used to emulate the drones. The drone emulator is periodically updating the wireless channel emulator with the positions and orientations of the virtual drones such that the channel emulator can take into account the relative positions of all virtual USRPs in the testbed, including their relative antenna gains. Once an experiment is developed in the digital twin, it can then be transferred to the testbed (by moving docker containers) and executed in batch mode; subsequently, the experiment results are returned to the experimenters in the digital twin. When comparing results from the testbed and digital twin emulation, the emulation of the drones is quite accurate: both the real as well as the virtual drones have identical autopilot software, and both the real and the virtual drones are more than capable of executing the commands given by the autopilots (i.e., their performance limits are far in excess of the demands of the autopilot). On the other hand, the wireless channel emulator was primarily designed to enable the development of experiments that will be executed on the testbed to obtain the results. As such, the accuracy of the channel emulator is nowhere close to the accuracy of the drone emulator. In short, the channel emulator at the time of the challenge used a free-space propagation model with 10 dB of added white noise, while the propagation in the physical testbed was considerably noisier (with up to 30-40 dB of noise at times). In Section <ref>, it will be clear that this discrepancy between the digital twin propagation model and the testbed wireless propagation has had a clear detrimental effect on the teams that tuned their approach to the propagation characteristics of the digital twin (while the evaluation of the results was performed in the real testbed). 
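To make the gap between the emulated and real channels concrete, the following is a minimal sketch, in Python, of the kind of free-space path loss model with additive white Gaussian noise described above. It is our own illustration rather than AERPAW's emulator code: the 3.4 GHz carrier, the 1 W (30 dBm) transmit power, and the 10 dB versus 30-40 dB noise levels are taken from the text, while the function names and the interpretation of the noise figure as a Gaussian standard deviation are assumptions.

```python
import math
import random

C = 299_792_458.0   # speed of light (m/s)
FREQ_HZ = 3.4e9     # AFAR operating frequency (3.4 GHz)

def free_space_path_loss_db(distance_m: float, freq_hz: float = FREQ_HZ) -> float:
    """Free-space path loss in dB for a given link distance."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def emulated_rssi_dbm(distance_m: float,
                      tx_power_dbm: float = 30.0,
                      noise_std_db: float = 10.0) -> float:
    """Emulator-style reading: free-space loss plus zero-mean Gaussian noise.

    noise_std_db ~ 10 dB loosely mimics the digital twin described in the text;
    30-40 dB is closer to what teams reported seeing on the physical testbed.
    """
    return tx_power_dbm - free_space_path_loss_db(distance_m) + random.gauss(0.0, noise_std_db)

if __name__ == "__main__":
    for d in (50, 150, 300):
        print(d, round(emulated_rssi_dbm(d), 1),
              round(emulated_rssi_dbm(d, noise_std_db=35.0), 1))
```

Sweeping noise_std_db from 10 to 35 in this sketch gives a rough feel for why an algorithm tuned on the digital twin can behave very differently on the testbed.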
§.§ The AFAR Competition In the summer of 2023, the AERPAW team decided to hold a student competition on the AERPAW platform for several reasons: first, a competition will raise the visibility of the platform among wireless researchers, which could convert into users at a later time; furthermore, a competition will exercise and validate the entire platform design, and allow the AERPAW team to identify any significant obstacles in using the platform. Finally, the problem considered has many real-life use cases, and a competition in a real environment may provide interesting insights toward solving the problem. The problem considered in the AFAR challenge is as follows: the organizers (AERPAW) hide a rover at an unknown location while the rover is continuously transmitting a narrowband radio signal, while the competitors program a drone equipped with a wireless receiver in order to find the rover (i.e., estimate its location). While the problem is theoretically relatively simple, noisy measurements (from the reduced bandwidth and hardware impairment) considerably complicate data processing as well as the design of the search algorithms. Both the hidden rover and the drone feature wideband nearly omnidirectional antennas, which result in a local minimum when the drone is directly above the rover. Furthermore, during the drone movement, the drone antenna tilts up to five degrees in the direction of the movement, introducing further uncertainty in the measurements. While the maximum signal strength decreases reliably with the distance between transmitter and receiver, signal strength drops of 30-40 dB were very common throughout the experiment. Figure <ref> shows the physical setup of the competition: the blue rectangle shows where the UAV is allowed to fly (at a minimum altitude of 20 m and maximum of 110 m). The green rectangle shows the allowed locations for the rover. For reference, the sides of the rectangles were between 270 m and 300 m. The competitors were allowed to fly the UAV in any orientation they desired and at any speed under 10 m/s. The three red markers show where the rover was actually hidden by the organizers during the competition. Naturally, the competitors were not aware of these locations before the competition. The three “hiding spots” for the rover were intended to be increasingly difficult, with the first one close to the start location of the drone, the second one farther away (and close to an edge of the allowable area), and the third one intentionally placed in a region where the drone was not allowed to travel to. The same three locations were used for all runs for all competitors. The competitors did not have access to the rover in any way during the competition. To balance localization speed and accuracy, two separate estimates were required from the teams (during the same flight): the first estimate is a fast estimate, which is required after three minutes from the start of the search. The second estimate is the final estimate, which is required after ten minutes from the start of the search. Separate scores and awards were made for each of the two estimates. In preparation for the competition, the AERPAW team developed the channel sounder (based on a pseudo-random noise sequence) comprising the transmitter on the rover and the receiver on the UAV. Each measurement consisted of two numbers: a signal strength and a confidence level (which measured how strong the signal was in comparison with the background noise). 
Furthermore, the AERPAW team also provided sample code showing how to integrate UAV control with radio measurements. The sample code (a simple type of gradient descent) assumed that the channel measurements are monotonically varying with the distance between the rover and the UAV, which is an assumption that does not hold (at all) in the real testbed. All the participating teams significantly changed the sample code (they only used the primitives the AERPAW team provided for moving the UAV and collecting radio measurements). Section <ref> provides details on the methodology employed by each of the top three teams. For fairness, none of the AERPAW students were allowed to participate in the competition, and none of the principal investigators were allowed to mentor any participating teams. The score for each team was computed by computing the average error for their estimates for either the fast or the final estimates (or both). The team with the lowest average errors wins (in each category). § DESIGN METHODOLOGY AND EXPERIENCES §.§ New York University (NYU) Team The NYU team used a reference experiment provided by AERPAW as a starting point. In this baseline solution, the UAV measures the received signal strength at fixed intervals. If the signal strength decreases between the beginning and end of an interval, the UAV turns 90 degrees. We used the baseline to identify areas of improvement, and then designed our final solution - based on Bayesian optimization - to address these. In initial experiments with the baseline solution, we identified these challenges: * With a fixed interval size, the UAV either overshoots the rover location (too-large interval) or moves very slowly (too-small interval). At the end of the flight - when the UAV is close to the rover - the UAV flies in a rectangle around the rover, and cannot get closer. * Early in the search, “decreasing signal strength” is a relatively reliable indicator of whether the UAV is moving toward the rover or away. Later in the search, however, when the UAV is already close to the rover, this is no longer a reliable indicator because of the noisy relationship between signal strength or distance. The first challenge was easily addressed with small changes to the baseline solution. The first change, modeled after learning rate annealing in gradient descent, was to gradually reduce the interval during flight time. Then the UAV can move fast at the beginning of the search, and take smaller steps at the end. Next, inspired by the idea of momentum in gradient descent, we added an accumulating additive term to the interval, so the UAV moves faster in the same direction if the signal strength increases in successive intervals. The team also tracked boundaries in each of the four cardinal directions, updated each time signal strength decreased while the UAV was moving in that direction, and made the UAV more cautious about crossing a “boundary,” when it had previously seen signal strength decrease beyond that point. With the changes to the baseline solution mentioned above, the trajectory of the UAV was much more efficient, and it reached the approximate location of the rover quickly. However, because of the second challenge mentioned above, the location estimate did not improve much with additional flight time. Consequently, the NYU team switched to an approach based on Bayesian optimization to fit a Gaussian process (GP) regression <cit.>. The GP regression can learn from noisy data by adding a white kernel to interpret the noise. 
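As a rough illustration of this modelling choice (a sketch using scikit-learn, not the NYU team's actual code; the kernel hyperparameters, sample positions, and signal model below are invented), a GP with an RBF kernel plus a white-noise kernel can be fit to noisy signal-strength samples and queried on a grid:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical (x, y) sample positions in metres and noisy RSSI-like readings.
positions = rng.uniform(0, 300, size=(40, 2))
true_source = np.array([120.0, 210.0])
rssi = -0.2 * np.linalg.norm(positions - true_source, axis=1) + rng.normal(0, 3.0, size=40)

# RBF captures the smooth spatial trend; WhiteKernel absorbs measurement noise.
kernel = 1.0 * RBF(length_scale=50.0) + WhiteKernel(noise_level=5.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(positions, rssi)

# Predict on a coarse grid and take the argmax as a crude location estimate.
xs, ys = np.meshgrid(np.linspace(0, 300, 61), np.linspace(0, 300, 61))
grid = np.column_stack([xs.ravel(), ys.ravel()])
mean = gp.predict(grid)
print("estimated source:", grid[np.argmax(mean)])
```

In a Bayesian-optimization loop, the posterior mean and standard deviation returned by such a GP would feed an acquisition function that proposes the next waypoint, trading off exploration of uncertain regions against exploitation of strong readings.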
Also, Bayesian optimization is well suited for optimizing a problem where it is expensive to sample new points (in this case, consumes more flight time) in the parameter space. The algorithm was implemented with the package in Python <cit.>, which turned out to be very effective. The team used the digital twin environment in AERPAW to experiment with different kernels for the GP regression, different acquisition functions for the Bayesian optimization, and hyperparameters for both the kernel and acquisition function, in order to further improve the results. The NYU team also added two features to our solution to accommodate specific elements of the AFAR challenge. First, we need the UAV to have a reasonable fast estimate after three minutes. Therefore, we began each search by traversing the south and west edges of the UAV geofence, to identify the latitude and longitude at which the highest signal strength was observed, and used that as an initial estimate. Second, we had to address the possibility of the rover being outside the UAV geofence. In that case, we flew the UAV perpendicular to the boundary, then used a previously fitted linear regression to estimate the distance between the boundary and the rover. As shown in Figure <ref>, the NYU team achieved good improvement over the baseline solution in the digital twin environment, although the location estimate was not as accurate in the physical testbed due to differences in the signal propagation dynamics. We attribute this success to the ability to conduct extensive experiments in the digital twin environment, and to the reference code that allowed the team to start with a working solution right away and then iterate on it. §.§ University of North Texas (UNT) Team The UNT team developed a recursive algorithm that can be used to scan and locate the rover. The team's approach was to execute a perimeter sweep of the overlap between the two geofences (rover and UAV), as shown in Figure <ref>. Although the rover was not guaranteed to be located in this region, it was the largest searchable region within which the rover may be located. Its rectangular nature lent itself to an edge traversal, with periodic measurements logged, and the location of the highest power value recorded on a per edge basis. As the drone continued along its path, it would sample a signal power value every 0.2 seconds. This signal was then pushed to a buffer, and when the buffer reached a length of 8 recorded samples, the average was calculated and stored. The average signal power values calculated are then associated with the center location of the buffer. This allowed for a continuous 1.6-second interval scan of the signal power strength along the perimeter. All the signal power strengths and their locations were then stored for further use. Four positions were then selected, one on each side of the rectangle determined by the greatest signal power strength. Implementing the sampling buffer in our approach was to account for external noise from other unknown sources. The sampling and averaging of the radio frequency power values negated the effects that random noise spikes would have had during the search. For example, if the drone traverses a side of the perimeter and experiences a sudden burst of radio frequency noise and records it, that position would be the point of highest recorded value, resulting in an inaccurate measurement. 
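A minimal sketch of this kind of averaging buffer, written by us for illustration (the 8-sample window and 0.2 s cadence come from the description above; the class and variable names are our own), is shown below.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PowerAverager:
    """Average every 8 raw power readings and tag the result with the centre position."""
    window: int = 8
    _samples: List[Tuple[float, Tuple[float, float]]] = field(default_factory=list)
    averaged: List[Tuple[float, Tuple[float, float]]] = field(default_factory=list)

    def push(self, power_db: float, position: Tuple[float, float]) -> None:
        # Called once per 0.2 s measurement while the UAV traverses an edge.
        self._samples.append((power_db, position))
        if len(self._samples) == self.window:
            avg = sum(p for p, _ in self._samples) / self.window
            centre = self._samples[self.window // 2][1]  # position at the middle of the window
            self.averaged.append((avg, centre))
            self._samples.clear()

if __name__ == "__main__":
    import random
    avg = PowerAverager()
    for i in range(80):
        avg.push(-60 + random.gauss(0, 5), (35.72, -78.70 + i * 1e-5))
    best_power, best_pos = max(avg.averaged)   # strongest averaged reading on this edge
    print(best_power, best_pos)
```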
With the sampling and averaging process, we can filter out all the instances of noise and obtain a more refined and accurate search area. Once a full perimeter sweep has concluded, the measured signal values are evaluated to determine the location of the highest recorded power values on each side of the rectangle. The four locations are then used to create line segments from points on opposite sides of the rectangle. Vertical and horizontal line segments are created using Pyturf's Geojson <cit.> format by inserting two locations (with both latitude and longitude). These lines were then used in Pyturf's line intersect function to calculate the location at which the two lines intersect, predicting the initial guess. The recursive algorithm then repeats the perimeter sweep by selecting a quadrant that contains the previous intersection location. After recording the greatest received power strengths and locations, the location associated with the overall greatest received power strength was compared to all the current corner locations of the perimeter sweep to determine the next quadrant. The closest corner location to the highest received signal power location is then used to select the next quadrant to search. This corner is determined to be the first corner for the next recursion; the remaining three are derived by leveraging their relationship to the initial corners. Two of the three corner points are derived using the half-length of each side segment to determine the point location. The final point is determined by creating two line segments to find the center position of the current search perimeter. Now that the corners of the quadrant are defined, the UAV calculates the distance between all the corner locations, where the shortest distance determines the starting location. The corner locations are updated for the next recursive perimeter sweep and are used to define the path the UAV will traverse. Upon the last recursion, the intersection point is selected as our final and best guess for the location of the missing rover, as shown in Figure <ref>. One of the most notable experiences was learning the difference between the emulation environment and the real-world execution of the experiment. The UNT team took into consideration the potential interference of signal noise from unknown sources that were not present in the emulated environment. This reality was not stated in the competition description and rule book, but upon reviewing the signal values in the emulation, the team realized the signals were too clean and did not show similar instances of simulated noise. Once this was realized, the search algorithm was adapted to account for noise, which was the sampling buffer and averaging of the recorded values. This allowed for an advantage in real-world flight. §.§ University of Georgia (UGA) Team The UGA approach centered on achieving rover localization by expeditiously solving for the unknown path loss through continuously updating a 2D regression function or radio map <cit.>. The algorithm autonomously navigates the UAV to estimate the transmitter’s position from a handful of strategically collected radio samples. Due to the limited flight time and access to real data, the regression model needed to make minimal assumptions <cit.>. Therefore, the team chose GP regression for its sparse, adaptable, and statistically rigorous path loss estimation <cit.>. 
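To make the intersection step described above concrete, the following small sketch (our own plain-Python geometry, standing in for the pyturf-based implementation; all coordinates are hypothetical) builds the two segments from the per-edge maxima and returns their crossing point as the rover estimate.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the lines through p1-p2 and p3-p4.

    Points are (lon, lat) pairs; returns None for (near-)parallel lines.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Hypothetical per-edge maxima (lon, lat): the south and north maxima define the
# "vertical" segment, the west and east maxima the "horizontal" one; their
# crossing is the current guess for the rover position.
south_max, north_max = (-78.6990, 35.7250), (-78.6985, 35.7278)
west_max, east_max = (-78.7003, 35.7262), (-78.6971, 35.7260)
print(line_intersection(south_max, north_max, west_max, east_max))
```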
The GP radio map was updated with new valid samples, and Bayesian optimization guided the UAV to optimal sample locations to improve the radio map <cit.>. Initially, the team modeled a smooth 2D path loss function in Robotarium <cit.>, a Python-based open-source robotic simulator, to analyze expected self-guided behavior. Using a 2D normal distribution for the basic path loss allowed control of the mean as a transmitter’s location and the covariance to model the decay. A simulation result is shown in Figure <ref>a. The robotic platform tested various path loss models, signal noise levels, and agential behavior with different heuristic optimization acquisition functions <cit.>. Predetermined routines of drone maneuvers before and after Bayesian optimization proved beneficial. The robot established a path loss direction with a few widely spaced samples, creating a reliable start that rapidly converged on the transmitter’s position. When repeatedly estimating a similar position, the robot could construct a circle of waypoints around that estimated position. This strategy reduced the risk of an incorrect radio map due to noise or anomalies in the radio samples and refined the final estimated position. By trialing different Bayesian optimization acquisition functions, the team chose to use the upper confidence bound function for final drone guidance. After successful simulation testing in Robotarium, the team ensured the GP received valid and accurate path loss samples. This filtering step was crucial due to the limited testbed radio data. The receiver randomly encountered rapid signal fading, termed receiver ”dropout.” It was important to exclude these dropout samples from the path loss estimate to avoid slowing down the drone’s estimation. The team set up a low-power laboratory testbed to gather receiver data. Tests confirmed that receiver movement increased the chance of encountering receiver dropout <cit.>. Various time-series filters were then trialed with the lab and AERPAW testbed data but could not remove the dropout. A final go-no-go filter was designed based on grouping the confidence level readings over short periods. The model excluded readings during high variance in the receiver's confidence level and discarded the averaged power as an outlier if it exceeded this quality-variance threshold. With these preliminary steps using the open-source SDR and Robotarium platforms, the team developed a final algorithm to deploy on AERPAW. The UAV continually updated a grid of points predefined by latitude and longitude to represent the GP's radio map. The UAV used two maps: one for UAV guidance and another to estimate the UGV position. These maps correspond to the separated boundaries seen in Figure <ref>, and a final digital twin result is shown in Figure <ref>b. The UAV began its mission with a quality-variance threshold set using the existing testbed data. However, after takeoff and facing northwest, the UAV could exponentially increase the threshold until it accepted this first sample. After taking this first critical sample, the UAV went on to sample three additional waypoints before autonomously selecting the next waypoint using Bayesian optimization. The UGV radio map provided the three-minute and ten-minute estimates. Figure <ref>c shows a final estimate of the second testbed run. Throughout the effort to construct a rapid radio localization algorithm, the team carefully tuned the necessary parameters and initial settings of the GP kernel functions. 
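The go/no-go dropout filter described above can be sketched in a few lines; this is our illustration, with an invented variance threshold and example readings, not the UGA team's code.

```python
import statistics
from typing import List, Optional, Tuple

def go_no_go(readings: List[Tuple[float, float]], var_threshold: float = 4.0) -> Optional[float]:
    """readings: (power_db, confidence) pairs gathered over a short period.

    Returns the mean power if the confidence variance is acceptable, else None
    (the whole group is treated as a receiver 'dropout' and discarded).
    """
    if len(readings) < 2:
        return None
    powers, confidences = zip(*readings)
    if statistics.variance(confidences) > var_threshold:
        return None
    return statistics.fmean(powers)

# Example: a stable group passes; a group with wildly varying confidence is rejected.
print(go_no_go([(-62.1, 18.0), (-61.4, 17.5), (-62.8, 18.2)]))  # -> averaged power
print(go_no_go([(-60.0, 19.0), (-95.0, 2.0), (-61.0, 18.5)]))   # -> None (dropout)
```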
Success in the Robotarium simulation and AERPAW digital twin environment confirmed the algorithm’s task effectiveness within the designated time frame without violating any boundary constraints. Lab data collection also allowed for further enhancements of the final algorithm. Still, the team conservatively managed any parameter adjustments to account for discrepancies and limitations expected from these mock environments. § RESULTS The emulation and testbed performances were completed by mid-December 2023. For the testbed run, all fifteen experiments (three positions for five finalist teams) were completed on the same day from approximately 10 am to 5 pm. Figure <ref> shows an instance where the UAV was passing close to the rover's location while the rover was at the third hidden location. Overall, each team demonstrated strong performances in specific runs, highlighting the necessity for various approaches and revealing distinct insights on a team's performance within specific circumstances. Additionally, broader trends and patterns emerge by examining the results across all runs for each team. The results are shown in Table <ref>. Within the emulator, NYU showed stellar performances in most individual runs, giving excellent final estimate consistency, and swept the win for both the three-minute and ten–minute estimates. The testbed results show much more variation among the team's results, highlighting the challenge and gap between the real-world testbed and a more controlled digital twin environment. The first run's three-minute result shows a relatively accurate result from UGA due to the short start routine before full autonomous control. For the final estimate, UNT improved its final estimate significantly, returning the best estimate within the entire testbed trials, within 17.6 m from the rover's true location. However, the second run gives distinctly different results. UNT gives the best three-minute estimate by a wide margin. UGA has the largest estimated improvement, showing a team-best accuracy of 27.8 m from the rover location. The third run shows similar three-minute results from the UNT and NYU teams. However, the NYU team improved their estimate, giving the best final ten-minute estimate for the most challenging trial. § CONCLUSIONS AND ANALYSIS The UNT team secured a first-place award across both categories from the final testbed results. The UGA and NYU teams each earned second and third-place finishes in the two categories. The collaborative achievements of the AERPAW team and all participating schools mark the inaugural successful completion of the AERPAW Challenge, and the data collected is now accessible to the public. Anticipating this new data, efforts are already underway to integrate the data collected into improving the emulator to continue closing the digital twin loop. The heterogeneous results across the digital twin emulation and testbed clearly show a need for wireless community platforms that unlock the highest levels of wireless research. Mutual efforts like these provide training and data for a robust wireless future. Please stay tuned for more exciting developments and similar competitive challenges. § ACKNOWLEDGMENTS The three participating teams (NYU, UNT, UGA) are grateful to AERPAW and all sponsors for the exciting and challenging educational experience. 1 kwonRFSignalSource2023a H. Kwon and I. 
Guvenc, “RF Signal Source Search and Localization Using an Autonomous UAV with Predefined Waypoints,” in IEEE 97th Vehicular Technology Conference (VTC2023-Spring), pp. 1–6, 2023. AERPAW V. Marojevic et al., “Advanced wireless for unmanned aerial systems: 5g standardization, research challenges, and aerpaw architecture,” IEEE Vehicular Technology Magazine, vol. 15, no. 2, pp. 22–30, 2020. rasmussenGaussianProcessesMachine2006 C. E. Rasmussen and C. K. I. Williams, Gaussian processes for machine learning. in Adaptive computation and machine learning. Cambridge, Mass: MIT Press, 2006. bayesopt F. Nogueira, “Bayesian Optimization: Open source constrained global optimization tool for Python,” accessed: 2024-05-30. [Online]. Available: <https://github.com/bayesian-optimization/BayesianOptimization>. PyturfPyturf2024 “Pyturf/pyturf,” pyturf, accessed: 2024-5-30. [Online]. Available: <https://github.com/pyturf/pyturf>. wilsonEfficientlySamplingFunctions2020 J. Wilson, et al., “Efficiently sampling functions from Gaussian process posteriors,” in Proceedings of the 37th International Conference on Machine Learning, pp. 10292–10302, Nov. 2020. santosMultirobotLearningCoverage2021 M. Santos et al., “Multi-robot Learning and Coverage of Unknown Spatial Fields,” in International Symposium on Multi-Robot and Multi-Agent Systems, pp. 137–145, 2021. finkOnlineMethodsRadio2010 J. Fink and V. Kumar, “Online methods for radio signal mapping with mobile robots,” in IEEE International Conference on Robotics and Automation, pp. 1940–1945, 2010. shresthaRadioMapEstimation2023 R. Shrestha et al., “Radio Map Estimation in the Real-World: Empirical Validation and Analysis,” in IEEE Conference on Antenna Measurements and Applications, pp. 169–174, 2023. duvenaudAutomaticModelConstruction D. K. Duvenaud, “Automatic Model Construction with Gaussian Processes,” accessed: 2023-05-25. [Online]. Available: <https://www.cs.toronto.edu/ duvenaud/thesis.pdf> matthewsGaussianProcessBehaviour2018a Matthews, Hron, Rowland, Turner, and Ghahramani, “Gaussian Process Behaviour in Wide Deep Neural Networks,” in International Conference on Learning Representations, Feb. 2018. deisenrothGaussianProcesses M. Deisenroth, “Gaussian Processes,” accessed: 2024-05-30. [Online]. Available: <https://www.deisenroth.cc/teaching/2019-20/linear-regression-aims/lecture_gaussian_processes.pdf> wangRecentAdvancesBayesian2023 X. Wang et al., “Recent Advances in Bayesian Optimization,” ACM Comput. Surv., vol. 55, no. 13s, pp. 287:1-287:36, Jul. 2023. pickemRobotariumRemotelyAccessible2017 D. Pickem et al., “The Robotarium: A remotely accessible swarm robotics research testbed,” in IEEE International Conference on Robotics and Automation.1em plus 0.5em minus 0.4emIEEE, pp. 1699–1706, 2017. matzFundamentalsTimeVaryingCommunication2011 G. Matz and F. Hlawatsch, “Fundamentals of Time-Varying Communication Channels,” in Wireless Communications Over Rapidly Time-Varying Channels.1em plus 0.5em minus 0.4emElsevier, pp. 1–63, 2021. IEEEtran § BIOGRAPHY SECTION Paul Kudyba recently received his master's in electrical and computer engineering from The University of Georgia. He received his B.S. in 2015 from Southern Polytechnic State University, now Kennesaw State. He is currently seeking opportunities for collaboration in machine learning for industrial processes and Internet of Things. Jaya Sravani Mandapaka is currently pursuing her Ph.D in Electrical Engineering at the University of North Texas. 
Her research is focused on UAS-UAS Communications- use case scenarios and message protocols. Jaya received her master's degree in computer science engineering from Arkansas State University in May 2021 and her B.S. in Electrical and Communication Engineering from Jawaharlal Nehru Technological University Hyderabad, India. Her research interests lie in communication Advancements of Advanced Air Mobility (AAM) vehicles. Weijie Wang received a B.S. degree in Computer Engineering from New York University Tandon School of Engineering, New York, United States in 2023. He is working toward a M.S. degree at Columbia University, New York, United States. His research interests include UAV-assisted wireless communication, embedded systems, and Internet of Things. Logan McCorkendale recently graduated with his Bachelor of Science in Electrical Engineering from the University of North Texas (UNT). His research is focused on autonomous system design for the advancement of Advanced Air Mobility (AAM). Zachary McCorkendale is a recent graduate from the Department of Electrical Engineering at the University of North Texas. His research interests include Autonomy and UAV communications for the Advancement of Air Mobility. Mathias Feriew Kidane recently received his B.S. degree in Electrical Engineering from the University of North Texas. Haijian Sun is an Assistant Professor in the School of Electrical and Computer Engineering at The University of Georgia. He obtained his Ph.D. in the Department of Electrical and Computer Engineering from Utah State University, USA. His current research interests include vehicular communication, wireless communication for 5G and beyond, IoT communications, and optimization analysis. Dr. Sun is a Member of the IEEE. Eric Adams A longtime researcher at the University of Massachusetts Amherst ECE, he is the founder of 3 companies relating to remote sensing, edge-to-cloud computing, and cyberinfrastructure for small UAS and Advanced Air Mobility. Kamesh Namuduri is a professor in the Department of Electrical Engineering at the University of North Texas. His research interests include Autonomy and UAV communications. Fraida Fund is a Research Assistant Professor in the Department of Electrical and Computer Engineering at the NYU Tandon School of Engineering. She received her Ph.D. degree in Electrical Engineering from NYU Tandon School of Engineering. Her research interests include low latency wireless network protocols, economics of wireless networks, and design of open experimental platforms for research and education in communication networks. Mihail Sichitiu is a professor in the Department of Electrical Engineering at NC State University. His primary research interest is in Wireless Networking with an emphasis on multi-hop networking and wireless local area networks. Ozgur Ozdemir is an Associate Research Professor at the Department of Electrical and Computer Engineering at NC State University. His research interests include mmWave channel sounding, SDRs, and UAV communications.
http://arxiv.org/abs/2407.12437v1
20240717094527
Variable-Agnostic Causal Exploration for Reinforcement Learning
[ "Minh Hoang Nguyen", "Hung Le", "Svetha Venkatesh" ]
cs.LG
[ "cs.LG", "cs.AI" ]
§ ABSTRACT Modern reinforcement learning (RL) struggles to capture real-world cause-and-effect dynamics, leading to inefficient exploration due to extensive trial-and-error actions. While recent efforts to improve agent exploration have leveraged causal discovery, they often make unrealistic assumptions about the causal variables in the environments. In this paper, we introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL), incorporating causal relationships to drive exploration in RL without specifying environmental causal variables. Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms. Subsequently, it constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion. This can be leveraged to generate intrinsic rewards or establish a hierarchy of subgoals to enhance exploration efficiency. Experimental results showcase a significant improvement in agent performance in grid-world, 2D games and robotic domains, particularly in scenarios with sparse rewards and noisy actions, such as the notorious Noisy-TV environments. Keywords: Reinforcement Learning, Causality, Deep RL. § INTRODUCTION Reinforcement learning (RL) is a machine learning paradigm wherein agents learn to improve decision-making over time through trial and error <cit.>. While RL has demonstrated remarkable success in environments with dense rewards <cit.>, it tends to fail in the case of sparse rewards, where the agents do not receive feedback for extended periods, resulting in unsuccessful learning. Such scarcity of rewards is common in real-world problems: e.g., in a search mission, the reward is only granted upon locating the target. Prior studies tackle this problem by incentivizing exploration through intrinsic rewards <cit.>, motivating exploration of the unfamiliar, or with hierarchical reinforcement learning (HRL) <cit.>. However, these methods encounter difficulties when scaling up to environments with complex structures as they neglect the causal dynamics of the environments. Consider the example of a search in two rooms (Fig. <ref>(a, b)), where the target is in the second room, accessible only by opening a “door” with a “key” in the first room. Traditional exploration methods might force the agent to explore all corners of the first room, even though only the “key” and “door” areas are crucial. Knowing that the action pick up key is the cause of the effect door opened will prevent the agent from aimlessly wandering around the door before the key is acquired. Another challenge with these approaches is the Noisy-TV problem <cit.>, where the agent excessively explores unfamiliar states and actions that may not contribute to the ultimate task. These inefficiencies raise a new question: Can agents effectively capture causality to efficiently explore environments with sparse rewards and distracting actions? Inspired by human reasoning, where understanding the relationship between the environmental variables (EVs) helps exploration, causal reinforcement learning (CRL) is grounded in causal inference <cit.>.
CRL research often involves two phases: (i) causal structure discovery and (ii) integrating causal knowledge with policy training <cit.>. Recent studies have demonstrated that such knowledge significantly improves the sample efficiency of agent training <cit.>. However, current approaches often assume access to all environmental causal variables and pre-factorized environments <cit.>, simplifying the causal discovery phase. In reality, causal variables are not given from observations, and constructing a causal graph for all observations becomes a non-trivial task due to the computational expense associated with measuring causality. Identifying the EVs crucial for downstream tasks is itself challenging, thereby limiting the effectiveness of CRL methods. These challenges necessitate the identification of a subset of crucial EVs before discovering causality. This paper introduces the Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL) framework to address these limitations. The framework is an iterative process consisting of three phases: “Crucial Step Detection”, “Causal Structure Discovery”, and “Agent Training with Causal Information”. The first phase aims to discover a set of crucial observation-action steps, denoted as S_COAS. The term “crucial observation-action step” refers to an observation and action pair stored in the agent's memory that is identified as crucial for constructing the causal graph. We extend the idea of detecting crucial EVs to detecting crucial observation-action steps, motivated by two reasons. Firstly, variables in the environment are associated with the observations, e.g., the variable “key” corresponds to the agent's observation of the “key”. Secondly, actions also contribute to causality, e.g., the agent cannot use the “key” without picking it up. One way of determining crucial observation-action steps involves providing the agent with a mechanism to evaluate them based on their contribution to a meaningful task <cit.>. We implement this mechanism using a Transformer architecture, whose task is to predict the observation-action step leading to the goal given past steps. We rank the significance of observation-action steps based on their attention scores <cit.> and pick out the top-ranking candidates, since the Transformer must attend to important steps to predict correctly. In Phase 2, we adapt causal structure learning <cit.> to discover the causal relationships among the observation-action steps identified in the discovered S_COAS set, forming a causal graph G. The steps serve as the nodes of the causal graph, while the edges can be identified through a two-phase iterative optimization of the functional and structural parameters representing the Structural Causal Model (SCM). In Phase 3, we train the RL agent based on the causal graph G. To prove the versatility of our approach in improving the sample efficiency of RL agents, we propose two methods to utilize the causal graph: (i) formulate intrinsic reward-shaping equations grounded on the captured causal relationship; (ii) treat the nodes in the causal graph as subgoals for HRL. During subsequent training, the updated agent interacts with the environments, collecting new trajectories for the agent memory used in the next iteration of Phase 1. In our experiments, we use causally structured grid-world and robotic environments to empirically evaluate the performance improvement of RL agents when employing the two approaches in Phase 3.
This improvement extends not only to scenarios with sparse rewards but also to those influenced by the Noisy-TV problem. We also investigate the contributions of the core components of VACERL, analyzing the emerging learning behaviour that illustrates the captured causality of the agents. Our main contributions can be summarized as: * We present a novel VACERL framework, which autonomously uncovers causal relationships in RL environments without assuming environmental variables or factorized environments. * We propose two methods to integrate our framework into common RL algorithms using intrinsic reward and hierarchical RL, enhancing exploration efficiency and explaining agent behaviour. * We create causally structured environments, with and without Noisy-TV, to evaluate RL agents' exploration capabilities, demonstrating the effectiveness of our approach through extensive experiments. § RELATED WORK Causal Reinforcement Learning (CRL) is an emerging field that integrates causality and reinforcement learning (RL) to enhance decision-making in RL agents, addressing limitations associated with traditional RL, such as sample efficiency and explainability <cit.>. CRL methods can be categorized based on their experimental setups, whether they are online or offline <cit.>. Online-CRL involves real-time interaction with the environment <cit.>, while Offline-CRL relies on learning from a fixed previously collected dataset <cit.>. Our framework operates online, using trajectories from an online policy for an agent training while simultaneously constructing the underlying causal graph. Prior works in CRL have focused on integrating causal knowledge into RL algorithms and building causal graphs within the environment. Pitis et al., <cit.> use Transformer model attention weights to generate counterfactual data for training RL agents, while Coroll et al., <cit.> use causal effect measurement to build a hierarchy of controllable effects. Zhang et al., <cit.> measure the causal relationship between states and actions with the rewards and redistribute the rewards accordingly. For exploration purposes, CRL research integrates causal knowledge by rewarding the agents when they visit states with higher causal influence <cit.> or treating the nodes of the causal graph as potential subgoals in HRL <cit.>. Zhang et al., <cit.> measure the average causal effect between a predefined group of variables and use this as a reward signal, meanwhile, Seitzer et al., <cit.> propose conditional mutual information as a measurement of causal influence and use it to enhance the exploration of the RL agent. Hu et al., <cit.> introduce a continuous optimization framework, building a causal structure through a causality-guided intervention and using it to define hierarchical subgoals. Despite advancements, previous methods often assume prior knowledge of EVs and the ability to factorize the environment accordingly. Our framework autonomously detects crucial steps associated with the key EVs, enabling causal structure learning without predefined EVs, thus, distinguishing it from previous methods. The causal graph uncovered by VACERL is versatile and can complement existing RL exploration methods, such as intrinsic reward motivation or as hierarchical subgoals. Intrinsic Reward Motivation addresses inefficient training in sparse reward RL environments; an issue associated with random exploration techniques like ϵ-greedy <cit.>. 
The core idea underlying these motivation strategies is to incorporate intrinsic rewards, i.e., bonuses added to the environment rewards to facilitate exploration <cit.>; the bonuses are computed either from prediction error <cit.> or from count-based criteria <cit.>. However, these methods struggle to scale to environments with complex structure, especially in the Noisy-TV scenario, where the agent becomes excessively curious about unpredictable states and ignores the main task <cit.>. VACERL tackles this by incorporating a mechanism to identify essential steps for the primary task and construct the causal graph around these steps, thus enabling the agent to ignore actions generating Noisy-TV. Goal-conditioned Hierarchical Reinforcement Learning (HRL) is another approach that is used to guide agent exploration. Levy et al., <cit.> propose a multilevel policies framework, in which each policy is trained independently and the outputs of higher-ranking policies are used as subgoals for lower-level policies. Zhang et al., <cit.> propose an adjacency constraint method to restrict the search space of subgoals, whereas Pitis et al., <cit.> introduce a method based on maximum entropy gain motivating the agent to pursue past achieved goals in sparsely explored areas. However, traditional HRL methods often rely on random subgoal exploration, which has shown inefficiency in learning high-quality hierarchical structures compared to causality-driven approaches <cit.>. Hu et al., <cit.> operate under the assumption of pre-availability and disentanglement of causal EVs from observations, using these EVs as suitable subgoals for HRL. However, they overlook cases where these assumptions are not applicable, e.g., when the observation is an image. In our approach, subgoals are determined by abstract representations of the observation and action, thereby extending the applications of causal HRL to unfactorized environments. § METHODS §.§ Background §.§.§ RL Preliminaries. We are concerned with the Partially Observable Markov Decision Process (POMDP) framework, denoted as the tuple (S,A,O,P,Z,r,γ). The framework includes sets of states S, actions A, observations O providing partial information about the true state, a transition probability function P(s'| s,a), and an observation model Z denoted as Z(o| s,a), indicating the probability of observing o when taking action a in state s. r:S× A→ R is a reward function that defines the immediate reward that the agent receives for taking an action in a given state, and γ is the discount factor. The objective of the RL agent is to maximize the expected discounted cumulative reward E_π,P[∑_t=0^∞γ^tr(s_t,a_t)] over a policy function π mapping a state to a distribution over actions. §.§.§ Causality. Causality is explored through the analysis of relationships among variables and events <cit.>. It can be described using the SCM framework <cit.>. An SCM over a finite set V comprising M variables is given by V_i:=f_i(PA(V_i)_G,U_i), ∀ i∈{1,…,M}, where F={f_1,f_2,...,f_M} denotes the set of generating functions based on the causal graph G and U={U_1,U_2,...,U_M} represents the set of noise in the model. The graph G={V,E} provides the edges e_ij∈ E, where e_ij=1 if V_j∈PA(V_i), i.e., V_j is a cause of V_i, and e_ij=0 otherwise. The SCM framework can be characterized by two parameter sets: the functional parameter δ, representing the generating functions f; the structural parameter η∈ R^M× M, modelling the adjacency matrix of G <cit.>.
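To ground this notation, the toy sketch below (our own, with three variables, arbitrary generating functions, and the simplifying assumption that the variables are already in causal order) shows how a soft adjacency matrix η can be turned into a sampled graph with e_ij ~ Bernoulli(σ(η_ij)), and how each V_i is then generated from its parents and a noise term U_i.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3                                    # number of variables

# Structural parameter: eta[i, j] is the (pre-sigmoid) belief that V_j -> V_i.
eta = rng.normal(size=(M, M))
np.fill_diagonal(eta, -np.inf)           # no self-loops

def sample_graph(eta: np.ndarray) -> np.ndarray:
    """Draw a binary adjacency matrix with e_ij ~ Bernoulli(sigmoid(eta_ij))."""
    probs = 1.0 / (1.0 + np.exp(-eta))
    return (rng.random(eta.shape) < probs).astype(int)

def generate(adj: np.ndarray) -> np.ndarray:
    """Toy generating functions f_i: V_i = sum of parents + Gaussian noise U_i.

    Assumes the variables are indexed in a topological order 0..M-1 so that
    parents of V_i only come from indices j < i.
    """
    v = np.zeros(M)
    for i in range(M):
        parents = [j for j in range(i) if adj[i, j] == 1]
        v[i] = sum(v[j] for j in parents) + rng.normal()
    return v

adj = sample_graph(eta)
print("sampled graph:\n", adj)
print("sampled variables:", generate(adj))
```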
§.§ Variable-Agnostic Causal Exploration Reinforcement Learning Framework §.§.§ Overview. The primary argument of VACERL revolves around the existence of a finite set of environment variables (EVs) that the agent should prioritize when constructing the causal graph. We provide a mechanism to detect these variables, aiming to reduce the number of nodes in the causal graph and thereby mitigate the complexity of causal discovery. Initially, we deploy an agent to randomly explore the environment and gather successful trajectories. Once the agent accidentally reaches the goal a few times, we initiate Phase 1, reformulating EV detection as finding the “crucial observation-action steps” (COAS) from the collected trajectories. The agent is equipped with the ability to rank the importance of these steps by employing the Transformer (TF) model's attention scores (a_s). The top-M highest-score steps form the crucial set S_COAS. Subsequently, in Phase 2, we identify the causal relationships among steps in S_COAS to learn the causal graph G of the environment. In Phase 3, we extract a hierarchical causal tree from graph G and use it to design two approaches that enhance the RL agent's exploration capability. We then utilize the updated agent to gather more successful trajectories and repeat the process from Phase 1. See Fig. <ref>(c) for an overview of VACERL and detailed implementation in Supp. A [The source is available at https://github.com/mhngu23/Variable-Agnostic-Causal-Exploration-for-Reinforcement-Learning-VACERL]. §.§.§ Phase 1: Crucial Step Detection. We hypothesize that important steps (a step is a pair of observation and action) are those in the agent's memory that the agent must have experienced to reach the goal. Hence, these steps should be found in trajectories where the agent successfully reaches the goal. We collect a buffer B=({o_t^1,a_t^1}_t=1^T^1,{o_t^2,a_t^2}_t=1^T^2,…,{o_t^n,a_t^n}_t=1^T^n), where n is the number of episodes wherein the agent successfully reaches the goal state, o_t and a_t are the observation and action, respectively, at step t in an episode, and T^k is the number of steps in the k-th episode. We train the TF model, whose input consists of the steps from the beginning to the second-to-last step in each episode and whose output is the last step. The reasoning behind choosing the last step as the prediction target is that it highlights which steps in the trajectories are crucial for successfully reaching the goal. For the k-th training episode sampled from B, we predict (ô_T^k,â_T^k)=TF({o_t^k,a_t^k}_t=1^T^k-1). The model is trained to minimize the loss ℒ_TF=𝔼_k[MSE((o_T^k,a_T^k),(ô_T^k,â_T^k))], where MSE is the mean square error. Following training, we rank the significant observation-action steps based on their attention scores a_s (detailed in Supp. A) and pick out the top-M highest-score steps. We argue that the top-attended steps should cover crucial observations and actions that contribute to the last-step prediction task, associated with meaningful causal variables. For instance, observing the key and the action of picking it up are linked to the variable “key”. In continuous state space, the agent may repeatedly attend to similar steps involving the same variable. For example, the agent might select multiple instances of observing the key from different positions where the agent is located and picking it up. As a result, the set S_COAS will be filled with similar steps relating to picking up the key, ignoring other important steps.
To address this, we introduce a function 𝚒𝚜_𝚜𝚒𝚖 to decide if two steps are the same: * For discrete action space environments, 𝚒𝚜_𝚜𝚒𝚖((o,a),(o',a')) = 1 if cos(o, o')>ϕ_sim and a = a', else 0. * For continuous action space environments, 𝚒𝚜_𝚜𝚒𝚖((o,a),(o',a'))=1 if cos((o,a),(o',a'))>ϕ_sim, else 0. where cos(o,o')=o· o'/(‖o‖·‖o'‖) and ϕ_sim is a similarity threshold. Intuitively, if the agent has two observations with a high cosine similarity and takes the same action, these instances are grouped. The score a_s for a group is the highest a_s among the steps in this group. The proposed 𝚒𝚜_𝚜𝚒𝚖 method will also be effective in noisy environments, particularly when the observations are trained representations rather than raw pixel data. Subsequently, we add the steps with the highest a_s to S_COAS. We define an abstract function ℐ to map a pair (o_t^k,a_t^k) to an element i in S_COAS: i=ℐ((o_t^k,a_t^k)) ⟺ 𝚒𝚜_𝚜𝚒𝚖((o_t^k,a_t^k),(o,a)_i)=1, and collect a new buffer B^*, where: B^*=B\{(o_t^k,a_t^k):ℐ((o_t^k,a_t^k))∉ S_COAS} Here, B^* is B with the unimportant steps (those not in S_COAS) removed. §.§.§ Phase 2: Causal Structure Discovery. Inspired by the causal learning method proposed by Ke et al., <cit.>, we uncover the causal relationships among the M steps identified in the S_COAS set. Our approach optimizes the functional parameter δ and the structural parameter η associated with the SCM framework. The optimization of these parameters follows a two-phase iterative update process, wherein one parameter is fixed while the other is updated. Both sets of parameters are initialized randomly and undergo training using the buffer B^* (Eq. <ref>). Our intuition for training the SCM is that the “cause” step has to precede its “effect” step. Therefore, we train the model to predict the step at timestep t using the sequence of steps leading to that particular timestep. In the first causal discovery phase, we fix η and optimize δ. For a step t in the k-th trajectory, we formulate f as: (ô_t^k,â_t^k)=f_δ,ℐ((o_t^k,a_t^k))({o_t'^k,a_t'^k}_t'=1^t-1 | G) where the input {o_t'^k,a_t'^k}_t'=1^t-1 is restricted to the steps from 1 to t-1 that belong to the parental set PA(ℐ((o_t^k,a_t^k))), as defined by the current state of G parameterized by η. We use MSE as the loss function: ℒ_δ,G=𝔼_t,k[MSE((o_t^k,a_t^k),(ô_t^k,â_t^k))] In the second phase, we fix δ and optimize the parameter η by updating the causality from variable X_j to X_i as η_ij=η_ij-β∑_h((σ(η_ij)-e_ij^(h))ℒ_δ,G^(h),i(X)), where h indicates the h-th drawn sample of causal graph G given the current parameter η, and β is the update rate. e_ij^(h) is the edge from variable X_j to X_i of G^(h), and σ is the sigmoid function. ℒ_δ,G^(h),i(X) is the MSE loss in Eq. <ref> for the specific variable X_i of the current function f_δ,X_i under graph G^(h). After updating parameter η for a number of steps, we repeat the optimization process of parameter δ. Finally, we use the resulting structural parameter η to construct the causal graph G. We derive edge e_ij of graph G using: e_ij=1 if η_ij>η_ji and σ(η_ij)>ϕ_causal, and e_ij=0 otherwise, where ϕ_causal is the causal threshold. §.§.§ Phase 3: Agent Training with Causal Information. We extract a refined hierarchical causal tree from graph G, with the intuition of focusing on steps that are relevant to achieving the goal. Using the goal-reaching step as the root node of the tree, we recursively determine the parental steps of this root node within graph G, and subsequently for all identified parental steps. This causal tree is used to design causal exploration approaches.
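A minimal sketch of this tree extraction is given below (our own illustration; it uses the e_ij convention above, i.e., adj[i][j]=1 means step j is a cause of step i, and replaces the recursive walk with an equivalent breadth-first traversal). The per-node depths it records are what the exploration approaches described next consume.

```python
from collections import deque
from typing import Dict, List

def extract_causal_tree(adj: List[List[int]], goal: int) -> Dict[int, int]:
    """Breadth-first walk from the goal-reaching step towards its causes.

    adj[i][j] == 1 means step j is a parent (cause) of step i in graph G.
    Returns {node: depth}, with the goal node at depth 0.
    """
    depth = {goal: 0}
    queue = deque([goal])
    while queue:
        node = queue.popleft()
        for j, is_parent in enumerate(adj[node]):
            if is_parent and j not in depth:     # keep the shallowest depth per node
                depth[j] = depth[node] + 1
                queue.append(j)
    return depth

# Toy graph with 4 steps; step 3 is the goal-reaching step,
# step 2 causes 3, and steps 0 and 1 cause 2.
adj = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
]
print(extract_causal_tree(adj, goal=3))   # {3: 0, 2: 1, 0: 2, 1: 2}
```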
These approaches include (i) intrinsic rewards based on the causal tree, and (ii) utilizing causal nodes as subgoals for HRL. For the first approach, we devise a reward function where nodes closer to the root are deemed more important and receive higher rewards, preserving the significance of the reward associated with the root node and maintaining the agent's focus on the goal. In the second approach, subgoals are sampled from nodes in the causal tree, with nodes closer to the root sampled more frequently. We present the detailed implementations and empirically evaluate these approaches in Sec. <ref> and Sec. <ref>. § EXPERIMENTS §.§ VACERL: Causal Intrinsic Rewards - Implementation and Evaluation §.§.§ Causal Intrinsic Reward. To establish the relationship where nodes closer to the goal hold greater importance, while ensuring the agent remains focused on the goal, we introduce intrinsic reward formulas as follows: r_causal(o,a) =r_g-(d-1)r_0 ∀(o,a)∈ D_d where r_g is the reward given when the agent reach the goal, r_causal(o,a) is the intrinsic reward given to a node (o,a), D_d is the set of nodes at depth d of the tree, r_0=α(r_g/h) with α is a hyperparameter and h is the tree height. In the early learning stage, especially for hard exploration environments, the causal graph may not be well defined and thus, r_causal may not provide a good incentive. To mitigate this issue, we augment r_causal with a count-based intrinsic reward, aiming to accelerate the early exploration stage. Intuitively, the agent is encouraged to visit never-seen-before observation-action pairs in early exploration. Notably, unlike prior count-based methods <cit.>, we restrict counting to steps in S_COAS, i.e., only crucial steps are counted. Our final intrinsic reward is: r_causal^+=(1/√(n_(o,a)_t))r_causal(o_t,a_t) where n_(o,a)_t is the number of time observation o and action a is encountered up to time step t. Starting from zero, this value increments with each subsequent encounter. We add the final intrinsic reward to the environment reward to train the policy. The total reward is r(s_t,a_t)=r_env(s_t,a_t)+r_causal^+(s_t,a_t), where r_env is the extrinsic reward provided by the environment. §.§.§ Environments. We perform experiments across three sets of environments: FrozenLake (FL), Minihack (MH), and Minigrid (MG). These environments are tailored to evaluate the approach in sparse reward settings, where the agent receives a solitary +1 reward upon achieving the goal (detailed in Supp. B) FL includes the 4x4 (4x4FL) and 8x8 (8x8FL) FrozenLake environments (Supp Fig. B.1(d,e)) <cit.>. Although these are classic navigation problems, hidden causal relationships exist between steps. The pathway of the agent can be conceptualized as a causal graph, where each node represents the agent's location cell and its corresponding action. For example, moving right from the cell on the left side of the lake can be identified as the cause of the agent falling into the lake cell. We use these environments to test VACERL's efficiency in discrete state space, where 𝚒𝚜_𝚜𝚒𝚖 is not used. MH includes MH-1 (Room), MH-2 (Room-Monster), MH-3 (Room-Ultimate) and MH-4 (River-Narrow) <cit.>. These environments pose harder exploration challenges compared to FL due to the presence of more objects. Some environments even require interaction with these objects to reach the goal, such as killing monsters (MH-2 and MH-3) or building bridges (MH-4). For this set of environments, we use pixel observations. 
MG is designed based on Minigrid Environment <cit.>, with escalating causality levels. These include the Key Corridor (MG-1) (Supp. Fig. B.1(a)) and 3 variants of the BlockUnlockPickUp: 2 2x2 rooms (MG-2 Fig. <ref>(a)), 2 3x3 rooms (MG-3) and the 3 2x2 rooms (MG-4) (Supp Fig. B.1(b,c)). The task is to navigate and locate the goal object, in a different room. These environments operate under POMDP, enabling us to evaluate the framework's ability to construct the causal graph when only certain objects are observable at a timestep. In these environments, the agent completes the task by following the causal steps: firstly, remove the obstacle blocking the door by picking it up and dropping it in another position, then, pick up the key matching the colour of the door to open it; and finally, pick up the blue box located in the rightmost room, which is the goal. In MG-3, distracting objects are introduced to distract the agent from this sequence of action. In any case, intrinsic exploration motivation is important to navigate due to reward sparsity; however, blind exploration without an understanding of causal relationships can be ineffective. Noisy-TV setting is implemented as an additional action (action to watch TV) and can be incorporated into any of the previous environments, so the agent has the option to watch the TV at any point while navigating the map <cit.>. When taking this watching TV action, the agent will be given white noise observations sampled from a standard normal distribution. As sampled randomly, the number of noisy observations can be conceptualized as infinite. §.§.§ Baselines. PPO <cit.>, a policy gradient method, serves as the backbone algorithm of our method and other baselines. Following Schulman et al., <cit.>, vanilla PPO employs a simple entropy-based exploration approach. Other baselines are categorized into causal and non-causal intrinsic motivation. Although our focus is causal intrinsic reward, we include non-causal baselines for comprehensiveness. These include popular methods: Count-based <cit.> and RND <cit.>. Causal motivation baselines include ATTENTION and CAI, which are two methods that have been used to measure causal influence <cit.>. We need to adapt these methods to follow our assumption of not knowing causal variables. The number of steps used to collect initial successful trajectories and to reconstruct the causal graph (denoted as H_s and T_s respectively) for VACERL and causal baselines are provided for each environment in Supp. D. However, not all causal methods can be adapted, and as such, we have not conducted comparisons with approaches, such as <cit.>. Additionally, as we do not require demonstrating trajectory from experts, we do not compare with causal imitation learning methods <cit.>. §.§.§ Results. In this section, we present our empirical evaluation results of VACERL with causal intrinsic rewards. Discrete State Space: Table <ref> illustrates that our rewards improve PPO's performance by approximately 30%, in both 4x4FL and 8x8FL environments. Notably, VACERL outperforms both causal baselines, ATTENTION and CAI. Specifically, VACERL surpasses ATTENTION by 67% and 39% in 4x4FL and 8x8FL. CAI fails to learn the tasks within the specified steps due to insufficient trajectories in the agent's memory for precise causality estimation between all steps. In contrast, our method, incorporating a crucial step detection phase, requires fewer trajectories to capture meaningful causal relationships in the environment. 
VACERL also performs better than Count-based by 66% in 4x4FL and 100% in 8x8FL, and better than RND by 51% in 4x4FL and 31% in 8x8FL. We hypothesize that Count-based and RND's intrinsic rewards are unable to encourage the agent to avoid the trapping lakes, unlike VACERL's, which are derived only from successful trajectories and therefore promote safer exploration. MG-2 Learning Curve Analysis: We conduct experiments with two types of observation space (image and vector) and visualize the learning curves in Fig. <ref>(a) and Supp. Fig. B.4. The results demonstrate that VACERL outperforms vanilla PPO, the causal baselines, and RND in both types of observation space. While VACERL shows slightly slower progress than Count-based in early steps, it quickly catches up in later stages, ultimately matching optimal performance. We attribute this to the fact that VACERL requires a certain number of training steps to accurately acquire the causal graph before the resulting causal rewards influence the agent's training, a phenomenon observed in the other causal baselines as well. Continuous State Space: Table <ref> summarizes the testing results on 8 continuous state space environments (MH-1 to MG-4). In most of these environments, VACERL demonstrates superior performance. It ranks second-best only in MH-4, MG-1 and MG-2, with competitive returns. In the MG-3 environment, at 30 million steps, VACERL achieves the best result with an average return of 0.77, outperforming the second-best Count-based by 10%, while the other baselines show little learning. Notably, in the hardest task, MG-4, only VACERL shows signs of learning, achieving an average score of 0.29 after 50 million steps, whereas the other baselines' returns remain zero. Additional learning curves and results are provided in Supp. B. Under Noisy-TV: Fig. <ref>(b, c), showing the results on the MG-2 environment under the Noisy-TV setting, confirms that our reward exhibits greater robustness in Noisy-TV environments compared to traditional approaches. Count-based, CAI, and RND fail in this setting as they cannot differentiate noise from meaningful novelty and thus get stuck watching the Noisy-TV. While the noise affects ATTENTION and vanilla PPO less, their exploration strategies are not sufficient for sparse reward environments. Overall, VACERL is the only method performing well across all settings, with or without Noisy-TV. §.§ VACERL: Causal Subgoals - Implementation and Evaluation §.§.§ Causal subgoals sampling. In HRL, identifying subgoals often relies on random exploration <cit.>, which can be inefficient in large search spaces. We propose leveraging causal nodes as subgoals, allowing agents to actively pursue these significant nodes. To incorporate causal subgoals into exploration, we suggest substituting a portion of the random sampling with causal subgoal sampling. Specifically, in the HRL method under experimentation, where subgoals are randomly sampled 20% of the time, we replace a fraction of this 20% with a node from the causal tree as a subgoal, while retaining random subgoals for the remainder. Eq. <ref> denotes the probability of sampling a node i at depth d>0 (excluding the root node, as this is the ultimate goal) from the causal tree: P^(i)=(d_i)^-1/∑ _j=1^N(d_j)^-1 where d_i is the depth of node i and N is the number of nodes in the causal tree. §.§.§ Environments. We use the FetchReach and FetchPickAndPlace environments from Gymnasium-Robotics <cit.>. These are designed to test goal-conditioned RL algorithms.
We opt for sparse reward settings, in which the agent receives a reward of 0 if the goal is met and -1 otherwise (detailed in Supp. C). §.§.§ Baselines. HAC <cit.>, a goal-conditioned HRL algorithm, serves as the backbone and a baseline. HAC is implemented as a three-level DDPG <cit.> with Hindsight Experience Replay (HER) <cit.>, where the top two levels employ a randomized mechanism for subgoal sampling. We also evaluate our performance against the standard DDPG+HER algorithm <cit.> on the FetchPickAndPlace environment, as this is the more challenging task <cit.>, and for comprehensiveness. §.§.§ Results. In this section, we outline our empirical evaluation of VACERL with causal subgoals. FetchReach: We assess the impact of replacing varying proportions of randomly sampled subgoals with nodes from the causal graph, based on Eq. <ref>, on the performance of the HRL agent. The learning curve in Fig. <ref>(a) suggests that replacement percentages of 50% and 70% enhance the sample efficiency of vanilla HAC. Notably, when employing a 70% substitution rate, agents demonstrate signs of learning after only 4,000 episodes, a considerable improvement over the HAC agent's 10,000 episodes. Conversely, replacing 50% leads to swifter convergence, at 20,000 episodes compared to HAC at 25,000 episodes. Additional experiments (Supp. C) demonstrate that this accelerated convergence rate is attributable to the learned causal subgoals. In contrast, employing a 90% substitution rate results in a decline in performance. We assert that this decline comes from insufficient exploration of new subgoals, leading to an inadequate number of trajectories in buffer B for causal discovery. FetchPickAndPlace: For this environment, we adopt the 50% replacement rate, which yielded the most stable performance in FetchReach. The learning curve of VACERL in Fig. <ref>(b) shows a similar pattern to the learning curves for the MG-2 task in Fig. <ref>(a). VACERL progresses more slowly but eventually achieves optimal performance, surpassing DDPG+HER and HAC after 90,000 episodes. In this environment, we reconstruct the causal tree every 10,000 episodes, and as seen in the learning curve, the RL agent's performance begins to improve after approximately 20,000 episodes (the worst case improves after 40,000 episodes). §.§ Ablation Study and Model Analysis We use the MG-2 task (Fig. <ref>(a)) and the causal intrinsic reward for our analysis. Crucial Step Detection Analysis: We investigate how the Transformer model TF's performance changes with varying buffer B sizes. As depicted in Fig. <ref>(a,b), increasing the number of trajectories in B enhances the framework's accuracy in detecting important steps through attention. Initially, with 4 trajectories (Fig. <ref>(a)), TF attends to all actions in the top-left grid. However, after being trained with 40 trajectories (Fig. <ref>(b)), TF correctly attends to the pick-up action (PU) in the top-left grid, corresponding to the key pickup event. It can also attend to the toggle action (T) in front of the door, corresponding to using the key to open the door. Additional visualization for 4x4FL is in Supp. Fig. B.6. Next, we investigate the effects of employing varying sizes of S_COAS (M). The results in Fig. <ref>(c) reveal that varying M changes the performance of the agent drastically. If M is too small, the agent will not be able to capture all causal relations, thereby failing to mitigate the issue of sparse reward.
On the other hand, a too-large M introduces noise into the causal discovery phase, as the causal graph will contain redundant nodes. We find that the optimal value for M in MG-2 is 70, striking a balance between being too small and too large. Intrinsic Reward Shaping Analysis: We exclude the counting component (Eq. <ref>) from the final intrinsic reward to assess the agent's exploration ability based solely on r_causal. The results in Fig. <ref>(c) show that the agent remains proficient relying only on the causally motivated reward (green curve). In particular, in the absence of Eq. <ref>, the VACERL agent still outperforms vanilla PPO. However, its performance is not as optimal as the full VACERL (r_causal^+, blue curve). This is because, in early iterations, the causal graph is not yet well defined, diminishing the efficiency of relying solely on the causal intrinsic reward r_causal. Causal Structure Discovery Contribution: We study the contribution of Phase 2 by comparing causality against attention correlation. We directly use the a_s assigned to each (o,a) in Phase 1 to compute the intrinsic reward: r_bonus(o,a)=α a_s(o,a), where α is the hyperparameter in Eq. <ref>. This reward differs from the ATTENTION reward used in Sect. <ref> in that the augmentation in Eq. <ref> is not applied. We expect that, as the attention score is a reliable indicator of correlation, building an intrinsic reward upon it would benefit the agent, albeit not as effectively as when a causal graph is used (correlation is not as good as causality). The learning curve in Fig. <ref>(c) shows that using causality as intrinsic motivation (green curve) performs better than using attention correlation (purple curve) by a large margin. To further evaluate, we extract the learned causal graph in Fig. <ref> and present a detailed analysis of this graph in Supp. B. The result shows that our method can recover an approximation of the ground truth causal graph. Although there are redundant nodes and edges, the important causal hierarchy is maintained, e.g., “open door” is the parental step of “pickup key”. § CONCLUSION This paper introduces VACERL, a framework that enhances RL agent performance by analyzing causal relationships among agent observations and actions. Unlike previous methods, VACERL addresses causal discovery without assuming specified causal variables, making it applicable to variable-agnostic environments. Understanding these causal relationships becomes crucial for effective agent exploration, particularly in environments with complex causal structures or irrelevant actions, such as the Noisy-TV problem. We propose two methods to leverage the identified causal structure. Future research could explore other methods utilizing this structure. Empirical evaluations in sparse reward navigation and robotic tasks demonstrate the superiority of our approach over baselines. However, a limitation is the introduction of new hyperparameters, which require adjustment for different settings. andrychowicz2017hindsight Andrychowicz, M., Wolski, F., Ray, A., Schneider, J., Fong, R., Welinder, P., McGrew, B., Tobin, J., Pieter Abbeel, O., Zaremba, W.: Hindsight experience replay. Advances in neural information processing systems 30 (2017) bellemare2016unifying Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., Munos, R.: Unifying count-based exploration and intrinsic motivation.
Advances in neural information processing systems 29 (2016) burda2018exploration Burda, Y., Edwards, H., Storkey, A., Klimov, O.: Exploration by random network distillation. arXiv preprint arXiv:1810.12894 (2018) MinigridMiniworld23 Chevalier-Boisvert, M., Dai, B., Towers, M., de Lazcano, R., Willems, L., Lahlou, S., Pal, S., Castro, P.S., Terry, J.: Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. CoRR abs/2306.13831 (2023) corcoll2020disentangling Corcoll, O., Vicente, R.: Disentangling causal effects for hierarchical reinforcement learning. arXiv preprint arXiv:2010.01351 (2020) de2019causal De Haan, P., Jayaraman, D., Levine, S.: Causal confusion in imitation learning. Advances in neural information processing systems 32 (2019) ding2022generalizing Ding, W., Lin, H., Li, B., Zhao, D.: Generalizing goal-conditioned reinforcement learning with variational causal reasoning. Advances in Neural Information Processing Systems 35, 26532–26548 (2022) hu2022causality Hu, X., Zhang, R., Tang, K., Guo, J., Yi, Q., Chen, R., Du, Z., Li, L., Guo, Q., Chen, Y., et al.: Causality-driven hierarchical structure discovery for reinforcement learning. Advances in Neural Information Processing Systems 35, 20064–20076 (2022) hung2019optimizing Hung, C.C., Lillicrap, T., Abramson, J., Wu, Y., Mirza, M., Carnevale, F., Ahuja, A., Wayne, G.: Optimizing agent behavior over long time scales by transporting value. Nature communications 10(1),  5223 (2019) ke2019learning Ke, N.R., Bilaniuk, O., Goyal, A., Bauer, S., Larochelle, H., Schölkopf, B., Mozer, M.C., Pal, C., Bengio, Y.: Learning neural causal models from unknown interventions. arXiv preprint arXiv:1910.01075 (2019) gymnasium_robotics2023github de Lazcano, R., Andreas, K., Tai, J.J., Lee, S.R., Terry, J.: Gymnasium robotics (2023), <http://github.com/Farama-Foundation/Gymnasium-Robotics> 10.5555/3635637.3662964 Le, H., Do, K., Nguyen, D., Venkatesh, S.: Beyond surprise: Improving exploration through surprise novelty. In: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems. p. 1084–1092. AAMAS '24, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC (2024) levy2017learning Levy, A., Konidaris, G., Platt, R., Saenko, K.: Learning multi-level hierarchies with hindsight. arXiv preprint arXiv:1712.00948 (2017) lillicrap2015continuous Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., Wierstra, D.: Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015) mnih2015human Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al.: Human-level control through deep reinforcement learning. nature 518(7540), 529–533 (2015) pearl2009causal Pearl, J.: Causal inference in statistics: An overview (2009) pitis2020maximum Pitis, S., Chan, H., Zhao, S., Stadie, B., Ba, J.: Maximum entropy gain exploration for long horizon multi-goal reinforcement learning. In: International Conference on Machine Learning. pp. 7750–7761. PMLR (2020) pitis2020counterfactual Pitis, S., Creager, E., Garg, A.: Counterfactual data augmentation using locally factored dynamics. 
Advances in Neural Information Processing Systems 33, 3976–3990 (2020) plappert2018multi Plappert, M., Andrychowicz, M., Ray, A., McGrew, B., Baker, B., Powell, G., Schneider, J., Tobin, J., Chociej, M., Welinder, P., et al.: Multi-goal reinforcement learning: Challenging robotics environments and request for research. arXiv preprint arXiv:1802.09464 (2018) samvelyan2021minihack Samvelyan, M., Kirk, R., Kurin, V., Parker-Holder, J., Jiang, M., Hambro, E., Petroni, F., Kuttler, H., Grefenstette, E., Rocktäschel, T.: Minihack the planet: A sandbox for open-ended reinforcement learning research. In: Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1) (2021), <https://openreview.net/forum?id=skFwlyefkWJ> schulman2017proximal Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017) seitzer2021causal Seitzer, M., Schölkopf, B., Martius, G.: Causal influence detection for improving efficiency in reinforcement learning. Advances in Neural Information Processing Systems 34, 22905–22918 (2021) silver2017mastering Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al.: Mastering the game of go without human knowledge. nature 550(7676), 354–359 (2017) sun2023offline Sun, Z., He, B., Liu, J., Chen, X., Ma, C., Zhang, S.: Offline imitation learning with variational counterfactual reasoning. Advances in Neural Information Processing Systems 36 (2023) sutton2018reinforcement Sutton, R.S., Barto, A.G.: Reinforcement learning: An introduction. MIT press (2018) tang2017exploration Tang, H., Houthooft, R., Foote, D., Stooke, A., Xi Chen, O., Duan, Y., Schulman, J., DeTurck, F., Abbeel, P.: # exploration: A study of count-based exploration for deep reinforcement learning. Advances in neural information processing systems 30 (2017) towers_gymnasium_2023 Towers, M., Terry, J.K., Kwiatkowski, A., Balis, J.U., Cola, G.d., Deleu, T., Goulão, M., Kallinteris, A., KG, A., Krimmel, M., Perez-Vicente, R., Pierré, A., Schulhoff, S., Tai, J.J., Shen, A.T.J., Younis, O.G.: Gymnasium (Mar 2023). 10.5281/zenodo.8127026, <https://zenodo.org/record/8127025> vaswani2017attention Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) zeng2023survey Zeng, Y., Cai, R., Sun, F., Huang, L., Hao, Z.: A survey on causal reinforcement learning. arXiv preprint arXiv:2302.05209 (2023) zhang2020deep Zhang, P., Liu, F., Chen, Z., Jianye, H., Wang, J.: Deep reinforcement learning with causality-based intrinsic reward (2020) zhang2020generating Zhang, T., Guo, S., Tan, T., Hu, X., Chen, F.: Generating adjacency-constrained subgoals in hierarchical reinforcement learning. Advances in Neural Information Processing Systems 33, 21579–21590 (2020) zhang2024interpretable Zhang, Y., Du, Y., Huang, B., Wang, Z., Wang, J., Fang, M., Pechenizkiy, M.: Interpretable reward redistribution in reinforcement learning: A causal approach. Advances in Neural Information Processing Systems 36 (2024) todorov2012mujoco Todorov, E., Erez, T., Tassa, Y.: Mujoco: A physics engine for model-based control. In: 2012 IEEE/RSJ international conference on intelligent robots and systems. pp. 5026–5033. 
IEEE (2012) §.§ Details of Methodology §.§.§ VACERL Framework The detailed processing flow of the VACERL framework is described in Algo. <ref>. Buffer B is initialized using the process in lines 2-6, using a random policy to collect successful trajectories (note that as long as the agent can accidentally reach the goal and add one trajectory to B, the improvement process can start). We then start our iterative process (the outer loop). In Phase 1, “Crucial Step Detection” (lines 8-21), the process commences with the training of the Transformer model TF using Algo. <ref>. Subsequently, we collect the dictionary D that maps (o_t^k,a_t^k) to the attention score a_s. D is sorted based on a_s. We then define the 𝚒𝚜_𝚜𝚒𝚖 function (line 9) and the abstraction function ℐ (line 10) to handle similar observation-action steps, and add the top M (o_t^k,a_t^k) steps to the set S_COAS using the process in lines 12-20. After collecting S_COAS, we apply Eq. 1 to acquire the new buffer B^*. With buffer B^*, we initiate Phase 2 (line 22), called “Causal Structure Discovery”. We optimize the two parameters δ and η using Algo. <ref> and collect the causal graph G. Using graph G, we collect the causal tree relative to the goal-reaching step to create a hierarchy of steps. We use this hierarchy to calculate the intrinsic reward associated with (o,a) using Eq. 6 or to calculate the subgoal sampling probability using Eq. 7. Finally, we train the policy π_θ and add new successful trajectories to buffer B, summarizing Phase 3 (lines 23-29), called “Agent Training with Causal Information”. The process then starts again from Phase 1 using the updated buffer B. §.§.§ Transformer Model Training Detailed pseudocode for training the Transformer model is provided in Algo. <ref>. We utilize the Transformer architecture implemented in PyTorch [https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html] for our TF model implementation. This implementation follows the architecture presented in <cit.>; thus, the attention score a_s for a step is computed using the self-attention equation softmax(QK^T/√(d_k)), where Q=XW_Q represents the query vector, K=XW_K represents the key vector, X is the learned embedding of a step (o_t^k,a_t^k), and W_Q,W_K are trainable weights. The values of a_s are extracted from the encoder layer of the TF model during the last training iteration. We use a step (o_t^k,a_t^k) as the key in the dictionary D that maps to its associated a_s, as described in lines 5-11 (Algo. <ref>). §.§.§ SCM Training The detailed pseudocode is provided in Algo. <ref>. Our approach involves a two-phase iterative update process, inspired by the causal learning method proposed by Ke et al. <cit.>. This process optimizes two parameters: the functional parameter δ of the generating function f and the structural parameter η of the graph G, representing a Structural Causal Model (SCM). In Phase 1 of the process, we keep the structural parameter η fixed and update the functional parameter δ, whereas in Phase 2, we keep δ fixed and update η. Both sets of parameters are trained using the buffer B^* (Eq. 1). The generating function f is initialized as a 3-layer MLP neural network with randomly initialized parameters δ. The parameter η∈ R^M× M, the soft adjacency matrix of size M× M representing the direct causality graph of the M steps, is initialized as a random M× M tensor, such that η_ij denotes the causal effect of the step at index j of S_COAS on the step at index i of S_COAS. At each step in lines 4 and 20 of Algo.
<ref>, we sample a hypothesis causal graph G by Bernoulli sampling Ber(σ(η)), which is used in the optimization process, where σ(x)=1/(1+e^-x). The intuition behind this optimization process is that the step representing the cause should occur before its associated effect step; thus, for a step t in the k-th trajectory, we formulate f as: (ô_t^k,â_t^k)=f_δ,ℐ((o_t^k,a_t^k))({(o_t'^k,a_t'^k)}_t'=1^t-1 | G) In our implementation, every step from 1 to t-1 that does not belong to the parental set PA(ℐ(o_t^k,a_t^k)) is masked out when input into the MLP; in this way, only steps that belong to the parental set PA(ℐ(o_t^k,a_t^k)) are used in the prediction of (o_t^k,a_t^k). To learn f and optimize the parameter δ, we compute an MSE loss as denoted in Eq. 3. In the second phase, we fix δ and optimize the parameter η by updating the causality from variable X_j to X_i. After updating the parameter η for several steps, we return to the optimization of the parameter δ. Finally, we use the resulting structural parameter η to construct the final causal graph G. We first obtain the edge e_ij using Eq. 4, where ϕ_causal represents the causal confidence threshold. In our implementation, ϕ_causal was tuned over the values [0.5,0.6,0.7,0.8,0.9,1.0]. Eq. 4 also ensures that there is no internal loop in the adjacency matrix. §.§.§ Causal Tree Extraction We extract a tree from the resultant causal graph G, focusing on the steps relevant to achieving the goal. We use the goal-reaching step as the root of the tree, recursively determine the parental steps of this root node within graph G, add them to the causal tree as new nodes, and subsequently determine the parental steps of all these identified nodes. However, to avoid cycles in the tree, we need to impose an ordering; thus, we use the ranking of the attention score a_s. For an edge e_ij from variable X_i to variable X_j, we remove the edge e_ij if the a_s of X_i is smaller than the a_s of X_j, even if e_ij=1 according to the graph G. §.§ Setting to Test VACERL Causal Intrinsic Reward §.§.§ Environments FrozenLake Environments These tasks involve navigating the FrozenLake environments of both 4x4 (4x4FL) and 8x8 (8x8FL) sizes <cit.>. Visualizations of these environments can be found in Fig. <ref>(d,e). The goal of the agent is to cross a frozen lake from the starting point located at the top-left corner of the map to the goal position located at the bottom-right corner of the map without falling into the frozen lake. The observation in these environments is a value representing the current position of the agent. The number of possible positions depends on the map size, with 4x4FL having 16 positions and 8x8FL having 64 positions. The agent is equipped with four discrete actions determining the direction of the agent's movement <cit.>. If the agent successfully reaches the goal, it receives a +1 reward. However, if it falls into the lake or fails to reach the goal within a predefined maximum number of steps, it receives a 0 reward. The chosen maximum numbers of steps for 4x4FL and 8x8FL to validate our framework are 100 and 2000, respectively. Minihack Environments These tasks involve MH-1 (MiniHack-Room-5x5-v0), MH-2 (MiniHack-Room-Monster-5x5-v0), MH-3 (MiniHack-Room-Ultimate-5x5-v0) and MH-4 (MiniHack-River-Narrow-v0), a suite of environments collected from <cit.>. These environments present more challenging exploration scenarios compared to the FrozenLake environments due to the increased number of objects.
Certain environments necessitate interaction with objects to achieve the goal, such as defeating monsters (MH-2 and MH-3) or constructing bridges (MH-4). If the agent successfully reaches the goal within a predefined maximum number of steps, it receives a +1 reward; otherwise, it receives a 0 reward. The maximum number of steps for all four environments is the default number of steps in <cit.>. Minigrid Environments These tasks involve four environments: Key Corridor (MG-1, Fig. <ref>(a)), two 2x2 rooms (MG-2, Fig. 1), two 3x3 rooms (MG-3, Fig. <ref>(b)) and three 2x2 rooms (MG-4, Fig. <ref>(c)). The goal of the agent, in MG-1, is to move and pick up the yellow ball and, in MG-2, MG-3 and MG-4, to pick up a blue box which is located in the rightmost room, behind a locked door <cit.>. In these environments, the agent has six actions: turn left (L), turn right (R), move forward (F), pick up (PU), drop (D) and use (T) the object. With each new random seed, a new map is generated, including the agent's initial position and the positions of the objects. For MG-2, MG-3, and MG-4, there will always be an object obstructing the door. Specifically, for MG-3 and MG-4, the agent has to pick up a key whose colour matches the door. In MG-3, we also introduce two distracting objects (the red ball and the green key in Fig. <ref>(b)). All three environments are POMDPs, meaning that the agent can only observe part of the map; the observation is an image tensor of shape [7,7,3]. Successfully reaching the goal within a predefined maximum number of steps results in a +1 reward for the agent; otherwise, it receives a 0 reward. The selected maximum number of steps is 270 for MG-1, 500 for MG-2, 1000 for MG-3 and 5000 for MG-4. To complete these environments, the agent has to learn to move the ball by picking it up and dropping it in another location; then, it has to pick up the key, open the door, and pick up the object in the other room. §.§.§ Baseline Implementations The backbone algorithm is Proximal Policy Optimization (PPO). We use the PyTorch implementation of this algorithm from the open-source library Stable Baselines 3 [https://github.com/DLR-RM/stable-baselines3]. To enhance the performance of this backbone algorithm, we fine-tuned the entropy coefficient and settled on a value of 0.001 after experimenting with [0.001, 0.005, 0.01, 0.05]. All other parameters were maintained as per the original repository. Subsequently, we incorporated various intrinsic reward baselines on top of the PPO backbone, including Count-Based, Random Network Distillation (RND), ATTENTION, CAI, and our VACERL. For the Count-Based method, similar to the work of Bellemare et al. <cit.>, we tracked the frequency of observations and associated actions and used the SimHash function to merge similar pairs <cit.>. The intrinsic reward is formulated as r^+(o,a)=α/√(n(ϕ(o,a))), where n(ϕ(o,a)) represents the count and ϕ(o,a) is the SimHash function. We tuned the exploration bonus hyperparameter α; however, there were no significant performance gains as long as the bonus rewards do not overtake the rewards of the environment. Finally, we settled on a value of 0.001. We also tested different values of the hashing parameter k of the SimHash function. The final implementation used k=256, which shows the best results and is consistent with the referenced paper <cit.>. We also adopt a public code repository for the implementation of RND (MIT License)[https://github.com/jcwleo/random-network-distillation-pytorch].
To align the implementation with our specific environments, we have made adjustments to the input shapes and modified the PPO hyperparameters to match those of the other baselines. We adhered to the implementation provided in the code, using the hyperparameter values as specified. In the case of ATTENTION, we leveraged the attention scores a_s from the encoder layer of the Transformer model as the reward signal. This approach has been used in the work of Pitis et al. <cit.> as a method to measure causal influence. The intrinsic reward has the form r_bonus(o,a)=α a_s(o,a). We integrate this as an iterative process similar to our VACERL framework and, for fairness, also aid the exploration in the earlier phase using Eq. 6. Similar to our framework VACERL and the implementation of Count-based, we use an α value of 0.001 for this baseline. Finally, for the CAI method, we measure the causal influence between each observation-action pair and the observation-action pair of the goal-reaching step <cit.>. This is slightly different from the implementation of the original paper, which assumes knowledge of the location of the goal and other objects. Our final implementation uses a two-layer MLP neural network to measure CAI and is based on the code of the original paper (MIT License)[https://github.com/martius-lab/cid-in-rl]. Additional details of the hyperparameters can be found in Sec. <ref>. §.§.§ Additional Experiment Results This section presents additional experiment results and visualizations. Learning Curve The learning curves for the 4x4FL and 8x8FL tasks are illustrated in Fig. <ref>; the learning curves for MH-1 to MH-4 are illustrated in Fig. <ref>, while the corresponding curves for MG-1, MG-3 and MG-4 are presented in Fig. <ref>. The learning curves of VACERL in these figures show a similar pattern to the learning curves for MG-2 as presented in Fig. 2. Initially, the learning progress is slightly slower, as a number of training steps are required to acquire a correct causal representation. Subsequently, the performance accelerates rapidly, eventually surpassing the baselines and attaining the optimal point. The steps shown in these figures are the number of times the agent interacts with the environment, so for fairness, the number of steps of VACERL and the causal baselines (CAI and ATTENTION) is computed as H_s+a·T_s, where a is the number of outer-loop iterations in line 7 of Algo. <ref>. 4x4FL Heatmap The attention heatmap for the 4x4FL task is provided in Fig. <ref>. These two figures show a similar pattern to Fig. 4(a,b). Specifically, when the size of buffer B is small (Fig. <ref>(a)), the accuracy of crucial step detection is not as precise as in scenarios where the size of buffer B is larger (Fig. <ref>(b)). MG-2 Generated Causal Graph The causal graph generated by our framework for the MG-2 task is provided in Fig. 5. Each observation-action pair is associated with an image representing the map at the timestep at which the agent executes the action. Below is a summary of the relationships and our rationale for why the agent generated the unexpected, though reasonable, relationships. Expected relationships: * Drop Key to Pick Up Goal. * Open Door to Pick Up Goal. * Pick Up Key to Drop Key. * Pick Up Key to Open Door. * Pick Up Ball to Drop Ball. Unexpected Relationships: * Drop Ball to Pick Up Goal and not to Open Door: We believe that this unexpected relationship arises because the agent is allowed to repeat actions that it has taken before in the environment.
Consequently, the agent can pick up the ball again after it opens the door, thus affecting the relationship between Drop Ball and Open Door. In addition, as the agent can only hold one item at a time, it must drop the ball before picking up the goal, which creates a relationship between Drop Ball and Pick Up Goal. If this sequence of steps frequently occurs in the collected trajectories, the agent will infer that this sequence represents an accurate relationship, thus leading to the generation of this causal graph. Although this relationship is not what we expected, it is not inaccurate, particularly in this environment, wherein the agent can only hold a single item at a time. §.§ Setting to Test VACERL with Causal Subgoal for HRL §.§.§ Environments Both of the environments used in this section are available in Gymnasium-Robotics <cit.> and are built on top of the MuJoCo simulator <cit.>. The robot in question has 7 degrees of freedom (DoF) and a two-fingered parallel gripper. In FetchReach, the state space is S⊂ R^10, and in FetchPickAndPlace, the state space is S⊂ R^25. In both environments, the action space is A⊆[-1,1]^4, including actions to move the gripper and to open/close the gripper. In FetchReach, the task is to move the gripper to a specific position within the robot's workspace, which is relatively simpler compared to FetchPickAndPlace. In the latter, the robot must grasp an object and relocate it. In both cases, the user is given two values, “achieved_goal” and “desired_goal”. Here, achieved_goal denotes the final position of the object, and desired_goal is the target to be reached. In FetchReach, these goals represent the gripper's position, since the aim is to relocate the gripper. In FetchPickAndPlace, they signify the position of the block that the robot needs to manipulate. Success is achieved when the Euclidean distance between achieved_goal and desired_goal is less than 0.05 m. Sparse rewards are employed in our experiments, wherein the agent receives a reward of -1 if the goal is not reached and 0 if it is. The maximum number of timesteps allowed in these environments is set to 100. §.§.§ Baseline Implementations We utilize the PyTorch implementation of DDPG+HER from the open-source library Stable Baselines 3 [https://github.com/DLR-RM/stable-baselines3] as one of our baselines. The hyperparameters for this algorithm are set to the benchmark values available in RL-Zoo [https://github.com/DLR-RM/rl-baselines3-zoo]. We assess the performance of this baseline against the results presented in the original robotics paper by Plappert et al. <cit.>, noting similarities despite differences in environment versions. For our HAC implementation, the core algorithm of our approach, we adopt a publicly available code repository [https://github.com/andrew-j-levy/Hierarchical-Actor-Critc-HAC-] (MIT License) by the author of the original paper <cit.>. We modify this code to align with our environments, where the goal position and the goal condition are supplied by the environments themselves. The baseline is implemented as a three-level DDPG+HER, in which the top two levels are used to supply subgoals and the lowest level is used to learn the actions. We adjust the hyperparameters of the lowest-level DDPG+HER to match those of the DDPG+HER baseline for fairness. Additional details of the hyperparameters can be found in Sec. <ref>.
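For reference, the depth-weighted causal subgoal sampling of Eq. 7, and its use to replace a fraction of the random subgoal proposals in the HRL agent, can be sketched as follows; the helper names, the toy node list, and the replacement fraction below are illustrative assumptions of ours, not a verbatim excerpt of the implementation.

```python
import random

def sample_causal_subgoal(tree_nodes):
    """tree_nodes: list of (subgoal, depth) pairs from the causal tree with depth > 0
    (the root, i.e. the final goal, is excluded).  A node at depth d_i is drawn with
    probability (1/d_i) / sum_j (1/d_j), as in Eq. 7."""
    goals, depths = zip(*tree_nodes)
    weights = [1.0 / d for d in depths]
    return random.choices(goals, weights=weights, k=1)[0]

def propose_subgoal(tree_nodes, random_subgoal_fn, causal_fraction=0.5):
    """Replace a fraction of the random subgoal proposals with causal subgoals."""
    if tree_nodes and random.random() < causal_fraction:
        return sample_causal_subgoal(tree_nodes)
    return random_subgoal_fn()

# Toy usage: three tree nodes; nodes closer to the root (smaller depth) are favoured.
nodes = [("grasp_block", 2), ("reach_block", 3), ("above_target", 1)]
print(propose_subgoal(nodes, random_subgoal_fn=lambda: "random_workspace_point"))
```

In our experiments the replacement fraction is the proportion of HAC's random subgoal proposals that is substituted with causal nodes; as reported above, a 50% substitution rate gave the most stable behaviour on FetchReach.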
§.§.§ Additional Experiment Results To validate our assertion that causal subgoals can effectively narrow down the search space of an HRL agent to significant subgoals, thus enhancing HRL sample efficiency in robotic environments, we present an additional experiment along with a visualization of the subgoals' average coordinates selected by VACERL and vanilla HAC in this experiment (Fig. <ref>). The experiment was conducted in the FetchReach environment, with the causal graph re-evaluated every 2,000 steps, mirroring our main experiments. For fairness, we specifically chose a run in which the initial subgoals of HAC and VACERL exhibited similar average coordinates (x, y, z). In this run, the goal (indicated by a red + marker) was positioned at coordinates (1.38294353, 0.68003652, 0.396999). As illustrated in Fig. <ref>(a), despite the initial subgoals' average coordinates being very similar (represented by blue markers), (0.727372811, 0.501121846, 0.28868408) for HAC and (0.7377534, 0.521795303, 0.2345436) for VACERL, VACERL swiftly converges to subgoals much closer to the goal after just one iteration of causal discovery learning, while vanilla HAC struggles to converge. We plot the weighted average coordinates of nodes in the causal graph after this iteration (indicated by a grey + marker), with weights determined by the probability of node sampling according to Eq. 7; higher probabilities correspond to higher weights. We choose to plot the values of this iteration because it represents the instance where VACERL undergoes the most significant shift in subgoal coordinates. The results indicate that the coordinates of nodes in the causal graph closely align with the coordinates of subgoals sampled by the top-level policy. This supports our intuition that causal subgoals contribute to the improvement in subgoal sampling and the overall sample efficiency of HRL. The improvement is also reflected in the associated learning curve of the agent in Fig. <ref>(b): after training for 4,000 episodes, VACERL begins to learn the environment, whereas HAC requires 8,000 episodes, coinciding with the point where the agent starts selecting subgoals with coordinates closer to the goal. §.§ Architecture and Hyperparameters of VACERL The default hyperparameters (used whenever a value is not specified in the accompanying tables) are provided in Table <ref>. The definitions and values of the hyperparameters that require tuning and may vary across different environments are specified in the accompanying tables. The system's architecture and the rationale for hyperparameter tuning are outlined below: Architecture * TF model's architecture: num_encoder_layers=2, num_decoder_layers=2, hidden_size=128, dropout=0.1. * Functional model f_δ's architecture: 3-layer MLP, hidden_size=512. * PPO: Stable Baselines 3's hyperparameters with entropy coefficient =0.001. * DDPG+HER: RL-Zoo's architecture and hyperparameters for the FetchReach and FetchPickAndPlace environments. * HAC: 3-level DDPG+HER; architectures and hyperparameters are the same as DDPG+HER. Tuning * H_s (used for VACERL and all causal baselines): This hyperparameter requires tuning as it relies on the complexity of the environment. The more challenging the environment, the greater the number of head steps required to gather a successful trajectory and start the framework.
For the MG, FL, and MH environments, we use a random policy to collect trajectories in this initial phase; however, in challenging robotic environments where collecting successful trajectories is difficult, we leverage the underlying HAC agent to gather these trajectories. Consequently, the value of H_s equals T_s in such environments. Additionally, H_s in MG, FL, and MH denotes the number of times the agent interacts with the environment, whereas in robotic environments it denotes episodes. * M: This hyperparameter requires tuning as it depends on the state space of the environment. Generally, a larger state space requires a larger value of M. However, as shown in Fig. 4, a too-large M can introduce noise during the causal structure discovery phase and affect the final policy training result. * ϕ_sim: This hyperparameter is only used in continuous-space environments. * T_s (used for VACERL and all causal baselines): Similar to H_s, this hyperparameter varies between environments. T_s in MG, FL, and MH denotes the number of steps the agent interacts with the environment, whereas in robotic environments it denotes episodes. T_s is also the number of steps/episodes before the causal graph is reconstructed.
http://arxiv.org/abs/2407.12264v1
20240717022658
Hybrid Near-Far Field Channel Estimation for Holographic MIMO Communications
[ "Shaohua Yue", "Shuhao Zeng", "Liang Liu", "Yonina C. Eldar", "Boya Di" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Hybrid Near-Far Field Channel Estimation for Holographic MIMO Communications Shaohua Yue, Shuhao Zeng, Student Member, IEEE, Liang Liu, Member, IEEE, Yonina C. Eldar, Fellow, IEEE, and Boya Di, Member, IEEE Shaohua Yue, Shuhao Zeng, and Boya Di are with the State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China. (e-mail: {yueshaohua; shuhao.zeng; boya.di}@pku.edu.cn). Liang Liu is with the Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong, SAR, China. (e-mail: liang-eie.liu@polyu.edu.hk). Yonina C. Eldar is with the Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 7610001, Israel (e-mail: yonina.eldar@weizmann.ac.il). Part of this work has been accepted for publication in the IEEE GLOBECOM 2023 conference <cit.>. ================================================= § ABSTRACT Holographic MIMO communications, enabled by large-scale antenna arrays with quasi-continuous apertures, is a potential technology for spectrum efficiency improvement. However, the increased antenna aperture size extends the range of the Fresnel region, leading to a hybrid near-far field communication mode. The users and scatterers randomly lie in near-field and far-field zones, and thus conventional far-field-only and near-field-only channel estimation methods may not work. To tackle this challenge, we demonstrate the existence of the power diffusion (PD) effect, which leads to a mismatch between the hybrid-field channel and existing channel estimation methods. Specifically, in the far-field and near-field transform domains, the power gain of one channel path may diffuse to other positions, thus generating fake paths. This renders the conventional techniques unable to detect the real paths. We propose a PD-aware orthogonal matching pursuit algorithm (PD-OMP) to eliminate the influence of the PD effect by identifying the PD range, within which paths diffuse to other positions. PD-OMP fits a general case without prior knowledge of near-field and far-field path numbers and the user's location. The computational complexity of PD-OMP and the Cramér-Rao Lower Bound for sparse-signal-recovery-based channel estimation are also derived. Simulation results show that PD-OMP outperforms state-of-the-art hybrid-field channel estimation methods. Holographic MIMO communication, channel estimation, power diffusion, near-field communication.
§ INTRODUCTION To fulfill the high spectrum efficiency requirement of the future sixth-generation (6G) network <cit.>, holographic MIMO communication has been proposed as a promising solution<cit.>, where numerous antenna elements are integrated into a compact two-dimensional surface <cit.>. Potential implementation technologies include reconfigurable holographic surface <cit.> and extremely large reconfigurable intelligent surface <cit.>. Due to the increased radiation aperture size of the antenna array, the Fresnel region (radiating near-field region of the antenna) is significantly enlarged <cit.>. As a result, part of the users and scatterers lie in the near-field region of the holographic antenna array <cit.>, where the electromagnetic (EM) waves are characterized by spherical waves <cit.>. The remaining users and scatterers are located in the far-field region and the EM waves can be modeled via uniform plane waves. This gives rise to the so-called hybrid near-far field communication <cit.>. Due to the modeling difference between near-field and far-field EM wave propagation, conventional channel estimation methods cannot be directly applied to the hybrid-field case, necessitating the development of new schemes. Most existing works focus on either near-field channel estimation <cit.> or far-field channel estimation <cit.>. In <cit.>, the polar domain and angular domain channel representation are proposed, respectively, to depict the characteristics of the near-field and far-field channel models. In <cit.>, the near-field channel estimation problem is investigated, where the near-field region is divided into grids to perform on-grid estimation. A new dictionary is designed for near-field sparse channel representation and estimation in <cit.> to relieve the high coherence burden of the dictionary because of the two-dimensional near-field channel representation. Some initial works <cit.> consider the concept of a hybrid-field channel. Channel estimation techniques are designed relying on prior knowledge of the number of near-field and far-field paths such that the near-field and far-field path components are estimated separately <cit.>. However, there is a power diffusion effect in the sparse-signal-recovery-based hybrid-field channel estimation, which has not been discovered in the existing literature. To be specific, due to the high coherence between certain near-field and far-field steering vectors, when a near-field (or far-field) path component is transformed from the spatial domain into the angular domain (polar domain), the power gain of this path component spreads to multiple steering vectors and generates fake paths. Such an effect leads to an inaccurate path component estimation in the transform domain, which refers to the near-field polar domain, far-field angular domain, and the hybrid-field joint angular-polar domain, i.e., a concatenation of the polar domain and angular domain. This inaccuracy consequently causes estimation errors for the spatial-domain hybrid-field channel. Moreover, for the general case where the number of near-field and far-field paths are unknown, existing hybrid-field channel estimation algorithms  <cit.> are not applicable. In this paper, we investigate hybrid near-far field channel estimation without any prior knowledge of the numbers of near-field and far-field paths. Against this background, two new challenges arise. First, it is non-trivial to distinguish the far-field and the near-field paths in the hybrid-field case. 
The boundary of the near-field and far-field regions is hard to specify since it changes with the propagation direction of the EM wave <cit.>. Second, due to the power diffusion effect, the power gain of near-field paths and far-field paths is coupled. It leads to inaccurate transform-domain path component estimation, which urges efficient channel estimation schemes. To cope with the above challenges, we develop a power-diffusion-aware orthogonal matching pursuit algorithm (PD-OMP) for hybrid near-far field multipath channel estimation. The key idea of PD-OMP is two-fold. First, we observe that the angular domain can only provide accurate information about far-field path components while the polar domain only provides accurate information about near-field path components. Thus, we transform the spatial-domain hybrid-field channel to a joint angular-polar domain, where the near-field and far-field path components are successfully separated. Second, we define the power diffusion range to quantify the power gain diffusion from each path component to other positions. We demonstrate that the power diffusion range of each path is positively related to its power gain. Hence, by estimating the power gain, direction, and propagation distance of each path component, the power diffusion range is calculated so that the interference brought by the power diffusion effect can be identified and eliminated. In this way, the information on each path is extracted regardless of the power diffusion effect, and the hybrid-field channel is estimated accurately. Our contributions are summarized below. * We analyze the power diffusion effect in the sparse-signal-recovery-based hybrid-field channel estimation, which leads to inaccurate transform-domain path component estimation. It indicates that when a multipath hybrid-field channel is transformed from the spatial domain into the angular domain or the polar domain, the power gain corresponding to a path spreads to other positions and generates fake paths. This reveals why conventional far-field and near-field sparse-signal-recovery-based channel estimation methods cannot be directly applied to the hybrid-field case. The power diffusion effect is also demonstrated by an illustrative example and quantified by the power diffusion range. * We develop a power-diffusion-aware hybrid-field channel estimation method (PD-OMP), which does not require any prior information on the number of far-field and near-field paths to perform channel estimation. The joint angular-polar domain channel transform is utilized in PD-OMP so that different path components of the multipath channel are separated. Moreover, employing an iterative compressed sensing-based method, PD-OMP introduces the power diffusion range to resolve the inaccurate transform-domain path component estimation, which is different from other hybrid-field channel estimation techniques. * Simulation results show that the proposed PD-OMP achieves a higher estimation accuracy than current hybrid-field channel estimation techniques, which do not consider the power diffusion effect, given different SNRs and pilot lengths. The influence of scatterer distribution and the considered power diffusion range on the algorithm performance are also discussed. The rest of this paper is organized as follows. In section <ref>, the holographic MIMO communication scenario, the hybrid-field channel model, and the signal model are described. 
In Section <ref>, we present the hybrid-field channel characteristics in the joint angular-polar domain and analyze the power diffusion effect. A hybrid-field channel estimation method PD-OMP is proposed and the Cramér-Rao Lower Bound of the sparse-signal-recovery-based hybrid-field channel estimation is derived in Section <ref>. In Section <ref>, simulation results are provided and conclusions are drawn in Section <ref>. Throughout the paper, we use the following notation. Vectors and matrices are represented by lower-case and upper-case boldface letters, respectively. The writing 𝐗∈ℂ^a × b means that the size of 𝐗 is a × b and each element of 𝐗 is a complex number. In addition, 𝐗(p,:) and 𝐗(:,p) denote the p-th row and p-th column of the matrix 𝐗, respectively. We use (·)^T, (·)^H and (·)^† to denote the transpose, conjugate transpose, and pseudo-inverse operation respectively. |·| is the absolute operator, Tr(·) is the trace operator, and 𝔹(𝐗_1, ..., 𝐗_N) represents a block diagonal matrix generated from matrices 𝐗_1, ..., 𝐗_N. card(𝔸) denotes the number of elements in set 𝔸. § SYSTEM MODEL In this section, we first describe the holographic MIMO communication scenario. The far-field and near-field path modeling are then given, respectively, based on which the hybrid-field channel model is presented. The signal model for pilot signal transmission is also provided. §.§ Scenario Description As shown in Fig. <ref>, we consider an uplink communication system. The base station (BS) is equipped with an extremely large linear antenna array[The discussion on channel estimation in this paper is also applicable to the case of a planar array.] to communicate with a single-antenna user, with the number of antenna elements and the element spacing denoted by N and d, respectively. We assume that the antenna elements are connected via N_RF<N radio frequency (RF) chains such that the analog precoding scheme is employed at the BS. The EM radiation field of the antenna array can be divided into the near field and far field, as indicated in Fig. <ref>. The boundary between these two fields depends on Rayleigh distance, which is positively correlated with the size of the antenna array <cit.>. Given the large size of the holographic antenna array, the near-field region extends, leading to hybrid-field communications, i.e., the user and scatterers can be located in either the near field or the far field of the antenna array. §.§ Hybrid-field Channel Model Assume that the hybrid-field multipath channel from the user to the antenna array at the BS consists of a line-of-sight (LoS) path, denoted by path 0, and L-1 non-line-of-sight (NLoS) paths, denoted by paths 1, 2,..., L-1. In the following, we refer to the LoS path as far-field (or near-field) if the user lies in the far-field (or near-field) region of the antenna array and we refer to an NLoS path as far-field (or near-field) if the scatterer corresponding to this path locates in the far-field (or near-field) region of the antenna array. For the hybrid-field channel, the L paths consist of both far-field and near-field paths. We first present the model for the far-field and near-field paths, respectively, which are then combined to obtain the hybrid-field channel. For simplicity, we introduce a Cartesian coordinate system, where the x-axis is perpendicular to the linear antenna array and the y-axis is aligned with the antenna array. The location of the middle point of the antenna array is set to be (0,0), as depicted in Fig. <ref>. 
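To give a rough sense of scale for the near-field region discussed above, the following minimal sketch evaluates the common Rayleigh-distance rule of thumb 2D²/λ for a half-wavelength-spaced linear array; the carrier wavelength and array sizes are illustrative choices of ours, and the precise near-far boundary also depends on the propagation direction, as noted in the introduction.

```python
# Minimal sketch: how the radiating near-field (Fresnel) region grows with the
# array aperture, using the rule-of-thumb Rayleigh distance 2*D^2/lambda.
def rayleigh_distance(num_elements, spacing, wavelength):
    aperture = (num_elements - 1) * spacing      # D: physical size of the linear array
    return 2.0 * aperture ** 2 / wavelength

wavelength = 0.01            # e.g. a 30 GHz carrier, lambda = 1 cm (illustrative)
spacing = wavelength / 2     # half-wavelength element spacing
for n in (64, 256, 1024):
    print(f"N = {n:4d}: Rayleigh distance ~ {rayleigh_distance(n, spacing, wavelength):8.1f} m")
```

Even a few hundred elements at these frequencies push the Rayleigh distance to hundreds of metres, which is why part of the users and scatterers fall inside the near field of a holographic array.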
§.§.§ Far-field Path Modeling For the user or the scatterer located in the far field of the antenna array, the EM wave of the far-field path received by the antenna array can be approximated by a uniform plane wave. In this case, if path l is a far-field path, where l is the index of the path based on (<ref>), the model for this path is described as <cit.> 𝐡_F,l = g_l 𝐚(θ_l), where for the LoS path, i.e., l=0, g_l represents the channel fading and θ_l is the angle between the x-axis and the direction from the origin to the user. For the NLoS path, i.e., l ≥ 1, g_l is a random complex factor describing the joint impact of scattering and channel fading, and θ_l is the angle between the x-axis and the direction from the origin to the l-th scatterer. 𝐚(θ_l) represents the far-field steering vector toward θ_l, i.e., 𝐚(θ_l)= 1/√(N)[1, e^j2π d/λsin(θ_l), ..., e^j2π(N-1)d/λsin(θ_l)]^T. §.§.§ Near-field Path Modeling When the user or the scatterer is located in the near field of the antenna array, we use a spherical wave model to describe the wavefront of EM waves, which is more accurate than the plane wave model. To capture this feature, if path l is a near-field path, the model for this path is described as <cit.> 𝐡_N,l = g_l 𝐛(θ_l,r_l), where for the LoS path, i.e., l = 0, r_l is the distance between the origin and the user. For the NLoS path, i.e., l ≥ 1, r_l is the distance between the origin and the l-th scatterer. The term 𝐛(θ_l,r_l) is the near-field steering vector, expressed as 𝐛(θ_l,r_l) = 1/√(N)[e^-j2π/λ(r_1,l-r_l),...,e^-j2π/λ(r_N,l-r_l)]^T, where r_n,l is the distance between the n-th antenna element of the antenna array and the user or the l-th scatterer corresponding to this path. The term r_n,l can be written as r_n,l=√((r_l cosθ_l)^2+(t_n d-r_lsinθ_l)^2), where t_n = (2n-N+1)/2 and (0, t_n d) is the coordinate of the n-th antenna element. §.§.§ Overall Hybrid-field Channel Modeling Among the L paths, the sets of the far-field and near-field paths from the user to the antenna array are denoted by 𝕃_F and 𝕃_N, respectively, i.e., L = card(𝕃_F) + card(𝕃_N). By combining near-field path components and far-field path components, the hybrid-field multipath channel from the user to the BS is modeled as 𝐡_H = ∑_l ∈𝕃_F𝐡_F,l+∑_l ∈𝕃_N𝐡_N,l. §.§ Signal Model During uplink channel estimation, the user continuously transmits pilot symbols to the BS for Q time slots. We assume that the channel coherence time is longer than the Q time slots, so that the channel state information (CSI) remains static during channel estimation. After the analog beamforming, the equivalent received pilot 𝐲_q ∈ℂ^N_RF at the BS at time slot q is denoted as 𝐲_q = 𝐖_q𝐡_H x_q+𝐖_qn_q, where x_q is the transmitted pilot signal at time slot q and 𝐖_q∈ℂ^N_RF×N is the beamforming matrix set at the BS. The term n_q ∼𝒞𝒩(0, σ^2𝐈_N) is zero-mean complex Gaussian additive noise. Because no prior CSI is available in the channel estimation phase, the beamforming is configured with random phase shifts. Based on (<ref>), the received pilot signal at the BS over the entire Q time slots can be written as 𝐲 = 𝐖𝐡_H x+𝐖𝐧, where 𝐲 = [𝐲_1^T, 𝐲_2^T, ..., 𝐲_Q^T]^T, 𝐖 = [𝐖_1^T, 𝐖_2^T, ..., 𝐖_ Q^T]^T, and 𝐧=[n_1^T, n_2^T, ..., n_Q^T]^T. The target of channel estimation is to distinguish each path and estimate {g_l, θ_l, r_l} for each path of the hybrid-field channel based on the received pilot signal 𝐲.
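As a small numerical illustration of the model above, the following sketch builds the far-field steering vector a(θ), the near-field steering vector b(θ, r), a two-path hybrid-field channel, and the stacked received pilots of the signal model; all parameter values (wavelength, array size, path gains, pilot length) are illustrative choices of ours, not taken from the paper's simulation setup.

```python
import numpy as np

def far_field_steering(theta, N, d, lam):
    """a(theta): plane-wave phase progression across an N-element linear array."""
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d / lam * n * np.sin(theta)) / np.sqrt(N)

def near_field_steering(theta, r, N, d, lam):
    """b(theta, r): spherical-wave phases, element n placed at (0, t_n * d)."""
    t = (2 * np.arange(N) - N + 1) / 2
    r_n = np.sqrt((r * np.cos(theta)) ** 2 + (t * d - r * np.sin(theta)) ** 2)
    return np.exp(-1j * 2 * np.pi / lam * (r_n - r)) / np.sqrt(N)

N, lam = 256, 0.01
d = lam / 2
# Hybrid-field channel: one far-field path plus one near-field path.
h = (0.8 * far_field_steering(0.3, N, d, lam)
     + 0.5 * near_field_steering(-0.2, 10.0, N, d, lam))

# Received pilots over Q slots with random-phase analog combining, pilot x = 1.
Q, N_RF, sigma = 16, 4, 0.01
W = np.exp(1j * 2 * np.pi * np.random.rand(Q * N_RF, N)) / np.sqrt(N)
noise = sigma * (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
y = W @ h + W @ noise
print(y.shape)   # (Q * N_RF,) stacked measurements, far fewer than the N unknowns
```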
Due to the large number of antenna elements N, viewing (<ref>) as a system of linear equations and solving for 𝐡_H directly may lead to poor channel estimation accuracy. This is because the channel parameters to be estimated outnumber the pilot signals, i.e., N>QN_RF. § CHARACTERISTICS OF HYBRID-FIELD CHANNEL Because of the limited number of scatterers in millimeter-wave communication <cit.>, we aim to reduce the channel estimation overhead in (<ref>) based on sparse signal recovery techniques. In existing works such as <cit.>, sparse channel characteristics are revealed by transforming channels to the angular domain (or polar domain) for the far-field (or near-field) case. By exploiting the channel sparsity, sparse-signal-recovery-based channel estimation algorithms with low pilot overhead are designed <cit.>. However, in the hybrid-field case, if we apply the angular-domain (or polar-domain) channel transform, a power diffusion effect occurs. This effect indicates that the power gain of a path spreads to other positions and causes multiple fake paths to be detected. The non-orthogonality between the near-field and far-field steering vectors is the reason for such an effect. Thus, in the hybrid-field case, sparse channel representations can no longer be obtained via either the angular-domain transform or the polar-domain transform, which is shown explicitly below. We therefore apply a joint angular-polar domain channel transform, based on which the power diffusion effect is quantified by the power diffusion range. The power diffusion effect is then alleviated. §.§ Introduction to Power Diffusion Effect We first describe the angular-domain and the polar-domain channel transforms, where the power diffusion effect of the hybrid-field channel is discovered. Next, a description of this effect is given. §.§.§ Angular-domain Transform The far-field channel is a weighted sum of steering vectors 𝐚(θ) at different propagation directions of the EM waves, as given in (<ref>). Therefore, a matrix 𝐅_A can be designed to transform a far-field channel 𝐡_F consisting only of far-field paths to its angular-domain representation <cit.>: 𝐡_F = 𝐅_A𝐡_A,F, where 𝐅_A = [𝐚(θ_1), 𝐚(θ_2), ..., 𝐚(θ_N)] and 𝐚(θ_n), defined in (<ref>), denotes the far-field steering vector toward direction θ_n. We set θ_n = arcsin((2n-1-N)/N), n=1, ..., N. 𝐡_A,F is the angular-domain representation of an arbitrary far-field channel 𝐡_F. §.§.§ Polar-domain Transform Similarly, a matrix 𝐅_P consisting of near-field steering vectors is designed to transform a near-field channel 𝐡_N consisting only of near-field paths to its representation in the polar domain, denoted as 𝐡_N = 𝐅_P𝐡_P,N, where 𝐅_P is obtained by sampling both angles and distances in space and 𝐡_P,N is the polar-domain representation of an arbitrary near-field channel 𝐡_N. Specifically, 𝐅_P = [𝐅_P,1, 𝐅_P,2... 𝐅_P,S], 𝐅_P,s = [𝐛(θ_1, r_s,1), 𝐛(θ_2, r_s,2)... 𝐛(θ_N, r_s,N)], s = 1, ..., S, where the term 𝐛(θ_1, r_s,1), defined in (<ref>), is the near-field steering vector toward position {θ_1, r_s,1}. The design of {θ_n, r_s,n} can be found in <cit.>. §.§.§ Power Diffusion Effect The power diffusion effect refers to the phenomenon that, when a path component is transformed from the spatial domain into the angular domain or the polar domain, the power gain corresponding to this path component spreads to other positions and generates fake paths, which are represented by multiple steering vectors. To demonstrate the power diffusion effect, in Fig.
<ref> we illustrate the angular-domain representation of a hybrid-field channel 𝐡_H, which is defined in (<ref>) and consists of a far-field path and a near-field path. The polar-domain representation of this hybrid-field channel is shown in Fig. <ref>. They are obtained by 𝐡_A,H = |𝐅_A^H𝐡_H|, 𝐡_P,H = |𝐅_P^H𝐡_H|. As shown in Fig. <ref>, the power gain of the near-field path is not concentrated in one steering vector but spreads over multiple far-field steering vectors in the angular-domain transform matrix 𝐅_A. Thus, multiple far-field steering vectors should be jointly applied to describe the near-field path. Similarly, in the polar domain, the far-field path should be described by multiple near-field steering vectors, as shown in Fig. <ref>. We note that the power diffusion effect also occurs when a near-field path is transformed with a polar-domain transform matrix consisting of multiple submatrices, i.e., S ≥ 2, which is not depicted in Fig. <ref>. The reason for the power diffusion effect is that two different steering vectors in the angular domain or polar domain can have high coherence. Formally, high coherence can be expressed as μ_p,q = |𝐛(θ_p,r_p)^H 𝐛(θ_q,r_q)| >α, μ_r,q = |𝐚(θ_r)^H 𝐛(θ_q,r_q)| >α, where α is a positive constant, 𝐛(θ_p,r_p) and 𝐛(θ_q,r_q) are two different near-field steering vectors as defined in (<ref>), and 𝐚(θ_r) is a far-field steering vector as defined in (<ref>). The terms μ_p,q, μ_r,q denote the coherence between two steering vectors. To quantify the power diffusion effect, we define the power diffusion range as the range of steering vectors whose coherence with the steering vector representing the path is larger than a pre-defined threshold α. Due to the power diffusion effect in the angular-domain and polar-domain channel representations, the issue of inaccurate transform-domain path component estimation arises, which consequently causes errors in spatial-domain channel estimation. We illustrate inaccurate transform-domain path component estimation in the angular domain based on conventional OMP channel estimation <cit.>. OMP aims to search for L peaks in the transform-domain channel representation, which represent the L path components of the multipath channel. The estimation result of the transform-domain channel representation, consisting of the magnitudes and corresponding steering vectors of the L peaks, provides sufficient information to restore the CSI. However, the power diffusion effect causes multiple steering vectors within the power diffusion range, more than L in number, to carry channel information. For instance, in Fig. <ref>, the steering vectors that provide information for the near-field path are within the range of the ellipse. Since OMP only identifies L steering vectors, the steering vector representing the far-field path is omitted, resulting in an inaccurate estimation of the transform-domain path component, as shown in Fig. <ref>. §.§ Joint Angular-Polar Channel Transform To solve the issue of inaccurate transform-domain path component estimation, we transform the hybrid-field channel to the joint angular-polar domain. Based on this domain, the information of both far-field and near-field paths can be explicitly extracted to eliminate the influence of the power diffusion effect. The hybrid-field channel representation 𝐡_J,H in the joint angular-polar domain is denoted as 𝐡_H = 𝐅_J𝐡_J,H. By defining 𝐅_A, 𝐅_P,1, 𝐅_P,2...
𝐅_P,S as the submatrix of 𝐅_J, the joint angular-polar domain transform matrix 𝐅_J can be expressed as <cit.> 𝐅_J = [𝐅_A, 𝐅_P,1, 𝐅_P,2... 𝐅_P,S]. The far-field and near-field steering vectors contained in 𝐅_J are illustrated in Fig. <ref>. Based on 𝐡_J, the joint angular-polar domain channel representation of the hybrid-field channel is obtained by 𝐡_J,H = |𝐅_J^H𝐡_H|. An example of a two-path hybrid-field channel representation 𝐡_J,H is shown in Fig. <ref>, which is the same channel as shown in Fig. <ref> and Fig. <ref>. Based on Fig. <ref>, we obtain the following two observations. Given a properly designed joint angular-polar domain transform matrix 𝐅_J, for each of the L path components, a peak in the joint angular-polar domain corresponding to this path exists. In the joint angular-polar domain channel representation, for each peak, the magnitude of its peak is larger than the magnitude of its power diffusion counterpart. This is because, among all steering vectors in 𝐅_J, the steering vector representing the peak is most correlated with the steering vector representing the path. Based on the aforementioned observations, a novel channel estimation method is developed, involving L iterations. Specifically, in the l-th iteration, given Observation <ref> and Observation <ref>, a steering vector representing the peak can be found in the joint angular-polar domain channel representation, indicating that a path component is detected. Thus, the estimation of the direction and distance of this path is obtained, and the power diffusion range of the detected path (shown as red ellipses in Fig. <ref>) can be further determined via calculation, which will be described in Section <ref>. By identifying and eliminating the interference caused by steering vectors within the power diffusion range of the previously detected paths, the (l+1)-th iteration can be performed to detect the corresponding steering vector without being influenced by the power diffusion. Consequently, this approach resolves the issue of inaccurate transform-domain path component estimation and provides a high-resolution approximation for the joint angular-polar domain channel representation 𝐡̂_J, thereby enhancing the channel estimation accuracy. The details of the proposed channel estimation method will be elaborated in Section <ref>. § HYBRID-FIELD CHANNEL ESTIMATION ALGORITHM In this section, we first propose a hybrid-field channel estimation algorithm without prior knowledge of the number of near-field and far-field paths. We then analyze the computational complexity of the algorithm and the Cramér-Rao Lower Bound (CRLB) of the channel estimation problem in the form of sparse signal recovery. §.§ Algorithm Design We design a new hybrid-field power-diffusion-aware OMP channel estimation algorithm (PD-OMP), which considers the aforementioned power diffusion effect to improve estimation accuracy. §.§.§ Initialization The PD-OMP is first initialized in Steps 1-3. In Step 1, to explicitly reveal the peaks of all far-field and near-field path components of the hybrid-field multipath channel, we generate the transform matrix 𝐅_J for the joint angular-polar domain according to its definition in (<ref>). In Step 2, to whiten the noise in the received signal, we calculate the pre-whitening matrix 𝐃 based on the beamforming matrix. To be specific, 𝐃 is obtained by decomposing the covariance matrix of noise with Cholesky factorization, which is denoted as, 𝐂 = σ^2 𝐃𝐃^H. 
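As an aside to Step 1 (the noise covariance used in Step 2 is given next), the joint dictionary 𝐅_J can be assembled from the angular-domain grid and a set of polar-domain submatrices introduced in the previous section. The sketch below reuses the steering-vector helpers from the earlier snippet; the distance rings are simply log-spaced placeholders, so it only mimics, and does not reproduce, the exact polar-domain distance sampling rule cited in the text.

```python
def joint_dictionary(N, d, lam, S, r_min=10.0, r_max=300.0):
    """Assemble F_J = [F_A, F_P1, ..., F_PS] (illustrative construction only)."""
    thetas = np.arcsin((2 * np.arange(1, N + 1) - 1 - N) / N)        # angular grid theta_n
    F_A = np.stack([far_steering(th, N, d, lam) for th in thetas], axis=1)
    rings = np.logspace(np.log10(r_min), np.log10(r_max), S)         # placeholder distance rings
    F_P = [np.stack([near_steering(th, r, N, d, lam) for th in thetas], axis=1)
           for r in rings]
    return np.concatenate([F_A] + F_P, axis=1)                       # N x N(S+1) dictionary

F_J = joint_dictionary(N, d, lam, S=5)
h_J = np.abs(F_J.conj().T @ h_H)     # joint angular-polar representation |F_J^H h_H|
```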
The covariance matrix of noise is computed as <cit.> 𝐂 = σ^2 𝔹(𝐖_1𝐖_1^H, 𝐖_2𝐖_2^H, ..., 𝐖_Q𝐖_Q^H), where 𝔹(·) represents the generation of a block diagonal matrix and 𝐖_q is the beamforming matrix of the q-th time slot. In Step 3, we set the equivalent measurement matrix as Φ=𝐃^-1𝐖𝐅_J, where Φ transforms the pilot signal to the joint angular-polar domain in Step 4. §.§.§ Main Body The key procedures of PD-OMP are performed iteratively as follows. * Path detection (Step 4): Identify the steering vector that exhibits the highest correlation with the residual pilot signal 𝐑 for detecting a path. 𝐑 is obtained by subtracting the power gain of detected paths from the received pilot signal. * Power diffusion range identification (Step 5): Generate the power diffusion range of the newly detected path. * Residual signal update (Steps 6-8): Eliminate the power gain of the newly detected path in the residual signal. After the initialization stage, L iterations are performed to find the steering vectors corresponding to the L path components from the user to the antenna array. Specifically, in Step 4, we first transform the residual signal to the joint angular-polar domain and then detect a new steering vector as i^*_l = max_i|Φ(:,i)^H 𝐑|^2, which indicates that the residual signal 𝐑 has the strongest correlation with the i_l^*-th steering vector in 𝐅_J. In this way, a path is detected, and we would like to point out that the direction and distance {θ_i_l^*, r_i_l^*} associated with the i_l^*-th steering vector is the estimation result for the propagation direction and distance of the newly detected path. Given {θ_i_l^*, r_i_l^*} as well as the power gain of the newly detected path, the power diffusion range Γ_l of this path is generated using Algorithm 2 in Step 5, which will be presented in Section <ref>. The overall support set Γ is then updated with the union of Γ_l in Step 6. The channel representation 𝐡̂_J,H is estimated with the least square method in Step 7. The residual signal is updated by removing the projection of the detected paths in the received pilot signal in Step 8 as 𝐑 = 𝐲 - Φ(:,Γ)𝐡̂_J,H. Finally, in Step 9, the iteration is terminated and the hybrid-field channel is recovered as 𝐡̂_̂Ĥ = 𝐅_J𝐡̂_J,H. The proposed PD-OMP channel estimation algorithm is summarized in Algorithm 1. §.§.§ Identifying the Power Diffusion Range In Step 5, the power diffusion range of the detected path is generated using Algorithm 2. Specifically, the coherence between two steering vectors that are close to each other in direction is generally larger than the coherence between two steering vectors that differ greatly from each other in direction. Hence, to reduce the computational complexity of calculating the coherence, we start by computing the coherence between the i_l^*-th steering vector and the steering vector which is in each submatrix of 𝐅_J and has the same direction as the i_l^*-th steering vector, i.e., the permitted variation in direction Δ i = 0. Each submatrix of 𝐅_J represents a way of sampling the space with a series of steering vectors. The criterion for checking whether the i-th steering vector is in the power diffusion range is given as μ_i_l^*,i≥α/m̅_l, where α is the power diffusion detection threshold and 0 ≤α≤ 1. μ_i_l^*,i is the coherence between the i-th and i_l^*-th steering vectors. 
m̅_l is the normalized magnitude of the l-th detected path and is defined as m̅_l = m_l/max_lm_l = m_l/m_1 = max_i|Φ(:,i)^H 𝐑|/m_1, where m_1 is the magnitude of the first detected path[Since PD-OMP detects the steering vector with the largest magnitude in each iteration, max_lm_l = m_1.]. The magnitude of each path is normalized and is introduced into (<ref>) because the magnitude variation of each path should be considered to limit the power diffusion range of weak paths. If a steering vector satisfies (<ref>), this steering vector carries information about the detected path and is therefore added to the power diffusion range. To generate the complete power diffusion range, the permitted variation in direction, represented as Δ i, expands in each submatrix until it violates the criterion (<ref>), as shown in Fig. <ref>. After we iterate through all the submatrix of 𝐅_J, the power diffusion range for the detected path is generated with a smaller computational complexity than calculating the coherence between the i_l^*-th steering vector and each steering vector in 𝐅_J. Based on (<ref>), a smaller α indicates that a wider range of power diffusion is considered. On the one hand, by applying a large α, the steering vector representing a path with weak power gain is likely overlooked. On the other hand, small α gives rise to a large power diffusion range, which introduces noise into the estimation result. Therefore, a trade-off of channel estimation accuracy exists for α. The relationship between α and the size of the power diffusion range will be analyzed in Section <ref> and the effect of α on the performance of PD-OMP will be investigated through simulation in Section <ref>. §.§ Computational Complexity Analysis The computational complexity of Algorithm 1 mainly comes from Step 5 and Step 7. The size of matrices or vectors used in PD-OMP is given as 𝐃∈ℂ^N_RF× N_RF, Φ∈ℂ^QN_RF× N(S+1), 𝐑∈ℂ^QN_RF, 𝐲∈ℂ^QN_RF and 𝐡̂_J∈ℂ^N(S+1). For Step 3, because the matrix inversion can be performed offline, we focus on the complexity of matrix production, which is 𝒪((QN_RF)^2). The complexity for Step 4, including the matrix product and the maximizing operation, is 𝒪(NL(S+1)(QN_RF+1)). For Step 5, the computational complexity for coherence calculation is 𝒪(N). Hence, the process of power diffusion range generation requires a complexity of 𝒪(N(S+1) card(Γ)), where card(·) denotes the number of elements for a given set. Step 6 has a complexity of 𝒪(L). Step 7, 8, 9 have the computational complexities of 𝒪(QN_RF( card(Γ))^2), 𝒪(QN_RF card(Γ)) and 𝒪(N card(Γ)), respectively. Considering the large dimension of the antenna array and the limited size of the support set, we have N > QN_RF and NLS>( card(Γ))^2. The computational complexity of PD-OMP is 𝒪(N(S+1)(LQN_RF+ card(Γ))), which is linear with the number of antenna elements N. §.§ Power Diffusion Range Analysis Since the size of the power diffusion range, i.e., card(Γ), directly affects the computational complexity of PD-OMP, we analyze the influence of power diffusion detection threshold α on card(Γ). A closed-form relationship between card(Γ) and α is intractable because the criterion (<ref>) needs to be checked for each steering vector in the transform matrix 𝐅_J to generate Γ. Since the total power diffusion range can be approximated by the sum of the power diffusion range of each path component in each submatrix, i.e., card(Γ) ≈∑_l=1^L∑_s=1^S card(Γ_l,s), we can obtain an approximation for card(Γ_l,s) as follows. 
If a path component, which is represented by a steering vector 𝐛(θ_p, r_p), is transformed based on a submatrix 𝐅_J,s = [ 𝐛(θ_1, r_s,1), 𝐛(θ_2, r_s,2), ..., 𝐛(θ_N, r_s, N)], the sum of squares of all transform result components |𝐛(θ_n, r_s,n)^H𝐛(θ_p, r_p) |, n=1,2,..., N approximates to 1. In other words, the transform-domain representation of this path component satisfies ∑_n=1^N|𝐛(θ_n, r_s,n)^H𝐛(θ_p, r_p) |^2 ≈ 1. See Appendix <ref>. The size of a power diffusion range for a path l, i.e., card(Γ_l), can be approximated as a piece-wise function of α: card(Γ_l) ≈∑_s=-S_0+1^S-S_0+1ϵ(m̅_̅l̅μ_l(s)-α)/(μ_l(s))^2, where ϵ(x) = 1 if x ≥ 0 and ϵ(x) = 0 if x < 0. Here S_0 is the index of the submatrix containing the detected i_l^*-th steering vector obtained by (<ref>), and μ_l(s) is the coherence between two steering vectors that are in the same direction but are in the submatrices of 𝐅_J, S_0 and 𝐅_J, S_0+s, respectively: μ_l(s) = 1/N| ∑_(1-N)/2^(N-1)/2 e^j2π/λn^2d^2sρ|, where ρ is a parameter for polar-domain transform matrix design. According to the simulation result shown in Fig. <ref>, the power gain of a detected path concentrates on a few steering vectors around the direction of the detected path. Hence, we apply a rectangle to approximate the power diffusion effect in the (s-S_0)-th submatrix such that card(Γ_l,s) is approximately equal to the width of the rectangle Δ D. We set the steering vector which has the same direction as the detected i_l^*-th steering vector to be the midpoint of the rectangle. The height of the rectangle is therefore set as μ_l(s). According to Lemma<ref>, we have card(Γ_l,s) ≈Δ D ≈ 1/(μ_l(s))^2∑_n=1^N|𝐛(θ_n, r_s,n)^H𝐛(θ_p, r_p) |^2 ≈1/(μ_l(s))^2. Since the normalized magnitude of each path is considered in the criterion of power diffusion range (<ref>), condition m̅_̅l̅μ_l(s) ≥α is introduced to limit the power diffusion range, which concludes the proof. Based on Proposition <ref>, the relationship between the support set Γ and the power diffusion detection threshold α is given as card(Γ) ≈∑_l=1^L∑_s=-S_0+1^S-S_0+1ϵ(m̅_̅l̅μ_l(s)-α)/(μ_l(s))^2. §.§ Cramér-Rao Lower Bound Analysis The CRLB bound serves as a theoretical lower bound of MSE to evaluate the performance of channel estimation algorithms. We first derive the CRLB for the estimation of sparse channel representation 𝐡̂_J,H. Then we obtain the CRLB for the spatial-domain hybrid-field channel 𝐡̂_H based on 𝐡̂_H = 𝐅_J𝐡̂_J,H. The CRLB for the estimation of sparse channel representation 𝐡̂_J is given as <cit.> 𝔼{‖𝐡̂_J,H - 𝐡_J,H‖_2^2 } = σ^2 Tr ( ( Φ_𝐡_J,H^H Φ_𝐡_J,H )^-1 ), where Φ_𝐡_J,H∈𝒞^QN_RF× card(Γ) is a matrix composed of columns of Φ indexed by the indices of true support set of 𝐡_J,H. Since Rank(Φ_𝐡_J,H^H Φ_𝐡_J,H)= card(Γ), (<ref>) can be further written as Tr ( ( Φ_𝐡_J,H^H Φ_𝐡_J,H )^-1 )=∑_i=1^ card(Γ)λ_i^-1, where λ_1, λ_2, ... λ_ card(Γ) are the eigenvalues of Φ_𝐡_J,H^H Φ_𝐡_J,H. The coherence of Φ is defined as μ_Φ = max_i≠ j|ϕ_i^Hϕ_j|, where ϕ_i and ϕ_j are the i-th and j-th column of Φ, respectively, and have the following forms: ϕ_i = 𝐃^-1𝐖𝐟_i, ϕ_j = 𝐃^-1𝐖𝐟_j, where 𝐟_i and 𝐟_j are the i-th and j-th column of 𝐅_J, respectively. According to the Gershgorin Disc Theorem <cit.> as well as the fact that Φ_𝐡_J,H^H Φ_𝐡_J,H is a positive semidefinite matrix, the eigenvalues of Φ_𝐡_J,H^H Φ_𝐡_J,H are real and lie in the range of [ max{ 1- card(Γ)μ_Φ,0}, 1+ card(Γ)μ_Φ ]. Therefore, we have 𝔼{‖𝐡̂_̂Ĵ,̂Ĥ - 𝐡_J,H‖_2^2 }≥σ^2 card(Γ)/1+ card(Γ)μ_Φ. 
Since ϕ_i and ϕ_j is obtained by the randomly generated beamforming matrix 𝐖, as shown in (<ref>) and (<ref>), μ_Φ is a random variable with respect to 𝐖. Here we derive an upper bound on the expectation of μ_Φ. The upper bound for the expectation of μ_Φ with respect to 𝐖 is given as, 𝔼{μ_Φ} < QN_RF/N. See Appendix <ref>. Based on Lemma. <ref>, we have 𝔼{‖𝐡̂_J,H - 𝐡_J,H‖_2^2 }≥σ^2 card(Γ)/1+ card(Γ)μ_Φ ≈σ^2 card(Γ)/1+ card(Γ)𝔼{μ_Φ} > σ^2 card(Γ)/1+ card(Γ)QN_RF/N. Given 𝐡_H = 𝐅_J𝐡_J,H, we have the following proposition. The CRLB for the estimated hybrid-field multipath channel 𝐡̂ is given as 𝔼{‖𝐡̂_H - 𝐡_H‖_2^2}≥σ^2 card(Γ)/1+ card(Γ)QN_RF/N(σ_min(𝐅_J))^2, where σ_min(𝐅_J) denotes the smallest singular value of 𝐅_J. See Appendix <ref>. Based on the above formula for CRLB, it is observed that the increase in the number of RF chains and pilot length can reduce the CRLB. This is reasonable as additional RF chains and pilot signals can provide more information on the hybrid-field channel, thus improving the estimation accuracy. Besides, the increase in antenna numbers, which indicates that more parameters have to be estimated, increases the CRLB. The increase in power diffusion range size card(Γ) also increases the CRLB. § SIMULATION RESULTS In this section, we evaluate the performance of our proposed channel estimation algorithm PD-OMP in terms of the normalized mean square error (NMSE). The NMSE is defined as NMSE = 𝔼{‖ĥ_H - 𝐡_H‖_2^2/‖𝐡_H‖_2^2}, which is the expectation of the square of the relative estimation error. Besides, the influence of power diffusion detection threshold α on the performance of PD-OMP is investigated. §.§ Parameter Setting In the simulation, we consider the case where an antenna array at the working frequency of 30 GHz is equipped with 200 antenna elements. The number of RF chains is set as N_RF = 10. The number of paths in the multipath channel is set as L = 7. The distance and angle of the user and scatterers to the origin satisfy the uniform distribution and are within the range of (30m, 300m) and (-60^∘, 60^∘), respectively. The joint factor of channel fading and scattering g_l of NLoS path satisfies circularly symmetric complex Gaussian distribution of 𝒞𝒩(0,1). For the LoS path, g_l = 1. Each element of the beamforming matrix 𝐖 is randomly chosen from {1/√(N),-1/√(N)} with equal probability. The number of submatrices for 𝐅_P is 5. In the simulation, we calculate the normalized CRLB of compressed-sensing-based hybrid-field channel estimation, which is denoted as CRLB = σ^2 card(Γ)/(1+ card(Γ)QN_RF/N)‖𝐡_H‖_2^2. To evaluate the performance of PD-OMP, we also compare it with the basic MMSE algorithm and five OMP-based channel estimation algorithms, i.e., * MMSE<cit.>: A basic estimation method that applies the second-order statistics of the CSI to minimize the mean square error of channel estimation. * NPD-OMP: Estimate the far-field and near-field path components simultaneously with joint angular-polar domain transform. * Near-OMP<cit.>: Only apply the near-field polar-domain transform (<ref>). * Far-OMP<cit.>: Only apply the far-field angular-domain transform (<ref>). * HF-OMP<cit.>: Require the numbers of near-field and far-field paths as prior information and estimate the far-field and near-field path components separately. * SD-OMP<cit.>: Require the numbers of near-field and far-field paths as prior information and estimate the far-field and near-field path components separately. 
When estimating far-field path components, the support set Γ also includes far-field steering vectors whose directions are close to the detected far-field steering vector's direction. It should be noted that none of the comparing algorithms consider the power diffusion effect. §.§ Influence of SNR on Algorithm Performance Fig. <ref> demonstrates the NMSE performance of different estimation algorithms versus SNR. Compared with HF-OMP or SD-OMP, which requires the numbers of near-field and far-field paths as prior knowledge, PD-OMP can estimate the channel more accurately without the prior knowledge of path distribution, demonstrating the effectiveness of the proposed algorithm. This is because PD-OMP applies the joint angular-polar transform matrix, which is capable of capturing both the far-field and near-field features of the channel. The NMSE of PD-OMP is lower than NPD-OMP, showing the necessity of considering the power diffusion range in the support set. Besides, PD-OMP enjoys superiority over benchmark algorithms when the SNR is closer to 20 dB. This is because, at a higher SNR, the power diffusion range can be estimated more accurately, which is then utilized by PD-OMP to compensate for the performance degradation caused by the power diffusion effect. In contrast, none of the existing algorithms consider the power diffusion effect. Besides, we calculate the CRLB to evaluate the theoretical performance of PD-OMP. We observe that CRLB is larger than the NMSE when SNR ≤ 8 dB, which shows that CRLB is valid for PD-OMP only at a high SNR regime. This is because CRLB gives a theoretical lower bound for unbiased estimators. However, OMP is a biased estimator <cit.> and the bias is not negligible at a lower SNR regime. §.§ Influence of Pilot Length on Algorithm Performance Fig. <ref> presents the NMSE performance of different algorithms with the increase of pilot length Q. PD-OMP can achieve the lowest NMSE among all benchmark methods for different Q. In other words, PD-OMP requires a small pilot length to achieve the same estimation accuracy. For instance, to reach an NMSE of -5.8 dB, PD-OMP only requires Q=8 while MMSE requires Q=20. Besides, the gaps of NMSE between PD-OMP and benchmark methods increase with Q. This is because more pilot signals can provide more information on the channel so that for PD-OMP, the power diffusion range can be generated more accurately to help channel estimation. §.§ Influence of Scatterer Distribution The scatterer distribution is defined as the split ratio γ = card(𝕃_N)/L, which represents the ratio of the near-field path numbers card(𝕃_N) to the total multipath numbers L. The mean, standard deviation, maximum, and minimum of NMSE concerning γ are applied to evaluate whether algorithms are robust to the variation of the split ratio. The mean and standard deviation of NMSE are calculated by NMSE = ∑_γ∈Ξ NMSE_γ/ Card(Ξ), σ_ NMSE = √(∑_γ∈Ξ ( NMSE_γ - NMSE)^2/ Card(Ξ)), where Ξ = {0, 0.1,..., 1} and NMSE_γ denotes the NMSE performance obtained by PD-OMP given the split ratio γ. The simulation parameters are set as L=10, SNR = 10 dB, and Q = 10. It is observed from Table <ref> that PD-OMP achieves the lowest mean estimation error among all benchmark algorithms and the lowest standard deviation of estimation error among all OMP-based benchmark algorithms. It is noted that the MMSE method obtains a lower standard deviation than PD-OMP because MMSE employs the statistical information of the channel to assist channel estimation. 
Besides, the PD-OMP has the smallest maximum NMSE among all methods. Hence, PD-OMP is robust to the change of split ratio in the hybrid-field wireless communication scenario, demonstrating its stability against the scatterer distribution variation. §.§ Advantage of Joint Angular-Polar Domain Channel Transform To demonstrate the necessity of applying joint angular-polar domain channel transform instead of the polar-domain channel transform in the hybrid-field channel estimation, we compare the performances of two transforms in Fig. <ref>. The curves “Polar Domain ζ=5" and “Polar Domain ζ=8" are obtained with a channel estimation method that replaces the joint angular-polar domain transform in PD-OMP with the polar-domain transform. We use ζ to denote the number of submatrices in the transform-domain matrix. A larger ζ indicates that a larger area is sampled with near-field steering vectors, which induces a higher computational complexity. As shown in Fig. <ref>, when we set ζ=5 for both transform matrices, i.e., the computational complexities for two algorithms based on different transform matrices are the same, the NMSE of the joint angular-polar domain transform is smaller than the polar-domain transform. Compared with the polar-domain channel transform (ζ=8), the joint angular-polar domain channel transform (ζ=5) saves 37.5% computational complexity while achieving a similar NMSE performance. Hence, the joint angular-polar domain can help reach a balance between the estimation accuracy and the computational complexity. §.§ Influence of Power Diffusion Detection Threshold α on Algorithm Performance Power diffusion detection threshold α∈(0, 1 ] decides the considered power diffusion range in PD-OMP, which affects the estimation accuracy and the computational complexity of PD-OMP. In Fig. <ref>, we present how the NMSE of PD-OMP changes with the power diffusion detection threshold α when the pilot length Q increases. From Fig. <ref>, the NMSE of each α reduces as Q increases. For a higher Q, a smaller α reaches the minimum NMSE. This is because given more pilot signals, with a smaller α, a wider range of considered power diffusion can help eliminate the estimation error more thoroughly. Nevertheless, in the case of a small Q, a falsely estimated range of power diffusion is likely to be introduced into the support set Γ if a small α is adopted. Thus, a high α is desirable with a small Q, as the range of power diffusion is limited. Fig. <ref> also reveals that the pilot length serves as auxiliary information for choosing a proper size of power diffusion range in channel estimation. In Fig. <ref>, how the NMSE of PD-OMP changes with the SNR and the power diffusion detection threshold α is investigated. The NMSE reduces as the SNR increases for different α. Besides, according to the yellow curve, with the increase of SNR, PD-OMP with a decreased α can achieve the lowest NMSE. This is because the inaccurate transform-domain path component estimation is alleviated by applying a smaller α in the case of a high SNR. However, if the SNR is low, a falsely included power diffusion range will introduce additional noise to the estimation result and worsen the NMSE performance. Therefore, Fig. <ref> reveals that when the SNR is low, a high α should be selected to limit the range of power diffusion to improve the channel estimation accuracy. In Fig. <ref>, the number of iterations and NMSE of PD-OMP concerning α∈{0.4, ..., 1} are investigated. 
The number of iterations refers to how many times Step 4 to Step 10 in Algorithm 2 are performed, which dominates the computational complexity of PD-OMP. First, a trade-off between the computational complexity and the estimation accuracy is revealed. When α rises from 0.4 to 1, the number of iterations reduces as a narrow range of power diffusion is selected, which reduces the calculation of coherence and the matrix inversion in PD-OMP. However, the estimation accuracy also worsens with the increase of α. Second, it is observed that with the increase of paths number, both the number of iterations and NMSE increase. This is because additional paths lead to an increasingly complicated multipath channel and a wider power diffusion range has to be estimated, which gives rise to higher estimation error. § CONCLUSION In this paper, we investigated the hybrid-field multipath channel estimation in holographic MIMO communications. We first identified the issue of inaccurate path component estimation, which led to inaccurate channel estimation if the conventional far-field or near-field channel estimation methods were applied to the hybrid-field case. We revealed that the reason came from the power diffusion effect that the power gain of each path diffused to other positions. In consequence, fake paths are generated in the channel representation in transform domains so that path components were difficult to be separated from each other. To cope with the power diffusion effect, the hybrid-field channel was transformed from the spatial domain to the joint-angular-polar domain. A solution concept of power diffusion range was introduced to quantify the range of diffused power gain so that the power diffusion effect could be identified and eliminated. A novel channel estimation algorithm PD-OMP was then proposed. The computational complexity of PD-OMP and the CRLB of sparse-signal-recovery-based hybrid-field channel estimation were derived. The theoretical analysis showed that the computational complexity of PD-OMP was linear with the number of antenna elements. Simulation results showed that: 1) By extracting both far-field and near-field path components of the hybrid-field channel, the joint angular-polar domain channel transform reached a balance between the estimation accuracy and the computational complexity compared to the polar-domain channel transform. 2) The proposed PD-OMP outperformed current state-of-the-art hybrid-field channel estimation methods under different SNRs and pilot lengths. Besides, PD-OMP was robust to the variation of scatterer distribution. 3) When the SNR or the pilot length increased, a wider range of power diffusion should be selected in PD-OMP to produce a smaller NMSE. 4) There existed an optimal power diffusion detection threshold to reach a trade-off between the computational complexity and the channel estimation accuracy of PD-OMP. § PROOF OF LEMMA <REF> We omit the propagation direction and transmission distance (θ_p, r_p) of the steering vector 𝐛(θ_p, r_p) for simplicity. Hence, we have ∑_n=1^N|𝐛(θ_n, r_s,n)^H𝐛|^2 = (𝐅_J,s𝐛)^H𝐅_J,s𝐛 = Tr(𝐅_J,s^H𝐅_J,s𝐛𝐛^H), where 𝐅_J,s is the s-th submatrix of 𝐅_J. The term 𝐅_J,s^H𝐅_J,s is denoted as 𝐅_J,s^H𝐅_J,s = [ χ_1,1 ⋯ χ_1,N; ⋮ ⋱ ⋮; χ_N,1 ⋯ χ_N,N ], where the (x,y)-th element of 𝐅_J,s^H𝐅_J,s is expressed as χ_x,y = 𝐛(θ_x, r_s,x)^H𝐛(θ_y, r_s,y). The term 𝐛𝐛^H is denoted as 𝐛𝐛^H = 1/N[ e^-jk(r_1,p-r_1,p) ⋯ e^-jk(r_1,p-r_N,p); ⋮ ⋱ ⋮; e^-jk(r_N,p-r_1,p) ⋯ e^-jk(r_N,p-r_N,p) ]. 
Therefore, we have {𝐅_J,s^H𝐅_J,s𝐛𝐛^H}_j,j = 1/N∑_n=1^Nχ_j,ne^-jk(r_n,p-r_j,p). Since only one submatrix is considered, we have χ_j,ne^-jk(r_n,p-r_j,p)(a)≈ 1/N∑_z=(1-N)/2^(N-1)/2 e^jπ z (sinθ_j-sinθ_n) e^-jk(r_n,p-r_j,p)j ≠ n= e^jπ (sin(θ_j)-sin(θ_n))(1-N)/2 -jk(r_n,p-r_j,p)1-e^jπ N (sin(θ_j)-sin(θ_n))/1-e^jπ (sin(θ_j)-sin(θ_n)), where the approximation (a) is obtained by performing the second-order Taylor Expansion to the distance term r_y,s,j and r_y,s,n for each y=1, ..., N in the steering vector 𝐛(θ_j,r_s,j) and 𝐛(θ_n,r_s,n), which are used in the calculation of χ_j,n. The second-order Taylor Expansion for r_y,s,j around r_s,j is expressed as r_y,s,j=r_s,j - t_y d sin(θ_j)+t_y^2d^2(1-sin^2(θ_j))/r_s,j, where t_y = 2y-N+1/2. Besides, the transform from (<ref>) to (<ref>) also employs the following equation: 1-sin^2(θ_j)/r_s,j = 1-sin^2(θ_n)/r_s,n, which is a property of the joint angular-polar domain transform matrix. Due to the definition of direction θ in (<ref>), sin(θ_j)-sin(θ_n) is multiple of 2/N. Therefore, based on (<ref>), we have χ_j,ne^-jk(r_n,p-r_j,p)≈{ 0 j ≠ n, 1 j = n. . Therefore, {𝐅_J,s^H𝐅_J,s𝐛𝐛^H}_j,j≈1/N, and we have Tr(𝐅_J,s^H𝐅_J,s𝐛𝐛^H)≈ N*1/N =1. § PROOF OF LEMMA <REF> The term μ_Φ is expressed as μ_Φ = max_i≠ j|ϕ_i^Hϕ_j|, and we have ϕ_i = 𝐃^-1𝐖𝐟_i and ϕ_j = 𝐃^-1𝐖𝐟_j, where 𝐟_i and 𝐟_j are the i-th and j-th column of 𝐅_J, respectively. Therefore, |ϕ_i^Hϕ_j| = | (𝐃^-1𝐖𝐟_i)^H 𝐃^-1𝐖𝐟_j| = |𝐟_i^H𝐖^H (𝐃^H𝐃)^-1𝐖𝐟_j| = |𝐖^H𝐟_i^H 𝐂^-1𝐖𝐟_j| = |𝐟_i^H𝐖^H (𝔹{𝐖_1𝐖_1^H, 𝐖_2𝐖_2^H, ... 𝐖_Q𝐖_Q^H})^-1𝐖𝐟_j | = |𝐟_i^H𝐖^H (𝔹{ (𝐖_1𝐖_1^H)^-1, ... (𝐖_Q𝐖_Q^H)^-1})𝐖𝐟_j | = |𝐟_i^H∑_q=1^Q{𝐖_q^H (𝐖_q𝐖_q^H)^-1𝐖_q}𝐟_j |. Considering that the beamforming is randomly generated, we further analyze the expectation of μ_Φ: 𝔼{μ_Φ} = |𝐟_i^H 𝔼{∑_q=1^Q{𝐖_q^H (𝐖_q𝐖_q^H)^-1𝐖_q}}𝐟_j | = Q |𝐟_i^H 𝔼{𝐖_q^H (𝐖_q𝐖_q^H)^ -1𝐖_q}𝐟_j |, ∀ q. Considering the expectation of the (x,y)-th element of 𝐖_q𝐖_q^H: 𝔼{(𝐖_q𝐖_q^H)_x,y} = 𝔼{∑_z=1^N((𝐖_q)_x,z(𝐖_q^H)_z,y)}. Because the modulus of each element of the beamforming matrix 𝐖_q is 1, and each element is independently and randomly chosen with equal probability, we have 𝔼{(𝐖_q𝐖_q^H)_x,y} = N, if x = y, 0, if x ≠ y. Identically, considering the expectation of the (x,y)-th element of 𝐖_q^H𝐖_q, we have 𝔼{(𝐖_q^H𝐖_q)_x,y} = N_RF, if x = y, 0, if x ≠ y. Therefore, we have 𝔼{𝐖_q^H (𝐖_q𝐖_q^H)^-1𝐖_q} = N_RF/N𝐈_N × N. Since the modulus of each element of 𝐟_i is 1/√(N), ∀ i , we have 𝔼{μ_Φ} = QN_RF/Nmax_i ≠ j|𝐟_i^H𝐟_j| < QN_RF/N. § PROOF OF PROPOSITION <REF> For an arbitrary matrix 𝐀 and an arbitrary vector 𝐱, ‖𝐀𝐱‖_2 ≥σ_min(𝐀)‖𝐱‖_2, where σ_min(𝐀) is the minimum singular value of matrix 𝐀. Hence, we have ‖𝐡̂_H - 𝐡_H‖_2^2 = ‖𝐅_J(𝐡̂_J,H - 𝐡_J,H)‖_2^2 ≥ (σ_min(𝐅_J))^2 ‖ (𝐡̂_J,H - 𝐡_J,H)‖_2^2. Thus, we have 𝔼{‖𝐡̂_H - 𝐡_H‖_2^2 }≥ (σ_min(𝐅_J))^2 σ^2 card(Γ)/1+ card(Γ)QN_RF/N. 0conf S. Yue, S. Zeng, L. Liang, and B. Di, “Channel estimation for holographic communications in hybrid near-far field,” in IEEE Glob. Commun. Conf. (GLOBECOM), Kuala Lumpur, Malaysia, Dec. 2023. 6G W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems: Applications, trends, technologies, and open research problems," IEEE Network, vol. 34, no. 3, pp. 134-142, May/June 2020. hcommun T. Gong, P. Gavriilidis, R. Ji, C. Huang, G. C. Alexandropoulos, L. Wei, Z. Zhang, M. Debbah, H. V. Poor, and C. Yuen, “Holographic MIMO communications: Theoretical foundations, enabling technologies, and future directions," IEEE Commun. Surv. Tutor., Aug. 2023. hdma R. Deng, B. Di, H. Zhang, and L. 
Song, “HDMA: Holographic-pattern division multiple access," IEEE J. Sel. Areas Commun., vol. 40, no. 4, pp. 1317-1332, Apr. 2022. holocom C. Huang, S. Hu, G. C. Alexandropoulos, A. Zappone, C. Yuen, R. Zhang, M. D. Renzo, and M. Debbah, “Holographic MIMO surfaces for 6G wireless networks: Opportunities, challenges, and trends," IEEE Wireless Commun., vol. 27, no. 5, pp. 118-125, Oct. 2020. RHS R. Deng, B. Di, H. Zhang, Y. Tan, and L. Song, “Reconfigurable holographic surface: Holographic beamforming for metasurface-aided wireless communications," IEEE Trans. Veh. Technol., vol. 70, no. 6, pp. 6255-6259, Jun. 2021. RHS2 D. Dardari, “Communicating with large intelligent surfaces: fundamental limits and models," IEEE J. Sel. Areas Commun., vol. 38, no. 11, pp. 2526-2537, Nov. 2020. LIS S. Zeng, H. Zhang, B. Di, Z. Han, L. Song, “Reconfigurable intelligent surface (RIS) assisted wireless coverage extension: RIS orientation and location optimization," IEEE Commun. Lett., vol. 25, no. 1, pp. 269-273, Jan. 2021. lis2 Y. Han, W. Tang, S. Jin, C. -K. Wen, and X. Ma, “Large intelligent surface-assisted wireless communication exploiting statistical CSI," IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 8238-8242, Aug. 2019. dbyhbf B. Di, H. Zhang, L. Song, Y. Li, Z. Han, and H. V. Poor, “Hybrid beamforming for reconfigurable intelligent surface based multi-User communications: Achievable rates With limited discrete phase shifts," IEEE J. Sel. Areas Commun., vol. 38, no. 8, pp. 1809-1822, Aug. 2020. extendnear Y. Liu, Z. Wang, J. Xu, C. Ouyang, X. Mu, and R. Schober, “Near-field communications: A tutorial review," IEEE open J. Commun. Soc., vol. 4, pp. 1999-2049, Aug. 2023. nfs M. K. Ozdemir, H. Arslan, and E. Arvas, “On the correlation analysis of antennas in adaptive MIMO systems with 3-D multipath scattering,” in IEEE Wireless Commun. Networking Conf. (WCNC), Atlanta, GA, USA, Mar. 2004, pp. 295-299. sphericalwave E. Björnson, Ö. T. Demir, and L. Sanguinetti, “A primer on near-field beamforming for arrays and reconfigurable intelligent surfaces," in Asilomar Conf. Signals, Syst., Comput. (ACSSC), Pacific Grove, CA, USA, Oct. 2021, pp. 105-112. hfomp X. Wei and L. Dai, “Channel estimation for extremely large-scale massive MIMO: Far-field, near-field, or hybrid-field?," IEEE Commun. Lett., vol. 26, no. 1, pp. 177-181, Jan. 2022. hfmodel R. Li, S. Sun, and M. Tao, “Applicable regions of spherical and plane wave models for extremely large-scale array communications,” arXiv preprint arXiv:2301.06036, Jan. 2023. Polar Domain M. Cui and L. Dai, “Channel estimation for extremely large-scale MIMO: Far-field or near-field?,” IEEE Trans. Commun., vol. 70, no. 4, pp. 2663-2677, Apr. 2022. nfce Y. Han, S. Jin, C. -K. Wen, and X. Ma, “Channel estimation for extremely large-scale massive MIMO systems,” IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 633-637, May 2020. classicomp J. Lee, G. -T. Gil, and Y. H. Lee, “Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications," IEEE Trans. Commun., vol. 64, no. 6, pp. 2370-2386, Jun. 2016. ynearce X. Zhang, H. Zhang, and Y. C. Eldar, “Near-field sparse channel representation and estimation in 6G wireless communications", IEEE Trans. Commun., Oct. 2023. sdomp Z. Hu, C. Chen, Y. Jin, L. Zhou, and Q. Wei, “Hybrid-field channel estimation for extremely large-scale massive MIMO system," IEEE Commun. Lett., vol. 27, no. 1, pp. 303-307, Jan. 2023. channelmodel H. Lu and Y. 
Zeng, “Communicating with extremely large-scale array/surface: Unified modeling and performance analysis," IEEE Trans. Wireless Commun., vol. 21, no. 6, pp. 4039-4053, Jun. 2022. nfmodel C. A. Balanis, “Arrays: Linear, planar, and circular”, in Antenna Theory Analysis and Design, 4th ed. Hoboken: John Wiley & Sons, Inc., 2016. sparsity R. He, B. Ai, G. Wang, M. Yang, C. Huang, and Z. Zhong, “Wireless channel sparsity: Measurement, analysis, and exploitation in estimation,” IEEE Wireless Commun., vol. 28, no. 4, pp. 113-119, Aug. 2021. farfieldtransform C. Huang, L. Liu, C. Yuen, and S. Sun, “Iterative channel estimation using LSE and sparse message passing for mmWave MIMO systems,” IEEE Trans. Signal Process., vol. 67, no. 1, pp. 245–259, Jan. 2019. samplingbook Y. C. Eldar, “Compressed sensing", in Sampling Theory: Beyond Bandlimited Systems, 1st ed. Cambridge, United Kingdom: Cambridge University Press, 2015. wn J. Rodríguez-Fernández, N. González-Prelcic, K. Venugopal, and R. W. Heath, Jr., “Frequency-domain compressive channel estimation for frequency-selective hybrid millimeter wave MIMO systems,” IEEE Trans. Wireless Commun., vol. 17, no. 5, pp. 2946–2960, May 2018. sparseCRLB Z. Ben-Haim and Y. C. Eldar, “The Cramer–Rao bound for estimating a sparse parameter vector,” IEEE Trans. Signal Process., vol. 58, no. 6, pp. 3384–3389, Jun. 2010. matcomp G. H. Golub and C. F. V. Loan, “Unsymmetric eigenvalue problems", in Matrix computations, 4th ed. Baltimore: Johns Hopkins University Press, 2013. MMSE R. W. Heath JR. and A. Lozano, “Channel estimation," in Foundations of MIMO communication, UK: Cambridge University Press, 2019, pp. 116-120. ompbias F. Gomez-Cuba and A. J. Goldsmith, “Sparse mmWave OFDM channel estimation using compressed sensing," IEEE Int. Conf. Commun. (ICC), Shanghai, China, May, 2019, pp. 1-7.
http://arxiv.org/abs/2407.13681v1
20240718165412
Imaging the jet of MWC 349A with resolved Radio Recombination Line emission from ALMA
[ "Antonio Martínez-Henares", "Qizhou Zhang", "Izaskun Jiménez-Serra", "Jesús Martín-Pintado", "Nuria Huélamo", "Sirina Prasad", "James Moran", "Alejandro Báez-Rubio" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA", "astro-ph.HE" ]
0000-0001-5191-2075]Antonio Martínez-Henares Centro de Astrobiología (CSIC-INTA) Ctra. de Torrejón a Ajalvir, km 4 28850 Torrejón de Ardoz, Madrid, Spain 0000-0003-2384-6589]Qizhou Zhang Center for Astrophysics | Harvard & Smithsonian 60 Garden Street Cambridge, MA 02138, USA 0000-0003-4493-8714]Izaskun Jiménez-Serra Centro de Astrobiología (CSIC-INTA) Ctra. de Torrejón a Ajalvir, km 4 28850 Torrejón de Ardoz, Madrid, Spain 0000-0003-4561-3508]Jesús Martín-Pintado Centro de Astrobiología (CSIC-INTA) Ctra. de Torrejón a Ajalvir, km 4 28850 Torrejón de Ardoz, Madrid, Spain 0000-0002-2711-8143]Nuria Huélamo Centro de Astrobiología (CSIC-INTA) Ctra. de Torrejón a Ajalvir, km 4 28850 Torrejón de Ardoz, Madrid, Spain 0000-0002-1082-5589]Sirina Prasad Center for Astrophysics | Harvard & Smithsonian 60 Garden Street Cambridge, MA 02138, USA 0000-0002-3882-4414]James Moran Center for Astrophysics | Harvard & Smithsonian 60 Garden Street Cambridge, MA 02138, USA UWC Mahindra College Village Khubavali, PO Paud MH 412 108, India § ABSTRACT Jets and disk winds arise from materials with excess angular momentum ejected from the accretion disks in forming stars. How these structures are launched and how they impact the gas within the innermost regions of these objects remains poorly understood. MWC349A is a massive star that has a circumstellar disk which rotates in accord with Kepler's Law, with an ionized wind and a high-velocity jet launched from the disk surface. The strongly maser-amplified emission of hydrogen radio recombination lines (RRLs) observed toward this system provides a comprehensive picture of its ionized environment with exquisite detail. In this Letter, we present ALMA observations of the H26α RRL and continuum emission obtained with the highest angular resolution ever used toward this source (beam of ∼0.02"). The maser RRL emission is resolved for the first time and clearly delineates the ionized disk, wind and jet. We analyzed the RRL data cubes with the 3D non-LTE radiative transfer model MORELI, confirming that the jet is poorly collimated. We found that the jet orientation is closer to the rotation axis of the system than derived from spatially unresolved data. This study confirms that hydrogen RRL masers are powerful probes of the physical structure and kinematics of the innermost ionized material around massive stars. § INTRODUCTION MWC349A is a well-known massive star located at ∼ <cit.> that has been the object of numerous studies across a wide wavelength range. This star is surrounded by an almost edge-on circumstellar disk <cit.> in Keplerian rotation <cit.>. The radio continuum emission shows a 'X-shape' and flux density scaling of ν^0.67 at centimeter and millimeter wavelengths <cit.>, well modeled as an ionized wind of an electron density that decreases with the radial distance as r^-2.14 and an isotropic mass loss expanding at a constant velocity <cit.>. One of the distinctive aspects of MWC349A is the maser amplification of its hydrogen Radio Recombination Line (RRL) emission <cit.>, which has been extensively observed across the radio wavelength spectrum since its discovery (see for a review). The case of MWC349A has motivated the search and discovery of RRL masers in other massive systems such as CepAHW2 <cit.>, MonR2-IRS2 <cit.>, η Carina <cit.>, MWC922 <cit.> or G45.47+0.05 <cit.>. 
Radio interferometric observations towards MWC349A with the Plateau de Bure Interferometer (PdBI, ), and the Submillimeter Array (SMA, ) revealed that the ionized wind is expanding radially and rotates in the same sense as the disk, removing angular momentum from the system. Recent high-sensitivity Atacama Large Millimeter/submillimeter Array (ALMA) observations have explored the highest velocities of the RRL emission for the first time, revealing a high-velocity jet in addition to the ionized disk and wind <cit.>. In all these works, the RRL emission is not spatially resolved and the relative spectroastrometry is conducted by determining the position of its 2D Gaussian centroid in each frequency channel. This method enables the determination of the mean location of the emission as a function of radial velocity with a positional accuracy that is inversely proportional to the SNR value <cit.>. The achieved angular precision is much higher than the synthesized beam size, typically of (sub-)au scales, due to the maser amplification. By comparing the radio continuum, RRL profiles and centroids with the non-LTE 3D radiative transfer MOdel for REcombination LInes MORELI <cit.>, the physical structure and kinematics of the Keplerian disk and the ionized wind <cit.> and of the jet of MWC349A <cit.> have been constrained. However, these components have neither been resolved nor been imaged directly before, leading to uncertainties in the determination of the structure and kinematics of the ionized gas in the model. In this Letter, we report new ALMA observations that resolve for the first time the structure of the sub-mm continuum and H26α RRL emission toward MWC349A using ALMA's longest baseline configuration. We compare these observations with predictions from the MORELI code and find that a small modification in the orientation of the jet and the disk with respect to the model from <cit.> reproduces with high fidelity the resolved continuum and RRL emission. This proves the power of the MORELI code in providing an accurate picture of the physical structure and kinematics of the innermost ionized material around massive stars even when the emission remains unresolved. § OBSERVATIONS AND DATA REDUCTION The observations were performed with ALMA (Project code 2019.1.01069.S: PI Qizhou Zhang) on 2021 September 5 and 10 in the array configuration C43-9/10 (longest baseline of ). The spectral setup consisted of two spectral windows for the continuum centered at 340.610 and of bandwidth, and two spectral windows centered on the H26α RRL () to obtain the line emission, one with higher spectral resolution than the other. The line spectral window of lower spectral resolution had a bandwidth of and channel width of or , while the other one was wide with a spectral resolution of or . The phase center for the MWC349A observations was α(J2000)=20^h32^m45^s.528, δ(J2000)=40^∘39'36".623. Calibrators J2253+1608 and J2015+3710 were observed for bandpass and gain calibration, respectively. The initial calibration of the data was obtained from the ALMA pipeline. We performed self-calibration with CASA on the calibrated continuum and line data with the averaged continuum and the brightest channel of the line, respectively. The continuum self-calibration tables were obtained after five iterations of the phase calibration and one amplitude calibration, with a solution time interval of 16 seconds. 
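For readers unfamiliar with the procedure, one such phase-only self-calibration round can be sketched with standard CASA tasks as below. This is only a schematic: the measurement-set and table names, the reference antenna, and the imaging parameters are placeholders rather than the actual values of our reduction, which started from the ALMA pipeline calibration.

```python
# Schematic phase-only self-calibration round inside CASA (placeholder names and values).
tclean(vis='mwc349a_cont.ms', imagename='cont_selfcal_1',
       imsize=2048, cell='0.004arcsec', deconvolver='hogbom',
       weighting='briggs', robust=0.0, niter=1000,
       savemodel='modelcolumn')                      # write the CLEAN model to the model column
gaincal(vis='mwc349a_cont.ms', caltable='cont_phase_1.gcal',
        solint='16s', refant='DA43',                 # 16 s solution interval, as in the text
        calmode='p', gaintype='G')                   # phase-only solutions
applycal(vis='mwc349a_cont.ms', gaintable=['cont_phase_1.gcal'], calwt=False)
# ...repeated for five phase rounds, followed by one amplitude-and-phase ('ap') round...
```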
We used the same solution interval to obtain the line self-calibration tables after five rounds of phase and one round of amplitude calibration. The continuum subtraction of the line spectral windows was done with the line-free channels of the wide spectral window. Finally, we imaged the data with CASA's routine tclean using Briggs weighting with a robust parameter of 0.0. The resulting synthesized beam size was of 0.025 × 0.012arcsec (30au × 14au, assuming a distance of 1.2kpc) with a position angle (PA) of -7^∘.1 for the continuum, and same beam size with a PA of -3^∘.3 and -11^∘.7 for the narrow and wide spectral window line cubes, respectively. § RESULTS AND DISCUSSION §.§ Continuum emission at 341 GHz The distribution of continuum emission is shown in black contours in the left panel of Figure <ref>. This is the highest frequency in which the continuum emission toward MWC349A has been resolved. The total integrated flux measured is 2.4±0.2Jy, in agreement with the SMA and ALMA fluxes reported in <cit.>. We assume the nominal 10% uncertainty in the flux density measurement. The hourglass or 'X' shaped morphology seen at 2, 1.3, and 0.7cm with the VLA <cit.> and at 217GHz with ALMA <cit.> is also observed in our ALMA continuum image at 341GHz. This suggests that the continuum emission is dominated by free-free radiation from a constant velocity expanding outflow <cit.> and not by thermal dust emission, which is also consistent with the spectral index of 0.64±0.02 recently obtained by <cit.> by fitting the continuum flux densities up to 340GHz. We also identify the central 'dark lane' caused by the lack of free-free emission in a neutral circumstellar disk <cit.> and the asymmetry in flux between the northern (brighter) and southern (fainter) lobe (see right panels in Fig.2 of ). The position angle of the disk is ≃100^∘, as reported in previous observations. §.§ Resolved H26α line emission In the right panels of Figure <ref> we show the spectral profile of the H26α line emission observed with ALMA in black contours. The spatial aperture over which the spectra is integrated is 0.18×0.17arcsec, which corresponds to the integration limits of the model (see Appendix <ref>). The double peak profile with maxima at -15 and arises from the ionized Keplerian disk. The emission is strongly maser-amplified, especially toward the edges of the disk corresponding to the emission peaks seen in the H26α profile, where the amplification coherence length is the largest <cit.> causing the saturation of the maser <cit.>. The amplification is very sensitive to the electron density and temperature. Therefore, the asymmetry of the peaks may be caused by clumps of different size and density of ionized materials in the disk <cit.>. At larger velocities, there are high-velocity broad emission components between -30 and and between 50 and that arise from the ionized wind and jet. There is additional weak line emission at indicated with the arrow in Figure <ref> which we will discuss in Sect. <ref>. The ALMA images with the spatially resolved H26α line emission are presented in Figures <ref> and <ref>, where we show integrated intensity maps in bins of for the blueshifted and redshifted emission, respectively. We calculate a systemic velocity of the source of as the central velocity between the peaks of the H26α line. The higher velocity emission (e.g. 
panels with central radial velocities and ) shows a bipolar shape in the north-south direction perpendicular to the plane of the disk, which is consistent with a jet whose projection in the plane of the sky lies directly on the rotation axis. As expected for a tilted jet with the northern lobe of the jet facing the observer and the southern part moving away <cit.>, the images show stronger emission in the north at high blueshifted velocities (panels at -96.00, -82.91 and in Fig.<ref>) and in the south at the most redshifted ones (panels at 100.34 and 113.44km s^-1 in Fig.<ref>). The lobes have a larger opening angle as velocity decreases, which is expected from a collimated jet whose velocity is higher near the jet axis <cit.>. This effect is better seen in Figure<ref>, where the opening angle of the high velocity emission decreases for increasing velocities (red and dark blue points). The opening angle is obtained from the major and minor axes of a 2D Gaussian fit in the image plane to the integrated intensity maps reported in Figures <ref> and <ref> using CASA's imview. At lower radial velocities ( in Fig.<ref> and in Fig.<ref>), the emission displays the 'X' shaped morphology seen in the continuum, which is consistent with the wide-angle wind and the slower parts of the jet expanding inside the double cone geometry proposed for MWC349A <cit.>. The H26α emission at the central velocities ( in Fig.<ref> and in Fig.<ref>) traces the ionized disk and wind. The strong, compact emission that shifts from west to east along the midplane for increasing radial velocities arises from the maser spots in the Keplerian disk. This emission corresponds to the bright double peaks of the line in Fig.<ref>. The low-level emission of the H26α line arises from the wind, which retains the 'X' shaped morphology seen at higher velocities. The northern half is consistently brighter along the whole blueshifted (Fig.<ref>) and most of the redshifted emission of H26α, except for the highest velocities where the southern lobe of the jet is expanding away from the observer (panels at 100.34 and 113.44km s^-1 Fig.<ref>). This may be explained by the disk being slightly inclined from the edge-on orientation, with the northern half facing the observer (see Section <ref>). Finally, we note a bright compact feature to the southeast in panel -109.90km s^-1 of Figure<ref>, which corresponds to the additional emission noticed in the bottom right panel of Figure<ref> (see black arrow) that we analyze in Section<ref>. §.§ Additional line emission In the top panel of Figure<ref> we show a zoom in of the spectral profile displaying faint emissions superimposed on the blueshifted wing of H26α (Fig.<ref>). After removing the contribution from H26α pixel by pixel, we generated an integrated intensity map between 353.74 and 353.90GHz (left panel in Fig.<ref>) that shows the 'X' shaped morphology, suggesting that the emission arises from the ionized disk and wind of MWC349A. The emission from this new line is more compact and blueshifted than H26α. The nearest lines to the frequency of the emission are the helium He26α and carbon C26α RRLs (red dashed lines in top right panel of Fig.<ref>). We identify peaks 1, 2 and 3 located at radial velocities of 12.58, -34.87 and -62.83km s^-1, respectively, for the rest frequency of He26α, while for C26α the radial velocities are of 39.98, -7.47 and -35.43km s^-1 respectively. 
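These alternative identifications follow from simple Doppler arithmetic: the 26α rest frequencies of H, He, and C are fixed by the Rydberg formula with the reduced-mass correction, so a given observed frequency maps to a different radial velocity in each rest frame. A minimal sketch is given below (Python); the observed frequency plugged in is an arbitrary illustrative value, not one of the measured peaks, and the atomic masses are only approximate.

```python
import numpy as np

c_kms = 2.99792458e5                      # speed of light [km/s]
Ry_c = 3.2898419603e15                    # Rydberg frequency R_inf * c [Hz]
u_in_me = 1822.888486                     # atomic mass unit in electron masses
mass_u = {'H26a': 1.007825, 'He26a': 4.002602, 'C26a': 12.0}   # approximate masses [u]

def rrl_rest_freq(n, species):
    """Rest frequency of the (n+1) -> n (alpha) recombination line [Hz]."""
    R = Ry_c / (1.0 + 1.0 / (mass_u[species] * u_in_me))       # reduced-mass correction
    return R * (1.0 / n**2 - 1.0 / (n + 1)**2)

nu_obs = 353.80e9                          # illustrative observed sky frequency [Hz]
for sp in ('H26a', 'He26a', 'C26a'):
    nu0 = rrl_rest_freq(26, sp)
    v = c_kms * (nu0 - nu_obs) / nu0       # radio-convention radial velocity [km/s]
    print(f"{sp}: nu0 = {nu0 / 1e9:.4f} GHz, v = {v:+7.1f} km/s")
```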
The separation between the brightest pixels for peaks 1 and 3 is 0.031±0.012arcsec, where the error is the beam size in the east-west direction. Previous works reported lines next to hydrogen RRL emission and associated them with helium RRLs at radial velocities from -15 to -40km s^-1 <cit.>. In our ALMA data, the peaks of H26α are found at radial velocities of 32.69 and -15.25km s^-1 (Figure <ref>), which lie closer to the radial velocities associated with the C26α line. The separation between the brightest pixels of the H26α line peaks is 0.040±0.012arcsec, similar to the new line. In addition, an absorption component is found between peaks 1 and 3 that also crosses the waist of the 'X' shaped nebula from east to west for increasing frequency. Carbon RRL absorption has been previously reported at MHz frequencies from observations of the cold neutral medium <cit.> and neutral, diffuse gas in front of HII regions <cit.>. In our case, the absorption could originate from the photo-dissociation region <cit.>, which has a lower excitation temperature than the background HII region. Given the absorption component and the closer radial velocity to the H26α line, we tentatively identify this new line as C26α, although we cannot discard some contribution from He26α. §.§ Comparison with the non-LTE radiative transfer model We employed the 3D non-LTE radiative transfer code MORELI <cit.> to model the continuum and H26α observations. The model parameters are presented in Table<ref>, and they are described in detail in Appendix<ref>. A sketch of the geometry of the model is shown in Figure<ref>. The model considers that the ionized gas is located within a double cone structure with a semi-opening angle of 57^∘, and an electron density distribution with a radial dependence r^-2.14 and an angular dependence such that the density is higher near the walls of the double cone. This electron density structure is the same as the one used by <cit.>, who modeled all available radio continuum observations from radio-frequencies to the mid-IR. The observed and modeled continuum emission at 341GHz are presented in Figure<ref> (left panel). The proximity of the contour levels in both the model (red contours) and the observations (black contours) indicates that the electron distribution assumed in the model closely resembles the actual distribution. The total integrated flux from the model is 2.5Jy, in excellent agreement with the observed one (2.4±0.2Jy) and with previous data (Sect.<ref>). As shown by <cit.> (see their Figure4), the assumed electron distribution also provides an excellent match to the morphology of the continuum emission observed with the VLA in the K and Q bands at a resolution of ∼0.08 and ∼0.03arcsec <cit.>, comparable to the angular resolution of our ALMA 341GHz images. We note that the self similar scaling of the 341GHz continuum image with respect to the lower frequency data is expected from the electron density distribution of the model: the r^-2.14 dependence implies that the optical depth of the continuum reaches a value of ≈1 at smaller radii for increasing frequencies <cit.>. 
In previous works, the kinematics of the system were constrained from unresolved hydrogen RRL observations by comparing the modeled and observed line profiles and 2D Gaussian centroids: i) the disk rotates following a Keplerian law around the central star <cit.>; ii) the wind is launched at 24au from the central star, rotating in the same sense as the disk and expanding at a velocity of 60km s^-1 <cit.>; and iii) the system also presents a poorly collimated, high-velocity jet expanding at a maximum velocity of 250km s^-1 that decreases to half its maximum value at a semiopening angle of 24^∘ <cit.>. As discussed in <cit.>, one of the parameters with the strongest effect on the 2D Gaussian centroid maps and morphology of the emission is the orientation of the jet, θ_jet, and the . In the previous model, the disk was tilted 8^∘ from the edge-on orientation with the southern side facing the observer. This model showed an excess of emission in the southern half at velocities more blueshifted than , and no emission at the northern half for redshifted velocities higher than . In addition, the southern lobe was brighter for radial velocities more redshifted than , while in the observations this is seen from (see the last two panels of Fig. <ref>). To overcome this mismatch with the northern and southern emission, we changed the disk orientation in steps of 1^∘, which also modifies the wind and jet orientation (see Figure <ref> in the Appendix). Reorienting the disk such that the rotation axis crossed the plane of the sky solved the issue with the asymmetrical emission seen at high velocities. By visual examination of the results, we adopted a final θ_i value of -4^∘ (see rightmost panels in Figure<ref>). As a second step, we updated the tilt of the jet following the same method. In the previous model the jet was tilted 16-22^∘ with respect to the plane of the disk, with the northern cone facing the observer and the southern cone facing away. This large tilt was suggested to be caused by an unresolved circumbinary companion and/or a warped disk <cit.>. The new model requires a tilt of 2-6^∘, much closer to the rotation axis than before (see Table<ref> and Figure<ref>). Similar to the previous model, in the new simulations the northern cone also faces the observer while the southern cone moves away. As for the disk, a small change in the jet tilt angle is clearly reflected in the morphology of the line emission, hence the tilt of each cone had to be fitted individually <cit.>. The comparison between the observed integrated intensity maps and the ones from the final model is presented in Figures <ref> and <ref>. The model reproduces very well the observed line emission across all velocities. Note that it also reproduces the higher collimation of the jet for increasing velocities, which is consistent with the behavior observed with our high-angular resolution ALMA data (see Figure<ref>). Finally, Figure<ref> shows the modeled line profile obtained with MORELI (see red lines). The maser amplification of the H26α line is saturated <cit.>, which we tackle in the model using an approximation (see Appendix <ref>). This approximation is known to overestimate the flux density at disk velocities between -15kms^-1 and 33kms^-1 (see right panels in Fig.<ref>). In addition, the model predicts a symmetric line profile while the observed one is clearly asymmetric, which might be explained by the presence of clumps of ionized material <cit.>. 
We note that other parameters such as the opening angle and electron temperature of the ionized disk also affect strongly the intensity of the line. However, these parameters were previously constrained by modeling the unsaturated H30α RRL in <cit.> and <cit.>. At higher velocities (≤-30kms^-1 and ≥50kms^-1), the model also predicts low-intensity line-wing emission with the blueshifted component being brighter than the redshifted one. We note that some discrepancies do exist between the observed and modeled line emission at these high velocities. This may be caused by differences between the real electron density distribution of the wind and the one assumed in the model (as e.g. inhomogeneities in the wind). The coexistence of a rotating jet and wide angle wind launched from the disk at several au from the central star is expected from magnetohydrodynamic (MHD) disk wind models <cit.>, which have been suggested for this star previously <cit.>. <cit.> estimated that the jet mass loss rate and jet momentum rates derived from the model are consistent with those from other well-studied systems such as the jet from the massive protostar Cep A HW2 <cit.>. The fact that both jet and wind are rotating in the same sense as the disk implies that they are removing angular momentum from the disk <cit.>. It is however unclear whether MWC349A is a young star still accreting material from its disk (possibly in an advanced stage of formation given the poor collimation of the jet) or whether it is an evolved star as suggested by <cit.>. § SUMMARY In this work, we have presented the first spatially resolved sub-millimeter RRL emission from the massive star MWC349A using the most extended configuration of ALMA. The unprecedented angular resolution of ≤0.025 arcsec (≤30au) has allowed us to spatially resolve the H26α RRL and continuum emission arising from the ionized material in the inner parts of the system. We employed the non-LTE radiative transfer code MORELI <cit.> to model the ALMA images, using the most recent version based on previous unresolved RRL observations <cit.>. This model considers an ionized disk in Keplerian rotation, a wide-angle ionized wind, and a poorly collimated, high-velocity jet, both rotating in the same sense as the disk and extracting angular momentum from the system. The resolved emission seen in the new ALMA observations can be explained by introducing minor modifications to the orientation of the disk and the jet, whose axis is closer to the rotation axis of the system than previously found. Our model of a high-velocity jet expanding inside the ionized wind is consistent with magneto-hydrodinamical launching models of the wind <cit.>. Acknowledgements. A.M.-H., I.J-.S and J.M.-P. acknowledge funding from grants No. PID2019-105552RB-C41 and PID2022-136814NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by “ERDF/EU”. A.M.-H. has received support from grant MDM-2017-0737 Unidad de Excelencia "María de Maeztu" Centro de Astrobiología (CAB, CSIC-INTA) funded by MCIN/AEI/10.13039/501100011033. A.M.-H acknowledges grant CSIC iMOVE 23244 for the funding of a stay at the Harvard-Smithsonian Center for Astrophysics during which the results of this work were obtained. N.H. has been funded by grant No.PID2019-107061GB-C61 by the Spanish Ministry of Science and Innovation/State Agency of Research MCIN/AEI/10.13039/501100011033. This paper uses the following ALMA data: ADS/JAO. ALMA No. 2019.1.01069.S. 
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. astropy <cit.>, matplotlib <cit.>, Common Astronomy Software Application (CASA) 6.6.3 <cit.>. § DESCRIPTION OF THE NON-LTE RADIATIVE TRANSFER MODEL §.§ Structure and kinematics of the ionized gas In this Appendix we briefly describe the fundamentals of the non-LTE radiative transfer code MORELI <cit.>, which has been used to model the 341GHz continuum and H26α RRL emission from MWC349A (Sect. <ref>). The model is described in detail in <cit.>, with the most updated version that includes a jet presented in <cit.>. The parameters of the model for this work are listed in Table <ref> and a sketch of the geometry is shown in Figure<ref>. MORELI considers a certain 3D geometry for the ionized region, which is discretized into a mesh of regular cubes with sizes (dx,dy,dz). The z axis corresponds to the direction of the line of sight; the x axis is the projection of the rotation axis of the region onto the plane of the sky; the y axis is orthogonal to z and x. The integration limits are computed from the effective radius that contains the free-free emission of an isotropic, partially optically thick wind expanding radially <cit.>. The radiative transfer equation is then integrated along the z axis for the whole mesh, i.e. for each line of sight. The calculation accounts for possible non-LTE effects by including the LTE departure coefficients of <cit.> computed for the local value of electron density and electron temperature of each cell. The result of this integration is the emission from the recombination line and the continuum, which are respectively arranged in a line cube and a continuum image to be compared with observations. In the case of MWC349A, the electron density, electron temperature and kinematics of the region are modeled considering an ionized disk, wind and jet in a double cone structure of semi-opening angle θ_a whose axis of symmetry is inclined an angle θ_i with respect to the plane of the sky. The ionized disk corresponds to the edges of the double cone of thickness θ_d, which is a boundary layer between a neutral disk and the ionized wind and jet that are launched from the ionized disk (see Figure<ref>). The electron density of the ionized gas in the double cone is described by a distribution N_e(r,θ) that depends on the radius as r^-2.14 and has an angular dependence such that the density is higher near the walls of the double cone (see Table<ref>). The ionized disk is characterized by an electron temperature T_d. It has a radius r_K and rotates around the central mass M_* following Kepler's law. The ionized wind is located in the inner part of the double cone with a semi-opening angle θ_a-θ_d, internal to the ionized disk, and has an electron temperature T_0. 
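Before turning to the kinematics, the cell-by-cell integration described above can be sketched in a strongly simplified form. The toy below treats a single line of sight, assumes LTE and pure free-free opacity, and uses made-up numbers; MORELI additionally handles the line transfer, the departure coefficients of <cit.> and the maser saturation discussed later.

```python
import numpy as np

def blackbody(nu, T):
    h, k, c = 6.626e-27, 1.381e-16, 2.998e10          # cgs units
    return 2*h*nu**3/c**2 / np.expm1(h*nu/(k*T))

def kappa_ff(Ne, Te, nu):
    """Toy free-free absorption coefficient (cm^-1), Altenhoff-type approximation."""
    return 8.235e-2 * Te**-1.35 * (nu/1e9)**-2.1 * Ne**2 / 3.086e18

def integrate_los(Ne_cells, Te_cells, dz, nu):
    """March along one line of sight (background to observer), cell by cell."""
    I = 0.0
    for Ne, Te in zip(Ne_cells, Te_cells):
        tau = kappa_ff(Ne, Te, nu) * dz
        S = blackbody(nu, Te)                          # LTE source function
        I = I*np.exp(-tau) + S*(1.0 - np.exp(-tau))
    return I

# toy line of sight: density peaking in the middle of the cone (made-up values)
z = np.linspace(-60, 60, 200) * 1.5e13                 # cm
Ne = 1e7 * np.exp(-(z/3e14)**2)
Te = np.full_like(z, 1.2e4)
print("I_nu(341 GHz) =", integrate_los(Ne, Te, z[1]-z[0], 341e9),
      "erg s^-1 cm^-2 Hz^-1 sr^-1")
```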
The wind expands with a terminal velocity v_wind decelerated radially by the value b_v and rotates around the axis of the cone following a Keplerian law scaled by a factor α: 𝐯_𝐰𝐢𝐧𝐝 = v_wind (r/r_0)^b_v𝐞_𝐫 + α v_Kepler𝐞_φ where 𝐞_𝐫 and 𝐞_φ are the radial and azimuthal unitary vectors in spherical coordinates, respectively; v_Kepler=√(GM_*/ρ) is the Keplerian velocity at a distance ρ from the rotation axis; and r_0 is a characteristic length used in the model. The jet expands radially with a maximum velocity v_jet, and is engulfed within the wind in the inner part of the double cone. Its orientation is described with the spherical coordinates (φ_jet,θ_jet), where the azimuth φ_jet has its origin on the line of sight and increases for clockwise angles as seen from the north, and the origin of the polar angle θ_jet corresponds to the northern part of the rotational axis of the system. The collimation of the jet is described with a normalized distribution f(ψ) adapted from <cit.>: f(ψ) = ψ_0^2/sin^2ψ+ψ_0^2 where ψ is the angular distance from the jet axis to any point inside the double cone and ψ_0 is a flattening factor for the distribution with higher values for less collimated jets <cit.>. The azimuth of the jet powered by MWC349A corresponds to the line of sight <cit.>, so the tilt angle θ_jet is sufficient to describe its orientation by defining positive angles away from the observer (Figure<ref>). The jet term is summed to the wind one, hence the final expression for wind and jet velocity is 𝐯_𝐰𝐢𝐧𝐝+𝐣𝐞𝐭 = [v_wind+v_jetf(ψ)] (r/r_0)^b_v𝐞_𝐫 + α v_Kepler𝐞_φ §.§ Saturation of the maser As mentioned in Section <ref>, the maser emission of the H26α RRL is saturated <cit.>. Under unsaturated maser amplification, the intensity of the emission is amplified along the line of sight following an exponential dependence with the module of the total (line and continuum) optical depth. At a certain point in the amplification along the line of sight, the amplification reaches a limit where each pump event results in the release of a maser photon, affecting the inversion of population between the transition levels. This is the point where the maser is completely saturated <cit.>, from which the emission increases linearly with the optical depth. Saturation of the maser in MWC349A takes place in the edges of the ionized disk, where the electron densities are optimum for the amplification <cit.> and where the longer amplification paths towards the observer are found <cit.>. The emission from this region corresponds to the intense peaks of the H26α line profile (Figure <ref>) and to the integrated intensity maps at the velocities of the edges of the Keplerian ionized disk (e.g. panel at central velocity -17.46 km s^-1 in Figure <ref> and panel at 34.90 km s^-1 in Figure <ref>). MORELI accounts for the saturation of the maser considering only photons coming from the same line of sight <cit.>. A more realistic treatment accounts for photons coming from every direction to evaluate the population in the atomic levels, such as in the model of <cit.>. Hence, in spite of considering the saturation of the maser, MORELI still overestimates the intensity of the line, as seen in Figure <ref>. This is particularly evident in the central part of the line (between -15 and 30kms^-1), whose intensity is overestimated by a factor of ∼2. 
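The transition from exponential to linear growth described above can be visualized with a toy amplification law: the intensity grows as exp(|tau|) until it reaches an arbitrary, illustrative saturation level, and linearly afterwards. This is only meant to show the qualitative behaviour, not the actual treatment in MORELI.

```python
import numpy as np

def toy_maser_gain(tau, sat_ratio=15.0):
    """Toy gain curve: exponential amplification up to I/I_0 = sat_ratio,
    linear growth (slope fixed at the saturation level) beyond that point."""
    tau = np.asarray(tau, dtype=float)
    tau_s = np.log(sat_ratio)          # |tau| at which the toy saturation sets in
    return np.where(tau <= tau_s,
                    np.exp(tau),
                    sat_ratio * (1.0 + (tau - tau_s)))

for t in (2.0, 2.7, 6.0, 10.0):
    print(f"|tau| = {t:5.2f}  ->  I/I_0 = {float(toy_maser_gain(t)):8.2f}")
```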
The parameters of the model that regulate the saturation are the solid angle of the maser beam, with a value of 4π/Ω = 60 <cit.>, and the degree of saturation over the maser beam above which the regime turns to be linear J_ν/J_ν, sat=15. This value of J_ν/J_ν, sat is the one used in the previous model <cit.> and has remained unchanged since it roughly reproduces the intensity of the line. A higher value results in higher intensities for the modeled line. § INFLUENCE OF THE DISK ORIENTATION ON THE MODELED H26Α EMISSION In Figure <ref> we show the result of models with different values of θ_i to illustrate the effect of this parameter on the shape of the emission. aasjournal
http://arxiv.org/abs/2407.12310v1
20240717042103
A characterization of translated convex bodies
[ "Efren Morales-Amaya" ]
math.MG
[ "math.MG" ]
A characterization of translated convex bodies Efren Morales-Amaya July 17, 2024 ================================================== § ABSTRACT In this work we present a theorem regarding two convex bodies K_1, K_2⊂ℝ^n, n≥ 3, and two families of sections of them, given by two families of tangent planes of two spheres S_i⊂ K_i, i=1,2, such that, for every pair Π_1, Π_2 of parallel supporting planes of S_1, S_2, respectively, which are corresponding (this means that the outer normal vectors of the supporting half-spaces determined by the two planes have the same direction), the sections Π_1∩ K_1, Π_2∩ K_2 are translated. The theorem claims that if S_1, S_2 have the same radius, the bodies are translated; otherwise, the bodies are spheres. § INTRODUCTION. In <cit.>, A. Rogers proved that if every pair of parallel 2-sections of two convex bodies K_1,K_2 passing through two fixed points p_1∈ K_1, p_2∈ K_2 are directly homothetic, then the convex bodies are directly homothetic. On the other hand, in <cit.>, G. R. Burton proved the general case p_1, p_2∈ℝ^n. An interesting variation of Rogers' theorem was presented in <cit.>: there the two families of sections are no longer given by concurrent planes; instead, L. Montejano considers two families of planes which, on the one hand, vary continuously and, on the other hand, are such that, given a direction v, there is only one plane of each family orthogonal to v; however, Montejano restricts himself to considering only translated sections. The case for homothetic sections was considered in <cit.>. Several interesting problems and results about the determination of convex bodies by families of sections given by concurrent hyperplanes can be found in <cit.>, <cit.> and <cit.>. In order to present our main result formally, which can be considered either as a generalization of A. Rogers' theorem given in <cit.> or as geometric progress on a conjecture due to J. A. Barker and D. G. Larman <cit.>, we need the following notation and definitions. Let ℝ^n be the Euclidean space of dimension n endowed with the usual inner product ⟨·, ·⟩ : ℝ^n×ℝ^n→ℝ. We take an orthogonal system of coordinates (x_1,...,x_n) for ℝ^n. Let B_r(n)={x∈ℝ^n: ||x||≤ r} be the n-ball of radius r centered at the origin, and let r𝕊^n-1={x∈ℝ^n: ||x|| = r} be its boundary. For each vector u∈𝕊^n-1, we denote by H^+(u) the closed half-space {x ∈ℝ^n: ⟨ x, u⟩≤ 0} with unit normal vector u, and by H(u) its boundary hyperplane {x ∈ℝ^n: ⟨ x, u⟩ = 0}. For r∈ℝ, we denote by G(u), rG(u) the affine hyperplanes u+H(u), ru+H(u) and by E(u), rE(u), F(u), rF(u) the half-spaces u+H^+(u), ru+H^+(u), u+H^+(-u), ru+H^+(-u), respectively (See Fig. <ref>). Let S_1,S_2⊂ℝ^n be two spheres with centers p_1,p_2 and radii r_1,r_2, respectively. The parallel supporting planes Π_1,Π_2 of S_1, S_2, respectively, are said to be corresponding if there exists u∈𝕊^n-1 such that Π_i=p_i+r_iG(u) and S_i⊂ p_i+r_iE(u), i=1,2 (See Fig. <ref>). Let W_1,W_2⊂ℝ^n be two convex bodies, i.e., compact convex sets with non-empty interior. The bodies W_1,W_2 are said to be translated if there exists a non-zero vector u such that W_2=u+W_1. Let K_i⊂ℝ^n be a convex body, n≥ 3, and let S_i⊂ K_i be a sphere with center p_i and radius r_i, i=1,2. Suppose that, for every pair of corresponding planes Π_1, Π_2 of S_1 and S_2, there exists a translation ψ: ℝ^n→ℝ^n such that ψ(Π_2∩ K_2) = Π_1∩ K_1. If r_1=r_2, then K_1, K_2 are translated. If r_1≠r_2, then K_1 and K_2 are spheres with centers at p_1 and p_2, respectively. 
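To make the definitions above concrete, the following small numerical sanity check (not part of any proof, and with an arbitrarily chosen ellipsoid, translation and direction) verifies the easy direction of the hypothesis of Theorem <ref>: when K_2 is a translate of K_1 and the inscribed spheres have the same radius, the sections cut by corresponding supporting planes are translates of one another.

```python
import numpy as np

a = np.array([2.0, 1.5, 1.2])            # semi-axes of K_1 (ellipsoid containing the unit ball)
v = np.array([0.7, -0.3, 0.4])           # translation defining K_2 = v + K_1
p1, p2, r = np.zeros(3), v, 1.0          # centers and common radius of S_1, S_2

in_K1 = lambda x: np.sum((x / a) ** 2) <= 1.0
in_K2 = lambda x: in_K1(x - v)

u = np.array([0.3, -0.5, 0.8]); u /= np.linalg.norm(u)
e1 = np.cross(u, [1.0, 0.0, 0.0]); e1 /= np.linalg.norm(e1)   # orthonormal basis of H(u)
e2 = np.cross(u, e1)

def section_mask(p, inside, grid):
    """Membership mask of the section (p + r*u + H(u)) ∩ K on a 2D grid."""
    return np.array([[inside(p + r * u + s * e1 + t * e2) for s in grid] for t in grid])

grid = np.linspace(-3, 3, 121)
m1 = section_mask(p1, in_K1, grid)       # Pi_1 ∩ K_1
m2 = section_mask(p2, in_K2, grid)       # Pi_2 ∩ K_2, sampled at points shifted by v
print("differing grid points:", int(np.sum(m1 != m2)),
      "(0 expected, up to floating-point rounding exactly on the boundary)")
```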
Let K ⊂ℝ^n be a convex body, n≥ 3, and let S⊂ K be a (n-1)-sphere with center p. Suppose that: 1) for every supporting hyperplane Π of S the section Π∩ K is centrally symmetric and 2) for every pair of parallel supporting hyperplanes Π_1, Π_2 of S the sections Π_1 ∩ K and Π_2∩ K are translated. Then K is centrally symmetric. We take a system of coordinates such that p is the origin. We apply Theorem <ref>, taking K_1=K, K_2=-K and S_1=S=S_2. Therefore K and -K are translated. Thus K is centrally symmetric. This work is organized as follows: * Introduction. * Reduction of the Theorem <ref> to dimension 3. * Characterization of the sphere. * Lemmas for the case n=3 and r_1= r_2. * Proof of the Theorem <ref> in dimension 3 for r_1=r_2. * Lemmas for the case n=3, r_1≠r_2. * Proof of the Theorem <ref> in dimension 3 for r_1≠r_2. § REDUCTION OF THEOREM <REF> TO DIMENSION N=3. For the points x,y ∈ℝ^n we will denote by L(x,y) the line determined by x and y and by [x,y] the line segment contained in L(x,y) with endpoints x and y. As usual int K and ∂K will denote the interior and the boundary of the convex body K, respectively. For each vector u∈𝕊^n-1, we denote by π_u:ℝ^n→ H(u) the orthogonal projection parallel to u. Suppose that the bodies K_1,K_2 and the spheres S_1,S_2 satisfy the conditions of Theorem <ref> for dimension n, n≥ 4, and that the Theorem <ref> holds in dimension n-1. We are going to prove that, for u∈𝕊^n-1, the bodies π_u(K_1), π_u(K_2) and the spheres π_u(S_1), π_u(S_2) satisfy the conditions of Theorem <ref> in dimension n-1. Consequently, either, if r_1=r_2, π_u(K_1), π_u(K_2) are translated or, if r_1≠r_2, π_u(K_1), π_u(K_2) are spheres. Thus if all the orthogonal projections of K_1 and K_2 are translated, by virtue of Theorem 1 of <cit.>, K_1 and K_2 are translated, otherwise, K_1 and K_2 are spheres, i.e., the Theorem <ref> holds for dimension n. Notice that the conditions of Theorem <ref> are invariant under translation, consequently we can assume that the centres p_1,p_2 are at the origin of a system of coordinates. Let v∈ H(u)∩𝕊^n-1. Since K_1∩ r_1G(v) is a translated copy of K_2∩ r_2G(v), i.e., there exists a vector ω∈ℝ^n such that K_1∩ r_1G(v)= ω + [K_2∩ r_2G(v)], then π_u(K_1∩ r_1G(v)) =π_u(ω) + π_u(K_2∩ r_2G(v)), i.e., π_u(K_1)∩π_u(r_1G(v)) =π_u(ω) + π_u(K_2)∩π_u(r_2G(v)), i.e., π_u(K_1)∩π_u(r_1G(v)) is a translated copy of π_u(K_2)∩π_u(r_2G(v)) for every v∈ H(u)∩𝕊^n-1, that is, the bodies π_u(K_1), π_u(K_2) and the spheres π_u(S_1), π_u(S_2) satisfy the conditions of Theorem <ref> in dimension n-1. § CHARACTERIZATION OF THE SPHERE. Given a convex body K⊂ℝ^n and a point x∈ℝ^n∖ K we denote the cone generated by K with apex x by C(K,x), i.e., C(K,x):={x+λ (y-x): y∈ K, λ≥ 0}. The boundary of C(K,x) is denoted by S(K,x), in other words, S(K,x) is the support cone of K from the point x. We denote the graze of K from x by Σ(K,x), i.e., Σ(K,x):=S(K,x)∩ K. On the other hand, given a line L (or a vector u), we denote by C(K,L) the cylinder generated by K and L, i.e., the family of all the lines parallel to L and with non-empty intersection with K. The boundary of C(K,L) is denoted by S(K,L), in other words, S(K,L) is the support cylinder of K corresponding to L. We denote the shadow boundary of K corresponding to L by Σ(K,L), i.e., Σ(K,L):=S(K,L)∩ K. Let M_1, M_2 ⊂ℝ^3 be convex bodies, let B be a sphere with B⊂ M_1 ∩ M_2 and let r be a real number, r>0, r≠1. 
Suppose that for each supporting plane Π of B the sections Π∩ M_1 and Π∩ M_2 are homothetic, with centre and radius of homothety Π∩ B and r, respectively. Then M_1 and M_2 are spheres concentric with B. We claim that M_1 ∩ M_2=∅. Suppose that there is a point p∈ M_1 ∩ M_2. Let Π be a supporting plane of B, p∈Π and it touches the boundary of B in the point x. Since B⊂ M_1 ∩ M_2, we have x∈ (Π∩ M_1), (Π∩ M_2). If either (Π∩ M_1) ⊂ (Π∩ M_2) or (Π∩ M_2) ⊂ (Π∩ M_1), then either the centre of homothety y which send Π∩ M_1 into Π∩ M_2 is in a common supporting line of Π∩ M_2 and Π∩ M_2, and therefore y≠ x, or the radius r is equal to 1, but, in the first case, we contradicts the hypothesis, i.e., x is the centre of homothety between the sections Π∩ M_1 and Π∩ M_2, on the other hand, if r=1 then M_1=M_2. The case (Π∩ M_1) not contained in (Π∩ M_2), or viceversa, implies that the centre of homothety is a point outside of the sections Π∩ M_1, Π∩ M_2 but again this is in contradiction with the hypothesis that x∈ M_1 ∩ M_2 is the centre of homothety between this two sections. Thus we can choose the notation such that B⊂ M_1⊂ M_2. Let p∈ M_2. We take one the two components of S(B,p) ∩ M_1 and we denote it by Φ. We are going to show that Φ is a planar curve contained in a plane parallel to the plane where Σ(B,p) is contained (See Fig. <ref>). From this, it follows that Φ is a circle. Let v∈Φ and let Π be a plane parallel to the plane of Σ(B,p) and v∈Π. Pick an arbitrary point w∈Φ, w≠ v. Let x,y∈Σ(B,p) be points where the supporting lines pv and pw of B touches B, respectively. By virtue of the hypothesis, it is easy to see that xp/xv=yp/yw=r (Taking supporting planes Γ_x and Γ_y of B at x and y, respectively, we have that the sections Γ_x ∩ M_1, Γ_x ∩ M_2 are homothetics with centre and radius of homothety x and r, respectively, and the sections Γ_y ∩ M_1, Γ_y ∩ M_2 are homothetics with centre and radius of homothety y and r, respectively). Thus the triangles xpy and vpw are similar and consequently, the lines xy and vw are parallel. Hence w∈Π. It follows that M_1 is a sphere concentric with B (also due the more general statement proved in <cit.>). Since M_1 is a sphere all the sections of M_2 with planes tangent to B are circles, then, M_2 is a sphere concentric with B. § LEMMAS FOR THE CASE N=3 AND R_1=R_2. In this section we will assume that n=3 and r_1=r_2. Notice that the conditions of Theorem <ref> are invariant under translations, consequently, we can assume that S_1=S_2 and that we have a system of coordinates such that p_1 is the origin and r_1=1. Thus S_1=𝕊^2. For u∈𝕊^2, we denote by K_i (u) the section G(u)∩ K_i, i=1,2. By virtue of the theorem's hypothesis there exists a map μ: 𝕊^2 →T𝕊^2 such that μ(u) · u=0 and K_2 (u)= μ(u)+K_1(u), where T 𝕊^2 is the tangent bundle of 𝕊^2. From the continuity of the boundaries of K_1 and K_2 it follows the continuity of μ(u). Hence, by virtue of Theorem 27.8 Pag. 141 of <cit.>, there exists u_1∈𝕊^2 such that K_2 (u_1)= K_1(u_1). Let Ω⊂𝕊^2 be the collection of vectors u such that u∈Ω if K_1(u)= K_2(u) and let Φ⊂𝕊^2 the set of vectors such that u∈Φ if the line G(u_1)∩ G(u) is supporting line of K_1(u_1). The curve Φ is homeomorphic to 𝕊^1, then, by the Jordan's Theorem, 𝕊^2 is decomposed by Φ in two disjoint open sets, say A,B and we choose the notation such that u_1∈ A. In order to prove the Theorem <ref> first, we are going to show now that Φ⊂Ω and, second, that Ω = 𝕊^2. Therefore K_1=K_2. The set Φ is contained in Ω. 
Let u∈Φ, i.e., L(u):=G(u_1)∩ G(u) is supporting line of K_1(u_1). We are going to show that the sections K_1(u) and K_2(u) coincides. We claim that K_1(u) and K_2(u) are contained in the same half plane of G(u) of the two determined by L(u). Let Ψ_i be a supporting planes of K_i containing L(u), i=1,2. We denote by Ψ^+_i, the supporting half spaces determined by Ψ_i, i=1,2. It is clear that 𝕊^2 ⊂Ψ^+_1 ∩Ψ^+_2, since we are assuming that 𝕊^2 ⊂ K_1 ∩ K_2. Hence K_1(u), K_2(u) ⊂Ψ^+_1 ∩Ψ^+_2 and, consequently, K_1(u), K_2(u) are contained in the same half plane of G(u) of the two determined by L(u). On the other hand, we denote by S the intersection L(u) ∩ K_1. Observe that, since (<ref>) holds, S is equal to L(u) ∩ K_2. By hypothesis the sections K_1(u) and K_2(u) are translated, by virtue that they have a common supporting line, the set S in common and they are contained in the same half plane determined by L(u), they coincides. Let u,v∈Φ such that L(u):=G(u)∩ G(u_1) and L(v):=G(v)∩ G(u_1) are parallel supporting lines of K_1(u_1). For u∈Φ, we define the following subset of K_1 S(u):=[E(u)∩ F(v) ∩ E(u_1)]∩Σ(K_1,L(u)). Now we consider the union of the sets S(u) for u∈Φ, i.e., Σ:=⋃_u∈Φ S(u). Let k be an integer, 1≤ k ≤ n-1. An embedding of 𝕊^k in is a map α: 𝕊^k→ such that α is homeomorphism onto its image. The set Σ is contained in K_2. Let p be a point in Σ. Then there are parallel supporting lines L(u),L(v) of K_1(u_1) such that p∈ S(u) for some u∈Φ, i.e., there exists u∈Φ such that p∈ [E(u)∩ F(v) ∩ E(u_1)]∩Σ(K_1,L(u)). First we assume that p is not in G(-u_1). We denote by D the central projection of 𝕊^2 onto the plane G(u_1) from the point p, i.e., D:=S(𝕊^2,p)∩ G(u_1). By virtue of the choice of p, it follows that Φ∩Σ(𝕊^2, p)≠∅. Since p∈ [E(u)∩ F(v) ∩ E(u_1)]∩Σ(K_1,L(u)) there exist a supporting line L⊂ E(u)∩ F(v) ∩ E(u_1) of K_1 through p and parallel to L(u). Let G(q_1), G(q_2) be the two supporting planes of 𝕊^2 containing L. Changing the notation if necessary, we can assume that q_1∈ A and q_2∈ B (See Fig. <ref>). Therefore in each one of the arcs of Σ(𝕊^2,p) determined by q_1,q_2 there is a point in Φ, say w_1,w_2. Consequently, the figures D and K_1(u_1) have two common supporting lines, say L(w_1):=G(w_1)∩ G(u_1), L(w_2):=G(w_2)∩ G(u_1). By Lemma <ref>, K_i(w_i)= K_2 (w_i), i=1,2. Hence p∈ K_2. On the other hand, if p∈ E(-u_1), from the previous case and since the boundaries of K_1,K_2 are closed it follows that p∈ K_2. Let M_1,M_2⊂ be two translated convex figures and let a,b,c,d be points which belongs to M_1∩ M_2. If the quadrilateral □ abcd is not a trapezoid, then M_1= M_2. On the other hand, if the quadrilateral □ abcd is a trapezoid and M_1≠ M_2, then the pair of parallel edges of □ abcd are contained in M_1 and M_2. Since the convex figures M_1,M_2⊂ are translated there exists w∈ such that M_1=w+M_2. Let abcd ⊂ℝ^2 be a quadrilateral which is not a trapezoid. Let u∈𝕊^1 be a unit vector, not parallel to an edge of abcd. Then there exists two vertices of abcd, say b,d, such that [b+r^+(u)]∩ (abcd)≠∅and [d+r^-(u)]∩ (abcd)≠∅ where r^+(u):={α u: α∈ℝ, α>0 } and r^-(u):={α u: α∈ℝ, α<0 } are the rays defined by u. Since (abcd) ⊂ M_1 ∩ K_2, the relation (<ref>) implies that w is not parallel to u. O­ther­wise, let us assume that w is parallel to u. Then either a,a+w,c,c+w,d,b+w∈ K_1 or a,a+w,c,c+w,b,d+w∈ K_1. In the first case, by (<ref>), it follows that b∈ K_1 which contradicts that b∈ K_1. In the other case, d∈ K_1 which contradicts that d∈ K_1. 
By the previous paragraph, we know that w is parallel to an edge of abcd, say w is parallel to the edge [a,b]. If abcd is not a trapezoid, the edge [c,d] is not parallel to [a,b]. Let L be a supporting line of K_1 parallel to w. Then [d+L(u)]∩ (abcd)≠∅ where L(u):={α u: α∈ℝ}. By the relation K_1=w+K_2 it is clear that a,b,c,d,a+w,b+w,c+w,d+w∈ K_1. On the other hand, by (<ref>) either d+w∈ K_1, if ⟨ u, w⟩>0 or d∈ K_1, if ⟨ u, w⟩<0. In order to avoid the contradiction we must have w=0 and, consequently, K_1=K_2. If abcd is a trapezoid, say the edge [c,d] is parallel to [a,b], since a,b,c,d,a+w,b+w,c+w,d+w∈ K_1 and by virtue that a,a+w,b, b+w∈ L(a,b) and c,c+w,d,d+w∈ L(c,d), the lines L(a,b) and L(c,d) are supporting lines of K_1. Analogously we can see that the lines L(a,b) and L(c,d) are supporting lines of K_2. Consequently, [ab], [cd] are contained in M_1 and M_2. The set A is contained in Ω. Let u∈ A. Then the line L(u):=G(u)∩ G(u_1) intersects the interior of K_1(u_1) (the vectors u which correspond to the lines L(u) which are supporting lines of K_1(u_1) are those in Φ, on the other hand, if u∈ B, the line L(u) does not intersect K_1(u_1)). Let L(p),L(q)⊂ G(u_1) be a pair of supporting lines of K_1(u_1), parallel to L(u), such that L(p)⊂ E(u) and L(v)⊂ F(u) and let M be a supporting line of G(u)∩ K_1 parallel to L(u) and contained in E(u_1). We pick a point x∈ M∩ K_1. First we suppose that G(p) ∩ G(q)∩ K_1=∅. It follows that x∈Σ. Let a,b be the extreme points of the line segment G(u_1)∩ K_1(u) and let c,d be the extreme points of the line segment G(q)∩ K_1(u). Since K_1(u_1)=K_2(u_1) and K_1(q)=K_2(q), notice that a,b,c,d∈ K_1∩ K_2. If the quadrilateral □ abcd is not a trapezoid, then, by Lemma <ref>, K_1(u)=K_2(u). Thus u∈Ω. If the quadrilateral □ abcd is a trapezoid, by virtue that the segments [a,b], [c,d] can not belong to the boundary of K_1(u), the segments [a,c], [b,d] are parallel. On the other hand, by virtue that x∈Σ, it follows that x∈ K_1 ∩ K_2 and there exists a supporting line N of K_1(u) parallel to L(u). Let w be the outer normal vector of the supporting half-plane of K_1(u) defined by N. Since K_1(u) and K_2(u) are translated there exists α∈ such that K_1(u)=α+K_2(u). By virtue that N is supporting line of K_1(u) at x and since x∈ K_2(u) it follows that w·α≤ 0. Since N is supporting line of K_1(u) at x, N-α is supporting line of K_2(u) at x-α but, since the inequality w·α≤ 0 holds, the line N-α separates the point x from the section K_2(u) if α≠0 and w·α<0. Thus if α≠0, then w·α=0, but since [a,c] and [c,d] belongs simultaneously to K_1(u) and K_2(u) the condition w·α=0 is impossible. Hence α=0, i.e., K_1(u)=K_2(u). Now we suppose that G(p) ∩ G(q)∩ K_1≠∅ (See Fig. <ref>). If u is such that the relation G(u) ∩ [E(p)∩ F(q) ∩ E(u_1)]∩Σ(K_1,L(p))≠∅ holds, then we proceed as before and we get to the conclusion u∈Ω. On the other hand, if u is such that the relation G(u) ∩ [E(p)∩ F(q) ∩ E(u_1)]∩Σ(K_1,L(p)) = ∅ is satisfied (See Fig. <ref>), we proceed as follows. Let v∈𝕊^2 such that the line W:=G(u)∩ G(v) is supporting line of K_1(u), parallel to L(u) and W⊂ F(u_1). We claim that G(v) ∩ [F(p) ∩ E(q) ∩Σ(K_1,L(p))]≠∅. We consider the orthogonal projection, defined by L(u), of the sets L(p),L(q), G(p) ∩ G(q), G(u_1), G(p), G(q), G(u), G(v),𝕊^2 which will be denoted, respectively, by x,y,z,P,Q,R,S,T, J. It is clear that J is inscribed in the triangle xyz and the condition (<ref>) implies that S intersects the edges xy and yz of the triangle xyz (See Fig. <ref>). 
Suppose that relation (<ref>) does not hold, it is equivalent to the fact that T not intersect the edge xz of xyz. Since T intersects the edge xy the aforesaid implies that T intersects yz. By virtue that G(u) and G(v) are supporting planes of 𝕊^2, J is contained in the angle determined by S and T. Thus there is not a point of J in the edge xz but this contradict that J is inscribed in xyz. This contradiction show that relation (<ref>) holds. From (<ref>) it follows that, for all w∈ (𝕊^2∩{u_1,u}) such that w· u_1>v· u_1, the relation G(w) ∩ [F(p) ∩ E(q) ∩Σ(K_1,L(p))]≠∅. From (<ref>), we conclude that w∈Ω for all w∈ (𝕊^2∩{u_1,u}) such that w· u_1>v· u_1 (See the proof of Lemma <ref>). Consequently, the arc K_1(u)∩ F(u_1) belongs to K_2. In fact, since w∈Ω for all w∈ (𝕊^2∩{u_1,u}) such that w· u_1>v· u_1, by Lemma <ref>, the points G(u)∩ K_1(w) belong to K_2 as well. Now varying w∈𝕊^2 such that w· u_1>v· u_1 we get to the aforesaid. By virtue that the translated sections K_1(u), K_2(u) have the arc K_1(u)∩ F(u_1) in common they coincide. Thus u∈Ω. Let C and D be two convex figures in ℝ^2, D ⊂ C. For every x∈ C, we define a polygonal P_D(x), with respect to C and D, in the following manner. We take a supporting line L of D passing through x=x_1, and we denote by x_2 the second intersection of L=L_1 with C. We denote now by L_2 the supporting line of C, L_2≠ L_1, passing through x_2 and so on. The set of vertices of P_D(x) is the set of points {x_1,x_2,x_3,...,x_k,...}. Therefore, the edges of P_D(x) are {x_1x_2,x_2x_3,...,x_ix_i+1,...}. Given the directed edge x_ix_i+1, we denote by E_i(i+1) and F_i(i+1) the two half-plane determined by L(x_i,x_i+1), we choose the notation such that E_i(i+1) is the half supporting plane of D. We denote by u_i the interior unit normal vector of E_i,(i+1), i=1,2,...,k,.... Now we require that the set {x_ix_i+1, u_i} is a left frame of ℝ^2. The relation C⊂⋃_i∈ I F_i(i+1) holds, where I is a countable set possibly finite. Contrary to the statement of Lemma <ref>, we will assume that (<ref>) does not holds. It is equivalent to suppose the existence of x∈ C such that x_i → x, when i→∞, and, furthermore, x∉ F_12. Notice that [x_i,x_i+1]→ 0, when i→∞. Since in each x_ix_i+1 there is a point z_i in D, it follows that z_i→ x, when i→∞. Thus x∈ D. However this contradicts that D⊂ C. § PROOF OF THEOREM <REF> IN DIMENSION 3 FOR R_1=R_2. From Lemma <ref> we conclude F(u_1) ∩ K_1⊂ K_2. Analogously, we can see that, for every u∈Ω, the relation F(u)∩ K_1⊂ K_2 holds. Let v∈𝕊^2 ∩ H(u_1). Let μ_1,μ_2 be two supporting lines of K_1(u_1) parallel to v. We consider the polygonal P_𝕊^2(x_1) inscribed in π_v(K_1), as it was defined at the Lemma <ref>, where x_1=π_v(μ_1). By Lemma <ref> π_v(K_1)⊂⋃_i∈ I(v) F_i(i+1). where I(v) is a numerable set of indices depending of v. Hence K_1⊂⋃_i∈ I(v)π^-1(F_i(i+1)) By virtue that we can find u_i∈Ω such that F(u_i)=π^-1(F_i(i+1)), from (<ref>) it follows that K_1⊂⋃_i∈ I(v) F(u_i) By (<ref>) and (<ref>) it follows that K_1 and K_2 coincides in ⋃_i∈ I(v) F(u_i). Varying v∈𝕊^2∩ H(u_1) we conclude that K_1= K_2. § LEMMAS FOR THE CASE N=3 AND R_1≠R_2. We take a system of coordinates such that p_2 is the origen and r_2=1, i.e., 𝕊^2=S_2. 
Since the hypothesis of Theorem <ref> is invariant under translations, if we assume that there exists vectors u∈𝕊^2, z∈ such that (A) G(u) is supporting plane of z+S_1, the sphere z+S_1 is contained in E(u) and K_2(u)=G(u) ∩ (z+K_1), then, since S_2≠z+S_1, one of the following two conditions holds: (B) (z+S_1) ∩ S_2=u and (z+S_1)\{u}⊂ S_2 (See Fig. <ref>), (C) there exists a point p∈ G(u) such that C(z+S_1,p)=C(S_2,p) (See Fig. <ref>). In this section we will suppose that the bodies K_1,K_2, the points p_1,p_2 and the spheres S_1,S_2 are such that there exist vectors u∈𝕊^2, z∈ for which the conditions (A) and (C) holds. We denote the body z+K_1 by K_1 and the sphere z+S_1 by S_1. Let x∈\ (S_1∪ S_2). Notice that the collection of corresponding planes of the supporting planes of C(S_2,x) are passing through one point, which will be denoted by ψ(x), and, therefore, they are the envelope of the cone C(S_1,ψ(x)). We denote by Δ the family of supporting planes of C(S_2,p) and by Σ⊂𝕊^2 the set of unit vectors such that u∈Σ if G(u)∈Δ. For u∈Σ, we denote by K_i (u) the section G(u)∩ K_i, i=1,2. There is no supporting plane Π of K_1,K_2 such that Π∉Δ and [Π∩ (K_1∩ K_2)]=2. Contrary to the statement of Lemma <ref>, let suppose that there exists a supporting plane Π of K_1,K_2 such that [Π∩ (K_1∩ K_2)]=2, Π∉Δ. We denote by A,B the sets Π∩ K_1, Π∩ K_2, respectively and let D⊂Π be a disc such that A∪ B⊂ D. Let L_1,L_2 be a pair of supporting lines of S_1, S_2, respectively, such that L_1,L_2 are parallel and z_1:=L_1∩Π∉ D and z_2:=(L_2∩Π)∈ A∩ B (See Fig. <ref>). Let Γ_1,Γ_2 be a pair of corresponding planes of S_1,S_2 passing through ψ(z_2) and z_2 such that Γ_1 ∩ D=∅. By the hypothesis, there exists α∈ such that Γ_1∩ K_1= α+(Γ_2∩ K_2). We claim that α cannot be parallel to Π. Otherwise, the equality (Γ_1∩ K_1)∩Π= α+[(Γ_2∩ K_2)∩Π] would hold. Consequently, α+z_2∈ (Γ_1∩ K_1)∩Π=Γ_1∩ (K_1∩Π)=Γ_1∩ A. By virtue that A⊂ D and Γ_1∩ D=∅ it follows that Γ_1 ∩ A=∅. This contradiction proves our claim. As a corollary we obtain that line segment α+[(Γ_2∩ K_2)∩Π] is contained in K_1\Π and it is parallel to Π. By the choice of z_1 and z_2 it is possible to take Γ_1, Γ_2 in such a way that we can construct a family of line segments (non-degenerated in a point) contained in K_1\Π, parallel to Π and whose corresponding set of direction is a planar set in 𝕊^2 and whose measure is not zero, however, this contradicts the Theorem 1 of <cit.>. Such contradiction shows that our initial assumption is false. From this the Lemma follows. Now we suppose that p∉ K_2(u). Let L_1 ⊂ G(u) be a supporting line of K_2(u), p∈ L_1. In virtue that the points q_1=G(u) ∩ S_1, q_2= G(u) ∩ S_2 belongs to K_2(u), L_1 is not contained in C(S_2,p). Thus there exists a supporting plane G(v) of C(S_2,p) such that L_1⊂ G(v) and G(u)≠G(v). The equality G(v) ∩ K_1=G(v) ∩ K_2 holds. Using the same argument that in the proof of Lemma (<ref>) the equality (<ref>) follows. For the sets K_i^p:=(ℝ^n \ C(K_i,p)) ∩ K_i, respectively, i=1,2, where the point p ∈ℝ^n is given by the condition (C), we have the following lemma. The equality K_1^p=K_2^p holds. Let q be a point in L_1∩ K_2(u). We take a line L_τ⊂ G(u) such that p∈ L_τ and it has an interior point τ of K_2(u) in the line segment [q_1q], q_1≠τ≠q. Notice that, since q_1 ∈ K_2 and since all the points of K_2(u)\ K_2 are interior points of K_2, τ∈ K_2. The line L_τ is not contained in S(K_2,p) thus there exists a supporting plane G(w) of S(K_2,p) so that L_τ⊂ G(w), G(w) ≠ G(u) and G(w) ≠ G(v). 
Then, by Lemma <ref>, the sections K_1(w) and K_2(w) have four points in common, the points given by the intersection of the lines L_τ and G(w) ∩ G(v) with K_2. We denote by a,b and c,d such points, respectively. By hypothesis there exists a vector α∈ G(w) such that K_1(w)= α+ K_2(w). Then either: α=0, i.e., K_1(w)= K_2(w) or α≠0. If α=0, we finish. On the other hand, we suppose now that α≠0. Then, by Lemma <ref>, the quadrilateral □ abcd is a trapezoid and it has one pair of edges parallel to α. If the line segment ab is parallel to α, then ab ⊂ K_2, however it would contradict that τ∈ ab is an interior point of K_2. Thus the edges bc and da are parallel to α and bc, da⊂ K_1∩ K_2. We take a line L_ρ⊂ G(u) such that p∈ L_ρ and it has an interior point ρ of K_2(u) in the line segment q_1q, τ≠ρ. The line L_ρ is not contained in S(K_2,p) thus there exists a supporting plane G(x) of S(K_2,p) such that L_ρ⊂ G(x), G(x) ≠ G(u), G(x) ≠ G(v) and G(x) ≠ G(w). The sections K_1(x) and K_2(x) are translated and have six points in common, the points given by the intersection of the lines L_ρ, G(x) ∩ G(v) and G(x) ∩ G(w) with K_1. We denote by a',b', c', d' and e', f' such points, respectively. Such as we have seen above the segments a',b', c', d' and e', f' can not be parallel. If the quadrilateral □ a'b'c'd' is a trapezoid with a'c' and b'd' a pair of edges parallel, then the points {a,a',c,c'} and {b,b',d,d'} defined a pair of parallel supporting planes Π_1,Π_2 of K_2 and {a,a',c,c'},{b,b',d,d'}⊂ K_1∩ K_2 (See Fig. <ref>). However, according Lemma <ref>, this is impossible. Thus □ a'b'c'd' is not a trapezoid and, consequently, K_1(x)= K_2(x). As the supporting planes G(s) of C(S_2,p), s∈arc (u,v)⊂Σ are in correspondence with the points t∈ [p_1,q], t≠τ we get that for every s∈arc (u,v)⊂Σ the equality G(s)∩ K_1=G(s)∩ K_2 holds. Finally we can extent the previous argument for all s∈Σ (Considering the second supporting line of K_2(v) passing through p we find a vector v̅∈Σ such that K_1(v̅)= K_2(v̅) and repeat the argument from above to K_2(v) and K_2(v̅) and so on until we cover the set Σ). Now the proof of Lemma <ref> is complete. It is not possible the case K_1=K_2 and S_1 is not contained in S_2 in Theorem <ref>, i.e., there are no convex body K⊂ℝ^3, two spheres S_1,S_2⊂ K of radius r_1, r_2, r_1<r_2, S_1 is not contained in S_2, such that, for every pair of corresponding planes Π_1, Π_2 of S_1 and S_2, there exists a translation ψ: ℝ^n→ℝ^n such that ψ(Π_2∩ K) = Π_1∩ K. Contrary to the statement of the Lemma <ref>, let us assume that there are convex body K⊂ℝ^3, two spheres S_1,S_2⊂ K of radius r_1, r_2, r_1<r_2, S_1 is not contained in S_2, such that, for every pair of corresponding planes Π_1, Π_2 of S_1 and S_2, there exists a translation ψ: ℝ^n→ℝ^n such that ψ(Π_2∩ K) = Π_1∩ K. Since the measure of the set of directions parallel to the segments contained in K is zero, according with <cit.>, we can fin a direction w∈ and a pair of corresponding planes Π_1,Π_2 such that there is no a line segment parallel to u, contained in K, and Π_1∩ K=w+(Π_2 ∩ K). Notice that area(Π_1∩ K)=area(Π_2∩ K), where area(Z) denotes the area of the set Z⊂. Let Γ_1, Γ_2 be a pair of corresponding planes with Γ_1 parallel to Π_1. Thus there exists w̅∈ such that Γ_1∩ K=w̅+(Γ_2∩ K) which implies that area(Γ_1∩ K)=area(Γ_2∩ K). On the other hand, since the planes Π_1,Π_2 are corresponding there exist u∈𝕊^3 such that Π_1=p_1+r_1G(u) and S_1⊂ p_1+r_1E(u) Π_2=G(u) and S_2⊂ E(u) (notice that p_2 is the origin and r_2=1). 
First, we assume that S_1∩ S_2≠∅. Then Π_1∩ K⊂ F(u) and Γ_1∩ K, Γ_2 ∩ K⊂ E(u) (See Fig. <ref>). By (<ref>), (<ref>), (<ref>), the choice of w and the Brunn's inequality for slice volumes (Theorem 12.2.1 P. 297 of <cit.>) area(Γ_1∩ K)≠area(Γ_2∩ K). However (<ref>) contradicts (<ref>). The case S_1∩ S_2=∅ can be considered analogously. Thus the proof of Lemma <ref> is now complete. § PROOF OF THEOREM <REF> IN DIMENSION 3 FOR R_1≠R_2. §.§ Condition (A) and (B) are satisfied simultaneously. We suppose that the bodies K_1,K_2 are such that every time we translate the body K_1 and in order that the condition (A) holds, then the the condition (B) holds. This supposition implies that, since the conditions of the Theorem <ref> are invariant under translations, if the spheres S_1 has its center at the origin, then, for all u∈, it follows that K_2(u) =(1-r_1)u+[r_1G(u) ∩ K_1] , ((1-r_1)u+S_1) ∩ S_2=u and [(1-r_1)u+S_1] ⊂ S_2, i.e, it is possible to translate K_1 and S_1 in order that (A) and (B) holds for all u ∈ (See Fig. <ref>). Let Δ: → be an homothety with centre at the origin and radius of homothety r=r_1. Thus the bodies K_1 and Δ(K_2) have the following property: for each supporting plane Π of S_1 the sections Π∩ K_1 and Π∩Δ(K_2) are homothetic with centre and radius of homothety Π∩ S_1 and r, respectively, i.e., the bodies K_1 and Δ(K_2) satisfies the condition of Lemma <ref> and, consequently, K_1 and Δ(K_2) are two concentric spheres. Therefore K_1 and K_2 are two concentric spheres. §.§ The bodies K_1,K_2, the points p_1,p_2 and the spheres S_1,S_2 are such that there exist vectors u∈𝕊^2, z∈ for which the conditions (A) and (C) holds. By Lemma <ref>, the equality K_1^p=K_2^p holds, however, by Lemma <ref>, K_1≠ K_2. Thus there exists a pair of supporting planes of Γ_1, Γ_2 of K_1 and K_2 which are corresponding, close enough to G(u), such that there exists z̅∈ with the properties (𝒜) Γ_2 ∩ K_2=z̅ +(Γ_1 ∩ K_1), Γ_2 is supporting plane of z̅+S_1 and the sphere z̅+S_1 is contained in Γ^+_2, the supporting half-space of S_2 defined by Γ_2, , (𝒞) there exists a point p̅∈Γ_2\((z̅+K_1)∪ K_2) such that C(z̅+S_1,p̅)=C(S_2,p̅) (See Fig. <ref>). Furthermore, by Theorem 1 of <cit.>, we can take the planes Γ_1, Γ_2 in such a way that (𝒟) there is no a line segment parallel to z̅ and contained neither K_1 nor K_2. We will consider the cylinders C:={λz̅+x:x∈Γ_1∩ K_1, 0≤λ≤ 1}, C^-:={λz̅+x:x∈Γ_1∩ K_1, λ <0}, the planes Γ_λ :=λz̅+Γ_1, Δ:={ x+λz̅: x∈ G(u)∩Γ_2, λ∈} and the half-spaces Δ^+, Δ^- defined by Δ, we choose the notation such that p∈Δ^-. Furthermore, we establish the notation K̅_1=K_1+z̅, K̅_2:=K_2, K̅^p̅_i:=[\ C(K̅_i, p̅)]∩K̅_i, i=1,2. By Lemma <ref>, the following relations (Γ_1 ∩ K_1)∩ F(u)⊂ K_1, K_2 and (Γ_2 ∩ K_1)∩ F(u)⊂ K_1, K_2 holds. By (<ref>), (𝒟) and the Brunn's inequality for slice volumes (Theorem 12.2.1 P. 297 of <cit.>), the section Γ_λ∩ K_1 is such that A_λ:=[ (Γ_λ∩ K_1)] ∩Δ^+ ⊂\ C, for every λ, 0< λ < 1, and C_λ:=[(Γ_λ∩ K_1)∩Δ^+] ⊂ C^-∩Δ^+, for every λ, λ<0 such that Γ_λ∩ (K_1 ∩Δ^+)≠∅ (See Fig. <ref>). Notice that by Lemma <ref> A_λ=B_λ:= [ (Γ_λ∩ K_2)] ∩Δ^+ and C_λ=D_λ:= [ (Γ_λ∩ K_2)] ∩Δ^+. for every λ, 0< λ < 1. From (<ref>) and (<ref>) B_λ⊂\ C, for every λ, 0< λ < 1, and, on the other hand, from (<ref>) and (<ref>) D_λ⊂ C^-∩Δ^+, for every λ, λ<0. By (<ref>), for ϵ>0 small enough such that Γ_-ϵ∩ (K_1 ∩Δ^+)≠∅, the relation C_-ϵ⊂ C^-∩Δ^+ holds, consequently C_-ϵ+z̅⊂ C∩Δ^+. Notice that C_-ϵ+z̅⊂K̅_1. 
By Lemma <ref> applied to the bodies K̅_1,K̅_2, by virtue that we are assuming that the conditions (𝒜) and (𝒞) are satisfied, the equality K̅^p̅_1=K̅^p̅_2 holds. Thus C_-ϵ+z̅⊂K̅_2=K_2. Since Γ_-ϵ+z̅=Γ_(1-ϵ) it follows that C_-ϵ+z̅=B_(1-ϵ). Hence <ref> implies B_(1-ϵ)⊂ C∩Δ^+. By (<ref>) B_(1-ϵ)⊂ (\ C). From (<ref>) and (<ref>) we have an absurd. This contradiction was derived by the assumption that the conditions (A) and (C) holds. 9 bl J.A. Barker, D.G. Larman: Determination of convex bodies by certain sets of sectional volumes. Discrete Mathematics 241 (2001) 79-96. bg G. Bianchi and P. Gruber: Characterization of Ellipsoids, Archiv der Math. 49 (1987) 344-350. burton G. R. Burton. Sections of convex bodies. J. London Math. Soc., Vol 2-12, (1976) 331-336. bu H. Busemann, The geometry of geodesic, New York, 1955. ewald G. Ewald, D. G. Larman, C. A. Rogers, The directions of the line segments and of the r-dimensional balls on the boundary of a convex body in Euclidean space. Mathematika Vol. 17-1,(1970), 1-20. gardner R. Gardner. Geometric Tomography (Second Edition). Cambridge University Press. USA. 2006. jmm2 J. Jeronimo-Castro, L. Montejano and E. Morales. Shaken Roger's theorem for homothetic sections. Canadian mathematical bulletin, ISSN 0008-4395, Vol. 52, No. 3 (2009) 403-406. matousek Matousek. Lectures on discrete geometry (2002). Springer mo L. Montejano: Two applications of Topology to Convex Geometry. Proceedings of the Steklov Institute of Mathematics, Vol. 247 (2004) 164-167. dima F. Nasarov, D. Ryabogin, A. Zvavitch. Non-uniqueness of convex bodies with prescribed volumes of sections and projections. Mathematika, Vol. 59, Part 1 (2013) 213-221. ro1 C.A. Rogers: Sections and projections of convex bodies, Portugaliae Math. 24 (1965) 99-103. steenrod N. Steenrod: Topology of fibre boundles. Princeton Landmarks in Mathematics. 1960.
http://arxiv.org/abs/2407.12570v1
20240717135257
Further improvement of medium temperature heat treated SRF cavities for high gradients
[ "L. Steder", "C. Bate", "K. Kasprzak", "D. Reschke", "L. Trelle", "H. Weise", "M. Wiencek" ]
physics.acc-ph
[ "physics.acc-ph" ]
Further improvement of medium temperature heat treated SRF cavities for high gradients (This work was funded by the Helmholtz Association within the MT ARD and the European XFEL R&D Program.) L. Steder (a.steder@desy.de), C. Bate, K. Kasprzak, D. Reschke, L. Trelle, H. Weise, M. Wiencek, Deutsches Elektronen-Synchrotron DESY, Germany ============================================================================================================================================================================================ § ABSTRACT The application of heat treatments on 1.3 GHz TESLA type cavities in ultra-high vacuum at 250 °C to 350 °C is called medium temperature or mid-T heat treatment. In various laboratories such treatments on superconducting radio frequency (SRF) cavities result reproducibly in three main characteristic features for the quality factor Q_0 in dependency of the accelerating electric field strength E_acc. First, comparing a mid-T heat treatment with a baseline treatment, a significant increase of Q_0 up to 5·10^10 at 2 K can be observed. Second, with increasing accelerating gradient the Q_0 increases up to a maximum around 16 to 20 MV/m. This effect is known as anti-Q-slope. The third observation for a mid-T heat treatment compared to a baseline treatment is an often reduced maximum gradient. In <cit.> the appearance of a high field Q-slope (HFQS) is reported after mid-T heat treatments of 3 hours at 350 °C or of 20 hours at 300 °C at DESY. Using the heating temperature and the heating time taken from the temperature profile of the furnace, effective oxygen diffusion lengths were calculated. In the follow-up study presented here, a set of three single-cell cavities with diffusion lengths above 1700 nm, showing HFQS, were treated with an additional so-called bake of 24-48 hours at 120 °C to 130 °C. The subsequent reproducible Q_0(E_acc) performances indicate that the bake procedure cures the HFQS like for cavities treated with the EuXFEL recipe <cit.> of electropolishing (EP) and following treatments. As presented in the following, Q_0 values of more than 3·10^10 at 16 MV/m and accelerating gradients of 32 to 40 MV/m are achieved. More detailed analyses of the cavity performances - especially their sensitivity against trapped magnetic flux - as well as the application to EuXFEL type nine-cell cavities are currently under preparation. § COMBINATION OF HEAT TREATMENTS FOR HIGH GRADIENTS For a future upgrade of the European XFEL an exchange of the first 17 accelerator modules is proposed <cit.>. The performance goals for the new cavities are envisaged with a Q_0 of 2.7·10^10 at an E_acc of 16 MV/m. Nowadays, the high-duty-cycle (HDC) working group, which is coordinating the R&D work towards the mentioned upgrade, even proposes a Q_0 above 3·10^10 and gradients larger than 20 MV/m <cit.> for the HDC operation mode. In addition, the EuXFEL shall still be operable in the pulsed mode for high beam energies, for which large accelerating gradients of the cavities are needed as well. The present cavity R&D activities at DESY are therefore focused on mid-T heat treatments. In <cit.>, the results of 19 mid-T heat treatments on single-cell 1.3 GHz TESLA cavities, which were treated in UHV (ultra-high vacuum) at medium temperatures of 250 °C to 350 °C with a duration between 3 and 20 hours, were analyzed. Five of the used single-cell cavities are fabricated of large-grain (LG) niobium material, all others are fine-grain (FG) cavities. In the following, the most important findings of <cit.> are shortly reported and the scope of the additional analysis presented in this very paper is introduced. 
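The effective oxygen diffusion lengths used below to categorize the treatments can be estimated from the furnace temperature profile with an Arrhenius diffusivity for interstitial oxygen in niobium. The sketch below uses representative literature-style values for D_0 and E_A and idealized flat-top profiles (ramps ignored); it is not the DESY implementation, and the exact prefactor, activation energy and length convention may differ by a factor of order unity.

```python
import numpy as np

k_B = 8.617e-5           # eV/K
D_0 = 1.5e-2             # cm^2/s, assumed prefactor for O diffusion in Nb
E_A = 1.2                # eV,     assumed activation energy

def diffusivity(T_celsius):
    T = np.asarray(T_celsius, dtype=float) + 273.15
    return D_0 * np.exp(-E_A / (k_B * T))

def effective_length(times_h, temps_c):
    """Effective diffusion length (nm) from a furnace profile T(t): sqrt(integral of D dt)."""
    t = np.asarray(times_h, dtype=float) * 3600.0       # s
    D = diffusivity(temps_c)                             # cm^2/s
    integral = np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(t))
    return np.sqrt(integral) * 1e7                       # cm -> nm

# compare with the >1700 nm diffusion lengths quoted in the abstract
print("3 h  @ 350 C :", round(effective_length([0, 3],  [350, 350])), "nm")
print("20 h @ 300 C :", round(effective_length([0, 20], [300, 300])), "nm")
```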
The mid-T heat treatment requires no additional gases like nitrogen during the process, is highly reproducible and eliminates the need for a subsequent chemical surface treatment. In the analysis the treatments were categorized according to the effective oxygen diffusion length based on the whole temperature vs. time profile of the furnace treatments; more details are explained in <cit.>. As can be seen in <ref>, a characteristic increase of the quality factor over E_acc with a maximum at 16 to 20 MV/m can be observed for all treatments. A significant reduction of the BCS surface resistance was observed as well. For the investigated range of the oxygen diffusion length between 234 nm and 2655 nm, the Q_0 (at E_acc = 16 MV/m) at 2 K was independent of the diffusion length. For most treatments, no gradient higher than 30 MV/m could be achieved, as can be seen in <ref>. This observation is consistent with the results of other laboratories <cit.>. Interestingly, a slight trend towards higher gradients with larger diffusion length was found, as can be seen in the right part of <ref>. Moreover, for large oxygen diffusion lengths – corresponding to temperatures around 300 °C for 20 h and around 350 °C for 3 h – the so-called high field Q-slope (HFQS) with its characteristic exponential decay of the Q_0 value above an onset accelerating field of about 28 MV/m was observed. The HFQS is a well-known feature of the Q_0(E_acc) performance after an EP as final surface treatment <cit.> or a so-called 'soft reset' via 800 °C annealing <cit.>. Typically such Q_0(E_acc) curves showing HFQS are mostly limited by available RF power (PWR) between 30 and 35 MV/m, meaning the cavity cannot be driven to the breakdown (BD) of the superconductivity, which is also called quench. Nevertheless, the HFQS was not expected to occur after a mid-T heat treatment at such low temperatures around 350 °C. The well-established empirical procedure to cure the HFQS is a so-called 'bake'. Several causes for the HFQS and methods to overcome it via heat treatments are reviewed in <cit.>. An additional explanation for the occurrence of HFQS is possible via so-called 'nano-hydrides'; details can be found in <cit.>. During such a bake procedure a UHV inside of the cavity is maintained and a wide range of parameters for temperature (90 °C - 150 °C) and duration (12 h - 100 h) can be applied. Typically a 48 hours at 120 °C process is used, as in the cavity series production for the EuXFEL <cit.>. In 2018 it was observed that the so-called 'two-step bake', adding four hours at 75 °C at the beginning of a standard procedure of 48 hours at 120 °C <cit.>, enhances the cavity gradient. A statistical analysis of EuXFEL cavity data <cit.> and the above mentioned two-step bake led to the implementation of a process of 4 hours at 75 °C followed by 24 hours at 130 °C as standard treatment at DESY. In order to investigate the response of the three mid-T heat treated single-cells showing HFQS, an additional bake process and a subsequent vertical test were performed. § EXPERIMENTAL OVERVIEW The DESY furnace infrastructure, used for mid-T treatments, consisting of a refurbished niobium retort furnace located in the cavity assembly cleanroom, as well as its control features and the related workflow are described in detail in <cit.>. The complete information about the used single-cell cavities, their surface treatments and niobium material, the testing environment and procedures as well as further explanations of the measured variables can be found in <cit.>. The measurement uncertainty for independent RF measurements is approximately 10% for E_acc and up to 20% for Q_0. 
However, within a single vertical test and for each Q_0(E_acc) curve, the observed measurement deviation is significantly smaller, only around 1% for E_acc and 3% for Q_0 <cit.>. Results of three cavities are presented in the following; they are called 1AC02, 1DE12 and 1RI06 - with the latter made of LG niobium. All of them received a baseline treatment in form of a soft reset via 800 °C annealing or a short EP and a consecutive vertical performance test. In between each baseline or heat treatment and these vertical tests a standard cycle of six high-pressure rinsings (HPRs) followed by the assembly of auxiliaries and especially an adjustable antenna for adaptive coupling takes place. Furthermore, mid-T heat treatments resulting in large oxygen diffusion lengths were applied according to the parameters given in column two of <ref> and followed by an extensive vertical test. Due to the appearance of the HFQS for these three cavities, two of them received the bake in the standard environment in an inert gas atmosphere on the fully assembled and evacuated cavity ready for vertical test. 1DE12 was treated 'open' without any attached flanges under argon atmosphere in a dedicated heating chamber which is located in the DESY cleanroom. The design of this chamber is based on the successful initial experiments reported in <cit.>. Since this chamber is only usable up to a maximal temperature of 120 °C, also the other bake treatments were restricted to this maximum temperature. Parameters of the different applied bake procedures can also be found in <ref>. § RESULTS OF MID-T AND BAKE HEAT TREATED SRF CAVITIES In the scope of the recent campaign at DESY seven single-cell cavities were treated for either 3 hours at 350 °C or 20 hours at 300 °C, resulting in calculated oxygen diffusion lengths in the range of 1700 nm to 2700 nm <cit.>. Three of them showed a clear HFQS behaviour after the mid-T heat treatment. Hence, they are the subject of the bake treatment studies in the following. §.§ Quality factors and accelerating gradients The onset of the HFQS of the aforementioned cavities ranges between 28 MV/m and 32 MV/m. Above these field strengths the Q_0 decreases exponentially with increasing accelerating gradient, which can be seen in the Q_0(E_acc) curves in the following <ref>, <ref> and <ref>. All of the here shown measurements, no matter whether before or after heat treatments, were performed field emission free and at an operation temperature of 2 K. In addition, also the baseline performances after the reset treatments are shown. All of the baseline measurements exhibit the typical HFQS after EP or 800 °C treatments. The end of these curves is always determined by reaching the RF power limit during vertical testing. Comparing the performances before and after the mid-T heat treatments, the typical features like significant Q_0 enhancement and anti-Q-slope can be observed for all three cavities. The mid-T heat treatment characteristic of an early quench is not occurring; instead an HFQS can be observed. In contrast to the baseline measurements, the measurements of the mid-T heat treatments are limited by quenches and not by an RF power limit. The most interesting curves in <ref>, <ref> and <ref> are the ones showing the cavity performances after the additional bake procedures. For all three cavities, the HFQS after the mid-T heat treatment is cured by the additionally applied bake treatments. All of these curves are limited by quenches of the cavities, which occur between 32 MV/m for 1RI06 and 40 MV/m for 1AC02. 
§.§.§ 1AC02 In <ref> the behaviour of 1AC02 after the baseline annealing, the heat treatment of 3 h at 350 °C in the niobium retort furnace and the additional bake procedure, which took place in a UHV pumped condition, is shown. During both tests after the heat treatment and the additional bake, a possible antenna overheating was observed for the last few measurement points while applying high RF power. A re-test with an exchanged adjustable antenna is under preparation. The very large gain in the quality factor produced by the heat treatment is almost preserved after the bake. Most prominent is the very large maximal accelerating gradient of 40 MV/m with a quality factor of 2.4·10^10 at this point. §.§.§ 1RI06 The performances of the large grain cavity 1RI06 are shown in <ref>. After its production the cavity got a final EP with a material removal of 20 µm before the heat treatment procedure of 20 h at 300 °C. This prolonged treatment at 300 °C results in similar oxygen diffusion lengths as 3 h at 350 °C procedures <cit.>. Also here, the enormous difference between the baseline curve and the performance after the heat treatments is obvious and demonstrates the advantage of heat treatments. By curing the HFQS with a bake under vacuum, the accelerating gradient is in this case not enhanced. But with a quality factor of 2.4·10^10 at 32 MV/m, also here an outstanding performance can be observed. This cavity degrades in the quality factor after first quenching. As described in <cit.>, this degradation can be healed via a thermal cycling up to 30 K. After reaching the operation temperature of 2 K again, the initial higher quality factor is restored. §.§.§ 1DE12 In contrast to the other two single-cells, cavity 1DE12 received its bake treatment in a dedicated heating chamber in the DESY cleanroom while being exposed to argon atmosphere. Also for this cavity a degradation in the quality factor can be observed. In this case, the degraded performances are shown in <ref> (a), since they are better comparable. Additionally, the non-degraded curves are shown in <ref> (b) to enable an evaluation of the possibly achievable quality factor. The curve of the first measurement (open markers) shows the degradation in the quality factor (kink) before the maximal achievable gradient. This was not only observed here, but also in other vertical tests of heat treated cavities, which were performed at DESY. A further analysis, which may reveal an explanation for the degradation process, needs to be conducted in the future. A quality factor of 2.4·10^10 at 35 MV/m even in the degraded state is again a very good performance of a cavity after the combination of bake and heat treatment. Furthermore, it is notable that the quality factor is even slightly improved after the baking process compared to the heat treatment. §.§.§ Performance comparison In <ref> the curves of all three cavities after the combined heat and bake treatment are compared. It is obvious that the performances of all three of them are outstanding and very promising since large accelerating gradients are in reach. All quality factors are above 2.4·10^10 over the complete gradient range at 2 K. At 16 MV/m and 20 MV/m, respectively, the quality factor values are between 3.2·10^10 and 4.0·10^10, thus significantly above the foreseen specification for a European XFEL upgrade. §.§ Surface resistances In order to learn more about the underlying processes which lead to the improved performance after bake appliance, the composition of the surface resistance R_S(T,B) is analyzed. 
The surface resistance of a SRF cavity can be written as R_S(T, B) = R_BCS(T) + R_const, where R_BCS(T) depicts the temperature dependent BCS resistance and R_const=R_res+R_flux(B) consists of the temperature independent parts: the residual resistance R_res and R_flux(B), an additional surface resistance induced by magnetic flux. Both R_BCS and R_const are studied separately in the following. An estimation of R_BCS at 2 K can be gained using curves taken at 2 K and 1.5 K with Q_0 ≈ 1/R_S and R_BCS,2 K≈ R_S,2 K-R_S,1.5 K. The latter equation is used to determine R_BCS curves for all three cavities after the heat treatment and after the combination of bake and heat treatment, which are shown in <ref>. Due to the steep Q-slope in the curves, the evaluation beyond the value of 26 MV/m makes no sense. Interestingly, the BCS resistance is reduced in all cases after the additionally applied bake procedure, though the significance is not good, due to uncertainties in the range of 1 to 2 nΩ <cit.>. The constant part of the surface resistance can be approximated by the evaluation of curves at 1.5 K and in this case at 16 MV/m. The corresponding results can be found in <ref>. While the temperature dependent resistance is reduced after the bake treatment (cf. <ref>), the temperature independent part shown here appears to be increased. Only for 1DE12 is the R_const lowered, but this change of 0.3 nΩ is smaller than the uncertainty level. Here, only observations are reported, while for the interpretation of the results more studies with cavities subjected to the combination of bake and heat treatments are necessary. § SUMMARY Applying a heat treatment of either 3 hours at 350 °C or 20 hours at 300 °C results in the cavities showing characteristic performances but also an HFQS. This slope can be cured by an additionally applied bake procedure. The newly developed heat-treatment and bake chain leads to quality factors well above 2·10^10 over the complete gradient range, and in the region of interest for a possible EuXFEL upgrade even values between 3·10^10 and 4·10^10 are reached. For the first time, heat treated single-cell cavities achieve, after an additional bake, besides their large quality factors also accelerating gradients of up to 40 MV/m. The behaviour of the R_BCS and R_const needs to be analyzed in more detail in order to optimize the process even further. § ACKNOWLEDGEMENTS We express our gratitude to the DESY and University of Hamburg SRF team for fruitful discussions and especially the cleanroom staff and the AMTF team for their essential support during cavity preparation and testing. This work was supported by the Helmholtz Association within the topic Accelerator Research and Development (ARD) of the Matter and Technologies (MT) and benefits greatly from the European XFEL R&D Program.
http://arxiv.org/abs/2407.12905v1
20240717180000
Self-Dual Cosmology
[ "Mariana Carrillo González", "Arthur Lipstein", "Silvia Nagy" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
=1 equationsection a]Mariana Carrillo González,[a]Theoretical Physics, Blackett Laboratory, Imperial College, London, SW7 2AZ, U.K b]Arthur Lipstein,b]Silvia Nagy[b]Department of Mathematical Sciences, Durham University, Durham, DH1 3LE, UKm.carrillo-gonzalez@imperial.ac.ukarthur.lipstein@durham.ac.uksilvia.nagy@durham.ac.uk We construct cosmological spacetimes with a self-dual Weyl tensor whose dynamics are described by conformally coupled scalars with only cubic self-interactions. Similar to the previously discovered cases in flat and (Anti) de Sitter backgrounds, the interactions are characterized by a bracket that encodes a kinematic algebra. We discuss how the color-kinematics duality and double copy are realized in these cosmological backgrounds. If we further impose that the Ricci scalar is that of an FLRW spacetime, we find two new self-dual metrics corresponding to radiation-dominated and coasting (non-accelerating) FLRW backgrounds. Relaxing this requirement, we find an infinite family of solutions given by three different conformal classes of cosmological self-dual metrics. These solutions approximate those of FLRW as long as we impose a simple additional constraint on the scalar theory. Self-Dual Cosmology [ July 22, 2024 =================== § INTRODUCTION Computations in curved spacetimes can be extremely involved, mainly due to the high non-linearity of Einstein's equations. However, this same non-linearity allows for exciting solutions to arise in gravitational theories. When working on asymptotically flat spacetimes, simplifications can arise. For example, the scattering amplitudes of gravitons can be written as the double copy of the scattering amplitudes of gluons <cit.>. This can be done by exchanging color structures with kinematic ones, which satisfy the same algebra due to the color-kinematics duality. One of the simplest realizations of the double copy is within the so-called self-dual sector <cit.>. This corresponds to the sub-sector of solutions that have a self-dual curvature tensor. It is well known that Ricci flat spacetimes that are solutions to the vacuum Einstein equations with a self-dual Riemann tensor can be described by a single scalar satisfying the Plebanski “heavenly” equation <cit.>. Furthermore, when working in the lightcone gauge, the interactions of this scalar can be written in terms of nested Poisson brackets acting in a two-dimensional subspace, which gives rise to a single cubic vertex. This cubic vertex corresponds to the (++-) vertex. For real momenta, all the tree-level scattering amplitudes of this theory vanish, which is a consequence of their classical integrability <cit.>. This property is broken at loop order, but the amplitudes are simple rational functions <cit.>, and the theory is one-loop exact, since no 2-loop or higher diagrams can be written with the single (++-) vertex. Similar statements hold for self-dual Yang-Mills in a flat spacetime. In this case, the interactions are given in terms of a Lie bracket and the same Poisson bracket as in the gravitational case. This structure showcases one of the simplest examples of color-kinematics duality. The Poisson bracket encodes the kinematic algebra, which in this case is given by area-preserving diffeomorphisms <cit.>. Different realizations of the double copy in the self-dual sector have been described in <cit.>. 
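As a quick illustration of this kinematic structure, the Jacobi identity of the lightcone Poisson bracket {f,g}=∂_w f ∂_u g − ∂_u f ∂_w g (written out explicitly later in the text) can be verified symbolically. The following is a minimal sketch using sympy; it is only a consistency check, not part of the construction discussed above.

import sympy as sp

u, w = sp.symbols('u w')
f, g, h = (sp.Function(name)(u, w) for name in ('f', 'g', 'h'))

def pb(a, b):
    # lightcone Poisson bracket {a, b} = d_w a d_u b - d_u a d_w b
    return sp.diff(a, w) * sp.diff(b, u) - sp.diff(a, u) * sp.diff(b, w)

jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
print(sp.expand(jacobi))   # prints 0: the bracket is a Lie bracket

The analogous check for the deformed bracket appearing in the cosmological case below is what underlies the Jacobi-bracket structure of the interactions.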
Other explicit realizations of kinematic algebras or Lagrangians with explicit color-kinematics duality have arisen in various contexts for theories beyond the Yang-Mills self-dual sector <cit.>. An interesting question that has been asked in the past few years is whether a double copy relation can exist in curved spacetimes, and hence help us to simplify the complicated calculations arising in these backgrounds. This has been explored in many different contexts in <cit.>, but a systematic understanding of the double copy in general backgrounds is still lacking. A promising avenue is to consider self-dual theories in curved spacetimes. A first step in this direction was taken in <cit.>, where self-dual gravity in Anti-de Sitter (AdS) space was reduced to a simple cubic scalar theory which arises from the double copy of self-dual Yang-Mills in AdS, and exhibits a deformed w_1+∞ algebra analogous to that of self-dual gravity in flat spacetime <cit.>[See also <cit.> for further work and deformations of these algebras in gravity and YM.]. Other formulations of self-dual gravity in de Sitter (dS) space were constructed in <cit.>. The w_1+∞ symmetry in AdS was subsequently studied from other perspectives in <cit.>. In this paper, we will explore a further generalization of these ideas to cosmological spacetimes. In particular, we show that they extend to self-dual gravity in radiation dominated and coasting (non-accelerating) FLRW spacetimes, as well as an infinite class of solutions, obtained by performing Weyl transformations, whose stress tensors become FLRW-like after imposing a certain constraint on the scalar theory. The scalar theory describing self-dual gravity in these backgrounds contains cubic interactions constructed from Jacobi brackets <cit.>, which encode a kinematic algebra analogous to the color algebra of self-dual Yang-Mills, reflecting a color/kinematics duality in these backgrounds. The outline of the paper is as follows. In Section <ref>, we introduce the concept of off-shell and on-shell self-dual Weyl tensor. We show that the on-shell case, where the Ricci tensor is fully fixed by a chosen stress-energy tensor, only gives rise to the known flat and (A)dS cases. These self-dual solutions can be cast in terms of a scalar theory with cubic interactions. We extend this description to other backgrounds by considering solutions to the self-dual Weyl tensor without incorporating the Einstein equations, that is, in the off-shell case. This construction gives three conformal classes of metrics with self-dual off-shell Weyl tensors. For each of these conformal classes, we find a Jacobi bracket that characterizes their cubic interactions. We continue in Section <ref> by deriving these self-dual solutions from the double copy of Yang-Mills in conformally flat backgrounds and showing that they exhibit a deformed w_1+∞ algebra which is closely tied to the kinematic algebra encoded by the Jacobi brackets. This construction generalises the previously known flat and dS cases to more general cosmological self-dual solutions. In Section <ref>, we analyse the stress tensors and equations of state of the new cosmological self-dual solutions to gain further insight into their physical interpretation. We highlight two new interesting cases whose Ricci scalar is that of an FLRW metric; these are radiation-dominated and coasting FLRW self-dual solutions. Finally, we summarize our results and discuss future directions in Sec. <ref>. 
In the Appendices, we briefly review power-law cosmologies and provide more details about the self-dual Weyl tensor equation, Jacobi brackets, and properties of the new self-dual solutions. § SELF-DUAL GRAVITY IN THE PRESENCE OF SOURCES In spacetimes with vanishing Ricci tensor, the condition of a self-dual Weyl tensor is reduced to R_μνρσ = 12ϵ_μν^μνηλ R_ηλρσ. where ϵ_=√(g)ε_, ε_ the 4-dimensional Levi-Civita symbol, and we work in Euclidean signature[We can obtain non-trivial solutions in Lorentzian signature by adding a factor of i in the right-hand side of Eq. (<ref>) and considering complex solutions.]. Note that this equation encodes both the Einstein equations and the Bianchi identity when contracting two of its indices. Thus, solving Eq. (<ref>) is enough to find a self-dual solution to the vacuum Einstein equations. Let us instead consider the following constraint: C_=1/2ϵ_μν^ηλ C_ηλρσ, where the Weyl tensor is given by C_μν^ρσ=R_μν^ρσ-2 R_[μ^[ρg_ν]^σ]+1/3 R g_[μ^[ρg_ν]^σ] . Note that this equation is by definition invariant under Weyl transformations. Hence, if we find one solution then we can obtain an infinite family of solutions by applying Weyl transformations. We will refer to this as a conformal class. Recall that the Ricci tensor and scalar are determined by the Einstein equation as follows R_μν=T_μν - 1/2T g_μν , where T^ is the stress-energy tensor, and T≡T_μ^μ is its trace. Making this replacement in (<ref>) then gives an object that we will refer to as the on-shell Weyl tensor: .C_μν^ρσ|_ on-shell≡R_μν^ρσ-2T_[μ^[ρg_ν]^σ]+2/3Tg_[μ^[ρg_ν]^σ] . We may then impose self-duality of the on-shell Weyl tensor, which we will refer to as on-shell self-duality: .C_|_ on-shell=1/2ϵ_μν^ηλ.C_ηλρσ|_ on-shell Contracting this equation on both sides with g^νσ then implies the Einstein equations sourced by a generic stress-energy tensor: R_μρ-T_μρ+1/2 T g_μρ=1/2√(-g)ϵ_μ^σηλ R_ηλρσ=0 . As in the vacuum case, the left-hand side gives the trace reversed Einstein equations, and the right-hand side gives the Bianchi identity. If we set T^=-Λ g^, we will recover the result for an (A)dS spacetime found in <cit.>. Note that the on-shell self-duality constraint in (<ref>) is not Weyl invariant. §.§ On-shell self-duality First, we will consider solutions to the on-shell self-dual equations in Eq. (<ref>). We will work in double lightcone coordinates u =t+iz , v=t-iz , w =x+iy , w̅=x-iy , and consider the metric ansatz given by ds^2=a(τ)^2 (dw dw̅-du dv + h_μν dx^μ dx^ν) , where τ is the conformal time given by τ=(u+v)/2 . This metric reduces to an FLRW metric when h_μν=0. We will refer to this as the background metric. For convenience, we will split the spacetime coordinates as x^i=(u,w), y^α=(v,w̅). Since our coordinates are complex, x^i corresponds to the holomorphic and y^α the anti-holomorphic sector. We work in lightcone gauge, h_uμ=0, and take the ansatz. h_iμ=0, h_αβ = 1/4_(α_β)ϕ , where _α and _α are differential operators that will be unspecified for now. We proceed by making the following assumptions: * The _α and _α operators are at most first order in derivatives and can only depend on functions of conformal time. * When a(τ)→1, the self-dual solution reduces to the standard result in a flat background were _α=_α=(∂_w,∂_u) , and the scalar equation of motion becomes _ℝ^4ϕ - {{ϕ,ϕ}}=0 where the Poisson brackets are defined as {f,g}=∂_wf∂_ug-∂_uf∂_wg . 
The interactions of this scalar are purely on the holomorphic sector and encode the kinematic algebra corresponding to area-preserving diffeomorphisms of the holomorphic x^i plane <cit.>. * The equation of motion for the scalar contains at most second-order derivatives. Given these assumptions, we will show that the on-shell self-duality in (<ref>) can only be solved in flat or (A)dS backgrounds. We start by looking at the two components of Eq. (<ref>) that involve only and : .C_vuvu|_ on-shell-1/2ϵ_vu^ηλ.C_ηλ vu|_ on-shell= 1/2a'^2 +a a' ∂_u-a^2(∂_u∂_w-∂_u^2) =0 , .C_w̅vwu|_ on-shell-1/2ϵ_w̅v^ηλ.C_ηλ wu|_ on-shell= 1/2a'(a' +a(∂_u-∂_w)) =0 . Here, prime denotes a conformal time derivative: '≡∂_τ. The stress tensor in Eq. (<ref>) corresponds to a perfect fluid sourcing an FRW spacetime with scale factor a: T^=ρ u^μ u^ν+ P γ^ , ρ=3(a'/a^2)^2 P=ρ/3-2a”/a^3 , where ρ is the energy density, P is the pressure, u^μ is a timelike unit vector giving the direction of flow of the fluid, and γ_μν=g_μν+u_μ u_ν is the metric of the surface perpendicular to the flow. Under assumptions 1-2, we find that to solve Eq. (<ref>) we need to fix the operators as _α=(∂_w,∂_u+F(τ)) _α=(∂_w,∂_u+2a'/a-F(τ)) with an arbitrary function F. Using these operators on Eq. (<ref>) then gives .C_vuvu|_ on-shell-1/2ϵ_vu^ηλ.C_ηλ vu|_ on-shell=(2a'^2-a a”)∂_w^2ϕ/4=0 . The only solution to this equation, without imposing any constraints on ϕ, requires the scale factor to be either a constant or proportional to 1/τ. Thus, the on-shell self-duality equations can only be reduced to equations of motion of a scalar theory in flat <cit.> or (A)dS background <cit.>. Instead of constraining the scale factor, we could impose ∂_w^2ϕ=0, but then the vuw̅u component of Eq. (<ref>) would lead to .C_vuw̅u|_ on-shell-1/2ϵ_vu^ηλ.C_ηλw̅u|_ on-shell=(∂_τ-2a'/a)(2a'^2-a a”)∂_wϕ/8=0 . which again reduces the equations to the (A)dS and flat cases or further imposes ∂_wϕ=0. Imposing the latter would lead to a non-interacting theory, as can be seen in Eq. (<ref>) below. Thus, to find a solution in more general backgrounds, we must consider the less restrictive constraint in (<ref>), which we will refer to as off-shell self-duality. §.§ Off-shell self-duality Following the result above, we analyze whether the off-shell self-duality constraint in  (<ref>) can be solved in a similar manner for more generic backgrounds. Since this equation is invariant under Weyl rescalings of the metric, the scale factor will not play a role. In other words, every solution obtained in this section does not correspond to a single metric but a conformal class of metrics. We start by solving the components of Eq. (<ref>) that are linear in h_ subject to the ansatz in (<ref>) and (<ref>) and assumptions 1-3 described in the previous subsection. This requires that C_vuvu-1/2ϵ_vu^ηλ C_ηλ vu∝∂^2_wh_w̅w̅-2∂_u∂_w h_vw̅+∂_u^2 h_vv=0 . This equation can be solved by taking =Π=(∂_w,∂_u) , =Π^ζ≡(∂_w,∂_u+2 ζ(u+v)) . where ζ(u+v) is a function with units of inverse length. Using these operators, the self-dual equations reduce to only two independent equations, non-linear in ϕ, given by the components v,u,w̅,v and w̅,u,w̅,u of Eq. (<ref>). The v,u,w̅,v component determines the equation of motion satisfied by the scalar field: (-∂_u∂_v+∂_w∂_w̅)ϕ-∂_u ζ/ζ(∂_u+∂_v)ϕ-∂^2_u ζ/ζϕ +(h_vvh_w̅w̅-h_vw̅^2+(∂^2_u ζ/2ζ-2∂_u ζ+ζ^2)(∂_wϕ)^2)=0. 
Then, the w̅,u,w̅,u component of the self-dual equations can only be solved if ∂_u(∂_u ζ/ζ^2)=0 , which gives[Here and throughout the paper, we will ignore additional freedom corresponding to a constant shift of conformal time.]ζ=-2/(u+v) , or ζ=constant . The details of this calculation can be found in the Appendix <ref>. The first case corresponds to the (A)dS self-dual equation. The constant ζ case will give rise to a new solution if the constant does not vanish and reduces to the flat one when ζ→0. Using Eq. (<ref>), the equation of motion Eq. (<ref>) can be rewritten as √(g_ζ)(_ζ-R_ζ/6)ϕ-{{ Lζ ϕ,Lζ ϕ}} _ζ=0 , where _ζ is the Laplacian operator for an auxiliary metric (g_μν)_ζ of the form (<ref>) with h_μν=0 and scale factor a=Lζ. Note, however, that the scale factor of the auxiliary metric should not be identified with that of (<ref>) at this stage. Indeed, the off-shell self-duality in (<ref>) is Weyl invariant, so the scale factor in (<ref>) plays no role. We have introduced a length scale, L, to keep the scale factor dimensionless. Additionally, having the explicit factors of this length scale in the equations of motion will allow us to take the correct flat space limit. The auxiliary metric corresponds to de Sitter if ζ=-2/(u+v) or flat space if ζ=constant. R_ζ is its Ricci scalar, and g_ζ its determinant. Note that the kinetic term is that of a conformally coupled scalar and can be mapped to a massless kinetic term by performing a Weyl transformation of the auxiliary metric to flat space. The interactions are given by a bracket defined as { f,g} _ζ ={ f,g} +c_ζ ζ(u+v) (f∂_wg-g∂_wf) , c_ζ ≡2(∂_u ζ/ζ^2-1)=constant , where the undeformed Poisson bracket is defined in (<ref>). This bracket satisfies the Jacobi identity, {f,{ g, h} _ζ} _ζ+{g,{ h, f} _ζ} _ζ+{h,{ f, g} _ζ} _ζ=0. and instead of satisfying the Leibniz rule, it satisfies a deformed version of it, { fg,h} _ζ=f{ g,h}_ζ+g{ f,h}_ζ-c_ζζ(u+v) fg∂_wh , As shown in Appendix <ref>, this bracket corresponds to a Jacobi bracket <cit.>, which is defined as a Lie bracket on the algebra of smooth functions and is given by a bilinear first-order differential operator D as D(f, g)=i(P)( f ∧ g)+f i(X) g-g i(X) f where i denotes the interior product, P is a bivector, and X a vector (called the Reeb vector field), satisfying [P, P]=2 X ∧ P , [X, P]=0 , as a consequence of the Jacobi identity. Here, [ , ] is the Schouten-Nijenhuis bracket, which is a generalization of a Lie bracket for multivector fields. Using a coordinate basis, (<ref>) can be expressed as D(f, g)=P^μν∂_μ f ∂_ν g +f X^μ∂_μ g - g X^μ∂_μ f , where P^μν is anti-symmetric. Then (<ref>) can be put in the form above with P^μν and X^μ given in (<ref>). It is interesting to note that other formulations of self-dual solutions on AdS backgrounds obtained from twistor space also give rise to a Jacobi instead of a Poisson bracket <cit.>. The Poisson bracket, defined by Eq. (<ref>) with ζ=0, defines a kinematic algebra that can be lifted to the w_1+∞ algebra <cit.>. In the following, we derive the deformation of the w_1+∞ algebra that arises when considering instead the Jacobi bracket with ζ≠0. By performing a Weyl rescaling ϕ→ (1/ (Lζ))ϕ one can rewrite Eq. (<ref>) as _ℝ^4ϕ-1/Lζ{{ϕ,ϕ}} _ζ=0 , The solutions to (<ref>) are plane waves: ϕ=e^i k· x . We will further consider on-shell states with k^2=0, and take the soft limit (k_u,k_v,k_w,k_w̅) → (0,0,0,0) in such a way that k_w̅/k_u=k_v/k_w=ρ, where ρ is some number. 
These on-shell plane waves can now be written as an expansion in soft momenta given by e^ik· x=∑_a,b=0^∞(ik_u)^a(ik_w)^b/a!b!𝔢_ab , where 𝔢_ab=(u+ρw̅)^a(w+ ρ v)^b. Further defining w_m^p=1/2𝔢_p-1+m,p-1-m we find that { w_m^p,w_n^q} _ζ ={ w_m^p,w_n^q} +c_ζ/2 (m+q-p-n) ζ(u+v) w_m+n+1/2^p+q-3/2 . When ζ=0, this reduces to the w_1+∞ algebra in flat space <cit.>, and ζ≠0 gives a deformed version of this algebra. There are known deformations of the w_1+∞ algebra <cit.>, which involve a constant parameter. In principle, this is similar to the case ζ=constant≠0, but it is unclear whether our case corresponds to any of the known deformations via a change of variables. § COLOR-KINEMATICS DUALITY AND DOUBLE COPY An intriguing property of self-dual gravity in a flat background is that it can be derived from the double copy of self-dual Yang-Mills at the Lagrangian level <cit.>. Since Yang-Mills theory is classically scale-invariant in four dimensions, its Lagrangian in any conformally flat background is the same as in flat space, although one has to impose boundary conditions if there is a boundary. Moreover, self-dual gravity in AdS can be derived from an asymmetrical double copy <cit.>. We will briefly review the self-dual Yang-Mills theory and then show how to obtain self-dual gravity in more general backgrounds from a double copy. §.§ Self-dual Yang-Mills We begin by summarizing the construction of the self-dual solution for gauge theories. In this case, the self-dual condition is given as F_μν=1/2ϵ_μνρλF^ρλ , where F_μν is the YM fields strength. Due to scale invariance, this equations imposes the same constraint in any conformally flat background. Working in lightcone gauge, A_u=0, the solution is given by A_i=0, A_α=Π_αΦ . with Π_α as in (<ref>) and where Φ is a scalar field in the adjoint representation of the gauge group whose equation of motion is _ℝ^4Φ-i[{Φ,Φ}]=0 , [{f,g}]= ε^αβ[Π_α f,Π_β g] , and [ , ] is the standard Lie bracket of the gauge theory. While the same scalar field theory describes the self-dual Yang-Mills solutions in all conformally flat spacetimes, the boundary conditions will be different since Weyl transformations change the nature of the asymptotic structure of the spacetime. For example, the lack of time translations in FLRW will be explicit when calculating boundary correlation functions. Eq. (<ref>) is solved by plane waves Φ=c e^i k· x , where c is a spacetime constant in the adjoint representation, k· x is the flat space inner product, and k satisfies the on-shell condition k_uk_v-k_wk_w̅=0. Using these solutions as external states, the three-point vertex is given by V_SDYM =1/2 X(k_1,k_2)f^a_1a_2a_3, X(k_1,k_2) =k_1uk_2w-k_1wk_2u where f^a_1a_2a_3 are the structure constants of the color algebra and the factor X(k_1,k_2) can be thought of as the structure constants of the kinematic algebra which in this case corresponds to area-preserving diffeomorphisms in the u-w plane. It is also worth highlighting that the Jacobi identity for X(k_1,k_2) is satisfied for off-shell momenta, that is, without imposing k^2=0. This displays one of the simplest realizations of color-kinematics duality. §.§ Double copy The self-dual Yang-Mills and gravity solutions for conformally flat spacetimes that we have described above are given in terms of a scalar field satisfying _ℝ^4Φ-i[{Φ,Φ}]=0 , _ℝ^4ϕ-1/Lζ{{ϕ,ϕ}} _ζ=0 , respectively. 
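The color-kinematics replacement used next relies on the property noted above that X(k_1,k_2) obeys a Jacobi identity without imposing the on-shell condition. A minimal symbolic check of one common form of this identity, X(k_1,k_2)X(k_1+k_2,k_3) + cyclic = 0, is sketched below using sympy; the precise form written here is an assumption about conventions and is meant only as an illustration.

import sympy as sp

# holomorphic components (k_u, k_w) of three generic (off-shell) momenta
k1 = sp.symbols('k1u k1w')
k2 = sp.symbols('k2u k2w')
k3 = sp.symbols('k3u k3w')

def X(p, q):
    # kinematic structure constant X(p, q) = p_u q_w - p_w q_u
    return p[0] * q[1] - p[1] * q[0]

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

jacobi = (X(k1, k2) * X(add(k1, k2), k3)
          + X(k2, k3) * X(add(k2, k3), k1)
          + X(k3, k1) * X(add(k3, k1), k2))
print(sp.expand(jacobi))   # prints 0 for arbitrary momenta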
Given the explicit color-kinematics duality in the self-dual Yang-Mills theory, one can obtain straightforwardly the double copy by exchanging color by kinematics. At the level of the equations of motion, one can perform the replacements Φ→ϕ , i[ ] →1/L ζ{ }_ζ to go from the self-dual Yang-Mills equation in (<ref>) to the self-dual gravity one in (<ref>). This gives an asymmetric double copy since the gravitational equation of motion, Eq. (<ref>), involves both the flat self-dual Poisson bracket and the cosmological Jacobi bracket. Note that throughout the paper, we have set the couplings of both Yang-Mills and gravity to one, which is why we do not have the usual replacement g→κ. While this might look singular in the flat space limit, ζ→ 0, we can recover the flat self-dual equations of motion by keeping L ζ fixed and taking ζ→ 0 as explained in Sec. <ref>. Similarly to the self-dual Yang-Mills case, the self-dual gravity equation in (<ref>) allows for plane wave external states. The Feynman rule for the three-point vertex for such states is then given by V_SDG =1/2 1/L ζ X(k_1,k_2) X^ζ(k_1,k_2) , X^ζ(k_1,k_2) =X(k_1,k_2)-i c_ζ ζ(k_1-k_2)_w . Thus, the double copy replacement for the three-point vertex is f^a_1a_2a_3→1/L ζX^ζ(k_1,k_2) . As before, the flat space limit is taken by keeping L ζ fixed and taking ζ→ 0. We have formulated the double copy by using the equations of motion of both Yang-Mills and gravity with a flat auxiliary metric. We could have equivalently performed a field redefinition of the scalar fields, Φ→ aΦ and ϕ→ aϕ, which changes the kinetic term to a conformally coupled scalar in an FLRW background with scale factor a. Under these rescalings, the equations of motion read √(g_a)(_a-R_a/6)Φ-i a[{a Φ,a Φ}]=0 , √(g_a)(_a-R_a/6)ϕ-a/Lζ{{ a ϕ,a ϕ}} _ζ=0 . The double copy is again given by the color-kinematic replacements in Eq. (<ref>) and Eq. (<ref>). Note that this is not a different double copy in a new background, but simply a field redefinition of the conformally coupled scalars. The double copy that we have formulated here is a double copy for all conformal classes, not a double copy on a specific FLRW background. § COSMOLOGICAL SELF-DUAL SOLUTIONS In the previous section, we derived general solutions to the off-shell self-duality equation in (<ref>). Demanding that they obey the on-shell self-duality equation in (<ref>) where both the Ricci tensor and the Ricci scalar are fixed by the Einstein equations then fixes the background to be either flat or AdS. In this section, we will only impose a constraint on the Ricci scalar, leaving the traceless part of the Ricci tensor unconstrained. Since we focus on cosmological solutions, we will require the Ricci scalar, or equivalently, the trace of the stress-energy tensor to be that of an FLRW metric. In the present case, the self-dual solutions above have R^s.d. =R^FLRW-24(ζ∂_ua-∂_u^2a)∂_w^2ϕ/a^3 , R^FLRW =24∂_u^2a/a^3 , so that imposing R^s.d.=R^FLRW , leads to ζ(u+v)≡∂_u^2a/∂_u a=∂_uH/H+aH , where H is the Hubble parameter. The usual expression for the Hubble parameter is given in terms of cosmic time, defined by t= a τ, so that H=∂_ta/a. In terms of lightcone coordinates, we have H=2(∂_u a)/a^2. The requirement of a self-dual Weyl tensor in Eq. (<ref>) implies that we need to take either a=e^c τ/L , with c= constant or a=(τ/L)^p , with p=0,-1,1 , where the length scale L determines the curvature of the spacetime. In the exponential case, this is the scale factor of a coasting FLRW spacetime. 
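Both branches can be checked directly from the relation ζ=∂_u²a/∂_u a obtained above, with τ=(u+v)/2. A short symbolic sketch (sympy) is given below; it reproduces a constant ζ=c/(2L) for the exponential scale factor and ζ=(p−1)/(u+v) for a power law, so that the requirement ζ=−2/(u+v) or ζ=constant singles out p=−1 and p=1 besides the trivial constant scale factor.

import sympy as sp

u, v, c, L, p = sp.symbols('u v c L p', positive=True)
tau = (u + v) / 2

def zeta(a):
    # zeta = (d_u^2 a) / (d_u a) for a scale factor a(tau), tau = (u + v)/2
    return sp.simplify(sp.diff(a, u, 2) / sp.diff(a, u))

a_exp = sp.exp(c * tau / L)   # coasting branch
a_pow = (tau / L) ** p        # power-law branch

print(zeta(a_exp))                                    # c/(2*L): a constant
print(sp.simplify(zeta(a_pow) - (p - 1) / (u + v)))   # 0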
For the power law, the first two cases correspond to the flat and (A)dS solutions described above, while the third one gives the scale factor of a radiation-dominated Universe. We explore the new cases, radiation-dominated and coasting FLRW, in more detail in the following sections. The source of these cosmological self-dual metrics can be interpreted as a viscous fluid with a stress-energy tensor given by T^=ρ u^μ u^ν+ P γ^- 2 ησ^ + q^(μu^ν) , where ρ,P,u^μ, and γ^ are defined under Eq. (<ref>), η is the shear viscosity , q^μ is the momentum density, and the traceless tensor, σ^ is the shear tensor (or anisotropic stress perturbation) <cit.>. The equation of state parameter is defined as ω=P/ρ . Below, we will examine the properties of the sources for the different cosmological self-dual solutions. One should remember that we are obtaining a non-perturbative result, but it can be helpful for those familiar with cosmology to understand h_ as the perturbations that would arise in standard cosmological perturbation theory <cit.>. In that case, these perturbations are sourced by the deviations from the perfect fluid sources. In the present case, one can choose appropriate boundary conditions on the scalar field and match any desired physical boundary conditions. For example, the boundary conditions can be chosen to have an asymptotically FLRW spacetime <cit.>. We have previously formulated a general double copy prescription for all the conformal classes of cosmological self-dual metrics. We can restrict this procedure to the four special solutions with R^s.d.=R^FLRW, and consider now a double copy on a fixed FLRW background. The color-kinematic replacements for the equations of motion and three-point vertex can be found in Table <ref> and Table <ref> respectively. §.§ Self-dual radiation-dominated FLRW This section analyzes self-dual gravitational solutions sourced by a traceless stress-energy tensor with a time-dependent scale factor. We dub these solutions as self-dual radiation. As mentioned in the previous section, we can find a self-dual solution with a metric ds^2=(u+v/2 L)^2 (dw dw̅-du dv +1/4(Π_(αΠ^ζ_β)ϕ) dx^α dx^β) . When ϕ=0, this reduces to the radiation-dominated FLRW solution. From (<ref>), we see that ζ=0 for a=τ/L. Thus the scalar satisfies the same equation as it does in flat background, (<ref>). After a field redefinition that takes ϕ→ aϕ, the equation of motion can be rewritten in terms of the Laplacian for the FLRW radiation metric as √(g_a)_aϕ - a{{aϕ,aϕ}}=0 , where a=(u+v)/2L and the conformally coupled mass term vanishes since the Ricci scalar vanishes in this background. The energy density and pressure of the source of the self-dual radiation solution are given by ρ=M_Pl^2/L^2 ( 3+3∂_w^2ϕ+(∂_w^2ϕ)^2/a^4 -L(∂_u+∂_v+2(∂_u∂_wϕ)∂_w)∂^2_wϕ/a^3) , P=1/3ρ . Both redshift as expected for a radiation component as long as ∂^2_wϕ≪1. Similarly, one can find that the trace of the stress-energy tensor vanishes, T_μ^μ=0, or equivalently that the equation of state is ω=1/3, which identifies the source as radiation. Note that contrary to the standard perfect fluid sources of FLRW spacetimes, the source of this metric does not have to be homogeneous or isotropic. We provide the full expression for the stress-energy tensor in an ancillary Mathematica file. §.§ Self-dual coasting FLRW An FLRW Universe with a(τ)=e^ℋτ and ℋ≡∂_τ a/a=constant is sourced by a perfect fluid with an equation of state ω=-1/3. 
Going back to Cartesian coordinates and performing a further change of coordinates to cosmic time, a(τ)τ= t, we write the background metric in its better-known form s^2=- t^2+(ℋt)^2𝐱^2 , which describes a coasting FLRW cosmology <cit.>. When written in this form, one can describe whether the Universe is accelerating by looking at the dimensionless parameter q=-a∂^2_ta/(∂_ta)^2 , which is referred to as the deceleration parameter. The Universe is accelerating for q<0, decelerating for q>0, and neither if q=0, as in the metric above. We have found a self-dual solution with the Ricci-scalar of this coasting FLRW spacetime. This solution has a metric of the form ds^2=(e^ℋu+v/2)^2 (dw dw̅-du dv +1/4(Π_(αΠ^ζ_β)ϕ) dx^α dx^β) . Noticing that the function ζ in Eq. (<ref>) can be written in terms of the deceleration parameter as ζ=(1-q)ℋ/2 , we see that in the present case ζ=ℋ/2. From (<ref>), we then find that the scalar field satisfies the equation of motion (∂_u∂_v-∂_w∂_w̅)ϕ+(h_vvh_w̅w̅-h_vw̅^2+(ℋ/2)^2 (∂_wϕ)^2)=0 , which under a field redefinition, ϕ→ aϕ can be rewritten as √(g)(_a+(ℋ/a)^2)ϕ - a{{aϕ,aϕ}}_ζ=ℋ/2=0 , where a=e^Hτ, and the mass term is simply the conformal coupling in the coasting FLRW spacetime. We can compute the source for this solution and find that it is a viscous fluid with energy density ρ=(M_Plℋ)^2/a^2(3+Θ̂_exp.∂_w^2ϕ) , where Θ̂_exp. is a differential operator given by Θ̂_exp.=1+ℋ^-1∂_w∂_uϕ∂_w , and the equation of state of the fluid is ω=-1/3+Θ̂_exp.∂_w^2ϕ/6+Θ̂_exp.∂_w^2ϕ . For ∂_w^2ϕ≪1, this approaches the usual equation of state of a coasting FLRW spacetime. In other words, it will have a slight acceleration or deceleration depending on the sign of the second term in Eq. (<ref>). The full expression for the stress-energy tensor is found in the ancillary Mathematica file. §.§ Approximately FLRW self-dual metrics While there are only four cases of self-dual cosmological solution with a Ricci scalar fixed to be that of an FLRW metric, there are an infinite set of metrics with self-dual Weyl tensor which fall into three conformal classes mentioned in Section <ref>. Note that the self-dual solution in radiation domination background can be obtained by performing a Weyl transformation of the one in flat background so it belongs the same conformal class. Hence, the three conformal classes can be obtained by performing Weyl transformations of the flat, dS, and coasting solutions constructed in the previous section. In this section we will construct other representatives of these conformal classes whose stress-energy tensor approximately behaves as an FLRW one, as long as the scalar field satisfies a simple additional constraint. Given a solution to the off-shell self-duality equation in (<ref>), one can obtain another solution by performing a Weyl transformation: g̃_μν=Ω^2(τ) g_μν . The new metric g̃_μν is sourced by a stress-energy tensor given by T̃_μν/M_Pl^2= T_μν/M_Pl^2-2∇_μ∇_νΩ/Ω+4∇_μΩ∇_νΩ/Ω^2 -g_μν(-2∇_μ∇^μΩ/Ω+4∇_μΩ∇^μΩ/Ω^2), where T_μν is the stress-energy tensor sourcing g_μν and ∇_μ is its covariant derivative. The trace of the stress-energy tensor is T̃=1/Ω^2(6M_Pl^2 ∇^2 Ω/Ω+T) , where T=T_μ^μ. Since this new metric is no longer a homogeneous isotropic metric, the metric is not expected to be sourced by a perfect fluid but rather by a viscous fluid with a non-zero momentum flux vector and shear tensor, as in Eq. (<ref>). 
The energy density and pressure of the fluid are ρ̃=T̃_μνũ^μũ^ν , P̃=1/3T̃_μνγ̃^μν where ũ^μ is a unit timelike vector with respect to g̃_μν and the metric of the surface perpendicular to u^μ is γ̃_μν=g̃_μν+ũ_μũ_ν. We will now perform Weyl transformations of self-dual solutions in flat, dS, and non-accelerated FLRW backgrounds such that the resulting metric takes the following general form: s^2=(u+v/2 L)^2p(- u v+ ww̅+1/4(Π_(αΠ^ζ_β)ϕ) dx^α dx^β) , which describes well-known power law cosmologies when ϕ=0, see Appendix <ref>. This metric can be obtained by applying a Weyl transformation with Ω=(τ/L)^p to the solution in flat background, Ω=(τ/L)^p+1 to the one in dS background, and Ω=e^-ℋτ(τ/L)^p to the one in coasting background. The properties of the resulting stress-energy tensors are described in detail in Appendix <ref>, and their energy density and equation of state take the schematic form ρ =ρ_FLRW+Θ̂_ζ ∂^2_wϕ , ω =ω_FLRW+Γ̂_ζ ∂^2_wϕ , where ρ_FLRW and ω_FLRW are the energy density and equation of state of the source of the metric in Eq. (<ref>) with ϕ=0, and Θ̂_ζ and Γ̂_ζ are differential operators depending on the conformal class we are working with. Note that if we require that ∂^2_wϕ≪1, these solutions have sources that approximate those of the corresponding FLRW metric. This case is closer to what happens in cosmological perturbation theory; the equation of state remains close to the FLRW one, but it is not forced to remain the same. Strictly imposing that ∂^2_wϕ=0, the scalar can be written as ϕ=ϕ_1(u,v,w̅)+w ϕ_2(u,v,w̅) , where the equation of motion now reduces to √(g_a)(_a-R_a/6)ϕ_1+a^2∂_w̅ϕ_2+a/ζ(∂_u(aϕ_2))^2-c_ζΦ_2∂_u(aϕ_2)=0 , √(g_a)(_a-R_a/6)ϕ_2=0 . We can see that ϕ_2 is a free scalar whose u and w̅ derivatives source the scalar ϕ_1. As commented above, if we start with the self-dual solution in flat background and apply a Weyl transformation with Ω=τ/L, this gives the self-dual solution in radiation-dominated background. Similarly, if we Weyl transform the dS or non-accelerated self-dual solutions to flat background (such that the resulting metric takes the form in (<ref>) with p=0), the resulting stress tensor turns out to be traceless: T̃∝6M_Pl^2 a∇^2(1/a)+T=M_Pl^2 R+T=0 , where R is the Ricci scalar of g_μν, and we imposed the Einstein equations in the last equality. On the other hand, the energy density does not evolve as the usual radiation-dominated FLRW (ρ∼τ^-4). Schematically, if we start with the dS self-dual solution and Weyl transform to flat background the resulting stress-energy tensor has ρ∼τ^-2+Θ̂_ζ∂^2_wϕ, and if we start with the coasting self-dual solution and Weyl transform to flat background we find that ρ=constant+Θ̂_ζ∂^2_wϕ. The exact expressions are given in Appendix <ref>. § CONCLUSIONS AND DISCUSSION In this paper, we show that an infinite set of metrics with a self-dual Weyl tensor can be described using conformally coupled scalars with cubic interactions containing Jacobi brackets. These metrics look like a time-dependent deformation of the well-known solution for self-dual gravity in flat space. In particular, we find three distinct conformal classes of self-dual metrics: flat, (A)dS, and coasting FLRW. We also present a general double copy prescription that maps self-dual Yang-Mills in an FLRW background to these self-dual cosmological solutions and show that they exhibit a deformed w_1+∞ algebra, generalising the one found for AdS in <cit.>. 
Interestingly, if we demand that the Ricci scalar of these self-dual solutions is equal to that of an FLRW metric, there are only four possible backgrounds for which this is possible: flat, dS, radiation-domination, and coasting FLRW. More general solutions can then be obtained by performing Weyl transformations of these solutions. While the solution corresponding to radiation domination can be obtained from a Weyl transformation of the flat solution, in general performing such Weyl transformations will lead to solutions whose Ricci tensor is not that of an FLRW metric. On the other hand, we find that the stress tensor of the resulting solutions corresponds to viscous fluids whose equations of state become FLRW-like in the limit ∂_w^2ϕ≪1. Interestingly, we find that Weyl-transforming the dS and non-accelerated self-dual solutions to the flat background results in a traceless stress tensor, which, therefore, describes a fluid whose equation of state is that of radiation. The existence of self-dual cosmological solutions and their intriguing Jacobi brackets suggests many future directions. An immediate question is how the color-kinematics duality at the level of equations of motion translates to correlation functions in FLRW spacetimes. While all tree-level amplitudes of self-dual Yang-Mills and self-dual gravity vanish beyond three points, this will not be the case for the curved background we consider because they are time dependent so energy is not conserved. Nevertheless, we expect the correlation functions in these backgrounds to be strongly constrained by symmetry. Finding an underlying geometric interpretation of the Jacobi brackets and recovering an infinite hierarchy of asymptotic symmetries, along the lines of <cit.>, would also be another important direction. Finally, it would be interesting to consider Moyal deformations of the scalar theories discussed in this paper. In a flat background, such deformations give rise to chiral higher spin theories <cit.>, so doing so in the present context may give higher spin theories in cosmological backgrounds, which may be of interest for holography <cit.>. § ACKNOWLEDGMENTS MCG work is supported by the Imperial College Research Fellowship. A.L. and S.N. are supported by an STFC Consolidated Grant ST/T000708/1. § POWER LAW COSMOLOGIES Power law cosmologies have the following metric: s^2=(u+v/2 L)^2p(- u v+ ww̅) , where a(u+v)=(u+v/2 L)^p , and are sourced by a perfect fluid with stress-energy tensor T^=(ρ+P) u^μ u^ν+ P g^ , with an equation of state parameter ω=2-p/3p . The metric, in this coordinates, is a decelerating (for p>0) or accelerating (for p<0) FLRW spacetime. We approach the flat space limit as |u+v|→∞ in both cases. In the decelerating case, the spacetime has a null infinity and its Penrose diagram is the upper half of the Minkowski one with a singularity at u+v→0 corresponding to the Big Bang. These spacetimes include the matter-domination with p=2 and radiation-domination with p=1. On the other hand, the accelerating FLRW spacetimes have a spatial boundary at infinity, just like de Sitter. Taking u+v→ -∞ now brings us to the far past; hence, the spacetime looks flat. Another relevant case, not included above, has a power law scale factor in cosmic time t, defined from a(τ)τ= t, but becomes exponential in conformal time. This is the case of an FLRW spacetime with no acceleration and ω=-1/3. Its Penrose diagram is the same as the Minkowski one. Details on the conformal structure of these spacetimes can be found in <cit.>. 
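The equation of state quoted in this appendix can also be recovered from the perfect-fluid expressions ρ=3(a'/a²)² and P=ρ/3−2a''/a³ given earlier in the text (overall M_Pl² factors drop out of the ratio). A short sympy sketch:

import sympy as sp

tau, L, p = sp.symbols('tau L p', positive=True)
a = (tau / L) ** p   # power-law scale factor in conformal time

rho = 3 * (sp.diff(a, tau) / a**2) ** 2          # energy density (prefactors dropped)
P = rho / 3 - 2 * sp.diff(a, tau, 2) / a**3      # pressure
omega = sp.simplify(P / rho)

print(omega)                                     # equals (2 - p)/(3*p)
print(sp.simplify(omega - (2 - p) / (3 * p)))    # 0

For p=1 this gives ω=1/3 (radiation) and for p=2 it gives ω=0 (matter), consistent with the examples mentioned above.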
§ SELF-DUAL OFF-SHELL WEYL TENSOR In this Appendix, we show how to solve the two independent self-dual equations that are non-linear in the scalar field. We start with the simplest one, which can be written as a^-2( C_vuw̅v-1/2ϵ_vu^ηλ C_ηλw̅v)=1/2ζ(u+v) ∂_w eom , where eom is given by Eq. (<ref>). The last self-dual equation is Ψ=a^-2( C_w̅vw̅v-1/2ϵ_w̅v^ηλ C_ηλw̅v) . Since we have fixed the equation of motion of ϕ, to obtain a self-dual solution, we require that Eq. (<ref>) can be written entirely in terms of the equation of motion and its derivatives. When ζ=0 we have that Eq. (<ref>) is Ψ|_ζ=0=(∂_w∂_w̅-∂_u∂_v)(eom)-∂_u^2ϕ∂_w^2(eom) -∂_w^2ϕ∂_u^2(eom)+2∂_w∂_uϕ∂_w∂_u(eom) . To find a solution for a general scale factor we look at each derivative contribution separately: Ψ= Ψ_0+ζΨ_1+ (ζ^2 Ψ_2 + ζ' Ψ_3) +(ζ^3 Ψ_4+ζζ' Ψ_5+ζ”Ψ_6) +(ζ^2ζ'Ψ_7+ζζ”Ψ_8+(ζ')^2Ψ_9+ζ”' Ψ_10) , where '≡∂_u+v. Writing Ψ_0=Ψ|_ζ=0 with the equation of motion for ζ≠0 introduces new terms proportional to ζ. We can now find an expression for Ψ_1 in terms of the equation of motion, which will also introduce new terms proportional to ζ, but this won't affect the previous solutions since the new terms arise at a higher mass dimension. Thus, we can solve order by order in the mass dimension of the terms with ζ to find a complete solution. Proceeding this way, we find that Ψ_1= -2∂_v (eom)-2 ∂_uϕ∂_w^2(eom)+2∂_u∂_wϕ∂_w (eom) -2∂_w^2ϕ∂_u(eom)+2∂_w ϕ∂_u∂_w(eom) +ζ'/ζ^2(∂_u (eom)+∂_v (eom)) , But the solution breaks at the next order where there is no expression for Ψ_2 and Ψ_3 in terms of the equation of motion for a general ζ. The obstruction arises due to the following term Ψ^eom⊃ -ζ∂_u(∂_u ζ/ζ^2)(∂_u^2ϕ+∂_v^2ϕ) , which cannot be written in terms of the equation of motion and necessarily appears when rewriting lower-order contributions in terms of the equation of motion. To find a solution, we need to fix ζ so that this term vanishes. This is the case for ζ=-2/(u+v) , or ζ=1/L , where L is a constant length scale. As mentioned earlier, when ζ=0, Ψ is given by Eq. (<ref>). When ζ=-2/(u+v) Ψ^ζ=-2/(u+v)=Ψ|_ζ=0 +ζΨ_1 +ζ”'/ζ'(∂_wϕ∂_w(eom)) +ζ'^2/ζ^2(6∂_wϕ∂_w(eom)+2ϕ∂_w^2(eom)+(6∂_w^2ϕ-1/2)(eom)) -ζ”/ζ(5∂_wϕ∂_w(eom)+2ϕ∂_w^2(eom)+ 5∂_w^2ϕ(eom)) . Meanwhile, choosing ζ=1/L≠0 we find Ψ^ζ=c=Ψ|_ζ=0 +ζΨ_1 +ζ^2((∂_wϕ∂_w(eom)) +6∂_wϕ∂_w(eom)+2ϕ∂_w^2(eom)+6∂_w^2ϕ(eom) -5∂_wϕ∂_w(eom)-2ϕ∂_w^2(eom)- 6∂_w^2ϕ(eom)) . § JACOBI BRACKET OF COSMOLOGICAL SELF-DUAL SOLUTIONS The bracket in (<ref>) corresponds to a Jacobi bracket. In this Appendix, we will explicitly show that this is the case. For convenience, we reproduce the component-wise definition of the Jacobi bracket in (<ref>) below: D(f, g)=P^μν∂_μ f ∂_ν g +f X^μ∂_μ g - g X^μ∂_μ f , We require D(f,g) to concide with our bracket { f,g} _ζ={ f,g} +c_ζ ζ(u+v) (f∂_wg-g∂_wf) , with the first term defined in (<ref>). We can then read off P^μν={ -1, μ=u, ν=w 1, μ=w, ν=u 0, otherwise. and X^μ={ c_ζ ζ(u+v), μ=w 0, otherwise. The Jacobi bracket is required to satisfy the conditions in (<ref>): [P, P]=2 X ∧ P , [X, P]=0 . They ensure that the Jacobi identity is satisfied. Thus we could check them indirectly, by looking at the Jacobi identity, but let us write them explicity, as a sanity check that we have correctly identified P^μν and X^μ. 
The Schouten-Nijenhuis bracket between an m-tensor A and a p-tensor B is given by: [A,B]^μ_1...μ_m+p=1(m-1)!p!ε^μ_1...μ_m+p_ν_2...ν_mρ_1...ρ_p A^σν_2...ν_m∂ B^ρ_1...ρ_p/∂ x^σ +1m!(p-1)!ε^μ_1...μ_m+p_ν_1...ν_mρ_2...ρ_pB^σρ_2...ρ_p∂ A^ν_1...ν_m/∂ x^σ Since all the components of P^μν are constant, we immediately have [P,P]=0 We then explicitly have 2 X∧ P= 4 c_ζ ζ(u+v) ∂/∂ w∧∂/∂ w∧∂/∂ u =0 , thus the first condition is immediately satisfied. An alternative way to see the above is by noting that P and X can be seen as tensors in a two-dimensional space spanned by u and w (P is simply the epsilon tensor in this space), where we treat v and w̅ as parameters. Then the first equation in (<ref>) is trivially satisfied, as both sides vanish, since they are three-forms. For the second equality in (<ref>) we can write explicitly [X,P]^μ_2μ_3=12ε^μ_2μ_3_ρ_1ρ_2X^σ∂ P^ρ_1ρ_2/∂ x^σ+ε^μ_2μ_3_νρ_2P^σρ_2∂ X^ν/∂ x^σ =c_ζ ε_wu^μ_2μ_3∂ζ(u+v)/∂ w=0 where we additionally made use of the fact that the non-zero coefficient of X is independent of w. § PROPERTIES OF CONFORMALLY SELF-DUAL METRICS We proceed to show explicitly the expressions for the energy density and equation of state of the power law cosmological self-dual solutions. Conformally flat self-dual For conformally flat self-dual solutions, the energy density is ρ=M_Pl^2/L^2a^-2(1+p)/pp(3p+(3p+Θ̂_flat)∂^2_wϕ) , where a=(u+v)/(2 L) and the operator Θ̂_flat is Θ̂_flat=1+p/2∂^2_wϕ-τ(∂_τ+2(∂_u∂_wϕ)∂_w) , and the equation of state is given by ω=P̃/ρ̃=2-p/3p+2(p-1)/3pΘ̂_flat∂^2_wϕ/3p+(3p+Θ̂_flat)∂^2_wϕ . Thus, we obtain the usual FLRW equation of state when the second term in the equation above vanishes. For a generic ϕ, this is only satisfied if p=1, which corresponds to the case of the self-dual radiation solution discussed in Section <ref>. Alternatively, as long as the second term is small, the solution has an equation of state that approximates the corresponding perfect fluid. One possibility is to have ∂^2_wϕ≪1, in which case the equation of motion of the scalar reduces to □_ℝ^4ϕ-(∂_u∂_wϕ)^2≃0. Conformally dS self-dual In the case of metrics conformal to the dS self-dual solution, we use Ω=(τ/L)^p+1 in Eq. (<ref>) and find that the energy density is given by ρ=M_Pl^2/L^2a^-2(1+p)/p(3p^2+(1+p)((1+3p)+Θ̂_dS)∂^2_wϕ) , where the operator Θ̂_dS is Θ̂_dS=1/2p∂^2_wϕ+2(∂_wϕ)∂_w-τ(∂_τ+2(∂_u∂_wϕ)∂_w) , and the equation of state reads ω=1/3(3p(2-p)/(1+p)+((1-3p)+Θ̂_dS)∂^2_wϕ/3p^2/(1+p)+((1+3p)+Θ̂_dS)∂^2_wϕ) . From this, we can see that when p=0, we recover the equation of state of radiation. For the case p≠0, we can rewrite the equation of state as ω=2-p/3p-2((1+2p)+(1-p)Θ̂_dS)∂^2_wϕ/3p(3p^2/1+p+((1+3p)+Θ̂_dS))∂^2_wϕ) . which shows that, just like in the conformally flat self-dual case, we can approach the FLRW equation of state if ∂^2_wϕ≪1. Conformally coasting self-dual In this last case, we take Ω=e^-τ/Lτ^p in Eq. (<ref>) to obtain power law scale factors in conformal time. With this choice, we find that the energy density is given by ρ=M_Pl^2/L^2a^-2(1+p)/p(3p^2+(3p^2+Θ̂_non-a.)∂^2_wϕ) , where the operator Θ̂_non-a. is Θ̂_non-a.= p(1+p)∂^2_wϕ /2 -pτ(ℋ(2+∂_wϕ∂_w)+∂_τ+2(∂_u∂_wϕ)∂_w) τ^2(-ℋ^2∂^2_wϕ/2+ℋ(∂_τ+(∂_u∂_wϕ)∂_w)) . Meanwhile, the equation of state can be written as ω=3p(2-p)+(3p(2-p)+6 p ℋτ+Θ̂_non-a.)∂^2_wϕ/3(3p^2+(3p^2+Θ̂_non-a.)∂^2_wϕ) , which, as in the conformally dS self-dual case, reduces to 1/3 for p=0. 
When p≠0, we can write ω=2-p/3p+1/3((6 p ℋ t+2(p-1)/pΘ̂_non-a.)∂^2_wϕ/3p^2+(3p^2+Θ̂_non-a.)∂^2_wϕ) , such that, like in all the previous cases, when ∂^2_wϕ≪1, the equation of state approaches that of the usual FLRW spacetime.
http://arxiv.org/abs/2407.12515v1
20240717120523
Intrinsic mixed-dimensional beam-shell-solid couplings in linear Cosserat continua via tangential differential calculus
[ "Adam Sky", "Jack S. Hale", "Andreas Zilian", "Stéphane P. A. Bordas", "Patrizio Neff" ]
math.NA
[ "math.NA", "cs.CE", "cs.NA", "math-ph", "math.MP" ]
Inertial Methods with Viscous and Hessian driven Damping for Non-Convex Optimization Rodrigo Maulen-Soto, Jalal Fadili, Peter Ochs July 22, 2024[The insight and motivation for the study of inertial methods with viscous and Hessian driven damping came from the inspiring collaboration with our beloved friend and colleague Hedy Attouch before his unfortunate recent departure. We hope this paper is a valuable step in honoring his legacy. ] ======================================================================================================================================================================================================================================================================================================================== § ABSTRACT We present an approach to the coupling of mixed-dimensional continua by employing the mathematically enriched linear Cosserat micropolar model. The kinematical reduction of the model to lower dimensional domains leaves its fundamental degrees of freedom intact. Consequently, the degrees of freedom intrinsically agree even at the interface with a domain of a different dimensionality. Thus, this approach circumvents the need for intermediate finite elements or mortar methods. We introduce the derivations of all models of various dimensions using tangential differential calculus. The coupling itself is then achieved by defining a mixed-dimensional action functional with consistent Sobolev trace operators. Finally, we present numerical examples involving a three-dimensional silicone-rubber block reinforced with a curved graphite shell on its lower surface, a three-dimensional silver block reinforced with a graphite plate and beams, and lastly, intersecting silver shells reinforced with graphite beams. Key words: mixed-dimensional coupling, Cosserat micropolar continua, shell elements, plate elements, beam elements, volume elements, finite element method. § INTRODUCTION The design of structural parts in engineering practices often entails combinations of physically large and small components, for example in sandwich structures <cit.>, compound composites <cit.>, or fibre-reinforced materials <cit.>. From a modelling point of view, it is often possible to consider these joint components as a single continuous body, and thus model its behaviour through a single continuum theory. However, from a computational perspective, the latter approach may prove inefficient or even unfeasible for a given computational power, as it requires the discretisation of the domain to bridge the scale-gap between the small scale components and the large scale components. This task is difficult in terms of both, generating satisfactory finite element meshes for high-fidelity simulations <cit.> (see also <ref>), as well as in the required computational effort that arises from the very fine mesh which is needed for scale-transition. Consequently, it is commonplace to regard designs containing small and large parts as mixed-dimensional continua. Let a d-dimensional body ⊂^d represent a large scale component, the smaller scale components can be modelled as continua on its k-codimensions Ξ⊂^d - k. This approach allows for coarser discretisations of both the large and small scale components, while compensating for inaccuracies in the behaviour of the small scale components with idealised models, which are fine-tuned for the expected behaviour in the small scale. 
However, since multiple continuum models are now used to model a single structural part, it becomes necessary to couple them in a single computational framework. The latter often proves challenging, as the differing continuum models may entail different types of degrees of freedom. A common example for the aforementioned difficulty and the focus of this work are kinematical couplings. Namely, in linear elasticity one is often compelled to couple the kinematical displacement field :⊂^3 →^3 of a three-dimensional body ⊂^3 with kinematical fields of codimensional models, such as shells or beams. For linear elasticity, the shell models are the Cosserat <cit.>, and the classical Naghdi <cit.> and Koiter shells <cit.>, or their flat correspondents, the Reissner–Mindlin <cit.> and Kirchhoff–Love plates <cit.>. The beam models are the well-known Timoshenko–Ehrenfest <cit.> and Euler–Bernoulli <cit.> beams, and in the geometrically nonlinear case also the Cosserat beam <cit.>, possibly with enhanced Saint-Venant torsion kinematics <cit.>. It is precisely in this scenario that the coupling becomes difficult, as the shell and beam models introduce rotation degrees of freedom :Ξ⊂^3-k→^3, which the three-dimensional model simply does not have. Thus, the coupling requires some additional treatment, e.g., using intermediate finite elements <cit.>, static condensation <cit.>, Nitsche's method <cit.>, or mortar approaches <cit.>. Specifically in mechanical continuum models it is clear that the difficulty in the coupling stems from the lacking agreement in degrees of freedom. As such, this work considers an alternative approach with an enriched continuum model <cit.>, such that degrees of freedom intrinsically agree for all possible dimensions. Namely, this work employs the Cosserat micropolar theory as its starting point. The Cosserat micropolar continuum <cit.>, a concept introduced by the Cosserat brothers in 1909 <cit.>, represents a generalised continuum model that extends beyond the classical Cauchy continuum theory. Unlike the Cauchy continuum, which considers material points characterised solely by their position, the Cosserat continuum also takes into account possible independent rotations of each material point, allowing for the introduction of more intricate kinematics. Effectively, the Cosserat theory turns each material point into a non-deformable solid microbody, implying the existence of a local micro-moment of inertia. Another consequence is the introduction of couple-forces M: ⊂^3 →(3), relating to higher order tractions. Concisely, the Cosserat model naturally encompasses both a displacement field :⊂^3 →^3 and a rotation field : ⊂^3 →^3, even in three-dimensional bodies ⊂^3. As a result, any dimensional reduction of the model to shells or beams leaves both the translational and rotational degrees of freedom intact. Therefore, the model is ideal for mixed-dimensional designs, as the degrees of freedom intrinsically agree even on interfaces of differing dimensionality |_Ξ = |_Ξ = 0. In fact, the coupling procedure is reduced to combining the bulk energy functional with lower dimensional energy functionals based on Sobolev trace operators <cit.> applied to the three-dimensional fields on codimensional domains I_(,) + I_(_,_) + I_(_,_). The derivation of reduced dimensional models is an involved procedure, especially if the lower dimensional domain is curved. 
This is the case since derivatives on curved domains are naturally expressed using differential geometry, implying the usage co- and contravariant derivatives as well as Christoffel symbols. This makes the derivation process difficult to interpret, and the translation to a finite element software challenging. Alternatively, a recent approach called tangential differential calculus (TDC) <cit.> allows to circumvent the need for curvilinear coordinates and Christoffel symbols by introducing equivalent differentiation operators based on projections <cit.>. To clarify, a gradient on a curved surface can be computed as the tangential projection of the gradient with respect to the Cartesian coordinates onto the surface ∇_t λ = ∇λ. If the surface is parameterised r:ω⊂^2 →⊂^3, then the tangential gradient ∇_t λ and the gradient given by the chain-rule ∇λ are equivalent. We give a short summary of the relation of tangential differential calculus to classical differential geometry via tensor calculus <cit.> in <ref>. While tangential differential calculus can also be used to introduce the geometry of the domain implicitly, e.g., with a level-set function <cit.>, we employ this framework for its advantageous applicability to automated solvers of partial differential equations. Specifically, the reinterpretation of co- and contravariant gradients as projected gradients of the global Cartesian system allows to directly define energy functionals or variational forms in the Cartesian sense, making it straightforward to script these functionals in frameworks like NGSolve <cit.>, FEniCS <cit.>, or Firedrake <cit.>. In this work we introduce the linear isotropic micropolar Cosserat model in three dimensions, and subsequently reduce it to shell-, plate- and beam models by means of kinematical assumptions and integration. The models all entail the same kinematical degrees of freedom, namely, displacements and rotations , and thus intrinsically agree on interfaces of mixed dimensionality Ξ⊂^d-k. The coupling itself is then achieved by restricting the bulk fields to codimensional domains using consistent Sobolev trace operators, yielding a mixed-dimensional action functional I(,) - L(,) →min {,} , I(, ) = I_(,) + I_(_,_) + I_(_,_) , L(,) = L_(, ) + L_(_, _) + L_(_, _) . The derivations in this work are done using tangential differential calculus, thus allowing for direct usage in automated solvers of partial differential equations. We employ NGSolve[www.ngsolve.org] to compute numerical solutions of three coupling examples, and finally, we discuss conclusions and outlook. We emphasise that the resulting reduced models in this work do not coincide with the standard linear Naghdi shell <cit.> or the Reissner–Mindlin plate <cit.> formulations, nor with the traditional definition of the Cosserat rod <cit.> in the case of the beam formulations. This is because the aforementioned models are derived from the energy functional of the corresponding Cauchy continuum theory, as opposed to the energy functional of the Cosserat micropolar continuum that we employ here. However, all the models are referred to as Cosserat type by virtue of their descendance from the three-dimensional Cosserat model. §.§ Notation The following notation is used throughout this work. Exceptions to these rules are made clear in the precise context. * Vectors are defined as bold lower-case letters v, ξ∈^d. * Second order tensors are denoted with bold capital letters T∈^d × d. 
* Higher-order tensors are designated by the blackboard-bold format ℂ∈^d × d × d …. * We denote the Cartesian basis as {e_1, e_2, e_3}. * Summation over indices follows the standard rule of repeating indices. Latin indices represent summation over the full dimension, whereas Greek indices define summation over the co-dimension. * The angle-brackets are used to define scalar products of arbitrary dimensions vu = v_i u_i, TF = T_ijF_ij. * The matrix product is used to indicate all partial-contractions between a higher-order and a lower-order tensor Tv = T_ij v_j e_i, ℂT = C_ijklT_kle_i ⊗e_j. * The second-order identity tensor is defined via , such that v = v. * The trace operator reads T = T. * A general physical body of some arbitrary dimension d is denoted with ⊂^d. * Volumes, surfaces and curves of the physical domain are identified via , and , respectively. Their counterparts on the reference domain are Ω⊂^3, ω⊂^2 and γ⊂. * Tangential and normal vectors on the physical domain are designated by t and n, respectively. On reference domain the respective vectors read τ and ν. * We define the constant space of skew-symmetric second order tensors as (d) = {T∈^d × d | T = -T^T}. * The space is associated with the operators T = (1/2)(T - T^T) ∈(d), v = v×∈(3), and its inverse (v) = v. * The nabla operator is defined as ∇ = e_i ∂_i. * The left-gradient is given via ∇, such that ∇λ = ∇⊗λ. * The right-gradient is defined for vectors and higher order tensors via , such that v = v⊗∇. * We define the vectorial divergence as v = ∇v = (v). * The tensor divergence is given by T = T∇, implying a single contraction acting row-wise * The vectorial curl operator reads v = ∇×v * For tensors the operator is given by T = -T×∇, acting row-wise. * The jump operator is denoted with ·. Further, we introduce the following Hilbert spaces and their respective norms () = { u : →ℝ | u _ < ∞} , u_^2 = ∫_ u^2 , () = { u ∈() | ∇ u ∈ [()]^d } , u^2_ = u^2_ + ∇ u^2_ , , = {∈ [()]^d | ∈ [()]^d} , ^2_ = ^2_ + ^2_ , where ⊂^d. § THE LINEAR COSSERAT MICROPOLAR MODEL The internal energy of a three-dimensional Cosserat medium is given in dislocation form (Curl instead of gradient) <cit.> by the functional I_(, ) = 12∫_ + 2 ( - )^2 + ^2 , where :⊂^3 →^3 is the displacement of the body ⊂^3. Its infinitesimal micro-rotation :⊂^3 →(3) is given by the skew-symmetric tensor = = × , = = -12𝔼 , where 𝔼∈^3 × 3 × 3 is the third-order Levi-Civita permutation tensor. The infinitesimal micro-rotation tensor is characterised by the axial vector :⊂^3 →^3. Consequently, its Curl can be written as = () = () - ()^T , using Nye's formula <cit.>. The material tensor :^3 × 3→^3 × 3 is positive definite, and assumed to be isotropic hereinafter T = 2T + (T) , T∈^3 × 3 , where > 0 and ≥ 0 are Lamé constants. The coupling between the infinitesimal macro-rotation and the infinitesimal micro-rotation is governed by the Cosserat couple modulus > 0. The characteristic length-scale parameter is denoted by > 0, and :^3 × 3→^3 × 3 is a positive definite isotropic fourth-order tensor of dimensionless weights T = a_1 T + a_2 T + a_33 (T) , a_1,a_2,a_3 ≥ 0 , T∈^3 × 3 . The external work for the medium is given by L_(,) = ∫_f + M , where f:⊂^3 →^3 are the body forces and M:⊂^3 →(3) are couple-forces. For simplicity, we do not consider external fluxes in this work, such that any Neumann boundary is always homogeneous. The balance of energy is expressed as the minimisation problem I_(,) - L_(,) →min {,} , {,} = _, (I_ - L_) . 
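As a brief aside before taking variations: Nye's formula invoked above to rewrite the Curl of the infinitesimal micro-rotation can be checked symbolically. The following sympy sketch verifies it for an arbitrary, purely illustrative axial-vector field, using the row-wise curl convention of this work.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# an arbitrary smooth axial-vector field (purely illustrative)
theta = sp.Matrix([sp.sin(x)*y, x*z**2, sp.exp(y)*z])

# anti(theta): skew-symmetric tensor with  anti(theta) v = theta x v
A = sp.Matrix([[0, -theta[2], theta[1]],
               [theta[2], 0, -theta[0]],
               [-theta[1], theta[0], 0]])

def row_curl(T):
    """Row-wise curl of a 3x3 tensor field, (Curl T)_ij = eps_jkl d_k T_il."""
    C = sp.zeros(3, 3)
    for i in range(3):
        for j in range(3):
            C[i, j] = sum(sp.LeviCivita(j, k, l)*sp.diff(T[i, l], X[k])
                          for k in range(3) for l in range(3))
    return sp.simplify(C)

D_theta = theta.jacobian([x, y, z])          # right gradient, (D theta)_ij = d_j theta_i
nye = D_theta.trace()*sp.eye(3) - D_theta.T  # div(theta) I - (D theta)^T

assert sp.simplify(row_curl(A) - nye) == sp.zeros(3, 3)
print("Nye's formula holds for this field.")
```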
In order to find minimisers {,} we take variations with respect to the displacements and infinitesimal rotations. The variation with respect to the displacements yields δ_ (I_-L_) = ∫_δ + 2 δ( - ) - δf = 0 . The variation with respect to the infinitesimal rotation reads δ_ (I_-L_) = ∫_ -2 δ( - ) + ^2 δ -δM = 0 . Combining the two results in the weak form ∫_δ + 2 (δ - δ)( - ) + ^2 δ = ∫_δf + δM . Now, by splitting the boundary between Dirichlet and Neumann ∂ = _D ∪_N and applying partial integration we obtain the boundary value problem -[ + 2 ( - )] = f in , -2( - ) + ^2 () = M in , [ + 2 ( - )] n = 0 on _N^ , -^2 () (n) = 0 on _N^ , = on _D^ , (n) = (n) on _D^ . Accordingly, the non-symmetric stress tensor is σ = + 2( - ) . The domain of the Cosserat model with boundary conditions is depicted in <ref>. We note that setting = 0 decouples the equations, leaving ^2 () = M as a standalone Curl-Curl problem. In contrast, the coefficient is redundant in the geometrically exact Cosserat model <cit.>, and can be set to zero = 0 in the related relaxed micromorphic model <cit.> without decoupling the kinematical fields. Hereinafter, we employ material norms for better readability of the shell and beam models, and compactness of notation. Namely, we define the norms _^2 = = 2^2 + [()]^2 , and () - ()^T_^2 = = a_1 ^2 + a_2 ^2 + a_33 ()^2 , where we used the identity <ref> for skew-symmetric tensors. Thus, the internal energy functional reads I_(, ) = 12∫__^2 + 2 ( - )^2 + ^2 () - ()^T_^2 . The functional is composed of three energy densities pertaining to the displacement, rotational coupling, and dislocation, respectively. §.§ Linear isotropic Cauchy materials The Cosserat micropolar model intrinsically incorporates the classical Navier–Cauchy linear elasticity model into the structure of its energy functional. Namely, a highly homogeneous Cauchy medium is characterised by a vanishing characteristic length-scale parameter → 0, and the absence of couple-forces M = 0 <cit.>. Thus, <ref> reads - 2( - ) = 0 > 0 = , such that the internal energy functional yields 12∫__^2 + 2 ( - )^2 + ^2 () - ()^T_^2 ↦ 12∫___M^2 , where _M: ^3 × 3→^3 × 3 is the material tensor of linear isotopic Cauchy materials _MT = 2T + (T) , T∈^3 × 3 , with the Lamé constants > 0 and ≥ 0. In summary, it suffices to set → 0, M = 0, > 0 and = _M in order to obtain the behaviour of the Navier–Cauchy model from the Cosserat model. § LINEAR ISOTROPIC COSSERAT SHELLS IN THREE DIMENSIONS Let the domain of a thin curved shell be given by the mapping x:Ω⊂^3 →⊂^3 , Ω = ω× [-h/2,h/2] , = × [-h/2,h/2] , where h ≪ || is the thickness of the shell, we can write the mapping explicitly as x(ξ,η,ζ) = r + ζn , r = r(ξ,η) , ζ∈ [-h/2,/h/2] , such that r maps the middle surface of the shell r:ω⊂^2 →⊂^3, and n = n(ξ,η) is the unit normal vector of the surface, see <ref>. With the surface normal we can define corresponding tangential and normal projection operators = - n⊗n , = n⊗n , :^3 → , :^3 →^3 ∖ , where is the space of the tangential vectors of the surface . Using the latter we split the symmetrised gradient of the displacements between its tangential and normal components = ( + ) ( + ) = () + () + () + () . Consequently, the first energy density component in <ref> can be expressed as _^2 = () + ()_^2 + 4 ()^2 , where () = () due to symmetry. Analogously, we split the infinitesimal rotation tensor = ( + ) ( + ) = + + , where = = 0 due to its skew-symmetry : ⊂^3 →(3). 
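To make the three-dimensional model concrete, the following is a minimal sketch of how the Cosserat minimisation problem could be scripted in NGSolve. The geometry, boundary name, material values and load are placeholders, the curvature weights are taken as a_1 = a_2 = a_3 = 1, and individual API calls may need adjustment to the installed NGSolve version; this is not the code used for the examples reported later.

```python
from netgen.occ import Box, OCCGeometry, X
from ngsolve import *
from ngsolve.solvers import Newton

box = Box((0, 0, 0), (2, 1, 1))
box.faces.Min(X).name = "fix"                       # clamped face (placeholder)
mesh = Mesh(OCCGeometry(box).GenerateMesh(maxh=0.2))

mu, lam, mu_c, Lc = 1.0, 1.0, 0.5, 0.1              # placeholder material values
a1, a2, a3 = 1.0, 1.0, 1.0

fes = VectorH1(mesh, order=2, dirichlet="fix") * VectorH1(mesh, order=2)
u, theta = fes.TrialFunction()                      # displacement and rotation vector

def sym(T):  return 0.5*(T + T.trans)
def skw(T):  return 0.5*(T - T.trans)

Du, Dth = Grad(u), Grad(theta)
A = CoefficientFunction((0, -theta[2], theta[1],
                         theta[2], 0, -theta[0],
                         -theta[1], theta[0], 0), dims=(3, 3))
CurlA = Trace(Dth)*Id(3) - Dth.trans                # Nye's formula

def normC2(T):                                      # material norm |T|_C^2
    return 2*mu*InnerProduct(sym(T), sym(T)) + lam*Trace(T)**2

def normL2(T):                                      # weighted norm |T|_L^2
    dev = sym(T) - Trace(T)/3*Id(3)
    return (a1*InnerProduct(dev, dev) + a2*InnerProduct(skw(T), skw(T))
            + a3/3*Trace(T)**2)

f = CoefficientFunction((0, 0, -1e-3))              # placeholder body force

energy = 0.5*(normC2(Du) + 2*mu_c*InnerProduct(skw(Du) - A, skw(Du) - A)
              + mu*Lc**2*normL2(CurlA)) - InnerProduct(f, u)

a = BilinearForm(fes, symmetric=True)
a += Variation(energy*dx)

gfu = GridFunction(fes)
Newton(a, gfu)                                      # quadratic energy: one step suffices
```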
As such, the second energy density term is expanded to 2 ( - )^2 = 2( - ) ^2 + 4 ( - ) ^2 , where ( - ) = ( - ) and () = (1/2) ( - []^T) = 0 hold due to skew-symmetry. Putting it all together, the internal energy functional reads I_(,) =12∫_ () + ()_^2 + 4 ()^2 + 4 ( - ) ^2 + 2( - ) ^2 + ^2() - ()^T_^2 . Integration of the energy over the volume is split across the middle surface and the thickness of the shell via ∫_ (·) = ∫_∫_-h/2^h/2 (·) (1 -2H ζ + K ζ^2) ζ , using the shell-shifter from Steiner's formula <cit.>, see also <ref>. The factor is composed of the mean and Gauß curvatures H = 12W , K = W , being invariants of the Weingarten curvature tensor W = - _t n , whose derivation can be found in <ref>. Thus, the energy functional is finally given as I_(,) = 12∫_∫_-h/2^h/2 (() + ()_^2 + 4 ()^2 + 4 ( - ) ^2 + 2( - ) ^2 + ^2() - ()^T_^2 ) (1 -2H ζ + K ζ^2) ζ . §.§ The shell model At this point we make standard engineering assumptions for the kinematics of the shell. Firstly, the infinitesimal rotation is constant throughout the thickness of the shell ≠(ζ). Secondly, the displacement vector takes the form (ξ,η,ζ) = v + ζn , v = v(ξ,η) , = (ξ,η) , where v:ω⊂^2 →^3 is the displacement of the middle surface. In other words, no out-of-plane stretching is permitted, and displacements parallel to the middle surface v:ω⊂^2 → are given by the translation of the middle surface v together with its rotation ζn. Observe that = is still given by the three-dimensional axial vector :ω⊂^2 →^3, such that constant drill throughout the thickness is possible. Using the projection operators we define tangential gradients with respect to the parametrisation of the middle surface r:ω⊂^2 →⊂^3 ∇_t(·) = ∇ (·) , _t (·) = [(·)] . The gradient of the displacement field can now be split across its tangential gradient and normal components via <ref> = v + (ζn) = _t v + (n) ∇ζ + ζ_t(n) . Thus, for the tangential part we find () = _t = _t v + ζ [_t (n)] = _t v + ζ [_t (×n)] = _t v - ζ [(n) _t + W] , where W= -_t n is the Weingarten curvature tensor, which is naturally tangential W = W as per <ref>. Since (n)n = 0, we can further replace (n)_t with (n) where = _t , is the covariant gradient, finally yielding _t = _t v - ζ [(n) + W] . The normal part of the gradient is simply () = (n) ∇ζ = n⊗n = Q , and for the micro-dislocation we find = (_t ) - (_t )^T , _t = (_t ) , seeing as = (∘r)(ξ,η). Next, we observe that () = ([] ) yields () = (v - ζ [(n) + W]) . Analogously, we find ( - ) = (v - ζ [(n) + W]) - . Further, there holds () = 1/2( [] + []^T) = 12(_t v - ζW) - 12 , such that the skew-symmetric counterpart satisfies ( - ) = 12(_t v - ζW) + 12 - = 12(_t v - ζW) - 12 , where we use that ^T = -. Lastly, the normal-normal part of the symmetrised gradient yields ( ) = ( ) = = 0 , since = = 0. Consequently we can simplify the energy density terms of <ref>. We start by observing that () + ()_^2 = (v - ζ [(n) + W])_^2 . Here, we get that the normal-normal strain component vanishes. Following the classical Naghdi shell and corresponding Reissner–Mindlin plate formulations, this leads to an asymptotically improper model since an out-of-plane stress σ arises from transverse contractions via the Poisson ratio <cit.>. Thus, we adopt the plane-stress assumption, such that the normal-normal component of the strain is condensed a priori and the corresponding stress component is set to vanish σ = 0. 
For the planar material tensor this implies that its material constants are adapted to those of plane stress ·_^2 = 2^*·^2 + ^*[(·)]^2 , ^* = E_e2(1+ν_e) = , ^* = E_e 1-ν_e^2 = 2 + 2 , where E_e and ν_e are Young's modulus and the Poisson ratio, respectively. Observing that () = ( - ), the next two energy density terms are combined into 4 ()^2 + 4 ( - ) ^2 = ( + ) (_t v - ζW - )^2 . For the skew-symmetric tangential-tangential energy term we find 2( - ) ^2 = 2(v - ζ [(n) + W]) - ^2 . Finally, the energy density of the micro-dislocation reads ^2(_t ) - (_t )^T_^2 . Thus, the total energy is given by I_(v,) = 12∫_∫_-h/2^h/2 ((v - ζ [(n) + W])_^2 + ( + ) (_t v - ζW - )^2 + 2(v - ζ [(n) + W]) - ^2 12 + 2 + ^2(_t ) - (_t )^T_^2) (1 -2H ζ + K ζ^2) ζ . Having derived the total energy, we employ asymptotic analysis to obtain the final shell model by eliminating ζ-terms of order three or higher Ø(ζ^3), see <ref>. These terms result in constants of order Ø(h^4) in the thickness h, which is imperatively required to be very small h ≪ || for a well-defined formulation <cit.>. Thus, these terms are expected to generate insignificantly small energies for a thin shell and are therefore omitted. Further, we exploit that linear terms of the form of (·)ζ vanish by the symmetry of integration over the thickness of the shell ∫_-h/2^h/2(·)ζ ζ = 0. We integrate the energy densities multiplied with the shell-shifter 1 -2H ζ + K ζ^2 over the thickness in three steps. Firstly, we find the integral over the constant 1 to be h v_^2 + h^312 ([n] + W)_^2 + ( + ) (h(_t v - )^2 + h^312W^2) + 2 (hv - ^2 + h^312( [n] + W)^2) + ^2h(_t ) - (_t )^T_^2 . Secondly, the linear term of the shell-shifter -2H ζ leads to H h^33 ( v([n] + W) + ( + ) (_t v - )W + 2v - ([n] + W) ) . Thirdly, the quadratic term K ζ^3 yields K h^312 ( v_^2 + ( + ) (_t v - )^2 + 2v - ^2 + ^2(_t ) - (_t )^T_^2) . A common simplification at this point can be made for shells with a relatively small curvature, H/ || ≪ 1 and K/ || ≪ 1. Therein, also the terms H h^3 and K h^3 become insignificantly small Ø(h^4) and are thus omitted from the internal energy, yielding the final internal energy functional of the curved shell I_(v,) = 12∫_ h (v_^2 + 2v - ^2+ ( + ) (_t v - )^2 ) + h^312 ( ([n] + W)_^2 + 2( [n] + W)^2 ) + h^312 ( + ( + ) h^312W^2 + ^2h(_t ) - (_t )^T_^2 . The energy density terms in the functional can be interpreted with respect to their mechanical action. The first term of the functional gives the so-called membrane energy. By <ref> it is apparent that the second term governs in-plane torque and is therefore the drill energy density <cit.>, which does not appear in the Naghdi shell <cit.> or Reissner–Mindlin plate <cit.> models. The third term is given by the out-of-plane coefficients of the in-plane tensors, yielding the shear energy. The next three terms represent the bending energy, and the final term is the dislocation energy. Now, in order to construct a minimisation functional we simply redefine the forces and couple-forces as surface quantities L_(v,) = ∫_vq + M , such that q:→^3 and M:→(3). The balance of energy is now expressed analogously to <ref> via I_(v,) - L_(v,) →min {v,} , {v,} = _v, (I_ - L_) , such that minimisers are found by standard variation with respect to v and . §.§ The plate model A plate is defined as a flat shell model. Therefore, its kinematical assumptions are similar as for the curved shell but we have that W = 0. 
Accordingly, we get that the mean curvature H = (1/2)W = 0 and the Gaussian curvature K = W = 0 also vanish. Consequently, the internal energy functional is reduced to I_(v,) = 12∫_ h (v_^2 + 2v - ^2+ ( + ) (_t v - )^2 ) + h^312 ( ([n] )_^2 + 2( [n] )^2 ) h^312 + h^312 ( + ^2 h(_t ) - (_t )^T_^2 . The form can be even further simplified if the plate is embedded in the x-y or y-z planes. §.§ A membrane-shell model In the case of an extremely thin shell h / || ≪ 1, the term h^3 becomes negligible Ø(h^3), such that the energy functional reduces to I_(v,) = 12∫_ h ( v_^2 + 2v - ^2 + ( + ) (_t v - )^2 + ^2(_t ) - (_t )^T_^2) . Interestingly, the functional naturally discards any explicit energy terms related to curvature. However, the Weingarten tensor W and its invariants still influence the energy implicitly, which can be directly observed by taking derivatives of the kinematic fields defined on the curvilinear coordinates of the shell. § LINEAR THREE-DIMENSIONAL COSSERAT BEAMS Let r = r(s) map the centroid curve of some three-dimensional beam, parameterised by the arc-length parameter s ∈ [0,l] and equipped with the accompanying normal vectors n(s) ⊥t(s) and c(s) = t×n, the beam domain is given by x(s,η,ζ) = r + ηn + ζc , x: [0,l] ×ω⊂^3 →⊂^3 , where l ≥ 0∖{0} is the length of the beam and {η,ζ} define the cross-section of the beam ω⊂^2, see <ref>. The initial orientation of the cross-section of the beam is given by the choice of n, and we exclude twists of its cross-section in the mapping τ = 0, see <ref>. The unit tangent vector of the centroid curve is given by t = r_,s , t = 1 , such that we can define the tangential and normal projection operators = t⊗t , = - = - t⊗t = n⊗n + c⊗c , : ^3 → , : ^3 →^3 ∖ , where = {t} is the space of tangential vectors to the curve. Following the same procedure as for the shell models, we employ the decomposition of and (-) using the projectors. The decomposition of yields the same result as in <ref>, but the projection operators are now those of the beam. Observe that = t⊗t implies [() - ] = - = 0 due to skew-symmetry. Thus, <ref> changes to 2 ( - )^2 = 2( - ) ^2 + 4 ( - ) ^2 , and the internal energy of the beam reads I_(,) =12∫_s ∫_ω ( () + ()_^2 + 4 ()^2 + 4 ( - ) ^2 + 2( - ) ^2 + ^2() - ()^T_^2) (1- κ_nη - κ_cζ) ω s , where we use <ref> to decompose the integral over the volume of the beam across its cross-section surface ω⊂^2 and length [0,l]⊂. §.§ The beam model The kinematic of the beam is given via u(s,ξ,η) = v + ηn + ζc = v + (ηn + ζc) , v = v(s) , = (s) , where v:[0,l] →^3 is the translation of the centroid curve and ≠(η,ζ) is assumed to be constant throughout the cross-section of the beam. In the framework of tangential differential calculus, the gradient of the displacement field can be expressed as = _tv + _,s(ηn + ζc) ⊗t + (ηn + ζc)_,s⊗t + _n(ηn + ζc) = _t v - (ηn + ζc)_t - (κ_n η + κ_c ζ) + , for a twist-free cross-section, as per <ref>. Consequently, its symmetrised tangential-tangential part reads () = (v - (ηn + ζc)_t ) , since = = 0. Its normal-normal part reads () = () = 0 . Analogously, we find ( - ) = (v - (ηn + ζc)_t ) , and ( - ) = - = 0 . Due to () = ( - ) = 0 we get that the perpendicular strains vanish, similarly to the case of shells <cit.>. Thus, the model is adjusted to the beam equivalent of plane-stress, being () _^2 = E_e () ^2 , E_e = (3 + 2 ) + , via Young's modulus E_e, such that the perpendicular strains are eliminated from the energy a priori. 
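For reference, the engineering constants appearing here and in the plane-stress adaptation of the shell follow directly from the Lamé constants. A small helper illustrates the conversions; the numerical values are only an example, chosen to match the silver parameters quoted in the later examples, and the identification of which quoted value is λ and which is μ is our assumption based on the stated Poisson ratio.

```python
def engineering_constants(mu, lam):
    """Young's modulus and Poisson ratio from the Lamé constants."""
    E = mu*(3*lam + 2*mu)/(lam + mu)
    nu = lam/(2*(lam + mu))
    return E, nu

def plane_stress_lame(mu, lam):
    """Plane stress: mu* = E/(2(1+nu)) = mu,  lam* = E nu/(1-nu^2) = 2 mu lam/(lam+2 mu)."""
    E, nu = engineering_constants(mu, lam)
    return E/(2*(1 + nu)), E*nu/(1 - nu**2)

lam, mu = 98.5, 30.0     # assumed (lambda, mu) of the silver block used later
E, nu = engineering_constants(mu, lam)
print(f"E = {E:.2f}, nu = {nu:.3f}")                 # nu is approximately 0.38, as stated
print("plane-stress (mu*, lam*):", plane_stress_lame(mu, lam))
```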
Next, the mixed normal-tangential part of the symmetrised gradient reads () = 12(_t v - (ηn + ζc)_t - (κ_n η + κ_c ζ) - ) . The corresponding skew-symmetric part also yields ( - ) = 12(_t v - (ηn + ζc)_t - (κ_n η + κ_c ζ) - ) . Consequently, the internal energy of the beam is given by I_ (v,) =12∫_s ∫_ω ( E_e(v - [η(n) + ζ (c)]_t )^2 + ( + ) (_t v - [η(n) + ζ (c)]_t - (κ_n η + κ_c ζ) - )^2 12 + ( + ) + ^2(_t ) - (_t )^T_^2) (1- κ_nη - κ_cζ) ω s , where the differential operators of the dislocation density are inherently tangential. Observe that since the parametrisation is with respect to the centroid-line of the beam, integrals of the form ∫_ωη ω = ∫_ωζ ω = 0 vanish. Further, we restrict the formulation to cross-sections that are symmetric with respect to at least the η- or the ζ-axis. Thus, mixed terms of the form ∫_ωηζ ω = 0 also vanish. By definition we have that = ∫_ω ω , I_η = ∫_ωζ^2 ω , I_ζ = ∫_ωη^2 ω , I_p = ∫_ωζ^2 + η^2 ω = I_η + I_ζ , represent the cross-section surface and the classical second order moments of inertia, respectively. Lastly, assuming the curvature of the beam is relatively small (|κ_n| + |κ_c|)/|l ≪ 1, using asymptotic analysis we omit quadratic terms multiplied by curvatures, cubic and higher order terms. The latter implies that in the integration, the shifter term 1- κ_n η - κ_c ζ is essentially reduced to 1. Consequently, the integration of the first energy term yields E_ev^2 + E_eI_ζ( [n ]_t )^2 + E_eI_η([c]_t )^2 , and for the second term we find ( + )(A(_t v - )^2 + I_ζ([n]_t + κ_n )^2 + I_η([c]_t + κ_c )^2 ) . Finally, the dislocation energy is just multiplied by the surface area A, such that the internal energy of the beam is given by I_(v,) =12∫_s E_ev^2 + E_eI_ζ( [n ]_t )^2 + E_eI_η([c]_t )^2 + ( + )A(_t v - )^2 + ( + )I_ζ([n]_t + κ_n )^2 12 + ( + )A + ( + )I_η([c]_t + κ_c )^2 + ^2 A (_t ) - (_t )^T_^2 s . By their mechanical action, the first term is the membrane energy, the next two terms are the bending energy, and the fourth term is the shear energy density. The two subsequent terms represent warp torsion energy since (n)_t + κ_n = κ_n - (_t ) c⊗t , (c)_t + κ_c = (_t ) n⊗t + κ_c , couple rotations perpendicular to the curvature with the rotational intensity. One can observe that _t relates to the change in torque θ^t via = θ^t t + θ^n n + θ^c c , _t = _t = (θ^t_,s - κ_n _n - κ_c _c) t⊗t = θ^t_,s - κ_n _n - κ_c _c , as the remaining terms of the gradient _t are not tangential-tangential and thus eliminated, compare <ref>. Lastly, the endmost term is the dislocation energy. Now, for the minimisation functional we simply reformulate the forces and couple-forces as curve quantities L_(v,) = ∫_vq + M , such that q:[0,l] →^3 and M:[0,l] →(3). The balance of energy is now expressed as in <ref>. §.§ The straight beam model If the beam is not curved κ_n = κ_c = 0, then the energy functional simplifies to I_(v,) =12∫_s E_ev^2 + E_eI_ζ( [n ]_t )^2 + E_eI_η([c]_t )^2 + ( + )A(_t v - )^2 + ( + )I_p_t ^2 + ^2 A (_t ) - (_t )^T_^2 s , where we used <ref> to obtain (n)_t ^2 = (c)_t ^2 = _t ^2, enabling the combination of I_η and I_ζ to the polar second order moment of inertia I_p = I_η + I_ζ. The form can be further simplified if the beam is embedded in the x-axis with its cross-section aligned to the y-z-plane. §.§ A micro-beam model A special Cosserat beam model can be derived under the assumption that the surface of the beam is very small | | / l ≪ 1, implying that the second order moments of inertia I_η→ 0 and I_ζ→ 0 vanish. 
In this scenario the energy functional reduces to I_(v,) =12∫_s E_ev^2 + ( + )A(_t v - )^2 + ^2 A (_t ) - (_t )^T_^2 s . Analogously to the membrane-shell model, we get that the micro-beam model is naturally independent of any explicit curvature energy terms. § NUMERICS The volumetric geometry of the domain is discretised using finite element meshing procedures. Correspondingly, the geometry of any embedded shell or beam in the domain is explicitly controlled by the meshing procedure as well. In other words, so-called conforming meshes are required, where the geometries of the lower dimensional shell and beams are clearly identified as faces and lines of three-dimensional polyhedra, compare <ref>. The displacement and infinitesimal rotation fields of the three-dimensional Cosserat model are defined using ^0()-continuous Lagrange elements ,∈ [^p()]^3 ⊂ [()]^3 over the volumetric domain ⊂^3. The coupling with the lower dimensional models is now achieved by restriction of the fields to codimensional domains using consistent Sobolev trace operators <cit.>, as depicted in <ref>. For the middle surface ⊂^3 of a shell we have _ = _ , _ = _ , _^t () = () _ = _t (_) = _t _ , _^t () = () _ = _t (_) = _t _ , where = - n⊗n, and analogously for the centroid line ⊂^3 of beams _ = _ , _ = _ , _^t () = () _ = _t (_) = _t _ , _^t () = () _= _t (_) = _t _ , where for curves we have = t⊗t. This approach is naturally consistent on the finite element spaces since scalar products and norms remain well-defined and square integrable. There hold the relations _s u = _s _ u , ∇_t_ u = _^t ∇ u , ∇_t _s u = ∇_t _s _ u = _s^t ∇_t _ u = _s^t ∇ u , where u:⊂^3 → represents one row of :⊂^3 →^3. In other words, one finds the commuting de Rham diagram <cit.> ^p() ∩() ^p-1() ∩, ⊃ ∇^p() _↓ _^t ↓ ^p() ∩() ^p-1() ∩, ⊃ ∇_t ^p() _↓ _^t ↓ ^p() ∩() [^p-1() ∩()] t ⊃ ∇_t ^p() , where ^p-1() ⊃∇^p(), ^p-1()⊃∇_t ^p() are the respective volume and surface Nédélec elements of the second type <cit.>, and [^p-1()] t = ∇_t ^p() are discontinuous Lagrange elements on the curve. Thus, the total mixed-dimensional energy functional can be naturally defined using the single discretisation {,}∈ [^p()]^3 × [^p()]^3 as I(,) = I_(, ) + I_(_, _) + I_(_, _) , , ∈ [^p()]^3 , where I_(,) is the energy of the volumetric Cosserat model, I_(_, _) is the energy of an embedded shell, and correspondingly I_(_, _) is the energy of an embedded beam. To clarify, the displacement field of the shell is simply set to v = _, and analogously for the beam we have v = _s. Accordingly, the total external work is given by L(,) = L_(, ) + L_(_, _) + L_(_, _) , , ∈ [^p()]^3 , such that L_(,) is the volumetric work, L_(_, _) is work on an embedded shell, and L_(_, _) is the work of an embedded beam. Accordingly, the discrete problem reads I(,) - L(,) →min {,} , {,} = _, (I - L) , , ∈ [^p()]^3 . In the following we compute three examples using NGSolve with cubic order Lagrange polynomials ^3(), to mitigate potential locking effects <cit.>. In the first example we consider the 3D-2D-coupling of a volume with shells, while the second example demonstrates the 3D-2D-1D coupling of a volume with a plate and beams. The last example showcases the 2D-1D coupling of shells and beams with intersections. §.§ Reinforcement with a stiff shell In the following we consider a three-dimensional domain made of silicone rubber and reinforce it with shells made of graphite. 
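Before specifying this first example, the following sketch indicates schematically how such a trace-based volume–shell coupling could be assembled in NGSolve. The geometry (two glued boxes whose common face plays the role of the embedded surface), material values, loads, boundary names and the exact OCC face-naming/gluing calls are placeholders; the curvature and dislocation terms of the shell are omitted for brevity; and we assume that Grad(.).Trace() provides the tangential surface gradient of H1 fields on the named interface (otherwise the projected gradient has to be supplied differently). A beam contribution would be added analogously on a named codimension-two region.

```python
from netgen.occ import Box, Glue, OCCGeometry, X, Z
from ngsolve import *
from ngsolve.solvers import Newton

bot = Box((0, 0, 0), (1, 1, 0.5)); bot.faces.Max(Z).name = "shell"
top = Box((0, 0, 0.5), (1, 1, 1))
shape = Glue([bot, top]); shape.faces.Min(X).name = "fix"
mesh = Mesh(OCCGeometry(shape).GenerateMesh(maxh=0.15))

V = VectorH1(mesh, order=2, dirichlet="fix")      # displacements
W = VectorH1(mesh, order=2)                       # rotations
fes = V*W
u, theta = fes.TrialFunction()

mu, lam, muc, Lc = 1.0, 1.0, 0.5, 1e-2            # bulk material (placeholders)
muS, lamS, mucS, h = 50.0, 50.0, 25.0, 0.1        # shell material and thickness (placeholders)

def sym(T):  return 0.5*(T + T.trans)
def skw(T):  return 0.5*(T - T.trans)
def anti(v): return CoefficientFunction((0, -v[2], v[1], v[2], 0, -v[0],
                                         -v[1], v[0], 0), dims=(3, 3))

# bulk Cosserat energy density (unit curvature weights for brevity)
Du, A = Grad(u), anti(theta)
curlA = Trace(Grad(theta))*Id(3) - Grad(theta).trans
eV = (mu*InnerProduct(sym(Du), sym(Du)) + lam/2*Trace(Du)**2
      + muc*InnerProduct(skw(Du) - A, skw(Du) - A)
      + mu/2*Lc**2*InnerProduct(curlA, curlA))

# membrane-type shell energy of the traces on the embedded surface
n = specialcf.normal(3)
P = Id(3) - OuterProduct(n, n)
Dt = Grad(u).Trace()*P                            # assumed tangential gradient of the trace
As = anti(theta.Trace())
eS = h*(muS*InnerProduct(sym(P*Dt*P), sym(P*Dt*P)) + lamS/2*Trace(P*Dt*P)**2
        + mucS*InnerProduct(P*(skw(Dt) - As)*P, P*(skw(Dt) - As)*P)
        + (muS + mucS)/2*InnerProduct(P*(Dt - As)*n, P*(Dt - As)*n))

f = CoefficientFunction((0, 0, -1e-3))            # placeholder volume load

a = BilinearForm(fes, symmetric=True)
a += Variation((eV - InnerProduct(f, u))*dx)
a += Variation(eS*ds(definedon=mesh.Boundaries("shell")))

gfu = GridFunction(fes)
Newton(a, gfu)
```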
The domain is given by = {ξ∈ [0,1]^3 | x = [ 200(2ξ-1) 40(2η-1)(3-2[2ξ-1]^2) 10(ζ - 7sin[4ξ-2]) ]^T } , such that its length is 400, its minimal width is 80, its maximal width is 240 and its thickness is 10, see <ref>. The three-dimensional body is made of silicone rubber, whose Lamé parameters read ≈ 5.328 , ≈ 0.34 , such that its Poisson ratio is ν≈ 0.47. Consequently, this type of rubber is still compressible and volumetric locking does not occur in numerical simulations. The volume force acting on the domain is f = -f e_3 , f = 10^-6 / ^3 , pointing downwards. We start by fine-tuning the characteristic length-scale parameter of the Cosserat model to capture an equivalent solution to that of the Navier–Cauchy model. We set = 1 , () = 𝕀() = , such that is the identity map. The Dirichlet boundary of the displacements is given by __D^u = 0 , _D^u = _D^1 ∪_D^2 , where the boundary surfaces read _D^1 = { (η,ζ) ∈ [0,1]^2 | x = [ -200 40(2η-1) 10(ζ - 7sin[-2]) ]^T } , _D^2 = { (η,ζ) ∈ [0,1]^2 | x = [ 200 40(2η-1) 10(ζ - 7sin[2]) ]^T } , such that the displacement vanishes on x = ± 400. For the rotations the entire boundary is homogeneous Neumann _N^ = ∂. The resulting convergence in relative energy and displacements is given in <ref>, where we compare with a classical Navier–Cauchy formulation of the same domain. We observe that for ≤ 1/√(10) the convergence flattens towards a relative difference of < 1% in both the energy and the displacements. Still, even for = 1 we observe max≈ 15.46 in comparison to max≈ 16.28 for = 10^-2, which is equivalent to the result of the Navier–Cauchy model. We note that it is not possible to exactly capture the Navier–Cauchy model with the Cosserat model numerically, since → 0 implies =, but these fields belong to different discrete spaces. Namely, is discontinuous while is continuous. Still, the deviation for ≤ 10^-2 is insignificantly small, such that we henceforth use = 10^-2 in our computations. We remark that although an alternative computation with → 0 via = 10^-7 ≈ 0 and = 1 leads to an equivalent result with max≈ 16.31, at the limit lim→ 0 this approach amounts to solving two independent problems. Namely, - (_M) = f and ^2 () = 0, resulting in ≠→ 0 evidenced by _≈ 96.42 ≠_≈ 1.59 for = 10^-7, such that the coupling of the infinitesimal macro-rotation and the infinitesimal micro-rotation is lost. For = 0 with no Dirichlet boundary |_D^| = 0 the problem is not well-posed. The design in <ref> is made of extremely soft silicone rubber, leading to deformations of ≈ 16 even for the very small volume force of 10^-6 / ^3. The deformation can be substantially reduced by adding a thin reinforcement layer at the bottom surface of the shell Ξ = { (ξ,η) ∈ [0,1]^2 | r = [ 200(2ξ-1) 40(2η-1)(3-2[2ξ-1]^2) - 70sin(4ξ-2) ]^T } , which is simply the parametrisation of the domain evaluated with ζ = 0. We do so using the presented shell models composed of graphite. The material coefficients of the considered graphite @1.6 (H237) material read = 289.451 , = 2122.64 , → + ∞ , a_1 = 10867.9 , a_2 = 122264 , a_3 = 0 , stemming from the experimental results in <cit.>. Conversion formulae of the material coefficients between different forms of the Cosserat model are available in <cit.>. We note that the characterisation does not allow to determine , a_1, and a_2 individually, but can still be used in the simulation. 
For the infinitely large Cosserat couple modulus → + ∞ <cit.> we take a value one order of magnitude higher than for simplicity = 10000, assuming it is sufficient to enforce the implied constraint → +∞ : = , = () = - 12 ()^T = -12 ()^T , which must be satisfied for finite energies. We compare the result of the shell and membrane-shell formulations in <ref> setting the shell-thickness to h = 1.6, representing one layer of the graphite material. The maximal displacement using the shell formulation from <ref> is max≈ 0.068. In comparison, the membrane-shell formulation from <ref> yields max≈ 0.073. Clearly, for both formulations we observe a significant reduction in the deformation from max≈ 16.28, corroborating our shell-volume coupling approach. The relative difference between the two methods given by 100(0.073 - 0.068)/(0.073) ≈ 6.8 % suggests that for this small thickness the higher order energy terms relating to bending and curvature are negligible. §.§ A cantilever beam In the following we consider a simple beam shaped domain = [0,500] × [0,100] × [0,15] , such that its length is 500, its width is 100 and its thickness is 15. The domain is loaded with the vertical volume force f = -f e_3 , f = 10^-5 / ^3 . The bulk of the domain is set to be made of silver, such that its Lamé parameters read ≈ 98.5 , ≈ 30 , implying a Poisson ration of about ν≈ 0.38. We define the Dirichlet boundary for the displacement as __D^ = 0 , _D^ = [0,0] × [0,100] × [0,15] , implying the kinematics of a cantilever beam. The complete boundary of the rotation field is set to be homogeneous Neumann _N^ = ∂. Testing for various values of the characteristic while setting = 1 and = 𝕀()=, we find the maximal deformation for = 10^-2, such that lower values of do not increase the deformation. As such, we conclude that for = 10^-2 we retrieve the equivalent Navier–Cauchy solution. We successively reinforce the bulk domain with a plate and beams made out the graphite @1.6 (H237) from the previous example. The thickness of the plate is again set to h = 1.6. The beam is defined with a full circular cross-section of radius r = 0.8, such that its surface and moments of inertia read A = π r^2 = 0.64 π ^2 , I_η = I_ζ = 14π r^4 = 0.1024 π ^4 . Due to its full symmetry the choice of its orientation given by the n and c vectors is inconsequential. The plate reinforcement of the bulk is on codimension one. Its surface is defined as Ξ_1 = [0,500] × [0,100] × [10,10] . Further, on two sides of Ξ_1 we introduce beam reinforcements Ξ_2 = {[0,500] × [0,0] × [10,10]}∪{[0,500] × [100,100] × [10,10]} , on codimension one of the surface, representing codimension two of the bulk. Thus, we embed in the volume at z = 10 a plate, and two beams at the same depth with y = 0 and y = 100. The domain with its reinforcement is illustrated in <ref>. The maximal deformation without any reinforcement is also depicted in <ref>, reading max≈ 48.49. After reinforcement with two beams the maximal deformation reduces to max≈ 37.04. Alternatively, reinforcing the bulk with the plate yields max≈ 16.71. Finally, combining both reinforcements leads to the maximal deformation max≈ 0.08. The respective deformation results are depicted in <ref>. We observe the expected kinematical behaviour. Firstly, the reinforcement with beams is less pronounced than the effect of reinforcing with the plate. Secondly, the combined reinforcements yield a stiffness-additive solution, which is the natural outcome for linear elastic mechanics. 
We note that we do not retrieve an absolute super-positional solution, which is presumably due to the stiffening effect of the rotations also in x- and z-directions. Due to the lack of analytical results or a sound mixed-dimensional error estimator it is difficult to undertake convergence studies in order to estimate the quality of the mixed-dimensional approximation. Comparisons with fully volumetric discretisations are also restricted, as the reduced models are an idealisation of the three-dimensional kinematics, incapable of capturing the full mechanical phenomenon. Nevertheless, we propose a weighted bulk benchmark to evaluate the approximation power of the coupled plate model to some extent. The plate model allows for a rudimentary comparison due to its simple geometry, in contrast to the additional errors induced by the non-smooth geometrical approximation of curved shells. We start by splitting the bulk domain across two volumes = _∪_Ξ_1 , _ = {[0,500] × [0,100] × [0,10-h/2] }∪{[0,500] × [0,100] × [10+h/2,15] } , _Ξ_1 = [0,500] × [0,100] × [10-h/2,10+h/2] , such that _ is the domain made of silver, and _Ξ_1 represents the domain of the plate with thickness h made of graphite, see <ref>. Clearly, the larger h is, the less silver the domain is composed of, and the stiffer the it becomes. However, in the mixed-dimensional model the thickness of the plate does not explicitly appear in the discretisation of the geometry, such that it does not affect the volume of the silver-made domain. At the same time, reducing h in the fully volumetric model in order to account for this inconsistency reduces the overall stiffness of the model, although it remains constant in the corresponding mixed-dimensional model h = 1.6. Thus, we study the behaviour of the comparable fully volumetric model by decreasing the thickness h, while simultaneously compensating for this loss in stiffness with a linear scaling factor k = 1.6/h ≥ 1. This multiplicative factor is applied to the material coefficients of the graphite-made domain. We use quadratic Lagrange elements for the computation with h ∈{ 1.6, 1.2, 0.8, 0.4}, such that the scaling factor reads k ∈{ 1, 4/3, 2, 4}. The convergence, along with the mesh for h = 0.8 and the displacement for h = 0.4 are depicted in <ref>. Clearly, taking the value h = 0 is impossible as it implies a zero volume and k → + ∞. Even for h ≤ 0.3 it already becomes difficult to generate satisfactory meshes that are manageable from a computational perspective. As such, we instead extrapolate the remaining values of the convergence curve with a curve-fitting based on a generic exponential function f(h) = a e^b h + c with a,b,c ∈. As seen in <ref>, the fitting perfectly matches the computed values and predicts a maximal displacement of max≈ 16.36 for h → 0. Thus, the relative difference in the maximal displacement between the fully volumetric model and the mixed-dimensional model is 100 (16.71 - 16.36) / 16.71 ≈ 2.09 % or 16.71 - 16.36 = 0.35 in absolute values. This relatively small difference implies a good agreement between the two models, and demonstrates the accurate prediction power of the mixed-dimensional approach. Further, since the mixed-dimensional model slightly under-estimates the stiffness of the system and therefore over-estimates the deformation, the result is on the so-called safe side for subsequent design decisions. 
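The extrapolation described above can be reproduced with a standard least-squares fit. In the sketch below the maximal-displacement values are illustrative placeholders (generated from a smooth exponential), since the individual computed values for h ∈ {1.6, 1.2, 0.8, 0.4} are not listed in the text; only the fitting procedure is meant.

```python
import numpy as np
from scipy.optimize import curve_fit

h = np.array([1.6, 1.2, 0.8, 0.4])
u_max = np.array([18.41, 17.77, 17.33, 17.02])     # placeholder values, not the study's data

f = lambda h, a, b, c: a*np.exp(b*h) + c
(a, b, c), _ = curve_fit(f, h, u_max, p0=(1.0, 1.0, 16.0))
print("extrapolated maximal displacement at h -> 0:", f(0.0, a, b, c))
```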
§.§ An S-shaped reinforced shell
The proposed coupling procedure is general across dimensions due to the degrees of freedom of the underlying linear Cosserat continuum. In particular, shells naturally include in-plane drill rotations. The fact that a three-dimensional rotation vector composed of three independent rotation fields is present in every model irrespective of dimensionality allows for seamless interaction even across intersections. This feature is greatly facilitated by the use of tangential differential calculus (TDC), since both the displacement field and the rotation field are defined on the Cartesian system. In other words, whether a rotation represents drill for one shell or bending for another is determined solely by the projections, and no explicit transformations to the coordinate system of any hyper-surface are required. The same holds true analogously for intersecting beams with respect to bending and torsion. The final example serves to demonstrate this feature for 2D-1D couplings. We consider a reflected S-shaped domain constructed of two half-circles _1 = {ϕ∈ [π/2, 3π/2] | r = [ 0 50 cosϕ 50 (sinϕ - 1) ]^T } , _2 = {ϕ∈ [-π/2, π/2] | r = [ 0 50 cosϕ 50 (sinϕ + 1) ]^T } , with a line crossing their intersection _3 = [0,-100,0] × [0,100,0] . The shell surface is then defined via the extrusion = [0, 500] ×{_1 ∪_2 ∪_3 } . Accordingly, the radius of the circles is 50, the middle plate has a width of 200, and the length of the structure is 500. The Dirichlet boundary _D is defined on the edge curves at x = 0 and is applied such that both the displacements and rotations vanish __D = __D = 0 . The shell is set to be made of silver with thickness h = 1.6 and is subsequently reinforced with beams of a circular cross-section with r = 0.8 made of graphite. Curved beams are defined along the S-curves, whereas straight lines are defined with the straight beam model Ξ = { (η, ϕ) ∈{0,1,…,10}× [π/2, 3π/2] | r = 50[ η cosϕ (sinϕ - 1) ]^T } ∪{ (η,ϕ) ∈{0,1,…,10}× [-π/2, π/2] | r = 50[ η cosϕ (sinϕ + 1) ]^T } ∪{ (x,y) ∈ [0,500] ×{-100,0,100} | r =[ x y 0 ]^T } ∪{ (x,y) ∈{0,500}× [-100,100] | r = [ x y 0 ]^T } , representing codimensional domains on the surface of the shell. The corresponding material coefficients can be found in the previous examples. The force is applied as a line load (force per unit length) on the upper and lower horizontal edges of the shell structure q = q [ 0; 1; 0 ]^T , q = -10^-4 z for z = ± 100 and q = 0 otherwise , implying torsion. The domain along with its reinforcing frame and the resulting displacements for 620 cubic elements are depicted in <ref>. We clearly observe the resulting warp-torsion deformation of the non-reinforced thin structure with a maximal displacement of max≈ 41.14. Activating the beam-frame reinforcement reduces the maximal displacement to max≈ 0.26. The intersection of multiple shells and beams with differing orientations poses no challenge to the proposed coupling approach, as evidenced by the symmetric displacement solutions.
§ CONCLUSIONS AND OUTLOOK
In this work we employed the linear Cosserat micropolar model in dislocation form to derive corresponding shell, plate, and beam models via kinematical and dimensional reduction. Further, we have shown that for → 0 it is possible to recover the Navier–Cauchy response while using the Cosserat model. The reduced dimensional models were derived by standard engineering assumptions for the kinematics and splits in the integration over the volumetric domain.
For the derivation we made use of the modern framework of tangential differential calculus. Thus, we were able to derive the corresponding energy functionals without the need for curvilinear coordinates or Christoffel symbols. Another major advantage of this approach is its applicability to automated solvers of partial differential equations, where the energy functional could be written as is, and solved directly without further treatment. All the various dimensional models intrinsically share the same type of degrees of freedom, namely translations and rotational. As a consequence, we were able to define coupled systems using merely consistent Sobolev trace operators, without the need for intermediate finite elements or Lagrange multipliers. The natural ^0()-regularity of continuous Lagrange elements ensured that its Sobolev traces on codimensional domains remain well-defined and square integrable also on these lower dimensional manifolds. We were thus able to validate our approach with three numerical examples computed in NGSolve, where we coupled a volume with a shell, a volume and with a plate and beams, or a shell with beams. The approach presented in this work has three advantages. Firstly, it is simple as no special treatment is required for the coupling. Instead, the coupling is done by defining additional energy functionals for Sobolev traces of the displacement and rotation fields on codimensional domains. Secondly, it is arbitrarily valid across dimensions, even for intersections. Thirdly, although not explicitly discussed in this work, the coupling modulus can be understood as a penalty term or a rotational spring. In other words, it could be used to control how strong the coupling is, allowing for deltas on interfaces of mixed-dimensional domains. Collectively, we surmise that our approach is relevant for engineering designs with mixed-dimensional parts, such as sandwich elements in aerospace, or general composite and fibre-reinforced materials. Notwithstanding, the presented approach does entail one disadvantage, being its additional rotational degrees of freedom on all volumetric elements, making it computationally more expensive than the comparable Navier–Cauchy model for isotropic Cauchy materials. § ACKNOWLEDGEMENTS Adam Sky is grateful for the technical support by Christopher Lackner (cerbsim, Technical University of Vienna) of the NGSolve community. Patrizio Neff acknowledges support in the framework of the DFG-Priority Programme 2256 “Variational Methods for Predicting Complex Phenomena in Engineering Structures and Materials”, Neff 902/10-1, Project-No. 440935806. spmpsci § TENSOR CALCULUS Let Ω⊂^3 define some reference three-dimensional body and ⊂^3 a physical three-dimensional body, we define the mapping x = x(ξ,η,ζ) , x:Ω→ . The covariant tangent vectors in the physical body read g_i = ∂_i^ξx = ξ^ix = x^jξ^ie_j = x_,i . Let the mapping x be invertible ξ = ξ(x) = x^-1(ξ), using the chain rule we can directly deduce x^jξ^iξ^kx^le_je_l = x^jξ^iξ^kx^lδ_jl = x^jξ^iξ^kx^j = ξ^kx^jx^jξ^i = ξ^kξ^i = δ_ki . implying the definition of so called contravariant vectors g^j = ξ^jx^ie_i , g_ig^j = δ_ij . Inversely, we can retrieve the Cartesian basis vectors form the co- and contravariant basis e_j = ξ^ix^jg_i , e_i = x^iξ^jg^j . The covariant vectors define the metric tensor of the three-dimensional body G = g_ije_i ⊗e_j = g_ig_j e_i ⊗e_j = x^kξ^ix^kξ^je_i ⊗e_j . 
We observe (g_ije_i ⊗e_j) (g^kg^l e_k ⊗e_l) = (g_ije_i ⊗e_j) (g^kle_k ⊗e_l) = g_ij g^klδ_jke_i ⊗e_l = g_ik g^kle_i ⊗e_l = δ_ile_i ⊗e_l , since g_ik g^kl = x^qξ^ix^qξ^kξ^kx^rξ^lx^r = x^qξ^ix^qx^rξ^lx^r = δ_qrx^qξ^iξ^lx^r = x^rξ^iξ^lx^r = ξ^lξ^i = δ_li . Consequently, the inverse of the metric tensor is G^-1 = g^ije_i ⊗e_j = g^ig^j e_i ⊗e_j . A covariant vector is mapped to a contravariant vector via g_i g^ij = x^kξ^ie_k ξ^ix^lξ^jx^l = e_k x^kx^lξ^jx^l = e_k δ_klξ^jx^l = e_k ξ^jx^k = g^j , and vice versa g_j = g^i g_ij . With this basic machinery in place, we can now express second order derivatives. Second order derivatives of the mapping can be expressed using Christoffel symbols ∂_j^ξg_i = ξ^jg_i = g_i,j = Γ_ij^ijkg_k = Γ_ijkg^k . The components of the Christoffel symbols Γ_ij^ijk and ω_ijk are identified using the orthogonality of the co- and contravariant vectors g_i,jg^l = Γ_ij^ijkg_kg^l = Γ_ij^ijkδ_k^kl = Γ_ij^ijl , g_i,jg_l = Γ_ijkg^kg_l = Γ_ijkδ^k_kl = Γ_ijl . § TANGENTIAL DIFFERENTIAL CALCULUS The identity tensor reads = δ_ije_i ⊗e_j = e_i ⊗e_i . It can also be expressed as a mixed tensor of co- and contravariant basis vectors = e_i ⊗e_i = ξ^jx^ig_j ⊗x^iξ^kg^k = ξ^jξ^kg_j ⊗g^k = δ_jkg_j ⊗g^k = g_j ⊗g^j , and analogously as = g^j ⊗g_j. If we define some hyper-surface ⊂^3 by a mapping r:ω⊂^2 →, whose tangent vectors are g_1 and g_2, then its normal unit vector g_3 can be defined as g_3 = g_1 ×g_2g_1 ×g_2 . We observe that g_3g_3 = 1 , g_αg_3 = 0 , implying that the co- and contravariant normal vectors are the same g_3 = g^3. Since g_3 is the unit normal vector to the surface, we can define a corresponding tangential projection operator = - n⊗n = - g_3 ⊗g^3 = g_i ⊗g^i - g_3 ⊗g^3 = g_α⊗g^α , v = (g_α⊗g^α) v^i g_i = v^αg_α . In other words, the tensor eliminates any non-tangential components. Analogously, we can define the normal projection operator = - = g_3 ⊗g^3 = n⊗n , v = (g_3 ⊗g^3) v^i g_i = v^3 g_3 . Now, let some function λ depend only on two parameters of a reference surface (λ∘r)(ξ,η), then its gradient reads ∇λ = e^i ∂_i^x λ = e_i x^iλ = e_i ξ^αξ^αx^iλ = e_i ξ^αx^iξ^αλ = g^α∂_α^ξλ = λ_,α g^α . Evidently, ∇λ : ω→ is a tangential vector in this case such that ∇λ = (g^β⊗g_β) λ_,α g^α = λ_,β g^β , leaves the vector unchanged. We call the projected gradient the tangential gradient and define it as ∇λ = ∇_t λ . If λ is a function of a three-dimensional system (λ∘x)(ξ,η,ζ), then the tangential gradient eliminates the out-of-plane component ∇_t λ = e^i ∂_i^x λ = g^i ∂_i^ξλ = g^α∂_α^ξλ = λ_,α g^α . Analogously, we can define the same operator for vectors _t v = (v) = v_,α⊗g^α , by applying the projection row-wise. If we restrict also the vectorial basis on the left of the tensor to the tangential plane we retrieve the so-called covariant gradient v = _t v = (v) . As the name suggests, the latter simply expresses derivatives within the tangential coordinate system. With the vectorial gradient defined, the tangential divergence is naturally _t v = (_t v) = (v) = v = v_,i⊗g^ig^α⊗g_α = v_,ig^αδ_iα = v_,αg^α . We note that the trace of the covariant gradient yields the same result (v) = (v) = v = _t v , seeing as = by its projection property. Accordingly, the tensorial tangential divergence reads _t T = (T_,α) g^α = (T) , where the latter implies a double-contraction. Finally, the tangential gradient allows to also define the surface curl operator _t v = _t vn = v_,α⊗g^αn× = v_,α⊗g^αn×g^i⊗g_i = v_,αn×g^iδ_α i = g^α×v_,αn , where we applied the circular shift in the last step. 
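The equivalence between the projected (tangential) gradient and the chain-rule gradient with respect to the surface parameters, which underlies the definitions above, can be verified symbolically. The following sympy sketch checks it on the unit sphere for an arbitrary ambient scalar field; all choices are illustrative.

```python
import sympy as sp

th, ph, x, y, z = sp.symbols('theta phi x y z')

# parametrisation of the unit sphere and an ambient scalar field
r = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
lam = x*y + z**2
on_surf = {x: r[0], y: r[1], z: r[2]}

# covariant tangent vectors, metric, contravariant vectors
g1, g2 = r.diff(th), r.diff(ph)
G = sp.simplify(sp.Matrix([[g1.dot(g1), g1.dot(g2)], [g2.dot(g1), g2.dot(g2)]]))
Gi = G.inv()
gu1 = Gi[0, 0]*g1 + Gi[0, 1]*g2
gu2 = Gi[1, 0]*g1 + Gi[1, 1]*g2

# chain-rule surface gradient  lambda_,alpha g^alpha
lam_s = lam.subs(on_surf)
grad_chain = lam_s.diff(th)*gu1 + lam_s.diff(ph)*gu2

# projected ambient gradient  P grad(lambda)  with  P = I - n (x) n
m = g1.cross(g2)                                   # (unnormalised) normal
P = sp.eye(3) - (m*m.T)/m.dot(m)
grad_amb = sp.Matrix([lam.diff(v) for v in (x, y, z)]).subs(on_surf)
grad_proj = P*grad_amb

assert all(sp.simplify(e) == 0 for e in (grad_chain - grad_proj))
print("projected gradient equals the chain-rule tangential gradient")
```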
If the vector is a function of the plane v = (v∘r)(ξ,η), then the definition coincides with the analogous formula of the full gradient _t v = vn = v_,j⊗g^jn× = v_,jn×g^j = v_,αn×g^α = g^α×v_,αn , since n×g^3 = n×n = 0. Finally, the tangential curl can also be related to the covariant skew-symmetric gradient of a vector field v = (v∘r)(ξ,η) via v = (v) = 12 (v) = 12 (_t v)(n) , where any in-plane vectors are eliminated due to (g_α ) = [g_α× (g_β⊗g^β)] = 1g_1 ×g_2 (n⊗ε_αβγg^γ) = 0 . § THE WEINGARTEN CURVATURE TENSOR Let a hyper-surface ⊂^3 in a three dimensional space ^3 be mapped from some flat surface ω⊂^2 via r:ω⊂^2 →, the tangent vectors on the hyper-surface read t_1 = g_1 = ξr = r_,ξ , t_2 = g_2 = ηr = r_,η . Using the latter two one can define a unit normal vector on the hyper-surface n = t_1 ×t_2t_1 ×t_2 , n = 1 . Clearly, the tangential vectors t_α are orthogonal to n by the very construction t_α⊥n, such that t_αn = 0 , ∂_β^ξt_αn = ξ^βt_αn = t_α,βn + t_αn_,β = 0 , implying the equality t_α,βn = - t_αn_,β . Further, since n is a unit vector there holds nn = 1 , ∂_β^ξnn = ξ^βnn = 2 n_,βn = 0 , implying that infinitesimal changes in n with respect to {ξ,η} are orthogonal to it n_,β⊥n. Since n is a unit vector normal to the hyper-surface , any change in its orientation is a measure of curvature. Using the Christoffel symbols we express derivatives of the tangent vectors as t_α,β = Γ_αβ^αβig_i = Γ_αβ^αβγt_γ + Γ_αβ^αβ3g_3 = Γ_αβ^αβγt_γ + W_αβn , = Γ_αβγt^γ + W_αβn , where we define the covariant components of the so called Weingarten map as W_αβ = Γ_αβ^αβ3 = t_α,βn = t_α,βg^3= t_α,βg_3 = Γ_αβ3 . Doing the same for derivatives of the normal vector we find n_,α = Γ_3α^3αkg_k = Γ_3α^3αβt_β + Γ_3α^3α3n . Now, using <ref> and <ref> we find Γ_3αβ = n_,αt_β = -t_β,αn = -W_αβ , Γ_3α^3α3 = n_,αn = 0 , such that any infinitesimal change in the unit normal vector of the surface is captured by the Weingarten map n_,α = -W_αβg^β = -W_α^αβt_β , where we used <ref> and <ref> to switch to a mixed variant definition. Clearly, the components of the Weingarten tensor are curvature measures, such that the tensor itself is known as the curvature tensor. Its components can be identified using W_α^αβ = -n_,αg^β . Consequently, the tensor itself can be defined via W = -n = -n_,α⊗g^α = W_α^αβt_β⊗g^α . It is clear that the tensor is tangential, such we can also write W = -_t n , using tangential differential calculus. Let κ_1 and κ_2 be the eigenvalues of the tensor, its invariants read H = 12W = 12 W_α^αα = 12 (W_1^11 + W_2^22) = 12(κ_1 + κ_2) , K = W = W_1^11 W_2^22 - W_1^12 W_2^11 = κ_1 κ_2 , representing the mean- and Gaussian curvature measures, respectively. § THE SHELL-SHIFTER Let the middle hyper-surface of the shell be mapped from some flat reference surface ω r = r(ξ,η) , r:ω⊂^2 →⊂^3 , one can define the complete volume of the shell via x = x(ξ,η,ζ) , x = r + ζn , x:Ω⊂^3 →⊂^3 , where ζ∈ [-h/2, h/2] is the thickness parameter of the shell, and the normal vector n is defined using the middle surface as per <ref>. The volume of the shell is therefore given by = × [-h/2, h/2] ⊂^3 . The infinitesimal tangent vectors of the shell read xξξ = g_1 ξ = (t_1 + ζn_,ξ) ξ , xηη = g_2 η = (t_2 + ζn_,η) η , xζζ = g_3 ζ = nζ . As the surface is parameterised by {ξ,η}, its infinitesimal tangent vectors reads rξξ = t_1 ξ , rηξ = t_2 η . Consequently, an infinitesimal surface element of the middle surface reads = t_1 ξ×t_2 η = t_1 ×t_2 ξη = t_1 ×t_2 ω . 
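Before assembling the volume element below, the curvature tensor defined in the previous section can be checked symbolically on a sphere of radius R; the following sympy sketch computes W and its invariants (the radius and the outward-normal convention are illustrative choices, and the sign of H reflects that convention).

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
R = sp.Symbol('R', positive=True)

r = R*sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
g1, g2 = r.diff(th), r.diff(ph)

G = sp.simplify(sp.Matrix([[g1.dot(g1), g1.dot(g2)], [g2.dot(g1), g2.dot(g2)]]))
Gi = G.inv()
gu1 = Gi[0, 0]*g1 + Gi[0, 1]*g2
gu2 = Gi[1, 0]*g1 + Gi[1, 1]*g2

n = r/R                                                    # outward unit normal
W = sp.simplify(-(n.diff(th)*gu1.T + n.diff(ph)*gu2.T))    # W = -n_,alpha (x) g^alpha

H = sp.simplify(W.trace()/2)                       # mean curvature
K = sp.simplify((W.trace()**2 - (W*W).trace())/2)  # Gaussian curvature (product of the
                                                   # two in-plane eigenvalues)
print(H, K)   # expect -1/R and 1/R**2, i.e. |H| = 1/R and K = 1/R**2
```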
We find the infinitesimal volume element of the shell using the triple vector product = x_,ξξ×x_,ηηx_,ζζ = x_,ξ×x_,ηx_,ζζω = (t_1 + ζn_,ξ) × (t_2 + ζn_,η)nζω . The term can be decomposed into the additive parts t_1 ×t_2nζω , t_1 ×ζn_,η + ζn_,ξ×t_2 nζω , ζn_,ξ×ζn_,ηnζω . The first term is clearly t_1 ×t_2nζω = ζ , due to n∥t_1 ×t_2 and n=1. Expanding the second part while applying the curvature tensor W = -_t n to express derivatives of the normal vector we find t_1 ×ζn_,η = -ζt_1 × W_2^22t_2 , ζn_,ξ×t_2 = -ζ W_1^11t_1 ×t_2 , yielding together -ζt_1 × W_2^22t_2 -ζ W_1^11t_1 ×t_2nζω = -ζ ( W_1^11 + W_2^22) t_1 ×t_2nζω = -2H ζ ζ . Using the curvature tensor W again for the third term we find ζn_,ξ×ζn_,η = ζ^2 (W_1^11t_1 + W_1^12t_2) × (W_2^21t_1 + W_2^22t_2) = ζ^2(t_1 ×t_2)(W_1^11W_2^22 - W_1^12W_2^11) = κ ζ^2(t_1 ×t_2) , such the term reads ζn_,ξ×ζn_,ηnζω = κ ζ^2 t_1 ×t_2nζω = K ζ^2 ζ . Putting it all together, we find Steiner's formula for the change in volume = (1 -2H ζ + K ζ^2) ζ , where 1 -2H ζ + K ζ^2 is called the shell-shifter. § TANGENTIAL GRADIENTS ON SHELLS In general, for fields on the shell λ = (λ∘x)(ξ,η,ζ)= (λ∘x)(ξ) it is possible to determine corresponding gradients given by tangential differential calculus. In other words, the gradients can be characterised by the parametrisation of the middle surface. We observe that the covariant basis of the parametrisation x(ξ,η,ζ) from <ref> can be expanded as g_1 = t_1 + ζn_,ξ = t_1 - ζ (W_1^11t_1 + W_1^12t_2) , g_2 = t_2 + ζn_,η = t_2 - ζ (W_2^11t_1 + W_2^12t_2) , g_3 = n , using <ref>. Clearly, g_1 and g_2 are tangential to the middle surface g_α⊥n, such that we immediately obtain g^3 = g_3 = n and g^α⊥n for the contravariant basis. Now, let t^1 and t^2 be the contravariant basis of the parametrisation of the middle surface r = r(ξ,η), the vectors satisfy t^αt_β = δ_αβ and t^α⊥n, such that we can use them to the define the contravariant basis of x(ξ,η,ζ) implicitly as g^1 = c_11t^1 + c_12t^2 , g^2 = c_21t^1 + c_22t^2 . The contravariant basis must satisfy g^αg_β = δ_αβ, leading to following system of equations g^1g_1 = (1-ζ W_1^11) c_11 - ζ W_1^12 c_12 = 1 , g^1g_2 = -ζ W_2^11 c_11 + (1-ζ W_2^12) c_12 = 0 , g^2g_1 = (1-ζ W_1^11) c_21 - ζ W_1^12 c_22 = 0 , g^2g_2 = -ζ W_2^11 c_21 + (1-ζ W_2^12) c_22 = 1 . Solving the system leads to c_11 = 1-ζ W_2^121 -2H ζ + κ ζ^2 , c_12 = ζ W_2^111 -2H ζ + κ ζ^2 , c_21 = ζ W_1^121 -2H ζ + κ ζ^2 , c_22 = 1-ζ W_1^111 -2H ζ + κ ζ^2 . Thus, the contravariant basis reads g^1 = 11 -2H ζ + κ ζ^2[(1-ζ W_2^12)t^1 + ζ W_2^11t^2] , g^2 = 11 -2H ζ + κ ζ^2[ζ W_1^12t^1 + (1-ζ W_1^11)t^2] . With the contravariant basis at hand we can write gradients of fields on x(ξ,η,ζ) explicitly as ∇λ = g^i λ_,i = λ_,αg^α + λ_,ζn , λ = (λ∘x)(ξ) . As the tangential gradient of a field ∇_t λ = λ_,αt^α is defined with respect to the parametrisation of the middle surface r(ξ,η) we find λ_,αg^α = 11 -2H ζ + κ ζ^2[λ_,ξ ([1-ζ W_2^12]t^1 + ζ W_2^11t^2) + λ_,η(ζ W_1^12t^1 + [1-ζ W_1^11]t^2)] , being the product of 11 -2H ζ + κ ζ^2 [(1-ζ W_2^12)t^1 ⊗t_1 + ζ W_1^12t^1 ⊗t_2 + ζ W_2^11t^2 ⊗t_1 + (1-ζ W_1^11)t^2 ⊗t_2] (λ_,ξt^1 + λ_,ηt^2) . Now, since the cofactor of the curvature tensor W = W_α^1βt_β⊗t^α reads W = (W_2^12t_1 ⊗t^1 - W_2^11t_1 ⊗t^2 - W_1^12t_2 ⊗t^1 + W_2^11t_2 ⊗t^2)^T = W_2^12t^1 ⊗t_1 - W_2^11t^2 ⊗t_1 - W_1^12t^1 ⊗t_2 + W_1^11t^2 ⊗t_2 , we finally identify λ_,αg^α = 11 -2H ζ + κ ζ^2 ( - ζW) ∇_t λ , such that a gradient with respect to x(ξ,η,ζ) can be written as ∇λ = 11 -2H ζ + κ ζ^2( - ζW)∇_t λ + λ_,ζn . 
At this point it is important to note that if the explicit structure of the function λ is known, it may be possible to significantly simplify its gradient. For example, given the following function λ(x) = λ_0(r) + λ_1(ζ) λ_2(r) , r:ω⊂^2 →⊂^3 , using the product rule we find ∇λ = ∇λ_0 + λ_2 ∇λ_1 + λ_1 ∇λ_2 = ∇_t λ_0 + λ_2 λ_1,ζn + λ_1 ∇_t λ_2 , since λ_0 and λ_2 are functions of the Cartesian coordinates of solely the middle surface, and ∇_t λ_1 = 0. Thus, for certain explicit structures it is possible to circumvent the need for <ref>. § CURVED BEAMS IN THREE DIMENSIONS A curve in three-dimensional space can be mapped from some parametric space γ⊂ via r = r(ξ) , r:γ⊂→^3 . A unit tangent vector to the curve is given by t = r_,ξr_,ξ , r_,ξ = r_,ξt . Accordingly, the infinitesimal curve element is given by = r = r_,ξ ξ , sξ = r_,ξ , ξs = r_,ξ^-1 , such that t = sr = rξξs = r_,ξr_,ξ^-1t . We use the unit tangent vector to define the tangential and normal projection operators = t⊗t , = - = - t⊗t . Further, we re-parameterise the curve r = r(s) using the arc-length parameter s(ξ) = ∫_0^ξ = ∫_0^ξr_,ξ ξ , ξ(s) = ∫_0^s ξ = ∫_0^s r_,ξ^-1 , s ∈ [0,l] , where we assume that ξ and s start at zero for simplicity. Thus, the tangential gradient of a function λ = λ(s) with respect to the curve reads ∇λ = λ_,sg^s = λ_,st = ∇λ = ∇_t λ , where g^s is the contravariant pseudo-inverse of the covariant tangent vector g_s = t. For the divergence of some vector v = v(s) we find _t v = (_t v) = v = v_,s⊗tt⊗t = v_,st . Next, by defining two orthogonal unit vectors n = n(s) , c(s) = t×n , tn = tc = nc = 0 , n = c = 1 , we can construct a moving coordinate system along the curve. We observe that stt = 2t_,st = 0 , implying t_,s⊥t. Thus, we can define t_,s = κ_n n + κ_c c , t_,sn = κ_n , t_,sc = κ_c , such that κ_n and κ_c are curvature measures. There holds snn = 2n_,sn = 0 , stn = t_,sn + tn_,s = 0 , such that we can define n_,s = - κ_n t + τc , n_,st = -t_,sn = - κ_n , n_,sc = τ , where τ is called the torsion of the curve. Finally, we find c_,s = s(t×n) = t_,s×n + t×n_,s = -κ_c t - τn . Next, we define a beam in three-dimensional space as a curve with a thickness given by some surface mapping x(s,η,ζ) = r + ηn + ζc , x: [0,l]×ω⊂^3 →⊂^3 , where η and ζ span the surface ω⊂^2 of the cross-section of the beam and l is its length. The covariant basis of the map reads g_1 = x_,s = (1- κ_nη - κ_cζ)t - τζn + τηc , g_2 = x_,η = n , g_3 = x_,ζ = c . Thus, we immediately get that the contravariant basis satisfies g^1 ⊥n, g^1 ⊥c, g^2 ⊥c and g^3 ⊥n by the Kronecker delta property g^ig_j = δ_ij, such that g^1 = 11- κ_nη - κ_cζt , g^2 = τζ1- κ_nη - κ_cζt + n , g^3 = -τη1- κ_nη - κ_cζt + c . Consequently, gradients of some function λ = λ(x) with respect to the mapping read ∇λ = λ_,ig^i = 11- κ_nη - κ_cζ (λ_,s + τζλ_,η - τηλ_,ζ) t + λ_,ηn + λ_,ζc = 11- κ_nη - κ_cζ∇_t λ + 11- κ_nη - κ_cζ(T + [1- κ_nη - κ_cζ]) (λ_,ηn + λ_,ζc) , where T is defined as the torsion tensor of the beam T = τ(ηc - ζn) ⊗t , Tt = τ(ηc - ζn) , T = τt⊗ (ζn - ηc) . If the structure of λ is known, it is possible to exploit it to reduce the complexity of the gradient λ(x) = λ_0(s) + λ_1(η,ζ) , ∇λ = ∇_t λ_0 + τζλ_1,η - τηλ_1,ζ1 - κ_nη - κ_cζt + λ_1,ηn + λ_1,ζc , Clearly, if the cross-section of the beam does not twist τ = 0, the gradient simplifies even further to ∇λ = ∇_t λ_0 + λ_1,ηn + λ_1,ζc = ∇_t λ_0 + ∇λ_1 = ∇_t λ_1 + ∇_n λ_1 , ∇_n λ = ∇λ , motivating the definition of the normal gradient ∇_n(·). 
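The contravariant basis stated above can be checked by a purely algebraic computation: with {t, n, c} an orthonormal frame, the duality relations g^i · g_j = δ_ij follow directly. A small sympy sketch (the concrete frame is an illustrative choice):

```python
import sympy as sp

kn, kc, tau, eta, zeta = sp.symbols('kappa_n kappa_c tau eta zeta')
t, n, c = (sp.eye(3)[:, i] for i in range(3))      # any orthonormal frame will do

s = 1 - kn*eta - kc*zeta                           # the beam "shifter"
g_cov = [s*t - tau*zeta*n + tau*eta*c, n, c]       # g_1, g_2, g_3
g_con = [t/s, tau*zeta/s*t + n, -tau*eta/s*t + c]  # g^1, g^2, g^3

D = sp.Matrix(3, 3, lambda i, j: sp.simplify(g_con[i].dot(g_cov[j])))
assert D == sp.eye(3)
print("duality g^i . g_j = delta_ij verified")
```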
Finally, the mapping of an infinitesimal beam volume element reads = x_,ηη×x_,ζζx_,s s = t(1- κ_nη - κ_cζ)t - τζn + τηc ηζ s = (1- κ_nη - κ_cζ) ω s . The term 1- κ_nη - κ_cζ represents the beam-analogue of the shell-shifter. § ASYMPTOTIC ANALYSIS FOR SHELLS Let every term in an integral over ζ be a quadratic form (a + b ζ)^2 = a^2 + 2 a b ζ + b^2 ζ^2 , a ≠ a(ζ) , b ≠ b(ζ) , composed of a constant part and a linear part with respect to ζ, we find the following integration formulae ∫_-h/2^h/2 (a + b ζ)^2c ζ = ch a^2 + ch^312b^2 , ∫_-h/2^h/2 (a + b ζ)^2 (c ζ) ζ = ch^36ab , ∫_-h/2^h/2 (a + b ζ)^2 (c ζ^2) ζ = ch^312a^2 , where we omitted higher order terms Ø(ξ^3), c ≠ c(ζ), and terms of the form (·)ζ vanish by the symmetry of integration ∫_-h/2^h/2(·)ζ ζ = 0.
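The three through-thickness integrals above are elementary and can be reproduced symbolically; the only extra contribution is an h^5 term in the last integral, which is precisely the higher-order part that is discarded. A minimal check (symbol names are illustrative):
[language=Python]
import sympy as sp

a, b, c, zeta, h = sp.symbols("a b c zeta h", real=True)
q = (a + b * zeta) ** 2                                              # the quadratic form (a + b*zeta)**2

print(sp.expand(sp.integrate(q * c, (zeta, -h / 2, h / 2))))         # = c*h*a**2 + c*h**3*b**2/12
print(sp.expand(sp.integrate(q * c * zeta, (zeta, -h / 2, h / 2))))  # = c*h**3*a*b/6
print(sp.expand(sp.integrate(q * c * zeta**2, (zeta, -h / 2, h / 2))))
# the last integral gives c*h**3*a**2/12 + c*h**5*b**2/80; the h**5 term is the omitted higher-order part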
http://arxiv.org/abs/2407.13266v1
20240718081656
How Private is Low-Frequency Speech Audio in the Wild? An Analysis of Verbal Intelligibility by Humans and Machines
[ "Ailin Liu", "Pepijn Vunderink", "Jose Vargas Quiros", "Chirag Raman", "Hayley Hung" ]
cs.SD
[ "cs.SD", "cs.HC", "eess.AS" ]
A BCS state formulation for the fermionic Tonks-Girardeau gas Bruno Juliá-Díaz July 22, 2024 ============================================================= § ABSTRACT Low-frequency audio has been proposed as a promising privacy-preserving modality to study social dynamics in real-world settings. To this end, researchers have developed wearable devices that can record audio at frequencies as low as 1250 Hz to mitigate the automatic extraction of the verbal content of speech that may contain private details. This paper investigates the validity of this hypothesis, examining the degree to which low-frequency speech ensures verbal privacy. It includes simulating a potential privacy attack in various noise environments. Further, it explores the trade-off between the performance of voice activity detection, which is fundamental for understanding social behavior, and privacy-preservation. The evaluation incorporates subjective human intelligibility and automatic speech recognition performance, comprehensively analyzing the delicate balance between effective social behavior analysis and preserving verbal privacy. § INTRODUCTION Speech, as a fundamental modality, serves as a rich source for studying various paralinguistic aspects of human behavior, encompassing elements such as prosody, intonation, and rhythm <cit.>. Analyzing these features not only provides insights into emotional states, social dynamics, and communication patterns <cit.> but also contributes to advancements in fields such as linguistics, psychology, and human-computer interaction. However, analyzing human behavior through speech analysis presents a significant challenge in ensuring privacy, especially in real-world settings where individuals may inadvertently disclose sensitive information in natural conversations. Striking a balance between extracting valuable paralinguistic insights and preserving privacy becomes paramount in ethical and responsible research practices, especially in real-life applications. One promising strategy is to use low-frequency audio recordings in smart badges <cit.> which allows for analysis of paralinguistic features while mitigating the risk of inferring verbal content. Recording at a low frequency makes it possible to infer essential nonverbal elements of social and emotional behavior such as turn-taking and prosodic features without compromising the privacy of the verbal content. This is particularly relevant in the wild, where unscripted and spontaneous interactions occur, reflecting genuine human behavior. This study investigates the feasibility of leveraging low-frequency audio recordings captured in real-world settings to infer plausible verbal content, which could be any words perceived artificially or by human listeners. Our primary emphasis is on verbal privacy, as nonverbal features, including gender and personal attributes <cit.>, are commonly investigated in the context of social dynamics. Employing automatic techniques, including speech-to-text conversion and bandwidth-extension methods, we aim to explore the potential of extracting meaningful insights from these recordings while safeguarding the privacy of individuals involved in the conversations. While other technical strategies exist for preserving semantic privacy post-recording, low-frequency audio promises to provide users with agency in providing informed consent. 
Informing users beforehand that their audio is to be recorded at low frequencies for privacy sensitivity reasons is crucial for empowering users to choose how their data is used for research and promoting social and emotional well-being in this research space. § RELATED WORK An approach for analyzing turn-taking using voice activity detection (VAD) in privacy-sensitive speech is to extract audio features <cit.> which cannot be used to reconstruct intelligible verbal speech content. Applying the Principal Component Analysis method to an audio spectrogram has been proposed to detect non-speech activity and prevent speech reconstruction <cit.>. Moreover, encryption methods on privacy-sensitive audio are available to hide verbal content <cit.> for speaker segmentation tasks or obfuscation in urban sound recording <cit.>. Low-frequency audio <cit.> has been used for group gender classification under privacy-sensitive speech. Sound shredding <cit.>, slicing <cit.>, subsampling <cit.>, and degradation <cit.> are methods to mutate the raw sound which makes it difficult to recover the verbal content of the original recording but maintain some acoustic features of it. Alternatively, replacing the original data with artificial speech generated from Generative Adversarial Network (GAN) architectures <cit.> is used. Also, some work proposes using speech embeddings to preserve privacy <cit.>. The advantage of utilizing low-frequency audio lies in its transparent nature. Users can conceptualize the sound of low-frequency recordings or actively listen to them, gaining a clear understanding that their privacy is safeguarded. In contrast to alternative methods like encryption, where users must rely on trust in researchers to ensure data usage aligns with consent, low-frequency recordings eliminate certain potential misuses, because specific information is inherently absent from the recorded signal, providing users with a tangible and reassuring layer of privacy. § ANALYSIS OF LOW-FREQUENCY AUDIO We examined the performance of low-frequency audio on VAD, automatic speech recognition (ASR), and extended short-term objective intelligibility (eSTOI) <cit.>. To apply an intuitive attack, we used bandwidth-extension (BWE) methods to potentially improve the intelligibility. Bandwidth extension of audio is a task aiming to enhance speech quality over narrow-band telephone connections by extrapolating higher frequencies missing in the low-resolution input. To assess the effectiveness of the potential attack, the human and machine intelligibility of the bandwidth-extended audio is measured. Figure <ref> shows an overview of our study. In section 3.1 we present three audio datasets that were used, each being recorded in different noise settings. In section 3.2 we make a comparison across sample rates of VAD performance (3.2.1) and automatic speech intelligibility. In section 3.3 we extend the analysis to bandwidth-extended audio, both for machine speech intelligibility (3.3.2) and speech intelligibility by humans (3.3.3). §.§ Datasets We used three datasets in our study: Pop-glass <cit.>, VCTK <cit.>, and <cit.>. Pop-glass and were recorded in mingling environments. In Pop-glass, speech is mainly in English, while in speech is mainly in Dutch. VCTK, on the other hand, was recorded in clean audio conditions and is in English. Further details about each dataset are described below. Pop-glass consists of 32 people who participated in a mingling event with the official spoken language in English. 
Each recording was approximately 1 hour long. Every participant wore an omnidirectional Lavalier microphone attached to the face. The original frequency of the audio is 44.1 kHz. 27 out of 32 recordings are included in our study after filtering out completely silent audio and audio from malfunctioning microphones. VCTK is an English multi-speaker corpus provided in the CSTR voice cloning toolkit. Each speaker reads a different set of sentences from a newspaper article in a quiet and single-speaker setting. The original frequency of the audio is 48 kHz. The audio of a female speaker is used which aligns with the open-sourced pre-trained model available for speech enhancement on VCTK. contains personal audio recorded from individual microphones. The setting is a professional networking event with around 100 attendees. 43 consented to wear a microphone and from this data, 16 people's audio data were selected for our experiment to ensure a diverse set of speakers. The microphone used and the original recording frequency is the same as Pop-glass. Most of the audio is in Dutch, although sometimes English is also spoken. §.§ An analysis of low-frequency speech audio During the analysis, the main motivation was to understand how the frequency of input speech affects the state-of-the-art VAD and open-sourced ASR systems. §.§.§ Voice activity detection In this study, we used rVAD <cit.> for the VAD task. It is an unsupervised segment-based method and is compared favorably with some existing methods, such as Kaldi Energy VAD <cit.> and VQVAD <cit.>. To evaluate the performance of rVAD on different sample rates, false error rates (FER) are calculated as the ratio between the number of wrongly categorized events and the total number of actual events. The same dataset was down-sampled to different frequencies before being used for evaluation on rVAD to test performance across frequencies. 27 audio samples (all samples are from different participants) from Pop-glass and 6 audio samples from p225 of VCTK are taken into account. Pop-glass samples are cut into segments of 20 to 30 seconds from 1 hour of the mingling event. All samples were down-sampled to 300, 800, 1250, 2000, 3200, 5000, 8000, and 20000 Hz, chosen to be logarithmically increasing. An order 8 Chebyshev type I Low-pass filtering was applied before down-sampling to avoid aliasing. Figure <ref> shows that the FER drops dramatically when going from a 300 to an 800 Hz sample rate on the VCTK audio. A similar, though less dramatic drop in FER, is observed from 2000 Hz onwards on samples from Pop-glass. Even though the performance of VAD is sensitive to the sample rates, it is reasonable to use the VAD above 800 Hz for clean speech audio and 2000 Hz for speech audio in a mingling environment. §.§.§ Speech intelligibility In this study, automatic speech intelligibility is evaluated in terms of the performances of ASR and eSTOI. The performance of ASR evaluates whether machines can transcribe audio into text. eSTOI is an automated intelligibility listening test that compares noisy audio sources to a clean reference. We employed the open-sourced ASR model Whisper <cit.> trained on multilingual and multitask supervised data from the web to evaluate samples in different frequencies. To evaluate the performance of the ASR model on different frequencies, word error rate (WER) <cit.> was calculated. Outputs of the ASR model were pre-processed by lower-case transformation, white space removal, and bag-of-words reduction before computing the WER metrics. 
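For reference, the WER computation can be sketched as below, following the pre-processing just described (lower-casing and white-space normalization; the bag-of-words reduction is not reproduced here) and a standard word-level edit distance. This is a self-contained illustration rather than the exact evaluation script.
[language=Python]
import re

def normalize(text):
    # lower-case and collapse whitespace before scoring
    return re.sub(r"\s+", " ", text.lower()).strip().split()

def wer(reference, hypothesis):
    r, h = normalize(reference), normalize(hypothesis)
    # word-level Levenshtein distance: substitutions + insertions + deletions
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat on the mat", "The cat  sat mat"))  # 2 errors over 6 reference words, about 0.33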
Figure <ref> shows that the WER is ∼10% for Pop-glass and is 0% for the samples from VCTK at 20000 Hz. It shows that open-sourced ASR works well for high-frequency speech audio. However, the WER is higher than 97.5% for 300 - 800 Hz VCTK audio and higher than 98% for 300 - 1250 Hz Pop-glass audio. This indicates that ASR performance is significantly worse on low-frequency speech audio compared to high-frequency speech audio. A higher score in eSTOI represents a prediction that the speech intelligibility performance will be better, compared to a given reference speech signal. The scores range between 0 and 100 (as clear as the original audio). As the eSTOI result shows, samples from both VCTK and Pop-glass maintain 20 and 40 respectively, when the sample rates are lower than 2000 Hz, compared to their high-frequency speech audio. Furthermore, there is little improvement in the intelligibility prediction between 800 and 2000 Hz. Generally, as expected, automatic speech intelligibility decreases with lower speech frequency. §.§ Analysis of bandwidth-extended low-frequency speech To understand the effect of the potential attack on low-frequency audio, we performed an analysis of ASR performance and a user study after a bandwidth-extension process on the same audio samples mentioned earlier, evaluating intelligibility by both machines and humans. §.§.§ Simulating an attack via Bandwidth Extension By "hallucinating" higher frequencies which are absent in the low-resolution input, bandwidth-extension of audio aims to improve audio quality and intelligibility of speech. In this study, we used neural bandwidth-extension models <cit.>. The two models were trained on and VCTK respectively to simulate a privacy violation situation. The VCTK model trained and tested on audio from the same speaker and noise conditions simulates the lower bound of such an attack. The model simulates a more aggressive, possibly more realistic informed attack where only the noise conditions of the sample are known beforehand and exploited as part of a pre-trained BWE approach using other data (in our case, Pop-glass). The open-source VCTK model is trained on 16 kHz audio from the single speaker of VCTK and the model is trained on 8 and 5 kHz audio from multiple speakers of . Signal-to-Noise Ratio (SNR) describes the ratio of signal power to noise in the signal in the time domain, measuring BWE performance. It represents the intensity of error in predicted audio signals to the intensity of their corresponding reference signals. As Table <ref> shows, the higher the SNR, the better the model's performance. We selected 6 out of 27 samples in Pop-glass containing the minimum, 25th percentile, two medians, 75th percentile, and maximum fundamental frequency F0, as representative samples. To align with formants F1 (500 Hz) and F2 (1500 Hz) <cit.>, sample rates at 800, 1250, and 2000 Hz are evaluated in the intelligibility analysis, because F2 has been indicated for contributing the most to intelligibility <cit.>. §.§.§ Machine intelligibility To evaluate how bandwidth-extended audio improves WER compared with the original low-frequency audio samples, the same model of Whisper is applied to both. Figure <ref> shows that there is a reasonable improvement of WER achieved by the bandwidth extension models in the Pop-glass audio samples with a sample rate of 1250 or 2000 Hz and the VCTK audio samples with a sample rate of 800, 1250, or 2000 Hz. 
The decrease in WER can be interpreted as an improvement in automatic speech intelligibility. However, most of the words recovered from the bandwidth extension models are stop-words which might be less informative on privacy. §.§.§ Human intelligibility We conducted a perceptual experiment on speech intelligibility to investigate how much speech intelligibility is preserved in low-frequency audio. Typically, speech intelligibility is measured via rating scales <cit.> and word recognition tests <cit.>. We recruited 6 participants including 4 males and 2 females. All the participants confirmed they didn't have any hearing impairment and carried out the intelligibility test inside a sound-isolated listening booth. They were asked to wear headphones for the study, but the volume was not restricted. They were permitted to increase or decrease the volume and listen to the audio samples multiple times. 14 audio samples were used; 6 of them from Pop-glass, and 8 from VCTK. After listening to each sample, they were asked to fill out a questionnaire on the intelligibility of the audio content based on a 7-point Likert scale. Q1: Are you able to hear anything in the audio file? Q2: Are you able to hear speech in the audio file? Q3: Please transcribe the audio file word by word (mark all perceived but not recognized words with a character X). Q4: Do you hear more than one speaker in the conversation? If you can, state roughly how many speakers in the conversation you think there are. Q1 and Q2 are measured on a Likert scale (1 to 7, 1 being “Not at all” and 7 being “Very clearly”). <ref> illustrates the results. For both datasets, higher sample rates correlated with higher speech intelligibility scores. However, 2000 Hz audio is not perceived as significantly clearer than 1250 Hz audio. Q3 is measured by WER first and Figure <ref> shows when transcribing low-frequency speech recordings, humans perform marginally worse than the open-sourced ASR. Q3 is also analyzed by other metrics in the next paragraph. Q4 is posed for gaining contextual information about whether the main speaker was transcribed. The number of recognized speakers in Pop-glass followed a mean and std of 0.67 (0.62) at 800 Hz; 1.417 (0.64) at 1250 Hz; and 1.58 (0.64) at 2000 Hz. For VCTK, at all sample rates, the means and std were found to be 1 (0). Consequently, the results on VCTK are representative of the primary speaker. The results of Pop-glass indicate that cross-talk constitutes another source of privacy threat; the question of whose information is leaked is beyond the scope of the present analysis focusing on the verbal intelligibility of low-frequency audio but warrants future investigation. Metrics for Human Speech Intelligibility: Beyond evaluating the WER of the transcripts, as shown in Table <ref>, we introduce the following metrics to measure human speech intelligibility: The number of recognizable words represent words that participants can write down. The number of recognizable words evaluates how many words could be perceived regardless of whether they were truly spoken. The number of perceivable words estimates how many words are perceived, including recognizable words and those that cannot be spelled by participants, but the beginning and the end of which can be identified. It provides a good insight into whether the audio can be used to detect multiple potential words. 
The ratio of recognizable and perceivable words measures how many words are recognized from all words that were perceived to have been spoken in the audio sample. The longest chain of recognizable words of the audio samples has been chosen to determine whether the recognized words are located randomly or continuously in a sentence. Continuous recognizable words tend to provide more information than corpora located randomly in sentences. Pairwise cosine similarity of each audio sample was chosen to measure the similarity between transcripts of two participants listening to the same audio. A higher pairwise cosine similarity means more identical words are shared in the transcripts of participants. Audio at 2000Hz in a mingle setting has a significantly higher pairwise cosine similarity than others. It reveals many words that can be identified at 2000 Hz but not at 1250 Hz for all participants. Thus, 1250 Hz is a reasonable threshold that blocks most of the intelligible verbal content during mingling. § CONCLUSION We investigated the privacy-preserving nature of low-frequency speech audio. While estimating voice activity is desirable for turn-taking dynamics in interactions, the ability to transcribe specific verbal content is a privacy risk. Our results indicate that 800 Hz and 2000 Hz are reasonable thresholds for maintaining VAD functionality whilst blocking intelligible content in clean and mingling-setting audio. Further, human intelligibility of bandwidth-extended low-frequency speech audio was slightly lower than an open-source ASR trained on web data, highlighting the challenges in transcribing such audio. While low-frequency recording shows promise in preserving privacy by obstructing intelligible speech, it is not a comprehensive solution. It remains an open question whether more advanced attacks might still extract sensitive information from low-frequency audio (e.g. model fine-tuning). Acknowledgements Thanks to Martha Larson for feedback on our final draft. This work was partially funded by the Erasmus+ funding program and the Netherlands Organization for Scientific Research, project number 639.022.606. IEEEtran
http://arxiv.org/abs/2407.13122v1
20240718031235
MO-EMT-NAS: Multi-Objective Continuous Transfer of Architectural Knowledge Between Tasks from Different Datasets
[ "Peng Liao", "XiLu Wang", "Yaochu Jin", "WenLi Du" ]
cs.LG
[ "cs.LG", "cs.AI" ]
MO-EMT-NAS P.Liao, X.Wang et al. Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, China pengliao@mail.ecust.edu.cn, wldu@ecust.edu.cn Trustworthy and General AI Lab, School of Engineering, Westlake University, Hangzhou, 310030, PR China jinyaochu@westlake.edu.cn Computer Science, University of Surrey, Surrey, GU2 7XH, UK wangxilu@surrey.ac.uk MO-EMT-NAS: Multi-Objective Continuous Transfer of Architectural Knowledge Between Tasks from Different Datasets Peng Liao10009-0006-9711-7142 XiLu Wang30000-0002-0926-4454 Yaochu Jin1,2 () 0000-0003-1100-0631WenLi Du1() 0000-0002-2676-6341 July 22, 2024 =================================================================================================================================== § ABSTRACT Deploying models across diverse devices demands tradeoffs among multiple objectives due to different resource constraints. Arguably, due to the small model trap problem in multi-objective neural architecture search (MO-NAS) based on a supernet, existing approaches may fail to maintain large models. Moreover, multi-tasking neural architecture search (MT-NAS) excels in handling multiple tasks simultaneously, but most existing efforts focus on tasks from the same dataset, limiting their practicality in real-world scenarios where multiple tasks may come from distinct datasets. To tackle the above challenges, we propose a Multi-Objective Evolutionary Multi-Tasking framework for NAS (MO-EMT-NAS) to achieve architectural knowledge transfer across tasks from different datasets while finding Pareto optimal architectures for multi-objectives, model accuracy and computational efficiency. To alleviate the small model trap issue, we introduce an auxiliary objective that helps maintain multiple larger models of similar accuracy. Moreover, the computational efficiency is further enhanced by parallelizing the training and validation of the weight-sharing-based supernet. Experimental results on seven datasets with two, three, and four task combinations show that MO-EMT-NAS achieves a better minimum classification error while being able to offer flexible trade-offs between model performance and complexity, compared to the state-of-the-art single-objective MT-NAS algorithms. The runtime of MO-EMT-NAS is reduced by 59.7% to 77.7%, compared to the corresponding multi-objective single-task approaches. § INTRODUCTION EMT-NAS <cit.> is a recently proposed multi-tasking NAS algorithm that aims to address the challenge of multiple tasks from different datasets. Different from the initial shared representation of tasks from the same dataset, EMT-NAS considers knowledge transfer from one task to a related task, e.g., transferring knowledge from playing squash to playing tennis <cit.>. EMT-NAS ensures that each task has a separate set of supernet parameters, skillfully alleviating the negative transfer <cit.> that may result from joint training of weight parameters for multiple tasks. However, it evaluates the architectural accuracy only and therefore tends to favor larger models in the search space, lacking control over the model size. To address the above limitations, the present work aims to identify a set of architectures that can balance multiple objectives for each task by means of multi-objective optimization (MO), successfully tackling the complexities posed by multiple classification tasks from diverse datasets. 
Many existing studies on single-task NAS have adopted the MO approach to strike a trade-off between the accuracy and other indicators required in a wide range of settings, such as computational complexity, CPU and GPU latency <cit.>, adversarial robustness <cit.>, and data privacy <cit.>. Evolutionary MO-NAS based on the weight-sharing-based supernet has achieved remarkable success, which significantly reduces computational consumption by allowing all possible architectures to share parameters. However, it has been found that when considering the deployment of models on diverse devices with varying resources, simultaneously optimizing the classification error and model size often drives the population to quickly converge to smaller models. This phenomenon arises because smaller models converge faster in the early stage of evolutionary optimization <cit.>, resulting in lower classification errors for small models <cit.>. Hence, environmental selection based on the non-dominance relationship favors smaller models and eliminates larger ones. To address the above issue, CARS <cit.> designed two environmental selection strategies, one minimizing the model error and size that favors small models, and the other minimizing both the model error and convergence speed that favors larger models. It was found, however, that the population was able to maintain a good degree of diversity in the initial stages; however, the population became polarized into large and small models as the evolution proceeded, losing the models of medium sizes. To tackle the challenges of multi-objective MT-NAS, this work adopts multi-objective multi-factor evolutionary algorithms (MO-MFEAs) <cit.> to transfer latent similarity knowledge across different tasks. Furthermore, an auxiliary objective is introduced to maintain the architecture diversity so that small, medium, and large models can be retained, ensuring that the final solution set contains trade-off models of a wide range of model sizes. Key contributions are as follows: * We propose an MO-EMT-NAS framework to effectively search for multiple Pareto optimal architectures for tasks from different datasets, utilizing transferable architecture knowledge across the tasks to facilitate continuous architecture search. * MO-EMT-NAS considers both the classification error and model size, and the auxiliary objective that mitigates the search bias towards small neural architectures. With the help of multiobjectivization, MO-EMT-NAS maintains architecture diversity in terms of model sizes. * The computational efficiency of MO-EMT-NAS is improved by allowing parallel training and validation of the task-specific weight-sharing supernet on each task. * We benchmark MO-EMT-NAS's performance on seven datasets with two, three, and four tasks, respectively. These datasets include CIFAR-10, CIFAR-100, ImageNet, and four medical datasets (PathMNIST, OrganMNIST_{Axial, Coronal, Sagittal}). From the results, we see that MO-EMT-NAS can obtain architectures with trade-offs between the performance and model size. Through implicit architectural knowledge transfer across different tasks, MO-EMT-NAS can achieve better-performing neural architectures while using less runtime compared to the multi-objective single-tasking approach. § RELATED WORK Neural architecture search aims at automatically finding neural network architectures that are competitive with those manually designed by human experts <cit.>. 
Reinforcement learning (RL) <cit.>, gradient descent (GD) <cit.>, and evolutionary algorithms (EA) <cit.> are three typical search strategies used in NAS. Most NAS algorithms, however, suffer from huge computational costs <cit.>, leading to the development of one-shot NAS that can significantly reduce GPU days and lower the high demand for computational resources through parameter sharing <cit.>. Multi-objective NAS has been developed to search for neural network models for optimizing objectives in addition to the accuracy required in real-world applications. Among the existing MO-NAS approaches, evolutionary MO, a population-based method, has been widely adopted as it is capable of achieving a set of Pareto optimal neural architectures in a single run. NASGNet <cit.> was proposed to generate a set of architectures by simultaneously minimizing classification error and model complexity (floating-point operations per second, FLOPs) with a Bayesian network as a performance predictor. Alternatively, MT-ENAS <cit.> adopted the network performance and model size as two objectives and used multi-task training to construct a radial-basis-function neural network <cit.> as a performance predictor. It is worth noting that they utilized separate populations for each objective without knowledge transfer. In NSGANetV2 <cit.>, five objectives are simultaneously optimized with the help of multiple performance predictors. Since most MO-NAS approaches rely heavily on the quality of the performance predictors, we demonstrate the differences between MO-EMT-NAS and them in that MO-EMT-NAS performs training and validation based on a weight-sharing supernet to reduce the computational overhead, instead of using performance predictors <cit.> or zero-shot metric evaluation <cit.>. Although the use of parameter sharing allows candidate submodels to be easily evaluated without training from scratch <cit.>, candidates with small model sizes generally achieve better validation accuracy at the beginning of the search. Hence, promising candidate models with large sizes fail to survive to the next generation, resulting in a search bias towards small models. Multi-tasking NAS: NAS has evolved from single-task and transfer learning to multi-task learning, with the latest search focusing on different datasets. Since the NAS algorithm <cit.> was proposed, NAS has demonstrated much success in automatically designing effective neural architectures. Initially, it was employed for optimizing models for specific single tasks, such as the classification task on CIFAR-10 <cit.>, termed single-task learning. As related tasks can be encountered, researchers resorted to transfer learning to improve NAS by transferring knowledge from similar previous tasks. For example, a pre-trained model is employed to guide the search for a new task <cit.>. Meanwhile, it has been observed that different tasks can stem from the same dataset. For example, instance and semantic segmentation and depth prediction can be performed on a large dataset for road scene understanding, CityScapes <cit.>. By learning shared representations across these tasks, a common neural architecture for all tasks is constructed via multi-task learning, instead of searching for a task-specific model for each task in the traditional approach <cit.>. Regardless of its well-known efficiency, this line of research is limited to considering multiple tasks from the same dataset <cit.>. 
Recently, the presence of tasks from different datasets poses challenges for multi-tasking NAS, due to the fact that two tasks from different datasets show lower relatedness scores compared to those originating from the same dataset <cit.>. Arguably, EMT-NAS <cit.> was first developed as an MT-NAS with the help of an evolutionary multi-tasking framework to address tasks on different datasets. Although recently proposed methods have shed light on the advantages of incorporating transfer learning and multi-task learning into NAS, our work establishes a multi-objective multi-tasking framework and focuses on handling multiple tasks on different datasets and providing a set of Pareto optimal architectures by balancing the model error and model size. § APPROACH In this work, we used the search space of <cit.>. The encoding of a neural architecture consists of normal and reduction cells, with each cell comprising five blocks, and each block containing two input bits and two operation bits, amounting to a total of 40 bits. For each operation bit, candidate operators include depthwise-separable convolution, dilated convolution, max pooling, average pooling and identity. More details can be found in Supplementary Material A. §.§ MO-EMT-NAS The overall framework of MO-EMT-NAS is shown in Fig. <ref>, including two main parts: the search algorithm and the training and validation of the weight-sharing-based supernet. Implementing multiprocessing enables parallelized training and validation of the supernet for various tasks, significantly boosting computational efficiency. See Supplementary Material B for more details and the pseudocode. The architecture search algorithm is shown on the left panel of Fig. <ref>. First, individuals of the initial population are randomly assigned to different tasks as the corresponding parent population. The parent individuals are sampled from the supernet and trained on each task, and then their objectives are evaluated. Then, the main loop is performed as follows. Individuals are selected (called mate selection) from the parent population to generate an offspring population by exploring the same tasks and transferring knowledge across different tasks. Both offspring and parent individuals are sampled from the supernet and trained on each task and then their objectives, i.e., the model error, size and the auxiliary objective, are evaluated. The non-dominated sorting with the crowding distance is performed on the combined parent and offspring population to select the population for the next generation. After repeating the main loop for several generations, a set of Pareto optimal solutions for each task is obtained. In this work, block-based crossover and bit-based mutation of <cit.> are adopted due to the discrete coding of NAS. The block-based crossover uses the block as the basic unit and allows the selected two parents to exchange blocks at a predefined crossover probability. Bit-based mutation adopts bits as the basic unit of bits and randomly varies the bits of a selected single parent within a candidate range at the mutation probability. Within the framework of MO-EMT-NAS, the generation of offspring populations enables implicit knowledge transfer between tasks: 1) For parents assigned to the same task, the offspring is generated via crossover, and mutation operators can explore the corresponding task. When parents come from different tasks, the generation of offspring is controlled by a parameter called random mating probability (RMP). 
2) Architectural knowledge transfer is triggered at a probability of RMP, where the offspring are generated from the parents through crossover and mutation, and are assigned to the tasks as one of its parents. 3) otherwise, no knowledge transfer will happen, i.e., the parents independently undergo mutation to generate the corresponding offspring, and these offspring inherit the task of their parents. In MO-EMT-NAS, environmental selection must consider multiple conflicting objectives on multiple tasks. Accordingly, the population should be divided into subpopulations by equipping individuals with different tasks. Thus each task can execute its environmental selection separately. Subsequently, a multi-objective environmental selection is performed to consider the validation error, the number of model parameters, and the auxiliary objective, aiming to enhance the architecture diversity and provide a set of promising architectures. §.§ Auxiliary Third Objective Consider a minimization problem with M objectives, individual A dominates individual B, i.e., A is better than B, if: f_m (A) ≤ f_m (B), ∀ m ∈{1, 2, …, M}, f_m (A) < f_m (B), ∃ m ∈{1, 2, …, M}. If A does not dominate B and B does not dominate A, then A and B are non-dominated to each other, indicating A and B are similar. Similarly, A is dominated by B means that A is worse than B. The selection of non-dominated solutions in NSGA-II <cit.> is outlined as follows and depicted in Fig. <ref>: 1) Non-dominated sorting is performed on the combined population of the parent and offspring, resulting in a non-dominated level/rank for each individual. 2) A predefined number of individuals are selected to survive to the next generation based on their non-dominated level. 3) If the number of individuals at the last accepted level exceeds the predefined population size, the crowding distance of each individual (indicating its contribution to solution diversity) is used as the selection criterion. Solutions with a large crowding distance will be prioritized to ensure the diversity of the population. The key insight the multi-objective NAS approach can offer is to provide a bigger picture of the trade-offs between multiple important objectives for a real-world application. Unfortunately, achieving a set of diverse and promising architectures to simultaneously minimize validation error and model size is non-trivial, as previously discussed. Figure <ref> visualizes the populations obtained by NSGA-II with different objectives at different generations. In Fig. <ref>, the population obtained by minimizing both the model error and size converges rapidly towards small-size models as the evolution proceeds. This can be attributed to the fact that small-size models can achieve a superior validation error in the initial phase when using a weight-sharing-based supernet, resulting in losing all large models in the final population. To mitigate this issue, a practical approach is to include an auxiliary extra objective for improving the diversity of candidate architectures when performing non-dominated sorting. Figure <ref> depicts the results of adding the multiply-accumulate operations (MACs) as a third auxiliary objective. Unfortunately, small models still dominate large ones, simply because MACs provide a similar selection pressure to the model size. Alternatively, CARS <cit.>, a state-of-the-art NAS method, introduced a third objective, called accuracy speed, which is measured by the reciprocal of the number of model parameters. 
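For completeness, the dominance relation defined above reduces to a few lines of code; the (error, number of parameters) pairs below are purely illustrative.
[language=Python]
def dominates(fa, fb):
    # fa dominates fb under minimization: no worse in every objective, strictly better in at least one
    no_worse = all(x <= y for x, y in zip(fa, fb))
    strictly_better = any(x < y for x, y in zip(fa, fb))
    return no_worse and strictly_better

A, B, C = (0.10, 2.0e6), (0.12, 2.5e6), (0.09, 3.0e6)     # (validation error, number of parameters)
print(dominates(A, B))                                     # True: A is better in both objectives
print(dominates(A, C), dominates(C, A))                    # False False: A and C are mutually non-dominated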
CARS performs the non-dominated sorting twice at each generation, one considering the validation accuracy and the number of model parameters, and the other the accuracy and the accuracy speed. As a result, CARS can maintain small and large models, but cannot retain medium-size ones, as shown in Fig. <ref>. To resolve the above issue, this work introduces an auxiliary objective f_a to maintain large models in the population by integrating both the model accuracy and size, and utilizing exponential distributions with respect to the size. Specifically, f_a is defined as follows: f_a =(1-params)e^-(1-params)(1-error) , where the number of model parameters params and the validation error error are normalized to [0,1] across the population of the current generation. The maintenance of large models is achieved by generating different exponential distributions <cit.> with respect to 1-params. According to Eq.(<ref>), a larger model will result in a smaller value of 1-params and accordingly a smaller f_a, compared to that of a smaller model with a similar error. Figure <ref> includes an example of the calculation of the auxiliary objective f_a: three architectures A, B, and C, achieve the same error of 0.7, i.e., 1-error=0.3. As a result, f_a of C with 1-params=0.2 will be smaller than that of A with 1-params=0.8. Meanwhile, f_a guides the search towards minimizing the validation error, since f_a always prefers models with a lower error. Therefore, f_a will not only strike a balance between error and params, but also alleviate the search bias by carefully prioritizing the selection of large models. Therefore, the diversity of architectures can be enhanced by employing f_a as the auxiliary objective, resulting in a more even distribution of model sizes in the population, as shown in Fig. <ref>. §.§ Parallel Training and Evaluation The training and validation of the weight-sharing-based supernet is shown on the right panel of Fig. <ref>. The multi-tasking framework allows the training and validation of the supernet for each task to be parallelized. Specifically, the available number of iterations is divided by the number of individuals to obtain the number of iterations train_n for each individual. This process utilizes only one epoch of training data. Following this, individuals of each task are decoded as a neural architecture and trained for train_n times. Finally, the validation error and the number of model parameters of the population of each task are obtained. This way, each neural architecture can substantially enhance its validation accuracy during the successive training iterations, effectively reflecting its true performance <cit.>. § EXPERIMENTS §.§ Settings We adopt a Multi-Objective Single-Tasking NAS (MO-ST-NAS) baseline by only removing the multi-tasking setting from MO-EMT-NAS to show the promising advantages of transferring architectural knowledge across related tasks on diverse datasets. Similarly, MO-EMT-NAS is compared with a representative single-objective evolutionary MT-NAS (EMT-NAS <cit.>) to demonstrate the efficiency of MO with the auxiliary objective. We select seven datasets and conduct experiments on two, three and four tasks for performance evaluation. 1) We design a two-task experiment on the classical datasets CIFAR-10 and CIFAR-100 <cit.>. Additionally, the obtained architectures are retrained on ImageNet <cit.> in order to examine our MO-EMT-NAS's transferability. 
2) Multiple tasks, i.e., two-, three- and four-task settings, are designed on MedMNIST <cit.> to validate the generalization ability of our method. Four datasets, namely PathMNIST, OrganMNIST_Axial, OrganMNIST_Coronal, and OrganMNIST_Sagittal, are selected to simulate various medical imaging scenarios such as rectal cancer pathology and 2D images from 3D computed tomography (CT) images of liver tumors in different planes. Both our baseline and proposed approach are performed five independent runs, and the hyperparameters are listed in Table <ref>. See Supplementary Material C for more experimental setups. Following the practice in <cit.>, to better visualize and compare the optimal architectures obtained from the MO algorithms, the final population is divided into four groups based on the model size as shown in the example given in Fig. <ref>, and the architecture with the smallest error is selected from each group (denoted as A, B, C, and D). §.§ Performance Indicator We adopt hypervolume (HV) <cit.> to evaluate the sets of architectures found by different approaches in terms of convergence and diversity. HV calculates the volume of the objective space dominated by a set of non-dominated solutions 𝒫 and bounded by a reference point 𝐫 (see Fig. <ref>a), HV(𝒫)=VOL(∪_𝐲∈𝒫[𝐲, ]), where VOL(·) denotes the usual Lebesgue measure, [𝐲, 𝐫] represents the hyper-rectangle bounded by 𝐲 and 𝐫. A larger HV value means better performance: In Fig. <ref>b, the set of converged and well-distributed green dots, exhibiting a higher HV value, achieves better performance compared with the set of black dots. For each task, after each separate run of all compared algorithms, the maximum values of each objective across all solutions form the reference point 𝐫. Therefore, 𝐫 varies across different tables that involve different algorithms. §.§ Two-task on CIFAR-10 and CIFAR-100 The results in Table A in the Supplementary Material show that models found by MO-EMT-NAS dominate all models found by other methods under comparison, except for Baseline-A. This indicates that MO-EMT-NAS overwhelmingly outperforms all compared approaches. As shown in Fig. <ref>, MO-EMT-NAS achieves a set of diverse and superior architectures (the red line is at the bottom-left). Interestingly, MO-EMT-NAS approaches are more competitive compared to single-objective MT-NAS approaches, indicating that simultaneously optimizing multiple conflicting objectives enhances the maintenance of large models without undue sacrifice of the validation error. Besides, the comparison between MO-EMT-NAS and MO-ST-NAS demonstrates that architecture knowledge transfer between tasks facilitates the search for neural architectures. The average of the HV values over five runs on CIFAR-10 and CIFAR-100 for MO-ST-NAS and MO-EMA-NAS in Table <ref> further demonstrates the better convergence and diversity of our approach. It is important to highlight that the runtime of the algorithms under comparison is summarized in Table A in the Supplementary Material. Among these algorithms, MO-EMT-NAS emerges as the most efficient computationally, requiring just 0.38 GPU days for CIFAR-10 and CIFAR-100. §.§ Transfer to ImageNet The 16 neural architectures obtained by MO-ST-NAS and MO-EMT-NAS on CIFAR-10 and CIFAR-100, as plotted in Fig. <ref>, are transferred to ImageNet for retraining. The results of the transferred architectures on ImageNet are compared with several representative algorithms and given in Table B in the Supplementary Material. From Fig. 
<ref>, we observe that MO-EMT-NAS shows superior results in terms of the Top-1 accuracy than other algorithms under comparison, while providing a series of trade-off models with the number of parameters ranging from 1.57M to 3.25M. And the architectures transferred from MO-EMT-NAS always yields better performances than that from MO-ST-NAS. The model with the highest accuracy, Ours-C-100-D, has an accuracy of 75.47% and 3.25M number of parameters. Note that the experiment on the ImageNet (a single task with a large dataset) aims to evaluate the architecture transferability of each method rather than its ability to solve multiple tasks. §.§ Medical Multi-Objective Multi-Tasking PathMNIST, OrganMNIST_Axial, OrganMNIST_Coronal, and OrganMNIST_ Sagittal abbreviated as P, A, C, and S. Multi-Objective NAS: Figures <ref>-<ref> show that MO enables MO-EMT-NAS to yield a set of promising models with respect to the accuracy, model size or both. This further confirms the advantage of using the MO methods with the auxiliary objective. Importantly, MO-EMT-NAS finds a set of neural architectures with a low error that dominate the single models found by both single-objective NAS architectures, the Single-Tasking and EMT-NAS. Evolutionary Multi-Tasking NAS: In Table <ref>, the obtained Pareto optimal architecture set for each task is evaluated by the HV metric. Compared with MO-ST-NAS, MO-EMT-NAS achieves higher HV values, i.e., better performance in terms of convergence and diversity, on various task combinations. This is accomplished by using the knowledge transfer across tasks to promote the multi-tasking optimization. Across all settings, MO-EMT-NAS consistently achieves better accuracy while being significantly faster than MO-ST-NAS. Scalability of MO-EMT-NAS: The scalability of MO-EMT-NAS is tested by setting the number of tasks to two, three, and four, respectively. As illustrated in Figs. <ref>-<ref>, MO-EMT-NAS consistently exhibits superior performance compared to single-objective NAS approaches, confirming the promising scalability of MO-EMT-NAS. Specifically, architectures discovered by MO-EMT-NAS consistently dominate (with better performance) or are non-dominated (with similar performance) compared to those found by EMT-NAS and Single-Tasking NAS. Multitasking with Different Similarities: Using ResNet-50 as a feature extractor, we conduct the representation similarity analysis <cit.> to obtain task relatedness scores (RS) <cit.> between the four medical datasets. The RS results presented in Fig. <ref> vary from 0.09 to 0.50. Notably, one can observe lower scores between P and A, C, S and higher scores between A, C, S. According to Fig. <ref>-<ref>, a performance drop in terms of the error can be observed with the decrease of the RS. For example, MO-EMT-NAS finds a set of Pareto optimal models with errors ranging from 6.8% to 8.0% on the dataset P for the two tasks with RS=0.25 in Fig. <ref> while obtaining models with errors ranging from 7.3% to 9.6% on the dataset P in the two tasks PS with RS=0.09 in Fig. <ref>. A possible reason is that the lack of similarity between tasks poses challenges in architectural knowledge transfer, since less transferable information can be obtained. Search Efficiency: The runtime during the experiments is recorded and the percentage of the saved time by each algorithm compared with MO-ST-NAS is measured and presented in Table <ref>. The time saved by MO-EMT-NAS with addressing tasks in parallel is denoted as "GPU Days 1 (%)". 
One can observe that compared with the multi-objective single-tasking baseline, the proposed MO-EMT-NAS reduces the runtime from 59.7% to 77.7% for jointly addressing two, three and four tasks, while reaching a better balance between the error and model size. Besides, the time saved by MO-EMT-NAS unsurprisingly increases with the increase of the number of tasks solved jointly. The main reason is that the parallel training and evaluation of multiple tasks in MO-EMT-NAS significantly improves the computational efficiency and caps the overall runtime to that of the slowest task. Hence, for two-task settings, the time saving does not exceed 50%. Besides, while MO-EMT-NAS handles multiple tasks simultaneously, MO-ST-NAS solves tasks one by one, resulting in much more computational cost. To further investigate the efficiency of MO-EMT-NAS, the time reduced by MO-EMT-NAS without the parallelization of the training and evaluation on multiple tasks is denoted as "GPU Days 2 (%)". More specifically, the time saved for training and validation and that for searching are measured and denoted as "Time 1 (%)" and "Time 2 (%)", respectively. The results of "GPU Days 2 (%)" show that MO-EMT-NAS obtains up to 53.5% time savings compared with MO-ST-NAS, indicating the efficiency gained from the multi-tasking framework. Interestingly, by comparing "GPU Days 1 (%)" and "GPU Days 2 (%)", we can confirm the existence of heterogeneous time costs of different tasks. According to "Time 1 (%)", MO-EMT-NAS reduces up to 60.9% the time cost for training and validation. Similarly, the time spent on searching using an evolutionary algorithm is significantly reduced with the increase of the number of tasks. The reason is that the EA requires almost the same time for each task, accordingly the time for searching is doubled if addressing tasks one by one. §.§ Ablation studies To validate the efficiency of the auxiliary objective, MO-EMT-NAS with and without the auxiliary objective f_a are performed on CIFAR-10 and CIFAR-100. The fact that the HV values achieved by MO-EMT-NAS, i.e., 0.571 for CIFAR-10 and 0.532 for CIFAR-100, are better than that achieved by MO-EMT-NAS without f_a, i.e., 0.443 for CIFAR-100 and 0.356 for CIFAR-100, convincingly showcasing the advantage of using f_a. The HV values in Table <ref> indicate that MO-EMT-NAS with the help of the proposed auxiliary objective yields a set of well-converged and diverse non-dominated architectures. §.§ Sensitivity Analysis The random mating probability (RMP) is an important parameter that controls the degree of knowledge transfer between tasks. Hence, RMP is set to 0, 0.2, 0.4, 0.6, 0.8 and 1 to test its impact on the performance, and the HV results on two datasets, Path and Organ_A, are summarized in Table <ref>. Accordingly, we find that MO-EMT-NAS with a higher RMP value tends to achieve better performance on Path, shedding lights on the potential advantage of encouraging the architectural knowledge transfer. Indeed, a higher degree of knowledge transfer indicated by a larger RMP does not always improve the performance on Organ_A, but the best performance is achieved when RMP=1. We have also extensively tuned the crossover and mutation probabilities, population size and the number of generations. The results in terms of HV values are presented in Supplementary Material D. 
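To make the behaviour of the auxiliary objective examined in the ablation above concrete, the sketch below evaluates f_a for three architectures with the same normalized error, mirroring the A/B/C illustration given earlier; the numerical values are illustrative only.
[language=Python]
import math

def auxiliary_objective(params, error):
    # params and error are assumed to be already normalized to [0, 1] over the current population
    return (1.0 - params) * math.exp(-(1.0 - params) * (1.0 - error))

# same normalized error of 0.7 (so 1 - error = 0.3); model size increases from A to C
for name, params in zip("ABC", (0.2, 0.5, 0.8)):           # 1 - params = 0.8, 0.5, 0.2
    print(name, round(auxiliary_objective(params, 0.7), 3))
# the largest model C receives the smallest f_a, so minimization keeps large models competitive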
§ CONCLUSION In this work, we propose a multi-objective multi-tasking NAS framework with the help of weight-sharing-based supernets to efficiently achieve a set of promising architectures with diverse model sizes. The multi-tasking framework enables architecture knowledge acquired from different tasks to be implicitly transferred, thereby effectively solving multiple tasks from different datasets. To mitigate the small model trap problem, we introduce an auxiliary objective that prefers larger models over smaller ones when they achieve similar accuracy, thereby achieving a set of promising architectures with various model sizes. Extensive experiments demonstrate that the architectures obtained by MO-EMT-NAS exhibit superior performance at a lower computational cost than the state of the art while being able to maintain a high degree of diversity in model sizes. § ACKNOWLEDGEMENTS This work was supported by National Natural Science Foundation of China (Key Program: 62136003), the Shanghai Committee of Science and Technology, China (Grant No.22DZ1101500), Fundamental Research Funds for the Central Universities (222202417006) and the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017 and Shanghai AI Lab. splncs04
http://arxiv.org/abs/2407.13046v1
20240717225557
Unsafe Impedance: Safe Languages and Safe by Design Software
[ "Lee Barney", "Adolfo Neto" ]
cs.PL
[ "cs.PL" ]
1]Lee Barney 2]Adolfo Neto [1]Brigham Young University-Idaho, Rexburg, USA [2]Universidade Tecnológica Federal do Paraná, Curitiba, Brazil Unsafe Impedance: Safe Languages and Safe by Design Software [ July 22, 2024 ============================================================ § ABSTRACT In December 2023, security agencies from five countries in North America, Europe, and the south Pacific produced a document encouraging senior executives in all software producing organizations to take responsibility for and oversight of the security of the software their organizations produce. In February 2024, the White House released a cybersecurity outline, highlighting the December document. In this work we review the safe languages listed in these documents, and compare the safety of those languages with Erlang and Elixir, two BEAM languages. These security agencies' declaration of some languages as safe is necessary but insufficient to make wise decisions regarding what language to use when creating code. We propose an additional way of looking at languages and the ease with which unsafe code can be written and used. We call this new perspective unsafe impedance. We then go on to use unsafe impedance to examine nine languages that are considered to be safe. Finally, we suggest that business processes include what we refer to as an Unsafe Acceptance Process. This Unsafe Acceptance Process can be used as part of the memory safe roadmaps suggested by these agencies. Unsafe Acceptance Processes can aid organizations in their production of safe by design software. § INTRODUCTION Computers connected to networks exist in a dangerous space. From viruses to bots <cit.>, bad actors are constantly looking for weaknesses they can exploit. In response to this, a group of national security agencies stated that C-Suite executives have a responsibility along with technical experts to reduce the attack surface of software over which they have responsibility <cit.>. The proposal they make is to migrate systems from being written in what they refer to as memory unsafe languages to Memory Safe Languages (MSLs). In their mitigations, these agencies state, "Even the most experienced developers write bugs that can introduce significant vulnerabilities. Training should be a bridge while an organization implements more robust technical controls, such as memory safe languages." <cit.> They also suggest several other activities, from implementing and enforcing coding guidelines to hardware changes, that may mitigate memory based Common Vulnerability and Exposure (CVE) Types when using memory unsafe languages. All of this is done as a preamble to the major suggestions of the publication, the use of and transition to MSLs. As part of a move to a more secure future, these agencies urge all software manufacturers, not just those that sell or give away software, to produce and publish "memory safe roadmaps" indicating how they are going to take ownership of the "security outcomes" <cit.> of their software and develop secure products. All of this to promulgate understanding amongst all producers of software that "the software industry needs more secure products, not more security products." <cit.> All of the MSLs mentioned in the articles published by these security agencies allow memory unsafe code to be directly written in the language or loaded from libraries. If unsafe code is written or loaded to produce a product, the product is then is written using unsafe code. This implies that moving to MSLs is necessary but insufficient. 
Unsafe impedance, as we shall define it, is an additional way to aid technical experts and C-suite executives in choosing languages and building their memory safe roadmaps. It is also a way to aid creators of MSLs as they contemplate the design of their new language. The contributions of this paper are: * Introduction of the concept of "unsafe impedance" as a novel perspective for evaluating the safety of programming languages in the context of software security. * Review and comparison of safe languages listed in cybersecurity documents from security agencies with Erlang and Elixir, two BEAM languages. * Proposal for an Unsafe Acceptance Process (UAP) to enhance software security by evaluating the necessity and risks associated with incorporating Native Implemented Functions (NIFs) in Erlang or Elixir applications. § SECURE BY DESIGN AND SECURE BY DEFAULT In 2023, thirteen security agencies from various nations around the world described technology products that are "secure by design" and "secure by default" <cit.>. They define secure by design as it relates to technology products as meaning these products "are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure" <cit.>. These same agencies define secure by default as "products are resilient against prevalent exploitation techniques out of the box without added charge" <cit.>. Considering a programming language as a product, we build upon these definitions and define secure by default languages. Languages are secure by default if they are created in a way that protects against the production of all types of Common Vulnerabilities and Exposures (CVEs). We focus our assessment of whether languages are secure by default on the languages' ability to create memory CVEs. If a language is not secure by default, we define it to be unsecure by default. While all categorization schemes are flawed, we categorize memory CVEs into spatial and temporal groupings. Spatial memory CVEs are those where memory locations and their use cause vulnerabilities and exposures; temporal memory CVEs are those where time differences cause memory vulnerabilities and exposures. The CVEs in Table <ref> and their mitigations are commonly taught in undergraduate computer science courses and commonly dealt with when hardening software against attack. § MEMORY SAFE LANGUAGES AND UNSAFE CODE In their December 2023 report <cit.>, eight security agencies list six languages as being memory safe: C#, Go, Java, Python, Rust, and Swift. However, none of these languages are secure by default. Each of these languages, regardless of its safety declaration, allows unsafe code to be written and used within the language's safe code. It is true, however, that each language has its own requirements that are enforced when unsafe code is used. We define the difficulty experienced by programmers when complying with these requirements as unsafe impedance. Choosing languages with a high unsafe impedance as part of the product design makes it easier to claim that the product is secure by design. For each language listed in the December 2023 report, we give one or more examples of how to use unsafe code. The examples are not intended to indicate how to safely use unsafe code in that language. Neither do we claim these code snippets to be common uses of unsafe code in these languages, since the common uses of unsafe code in these languages vary widely.
The purpose of these code snippets is to allow the reader to assess the amount of unsafe impedance programmers in each language experience. Also, we are not implying that any programmer or engineer would purposefully write the code in these snippets. We define languages as having no unsafe impedance when they have no syntactical or other impediments to writing unsafe code. Commonly known examples of this language grouping are C and C++. Also, these two languages are unsecure by default. Languages that have few syntactical or other impediments to writing unsafe code we define as having low unsafe impedance. Languages that have many syntactical or other impediments to writing unsafe code we define as having high unsafe impedance. Languages that do not allow any writing, loading, or use of unsafe code we define as having infinite unsafe impedance. At this time we offer no rubric to rank languages with regard to unsafe impedance. §.§ C# The C# <cit.> language uses a static method of the Marshal class to allocate an unsafe array of a specified type, int in the snippet below. Notice that Marshal.AllocHGlobal does not initialize the allocated memory. Any old data stored in the memory remains. This can cause an uninitialized memory read, a spatial memory issue, unless the application's programmers are sufficiently experienced that they are aware of the need to write extra code to initialize the memory with some set of default values. Additionally, there is no verbiage or any other indicator stating that this code, and code using the results of this code, is unsafe. Instead, the programmers creating this code and those who later read and debug this code are required to gather the knowledge that this code is unsafe from external sources. [language=C] IntPtr pointer = Marshal.AllocHGlobal(5 * sizeof(int)); When memory is allocated using Marshal.AllocHGlobal, the programmers are required to use the unsafe static method Marshal.FreeHGlobal (see code snippet below). If this function is called at the wrong time, the application will experience temporal memory issues such as dangling pointers, use after free, and double free. If the programmers fail to call this function in an appropriate location in their codebase, the memory leak temporal memory issue is created. In non-simple applications it can be difficult for programmers to know the correct or even a good location to call Marshal.FreeHGlobal. [language=C] Marshal.FreeHGlobal(pointer); In addition to the allocate and free issues already described, unsafe pointers can be created, manipulated, and possibly misused. For these reasons, we classify C# as being unsecure by default. Also, C# allows programmers to execute unsafe code from within their safe code with no syntactical or other indication that the code they are writing is unsafe. For this reason, we categorize C# as having no unsafe impedance.
[language=Go] var num uint32 = 200 var ptrToUint32 *uint32 = (*uint32)(unsafe.Pointer( num)) *ptrToUint32 = 300 However, the use of this unsafe indicator is not required. [language=Go] var num uint32 = 200 var ptrToUint32 *uint32 = num *ptrToUint32 = 300 There is no requirement enforced by the Go language's syntax to indicate that a variable is unsafe. For programmers with limited understanding of unsafe code's behaviors, this lack of visual indicators can lead to security vulnerabilities. For this reason we would categorize Go as having low unsafe impedance. §.§ Java In Java <cit.>, any library that can be accessed using a C-style header can be loaded and run via Java Native Interface (JNI) functions. There are no constraints on what unsafe code can do in those non-Java native functions. Anything that is unsafe in that library becomes a hidden unsafe behavior in the encapsulating Java code which can then be used in any number of places in the Java application. Here is the Java side of the JNI relationship of a simple function that adds two ints. [language=Java] public class AdditionExample // Declare the native method public native int add(int x, int y); // Load the native library static System.loadLibrary("addition"); Notice there is no explicit visual indicator that unsafe code may be executed. The native keyword is insufficient in that a novice reading the code is not directly told the code may be unsafe. While it is true that several steps are required to complete the library, the impedance generated by these steps is reduced by the Java system providing tools to make it easier to execute unsafe JNI functions. For this reason, we also are categorizing JNI interactions as having a low unsafe impedance. Foreign Function and Memory (FFM) has been introduced as part of Project Panama <cit.> and is scheduled for release as part of Java 22. FFM allows direct direct memory manipulation and is designed to replace JNI. Based on the FFM documentation, FFM uses sandboxes, referred to as arenas, to try to contain and constrain the behaviors of unsafe code executed via the FFM API. In a presentation regarding FFM, it was stated that FFM attempts to "find a balance" <cit.> between safety and flexibility. The lack of visual unsafe cues in code examples using FFM and the balance between safety and ease of use lead us to categorize this type of Java code as having a low unsafe impedance. We also categorize Java as being unsecure by default because of the safety/flexibility tradeoff of FFM. §.§ Python Python <cit.> programmers can use the ctypes library to interact with dynamically loaded libraries. These libraries can be written in any language that can produce libraries "which export functions using the standard cdecl calling convention" <cit.>. This includes languages such as C, Rust, and Swift. Compilation and testing of these libraries is done external to the Python language and its standard toolset. There are non-standard toolsets designed to ease the creation and integration of these compiled libraries. When a Python application loads a dynamically linked library using ctypes.CDLL, the library uses the memory space of the Python REPL. This gives any security vulnerability access to any data and code in the REPL's memory space. This means there is no additional safety for code and data run in the REPL compared to if the code were written in the language of the library. 
It is true that if the library were loaded by an additional Python REPL, the new REPL's memory space would only include the data used and created by the loaded library. However, this does not provide any additional safety. Notice that the use of the dynamically loaded library in the example below does not include any direct indicators that the code being used is unsafe. [language=Python] import ctypes libc = ctypes.CDLL('libc.dylib') libc.malloc.argtypes = [ctypes.c_size_t] libc.malloc.restype = ctypes.c_void_p libc.free.argtypes = [ctypes.c_void_p] num_elements = 10 element_size = ctypes.sizeof(ctypes.c_int) array_ptr = libc.malloc(num_elements * element_size) Python's wrapping of unsafe arrays in Python types for access and modification does add some security when manipulating them within Python. However, this does not preclude their misuse within the library itself. Untrustworthy actors could still leverage weaknesses in the library. Because Python requires unsafe code to be compiled outside of its language, runtime, and tools, we claim Python has a high unsafe impedance. This, however, is tempered by the lack of indicators for unaware and unknowledgeable programmers or engineers that the code being executed is unsafe. A version of Python with an infinite unsafe impedance could be created. It would require that no externally compiled code would be allowed to be loaded. This version of Python would, unfortunately, be unavailable for use by the machine learning community since a great deal of the code used by Python machine learning modules is found in dynamically loaded libraries written in unsafe languages. We do categorize Python as a secure by default language as long as no unsafe code is loaded and used. §.§ Rust Rust <cit.> includes the keyword unsafe. It is used along with scope indicators to specify a block of code where all Rust's safety rules are ignored by the compiler. Use of the unsafe keyword and code blocks does not add any additional safety to the code or the applications. It is strictly there to indicate to programmers and engineers that CVE's can be executed in the blocks. Therefore, any unsafe behavior such as use-after-free, dangling pointers, and the other CVEs are purposefully placed inside of these blocks. The intention is that by localizing places where CVEs can exist, they will be easier to manage. These unsafe code blocks also are used to leverage code written in unsafe languages. It is common to hide the use of unsafe code in Rust by writing functions that show themselves as being safe and following the compiler's safety rules yet those functions contain unsafe code blocks. This is so common that when discussing using libraries written in unsafe languages, the Rust documentation states that one of these unsafe libraries "can choose to expose only the safe, high-level interface and hide the unsafe internal details" <cit.>. Notice the use of unsafe Rust code in this example. Unsafe behaviors are not limited to libraries written in other languages. Rust itself eases the creation of unsafe code through its own syntax. [language=Python] use std::alloc::alloc, dealloc, Layout; unsafe let layout = Layout::new::<u16>(); let ptr = alloc(layout); . . . unsafe dealloc(ptr, layout); When a Rust application loads a library, the library shares the same memory space with the rest of the application. This causes any Rust application that loads libraries to have the same kind of vulnerabilities as a Python application with loaded unsafe libraries. 
Because of this and the inclusion of Rust syntax for creating unsafe code, we categorize Rust as having low unsafe impedance. The unsafe keyword and the requirement that the mut keyword be used for mutable variables would, in our opinion, place Rust in the upper portion of low unsafe impedance category. Because of Rust's ability to create, manipulate, and possibly misuse pointers, we categorize Rust as being an unsecure by default language. §.§ Swift When a Swift <cit.> programmer is considering writing unsafe Swift code, Swift's syntax enforces the use of an unsafe indicator. This is found in the names of the initialization functions for UnsafePointer, UnsafeMutablePointer, and UnsafeRawPointer. These types map to various C-style pointers and, when included in Swift code, are often used to increase speed or interact with libraries in other languages. The Swift documentation states that "Swift imports any function declared in a C header as a Swift global function" <cit.>. Swift also enables interactions with C macros <cit.>, structures, and unions <cit.>. The C header declaration implies that any library written in any language that can expose its functionality as if it was written in C is interoperable with Swift. Like Rust and Python, any such library shares the same memory space as the rest of the Swift application. As with those languages, this opens up the possibility of bad actors gaining access to data in the application and executing nefarious code within the Swift portion of the application. Below is an example of unsafe code written in Swift. [language=Swift] var pointer = UnsafeMutablePointer<UInt16> .allocate(capacity: count) . . . pointer.deinitialize(count: count) pointer.deallocate() The first line indicates that the pointer created is unsafe. This unsafe code can be wrapped in Swift code that appears to be safe, hiding unsafe code in what appears to be safe code. Swift's use of unsafe in the initialization functions for unsafe types in our opinion places Swift in the upper portion of low unsafe impedance category. Swift's native ability to create, manipulate, and possibly misuse pointers causes us to categorize Swift as an unsecure by default language. §.§ Erlang, Elixir, and other BEAM Languages Erlang <cit.>, Elixir <cit.>, and other programming languages run on the BEAM virtual machine <cit.> which is often referred to as 'the BEAM'. The BEAM allows unsafe functions to be loaded into its memory and execution space. Like Rust, Swift, and the other languages described as safe in the December 2023 report <cit.>, this means any unsafe, loaded code may be leveraged by bad actors and give the bad actors access to data in the application and executing nefarious code within the BEAM. Because of this, the Erlang documentation includes this warning, "An erroneously implemented native function can cause a VM internal state inconsistency, which can cause a crash of the VM, or miscellaneous misbehaviors of the VM at any point after the call to the native function" <cit.>. Unlike Swift, the libraries containing unsafe code, one or more Native Interface Functions (NIFs), have to include a specific header and deal with some Erlang terminology. Below is a small example. 
[language=c] // example_nif.c #include "erl_nif.h" static ERL_NIF_TERM create_array(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) // Example function that allocates an array and returns a pointer as an integer unsigned int* array = malloc(sizeof(unsigned int) * 10); if(array == NULL) return enif_make_badarg(env); // Just an example: initializing array with arbitrary values for(int i = 0; i < 10; i++) array[i] = i; return enif_make_uint64(env, (ErlNifUInt64)array); static ERL_NIF_TERM free_array(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) // Expects a pointer as an unsigned integer ErlNifUInt64 ptr; if(!enif_get_uint64(env, argv[0], ptr)) return enif_make_badarg(env); free((void*)ptr); return enif_make_atom(env, "ok"); static ErlNifFunc nif_funcs[] = "create_array", 0, create_array, "free_array", 1, free_array ; ERL_NIF_INIT(example_nif, nif_funcs, NULL, NULL, NULL, NULL) It is possible to use the create_array function from Erlang code without knowing it is unsafe. BEAM languages such as Erlang and Elixir can hide unsafe code in much the same way Rust, Swift, and the other languages mentioned do. The Erlang and Elixir languages are functional, declarative, and "secure by default" <cit.>. As part of this secure by default approach, the creators of Erlang and Elixir have not included the ability to create or use pointers in the languages, unlike Rust and Swift. Therefore nine of the ten common CVEs listed in Table <ref> are irrelevant assuming no NIFs are used in the creation of the Erlang library, application, or system. The tenth CVE from Table <ref>, data race conditions, cannot occur when using variables since all Erlang and Elixir variables, tuples, lists, maps, etc. are immutable. This is not to say that there are no poor programming practices in Erlang and Elixir that can be problematic. The Erlang Ecosystem Foundation's Security Working Group provides guidance for avoiding these poor practices <cit.>. As an example, in large, long-running systems, it is possible to exceed the the number of atoms available since atoms are not garbage collected. The number of available atoms for a node is determined at startup. The default value is 1,048,576, which can by increased by using the +t flag. It is unwise to accept large amounts of arbitrary data that is converted to atoms. When data must be converted to atoms, the application of other interventions such as using list_to_existing_atom/1 and then using list_to_atom/1 if list_to_existing_atom/1 fails is indicated. This type of practice preserves the limited atom table resource. The awkwardness of the creation of NIFs, the requirement that they be compiled outside of the standard build environment, and the inherent safety of Erlang and Elixir code in our opinion places Erlang and Elixir in the high unsafe impedance category. We believe it is in the lower end of this category due to the ability to hide the execution of unsafe code. We do categorize Erlang and Elixir as a secure by default languages as long as no unsafe code is loaded and used. § SECURE BY DESIGN SOFTWARE USING ERLANG AND ELIXIR To achieve secure by default behavior in an application written using secure by default languages such as Erlang and Elixir, business processes must be implemented and enforced that strongly discourage the use of NIFs. We propose that any group producing software in any safe by default language create what we refer to as an Unsafe Acceptance Process (UAP). The purpose of any UAP is to increase unsafe impedance. 
The UAP must impose a significant barrier to the loading and use of potentially unsafe code. We also propose that to be of most use, any UAP must include a required, measurable proof that only a NIF can solve the problem presented. We propose this proof should include at least these items in a proposal to include a NIF in an application: * a statement that by the lack of a NIF implementation one or more existing consumers/users of the software are being damaged and how, * a statement of the security risks any NIF brings, along with a statement of the potential security risks posed by the NIFs code, * the source code for the proposed NIF, * the source code for unit tests, when unit testing is possible, or other extensive proofs that the NIF exhibits no unsafe behavior for all conceivable edge cases, * measured speed increases presented by using the NIF, and * an alternative solution, if possible, in Erlang or Elixir that improves on the current solution, but is insufficient. As part of any UAP developed by software producing organizations, no programmer or software engineer should be able to independently add potentially unsafe code to a library, application, or system. The NIF proposal should be evaluated by technological and business persons responsible for the software. Such an evaluation must be skeptical, and adverse to the addition of any NIF in its initial perspective. This implies that the addition of a NIF must be supported by compelling and overwhelming evidence of its necessity. In our opinion, if a business process like the one described here is implemented for Erlang or Elixir applications, the resulting code would fall into the middle of the high unsafe impedance classification. § CONCLUSIONS It is possible for any system written in safe languages to use unsafe code and unwittingly expose the system to attack. For some of the languages described as safe by United States Cybersecurity and Infrastructure Security Agency, et. al. <cit.>, the unsafe code can be written in the language itself and hidden. For other languages declared to be safe, the unsafe code can be written in other languages and loaded separately. This also hides the unsafe code. In the past and currently, decisions are and were made to value ease of using unsafe code. These decisions were made to overcome potential or real speed restrictions in the languages and for code reuse reasons. Such decisions tend to encourage the use of unsafe code and encourage programmers and engineers to overlook potential code safety issues found in unsafe code. Heartbleed in OpenSSL <cit.> is an example of reuse of unsafe code causing vulnerability in large numbers of systems. We question whether these speed and reuse decisions are still relevant. In a time of increased and increasing connectivity, increased attacks of various kinds, and increased size of the code bases being created, it is time to value safety over speed and reuse. Unintended consequences of unsafe code are used by intruders to gain unwarranted access to gather data and access or damage computing systems. To reduce these unintended consequences, secure by default and an infinite unsafe impedance should be the goal of every language creator and maintainer. Business practices such as UAPs can not guarantee code safety, only reduce the probability that unsafe code exists in any software product. We also propose that speed improvements can happen within safe code and in hardware that can mitigate the need for using purposefully written code that may be unsafe. 
We also propose that using any secure by default language, such as Erlang or Elixir, along with business processes that include an Unsafe Acceptance Process (UAP), aids organizations in producing software that is secure by design <cit.>. These applications, however, are not secure by default, since it appears there are no secure by default languages that have infinite unsafe impedance. Further research can and should be done to expand the assessment of commonly used programming languages with regard to being fully secure by default, i.e., expanding upon the memory CVEs from Table <ref> used in our assessment. Further research should also be done to define a rubric that can be used to rank languages' unsafe impedance.
http://arxiv.org/abs/2407.13120v1
20240718025806
HPPP: Halpern-type Preconditioned Proximal Point Algorithms and Applications to Image Restoration
[ "Shuchang Zhang", "Hui Zhang", "Hongxia Wang" ]
cs.CV
[ "cs.CV", "math.OC" ]
§ ABSTRACT Preconditioned Proximal Point (PPP) algorithms provide a unified framework for splitting methods in image restoration. Recent advancements with RED (Regularization by Denoising) and PnP (Plug-and-Play) priors have achieved state-of-the-art performance in this domain, emphasizing the need for a meaningful particular solution. However, degenerate PPP algorithms typically exhibit weak convergence in infinite-dimensional Hilbert space, leading to uncertain solutions. To address this issue, we propose the Halpern-type Preconditioned Proximal Point (HPPP) algorithm, which leverages the strong convergence properties of Halpern iteration to achieve a particular solution.
Based on the implicit regularization defined by gradient RED, we further introduce the Gradient REgularization by Denoising via HPPP called GraRED-HP^3 algorithm. The HPPP algorithm is shown to have the regularity converging to a particular solution by a toy example. Additionally, experiments in image deblurring and inpainting validate the effectiveness of GraRED-HP^3, showing it surpasses classical methods such as Chambolle-Pock (CP), PPP, RED, and RED-PRO. Halpern iteration, Preconditioned proximal point algorithms, RED, Image restoration § INTRODUCTION Image restoration (IR) problems, including image deblurring, super-resolution, and inpainting, can be formulated as the following optimization problem <cit.>: min_∈𝒳λ f()+g(𝐊), where f: 𝒳→ℝ∪{+∞} and g: 𝒴→ℝ∪{+∞} are convex, lower semicontinuous functions, 𝐊: 𝒳→𝒴 is a bounded linear operator, and λ > 0 is a balance parameter. Both 𝒳 and 𝒴 are real Hilbert spaces. The first term f represents the data fidelity, while the second term g serves as a regularization (or prior) to mitigate the ill-posedness of IR problems. Examples include total variation (TV) regularization ∇𝐱_1 (𝐊 = ∇) as in <cit.>, and sparsity regularization 𝐱_1 (𝐊 = I). By the first-order optimality condition, the convex optimization problem (<ref>) is equivalent to the following inclusion problem: find ∈𝒳 such that 0 ∈λ∂ f() + 𝐊^* ∂ g(𝐊), where ∂ f () and ∂ g () are the subdifferentials of f and g at , respectively <cit.>. Following <cit.>, by introducing an auxiliary variable ∈∂ g(𝐊), we can reformulate (<ref>) as: find ∈ℋ such that 0∈𝒜, where 𝒜 = [ λ∂ f 𝐊^*; - 𝐊 (∂ g)^-1 ], = (, ), and ℋ = 𝒳×𝒴. The problem (<ref>) is common in modern optimization and variational analysis <cit.>. When 𝒜 is maximal monotone, the resolvent J_𝒜 = (I + 𝒜)^-1 is nonexpansive with a full domain, as established by the Minty surjectivity theorem <cit.>. The proximal point iteration ^k+1 = (I + 𝒜)^-1^k is used to solve (<ref>) and has been proven to converge weakly <cit.>. Since the resolvent (I + 𝒜)^-1 is generally difficult to compute, splitting methods have been proposed to address this issue. The well-known Douglas-Rachford splitting (DRS) <cit.> decomposes 𝒜 into the sum of two maximal monotone operators 𝒜_1 and 𝒜_2, for which the resolvents J_𝒜_1 and J_𝒜_2 are easier to obtain. Another popular splitting algorithm is the Chambolle-Pock (CP) algorithm <cit.>, which solves the saddle-point problem of (<ref>), i.e., min_∈𝒳max_∈𝒴⟨𝐊, ⟩ + λ f() - g^*(), where g^*: 𝒦→ℝ^+ is the Fenchel conjugate functional. He and Yuan <cit.> were the first to analyze the Primal-Dual Hybrid Gradient (PDHG) by the preconditioned proximal point (PPP) method, given by 0∈𝒜^k+1 + ℳ(^k+1-^k) with a positive definite preconditioner ℳ : ℋ→ℋ. Bredies et al. <cit.> developed a unified degenerate PPP algorithmic framework with a semi-definite preconditioner. Assuming that 𝒯 = (𝒜 + ℳ)^-1ℳ has a full domain and is single-valued, the PPP method with the classic Krasnosel'skii-Mann (KM) iteration solves for the fixed point of 𝒯, i.e., 𝒯() =. From the perspective of degenerate PPP <cit.>, both DRS and CP splitting algorithms can be considered as special cases of the PPP algorithm with proper preconditioner. However, PPP algorithms with a semi-definite preconditioner generally exhibit weak convergence in Hilbert space, resulting in uncertain solutions. It is meaningful to study the limit point of sequences generated by PPP, Bauschke et al. 
obtained strong convergence results for PPP in the special case where 𝒜 is a linear relation <cit.>: the fixed-point set Fix(𝒯) is then a linear subspace, and the weak limit of the PPP sequence is the ℳ-projection of the initial point. The classic Halpern iteration <cit.> offers the advantage of strong convergence over other iterations (such as the KM iteration) in infinite-dimensional Hilbert spaces, with the limit identified as the metric projection of the anchor onto the fixed point set <cit.>. This method is also known as an implicit regularization technique <cit.>. Due to this implicit bias (projection onto solutions), degenerate PPP algorithms incorporating Halpern iteration are more likely to approximate the true solution, yielding better recovery results for IR problems. Besides the benefit of converging to a particular fixed point, Halpern iteration also possesses acceleration properties widely utilized in machine learning <cit.>. Even without theoretical guarantees, the literature <cit.> has demonstrated that PDHG with Halpern iteration achieves a faster convergence rate for function values in CT image reconstruction. Recently, first-order optimization algorithms (steepest descent (SD), fixed-point (FP) iteration, and ADMM) have played a critical role in implicit regularization, such as PnP priors <cit.> and RED <cit.>, which have achieved state-of-the-art performance in IR problems <cit.>. PnP methods, which replace proximal operators with denoisers, do not seek the minimization of an explicit objective function, which strongly limits their interpretation <cit.>. The RED prior defines a convex function whose gradient corresponds to the denoising residual itself, formulating a clear and well-defined objective function <cit.>. Thus, the gradient of RED can be used by these first-order optimization methods. To prove the global convergence of RED, based on fixed-point projection, Cohen et al. proposed the RED-PRO model <cit.> to bridge between RED and PnP using the hybrid steepest descent method (HSD) <cit.>. Assuming that the data term f is strongly convex and the residual of the denoising neural network is nonexpansive, Ryu et al. proved the convergence of PnP-FBS (Forward-Backward Splitting) and PnP-ADMM by the classical Banach contraction principle <cit.>, but imposing strong convexity on f excludes many IR tasks <cit.>. To overcome the strong convexity assumption, Hurault et al. proposed proximal denoisers for PnP methods, and the gradient step denoiser <cit.> can be interpreted as a proximal operator of an implicit regularization <cit.>. However, these works did not obtain the particular solution. It is crucial to obtain a unique, stable, and particular solution for IR problems <cit.>. Since the Halpern iteration naturally yields the particular solution (the projection of the initial point onto the fixed-point set), the Halpern-type preconditioned proximal point method, called HPPP (<ref>), is proposed to tackle the above issue, i.e., u^k+1=μ_k+1𝐚+(1-μ_k+1)𝒯(u^k), where 𝐚, u^0 ∈ℋ are the anchor point and the initial point, respectively, 𝒯=(ℳ+𝒜)^-1ℳ is an ℳ-FNE operator (<ref>), and {μ_k } satisfies ∑_k∈ℕμ_k = +∞, μ_k→ 0 (k→∞). In addition to TV, implicit regularization such as the PnP prior is also suitable for HPPP. Under the nonexpansive assumption, we find that the residual of the denoiser can also be interpreted as the proximal operator of the implicit regularization via the Moreau decomposition, making it straightforward to integrate into the primal-dual algorithm with the special semi-definite preconditioner ℳ.
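To make the HPPP update above concrete, the following minimal Python sketch applies the Halpern-type iteration to an abstract operator. The callable T (standing in for (ℳ+𝒜)^-1ℳ), the anchor a, and the step sizes μ_k = 1/(k+2) are illustrative placeholders, not the specific choices used in the experiments later in the paper.

[language=Python]
import numpy as np

def hppp(T, a, u0, n_iter=1000, mu=lambda k: 1.0 / (k + 2)):
    """Halpern-type preconditioned proximal point iteration
    u^{k+1} = mu_{k+1} * a + (1 - mu_{k+1}) * T(u^k).

    T  : callable standing in for u -> (M + A)^{-1} M u (assumed given)
    a  : anchor point
    u0 : initial point
    mu : step sizes with sum_k mu_k = +infinity and mu_k -> 0
    """
    u = np.asarray(u0, dtype=float)
    anchor = np.asarray(a, dtype=float)
    for k in range(n_iter):
        m = mu(k + 1)
        u = m * anchor + (1.0 - m) * T(u)
    return u

# Toy (firmly nonexpansive) operator: projection onto [1, +inf).
# Its fixed-point set is [1, +inf), and the iterates approach the
# projection of the anchor onto that set.
if __name__ == "__main__":
    T = lambda u: np.maximum(u, 1.0)
    print(hppp(T, a=np.array([12.0]), u0=np.array([-6.0])))  # ~ [12.]

Because ∑_k 1/(k+2) diverges while 1/(k+2) → 0, this step-size choice satisfies the conditions stated above.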
These splitting algortihms can be viewed as fixed-point iterations from the perspective of PPP. Based on HPPP and implicit regularization, [Since the gradient of RED is exactly the residual of the denoiser, we used the notion to denote the proximal operator of the implicit regularization.]we propose the implicit Gradient Regularization by Denoising via HPPP called GraRED-HP^3(see Algorithm <ref>) for IR problems. Not only can it converge to a particular solution, but it also leverages the data adaptivity of implicit regularization. This integration enriches the theoretical and algorithmic understanding of RED and PnP priors. The main contributions are as follows: * Theoretically, under the semi-definite preconditioner, we analyze the convergence of HPPP in Hilbert space. Compared with PPP, the proposed HPPP has the advantage of converging to a unique particular solution ^* =min_∈Fix(𝒯)-𝐚_ℳ^2(Lemma <ref>), which extends the results of the classic Halpern iteration. Let {^k}_k∈ℕ be the sequence generated by HPPP; we establish an 𝒪(1/k) convergence rate for 𝒯^k - ^k and ^k+1 - ^k. * Based on the special preconditioner ℳ, the primal-dual algorithm to solve (<ref>) is viewed as the fixed-point iteration ^k+1 =𝒯^k. We apply TV or implicit regularization to HPPP, then propose the GraRED-HP^3 algorithm for image restoration problems, further discuss the relationship between PnP-ADMM, GraRED-P^3 (GraRED via PPP), and the proposed algorithm. * We numerically verify the regularity of HPPP with a simple 1D example, and further demonstrate the state-of-the-art performance of GraRED-HP^3 on image deblurring and inpainting experiments. The results show that GraRED-HP^3 outperforms the classic CP <cit.> and PPP <cit.> algorithms with TV regularization, as well as the RED <cit.> and RED-PRO <cit.> algorithms with TNRD denoiser <cit.>. The rest of this paper is organized as follows. In section <ref>, we review some useful preliminaries for convergence analysis, including the background of PPP and implicit gradient RED. In section <ref>, we establish the convergence and convergence rate of HPPP and apply it to TV regularization. Based on the implicit gradient RED, we propose the GraRED-HP^3 algorithm for IR problems. In section <ref>, we verify the regularity and efficiency of the proposed algorithms with a 1D toy example. Furthermore, we validate the performance of the proposed GraRED-HP^3 algorithm through image deblurring and inpainting experiments. Finally, conclusions are presented in section <ref>. § PRELIMINARIES In this section, we provide some preliminaries which can analyze HPPP. We examine fundamental concepts related to the degenerate PPP and RED. Let ℋ be a real Hilbert space with inner product ⟨·,·⟩ and with the corresponding induced norm ·, 𝒜: ℋ→ 2^ℋ be a (maybe multivalued) operator. §.§ Preconditioned proximal point Bredies et al. <cit.> introduced a linear, bounded, self-adjoint and positive semi-definite operator admissible preconditioner ℳ: ℋ→ℋ. The proper preconditioner ℳ can make 𝒜+ℳ have a lower triangular structure, which conveniently calculates (𝒜+ℳ)^-1ℳ. An admissible preconditioner for the operator 𝒜: ℋ→ 2^ℋ is a bouned, linear, self-adjoint, and positive semi-definite operator ℳ:ℋ→ℋ such that 𝒯=(ℳ+𝒜)^-1ℳ is single-valued, of full-domain, and Lipschitz continuous. Therefore, the preconditioned proximal point iteration is written into ^0∈ℋ,^k+1 = 𝒯^k = (ℳ+𝒜)^-1ℳ^k. If ℳ = I, then 𝒯 is firmly non-expansive operator <cit.> and (<ref>) becomes the standard proximal point iteration. 
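For example, taking ℳ = I and 𝒜 = ∂ f for a proper, convex, lower semicontinuous f, the operator 𝒯 = (I+∂ f)^-1 is exactly the proximal map of f, and the iteration above is the classical proximal minimization algorithm u^k+1 = prox_f(u^k) = min_u∈𝒳{ f(u) + 1/2‖u-u^k‖^2 }; this is the baseline that the degenerate (semi-definite ℳ) case below generalizes.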
If ℳ is semi-definite, the property of 𝒯 is related with the degenerate ℳ-firmly nonexpansive (ℳ-FNE) operator <cit.>, which is associated with seminorm _ℳ = √(⟨ℳ, ⟩) and semi inner-product ⟨,⟩_ℳ = ⟨ℳ,⟩. The following notion extends monotone characteristics, i.e., ℳ-monotonicity. Let ℳ:ℋ→ℋ be a bouned linear positive semi-definite operator, then ℬ:ℋ→ 2^ℋ is ℳ-monotone if we have ⟨-', -'⟩ _ℳ≥ 0,∀ (, ), (',')∈ℬ. According to <cit.>, if ℳ^-1𝒜 is ℳ-monotone, then 𝒯 is ℳ-FNE, i.e., 𝒯-𝒯_ℳ^2+(I-𝒯)-(I-𝒯)^2_ℳ≤-^2_ℳ. By the KM iteration, the relaxed PPP algorithm is written into ^k+1=(1-λ_k)^k + λ_k 𝒯^k, where λ_k satifies ∑_kλ_k (2-λ_k) = +∞. §.§ Implicit Gradient RED Romano et al. introduced an explicit regularizer called Regularization by Denoising (RED) <cit.> to construct an explicit objective function, which is defined by g(𝐱) = 1/2⟨𝐱,𝐱-D_σ(𝐱)⟩, where D_σ:^n→^n is a denoiser (σ>0 denotes a noise level). If D_σ(𝐱) satisfies homogeneity D_σ(c𝐱) = cD_σ(𝐱)(∀ c>0) and has symmetric Jacobian, then ∇ g(𝐱) = 𝐱-D_σ(𝐱). Under the following nonexpansive Assumption (<ref>) <cit.> on residul R = I-D_σ, i.e., (𝐱-D_σ(𝐱))-(𝐲-D_σ(𝐲))≤𝐱-𝐲,∀𝐱,𝐲∈^n,A we will prove that there exists an implicit regularization ϕ such that R(𝐱) = ∇ g(𝐱) = prox_ϕ^*(𝐱), where ϕ^* is the conjugate of ϕ. Here is the definition of the proximal operator and characterization of proximal operators. <cit.> Given a function ϕ:^n→ℝ^+, the proximal operator of ϕ(𝐱) is defined by prox_ϕ (𝐱)= min_𝐱∈^n1/2𝐮-𝐱^2+ϕ(𝐱). A function h:^n→^n defined everywhere is the proximal operator of a proper convex l.s.c. (lower semicontinuous) function ϕ:^n→ℝ^+ if, and only if the following conditions hold jointly: (a) there exists a (convex l.s.c.) function ψ such that for each 𝐱∈^n, h(𝐱) = ∇ψ(𝐱); (b)h is nonexpansive, i.e., h(𝐱)-h(𝐲)≤𝐱-𝐲, ∀𝐱,𝐲∈^n. Assume that a denoiser D_σ:^n→^n satisfies homogeneity and Assumption (<ref>), and has symmetric Jacobian, then the gradient of RED defines an implicit regularization ϕ:^n→ℝ^+ such that R(𝐱) = ∇ g(𝐱) = prox_ϕ^*(𝐱), D_σ(𝐱) =prox_ϕ(𝐱). Let ψ(𝐱) = 1/2⟨𝐱,𝐱-D_σ(𝐱)⟩, h(𝐱) = R(𝐱) in Proposition <ref>. By Assumption (<ref>), there exist a function denoted ϕ^*(ϕ^* is the conjugate of ϕ) such that R(𝐱) = ∇ g(𝐱) = prox_ϕ^*(𝐱). By Moreau decomposition <cit.>, D_σ(𝐱) =prox_ϕ(𝐱). § HALPERN-TYPE PRECONDITIONED PROXIMAL POINT (HPPP) Compared to KM iteration, Halpern iteration <cit.> is an effective method to find a particular fixed point, i.e., ^k+1=λ_k+1^0+(1-λ_k+1)T^k, the sequence {^k}_k∈ℕ with suitable {λ_k}_k∈ℕ strongly converges to the projection of the initial point ^0 to Fix(T)  <cit.>, i.e., ^* = P_Fix(T)(^0), where P_Ω(^0) = min_∈Ω-^0^2 denotes the standard metric projection. Using the PPP to solve the inclusion problem (<ref>), the fixed point of 𝒯 is the solution of (<ref>). Choosing an appropriate anchor point, such as the degraded image or an image filled with ones or zeros, can bring the projection closer to the ground-truth solution, resulting in better restoration performance. Hence, the HPPP algorithm (<ref>) theoretically converges to the ground-truth solution under some known anchor point prior 𝐚. When 𝒯 satisfies the mild condition 𝒯-𝒯≤ C-_ℳ(C>0) <cit.>, the HPPP algorithm is able to find a unique solution ^* of (<ref>), where ^* = min_∈Fix(𝒯)-𝐚_ℳ^2(see Lemma <ref>). §.§ Convergence analysis Firstly, we analyze the convergence of HPPP, the sequence {^k} generated by HPPP (<ref>) converges to a particular fixed point. The strong and weak convergences are denoted → and ⇀, respectively. 
Let 𝒜:ℋ→ 2^ℋ be an operator with zer𝒜≠∅, and ℳ an admissible preconditioner such that (ℳ+𝒜)^-1 is L-Lipshitz. Let {^k } be the sequence generated by HPPP (<ref>). Assume that every weak cluster point of {^k}_k∈ℕ lies in Fix(𝒯), and {μ_k}_k∈ℕ satisfies * lim_k→∞μ_k = 0; * ∑_k∈ℕμ_k = +∞; * lim_k→∞μ_k+1-μ_k/μ_k = 0 or ∑_k∈ℕ|μ_k+1-μ_k |<∞. Then ^k converges strongly to ^* which is the unique solution of min_∈Fix(𝒯)-𝐚_ℳ^2. Firstly, we show that ^k-^* _ℳ→ 0, where ^* = P_Fix(𝒯)^ℳ(𝐚) = min_∈Fix(𝒯)-𝐚_ℳ^2. Then, we have ^k+1-^* _ ℳ^2 = μ_k+1(𝐚-^*)+(1-μ_k+1)(𝒯^k-^*) _ℳ^2 = μ_k+1^2𝐚-^* _ℳ^2+(1-μ_k+1)^2𝒯^k-^* _ ℳ^2 +2μ_k+1(1-μ_k+1)⟨𝐚-^*, 𝒯^k-^*⟩_ ℳ ≤ (1-μ_k+1)^k-^* _ℳ^2+μ_k+1δ_k+1, where δ_k = μ_k𝐚-^* _ ℳ^2+2μ_k(1-μ_k)⟨𝐚-^*, 𝒯^k-1-^*⟩_ ℳ. By Lemma <ref>, {^k } is bounded, there exists such that ^k⇀∈Fix(𝒯) according to the known condition. Assume that ^k_n⇀ such that lim sup_k→∞⟨𝐚-^*, ^k-^*⟩_ ℳ = lim_n→∞⟨𝐚-^*, ^k_n-^*⟩_ ℳ = ⟨𝐚-^*, -^*⟩_ ℳ. Since ^* is the unique solution of min_∈Fix(𝒯)𝐚-_ℳ^2 according to Lemma <ref>, which solves ⟨^*-𝐚, -^*⟩ _ℳ≤ 0, ∀∈Fix(𝒯). Therefore lim sup_k→∞δ_k≤ 0, from Lemma <ref> we obtain lim_k→∞^k-^* _ ℳ = 0. we then prove ^k- ^* → 0. ^k+1-^* = μ_k+1(𝐚-^*) +(1-μ_k+1)(𝒯^k-^* ≤μ_k+1𝐚-^* +(1-μ_k+1)𝒯^k - ^* ≤μ_k+1𝐚-^* +C(1-μ_k+1)^k - ^* _ℳ, it easily follows that ^k- ^* → 0 from μ_k+1→ 0 and ^k-^*_ℳ→ 0. It is easy to verify that μ_k = 1/k^α(0<α≤ 1) satisfy conditions (i)-(iii). Under mild conditions(i)-(iii), compared with <cit.>, we have provided an alternative method to obtain the limit point as ℳ-projection of the initial point. Let 𝒜:ℋ→ 2^ℋ be a maximal operator with zer𝒜≠∅, and ℳ an admissible preconditioner such that (ℳ+𝒜)^-1 is L-Lipshitz. Let {^k } be the sequence generated by HPPP (<ref>). If {μ_k}_k∈ℕ satisfies ∑_k∈ℕμ_k =∞, μ_k→ 0 (k→∞ ), lim_k→∞μ_k+1-μ_k/μ_k = 0 or ∑_k∈ℕ|μ_k+1-μ_k |<∞, then ^k converges strongly to ^*, which is the unique solution of min_∈Fix(𝒯)-𝐚_ℳ^2. Assume that ^k ⇀, 𝒯^k-^k→ 0 (Lemma <ref>), it follows that 𝒯^k = 𝒯^k-^k+^k⇀0+= , i.e., 𝒯^k⇀. From 𝒯^k-^k→ 0 and 𝒯^k =(ℳ+𝒜)^-1ℳ^k we have 𝒜𝒯^k ℳ(^k-𝒯^k)→ 0. By the maximality of 𝒜 we have that 𝒜 is closed in ℋ_weak×ℋ_strong (see <cit.>), hence 0∈𝒜, i.e., is a fixed point of 𝒯. Thus, every weak cluster point of {^k}_k∈ℕ lies in Fix(𝒯), it holds by Theorem <ref>. Compared Theorem <ref>, Corollary <ref> with <cit.> and <cit.>, {^k}_k∈ℕ generated by HPPP converges strongly to a particular fixed point of 𝒯. All conditions are the same except we only add the mild assumption about {μ_k}_k∈ℕ used for convergence analysis. {^k}_k∈ℕ, {𝒯^k}_k∈ℕ can converges weakly to the same fixed point of 𝒯 since 𝒯^k-^k→ 0. The Lipschitz regularity of (ℳ+𝒜)^-1 is a mild assumption especially in applications to splitting algorithms, which is used to prove the uniqueness of ℳ-projection and guarantee the boundedness of {^k}_k∈ℕ,{𝒯^k}_k∈ℕ. §.§ Convergence rate In this section, we will give the result of the sub-linear convergence rate of the gap between two successive iterations. We now present a technical Lemma <cit.> for convergence rate. Let M>0. Assume that { a_k }_k∈ℕ is a sequence of nonnegative real numbers which satisfy a_1<M and a_k+1≤ (1-γ b_k+1)a_k +(b_k-b_k+1)c_k, k≥ 1, where γ∈ (0,1], { b_k }_k∈ℕ is a sequence which is defined by b_k = min{2/γ k,1 }, { a_k }_k∈ℕ is a sequence of real numbers such that c_k≤ M<∞. Then, the sequence { a_k }_k∈ℕ satisfies a_k≤MJ/γ k, k≥ 1, where J =⌊2/γ⌋. Let 𝒯 = (ℳ+𝒜)^-1ℳ be a ℳ-FNE operator, (ℳ+𝒜)^-1 is L-Lipshitz, and { u^k } be the sequence generated by (<ref>). 
If μ_k = min{2/ k,1 }, then * ^k+1-^k _ℳ≤2M'/k, i.e., ^k+1-^k _ℳ = 𝒪(1/k); * ^k+1-^k = 𝒪(1/k), ^k-𝒯^k = 𝒪(1/k). For (i), denote M' = sup_k∈ℕ{^0-𝒯^k _ℳ}, since ^k+1-^k _ℳ ≤(1-μ_k+1)^k-^k-1 _ℳ+|μ_k+1-μ_k |M', it derives (i) by Lemma <ref>. For (ii), since ^k+1-^k ≤ C(1-μ_k+1)^k-^k-1_ℳ +M'|μ_k+1-μ_k| ≤2M'C/k-1+2M'/k(k+1), k≥ 2. and thus ^k-𝒯^k ≤^k-^k+1+μ_k+1𝐚-𝒯^k =𝒪(1/k). §.§ HPPP with TV regularization Chambolle-Pock(CP) is a first-order primal-dual algorithm to solve non-smooth convex optimization problems with known saddle-point structure <cit.>. Take the starting point (_0,_0)∈𝒳×𝒴, the CP primal-dual algorithm is written into the following form: {[ _k+1 = (I+τ∂ f)^-1(_k-τ𝐊^*_k),; _k+1 =(I+s ∂ g^*)^-1(_k+s 𝐊(2_k+1-_k)), ]. here we consider TV regularization 𝐊=∇. The convergence of CP algorithm was proved when step sizes satisfy τ s𝐊^2<1. By introducing the operator 𝒜 and preconditioner ℳ, set 𝒜 = [ ∂ f 𝐊^*; -𝐊 ∂ g^* ], ℳ = [ 1/τI -𝐊^*; -𝐊 1/sI ]. As fistly noticed in <cit.>, the CP method is a special PPP method 0∈𝒜^k+1+ℳ(^k+1-^k). Bredies et al. proved the weak convergence of the CP algorithm about the degenerate case τ s𝐊^2=1 <cit.>, i.e., ℳ is a semi-definite preconditioner. To apply HPPP with TV regularization, under the degenerate case τ s𝐊^2=1 we obtain {[ _k+1 = μ_k+1_a+(1-μ_k+1)(I+τ∂ f)^-1(_k-τ𝐊^*_k),; _k+1 =μ_k+1_a+(1-μ_k+1)(I+ s ∂ g^*)^-1(2 s 𝐊(_k+1-μ_k+1_a)/1-μ_k+1- s𝐊_k+_k), ]. where 𝐚 = (_a, _a), (_0, _0)∈𝒳×𝒴 are the anchor and initial points, respectively, and μ_k satisfies conditions(i)-(iii) of Theorem <ref>. §.§ HPPP with implicit regularization Recently, RED frameworks have achieved state-of-the-art performance for inverse problems by utilizing CNN (convolutional neural network) denoiser as regularization <cit.>, such as DnCNN <cit.>. Ryu et al. proposed a nonexpansive residual of DnCNN to ensure the convergence of PnP methods <cit.>. Gradient step denoisers, defined as D_σ = I - ∇ g_σ for Gaussian noise level σ, have been proposed to generalize RED <cit.>, where g_σ:^n→ is a scalar function parameterized by a differentiable neural network. The gradient step denoiser can be interpreted as the proximal operator of an implicit regularization <cit.>. The primal-dual algorithm to solve (<ref>) is viewed as the fixed-point iteration ^k+1 =𝒯^k=(𝒜+ℳ)^-1ℳ^k with 𝒜 = [ λ∂ f 𝐊^*; - 𝐊 (∂ g)^-1 ], ℳ= [ 1/τI -𝐊^*; -𝐊 1/sI ] to solve (<ref>). According to subsection <ref>, R() = prox_ϕ^*(), we consider the special application g = ϕ, 𝐊 =I with the degenerate ℳ(τ s = 1). The proposed HPPP algorithm has the advantage of converging to a particular solution than PPP. Due to the data adaptability of CNN, which has the powerful expressiveness to represent the implicit regularization ϕ <cit.>. Based on HPPP and GraRED, we propose the implicit Gradient RED via HPPP called GraRED-HP^3 (see Algorithm <ref>). According to (<ref>), we further discuss GraRED-P^3 Algorithm <ref> for IR problems. PnP-ADMM <cit.> is a well-known optimization method to solve (<ref>) when g = ϕ, i.e., ^k+1 = prox_ϕ(^k-^k) = D_σ(^k-^k), ^k+1 = prox_λ f(^k+1+^k), ^k+1 = ^k+^k+1-^k+1. Let 𝐰^k =^k +^k, then PnP-ADMM can be written into equivalent Douglas-Rachford Splitting (DRS) form: 𝐰^ k+1 = 𝐰^k +D_σ(2prox_λ f(𝐰^k)-𝐰^k)-prox_λ f(𝐰^k). Using 𝐰^k = ^k-^k,τ=s=1, λ_ k =1 in GraRED-P^3 results exactly in the above DRS iteration. Thus, PnP-ADMM can be obtained from the perspective of PPP <cit.>, and its convergence follows by <cit.>. The following Theorem <ref> verifies the convergence of Algorithm <ref> and Algorithm <ref>. 
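Before stating that result, the DRS form above can be made concrete. The sketch below is purely illustrative: it assumes a quadratic data term f(x) = 1/2‖x-𝐲‖^2, so that prox_{λ f} has the closed form (v+λ𝐲)/(1+λ), and it uses a simple smoothing filter as a stand-in for the learned denoiser D_σ (DnCNN or TNRD in the experiments); the function names are hypothetical and this is not the exact implementation used in the paper.

[language=Python]
import numpy as np

def prox_data(v, y, lam):
    # prox of lam * f with f(x) = 0.5 * ||x - y||^2 (illustrative data term)
    return (v + lam * y) / (1.0 + lam)

def denoise(v):
    # Placeholder denoiser D_sigma; a learned denoiser plays this role in the paper.
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(v, kernel, mode="same")

def grared_p3_drs(y, lam=1.0, n_iter=200):
    # DRS form with tau = s = 1 and lambda_k = 1:
    # w^{k+1} = w^k + D_sigma(2 prox_{lam f}(w^k) - w^k) - prox_{lam f}(w^k)
    w = y.copy()
    for _ in range(n_iter):
        x = prox_data(w, y, lam)
        w = w + denoise(2.0 * x - w) - x
    return prox_data(w, y, lam)  # recovered estimate

# usage: x_hat = grared_p3_drs(np.random.rand(64))

With λ_k = 1 and τ = s = 1 this loop is exactly the DRS iteration displayed above; replacing the placeholder denoiser with a learned one recovers the PnP behaviour discussed in the experiments.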
Assume that a denoiser D_σ:ℋ→ℋ satisfies homogeneity and Assumption (<ref>), and has symmetric Jacobian, relaxing parameter 0<inf_kλ_k ≤sup_kλ_k<2 and RED is convex, then {^k}_k∈ℕ generated by Algorithm <ref> converges to some uncertain solution ^* of (<ref>), while {^k}_k∈ℕ generated by Algorithm <ref> converges to the particular solution P_Fix(𝒯)^ℳ(𝐚) of (<ref>). Followed by <cit.> and Corollary <ref>. § EXPERIMENTS In this section, we show the numerical experiments of the algorithms discussed in section 3. Firstly, we will verify the regularity of HPPP by an easy 1D example. Then, we compare the CP (<ref>), PPP (<ref>), GraRED-HP^3 algorithms (see Algorithm <ref>), and GraRED-P^3 (see Algorithm <ref>) for image deblurring and inpainting under the same setting, and verify the efficiency of the proposed algorithms. §.§ A toy example We consider the following optimization problem in ℝ, i.e., min_x∈ℝ f(x)+g(x), where f(x) =max{-x, 0} and g(x) = max{1-x, 0} (See Figure <ref>). Since for any y∈ℝ, min{(1+y) x-1, y x}= (1+y) x-1, x<1 y x, x ≥ 1, then g^*(y) =max _x[y x-max{1-x, 0}] = y+δ_[-1,0](y), y ∈ℝ, where δ_C(y) = 0, y∈ C, ∞, y ∉ C. Therefore, the corresponding saddle-point problem is: min_x∈ℝmax_y∈ℝxy+f(x)-g^*(y). We denote the optimal set X^* = [1,+∞) = min_x∈ℝ f(x)+g(x) and the primal-dual objective function F(x,y) = xy+f(x)-g^*(y). Let us solve the saddle-point set { (x^*,y^*):F(x^*, y)≤ F(x^*, y^*)≤ F(x, y^*), ∀ (x, y)∈ℝ^2 } of F(x,y). Fixed x^*≥ 1, then max_-1≤ y≤ 0{ x^*y-g^*(y) } = max_-1≤ y≤ 0{ x^*y-y } ={[ 0, x^*>1, y=0;; 0, x^*=1, y∈ [-1,0]. ]. If x^* = 1, assume that -1≤ y^*< 0, then F(1,y^*)=0, while F(x,y^*) = y^*(x-1)+max{ -x,0 } and there exists x=2 such that F(2,y^*)=y^*<F(1,y^*), which leads to contradiction. Therefore, the saddle-point set is Ω = { (x^*, y^*):x^*≥ 1,y^*=0 }. For this toy example shown in Figure <ref>, we choose the same initial point ^0 = (-6,6) or (0,0), three different anchor points 𝐚 = (12,10),(12,9),(12,8), the semi-definite ℳ = [ 1 -1; -1 1 ], and the total itertation number N=1000. As shown in Figure <ref>, ^k generated by HPPP (<ref>) can converge to the particular saddle point controlled by the anchor point. While the limit of ^k generated by PPP (<ref>) may be uncertain, which is related with ^0 and λ_k. For example, the sequence ^k oscillates around the limit (1.8, 0) shown in Figure <ref>. As shown in Figure <ref>, min_∈Fix(𝒯) -𝐚_ℳ^2 = min_(x,y)∈Ω(x-x_a-(y-y_a))^2 can be geometrically interpreted as ℳ-projection, i.e., the point ∈Fix(𝒯) projection onto the line x-y =x_a-y_a through the anchor point 𝐚 =(x_a, y_a). Therefore, the ℳ-projection of 𝐚 =(x_a,y_a) onto Ω is definitely solved by P_Ω^ℳ (𝐚) = (1,0), x_a-y_a-1≤ 0; (x_a-y_a,0), x_a-y_a-1> 0. §.§ Image deblurring Firstly, we compared with classic TV regularization about CP <cit.>, PPP <cit.> , and HPPP(see details in <ref>). For classic TV -ℓ^2 regularization, min_𝐱∈ℝ^nλ/2-𝐲_2^2+β∇𝐱_1, where 𝐲 is the degraded image, is a linear operator. In case can be written as a convolution, i.e., = 𝐤∗ 𝐱, where 𝐤 is the blurring convolution kernel. The primal-dual model of TV -ℓ^2 deblurring is min_∈ℝ^nmax_𝐩∈ℝ^n×ℝ^n -⟨,div𝐩⟩+λ/2-𝐲_2^2-δ_P(𝐩), where f() =λ/2-𝐲_2^2,g^*(𝐩) = δ_P(𝐩), P ={𝐩∈ℝ^n×ℝ^n : 𝐩_∞≤β}, 𝐩_∞ is the discrete maximum norm defined as: 𝐩_∞ = max_i,j|𝐩_i, j|, |𝐩_i,j| =√((𝐩_i,j^1)^2+(𝐩_i,j^2)^2). We can calculate the proximal operator (I+τ∂ g^*)^-1 of the indicator g^*(𝐩), i.e., 𝐩=(I+τ∂g^*)^-1(𝐩̃) ⟺ 𝐩_i, j=𝐩̃_i, j/max(β,|𝐩̃_i, j|). The resolvent operator for f() can be computed by FFT. 
=(I+τ∂ f)^-1() =min _-/2 τ+λ/2 𝐤 * -𝐲_2^2 =ℱ^-1(τλℱ(𝐲) ℱ^*( 𝐤)+ℱ(𝐮̃)/τλℱ( 𝐤)^2+1), where ℱ(·) and ℱ^-1(·) denote the FFT and inverse FFT, respectively. We use a 2D Gaussian function with a standard deviation of 1.6 to convolve 10 test gray images, and finally obtain the degraded images with an additive WGN with noise level 0.01. Firstly, we compared three algorithms, CP, PPP, and the proposed HPPP. All the algorithms use the degraded images as initial points. We calculate the norm K= 1.75, and choose the total iteration N=400, balance coefficients λ =2,β = 5× 10^-4. Their parameters are given in Table <ref>. Both GraRED-P^3 and GraRED-HP^3 use DnCNN, other parameters are used below: * GraRED-P^3 : τ = 1, s =1, λ_k = 0.2, λ = 20; * GraRED-HP^3 : τ = 1, s =1, μ_k = 1/(k+2), λ = 20, _a = , _a = 0. Secondly, we compared with RED and RED-PRO. The RED-PRO model uses the hybrid steepest descent method (HSD) <cit.> to solve IR problems. Following <cit.>, a 9× 9 uniform point spread function (PSF) or a 2D Gaussian function with a standard deviation of 1.6 are used to convolve test images. We finally obtained the degraded images with an additive WGN with noise level σ = √(2). The original RGB image is converted to the YCbCr image, PnP restoration algorithms are applied to the luminance channel, and then the reconstruction image is returned to RGB space to obtain the final image. PSNR is measured on the luminance channel of the ground truth and the restored images. Table <ref> shows PSNR (dB) of restoration results on CP, PPP, HPPP, GraRED-P^3 and GraRED-HP^3. The performance of three different methods is evaluated using PSNR measure. The best recovery results are highlighted in bold. From Table <ref>, GraRED-P^3 and GraRED-HP^3 are better than classic algorithms with explicit TV regularization, which demonstrates implicit regularization is more powerful to regularize inverse imaging problems. We visualize the numerical comparison between GraRED-P^3, GraRED-HP^3, CP, PPP, and HPPP in Figure <ref>. To further compare the robustness of the initial points between the proposed HPPP, CP, and PPP with TV regularization. As shown in Figure <ref>, we plot their respective evolutions of PSNR values for iterations for the image House with 10 random initial points. The HPPP algorithm converges faster (less than 200 iterations) and achieves better PSNR values than CP and PPP algorithms. Once the anchor point is chosen, the proposed algorithm is more robust than CP and PPP algorithms for image deblurring with random initializations. To verify Corollary <ref>, we show the trend of convergence rate of the gap ^k+1-^k between two successive iterations throughout the iterations with step size min{2/k, 1}(k≥ 1) in Figure <ref>. From the deblurring experiment Table <ref> and Figure <ref>, GraRED-P^3 and GraRED-HP^3 achieve better performance than RED, RED-PRO, and RRP, which illustrates that KM or Halpern iteration used in PPP methods is effective. §.§ Image inpainting In this section, we use the proposed algorithm to solve TV image inpainting problems and compare their numerical results with CP <cit.>, PPP <cit.>, and HPPP algorithms. The discrete image inpainting model is min_∈ℝ^nλ𝐌⊙- 𝐲_F^2+β∇_1, where ∇_1 is the TV regularization, λ, β are balance coefficients, ⊙ indicates pointwise multiplication. Then the saddle-point problem is min_∈ℝ^nmax_p∈ℝ^n×ℝ^n -⟨, div 𝐩⟩+ λ 𝐌⊙- 𝐲 _F^2-δ_P(𝐩), where f() = 𝐌⊙- 𝐲_F^2, g^*(𝐩) = δ_P(𝐩), P ={𝐩∈ℝ^n×ℝ^n : 𝐩_∞≤β}, 𝐩_∞ is the discrete maximum norm. 
Their resolvent operators of f, g^* are 𝐩=(I+τ∂g^*)^-1(𝐩̃) ⟺ 𝐩_i, j=𝐩̃_i, j/max(β,|𝐩̃_i, j|) and = (I+τ∂f)^-1() = 2τ𝐌⊙𝐲 +/1+2τ𝐌, the multiplication and division operators should be understood pointwise in the above formula. We test 10 common images for evaluation. The first 𝐌 is filled with a Bernoulli random mask whose each pixel is missing with probability p=0.5, i.e., 50% of pixels are missed. The second 𝐌 is a character mask where about 19% of pixels are missed. All the algorithms start their iterations with the degraded images. For classic algorithms, we fix the balance parameter α = 0.01 and the total number N = 400, and use the following other parameters : * HPPP: τ = s = 1/K = 0.57, anchor point _a = 1∈ℝ^m× n, _a = 0·∇_a, stepsize μ_k = 1/10(k+2); * PPP: τ = s = 1/K = 0.57, λ_k= 1.6 or λ_k= 1.2; * CP: τ = s = 1/K = 0.57. Both GraRED-P^3 and GraRED-HP^3 use DnCNN <cit.>, other parameters are used below: * GraRED-P^3 : τ = 10, s =0.1, λ_k = 0.2, λ = 5; * GraRED-HP^3 : τ = 10, s =0.1, μ_k = 0.05/(k+2), λ = 5, _a = , _a = 0. In Table <ref> and Table <ref>, we compared the numerical performance of classic algorithms with TV regularization. As we can see from both two tables, the proposed two algorithms outperform other algorithms. In Figure <ref> and <ref>, we compare visualization results of House degraded by Bernoulli random mask and character mask, the proposed algorithms achieve better visual performance than TV regularization. Moreover, we compare the recovery results about different anchors 𝐚 =(_a, _a ) with fixed _a = 0 and step size 1/k+1. As shown in Figure <ref>, the anchor _a = 1 can achieve the best peroformance for random inpainting, which illustates the projection point P_Fix(𝒯)^ℳ(𝐚) of 𝐚 =(1, 0) is closest to the true solution. Anchors selection is not difficult, GraRED-HP^3 can achieve similar peroformance with other anchors. § CONCLUSIONS In this paper, based on Halpern iteration and gradient RED, we propose HPPP and GraRED-HP^3 for IR problems, which can converge strongly to a particular fixed point P_Fix(𝒯)^ℳ(𝐚). Numerical experiments verify the regularity of HPPP, and the effectiveness GraRED-HP^3 for image deblurring and inpainting, which can achieve better performance than classic algorithms with TV regularization. In the future, we plan to study the convergence of nonconvex implicit regularization <cit.> and extend the definition of ℳ-monotonicity to ℳ-comonotonicity for nonconvex case <cit.>. § BOUNDEDNESS AND ℳ-PROJECTION Firstly, we study the boundedness and asymptotic regularity of {^k}_k∈ℕ generated by (<ref>). To further establish regularity of {^k}_k∈ℕ (Theorem <ref>), we introduce a important Lemma from <cit.>. Let {a_k}_k∈ℕbe a sequence of non-negative real numbers satisfying a_k+1≤ (1-μ_k)a_k+μ_k β_k + γ_k, where {μ_k}_k∈ℕ, {β_k}_k∈ℕ, {γ_k}_k∈ℕ satisfies the following conditions: * {μ_k } converges to 0 in [0,1], and ∑_k=0^∞μ_k = +∞, or equivently ∏_k=0^∞ (1-μ_k)= 0; * limsup_k→∞β_k ≤ 0; * γ_k ≥ 0, ∑_k=0^∞γ_k <∞. Then lim_k→∞a_k=0. Let 𝒯 be a ℳ-FNE operator, (ℳ+𝒜)^-1 is L-Lipshitz, and {^k } be the sequence generated by (<ref>), 𝐚,^0∈ℋ, and lim_k→∞μ_k+1-μ_k/μ_k = 0 or ∑_k∈ℕ|μ_k+1-μ_k |<∞, which satisfies the following: * {^k }, {𝒯^k}(k∈ℕ) are bounded; * 𝒯^k-^k _ℳ→ 0, k→∞; * ^k+1-^k → 0, 𝒯^k-^k → 0, k→∞. Let ℳ =𝒞𝒞^* be a decomposition of ℳ, C = L𝒞. For ', ”∈ℋ, 𝒯'-𝒯” = (ℳ+𝒜)^-1𝒞𝒞^*'-(ℳ+𝒜)^-1𝒞𝒞^*” ≤ L𝒞'-”_ℳ = C'-”_ℳ For (i), ^k is bounded. 
For any ^*∈Fix(T), ‖^k+1 -^*‖_ℳ = μ_k+1(𝐚-^*)+(1-μ_k+1)(𝒯^k-^*)_ℳ ≤μ_k+1𝐚-^* _ℳ+(1-μ_k+1)^k-^* _ℳ ≤max{𝐚-^* _ℳ,^k-^* _ℳ} ≤…≤max{𝐚-^* _ℳ,^0-^* _ℳ}. Furthermore, ^k+1-^* ≤μ_k+1 𝐚-^* +(1-μ_k+1) 𝒯^k-^* ≤μ_k+1 𝐚-^* +(1-μ_k+1)·C ^k-^* _ℳ ≤max{ 𝐚-^* , C ^k-^* _ℳ }<+∞. and 𝒯^k-^* ≤C ^k-^* _ℳ<+∞ So is the sequence {𝒯^k }_k∈ℕ. For (ii), we should show 𝒯 is ℳ-asymptotically regular 𝒯^k-^k→ 0. ^k+1= μ_k+1𝐚+(1-μ_k+1)𝒯^k. There exists a positive real number M>0 such that 𝐚-𝒯^k _ℳ≤ 𝐚-^* _ℳ+ ^k-^* _ℳ≤M, it follows from the ℳ-FNE operator 𝒯 and boundedness of {𝐚-𝒯^k _ℳ}_k∈ℕ, ^k+1-^k _ℳ = (μ_k+1-μ_k)(𝐚-𝒯^k-1)+(1-μ_k+1)(𝒯^k-𝒯^k-1) _ℳ ≤ (1-μ_k+1)𝒯u^k-𝒯u^k-1_ℳ+|μ_k+1-μ_k |𝐚-𝒯^k-1_ℳ ≤ (1-μ_k+1)^k-^k-1_ℳ+|μ_k+1-μ_k | M If lim_k→∞μ_k+1-μ_k/μ_k = 0 or ∑_k∈ℕ|μ_k+1-μ_k |<∞, then ^k+1-^k _ℳ→ 0 from Lemma <ref>. ^k-𝒯^k _ℳ = ^k-^k+1+^k+1-𝒯u^k _ℳ ≤^k-^k+1_ℳ+μ_k+1𝐚-𝒯^k _ℳ. Thus, lim_k→∞^k-𝒯^k _ℳ→ 0. For (iii), there exists M' such that ^0-𝒯^k-1≤ M', ^k+1-^k ≤ (1-μ_k+1)𝒯^k-𝒯^k-1+|μ_k+1-μ_k |^0-𝒯^k-1 ≤ C(1-μ_k+1)^k-^k-1_ℳ+M'|μ_k+1-μ_k |, Therefore ^k+1-^k→ 0. ^k-𝒯^k ≤ ^k-^k+1 +μ_k+1 𝐚-𝒯^k →0,k→∞. Let 𝒜:ℋ→ 2^ℋ be an operator with zer𝒜≠∅, and ℳ an admissible preconditioner such that 𝒯-𝒯≤ C-_ℳ(C>0), and Fix(𝒯) be a closed convex subset of a Hilbert space ℋ, and -𝐚_ℳ^2 is a proper lower-semicontinuous differentiable convex function. There exists a unique solution u^*=min_∈Fix(𝒯)-𝐚_ℳ^2 which solves: ⟨^*-𝐚, -^*⟩_ ℳ≥ 0, ∀∈Fix(𝒯). Let l() = -𝐚_ℳ^2 is the differentiable convex function. Assume that ^*∈Fix (𝒯) is the optimal solution, Fix( 𝒯) is the convex set, thus t∈ (0,1), ^*+t(-^*)∈Fix (𝒯) for ∀∈Fix( 𝒯 ), lim_t→ 0l(^*+t(-^*))-l(^*)/t = ⟨ l'(^*), -^*⟩ = ⟨ 2ℳ(^*-𝐚),-^*⟩ = 2⟨^*-𝐚, -^*⟩ _ℳ≥ 0. If ^** is the another solution such that ⟨^**-a, -^**⟩ _ℳ≥ 0. Replace with ^**, ^* in the above two inequalities, respectively. ⟨^*-𝐚, ^**-^*⟩ _ℳ≥ 0, ⟨^**-𝐚, ^*-^**⟩ _ℳ≥ 0. If we add two inequalities, then we obtain ^*-^**_ ℳ = 0. Since ^*-^** = 𝒯^*-𝒯^** ≤C ^*-^** _ℳ=0 It follows ^* = ^** from above inequality. As mentioned in <cit.> and uniqueness projection onto Fix(𝒯), we can introduce the following notion of ℳ-projection. Assume ∀^0∈ℋ, there exist an unique point ^* ∈Fix(𝒯) such that ^*-^0_ℳ≤-^0_ℳ(∀∈Fix(𝒯)), then ^* is called the ℳ-projection of ^0 onto Fix(𝒯), also denoted P_Fix(𝒯) ^ℳ(^0). Let 𝒯 = (ℳ+𝒜)^-1ℳ such that 𝒯-𝒯≤ C-_ℳ(C>0) and ^0∈ℋ, then the following conditions are equivalent: * ^* = P_Fix(𝒯) ^ℳ(^0); * ⟨^*-^0, -^*⟩_ ℳ≥ 0, ∀∈Fix(𝒯). See Lemma <ref> and Definition <ref>. siamplain
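As a self-contained numerical companion to the 1D toy problem of Section 4.1 (illustrative only, not the GraRED algorithms above): the sketch below uses a plain primal–dual sweep as the operator 𝒯 and adds a Halpern anchor on top. The proximal maps are derived from f(x) = max(−x, 0) and g*(y) = y + δ_[−1,0](y); the anchor 𝐚 = (0, 0) is a hypothetical choice satisfying x_a − y_a − 1 ≤ 0, for which the ℳ-projection formula of Section 4.1 predicts the limit (1, 0).

import numpy as np

def prox_f(v, tau=1.0):
    # prox of f(x) = max(-x, 0): shift by tau on the far-negative side, clamp at 0
    if v < -tau:
        return v + tau
    return max(v, 0.0)

def prox_gstar(w, sigma=1.0):
    # prox of g*(y) = y + indicator_{[-1,0]}(y): shift by sigma, project onto [-1, 0]
    return min(max(w - sigma, -1.0), 0.0)

def T(z, tau=1.0, sigma=1.0):
    # one primal-dual sweep for min_x max_y  x*y + f(x) - g*(y), with K = 1
    x, y = z
    x_new = prox_f(x - tau * y, tau)
    y_new = prox_gstar(y + sigma * (2.0 * x_new - x), sigma)
    return np.array([x_new, y_new])

a = np.array([0.0, 0.0])      # anchor point (hypothetical; x_a - y_a - 1 <= 0)
z = np.array([-6.0, 6.0])     # initial point used in the toy experiment
w = z.copy()
for k in range(5000):
    mu = 1.0 / (k + 2)                       # Halpern steps: mu_k -> 0, sum mu_k = inf
    z = mu * a + (1.0 - mu) * T(z)           # anchored (HPPP-style) iteration
    w = T(w)                                 # un-anchored (PPP-style) iteration
print("anchored iterate   :", z)   # should drift toward the predicted M-projection (1, 0)
print("un-anchored iterate:", w)   # the limit, if any, depends on the initial point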
http://arxiv.org/abs/2407.13433v1
20240718120119
Precision bounds for quantum phase estimation using two-mode squeezed Gaussian states
[ "Jian-Dong Zhang", "Chuang Li", "Lili Hou", "Shuai Wang" ]
quant-ph
[ "quant-ph" ]
[]zhangjiandong@jsut.edu.cn School of Mathematics and Physics, Jiangsu University of Technology, Changzhou 213001, China Research Center for Novel Computing Sensing and Intelligent Processing, Zhejiang Lab, Hangzhou 311121, China School of Mathematics and Physics, Jiangsu University of Technology, Changzhou 213001, China School of Mathematics and Physics, Jiangsu University of Technology, Changzhou 213001, China § ABSTRACT Quantum phase estimation based on Gaussian states plays a crucial role in many application fields. In this paper, we study the precision bound for the scheme using two-mode squeezed Gaussian states. The quantum Fisher information is calculated and its maximization is used to determine the optimal parameters. We find that two single-mode squeezed vacuum states are the optimal inputs and the corresponding precision bound is superior to the Heisenberg limit by a factor of 2. For practical purposes, we consider the effects originating from photon loss. The precision bound can still outperform the shot-noise limit when the lossy rate is below 0.4. Our work may demonstrate a significant and promising step towards practical quantum metrology. Precision bounds for quantum phase estimation using two-mode squeezed Gaussian states Shuai Wang July 22, 2024 ===================================================================================== § INTRODUCTION Phase estimation based on optical interferometers is a fundamental means to achieve high-precision measurements for many important physical quantities, concentration, magnetic fields, gravitational waves, to name a few. Quantum phase estimation can provide enhanced precision beyond the shot-noise limit, which is the precision bound attainable by exploiting classical resources. In this regard, the quantum Fisher information is an effective tool to evaluate the precision bound for a specific input and parameterization <cit.>. It is particularly important for single-phase estimation, for the precision bound given by the quantum Fisher information can always be asymptotically saturated through specific positive-operator-valued measure (POVM) and maximum likelihood estimation. The quantum Fisher information corresponding to the shot-noise limit and the Heisenberg limit can be expressed as F_SNL = N and F_HL = N^2, where N is the total average photon number employed for phase estimation. In terms of the parameter estimation theory, any interferometer can be divided into three parts including probe preparation, phase encoding and POVM. The precision bound is completely determined by the first two parts. In general, the part of phase encoding is deterministic; therefore, improving the precision bound requires the engineering of the probe preparation. Related to this, two methods are usually deployed. The first method utilizes a linear beam splitter to combine two single-mode inputs, while the two inputs in the second method are combined by a nonlinear beam splitter, i.e., optical parametric amplifier (OPA). The interferometers with these two methods are known as linear and nonlinear interferometers, respectively. Over the past decades, numerous efforts have been made in quantum phase estimation based on linear interferometers. Many exotic quantum states were considered as the input, such as two-mode squeezed vacuum states <cit.>, N00N states <cit.>, entangled coherent states <cit.>, twin Fock states <cit.> and coherent along with squeezed vacuum states <cit.>. 
The precision bounds for the schemes using these above states can outperform the shot-noise limit and reach the Heisenberg limit. In recent years, quantum phase estimation using nonlinear interferometers has also received lots of attention <cit.>. In this configuration, two inputs undergo a two-mode squeezing process provided by the first OPA. The correlation between the two modes are improved, and the total average number of photons is also increased. These two types of interferometers have their own advantages. As a result, quantum phase estimation using hybrid or nested interferometers composed of nonlinear and linear beam splitters has also drawn considerable interest <cit.>. The aforementioned studies analyzed the precision bounds for the schemes using some specific inputs. Recently, exploring the optimal inputs by maximizing the quantum Fisher information has been reported in a linear interferometer. Lang et al. analyzed the optimal input for the second port with the first port fed by a coherent state <cit.>. Zhang et al. discussed the optimal single-mode input <cit.>. The optimal separable Gaussian inputs were showed by Sparaciari et al. <cit.>. In this paper, we extend the problem of determining the optimal inputs to nonlinear interferometers. We consider two general single-mode Gaussian states as the inputs. The precision bound given by the quantum Fisher information is calculated and maximized by selecting the best parameters. We analyze the precision bound in a lossy environment and discuss the tolerance against photon loss. This work may provide a positive complement to the aspect of quantum phase estimation using nonlinear interferometers and relevant variants. The remainder of this paper is organized as follows. Section <ref> introduces the estimation scheme and provides the general expression for the quantum Fisher information. Section <ref> determines the optimal inputs by maximizing the quantum Fisher information, and the corresponding precision bound is analyzed. In Sec. <ref>, we study the precision bound in the presence of photon loss and compare it with the precision bound of a classical-quantum hybrid scheme. Finally, we summarize main results in Sec. <ref>. § ESTIMATION SCHEME AND QUANTUM FISHER INFORMATION Figure <ref> gives the schematic diagram of quantum phase estimation using two-mode squeezed Gaussian states. Two single-mode Gaussian states pass through an OPA and evolve into two-mode squeezed Gaussian states. The estimated phase φ is encoded into the state in mode a, and a POVM is performed. Since the two-mode squeezing process of OPA is fixed, the task in this paper is to find the optimal Gaussian inputs. The most general single-mode Gaussian states can be expressed as displaced squeezed thermal states <cit.>. Meanwhile, it is no use to prepare thermal states as the inputs <cit.>. As a consequence, we only need to consider pure Gaussian inputs, two single-mode displaced squeezed vacuum states. Without loss of generality, we assume that two states in modes a and b are the same. Specifically, the input can be written as | in⟩ = D( α)S( ξ)| 0 ⟩⊗ D( α)S( ξ)| 0 ⟩, where D( α) is the displacement operator with α = | α|e^iδ being the displacement amplitude, S( ξ) is the squeezing operator with ξ = re^iθ being the squeezing parameter. Further, the average photon number of the inputs is given by N_in = ⟨in|( a^†a + b^†b)| in⟩ = 2( | α|^2 + sinh^2r ), where a^† (b^†) and a (b) are creation and annihilation operators for mode a (b). 
The relation between the mode operators and displacement operator is D^†( α)aD( α) = a + α, D^†( α)bD( α) = b + α, and the relation between the mode operators and single-mode squeezing operator is S^†( ξ)aS( ξ) = acosh r - e^iθa^†sinh r, S^†( ξ)bS( ξ) = bcosh r - e^iθb^†sinh r. Due to the gain of the OPA, the total average number of photons in our scheme is found to be N = ⟨in|S_OPA^†( g )( a^†a + b^†b )S_OPA^( g )| in⟩ = N_incosh2g + 2| α|^2cos 2δsinh 2g + 2sinh ^2g, where we used the following relation S_OPA^†( g )aS_OPA^( g ) = acosh g + b^†sinh g S_OPA^†( g )bS_OPA^( g ) = bcosh g + a^†sinh g with S_OPA^( g ) being the two-mode squeezing operator and g being the gain. Now we analyze the precision bound of our scheme. Since the inputs and operations are Gaussian, all information regarding the estimated phase can be obtained via the mean (first moment) and variance (second moment) of the outputs. For this reason, we calculate the quantum Fisher information through the use of symplectic geometry method. Let us consider the vector composed of quadrature operators of modes a and b, 𝐗 = [ [ x_a p_a x_b p_b ] ]^𝖳, where x_m = m^† + m p_m = i( m^† - m ) with m ∈{a,b}. Then the mean vector of the inputs is given by 𝐌_in = ⟨𝐗⟩ = 2 | α | · [ [ cosδ sinδ cosδ sinδ ] ]^𝖳, and we can find the covariance matrix of the inputs Σ _in = [ [ e^2r 0 0 0; 0 e^ - 2r 0 0; 0 0 e^2r 0; 0 0 0 e^ - 2r ]], whose arbitrary matrix element is defined by Σ _kn = 1/2⟨X_kX_n + X_nX_k⟩ - ⟨X_k⟩⟨X_n⟩. Based on the relations between optical operations and quadrature operators, we can write the transformation matrix for OPA 𝐔_OPA = [ [ cosh g 0 sinh g 0; 0 cosh g 0 - sinh g; sinh g 0 cosh g 0; 0 - sinh g 0 cosh g ]] and that for phase encoding 𝐔_PE = [ [ cosφ - sinφ 0 0; sinφ cosφ 0 0; 0 0 1 0; 0 0 0 1 ]]. By using the following transformations 𝐌_φ = 𝐔_PE𝐔_OPA𝐌_in, Σ_φ = 𝐔_PE^𝐔_OPA^Σ _in^𝐔_OPA^𝖳𝐔_PE^𝖳, we can obtain the mean vector and covariance matrix of the outputs after passing through the estimated phase. On the basis of the above results, the quantum Fisher information turns out to be F = 1/2Tr{∂ _φΣ[ Σ( ∂ _φΣ)^ - 1Σ^𝖳 + 1/4Ω( ∂ _φΣ)^ - 1Ω^𝖳]^ - 1} + ( ∂ _φM)^𝖳Σ^ - 1( ∂ _φM), where 𝐌 = 𝐇𝐌_φ, Σ = 𝐇^Σ _φ^𝐇_^𝖳, ∂ _φM = ∂M/ . -∂φ and ∂ _φΣ = ∂Σ/ . -∂φ. The specific forms of two transformation matrices are given by Ω = [ [ 0 1 0 0; - 1 0 0 0; 0 0 0 1; 0 0 - 1 0 ]] and 𝐇 = 1/2[ [ 1 i 0 0; 1 - i 0 0; 0 0 1 i; 0 0 1 - i ]]. After straightforward calculation, we finally get the general expression for the quantum Fisher information of our scheme F = 1/4e^4r [ 8| α|^2e^2r( 1 + e^ - 4g)( e^4r + 4gcos^2δ + sin^2δ) + ( 1 + e^8r )( 1 + cosh 4g) ] - 1. § OPTIMAL INPUTS AND PRECISION BOUND Given the current experimental techniques, we can reasonably assume | α|^2≫1, sinh ^2r ≫1 and sinh ^2g ≫1. For simplicity, throughout this paper we use the following abbreviations G ≡sinh ^2g ≈cosh ^2g ≈e^2g/4≫1, S ≡sinh ^2r ≈e^2r/4≫1. We define the ratio of displacement portion to squeezing portion in the inputs as k, i.e., k ≡| α|^2/sinh ^2r. Related to this, the total average photon number can be approximately expressed as N ≈ 4GS( 1 + 2kcos^2δ ) and the quantum Fisher information can be approximately expressed as F≈ 32G^2S^2( 1 + 4kcos^2δ ). It is not difficult to find the following inequality 1 + 4kcos^2δ≤ (1 + 2kcos^2δ )^2 with equality if the condition k = 0 is satisfied. Hence, k = 0 is the optimal ratio for maximizing the quantum Fisher information, and the optimal inputs are two squeezed vacuum states. 
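This optimality argument can be checked with a few lines of code: in the regime G, S ≫ 1 the approximations above give F/(2N²) = (1 + 4k cos²δ)/(1 + 2k cos²δ)², which equals one only at k = 0. A minimal sketch (purely illustrative):

import numpy as np

def ratio(k, delta=0.0):
    # F / (2 N^2) in the large-G, large-S regime, from
    # N ~ 4GS(1 + 2k cos^2 delta) and F ~ 32 G^2 S^2 (1 + 4k cos^2 delta)
    x = k * np.cos(delta) ** 2
    return (1.0 + 4.0 * x) / (1.0 + 2.0 * x) ** 2

for k in [0.0, 0.1, 0.5, 1.0, 5.0]:
    print(f"k = {k:4.1f}   F/(2N^2) = {ratio(k):.4f}")
# the ratio is 1 at k = 0 and strictly below 1 for any k > 0 (with cos(delta) != 0)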
At this point, the average photon number of the inputs is N_in = 2sinh^2r and the total average photon number is N = N_incosh2g + 2sinh ^2g. The corresponding quantum Fisher information is reduced to F_S⊗S = 1/4e^4r( 1 + e^8r + 2e^4rcosh 4gcosh 4r) - 1, and we have F_S⊗S≈32G^2S^2≈ 2N^2 for large squeezing parameter and gain. The above result indicates a sub-Heisenberg-limited precision bound, regardless of the ratio of average photon number of the inputs to total average photon number. Since two squeezed vacuum states are used as the inputs, our scheme is a pure quantum scheme. In a lossless environment, it is superior to a pure classical scheme using two coherent states (≈ 4N^2/3) <cit.> and a classical-quantum hybrid scheme using coherent along with squeezed vacuum states (≈ 3N^2/2) <cit.>. § PRECISION BOUND IN THE PRESENCE OF PHOTON LOSS For any optical system, photon loss is always inevitable and is main hindrance to achieve high precision. This process can be simulated by adding fictitious beam splitters. The reflection of the fictitious beam splitter leads to a decrease in the number of photons; meanwhile, the coupling of vacuum fluctuation reduces the coherence of quantum states. Both of these two factors result in a degradation of the precision. For practical purposes, in this section we analyze the precision bound of our scheme in a lossy environment. Generally, a quantum state after a lossy process becomes a mixed state. At this point, it is quite difficult to give an analytical expression for the quantum Fisher information. In particular, for Gaussian states, symplectic geometry method can provide an analytical result. Let us use L to represent the lossy rate, then the mean vector and covariance matrix in a lossy environment can be written as <cit.> 𝐌_φ = √(1 - L)·𝐔_PE𝐔_OPA𝐌_in, Σ_φ = (1 - L) 𝐔_PE^𝐔_OPA^Σ _in^𝐔_OPA^𝖳𝐔_PE^𝖳 + L 𝐈_4. The above results suggest that a quantum state after photon loss becomes a statistical mixture of the quantum state and a vacuum state. By substituting the above results into Eq. (<ref>), the quantum Fisher information in a lossy environment is found to be F_S⊗S^ L = ( 1 - L)^2Δ _1/4Δ _2 with Δ _1 = 4e^4r( L - L^2 )cosh 2gcosh 2r[ cosh 4g + 2cosh 4r - 3] + ( 1 + e^8r )( 2 - 2L + L^2 ) - 2e^4r ( 4 - 4L + 3L^2 ) + 2e^4rcosh 4g[ ( 2 - 2L + 3L^2 )cosh 4r - L^2 ] and Δ _2 = e^4rL^2[ ( 1 - L )^2cosh 4g + 2( 2 - 2L + L^2 )cosh^22r ] + 2e^2r( 1 + e^4r)( 1 - L)L( 1 - L + L^2)cosh 2g + e^4r(2 - 4L + 5L^2). In Fig. <ref> and Fig. <ref> we give the quantum Fisher information against lossy rate to intuitively show the precision bound in the presence of photon loss. It turns out that the precision bound is inferior to the Heisenberg limit even for a slight lossy rate. As the lossy rate increases, the precision bound gradually degrades to the shot-noise limit. For a fixed gain, the degradation of precision bound slightly slows down with the decrease of photon number of the inputs, as shown in Fig. <ref>. For a fixed photon number of the inputs, with the decrease of gain, the precision bound degrades to the shot-noise limit with a slightly faster pace, as shown in Fig. <ref>. This may be due to the fact that, for a fixed photon number of the inputs, the correlation between the two modes can be improved with increasing gain. On the whole, it is beneficial for improving the tolerance against photon loss by reducing the proportion of photon number of the inputs when the total number of photons is fixed. In general, pure quantum schemes are sensitive to photon loss. 
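The lossless and lossy bounds above can be evaluated directly. In the sketch below the expressions for Δ₁ and Δ₂ are transcribed as printed, the L = 0 limit is checked against the lossless F_S⊗S, and the bound is compared with the shot-noise value N; the choice r = g = 1 is an arbitrary illustration, not the parameter set used for the figures.

import numpy as np

def F_lossless(r, g):
    # reduced QFI for two squeezed-vacuum inputs (Sec. 3)
    return (1 + np.exp(8 * r) + 2 * np.exp(4 * r) * np.cosh(4 * g) * np.cosh(4 * r)) \
           / (4 * np.exp(4 * r)) - 1

def F_lossy(r, g, L):
    # F^L = (1 - L)^2 * Delta_1 / (4 * Delta_2), with Delta_1, Delta_2 as printed above
    e2r, e4r, e8r = np.exp(2 * r), np.exp(4 * r), np.exp(8 * r)
    d1 = (4 * e4r * (L - L**2) * np.cosh(2 * g) * np.cosh(2 * r)
          * (np.cosh(4 * g) + 2 * np.cosh(4 * r) - 3)
          + (1 + e8r) * (2 - 2 * L + L**2)
          - 2 * e4r * (4 - 4 * L + 3 * L**2)
          + 2 * e4r * np.cosh(4 * g) * ((2 - 2 * L + 3 * L**2) * np.cosh(4 * r) - L**2))
    d2 = (e4r * L**2 * ((1 - L)**2 * np.cosh(4 * g) + 2 * (2 - 2 * L + L**2) * np.cosh(2 * r)**2)
          + 2 * e2r * (1 + e4r) * (1 - L) * L * (1 - L + L**2) * np.cosh(2 * g)
          + e4r * (2 - 4 * L + 5 * L**2))
    return (1 - L)**2 * d1 / (4 * d2)

r, g = 1.0, 1.0
N = 2 * np.sinh(r)**2 * np.cosh(2 * g) + 2 * np.sinh(g)**2   # total mean photon number
print("L = 0 consistency with F_lossless:", np.isclose(F_lossy(r, g, 0.0), F_lossless(r, g)))
print(f"benchmark 2 N^2 = {2 * N**2:.2f},  shot noise N = {N:.2f}")
for L in (0.0, 0.2, 0.4, 0.6):
    print(f"L = {L:.1f}   F^L = {F_lossy(r, g, L):9.2f}   F^L / N = {F_lossy(r, g, L) / N:6.2f}")
# F^L / N > 1 marks a sub-shot-noise bound; the text reports the crossover near L ~ 0.4
# (the precise value depends on r and g).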
Here we compare the precision bound of our (pure quantum) scheme with that of the (classical-quantum hybrid) scheme using coherent along with squeezed vacuum states (see Appendix for detailed calculation). Let us consider quantum advantage, which is defined as A_Q = F_S⊗S^ L/ F_C⊗S^ L - 1. The positive and negative values indicate that a pure quantum scheme is superior and inferior to a classical-quantum hybrid scheme, respectively Figure <ref> gives the dependence of quantum advantage on the lossy rate. The advantage of the pure quantum scheme is remarkable for an extremely low lossy rate, but this advantage quickly disappears with the increase of the lossy rate. As the lossy rate further increases, the classical-quantum hybrid scheme becomes an optimal candidate. In addition, the range of quantum advantages is larger for a high gain or a low photon number of the inputs. § CONCLUSION In summary, we addressed the problem of quantum phase estimation using two-mode squeezed Gaussian states. We analyzed the precision bound through the use of the quantum Fisher information. By maximizing the precision bound, two squeezed vacuum states were determined as the optimal inputs. For a lossless environment, the precision bound can outperform the Heisenberg limit by a factor of 2. In the presence of photon loss, sub-shot-noise-limited precision bound can be attainable with the lossy rate below 0.4. These results may be beneficial for practical quantum metrology based on nonlinear dynamics. § ACKNOWLEDGMENT This work was supported by the National Natural Science Foundation of China (12104193) and the Program of Zhongwu Young Innovative Talents of Jiangsu University of Technology (20230013). § APPENDIX Here we provide the calculation of the quantum Fisher information of a lossy scheme using coherent and squeezed vacuum states. We assume that the coherent and squeezed vacuum states are injected via modes a and b, respectively. The phase of displacement parameter and that of squeezing parameter are 0 and π, which are the optimal phase matching condition. Accordingly, the mean vector of the inputs is given by 𝐌'_in = [ [ 2 | α | 0 0 0 ] ]^𝖳, and the covariance matrix of the inputs is given by Σ' _in = [ [ 1 0 0 0; 0 1 0 0; 0 0 e^2r 0; 0 0 0 e^ - 2r ]]. Based on the method in the main text, we get the quantum Fisher information F_C⊗S^ L = 1 - L/4[ γ _1/γ _2 + γ _3/γ _4] with γ _1 = 4( 1 - L )cosh ^2rsinh ^22g[ L^2 + e^4rL^2 + 2e^2r( 2 - 2L + L^2 ) + ( 1 + e^2r )^2( 1 - L)Lcosh 2g ] -4( 1 - L )cosh ^2rcosh ^2g{e^4r( 8 - 17L + 15L^2 ) -e^2r( 24 - 38L + 34L^2 ) - ( 1 + e^2r )^2( 1 - L )Lcosh 4g + 8 - 17L + 15L^2 - 2cosh 2g[ 4 - 7L + 8L^2 + e^4r( 4 - 7L + 8L^2 ) - 2e^2r( 2 - 7L - 8L^2 ) ] }, γ _2 = ( 1 + e^2r )^2[ 4L( 1 - 2L + 2L^2 - L^3 )cosh 2g + ( 1 - L )^2L^2cosh 4g ] +e^2r( 8 - 16L + 18L^2 - 12L^3 + 6L^4 ) + ( 1 + e^4r )( 5L^2 - 6L^3 + 3L^4 ), γ _3 = 8| α|^2cosh ^2g[ 1 - e^2r( 1 - 3L) -L + (1+e^2r)( 1 - L)cosh2g], γ _4 = 1-2L+( 1 + e^2r) L^2 + ( 1 + e^2r)( 1 - L)Lcosh 2g. 26 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Braunstein and Caves(1994)]PhysRevLett.72.3439 author author Samuel L. Braunstein and author Carlton M. Caves, title title Statistical distance and the geometry of quantum states, 10.1103/PhysRevLett.72.3439 journal journal Phys. Rev. Lett. 
72, 3439–3443 (1994).
J. Liu, H. Yuan, X.-M. Lu, and X. Wang, Quantum Fisher information matrix and multiparameter estimation, J. Phys. A: Math. Theor. 53, 023001 (2019).
P. M. Anisimov, G. M. Raterman, A. Chiruvelli, W. N. Plick, S. D. Huver, H. Lee, and J. P. Dowling, Quantum metrology with two-mode squeezed vacuum: parity detection beats the Heisenberg limit, Phys. Rev. Lett. 104, 103602 (2010).
J. P. Dowling, Quantum optical metrology – the lowdown on high-N00N states, Contemp. Phys. 49, 125–143 (2008).
J. Joo, W. J. Munro, and T. P. Spiller, Quantum metrology with entangled coherent states, Phys. Rev. Lett. 107, 083601 (2011).
R. A. Campos, C. C. Gerry, and A. Benmoussa, Optical interferometry at the Heisenberg limit with twin Fock states and parity measurements, Phys. Rev. A 68, 023810 (2003).
L. Pezzé and A. Smerzi, Mach–Zehnder interferometry at the Heisenberg limit with coherent and squeezed-vacuum light, Phys. Rev. Lett. 100, 073601 (2008).
W. N. Plick, J. P. Dowling, and G. S. Agarwal, Coherent-light-boosted, sub-shot noise, quantum interferometry, New J. Phys. 12, 083014 (2010).
Z. Y. Ou, Enhancement of the phase-measurement sensitivity beyond the standard quantum limit by a nonlinear interferometer, Phys. Rev. A 85, 023815 (2012).
A. M. Marino, N. V. Corzo Trejo, and P. D. Lett, Effect of losses on the performance of an SU(1,1) interferometer, Phys. Rev. A 86, 023844 (2012).
C. Sparaciari, S. Olivares, and M. G. A. Paris, Gaussian-state interferometry with passive and active elements, Phys. Rev. A 93, 023810 (2016).
D. Li, B. T. Gard, Y. Gao, C.-H. Yuan, W. Zhang, H. Lee, and J. P. Dowling, Phase sensitivity at the Heisenberg limit in an SU(1,1) interferometer via parity detection, Phys. Rev. A 94, 063840 (2016).
M. V. Chekhova and Z. Y. Ou, Nonlinear interferometers in quantum optics, Adv. Opt. Photon. 8, 104–155 (2016).
Z. Y. Ou and X. Li, Quantum SU(1,1) interferometers: basic principles and applications, APL Photonics 5, 080902 (2020).
X. Zuo, Z. Yan, Y. Feng, J. Ma, X. Jia, C. Xie, and K. Peng, Quantum interferometer combining squeezing and parametric amplification, Phys. Rev. Lett. 124, 173602 (2020).
W. Du, J. Kong, G. Bao, P. Yang, J. Jia, S. Ming, C.-H. Yuan, J. F. Chen, Z. Y. Ou, M. W. Mitchell, and W. Zhang, SU(2)-in-SU(1,1) nested interferometer for high sensitivity, loss-tolerant quantum metrology, Phys. Rev. Lett. 128, 033601 (2022).
J. Kong, Z. Y. Ou, and W. Zhang, Phase-measurement sensitivity beyond the standard quantum limit in an interferometer consisting of a parametric amplifier and a beam splitter, Phys. Rev. A 87, 023825 (2013).
J.-D. Zhang, C.-F. Jin, Z.-J. Zhang, L.-Z. Cen, J.-Y. Hu, and Y. Zhao, Super-sensitive angular displacement estimation via an SU(1,1)–SU(2) hybrid interferometer, Opt. Express 26, 33080–33090 (2018).
J.-D. Zhang, C. You, C. Li, and S. Wang, Phase sensitivity approaching the quantum Cramér–Rao bound in a modified SU(1,1) interferometer, Phys. Rev. A 103, 032617 (2021).
M. D. Lang and C. M. Caves, Optimal quantum-enhanced interferometry using a laser power source, Phys. Rev. Lett. 111, 173601 (2013).
J.-D. Zhang, C. You, and S. Wang, Sub-shot-noise-limited phase estimation via single-mode inputs, Opt. Express 30, 43143–43156 (2022).
C. Sparaciari, S. Olivares, and M. G. A. Paris, Bounds to precision for quantum interferometry with Gaussian states and operations, J. Opt. Soc. Am. B 32, 1354–1359 (2015).
C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Gaussian quantum information, Rev. Mod. Phys. 84, 621–669 (2012).
A. Monras and M. G. A. Paris, Optimal quantum estimation of loss in bosonic channels, Phys. Rev. Lett. 98, 160401 (2007).
T. Ono and H. F. Hofmann, Effects of photon losses on phase estimation near the Heisenberg limit using coherent light and squeezed vacuum, Phys. Rev. A 81, 033819 (2010).
B. T. Gard, C. You, D. K. Mishra, R. Singh, H. Lee, T. R. Corbitt, and J. P. Dowling, Nearly optimal measurement schemes in a noisy Mach–Zehnder interferometer with coherent and squeezed vacuum, EPJ Quantum Technol. 4, 1–13 (2017).
http://arxiv.org/abs/2407.12740v1
20240717165930
How does dark matter stabilize disc galaxies?
[ "K. Aditya" ]
astro-ph.GA
[ "astro-ph.GA" ]
How does dark matter stabilize disc galaxies? ] How does dark matter stabilize disc galaxies? K. Aditya] K. Aditya E-mail : aditya.k@iiap.res.in Indian Institute of Astrophysics, Koramangala, Bengaluru 560 034, INDIA [ [ ===== § ABSTRACT The study presents a theoretical framework for understanding the role of dark matter on the stability of the galactic disc. We model the galaxy as a two-component system consisting of stars and gas in equilibrium with an external dark matter halo. We derive the equations governing the growth of perturbations and obtain a stability criterion that connects the potential of the dark matter halo and the gas fraction with the stability levels of the galaxy. We find that a two-component disc is more susceptible to the growth of gravitational instabilities than individual components, particularly as gas fractions increase. However, the external field, due to the dark matter halo, acts as a stabilizing agent and increases the net stability levels even in the presence of a cold gas component. We apply the stability criterion to models of the Milky Way, low surface brightness galaxies, and baryon-dominated cold rotating disc galaxies observed in the early universe. Our results show that the potential due to the dark matter halo plays a significant role in stabilizing nearby galaxies, such as the Milky Way, and low surface brightness galaxies, which would otherwise be prone to local gravitational instabilities. However, we find that the baryon-dominated cold disc galaxies observed in the early universe remain susceptible to the growth of local gravitational instabilities despite the stabilizing effect of the dark matter halo. hydrodynamics-instabilities, galaxies:kinematics and dynamics, galaxies:structure, galaxies:star formation, Galaxy:evolution § INTRODUCTION Gravitational instabilities are fundamental processes that drive the evolution of the galaxy. It provides important clues for understanding how gas in the galaxies is converted into stars <cit.>, and how non-axisymmetric structures like bars and spiral arms form in galaxies <cit.>. One of the simplest diagnostics for accessing the stability of the galactic disc against the growth of axisymmetric gravitational instabilities was proposed by <cit.>. It measures the competing effect of self-gravity, which tries to destabilize the disc, and the stabilizing effect of the differential rotation and the random velocity dispersion. The balance between the stabilizing agents, i.e., differential rotation and random velocity dispersion, and the destabilizing agent, i.e., the self-gravity, is classically quantified by the stability criterion proposed by <cit.>: q=κσ/π G Σ. In the above equation, κ is the epicyclic frequency, Σ is the mass surface density, and σ is the radial velocity dispersion, where q>1 is the condition for stability of the disc against axisymmetric perturbations. The stability criterion proposed by <cit.> has been modified to include the self-gravity of both stars and gas by <cit.> and finally <cit.> derive a N-component stability parameter to quantify the stability of gravitationally coupled multiple stellar and gaseous discs. The stability parameter has also been modified further to include the physical processes like the effects of the turbulence <cit.> and the three-dimensional structure of ISM <cit.>. The two-component model for studying the stability of disc galaxy against the growth of local axisymmetrical gravitational instabilities was envisaged by <cit.>. 
In the two-component model, stars and gas in the galactic disk are modeled as two isothermal fluids that interact gravitationally with each other. One of the components in the two-component model resembles the interstellar medium (ISM) with smaller values of the velocity dispersion, and the other resembles the stellar component with higher velocity dispersion. The approach has been used extensively to study the role of the cold ISM in driving the instabilities in galactic disc <cit.> and for studying the stability of the Galactic disc by <cit.>. Further, <cit.> presents the stability criterion for a disc consisting of multiple isothermal components. Each component is categorized as either collisional, such as the ISM, or collisionless, such as stars. The results obtained by <cit.> for the stability using a collisionless treatment for stars and collisional approach for the ISM are comparable to the results obtained <cit.> and <cit.>. The two-component stability parameter is a valuable diagnostic for understanding if the stability levels are driven by stars or by gas; for example, see <cit.> and <cit.>. The stability criterion in the literature considers the self-gravity of the gas and stars. However, it does not consider the role of dark matter in driving the gravitational instabilities. The initial effort to incorporate the influence of a dark matter halo on the stability of a single-component disc was undertaken by <cit.>. In this work, we present the conditions for appraising the stability of the gravitationally coupled two-component disc consisting of stars and gas in equilibrium with an external dark matter halo. The differential equations governing the growth rate of perturbations are derived by considering the two-component disc in equilibrium with the external force field of the dark matter halo. Each component is specified by its velocity dispersion, surface density, and angular frequency, but the system is under the influence of the force field of the dark matter halo. We show that the governing equations for the growth of instabilities resemble a wave equation with extra terms. We use plane wave ansatz to derive the dispersion relation for the gravitationally coupled two-component system in equilibrium with the dark matter halo and obtain a simple stability criterion. The stability criterion presented in this work explicitly quantifies the contribution of dark matter to the overall stability levels. It can be used to explore the role of dark matter in regulating various physical processes within the galactic disc where gravitational instabilities are important. The paper is organized as follows: in 2, we will formulate the basic equations and derive the governing differential equations. We will derive the dispersion relation and stability criterion in 3 and 4. We finally present the results in 5 and discuss the applications of the stability criterion in 6 & 7, and conclude in 8. § FORMULATION AND DERIVATION OF BASIC EQUATIONS We consider a coaxial and coplanar thin disc comprising stars and gas, which interact with each other gravitationally. The two-component disc is supported by random pressure and rotation, and the system is in equilibrium with a constant external force field of the dark matter halo. The problem is described in the galactic cylindrical coordinate system (R,θ,z). We start with the basic hydrodynamic equations in which the external force field of the dark matter halo is in equilibrium with the two-component disc. 
We then introduce small perturbations in the basic equations and derive the dynamic equations governing the evolution of the perturbed quantities. The Force equation, continuity equation, and the Poissons equation for a thin disc in equilibrium with an external potential Φ_ext are: Σ_i∂V_i/∂ t + Σ_i( V_i.∇)V_i = -∇ P_i - Σ_i∇ (Φ_s +Φ_g ) - Σ_i∇Φ_ext, ∂Σ_i/∂ t +∇ .(Σ_iV_i)=0, ∇^2 (Φ_s + Φ_g)=4π G(Σ_s+Σ_g) δ(z). The above equations, when expressed in cylindrical coordinates, supplemented with an isothermal equation of state P_i= Σ_i c^2_i read: Σ_i∂ u_i/∂ t + Σ_i u_i∂ u_i/∂ R + v_iΣ_i/R∂ u_i/∂θ -Σ_i v^2_i/R=-c_i^2∂Σ_i/∂ R -Σ_i∂(Φ_s + Φ_g )/∂ R - Σ_i∂Φ_ext/∂ R, Σ_i∂ v_i/∂ t + Σ_iu_i∂ v_i/∂ R + Σ_iv_i/R∂ v_i/∂θ +Σ_i v_i u_i/R =-c_i^2/R∂Σ_i/∂θ - Σ_i/R∂ (Φ_s+Φ_g )/∂θ, ∂Σ_i/∂ t +1/R∂(R u_iΣ_i) /∂ R + v_i/R∂Σ_i/∂θ + Σ_i/R∂ v_i/∂θ=0, 1/R∂/∂ R( R ∂(Φ_s+Φ_g )/∂ R) +∂^2(Φ_s + Φ_g ) /∂ z^2 + 1/R^2∂^2 (Φ_s +Φ_g)/∂θ ^2=4 π G (Σ_s + Σ_g) δ(z). In the above equations, 'i' is used to index 'stars' and 'gas', u_i and v_i are the velocity components in the radial and the tangential directions respectively, Σ_i and Φ_i are the surface density and the gravitational potential associated with the stellar and the gas disc, respectively, and c_i is the velocity dispersion of each component. Assuming the disc is axisymmetric, the above equations can be written as Σ_i∂ u_i/∂ t + Σ_i u_i∂ u_i/∂ R -Σ_i v^2_i/R=-c_i^2∂Σ_i/∂ R -Σ_i∂(Φ_s + Φ_g )/∂ R - Σ_i∂Φ_ext/∂ R, Σ_i∂ v_i/∂ t + Σ_iu_i∂ v_i/∂ R + Σ_i v_i u_i/R =0, ∂Σ_i/∂ t +1/R∂(R u_iΣ_i) /∂ R =0, 1/R∂/∂ R( R ∂(Φ_s+Φ_g )/∂ R) +∂^2(Φ_s + Φ_g ) /∂ z^2 =4 π G (Σ_s + Σ_g) δ(z). We now introduce small perturbations in the above basic equations: Σ_i = Σ_0,i + ϵΣ_1,i, Φ_i =Φ_0,i + ϵΦ_1,i, v_i= v_0,i + ϵ v_1,i, u_i= ϵ u_1,i. The quantities Σ_0,i and Φ_0,i, v_0,i, u_0,i are the locally unperturbed states, and the perturbed quantities are denoted by Φ_1,i, Σ_1,i, v_1,i, u_1,i, where the value of ϵ << 1. Substituting equation (13) in equations [(9), (10), (11), (12)] and keeping only the first order terms [ϵ^1], we obtain the governing equations for the perturbed quantities. But, before that, in order to better understand how the external potential interacts with the two-component ' star + gas' disc, we write down the zeroth order terms [ϵ^(0)] corresponding to equation (9): v^2_0,i/R = c^2_i∂lnΣ_0,i/∂ R +∂Φ_0,s/∂ R +∂Φ_0,g/∂ R + ∂Φ_ext/∂ R. In the above equation, the contribution of the term c^2_i∂lnΣ_0,i/∂ R is negligible since the velocity dispersion is very small compared to the rotation velocity <cit.>. Further, we write, v^2_0,s=R∂Φ_0,s/∂ R, v^2_0,g=R∂Φ_0,g/∂ R and v^2_ext=R∂Φ_ext/∂ R, we obtain: v^2_net = v^2_0,s + v^2_0,g + v^2_ext. In the above equation, we have labeled v_0,i as v_net since it contains the effective contribution from the stars, gas, and the external potential. The value of v_net is typically determined through observations of neutral hydrogen in galaxies <cit.>. We express the circular velocity of stars, gas, and the external potential in terms of the circular frequency as v_0,s=RΩ_0,s, v_0,g=RΩ_0,g and v_ext=RΩ_ext. This leads to v^2_net= R^2Ω^2_net =R^2(Ω^2_disc + Ω_ext), where Ω^2_disc= Ω^2_0,s + Ω^2_0,g. The stability criterion derived by <cit.>, <cit.>, <cit.>, <cit.> applies exclusively to star and/or gas disc. The net rotation contains the effect of only star and/or gas but does not contain the contribution of the external potential to the net rotation, as shown in equation (15). 
In their treatment, the centrifugal force balances the unperturbed potential of either star or/and gas, i.e., v^2_net/R = ∂( Φ_0,s + Φ_0,g )/∂ R or v^2_net/R = ∂Φ_0 /∂ R for a single component, but does not consider the contribution of the dark matter to the net rotation which enters our equations as an external potential (∂Φ_ext /∂ R). However, when reconstructing stability using observed properties, the observed rotation curve is used, which includes contributions from stars, gas, and the dark matter halo. In contrast, the analytic treatment considers contributions only from the stars and/or gas disk. <cit.> identifies this difference between the analytical treatment and the observational reconstruction of the stability criterion in the literature and derives a modified stability criterion for a one-component disc that includes the contribution of the dark matter halo to the net rotation. Following the short detour aimed at understanding how the external potential interacts with the 'star+ gas' disc, we now write down the linearized equations governing the growth of perturbed quantities. The first order terms in ϵ^(1) are given by: ∂ u_1,i/∂ t -2 Ω_net v_1,i + c_i^2/Σ_0,i∂Σ_1,i/∂ R + ∂ ( Φ_1,s + Φ_1,g )/∂ R = 0, ∂ v_1,i/∂ t -2B_net u_1,i=0, ∂Σ_1,i/∂ t + Σ_0,i∂ u_1,i/∂ R=0, and the Poisson equation for the thin disc assumes the form <cit.>; ∂( Φ_1,s+ Φ_1,g )/∂ R = - 2 π i G( Σ_1,s + Σ_1,g ). In equation (16), v_net is expressed as v_net=Ω_net R, and the term v^2_net/R - ∂( Φ_0,s + Φ_0,g + Φ_ext )/∂ R cancels, as it is just the centrifugal term balancing the total unperturbed potential of the two-component disc and the external potential. In equation (17), we have expressed [Ω_net + ∂ (Ω_net R)/∂ R] = -2B_net, where B_net is the Oort constant. Further, substituting Ω_net=√(Ω^2_disc + Ω^2_ext) in [Ω_net + ∂ (Ω_net R)/∂ R] = -2B_net, it is straightforward[κ^2_net =-4B_netΩ_net=(RdΩ^2_disc/dR + 4Ω^2_disc) + (RdΩ^2_ext/dR + 4Ω^2_ext), κ^2_disc=(RdΩ^2_disc/dR + 4Ω^2_disc), κ^2_ext=(RdΩ^2_ext/dR + 4Ω^2_ext)] to show that κ^2_net=κ^2_disc + κ^2_ext, where κ_net is the net epicyclic frequency defined as κ^2_net=-4B_netΩ_net. In equation (18), the term 1/R[ ∂(R u_1,iΣ_0,i)/∂ R] is approximated as Σ_0,i∂ u_1,i/ ∂ R, as RΣ_0,i will vary gradually with R when compared with the rapid oscillatory behaviour of u_1,i. § DISPERSION RELATION IN THE PRESENCE OF EXTERNAL FIELD In this section, we will derive the dispersion relation for the two-component disc in the presence of an external field of the dark matter halo. We will show that the linearized equations [(16), (17), (18), (19)] governing the evolution of the perturbed quantities can be recast to resemble coupled wave equations with extra terms and thus admit solutions of the form e^ik.r -ω t. Indexing equations [(16), (17), (18), (19)] for stars; ∂ u_1,s/∂ t -2 Ω_net v_1,s + c_s^2/Σ_0,s∂Σ_1,s/∂ R + ∂ (Φ_1,s +Φ_1,g )/∂ R = 0, ∂ v_1,s/∂ t - 2B_netu_1,s=0, ∂Σ_1,s/∂ t + Σ_0,s∂ u_1,s/∂ R=0. Operating with ∂/∂ R on equation (20), and eliminating the terms ∂/∂ R (∂ u_1,s/∂t) by taking the time derivative of equation (22), which will give ∂/∂ R ( ∂ u_1,s/∂t)= (-1/Σ_0,s)∂^2Σ_1,s/∂ t^2. Similarly, ∂ v_1,s/∂ R is eliminating by operating ∂/∂ R on equation (21) and substituting for ∂ u_1,s∂ R from equation (22) to get ∂ v_1,s/∂ R =-2B_netΣ_1,s/Σ_0,s. And finally substituting for ∂^2 (Φ_1,s +Φ_1,g )/∂ R^2 with equation (19), we obtain: ∂^2Σ_1,s/∂ t^2 - c_s^2∂^2Σ_1,s/∂ R^2 -4Ω_net B_netΣ_1,s + 2π i G Σ_0,s∂/∂ R(Σ_1,s + Σ_1,g) =0. 
Similarly, the equation for gas reads ; ∂^2Σ_1,g/∂ t^2 - c_g^2∂^2Σ_1,g/∂ R^2 -4Ω_net B_netΣ_1,g + 2π i G Σ_0,g∂/∂ R(Σ_1,s + Σ_1,g) =0. The above equations resemble wave equations and will indeed admit plane wave ansatz. Substituting e^i(k.r - ω t) for the perturbed quantities in equations (23) and (24), we obtain Σ_1,s =-2 π G k Σ_0,sΣ_1,g/(ω^2 -c_s ^2 k^2 -κ^2_net + 2 π G Σ_0,s k ), and similarly Σ_1,g =-2 π G k Σ_0,gΣ_1,s/(ω^2 -c_g ^2 k^2 -κ^2_net + 2 π G Σ_0,g k ). Combining equations (25) and (26), the final dispersion relation reads; (ω^2 -c_s ^2 k^2 -κ^2_net + 2 π G Σ_0,s k) (ω^2 -c_g ^2 k^2 -κ^2_net + 2 π G Σ_0,g k)= (2 π G Σ_0,s k)(2 π G Σ_0,g k). By setting the contribution of the external field to zero, i.e., κ_ext=0, equation (27) becomes equivalent to the dispersion relation for a two-component galactic disk, as shown in equation (17) of <cit.>. Further, if either of Σ_0,s,c_s=0 or Σ_0,g,c_g=0, equation (27) reduces to the case of a one-component disc under the influence of the external field <cit.>. In deriving the above dispersion relation, we have started with a two-component disc in equilibrium with an external force field of dark matter halo. We then introduced small perturbations and compose the linearized perturbation equations, which resemble plane wave equations and then use the plane wave ansatz to derive the dispersion relation. § CONDITION FOR STABILITY In this section, we derive the stability criterion for assessing if the two-component disc in the force field of the dark matter halo is susceptible to the growth of axisymmetric instabilities or not. Firstly, we define the following quantities: α_s= κ^2_net + c_s ^2 k^2 - 2 π G Σ_0,s k , α_g= κ^2_net + c_g ^2 k^2 - 2 π G Σ_0,g k , β_s=2 π G Σ_0,s k, β_g=2 π G Σ_0,g k. Substituting equation (28) in (27), the dispersion relation and the respective roots are given by; ω^4 -ω^2(α_s + α_g)+(α_sα_g -β_sβ_g)=0 ω^2_±=1/2(α_s + α_g) ±1/2( (α_s+α_g)^2 -4(α_sα_g -β_sβ_g))^1/2. For a one-component disc, α_g≥ 0 or α_s≥ 0 is the sufficient condition for stability. For a marginally stable one-component disc a function F can be defined as F= 2 π G Σ_0 k/(κ^2_net + k^2c^2 ). A value of F=1 indicates marginal stability, F>1 represents an unstable disc and F<1 represents a stable disc. The value of k_min for the one-component disc is obtained by putting dω^2/dk=0, where ω^2= κ^2_net - 2 π G Σ_0 k +c^2k^2, which yields k_min = π G Σ_0/c^2. Evaluating F at k_min yields F= 2/(1+Q^2). For the one-component system in the force field of an external potential, Q is defined as Q=κ_netc/π GΣ_0= q√(1+ (κ^2_ext/κ^2_disc)), where q=κ_discc/π G Σ_0. The condition for marginal stability of two-component disc reads ω^2_-=0 or (α_sα_g -β_sβ_g)=0, and for the disc to be unstable the conditions is α_sα_g -β_sβ_g <0. With simple algebra the condition for neutral equilibrium, (α_sα_g -β_sβ_g)=0 can be written as: F=2 π G Σ_0,s k/κ^2_disc +κ^2_ext + k^2c_s^2 + 2 π G Σ_0,g k/κ^2_disc +κ^2_ext + k^2c_g^2 where F=1. In the above, we have expressed κ^2_net= κ^2_disc +κ^2_ext to gauge the effect of the external potential on the 'star + gas' disc. See, the discussion following equation (19) for deriving κ_net in terms of κ_disc and κ_ext. We define the gas fraction f = Σ_0,g/(Σ_0,s+Σ_0,g), and X_s-g=κ^2_disc/[2 π G (Σ_0,s+Σ_0,g)k_min]. X_s-g is the dimensionless wavelength at which it is hardest to stabilize the two-component system. The value of k_min for the two-component system is given by conditions, dω^2_-/dk=0, or d(ω^2_+ω^2_-)/dk=0, i.e. 
finding d(α_sα_g -β_sβ_g)/dk which yields; k^3(4c_s^2 c_g^2) - 3k^2(2π GΣ_0,s c_g^2 +2π GΣ_0,g c_s^2) +2k κ^2_net(c_g^2+c_s^2)-(2π GΣ_0,s+2π GΣ_0,g) κ^2_net=0 The function F for the two-component model is a superposition of the one-component cases <cit.>. Thus, in analogy with the one-component case, the condition for stability of the two-component disc under the force field of external potential is defined as: 2/1+Q_T^2=(1-f)/X_s-g(1+ (1-f)^2 q_s^2/4X^2_s-g + R ) + f/X_s-g(1+ f^2 q_g^2/4X^2_s-g + R ). In the above equation, R quantifies the contribution of the external potential on the two-component 'star+gas' and is defined as R=κ^2_ext/κ^2_disc. Also, q_s and q_g are the classical one-component stability criterion for stars and gas, defined as q_s=κ_discc_s/π GΣ_0,s and q_g=κ_discc_g/π GΣ_0,g respectively. The above condition is equivalent to the stability condition Q_s-g derived by <cit.> in the absence of the external force field (R=0). For the sake of continuity of notation, we denote the stability criterion for the two-component disc in the absence of an external field using q_T. The disc is stable against the growth of axisymmetric instabilities when Q_T>1, and the disc is susceptible to the growth of axisymmetric perturbations when Q_T<1. § RESULTS §.§ Marginal stability of one-component disc under the influence of dark matter halo To gain better insight into the role of dark matter on a two-component disc, we first investigate the impact of the force field of dark matter halo in driving the stability levels in a one-component disc. The dispersion relation for a one-component disc is given by: ω^2=(κ^2_disc + κ^2_ext )k^(0) + c^2k^(2) -2 π G Σ_0 k^(1). In the above equation, at a large value of k, k^2 will dominate; thus, pressure stabilizes the disc at small scales. At small k, i.e., k^(0) the differential rotation of the disc (κ^2_disc) and the dark matter halo (κ^2_ext) stabilize the disc at large scales. At intermediate k, the self-gravity of the galactic disc becomes important. The field due to the external potential (κ^2_ext) adds up with the differential rotation of the disc (κ^2_disc) and will stabilize the disc. Next, we inspect the marginal stability of the one-component galactic disc. Putting ω^2 =0, equation (33) can be recast to obtain a quadratic equation in k, 1 + Q^2/4k^2/k'^2_T -k/k'_T=0, where, Q=q (1+ κ^2_ext/κ^2_disc)^1/2, k'_T=k_T(1+κ^2_ext/κ^2_disc), k_T=κ^2_disc/2 π G Σ_0 and defining ζ'= k'_T/k = k_T/k(1+κ^2_ext/κ^2_disc), i.e ζ'=ζ(1+ κ^2_ext/κ^2_disc). With the above substitutions, equation (34) can be written as Q=2[ ζ (1+R)( 1- ζ (1+ R) )]^1/2, where R=κ^2_ext/κ^2_disc. In Figure. 1, we show the effect of the external force field of the dark matter halo on the stability of the one-component disc. We find that upon increasing the contribution of the dark matter by increasing the value of R, the maximum value of Q is shifted towards a smaller value of ζ, indicating that a larger contribution from dark matter to the total potential can effectively stabilize the galaxy over large scales. Further, from Q=q √(1+ R), we can see that when R=κ^2_ext/κ^2_disc=0, the value of Q corresponds to the classical stability criterion (q) derived by <cit.>. The stability criterion derived by <cit.> considers the self-gravity of only one component and does not include the contribution of the external potential due to dark matter halo. The centrifugal force is balanced only by the corresponding force due to the unperturbed potential of stars/gas. 
The marginal stability in the absence of the external potential is given by q=1. The maximum value value of Q, when R=0.5, or κ_ext=( κ_disc/√(2)) is equal to 1.2 compared to 1 when R=0, indicating that Q>q. Thus, it is evident that the addition of dark matter to the total potential increases the marginal stability levels and makes it much harder to destabilize the one-component disc, making the disc more stable against the growth of instabilities. The stability criterion in the presence of the external dark matter halo Q=q (1+ κ^2_ext/κ^2_disc)^1/2, can be written as Q=κ_netc/π GΣ_0. The one-component stability criterion derived by <cit.> is applicable only for stars/gas. However, we note that the mathematical expression for the stability criterion in the presence of an external halo remains unchanged (Q=κ_netc/π GΣ_0). Thus, when reconstructing q from observations following the classical treatment by <cit.>, the contribution of the external force field of dark matter is implicitly accounted for. In other words, using net epicyclic frequency (κ_net) derived from the observed rotation curve in q <cit.> is equivalent to computing Q derived in this work. §.§ Role of dark matter on the stability of two-component disc In 5.1, we found that the external potential due to the dark matter halo increases the marginal stability of the one-component disc, indicating that effectively Q>q, or that it is now harder to destabilize the disc due to the force field of the dark matter halo. <cit.> and <cit.>, show that the addition of gas disc makes the galaxy more prone to the growth of instabilities or in other words, the two-component disc is more unstable than either component by itself. The stability criterion presented in this work allows us to assess if gas is enough to lead to the growth of local instabilities, even in the presence of a stabilizing dark matter component. In Figure 2, we present the two-component stability criterion Q_T as a function of q_s and q_g, respectively. In the top panel, we have fixed the value of the gas fraction at f=0.01; in the bottom panel, we have fixed the gas fraction at 0.3. When the external force field due to the dark matter halo is zero R=0, we find that at a fixed value of the gas fraction, the value of the q_T is lower than the values of q_s and q_g. This supports the earlier findings by <cit.>, which show that a two-component disc is less stable than a disc composed only of stars or gas. The results indicate that the two-component disc is more prone to the development of gravitational instabilities than a single-component disc. For example, when f=0.01, R=0, value of q_T=1.95, when q_s and q_g are equal to 2.5. Similarly, when we increase the gas fraction to f=0.3, keeping R=0, q_T now becomes 1.35 when q_s,q_g=2.5. Further, when f = 0.3 , R=0, and q_s ,q_g=1.5, the value of q_T drops to 0.75. This shows that adding a second disc in the absence of external potential due to dark matter effectively renders the two-component disc susceptible to the growth of gravitational instabilities, even though the stars and gas are stable by themselves. We will now discuss the effect of dark matter on the stability of the two-component disc by varying the value of R. It is evident from Figure 2 that for a given gas fraction, when we move from left to right, the value of the marginal stability of the two-component disc increases with increasing R. For example, at a gas fraction equal to 0.01, when both q_s and q_g are equal to 2.5, q_T is equal to 1.95 for R = 0. 
However, upon increasing the contribution of dark matter to the total potential (i.e., R = 1), Q_T becomes 2.4. A similar effect is observed at a higher gas fraction, when f=0.3 and both q_s and q_g are equal to 2.5, the value of q_T is 1.35 in the absence of dark matter (R=0). However, when the contribution of dark matter is included (R=1), Q_T=1.8. The external potential of the dark matter halo stabilizes the two-component system, which would otherwise be prone to the growth of axisymmetric instabilities. At smaller values of q_s and q_g equal to 1.5 and a gas fraction equal to 0.3, the two-component disc becomes susceptible to the growth of gravitational instabilities (q_T=0.75) when R=0. However, upon including the contribution of the dark matter halo (R=1) at f=0.3, the two-component system stabilizes itself (Q_T>1). Thus, we note that an external force due to the dark matter halo effectively suppresses the growth of local axisymmetric instabilities. However, the two-component system can be susceptible to axisymmetric instabilities on rarer occasions, even in the presence of stabilizing external potential due to the dark matter halo. An example is provided in Figure 2, when R=0.5, and the gas fraction is equal to 0.3 and q_s, q_g≤1.5. The two-component system has Q_T≤1, indicating that the system is prone to growth of local axisymmetric instabilities. § APPLICATION From the analysis presented in the previous section, we understand that the external force field of the dark matter halo stabilizes the two-component system of stars+gas. However, in rare instances where the force due to the dark matter halo is insufficient compared to the destabilizing effect of the gas disc, the two-component system may become prone to the growth of local axisymmetric instabilities. In this section, we will investigate the role of dark matter on the stability of two-component models of nearby galaxies like the Milky Way and low surface brightness galaxies, and models of galaxies observed in the early universe. The stellar distribution in our galaxy follows an exponential surface density given by Σ_s(R)=Σ_s0e^-R/R_D, where Σ_s0=640M_⊙pc^-2 and R_D=3.2kpc <cit.> are the central surface density and the disc scalelength. The gas distribution in the Galaxy is given by Σ_g(R)=Σ_g0e^-1.65R/R_25, in the above equation Σ_g0=28.2M_⊙pc^-2 is the central density of the gas disc and R_25 is the radius at which the B-band surface brightness drops to 25.5 mag arsec^-2, R_25=4R_D <cit.>. The stellar velocity dispersion is given by <cit.> σ_s(R)= (1/0.6) √( (2 π G R_DΣ_s(R)/7.3), and we use a constant gas velocity dispersion equal to 10kms^-1 <cit.>. The circular velocity (v_c) corresponding to the exponential distribution is given as <cit.> v^2_c(R)= 4π GΣ_0R_Dy^2[ I_0(y)K_0(y) -I_1(y)K_1(y)], where y=R/2R_D and I_0,I_1 and K_0,K_1 are the modified Bessel functions of the first and second kind. The epicyclic frequency κ at a radius R is defined as κ^2(R)= ( RdΩ^2 (R)/dR + 4Ω^2 (R) ), where Ω is the angular frequency defined as Ω^2 (R)= v^2_c/R^2. The dark matter density is given by a pseudo-isothermal halo, with a central density ρ_0=0.035M_⊙pc^-3 and a core radius R_c=5kpc <cit.>. The epicyclic frequency due to the pseudo-isothermal dark matter halo is given by κ^2_PIS(R)= 4 π G ρ_0[ 2 R_c^2/R^2 + R_c^2 + R_c^4/R^2 (R^2 + R_c^2) - R_c^3/R^3tan^-1( R/R_c) ]. With all the building blocks needed to compute the stability in place, we will now discuss the role of dark matter halo on different galaxy models. 
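Before turning to the individual cases, the ingredients above can be collected into a short numerical sketch (Python with numpy/scipy assumed; it is ours, not the authors' code). The parameter values are those quoted above for the Milky Way; the radial range, the choice to compose the disc rotation curve from the stellar and gas exponential discs added in quadrature, and our reading of the stellar-dispersion expression as σ_s = (1/0.6)√(2πG R_D Σ_s/7.3) are assumptions. The two-component criterion is omitted, and the printed minima should only be read as broadly comparable to the numbers reported for Case 1 below, since they depend on such implementation details.

import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

# Milky Way parameters quoted in the text (lengths in pc, densities in M_sun/pc^2 or M_sun/pc^3)
Sigma_s0, R_D = 640.0, 3200.0
Sigma_g0, R_g = 28.2, 4.0 * 3200.0 / 1.65   # gas scale length from exp(-1.65 R / R_25), R_25 = 4 R_D
rho_0, R_c = 0.035, 5000.0                  # pseudo-isothermal halo
sigma_g = 10.0                              # gas velocity dispersion in km/s

R = np.linspace(1500.0, 20000.0, 600)       # radial grid in pc (our choice)

Sigma_s = Sigma_s0 * np.exp(-R / R_D)
Sigma_g = Sigma_g0 * np.exp(-R / R_g)
sigma_s = (1.0 / 0.6) * np.sqrt(2.0 * np.pi * G * R_D * Sigma_s / 7.3)   # stellar dispersion

def vc2_exponential_disc(R, Sigma0, Rd):
    """Squared circular speed of a razor-thin exponential disc (modified Bessel functions)."""
    y = R / (2.0 * Rd)
    return 4.0 * np.pi * G * Sigma0 * Rd * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))

def kappa2_from_vc2(R, vc2):
    """Epicyclic frequency squared: kappa^2 = R dOmega^2/dR + 4 Omega^2, with Omega^2 = vc^2/R^2."""
    Omega2 = vc2 / R**2
    return R * np.gradient(Omega2, R) + 4.0 * Omega2

def kappa2_pis(R, rho0, Rc):
    """Epicyclic frequency squared of the pseudo-isothermal halo (same expression as in the text)."""
    x = R / Rc
    return 4.0 * np.pi * G * rho0 * (2.0 / (1.0 + x**2)
                                     + 1.0 / (x**2 * (1.0 + x**2))
                                     - np.arctan(x) / x**3)

# disc-only and net epicyclic frequencies
vc2_disc = vc2_exponential_disc(R, Sigma_s0, R_D) + vc2_exponential_disc(R, Sigma_g0, R_g)
kappa_disc = np.sqrt(kappa2_from_vc2(R, vc2_disc))
kappa_net = np.sqrt(kappa2_from_vc2(R, vc2_disc) + kappa2_pis(R, rho_0, R_c))

# one-component criteria without (q) and with (Q) the external halo field
q_s = kappa_disc * sigma_s / (np.pi * G * Sigma_s)
q_g = kappa_disc * sigma_g / (np.pi * G * Sigma_g)
Q_s = kappa_net * sigma_s / (np.pi * G * Sigma_s)
Q_g = kappa_net * sigma_g / (np.pi * G * Sigma_g)
print(f"min q_s = {q_s.min():.2f}, min q_g = {q_g.min():.2f}")
print(f"min Q_s = {Q_s.min():.2f}, min Q_g = {Q_g.min():.2f}")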
Case 1: Stability of Milky Way We show the stability analysis of the Milky Way in the first row of Figure 3. We find the minimum value of the stability for stars and gas is q^min_s=1.5 and q^min_g=1.8, indicating that individually stars and gas are stable. However, the two-component formalism yields a q^min_T=0.88, indicating that the two-component system is prone to the growth of local gravitational instabilities in the absence of the external potential due to the dark matter halo. Now, the addition of the dark matter to the total potential naturally increases the one-component stability from q^min_s=1.5 to Q^min_s=2.7 and q^min_g=1.8 to Q^min_g=3.8. Moreover, the two-component system, which was unstable with q^min_T=0.88, now has Q^min_T=1.7, highlighting the importance of the external potential of the dark matter halo in stabilizing massive disc galaxy like Milky Way. Case 2: Low mass disc in a high mass halo In order to better gauge the role of dark matter on the stability levels of the two-component disc, we minimize the contribution of the stars and gas disc to the total potential. We lower the stellar and the gas surface density to typical values observed in the low surface brightness galaxies; Σ_s0=100M_⊙pc^-2 and R_D=2.5kpc and the gas surface density to 14.2M_⊙pc^-2 <cit.>. We keep the values of the dark matter halo to that of the Milky Way. We show the rotation velocity for this mass distribution in the second row of Figure 3. We can see that κ_ext is significantly higher than κ_disc, highlighting that the dark matter is the dominant mass component. We find that q^min_s = 1.7 and q^min_g = 1.8, indicating that the stars and gas are stable on their own. However, similar to the Milky Way, the two-component star+gas system has q^min_T = 0.9, making the disc susceptible to the growth of local gravitational instabilities in the absence of the dark matter halo. However, upon including the contribution of the dark matter halo, we find Q^min_s = 6.9, Q^min_g = 8, and Q^min_T=4.4, indicating that a higher contribution of the dark matter to the total potential is reflected in higher net stability levels of the two-component system. Case 3: Low mass disc in a low mass halo As a final example, we will inspect the effect of a low-mass stellar and gas disc embedded in a low-mass dark matter halo akin to the mass distribution of a low surface brightness galaxy. We keep the surface density of the stars, gas, and dark matter halo parameters to the typical values obtained from the mass models of low surface brightness galaxies: ρ_0= 0.066M_⊙pc^-3 and R_c=1.5kpc <cit.>. The parameters for the stars and gas are the same as in Case 2. We show the rotation velocity corresponding to this mass distribution in the third row of Figure 3. The minimum value of q_s, q_g and q_T are comparable to values obtained in case 2: q^min_s=1.7, q_g=1.8 and q_T=0.9. However, since the contribution of dark matter to the total potential is small compared to the massive dark matter halo of the Milky Way, the shift in the stability curves upon adding a dark matter halo is also small. We find that Q^min_s=3.2, Q^min_g=3.6 and Q^min_T=2.1. In both Case 2 and Case 3, we find that the force due to the dark matter potential stabilizes the two-component low surface brightness disc, which is otherwise unstable. The only difference is that a massive halo contributes significantly to the overall stability. 
This aligns with the previous finding in <cit.>, which shows that dark matter is important in regulating the stability of low surface brightness galaxies. § DISCUSSION A large number of recent studies show that the galaxies observed at high redshift are dominated by baryons <cit.>. We construct a galaxy model in which the contribution of stars and gas exceeds that of the dark matter halo in the total mass budget. The rotation curve decomposition by <cit.> show that stellar disc makes maximum contribution to the total rotation curve. However, the contribution from the dark matter halo and the gas disc are typically comparable. In our model, the stellar disc has a surface density profile comparable to the Milky Way: Σ_0 =640M_⊙pc^-2 and R_d=3.2kpc. We increase the gas surface density from 28.2M_⊙pc^-2 for the Milky Way to 200M_⊙pc^-2 and keep the scalelength comparable to the stellar disc. However, the gas disc continues to be a cold component with a velocity dispersion of 10 km/s. We also lower the contribution of the dark matter by reducing the dark matter density and core radius to 0.05M_⊙pc^-3 and 2kpc, respectively. We aim to ascertain the contribution of the dark matter halo to the net stability levels in baryon-dominated systems akin to the cold rotating disc galaxies observed in the early universe. We show the results in the fourth panel of Figure 3. We find that in the absence of potential due to the dark matter halo, q^min_s=1.5, q^min_g=0.8 and q^min_T=0.5, indicating that a massive cold two-component disc is susceptible to the growth of local gravitational instabilities. Although the dark matter halo increases the net stability levels, the two-component system is still susceptible to local gravitational instabilities <cit.>, Q^min_s=2.8, Q^min_g=0.9, and Q^min_T=0.8. This indicates that despite the stabilizing nature of the dark matter halo, the net contribution of the dark matter is insufficient to stabilize the baryon-dominated cold disc galaxies in the early universe. § CONCLUSIONS In this study, we have derived detailed theoretical formalism to understand the role of dark matter and gas fraction on the stability of the two-component model of galactic disc. We model the galaxy as a coplanar and a coaxial system of stars and gas in equilibrium with an external dark matter halo. We derive the equations governing the growth rate of perturbation and, finally, present a simple stability criterion for appraising the stability of the two-component disc under the influence of dark matter halo. We find that: * The two-component disc is more susceptible to the growth of gravitational instabilities than the individual components. Increasing the gas fraction at a fixed value of external potential lowers the stability of the two-component disc, highlighting the role of cold gas in destabilizing the galaxy consistent with the earlier finding of <cit.>. * The external field due to the dark matter halo acts as a stabilizing agent and increases the net stability levels of the two-component system. In dark matter-dominated systems, the gravitational force exerted by the dark matter halo stabilizes the two-component system, even when the system is locally unstable <cit.>. This indicates that the cold gas component cannot destabilize the two-component disc when the dark matter halo dominates the mass budget of the galaxies. 
* We apply the stability criterion to the models of the Milky Way and low surface brightness galaxies and find that the Milky Way and the low surface brightness discs are locally unstable, when contribution of dark matter is not included in the total potential. However, the addition of the dark matter to the total potential significantly increases the net stability levels in these galaxies <cit.>. We note that when the contribution of dark matter to the total mass budget is small, the corresponding effect on the net stability levels would also be diminished. * In rare cases, the two-component system can be susceptible to the growth of gravitational instabilities despite the presence of a stabilizing dark matter halo potential. One example is found in baryon-dominated cold rotating disc galaxies observed in the early universe. The influence of dark matter on the overall gravitational potential is insufficient to stabilize the galaxies observed in early universe. § ACKNOWLEDGEMENTS Aditya would like to thank the referee for their insightful comments that improved the quality of this manuscript. § DATA AVAILABILITY No new data was generated in this work. mnras
http://arxiv.org/abs/2407.13546v1
20240718142028
Treatment-control comparisons in platform trials including non-concurrent controls
[ "Marta Bofill Roig", "Pavla Krotka", "Katharina Hees", "Franz Koenig", "Dominic Magirr", "Peter Jacko", "Tom Parke", "Martin Posch" ]
stat.ME
[ "stat.ME" ]
§ ABSTRACT
Shared controls in platform trials comprise concurrent and non-concurrent controls. For a given experimental arm, non-concurrent controls refer to data from patients allocated to the control arm before the arm enters the trial. The use of non-concurrent controls in the analysis is attractive because it may increase the trial’s power of testing treatment differences while decreasing the sample size. However, since arms are added sequentially in the trial, randomization occurs at different times, which can introduce bias in the estimates due to time trends. In this article, we present methods to incorporate non-concurrent control data in treatment-control comparisons allowing for time trends. We focus mainly on frequentist approaches that model the time trend and Bayesian strategies that limit the borrowing level depending on the heterogeneity between concurrent and non-concurrent controls. We examine the impact of time trends, overlap between experimental treatment arms and entry times of arms in the trial on the operating characteristics of treatment effect estimators for each method under different patterns for the time trends. We argue under which conditions the methods lead to type 1 error control and discuss the gain in power compared to trials only using concurrent controls by means of a simulation study in which methods are compared.
Platform trials; External controls; Time trends
§ INTRODUCTION
Platform trials offer a highly efficient way to evaluate multiple treatments using a single infrastructure <cit.>. By allowing different experimental treatments to enter and exit at different times, platform trials facilitate a faster evaluation of new treatments as the efficacy of new treatments can be investigated as soon as they become available <cit.>. This approach optimizes resource utilization and enables a more rapid and comprehensive assessment of potential treatments <cit.>. Platform trials usually consider a shared control on the basis of which to assess the effectiveness of treatments. For treatment arms entering later, control data are distinguished between concurrent data, referring to data from patients in the control arm randomised concurrently in time with the entering arm, and non-concurrent data, referring to control data from patients in the trial randomised before the arm entered. The use of non-concurrent data in the analysis comparing the efficacy of treatments against control has been widely discussed in recent years <cit.>. Several approaches have been recently suggested to incorporate non-concurrent controls in platform trials <cit.>. Frequentist and Bayesian modelling approaches that utilize non-concurrent control data and adjust for time trends have been proposed. In trials with binary and continuous endpoints, frequentist methods have been proposed with adding time as a covariate to the regression model to adjust for temporal changes <cit.>. These adjustments were presented in the context of a platform trial with two experimental arms and a shared control. For trials with binary endpoints, Bayesian strategies include the Time Machine approach, which considers a Bayesian generalized linear model that smooths the control response over time <cit.>.
In the context of historical controls, the meta-analytic-predictive (MAP) Prior approach was proposed as a Bayesian borrowing method, that models the between-trial variation <cit.>. The MAP approach performs a prediction of the control effect in the trial from historical control data using random-effects meta-analytic methods. Simulation studies are an important tool to assess the operating characteristics of trials using external controls <cit.> and also recommended by regulatory authorities <cit.>. Although different comparative studies have been carried out through simulations to propose and/or evaluate the characteristics of different methods using external or historical controls <cit.>, there are so far no simulation studies comparing the recently published methods for the use of non-concurrent controls in platform trials. In this work, we consider trials with continuous data. We aim to extend existing methods for treatment-control comparisons incorporating non-concurrent control data and compare them in a simulation study. We focus mainly on frequentist and Bayesian modelling approaches that model the time trend as well as on Bayesian strategies that limit the borrowing level depending on the heterogeneity between concurrent and non-concurrent controls. More precisely, we consider frequentist model-based adjustments proposed in <cit.> and extend them for a more general setting of platform trials with K experimental arms (K≥ 2), adapt Bayesian meta-analytic-predictive approaches commonly used in the context of incorporating historical data into the analysis, and extend the Bayesian Time Machine proposed in <cit.> for platform trials with binary endpoints to continuous endpoints. We evaluate the statistical power and the type 1 error rate of the methods for individual treatment-control comparisons and compare them through an extensive simulation study. Specifically, we assess the statistical properties of using non-concurrent controls when using these methods over a wide range of settings, including different time trend patterns and scenarios with equal or different strengths of the time trend across arms, as well as varying the number of experimental arms added later, the frequency of new arms entering and the overlapping period between experimental arms throughout the trial. The paper is organised as follows. In Section <ref>, we describe the trial design and main notation, and present several methods for comparing treatment against control in platform trials using non-concurrent controls. In Section <ref>, we compare the different methods in a simulation study. We finish with conclusions and a discussion in Section <ref>. § METHODS TO USE NON-CONCURRENT CONTROLS Consider a randomized controlled trial evaluating the efficacy of multiple experimental treatments compared to a control treatment. Suppose that the experimental treatment arms enter sequentially into the trial up to a total number of K different experimental treatment arms. Participants are randomized equally between control and active treatment arms upon recruitment, where active refers to open for enrollment. Suppose that the sample sizes of experimental treatment arms are equal to n and the total sample size in the platform trial equals N. We denote by j ∈𝒥={1,⋯,N} the participant index and by k ∈𝒦={0,⋯,K} the treatment indicator, k=0 indicating the control treatment and k>0 indicating the experimental treatments ordered by entry times. Denote by y_j the response for the j-th participant, assumed to be continuous. 
For simplicity, we assume that the response is obtained at the same time as the participant enters the trial. We denote the treatment effect size for treatment k compared to the control by θ_k (k∈𝒦) and consider the respective hypothesis testing problem: H_0,k: θ_k ≤ 0 H_1,k: θ_k>0 In this work, we focus on the inference on those arms that enter the trial when it is already running, and therefore, for those where non-concurrent control data is available. We aim to compare experimental treatments against the control as soon as this experimental treatment arm leaves the trial. In what follows, we present several approaches to test H_0,k by incorporating the non-concurrent control data in the treatment-control analysis. In particular, we consider frequentist model-based adjustments in Section <ref>, the Bayesian Time Machine in Section <ref> and the Meta-Analytic-Predictive (MAP) approach in Section <ref>. The methods differ with respect to the data used for the analyses and in whether the time component of the trial is considered to adjust for potential trends and, if yes, how it is used. For readability, the notation and concepts specific to each method are introduced in its corresponding section. §.§ Frequentist model-based adjustments To evaluate the efficacy of experimental treatment k compared to the control, we consider a regression model, where time is modelled as a step function as proposed in <cit.>. To incorporate time into the model, we divide the platform trial duration into intervals where there is no change in the number of active arms. A new time period starts when an experimental treatment arm is added (starts being active) or removed (stops being active) from the platform. Assume that the study has S periods (S>1), and denote by s ∈𝒮= { 1, ⋯, S } the period indicator. The observed data is {(y_j, k_j, s_j), j=1, ⋯, N}, where k_j is the treatment of participant j; and s_j represents the period at which the participant j enters. For a given treatment k∈𝒦, we denote by S_k the period in which arm k finished, and by 𝒦_S_k⊆{1, ..., K} the set of experimental treatments in the platform prior or up to S_k. Figure <ref>-A) shows a platform trial with three experimental treatment arms and a control, where treatment arms enter sequentially, resulting in a platform trial with five periods. If the treatment under current evaluation is the treatment administrated in Arm 3, the non-concurrent data corresponds to the data from periods 1 and 2. To test H_0k, we consider the t-test coming from a frequentist linear model which estimates the treatment effects of experimental arms in the trial up to period S_k and includes time as a factor in the analysis of the trial. The model is given by: E(y_j) = η_0 + ∑_k' ∈𝒦_S_kθ_k'· I(k_j=k') + ∑_s=2^S_kτ_s · I(s_j=s) where η_0 denotes the response in the control group in the first period, θ_k represents the effect of the treatment k compared to control, and τ_s denotes a step-wise time effect between periods 1 and s. This approach models time using a step function, and implicitly assumes: (i) the period-time effect is the same for all arms in the platform trial, (ii) the time effect is constant in each period and (iii) this effect is additive in the model scale. As we test the efficacy of arm k when this leaves the platform, we fit the model using all data from the trial until experimental treatment arm k leaves the platform, i.e., { (y_j, k_j, s_j), j=1,...,N : s_j ≤ S_k }. 
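As an illustration only (the authors' own analyses rely on an R package referenced in the simulation section), fitting this model amounts to an ordinary linear regression with treatment and period entered as factors. The sketch below uses Python/statsmodels on synthetic data; the arm layout, effect and trend sizes, and all numbers are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)

# synthetic platform-trial data: periods 1-3; arm 1 runs in periods 1-2, arm 2 in periods 2-3,
# the control (k = 0) is shared throughout; a mild stepwise time trend is added to all responses
period = np.repeat([1, 2, 3], 300)
arms_by_period = {1: [0, 1], 2: [0, 1, 2], 3: [0, 2]}
treatment = np.concatenate([rng.choice(arms_by_period[s], 300) for s in (1, 2, 3)])
y = 0.15 * (period - 1) + 0.25 * (treatment == 2) + rng.normal(size=period.size)
dat = pd.DataFrame({"y": y, "treatment": treatment, "period": period})

# regression with treatment and period as factors; data restricted to periods up to S_k (here all three)
fit = smf.ols("y ~ C(treatment) + C(period)", data=dat).fit()
theta2_hat = fit.params["C(treatment)[T.2]"]
se2 = fit.bse["C(treatment)[T.2]"]
t2 = theta2_hat / se2
p_one_sided = stats.t.sf(t2, df=fit.df_resid)   # test of H_{0,2}: theta_2 <= 0
print(f"theta_2 estimate = {theta2_hat:.3f} (SE {se2:.3f}), one-sided p = {p_one_sided:.4f}")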
Hence, data from all experimental arms that were active at some point up to S_k contribute to estimating the effect of time by means of τ_s (s=2, ..., S_k). Figure <ref>-A) illustrates the data used to fit the model and thus to adjust for time trends. §.§ Bayesian Time Machine modelling Saville et al. <cit.> proposed time-adjusted analyses to model potential temporal drifts over the trial. The model, so-called Bayesian Time Machine, is built on the basis of a generalized linear model to evaluate multiple experimental treatment arms versus a control in trials with binary endpoints, extended here for continuous endpoints. In the Time Machine model, the time is incorporated differently than in the frequentist model. Instead of considering the variable “period”, the time is adjusted using equal-sized time intervals, called “time buckets”, indexed backwards from the most recent time interval to the beginning of the trial. Figure <ref>-B) illustrates a platform trial with three experimental treatment arms and a control, splitting the time duration into time buckets. Aiming at comparing a given treatment k∈𝒦 against control and assuming that the trial has a total of C_k time buckets (C_k>1) when arm k leaves the trial, we denote by c ∈𝒞= { 1, ⋯, C_k } the bucket indicator, where c=1 corresponds to the last time bucket in which treatment k is active in the trial and c=C_k denotes the beginning of the trial. Analogously to the previous section, the observed data is {(y_j, k_j, c_j), j=1, ⋯, N}, where y_j and k_j are the continuous response and treatment indicator for participant j as before, but now the time information is given by c_j that represents the time bucket at which the participant j enters. Denote by 𝒦_C_k⊆{1, ..., K} the set of active treatments in the platform prior to or up to C_k. We extend the model to trials with continuous endpoints as follows: Y_j = E(Y_j) + ϵ_j E(Y_j) = η_0 + ∑_k' ∈𝒦_C_kθ_k'· I(k_j=k') + ∑_c=2^C_kω_c· I(c_j=c) where η_0 is the intercept and θ_k are the treatment effects, with typically (nearly) non-informative prior distributions that depend on the scale of the data: η_0 ∼ N(0,σ^2_η_0) θ_k ∼ N(0,σ^2_θ) The parameter ω_c is the increment predictor for the time bucket and quantifies the drift over time, where ω_1 corresponds to the most recent time interval. Similarly, as in <cit.>, for every previous time interval, the time parameter is modeled with the following Bayesian second-order normal dynamic linear model: ω_1 = 0 ω_2 ∼ N(0, 1/τ) ω_c ∼ N(2ω_c-1 - ω_c-2, 1/τ) , 3≤ c≤ C_k The precision parameter τ is the inverse of the variance and specifies the degree of smoothing over time intervals. A hyperprior distribution is then specified as follows: τ ∼ Gamma(a_τ, b_τ) The precision of the individual participant responses, ϵ_j, is also assumed to have a Gamma hyperprior distribution ϵ_j ∼ N (0, 1 / τ_Y) with τ_Y ∼Gamma(a_Y, b_Y). Analogously as for the frequentist model, and as illustrated in Figure <ref>-B), we fit the Time Machine model using all data from the trial until the experimental treatment arm k leaves the platform, i.e., {(y_j, k_j, c_j), j=1, ⋯, N : c_j ≤ C_k }. A non-informative prior may be used for τ_Y. However, using this type of prior also for τ is generally not appropriate when the number of time intervals may be small <cit.>. Rather, a weakly informative approach is recommended, whereby the bulk of the prior density covers an a-priori plausible region of the parameter space. 
To calibrate such a weakly informative prior, it can be helpful to think about plausible changes in E(Y) between time buckets c and c-1. All such changes ω_c-1 - ω_c have a standard deviation of τ^-1/2. The first change ω_C-1 - ω_C has a mean of zero, while all subsequent changes have a mean equal to the previous change. We could then consider what is our most plausible value for τ^-1/2, denoted D_Expected, and what do we consider a very large value of τ^-1/2, denoted D_Maximum, such that we have only a small belief, ι, that τ^-1/2 > D_Maximum. That is, we solve E(τ) = 1/D_Expected^2 P(τ < 1/D_Maximum^2) = ι for a_τ and b_τ, given some small value of ι, e.g. 0.01. As in the frequentist model, the Time Machine models time using a step function and assumes that the period-time effect is the same for all arms in the platform trial and that this effect is additive in the model scale. Another similarity between the two approaches is the fact that both models use all available data to estimate the effect of time. §.§ Meta-Analytic Predictive prior approaches The Meta-Analytic Predictive (MAP) prior approach was proposed as a method to involve data from multiple historical studies <cit.> in the final analysis of a current clinical trial. It aims at summarising relevant sources of information (data from historical controls) while accounting for between-trial heterogeneity. The resulting distribution, the MAP prior, is then used as an informative prior for the concurrent controls in the final analysis. The MAP prior is derived using a random-effects meta-analysis model with a subsequent prediction for the control mean of a future trial. In this work, we consider the MAP prior as an approach to perform the treatment-control comparisons in platform trials using non-concurrent controls. More precicesly, in the context of platform trials, the MAP approach can be used to derive the prior distribution for the control response in the concurrent periods by combining the control information from the non-concurrent periods with an initial non-informative prior. To evaluate the efficacy of treatment k compared to the control, consider the non-concurrent control responses Y_NCC = { y_j : k_j=0, s_j≤S_k} where S_k denotes the period preceding k entering the trial. Figure <ref>-C) illustrates the periods (defined in Sect. <ref>) preceding Arm 3. Note that Y_NCC are the non-concurrent controls with respect to experimental treatment arm k, but we omit the dependence on k in the notation for simplicity. For y_j∈ Y_NCC, let y_j|η_s_j∼ f(η_s_j), where f is the likelihood function. In order to borrow strength from the source information from the different periods, we consider a hierarchical model for the control response in period s that links the parameters from the different periods: η_s = β + ν_s ν_s ∼ N(0, τ^2) where s=1, ..., S_k, omitting here the subindex j in s_j for simplicity. In this model, β is the population mean in the control, and ν_s is the variability introduced by the periods with mean 0 and standard deviation τ, which can be interpreted as the between-period heterogeneity. For β and τ, the following hyperprior distributions are assumed: β ∼ N(0, σ^2_β) τ ∼ HalfNormal(0, σ^2_τ) The data from period S_k+1 and beyond is not available before experimental treatment arm k is included in the platform trial. Therefore, the posterior of the parameters is based on non-concurrent data only. 
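A minimal computational sketch of this step (an assumption of ours, not the implementation used in the paper) is given below in Python with PyMC: the hierarchical model is fitted to the non-concurrent control responses, and draws of a new period mean β + N(0, τ) then approximate the MAP prior discussed next. The synthetic data, the residual-dispersion prior and the specific hyperparameter values are all illustrative.

import numpy as np
import pymc as pm

rng = np.random.default_rng(2)

# non-concurrent control responses from three pre-entry periods (synthetic, assumed values)
period_idx = np.repeat([0, 1, 2], 80)
y_ncc = rng.normal(0.1 * period_idx, 1.0)

with pm.Model():
    beta = pm.Normal("beta", mu=0.0, sigma=np.sqrt(1000.0))   # population control mean
    tau = pm.HalfNormal("tau", sigma=np.sqrt(1.0 / 0.002))    # between-period heterogeneity
    eta = pm.Normal("eta", mu=beta, sigma=tau, shape=3)       # period-specific control means
    sigma_y = pm.HalfNormal("sigma_y", sigma=5.0)             # residual spread of the responses
    pm.Normal("y", mu=eta[period_idx], sigma=sigma_y, observed=y_ncc)
    idata = pm.sample(2000, tune=1000, target_accept=0.9, random_seed=2)

# Monte Carlo approximation of the MAP prior: the predicted control mean of a new period
beta_draws = idata.posterior["beta"].values.ravel()
tau_draws = idata.posterior["tau"].values.ravel()
eta_new = rng.normal(beta_draws, tau_draws)
print(f"MAP prior for the concurrent control mean: mean {eta_new.mean():.2f}, sd {eta_new.std():.2f}")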
The prior information on the concurrent controls is the posterior for the above-specified model, called the MAP prior, that is p_MAP(η_S_k) = P(η_S_k|Y_NCC) Once the concurrent control data for treatment k is available, the posterior for η_S_k can be obtained as p(η_S_k|Y_CC) ∝ p(Y_CC|η_S_k) · p_MAP(η_S_k), where Y_CC are the concurrent controls, that is, Y_CC = { y_j : k_j=0, S_k< s_j≤ S_k}. The MAP prior can be robustified to avoid prior-data conflicts. This is achieved by adding a weakly-informative mixture component p_non-inf, resulting in the prior distribution p_rMAP=(1-a_R)p_MAP+a_R p_non-inf where a_R is a weight that can be interpreted as the degree of scepticism towards borrowing strength. Note that the MAP approach is conceptually different from the previous two models. Here, the hierarchical model (and thus the use of non-concurrent controls) is only to build the prior of the concurrent controls. In addition, data from other experimental treatment arms are not used in this case (see Figure <ref>-C)) for a representation of the data usage). § SIMULATION STUDY We simulated platform trials evaluating the efficacy of K experimental treatment arms compared to a shared control. Arm k (k>1) enters after d_k participants have been recruited to the trial and d_1=0. To investigate the properties of the methods for utilising non-concurrent controls under different situations, we considered three trial settings explained in Sect. <ref>. The settings differ from each other in the number of experimental arms, K, and the overlaps between arms, 𝐝 = (d_1,...,d_K). In all three settings, we assume all experimental arms were equal sized with sample sizes of 250, equal allocation among control and treatment in each period as well as block randomisation per period. Note that the sample size for the control arm (and thus for the overall trial) varies depending on the entry pattern and overlapping between experimental treatment arms. We compare the performance of the frequentist regression model (Section <ref>), Bayesian Time Machine (Sect. <ref>) and MAP prior (Sect. <ref>) in terms of individual power and type 1 error rate. For comparative purposes, we also considered the so-called separate analysis (t-test using concurrent controls), and the so-called pooled analysis (t-test pooling concurrent and non-concurrent control data without adjustments). We describe the choice of the priors for the Time Machine and MAP approaches in Section <ref>. We performed all computations using software. For the simulation and analysis, we used the R package <cit.>. The code to reproduce the results is available at <https://github.com/pavlakrotka/NCC_MethodsComp>. §.§ Data generation and trial settings We simulated trials with continuous data using the generating model E(Y_j) = η_0 + ∑_k=1,...,Kθ_k · I(k_j = k) + f(t_j), where Y_j, η_0 and θ_k_j refer to the continuous response, the control response and treatment effects, respectively. We furthermore assume that the error terms in the responses are identically and independently normally distributed with zero mean and homoscedastic variances equal to 1. We assumed effect sizes of θ_k = 0.25, k=1,...,K for the treatment-control comparisons under the alternative hypothesis, and a response of 0 in the control. When evaluating under the null hypothesis, we simulate all experimental treatment arms under the null hypothesis, while when evaluating under the alternative hypothesis, we assume all experimental treatment arms under the alternative hypothesis. 
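The generating model can be sketched in a few lines of Python (purely illustrative and not the simulation package used for the reported results; block randomisation per period is simplified here to uniform allocation among the arms that are open when a participant enters, and only a linear trend is shown, whereas the specific trend patterns used in the study are defined below).

import numpy as np

def simulate_platform_trial(K=3, n_arm=250, d=250, theta=0.25, lam=0.15, seed=0):
    """Simplified sketch: arm k joins after d*(k-1) recruits and leaves once it has n_arm
    participants; the shared control recruits throughout; a linear time trend f(j) is
    added to every response and theta is the common treatment effect."""
    rng = np.random.default_rng(seed)
    entry = {k: d * (k - 1) for k in range(1, K + 1)}
    n_alloc = {k: 0 for k in range(0, K + 1)}
    recruits = []
    j = 0
    while any(n_alloc[k] < n_arm for k in range(1, K + 1)):
        active = [k for k in range(1, K + 1) if j >= entry[k] and n_alloc[k] < n_arm]
        arm = rng.choice([0] + active)       # control shares the allocation with all open arms
        n_alloc[arm] += 1
        recruits.append((j, arm))
        j += 1
    js = np.array([r[0] for r in recruits], dtype=float)
    arms = np.array([r[1] for r in recruits])
    N = len(recruits)
    f = lam * js / (N - 1)                   # linear time trend f(j) = lam * (j-1)/(N-1)
    y = theta * (arms > 0) + f + rng.normal(size=N)
    return js, arms, y

js, arms, y = simulate_platform_trial()
print(f"total sample size N = {len(y)}, of which controls: {(arms == 0).sum()}")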
The chosen sample and effect sizes lead to 80% power for the treatment-control comparison using a separate analysis (one-sided t-test at 2.5% significance level using only concurrent controls). The term f(·) represents the time trend function and t_j is the calendar time when participant j is enrolled in the trial. We assume that one and only one patient enters the trial at a particular calendar time, so that t_j=j for all j. Similarly as in <cit.>, we considered three time trends patterns: * Stepwise time trend: f(j) = λ_k_j· (c_j - 1), where c_j is the number of experimental treatment arms have already entered the ongoing trial when participant j was enrolled * Linear time trend: f(j) = λ_k_j·j-1/N-1, where N is the total sample size in the trial * Inverted-U time trend: f(j) = λ·j-1/N-1 for j ≤ N_p, and f(j) = -λ·j-N_p/N-1 + λ·N_p-1/N-1 for j > N_p, where N_p denotes the sample size at which the form of the time trends changes. N_p is set to approximately N/2 so that the peak is always approximately in the middle of the trial. where the parameter λ_k_j quantifies the strength of the time trend. Note that the functional form of the time trend is assumed to be equal across arms. If the strength of the time trend is equal across arms, then we will say that it satisfies the assumption of equal time trends. Figure <ref> illustrates the time trend patterns. Note that the stepwise time trend is more severe in terms of the change in mean responses over time. Also, note that the linear time trend is more pronounced for a specific arm when there are more arms open compared to when there are fewer arms open, given that in the first case, the arm under question runs for a longer time than in the second case. To investigate the impact of time trends, entry times, and overlaps between arms, we consider the following platform trial settings, varying, in each of them, different elements to understand their implications in the analysis: Setting I: Platform trial with three experimental arms and equidistant entry times. We explore the effect of the overlap between arms in trials with K=3 experimental arms, in settings with equal time trends and different time trends. We examine a platform trial with three experimental treatment arms, where treatment arm k enters after every d_k = d · (k-1) participants have joined the trial. We consider five options for d: d = 0, 125, 250, 375 and 500, resulting in platform trials with different overlaps between arms. We consider linear, stepwise and inverted-U time trends. As for the strength of the time trend, we consider time trends that are equal across all arms (λ_k=λ_0, ∀ k≥1), that differ from the control in arm 1 (λ_3=λ_2=λ_0 ≠λ_1), and that differ from the control in arms 1 and 2 (λ_3=λ_0 ≠λ_1=λ_2). We investigate the type 1 error and power of comparing arm 3 against control in cases with different time trends. Note that when d=0, all arms enter and finish at the same time, so we are in a classical multi-arm trial setting, while if d=500, a new treatment arm enters when the preceding one ends. Setting II: Platform trial with four arms and non-equidistant entry times. We investigate the impact of different entry times in trials with K=4 arms and equal stepwise time trends. Focusing on evaluating the treatment efficacy of arm 4 against control, we examine a platform trial with four experimental treatment arms, where treatment arms 2 and 4 enter after every d_2=300 and d_4=800 participants have been recruited, respectively. 
For the timing of adding the treatment arm 3, we consider d_3 = 300, 425, 550, 675, 800. Setting III: Platform trials with multiple arms and equidistant entry times We explore the operating characteristics of platform trials with K=10 arms. We investigate the impact of different time trends between arms with potentially random strength. We consider a platform trial with K=10 experimental treatment arms, where treatment arm k enters after every 300 · (k-1) participants have been recruited to the trial. We use linear time trends only and focus on evaluating the efficacy of arm 10 against control. Here, the time trend in arm 10 is equal to the control group (λ_0 = λ_10), for which we consider different values. The time trend in the remaining treatment arms is sampled from λ_k ∼ N(λ_0, 0.5), ∀ k ∈{1,…,9}. We illustrate the settings in Figure <ref> and summarise the investigated aspect and considered parameters in Table <ref>. For each considered scenario, we used 10,000 simulation replicates. §.§ Choice of Priors for the Bayesian Approaches For the Bayesian approaches, Time Machine model and MAP prior approach, we consider different parameter constellations for the priors to investigate the robustness of the results with respect to design parameter assumptions. Time Machine. We used bucket sizes of 25 in all settings. For the values of the prior distributions' hyperparameters, we considered precisions of the prior regarding the treatment effect and control response to be 0.001 and 0.001, corresponding to the reciprocal of σ^2_η_0 and σ^2_θ, and then η_0 ∼ N(0,1000) and θ_k ∼ N(0,1000). We consider different hyperprior distributions for the drift parameter, τ∼ Gamma(a_τ,b_τ), depending on the design. In Setting I with equal time trends, we investigate the impact of the choice of value of the prior distribution of the time drift. For this, suppose that at the design stage of the trial, we define the prior of the time drift by assuming a stepwise functional form of the time trend where the strength of the time trend value is a nuisance parameter. We calibrate the values of the prior for τ by anticipating the expected change between buckets by means of D_Expected and D_Maximum as described in Sect. <ref>. The values considered for D_Expected and D_Maximum are summarised in Table <ref>. In all cases, we set ι=0.01. In settings II and III, as well as in scenarios with different time trends in Setting I, we set the values of the prior of the time trend corresponding to the assumption of expected change of D_Expected=1 and maximal change of D_Maximum=1.5. Additional results using D_Expected=0.01 and D_Maximum=0.15 are to be found in Section B of the Supplementary Material. MAP prior approach. Similarly to the Time Machine, the prior distributions for the treatment effects are θ_k ∼ N(0,1000). The weight used for the non-informative component for the robustification of the MAP prior was set to 0.1 (corresponding to a weight of 0.9 for the MAP component). We used unit-information priors for the weakly-informative mixture component, p_non-inf, of the robustified MAP as suggested by <cit.>. In Setting I with equal time trends, we consider different cases for the choice of the precision parameter of the half-normal hyperprior for the between-period heterogeneity: 1/σ^2_τ∈{2, 0.2, 0.002}, as well as two different cases for the precision parameter of the normal hyperprior: 1/σ^2_β∈{ 0.001, 1 }. 
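As a brief aside on the Time Machine calibration above, the pair (a_τ, b_τ) implied by a given (D_Expected, D_Maximum, ι) can be obtained by one-dimensional root finding; the sketch below (our own, assuming scipy) computes the hyperparameters implied by the two calibrations used in this section.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

def calibrate_drift_prior(d_expected, d_maximum, iota=0.01):
    """Solve E(tau) = 1/d_expected^2 and P(tau < 1/d_maximum^2) = iota for the
    shape a and rate b of the Gamma(a, b) hyperprior on the drift precision tau."""
    mean = 1.0 / d_expected**2
    cut = 1.0 / d_maximum**2
    # imposing the mean fixes b = a / mean, leaving a single equation in the shape a
    g = lambda a: gamma.cdf(cut, a, scale=mean / a) - iota
    a = brentq(g, 1e-6, 1e4)
    return a, a / mean

for d_exp, d_max in [(1.0, 1.5), (0.01, 0.15)]:
    a_tau, b_tau = calibrate_drift_prior(d_exp, d_max)
    print(f"D_Expected = {d_exp}, D_Maximum = {d_max}: a_tau = {a_tau:.3g}, b_tau = {b_tau:.3g}")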
In the remaining cases, the precision 1/σ^2_τ was set to 0.002 and the value of the precision parameter of the normal hyperprior to 1/σ^2_β=0.001, corresponding to the following prior distribution for the control mean: μ_η∼𝒩(0, 1000). Additional results using 1/σ^2_β = 1 are to be found in Section B of the Supplementary Material. §.§ Results §.§.§ Setting I: Three experimental arms and equidistant entry times We describe the impact of the calibration of the priors in Figure <ref>. The left plot shows type 1 error curves with respect to the strength of the time trend for different calibrations of the prior of time drift for the Time Machine, and the right plot shows the analogous plot for the calibration of the prior for the MAP approach. For the Time Machine, as expected, for positive time trends, higher type 1 error inflation is found in scenarios in which there was a firmer belief in smooth time trends (i.e. small values of D_Expected – see, for instance, the curve corresponding to D_Expected=0.001). In addition, the type 1 error rate increases with respect to the strength of the time trend in scenarios that used small values for the expected change between time buckets, D_Expected. When we assume larger values for the expected change in time drifts in the calibration, then the type 1 error rate is, in general, maintained regardless of the strength of the time trend. When assuming intermediate values as D_Expected=0.01, the type 1 error rate is maintained under weak time trends, but for strong trends, there is a type 1 error inflation in case of positive trends, and type 1 error conservatism in case of negative trends, falling below the pre-defined significance level. Thus, for the scenarios that follow and also in Settings II and III, we will use the calibration D_Expected=1 and D_Maximum=1.5 since the resulting prior lead to, in general, type 1 error rate control. In the Supplementary material, we include the results when using D_Expected=0.01 and D_Maximum=0.15, which results in a mild level of type 1 error inflation under equal time trends across arms, but achieves larger power. Similarly, when calibrating the MAP approach, we vary the between-period heterogeneity and variability of responses in the control group, which influence the amount of borrowing. The more heterogeneity and variability assumed, the less borrowing and the more type 1 error control, while more borrowing results in a higher power. As it is well-known <cit.>, the risk of type 1 error inflation exists when data is borrowed in such methods, and strict control of type 1 error rate implies that no power gain is possible. Jiao et al. <cit.> also showed that the inflation of the type 1 error rate is bounded in the case of the MAP approach since when concurrent and non-concurrent controls are very dissimilar, the approach does not borrow data from the past and the inflation then decreases. This behaviour can also be observed in Figure <ref> for an increasing λ. In the scenarios below, we use an intermediate calibration to serve as a trade-off between error control and power gains. Next, we investigate the impact of the overlap of experimental treatment arms on the power and type 1 error in trials with equal time trends and equidistant entry times for treatment arms. The left plot in Figure <ref> shows type 1 error curves with respect to the difference between the entry time of one experimental arm and the next one, d. Therefore, the smaller the d, the more overlap between experimental treatment arms. 
Also note that the larger the value of d, the larger the size of non-concurrent controls. Thus, the inflation in type 1 error for the pooled analysis, which does not adjust for time drifts, increases with d. We can also see a small inflation in the considered scenarios for the MAP approach for increasing d. The type 1 error rate is maintained when using the frequentist regression model and the Time Machine (with D_Expected=1 and D_Maximum=1.5). In terms of power, the pooled analysis gives rise to the most powerful method at the cost of the inflation in type 1 error discussed above. Time Machine and the frequentist model perform similarly and achieve a power increase as compared to the separate approach. When smaller time drifts in the Time Machine prior calibration are assumed (for instance, D_Expected=0.01 and D_Maximum=0.15), the Time Machine has larger power over the frequentist regression model (see Supplementary material, Figure 3). If assuming larger time drifts, the Time Machine controls the type 1 error but limits borrowing non-concurrent controls. As a result, it leads to similar gains as the frequentist regression model approach, as we can observe in Figure <ref>. As for the MAP approach, the power for the Time Machine increases with d in the case of smaller time drift calibration (see Supplementary material, Figure 3). For larger time drift calibration, the Time Machine behaves similarly to the frequentist regression model. In such a case, both models reach their maximum at an intermediate value of d and then decrease (see Figure <ref>). This is because the Time Machine utilises the non-concurrent data to update the prior of the time drift even if there is no overlap between experimental treatment arms. However, in the frequentist model, only if there is an overlap between arms will there be an estimated time period effect. If not, then the non-concurrent data is not used, and thus, the power coincides with the power of separate trials. Another point to note is that evidently the later is the entry of the arm under evaluation, the larger the potential inflation of type 1 error (if any) and the larger the increase in power when using non-concurrent controls. We can observe that the type 1 error behaviour is magnified when we compare the results obtained for arm 3 (dotted line) with the results of arm 2 (solid line). This is because treatment arm 3 enters later, and therefore, its corresponding analysis makes more use of non-concurrent data, thus increasing the potential inflation. As mentioned above, the frequentist model (Sect. <ref>) and the Time Machine (Sect. <ref>) assume that the effect of time is additive and affects all arms in the trial equally. Here, we also investigate the robustness of the methods when the assumption of equal time trends is violated. Figure <ref> shows the type 1 error rate results in trials with different time trends. The left panel refers to a trial where only time trends in arm 1 are different to the others, while the right panel refers to when the time trends are different in arms 1 and 2. We can observe how the frequentist regression and Time Machine fail to control the type 1 error rate. Inflation becomes more pronounced the more different the trend in arm 1 is compared to the rest (i.e., as larger λ_0-λ_1) (see left plot). We can also see that the pattern worsens when both arm 1 and arm 2 are different (see right plot). Figure <ref> presents the case under stepwise time trends. 
However, it is worth noting that such inflation is lower when the trends are linear or even inverted-U time trends (see Supplementary Material, Figure 2). §.§.§ Setting II: Four experimental arms and non-equidistant entry times Next, we evaluate trials with non-equidistant arm entry under equal time trends across arms. In particular, we evaluate a four-arm trial where arm 3 enters at some point between arms 2 and 4. The first row in Figure <ref> shows the type 1 error with respect to the entry time of arm 3, d_3, according to the pattern of the time trend. We can see that Bayesian methods are affected by such a change in this case, and the type 1 error rate is considerably affected, especially in stepwise time trends. In the case of the Time Machine, this loss of control is especially evident when assuming medium or small time drifts in the prior calibration (see, for instance, the results when using D_Expected=0.01 and D_Maximum=0.15 in the Supplementary Material, Figure 5). This is because, on the one hand, the effect of the time trend is greater with such a change since there are more jumps in a shorter time. Also, in the case of d_2=d_3, even the jump is doubled, which means that there is a greater difference between the concurrent and non-concurrent controls than in the previously discussed settings. On the other hand, the Time Machine assumes that the time drift will be similar over time, which is not the case here, as the time drifts depend on the entry times of the arms in the stepwise time trends, and arms do not enter in an equidistant manner. For the MAP approach, we can see that the inflation remains constant with respect to the entry time d_3, but increases under stepwise time trends. The reason is that when dealing with linear or inverted-U trends, the time trend increases gradually, whereas the changes are more abrupt and severe with stepwise, resulting in more inflation. When inspecting the power in Figure <ref> (second row), we can see that the power obtained when using the Time Machine is slightly higher than the power using the frequentist regression, in which case the type 1 error rate was controlled when using the Time Machine but consistently larger than the one obtained by using the frequentist model. §.§.§ Setting III: Ten experimental arms and equidistant entry times Here, we assume a longer trial, with ten experimental treatment arms. We assume that the strength of the time trends may vary between treatments but that they are equally distributed according to a normal distribution with a mean equal to the time trend strength in the control group and equal variance. Under this setting, we can see in Figure <ref> that the type 1 error rate for arm 10 vs control is maintained when using frequentist regression, and so its performance is similar to that of the separate approach. This is because the differences in time trends across arms are averaged out, and the effect of different time trends gets diluted. This is also the case for the Time Machine when using D_Expected=1, but not with D_Expected=0.01 (see Supplementary material, Figure 6). For the later case of the Time Machine and also for the MAP, there is a perceived loss of type 1 error rate control, leading to either conservative tests or inflation of the type 1 error rate depending on whether the intensity in the control group is negative or positive, respectively. § DISCUSSION In platform trials, arms enter and leave while the control arm continues enrolling participants. 
Adding a new arm in a platform trial comes with pre-existing data from control participants, referred to as non-concurrent controls. The development of methods to utilise non-concurrent controls in platform trials is an area undergoing rapid evolution, engaging both academia and regulatory bodies. Deciding whether to incorporate these controls into the analysis of treatment versus control involves weighing advantages and disadvantages. Utilising non-concurrent controls offers the potential benefit of providing information about the control arm, leading to more precise estimates and enhanced power for comparing treatment effectiveness. However, it is essential to recognise that non-concurrent controls do not come from the same randomised process as the concurrent controls but rather stem from an earlier time of the trial. This temporal distinction between the earlier and current parts of the trial can introduce confounding factors into the analysis, especially if time trends are present. Current research is principally focused on modeling time effects, exploring diverse models, identifying necessary assumptions for robust performance, and investigating the worst-case scenarios in case those assumptions are not met. In this work, we primarily aimed to compare current methods to incorporate non-concurrent controls. But we also extend existing methods for treatment-control comparisons incorporating non-concurrent control data. The Time Machine approach <cit.> was originally proposed for trials with binary endpoints, and here we extended it to continuous endpoints. In addition, we have extended the Bayesian meta-analytic-predictive (MAP) approach for utilizing historical controls, to use it for non-concurrent controls in platform trials. In this work, we investigate frequentist and Bayesian modelling approaches that model the time trend and Bayesian strategies that limit the borrowing level depending on the heterogeneity between concurrent and non-concurrent controls. In particular, we consider the frequentist model-based adjustments that were proposed in <cit.>, the MAP approach, and the Bayesian Time Machine. When evaluating the statistical power and the type 1 error rate for individual treatment-control comparisons, we examine the impact of time trends in each method under different patterns for the time trends and the role of the overlap between arms. When considering time trends, two important points need to be taken into account. The first would be the interaction of the trends with the treatment arms, i.e., whether we consider that the time trends will affect all arms similarly or not. Second, model-based approaches assume that the time trend is on the same scale as the response and that it is additive with respect to the model. This point might be especially relevant in trials with binary or survival endpoints, and it is worth further study in this direction. For trials with potentially different time trends across arms, there have been recent proposals to ease the assumption of equal time trends by taking into account the interaction between treatment and time as a random factor. This approach helps to lower the inflation of type 1 error in scenarios where there are different time trends between treatment groups. However, it is still not possible to maintain strict error control <cit.>. In case of interactions of the treatment effect with time, already the interpretation in a simple two-arm trial comparing a single experimental treatment to control can become tricky. 
E.g., assume an improvement in the control arm over time but none in the experimental arm. If there is enough difference early in the trial, the final results might yield a statistically significant difference. But actually, at the end of the trial, there might be no advantage over the control arm left. In standard trials, this time aspect is usually not investigated and, therefore, not discussed. But, platform trials now offer the opportunity to reveal such issues, especially when considering the utilization of non-concurrent control data for decision-making. We have seen that interactions of the treatment effect with time affect modelling-based methods, like frequentist and Time Machine, which may lose control of type 1 error. Also, we saw that the effect of unequal time trends is more pronounced when more than one arm presents unequal time trends, and in addition, those arms do have equal time trends between them. On the other hand, if the trends follow the same pattern and the strength of the trend is different in the different arms but the same on average, the frequentist regression keeps the type 1 error rate under control. Regarding overlap between experimental treatment arms, intermediate overlaps result in the highest gains in power for the frequentist model. This is also true for the Time Machine model when considering priors controlling the type 1 error, where the results are similar to those of the frequentist model. If there is no overlap, the frequentist model loses power and the Time Machine can lose type 1 error control. The MAP approach does not directly adjust for time trends and, in general, as was the case in trials incorporating external controls, does not control the type 1 error in the presence of time trends <cit.>. Both MAP approach and Time Machine rely on the assumptions regarding the prior distributions. We have seen that in the case of Time Machine, if one restricts to priors that give rise to type 1 error rate control, the gain in power is minimal compared to the frequentist regression model. If one chooses priors potentially leading to more inflation of type 1 error rate, one can gain more power than when using the frequentist regression. There is extensive literature on Bayesian methods for using historical controls (see <cit.> for a review of methods); we have chosen MAP as it is one of the most well-known and widely used. Other methods, such as power prior approaches, could be considered and further explored in this context. When considering the MAP prior approach, we keep the original idea of this approach and consider each period as if it were a different source of information. This way of constructing the MAP does not, however, take into account the order of the periods. One could consider extending the approach by including the time variable or stochastic order of the periods analogously to the Time Machine. Modeling approaches, such as the frequentist regression and Bayesian Time Machine, align with the recent Food and Drug Administration (FDA) draft guidance on Master Protocols <cit.>, which suggests the use of stratified analyses to avoid bias caused by time trends. If and how non-concurrent control data will be used in platform trials already needs a pre-specification in the master protocol <cit.> without any knowledge of the control arm when adding later treatment arms. The justification should ideally also include a discussion on which assumptions have to be taken for the analysis considered. 
Regardless of the chosen method, a clear understanding of the required assumptions is crucial. If these assumptions are met, results can benefit from more precise estimates and a gain in statistical power. Taking risks may be acceptable in some cases, but a thorough awareness of them is imperative, and they should be quantified before they are taken. Moreover, these assumptions play a pivotal role in regulatory interactions. Proposing a design that utilises non-concurrent controls requires a precise statement of the assumptions being made, an accurate assessment of the robustness of the proposed method with respect to the scenarios to be envisaged in the specific indication, and an evaluation of the risks to be taken and the potential gains. § SUPPLEMENTARY MATERIAL Additional results from the simulation study. The GitHub repository (<https://github.com/pavlakrotka/NCC_MethodsComp>) contains the R code to reproduce the results of the simulation study. § ACKNOWLEDGMENTS EU-PEARL (EU Patient-cEntric clinicAl tRial pLatforms) project has received funding from the Innovative Medicines Initiative (IMI) 2 Joint Undertaking (JU) under grant agreement No 853966. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and EFPIA and Children’s Tumor Foundation, Global Alliance for TB Drug Development non-profit organisation, and SpringWorks Therapeutics Inc. This publication reflects the authors’ views. Neither IMI nor the European Union, EFPIA, or any Associated Partners are responsible for any use that may be made of the information contained herein. The Paul Ehrlich Institute receives funding exclusively from the EU Commission. This research was funded in whole, or in part, by the Austrian Science Fund (FWF) [ESP 442 ESPRIT-Programm]. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. § CONFLICT OF INTEREST Dominic Magirr declares a competing interest as an employee of Novartis Pharma AG. Peter Jacko and Tom Parke are employees of Berry Consultants, a consulting company that specialises in the design, conduct, and analysis of Bayesian and adaptive clinical trials. The rest of the authors declare that they have no competing interests regarding the content of this article.
http://arxiv.org/abs/2407.13539v1
20240718141454
The separatrix operational space of next-step fusion experiments: From ASDEX Upgrade data to SPARC scenarios
[ "Thomas Eich", "Thomas Body", "Michael Faitsch", "Ondrej Grover", "Marco Andres Miller", "Peter Manz", "Tom Looby", "Adam Qingyang Kuang", "Andreas Redl", "Matt Reinke", "Alex J. Creely", "Devon Battaglia", "Jon Hillesheim", "Mike Wigram", "Jerry W. Hughes", "the ASDEX Upgrade team" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
§ ABSTRACT Fusion power plants require ELM-free, detached operation to prevent divertor damage and erosion. The separatrix operational space (SepOS) is proposed as a tool for identifying access to the type-I ELM-free quasi-continuous exhaust regime. In this work, we recast the SepOS framework using simple parameters and present dedicated ASDEX Upgrade discharges to demonstrate how to interpret its results. Analyzing an extended ASDEX Upgrade database consisting of 6688 individual measurements, we show that SepOS accurately describes how the H-mode boundary varies with plasma current and magnetic field strength. We then introduce a normalized SepOS framework and LH minimum scaling and show that normalized boundaries across multiple machines are nearly identical, suggesting that the normalized SepOS can be used to translate results between different machines. The LH minimum density predicted by SepOS is found to closely match an experimentally determined multi-machine scaling, which provides a further indirect validation of SepOS across multiple devices. Finally, we demonstrate how SepOS can be used predictively, identifying a viable QCE operational point for SPARC, at n_e,sep=4×10^20m^-3, T_e,sep=156eV and α_t=0.7 — a value solidly within the QCE operational space on ASDEX Upgrade. This demonstrates how SepOS provides a concise, intuitive method for scoping ELM-free operation on next-step devices. § INTRODUCTION Fusion power plants need to be designed with power exhaust in mind to ensure that they achieve the high performance and availability required for commercial success. Tokamak design must address two related challenges — the mitigation of the inter-ELM heat exhaust, and the avoidance or mitigation of large ELMs. 
Already on existing devices, these challenges require the careful planning of experiments to limit damage to the device walls and pollution of the core plasma. On fusion power plants, however, the challenge will be even more severe and will significantly constrain the viable operational space. Continuous operation with unmitigated type-I ELMs is likely not viable <cit.> due to the significant degradation of the thermo-mechanical properties of the divertor targets under high cumulative neutron fluences, and the projected increase in ELM energy fluxes<cit.>. Continuous divertor detachment will be required throughout the flattop as well as part of the ramp-up and ramp-down to reduce the risk of tile-cracking due to thermal stresses, melting due to bulk heating, and target erosion and core pollution due to sputtering. Fusion power plants must therefore be able to achieve sufficient performance and power production in detached, ELM-free scenarios — and, ideally, should be designed for access to such scenarios. For this, we need models which are accurate enough to drive the optimization in the correct direction, while still being fast enough to quickly evaluate many potential design points in an optimization loop. We propose the extended separatrix operational space (SepOS) as a scoping tool, building on the model developed to describe ASDEX Upgrade discharges in Eich & Manz et al., 2021 <cit.>. We first present the model in section <ref> — providing a concise set of easily-implementable equations. We then highlight work to validate the model. To demonstrate how the SepOS describes transitions between different regimes for fixed conditions, we show a series of dedicated ASDEX Upgrade discharges in section <ref>. We then extend this comparison to show that the framework is valid even as the magnetic field strength and plasma current is varied in section <ref>. Varying these parameters is shown to shift the regime boundaries. To compare the SepOS predicted for different conditions, we develop a scaling for the LH minimum-density point in section <ref> and show that normalizing to this point results in nearly identical operational spaces across different conditions. This finding hints that we can use the framework to translate the results of one device to another. As a demonstration of how the normalized SepOS can be used for scoping ELM-free operations, we apply it to identify the QCE regime on SPARC in section <ref>. Finally, we discuss ongoing and future work to validate the SepOS framework under a broader range of conditions, including on other machines and with strong impurity seeding for detachment access. This ongoing work will be used to extend the SepOS framework, towards a reliable tool for designing fusion power plants with tolerable edge conditions in center view. § THE SEPARATRIX OPERATIONAL SPACE EXPRESSED WITH LOCAL PARAMETERS §.§ A data-driven approach In ASDEX Upgrade, we can classify discharges based on their separatrix density n_e,sep and temperature T_e,sep <cit.>. In figure <ref>, H-modes (indicated in blue) have a higher separatrix temperature than L-modes (indicated in green) for a given separatrix density. Above a certain density, density limit disruptions are observed — either as L-mode density limits (red), HL back-transitions leading to L-mode density limits (cyan) or H-mode density limits where no HL back-transition is observed (magenta). 
To describe the boundaries between L-modes, H-modes and density-limit-disruptions, we introduced the separatrix operational space (SepOS) in Eich & Manz et al., 2021 <cit.>. This framework describes the observed boundaries in terms of turbulent growth rates and shear-flow suppression close to the separatrix — building on the work of Rogers, Drake & Zeiler, 1998<cit.> and Scott, 2005 <cit.>. A key advantage of our approach compared to these earlier works was the availability of a large database of edge Thomson-scattering measurements from ASDEX Upgrade, which let us quickly check the validity of our proposed model. As seen in figure <ref>, this data-driven approach reproduces the experimental results remarkably well. However, it also highlights the need for careful validation of the model for other conditions — as we show in section <ref> — and for other machines and beyond the existing database — as we discuss in section <ref>. In this section we present the central equations of the separatrix operational space, expressed in terms of separatrix parameters, gradients and global machine parameters. The end result is a set of easily interpretable equations, which has been implemented in the publicly-available https://github.com/cfs-energy/cfspopcongithub.com/cfs-energy/cfspopcon. We do not intend for this section to be a complete discussion of the development or physics of the SepOS framework. Instead, we refer the reader to * Rogers, Drake & Zeiler, 1998 <cit.>, Scott, 2005 <cit.> and Chapters 11-14 of Scott, 2021 <cit.> for a discussion of the underlying physics, * Eich et al., 2020 <cit.> for the parametrization of near-separatrix gradients in terms of α_t, * Eich & Manz et al., 2021 <cit.> for the basics of the separatrix operational space, * Manz, Eich & Grover et al., 2023 <cit.> for an extended discussion of the L- and H- mode density limit, * Faitsch et al., 2023 <cit.> for the application of the SepOS framework to identify QCE access, and * Grover et al., 2024 <cit.> for an extension to treat unfavorable ∇ B ion drift cases. §.§ Identifying the separatrix using Spitzer-Harm power balancing The availability of high-resolution density and electron temperature profiles from the edge Thomson scattering system on ASDEX Upgrade was crucially important for the development of the separatrix operational space. The Thomson scattering measurement does not directly identify the separatrix, however, and due to the steep profiles in the edge, small errors in the magnetically-reconstructed separatrix position can introduce significant errors in the separatrix values. Instead, to determine the position of the separatrix, the separatrix temperature is estimated via Spitzer-Harm power balancing T_e,sep≈( 7/2f_cond f_tar q_∥,u L_∥/κ_e,0)^2/7 where f_cond≈ 1 is the conducted power fraction, q_∥,u=P_sepf_share/2π (R+a)λ_qB_t,omp/B_p,omp is the upstream parallel heat flux density directed towards the outboard target (for λ_q the turbulence-broadened heat-flux decay length <cit.>), L_∥ is the parallel connection length from the outboard midplane to the divertor target and κ_e,0 is the electron heat conductivity constant. We identify the separatrix as the point where the temperature profile measured via Thomson scattering matches the estimated T_e,sep — which lets us calculate the separatrix density n_e,sep. In practice, we actually use a point at ρ=0.999, which is taken at a position 0.57mm radially inwards at the outboard midplane from the separatrix position determined above. 
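As a rough numerical illustration of this estimate, the short sketch below evaluates the Spitzer-Harm expression for T_e,sep. All input values (P_sep, f_share, λ_q, L_∥, the field ratio) and the conductivity constant κ_e,0 ≈ 2000 W m^-1 eV^-7/2 are illustrative assumptions of ours, not values from the ASDEX Upgrade analysis.

import numpy as np

P_sep = 5.0e6        # power crossing the separatrix [W] (assumed)
f_share = 2/3        # share of P_sep directed to the outboard target (assumed)
f_cond, f_tar = 1.0, 1.0
R, a = 1.65, 0.50    # major and minor radius [m]
lambda_q = 2.0e-3    # heat-flux decay length [m] (assumed)
Bt_over_Bp = 3.5     # B_t/B_p at the outboard midplane (assumed)
L_par = 15.0         # parallel connection length [m] (assumed)
kappa_e0 = 2000.0    # electron heat conductivity constant [W m^-1 eV^-7/2] (typical value)

# Upstream parallel heat flux density towards the outboard target
q_par_u = P_sep * f_share / (2 * np.pi * (R + a) * lambda_q) * Bt_over_Bp

# Spitzer-Harm power-balance estimate of the separatrix temperature
T_e_sep = (3.5 * f_cond * f_tar * q_par_u * L_par / kappa_e0) ** (2 / 7)
print(f"q_par,u ~ {q_par_u:.2e} W/m^2, T_e,sep ~ {T_e_sep:.0f} eV")

With these assumed inputs the estimate lands at roughly 100 eV, which is the temperature at which the measured Thomson profile would then be intersected to locate the separatrix.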
§.§ The turbulence parameter α_t A key parameter in the SepOS framework is the turbulence parameter α_t. This was originally introduced in Scott, 2005 <cit.> (denoted there as C ω_B), and it can be interpreted as giving the relative strength of interchange turbulence (dominant at α_t∼ 1) relative to drift-wave turbulence (dominant at α_t∼ 0) <cit.>. We provide here several definitions, directly in equation <ref>, in a simplified form which highlights how it depends on the separatrix values in equation <ref> and in terms of the edge collisionality in equation <ref>. α_t = 1.02 ν_ei/c_sq̂_cyl^2 R ( 1 + m_e/m_i) ( 1 + T_i,sep/T_e,sep1/⟨ Z ⟩) ≈ 3.13 × 10^-18q̂_cyl^2 R n_e,sep/T_e,sep^2 Z_eff,sep ≈1/100ν_e,edge^* q̂_cyl where ν_ei is the electron-ion collision frequency in the edge, c_s is the ion sound speed, q̂_cyl is the cylindrical safety factor (defined via equation K.6 in ref <cit.>), R is the major radius, m_e and m_i are the electron and main-ion masses, T_e,sep and T_i,sep are the electron and ion temperatures at the separatrix, ⟨ Z ⟩ is the mean ion charge and ν_e,edge^* is the edge collisionality. Here, electron and ion temperatures are set equal. As α_t is increased the interchange drive in the edge increases, increasing the filamentary cross-field transport and increasing the frequency at which scrape-off-layer filaments are born <cit.>. Above a certain critical value of α_t and for sufficient shaping (δ_95≳0.3), the pedestal foot becomes ballooning unstable <cit.>. The resulting transport reaches a level sufficient to prevent the pedestal from reaching the peeling-ballooning limit, eliminating large type-I ELMs. This type-I ELM-free regime is called the `quasi-continuous exhaust' (QCE) regime <cit.>, since the filaments generated in this regime have a much higher frequency and lower amplitude than type-I ELMs. In Faitsch et al., 2023 <cit.> it was shown that the QCE operational space on ASDEX upgrade is found for points above the LH transition and with α_t > 0.55, giving a simple metric for identifying this type-I-ELM free regime. §.§ The LHL boundary In Kim & Diamond, 2003 <cit.> it was proposed that a sustained H-mode requires that the rate of energy lost to a shear flow must exceed the rate of energy production due to turbulence[This was later investigated in Manz et al., 2012 <cit.>)]. We used this argument to develop a simple condition which, if fulfilled, indicates that an operational point should be in H-mode; α_RSk_EM^3/1+( α_t/α_c k_EM^2)^2 > α_t/α_c(1/2k_EM^2+k_EM^4) +1/2√(2λ_p,e,H/R) Here, the left-hand side of the equation represents the turbulent energy lost to the shear flow, and the right-hand side represents the energy produced by turbulence in the electron (first term) and ion (second term) channels. This expression is given in terms of α_t (given by equation <ref>), the Reynold-stress factor α_RS = 1 in forward field 0.4 in reversed field from Grover et al., 2024 <cit.>, the critical value for the ballooning drive (originally from ref <cit.>); α_c=κ_sep^1.2(1 + 1.5δ_sep) the spatial scale at which electromagnetic induction overcomes electron inertia; k_EM=β_e,sep / m_e/m_i=2μ_0 n_e,sep T_e,sep m_i/B^2 m_e the electron pressure gradient near the separatrix in H-mode λ_p,e,H (given in terms of the sound Larmor radius ρ_s0 and α_t in equation K.1 of ref <cit.>) and the major radius R. From equation <ref> we see that α_t is large for high densities and low temperatures. 
Under these conditions, the electron turbulent drive (the first term on the right-hand-side of condition <ref>) dominates, setting the H-mode transition above the LH-minimum density. Below the minimum density, α_t is small and the ion turbulent drive (the second term on the right-hand-side of condition <ref>) dominates. Expressing the LH transition in terms of a separatrix temperature instead of a power at first seems difficult to reconcile with established scalings such as Martin et al., 2008 <cit.>. However, as shown in ref <cit.>, for a given T_e,sep, there should be an associated P_sep,e to fulfil equation <ref> — which lets us recast the LH transition in more familiar terms. More challenging, however, is the treatment of the energy required in the ion channel. For a given ion temperature gradient and ion cross-field heat diffusivity χ_i, we can calculate the necessary P_sep,i. However, there are no established scalings for χ_i, which introduces significant uncertainty. As we discuss in section <ref>, determining such a scaling could be a key application for multi-machine studies and high-fidelity modelling. §.§ The L-mode density limit As α_t is increased, the plasma transport in the edge increases <cit.>. At intermediate values, this can be desirable for preventing the edge gradients from reaching the peeling-ballooning limit (as discussed in section <ref>). At higher values this increased transport can lead to a complete collapse of the plasma, triggering a density limit disruption. In Rogers, Drake & Zeiler, 1998 <cit.>, it was proposed that the density limit is triggered at low β, where interchange turbulence is no longer damped due to electromagnetic induction but rather enhanced by electromagnetic transport. The condition for the density limit is given in terms of the ratio of electromagnetic induction to electron inertia (as given in <ref>), and of the wavenumber of the resistive ballooning mode. If the following condition is fulfilled in L-mode (i.e. the condition in <ref> is not met), we expect that an operational point will undergo a density-limit disruption; k_EM > √(α_c/α_t√(2λ_p,e,L/R)) Here, λ_p,e,L is the electron pressure gradient near the separatrix for L-modes specifically — given in terms of the sound Larmor radius ρ_s0 and α_t in equation B.1 of ref <cit.> — rather than for H-modes as in section <ref>. This is because, so long as the turbulence suppression condition given by equation <ref> is met, catastrophic transport conditions cannot be reached. Instead, within the SepOS framework, H-mode density limits are described as preceded by a HL back-transition and then an L-mode density limit — as we show in section <ref>. We can expand and solve equation <ref> for n_e,sep, giving n_e,sep > n_GW× 0.11·√(α_c)/κ̂^2√(T_e,sep/Z_eff,sep)λ_p,e,L^1/4R^1/4 where n_GW is the density limit given in Greenwald et al., 1998 <cit.> and κ̂=√(1/2(1 + κ_sep^2 + ( 1 + 2 δ_sep^2 - 1.2 δ_sep^3 ))). We see that the density limit proposed here is similar to the Greenwald density limit, with a slight additional dependence on the separatrix power (which sets T_e,sep via equation <ref>). For a more detailed look at the L-mode density limit, see Manz, Eich & Grover et al., 2023 <cit.>. §.§ The ideal MHD limit At sufficiently high β, the discharges are limited by the ideal MHD limit. If the following condition is fulfilled, the transport will increase until this condition is recovered; α_MHD=R q̂^2_cyl/λ_p,e,H β_e,sep<α_c for α_c defined in equation <ref>. 
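To show how these criteria are applied in practice, the snippet below evaluates α_t from the simplified expression in the previous subsection and checks it against the α_t > 0.55 QCE criterion quoted above (which, per the discussion there, additionally requires sufficient shaping and H-mode access). We assume here that n_e,sep is given in m^-3 and T_e,sep in eV in the numerical prefactor, and the input values are purely illustrative; the full density-limit and ideal-MHD checks additionally need the λ_p,e gradient-length scalings, which are only referenced, not reproduced, in this excerpt.

def alpha_t(q_cyl, R, n_e_sep, T_e_sep, Z_eff_sep=1.0):
    # alpha_t ~ 3.13e-18 * q_cyl^2 * R * n_e,sep / T_e,sep^2 * Z_eff,sep
    return 3.13e-18 * q_cyl**2 * R * n_e_sep / T_e_sep**2 * Z_eff_sep

q_cyl, R = 4.0, 1.65                      # assumed cylindrical safety factor and major radius [m]
for n_e in (3.0e19, 6.0e19):              # assumed separatrix densities [m^-3]
    a_t = alpha_t(q_cyl, R, n_e, T_e_sep=100.0, Z_eff_sep=1.5)
    verdict = "above the QCE threshold (alpha_t > 0.55)" if a_t > 0.55 else "below the QCE threshold"
    print(f"n_e,sep = {n_e:.1e} m^-3 -> alpha_t = {a_t:.2f}, {verdict}")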
§ DEMONSTRATING REGIME BOUNDARIES IN DEDICATED ASDEX UPGRADE DISCHARGES To demonstrate how to interpret the separatrix operational space and as a step towards its validation, we performed a series of ASDEX Upgrade dedicated discharges. Three of these discharges are shown in Fig. <ref>, showing a discharge designed to probe the L-mode density limit boundary (top row), another to demonstrate a LH transition and a stable HL back-transition (middle row), and finally a LH transition followed by a disruptive HL back-transition (bottom row). In the first discharge (#36156) — shown in the top row of Fig. <ref> — the gas puff is ramped up at constant heating power, steadily increasing the edge and core densities until a density limit disruption is observed. In the SepOS diagram, we see the L-mode points (green circles) gradually moving to higher separatrix densities while the separatrix temperature due to the constant heating power remains the same, until the discharge terminates with a density-limit (red triangle) at the boundary predicted by condition <ref>. In the second discharge (#40155) — shown in the middle row of Fig. <ref> — the gas puff level is kept constant and the heating power is increased by several MW at 4 s, resulting in a clear transition to H-mode as seen by the sharp increase in the confined-region density and stored energy. At 6.5 s the heating power is reduced to the original level and the plasma transitions back into L-mode. In the SepOS diagram, the increase in heating power is seen as an increase of the separatrix temperature, as we expect from equation <ref>. Despite constant gas-puffing, we see a significant change in n_e,sep during the LH transition as the particle confinement improves. Interestingly, after the HL back transition, the L-mode remains at a slightly higher separatrix density than from the point where it transitioned into H-mode. Finally, in the third discharge (#38589) — shown in the bottom row of Fig. <ref> — a LH transition is induced by a step in the heating power early in the discharge while the fuelling is continuously increased. Then, at 5.5s, the heating power is reduced to its original level — similar to in #40155. However, this time the density of the L-mode after the HL back-transition is above the L-mode density limit, and as a consequence the L-mode disrupts. In the SepOS diagram, we see that this disruption occurs at higher density than the L-mode density limit seen in #36156. These points are not directly reachable from L-mode because they would disrupt already at lower densities, and as such we label these points as corresponding to a H-mode density limit (HDL). Despite the name, these density-limit disruptions do not occur in H-mode directly, but rather first with a HL back-transition and a subsequent disruption of the over dense L-mode. In the database considered, we have not identified any density-limit disruptions with H-mode separatrix conditions (i.e. fulfilling condition <ref>). § THE EFFECT OF VARYING I_P AND B_T ON THE SEPARATRIX OPERATIONAL SPACE In the original SepOS paper <cit.> and in figure <ref>, we showed the separatrix operational space determined for a fixed plasma current of I_p=0.8 MA and fixed magnetic field strength of 2.5 T, with the experimental data points selected for a narrow range around these values. To apply the separatrix operational space in a predictive way, we need to show that it is valid as the magnetic field strength and plasma current is varied. 
We developed an extended database with 6688 individual measurements collected across 524 ASDEX Upgrade discharges over the range of conditions given in table <ref>. For each measurement point, we used the SepOS framework to calculate whether a point should be in L-mode, H-mode, or density-limit disrupting, and then compared this to the actual state of the plasma during the measurement. In figure <ref>, we show the balance of the stabilizing (left-hand-side) and destabilizing (right-hand-side) terms of condition <ref> as the x and y axes respectively — equivalent to figure 3 of ref <cit.> with the expanded database. For points where the stabilization due to the shear flow exceeds the rate of turbulent energy production we expect the plasma will be in H-mode, and vice-versa. A perfect prediction by SepOS would have all H-mode points (blue squares) in the lower-right triangle, and all other points in the upper-left triangle, with the y=x line (solid blue line) perfectly separating the two. As we see in figure <ref>, the agreement is remarkably good — a few H-mode points with weak stabilizing and destabilizing terms (i.e. lower left corner) are not described as accurately, but otherwise the method is able to summarize the entire dataset with an impressive combination of accuracy and simplicity. Dimensionless comparisons such as figure <ref> are able to treat databases gathered across different machine conditions (I_p, B_t, etc) and even different machines, since these dimensionless quantities can be computed for each individual point. However, while this is useful for testing the theory, it is difficult to build an intuitive interpretation of these results. To recast the above comparison in terms of separatrix parameters, as in figure <ref>, we need to restrict the experimental database to narrow ranges of machine parameters and determine the dimensional SepOS for those conditions. This is because the SepOS equations depend on quantities such as q̂_cyl, which depend on machine parameters like the plasma current and magnetic field strength. As seen in figure <ref>, this means that when we show the results in terms of n_e,sep and T_e,sep, the SepOS transitions change as functions of I_p and B_t. In figure <ref> we see that the H-mode access (blue line) moves to lower T_e,sep as the plasma current is increased and as the magnetic field is decreased. Since P_sep,e∝ T_e,sep^7/2 (from equation <ref>), this corresponds to a decrease in the required power crossing the separatrix (in the electron channel) with increasing plasma current or decreasing magnetic field strength. This matches the experimental measurements well, as well as the expected increase in the LH threshold with the magnetic field strength. However, the result seems to contradict the general observation that the LH power threshold for the high density branch does not have an explicit current dependence <cit.>. We propose several resolutions for this apparent contradiction. Firstly, the plasma current affects the heat flux decay length λ_q ∝ 1/I_p (at fixed machine parameters). Because of this, as I_p is increased less P_sep,e is required to maintain the same T_e,sep. This will at least partly counteract the dependence of T_e,sep on I_p, resulting in a weaker dependence of P_sep on I_p and bringing our results closer to ref <cit.>. A second point to consider is that we have only discussed the scaling of P_sep,e, ignoring the contribution of the ion heat flux which is likely dominant <cit.>. 
In section <ref>, we propose future work which may reduce this uncertainty, but at this point we simply note that the results here should not be over-interpreted. Finally, there are other machine parameters beyond I_p and B_t which can affect the SepOS results. These include the shaping, which strongly affects α_c, and the machine dimensions, which requires validation on other machines as discussed in section <ref>. § THE NORMALIZED SEPARATRIX OPERATIONAL SPACE In the previous section, we showed two approaches to generalize the SepOS framework for changing machine parameters. The first — in terms of normalized variables — worked for arbitrary machine parameters but was non-intuitive, while the second — in terms of separatrix variables for slices of the database — had the opposite problem. Is it possible to somehow combine the two approaches, to get the best of both? This led us to normalize the separatrix variables to the LH-minimum density and temperature — which gave some surprising results. In figure <ref> we see that the LH transition curves for ASDEX Upgrade, Alcator C-Mod and SPARC more-or-less overlap when normalizing to their LH minima (given in table <ref>). The L-mode density limit and ideal MHD limits are closer than without the normalisation but do not exactly overlap. The use of the ASDEX Upgrade L-mode λ_p,e,L (equation B1 from ref <cit.>) for C-Mod and SPARC is not appropriate, since this scaling was not elaborated to have any explicit magnetic field, plasma current or machine size dependence. Nevertheless, it is interesting to consider why the L-mode density limit moves to higher normalized densities on C-Mod and SPARC. As discussed in section <ref>, the L-mode density limit scales approximately linearly with the plasma current<cit.>, while the LH transition does not<cit.>, as such operating with a higher plasma current reduces the LH density minimum relative to the density limit in the model used. We remind the reader that the actual decay length only enters with the fourth root in equation (<ref>). We also see that the contours of α_t overlay exactly once normalized to the LH minima. This result is noteworthy as it has an important implication. As we discussed in section <ref>, the ELM-free QCE regime is found for α_t > 0.55 on ASDEX Upgrade <cit.>. With this result, that suggests that machines operating with densities above the LH minimum density and sufficient shaping should generally be able to access this regime. In figure <ref>, we normalized the SepOS results from each machine according to the computed LH contour. How does this point change when considering different machines? We should, in principle, be able to find the point where the derivative of equation <ref> with respect to density equals zero, indicating the minimum density. However, this is not straightforward because λ_p,e depends on α_t, and so we instead perform a regression on the model results. We evaluated the SepOS for 19 sets of parameters representing a range of C-Mod, ASDEX Upgrade, JET, ITER and SPARC conditions, for B_t ranging from 1.47 - 12.2 T, I_p ranging from 0.62 - 15 MA, minor radius a from 0.25 - 2.0 m and major radius R from 0.7 - 6.2 m. The best fit for the minimum density across these cases is n_e,min[10^19 m^-3]=(0.55± 0.07) B_T^0.81± 0.02 I_p^0.36± 0.02 a^-0.89± 0.03 for B_T in T, I_p in MA and a in m. 
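For orientation, the fit is straightforward to evaluate; the sketch below does so for an ASDEX Upgrade-like and a SPARC-like parameter set, using only the central fit coefficients. The SPARC minor radius entered here is an assumed value for illustration.

def n_e_min_sepos(B_t, I_p, a):
    """SepOS-fit LH-minimum separatrix density in units of 1e19 m^-3 (central values only)."""
    return 0.55 * B_t**0.81 * I_p**0.36 * a**-0.89

# (B_t [T], I_p [MA], a [m]); parameter sets are illustrative
for name, pars in {"AUG-like": (2.5, 0.8, 0.5), "SPARC-like": (12.2, 8.7, 0.57)}.items():
    print(f"{name}: n_e,min ~ {n_e_min_sepos(*pars):.1f} x 1e19 m^-3")

Both numbers simply evaluate the fitted scaling above.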
This can be compared to equation 3 from Ryter et al., 2014 <cit.>, setting the aspect ratio for ASDEX Upgrade to R/a=1.65 m/0.5 m=3.3 to find n̅_e,min [10^19 m^-3] = 0.7 B_T^0.62 I_p^0.34 a^-0.95(3.3)^0.4 =1.1 B_T^0.62 I_p^0.34 a^-0.95 These scalings agree remarkably well, considering that they were determined via entirely different methods. The most significant difference is that of the prefactor, which is a factor of 2 higher in equation <ref>. However, our scaling is in terms of the separatrix density n_e,sep while equation <ref> is in terms of the line-average density n̅_e, which is typically 2-3× higher than n_e,sep in H-modes. This difference is enough to explain the difference in prefactor, although with a more typical n̅_e / n_e,sep = 3 we are actually predicting a 50% higher minimum density than Ryter et al., 2014 <cit.>. Similar to our discussion in section <ref>, there are multiple suspects for this remaining difference, chiefly the ion heat flux, which we will discuss in section <ref>. Nevertheless, even before this remaining factor is identified, the agreement between equations <ref> and <ref> suggests that the SepOS can be used to inform the operations of other machines, including next-step devices. § PREDICTING QCE ACCESS ON SPARC USING THE SEPARATRIX OPERATIONAL SPACE What does the separatrix operational space look like for a next-step device like SPARC? SPARC's primary reference discharge (PRD) is a high-performance Q∼ 11, full-field B_t ∼ 12.2 T, full-current I_p ∼ 8.7 MA H-mode <cit.>. This operating point is predicted to have a separatrix density of n_e,sep∼ 1.5×10^20 m^-3 and a separatrix power of P_sep∼20 MW, corresponding to a separatrix temperature of T_e,sep∼ 195 eV. Normalizing these values to the LH minimum of 1.64×10^20 m^-3, 91.8 eV and comparing to figure <ref>, we see that this puts the PRD operational point squarely in the type-I ELMy regime. To deal with this, SPARC has the ability to use its error field correction coils as a resonant magnetic perturbation (RMP) ELM-suppression system. Nevertheless, intrinsically ELM-free scenarios are attractive, especially for scaling to a power plant. With the SepOS we identified a broad operational space on SPARC that we expect will be in the QCE regime. Within this space we propose a point maintaining the PRD's separatrix power of P_sep∼20 MW, but with a significantly higher separatrix density of n_e,sep∼4×10^20 m^-3. The separatrix temperature is also slightly lower, at T_e,sep∼156 eV, due to λ_q broadening at increased α_t <cit.>. This point is also more attractive for steady-state heat-exhaust, since the increased separatrix density significantly reduces the edge impurity concentration required for detachment. As part of ongoing work, this QCE operational space will be investigated in https://github.com/cfs-energy/cfspopconcfspopcon to determine the highest-gain point compatible with detached, ELM-free conditions in the edge. § CONCLUSIONS AND FUTURE WORK The separatrix operational space (SepOS) provides a simple, fast and intuitive framework for ensuring benign heat exhaust conditions when designing future devices and planning their operation. The SepOS framework is relatively simple, expressed concisely in terms of separatrix parameters, separatrix gradients and machine parameters. Rather than a formal derivation, the framework selects terms from linear turbulence models to develop an accurate description of an extensive database of ASDEX Upgrade experimental results. 
Perhaps unsurprisingly, this model provides an excellent description of the experimental database with which it was derived. However, due to the data-driven approach, before we can use the framework predictively, we needed to rigorously validate it under other conditions — which is the focus of this work. In this paper, we showed that the SepOS framework remains accurate as the plasma current and magnetic field strength is varied on ASDEX Upgrade. For this, we presented results from an extended database of 6688 individual measurements collected across 524 ASDEX Upgrade discharges, including several discharges which were specifically designed to probe the transitions predicted by SepOS. Across this extended database, the SepOS framework was able to accurately predict whether a given operational point would be in L-mode, H-mode or undergoing a density-limit disruption. A second, indirect validation was given when realizing that the density of the LH minimum point scaled the same way as in the Ryter scaling <cit.>. Since that scaling broadly agrees with published minima data from devices as diverse as ASDEX Upgrade, C-Mod, DIII-D and JET, finding agreement with that scaling suggests that the SepOS framework (or at least the predicted LH transition) is applicable across multiple devices. Building on these results, we then demonstrated how the SepOS framework can be applied predictively, to identify an operational point expected to be in the QCE regime for SPARC. This proposed operational point will be investigated in more detail as an attractive operational point for combining high fusion gain with relatively benign heat-exhaust conditions. Once SPARC starts H-mode operations in its second campaign, testing whether the SepOS framework predicts the access to the QCE regime will be a strong validation (or falsification) of the model and its use for fusion power plants. Rather than a conclusive validation of the SepOS framework, this work should be considered a starting point and a call to action. Future work is needed to investigate strongly-seeded discharges, to determine how the SepOS predictions are modified as plasmas are pushed towards the detached conditions needed for tolerable steady-state heat exhaust. A further step would be a detailed investigation into the ion temperature dynamics, which would be particularly important for translating the results from a separatrix temperature to power fluxes across the separatrix. Similar to their use in Grover et al., 2024 <cit.>, this could be a useful application of high-fidelity turbulence models such as GRILLIX <cit.>. Further investigations should probe the underlying physics in different regions of the separatrix operational space, as a rigorous check against fortuitous agreement. Finally, the model must be applied to other, existing devices. A validation of the model against C-Mod data is in progress and a publication is in preparation. By performing a multi-machine validation of SepOS across various devices such as JET, TCV, MAST-U, and DIII-D, we can test and improve the predictive capabilities of the framework, letting us confidently apply it for the design of future fusion power plants with optimized heat exhaust solutions. § ACKNOWLEDGEMENTS This work was supported by Commonwealth Fusion Systems. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion). 
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
http://arxiv.org/abs/2407.12198v1
20240716214914
Doping-induced Quantum Anomalous Hall Crystals and Topological Domain Walls
[ "Miguel Gonçalves", "Shi-Zeng Lin" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
Theoretical Division T-4, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Theoretical Division T-4, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Center for Integrated Nanotechnologies (CINT), Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA § ABSTRACT Doping carriers into a correlated quantum ground state offers a promising route to generate new quantum states. The recent advent of moiré superlattices provided a versatile platform with great tunability to explore doping physics in systems with a strong interplay between correlation and nontrivial topology. Here we study the effect of electron doping in the quantum anomalous Hall insulator realized in TMD moiré superlattice at filling ν=1, which can be described by the canonical Kane-Mele-Hubbard model. By solving the Kane-Mele-Hubbard model using an unrestricted real-space Hartree-Fock method, we find that doping generates quantum anomalous Hall crystals (QAHC) and topological domain walls. In the QAHC, the doping induces skyrmion spin textures, which host one or two electrons in each skyrmion as in-gap states. The skyrmions crystallize into a lattice, with the lattice parameter being tunable by the density of doped electrons. Remarkably, we find that the QAHC can survive even in the limit of vanishing Kane-Mele topological gap for a significant range of fillings. Furthermore, doping can also induce domain walls separating topologically distinct domains with different electron densities, hosting chiral localized modes. Doping-induced Quantum Anomalous Hall Crystals and Topological Domain Walls Shi-Zeng Lin July 16, 2024 =========================================================================== § INTRODUCTION Moiré superlattices have emerged as a highly tunable platform to explore strongly correlated topological quantum states. The kinetic energy of electrons can be tuned to be small in comparison to electron-electron interactions. As a consequence, a plethora of interaction-induced phases have been experimentally observed in these systems, ranging from superconductivity, heavy fermion liquids, correlated insulators, and Wigner crystals to the integer and fractional quantum anomalous Hall effects <cit.>. A particular example of a very rich class of moiré systems is transition metal dichalcogenides (TMDs). Some moiré TMDs can be described by triangular lattice Hubbard models with a trivial band topology <cit.>, which has been verified by experimental observations <cit.>. Of direct relevance to our work, homobilayer TMD moiré is also very appealing as it realizes generalized Kane-Mele-Hubbard models <cit.>, where the recent experimental observations of the integer and fractional quantum anomalous Hall effects have been made <cit.>, inspired by model studies <cit.> and particularly material-specific modeling <cit.>. One unique advantage of moiré systems is that the carrier density can be controlled to fully fill or empty moiré bands by varying gate voltage. This immediately raises interesting questions about doping induced new physics, particularly doping around the correlation stabilized quantum many body states, and has attracted considerable attention recently. 
For instance, it was shown that doping electrons around a commensurate filling in twisted bilayer graphene stabilizes skyrmions <cit.>, similar to the well known quantum Hall ferromagnet for the Landau levels <cit.>. It was further argued that the condensation of skyrmions can be the mechanism for the experimentally observed superconductivity in twisted bilayer graphene <cit.>. In TMD moiré superlattices, it was demonstrated that doping of carriers around half filling can stabilize spin polarons <cit.>, and can give rise to superconductivity <cit.> or kinetic ferromagnetism <cit.>. Motivated by these exciting developments, we investigate the doping induced phases around a quantum anomalous Hall insulator (QAHI) in TMD homobilayers, such as twisted MoTe_2 and WSe_2, where the interplay between correlations and topology is essential. We unravel rich phases [see Fig.<ref>(c) and Fig.<ref>(a)], particularly the quantum anomalous Hall crystal (QAHC), stabilized by doping carriers into the correlated QAHI. Doping generates a skyrmion lattice with one or two electrons localized inside each skyrmion but with a different mechanism compared to that in quantum Hall ferromagnets. Furthermore, doping can create domain walls that separate topologically distinct regions with varying electron densities, hosting chiral localized modes. § MAIN RESULTS The low-energy electronic states in the twisted homobilayer TMD moiré such as MoTe_2 form a honeycomb lattice, with the two sublattices being layer polarized as illustrated in Fig.<ref>(a1). Because of the strong spin orbit coupling, the valley and spin degrees of freedom are locked [see Fig.<ref>(a2)], and in the following we use spin to denote both quantum numbers. The effective Hamiltonian is the Kane-Mele-Hubbard model <cit.> H= -t∑_⟨ i,j⟩,σc_i,σ^†c_j,σ-t_2∑_⟨⟨ i,j⟩⟩,σe^- iσϕ_ijc_i,σ^†c_j,σ +U∑_i n_i,↑n_i,↓+V∑_⟨ i,j⟩,σ,σ'n_iσn_jσ' , where c_i,σ^† creates an electron at site i with spin σ. The first term describes nearest-neighbor hoppings on the honeycomb lattice. The second term describes next-nearest-neighbor (NNN) hoppings, where ϕ_ij=±π/2 and the sign is defined by the arrows depicted in Fig.<ref>(a1): + (-) if the electron hops along (against) the direction of the arrow. The final terms denote the onsite Hubbard and nearest-neighbor repulsive interaction. Throughout the manuscript, all results will be presented in units of the nearest-neighbor hopping parameter t, which we set to 1. We will also consider different electron fillings ν, with ν=0 and ν=4 corresponding, respectively, to the fully empty and fully filled bands in Fig.<ref>(a3). An electron filling ν here can be mapped to a doping of ν holes per moiré unit cell in TMD, by particle-hole transformation. We will focus on zero temperature. The noninteracting band structure is illustrated in Fig.<ref>(a3), where the opposite spins have the opposite Chern number and the system realizes a quantum spin Hall insulator at filling ν=2 (equivalent to two holes per moiré unit cell). The Hubbard interactions can spontaneously split the degeneracy between spin up and down sectors and give rise to the ferromagnetic QAHI at filling ν=1, corresponding to fully filling one of the bands, as shown in Fig.<ref>(a3). The magnetization of the ground state is perpendicular to the moiré plane (see Supplementary Information, Section <ref>) because of the Ising spin-orbit coupling. Below we study the novel phases induced by doping around the ν=1 ferromagnetic QAHI. 
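As a reference point for the noninteracting limit, the following is a minimal numerical sketch that builds the single-spin Bloch Hamiltonian of the U=V=0 Kane-Mele model above and evaluates the Chern number of its lower band with the Fukui-Hatsugai-Suzuki lattice method. The lattice-vector conventions are our own (they only fix the overall sign of the result), and this is not the unrestricted real-space Hartree-Fock calculation used in the paper.

import numpy as np

t, t2, sigma = 1.0, 0.2, +1             # NN hopping, NNN hopping, spin index (+1 or -1)
a1 = np.array([1.5,  np.sqrt(3) / 2])   # our lattice-vector convention
a2 = np.array([1.5, -np.sqrt(3) / 2])
nu = np.array([a1, a2 - a1, -a2])       # NNN displacements entering the phase factor

def h(k):
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)   # NN term (periodic gauge)
    g = 2 * t2 * sigma * np.sum(np.sin(nu @ k))         # Kane-Mele mass term, phi_ij = ±π/2
    return np.array([[g, -t * f], [-t * np.conj(f), -g]])

A = np.array([a1, a2])
B = 2 * np.pi * np.linalg.inv(A).T      # rows are the reciprocal lattice vectors
N = 60
u = np.empty((N, N, 2), dtype=complex)  # lower-band eigenvectors on an N x N BZ grid
for i in range(N):
    for j in range(N):
        k = (i / N) * B[0] + (j / N) * B[1]
        u[i, j] = np.linalg.eigh(h(k))[1][:, 0]

# Fukui-Hatsugai-Suzuki lattice Chern number of the lower band
C = 0.0
for i in range(N):
    for j in range(N):
        u1, u2 = u[i, j], u[(i + 1) % N, j]
        u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
        C += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3) * np.vdot(u3, u4) * np.vdot(u4, u1))
print("lower-band Chern number for spin sigma =", sigma, ":", round(C / (2 * np.pi)))

Running it for sigma = +1 and sigma = -1 returns opposite Chern numbers, consistent with the quantum spin Hall insulator at ν=2 described above; once interactions spin-polarize the system at ν=1, only one of these Chern bands is filled.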
In Fig.<ref>(b), we compare the free energies of the lowest energy homogeneous (translationally invariant) state and the true ground state. The former is a half-metal (HM), a spin polarized metal, which is the ground state below and above a critical interaction strength, consistent with recent exact diagonalization calculations in the large-U regime <cit.>. Interestingly, Fig.<ref>(b) shows that there is a range of interaction strengths for which the ground state is not a HM. This range is finite for all the fillings - from ν=1 to ν=4/3 - and intensities of next-nearest-neighbor hopping strength/topological mass, t_2∈[0,0.2], studied in this paper. The main results are shown in Fig.<ref>(c), where we selected an interaction strength within the range where the ground state is inhomogeneous and explored the possible phases in the plane of t_2 and filling ν. For a large enough t_2, a topological domain wall phase (DW) is stabilized for any finite electron doping away from ν=1. The domain wall separates two distinct magnetic domains: the ferromagnetic QAHI domain at filling ν=1 with |C|=1 and a topologically trivial coplanar magnetic insulator (CoMI) domain at filling ν=4/3, as depicted in Fig.<ref>(d1). Although the CoMI domain has C=0, its magnetization profile is non-trivial: the unit cell becomes three times larger than the honeycomb unit cell and the coplanar magnetization forms vortices. Due to an intricate interplay between the non-trivial magnetization and the Kane-Mele topological mass, these vortices are characterized by a winding number determined by the sign of this mass, as we will detail below. Because the CoMI and ferromagnetic domains are insulating and topologically distinct, chiral localized modes naturally arise at the domain wall as shown in Figs.<ref>(e1,e2). For a smaller t_2, there is a first-order phase transition into a QAHC with quantized Hall conductance |σ_xy|=e^2/h. In this phase, skyrmions are spontaneously induced by doping the QAHI state. The skyrmions generate an emergent magnetic field that couples with the orbital magnetization of the Chern band to minimize energy. As a result, the Chern number of the filled Chern band determines the sign of the skyrmion charge. An example of the magnetization profile in this phase is shown in Fig.<ref>(d2), where a lattice of localized skyrmions spontaneously breaks the translational symmetry. Each skyrmion accommodates exactly one electron, therefore the skyrmion crystal is also an electron crystal, in analogy to Wigner crystals but with a quantized Hall conductance, as shown in the Supplementary Information Sec.<ref>. The lattice constant of the emergent skyrmion lattice is therefore determined by doping. It is important to note that for incommensurate fillings the crystal has imperfections: even though the skyrmions repel, favoring crystallization, it is not possible to form a perfect crystal. Nevertheless, these imperfections do not affect the quantization of Hall conductance. The QAHC has edge states exemplified in Figs.<ref>(e3,e4), with a uniform probability density for the background spin component ( in this example) and modulated by the skyrmion distribution for the other spin component. Surprisingly, the QAHC extends down to t_2=0, where the topological mass vanishes. In this case, the system is a ferromagnetic semimetal at ν=1 due to interaction. However, at a finite doping, interactions stabilize both spontaneous Chern gaps and skyrmions. 
Between the HM and the QAHC there is a quantum anomalous Hall metal (QAHM) phase that has a finite but non-quantized Hall conductance that perfectly correlates with the total skyrmion charge, which is also smaller than the total number of doped electrons. This is also true for small but finite t_2, showing that the creation of the QAHC does not require the existence of a QAHI at ν=1. The transition between the QAHM and QAHC phases is of first-order, as evidenced by the abrupt change in chemical potential exemplified in Supplementary Information Sec.<ref>, Fig.<ref>(c). Finally, it is important to note that for t_2=0, each individual skyrmion in the crystal has skyrmion charge χ=2 and accommodates two electrons, as shown in Fig.<ref>(d3) (see also Supplementary Information Sec.<ref>). Charge-2e skyrmions can also be stabilized at finite t_2 by doping sufficiently away from filling ν=1 as we show in detail in the Supplementary Information, Section <ref>. § ORIGIN OF SKYRMIONS We now uncover the mechanism behind the formation of skyrmions upon doping. For a small, but finite, number of doped electrons, the ground state can contain skyrmions in a wider range of t_2 than the one spanning the QAHC in Fig.<ref>(c). We show an example phase diagram in the plane of U and number of doped electrons δ_e in Fig.<ref>(a) for t_2=0.2, where only the DW phase exists at finite density for 1<ν<4/3. In this phase diagram, we observe that there is an interaction range where a skyrmion bound with an electron is generated by doping an electron. This phase with electron-skyrmion bound states undergoes a first-order transition into the DW phase by varying U or δ_e. The number of skyrmions increases with δ_e. The stability of this phase shrinks with increasing δ_e, and there is a critical δ_e above which only the DW phase is stable. The reason is that although the DW phase is energetically more favorable at finite electron density, its existence requires a critical δ_e. We note that here we identified the ground state with the DW phase whenever the different doped electrons cluster to form a domain. In the Supplementary Information we detail the different structures that can arise in the DW phase when doping a small number of electrons, which do not necessarily have vanishing skyrmion charge. In a narrow interaction range, right before we reach the HM phase at larger U, a new metallic domain wall phase DW_2 arises. This phase is characterized by two domains with opposite magnetizations. However, unlike in the DW phase, one of the domains is metallic and no chiral edge modes arise at the domain wall. We analyze the DW_2 phase in more detail in Supplementary Information Sec.<ref>. Here no QAHC is present since the formation of skyrmions is unstable at finite electron density compared to the DW phase. In order to unravel the mechanism behind the formation of skyrmions, in what follows we focus on the single-electron doping problem, where a single skyrmion is formed. We can interpret the emergence of the skyrmion as a magnetic impurity in the ferromagnetic background that is created in order to save energy for the doped electron. Because of this impurity, in-gap bound states are created as exemplified in Fig.<ref>(b). Filling these in-gap states saves energy compared to overcoming the gap at filling ν=1 to form a HM. However, this does not explain why a magnetic impurity with unit skyrmion charge is favorable compared to a simple spin-flip (polaron), or skyrmions with larger charges. 
To answer this question, we consider the ansatz described by ⟨ n_ r,σ⟩=1/4(1+σĥ_z^ r), ⟨ c_ r,↑^†c_ r,↓⟩=1/4(ĥ_x^ r+ iĥ_y^ r), with ĥ^ r=(sinθ_ rcosϕ_ r,sinθ_ rsinϕ_ r,cosθ_ r) and θ_ r=π[1-arctan(| r- r_0|/ξ)], ϕ_ r=pα. In this ansatz, r_0 is the position of the skyrmion's center, ξ is the size of the skyrmion, α is the polar angle between the vector r- r_0 and the x-axis, and p is an integer that determines the skyrmion charge χ. This ansatz gives rise to the exchange field texture illustrated in the inset of Fig.<ref>(c). Plugging it into Eq.<ref> (with V=0), we obtain H_MF( h)=H_0-∑_ r h( r)· s_ r+U/4∑_ r,σn_ r,σ , where we have defined h( r)=Uĥ^ r/2 and s^ r=1/2 c_ r^†σ c_ r, with c_ r=[c_ r,↑ ,c_ r,↓]^T and σ being Pauli matrices. In the Supplementary Information Sec.<ref>, we show that this ansatz captures very well the exact solution. Using the ansatz, we show in Fig.<ref>(c) an example of the free energy as a function of ξ in the regime where skyrmions are stabilized, for different values of χ C. Here C is the ground state Chern number for δ_e=1 and the skyrmion topological charge χ can be changed by varying p, with χ=(4π)^-1∫ d^2 r m_ r·(∂_x m_ r×∂_y m_ r)∝ p [see Supplementary Information Sec.<ref>]. Fig.<ref>(c) shows that the free energy is minimized at an optimal ξ when χ C=1, which means that the Chern number determines the sign of the skyrmion charge. We note that for the value of ξ that minimizes the free energy, the skyrmion is very localized and to a good approximation, the in-plane components of the magnetization are non-vanishing only essentially for the nearest-neighbors of site r_0. Because of this, and of 𝒞_3 symmetry around r_0, we have χ = (n,2). The skyrmion charge values shown in Fig.<ref>(c) therefore saturate all the possibilities, and no difference in the free energy is observed between χ C=-1 and χ C=2. This is no longer true at a larger ξ, where the skyrmion spreads over further neighboring sites. Based on these results, the mechanism for skyrmion formation arises from a topological term in the free energy of the form F_B=-Cχ g_B(t_2,U), where g_B is an unknown function of the model parameters. This term is allowed by symmetry and arises from the coupling between the orbital magnetization <cit.> (proportional to C) of the Chern band and the emergent magnetic field created by the skyrmion <cit.>. In the adiabatic limit, when the spin of the conduction electron follows the skyrmion texture, the emergent magnetic field is B_S≈ϕ_0χ/(2πξ^2), with ϕ_0=hc/e the flux quantum and ξ the skyrmion size <cit.>. Away from the adiabatic limit, B_S is expected to be reduced. Interestingly, it is still possible to explicitly compute the function g_B(t_2,U) through g_B(t_2,U)=|F_χ=1(t_2,U)-F_χ=-1(t_2,U)|, where F_χ is the free energy obtained for skyrmion charge χ. For the Ising spin orbit coupling, F_χ depends on the sign of χ only through the topological term. As displayed in Fig.<ref>(d), |F_B| increases with decreasing t_2, correlating with the increase in the skyrmion size ξ. For a small enough t_2 however, |F_B| starts decreasing, which occurs approximately when the HM becomes the ground state [see Fig.<ref>(c)]. Finally, we note that since both the topological term and skyrmion size ξ decrease with t_2, the ansatz predicts that the skyrmion becomes a polaron (a spin-flip magnetic impurity with χ=0) at a larger t_2. This is indeed what is observed in the numerical calculation, as we detail in the Supplementary Information Sec.<ref>. 
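To make the χ ∝ p relation concrete, the short sketch below evaluates the skyrmion-charge integral above by finite differences for a texture with in-plane winding p. We use a core profile that interpolates from θ=π at r_0 to θ=0 far away, a regularized variant of the ansatz chosen only so that the integral converges on a finite grid; the overall sign of χ depends on this convention.

import numpy as np

xi, p, L, N = 2.0, 1, 40.0, 400          # skyrmion size, winding, box size, grid points (assumed)
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r, alpha = np.hypot(X, Y), np.arctan2(Y, X)
theta = np.pi * (1.0 - (2.0 / np.pi) * np.arctan(r / xi))   # theta: pi at the core -> 0 far away
phi = p * alpha
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)

# chi = (1/4pi) \int m . (d_x m x d_y m) dx dy, by finite differences
dx = x[1] - x[0]
dmx = np.gradient(m, dx, axis=0)
dmy = np.gradient(m, dx, axis=1)
density = np.einsum("ijk,ijk->ij", m, np.cross(dmx, dmy))
chi = density.sum() * dx * dx / (4 * np.pi)
print(f"winding p = {p} -> skyrmion charge chi ~ {chi:+.3f}")

Changing p changes χ proportionally, which is the property used above to scan the different values of χC.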
§ HALL CONDUCTANCE IN QAHC In this section we discuss the origin of the quantized Hall conductance in the QAHC, as displayed in Fig.<ref>(a). For a large enough t_2, when there is a sizable topological gap, one could argue that doping simply introduces in-gap skyrmion states that do not contribute to the Hall conductance, implying that skyrmions would not play a relevant role. However, the QAHC is very robust and extends down to vanishing t_2, where the ν=1 topological gap vanishes. The reason for this robustness is that skyrmions always provide a crucial contribution to the Hall response, even when the topological gap at ν=1 is sizable. To show this, we examined the distribution of the Berry curvature as a function of energy, Ω(E) (see Supplementary Information Sec.<ref> for details on the calculation). An example is shown in Fig.<ref>(b). Upon doping the ν=1 QAHI, when a sizable topological gap is present, skyrmions are created as in-gap states. However, the Berry curvature redistributes, acquiring a significant weight in these states as shown in Fig.<ref>(b). This is compatible with the extended nature of these states, which is evidenced by the inverse participation ratio results shown in Supplementary Information Sec.<ref>. While doping a finite number of electrons creates in-gap states exponentially localized around the skyrmions, at a finite density the skyrmion crystallization implies that these states become extended through hybridization with states in neighboring skyrmions. For a small t_2 and higher dopings, the topological gap at ν=1 is very small and it no longer makes sense to interpret skyrmions as arising from in-gap states upon doping. Instead, the Hall response becomes a direct consequence of the spontaneous crystallization of skyrmions, not requiring the existence of the QAHI at ν=1. This is confirmed by the results of the Hall conductance at small or vanishing t_2. In this case, a critical doping with respect to ν=1 is needed to induce both finite Hall response and total skyrmion charge, with these quantities perfectly correlating in the QAHM and QAHC phases, as shown in Fig.<ref>(a) (see Supplementary Information Sec.<ref> for the expression for σ_xy). It is important to note that even though the skyrmions repel in the QAHC, which naturally favors crystallization, the skyrmion crystal is free to move as a whole in the clean limit. However, since impurities are always present in nature, it is expected that skyrmions are pinned by impurities in an experimental realization of the QAHC phase. As such, the QAHC is an insulator with quantized Hall conductance in the presence of a weak electric field. For a strong electric field, the skyrmion crystal is driven into motion, rendering the system metallic and spoiling the quantization of the Hall conductance. § ORIGIN OF DOMAIN WALL STATE We now turn to understanding the origin of the DW phase and its stability at a larger t_2. The coplanar domain [see Fig.<ref>(d1)] has a charge density ⟨ n_i⟩=2/3 corresponding to an electron filling ν=4/3, in contrast with the ferromagnetic domain that has ⟨ n_i⟩=1/2 corresponding to filling ν=1 [Supplementary Information Sec.<ref>]. Both the ν=1 ferromagnetic QAHI and the ν=4/3 coplanar state are the most energetically favorable states around ν=1 fillings, and it is preferable for the system to have phase separation with coexisting ν=1 and ν=4/3 domains. At ν=4/3, the coplanar domain fills the whole system as shown in Fig.<ref>(a), forming a topologically trivial gapped magnetic ground state. 
The unit cell for this state triples in size compared to the original unit cell, as shown in Fig.<ref>(a). Furthermore, the coplanar magnetization assumes the form m_ r=M[cosθ_ r,sinθ_ r,0] with a finite winding number w=(2π)^-1∮∇θ_ r· d r, where the integral is performed in a closed contour connecting the unit cell sites. Below, we will show that w is fully determined by the sign of the topological mass. In the Supplementary Information Sec.<ref>, we obtain the full band structure of the CoMI at ν=4/3. In Fig.<ref>(b), we plot the free energy and energy gap for different parameters. The free energy increases with U and decreases with t_2, which is consistent with the instability of the DW state at sufficiently large U [Fig.<ref>(b)] and small t_2 [Fig.<ref>(c)]. This can be better understood by analysing the band structure. In Fig.<ref>(c) we show the band structure for different U and for two significantly different values of t_2. In the leftmost figure, we plot the band structure for U=0 in the original hexagonal Brillouin zone. Since the unit cell is tripled for the CoMI, band folding occurs for U=0 and gaps are opened at the degeneracy points for U≠0. Note that for U≠0, each shown band is doubly degenerate, as we will detail below. For a large value of t_2, we can get very narrow bands around the Fermi level due to the small dispersion around the Dirac points. Because of this, it is possible to significantly decrease the free energy by opening a gap at ν=4/3 [shown in magenta in Fig.<ref>(c)]. For a smaller t_2, the folded energy bands are more dispersive, and it is not possible to save as much energy by opening the gap. Another interesting feature of this band structure is that a significantly larger gap is opened around filling ν=4/3 than around ν=8/3. In fact, for a small U, the gap around ν=8/3 is essentially suppressed. To understand the underlying mechanism, we derive an approximate continuum model for the 8 central bands. In Fig.<ref>(a-right), we show the Fourier transforms of ⟨ c_ r_m,^†c_ r_m,⟩, which peak at K or K' depending on the sublattice and on w (not shown). This gives rise to the following contribution to the mean-field Hamiltonian, Me^-iσ wmϕ∑_ b, kc_ k,m,-σ^†c_ k-σ wm( K+ b),m,σ, where M∝ U, m=±1 respectively for sublattices A and B, b are the reciprocal lattice vectors and ϕ=π/6 is the angle difference between the magnetization vectors on the two sublattices. Because the low-energy physics is dominated by the states around the Dirac points, we take k=- K, K and consider the first-order processes resulting from momentum transfers in this term. From this we can derive the following low-energy continuum Hamiltonian (see Supplementary Information Sec.<ref> for details), [ H=v_fψ̅(s_xτ_z q_x+s_y q_y)ψ+λ_SOψ̅s_zτ_zσ_zψ; +M/2ψ̅τ_x(c_ϕσ_x-s_ϕσ_y) ψ+wM/2ψ̅ s_zτ_y(c_ϕσ_y+s_ϕσ_x)ψ ] , where s,τ,σ are Pauli matrices acting respectively on the sublattice, valley and spin subspaces, c_ϕ=cos(ϕ), s_ϕ=sin(ϕ) and λ_SO=-3√(3)t_2. By rearranging ψ in the sub-blocks (ψ_A, K,,ψ_B, K,,ψ_A, K',,ψ_B, K',) and (ψ_B, K,,ψ_A, K,,ψ_B, K',,ψ_A, K',), the Hamiltonian becomes block-diagonal with identical blocks, which explains the double degeneracy of the bands. Note that, in contrast to the U=0 case, this is not a spin degeneracy. Defining s_w=sgn(wλ_SO), we can easily derive that [ E( q=0)∈{-|λ_SO|-Mδ_{s_w,1},  -|λ_SO|+Mδ_{s_w,1},  |λ_SO|-Mδ_{s_w,-1},  |λ_SO|+Mδ_{s_w,-1}} ] where q is measured from the Dirac points. 
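The q=0 spectrum above is straightforward to check numerically by assembling the 8×8 continuum Hamiltonian from Kronecker products of Pauli matrices. The snippet below is a minimal sketch under assumed conventions (basis ordering s⊗τ⊗σ, v_f=1, illustrative values of λ_SO and M); it merely verifies that the M-induced splitting moves between the lower- and higher-energy quartets when sgn(wλ_SO) changes.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    # sublattice (s) x valley (tau) x spin (sigma)
    return np.kron(a, np.kron(b, c))

def h_continuum(qx, qy, w, lam_so, M, vf=1.0, phi=np.pi / 6):
    c, s = np.cos(phi), np.sin(phi)
    h = vf * (qx * kron3(sx, sz, s0) + qy * kron3(sy, s0, s0))
    h += lam_so * kron3(sz, sz, sz)
    h += 0.5 * M * kron3(s0, sx, c * sx - s * sy)
    h += 0.5 * w * M * kron3(sz, sy, c * sy + s * sx)
    return h

lam_so, M = -0.5, 0.2          # lam_so = -3*sqrt(3)*t2 < 0 for t2 > 0; illustrative numbers
for w in (+1, -1):
    ev = np.linalg.eigvalsh(h_continuum(0.0, 0.0, w, lam_so, M))
    # the M splitting appears in the lower or upper quartet depending on sgn(w*lam_so)
    print(w, np.round(np.sort(ev), 3))
```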
The expression for E( q=0) implies that, depending on the sign of wλ_SO, the gap can open either at the lower- or at the higher-energy bands. In order to minimize energy at filling ν=4/3, the former case is more favorable. Therefore, quite remarkably, the sign of the topological mass λ_SO fixes the winding number w of the coplanar domain. This phenomenon is only possible when λ_SO≠0. Note also that there is no dependence on the sub-lattice angle difference ϕ (different values simply define different U(1) spin rotations, see Eq.<ref>). This is only an artifact of our first-order expansion to obtain the continuum model. The full exact calculation has a ϕ dependence (which manifests more significantly at larger U), setting ϕ=π/6 as the value that minimizes the free energy. § ROBUSTNESS OF QAH AND DW PHASES Even though the onsite Hubbard interaction is the dominant one in twisted TMDs, longer-range interactions can also be quite significant. In what follows, we study the robustness of the phases unveiled here against nearest-neighbor interactions. We start by studying the stability of the DW phase. In Fig.<ref>(a), we vary V for t_2=0.2 and U=8, where only the DW phase exists at V=0 [see Fig.<ref>(c)]. We show that the DW phase is robust up to a significantly large V, above which the ground state becomes a CoMI. Furthermore, there is a V-induced reentrant transition into the QAHC, which is stabilized for intermediate V and for ν close to ν=1. Two important comments are in order regarding the features of the finite-V DW phase. A finite V favors a staggered density between NN sites. Since the density is constant in the ferromagnetic domain, it is energetically favorable to split it into smaller domains to increase the number of NN pairs with staggered density at the domain walls. The size of the domains is therefore dictated by the competition between U and V. In particular, while the number of sites in the domain wall for a single domain scales as N_S^DW∝ L, it scales as N_S^DW∝ N_DWL_DW∝ L^2 for a crystal of domains, where N_DW∝ L^2 is the number of magnetic domains and L_DW is the domain wall size of each domain, which we assume to be L-independent. For a large enough V it can therefore be more favorable to create different domains of fixed size than a single domain, to maximize the energy gain at the domain walls. An example ground state corroborating this picture is shown in Fig.<ref>(c). Regarding the coplanar domain, for finite V both the density and the magnitude of the coplanar magnetization acquire a sub-lattice modulation, which can also be seen in the example in Fig.<ref>(c) [see Supplementary Information Sec.<ref> for plots of the electron density]. As V is increased further, the ferromagnetic domains vanish smoothly through a second-order phase transition into the CoMI phase. The spin texture in the CoMI phase is also quite complex. At ν=1 within this phase, a topologically trivial gapped coplanar domain with strong sub-lattice density and coplanar magnetization modulation is formed [see Fig.<ref>(d)], while at ν=4/3 the V=0 coplanar ground state is very stable in the studied range of V. For 1<ν<4/3, domains that inherit from the ν=1 and ν=4/3 ground states are formed, as exemplified in Fig.<ref>(d). Since both these domains are topologically trivial, no edge states arise at the domain walls, and the system is still a trivial gapped insulator at these intermediate fillings. Finally, we check the stability of the t_2=0 QAHC for a nonzero V. 
Fig.<ref>(b) shows that this phase can be stable up to V≈1 for the smallest filling fractions for which it develops. The smallest skyrmions that form the crystal again accommodate two electrons and have topological charge χ=2 (not shown), as for V=0. However, for larger fillings, the QAHC becomes less robust to V, as shown in Fig.<ref>(b). § DISCUSSION In this paper, we studied the ground state phase diagram arising from doping a quantum anomalous Hall insulator. We found that depending on doping and on the strength of interactions and of the non-interacting topological mass, a quantum anomalous Hall crystal phase competes with a topological domain wall phase, both spontaneously breaking time-reversal and translational symmetry. Remarkably, topology plays a crucial role in determining the nature of both phases. Arguably the most important results are the robustness and tunability of the QAHC phase for fillings significantly away from ν=1 and down to t_2=0, showing that a finite non-interacting topological mass is not necessary for the spontaneous realization of the QAHC state. In fact, the crystallization of skyrmions becomes the key ingredient behind the QAHC. To our knowledge, this is the first example of a non-fine-tuned spontaneous QAHC arising from a topologically trivial parent state. Previous examples of a spontaneous QAHC arising from trivial bands, also induced by noncoplanar spin textures, require specific fillings where perfect Fermi surface nesting conditions are met <cit.>, such as filling up to the Van Hove singularities. In contrast, in the present case, the spontaneous skyrmion crystals exist for a significant range of doping and are highly tunable, despite the presence of the tight-binding lattice. By varying both the model parameters and the filling fraction, it is possible to stabilize phases with charge-e and charge-2e skyrmions, or even with more complicated patterns such as stripes of skyrmions, as we show in Supplementary Information Sec.<ref>. These features also distinguish the QAHC phase unveiled here from the anomalous Hall crystals recently proposed in the literature <cit.>. It is important to note that the DW phase discussed here is also different from previous examples. It has previously been shown that doping a gapped correlated insulator can spontaneously induce different domains, with the extra charge accommodated in topologically protected bound states in the domain walls <cit.>, a mechanism well captured by the Jackiw-Rebbi model <cit.>. This mechanism can be activated due to the existence of an inhomogeneous potential (e.g., a spatially varying substrate potential), which has been proposed in twisted bilayer graphene <cit.>, consistent with experimental observations <cit.>. It can also be at play at finite temperatures due to an increase in entropy from the topological modes localized on the domain wall <cit.>. In contrast, in our example the DW phase is not induced by entropy. Furthermore, the CoMI domain accommodates most of the doped electrons, since it is very energetically favorable, with the domain walls playing a sub-leading role. We finally comment on the possible relevance of our results to moiré TMDs, such as MoTe_2 and WSe_2. Starting from the QAHI at ν=1, our results show that the gap survives over an extended region of doping. In the QAHC, the Hall conductance remains quantized against doping of electrons. In the DW phase, there coexist topological and trivial gapped phases corresponding, respectively, to ν=1 and ν=4/3. 
As we gradually dope the system away from ν=1, as long as the topological domain percolates through the whole system, the Hall quantization survives against doping. At a threshold doping, the topological domain ceases to percolate and the system is taken over by the ν=4/3 domain, in which case the Hall conductance vanishes. One interesting feature of the experimental phase diagram of MoTe_2 moiré is that the Hall conductance plateau extends for fillings around ν=1 <cit.>. The QAHC and DW phases offer new possible mechanisms to explain the robustness of quantization of the Hall conductance against doping, in addition to a more conventional mechanism based on the Anderson localization of doped electrons. In addition to Hall conductance, doping-induced spin textures can be detected experimentally through multiple techniques, including scanning tunneling microscopy, electronic compressibility, and optical measurements. Interesting future directions include understanding whether quantum fluctuations can melt the QAHC and give rise to exotic states of matter such as superconductivity. In fact, we have shown that the QAHC can contain charge-2e skyrmions in some regions of parameter space. The condensation of these charge-2e skyrmions would give rise to superconductivity. Interestingly, this may also occur for the simpler charge-e skyrmions. Since a skyrmion generates an emergent magnetic field of one flux quantum for electrons, the charge-e skyrmion composite object is a boson, in analogy to the composite boson due to the attachment of one flux quantum to an electron in fractional quantum Hall systems. § ACKNOWLEDGMENTS The authors would like to thank Di Xiao, Cristian Batista and Long Ju for fruitful discussions. The work is partially supported by the US DOE NNSA under Contract No. 89233218CNA000001 through the LDRD Program and was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. DOE Office of Science, under user proposals #2018BU0010 and #2018BU0083. § METHODS We employ the unrestricted self-consistent Hartree-Fock method in real space. The full mean-field Hamiltonian corresponding to Eq.<ref> is given by (see Supplementary Information for detailed derivation) H_MF=H_0+U∑_i,σ⟨ n_i,-σ⟩ n_i,σ-U∑_i,σ⟨ c_i,-σ^†c_i,σ⟩ c_i,σ^†c_i,-σ +V∑_i,j,σ,σ'δ_⟨ i,j⟩⟨ n_j,σ'⟩ n_iσ-V∑_i,j,σ,σ'δ_⟨ i,j⟩⟨ c_j,σ'^†c_i,σ⟩ c_iσ^†c_jσ' -U∑_i(⟨ n_i,⟩⟨ n_i,⟩-⟨ c_i^†c_i⟩⟨ c_i^†c_i⟩) -V∑_⟨ i,j⟩,σ,σ'(⟨ n_iσ⟩⟨ n_jσ'⟩-⟨ c_jσ'^†c_iσ⟩⟨ c_iσ^†c_jσ'⟩), where H_0 is the non-interacting part of the Hamiltonian in Eq.<ref> and ⟨⋯⟩ denotes the average value with respect to the mean-field ground state, to be determined self-consistently. The site-resolved magnetization vector can be computed from the variational parameters as m_i=(2[⟨ c_i^†c_i⟩], 2[⟨ c_i^†c_i⟩], ⟨ n_i,↑⟩-⟨ n_i,↓⟩). We considered finite system sizes with L× L unit cells. In order to increase the speed of convergence we employ the Broyden method <cit.>. Furthermore, in order to avoid convergence to local minima, we also performed between 100 and 1000 calculations (depending on system size) with different starting guesses and took the one with the smallest free energy as the ground state. Finally, we use twisted boundary conditions with randomly chosen twists. This allows us to break unwanted degeneracies (or quasi-degeneracies) that can be harmful for the convergence of the mean-field equations. 
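For concreteness, the structure of the self-consistency cycle can be summarized by the following skeleton. It is a sketch rather than the code used in this work: build_h_mf and extract_v stand for the model-specific construction of H_MF from the variational parameters and their re-evaluation from the occupied eigenvectors, and plain linear mixing replaces the Broyden update for brevity.

```python
import numpy as np

def scf_loop(build_h_mf, extract_v, v0, n_occ, mix=0.3, tol=1e-8, max_iter=5000):
    """Minimal unrestricted Hartree-Fock self-consistency loop (sketch).

    build_h_mf(v) -> single-particle mean-field Hamiltonian for parameters v
    extract_v(U_occ) -> new variational parameters (<n_i,s>, <c^+ c>, ...)
    recomputed from the occupied eigenvectors; both callables are assumed."""
    v = np.asarray(v0)
    for _ in range(max_iter):
        evals, evecs = np.linalg.eigh(build_h_mf(v))
        v_new = extract_v(evecs[:, :n_occ])        # T = 0: fill the lowest levels
        if np.max(np.abs(v_new - v)) < tol:        # stop once |Δv| is negligible
            return v_new, evals
        v = (1.0 - mix) * v + mix * v_new          # damped (linear) mixing
    raise RuntimeError("mean-field equations did not converge")
```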
For the smallest system sizes used (L=12 and L=16), we used a completely random guess for the variational parameters in order to identify the possible phases in an unbiased way. To provide the thermodynamic limit estimation of the phase boundaries, we carried out calculations for larger L (up to L=32), verifying the convergence of the critical points, within the provided error bars. Because for a large system it becomes very challenging to converge from a random starting guess, we used random skyrmion and domain wall initial guesses motivated by the smaller L results. We then compared the free energies and converged variational parameters for both guesses. In the Supplementary Information Sec.<ref> we provide a detailed overview of these calculations. Finally, defining the vector of variational parameters v=( n_, n_,[Δ_],[Δ_],[t̃],[t̃]), where Δ_^i=⟨ c_i,^†c_i,⟩ and t̃_ij;σσ'=⟨ c_iσ^†c_jσ'⟩δ_⟨ i,j⟩, we only stopped the calculation when |Δ v|<10^-8, where Δ v denotes the difference between consecutive iterations, up to a maximum of N_iter iterations. Depending on the size of the system, we chose N_iter∈[10^3-5×10^4]. Supplemental Information for: § TMD HOMOBILAYERS MOIRÉ SUPERLATTICE AS KANE-MELE-HUBBARD MODEL In the main text, we discuss that TMD homobilayers moiré superlattice can be described by an effective Kane-Mele-Hubbard model. Here we provide a more detailed explanation. The essential ingredients behind this emergent description are illustrated in Fig.<ref>(a), that we reproduce here in Fig.<ref>. These include <cit.>: * (i) the Wannier functions for the two topmost valence bands of TMD homobilayers moiré superlattice, such as twisted MoTe_2 bilayer, are very localized in the purple and green corners of the moiré cell depicted in Fig.<ref>(a1), respectively for the bottom and top layers; * (ii) the maxima and minima of the first and second highest energy valence bands are respectively located at the Dirac points K_t and K_b of the moiré Brillouin zone arising from the displaced Dirac cones of top and bottom layers, as shown in Fig.<ref>(a2); * (iii) there is spin-valley locking, that is, the wave function at K (- K) valley is spin up (down) polarized, as illustrated in Fig.<ref>(a2). Because of (i), a tight-binding model in a honeycomb lattice with the emergent sites represented in Fig.<ref>(a1) can be derived to describe the topmost valence bands. Also, because of (ii), the low-energy expansion around K_t and K_b introduces layer(sublattice)-dependent next-nearest-neighbor (NNN) complex hopping between the effective sites given by exp(iσκ_t(b)· a_M), where a_M is the moiré lattice vector connecting two NNN sites and σ=±1 respectively for spins and . The spin-dependence of the phase follows from (iii). This NNN term is precisely the Kane-Mele spin-orbit term, responsible for the non-trivial topology. The bands with opposite spin/valley have opposite Chern numbers. At twist angles when the electron kinetic energy is reduced, the interaction between electrons can be much larger than the kinetic energy. The most important interaction term is on-site Hubbard, even though longer-range interactions can also be large <cit.>. § EASY-AXIS FERROMAGNETIC QAHI FOR Ν=1 In this section we obtain the homogeneous ferromagnetic QAHI state at filling ν=1. To keep the problem analytically tractable, we will first search for homogeneous solutions. We will then confirm indeed that the true ground state is homogeneous by carrying out the full real-space mean-field calculation for this filling. 
Let us assume that [ ⟨ n_i,σ⟩=𝒩_σ≡1/N_ uc∑_i⟨ n_i,-σ⟩; ⟨ c_i,-σ^†c_i,σ⟩=Δ_σ,-σ≡1/N_ uc∑_i⟨ c_i,-σ^†c_i,σ⟩ ] where N_uc is the number of unit cells. Using this assumption, we get the following mean-field Hamiltonian [ H_MF =H_0+U(∑_i,σ𝒩_-σn_i,σ-∑_i,σΔ_σ,-σc_i,σ^†c_i,-σ) ], where by hermiticity Δ_,=(Δ_,)^*. Introducing the Fourier transforms c_i,σ^†=1/√(N_ uc)∑_ ke^ i k· R_ia_ k,σ^† ,i∈ A 1/√(N_ uc)∑_ ke^ i k· R_ib_ k,σ^† ,i∈ B the momentum-space mean-field Hamiltonian reads H=∑_ k c_ k^† h_ k^ MF c_ k+cte. , where cte is a constant with no fermionic operators and c_ k=([ a_ k, b_ k, a_ k, b_ k, ])^T h_ k^ MF=([ λ( k)+U𝒩_ f( k) -UΔ_ 0; f^*( k) -λ( k)+U𝒩_ 0 -UΔ_; -UΔ_^* 0 -λ( k)+U𝒩_ f( k); 0 -UΔ_^* f^*( k) λ( k)+U𝒩_ ]) f( k)=-t(1+e^ i k· a_1+e^ i k· a_2) λ( k)=-2t_2[sin( k· a_1)-sin( k·( a_1- a_2))-sin( k· a_2)] Defining m_x=Δ_+Δ_ m_y=-i(Δ_-Δ_) m_z=𝒩_-𝒩_ we can write the mean-field Hamiltonian as H_MF=∑_ k c_ k^†[U𝒩/2𝕀+f_x( k)σ_x+f_y( k)σ_y+f_z( k)σ_zs_z-U/2 m· s] c_ k+U/2N_uc m^2-U/2N_ucn^2 where 𝒩=𝒩_+𝒩_, m=(m_x,m_y,m_z), s are the Pauli matrices in the spin space and we made explicit the constant term in Eq.<ref> since it depends on m. Writing m=M(sinθcosϕ,sinθsinϕ,cosθ), the eigenenergies are given by ϵ( k)=f_0( k)+U𝒩/2±√( f^2( k)+(UM/2)^2±|UM|√(f_x^2( k)+f_y^2( k)+f_z^2( k)cos^2(θ))) Let us consider the situation where U is large enough so that the lowest energy band is isolated (U≳4 for t_2=0.2). In this case, we simply need to consider the free energy of the lowest energy band, which is minimized for θ=0. Therefore, the ground state develops finite magnetization along the z axis. Computing the total free energy for θ=0, we have F=⟨ H_MF⟩=⟨∑_α, kϵ_α( k,M)c_ k,α^†c_ k,α⟩+UN_uc(M^2-n^2)=∑_ k(f_0( k)-| f^2( k)|)+UN_uc/2(-|M|+M^2)+cte, which is minimized for M=1/2, consistent with the numerical results. § DERIVATION OF MEAN-FIELD HAMILTONIAN Starting with the Hamiltonian in Eq.<ref> of the main text, we can make the following mean-field decoupling by using Wick's theorem: [ n_i,σn_j,σ'=c_iσ^†c_jσ'^†c_jσ'c_iσ=-:c_iσ^†c_jσ'^†c_jσ'c_iσ:; =⟨ n_iσ⟩:n_jσ':+⟨ n_jσ'⟩:n_iσ:+⟨ n_iσ⟩⟨ n_jσ'⟩; -⟨ c_iσ^†c_jσ'⟩:c_jσ'^†c_iσ:-⟨ c_jσ'^†c_iσ⟩:c_iσ^†c_jσ':-⟨ c_jσ'^†c_iσ⟩⟨ c_iσ^†c_jσ'⟩; Using; =⟨ n_iσ⟩ n_jσ'+⟨ n_jσ'⟩ n_iσ-⟨ n_iσ⟩⟨ n_jσ'⟩; -⟨ c_iσ^†c_jσ'⟩ c_jσ'^†c_iσ-⟨ c_jσ'^†c_iσ⟩ c_iσ^†c_jσ'+⟨ c_jσ'^†c_iσ⟩⟨ c_iσ^†c_jσ'⟩ ] where :: denotes the normal ordering operation. Plugging this in the Hamiltonian in Eq.<ref>, we get the meanfield Hamiltonian in Eq.<ref>. The expectation values can be computed at every mean-field iteration by introducing the eigenbasis d_α=∑_i,σ𝒰_iσ,α^*c_i,σ, through [ ⟨ c_jσ^†c_iσ'⟩ =⟨∑_α,βd_α^†𝒰_jσ,α^*𝒰_iσ',βd_β⟩_MF; =∑_α𝒰_jσ,α^*𝒰_iσ',αf(E_α) ] where f(E_α)=[exp[β(E_α-μ)]+1]^-1. Finally, the free energy can be computed through F=⟨ H⟩=⟨ H_0⟩+⟨ H_U⟩+⟨ H_V⟩ , where [ ⟨ H_0⟩_MF=-∑_i,σ⟨ t∑_⟨ i,j⟩c_i,σ^†c_j,σ-t_2∑_⟨⟨ i,j⟩⟩,σe^- iσϕ_ijc_i,σ^†c_j,σ⟩_MF; =∑_α∈ occ(-t∑_⟨ i,j⟩𝒰_α,iσ^*𝒰_α,jσ-t_2∑_⟨⟨ i,j⟩⟩,σe^- iσϕ_ij𝒰_α,iσ^*𝒰_α,jσ); =_ occ[𝒰^†H_0𝒰] ] [ ⟨ H_U⟩_MF= ⟨ U∑_in_i,n_i,⟩_MF=U∑_i⟨ n_i,⟩_MF⟨ n_i,⟩_MF-U⟨ c_i^†c_i⟩_MF⟨ c_i^†c_i⟩_MF; =U∑_i(∑_α∈ occ|𝒰_i,α|^2)(∑_β∈ occ|𝒰_i,β|^2)-U∑_i|∑_α∈ occ𝒰_i,α𝒰_i,α^*|^2 ] [ ⟨ H_V⟩_MF= ⟨ V∑_⟨ i,j⟩,σ,σ'n_iσn_jσ'⟩=V∑_⟨ i,j⟩,σ,σ'⟨ n_iσ⟩_MF⟨ n_jσ'⟩_MF-V⟨ c_iσ^†c_jσ'⟩_MF⟨ c_jσ'^†c_iσ⟩_MF; =V∑_⟨ i,j⟩,σ,σ'(∑_α∈ occ|𝒰_iσ,α|^2)(∑_β∈ occ|𝒰_jσ',β|^2)-V∑_⟨ i,j⟩,σ,σ'|∑_α∈ occ𝒰_iσ,α^*𝒰_jσ',α|^2 ] § DETAILS ON CALCULATION OF PHASE BOUNDARIES In this section we provide details on the calculations of the phase boundaries presented in the main text. 
As mentioned in the Methods section, we started by running completely unbiased calculations with a random starting guess for smaller system sizes, while for the larger systems we provided educated guesses. We show an example of this scheme in Fig.<ref>. In Fig.<ref>(a) we show the results for L=12 and a completely random starting guess, where the transition between the QAHC and the DW phases can be clearly observed. In order to determine the critical point as precisely as possible, we provided the following educated starting guesses for the larger system sizes: * Random domain wall (rDW): circular coplanar magnetic domain with random radius (around a mean value determined by number of doped electrons) in a ferromagnetic background; * Random skyrmions (rSkyr): δ_e skyrmions with χ=1 charge, randomly distributed in the lattice over a ferromagnetic background. Given the starting guess rDW (rSkyr), even if the true ground state is the QAHC (DW), the system may converge to a DW (QAHC) state corresponding to a local minimum in the free energy. Nonetheless, the true ground state can be unveiled by comparing the free energies of the two cases, which show a crossing point that is robust to increasing L, as exemplified in Fig.<ref>(b). This crossing point provides an accurate estimation of the critical point between the QAHC and DW phases, whose convergence can be tested by comparing the results for different system sizes. We now turn to explain how the phase boundaries of the HM and QAHM phases were obtained in Fig.<ref>(c). In Fig.<ref>(a) we show the difference in free energy between the true ground state and the homogeneous HM state. Motivated by the results for smaller systems that showed that the ground state is a QAHC for this region of parameters, we used rSkyr as a starting guess for larger system sizes. For low enough doping, the free energies of the HM and the true ground state are exactly the same, since there is convergence to the HM state even with the inhomogeneous rSkyr starting guess. Above a critical doping, this difference starts being finite, and the true ground state becomes the QAHM. This critical doping is robust to increasing L as shown in Fig.<ref>(a). The QAHM phase is characterized by a skyrmion charge 0<χ<δ_e [Fig.<ref>(b)], a non-quantized Hall conductance [inset of Fig.<ref>(c)] and a gapless spectrum [Fig.<ref>(c)]. Above a critical doping there is a transition to QAHC, characterized by χ=δ_e [Fig.<ref>(b)], a robust gap [Fig.<ref>(c)] and |σ_xy|=e^2/h [inset of Fig.<ref>(c)]. The critical point was estimated by averaging the critical points obtained for the two largest system sizes. We finally detail the calculation of the phase boundaries for the phase diagram in Fig.<ref>(a), with finite nearest-neighbor interactions. For the DW to QAHC transition we used the same procedure as the one previously described, showing an example in Fig.<ref>(a). For the transition between the DW and CoMI phases, we inspected the critical V above which |m_z|^tot = N_sites^-1∑_i |⟨ m_i^z⟩| vanishes, as exemplified in Fig.<ref>(b). Note that |m_z|^tot decreases smoothly with V, corroborating the second-order nature of the transition, where the correlation length associated with the coplanar domain diverges. In this case, we used a completely random initial guess and therefore reached smaller system sizes (up to L=20) than for the DW to QAHC transition. 
Nonetheless, a reasonable convergence in the critical point was obtained for the two largest system sizes (L=16 and L=20), as indicated by the small error bars in Fig.<ref>(a), which correspond to the difference between the critical-point estimates for these sizes. § A DEEPER LOOK INTO THE DIFFERENT PHASES In this section we explore in more detail some interesting regions of the phase diagrams shown in the main text. *Electron densities in DW and QAHC phases.— In Fig.<ref>(d) of the main text we presented examples of the magnetization profiles for the DW and QAHC phases. In Fig.<ref> we present the electron densities ⟨ n_i⟩=⟨ n_i↑⟩+⟨ n_i↓⟩, ⟨ n_i↑⟩ and ⟨ n_i↓⟩ together with the magnetization profiles. Within the DW phase, Fig.<ref>(a) shows that the density profile within the coplanar domain corresponds to ⟨ n_i↑⟩=⟨ n_i↓⟩=1/3, while in the ferromagnetic domain we have ⟨ n_i↑⟩=0 and ⟨ n_i↓⟩=1/2 (note that we can also converge to the alternative degenerate symmetry breaking state with ⟨ n_i↑⟩=1/2 and ⟨ n_i↓⟩=0). In the QAHC phase with charge-e and χ=1 skyrmions [Fig.<ref>(b)], the plot of ⟨ n_i⟩ shows a charge density wave modulation, with the excess charge accumulating around the skyrmions. The excess charge around each skyrmion compared to the ferromagnetic background charge ⟨ n_i⟩=0.5 is precisely one electron, as mentioned in the main text. Finally, in the QAHC phase with charge-2e and χ=2 skyrmions [Fig.<ref>(c)], the ⟨ n_i⟩ plot does not distinguish the skyrmions from the background very well since they become significantly larger. This becomes more clear in the spin-resolved plots, where the density of one spin species is mostly concentrated within the skyrmions while that of the opposite spin mostly occupies the (small) background. *Ground states for small δ_e.— We now turn to analyze the phase diagram in Fig.<ref>(a) in more detail, where doping of a small number of electrons was considered with respect to filling ν=1. While we decided to label any region where a charge clustering was observed as a DW phase, these regions have rich substructures. We unveil them in Figs.<ref>,<ref> for 1≤δ_e≤4. Interestingly, ground states with finite skyrmion charge χ≤δ_e can be found in the DW region. We note that for δ_e=1 we considered the ground state in region I, distinct from the skyrmion states in regions II and III [see Fig.<ref>(a)], to belong to the DW phase, even though it does not correspond to any charge clustering with the single-electron doping. *DW vs. DW_2 phases.— For larger electron dopings, we enter the regime where only the DW phase survives [see examples in Figs.<ref>(a,b) for δ_e=8]. Over a small range of interaction strength, a second domain wall phase that we label DW_2 is stabilized. The magnetization profile in this phase is exemplified in Fig.<ref>(c), showing that two ferromagnetic domains with opposite spin polarization are formed. However, in this case there are no clear chiral edge states at the domain wall, which we attribute to the smaller domain being metallic. This is evidenced by the extended nature of the eigenstate at the Fermi level within this domain, shown in Fig.<ref>(c). If this domain were insulating, then chiral edge states would be expected at the domain wall due to the different Chern numbers for opposite spin domains. *A myriad of different skyrmion crystals.— We now turn to explore the different types of skyrmion crystals that can be stabilized in the QAHC phase by varying doping and the model parameters. 
In Fig.<ref> we show how doping can induce charge-2e skyrmions with χ=2 [see also Fig.<ref>(c)]. The reason is that increasing doping decreases the lattice spacing of the skyrmion crystal and given that the individual skyrmions repel, it becomes energetically favorable to create slightly larger skyrmions that accomodate more than one electron. For smaller values of the topological mass t_2, the QAHM phase can be stabilized, where skyrmions are formed but do not crystallize, as exemplified in Figs.<ref>(a),<ref>(a). For certain fillings in the small t_2 regime, the system prefers to arrange the skyrmions in stripe patterns, as illustrated in Fig.<ref>(b). For t_2=0, there is a significant range of doping for which a crystal of only χ=2 skyrmions hosting two electrons each is formed, as shown in Fig.<ref>(b) (if the number of electrons is odd, an additional skyrmion defect containing the extra electron is formed). Depending on doping, these can also arrange in stripe patterns, as exemplified in Fig.<ref>(c). * *Electron density in DW phase with V≠0.— We finish this section by providing results on the electron density in the DW phase for V≠0. As mentioned in the main text, a sublattice density modulation is obtained in the coplanar domains, while the ferromagnetic domains have uniform density. This is illustrated in Fig.<ref>. § VALIDITY OF SKYRMION ANSATZ AND SKYRMION TO POLARON TRANSITION In this section we demonstrate the validity of the skyrmion ansatz proposed in Eq.<ref> to describe the effective exchange field spontaneously generated for δ_e=1. In Fig.<ref>(a) we plot the difference in the free energy between the skyrmion ground state and half-metal solutions to show that the regime of stability predicted by the ansatz is qualitatively (and almost quantitatively) consistent with the one predicted by solving exactly the mean-field equations. From the ansatz, it is also possible to predict that above a critical t_2, the skyrmion ground state becomes unfavorable compared to the polaron ground state (spin-flip magnetic impurity with null skyrmion charge). This is explicitly shown in Fig.<ref>(b). The underlying reason is that both the magnitude of the topological term and the skyrmion size ξ decrease with t_2, as shown in Fig.<ref>(d) of the main text, which in turn make the skyrmion texture become energetically unfavorable at large enough t_2. These predictions are consistent with the exact result shown in Fig.<ref>(c). § COPLANAR MAGNET AT Ν=4/3 Here we provide additional details on the calculations for the CoMI phase at ν=4/3. Using the unit cell definition and site numbering in Fig.<ref>, and defining f_2( x, y)=it_2(1+e^i x· k+e^i y· k), the full Hamiltonian in k-space is given by H( k)=([ H_( k) H_( k); H_( k) H_( k) ]), where H_( k)=([ 0 -t f_2^*( a'_1,( a'_1- a'_2)) -t -f_2^*( a'_1, a'_2) -te^-i a'_1· k; -t 0 -t -f_2^*( a'_2,( a'_2- a'_1)) -te^-i a'_2· k f_2^*( a'_1, a'_2); f_2( a'_1,( a'_1- a'_2)) -t 0 -te^i( a'_1- a'_2)· k f_2^*( a'_2,( a'_2- a'_1)) -t; -t -f_2( a'_2,( a'_2- a'_1)) -te^i( a'_2- a'_1)· k 0 -t -f_2^*( a'_1,( a'_1- a'_2)); -f_2( a'_1, a'_2) -te^-i a'_2· k f_2( a'_2,( a'_2- a'_1)) -t 0 -t; -te^i a'_1· k f_2( a'_1, a'_2) -t -f_2( a'_1,( a'_1- a'_2)) -t 0 ]) H_( k)=H_( k,t_2→-t_2) H_( k)=MUdiag(1,e^iθ,e^2iθ,e^5iθ,e^4iθ,e^3iθ), H_( k)=H_^†( k) [ a'_1= (3/2,√(3)/2); a'_2= (3/2,-√(3)/2) ]. We will now provide more details on the derivation of the continuum model in the main text, Eq.<ref>. 
As stated in the main text, the Fourier transforms of ⟨ c_ r_m,^†c_ r_m,⟩ peak at K or K' depending on the sublattice and on w. From this observation, we have ⟨ c_ r_m,σ^†c_ r_m,-σ⟩=M∑_ be^-iwσ m[ϕ+( K+ b)· r_m], where M∝ U, m=±1 respectively for sublattices A and B, b are the original reciprocal lattice vectors and ϕ=π/6 is the angle difference between sublattices. This yields the following contribution to the mean-field Hamiltonian, Me^iσ wmϕ∑_ b, kc_ k,m,-σ^†c_ k-σ wm( K+ b),m,σ . Let us consider the first-order process in perturbation theory, taking q= K, K'. For w=-1, we have [ Me^-iϕ∑_ bc_ K',-1,^†c_ K'-( K+ b),-1,+Me^iϕ∑_ bc_ K,1,^†c_ K+( K+ b),1,; +Me^-iϕ∑_ bc_ K',1,^†c_ K'-( K+ b),1,+Me^iϕ∑_ bc_ K,-1,^†c_ K+( K+ b),1,; =Me^-iϕ∑_ bc_ K',-1,^†c_ K- b,-1,+Me^iϕ∑_ bc_ K,1,^†c_ K'+ b,1,; +Me^-iϕ∑_ bc_ K',1,^†c_ K- b,1,+Me^iϕ∑_ bc_ K,-1,^†c_ K'+ b,1, ] while for w=1, we have [ Me^iϕ∑_ bc_ K,-1,^†c_ K+( K+ b),-1,+Me^-iϕ∑_ bc_ K',1,^†c_ K'-( K+ b),1,; +Me^iϕ∑_ bc_ K,1,^†c_ K+( K+ b),1,+Me^-iϕ∑_ bc_ K',-1,^†c_ K'-( K+ b),1,; =Me^iϕ∑_ bc_ K,-1,^†c_ K'+ b,-1,+Me^-iϕ∑_ bc_ K',1,^†c_ K- b,1,; Me^iϕ∑_ bc_ K,1,^†c_ K'+ b,1,+Me^-iϕ∑_ bc_ K',-1,^†c_ K- b,1, ] From these expressions, we can derive the low-energy continuum model in Eq.<ref>, neglecting the 2 degenerate lowest and highest energy bands of the Hamiltonian in Eq.<ref>. The energy bands for this model are given by [ E( q)= -M/2±√(v_f^2 q^2+(M/2-wλ_SO)^2) M/2±√(v_f^2 q^2+(M/2+wλ_SO)^2) ] Note that there are only four different dispersions even though there are 2 flavours because each band is 2-fold degenerate, in agreement with the exact model. The reason is that, as stated in the main text, this Hamiltonian matrix can be written in terms of two identical 4×4 blocks if the basis elements are rearranged as [ (ψ_A, K,,ψ_B, K,,ψ_A, K',,ψ_B, K',; ψ_B, K,,ψ_A, K,,ψ_B, K',,ψ_A, K',). ] § OBSERVABLES §.§ Hall conductivity and Chern number §.§.§ Kubo formula for tight-binding Hamiltonians We start with a general tight-binding Hamiltonian given by H_0=∑_RR'αβt_RR'^αβc_R,α^†c_R'β We introduce the coupling to the vector potential to a Peierl's phase, that is, t_RR'^αβ→ t_RR'^αβe^-ie∫_r_α^r'_βA(r,t)· dr, where r_α is the position of site α belonging to the unit cell R. Within linear response, we expand to quadratic terms in A and assume that it is constant over a lattice spacing to get t_RR'^αβ→ t_RR'^αβ[1-ieA·δ+1/2e^2(A·δ)^2] where δ=r'_β-r_α. If we now write H=H_0+H', we have H'(t)=-ie∑_αβ∑_RR'(A(t)·δ)t_RR'^αβc_R,α^†c_R'β+1/2e^2∑_αβ∑_RR'(A(t)·δ)^2t_RR'^αβc_R,α^†c_R'β or H'(t)=-j_μ^PA_μ(t)+1/2A_μ(t)Δ_μνA_ν(t) where the first and second terms are respectively the paramagnetic and diamagnetic components of the current, with j_μ^P=ie∑_αβ∑_RR't_RR'^αβc_R,α^†c_R'βδ^μ Δ_μν=e^2∑_αβ∑_RR't_RR'^αβc_R,α^†c_R'βδ^μδ^ν Assuming the linear coupling H'(t)=-∫ drj(r)·A(r,t), the total current operator is then given by j_ tot^μ=-δ H'/δ A_μ=j_P^μ-Δ_μνA_ν+𝒪(A^2) Using the Kubo formula, we have ⟨ j_ tot^μ⟩(t)=-⟨Δ^μν⟩_0A^ν(t)-∫ dt'Π^μν(t,t')A^ν(t') where ⟨⟩_0 is the average value taken for A=0, or in frequency space, ⟨ j_ tot^μ⟩(ω)=-[⟨Δ^μν⟩_0+Π^μν(ω)]A^ν(ω) , where Π^μν(t,t')=-iΘ(t-t')⟨[j_P^μ(t),j_P^ν(t')]⟩_0 , and Π^μν(ω)=∫_0^+∞dte^iω tΠ^μν(t,0). Note that only j_P is considered in the commutator above (and not j_ tot) otherwise we would have a quadratic contribution to ⟨ j_ tot^μ⟩. §.§.§ Calculation of Π^μν(ω) We will start by computing Π^μν(t,t'). 
First, we write the Hamiltonian in the unperturbed eigenbasis basis, with d_n^†=∑_Rαa_Rα^nc_Rα^† and c_Rα^†=∑_n(a_Rα^n)^*d_n^†, H_0=∑_nϵ_nd_n^†d_n. We first compute j_μ^P(t) in this eigenbasis, j_μ^P(t)=∑_nmj_nm^μ,Pd_n^†(t)d_m(t)=∑_nmj_nm^μ,Pe^i(ϵ_n-ϵ_m)td_n^†d_m , where we used d_n^†(t)=e^iϵ_ntd_n^† and j_nm^μ,P=ie∑_αβ∑_RR't_RR'^αβ(a_Rα^n)^*a_R'β^mδ^μ We now compute Π^μν(t,t'): Π^μν(t,t') =-iΘ(t-t')∑_nm∑_ll'j_nm^μ,Pe^i(ϵ_n-ϵ_m)tj_ll'^ν,Pe^i(ϵ_l-ϵ_l')t'⟨[d_n^†d_m,d_l^†d_l']⟩_0 =-iΘ(t-t')∑_nmj_nm^μ,Pj_mn^ν,Pe^i(ϵ_n-ϵ_m)(t-t')[f(ϵ_n)-f(ϵ_m)] where we used [d_n^†d_m,d_l^†d_l']=d_n^†{d_m,d_l^†}d_l'-d_l^†{d_n^†,d_l'}d_m=d_n^†d_l'δ_ml-d_l^†d_mδ_nl'. We will now work in frequency space. Using Θ(t)=-lim_η→0∫dω/2π ie^-iω t/ω+iη and ∫dω'/2π i1/ω'+iη∫_0^+∞dte^i(ω-ω'+ϵ_n-ϵ_m)t =1/i1/ω+ϵ_n-ϵ_m+iη , we get Π^μν(ω) =∫_0^+∞dte^iω tΠ^μν(t,0) =i∑_nmj_nm^μ,Pj_mn^ν,P[f(ϵ_n)-f(ϵ_m)]∫dω'/2π i1/ω'+iη∫_0^+∞dte^i(ω-ω'+ϵ_n-ϵ_m)t =∑_n≠ mj_nm^μ,Pj_mn^ν,Pf(ϵ_n)-f(ϵ_m)/ω+ϵ_n-ϵ_m+iη . §.§.§ Calculation of ⟨Δ^μν⟩_0 We still need to evaluate ⟨Δ^μν⟩_0. This is the contribution from the diamagnetic part and therefore can be computed using time-independent perturbation theory, by applying a static vector potential A. The total current ⟨ j_ s,tot^μ⟩ which is the average current after applying the static A is ⟨ j_ s,tot^μ⟩=1/Z[e^-β H_Aj_ s,tot^μ], where j_ s,tot^μ≡- H_A/ A_μ. Similarly to Eq.<ref>, we have H_A=-j_s,P^μA_μ+1/2A_μΔ_μνA_ν where the superscript s stands for static. By definition we have j_s,P^μ=- H_A/ A_μ|_A=0 Δ_μν^s=^2H_A/ A_μ A_ν|_A=0 To the lowest order, we have that H_A=-j_s,P·A+𝒪(A^2) and j_ s,tot^μ=j_s,P^μ-Δ_μνA_ν+𝒪(A^2). Let us define S(τ) through U(τ)=e^-τ(H_0+V)=e^-τ H_0S(τ) and from _τU(τ)=-(H_0+V)U(τ) we can derive S(τ)=exp[-∫_0^τdτ'V(τ')]≈1-∫_0^τdτ'V(τ') We now set V(τ)=-j_P^μ(τ)A_μ and apply this result to the thermal average ⟨ j_ s,tot^μ⟩ to get, to order 𝒪(A^2): ⟨ j_ s,tot^μ⟩ =1/Z[e^-β H_Aj_ s,tot^μ] ≈1/Z[e^-β H_0S(β)(j_P^μ-Δ_μνA_ν)] =⟨ j_P^μ⟩_0-⟨Δ_μν⟩_0A_ν+∫_0^βdτ'⟨ T_τj_P^ν(τ')j_P^μ⟩_0A_ν This implies that (noticing that ⟨ j_P^μ⟩_0=0): ⟨ j_ s,tot^μ⟩/ A_ν|_A=0=-⟨Δ_μν⟩_0+∫_0^βdτ'⟨ T_τj_P^ν(τ')j_P^μ⟩_0 In the end we want to write ⟨Δ_μν⟩_0 in terms of the other two quantities. The second term is ∫_0^βdτ'⟨ j_P^μ(τ')j_P^ν⟩_0 =∫_0^βdτ'∑_nm∑_ll'j_nm^μ,Pj_ll'^ν,P⟨ T_τd_n^†(τ')d_m(τ')d_l^†d_l'⟩_0 =∫_0^βdτ'∑_nm∑_ll'j_nm^μ,Pj_ll'^ν,P[f(ϵ_n)f(ϵ_l)δ_mnδ_ll'-G_n(-τ')G_m(τ')δ_nl'δ_ml] where G_μ(τ)=-⟨ T_τd_μ(τ)d_μ^†⟩. The first term is proportional to ∑_nj_nn^μ,Pf(ϵ_n)∑_lj_ll^ν,Pf(ϵ_l)=⟨ j_P^μ⟩_0⟨ j_P^ν⟩_0=0 For the second term in Eq.<ref>, we work in the Matsubara frequency space to get G_μ(-τ)G_ν(τ) =1/β^2∑_ω'_n,ω”_ne^iτ(ω'_n-ω”_n)G_μ(iω'_n)G_ν(iω”_n) →∫_0^βdτ'G_μ(-τ')G_ν(τ') =1/β∑_ω'_n,ω”_nG_μ(iω'_n)G_ν(iω”_n)1/β∫_0^βdτ'e^iτ(ω'_n-ω”_n) =1/β∑_ω'_nG_μ(iω'_n)G_ν(iω'_n) The result can then be simplified. If μ≠ν, we have 1/β∑_ω'_nG_μ(iω'_n)G_ν(iω'_n)=1/β ∑_ω'_n1/iω'_n-ϵ_μ1/iω'_n-ϵ_ν =1/2π i∫ dzf(z)1/z-ϵ_μ1/z-ϵ_ν =f(ϵ_μ)-f(ϵ_ν)/ϵ_μ-ϵ_ν On the other hand, if ϵ_μ=ϵ_ν, we have 1/β∑_ω'_nG_μ(iω'_n)G_ν(iω'_n) =1/2π i∫ dzf(z)1/(z-ϵ_μ)^2=f'(ϵ_μ) where we used e Res(g,z_0)=f^(n-1)(z_0)/(n-1)! for g(z)=f(z)/(z-z_0)^n. We therefore have ∫_0^βdτ'⟨ j_s,P^μ(τ')j_s,P^ν⟩_0=-∑_n≠ mj_nm^μ,Pj_mn^ν,Pf(ϵ_n)-f(ϵ_m)/ϵ_n-ϵ_m-∑_ϵ_n=ϵ_mj_nm^μ,Pj_mn^ν,Pf'(ϵ_n) and finally, from Eq.<ref> ⟨Δ_μν⟩_0=-⟨ j_ s,tot^μ⟩/ A_ν|_A=0-∑_ϵ_n=ϵ_mj_nm^μ,Pj_mn^ν,Pf'(ϵ_n)-∑_n≠ mj_nm^μ,Pj_mn^ν,Pf(ϵ_n)-f(ϵ_m)/ϵ_n-ϵ_m In the absence of pairing terms, the first term should vanish. 
The reason is that the total current due to an applied static potential, ⟨ j_ s,tot^μ⟩, should vanish. Nonetheless, we note that this term can be finite for finite systems. However, it should vanish when the thermodynamic limit is taken. §.§.§ Combining Π^μν(ω) and ⟨Δ_μν⟩_0 and computing the conductivity Recovering the expression ⟨ j_ tot^μ⟩(ω)=-[⟨Δ^μν⟩_0+Π^μν(ω)]A^ν(ω) , we combine the last term of ⟨Δ_μν⟩_0 with Π^μν(ω) to get ∑_n≠ mj_nm^μ,Pj_mn^ν,P[f(ϵ_n)-f(ϵ_m)](1/ω+ϵ_n-ϵ_m+iη-1/ϵ_n-ϵ_m) =-∑_n≠ mj_nm^μ,Pj_mn^ν,Pf(ϵ_n)-f(ϵ_m)/ϵ_n-ϵ_mω+iη/(ω+ϵ_n-ϵ_m+iη) The conductivity is given by ⟨ j_ tot^μ⟩(ω)=σ_μν(ω)E^ν(ω) Using that E^ν(ω)=i(ω+iη)A^ν(ω), we have that σ_μν(ω)=i/ω+iη[⟨Δ^μν⟩_0+Π^μν(ω)] and therefore σ_μν(ω)=-i/ω+iη[⟨ j_ s,tot^μ⟩/ A_ν|_A=0+∑_ϵ_n=ϵ_mj_nm^μ,Pj_mn^ν,Pf'(ϵ_n)]-i∑_n≠ mf(ϵ_n)-f(ϵ_m)/ϵ_n-ϵ_mj_nm^μ,Pj_mn^ν,P/(ω+ϵ_n-ϵ_m+iη) §.§.§ Hall conductivity Taking the regular part of the Hall conductivity and setting T=0 in Eq.<ref>, we finally get σ_xy=σ_xy(0)=-i∑_n≠ mj_nm^μ,Pj_mn^ν,P[f(ϵ_n)-f(ϵ_m)]/(ϵ_n-ϵ_m)^2=∑_n∈occ.,m∈emp.2[j_nm^μ,Pj_mn^ν,P]/(ϵ_n-ϵ_m)^2 §.§ Skyrmion charge The skyrmion charge is given by χ=1/4π∫ d^2 r m_ r·(_x m_ r×_y m_ r). This formula needs to be discretized in order to do the calculation in the honeycomb lattice. A possible way introduced in Ref.<cit.> is to map m to the spin-coherent state, |z_j⟩=([ e^-iϕ_j/2cos(θ_j/2); e^iϕ_j/2sin(θ_j/2) ]), where j is the site index, such that [ m_x^j=2⟨z|S_x^j|z⟩= r_jcos(ϕ_j)sin(θ_j); m_y^j=2⟨z|S_y^j|z⟩= r_jsin(ϕ_j)sin(θ_j); m_z^j=2⟨z|S_z^j|z⟩= r_jcos(θ_j) ] Then, we can compute χ by summing the berry phases over all the hexagonal plaquettes in the system, as χ=∑__n∑_j∈_n(⟨ψ_j|ψ_j+Δ⟩). We refer to Ref.<cit.> for the complete derivation. § BERRY CURVATURE DISTRIBUTION In the main text, we established that even though a topological gap is already present at filling ν=1 for larger t_2, skyrmions introduced upon doping play a crucial role for the quantization of the Hall conductance, not simply acting as localized spectators in the ν=1 topological background. In fact, once skyrmions are introduced as as in-gap states, the berry curvature re-distributes and acquires a very significant weight in these states. Here we will show additional data supporting these claims. We examine how the berry curvature is distributed as a function of energy. To do so, we write the Hall conductivity in Eq.<ref> as σ_xy=∑_n∈occ.,m∈emp.2[j_nm^μ,Pj_mn^ν,P]/(ϵ_n-ϵ_m)^2=∑_n∈occ.Ω_n=∫ dE Ω(E) , where we defined the energy-dependent Berry curvature Ω(E)=∑_n∈occ.δ(E-ϵ_n) Ω̃(E), with Ω̃(E)=∑_m∈emp.2[j_E,m^μ,Pj_m,E^ν,P]/(E-ϵ_m)^2. Ω̃(E) can be obtained by interpolation of the points obtained at E=ϵ_n. In practice, for the actual calculation, we use a Lorentzian broadening for the Dirac delta function, approximating δ(E-ϵ_n)≈η^2/[(E-ϵ_n)^2+η^2], and using η∈[0.01-0.025]. We also take Ω̃(E)≈Ω̃(E_n) for the n-th term in the sum, given that the Lorentzian is very peaked around E=ϵ_n. To complement these results, we will also compute the inverse participation ratio for the mean-field single-particle eigenstates, defined as IPR_n=∑_j,σ|ψ_j,σ^n|^4/(∑_j,σ|ψ_j,σ^n|^2)^2 for the n-th eigenstate |ψ_n⟩=∑_j,σψ_j,σ^n|j,σ⟩, with j and σ respectively the site and spin indices. The results are shown in Fig.<ref>. In short, they indicate that skyrmions contribute to the Hall conductance even when there is a sizable topological gap at ν=1. 
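Given a converged mean-field spectrum, σ_xy, the broadened Ω(E) and the IPR can be assembled along the lines of the sketch below. The current matrices j_x, j_y and the overall e^2/h normalization are assumed inputs, and the bracket in the σ_xy expression is implemented here as an imaginary part, which is the standard zero-temperature Kubo form.

```python
import numpy as np

def hall_berry_ipr(evals, evecs, jx, jy, n_occ, eta=0.02, n_grid=400):
    """sigma_xy, Lorentzian-broadened Omega(E) and IPR from eigenpairs (sketch).

    evals, evecs: mean-field spectrum (columns of evecs are eigenstates);
    jx, jy: paramagnetic current matrices in the same single-particle basis."""
    Jx = evecs.conj().T @ jx @ evecs                 # matrix elements j_nm
    Jy = evecs.conj().T @ jy @ evecs
    occ = np.arange(n_occ)
    emp = np.arange(n_occ, len(evals))
    dE = evals[occ][:, None] - evals[emp][None, :]
    num = Jx[np.ix_(occ, emp)] * Jy[np.ix_(emp, occ)].T
    omega_n = (2.0 * np.imag(num) / dE**2).sum(axis=1)   # weight per occupied state
    sigma_xy = omega_n.sum()
    E = np.linspace(evals.min(), evals.max(), n_grid)
    lor = eta**2 / ((E[None, :] - evals[occ][:, None])**2 + eta**2)
    omega_E = (omega_n[:, None] * lor).sum(axis=0)        # broadened Omega(E)
    ipr = (np.abs(evecs)**4).sum(axis=0)                  # IPR_n for normalized states
    return sigma_xy, E, omega_E, ipr
```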
More concretely, when in-gap states are created upon doping, the Berry curvature is redistributed and acquires a very significant contribution within these states. In this way, skyrmions always contribute to the Hall conductance at finite density above ν=1 in the QAHC phase, with only the fraction of the contribution changing for different parameters (larger fraction for smaller t_2 and higher ν). The first example is shown in Fig.<ref>(b), where a very small doping is considered with respect to ν=1. Because the doping is small, only a tiny fraction of in-gap states is formed, as indicated by the IDOS plot. Nonetheless, this small fraction of states carries a significant contribution to σ_xy as indicated by the large values of Ω(E) inside the ν=1 topological gap, in Fig.<ref>(b). For higher dopings, the fraction of states that occupy the ν=1 gap gets larger, and these states start providing the most sizable contribution to the Hall conductance, as shown in Fig.<ref>(d), also shown in the main text. This is consistent with the IPR results, where we observe IPR∼ L^-2 at finite density for all states, including those created inside the ν=1 gap, see Fig.<ref>(c). This result implies that the in-gap states are extended and can therefore contribute to the Hall conductivity. For small t_2, the topological gap at ν=1 is very small and it no longer makes sense to interpret skyrmions as arising from in-gap states upon doping. Instead, a sizable Chern gap is spontaneously opened and by far the largest contributions to σ_xy arise from states close to the Fermi energy, as shown in Fig.<ref>(f). This again indicates that skyrmions play a crucial role for the Hall response, which is again consistent with the IPR results in Fig.<ref>(e).
Exploring Facial Biomarkers for Depression through Temporal Analysis of Action Units Aditya Parikh^1, Misha Sadeghi^1,2,3, Björn Eskofier^1,2,3 ^1 University of Erlangen–Nuremberg ^2 Machine Learning and Data Analytics Lab (MaD Lab) ^3 Department Artificial Intelligence in Biomedical Engineering (AIBE) July 22, 2024 ===================================================================================================================================================== § ABSTRACT Depression is characterized by persistent sadness and loss of interest that significantly impair daily functioning, and it is now a widespread mental disorder. Traditional diagnostic methods rely on subjective assessments, necessitating objective approaches for accurate diagnosis. Our study investigates the use of facial action units (AUs) and emotions as biomarkers for depression. We analyzed facial expressions from video data of participants classified with or without depression. Our methodology involved detailed feature extraction, mean intensity comparisons of key AUs, and the application of time series classification models. Furthermore, we employed Principal Component Analysis (PCA) and various clustering algorithms to explore the variability in emotional expression patterns. Results indicate significant differences in the intensities of AUs associated with sadness and happiness between the groups, highlighting the potential of facial analysis in depression assessment. Keywords: depression, action units, emotion, facial analysis, biomarkers, clustering, time-series § INTRODUCTION The World Health Organization (WHO) estimates that depression affects millions of people worldwide, making it one of the most common mental health conditions <cit.>. Depression is a condition that considerably impairs everyday functioning and quality of life. It is characterized by persistent sorrow, lack of interest in activities, and a variety of mental and physical difficulties <cit.>. A timely and accurate diagnosis is essential for managing and treating the condition effectively. Traditional diagnostic techniques frequently depend on subjective self-reported questionnaires and clinical interviews, such as the Beck Depression Inventory (BDI) <cit.>, the Hamilton Depression Rating Scale (HDRS) <cit.>, and the Patient Health Questionnaire (PHQ-8 and PHQ-9) <cit.>. Such evaluations can be influenced by subjective factors. More objective, trustworthy, and quantitative diagnostic tools are therefore needed to supplement these established techniques. One promising approach involves the analysis of facial expressions and emotions, as these can provide objective and non-invasive indicators of mental health status <cit.>. Previous studies have demonstrated that certain facial expressions and feelings are more prevalent in individuals with depression <cit.>. Using facial expression analysis to find biomarkers linked to depression is one potential method. Facial expressions, which can be described in terms of the facial action units (AUs) defined by the Facial Action Coding System (FACS), are widely used markers of emotional states and can reveal important information about a person's mental state <cit.>. The FACS, proposed by Ekman and Friesen <cit.>, provides an exhaustive framework for categorizing facial movements associated with specific emotions. This system allows for the detailed analysis of facial action units, which are the fundamental actions of individual facial muscles <cit.>. 
Additionally, people with depression often exhibit a higher prevalence of sadness and lower frequencies of happiness compared to healthy individuals. However, there is a need for more refined and quantitative analyses to better understand the relationship between AUs, emotions, and depression. This study aims to investigate the potential of facial AUs as biomarkers for depression through a detailed temporal analysis. Temporal analysis refers to examining how AUs change and interact over time. Instead of merely looking at static facial expressions, we analyze the dynamic sequences and patterns of these expressions to understand their temporal characteristics. By examining the dynamic patterns of facial expressions, we seek to identify specific AUs that are indicative of depression and to develop predictive models that can accurately distinguish between individuals with and without depression. Our study contributes a thorough examination of the temporal dynamics of facial expressions in depression, adding to the expanding corpus of research on objective mental health screening instruments. The results of this research could improve the precision of diagnoses and facilitate the creation of automated, non-invasive depression screening instruments, which would eventually improve patient outcomes and accelerate intervention times. § RELATED WORK The relationship between facial expressions, depression, and emotions has been studied broadly, providing significant insights into the possible applications of facial analysis in mental health evaluations. This section reviews key studies that have identified dominant facial AUs and emotions in people with depression, as well as the techniques used to conduct these analyses. Jones et al. (2018) <cit.> conducted an extensive study on the facial expressions of depressed and healthy individuals using the FACS. Their analysis revealed that depressed patients exhibited higher frequencies of AU1 (inner brow raiser), AU4 (brow lowerer), and AU15 (lip corner depressor), which are commonly associated with sadness and distress. The study also found that these patients displayed lower frequencies of AU12 (lip corner puller), which is related to expressions of happiness. Li et al. (2020) <cit.> explored the use of machine learning models to classify depression based on facial expression data. They extracted a wide range of AUs and employed Principal Component Analysis (PCA) to reduce dimensionality before applying support vector machines (SVM) for classification. Their findings showed that AUs associated with sadness (e.g., AU1, AU4) and reduced expressions of happiness (e.g., AU12, AU25 - lips part) were significant predictors of depression. The study demonstrated the feasibility of using automated facial analysis and machine learning for depression detection. Zhang et al. (2022) <cit.> proposed a hybrid model combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to capture both spatial and temporal features of facial expressions in depressed patients. By leveraging long short-term memory (LSTM) networks, their model effectively captured the temporal dynamics of facial expressions, leading to a significant improvement in depression classification performance. According to their research, temporal information must be taken into account to reliably identify depression from facial cues. A more recent study by Wang et al. 
(2023) <cit.> utilized a multi-modal deep learning approach, integrating facial expression analysis with audio and textual data to detect depression. Their multi-modal model showed superior performance compared to traditional models, highlighting the importance of combining different types of behavioral data for comprehensive depression assessment. This study provided further evidence for the potential of multi-modal deep learning frameworks in mental health diagnostics. These studies collectively highlight the importance of specific facial AUs and emotions in identifying depression. They provide a foundation for our research, which further investigates the dominance of certain AUs and emotions in individuals with depression versus those without. § METHODOLOGY §.§ Data Collection Data collection was part of the EMPKINS Subproject D02 - Empatho-Kinaesthetic Sensor Technology for Biofeedback in Depressed Patients <cit.>. The goal of this sub-project is to develop smartphone-based reappraisal training for psychological assessment using facial expressions as biomarkers. The project aims to understand the relationship between cognition, facial expressions, and affect as underlying mechanisms for the development and maintenance of depression, addressing the lack of empirical studies on facial expressions and the need to quantify and modify them. Each participant was provided with a mobile application designed for the study. Participants were asked questions, and their expressed emotions were recorded using their smartphones and a camera setup. This process was conducted in the presence of a psychologist, encompassing multiple phases to capture a diverse range of emotional expressions. §.§ Feature Engineering The recorded video data for this study were processed and exported to CSV files using the OpenDBM tool <cit.>. These CSV files contained detailed frame-wise information about facial expressivity and AUs. Specifically, OpenDBM provided features such as the intensity and presence of individual AUs, as well as composite metrics for overall facial expressivity. The videos were recorded at a frame rate of 30 frames per second (fps). To handle the large volume of data efficiently, a Python script was developed to batch process the videos on a high-performance computing (HPC) cluster. This script automated the extraction of relevant features and ensured consistent processing across all video files. §.§.§ Data Preprocessing The export included two key CSV files for each video: one for emotions and one for AUs. These files contained frame-wise data on facial expressivity and AU metrics, respectively. AU presence is a binary indicator that denotes whether a specific AU is active (1) or not (0) in a given frame. AU intensity, on the other hand, provides a more nuanced measure of emotional expression by indicating the strength of the AU activation on a continuous scale. The intensity values typically range from 0 to 1, where 0 means the AU is not present, and 1 indicates the maximum intensity of the AU. For this study, we focused on AU intensity values as they provide a more subtle measure of emotional expression, capturing the gradations in facial movements that are crucial for identifying patterns associated with depression. Given the structure of our experiments, we used the timing information provided by the mobile application to trim the CSV files. This mobile app recorded the start and end times of the experiment as well as the timestamps for different experimental phases. 
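A minimal version of the per-video trimming step might look as follows. The column names (phase, start_s, end_s, and the per-AU intensity columns in the OpenDBM export) and the phase label are illustrative assumptions and should be adapted to the actual file layout.

```python
import pandas as pd

FPS = 30  # videos were recorded at 30 fps, as stated above

def trim_to_phase(au_csv, phase_log_csv, phase="emotional_induction"):
    """Keep only the frames belonging to one experimental phase (sketch)."""
    aus = pd.read_csv(au_csv)                     # one row per video frame
    log = pd.read_csv(phase_log_csv)              # app log with phase timestamps
    row = log.loc[log["phase"] == phase].iloc[0]
    start_frame = int(row["start_s"] * FPS)
    end_frame = int(row["end_s"] * FPS)
    return aus.iloc[start_frame:end_frame].reset_index(drop=True)
```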
For this study, we specifically analyzed data from the `emotional induction' (EI) phase. This phase involved inducing negative moods and emotions in participants using negative statements. By analyzing participants' expressive behavior under these standardized conditions, we gathered critical data on how their facial expressions responded to negative emotional stimuli. §.§.§ Depression-Indicative Action Units Based on previous studies, several AUs have been identified as significantly associated with depression. These AUs were found to be more prominent or frequently activated in depressed patients compared to non-depressed patients <cit.> <cit.>. According to Table <ref>, the top 5 action units associated with depression include AU1 (Inner Brow Raiser), AU4 (Brow Lowerer), AU15 (Lip Corner Depressor), AU6 (Cheek Raiser), and AU10 (Upper Lip Raiser) <cit.>. These AUs have been identified in previous research as being more frequently activated or prominent in individuals with depression compared to non-depressed individuals. Studies already mentioned in Section <ref> have demonstrated that the AUs listed in Table <ref> are more frequently observed or have higher intensity in individuals with depression. For instance, AU1 (Inner Brow Raiser) and AU4 (Brow Lowerer) are associated with sadness and distress, while AU15 (Lip Corner Depressor) is directly linked to expressions of sadness and despair. Using FACS, we mapped specific combinations of AUs to distinct emotional expressions. This mapping allowed us to quantify the presence and intensity of various emotions based on the AU intensity values recorded in the exported files. Table <ref> outlines the combinations of AUs used to identify each emotion. For each frame, we calculated the intensity of these emotions by summing the intensities of the corresponding AUs. This approach provided a continuous measure of emotional expression over time. §.§.§ Feature Extraction The primary features extracted for analysis included: * AU Intensity Values: The intensity levels of individual AUs recorded for each frame of the video data. * Emotional Expressivity: Calculated as the cumulative intensity of specific combinations of AUs associated with distinct emotional states. These features were pivotal in capturing both subtle and pronounced changes in facial expressions, crucial for distinguishing between depressed and healthy individuals during the emotional induction phase. §.§ Mean Intensity Comparison To compare the differences in emotional expressions between depressed and healthy participants, we conducted a mean intensity comparison across selected emotions. This involved averaging AU intensity values over all frames of the EI phase for each participant. Statistical tests were applied to compare the mean intensity values of AUs associated with emotions such as happiness and sadness, particularly between the different patient groups. Significant differences were identified to highlight distinctive emotional expression patterns. §.§ PCA Clustering Principal Component Analysis (PCA) was employed to reduce the dimensionality of AU intensity data. This technique transformed the original set of correlated AUs into a smaller set of uncorrelated principal components (PCs). The first few PCs, which captured the highest variance in the data, were selected for subsequent clustering analysis. Various clustering methods, including K-means clustering, Agglomerative clustering, and Gaussian Mixture Models (GMMs), were applied to these principal components. 
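A minimal sketch of the PCA-plus-clustering pipeline is given below; the construction of the participant-level feature matrix (for example, per-AU mean intensities over the EI phase) is an assumption on our part.

```python
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

def cluster_au_profiles(X, n_components=5, n_clusters=2, seed=0):
    """X: participants x AU-intensity features; returns PCs and cluster labels."""
    Z = StandardScaler().fit_transform(X)                    # scale AU features
    pcs = PCA(n_components=n_components).fit_transform(Z)    # keep leading PCs
    labels = {
        "kmeans": KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=seed).fit_predict(pcs),
        "agglomerative": AgglomerativeClustering(n_clusters=n_clusters).fit_predict(pcs),
        "gmm": GaussianMixture(n_components=n_clusters,
                               random_state=seed).fit_predict(pcs),
    }
    return pcs, labels
```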
These methods enabled the identification of coherent clusters of facial expression patterns, facilitating an unsupervised understanding of emotional expression variability among participants. §.§ Silhouette Score Analysis The quality of clusters generated by K-means clustering was assessed using the silhouette score <cit.>, a metric that evaluates the coherence and separation of clusters. For each data point, the silhouette score computed how similar it was to its cluster compared to neighboring clusters. For each data point, the silhouette score was calculated as follows: s(i) = b(i) - a(i)/max(a(i), b(i)) where a(i) is the average distance from the data point i to the other points in the same cluster, and b(i) is the average distance from i to points in the nearest different cluster. The overall silhouette score was calculated as the mean of individual scores across all data points. Higher silhouette scores indicated well-defined clusters, providing validation of the clustering approach and insight into the distinctiveness of facial expression patterns associated with depression. §.§ Time Series Classification Performance Given the temporal nature of facial expression data, we explored the use of time series classification to distinguish between depressed and healthy patients. Each video sequence was treated as a time series of AU intensity values. A crucial step before applying any classification model is data pre-processing. This typically involves splitting the data into training (80% of the whole dataset) and testing sets (20% of the whole dataset). Splitting strategies include random splitting and stratified splitting (maintaining class proportions in both sets). Scaling the data to a common range (e.g., 0-1 or standard deviation) to ensure all features contribute equally during model training. We employed several classification models tailored for time series analysis: * ROCKET + LogisticRegression: This method involves applying random convolutional kernels to the time series to extract features efficiently. After feature extraction, a logistic regression classifier is trained on the transformed features <cit.>. * ROCKET + RidgeClassifierCV: Similar to Logistic Regression but uses ridge regression for classification, which can handle multicollinearity among features better <cit.>. * InceptionTime: A deep learning architecture designed specifically for time series classification, leveraging inception modules similar to those used in image classification <cit.>. * LSTM: A type of recurrent neural network (RNN) well-suited for sequence data, capable of learning long-term dependencies in time series <cit.>. * XGBoost Classifier: An ensemble learning method known for its efficiency and effectiveness in various machine learning tasks, including time series classification <cit.>. Performance was evaluated using accuracy to assess the model's ability to classify sequences correctly. § RESULTS AND DISCUSSION §.§ Descriptive Statistics The bar plot in Figure <ref> shows the distribution of participants in our study across three categories: Depressed, Healthy, and Sub-clinical. A smaller group of participants (9 individuals) were categorized as Sub-clinical, indicating that they exhibited some signs of depression but did not meet the full criteria for a depressed classification. To ensure a more comprehensive analysis, these Sub-clinical participants were included in the Depressed category for subsequent analysis. 
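For illustration only, the sketch below shows one way to implement the ROCKET + Logistic Regression pipeline. It assumes an sktime-style Rocket transformer (import path sktime.transformations.panel.rocket) together with scikit-learn rather than the exact implementation used in our experiments, and it assumes that the AU-intensity sequences have been padded or truncated to a common length.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Assumed import path; the study's experiments were run with the tsai framework.
from sktime.transformations.panel.rocket import Rocket

def rocket_logreg_accuracy(X, y, seed=42):
    """ROCKET features + logistic regression on AU-intensity time series.

    X: array of shape (n_videos, n_channels, n_timepoints), one channel per
    AU or emotion intensity; y: binary depressed (1) vs. healthy (0) labels.
    Depending on the sktime version, X may need to be a nested DataFrame.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)

    rocket = Rocket(num_kernels=10_000, random_state=seed)
    F_tr = rocket.fit_transform(X_tr)
    F_te = rocket.transform(X_te)

    scaler = StandardScaler()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(scaler.fit_transform(F_tr), y_tr)
    return accuracy_score(y_te, clf.predict(scaler.transform(F_te)))
```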
This decision was made based on their demonstrated symptoms of depression, which justifies their inclusion in the depressed group to better understand the spectrum of depressive symptoms and their impact on facial expressivity and emotional expression metrics. This reclassification resulted in a combined Depressed group of 57 participants, providing a more robust dataset for our analysis. §.§ Mean Intensity Comparison of the Dominant Action Units In this series of experiments, we compared the mean intensity, between depressed and healthy patients, of the dominant AUs known to be significantly associated with facial expressions related to depression: Inner Eyebrow Raise (AU1), Brow Lowerer (AU4), and Lip Corner Depressor (AU15). These AUs offer a measurable basis for comparing facial expressivity between the two groups, and their comparison provides insight into the expressivity differences between the groups over time during the target phase. The plots <ref>, <ref> and <ref> include a horizontal dashed line representing the overall mean intensity of the respective AU for both groups, as well as shaded areas depicting the standard deviation. These additions provide a clear visual representation of the variability and central tendency of the data. As illustrated in Figures <ref>, <ref>, and <ref>, there are notable differences in the mean intensity of specific AUs between depressed and healthy patients. * Inner Eyebrow Raise (AU1): Figure <ref> shows that the average intensity of AU1 is generally higher in depressed patients compared to healthy patients. This difference is consistent over time, suggesting that depressed patients exhibit more pronounced inner eyebrow raises, often associated with sadness and distress. * Brow Lowerer (AU4): Figure <ref> demonstrates that the average intensity of AU4 is generally higher in depressed patients compared to healthy patients. This difference is noticeable over time, indicating that depressed patients exhibit more pronounced brow lowering, which is associated with expressions of sadness and concern. * Lip Corner Depressor (AU15): As shown in Figure <ref>, the average intensity of AU15 is significantly higher in depressed patients. This action unit is associated with expressions of sadness and despair, aligning with the emotional state of depressed individuals. The analysis of mean intensity comparisons of the key action units (AU1, AU4, AU15) in Table <ref> reveals significant differences in facial expressivity between depressed and healthy patients. Depressed individuals consistently exhibit higher intensities of these action units, corresponding to expressions of sadness, distress, and concern. These findings support the hypothesis that specific facial expressions can serve as biomarkers for depression, providing a potential avenue for more objective and automated psychological assessments. §.§ Mean Intensity Comparison of Dominant Expressions: Sadness and Happiness In this section, we analyze the mean intensity of the dominant expressions, sadness and happiness, between depressed and healthy patients. This comparison highlights the differences in emotional expressivity between the two groups, providing further insight into their emotional states. Figures <ref> and <ref> illustrate the mean intensity of expressions of happiness and sadness, respectively, over time.
* Happiness: As shown in Figure <ref>, the average intensity of happiness is consistently lower in depressed patients compared to healthy patients. This suggests that depressed individuals exhibit fewer and less intense expressions of happiness, indicative of their reduced positive affect. * Sadness: Figure <ref> shows that the average intensity of sadness is markedly higher in depressed patients compared to healthy patients. This consistent difference underscores the prevalence of negative affect and emotional distress in depressed individuals. The analysis of dominant expressions reveals that depressed patients show significantly lower intensities of happiness and higher intensities of sadness. These findings align with the expected emotional profiles of depressed individuals, reinforcing the potential of using facial expressions as objective indicators for assessing depression. §.§ PCA and Clustering Analysis §.§.§ Explained Variance by Principal Components Figure <ref> illustrates the cumulative explained variance with respect to the number of principal components, demonstrating that the first 20 components explain approximately 95% of the variance. §.§.§ Clustering Analysis We first applied K-means clustering with 2 clusters to the PCA-transformed data to analyze the clustering patterns of emotion intensities. The silhouette score was computed to evaluate the clustering performance and is reported in Section <ref>. Figures <ref> and <ref> show the results of K-means clustering applied to the PCA-transformed data, visualizing how data points are grouped into clusters based on the similarity of their emotion intensity features. The clusters are reasonably well separated and most data points are correctly assigned to their clusters, with some overlap near the cluster boundaries. We then performed Agglomerative Clustering on the PCA-transformed data, which builds clusters from the pairwise dissimilarities between the objects to be grouped. Figures <ref> and <ref> show the resulting clusters. They are well separated: data points in one cluster are distinctly different from those in the neighboring cluster, and the cluster corresponding to healthy patients contains significantly fewer points than the other. Finally, we fitted a Gaussian Mixture Model (GMM) to the PCA-transformed data. Figures <ref> and <ref> show the resulting GMM clusters, which are distinguishable from each other, although there is some overlap or ambiguity in the assignment of a few data points. §.§.§ Evaluating Silhouette Scores Table <ref> compares the silhouette scores for happiness and sadness intensities across the different clustering algorithms. Based on this comparison, Agglomerative Clustering emerges as the best-performing method, achieving the highest scores (0.709 and 0.446). It produces a moderately well-defined clustering structure: scores in this range suggest that the clusters are sufficiently distinct and separated from each other, although there may still be some overlap or ambiguity at the boundaries between clusters.
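For reference, silhouette scores such as those in Table <ref> can be computed directly with scikit-learn. The sketch below is illustrative rather than the exact evaluation code, and it reuses the PCA components and cluster labels from the earlier example.

```python
from sklearn.metrics import silhouette_score

def compare_silhouettes(X_pca, labels_by_method):
    """Mean silhouette score for each clustering on the PCA-transformed data.

    `labels_by_method` maps a method name (e.g. 'kmeans', 'agglomerative',
    'gmm') to its cluster labels, as produced by the earlier sketch.
    """
    return {name: silhouette_score(X_pca, labels)
            for name, labels in labels_by_method.items()}

# Example usage (values will differ from the study's results):
# scores = compare_silhouettes(X_pca, labels)
# best_method = max(scores, key=scores.get)
```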
These results underline the effectiveness of Agglomerative Clustering in capturing distinct emotional patterns, particularly for sadness, thereby showcasing its suitability for such clustering tasks. §.§ Time Series Classification Performance This section evaluates the performance of various classification techniques on our dataset, which consists of emotion intensities related to happiness and sadness. We compare the accuracy scores achieved by each method and discuss key aspects of their implementation. We utilized the learning rate finder from the 'tsai' framework <cit.>, a leading deep learning library for time series and sequences, to ascertain the optimal learning rate for training data across all classification techniques employed. A comparison of accuracy scores achieved by each method on the dataset is presented in Table <ref>. Based on the results in Table <ref>, ROCKET (Random Convolutional Kernel Transform) combined with Logistic Regression demonstrates superior performance in our classification task potentially due to: * Effective Feature Extraction: Utilizes random convolutional kernels to capture diverse time-dependent patterns, providing crucial features for accurate classification. * Dimensionality Reduction: Reduces high-dimensional feature space through techniques like random projections and pooling, preserving essential information and enhancing generalization. * Efficient Training: Logistic Regression efficiently learns linear decision boundaries in the transformed feature space, enabling faster training compared to complex models. § CONCLUSION This study demonstrates the efficacy of using facial action units (AUs) and emotions as objective biomarkers for detecting depression. The analysis of facial action units, specifically comparing their mean intensities, supports the hypothesis that these metrics effectively evaluate a patient's condition. The results also highlight sadness and happiness as the predominant emotions observed during patient evaluations. By applying time series classification to facial expression data, significant differences in the intensity of specific AUs between depressed and healthy individuals were identified. Moving forward, future research should concentrate on refining these models and exploring multi-modal approaches that integrate facial expression analysis with other behavioral data sources such as voice and text. This integration aims to enhance diagnostic accuracy and reliability, offering a more comprehensive understanding of an individual's mental health status and enabling personalized and timely interventions. The findings suggest that automated facial analysis can complement traditional diagnostic methods, providing a more objective and non-invasive approach to mental health assessment. Future studies should continue to refine these models and explore multi-modal frameworks to further improve diagnostic efficacy and reliability in clinical settings. IEEEtran 00 girard2014 J. M. Girard, J. F. Cohn, M. A. Mahoor, S. M. Mavadati, and D. P. Rosenwald. "Social Risk and Depression: Evidence from Facial Expressions of Emotion," IEEE Transactions on Affective Computing, vol. 5, no. 4, pp. 324-333, Oct.-Dec. 2014. who_report World Health Organization. (2020). Depression. Retrieved from https://www.who.int/news-room/fact-sheets/detail/depression who_report2 World Health Organization, "Depression," World Health Organization, 2021. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/depression. [Accessed: 25-Jun-2024]. 
dep_rating Hamilton, M. (1960). A rating scale for depression. Journal of Neurology, Neurosurgery, and Psychiatry, 23(1), 56-62. phq K. Kroenke, R. L. Spitzer, and J. B. W. Williams. (2001). "The PHQ-9: Validity of a Brief Depression Severity Measure," Journal of General Internal Medicine, vol. 16, no. 9, pp. 606-613. jones2018 A. Jones, H. Q. Ngo, and R. L. Miller. (2018). "Facial Action Units and Depression: Evidence from a Large Clinical Sample," Journal of Affective Disorders, vol. 245, pp. 65-72, Feb. 2018. cciftcci2013 U. Çiftçi and F. Akçay. (2013). "Recognition of Emotional States in Depressed Patients Using Facial Action Coding System," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2260-2268, Dec. 2013. facs Ekman, P.,& Friesen, W. V. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto: Consulting Psychologists Press. sadness_happiness F. Ekman and E. Rosenberg. (2005). What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), 2nd ed. Oxford University Press. facs_depression Girard, J. M., Cohn, J. F., Mahoor, M. H., Mavadati, S. M., & Rosenwald, D. P. (2013). Social Risk and Depression: Evidence from Manual and Automatic Facial Action Unit Analysis. In Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 1-7. beck Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck Depression Inventory–II. San Antonio, TX: Psychological Corporation. r1 Li, M., Yang, Y., Shi, W., & Wang, B. (2020). Classification of depression based on facial expressions using machine learning. Journal of Affective Disorders, 276, 263-271. doi:10.1016/j.jad.2020.07.012 r2 Zhang, Y., Liu, X., Wang, Z., & Li, H. (2022). Hybrid model combining CNNs and RNNs for depression detection through temporal analysis of facial expressions. IEEE Transactions on Affective Computing, 13(2), 325-335. doi:10.1109/TAFFC.2022.3141123 r3 Wang, J., Chen, L., Zhang, X., & Yang, S. (2023). Multi-modal deep learning for depression detection: Integrating facial expression, voice, and text analysis. Journal of Affective Disorders, 321, 246-255. doi:10.1016/j.jad.2023.01.112 r4 Baltrusaitis, T., Zadeh, A., Lim, Y. C., & Morency, L.-P. (2018). OpenFace 2.0: Facial behavior analysis toolkit. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (pp. 59-66). IEEE. doi:10.1109/FG.2018.00019 r5 Li, X., Huang, S., & Sui, J. (2020). Facial action units and machine learning for depression detection: A review. IEEE Transactions on Affective Computing, 11(3), 432-444. doi:10.1109/TAFFC.2020.2979262 r6 Zhang, W., Lin, H., Liu, Z., & Yu, J. (2022). Hybrid deep learning models for capturing spatiotemporal features of facial expressions in depression detection. Journal of Affective Disorders, 295, 897-904. doi:10.1016/j.jad.2021.09.089 r7 T. Kanade, J. F. Cohn and Y. Tian. (2000). Comprehensive database for facial expression analysis, Proc. of FG00, pages 46-53. r8 Cohn, J. F., & Ekman, P. (2005). Measuring facial action by manual coding, facial EMG, and automatic facial image analysis. In J. A. Harrigan, R. Rosenthal, & K. R. Scherer (Eds.), The New Handbook of Methods in Nonverbal Behavior Research (pp. 9-64). Oxford University Press. tsai Ignacio Oguiza. (2023). tsai - A state-of-the-art deep learning library for time series and sequential data, Github. 
do2 Keinert, M., Schindler-Gmelch, L., Rupp, L.H., Sadeghi, M., Capito, K., Hager, M., Fahimi, F., Richer, R., Egger, B., Eskofier, B.M., Berking, M. (2024). Facing Depression: Evaluating the Efficacy of the EmpkinS-EKSpression Reappraisal Training augmented with Facial Expressions – Protocol of a Randomized Controlled Trial. Manuscript submitted for publication sil_score J. Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. (1987) J Comput Appl Math, 20 (1987), pp. 53-65 rocket Dempster, A., Schmidt, D. F., & Webb, G. I. (2019). ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. Data Mining and Knowledge Discovery, 33(6), 2066-2095. inception Fawaz, H. I., Forestier, G., Weber, J., Idoumghar, L., & Muller, P. A. (2019). InceptionTime: Finding AlexNet for time series classification. arXiv preprint arXiv:1909.04939. lstm Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. xgb Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 785-794.
http://arxiv.org/abs/2407.13378v1
20240718103641
The maxcut of the sunrise with different masses in the continuous Minkoskean dimensional regularisation
[ "Filippo Caleca", "Ettore Remiddi" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2407.13531v1
20240718140436
Evaluating the performance-deviation of itemKNN in RecBole and LensKit
[ "Michael Schmidt", "Jannik Nitschke", "Tim Prinz" ]
cs.LG
[ "cs.LG" ]
michael3.schmidt@student.uni-siegen.de University of Siegen Adolf-Reichwein-Straße 2a Siegen Germany 57076 tim.prinz@student.uni-siegen.de University of Siegen Adolf-Reichwein-Straße 2a Siegen Germany 57076 jannik.nitschke@student.uni-siegen.de University of Siegen Adolf-Reichwein-Straße 2a Siegen Germany 57076 § ABSTRACT This study evaluates the performance variations of item-based k-Nearest Neighbors (ItemKNN) algorithms implemented in the recommender system libraries, RecBole and LensKit. By using four datasets (Anime, Modcloth, ML-100K, and ML-1M), we explore the efficiency, accuracy, and scalability of each library's implementation of ItemKNN. The study involves replicating and reproducing experiments to ensure the reliability of results. We are using key metrics such as normalized discounted cumulative gain (nDCG), precision, and recall to evaluate performance with our main focus on nDCG. Our initial findings indicate that RecBole is more performant than LensKit on two out of three metrics. It achieved a 18% higher nDCG, a 14% higher Precision and a 35% lower Recall. To ensure a fair comparison, we adjusted LensKit's nDCG calculation implementation to match RecBole's approach. After aligning the nDCG calculations implementation, the performance of the two libraries became more comparable. Using implicit feedback, LensKit achieved an nDCG value of 0.2540, whereas RecBole attained a value of 0.2674. Further analysis revealed that the deviations were caused by differences in the implementation of the similarity matrix calculation. Our findings show that RecBole’s implementation outperforms the LensKit algorithm on three out of our four datasets. Following the implementation of a similarity matrix calculation, where only the top K similar items for each item are retained (a method already incorporated in RecBole's ItemKNN), we observed nearly identical nDCG values across all four of our datasets. For example, Lenskit achieved an nDCG value of 0.2586 for the ML-1M dataset with a random seed set to 42. Similarly, RecBole attained the same nDCG value of 0.2586 under identical conditions. Using the original implementation of LensKit's ItemKNN, a higher nDCG value was obtained only on the ModCloth data set. Evaluating the performance-deviation of itemKNN in RecBole and LensKit Jannik Nitschke ====================================================================== § INTRODUCTION In the age of online shopping and streaming, one thing has become indispensable: recommender systems. Good recommendations lead to satisfied users, making recommender systems central to these services. In this context, Top-N recommendations are frequently mentioned. These are realized by recommender system developers who train models to predict user-specific Top-N recommendations. A commonly used algorithm in such models is the nearest-neighbor algorithm. This algorithm relies on the principle of similarity where each item or user is interpreted as a vector based on its attributes. This allows the calculation (often using the cosine distance) of how similar items or users are, with a small difference implying high similarity. With the increasing need for effective recommender system, there are now many code libraries available for implementing such algorithms e.g. RecPack <cit.>, Elliot <cit.>, MyMediaLite <cit.>, LibRec <cit.> or Auto-CaseRec <cit.>. Two additional libraries are RecBole and LensKit. While LensKit has long been established as a solid machine learning library, RecBole is relatively new. 
This makes it interesting to determine which library has more efficient algorithms. For our research we used the LensKit 0.14.4 <cit.> and RecBole 1.2.0 <cit.> versions. This study aims to examine how the ItemKNN algorithm implemented in LensKit differs from that in RecBole. The commonly used nDCG evaluation metric (normalized discounted cumulative gain) will be used to assess the performance of the algorithms. This metric is particularly suitable for evaluating the quality of Top-N recommendations as it takes into account the position of items in the recommendation list, giving higher scores to relevant items appearing at higher positions. Building on this, we will try to identify differences in their implementation, which, on the one hand, could help practitioners choosing the right library for their purpose, and, on the other hand help developers implement an effective ItemKNN-algorithm. We begin by simply running both algorithms on one dataset in order to evaluate whether they differ or not and how far. After that we will try to equalize all factors that could lead to different results (e.g. different nDCG calculation in the two libraries). Relying on this, we will use the implementations to explain the different nDCG-values. § LIBRARY INTRODUCTION We are working with two libraries, LensKit and RecBole. These are two powerful libraries that are widely used in the field of recommender systems, each offering distinct features to enhance recommendation tasks. Starting with LensKit <cit.>, it was developed by GroupLens researchers at the University of Minnesota. The library comes along with a big and modular framework for constructing, evaluating, and analyzing recommender algorithms. It includes a wide range of recommendation techniques, encompassing collaborative filtering, content-based methods, and hybrid approaches. One of them is the implementation of the k-nearest neighbors (KNN), with parameters such as neighborhood size, similarity metrics, and weighting schemes. LensKit originally had a Java version <cit.>, but today it is available in Python. There are also extensions of LensKit, such as "LensKit-Auto" <cit.>, which automates the process of selecting and tuning recommendation algorithms for optimal performance. On the other hand we have RecBole <cit.>, a library developed by a team of researchers at the Renmin University of China. The library is based on Python and PyTorch in order to reproduce and develop recommendation algorithms in a unified, comprehensive, and efficient framework for research purpose. Although it primarily focuses on deep learning-based recommendation models, it also offers a wide variety of traditional algorithms, like KNN methods. In the KNN module, RecBole integrates advanced features such as neural network-based similarity functions and adaptive neighbor selection strategies. Both libraries provide a detailed documentation, tutorials, and examples to show the utilization of their KNN implementations and other functionalities within Python-based recommender systems. § METHOD §.§ Data Sets We utilized the following four data sets for our experiment: Anime, Modcloth, ML-100K and ML-1M. RecBole requires "Atomic Files" <cit.> to implement their ItemKNN algorithm, which are provided in a Google Drive folder <cit.> containing 28 preprocessed datasets ready for direct use by the algorithm. Conversely, LensKit's ItemKNN algorithm can accept any data contained within a pandas DataFrame <cit.>. 
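As an illustration of the difference in input formats, the sketch below reads a RecBole-style '.inter' atomic file into the pandas layout that LensKit expects. The typed header names (user_id:token, item_id:token, rating:float) follow RecBole's ML-100K file and are assumptions that may need adjusting for other data sets.

```python
import pandas as pd

def load_inter_for_lenskit(path):
    """Read a RecBole '.inter' atomic file into a LensKit-style DataFrame.

    Atomic files are tab-separated with typed headers such as 'user_id:token';
    the exact header names below are assumed from the ML-100K file.
    """
    df = pd.read_csv(path, sep="\t")
    df.columns = [c.split(":")[0] for c in df.columns]  # drop ':token' / ':float' suffixes
    return df.rename(columns={"user_id": "user", "item_id": "item"})[
        ["user", "item", "rating"]
    ]

ratings = load_inter_for_lenskit("ml-100k/ml-100k.inter")
```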
All four data sets were sourced from the Google Drive folder provided by RecBole <cit.>. §.§ Algorithms Since this paper is about the performance-deviation of Item-based k-Nearest Neighbors, we used the implementations of ItemKNN from the LensKit and RecBole libraries for our experiments to evaluate their performance differences <cit.>. Our goal was to highlight the difference between the two Recommender System Libraries, rather than optimizing for the nDCG score, we set the hyperparameter k to a standard value of 20 for simplicity. Each algorithm was then configured to generate 10 recommendations for each user in the test data set. §.§ Pre-processing and Data Splitting In terms of pre-processing, we converted the four data sets into implicit feedback. For the ML-100K, ML-1M and Modcloth data sets we converted ratings higher than three to 1 and lower to 0 and only kept the instances which got a value of 1 (table <ref>). This approach is eligible by the following reason: if an item is consistently rated negatively by all users who rated the item, it is unlikely to be recommended to users in the test set. This is because an item deemed irrelevant by the majority of users is not expected to hold significance for any specific user. Another notable consideration is that the corpus of items available for recommendation to a user expands with the inclusion of items that may not align with the user's interests, as evidenced by low ratings from other users. Since Anime has a range of -1 to 10 for their rating values (-1 is used if the user has seen the anime but did not leave a rating for it) <cit.> we converted rated items with a rating higher or equal than 6 to 1 and lower than 6 to 0. The reason we did that is the same as above for ML-100K, ML-1M and Modcloth. In order to make it easier to use the splitted data of RecBole for LensKit, we used a 80/20 holdout split with a user-based splitting. User-based splitting is used to handle data sparsity and provides a good amount of data to create recommendations <cit.>. In chapter <ref> we used a random seed set to 42. In our later experiments, we used three random seeds, 21, 42 and 84, in order to make sure, that the results not only depend on the random seed <cit.>. After applying these configurations in RecBole implementation we saved the output in separate folders based on the data sets, so both algorithms (ItemKNN RecBole & LensKit) get the same data input for their training and the creation of recommendations in the test set. In chapter <ref> of our study, we employed the same splitting technique as utilized in the replication phase, with the addition of two distinct random seeds, specifically set to 21 and 84. This approach was adopted to ensure that our experimental results are representative and reproducible. §.§ Algorithm Training and Evaluation After splitting the data (details see section <ref>), the train set for each dataset was used to train the RecBole and LensKit ItemKNN algorithms to generate the desired models. The evaluation was realized by using the test sets to create predictions for the users within those and used the nDCG@10 to evaluate the predictions. 
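The following sketch summarizes this pre-processing. The helper names and the per-user random holdout are one possible reading of the splitting configuration described above, not code taken verbatim from our repository.

```python
import numpy as np
import pandas as pd

def to_implicit(df, threshold):
    """Keep interactions with rating > threshold and set their rating to 1.

    ML-100K / ML-1M / ModCloth: threshold=3 (ratings above three are relevant);
    Anime: threshold=5, i.e. ratings of 6 and higher are relevant.
    """
    out = df[df["rating"] > threshold].copy()
    out["rating"] = 1
    return out

def user_based_split(df, test_frac=0.2, seed=42):
    """80/20 holdout per user: a random 20% of each user's interactions
    go to the test set (users with a single interaction are a simplification here)."""
    rng = np.random.default_rng(seed)
    test_idx = []
    for _, grp in df.groupby("user"):
        n_test = max(1, int(round(test_frac * len(grp))))
        test_idx.extend(rng.choice(grp.index, size=n_test, replace=False))
    test = df.loc[test_idx]
    train = df.drop(index=test_idx)
    return train, test

train, test = user_based_split(to_implicit(ratings, threshold=3), seed=42)
```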
§.§ Hardware Specifications The experiments were conducted on a personal computer with the following specifications: * Operating System: Microsoft Windows 11 Home, Version 10.0.22631 Build 22631 * CPU: AMD Ryzen 7 5800X, 8 Cores and 16 Threads * RAM: 32GB DDR4 * GPU: Nvidia GeForce RTX 3070, 8 GB GDDR6 These specifications provided a good amount of computational power to handle the data pre-processing, model training, and evaluation tasks efficiently. The generation of recommendations for the ML-100K, ML-1M and Modcloth data sets was completed in less than one minute for both libraries, even after adjustment of the ItemKNN of LensKit. For the Anime data set, RecBole required only one minute to generate recommendations, whereas LensKit took four minutes. § RESULTS §.§ First Steps In this chapter, we tried to get a first impression of the differences in the itemKNN algorithms of RecBole and LensKit. For this purpose, we utilized the well-known ml-100k data set to run both algorithms<cit.>. RecBole provides a quickstart routine that allows training, running, and evaluating a model with just one line of code, which significantly simplifies the process. LensKit, on the other hand, required more lines of code and a more complex setup. It is easy to see, that RecBole outperformed LensKit in two out of three metrics: it achieved a 18% higher nDCG, a 14% higher Precision but a 35% lower Recall. These results indicated that there had to be a difference between the two algorithms (or at least the error metrics calculations), motivating us to investigate further. §.§ Further Investigations §.§.§ Adjustment of LensKit nDCG Calculation After observing the discrepancy in the nDCG values between RecBole and LensKit, we decided to look further into the specific implementation details of each library. Our initial step was to align the nDCG calculations to ensure consistency across both algorithms. The nDCG metric is important in order to evaluate the ranking quality of a recommender system, and even slight differences in the implementation can lead to varying results. RecBole’s nDCG calculation follows the standard formula<cit.>: nDCG@k = DCG@k/IDCG@k where DCG@k = ∑_i=1^k2^rel_i - 1/log_2(i + 1) and IDCG@k = ∑_i=1^|REL_k|2^rel_i - 1/log_2(i + 1) In contrast, the LensKit implementation differs slightly, resulting in different nDCG values compared to RecBole: The difference is that if fewer than k items have been rated, RecBole calculates the IDCG using the last valid value, which would be IDCG@5 for 5 items. In contrast, LensKit still uses IDCG@k. In order to ensure a fair head to head comparison, we modified LensKit's nDCG calculation to match RecBole’s approach.<cit.><cit.> After making these adjustments, we ran the evaluations on all our data sets. Table <ref> shows, that the nDCG value of LensKit for the ML-100K data set changed, but is still 5% lower than RecBole's (Figure <ref>, Random Seed 42). This result indicates, that it was important to isolate the causes of the differences observed in the initial results and provided the baseline for our further analysis of the algorithmic implementations. The resulting adjustments in the LensKit ItemKNN implementation, which are described in the following sections, and the use of the RecBole implementation lead to a foundation of consistent evaluation metrics. All of our implementations can be found in our public GitHub repository <cit.>. The RecBole results can be found in figures <ref>-<ref>. 
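The two conventions can be made explicit in a few lines. The sketch below is illustrative (binary relevance, so 2^rel - 1 reduces to rel) and computes nDCG@k once with the ideal list truncated to the number of relevant test items, as RecBole does, and once with an ideal list of length k, as in LensKit's original implementation.

```python
import numpy as np

def dcg(rels):
    rels = np.asarray(rels, dtype=float)
    return np.sum((2.0 ** rels - 1.0) / np.log2(np.arange(2, rels.size + 2)))

def ndcg_at_k(rec_rels, n_relevant, k, truncate_idcg=True):
    """nDCG@k for one user with binary relevance.

    rec_rels:    relevance (0/1) of the recommended items, in ranked order.
    n_relevant:  number of relevant items the user has in the test set.
    truncate_idcg=True  -> ideal list of min(n_relevant, k) ones (RecBole convention).
    truncate_idcg=False -> ideal list of k ones (LensKit's original convention).
    """
    ideal_len = min(n_relevant, k) if truncate_idcg else k
    idcg = dcg(np.ones(ideal_len))
    return dcg(rec_rels[:k]) / idcg if idcg > 0 else 0.0

# A user with 5 relevant test items, 3 of them hit in the top-10 recommendations:
hits = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(ndcg_at_k(hits, n_relevant=5, k=10, truncate_idcg=True))   # RecBole-style
print(ndcg_at_k(hits, n_relevant=5, k=10, truncate_idcg=False))  # original LensKit-style
```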
§.§.§ Implementation Difference To understand the (still) different nDCG values of the two libraries, we examined their implementations. We went through the entire process of data splitting, training, and evaluation and found the following difference in the similarity calculation: Both libraries use the cosine distance to calculate the similarity of two items. However, while LensKit includes the similarity values to all other items for each item in the similarity matrix <cit.>, RecBole directly limits the entries per item to the predefined relevant number of neighbors (k) <cit.>. This means that a column in the similarity matrix in RecBole contains the k nearest neighbors for an item with their similarity values. All other entries in the column are set to zero, resulting in item-count minus k zero-entries. For example, when we have 1000 items, both algorithms start by constructing a 1000x1000 similarity matrix. Now, both use the cosine distance to calculate the distance from item 1 to all other items contained in the similarity matrix. LensKit now takes all of the calculated values to fill the first column in the matrix. RecBole on the other hand, filters for the topK values in the 1000 similarities and puts just these topK values in the column. All other entries are set to zero. This is done for every column. This difference becomes apparent in the prediction of items for a user. The prediction algorithm in both libraries is essentially the same: based on the user-specific "rated"-corpus (the interactions of the user in the test set), the relevance is calculated for each item known by the algorithm, meaning all items from the train corpus. LensKit does it as follows <cit.>: Now, a score is assigned to each item, and the items with the highest scores can be recommended. RecBole essentially does the same using matrix multiplication. But since there are many zero-entries in RecBole's similarity matrix, the array sims will contain many zeros. Only these matrix-entries, where ratedItem is in the topK similarities of the item, will have a score higher than zero. §.§.§ Possible advantages of the RecBole-Implementation Besides saving memory (the matrix is stored in CSR format), RecBole's implementation may have a major advantage: reducing the number of column entries to k similarity values per item minimizes noise during score calculation in the prediction process. In LensKit, even dissimilar items can be weighted if they are in the topK of all rated items. This cannot happen in RecBole: the item must be among the k nearest neighbors, otherwise it is weighted with zero. §.§.§ Adjustment of LensKit ItemKNN To explain the differences in the nDCG results after equalizing the nDCG implementation and providing the LensKit algorithm with the same data as RecBole, we analyzed the ItemKNN implementations of both algorithms. By decomposing and understanding both algorithms, we gained a more precise perspective. In doing so, we identified a significant difference in the creation of the similarity matrix during the training (more details see <ref>). Recognizing this disparity, we modified LensKit's ItemKNN implementation to generate a similarity matrix akin to that of RecBole. Subsequently, we re-evaluated both algorithms using the previously mentioned four data sets (ML-100K, ML-1M, Anime and Modcloth) and three random seeds (21, 42 and 84). The results presented in figures <ref>-<ref> illustrate the outcomes of these adjustments. 
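To make the difference concrete, the following sketch builds an item-item cosine similarity matrix, keeps only the top-k entries per column as RecBole does (and as our adjusted LensKit version now does), and scores candidate items for a user by summing the retained similarities of the items that user interacted with. The function names and the dense numpy representation are for illustration only; as noted above, RecBole stores the truncated matrix in CSR format.

```python
import numpy as np

def topk_item_similarity(R, k=20):
    """Item-item cosine similarities with only the k largest entries kept per column.

    R: binary user-item interaction matrix of shape (n_users, n_items).
    Keeping only the top-k neighbours per item mirrors RecBole's truncation.
    """
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0
    S = (R.T @ R) / np.outer(norms, norms)   # cosine similarity between items
    np.fill_diagonal(S, 0.0)                 # an item is not its own neighbour

    S_topk = np.zeros_like(S)
    for j in range(S.shape[1]):
        nbrs = np.argsort(S[:, j])[-k:]      # indices of the k most similar items
        S_topk[nbrs, j] = S[nbrs, j]
    return S_topk

def score_items(user_row, S_topk):
    """Relevance score of every item for one user: the sum of the (kept)
    similarities between that item and the items the user interacted with."""
    return user_row @ S_topk

# Top-10 recommendations for user u (already-seen items would still need masking):
# scores = score_items(R[u], topk_item_similarity(R, k=20))
# top10 = np.argsort(scores)[-10:][::-1]
```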
It is clear to see that after the adjustments, both models almost got the same nDCG@10 results for every data set and random seed provided. §.§ Discussion Our findings show that RecBole's implementation outperforms the LensKit algorithm on three out of our four data sets. Only on the ModCloth data set LensKit's original implementation achieves an nDCG of 0.0978 (more details see table <ref>), while the adjusted implementation results in a nDCG of 0.0935 (see figure <ref>). Our first adjustment involved standardizing the nDCG calculation across both implementations. This adjustment was crucial for ensuring a fair comparison and revealed that the initial performance deviations were partly due to differences in the nDCG calculation methods. Once aligned, the results showed more comparable performance metrics between the two libraries. This highlights the importance of standardized evaluation metrics. Relying on the now comparable nDCG calculation, we conducted a deeper investigation and, by deeply studying both implementations, discovered the part causing the different nDCG values: the similarity-matrix calculation. After adjusting the similarity-matrix calculation in LensKit's source code<cit.>, the nDCG values became nearly equal to RecBole's and got better in three out of four cases. Our results indicate, that adopting RecBole's implementation could lead to better overall performance. However, to determine which implementation is superior or more suitable for specific cases, further experiments are necessary. These should include a wider variety of data sets with different characteristics, such as varying levels of sparsity, diversity in item types, and different user interaction patterns. Still, one can see that we were able to boost LensKit's nDCG values in three out of four cases by adjusting the similarity-matrix calculation. We believe RecBole's approach, which limits the entries per column in the similarity matrix to k items, effectively ignores irrelevant items. This could result in more accurate item-scores, leading to more accurate predictions. In conclusion, we demonstrated that different similarity-matrix calculations lead to different predictions, even when using the same prediction algorithm, which can cause significant variations in the nDCG results. While we cannot definitively state that RecBole's implementation is better, an important takeaway is that developers implementing the ItemKNN algorithm should carefully consider the similarity-matrix calculation and its impact on prediction performance. § ACKNOWLEDGEMENTS This work was conducted as part of a Machine Learning Internship 2024 at the University of Siegen, Department of Electrical Engineering and Computer Science, Intelligent Systems Group <cit.>. ACM-Reference-Format
http://arxiv.org/abs/2407.13652v1
20240718162931
Two-dimensional forest fires with boundary ignitions
[ "Jacob van den Berg", "Pierre Nolin" ]
math.PR
[ "math.PR", "math-ph", "math.MP" ]
Two-dimensional forest fires with boundary ignitions Jacob van den Berg[CWI, Amsterdam; E-mail: .], Pierre Nolin[City University of Hong Kong; E-mail: . Partially supported by a GRF grant from the Research Grants Council of the Hong Kong SAR (project CityU11309323).] ========================================================================================================================================================================================================================== § ABSTRACT In the classical Drossel-Schwabl forest fire process, vertices of a lattice become occupied at rate 1, and they are hit by lightning at some tiny rate ζ > 0, which causes entire connected components to burn. In this paper, we study a variant where fires are coming from the boundary of the forest instead. In particular we prove that, for the case without recoveries where the forest is an N × N box in the triangular lattice, the probability that the center of the box gets burnt tends to 0 as N →∞ (but substantially slower than the one-arm probability of critical Bernoulli percolation). And, for the case where the forest is the upper-half plane, we show (still for the version without recoveries) that no infinite occupied cluster emerges. We also discuss analogs of some of these results for the corresponding models with recoveries, and explain how our results and proofs give valuable insight on a process considered earlier by Graf <cit.>, <cit.>. Key words and phrases: near-critical percolation, forest fires, self-organized criticality. AMS MSC 2020: 60K35; 82B43. § INTRODUCTION §.§ Background and motivation Let G = (V,E) be an infinite two-dimensional lattice, such as the square lattice ^2 or the triangular lattice , where V and E contain its vertices and edges, respectively. We consider processes which are indexed by time t ∈ [0,∞), and consist of vertex configurations (ω_v(t))_v ∈ V_D in given subdomains D = (V_D,E_D) of G. Here, V_D ⊆ V, and E_D contains all edges in E whose endpoints both lie in V_D. At each time t ∈ [0,∞), every vertex v ∈ V_D can be in three possible states: vacant (ω_v(t) = 0), occupied (ω_v(t) = 1), and burnt (ω_v(t) = -1). Initially, at time t=0, all vertices in V_D are vacant, and they then become occupied at rate 1. Moreover, we add the following ignition mechanism along the outer boundary V_D of V_D (consisting of all the vertices v ∈ V ∖ V_D which are neighbors of one – or several – vertices in V_D, i.e., such that {v,v'}∈ E for some v' ∈ V_D): each vertex in V_D is hit by lightning at some given rate ζ∈ (0,∞]. When this happens, all occupied neighbors in V_D of this vertex, as well as all the vertices connected (in V_D) to these neighbors by an occupied path, become burnt immediately. Note that we allow the rate ζ to be infinite, which corresponds to connected components of occupied vertices (also called occupied clusters, or simply clusters) burning as soon as they touch the boundary. We can then adopt two natural rules for the future evolution of these burnt vertices, leading to two distinct processes: either they remain burnt (state -1) forever (forest fire without recovery), or they are simply considered the same as vacant, so that they can become occupied again, still at rate 1 (forest fire with recovery) – and then burn again, and so on. Observe that when there are no recoveries, every vertex v remains eventually (at times sufficiently large, depending on v) either occupied or burnt. In this paper, we analyze these processes in two particular situations. 
First, we consider sequences of increasing finite subdomains D_N = (V_N,E_N), N ≥ 1. We then analyze forest fire processes in the upper half-plane, where V_D = V ∩, denoting := {(x,y) ∈^2 : y ≥ 0}. In this case, ignitions thus originate along the real line. This specific setting is directly connected to earlier papers by Graf <cit.>, <cit.>, that were an important inspiration for us. In the absence of ignitions, we would get the classical Bernoulli site percolation process in V_D, at each time t ∈ [0,∞). In this process, each vertex v ∈ V_D is either occupied or vacant, with respective probabilities p and 1-p, independently of the other vertices, where p = p(t) := 1 - e^-t is the percolation parameter. In what follows, we refer to this underlying percolation process as the pure birth process. On the whole (2D) lattice G, Bernoulli percolation is known to display a phase transition at some distinguished value of p, called the percolation threshold, that we denote by p_c^site(G) (∈ (0,1)): for each p < p_c^site(G) (subcritical regime), there exists (almost surely) no infinite cluster, while there is (at least) one for p > p_c^site(G) (supercritical regime). In particular, we can consider the critical time t_c := -log (1-p_c^site(G)), at which the pure birth configuration is exactly critical (and so an infinite cluster starts to emerge). Our main goal in the present work is to understand the macroscopic effect of the boundary ignitions, and compare them to “bulk” ignitions, as in the classical Drossel-Schwabl forest fire process <cit.>. For such processes, existence was established by Dürre <cit.>, in any dimension d ≥ 2, and further properties were derived in recent works <cit.>, <cit.>. Note that strictly speaking, these two latter papers are concerned with the variant without recovery, but there is strong reason to believe that the near-critical behavior, close to time t_c (and in fact, slightly later than that), is essentially the same when recoveries are allowed, as we emphasized in <cit.> (in a different, but related setting). For boundary ignitions, proving existence requires significant work in the upper half-plane, and this was done in <cit.> (under an extra condition on the burning mechanism, see Section <ref> in the present paper). Clearly, in the case of finite domains the process is a finite-state Markov chain and hence existence is standard. §.§ Main results Let us now describe informally our results. From now on, we focus on the triangular lattice . We need to do it for technical reasons, as it is the two-dimensional lattice on which the most precise results are known rigorously for Bernoulli percolation. As before, we denote by V and E its set of vertices and set of edges, respectively. First, we consider the forest fire process in finite “hexagonal” subdomains of : for each N ≥ 1, we let H_N be the domain consisting of all the vertices within a graph distance N from the origin 0 (see Figure <ref> below for an illustration). In order to emphasize the dependence on N, we use the notation _N. Consider the forest fire process without recoveries in H_N, N ≥ 1, with boundary ignitions at a given rate ζ∈ (0, ∞]. We have _N(0 is eventually burnt) N →∞⟶ 0. Moreover, for any δ > 0: for all N sufficiently large, _N(0 is eventually burnt) ≥ N^-5/52 - δ. In particular, note that the left-hand inequality of (<ref>) implies that for the event that 0 gets burnt at some time, its probability vanishes substantially more slowly than _p_c(0 ∂ H_N). 
Here, 0 ∂ H_N denotes the existence of an occupied connection to the boundary of the domain (in other words, the event that 0 belongs to an occupied cluster that reaches ∂ H_N). Indeed, _p_c(0 ∂ H_N) is known to decay as N^-5/48 + o(1) from <cit.> (see the explanation between (<ref>) and (<ref>) below). By following closely the successive steps in our proof of (<ref>), and estimating each of the (high probability) events along the way (one of them needs to fail), one can also obtain an explicit power-law upper bound on the probability that 0 burns. However, so far we could only get such a bound with an exponent smaller than the exponent 5/52 in the lower bound. For our results, we do not need ignitions to come from the whole boundary: it is enough that only a positive fraction (bounded away from 0 as N →∞) of the vertices get ignited. For example, exactly the same result would hold in the case of ignitions along the bottom side of H_N only. In other words, we could state them in H_N ∩, with ignitions coming from vertices with y-coordinate -√(3)/2 (N+1), and N →∞. Our reasonings do not really require that ζ is constant either, and we could allow it to be a function ζ_N of N. In this case, we need to require that ζ_N does not tend 0 too quickly, and the relevant condition for our proofs is ζ_N ≫ N^-2/3 as N →∞. Our reasonings are based on the existence of paths entering deep into the domain by staying within a cone, with an opening angle strictly smaller than π. These paths are used to control precisely the spread of ignitions from the boundary. We develop these geometric considerations in Section <ref>, showing that “many” such long-distance connections exist. Roughly speaking, we show that slightly after time t_c, some portions of the boundary remain available, from where ignitions can spread. These boundary arcs, when ignited, allow one large cluster to burn (shown in red on Figure <ref>). This cluster is macroscopic, and it burns at a time which is slightly supercritical (more precisely, the characteristic length for the pure birth process tends to ∞ as N →∞, but it is still ≪ N). We will use it to deduce that with high probability, this giant cluster does not reach the origin, but rather, it surrounds it and remains at a mesoscopic distance from it. In particular, the existence of this mesoscopic island containing the origin, unaffected by ignitions, means that the probability for the origin to burn tends to 0. As for the process with recoveries, tools originating from a paper by Kiss, Manolescu and Sidoravicius <cit.> can then be harnessed, as in our paper <cit.>, to derive an analogous result in that situation. More precisely, we can control the process up to some time strictly later than the critical time t_c. Consider the forest fire process with recoveries in H_N, N ≥ 1, with boundary ignitions at rate ζ∈ (0, ∞]. For some universal t̂ > t_c (that is, independent of N and ζ), we have _N(0 gets burnt before time t̂) N →∞⟶ 0. Furthermore, the same quantitative lower bound holds in this case: for any δ > 0, for all N sufficiently large, _N(0 gets burnt before time t̂) ≥ N^-5/52 - δ. We now turn to Graf's forest fire process in the upper half-plane. We consider, rather, a version of this process without recoveries, for which we prove the following result. Consider the forest fire process without recoveries in V ∩, with boundary ignitions at rate ζ∈ (0, ∞]. Then almost surely, no infinite occupied cluster emerges. 
More precisely, with probability 1, we have: for every vertex v ∈ V ∩, there exists L < ∞ such that at all times t ∈ [0,∞), the occupied cluster of v contains at most L vertices. We comment further on this theorem, and some of its intuitive implications, in Section <ref>. As we explain briefly in Section <ref>, we believe that the process with recoveries displays an analogous property, namely that no infinite cluster emerges before some universal time t̂ > t_c. However, additional technical difficulties arise in that setting, which are related, informally speaking, to the combined effect of recoveries near the real line. We plan to handle them in a future work. §.§ Discussion §.§.§ Connections with Graf's model In Graf's model, the forest corresponds to the upper half-plane in the triangular lattice, and there is, besides the ignitions from the boundary, an additional source of burnings (at least, theoretically): namely, as soon as an infinite occupied cluster emerges, it is burnt instantaneously. This two-types-of-ignition model may look somewhat artificial, but it has a very natural motivation, as explained in the Introduction of <cit.>. In the two papers <cit.> and <cit.>, Graf proved the existence, and several interesting properties, of that process. In particular he showed that a.s., each vertical column contains only finitely many sites that burnt before or at t_c, and infinitely many sites that burn after t_c. However, the actual role of the second source of burnings in his model remained quite mysterious. More precisely, the question whether an infinite occupied cluster emerges at all remained open (it is listed as the second open problem in Section 2 of <cit.>). For the version without recoveries, our Theorem <ref> implies a negative answer to that question. Our last paragraph of Section <ref> (as well as the first paragraph in Section <ref> below) expresses our belief that there is a time t̂ > t_c such that for Graf's process with recoveries, restricted to the time interval [0,t̂], the answer is negative too. Theorems <ref> and <ref>, besides being of independent interest, also have the following (more indirect) connection with Graf's work. As said in the Introduction of <cit.>, Graf's original motivation came from the study of possible subsequential limits of the forest fire process on an N-by-N box in the triangular lattice, with ignitions from the boundary of the box, as N →∞ (and with fixed center for the boxes). Our Theorems <ref> and <ref> (or, rather, a simple modification of their proofs) imply that, for the case without recoveries, and for the case with recoveries but where time is restricted to the time interval [0, t̂], any subsequential limit process is simply the pure birth process. §.§.§ Process with impurities We want to conclude this introduction by mentioning another possible approach, which was developed in <cit.>. The main result of that thesis, Theorem 4.11, is in some sense a weak version of our Theorem <ref>. It does not imply that in the case we study (i.e. with fixed ignition rates) the probability that the center of the box burns tends to 0, but that in the case where, roughly speaking, the ignition rates are of the form N^-, the probability that the center does not burn is at least C N^-, where C = C() > 0. We believe that a substantial refinement of a so-called arm-stability result involved in his method might lead to an alternative proof of our result (<ref>), but that proof would be much more elaborate than that in the present paper. 
The method in <cit.> uses a percolation process with random impurities, attached independently along the boundary of the domain. This idea was directly inspired by the analogous process introduced in <cit.> (for the original Drossel-Schwabl process, i.e. “bulk” ignitions). In that paper, independent impurities are deleted from all over the lattice, in order to analyze the effect of fires taking place early (before the pure birth process enters an associated near-critical window). More precisely, the percolation process provides a stochastic lower bound, which can be used to ensure that with sufficient probability, some prescribed occupied (unburnt) connections exist in the forest fire process. Let D = (V_D, E_D) be a subdomain of G. In the process with impurities used in <cit.> (illustrated in Figure <ref>), we let each boundary vertex v ∈ V_D be, independently of the others, the center of a “hole” (impurity) with a random radius R_v satisfying, for some constant c_0, (R_v ≥ k) ≤ c_0 k^-13/12 + (k ≥ 1). Note that the exponent 13/12 already appeared in <cit.> (as the sum 1/ν + ρ in the notations of that paper, see Condition 3.1 there), and it will show up again multiple times in the present paper, for roughly the same reasons, notably in Lemma <ref>. §.§ Organization of the paper In Section <ref>, we first set notations for Bernoulli percolation, focusing on the two-dimensional setting, and we also recall classical properties of that process, which are needed later for our proofs. Next, in Section <ref>, we develop results about the existence of cone sites, which play a key role in our subsequent reasonings. We then study, in Section <ref>, the forest fire process with boundary ignitions in a finite domain, establishing Theorems <ref> and <ref>. Finally, in Section <ref>, we analyze the process in the upper half-plane, proving in particular Theorem <ref>. § NEAR-CRITICAL PERCOLATION IN 2D In this section, we start by introducing notations for Bernoulli percolation in dimension two, in Section <ref>. We then recall classical properties of two-dimensional percolation in Section <ref>, especially at and near its critical point, before stating results which are more specific to percolation in cones, in Section <ref>. §.§ Setting and notations Let . be the usual Euclidean distance in the plane ^2. In this paper, we work on the triangular lattice = (V, E), with set of vertices V = V_ := { x + y e^i π / 3∈ : x, y ∈} (we identify ≃^2), and set of edges E = E_ := {{v, v'} : v, v' ∈ V with v - v' = 1 }. Two vertices v, v' ∈ V such that v - v' = 1, i.e. {v, v'}∈ E, are said to be neighbors, and we use the notation v ∼ v'. A path is a finite sequence of vertices v_0, v_1, …, v_k, for some k ≥ 1 called the length of the path, such that v_i ∼ v_i+1 for all i = 0, …, k-1. Usually, we assume the vertices in this sequence to be distinct, i.e. the path does not use the same vertex twice. A circuit is such a path whose vertices are all distinct, except that v_0 = v_k. For n ≥ 0, we let B_n := { v ∈ V : v < n} be the (open) ball of radius n, and for 0 ≤ n_1 < n_2, let A_n_1,n_2 := { v ∈ V : n_1 < v < n_2} be the annulus with radii n_1 and n_2 (both centered at the origin 0). For v ∈ V, we also denote B_n(v) := v + B_n and A_n_1,n_2(v) := v + A_n_1,n_2. Finally, for a subset A ⊆ V, we denote A^c := V ∖ A, we define its inner (vertex) boundary by A := {v ∈ A : v ∼ v' for some v' ∈ A^c}, and its outer boundary by A := (A^c) (= {v ∈ A^c : v ∼ v' for some v' ∈ A}). 
Bernoulli (site) percolation on the triangular lattice is obtained by tossing a coin for each vertex v ∈ V: for some given value p ∈ [0,1] (the percolation parameter), v is declared to be occupied or vacant, with respective probabilities p and 1-p, independently of the other vertices. This produces a percolation configuration ω = (ω_v)_v ∈ V belonging to Ω := {0,1}^V, which is equipped with the product measure _p. In such an ω, we say that two vertices v, v' ∈ V are connected if we can find an occupied path from v to v'. i.e. a path v_0, v_1, …, v_k, for some k ≥ 0, with v_0 = v, v_k = v', and consisting only of occupied vertices. This is indicated by v v'. Note that in particular, v and v' themselves have to be occupied. Two subsets of vertices A, A' ⊆ V are said to be connected, denoted by A A', if we can find two vertices v ∈ A and v' ∈ A' which are connected. Occupied vertices can be grouped into connected components (i.e. classes for the equivalence relation ), that we call occupied clusters. We denote by (v) := {v' ∈ V : v' v} the occupied cluster containing v (so that (v) = ∅ when v is vacant). Moreover, we can also define vacant paths and vacant clusters in an analogous way, simply replacing occupied vertices by vacant ones. Bernoulli percolation displays a phase transition as the parameter p increases, in the following sense. We define θ(p) := _p(0 ∞), where for each v ∈ V, v ∞ denotes the event that (v) is infinite. We also use the notation for the union of all infinite occupied clusters. Then the the special value p_c = p_c^site(), called the percolation threshold, satisfies the following. For all p ≤ p_c, there exists no infinite occupied cluster, almost surely (a.s.). On the other hand, for all p > p_c, there exists a.s. an infinite cluster, which is furthermore unique. Moreover, it is now classical that p_c^site() = 1/2 (the proof is an adaptation of <cit.>, see Section 3.4 in <cit.>). For more detailed background on Bernoulli percolation, the reader can consult the classical references <cit.>, <cit.>. We can consider rectangles on the lattice, which are subsets of the form R = ([x_1,x_2] × [y_1,y_2]) ∩ V, for x_1 < x_2 and y_1 < y_2. In particular, we sometimes use the notation R_n_1,n_2 := ( [0,n_1] × [0,n_2] ) ∩ V (⊆) for n_1, n_2 ≥ 0. Assuming that R is non-empty, we can define in a natural way its left and right sides _L R and _R R (resp.), which are subsets of R. By definition, a horizontal (occupied) crossing of R is an occupied path which stays in R, and connects _L R and _R R. The event that such a path exists is denoted by (R), and we can define in an analogous way vertical crossings (connecting the top and bottom sides), using the notation (R) in this case. Furthermore, considering vacant paths instead, we obtain vacant horizontal and vertical crossings, and we denote their existence by (resp.) ^*(R) and ^*(R). Similarly, for an annulus A = A_n_1,n_2(v) of the form above, we consider the event that there exists an occupied (resp. vacant) circuit around A, i.e. a circuit which remains in A and surrounds _n_1(v) once, and we denote it by (A) (resp. ^*(A)). §.§ Classical properties Our reasonings use primarily techniques and results developed to describe critical and near-critical Bernoulli percolation in dimension two, i.e. for values of the parameter p which are sufficiently close to p_c. 
Indeed, as we explain in the subsequent sections, the relevant macroscopic behavior of our forest fire processes takes place at times when the density of trees in the forest is approximately critical, an instance of the phenomenon of self-organized criticality. More quantitatively, we measure the distance to criticality via the characteristic length L, which is defined as L(p) := min{ n ≥ 1 : ℙ_p ( ( R_2n,n ) ) ≤ 0.001 } ( p < 1/2), and L(p) := L(1-p) ( p > 1/2). The classical Russo-Seymour-Welsh bounds at criticality (p = 1/2) imply directly that L(p) →∞ as p →1/2, so it is natural to set L(1/2) := ∞. We make use of the following properties, which are now considered standard (see e.g. <cit.> and <cit.>). * Russo-Seymour-Welsh bounds near criticality. For all K ≥ 1, there exists δ(K) > 0 such that: for all p ∈ (0,1), 1 ≤ n ≤ K L(p), ℙ_p ( ( R_4n,n ) ) ≥δ and ℙ_p ( ^*( R_4n,n ) ) ≥δ. * Exponential decay property with respect to L(p). There exist universal constants c_1, c_2 > 0 such that: for all p < 1/2, n ≥ 1, ℙ_p ( ( R_4n,n ) ) ≤ c_1 e^- c_2 n/L(p) (see Lemma 39 in <cit.>). By duality, we have the following corresponding statement for p > 1/2: for all n ≥ 1, ℙ_p ( ( R_4n,n ) ) ≥ 1 - c_1 e^- c_2 n/L(p). * Asymptotic estimates for θ and L. Denote by π_1(n) (resp. π_4(n)), n ≥ 0, the probability at p = 1/2 that there exists an occupied path (resp. four paths, which are occupied, vacant, occupied, and vacant, in counterclockwise order), connecting 0 to distance n (resp. each connecting a neighbor of 0 to distance n). We have the following equivalences near p_c: θ(p) ≍π_1(L(p)) as p ↘1/2, (see Theorem 2 in <cit.>, or (7.25) in <cit.>), and | p - p_c | L(p)^2 π_4 ( L(p) ) ≍ 1 as p →1/2. (see (4.5) in <cit.>, or Proposition 34 in <cit.>). Combined with the values of the one-arm exponent α_1 = 5/48 <cit.> and the (polychromatic) four-arm exponent α_4 = 5/4 <cit.> (so that π_j(n) = n^-α_j + o(1) as n →∞, for j=1,4), these yield the critical exponents for θ and L: θ(p) = ( p-1/2)^5/36 + o(1) as p ↘1/2, and L(p) = | p-1/2|^-4/3 + o(1) as p →1/2. The characteristic length L can be used to define near-critical parameter scales, for each n ≥ 1. These scales are formally defined as p_λ(n) = 1/2 + λ/(n^2 π_4(n)), λ∈ (-∞,∞), where π_4(.) denotes the four-arm probability at criticality (see (<ref>) above), and they satisfy: for each fixed λ≠ 0, L(p_λ(n)) ≍ n as n →∞. In particular, using that π_4(n) = n^-5/4 + o(1), we get that for each λ≠ 0, p_λ(n) = 1/2 + λ n^-3/4 + o(1). We will often use these scales implicitly: for example, if we consider the parameter p_c + n^-β, then L(p_c + n^-β) ≫ n or ≪ n, depending on whether β > 3/4 or β < 3/4. §.§ Percolation in cones Let ℍ := ℝ × [0,∞) be the closed upper half-plane. We write V_ℍ := V ∩ ℍ, and we let E_ℍ be the corresponding set of edges. For α∈ (0,π/2] and v ∈ V_ℍ with y-coordinate equal to 0, we denote by ^(α)(v) the intersection of V_ℍ with the closed cone with apex v and opening angle 2 α (see Figure <ref> below). Note that ^(α)(v) contains v by definition, and this yields a connected subgraph of (V_ℍ,E_ℍ) for α≥π/6. For n ≥ 0, we consider the truncated cone _n^(α)(v) := ^(α)(v) ∩ B_n(v). We denote by _1(_n^(α)(v)) the event that there exists an occupied path connecting v to ∂ B_n(v) within the cone. In particular for α = π/2, we get the usual one-arm event in the upper half-plane (and we need α≥π/6 to ensure _1(_n^(α)(v)) ≠∅). We write π_1^(α)(n) := ℙ_1/2( _1(_n^(α)(0)) ) (n ≥ 0). 
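The near-critical scales above can be illustrated numerically with the small sketch below, in which all o(1) corrections to the exponents are simply dropped; the helper names are our own and the output is purely heuristic.

```python
def pi4(n):                       # critical four-arm probability, o(1) in the exponent dropped
    return n ** (-5.0 / 4.0)

def p_lambda(n, lam):             # near-critical scale p_lambda(n) = 1/2 + lam / (n^2 * pi4(n))
    return 0.5 + lam / (n ** 2 * pi4(n))

def char_length(p):               # characteristic length L(p) ~ |p - 1/2|^(-4/3)
    return abs(p - 0.5) ** (-4.0 / 3.0)

n = 10 ** 6
print(p_lambda(n, 1.0) - 0.5)     # n^(-3/4): the width of the near-critical window at scale n
for beta in (0.6, 0.9):
    print(beta, char_length(0.5 + n ** (-beta)) / n)   # << 1 for beta < 3/4, >> 1 for beta > 3/4
```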
We also consider the sections _n_1,n_2^(α)(v) := ^(α)(v) ∩ A_n_1,n_2(v), for 0 ≤ n_1 < n_2, and the event _1(_n_1,n_2^(α)(v)) that there exists an occupied path “crossing” such a section. Formally, such a path is required to remain in _n_1,n_2^(α)(v), and to connect two vertices v_1 and v_2, each having a neighbor outside the section, lying at a distance from v which is, respectively, ≤ n_1 and ≥ n_2. We denote by π_1^(α)(n_1,n_2):= _1/2( _1(_n_1,n_2^(α)(0)) ) (0 ≤ n_1 < n_2) the one-arm probabilities at criticality in the cone ^(α)(0). Finally, as usual, we use the lighter notations ^(α) = ^(α)(0), _n^(α) = _n^(α)(0) and _n_1,n_2^(α) = _n_1,n_2^(α)(0). We will need the following results, which are more specific to cones. As we explain, they can all be easily obtained from standard results and reasonings. * Near-critical stability for one-arm events in a cone. For all α∈ (0,π/2], K ≥ 1, there exist C_1, C_2 > 0 (depending only on α and K) such that: for all p ∈ (0,1), 0 ≤ n_1 < n_2 ≤ K L(p), c_1 π_1^(α)(n_1, n_2) ≤_p ( _1(_n_1,n_2^(α)) ) ≤ c_2 π_1^(α)(n_1, n_2). This result can be obtained through similar reasonings as Theorem 27 in <cit.> (which regards arm events in the full plane). * Estimates on π_1^(α). For all α∈ (0,π/2], there exist c_1, c_2 > 0 (depending only on α) such that: for all 0 ≤ n_1 < n_2, π_1^(α)( n_1/2, n_2 ), π_1^(α)(n_1, 2 n_2) ≥ c_1 π_1^(α)(n_1, n_2), and for all 0 ≤ n_1 < n_2 < n_3, c_1 π_1^(α)(n_1, n_3) ≤π_1^(α)(n_1, n_2) π_1^(α)(n_2, n_3) ≤ c_2 π_1^(α)(n_1, n_3). The first property shows the “extendability” of the arm in a cone, and the second property is usually called quasi-multiplicativity. Even though substantial work is required to establish them in the case of general polychromatic arm events in the plane (see, resp., Propositions 16 and 17 in <cit.>), in this particular situation of a single occupied arm, they are almost direct consequences of the Russo-Seymour-Welsh bounds at criticality, used in combination with the Harris inequality. * One-arm exponent in a cone. For all α∈ (0,π/2], let α_1^(α) = π/2 α·1/3. Then, for all > 0, there exist c_i(α,) > 0, i=1,2, such that: for all 1 ≤ n_1 < n_2, c_1 ( n_1/n_2)^α_1^(α) + ≤π_1^(α)(n_1,n_2) ≤ c_1 ( n_1/n_2)^α_1^(α) - . This exponent can be obtained from the conformal invariance property of critical percolation in the scaling limit <cit.>. The following a-priori estimate will be useful. For all α > π/6, we have: for all n ≥ 1, ∑_k=1^n ( π_1^(α)(k,n) )^-1≤ C n, where C depends only on α. Let α > π/6, and consider the corresponding exponent α_1^(α), which we know is < 1 (from (<ref>)). Hence, we can let > 0 so that α_1^(α) + 2 = 1. From (<ref>), we have π_1^(α)(k,n) ≥ c_1 ( k/n)^α_1^(α) + = c_1 ( k/n)^1 - for some c_1(α) > 0. We deduce immediately ∑_k=1^n ( π_1^(α)(k,n) )^-1≤ (c_1)^-1 n^1 - ·∑_k=1^n k^-(1 - )≤ (c_1)^-1 n^1 - · c'_1 n^ = C n, which gives (<ref>), and completes the proof. § CONE SITES In this section, we develop the main geometric idea used in our proofs: we move away from the boundary by considering paths contained in cones, included in and with an opening angle just slightly below π. In the following, we consider the pure birth process in (with rate 1), i.e. Bernoulli percolation with parameter p(t) = 1 - e^-t. Moreover, we fix some ζ > 0, and we “trigger” the vertices along V_, with y-coordinate - √(3)/2, according to a Poisson process with rate ζ. 
Each time such a vertex is triggered, we consider the two vertices in V_ above it: for each of them, if it is occupied, we put a mark on all occupied vertices connected to it at this time (and otherwise, if it is vacant, nothing happens). For each t ≥ 0, we then denote by _t the set of all vertices carrying a mark at time t, which was thus transmitted by a vertex triggered at an earlier time. We start by the following observation. For v ∈ V_ and n ≥ 1, we denote F_n(v) := {one of the two neighbors of v on V_ gets triggered at some time t ∈ [0,t_c], at which _1(_n^(π/2)(v)) occurs}. In other words, F_n(v) is the event that in the pure birth process, there exists an occupied path (in V_), which is ignited by a neighbor of v before time t_c and reaches distance n from v. We have the following estimate. Let ζ∈ (0,∞), and > 0. There exists c() such that for all v ∈ V_ and n ≥ 1, ( F_n(v) ) ≤ c ζ n^-13/12 + . Establishing this upper bound (<ref>) requires a little bit of care, and we do it below. However, note that it is easy to explain heuristically the exponent 13/12 that appears, as follows. For a connection to distance n to have a reasonable probability to form, the parameter p(t) ≤ p_c needs to be such that L(p(t)) ≳ n. In other words, the ignition has to take place in the corresponding near-critical window, which has length n^-3/4 + o(1) (using the critical exponent 4/3 for L, see (<ref>)). In this case, there exists an occupied path to distance n with a probability n^-1/3 + o(1), from (<ref>) and (<ref>). Hence, we obtain ≈ζ n^-3/4 + o(1)· n^-1/3 + o(1) = ζ n^-13/12 + o(1). Let us now prove this estimate formally. For notational convenience, we write π_1^+ = π_1^(π/2) in this proof (only). Note that (<ref>) provides the value of the corresponding exponent: α_1^+ = α_1^(π/2) = 1/3. The lemma follows from a summation argument similar to those in the proofs of Lemma 6.8 in <cit.> and Lemma 3.3 in <cit.>. Let δ > 0 be such that L(t_c - δ) = n. Without loss of generality, we can assume that n is large enough so that t_c - δ > 3/4 t_c, and introduce the integer J ≥ 0 satisfying t_c - 2^J+1δ≤3/4 t_c < t_c - 2^J δ. Observe that in particular, we have necessarily t_c - 2^J+1δ > 1/2 t_c. We then bound the desired probability from above by summing according to the subinterval [t_c - 2^j+1δ, t_c - 2^j δ), 0 ≤ j ≤ J, containing the time t at which one of the neighbors of v gets triggered (note that there might be several such times, but this is not an issue). If t ∈ [t_c-δ, t_c], we have L(t) ≥ n, and we can simply use 2 ζδπ_1^+(n) as an upper bound. If t ∈ (0, t_c - 2^J+1δ), we use the upper bound 2 ζ t_c c_1 e^-c_2 n/L(3/4 t_c) (coming from (<ref>)). Hence, we obtain from the union bound: ( F_n(v) ) ≤ 2 ζ( δπ_1^+(n) + ∑_j=0^J 2^j δπ_1^+ ( L(t_c - 2^j+1δ) ) 4 c_1 e^-c_2 n/L(t_c - 2^j δ) + t_c c_1 e^-c_2 n/L(3/4 t_c)). We will use that the following bounds hold for L(t_c - 2^j δ): there exist constants c'_1, c'_2 > 0 (depending only on ) such that for all j ∈{0, …, J}, c'_1 (2^j)^-4/3 - n ≤ L(t_c - 2^j δ) ≤ c'_2 (2^j)^-4/3 + n. Indeed, this follows by writing L(t_c - 2^j δ) = L(t_c - 2^j δ)/L(t_c - δ) L(t_c - δ) = L(t_c - 2^j δ)/L(t_c - δ) n, and estimating the ratio above by using (<ref>), combined with classical bounds on the four-arm (full-plane) probability π_4(n_1,n_2) (see e.g. Lemma 2.5 in <cit.>, and also the quasi-multiplicativity property for π_4, which is (2.6) in that paper). Consider some j ∈{0,…,J}. 
On the one hand, π_1^+ ( L(t_c - 2^j+1δ) ) ≤ c_3 π_1^+ ( L(t_c - δ) ) π_1^+ ( L(t_c - 2^j+1δ), L(t_c - δ) )^-1≤ c'_3 (2^j+1)^4/3·1/3 + π_1^+ (n), using (<ref>), and then (<ref>) (combined with the value from (<ref>), as well as (<ref>)). Here, c_3 is universal, and c'_3 depends only on . On the other hand, e^-c_2 n/L(t_c - 2^j δ)≤ e^-c”_2 (2^j)^4/3 - . (from (<ref>)), for some c”_2() > 0. By combining (<ref>), (<ref>) and (<ref>), we obtain ( F_n(v) ) ≤ 2 ζ( δπ_1^+(n) + δπ_1^+(n) ∑_j ≥ 0 c'_3 (2^j+1)^4/9 + e^-c”_2 (2^j)^4/3 - + t_c c_1 e^-c_2 n/L(3/4 t_c)). This allows us to conclude, using finally that for some constants c_4, c'_4, c_5 depending only on , n = L(t_c - δ) ≤ c_4 δ^-4/3/(1-/2), so δ≤ c'_4 n^-3/4 + /2, and π_1^+(n) ≤ c_5 n^- 1/3 + /2 (from (<ref>) and (<ref>)): ( F_n(v) ) ≤ c ζ n^-3/4 + /2 n^- 1/3 + /2 = c ζ n^-13/12 + . We have thus established (<ref>), which completes the proof. We are now ready to introduce cone sites. We adopt the definition below, illustrated on Figure <ref>. Let ζ∈ (0,∞). Let α > π/6, and n > 0. A vertex v ∈ V_ is called an (α,n)-cone site if the two conditions below are satisfied: * _t_c∩^(α)(v) = ∅, * and _1(_n^(α)(v)) occurs at t_c, i.e. there exists (in the pure birth process) an occupied arm connecting v to distance n within the cone ^(α)(v). Note that in particular, v has to be occupied. Later in the paper, we use this notion twice, or rather small variations of it, in two different situations. First, in Section <ref>, cone sites are used to study the spread of ignitions in large finite domains. And then in Section <ref>, we explain how cone sites can be used to gain further insight on Graf's forest fire process in the upper half-plane. We will make use of the following estimate. Let ζ∈ (0,∞). For any α≥π/6, we have the following estimates. * There exists c_1(α,ζ) > 0 such that: for all v ∈ V_ and n ≥ 0, c_1 π_1^(α)(n) ≤( v is an (α,n)-cone site) ≤π_1^(α)(n). * There exists c_2(α) > 0 such that: for all v, v' ∈ V_ and n ≥ 0, ( v and v' are both (α,n)-cone sites) ≤ c_2 π_1^(α)(n) ·π_1^(α)(v-v'∧ n). We first consider (i). The second inequality in (<ref>) is clear from the definition, so we only need to prove the first one. For this purpose, consider, for some given d_0 ≥ 1 that we explain how to choose later, the additional event that all vertices v' ∈ V_ with v-v'≤ d_0 are 2t_c-vacant. This has a fixed cost (1 - p(2t_c))^2 d_0 = e^-2 d_0 t_c, and under this condition, marks inside ^(α)(v) can only be created by triggered vertices at a distance ≥ d_0 + 1 from v. Now, let v' ∈ V_ with k:= v-v'≥ d_0+1. We observe that a path from v' to ^(α)(v) implies in particular the existence of an occupied path from v' to distance d(k) = k cosα, so Lemma <ref> implies the following. For any > 0, the probability that one of the two neighbors of v' gets triggered at some time t ∈ [0,t_c] at which there exists an occupied path from v' to the cone is at most ζ c d^-13/12 +, for some c(). Choosing = 1/24, we deduce from the union bound that the probability that such a v' exists is at most 2 ζ∑_k=d_0^∞ c (k cosα)^- 25/24≤ c' (d_0)^- 1/24. In particular, it can be made ≤1/2 by choosing d_0 sufficiently large (in terms of α and ζ only), which we do. We finally obtain that (v is an (α,n)-cone site) ≥1/2 e^-2 d_0 t_cπ_1^(α)(n), which completes the proof of (i). Let us now turn to (ii), and let d = v-v'. If d ≥n/2, the event that v and v' are both cone sites implies in particular the existence of occupied arms from v and v', both to distance n/4. 
We deduce that (v and v' are both (α,n)-cone sites) ≤( π_1^(α)( n/4) )^2, and we can conclude in this case by using the extendability property (<ref>). Next, we consider d ≤n/2. In this case, we make appear an arm in _d/2^(α)(v) and an arm in _d/2^(α)(v'), and also an arm in _2d,n^(α)(v'). We obtain (v and v' are both (α,n)-cone sites) ≤( π_1^(α)( d/2) )^2 π_1^(α)( 2d, n ) ≤ c_2 π_1^(α)(n) π_1^(α)(d), where we used (<ref>) and (<ref>) in the second inequality. This completes the proof. Next, we use a second-moment argument, based on Lemma <ref>, to check that there are typically plenty of cone sites. Let α∈ (π/6, π/2), and ζ∈ (0,∞). For n ≥ 0 and δ > 0, let V_n = V^(α),δ_n := | { v ∈ ([-n,n] ×{0})∩ V_ : v is an (α,δ n)-cone site}|. Then for each > 0, there exists c_1(α,ζ), c_2(α,ζ,), c_3(α,ζ) > 0 such that for all n large enough, ( V_n ≥ c_1 n π_1^(α)(δ n) ) ≥ 1 - c_2 n · (δ n)^-13/12 + - c_3 δ. On the one hand, (<ref>) implies directly that for some c = c(α,ζ) > 0, [ V_n ] ≥ c n π_1^(α)(δ n). In order to use a second-moment reasoning, we replace each event I_n(v) := {v is an (α,δ n)-cone site} by a “localized” version Ĩ_n(v) (depending on α and δ), obtained by considering only paths and ignitions within the box R_n(v) = R_n^(α),δ(v) := ( v + [- (tanα) δ n, (tanα) δ n ] ×[- √(3)/2, δ n ] ) ∩ V_. More precisely, in the definition that v is an (α,δ n)-cone site, we replace the first condition (that is, (i) in Definition <ref>) by (i)' _t_c(v) ∩^(α)(v) = ∅, where _t_c(v) is the set of vertices that can be reached by a local path before time t_c: i.e., a path which is marked by the triggering of a vertex v' with v-v'≤ (tanα) δ n, and which stays completely inside R_n(v) (see Figure <ref>). We denote by Ṽ_n the corresponding number of vertices v. Clearly, I_n(v) ⊆Ĩ_n(v), so Ṽ_n ≥ V_n. On the other hand, we observe that the event Ĩ_n(v) ∖ I_n(v) implies the existence of an ignited path (marked by the triggering of a vertex) with radius ≥κδ n, for some suitable κ(α) > 0. Indeed, assume that Ĩ_n(v) occurs, but not I_n(v): this means that a path γ, ignited by some vertex v', reaches the cone ^(α)(v) (before time t_c), but there exists no such local path. We can then distinguish two cases. If v-v'≥ (tanα) δ n/2, then γ has radius at least (sinα) δ n /2. On the other hand, if v-v'≤ (tanα) δ n/2, then γ has to exit the box R_n(v) (otherwise it would be a local path), so it connects v' to a distance at least (tanα) δ n/2. This establishes the claim. This leads us to consider the event _n := {∃ v' ∈([-2n,2n] ×{- √(3)/2})∩ V_ triggering an ignited path with radius ≥κδ n}, and we let '_n be the event that some ignition from outside [-2n,2n] ×{- √(3)/2} reaches one of the cones ^(α)(v), v ∈ ([-n,n] ×{0})∩ V_. Hence, V_n = Ṽ_n on the event (_n ∪'_n)^c. Using Lemma <ref>, we can easily obtain that for some c_2(α,ζ,), (V_n ≠Ṽ_n) ≤(_n ∪'_n) ≤ c_2 n · (δ n)^-13/12 + . Now, there remains to estimate Var(Ṽ_n). Noticing that for any two vertices v, v' ∈ ([-n,n] ×{- √(3)/2})∩ V_, the events Ĩ_n(v) and Ĩ_n(v') are independent as soon as v - v'≥ 2 (tanα) δ n, we obtain Var(Ṽ_n) ≤∑_v ∈ [-n,n] ×{0}∑_v' : v-v'≤ 2 (tanα) δ n( Ĩ_n(v) ∩Ĩ_n(v') ) ≤∑_v ∈ [-n,n] ×{0} 2 ∑_d=0^2 (tanα) δ n c^(1)π_1^(α)(δ n) ·π_1^(α)(d ∧δ n) = ∑_v ∈ [-n,n] ×{0} 2 ( ∑_d=0^δ n c^(1)π_1^(α)(δ n) ·π_1^(α)(d) + ∑_d=δ n+1^2 (tanα) δ n c^(1)π_1^(α)(δ n) ·π_1^(α)(δ n) ) ≤∑_v ∈ [-n,n] ×{0} 2 ( ∑_d=0^δ n c^(2)π_1^(α)(δ n)^2 ·( π_1^(α)(d,δ n) )^-1 + c^(3)δ n π_1^(α)(δ n)^2 ). 
where we used (<ref>) for the second inequality, and the quasi-multiplicativity property (<ref>) on the last line. In this series of inequalities, each of the constants c^(i) only depends on α. From Lemma <ref>, we have ∑_d=0^δ n( π_1^(α)(d,δ n) )^-1≤ c^(4)δ n (here we use the hypothesis α > π/6), so Var(Ṽ_n) ≤ 2n π_1^(α)(δ n)^2 · (c^(5)δ n) = c^(6)δ (n π_1^(α)(δ n))^2. It thus follows from Chebyshev's inequality, combined with (<ref>) and Ṽ_n ≥ V_n, that ( Ṽ_n ≥c/2 n π_1^(α)(δ n) ) ≥ 1 - 4 Var(Ṽ_n)/(c n π_1^(α)(δ n))^2≥ 1 - c_3 δ, for some c_3(α,ζ). If we let c_1 = c/2, we obtain, thanks also to (<ref>), ( V_n ≥ c_1 n π_1^(α)(δ n) ) ≥ 1 - c_2 n · (δ n)^-13/12 + - c_3 δ, which completes the proof. § FOREST FIRE WITH BOUNDARY IGNITIONS We now investigate the behavior in finite domains of the forest fire model with boundary ignitions. For convenience, we focus on hexagonal domains fitting on the triangular lattice. First, we set notations in Section <ref>, defining precisely the processes under consideration. We then prove our main results, Theorems <ref> and <ref>, for the processes without and with recoveries, respectively, in Sections <ref> and <ref>. §.§ Setting and notations We now define precisely the forest fire model that we study, and set notations. Our process is defined on vertices in the hexagon H_N centered on 0 and with side length N, which is depicted on Figure <ref>. Formally, H_N is the set of vertices at a graph distance at most N from 0, i.e. which can be reached from 0 through a path of length at most N. Vertices along H_N (which consists of the vertices at distance exactly N+1 from 0) get ignited at some given rate ζ∈ (0,∞], and trigger ignitions within the hexagon: when such a vertex gets triggered, all the occupied vertices connected to it burn immediately (similarly to the half-plane setting in Section <ref>). We denote by _N the probability measure governing this process. In our proofs, we mostly use the the lower side of H_N, and the ignitions produced by the vertices on the row just below, i.e. with y-coordinate - (N+1) √(3)/2. For this purpose, we will naturally shift the definition of cone sites vertically, by - N √(3)/2. Once again, for each t ∈ [0, ∞), we denote by _t the set of vertices which are carrying a mark at time t (i.e. which were reached by a marked path before time t, produced by the triggering of some vertex during [0,t]). The following observation will turn out to be handy. Let ζ∈ (0,∞). For all δ∈ (0, 1/13), we have: for all β > 3/4 (1- δ), ( _t_c + N^- β∩ H_N-N^1-δ≠∅) N →∞⟶ 0. In the following, we use this lemma through the fact that (<ref>) holds with some δ > 0 and some β slightly smaller than 3/4, so that the time t_c + N^-β is “much later” than t_c + N^-3/4. We can for example set δ_0 = 1/14, and β_0 = 7/10. We will see that the requirement δ < 1/13 appears naturally in the proof. Let δ > 0. A similar summation as that in the proof of Lemma <ref> gives the following analog of (<ref>). For each > 0, the probability, for a marked path originating from v ∈ H_N, to reach distance N^1 - δ before time t_c + N^-β is at most (N^1 - δ)^-1/3 + (N^1 - δ)^-3/4 +, for all N large enough. Here we also use the condition β > 3/4 (1- δ), which ensures that L(t_c + N^- β) = N^4/3β + o(1)≫ N^1-δ. From the union bound, we conclude that ( _t_c + N^- β∩ H_N-N^1-δ≠∅) ≤ c N · (N^1-δ)^-13/12 + 2 . Now, if we assume that δ < 1/13, and we then choose > 0 small enough, the right-hand side of (<ref>) tends to 0, which completes the proof. 
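The admissibility of the example values δ_0 = 1/14 and β_0 = 7/10, and the origin of the two requirements in the lemma, can be checked with exact rational arithmetic. This is only a small verification sketch of the exponent bookkeeping, not part of the original argument.

```python
from fractions import Fraction

delta0, beta0 = Fraction(1, 14), Fraction(7, 10)
print(delta0 < Fraction(1, 13))                   # True: delta_0 satisfies delta < 1/13
print(beta0 > Fraction(3, 4) * (1 - delta0))      # True: 7/10 > 39/56, i.e. beta > 3/4 (1 - delta)
# beta > 3/4 (1 - delta) makes L(t_c + N^-beta) = N^(4 beta / 3) much larger than N^(1 - delta):
print(Fraction(4, 3) * beta0, ">", 1 - delta0)    # 14/15 > 13/14
# delta < 1/13 makes the union bound  c * N * (N^(1-delta))^(-13/12)  vanish as N grows:
print(Fraction(13, 12) * (1 - delta0), "> 1")     # 169/168 > 1
```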
§.§ Proof of Theorem <ref> We are now in a position to prove our main result for the forest fire process in H_N. We consider first ζ∈ (0,∞). As we explain toward the end of the proof, the case ζ = ∞ can be handled in the same way, with only a small adaptation in the definition of cone sites being required. For any given > 0, let us prove that for all N large enough, _N(0 gets ignited) ≤. Consider some arbitrary δ∈ (0, 1/13) and β∈ (3/4 (1- δ), 3/4). Let t := t_c + N^- β. We know from Lemma <ref> that for all N ≥ N_1, ( _ t∩ H_N-N^1-δ = ∅) ≥ 1 - /5. Let β' ∈ (β, 3/4). Using the critical exponent for L (see (<ref>)), we have L( t) = N^4/3β + o(1)≪ N^4/3β' as N →∞, so we deduce from (<ref>) that (in the pure birth process): for all N ≥ N_2, ( at time t, (H_N-N^1-δ∖ H_N - N^4 β'/3) occurs) ≥ 1- /5. From now on, we assume that the two events appearing in (<ref>) and (<ref>) occur, and we denote by any occupied circuit as in (<ref>). Now, let α = π/2 / (1 + δ), so that α_1^(α) = 1/3 (1+δ) (see (<ref>)). We will consider cone sites (as always, at time t_c) along the lower side _B H_N of H_N, to distance η N, for some well-chosen η>0. For this, we use a small adaptation of Lemma <ref> (with e.g. the particular value = 1/24). To be safe, we lower bound the number of cone sites by restricting to vertices along the middle “third” of _B H_N, that we denote by _[B] H_N. Even though we are not exactly in the upper half-plane any more, it is easy to see that the same conclusions hold for the number of cone vertices, by truncating some of the summations in the proofs if necessary. Hence, we get that for some η > 0 small enough: for all N ≥ N_3, ( there exist at least N^2/3 - δ(α,η N)-cone sites along _[B] H_N) ≥ 1- /5. We now assume that the event in (<ref>) holds, in addition to those in (<ref>) and (<ref>), and we investigate what happens after time t, conditionally on these events. We make the following observations. * At time t, none of these cone sites has been triggered yet. Indeed, this would otherwise allow _t_c + N^- β to enter into H_N-N^1-δ, contradicting the event in (<ref>). Moreover, at this time t, all the cone sites are connected to the circuit . * Let t := t_c + N^-2/3 + 2 δ > t, we have for all N ≥ N_4, ( one of the cone sites gets triggered before time t) ≥ 1- /5. When this happens, this causes the circuit to burn, thanks to the previous observation. * Finally, L( t) = N^8/9 - 8/3δ + o(1) as N →∞, so for all N ≥ N_5, ( at time t, ^*(0) occurs) ≥ 1- π_1(N^8/9 - 3 δ) ≥ 1- /5, where we denote by ^*(0) the existence, in the pure birth process, of a vacant circuit which surrounds 0, and furthermore separates 0 from H_N/2. We can thus conclude, by combining (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), that for all N ≥ N_0 := max_1 ≤ i ≤ 5 N_i, _N(0 does not get ignited) ≥ 1- , as desired. This establishes (<ref>). In order to obtain (<ref>), i.e. explicit lower and upper bounds on the probability that 0 gets ignited, it suffices to be slightly more careful in the successive steps of the above proof. Instead of considering cone sites to distance η N, for some fixed η > 0, we can consider (π/2 / (1 + δ), N^12/13 + δ)-cone sites. Then with high probability, there are at least N^9/13 - δ of them (along _[B] H_N). By considering the corresponding time t := t_c + N^-9/13 + 2 δ, at which L( t) ≪ N^12/13 + δ (using (<ref>)), this provides the lower bound (1 - ) π_1(L( t)), which is ≥ N^-5/52 - δ for all N large enough (from the one-arm exponent α_1 mentioned below (<ref>)). 
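The exponent bookkeeping behind the explicit lower bound can be verified in the same way (with the small δ-corrections dropped): the 5/52 comes from combining the one-arm exponent 5/48 with the scale N^12/13 of the cone sites. Again, this is only an illustrative arithmetic check.

```python
from fractions import Fraction

alpha1, nu = Fraction(5, 48), Fraction(4, 3)   # one-arm exponent, and L(p) ~ |p - p_c|^(-4/3)
reach = Fraction(12, 13)                       # cone sites reach distance ~ N^(12/13)
wait = Fraction(9, 13)                         # ~N^(9/13) cone sites => one is triggered by t_c + N^(-9/13)
print(nu * wait)            # 12/13: L(t_c + N^(-9/13)) ~ N^(12/13), comparable to the reach above
print(alpha1 * nu * wait)   # 5/52: pi_1(L) ~ N^(-5/52), the claimed ignition probability lower bound
```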
Finally, we discuss briefly the case ζ = ∞ of an infinite rate of ignition, i.e. clusters burning immediately when they touch the boundary. In this case, we need to change a little bit the definition of cone sites (note that they could not even exist otherwise): at time t_c, we replace the event _1(_n^(α)(v)) by the event {v is vacant, and there exists an occupied path from one of the two neighbors of v above it to distance n} (and remaining within ^(α)(v)). Then it is easy to see that Lemmas <ref> and <ref> hold with this modified notion of cone sites, as well as Lemma <ref>. This completes the proof. §.§ Role of recoveries: proof of Theorem <ref> We now explain how tools developed in <cit.> can be used to obtain the analog of Theorem <ref> for the process with recoveries, i.e. Theorem <ref>. In that recent work, which was strongly inspired by the earlier paper <cit.> by Kiss, Manolescu and Sidoravicius, and generalizes substantially the results in that paper, we study the geometric impact of recoveries in forest fires. For our purpose, one specific result derived in <cit.> turns out to be sufficient. Most of the proof of Theorem <ref> can be repeated, and we only mention the extra input that is needed (using the same notations as in that proof). We know from (<ref>) that at the time t > t_c, which satisfies L( t) ≪ N^4/3β' as N →∞, there exists an occupied circuit in H_N-N^1-δ∖ H_N - N^4 β'/3 with high probability (w.h.p.). It is easy to see that, in addition, there is a t-occupied circuit in A_N/4,N/2, that we denote by ', as well as a t-occupied path from the former circuit to B_N/4. So in particular, these two mentioned circuits and ' are connected, so that the ignition which triggers the burning of (see the sentence below (<ref>)) will also cause ' to burn, “isolating” 0 in an island contained in B_N/2. Now, for the process with recoveries, we can apply Theorem 5.7 of <cit.> to the burning of ' (together with some obvious monotonicity, since t > t_c). This allows us to conclude that w.h.p., that burning will keep 0 isolated until, at least, some (universal) time t̂ > t_c (which is the time t_c + 𝔡 provided by <cit.>). This completes the proof. § CONSEQUENCES FOR GRAF'S FOREST FIRE PROCESS Finally, we give a proof for Theorem <ref>. Recall that this result says, roughly speaking, that no infinite occupied clusters (and hence, “no infinite fires”) emerge in the forest fire process in the upper half-plane, with ignitions along the real line. As mentioned earlier, we focus on the process without recoveries in the present paper. We believe that the same result holds true for the original process, with recoveries, introduced by Graf, again up to some universal time t̂ > t_c. However, proving it seems to require some non-trivial adaptation of the results from <cit.> and <cit.> (contrary to the proof of Theorem <ref> above), so we decided not to include it in the present paper. Strictly speaking, our proof below gives Theorem <ref> under the assumption that the process exists. However, careful inspection of the proof shows that it also implies the analog of Theorem <ref> for the case of Graf's process without recoveries (which has the extra rule that infinite clusters burn immediately, and for which existence follows from the same arguments as in <cit.>). But then it automatically follows that our process also exists (and satisfies Theorem <ref>). Choose any vertex v ∈ V_, and let k := v+1, so that v ∈ B_k. Consider an arbitrary > 0. We follow the construction depicted on Figure <ref>. 
First, it follows from the classical RSW bounds at criticality that we can fix K(k) large enough so that (in the pure birth process) ( at time t_c, ^*(A_k,K∩ V_) occurs) ≥ 1- /3 (with obvious notation for the event ^* in the semi-annulus A^+_k,K := A_k,K∩ V_). We then let t be larger than t_c, but sufficiently close to it, so that | A^+_k,K| ·( 1 - e^-( t - t_c)) ≤/3. Obviously, this requirement implies that ( no vertex of A^+_k,K switches from vacant to occupied during (t_c, t) ) ≥ 1 - /3. From now on, we assume, without loss of generality, that ζ∈ (0,∞) (the case ζ = ∞ can be handled through the same small change as we did toward the end of the proof of Theorem <ref>). Using the existence of cone sites, similarly to the conclusion of Lemma <ref> (with, e.g., the specific opening angle α = π/3) and standard RSW bounds, it is easy to see that the following holds. For some c_1(ζ) > 0, we have (for the process with marks): for all j ≥ 1, ( at time t_c, there exists an occupied (unmarked) semi-circuit in A^+_2^j K, 2^j+1K) ≥ c_1. Moreover, the events appearing inside this probability can be “made independent” for odd values of j, using again localized versions of cone sites (as for the proof of Lemma <ref>, see in particular Figure <ref>). Hence, there exists J large enough so that ( during (t_c, t), a t_c-occupied semi-circuit burns in A^+_2^j K, 2^j+1K, for some j ∈{1,…, J}) ≥ 1- /3. We observe that if the three events appearing in (<ref>), (<ref>) and (<ref>) occur simultaneously, which has a probability at least 1 -, then the following happens. * Before time t, the vertex v is disconnected from infinity by a vacant semi-circuit, provided by ^*(A^+_k,K). * From time t on (and most likely, much earlier), v is disconnected from infinity by a burnt semi-circuit. Hence, the occupied cluster of v remains bounded over the whole time interval [0,∞), with probability at least 1 -. Since can be taken arbitrarily small, we finally get that ( the occupied cluster of v remains bounded) = 1. This completes the proof, using the countability of V_. plain
http://arxiv.org/abs/2407.13670v1
20240718164526
An Earth Encounter As the Cause of Chaotic Dynamics in Binary Asteroid (35107) 1991VH
[ "Alex J Meyer", "Oscar Fuentes-Muñoz", "Ioannis Gkolias", "Kleomenis Tsiganis", "Petr Brave", "Shantanu Baidu", "Daniel J Scheeres" ]
astro-ph.EP
[ "astro-ph.EP" ]
Alex J Meyer alex.meyer@colorado.edu 0000-0001-8437-1076]Alex J. Meyer Smead Department of Aerospace Engineering Sciences, University of Colorado Boulder, 3775 Discover Dr, Boulder, CO 80303, USA 0000-0001-5875-1083]Oscar Fuentes-Muñoz Smead Department of Aerospace Engineering Sciences, University of Colorado Boulder, 3775 Discover Dr, Boulder, CO 80303, USA Aristotle University of Thessaloniki, Thessaloniki, Greece Aristotle University of Thessaloniki, Thessaloniki, Greece 0000-0001-8434-9776]Petr Pravec Astronomical Institute of the Academy of Sciences of the Czech Republic, Fričova 298 Ondřejov, CZ 25165 Czech Republic Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA 0000-0003-0558-3842]Daniel J. Scheeres Smead Department of Aerospace Engineering Sciences, University of Colorado Boulder, 3775 Discover Dr, Boulder, CO 80303, USA § ABSTRACT Among binary asteroids, (35107) 1991VH stands out as unique given the likely chaotic rotation within its secondary component. The source of this excited dynamical state is unknown. In this work we demonstrate that a past close encounter with Earth could have provided the necessary perturbation to allow the natural internal dynamics, characterized by spin-orbit coupling, to evolve the system into its current dynamical state. In this hypothesis, the secondary of 1991VH was previously in a classical 1:1 spin-orbit resonance with an orbit period likely between 28-35 hours before being perturbed by an Earth encounter within ∼80,000 km. We find if the energy dissipation within the secondary is relatively inefficient, this excited dynamical state could persist to today and produce the observed ground-based measurements. Coupled with the orbital history of 1991VH, we can then place a constraint on the tidal dissipation parameters of the secondary. § INTRODUCTION Near-Earth binary asteroids are typically comprised of an oblate primary rotating rapidly, with a tidally locked, elongated secondary in its orbit. These systems are generally thought to be formed by mass shedding from a rapidly rotating primary driven by YORP spin-up <cit.>, and tidal dissipation tends to relatively quickly synchronize the secondary's rotation with the orbit period <cit.>. In the synchronous state, these dissipative forces can eventually be balanced by radiative forces in the form of the binary YORP (BYORP) effect <cit.>. This BYORP-tide singly-synchronous equilibrium is a steady state found among binary asteroids <cit.>. The binary asteroid (35107) 1991VH is an exception to this singly-synchronous paradigm, as its secondary is not only in an asynchronous rotation state <cit.>, it is most likely experiencing non-principal axis (NPA) rotation and undergoing an exchange of angular momentum between the secondary and the orbit <cit.>. As a result, the orbit period and secondary rotation period have evolved over observations. This intriguing dynamical state has motivated substantial photometry and radar observations of the system. And with 1991VH's accessibility from Earth, it was originally selected as a target by NASA's SIMPLEx Janus mission <cit.>. As a result, there is a wealth of data and analysis surrounding this system. One question that remains is the source of the excited dynamical state of 1991VH, which we attempt to answer in this work. Binary asteroids are characterized by strong spin-orbit coupling <cit.>. These internal dynamics can easily excite the secondary into a complex rotational state given an external perturbation <cit.>. 
Fluctuations of the spin and orbit period of a system experiencing only principal axis rotation are generally not large enough to explain the variations seen in the observations <cit.>. Thus, it seems the most natural explanation for the current dynamical state of 1991VH is NPA rotation within the secondary caused by an external perturbation and driven by the internal dynamics. This dynamical state has been postulated by <cit.> and <cit.>, and in this work we provide a more quantitative analysis of this possibility. The lifecycle of NEAs is chaotic and driven by orbital resonances with the giant planets and close encounters with the terrestrial planets <cit.>. Therefore a reasonable explanation for the excited dynamics of 1991VH is a recent close planetary encounter with the Earth. <cit.> showed that close Earth encounters with 1991VH are common, and it has also been shown that such close encounters can excite both the internal orbital and spin dynamics of binary asteroids <cit.>. Thus, it appears all the pieces are present to provide a satisfactory explanation of the current dynamical state of 1991VH. We test the hypothesis that 1991VH was previously in a singly-synchronous state, and possibly in the BYORP-tide equilibrium. This state is long-lived, but can be broken by resonances with the primary's spin <cit.>, destabilizing impacts on the secondary <cit.>, or planetary encounters <cit.>. These destabilizing events can lead to non-principal axis (NPA) rotation in the secondary, which has the effect of weakening or eliminating the BYORP effect <cit.>. For the current asynchronous secondary rotation in 1991VH, the BYORP effect is not present. In this work we specifically focus on the effects of a single close encounter with the Earth as a possible explanation for the unstable dynamics in 1991VH. This dynamical evolution has not been shown in the literature, and in the current work we are filling this gap. To achieve this, we perform a suite of Monte Carlo simulations, where a singly-synchronous 1991VH-like system is perturbed by the Earth in a variety of encounter geometries. We then compare the orbit period, secondary rotation period, semimajor axis, and eccentricity of the post-flyby system to check for similarities with the current observations. We provide a background discussion of the observations and dynamics of 1991VH in Section <ref>. Then, we detail the results of the Monte Carlo flyby simulations in Section <ref>. In Section <ref> we perform a more detailed analysis of a single flyby case that produces a system very similar to 1991VH. We then give a detailed analysis of how long-lived the current dynamical state of 1991VH is expected to be in Section <ref>, and summarize and discuss implication in Section <ref>. § 1991VH OBSERVATIONS AND DYNAMICS Observations of 1991VH date back to 1997 and have continued on through 2020, both through photometric lightcurves and radar range-Doppler imaging. This extensive set of observations has revealed the secondary in 1991VH to be in an asynchronous, likely chaotic spin state <cit.>. Due to spin-orbit coupling, this spin state has resulted in an evolving orbit <cit.>. The observations of the secondary rotation period and orbit period are listed in Table <ref>. These measurements range between around 11.5 to 14.2 hours for the secondary spin period. Several estimates of the orbit eccentricity provide e=0.05±0.02 <cit.>. Estimates of the orbit semimajor axis range from a=3.24 km <cit.> to a=3.26 km <cit.>. 
Both photometric and radar observations give a volume-equivalent diameter of the primary of 1.2 km <cit.>, although more recent observations suggest a smaller diameter of 0.9 km <cit.>. Radar observations reveal the primary to be top-shaped. Photometric observations suggest the secondary has a diameter of 450 m with an elongation of a_2/b_2=1.33±0.1 <cit.>, while radar observations give a more elongated value of a_2/b_2=1.5 <cit.>. Here, a_2>b_2>c_2 are the semiaxes of an ellipsoid fit to the shape of the secondary. §.§ Binary Asteroid Population The excited state of 1991VH is relatively unique among binary asteroids. Using data from <cit.> which has been kept up-to-date in an online repository <cit.>, we see only about 15% of binary asteroids have asynchronous secondaries. This statistic is calculated using only systems which have a solution for both the orbit period and secondary rotation period. We demonstrate the uniqueness of 1991VH in Fig. <ref>, in which we plot the secondary spin to orbit period ratio for the population of binary asteroids, as a function of the separation distance between the two asteroids. Among close binary asteroids, which we define here as having a/D_A<5, 1991VH has the smallest secondary to orbit period ratio (about 0.4). Here a is the binary asteroid semimajor axis and D_A is the diameter of the primary. 1991VH has a separation of about 3 primary diameters. Any of the systems which have a larger discrepancy between these periods are wide systems with orbit periods on the order of 100 hours. §.§ Dynamical Model To capture the spin-orbit coupling in binary asteroids one generally uses the full two-body problem (F2BP). This gives a 9 degree-of-freedom system, as the attitude of both bodies must be computed in addition to the relative position. We track the attitude of these bodies using the coordinate transformation matrices 𝐂_𝐀 and 𝐂_𝐁, which map vectors written in either the primary (A) or the secondary (B) body-fixed frame, respectively, to the inertial frame. In this environment, the mutual potential is U = G∫_A∫_BdM_AdM_B/|r⃗-𝐂_𝐀r⃗_A+𝐂_𝐁r⃗_B| where r⃗ is the relative position of body B with respect to body A, r⃗_A is the position of an infinitesimal mass element dM_A of A with respect to the body's barycenter, and similarly for r⃗_B for dM_B in B. The equations of motion for the F2BP are derived in <cit.>, and are implemented in the numerical tool General Use Binary Asteroid Simulator (gubas) <cit.>. gubas integrates the F2BP using a user-specified expansion order for the mutual gravitational potential. While gubas is an efficient implementation of the F2BP, we can simplify the problem further to allow for even faster calculation when appropriate. By reducing the primary to be spherical, we can ignore its attitude. This simplification means we will ignore the oblateness of the primary which can be important, but in this work we are more focused on the rotation state of the secondary. Using a second-order gravity expansion, this is equivalent to the sphere-ellipsoid problem <cit.>. In higher-fidelity simulations, we will remove this simplification and return to the full primary shape. But for now, we use the sphere-ellipsoid model, where the mutual potential energy can be approximated to second order using MacCullagh's formula <cit.>: U = -GM_AM_B/r - GM_A(I_x+I_y+I_z-3Φ)/2r^3 where Φ = I_xx^2+I_yy^2+I_zz^2/r^2 and I_i is the principal inertia about axis i, and I_x<I_y<I_z. 
x, y, and z define the location of the spherical primary in the body-fixed frame of the secondary. The sphere-ellipsoid formulation allows for more efficient computation while still allowing for three-dimensional rotation of the secondary, which will be vital for understanding the chaotic dynamics of 1991VH. However, in a binary asteroid, the widely-separated assumption used in MacCullagh's formula may not be sufficiently accurate. Ideally, the separation would be at least an order of magnitude larger than the primary's radius, whereas for 1991VH the separation is only around 6-8 primary radii. For a more accurate calculation we turn to a more traditional spherical-harmonic approximation. For an ellipsoid, the spherical harmonics can be simply written up to degree and order 4 <cit.>, C_20 = 1/5r_0^2(c_2^2-a_2^2+b_2^2/2) C_22 = 1/20r_0^2(a_2^2-b_2^2) C_40 = 15/7(C_20^2+2C_22^2) C_42 = 5/7C_20C_22 C_44 = 5/28C_22^2, where r_0 is an arbitrary normalization radius. All other spherical harmonic terms are zero. These can be easily used in secondary-centered equations of motion using an algorithm such as the one defined in <cit.>. We will use MacCullagh's approach to obtain Poincaré maps for 1991VH in Sec. <ref>, since the approximation is sufficient to qualitatively capture the behavior of the system and the computational efficiency allows us to generate more initial conditions for these maps. For the remainder of this work we will use the spherical harmonics approach. Given the mass parameters of the system, the equilibrium spin rate in the sphere-ellipsoid model at a given separation distance r is given by <cit.>: θ̇^2 = GM_AM_B/r^3(1+3/2r^2(I̅_y+I̅_z-2I̅_x)) where I̅_i is the mass-normalized inertia. This corresponds to the spin rate of the system when it is in the perfect 1:1 spin-orbit resonance. §.§ Dynamical Structure We first explore the dynamical structure of 1991VH using a secondary with a_2/b_2=1.3, b_2/c_2=1.2, in the sphere-ellipsoid problem. This shape is the nominal estimate for the elongation of the secondary of 1991VH from <cit.>. Since there is no information on the b_2/c_2 for 1991VH, we choose b_2/c_2=1.2 as this is near the values of other secondaries of near-Earth binary asteroids <cit.>. We start the system in the synchronous equilibrium near its current dynamical state, then add eccentricity by perturbing the velocity such that the eccentricity becomes approximately 0.05. We test different values of the secondary spin rate, keeping the velocity perturbation constant so the translational kinetic energy is the same for each initial condition. Following <cit.>, we plot maps of the normalized secondary spin rate ω_B/θ̇ as a function of the secondary's libration angle ϕ at each periapsis crossing. In binary asteroid dynamics, the periapsis is not necessarily well defined, as the Keplerian orbit precesses at a rate comparable to the orbit rate <cit.> In these plots, we define a periapsis crossing as a local minimum in the orbit separation. Thus, we adopt a purely numerical definition for the periapsis crossing. We define the libration angle as the angle between the secondary's longest axis and the vector from the secondary to the primary. The results, restricted to planar motion, are shown in Fig. <ref>. These Poincare maps show spin-orbit resonances in blue, quasiperiodic dynamics in green (in which the Poincare map will trace out a curve but never exactly repeat), and chaotic dynamics in black. 
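A compact numerical sketch of the sphere-ellipsoid model described above is given below: the degree-2 and degree-4 gravity coefficients of a uniform ellipsoid, the MacCullagh mutual potential, and the spin rate of the synchronous circular equilibrium. The squared equilibrium rate is written with the total mass, G(M_A+M_B)/r^3, which gives the correct units of s^-2 and matches the angular-momentum expression used later in the paper. The 1991VH-like inputs (bulk density 1.6 g/cm^3, D_A = 1.2 km, D_B = 450 m, a_2/b_2 = 1.3, b_2/c_2 = 1.2) are rough values taken from the text, and the helper names are ours.

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2

def ellipsoid_harmonics(a, b, c, r0):
    """Non-zero unnormalized gravity coefficients of a uniform ellipsoid, to degree 4."""
    C20 = (c**2 - (a**2 + b**2) / 2.0) / (5.0 * r0**2)
    C22 = (a**2 - b**2) / (20.0 * r0**2)
    return dict(C20=C20, C22=C22, C40=15.0/7.0*(C20**2 + 2*C22**2),
                C42=5.0/7.0*C20*C22, C44=5.0/28.0*C22**2)

def maccullagh_potential(r_body, m_A, m_B, I_B):
    """Second-order sphere-ellipsoid mutual potential; r_body is the primary's position in
    the secondary's principal frame, I_B the secondary's principal moments (kg m^2)."""
    r = np.linalg.norm(r_body)
    phi = np.dot(I_B, r_body**2) / r**2
    return -G*m_A*m_B/r - G*m_A*(np.sum(I_B) - 3.0*phi) / (2.0*r**3)

def equilibrium_spin_rate(m_A, m_B, r, a, b, c):
    """Spin/orbit rate of the circular 1:1 (synchronous) equilibrium in the sphere-ellipsoid model."""
    Ix, Iy, Iz = np.array([b**2 + c**2, a**2 + c**2, a**2 + b**2]) / 5.0   # mass-normalized inertias
    return np.sqrt(G*(m_A + m_B)/r**3 * (1.0 + 1.5/r**2 * (Iy + Iz - 2.0*Ix)))

# rough 1991VH-like numbers
rho = 1600.0
m_A = rho * 4.0/3.0 * np.pi * 600.0**3
b2 = 225.0 * (1.2/1.3) ** (1.0/3.0)            # semi-axes for a 450 m volume-equivalent diameter
a2, c2 = 1.3*b2, b2/1.2
m_B = rho * 4.0/3.0 * np.pi * a2*b2*c2
I_B = np.array([b2**2 + c2**2, a2**2 + c2**2, a2**2 + b2**2]) * m_B / 5.0
print(ellipsoid_harmonics(a2, b2, c2, r0=a2))
print(maccullagh_potential(np.array([3.11e3, 0.0, 0.0]), m_A, m_B, I_B))
n_eq = equilibrium_spin_rate(m_A, m_B, 3.11e3, a2, b2, c2)
print(2.0*np.pi/n_eq/3600.0, "h")
```

With these assumed values, a 3.11 km separation gives an equilibrium orbit/spin period of about 30 hours, consistent with the pre-encounter configuration used in the example history later in the paper.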
This demonstrates how different initial conditions result in varying dynamical behavior. These results are similar to the results obtained by <cit.>, which is expected as our approach is nearly the same. We see stable equilibria around the 1:1, 2:1, and 5:2 spin-orbit resonances. However, the perturbation is strong enough to force overlapping of the 2:1 and 3:2 resonances, leading to a very large chaotic sea. Several distinct initial conditions result in the chaotic sea shown in Fig. <ref>. Above this chaotic sea, the system follows predictable quasi-periodic curves. We also note the resonances are shifted away from exactly 1, 2, or 2.5 due to the effect of libration in the system, which shifts the secondary's spin rate at periapsis <cit.>. Around the 1:1 spin-orbit resonance of the full 3 dimensional problem, <cit.> showed that resonances among the natural frequencies of the system can excite out-of-plane rotation. Thus, certain shapes of the secondary are inherently unstable. The large uncertainty on the secondary's shape in 1991VH allows for instabilities in the system driven by the internal spin-orbit coupling in binary asteroids. Notably, the nominal shape of a_2/b_2=1.3, b_2/c_2=1.2 is near a 2:1 resonance between the nutation and libration of the secondary, and as we will show next, this system is susceptible to chaotic motion. While Fig. <ref> presents a classical result in the spin-orbit problem, it is limited to planar motion. However, it is well known that out-of-plane rotation in the secondary is important in binary asteroids. Furthermore, observations suggest the secondary of 1991VH could be in a state of non-principal axis (NPA) rotation. So we expand our analysis to include out-of-plane rotation. Taking the same initial conditions as those in Fig. <ref>, we add now a small perturbation to the secondary's spin axis of (1×10^-6)^∘ along its minimum principal inertia direction. The results of these simulations are shown in Fig. <ref>. We see a significant change to the structure of the dynamics. Most importantly, because we have increased the dimensionality by 2, the resonant and quasi-periodic curves of the two dimensional problem are no longer barriers to chaotic motion in three dimensions. We see some solutions around the 1:1 resonance become chaotic, even though the spin perturbation is very small. The chaotic sea now crosses the 1:1, 2:1, and 5:2 resonances, along with the quasiperiodic curves. However, these quasiperiodic curves and the 2:1 and 5:2 resonances generally keep the same structure as in the planar case. These solutions have sufficient angular momentum that the small spin perturbation is not large enough to cause these solutions to become unstable. The instabilities we see in Fig. <ref> are not unexpected, as a linear stability analysis predicts this behavior for a wide range of secondary shapes <cit.>. This analysis demonstrates the importance of considering full three-dimensional rotation in dynamical analyses of 1991VH, and binary asteroids in general. Even very small perturbations can destroy the structure that exists when the system is restricted to planar motion and rotation. Applied specifically to 1991VH, this shows that a very small perturbation to the orbit, sufficient to produce the observed eccentricity, could kick-start a period of NPA rotation, even if the system was previously in the 1:1 spin-orbit resonance. This provides one possible evolutionary pathway to place 1991VH in the dynamical state observed today. 
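The construction of these maps can be summarized in a few lines of post-processing: record the libration angle and normalized spin rate at every local minimum of the separation, which is the numerical periapsis definition adopted above. The snippet assumes the time histories come from an F2BP integration (for example with gubas), uses our own variable names, and folds the libration angle onto [0, 90] degrees for simplicity.

```python
import numpy as np

def libration_angle_deg(r_primary_in_B):
    """Angle between the secondary's long (x) axis and the direction to the primary,
    for an (N, 3) array of primary positions expressed in the secondary's body frame."""
    u = r_primary_in_B / np.linalg.norm(r_primary_in_B, axis=1, keepdims=True)
    return np.degrees(np.arccos(np.clip(np.abs(u[:, 0]), -1.0, 1.0)))

def poincare_section(separation, libration, spin_ratio):
    """Keep (libration angle, omega_B / theta_dot) at each local minimum of the separation."""
    s = np.asarray(separation)
    idx = np.where((s[1:-1] < s[:-2]) & (s[1:-1] <= s[2:]))[0] + 1
    return np.asarray(libration)[idx], np.asarray(spin_ratio)[idx]
```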
§.§ External Perturbation The most likely source of external perturbation to 1991VH is a close gravitational encounter with a terrestrial planet. Over the last 100,000 years, there is around a 25% chance that 1991VH had a close encounter with either Mars or Earth sufficient to increase the eccentricity to 0.1. <cit.>. On average, the change in eccentricity to a binary asteroid can be calculated by <cit.>: |Δ e| ≈ 1.89√(G/M_A+M_B)M_Sa^3/2/v_∞ q^2 where M_S is the mass of the perturbing planet, a is the semimajor axis of the binary, v_∞ is the hyperbolic excess speed of the flyby, and q is the closest approach distance between the planet and the binary's barycenter. Using a semimajor axis of 3.25 km and a hyperbolic excess velocity of 10 km/s, 1991VH would gain an eccentricity of 0.05 from a close Earth encounter with a periapsis of approximately 160,000 km. From the semi-analytical propagation carried out by <cit.>, such an encounter was possible no sooner than 12,000 years ago. The average change in semimajor axis is similarly written as <cit.>: |Δ a| ≈ 1.48√(G/M_A+M_B)M_Sa^5/2/v_∞ q^2. Using the same numbers for a 160,000 km Earth flyby, this corresponds to a change in the semimajor axis of about 140 m. Thus, on average, we would expect the pre-encounter 1991VH to have a semimajor axis around 3.22-3.28 km, with the flyby either increasing or decreasing the semimajor axis to its currently estimated value. In their high-fidelity simulations of a 1991VH-like binary asteroid undergoing an Earth encounter, <cit.> find a flyby distance of 100,000 km results in an eccentricity of around 0.05±0.01. This also corresponds to a change in the binary's mutual semimajor axis of about 100±100 m. The binary asteroid parameters used in that work are slightly different from those of 1991VH, but the results are still generally consistent with the analytic predictions. § EARTH ENCOUNTER SIMULATIONS Our hypothesis is that 1991VH was previously in a singly-synchronous state, similar to the majority of observed binary asteroids <cit.>, before a close Earth encounter added eccentricity to the system and excited the system into its currently-observed chaotic state. To test this hypothesis, we vary possible pre-encounter orbit periods for the singly-synchronous state, then simulate a variety of Earth encounter conditions. §.§ Simulation Inputs In these simulations, we fix the bulk density of the system to be ρ=1.6g/cm^3 <cit.>, and set the primary and secondary densities equal to one another. Thus, when we choose the pre-encounter orbit period, this fixes the pre-encounter semimajor axis. Instead of using the Keplerian relationship to determine the semimajor axis, we adopt the approach used by <cit.>, who demonstrated the inaccuracy of the Keplerian elements in binary asteroid dynamics due to relaxing the point-mass assumption. Thus, we use the stroboscopic orbit period, also called the plane-crossing orbit period, which is determined by the differences in timings of the secondary crossing of an arbitrary inertial plane. Equivalently, this is the time required for the secondary to complete one full revolution around the primary in inertial space. Following <cit.>, we iteratively calculate the separation distance required to achieve a desired stroboscopic orbit period using a numerical secant search algorithm. In these simulations, we will use the F2BP dynamics, and thus allow for the primary to be aspherical. We set the primary J_2=0.02 (S. 
Naidu, personal communication, April 2024) and assume it is axisymmetric, and use a second degree-and-order expansion of the mutual potential. We fix the primary volume-equivalent diameter to 1.2 km. We generate realistic flyby geometries of 1991VH relative to the Earth by propagating its orbit backward in time, taking into account the uncertainties in its ephemeris. We do this using the semi-analytical NEO propagation tool developed by <cit.>. We use the heliocentric orbit nominal solution and full covariance from JPL's SSD/CNEOS Small-Body Database[Orbit solution retrieved from JPL's SSD/CNEOS SBDB API available at <https://ssd-api.jpl.nasa.gov/doc/sbdb.html>. Data accessed 03-08-2024.]. This solution uses a 29 year data arc, including 91 high-precision observations from the Gaia Focused Product Release <cit.>, incorporated in the solution following the data treatment of <cit.>. Even though the latter observations further constrain 1991VH's orbit, the long-term propagation leads to a statistical distribution of the Earth flyby parameters. The results of this propagation for the past 100,000 years are shown in Fig. <ref>, showing every close approach within 150,000 km of Earth. In the propagation we occasionally find close encounters with Mars and Venus, but based in the estimated perturbation <cit.> and the lower frequency of those encounters <cit.> we exclude them from this analysis. There are a cluster of possible flybys around 25,000 years ago, followed by another cluster around 40,000 years ago. After these periods, uncertainties grow large enough that we have a more uniform distribution in flybys. The distribution of the longitude of the ascending note is generally uniform for all flybys, but the hyperbolic excess velocity has a clear evolution with time, generally decreasing into the past until around 80,000 years ago. The inclination of the flybys appears roughly Gaussian, and centered around 90^∘. The argument of periapsis is centered around either 0/360^∘ or 180^∘. We generate a uniform distribution of the Earth-secondary phase angles at closest approach, indicating we are testing a wide range of possible geometries Since there is no data on the system's phase angle, we randomly draw from all possible values. For the pre-encounter binary asteroid, we use the range of possible values outline in Table <ref>. This gives the pre-encounter orbit period, as well as the size and shape of the secondary. The separation distance is calculated from the orbit period such that the secondary is in a physically circular orbit, using the F2BP dynamics to eliminate pre-encounter libration from the secondary. We use the Spherical-Restricted Full 3-Body dynamical model defined and implemented by <cit.>. This model calculates the full spin-orbit coupling between the primary and secondary while also influenced by a distant and large spherical perturber, in our case the Earth. We simulate the flyby for 20 days with closest approach centered at 10 days. After this, we hand off the results to the pure F2BP to speed up computational time without needing to track the effect of the Earth, and simulate the system for a full year. <cit.> showed that plus or minus 2 days from closest approach, the effect of the perturbing planet was negligible, so we are conservative in our use of plus and minus 10 days from closest approach. §.§ Results As already discussed, <cit.> showed the inaccuracy of Keplerian elements in representing binary asteroid orbits. 
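As a check on the analytic flyby estimates quoted in the External Perturbation section, the snippet below evaluates the average |Δe| and |Δa| expressions in SI units. The system mass corresponds to the 1.6 g/cm^3 bulk density and body sizes used above; the specific numbers are only illustrative.

```python
import numpy as np

G, M_EARTH = 6.674e-11, 5.972e24      # SI units

def flyby_delta_e_a(m_total, a, v_inf, q):
    """Average |de| and |da| for a flyby at closest approach q with excess speed v_inf."""
    common = np.sqrt(G / m_total) * M_EARTH / (v_inf * q**2)
    return 1.89 * common * a**1.5, 1.48 * common * a**2.5

m_total = 1.52e12                      # approximate 1991VH system mass (kg)
de, da = flyby_delta_e_a(m_total, a=3.25e3, v_inf=10.0e3, q=1.6e8)
print(round(de, 3), round(da, 1))      # ~0.05 and ~140 m, as quoted for a 160,000 km Earth encounter
```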
As such, we will also adopt the use of the `observable' semimajor axis and eccentricity in our results. These quantities are defined using only the maximum and minimum separation distances within some time interval; in this work we will use a sliding time window of 5 days. This still leaves the question of how best to calculate the secondary's spin period, which is a key piece of information provided by observations. We are concerned with the dynamics of 1991VH that will reproduce these observations, so our approach should mimic how the spin period is calculated in these observations. Generally this is done using lightcurves of the secondary <cit.>. We define synthetic lightcurves, which we will calculate using the visible cross section of the secondary's cross-sectional area: A = π√(x̂^2b_2^2c_2^2+ŷ^2a_2^2c_2^2+ẑ^2a_2^2b_2^2) where x̂, ŷ, and ẑ is the direction of the observer defined in the body-fixed coordinates of the secondary. After calculating the cross-section area, we perform a 1-degree Fourier fit over the same 5-day sliding observation window. If the r-squared value of the fit is at least 0.5, we keep the fit as a measurement of half the secondary spin period. Due to the symmetry of the perfect ellipsoid, the Fourier fit will only measure half the spin period, so we correct by a factor of 2 to obtain the true spin period. Our observer is fixed at an arbitrary position in inertial space, so these synthetic lightcurves calculate the sidereal period. The real period measurements are reported as synodic values, so we do not have an exact comparison between the synthetic and true data. However, the difference between the sidereal and synodic periods are less than 0.1 hour for all observations <cit.>, so this difference is small. Due to the uncertainties and chaotic dynamics in the system, we will not be able to exactly reproduce the true measurements, so this difference is acceptable. We perform 4,600 Monte Carlo simulations of Earth close encounters, randomly sampling from the distribution in Fig. <ref>. In each of these simulations we calculate the stroboscopic orbit period of the post-encounter system. To match the observations, we remove any results that have an average orbit period less than 30 hours or greater than 35 hours. After this removal, we obtain 1,866 results, which are plotted in Fig. <ref>. This plots the secondary spin period measurements as a function of closest approach distance, with dashed lines bracketing the observed values. From Fig. <ref>, we see a large grouping of secondary spin periods in the 30-35 hour range. These are the systems which remain synchronous with the orbit period after the flyby. We see most simulations fall in this category, but a significant number of simulations do have faster or shorter secondary spin periods, indicating these systems have become asynchronous. The number of results falling within the observed window, bracketed by dashed lines, is small but significant, indicating it is possible to reproduce the observed systems after a single Earth close encounter. We see a narrow range of flyby conditions that are able to reproduce both the observed secondary rotation period and the orbital eccentricity. These flybys have a close approach distance within the range of 50,000 - 80,000 km. Closer than this produces an eccentricity that is too large, while further than this does not provide a sufficient perturbation to induce asynchronous rotation in the secondary. This range is smaller than the analytic prediction in Section <ref>. 
The narrower range of flyby distances suggests that reproducing the observed secondary spin period is more difficult than simply achieving the observed eccentricity. In Fig. <ref>, we plot the post-encounter orbit period as a function of pre-encounter orbit period. Again, the dashed lines bracket the observed values. In order to achieve these observed values, the pre-encounter orbit period of the system was likely between 28 and 35 hours, but to match the observed eccentricity, the pre-encounter orbit period was likely not within the 30-33 hour range. Thus, the most likely pre-encounter orbit period was either 28-30 or 33-35 hours. This indicates the orbit period and semimajor axis of 1991VH have likely not changed significantly as a result of a possible Earth encounter in its past, and the main changes are in the eccentricity and the rotation of the secondary. In our simulations, we do not find a single case in which a system remains in planar rotation and has a secondary rotation period near the observed values. This is another indication of chaotic rotation in 1991VH. § EXAMPLE HISTORY Here we present an example simulation providing a close match with observations. This system was initially in a circular orbit with a separation distance of 3.11 km and an orbit period of 29.9 hours. The secondary ellipsoid has a volume-equivalent diameter of 458 m, and axis ratios a_2/b_2=1.39 and b_2/c_2=1.17. This system undergoes an Earth encounter with a close approach distance of 46,580 km and a hyperbolic excess speed of 7.6 km/s. The hyperbolic elements are i=78^∘, Ω=217^∘, and ω=193^∘. To study this history, we integrate the system forward for 10 years after the flyby and plot the post-encounter secondary spin period and orbit period in Fig. <ref>. The results are not a perfect match, but lie very close to the observed values. The secondary spin period alternates between epochs of more chaotic rotation and quasi-constant spin. The periods of quasi-constant spin last for several years at around a 10 hour spin period, generally consistent with observations. During the more chaotic epochs the spin period varies between 10 and 40 hours, and larger values are also possible. We note the actual observations of the secondary's rotation period in 1991VH do not show these longer rotation periods, although there is a possible estimate of a spin period up to 29 hours (see Table <ref>). Likewise, the orbit period fluctuates between around 31.5 and 33 hours. The observed orbit period ranges between 32.5 and 32.8 hours. Thus, our simulated orbit period is not an exact match, but very close. There is a clear correlation with the secondary spin period. During the quasi-constant secondary spin period, the orbit period is slightly less than 32 hours, but increases during the periods of chaotic rotation. In Fig. <ref> we plot the observable semimajor axis and eccentricity. The semimajor axis ranges between 3.23 and 3.32 km. Similar to the orbit period, these values are a close match to the true observations of 3.24-3.26 km. Again there is a correlation with the secondary spin, with a lower semimajor axis during quasi-constant spin and a larger value during the chaotic rotation. The eccentricity ranges between 0.02 and 0.15. The true observed eccentricity values range from 0.03 to 0.07. The periods of higher eccentricity correlate with the chaotic rotation, and the eccentricity is lower during quasi-constant spin. So while our simulated eccentricity values can exceed these observations, they are generally consistent with them. 
We track the secondary's orientation with a classical set of 1-2-3 roll, pitch, yaw Euler angles relative to the rotating Hill frame, with the z-axis aligned with the orbit's angular momentum vector. In this construction, a tidally-locked secondary would have zero rotation in all three axes. Fig. <ref> shows these Euler angles over time for this example. During the period of quasi-constant spin, the secondary is generally only librating in its roll angle, either around 0^∘ or 180^∘. However, the other angles are circulating, indicating that even in this period of quasi-constant spin, the secondary is still fully tumbling. During the periods of chaotic rotation, all three angles are circulating. In this simulation, the secondary enters a state of fully chaotic rotation, and in particular the barrel instability is prominent in the roll angle <cit.>. As this dynamical state closely reproduces the observations, this is another indicator that 1991VH is indeed experiencing chaotic rotation in the secondary. Overall, this simulation demonstrates that due to the chaotic nature of binary asteroids, it is possible for a single Earth encounter to transform an equilibrated, singly-synchronous binary asteroid into a system that looks similar to the current 1991VH. § SECULAR ENERGY DISSIPATION While we have demonstrated that a close Earth encounter can create a system similar to 1991VH from a previously singly-synchronous binary asteroid, it is unclear how long-lived this excited state will be. The most recent Earth encounter sufficiently close to produce these dynamics occurred no more recently than around 12,000 years ago <cit.>. In the singly-synchronous configuration, for a given spin rate of the primary the system's energy is minimized for the amount of angular momentum in the system. A perturbation such as a close planetary flyby will change the energy and angular momentum such that NPA rotation is allowable, precipitating an exchange of angular momentum between the secondary and the orbit. Since the secondary's angular momentum is much smaller than that of the orbit and primary, a small excess in energy at a given angular momentum level allows for NPA rotation. In the literature, it has been argued that NPA rotation can be long-lived <cit.>, including specifically for binary asteroids <cit.>. In their analysis, <cit.> argue that the asynchronous state of 1991VH can also be persistent over long times. However, recently <cit.> demonstrated that eccentricity damping in binary asteroids can be much faster than classical analytic predictions due to the spin-orbit coupling and the relationship between libration and eccentricity. In this section, we provide a more concrete analysis to investigate the question of the persistence of the NPA rotational state of 1991VH. In the minimum-energy configuration, the equilibrium spin rate is defined by Eq. <ref>. Because the secondary is in the 1:1 spin-orbit resonance in this configuration, its spin rate relative to an inertial frame is also defined by <ref>. Thus, the angular momentum for this configuration is simply written as: H=(I_Bz+Mr^2)√(G(M_A+M_B)/r^3(1+3ℐ/(2r^2)))+I_Azω_A where ℐ=I̅_By+I̅_Bz-2I̅_Bx and M=M_AM_B/(M_A+M_B). Here we are explicitly assuming the primary is limited to principal-axis rotation about its major axis. In the equilibrium configuration, the orbital velocity is v=rθ̇. Using this, the corresponding minimum energy can be written as <cit.>: E^* = 1/2(I_Bz+Mr^2)G(M_A+M_B)/r^3(1+ℐ/(3r^2))-GM_AM_B/r(1+ℐ/(2r^2))+1/2I_Azω_A^2. 
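As a small numerical illustration, the two expressions above can be evaluated directly. The sketch below is ours, not taken from the reference implementation; in particular, the second-order correction factors are read in the dimensionally consistent grouping (ℐ has units of length squared, so the corrections are ℐ/r^2 terms), and the parameter names are placeholders.

import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def equilibrium_H_and_E(r, M_A, M_B, I_Az, I_Bz, Ibar_Bx, Ibar_By, Ibar_Bz, omega_A):
    # Angular momentum H and minimum energy E* of the singly-synchronous
    # configuration, following the expressions above.  Assumed groupings:
    # 1 + 3*Ical/(2 r^2), 1 + Ical/(3 r^2) and 1 + Ical/(2 r^2).
    Ical = Ibar_By + Ibar_Bz - 2.0 * Ibar_Bx       # mass-normalised inertia difference [m^2]
    M = M_A * M_B / (M_A + M_B)                    # reduced mass
    theta_dot = np.sqrt(G * (M_A + M_B) / r**3 * (1.0 + 1.5 * Ical / r**2))  # equilibrium rate
    H = (I_Bz + M * r**2) * theta_dot + I_Az * omega_A
    E_star = (0.5 * (I_Bz + M * r**2) * G * (M_A + M_B) / r**3 * (1.0 + Ical / (3.0 * r**2))
              - G * M_A * M_B / r * (1.0 + Ical / (2.0 * r**2))
              + 0.5 * I_Az * omega_A**2)
    return H, E_star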
When the system is perturbed away from this equilibrium, it will dissipate energy at a constant angular momentum. As the orbit expands and the primary's rotation rate slows, the secondary will eventually be constrained back to synchronous rotation. This happens at the combination of separation and primary spin rate defined in Eq. <ref>. At this point, energy dissipation will continue as long as the primary rotates asynchronously. However, there will not be enough energy to allow for a complex spin state in the secondary. As dissipation continues within the primary, the separation distance r will increase while the primary's spin rate ω_A will decrease. The primary mechanism of energy dissipation considered here is the tidal torque. Starting from the classical tidal torque equation in <cit.>, <cit.> generalized this to three dimensions as: Γ⃗_B = 3GM_A^2M_B^2/(2r^6R_B)·(3/(4πρ))^2·(k_B/Q_B)·(-(ϕ̇⃗̇-(ϕ̇⃗̇·r̂)r̂)/|ϕ̇⃗̇-(ϕ̇⃗̇·r̂)r̂|) where ϕ̇⃗̇=ω⃗_B-ω⃗_orbit. The same equation can be applied to the primary, and the torque on the orbit is equal and opposite to the sum of the torques on the primary and secondary. As derived in <cit.>, this adds an acceleration to the orbit: r̈⃗̈=Γ⃗_orbit×r⃗/(Mr^2), where the torque on the orbit is simply Γ⃗_orbit=-(Γ⃗_A+Γ⃗_B) to conserve angular momentum in the system. For the ratio of tidal quality factor to Love number, we initially use the radius dependency derived by <cit.>. This was shown to be a good approximation for rubble-pile asteroids <cit.>, and is given as Q_i/k_i≈300R_i for body i, where R_i is in meters. However, there is significant uncertainty around these values, and in reality they could be larger or smaller. Nominally, this gives us Q_A/k_A = 180,000 for the primary and Q_B/k_B = 67,500 for the secondary. Using the results from Section <ref>, we integrate the system forward for another 100 years using the sphere-ellipsoid equations of motion. Because of the asynchronous rotation of the secondary, we do not need to include the BYORP effect in these simulations. The percent changes in total energy, free energy, and primary spin rate are plotted in Fig. <ref>, demonstrating the energy dissipation. Here we are defining free energy as the total energy minus the contribution from the primary. In Fig. <ref>, we also plot the total and free angular momentum. This ensures our system is conserving total angular momentum to within 10^-6 percent. However, due to the secular evolution, the free angular momentum (the contribution of the secondary and the orbit) is increasing. To predict the system's future behavior, we perform a linear fit to both the total energy and the primary spin rate. The linear fit is a conservative estimate, as dissipation will in reality slow as the separation distance increases. Using this fit, we can extrapolate the total energy and the primary's spin rate at a future time. Using the primary's spin rate, numerically solving Eq. <ref> provides the separation distance required to conserve angular momentum; evaluating Eq. <ref> at these values then gives the energy of the singly-synchronous configuration at this angular momentum level, which we call the minimum energy. The difference between the total energy and this minimum energy is the excess energy, which is plotted in Fig. <ref>. As we see, the excess energy is greater than 0 for around 5,000 years in this case, suggesting NPA rotation would in fact be relatively short-lived for these tidal parameters. 
Indeed, no close Earth encounter of 1991VH has occurred within the past 12,000 years <cit.>. To validate this predicted time for NPA rotation, we integrate the system for 10,000 years. The results of this integration are shown in Fig. <ref>, plotting the libration angle, semimajor axis, and eccentricity. In this plot, the libration angle is the angle between the secondary's body-fixed x-axis and the position vector of the secondary relative to the primary. From Fig. <ref>, we see very good agreement between the predicted duration of NPA rotation and these fully numerical results. The libration angle stops circulating a little before 5,000 years, very similar to the prediction by our linear extrapolation. In this example, the secondary settles into the anti-synchronous case where it has flipped 180^∘ from its initial orientation. This plot also shows the reduction of eccentricity, which happens much faster than predicted in analytical models <cit.>. During this initial energy dissipation, the eccentricity is damped more quickly than the semimajor axis expands, although we do see a secular increasing trend in the semimajor axis envelope toward the end of the simulation. With the linear dissipation model validated by this detailed numerical integration, we now extend the analysis to different values of the secondary tidal parameters. The secondary's libration dissipation is driven primarily by the parameters of the secondary <cit.>, so we can limit our analysis to only these parameters while keeping the primary's parameters constant. Fig. <ref> plots the total time for which NPA rotation is permissible as a function of the secondary tidal parameters. The latest a flyby could have occurred was around 12,000 years ago, which is plotted as a dashed black line. Assuming it was an Earth encounter that provided the excitation to place 1991VH in an excited dynamical state, this places a lower bound on Q_B/k_B of around 2×10^5, although in reality it would likely be higher. This is relatively large for a rubble-pile secondary <cit.>, and using the model of <cit.> it would require a dissipative regolith layer of 15 m or less around the secondary. However, these values are still within the realm of possibility. § DISCUSSION In this work we have investigated the possibility that the current excited and chaotic dynamical state of 1991VH is the result of a previous close encounter with Earth. We find that it is relatively easy to excite NPA rotation of the secondary of 1991VH, and this NPA rotation is generally chaotic and can cross between planar spin-orbit resonances thanks to the addition of two out-of-plane rotational degrees of freedom. We find it is most likely that 1991VH was in a singly-synchronous configuration with an orbit period between 28 and 35 hours, although probably not within the 30-33 hour range. A single close Earth encounter at between 50,000 km and 80,000 km could have then placed the system into its current dynamical state, slightly closer but still in line with our initial analytic prediction. Because the current eccentricity of 1991VH is relatively small, this indicates the semimajor axis and orbit period of the system did not significantly change as a result of the flyby. However, the spin-orbit coupled dynamics are sufficient to place the secondary into its currently excited rotation state as a result of this flyby. 
A flyby closer than 50,000 km is possible if secular dissipation has since reduced the eccentricity into the range observed today. Using only the internal spin-orbit coupled dynamics characteristic of binary asteroids, we find a possible range of close encounter geometries that can excite the system to the level observed today. We emphasize this requires only a single external perturbation, and the internal system dynamics naturally evolve the system into this state. However, there is a large range of possible post-encounter dynamical states, and the simulations that reproduce the observed secondary rotation period are in the minority. One point of consideration is that 1991VH is itself in the minority among the binary asteroid population, in that it exists in this excited, asynchronous state <cit.>. In our simulations, we see most flybys result in a system remaining within the synchronous configuration, consistent with the observed binary asteroid population. The second point of consideration is that our simulations do not calculate the probability that 1991VH experienced this Earth encounter. This probability was calculated in <cit.>, who find a 50% chance 1991VH was significantly perturbed by a close encounter within the past 250,000 years. Instead, our simulations simply show that if 1991VH experienced such an Earth encounter, this could explain its current dynamical state. Finally, a wider range of flyby geometries is possible in reality, as we have considered only a single flyby whereas the system would experience successive close approaches over time. These successive close encounters, along with energy dissipation, would further evolve the system and could be responsible for the observed state of the system today. One interesting point of discussion is how long we would expect this state of NPA rotation to last. In our analysis, we only include dissipation caused by tidal torques, while in reality there are many mechanisms of energy dissipation, including NPA rotation <cit.>, tidal saltation <cit.>, YORP <cit.>, and surface motion <cit.>. While these could increase the rate of tidal dissipation, the tidal torque is the strongest among these <cit.>, and these other dissipation mechanisms could in effect be wrapped within the uncertainty on the tidal Q/k parameters. These additional dissipation mechanisms could thus be thought of as simply increasing the rate of dissipation used in this work. This simplification is adopted to increase numerical stability for our long-term simulations. Nominally, the proximity and rubble-pile nature of binary asteroids suggest these systems can re-synchronize quite quickly <cit.>. This is contrary to the observed state of 1991VH, as the system has not undergone a close planetary encounter within the past 12,000 years <cit.>. This constraint places a lower bound on the secondary's Q_B/k_B tidal parameter: Q_B/k_B≳2×10^5. This is consistent with recent results from <cit.>, who place a general upper bound for Q_B/k_B roughly around ≲1×10^6, moderately higher than our minimum constraint. However, their work also uses the analytical approach to eccentricity damping <cit.>, which was shown in <cit.> to generally over-estimate the eccentricity damping time in binary asteroids. Thus, while our resulting Q_B/k_B is slightly larger than typically found, the required values of Q_B/k_B in 1991VH are in general consistent with the current literature. 
Another consideration is that successive close encounters would counteract energy dissipation by providing additional perturbations over time. This means our calculated lower bound on Q_B/k_B is a conservative estimate. When considering only binary asteroids that have estimates for both the orbit period and secondary rotation period, we find around 15% of these systems have asynchronous secondaries. If we assume these systems became asynchronous as a result of close planetary encounters, this potentially provides information about energy dissipation rates within these systems. In their work, <cit.> calculate the probability of a close planetary encounter for two binary asteroids: (35107) 1991VH and (175706) 1996FG3. For 1991VH, they found a likelihood of 15% that this system experienced a close encounter within the past 70,000 years. For 1996FG3, the same likelihood is reached for encounters within the past 30,000 years. These are of the same order of magnitude, so if we assume these numbers can be generalized to the population of NEA binary asteroids at large, then 15% of NEA binary asteroids would have been perturbed by these close flybys on the order of 10,000 to 100,000 years ago. If we take 1991VH to be a characteristic binary asteroid, this would place the tidal parameters of secondaries at roughly Q_B/k_B ~ 1×10^5 to 1×10^6 in order for around 15% of binary asteroids to remain asynchronous. However, far more data and analysis are needed to substantiate this argument. Due to the orbital evolution of NEAs, we argue a planetary encounter is the most likely source for the excited dynamics in 1991VH. However, many other possible explanations exist. For example, one or several impacts onto the secondary could induce these unstable dynamics. This was demonstrated by the DART impact, which induced NPA rotation in the secondary of (65803) Didymos <cit.>. However, an NEA the size of the secondary of 1991VH has an extremely small probability of experiencing a significant collision in the past million years (on the order of 10^-13) <cit.>. As discussed, it is unlikely that NPA rotation persists longer than this time for 1991VH. Another possible explanation is that 1991VH was previously in a BYORP-tide equilibrium, and a resonance between the orbit and primary spin destabilized the rotation of the secondary <cit.>. If the secondary shape is near a resonance between the natural frequencies within the system, this destabilization would result in NPA rotation. However, the rapid rotation of the primary in 1991VH means the system is far from any low-order resonance between its spin and the mean motion. Alternatively, BYORP and tidal evolution could have migrated the system into a wider configuration where the secondary's rotation was no longer synchronized by the primary <cit.>. However, this scenario would then require a mechanism to reduce the semimajor axis. Another cause of the excited dynamics could be that 1991VH was previously a triple system whose secondaries merged, resulting in an unstable secondary rotational state. The very elongated shape of the secondary in 1991VH predicted by radar data could be explained by such a contact binary <cit.>, although this evidence is circumstantial at best, and there is no concrete evidence of a contact binary satellite. The contact binary secondary of (152830) Dinkinesh could be an example of such a merger, although its satellite is much more elongated than even the upper estimates for 1991VH <cit.>. 
Furthermore, the merger would have had to occur relatively recently for the excited rotational state to persist to today, and the probability of such an event is unknown. All of these scenarios could be examined in further detail. However, the high probability of recent and close encounters with the terrestrial planets suggests this is the most likely cause for the unstable dynamics <cit.>. While we have exclusively focused on Earth encounters in this work, encounters with Mars are also possible. In this work we have demonstrated one possible cause of the excited dynamical state of 1991VH. In this approach, a single Earth flyby within the past ∼100,000 years is capable of producing the currently observed state in the system. This state could then persist to the current day without energy dissipation returning it to the archetypal singly-synchronous state, provided the secondary is not very efficient at energy dissipation or repeated close encounters counteract energy dissipation. § ACKNOWLEDGEMENTS A.J.M. acknowledges support from the Planetary Defense Conference Student Grant. A portion of this work was conducted at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The work by P.P. was supported by the Grant Agency of the Czech Republic, Grant 23-04946S.
http://arxiv.org/abs/2407.13320v1
20240718092151
Deep Reinforcement Learning for Multi-Objective Optimization: Enhancing Wind Turbine Energy Generation while Mitigating Noise Emissions
[ "Martín de Frutos", "Oscar A. Marino", "David Huergo", "Esteban Ferrer" ]
eess.SY
[ "eess.SY", "cs.AI", "cs.SY", "math.OC" ]
Martín de Frutos^1 (corresponding author: m.defrutos@upm.es), Oscar A. Marino^1, David Huergo^1, Esteban Ferrer^1,2. [1] ETSIAE-UPM-School of Aeronautics, Universidad Politécnica de Madrid, Plaza Cardenal Cisneros 3, E-28040 Madrid, Spain. [2] Center for Computational Simulation, Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, 28660 Madrid, Spain. Wind turbine; Deep Reinforcement Learning; Q-learning; Blade Element Momentum Theory; Aeroacoustics; Brooks, Pope and Marcolini; multi-objective optimization; torque-pitch control § ABSTRACT We develop a torque-pitch control framework using deep reinforcement learning for wind turbines to optimize the generation of wind turbine energy while minimizing operational noise. We employ a double deep Q-learning algorithm, coupled to a blade element momentum solver, to enable precise control over wind turbine parameters. In addition to the blade element momentum solver, we use the wind turbine acoustic model of Brooks, Pope and Marcolini. Through training with simple winds, the agent learns optimal control policies that allow efficient control for complex turbulent winds. Our experiments demonstrate that the reinforcement learning agent is able to find optima on the Pareto front when maximizing energy while minimizing noise. In addition, the adaptability of the reinforcement learning agent to changing turbulent wind conditions underscores its efficacy for real-world applications. We validate the methodology using a SWT2.3-93 wind turbine with a rated power of 2.3 MW. We compare the reinforcement learning control to classic controls to show that they are comparable when noise emissions are not taken into account. When including a maximum limit of 45 dB on the noise produced (100 meters downwind of the turbine), the extracted yearly energy decreases by 22%. The methodology allows for easy tuning of the objectives and constraints through the reward definitions, resulting in a flexible multi-objective optimization framework for wind turbine control. Overall, our findings highlight the potential of RL-based control strategies to improve wind turbine efficiency while mitigating noise pollution, thus advancing sustainable energy generation technologies. § INTRODUCTION In recent years, the aerodynamic design of wind turbines has undergone significant advances, reaching near-optimal efficiency through substantial investments in aerodynamic optimization, as well as advancements in manufacturing techniques and materials. Consequently, the focus has shifted towards addressing the issue of noise generated by wind turbines, which is emerging as a competitive factor within the wind energy industry. Numerous studies have explored the correlation between wind turbine sound power levels and public-reported perceptions of annoyance <cit.>. Accurate prediction of wind turbine noise under real operational and atmospheric conditions is crucial for designing quieter turbines and complying with imposed noise regulations <cit.>. This necessity underscores the importance of fast turn-around methods to incorporate noise calculations into design and optimization processes, as well as to assess noise in real time during operation. Such efforts not only optimize wind resource utilization, but also minimize the impact on the quality of life of nearby communities and wildlife. Aerodynamic noise poses a significant limitation to further exploiting wind energy resources. 
This type of noise results from the turbulent flow interacting with the airframe, necessitating a detailed resolution of the flow for accurate far-field noise prediction. However, computational fluid dynamics (CFD) solvers, while capable of simulating the flow field, incur a high computational cost which escalates further when resolving the acoustic field. Consequently, numerical approaches to wind turbine noise prediction remain challenging. Therefore, most noise prediction models for wind turbines are based on aeroacoustic semi-empirical models rather than numerical simulations <cit.>. Despite these obstacles, wind turbines remain an essential component in the generation of clean and renewable energy. However, effective control strategies are imperative to optimize their performance under variable wind conditions. Wind turbine control systems are designed to maximize energy generation while ensuring structural integrity and safe operation <cit.>. Given the dynamic nature of wind, adaptive control strategies are essential, with classic mechanisms including adjustments to yaw angle and rotational speed, as well as blade pitch angle modulation. Leveraging real-time wind measurements, turbine dynamics, and advanced control algorithms enables simultaneous adjustments to rotor speed and pitch, enhancing energy generation, reducing fatigue loads, and extending turbine lifespan. The emergence of reinforcement learning (RL) presents novel opportunities for wind turbine control by enabling data-driven adaptive decision-making <cit.>. RL, a machine learning approach, involves an agent learning to make decisions in an environment to maximize cumulative rewards over time <cit.>. Applied to wind turbines, RL offers autonomous learning of control inputs to maximize power generation by capturing complex non-linear relationships between wind conditions, turbine states, and actions. RL-based control methods adapt in real-time to changing wind conditions, offering significant advantages in wind turbine operation. Below, we review recent advances in RL-based control strategies for wind turbines, focusing on pitch angle and rotor speed modulation. Previous studies have proposed RL algorithms with comprehensive reward definitions, showcasing their efficacy in optimizing wind turbine performance under varying wind conditions. For example, <cit.> proposed an RL pitch controller to maintain the nominal power using an adaptive dynamic programming algorithm, reducing the energy consumption of the pitch actuator. <cit.> developed a data-driven model to implement a torque-pitch controller, modeling the dynamics and using RL to control the wind turbine. <cit.> discussed different reward definitions for wind turbine controllers, while <cit.> and <cit.> developed RL methods for yaw control that avoid control parameter tuning. <cit.> and <cit.> employed Q-learning RL methods for maximum power point tracking (MPPT) control of the generator speed. Overall, these studies demonstrate the adaptability of RL systems to realistic wind conditions, thereby enhancing overall energy generation and efficiency of wind farms. In this paper, we introduce a reinforcement learning-based dynamic control method designed to maximize power output while adhering to specified maximum decibel levels. The paper is structured as follows. First, we summarize the methodology in <Ref>. There, we include the wind turbine model, validating the aeroacoustic model with three different wind conditions. 
Additionally, the multi-objective reinforcement learning strategy is explained, and we provide details on the reward, the neural network architecture and the training procedure. Second, in <Ref> we validate the controller with simple steady winds to later challenge the method with turbulent wind conditions obtained from experimental measurements. We end with conclusions and outlooks. § METHODOLOGY In this section, we detail the methodology for integrating Deep Reinforcement Learning (DRL) with the dynamic control of a wind turbine. We begin by describing the model of the wind turbine, focusing on how both the power output and the noise levels are computed, and validate the methodology using field measurements for a SWT2.3-93 wind turbine with a rated power of 2.3 MW. Subsequently, we detail the setup of the DRL algorithm, which is designed to maximize power generation within specified noise constraints, demonstrating the application of advanced machine learning techniques to real-world energy optimization challenges. §.§ Wind turbine modeling using OpenFAST One critical requirement for incorporating a wind turbine solver into the DRL control framework is the ability to perform rapid evaluations, as the DRL training process requires a large number of simulations. To meet this need, we have chosen to employ an efficient Blade Element Momentum Theory (BEMT) solver. Specifically, we use OpenFAST <cit.>, a well-known open-source software for simulating wind turbine dynamics and acoustic predictions. BEMT is known for its efficiency and offers a simple yet accurate method for estimating the aerodynamic forces and energy generation of wind turbines. Its ability to perform rapid function evaluations is crucial for training and validating the agent within a reasonable timeframe. In the realm of BEMT, the wind turbine blade is segmented into smaller sections along its span. The aerodynamic forces exerted on each section are computed based on the local wind conditions and the airfoil's geometry. These local flow conditions, defined for every section and time step, encompass the wind's speed and direction, along with the turbulence intensity. Polar curves for each airfoil section are used to compute the aerodynamic forces (lift, drag and moment coefficients). By integrating the forces along the span of the blades, we can derive the overall power and thrust generated by the wind turbine. Additionally, OpenFAST includes an aeroacoustic module that enables the computation of noise levels generated by the wind turbine at specific observer locations. To determine the aerodynamic noise sources from wind turbine blades, various semi-empirical noise models are included <cit.>, and we select the Brooks, Pope and Marcolini model. The sound pressure level (SPL) for each blade segment is calculated based on individual noise mechanisms. The cumulative effect of these mechanisms yields the noise source for each airfoil. Finally, the noise sources from all blade segments are combined as uncorrelated noise sources, contributing to the overall computation of the wind turbine's sound power level. The essential aspect of this process is the precise identification and modeling of the various noise mechanisms associated with each blade section. These mechanisms can be categorized into two groups: turbulent inflow noise and airfoil self-noise. 
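Independently of the specific mechanism, the segment-level contributions are combined incoherently, that is, their acoustic powers (not their pressure amplitudes) are added. A minimal sketch of this summation is given below; it assumes the per-segment SPL values at the observer are already available and is not the actual OpenFAST routine.

import numpy as np

def combine_incoherent_spl(spl_segments_db):
    # Uncorrelated sources add in acoustic power:
    # SPL_total = 10*log10( sum_i 10^(SPL_i/10) )
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(spl_segments_db) / 10.0)))

# e.g. three segments at 40, 42 and 44 dB combine to about 47.1 dB
print(combine_incoherent_spl([40.0, 42.0, 44.0]))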
OpenFAST implements the turbulent inflow model presented by <cit.> and, among the airfoil self-noise models described by <cit.>, we have specifically selected turbulent boundary layer trailing-edge noise and tip vortex formation noise. §.§.§ Validation of OpenFAST with a SWT2.3-93 wind turbine The onshore wind turbine selected for the study is based on the SWT2.3-93, which has a rated power of 2.3 MW. This turbine has undergone extensive in-field experimental testing, and complete details on its geometry and benchmark data are available in the open-access Zenodo repository (<https://zenodo.org/records/7323750>), through the work of the European project <cit.>. The airfoil polar curves are available in the airfoil catalog compiled by <cit.>. More details can be found in <cit.>. This open-source information enables us to create a SWT2.3-93 OpenFAST model. Additionally, the benchmark results from the Zenodo dataset can be utilized to validate the model. <Ref> presents the validation of both the power output and the sound pressure level (SPL) of the wind turbine. <Ref> compares the experimental power curve (from the Zenodo database) with that generated by the OpenFAST solver, showing good agreement. Meanwhile, <Ref> shows the one-third octave SPL for frequencies ranging from 10 Hz to 10 kHz, comparing the Zenodo dataset with results from OpenFAST. These comparisons cover three different operational conditions, detailed in <ref>. The Zenodo acoustic results are computed for an observer positioned 100 meters downstream from the wind turbine, at ground level. We observe good agreement with the experimental data for the three operating conditions. We conclude that OpenFAST, with the acoustic model of Brooks, Pope and Marcolini, provides accurate predictions of power generation and acoustics, and is therefore a valid tool to perform multi-objective optimization. §.§.§ Sensitivity to control parameters The selected parameters to control the wind turbine power and noise include the rotational speed Ω and the blade pitch angle θ. Before discussing the RL setup, it is crucial to illustrate the sensitivity of the two performance metrics, the power coefficient and the overall sound pressure level, to these parameters. In <Ref> the sensitivity analyses for both rotational speed and blade pitch angle are displayed for a single incoming wind speed (U_∞). The values of one-third octave SPL, power coefficient and overall sound pressure level are shown for different operational conditions. <Ref> depicts the influence of Ω: increasing the rotational speed makes both the sound pressure level and the power coefficient rise, highlighting the trade-off between maximizing power and minimizing noise. The entire SPL spectrum increases uniformly when increasing Ω, due to the higher relative velocity at the blades, which raises the SPL regardless of the noise mechanism. A similar analysis is presented in <Ref> for the blade pitch angle. Although the general conclusion about the trade-off between power and noise remains valid, the SPL spectrum behaves differently across frequencies. The pitch angle mainly affects trailing-edge noise, leading to changes at relatively low frequencies ranging from 10 Hz to 1 kHz. §.§ Design of a reinforcement learning control Reinforcement Learning is a branch of machine learning that focuses on how agents should take actions in an environment to maximize cumulative reward. 
Unlike supervised learning, where the model learns from a labeled dataset, RL is driven by agent-environment interactions. The agent takes actions based on the current state of the environment and receives feedback in the form of rewards. The state represents the situation of the environment at a given time, while the actions are the possible moves the agent can make. The reward is the feedback indicating the immediate benefit or cost of an action, guiding the agent toward better actions over time. In particular, in this work we use Q-learning RL, which is detailed in the next section. §.§.§ Reinforcement Learning for Multi-Objective Control Q-learning is a widely recognized reinforcement learning algorithm <cit.>. It is categorized under model-free RL algorithms, implying that it operates without the necessity for prior knowledge or explicit models that represent the dynamics of the system. The fundamental component of Q-learning is the Q-value, which quantifies the anticipated cumulative reward for executing a specific action in a given state. The Q-value is updated iteratively via the Bellman equation, which formulates the optimal action-value function in terms of the maximum expected future reward. During the learning process, the wind turbine interacts with the environment, transitions between states, and takes actions according to its current policy. The Q-learning algorithm employs an ϵ-greedy exploration-exploitation trade-off to strike a balance between exploring new actions (with probability ϵ) and exploiting current knowledge (with probability 1-ϵ) to maximize cumulative rewards. In RL the cumulative reward is computed taking into account that a reward received immediately is worth more than a reward received in the future; specifically, at each time step the reward is discounted by γ, the discount rate. The Q-values are initialized arbitrarily. As the wind turbine explores the environment and receives feedback in the form of rewards, the Q-values are updated using the temporal difference error. The temporal difference error represents the discrepancy between the observed reward and the predicted reward based on the Q-values. Through repeated iterations, the Q-learning algorithm gradually converges to an optimal policy. In this state, the wind turbine learns the best actions to take in different states, thereby maximizing power generation while minimizing noise. In our case, these agent-environment interactions for the wind turbine control are illustrated in <Ref>. Deep Q-Network (DQN) is a variant of Q-learning that employs a deep neural network to estimate Q-values <cit.>. It replaces the traditional lookup table with a neural network, enabling generalization across states to handle large state spaces efficiently. In this study, a Double Deep Q-Learning (DDQN) approach is employed. DDQN is an extension of DQN that uses two neural networks: the primary network and the target network. The primary network selects the action and the target network evaluates its Q-value. This way of decoupling the action selection and evaluation addresses the overestimation of Q-values often observed in DQN algorithms due to the maximization bias <cit.>. 
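Schematically, this decoupled target computation and the soft update of the target network (both formalized in the loss function and update rule given next) can be sketched as follows. This is a generic illustration of the double-Q idea written by us in NumPy; the function and variable names are placeholders rather than the actual implementation, and terminal-state masking is omitted.

import numpy as np

def ddqn_targets(batch, q_primary, q_target, gamma):
    # Double-DQN target: the primary network selects the greedy action,
    # the target network evaluates it.  q_primary / q_target map a batch
    # of states to a (batch, n_actions) array of Q-values.
    states, actions, rewards, next_states = batch
    a_star = np.argmax(q_primary(next_states), axis=1)                 # action selection (primary)
    q_next = q_target(next_states)[np.arange(len(a_star)), a_star]     # evaluation (target)
    return rewards + gamma * q_next

def soft_update(phi_target, phi_primary, tau):
    # Polyak averaging of the target-network weights.
    return [tau * wp + (1.0 - tau) * wt for wp, wt in zip(phi_primary, phi_target)]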
The weights of the primary network are obtained by minimizing the following loss function: ℒ(ϕ) = 𝔼_(s, a, r, s')[ ( r + γ Q_ϕ'( s', argmax_a Q_ϕ(s', a) ) - Q_ϕ(s, a) )^2 ], where r is the reward, s is the state of the environment, a denotes a possible action that the agent can take, Q(s,a) is the Q-function, and ϕ and ϕ' are the sets of weights of the primary and target network, respectively. The loss function ℒ(ϕ) quantifies the residual of the Bellman equation, which formally defines the optimal Q-values <cit.>. The set of weights of the target network, ϕ', is updated using a soft update rule to enhance the stability of the learning process <cit.>: ϕ' ←τϕ + (1 - τ) ϕ'. To train the DDQN, an experience replay buffer is utilized. During the training phase, the agent interacts with the environment and stores the experiences (state, action, reward, next state) in the replay buffer. Subsequently, random batches of experiences are sampled from the replay buffer to train the network and update its weights. This process helps to break the correlation between consecutive samples and improves stability during the learning process. An additional consideration in solving this reinforcement learning problem is the need to balance maximizing power output with minimizing noise impact. These objectives are inherently conflicting, placing this problem within the Multi-Objective Reinforcement Learning (MORL) framework. MORL extends traditional reinforcement learning to handle problems involving multiple, often conflicting, objectives. Various strategies exist for addressing MORL problems. One of the simplest methods is to define the reward using a scalarized function that combines the rewards for each objective into a single global reward, thus transforming the problem into a single-objective reinforcement learning task <cit.>. Another approach involves Pareto optimization, which aims to find a set of optimal policies that lie on the Pareto front, where no other policy is superior in all objectives <cit.>. There are already methods that apply these MORL approaches using deep learning implementations <cit.>. In this work, a scalarized method is adopted to define a reward that balances the two objectives of maximizing power and minimizing noise. §.§.§ State-action structure The state of the agent must include all the necessary information about the environment to enable the agent to take the best possible action. If the state lacks relevant information, the agent may not be able to achieve optimal performance. The state of the agent is defined by the incoming wind conditions, specifically the wind speed, U_∞, along with the control variables of the wind turbine, which are the rotational speed, Ω, and the blade pitch angle, θ. To fit within the DDQN framework, the state space 𝒮 needs to be bounded. Some variables (rotational speed and pitch) are bounded by mechanical/structural limitations, whereas the wind speed is bounded by its physically expected range. Note that these bounds can be tuned for specific wind turbines and geographic sites. We include an additional constraint on the tip speed ratio λ = Ω R/U_∞, with the blade radius R=46.5 m, to ensure the correct behavior of the BEMT solver. The specific values of all the constraints are outlined below: * U_∞∈ [4,16] m/s, * Ω∈ [6,18] rpm, * θ∈ [-5,10] degrees, * λ∈ [3,12]. The actions available to the agent involve either increasing or decreasing the control variables. 
Since Q-learning is defined for a discrete action space 𝒜, the control variables can only be adjusted by fixed increments. Five distinct actions are defined: two for each control variable (one for increasing and one for decreasing it), and one for maintaining the current state (doing nothing). The specific fixed increment for each possible action is determined based on the sensitivity analysis detailed in <Ref>. Since the power generation and the sound pressure level are more sensitive to the rotational speed than to the pitch angle, the incremental adjustments for each variable have been chosen so that their corresponding actions have effects of the same magnitude. The actions that the agent can take are specified as follows: * a_1: increase Ω by 0.5 rpm, * a_2: decrease Ω by 0.5 rpm, * a_3: increase θ by 1 degree, * a_4: decrease θ by 1 degree, * a_5: do nothing. It is important to note that the transition between states is not deterministic a priori. Although we can freely adjust the control variables, the wind conditions depend on the environment and are beyond our control. This motivates the use of a model-free reinforcement learning method, as model-based approaches only guarantee convergence if the transition function between states is known. §.§.§ Reward definition The reward function is key when defining the RL algorithm, as it is the only feedback quantifying how successful the actions taken by the agent are. Therefore, the reward must be carefully crafted for each specific problem to learn an appropriate policy. As mentioned in <Ref>, this is a multi-objective optimization problem (or MORL), requiring a specific strategy to address the two conflicting objectives: maximizing power extraction while minimizing noise generation. In this work, we choose to blend the two objectives through a linear function to define the overall reward. The reward can be expressed as follows: r = r_PW + r_SPL, where r_PW denotes the reward associated with the power objective and r_SPL the one related to the SPL objective. As we already discussed in our previous work <cit.>, the power reward component should encourage the agent to obtain the highest energy generation possible, regardless of the wind conditions. To achieve this, we use the power coefficient C_p of the wind turbine. We set this reward to increase linearly from 0 to 1, with 1 corresponding to the maximum possible value of the power coefficient within the state space, C_p,nom. Therefore, the power reward component reads r_PW = C_p/C_p,nom. The reward term related to sound generation, r_SPL, is highly dependent on the specific problem being modeled. First, we need to select the observer locations where the SPL is computed, typically in critical areas where noise mitigation is a priority. In this work, we decide to set one observer 100 m downstream of the wind turbine, see <Ref>. Next, we decide how to penalize the sound generation (SPL) in the reward function. We opt to use a ReLU activation function that begins penalizing once the SPL exceeds a certain threshold, SPL_thr. Below this threshold, the agent focuses solely on maximizing power. Additionally, we define a ΔdB value that specifies by how much the SPL threshold can be exceeded before the reward becomes -1. Beyond this point, no matter how much power the agent generates, the total reward will be negative. Therefore, SPL_thr+ΔdB serves as an effective noise limit. For this specific application, we defined SPL_thr = 45 dB and ΔdB = 5 dB, but note that these values can be adapted to specific sites or regulations. 
The reward noise component can be seen in <Ref> and reads as follows: r_SPL = -ReLU((SPL-SPL_thr)/ΔdB). In addition, we need to include the bounds of 𝒮 in the reward. To make the agent learn the limits, it receives punishments whenever it performs a forbidden action, that is, an action that leads to a state s_t+1∉𝒮. In such cases, the agent receives a negative reward with a value of r = -3 and the action is revoked so that the control variables remain the same. The punishment is set to -3 to differentiate it from the possible negative reward of r_SPL. This distinction is made because exceeding the 𝒮 limits is considered worse than generating noise above the threshold. Finally, the reward function for the agent is the following: r(s_t,a_t,s_t+1) = C_p(s_t+1)/C_p,nom - ReLU((SPL(s_t+1)-SPL_thr)/ΔdB) if s_t+1∈𝒮, and r = -3 if s_t+1∉𝒮. §.§.§ Neural Network architecture When using DQN, neural networks (NN) are employed to approximate the Q-function. Typically, the NN is designed to approximate the Q-vectors, q(s), which represent the Q-values in the state s for all possible actions. That is, q(s)_i = Q(s,a_i). This approach is used because 𝒜 is a discrete space, and encoding these discrete actions as inputs can be problematic; it is more convenient to create a mapping between real subspaces. The neural network map is defined as q_ϕ( s): 𝒮⊂ℝ^3 →ℝ^5, where ϕ denotes all the NN weights, the output space dimension is |𝒜|=5 and s denotes the state vector, which is s = [U_∞,Ω,θ]^T. The neural network architecture employs a Multi-Layer Perceptron structure, consisting of two dense hidden layers with Rectified Linear Unit (ReLU) activation functions. The final layer uses a linear activation function instead of ReLU. This allows the Q-values to take on any sign, rather than being restricted to positive numbers. The number of dense layers and their sizes were determined through extensive trial and error. Ultimately, two layers with 128 and 64 neurons were found to be sufficient to accurately represent the Q-function. The architecture of the Q-network used to train the DDQN agent is shown in <Ref>. §.§.§ Training of the Wind Turbine DDQN Agent The main ideas of Q-learning have already been explained in <Ref>. However, the specific details of the DDQN training used to design the wind turbine controller are included here. During the training phase, the agent faces random steady wind conditions over short episodes of 20 time steps. This allows the agent to adapt to virtually any wind, even if the wind speed changes on timescales shorter than 20 time steps. This adaptability is achieved because experiences are stored in the replay buffer, and batches are selected randomly. Consequently, the specific temporal evolution of states during the agent's experience is not critical, provided that the stored transitions comprehensively represent all actual transitions in the system. The Double Deep Q-Network (DDQN) agent is trained using the hyperparameters listed in <Ref>. To illustrate the influence of each parameter on the training process, a pseudocode for the DDQN training is presented in <Ref>. The effectiveness of the learning progress during training is assessed by displaying the Q-values of the state-action pairs encountered by the agent, as shown in <Ref>. As the agent learns, the Q-values of the actions taken at each state are expected to increase, as depicted in the figure. For the implementation, we utilized OpenAI Gym <cit.> to create the environment, serving as a bridge between the reinforcement learning formulation and the OpenFAST wind turbine solver. 
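As an illustration, a minimal environment of this kind might be structured as sketched below. This is only a schematic sketch under our stated assumptions: run_openfast_step is a placeholder standing in for the actual coupling to OpenFAST (returning the power coefficient and the SPL at the 100 m observer), the class and method names are ours, and episode-length bookkeeping is omitted.

import numpy as np
import gym
from gym import spaces

class WindTurbineEnv(gym.Env):
    # Schematic torque-pitch control environment, state = [U_inf, Omega, theta].
    ACTIONS = [(+0.5, 0.0), (-0.5, 0.0), (0.0, +1.0), (0.0, -1.0), (0.0, 0.0)]  # (dOmega [rpm], dtheta [deg])

    def __init__(self, cp_nom, spl_thr=45.0, delta_db=5.0):
        self.action_space = spaces.Discrete(5)
        self.observation_space = spaces.Box(low=np.array([4.0, 6.0, -5.0]),
                                            high=np.array([16.0, 18.0, 10.0]),
                                            dtype=np.float32)
        self.cp_nom, self.spl_thr, self.delta_db = cp_nom, spl_thr, delta_db

    def step(self, action):
        d_omega, d_theta = self.ACTIONS[action]
        u, omega, theta = self.state
        omega_new, theta_new = omega + d_omega, theta + d_theta
        lam = omega_new * 2.0 * np.pi / 60.0 * 46.5 / u   # tip speed ratio, R = 46.5 m
        if not (6.0 <= omega_new <= 18.0 and -5.0 <= theta_new <= 10.0 and 3.0 <= lam <= 12.0):
            return self.state, -3.0, False, {}            # forbidden action: punish and revoke
        cp, spl = run_openfast_step(u, omega_new, theta_new)  # placeholder for the BEMT/BPM evaluation
        reward = cp / self.cp_nom - max(0.0, (spl - self.spl_thr) / self.delta_db)
        self.state = np.array([u, omega_new, theta_new], dtype=np.float32)
        return self.state, reward, False, {}

    def reset(self):
        # Random steady wind per episode; control variables start at mid-range values.
        self.state = np.array([np.random.uniform(4.0, 16.0), 12.0, 0.0], dtype=np.float32)
        return self.state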
TF-Agents <cit.> was employed to develop the agent and manage the entire training process. All neural networks were constructed using the Keras API <cit.>, and the Adam optimizer was used for training <cit.>. § RESULTS The agent's performance is assessed under various wind conditions. First, we validate the operational point that the agent reaches under constant wind conditions, assessing its optimality via a Pareto front. Next, we evaluate the agent's ability to adapt to turbulent wind conditions, comparing its control strategy against a classic controller. Finally, we estimate the agent's annual energy production and compare it against a classic control strategy designed to maximize energy extraction. §.§ Steady wind validation The simplest test for evaluating the agent is to assess its performance under steady wind conditions. In this scenario, with unchanging wind conditions, the agent should identify, reach and maintain the state that maximizes the cumulative reward. This optimum state does not depend on the initial conditions of pitch and rotor speed, as the wind conditions are steady. However, it is important to note that the agent's actions are discrete, limiting its ability to reach every possible state. To validate the performance and robustness of the agent, a Pareto diagram is used. The agent is tested for different initial conditions (with the same steady wind speed) to determine if it can consistently reach optimal states (at the Pareto front), regardless of the initial state. To illustrate the agent's trajectory (sequence of state-action-reward) for each initial condition, these trajectories are displayed on a power coefficient - sound pressure level diagram, along with values for 1000 random states from 𝒮. <Ref> presents the Pareto diagram, illustrating the agent's trajectory from four different initial conditions. It is clear that, regardless of the initial condition, the agent successfully achieves high power outputs up to the maximum permissible decibel level. Furthermore, the agent demonstrates robust performance by consistently staying below the maximum SPL limit (in dBA) while remaining close to it to maximize power. It can be seen that the RL agent does not always reach the same final state, but that the optima are relatively close to each other. This suggests the existence of local optima. In addition, the discrete nature of the Q-learning actions may not allow the agent to reach certain optima, since not all states are reachable from a given initial state. Despite these issues, the agent consistently avoids acoustic penalties and achieves high power outputs, with power coefficients ranging from 0.26 to 0.30. For completeness, <Ref> shows the initial conditions of the control variables for the trajectories displayed in <Ref>, as well as the final-state control variables. Overall, the robustness of the agent has been tested on a simple steady wind scenario. The agent is able to reduce the wind turbine noise to admissible levels while maximizing power. Furthermore, the agent finds optimum operational conditions regardless of the initial condition, showing the robustness of the algorithm. In other words, the neural network which approximates Q(s,a) has covered its entire input space 𝒮×𝒜. §.§ Control Strategy for Experimental Winds The agent's capabilities are now tested under experimental wind conditions. We compare the energy extraction between our agent and two controllers that are designed solely to maximize power. 
By doing so, we can demonstrate how much power we need to sacrifice to keep the wind turbine at acceptable decibel levels. The performance of three controllers is compared: * Classic wind turbine controller: Standard wind turbine controller designed to reach the power curve of the wind turbine, using torque or pitch control depending on whether the wind speed is above or below the rated wind speed. Details can be seen in <ref>. * Power DDQN: Agent designed solely to maximize power. It is trained with no noise penalization, that is, only the power reward is included, see <ref>. * Quiet DDQN: Agent designed to maximize power without producing more than 45 dB at 100 m downwind of the rotor. It is trained with the complete reward definition including power and noise, see <ref>. The wind data used to validate the control performance under real wind conditions was obtained from the measurement and instrumentation data center (MIDC) of NREL, see <cit.>. These daily wind measurements are openly available. For this study, wind speed and wind direction measurements at an 80 m height from June 1, 2023, to June 1, 2024, are selected. <Ref> displays a wind rose illustrating the wind speed and wind direction of this dataset. Since this study concentrates on torque-pitch control, it is assumed that the incoming wind is consistently aligned with the wind turbine, a condition typically managed by yaw control. Therefore, we assume perfect alignment and only the wind speed distribution is used in the subsequent results. Note that the Power DDQN agent considers only power optimization. Therefore, it is only applicable to the below-rated wind speed region defined in <ref>. When the wind speed exceeds the rated value, the control strategy maintains nominal power rather than maximizing it. To achieve this behavior with a reinforcement learning agent, the reward function would need to be modified. Consequently, the wind speed distribution used to validate the agent under turbulent wind conditions is restricted to below-rated wind speeds, enabling a meaningful comparison between the three controllers. The control performance of the three agents is analyzed in detail over an 8-hour time span, using the wind speed distribution shown in <Ref>. The RL controllers are allowed to update the control variables every minute. In the next section, we will estimate the annual energy production for all controllers. <Ref> shows the results from the different controllers over the first 8 hours of the yearly dataset. <Ref> displays the evolution of the control parameters, while <Ref> illustrates the power and the noise generated 100 m downwind. It is noted that the Power agent matches the power extraction achieved by the classic control strategy, essentially implementing the same control approach but with the discrete actions defined for the reinforcement learning agent. Since neither of these controllers is designed to consider the acoustics of the wind turbine, both generate high levels of noise when the wind speed is sufficiently high. In contrast, the Quiet agent can match the power generation of the power-oriented controllers when the wind speed is moderate. When wind speeds are higher, it extracts as much power as possible while keeping noise levels below the threshold value. Moreover, all three controllers maintain a constant pitch value to maximize power extraction. However, the Quiet agent adjusts the pitch angle to reduce the noise levels at higher wind speeds. 
This test shows the flexibility of the RL strategy for control and highlights the possibility of including multi-objectives. In addition, we see that there is no need to have an a priori knowledge of the turbine performance (e.g., the power curve or rated maximum power) since the RL will learn these characteristics when trained. §.§ Annual wind energy estimation There are different methodologies to obtain an estimate of the annual generation of wind energy, <cit.>. The standard procedure is based on decoupling the wind turbine from the wind distribution of the particular site. It considers the observed wind speed frequency histogram to fit a theoretical probability density function (PDF) for the wind speed. It also requires a transfer function that models the relation between power output and wind speed. Typically, the Weibull distribution is used to fit the wind speed frequency histogram. <cit.> showed that although the Weibull distribution may not be substantiated for most sites, it does not include important errors on the energy estimations. The probability density function of the Weibull distribution is given by: f_U(u) = (k/c)·(u/c)^k-1exp(-(u/c)^k) , where U denotes the random variable that models the wind speed. The fit of the shape and scale parameters k and c are established from the mean and variance of the wind speed, μ_U=𝔼[U] and σ_U^2 = 𝕍[U]. The specific relations can be seen in <cit.> work and are the following: k = (σ_U/μ_U)^-1.086, and c = μ_U/Γ(1+1/k), where Γ denotes the special gamma function. This formulation can be employed to obtain the Weibull probability density function that represents the one-year experimental data reported by <cit.>. This is illustrated in <Ref>. Regarding the transfer function between power and wind speed, various strategies exist <cit.>. The Theoretical Power Curve (TPC) does not account for control mechanisms. Furthermore, since our wind turbine control strategy considers acoustic generation, the wind turbine will exhibit a significantly different Effective Power Curve (EPC). It is necessary to compute an EPC that accurately represents the transfer function between power and wind speed for our specific control scenario. The EPC can be computed using simulations of the wind turbine control. The agent is faced against a turbulent wind that covers all the range of interest of wind speed, mainly between cut-in and cut-off wind speed. This turbulent wind must be representative of the turbulent nature of the wind that the wind turbine is going to face during operation. Once the simulation is done, all the pairs of data points (U_∞,C_p) can be used to obtain a transfer function for the power coefficient C_p(U_∞). A subset of 100 hours of the experimental wind measurements from the MIDC <cit.> has been used to obtain the EPC of the SWT2.3-93 wind turbine using the Classic Control and the Quiet DDQN agent already introduced on <Ref>. <Ref> illustrates the results of this simulations, showing the operational laws of control for each agent on <Ref> and the SPL and power associated on <Ref> respectively. The behavior is as expected, the Quiet Agent does not increase the rotational speed above 10.5 rpms to avoid surpassing the SPL threshold and uses the pitch to reduce noise if needed, which explains the high variance bars on the pitch (see <ref>) and low ones in the rotational speed (see <ref>). Meanwhile the classic control can increase the rotational speed freely and the pitch is only use in above-rated wind speed scenarios, see <ref>. 
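As a side note, the Weibull fit introduced above can be reproduced in a few lines. The following minimal sketch estimates the shape and scale parameters from a series of measured wind speeds and evaluates the resulting density; the synthetic sample, the function names and the use of NumPy/SciPy are illustrative assumptions and are not part of the original processing chain.

import numpy as np
from scipy.special import gamma

def fit_weibull(wind_speeds):
    # k and c from the mean and standard deviation of the measured wind speed,
    # following the empirical relations quoted above.
    mu = np.mean(wind_speeds)
    sigma = np.std(wind_speeds)
    k = (sigma / mu) ** (-1.086)
    c = mu / gamma(1.0 + 1.0 / k)
    return k, c

def weibull_pdf(u, k, c):
    # f_U(u) = (k/c) (u/c)^(k-1) exp(-(u/c)^k)
    return (k / c) * (u / c) ** (k - 1.0) * np.exp(-((u / c) ** k))

# Synthetic stand-in for one year of wind speed measurements:
rng = np.random.default_rng(0)
samples = 8.0 * rng.weibull(2.0, size=10_000)
k, c = fit_weibull(samples)
print(f"shape k = {k:.2f}, scale c = {c:.2f} m/s")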
The large standard deviations on the classic control pitch are due to the PID control, which is dynamically adjusting to the turbulent wind. In <Ref> it is illustrated how the classic control matches on average the TPC. However, it is not able to adjust perfectly to the turbulent wind, showed by its high variance on the above-rated wind speed region. The Quiet agent achieves less power than the classic one but is able to maintain the sound pressure level below the specified threshold of 45 dB A, see <ref>. Traditional approaches use only the average value or polynomial fits of the historical/simulated data to construct the EPC. All these methods do not capture the variance of the data in the model. To account for this variability on the EPC model we introduce a statistical method. For simplicity, we model the power coefficient C_p(U_∞), which is obtained by non-dimensionalizing the EPC data. A Gaussian Process Regression <cit.> can be employed to model the power coefficient at each wind speed as a Gaussian probability distribution, C_p(U_∞)∼𝒩[μ_C_p(U_∞),σ_C_p(U_∞)]. <Ref> shows the power coefficients points obtained after the wind turbine control simulation. This data is used to fit the Gaussian Process (GP) model and obtain the mean and standard deviation of the power coefficient as functions of the wind speed, this fit is also included in <Ref>. The C_p distribution for specific values of the wind speed is illustrated on <Ref> where it is compared with the histogram of the power coefficient from the data. Assuming the Weibull probability distribution for the wind speed U_∞ combined with the GP model for the C_p distribution, the estimation of the annual wind energy can be performed by computing the expectation of the wind energy E_w. Mathematical details on the statistical distributions are provided in <ref>. <Ref> presents the annual energy estimation for each control strategy. It is important to note that the Power DDQN controller is applicable only in the below-rated wind speed region. Therefore, when computing the annual wind energy generation with this control, we impose nominal power for wind speeds above the rated value. The Power DDQN controller is included in the comparison to ensure that it remains competitive with the Classical Control within the below-rated wind speed range. The Quiet DDQN controller is able to control the wind turbine without surpassing the sound pressure level threshold selected and obtains an 78% of the annual energy production compared to the Classical Control. Additionally, in <Ref> it is shown the average standard deviation across the wind speed for the power coefficient GP fit. There, it is shown that the Quiet agent exhibits the control strategy with the least variance. § CONCLUSIONS In conclusion, integration of reinforcement learning with wind turbine control holds promise for optimizing energy generation and efficiency while minimizing acoustic environmental impact. A DDQN reinforcement learning agent can replicate the control strategy of a standard wind turbine controller without prior explicit knowledge of the wind turbine, relying solely on a wind turbine solver for experiential learning. Moreover, advanced control strategies can be readily implemented by modifying the reward function. In this work, an RL controller is defined to maximize power output while maintaining acceptable decibel levels, thereby incorporating acoustic effects into the control strategy. 
This demonstrates that MORL is capable of dynamically balancing two different objectives effectively. An effective power curve is computed from control simulations of turbulent wind data. This allows us to characterize the reinforcement learning control strategy, obtaining its operational laws and an annual wind energy estimate. The methodology is validated using a SWT2.3-93 wind turbine with a rated power of 2.3 MW. We evaluate the yearly energy production for a realistic site. The DDQN reinforcement learning control provides energy production similar to that of a traditional control. The methodology presented allows for the inclusion of noise limits, leading to a 22% reduction in the annual energy extraction when activating a maximum allowed noise of 45 dB (100 meters downwind of the turbine). Further research directions include investigating Multi-Agent Reinforcement Learning algorithms for cooperative control of wind turbines within farms, which could enhance overall system performance while controlling noise at the farm level. § STATISTICAL DETAILS FOR THE EPC MODEL Let us consider a two-dimensional random variable 𝕏 = (C_p,U). This random variable models the probability of obtaining a certain wind speed together with a certain power coefficient. The wind speed marginal distribution accounts for the global wind speed distribution at the location of the wind turbine. Meanwhile, the power coefficient distribution measures the performance of the wind turbine at different wind speeds. There exists an a priori unknown joint probability density f(c_p,u). The power generated by the wind turbine, P, is a function of this random variable, so it is itself a random variable, P = 1/2ρ A C_pU^3. The wind energy, E_w, that the wind turbine extracts from the wind over a given period can be written as E_w = ∫_0^T P dt. However, this statistical model does not include information about the temporal evolution of C_p and U. We can compute the expectation of the wind energy using the expectation of the power over a period of time, 𝔼[E_w] = Δ T 𝔼[P] = Δ T ∫ 1/2ρ A c_p u^3 f(c_p,u) dc_p du, where Δ T denotes the period of time. Notice that this only makes sense if the unknown wind speed evolution U_∞(t) is consistent with the annual distribution modeled by the random variable U. Although we do not know the joint PDF, we know that the wind speed random variable U follows a Weibull distribution. Therefore, the marginal probability density function of U, f_U(u), is a Weibull PDF that follows <ref>. On the other hand, we can obtain the distribution of the power coefficient for each wind speed value. This is the conditional power coefficient PDF, f_C_p(c_p|U=u). From these two PDFs we can obtain the joint PDF, using the following relation: f_C_p(c_p| U=u) = f(c_p,u)/f_U(u). In this work, the conditional power coefficient PDF is obtained using a Gaussian Process Regression algorithm. Therefore, its density function is the following: f_C_p(c_p| U=u) = 1/√(2πσ_C_p(u)^2) exp(-1/2 ((c_p-μ_C_p(u))/σ_C_p(u))^2), where the mean μ_C_p(u) and standard deviation σ_C_p(u) are obtained from the wind turbine control simulations. Finally, the expectation of the wind energy can be computed as follows: 𝔼[E_w] = Δ T𝔼[P] = (1/2Δ Tρ A) ∫_U_in^U_off(∫_0^C_p,nom c_pf_C_p(c_p|U=u)dc_p) u^3 f_U(u) du. Notice that the inner integral is the expectation of the conditional distribution, 𝔼[C_p|U=u] = μ_C_p(u). Hence, the expectation of the wind energy only requires the mean of the distribution fitted by the GP.
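A minimal numerical sketch of this estimate is given below: the conditional mean μ_C_p(u) is obtained from a Gaussian Process fit to simulated (wind speed, power coefficient) pairs, the Weibull density models the site, and the outer integral is evaluated by quadrature. The synthetic simulation data, the Weibull parameters, the rotor area (93 m diameter for the SWT2.3-93) and the library choices (scikit-learn, SciPy) are assumptions made for illustration only, not the original toolchain.

import numpy as np
from scipy.integrate import trapezoid
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder (wind speed, power coefficient) pairs standing in for the EPC simulations:
rng = np.random.default_rng(0)
u_sim = rng.uniform(3.0, 25.0, 500)
cp_sim = 0.45 * np.exp(-0.5 * ((u_sim - 9.0) / 6.0) ** 2) + 0.02 * rng.standard_normal(500)

gp = GaussianProcessRegressor(kernel=RBF(5.0) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(u_sim.reshape(-1, 1), cp_sim)

# Weibull marginal f_U(u) for the site, with shape k and scale c from the fit above:
k, c = 2.0, 8.0
u = np.linspace(3.0, 25.0, 400)                       # cut-in to cut-off range
f_u = (k / c) * (u / c) ** (k - 1.0) * np.exp(-((u / c) ** k))

rho = 1.225                                           # air density, kg/m^3
area = np.pi * 46.5 ** 2                              # 93 m rotor diameter (SWT2.3-93)
delta_t = 365.0 * 24.0 * 3600.0                       # one year, in seconds
mu_cp = gp.predict(u.reshape(-1, 1))                  # E[C_p | U = u], the GP mean

# E[E_w] = 1/2 dT rho A * integral of mu_Cp(u) u^3 f_U(u) du
# (no clipping at nominal power; only the structure of the expectation is kept)
expected_energy = 0.5 * delta_t * rho * area * trapezoid(mu_cp * u ** 3 * f_u, u)
print(f"expected annual energy of about {expected_energy / 3.6e9:.0f} MWh")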
The variance of the control, σ_C_p(u)^2, has no influence on the estimation of the wind energy. However, it gives us information about the control and can be useful to measure, this can be done computing the expectation of the variance. 𝔼[σ_C_p(U)] = ∫_U_in^U_offσ_C_p(u)f_U(u)du § STANDARD WIND TURBINE CONTROL STRATEGY The control strategy for variable-speed horizontal-axis wind turbines can be divided into four regions based on wind speed. Although each region definition may vary depending on the specific control design, the fundamental objectives within each region are as follows: * Region I: When the wind speed is below the cut-in value, the turbine cannot operate. * Region II: At wind speeds above the cut-in threshold but below the rated speed, the primary objective is to optimize power generation. This is achieved by adjusting the rotor speed to align with the power curve of the wind turbine, utilizing a predetermined lookup table. * Region III: When wind speeds exceed the rated value, the focus shifts to maintaining a consistent rotor speed across a broad range of wind velocities. This is typically achieved through adjustment of the blade pitch, commonly implemented using a proportional-integral-derivative (PID) control strategy, although there are more sophisticated approaches <cit.>. * Region IV: When the wind speed surpasses the cut-off value, the turbine must be shut down for safety. The transition between regions II and III, sometimes referred to as Region II 1/2, is characterized by maintaining a constant rotor speed. Although there are different options depending on the specific control design. <Ref> illustrates these regions on the power curve of the wind turbine. Further details on classical wind turbine control strategies can be found in the works of <cit.> or <cit.>. The controller module in OpenFAST facilitates the customization of controllers. In this study, the wind turbine controller is derived from the OpenFAST implementation from <cit.>, tailored to suit the characteristics of the SWT wind turbine. § ACKNOWLEDGMENTS Esteban Ferrer and Oscar A. Marino would like to thank the support of Agencia Estatal de Investigación for the grant "Europa Excelencia" for the project EUR2022-134041 funded by MCIN/AEI/10.13039/501100011033) and the European Union NextGenerationEU/PRTR and also the funding received by the Grant DeepCFD (Project No. PID2022-137899OB-I00) funded by MICIU/AEI/10.13039/501100011033 and by ERDF, EU. This research has been cofunded by the European Union (ERC, Off-coustics, project number 101086075). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. plainnat
http://arxiv.org/abs/2407.13155v1
20240718044613
Real-Time 3D Occupancy Prediction via Geometric-Semantic Disentanglement
[ "Yulin He", "Wei Chen", "Tianci Xun", "Yusong Tan" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Occupancy prediction plays a pivotal role in autonomous driving (AD) due to the fine-grained geometric perception and general object recognition capabilities. However, existing methods often incur high computational costs, which contradicts the real-time demands of AD. To this end, we first evaluate the speed and memory usage of most public available methods, aiming to redirect the focus from solely prioritizing accuracy to also considering efficiency. We then identify a core challenge in achieving both fast and accurate performance: the strong coupling between geometry and semantic. To address this issue, 1) we propose a Geometric-Semantic Dual-Branch Network (GSDBN) with a hybrid BEV-Voxel representation. In the BEV branch, a BEV-level temporal fusion module and a U-Net encoder is introduced to extract dense semantic features. In the voxel branch, a large-kernel re-parameterized 3D convolution is proposed to refine sparse 3D geometry and reduce computation. Moreover, we propose a novel BEV-Voxel lifting module that projects BEV features into voxel space for feature fusion of the two branches. In addition to the network design, 2) we also propose a Geometric-Semantic Decoupled Learning (GSDL) strategy. This strategy initially learns semantics with accurate geometry using ground-truth depth, and then gradually mixes predicted depth to adapt the model to the predicted geometry. Extensive experiments on the widely-used Occ3D-nuScenes benchmark demonstrate the superiority of our method, which achieves a 39.4 mIoU with 20.0 FPS. This result is ∼ 3 × faster and +1.9 mIoU higher compared to FB-OCC, the winner of CVPR2023 3D Occupancy Prediction Challenge. Our code will be made open-source. § INTRODUCTION Vision-based occupancy prediction <cit.> leverages surround-view camera images of ego vehicle to estimate object occupancy and semantics within a voxel space <cit.>. Compared to 3D object detection <cit.>, it offers finer-grained 3D scene perception and produces a LiDAR-free alternative. Besides, by determining object presence within grid cells, occupancy prediction models can identify general objects, effectively handling out-of-vocabulary and unusual obstacles. Despite these strengths, existing methods <cit.> often suffer from low inference speed (e.g., 1 ∼ 3 FPS on Nvidia A100 <cit.>) and high memory usage (e.g., > 10,000 MB <cit.>) due to the high computational cost of 3D voxel features.
These limitations hinder their application in AD vehicles equipped with on-board GPUs. To redirect the focus from solely prioritizing accuracy to also considering deployment friendliness, we conduct an extensive evaluation of the speed and memory usage for most public available methods. Through an extensive review and evaluation of existing methods, we identify a core challenge in achieving both fast and accurate performance: the strong coupling between geometry and semantic. As shown in Fig. <ref>, the geometric prediction (depth) serves as the input of the 2D-to-3D feature projection and impacts the downstream semantic classification. Therefore, inaccurate prediction depth can destroy the discriminative power of features and increases optimization difficulty. To address this issue, we propose to decouple geometric and semantic learning from both network design and learning strategy two perspectives. As for the network design, existing methods primarily rely on heavy 3D networks <cit.> to simultaneously refine geometric structure and learn semantic knowledge. However, the high computational cost of 3D networks is unaffordable for real-time methods. Recently, several works <cit.> collapse 3D voxel features into BEV features to improve efficiency, but they often fail to achieve satisfactory accuracy (e.g., FastOcc <cit.> in Fig. <ref>), as the BEV representation loses height information <cit.>. Therefore, it is both natural and promising to adopt a hybrid BEV-Voxel representation, which combines the strengths of computational efficiency in BEV representation and geometric integrity in voxel representation. To this end, we propose a Geometric-Semantic Dual-Branch Network (GSDBN) guided by two principles: sparse geometry and dense semantics. In the BEV branch, we employ BEV-level temporal fusion and a 2D semantic encoder with U-Net <cit.> structure to extract dense semantic features. In the voxel branch, we propose a 3D geometric encoder with a re-parameterized 3D large-kernel convolution, which refines the sparse geometric structure with enhanced receptive field and reduces computation through the re-parameterization technique. To fuse the features of two branches, we propose a BEV-Voxel lifting module, which projects BEV-level semantic features into the voxel space along the height dimension, thus effectively recovering the lost height information. As for the learning strategy, followed by Lift-Splat-Shoot (LSS) <cit.>, almost all existing methods <cit.> directly utilize the prediction depth for 2D-to-3D view transformation. However, they overlook that the prediction depth is not always accurate, especially at the early stage of training, which exacerbates the coupling problem and leads to unstable optimization. Inspired by language models <cit.>, which provide sequential ground-truth tokens to predict the next token, we replace the prediction depth with the ground-truth depth for 2D-to-3D view transformation during training. However, this strategy performs poorly when using the prediction depth for testing, as the model is not adapted to the prediction depth and cannot correct errors in the predicted geometry. To this end, we introduce a Geometric-Semantic Decoupled Learning (GSDL) strategy. Initially, we use ground-truth depth for 2D-to-3D view transformation to maintain accurate geometric structure, allowing for isolated semantic learning. Gradually, we mix the ground-truth depth with the prediction depth, which enables the model to learn to refine the predicted geometry. 
By decoupling the learning of geometric refinement and semantic knowledge, we effectively reduce the optimization difficulty and achieve further accuracy improvements without incurring additional deployment costs. Our contributions can be summarized as follows: * We conduct an extensive evaluation of speed and memory usage for most public available methods, which aims to redirect the focus from solely prioritizing accuracy to also considering deployment friendliness. * We propose a dual-branch network with a hybrid BEV-Voxel representation, which separates the learning of sparse geometry and dense semantics, ensuring both computational efficiency and geometric integrity. * We propose a novel learning strategy to decouple the learning of geometric refinement and semantic knowledge, which achieves consistent accuracy improvements across various pre-training models and methods. * We propose GSD-Occ, a Geometric-Semantic Disentangled Occupancy predictor, which establishes a new state-of-the-art with 39.4 mIoU and 20.0 FPS for real-time occupancy prediction. § RELATED WORKS Vision-based BEV Perception. Bird's-Eye View (BEV) perception <cit.> has recently seen significant advancements, developing as a crucial component in autonomous driving (AD) due to its computational efficiency and rich visual semantics. By leveraging 2D-to-3D view transformation to project camera image features into the BEV representation, multiple tasks can be integrated into a unified framework. View transformation can be broadly categorized into two types: forward projection and backward projection. The former employs explicit depth estimation to project image features into 3D space <cit.>. In contrast, the latter first initializes a BEV space and then implicitly models depth information by querying image features using a spatial cross-attention <cit.>. Although BEV perception excels in 3D object detection, it still struggle with corner-case and out-of-vocabulary objects, which are crucial for ensuring the safety of autonomous driving. To address this issue, 3D occupancy prediction has been proposed, quickly emerging as a promising solution in AD <cit.>. 3D Occupancy Prediction. 3D occupancy prediction reconstructs the 3D space using continuous voxel grids, which offers an enhanced geometry information and capability in detecting general objects. A straightforward idea is to replace the BEV representation of 3D object detection methods with the voxel representation, and then append a segmentation head <cit.>. However, voxel representations incur substantial computational and memory costs compared to BEV. To address this, TPVFormer <cit.> divided the 3D space into three-view planes for feature extraction, followed by interpolation to recover voxel representations. SurroundOcc <cit.> and CTF-Occ <cit.> utilized multi-scale encoders to gradually enhance voxel representations. FB-OCC <cit.> adapt a hybrid of forward and backward view transformation to complete sparse voxel features. COTR <cit.> proposes a compact voxel representation through downsampling, yet its feature enhancement network is so heavy that slows down the runtime significantly. PannoOcc <cit.> introduced a novel panoramic segmentation task based on occupancy representation and adapt sparse 3D convolutions to decrease computation. Despite progress in accuracy, existing methods often suffer from speed and memory limitations. 
Therefore, we establish a benchmark that incorporates speed and memory usage to provide a more comprehensive and fair assessment of methods. Deployment-Friendly Occupancy Prediction. Recently, several works have focused on the deployment friendliness of occupancy prediction. For example, FlashOcc <cit.> directly uses a BEV representation to predict geometry and semantic, thereby reducing computational costs. Similarly, FastOcc <cit.> employed a BEV representation but enhanced it using a residual structure that integrates voxel features obtained from view transformation. SparseOcc <cit.> employed a pure sparse transformer-based network to reduce computation. However, these methods typically evaluate the speed or memory usage of only a limited set of methods. To establish a comprehensive and fair evaluation benchmark, this work evaluates most public available methods using a same experimental environment. Moreover, while existing methods significantly improve efficiency, they often fail to achieve satisfactory accuracy in real-time. This work addresses this limitation by decoupling the learning of geometry and semantic, thereby achieving both real-time and accurate performance. § METHOD §.§ Problem Formulation Given a sequence of images I_i,t∈ℝ^H_i × W_i × 3 from N_c surround-view cameras over T frames, where i∈{1,..., N_c} and t∈{1,...,T}. The camera intrinsic parameters{K_i} and extrinsic parameters{[R_i | t_i]} in each frame are also known. Vision-based 3D occupancy prediction aims to estimate the state of 3D voxels within the range [X_s, Y_s, Z_s, X_e, Y_e, Z_e] around the ego vehicle. The shape of the 3D voxels is [X, Y, Z] (e.g., [200,200,16] in <cit.>), where [X_e-X_s/X, Y_e-Y_s/Y, Z_e-Z_s/Z] is the size of each voxel. Each voxel contains occupancy state (“occupied” or “empty”) and specific semantic information (“category” or “unknown”). Benifit from the learning of occupancy, 3D occupancy prediction can develop a general object representation to handle out-of-vocabulary and unusual obstacles. §.§ Overall Architecture The overview of the Geometric-Semantic Disentangled Occupancy predictor (GSD-Occ) is shown in Fig. <ref>, which includes an image encoder to extract image features, a 2D-to-3D view transformation to project image features into 3D space, a geometric-semantic dual-branch network (Sec. <ref>) to efficiently maintain geometric integrity and extract rich semantics, and a geometric-semantic decoupled learning strategy (Sec. <ref>) to further enhance the ability of geometric refinement and semantic learning. Image Encoder. Given a set of surround-view camera images at T-th frame, denoted as I_T = {I_i,T∈ℝ^H_i × W_i × 3}_i=1^N_c, we employ a pre-trained backbone network (e.g., ResNet-50 <cit.>) to extract image features F={F_i ∈ℝ^C_F × H_F × W_F}_i=1^N_c, where [H_i, W_i, 3] and [C_F,H_F,W_F] are the shapes of images and features, respectively. N_c is the number of cameras on the ego-vehicle. 2D-to-3D View Transformation. 2D-to-3D view transformation aims to convert 2D image features F to voxel representation. Given the limited learning capacity of real-time models, we adopt an explicit view transformation module <cit.> supervised by depth. Specifically, the image features F are first fed into the DepthNet <cit.> to generate a predicted depth distribution D={D_i ∈ℝ^D_b i n× H_F × W_F}_i=1^N_c, where D_b i n is the number of depth bins. With F and the D as input, a pseudo point cloud feature P ∈ℝ^N_c D_b i n H_F W_F × C_p can be obtained through outer product F ⊗ D. 
Finally, voxel-pooling is applied to the P to obtain the voxel features V ∈ℝ^C ×X/2×Y/2×Z/2, with 2× downsampling to reduce computational complexity. §.§ Geometric-Semantic Dual-Branch Network The key idea behind Geometric-Semantic Dual-Branch Network (GSDBN) module is to employ a hybrid BEV-Voxel representation, where sparse voxel features server as “skeleton” to maintain 3D geometric information and computation-efficient BEV features are used as “flesh” to complete voxel features with semantic information. We first elaborate the two principles for the design of GSDBN i.e., sparse geometry and dense semantic. (1) Sparse geometry in 3D occupancy grids reflects the discretization of the physical world, which leads to the sparsity of voxel features, with over 35% of values being zero after the 2D-to-3D view transformation. (2) Dense semantic, on the other hand, is necessary to maintain the model's classification ability, as excessive zero values can severely degrade performance. Then, we detail GSDBN based on the two key principles. §.§.§ Semantic BEV Branch BEV-Level Temporal Fusion. To reduce computation and memory costs, we propose using BEV features instead of voxel features employed in  <cit.> for temporal fusion. Besides, we introduce a history feature queue as in <cit.> to avoid time-consuming and redundant feature re-computation in  <cit.>. Specifically, we collapse the voxel feature V along the height dimension to obtain the BEV feature B∈ℝ^C×X/2×Y/2, and maintain a memory queue of length τ to store the historical BEV features. To fuse the BEV features of the historical τ frames with the current frame, we first warp them to the current timestamp T and then feed them into 2D convolutions to obtain the temporal BEV features B_t∈ℝ^C ×X/2×Y/2. The sparsity of voxel features enable BEV features to retain rich information, resulting in an acceptable accuracy degradation (0.69 mIoU) and a notable decrease in inference time (0.025 s). 2D Semantic Encoder. We employ a light-weight 2D UNet-like <cit.> encoder to extract features with rich semantic information. Specifically, the temporal BEV feature B_t is downsampled and then upsampled by a factor of 4, with residuals utilized to fuse multi-scale features. This process yields the semantic BEV features B_s∈ℝ^C^'×X/2×Y/2. §.§.§ Geometric Voxel Branch 3D Geometric Encoder. Inspired by  <cit.>, we extend re-parameterization technique to 3D occupancy prediction by designing a large-kernel re-parameterized 3D convolution for geometric encoding. By this way, we can enhance the receptive field of voxel features to refine the geometric structure, while the re-parameterization technique significantly reduces inference time. During training, we employ a non-dilated small-kernel and multiple dilated small-kernel 3D convolutions along with batchnorm (BN) layers. This combination helps capture small-scale patterns and enhance the receptive filed. During inference, these parallel small-kernel 3D convolutions can be converted into a large-kernel convolution to improve efficiency. As illustrated in Fig <ref>, we show a case of a 3D convolutional kernel with size [K_X,K_Y,K_Z] equals to [11,11,1]. Since omitting pixels in the input is equivalent to inserting extra zero entries into the convolution, a dilated convolution with a small kernel can be equivalently converted into a non-dilated one with a sparse larger kernel <cit.>. 
For a small 3D convolutional kernel W ∈ℝ^k_x × k_y × k_z with the dilation rate (r_x,r_y,r_z), this transformation can be elegantly implemented by a transpose convolution: W^' = conv_transpose3d(W, I, s=(r_x,r_y,r_z)) where I∈ℝ^1 × 1 × 1 and s means the stride. Then, the sparse kernel W^' and the subsequent 3D BN layer (with the parameters of accumulated mean μ, standard deviation σ, the learned scaling factor γ, and the learned bias β) can be converted into a convolution with a bias vector: W^'' = γ/σ W^', b^'' = - μγ/σ + β. The weight and bias of the final large kernel can be obtained by summing W^'' and b^'' across multiple parallel convolutions: Ŵ = ∑_i=1^C_szero_padding (W^''_i), b̂ = ∑_i=1^C_s(b^''_i), where C_s is the number of small-kernel convolutions and zero_padding is the zero-padding function that pads W^'' to the size of large kernel [K_X,K_Y,K_Z]. Finally, the geometric voxel features V_g ∈ℝ^C^'×X/2×Y/2×Z/2 are obtained by performing the 3D convolution with the weight Ŵ and bias b̂ of the large kernel. BEV-Voxel Lifting Module. To fuse the output of BEV and voxel branches, we propose a BEV-Voxel lifting (BVL) module that projects BEV features into voxel space. This design is inspired by LSS <cit.>, but it projects BEV features along the height dimension instead of image features along the depth dimension. As shown in Fig. <ref>, the BVL module is applied to the temporal BEV feature B_t and the semantic BEV feature B_s. For example, using B_s as input, a context branch generates height-aware features B_s^'∈ℝ^C^'×X/2×Y/2, while a height branch predicts a height distribution H^'∈ℝ^X/2×Y/2×Z/2. Then, the semantic voxel features V_s ∈ℝ^C^'×X/2×Y/2×Z/2 are then obtained through the outer product B_s⊗ H^'. Finally, the geometric-semantic decoupled features V_g&s∈ℝ^C^'× X × Y × Z are obtained by summing the geometric voxel feature V_g and the semantic voxel featureV_s, followed by upsampling 2 × using transpose 3D convolutions: V_g&s = upsample (V_g + V_s). §.§ Geometric-Semantic Decoupled Learning In Sec. <ref>, the GSDBN module effectively mitigates the coupling problem between geometry and semantic through a dual-branch network design. In this section, we further think about this issue from a learning perspective. We focus on a key component for 2D-to-3D view transformation, i.e., the LSS module, which projects image features into voxel space by predicting a depth distribution. However, as the prediction depth is not always accurate, especially at the early stage of training, which would exacerbate the coupling problem and lead to unstable optimization. An intuitive idea is to directly replace the prediction depth with the ground-truth depth during training in LSS, while using the prediction depth in inference. This strategy is inspired by language models <cit.>, where sequential ground-truth tokens are provided to predict the next token during training, but complete sentences are predicted in inference. However, this strategy performs poorly because the model does not learn how to refine the predicted geometry. To address this issue, we propose a geometric-semantic decoupled learning (GSDL) strategy. Specifically, we introduce ground-truth depth D̂={D̂_i ∈ℝ^D_b i n× H_F × W_F}_i=1^N_c to LSS at the beginning of training, so that the model can separately focus on learning semantics with accurate ground-truth geometry. Subsequently, we gradually mix the ground-truth depth D̂ with the prediction depth D during training to adapt the model to the predicted geometry. 
The mixup depth D^m can be obtained by conducting the arithmetic mean, using a factor α∈ [0,1]: D^m = {D^m_i}_i=1^N_c, D^m_i = D_i α + D̂_i (1-α). The value of α is determined by a projection function, which is monotonically increasing with respect to the number of training iterations. We first transform the range of iterations from x∈[0, T_max] to x∈[-N_α, N_α], where T_max is the maximum number of training iterations and N_α is a constant set to 5 in this work without careful selection. We then employ a sigmoid function to smooth the training process: α = 1/1+e^r x where r is a parameter that controls the steepness of the mixup. As α→ 1 by the end of training, the model gains the ability to refine predicted geometry and no longer requires ground-truth depth in inference. § EXPERIMENTS §.§ Experimental Setup We evaluate our model using the Occ3D-nuScenes <cit.> benchmark, which is based on nuScenes <cit.> dataset and was constructed for the CVPR2023 3D occupancy prediction challenge. The dataset consists of 1000 videos, split into 700 for training, 150 for validation, and 150 for testing. Each key frame of video contains a 32-beam LiDAR point cloud, six RGB images from surround-view cameras, and dense voxel-wise semantic occupancy annotations. The perception range in 3D voxel space is [-40m, -40m, -1m, 40m, 40m, 5.4m], with each voxel sized [0.4m,0.4m,0.4m]. The voxels contain 18 categories, including 16 known object classes, an unknown object class labeled as “others”, and an “empty” class. Following previous works <cit.>, we use the mean intersection over union (mIoU) across all classes to evaluate accuracy. §.§ Implementation Details Adhering to common practices <cit.>, we adopt ResNet-50 <cit.> as the image backbone. We maintain a memory queue of length 15 to store historical features and fuse temporal information with 16 frames. For the large-kernel re-parameterized 3D convolution in the geometric encoder, we set the size of convolution kernel to [11, 11, 1]. The steepness parameter r is set to 5 in geometric-semantic decoupled learning. During training, we use a batch size of 32 on 8 Nvida A100 GPUs. Unless otherwise specified, all models are trained for 24 epochs using the AdamW optimizer <cit.> with a learning rate 1× 10 ^-4 and a weight decay of 0.05. During inference, we use a batch size of 1 on a single Nvidia A100 GPU. The FPS and memory metrics are tested using the mmdetection3d codebase <cit.>. §.§ Main Results In Tab. <ref> and Fig. <ref>, we compare GSD-Occ with previous state-of-the-art (SOTA) methods on the validation split of Occ3D-nuScenes. GSD-Occ demonstrates real-time inference speed and low memory usage while achieving accuracy comparable to or better than non-real-time methods, such as BEVFormer <cit.>, BEVDet4D <cit.>, SurroundOcc <cit.>, and FlashOCC <cit.>. When compared with FB-Occ <cit.>, the winner of CVPR 2023 occupancy challenge, GSD-Occ is ∼ 3 × faster and shows a 1.9% mIoU improvement. Compared to other real-time occupancy prediction methods, GSD-Occ achieves a notable 5.2% higher mIoU with even faster speed than FastOCC <cit.>. These results highlight the effectiveness of geometric-semantic disentanglement in our method. When we increase the input image size of GSD-Occ to 2×, the mIoU further improved by 2.3% without bells and whistles. The inference speed decreases by 2 ×, which indicates a nearly linear relationship between input size and inference speed. This property enables GSD-Occ to efficiently handle high-resolution images. 
Compared to more recent methods, GSD-Occ* achieves only 0.4% lower mIoU than PannoOcc <cit.>, but it is ∼ 3 × faster and uses only ∼ 50% of the memory. Although COTR <cit.> achieves 2.8% higher mIoU than GSD-Occ*, it is significantly slower (10 ×). Additionally, we also report the RayIoU metric proposed by  <cit.> in Tab. <ref>. GSD-Occ achieves 4.9 % higher mIoU with faster speed and lower memory usage when compared with the recent SOTA method, SparseOcc <cit.>. We further provide qualitative results in Fig. <ref>. Despite significantly reducing computation, our method can also effectively perceive geometric details (even with few clues in Row 2) and accurate semantics (Row 3). Additionally, our method also performs well under night conditions (Row 4). §.§ Ablations In this section, we conduct conduct ablation experiments on validation split of Occ3d-nuScenes to delve into the effect of each module. §.§.§ Ablations on GSDBN The results are shown in Tab. <ref>, we can observe that each component of geometric-semantic dual-branch network (GSDBN) contributes to the overall performance. The baseline model, which lacks temporal fusion and both 2D and 3D encoders, achieves fast speed (27.0 FPS) but falls short in accuracy (35.11% mIoU). For temporal fusion, although applying voxel features leads to 0.69 % mIoU improvement when compared with using BEV features, it also introduces a significant inference delay (0.029s), which is costly relative to the accuracy gain. Integrating the GSDBN module into the baseline model results in a 3.79% mIoU improvement, with only a modest increase in computational cost (speed decreases from 27.0 FPS to 20.0 FPS). It demonstrates that GSDBN efficiently and effectively decouples the learning of geometry and semantic by a hybrid BEV-Voxel representation. §.§.§ Ablations on GSDL To prove the effectivenss of geometric-semantic decoupled learning (GSDL), we apply it to different pre-training models and methods, as shown in Tab. <ref>. Without incurring additional computation costs, GSDL achieves consistent accuracy improvement across different pre-training models (BEVDepth <cit.> and ImageNet <cit.>) and methods (FB-OCC <cit.> and our GSD-Occ). It highlights the generalizability of GSDL, which further decouples the geometry and semantic by a simple yet effective learning strategy. §.§.§ Additional Ablations The Effectiveness of BVL. We compare BEV-Voxel lifting (BVL) module with the other exisiting methods as shown in Tab. <ref>, it shows that BVL module achieves the best accuracy with the fastest speed, proving its effectiveness. Are More History Frames Better? As illustrated in Tab. <ref>, we delve into the impact of various time-series lengths: short (1), moderate (7), long (15), and very long (31). The results indicate that the long temporal fusion achieves the highest accuracy. Since we employ 2D temporal fusion with BEV features, the computational cost remains affordable even as the time-series length increases. Is a Larger 3D Convolutional Kernel Better? In Table <ref>, we present the results of different kernel sizes in 3D re-parameterized convolution. Adopting a kernel size of 11 × 11 × 1 achieves the highest accuracy. It indicates that correcting geometric errors requires a relatively large receptive field, but excessively large kernels can be counterproductive. Additionally, thanks to the re-parameterized technique we employed, the inference speed has significantly improved from 18.6 FPS to 20.0 FPS. 
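To make the kernel algebra of the re-parameterization discussed above explicit, the sketch below merges a non-dilated and a dilated 3×3×1 branch, each with its own batch-norm statistics, into a single 11×11×1 kernel and bias. It works on single-channel kernels only and does not perform any convolution; the branch configuration, the random weights and the batch-norm numbers are illustrative assumptions, while the real encoder operates on multi-channel features.

import numpy as np

def dilate_kernel(w, rates):
    # Insert zeros so that a dilated small kernel becomes an equivalent
    # non-dilated sparse kernel (the transpose-convolution trick of the geometric encoder).
    kx, ky, kz = w.shape
    rx, ry, rz = rates
    out = np.zeros(((kx - 1) * rx + 1, (ky - 1) * ry + 1, (kz - 1) * rz + 1))
    out[::rx, ::ry, ::rz] = w
    return out

def fold_bn(w, mean, var, gamma, beta, eps=1e-5):
    # W'' = gamma/sigma * W',  b'' = beta - mean * gamma / sigma
    sigma = np.sqrt(var + eps)
    return w * (gamma / sigma), beta - mean * gamma / sigma

def pad_to(w, target):
    # Zero-pad a small sparse kernel to the large-kernel size, kept centred.
    out = np.zeros(target)
    off = [(t - s) // 2 for t, s in zip(target, w.shape)]
    out[off[0]:off[0] + w.shape[0],
        off[1]:off[1] + w.shape[1],
        off[2]:off[2] + w.shape[2]] = w
    return out

large = (11, 11, 1)
branches = [((3, 3, 1), (1, 1, 1)), ((3, 3, 1), (5, 5, 1))]   # (kernel size, dilation)
rng = np.random.default_rng(0)
W_hat, b_hat = np.zeros(large), 0.0
for size, rates in branches:
    w = rng.standard_normal(size)                  # per-branch weights (illustrative)
    w, b = fold_bn(dilate_kernel(w, rates), mean=0.1, var=1.0, gamma=1.0, beta=0.0)
    W_hat += pad_to(w, large)
    b_hat += b
# W_hat and b_hat can now be loaded into a single 11x11x1 convolution for inference.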
Smooth or steep mixup of predicted and ground-truth depth? As shown in Fig. <ref>, we plot the curve of Eq. <ref> and conduct experiments to explore the impact of various steepness levels in GSDL. When the steepness parameter r is set to 5, we achieve the best accuracy. This suggests that overly smooth mixup curves may hinder the model's ability to adapt to the predicted depth, while excessively steep curves can complicate the training process. § CONCLUSION In this paper, we propose GSD-Occ, a deployment-friendly real-time 3D occupancy prediction method that achieves accuracy comparable to many non-real-time methods. To achieve this, we identify and address a core challenge: the strong coupling between geometry and semantic. Specifically, we propose a geometric-semantic dual-branch network with a hybrid BEV-Voxel representation, which maintains both computational efficiency and geometric integrity. Additionally, we propose a geometric-semantic decoupled learning strategy, which separates the learning of geometric correction and semantic knowledge, resulting in consistent accuracy improvements across various pre-training models and methods. To validate the effectiveness of our method, we compare GSD-Occ with recent state-of-the-art (SOTA) methods on the Occ3D-nuScenes benchmark. The results demonstrate that GSD-Occ achieves new SOTA performance in real-time occupancy prediction.
http://arxiv.org/abs/2407.13192v1
20240718060445
Global Stability of the Boltzmann Equation for a Polyatomic Gas with Initial Data Allowing Large Oscillations
[ "Gyounghun Ko", "Sung-jun Son" ]
math.AP
[ "math.AP", "35Q20, 76P05" ]
http://arxiv.org/abs/2407.12194v2
20240716214221
Empirical large-scale extension of Yakhot's model of strong turbulence
[ "Christoph Renner" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Empirical large–scale extension of Yakhot's model of strong turbulence Ch. Renner July 16, 2024 ====================================================================== § ABSTRACT We propose an empirical extension of Yakhot's model of strong turbulence <cit.> that correctly describes the statistics of longitudinal velocity increments not only in the inertial range but also for larger scales up to the system length scale L. Two additional parameters are introduced to the original equation for the probability density function of velocity increments. These parameters can be motivated by physical arguments and their values are fixed by large scale boundary conditions. The resultant model ensures correct convergence of structure functions at the system length scale, including vanishing slopes at L, and shows good agreement with experimental data. § INTRODUCTION Even though the first theory of fully developed turbulence dates back to 1941 <cit.>, the statistical properties of velocity fluctuations in turbulent fluid motion remain a challenging open problem. Velocity fluctuations on a certain length scale l are usually investigated by means of the longitudinal velocity increment v(l), the difference of the velocities at two points in space separated by the distance l: v(l) = w(x+l)-w(x) (here, w is the component of the velocity field in direction of the separation vector l). The statistics of v is often characterized by means of the moments 𝒮_n(l) = <v(l)^n>, the so-called structure functions. A commonly accepted model for the structure functions has not yet been found. The only exception is the third order function for which an exact relation can be derived directly from the Navier–Stokes equation, Kolmogorov’s famous four–fifths law. Neglecting the dissipative term, the third order structure function is a linear function of the scale l 𝒮_3(l) = - 4/5 ℰ l. where ℰ is the mean rate of energy dissipation within the flow. (<ref>) is valid for scales l much smaller than the system size L and larger than the dissipation length scale η of the flow configuration, the so–called inertial range of length scales. Inspired by this result models of fully developed turbulence are usually based on the hypothesis that in the inertial range structure functions follow power laws in l 𝒮_n(l) ∝ l^ζ_n, where the ζ_n are the so–called scaling exponents. Several models for these exponents have been developed in recent decades, from the simple linear relation ζ_n = n/3 proposed by Kolmogorov in 1941 <cit.> to, amongst others, functions of second <cit.> and third <cit.> order in n. In 1998, V. Yakhot proposed a model of fully developed turbulence in the limit of very large Reynolds number which is based on an analytical treatment of the Navier–Stokes equation <cit.>. A result of this model is the prediction of a closed–form expression for the scaling exponents ζ_n which can be shown to be a generalization of several of the most important models of turbulence <cit.>. Remarkably enough, this prediction for the scaling exponents replicates an earlier results by B. Castaing <cit.> who showed that very general considerations allow to identify a quantity conserved along scales which can be thought of as the analogon of temperature in thermodynamics. From this, he was able to derive a closed–form expression for the scaling exponents which preempts Yakhot's. Unlike many others, Yakhot's model is not restricted to the inertial range but also correctly describes important aspects of the transition to large length scales. 
In particular, the model correctly captures the decline[Strictly speaking this statement refers to absolute values: Odd–order structure functions are negative in the inertial range, cf. eq. (<ref>), and so actually increase for larger scales to finally reach zero from below.] of odd–order structure functions to zero at the system length scale. Yet, the model misses out on a few other aspects of the transition to large scales. One of the aspects not described correctly is the convergence to constant levels of even–order structure functions at the system scale L. Also, structure functions approach the system scale L with non–zero slopes in Yakhot's model, which runs contrary to physical intuition and experimental data. In this paper we propose a straightforward empirical extension of Yakhot's model which also captures these features. Two additional parameters are introduced into the original partial differential equation for the evolution of the velocity increment's probability density function. These parameters can be motivated by physical considerations and their values are fixed by large scale boundary conditions. The resultant structure functions show good agreement with experimental data. The proposed extension is empirical, but we believe that the rationale underlying the model and the results obtained with it are sound, making it a prospective starting point for further experimental and theoretical investigations. § LARGE SCALE BOUNDARY CONDITIONS The large scale boundary conditions for the structure functions are not dependent on a specific model but can be derived from rather general considerations and assumptions. We start by defining the system length scale L as the scale at which the velocity fluctuations w have de–correlated: < w(x+l) w(x) > = 0 for l ≥ L. It should be noted that the system scale defined by (<ref>) is not identical with the often used integral length scale, which is also usually denoted by L (see <cit.> and references therein for the definition of the integral scale). The integral scale marks the upper end of the scaling range and is significantly smaller than the system scale as used in this paper. For the second order structure function, the definition (<ref>) implies that at length scale L (and larger): 𝒮_2(L) = < (w(x+L)-w(x))^2 > = 2 σ^2, where σ = √(< w^2 >) is the rms of the velocity fluctuations w. We furthermore assume that velocity fluctuations w on large scales follow normal distributions. Thus, the increment v(L) is the difference of two uncorrelated normal processes and hence a normal random process itself with a variance given by (<ref>). The large scale boundary levels of structure functions follow from general properties of the normal distribution as: 𝒮_n(L) = < v(L)^n > = (n-1)!! (2σ^2)^n/2 for n even, and 𝒮_n(L) = 0 for n odd, where (n-1)!! = 1 · 3 ·…· (n-3) · (n-1). The second set of boundary conditions, expected both from physical intuition as well as experimental data, is vanishing first order derivatives at the system scale L for structure functions of all orders: ∂/∂ l 𝒮_n(l) |_l=L = 0 ∀ n. § YAKHOT'S MODEL OF STRONG TURBULENCE Following ideas by A.M. Polyakov <cit.>, Yakhot proposed a model of fully developed turbulence in the limit of very large Reynolds number which is based on an analytical treatment of the Navier–Stokes equation <cit.>.
For the case of turbulence driven by Gaussian velocity fluctuations on large scales he derived a partial differential equation for the evolution of the probability density function p(v,l) of the velocity increment v in scale l: B ∂ p/∂ l - ∂/∂ v{ v ∂ p/∂ l} = - A/l∂/∂ v{ v p } + σ/L∂^2/∂ v^2{ v p }, where A and B are two parameters that are not fixed by the theory. A remarkable feature of this equation, amongst others <cit.>, is the occurrence of the system scale L in the second term on the right–hand side, i.e. the fact that large–scale effects are explicitly taken into account. It is this term that describes the declinesignOfS3 of odd-order structure functions to zero at L. For what follows we will discuss this equation in the dimensionless variables r and u defined as: r = l/L, u = v/σ. With these definitions the equation for the probability density function is: B ∂ p/∂ r - ∂/∂ u{ u ∂ p/∂ r} = - A/r∂/∂ u{ u p } + ∂^2/∂ u^2{ u p }. By multiplication of eq. (<ref>) with u^n and subsequent integration with respect to u, the equation for the dimensionless structure functions S_n(r) = 𝒮_n(r)/σ^n = ∫ u^n p(u) du can be derived: r ∂/∂ r S_n(r) = ζ_n S_n(r) + z_n r S_n-1(r), ζ_n = An/B+n, z_n = n(n-1)/B+n. In the limit of small scales r ≪ 1, the second term on the rhs of equation (<ref>) becomes negligible. In this limit, the solutions of the resulting equation are simple power laws: S_n(r) ∝ r^ζ_n. The four–fifths law (<ref>) imposes the condition ζ_3=1 on the scaling exponents which, inserted into equation (<ref>), leads to: A = B+3/3. The ζ_n and z_n can thus be expressed as a function of only one yet unknown parameter, either B as in Yakhot's original work <cit.>, or A as for the most part of this paper: ζ_n = n/3B+3/B+n = A n/3 (A-1) n. z_n = n (n-1) /B + n = n (n-1) /3 (A-1) + n. The choice of A as independent parameter is, amongst other things, motivated by the fact that it can be interpreted as the limit of ζ_n for n→∞: lim_n→∞ ζ_n = lim_n→∞ A/3 (A-1)/n + 1 = A. It is worthwhile noting again that the scaling exponents in Yakhot's model are in line with an earlier result by B. Castaing <cit.>. For the second order structure function the last term on the rhs of eq. (<ref>) vanishes on all scales (as S_1(r)=0) and the general solution for S_2 is the power law (<ref>). The constant of integration can be determined from the large scale boundary condition (<ref>) yielding: S_2(r) = 2 r^ζ_2. Inserting this result into the equation for the third order structure function leads to: r ∂/∂ rS_3(r) = ζ_3 S_3(r) + z_3 r S_2(r) = S^3(r) + 2 z_3 r^ζ_2+1 . The general solution for this equation is: S_3(r) = K_3 r + 2 z_3/ζ_2 r^ζ_2+1. The integration constant K_3 can be determined from the four–fifths law (<ref>) which in the dimensionless variables u and r can be written as S_3 ( r ≪ 1 ) = - 4/5 ϵ r, where ϵ = ℰ L/σ^3 is the dimensionless mean rate of energy dissipation. The final result for the third order structure function is: S_3(r) = - 4/5 ϵ r + 2 z_3/ζ_2 r^1+ζ_2 = - 4/5 ϵ r + 18 B+2/( B+3 )^2 r^1+ζ_2 . The parameter B can be determined from the condition that the third order moment vanish at the integral length scale, i.e. S_3(r=1)=0: 4/5 ϵ = 18 B+2/( B+3 )^2 From dimensional arguments Yakhot inferred that ϵ≈ 1. Equation (<ref>) is then solved by B ≈ 18.5 which corresponds to A ≈ 7.2. For these values the scaling exponents (<ref>) are, within the errors, indistinguishable from experimental values <cit.>. 
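The closed set of relations above is easy to evaluate numerically. The short sketch below, which assumes NumPy/SciPy and is not part of the original analysis, fixes B from the condition S_3(r=1)=0 with ϵ ≈ 1, recovers A and the scaling exponents ζ_n, and evaluates the model predictions for S_2 and S_3.

import numpy as np
from scipy.optimize import brentq

def zeta(n, B):
    # zeta_n = (n/3) (B+3)/(B+n); zeta_3 = 1 holds for any B.
    return (n / 3.0) * (B + 3.0) / (B + n)

def z(n, B):
    # z_n = n (n-1) / (B+n)
    return n * (n - 1.0) / (B + n)

# Fix B from S_3(r=1) = 0 with eps ~ 1:  4/5 eps = 18 (B+2)/(B+3)^2
eps = 1.0
B = brentq(lambda b: 18.0 * (b + 2.0) / (b + 3.0) ** 2 - 0.8 * eps, 1.0, 100.0)
A = (B + 3.0) / 3.0
print(f"B = {B:.2f}, A = {A:.2f}")                 # approximately 18.5 and 7.2
print({n: round(zeta(n, B), 3) for n in range(1, 9)})

# Model predictions for the second and third order structure functions:
r = np.logspace(-3, 0, 200)
S2 = 2.0 * r ** zeta(2, B)
S3 = -0.8 * eps * r + (2.0 * z(3, B) / zeta(2, B)) * r ** (1.0 + zeta(2, B))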
§ SHORTCOMINGS OF YAKHOT'S MODEL Yakhot's model stands out in several aspects: The Castaing–Yakhot scaling exponents (<ref>) can be shown to generalize several of the most relevant theories of turbulence <cit.> and unlike many other models it captures the decline of odd–order structure functions to zero. Yet, several other features of structure functions are not described correctly. These aspects will in what follows be discussed by benchmarking the predictions of Yakhot's model against experimental data. These were measured in a cryogenic axisymmetric helium gas jet at a Reynolds number of approx. 4 · 10^5. Details on on the experimental setup and the data can be found in <cit.> and appendix <ref>. A comparison of the model's prediction (<ref>) for the second order structure function with experimental data is shown in figure <ref>. When parameterized to match the large scale boundary condition S_2(r=1)=2, the model fails to describe the function at smaller scales. A better fit to the inertial range can be obtained by adjusting the constant of integration as shown in figure (<ref>). However, the necessary increase of the constant of integration (from 2 as in eq. (<ref>) to approximately 4.5) leads to a massive overestimation of the large scale level S_2(r=1). It is clearly not possible to correctly describe both the scaling in the inertial range as well as the large scale boundary level of the second order structure function. Results for the third order structure functions also exhibit significant deficiencies. Figure <ref> displays the negative of S_3 in comparison to the predictions of Yakhot's model. In original parameterization (left) according to eq. (<ref>) the model, while correctly declining to zero at r=1, fails to match the function at all other scales, including the inertial range where S_3 should exhibit a linear dependence on scale according to the four–fifths law. A closer investigation reveals that this is due to the fact that the approximation ϵ≈ 1 for the dimensionless rate of energy dissipation as used for the derivation of eq. (<ref>) does not hold. From fits to experimental data as indicated by the dotted lines in figure <ref> we in fact obtain a value of ϵ≈ 2.3. Using this value and the scaled constant of integration for S_2, the parameterization of Yakhot's model can be adjusted and is the found to be in better agreement with the experimental data (right). But also with the adjusted parameterization the model misses out on an important aspect, the linear dependence on scale in the inertial range according to the four–fifths law. In comparison to a linear fit according to that law (figure <ref>, dotted lines) it becomes evident that the influence of the second order structure function, which causes the deviation from the linear scaling and the eventual decline towards zero, sets in too early, i.e. at too small length scales. Considering that the four–fifth law is the only exact relation known this constitutes a serious drawback of the model. Moreover, for any parameterization the structure functions will in this model always exhibit finite first order derivatives at the system length scale L. This is in contradiction to experimental data as can be seen from the results presented here for the second order structure function (while results for the third order function are less clear owing to higher statistical noise, see appendix <ref>). 
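The mismatch described above can be quantified directly from the model formulas. Repeating the integration of the S_3 equation with the inertial-range fit S_2(r) = K_2 r^ζ_2 replaces the factor 2 in the correction term by K_2, so that S_3(r) = - 4/5 ϵ r + (K_2 z_3/ζ_2) r^1+ζ_2. The sketch below is an illustration under the fitted values K_2 ≈ 4.5 and ϵ ≈ 2.3 quoted in this section; the 5% tolerance is an arbitrary choice. It reproduces both the overestimated large-scale level S_2(1) and the early departure from the four-fifths scaling.

import numpy as np

B = 18.5                                        # value obtained in the previous section
zeta2 = (2.0 / 3.0) * (B + 3.0) / (B + 2.0)     # approximately 0.70
z3 = 6.0 / (B + 3.0)                            # z_3 = 3*2/(B+3)
K2, eps = 4.5, 2.3                              # inertial-range fit values quoted above

r = np.logspace(-3, 0, 400)
S2 = K2 * r ** zeta2
S3 = -0.8 * eps * r + (K2 * z3 / zeta2) * r ** (1.0 + zeta2)
print(f"S2(r=1) = {S2[-1]:.1f}   (large-scale boundary value is 2)")

# Scale range over which the correction term stays within 5% of the four-fifths law:
within = (K2 * z3 / zeta2) * r ** zeta2 < 0.05 * 0.8 * eps
print(f"four-fifths scaling holds to within 5% only for r < {r[within][-1]:.3f}")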
§ EXTENDED MODEL §.§ Specification We seek to address the shortcomings of Yakhot's model through an extension of the original equation (<ref>) by as simple as possible elementary functions. As we are interested in the large scale behaviour of the model it seems natural to concentrate on the second term on the right–hand side of eq. (<ref>), the term describing the influence of large scales. Experimental results based on the theory of Markov processes <cit.> motivate a first extension. In these analyses, the evolution in scale of the longitudinal velocity increment was described by means of the Fokker–Planck–equation, a partial differential equation for the pdf p(u,r). In this framework, the stochastic component of the process is described by the so–called diffusion coefficient D_2(u,r) which can be a function of both r and u. Analyzing experimental data, the diffusion coefficient was found to have constant, linear and second order terms in u. It could be shown <cit.> that equation (<ref>) can approximately be converted to an equivalent Fokker–Planck–equation. The resultant effective diffusion coefficient comprises linear and second order terms in u, but lacks a constant term. This is not only inconsistent with experimental results but also counter–intuitive from a theoretical point of view: Diffusion coefficients that contain only terms of order u or higher would imply that the stochastic variable u will never change its sign[Ignoring the effect of the drift term.]. The corresponding pdf p(u,r) would accordingly be restricted to one half of the real axis[The best known example for such a process is geometric Brownian motion for which the diffusion coefficient is of second order and the corresponding distribution is lognormal.] which is clearly not a reasonable assumption for the probability density function of velocity increments. Further compelling evidence for the existence of such an additive noise term in the Fokker–Planck equation was recently given in a study based on the integral fluctuation theorem <cit.>. While the pdf derived by Yakhot, equation (<ref>), is not a Fokker–Planck–equation we deem it reasonable based on above considerations to introduce an effective constant (in u) diffusion term. In the Fokker-Planck–equation such a term takes on the form D(r) ∂^2/∂ u^2 p(u,r) which nicely integrates into the large scale term in (<ref>). For the purpose of modeling the large scale behaviour, we find that D can be assumed constant[A more detailed investigation shows that this assumption does not hold for small scales.] in r. A further adjustment can be inferred from the observations discussed in section <ref>, in particular the fact that the third order structure function deviates from the four-fifth scaling "too early", i.e. at too small length scales. In order to address this we propose to modify the large scale term by multiplying it with a function c(r) that goes to zero for small scales: lim_r → 0 c(r) = 0. Based on these considerations we propose the following extension to Yakhot's model: B ∂ p/∂ r - ∂/∂ u{ u ∂ p/∂ r} = - A/r∂/∂ u{ u p } + ∂^2/∂ u^2{ ( u c(r) + D ) p }. By multiplication with u^n and integration with respect to u the equation for S_n(r) is obtained: r ∂/∂ r S_n(r) = ζ_n S_n(r) + z_n r { c(r) S_n-1(r) + D S_n-2(r) }, with ζ_n and z_n as in (<ref>). §.§ Exploiting the Large Scale Boundary Conditions The additional degrees of freedom can be used to enforce the condition of vanishing first order derivatives of the structure functions at the system length scale r=1. 
For odd–order structure functions the condition is. 0 != . r ∂/∂ r S_2n+1(r) |_r=1 = ζ_2n+1 S_2n+1(1) + z_2n+1 1 { c(1) S_2n(1) + D S_2n-1(1) } = z_2n+1 c(1) S_2n(1). The last line follows from the fact that the odd–order functions S_2n+1 and S_2n-1 are zero at the system scale, see eq. (<ref>). In order for the first order derivative at the system scale to vanish also the remaining term involving S_2n in (<ref>) needs to be zero. This implies that: c(r=1) = 0. The simplest elementary function fulfilling both condition (<ref>) as well as (<ref>) is: c(r) = r (1-r) C, with constant parameter C. The parameter D is fixed by imposing the condition of vanishing first order derivatives on even–order structure functions: 0 != . r ∂/∂ r S_2n(r) |_r=1 = ζ_2n S_2n(1) + z_2n 1 { c(1) S_2n-1(1) + D S_2n-2(1) } = 2An/B+2n S_2n(1) + 2n(2n-1)/B+2n D S_2n-2(1) = 2n/B+2n S_2n-2(1) { A S_2n(1)/S_2n-2(1) + D (2n-1) } = 2n/B+2n S_2n-2(1) (2n-1) { 2 A + D }, where in the last step we used the equality S_2n(1)/S_2n-2(1) = 2 (2n-1) which follows from the large scale boundary condition (<ref>). We obtain: D = - 2 A. Using this relation to replace D in eq. (<ref>) we finally obtain: ∂/∂ r S_n(r) = ζ_n/r S_n(r) + z_n c(r) S_n-1(r) - 2(n-1) ζ_n S_n-2(r). §.§ Solutions and Fit to Experimental Data For order n=2, eq. (<ref>) simplifies to: r ∂/∂ r S_2(r) = ζ_2 S_2(r) - 2 ζ_2 r. (Note that S_0(r)=1.) With boundary condition S_2(1)=2 this is solved by: S_2(r) = 2/1-ζ_2 ( r^ζ_2 - ζ_2 r ). A comparison with experimental data shows good agreement over a wide range of length scales, see Figure <ref>. Knowledge of S_2(r) is the prerequisite to solving equation (<ref>) for third order: r ∂/∂ r S_3(r) = ζ_3 S_3(r) + z_3 r c(r) S_2(r). Inserting expression (<ref>) for S_2(r) we find the general solution S_3(r) = K_3 r + 2 z_3 /1 - ζ_2 C r F(r), where K_3 is the constant of integration and F(r) = 1/1+ζ_2 r^1+ζ_2 - ζ_2/2 r^2 - 1/2+ζ_2 r^2+ζ_2 + ζ_2/3 r^3. K_3 is readily obtained from the four–fifth law (<ref>) as K_3=-4/5ϵ and the parameter C can be determined from the condition that S_3(r) is zero at the system scale: C = 2/5 ϵ/z_3 1-ζ_2/F(1). The solution for the third order structure function finally simplifies to: S_3(r) = - 4/5 ϵ r { 1 - F(r)/F(1)}. The prediction of the extended Yakhot model is found to be in good agreement with experimental data from the system scale through to the lower end of the inertial range, see figure <ref>. The parameter C ultimately cancels out of the solution for S_3(r) but enters the equations for higher order structure functions. The following approximation for this parameter can therefore be of interest: By expressing the second order scaling exponent ζ_2 in terms of the small scaling anomaly δ defined as δ = ζ_2 - 2/3 we can expand the term F(1) and obtain the following approximation of first order in δ: F(1) = 1/1+ζ_2 - 1/2+ζ_2 - ζ_2/6 ≈ 1 - ζ_2/3 . This approximation is correct within a maximum relative deviation of 1.5% in the relevant range[This range is defined by the commonly accepted experimental value for the scaling anomaly of δ = 0.029 ± 0.004 <cit.>. This corresponds to a range of 0.692 ≤ζ_2 ≤ 0.700 or 7 ≤ A ≤ 9.] of parameters. With this approximation and the explicit expression (<ref>) for z_3 we obtain: C = 2/5 ϵ/z_3 1-ζ_2/F(1) = 1/5 A ϵ 1-ζ_2/F(1) ≈ 3/5 A ϵ . We conclude with a comparison of the (numerical) solutions of the model equation for structure functions of orders four to six with experimental data in figure <ref>. 
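The numerical solutions for the higher orders mentioned in the last sentence can be produced, for instance, by integrating eq. (<ref>) downwards from the large scale boundary value. The sketch below is ours and only illustrative: A = 7.3 and ϵ ≈ 2.3 are the values quoted in the text, and S_4(1) = 2·3·S_2(1) = 12 follows from the boundary condition (<ref>); it treats n = 4, with orders five and six obtained in the same way by feeding each solution into the next equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, eps = 7.3, 2.3
zeta = lambda n: A * n / (3.0 * (A - 1.0) + n)
z = lambda n: n * (n - 1.0) / (3.0 * (A - 1.0) + n)
zt2 = zeta(2)

def F(r):
    return (r ** (1 + zt2) / (1 + zt2) - zt2 * r ** 2 / 2.0
            - r ** (2 + zt2) / (2 + zt2) + zt2 * r ** 3 / 3.0)

C = 0.4 * eps / z(3) * (1.0 - zt2) / F(1.0)            # parameter C as derived above
c = lambda r: r * (1.0 - r) * C
S2 = lambda r: 2.0 / (1.0 - zt2) * (r ** zt2 - zt2 * r)
S3 = lambda r: -0.8 * eps * r * (1.0 - F(r) / F(1.0))

def rhs(r, y):
    # dS_4/dr = zeta_4/r * S_4 + z_4 * c(r) * S_3(r) - 2*(4-1)*zeta_4 * S_2(r)
    return [zeta(4) / r * y[0] + z(4) * c(r) * S3(r) - 6.0 * zeta(4) * S2(r)]

# integrate from r = 1 down to small scales, starting at S_4(1) = 12
sol = solve_ivp(rhs, (1.0, 1e-3), [12.0], rtol=1e-8, dense_output=True)
```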
Considering that statistical noise increases with both length scale r and order n, the agreement between model and experimental data can be considered good. § SUMMARY AND DISCUSSION An extension of Yakhot's model of turbulence is proposed which introduces two additional parameters, D and C, into the original equation for the probability density function p(u,r) of the dimensionless velocity increment u: B ∂ p/∂ r - ∂/∂ u{ u ∂ p/∂ r} = - A/r∂/∂ u{ u p } + ∂^2/∂ u^2{ ( u c(r) + D ) p }, where the function c(r) is defined as c(r) = r (1-r) C with C, as well as A, B and D, being constant. Large scale boundary conditions and Kolmogorov's four–fifths law allow us to express the model parameters in terms of only one independent parameter. We find it most convenient to express B, C and D as functions of A: B = 3 (A-1), C ≈ 3/5 ϵ A, D = - 2 A. The exact expression (<ref>) for C is slightly more involved but also depends on the parameter A (and ϵ) only. The model equation for the structure function S_n(r) of order n is obtained as ∂/∂ r S_n(r) = ζ_n/r S_n(r) + z_n c(r) S_n-1(r) - 2(n-1) ζ_n S_n-2(r), ζ_n = A n/(3 (A-1) + n), z_n = n (n-1)/(3 (A-1) + n). For second and third order the solutions of eq. (<ref>) are: S_2(r) = 2/(1-ζ_2) ( r^ζ_2 - ζ_2 r ), S_3(r) = - 4/5 ϵ r { 1 - F(r)/F(1)} with F(r) = 1/(1+ζ_2) r^1+ζ_2 - ζ_2/2 r^2 - 1/(2+ζ_2) r^2+ζ_2 + ζ_2/3 r^3. The proposed extension of Yakhot's model is clearly of an empirical nature. Yet, the additional terms can be motivated by physical arguments and in a straightforward manner be determined from very general considerations, namely the boundary conditions for the structure function's values and first order derivatives at the system length scale L. Remarkably, the extended model reproduces experimental data fairly well, from the system length scale through to the inertial range. In that context it is worth stressing (again) that none of the parameters has been used to optimize the fit to experimental data, including the free parameter A. This parameter is set to 7.3 (corresponding to ζ_2=0.7) following <cit.> where this value was identified as the limit of the scaling exponents ζ_n for n →∞ (cf. eq. (<ref>)). When determined from large scale boundary conditions, the effective diffusion coefficient D turns out to be proportional to A. It is this simple relation that makes the solution (<ref>) for S_2 a straightforward extension of "conventional" inertial range scaling and establishes a link between the level of the structure function in the inertial range and the (known) large scale level. Having established this link is the main result of this paper. Further investigations will have to follow. The most obvious indication that the proposed model is not the last word is the fact that both the function c(r) and the assumption of a constant effective diffusion parameter D violate the fundamental symmetry p(-u,-r)=p(u,r) of the original model. This symmetry follows from properties of the Navier–Stokes equation <cit.> and violating it hence constitutes a serious drawback. In order to restore this symmetry the diffusion coefficient D has to be made a function of r with d(-r)=-d(r) and c(r) needs to be adapted[These adaptations might require giving up the assumption of the extensions being elementary functions.] to fulfill c(-r)=c(r). Conditions (<ref>) and (<ref>) on c(r) are not per se in contradiction to this condition, but expression (<ref>) for D will have to be rewritten as d(r=+1) = - 2A.
Despite the empirical nature of the approach and some remaining open questions, the results obtained with the proposed model for the transition from large to inertial range scales are unprecedented to the best of our knowledge. We therefore believe that the ideas presented here could serve as starting points for further theoretical and experimental investigations. §.§ Acknowledgments We gratefully acknowledge fruitful discussions with J. Peinke and the provision of the high–quality dataset by courtesy of B. Castaing and B. Chabaud. § EXPERIMENTAL DATA AND METHODS The predictions of the original and the proposed extended Yakhot model are in this paper benchmarked against experimental data measured in a cryogenic axisymmetric helium gas jet at a Reynolds number of approx. 4 · 10^5. The data set contains 1.6 · 10^7 samples of the local velocity measured in the center of the jet. Taylor’s hypothesis of frozen turbulence is used to convert time lags into spatial displacements. Details on the experimental setup and the data can be found in <cit.>. Calculation of structure functions from the data is rather straightforward, but extracting reliable information about their large scale properties poses certain difficulties. In particular, determining the system length scale L is intricate owing to the facts that (i) large scale levels are approached with vanishing slopes and (ii) statistical noise can become considerable for large scales[For any given data series, the number of statistically independent observations of velocity increments decreases with increasing length scale.]. The latter effect is distinctly more pronounced for odd–order structure functions where contributions with different signs sum up to zero. Both effects become apparent in figure <ref> displaying the structure functions of order two and three calculated from the data set considered here. 𝒮_2 clearly converges towards a constant level, but with the slope of the function decreasing for large scales the precise point of convergence cannot easily be identified. 𝒮_3 shows the expected linear dependence on l (see eq. (<ref>)) for scales l ≪ L, but even before the function reaches its maximum it exhibits clearly visible oscillations which are arguably a signature of statistical noise rather than a genuine physical effect. The method applied in this paper therefore builds upon the second order structure function. It furthermore makes use of the fact that the first order derivative of S_2 shows a more clearly pronounced convergence towards its large scale level (of zero) than the structure function itself, see figure <ref>, and hence determines L as the scale for which the first order derivative of the second order structure function becomes zero. For large scales the derivative of S_2 exhibits a range where it can be approximated by a linear function. For the data set considered here we obtain the system scale L from linear fits as L ≈ 3250 (sampling steps). The result varies slightly with the range used for the fit, resulting in a relative error of ≈ 4%. Two other length scales of interest are the lower and upper bounds of the inertial range. We define the inertial range as the range of length scales for which the four–fifths law (<ref>) is fulfilled. From eq. (<ref>) it follows that in the inertial range: l/𝒮_3·∂𝒮_3/∂ l = ζ_3 = 1. Figure <ref> shows the left–hand side of eq. (<ref>) determined from experimental data. The bold curve indicates the range of values that deviate by less than 10% from the theoretical level of 1.
This criterion is complemented by the additional requirement that the average of the values in this range must not deviate from the expected level of 1 by more than the standard deviation of the values. This is the case here, with the average value and the standard deviation of ζ_3 in the marked range being 0.98 and 0.05, respectively. We obtain the lower and upper bounds of the inertial range as l=20 and l=230. [Yakhot] V. Yakhot, Phys. Rev. E 57(2), 1737 (1998). [K41] A.N. Kolmogorov, Dokl. Akad. Nauk. SSSR 30, 301 (1941). [K62] A.N. Kolmogorov, J. Fluid Mech. 13, 82 (1962). [Lvov] V. L’vov & I. Procaccia, Phys. Rev. E 62(6), 8037 (2000). [Mine] C. Renner & J. Peinke, Journal of Statistical Physics 146(1), pp. 25-32 (2012). [Castaing] B. Castaing, The Temperature of Turbulent Flows, Journal de Physique II, EDP Sciences, 6(1), pp. 105-114 (1996). [ift] N. Reinke et al., J. Fluid Mech. 848, 117-152 (2018). [Polyakov] A.M. Polyakov, Phys. Rev. E 52(6), 6183 (1995). [data] O. Chanal, B. Chabaud, B. Castaing and B. Hebral, Eur. Phys. J. B 17(2), 309–317 (2000). [transition] Ch. Renner, J. Peinke, R. Friedrich, O. Chanal, and B. Chabaud, Phys. Rev. Lett. 89, 124502 (2002). [howItAllStartet] R. Friedrich and J. Peinke, Phys. Rev. Lett. 78, 863 (1997). [myMarkovPaper] C. Renner, J. Peinke and R. Friedrich, J. Fluid Mech. 433, 383 (2001). [DavoudiTabar] J. Davoudi and M. Tabar, Phys. Rev. Lett. 82, 1680 (1999). [Arneodo] A. Arneodo et al., Europhys. Lett. 34(6), 411 (1996). [SreenivasanAndYakhot] K.R. Sreenivasan and V. Yakhot, Phys. Rev. Fluids 6, 104604 (2021).
http://arxiv.org/abs/2407.12716v1
20240717163748
Stein's method and general clocks: diffusion approximation of the $G/G/1$ workload
[ "Anton Braverman", "Ziv Scully" ]
math.PR
[ "math.PR" ]
Anton Braverman (The Kellogg School of Management at Northwestern University) and Ziv Scully (Cornell University) Stein’s method and general clocks: diffusion approximation of the G/G/1 workload § ABSTRACT We begin developing the theory of the generator comparison approach of Stein's method for continuous-time Markov processes where jumps are driven by clocks having general distributions, as opposed to exponential distributions. This paper handles models with a single general clock. Using the workload process in the G/G/1 queueing system as a driving example, we develop two variants of the generator comparison approach for models with a single general clock: the original, which we call the limiting approach, and the recently proposed prelimit approach. The approaches are duals of one another, yielding distinct bounds on the diffusion approximation error of the steady-state workload. We also contribute to the theory of heavy-traffic approximations for the G/G/1 system. Under some assumptions on the interarrival time distribution, the prelimit approach allows us to bound the diffusion approximation error in terms of G/G/1 model primitives. For example, when the interarrival time has a nonincreasing hazard rate that is bounded from above, we show that the diffusion approximation error of the expected workload is bounded in terms of the first three moments of the interarrival and service-time distributions, as well as the upper bound on the interarrival hazard rate. § INTRODUCTION The generator comparison approach of Stein's method is a powerful technique for comparing stationary distributions of Markov processes, and has been widely applied to continuous-time Markov chains (CTMCs) and discrete-time Markov processes. Due to a gap in the theory, it has yet to be applied to continuous-time Markov processes where jumps are driven by clocks having general distributions. We say that these models have general clocks, in contrast to CTMCs where clocks are exponentially distributed. We begin to fill this gap by developing the generator comparison approach for systems with a single general clock. We illustrate the approach using the single-server queue with general interarrival and service-time distributions, known as the G/G/1 system. We focus on the workload process, which consists of the remaining workload and the residual interarrival time. The former decays at a unit rate and increases when new customers arrive to the system, and the latter is the general clock tracking the time until the next arrival. We bound the error of the exponential approximation of the workload, and our approach generalizes to other models with a single general clock. A byproduct of our analysis is a novel upper bound on the expected busy period duration that requires only the first two moments of the interarrival and service-time distributions to be finite. The only other bound that we are aware of is due to <cit.> and requires three finite moments. We work with two variants of the generator comparison approach: the original, attributed to <cit.> and <cit.>, and the prelimit approach, recently proposed by <cit.>. The original approach starts from the Poisson equation of the approximating diffusion, also called the Stein equation, while the prelimit approach uses the Poisson equation of the Markov process to be approximated—the prelimit. We refer to the former as the limiting approach for ease of reference.
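Before going into the two approaches, the approximation being studied is easy to probe numerically. The following sketch is ours and not part of the paper; the Erlang interarrival and exponential service distributions are arbitrary choices. It simulates the FIFO G/G/1 workload via the Lindley recursion and compares the time-average workload with the heavy-traffic value λ E(U-S)^2 / (2(1-ρ)) that appears in the bound quoted below.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, rho = 1.0, 0.9
sample_U = lambda n: rng.gamma(2.0, 1.0 / (2.0 * lam), n)   # Erlang-2 interarrivals, mean 1/lam
sample_S = lambda n: rng.exponential(rho / lam, n)           # exponential services, mean rho/lam

def time_avg_workload(U, S):
    """Time-average workload of a FIFO G/G/1 queue fed by interarrival times U
    and service times S; the workload decays at unit rate between arrivals."""
    W, area = 0.0, 0.0
    for u, s in zip(U, S):
        v = W + s                                    # workload just after an arrival
        area += (v ** 2 - max(v - u, 0.0) ** 2) / 2.0
        W = max(v - u, 0.0)                          # workload found by the next arrival
    return area / U.sum()

U, S = sample_U(10 ** 6), sample_S(10 ** 6)
EV_sim = time_avg_workload(U, S)
EV_ht = lam * np.mean((U - S) ** 2) / (2.0 * (1.0 - rho))    # heavy-traffic approximation
print(EV_sim, EV_ht)
```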
One of our goals with this work was to compare the fruits of the prelimit and limiting approaches, to highlight each of their relative strengths. The limiting approach yields a bound on the exponential approximation error that involves the expected equilibrium idle period length, which can be restated in terms of the first two moments of the descending ladder heights of the random walk corresponding to the customer waiting times. Except for special cases, there is no simple expression for this quantity in terms of G/G/1 model primitives; see <cit.> for efforts to analyze the idle period. The same term also appears in the error bound of Theorem 1.2 in <cit.>, where the authors analyze the G/G/1 waiting time approximation using Stein's method for the exponential distribution. The prelimit approach yields an entirely different expression for the exponential approximation error. When the interarrival distribution has a density and its hazard rate is bounded from above, and either (a) the hazard rate is nondecreasing or (b) the hazard rate is bounded from below, we bound the approximation error using only the moments of the interarrival and service-time distributions and the bounds on the hazard rate. As a sample of our main results, let U and S denote the interarrival and service-time distributions, respectively, let λ = 1/ U, let ρ<1 be the system utilization, and let V be the steady-state workload. Provided that U has a bounded and nonincreasing hazard rate, we show in Theorem <ref> that (1-ρ) V - λ (U-S)^2 /2≤ (1-ρ) C_3, where C_3 is known explicitly and depends on the interarrival hazard rate at zero and the first three moments of U and S. Cases (a) and (b) represent a nontrivial class of interarrival distributions. Generalizing our error bounds to broader classes of hazard rates is possible at the cost of increased complexity in bounding the third-order Stein factors, as we would need to adapt the result of Lemma <ref> to the new hazard rate. Handling interarrival distributions with point masses is more challenging, as it affects the bounds of both second- and third-order Stein factors; see Remark <ref> below Lemma <ref>. The prelimit and limiting approaches are duals of one another. Summarizing their differences, the limiting approach starts from the Poisson equation for the exponential distribution, which has well-known Stein factor bounds. However, the limiting approach considers the expectation of the generator difference with respect to the distribution of the workload, resulting in the complicated idle period term. In contrast, the prelimit approach starts from the Poisson equation for the workload, which has complicated Stein factor bounds, but considers the expectation of the generator difference with respect to the simple exponential distribution. It is notable that combining the results of both the prelimit and limiting approaches yields stronger bounds than either approach does on its own; see the discussion following Theorem <ref>. Lastly, we comment on the added technical challenge of applying both approaches to a model with a general clock. The limiting approach requires using the Palm inversion formula to extract the generator of the approximating diffusion from the jump component of the workload process stationary equation. The prelimit approach does not require Palm calculus. The key is to average the Poisson equation, which depends on both the workload and residual interarrival time, over the stationary distribution of the residual time. 
This averaged Poisson equation depends on both the time average of its solution, the value function, as well as its expectation at jump times. Exploiting the structure of the value function, we relate both of these terms without using the Palm inversion formula. It is interesting to note that the value function naturally lies in the domain of the workload generator; i.e., the jump terms in the stationary equation for that function equal zero. This is in contrast to the test functions used in <cit.>, which had to be carefully engineered in order to have the jump terms equal to zero. §.§ Literature review The generator comparison approach of Stein's method is a powerful technique for comparing Markov process stationary distributions. Though Stein's method dates back to the seminal paper of <cit.>, the connection between Stein's method and Markov procesess is attributed to <cit.> and <cit.>. A particularly rich application domain for the generator comparison approach is the field of queueing theory. The approach was applied to birth-death processes by <cit.>. Later, more complex queueing systems were analyzed by <cit.>, though it is notable that these authors used the main elements of the generator approach without being aware of the rich literature on Stein's method. A few years later, the generator comparison approach was popularized in queueing theory by <cit.> in the setting of diffusion approximations, and by <cit.> in the setting of mean-field models. Since then, Stein's method and, specifically, the generator comparison approach has been an active area of research in the queueing community. The seminal work of <cit.> initiated a wave of research into upper and lower bounds on the expected waiting time in the G/G/1 system; for surveys of the numerous existing bounds, see <cit.>. Since the expected waiting time has a well-known relationship to the expected workload <cit.>, bounds on one quantity translate to bounds on the other. Though existing existing waiting time bounds are tight as the system utilization tends to one, most of them do not quantify the gap between the bound and expected waiting time. Some exceptions include <cit.>, which uses the transform method to give an expression for the moments of the waiting time. In particular, they provide an approximation the expected waiting time and a corresponding upper bound on the error that is of O(1/log(1-ρ)), though the constant in their error bound is not explicitly known. In comparison, we provide an approximation with an O(1) error bound (does not increase as ρ→ 1) and an explicit constant. Another approximation for the expected waiting time that is o(1) accurate (error goes to zero as ρ→ 1) is stated following <cit.>, though we cannot find a proof. Lastly, another related research direction is on extremal queues by <cit.>, where authors identify interarrival and service-time distributions for which the bounds on expected waiting time are tight. In addition to the numerous bounds on the expected waiting time, there have been several applications of Stein's method to the single-server queue. Bounds on the exponential approximation of the customer waiting time in the G/G/1 system were obtained using equilibrium couplings by <cit.>, where the authors follow the approaches of <cit.> and exploit the fact that the waiting time is a convolution of a geometrically distributed number of i.i.d. random variables. They also get error bounds for the M/G/1 system using the generator comparison approach. 
Their G/G/1 bound depends on the expected equilibrium idle period, which is exponentially distributed in the case of the M/G/1 model since arrivals follow a Poisson process. In <cit.>, the authors apply the generator comparison approach to the workload process of the M/G/1+GI system—the single-server queue with general patience-time distribution and Poisson arrivals. They focus on establishing diffusion approximation error bounds that are universal across various patience-time distributions and system loads. The queue-length of a discrete-time G/G/1 system is considered by <cit.>, where the authors bound the error of the exponential approximation. Analogous to the equilibrium idle period in a continuous-time model, they need to bound the second moment of the steady-state unused service. They do this by assuming that the number of service completions in a single time slot is bounded. Lastly, a simple application of the prelimit approach to the M/M/1 queue length can be found in <cit.>. In addition to steady-state approximations, <cit.> establishes process-level rates of convergence to the diffusion approximation for the M/M/1 and M/M/∞ systems. More recently, <cit.> develops an approach for approximating random processes with Gaussian processes, and applies it to the G/G/∞ system. § THE G/G/1 WORKLOAD PROCESS Consider a single-server queueing system operating under a first-in-first-out (FIFO) service discipline. Let U and S be random variables having the interarrival and service time distributions, respectively. Let G(x) = (U ≤ x), and let λ = 1/ U and ρ = λ S, be the arrival rate and system utilization, respectively. We assume that the first three moments of U and S are finite, though in some places we need the fourth moment of S to be finite as well. Let V(t) be the workload in the system and R(t) be the remaining time until the next arrival at time t ≥ 0; we call R(t) the residual time. Similarly, let A(t) be the elapsed time since the last arrival prior to t ≥ 0 or simply the age at time t. Set δ = (1-ρ) and X(t) = δ V(t), and consider the right continuous with left limits (RCLL) workload process {Z(t) = (X(t),R(t)) : t ≥ 0}, which is a piecewise-deterministic Markov process with state space 𝕊 = {(x,r) ∈^2_+ : x ≥ 0, r > 0}. Note that (x,0) ∉𝕊 for x ≥ 0 because the workload process is RCLL. We let B_0 be the length of the initial busy period, with the convention that B_0 = 0 if X(0) = 0. We also let B_1,B_2, …, be the lengths of the subsequent busy periods, which are i.i.d. B̅, where B̅ is the duration of a busy period initialized by an arrival to an empty system. We also let I_n, n ≥ 0, be the duration of the idle period following B_n, and note that I_1,I_2,…, are i.i.d. I̅, where I̅ is the duration of the idle period following B̅. When ρ < 1, <cit.> say that I̅ = 1-ρ/ρB̅ < ∞. Furthermore, note that B̅ = lim_ϵ↓ 0_0,ϵ B_0 = (_δ S, U B_0), where _(x,r)(·) = (· | Z(0)=(x,r)) and the outer expectation on the right-hand side is with respect to the distributions of U and S. The workload process is a regenerative process, with regeneration happening at those instances when a customer arrives to an empty system. Going forward we assume that ρ <1 and U is nonlattice. When a customer arrives to an empty system at t=0 and (<ref>) holds, <cit.> guarantees the existence of a limiting steady-state distribution for {Z(t): t ≥ 0}. We let (X,R) have this distribution and note that (R ≤ x) = λ∫_0^x (1-G(t)) dt, x ≥ 0, since R is the steady-state residual interarrival time. 
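The cycle structure just described is easy to check by simulation. The sketch below is ours and only illustrative (the Erlang/exponential input is an arbitrary example): it generates i.i.d. copies of B̅ and I̅ by starting each cycle with an arrival to an empty system, and verifies numerically the relation between the expected idle and busy period lengths quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, rho = 1.0, 0.8
sample_U = lambda: rng.gamma(2.0, 1.0 / (2.0 * lam))    # nonlattice interarrival times, mean 1/lam
sample_S = lambda: rng.exponential(rho / lam)            # service times, mean rho/lam

def busy_idle_cycle():
    """One regeneration cycle: a busy period started by an arrival to an
    empty system, followed by the idle period until the next arrival."""
    v, t = sample_S(), 0.0       # current workload and elapsed time
    while True:
        u = sample_U()
        if u >= v:               # the server empties before the next arrival
            return t + v, u - v  # (busy period length, idle period length)
        t, v = t + u, v - u + sample_S()

cycles = [busy_idle_cycle() for _ in range(200_000)]
B, I = np.array(cycles).T
print(np.mean(I), (1.0 - rho) / rho * np.mean(B))        # the two values should roughly agree
```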
We now argue that the expected time to regeneration is finite under (<ref>) given any initial condition (x,r) ∈𝕊. Namely, we show that _x,r B_0≤(_x + δ S, U B_0) < ∞. The first inequality is true because the busy period is made longer if an arrival happens immediately. For the second inequality, let us treat the (unscaled) initial work x/δ as low-priority work and all other work as high-priority work. The low-priority work is cleared when there is no high priority work; i.e., during the idle periods in a system with only high-priority work. Thus, the initial low-priority workload is cleared when the cumulative time spent without high-priority work exceeds x/δ. If N_x is the number of idle periods required for this, then N_x < ∞ by <cit.>, and by Wald's identity, the expected end of the N_xth idle period equals N_x( B̅ + I̅), which is finite, implying (<ref>). We conclude this section by introducing some notation. At times, we will need to consider expected values with respect to some variables but not others. For example, given random variables T_n, n ≥ 1, and some subset of them T_n_1, …, T_n_k, we write ^T_n_1,…, T_n_k(f(T_1, …, T_n)) to denote an expectation with respect T_n_1,…, T_n_k only. We still write (·) to denote the expectation over all random quantities inside the parentheses. § THE LIMITING GENERATOR COMPARISON APPROACH Let Y be exponentially distributed with rate 2θ/σ^2 and, for all f ∈ C^2(), define G_Y f(x) = -θ f'(x) + 1/2σ^2 f”(x), x ∈. Then Y is the stationary distribution of a one-dimensional reflected Brownian motion with generator G_Y <cit.>. This connection is meant to add context, and is not used anywhere in this paper. Recalling that δ = (1-ρ), the following is the main result of this section. Let Y be exponentially distributed with rate 2/(λ (S-ρ U)^2). Then sup_h ∈| h(X) - h(Y) | ≤ (1-ρ) ( (2+ρ) U^2/2 U + ρI̅^2/2 I̅ + 2 S + 4 S-ρ U^3/ (S-ρ U)^2). We prove Theorem <ref> at the end of this section after introducing all then necessary ingredients. Given h: →, consider the differential equation G_Y f_h(x) = h(Y) - h(x), x ∈. The following lemma bounds f_h(x) and its derivatives, and is proved in Section <ref>. The solution to (<ref>) satisfies f_h'(0)=0. Furthermore, if h(x) is Lipschitz then f_h”'(x) is absolutely continuous and f_h”≤h'/θ and f_h”'≤ 4 h'/σ^2, and, as a consequence, f_h'(x)≤xh'/θ and f_h(x)≤1/2 x^2 h'/θ. Setting x = X in (<ref>) and taking expectations yields h(Y) - h(X) = -θ f_h'(X) + 1/2σ^2 f_h”(X). All expectations on the right-hand side are well defined by Lemma <ref> and the fact that X< ∞ <cit.>. We compare the right-hand side of (<ref>) to the stationary equation for the workload, also called the basic adjoint relationship (BAR) <cit.>, which says that if Z(0) is initialized according to the stationary distribution (X,R), then for all sufficiently regular functions f: 𝕊→, 0 = (∫_0^t( -δ 1(X(s)>0) ∂_x f(Z(s)) - ∂_r f(Z(s)) ) ds) + (∑_m=1^∞ 1(τ_m≤ t) Δ f(Z(τ_m)) ), where τ_m is the time of the mth arrival and Δ f(Z(t)) = f(Z(t)) - f(Z(t-)). The following lemma, proved in Section <ref>, contains the BAR for f(Z(t)) = f_h(X(t) - δρ R(t)). For any h ∈, 0 = -δ(( 1(X>0) - ρ) f_h'(X - δρ R )) + 1/2δ^2 (S-ρ U)^2 ∑_m=1^∞(1(τ_m≤ 1) f_h”(X(τ_m-))) + ∑_m=1^∞(1(τ_m≤ 1) ∫_0^δ (S-ρ U) (δ (S-ρ U) - v)∫_0^v f_h”'(X(τ_m-)+u) du dv ). We work with the BAR for f_h(X(t) - δρ R(t)) instead of f_h(X(t)) because it simplifies the algebra. 
Using f_h(X(t)) is possible but creates a term involving f_h'(X(τ_m-)) in (<ref>), which then necessitates more applications of the Palm inversion formula (Lemma <ref>). The BAR (<ref>) suggests what the values of θ and σ^2 in (<ref>) should be. For example, the first term in the BAR satisfies -δ(( 1(X>0) - ρ) f_h'(X - δρ R )) ≈ -δ(( 1- ρ) f_h'(X )) = -δ^2 f_h'(X), so we choose θ = δ^2. The choice of σ^2 comes from the term involving f_h”(X(τ_m-)) but is not yet apparent, because we need to use the Palm inversion formula to relate this term to f_h”(X). The following is a special case of the Palm inversion formula <cit.>. We provide an elementary proof in Section <ref> that does not use Palm calculus. Let U_m = τ_m+1 - τ_m be the interarrival time of the customer arriving at τ_m+1 and let S_m be the workload brought by the customer arriving at τ_m. For any h ∈, f_h”(X) =∑_m=1^∞(1(τ_m ≤ 1) ∫_0^U_m f_h”((X(τ_m-) + δ S_m - δ u )^+) du ) [Proof of Theorem <ref>] Fix h ∈. With θ = δ^2 and σ^2 = δ^2λ (S-ρ U)^2, subtract (<ref>) from (<ref>) to get h(Y) - h(X) = -δ^2 ( f_h'(X) - f_h'(X - δρ R )) - δ( 1(X=0) f_h'(- δρ R )) + 1/2δ^2 (S-ρ U)^2 ( λ f_h”(X) - ∑_m=1^∞(1(τ_m≤ 1) f_h”(X(τ_m-))) ) + ∑_m=1^∞(1(τ_m≤ 1) ∫_0^δ (S-ρ U) (δ (S-ρ U) - v)∫_0^v f_h”'(X(τ_m-)+u) du dv ). The result follows once we show that δ^2 f_h'(X) - f_h'(X - δρ R )≤δρ U^2/2 U, δ 1(X=0) f_h'(- δρ R )≤δρI̅^2/2 I̅, 1/2δ^2 (S-ρ U)^2 | λ f_h”(X) - ∑_m=1^∞(1(τ_m≤ 1) f_h”(X(τ_m-))) | ≤ 2 δ( S + U^2/2 U), ∑_m=1^∞|1(τ_m≤ 1) ∫_0^δ (S-ρ U) (δ (S-ρ U) - v)∫_0^v f_h”'(X(τ_m-)+u) du dv | ≤ 4δS-ρ U^3/ (S-ρ U)^2. We begin with (<ref>). Observe that δ^2 f_h'(X) - f_h'(X - δρ R )≤δ^3 ρ R f_h”≤δ^3 ρ1/2λ U^2 1/δ^2 = δρ U^2/2 U, where the last inequality is due to Lemma <ref>. To prove (<ref>), note that f_h'(0)=0 by Lemma <ref>, and therefore δ 1(X=0) f_h'(- δρ R )≤δ^2 ρf_h” ( 1(X=0) R ) = δ^3ρf_h” ( R | X=0 ) = δρI̅^2/2 I̅. The first equality follows from (X=0)=(1-ρ) <cit.>. To justify the second equality, recall from our discussion below (<ref>) that in steady state, the workload cycles between busy and idle periods with lengths distributed as B̅ and I̅, respectively. Thus, conditioned on X=0, the distribution of the residual time R is the same as the equilibrium distribution of I̅. To prove (<ref>), our starting point is λ f_h”(X) - ∑_m=1^∞(1(τ_m≤ 1) f_h”(X(τ_m-))) = ∑_m=1^∞(1(τ_m ≤ 1) (λ∫_0^U_m f_h”((X(τ_m-) + δ S_m - δ u )^+) du - λ∫_0^U_m f_h”(X(τ_m-)) du ) ). Since f_h”((X(τ_m-) + δ S_m - δ u )^+) - f_h”(X(τ_m-) )≤f_h”'δ(S_m + u), it follows that ∑_m=1^∞(1(τ_m ≤ 1) |λ∫_0^U_m f_h”((X(τ_m-) + δ S_m - δ u )^+) du - λ∫_0^U_m f_h”(X(τ_m-)) du | ) ≤ ∑_m=1^∞(1(τ_m ≤ 1) λδ(U_m S_m +U_m^2/2) f_h”') ≤λδ( S + U^2/2 U)f_h”', where in the final inequality we used the fact that (∑_m=1^∞ 1(τ_m≤ 1) ) = λ; e.g., <cit.>. Since f_h”'≤ 4h'/σ^2 and σ^2 = δ^2λ (S-ρ U)^2, the bound in (<ref>) follows. The bound in (<ref>) is argued similarly. Namely, ∑_m=1^∞(1(τ_m≤ 1) ∫_0^δ (S-ρ U) (δ (S-ρ U) - v)∫_0^v f_h”'(X(τ_m-)+u) du dv ) ≤ δ^3 λS-ρ U^3 f_h”'≤ 4δS-ρ U^3/ (S-ρ U)^2. §.§ Deriving the stationary equation For a version of Lemma <ref> involving Palm calculus, see <cit.> or <cit.>. [Proof of Lemma <ref>] Initialize (X(0),R(0)) according to (X,R). We make frequent references to Lemma <ref> for bounds on f_h(x) and its derivatives. Since f_h(x) is differentiable, the fundamental theorem of calculus yields f_h(X(t) - δρ R(t) ) - f(X(0) - δρ R(0) ) = ∫_0^t -(δ 1(X(s)>0) - δρ) f_h'(X(s) - δρ R(s)) ds + ∑_m=1^∞ 1(τ_m≤ t) Δ f_h(X(τ_m) - δρ R(τ_m) ). 
We claim that f_h(X - δρ R) < ∞, which follows from f_h(x)≤1/2 x^2 h'/θ and (X- δρ R)^2 < ∞. The latter is true because R^2 = λ U^3/3 < ∞ by (<ref>), and X^2 < ∞ since S^3 < ∞ <cit.>. Taking expectations of both sides yields 0 = ∫_0^t -(δ 1(X(s)>0) - δρ) f_h'(X(s) - δρ R(s)) ds + ∑_m=1^∞ 1(τ_m≤ t) Δ f_h(X(τ_m) - δρ R(τ_m) ). By the Fubini-Tonelli theorem, we can interchange the expectation and integral because f_h'(x)≤xh'/θ and X < ∞. Assuming for now that we can also interchange the expectation and summation, we arrive at 0 = -δ(( 1(X>0) - ρ) f_h'(X - δρ R )) + ∑_m=1^∞(1(τ_m≤ 1) Δ f_h(X(τ_m) - δρ R(τ_m) )). Let U_m = τ_m+1 - τ_m be the interarrival time of the customer arriving at τ_m+1 and let S_m be the workload brought by the customer arriving at τ_m, and observe that Δ f_h(X(τ_m) - δρ R(τ_m) ) = f_h(X(τ_m-) + δ S_m - δρ U_m ) - f_h(X(τ_m-)). Since S_m and U_m are both independent of τ_m and X(τ_m-), and (S_m - ρ U_m) = 0, Lemma <ref> follows from using the Taylor expansion f(x+y) - f(x) = y f'(x) + 1/2 y^2 f”(x) + ∫_0^y (y - v)∫_0^v f”'(x+u) du dv with x = X(τ_m-) and y = δ(S_m - ρ U_m). It remains to verify the interchange of the expectation and summation. Using (<ref>) and the fact that f_h'(x)≤xh'/θ, it follows that ∑_m=1^∞1(τ_m≤ 1) Δ f_h(X(τ_m) - δρ R(τ_m) ) ≤ h'/θ∑_m=1^∞ 1(τ_m≤ 1) δ(S-ρ U) (X(τ_m-) + δ S) ≤ h'/θδ(S+ρ U) ∑_m=1^∞(1(τ_m≤ 1) X(τ_m-)) + h'/θδ^2 (S(S+ρ U)) ∑_m=1^∞ 1(τ_m≤ 1). To show that the right-hand side is finite, we need only show that ∑_m=1^∞(1(τ_m≤ 1) X(τ_m-)) < ∞. Let N(1) be the number of arrivals on the interval [0,1]. Then ∑_m=1^∞(1(τ_m≤ 1) X(τ_m-)) ≤ ( N(1) sup_0 ≤ t ≤ 1 X(t) ) ≤ ( N(1)(X(0) + N(1))) = (N(1) X(0)) + N^2(1) ≤√( N^2(1))√( X^2) + N^2(1) < ∞, because X^2 < ∞ and N^2(1) < ∞ (since U^2 < ∞). §.§ Stein factor bounds for the exponential distribution [Proof of Lemma <ref>] We repeat the proof of <cit.>. One readily checks that f_h'(x) = - e^2 θ/σ^2 x∫_x^∞2/σ^2( h(Y) - h(y) ) e^-2 θ/σ^2 x dy satisfies (<ref>) for x ∈ and f_h'(0) =0. Since h(x) is Lipschitz, it is absolutely continuous and its derivative h'(x) exists almost everywhere. Differentiating both sides of (<ref>) yields -θ f_h”(x) + 1/2σ^2 f_h”'(x) = -h'(x) for those x where h'(x) exists, and, therefore, f_h”(x) = - e^2 θ/σ^2 x∫_x^∞2/σ^2 (-h'(y)) e^-2 θ/σ^2 x dy, implying that f_h”≤h'/θ. Rearranging the terms in (<ref>) yields f_h”'(x) = 2θ/σ^2 f_h”(x) - 2/σ^2 h'(x)≤4h'/σ^2. §.§ The Palm inversion formula [Proof of Lemma <ref>] Initialize Z(0)=(X(0),R(0)) according to its stationary distribution and note that f_h”(X) = (∫_0^1 f_h”(X(t)) dt ). Interchanging the expectation is justified since h ∈ implies f_h” < ∞ by Lemma <ref>. Let N(1) be the number of arrivals on [0,1]. Then ∫_0^1 f_h”(X(t)) dt = ∫_0^τ_1 f_h”(X(t)) dt + ∑_m=1^N(1)∫_τ_m^τ_m+1 f_h”(X(t)) dt - ∫_1^τ_N(1)+1 f_h”(X(t)) dt. Note that ∫_τ_m^τ_m+1 f_h”(X(t)) dt = ∫_0^U_m f_h”((X(τ_m-) + δ S - δ u )^+) du. Furthermore, since τ_1 = R(0) and τ_N(1)+1 = 1 + R(1), ∫_0^τ_1 f_h”(X(t)) dt - ∫_1^τ_N(1)+1 f_h”(X(t)) dt = ∫_0^R(0) f_h”(X(t)) dt - ∫_0^ R(1) f_h”(X(1+t)) dt, and we note that the expected value of the right-hand side is zero by stationarity. § THE PRELIMIT GENERATOR COMPARISON APPROACH We begin with an informal outline of the prelimit approach. Recall that any z ∈𝕊 takes the form z = (x,r) and define the workload generator G_Z f(z) = lim_t → 0_z f(Z(t)) - f(z) /t , z ∈𝕊, for any function f(z) for which the right-hand side is well defined. 
Then, provided that the value function F_h(z) = ∫_0^∞( _z h(X(t)) - h(X) ) dt, z ∈𝕊, exists and is sufficiently regular, it satisfies the Poisson equation G_Z F_h(z) = -δ 1(x > 0) ∂_x F_h(z) - ∂_r F_h(z) = h(X) - h(x), z ∈𝕊. The left-hand side of (<ref>) does not have jump terms like in the stationary equation in Section <ref>, because the infinitesimal drift from any state z=(x,r) with r > 0 is deterministic. Thus, G_Z f(z) does not encode the full dynamics of the workload process for arbitrary f(z). However, F_h(z) is special because it lies in the domain of the generator; i.e., (1(τ_m≤ 1) Δ F_h(Z(τ_m)))= 0 in the notation of Section <ref>. Similar to (<ref>), we let Y be an exponential random variable with rate 2θ/σ^2 and G_Y be the corresponding generator. The value of σ^2 we use Sections <ref> and <ref> is different from the σ^2 used in Section <ref>. The left-hand side of (<ref>) depends on both x and r, while the right-hand side depends only on x, suggesting, at first, that we are free to choose any r that we want. However, since G_Y f(x) only depends on x, it is not obvious how to compare G_Z to G_Y. We propose setting r = R, where R is defined by (<ref>), and taking expectations in (<ref>), which results in -δ 1(x > 0) ∂_x F_h(x,R) + λ(F_h(x + δ S,U) - F_h(x,U) ) = h(X) - h(x). In order to compare the left-hand side to G_Y f(x), we must first decide which f(x) to use. The challenge is that (<ref>) depends on both F_h(·, R) and F_h(·, U). We resolve this by establishing a relationship between the two functions, allowing us to rewrite (<ref>) in terms of F_h(·, R). We then use Taylor expansion to compare the rewritten (<ref>) to G_Yf(x) with f(x) = F_h(x,R). The result is Lemma <ref>, the main result of Section <ref>, which gives an expression for h(X) - h(Y) in terms of the second and third derivatives of F_h(x,R), also known as Stein factors. Stein factor bounds are the topic of Section <ref>. The remainder of Section <ref> formalizes the prelimit approach. Namely, in Section <ref>, we rigorously derive (<ref>). We show how this can be done without explicitly verifying that F_h(z) is well defined, by working with the finite-horizon value function instead. This technique be of independent interest for models where it is challenging to verify that F_h(z) is well defined. Then, in Section <ref> we establish the relationship between F_h(·, R) and F_h(·, U) and use it to compare G_Z and G_Y via Taylor expansion, resulting in Lemma <ref>. §.§ The Poisson equation. To use F_h(z) we first need to verify that it is well defined, i.e., ∫_0^∞_z h(X(t)) - h(X) dt < ∞, z ∈𝕊. When U and S have finite p+1 moments, it is shown by <cit.> that _z X(t) - X decays at a rate of 1/t^p-1 for a class of queueing-network models much more general than the G/G/1 system, but we wish to avoid using their complex machinery. Another way to verify (<ref>) is by noticing that ∫_0^∞_z h(X(t)) - h(X) dt ≤∫_0^∞(_z h(X(t)) - _Z h(X(t)) ) dt, where the outer expectation is taken with respect to the stationary distribution of Z. To bound the right-hand side we can couple two workload processes, one with initial condition z and one with Z, and bound the expected coupling time in terms of z and Z. Constructing such a coupling is complicated by the fact that z and Z may differ both in the initial workload X(0) and residual time R(0). 
In this paper we propose an alternative approach to arrive at (<ref>) that bypasses the need to verify (<ref>) directly, and involves using the M-horizon value function F^M_h(z) = ∫_0^M( _z h(X(t)) - h(X) ) dt, z ∈𝕊. Our starting point is the following proposition, which is proved in Appendix <ref>. For any h ∈ and almost all M > 0, -δ∂_x F_h^M(x,R) + λ(F_h^M(x+δ S,U) - F_h^M(x,U) ) = ( _x,R h(X(M)) - h(x) ). We wish to take M →∞ in (<ref>) and recover (<ref>). However, since we do not assume that F_h(z) is well defined, we first need to specify what we mean by both ∂_x F_h(x,R) and (F_h(x + δ S,U) - F_h(x,U) ). We define F_h(x+ϵ, r) - F_h(x,r) = ∫_0^∞( _x+ϵ,r h(X(t)) - _x,r h(X(t)) )dt, x ≥ 0, ϵ,r > 0. To argue that this quantity is well defined, we now introduce a synchronous coupling of the workload process. This coupling also plays a central role in Section <ref>, and is far simpler than the coupling described following (<ref>) because the coupled workloads differ only in the initial workload, but not the initial residual time. Given ϵ > 0, let {Z^(ϵ)(t) = (X^(ϵ)(t),R(t)) : t ≥ 0} be a coupling of {Z(t)= (X(t),R(t)): t ≥ 0} with initial condition X^(ϵ)(0) = X(0)+ϵ. Both systems share the same arrival process, and the service time of each arriving customer is identical in both systems. Similar to B_0, we define B_0^(ϵ) = inf{t ≥ 0: X^(ϵ)(t) = 0}. It follows that for every sample path, ∂/∂ t( X^(ϵ)(t) - X(t)) = -δ 1(X(t) = 0, t ∈ [0,B_0^(ϵ)]), Z^(ϵ)(t) = Z(t) for t > B_0^(ϵ). We adopt the convention that _x,r(·) is the expected value conditional on Z(0) = (x,r), even if the quantity inside the expectation is a function of Z^(ϵ)(t). Considering (<ref>), our synchronous coupling yields F_h (x+ϵ, r) - F_h(x,r) = ∫_0^∞(_x+ϵ,r h(X(t)) - _x,r h(X(t)) ) dt = _x,r∫_0^B^(ϵ)_0( h(X^(ϵ)(t)) - h(X(t)) ) dt. The right-hand side is well defined because h(X^(ϵ)(t)) - h(X(t))≤ϵh' and _x,rB^(ϵ)_0 = _x+ϵ,rB_0 < ∞ by (<ref>). A similar line of reasoning yields the following two lemmas. The detailed proofs are found in Appendix <ref>. Let T ≥ 0 be any random variable and define ∂_x F_h(x,T) by ∂_x F_h(x,T) = lim_ϵ→ 01/ϵ( F_h(x+ϵ, T) - F_h(x,T) ), x ≥ 0, with the convention that ∂_x F_h(x,T) = ∂_x F_h(x,r) when T=r is deterministic. Then for any h ∈, ∂_x F_h(x,T) = (_x,T∫_0^B_0 h'(X(t)) dt), x ≥ 0. In the special case that T=r is deterministic, (<ref>) yields an expression for ∂_x F_h(x,r), which implies, in particular, that ∂_x F_h(x,T) = ∂_x F_h(x,T), x ≥ 0. For any h ∈ and x ≥ 0, lim_M →∞( _x,R h(X(M)) ) = h(X), lim_M →∞∂_x F_h^M(x,R) = ∂_x F_h(x,R), lim_M →∞(F_h^M(x+δ S,U) - F_h^M(x,U) ) = (F_h(x+δ S,U) - F_h(x,U) ). Applying Lemma <ref> to take M →∞ in (<ref>) of Proposition <ref>, using the fact that ∂_x F_h(x,R) = ∂_x F_h(x,R) from Lemma <ref>, and noting from (<ref>) that ∂_x F_h(0,R) = 0, we arrive at -δ∂_x F_h(x,R) + λ(F_h(x + δ S,U) - F_h(x,U) ) = h(X) - h(x), x ≥ 0. In the following section, we replace (F_h(x + δ S,U) - F_h(x,U) ) by a term where the expectation is taken over R instead of U. We then perform a Taylor expansion to compare the left-hand side with G_Y. §.§ Taylor expansion. Let S' be an independent copy of S and introduce the random variable J(x,r) = - (x ∧δ r) + δ S', (x,r) ∈𝕊. We present the following lemma, which is proved in Appendix <ref>. 
For any (x,r) ∈𝕊, s ≥ 0, and h ∈, F_h(x+ δ s,r) - F_h(x,r) = ( F_h(x+δ s + J(x,r),U) - F_h(x+J(x,r),U) ) + ϵ(x,r,s), where U on the right-hand side is independent of S' and, therefore, J(x,r), and ϵ(x,r,s) = ^S'( ∫_-x ∧(δ r)^-(x+δ s) ∧(δ r)∂_x^U F_h(x+δ s +v +δ S',U) dv) + ∫_0^r( h((x + δ s-δ t)^+) - h((x -δ t)^+) ) dt, where we recall that ^U(·) and ^S'(·) denote expectations with respect to U only and S' only, respectively. To simplify notation, we define F̅_h'(x) = ∂_x F_h(x,R). Replacing x by x + J(x,R) in the Poisson equation (<ref>) and taking expectations yields h(X) - h(x + J(x,R')) = -δF̅_h'(x + J(x,R)) + λ(F_h(x+ δ S,R) - F_h(x,R) ) - (ϵ(x,R,S)). We emphasize that F̅_h'(x + J(x,R)) is actually ^R∂_x^R' F_h'(x + J(x,R),R'), where R and R' are independent copies. We now perform Taylor expansion on the right-hand side of (<ref>). The following lemma presents the Taylor expansion of (<ref>), and is proved in Appendix <ref>. Let θ = δ^2, σ^2 = δ^2 λ(U-S)^2, and let Y be exponentially distributed with rate 2θ/σ^2. Fix h ∈ and assume that F̅_h”(x) and F̅_h”'(x) exist for all x ≥ 0, and that F_h'(Y), F_h”(Y) < ∞. Then h(X) - h(Y) = ( h(Y + J(Y,R)) - h(Y) ) - (ϵ(Y,R,S)) -δ( 1(δ R ≥ Y) ( F̅_h'(δ S ) - F̅_h'(Y) - δ(S-R)F̅_h”(Y)) ) -δ( 1(δ R < Y) ∫_0^δ(S -R )∫_0^vF̅_h”'(Y+u) du dv ) + λ∫_0^δ S (δ S - v)∫_0^vF̅_h”'(Y+u) du dv In the next section, we verify the differentiability of F̅_h'(x) and bound its derivatives in order to bound the right-hand side of the expression for h(X) - h(Y) in Lemma <ref>. § STEIN FACTOR BOUNDS FOR THE G/G/1 WORKLOAD PROCESS This section is focused on bounding F̅_h”(x) and F̅_h”'(x) with the ultimate goal of proving the following theorem. Explicit expressions for all constants can be recovered from the proof of the theorem at the end of this section. Let Y be exponentially distributed with rate 2θ/σ^2, where θ = δ^2, σ^2 = δ^2 λ(U-S)^2, and δ = 1-ρ. Suppose that U has a density G'(x) and hazard rate η(x) = G'(x)/(1-G(x)). Define η = inf{η(x): x ≥ 0} and η = sup{η(x) : x ≥ 0}. * If η(x) is nonincreasing and η = η(0) < ∞, then X - Y≤ (1-ρ) C_3 sup_h ∈_3 h(X) - h(Y)≤ (1-ρ) C'_3 + (1-ρ)^2 C'_4, where C_3 and C_3' are constants that depend on η and the first three moments of U and S. Similarly, C_4' depends on η, the first three moments of U and first four moments of S. * If η < ∞ and η>0, then X - Y≤ (1-ρ) C_3 sup_h ∈_3 h(X) - h(Y)≤ (1-ρ) C'_3 + (1-ρ)^2 C'_4, where C_3 and C_3' are constants that depend on η, η and the first three moments of U and S. Similarly, C_4' depends on η,η, the first three moments of U and first four moments of S. We make a few comments before moving on. Case (a) covers certain heavy-tailed interarrival distributions, like when G'(x) decays polynomially in x as x →∞. Case (b) implies that the interarrival time distribution has light tails. The assumption that the density of U exists and the assumptions on the hazard rate are all made to simplify the analysis. Section <ref> can accommodate other forms of hazard rates not covered by cases (a) and (b) at the expense of added complexity. Extending our results to cases when U has point masses would require more effort. We elaborate on this following the statement of Lemma <ref> in Section <ref>. Comparing Theorems <ref> and <ref>, the former only requires h ∈, whereas the latter assumes h ∈_3, but we can combine both theorems for an even better result as follows. 
Let W denote the steady-state customer waiting time, let V = X/δ be the unscaled workload, and to avoid confusion let Y_1 and Y_2 denote the exponential random variables appearing in Theorems <ref> and <ref>, respectively. It is well known that I̅^2/2 I̅ = (S-U)^2/2(U-S) - W = (S-U)^2/2(U-S) - ρ S^2/2 S - ρ V = 1/1-ρ ( Y_2 - X) - ρ S^2/2 S + X where the first and second equalities are due to (2.5) of Chapter X.2 and Corollary X.3.5 of <cit.>, respectively, and the final equality follows from (U-S) = (1/λ)(1-ρ). Under the assumptions of case (a), Theorem <ref> yields I̅^2/2 I̅≤C_3 + X. To bound X, we can either exploit the relationship between V and W together with one of the many bounds on W listed in <cit.> that depend on the first two moments of U and S, or we could use the cruder bound of X ≤ Y + (1-ρ)C_3 due to Theorem <ref>. In either case, combining (<ref>) with the upper bound of Theorem <ref> yields a bound on sup_h ∈ h(X) - h(Y_1) in terms of η and the first three moments of S and U alone. Lastly, we note that in case (b), the idle period I̅ is trivially upper bounded by an exponential random variable with rate η. Thus, the expected equilibrium idle period length I̅^2 / (2I̅) is also bounded by 1/η. Nevertheless, we include case (b) to show that it can be covered by our analysis. We prove Theorem <ref> after first introducing several intermediate results. The following two lemmas contain the Stein factor bounds for the workload process. We split the bounds into two lemmas because their proofs are very different. Lemma <ref> is proved in Section <ref> while Lemma <ref> is proved in Section <ref>. Suppose that η < ∞. Then for any h ∈ and x ≥ 0, δF̅_h'(x)≤ x( 1 + (λ + η) B̅ ) and δ∂_x F_h(x,U)≤ x( 1 + 2ηB̅ ). Furthermore, F̅_h”(x) exists for any h ∈_2, and for any x ≥ 0, δF̅_h”(x)≤ (1+x)( 1 + (λ + η) B̅) and δ∂_x^2 F_h(x,U)≤ (1+x)(1 + 2ηB̅). Suppose that η < ∞. Then F̅_h”'(x) exists for all x ≥ 0 and h ∈_3. Furthermore, for any h ∈_3, * if η(x) is nonincreasing, then δ^2 F̅_h”'(x)≤ λ(δ S (1+x+δ S) )(1 + 2ηB̅) + (δ U > x) 3ληB̅ + (δ U < x < δI̅ +δ U) λη (1 + η S) B̅, where U and I̅ are independent. In the special case when h(x) = x, δ^2 F̅_h”'(x)≤ (δ U > x) 3ληB̅ + (δ U < x < δI̅ +δ U) λη (1 + η S) B̅. * if η>0, then for all x ≥ 0, δ^2 F̅_h”'(x)≤ λ(δ S (1+x+δ S)( 1 + 2ηB̅) ) + (U > x/δ) 3 ληB̅ + ( 1(U < x/δ) e^-η (x/δ-U)) ) ληB̅ and in the special case when h(x) = x, δ^2 F̅_h”'(x)≤ (U > x/δ) 3 ληB̅ + ( 1(U < x/δ) e^-η (x/δ-U)) ) ληB̅. The bounds in both Lemmas contain B̅. The only bound on B̅ that we know of is B̅≤ 0.9 ρ√(Var(U-S))/1-ρexp( 5.4U-S^3/ (Var(U-S) )^3/2 + 0.8 (U-S) /√(Var(U-S) )), which is due to <cit.> and involves the first three moments of S and U. The following lemma, proved in Section <ref>, yields an alternative bound. Recall that I_1, I_2, … are i.i.d. I̅. For any ρ < 1, V = λ S^2/2 + λB̅[ ( S - U)^+ + ∑_k=2^∞( S - U - ∑_i=1^k-1 I_i)^+]. Denoting the steady-state customer waiting time distribution by W, a consequence of (<ref>) is that B̅ = W S/ ( S - U)^+ + ∑_k=2^∞( S - U - ∑_i=1^k-1 I_i)^+≤ρVar(S-U)/2 (1-ρ) ( S - U)^+. The bound in (<ref>) is a result of the upper bound on W due to <cit.>. Tighter bounds on W have been established since then <cit.>, and any one of them could be used instead. We require one final auxiliary lemma, proved in Appendix <ref>, which uses the Stein factor bounds from Lemmas <ref> and <ref> to bound the right-hand side of the Taylor expansion in Lemma <ref>. We recall from (<ref>) that R^k = λ U^k+1/(k+1). 
Let Y be exponentially distributed with rate 2θ/σ^2, where θ = δ^2, σ^2 = δ^2 λ(U-S)^2, and set ν = 2θ/σ^2. For any h ∈_3, h(Y + J(Y,R)) - h(Y) ≤δ( R + S), ϵ(Y,R,S)≤δ R S + δ^2 ν R( S^2 + ( S)^2)( 1 + 2 ηB̅ ) δ 1(δ R ≥ Y) ( F̅_h'(δ S ) - F̅_h'(Y) - δ(S-R)F̅_h”(Y)) ≤δ^2 ( ν( 2 S R + 5 R^2 ) + R^2 ) ( 1 + (λ + η) B̅) Furthermore, * if η(x) is nonincreasing, then λ| ∫_0^δ S (δ S - v)∫_0^vF̅_h”'(Y+u) du dv | ≤λδ( S^3 (S (1+1/ν +δ S) ) + δ S^4 ) λδ( 1 + 2ηB̅) + λδ S^3 ( νδ 3 ηB̅ + νδ ( U + I̅) λη (1 + η S ) B̅) δ| 1(δ R < Y) ∫_0^δ(S -R )∫_0^vF̅_h”'(Y+u) du dv | ≤δ((S-R)^2 ((S' (1+1/ν+δ S+δ S') ) λδ( 1 + 2ηB̅) + νδ 3 ηB̅ + νδ ( U + I̅) λη (1 + η S) B̅) ), and in the special case that h(x) = x, λ| ∫_0^δ S (δ S - v)∫_0^vF̅_h”'(Y+u) du dv | ≤λδ S^3 ( νδ 3 ηB̅ + νδ ( U + I̅) λη (1 + η S) B̅) * if η>0, then for all x ≥ 0, λ∫_0^δ S (δ S - v)∫_0^vF̅_ĥ”'(Y+u) du dv ≤λδ( S^3 (S (1+1/ν +δ S) ) + δ S^4 ) λδ( 1 + 2ηB̅) + λδ S^3 ( νδ 3 ηB̅ + δνλ (η / η) B̅) δ∫_0^J(Y,R)∫_0^vF̅_ĥ”'(Y+u) du dv ≤δ((S-R)^2 ((S' (1+1/ν+δ S+δ S') ) λδ( 1 + 2ηB̅) + νδ 3 ηB̅ + δνλ (η / η) B̅) ) and in the special case that h(x) = x, λ∫_0^δ S (δ S - v)∫_0^vF̅_ĥ”'(Y+u) du dv ≤λδ S^3 ( νδ 3 ηB̅ + δνλ (η / η) B̅) Note that all B̅ terms appearing in the bounds of Lemma <ref> are multiplied by δ = (1-ρ), which compensates for the 1/(1-ρ) term appearing in the bounds of B̅ in (<ref>) and (<ref>). [Proof of Theorem <ref>] The result follows from applying the bounds in Lemma <ref> to the expression for h(X) - h(Y) in Lemma <ref>. To bound B̅, we use the better bound among (<ref>) and (<ref>), noting that the former depends only on the first three moments of U and S. §.§ Second-derivative bound In this section we prove Lemma <ref>. Recall that A(t) is the age of the interarrival process at time t. Our first step is the following lemma, which is proved in Appendix <ref> by differentiating ∂_x F_h(x,r). For any h ∈_2, any absolutely continuous random variable T ≥ 0 with bounded density θ(x), and any x ≥ 0, ∂_x^2 F_h(x,T) = ∂_x F_h'(x,T) + 1/δ h'(0) + 1/δ( θ(x/δ) + (1(T<x/δ)_x,Tη(A(B_0)) ) ) ( ∂_x F_h(δ S, U) ), where ∂_x F_h'(x,T) is as in Lemma <ref> but with h'(x) instead of h(x). In the proof of Lemma <ref>, the term lim_ϵ→ 0 (1/ϵ)_x,r(R(B_0) < ϵ/δ) appears. Since R(B_0) = I_0, this quantity is the density of the idle period I_0 at zero. When U has a density, the density of I_0 at zero equals _x,rη(A(B_0)), a term that plays an important role in our third-derivative bounds. Extending our results to the case that U has point masses would involve working with lim_ϵ→ 0 (1/ϵ)_x,r(R(B_0) < ϵ/δ) directly. [Proof of Lemma <ref>] For any h ∈_2 and any random variable T ≥ 0, Lemma <ref> implies that δ∂_x F_h(x,T)≤δ( _x,T B_0). We claim that it suffices to show |δ(_x,RB_0)| ≤ x( 1 + (λ + η)B̅) and |δ(_x,UB_0)| ≤ x( 1 + 2ηB̅). Note that (<ref>) follows trivially from (<ref>). Furthermore, since Lemma <ref> implies that ∂_x F_h(δ S, U) ≤B̅, we can apply (<ref>) and (<ref>) to the expression for F̅_h”(x) = ∂_x^2 F_h(x,R) in Lemma <ref>, together with the fact that the density of R is bounded by λ, to conclude the first bound in (<ref>). Since the density of U satisfies G'(x) = η(x) (1-G(x)) ≤η, a similar argument yields the second bound in (<ref>). We now prove (<ref>), starting with the bound on |δ(_x,RB_0)|. Let ĥ(x) = x, in which case Lemma <ref> and the fact that F̅_ĥ'(0) = 0 yield δ(_x,RB_0) = δF̅_ĥ'(x) = ∫_0^xδF̅_ĥ”(y) dy. 
Thanks to Lemma <ref>, we know that the integrand satisfies δF̅_ĥ”(x) = 1 + ( λ(1-G(x/δ)) + (1(R<x/δ)_x,Rη(A(B_0)) ) ) B̅, where we used the facts that the density of R is λ(1-G(x)) and that ( ∂_x F_ĥ(δ S, U) ) = B̅. Our assumption that the hazard rate is bounded yields the first bound in (<ref>). Since the density of U is bounded by η, the second bound follows by a similar argument. §.§ Third-derivative bound In this section we prove Lemma <ref>. Differentiating twice both sides of the Poisson equation (<ref>) yields δF̅_h”'(x) = λ∂_x^2(F_h(x + δ S,U) - F_h(x,U) ) + h”(x). The following lemma is proved in Appendix <ref>. It follows directly from Lemma <ref> after verifying that ∂_x^2(F_h(x + δ S,U) - F_h(x,U) ) = ^S(∂_x^2^U F_h(x + δ S,U) - ∂_x^2^U F_h(x,U)). Suppose that U has a bounded density. Then for any h ∈_3 and any x ≥ 0, δ^2 F̅_h”'(x) = λδ(∫_0^δ S∂_x^2 F_h'(x+y,U) dy ) + λ( G'(x/δ + S) - G'(x/δ))( ∂_x F_h(δ S, U) ) + λ(1(x/δ < U <x/δ + S)_x+ δ S,Uη(A(B_0))) ( ∂_x F_h(δ S, U) ) + λ( 1(U < x/δ) (_x+ δ S,Uη(A(B_0)) - _x,Uη(A(B_0))) ) ( ∂_x F_h(δ S, U) ) + δ h”(x). All of the terms in the expression for F̅_h”'(x) are straightforward to bound, with the exception of ( 1(U < x/δ) (_x+ δ S,Uη(A(B_0)) - _x,Uη(A(B_0))) ) ( ∂_x F_h(δ S, U) ). Naively bounding this term by ηB̅ is not good enough. Sharper bounds are presented in the following lemma, which is proved in Section <ref>. Assume that η < ∞. For any x,s ≥ 0 and r < x/δ, * if η(x) is nonincreasing, then _x+ δ s,rη(A(B_0)) - _x,rη(A(B_0))≤η (1 + η s) (I̅ > x/δ-r). * if η>0, then _x+ δ s,rη(A(B_0)) - _x,rη(A(B_0))≤η e^-η (v-r). [Proof of Lemma <ref>] Consider the expression of δ^2 F̅_h”'(x) in Lemma <ref>. We now bound each term there one by one. Using (<ref>), λ(∫_0^δ Sδ∂_x^2 F_h'(x+y,U) dy ) ≤λ(δ S (1+x+δ S) )(1 + 2ηB̅). Next, since G'(x) = η(x) (U > x) ≤η(U > x), λ( G'(x/δ + S) - G'(x/δ))( ∂_x F_h(δ S, U) ) ≤ 2λη(U > x/δ) B̅, and λ(1(x/δ < U <x/δ + S)_x+ δ S,Uη(A(B_0))) ( ∂_x F_h(δ S, U) ) ≤λ(U > x/δ) ηB̅. Lastly, Lemma <ref> and the fact that h”≤ 1 imply that in case (a), λ( 1(U < x/δ) _x+ δ S,Uη(A(B_0)) - _x,Uη(A(B_0))) ( ∂_x F_h(δ S, U) ) is bounded by λη (1 + η S) ^U(1(U < x/δ) ^I̅ (1(I̅ > x/δ-U)) ) B̅ = λη (1 + η S) (U < x/δ < I̅ + U)B̅, where U and I̅ are independent, and in case (b), it is bounded by λη( 1(U < x/δ) e^-η (x/δ-U)) B̅. When h(x) = x, the bounds follow from the fact that ∂_x F_ĥ '(x,U) = 0. §.§.§ The renewal process driven by idle times In this section we prove Lemma <ref>. Recall that A(t) is the age of the interarrival process at time t≥ 0, and note that given R(0), the age A(t), t ∈ [0,R(0)), has no impact on the evolution of the workload process. Thus we assume, without loss of generality, that A(0) = 0. Define ℓ_0 = B_0 and ℓ_n = ℓ_n-1 + I_n-1 + B_n, n ≥ 1, to be the start of the zeroth and nth idle periods of {Z(t): t ≥ 0}, respectively. Also let T_-1 = 0 and T_n = ∑_i=0^n I_i, n ≥ 0, and define {Γ(t) = A(ℓ_n + t) : t ∈ [T_n-1,T_n), n ≥ 0 }, which tracks {A(t): t ≥ 0} during idle periods of the workload process. For x ≥ 0 we let v = x/δ denote the unscaled workload. Recall our synchronous coupling {Z^(ϵ)(t): t ≥ 0} and its initial busy-period duration B_0^(x), which equals the time when {Z(t): t ≥ 0} has idled for exactly v time units; see (<ref>). It follows that A(B^(x)) = Γ(v), x = δ v ≥ 0, and, therefore, for any (x,r) ∈𝕊 and s ≥ 0, _x+δ s,rη (A(B)) - _x,rη(A(B)) = _0,rη (A(B^(x+δ s))) - _0,rη (A(B^(x))) = _0,r( η (Γ(v+s)) - η (Γ(v)) ), where _x,r(·) is the expectation conditioned on Z(0) = (x,r). 
Now fix x and r < x/δ = v, and define Z(0-) = lim_ϵ↓ 0 Z(-ϵ). We claim that _0,r( η (Γ(v+s)) - η (Γ(v)) ) = (( η (Γ(v+s)) - η (Γ(v)) | Γ(r) ) | Z(0)=(0,r)) = (( η (Γ(v-r+s)) - η (Γ(v-r)) | Γ(0)) | Z(0-)=(0,0) ) = ( η (Γ(v-r+s)) - η (Γ(v-r)) | Z(0-)=(0,0) ). The first equality follows from the tower rule and the fact that conditioned on Γ(r), the value of Γ(r+t), t ≥ 0, is independent of Z(0). The second equality follows, once we observe that Γ(r) given Z(0)=(0,r) has the same distribution as Γ(0) given Z(0-)=(0,0). The latter claim is true because given Z(0)=(0,r), the definition of Γ(t) in (<ref>) implies that Γ(r) = A(ℓ_1) d= A(B̅), because ℓ_1 = B_0 + I_0 + B_1 = r+B_1. Similarly, given Z(0-)=(0,0), i.e., a customer arrives to an empty system at t = 0, it follows that B_0d=B̅ and therefore, Γ(0) = A(B_0) d= A(B̅). Now consider any process {Γ(t): t ≥ 0} that is equivalent in distribution to {Γ(t): t ≥ 0}. Then ( η (Γ(v-r+s)) - η (Γ(v-r)) ) = ( η(Γ(v-r+s)) - η(Γ(v-r)) | Γ(0) = Γ(0) ) = ( η( Γ(v-r+s)) - η(Γ(v-r+s)) | Γ(0) = Γ(s) ). Thus, we arrive at _x+δ s,rη (A(B)) - _x,rη(A(B)) = _0,r( η (Γ(v+s)) - η (Γ(v)) ) = ( η( Γ(v-r+s)) - η(Γ(v-r+s)) | Z(0-)=(0,0), Γ(0) = Γ(s) ). To bound the right-hand side, we now specify the joint distribution of {(Γ(t), Γ(t)) : t ≥ 0} and analyze the coupling time of this process. We first argue that {Γ(t) : t ≥ 0} is equivalent to the continuous-time Markov process defined by the generator G_Γ f(γ) = f'(γ) + η(γ) ( f(A( B̅)) - f(γ)), γ≥ 0. By (<ref>), {Γ(t): t ≥ 0} increases at a unit rate and jumps at times t = T_n, n ≥ 0, with Γ(T_n) = A(ℓ_n+1) d= A(B̅), where ℓ_n denotes the end of the nth busy period and, consequently, the start of the nth idle period. Finally, given t>0, the probability that a jump occurs on the interval (t,t+dt) conditioned on Γ(t) equals (Γ(t) < U < Γ(t)+dt)/(U > Γ(t)) = η(Γ(t))dt + o(dt), where o(dt) → 0 as dt → 0. Next, we specify the joint evolution of {(Γ(t), Γ(t)): t ≥ 0}. Defining η_m(x,y) = min{η(x),η(y)}, η_Δ(x,y) = max{η(x),η(y)} - min{η(x),η(y)}, we let {(Γ(t), Γ(t)), t ≥ 0} have the same distribution as the Markov process defined by the generator G_J f(γ̃, γ) = ∂_γ̃ f(γ̃, γ) + ∂_γ f(γ̃, γ) + η_m(γ̃, γ) ( f(A( B̅), A( B̅)) - f(γ̃, γ) ) + η_Δ(γ̃, γ) 1(η(γ) < η(γ̃)) ( f(A( B̅), γ)- f(γ̃, γ)) + η_Δ(γ̃, γ) 1(η(γ) > η(γ̃)) (f(γ̃, A( B̅)) - f(γ̃, γ)). Note that the marginal law of either component of this process is equivalent to the Markov process defined in (<ref>). Furthermore, when this process jumps it either couples, or only one of the components jumps. Having defined our coupling, we proceed to bound (<ref>). [Proof of Lemma <ref>] Fix x = δ v ≥ 0 and r < v, and let τ_C = inf{t ≥ s : Γ(t) = Γ(t)}, it follows that = ( η( Γ(v-r+s)) - η(Γ(v-r+s)) | Z(0-)=(0,0), Γ(0) = Γ(s) ) ≤ η(τ_C > v-r+s | Z(0-)=(0,0), Γ(0) = Γ(s) ), If η > 0, then η_m(γ̃, γ) ≥η, and it follows by the dynamics of (<ref>) that coupling is guaranteed to happen after an exponentially distributed amount of time with rate η. Thus, the right-hand side is bounded by η e^-η (v-r). Now assume that η(x) is nonincreasing. Note that Γ(0)=A(B_0) by (<ref>), and that A(B_0) d= A(B̅) given Z(0-) = (0,0). If Γ(s) < Γ(s), then η_m(Γ(t),Γ(t)) = η(Γ(t)) for all t ≥ s until the first jump of {Γ(t): t ≥ 0}, at which point coupling occurs. Since Γ(s) = Γ(0) d= A(B̅), the first jump after s happens after I̅ amount of time, implying that the probability of no jump on (s,v-r+s] is at most (I̅ > v-r). 
Conversely, if Γ(s) > Γ(s), then τ_C corresponds to the first jump time after s of {Γ(t): t ≥ 0}. We will shortly prove that the probability that {Γ(t): t ≥ 0} does not jump on (s,v-r+s] is at most (I̅ > v-r) N(s) ≤(I̅ > v-r)η s, where N(t) is the number of jumps made by {Γ(t): t ≥ 0} on [0,t]. The inequality in (<ref>) is justified because our nonincreasing hazard rate assumption implies that η(x) ≤η, x ≥ 0, which further implies that N(s) ≤η s because { N(t) : t ≥ 0 } can be dominated by a Poisson process of rate η. Adding (<ref>) and (<ref>) yields an upper bound of (1 + η s) (I̅ > v-r) on the probability of not coupling on the interval (s,v-r+s] when η(x) is nonincreasing. It remains to verify (<ref>). Since Z(0-) = (0,0) and Γ(0) = A(B_0) d=A(B̅), the inter-jump times I_n, n ≥ 0, of {Γ(t): t ≥ 0} are i.i.d. I̅. To make the notation more standard, let us shift the indices of the inter-jump times forward by one; i.e., the first jump happens at I_1 instead of I_0, the second jump happens at I_1+I_2 instead of I_0 + I_1, etc. We also let J_0 = 0 and J_n, n ≥ 1, be the time of the nth jump, which satisfies J_n = J_n-1 + I_n. It follows that (N(v-r+s) - N(s) = 0) = ( I_N(s)+1 > v-r+s - J_N(s)) = ∑_n=0^∞(I_n+1 > v-r+s - J_n, N(s) = n). Since {N(s)=n} = {J_n < s, I_n+1>s-J_n}, the right-hand side equals ∑_n=1^∞(I_n+1 > v-r+s - J_n,J_n < s) ≤ ∑_n=1^∞(I_n+1 > v-r,J_n < s ) = (I̅ > v-r) ∑_n=1^∞(J_n < s ) = (I̅ > v-r) ∑_n=1^∞(N(s) ≥ n ) = (I̅ > v-r) N(s), where the first equality follows from the independence of I_n+1 and J_n, as well as the fact that I_n+1d=I̅. §.§ The expected duration of the busy period initialized by an arrival to an empty system [Proof of Lemma <ref>] We first prove (<ref>). Let ĥ(x) = x, recall that F̅_ĥ(0) = 0 due to (<ref>), and consider the Poisson equation (<ref>) with h(x) = ĥ(x) evaluated at x = 0, which results in λ( F_ĥ(δ S, U) - F_ĥ(0,U)) = δ V. Using the synchronous coupling {Z^(ϵ)(t)} introduced in Section <ref>, it follows that δ V = λ(∫_0^B^(δ S)_0,U( X^(δ S)(t) - X(t) ) dt ). From (<ref>) we know that the difference X^(δ S)(t) - X(t) decays at rate δ only during the idle periods of {X(t) : t ≥ 0}. Recall that the idle and busy period durations are I_0, I_1, …, and B_0, B_1, B_2, …, respectively, and that B_0 = 0 because X(0) = 0. It follows that for all times t corresponding to the busy period B_k, k ≥ 1, X^(δ S)(t) - X(t) = δ( S - ∑_i=0^k-1 I_i)^+. Letting ℐ = {t ∈ [0,B^(δ S)]: X(t) = 0}, it follows that ∫_0^B^(δ S)( X^(δ S)(t) - X(t) ) dt = ∫_ℐ( X^(δ S)(t) - X(t) ) dt + ∫_[0,B^(δ S)] ∖ℐ( X^(δ S)(t) - X(t) ) dt = δ S^2/2 + ∑_k=1^∞ B_kδ( S - ∑_i=0^k-1 I_i)^+. We conclude (<ref>) by combining this equation with (<ref>), noting that B_k is independent of ( S - I_0 - I_1 … - I_k-1)^+ and B_kd=B̅, and that I_0d= U since Z(0) = (0,U). The equality in (<ref>) is true because V = λ S^2/2 + ρ W due to Corollary X.3.5 of <cit.>. The inequality follows from the well-known bound in (7') of <cit.>, which says that W ≤Var(S-U)/2(U-S) = ρVar(S-U)/2(1-ρ) S.
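All of the expectations appearing in these lemmas — F_h^M(z), _z h(X(M)), and the busy period B_0 — can be approximated by simulating the workload process directly, which can serve as a useful numerical check on the identities that follow. A minimal sketch is given below; it assumes the dynamics described earlier (between arrivals the workload decreases at rate δ but not below zero, at an arrival it jumps by δ S and the residual interarrival time is reset), and the exponential interarrival/service choices are hypothetical placeholders rather than the paper's assumptions.

```python
# Illustrative simulation of the workload process Z(t) = (X(t), R(t)) (a sketch only).
# Assumed dynamics, following the text: between arrivals X decreases at rate delta
# (floored at 0); at an arrival X jumps by delta*S and R is reset to a fresh U.
# All distributional choices below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sample_U():          # hypothetical interarrival distribution
    return rng.exponential(1.0)

def sample_S():          # hypothetical service distribution (rho = lam*E[S] < 1)
    return rng.exponential(0.8)

def simulate_X_at_M(x, r, M, delta):
    """One sample of X(M) given Z(0) = (x, r)."""
    t, X, R = 0.0, x, r
    while t + R < M:
        X = max(X - delta * R, 0.0)   # run until the next arrival
        t += R
        X += delta * sample_S()       # arrival: jump of size delta*S
        R = sample_U()                # reset residual interarrival time
    return max(X - delta * (M - t), 0.0)  # no arrival between t and M

def estimate_Ez_h_XM(h, x, r, M, delta, n=20000):
    return np.mean([h(simulate_X_at_M(x, r, M, delta)) for _ in range(n)])

# Example: estimate E_z[h(X(M))] - h(x) for h(x) = x and z = (1.0, 0.5).
delta, M = 0.1, 50.0
print(estimate_Ez_h_XM(lambda x: x, 1.0, 0.5, M, delta) - 1.0)
```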
For any differentiable f: _+→ with |f(U)|, |f(R)|, and |f'(R)| < ∞, f'(R) = 1/ U( f(U) - lim_ϵ→ 0 f(ϵ)). For any h ∈ and M > 0, lim_ϵ→ 0 F_h^M(x,ϵ) = F_h^M(x+δ S, U). [Proof of Proposition <ref>] For almost all M > 0, Lemma <ref> says that -δ∂_x F_h^M(z) - ∂_r F_h^M(z) = _z h(X(M)) - h(x), z ∈𝕊. Observe that f(r) = F_h^M(x,r) satisfies the conditions of Lemma <ref>. Indeed, f(U), f(R)< ∞ because M < ∞. Furthermore, f'(R) < ∞ follows from the expression for ∂_r F_h^M (z) in Lemma <ref>, together with the observation that ∂_x F_h^M(z)≤ M, which follows from Lemma <ref>. Setting r = R in (<ref>) and taking expected values yields ( _z,R h(X(M)) - h(x) ) = -δ∂_x F_h^M(x,R) - ∂_r F_h^M(x,R) = -δ∂_x F_h^M(x,R) - λ( F_h^M(x,U) - lim_ϵ→ 0 F_h^M(x,ϵ )) = -δ∂_x F_h^M(x,R) - λ( F_h^M(x,U) - F_h^M(x+δ S,U) ), where the second and third equalities follow from Lemmas <ref> and <ref>, respectively. §.§ Auxiliary Lemma Proofs [Proof of Lemma <ref>] We define h̃(x) = h(x) - h(X) for convenience, in which case F_h^M(z) = ∫_0^M_zh̃(X(t)) dt, z ∈𝕊. Our goal is to prove that -δ∂_x F_h^M(z) - ∂_r F_h^M(z) = _z h(X(M)) - h(x), z=(x,r) ∈𝕊. Fix z = (x,r) ∈𝕊 and suppose first that x = 0. On one hand, F_h^M+ϵ(0,r+ϵ) = F_h^M(0,r+ϵ)+ ∫_M^M+ϵ_0,r+ϵh̃(X(t)) dt, and on the other, F_h^M+ϵ(0,r+ϵ) = ∫_0^ϵ_0,r+ϵh̃(X(t)) dt + ∫_0^M_0,rh̃(X(t)) dt = ϵh̃(0) + F_h^M (0,r), Equating the two expressions and dividing both sides by ϵ yields lim_ϵ→ 01/ϵ(F_h^M(0,r+ϵ) - F_h^M(0,r)) = h̃(0) - lim_ϵ→ 01/ϵ∫_M^M+ϵ_0,r+ϵh̃(X(t)) dt. The left-hand side equals δ∂_x F_h^M(z) +∂_r F_h^M(z) = ∂_r F_h^M(z), since ∂_x F_h^M(0,r) = 0 by Lemma <ref>. Thus, to prove (<ref>) when x = 0, it suffices to show that 1/ϵ∫_M^M+ϵ(_0,r+ϵh̃(X(t)) - _0,rh̃(X(t)) )dt = 1/ϵ∫_M^M+ϵ_0,r( h̃(X(t-ϵ)) -h̃(X(t)) )dt → 0 as ϵ→ 0, which implies that lim_ϵ→ 01/ϵ∫_M^M+ϵ_0,r+ϵh̃(X(t)) dt = lim_ϵ→ 01/ϵ∫_M^M+ϵ_0,rh̃(X(t)) dt = _0,rh̃(X(M)). Observe that X(t-ϵ) - X(t) is bounded by the workload processed during [t-ϵ,t], which is at most δϵ, plus any new work that arrives during [t-ϵ,t]. Letting A([t_1,t_2]) denote the number of customers arriving during [t_1,t_2], Wald's identity says that the expected workload to arrive during [t_1,t_2] equals S A([t_1,t_2]). Thus, to prove (<ref>), we observe that for any h ∈ and for all t ∈ [M,M+ϵ], _0,r| h̃(X(t-ϵ)) -h̃(X(t)) | ≤ _0,rX(t-ϵ) - X(t) ≤ δϵ + _0,r( δ S (A([t - ϵ, t])) ) ≤ δϵ + δ S _0,r( A([M - ϵ, M+ϵ])). It suffices to argue that the right-hand side goes to zero as ϵ→ 0. By the dominated convergence theorem, lim_ϵ→ 0_0,r( A([M - ϵ, M+ϵ])) = _0,r( A([M, M])), which equals the expected number of arrivals at time M. The right-hand side may be non-zero if the distribution of U has point masses. However, since the number of point masses is at most countable, then _0,r( A([M, M])) = 0 for all but at most countably many M. This proves (<ref>) when x = 0. The case when x> 0 follows similarly. We repeat the arguments, highlighting the differences. Given z = (x,r), fix ϵ < x/δ. Then F_h^M+ϵ(x,r+ϵ) = F_h^M(x,r+ϵ) + ∫_M^M+ϵ_x,r+ϵh̃(X(t)) dt and F_h^M+ϵ(x,r+ϵ) = ∫_0^ϵ_x,r+ϵh̃(X(t)) dt + F_h^M(x-δϵ,r). Equating both expressions, subtracting F_h^M(x,r) from each side, and dividing by ϵ yields 1/ϵ( F_h^M(x,r+ϵ) - F_h^M(x,r)) = 1/ϵ(F_h^M(x-δϵ,r)- F_h^M(x,r)) + 1/ϵ∫_0^ϵ_x,r+ϵh̃(X(t)) dt - 1/ϵ∫_M^M+ϵ_x,r+ϵh̃(X(t)) dt. We now argue that each of the terms on the right-hand side has a well-defined limit as ϵ→ 0, implying that the left-hand side converges to ∂_x F_h^M(z), which is itself well defined. 
The first term on the right-hand side converges to -∂_x F_h^M(z), which we know exists for all z ∈𝕊 by Lemma <ref>. Furthermore, lim_ϵ→ 01/ϵ∫_0^ϵ_x,r+ϵh̃(X(t)) dt = h̃(x) and lim_ϵ→ 01/ϵ∫_M^M+ϵ_x,r+ϵh̃(X(t)) dt = _x,rh̃(X(M)). The first equality is straightforward because no arrival occurs during [0,ϵ], while the second equality is proved the same way as (<ref>). [Proof of Lemma <ref>] For simplicity, we first assume that (U > 0) = 1. Initialize R(0) ∼ R, let τ_0 = 0 and τ_m be the time of the mth arrival, and let U_m, m ≥ 1 be the interarrival times with U_1 = R(0) and U_m i.i.d. U for m ≥ 2. Then τ_m+1 = τ_m+ U_m+1 for m ≥ 0, and by isolating times when jumps occur, one can verify that for any t > 0, 0 = 1/t( f(R(t)) - f(R(0))) = 1/t∫_0^t -f'(R(s)) ds + 1/t∑_m=1^∞ 1(τ_m≤ t) (f(R(τ_m)) - f(0)) Initialize R(0) ∼ R, where R is defined by (<ref>). Since |f'(R)| < ∞, the Fubini-Tonelli theorem says that 1/t∫_0^t -f'(R(s)) ds = -1/t∫_0^t f'(R(0)) ds = - f'(R). To conclude, we argue that lim_t →∞1/t∑_m=1^∞ 1(τ_m≤ t) (f(R(τ_m)) - f(0)) = 1/ U( f(U) - f(0)). Since τ_m = ∑_i=1^m U_i and R(τ_m) = U_m+1, and it follows that 1/t∑_m=1^∞ 1(τ_m≤ t) (f(R(τ_m)) - f(0)) = 1/t( 1(U_1 ≤ t) (f(U_2) - f(0))) + 1/t∑_m=2^∞ 1(U_1 + ⋯ + U_m ≤ t) (f(U_m+1) - f(0)) = 1/t(U_1 ≤ t) (f(U_2) - f(0)) + 1/t∑_m=2^∞ 1( U_1 + ⋯ + U_m ≤ t) (f(U_m+1) - f(0)). Since U_1 ∼ R, we know by (<ref>) that the first term converges to (1/ U)(f(U) - f(0)) as t → 0, and it remains to show that the second term converges to zero as t → 0. Note that 1/t∑_m=2^∞ 1( U_1 + ⋯ U_m ≤ t) (f(U_m+1) - f(0)) = 1/t∑_m=2^∞(U_1 ≤ t, U_1 + U_2 ≤ t, …, U_1 + ⋯ U_m ≤ t) (f(U_m+1) - f(0)) ≤ (f(U) - f(0)) 1/t∑_m=2^∞(U_1 ≤ t, …, U_m ≤ t) = (f(U) - f(0)) 1/t∑_m=2^∞(U_1 ≤ t) (U ≤ t)^m-1 = (f(U) - f(0)) 1/t(U_1 ≤ t) (U ≤ t)/1-(U ≤ t)→ 0 as t → 0, where the first equality is by the independence of the U_m, m ≥ 1, and the convergence to zero follows from (U = 0) = 0. The case 0 < (U > 0) < 1 follows similarly. Since f(U_m+1) - f(0) = 0 in (<ref>) if U_m+1 = 0, we only need to consider those jump times τ_m when U_m+1≠ 0. In the last display, we would then replace (U ≤ t) by (U ≤ t, U > 0). [Proof of Lemma <ref>] Define h̃(x) = h(x) - h(X) and consider first the case when x = 0. Then F_h^M(0,ϵ) = ∫_0^ϵ_0,ϵh̃(X(t)) dt + ∫_ϵ^M_0 ,ϵh̃(X(t)) dt = ∫_0^ϵ_0,ϵh̃(X(t)) dt + ∫_0^M-ϵ_δ S, Uh̃(X(t)) dt, where the outer expectation is with respect to U and S. Taking ϵ→ 0, the left-hand side converges to lim_ϵ→ 0 F_h^M(0,ϵ) while the right-hand side converges to lim_ϵ→ 0∫_0^M-ϵ_δ S, Uh̃(X(t)) dt = ∫_0^M_δ S, Uh̃(X(t)) dt - lim_ϵ→ 0∫_M-ϵ^M_δ S, Uh̃(X(t)) dt. The first term equals F_h^M(δ S, U) while the second term is zero because h ∈. Now suppose that x > 0 and take ϵ < x/δ. Arguing as before, F_h^M(x ,ϵ) = ∫_0^ϵ_x,ϵh̃(X(t)) dt + F_h^M-ϵ(x-δϵ + δ S, U). To conclude, we use the fundamental theorem of calculus to write F_h^M-ϵ(x-δϵ + δ S, U) = F_h^M-ϵ(x + δ S, U) + ∫_0^-δϵ∂_x F_h^M-ϵ(x+v + δ S, U) dv. The second term on the right-hand side converges to zero because |∂_xF_h^M-ϵ(z)|≤ M due to Lemma <ref>. The first term converges to F_h^M(x + δ S, U) because lim_ϵ→ 0∫_0^M-ϵ_x+δ S, Uh̃(X(t)) dt = ∫_0^M_x+δ S, Uh̃(X(t)) dt - lim_ϵ→ 0∫_M-ϵ^M_x+δ S, Uh̃(X(t)) dt, and the second term equals zero since h ∈. § PROOFS OF LEMMAS <REF> AND <REF> We recall the synchronous coupling {Z^(ϵ)(t): t ≥ 0} defined in Section <ref>. 
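To make the coupling concrete, the sketch below simulates two workload paths that share the same arrival and service sequence and differ only in their initial level; as noted earlier, the gap between them shrinks (at rate δ) only while the lower path is idle. All distributional and parameter choices are hypothetical placeholders.

```python
# Illustrative sketch of the synchronous coupling: two workload paths share the same
# arrivals and services and differ only in their initial level. Hypothetical distributions.
import numpy as np

rng = np.random.default_rng(1)
delta, eps, T_end = 0.1, 0.5, 200.0

def coupled_gap(x, eps):
    t, X_lo, X_hi = 0.0, x, x + eps
    R = rng.exponential(1.0)              # residual interarrival time (assumed)
    gaps = [(t, X_hi - X_lo)]
    while t < T_end:
        X_lo = max(X_lo - delta * R, 0.0)  # both paths drain at rate delta
        X_hi = max(X_hi - delta * R, 0.0)  # gap shrinks only if the lower path idles
        t += R
        S = rng.exponential(0.8)           # common service draw for both paths
        X_lo += delta * S
        X_hi += delta * S
        R = rng.exponential(1.0)           # common next interarrival time
        gaps.append((t, X_hi - X_lo))
    return gaps

gaps = coupled_gap(1.0, eps)
print("initial gap:", gaps[0][1], " final gap:", gaps[-1][1])
```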
[Proof of Lemma <ref>] First, observe that 1/ϵ( ∫_0^∞( _x+ϵ,T h(X(t)) - _x,T h(X(t)) ) dt ) = 1/ϵ(_x,T∫_0^B_0( h(X^(ϵ)(t)) - h(X(t)) ) dt ) + 1/ϵ(_x,T∫_B_0^B^(ϵ)_0( h(X^(ϵ)(t)) - h(X(t)) ) dt ). Note that h(X^(ϵ)(t)) - h(X(t))/ϵ≤h'≤ 1 and B^(ϵ)_0→ B_0 as ϵ→ 0. Also note that for all ϵ < 1, _x,T B^(ϵ)_0 = _x+ϵ,T B_0≤_x+1,T B_0≤(_x+1+δ S,U B_0) < ∞, where the second-last inequality follows from the fact that the busy period starting at state (x+1,T) is made longer if the next arrival happens immediately. The DCT then implies that 1/ϵ(_x,T∫_0^B_0( h(X^(ϵ)(t)) - h(X(t)) ) dt ) → (_x,T∫_0^B_0 h'(X(t)) dt ), 1/ϵ(_x,T∫_B_0^B^(ϵ)_0( h(X^(ϵ)(t)) - h(X(t)) ) dt ) → 0. [Proof of Lemma <ref>] Fix h ∈. To avoid confusion, we write E^X, E^R, and E^X,R to denote expectations with respect to X, R, and (X,R), respectively. We first prove (<ref>). Using the tower property of conditional expectation, ( _x,R h(X(M)) ) - h(X) = ^R( _x,R h(X(M)) ) - ^R(^X[_X,R h(X(M)) | R ] ) = ^R( ^X[_x,R h(X(M)) - _X,R h(X(M)) | R ] ) = ^X,R( _x,R h(X(M)) - _X,R h(X(M)) ). Using our synchronous coupling defined in (<ref>) and the fact that h ∈, it follows that _x,R h(X(M)) - _X,R h(X(M)) ≤x-X_x ∨ X, R( B_0 > M), where the probability on the right-hand side corresponds to the probability that coupling does not occur by time M. Thus, lim_M →∞( _x,R h(X(M)) ) - h(X)≤lim_M →∞( x-X_x ∨ X, R( B_0 > M ) ) = 0. The last equality follows from the DCT because X < ∞, and because lim_M →∞_x ∨ x',r( B_0 > M ) = 0 for any x,x',r > 0 by (<ref>). To prove (<ref>), one can reuse the arguments used to prove Lemma <ref> to show that ∂_x F_h^M(x,r) = _x,r∫_0^B_0∧ M h'(X(t)) dt →∂_x F_h(x,r) as M →∞ for all (x,r) ∈𝕊, and also that lim_M →∞∂_x F_h^M(x,R) = lim_M →∞∂_x F_h^M(x,R). Lastly, we prove (<ref>). Similar to the way we argued (<ref>), F_h^M(x+ϵ,r) - F_h^M(x,r) = _x,r∫_0^B^(ϵ)_0∧ M ( h(X^(ϵ)(t)) - h(X(t)) ) dt → F_h(x+ϵ,r) - F_h(x,r), as M →∞ for all (x,r) ∈𝕊. Let ĥ(x) = x and observe that since h ∈ and X^(ϵ)(t) ≥ X(t), then F_h^M(x+ϵ,r) - F_h^M(x,r)≤ _x,r∫_0^B^(ϵ)_0∧ M (X^(ϵ)(t) - X(t) ) dt ≤ F_ĥ(x+ϵ,r) - F_ĥ(x,r). It remains to show that (F_ĥ(x+δ S,U) - F_ĥ(x,U)) < ∞, because then we can use the DCT to conclude (<ref>). The finiteness of this expectation follows from (F_ĥ(x+δ S,U) - F_ĥ(x,U)) = lim_M →∞(F_ĥ^M(x+δ S,U) - F_ĥ^M(x,U) ) = lim_M →∞( _x,R X(M) - x ) + lim_M →∞δ∂_x F_ĥ^M(x,R), where the first equality is due to the monotone convergence theorem, since F_ĥ^M(x+ϵ,r) - F_ĥ^M(x,r) is increasing in M and is nonnegative for any (x,r) ∈𝕊, and the second equality is due to (<ref>). The right-hand side is finite by (<ref>) and (<ref>). § SECTION <REF> PROOFS [Proof of Lemma <ref>] Recall that J(x,r) = - (x ∧δ r) + δ S'. It follows that F_h(x+ δ s,r) - F_h(x,r) = ∫_0^∞(_x+δ s, r h(X(t)) - _x,r h(X(t)) ) dt = ∫_0^r( h((x + δ s-δ t)^+) - h((x -δ t)^+) ) dt + ( F_h( x+δ s +J(x+δ s,r), U)- F_h( x +J(x,r), U) ). To conclude, note that ( F_h( x+δ s +J(x+δ s,r), U)- F_h( x +J(x,r), U) ) = ( F_h( x+δ s +J(x,r), U) - F_h( x +J(x,r), U) ) + ( F_h( x+δ s +J(x+δ s,r), U) - F_h( x+δ s +J(x,r), U)). Using the fundamental theorem of calculus, together with Lemma <ref>, which shows that ∂_x F_h(x+ δ S,U) = ∂_x F_h(x+ δ S,U), we arrive at ( F_h( x+δ s +J(x+δ s,r), U) - F_h( x+δ s +J(x,r), U)) = ( ∫_-x ∧(δ r)^-(x+δ s) ∧(δ r)∂_x F_h(x+δ s +v +δ S',U) dv) = ^S'( ∫_-x ∧(δ r)^-(x+δ s) ∧(δ r)^U∂_x F_h(x+δ s +v +δ S',U) dv) = ^S'( ∫_-x ∧(δ r)^-(x+δ s) ∧(δ r)∂_x^U F_h(x+δ s +v +δ S',U) dv). 
Interchanging ^U with the integral in the second equality is justified by the Fubini-Tonelli theorem because ^S'^U∂_x F_h(x+δ S',U)≤^S'^U_x +δ S',U B_0 < ∞ for all x ≥ 0 by Lemma <ref> and (<ref>). §.§ Proving Lemma <ref> We recall that F̅_h'(x) = ∂_x F_h(x,R) and that F̅_h”(x) and F̅_h”'(x) are assumed to exist. We recall (<ref>), or h(X) - h(x + J(x,R')) = -δF̅_h'(x + J(x,R')) + λ(F_h(x+ δ S,R') - F_h(x,R') ) - (ϵ(x,R',S)), where J(x,r) = - (x ∧δ r) + δ S'. The following lemma expands the first two terms on the right-hand side. We prove it after proving Lemma <ref>. For any x ≥ 0, F̅_h'(x + J(x,R')) = F̅_h'(x) + δ(S'-R')F̅_h”(x) + 1(δ R' < x) ∫_0^δ(S'-R')∫_0^vF̅_h”'(x+u) du dv + 1(δ R' ≥ x) ( F̅_h'(δ S') - F̅_h'(x) - δ(S'-R')F̅_h”(x)) (F_h(x+ δ S,R') - F_h(x,R') ) = δ S F̅_h'(x) + 1/2δ^2 S^2 F̅_h”(x) + ∫_0^δ S (δ S - v)∫_0^vF̅_h”'(x+u) du dv, [Proof of Lemma <ref> ] Recall that λ S= ρ. Combining Lemma <ref> with (<ref>) yields h(X) - h(x + J(x,R')) = - δ( F̅_h'(x) + δ (S'-R')F̅_h”(x) ) + λ( δ S F̅_h'(x) + 1/2δ^2 S^2 F̅_h”(x) ) -δ( 1(δ R' ≥ x) ( F̅_h'(δ S') - F̅_h'(x) - δ(S'-R')F̅_h”(x)) ) -δ( 1(δ R' < x) ∫_0^δ(S'-R')∫_0^vF̅_h”'(x+u) du dv ) + λ∫_0^δ S (δ S - v)∫_0^vF̅_h”'(x+u) du dv . Using the facts that λ S = ρ, λ U = 1, and that R' = λ U^2 /2, we see that the first line on the right-hand side equals -δ (1-ρ) F̅_h'(x) + 1/2δ^2 ( λ S^2 - 2 λ U S' + λ U^2 ) F̅_h”(x) = G_YF̅_h(x). Since F_h'(0) = 0 due to Lemma <ref>, our assumptions that F_h'(Y), F_h”(Y) < ∞ and integration by parts yield G_YF̅_h(Y) = 0. [Proof of Lemma <ref>] The expression for F̅_h'(x + J(x,R')) follows from the facts that F̅_h'(x + J(x,R')) = 1(δ R' ≥ x) F̅_h'(δ S') + 1(δ R' < x) F̅_h'(x -δ R' + δ S') and, for all x > δ R', F̅_h'(x -δ R' + δ S') = F̅_h'(x) + δ(S'-R')F̅_h”(x) + ∫_0^δ(S'-R')∫_0^vF̅_h”'(x+u) du dv. Next, we argue that (F_h(x+ δ S,R') - F_h(x,R') ) = ∫_0^δ SF̅_h'(x + v) dv, so that the expression for (F_h(x+ δ S,R') - F_h(x,R') ) also follows from Taylor expansion of the integrand around x. To prove (<ref>), note that (F_h(x+ δ S,R') - F_h(x,R') ) = ∫_0^δ S∂_xF_h(x + v,R') dv = ^S∫_0^δ S^R'∂_xF_h(x + v,R') dv = ^S∫_0^δ S∂_x F_h(x + v,R') dv = ∫_0^δ SF̅_h'(x + v) dv. The first and second-last equalities follows from Lemma <ref>. Once we justify the interchange of the integral and expectation in the second equality using the Fubini-Tonelli theorem, (<ref>) will follow. Let ĥ(x) = x. Using the form of ∂_x F_h(x,r) from Lemma <ref>, it follows that for any h ∈, ∂_x F_h(x,r)≤_x,r B_0 = _x,r∫_0^B_0ĥ'(X(t)) dt = ∂_x F_ĥ(x,r). Thus, ∫_0^δ S∂_xF_h(x + v,R') dv ≤∫_0^δ S∂_xF_ĥ(x + v,R') dv = (F_ĥ(x+ δ S,R') - F_ĥ(x,R') ), and the right-hand side is finite because the right-hand side of (<ref>) in Lemma <ref> is finite. § STEIN FACTOR BOUND PROOFS §.§ Second-order bounds We first state and prove an auxiliary lemma. We then prove Lemma <ref>. For any ϵ > 0 and (x,r) ∈𝕊 with r < x/δ, 1/ϵ_x,r(R(B_0) < ϵ/δ) ≤ M/δ, lim_ϵ→ 01/ϵ_x,r(R(B_0) < ϵ/δ) = 1/δ_x,r r(A(B_0)) [Proof of Lemma <ref>] Let U_n denote the interarrival time of the nth customer, let W_0 = V(0), and let W_n = V(U_1 + ⋯ + U_n) be the workload in the system right after the nth customer arrives, which includes the workload brought by the nth customer. Let σ = min{ n ≥ 1 : U_n > W_n-1} be the number of customers served in the first busy period [0,B_0]. Now assuming that Z(0) = (x,r) ∈𝕊 with r < x/δ, it must be that σ > 1, because W_0 = x/δ and U_1 = r. 
Since {R(B_0) ≤ϵ/δ} = {U_σ≤ W_σ -1 + ϵ/δ}, it follows that 1/ϵ_x,r(R(B_0) ≤ϵ/δ) = 1/ϵ∑_n=2^∞_x,r( U_n≤ W_n-1 + ϵ/δ| σ = n) _x,r( σ = n) = 1/ϵ∑_n=2^∞_x,r[ _x,r( U_n≤ W_n-1 + ϵ/δ| σ = n, W_n-1) | σ = n ] _x,r( σ = n) To proceed, note that {σ = n } = { U_1≤ W_0, …, U_n-1≤ W_n-2, U_n > W_n-1} for any n ≥ 1, implying that for any n ≥ 2, _x,r( U_n≤ W_n-1 + ϵ/δ| σ = n, W_n-1) = ( U_n≤ W_n-1 + ϵ/δ| W_0=x/δ, U_1=r, σ = n, W_n-1) = ( U_n≤ W_n-1 + ϵ/δ| W_0=x/δ, U_1=r, U_1≤ W_0, …, U_n-1≤ W_n-2, U_n > W_n-1, W_n-1) = ( U_n≤ W_n-1 + ϵ/δ| U_n > W_n-1, W_n-1) = ( U ≤ W_n-1 + ϵ/δ| U > W_n-1, W_n-1) = U(W_n-1+ϵ/δ) - U(W_n-1)/1 - U(W_n-1) , and therefore 1/ϵ_x,r(R(B_0) ≤ϵ/δ) = 1/ϵ∑_n=1^∞_x,r[ U(W_n-1+ϵ/δ) - U(W_n-1)/1 - U(W_n-1) | σ = n ] _x,r( σ = n) = 1/ϵ_x,r[ U(W_σ-1+ϵ/δ) - U(W_σ-1)/1 - U(W_σ-1) ]. To prove (<ref>), observe that the right-hand side of (<ref>) is bounded by M/δ because by the mean value theorem, U(w+ϵ/δ) - U(w)/1 - U(w) = ϵ/δU'(ξ)/1-U(w) = ϵ/δ r(ξ) 1-U(ξ)/1-U(w)≤ϵ/δ M for some ξ∈ [w,w+ϵ/δ], where the last inequality follows from ξ≥ w and our assumption that r(x) ≤ M. Once we observe that W_σ -1 = A(B_0), then (<ref>) follows from taking ϵ→ 0 in (<ref>) and applying the dominated convergence theorem. [Proof of Lemma <ref>] Fix h ∈_2, x ≥ 0, and ϵ > 0, and consider 1/ϵ( ∂_x F_h(x+ ϵ,T) - ∂_x F_h'(x,T) ) = 1/ϵ( _ x,T∫_0^B_0(h'(X^(ϵ)(t)) - h'(X(t))) dt ) + 1/ϵ( _x,T∫_B_0^B_0^(ϵ) h'(X^(ϵ)(t)) dt). Repeating the proof of Lemma <ref> yields 1/ϵ( _ x,T∫_0^B_0(h'(X^(ϵ)(t)) - h'(X(t))) dt ) →∂_x F_h'(x,T) as ϵ→ 0. Recall that R(B_0) is the residual interarrival time at the end of the initial busy period (which also equals the length of the first idle period I_0). If R(B_0) ≥ϵ/δ, then there is no arrival during the interval [B_0,B_0^(ϵ)). Since X^(ϵ)(t)(B_0) = ϵ, this implies that 1/ϵ( _x,T( 1(R(B_0) ≥ϵ/δ) ∫_B_0^B_0^(ϵ) h'(X^(ϵ)(t)) dt)) = ( _x,T(R(B_0) ≥ϵ/δ)) 1/ϵ∫_0^ϵ/δ h'(ϵ - δ t) dt. As ϵ→ 0, the right-hand side converges to ( _x,T(R(B_0) >0 )) 1/δ h'(0) = 1/δ h'(0). To justify the last equality, we observe that R(B_0) = 0 would imply that an arrival occurs precisely at the instant that the workload hits zero. Since the workload process is right-continuous, this would imply that X(B_0) > 0, which contradicts the definition of B_0. It remains to show that 1/ϵ( _x,T( 1(R(B_0) < ϵ/δ) ∫_B_0^B_0^(ϵ) h'(X^(ϵ)(t)) dt)) → 1/δ( θ(x/δ) + (1(T<x/δ)_x,T r(A(B_0)) )) ( ∂_x F_h(δ S, U) ). Since R(B_0) < ϵ/δ implies that an arrival occurs in [B_0,B_0^(ϵ)), then 1/ϵ( _x,T( 1(R(B_0) < ϵ/δ) ∫_B_0^B_0^(ϵ) h'(X^(ϵ)(t)) dt)) = 1/ϵ( _x,T( 1(R(B_0) < ϵ/δ)∫_0^R(B_0) h'(ϵ - δ t) dt)) + 1/ϵ( _x,T( 1(R(B_0) < ϵ/δ)( ∂_x F_h(ϵ - δ R(B_0) + δ S, U)))). The first term on the right-hand side converges to zero as ϵ→ 0. To analyze the second term, note that R(B_0) < ϵ/δ implies that T < x/δ + ϵ/δ, and if x/δ≤ T < x/δ + ϵ/δ then R(B_0) = T-x/δ. Therefore, the second term equals 1/ϵ(1(T < x/δ) _x,T( 1(R(B_0) < ϵ/δ)( ∂_x F_h(ϵ - δ R(B_0) + δ S, U))) + 1/ϵ(1( x/δ≤ T < x/δ + ϵ/δ ) _x,T( ( ∂_x F_h(ϵ - (δ T - x) + δ S, U))) It is straightforward to check that sup_0 ≤ x' ≤ϵ( ∂_x F_h(x'+δ S, U)) - ( ∂_x F_h( δ S, U))→ 0 as ϵ→ 0. Lemma <ref> and the DCT then yield 1/ϵ(1(T < x/δ) _x,T( 1(R(B_0) < ϵ/δ)( ∂_x F_h(ϵ - δ R(B_0) + δ S, U))) → 1/δ(1(T<x/δ)_x,T r(A(B_0)) )( ∂_x F_h( δ S, U)). Similarly, using the fact that θ(x) is bounded, 1/ϵ(1( x/δ≤ T < x/δ + ϵ/δ ) _x,T( ( ∂_x F_h(ϵ - (δ T - x) + δ S, U))) → 1/δθ(x/δ) ( ∂_x F_h( δ S, U)). §.§ Third-order bounds §.§.§ Proof of Lemma <ref> Suppose that U has a bounded density. 
Then for any h ∈_2 and any x ≥ 0, ∂_x^2(F_h(x + δ S,U) - F_h(x,U) ) = ^S(∂_x^2^U F_h(x + δ S,U) - ∂_x^2^U F_h(x,U)). [Proof of Lemma <ref>] Observe that δF̅_h”'(x) = λ∂_x^2(F_h(x + δ S,U) - F_h(x,U) ) + h”(x) = λ^S(∂_x^2^U F_h(x + δ S,U) - ∂_x^2^U F_h(x,U))+ h”(x), where the first equality is due to (<ref>) and the second is due to Lemma <ref>. Applying the expression for ∂_x^2^U F_h(·,U) from Lemma <ref> to the right-hand side yields the result. [Proof of Lemma <ref>] Though we do not assume F_h(z) to be well defined, note that ∂_x(F_h(x + δ S,U) - F_h(x,U) ) = lim_ϵ→ 0(1/ϵ(F_h(x + ϵ + δ S,U) - F_h(x+ δ S,U) ) - 1/ϵ(F_h(x + ϵ,U) - F_h(x,U) ) ) = ∂_x F_h(x + δ S,U) - ∂_x F_h(x,U), so that by differentiating both sides with respect to x, we arrive at ∂_x^2(F_h(x + δ S,U) - F_h(x,U) ) = ∂_x^2 F_h(x + δ S,U) - ∂_x^2 F_h(x,U). By repeating the arguments used to prove Lemma <ref>, one can check that ∂_x^2 F_h(x + δ S,U) = ∂_x^S∂_x^U F_h(x + δ S,U). Similarly, ∂_x^S∂_x^U F_h(x + δ S,U) = ^S∂_x^2 ^U F_h(x + δ S,U) follows from repeating the proof of Lemma <ref>. §.§ Proof of Lemma <ref> We use the following inequalities throughout the proof, which follow from the fact that Y has density ν e^-ν y and is independent of R. (δ R ≥ Y ) = (1-e^-νδ R) ≤νδ R Y 1(δ R ≥ Y) = 1/ν( (1-e^-νδ R) - νδ R e^-νδ R) ≤ 2 νδ^2 R^2 , Y^2 1(δ R ≥ Y) = 1/ν^2( 2(1-e^-νδ R) - 2νδ R e^-νδ R - (νδ R)^2 e^-νδ R) ≤ 3 νδ^3 R^3 We also use the facts that U,S,S',R,I̅, and Y are independent, and that S and S' have the same distribution. We first prove (<ref>)–(<ref>), then (<ref>)–(<ref>), followed by (<ref>)–(<ref>). §.§.§ Proof of (<ref>)–(<ref>). Inequality (<ref>) follows from h'≤ 1 and J(x,r) = - (x ∧δ r) + δ S'. Next, we prove (<ref>), we first note that the definition of ϵ(x,r,s) in Lemma <ref> and the fact that h'≤ 1 both imply that ∫_0^r h((x + δ s-δ t)^+) - h((x -δ t)^+) dt ≤ r δ s. Furthermore, 1(δ r > x) ^S'( ∫_-x^-(x+δ s) ∂_x^U F_h(x+δ s +v +δ S',U) dv) ≤ 1(δ r > x) δ s ^S'( sup_0 ≤ w ≤δ s∂_x^U F_h(w +δ S',U) ) ≤ 1(δ r > x) s (δ s + δ S)( 1 + 2 ηB̅ ), where in the second inequality we used (<ref>) of Lemma <ref>. Combining the bounds and using (<ref>) yields ϵ(Y,R,S)≤ δ R S + (δ R ≤ Y) δ ( S^2 + ( S)^2)( 1 + 2 ηB̅ ) ≤ δ R S + δ^2 ν R( S^2 + ( S)^2)( 1 + 2 ηB̅ ). Next we prove (<ref>). Recall from Lemma <ref> that δF̅_h'(x)≤ x( 1 + (λ + η) B̅ ), δF̅_h”(x)≤ (1+x)( 1 + (λ + η) B̅), x ≥ 0. Therefore, 1(δ R ≥ Y) ( δF̅_h'(δ S ) - δF̅_h'(Y) - δ(S-R) δF̅_h”(Y)) ≤ ((1(δ R ≥ Y)( δ S + Y)) + (1(δ R ≥ Y)δ(S-R)(1+ Y) )) ( 1 + (λ + η) B̅) ≤ δ^2 ( ν( 2 S R + 5 R^2 ) + R^2 ) ( 1 + (λ + η) B̅). The last inequality follows from using (<ref>) and (<ref>) to show that (1(δ R ≥ Y)( δ S + Y)) ≤δ^2 ν S R + 2 νδ^2 R^2, δ S(1(δ R ≥ Y) (1+ Y) ) ≤δ S (νδ R + 2 νδ^2 R^2), δ(1(δ R ≥ Y)R(1+ Y) ) ≤δ(νδ R^2 + δ R^2), where in the final inequality we used the fact that (1(δ R ≥ Y)R Y ) ≤δ R^2 instead of (<ref>). Using the latter would have resulted in a term involving R^3. §.§.§ Proof of (<ref>)–(<ref>). Recall from Lemma <ref> that for any x,u such that x + u ≥ 0, |δ^2 F̅_ h”'(x+u) | ≤ λδ(S (1+x+u+δ S) )( 1 + 2ηB̅) + (δ U > x+u) 3ληB̅ + (δ U < x+u < δI̅ +δ U) λη (1 + η S) B̅. We claim that for any u > 0, |δ^2 F̅_ h”'(Y+u) | ≤ λδ(S' (1+Y+u+δ S') ) ( 1 + 2ηB̅) + νδ 3 ηB̅ + νδ ( U + I̅) λη (1 + η S) B̅, and for any u ∈ [-δ R,0], ^Y(1(δ R ≤ Y) |δ^2 F̅_ h”'(Y+u) | ) ≤ λδ(S' (1+Y+δ S') ) ( 1 + 2ηB̅) + νδ 3 ηB̅ + νδ ( U + I̅) λη (1 + η S) B̅. We now prove (<ref>). 
Since λ(∫_0^δ S (δ S - v)∫_0^vF̅_ h”'(Y+u) du dv ) = λ^S(∫_0^δ S (δ S - v)∫_0^v^YF̅_ h”'(Y+u) du dv), applying (<ref>) to the right-hand side and using the fact that Y = 1/ν yields (<ref>). Note that (<ref>) follows identically because due to Lemma <ref>, in the special case that h(x)= x, |δ^2 F̅_ h”'(x+u) | ≤ (δ U > x+u) 3ληB̅ + (δ U < x+u < δI̅ +δ U) λη (1 + η S) B̅. To prove (<ref>), we note that δ1/δ^2( ∫_0^δ(S -R )∫_0^v| 1(δ R < Y) δ^2F̅_h”'(Y+u) | du dv ) = δ1/δ^2^S,R( ∫_0^δ(S -R )∫_0^v^Y| 1(δ R < Y) δ^2F̅_h”'(Y+u) | du dv ) ≤ δ1/δ^2δ^2 ((S-R)^2 (λδ(S' (1+Y+δ S+ δ S') ) ( 1 + 2ηB̅) + νδ 3 ηB̅ + νδ ( U + I̅) λη (1 + η S) B̅) ), where the inequality follows from using both (<ref>) and (<ref>), together with the fact that u ≤δ S. It remains to prove (<ref>) and (<ref>). For any u > 0, (S' (1+Y+u+δ S') ) = (S' (1+1/ν +u+ δ S') ), (δ U > Y+u) ≤(δ U > Y) = (1-e^-νδ U) ≤νδ U , (δ U < Y+u < δI̅ +δ U)≤νδ ( U + I̅). The last inequality is true because (δ U < Y+u < δI̅ +δ U) = ( Y ≤δ U < Y+u < δI̅ +δ U) + ( δ U < Y, Y+u < δI̅ +δ U) ≤ (δ U ≥ Y) + (δ U < Y < δ U + δI̅) = (1-e^-νδ U)) + e^-νδ U (1-e^-νδI̅) ≤νδ ( U + I̅). Combining (<ref>) with (<ref>) and using the fact that λ U = 1 proves (<ref>). Finally, we prove (<ref>). We claim that for any u ∈ [-δ R , 0], (S (1+Y+u+S) ) ≤(S (1+Y+S) ) ^Y,U( 1(δ R < Y)1(δ U > Y + u ) ) ≤νδ U, ^Y,U,I̅(1(δ U < Y+u < δI̅ +δ U) ) ≤νδ ( U+ I̅ ) B̅. The first inequality is immediate. The second follows from ^Y,U( 1(δ R < Y)1(δ U > Y + u ) ) ≤ ^Y,U( 1(δ R < Y < δ U +δ R ) ) = e^-νδ R (1-e^-νδ U) ≤νδ U and the third from E^Y,U,I̅(1(δ U < Y+u < δI̅ +δ U) ) ≤ E^Y,U,I̅(1(δ U -u < Y < δI̅ +δ U-u) ) = ( e^-ν (δ U -u) (1-e^-νδI̅)) ≤νδI̅. §.§.§ Proof of (<ref>)–(<ref>) We now argue that for any u ∈, ^Y( 1(δ U < Y+u) e^-(η/δ) (Y+u-δ U)) ) ≤ δν / η. Applying this to the upper bound on |δ^2 F̅_ h”'(x+u) | in Lemma <ref>, together with the bounds we already established when proving (<ref>)–(<ref>), we arrive at (<ref>) and (<ref>). For u > 0, ^Y( 1(δ U < Y+u) e^-(η/δ) (Y+u-δ U)) ) ≤ ^Y( 1(δ U < Y+u) 1(u<δ U) e^-(η/δ) (Y+u-δ U)) ) + ^Y( 1(u ≥δ U) e^-(η/δ) Y) ). Since Y has density ν e^-ν y, ^Y( 1(u ≥δ U) e^-(η/δ) Y) ) ≤ν/ν + η/δ≤δν/η, and ^Y( 1(δ U < Y+u) 1(u<δ U) e^-(η/δ) (Y+u-δ U)) ) = e^-(η/δ) (u-δ U)1(u<δ U) ∫_δ U -u^∞ν e^-(ν + η/δ) y dy = e^-(η/δ) (u-δ U)1(u<δ U) e^-(ν + η/δ) (δ U-u)ν/ν + η/δ≤δν / η. The case when u ≤ 0 is argued similarly. Note that (<ref>) follows identically because due to Lemma <ref>, in the special case that h(x)= x, |δ^2 F̅_ h”'(x+u) | ≤ (U > x/δ) 3 ληB̅ + ( 1(U < x/δ) e^-η (x/δ-U)) ) ληB̅.
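As a final sanity check, the inequality established above — that ^Y( 1(δ U < Y+u) e^-(η/δ) (Y+u-δ U)) ) ≤ δν/η uniformly in u — is easy to verify by Monte Carlo; the bound holds conditionally on U, hence also after averaging over U. The parameter and distribution choices below are arbitrary and for illustration only.

```python
# Numerical sanity check (illustration only) of
#   E^Y[ 1(delta*U < Y+u) * exp(-(eta/delta)*(Y+u-delta*U)) ] <= delta*nu/eta,
# with Y ~ Exp(nu) independent of U. All parameter/distribution choices are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
delta, nu, eta = 0.05, 2.0, 1.5               # hypothetical values; eta plays the role of the hazard lower bound
n = 10**6
Y = rng.exponential(1.0 / nu, size=n)         # Exp(nu)
U = rng.gamma(shape=2.0, scale=0.5, size=n)   # hypothetical interarrival distribution

for u in (-0.5, 0.0, 0.5, 2.0):
    # clip keeps the exponent nonpositive where the indicator is zero (avoids overflow)
    val = np.mean((delta * U < Y + u)
                  * np.exp(-(eta / delta) * np.clip(Y + u - delta * U, 0, None)))
    print(f"u={u:+.1f}: estimate = {val:.4f}   bound delta*nu/eta = {delta * nu / eta:.4f}")
```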
http://arxiv.org/abs/2407.12079v1
20240716180000
New insights into the internal structure of GJ 1214 b informed by JWST
[ "Matthew C. Nixon", "Anjali A. A. Piette", "Eliza M. -R. Kempton", "Peter Gao", "Jacob L. Bean", "Maria E. Steinrueck", "Alexandra S. Mahajan", "Jason D. Eastman", "Michael Zhang", "Leslie A. Rogers" ]
astro-ph.EP
[ "astro-ph.EP" ]
Matthew C. Nixon (Department of Astronomy, University of Maryland, College Park, MD, USA; mcnixon@umd.edu); Anjali A. A. Piette (School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK; Earth & Planets Laboratory, Carnegie Institution for Science, Washington, DC, USA); Eliza M.-R. Kempton (Department of Astronomy, University of Maryland, College Park, MD, USA); Peter Gao (Earth & Planets Laboratory, Carnegie Institution for Science, Washington, DC, USA); Jacob L. Bean (Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL, USA); Maria E. Steinrueck (Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL, USA); Alexandra S. Mahajan (Center for Astrophysics, Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA); Jason D. Eastman (Center for Astrophysics, Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA); Michael Zhang (Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL, USA); Leslie A. Rogers (Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL, USA)
§ ABSTRACT Recent JWST observations of the sub-Neptune GJ 1214 b suggest that it hosts a high-metallicity (≳100× solar), hazy atmosphere. Emission spectra of the planet show molecular absorption features, most likely due to atmospheric H_2O. In light of this new information, we conduct a thorough reevaluation of the planet's internal structure. We consider interior models with mixed H/He/H_2O envelopes of varying composition, informed by atmospheric constraints from the JWST phase curve, in order to determine possible bulk compositions and internal structures. Self-consistent atmospheric models consistent with the JWST observations are used to set boundary conditions for the interior. We find that a total envelope mass fraction of at least 8.1% is required to explain the planet's mass and radius. Regardless of H_2O content, the maximum H/He mass fraction of the planet is 5.8%. We find that a 1:1 ice-to-rock ratio along with 3.4–4.8% H/He is also a permissible solution. In addition, we consider a pure H_2O (steam) envelope and find that such a scenario is possible, albeit with a high ice-to-rock ratio of at least 3.76:1, which may be unrealistic from a planet formation standpoint. We discuss possible formation pathways for the different internal structures that are consistent with observations. Since our results depend strongly on the atmospheric composition and haze properties, more precise observations of the planet's atmosphere would allow for further constraints on its internal structure. This type of analysis can be applied to any sub-Neptune with atmospheric constraints to better understand its interior.
§ INTRODUCTION One of the most important goals of modern exoplanet science is to better understand the nature of the large population of planets with radii between that of Earth and Neptune, often referred to as “sub-Neptunes”. With no analogues in the solar system, very little prior information regarding the properties of such planets is available. Demographic studies of sub-Neptunes orbiting FGK stars indicate that this population consists of two distinct classes of planet, divided by radius <cit.>.
The group with larger radii (≳ 1.8 R_⊕) are theorised to host substantial H/He-rich envelopes of up to a few percent by mass <cit.>. In contrast, the population of sub-Neptunes orbiting M dwarfs appears to be separated by density rather than radius, and includes several planets of intermediate density between the two classes established for FGK stars, leading to suggestions of an additional sub-population of H_2O-rich “water worlds” orbiting such stars <cit.>. GJ 1214 b was one of the first sub-Neptunes to be discovered <cit.>. Since then, the planet has been studied extensively, with numerous efforts undertaken to characterise both its atmosphere and interior. Initial spectroscopic observations at optical and near-infrared wavelengths <cit.> yielded a flat, featureless transmission spectrum, indicating that the planet's atmosphere could not be both cloud-free and hydrogen-rich, but must possess either high-altitude aerosols and/or a high mean molecular weight (MMW). Extensive follow-up in the near-infrared with the Hubble Space Telescope <cit.> found that a high MMW alone could not explain the planet's transmission spectrum, meaning it must host high-altitude aerosols, but was still unable to constrain the composition. Several internal structure modelling studies have also attempted to determine the planet's bulk composition. The bulk properties of the planet and host star are shown in Table <ref>. The relatively low bulk density of the planet (2.26 ± 0.18 g cm^-3) means that it likely possesses a significant gaseous component consisting of H/He and/or other chemical species. <cit.> explored a range of internal structure models that could explain this bulk density, but degeneracies between model solutions made it impossible to infer a unique composition. They also found that the planet would be too hot to host liquid water. Evolutionary models from <cit.> indicated that the planet was more likely to host a mixed H/He/H_2O envelope than to be a pure water world with an entirely steam atmosphere, since a pure H_2O atmosphere would require an extremely high ice-to-rock ratio of ∼6 to 1 to explain the planet's mass and radius. <cit.> also noted that it was not possible to distinguish between a pure H/He, pure H_2O, or mixed envelope, but did place an overall upper limit of ∼7% on the H/He mass fraction of the planet. A later study <cit.> revised the mass of the planet and placed new constraints on the H/He mass fraction in the case of a pure H/He envelope (x_ env = 5.24^+0.30_-0.29%). Further information regarding the nature of GJ 1214 b's atmosphere was revealed thanks to observations of its thermal emission phase curve obtained with the mid-infrared instrument's low-resolution spectrometer (MIRI/LRS) on board JWST <cit.>. These observations ruled out a pure H/He envelope for the planet, since models with a high metallicity (≳100× solar) were required to explain the large amplitude of the phase curve. Furthermore, models with high-albedo hazes also yielded better fits to the observations, since clear atmospheres would absorb too much stellar radiation and were therefore globally too hot to explain the phase curve. The planet's dayside and nightside MIRI/LRS emission spectra both show >3σ evidence of absorption features, with atmospheric retrievals indicating that H_2O is the most likely cause in both spectra. 
Additionally, a comparison of the 1–10 μm transmission spectrum with photochemical haze models <cit.> showed that a range of haze prescriptions could explain the flat spectrum, favouring higher metallicities (≳300× solar) or a pure H_2O atmosphere. The new insight into the atmosphere of GJ 1214 b provided by <cit.> and <cit.> motivate an updated investigation of the possible internal structures of the planet. Additional impetus is provided by the work of <cit.>, who used the JWST transit and secondary eclipse light curves for an improved characterization of the host star, leading to revisions of the mass and radius of the planet (M_p=8.41^+0.36_-0.35 M_⊕, R_p = 2.733 ± 0.033 R_⊕, see Table <ref> and Figure <ref>). In this study, we wish to determine how the incorporation of a hazy, metal-enriched atmosphere affects internal structure models. The atmospheric composition and haze properties will impact the thermal structure of the outer envelope, which can significantly impact mass-radius relations for planets with a substantial volatile component <cit.>. We therefore combine self-consistent atmospheric models with internal structure models to place updated constraints on the bulk composition of the planet. In Section <ref> we describe the internal structure model used to characterise GJ 1214 b in this study, as well as the self-consistent atmospheric model used to find the temperature profile of the outer envelope. We present constraints on the internal structure from these models in Section <ref>, and discuss the implications of our findings, as well as caveats and future directions of study, in Section <ref>. § METHODS We characterise the internal structure of GJ 1214 b using an updated version of the model described in <cit.>, which has been used to characterise several other sub-Neptunes <cit.>. Our model solves the planetary structure equations of mass continuity and hydrostatic equilibrium assuming spherical symmetry. Model planets consist of iron, silicates (namely MgSiO_3), water and H/He. Throughout the text we refer to the iron and silicate component as the “nucleus” and the water and H/He component as the “envelope”. We use the term “atmosphere” to refer to the outer region of the envelope (P ≤ 100 bar). The equation of state (EOS) prescription for the iron core is adopted from <cit.>, who used a Vinet EOS of the ϵ phase of Fe <cit.>. We use an isothermal EOS for iron, since thermal effects in the core should not significantly affect the planetary radius <cit.>. We use an updated, temperature-dependent silicate EOS following the phase diagram shown in <cit.> and including three phases: bridgmanite, post-perovskite and liquid MgSiO_3. We use a Vinet EOS from <cit.> for bridgmanite, a different Vinet EOS from <cit.> for post-perovskite, and an EOS following the RTPress formulation <cit.> for liquid MgSiO_3. We also use a temperature-dependent EOS for the outer H_2O and H/He layers, noting that the temperature profile of the outer envelope can strongly impact the mass-radius relation <cit.>. For H_2O, we apply the EOS compiled in <cit.>, which was constructed from a range of sources <cit.> in order to cover the full pressure-temperature (P–T) space relevant to sub-Neptune interiors. The H/He EOS is taken from <cit.>, which assumes a helium mass fraction Y=0.275. Since the temperature profile of a planet can have a substantial effect on its internal structure, it is important to consider realistic P–T profiles when analysing a planet's interior. 
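Before turning to the temperature profiles, it may help to make the interior calculation itself concrete. The sketch below integrates the two structure equations quoted above, dm/dr = 4π r^2 ρ and dP/dr = -G m ρ / r^2, outward from the centre using a toy polytrope-like EOS; it is only a schematic illustration of this type of calculation, not the EOS treatment or numerical scheme of the model described here, and none of the numerical choices correspond to GJ 1214 b.

```python
# Schematic integration of the structure equations (illustration only, not the paper's code):
#   dm/dr = 4*pi*r^2*rho(P),   dP/dr = -G*m*rho(P)/r^2.
# A toy polytrope-like EOS rho = C*sqrt(P) stands in for the layered iron/silicate/H2O/H-He
# EOS described in the text; all numerical values below are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-11                                  # gravitational constant [m^3 kg^-1 s^-2]

def rho_toy(P, C=1.0e-2):
    """Hypothetical EOS, rho(P) [kg m^-3] for P [Pa] (an n = 1 polytrope)."""
    return C * np.sqrt(P)

def rhs(r, y):
    m, P = y
    rho = rho_toy(max(P, 1.0))
    return [4.0 * np.pi * r**2 * rho, -G * m * rho / r**2]

def integrate_structure(P_center, r_max=1.0e8):
    surface = lambda r, y: y[1] - 1.0e5        # stop at a nominal 1 bar "surface"
    surface.terminal, surface.direction = True, -1.0
    sol = solve_ivp(rhs, (1.0, r_max), [0.0, P_center],
                    events=surface, max_step=2.0e4, rtol=1e-8)
    return sol.t[-1], sol.y[0, -1]             # radius [m], enclosed mass [kg]

R, M = integrate_structure(P_center=5.0e11)
print(f"toy planet: R = {R / 6.371e6:.2f} R_Earth, M = {M / 5.972e24:.2f} M_Earth")
```

In the full model, the toy EOS above would be replaced by the layered, temperature-dependent EOSs described in this section, with the thermal structure of the outer envelope set by the atmospheric models discussed next.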
Previous studies of the internal structure of GJ 1214 b used analytic P–T profiles calculated using the double gray approximation <cit.>. Given that recent JWST observations suggest that GJ 1214 b has a hazy atmosphere, we wish to include the effect of hazes on the planet's temperature profile in the present study, an effect which was not considered in previous analyses of the planet's interior. We therefore calculate hazy dayside P–T profiles for GJ 1214 b using an adaptation of GENESIS <cit.> suited to mini-Neptune atmospheres <cit.>. GENESIS is a self-consistent 1D atmospheric model that computes equilibrium P–T profiles and thermal emission spectra under the assumptions of radiative–convective, hydrostatic, and local thermodynamic equilibrium. We consider a range of P–T profiles, calculated using different envelope compositions and haze properties informed by recent JWST observations <cit.>. Specifically, we use the following compositions as inputs for the self-consistent models: 100× solar, 300× solar, 500× solar, 1000× solar, and a steam atmosphere (100% H_2O). The model includes opacity due to H_2O, CH_4, NH_3, CO, CO_2, HCN and C_2H_2, which are the dominant opacity sources expected in hydrogen-rich atmospheres <cit.>. For the N× solar compositions, the abundance of each chemical species is calculated self-consistently with the temperature profile using <cit.>, assuming a solar carbon-to-oxygen ratio (C/O) of 0.54 <cit.>. We additionally include opacity due to H_2-H_2 and H_2-He collision-induced absorption (CIA). The absorption cross sections of each of these opacity sources are calculated using the methods of <cit.>, and data from the ExoMol, HITRAN and HITEMP databases (H_2O, CO and CO_2: , CH_4: , C_2H_2: , NH_3: , HCN: , CIA: ). The calculated P–T profiles from GENESIS extend to a pressure of 100 bar, beyond which the profile is extended to higher pressures following an adiabat (isentrope). We assume an internal heat flux equivalent to an intrinsic temperature T_ int = 30 K, as adopted by <cit.>, and which is consistent with evolutionary models of GJ 1214 b <cit.>. We additionally assume efficient redistribution of energy between the dayside and nightside. This provides a globally-averaged temperature profile, which can be used as a boundary condition for the 1D interior structure model. However, we note that the phase curve observations of <cit.> suggest somewhat inefficient day-night heat redistribution, which may result in minor asymmetries between the dayside and nightside boundary conditions that are beyond the scope of this work. In order to explain the emission spectra of GJ 1214 b presented in <cit.>, aerosols present in the atmosphere require a single scattering albedo at least comparable to that of Titan tholins, and potentially as high as 1 (i.e., maximally reflective). Following <cit.>, we consider these two haze scenarios as end-member cases for our atmospheric models, where the maximally reflective hazes have refractive indices n=1.8, k=10^-9. These are identical to the “purely scattering” hazes in <cit.>. Haze particle size distributions are also identical to <cit.>. We allow the haze production rate to vary between 10^-14 and 10^-9 g cm^-2 s^-1, with the lower limit set by haze production on Titan <cit.> and the upper limit set by estimates from photochemical models of GJ 1214 b <cit.>. 
For all atmospheric models considered, the P–T profile remains hot enough that H_2O never enters any liquid or ice phases, instead remaining in vapour, supercritical, and superionic phases throughout the envelope. The miscibility of hydrogen and water does not appear to have been tested directly throughout the full P–T range of these phases. <cit.> found that H and H_2O are likely to be miscible at temperatures between 2000 and 6000 K and pressures at 10s of GPa using ab initio simulations, and <cit.> found H and H_2O to be miscible at temperatures down to 650 K at ∼0.01 GPa. However, some experimental studies have suggested that demixing may occur at certain pressures and temperatures <cit.>. Studies of other chemical species indicate that gases and supercritical fluids are completely miscible <cit.>, and the assumption of the miscibility of hydrogen and supercritical water has been made in previous studies of sub-Neptune interiors <cit.>. We therefore assume that H/He and H_2O remain fully mixed throughout the envelope, but note that further experimental and theoretical work to test this assumption would be extremely valuable for sub-Neptune internal structure modelling. § RESULTS Our goal is to explore the range of internal structures that are consistent with both the mass and radius measurements as well as the known atmospheric properties of GJ 1214 b. We use self-consistent atmospheric models to determine which atmospheric compositions are consistent with the planet’s emission spectrum. The compositions and thermal structures of best-fitting models are then used to set the composition and thermal structure of the envelope in the interior model, under the assumption that the atmosphere/envelope is well-mixed. This allows us to constrain the bulk composition of the planet. We begin by comparing our grid of self-consistent atmospheric models from GENESIS to the observed dayside emission spectrum of the planet, in order to assess which compositions and haze properties are consistent with observations. Figure <ref> shows emission spectra from the grid which most closely fit the observations. For both tholin hazes and maximally reflective hazes we find that, across all compositions, models with a haze production rate of 10^-11g cm^-2 s^-1 yield the best fit to the spectrum. Additionally, for a pure H_2O atmosphere, lower haze production rates also provide a reasonable fit. We therefore consider P–T profiles from this subset of the model grid when generating internal structure models, exploring how varying the atmospheric properties alters the possible bulk compositions consistent with the planet's mass and radius. More detailed modelling of the emission spectrum of GJ 1214 b will be presented in a future study. We note that although models with metallicities as low as 100× solar are included in our chosen subset of models based on the fit to the emission spectrum, a metallicity of >300× solar is preferred by the planet's transmission spectrum <cit.>. In this study we consider metallicities ranging from 100× solar and higher, but we note that models with metallicities >300× solar are the most plausible in light of other observations. We subsequently turn to internal structure models. We consider two possible scenarios for the gaseous envelope: a mixture of H/He/H_2O and a pure H_2O (steam) atmosphere. 
For mixed atmospheres, we choose H_2O mass fractions in the envelope, x_ H_2O, env in order to match a given atmospheric MMW μ_ atm using the following formula: x_ H_2O, env = μ_ H_2O( μ_ atm - μ_ H/He)/μ_ atm( μ_ H_2O - μ_ H/He), where μ_ H_2O=18.02amu and μ_ H/He=2.34amu represent the MMW of H_2O and H/He (where H and He are mixed with a helium mass fraction Y=0.275). Table <ref> shows the MMW and H_2O mass mixing ratios for metallicities included in the atmospheric model grid. We compute the thermal structure by interpolating between the P–T profiles shown in Figure <ref> with the closest metallicities. For the H/He/H_2O envelopes we consider MMW values from 5–17 amu, since lower values would correspond to atmospheric metallicites ≲100× solar, which are unlikely given the observations. Under the assumption of an Earth-like nucleus (1/3 iron, 2/3 silicates by mass), and that an H/He/H_2O envelope would remain fully mixed, only a single free parameter (x_ env) remains for a given atmospheric composition. We note that while other chemical species such as CO_2 may be present in the atmosphere, we choose a composition of H/He/H_2O partly since there is observational evidence for H_2O absorption in the planet's emission spectra, and partly because there is a lack of high-pressure EOS data for other chemical species. However, constraints on the H/He mass fraction derived here are unlikely to change regardless of whether the main non-H/He constituent of the atmosphere is H_2O or a combination of other chemical species (see Section <ref> for more detail). Figure <ref> shows the mass and radius of GJ 1214 b alongside a selection of best-fitting models incorporating maximally reflective hazes. We can see from this figure that it is possible to explain the mass and radius of the planet with any of the envelope compositions we consider, due to inherent degeneracies present in internal structure models. However, the best-fitting envelope mass fractions vary significantly between different compositions. We therefore explore in detail how the permitted envelope mass fraction depends on its composition. Resulting envelope mass fractions that fit the mass and radius of GJ 1214 b to within 1σ for models with a mixed H/He/H_2O envelope and a fixed haze production rate of 10^-11g cm^-2 s^-1 are shown in Figure <ref>. We find that, as the atmospheric metallicity increases, so does the required envelope mass fraction, since a more massive envelope is required to allow the denser material to account for the same volume. Likewise, across all atmospheric compositions, models with tholin hazes require a higher envelope mass fraction than those with maximally reflective hazes in order to explain the planet's mass and radius. Best-fitting models with tholin hazes have envelopes which are up to 1.55× more massive than those with maximally reflective hazes. This is due to the tholins having higher UV extinction than the maximally reflective hazes, which are purely scattering. This yields deeper isotherms, which in turn leads to cooler, and therefore denser, envelopes overall. We illustrate this effect for the 300× solar envelope case in Figure <ref>, which shows the P–T and P–ρ profiles for the best-fit models with both haze prescriptions. Across all compositions considered, the envelope must be at least 8.1% by mass. This corresponds to a H_2O mass fraction of 5.0%, a H/He mass fraction of 3.1%, and an iron+rock mass fraction of 91.9%. This represents the maximum possible mass fraction of the iron+rock nucleus. 
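(As a point of reference, the envelope water fractions used throughout these results follow directly from the mixing relation given at the start of this section; the short sketch below evaluates it across the 5–17 amu MMW range considered, with the particular grid of values chosen purely for illustration.)

```python
# Evaluate the envelope H2O mass fraction implied by a given atmospheric mean molecular
# weight, using the mixing relation given earlier in this section:
#   x_H2O,env = mu_H2O * (mu_atm - mu_HHe) / (mu_atm * (mu_H2O - mu_HHe)),
# with mu_H2O = 18.02 amu and mu_HHe = 2.34 amu (Y = 0.275). The MMW grid is illustrative.
MU_H2O, MU_HHE = 18.02, 2.34

def x_h2o_env(mu_atm):
    return MU_H2O * (mu_atm - MU_HHE) / (mu_atm * (MU_H2O - MU_HHE))

for mu in (5.0, 8.0, 11.0, 14.0, 17.0):
    print(f"mu_atm = {mu:4.1f} amu  ->  x_H2O,env = {x_h2o_env(mu):.3f}")
```

At the top of this range (17 amu) the relation gives x_H2O,env ≈ 0.99, consistent with the nearly pure-steam envelope quoted for that case above.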
For the higher metallicity scenarios (≥300× solar) that are the most consistent with the observed phase curve and transmission spectrum, the minimum envelope mass fraction rises to 34.4%. For the highest MMW mixed atmosphere that we consider (17 amu), the total envelope mass fraction can be up to 91.1%. However, this solution consists almost entirely of H_2O (90.3% by mass) and is likely unrealistic from a planet formation standpoint, as discussed in Section <ref>. One common way of setting an upper limit for the H_2O mass fraction is to allow the H_2O layer and the iron+rock nucleus to have equal mass fractions <cit.>, which is derived from the solar system ice-to-rock ratio <cit.>. We find that such a solution is permitted by the data: for the tholin haze case, this corresponds to x_ H_2O = x_ nuc = 47.6%, x_ H/He = 4.8%, and in the maximally reflective haze case, this corresponds to x_ H_2O = x_ nuc = 48.3%, x_ H/He = 3.4%. For models with a MMW ≲8 amu, the H/He mass fraction increases as the total envelope mass fraction increases. However, at higher MMW, H_2O dominates over H/He in the atmosphere such that the total H/He fraction starts to decrease. This leads to an overall maximum H/He mass fraction for the planet of 5.8% for tholin hazes (or 4.8% for maximally reflective hazes). While the H/He fraction could technically drop as low as zero for, e.g., a pure steam atmosphere, such a scenario would require a very high H_2O mass fraction, as discussed below. Considering models preferred by the transmission spectrum (≥300× solar metallicity) with an ice-to-rock ratio ≤1, the H/He mass fraction must lie in the range 3.4–5.2%. Figure <ref> shows best-fitting H_2O mass fractions in the case of a pure steam atmosphere, with no H/He. Since lower haze production rates lead to a hotter atmosphere, the value of x_ H_2O required to explain the mass and radius decreases. Furthermore, at lower haze production rates, the hazes have less of an impact on the thermal structure, meaning that the difference between the tholin haze models and the maximally reflective haze models decreases. In the tholin haze case, the highest haze production rate (10^-11g cm^-2 s^-1) cannot be fit with a pure H_2O envelope; even a 100% H_2O mass fraction yields a planet that is too small. In contrast, the maximally reflective haze model with the same haze production rate can have a water mass fraction of 90–99%. For the lowest haze production rates considered, the models require a water mass fraction of at least 79%. The feasibility of such a substantial H_2O component from a planet formation perspective is discussed in Section <ref>. § DISCUSSION AND CONCLUSIONS §.§ Implications for planet formation The indications of a high-metallicity atmosphere and H_2O absorption features found in the JWST observations of GJ 1214 b suggest that the planet may have a substantial water component. It has long been theorised that such planets could come into being through formation outside the water ice line of protoplanetary disks and subsequent inward migration <cit.>. In the case of GJ 1214 b, this inward migration and corresponding increase in irradiation would cause the planet's H_2O to evaporate into a steam/supercritical atmosphere. The resulting envelope composition would depend on whether the planet had also accreted a substantial H/He envelope at formation. If H/He accretion had taken place, the planet would end up with a mixed H/He/H_2O envelope, and if not, it would host a pure steam atmosphere.
These two formation and evolution scenarios are therefore analogous to the two modelling cases covered in this work, and we discuss their feasibility here.

§.§.§ Mixed hydrogen/helium/water envelope

In this scenario, the ice/rock nucleus would accrete some amount of H/He, which would subsequently mix with the H_2O layer to form an extended H/He/H_2O envelope, with H_2O in vapour and/or supercritical phases. A commonly assumed composition for water worlds is a 1:1 ratio of iron and rock to H_2O <cit.>. This ratio is ultimately derived from the estimated solar system ice-to-rock ratio of 1.17:1 reported in <cit.>. As presented in Section <ref>, a 1:1 ice-to-rock ratio would correspond to a H/He mass fraction of 3.4–4.8%. Following theoretical predictions from <cit.>, we would expect GJ 1214 b to host an initial H/He mass fraction of 2.2% after gaseous accretion and boil-off. This would suggest that the above scenario is unlikely. However, we note that <cit.> only consider accretion onto a rocky, and not icy, nucleus, and that the details of the boil-off process are highly uncertain <cit.>. Furthermore, as can be inferred from Figure <ref>, lower ice-to-rock ratios would lead to required H/He mass fractions closer to 2%, meaning this scenario could still be possible. Relaxing the assumption of an Earth-like nucleus composition could also change the required H/He mass fraction for this scenario: for example, a purely silicate nucleus with no iron would lead to lower required H/He fractions. An alternative pathway to forming a high-metallicity atmosphere without significant accretion of ice at formation could be the production of volatiles due to reactions between the iron+rock nucleus and the H/He-rich envelope <cit.>. However, the impact of reactions at the nucleus-envelope interface on the composition of the outer envelope and atmosphere remains poorly understood.

§.§.§ Pure water envelope

In this case, the ice+rock nucleus would be devoid of any H/He, either due to a lack of accretion or the complete stripping of H/He during evolution. Previous work stated that this scenario would require an ice-to-rock ratio of ≳6:1 in order to be consistent with the bulk properties of GJ 1214 b <cit.>. Our work indicates that the planet could host a pure H_2O envelope with a somewhat smaller ratio of at least 3.76:1. While still much higher than solar system values, similarly large H_2O mass fractions for sub-Neptunes have been proposed; for example, <cit.> consider an upper limit of 75% for the water mass fraction at formation (an ice-to-rock ratio of 3:1). Furthermore, studies of solar system ice giants have invoked ice-to-rock ratios as high as 15 for Neptune <cit.>, though the validity of this modelling has been called into question <cit.>. Ultimately, while it is assumed that some amount of rocky material is required to initiate further ice and gas accretion, there is no consensus on a realistic upper limit for the H_2O mass fraction for a sub-Neptune. While several studies have explored the available H_2O budget for planet formation <cit.>, further theoretical work into the formation and evolution of icy planets is required to determine whether such a composition is truly plausible. Our work suggests that a H/He-free water world scenario is unlikely for GJ 1214 b, though if the planet is ultimately determined to have such a composition, it would have interesting implications for the reservoir of icy material available in the original disk.
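For reference, in the H/He-free case the ice-to-rock ratios quoted above map directly onto bulk water mass fractions; the short sketch below (illustrative only, not taken from the paper) performs this conversion for a two-component planet consisting of an H_2O envelope and an iron/rock nucleus.

# Illustrative conversion between ice-to-rock mass ratio and bulk H2O mass
# fraction for a two-component (H2O envelope + iron/rock nucleus) planet
# with no H/He. Not the authors' code.

def water_fraction_from_ratio(ice_to_rock):
    """Bulk H2O mass fraction for a given ice-to-rock mass ratio."""
    return ice_to_rock / (1.0 + ice_to_rock)

def ratio_from_water_fraction(x_h2o):
    """Ice-to-rock mass ratio for a given bulk H2O mass fraction."""
    return x_h2o / (1.0 - x_h2o)

print(water_fraction_from_ratio(3.76))  # ~0.79, the minimum quoted for a pure steam atmosphere
print(ratio_from_water_fraction(0.75))  # 3.0, the formation upper limit considered by some studies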
§.§ Comparison with previous studies

Early efforts to quantify the maximum H/He mass fraction possible for GJ 1214 b both obtained an upper limit of ∼7% within 1σ <cit.>. The higher planet mass presented in <cit.> led to a higher bulk density for the planet, which correspondingly lowered the 1σ upper limit for x_H/He to 5.54%. However, the models used to derive this upper limit assumed an isothermal atmosphere at the planet's equilibrium temperature for a solar-composition H/He envelope. The present work advances on this modelling in two important ways. First, we do not use an isothermal temperature profile, instead using profiles taken from self-consistent models which impose thermo-chemical and radiative-convective equilibrium as well as accounting for the effects of clouds and hazes, which have been shown to be present in GJ 1214 b's atmosphere by JWST observations <cit.>. This leads to hotter P–T profiles overall than would be obtained by assuming an isotherm at T_ eq. The second key difference is that we do not assume a solar composition atmosphere, instead varying the composition within a range of values consistent with the JWST data. While the hotter atmosphere leads to a more inflated envelope, thus reducing the allowed H/He mass fraction, the compositional difference has the opposite effect, meaning that larger H/He fractions are permitted. In the lowest metallicity cases presented here (100× solar), we find that lower H/He mass fractions than in past work are permissible (∼3%). However, at higher metallicities, we find that H/He mass fractions slightly larger than those reported in <cit.> are permissible, up to ∼6%. These results strongly depend on the specific atmospheric composition that is considered.

§.§ Caveats and model improvements

In order to better understand the internal structures of planets much larger than Earth, we require more information on the behaviour of materials at the high pressures found in giant planet interiors. For example, experimental data on the equation of state of H_2O at high pressures remains limited <cit.>, meaning the densities used in internal structure models are based on theoretical approximations. Validating such approximations will be extremely valuable in refining sub-Neptune interior models. Additionally, while studies have shown that H/He and H_2O are expected to be miscible and approximately follow an ideal mixing law at certain pressures and temperatures <cit.>, this must be extended to a wider range of thermodynamic conditions to determine whether the assumption of a mixed H/He-H_2O envelope throughout GJ 1214 b is valid, and whether the region of the atmosphere probed through observations (P ≲ 1 bar) is representative of the entire envelope. If there were a compositional gradient such that the water enrichment was greater moving deeper into the planet, then we would expect the required envelope mass fractions to increase due to the envelope's higher density. In this sense, the fully mixed case presented here can be thought of as providing a lower limit to the required envelope fraction. It is also possible that additional H_2O may be partitioned into the iron/rock nucleus of the planet <cit.>, an effect which must be better understood in order to refine measurements of planetary bulk water mass fractions. In our internal structure models, we have made the assumption that the outer envelope of GJ 1214 b consists of H, He and H_2O, based on observational evidence as well as modelling limitations.
While other chemical species may be present in the envelope, we do not possess high-pressure EOS data for these species and therefore they cannot be included in our models at present. Theoretical and experimental work to determine the high-pressure EOS of chemical species such as CO_2 and CH_4 would be invaluable to enable more detailed modelling efforts of the internal structures of sub-Neptunes. While we have assumed an intrinsic temperature T_ int=30K for the atmospheric models used in this study, following <cit.>, we note that there is some uncertainty in this value due to the uncertain age of the planet. We recomputed P–T profiles with a higher value of T_ int=40K and did not find a significant change in the planetary radius. For example, for a planet with M_p = 8.41 M_⊕ and a 300× solar envelope with x_ env=0.356 and maximally reflective hazes, the radius assuming T_ int=30K is 2.727 R_⊕, increasing to 2.729 R_⊕ at T_ int=40K. Internal structure modelling efforts would also benefit from a better understanding of haze microphysics. For example, it is unclear how hazes would be able to form in a pure steam atmosphere <cit.>. Additionally, the haze models considered in this study use spherical haze particles, but porous fractal aggregates could also be present, which have an impact on resultant spectra and P–T profiles <cit.>, potentially allowing for lower haze production rates. However, the impact of such haze properties has not been explored in detail and is beyond the scope of this study. §.§ Benefit of additional observations The findings of this work show that JWST observations of sub-Neptune atmospheres can be used to learn about internal structures, with the composition, haze properties and thermal structure of the atmosphere inferred from GJ 1214 b's phase curve all affecting constraints on the possible bulk composition of the planet. This motivates further atmospheric observations of the planet, as well as of other sub-Neptunes. A more precise measurement of the atmospheric composition of GJ 1214 b would further limit the possible cases beyond those considered in this work, and lead to better constraints on the internal structure. Furthermore, observations of different sub-Neptunes could allow for population studies that may point towards particular formation pathways or common bulk compositions. We have assumed that H_2O is the main component of the atmosphere other than H/He, since it was identified as the most likely cause of the features detected in GJ 1214 b's emission spectrum <cit.>. However, this detection is far from definitive, with 2.5σ confidence on the dayside and 2.6σ confidence on the nightside, and the constraints on the H_2O abundance are very broad. Furthermore, the C/O for the atmospheric models used here is assumed to be solar (C/O=0.54), since we have so far been unable to measure the abundance of any carbon-bearing species on the planet. Acquiring precise constraints on the atmospheric metallicity and C/O would restrict the set of internal structure models that could be applied to this planet, helping to break degeneracies and better understand the planet's bulk composition, as well as allowing the use of better-informed atmospheric temperature profiles that more accurately represent the range of relevant opacity sources. 
We note, however, that because our model grid encompasses a wide range of values of the envelope MMW, and the temperature of the planet means that phase transitions are not a concern, the constraints placed on the H/He mass fraction in this study are unlikely to change significantly even in the presence of additional atmospheric species other than H_2O. §.§ Conclusions Recent observations of the atmosphere of GJ 1214 b indicated that the planet possesses a high-metallicity atmosphere containing reflective hazes. We have used this information alongside internal structure models and updated measurements of the planet's mass and radius <cit.> to place constraints on the possible bulk composition of the planet. We considered models with an envelope consisting of mixed H/He/H_2O as well as pure H_2O. Across the range of models considered, we find the following: * GJ 1214 b hosts a volatile envelope of at least 8.1% by mass. This lower limit is derived from the lowest metallicity atmosphere that remains consistent with the phase curve observations (100× solar). This limit increases to 34.4% if only models with metallicities ≥300× solar are considered, as preferred by the transmission spectrum. * While a 100% volatile envelope consisting of pure H_2O is consistent with the mass and radius, it is very unlikely that a planet would form with this composition. * Regardless of H_2O content, the maximum H/He mass fraction for the planet is 5.8%. If we require an ice-to-rock ratio of ≲1:1, then the minimum H/He mass fraction is 3.4%. * Assuming that the volatile envelope consists primarily of H, He and H_2O, the planet must have a total H_2O mass fraction of at least 5.0%. * Viable solutions exist in which the planet has H_2O and iron/rock components of equal mass, i.e., the “water world” scenario described by <cit.>. In this case, the H/He mass fraction must be 3.4–4.8%. * If the planet has a pure steam atmosphere, with no H/He, then it must have a very high ice-to-rock ratio of at least 3.76:1. It is unclear whether this is realistic from a planet formation perspective. * The atmospheric composition of a given model, including assumptions about aerosols, strongly influences the inferred bulk H/He fraction. JWST holds the promise of enabling a much more comprehensive understanding of the atmospheres of sub-Neptunes than has previously been possible, unlocking new information about their composition, aerosol properties and thermal structure. Alongside GJ 1214 b, a number of sub-Neptunes in the 2–3R_⊕ regime have already been observed using JWST, leading to constraints on their atmospheric composition, including K2-18 b <cit.> and TOI-270 d <cit.>, with many more targets set to be observed in the first years of JWST operations. Our work demonstrates that detailed atmospheric measurements of these planets will also allow us to characterise their internal structures in detail. This new era of high-quality sub-Neptune observations will therefore usher in unprecedented insight into the inner workings of these mysterious worlds. We thank the anonymous referee for their comments, which improved the quality of this manuscript. The NASA/CSA/ESA JWST observations shown in this work are associated with the GO program #1803 (PI: Kempton). Data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. 
The specific observations analyzed can be accessed via DOI: 10.17909/qe3z-qj40 (https://doi.org/10.17909/qe3z-qj40). Support for this program was provided by NASA through a grant from the Space Telescope Science Institute. This research was also supported by the AEThER program, funded in part by the Alfred P. Sloan Foundation under grant #G202114194, as well as by NASA ADAP 80NSSC19K1014. M.S. and M.Z. acknowledge support from the 51 Pegasi b Fellowship, funded by the Heising-Simons Foundation. This research has made use of the NASA Astrophysics Data System and the NASA Exoplanet Archive, as well as the Python packages NumPy <cit.>, SciPy <cit.>, and Matplotlib <cit.>.
http://arxiv.org/abs/2407.13647v1
20240718162517
Weak-to-Strong Reasoning
[ "Yuqing Yang", "Yan Ma", "Pengfei Liu" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Weak-to-Strong Reasoning

Yuqing Yang2,4  Yan Ma2,3,4  Pengfei Liu1,3,4[1]

1Shanghai Jiao Tong University 2Fudan University 3Shanghai AI Laboratory 4Generative AI Research Lab (GAIR)

[1] Corresponding Author.

§ ABSTRACT
When large language models (LLMs) exceed human-level capabilities, it becomes increasingly challenging to provide full-scale and accurate supervision for these models. Weak-to-strong learning, which leverages a less capable model to unlock the latent abilities of a stronger model, proves valuable in this context. Yet, the efficacy of this approach for complex reasoning tasks is still untested. Furthermore, tackling reasoning tasks under the weak-to-strong setting currently lacks efficient methods to avoid blindly imitating the weak supervisor, including its errors. In this paper, we introduce a progressive learning framework that enables the strong model to autonomously refine its training data, without requiring input from either a more advanced model or human-annotated data. This framework begins with supervised fine-tuning on a selective, small but high-quality dataset, followed by preference optimization on contrastive samples identified by the strong model itself. Extensive experiments on the GSM8K and MATH datasets demonstrate that our method significantly enhances the reasoning capabilities of Llama2-70b using three separate weak models.
This method is further validated in a forward-looking experimental setup, where Llama3-8b-instruct effectively supervises Llama3-70b on the highly challenging OlympicArena dataset. This work paves the way for a more scalable and sophisticated strategy to enhance AI reasoning powers. All relevant code and resources are available in <https://github.com/GAIR-NLP/weak-to-strong-reasoning>.

[Figure: (a) Test accuracy on GSM8K using Llama2-7b to supervise Llama2-70b <cit.>. (b) Test accuracy on OlympicArena using Llama3-8b-instruct to supervise Llama3-70b <cit.>. "Weak Floor" refers to the results of the weak model. "Full Weak FT" refers to the results of the baseline where the strong model is naively fine-tuned on the full dataset generated by the weak model. "Our Stage I" represents the results from the first stage of supervised fine-tuning using our proposed weak-to-strong method. Note that our method in Stage I produces three variants of enhanced strong models and we present the best results here. "Our Stage II" denotes the results from the second stage of preference optimization using our method.]

§ INTRODUCTION

"A student need not be inferior to the teacher; a teacher need not be wiser than the student." — On Teachers

[Figure: Illustration of weak-to-strong reasoning through the strong model self-refining its training data.]

As the pursuit of Artificial General Intelligence (AGI) advances, the creation of superintelligent systems—models that exceed human cognitive capabilities—remains a key ambition within the field <cit.>. This quest introduces a host of challenges, especially concerning the supervision and learning paradigms for these advanced AI models. Conventional supervision methods, which typically depend on human oversight <cit.> or guidance (i.e., distilled knowledge) from more advanced models <cit.>, become inadequate as the capabilities of AI exceed those of their supervisors <cit.>. To address this issue, we focus on the weak-to-strong learning paradigm <cit.>, which operates under a unique task setting where only a less capable model and a stronger[Similar to <cit.>, we define "strong model" in the context of LLMs, taking into account their characteristics—that is, LLMs often contain the knowledge and capabilities needed to perform specific tasks, but these have not yet been fully elicited <cit.>. Typically, it refers to stronger and larger pre-trained language models whose capabilities have not been fully realized yet.] but not fully utilized model are available. The central question of weak-to-strong learning is whether models with limited capabilities can effectively guide the development of more advanced, stronger models. Previous studies by <cit.> have demonstrated its feasibility in classification, chess, and reward modeling tasks. However, the applicability of this setup to more complex reasoning tasks, which demand more than mere extrapolation or pattern recognition, remains an open question. Complex reasoning represents a key aspect of human cognition, crucial for assessing whether LLMs can emulate or surpass human-like capabilities in comprehending the world, making decisions, and solving problems <cit.>.
Given the complexity and the critical nature of these tasks, applying the weak-to-strong learning framework to advanced reasoning challenges is essential, particularly within the broader context of achieving superintelligence. Although <cit.> suggest that naively fine-tuning strong models on the full set of noisy data produced by weak models, named full weak fine-tuning, can consistently improve their performance over the weaker counterparts, this approach is still far from recovering the full capabilities of strong models, and our experiments show that it loses effectiveness when facing more complex reasoning challenges. They also propose an auxiliary confidence loss to mitigate the issue of strong models imitating the errors of their supervisors. However, this method is tailored to classification tasks with a set of fixed labels and does not naturally extend to open-ended generation tasks including reasoning. Currently, there is a lack of effective methods beyond naive fine-tuning to prevent the overfit of weak errors and to further elicit the intrinsic reasoning abilities of strong models within the weak-to-strong reasoning framework. To achieve the above goal, we introduce a progressive refinement learning framework, guided by the principle that a model can enhance its capabilities more effectively by initially focusing on smaller, more reliable subsets of data, and then iteratively expanding its learning scope, as illustrated in Fig. <ref>. In the first stage, we hypothesize that it is more advantageous to utilize smaller quantities of data that are likely to be more accurate. We achieve this by combining weak data, generated by the less capable model, with data self-generated by the more advanced model through in-context learning. This blend is then used to selectively curate datasets for subsequent supervised fine-tuning. In the second stage, upon having developed a strong model with improved reasoning capabilities, we utilize its ability to construct contrastive samples for preference optimization <cit.> and enables the model to learn effectively from the errors of the weaker model. In implementation, we employ Llama2-70b <cit.> as the strong model, test three separate weak models: Llama2-7b, Gemma-2b <cit.>, and Mistral-7b <cit.>, and conduct experiments on the commonly used math reasoning datasets GSM8K <cit.> and MATH <cit.>. Experimental results reveal that: * Full weak fine-tuning, while effective in classification tasks, falls short for complex reasoning tasks. * Our proposed method significantly outperforms full weak fine-tuning method, achieving a 26.99-point improvement on GSM8K when supervised solely by the weak model (i.e., Gemma-2b) after the first stage of training (→_plus), and further enhances performance by an additional 8.49 points through preference optimization without knowing the gold answer (_plus→_pro). * Our proposed preference optimization phase enables the strong model to learn from errors made by the weak supervisor, ultimately surpassing the strong model fine-tuned on gold-standard solutions (i.e., strong ceiling) in challenging scenarios, such as level 4-5 MATH problems. To more accurately approximate future scenarios, we conduct additionally experiments on OlympicArena <cit.>, an extremely challenging dataset with no definitive ground truth answers. Llama3-8b-instruct <cit.>, despite its smaller size, has been aligned and proved to effectively supervise the larger Llama3-70b, whose potential have not yet been fully realized. 
Moreover, our proposed two-stage training approach outperforms full weak fine-tuning by 3.19 points. § PRELIMINARIES §.§ Typical Learning Paradigms for LLMs r0.50 G.T. Answer Stronger Model Generic-supervised Distillation-based Self-improvement Semi-supervised Weak-to-strong Typical Learning Paradigms for LLMs. “” and “” indicate whether supervision is required, and “” indicates it is optional. “G.T.” represents Ground Truth. We outline common learning paradigms in large model training, primarily characterized by the need for ground truth answers and supervision from stronger models as shown in Tab. <ref>. Generic-Supervised Learning When training LLMs, it is ideal to have a sufficient amount of training data with ground truth answers, which we refer to as generic-supervised learning paradigm <cit.>. However, acquiring such data is often label-intensive and can sometimes be impossible. As a result, various learning paradigms have emerged to reduce the effects of data quality and quantity while still improving performance. Distillation-based Learning In the current context, to enhance a strong model like Llama2-70b, improvements can still be made by seeking help to a stronger model like GPT-4 <cit.>, even without ground truth. Hence, many existing works suggest that a stronger model acts as a teacher model to provide specific feedback to improve the targeted model <cit.>. This paradigm can be viewed as distilling the stronger teacher model's knowledge. Nonetheless, merely imitating the teacher model is not a long-term solution; imitation models only slightly close the performance gap to the teacher model on tasks not well-represented in the imitation data <cit.>. Furthermore, distillation learning primarily benefits models that are less capable than the teacher model. Self-Improvement Learning Considering the high costs of annotating training data by humans or stronger proprietary models, a line of works relies on the correct responses generated by the model itself to update it. For example, <cit.> filter solutions according to the correctness of final answers, while <cit.> employ reward models trained on gold annotations to score self-generated content. It is evident that, whether using binary labels or fine-grained feedback, this paradigm still requires ground truth to assess the usability of the model's self-generated responses. Without ground truth answers, self-improvement leads to minimal performance gains and may even degrade performance <cit.>. Semi-Supervised Learning Gaining insights from semi-supervised learning within the domain of traditional machine learning, another type of LLM learning depends not on extensive labeling but instead on a small, high-quality seed dataset. <cit.> have demonstrated improvement by learning differences between self-generated responses and expert-annotated responses. We also include the trending research topic of easy-to-hard generalization <cit.> in this category, where models are trained to tackle complex tasks by learning from human annotations on easier tasks. This series of research inevitably require access to a small yet high quality set of standard answers. Weak-to-Strong Learning In scenarios where models surpass human capabilities, the challenge of providing comprehensive and precise supervision for complex tasks intensifies, particularly as no ground truth exists, nor a superior model for supervisory guidance. This absence underscores the critical importance of weak-to-strong learning approaches. 
Such methods uniquely leverage weaker supervisory signals to recover latent knowledge from already powerful models. For example, fine-tuning GPT-4 with a GPT-2-level supervisor can recover close to GPT-3.5-level performance on certain tasks <cit.>. This strategy holds profound implications for advancing human societal progress by equipping LLMs with the capabilities to address currently unsolvable mathematical and physical challenges. Unlike other learning paradigms, weak-to-strong learning operates under comparatively relaxed conditions, opening expansive opportunities for exploration and innovation. §.§ Weak-to-Strong Reasoning Setup r0.50 Role weak model strong model task question 2*Analogue Llama2-7b 2*Llama2-70b ∈GSM8K + SFT(_gold, 1) =∈MATH Weak-to-Strong Reasoning Setup. In this paper, we address reasoning tasks in the weak-to-strong setting, as illustrated in Tab. <ref>. First, we examine mathematical reasoning tasks, such as those in GSM8k and MATH. These tasks require each step of the reasoning process to demonstrate fundamental mathematical problem-solving skills, including problem comprehension and algebraic operations, and build upon the previous steps. It imposes higher demands on the model's learning and generalization capabilities. Unlike classification tasks, where models can rely on superficial pattern extrapolation or recognition, reasoning tasks offer minimal benefit from guessing. Then, we use a weak model (e.g., Llama2-7b) with a certain degree of mathematical problem-solving ability,[Otherwise, the weak model can hardly provide useful supervision.] denoted as m. This model acts analogously to human supervisors with limited expertise in the era of superintelligence. Besides, we only have a set of questions 𝒬 = {q_i} without ground truth answers and the goal is to improve the reasoning capability of a strong model (e.g., Llama2-70b). To implement this, following <cit.>, we randomly divide the original training set into two equal parts, _gold, 1 and _gold, 2. The weak model is initially fine-tuned using _gold, 1 where the gold solutions are available, resulting in a weak model with some problem-solving capability, i.e. m. In contrast, the strong model can only access the questions from _gold, 2, without reasoning chains or final answers, i.e., 𝒬. § METHODOLOGY In this section, we propose a weak-to-strong training method designed to maximize the use of weak data and to elicit the strong model's innate talent. First, we identify potentially positive samples in the absence of ground truth and external signals. During Stage I, we exclusively utilize this subset of data for supervised fine-tuning. Then once the strong model has achieved a certain level of reasoning proficiency, we employ the full weak data, particularly the potentially negative samples in Stage II via preference learning-based approaches like DPO <cit.>, encouraging the strong model to learn from mistakes made by the weaker model. The whole framework is depicted in Fig. <ref>. §.§ Stage I: Learn from “Positive” Samples Given a weak model m and a series of math problems 𝒬 without ground truth, m generates weak data _weak = {q_i, c_weak, i, a_weak, i}, where q_i ∈𝒬, c_weak, i represents a reasoning chain, and a_weak, i represents the final answer. The correctness of a_weak, i is unknown. The central challenge is: how can we maximize the use of m and _weak to fully enhance and recover the mathematical reasoning capabilities of a stronger model ? 
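A minimal sketch of this data setup, stated as code (illustrative only; the record fields and the `generate` call are our placeholders, not the authors' implementation):

# Illustrative sketch of the weak-to-strong data setup described above.
import random

def split_training_set(examples, seed=0):
    """Randomly split the gold training set into two equal halves:
    D_gold_1 (used to fine-tune the weak model) and D_gold_2
    (only its questions are visible to the strong model)."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    mid = len(examples) // 2
    return examples[:mid], examples[mid:]

def build_weak_data(weak_model, questions):
    """Weak data D_weak = {(q_i, c_weak_i, a_weak_i)}: the fine-tuned weak
    model produces a reasoning chain and a final answer for each question;
    the correctness of a_weak_i is unknown."""
    weak_data = []
    for q in questions:
        chain, answer = weak_model.generate(q)  # placeholder call, not a real API
        weak_data.append({"question": q, "chain": chain, "answer": answer})
    return weak_data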
§.§.§ Full Weak Fine-Tuning Our initial strategy is to fine-tune the stronger model across the entirety of the weak dataset _weak. While prior research <cit.> has validated the effectiveness of this approach in text classification tasks, its efficacy in reasoning tasks remains unexplored. We have therefore embarked on an investigation to determine whether the phenomenon of weak-to-strong generalization can also enhance the reasoning capabilities of in this less examined domain. §.§.§ Weak In-Context Learning Another straightforward approach is in-context learning (ICL, <cit.>), which requires only several training samples as demonstrations in the prompt. Specifically, we randomly select four samples from _weak as demonstrations. Since we do not have access to the ground truth, these demonstrations cannot be provably correct. < g r a p h i c s > Overview of our method evolving from -.2ex < g r a p h i c s > → _plus -.2ex < g r a p h i c s > → _pro -.2ex < g r a p h i c s > . Left: we utilize final answer consistency to selectively filter weak and icl data from diverse sources, which is used to fine-tune the strong model and obtain _plus with enhanced mathematical reasoning capabilities. Right: we leverage the confidence of _plus to identify contrastive samples for performance optimization, resulting in a more robust strong model _pro. §.§.§ Weak-ICL Fine-Tuning Given that models can mimic weak errors through supervised fine-tuning <cit.>, we propose refining _weak before use, instead of using all data blindly. Additionally, we seek to harness the innate abilities of the strong model activated via in-context learning. Building on these two ideas, we introduce weak-icl fine-tuning, employing both weak data _weak and “icl data” _icl = {q_i, c_icl, i, a_icl, i}, where q_i ∈𝒬, c_icl, i and a_icl, i are generated by with few-shot demonstrations,[Experiments in <ref> show that despite ICL being affected by demonstration selection, our method can achieves further improvements accordingly beyond ICL.] as higher-quality supervision signals. Note that, for both _weak and _icl, we cannot determine whether a certain answer is correct or not. Nonetheless, when two models, employing distinct data representations, converge on the same answer in an open-ended task, it is indicative of a higher likelihood of accuracy. This phenomenon supports the reliability of the results when consistency is observed across different methodologies. We thus compare _weak and _icl generated by the weak model and strong model, respectively, and select _weak and _icl if a_weak, i = a_icl, i, for subsequent supervised fine-tuning. We call this approach final answer consistency. Considering the combination of the two sets of data, we can obtain three versions of enhanced fine-tuned strong models: * _weak-ft: fine-tuned on _weak. * _icl-ft: fine-tuned on _icl. * _hybrid-ft: fine-tuned on the union of _weak and _icl. Iterative Training Upon closed examination of _weak-ft and _icl-ft, we see that they still satisfy the condition of having different data representations, as they are trained on data from different sources—_weak is generated by the weak model, whereas _icl primarily originates from the strong model itself. Hence, we can perform iterative training to bootstrap performance. We denote the initial round of supervised fine-tuning data as _weak^1 and _icl^1, resulting in models _weak-ft^1, _icl-ft^1, and _hybrid-ft^1. 
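Before turning to the second iteration, a minimal sketch of the final answer consistency selection described above (illustrative only, not the authors' code):

# Illustrative sketch of final answer consistency. Each record is
# {"question": q, "chain": c, "answer": a}; weak_data and icl_data are
# assumed to cover the same question set.

def final_answer_consistency(weak_data, icl_data):
    """Keep the weak and ICL solutions for a question only when their final
    answers agree; M_weak-ft is fine-tuned on the kept weak solutions,
    M_icl-ft on the kept ICL solutions, and M_hybrid-ft on their union."""
    icl_by_q = {r["question"]: r for r in icl_data}
    kept_weak, kept_icl = [], []
    for w in weak_data:
        i = icl_by_q.get(w["question"])
        if i is not None and w["answer"] == i["answer"]:
            kept_weak.append(w)
            kept_icl.append(i)
    return kept_weak, kept_icl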
In the second iteration, we obtain zero-shot solutions from _weak-ft^1 applied to to construct _weak^2, and those from _icl-ft^1 to construct _icl^2. Here, the subscripts “weak” and “icl” indicate the initial data source. Then we apply final answer consistency to obtain _weak^2 and _icl^2. Following another round of supervised fine-tuning, we have: * _weak-ft^2: fine-tuned on _weak^2. * _icl-ft^2: fine-tuned on _icl^2. * _hybrid-ft^2: fine-tuned on the union of _weak^2 and _icl^2. Note that the iterative training step is optional; it may lead to performance degradation when data quality is too low or the model overfits. §.§ Stage II: Learn from “Negative” Samples We denote the final iteration of _hybrid-ft from Stage I as _plus, which has learned dual mathematical solutions and holds potential for further enhancement. Next, we apply preference optimization techniques to strategically utilize the potentially erroneous subset of the original weak dataset _weak = {q_i, c_weak, i, a_weak, i} generated by m, which allows the strong model to identify and avoid similar errors in future reasoning processes. The key factor lies in how to construct contrastive samples for learning. Question (q_i): John has five more roommates than twice as many as Bob. If Bob has 10 roommates, how many roommates does John have? Weak Response ({c_weak, i, a_weak, i}): John has 10+5=15 roommates. The answer is 15. Self Response 1 ({c_strong, i^1, a_strong, i^1}∈ A_strong, i^+): Bob has 10 roommates. Twice as many as Bob is 2*10 = 20 roommates. John has 5 more roommates than twice as many as Bob, so John has 20+5 = 25 roommates. The answer is 25. Self Response 2 ({c_strong, i^2, a_strong, i^2}∈ A_strong, i^+): Let x be the number of roommates Bob has. John has 5 more roommates than twice as many as Bob, so John has 2x+5 roommates. Bob has 10 roommates, so x=10. John has 2*10+5 = 25 roommates. The answer is 25. A real case example. Given a math question, the incorrect “weak response” is generated by m, while the two correct “self responses” are sampled from A_strong, i^+ self-generated by _plus. Benefiting from dual solutions in the training data during Stage I, _plus is able to generate different reasoning paths that converge to the same final answer. Through Stage II, _plus learns to avoid m's error of overlooking the key word “twice” in calculations. Without access to ground truth, the current strong model with enhanced reasoning capabilities identifies the most likely correct answers based on its confidence. Specifically, for each question q_i ∈, we sample n responses from _plus, and define the probability of the answer that appears most frequently among these responses as confidence. When the confidence falls below a specified threshold τ, we consider the model's judgment on this question unreliable and therefore discard it. Conversely, if the confidence is no less than τ, we regard the model as capable of solving the question and proceed to construct contrastive samples as follows. * For a question q_i where _plus is confident, we denote the most confident answer as a_strong, i^+ and P(a_strong, i^+) ≥τ. It can be considered as the “correct” answer according to _plus. For instance, if we set τ=0.6 and 8 out of 10 sampled responses have the same final answer “4.2”, we say that _plus considers “4.2” to be the correct answer to this question, i.e. a_strong, i^+ = 4.2. 
* Then we divide the sampled n responses of _plus to q_i into two sets: A_strong, i^+ = {c_strong, i^j, a_strong, i^j} where a_strong, i^j = a_strong, i^+; A_strong, i^- = {c_strong, i^k, a_strong, i^k} where a_strong, i^k a_strong, i^+. In the above example, |A_strong, i^+| = 8 and |A_strong, i^-| = 2. * If the weak model holds an answer that the enhanced model considers “correct”, that is, a_weak, i = a_strong, i^+, we treat the weak model's response {c_weak, i, a_weak, i} as chosen response and randomly select a rejected response from A_strong, i^-. Otherwise, if a_weak, i a_strong, i^+, we treat {c_weak, i, a_weak, i} as rejected response and randomly select a chosen response from A_strong, i^+. Examples are shown in Tab. <ref>. Further training _plus on these samples enables it to distinguish between correct and incorrect solutions, leading to a stronger model _pro. < g r a p h i c s > Main results of Stage I. “Iter. 0” presents the performance of two baselines, where “weak” indicates full weak fine-tuning, i.e., naively fine-tuning on the entire weak data, and “icl” refers to weak ICL without fine-tuning. Models connected by a line mean that they share the same training data sources. Results below “strong ceiling” present test accuracy via greedy decoding, while those above show pass@k scores (k=10 and temperature=1.0). For simplicity, we only present the pass@k scores of _hybrid-ft and checkpoints that surpass it using greedy decoding, and full results are provided in <ref>. § EXPERIMENTS §.§ Datasets r0.50 # _gold, 1 # _gold, 2 # Test GSM8K 7,000 7,000 1,319 MATH 6,000 6,000 500 Data Statistics. _gold, 1 and _gold, 2 are subsets of the training set. The weak model uses _gold, 1 to cultivate initial mathematical skills, while the strong model can only access questions from _gold, 2 without ground truths. GSM8K <cit.> and MATH <cit.> are two widely used datasets for mathematical reasoning, and MATH comprises more challenging competition problems. The data statistics we use are presented in Tab. <ref>. Particularly, to ensure a sufficient amount of training data for developing preliminary mathematical skills in the weak model, we augment the GSM8K training set with the data constructed by <cit.>. Further details are available in <ref>. §.§ Experiment Settings We use Llama2-70b as the strong model and employ three weak models from different families: Llama2-7b, Gemma-2b, and Mistral-7b. We apply full parameter fine-tuning to the weak models on _gold, 1, and consistently adopt LoRA <cit.> to fine-tune the strong model. In Stage I, we perform two rounds of iterations on GSM8K and one round on MATH according to the principles of iteration outlined in <ref>. In Stage II, we adopt two preference learning-based approaches, DPO <cit.> and its variant ORPO <cit.>. Details are provided in <ref>. We evaluate the accuracy on the test set. The performance of the weak model m is defined as the “weak floor”. The performance of the strong model , fine-tuned with data containing gold solutions from _gold, 2, is termed the “strong ceiling”. It represents the upper limit of the capabilities that the strong model can achieve with _gold, 2. §.§ Results of Stage I The main results of Stage I on both GSM8K and MATH datasets are depicted in Fig. <ref>. Notably, in the MATH experiments, we randomly sample additional data that is not chosen based on the final answer consistency, due to the small amount available. Please refer to <ref> for details. According to Fig. <ref>, we have the following observations. 
Weak-ICL fine-tuning demonstrates a notable enhancement. Using our proposed method, the performance of the strong model, supervised only by the weak Gemma-2b with 25.17 accuracy on GSM8K (without any gold answers), can be improved up to 60.12, surpassing naive full weak fine-tuning by 31.08, and _plus (i.e., _hybrid-ft^2) outperforms it by 26.99. This verifies the effectiveness of data refining before supervised fine-tuning. Also, experimental results show that the mathematical reasoning capabilities of the strong model are increasingly recovered as the weak model improves, a conclusion verified by <cit.> on classification tasks. In detail, the performance on GSM8K gradually improves for Gemma-2b, Llama-7b, and Mistral-7b (25.17 → 33.81 → 59.51). Hence, the maximum performance of the strong model, trained with data generated by these models, also progressively enhances (60.12 → 63.76 → 68.39). _hybrid-ft achieves the highest pass@k scores. As expected, _hybrid-ft achieves the highest pass@k scores in the weak-to-strong setting, benefiting from its training data that incorporates two types of solutions—one from the weak model, and another from the strong model. This diversity enhances the robustness of the model by reducing the likelihood of overfitting. Additionally, the performance of _icl-ft generally surpasses that of _weak-ft, which can be attributed to variations in process-level accuracy and possibly the solution format. Detailed analyses are conducted in <ref>. Naive fine-tuning is inadequate for weak-to-strong reasoning. When using Gemma-2b as the weak model on the MATH dataset, full weak fine-tuning underperforms compared to the weak floor (10.0 v.s. 11.6). This indicates that naive fine-tuning, though successfully applied to classification, chess, and reward modeling tasks <cit.>, falls short for intricate reasoning tasks, particularly those of substantial difficulty like questions in MATH. In contrast, our weak-icl fine-tuning method effectively bridges the gap, offering an effective and scalable solution for the weak-to-strong reasoning challenge. Effect of ICL Performance r0.5 0.5 < g r a p h i c s > Results on GSM8K supervised by Gemma-2b. -.2ex < g r a p h i c s > and -.2ex < g r a p h i c s > are under original demonstrations, and -.2ex < g r a p h i c s > and -.2ex < g r a p h i c s > are under carefully selected demonstrations. Given that the efficacy of weak-icl fine-tuning partially depends on the effectiveness of weak ICL, we further explore how enhancing ICL performance through careful selection of demonstrations affects the performance of weak-icl fine-tuning. Fig. <ref> shows the test accuracy on GSM8K using Gemma-2b as the weak model under a different set of demonstrations. The results indicate that the performance of weak ICL with this particular group of demonstrations increases from the original 56.48 to 64.06. We then regenerate _icl with these demonstrations in the prompt and fine-tune the strong model on _icl, which is selectively curated through final answer consistency. This further improves performance from 64.06 to 64.75, confirming the utility of self-directed data curation. It is worth noting that although weak ICL holds the potential for high performance, the selection of effective demonstrations in a weak-to-strong framework is a non-trivial thing, and is beyond the scope of this paper. §.§ Results of Stage II r0.5 2*Weak Model 3cTest Accuracy (lr)2-4 I II. DPO II. 
ORPO 3lGSM8K Llama2-7b 62.62 66.19 (+3.57) 68.16 (+5.54) Gemma-2b 56.03 64.52 (+8.49) 63.91 (+7.88) Mistral-7b 68.39 70.96 (+2.57) 72.18 (+3.79) 3lMATH Llama2-7b 14.00 12.00 (-2.00) 15.00 (+1.00) Gemma-2b 14.20 11.60 (-2.60) 16.00 (+1.80) Mistral-7b 14.80 13.40 (-1.40) 17.00 (+2.20) Main results of Stage II. As discussed in <ref>, we employ the final iteration of _hybrid-ft as _plus for subsequent preference learning. The experimental results in <ref> validate this checkpoint achieves higher pass@k and possesses significant potential for further refinement. As shown in Tab. <ref>, our method for constructing positive and negative samples effectively enhances the strong model's math reasoning capabilities. On GSM8K, both DPO and ORPO consistently achieve significant improvements using our constructed datasets, notably resulting in an increase of 8.49 points when supervised by Gemma-2b. Despite the inherently challenging nature of MATH problem, which compromises the strong model's judgment and introduces inaccuracies in the training data, our method still achieves improvements on MATH through ORPO by at least 1 point.[<cit.> demonstrate that DPO can cause performance degradation on MATH due to the lack of regularization in its loss.] Data Construction Recipe When constructing preference data, we always use weak responses generated by the weak model as one of the chosen/rejected responses, instead of relying exclusively on self-generated data. We also test the self-generated setting on GSM8K using Llama2-7b as the weak model, where both chosen and rejected responses are generated by the strong model itself. The DPO test accuracy in this setting is 62.40 (-0.22), indicating a slight performance degradation. Without ground truth, the constructed positive and negative samples actually correspond to the more frequently and less frequently occurring answers, respectively, and are related to the answers the model tends to choose. Since preference optimization essentially performs ranking, the potential benefit of this self-generated setting is minimal. Therefore, incorporating weak data signals in the preference data construction process proves to be a better approach. §.§ Analysis < g r a p h i c s > Test accuracy across varying difficulty levels on the MATH test set. We use ORPO to obtain _pro. For further analysis, we examine the accuracy across different difficulty levels in the MATH test set (See <ref> for data statistics). As shown in Fig. <ref>, the strong model exhibits better generalization on easier problems. Specifically, even though Llama2-7b achieves only 6.98 points accuracy on level 1 problems, Llama2-70b can achieve an accuracy exceeding 30 points after training using this weak supervision. For more challenging problems (levels 4-5), _pro, enhanced with ORPO, even surpasses the strong ceiling obtained by supervised fine-tuning solely on gold solutions. This phenomenon serves to validate the effectiveness of learning from incorrect data. §.§ Experiments Closer to Future Scenarios r0.4 Test Accuracy Weak Floor 11.82 Full Weak FT 12.46 Weak ICL 8.63 _weak-ft^1 12.78 _icl-ft^1 9.58 _hybrid-ft^1 11.18 _weak-ft^2 13.10 _icl-ft^2 11.50 _hybrid-ft^2 (_plus) 11.82 _pro 15.65 Results on OlympicArena using Llama3 family. The best result is in bold, and the best result of supervised fine-tuning in underlined. 
In preliminary tests with Llama3-70b <cit.>, we observe that on GSM8K and MATH, Llama3-70b can largely unlock its potential through in-context learning, with marginal or even adverse impacts from parameter updates due to training instabilities. Consequently, we focus on a more challenging dataset developed after the release of Llama3-70b, OlympicArena <cit.>, to simulate a more realistic future scenario. We only consider English questions in OlympicArena, excluding the CODE (Code Generation) and OT (Others) problem types that require case-based or expert evaluation. This results in 6,020 training data without solutions and final answers, and 313 test data with final answers to assess the performance of different methods. We use Llama3-8b-instruct (without initial fine-tuning on a subset of training data) as the weak model and Llama3-70b as the strong model to be improved. The hyperparameters are consistent with those used for GSM8K. This configuration more closely resembles future real-world weak-to-strong scenarios. Experimental results are displayed in Tab. <ref>. “Weak Floor” represents the zero-shot performance of Llama3-8b-instruct, “Full Weak FT” denotes the performance of Llama3-70b after supervised fine-tuning on the full set (i.e, 6,020) of weak solutions generated by Llama3-8b-instruct on the training set, and “Weak ICL” indicates the performance of Llama3-70b under 4-shot weak demonstrations generated by Llama3-8b-instruct. Despite having more parameters, Llama3-70b under in-context learning still performs lower than the zero-shot performance of Llama3-8b-instruct due to insufficient mining capabilities. _weak-ft^1, obtained by our proposed weak-icl fine-tuning method, achieves higher performance than Full Weak FT with fewer training data (i.e., 746), outperforming it by 0.32 points. After the second stage of preference optimization, which further exploits the weak model and training questions without answers, the strong model's performance is improved by an additional 3.19 points over Full Weak FT. This demonstrates the robustness and generalizability of our method in scenarios closer to future conditions. § RELATED WORK §.§ LLM Training LLMs can enhance their ability to solve tasks and better align with human instructions through a supervised fine-tuning (SFT) phase <cit.>. This phase heavily relies on the quality of training data, as previous studies <cit.> demonstrate that higher data quality translates to substantial gains in model performance. In this paper, we investigate the potential of learning from weak supervisions. To further align LLMs with human values and enable learning from both positive and negative feedback, additional training is required, such as reinforcement learning from human feedback (RLHF, <cit.>) and direct preference optimization (DPO, <cit.>). In particular, DPO reparameterizes reward functions in RLHF and has been widely used due to its simplicity. Several variants of DPO have then emerged to further enhance its stability and performance, such as ORPO <cit.> and SimPO <cit.>, etc. This paper explores the capabilities of DPO and ORPO using our constructed contrastive samples in a weak-to-strong setting. §.§ Mathematical Reasoning The exploration of mathematical reasoning within LLMs has been a focal point for evaluating their cognitive capabilities akin to human reasoning <cit.>. 
Researchers have developed various methods to enhance the mathematical reasoning capabilities of LLMs after pre-training, which can be broadly classified into two categories: (1) Prompting: Some works <cit.> aims to elicit the intrinsic reasoning abilities of LLMs by specific prompting engineering, without updating the model parameters; (2) Fine-tuning: Another line of studies focuses on generating a more extensive and higher-quality collection of question-answer pairs <cit.>. Through supervised fine-tuning and preference optimization <cit.>, the models can achieve significant improvements in their mathematical problem-solving capabilities. § CONCLUSION In this paper, we explore the efficacy of weak-to-strong framework in complex reasoning tasks. We introduce a new method that elicits strong capabilities using weak supervisions, without relying on annotations from humans or more advanced models. This method focuses on the strong model's ability to autonomously refine its training data, even if it has not learned the task before. By iteratively expanding its learning scope, the strong model continuously broadens its reasoning skills. This self-directed data curation is crucial for scaling up the enhancement of AI reasoning capabilities, making the model more independent and effective in its developmental trajectory. Through this work, we seek to illuminate new pathways for AI development, emphasizing the critical role of innovative model supervision in advancing AGI and beyond. § LIMITATIONS In our experiments, we use Llama2-70b and Llama3-70b as a proxy for hypothetical superintelligent models of the future. We acknowledge that there might be performance discrepancies compared to a genuine future advanced model. Nonetheless, our efforts lay the groundwork for investigating methodologies in weak-to-strong reasoning. Additionally, this paper does not explore supervision at the process level, such as the model's ability to learn from partially correct data <cit.>. In the weak-to-strong scenario, the presence of non-negligible errors and noise at the process level yields only limited performance improvements in our early experiments, thereby posing challenges for future research. § ACKNOWLEDGEMENTS We sincerely thank Xuefeng Li, Haoyang Zou, and Ting Wu for their valuable insights during discussions, which greatly enhance the quality of this work. acl_natbib § APPENDIX §.§ Dataset Details §.§.§ Dataset Construction For GSM8K, we evenly divide the original training dataset of 7,473 samples into two subsets, _gold, 1 and _gold, 2. Additionally, we supplement both _gold, 1 and _gold, 2 with the data of the same distribution developed by <cit.>, until each contains 7,000 samples. Thus, the weak model uses _gold, 1, which includes both questions and gold solutions, to obtain basic problem-solving capabilities. Meanwhile, the strong model can only access a training dataset = {q_i}, where q_i ∈_gold, 2, consisting of 7,000 mathematical problems without ground truth answers. GSM8K also includes 1,319 test samples. For MATH, we employ the same subset of 500 representative problems as the test set, identical to that used in <cit.>. We then split the remaining 12,000 samples evenly between _gold, 1 and _gold, 2, each containing 6,000 samples. §.§.§ Statistics of MATH test set r0.5 # L1 # L2 # L3 # L4 # L5 # Total 43 90 105 128 134 500 Data statistics of the MATH test set. The distribution of difficulty levels across the 500 test data samples in MATH is listed in Tab. <ref>. 
§.§ Training Details For supervised fine-tuning in Stage I, we adopt LoRA to fine-tune the strong model with a learning rate of 1 × 10^-4 and search for weight decay in the set {0, 0.01}. We run 2 epochs on GSM8K and 3 epochs on MATH, with a batch size of 8. In Stage II, we employ two preference optimization methods. For DPO, we train the enhanced strong model _plus with a learning rate of 1 × 10^-5 and run 1 epoch. For ORPO, we search for β in the set {0.1, 0.5, 1.0} with a learning rate of 3 × 10^-5 and run 1 epoch. All experiments are conducted using A100 GPUs. When constructing contrastive samples in Stage II, we sample n=10 responses at temperature=1.0, and use a confidence threshold of τ=0.6. Normally, we evaluate using greedy decoding. For calculating pass@k, we set k=10 at temperature=1.0. §.§ Additional Analysis §.§.§ Diversity Analysis < g r a p h i c s > Frequency distribution of the number of distinct solutions on GSM8K supervised by Llama2-7b. To investigate why _hybrid-ft achieves high pass@k scores despite lower greedy decoding results, we explore the diversity of responses generated by _hybrid-ft and _icl-ft. We specifically examine the frequency distribution of the number of distinct solutions for each question across the two strong model checkpoints. Given a question from _gold, 2, we sample n = 10 responses at temperature=1.0 for each checkpoint. We consider two responses distinct if their ROUGE-L similarity is less than 0.7. We then compute the number of clusters formed by these distinct responses and plot their frequency distribution in Fig. <ref>. As shown in Fig. <ref>, _icl-ft^2 tends to produce nearly the same sampled responses for each question in more than 36% of the instances. This indicates a limited exploration of problem-solving paths and difficulty in generating diverse, correct solutions during the sampling process. In contrast, _hybrid-ft^2 generates a variety of responses, increasing its hit rate with multiple sampling and thus achieving higher pass@k scores. Additionally, diverse solutions are crucial for robust outcomes and model generalization <cit.>. In Stage II, diverse solutions also ensure the distinction between positive and negative samples, demonstrating the rationale for selecting _hybrid-ft^2 for preference optimization in Stage II. §.§.§ Training Accuracy of Stage I r0.5 Final Answer Process-Level 4lGSM8K 2*Llama2-7b _weak^1 89.82 72.50 _icl^1 89.82 76.50 [0.5em] 2*Gemma-2b _weak^1 87.97 73.10 _icl^1 87.97 73.80 [0.5em] 2*Mistral-7b _weak^1 92.38 80.10 _icl^1 92.38 77.90 4lMATH 2*Llama2-7b _weak 46.11 32.04 _icl 46.11 39.22 [0.5em] 2*Gemma-2b _weak 30.40 26.30 _icl 31.90 29.90 [0.5em] 2*Mistral-7b _weak 24.75 21.50 _icl 25.25 25.60 Training accuracy of Stage I. Tab. <ref> presents the final answer accuracy and process-level accuracy for both weak data and icl data utilized in the initial round.[The relatively low accuracy observed in MATH explains why we choose to perform one round of iteration.] To compute process-level accuracy, we randomly sample a maximum of 1,000 training sample from each of weak data and icl data, and evaluate them using GPT-4o following <cit.>, the prompt we use is illustrated in Tab. <ref>. Accuracy at this level is determined strictly on the basis that there are no errors throughout the intermediate reasoning steps. 
From the results we can see that despite having consistent final answer accuracy (with the exceptions of Gemma-2b and Mistral-7b on MATH using augmented training data), there are noticeable differences in process-level performance, leading to variations in the effectiveness of _weak-ft and _icl-ft. Moreover, it is counterintuitive that models trained on icl data with relatively low process-level accuracy achieve higher performance. This might be because the models prefer self-generated solutions and can more effectively learn those that better align with their inherent distribution <cit.>. §.§ Additional Experiments 0.45 Greedy Decoding Pass@k 4lGSM8K 3*Llama2-7b _weak-ft^2 57.47 77.26 _icl-ft^2 63.76 81.05 _hybrid-ft^2 62.62 86.28 [0.5em] 3*Gemma-2b _weak-ft^2 45.03 71.49 _icl-ft^2 60.12 80.14 _hybrid-ft^2 56.03 85.14 [0.5em] 3*Mistral-7b _weak-ft^2 66.72 85.67 _icl-ft^2 66.64 84.08 _hybrid-ft^2 68.39 88.70 4lMATH 3*Llama2-7b _weak-ft^1 10.80 34.80 _icl-ft^1 11.80 35.00 _hybrid-ft^1 14.00 33.60 [0.5em] 3*Gemma-2b _weak-ft^1 14.80 38.80 _icl-ft^1 13.60 33.60 _hybrid-ft^1 14.80 39.60 [0.5em] 3*Mistral-7b _weak-ft^1 10.80 34.20 _icl-ft^1 15.60 31.60 _hybrid-ft^1 14.20 38.40 Greedy decoding and pass@k results (k=10 and temperature=1.0) for the three variants of enhanced strong models obtained through weak-icl fine-tuning. The best results are in bold. 0.45 Test Acc. # Training Data 3lGemma-2b SFT on Full Weak 10.00 6,000 SFT on Gold Weak 15.60 644 _weak-ft^1 11.00 448 _icl-ft^1 11.40 448 _hybrid-ft^1 13.20 448 × 2 3lMistral-7b SFT on Full Weak 14.40 6,000 SFT on Gold Weak 16.60 861 _weak-ft^1 12.40 584 _icl-ft^1 15.60 584 _hybrid-ft^1 14.20 584 × 2 Stage I results on MATH without augmenting training data. “Test Acc.” refers to Test Accuracy. Weak Model Full Weak FT Weak-ICL FT 3lGSM8K Llama2-7b 22.47 78.53 Gemma-2b 8.27 75.71 Mistral-7b 14.63 71.38 3lMATH Llama2-7b 10.45 71.64 Gemma-2b -25.81 64.52 Mistral-7b 19.05 28.57 Performance Gap Recovered (PGR) in Stage I. §.§.§ Details of Stage I on MATH In the Stage I experiment conducted on the MATH dataset, it is found that the amount of training data selected via final answer consistency is so limited that the strong model can hardly learn the effective features through supervised fine-tuning. To address this, we randomly sample additional inconsistent data. Based on the weak model's performance (Llama-7b < Gemma-2b < Mistral-7b on MATH), we supplement the data (both _weak and _icl) to 1,000 instances for Gemma-2b and 2,000 instances for Mistral-7b, and present the results in Fig. <ref>. The original amount of training data and test accuracy for these two weak models are shown in Tab. <ref>. §.§.§ Pass@k Results Tab. <ref> summarizes the greedy decoding and pass@k results for the three variants of enhanced strong models obtained through weak-icl fine-tuning. Notably, _hybrid-ft utilizes a training set that combines those used by _weak-ft and _icl-ft. The results indicate that _hybrid-ft outperforms its counterparts in terms of pass@k, achieving superior pass@k scores with margins of up to 5.23 points. The only exception occurs in the MATH dataset supervised by Llama2-7b, where the underperformance is likely due to limited training data. The superior performance of _hybrid-ft can be attributed to the diversity of solutions in its training set (verified in <ref>), validating our approach of adopting the final iteration of _hybrid-ft from Stage I for preference optimization in Stage II. 
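As a concrete reference for the diversity analysis above (where two responses are considered distinct if their ROUGE-L similarity is below 0.7), a simple greedy way to count distinct solutions per question is sketched below; the `rouge_score` package is assumed, and the single-representative greedy clustering is our illustrative choice rather than a documented detail of the pipeline.

```python
from rouge_score import rouge_scorer

# ROUGE-L F-measure serves as the pairwise similarity between two solutions.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(a: str, b: str) -> float:
    return scorer.score(a, b)["rougeL"].fmeasure

def count_distinct_solutions(responses, threshold=0.7):
    """Greedily cluster responses: a response joins an existing cluster if its
    ROUGE-L similarity to that cluster's representative is >= threshold,
    otherwise it opens a new cluster. Returns the number of clusters."""
    representatives = []
    for response in responses:
        if all(rouge_l(response, rep) < threshold for rep in representatives):
            representatives.append(response)
    return len(representatives)

# Example: 10 responses sampled at temperature 1.0 for one training question.
# num_distinct = count_distinct_solutions(sampled_responses)
```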
It is important to note that while higher pass@k scores suggest greater potential, the true challenge lies in effectively harnessing this potential, particularly in the weak-to-strong setting where no ground truths are available. Our proposed weak-to-strong preference optimization in Stage II successfully addresses this challenge, transforming theoretical potential into tangible performance gains in greedy decoding, as proved in <ref>. §.§.§ PGR of Stage I <cit.> propose a new metric called performance gap recovered (PGR) to measure the fraction of the performance gap that can be recovered through weak supervision, as illustrated in Eq. <ref>. Tab. <ref> displays the results of the naive full weak fine-tuning (i.e., Full Weak FT) and our best weak-icl fine-tuning (i.e., Weak-ICL FT) in terms of PGR, which also demonstrate that our method can outperform the simple competitor. However, the variations in PGR across different weak models do not provide meaningful insights. In the experiments described in the main text, we use test accuracy instead to provide a more detailed depiction of model performance. PGR = weak-to-strong - weak floor/strong ceiling - weak floor. §.§.§ Effect of SFT Data r0.5 Weak Model SFT Data Test Accuracy 6*Llama2-7b Full Weak 42.38 Gold Weak 54.21 (+11.83) Our Weak 53.68 (+11.30) (lr)2-3 Full ICL 59.14 Gold ICL 64.29 (+5.15) Our ICL 61.71 (+2.57) 6*Gemma-2b Full Weak 29.04 Gold Weak 46.40 (+17.36) Our Weak 42.91 (+13.87) (lr)2-3 Full ICL 58.61 Gold ICL 63.86 (+5.25) Our ICL 59.21 (+0.60) 6*Mistral-7b Full Weak 61.33 Gold Weak 67.55 (+6.22) Our Weak 65.96 (+4.63) (lr)2-3 Full ICL 62.32 Gold ICL 66.64 (+4.32) Our ICL 65.43 (+3.11) Detailed results of Stage I on GSM8K. Tab. <ref> presents more detailed comparative experimental results of Stage I on GSM8K. “Full Weak” denotes full weak fine-tuning, “Our Weak” is equivalent to _weak-ft^1, and “Our ICL” is equivalent to _icl-ft^1. “Gold Weak” refers to the scenario where weak data with correct final answers are filtered and used for supervised fine-tuning, which is impossible in the weak-to-strong setting and just used for experimental analysis. Similarly, “Gold ICL” refers to the scenario where solutions with correct final answers, generated by the strong model via weak ICL, are filtered. Compared to using a large volume of noisy data (i.e., Full Weak and Full ICL), reducing the data quantity while enhancing data quality can significantly improve the accuracy of the trained model, with potential gains over 17 points. Although our method performs slightly lower than the gold results, it proves highly effective and stable in scenarios where obtaining the ground truth is impossible. Question: {question} Student Solution: {solution} Your task involves three parts: 1. **Step-by-step Evaluation:** Go through the student solution carefully and identify key errors and potential misunderstandings that led to the incorrect solution. 2. **Final Judgement:** Provide an overall judgement on the correctness of the student's solution. 3. **First Error Step:** If the solution is incorrect, generate the step number where the first error occurs, otherwise generate N/A here. Here's the format I want: Step-by-step Evaluation: [Provide a step by step examination of the student solution and identify key errors and misunderstandings here.] 
Final Judgement: [Insert only **correct** or **wrong** here]
First Error Step: [Insert either N/A or the step number where the first error occurs]
Please follow this format without any additional introductory or concluding statements.
Prompt used to evaluate process-level accuracy.
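A minimal sketch of how the evaluation prompt above might be invoked to obtain process-level judgements is shown below; it assumes the `openai` Python client, treats the prompt as a template with `{question}` and `{solution}` placeholders, and the output parsing is an illustrative guess at the judge's response format.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_process_level(prompt_template: str, question: str, solution: str) -> dict:
    """Fill the evaluation prompt, query GPT-4o, and parse the verdict."""
    prompt = prompt_template.format(question=question, solution=solution)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    text = response.choices[0].message.content
    verdict = re.search(r"Final Judgement:\s*\**(correct|wrong)\**", text, re.IGNORECASE)
    first_error = re.search(r"First Error Step:\s*(N/A|\d+)", text, re.IGNORECASE)
    return {
        "process_correct": bool(verdict) and verdict.group(1).lower() == "correct",
        "first_error_step": first_error.group(1) if first_error else None,
    }
```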
http://arxiv.org/abs/2407.13767v1
20240718175932
Topological insulators on fractal lattices: A general principle of construction
[ "Daniel J. Salib", "Bitan Roy" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.dis-nn", "cond-mat.mtrl-sci" ]
Department of Physics, Lehigh University, Bethlehem, Pennsylvania, 18015, USA Department of Physics, Lehigh University, Bethlehem, Pennsylvania, 18015, USA § ABSTRACT Fractal lattices, featuring the self-similarity symmetry, are often geometric descents of parent crystals, possessing all their discrete symmetries (such as rotations and reflections) except the translational ones. Here, we formulate three different general approaches to construct real space Hamiltonian on a fractal lattice starting from the Bloch Hamiltonian on the parent crystal, fostering for example strong and crystalline topological insulators resulting from the interplay between the nontrivial geometry of the underlying electronic wavefunctions and the crystal symmetries. As a demonstrative example, we consider a generalized square lattice Chern insulator model, and within the framework of all three methods we successfully showcase incarnations of strong and crystalline Chern insulators on the Sierpiński carpet fractal lattices. The proposed theoretical framework thus lays a generic foundation to build a tower of topological phases on the landscape of fractal lattices. Topological insulators on fractal lattices: A general principle of construction Bitan Roy July 22, 2024 =============================================================================== Introduction. Quasicrystals and fractals are prominent members of the structurally diverse family of solids, typically encompassing crystals. As such, quasicrystals are constituted by a set of sites living on a brane inside a higher-dimensional crystal, as shown in Fig. <ref>(a) in terms of the one-dimensional Fibonacci quasicrystal residing within a two-dimensional (2D) square lattice (SL) <cit.>. By contrast, fractals can be built by eliminating specific sites of crystals, such that the resulting structures feature the self-similarity symmetry. This procedure is shown in Fig. <ref>(b) for the Sierpiński carpet fractal lattice (SCFL), emerging out of a SL <cit.>. Thus, together they constitute the geometric descent family of crystals. Such geometric correspondences, when imposed on the Hilbert space of the topological Bloch Hamiltonian for a crystal, raises a fascinating possibility of harnessing novel topological phases of matter, resulting from the intriguing interplay between the geometry of the underlying electronic wavefunctions and the crystal symmetries, on quasicrystals <cit.> and fractal lattices. Among the plethora of possibilities, strong <cit.> and crystalline <cit.> topological insulators (TIs), about which more in a moment, are the most prominent and commonly occurring ones in quantum crystals that are routinely discovered in nature following the prescriptions of topological quantum chemistry <cit.>. Identifying such phases on fractal lattices is the central theme of the current pursuit. As demonstrative examples, here we showcase the appearance of both strong and crystalline TIs on SCFLs, possessing all the discrete symmetries of a SL, such as four-fold rotation about the z direction and reflections about x and y axes, except for the translational ones, starting from a generalized SL model for quantum anomalous Hall or Chern insulators [Figs. <ref>(c) and (d)]. We formulate three different approaches to construct the effective real space Hamiltonian on SCFLs, each of which allows strong and crystalline analogues of the SL Chern insulator [Fig. <ref>]. 
In all these cases, the bulk-boundary correspondence between a nontrivial bulk topological invariant and the resulting edge modes remain operative [Fig. <ref>]. Our theoretical formulation, therefore, opens an unexplored territory of exotic topological phases, already cataloged for topological quantum crystals, realizable on their geometric descent fractal lattices. Bloch Hamiltonian. The Qi-Wu-Zhang model for quantum anomalous Hall or Chern insulators is given by <cit.> [4] H^ gen_ QWZ= ∑_k⃗( c^†_+,k⃗ c^†_-,k⃗) [ ∑^3_j=1τ_j d_j(k⃗) ] ( [ c_+,k⃗; c_-,k⃗ ]). Fermionic creation (annihilation) operators with parity τ=± and momentum k⃗=(k_x,k_y) is c^†_τ,k⃗ (c_τ,k⃗). The vector Pauli matrix τ=(τ_1, τ_2, τ_3) operates on the parity index. The components of d⃗(k⃗) are <cit.> d_1(k⃗) = t_1 S_x + t_2 C_x S_y, d_2(k⃗) = t_1 S_y + t_2 C_y S_x, d_3(k⃗) = M-4 B -4 B̃ + 2B ( C_x + C_y ) + 4 B̃ C_x C_y, where S_j=sin(k_j a) and C_j=cos(k_j a) for j=x,y. The hopping amplitude between the orbitals with opposite [same] parities [parity], living on the nearest-neighbor (next-nearest-neighbor) sites of the SL with the lattice spacing a is t_1 (t_2) [2B (4 B̃)]. The on-site staggered density between two orbitals is M-4 B -4 B̃. For simplicity, we ignore any particle-hole asymmetry. Chern number. Topological properties of this model can be tracked by computing the first Chern number of the filled valence band, for example, given by <cit.> [4] C=-∫_ BZd^2k⃗4π [ ∂_k_xd̂⃗̂(k⃗) ×∂_k_yd̂⃗̂(k⃗) ] ·d̂⃗̂(k⃗), where d̂⃗̂(k⃗)=d⃗(k⃗)/|d⃗(k⃗)|. The integral is restricted within the first Brillouin zone (BZ). The resulting phase diagram is shown in Fig. <ref>(c). It accommodates Chern insulators with the band inversion at the Γ and M points with C=-1 and +1, respectively, named Γ and M phases. They represent strong TIs, featuring band inversion at an odd number of points in the BZ. The latter is translationally active, as the M point results from the translational symmetries of the underlying SL. Due to longer range hopping, the above model also supports a Chern insulator with the band inversion simultaneously around the X and Y points of the BZ, connected by four-fold rotations, with C=-2, representing a crystalline TI. The normal insulator (NI) therein has C=0. Bott index. We aim to harness these phases on the SCFLs, where the notion of a BZ becomes moot due to the absence of the translational symmetry. Thus, we bring a related topological invariant onto the stage, the Bott index (BI), computed from the Hilbert space of the associated real space Hamiltonian H^ gen, SL_ QWZ on a SL, obtained via a Fourier transformation of H^ gen_ QWZ, satisfying H^ gen, SL_ QWZ|E⟩ = E |E⟩. First, we define two diagonal matrices, X and Y, with their respective matrix elements given by X_i,j=x_i δ_i,j and Y_i,j=y_i δ_i,j, encoding the position (x_i,y_i) of the ith site, from which we define two unitary matrices U_X=exp(2 π i X) and U_Y=exp(2 π i Y). Next, in terms of the projector onto the filled eigenstates of H^ gen, SL_ QWZ, up to the Fermi energy E_F=0, defined as 𝒫=∑_E<E_F|E⟩⟨E|, we compute <cit.> BI= 1/2 π( Tr[ ln( V_X V_Y V^†_X V^†_Y ) ]), in systems with periodic boundary conditions (PBCs), where V_j= I-𝒫 + 𝒫 U_j 𝒫 for i=X,Y, showing BI≡ C. Thus, BI yields identical phase diagram as in Fig. <ref>(c). Method 1. In this method, also named `Method of symmetry', we replace each term appearing in d⃗(k⃗) [Eq. (<ref>)], constituting the Bloch Hamiltonian [Eq. 
(<ref>)], by its symmetry analogous term in the real space, such that both transform identically under all the discrete symmetry operations, the four-fold rotation about the z direction, and the reflections about x and y axes. See Table <ref>. The resulting real space Hamiltonian on SCFLs then reads [4] H^ gen, 1_ QWZ = ∑_j ≠ kΘ(r^ PA_jk-R_1)/2exp[ 1-r^ PA_jk/r^ PA_0] c^†_j [ -i t_1 ( τ_1 cosϕ^ PA_jk + τ_2 sinϕ^ PA_jk) + 2B τ_3 ] c_k + ∑_j ≠ kΘ(r^ BD_jk-R_2)/2exp[ 1-r^ BD_jk/r^ BD_0] c^†_j [ -i t_2/√(2)( τ_1 sinϕ^ BD_jk + τ_2 cosϕ^ BD_jk) + 2 B̃τ_3 ] c_k + ∑_j c^†_j [ M-4 B -4 B̃] τ_3 c_j, where r^a_jk=|r⃗_j-r⃗_k| (ϕ^a_jk) is the distance (azimuthal angle) between the jth and kth sites, located at r⃗_j and r⃗_k, respectively, placed along the principal axes (a= PA) and body diagonals (a= BD), c^⊤_j=( c_+,j, c_-,j) is a two-component spinor, and c_τ,j is the fermion annihilation operator with parity τ=± on the jth site. In this construction, R_1 (R_2) controls the range of hopping along PA (BD). Throughout, we set r^ PA_0=r^ BD_0=a. Method 2 and Method 3. Any tight-binding Hamiltonian on a SL (H_ SL) can be cast as a block matrix [4] H_ SL = ( [ H_∙∙ H_∙∙; H_∙∙ H_∙∙ ]), where H_∙∙ (H_∙∙) is the part of H_ SL operating only on the black (red) colored sites, and H_∙∙ and H_∙∙=H^†_∙∙ capture the coupling between them. See Fig. <ref>(b). In Method 2, also named `Method of site elimination', the effective Hamiltonian for SCFLs is given by [4] H^ gen, 2_ QWZ= H_∙∙ . Here contributions from the red sites is completely ignored. In Method 3, also named `Method of renormalization', the effective or renormalized Hamiltonian for SCFLs is constructed by integrating out the red colored sites of the parent SL, yielding [4] H^ gen, 3_ QWZ= H_∙∙- H_∙∙ H^-1_∙∙ H_∙∙, assuming that H^-1_∙∙ exists. This condition can be satisfied as possible singularities (zero-energy modes of H_∙∙) are always isolated, and therefore can be regularized by taking a proper limiting procedure <cit.>. This is so because the gap at the bulk or boundary nodal point scales as ∼ 1/L_R, where L_R is the linear system size, constituted by the red sites, with the nodal point pinned at zero energy only in the thermodynamic limit (L_R →∞). With the general methodologies of constructing the Hamiltonian for SCFLs being staged, we now proceed to showcase the incarnation of all the TIs, accommodated by the SL Qi-Wu-Zhang model, on such systems. The results are summarized in Fig. <ref>, which we discuss next. Results. A SCFL is constructed from a parent SL in the following way. We divide a SL into 3 × 3 squares. Then we remove the central square. We repeat this procedure recursively for each of the eight remaining squares to obtain different generations (g). In the gth generation the total number of squares is 9^g and the total number of unremoved squares is 8^g. Hence the SCFL has a fractal dimension d_ frac=ln(8^g)/ln(√(9^g)) ≈ 1.89. The topological phases in the SCFLs of any g can be identified from the BI by diagonalizing the corresponding real space Hamiltonian, shown in Eqs. (<ref>), (<ref>), and (<ref>). Irrespective of g, Method 1 and Method 2, resulting in the Hamiltonian in Eq. (<ref>) and Eq. (<ref>), respectively, yield identical phase diagrams for SCFLs for various parameter values therein, when in the former setup we set R_1=(1+δ)a and R_2=(√(2)+δ)a, where δ≪ 1 is a small positive number. This observation assures the existence of various phases, appearing in the phase diagrams, in the thermodynamic limit. Specifically, Fig. 
<ref>(a) displays Chern insulators with BI=-1 and +1, identical to those for the Γ and M phases, respectively. Fig. <ref>(b) shows the appearance of a Chern insulator with BI=-2, as found in the `Valley XY phase'. These phase diagrams are qualitatively similar to the one shown in Fig. <ref>(c) for a SL, obtained in terms of the first Chern number and BI. Phase diagrams of a g=3 SCFL, obtained via Method 3, are identical to those found in the parent SL of linear dimension L=27. For example, Fig. <ref>(c) accommodates Chern insulators with BI=+1 and -1, whereas Fig. <ref>(d) shows a Chern insulator with BI=-2. Topological phases with nontrivial and quantized BI support topological edge modes, manifesting the hallmark bulk-boundary correspondence. On a SL, the near-zero-energy topological edge modes are found only in systems with open boundary conditions (OBCs) [Fig. <ref>(a)]. Due to the self-similarity symmetry, 2D fractal lattices harbor outer and inner edges. Topological Hamiltonian constructed in Method 3 (H^ gen, 3_ QWZ), however, accommodates such near-zero-energy modes on SCFLs only with OBCs that are localized only near the outer edges [Fig. <ref>(b)]. This Hamiltonian does not support any near-zero-energy modes close to the inner edges of SCFLs with PBCs. This so, because H^ gen, 3_ QWZ [Eq. (<ref>)] is constructed by systematically integrating out the red sites of the parent SL, thereby inheriting all the spectral properties of the parent crystal. By contrast, Hamiltonian for the SCFLs, constructed from Method 1 [Eq. (<ref>)] and Method 2 [Eq. (<ref>)] support near-zero-energy modes close to their outer and inner edges in systems with OBCs and PBCs [Figs. <ref>(c) and (d)], respectively, as they are constructed by ignoring any influence of the red sites of the parent SL. Thus, these two methods expose the inner boundaries of the self-similar fractal lattices. Although in Fig. <ref>, we display these results for a Chern insulator with BI=+1, we arrive at qualitatively similar results for those with BI=-1 and -2 (not show explicitly). Finally, we note that SCFLs also support NIs with BI=0, devoid of any near-zero-energy outer or inner edge modes. Summary & discussions. Here we formulate three independent approaches to construct the effective real space Hamiltonian on fractal lattices starting from the Bloch Hamiltonian in their parent crystals to harness different classes of TIs therein, namely the strong and crystalline ones. We believe that none of these methods can describe the effective Hamiltonian on fractal lattices in real materials in full accuracy. Nonetheless, given that all three methods permit strong and crystalline TIs, with Method 2 and Method 3 corresponding to two extreme limits, and feature the signature bulk-boundary correspondence, it is highly conceivable that all these phases can also be found in real fractal lattices, nowadays realizable in designer electronic <cit.> and molecular <cit.> systems as well as in classical metamaterials <cit.>. Our proposed methodologies can be employed to capture topological phases on any fractal lattice belonging to any Altland-Zirnbauer symmetry class and any crystalline group in any dimension (such as the three-dimensional Menger sponge), as long as there exists a parent topological crystal (cubic, in this case). 
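As a purely illustrative numerical sketch (not the implementation used in this work), the generation-g Sierpiński carpet sites and the Bott index of the filled-band projector can be evaluated along the following lines, with the real space Hamiltonian left as a placeholder and a site-major basis ordering assumed:

```python
import numpy as np

def sierpinski_carpet_sites(g):
    """(x, y) coordinates of the 8**g unremoved unit cells of a generation-g
    Sierpinski carpet carved out of a 3**g x 3**g square lattice."""
    def kept(i, j):
        while i > 0 or j > 0:
            if i % 3 == 1 and j % 3 == 1:  # central square removed at this scale
                return False
            i, j = i // 3, j // 3
        return True
    L = 3 ** g
    return [(i, j) for i in range(L) for j in range(L) if kept(i, j)]

def bott_index(H, positions, n_orb=2, e_fermi=0.0):
    """Bott index from the projector onto eigenstates below e_fermi.
    positions: per-site coordinates rescaled to [0, 1); the basis of H is
    assumed to be ordered site by site with n_orb orbitals per site."""
    evals, evecs = np.linalg.eigh(H)
    filled = evecs[:, evals < e_fermi]
    P = filled @ filled.conj().T
    x = np.repeat([p[0] for p in positions], n_orb)
    y = np.repeat([p[1] for p in positions], n_orb)
    Ux, Uy = np.diag(np.exp(2j * np.pi * x)), np.diag(np.exp(2j * np.pi * y))
    I = np.eye(H.shape[0])
    Vx = I - P + P @ Ux @ P
    Vy = I - P + P @ Uy @ P
    phases = np.angle(np.linalg.eigvals(Vx @ Vy @ Vx.conj().T @ Vy.conj().T))
    return int(round(phases.sum() / (2 * np.pi)))

# sites = sierpinski_carpet_sites(g=3)                      # 512 sites for g = 3
# H = ...  # placeholder: real space Hamiltonian (Method 1, 2 or 3) with PBC
# print(bott_index(H, [(i / 27, j / 27) for i, j in sites]))
```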
Our theoretical framework, therefore, opens promising possibilities to realize (both theoretically and experimentally) a vast variety of topological phases of matter on the rich landscape of fractal lattices, going beyond the existing studies of specific topological models on specific fractal lattices <cit.>. We also note that there is no sharp topological bulk gap when effective Hamiltonian are constructed from Methods 1 and 2 [Fig. <ref>], raising a question of fundamental and practical importance regarding the stability of TIs on fractal lattices in the inevitable presence of disorder. These fascinating research directions are reserved for systematic future investigations. Acknowledgments. This work was supported by the NSF CAREER Grant No. DMR-2238679 of B.R. We are thankful to Vladimir Juričić and Sanjib Kumar Das for critical reading of the manuscript. quasicrystal:book C. Janot, Quasicrystals: A Primer, 2nd ed. (Clarendon, Oxford, 2012). fractal:book B. B. Mandelbrot, The Fractal Geometry of Nature, 2nd ed. (Times Books, New York, 1982). panigrahi-roy-juricic A. Panigrahi, V. Juričić and B. Roy, Projected topological branes, Commun. Phys. 5, 230 (2022). tyner-juricic A. C. Tyner and V. Juričić, Three-dimensional ℤ topological insulators without reflection symmetry, Sci. Rep. 14, 4288 (2024). TITh1 M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010). TITh2 X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011). TITh3 A. Bansil, H. Lin, and T. Das, Colloquium: Topological band theory, Rev. Mod. Phys. 88, 021004 (2016). TITh4 C. L. Kane and E. J. Mele, Z_2 Topological Order and the Quantum Spin Hall Effect, Phys. Rev. Lett. 95, 146802 (2005). TITh5 B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Quantum spin Hall effect and topological phase transition in HgTe quantum wells, Science 314, 1757 (2006). TITh6 L. Fu and C. L. Kane, Topological insulators with inversion symmetry, Phys. Rev. B 76, 045302 (2007). TITh7 J. E. Moore and L. Balents, Topological invariants of time-reversal-invariant band structures, Phys. Rev. B 75, 121306(R) (2007). TITh8 A. Kitaev, Periodic table for topological insulators and superconductors, AIP Conf. Proc. 1134, 22 (2009). TITh9 R. Roy, Topological phases and the quantum spin Hall effect in three dimensions, Phys. Rev. B 79, 195322 (2009). TITh10 C.-X. Liu, X.-L. Qi, H. Zhang, X. Dai, Z. Fang, and S.-C. Zhang, Model Hamiltonian for topological insulators, Phys. Rev. B 82, 045122 (2010). TITh11 S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, Topological insulators and superconductors: Tenfold way and dimensional hierarchy, New J. Phys. 12, 065010 (2010). TITh12 B. A. Bernevig and T. L. Hughes, Topological Insulators and Topological Superconductors (Princeton University Press, USA, 2013). CTITh1 C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, Classification of topological quantum matter with symmetries, and S. Ryu, Rev. Mod. Phys. 88, 035005 (2016). CTITh2 L. Fu, Topological crystalline insulators, Phys. Rev. Lett. 106, 106802 (2011). CTITh3 R.-J. Slager, A. Mesaros, V. Juričić, and J. Zaanen, The space group classification of topological band-insulators, Nat. Phys. 9, 98 (2013). CTITh4 K. Shiozaki and M. Sato, Topology of crystalline insulators and superconductors, Phys. Rev. B 90, 165114 (2014). TQC1 B. Bradlyn, L. Elcoro, J. Cano, M. G. Vergniory, Z. Wang, C. Felser, M. I. Aroyo, and B. A. Bernevig, Topological quantum chemistry, Nature (London) 547, 298 (2017). 
TQC2 H. C. Po, A. Vishwanath, and H. Watanabe, Complete theory of symmetry-based indicators of band topology, Nat. Commun. 8, 50 (2017). TQC3 J. Kruthoff, J. de Boer, J. van Wezel, C. L. Kane, and R-J Slager, Topological Classification of Crystalline Insulators through Band Structure Combinatorics, Phys. Rev. X 7, 041069 (2017). TQC4 H. C. Po, H. Watanabe, and A. Vishwanath, Fragile topology and Wannier obstructions, Phys. Rev. Lett. 121, 126402 (2018). TQC5 T. Zhang, Y. Jiang, Z. Song, H. Huang, Y. He, Z. Fang, H. Weng, and C. Fang, Catalogue of topological electronic materials, Nature (London) 566, 475 (2019). TQC6 M. G. Vergniory, L. Elcoro, C. Felser, N. Regnault, B. A. Bernevig, and Z. Wang, A complete catalogue of high-quality topological materials, Nature (London) 566, 480 (2019). TQC7 F. Tang, H. C. Po, A. Vishwanath, and X. Wan, Comprehensive search for topological materials using symmetry indicators, Nature (London) 566, 486 (2019). QWZ X.-L. Qi, Y.-S. Wu, and S.-C. Zhang, Topological quantization of the spin Hall effect in two-dimensional paramagnetic semiconductors, Phys. Rev. B 74, 085308 (2006). TKNN D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Quantized Hall conductance in a two-dimensional periodic potential, Phys. Rev. Lett. 49, 405 (1982). Bottindex T. A. Loring and M. B. Hastings, Disordered topological insulators via C^⋆-algebras, Europhys. Lett. 92, 67004 (2010). blockmatrix J. R. Silvester, Determinants of Block Matrices, The Mathematical. Gazette 84, 460 (2000). frac:exp1 S. N. Kempkes, M. R. Slot, S. E. Freeney, S. J. M. Zevenhuizen, D. Vanmaekelbergh, I. Swart, and C. M. Smith, Design and characterization of electrons in a fractal geometry, Nat. Phys. 15, 127 (2019). frac:exp2 R. Canyellas, C. Liu, R. Arouca, L. Eek, G. Wang, Y. Yin, D. Guan, Y. Li, S. Wang, H. Zheng, C. Liu, J. Jia and C. Morais Smith, arXiv:2309.09860. frac:exp3 J. Shang, Y. Wang, M. Chen, J. Dai, X. Zhou, J. Kuttner, G. Hilt, X. Shao, J. M. Gottfried, and K. Wu, Assembling molecular Sierpiński triangle fractals, Nat. Chem. 7, 389 (2015). frac:exp4 T. Biesenthal, L. J. Maczewsky, Z. Yang, M. Kremer, M. Segev, A. Szameit, and M. Heinrich, Fractal photonic topological insulators, Science 376, 1114 (2022). frac:exp5 S. Zheng, X. Man, Z.-L. Kong, Z.-K. Lin, G. Duan, N. Chen, D. Yu, J.-H. Jiang, and B. Xia, Observation of fractal higher-order topological states in acoustic metamaterials, Sci. Bull. 67, 2069 (2022). frac:exp6 J. Li, Q. Mo, J.-H. Jiang, and Z. Yang, Higher-order topological phase in an acoustic fractal lattice, Sci. Bull. 67, 2040 (2022). frac:th1 M. Brzezińska, A. M. Cook, and T. Neupert, Topology in the Sierpiński-Hofstadter problem, Phys. Rev. B 98, 205116 (2018). frac:th2 S. Pai and A. Prem, Topological states on fractal lattices, Phys. Rev. B 100, 155135 (2019). frac:th3 A. A. Iliasov, M. I. Katsnelson, and S. Yuan, Hall conductivity of a Sierpiński carpet, Phys. Rev. B 101, 045413 (2020). frac:th4 S. Manna, B. Pal, W. Wang, and A. E. B. Nielsen, Anyons and fractional quantum Hall effect in fractal dimensions, Phys. Rev. Research 2, 023401 (2020). frac:th5 M. Fremling, M. van Hooft, C. M. Smith, and L. Fritz, Existence of robust edge currents in Sierpiński fractals, Phys. Rev. Research 2, 013044 (2020). frac:th6 Z. Yang, E. Lustig, Y. Lumer, and M. Segev, Photonic Floquet topological insulators in a fractal lattice, Light Sci. Appl. 9, 128 (2020). frac:th7 S. Manna, C. W. Duncan, C. A. Weidner, J. F. Sherson, and A. E. B. 
Nielsen, Anyon braiding on a fractal lattice with a local Hamiltonian, Phys. Rev. A 105, L021302 (2022). frac:th8 S. Manna, S. Nandy, and B. Roy, Higher-order topological phases on fractal lattices, Phys. Rev. B 105, L201301 (2022). frac:th9 M. N. Ivaki, I. Sahlberg, K. Pöyhönen, T. Ojanen, Topological Random Fractals, Commun. Phys. 5, 327 (2022). frac:th10 S. Manna and B. Roy, Inner skin effects on non-Hermitian topological fractals, Commun. Phys. 6, 10 (2023). frac:th11 B. Ren, Y. V. Kartashov, L. J. Maczewsky, M. S. Kirsch, H. Wang, A. Szameit, M. Heinrich, and Y. Zhang, Theory of nonlinear corner states in photonic fractal lattices, Nanophoton. 12, 3829 (2023). frac:th12 Y.-B. Yang, J.-H. Wang, K. Li, and Y. Xu, Higher-order topological phases in crystalline and non-crystalline systems: A review, J. Phys.: Condens. Matter 36, 283002 (2024). frac:th13 S. Manna, S. K. Das, and B. Roy, Noncrystalline topological superconductors, Phys. Rev. B 109, 174512 (2024). frac:th14 P. Lai, H. Liu, B. Xie, W. Deng, H. Wang, H. Cheng, Z. Liu, and S. Chen, Spin Chern insulator in a phononic fractal lattice, Phys. Rev. B 109, L140104 (2024). frac:th15 Z. Li and P. Yan, Peculiar corner states in magnetic fractals, Phys. Rev. B 110, 024402 (2024). frac:th16 L. L. Lage, N. C. Rappe, A. Latgé, Corner and Edge States in Topological Sierpinski Carpet Systems, arXiv:2403.13774 frac:th17 M. Amundsen, V. Juričić, J. A. Ouassou, Josephson effect in a fractal geometry, arXiv:2404.01373
http://arxiv.org/abs/2407.13083v1
20240718010513
Modeling and Driving Human Body Soundfields through Acoustic Primitives
[ "Chao Huan", "Dejan Markovic", "Chenliang Xu", "Alexander Richard" ]
cs.SD
[ "cs.SD", "cs.CV", "eess.AS" ]
Acoustic Primitives C. Huang et al. University of Rochester, Rochester, NY, USA Codec Avatars Lab, Meta, Pittsburgh, PA, USA {chaohuang,chenliang.xu}@rochester.edu,{dejanmarkovic,richardalex}@meta.com Modeling and Driving Human Body Soundfields through Acoustic Primitives Chao Huang1 Dejan Marković2 Chenliang Xu1 Alexander Richard2 July 22, 2024 ======================================================================= § ABSTRACT While rendering and animation of photorealistic 3D human body models have matured and reached an impressive quality over the past years, modeling the spatial audio associated with such full body models has been largely ignored so far. In this work, we present a framework that allows for high-quality spatial audio generation, capable of rendering the full 3D soundfield generated by a human body, including speech, footsteps, hand-body interactions, and others. Given a basic audio-visual representation of the body in form of 3D body pose and audio from a head-mounted microphone, we demonstrate that we can render the full acoustic scene at any point in 3D space efficiently and accurately. To enable near-field and realtime rendering of sound, we borrow the idea of volumetric primitives from graphical neural rendering and transfer them into the acoustic domain. Our acoustic primitives result in an order of magnitude smaller soundfield representations and overcome deficiencies in near-field rendering compared to previous approaches. Our project page: <https://wikichao.github.io/Acoustic-Primitives/>. § INTRODUCTION Learning, rendering, and animating 3D human body representations has been a long standing research area with applications in gaming, movies, and more recently also AR/VR. MetaHumans <cit.> and Codec Avatars <cit.> provide highly realistic models and advances in neural rendering have pushed the visual quality to new frontiers <cit.>. Animating full-body models has seen significant progress with the availability of generative models, ranging from pose-based animation <cit.> to audio- and text-driven animation <cit.>. Overall, visual representations of 3D humans these days are of excellent quality and drivable from pose, audio, and text inputs. However, on the acoustic side of the problem, , rendering spatial sound in 3D for these full-body representations, the research landscape looks dire. It has been shown that accurate audio-visual modeling is important for an immersive 3D experience <cit.> but still almost no research exists that would allow to render spatial audio of virtual humans. Analogous to visual full-body models, acoustic full-body models have similar requirements: first, it must be possible to render spatial sounds produced by a virtual human at any position in 3D space, and second, the soundfield needs to be drivable. In this work, we focus on generating and driving full-body soundfields from 3D body pose and head-mounted microphones. This problem has recently been addressed in pioneering work by Xu  <cit.>, who developed a neural soundfield rendering system for full body avatars, driven by body pose and headset microphone input. However, <cit.> has several major limitations: The approach relies on a single high order ambisonic (spherical harmonics) representation that models the sound emitted from the surface of a sphere around the human body, with a diameter of about 2m. Sound can only be modeled outside of this sphere, such that near-field modeling of signals closer to the body is not possible, see <ref>a. 
Moreover, accurate sound reproduction in <cit.> relies on extremely high-order ambisonic coefficients which are expensive to compute and instable to estimate. To get around this instability, <cit.> does not predict the ambisonic coefficients directly, but instead predicts the raw audio signal on 345 positions surrounding the body, and then uses traditional signal processing to compute a 17-th order ambisonic representation from these 345 raw waveforms. This mechanism is computationally inefficient and prevents realtime sound rendering. In this work, we propose a novel sound rendering method based on acoustic primitives which solves the problems of <cit.>: Near-field Rendering. We take inspiration from recent methods in visual neural rendering that rely on volumetric primitives like cuboids <cit.> or Gaussians <cit.>. Instead of modeling the body soundfield by an ambisonic representation on a single sphere around the full human body as in <cit.>, we attach multiple acoustic primitives (small spheres each representing low-order ambisonics) to the 3D human skeleton and model the sound radiating from each of these acoustic primitives separately, see  <ref>b. The full soundfield produced by all primitives together is given by the sum of the individual rendered sound from each primitive. This way, sound can be modeled arbitrarily close to the body. Efficient Soundfield Representation. Instead of a single 17-th order ambisonic representation, we model body sounds by multiple low-order (typically second order) ambisonic primitives. This reduces the number of parameters characterizing the acoustic scene by an order of magnitude and allows for a more compact and efficient soundfield representation. Efficient Rendering. Instead of predicting 345 raw audio signals and relying on traditional, costly high-order ambisonic encoders and decoders, we predict the low-order ambisonic coefficients of each primitive directly. Efficient sound rendering can then be achieved using spherical wave functions as described in <ref>. Drivability. Same as <cit.>, our method can be driven from body pose and a head mounted microphone, 3D soundfields can be generated for novel acoustic input and body motion. Note that this is in stark contrast to its visual counterparts <cit.> which are designed to synthesize novel views of fixed scenes, but are typically not drivable from user input. In summary, we propose an efficient and drivable 3D sound rendering system with * audio-visual driving: given body pose and an audio signal from head-mounted microphones, we can accurately render the soundfield produced by the body (speech, snapping, clapping, footsteps, etc) in 3D; * real-time rendering: the introduction of acoustic primitives allows for efficient real-time rendering of 3D sound scenes; * high quality: although relying only on low-order ambisonic representations, we achieve comparable quality to <cit.> but avoid the high computational cost. § RELATED WORKS Spatial Audio Modeling. Existing works on spatial audio rendering are either based on traditional signal processing and linear filters <cit.> or on more recent neural binaural renderers <cit.>. While these approaches can typically produce spatial audio in an efficient way, they come at strong restrictions, particularly, they need to know the exact location of each sound source to render as well as the clean sound signal for each sound location. 
Such information is available in fully synthetic, artist-created scenes but is usually unknown in real environments and real acoustic scenes. Our approach, in contrast, does not rely on such knowledge and implicitly learns to separate an aggregated audio signal into its distinct sources (the acoustic primitives) through inverse acoustic rendering. More recently, data-driven methods aim to produce binaural audio from audio-visual input information. Chen  <cit.> propose a system to render a pre-recorded 3D acoustic scene from novel viewpoints, however, this method can not handle new acoustic scenes. Liang  <cit.> propose to reconstruct the 3D audio-visual scene from videos, but the scene is static. Gao  <cit.> analyze a 2D image of a visual scene to generate binaural audio for a given monaural sound source. Note that this approach can only deal with single sound sources and can't handle complex 3D sound scenes. In <cit.>, the authors propose a method to generate spatial audio from mono acoustic input and 360 degree camera inputs. They demonstrate that their approach can correctly localize sound sources in the scene and generate correct spatial audio at a coarse resolution. Primitives in Volumetric Rendering. Neural rendering has been revolutionized by volumetric rendering methods like neural volumes <cit.> and neural radiance fields <cit.>. Follow-up works build on volumetric primitives such as cuboids <cit.> or Gaussians <cit.> and render via ray-marching or splatting. These technologies have unlocked real-time rendering for animatable avatars <cit.>. We borrow the idea of volumetric primitives and transfer them from the visual domain into the acoustic domain to build an efficient and low-parameter characterization of soundfields. Audio-Visual Learning. Audio-visual learning has been widely applied to find connections between acoustic and 2D visual signals, in audio-visual localization <cit.>, for source-separation <cit.>, or to learn associations from 360-degree videos <cit.>. Application of audio-visual learning to 3D settings is mostly limited to 3D visual scenarios, such as audio-visual driving of avatars <cit.> or audio-driven gesture synthesis <cit.>. While these works use audio-visual input to learn information about a scene, they operate on visual outputs and do not model acoustic scenes. Most closely related to our work is <cit.>, who address the same task we address in this work. However, as outlined above, <cit.> has some significant drawbacks such as the inability to render near-field audio, or the lack of real-time rendering capabilities. § POSE-GUIDED SOUNDFIELD GENERATION USING ACOUSTIC PRIMITIVES §.§ Problem Definition Let 𝐚_1:T_a (a_1,…,a_T_a) be the input audio signal from one or multiple head-mounted microphones, and 𝐩_1:T_p = (p_1,…,p_T_p) be the corresponding sequence of 3D body pose, where each p_t ∈ℝ^J × 3 is a vector containing 3D body joint coordinates. We aim to predict a sound signal 𝐬 at an arbitrary 3D position (r, θ, φ). Note that we use spherical coordinates. In other words, we learn to aim a mapping from 3D body pose, headset audio signal, and a 3D position in space to the audio signal at that 3D spatial position, 𝐚_1:T_a, 𝐩_1:T_p, (r, θ, φ) ↦𝐬_1:T_a. The core challenge is how to get training data to learn such a model. It is impossible to place microphones at all positions in 3D space to get dense sampling of the space. 
Instead, we follow the strategy of <cit.> (and actually use the same public dataset for our work) and sample soundfield signals 𝐬_1:T_a only on a sphere around the human body. This poses the challenge of rendering the soundfield at positions that are not on the surface of the sphere on which data has been captured. Note the analogy to graphics neural rendering: approaches like NeRF <cit.> also don't have spatially dense samples of 2D images taken from a scene, yet through inductive biases such as the rendering equation, they succeed in synthesizing the 3D scene from any novel viewpoint. We apply the same strategy for audio, and learn the mapping in Eq. (<ref>) by differentiation through the wave propagation function, which we explain in the next section. §.§ Sound Radiation using Spherical Wave Functions The general solution of the homogeneous, time-dependent wave equation in the spherical coordinate system is given by <cit.>: 𝐰(t, f, r,θ,φ) = ∑_n=0^∞∑_m=-n^n[ b_nm(t, f) · j_n (kr) + c_nm(t, f) · h_n (kr) ] · Y_nm (θ, φ), where (r, θ,φ) are arbitrary coordinates inside a source-free region, t and f denote the time and frequency (we will omit them in the following sections for clarity), and k = 2π f / v_sound is the corresponding wavenumber; Y_nm (θ, φ) represents the spherical harmonic of order n and degree m, and j_n (kr) and h_n (kr) are, respectively, nth-order spherical Bessel and Hankel functions. Coefficients b_nm(t, f) and c_nm(t, f) describe, respectively, incoming and outgoing waves. In particular, considering the scenario depicted in <ref>a, only the radiating field component is present, b_nm = 0, which in literature is known as exterior domain problem <cit.>. Given recorded or predicted microphone signals on the surface of the sphere surrounding the body, SoundingBodies <cit.> approaches the sound field modeling task as a traditional exterior domain problem and estimates the sound field coefficients c_nm(t, f). While the general solution requires an infinite number of harmonic orders, the practical estimates are limited by the available number of microphones M as N =√(M)-1, 𝐰̂(r,θ,φ) = ∑_n=0^N∑_m=-n^nĉ_nm· h_n (kr) · Y_nm (θ, φ). This reliance on a generic solution of the exterior domain problem limits the practicability of <cit.>. While the network uses pose information to predict microphone signals, the successive DSP processing of these signals required for spatial sound rendering (binauralization) does not leverage pose information at all. This brings two main issues: * To model sound sources located further away from the center of the representation, see R_0 in <ref>a, higher harmonic orders are needed, which in turn requires the network to predict a high number of microphones before any spatial rendering can be performed; predicting a smaller number of signals would limit the harmonic order and, consequently, the rendered scene would collapse towards the center. * The wave equation solution is valid only outside the boundary surface encompassing all sources of sound, as shown in <ref>a. Inside this region, the high harmonic orders produce chaotic results, limiting the minimum distance at which the scene can be rendered and experienced. To address the above issues we take a different approach. Instead of using pose-conditioned network to predict microphone signals, we use the network to predict sound field coefficients directly. 
Furthermore, instead of trying to estimate a generic high-order sound field representation, we leverage the knowledge of possible positions of sound, given by the body pose, and model the sound radiation as a superposition of several small-order elementary sound fields originating from different positions of the body as depicted in <ref>b. Similarly to Eq. (<ref>), the sound pressure produced by a single elementary field of order N is given by 𝐰(r,θ,φ) = ∑_n=0^N∑_m=-n^n(c_nm· h_n (kr_ref)) ·h_n(kr)/h_n(kr_ref) · Y_nm (θ, φ) = ∑_n=0^N∑_m=-n^nc_nm·h_n (kr)/h_n(kr_ref) · Y_nm (θ, φ), where we use h_n(kr_ref) with r_ref=0.5m for numerical stability of the learning process, and refer to harmonic coefficients 𝒮 = [c_00,...,c_NN] as an acoustic primitive. Given Eq. (<ref>), we can translate the task of modeling 3D spatial sound for the visual body to learning a set of small acoustic primitives {𝒮_i}_i=1^K, which we choose N up to the second order for harmonic coefficients and set the number of acoustic primitives as K. In practice, capturing ground truth sound field coefficient is infeasible, while the microphone signals received on the surface of the dome are available with prior efforts by <cit.>. Since the produced sound pressure 𝐰(r,θ,φ) indeed represents the audio signal produced at spherical position (r,θ,φ), we can therefore decompose the entire learning process into two sub-steps: * Learning Acoustic Primitives. The main objective of this step is to design a neural network ℱ that consumes audio and pose data as input, and output the sound field representation {𝒮_i}_i=1^K = ℱ(𝐚_1:T_a, 𝐩_1:T_p). * Rendering Audio with Learned Acoustic Primitives. With the learned acoustic primitives {𝒮_i}_i=1^K, we leverage Eq. (<ref>) as a differentiable rendering function, denoted as ℛ, to generate the audio waveform received at the target position 𝐬̂_1:T_a(r, θ,φ) = ℛ({𝒮_i}_i=1^K, r, θ,φ). The hyperparameter in ℛ is fixed once the harmonic order N and the number of primitives K are initialized, and all the operations in ℛ are differentiable, making it feasible to run end-to-end training. With the training data tuple (𝐚_1:T_a, 𝐩_1:T_p, 𝐬_1:T_a(r, θ,φ)) that includes recorded microphone signal at position (r, θ,φ), we can learn a pose-guided acoustic primitive synthesis system by simply optimizing the loss between 𝐬̂_1:T_a and 𝐬_1:T_a. §.§ Multimodal Feature Encoding Pose Encoder. Human body movements offer crucial clues for how sound is distributed in space. To capture these rich spatial cues, we employ a pose encoder that processes the input pose sequence 𝐩_1:T_p. The pose input is first encoded into a latent feature representation. To capture temporal relationships, we apply two layers of temporal convolutions with a kernel size of 5. Finally, we concatenate the encoded features for all joints and use an MLP to create a compact representation, denoted as f_p ∈ℝ^C_p × T^'_p. Here C_p is the number of feature channels and T^'_p represents the temporal dimension after convolution. More details are provided in the supplementary material. Audio Encoder. While sound can originate from various points on the body (, hands, feet), it's captured by the headset microphone located at a central position near the head. This difference in location creates a slight time delay between the moment the sound is produced and when it's actually recorded. Previous research has shown that compensating for this time delay can be beneficial <cit.>. 
In our approach, we leverage the pose features as guidance and use an MLP (as shown in <ref>) to estimate the delay for each acoustic primitive attached to a body joint, and time-warp the audio signal accordingly. Via STFT, the warped signals are then transformed into complex spectrograms X^c_a ∈ℝ^C_h × F × T where F and T represent the number of frequency and time bins respectively, and C_h is the number of audio channels. The resulting audio features are then encoded with a network consisting of convolutional- and LSTM layers to capture both local context and long-range dependencies within the audio data. The encoder architecture utilizes four layers, where each layer contains two ResNet blocks, a temporal LSTM block, and a downsampling block with a factor of 2. Eventually, we can obtain the latent audio features f_a = E_a(X_a) ∈ℝ^C_a ×F/16×T/16. Audio-Pose Feature Fusion Module. While headset audio reveals the content of sounds (, finger snapping), it lacks precise spatial information about the source. Conversely, body pose offers strong spatial cues about joint locations, but cannot identify the sound type (, speech) solely from pose data. Therefore, effectively combining audio and pose features is crucial for learning acoustic primitives and determining their contribution to the final sound generation. We first interpolate the pose features f_p to match the temporal size of the audio features f_a. We then employ a lightweight fusion module with two ResNet blocks and one attention block to combine the concatenated audio and pose features, resulting in a new representation denoted as f_ap∈ℝ^C_a ×F/16×T/16. §.§ Acoustic Primitive Decoding As described in <ref>, an acoustic primitive determines the audio heard at arbitrary coordinates inside its sound field. It considers factors like the harmonic coefficients, the center coordinate of the primitive, and the target location where the sound is perceived. In this work, we focus on generating sound based on the target location. This translates to learning two key components: the primitive's coordinates and the harmonic coefficients. However, this is non-trivial as the primitive's location changes dynamically as the body moves. Additionally, the harmonic coefficients must capture not only the sound content but also the spatial cues such as sound directivity. In the next section, we will explain how our approach addresses these challenges. Sound Field Decoder. Leveraging the fused features f_ap, our decoder D_a simultaneously generates the sound field representations for all the acoustic primitives. Similar to the input spectrograms X_a, the harmonic coefficients have the same spatial dimensions but differ in the number of channels, which encode the richness of the sound's spatial information. A higher number of channels allows for more precise control over the perceived location of the sound. For simplicity, we design the decoder to resemble the audio encoder and add skip connections between the encoder and decoder, with the main difference being the number of output channels. The decoder outputs (N+1)^2 × K channels. Here, N represents the order of the harmonic coefficients, which controls the level of detail captured in the spatial representation. Finally, we separate the decoder's output into K distinct harmonic coefficients {𝒮_i}_i=1^K, one for each acoustic primitive. Primitive Offsets. We initialize acoustic primitives to be at body joint locations, at the wrists, face and ankles. 
While the initial 3D coordinates of these body joints provide a reasonable starting point, the actual locations of the sound sources might differ slightly from the body joint positions. For example, the chosen keypoints represent wrists, but the sound of finger snapping originates from the fingers themselves. This discrepancy between the body joint and sound production locations can affect the learning process and the accuracy of the rendered spatial audio. To address this limitation, we learn offsets for the initial coordinates to better represent the actual positions of acoustic primitives. In practice, we employ a three-layer MLP network that operates on the fused features f_ap. First, we apply mean pooling along the frequency axis of f_ap but keep its time dimension, obtaining f̅_̅a̅p̅. Then, we generate the offsets by Δ(x,y,z) = σ·tanh(MLP_offset(f̅_̅a̅p̅)). To constrain the predicted offsets within a reasonable range, we use a tanh activation function and apply a scaling factor of σ = 0.2 to restrict the offsets to a maximum range of 20 centimeters around the initial locations. Primitive Weights. At different points in time, primitives have different importance. For instance, when a finger is snapped, the hand primitive emits high energy sound while other primitives emit at most low energy. The relationship is a function of the input audio and body pose. We therefore explicitly model the weight of each primitive as a function of the combined audio and pose encodings f̅_̅a̅p̅, W = softmax(MLP_weight(f̅_̅a̅p̅)). W is the predicted weight for each primitive at each time instance and indicates the relative influence in the final rendered sound. §.§ Differentiable Acoustic Primitive Renderer Given the initial primitive locations (the joint locations to which the primitives are attached), the learned offsets, harmonic coefficients, and weights, we can now render the sound field for each primitive (as shown in <ref>). We first compute the primitive's predicted location by adding the learned offsets to the corresponding body joint location. Now, given a listener position in 3D space at which we want to render the sound, we transform each primitive's predicted location into spherical coordinates (r_k, θ_k, φ_k) representing the relative position of the listener with respect to each of the K primitives. We now use the differentiable audio renderer from Eq. (<ref>) to render the audio signal ŝ_1:T_a^k produced by the k-th primitive at the listener's position, ŝ_1:T_a^k = ℛ(𝒮_k · W_k, r_k, θ_k, φ_k), and obtain the full sound field by summation over all acoustic primitives, ŝ_1:T = ∑_k^Kŝ_1:T^k. §.§ Loss Function Since our renderer ℛ is differentiable, it allows us to efficiently train the model using loss functions on the final predicted waveforms. In this work, we employ a multiscale STFT loss <cit.> between the predicted audio ŝ_1:T_a and the ground truth audio 𝐬_1:T_a on their amplitude spectrograms, denoted as ℒ_amp(ŝ_1:T_a,𝐬_1:T_a) and on the real and imaginary parts of spectrograms, denoted as ℒ_ri(ŝ_1:T_a,𝐬_1:T_a). The window sizes are set as 2048, 1024, 512, 256. As proposed in <cit.>, a shift-ℓ1 loss helps reduce the spatial alignment error. We therefore add this loss term as ℒ_sℓ1(ŝ_1:T_a,𝐬_1:T_a). Additionally, determining a primitive's contribution to the final sound (corresponding to the primitive weights W) can be challenging without additional guidance. 
To overcome this, we leverage clip-level labels (denoted by y∈ℝ^K) that specify which body joint contributes to the received audio. We apply average pooling along the frequency dimension and find the maximum value of W for each primitive across all time steps, resulting in W∈ℝ^K. This essentially summarizes whether an acoustic primitive has contributed to the sound in the audio clip. Finally, a simple cross-entropy loss function ℒ_cts(W, y) is employed to aid in the learning process. Our final loss becomes ℒ_total = λ_ampℒ_amp+λ_riℒ_ri+λ_sℓ1ℒ_sℓ1+λ_ctsℒ_cts. Please refer to the supplementary materials for ablation on the loss terms. § EXPERIMENTS §.§ Experimental Setting Dataset. To evaluate our approach, we leverage the publicly available dataset introduced in  <cit.> [<https://github.com/facebookresearch/SoundingBodies>][Note: data used in the paper and data released publicly differ by 1.5 subjects (8 subjects used in <cit.> vs 6.5 publicly released). We updated the performance metrics of the baseline <cit.> to account for this difference.]. The dataset captures synchronized audio and visual data in an anechoic chamber, offering multimodal data specifically designed for speech and body sound field modeling research. It utilizes 5 Kinect sensors for body tracking and a large microphone array (345 microphones) arranged in a spherical fashion around the recording area. The data encompasses various participants performing a diverse range of body sounds and speech in different settings (, standing or sitting). The recordings are segmented into non-overlapping one-second clips. We adopt the same train/validation/test splits established by <cit.>, resulting in 10,076/1,469/1,431 clips, respectively. Implementation Details. In our experimental setup, we employ a sampling rate of 48 kHz for audio signals and a frame rate of 30 fps for body pose data. The audio waveforms are converted into complex spectrograms using a Hann window of size 512 and a hop length of 128 and FFT length of 1022. Within the encoders, both the pose features f_p and audio features f_a are configured to have the same channel size C_a = C_p = 256. We set the order of harmonic coefficients to N=2. During training, the batch size is set as 1 per GPU and we randomly select 20 microphones from the available pool of 345 target microphones for each forward pass. The AdamW optimizer with a learning rate of 0.0002 is used, and the network is trained for 100 epochs. To balance different loss terms, we set the weights λ_amp=7, λ_ri=3, λ_sℓ1=0.5, and λ_cts=1. The experiments are conducted on 4 NVIDIA Tesla A100 GPUs, with model training for 100 epochs taking approximately 55 hours to complete. Evaluation Metrics. We evaluate the performance of our model using three main metrics: the signal-to-distortion ratio (SDR), the ℓ_2 error on the amplitude spectrogram, and the angular error of the phase spectrogram. The SDR measures the overall quality of the reconstructed sound, with higher values indicating better quality. The amplitude error shows how well the reconstructed sound matches the original in terms of the distribution of sound energy, while the angular error evaluates the timing accuracy of the reconstructed sound waves relative to the original. We report amplitude errors multiplied by a factor of 1000 to remove leading zeros. §.§ Comparison with Baseline We compare our method using 12 acoustic primitives of 2nd order with the SoundingBodies <cit.> baseline. Results are shown in  <ref>. 
We can observe that the sound field modeling performance of the proposed method is comparable to <cit.> while having a much faster inference speed. In particular, the proposed method even performs better than the baseline on the SDR metric for non-speech sounds and the phase metric for speech. Regarding the inference speed, we show the average time needed to compute 1 second of audio at 48 kHz. Note that for <cit.> we only report the time needed for the network to predict the microphone signals. In a practical scenario <cit.> also needs DSP processing of these microphone signals to obtain the high-order sound field representation, which would further increase the overall processing overhead. §.§ Ablation Study In this section we evaluate the impact of the number of acoustic primitives and their harmonic order. Zero-order harmonics are able to model only omnidirectional fields, while higher orders allow for increasingly complex radiation patterns. Note that we limit the maximum order to 2 given that PyTorch implementations of spherical wave functions are available only up to the second order. Intuitively, a higher number of acoustic primitives allows for modeling of more complex overall sound fields. We test three configurations of acoustic primitives: 5 primitives: head, L/R hand, L/R foot; 9 primitives: head, L/R hand, L/R foot, L/R shoulder, L/R hip; and 12 primitives, where the head and hands have two primitives associated with the same key-point (given the location offsets, these primitives are not bound to be in the same location, allowing approximation of a higher-order radiation pattern). Results are shown in <ref>. In general, both a higher number of primitives and a higher primitive order improve the performance, as expected. This is especially true for speech. For body sounds, on the other hand, increasing from 1st to 2nd order does not seem to be beneficial. We also evaluate the model without the primitive offset adjustment. From <ref> we can observe that removing the offset has a similar impact as decreasing the number of primitives from 12 to 9, which intuitively makes sense given that repeated primitives collapse to the same key-point location. For more experiments, such as ablation on the loss terms and visualizations with different harmonic orders, please refer to the supplementary materials. §.§ Qualitative Results Some sound field visualization examples are shown in <ref>. We can observe that the network is able to correctly associate different kinds of sounds with the appropriate acoustic primitives. We can also observe the speech radiation pattern matching the head orientation. Furthermore, <ref> shows predicted and ground truth waveforms at different microphone locations. We can observe good temporal alignment and amplitude match for most cases. One exception is the body tapping sound, in which the amplitude does not match across different microphones. This may be due to the primitive struggling to match a highly variable radiation pattern. § CONCLUSION We propose a neural rendering system for sound that allows us to generate and render 3D sound fields from sparse user input like body pose and headset audio. We demonstrate that we maintain similar quality to state-of-the-art sound rendering, while improving significantly on speed and soundfield completeness: our approach is an order of magnitude faster than the approach from <cit.> and is capable of rendering sound in the near-field, close to the transmitter's body, where the previous approach from <cit.> failed. 
Moreover, we want to highlight the design similarities to successful neural renderers from computer graphics: by leveraging an acoustic rendering equation and acoustic primitives, similar to leveraging volumetric primitives in graphical neural rendering, we design a 3D spatial audio system with a conceptual duality to its visual counterpart. We hope this work will impact sound rendering in 3D settings like computer games and AR/VR. Limitations. Despite the promising results in terms of quality and efficiency, our approach is still far from broad availability: model training relies on data collected with a multi-microphone capture stage that is not broadly available. Future directions need to aim at enabling the learning of such acoustic scenes with simpler setups, ideally with commodity hardware like smartphones. Generalization beyond human bodies is another natural extension that emerges from the availability of broader data sources for spatial sound. Potential Societal Impact. Ethical and societal risks in this work are low since no data is manipulated in a generative fashion; pure spatialization has little potential for harmful actors. Supplementary Materials for “Modeling and Driving Human Body Soundfields through Acoustic Primitives” July 22, 2024 ===================================================================================================== § RENDERING AUDIO WITH LEARNED SOUNDFIELD As illustrated in Sec 3.2, the sound pressure (i.e. the audio signal) produced by a learned soundfield of order N is given by 𝐰(r,θ,φ) = ∑_n=0^N∑_m=-n^n(c_nm· h_n (kr_ref)) ·h_n(kr)/h_n(kr_ref) · Y_nm (θ, φ) = ∑_n=0^N∑_m=-n^nc_nm·h_n (kr)/h_n(kr_ref) · Y_nm (θ, φ), where k = 2π f / v_sound is the corresponding wavenumber; Y_nm (θ, φ) represents the spherical harmonic of order n and degree m, which is Y_nm (θ, φ) ≡√((2n+1)/(4π) · (n-m)!/(n+m)!) P_nm(cosθ)e^imφ, P_nm(z) is an associated Legendre polynomial, and h_n(kr) is the nth-order spherical Hankel function. All the functions are implemented with PyTorch and are therefore fully differentiable. In this paper, we choose spherical harmonics up to second order and showcase them in <ref>. Similarly to Y_nm (θ, φ), the learned soundfield (acoustic primitive) can also be decomposed into a series of spherical harmonic functions, each representing a different spatial component of the soundfield. We demonstrate the decomposition process in <ref>. Our learned soundfield representation is enforced to express the same spatial information as spherical harmonics because each predicted acoustic primitive has (N+1)^2 channels, which is equivalent to the number of spherical harmonics. § POSE FEATURE ENCODING To capture these rich spatial cues, we employ a pose encoder that processes the input pose sequence. This sequence, denoted as 𝐩_1:T_p, contains the 3D coordinates of body joints for each frame 𝐩_t∈ℝ ^ J × 3. However, since these coordinates are captured from a third-person perspective, they might not fully capture the spatial relationship relevant to the audio, where the sound originates from the body but is received at the headset. To address this, we enhance the pose input by selecting the head joint 𝐩^h_t as an anchor and calculating relative coordinates and Euclidean distances. 
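A minimal sketch of this head-relative augmentation (the tensor shapes and the head-joint index are assumptions made here for illustration, not taken from the authors' code) is:

```python
import torch

def extend_pose(pose, head_idx=0):
    """pose: [T, J, 3] joint coordinates. Returns [T, J, 7] features made of the
    original coordinates, the coordinates relative to the head joint, and the
    Euclidean distance of each joint to the head joint."""
    head = pose[:, head_idx:head_idx + 1, :]     # [T, 1, 3]
    rel = pose - head                            # [T, J, 3]
    dist = rel.norm(dim=-1, keepdim=True)        # [T, J, 1]
    return torch.cat([pose, rel, dist], dim=-1)  # [T, J, 7]
```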
This extended pose input [𝐩_t, 𝐩_t - 𝐩^h_t, dist(𝐩_t-𝐩^h_t)] ∈ℝ ^ J × 7, consisting of the original coordinates, the coordinates relative to the head, and the distance from the head, provides the pose encoder with a more comprehensive understanding of the body's spatial relationship with the sound. Details are shown in <ref>. § MORE ABLATIONS Visualization of primitive offsets. In <ref>, we observe a time delay between the predicted and GT audio for the model without offsets, likely caused by inaccurate primitive coordinates. In contrast, our model with learned offsets mitigates this issue, resulting in a closer match to the ground truth audio. Also, we visualize the sound fields generated by our framework for different primitives after applying the learned offsets. We observe that the learned offsets generally match the location where we would expect the sound source to be for the particular sound event, such as a snap, clap, or footstep. Visualization of different harmonic order. <ref> illustrates the impact of sound field order on the accuracy of predicted audio. As shown, the model's prediction using a 2nd-order sound field exhibits a closer match to the GT audio in terms of amplitude. This is because higher-order harmonics offer finer spatial rendering capabilities, allowing the model to capture more precise directional details of the sound. In contrast, the predicted 0th-order sound field is omnidirectional, meaning it radiates sound equally in all directions. This limitation hinders its ability to encode specific spatial information, resulting in less accurate audio amplitude prediction. Ablation on the choices of loss function. In <ref>, we conduct an ablation study to investigate the effectiveness of each loss term in the total loss function (Eq. (11) in the main paper). We remove each loss term from the total loss ℒ_total one at a time. The results show that including ℒ_cts improves the overall performance on both speech and non-speech data. Combining all the loss terms yields the best or second-best performance across the different metrics for both speech and non-speech data, and generally the best performance on average over the metrics. § DEMO VIDEO We have prepared a supplementary video to visually demonstrate the capabilities of our method in spatial audio rendering. The video showcases a full-body avatar producing correctly spatialized binaural audio corresponding to various actions and interactions using our trained model. In particular, the input to the audio system is a single-channel mono audio signal that contains the mixture of all sounds being made. Our model can render them at the correct spatial locations using the wearer's body pose. This means that the wearer can clap left, clap right, applaud, snap around, and the sounds will be appropriately positioned. Additionally, the system works with objects that the wearer may be using, such as an egg shaker. § ACKNOWLEDGEMENT The authors would like to thank Frank Yu for engineering work on the VR demo.
http://arxiv.org/abs/2407.13474v1
20240718124747
Examining inverse generative social science to study targets of interest
[ "Thomas Chesney", "Asif Jaffer", "Robert Pasley" ]
stat.CO
[ "stat.CO" ]
Thomas Chesney (Nottingham University Business School, Wollaton Road, Nottingham, NG8 1BB, UK), Asif Jaffer (Karachi Institute of Business Administration, University Road, Karachi City, Sindh 75270, Pakistan), Robert Pasley (Nottingham University Business School, Wollaton Road, Nottingham, NG8 1BB, UK). Authors listed alphabetically. § ABSTRACT We assess an emerging simulation research method—Inverse Generative Social Science (IGSS) <cit.>—that harnesses the power of evolution by natural selection to model and explain complex targets. Drawing on a review of recent papers that use IGSS, and by applying it in two different studies of conflict, we here assess its potential both as a modelling approach and as formal theory. We find that IGSS has potential for research in studies of organisations. IGSS offers two huge advantages over most other approaches to modelling. 1) IGSS has the potential to fit complex non-linear models to a target and 2) the models have the potential to be interpreted as social theory. The paper presents IGSS to a new audience, illustrates how it can contribute, and provides software that can be used as a basis of an IGSS study. Keywords: agent-based modeling, genetic programming, simulation, conflict model. § INTRODUCTION In response to <cit.> calling for a refresh of simulation studies in organisation studies, we assess an emerging simulation research method—Inverse Generative Social Science (IGSS) <cit.>—that harnesses the power of evolution by natural selection to model and explain complex targets. IGSS has the ability to explore a vast search space of variables and their relationships, seeking non-linear models to fit to data. More importantly, the best models found have the potential to act as formal theory and as such bring with them the many advantages thereof, while also answering calls for more formal theory in studies of organisations <cit.>. Drawing on a review of recent papers that use IGSS, and by applying it in two different studies of conflict, we here assess its potential both as a modelling approach and as formal theory. IGSS marries the power of evolutionary computing <cit.>—specifically here we use genetic programming <cit.>—and agent-based modelling <cit.>. Genetic programming is used to evolve micro-specifications—hereafter referred to as rules—that, when inserted into an agent model's code, dictate individual agents' behaviour. When such a model is run, if it generates macro behaviours that have been observed in a target of interest then, under the paradigm of generative or `bottom up' social science <cit.>, that target is considered to have been explained. We evaluate this view in Section <ref>. Before that, in Section <ref> we outline IGSS. Then in Section <ref> we present two IGSS studies of conflict. A discussion of the work is offered in Section <ref>. § OVERVIEW A typical IGSS study will progress as follows. There exists a target for which an explanation is sought. There also exist data on that target—or such data is collected—called the reference dataset. The reference dataset is input to software that evolves rules using genetic programming. The goal is to find rules that when implemented and run in an agent-based model will simulate the target such that it behaves as described in the reference dataset. This progression is unpacked over the following paragraphs. The target is a phenomenon or situation of interest that is being studied. 
The reference dataset stores observations on it capturing some interesting aspect of how it behaves. It is this behaviour that is to be explained by evolved micro rules. The reference dataset is quantitative and contains variables that have been identified as being theoretically relevant, or at least are suspected of such. This is the same as any quantitative research dataset. Indeed the reference dataset may have been collected previously for another study. Agent-based modelling and genetic programming are well known in the organisations and decision support literature (see for example: <cit.>). Agent-based modelling simulates a target allowing it to be experimented on, explored and observed <cit.>. Genetic programming is the automatic evolution of computer code <cit.>. To combine them requires genetic programming to evolve rules that will then be used to dictate agent behaviour in an agent model. (In addition the agent model will likely be required during evolution to test evolved rules for fitness.) Genetic programming starts with a population of random (and therefore almost certainly meaningless) rules–the first generation of rules. For example: * IF agentAttribute1 > agentAttribute2 THEN behaviour1 * IF agentAttribute3 == globalVariable1 THEN behaviour2 * IF agentAttribute1 / agentAttribute2 < agentAttribute3 THEN behaviour1 * IF agentAttribute4 != globalVariable2 THEN behaviour3 In this example Behaviours 1, 2 and 3 are each one possible action that an agent might take. Agent attributes are variables such as an agent's age or a summary of the situation they are currently in, perhaps the number of other agents that are close by. Global variables hold the same value for all agents. Examples could include a count of the number of agents, the current simulated weather, or the legal regulations under which the agents are operating. Next each of the rules is tested for fitness. Fitness is the answer to the question: if agents follow this rule, how close will the output of the agent model be to the reference dataset? A common fitness metric is the squared difference between agent model data and reference data. It could be (and often will be) that to calculate fitness the agent model must be run which can make the process of completing an IGSS study take many weeks of computer processing. Three genetic operations are then used repeatedly to create the next generation of rules: * One of the rules (chosen with probability proportional to its fitness, so most likely a rule with high fitness) is allowed to reproduce and simply moves into the next generation. * One of the rules (again one most likely that has high fitness) mutates randomly and the mutated rule moves into the next generation. A mutation means that part of the rule is changed at random. * Two of the rules (as before two that most likely have high fitness) breed and the two child rules move into the next generation. This breeding is known as crossover which means that part of the first rule and part of the second rule are swapped with each other. These genetic operations allow for the essential ingredients of evolution to take place: reproduction, mutation and natural selection, which are known to be powerful tools for finding solutions <cit.>. The second generation of rules is then tested for fitness. A third generation is created as the second was and so on until a stopping condition is reached. 
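As a concrete illustration of this loop, a stripped-down sketch in Python is given below. The authors' own implementations (discussed later) are written in R and NetLogo; the rule representation, the squared-difference fitness, and the operator probabilities used here are illustrative assumptions only.

```python
import random

def evolve_rules(random_rule, mutate, crossover, run_agent_model, reference,
                 pop_size=100, generations=50):
    """Minimal genetic-programming loop following the steps described above.
    random_rule, mutate and crossover define the rule representation; run_agent_model
    runs the agent model under a rule and returns output comparable to reference."""
    def fitness(rule):
        # negative squared difference between agent-model output and the reference dataset
        output = run_agent_model(rule)
        return -sum((o - r) ** 2 for o, r in zip(output, reference))

    population = [random_rule() for _ in range(pop_size)]  # first, random generation
    for _ in range(generations):                           # until the stopping condition
        scores = [fitness(rule) for rule in population]
        lowest = min(scores)
        weights = [s - lowest + 1e-9 for s in scores]      # fitness-proportional selection

        def pick():
            return random.choices(population, weights=weights, k=1)[0]

        next_gen = []
        while len(next_gen) < pop_size:
            op = random.random()
            if op < 0.1:                                   # reproduction
                next_gen.append(pick())
            elif op < 0.4:                                 # mutation
                next_gen.append(mutate(pick()))
            else:                                          # crossover: two children
                next_gen.extend(crossover(pick(), pick()))
        population = next_gen[:pop_size]
    return max(population, key=fitness)
```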
So far few IGSS studies have been conducted but all have used the approach outlined above with small variation (the most notable of which is probably using genetic algorithms rather than genetic programming but the overall goal is the same). <cit.> investigate flocking behaviour in birds. Their reference dataset comes from a single run of an `ideal' agent-based model which was written based on observations of birds in flight <cit.>. <cit.> studying field irrigation decisions by farmers use reference data collected from an experiment using human participants. <cit.> use genetic programming to match survey data in a study of alcohol consumption. Creating a suitable test for fitness is often a challenging part of an AI effort but here IGSS has a huge advantage. Given that the goal is matching—not exceeding or extrapolating from or any other goal that might be assigned to an AI—the reference dataset, the necessary existence of a reference dataset means the fitness function can often simply be some measure of closeness, possibly Euclidean distance or as was suggested earlier, mean squared error. This overview glosses over a lot of complexity–conducting an IGSS study is not a trivial matter. For example <cit.> test fitness against multiple objectives, not just one. The IGSS modeller must also develop the agent model that will use the evolved rules and decide on parameters such as the agent model's scale, what type of agents to include and the numbers of each, other initial parameter settings, and environment settings such as size and shape. The modeller may also need to make decisions about the discrete time nature of agent-modelling, specifically the point at which fitness is tested. This decision is very much context dependent. The bird flocking model of <cit.> for example is neverending: the agent birds are constantly flying with no end destination and therefore fitness can be tested at any point. This is not the case in the study by <cit.> of alcohol use and they test for fitness at a set point in time. Then after all of this, the final rule or rules that have been produced will need to be pruned and interpreted. § CODE AS THEORY We will use two phrases later that will be useful to define upfront. Language of the target refers to descriptions of behaviour using terms that have meaning in the context of the target situation. If the target is salary negotiation, the language of the target might include `wages', `hardball', `play hard to get' and `hold your nerve'. Language of the target theory refers to descriptions of behaviour using terms that have meaning in the theory being used to explain the target. This time for salary negotiations these might include `shared pot', `utility', `bounded choice', `rational' and so forth. The word theory in social science is very imprecise, something that has long been criticised <cit.>. Table <ref> shows some of the ways it is used and the list is certainly incomplete. It is unlikely that this paper—or any other—will convince readers to agree on a meaning for theory, but we can make clear the definition that we use: theory is an explanation of data. Those data come from observing a target. Theory abstracts out elements of interest, as defined by the researcher, from a target and explains them. The theory will not attempt to explain more than these elements. (This is one reason why theory can be so poor when used for prediction.) Theory can appear in many formats. One format that will be relevant to our discussion later is computer code. 
(Again, perhaps not all readers will agree with this but a justification will be given at that time.) The purpose of a social scientist is to contribute to theory–to explain. In most scientific endeavours, theory appears twice. When using an agent model as a research method, the first is that theory is used to develop a model, to dictate what micro behaviour is implemented. It is however rare that theory will be available to guide all micro behaviour. To make a simulation work, often additional behaviour—possibly even from additional theories—will have to be melded together. Discussing this in relation to using simulations for social science, <cit.> say: The use of such terms as `theory', `framework', `model', and `paradigm' in psychology and the social sciences is as informal as the models themselves. One person`s conceptual model is another person`s theory or framework...In psychology and the social sciences, theorizing about a problem typically begins with verbal conceptual models, which then may be elaborated and adjusted over time as relevant empirical data accumulate. Formal mathematical models, computational models, statistical models, etc. rely on verbal conceptual models to specify variables and relations among them, although a host of extra assumptions and plausible estimates are typically needed to translate a verbal theory into a workable implementation. The exception to this is exploratory research where theory is not used, or not relied on as much, in building a model. Instead, when used in agent modelling, researchers are keen to explore different ideas they have for what might be going on in a target. These ideas do not have to be grounded firmly in theory and are implemented as micro behaviour to allow a researcher to observe whatever macro behaviour emerges from them. The second time theory appears in scientific endeavours is as an output. The contribution a scientist makes is to theory–developing a new theory; adapting, confirming, or validating an existing theory. Theory is both an input and an output; theory goes into science, theory comes out of science. Although rare in social science, it is possible that theory will be formal <cit.>. It is recognised that formalising theory brings many advantages <cit.>. Formats for this include formal logic, mathematics, and computational models (which because they are used to simulate data ultimately means they are computer code). Examples of formal logic are difficult to find, see <cit.>. Mathematics is of course common if we include results from linear regression and structured equation modelling, although <cit.> for example would not consider this alone to be theory. Mathematical modelling work such as that used by for example <cit.> is common although this method does blur with computational models and there is a question over whether writers who use this approach actually mean for their mathematical models to be interpreted as theory (though perhaps this is not an essential characteristic of theory). An entire field is devoted to computational models of social behaviour, for examples see the Journal of Artificial Societies and Social Simulation. <cit.> demonstrate the process of turning imprecise theories into computational models and discuss why this is valuable. By implementing a theory as a computational model such as an agent-based model it can be tested and explored. Running a model generates data that can be compared with observations. 
This will demonstrate—prove in fact—that a model explains—or does not explain—those observations (it doesn't prove that it is the correct explanation, only that it is an explanation). Implementing a theory as a computational model makes an imprecise social science theory formal and precise, and allows for rigorous testing <cit.>. IGSS sits in the generative social science paradigm as described by <cit.>. They say, “we consider a given macrostructure to be `explained' by a given microspecification when the latter's generative sufficiency has been established” <cit.>. Under IGSS, the final chosen rule that has been selected from the evolved rules (which will have high fitness but might not be the most fit–this is explained later in this section) should explain the target. If theory is an explanation, then this rule should at least have the potential to be considered theory–we would argue that it is theory. <cit.> would likely agree. Perhaps <cit.> would too. Although they neither approve of nor discount computer code as being theory, citing a lack of agreement about whether a model can constitute theory (p.371), agent model code does seem to fit well with their description of what strong theory is. Agent model code answers the question of why behaviours emerge. The code makes explicit the connections among elements in the target. The models present causal relationships and identify the timing of events. Code presents underlying processes that allow us to understand “systematic reasons for a particular occurrence or nonoccurrence” and presents the microprocesses involved. A counter argument to this last point might be that agent-based models do not concern inter-relationships among macro level variables (although we think this is incorrect–the output from agent models does allow us to examine macro level variables). <cit.> finish their section with a quotation from Karl Weick: a good theory “explains, predicts, and delights”. There is certainly something delightful about observing a phenomenon emerge `in front of your eyes' on a computer screen while running an agent model. At least in the sense of explaining social processes of interest, agent-based models can be considered as theory <cit.>, although as we say this we remember that not all readers will agree on what theory is. An agent model's code can be pruned to test and observe how robust the output is, thus potentially simplifying a theory. In addition, a model/theory can be explored to generate hypotheses that have not been thought of previously, and computational theories can be tested in ways that would be expensive, unethical, or impossible otherwise <cit.>. As such, agent-based models offer real possibilities for theory advancement that deserve more attention in this regard, see for example <cit.>. We now consider how this idea of formal theory fits with IGSS. IGSS is best seen as exploratory research. Exploratory research is used to study a phenomenon that has not been studied previously. It is used to identify research questions for further examination and hypotheses that will later be tested. An exploratory researcher will likely get to prioritise these questions and lay out the future research agenda. As for statistics, exploratory research cannot use null hypothesis significance testing. To do so would be a form of data dredging. (To explain why briefly: if 20 random ideas are tested using a significance threshold of 5%, it would be expected that 1 of them would be observed because 5% equates to a 1 in 20 chance. 
Therefore if 20 ideas that might be real effects are explored in an agent model, but in fact all of them are just random, one of them will probably wrongly be identified as being real.) Instead, exploratory research relies on graphical methods, descriptive statistics such as measures of central tendency, spread and correlation, and qualitative approaches. Many of these, such as interviews and focus groups, are inappropriate for agent models. Rather, when traditional agent modelling is used for exploratory research, it involves exploring ranges and combinations of parameter settings, and less frequently exploring a small number of competing rules. The jurisdiction of IGSS is this latter space–exploring competing sets of rules, many more than could possibly be explored manually. Indeed, IGSS can be considered `turbo charged' exploratory research. The consequence of this is that IGSS should firmly be considered exploratory research. Indeed if our interest was in implementing existing theory as an agent-based model then we would not have need for IGSS. Even if theory is guiding the design of the agents' environment, or guiding the selection of which agent breeds to include or the numbers of each to have, eventually the genetic programming algorithm must take over and learn micro rules on its own, at which point it will no longer be guided by existing theory–it will be exploring its search space. The consequence is that theory as an input is less relevant to IGSS than to other applications of agent models. There are an infinite number of micro rules that will produce any macro behaviour perfectly and it would be impossible to decide between them simply by qualitatively or quantitatively observing the output of a model. When discussing this, <cit.> uses the analogy of rolling a die: Imagine you are holding a loaded die. If it is thrown on the ground, six comes up most frequently, followed by five, then four, and so on down to one. If however it is thrown against a certain slope, the frequency of getting a six is reduced. In fact if you throw the die now, each side comes up about 16 times in 100, exactly as expected from a normal die. Is the die now uniform? Or more simply, is it right? The system of the die's use has now produced a fair die even though we have not actually changed what most people would see as the `correct' part of the system that should have been changed. This issue is dealt with explicitly in extant IGSS studies. <cit.> recognise the problem thus: “it is important to clarify that different solutions in program space can potentially map to the same solution in logical space” and state that the solution will take a “new set of non-trivial tools”. <cit.> list their 10 best rules, which are quite different from each other, and there is no obvious way to choose between them other than closeness to the reference dataset, which as with the die metaphor does not necessarily give the `correct' solution. Interestingly they examine 10,000 rules using factor analysis to try to identify common elements (they refer to them as `strategies') found within them. This is a very exciting approach and is discussed further later. <cit.> acknowledge the problem and deal with it in part by supplying their genetic algorithm with a “library of theory building blocks” which are “entities, attributes and mechanisms”. These come from an existing computational model <cit.> which is itself based on the Theory of Planned Behaviour <cit.>. 
As part of their discussion they state that the fact that the genetic algorithm uses all of the building blocks demonstrates that they “are all of theoretical importance for determining individuals' intentions” to behave in a certain way (in their case drinking alcohol). They then (Paragraph 6.4) go on to list additional empirical evidence as support of the building blocks' importance. This of course gives one of the ways in which the infinity of models is reduced, by giving the IGSS algorithm access to theoretically valid constructs to work with. The pity in this is that there may exist constructs that hadn't yet been considered by human researchers that the AI will not be given the opportunity to find. <cit.> and <cit.> discuss the concept of cognitively plausible agents. This means that fitness is not enough to judge evolved rules. A rule has to be plausible. It must be possible to interpret it meaningfully in either the language of the target or the language of the target theory, and in Section <ref> we discuss this in relation to our models from Section <ref>. <cit.> points out that a purely rational agent, for example, would not be cognitively plausible and suggests that agent emotion, deliberation and social influence must contribute to interpreting and even guiding evolved behaviour. Genetic programming is favoured for IGSS rather than many other forms of AI as it produces output that has strong potential to be meaningfully interpreted (although it is not the only possibility–a decision tree is another candidate). Contrast this with the output of an algorithm which trains a neural network, which would be an interconnected series of weights, something that could not possibly be interpreted meaningfully. §.§ Metatheory We end this section with a comment on a possibility that may be of interest to researchers. Rather than generating a theory, IGSS generates candidate theories, competing alternative explanations. This suggests a role for IGSS in the study of metatheory. Metatheory is theory on theory, or theorising about the development and refinement of theories. A theory explains why certain behaviours are observed in a target. There exists an underlying process that creates those behaviours. It is this process that is being explained as it reveals itself in the target. With a traditional agent-based model, agents are told what this process is. With IGSS this behaviour is evolved in what was earlier called `turbo charged' exploratory research. This is valuable in and of itself but it will also allow for the study of how rules evolve; such a study would contribute to metatheory, a theory about the development of a theory, an explanation of where rules come from. We can picture IGSS as building a model of models to examine under what conditions a certain observation that is explained by a certain theory emerges. For example, the factor analysis carried out by <cit.> might be considered to be an early instance of this. Factor analysis is the authors' term and might be misleading. They use `factor' to refer to a set of commands that are applied to a number of parameters to produce a return value <cit.>. In social science terms, examples of factors would be a construct such as personality, age, or height, or a behaviour strategy such as playing tit-for-tat. From the 10,000 rules that were evolved, they assess how each factor and how joint factors (factor interaction) impact on model fitness by measuring joint contribution. 
This involves training a random forest regressor on the factor presence to fitness data and assessing this using tree analysis software <cit.>. The most important factors then are the theoretical elements that when included lead to a theory being successful. We suggest that this can be considered the beginnings of a study of metatheory. § STUDIES OF CONFLICT Understanding and dealing with conflict is an important part of research into organisations. To illustrate IGSS and guide our discussion, in this section we 1) evolve behaviour in a hawk-dove game and 2) re-create the well known civil disobedience model. §.§ Hawk dove model For the hawk-dove agent model, in each time period agents move to a location upon which is a certain amount of a desirable resource. Locations can accommodate up to two agents at a time. Each agent must assess their situation and decide how much of the resource to try to take. If there is conflict between two agents over a resource then both get zero, otherwise they take what they wanted. One run of the simulation lasts 100 time periods. The decision of how much resource to try to take is the rule that is evolved using IGSS with the format: IF condition THEN take x ELSE take y Agents initially choose a location at random and will return to it unless they receive zero, in which case they choose another location at random. An agent's situation is recorded as a number of variables that are then used to construct the rule. These are: * amount of resource on my current location, resource * number of agents on my current location, agents * amount of resource I took in the previous round, previousTook * amount of resource on my location in the previous round, previousResource * number of agents on my location in the previous round, previousAgents * number of agents in the simulation, totalAgents * total amount of resource I have taken, totalResource In this study we created reference datasets manually to capture various scenarios of interest. We looked at: equality (where all agents end up with the same total amount of resource, roughly the maximum possible for each) and several forms of inequality (using the wealth distribution in the UK described by The Equality Trust as a guide). The study was used to answer the question: how would all members of a society have to act in order to achieve this distribution of wealth? The code is written in R and is available at github.com/ThomasCNotts/RIGSS. §.§.§ Results To achieve near perfect equality, IGSS found the following rule: IF (previousResource - previousResource) * (previousResource - previousResource) >= (previousTook - resource) - (totalResource - agents) THEN take = 1 Evolved rules almost always have to be pruned to remove redundancy. In this case, the rule prunes down to `always take 1' which given the amount of resource and the number of agents in the simulation is a simple way to achieve equality. (The rule actually reduces to `if zero is greater than a negative number then take 1'.) For inequality we started with a reference dataset that produces the distribution of wealth shown in the right panel of Figure <ref>. To try to match this IGSS found the rule: IF (previousTook - totalAgents) != (agents AND previousTook) THEN take = 1 ELSE take = 9 which with pruning reduces to: IF previousTook >= 1 THEN take = 1 ELSE take = 9 and produces the distribution shown in the left panel of Figure <ref>. §.§ Civil disobedience model The second illustrative model was written in NetLogo which is perhaps less well known than R. 
A specialist agent development environment, NetLogo is free to download. The civil disobedience agent model <cit.> models violence flaring up in a densely populated environment. A version called Rebellion.nlogo is included in the NetLogo models library. Citizen agents are either happy or unhappy with their government, and are worried about arrest if they protest with violence. They calculate their political grievance based on an individual perceived hardship parameter and a government legitimacy global parameter, and calculate their chance of arrest based on the number of other people rebelling and the presence of police. If an agent’s grievance exceeds the risk of arrest by a small threshold, the agent decides to rebel. Occasionally, given certain initial parameters, there is a city-wide riot; but what other micro rules would lead to the same behaviour? An IGSS study is a suitable way to explore this question. There are three rules governing behaviour: * Rule M: if there are any new sites free, citizens move to a new site if they are not in jail; police move to a new site. * Rule A: citizens who are not in jail decide if they should riot. The rule for this is: riot if (grievance - risk-aversion * estimated-arrest-probability) > threshold. * Rule C: if a police agent sees a rioter or rioters, it arrests one of them randomly. The civil disobedience model has an advantage for researchers who want to run their first IGSS study in that the behaviour executed in each tick (the NetLogo term for a discrete unit of time) is independent of events that have happened in the previous tick–agents do not have to remember or factor in their past. This means that we can generate a static reference dataset and use this throughout learning. The alternative would be having to run the model every time we wanted to test the fitness of a generated rule, as was the case with the hawk-dove model, which slows running considerably. To handle the A–M–C order of rules, we evolve each rule separately, one at a time. We ran the model for 40 time periods using three different combinations of global variables, giving us data covering 120 time periods. For each time period we collected data on every agent, recording the agent variables. Since some internal agent states change more than once during a time period we captured certain agent variables at three different steps in each period. Essentially all data—including any hidden states—that determines behaviour was recorded in a csv file which became the reference dataset. A total of 122,160 data points were generated with 38 variables making up each point. These data were given as input to a second NetLogo program that implemented the genetic programming algorithm. As before, genetic programming was used to generate a candidate rule of the form IF condition THEN action. Due to bias in the reference dataset, in that in most time periods many more agents choose not to riot than choose to riot, we did not use a simple accuracy measure to test fitness. Instead we used a balanced accuracy measure as the fitness measure. This takes into account both sensitivity (true positive rate) and specificity (true negative rate) to provide a more comprehensive assessment of the model's ability to classify instances from different classes correctly. Rule M, the move rule, is relatively simple and the algorithm discovered a rule with perfect fit quickly. 
It is not qualitatively identical to the actual Rule M in the original Rebellion.nlogo but is quantitatively the same: IF 0 >= jail-term0 AND threshold < free-neighborhood0 [set movement-calculated 1] Rule A is by far the most complicated of the three and it took much longer to achieve 98.07% fitness: Rule A: IF ((movement-tracker0 = 0 AND movement-tracker1 = 1 OR active-binary > 0) OR government-legitimacy * government-legitimacy < perceived-hardship) AND (count cops-on neighborhood) * estimated-arrest-probability = jail-term [set active? True] In the language of the target: IF (((agent did not move last time AND agent moved this time) OR agent is active since last time) OR government-legitimacy squared < hardship) AND count police * arrest probability = jail term of the agent THEN riot Like Rule M, a version of Rule C was discovered with perfect fitness: Rule C: if breed = cops [enforce] We created a new model with the new evolved rules and compared the results with the original model. The results are shown in Figure <ref>. Qualitatively the two would be difficult to distinguish. § DISCUSSION In 2009, <cit.> published an examination of claims made about the philosophy of computer simulation arguing that, far from demanding a new `metaphysics, epistemology, semantics and methodology', simulation raises few if any new philosophical problems. As IGSS is exploratory research it shares a philosophy of science with exploratory research. The fact that IGSS generates candidate theories quantitatively at a larger scale than `manual' exploratory research does not in itself bring fresh philosophical challenges. However there is much to get excited about. IGSS offers two huge advantages over most other approaches to modelling. 1) IGSS has the potential to fit complex non-linear models to a target and 2) the models have the potential to be interpreted as social theory. One of the most interesting prospects is that the candidate theories IGSS generates can be non-linear. Not only that, but IGSS allows for the possibility of discovering a candidate theory made up of multiple non-linear behaviours belonging to different types of agents, thus introducing heterogeneity between agent types. This can be compared with model discovery in the physical sciences, where AI has long been used to search for linear and non-linear ordinary differential equations that yield an observable relationship between inputs and outputs (see for example <cit.>). The downside of this is that the search space will be enormous. While the review in this paper has highlighted several approaches to reducing it, this remains an avenue for future research. Our illustrative examples raise many points to note about IGSS. Starting with specific results before moving on to general comments about IGSS: for the hawk-dove model, while our system did not find a good fit for either of our inequality models, it did find a rule that leads to a smoother (and indeed therefore more realistic) unequal distribution of wealth (right side panel, Figure <ref>). The rule that allowed this to emerge exhibited extremes; in the language of the target it is that agents take either very little or a lot: IF previousTook >= 1 AND agents >= 1 THEN take = 1 ELSE take = 9 According to the preceding argument, this rule can explain wealth inequality. This should be explored further in a fresh agent model to probe the relationships between extremes and wealth distribution. 
(Strictly speaking, as this would still be exploratory research, it would not have to be a fresh agent model, although a fresh model would be needed if hypothesis testing were going to be used–see <cit.>. Indeed a brief exploration of this rule in our agent model reveals that any values where the second is greater than the first will lead to inequality, but the values above maximise it.) Looking at the civil disobedience model, Rule A as presented in the previous section may seem like it is not explainable, and this is a good case where the rule must be interpreted in the language of the target or the target theory. To recap, the evolved version of the rule is: IF ((movement-tracker0 = 0 AND movement-tracker1 = 1 OR active-binary > 0) OR government-legitimacy * government-legitimacy < perceived-hardship) AND (count cops-on neighborhood) * estimated-arrest-probability = jail-term [set active? True] This can be interpreted in the language of the target as: IF (((agent did not move last time AND agent moved this time) OR agent is active since last time) OR government-legitimacy squared < hardship) AND count police * arrest probability = jail term of the agent THEN riot These parameters can be further interpreted. For example, arrest probability = jail term is effectively arrest probability = 0. This rule then starts to make more sense. Human interpretation like this will be essential to make sense of complicated rules and, as we have said before, this sort of interpretation is what a social scientist does so we should not fear it. More generally, from our studies it is apparent that when running an IGSS study, there are a number of decisions that must be made almost arbitrarily. Each of these decisions could be a subject for further research. One example is the choice of primitives to give to the genetic programming algorithm to work with. Primitives are the mathematical operators from which the rules are built and there is very little theoretical guidance on what these should be. Other examples are the size of the initial population of rules and the probabilities of each genetic operation (that is, the chance of reproduction, mutation or crossover happening to a rule). In our hawk-dove model, the number of generations was chosen arbitrarily. Each of these decisions could potentially impact on results and yet there currently exists no systematic way to decide on all of them other than `rules of thumb'. We ran our hawk-dove model for 100 time periods, and the civil disobedience model for 40–again these are arbitrary decisions that we suspect will have little bearing on results but this should be studied further. Lastly we give some suggestions for possible IGSS studies in the area of decision support and organisations. IGSS could be used to model individual-level decision-making in supply chains (e.g., purchasing behavior, supplier relationship management) and simulate how these micro-behaviors aggregate to produce macro-level outcomes like market fluctuations, supply chain disruptions, or resilience. This could help predict and understand complex phenomena that emerge from individual interactions within the supply chain. IGSS could explore novel and potentially more efficient or ethical supply chain configurations that wouldn't be easily discovered through traditional optimization approaches. 
Finally IGSS could model how individual agents adapt their behavior when faced with changes like new regulations, economic crises, or technological advancements providing insight into potential unintended consequences of policies or predict how supply chains react to disruptions, aiding in better preparedness and policy design. We finish with a final observation. An agent model is not a necessary part of an IGSS study. A reference dataset can be generated in any number of ways, only one of which is an agent model. A genetic programming algorithm could be run on data produced in an experiment, a survey, or an existing dataset to produce rules. It may be natural though not essential to then implement those rules in an agent model to demonstrate that they lead to the behaviour observed in the target; this is of course the idea behind generative social science. The main point is, that when IGSS software becomes easier to use, there will be many existing datasets that have be used to create many accepted theories that could be explored with IGSS to produce alternative theories that can then be assessed by researchers. elsarticle-harv 48 natexlab#1#1 [#1],#1 [Adner et al.(2009)Adner, Polos, Ryall and Sorenson]Adner09 authorAdner, R., authorPolos, L., authorRyall, M., authorSorenson, O., year2009. titleThe case for formal theory. journalAcademy of Management Review volume34, pages201–208. [Ajzen(1991)]Ajzen91 authorAjzen, I., year1991. titleThe theory of planned behaviour. journalOrganizational Behavior and Human Decision volume50, pages179–211. [Banzhaf et al.(1998)Banzhaf, Nordin, Keller and Francone]Banzhaf98 authorBanzhaf, W., authorNordin, P., authorKeller, R.E., authorFrancone, F.D., year1998. titleGenetic programming: an introduction: on the automatic evolution of computer programs and its applications. publisherMorgan Kaufmann Publishers Inc. [Braune et al.(2022)Braune, Benda, Doerner and Hartl]Braune22 authorBraune, R., authorBenda, F., authorDoerner, K.F., authorHartl, R.F., year2022. titleA genetic programming learning approach to generate dispatching rules for flexible shop scheduling problems. journalInternational Journal of Production Economics volume243, pages108342. [Buckley et al.(2022)Buckley, Field, Vu, Brennan, Greenfield, Meier, Nielsen, Probst, Shuper and Purshouse]Buckley22 authorBuckley, C., authorField, M., authorVu, T.M., authorBrennan, A., authorGreenfield, T.K., authorMeier, P.S., authorNielsen, A., authorProbst, C., authorShuper, P.A., authorPurshouse, R.C., year2022. titleAn integrated dual process simulation model of alcohol use behaviours in individuals, with application to us population-level consumption, 1984–2012. journalAddictive behaviors volume124, pages107094. [Busemeyer and Diederich(2010)]Busemeyer10 authorBusemeyer, J.R., authorDiederich, A., year2010. titleCognitive modeling. publisherSage. [Camerer(2011)]Camerer11 authorCamerer, C.F., year2011. titleBehavioral game theory: Experiments in strategic interaction. publisherPrinceton university press. [Chesney(2021)]Chesney21 authorChesney, T., year2021. titleAgent-Based Modelling of Worker Exploitation. publisherSpringer. [Chesney et al.(2017)Chesney, Gold and Trautrims]Chesney17 authorChesney, T., authorGold, S., authorTrautrims, A., year2017. titleAgent based modelling as a decision support system for shadow accounting. journalDecision Support Systems volume95, pages110–116. [Davis et al.(2007)Davis, Eisenhardt and Bingham]Davis07 authorDavis, J., authorEisenhardt, K., authorBingham, C., year2007. 
http://arxiv.org/abs/2407.13299v1
20240718090239
Reducing Numerical Precision Requirements in Quantum Chemistry Calculations
[ "William Dawson", "Katsuhisa Ozaki", "Jens Domke", "Takahito Nakajima" ]
physics.chem-ph
[ "physics.chem-ph" ]
RIKEN Center for Computational Science, Kobe, Japan Shibaura Institute of Technology, Saitama, Japan § ABSTRACT The abundant demand for deep learning compute resources has created a renaissance in low precision hardware. Going forward, it will be essential for simulation software to run on this new generation of machines without sacrificing scientific fidelity. In this paper, we examine the precision requirements of a representative kernel from quantum chemistry calculations: calculation of the single particle density matrix from a given mean field Hamiltonian (i.e. Hartree-Fock or Density Functional Theory) represented in an LCAO basis. We find that double precision affords an unnecessarily high level of precision, leading to optimization opportunities. We show how an approximation built from an error-free matrix multiplication transformation can be used to potentially accelerate this kernel on future hardware. Our results provide a road map for adapting quantum chemistry software for the next generation of High Performance Computing platforms. Reducing Numerical Precision Requirements in Quantum Chemistry Calculations Katsuhisa Ozaki July 22, 2024 =========================================================================== § INTRODUCTION In recent years, progress in the field of Artificial Intelligence has led to an increase in demand for computing resources to perform deep learning <cit.>. This has significant implications for developments in computational quantum chemistry — simulation software must be written to coexist with AI-centric software and hardware ecosystems. Of particular importance will be the ability to run quantum chemistry packages on low precision hardware such as NVIDIA's Tensor Cores or Google's TPUs (see, for example, Ref. <cit.> and Ref. <cit.> respectively for their application to scientific problems). While targeting low precision hardware will increase the challenge of developing simulation software in the short term, it also presents new opportunities for co-designing specialized hardware optimal for solving scientific problems. In this paper, we will systematically explore the floating point precision requirements necessary for a representative quantum chemistry kernel: calculation of the single particle density matrix represented in a linear combination of atomic orbitals (LCAO) basis set. We will find that double precision calculations provide an unnecessarily high level of precision. We will then propose the use of the error-free transformation for matrix multiplication developed by Ozaki and coworkers <cit.> (i.e. the Ozaki scheme) in combination with density matrix purification <cit.> to exploit low precision hardware while obtaining the required precision (and no more). § BACKGROUND We will first review the key points of floating point calculations on modern hardware. Then we will discuss the history of low precision calculation algorithms in computational quantum chemistry. Subsequently we will introduce the target algorithm of density matrix purification and its relevance for low precision calculations (including recent promising work). Finally, we will present the Ozaki scheme for efficiently emulating higher precision matrix multiplication using low precision hardware. §.§ Floating Point Representations IEEE-754 floating point numbers are written in terms of a sign bit, exponent, and mantissa (Fig. <ref>). For example, single precision (FP32) uses 8 bits for the exponent and 23 for the mantissa, whereas half precision (FP16) uses 5 and 10 respectively.
Due to the implicit bit, these formats are able to effectively store 24 (FP32) and 11 (FP16) bits of precision in the mantissa. In quantum chemistry codes, the standard is to use double precision calculations (FP64), which uses 11 bits for the exponent and 52 for the mantissa. Recently, new floating point formats such as NVIDIA's TF32 (8, 10) or BFLOAT16 (8, 7) have been proposed specifically for machine learning applications where only a small mantissa is required. In this work, we will also consider AMD's FP24 (7, 16) format as an example of a type with a mantissa between the size of FP16 and FP32. §.§ Low Precision Quantum Chemistry Calculations There has been significant research on low precision computing for computational quantum chemistry, however it has primarily focused on single precision vs. double precision. Single precision for the analytic calculation of two-electron ERIs in a Gaussian basis set has been demonstrated to be an effective strategy to exploit GPUs <cit.>. Single precision can also be employed to accelerate semi-numerical methods <cit.>. The single precision strategy has recently been applied to Slater type orbitals as well <cit.>. In materials science, single precision was shown to be a promising strategy to accelerate the computation of exact-exchange in codes using a planewave basis set <cit.>. Single precision has also been employed to speed up the iterative eigenvalue solvers of materials codes <cit.>. Single precision calculations are particularly promising for many body perturbation theory methods. Early work demonstrated the reduced precision requirements of MP2 calculations <cit.>, including when using the RI technique <cit.>. Single precision can also be applied to Coupled Cluster <cit.>, including time dependent variants <cit.>. Other targets of single precision optimizations include DMRG <cit.>, quantum transport calculations <cit.>, and GW <cit.>. A challenge the community has faced for developing low precision software has been to predict how relevant such optimizations will be to future architectures. For example, Yasuda showed that evaluation of the exchange and correlation functional could be accelerated using single precision <cit.>; however, a recent work <cit.> explicitly rejected this strategy noting the gap in performance between double and single precision on GPUs has been closed. The earlier mentioned work on semi-numerical calculations <cit.> justified its strategy by targeting lower-cost “gaming GPUs”, where single precision continues to have higher performance. With recent developments in Artificial Intelligence, the pendulum has swung back towards the relevance of low (and more exotic) precision hardware. It is now crucial for the quantum chemistry community to establish clear precision requirements for their simulations. We note that even if double precision capable hardware remains the standard, studying the effects of low precision will still be potentially useful for reducing data transfer costs. §.§ Density Matrix Purification In mean field quantum chemistry calculations, such as Kohn-Sham Density Functional Theory <cit.>, we need to compute the single particle density matrix from a given Hamiltonian. Limiting ourselves to the spin-restricted case, we expand the orbitals in some set of M basis functions: ψ_i(r) = ∑_j^M c_ijϕ_j(r). We in turn obtain matrix representations of our fundamental operators: S_ij = <ϕ_i|Î|ϕ_j>, H_ij = <ϕ_i|Ĥ|ϕ_j>, where Î is the identity and Ĥ the Hamiltonian operator. 
This leads to the generalized eigenvalue problem: Hψ_i = λ_i S ψ_i, from the solutions of which we can construct the single particle density matrix: K_ij = ∑_a^M f_a c_ai c_aj, where f is the occupation number (usually 2 for occupied and 0 for unoccupied orbitals for spin-restricted calculations of insulating systems). In most implementations based on LCAOs, equation <ref> is solved by invoking a dense eigenvalue solver, such as the ones available in LAPACK or ScaLAPACK. Unfortunately, these calculations have a computational cost that scales with the third power of the number of basis functions. To reduce this cost for application to large systems, many “diagonalization free” methods have been proposed <cit.>. One class of “diagonalization free” methods is based on the purification algorithm first proposed by McWeeny <cit.>, which iteratively computes K using the following recurrence relation: P_0 = (λ/2)(μ I - H) + (1/2)I, P_k+1 = 3P_k^2 - 2P_k^3, where μ is the chemical potential, λ scales the spectrum of H to be within the range [0, 1], and K=2P. The power of such an approach is that the core computational kernel is matrix-matrix multiplication, which can readily exploit the underlying sparsity of H and K that exists for large insulating systems <cit.>. A number of different purification methods exist (see, for example, the methods implemented in the NTPoly library <cit.>); each employs different forms and orders of polynomials during the iterations. The second order trace correcting method of Niklasson <cit.> has the benefit of only needing to compute the square of a matrix, a point we will return to in Sec. <ref>. Density matrix purification is not only useful for the case of extremely large systems where such sparsity exists. Since the purification algorithm has matrix-matrix multiplication as a bottleneck, it has the benefit of scaling better on supercomputers than eigenvalue solvers do. When developing a Hartree-Fock code for the Tianhe-2 supercomputer, Chow and coworkers utilized purification to substantially improve scalability <cit.>. Finkelstein and coworkers recently demonstrated how an algorithm similar to purification implemented on a GPU could outperform dense eigenvalue solvers even on a single GPU <cit.>. Pederson and coworkers <cit.> similarly have proposed using dense purification as a means of exploiting a cluster of Google TPUs. In their work, they use a mixed-precision scheme where early SCF iterations are performed in single precision and the final iterations in a software emulated double precision. Around the same time, Finkelstein and coworkers performed a series of studies using the Tensor Cores available on NVIDIA GPUs <cit.>. They targeted single precision accuracy by taking advantage of the Tensor Core's ability to accumulate in single precision and employing the Markidis scheme <cit.>. The work was further extended to density functional perturbation theory <cit.>. §.§ Ozaki Scheme The Ozaki scheme <cit.> performs an error-free transformation of computing the product of two matrices into a summation of several matrix multiplications that can be performed without rounding error. Several implementations of the Ozaki scheme exist both to target higher than double precision <cit.> and for using low precision units like Tensor Cores <cit.>.
Remarkably, an implementation based on the INT8 Tensor Cores of an RTX A6000 GPU could outperform double precision cuBLAS by >4× without loss of accuracy. On future architectures where low precision arithmetic units further dominate FP64, this scheme would be even more potent. The details of the Ozaki scheme have been presented in several previous publications, so we only briefly review the concept here (Fig. <ref>). The scheme begins by splitting the input matrices into S split matrices (of the same size as the original). This is done through a series of rounding and bit shifting operations so that each matrix can be represented exactly in the low precision representation (we include the improved version introduced later by Minamihata et al. <cit.>, see equation 3 in the paper of Mukunoki et al. <cit.> for details). A further scaling operation is applied to maintain the exponent's range <cit.> (we note that in the scaling method of Mukunoki, only error free terms are considered, but in our implementation we extend the scaling to all terms). We then compute the product of pairs A_iB_j, where i+j ≤ S+1 as well as a set of remainder terms. Finally, the resulting matrices are summed up in the original precision. The accuracy of the final result depends on the number of splits; if both matrices are split S times we require 2S times as much memory and S(S+1)/2 as many multiplications. In Fig. <ref>, we show some example calculations of the multiplication of two random matrices (FP64 elements distributed between [0, 1]) using different precisions and splits. Accuracy is measured as the Frobenius norm of the difference between the double precision reference result and the Ozaki scheme result. From this data, we see one crucial point about the Ozaki scheme: the floating point precision for accumulation determines the overall precision. Hence the Ozaki-scheme is particularly effective at taking advantage of hardware like NVIDIA's Tensor Cores. § IMPLEMENTATION DETAILS For this study, we created two different implementations of density matrix purification: one based on NVIDIA's CUDA API for use on an NVIDIA RTX A6000 GPU and another using GNU MPFR to emulate low precision operations in software. MPFR will be particularly helpful to this study as it allows us to investigate arbitrarily defined floating point numbers <cit.>. However, one significant limitation of MPFR is that it emulates an infinite exponent, though it is able to detect under/overflow. Thus we will use MPFR to explore precise precision requirements, and validate our results using the (substantially faster) CUDA implementation. As input to our purification implementation, we use Hamiltonians coming from the PySCF <cit.> and BigDFT <cit.> codes. We use HGH pseudopotentials <cit.> to remove the core electrons in BigDFT. BigDFT calculations are performed in the linear-scaling mode in order to produce a Hamiltonian in an LCAO basis set <cit.>. PySCF calculations are done with the Polarization Consistent basis sets series <cit.>. Calculations with BigDFT are performed with the PBE exchange-correlation functional <cit.> and for PySCF with B3LYP <cit.>. For simplicity, we first transform equation <ref> to the standard eigenvalue problem using the Löwdin method by diagonalization in double precision. For our tests, we implement the Trace Resetting Fourth Order (TRS4) purification method <cit.>. We choose this method because it gives a clearer convergence signal than lower order methods. 
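For orientation, the purification loop that this kernel executes can be written in only a few lines. The sketch below (NumPy, with illustrative names) uses the simpler McWeeny recurrence reviewed in the Background rather than TRS4, assumes an orthogonalized Hamiltonian with a chemical potential lying in the gap, and obtains spectral bounds by diagonalization purely for brevity; it is a schematic stand-in for, not a transcription of, our released implementation.

```python
import numpy as np

def purify_density_matrix(H, mu, tol=1e-8, max_iter=100, matmul=np.matmul):
    """Schematic McWeeny purification for an orthogonalized Hamiltonian H.
    mu is the chemical potential; returns K = 2P for a spin-restricted system.
    The `matmul` callable is where a reduced-precision product would be swapped in."""
    n = H.shape[0]
    e = np.linalg.eigvalsh(H)                      # spectral bounds (Gershgorin estimates would also do)
    lam = 1.0 / max(e[-1] - mu, mu - e[0])         # maps the spectrum of P0 into [0, 1]
    P = 0.5 * lam * (mu * np.eye(n) - H) + 0.5 * np.eye(n)
    e_old = np.inf
    for _ in range(max_iter):
        P2 = matmul(P, P)
        P = 3.0 * P2 - 2.0 * matmul(P2, P)         # P_k+1 = 3 P_k^2 - 2 P_k^3
        e_new = np.trace(matmul(H, P))
        if abs(e_new - e_old) < tol:               # energy-change convergence test
            break
        e_old = e_new
    return 2.0 * P
```

Swapping the `matmul` hook for a reduced-precision or emulated routine is the only change needed to study the precision dependence of the whole iteration.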
As a convergence criterion for the purification iterations, we use a change in the electronic energy, Tr(HP), of 1× 10^-8 Hartree; this in turn serves as a benchmark of whether we have achieved sufficient precision. For cases where convergence can't be achieved (due to low precision errors), the purification algorithm halts when the previous two energy values have increased, with an absolute change of less than 1×10^-2 Hartree (or on the 100th iteration). The source code of our implementation is available online (gitlab.com/wddawson/ozp). § NUMERICAL EXPERIMENTS We will now perform numerical experiments to understand what level of precision will be required for practical calculations. Our first experiments will establish the fact that double precision provides more than the necessary precision, opening up the way for approximate calculations. We will then investigate the Ozaki scheme as a means of achieving the target precision. We will further apply the Ozaki scheme to larger datasets of matrices to verify the robustness of our findings. We will also compare these results to the competing Markidis method. Finally, based on our findings, we will consider how current implementations of purification could benefit from a mixed precision scheme. We note that for all calculations, low precision is only used for the matrix multiplications, and double precision is used everywhere else. As test cases, we will first use the Hamiltonian coming from a BigDFT calculation of a Molnupiravir molecule bound to six water molecules. We will then expand the dataset to include a selenite ion surrounded by 10 water molecules calculated with different basis sets in PySCF. We will also include water clusters of different sizes and bulk silicon computed with BigDFT. We finally analyze a large water cluster (573 molecules) computed with B3LYP/PCSEG using NTChem <cit.> (due to the system size). The main systems used in this paper are shown in Fig. <ref>. §.§ Precision Requirements Sweep In order to understand the precision requirements for computing the density matrix, we perform density matrix purification using different custom floating point types implemented using MPFR on the Molnupiravir system. For this experiment, we assume an infinite exponent range. In Fig. <ref>, we plot the error of the resulting density matrix compared to a reference result computed with diagonalization in double precision. We vary the effective precision of the mantissa used for multiplication from 11 (half) to 53 (double). For each calculation, we use one of three fixed accumulation mantissa values (24, 37, 53) or the “same” precision as multiplication. From this data, we see that single precision is not sufficient to achieve a converged result. The error in the norm is 8.4× 10^-5, which corresponds to an absolute energy error of 5.2× 10^-6 Hartree. However, this failure comes from the accumulation phase, not the multiplication step. Thus, it is possible to store (and hence communicate) intermediate matrices in single precision if they are multiplied in a higher precision (a point we will return to in Sec. <ref>). We found that the magnitude of the error for any given multiplication in the iterations remains roughly the same, so there would be little benefit to adjusting the precision across iterations (for example, we tried using full double precision in the final iteration, but the error did not significantly improve).
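The reduced-mantissa arithmetic used in this sweep can also be emulated without MPFR, at the cost of being restricted to the double precision exponent range; a minimal helper of this kind (the name and interface are illustrative and not part of our released code) is sketched below.

```python
import numpy as np

def round_to_mantissa(x, bits):
    """Round float64 values to an effective `bits`-bit mantissa (round to nearest).
    The exponent keeps the float64 range, unlike the unbounded exponent of MPFR."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))   # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return np.ldexp(np.round(m * scale) / scale, e)
```

Rounding after each elementary multiply and add in this way approximates the behavior of a true reduced-precision type.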
Importantly, we found that 37 bits of effective precision is sufficient for accumulation; for a single precision multiplication with 37 bits of accumulation, the error in the norm is 2.3 × 10^-6 and the error in energy is 4.2× 10^-8 Hartree. Thus, the double precision commonly used for quantum chemistry calculations essentially provides an unnecessarily high level of precision for computing the single particle density matrix. We now repeat this experiment using the selenite system with the PCSEG-3 basis set (Fig. <ref>). The pattern is very similar, with convergence again found around 37 bits of effective precision. For single precision multiplication and 37 effective bit accumulation, the error in norm is 2.9× 10^-5 and the absolute error in energy is 3.3× 10^-6 Hartree. Remarkably, the result with 53 effective bits for multiplication and accumulation has an error in norm of 4.9× 10^-6 and absolute error in energy of 2.0× 10^-10 Hartree. We note that this discrepancy does not exist if one uses standard double precision calculations; instead, it comes from the infinite exponent of MPFR (indeed, mpfr_check_range detects such out of range errors during the calculation). Hence, it is the double precision result that contains errors. From this data, we can see that larger basis set calculations, where orbitals may be represented as a sum of small contributions from many diffuse functions, may benefit from an increase in the exponent range using a scaling scheme (such as the one discussed in Sec. <ref>). Similar caution must be used when performing single precision calculations: the error in the absolute energy with standard single precision was 1.8× 10^-2 Hartree, whereas with MPFR (infinite exponent) it was 1.5× 10^-3 Hartree. §.§ Ozaki Scheme Application Despite this promising finding, it is unlikely that future hardware will offer implementations of floating point types with effective 37 bit mantissas. Instead, we propose using the Ozaki scheme with a reduced number of splits to take advantage of the reduced precision needs. In Fig. <ref>, we plot the errors as a function of the number of splits for various floating point representations. When using NVIDIA's Tensor Cores (FP16 with FP32 accumulation), four to five splits (10 to 15 multiplications) are sufficient for a converged result; the difference in the absolute error in the energy between 4 and 5 splits is 2.94× 10^-10 Hartree (1.34 × 10^-8 Hartree for three and four splits). By contrast, according to Fig. <ref>, fully reproducing a double precision result on random matrices required 6 splits / 21 multiplications. Hence, the Ozaki scheme allows for approximately a factor of two savings in computational cost due to the lower precision requirements for quantum chemistry applications. In all cases, the scaling scheme seamlessly handles the exponent, and hence the precision comes down to the size of the mantissa. Using plain FP16, 16-18 splits (136-171 multiplications) are required. This is a substantial increase in the number of multiplications compared to the Tensor Core version; however, future hardware (especially non-NVIDIA chips) may not offer FP16 Tensor Cores and may have different performance ratios. For a fully converged double precision result on random matrices, 20 splits / 210 multiplications were required, leading again to an approximate factor of two savings. We also compare the intermediate-sized mantissa of AMD's FP24, and find that it converges around 7 splits / 28 multiplications.
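The split-multiply-accumulate structure whose cost is being counted here can be emulated directly; the NumPy sketch below keeps the pairing rule i + j <= S + 1 and the low precision multiply with higher precision accumulation, but omits the exponent scaling and the bit-shifting that make the partial products of the real Ozaki scheme exact, so it should be read as a structural illustration rather than a faithful implementation. It also assumes that the matrix entries fit within the FP16 range.

```python
import numpy as np

def split_lowprec(A, n_splits, dtype=np.float16):
    """Peel off n_splits low-precision pieces of A; their sum carries A's leading bits."""
    pieces, rem = [], np.array(A, dtype=np.float64)
    for _ in range(n_splits):
        p = rem.astype(dtype).astype(np.float64)   # round the remainder to the low-precision grid
        pieces.append(p)
        rem = rem - p
    return pieces

def split_matmul(A, B, n_splits=5):
    """Pair products A_i B_j with i + j <= n_splits + 1, multiplied in float32
    (emulating FP16 inputs with FP32 accumulation) and summed in float64."""
    As, Bs = split_lowprec(A, n_splits), split_lowprec(B, n_splits)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Ai in enumerate(As, start=1):
        for j, Bj in enumerate(Bs, start=1):
            if i + j <= n_splits + 1:
                C += np.matmul(Ai.astype(np.float32), Bj.astype(np.float32)).astype(np.float64)
    return C
```

A routine of this shape can be passed as the multiplication hook of a purification loop, which is how the split counts discussed here translate into overall cost.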
For this system, the TRS4 algorithm required 26 calls to the Ozaki scheme multiplication routine to converge. Hence, when using Tensor Cores an overall speedup would require a time to solution ratio of 260 – 390 multiplications to one double precision diagonalization. For pure FP16, the requirement increases to 3536 multiplications. §.§ Potential to Exploit Sparsity Depending on the underlying distribution of matrix values, the splitting procedure may introduce a significant amount of sparsity. This can potentially be used to accelerate the calculation: on an A100 GPU, the FP16 tensor core performance increases from 312 TFLOPS to 624 TFLOPS for sparse matrices. We examined the final multiplication performed for the Molnupiravir system by the purification algorithm when using the Tensor Core representation (FP16 with FP32 accumulation) and 5 splits. An entry of the split matrix (after scaling) was considered to be zero if it fell below the smallest value representable by double precision. The first split of both matrices has a number of non-zeros below 40%; however, the next split is above 94%, and subsequent splits are nearly fully dense. For the pure FP16 implementation with 16 splits, the matrices are substantially more sparse: the number of non-zeros is below 3% for the first split, 7% for the second, and 14% for the third split. By the sixth split the number of non-zeros of both matrices was above 50% (as it was for all subsequent splits). Thus we anticipate that some sparsity may be exploited in future work, though it would be limited to the first set of splits. §.§ Basis Set and Size Effects We now examine some larger datasets to determine the robustness of these previous results. For this analysis, we employ our CUDA implementation of the Ozaki scheme and focus on the FP16 Tensor Core version with three, four, five, and six splits. For these calculations, we first focus on the selenite system. We examine basis sets with up to quadruple-ζ quality as well as augmented versions. As matrices coming from Gaussian basis set calculations have worse conditioning (overlap matrix) and larger spectral widths (Hamiltonian) than the in-situ optimized support functions of BigDFT, this test examines the numerical stability of our findings. The selenite ion is an anion (-2) and selenium is a fourth row element, making it a suitable test case for larger basis sets. In Fig. <ref> we plot the error for each of the basis sets with a given number of splits. We observe that for the larger basis sets, the required number of splits increases from four to five; with five splits, the error in the norm for PCSEG-3 is 1.1× 10^-6 and absolute error in energy is 3.2× 10^-8 Hartree. This is consistent with our previous results where the difference between four and five splits was small, but non-negligible. Overall, we find that our earlier findings transfer well even to challenging numerical conditions. Thus, purification with the Ozaki scheme should be applicable to routinely performed quantum chemistry calculations, and not just specialized approximate schemes. We also investigate the number of splits required as a function of system size using water clusters of increasing size computed with BigDFT. This analysis will allow us to separate the effect of basis set conditioning from matrix size. For the water clusters, the largest matrix is of dimension 1806× 1806, and for the selenite systems it was 1508× 1508 (PCSEG-3). The errors with respect to system size are plotted in Fig. <ref>.
We find a modest growth in the number of splits required with system size. The error with five splits for the largest water cluster is similar to that for the selenite system with the largest basis set; the error in the norm is 5.2× 10^-7 and the absolute error in energy is 5.8× 10^-8 Hartree. Thus, we conclude that five splits is a sufficient recommendation, though automated schemes may improve the usability of the Ozaki scheme. §.§ Comparison with the Markidis Method In the previous work of Finkelstein et al. that performed density matrix purification on NVIDIA Tensor Core units <cit.>, the precision of the result was improved using the method of Markidis <cit.>. Here, the matrix X is split into a lower and a higher precision part: X^0 = FP16[X], X^1 = FP16[X - X^0], after which the product AB can be approximated as: AB = FP32[A^0B^0 + A^0B^1 + A^1B^0 + A^1B^1]. In the Markidis method, we can include up to four terms, each refining the precision of the result. In Tab. <ref> we show the precision and the number of purification iterations required (which may increase due to low precision) for a given number of terms for various systems. We caution that the number of iterations may be made more uniform by a more sophisticated convergence test <cit.>. While comparing our CUDA and MPFR implementations of the Markidis method, we noted that a significant amount of error was introduced due to the limited exponent range. To improve this, we applied the same scaling method for the exponent ranges used in the Ozaki scheme <cit.>, and found it provided substantial improvement. For example, the 4 Terms result for Selenite/PCSEG-3 improved from an error of 1.30×10^-2 to 2.16×10^-3. Nonetheless, the MPFR result still remains more precise; in the future, it may be possible to further improve the Markidis method with a new scaling scheme. In the implementation of Finkelstein et al., they include three terms in the Markidis correction, which is well justified from our data. This three term result can be accomplished at the cost of only two multiplications because they implement a scheme that only requires squaring symmetric matrices, which means that one can exploit the relation A^1A^0 = (A^0A^1)^T even in inexact arithmetic. The benefit of the Markidis method is thus that it can substantially improve the accuracy at a low cost with FP16 Tensor Cores (especially if combined with the exponent scaling scheme). On the other hand, it does not converge to the double precision result, can't take advantage of lower precision hardware like FP16 without FP32 accumulation, and has significant errors in more challenging numerical conditions (like larger systems and basis sets). For example, for the 3 Term scheme the absolute error in energy for Molnupiravir is 4.4× 10^-5 Hartree, but it is 6.5× 10^-3 Hartree for selenite with the PCSEG-3 basis set. For this reason, the higher order approximation of the Ozaki scheme is valuable. When only the matrix square is required, the Ozaki scheme can also exploit this symmetry in the calculation of the error free terms (see Fig. <ref>) as well as other optimizations detailed by Uchino <cit.>. §.§ Mixed Precision Purification Finally, we will revisit the result of Sec. <ref> regarding the precision needed for multiplying matrices and for accumulation. The findings of Fig. <ref> indicate that the matrices used in purification may be stored in FP32 as long as the multiplication upcasts them to FP64.
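In an emulation, this observation amounts to nothing more than a down-cast of the operands before a double precision product; a minimal sketch (illustrative only) is:

```python
import numpy as np

def matmul_fp32_storage(A, B):
    """Store/communicate operands in FP32, but multiply and accumulate in FP64."""
    return np.matmul(A.astype(np.float32).astype(np.float64),
                     B.astype(np.float32).astype(np.float64))
```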
This could be utilized in current libraries that implement purification <cit.> to reduce communication and data transfer costs, even without considering the Ozaki scheme. To validate this finding, we modify the NTPoly code <cit.>, which implements the TRS4 method using sparse matrix algebra. During the multiplication process, we cast all input matrix elements down to FP32 and back to simulate the loss of precision. As test matrices, we use the Hamiltonians coming from a BigDFT calculation of a bulk silicon supercell (3240 atoms) and an NTChem calculation of a water cluster (573 molecules). We compare double precision and single precision purification against the electronic energy computed with dense diagonalization in double precision. Since NTPoly enforces sparsity in the underlying matrices by filtering small values, we report in Tab. <ref> the errors in energy when using several different thresholds. We find that the use of single precision for input matrix values has a negligible effect on the quality of the purification result. For the Silicon / BigDFT calculation, the number of purification iterations remained unchanged; for the Water / PCSEG-1 system, we did observe that the iterations increased due to stagnation. We anticipate that the results here may be further improved with a suitable scaling scheme to handle the limited exponent range. These results demonstrate how the findings of Fig. <ref> can have an immediate impact on quantum chemistry codes. § CONCLUSION In this work, we have examined the specific floating point precision requirements of a representative kernel from quantum chemistry calculations: calculation of the single particle density matrix using density matrix purification. Exploiting MPFR to emulate arbitrary mantissa sizes, we found that double precision affords an unnecessarily high level of precision. If a precision between single and double precision is used (for example, with an effective 37 bit mantissa), a reliable result can be obtained. We further identified that single precision is sufficient if accumulation is done in a higher precision; libraries that implement density matrix purification or similar algorithms may immediately exploit this fact to reduce communication and data transfer costs. Conversely, in the case of calculations with large basis sets, the exponent range of double precision can limit the quality of the final result. To further take advantage of this reduced precision requirement, we proposed the use of the Ozaki scheme with a smaller number of splits. We found that the reduced precision requirements of purification lead to a reduction in the number of multiplications needed for the Ozaki scheme by about a factor of two. In this work, we have only examined the reliability of this approach, and not implemented an optimized version. Nonetheless, on an RTX A6000 GPU the Ozaki-scheme has already been demonstrated to outperform standard double precision multiplication calls <cit.>. Furthermore, on an A100 a similar algorithm to purification was shown to require less time to solution than dense diagonalization <cit.>. Hence, a combination of the two may provide practical benefit in the short term (in particular, a node parallel version). More importantly, if future architectures are developed with an even higher ratio of low precision multiplication to FP64, our work shows that a performance improvement could be realized without sacrificing precision.
The methodology developed here may be straightforwardly applied to a number of other matrix multiplication based algorithms in quantum chemistry (particularly many-body methods like MP2 and Coupled Cluster). Software emulation of low precision results can substantially increase the time to solution; however, many practical problems remain in reach, particularly with the development of appropriate libraries for multi-precision linear algebra <cit.>. Another approach may be to use tools like veritracer to instrument electronic structure codes and measure floating point errors <cit.>. It will be particularly essential to test low precision algorithms in combination with physically motivated approximations. Skilled practitioners already know how to employ every available approximation (smaller basis sets, lower levels of theory, low-rank approximations, numerical thresholds, spatial locality, etc.) to achieve a reliable result using as few computational resources as possible. Low precision approximations will almost certainly fail to offer a better trade-off between cost and accuracy than domain-specific methods. Fortunately, the Ozaki scheme represents one low precision approximation that can accelerate calculations without sacrificing meaningful amounts of precision, making it an ideal candidate for combination with the diverse set of algorithms available in quantum chemistry. Computations were performed using resources at the Research Center for Computational Science, Okazaki, Japan (Projects: 23-IMS-C029 and 24-IMS-C151). We gratefully acknowledge members of the RIKEN R-CCS Low Precision Working Group for their advice and guidance.
http://arxiv.org/abs/2407.13239v1
20240718074925
Gravitational Wave Mixture Separation for Future Gravitational Wave Observatories Utilizing Deep Learning
[ "Cunliang Ma", "Weiguang Zhou", "Zhoujian Cao" ]
astro-ph.IM
[ "astro-ph.IM" ]
School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, 341000, China School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, 341000, China [Zhoujian Cao: ]zjcao@amt.ac.cn Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China § ABSTRACT Future GW observatories, such as the Einstein Telescope (ET), are expected to detect gravitational wave signals, some of which are likely to overlap with each other. This overlap may lead to misidentification as a single GW event, potentially biasing the estimated parameters of mixture GWs. In this paper, we adapt the concept of speech separation to address this issue by applying it to signal separation of overlapping GWs. We show that deep learning models can effectively separate overlapping GW signals. The proposed method may aid in eliminating biases in parameter estimation for such signals. Gravitational Wave Mixture Separation for Future Gravitational Wave Observatories Utilizing Deep Learning Zhoujian Cao [corresponding author] July 22, 2024 ========================================================================================================= § INTRODUCTION The field of gravitational-wave (GW) detection has witnessed remarkable progress since the first direct detection <cit.>. The third observing run (O3) of GW detection ended in spring 2020, boosting the total number of confident events to above 90, with an event rate currently standing at 1.5 per week <cit.>. However, the upcoming third-generation (3G) detectors such as Einstein Telescope (ET) <cit.> and Cosmic Explorer (CE) <cit.>, envisioned in the 2030s, promise a significant leap forward. This enables the detection rate of above 10^5 per year at cosmological distances. The surge in detection rate, along with the remarkable enhancement of sensitivity across both lower and higher frequency ranges in 3G detectors, will significantly extend the duration of signals within the sensitivity band. As a consequence, the probability of GW signals overlapping in these 3G detectors will become significant <cit.>, posing potential challenges for the GW search and parameter estimation. As early as 2009, T. Regimbau and Scott A. Hughes delved into the effects of binary inspiral confusion on the sensitivity of ground-based GW detectors <cit.>. They emphasized the necessity for rigorous data analysis to disentangle mixture signals. Since then, numerous studies have focused on analyzing the strain with mixture signals. Y. Himemoto et al., utilizing Fisher matrix analysis, explored the statistical ramifications of mixture GWs on parameter estimation <cit.>. Their findings revealed that mixture signals can introduce notable statistical errors or systematic biases, especially when the coalescence times and redshifted chirp masses of the mixture GWs are closely matched. A realistic distribution analysis further indicated that mergers occurring within a second of each other are common occurrences over a year in 3G detectors <cit.>. Modern data analysis techniques for parameter estimation typically assume the presence of a single signal amidst background noise. However, when two or more GWs are simultaneously detected, their signals overlap, creating a distorted, non-physical waveform. 
This leads the sampling software to identify parameter sets aligned with this composite waveform, rather than the individual signals <cit.>. Experimental results from P. Relton et al. demonstrated that, in most instances, current parameter estimation methods can accurately assess the parameters of one of the mixture events <cit.>. Notably, if one signal is at least three times stronger than the other, the louder signal's source parameters remain unaffected <cit.>. By applying a narrow prior on the coalescence time, obtained during the GW detection phase, it may be feasible to accurately recover both posterior parameter distributions <cit.>. Experiments conducted by E. Pizzati et al. showed that parameter inference remains robust as long as the coalescence time difference in the detector frame exceeds 1 second <cit.>. Conversely, when this time difference is less than 0.5 seconds, significant biases in parameter inference are likely to emerge <cit.>. Upon comparing the effects of mixture signals on coefficients at various post-Newtonian (PN) orders, it has been determined that, overall, the 1PN coefficient experiences the greatest impact. The findings further indicate that, although a significant proportion of mixture signals introduce biases in PN coefficients, which individually might suggest deviations from General Relativity (GR), collectively, these deviations occur in random directions. As a result, a statistical aggregation of these effects would still tend to align with GR <cit.>. Quantifying source confusion within a realistic neutron star binary population reveals that parameter uncertainty generally rises by less than 1%, except in cases where overlapping signals exist with a detector-frame chirp mass difference of ≲ 0.01 M_⊙ and an overlap frequency of ≳ 40 Hz <cit.>. Among 1 × 10^6 simulated signals, only 0.14% fall within this specific range of detector-frame chirp mass differences, yet their overlap frequencies are usually below 40 Hz <cit.>. Apart from the task of parameter estimation, several studies focus on exploring the impact of overlapping signals on gravitational wave detection. Within the CWB framework for GW searching, most signals resulting from closely merged events will only be detected as a single trigger <cit.>. In the context of the PyCBC framework and the search for binary black hole (BBH) events, it has been noted that when the relative merger time exceeds 1 second, the search efficiency diminishes by approximately 1% <cit.>. In cases where the relative merger time is less than 1 second, the search efficiency drops by 26% because most paired signals are either detected by a single trigger or not detected at all <cit.>. The biases in the estimation of the PSD will negatively impact the sensitivity of the 3G ground-based GW detectors, especially considering the large population of overlapping signals <cit.>. The confusion noise's contribution to the signal-to-noise ratio (SNR) is considerably lesser than that of the instrumental noise <cit.>. Certain studies focus on refining data processing techniques to address the challenge posed by overlapping signals. J. Janquart et al. analyze the overlapping binary black hole merger with hierarchical subtraction and joint parameter estimation <cit.>. They find that joint parameter estimation is usually more precise but comes with higher computational costs. J. Langendorff et al. first utilize normalizing flows for the parameter estimation of overlapping GW signals <cit.>. 
Compared to the traditional Bayesian method, the normalizing flow results in broader posterior distributions, whereas the Bayesian-based approach tends to become overconfident, potentially overlooking the injection <cit.>. Recently, we have proposed a novel framework (MSNRnet) aimed at accelerating the matched filtering process for GW detection <cit.>. This is achieved by incorporating deep learning techniques for waveform extraction and discrimination. However, as the waveform extraction stage solely captures one waveform, in scenarios where multiple signals overlap, there is a possibility that the MSNRnet framework may overlook one of the overlapping signals. Real-world speech communication frequently takes place in vibrant, multi-speaker settings <cit.>. To function effectively in these environments, a speech processing system must possess the capability to distinguish and separate speech from various speakers. While this endeavor comes naturally to humans, it has been exceedingly challenging to replicate in machines. However, in recent years, deep learning strategies have notably pushed the boundaries of this problem <cit.>, surpassing traditional techniques like independent component analysis (ICA) <cit.> and semi-nonnegative matrix factorization (semi-NMF) <cit.>. The mixed speech can be compared to mixed GW signals. Drawing inspiration from the task of speech separation, this study marks the first attempt to apply deep learning to GW separation. The proposed method for GW signal separation holds potential for future applications in GW search and parameter estimation. Furthermore, this work serves as a complement to the existing tasks of deep learning applied to GW data processing, including end-to-end GW signal search <cit.>, parameter estimation <cit.>, waveform or envelope extraction <cit.>, GW source localization <cit.>, and glitch classification <cit.>. Since the GW components are buried in noise, the GW separation task is more challenging than speech separation. In this work, we first explored the potential of utilizing deep learning for GW separation. We find that a mixture strain containing noise and multiple signals can be separated. § METHOD FOR GW SEPARATION In the early stages of applying deep learning to speech separation, the preprocessing phase typically involved converting mixed sound into a time-frequency representation <cit.>, isolating source bins via time-frequency masks, and synthesizing waveforms via an inverse time-frequency transform. However, challenges arose, including the optimality of Fourier decomposition and the need to handle both magnitude and phase in the complex STFT domain. This often led to methods that only adjusted the magnitude, ultimately limiting separation performance. In 2018, Luo et al. introduced the Time-domain Audio Separation Network (TasNet) <cit.>. This neural network was designed to directly model the time-domain mixture waveform through an encoder-separation-decoder framework, where the actual separation occurred at the encoder's output. The following year, they further refined TasNet, evolving it into Conv-TasNet <cit.>. The key innovation of Conv-TasNet was the use of a Temporal Convolutional Network (TCN) for the separation component, consisting of stacked one-dimensional dilated convolutional blocks. In 2020, the same team proposed DPRNN <cit.>, which incorporated a dual-path RNN for the separation phase. Later that year, J. Chen et al. enhanced DPRNN, giving birth to DPTNet <cit.>.
This advancement replaced the dual-path RNN module with a dual-path transformer module. We have utilized all three iterations of TasNet—Conv-TasNet, DPRNN, and DPTNet—for the task of GW separation. Among these, we find that DPRNN has proven to be superior to the other two methods. So, in this work, we focus on DPRNN for GW separation. Suppose that the strain captured by the interferometer, denoted as d(t), can be regarded as a combination of a noise component, n(t), and the GW component, h(t). d(t) = n(t) + h(t) For the GW separation task, the GW component comprises multiple signals, represented as h(t)=∑_i=1^Nh_i(t), where h_i(t) signifies each individual GW signal, and N signifies the overall count of GW signals existing in the analyzed data segment. In this work, we will solely focus on the scenario where there are two signals h_A(t) and h_B(t) present in the data. So d(t) = n(t) + h_A(t) + h_B(t). We aim to directly estimate h_A(t) and h_B(t) from d(t). The TasNet-like framework decomposes the signal separation task into three stages: Encoder, Separation, and Decoder, and the overall framework for GW separation is shown in Fig. <ref>. During the Encoder stage, the input signal is encoded into a hidden layer feature F. In the Separation stage, masks (M_A and M_B) for each signal component are evaluated. Subsequently, the Decoder stage utilizes these masked features to obtain the separated output as follows: h_A = Decoder(M_A ⊙ F), h_B = Decoder(M_B ⊙ F). where ⊙ denotes the Hadamard product. The Encoder, Separation, and Decoder stages can be likened to the STFT, time-frequency masking, and inverse STFT stages respectively, of signal separation utilized by the short-time Fourier transform. In the subsections that follow, we will elaborate on the three stages of GW separation. §.§ Encoder stage Suppose the Encoder receives an input signal s ∈ℝ^1 × L, where L denotes the number of time samples of the input strain. Through the Encoder stage, we get the signal feature F ∈ℝ^C × L by F = ReLU(Conv1D(s)), where in the 1D convolutional layer, C=256 filters are used, and the filter size is configured to 2. §.§ Separation stage The input of the separation stage is signal feature F and the output generates two feature masks namely M_A and M_B. The signal feature F is initially passed through Layer Normalization and a Conv1D layer, undergoing transformation into a tensor representation having a shape of ℝ^N × L where N=64 represents the number of Conv1D filters. Afterward, the tensor sequentially undergoes a segmentation operation, followed by processing through four DPRNN blocks, and concludes with an overlap-add operation. In the segmentation step, the 2D tensor undergoes a transformation into a 3D tensor through sub-frame alternation. This transformed tensor is then relayed to a stack of DPRNN blocks, where both local and global modeling are alternately and interactively employed. Upon completion of DPRNN processing, the output from the final layer is conveyed to a 2D convolutional layer and subsequently reverted to two 2D tensors via the Overlap-Add operation. These tensors are then simultaneously processed through two distinct convolutional modules equipped with different activation functions: Tanh and Sigmoid. Following this, the tensors are combined and subjected to a ReLU activation function, ultimately yielding two masks, designated as M_A and M_B. §.§.§ Segmentation and Overlap-Add Fig. <ref> shows the flow chart of the Segmentation and Overlap-Add step in the separation stage. 
Let the input of the segmentation be a 2D tensor F and the output of the segmentation be a 3D tensor T. For the segmentation stage, we first split the 2D tensor into S small tensors (D_i ∈ ℝ^N × K, i ∈{1, 2, …, S}). We then concatenate all the small 2D tensors together to form a 3D tensor T = [D_1, D_2, …, D_S] ∈ ℝ^N × K × S. In this work K = 250 and S = 134. Denote the output of the last DPRNN block as T_B+1 ∈ ℝ^N × K × S; then the Overlap-Add step can be seen as the inverse process of the Segmentation step. It combines the S 2D tensors to form the output Q ∈ ℝ^N × L. Initially, we split the 3D tensor into S 2D tensors and align them according to their positions in time. Following this, we add the S 2D tensors together and obtain one 2D tensor. §.§.§ DPRNN block The segmentation output T is subsequently forwarded to a stack consisting of 4 DPRNN blocks. Each block maps a 3D tensor into another 3D tensor of the same shape. Let’s take the map T_i → T_i+1 as an example to illustrate the calculation process of a DPRNN block. The flow chart depicting the DPRNN block is illustrated in Fig. <ref>. Initially, the input tensor is processed through a local modeling block, followed by a global modeling block. The key distinction between these two blocks lies in their approach to signal slicing. Specifically, the local modeling block slices the 3D tensor based on the third index, whereas the global modeling block performs slicing using the second index. For brevity, we only detail the mathematical expression pertaining to local modeling in this context. Suppose the input of the local modeling is T_i and the output is T̂_i. We first pass each divided chunk through a bidirectional LSTM block and concatenate the results together to get a tensor U_i ∈ ℝ^H × K × S. In this work, we set H to 256. U_i = Concatenate_j [BiLSTM(T_i[:,:,j])], where T_i[:,:,j] ∈ ℝ^N × K is the sequence defined by chunk j. We then apply a fully connected layer to the tensor U_i and obtain Û_i ∈ ℝ^N × K × S as follows: Û_i = Concatenate_j [G U_i[:,:,j]], where G ∈ ℝ^N × H. Then layer normalization is applied to Û_i as follows: LN(Û_i) = (Û_i - μ(Û_i))/√(σ(Û_i)+ϵ) ⊙ z + r, where z, r ∈ ℝ^N × 1 are the rescaling factors, ϵ is a small positive number for numerical stability, and μ(·) and σ(·) represent the mean and variance operators, respectively. Then we get T̂_i as follows: T̂_i = T_i + LN(Û_i). Feeding the 3D tensor T̂_i to the global modeling block, we then get the output of the DPRNN block, T_i+1. §.§ Decoder stage The Decoder stage maps the masked encoded feature F_M_i = M_i ⊙ F ∈ ℝ^C × L to a separated signal. Each element in F_M_i (which can be likened to feature values) may be viewed as a component of a hidden vector (comparable to feature vectors) at a specific time. h̃_i = ConvTranspose1d(F_M_i), The hidden vectors can be regarded as the adjustable parameters of the transposed convolutional layer. This layer accepts C input channels and outputs a single channel. Its purpose is to decrease the channel count of the masked encoded features from C to 1. By configuring the kernel size as 2, stride as 1, and padding as 0, the transposed convolution preserves the length of the time series at L. As a result, the masked encoded features are reconstituted into a one-dimensional time series, denoted as h̃_i ∈ ℝ^1 × L. § DATA FOR TRAINING AND TESTING In this paper, we concentrate on the Einstein Telescope, which could potentially consist of three detectors arranged in a triangular configuration. For simplicity, we limit our analysis to just one of these detectors.
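As a compact summary of the encoder-separation-decoder pipeline described above, a schematic PyTorch-style sketch is given below; the encoder width and kernel size follow the text, the mask network is only a placeholder standing in for the segmentation, DPRNN blocks, and overlap-add, and all names and remaining choices are illustrative rather than a description of our actual implementation.

```python
import torch
import torch.nn as nn

class SeparatorSkeleton(nn.Module):
    """Schematic encoder-mask-decoder skeleton; DPRNN internals are replaced by a placeholder."""
    def __init__(self, enc_channels=256, kernel_size=2, n_sources=2):
        super().__init__()
        self.encoder = nn.Conv1d(1, enc_channels, kernel_size, stride=1)
        # Placeholder for the mask network (segmentation + DPRNN blocks + overlap-add).
        self.mask_net = nn.Sequential(
            nn.Conv1d(enc_channels, enc_channels * n_sources, 1), nn.ReLU())
        self.decoder = nn.ConvTranspose1d(enc_channels, 1, kernel_size, stride=1)
        self.n_sources = n_sources

    def forward(self, mixture):                       # mixture: (batch, 1, L)
        feats = torch.relu(self.encoder(mixture))     # encoded features
        masks = self.mask_net(feats)                  # one mask per source
        masks = masks.view(mixture.size(0), self.n_sources, feats.size(1), -1)
        return [self.decoder(masks[:, i] * feats) for i in range(self.n_sources)]
```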
We utilize the PyCBC package <cit.> for synthesizing data, which aids in training, validation, and testing processes. The strain captured by the detector can be represented as a combination of noise and two mixture signals: n(t) + h_A(t) + h_B(t), where n(t) signifies the noise component. This noise is generated using the power spectral density (PSD) linked to the Einstein Telescope, which offers insights into the detector's sensitivity at various frequencies. Specifically, we use EinsteinTelescopeP1600143 to simulate this noise. Both h_A(t) and h_B(t) are generated through a linear combination of h_+(t) and h_×(t), which are accurately modeled by SEOBNRv4. In our waveform simulation, the masses of the two black holes lie in the range (10M_⊙, 80M_⊙). The dimensionless spin is randomly sampled within the interval (0, 0.998). Additionally, the declination and right ascension are uniformly sampled across the entire sphere. During the simulation of h_+(t) and h_×(t), the luminosity distance from the astrophysical source to Earth is fixed at 4000 Mpc. In the training phase, the amplitudes of h_A(t) and h_B(t) undergo random rescaling to align with two randomly generated signal-to-noise ratios (SNRs) falling between 5 and 20. Furthermore, the peak amplitude times of h_A(t) and h_B(t) are randomly positioned between 50% and 95% of the designated time window, which spans a duration of 4 seconds. The entire simulation operates at a sampling frequency of 4096 Hz. § PERFORMANCE OF THE GW SEPARATION NETWORK Previous studies examining data processing of overlapping gravitational wave (GW) strains have primarily focused on how GW overlapping affects traditional GW data processing methods, such as matched filtering for GW detection <cit.> and Bayesian posterior sampling for parameter estimation <cit.>. Recently, the normalizing flow has emerged as a new technique for parameter estimation of overlapping GW strains <cit.>. In our study, we propose the utilization of signal separation via deep learning for the analysis of overlapping GW strains. The gravitational wave (GW) separation network can be considered a parameterized system. The network's output includes the waveforms of the estimated clean gravitational wave signals. To optimize the performance of the proposed model, we train it using utterance-level permutation invariant training (uPIT) <cit.>, aiming to maximize the scale-invariant signal-to-noise ratio (SI-SNR) <cit.>. SI-SNR is defined as: s_target = ⟨h̃, h⟩ h / ||h||^2, e_noise = h̃ - s_target, SI-SNR := 10 log_10(||s_target||^2 / ||e_noise||^2), where h̃ ∈ ℝ^1 × L and h ∈ ℝ^1 × L are the estimated and target clean sources respectively, L denotes the length of the signals, and h̃ and h are both normalized to have zero-mean to ensure scale-invariance. During the training phase, the Adam method is used. A learning rate of 10^-5 is established. The system undergoes 20 epochs of training. During the training stage, we assume that the peak time of signal A lags behind that of signal B. In other words, typically only the inspiral stage of signal A is disrupted, whereas signal B experiences interference throughout its entire duration, encompassing its inspiral, merger, and ringdown stages. In this section, we explore the performance of the GW separation network. Prior research <cit.> has established that the accuracy of parameter estimation for the two sources can be notably influenced by both the peak time difference and the SNR difference.
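Written out explicitly, the SI-SNR objective and the two-source uPIT rule defined above take only a few lines; the following NumPy sketch (with illustrative names, not our training code) is one way to compute them:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR (dB) between an estimated and a reference 1-D waveform."""
    est = est - est.mean()
    ref = ref - ref.mean()
    s_target = np.dot(est, ref) * ref / (np.dot(ref, ref) + eps)
    e_noise = est - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps) / (np.dot(e_noise, e_noise) + eps))

def upit_loss(est_A, est_B, ref_A, ref_B):
    """Utterance-level PIT for two sources: keep the better of the two assignments."""
    direct = 0.5 * (si_snr(est_A, ref_A) + si_snr(est_B, ref_B))
    swapped = 0.5 * (si_snr(est_A, ref_B) + si_snr(est_B, ref_A))
    return -max(direct, swapped)   # quantity to be minimized during training
```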
Our study examines how these two factors specifically affect GW separation. Fig. <ref> illustrates an example of overlapping signal shapes, considering variations in peak time differences (a) and signal-to-noise ratio (SNR) differences (b). In subsequent sub-sections, we will introduce noise to these waveforms to produce simulated strain data, and then evaluate the performance of the GW separation model using this simulated data. From this figure, it is evident that, in most scenarios, the near merger and ringdown stages of signal A remain unaffected, whereas all stages of signal B appear blurred. The following subsections will demonstrate that despite the blurring of signal B and the inspiral stage of signal A, in most cases, the waveforms of both signal A and signal B can often be accurately reconstructed. §.§ Impact of peak time difference on the GW separation In this subsection, we elaborate on the influence of peak time disparities on GW separation. We produce three elements constituting a single strain: noise, signal A, and signal B. The source parameters of signal A and signal B are the same as the waveform shown in Fig. <ref>. With signal A peaking at 3.7 seconds within the entire strain window, we adjust the peak time of signal B to generate eight distinct waveforms. These waveforms exhibit time differences between the peaks of signal A and signal B ranging from -0.7 s to 0 s. By combining these three components, we synthesize eight unique strains. Subsequently, we subject these strains to the GW separation network and analyze the outputs. Fig. <ref> displays the individual outputs corresponding to each of the eight strains. To measure the separation performance, we utilize the overlap between the two separated signals and the two original signals. The overlap of signal h and h can be written as overlap(h,h̃)=∫ h(t)h̃ (t)dt/√(∫ h^2(t)dt∫h̃^2(t) dt) From Fig. <ref> we can see that all eight strains have been successfully separated. Surprisingly, in extreme situations where the peak time of signal A and signal B are the same, the overlaps of both signal A and signal B are greater than 0.95. Fig. <ref> presents a single case study demonstrating the effect of peak time difference on GW separation. Here, we undertake a comprehensive statistical analysis to investigate the broader influence of peak time disparities on the process of GW separation. To this end, we have generated eleven sub-test-datasets, with the sole difference among them being the peak time disparities, specifically {-1.0 s, -0.9 s, -0.8 s, -0.7 s, -0.6 s, -0.5 s, -0.4 s, -0.3 s, -0.2 s, -0.1 s, 0 s}. Each of these sub-datasets comprises 1000 samples, ensuring consistency in noise distribution and other parameter distribution across all datasets. The stack plot in Fig. <ref> illustrates the distribution of separated signals based on their relative merger time (T_B - T_A). Please note that if the overlap between the isolated signal and the actual injected signal exceeds 0.9, we consider the signal to be successfully isolated. This figure reveals that in most scenarios, both signal A and signal B are effectively separated. Notably, even in the most extreme circumstance, where the merger time of signal A and signal B coincide, over 80% of the samples are still accurately separated, while approximately 10% of the samples yield successful separation of only one of the two injections. Approximately 5% of the samples show unsuccessful separation for both signal A and signal B. 
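For reference, the overlap criterion used throughout this section reduces to a few lines of NumPy for uniformly sampled time series (the dt factors cancel in the ratio); the function name is ours:

import numpy as np

def overlap(h_injected, h_separated):
    # normalized time-domain overlap; 1.0 means identical waveform shapes
    h = np.asarray(h_injected, dtype=float)
    h_est = np.asarray(h_separated, dtype=float)
    return np.sum(h * h_est) / np.sqrt(np.sum(h**2) * np.sum(h_est**2))

# a signal is counted as successfully separated when overlap(...) > 0.9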
These results further underscore the exceptional performance of our model in denoising and separating mixed signals. The model effectively distinguishes overlapped signals under different peak time difference conditions, achieving high-quality separation results in the majority of cases. This highlights its robustness and capability in signal-processing tasks. §.§ Impact of SNR difference on the GW separation In the preceding section, we discussed the impact of peak time differences on the separation of gravitational wave signals. In practical scenarios, the amplitudes of the individual components within the entangled signals exhibit diversity. Herein, we delve into the influence of signal strength on GW disentanglement. Signal strength can be quantified by the matched signal-to-noise ratio (SNR). To be specific, we maintain an SNR of 10 for signal A while adjusting the SNR differential between signal B and signal A in increments of 2, spanning from -4 to 10. Consequently, the SNRs for signal B are adjusted to the following values: {6, 8, 10, 12, 14, 16, 18, 20}. We configure the parameters identically to those presented in Fig. <ref>. Specifically, we establish the peak time of signal A at 3.7 seconds within the strain window and set the peak time of signal B at 3.5 seconds, resulting in a peak time difference of -0.2 seconds. We then adjusted the SNR of signal B, varying it from 6 to 20. After superimposing signal A, signal B, and noise, we input the combined signal into the Gravitational Wave (GW) separation network and obtained the output. Fig. <ref> illustrates the separated and injected waveforms for both signal A and signal B. Here, we analyze the influence of signal A on the GW separation performance of signal B by the right column of Fig. <ref>. By changing the SNR of signal B from 6 to 20, the separation results of signal A almost unchanged. All the separation overlaps of signal A are greater than 0.98. When the Signal-to-Noise Ratio (SNR) of signal B is 6, we can see that the overlap between the separated signal B and the buried signal is approximately 0.82. We hypothesize that there may be two primary factors influencing the separation performance of signal B. Firstly, the SNR of signal B is significantly low, causing noise to interfere with the separation process. Secondly, both signal A and noise contribute to the decrease in separation performance. To gain a deeper understanding of the reasons behind the incorrect separation, we subtract signal A and preserve only signal B and the noise in the strain data. This modified data is then inputted into the separation model to observe the impact on the separation of signal B. We verified that the overlap of signal B is equal to 0.80, which is nearly identical to 0.82. The results suggest that the underwhelming performance observed in the separation of signal B in Fig. <ref> (a) is unrelated to the overlapping signal B, but is instead impacted by the intensity of noise. To further investigate the impact of SNR differences on separation performance and identify potential shortcomings of our model, we prepared 1,000 samples for each SNR difference value. Fig. <ref> illustrates the four separation scenarios under different SNR differences, with the x-axis representing SNR differences ranging from -4 to 10. Note that the SNR of signal A is set to 10. We set the SNR of signal B to {6, 8, 10, 12, 14, 16, 18, 20} corresponding to the SNR difference {-4, -2, 0, 2, 4, 6, 8, 10}. 
From the area chart in Figure 8, it is evident that the orange region, indicating the successful separation of both signals, occupies the majority of the area. The red region, representing scenarios where neither signal was successfully separated, remains very small. Specifically, when the SNR of signal A is fixed at 10 and the SNR of signal B is 6 or 8, the instances where only signal A is successfully separated significantly outnumber the instances where only signal B is successfully separated. This indicates that, when signal B is weaker than signal A, the model is more likely to successfully separate the signal with the higher SNR. These results suggest that further optimization is needed to enhance the model's performance in separating overlapping signals with low-SNR components. At the same time, they also confirm the robustness of the current model in most cases. § CONCLUSION In this paper, we address the challenge posed by overlapping GW signals, an issue that will become increasingly common as future GW observatories come online. We have demonstrated the feasibility of adapting speech separation techniques to the domain of GW signal separation, employing deep learning models for this task. Our findings reveal that the proposed approach can effectively disentangle overlapping GW signals across a range of peak time differences. This capability ensures robust signal identification and accurate extraction of individual GW events from a complex signal mixture. Additionally, we observed that the method performs remarkably well across a range of SNRs. Even in low-SNR scenarios, where noise levels are relatively high, the model demonstrates its ability to separate and identify GW signals with reasonable accuracy.
http://arxiv.org/abs/2407.13101v1
20240718021900
Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with an Iterative Approach
[ "Zhouyu Jiang", "Mengshu Sun", "Lei Liang", "Zhiqiang Zhang" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT Multi-hop question answering is a challenging task with distinct industrial relevance, and Retrieval-Augmented Generation (RAG) methods based on large language models (LLMs) have become a popular approach to tackle this task. Owing to the potential inability to retrieve all necessary information in a single iteration, a series of iterative RAG methods has been recently developed, showing significant performance improvements. However, existing methods still face two critical challenges: context overload resulting from multiple rounds of retrieval, and over-planning and repetitive planning due to the lack of a recorded retrieval trajectory. In this paper, we propose a novel iterative RAG method called ReSP, equipped with a dual-function summarizer. This summarizer compresses information from retrieved documents, targeting both the overarching question and the current sub-question concurrently. Experimental results on the multi-hop question-answering datasets HotpotQA and 2WikiMultihopQA demonstrate that our method significantly outperforms the state-of-the-art, and exhibits excellent robustness concerning context length. § INTRODUCTION Open-domain question answering is a task that involves providing factual responses based on extensive documents <cit.> and is of significant application in hot industry scenarios such as intelligent assistants and generative search engines <cit.>. Multi-hop question answering is one common and challenging sub-task within this field, requiring the system to integrate information to complete multi-step reasoning and answer questions <cit.>. With the rapid development of large language models (LLMs), retrieval-augmented generation (RAG) based on these LLMs has become a popular method for addressing open-domain question answering <cit.>. The typical RAG process involves using a retriever to recall documents from a corpus that are relevant to a given query and using these documents as context inputs for the LLMs to generate responses. However, when dealing with multi-hop question answering, conventional RAG techniques frequently fall short of aggregating all the critical information within a singular retrieval iteration, leading to incomplete or incorrect answers. Consequently, a new genre of iterative RAG methods that incorporate question planning has recently been developed <cit.>. These methods assess after each retrieval whether the information at hand is sufficient for answering the question. If it is not, the methods generate a sub-question for the next step and perform another retrieval, iterating this process until the question can be satisfactorily answered. Owing to the employment of multiple retrieval iterations, iterative RAG has achieved a notable improvement in multi-hop question-answering scenarios compared to the single-round RAG approaches. However, existing iterative RAG methods still encounter two principal challenges when handling multi-hop question answering. Firstly, due to multiple rounds of retrieval, iterative RAG methods have to deal with a longer context in contrast to single-round RAG methods, which consequently introduces more noise from the documents and increases the risk of the model missing key information during response generation <cit.>. 
Secondly, current iterative RAG methods are heavily dependent on the model's interpretation of retrieved documents for decision-making, lacking a concrete record of the navigated trajectory. This makes it difficult for the model to discern whether the information needed to answer the overarching question has been sufficiently gathered and whether a sub-question has already been retrieved, leading to two issues: an over-planning scenario wherein the iterative process does not stop even despite sufficient information has been retrieved, and a repetitive planning scenario wherein a sub-question that has already been retrieved is reproduced <cit.>. The two challenges mentioned previously are primarily related to the effective processing of information obtained during the retrieval phase. To tackle this, drawing inspiration from query-focused summarization <cit.>, we introduce the ReSP (Retrieve, Summarize, Plan) approach. This method not only condenses but also functionally decomposes the information accrued in each retrieval episode. Specifically, we integrate a novel LLM-based summarizer within the established iterative RAG framework and refine the iterative process. The summarizer undertakes dual roles: firstly, it compiles a summary of corroborative information from the retrieved documents for the overarching target question, termed the global evidence memory; secondly, it crafts a response for the current sub-question based on the retrieved documents, termed the local pathway memory. At the inception of each iteration, the accumulated global evidence memory and local pathway memory are combined as contextual inputs for the model's evaluation. Should the information be evaluated as adequate, the procedure advances to the generation of the final response; otherwise, a new sub-question is formulated, with the requirement that the model must not generate previously retrieved sub-questions. Our experimental findings reveal that, under uniform experimental settings, the ReSP model markedly surpasses a range of current single-round and iterative RAG approaches when evaluated on multi-hop question-answering benchmarks such as HotpotQA <cit.> and 2WikiMultihopQA <cit.>. Notably, it exhibits a substantial enhancement in performance, with an increase of 4.1 F1 score over the state-of-the-art (SOTA) on HotpotQA, and an improvement of 5.9 F1 score over the SOTA on 2WikiMultihopQA. Furthermore, we conducted a series of in-depth comparative studies to discern the effect of model size on its performance and confirmed that ReSP possesses commendable robustness to context length compared to other RAG methods. In conclusion, our contributions are as follows: * Targeting the multi-hop question-answering scenario, we propose an innovative iterative RAG approach that incorporates query-focused summarization to tackle the context overload problem resulting from multiple rounds of retrieval. In particular, we have refined the summarizer's function to compress information aimed at both the overarching question and the current sub-question, thereby optimizing issues related to over-planning and repetitive planning. * Experimental results show that our approach significantly surpasses existing methods in performance, and it exhibits considerable robustness to variations in context length. § RELATED WORKS Retrieval-Augmented Generation. Retrieval-Augmented Generation (RAG) enhances LLMs by retrieving relevant documents from external databases and integrating them into the generation process <cit.>. 
Recent work can be divided into single-round RAG <cit.> and iterative RAG <cit.> based on the number of retrieval rounds. In multi-hop question-answering scenarios, iterative RAG often achieves better results because it allows for detailed decomposition of the question. However, due to the increased number of iterations, iterative RAG faces challenges in long-context processing. § METHODOLOGY Figure <ref> illustrates our ReSP framework, which consists of four components: , , , and . Essentially, Reasoner, Summarizer, and Generator are all based on a fine-tuning-free LLM, designed to execute specific tasks using an array of carefully selected prompt engineering techniques. For specific prompt templates, please see Appendix <ref>. Our main contribution lies in the Summarizer, while the design of the other modules is similar to that of conventional iterative RAG methods. §.§ Dual-Function Summarizer As mentioned earlier, our goal is to address issues of context overload and redundant planning. To tackle context overload, a straightforward approach is to employ summarization to compress information. However, even with summarization, the model still lacks an explicit record of the planning path, which does not resolve the issue of redundant planning. During the iterative process, over-planning could arise if summaries overlook information crucial for directly addressing the overarching question, or repetitive planning might occur if the information difference between different rounds of summaries is not significant. Therefore, a more sophisticated design of the summarizer is necessary to distinguish the functions of various pieces of information. Drawing on the idea of query-focused summarization, we have designed a dual-function summarizer. Confronted with retrieved documents, this summarizer concurrently executes two tasks: producing summaries of supporting information pertinent to the overarching question and generating responses for the current sub-question, while managing two distinct memory queues–the global evidence memory and the local pathway memory. Summaries related to the overarching question are deposited into the global evidence memory, serving to explicitly signal the model to cease iterations when information is enough, thus mitigating the risk of over-planning. Concurrently, responses for the current sub-question, alongside the sub-question itself, are stored in the local pathway memory. This explicitly guides the model's recognition of the progress in the question planning path as well as the sub-questions that have been historically retrieved, preventing repetitive planning. §.§ Summary-Enhanced Iterative RAG Process Here we delineate the refined iterative RAG workflow that incorporates the dual-function summarizer. Given a query Q and a document corpus D, we initially deploy a retriever to identify the K documents from D that are most relative to Q. These documents are then directed into the summarizer for summary creation and memory queue updates (note that during the first iteration, the sub-question is essentially the overarching question, so there is no generation of response for the current sub-question). Subsequently, the contents of the two memory queues are concatenated to provide context input for the reasoner, which is responsible for determining whether the current information is sufficient to address the overarching question. 
Should it be adequate, the iterative process is halted, and the memory queues are utilized as context for the generator to produce the final answer. If the information is insufficient, the reasoner generates a subsequent sub-question Q* that is distinct from previously retrieved sub-questions based on the current context, prompting the next iteration round. § EXPERIMENTAL SETTINGS AND RESULTS §.§ Datasets We conduct experiments on two multi-hop question-answering benchmark datasets: HotpotQA <cit.> and 2WikiMultihopQA <cit.>. Following the open-sourced RAG toolkit FlashRAG <cit.>, we employ its preprocessed dataset format. For each dataset, we utilize the first 1,000 entries from the original development set for testing. We report the token-level F1 score of answer strings to evaluate the quality of the generation. §.§ Experimental Setup In our main experiment, we employ the Llama3-8B-instruct <cit.> as the base model and the E5-base-v2 <cit.> as the retriever, while utilizing Wikipedia data from December 2018 as the retrieval corpus. For the model and data links, please refer to the FlashRAG open-source repository [https://github.com/RUC-NLPIR/FlashRAG]. The model's maximum input length is set to 12,000, and the maximum output length is set to 200. For each query, we retrieve the top-5 documents from the corpus based on vector similarity as the result. The maximum number of iterations is set to 3. If the retrieval process is still in iteration after 3 attempts, the model will directly proceed to the final response generation. All experiments are conducted on 4 NVIDIA A100 GPUs. §.§ Baselines We select representative methods from single-round RAG and iterative RAG as baselines for comparison. Single-round RAG: Standard RAG directly generates answers based on all retrieved documents. SuRe <cit.> constructs and ranks summaries of the retrieved passages for each of the multiple answer candidates. RECOMP <cit.> compresses retrieved documents into textual summaries before in-context integration. REPLUG <cit.> prepends each retrieved document separately to the input context and ensembles output probabilities from different passes. Iterative RAG: Iter-RetGen <cit.> leverages the model output from the previous iteration as a specific context to help retrieve more relevant knowledge. IRCoT <cit.> guides the retrieval with Chain-of-Thought (CoT) <cit.> and in turn uses retrieved results to improve CoT. §.§ Main Results Our results on HotpotQA and 2WikiMultihopQA are displayed in Table <ref>. First, we notice that iterative RAG, especially IRCoT, demonstrates significant performance improvements compared to single-round RAG. This suggests that conducting multiple rounds of retrieval can indeed capture information more comprehensively and produce more accurate responses. Second, within single-round RAG, RECOMP, which incorporates summarization, exhibits superior performance, indicating that summarization is an effective method of information compression even within single-round RAG. These findings validate the rationale behind our approach, which combines multi-round retrieval with summarization. Our method, ReSP, achieves significant improvements on both datasets, outperforming the SOTA by 4.1 F1 score on HotpotQA and 5.9 F1 score on 2WikiMultihopQA, surpassing a range of existing iterative RAG methods. This demonstrates the effectiveness of the approach we propose. 
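To make the evaluated pipeline concrete, the following pseudocode summarizes the ReSP loop described in the Methodology section. The module interfaces (retriever, summarizer, reasoner, generator) are hypothetical placeholders for the prompted LLM calls, not an actual API:

def resp_answer(question, retriever, summarizer, reasoner, generator, k=5, max_iters=3):
    global_evidence = []   # summaries supporting the overarching question
    local_pathway = []     # (sub-question, local answer) pairs already retrieved
    sub_q = question       # in the first iteration the sub-question is the question itself
    for it in range(max_iters):
        docs = retriever(sub_q, top_k=k)
        # dual-function summarizer: evidence for the overarching question ...
        global_evidence.append(summarizer.summarize_for(question, docs))
        # ... and a response to the current sub-question (skipped in the first iteration)
        if it > 0:
            local_pathway.append((sub_q, summarizer.answer(sub_q, docs)))
        context = {"evidence": global_evidence, "pathway": local_pathway}
        if reasoner.is_sufficient(question, context):
            break
        # the next sub-question must differ from those already in the pathway
        sub_q = reasoner.next_subquestion(question, context)
    return generator(question, global_evidence, local_pathway)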
§ EMPIRICAL ANALYSIS To further analyze ReSP, we conduct comparative experiments to investigate the impact of the base model size on modules' performance. Additionally, we examine the robustness of the method to context length. Through case studies, we demonstrate that ReSP can mitigate the issues of over-planning and repetitive planning. §.§ Impact of the base model size ReSP features a modular design, wherein each module works independently, allowing for the use of different base models to collaborate. To provide empirical guidance for model selection in practical applications, we test how the size of the model affects the performance of each module. We substitute Llama3-70B-Instruct for Llama3-8B-Instruct and use this larger model as the base for the Reasoner, Summarizer, and Generator modules respectively, comparing the effect changes with the original results. Table <ref> presents our experimental results. Firstly, regarding the reasoner module, the changes are inconsistent across the two datasets, with an improvement on HotpotQA but a decline on 2WikiMultihopQA. The reason for this inconsistency is that 2WikiMultihopQA has questions with more logical hops compared to HotpotQA. A larger model is likely to give more detailed planning steps, leading to a failure to obtain all the necessary information to answer the question within the set maximum number of iterations, hence causing a drop in performance. Secondly, for the summarizer module, we observe that using a larger model size does not result in performance improvements; in fact, there is a significant decline on 2WikiMultihopQA. Upon reviewing the summarization logs, we find that Llama3-70B-Instruct is more lenient in discerning relevance. It tends to extract information that seems related but is actually irrelevant to the question, which can disrupt the planning and ultimately the generation of responses. Lastly, regarding the generator module, we observe consistent improvements when using Llama3-70B-Instruct, which suggests that even when provided with clear evidence, a model with stronger semantic comprehension capabilities still aids in generating more accurate responses. In summary, in real-world applications, it is advisable to allocate a larger base model to the generator, as long as the available resources allow for it. However, for the reasoner module, if the allowable number of iterations is low, there is no need to use a larger base model. The summarizer also does not require a larger base model. §.§ Robustness to context length To determine whether ReSP can address the issue of context overload, we adjust the number of documents retrieved per iteration and observe the changes in performance. Our comparison involves using standard RAG and IRCoT as control groups. These experiments are carried out on the HotpotQA dataset. Results are shown in Figure <ref>. Firstly, when the number of documents retrieved per iteration k is set to 3, all three methods experience a notable decrease in performance. This indicates that the adequacy of information retrieved in a single round significantly affects both single-round and iterative RAG methods. Therefore, it is necessary to extend the context length during application appropriately. When k is greater than 5, the standard RAG and IRCoT exhibit a performance degradation trend. Particularly, IRCoT, which utilizes iterative retrieval, suffers from a more significant performance drop due to the accumulation of retrieved information. 
This demonstrates that context overload has a pervasive impact on existing RAG methods. Our method demonstrates exceptional robustness to context length, delivering consistent performance regardless of whether k is set to 15 or 5. This is because we extract key information in each iteration, effectively maintaining a stable and concise context for the generator. As a result, the generator remains unaffected by changes in the length of retrieved documents during the process. §.§ Case Study We present evidence of ReSP addressing over-planning and repetitive planning by comparing cases of IRCoT and ReSP on two questions, as shown in Table <ref>. In the first case, the retrieved documents are consistent since the initial retrieval question is the same. However, the two models make different decisions about what to do next. IRCoT, which integrates information processing and planning in one step, has a higher level of task complexity. It mistakenly misses information related to , which leads the model to decide that further retrieval is needed, resulting in over-planning. On the other hand, ReSP accurately and comprehensively acquires the supporting facts related to the overarching question through the summarizer. Consequently, the reasoner determines that the question can be answered, and the generator produces the correct response, thereby halting the iteration after the first round of retrieval. In the second case, at the end of the first round of retrieval, both models make similar decisions due to the absence of information related to the main subject, in the documents: they both query for documents related to . However, due to limitations in document coverage or retriever capability, no relevant documents on are found. At this point, the two models propose different sub-questions. Lacking a recorded retrieval trajectory, IRCoT can only make judgments based on the current information, thus continuing to query , leading to repetitive planning. In contrast, ReSP, which avoids outputting previously retrieved sub-questions, adjusts the retrieval subject and produces a sub-question about , thereby avoiding repetitive planning. § CONCLUSION In this work, we propose an iterative RAG approach that incorporates query-focused summarization. By employing a dual-function summarizer to simultaneously compress information from retrieved documents targeting the overarching question and the current sub-question, we address the context overload and redundant planning issues commonly encountered in multi-hop question answering. Experimental results demonstrate that our method significantly outperforms other single-round and iterative RAG methods. Furthermore, we hope that our empirical analysis will aid the community in practical applications. § PROMPT TEMPLATES OF MODULES The prompt templates of different modules in ReSP are shown in Table <ref>.
http://arxiv.org/abs/2407.12554v1
20240717133507
Shock cooling emission from explosions of massive stars: III. Blue Super Giants
[ "J. Morag", "N. Sapir", "E. Waxman" ]
astro-ph.HE
[ "astro-ph.HE" ]
§ ABSTRACT Light emission in the first hours and days following core-collapse supernovae is dominated by the escape of photons from the expanding shock-heated envelope. In preceding papers, we provided a simple analytic description of the time-dependent luminosity, L, and color temperature, T_ col, as well as of the small (≃10%) deviations of the spectrum from blackbody at low frequencies, hν< 3T_ col, and of `line dampening' at hν> 3T_ col, for explosions of red supergiants (RSGs) with convective polytropic envelopes (without significant circum-stellar medium). Here, we extend our work to provide similar analytic formulae for explosions of blue supergiants with radiative polytropic envelopes. The analytic formulae are calibrated against a large set of spherically symmetric multi-group (frequency-dependent) calculations for a wide range of progenitor parameters (mass, radius, core/envelope mass ratios) and explosion energies. In these calculations we use the opacity tables we constructed (and made publicly available), that include the contributions of bound-bound and bound-free transitions. They reproduce the numeric L and T_ col to within 10% and 5% accuracy, and the spectral energy distribution to within ∼20-40%. The accuracy is similar to that achieved for RSG explosions. radiation: dynamics – shock waves – supernovae: general § INTRODUCTION In core collapse supernovae (SNe) explosions, a radiation mediated shock (RMS) traverses outwards through the stellar progenitor, heating and expelling material as it passes. If no significant circumstellar material (CSM) is present around the star, arrival of the shock at the surface produces a hard UV/X-ray ∼10^45 erg s^-1 `shock-breakout' emission, lasting from tens of minutes to an hour. The breakout pulse is then followed in the coming hours and days by thermal UV/optical `shock-cooling' emission, caused by diffusion of photons out of the shock-heated stellar ejecta. Typical luminosities and temperatures during shock-cooling are of the order 10^42-10^44 erg s^-1, and 1-10 eV. As the photons diffuse out, deeper parts of the ejecta are gradually exposed over time <cit.>. In order to constrain the properties of the progenitor star, it is helpful to have high cadence multi-band observations in the first hours of shock-cooling <cit.>. Among these measurements, ultraviolet observations are especially important, as they are closer to the thermal emission peak, and can be used to determine the emission temperature and the UV extinction self-consistently <cit.>. Combined with an accurate theoretical model, these measurements can be used to reproduce the progenitor and explosion parameters, including radius, surface composition, explosion energy per unit mass, and the extinction. Analytic models are especially important for solving the "inverse problem" of inferring system parameters and uncertainties from the observed spectral energy distribution (SED). Catching supernovae within the first hour presents a practical challenge, and few such observations have been achieved <cit.>. Existing and upcoming observatories, such as the Zwicky Transient Facility (ZTF) <cit.>, the upcoming Vera Rubin Observatory <cit.>, and the expected launch of the wide-field UV space telescope ULTRASAT <cit.> will greatly increase the quantity and quality of early measurements, enabling a systematic study. 
Emission during shock-cooling is amenable to accurate modeling in the case of `envelope breakout' (i.e. in the absence of CSM with significant optical depth), largely because the system is near local thermal equilibrium (LTE) at this time, the hydrogen is highly ionized, and the opacity is dominated over much of the frequency range by electron scattering. As explained in our earlier papers, our results are accurate up to the Hydrogen recombination time, roughly when T_ col drops to 0.7 eV. In preceding papers, <cit.>, we provided analytic formulae describing the shock-cooling emission following core-collapse in red supergiant stars (RSG). A recent study of a large set of type II SN observations <cit.> finds that our model accounts well for the early multi-band data of 50% of observed SNe, corresponding to 70% of the intrinsic SN distribution after accounting for luminosity bias (the others are likely affected by thick CSM). This agreement enables the inference of progenitor radius, explosion velocity, and relative extinction by using the formulae provided in and . In this paper, we extend our work to provide similar analytic formulae for explosions of blue supergiants (BSG) with radiative polytropic envelopes. The analytic formulae are calibrated against a large set of 1D multi-group (frequency-dependent) calculations for a wide range of progenitor parameters: radii in the range R=10^12-3×10^13 cm, explosion energies in the range E=10^50-10^52 erg, core/envelope mass ratios in the range M_ c/M_ e=0.1-3. We have demonstrated in earlier work , that the shock-cooling emission characteristics are insensitive to metallicity, to deviations from ideal `polytropic' structure, and to core radius variations. We, therefore, do not explore the dependence on these parameters in this paper. Our numeric calculations use the opacity tables we constructed (and made publicly available), that include the contributions of bound-bound and bound-free transitions. The paper is structured as follows. In  <ref> we summarize the analytic results of earlier work that we use in this paper. We concisely describe our numerical code and opacity tables in  <ref>. In  <ref>, we provide our calibrated analytic formulae, and in  <ref> we assess their accuracy by comparisons to numeric results. In  <ref> and  <ref>, we show that our results are not sensitive to the effects of "expansion opacity" and to deviations from LTE excitation and ionization. In  <ref> we compare our results to the results of similar STELLA simulations. The results are summarized and discussed in  <ref>. A complete description of our analytic formulae for both BSG and RSG progenitors is given in the appendix. § EARLIER ANALYTIC RESULTS We summarize below the analytic results of earlier work that we use in this paper. In the case of a polytropic density profile, the density near the stellar edge is given by ρ_0 = f_ρρ̅δ^n, where δ≡ (R-r)/R, r is the radial coordinate, the total average progenitor density is ρ̅≡ M/(4π R^3/3), f_ρ is a constant of order unity, and n=3/2,3 for convective and radiative envelopes, respectively. To avoid confusion, all values dependent on the choice of n=(3/2,3) are provided in this paper for BSG's (n=3) only, except for the appendix that summarizes the formulae for both cases. 
Following core-collapse, the outward-propagating shock accelerates down the steep density gradient, with shock velocity increasing in a self-similar manner according to <cit.> v_ sh = v_ s∗δ^-β_1 n, where β_1=0.19, and v_ s∗ is approximately given by <cit.> v_ s∗≈ 1.05 f_ρ^-β_1 v_∗, v_∗≡√(E/M). Photons from the RMS will escape the shock and the star, i.e. will "breakout" when the scattering optical depth ahead of the shock approaches the optical depth across the width of the shock, τ = c/ v_ sh <cit.>. This occurs at δ_ bo = (n+1)c /κρ_ bo v_ bo R, where κ is the opacity and the shock velocity and pre-shock density at breakout, v_ bo and ρ_ bo, are defined by ρ_0=ρ_ bo ( v_ boτ/c)^n/(1+n), v_ sh = v_ bo (v_ boτ/c)^-β_1 n/(1+n), where τ is the scattering optical depth from r to the stellar edge, r=R. For BGS's, <cit.> ρ_ bo = 5.6 × 10^-9 M_0^0.13 v_∗,8.5^-0.87 R_12^-1.26κ_0.34^-0.87 f_ρ^0.29 g cm^-3, v_ bo/v_∗ = 8.6 M_0^0.16 v_∗, 8.5^0.16 R_12^-0.32κ_0.34^0.16 f_ρ^-0.05, and δ_ bo=0.088 R_13^0.58 (f_ρ M_0 v_ s*,8.5 κ_0.34)^-0.29. Here, R= 10^12R_12 cm, κ=0.34 κ_0.34 cm^2 g^-1, v_∗=v_∗,8.5 10^8.5 cm s^-1, v_ s*=v_ s*,8.5 10^8.5, and M=1 M_0 M_⊙. The duration of the breakout pulse is related to the shock crossing time of the breakout layer as δ_ boR/ v_ bo =(n+1)c /κρ_ bo v_ bo^2= (n+1)t_ bo=350 ρ_ bo,-9^-1κ_0.34^-1 v_ bo,9^-2 s, where ρ_ bo=10^-9ρ_ bo,-9 g cm^-3 and v_ bo= 10^9 v_ bo,9 cm s^-1. The observed pulse duration may be longer due to light travel time effects, which "smear" the pulse over t∼ R/c. For later use we recast equation (9) of in terms of r and t, ρ(r,t) = 1.26 × 10^-11 f_ρ M_0 v_ s*,8.5^7.18 r_14^-10.18 t_ d^7.1 8 g cm^-3, wherer=r_14 10^14 cm. Alternatively, using equation <ref>, ρ(r,t) = 3.64 × 10^-12 R_13^2 v_ bo,9^6.18κ_0.34^-1 r_14^-10.18 t_ d^7.18 g cm^-3. Using the self-similar diffusion profile of <cit.> we have T(r,t) = 4.66 R_13^1/4 (f_ρM_0)^0.29 v_ s*,8.5^2.25κ_0.34^0.04 t_ d^1.71 r_14^-2.79 eV, T(r,t) = 2.91 R_13^0.75 v_ bo,9^1.88ρ_ bo,9^-0.08κ_0.34^-1/3 t_ d^1.71 r_14^-2.79 eV. Assuming a blackbody spectral distribution, the emitted luminosity is then given by, L_ BB=L×π B_ν(T_ col)/σ T_ col^4. In we provided a slight modification to the blackbody formula, taking into account the modification due to line suppression in the ultraviolet, L_ν = L ×min[ π B_ν(T_ col)/σ T_ col^4 , π B_ν(0.74 T_ col)/σ (0.74 T_ col)^4]. § NUMERICAL CODE Our gray and multigroup numerical codes are described in detail in and , along with multiple tests of the codes comparing the numeric results to analytic ones, e.g. for the steady planar RMS and shock breakout problems <cit.>. We provide below a brief description of these codes and of our numercial analysis method, which are identical to those used (and described in detail) in . We solve the one-dimensional spherical Lagrangian hydrodynamics equations for ideal fluid flow coupled with radiative transfer under the diffusion approximation. Our gray simulations assume radiation pressure dominated flow and solve diffusion using constant electron scattering opacity, κ_ es=0.34 g cm^-2. The color temperature of the emitted thermal flux from the gray simulations is determined in post-processing using Rosseland mean opacity as described in  <ref>. In the multigroup simulations, we calculate both the plasma energy density e and the frequency binned photon energy density u_ν. We include frequency-dependent radiative emission/absorption and diffusion. 
For emission/absorption, we use a frequency-binned average of the opacity extracted from the high-frequency-resolution table. For the diffusion, we use a binned Rosseland mean. For all simulations, we place a static reflective boundary at the inner surface and a free boundary with an Eddington parameter at the outer edge. The numerical analysis is carried out using a succession of simulations, with each simulation starting later in time using a snapshot of previous simpler simulations. We begin a hydrodynamics-only simulation of a "thermal bomb" placed near the center of a simplified progenitor star with a uniform high-density core and radiative polytropic envelope structure. When the shock reaches a scattering optical depth τ=(10-24)c/ v_ sh, we begin a gray diffusion simulation. Later, at time t=(1-2)R/c following breakout, we begin a multigroup simulation based on the instantaneous gray diffusion simulation profiles. All simulations are carried out until past the validity time (of H recombination) for later comparison. Our Lagrangian spatial resolution grid is as described in , with 4000-8000 cells in the hydrodynamic only simulations, 1600-3200 cells for the gray diffusion simulations, 50-100 spatial cells[the latter do not describe the breakout pulse and as a result have lower resolution requirements] and up to 256 photon groups for multigroup simulations. As before, we check for convergence, though based on convergence results of earlier calculations, we do this only for a sub-sample of the simulations. Changing both the optical depth of the outermost cell from τ=10^-2 to 10^-3 and the cell count from 50 to 100 leads to small, 1-few percent deviations, with 10% deviations for some frequencies in extreme cases with no effect on the calibration of our analytic formulas. Following breakout, the instantaneous velocity of the ejecta is monotonically increasing outwards except for the breakout shell, where the velocity profile is flat or even mildly decreasing. The decreasing profile near the edge leads to the formation of a high density shell (which may be unstable), which does not affect the escaping radiation significantly but can lead to numerical issues. BSG's exhibit a more pronounced density inversion, as the velocity decrease can be up to ∼10% of the maximum velocity (compared with 0.1% for RSG's). To mitigate numerical issues, as in and , we add an artificial viscosity term q proportional to the velocity difference across cells. Its strength is dependent on spatial resolution, but it always satisfies q≪ u, where u is the energy density. In a few cases of our analysis, we combine gray simulations at low spatial resolution (50 cells - largest q) with our higher resolution gray sims (1600-3200 cells), specifically in Fig. <ref>. Our quoted RMS agreement between the analytic formula and simulation is unaffected by this choice of resolution. §.§ Opacity table Our frequency-dependent opacity table is calculated assuming LTE plasma ionization and excitation and includes free-free, bound-free, and bound-bound components. The latter is based on the Kurucz line list <cit.> that is experimentally verified near recombination temperatures. In and , we showed that for pure Hydrogen and for solar mix compositions, our table provides similar results (Rosseland mean, frequency-dependent opacity) to those of TOPS, with the exception of bound-bound lines, which can be important for our problem. 
We also tested convergence of the results with respect to the resolution of the underlying density, temperature, and frequency grid, finding convergence using an underlying grid of Δν/ν∼10^-5. Our opacity code is publicly available for use on github <cit.>. When extracting the color temperature in the gray simulations, we use TOPS in our analysis for temperatures above 4 eV, and our own table below 4 eV, as was done in . In the multigroup simulations, we use our own opacity table and not TOPS. As discussed in , at later times it is more appropriate to use our own table, while at earliest times (T>4 eV) the observable UV/optical lightcurve is minimally affected by the presence of atomic lines. We previously showed weak sensitivity to metallicity so use here only solar mix values (comprised of 10 important elements up to Fe). §.§ Determining T_ col and T_ col,ν in post-processing We repeat the procedures in and for extracting the color temperature by "post-processing" of our numerical results. For gray simulations, T_ col(t) is obtained from the hydrodynamic profiles as the plasma temperature at the "thermal depth", r_ Th(t), from which photons diffuse out of the envelope without further absorption. Following and , we approximate r_ Th(t) by ∫_r_ Th(t)ρ√(3κ_ R(κ_ R-κ_ es))dr'=1. Here, κ_ es(ρ,T) is the electron scattering opacity, accounting for the ionization fraction, κ_ R(ρ,T) is Rosseland mean opacity, and (κ_ R-κ_ es) represents the effective absorption opacity, κ_ abs. For later use, we also define the frequency dependent color temperature T_ col,ν at the thermal depth r_ col,ν by τ_⋆,ν(r=r_ col,ν)≡∫_r_ col,ν^∞ρ√(3κ_ abs,ν(κ_ abs,ν+κ_ es))dr'=1, where the abs, es and ν subscripts indicate absorption, (electron) scattering and frequency dependence, respectively. § CALIBRATED ANALYTIC MODEL §.§ Gray Formulae As was done in for the gray formula in red supergiants, we combine the exact planar phase solutions of with the approximate equations for the spherical phase of / for BSG's. The result is very similar to equation (33) in , with the addition of a weak dependency on progenitor radius, L(t)=L_ planar + L_ RW× R_13^0.1 A exp[ -( at/t_ tr)^α], T(t) = min(T_ planar , 1.07 × T_ ph,RW). The values {A,a,α}={0.79,4.57,0.73} are the same as given in . Analyzing our numeric results we find that for BSG's v_* ∼ 0.7 v_ s,* to ±15% (recall that for RSG's the relation is v_* ≈ v_ s,*). We again simplify our analytic model, using an analytic approximation for the planar phase, L/L_ br=t̃^-4/3+ A R_13^0.1exp[-(at/t_ tr)^α] t̃^-0.34, and T_ col/T_ col,br=min[0.97 t̃^-1/3,t̃^-0.48], Here t̃ is the time normalized to the "break time," the time of transition from planar to spherical expansion, t̃=t/t_ br, with t_ br= 0.69 R_13^1.32 v_ s*,8.5^-1.16 (f_ρM_0κ_0.34)^-0.16 hrs, L_ br=7.16×10^42 R_13^0.55 v_ s*,8.5^2.22 (f_ρM_0)^0.22κ_0.34^-0.77 erg s^-1, T_ col,br= 9.49 R_13^-0.38 v_ s*,8.5^0.62 (f_ρM_0)^0.07κ_0.34^-0.18 eV. L_ br, T_ col,br and t_ tr can be directly deduced from observations, and their determination constrains the model parameters. For example, R is given by R_13= 2.22 L_ br,42.5^0.56 t^-0.12_ br,3 T_ br,5^-2.24, where t_ br=3 t_ br,3 hours, L_ br=10^42.5 L_ br,42.5 erg s^-1, T_ br=5 T_ br,5 eV. For the BSG parameters' range we consider, t_ br=30 sec - 1 day, L_ br=8×10^40-3×10^43 erg s^-1, T_ br= 1.5 - 32 eV (and f_ρ M=0.2-40 M_⊙, v_ s*,8.5=0.3-2.3). 
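For convenience, the calibrated gray formulae above can be evaluated with a short script. The following sketch is our own helper (not part of the paper's released material); it takes the physical parameters, uses v_* = √(E/M), v_s* ≈ 1.05 f_ρ^-0.19 v_*, and t_tr = 19.5 √(M_env,0 κ_0.34 / v_s*,8.5) days, and returns L(t) and T_col(t) for the BSG (n=3) case. The validity window discussed in the next paragraph is left to the user to enforce.

import numpy as np

MSUN = 1.989e33   # g

def bsg_gray_lightcurve(t_days, R_cm, E_erg, M_ej_msun, M_env_msun, f_rho=1.0, kappa=0.34):
    # L(t) [erg/s] and T_col(t) [eV] from the calibrated BSG (n=3) formulae above;
    # validity limits (t < t_0.8eV, t < t_tr/a, t > 2R/c) are not checked here
    A, a, alpha = 0.79, 4.57, 0.73
    R13 = R_cm / 1e13
    k034 = kappa / 0.34
    fM = f_rho * M_ej_msun                              # f_rho * M in solar masses
    v_star = np.sqrt(E_erg / (M_ej_msun * MSUN))        # v_* = sqrt(E/M), cm/s
    vs85 = 1.05 * f_rho**(-0.19) * v_star / 10**8.5     # v_s*,8.5
    t_br = 0.69 * R13**1.32 * vs85**(-1.16) * (fM * k034)**(-0.16) / 24.0   # days
    L_br = 7.16e42 * R13**0.55 * vs85**2.22 * fM**0.22 * k034**(-0.77)      # erg/s
    T_br = 9.49 * R13**(-0.38) * vs85**0.62 * fM**0.07 * k034**(-0.18)      # eV
    t_tr = 19.5 * np.sqrt(M_env_msun * k034 / vs85)                         # days
    t = np.asarray(t_days, dtype=float)
    tt = t / t_br
    L = L_br * (tt**(-4.0 / 3.0)
                + A * R13**0.1 * np.exp(-(a * t / t_tr)**alpha) * tt**(-0.34))
    T_col = T_br * np.minimum(0.97 * tt**(-1.0 / 3.0), tt**(-0.48))
    return L, T_col

# illustrative call: R = 5e12 cm, E = 1e51 erg, M = 14 Msun with a 7 Msun envelope
# L, T = bsg_gray_lightcurve(np.linspace(0.05, 3.0, 50), 5e12, 1e51, 14.0, 7.0)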
Our analysis is valid, and the above formulae provide an accurate description of the emitted radiation, at max[2 R / c,t_ bo] < t < min[t_ 0.8 eV, t_ tr/a], where for BSGs t_ 0.8 eV = 4.46 R_13^0.52 v_ s*,8.5^0.14 (f_ρM_0)^-0.02κ_0.34^-0.55 days, = 5.07 t_ br,3 T_ br,5^2.10 days, and t_tr=19.5 √(M_ env,0κ_0.34v_ s*,8.5^-1). The lower limits are set by the shock breakout time and the condition that light-travel time effects be negligible - after roughtly 2 R / c ∼ 1 R_12 min. The upper limits are set by the time at which the temperature drops to 0.8 eV, and the time at which the photon escape time from deep within the envelope becomes comparable to the dynamical time. Note the slight difference between BSG and RSG validity times (the latter valid from 3R/c to t_ 0.7 eV). In BSG's, recombination occurs on average at slightly higher temperature, and the assumption of LTE fails after t_0.8 eV (see  <ref>). §.§ Frequency Dependent Formula Our frequency dependent model is derived piecwise, in the same way as in , based on the strength of the absorption opacity κ_ abs,ν relative to the scattering opacity κ_ es. At frequencies where κ_ abs,ν>κ_ es, the emitted flux may be approximated as a blackbody with a frequency-dependent thermal depth (surface of last absorption) r_ col,ν, and corresponding frequency-dependent color temperature T_ col,ν=T(r_ col,ν), given by L_ν, BB=4π r_ col,ν^2 B_ν(T_ col,ν). At frequencies where the absorption opacity is smaller than the scattering opacity, κ_ abs,ν<κ_ es, we base our approximation on the flux f_ν emitted by a semi-infinite planar slab of temperature T in the two-stream approximation <cit.>, arriving at L_ν,ϵ=(4π)^2/√(3)r_col,ν^2√(ϵ_ν)/1+√(ϵ_ν)B_ν(T_col,ν), where ϵ_ν=κ_ abs,ν/(κ_ abs,ν+κ_ es). We first derive an expression describing the emission at the regime of relatively low absorption opacity, κ_ abs,ν<κ_ es, which occurs primarily at intermediate frequencies near and below the Planck peak. Absorption in this regime is dominated by free-free transitions with a small bound-free contribution. Neglecting the bound-free contribution, we approximate equation (<ref>), that defines the frequency-dependent thermal depth, as ∫_r_ col,ν^∞ρ√(3κ_ ff,νκ_es)dr' =1. Here we have neglected κ_ abs,ν with respect to κ_ es and used κ_ abs,ν→κ_ ff,ν= 4.13×10^-31g_ ffρ T^-1/2(hν)^-3(1-exp(-hν/T)) cm^2 g^-1, where the density ρ is in cgs, temperature T is in ergs. We approximate the gaunt factor g_ ff∼0.7(hν/T)^-0.27. Solving equation (<ref>) using the analytic / spherical phase density profiles (equations <ref> - <ref>) we obtain the radius, temperature and opacity at the thermal depth, r_ col,ν = 1.32 × 10^14 R_13^-0.01 (f_ρM_0)^0.11 v_ s*,8.5^0.74κ_0.34^0.04 t_ d^0.77ν_eV^-0.09 cm, T_ col,ν = 2.13 R_13^0.28 v_s*,8.5^0.16ν_ eV^0.25κ_0.34^-0.06 t_ d^-0.45 eV, κ_ ff,ν = 0.02 (f_ρ M_0)^-0.07 R_13^-0.22 v_ s*,8.5^-0.63κ_0.34^-0.30 t_ d^-0.14ν_ eV^-1.66 cm^2 g^-1. We find that modifying the expression for r_ col,ν to r_ col,ν = R + 1.32 × 10^14 R_13^-0.01 (f_ρM_0)^0.10 v_ s*,8.5^0.74κ_0.34^0.04 t_ d^0.77ν_eV^-0.09 cm while keeping the expressions for T_ col,ν and κ_ ff,ν unchanged provides a good description of the spectrum also at the planar phase (and at the transition from planar to spherical evolution). 
In "break notation" we have r_ col,ν = R + 2.02 × 10^13 L_ br,42.5^0.48 T_ br,5^-1.97κ_0.34^-0.08t̃^0.77ν_ eV^-0.09 cm, T_ col,ν = 5.71 L_ br,42.5^0.06 T_ br,5^0.91κ_0.34^0.22t̃^-0.45ν_ eV^0.25 eV, κ_ ff = 0.03 L_ br,42.5^-0.37 T_ br,5^0.56κ_0.34^-0.46t̃^-0.14ν_ eV^-1.66 cm^2 g^-1. The thermal depth values can then be inserted into equation (<ref>) with ϵ_ν=κ_ ff,ν/(κ_ ff,ν+κ_ es) in order to describe the emission in the low absorption frequency range. For frequency regions with strong absorption, where κ_ abs,ν>κ_ es, we return to equation (<ref>). The thermal depth at these frequencies is located at the outer edge of the ejecta, where the density decreases sharply and the temperature, determined by the escaping photons, is nearly uniform. We therefore approximate r_ col,ν≈ const. (ν) and T_ col,ν≈ const. (ν) for these frequencies, and describe the emission as a gray blackbody L_BB, equation (<ref>). At low frequencies where the free-free opacity dominates, we find numerically that the luminosity is well approximated by L_BB(0.85 T_ col). Meanwhile, at frequencies near and above the Planck peak, where atomic transitions dominate, we use both the simulations and a separate analytic estimate (see  <ref>) to improve upon the approximate L_ BB (0.74 T_ col) description of the UV suppression of , replacing the suppression factor 0.74 with a function of (R,t) lying in the range [0.6,1]. The combined freq-dept formula is thus L_ν = [L_ BB (0.85 T_ col)^-m + L_ν,ϵ^-m]^-1/m hν<3.5 T_ col 1.2 × L_ BB(0.85 R_13^0.13 t_ d^-0.13× T_ col) hν>3.5 T_ col, where m=5, and L_ν,ϵ is again given by equations (<ref>) and (<ref>) with the choice κ_ abs,nu→κ_ ff,ν. The 1.2 factor accounts for modest UV excess we observe in our results at the planck peak due to the presence of strong lines. The frequency slope in the Raleigh-Jeans regime is similar, but slightly lower than the blackbody value L∼ν^2. equation (<ref>) can be further simplified to be given in terms of only L and T_ col, with moderate decrease in the approximation's accuracy, L_ν = π/σL/T_ col^4[ (B_ν(0.85 T_ col)/(0.85)^4)^-m + . . ( 8/√(3) x^-0.18 T_ col,5^-0.12√(ϵ_ a)/1+√(ϵ_ a) B_ν(1.71 x^0.25 T_ col) )^-m]^-1/m hν<3.5 T_ col 1.2× L_ BB(1.11 L_42.5^0.03 T_5^0.18× T_ col) hν>3.5T_ col, where x=hν/T_ col, T_ col = 5 T_ col,5 eV, and ϵ_ a = 0.0051 x^-1.66 T_ col,5^-1.10. L and T_ col are given by equations (<ref>) and (<ref>). § NUMERIC RESULTS Figures <ref>-<ref> compare our numeric gray simulation results with our gray analytic solution, equations (<ref>, <ref>). The breakout parameters ρ_ bo and β_ bo= v_ bo/c are extracted from the simulation by fitting the luminosity of the breakout pulse to the table of , and these, along with the known R, are used to determine the physical parameters v_ s,*, f_ρ M used in the equations without additional fitting. The RMS agreement is roughly ∼10% and ∼5% for L(t) and T_ col(t) respectively[The case R_13=0.1, E_51=10, which exhibits somewhat larger deviation in luminosity (fig. <ref>) is excluded from our analysis for L and is considered somewhat outside our validity range. Likewise, the cases R_13=3, E_51=(0.1,1) exhibit up to ∼ 30% deviation, though the latter is not reproduced in the multigroup simulations.]. Fig. <ref>, showing varying core and envelope masses combinations, also includes light-travel time effects, which are important during shock breakout and act to smear the observed breakout pulse (see footnote in describing how they are calculated). 
Similarly to the case of RSG's, the light travel-time effects mark the early validity time, t∼2 R/c, beyond which the effects are not important. Fig. <ref>, which shows the effect of varying explosion energy, does not include these effects. We also observe that varying the core radius between R_ core/R∈[10^-3,0.03], does not change the agreement with our formulae. In Figures <ref>-<ref> we compare the SED results from multigroup numeric simulations to our gray (equations <ref>-<ref>) and frequency-dependent formulae (equation <ref>). In agreement with our previous works, the SED is approximately a blackbody, with minor 10 percent deviations in the Raleigh Jeans tail and prominent suppression in the ultraviolet. We find 20-35% RMS agreement between the frequency-dependent formulae and simulations, excluding for the R=10^12 cm, E=10^51 erg calculation. The slightly simplified frequency-dependent formula (equation <ref>) yields an RMS inaccuracy of 35-40%. Both of the uncertainties quoted above include (in a sum of squares sense) the uncertainties in our numeric model, based on a comparison to a different method of estimating the SED as described in sec. <ref>. § EXPANSION OPACITY AND LOCAL THERMAL EQUILLIBRIUM §.§ Expansion Opacity and Finite Frequency Resolution We show here, as we did in , that our numeric code should correctly describe the approximate SED despite being coarse in frequency resolution relative to transition line widths and despite not including `expansion opacity' effects <cit.>. In our simulations, for each frequency bin we use the Rosseland mean of the opacity for the diffusion calculation and the frequency averaged opacity for the emission/absorption calculation. In the presence of large velocity gradients, such that the Doppler shift of the plasma-frame photon frequency across a spatial resolution element, Δ r, is comparable to the frequency separation between strong lines, the effective photon mean-free-path may be much smaller than that derived from the Rosseland mean, l_ exp≈c/ vΔν/νr, where Δν/ν is the frequency difference between adjacent `strong' lines (with optical depth τ>1 taking into account Doppler shift, see and references therein). The ratio l_ Ross/l_ exp, determines when expansion opacity effects may be significant. Using our high-frequency opacity table, we extract the locations of strong lines and examine the above ratio at the latest validity time in our analysis, t_0.8 eV, when expansion effects are strongest. Similarly to our analysis of RSG's, we find that the Rosseland mean dominates the opacity for most frequencies. At frequencies above the Planck peak (5-7 eV), the velocity gradient effect becomes significant at late time at the outer edge of the ejecta. At these frequencies, l_ Ross∼ l_ exp at the diffusion depth or further out, see fig. <ref>. However, since the temperature does not vary rapidly beyond this radius, the impact of the modification of the mean-free-path, compared to the Rosseland mean, is not expected to be large, as we demonstrate below. As in , to test the sensitivity of our numeric results to the finite frequency resolution used and to the effects of `expansion opacity', we perform a separate calculation of the SED in `post-processing', using the numeric plasma density and temperature profiles. 
The SED is estimated by calculating the frequency-dependent thermal depth r_ col,ν and corresponding color temperature T_ col,ν as determined by equation (<ref>), using the full high-resolution-frequency opacity table and including the effect of Doppler shifts (in this calculation we do not bin the photons into energy groups). The spectral luminosity in this test is then determined by L_ν, Dopp = Eq. (<ref>) τ_ es ( τ_*,ν=1) ≤ f_ cut Eq. (<ref>) τ_ es ( τ_*,ν=1) > f_ cut, where the SED is not sensitive to the value of f_ cut, chosen in the range 0.1-3. The SED obtained agrees with that obtained directly from the numeric simulations to ∼ 10-20 % (see examples in fig. <ref>), implying that our numeric SED results are accurate at this level. §.§ Deviations from LTE Ionization and Excitation The opacity tables that we use in our numeric calculations were constructed assuming LTE in ionization and excitation states. We show below that the deviations from LTE are not expected to be large, and hence are not exepcted to affect the SED significantly, based on an analysis of the relevant interaction rates (similar to the analysis of ). We note that one of the reasons for this result is that during shock-cooling, the photons dominate the energy density in the ejecta and are nearly thermally distributed (see, e.g., fig. 9 of ). This is in contrast to the later nebular phase, where the photons do not dominate the energy density and are far from a Planck distributed. Collision, excitation and ionization rates typical for the shock-cooling phase are shown in fig. <ref>. With the exception of electron impact ionization, the rates are large compared to the (inverse of the) dynamical time (including those of electron-electron and electron-ion Coulomb collisions, electron impact excitation, photo-excitation, and ionization). The relative low rate of impact ionization is not expected to lead to deviations from LTE ionization since ionization is dominated by photons and the energy distributions of both photons and electrons are close to those of LTE. § COMPARISON TO PREVIOUS WORKS In , we compared our numeric SED results to those of several earlier numeric works that use the STELLA <cit.> radiative transfer code to calculate the emission of radiation from shock breakout and cooling from stars with density profiles derived using the MESA <cit.> stellar evolution code. We showed that our results, with progenitor density profiles approximated as simple polytropes, are in good agreement with those earlier results, with the important exception of the flux at the thermal-peak frequencies where strong lines are present. This demonstrates the validity of the diffusion approximation we use for the radiative transfer, and reconfirms the result that the emission is not sensitive to deviations of the density profiles from polytropic profiles. It also suggests that STELLA calculations underestimate the effect of lines relative to us (see for a discussion). Here we carry out a similar comparison, reproducing STELLA lightcurves of <cit.>, that describe shock-cooling following an explosion of a BSG progenitor. We approximate the progenitor density profile as a polytropic profile, with R=50 R_⊙ and core and envelope masses of M_ c=M_ e=7 M_⊙ (this mass ratio is chosen arbitrarily for lack of information). The explosion energy is E=1.7×10^51 erg. Fig. <ref> shows moderate agreement between our results and those of <cit.> for the multiband lightcurves in the first hours and days following the explosion. 
A better agreement is obtained when we turn off the bound-bound opacity in our calculations. This provides further evidence for the underestimation of the effects of lines in STELLA calculations. § DISCUSSION AND SUMMARY In preceding papers of this series, and , we provided a simple analytic description of the time-dependent luminosity, L, and color temperature, T_ col, as well as of the small (≃10%) deviations of the spectrum from blackbody at low frequencies, hν< 3T_ col, and of `line dampening' at hν> 3T_ col, for explosions of RSGs with convective polytropic envelopes (without significant circum-stellar medium). Here, we extended our work to provide similar analytic formulae for explosions of BSGs with radiative polytropic envelopes. The approximations for L(t) and T_ col(t) are given in equations (<ref>) and (<ref>), and the frequency-dependent deviations from blackbody are given by equation (<ref>). A slightly less accurate approximation for the frequency-dependent deviations, that depends only on L and T_ col, is given in equation (<ref>). The formulae describing our approximations for shock cooling emission for both RSGs and BSGs are summarized in the appendix. They are valid until significant recombination of Hydrogen. The analytic formulae were calibrated against a large set of 1D `gray' and multi-group (frequency-dependent) calculations for a wide range of progenitor parameters (mass, radius, core/envelope mass ratios) and explosion energies using the opacity tables we constructed (and made publicly available), that include the contributions of bound-bound and bound-free transitions. They reproduce the numeric L and T_ col to within 10% and 5% accuracy, and the spectral energy distribution to within ∼20-40%. We have shown that the SED is not sensitive to the effects of expansion opacity and deviations from LTE ionization and excitation. Our numeric results are in good agreement with those of STELLA calculations of shock cooling emission from the explosions of RSG and BSG progenitors with non-polytropic pre-explosion density profiles obtained from the MESA stellar evolution code. This demonstrates the validity of the diffusion approximation we use for the radiative transfer, and reconfirms the result that the emission is not sensitive to deviations of the density profiles from polytropic profiles. We find that STELLA calculations underestimate the effect of lines relative to our calculations . mnras § SUMMARY OF MODEL EQUATIONS We provide here the formulae describing shock cooling emission for both RSG's and BSG's. §.§ Gray Formulae The bolometric luminosity L and the color temperature T_ col (equations <ref>-<ref> for BSG's), are given by L/L_ br=t̃^-4/3+t̃^[-0.17 , -0.34]× A R_13^[ 0 , 0.1]exp(-[at/t_ tr]^α), T_ col/T_ col,br=min[0.97 t̃^-1/3,t̃^[-0.45 , -0.40]]. Here, the [ x, y] notation indicates values x,y given for n=3/2,3 respectively, {A,a,α} = [{0.9,2,0.5},{0.79,4.57,0.73}], t̃=t/t_ br, and we define t=0 as the time at which the breakout flux peaks. For RSG's, the break parameters (with br subscript) are given as a function of progenitor radius R, ejecta velocity v_ s*, and total ejecta mass M, by t_ br= 0.86 R_13^1.26 v_ s*,8.5^-1.13 (f_ρM_0κ_0.34)^-0.13 hrs, L_ br=3.69×10^42 R_13^0.78 v_ s*,8.5^2.11 (f_ρM_0)^0.11κ_0.34^-0.89 erg s^-1, T_ col,br= 8.19 R_13^-0.32 v_ s*,8.5^0.58 (f_ρM_0)^0.03κ_0.34^-0.22 eV. 
For BSG's (equations <ref>-<ref>) we have t_ br= 0.69 R_13^1.32 v_ s*,8.5^-1.16 (f_ρM_0κ_0.34)^-0.16 hrs, L_ br=7.16×10^42 R_13^0.55 v_ s*,8.5^2.22 (f_ρM_0)^0.22κ_0.34^-0.77 erg s^-1, T_ col,br= 9.49 R_13^-0.38 v_ s*,8.5^0.62 (f_ρM_0)^0.07κ_0.34^-0.18 eV. Here, R= 10^13 R_13 cm, κ=0.34 κ_0.34 cm^2 g^-1, v_ s*=v_ s*,8.5 10^8.5 cm s^-1, M_0 denotes mass in units of solar mass, and f_ρ≃1 depends on the inner structure of the envelope (see equation <ref>). v_ s∗ is related to the characteristic ejecta velocity v_∗ by (equation <ref>) v_ s∗≈ 1.05 f_ρ^-0.19v_∗, v_∗≡√(E/M), where E is the energy deposited in the ejecta. §.§ Frequency Dependent Formulae Our analytic approximation for the spectral luminosity of the shock cooling emission, taking into account deviations from a blackbody spectrum, is (equation <ref>) L_ν = [L_ BB (0.85 T_ col)^-m + L_ν,ϵ^-m]^-1/m hν<3.5 T_ col 1.2 × L_ BB(0.85 R_13^0.13 t_ d^-0.13× T_ col) hν>3.5 T_ col, with m=5 and (equations <ref>, <ref>) L_ BB=L×π B_ν(T_ col)/σ T_ col^4, L_ν,ϵ=(4π)^2/√(3)r_col,ν^2√(ϵ_ν)/1+√(ϵ_ν)B_ν(T_col,ν), ϵ_ν=κ_ ff,ν/κ_ ff,ν+κ_ es. For RSG's r_ col,ν = R + 2.18 × 10^13 L_ br,42.5^0.48 T_ br,5^-1.97κ_0.34^-0.07t̃^0.80ν_ eV^-0.08 cm, T_ col,ν = 5.47 L_ br,42.5^0.05 T_ br,5^0.92κ_0.34^0.22t̃^-0.42ν_ eV^0.25 eV, κ_ ff = 0.03 L_ br,42.5^-0.37 T_ br,5^0.56κ_0.34^-0.47t̃^-0.19ν_ eV^-1.66 cm^2 g^-1. For BSG's the corresponding equations are (equations <ref>-<ref>) r_ col,ν = R + 2.02 × 10^13 L_ br,42.5^0.48 T_ br,5^-1.97κ_0.34^-0.08t̃^0.77ν_ eV^-0.09 cm, T_ col,ν = 5.71 L_ br,42.5^0.06 T_ br,5^0.91κ_0.34^0.22t̃^-0.45ν_ eV^0.25 eV, κ_ ff = 0.03 L_ br,42.5^-0.37 T_ br,5^0.56κ_0.34^-0.46t̃^-0.14ν_ eV^-1.66 cm^2 g^-1, Here, L_ br=L_ br,42.5 10^42.5 erg s^-1, T_ col=5 T_ col,5 eV, and ν=ν_ eV eV. R is given in terms of the break parameters as R = 2.41×10^13 t_ br,3^-0.1 L_ br,42.5^0.55 T_ br,5^-2.21 cm for RSGs, and as R = 2.23×10^13 t_ br,3^-0.1 L_ br,42.5^0.56 T_ br,5^-2.24 cm for BSG's (equation <ref>). A simpler approximation, that depends only on the L and T_ col and is slightly less accurate, is L_ν = π/σL/T_ col^4[ (B_ν(0.85 T_ col)/(0.85)^4)^-m + . . ( 8/√(3) x^-0.155 T_5^-0.1√(ϵ_ a)/1+√(ϵ_ a) B_ν(1.63 x^0.247 T_ col) )^-m]^-1/m hν<3.5 T_ col 1.2× L_ν, BB(1.11 L_42.5^0.03 T_5^0.18× T_ col) hν>3.5T_ col for RSG's, and L_ν = π/σL/T_ col^4[ (B_ν(0.85 T_ col)/(0.85)^4)^-m + . . ( 8/√(3) x^-0.18 T_ col,5^-0.12√(ϵ_ a)/1+√(ϵ_ a) B_ν(1.71 x^0.25 T_ col) )^-m]^-1/m hν<3.5 T_ col 1.2× L_ν, BB(1.11 L_42.5^0.03 T_5^0.18× T_ col) hν>3.5T_ col for BSG's. Here, x=hν/T_ col, T_ col = 5 T_5 eV, and ϵ_ a = [5.5,5.1]×10^-3 x^-1.66 T_ col,5^-1.098. §.§ Validity Time Our analytic approximations are valid during max [3 R / c , t_ bo] < t < min[t_ 0.7 eV, t_ tr/a], for RSG's, and at max [2 R / c , t_ bo] < t < min[t_ 0.8 eV, t_ tr/a], for BSG's. Here, t_ bo = 30 R_13^2.16 v_ s*,8.5^-1.58 (f_ρ M_0 κ_0.34)^-0.58 sec, (RSG) = 45 R_13^1.90 v_ s*,8.5^-1.45 (f_ρ M_0 κ_0.34)^-0.45 sec, (BSG) t_0.7 eV = 6.86 R_13^0.56 v_ s*,8.5^0.16κ_0.34^-0.61 (f_ρM_0)^-0.06 days, (RSG) t_ 0.8 eV = 4.46 R_13^0.52 v_ s*,8.5^0.14 (f_ρM_0)^-0.02κ_0.34^-0.55 days, (BSG) and t_ tr = 19.5 √(κ_0.34M_ env,0/v_ s*,8.5) days. The formulae for RSG's were shown to reproduce well the results of numeric calculations over the parameter range: R=3×10^12-2×10^14 cm, E=10^50-10^52 erg, total mass M=2 -40 M_⊙, core and envelope mass ratios M_ e/M_ c=0.3 - 10, and solar-like metallicity between Z=0.1-1 Z_⊙. 
For BSG's, they were shown to reproduce well the results of numeric calculations over the parameter range: R=10^12-3×10^13 cm, E=10^50-10^52 erg, M=4-60 M_⊙, M_ e/M_ c=0.3-10, and solar-like metallicity between Z=0.1-1 Z_⊙.
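As a concrete illustration of how the gray formulae summarized above can be evaluated, the short Python sketch below codes the break parameters and the L(t), T_col(t) expressions for both progenitor types. The function name, argument names and the example parameter values are ours and purely illustrative; the numerical coefficients are copied from the expressions above, and times are handled in hours (with t_tr converted from days).

```python
import numpy as np

def shock_cooling_gray(t_hr, R13=1.0, vs85=1.0, frho_M0=1.0, kappa034=1.0,
                       Menv0=1.0, progenitor="BSG"):
    """Gray shock-cooling approximation (break parameters and L, T_col above).

    t_hr     : time since the breakout-flux peak [hours]
    R13      : progenitor radius / 1e13 cm
    vs85     : v_s* / 10^8.5 cm s^-1
    frho_M0  : f_rho * M in solar masses
    kappa034 : kappa / 0.34 cm^2 g^-1
    Menv0    : envelope mass in solar masses
    Returns (L [erg/s], T_col [eV]).  Names are illustrative, not from the paper's code.
    """
    if progenitor == "RSG":                       # n = 3/2 polytrope
        t_br = 0.86 * R13**1.26 * vs85**-1.13 * (frho_M0 * kappa034)**-0.13   # hr
        L_br = 3.69e42 * R13**0.78 * vs85**2.11 * frho_M0**0.11 * kappa034**-0.89
        T_br = 8.19 * R13**-0.32 * vs85**0.58 * frho_M0**0.03 * kappa034**-0.22
        pL, pT, A, a, alpha, rexp = -0.17, -0.45, 0.9, 2.0, 0.5, 0.0
    else:                                         # BSG, n = 3 polytrope
        t_br = 0.69 * R13**1.32 * vs85**-1.16 * (frho_M0 * kappa034)**-0.16   # hr
        L_br = 7.16e42 * R13**0.55 * vs85**2.22 * frho_M0**0.22 * kappa034**-0.77
        T_br = 9.49 * R13**-0.38 * vs85**0.62 * frho_M0**0.07 * kappa034**-0.18
        pL, pT, A, a, alpha, rexp = -0.34, -0.40, 0.79, 4.57, 0.73, 0.1

    t_tr_hr = 19.5 * np.sqrt(kappa034 * Menv0 / vs85) * 24.0   # t_tr [days] -> hours
    t = np.asarray(t_hr, dtype=float)
    tt = t / t_br                                              # t~ = t / t_br
    L = L_br * (tt**(-4.0 / 3.0)
                + A * R13**rexp * tt**pL * np.exp(-(a * t / t_tr_hr)**alpha))
    T_col = T_br * np.minimum(0.97 * tt**(-1.0 / 3.0), tt**pT)
    return L, T_col

# Example: a BSG with R = 5e12 cm at t = 10 hours (parameter values are arbitrary)
print(shock_cooling_gray(10.0, R13=0.5, vs85=1.0, frho_M0=10.0, Menv0=5.0))
```

The sketch should only be used inside the validity window quoted above (between the light-travel/breakout time and min[t_0.8 eV, t_tr/a]); outside that window the power-law expressions are not expected to hold.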
http://arxiv.org/abs/2407.13317v1
20240718091853
Gravitation as a Statistical Theory on the Light Cone
[ "José M. Isidro", "Claudio F. Paganini", "Alessandro Pesci" ]
gr-qc
[ "gr-qc", "math-ph", "math.MP" ]
§ ABSTRACT In this paper, we will explore Padmanabhan’s mesoscopic, statistical approach to gravity <cit.> with a twist. The general picture of his approach is that spacetime is made of large numbers of localized quantum degrees of freedom. Padmanabhan assumed that the degrees of freedom of a given quantum state of geometry contribute, after averaging over fluctuations, a vector degree of freedom for spacetime at a point. For null vectors, this can be regarded as corresponding to one single vector, i.e. a pure state, for the statistical ensemble on the light cone at every point. In the present paper, we consider instead the case where the states of the gravitational degrees of freedom are spread out and overlap, with only probabilistic information on which of them determines the actual spacetime at a point. In the continuum limit, this corresponds to a mixed state for the statistical ensemble on the light cone at every point. This change in assumptions leads to some interesting observations. When we define a statistical ensemble on the light cone, its variance “knows” about the interior of the light cone. As an intriguing consequence, we find that the cosmological constant can be related to the variance over the light cone. With a mixed state, we can no longer derive the gravitational field equations from an entropy functional. Here, instead, we show that a naive implementation of the measure of a mixed state on the light cone in the variation principle leads to modified measure theories (MMT) as the grand canonical ensemble and allows one to reframe unimodular gravity as the canonical ensemble of a statistical theory on the light cone. Possible molecules of triple-heavy pentaquarks within the extended local hidden gauge formalism Xiang Liu^1,2,6,7 July 22, 2024 =============================================================================================== § INTRODUCTION The present paper is a result of a series of papers comparing the structures and ideas of different approaches to fundamental physics <cit.> as well as the upcoming articles <cit.>. The goal of this project is to motivate the community to establish an extensive collection of such articles as a sort of "Rosetta stone" for approaches to fundamental physics. The hope is that such a set of dictionaries of ideas helps the exchange across approaches and thereby catalyzes progress in the foundations of physics. The present article demonstrates the potential of this project, set in the context of thermodynamic approaches to gravity, it takes inspiration from Causal Fermion Systems (CFS) <cit.> and links directly to Modified Measure Theories (MMT) <cit.> and unimodular gravity (UG) <cit.>. The original idea of spacetime thermodynamics came about in the context of black holes. Bekenstein <cit.> realized that the black hole parameters satisfy the same relations as conjugate variables in thermodynamics and boldly suggested assigning an entropy to black holes proportional to the area of their event horizon. This idea was given further credibility by Hawking's discovery <cit.> that black holes indeed have a temperature, thus fixing the proportionality constant to be k_B/4 l_p^2. In his seminal paper in 1995 Jacobson <cit.> then showed that in fact the equations of General Relativity can be understood as thermodynamic balance equations across any null surface through any point in spacetime. To obtain the full Einstein equations in the bulk, one then needs to assume that the stress energy tensor of the matter fields is divergence-free. 
This fixes the dynamics in the bulk up to a constant of integration which can be identified with the cosmological constant. Ever since, there has been great interest in this approach to the puzzle of (quantum) gravity. For recent work, see, for example, <cit.>. For our results here, the paper <cit.> is of particular interest, as they argue that rather than giving rise to the Einstein-Hilbert action, thermodynamic arguments actually favor UG which features equivalent dynamics. Our main focus, however, will be on the contributions by Padmanabhan <cit.> and his ideas of a mesoscopic Boltzmann approach to gravity. Hence in his reasoning, he assumes that there do exist internal degrees of freedom underlying every point of spacetime; however, he remains agnostic to the concrete nature of these microscopic degrees of freedom. The goal of his approach was to obtain a variational principle for gravity such that the equations of motion are invariant under ℒ_M→ℒ_M+C where ℒ_M is the matter Lagrangian and C is a constant. To that end he studied the small sphere limit x→ y of Synge's world function modified by adding a minimal length σ_L(x,y)=σ(x,y) +L^2 [Synge's world function σ(x,y) is given by one half times the geodesic interval between x and y squared.] to derive a "gravitational density of states". His original derivation was performed in the Euclidean setting and carried over to the Lorentzian setting via analytic continuation. One of the authors of this paper later showed <cit.> that the same result can be obtained working directly in the Lorentzian setting. In the course of his investigation, Padmanabhan frequently identified the light cone as the configuration space for the internal degrees of freedom[The derivation for the density of states works equally well for time-like or space-like vectors, however the particular emphasis on null-vectors is due to their connection to horizons and thereby the original thermodynamic considerations.]. This is the starting point for our present work. In his later work <cit.> he remarked that one should keep in mind that the null vectors that enter the variation principle for the entropy should be thought of as representing underlying gravitational quantum degrees of freedom which are localized in a small volume around a point x with a sharply peaked distribution in momenta. He demands that when averaging over this state in momentum space, the result should be a null vector again. This implies that in the continuum limit where we consider a state on the light cone at a point x this corresponds to a pure state, i.e. a Dirac delta in configuration space. In the present paper, we take inspiration from Causal Fermion Systems. In the continuum limit of the Minkowski vacuum, the underlying Hilbert space is given by the Dirac sea <cit.>. It is therefore natural to associate these states with the gravitational sector of the theory. Hegerfeldt's theorem <cit.> tells us that vectors in this Hilbert space cannot be localized in any spatially compact region. In fact, if we consider the plane wave solutions, eigenstates of the momentum operator, the Dirac sea is built from maximally delocalized quantum states. As a result, we should consider that the volume at a point x is not sourced by a single (pure) gravitational state, but by a collection of states; and, assuming to have only probabilistic information on them, by a mixed of state. In the continuum limit, this corresponds to a general measure on configuration space, i.e., on the light cone. 
Padmanabhan <cit.> argued that for the (pure) states he considered, the variance is small and hence can be neglected in a first-order approximation. This can not hold true if we consider mixed states. Working with mixed states leads to a number of interesting observations. In particular, we find that taking the average over a probability field naturally gives rise to a time-like vector field that is similar in nature to the regularizing vector field that plays a key role in <cit.>. We also find that the variance can be linked to the cosmological constant if we consider the validity of the Einstein field equations as an observational fact. Furthermore, it turns out that working with mixed states allows us to link several lines of research in the context of modified/thermodynamic gravity. In this paper, we explore the link with MMT and UG which is a subset of MMT, as explained in <cit.>. MMT was developed with the same goal in mind that was driving Padmanabhan's research: To find a variational principle for which the equations of motion for the matter sector and the gravitational sector are both invariant, to the addition of constants in the Lagrangian. Instead of focusing on the light cone, MMT replace the metric measure √(-g)d^4x with a measure that is independent of the metric. As a result, additive constants to the Lagrangian drop out of the equations of motion as desired[We will touch briefly on the conjecture that these approaches are indeed two sides of the same coin if one considers the conformal factor of the metric as an independent degree of freedom. However, a rigorous discussion of this idea is beyond the scope of this paper. ]. In our present paper, we show that even a naive implementation of a mixed state on the light cone in the variation principle has interesting consequences. It naturally leads to MMT as the grand canonical ensemble and allows us to reframe UG as the canonical ensemble of a statistical theory on the light cone. Despite the ad hoc nature of these action principles, on the conceptual level they fit nicely with the considerations regarding baryogenesis <cit.> in the context of CFS <cit.>. In particular, they give a complementary point of view on MMT. This perspective also fits with the arguments in <cit.> as the derivation of UG requires the conventionally defined matter stress-energy tensor to be divergence-free, an assumption that is dropped in MMT. However, given the ad-hoc nature of the variation principles brought forward in this paper, they likely will not be the final answer, but at least they seem to be a good starting point for further investigations. Finally, we mention that this mathematical setup has the potential to link to Verlinde's heuristic derivation <cit.> of modified Newtonian dynamics from entropic considerations. §.§ Organization In Section <ref> we introduce basic mathematical notations and useful facts to keep in mind for the later constructions. In Section <ref> we summarize the key arguments in Padmanabhan's papers. The mathematical core of the paper is in Section <ref> where we introduce the probability density on the light cone. Here we recall the fact that averaging over the light cone always results in a time-like vector except for pure states. Finally, we prove that the variance is a positive definite tensor. The core physical results are presented in the following two sections. In Section <ref> we observe that the variance can be connected to the cosmological constant. 
In Section <ref> we show how theories with modified measures can be obtained if we consider a density of states instead of a probability measure on the light cone. Finally, in Section <ref>, we give an overview of possible applications and generalizations of our mathematical framework. § MATHEMATICAL BACKGROUND In the present work we will always consider (M, g) to be a spacetime, i.e. a four-dimensional orientable Lorentzian manifold. The study of null-geodesics, i.e. geodesics γ who's tangent vector γ̇^μ satisfies g_μνγ̇^μγ̇^ν =0 at every point has been of considerable interest as they play a crucial role in characterizing the causal structure of spacetime, see e.g., <cit.>. From a physical perspective, null geodesics describe the path of light rays and are thus, in principle, amendable to experiment/observation. For that reason, they play a key role in the axiomatic reconstruction of spacetime from physical structures by Ehlers, Schild and Piran <cit.>. In the mathematical literature, there has been recent progress in the study of the geometry of the space of null-geodesics <cit.>. In the present work, we are interested in a geometric structure closely associated with the space of null-geodesics, namely the bundle of past light cones. [Past Light Cone Bundle (PLCB)] Let (M,g) be a smooth spacetime, that is, a smooth time orientable Lorentzian manifold. Then the PLCB is the subset of the tangent bundle given by ℒ:= { (x,v) ∈ T M| g_x(v,v)=0, v≠ 0 and v past directed}. The fiber ℒ_x of ℒ at a point x is just the past light cone at this point. In <cit.> it was shown that the PCLB ℒ is, in fact, a submanifold of the tangent bundle. This is a first sanity check that it makes sense to consider ℒ as a configuration space for a physical theory. To make calculations more tractable, we will use a coordinate system based on an orthonormal tetrad e_0^μ,e_1^μ,e_2^μ,e_3^μ where e_0^μ is time-like and the other tetrad vectors are space-like. Using the notation k̅·e̅ ^μ = k_1e_1^μ+ k_2 e_2^μ+k_3 e_3^μ we notice that any null vector can then be written as n^μ = λ (e_0^μ + k̅·e̅ ^μ) where k̅ = (k_1, k_2, k_3) is a three vector with unit Euclidean norm |k|=1. This makes the × S^2 topology of the light cone explicit. We make use of this split in the definition of the past celestial sphere bundle. [Past Celestial Sphere Bundle (PCSB)] Let (M,g) be a smooth spacetime and X be a smooth, non-vanishing timelike vector field. Then the PCSB is the subset of the tangent bundle given by CSM:= { v ∈ T_x M| g(v,v)=0, g(v,X)=1}. From the results in <cit.>, we get the following corollary. There is a global factorization of the PLCB by the PCSB. ℒ= _+ × SM. This can be seen immediately from the definition of the PCSB and a global rescaling of the vector field X by a scalar λ. This fact will become especially important in our follow-up paper <cit.>, where we will remove most of the structures that we assumed for simplicity in the present paper. This concludes our collection of relevant notation and existing results. § PADMANABHAN'S THERMODYNAMIC CONSIDERATIONS In <cit.> Jacobson showed that the equations of motion of General Relativity on the light cone (R_μν-κ T_μν) n^μ n^ν =0, ∀ n^μ with g(n,n)=0 can be recovered as a thermodynamic balance equation. To obtain the full Einstein equations from the equations on the light cone, one has to impose the fact that the stress-energy tensor is divergence free ∇^μ T_μν=0. This leads to R_μν-1/2R+Λ g_μν= κ T_μν with Λ as a free constant of integration. 
Padmanabhan's idea was to extend on this argument and derive (<ref>) from mesoscopic considerations. The philosophy of this approach is to assume that there exist microscopic degrees of freedom underlying spacetime and GR, which are macroscopic phenomena. However, no assumptions are made about the detailed properties of these microscopic degrees of freedom other than their existence. To this end Padmanabhan postulated, e.g. in <cit.>, a density of microscopic degrees of freedom ρ (x, ϕ_A), where ϕ_A is a yet to be specified variable labeling the internal microscopic degrees of freedom. He arrived at this conclusion by investigating the minimum-length metric (also quantum metric, or q-metric for short), i.e., a matric description of spacetime with a built-in minimal length <cit.>. The q-metric postulates that the squared integral between two points σ^2(x,y) (i.e., two times Synge's world function <cit.>) is modified on short scales[This allows for a covariant implementation of a minimal length scale in the universe. See <cit.> for a review of minimal length scenarios in quantum gravity and  <cit.> for a connection to holography.]. The simplest implementation of this idea is the addition of a constant to its square (the results derived from the q-metric do not depend on the details of the modification) σ(x,y)^2 ⟶σ_L(x,y)^2=σ(x,y)^2+L^2 . The q-metric corresponding to this modified world function is a bitensor, i.e. it is defined in terms of two points x and y, which is singular everywhere in the coincidence limit (where x→ y). This singular behaviour is a direct consequence of the minimal length in the corresponding distance function. One can calculate the volume and surface of an equi-geodesic ball with σ_L(x,y)^2=C. Taking the small-sphere limit, i.e. x → y, one can compare the result from the modified distance with that from the original metric. In the ordinary spacetime metric volumes and areas vanish, of course, in the coincidence limit. For the q-metric, however, only the volume vanishes in the coincidence limit, while the area does not. This paves the way to introduce the density of gravitational degrees of freedom. Padmanabhan then defines the density of microscopic degrees of freedom by the limit ρ (x, ϕ_A)= lim_x→ y√(h_σ_L)/√(h_σ_L, flat) = 1 - 1/6L^2R_μνn^μ n^ν . where √(h_σ_L) and and √(h_σ_L, flat) are the area elements associated with the modified distance function in curved space and in flat space respectively and n^μ is the surface normal vector. Working in a Euclidean setting, Padmanabhan interpreted the zero point as the light cone after Wick rotation and n^μ in the limit accordingly as null vectors. This calculation led Padmanabhan to identify the internal degrees of freedom ϕ_A in his density of microscopic degrees of freedom by the set of null vectors n^μ at a point. In his later papers <cit.>, Padmanabhan then conceived a given quantum state of spacetime as the product of a collection of elementary quantum degrees of freedom over elemental volumes throughout all of spacetime. To go from the quantum state of spacetime to the classical description of spacetime he defined in <cit.> the average over quantum fluctuations at every point x representing an elemental volume through the functional integral n_μ= ∫𝒟n n(x)_μ P(n(x)^μ, x) = l^μ(x) where P(n(x)^μ, x) is parameterized by some null vector field l^μ(x), and P is the probability that the actual quantum degree of freedom is given by n(x)^μ at x. 
Here, in principle, one should think of P as a sharply peaked Gaussian in the variable [n(x)^μ- l^μ(x)] supported in a small volume around the point x. This suggests, that n_μ in (<ref>) should be regarded as the expectation value of the momentum over a single quantum degree of freedom treated essentially as a pure. In the continuum limit, which we consider subsequently, such a state corresponds to a Dirac measure on the light cone. Following Padmanabhan, we will treat the light cone itself as the configuration space of the internal variables n^μ. However, inspired by CFS we now consider a situation where the states of the underlying gravitational quantum degrees of freedom overlap in spacetime, with only probabilistic information of which state actually determines spacetime at x. In the present set-up this motivates the consideration of mixed states described by general invariant probability measures on the light cone dP(n^μ). Given (<ref>), if the measure is absolutely continuous with respect to the Lebesgue measure of the tetrad coordinates, we can write this probability measure as dP(n^μ)=P(λ,)dλ d Ω, where dΩ is the canonical measure on S^2 and dλ the canonical measure on _+. We define the average over the light cone at a point x by n^μ_(P)= ∫_R_+× S^2λ (e_0^μ + k̅·e̅ ^μ) P(λ, k̅) dλ dΩ . Following Padmanabhan we define the variance analogously. [Gravitational Dissipation] We define the gravitational dissipation as the variance over the light cone σ(P)^μν = n^μ n^ν_(P) - n^μ_(P) n^ν_(P). Here ·_ again refers to the average over the configuration space of the internal variable n^μ (i.e. the light cone). In analogy to fluid mechanics, where Σ^ab = p^a p^b_p - p^a_p p^b_p is the dissipation tensor and ·_p denotes the average over momentum space with P^a= p^a_p being the flow velocity of the macroscopic fluid. For the collection of (pure) states considered by Padmanabhan <cit.> σ^μν is a small correction originating from quantum gravity (as diffusion is a small effect compared to the overall flow of a fluid) and l^μ= n^μ is a null vector depending on the single quantum degree of freedom at x through the specific state P(n(x)^μ,x). Therefore a variation of the physical system with respect to P(n(x)^μ,x) is equivalent to a variation of the null vector l^μ. In Padmanabhan's approach, requiring extremization of total entropy (of spacetime and matter degree of freedom) with respect to P(n(x)^μ,x) hence leads to δ [(R_μν-κ T_μν) n^μ n^ν] ≈δ [(R_μν-κ T_μν) l^μ l^ν] =0, subject to the constraint l^μ l_μ=0. Where we used that δ P ∼δ l^μ. To leading order one thus gets that (R_μν-κ T_μν) n^μ n^ν≈ (R_μν-κ T_μν) n^μ n^ν =(R_μν-κ T_μν) l^μ l^ν = 0 . has to hold for all states P and hence for all null vectors l^μ. Demanding again, that the stress-energy tensor be divergence free ∇^μ T_μν=0, this leads to the Einstein field equations with a cosmological constant as a constant of integration, along the lines of the original argument by Jacobson. As we shall see in the following, extracting expectation values of mixed states strongly impacts the argument just mentioned. Indeed, it is a well-known fact in Lorentzian geometry that the average of any two future (past) directed light-like vectors can only be null if the vectors are linearly dependent. This is the starting point for our results below. § A PROBABILITY MEASURE ON THE LIGHT CONE In this section we will assume two things: * The configuration space for the internal degrees of freedom for gravitation is the PLCB. 
* In analogy to Boltzmann's phase-space density f(x,p^μ) for fluids, we will assume a field of invariant Borel measures dP(x,n^μ) on the PLCB. If, in a fiber at a point x, the measure is absolutely continuous with respect to the Lebesgue measure of the tetrad coordinates, then it can be written in terms of the coordinates introduced above as P(λ,)dλ dΩ, where P(λ,)≥0 is nonnegative. In an abuse of notation, we will also write Dirac measures in terms of P(λ, ) with Dirac distributions. The goal of this section is to establish the properties of the statistical quantities required for Padmanabhan's formalism, in the context of mixed states. For now, we ignore the spatial dependence and consider dP(n^μ) to be a probability measure on the past light cone at a point. This means that we assume ∫_R_+× S^2 P(λ, k̅) dλ dΩ=1 , and for the variance to be finite ∫_R_+× S^2λ^2 P(λ, k̅) dλ dΩ< ∞. As a warmup, we show the following lemma which is a well-known fact in Lorentzian geometry. The vector n^μ_[It is tempting to associate -t^μ=- n^μ_ with the flow of time, given it also shows up in the “classical” thermodynamic arguments <cit.>. Therefore, it would be tempting to identify -t^μ as the arrow of time. ] is time-like except in the case where P(λ, k̅)=P(λ)δ (k̅ -k̅_0). For P(λ, k̅)=P(λ)δ (k̅ -k̅_0) it is clear that n^μ_= ∫_R_+λ (e_0^μ + k̅_0 ·e̅ ^μ) P(λ) dλ = (e_0^μ + k̅_0 ·e̅ ^μ)∫_R_+λ P(λ) dλ. and therefore n^μ_ is still a null vector. Now we treat the case of a general P(λ, k̅). For that, it is convenient to introduce the marginal probability density P(λ) = ∫_S^2 P(λ, ) dΩ. P() is then defined analogously. In addition, it is convenient to define λ_avg= ∫_ R_+λ P(λ) d λ. This allows us to write the expectation value over the light cone as n^μ_= λ_avg(e_0^μ + ∫_R_+ × S^2λ P(λ,k̅) k̅/λ_avg dΩ dλ·e̅ ^μ) Now it is clear that ∫_ S^2 P(λ, k̅) k̅ dΩ is a convex combination over S^2 for more than one λ and therefore that the inequality |∫__+× S^2λ P(λ,k̅) k̅/λ_avg dΩ dλ| ≤∫__+λ/λ_avg|∫_S^2 P(λ,k̅) k̅ dΩ|dλ ≤∫__+λ/λ_avg(∫_ S^2 P(λ, k̅) |k̅|dΩ)d λ =∫__+λ/λ_avg P(λ)dλ =1, where |·| is the Euclidean scalar product, in three dimensions is strict unless P(λ, ) is of the form P(λ, k̅)=P(λ)δ (k̅ -k̅_0). This finishes the argument. Note, that the exception of course includes the pure states considered by Padmanabhan with P(λ, )=δ(λ-λ_0)δ(-_0). The following corollary will be useful for further calculations. For a probability measure over the light cone dP(n^μ) that is not of the form P(λ, k̅)=P(λ)δ (k̅ -k̅_0), we can choose an orthonormal basis e_0^μ,e_1^μ,e_2^μ,e_3^μ, that is, a coordinate system such that ∫__+ × S^2λ P(λ, k̅) k̅/λ_avg d λ dΩ=0 and thus n^μ_(P)=λ_avg e_0^μ. In the next step, we will show that every state that is not of the form P(λ, k̅)=P(λ)δ (k̅ -k̅_0) can be normalized. Let dP(n^μ) be a probability measure that is not of the form P(λ, k̅)=P(λ)δ (k̅ -k̅_0), and e_0^μ,e_1^μ,e_2^μ,e_3^μ a tetrad choosen according to Corollary <ref>, then we can always find a related probability measure dP(n^μ):= dP(α n^μ) such that n^μ_ is a unit timelike vector. Let P(λ, k̅)= P(αλ, k̅) with α>0 then we have ∫_R_+× S^2λ (e_0^μ + k̅·e̅ ^μ) P(λ, k̅) dλ dΩ = ∫_R_+× S^2λ (e_0^μ + k̅·e̅ ^μ) P(αλ, k̅) dλ dΩ1/α^2 Replacing λ/α = λ and dλ/α= dλ we get ∫_R_+× S^2λ (e_0^μ + k̅·e̅ ^μ) P(λ, k̅) dλ dΩ = 1/α^2∫_R_+× S^2λ (e_0^μ + k̅·e̅ ^μ) P(λ, k̅) dλ dΩ = λ_avg/α^2e_0^μ Setting α^2=λ_avg gives us n^μ_(P)= e_0^μ. This finished the argument. We will call a probability measure dP with n^μ_(P)= e_0^μ a normalized state. 
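To make the lemma and the normalization procedure concrete, the following Python sketch estimates the light-cone average ⟨n^μ⟩ by Monte Carlo for an illustrative measure: λ exponentially distributed, and k̄ either uniform on S² (a genuinely mixed state) or fixed (a delta in k̄). The choice of weight, the sample size and the sampling scheme are ours and serve only to exhibit the time-like versus null dichotomy stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric, signature (+,-,-,-)

def lc_average(n_samples=200_000, spread=True):
    """Monte-Carlo estimate of <n^mu> for a measure P(lambda, k) on the light cone.

    lambda ~ Exp(1) (arbitrary illustrative profile); k-bar is either uniform on
    S^2 (spread=True) or fixed (spread=False, i.e. P(lambda, k) = P(lambda) delta(k - k_0)).
    """
    lam = rng.exponential(1.0, n_samples)
    if spread:
        v = rng.normal(size=(n_samples, 3))
        k = v / np.linalg.norm(v, axis=1, keepdims=True)   # uniform on S^2
    else:
        k = np.tile([0.0, 0.0, 1.0], (n_samples, 1))       # delta in k-bar
    n = lam[:, None] * np.hstack([np.ones((n_samples, 1)), k])  # n^mu = lam (e_0 + k.e)
    return n.mean(axis=0)

for spread in (True, False):
    m = lc_average(spread=spread)
    print("spread =", spread, " <n> =", np.round(m, 3), " g(<n>,<n>) =", m @ eta @ m)
# spread=True  ->  g(<n>,<n>) > 0 : time-like average (as in the lemma)
# spread=False ->  g(<n>,<n>) = 0 : null average for a delta in k-bar
```

Rescaling λ by λ_avg, as in the normalization lemma, turns the spread-out case into a state with ⟨n^μ⟩ equal to the unit time-like vector e_0^μ.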
The above lemma gives us a relation between the normalization of the resulting timelike vector field and the change in the underlying probability measure. We are now ready to study the variance of a probability measure on the light cone. For that, we need the following definition. We call P(λ,k̅) degenerate of order n if there exist n linearly independent vectors k_i∈ S^2 such that supp (P(λ, k̅))⊂ span{k_i| i≤ n ∈}^⊥ or if there are n-1 such vectors and P(λ) = δ (λ-λ_0). For any non-degenerate P(λ,k̅) σ(P)^μν = n^μ n^ν_(P) - n^μ_(P) n^ν_(P) is strictly positive. We choose an orthonormal basis according to Corollary <ref>. Then we have the corresponding co-tangent basis e^0_μ,e^1_μ,e^2_μ,e^3_μ. Due to orthogonality for i={1,2,3} we have ( n^μ n^ν_(P). - . n^μ_(P) n^ν_(P)) e^i_μ e^i_ν= n^μ n^ν_(P) e^i_μ e^i_ν = ∫_R_+× S^2λ^2 (e_0^μ + k̅·e̅ ^μ) (e_0^ν + k̅·e̅ ^ν) P(λ, k̅) dλ dΩ e^i_μ e^i_ν = ∫_R_+× S^2λ^2 ( k_i)^2 P(λ, k̅) dλ dΩ This is positive by our assumption, that P(λ, k̅) be non-degenerate. For e_0 we need a different argument. We calculate ( n^μ n^ν_(P). - . n^μ_(P) n^ν_(P)) e^0_μ e^0_ν=∫_R_+λ^2 P(λ)dλ-(∫_R_+λ P(λ)dλ)^2 = ∫_R_+ P(κ)dκ∫_R_+λ^2 P(λ)dλ - (∫_R_+λ P(λ)dλ)(∫_R_+κ P(κ)dκ) =∫_R_+× R_+λ (λ-κ) P(λ)P(κ) dλ d κ≥0 The positivity follows from Lemma <ref> in Appendix <ref> due to the fact that P(λ)P(κ) is positive and symmetric. This concludes the proof. § SOME OBSERVATIONS In the following we apply the results of the previous section to the variation of entropy. Clearly, in general we can not apply straightforwardly Padmanabhan's extremization of entropy setting the variation of the state P equal with the variation of the null vector l^a. Instead we always have to carry the variation with respect to the state P along which leads to 0=δ [(R_μν - κ T_μν) ⟨ n^μ n^ν⟩] = δ [(R_μν - κ T_μν) (⟨ n^μ⟩⟨ n^ν⟩ + σ(P)^μν] ⇕ δ [(R_μν - κ T_μν) σ(P)^μν] = -δ[(R_μν - κ T_μν) ⟨ n^μ⟩⟨ n^ν⟩] at equilibrium. We now restrict to variations such that δ[(R_μν - κ T_μν) ⟨ n^μ⟩⟨ n^ν⟩] = (R_μν - κ T_μν) δ[⟨ n^μ⟩⟨ n^ν⟩] under the constraint ⟨ n_ν⟩⟨ n^ν⟩ =-C. Following <cit.> this implies that the Einstein equation hold R_μν-1/2R+Λ g_μν= κ T_μν with the cosmological constant originating from the Lagrange multiplier. If we then formally integrate equation (<ref>) we get (R_μν-κ T_μν) σ(P)^μν = - λ_avg^2 (R_μν-κ T_μν) e_0^μ e_0^ν +G with G=(R_μν - κ T_μν) ⟨ n^μ n^ν⟩ following from (<ref>), a constant independent of P. We now make use of the fact that the Einstein field equations hold for this system (with the trace term either on the geometric side or on the matter side) R_μν-κ T_μν = (Λ-κ/2T)g_μν R_μν-κ T_μν = (1/2R-Λ)g_μν to replace the relevant term on the right-hand side of (<ref>) we get the following intriguing result (R_μν-κ T_μν) σ(P)^μν/λ_avg^2 = Λ-κ/2T + (Λ-κ/2T)g_μν⟨ n^μ n^ν⟩=Λ-κ/2T= 1/2R-Λ where we used g_μν⟨ n^μ n^ν⟩= ⟨ g_μν n^μ n^ν⟩=0 in the second step. Similarly we get (R_μν-κ T_μν)σ( P)^μν = Λ-κ/2T =1/2R-Λ for normalized states. At the present state of our understanding we do not want to interpret too much into this relation but report it as something reassuring and promising. Nevertheless, it is tempting that the cosmological constant shows up either way, especially in light of the fact that so much of Padmanabhan's work was geared towards explaining the cosmological constant. 
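Since the variance σ(P)^μν is the quantity that enters the relation with the cosmological constant above, it may be useful to check its positivity numerically. The sketch below computes the component matrix of σ^μν in the tetrad frame for an illustrative non-degenerate measure and for an order-one degenerate one; the distribution choices and sample sizes are ours and are only meant to exhibit the behaviour established in the positivity theorem.

```python
import numpy as np

rng = np.random.default_rng(1)

def variance_lightcone(n_samples=400_000, degenerate=False):
    """Monte-Carlo estimate of sigma^{mu nu} = <n n> - <n><n> for an illustrative
    measure (lambda ~ Exp(1), k-bar uniform on S^2; optionally restricted to the
    plane k_3 = 0 to mimic a degeneracy of order one)."""
    lam = rng.exponential(1.0, n_samples)
    v = rng.normal(size=(n_samples, 3))
    if degenerate:
        v[:, 2] = 0.0                      # support orthogonal to e_3
    k = v / np.linalg.norm(v, axis=1, keepdims=True)
    n = lam[:, None] * np.hstack([np.ones((n_samples, 1)), k])
    mean = n.mean(axis=0)
    sigma = n.T @ n / n_samples - np.outer(mean, mean)
    return np.linalg.eigvalsh(sigma)

print("non-degenerate:", np.round(variance_lightcone(), 3))                 # all > 0
print("degenerate:    ", np.round(variance_lightcone(degenerate=True), 3))  # one ~ 0
```

For the non-degenerate measure all eigenvalues of the component matrix are strictly positive, while the degenerate one produces a vanishing eigenvalue along the missing direction, in line with the discussion above.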
As a final remark, it is interesting to observe that a mixed state gives rise to a vector field t^μ= n^μ_ℒ relevant for the computation similar to the regularizing vector fields that appear in Causal Fermion Systems in the context of baryogenesis <cit.>. In light of the results in <cit.>, it is interesting to note that this vector field seems to have no dynamical relevance, but is more of a sort of a bookkeeping device. In accordance with classical thermodynamic treatments, there is a plethora of states P(λ, ) that lead to the same right-hand side in (<ref>). However, as a bookkeeping device, it might be relevant in the context of Padmanabhan's earlier considerations. In <cit.> he derived GR in terms of thermodynamically conjugate variables, and in <cit.> he derived what he calls the “holographic equipartition” for stationary spacetimes. All of these considerations feature the choice of a vector field. In Appendix <ref> we calculated this vector field for several forms of P(λ,) that are relevant in Causal Fermion Systems, and it turns out that λ_avg is proportional to the inverse of the regularization length ε, which can be thought to be of the order of the Planck length. § MODIFIED MEASURE THEORIES FROM A DENSITY OF STATES The Hawking-King-McCarthy and Malament theorem <cit.> states that under very weak causality conditions the causal order (M, ≺) determines the metric (M, g) up to a conformal factor. Formally, this implies that (M, g) is equivalent to (M, ≺) plus a volume form Φ, or colloquially: spacetime is equal to causal order plus volume. The attentive reader might notice that, by working on the light cone, we already fixed the causal structure, so the only variable left is the conformal factor of the metric. If we drop the requirement typically invoked in thermodynamic derivations of gravity that the stress-energy tensor of matter needs to be divergence free ∇^μ T_μν=0, then this allows for MMT. Instead of integrating the Lagrangian against the measure √(-g)d^4x MMT promote the measure to be an independent quantity. The total action is then given by S= ∫_M L Φ(A) d^4x with L=-1/κ R(Γ, g) +L_m, where κ is the gravitational coupling constant, the scalar curvature is given in terms of the connection and the metric, and the measure by Φ(A)= 1/6ε^αβμν∂_α A_βμν, where A_βμν is the tensor gauge potential of a non-singular exact 4-form ω=dA. Thus, the non-Riemannian volume element density Φ(A) is given by the scalar density of the dual field-strength associated with that potential. As a result, Φ(A) d^4x is invariant under general coordinate transformations. This introduces an extra scalar degree of freedom as can be easily seen by rewriting Φ(A) as χ(x)√(-g) where χ(x)=Φ(A)/√(-g). In the following, we will show that when working with a measure on the light cone, it is straightforward to implement it in such a way that it naturally leads to MMT. For that we will relax assumption (<ref>) and reinstate the dependence of the state dP(x,n^μ) on the spacetime point x. In the following, we assume a global time-like vector field t^μ such that the condition from Corollary <ref> is satisfied at every point. We then make the following definition. [Density of gravitational states] Given a measure dP(x,n^μ) on the PLCB which can be represented in terms of coordinates by P(x,λ,k̅)dλ dΩ√(|g|)d^4x the density of gravitational states is given by χ(x)= ∫_R_+ × S^2 P(x,λ,k̅) dΩ d λ. While in the previous discussion we have fixed χ(x)=1 we now allow it to take any positive value. 
This definition is in perfect analogy with the number density of particles in Boltzmann's treatment of fluids [ In this case we have n(x)=∫_Ω(p^μ) f(x,p^μ) dω, where p^μ are the internal degrees of freedom of the microscopic degrees of freedom, Ω(p^μ) is the domain of these internal degrees of freedom and dω is a suitable measure on Ω(p^μ). ] and χ(x) counts the number of gravitational states that contribute to the volume density at a point x. We are now ready to formulate an action on ℒ as S[g_μν, L_m, P(x,λ, )]=∫_ℒ( -1/κR + L_m)dP(x,n^μ)=∫_ℳ-1/κR + L_m_√(-g)d^4x which is to be minimized. A short calculation S[g_μν, L_m, P(x,λ, ] = ∫_ℒ( -1/κR + L_m) P(x,λ, ) √(-g)d^4x dλ dΩ =∫_M d^4x √(-g)[( -1/κR + L_m)∫_R_+ × S^2 P(x,λ,k̅) dΩ d λ] = ∫_M ( -1/κR + L_m) χ(x) √(-g)d^4x = ∫_M ( -1/κR + L_m)Φ_A d^4x. shows that this gives rise to a spacetime action with a modified measure. Now what happens if we minimize (<ref>) with respect to P(x,λ, ) (i.e. if we vary P(x,λ, ) itself)[Conceptually a variation of P(x,λ, ) represents a variation of the distribution of the momenta of the underlying gravitational states.]? The answer is straightforward ∫_M -1/κR + L_m_(P+δ P) √(-g)d^4x = ∫_M d^4x √(-g)[( -1/κR + L_m)∫_R_+ × S^2 P(x,λ,k̅)+δ P(x,λ,k̅) dΩ d λ] = ∫_M ( -1/κR + L_m) [χ(x)+δχ(x)] √(-g)d^4x, where δχ(x)= ∫_R_+ × S^2δ P(x,λ,k̅) dΩ d λ. Therefore, we see immediately that a variation of the state P(x,λ, ) induces a variation in the modified measure, and hence, if we vary the state under the requirement that the variation respect a volume constraint, as is usually required in MMT, the dynamics obtained from this variation principle are identical to that of MMT with a single modified measure. Following <cit.> we get the following system of equations R_μν-1/2Rg_μν= κ/2(T_μν+ M g_μν) + 1/χ( χ_,μ;ν -. .g_μν χ), χ- κ/D-1[ (M + 1/2 T)+ (D-2)/2L_m ] χ =0 . Where T is the trace of the stress energy tensor and the integration constant M=-2 Λ/κ takes the role of the cosmological constant. We see that this system of equations can only give rise to solutions compatible with the equations of Einstein's General Relativity if 1/χ( χ_,μ;ν -g_μνχ)=0. This holds, e.g., if χ=const. which requires (M + 1/2 T)+ (D-2)/2L_m =0. That is, for the right hand side of (<ref>) to vanish and for L_m=0, which can be satisfied if we minimize the action with respect to the fields in L_m. An interesting consequence of the above equations of motion is that we get a non-conservation of the conventionally defined matter stress energy that does not include a contribution from the field χ, ∇^μ T_μν= -2 ∂ L_m/∂ g^μν g^μα∇_αlnχ. This explains why the conventional derivation of the Einstein field equations from thermodynamic considerations never considered MMT as a possible alternative solution compatible with the usual arguments on the light cone. Now what happens if instead of general variations of the state P(x, λ, ), we constrain the variation to a subclass of states with χ(x)=const.? This leads us directly to unimodular gravity. §.§ Unimodular Gravity from a Density of States. Unimodular gravity was originally developed with a goal similar to MMT. If one limits the class of admissible variations of the metric to those that leave √(-g) invariant, then adding a constant to the Lagrangian will not change the gravitational equations of motion. UG has been shown to appear as a special case of MMT with two measures when formulated in a generally covariant form <cit.> 𝒮= ∫ d^4x √(-g) (R + 2 Λ + ℒ_m) - ∫ d^4 x Φ(A) 2 Λ . 
Here, a priori, Λ is a dynamical scalar field, and Φ(A) is as above. However, variations with respect to A imply that Λ is a constant, whereas variations with respect to Λ yield Φ(A) =√(-g). This variation principle can be reformulated in terms of a measure on the PLCB as a minimization of 𝒮= ∫ d^4x √(-g)(R + 2 Λ + ℒ_m)_ under the constraint ∫ d^4x √(-g)Λ_= const. If we vary the constraint with respect to P(x,λ,) then we get as above, that  Λ is a constant, while variations with respect to Λ yields that χ(x) must be constant. All other variations proceed as usual. Hence, the constraint restricts us to the subclass of measures dP(x, n^μ) for which the density of states is constant in spacetime. Note that from a thermodynamic perspective, the constraint (<ref>) now has a clear interpretation: it is nothing else than the average energy E per state. This is again remeniscent of Padmanabhan's holographic equipartition <cit.> and allows us to interpret the cosmological constant as the average energy per gravitational degree of freedom and the constraint as Boltzmann's equipartition of energy per degree of freedom[Alternatively one could try to interpret it as the average spacetime volume per gravitational degree of freedom.]. At this point it is worth noting again that none of the variation principles in this section depend on the time-like vector field that we can associate with dP(x, n^μ) and accordingly this is truly just a bookkeeping device and we need not worry about the phenomenological constraints discussed in <cit.>. We would also remind the reader that the physically relevant quantity derived from dP(x, n^μ) is Lorentz invariant and that, in general, there is a huge number of measures dP(x, n^μ) with the same χ(x) and n^μ_. § SUMMARY AND OUTLOOK For better readability, we split this section into three parts. A short summary, an outlook that ranges from concrete to speculative, and the conclusion. §.§ Summary On the mathematical side, starting from an invariant probability measure on the light cone, we presented in full detail the well-known fact from Lorentzian geometry that an average over the light cone always leads to a time-like vector field. Furthermore, we showed that the variance associated with any nondegenerate measure is a positive two tensor. On the physical side, we showed that the same setup can give rise to modified measures in a straightforward manner. Furthermore, we showed that unimodular gravity (UG), which is a subset of modified measure theories (MMT), can be characterized in terms of thermodynamic concepts. While MMT corresponds to the grand canonical ensemble where the number of gravitational degrees of freedom is free to change, UG corresponds to the canonical ensemble where the density of gravitational states is constant throughout spacetime. This aligns neatly with the observation in <cit.> in the context of causal fermion systems (CFS) that the matter/antimatter asymmetry originated in a shift of states from the gravitational sector to the matter sector. The present work strengthens the case for modified measures playing a key role in this mechanism. The fact that the stress energy tensor is not conserved in MMT is, in this context, not a bug but an essential feature. §.§ Outlook In the present work, we established the basic viability of a theory assuming a field of probability measure or a density of states. 
We will now show that this rather simple idea actually opens a window to a wide variety of possibilities for future avenues of research, some more mathematical in nature, some more physical. First, as noted in the body of the text, working on the light cone a priori assumes a lot of structure, as the light cone fixes the metric except for the conformal factor. If we intend for the formalism to have any bearing on the problem of quantum gravity, this is clearly not satisfactory. For this we would want to be able to derive the metric structure of spacetime from the fundamental ingredients alone. In an upcoming paper <cit.> some of the present authors will show that we can indeed strip away most of the structure assumed here and reconstruct it instead from more fundamental assumptions. From the thermodynamic viewpoint, the present paper raises several interesting questions. One is whether we can gain a deeper understanding of the formulation of GR with respect to conjugate variables, cf. <cit.>, where a vector field ξ plays an important role. It is tempting to identify this vector field as the one derived from the state on the light cone. This is connected to another question we have left unanswered in the present work, namely how to connect dP(x, n^μ) to some notion of entropy of spacetime. As a follow-up the same question can be asked in the context of Padmanabhan's holographic equipartition in static spacetimes <cit.>. Here, two aspects seem particularly interesting. First, his argument only works for a particular vector field, i.e., in the present language a particular class of measures. Second, for static spacetimes we can make a global split into positive and negative frequencies for the Dirac equation. For CFS this implies that we can define the Dirac sea globally. If, as suggested by the derivation of the Dirac equation in Minkowski space <cit.>, we assume the gravitational degrees of freedom to be in the Dirac sea, then in this setting we get a clean split between the matter degrees of freedom and the gravitational ones. On a more speculative note, this line of investigation could suggest that, in this setting, holography only applies to static or stationary spacetimes. Furthermore, it would be interesting to see whether the present formalism can be connected to the line of reasoning developed in <cit.>. Especially the ideas in the most recent paper in this series <cit.> might connect to the results some of the present authors are preparing in the aforementioned follow-up paper <cit.>. Finally, it seems possible to connect the formalism presented here with Verlinde's derivation of Modified Netwonian Dynamics from entropic arguments <cit.>. In particular, suppose that we consider the interacting region of de Sitter space in static coordinates. ds^2= ( 1- 1/3Λ r^2 )dt^2 - ( 1- 1/3Λ r^2 )^-1 dr^2 - r^2 (dθ^2+sin^2 θ dϕ^2) If we start with a probability measure P(x,n^μ) on the PLCB such that ∂_t=ξ= n^μ_P at all points x∈ℳ with r(x)≤ r_ℋ_C where r_ℋ_C is the coordinate radius of the cosmological horizon. That is, the time-like vector field ξ associated with the measure dP(x,n^μ) is the hypersurface normal with respect to a t=const surface in that coordinate system. In this setup Padmanabhan's holographic equipartition of states <cit.> applies, which fits with Verlinde's assumption that the number of bulk degrees of freedom is equal to the number of surface degrees of freedom calculated from the cosmological horizon entropy. 
Now if one wants to realize Verlinde's perturbation of de Sitter space by adding a small mass M at the center of the interacting region, one has to look for a suitable perturbation of the measure dP(x,n^μ). From the calculations in Section <ref> a variation δ P corresponds to a variation of the time-like vector field associated with it n^μ_(P+δ P) = ξ +δ u. Given that n^μ_(P) is a hypersurface normal vector field, its variation δ u will necessarily be tangential to the hypersurface t=const for any possible variation. Choosing δ P to be spherically symmetric around the center of the interacting region seems to be a good starting point for an attempt to reproduce his results from our formalism. One key step is to relate such a variation δ P to the addition of a perturbative mass in the center of the interacting region used by Verlinde in his argument. Another question is whether we can associate δ P with a change in entropy in a suitable sense to match Verlinde's argument. It seems important to note here that Padmanabhan's holographic equipartition breaks down for the perturbed vector field. Many of our considerations in this paper were informed by the CFS theory. In particular, the fact that we chose to define the measure dP(x, n^μ) as a mixed state on the past light cone bundle (PLCB) was motivated by the prominent role that the Dirac sea plays in CFS. Given the emergent nature of spacetime in CFS it is natural to think about a thermodynamic interpretation for the theory. A first step in that direction will be taken in our forthcoming comparison paper between the two approaches <cit.> where the focus will be on the role of a minimal length scale <cit.>. Another approach is to try and interpret the weight of the measure P(x,λ,) as a weight in the symbol of the fermionic projector. Dropping the spacetime dependence, this motivated the particular choices of P(λ,) in Appendix <ref> where P_1 corresponds to the so-called hard cut off regularization and P_2 corresponds to the iε regularization. One could then try to interpret the fermionic projector as the Wigner-Weyl quantization cf.<cit.> of the density of states and relate it to the ideas in <cit.> concerning spacetime correlations. To complete the Boltzmannian picture, the missing piece in the present work is an evolution equation for the measure P(x,λ,). Taking again inspiration from CFS where it was shown in <cit.> that the regularization satisfies a transport equation along null geodesics, this would be the natural first candidate. Similarly, one could postulate for P(x,λ,) to satisfy the massless Vlasov equation. Finally, there are many choices in this paper that one might make differently. For example, given that our framework is defined on the PLCB, instead of the action principles (<ref>) and (<ref>) one could try building an action involving expressions of the form of the left-hand side of (<ref>) or (<ref>) depending explicitly on the null vectors n^μ. Given that such a variation principle contains terms of the form n^μ n^ν, it could give rise to equations of motion quadratic in the vector fields as they appear in <cit.> for a covariant version of Verlinde’s emergent gravity. 
§ POSITIVITY LEMMA First we will prove a the following Lemma Let P(x,y) be positive and symmetric, then ∫_R_+× R_+ x (x-y) P(x,y) dx dy ≥0 We simply split the integral along the diagonal ∫_R_+× R_+ x (x-y) P(x,y) dx d y = ∫_0^∞∫_y^∞ x (x-y) P(x,y) dx d y+∫_0^∞∫^∞_x x (x-y) P(x,y) d y dx =∫_0^∞∫_0^∞(l+y)lP(l+y,y)dy dl +∫_0^∞∫_0^∞ x(-k)P(x,x+k)dx dk =∫_0^∞∫_0^∞[(l+y)lP(l+y,y)-l y P(l,l+y)]dy dl =∫_0^∞∫_0^∞ l^2P(l+y,y)dy dl≥0 Where in (<ref>) we set l=x-y in the first integral and k=y-x in the second integral. In (<ref>) we simply relabel the variables in the second integral x=l and k=y and sum the integrals. § AVERAGE VECTOR FIELD FOR SPECIFIC PROBABILITY DISTRIBUTIONS In the following, we will calculate λ_avg for three particularly relevant probability measures characterized by their weight function with respect to a particular choice of tetrad P_0=1/4πδ (ε^-1-λ), P_1= ε/4πθ(λ)θ(ε^-1 -λ), and P_2= ε/4π e^-ελ. Here, θ(λ) corresponds to the Heaviside function and P_1 accordingly to the cutoff regularization in CFS. P_2 on the other hand corresponds to the iε regularization. It is clear that for i={0,1,2} ∫_R_+ × S^2 P_i(λ,k̅) dΩ d λ=1 holds. We get for P_0 λ_avg = ∫_R_+ × S^2λ P_0(λ,k̅) dΩ d λ = ϵ^-1 Note that this measure is degenerate of order one, as the zero-zero component of the variance vanishes. Such states are of interest in our follow-up paper <cit.> as they can be related to Hartle-Hawking's no-boundary proposal <cit.>. We get for P_1 λ_avg = ∫_R_+ × S^2λ P_1(λ,k̅) dΩ d λ =∫_0^ε^-1ελ dλ = [ ελ^2/2]_0^ε^-1 = ε^-1/2. We get for P_2 λ_avg = ∫_R_+ × S^2λ P_2(λ,k̅) dΩ d λ =∫_0^∞ελ e^-ελ dλ = - [λ e^-ελ]_0^∞ + ∫_0^∞ e^-ελ dλ = - [ ε^-1 e^-ελ]_0^∞ = ε^-1 Therefore all three states are naturally associated with a vector field ε^-1e_0^μ which we can identify with the regularizing vector field of the locally rigid regularization used for the baryogenesis result. There is a certain constant rescaling of λ_avg between different regularizations that might matter in detailed phenomenological calculations. Over all this gives a tempting connection to CFS. Note that it is a curious observation that due to the S^2 factor in the light cone we always need to normalize by a factor of 4π, which Padmanabhan needs to determine the current value of the cosmological constant. This need not mean much, as it will obviously show up in any calculation that involves S^2, however, due to the fact that we demonstrated in (<ref>) and (<ref>) that there is a direct connection between dP and the cosmological constant, there might be something more to it. Note that if we make any of these states dependent on x by P_0=1/4πδ (ε(x)^-1-λ), P_1= ε_0/4πθ(λ)θ(ε(x)^-1 -λ) and P_2= ε_0/4π e^-ε (x)λ then for P_1 and P_2 we immediately get ∇_μχ(x)≠ 0 when we allow the regularization length ε(x) to vary, while for P_0 we get χ(x)=const. amsplain
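The λ_avg values derived above for P_1 and P_2 can be checked by direct quadrature. In the following sketch the regularization length ε = 0.1 is an arbitrary illustrative value, and the factor 4π from the S² integration has already been cancelled against the 1/(4π) in the weights.

```python
import numpy as np
from scipy.integrate import quad

eps = 0.1   # illustrative regularization length

# P_1: hard cut-off weight eps/(4*pi) on [0, 1/eps]; the S^2 integral supplies 4*pi
lam_avg_1, _ = quad(lambda lam: lam * eps, 0.0, 1.0 / eps)
# P_2: i*eps regularization weight eps/(4*pi) * exp(-eps*lam) on [0, infinity)
lam_avg_2, _ = quad(lambda lam: lam * eps * np.exp(-eps * lam), 0.0, np.inf)

print(lam_avg_1, 1.0 / (2.0 * eps))   # hard cut-off:      lambda_avg = 1/(2*eps)
print(lam_avg_2, 1.0 / eps)           # i*eps regulator:   lambda_avg = 1/eps
```

Both quadratures reproduce the closed-form values quoted above, up to the constant rescaling between the two regularizations noted in the text.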
http://arxiv.org/abs/2407.12362v1
20240717073820
Numerical Study of the Higher-Order Maxwell-Stefan Model of Diffusion
[ "Bérénice Grec", "Srboljub Simic" ]
math.AP
[ "math.AP" ]
Higher-Order Maxwell-Stefan Model B. Grec and S. Simić Bérénice Grec and Srboljub Simić Université Paris Cité, CNRS, MAP5, Paris, F-75006, France, berenice.grec@u-paris.fr University of Novi Sad, Faculty of Sciences, Department of Mathematics and Informatics, Trg Dositeja Obradovića 4, Novi Sad, 21000, Serbia, ssimic@uns.ac.rs Numerical Study of the Higher-Order Maxwell-Stefan Model of Diffusion Bérénice Grec1 Srboljub Simić2 July 22, 2024 ===================================================================== § ABSTRACT The aim of the study is to compare the standard Maxwell-Stefan model of diffusion with the higher-order one recently derived. This higher-order model takes into account the influence of the complete pressure tensor. A numerical scheme is developed for comparing the two models through numerical simulations of three-component diffusion. It is shown that the higher-order model preserves qualitative features of the diffusion process, but quantitative differences were observed in the behavior of the mixture components. § INTRODUCTION Diffusion is usually described as a flow of matter from a region of high concentration to region of low concentration, which appears as a consequence of the random motion of molecules, i.e. motion of one species relative to another. This description is intuitively appealing and mainly reflects our macroscopic perception of the phenomenon. Moreover, it is closely related to the simple Fick law of diffusion, a mathematical model which became a synonym for the physical process in the scientific community. Although the Fick law (and its generalized forms) is a reliable tool for the study of diffusion in different physical situations and widely used in design of engineering systems, it has certain shortcomings which impose limitations to its applicability. Roughly speaking, diffusion of the substance with respect to background medium and diffusion in binary mixture are typical “playgrounds” for the simple Fick law. It can also be applied in more complex situations, as long as the process is restricted to a neighborhood of equilibrium state. However, phenomena related to cross-diffusion in multicomponent mixtures, which move the system far from equilibrium, cannot be properly described by this model. The Maxwell-Stefan model presents an alternative approach to diffusion phenomena, with sound physical arguments. In contrast to Fick's model, in which the gradient of concentration (or chemical potential) is the driving agent, the Maxwell-Stefan model describes the diffusion process by means of momentum transfer between the species. The whole model consists of the mass conservation laws and momentum balance laws for species ∂_tρ^i + ∇_𝐱· (ρ^i𝐮^i) = 0, ∇_𝐱 p^i = - ∑_j=1^S f_ijρ^iρ^j (𝐮^j - 𝐮^i), where ρ^i, 𝐮^i and p^i are species' mass densities, velocities and partial pressures, respectively, and f_ij are the drag coefficients, i,j = 1, …, S. This model was first derived by Maxwell <cit.> and then generalized by Stefan <cit.>. Its derivation, at least in macroscopic/continuum framework, is usually based upon heuristic arguments, since Eqs. (<ref>) represent a kind of truncated version of the complete momentum balance laws for species. In <cit.>, the model was put in the context of kinetic theory of mixtures and derived as an asymptotic limit of the moment equations in diffusive scaling. The model was further generalized to include non-isothermal processes and chemical reactions <cit.>. 
It was also recovered in the continuum framework by means of scaling arguments <cit.>. When applied to the cross-diffusion in rarefied gases, Maxwell-Stefan model is usually restricted to the case of inviscid gases without heat conduction. Even when viscous dissipation is taken into account, it is included by assumption, i.e. in an ad hoc manner. In recent studies <cit.>, a procedure for the systematic derivation of higher-order models is developed within the framework of kinetic theory of mixtures. It is based upon physically motivated diffusive scaling and application of the maximum entropy principle in the scaled form. As an outcome, approximate velocity distribution functions are obtained in the scaled form, which facilitated closure of the moment equations at desired order. The aim of this paper is to perform a numerical study and compare the standard Maxwell-Stefan model with the higher-order one which takes into account viscous pressures (stresses). This will be done for the benchmark example of a ternary mixture used in the famous Duncan & Toor experiment <cit.>. To that end, we shall first make a brief overview of the kinetic derivation of the Maxwell-Stefan model, and its higher-order counterpart. In Section <ref>, a suitable numerical scheme for the models will be given in a one-dimensional setting and parameters for numerical computation will be evaluated or estimated. Finally, numerical simulations will be performed and comparison of the results will be provided in Section <ref>. § OVERVIEW OF THE MAXWELL-STEFAN DIFFUSION MODELS §.§ Kinetic approach to diffusion models The kinetic theory of mixtures is based upon a statistical modelling of the state of the species through velocity distribution functions f^i(t,𝐱,𝐯) for each species i = 1, …, S, where (t,𝐱) ∈ℝ×ℝ^3 are time-space variables and 𝐯∈ℝ^3 is the particle velocity variable. Their evolution is described by the system of Boltzmann equations ∂_t f^i + 𝐯·∇_𝐱 f^i = ∑_j=1^S Q^ij(f^i,f^j)(𝐯), 1≤ i ≤ S, where Q^ij(f^i,f^j)(𝐯) is the collision operator which determines the rate of change of distribution functions due to elastic collisions between particles of species i and j. It has the form Q^ij(f^i,f^j)(𝐯) = ∫_ℝ^3∫_𝕊^2[ f^i(𝐯') f^j(𝐯'_∗) - f^i(𝐯) f^j(𝐯_∗) ] ℬ^ij(𝐯,𝐯_∗, σ) d σ d 𝐯_∗, where ℬ^ij(𝐯,𝐯_∗, σ) are the collision cross sections. For the sake of simplicity, it is assumed that the cross sections ℬ^ij correspond to Maxwell molecules <cit.>, i.e. that there exists a function b^ij : [-1, 1] →ℝ^* such that ℬ^ij (𝐯,𝐯_*,σ) = b^ij (cosθ), where cosθ := v-v_*/|v-v_*|·σ. It will also be assumed that the function b^ij is even and that b^ij∈ L^1(-1,1), following Grad’s angular cutoff assumption. Transforming the Boltzmann equations (<ref>) into dimensionless form, and assuming that Mach number Ma and Knudsen number Kn are of the same small order of magnitude Ma = Kn = α≪ 1, one arrives at the Boltzmann equations for mixtures in diffusive scaling <cit.> α∂_t f^i + 𝐯·∇_𝐱 f^i = 1/α∑_j=1^S Q^ij(f^i,f^j)(𝐯). To recover the macroscopic model of diffusion one has to exploit the (dimensionless) moment equations in diffusive scaling α∂_t ∫_ℝ^3ψ^i(𝐯) f^i d 𝐯 + ∇_𝐱·∫_ℝ^3𝐯ψ^i(𝐯) f^i d 𝐯 = 1/α∑_j=1^S ∫_ℝ^3ψ^i(𝐯) Q^ij(f^i,f^j)(𝐯) d 𝐯, where ψ^i(𝐯) is an appropriate test function. In our case of interest, the mass balance laws for the species are derived by choosing ψ^i(𝐯) = m_i, and the momentum balance laws for the species emerge by taking ψ^i(𝐯) = m_i𝐯. 
Since the set of test functions is taken to be finite, an approximate velocity distribution function is needed to close the system of moment equations. To this end, the velocity distribution function is assumed in the form of a local Maxwellian with a small parameter α. This system of equations is sufficient to recover the Maxwell-Stefan model (<ref>)-(<ref>) in the asymptotic limit, α→ 0 (see <cit.>). In <cit.> the velocity distribution function is chosen by assumption. This restricts the analysis to mixtures of gases in which viscosity and heat conductivity are neglected. To overcome this restriction, it was proposed in <cit.> to apply the maximum entropy principle in dimensionless form to derive the approximate velocity distribution function of any desired order. In fact, such an approach enabled the construction of the higher-order Maxwell-Stefan model <cit.>. The system of moment equations is extended by the balance laws for the species' momentum fluxes by taking ψ^i(𝐯) = m_i𝐯⊗𝐯. In the asymptotic limit α→ 0, diagonal terms of the stress tensor remained in the model, leading to an extension of the classical Maxwell-Stefan model. §.§ Comparison of the 1D models In this work, it is our aim to compare the two Maxwell-Stefan models of diffusion in 1D setting. The classical model in 1D has the following form <cit.>, for any 1≤ i ≤ S ∂_t n^i + ∂_x J^i = 0, ∂_x n^i = ∑_j=1 j ≠ i^S1/D_ij( n^i J^j - n^j J^i). In Eqs. (<ref>)-(<ref>), n^i is the species' number density, J^i = n^i u^i is the diffusion flux per unit mass, and D_ij are the Maxwell-Stefan diffusion coefficients. Since there are only S-1 independent equations in (<ref>), we need a closure relation for this system. Throughout this study, both in the classical and in the higher-order case, we shall use the one proposed in <cit.>: ∑_i=1^S J^i =0. Note that this closure relation implies that the total density of the mixture is constant ∑_i=1^S n^i = n^ref. The higher-order model <cit.> in a 1D setting is given by: ∂_tρ^i + ∂_x (ρ^i u^i) = 0; ∂_x( p^i + p^i_⟨ 11 ⟩) = ∑_j=1^S2 π b^ij_L^1/m_i + m_jρ^iρ^j( u^j - u^i). In Eqs. (<ref>)-(<ref>), ρ^i is the mass density of species i, u^i its macroscopic velocity, p^i its partial pressure and p^i_⟨ 11 ⟩ is a diagonal term in the partial pressure deviator. In the asymptotic diffusion limit, deviatoric parts are determined through the following sets of algebraic equations: ∑_j=1^S M_ij p^j_⟨ 11 ⟩ = β^11_i, where M_ij and β^11_i are given by <cit.>: M_ij = {[ 2 π b^ij_L^1/(m_i+m_j)^2 m_jρ^i, if j ≠ i,; 2 π b^ii_L^1/4 m_i^2 m_iρ^i - ∑_ j = 1^S2 π b^ij_L^1/(m_i+m_j)^2 (2 m_i + m_j) ρ^j if j = i, ]. and β_i^11 = ∑_j = 1^Sπ/(m_i + m_j)^2 ×[ b^ij_L^1( (m_j - 4 m_i) ρ^j p^i + 5 m_jρ^i p^j) - 3 m_j B^ij (ρ^j p^i + ρ^i p^j) ], with B^ij := ∫_-1^1η^2 b^ij(η) dη. For the comparison of these two models it is necessary to take into account the following (dimensionless) relations <cit.>: ρ^i = m_i n^i, p^i = κρ^i/m_i T = κ T n^i, where κ = 5/3 for monatomic gases and T is the constant mixture temperature. Taking into account (<ref>)_1 and the definition of the diffusion fluxes, it is easy to show that (<ref>) is completely equivalent to (<ref>). By introducing the definition of the diffusivity coefficients: D_ij = (m_i + m_j) κ T/2 π m_i m_j b^ij_L^1, equation (<ref>) can be transformed into: ∂_x( n^i + P^i) = ∑_j=1^S1/D_ij( n^i J^j - n^j J^i), where we denoted P^i = p^i_⟨ 11 ⟩/κ T. There remains to transform the equations (<ref>). 
Using (<ref>) and (<ref>) to express b^ij_L^1 in terms of D_ij, after some straightforward transformations one arrives at the system: ∑_j=1^SM̂_ij P^j = β̂^11_i, for i = 1, …, S, where M̂_ij = {[ 1/m_i+m_j1/D_ij n^i, if j ≠ i,; -1/ m_i1/D_ii n^i - ∑_ j≠ i1/m_i+m_j( 2 + m_j/m_i) 1/D_ij n^j if j = i, ]. and β̂^11_i = ∑_j=1^S1/2 m_i1 - 3 γ^ij/D_ij n^i n^j. Note that in deriving (<ref>), for simplicity, we assumed that B^ij = γ^ij b^ij_L^1. Remark. In a 3D setting, equation (<ref>) is accompanied with another two sets of equations: ∑_j=1^SM̂_ijp^j_⟨ 22 ⟩/κ T = β̂^22_i, ∑_j=1^SM̂_ijp^j_⟨ 33 ⟩/κ T = β̂^33_i. and these relations imply that, for any i = 1, …, S, p^i_⟨ 11 ⟩ + p^i_⟨ 22 ⟩ + p^i_⟨ 33 ⟩ = 0. § NUMERICAL SCHEME §.§ Description of the numerical scheme Let us first describe the 1D explicit numerical scheme used to discretize the simple Maxwell-Stefan system (<ref>)-(<ref>) in the case of a three species mixture (S=3). Consider a space discretization (x_ℓ)_0≤ℓ≤ N of the domain Ω, with a space step Δ x>0, such that x_ℓ = ℓΔ x. The discretization of the equations is done using a staggered dual grid. For each species i, its number density n^i and its deviatoric pressure P^i are evaluated at the points x_ℓ, 0≤ℓ≤ N, whereas its flux J^i is evaluated at x_ℓ+1/2 = (ℓ+1/2) Δ x, for 0 ≤ℓ≤ N-1. Therefore, we shall denote {n^i}^n_ℓ≃ n^i (t^n,x_ℓ), {P^i}^n_ℓ≃ P^i (t^n,x_ℓ) and {J^i}^n_ℓ+1/2≃ J^i(t^n,x_ℓ+1/2) the numerical approximations of the unknowns at the discretization points. For given values of {n^i}^n_ℓ, one can compute the values of {J^i}^n+1_ℓ+1/2 from the momentum conservation equation (<ref>) discretized as follows for any 1≤ i ≤ 3 ∑_j≠ i1/D_ij( {n^i}^n_ℓ+1/2{J^j}^n+1_ℓ+1/2 - {n^j}^n_ℓ+1/2{J^i}^n+1_ℓ+1/2) = {n^i}^n_ℓ+1 - {n^i}^n+1_ℓ/Δ x , where {n^i}^n_ℓ+1/2 = ( {n^i}^n_ℓ+1 +{n^i}^n_ℓ )/2. The mass conservation equation (<ref>) then allows to update the values of {n^i}^n+1_ℓ for any 1≤ i ≤ 3 {n^i}^n+1_ℓ - {n^i}^n_ℓ/Δ t + {J^i}^n+1_ℓ+1/2 - {J^i}^n+1_ℓ-1/2/Δ x = 0. Observe that using the closure relations (<ref>) and (<ref>), one can get rid of the unknowns for species 3 and rewrite equations (<ref>) as a 2×2 system, which allows to obtain after inversion both fluxes J^1 and J^2 depending on the number densities n^1 and n^2. Equations (<ref>) become [ {A_11}^n_ℓ+1/2 {A_12}^n_ℓ+1/2; {A_21}^n_ℓ+1/2 {A_22}^n_ℓ+1/2 ][ {J^1}^n+1_ℓ+1/2; {J^2}^n+1_ℓ+1/2 ] = [ {n^1}^n_ℓ+1 - {n^1}^n_ℓ/Δ x; {n^2}^n_ℓ+1 - {n^2}^n_ℓ/Δ x ], with {A_11}^n_ℓ+1/2 = -n^ref/D_13 + (1/D_13 - 1/D_12) {n^2}^n_ℓ+1/2 , {A_12}^n_ℓ+1/2 = ( 1/D_12 - 1/D_13) {n^1}^n_ℓ+1/2 , {A_21}^n_ℓ+1/2 = ( 1/D_12 - 1/D_23) {n^2}^n_ℓ+1/2 , {A_22}^n_ℓ+1/2 = -n^ref/D_23 + ( 1/D_23 - 1/D_12) {n^1}^n_ℓ+1/2 . If needed, the values of {J^3}^n+1_ℓ+1/2 are directly computed from the closure relation as {J^3}^n+1_ℓ+1/2 = - {J^1}^n+1_ℓ+1/2 - {J^2}^n+1_ℓ+1/2 . The scheme thus consists in solving (<ref>) (and possibly (<ref>)) followed by (<ref>) for i=1,2 and {n^3}^n+1_ℓ= n^ref - {n^1}^n+1_ℓ-{n^2}^n+1_ℓ . We will now explain the extension of the scheme which has been used to discretize the higher-order Maxwell-Stefan system (<ref>)-(<ref>)-(<ref>). In a similar way, we start to compute the values of {J^i}^n+1_ℓ+1/2 from the momentum conservation equation, for given values of {n^i}^n_ℓ and {P^i}^n_ℓ, by solving [ {A_11}^n_ℓ+1/2 {A_12}^n_ℓ+1/2; {A_21}^n_ℓ+1/2 {A_22}^n_ℓ+1/2 ][ {J^1}^n+1_ℓ+1/2; {J^2}^n+1_ℓ+1/2 ] = [ {n^1}^n_ℓ+1 - {n^1}^n_ℓ/Δ x + {P^1}^n_ℓ+1 - {P^1}^n_ℓ/Δ x; {n^2}^n_ℓ+1 - {n^2}^n_ℓ/Δ x + {P^2}^n_ℓ+1 - {P^2}^n_ℓ/Δ x ]. 
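For concreteness, one explicit time step of the scheme just described can be sketched in Python/NumPy as follows. This is only an illustrative sketch under the closure relation (the fluxes sum to zero and the total number density stays constant): the function and array names are ours, boundary cells are assumed to be handled separately, and the pointwise pressure update of the higher-order model, described next, is omitted.

import numpy as np

def ms_time_step(n, P, D12, D13, D23, dx, dt, n_ref, higher_order=False):
    # n : array of shape (3, N+1), number densities at the cell points x_l
    # P : array of shape (3, N+1), scaled deviatoric pressures p^i_<11>/(kappa*T)
    #     (ignored when higher_order=False)
    # Returns the updated number densities (same shape as n).
    n_mid = 0.5 * (n[:, 1:] + n[:, :-1])            # densities at x_{l+1/2}
    rhs = (n[:, 1:] - n[:, :-1]) / dx               # d_x n^i at x_{l+1/2}
    if higher_order:
        rhs = rhs + (P[:, 1:] - P[:, :-1]) / dx     # add d_x P^i to the right-hand side

    J = np.zeros_like(n_mid)                        # fluxes J^1, J^2, J^3 at midpoints
    for l in range(n_mid.shape[1]):
        n1, n2 = n_mid[0, l], n_mid[1, l]
        A = np.array([
            [-n_ref / D13 + (1.0/D13 - 1.0/D12) * n2, (1.0/D12 - 1.0/D13) * n1],
            [(1.0/D12 - 1.0/D23) * n2, -n_ref / D23 + (1.0/D23 - 1.0/D12) * n1],
        ])
        J[:2, l] = np.linalg.solve(A, rhs[:2, l])   # reduced 2x2 momentum system
    J[2] = -J[0] - J[1]                             # closure: sum of fluxes vanishes

    n_new = n.copy()
    n_new[:, 1:-1] -= dt / dx * (J[:, 1:] - J[:, :-1])   # mass conservation update
    n_new[2] = n_ref - n_new[0] - n_new[1]               # constant total density
    return n_new

The loop over interfaces keeps the sketch close to the formulas above; in practice the small 2x2 solves can be vectorized, and in the higher-order case the deviatoric pressures are recomputed after each density update as explained below.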
Equation (<ref>) remains the same. Then, the values of {n^i}^n+1_ℓ are updated from (<ref>)-(<ref>). The values of {P^i}^n+1_ℓ are computed pointwise by inversion of the following matrix relation 𝕄^n+1_ℓℙ^n+1_ℓ = 𝔹^n+1_ℓ, where ℙ_ℓ^n+1 = [ {P^1}_ℓ^n+1, {P^2}_ℓ^n+1, {P^3}_ℓ^n+1 ]^T, and for any 1≤ i,j≤ S, [𝕄^n+1_ℓ]_ij is equal to M̂_ij from (<ref>) in which any n^i is replaced by {n^i}^n+1_ℓ, and [ 𝔹^n+1_ℓ]_i is equal to β̂_i^11 from (<ref>) in which again any n^i is replaced by {n^i}^n+1_ℓ. The scheme thus consists in solving (<ref>) followed by (<ref>)-(<ref>) and finally (<ref>). §.§ Parameters for numerical computations The comparison of the two models described in Section <ref> requires to simulate these models in a physically meaningful setting. We shall analyze the mixture used in in the experiment of Duncan and Toor (1962) <cit.>, since it is a benchmark example for the Maxwell-Stefan model of diffusion. This mixture involves three gases H_2, N_2 and CO_2. For the numerical simulations, it is crucial to choose proper values of the dimensionless parameters. Let us describe how these values are chosen. First, the dimensionless temperature is chosen to be T=1. Dimensionless masses. The molecular masses of the mixture constituents, expressed in atomic mass units, are m_1^* = 2; m_2^*= 28; m_3^* = 44, where the subscript ·_1 relates to H_2, ·_2 to N_2 and ·_3 to CO_2. To determine the dimensionless molecular masses, we have to choose a reference value for them. In this analysis, we chose their average mass: m_0 = 1/3( m_1^*+ m_2^* + m_2^* )= 24.6667. This choice leads to the following values of dimensionless molecular masses m_1 = m_1^*/m_0 = 0.08108; m_2 = m_2^*/m_0 = 1.13514; m_3 = m_3^*/m_0 = 1.78378. Dimensionless diffusivities. The Maxwell-Stefan diffusivities in our mixture are <cit.> D_12^∗ = 0.833cm^2/s; D_13^∗ = 0.68cm^2/s; D_23^∗ = 0.168cm^2/s. The reference diffusivity will be chosen to be the average diffusivity D_0 = 1/3( D_12^∗ + D_13^∗ + D_23^∗) = 0.560333. Taking this into account, the dimensionless Maxwell-Stefan diffusivities become D_12 = D_12^∗/D_0 = 1.48662; D_13 = D_13^∗/D_0 = 1.21356; D_23 = D_23^∗/D_0 = 0.299822. Observe that once the diffusivities are determined, cross sections can be computed from (<ref>), which leads to the following estimates: b^12_L^1 = 2.35784; b^13_L^1 = 2.81833; b^23_L^1 = 1.27538. Dimensionless self-diffusivities. A rough estimate of self-diffusivities D_ii can be obtained from (<ref>) if one take m_i = m_j <cit.>: D_ii = 1/π1/m_iκ T/ b^ii_L^1 ⇔ b^ii_L^1 = 1/π1/m_iκ T/D_ii. In this work, we shall assume the values of intra-species cross section norms b^ii_L^1, and then compute the self-diffusivities D_ii. The choice of the norms is based upon the observation that b^ij_L^1 is smaller for the smaller mass ratios of the species that interact. Therefore, we shall assume that all the norms of dimensionless intra-species cross sections are the same: b^11_L^1 = b^22_L^1 = b^33_L^1 = 1.0. This assumption leads to the following values of D_ii D_11 = 6.54304; D_22 = 0.46736; D_33 = 0.297411. The influence of these parameters will be evaluated through numerical simulations. Moments of the cross sections. To determine the remaining parameters, one has to estimate the second moment of the cross sections, assumed to be of the form B^ij = ∫_-1^1η^2 b^ij(η) dη = γ^ij b^ij_L^1. Since the mean value of the function η^2 is 1/2∫_-1^1η^2dη = 1/3, we decided to choose, for any i,j = 1, 2, 3, γ^ij = 0.1 as a reasonable estimate. 
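The dimensionless parameters of this section can be reproduced with a short Python computation. The sketch below uses our own variable names and pair indexing; the computed values agree with those quoted above up to rounding.

import numpy as np

kappa, T = 5.0 / 3.0, 1.0                       # monatomic gases, dimensionless temperature

# molecular masses of H2, N2, CO2 in atomic mass units and their average
m_star = np.array([2.0, 28.0, 44.0])
m0 = m_star.mean()                              # 24.6667
m = m_star / m0                                 # 0.08108, 1.13514, 1.78378

# Maxwell-Stefan diffusivities (cm^2/s) and their average
D_star = {(0, 1): 0.833, (0, 2): 0.68, (1, 2): 0.168}
D0 = sum(D_star.values()) / 3.0                 # 0.560333
D = {ij: Dij / D0 for ij, Dij in D_star.items()}    # 1.48662, 1.21356, 0.299822

# cross-section norms ||b^ij||_{L^1} recovered from the diffusivity relation
b = {(i, j): (m[i] + m[j]) * kappa * T / (2.0 * np.pi * m[i] * m[j] * D[(i, j)])
     for (i, j) in D}                           # 2.35784, 2.81833, 1.27538

# self-diffusivities from the assumed intra-species norms ||b^ii||_{L^1} = 1
b_ii = 1.0
D_self = kappa * T / (np.pi * m * b_ii)         # 6.54304, 0.46736, 0.297411

gamma = 0.1                                     # estimate of B^ij / ||b^ij||_{L^1}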
However, to check the influence of this parameter, different values are tested in the next section. § NUMERICAL SIMULATIONS The scheme has first been validated on very simple cases. Since constant states (with zero fluxes) are stationary solutions of the equations, we tested that the scheme preserves constant states. Moreover, in the case of a two-species mixture, no cross diffusion effect happen in the equations, although the pressure terms still involve some coupling. The test case chosen here is related to the Duncan and Toor experiment (which can be seen as essentially a 1D setting), which involves a mixture of three species and in which the phenomenon of uphill diffusion appears. The domain is chosen as Ω=[0,1], and the discretization parameters are Δ x=0.05, Δ t = 2× 10^-4. Let us comment briefly on the CFL condition associated to this choice of parameters. It is of course restrictive, since we consider an explicit scheme for a diffusion equation. Further, in <cit.>, a stability condition for this scheme had been proved in a special case for the Maxwell-Stefan system, and it had been verified numerically in other cases. A natural extension of this stability condition for the higher-order Maxwell-Stefan model would be max(D_12, D_13, D_23, D_11, D_22, D_33) Δ t/Δ x^2≤ 0.5, and the chosen parameters are at the limit of this condition. The initial values are chosen as follows {n^1}^0(x) = 0.8 ×1_[0,0.5], {n^2}^0(x) = 0.2, {n^3}^0(x) = 0.8 ×1_[0.5,1], with {F^i}^0 = 0 for i=1,2,3. For this test case, the asymptotic solution for the number densities is obviously {n^1}^∞(x) = 0.4, {n^2}^∞(x) = 0.2, {n^3}^∞(x) = 0.4, with zero fluxes, and for the pressures, for any i=1,2,3, {p^i}^∞=κ T {n^i}^∞ from equation of state (<ref>), whereas {P^i}^∞ is computed from {n^i}^∞ by the inversion of (<ref>). In the simulations, in order to compare the pressures in the two models, we shall consider for the higher-order Maxwell-Stefan system the total pressure p^i_tot = p^i + κ T P^i = p^i + p^i_⟨ 11 ⟩ of each species i. The behavior of the scheme is validated by checking that the known asymptotic profile is well captured. The dynamics of the diffusion process is shown for the higher-order Maxwell-Stefan system on Figure <ref>, where for each species, we plotted at different times its number density and its total pressure. At first, it may be observed that the higher-order Maxwell-Stefan system does not bring a qualitatively different result in comparison to classical Maxwell-Stefan model. In particular, diffusion of species 1 and 3 may be regarded as regular, while species 2 exhibits the well-known uphill diffusion. Furthermore, convergence to equilibrium is faster for species 1 and 3 than for species 2. Finally, asymmetry of the density (and pressure) profile may be observed for species 2 in transient regime, which is typical for Maxwell-Stefan model of diffusion. Along with developing a reliable numerical scheme for a new diffusion model, the aim of this analysis is also to compare the Maxwell-Stefan system (MS) and the higher-order Maxwell-Stefan system (HOMS) through their respective numerical solutions for the same set of parameters and the same initial data. This is presented in Figure <ref>. In the case of species 1 and 3, n^1 and n^3 converge to equilibrium faster in MS model than in HOMS model. In consistence with this result, the gradient of total partial pressure in HOMS in slightly larger than the gradient of partial pressure in MS for species 1 and 3. 
Since behavior of species 1 and 3 is not unusual, the real challenge for diffusion modelling was the non-Fickian behavior of species 2. From the initially uniform space distribution it evolves in a non-uniform way. The classical MS model reproduced this behavior. What can be observed in HOMS is the similar pattern as in the case of species 1 and 3: the evolution of n^2 in HOMS has a delay with respect to one in MS, and thus has a slower convergence towards equilibrium. Therefore, it may be concluded that HOMS leads to a decrease of the rate of convergence of the species' number densities towards equilibrium. The derivation of MS and HOMS from the Boltzmann equations has the advantage of systematic derivation of macroscopic equations from the mesoscopic dynamics. At the same time it inherits the necessity of choosing the appropriate cross sections. In the present study, this obstacle has been overcome by using the diffusivities instead of the norms of the cross sections. However, the need for computation of B^ij required the estimate for the second moments of the cross section. We chose γ = γ^ij = 0.1, for all i,j = 1, 2, 3, as a reasonable estimate. Nevertheless, this is only an estimate, and we wanted to analyze the influence of γ on the solution. Numerical results are presented in Figure <ref>, where we compared the results for three different values of γ. For all the quantities, increasing γ towards 1/3 certainly leads to the convergence of the HOMS solution towards the MS one. This was expected, since β̂_i^11 vanish for γ = 1/3 (see Eq. (<ref>)), and the system (<ref>) only has the trivial solution P^j = 0, i.e. p^j_⟨ 11 ⟩ = 0, which reduces HOMS to MS. As a final remark, let us mention that the coefficients M̂_ij and β̂_i^11 inherit the influence of self-diffusion. Since the corresponding coefficients of self-diffusivity can hardly be measured, we estimated them theoretically. For all the numerical computations we performed two `runs'— one with the values of D_ii given in Section <ref>, and one with 1/D_ii→ 0, thus neglecting the effect of self diffusion. The differences on the results were insignificant, we thus decided not to go further in the analysis of this phenomenon due to its negligible influence. § CONCLUSIONS In this study, we analyzed numerical simulations of the recently proposed higher-order Maxwell-Stefan model, derived within the framework of kinetic theory of gases. The main feature of the model is that it takes into account the influence of higher-order moments — the pressure tensor, to be precise. In the asymptotic limit, when Ma = Kn = α→ 0, the balance laws for the pressure tensor reduce to a system of algebraic equations. The classical Maxwell-Stefan model is thus extended by the influence of normal components of the pressure tensor in the momentum balance laws. Our aim was twofold: first, to develop a reliable numerical scheme which can be used for the analysis of higher-order Maxwell-Stefan model; second, to compare the solutions of the higher-order model with the solutions of the classical one for the same initial data. We simulated the conditions of the celebrated Duncan and Toor experiment as a benchmark example. The analysis was restricted to the 1D case. The results may be summarized as follows: * The numerical solution of the HOMS model shares the same qualitative features as the solution of the classical one, regarding the convergence to equilibrium, uphill diffusion and asymmetry of density profile for N_2 in transient regime. 
* The comparison of the solutions of HOMS and MS model revealed slower convergence to equilibrium for all species in the higher-order case. * The higher-order model inherits the parameters B^ij (moments of the cross sections), which distinguishes the higher-order model from the classical one. Numerical solutions of HOMS exhibited tendency towards the solution of MS when the parameter was continuously varied. * Numerical simulations of the higher-order model showed that self-diffusion may be neglected, at least in the example analyzed in this study. In a forthcoming study, we aim to enhance the model with inertial terms in the momentum and the pressure tensor balance laws, which will certainly enrich the picture regarding the applicability of the Maxwell-Stefan approximation. It would also be interesting to test the model in higher-dimensional settings. 20 anwasia2020maxwell B. Anwasia, M. Bisi, F. Salvarani, A. J. Soares, On the Maxwell-Stefan diffusion limit for a reactive mixture of polyatomic gases in non-isothermal setting, Kinet. Relat. Models 13(1), 63–95 (2020). anwasia2020formal B. Anwasia, P. Gonçalves, A. J. Soares, On the formal derivation of the reactive Maxwell-Stefan equations from the kinetic theory, Europhysics Letters 129(4), 40005 (2020). anwasia2022maximum B. Anwasia, S. Simić, Maximum entropy principle approach to a non-isothermal Maxwell-Stefan diffusion model, Appl. Math. Lett. 129, 107949-9 (2022). boltzmann L. Boltzmann, Lectures on gas theory, University of California Press, Berkeley, 1964. Reprint of the 1896–1898 Edition. Reprinted by Dover Publications, 1995. boudin2015maxwell L. Boudin, B. Grec and F. Salvarani, The Maxwell-Stefan Diffusion Limit for a Kinetic Model of Mixtures, Acta Appl. Math. 136, 79–90 (2015). chapman1970mathematical S. Chapman and T.G. Cowling, The Mathematical Theory of Non-Uniform Gases, Cambridge University Press, Cambridge, 1995. Reprint of the Third Edition 1970. dun-too-62 J. B. Duncan and H. L. Toor, An experimental study of three component gas diffusion, AIChE Journal 8(1), 38–41 (1962). grec2023higher B. Grec and S. Simić, Higher-Order Maxwell-Stefan Model of Diffusion, La Matematica (2023). hutridurga2017maxwell H. Hutridurga, F. Salvarani, Maxwell-Stefan diffusion asymptotics for gas mixtures in non-isothermal setting, Nonlinear Anal. 159, 285–297 (2017). kri-wes-97 R. Krishna and J. A. Wesselingh, The Maxwell-Stefan approach to mass transfer, Chem. Eng. Sci. 52 (6), 861–911 (1997). max1866 J. C. Maxwell, On the dynamical theory of gases, Phil. Trans. R. Soc. 157, 49–88 (1866). ste1871 J. Stefan, Ueber das Gleichgewicht und die Bewegung insbesondere die Diffusion von Gasgemengen, Akad. Wiss. Wien, 63, 63–124 (1871).
http://arxiv.org/abs/2407.12755v1
20240717172941
Quantum vs. Symplectic Computers
[ "Igor Volovich" ]
quant-ph
[ "quant-ph", "hep-th" ]
http://arxiv.org/abs/2407.12196v1
20240716214347
MASIVE: Open-Ended Affective State Identification in English and Spanish
[ "Nicholas Deas", "Elsbeth Turcan", "Iván Pérez Mejía", "Kathleen McKeown" ]
cs.CL
[ "cs.CL" ]
Individualized Federated Learning for Traffic Prediction with Error Driven Aggregation Hang Chen, Collin Meese, Student Member, IEEE, Mark Nejad, Senior Member, IEEE, Chien-Chung Shen, Member, IEEE Hang Chen and Chien-Chung Shen are with the Department of Computer and Information Sciences, University of Delaware, Newark, Delaware, 19716, USA (e-mail: {chenhang, cshen}@udel.edu). Collin Meese, and Mark Nejad are with the Department of Civil and Environmental Engineering, University of Delaware, Newark, Delaware, 19716, USA (e-mail: {cmeese, nejad}@udel.edu). ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In the field of emotion analysis, much NLP research focuses on identifying a limited number of discrete emotion categories, often applied across languages. These basic sets, however, are rarely designed with textual data in mind, and culture, language, and dialect can influence how particular emotions are interpreted. In this work, we broaden our scope to a practically unbounded set of affective states, which includes any terms that humans use to describe their experiences of feeling. We collect and publish MASIVE, a dataset of Reddit posts in English and Spanish containing over 1,000 unique affective states each. We then define the new problem of affective state identification for language generation models framed as a masked span prediction task. On this task, we find that smaller finetuned multilingual models outperform much larger LLMs, even on region-specific Spanish affective states. Additionally, we show that pretraining on MASIVE improves model performance on existing emotion benchmarks. Finally, through machine translation experiments, we find that native speaker-written data is vital to good performance on this task. § INTRODUCTION In the field of emotion analysis, much NLP research focuses on identifying a limited number of discrete emotion categories, typically using basic emotion sets from the field of psychology <cit.>. These basic emotion sets are rarely designed with textual expression in mind (e.g., , whose model defines basic emotions by the recognizability of facial expressions), and very little research examines the validity of adapting these sets to textual data. Emotion analysis furthermore relies on largely the same emotion categories across languages, including, in some cases, simply translating resources such as lexicons from one language into another <cit.> or translating finetuning and evaluation data <cit.>. Previous research has also shown that existing multilingual models encode meaning in an Anglocentric way <cit.>. As recent studies have found that culture and language influence the particular meaning of emotional terms like "love" <cit.>, models that fail to understand this cultural context or rely on mainstream dialects may also fail to capture the nuances of an author's emotional expression <cit.>. In this work, we argue for a descriptive approach to emotion analysis. 
We broaden our scope from a small set of basic emotions to a practically unbounded set of affective states, which includes any terms that humans use to describe their experiences of feeling, including emotions, moods, and figurative expressions of feelings (e.g. "blue" as an expression of sadness instead of the color) <cit.>. We then define the new problem of affective state identification (ASI), which is a targeted masked span prediction task: given a text description of an emotional experience, we train models to produce single-word affective states that correspond to the description. These affective states may include common emotion categories such as happy or sad, but they also allow us to incorporate nuance and intensity (e.g., elated, calm, jealous, lonely, etc.) as well as other classifications that are not typically considered emotions such as moods (e.g., longer-term feelings of being motivated or stuck). We collect MASIVE: Multilingual Affective State Identification with Varied Expressions, a new benchmark dataset for affective state identification using Reddit data. We use a bootstrapping procedure to discover new affective state labels and collect posts containing natural emotional expressions in English and Spanish, yielding 1600 unique affective state labels in English and 1000 in Spanish.[Our dataset, code, and model checkpoints will be made publicly available upon publication] We evaluate our data collection methods with human annotation, finding that 88% and 72% of our automatically collected English and Spanish labels, respectively, reflect affective states, and document unique features of the datasets including negations and, in Spanish, grammatical gender. We then use this dataset to evaluate the performance of several commonly-used generative models, finding that small fine-tuned models generally outperform LLMs. Beyond ASI, we experiment with using our corpora as pretraining data and show that MASIVE incorporates knowledge that generalizes to existing emotion detection benchmarks. Finally, we assess finetuning and evaluating models on machine-translated data and find that original texts from native speakers are essential for performing ASI. Our contributions in this work are as follows: * We introduce a novel benchmark for affective state identification with language generation models, including a significantly larger label set than prior related benchmarks; * We benchmark multilingual models and show that smaller, finetuned models outperform current LLMs on this dataset; * We analyze the behavior and performance of models on region-specific affective language, gendered language, and negations; and * We empirically argue that both finetuning and evaluating on texts authored by native speakers is vital for capturing nuances in multilingual affective writing § DATA §.§ MASIVE Corpus Our goal is to collect data representing the broad set of ways in which humans describe their feelings. We refer to these expressions as affective states <cit.>; this is an umbrella term incorporating multiple kinds of feelings such as emotions and moods. We collect texts with expressions of affective states from Reddit[Using the PullPush API at <https://pullpush.io/>] using a bootstrapping procedure. Beginning with the adjective forms of the Ekman emotions, we search for texts containing forms of “I feel <affect> and...”, “I am feeling <affect> and...”, where <affect> is replaced with each emotion term. 
Notably, we also search for “I don't feel <affect> and...” and “I am not feeling <affect> and...” to better capture the diversity of ways in which authors can express feelings. We extract affective state terms that follow the “and” from the retrieved posts to form a new set of search phrases with these terms. We repeat these steps, expanding the pool of query affective states in each round. Our primary assumption is that any adjective conjuncts of the query emotion term are also affective states, regardless of whether they are canonical emotion terms. For example, if "happy" was used to query the text "I feel happy and excited," the term "excited" is both an adjective and a conjunct; the same is true of “light” in “I feel happy and light”. In contrast, in "I feel happy and want to smile", "want" is a verb and would not be considered an affective state. We evaluate of this assumption in <ref>. In Spanish, we conduct the same procedure using forms of “Estoy <affect> y...”, “Me siento <affect> y...”, and “Estoy sintiendo <affect> y...”. We also seed the process with the most common Spanish translations of the Ekman emotions on Reddit (see <ref>). Additionally, as Spanish includes both masculine and feminine forms for some terms, we search for both forms where applicable. Finally, we also collect a challenge set including affective state labels associated with regional Spanish varieties, hand-selected by a native Spanish-speaker, to evaluate models' abilities to generalize to less-represented dialects (see <ref>). For both English and Spanish, we run 4 rounds of bootstrapping; for the regional Spanish terms, we run only a single round to avoid introducing non-regional terms. 15 affective states were randomly sampled from both datasets, and all posts containing those 15 affective states were reserved as part of each test set to evaluate models on unseen emotions. Summary statistics describing the English and Spanish splits as well as the regional Spanish challenge set are included in <ref>, and we include a Data Statement in <ref>. §.§ Data Analysis To validate the assumptions of our bootstrapping procedure and examine how affective states are used in our dataset, we collect human evaluations of the automatically identified affective states. Judgments are conducted by 2 native Spanish-speakers in Iberian Culture studies and 2 native English-speakers in Psychology for Spanish and English respectively. We randomly sample 250 texts from each language's test set for evaluation such that 50 texts are shared by each pair. Annotators are provided with a full Reddit post with a single automatically-identified affective state highlighted. We ask annotators to judge the term in context on 3 dimensions, beginning with whether the highlighted term reflects an affective state. If a term is judged to reflect an affective state, annotators are asked to judge whether the highlighted term better reflects an emotion or a mood[We distinguish emotions – shorter-term feelings triggered by identifiable events – from moods – longer-term feelings not necessarily triggered by an event.] and whether the highlighted term is used figuratively (e.g., "blue") or literally (e.g., "sad"). All 3 dimensions are judged on 4-point Likert scales where higher values mean the term primarily reflects an affective state, an emotion, and a literal usage, respectively. Annotators achieved moderate agreement in English (κ=.51) and substantial agreement in Spanish (κ=.69). 
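As a concrete illustration of the extraction rule validated by these judgments, the following sketch marks the adjective conjuncts of a query affective state using a spaCy dependency parse (our data collection relies on spaCy for part-of-speech tagging, see the appendix). The function name and filtering details are illustrative rather than an exact description of our pipeline.

import spacy

nlp = spacy.load("en_core_web_md")   # es_core_news_md for the Spanish data

def extract_new_affects(text, query_term):
    # Return adjective conjuncts of the query affective state found in `text`.
    # For example, "excited" in "I feel happy and excited" would be kept as an
    # adjective conjunct of the query, while "want" in "I feel happy and want
    # to smile" would be discarded because it is a verb.
    new_terms = set()
    for token in nlp(text):
        if token.lower_ != query_term:
            continue
        for conj in token.conjuncts:            # tokens coordinated with the query
            if conj.pos_ == "ADJ":
                new_terms.add(conj.lower_)
    return new_terms

# terms found this way would seed the search phrases of the next bootstrapping round
extract_new_affects("I feel happy and light and want to smile.", "happy")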
Additional details concerning human annotations are included in <ref>. Additionally, we analyze 2 aspects of our dataset that differentiate it from prior emotion detection benchmarks. First, because Spanish is a language with grammatical gender for adjectives, part of the affective state prediction problem in MASIVE includes choosing whether to use the masculine or feminine form in the context of the input. Second, authors in natural settings may also tend to express their feelings by stating how they do not feel (e.g., “I'm not happy, but....”), and we specifically include negations to test models' capability to contend with this construction in both English and Spanish. The results of the aforementioned data annotations as well as the automatically extracted statistics are included in <ref>. Human annotation results are reported as the percentage of affective states within the sample; for negations and grammatical gender, we report the percentage of texts in our datasets that include any target negations or any feminine adjectives.[Recall that a single datapoint may have multiple labels joined by and.] Large majorities (88% and 72% in English and Spanish respectively) of terms were judged to reflect affective states, validating the contents of MASIVE. §.§ Fixed-Label Set Data We additionally evaluate the performance of MASIVE-finetuned models on two previously published datasets in both English and Spanish. A key distinction from MASIVE is that these datasets feature limited label sets; we describe our evaluation procedures in <ref>. In English, we evaluate on GoEmotions <cit.>, a commonly-used emotion dataset consisting of Reddit comments; it is originally labeled with 27 distinct emotion categories, though the authors also relabel the data with the Ekman basic emotions. We additionally evaluate on EmoEvent <cit.>, a dataset with both English and Spanish subsets of Tweets (among other languages) also labeled with the Ekman set. §.§ Machine-Translated Data Finally, we conduct 2 cross-lingual experiments expanding on prior work investigating the use of machine translation and high-resource language models for inference on lower-resource languages <cit.>. In contrast to prior findings, however, we hypothesize that neither translating the training nor evaluation data will be enable competitive performance with models trained on native data. First, using our natural test sets, we evaluate models finetuned on translated data. Second, we evaluate the performance of our native-trained models on translated data, mimicking the translation of lower-resource language data for inference with a model trained on a higher-resource language. In both settings, we use bilingual Opus-MT models <cit.> to independently translate the input documents and target affective state labels. We select Opus-MT models as they are accessible, open-source models, reflecting resources that may be used for large scale translation, and are utilized in experiments in <cit.>. Throughout the experiments, models finetuned on translated data are denoted _Tr. Test sets generated through machine translation are similarly denoted as En_Tr and Es_Tr. § EXPERIMENTAL CONFIGURATION §.§ Models We experiment with finetuning small language models on our original and machine-translated data. We also perform experiments with two Large Language Models (LLMs) in a zero-shot setting. Finetuned Generative Models. 
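A minimal sketch of this translation step, using the Opus-MT checkpoints listed in the appendix, is given below; the batching, truncation settings, and post-processing in our actual pipeline may differ.

from transformers import MarianMTModel, MarianTokenizer

def build_translator(model_name):
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    def translate(texts):
        batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
        generated = model.generate(**batch)
        return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

    return translate

en_to_es = build_translator("Helsinki-NLP/opus-mt-en-es")

# input documents and target affective state labels are translated independently
post_es = en_to_es(["I feel happy and excited about the trip."])[0]
label_es = en_to_es(["excited"])[0]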
Most of our models are based on mT5-Large <cit.> During finetuning and prediction on MASIVE, we mask the automatically identified affective state words wherever they appear and task models to fill them, mimicking mT5's initial pretraining. We additionally experiment with T5-large <cit.> for English only.[No comparable monolingual T5 checkpoint for Spanish has been made publicly available.] In the results, models' superscripts denote that a model was finetuned on our English (T5^En and mT5^En) or Spanish (mT5^Es) corpus. Large Language Models. We evaluate two modern, open-source LLMs–Llama-3[<https://llama.meta.com/llama3>] and Mixtral-Instruct <cit.>–as these models have been specifically evaluated in multilingual settings. We instruct these models to perform the same masked token prediction task as mT5 (see <ref>). Due to context window constraints and input lengths, LLMs are evaluated in a zero-shot setting. Further checkpoint and generation hyperparameter details are included in <ref>. §.§ Metrics §.§.§ MASIVE Evaluation We report top-k accuracy for our models with k ∈{1, 3, 5}[As some samples in the datasets have multiple labels, we calculate top-k accuracy at the sample level using beam search and report average sample-level scores.], along with two generative metrics: the negative log-likelihood (NLL) of the gold affective state and the model's log perplexity. In Spanish, if the gendered form of the prediction does not match that of the gold term (e.g. enojado vs. enojada), the prediction is considered incorrect, but the similarity of the prediction in these cases is captured by the top-k similarity metric, which we describe below. Top-k Similarity. Because our label set is very large, we also report a measure of similarity between the model's top predictions and the gold. Here, we rely on contextual embeddings using multilingual, pre-trained BERT-base <cit.>. To ensure that the similarity model encodes affective senses of each term, we embed the predicted and gold emotion terms within 100-token contexts from the original post and calculate cosine similarity between them. We report the maximum similarity of these contextual embeddings when looking at the top 1, 3, and 5 most likely model predictions. Full details are available in <ref>. §.§.§ Fixed-Label Set Evaluation To evaluate how well our dataset imbues models with general emotional knowledge, we evaluate two variants of mT5: first, mT5 finetuned only on existing emotion benchmarks, and second, mT5 finetuned on MASIVE followed by existing benchmarks (denoted with superscript ^MAS). To adapt the evaluation sets to our generative setting, we append "I feel <extra_id_0>" to the end of each input to match the format of our evaluation on MASIVE (see <ref>), using adjective forms of the gold emotion labels. In this setting, we report top-k accuracy and similarity as we do for MASIVE. Additionally, to adapt our models to the fixed-label set setting, we sort the fixed set of emotion labels by their likelihood according to the model and select the most probable emotion label as the prediction. For these experiments, we report macro precision, recall, and F1 score. § RESULTS §.§ MASIVE Evaluation <ref> presents the performance metrics for finetuned mT5, Llama-3, and Mixtral on our English and Spanish test sets, as well as finetuned T5 for the English test set only. 
Among multilingual models, mT5 outperforms both LLMs on top-k accuracy for both languages (Takeaway #1), despite having drastically fewer parameters.[Llama-3 occasionally refuses to make a prediction if the content discussed is sensitive (e.g., drug use). Results with invalid responses filtered out are included in <ref>.] Additionally, mT5 achieves the highest top-k similarity scores, except for top-1 similarity in Spanish. Between the LLMs, Mixtral tends to outperform Llama-3. This performance difference may be explained by the difference in size between models, as well as the fact that multilingual data was upsampled in Mixtral's pretraining compared to prior models. In English, the large variant of T5 has been shown to slightly outperform mT5 <cit.>. While we find a similar difference, the performance gap is notably quite large. Because the remaining experiments include Spanish data, we focus on mT5. We note, however, that dedicated monolingual English models may offer significantly higher performance on ASI (Takeaway #2) and leave further exploration of the differences between monolingual and multilingual models to future work. While the differences in language and content of the English and Spanish datasets prevent us from making conclusions concerning their relative difficulty, <ref> also shows that performance in Spanish tends to be higher than in English, despite the better representation of English in pre-training and larger size of the collected English data compared to Spanish. This trend could be due to the larger set of unique affective states in our English data than Spanish, with more nuanced affective states that may be difficult for models to predict accurately. §.§ Fixed-Label Set Evaluation To evaluate the generalized emotion detection capabilities afforded by finetuning on MASIVE, <ref> shows the performance of mT5 finetuned on existing English and Spanish emotion benchmarks, both with and without prior finetuning on MASIVE. First, when used as a classifier, we find that mT5 finetuned on MASIVE first achieves a higher macro-F1 for all datasets. This suggests that finetuning on our corpus gives models generalizable knowledge of emotions (Takeaway #3). Because our corpora contain many more affective state labels than the evaluation datasets, models finetuned on MASIVE will include more nuanced terms than basic emotions in the top-k predictions. So, as expected, models finetuned only on the emotion benchmarks achieve higher top-k accuracy and similarity scores, as they are more likely to predict terms within the smaller label sets. The top-k similarity scores for our models, however, remain high, suggesting that the generated affective states are similar to the ground truth basic emotion labels. §.§ Unseen and Regional Set Evaluation To analyze how well models generalize beyond affective states explicitly included in finetuning, we present performance metrics on seen and unseen affective states in both languages in <ref>. In both languages, all models perform considerably better on affective states included in the finetuning data than on unseen affective states. T5^En , however, maintains better performance on unseen affective states than mT5^En, suggesting that monolingual models may better generalize. In addition to unseen affective states, we present evaluation results on a subset of Spanish affective states which are region-specific in <ref>. Similarly to results on the full Spanish data, finetuned mT5^Es outperforms both LLMs in top-k accuracy and similarity. 
Notably, the performance of mT5^Es on this regional subset is comparable to its performance on general unseen Spanish emotions (<ref>), while Llama-3 and Mixtral, which are not explicitly finetuned on our corpora, perform significantly worse on the regional subset than they do on the Spanish data as a whole (<ref>). Because top-k accuracy drops significantly on unseen and region-specific affective states (top-k similarity as well, though less so), future work in this area should prioritize a generalized understanding of affective states, including regionalisms (Takeaway #4). §.§ Grammatical Gender and Negations We break down the top-k accuracy and top-k similarity results for each model by grammatical gender and negations in <ref>. We see again that mT5 outperforms both LLMs across all subsets, and that mT5 often places the gold label among the top 3 or 5 predictions if not the top 1. In particular, mT5^Es performs better on feminine adjectives than masculine adjectives or those with only a single form, and T5^En and mT5^Es perform better on negated targets than non-negated targets (mT5^En shows the same pattern for accuracy, though not similarity). Llama-3 and Mixtral achieve highest accuracy for masculine adjectives and highest similarity for single-form adjectives, while for negations, Llama-3 performs better on non-negations and Mixtral performs slightly better on negations. These results suggest that explicit training on MASIVE may improve performance specifically on unique features of generative ASI (Takeaway #5). §.§ Machine-Translation vs. Natural Data Finally, we evaluate the changes in performance first when using machine-translated finetuning data and alternatively when translating evaluation data in <ref>. First, we find an expected drop in performance when models are finetuned on machine-translated data for both English and Spanish. Interestingly, the drop in accuracy and similarity metrics (90% and 28%, respectively) in Spanish are notably larger than in English (77% and 14%). This could perhaps be explained by the translation model performing better in the Spanish to English direction than English to Spanish, as well as mT5's ability to better generalize in English than in Spanish. As an alternative approach to finetuning on translated data, we also consider the case where data may be translated at inference time. In these cases (En_Tr and Es_Tr in <ref>), we find that performance falls. Artifacts of machine translation have been found to impact evaluation of translation models <cit.>, and, similarly, errors and artifacts of unnatural translation may cause these changes in performance. In contrast to prior work suggesting that performance on the target data translated into English is comparable to finetuning on the target language for tasks such as sentiment detection, our results suggest that for our task, machine-translating the evaluation data leads to poorer performance, and translating either at training or inference time result in similar performance (Takeaway #6). § RELATED WORK Emotion Taxonomies. Many different models of human emotion have been proposed, intending to capture the universal experience of different emotions across cultures. Some of the most notable categorical models in psychology and NLP research are the <cit.> basic emotion set derived from facial expression and the <cit.> basic emotion set which assumes emotions occur in opposing pairs (e.g. joy and sadness), though other models exist (e.g., ). 
Multiple different dimensional models have also been proposed, situating emotions in a space governed by features such as pleasantness and activation <cit.>. Many such models of emotions have been frequently compared and evaluated in psychology and as they apply to emotion detection (see ). Emotion and Language Generation. Numerous approaches to automated emotion detection in text have been proposed, including emotion lexicons <cit.> and classification models (see for a review of approaches). Most of this work focuses on small, finite emotion sets, usually Ekman or Plutchik, though some prior work has used larger sets <cit.>. <cit.> in particular collect data for a very large but still strictly limited set of emotions. More recently, language generation tasks have been proposed that call for models with greater emotional understanding, such as emotional dialogue generation <cit.>, controllable generation <cit.>, and emotion trigger summarization <cit.>. Given that language generation models have been employed to unify these and other classification and generation tasks, endowing models with a greater understanding of human emotions would greatly benefit multiple applications. Cross-cultural Emotion Perception. Many researchers have suggested that a basic set of emotions are universal, while others have argued that emotions are shaped by culture. Past work has built on Ekman's proposal and provided evidence that emotion categories are universal <cit.>, with finding little support for the argument that language plays a foundational role in perceiving emotions. Additionally, past work has at least in part supported differences in emotion perception and recognition across languages and cultures <cit.>, even with bilingual speakers <cit.>. Past work in sentiment and emotion in NLP frequently translates English corpora to enable multilinguality <cit.>. Some work, however, has demonstrated cross-cultural differences in model performance <cit.>, and approaches that do not rely on machine translation have also been proposed (e.g., ). Our work evaluates the use of machine translation for the ASI task, and we find that machine translation may not be sufficient for cross-lingual transfer. § CONCLUSION In this work, we introduce the novel task of affective state identification, a language generation task prioritizing the authors' natural expressions of their feelings rather than using a prescribed set of emotion labels. For this task, we automatically collect and publish two datasets of Reddit posts in English and Spanish, both containing over 1,000 unique affective state labels. We use this dataset to benchmark multilingual generative models, and find that (Takeaway #1) small finetuned T5 and mT5 models outperform zero-shot LLMs. Results specifically show that (Takeaway #2) T5 significantly outperforms mT5 in English on ASI, suggesting that monolingual models may be more capable. Additionally, we show that (Takeaway #3) models finetuned on our corpora transfer knowledge that generalizes to existing emotion detection benchmarks. In analyzing model performance on unseen emotions and Spanish regionalisms, we argue that (Takeaway #4) generalization to a broader set of affective states, including those from underrepresented dialects, is an important avenue for future work. With respect to grammatical gender in Spanish and negations, (Takeaway #5) finetuning on MASIVE improves on specific linguistic constructions unique to generative ASI. 
Finally, we quantify the observed performance differences when using machine-translated data at finetuning or inference time, finding that in contrast to prior work, (Takeaway #6) machine translation leads to large performance drops. We hope these results spark future work into ASI to enable prediction of more nuanced feelings in a variety of languages and contexts, and ultimately, enable prediction of an unbounded set of labels. § LIMITATIONS We limit ourselves in this work to investigating two high-resource languages, English and Spanish, in part because for this application, we find it important that members of the research team be able to speak the languages of study fluently. Additionally, we gather data from one source, Reddit, which limits the demographics of the people whose experiences are represented in our data. This choice of data source may particularly limit our Spanish data, which includes less data and fewer labels than English (<ref>). We choose not to control for things like topic or subreddit when collecting English and Spanish data separately because we wish to collect a natural variety of data, but this also means that we do not claim our two datasets to be equivalent. Our data gathering framework collects only explicit expressions of affective states by searching for statements including an “I feel”-style template. While we can use models trained on this type of data to predict affective state labels for any input by simply appending an “I feel” statement to be filled (see <ref>), our training targets do not include this type of data, and this paradigm impacts the types of affective states we are likely to collect. We also acknowledge that our choices of specific resources limit our work in various ways. We use only Opus-MT models to perform our machine translation experiments because they exhibit good performance in both languages; however, it is possible that we would see different results with different translation models. Our similarity metric also uses pre-trained BERT embeddings because of the benefits of contextual embeddings and subword tokenization, but there are many other possible choices of embedding framework that may more accurately capture emotional nuances. Finally, we evaluate only open-source LLMs on our dataset. § ETHICS STATEMENT We strictly collect publicly available user-authored texts on the pseudonymous social media website Reddit, but we acknowledge the privacy concerns of users when collecting data from social media. Accordingly, we will release the collected texts only with randomly assigned IDs and usernames stripped. We discourage others from attempting to identify authors of the texts in the collected dataset, and will remove data from the dataset upon request. Because we rely entirely on open-source models, including open-source LLMs, and make our data available, our results are fully reproducible. We also release our code and model checkpoints along with our data. In total, our finetuning and evaluation amounts to approximately 73 hours using Nvidia A100 GPUs. Our task allows models to predict a larger set of affective states, capturing more nuanced expressions of an authors' feelings than traditional emotion detection. At the same time, a larger label set could exacerbate the consequences of misclassification in sensitive contexts (e.g., mental health and crisis settings). 
In some applications of this task where this may be an important consideration, the label set can be artificially restricted, as we show in our external evaluation experiments. Finally, the aim of predicting authors' expressions of their own feelings can require models to generate regional or dialectal texts. Prior work has identified dialectal biases in language models (e.g., African American Language; ) and we find that all evaluated models perform poorly on regional varieties of Spanish. We hope future work makes progress toward closing performance gaps among dialects and language varieties. § ACKNOWLEDGEMENTS This work was supported in part by grant IIS2106666 from the National Science Foundation, the Defense Advanced Research Projects Agency (DARPA) Cross-Cultural Understanding (CCU) program under Contract No HR001122C0034, National Science Foundation Graduate Research Fellowship DGE-2036197, a research gift from Amazon, the Columbia Provost Diversity Fellowship, and the Columbia School of Engineering and Applied Sciences Presidential Fellowship. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and should not be interpreted as representing the official views or policies of the National Science Foundation, the Department of Defense, or the U.S. Government. We thank Julia Hirschberg and Melanie Subbiah for feedback on earlier drafts of this work. § DATA STATEMENT §.§ Curation Rationale The aim of collecting the texts contained in MASIVE was to produce both a training dataset and benchmark for affective state identification. Affective state identification tasks models with predicting individual terms reflecting how a text's author feels, and in particular, predicting terms that would be used by the author themself. The dataset collection process was designed to automatically extract a large set of possible affective state labels from texts where an author explicitly describes how they feel. Both an English and Spanish version of the dataset were collected in the same fashion to enable research on cross-lingual work, as well as a small set of regional Spanish to enable work on linguistic variation. We intend to make the dataset publicly available §.§ Language Variety MASIVE contains texts both in English (en) and Spanish (es). Data collection was not restricted to a particular variety of English or Spanish, and distributions of these varieties likely reflects the overall demographics of English and Spanish-speaking users on Reddit. A small set of data was collected specifically to reflect Spanish specific to particular regions, including terms primarily associated with Spanish spoken in Mexico, Spain, Venezuela, and El Salvador among other regions and countries. §.§ Annotator Demographics Two sets of annotators were involved in validating the automatically extracted labels in MASIVE. For the English data, annotators were 2 native English-speakers and Psychology undergraduate students. Both English data annotators were American and female. For the Spanish data, annotators were 2 native Spanish-speakers and graduate students in the department of Latin American and Iberian Cultures. The Spanish data annotators were Colombian and Ecaudorian, and both were male. §.§ Speech Situation The collected texts in MASIVE were not restricted to a particular range of time, and may have been published anytime between the founding of Reddit (2005) and the time of data collection (April, 2024). 
Texts were also not restricted to a particular place, but likely reflect the countries of origin of English and Spanish-speaking Reddit users. All texts were originally written and published on Reddit, which may or may not have been edited before they were included in the dataset. As with most interactions through Reddit posts, the texts reflect asynchronous interactions and are likely intended for a general public audience in most cases. §.§ Text Characteristics The texts in MASIVE may discuss a wide variety of topics. All texts, however, contain explicit expressions of feelings or explicit mentions of terms that may reflect feelings. Thus, many texts may reflect personal narratives that provide context for an author's feelings. Thus, the dataset may also discuss sensitive topics and include the kinds of offensive or harmful content that can be found online. § DATA COLLECTION AND ANNOTATION §.§ Seed Emotions The specific adjective forms of the Ekman emotions used to seed our bootstrapping procedure are shown in <ref>. These are also the terms used as the gold in our fixed-set label evaluation, with the addition of `nothing' for the no-emotion class if it is used. For fixed-label evaluation of GoEmotions (27), the following terms are used for the expanded label set: `admiration', `amused', `angry', `annoyed', `approving', `caring', `confused', `curious', `desire', `disappointed', `disapproval', `disgusted', `embarrassed', `excited', `afraid', `grateful', `grief', `happy', 'love', `nervous', `optimistic', `proud', `realized', `relieved', `remorseful', `sad', `surprised', and `nothing'. §.§ Regional Spanish Affective States To collect affective state labels associated with one or more particular Spanish-speaking regions, we use the following set of terms: `mamado/a', `patitieso/a', `emputado/a', `encandilado/a', `arrechado/a', `fastidiado/a', `encabronado/a', `hallado/a', `rayado/a', `achispado/a', `ahuevado/a', `enrabiado/a', `tusa', `chocho/a', `encachimbado/a', `bravo/a', `apantallado/a', `embromado/a', `engorilado/a', `alicaido/a', `flipando/a', `cagado/a', `aguitado/a', `engrinchado/a', `chato/a', `chipil', `picado/a', `bajoneado/a', `acojonado/a', `arrecho/a'" The terms are not exhaustive, but reflect varieties of Spanish spoken in Spain, Chile, Colombia, Venezuela, Mexico, Bolivia, Argentina, Uruguay, and Paraguay. §.§ Data Annotation The instructions and interface given to our human annotators are shown in <ref> and <ref>, respectively. Annotators were paid $23/hour for their work in accordance with the standards of their university. Each annotator completed a pilot task of 30 examples before beginning to annotate the data in order to build familiarity with the platform and task. § EXPERIMENTAL SETUP §.§ Generation Configuration Checkpoints. Throughout our experiments, we use the large variants of T5 (770 million parameters; google-t5/t5-large) and mT5 (1.2 billion parameters; google/mt5-large). For our two LLMs, we evaluate the instruct variants of Llama-3 (8 billion parameters; meta-llama/Meta-Llama-3-8B-Instruct) loaded in bfloat16 and Mixtral (7×22 billion parameters; mistralai/Mixtral-8x22B-Instruct-v0.1). Mixtral is accessed through the <fireworks.ai> API. Beyond the evaluated models, we use two open-source, unidirectional translation models for our translation experiments. In particular, we employ the Helsinki-NLP English-to-Spanish (Helsinki-NLP/opus-mt-en-es) and Spanish-to-English (Helsinki-NLP/opus-mt-es-en) models. 
We also use a multilingual BERT checkpoint as part of the similarity metric (168 million parameters; bert-base-multilingual-uncased). Finally, we also rely on <cit.> to identify parts of speech in English (en_core_web_md) and Spanish (es_core_news_md) during our data collection. Generation. For T5, mT5, and Llama-3, we use beam search to generate the top-k most likely predictions, with 5 beams (as we need only the top-5 outputs). We use the default settings of Huggingface's GenerationConfig (https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig), including, e.g., no repetition penalty, etc.; though we expect a single-word output, we allow generations of up to 32 tokens. The API used to run inference with Mixtral does not allow retrieving the top 5 most probable predictions as we do with the aforementioned models. Instead, Mixtral predictions are generated with a top-k of 5, and a temperature of 0.5. The top 5 candidate generations are then reranked by the log-probability according to Mixtral to be used in evaluating the ranked, top-5 predictions. Also due to accessing Mixtral through an API, we were not able to calculate the log perplexity of the ground truth labels. Hyperparameters. T5 and mT5 models are finetuned with a batch size of 4 for 3 epochs each. Model parameters are optimized using Adafactor <cit.> as implemented by Huggingface's transformers library, with a learning rate of 1 × 10^-4, Huggingface's linear learning rate scheduler with default parameters, and a weight decay parameter (here, an L2 penalty) of 0.01. For each model, all data is tokenized using the correct pretrained tokenizer corresponding to its pretrained checkpoint. Any input that is longer than 512 tokens (including the end-of-sequence token) is trimmed to fit; in order to preserve the target affective state masks and the grammatical integrity of the text, this trimming removes full sentences (as parsed by <cit.>) from the end of the text if possible (i.e., if this will not remove a target mask), or the beginning otherwise, until the text fits within 512 tokens. §.§ Prompts <ref> shows the prompts provided to Mixtral and Llama-3 throughout our experiments. In a minority of cases, models would reply in the form "Here is a list of terms to fill each <MASK>: ", in which case, only the terms following the colon were considered as the model's prediction. §.§ Machine Translation Configuration In the finetuning experiment, we subset the English data and translated English-to-Spanish data to keep the number of training steps constant across settings. For these two models, we repeat the experiment with 5 different random subsets and report the averages across the five trials. § TOP-K SIMILARITY Let P = [p_1, p_2, p_3, …, p_n], where n ≥ k, be a list of predictions ordered according to descending likelihood, and let g be the gold (where p_i and g are strings). Additionally, let E(x) be a function on a term x that incorporates 100 tokens of context, tokenizes and embeds the sequence with a pre-trained BERT tokenizer, and returns the contextual embedding corresponding to the first sub-word token in x. Then, we report top-k similarity specifically as sim_k(P, g) = max_i ≤ k[cosine_sim(E(p_i), E(g))]. § EXTENDED RESULTS §.§ Limited Evaluation for Llama-3 For some inputs, Llama-3 would decline to make a prediction, particularly for inputs that discuss topics such as depression or drug use.
While these are important topics for models to be able to accurately analyze as they are increasingly applied in mental health contexts, Llama-3's behavior may unfairly skew its evaluation results. <ref> presents updated results for Llama-3 on the subset of texts for which the model's response followed the correct format. 87% of English, 96% of Spanish, and 98% of regional Spanish responses by Llama-3 were formatted correctly. Among the datasets, English scores improve the most given the higher percentage of invalid responses, and scores improve only by up to 0.6% top-k accuracy and 0.003 top-k similarity. Considering these results, no conclusions made are altered. §.§ Full Fixed-Label Set Results Extended results from the fixed-label evaluation are given in <ref>. Notably, we include results using T5 in English, where T5 represents a model finetuned only on the target dataset and T5^MAS represents a model finetuned on MASIVE and then finetuned on the target dataset. Precision, recall, and F1 are calculated by ranking the adjective forms of each emotion class (<ref>) according to model likelihood and taking the most likely one as the predicted class, while top-k accuracy and similarity are calculated in a generative setting as in the remainder of the paper. T5 scores consistently well on F1; pretraining on MASIVE does not usually improve T5's performance on GoEmotions, while it does for EmoEvent (En). Because MASIVE pretraining does improve performance on EmoEvent (Es), it is possible that English T5 is already a very strong baseline and potentially near the performance ceiling of generative models.
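As a concrete companion to the top-k similarity metric defined above, the following is a minimal sketch of how sim_k can be computed with the multilingual BERT checkpoint mentioned in the setup. It is an illustration rather than the exact evaluation code: the 100-token context window and sub-word alignment are simplified, and a "<MASK>" placeholder is assumed to mark where a predicted term is filled into the context.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
enc = AutoModel.from_pretrained("bert-base-multilingual-uncased")

def first_subword_embedding(term: str, template: str) -> torch.Tensor:
    """Fill `term` into the '<MASK>' slot of `template` and return the contextual
    embedding of the first sub-word token of `term` (a simplified E(x))."""
    start = template.index("<MASK>")
    text = template.replace("<MASK>", term, 1)
    inputs = tok(text, return_tensors="pt", truncation=True,
                 return_offsets_mapping=True)
    offsets = inputs.pop("offset_mapping")[0].tolist()
    pos = next(i for i, (s, e) in enumerate(offsets) if s <= start < e)
    with torch.no_grad():
        hidden = enc(**inputs).last_hidden_state[0]
    return hidden[pos]

def top_k_similarity(predictions, gold, template, k=5):
    """sim_k(P, g): max cosine similarity between the gold term and the k most
    likely predictions, all embedded in the same context."""
    e_gold = first_subword_embedding(gold, template)
    return max(
        torch.cosine_similarity(first_subword_embedding(p, template), e_gold, dim=0).item()
        for p in predictions[:k]
    )
```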
http://arxiv.org/abs/2407.12742v1
20240717170044
On the existence of reflecting $n$-queens configuration
[ "Tantan Dai", "Tom Kelly" ]
math.CO
[ "math.CO", "math.NT", "05D40, 05C15, 05B30, 11P99" ]
Georgia Institute of Technology tdai44@gatech.edu Georgia Institute of Technology tom.kelly@gatech.edu Research supported by the National Science Foundation under Grant No. DMS-2247078. § ABSTRACT In 1967, Klarner proposed a problem concerning the existence of reflecting n-queens configurations. The problem considers the feasibility of placing n mutually non-attacking queens on the reflecting chessboard, an n× n chessboard with a 1× n “reflecting strip” of squares added along one side of the board. A queen placed on the reflecting chessboard can attack the squares in the same row, column, and diagonal, with the additional feature that its diagonal path can be reflected via the reflecting strip. Klarner noted the equivalence of this problem to a number theory problem proposed by Slater, which asks: for which n is it possible to pair up the integers 1 through n with the integers n+1 through 2n such that no two of the sums or differences of the n pairs of integers are the same. We prove the existence of reflecting n-queens configurations for all sufficiently large n, thereby resolving both Slater's and Klarner's questions for all but a finite number of integers. On the existence of reflecting n-queens configurations Tom Kelly ====================================================== § INTRODUCTION An n-queens configuration is a placement of n queens on an n× n chessboard such that no two queens are in the same row, column, or diagonal. The classical n-queens problem asks how many n-queens configurations exist for a given n × n chessboard. The problem can also be considered on a toroidal chessboard, where the diagonals wrap around the board from left to right and from top to bottom. The problem was first proposed in 1848 by German chess composer Max Bezzel and elicited the interest of several prominent mathematicians, including Gauss and Pólya. Detailed accounts of the historical development of the n-queens problem can be found in the survey by Bell and Stevens <cit.> and the work by Bowtell and Keevash <cit.>. Let Q(n) denote the number of classical n-queens configurations, and let T(n) denote the number of toroidal n-queens configurations. In 1874, Pauls <cit.> proved that Q(n)>0 for every n≥ 4. In 1918, Pólya <cit.> showed that T(n)>0 if and only if n≡ 1 or 5 6. In 1994, Rivin, Vardi, and Zimmerman <cit.> conjectured that log Q(n) = Θ (nlog (n)) and that log T(n) = Θ (nlog (n)) for n≡ 1,5 6. In 2017, Luria <cit.> showed that T(n)≤ ((1+o(1))ne^-3)^n and that there exists a constant α > 1.587 such that Q(n)≤ ((1+o(1))ne^-α)^n for all n. Bowtell and Keevash <cit.> and independently Luria and Simkin <cit.> proved that Q(n)≥ ((1+o(1))ne^-3)^n for all sufficiently large n. Bowtell and Keevash <cit.> also proved that T(n) ≥ ((1 - o(1))ne^-3)^n for n ≡ 1,5 6, thereby giving an asymptotic solution to the toroidal n-queens problem. These results, combined with Luria's result, completely settled the conjecture by Rivin, Vardi, and Zimmerman <cit.>. Furthermore, Simkin <cit.> improved the bounds for the classical n-queens problem, showing that there exists a constant 1.94<α < 1.9449 such that Q(n) = ((1± o(1))ne^-α)^n. Glock, Munhá Correia, and Sudakov <cit.> investigated a natural extension of the n-queens problem, known as the n-queens completion problem. This problem asks under what conditions a given placement of mutually non-attacking queens can be extended to an n-queens configuration. They showed that any partial configuration with at most n/60 queens can be completed. 
Additionally, they provided a partial configuration of roughly n/4 queens that cannot be extended into a complete n-queens configuration. A key tool in their work is their “rainbow matching lemma”, which is also crucial to the proof of our main theorem. We consider a slight variation of the n-queens problem. Consider an n× n chessboard with an additional 1× n strip of squares, called the reflecting strip, attached to one side of the chessboard. Without loss of generality, we assume the reflecting strip is placed above the n× n chessboard. The squares on the reflecting strip are labeled from left to right as 1,2,…, n. The kth reflecting diagonal is the union of the two diagonals (one of which may be empty) whose extension intersects with the kth square of the reflecting strip. A diagonal whose extension does not intersect the reflecting strip is called a non-reflecting diagonal. An n× n chessboard with the reflecting strip, along with the rows, columns, reflecting diagonals, and non-reflecting diagonals defined above, is called a reflecting chessboard. Two queens attack each other on a reflecting chessboard if they are in the same row, column, reflecting diagonal, or non-reflecting diagonal. A reflecting n-queens configuration is a placement of n mutually non-attacking queens on the n × n reflecting chessboard. Figure <ref> provides an illustration of the reflecting chessboard. The figure on the left illustrates the 3rd reflecting diagonal (in cyan) and the 8th reflecting diagonal (in pink) on the reflecting 8× 8 chessboard. The figure on the right shows two queens that do not attack each other on a classical 8× 8 chessboard but attack each other on the reflecting 8× 8 chessboard via the 4th reflecting diagonal. See also Definition <ref>. In this paper, we prove that reflecting n-queens configurations exist for all sufficiently large n. This question was first proposed by Klarner <cit.> in 1967 as an alternative interpretation of a number theory problem of Slater <cit.>, which we introduce in detail in the following section. The problem also appears in the book of Guy <cit.> on unsolved problems in number theory and the survey of Bell and Stevens <cit.>. §.§ A Related Number Theory Problem For n ∈ℕ, we let [n] = {1, …, n}. In 1962, Shen and Shen <cit.> proposed the following research question: for which n≥ 3 is it possible to divide the elements in the set [2n] into pairs (a_i,b_i), such that for all i∈ [n], the 2n sums and differences b_i± a_i are distinct. This problem was solved by Huff <cit.> using a number theoretic approach. In 1963, Slater <cit.> proposed a more restricted version of this problem, which states: for which values of n is it possible to form pairs (1,a_1), (2,a_2), …, (n, a_n), where {a_1,…,a_n}={n+1,n+2,…, 2n}, such that for all i∈ [n], the 2n sums and differences a_i± i are all distinct. Slater noted that there is no solution to the problem when n=2,3, and 6, and conjectured that solutions exist for all other n. Klarner <cit.> extended this line of inquiry by proposing the question on the existence of reflecting n-queens configurations. Klarner showed that Slater's problem has a solution for a given n if and only if there exists a reflecting n-queens configuration. To see the equivalence, consider an n× n reflecting chessboard with rows labeled from top to bottom starting at the row below the reflecting strip as 1,2,…,n, and columns labeled from left to right as n+1, n+2, …, 2n. 
Two queens placed on row i column j and on row i' column j' attack each other if they are: * in the same row, i.e., i=i', * in the same column, i.e., j=j', * on the same “plus-diagonal”, i.e., i+j =i'+j', * on the same “minus-diagonal”, i.e., i-j =i'-j', or * on the same reflecting diagonal and not on the same “plus-diagonal” or “minus-diagonal”, i.e., i+j =j'-i' or j-i =i'+j'. A solution to Slater's version of the problem avoids all these constraints, thus yielding a reflecting n-queens configuration, and vice versa. Therefore, there is a one-to-one correspondence between the solutions to Slater's problem for n and reflecting n-queens configurations. In <cit.>, Klarner showed that reflecting n-queens configurations exist for n=4,5,7, and 8. Subsequently, in <cit.>, Sebastian extended this result for n=9,…, 27. Our goal is to establish the existence of reflecting n-queens configurations for all sufficiently large n. Before presenting the proof, we formulate the problem mathematically and introduce some necessary definitions. §.§ Algebraic Formulation We formulate the n× n chessboard as [n]× [n]. We label the rows from top to bottom with 1,2,…,n, and the columns from left to right with 1,2,…,n. In the following definition, we define necessary terminologies regarding the chessboard mathematically. Consider a chessboard [n]× [n]. * For i∈ [n], define row i to be R_i = {(i,j): j∈[n]}. Let = {R_i:i∈[n]} denote the set of rows. * For j∈ [n], define column j to be C_j = {(i,j): i∈[n]}. Let = {C_j:j∈[n]} denote the set of columns. * For k∈{-n,…, 0, …, n}, define the kth plus-diagonal to be D_k^+ = {(i,j)∈ [n]× [n]: i+j-(n+1)=k}, and the kth minus-diagonal to be D_k^- = {(i,j)∈ [n]× [n]: i-j=k}. Let = {D_k^+:k∈{-(n-1),…, 0,…, n-1}}∪{D_k^-:k∈{-(n-1),…, 0,…, n-1}} denote the set of non-empty diagonals. * A reflecting diagonal is defined as RD_ℓ = D_ℓ-(n+1)^+∪ D_-ℓ^- for ℓ∈ [n]. A non-reflecting diagonal is a diagonal in that is not part of a reflecting diagonal. The set of reflecting diagonals is denoted by ℛ𝒟={RD_ℓ:ℓ∈ [n]}, and the set of non-reflecting diagonals is denoted by 𝒩𝒟={D_k^+: k∈{0,…, n-1}}∪{D_k^-: k∈{0,…, n-1}}. * A line is defined to be an element in the set =∪∪ℛ𝒟∪𝒩𝒟. Although each of these definitions technically depends on n, there will be no ambiguity. Note that RD_ℓ is the union of the two diagonals (one of which may be empty) whose extensions intersect the reflecting strip at the ℓth slot. Recall that a reflecting n-queens configuration is a placement of n queens on an n × n reflecting chessboard such that no two queens are contained in the same line. We prove the existence of reflecting n-queens configurations for all large enough n, thereby resolving both Slater's and Klarner's questions for all but finitely many n. A reflecting n-queens configuration exists for all sufficiently large n. In Section <ref>, we discuss the connection between the reflecting n-queens problem and the rainbow matching problem, highlighting how insights from the rainbow matching problem can help us with the understanding of the reflecting n-queens problem. Section <ref> provides the necessary tools for proving the main theorem and presents the proof of the main theorem. Note that our proof will only work when n is sufficiently large. It remains open to show that reflecting n-queens configurations exist for all n. Moreover, we could ask the question about how many possible reflecting n-queens configurations there are for any integer n. 
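Both the attack conditions and Slater's reformulation above are easy to check by machine, which helps build intuition on small boards. The following is a small illustrative sketch (not part of the paper): it encodes a placement as in Slater's problem, one queen per row i with column a_i in {n+1,…,2n}, and tests whether the 2n sums and differences a_i ± i are pairwise distinct.

```python
from itertools import permutations

def is_reflecting_config(a):
    """a[i-1] is the column label (from {n+1,...,2n}) of the queen in row i.
    By the correspondence above, the placement is a reflecting n-queens
    configuration iff the 2n values a_i + i and a_i - i are all distinct."""
    n = len(a)
    vals = [a[i] + (i + 1) for i in range(n)] + [a[i] - (i + 1) for i in range(n)]
    return len(set(vals)) == 2 * n

def count_reflecting_configs(n):
    """Brute force over all ways to assign distinct columns to the n rows."""
    return sum(is_reflecting_config(p) for p in permutations(range(n + 1, 2 * n + 1)))

for n in range(2, 9):
    print(n, count_reflecting_configs(n))
# per Slater and Klarner, the count is 0 for n = 2, 3, 6 and positive for n = 4, 5, 7, 8
```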
Further variations of the n-queens problem can be found in the survey on the n-queens problems by Bell and Stevens <cit.>. § THE RAINBOW MATCHING LEMMA The proof of our main theorem is inspired by the work of Glock, Munhá Correia, and Sudakov <cit.> on the n-queens completion problem, which asks under which condition a given partial configuration can be extended to an n-queens configuration. To answer this question, Glock, Munhá Correia, and Sudakov presented the “rainbow matching lemma” which allowed them to find perfect rainbow matchings in certain graphs. A rainbow matching in an edge-colored graph is a matching in which all edges have distinct colors, and a perfect matching in a graph is a matching that saturates every vertex. We apply a generalized version of the “rainbow matching lemma” to prove our main theorem. Historical results and recent developments on rainbow matching problems are discussed in <cit.>. We can translate the problem of the existence of a reflecting n-queens configuration into a rainbow matching problem. Given an n× n reflecting chessboard, we construct a complete bipartite graph K_n,n with one part =⋃_i=1^n{R_i} representing the n rows of the chessboard and the other part =⋃_i=1^n{C_i} representing the n columns. An edge {R_i,C_j} corresponds to square (i,j) on the chessboard. A matching in the graph corresponds to a placement of queens in the reflecting chessboard such that no two queens are in the same row or column. We view the diagonals as colors and assign to each edge {R_i, C_j} two colors corresponding to the two diagonals containing square (i,j), as follows: * If 0≤ i+j-(n+1)≤ n, assign D_i+j-(n+1)^+ to {R_i,C_j}. Otherwise, assign RD_i+j to {R_i,C_j}. * If 0≤ i-j≤ n, assign D_i-j^- to {R_i,C_j}. Otherwise, assign RD_j-i to {R_i,C_j}. A reflecting n-queens configuration corresponds to a perfect rainbow matching, a matching that saturates all the vertices in the graph and in which the color sets of the edges in the matching are pairwise disjoint. Similarly, the number theory problem proposed by Shen and Shen <cit.> discussed in Section <ref> can be formulated as determining whether a particular coloring of K_2n has a perfect rainbow matching. To understand the existence of a perfect rainbow matching in our situation, we need the generalized version of the rainbow matching lemma in <cit.>. Before stating the generalized rainbow matching lemma, we need the following definitions. Let G be a graph. A t-fold edge-coloring of G is an assignment of sets of t colors to the edges of G. This coloring is called b-bounded if, for every vertex v, each color appears in at most b edges incident to v, and any pair of colors appears on at most b edges together. In particular, a t-fold coloring is proper if the edges at each vertex have pairwise disjoint color sets, and is linear if for every pair of colors, there is at most one edge that contains both colors. A subgraph H of G is rainbow if for any pair of edges of H, their color sets are disjoint. The degree of a color is the number of edges whose color set contains that color. In the original version of the “rainbow matching lemma”, Glock, Munhá Correia, and Sudakov <cit.> considered proper and linear t-fold coloring of graphs. They remarked that the proper and linear conditions can be weakened to the t-fold coloring being b-bounded. This leads us to the following generalized version of the rainbow matching lemma. For all α>0 and b,t∈ℕ, there exists ε>0 such that the following holds for all sufficiently large n. 
Let G be a bipartite graph with parts A, B of size n with a b-bounded t-fold edge-coloring. If there exists some d such that * every vertex has degree (1±ε)d, * every color has degree at most (1-α)d, and * there are at least α n^2 edges between any two sets A'⊆ A and B'⊆ B of size at least (1-α) d, then G has a perfect rainbow matching. Recall that our goal is to find perfect rainbow matchings in complete bipartite graphs. The lemma tells us that if we have a complete bipartite graph such that all the vertices have roughly the same degrees, the degree of each color is a bit smaller than that of the vertices, and there are many edges between any sufficiently large sets of vertices, then we can find a perfect rainbow matching. Note that the first and the third conditions are trivially satisfied by the complete bipartite graph K_n,n. However, the reflecting queens coloring does not satisfy the second condition as the main diagonals D_0^+ and D_0^- have size n, and thus the two corresponding colors have degree n. The reflecting diagonals also have size n - 1. Nevertheless, on average, the colors have degree 3n/4. Thus, our goal is to find a subgraph G of K_n,n, such that the degree of each color is significantly smaller than that of the vertices. Now, to prove the existence of an n-queens configuration, it suffices to show that the subgraph G has a perfect rainbow matching. We will find such a subgraph G in the following section and complete the proof of the main theorem. § PROOF OF THEOREM <REF> In this section, we prove Theorem <ref>. As mentioned in the previous section, our goal is to find a subgraph of K_n,n with a 2-fold edge-coloring, where edges represent the squares on the reflecting chessboard and the colors of the edges represent the diagonals in which the corresponding square is contained, to which we can apply Lemma <ref>. In particular, we want to find a subgraph such that the degree of the colors is significantly smaller than the degree of the vertices. Equivalently, we want to find a subset S of the reflecting chessboard such that the diagonals contain significantly fewer squares than the rows and the columns. We prove the existence of such a subset S in the following lemma. For all sufficiently large n, there exists a subset S⊆ [n]× [n] such that * each row and each column has (1± n^-1/4)5n/6 squares in S, * each non-reflecting diagonal and each reflecting diagonal has at most 119n/144 squares in S, and * for every A, B ⊆ [n] satisfying |A|, |B| ≥ 119n/144, we have |S ∩ (A × B)| ≥ n^2 / 120. To prove Lemma <ref>, we first define a weight function w on [n]× [n] such that the sum of the weight of each line lies within a particular range, as detailed in Lemma <ref>. Using this weight function, we construct the desired subset S of [n]× [n] by including every square (i,j)∈ [n]× [n] independently with probability w((i,j)). The expected degree of each line corresponds to its total weight under w, and with high probability, the actual degree will be close to its expectation. We can then apply concentration inequalities and the union bound to show the desired properties. For all n∈ℕ, there exists a weighting w:[n]× [n]→ [17/24,1] such that * the sum of the weights in each row and each column is 5n/6± 10/3, * the sum of the weights in each non-reflecting diagonal is less than 59 n/72, and * the sum of the weights in each reflecting diagonal is less than 59n/72. 
Consider the function w: [n]× [n]→ [17/24,1] with w((i,j)) = 43/48 if i/(n+1)∈[0,1/3), j/(n+1) ∈ [0,1/3)∪ (2/3,1] 17/24 if i/(n+1)∈[0,1/3), j/(n+1)∈[1/3,2/3] 41/48 if i/(n+1)∈[1/3,2/3], j/(n+1) ∈ [0,1/3)∪ (2/3,1] 19/24 if i/(n+1)∈[1/3,2/3], j/(n+1)∈[1/3,2/3] 3/4 if i/(n+1)∈(2/3,1], j/(n+1) ∈ [0,1/3)∪ (2/3,1] 1 otherwise. For a line L∈, define its weight w(L) to be the sum of the weights of all squares in L. The weight function divides the [n]× [n] grid into 9 boxes. We label them 1, …, 9 from left to right and top to bottom. In Figure <ref>, the diagram on the left shows the labeling of the boxes, and the diagram on the right gives the weight of the squares in each box. Let r be the remainder of n mod 3. Note that boxes 1, 3,7,9 each have (n-r)/3 rows and columns, boxes 4 and 6 each have (n+2r)/3 rows and (n-r)/3 columns, boxes 2 and 8 each have (n-r)/3 rows and (n+2r)/3 columns, and box 5 has (n+2r)/3 rows and columns. First, to see (<ref>), observe that 2(43/48)+17/24=2(41/48)+19/24=2(3/4)+1=5/2. Since each box has (n± 4)/3 rows, each row has weight (5/2)(n±4)/3=5n/6± 10/3, as desired. Similarly, since 43/48+41/48+3/4=17/24+19/24+1=5/2, each column has weight (5/2)(n±4)/3=5n/6± 10/3, as desired. Next, we prove (<ref>) by considering the weight of the non-reflecting diagonals. Since the weight function is symmetric, it suffices to consider the non-empty plus diagonals D_k^+ for k∈{0,1,…, n-1} as w(D_k^+)=w(D_k^-) for all k∈{0,1,…, n-1}. We claim that the weight of each plus-diagonal is dominated by the weight of D_0^+. Indeed, for each k∈ [n], the diagonal D_k^+ has one fewer square than D_k-1^+. Moreover, the number of squares of D_k^+ and of D_k-1^+ in each box differs by at most 1. Since the weight of each square is between 17/24 and 1, we have w(D^+_k)≤ w(D^+_k-1)+2-3(17/24)<w(D^+_k-1). Hence, D_0^+ has the largest weight of the plus-diagonals, as claimed. Note that on D_0^+, there are (n-r)/3, (n+2r)/3, and (n-r)/3 squares in boxes 1, 5, and 9, respectively, so w(D_0^+)=43/48(n-r/3)+19/24(n+2r/3)+3/4(n-r/3)=13/16n-1/48r <59/72n. Therefore, the weight of each non-reflecting diagonal is less than 59n/72, as desired. Finally, we prove (<ref>) by showing w(RD_ℓ)<59n/72 for every ℓ∈ [n]. We consider three cases. 0.5cm Case 1: ℓ≤ (n-r)/3 or ℓ≥ (2n+r)/3+1. By the symmetry of the weight function, it suffices to prove the case where ℓ≤ (n-r)/3. We claim that the maximum weight is achieved when ℓ= (n-r)/3. Observe that when ℓ≤ (n-r)/3, there are (n-r)/3-1, ℓ, (n+2r)/3-ℓ,ℓ, and (n-r)/3-ℓ squares of the reflecting diagonal RD_ℓ in boxes 1, 2, 5, 6, and 9, respectively. Comparing the two reflecting diagonals RD_ℓ and RD_ℓ+1 for ℓ < (n - r)/3, we see that RD_ℓ+1 has one more square in each of boxes 2 and 6, and one fewer square in each of boxes 5 and 9. Thus, w(RD_ℓ+1) - w(RD_ℓ) = 17/24+41/48 - 19/24 - 36/48 = 1/48. Hence, when ℓ< (n-r)/3, the weight of RD_ℓ increases as ℓ increases, and thus is bounded by w(RD_(n-r)/3)=43/48(n-r/3-1)+17/24(n-r/3)+3/4r+41/48(n-r/3) = 59/72n -5/72r-43/48 <59/72n. Therefore, the weight of the reflecting diagonal RD_ℓ is less than 59n/72 when ℓ≤ (n-r)/3, as desired. Since the weight function is symmetric, we have w(RD_ℓ)=w(RD_n-ℓ) for all ℓ∈[n]. Hence, for ℓ≥ (2n+r)/3+1, the weight of the reflecting diagonal RD_ℓ is also less than 59n/72. 0.5cm Case 2: r=2 and either ℓ=(n-r)/3+1 or ℓ=(2n+r)/3. For r=2 and ℓ=(n-2)/3+1, there is one square of the reflecting diagonal RD_ℓ in box 5, and (n-2)/3 squares in boxes 1,2, and 6, respectively. 
Hence, we have w(RD_(n-2)/3+1)=(43/48+17/24+41/48)(n-2/3)+19/24 = 59/72n-61/72<59/72n. By the symmetry of the weight function, we also have w(RD_(2n+2)/3)=w(RD_(n-2)/3+1)<59n/72. 0.5cm Case 3: r=2 and (n-r)/3+1<ℓ<(2n+r)/3 or r∈{0,1} and (n-r)/3+1≤ℓ≤ (2n+r)/3. In this case, there are (2n-2r)/3 -ℓ+1, (n+2r)/3-1, ℓ-(n+2r)/3, ℓ-(n-r)/3-1, and (2n+r)/3-ℓ squares of the reflecting diagonal RD_ℓ in boxes 1, 2, 3, 4, and 6, respectively. Hence, there are (n-4r)/3+1 squares with weight 43/48, and (n+2r)/3-1 squares each with weight 17/24 and with weight 41/48. Thus, the weight w(RD_ℓ) for the values of ℓ considered in this case is independent of ℓ. Thus, in this case, we have w(RD_ℓ) = 43/48(n-4r/3+1)+(17/24+41/48)(n+2r/3-1)=59/72n-11/72r-2/3<59/72n. Therefore, the weight of each reflecting diagonal is less than 59n/72, thereby proving (<ref>). We have thus found a weight function w that satisfies all the desired conditions. We will use the following standard Chernoff-type bound (see <cit.>) in the proof of Lemma <ref>. If X is the sum of mutually independent Bernoulli random variables with μX, then for all δ∈ [0, 1], we have |X - μ| ≥δμ≤ 2e^-δ^2 μ / 3. We are now equipped with the necessary tools to prove Lemma <ref>. First, consider choosing S⊆ [n]× [n] randomly by including every square (i,j) independently with probability w((i,j)), where w : [n] × [n] → [17,24, 1] is the weighting from Lemma <ref>. Note that every line L ∈ satisfies |L∩ S| = w(L) and |L ∩ S| is the sum of n mutually independent Bernoulli random variables. Let L∈∪. By Lemma <ref>(<ref>), we know that |L ∩ S|=5n/6±10/3. Applying Lemma <ref> with n^-1/3 playing the role of δ and |L∩ S| playing the role of X, we have |L ∩ S| = (1± n^-1/4)5n/6≥ 1 - 2exp(-5n^1/3/19) ≥ 1 - 0.01/2n. Hence, by a union bound over the n rows and n columns of the chessboard, we see that in S, each row and each column contains (1± n^-1/4)5n/6 squares with probability at least 0.99. Now, let L∈𝒩𝒟∪ℛ𝒟. By Lemma <ref>(<ref>) and (<ref>), we know that |L ∩ S|<59n/72, and moreover, by considering increasing the weights, |L∩ S| is stochastically dominated by a random variable X_L which is the sum of n mutually independent Bernoulli random variables with X_L = 59n/72. Applying Lemma <ref> with 59n/72 playing the role of μ, 1/118 playing the role of δ, and X_L playing the role of X, we have |L ∩ S| ≤119n/144≥|X_L - 59n/72| > n/144≥ 1 - 2exp(-n/50976) ≥ 1 - 0.01/3n. By a union bound over the n reflecting diagonals and the 2n non-reflecting diagonals of the chessboard, we see that with probability at least 0.99, each reflecting diagonal and each non-reflecting diagonal in S contains at most 119n/144 squares. Now, let A,B⊆ [n] be subsets of size at least 119n/144. Then |A× B| ≥ (119n/144)^2. Since each square of [n]× [n] is included in S independently with probability at least 17/24, we have |S∩ (A× B)|≥(119n/144)^2(17/24)>29n^2/60. Applying Lemma <ref> with 57/58 playing the role of δ and |S∩ (A× B)| playing the role of X, we have |S∩ (A× B)|≥n^2/120≥ 1- 2exp(-n^2/7)≥ 1 - 0.01/4^n. By a union bound over at most 4^n choices of A,B⊆ [n], we conclude that |S∩ (A× B)|≥ n^2/120 with probability at least 0.99. Therefore, with positive probability, there exists a subset S of [n]× [n] that satisfies all three properties. Now we can finish the proof of our main theorem. Let n be sufficiently large, and consider an n× n reflecting chessboard represented by [n]× [n]. 
By Lemma <ref>, there exists a subset S of [n]× [n] such that each row and each column contains (1± n^-1/4)5n/6 squares, each non-reflecting diagonal and each reflecting diagonal contains at most 119n/144 squares, and for every A, B⊆ [n] satisfying |A|,|B|≥ 119n/144, we have |S∩ (A× B)|≥ n^2/120. Consider the bipartite graph G with one part ℛ=⋃_i=1^n{R_i} corresponding to the n rows of the reflecting chessboard and the other part 𝒞=⋃_i=1^n{C_i} corresponding to the n columns. For each square (i,j)∈ S, include {R_i,C_j} in the edge set of G. Assign to each edge {R_i, C_j} in G two colors corresponding to the diagonals containing (i,j) as follows. * If 0≤ i+j-(n+1)≤ n, assign D_i+j-(n+1)^+ to {R_i,C_j}. Otherwise, assign RD_i+j to {R_i,C_j}. * If 0≤ i-j≤ n, assign D_i-j^- to {R_i,C_j}. Otherwise, assign RD_j-i to {R_i,C_j}. Observe that this coloring is 2-bounded, as any two lines of the reflecting chessboard intersect in at most two squares. We can now apply Lemma <ref> to G with α = 1/120, t=2, b=2, and d=5n/6 to find a perfect rainbow matching in G. Observe that * every vertex has degree (1± n^-1/4)5n/6, * every color has degree at most (1-1/120)5n/6, and * there are at least n^2/120 edges between any two sets ℛ'⊆ℛ and 𝒞'⊆𝒞 of size at least (1-1/120)5n/6. By Lemma <ref>, the graph G has a perfect rainbow matching. Therefore, the corresponding subset S of [n]× [n] contains a reflecting n-queens configuration, as desired.
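Since the weighting in the proof of Lemma <ref> is completely explicit, its three claimed bounds can also be sanity-checked numerically. The following sketch is not part of the proof (the choice n = 301 is arbitrary); it evaluates w on [n]×[n] and verifies the row/column bound, the non-reflecting bound, and the reflecting bound directly.

```python
import numpy as np

def w(i, j, n):
    """The weighting from the proof: bands are determined by i/(n+1) and j/(n+1)."""
    band = lambda x: 0 if x / (n + 1) < 1/3 else (1 if x / (n + 1) <= 2/3 else 2)
    outer_col = band(j) in (0, 2)
    if band(i) == 0:
        return 43/48 if outer_col else 17/24
    if band(i) == 1:
        return 41/48 if outer_col else 19/24
    return 3/4 if outer_col else 1.0

n = 301
W = np.array([[w(i, j, n) for j in range(1, n + 1)] for i in range(1, n + 1)])

# rows and columns: weight 5n/6 +/- 10/3
assert all(abs(s - 5 * n / 6) <= 10/3 for s in np.concatenate([W.sum(1), W.sum(0)]))

I, J = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
# non-reflecting diagonals: D_k^+ (i+j-(n+1)=k) and D_k^- (i-j=k) for k = 0,...,n-1
nonreflecting = [W[I + J - (n + 1) == k].sum() for k in range(n)] \
              + [W[I - J == k].sum() for k in range(n)]
# reflecting diagonals: RD_l consists of the squares with i+j = l or j-i = l
reflecting = [W[(I + J == l) | (J - I == l)].sum() for l in range(1, n + 1)]
assert max(nonreflecting) < 59 * n / 72 and max(reflecting) < 59 * n / 72
print("weighting bounds verified for n =", n)
```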
http://arxiv.org/abs/2407.13597v1
20240718153602
PLANTS: A Novel Problem and Dataset for Summarization of Planning-Like (PL) Tasks
[ "Vishal Pallagani", "Biplav Srivastava", "Nitin Gupta" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT Text summarization is a well-studied problem that deals with deriving insights from unstructured text consumed by humans, and it has found extensive business applications. However, many real-life tasks involve generating a series of actions to achieve specific goals, such as workflows, recipes, dialogs, and travel plans. We refer to them as planning-like (PL) tasks, noting that the main commonality they share is control flow information, which may be partially specified. Their structure presents an opportunity to create more practical summaries to help users make quick decisions. We investigate this observation by introducing a novel plan summarization problem, presenting a dataset, and providing a baseline method for generating PL summaries. Using quantitative metrics and qualitative user studies to establish baselines, we evaluate the plan summaries from our method and large language models. We believe the novel problem and dataset can reinvigorate research in summarization, which some consider a solved problem. § INTRODUCTION Text summarization is a crucial task in natural language processing (NLP) that focuses on condensing large volumes of unstructured text into concise and informative summaries <cit.>. This task has significant applications in various domains such as news aggregation, document summarization, and content recommendation systems <cit.>. Traditional summarization techniques can be broadly categorized into extractive <cit.> and abstractive methods <cit.>. Extractive summarization selects key sentences or phrases from the original text, whereas abstractive summarization generates new sentences that capture the essence of the text. Recently, large language models (LLMs) have demonstrated remarkable capabilities, outperforming human summaries <cit.> on several datasets such as Multi-News <cit.> and MediaSum <cit.>. Despite its extensive applications, text summarization has primarily concentrated on static documents, overlooking dynamic tasks that involve sequences of actions aimed at achieving specific goals. We refer to these tasks as planning-like (PL) tasks <cit.>. Examples of PL tasks include workflows, recipes, dialogs, and travel plans, which often contain control flow information critical for execution. For instance, consider the task of cooking a cheese sandwich. Numerous recipes exist for making a cheese sandwich, each with varying ingredients and steps. A summary for this PL task aims to condense these multiple recipes into a single, coherent summary. This summary would allow a knowledgeable user to quickly make a cheese sandwich based on the brief summary or help a user decide which recipe best suits their needs based on the ingredients they have available. This approach can be considered similar to multi-document summarization on a high level, where information from multiple sources is synthesized into a concise summary <cit.>. By summarizing multiple action sequences into coherent and actionable insights, we provide users with valuable information and facilitate quicker decision-making. Consider another example of routes from Google Maps, a commercial service offering travel routes between selected locations. In Figure <ref>, we provide an instance where the user wants to find driving routes between Manhattan, New York, and Pleasantville, New York. Google Maps offers multiple route options visually on the map and provides a summary of three possible routes to reach the destination.
This summary focuses on the critical roads, estimated travel time, and distance. This allows the user to choose their preferred route without going through the complete step-by-step instructions for all three options. Each summary in Box 1 can be expanded to reveal more detailed summaries, including additional key roads or waypoints. This capability enables quick decision-making and efficient route planning, illustrating the utility of summarization in PL tasks. To address the gap in summarization literature for PL tasks, we introduce the novel problem of summarizing planning-like (PL) tasks[We also refer to it as plan summarization or PL summaries.]. Plan summarization aims to create concise and coherent summaries of action sequences that achieve specific goals, thereby facilitating quick understanding and decision-making. Unlike traditional text summarization, plan summarization must account for the executability and logical flow of actions. We present a new dataset, called PLANTS[https://github.com/VishalPallagani/PLANTS-benchmark], specifically designed for plan summarization tasks, encompassing diverse domains such as automated plans, recipes, and travel plans. Additionally, we propose a baseline method for generating PL summaries. Our evaluation includes comparisons with summaries generated by both extractive and abstractive methods through a user study. We believe that introducing the plan summarization problem and providing a relevant dataset will spark renewed interest in the summarization research community. Our contributions are fourfold: * Definition of the planning task summarization problem; * Creation of a dataset tailored for PL tasks; * Development of a baseline method for generating summaries; * Initial evaluation of how users perceive PL summaries from the baseline method and LLMs. § PLANNING-LIKE TASKS Planning-like tasks involve a series of actions required to achieve specific goals. These tasks are defined and explored in <cit.>. In this paper, we focus on three primary domains of PL tasks: automated plans, recipes, and travel routes. Each of these domains involves unique challenges and characteristics that necessitate effective summarization for better user comprehension and decision-making. Automated Plans Automated planning <cit.> involves creating action sequences for intelligent agents to achieve specified goals. In automated planning, a problem is typically represented as a tuple consisting of states, actions, and goals. The objective is to generate an automated plan that transitions the system from the initial state to the goal state while satisfying certain constraints. The semantics of automated plans require them to be sound and feasible, meaning each action must be executable in the given context, and the sequence must logically lead to the achievement of the goal. Summarizing automated plans helps in quickly understanding the essential steps and ensuring all actions are executable. Recipes In the domain of culinary arts, recipes are structured sequences of actions aimed at preparing specific dishes. Each recipe includes a list of ingredients and step-by-step instructions for combining them. Given the multitude of recipes available for a single dish, there can be significant variation in ingredients and preparation methods. This diversity makes it challenging for users to quickly identify the essential components and steps needed to prepare a dish.
Summarizing recipes allows users to identify must-have ingredients and critical steps, making it easier to choose or adapt a recipe based on available ingredients. Travel Routes Travel planning involves creating efficient paths from a starting location to a destination. This process includes determining the optimal route, considering factors such as distance, travel time, and road conditions. Travel routes are complex, often involving multiple possible paths and decisions about which roads or highways to take. Summarizing travel routes provides a clear overview of the main paths, travel times, and distances, aiding in quick decision-making and efficient route planning. These PL tasks, as summarized in Table <ref>, highlight the different characteristics and requirements across domains. Summarizing these tasks enhances usability and accessibility, providing users with concise, actionable insights for efficient decision-making and task execution. § PLANNING TASK SUMMARIZATION Planning task summarization involves generating a concise summary of multiple plans that achieve the same goal. In various domains, such as travel planning, recipe generation, and automated planning, it is common to have multiple possible plans to reach a desired outcome. Each plan may differ in the sequence and number of actions required. Inspired by early work on process summarization <cit.>, our approach aims to enhance user comprehension and facilitate better decision-making by providing a summary that consolidates these multiple plans into a single, coherent overview, highlighting the key actions and considerations for achieving the goal. We formally define the planning task summarization problem as follows. Given a set of plans P = { p_1, p_2, …, p_n }, each plan p_i consists of a sequence of actions { a_i1, a_i2, …, a_im} designed to achieve a common goal G. The task is to produce a summary plan P^* that is a function of the size and number of actions constrained by metadata. Mathematically, this can be expressed as: P^* = Summarize(P, constraints) where the constraints may be in terms of textual features (e.g., maximum allowable characters, words or lines) or plan features (e.g., maximum number of actions) in the summary plan. Hence, it is expected that |P^*| << |P|. These constraints ensure that the summary plan remains concise and focused on the most critical actions necessary to achieve the goal. Several challenges arise in the planning task summarization process. Different plans might take varied approaches to achieve the same goal, making it challenging to create a summary that captures the essential steps without losing critical diversity. Additionally, the summary must strictly adhere to the provided constraints, ensuring it remains concise and relevant. Another significant challenge is the selection of actions from the original plans to include in the summary. The goal is to ensure that the summary is representative of the original plans and efficient in the number of actions. § PLANTS DATASET In this section, we introduce the PLANTS dataset, specifically designed for planning task summarization. The dataset encompasses three distinct planning-like tasks: automated plans, recipes, and travel routes. For each task, we have curated 10 different problems/goals. Each goal has 5 different plans for automated plans and recipes, and 3 different plans for travel routes, resulting in a total of 130 diverse plans in the dataset (see Figure <ref>). 
Automated Plans: For generating automated plans, we utilized five classical planning domains from the repository of <cit.>. These domains are released as part of the International Planning Competition (IPC) <cit.>. The repository includes both the domains and their corresponding problems, where the goals are defined. We selected two distinct problems (i.e., goals) from each planning domain, resulting in a total of ten unique goals. Each problem was solved using SymK <cit.>, a state-of-the-art classical optimal and top-k planner based on symbolic search that extends Fast Downward <cit.>. We set k to 5, generating five different plans for each problem. This approach ensures that our dataset contains a variety of viable solutions for each planning problem, providing a robust basis for summarization. Recipes: For the recipes, we manually selected ten distinct and commonly made dishes, such as cheese sandwich, guacamole, and omelette, from the Recipe1M+ dataset <cit.>. Recipe1M+ is a large-scale dataset containing over one million recipes with associated images and instructions. Assumption: To ensure diversity in preparation methods, we assume that distinct ingredient lists will result in different preparation steps. Based on this assumption, we extracted five different recipes for each dish by calculating the Jaccard similarity between the ingredient lists and selecting recipes with low similarity scores. This method ensures that the chosen recipes have varied ingredients, leading to diverse preparation steps. Specifically, we only extracted the ingredients and step-by-step instructions for each recipe. This manual selection and extraction process ensures that our dataset includes multiple viable approaches to achieve the same culinary goal, providing a robust basis for summarization. Travel Routes: For the travel routes, we manually selected ten different pairs of start and destination coordinates to ensure a diverse set of route planning problems. The coordinates were chosen to cover a variety of urban layouts, providing a comprehensive testbed for summarization. We utilized the OpenStreetMap (OSM) API <cit.>, a collaborative mapping project that provides free geographic data and mapping services, to generate routes between these coordinates. The OSM API allows for the extraction of detailed route information, including road networks and step-by-step directions. For each pair of coordinates, the API generates at most three distinct routes, ensuring that the routes are unique by default. We extracted the step-by-step directions for each route, including the sequence of roads and waypoints. This approach ensures that our dataset captures a variety of viable travel options for each route planning problem. § EXPERIMENTAL SETTINGS In this section, we describe the different models used for plan summary generation and also discuss the user study settings. The constraints applied to these models and the prompt templates used for GPT-4o are detailed in Supplementary Material (Section 3). §.§ Models For each task, we use GPT-4o as the representative of LLMs and as an abstractive technique for obtaining plan summaries. For extractive summarization, we use TextRank. Additionally, we developed a new frequency-based baseline method for extractive plan summarization. Each approach receives as input a set of plans to generate a summary. For automated plans and recipes, each set contains 5 plans, and for travel routes, each set contains 3 plans.
Algorithm <ref> outlines our baseline method, which involves parsing the plans to extract actions and creating a structured representation of the data. This structured data is then analyzed in two views: text view and plan view. The text view analysis identifies common items and n-grams by counting the frequency of individual actions and sequences of actions. The plan view analysis examines the structure and sequence of actions, identifying the most common actions, secondary mentions (such as objects or ingredients), the shortest plan, and the most common action sequences. The results from these analyses are combined to generate a plan summary. §.§ User Study To assess the ease of understanding, clarity for action, and overall preference for the summaries, we conducted a human evaluation involving ten annotators. The annotators were students (undergraduate and graduate students) and faculty staff, all with an understanding of the three PL tasks: automated plans, recipes, and travel routes. For each PL task, we provided the annotators with the actual plans and presented them with summaries generated by three different methods: GPT-4o (abstractive), TextRank (extractive), and our frequency-based baseline method (extractive). To ensure the reliability of our results, we calculated the overall inter-annotator agreement using Cohen’s kappa coefficient <cit.>. We found that the agreement among annotators was acceptable, with a coefficient of 0.72. § EXPERIMENTAL RESULTS §.§ Experiment 1: Comparing the number of tokens across the summaries Figure <ref> shows the boxplot comparing the token counts across three summarization methods: baseline, TextRank, and GPT-4o. The median token count for baseline is around 53, indicating consistent summary lengths with minimal variability. TextRank exhibits significant variability, with a median token count lower than baseline, reflecting diverse summary lengths. GPT-4o displays the highest median token count at approximately 176.5, indicating longer and more detailed summaries, with a wider interquartile range. This analysis highlights the differences in summary lengths, providing insights into the summarization characteristics of each method. §.§ Experiment 2: Comparing the information-richness of the summaries In this experiment, we measure the lexical density of summaries generated by baseline, TextRank, and GPT-4o to evaluate their information richness. Lexical density is calculated as the proportion of content words (nouns, verbs, adjectives, and adverbs) to the total number of words in a summary. Figure <ref> shows the lexical density of the three summary methods across 30 planning summarization tasks in the benchmark dataset. GPT-4o consistently achieves the highest lexical density, indicating it produces the most information-rich summaries. The baseline demonstrates moderate lexical density, followed by TextRank, which exhibits the lowest and most variable lexical density. §.§ Experiment 3: Comparing the ease of understanding of the summaries From the user studies, we obtained results on how easy it is to understand a summary in order to take an action. Each summary was rated on a scale from 1 to 5, with 1 being very difficult to understand and 5 being very easy to understand. The average ease of understanding scores are presented in Table <ref>. GPT-4o received the highest ease of understanding scores across the three PL tasks. For automated plans, the baseline approach ranked second, while TextRank was rated second for recipes and travel routes.
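The lexical-density metric from Experiment 2 above is simple to reproduce. Below is a minimal sketch; the paper does not specify a part-of-speech tagger, so spaCy is assumed here as an illustration, and proper nouns are counted together with common nouns.

```python
import spacy

CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV"}  # content-word tags

def lexical_density(summary: str, nlp) -> float:
    """Proportion of content words (nouns, verbs, adjectives, adverbs)
    among all alphabetic tokens of a summary."""
    tokens = [t for t in nlp(summary) if t.is_alpha]
    if not tokens:
        return 0.0
    return sum(t.pos_ in CONTENT_POS for t in tokens) / len(tokens)

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a POS tagger
print(lexical_density("Merge onto the highway, continue north, then take exit 12 toward the airport.", nlp))
```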
§.§ Experiment 4: User preference of the summaries The user study was also used to rank the summaries based on preferences. The aggregate preferences for each summary choice were then analyzed. For automated plans, GPT-4o was the first preference for 76% of users, followed by the baseline approach as the second preference for 44%, and TextRank as the third preference for 59%, as shown in Table <ref>. GPT-4o received the first preference across all three planning tasks, with TextRank and the baseline approach varying in their ranking depending on the specific task. § CONCLUSION In this work, we introduced the novel problem of planning task summarization. To address this problem, we developed the PLANTS dataset, encompassing three distinct PL tasks: automated plans, recipes, and travel routes. Alongside the dataset, we also presented a frequency-based baseline method for plan summarization. We evaluated both abstractive and extractive summarization methods for planning task summarization through user studies and empirical analysis. Our findings indicate that while GPT-4o is the preferred approach for generating plan summaries due to its detailed and information-rich outputs, further evaluation is needed to verify if these summaries maintain the executional semantics of PL tasks. The issue of hallucination in abstractive methods remains a significant challenge that warrants further investigation. Additionally, there is a need to develop evaluation metrics specifically tailored for PL task summaries to ensure their effectiveness and reliability. We believe this work represents an initial effort towards advancing research in planning task summarization. The broader impact of this research could influence various domains, including robotics, dialog agents, and planning agents. We hope our contributions will inspire further advancements and exploration in this field, ultimately leading to more robust and efficient summarization techniques, datasets, and evaluation metrics for the problem of planning task summarization. § LIMITATIONS Size of the Dataset: While the PLANTS dataset provides a valuable starting point for planning task summarization, it includes only 10 problems per domain, with 5 plans each for automated plans and recipes, and 3 plans each for travel routes. This limited size may not fully capture the variability and complexity of real-world planning tasks. Additionally, the dataset does not include gold summaries, as it is challenging to obtain authoritative summaries for PL tasks due to their inherent variability and subjective nature. However, to facilitate future research, we release the generators used to create this dataset, allowing for the development of larger and more diverse datasets across these domains. Evaluation Metrics: The evaluation metrics employed in this study, such as human preference and ease of understanding, are inherently subjective and may not fully reflect the executional semantics of the plans. Inter-Annotator Agreement: Although we measured inter-annotator agreement using Cohen’s kappa and found it to be acceptable, the subjective nature of human evaluation introduces potential variability in judgments. Future work could explore more rigorous training for annotators. § ETHICS STATEMENT The development and evaluation of the PLANTS dataset were conducted with strict adherence to ethical standards. All data were sourced from publicly available repositories, ensuring compliance with usage terms and privacy regulations. 
Human evaluators, consisting of graduate students and professors with domain expertise, participated voluntarily and provided informed consent. Their responses were anonymized to maintain privacy. The dataset and evaluation methods were designed to minimize bias and ensure accuracy. We release the dataset generators for research purposes, encouraging responsible use in compliance with ethical guidelines. This work aims to benefit multiple domains, including robotics and planning agents, and we advocate for the responsible deployment of summarization technologies to avoid potential harm. § ACKNOWLEDGEMENTS We would like to thank Amitava Das for discussions related to textual summarization and for helping us build parallels to planning task summarization. plainnat § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? * Did you describe the limitations of your work? * Did you discuss any potential negative societal impacts of your work? * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g. for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? * Did you mention the license of the assets? * Did you include any new assets either in the supplemental material or as a URL? * Did you discuss whether and how consent was obtained from people whose data you're using/curating? * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
http://arxiv.org/abs/2407.13350v1
20240718094938
General monogamy relations of the $S^{t}$ and $T^{t}_q$-entropy entanglement measures based on dual entropy
[ "Zhong-Xi Shen", "Kang-Kang Yang", "Zhi-Xiang Jin", "Zhi-Xi Wang", "Shao-Ming Fei" ]
quant-ph
[ "quant-ph" ]
18738951378@163.com 2220501004@cnu.edu.cn zxjin@dgut.edu.cn wangzhx@cnu.edu.cn feishm@cnu.edu.cn ^1School of Mathematical Sciences, Capital Normal University, Beijing 100048, China ^2School of Computer Science and Technology, Dongguan University of Technology, Dongguan 523808, China § ABSTRACT Monogamy of entanglement is the fundamental property of quantum systems. By using two new entanglement measures based on dual entropy, the S^t-entropy entanglement and T^t_q-entropy entanglement measures, we present the general monogamy relations in multi-qubit quantum systems. We show that these newly derived monogamy inequalities are tighter than the existing ones. Based on these general monogamy relations, we construct the set of multipartite entanglement indicators for N-qubit states, which are shown to work well even for the cases that the usual concurrence-based indicators do not work. Detailed examples are presented to illustrate our results. Keywords: Monogamy of entanglement, Dual entropy, Entanglement measures, Entanglement indicator General monogamy relations of the S^t and T^t_q-entropy entanglement measures based on dual entropy Shao-Ming Fei^1 July 22, 2024 =================================================================================================== § INTRODUCTION As a fundamental issue of quantum mechanics, quantum entanglement is the most important resource in quantum information processing <cit.>. The characterization and quantification of entanglement is of vital significance. A variety of entanglement measures have been proposed from different perspectives to describe the degree of inseparability of multipartite quantum states, for instance, the concurrence <cit.>, entanglement of formation <cit.>, Rényi-α entropy entanglement <cit.>, Tsallis-q entropy entanglement <cit.>, and Unified-(q,s) entropy entanglement <cit.>. Recently, a new entanglement measure, called S^t-entropy entanglement, has been presented by adding its complementary dual part to the well-known von Neumann entropy, which can be viewed as a quantum version of entropy <cit.>. Another new entanglement measure, T^t_q-entropy entanglement, is proposed in Ref. <cit.> based on the total entropy of Tsallis-q entropy and its complementary dual. Both measures are analytically computable for any N-qubit states. The monogamy of entanglement is a key property characterizing the entanglement sharability in multipartite quantum systems. Coffman, Kundu and Wootters (CKW) first characterized the monogamy of an entanglement measure ℰ for three-qubit states ρ_ABC <cit.>, ℰ(ρ_A|BC)≥ℰ(ρ_AB)+ℰ(ρ_AC), where ρ_AB= tr_C(ρ_ABC), ρ_AC= tr_B(ρ_ABC) are the reduced density matrices of ρ_ABC, ℰ(ρ_A|BC) stands for the entanglement under the bipartition A and BC. This relation is called monogamy of entanglement <cit.>. Later, Osborne and Verstraete extended this monogamy inequality to the squared concurrence for N-qubit systems <cit.>. Extensive researches have been conducted on the distribution of entanglement in multipartite quantum systems by employing various measures such as the squared entanglement of formation (EOF) <cit.>, the squared Rényi-α entropy <cit.>, the squared Tsallis-q entropy <cit.> and the squared Unified-(q,s) entropy <cit.>. Generally, monogamy inequalities depend on both detailed measures of entanglement and detailed quantum states. It has been shown that the squashed entanglement is monogamous for arbitrary dimensional systems <cit.>. 
Interestingly, a set of tight α-th powers monogamy relations have been investigated for multi-qubit systems <cit.>. The traditional monogamy inequality (<ref>) provides a lower bound for “one-to-group” entanglement, i.e., the quantum marginal entanglement <cit.>. The monogamy inequality corresponds to a residual quantity <cit.>, for example, the concurrence corresponds to the 3-tangle. The residual measure derived from the entanglement of formation is demonstrated to serve as an indicator for multi-qubit entanglement, capable of detecting all genuine multipartite entangled states <cit.>. These monogamy relations also play an important role in quantum information theory <cit.>, condensed-matter physics <cit.> and even black-hole physics <cit.>. The rest of this paper is organized as follows. In Sec.<ref> and Sec.<ref>, we review some background knowledge on entanglement measures that will be used in the main text, and establish two classes of tighter monogamy inequalities for S^t-entropy entanglement and T^t_q-entropy entanglement measures, respectively. In Sec.<ref>, we investigate multipartite entanglement indicators based on two new monogamy relations for N-qubit states, together with detailed examples. We summarize our main results in Sec.<ref>. § MONOGAMY OF S^T-ENTROPY ENTANGLEMENT The S^t-entropy entanglement of a pure bipartite state |Φ⟩_AB in d× d dimensional Hilbert space H_A⊗ H_B is given by E_t(|Φ⟩_AB)=1/rS^t(ρ_A), where r=dlog_2d-(d-1)log_2(d-1) is a normalization factor, ρ_A= Tr_B(|Φ⟩_AB⟨Φ|) is the reduced density operator with respect to the subsystem A, and S^t(ρ) is the total entropy of a quantum state ρ defined by S^t(ρ)=-Tr[ρlog_2ρ+(1-ρ)log_2(1-ρ)], with 1 the identity matrix. For a bipartite mixed state ρ_AB in H_A⊗ H_B, the S^t-entropy entanglement is given via the convex-roof extension, E_t(ρ_AB)=inf_{p_i,|Φ_i⟩}∑_ip_iE_t(|Φ_i⟩_AB), where the infimum is taken over all the possible pure state decompositions of ρ_AB=∑_ip_i|Φ_i⟩⟨Φ_i| with p_i≥0, ∑_ip_i=1. In Ref. <cit.> the authors provide an analytic formula of the S^t-entropy entanglement for two-qubit systems based on concurrence. The concurrence of a bipartite pure state |Φ⟩_AB is defined by <cit.>, C(|Φ⟩_AB)=√(2(1- Tr(ρ^2_A))). For mixed states ρ_AB, the concurrence is given by the convex-roof extension, C(ρ_AB)=inf_{p_i,|Φ_i⟩}∑_ip_iC(|Φ_i⟩_AB), where the infimum takes over all the possible pure-state decompositions of ρ_AB. In particular, for a two-qubit mixed state ρ the concurrence has the analytic formula <cit.>, C(ρ)=max{0,η_1-η_2-η_3-η_4}, with η_i the eigenvalues of the matrix √(ρ(σ_Y⊗σ_Y)ρ^* (σ_Y⊗σ_Y)) in decreasing order, where ρ^* is the complex conjugate of ρ and σ_Y is the standard Pauli operator. Consider any C^2⊗ C^d pure state |ϕ⟩_AB in ℋ_A⊗ℋ_B with Schmidt form, |ϕ⟩_AB=√(λ_0)|0⟩|ϕ_0⟩ +√(λ_1)|1⟩|ϕ_1⟩, where the subsystem A is a qubit system, while the subsystem B is a d dimensional space, |ϕ_0⟩ and |ϕ_1⟩ are orthogonal states in ℋ_B, λ_0 and λ_1 are the Schmidt coefficients. From Eq. (<ref>) we have E_t(|ϕ⟩_AB)=-λ_0log_2λ_0-λ_1log_2λ_1. Besides, the concurrence of |ϕ⟩_AB is given by C(|ϕ⟩_AB)=√(2(1-Tr(ρ^2_A)))=2√(λ_0λ_1). For any two-qubit pure state |ϕ⟩_AB one has <cit.>, E_t(|ϕ⟩_AB)=h(C(|ϕ⟩_AB)), where h(x) is an analytic function defined by h(x) = -1+√(1-x^2)/2log_21+√(1-x^2)/2 -1-√(1-x^2)/2log_2(1-√(1-x^2)/2). Thus, we get a functional relation (<ref>) between the concurrence and the S^t-entropy entanglement for any qubit-qudit pure state in ℋ_A⊗ℋ_B. 
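As a quick numerical illustration of this functional relation (a sketch of our own, not part of the cited works; the helper names and the use of NumPy are our choices), the following code computes the concurrence of a two-qubit pure state both from its Schmidt coefficients and from the analytic formula with the η_i, and checks that -λ_0 log_2 λ_0 - λ_1 log_2 λ_1 agrees with h(C).

```python
import numpy as np

def h(x):
    # h(x) from the text: the binary entropy (in bits) of (1 +/- sqrt(1 - x^2))/2
    s = np.sqrt(max(1.0 - x * x, 0.0))
    return sum(-p * np.log2(p) for p in ((1 + s) / 2, (1 - s) / 2) if p > 0)

def concurrence_two_qubit(rho):
    # Wootters' formula: the eta_i are the square roots of the eigenvalues of rho @ rho_tilde
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    eta = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, eta[0] - eta[1] - eta[2] - eta[3])

rng = np.random.default_rng(1)
for _ in range(5):
    lam0 = rng.uniform(0.0, 1.0)
    lam1 = 1.0 - lam0
    # |phi> = sqrt(lam0)|00> + sqrt(lam1)|11> in the basis {|00>, |01>, |10>, |11>}
    phi = np.array([np.sqrt(lam0), 0.0, 0.0, np.sqrt(lam1)])
    rho = np.outer(phi, phi)
    C = concurrence_two_qubit(rho)
    E_t = -sum(l * np.log2(l) for l in (lam0, lam1) if l > 0)  # (1/r) S^t(rho_A) with r = 2 for a qubit
    assert np.isclose(C, 2 * np.sqrt(lam0 * lam1), atol=1e-6)  # C(|phi>) = 2 sqrt(lam0 lam1)
    assert np.isclose(E_t, h(C), atol=1e-6)                    # E_t(|phi>) = h(C(|phi>))
print("E_t = h(C) confirmed on random two-qubit pure states")
```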
It has been shown that the relation (<ref>) holds also for two-qubit mixed states ρ_AB <cit.>, E_t(ρ_AB)=h(C(ρ_AB)). The EOF is defined by <cit.>, E_f(|Φ⟩_AB)=-Tr(ρ_Alog_2ρ_A) for any pure state |Φ⟩_AB in H_A⊗ H_B, and E_f(ρ_AB)=inf_{p_i,|Φ_i⟩}∑_ip_iE_f(|Φ_i⟩)_AB for any bipartite mixed state ρ_AB, where the infimum takes over all the possible pure-state decompositions of ρ_AB. It is shown in Ref. <cit.> that E_f(|ϕ⟩_AB)=f(C^2(|ϕ⟩_AB)) for any 2⊗ m (m≥2) pure state |ϕ⟩_AB, and E_f(ρ_AB)=f(C^2(ρ_AB)) for any two-qubit mixed state ρ_AB, where f(x) is an analytic function defined by f(x) = -1+√(1-x)/2log_21+√(1-x)/2 -1-√(1-x)/2log_2(1-√(1-x)/2). Thus the S^t-entropy entanglement reduces to EOF for two-qubit systems. The α-th power of EOF is monogamous for any N-qubit system ρ_AB_1⋯ B_N-1 <cit.>, E^α_f(ρ_A|B_1⋯ B_N-1)≥∑_i=1^N-1E^α_f(ρ_AB_i) for α≥√(2), where E_f(ρ_A|B_1⋯ B_N-1) is the bipartite entanglement with respect to the bipartition A and B_1⋯ B_N-1 and E_f(ρ_AB_i) is the entanglement of the reduced density operator ρ_AB_i= Tr_AB_1⋯ B_i-1B_i+1⋯ B_N-1(ρ_AB_1⋯ B_N-1) of the joint subsystems A and B_i for i=1, ⋯, N-1. From Eqs. (<ref>) and (<ref>), both EOF and S^t-entropy entanglement have the same monogamy features. Thus for any N-qubit system ρ_AB_1⋯ B_N-1, one has E^α_t(ρ_A|B_1⋯ B_N-1)≥∑_i=1^N-1 E^α_t(ρ_AB_i) for α≥√(2). By using the inequality (1+t)^x≥1+(2^x-1)t^x for 0≤ t≤ 1, x≥1 <cit.>, the relation (<ref>) can be improved as E^α_t(ρ_A|B_1B_2⋯ B_N-1) ⩾ E^α_t(ρ_AB_1)+⋯+(2^α/√(2)-1)^N-3E^α_t(ρ_AB_N-2)    +(2^α/√(2)-1)^N-2E^α_t(ρ_AB_N-1), with E^√(2)_t(ρ_AB_i)⩾∑_j=i+1^N-1E^√(2)_t(ρ_AB_j) for i=1, 2, ⋯, N-2, α≥√(2). Similarly by using the inequality (1+t)^x≥1+(2^x-t^x)t^x for 0≤ t≤ 1, x≥2 <cit.>, the relation (<ref>) can be further improved as E^α_t(ρ_A|B_1B_2⋯ B_N-1) ⩾ E^α_t(ρ_AB_1)+∑_i=2^N-1(∏_j=1^i-1M_j)E^α_t(ρ_AB_i), with E^√(2)_t(ρ_AB_i)⩾∑_k=i+1^N-1E^√(2)_t(ρ_AB_k) for i=1, 2, ⋯, N-2, M_j=2^α/√(2)-(∑_k=j+1^N-1E^√(2)_t(ρ_AB_k)/E^√(2)_t(ρ_AB_j))^α/√(2) , for j=1, 2, ⋯, N-2, α≥2√(2). In the following, we show that the monogamy inequalities (<ref>), (<ref>) and (<ref>) satisfied by the S^t-entropy entanglement can be further refined and become even tighter. For convenience, we denote E_t AB_i=E_t(ρ_AB_i) the S^t-entropy entanglement of ρ_AB_i and E_t A|B_1,B_2,⋯,B_N-1=E_t(ρ_A|B_1 ⋯ B_N-1). We first introduce the following lemmas. [Lemma 1]. Let t and x be real numbers satisfying 0⩽ t⩽ 1 and x⩾ 2. We have (1+t)^x-1⩾1+(x-1)t. Set h(t,x)=(1+t)^x-1-(x-1)t-1 with 0⩽ t⩽ 1 and x⩾ 2. Since ∂ h(t,x)/∂ t=(x-1)(1+t)^x-2-(x-1)=(x-1)[(1+t)^x-2-1]⩾0, the function h(t,x) is increasing with respect to t. As 0⩽ t ⩽ 1, h(t,x)≥ h(0,x)=0, we obtain the inequality (<ref>). [Lemma 2]. Let x be a real number satisfying x⩾ 2. For any t satisfying 0⩽ t ⩽ 1, we have (1+t)^x⩾1+t+(2^x-2)t^x. First we note that the above inequality is trivial for t=0. So we prove the case for t≠0. Consider the function f(t,x)=(1+t)^x-t-1/t^x with 0<t ⩽ 1, and x⩾ 2. By using Lemma 1 we have ∂ f(t,x)/∂ t =[x(1+t)^x-1-1]t^x-x t^x-1[(1+t)^x-t-1]/t^2x =t^x-1[-x(1+t)^x-1+(x-1)t+x]/t^2x⩽ 0, since -x(1+t)^x-1+(x-1)t+x ⩽ 0 for x⩾ 2. Therefore, f(t,x) is a decreasing function of t. As 0<t ⩽ 1, we obtain f(t,x)⩾ f(1,x)=2^x-2 and the inequality (<ref>). [Lemma 3]. For any 2⊗2⊗2 mixed state ρ∈ H_A⊗ H_B⊗ H_C, if E^√(2)_t AB⩾ E^√(2)_t AC, we have E^α_t A|BC⩾(1+E^√(2)_t AC/E^√(2)_t AB) E^α_t AB+(2^α/√(2)-2)E^α_t AC for all α⩾2√(2). 
By straightforward calculation, if E^√(2)_t AB⩾ E^√(2)_t AC we have E^α_t A|BC ⩾ (E^√(2)_t AB+E^√(2)_t AC)^α/√(2) =E^α_t AB(1+E^√(2)_t AC/E^√(2)_t AB)^α/√(2) ⩾ E^α_t AB[1+E^√(2)_t AC/E^√(2)_t AB+(2^α/√(2)-2)(E^√(2)_t AC/E^√(2)_t AB)^α/√(2)] =(1+E^√(2)_t AC/E^√(2)_t AB) E^α_t AB+(2^α/√(2)-2)E^α_t AC, where the second inequality is due to Eq. (<ref>) in Lemma 2. The lower bound becomes trivially zero when E_t AB=0. From Lemma 3, we have the following theorem for multi-qubit quantum systems. [Theorem 1]. For any N-qubit mixed states, if E^√(2)_t AB_i⩾∑_j=i+1^N-1E^√(2)_t AB_j for i=1, 2, ⋯, N-2, we have E^α_t A|B_1B_2⋯ B_N-1  ⩾∑_i=1^N-2(1+Ω_i)Γ^i-1 E^α_t AB_i+Γ^N-2E^α_t AB_N-1 for all α⩾2√(2), where Γ=2^α/√(2)-2, Ω_i=∑_j=i+1^N-1E^√(2)_t AB_j/E^√(2)_t AB_i, i=1, 2, ⋯, N-2. From the inequality (<ref>), we have E^α_t A|B_1B_2⋯ B_N-1 ⩾ (1+Ω_1)E^α_t AB_1+Γ (∑_j=2^N-1E^√(2)_t AB_j)^α/√(2) ⩾ (1+Ω_1)E^α_t AB_1+(1+Ω_2)Γ E^α_t AB_2 +Γ^2(∑_j=3^N-1E^√(2)_t AB_j)^α/√(2) ⩾⋯ ⩾ (1+Ω_1)E^α_t AB_1+⋯+(1+Ω_N-2)Γ^N-3E^α_t AB_N-2    +Γ^N-2E^α_t AB_N-1 for all α⩾2√(2). [Remark 1]. Theorem 1 gives a new class of monogamy relations for multi-qubit states, which includes the inequality (<ref>) as a special case of N=3, E_t AB_1=E_t AB_2 and α≥2√(2). From the analysis of the aforementioned findings, we observe that different monogamy relationships are characterized by different inequalities, and the compactness of monogamy relations is exactly the compactness of these inequality relations. Since (1+t)^x ≥1+t+(2^x-2)t^x =1+(2^x-1)t^x+t-t^x ≥1+(2^x-1)t^x for 0≤ t≤1 and x≥2, where the last inequality is due to that t-t^x≥0, obviously our formula (<ref>) in Theorem 1 gives a tighter monogamy relation (with larger lower bounds) than the inequalities (<ref>) and (<ref>) for α≥2√(2). In order to show our formula (<ref>) in Theorem 1 is indeed tighter than relation (<ref>), We need introduce the following lemma. [Lemma 4]. Let t and x be real numbers satisfying 0⩽ t⩽√(5)-1/2 and x⩾ 2. We have t-t^x⩾ t^x-t^2x. Set u(t,x)=t-2t^x+t^2x with 0⩽ t⩽ 1 and x⩾ 2. Then ∂ u(t,x)/∂ x=-2t^xln t+2t^2xln t=2t^xln t(t^x-1)⩾0 as ln t⩽0 and t^x-1⩽0. Hence, the function u(t,x) is increasing with respect to x. As x ⩾ 2, we get u(t,x)⩾ u(t,2)=t-2t^2+t^4. Set v(t)=t-2t^2+t^4. We obtain the four solutions of the equation v(t)=0, t_1=-1-√(5)/2, t_2=0, t_3=-1+√(5)/2 and t_4=1. Since v(t)⩾ 0 for 0⩽ t⩽√(5)-1/2, see Fig.<ref>, we have u(t,x)⩾ 0 and obtain the inequality (<ref>). [Remark 2]. In fact, the monogamy relation (<ref>) is derived from the following inequality, (1+t)^x≥1+(2^x-t^x)t^x,  0≤ t≤ 1,  x≥2. Our monogamy relation (<ref>) is derived from the following inequality, (1+t)^x⩾1+t+(2^x-2)t^x,   0≤ t≤ 1,  x≥2. Since (1+t)^x ≥1+t+(2^x-2)t^x =1+(2^x-1)t^x+t-t^x ≥1+(2^x-1)t^x+t^x-t^2x =1+(2^x-t^x)t^x for 0⩽ t⩽√(5)-1/2 and x⩾ 2, where the second inequality is due to the inequality (<ref>) in Lemma 4, obviously our formula (<ref>) in Theorem 1 gives a tighter monogamy inequality than (<ref>) for α≥2√(2). [Example 1]. Consider the following three-qubit state |ψ⟩ in generalized Schmidt decomposition <cit.>, |ψ⟩_ABC = λ_0|000⟩+λ_1e^iφ|100⟩+λ_2|101⟩ +λ_3|110⟩ +λ_4|111⟩, where λ_i≥0, 0≤φ≤π and ∑_i=0^4λ_i^2=1. One gets C(ρ_A|BC)=2λ_0√(λ_2^2+λ_3^2+λ_4^2), C(ρ_AB)=2λ_0λ_2 and C(ρ_AC)=2λ_0λ_3. Setting λ_0=λ_3=λ_4=1/√(5), λ_2=√(2/5) and λ_1=0, we have C(ρ_A|BC)=4/5, C(ρ_AB)=2√(2)/5 and C(ρ_AC)=2/5. By using the equality (<ref>), we obtain the S^t-entropy entanglement E_t A|BC=0.7219, E_t AB=0.4287 and E_t AC=0.2502. 
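These values, and the comparison made below, can be reproduced numerically. The following sketch (ours, purely illustrative; the variable names are not from the paper) recomputes E_t A|BC, E_t AB and E_t AC from the Schmidt coefficients of Example 1 via E_t = h(C), and evaluates the lower bounds given by the plain α-th power monogamy relation, the two earlier refinements, and Theorem 1 at the sample exponent α = 2√2.

```python
import numpy as np

def h(x):
    s = np.sqrt(1.0 - x * x)
    return sum(-p * np.log2(p) for p in ((1 + s) / 2, (1 - s) / 2) if p > 0)

# Example 1: lambda_0 = lambda_3 = lambda_4 = 1/sqrt(5), lambda_2 = sqrt(2/5), lambda_1 = 0
l0 = l3 = l4 = 1 / np.sqrt(5)
l2 = np.sqrt(2 / 5)
C_ABC, C_AB, C_AC = 2 * l0 * np.sqrt(l2**2 + l3**2 + l4**2), 2 * l0 * l2, 2 * l0 * l3
E_ABC, E_AB, E_AC = h(C_ABC), h(C_AB), h(C_AC)
print(E_ABC, E_AB, E_AC)                      # approximately 0.7219, 0.4287, 0.2502

alpha = 2 * np.sqrt(2)                        # the smallest exponent allowed in Theorem 1
r2 = np.sqrt(2)
lhs = E_ABC**alpha
basic = E_AB**alpha + E_AC**alpha                                  # plain alpha-th power monogamy
ref1 = E_AB**alpha + (2**(alpha / r2) - 1) * E_AC**alpha           # refinement via (1+t)^x >= 1+(2^x-1)t^x
ref2 = E_AB**alpha + (2**(alpha / r2) - (E_AC**r2 / E_AB**r2)**(alpha / r2)) * E_AC**alpha
thm1 = (1 + E_AC**r2 / E_AB**r2) * E_AB**alpha + (2**(alpha / r2) - 2) * E_AC**alpha
print(lhs, thm1, ref2, ref1, basic)           # in this example: lhs >= thm1 >= ref2 >= ref1 >= basic
```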
It is seen that our formula (<ref>) in Theorem 1 is tighter than the inequalities (<ref>), (<ref>) and (<ref>), see Fig.<ref>. Generally, we have the following monogamy inequality. [Theorem 2]. For any N-qubit mixed states, if E^√(2)_t AB_i⩾∑_k=i+1^N-1E^√(2)_t AB_k for i=1, 2, ⋯, m, and E^√(2)_t AB_j⩽∑_k=j+1^N-1E^√(2)_t AB_k for j=m+1,⋯,N-2, ∀ 1≤ m≤ N-3, N⩾ 4, we have E^α_t A|B_1B_2⋯ B_N-1 ⩾∑_i=1^mΓ^i-1(1+Ω_i)E^α_t AB_i+Γ^m+1 E^α_t AB_m+1    +Γ^m+1∑_j=m+2^N-2(1+Υ_m+1)⋯ (1+Υ_j-1)E^α_t AB_j    +Γ^m(1+Υ_m+1)⋯ (1+Υ_N-2)E^α_t AB_N-1 for all α⩾2√(2), where Γ=2^α/√(2)-2, Ω_i=∑_k=i+1^N-1E^√(2)_t AB_k/E^√(2)_t AB_i, i=1, 2, ⋯, m, Υ_j=E^√(2)_t AB_j/∑_k=j+1^N-1E^√(2)_t AB_k, j=m+1, m+2, ⋯, N-2. From the inequality (<ref>) in Lemma 3, we have E^α_t A|B_1B_2⋯ B_N-1 ⩾ (1+Ω_1)E^α_t AB_1+Γ (∑_k=2^N-1E^√(2)_t AB_k)^α/√(2) ⩾ (1+Ω_1)E^α_t AB_1+(1+Ω_2)Γ E^α_t AB_2 +Γ^2(∑_k=3^N-1E^√(2)_t AB_k)^α/√(2) ⩾⋯ ⩾ (1+Ω_1)E^α_t AB_1+⋯+(1+Ω_m)Γ^m-1E^α_t AB_m    +Γ^m(∑_k=m+1^N-1E^√(2)_t AB_k)^α/√(2). Similarly, as E^√(2)_t AB_j≤∑_k=j+1^N-1E^√(2)_t AB_k for j=m+1,⋯,N-2, we get (∑_k=m+1^N-1E^√(2)_t AB_k)^α/√(2) ⩾Γ E^α_t AB_m+1+(1+Υ_m+1)(∑_k=m+2^N-1E^√(2)_t AB_k)^α/√(2) ⩾⋯ ⩾Γ(E^α_t AB_m+1+⋯+ (1+Υ_m+1)⋯ (1+Υ_N-3)E^α_t AB_N-2)     +(1+Υ_m+1)⋯ (1+Υ_N-2)E^α_t AB_N-1. Combining Eqs. (<ref>) and (<ref>), we have Theorem 2. Theorem 2 gives another monogamy relation based on the S^t-entropy entanglement. Comparing inequality (<ref>) in Theorem 1 with inequality (<ref>) in Theorem 2, it is important to point out that for some states that do not meet the conditions outlined in Theorem 1, Theorem 2 may be more effective. [Example 2]. Consider an N-qubit Dicke state <cit.> with k excitations, |D^(k)_n⟩_A_1A_2⋯ A_n=1/√(nk)∑_perm(|0⟩^⊗ (n-k)|1⟩^⊗ k), where the summation is over all possible permutations of the product states having N-k zeros and k ones, and Nk denote the combination number choosing k items from N items. The concurrences for Dicke state are given by C(|D^(k)_n⟩_A_1|A_2⋯ A_n) = 2√(k(n-k))/n, C(|D^(k)_n⟩_A_1A_i) = -2√(k(k-1)(n-k)(n-k-1))/n(n-1) + 2k(n-k)/n(n-1), where i∈{2, ⋯, n}. Consider N=4 and k=1. We get C(|D^(1)_4⟩_A_1|A_2A_3A_4)=√(3)/2, C(|D^(1)_4⟩_A_1A_i)=1/2, i∈{2, 3, 4}. By using equality (<ref>), we get S^t-entropy entanglement E_t(|D^(1)_4⟩_A_1|A_2A_3A_4)=0.8113, E_t(|D^(1)_4⟩_A_1A_i)=0.3546, i∈{2, 3, 4}. It is easy to see that E_t(|D^(1)_4⟩_A_1A_i) does not satisfy the condition (<ref>) of Theorem 1. From the inequality (<ref>) of Theorem 2, we have E_t^α(|D^(1)_4⟩_A_1|A_2A_3A_4)≥(5/2×2^α/√(2)-2)(0.3546)^α for α≥2√(2), see Fig.<ref>. § MONOGAMY OF T^T_Q-ENTROPY ENTANGLEMENT For a pure state |Φ⟩_AB on Hilbert space H_A⊗ H_B, the T^t_q-entropy entanglement is defined by 𝒯^t_q(|Φ⟩_AB)=T^t_q(ρ_A), where ρ_A= Tr_B(|Φ⟩_AB⟨Φ|) denotes the reduced density operator of the subsystem A. T^t_q(ρ) is the total entropy of the Tsallis-q entropy. Its complementary dual of a quantum state ρ on d-dimensional Hilbert space H is defined by T^t_q(ρ) = 1- Trρ^q- Tr(1-ρ)^q+ Tr(1-ρ)/q-1. For a bipartite mixed state ρ_AB on Hilbert space H_A⊗ H_B, the T^t_q-entropy entanglement is defined via convex-roof extension 𝒯^t_q(ρ_AB)=inf_{p_i,|Φ_i⟩}∑_ip_i𝒯^t_q(|Φ_i⟩_AB), where the infimum is taken over all the possible pure-state decompositions of ρ_AB=∑_ip_i|Φ_i⟩_AB⟨Φ_i|. For a bipartite pure state |ψ⟩_AB, the Tsallis-q entanglement is defined by <cit.> T_q(|ψ⟩_AB)=S_q(ρ_A)=1/q-1(1-trρ_A^q), for any q > 0 and q 1. If q tends to 1, T_q(ρ) converges to the von Neumann entropy, lim_q→1 T_q(ρ)=-trρlnρ=S_q(ρ). 
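The q → 1 limit is easy to check numerically. The short sketch below (our own illustration, not taken from the references) evaluates the Tsallis-q entropy of a random single-qubit density matrix for q close to 1 and compares it with -Tr ρ ln ρ; for completeness it does the same for the dual-entropy combination appearing in the definition of T^t_q above.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real                     # random single-qubit density matrix
lam = np.linalg.eigvalsh(rho)                 # its eigenvalues, lying in [0, 1]

def tsallis(q):
    # Tsallis-q entropy of rho: (1 - Tr rho^q) / (q - 1)
    return (1 - np.sum(lam**q)) / (q - 1)

def total_tsallis(q):
    # dual-entropy version: (1 - Tr rho^q - Tr(1-rho)^q + Tr(1-rho)) / (q - 1)
    return (1 - np.sum(lam**q) - np.sum((1 - lam)**q) + np.sum(1 - lam)) / (q - 1)

S_vn = -np.sum(lam * np.log(lam))                                    # -Tr rho ln rho
S_total = -np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam))   # dual-entropy analogue
for q in (1.5, 1.1, 1.01, 1.001):
    print(q, tsallis(q), total_tsallis(q))    # approach S_vn and S_total respectively as q -> 1
print("q -> 1 limits:", S_vn, S_total)
```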
For a bipartite mixed state ρ_AB, Tsallis-q entanglement is defined via the convex-roof extension, T_q(ρ_AB)=min∑_ip_iT_q(|ψ_i⟩_AB), with the minimum taken over all possible pure-state decompositions of ρ_AB. In Ref. <cit.> the authors presented an analytic relation between Tsallis-q entanglement and concurrence for 5-√(13)/2≤ q≤5+√(13)/2, T_q(|ψ⟩_AB)=g_q(C^2(|ψ⟩_AB)), where the function g_q(x) is defined as g_q(x)=[1-(1+√(1-x)/2)^q-(1-√(1-x)/2)^q]/q-1. It has been shown that T_q(|ψ⟩)=g_q(C^2(|ψ⟩)) for any 2⊗ m (m⩾2) pure state |ψ⟩, and T_q(ρ)=g_q(C^2(ρ)) for any two-qubit mixed state ρ in Ref. <cit.>. For any N-qubit system ρ_AB_2⋯ B_N-1, it is further proved that T_q^β(ρ_A|B_1B_2⋯ B_N-1)⩾∑_i=1^N-1T_q^β(ρ_AB_i), with 5-√(13)/2⩽ q⩽5+√(13)/2 and β≥2. Consider an arbitrary pure state ϕ_AB given by Eq. (<ref>). It can be verified that 𝒯^t_q(|ϕ⟩_AB)=f_q(C(|ϕ⟩_AB)), where the analytic function f_q(x) is defined by f_q(x) = 2[1-(1+√(1-x^2)/2)^q-(1-√(1-x^2)/2)^q]/q-1. From Ref. <cit.> we have the functional relation, 𝒯^t_q(ρ_AB)=f_q(C(ρ_AB)) for a bipartite two-qubit mixed state ρ_AB on Hilbert space ℋ_A⊗ℋ_B. From Eqs. (<ref>) and (<ref>), this means that the T^t_q-entropy entanglement for qubit systems can be reduced to the Tsallis-q entropy entanglement. Thus, a general monogamy of the Tsallis-q entropy entanglement in multi-qubit systems is naturally inherited by the T^t_q-entropy entanglement <cit.>. For any N-qubit system ρ_AB_2⋯ B_N-1, we have (𝒯^t_q)^β(ρ_A|B_1⋯ B_N-1)≥∑_i=1^N-1 (𝒯^t_q)^β(ρ_A|B_i), with 5-√(13)/2⩽ q⩽5+√(13)/2 and β≥2. By using the inequality (1+t)^x≥1+(2^x-1)t^x for 0≤ t≤ 1 and x≥1 <cit.>, the relation (<ref>) is improved for β≥2 as (𝒯^t_q)^β(ρ_A|B_1B_2⋯ B_N-1) ⩾ (𝒯^t_q)^β(ρ_AB_1)+⋯+(2^β/2-1)^N-3(𝒯^t_q)^β(ρ_AB_N-2)    +(2^β/2-1)^N-2(𝒯^t_q)^β(ρ_AB_N-1) with (𝒯^t_q)^2(ρ_AB_i)⩾∑_j=i+1^N-1(𝒯^t_q)^2(ρ_AB_j) for i=1, 2, ⋯, N-2, 5-√(13)/2⩽ q⩽5+√(13)/2. Similarly by using the inequality (1+t)^x≥1+(2^x-t^x)t^x for 0≤ t≤ 1 and x≥2 <cit.>, the relation (<ref>) is further improved as (𝒯^t_q)^β(ρ_A|B_1B_2⋯ B_N-1) ⩾ (𝒯^t_q)^β(ρ_AB_1)+∑_i=2^N-1(∏_j=1^i-1M_j)(𝒯^t_q)^β(ρ_AB_i) with (𝒯^t_q)^2(ρ_AB_i)⩾∑_k=i+1^N-1(𝒯^t_q)^β(ρ_AB_k) for i=1, 2, ⋯, N-2, M_j=2^β/2-(∑_k=j+1^N-1(𝒯^t_q)^2(ρ_AB_k)/(𝒯^t_q)^2(ρ_AB_j))^β/2 , for j=1, 2, ⋯, N-2, 5-√(13)/2⩽ q⩽5+√(13)/2 and β≥4. In the following, we show that these monogamy inequalities satisfied by the T^t_q-entropy entanglement can be further refined and become even tighter. For convenience, we denote T_AB_i=𝒯^t_q(ρ_AB_i) the T^t_q-entropy entanglement of ρ_AB_i and T_A|B_1,B_2,⋯,B_N-1=𝒯^t_q(ρ_A|B_1 ⋯ B_N-1). We first introduce a lemma. [Lemma 5]. For any 2⊗2⊗2 mixed state ρ∈ H_A⊗ H_B⊗ H_C, if T^2_AB⩾ T^2_AC, we have T^β_A|BC⩾(1+T^2_AC/T^2_AB) T^β_AB+(2^β/2-2)T^β_AC for all β⩾4 and 5-√(13)/2⩽ q⩽5+√(13)/2. By straightforward calculation, if T^2_AB⩾ T^2_AC we have T^β_A|BC ⩾ (T^2_AB+T^2_AC)^β/2 =T^β_AB(1+T^2_AC/T^2_AB)^β/2 ⩾ T^β_AB[1+T^2_AC/T^2_AB+(2^β/2-2)(T^2_AC/T^2_AB)^β/2] = (1+T^2_AC/T^2_AB) T^β_AB+(2^β/2-2)T^β_AC, where the second inequality is due to Lemma 2. As the subsystems A and B are equivalent in this case, we have assumed that T_AB⩾ T_AC without loss of generality. Moreover, if T_AB=0 we have T_AB=T_AC=0. That is to say the lower bound becomes trivially zero. From Lemma 5, we have the following theorem. [Theorem 3]. 
For any N-qubit mixed state, if T^2_AB_i⩾∑_j=i+1^N-1T^2_AB_j for i=1, 2, ⋯, N-2, we have T^β_A|B_1B_2⋯ B_N-1  ⩾∑_i=1^N-2(1+Ω_i)Γ^i-1 T^β_AB_i+Γ^N-2T^β_AB_N-1 for all β⩾4 and 5-√(13)/2⩽ q⩽5+√(13)/2, where Γ=2^β/2-2, Ω_i=∑_j=i+1^N-1T^2_AB_j/T^2_AB_i, i=1, 2, ⋯, N-2. From the inequality (<ref>) in Lemma 5, we have T^β_A|B_1B_2⋯ B_N-1 ⩾ (1+Ω_1)T^β_AB_1+Γ (∑_j=2^N-1T^2_AB_j)^β/2 ⩾ (1+Ω_1)T^β_AB_1+(1+Ω_2)Γ T^β_AB_2 +Γ^2(∑_j=3^N-1T^2_AB_j)^β/2 ⩾⋯ ⩾ (1+Ω_1)T^β_AB_1+⋯+(1+Ω_N-2)Γ^N-3T^β_AB_N-2    +Γ^N-2T^β_AB_N-1 for all β⩾4 and 5-√(13)/2⩽ q⩽5+√(13)/2. [Remark 3]. Theorem 3 introduces a new class of monogamy relations for multi-qubit states, encompassing inequality (<ref>) as a specific case of N=3, T_AB_1=T_AB_2 and β≥4. Similar to the discussions in Remark 1 and Remark 2, our formula (<ref>) in Theorem 3 gives a tighter monogamy relation with larger lower bounds than the inequalities (<ref>), (<ref>) and (<ref>). [Example 3]. Let us again consider the three-qubit state |ψ⟩_ABC in Example 1. Setting λ_0=λ_3=λ_4=1/√(5), λ_2=√(2/5) and λ_1=0, we have C(ρ_A|BC)=4/5, C(ρ_AB)=2√(2)/5 and C(ρ_AC)=2/5. By using equality (<ref>) and taking q=2, we get the T^t_q-entropy entanglement of |ψ⟩_ABC, T_A|BC=0.64, T_AB=0.32 and T_AC=0.16. It is seen that our formula (<ref>) in Theorem 3 is tighter than inequalities (<ref>), (<ref>) and (<ref>) for β≥4, see Fig.<ref>. Generally, the conditions for inequalities (<ref>) are not always satisfied. In following, we present a general monogamy inequality. [Theorem 4]. For an N-qubit mixed state, if T^2_AB_i⩾∑_k=i+1^N-1T^2_AB_k for i=1, 2, ⋯, m, and T^2_AB_j⩽∑_k=j+1^N-1T^2_AB_k for j=m+1,⋯,N-2, ∀ 1≤ m≤ N-3, N⩾ 4, we have T^β_A|B_1B_2⋯ B_N-1 ⩾∑_i=1^mΓ^i-1(1+Ω_i)T^β_AB_i+Γ^m+1 T^β_AB_m+1    +Γ^m+1∑_j=m+2^N-2(1+Υ_m+1)⋯ (1+Υ_j-1)T^β_AB_j    +Γ^m(1+Υ_m+1)⋯ (1+Υ_N-2)T^β_AB_N-1 for all β⩾4 and 5-√(13)/2⩽ q⩽5+√(13)/2, Ω_i=∑_k=i+1^N-1T^2_AB_k/T^2_AB_i, i=1, 2, ⋯, m, Υ_j=T^2_AB_j/∑_k=j+1^N-1T^2_AB_k, j=m+1, m+2, ⋯, N-2. From the inequality (<ref>) in Lemma 5, we have T^β_A|B_1B_2⋯ B_N-1 ⩾ (1+Ω_1)T^β_AB_1+Γ (∑_k=2^N-1T^2_AB_k)^β/2 ⩾ (1+Ω_1)T^β_AB_1+(1+Ω_2)Γ T^β_AB_2 +Γ^2(∑_k=3^N-1T^2_AB_k)^β/2 ⩾⋯ ⩾ (1+Ω_1)T^β_AB_1+⋯+(1+Ω_m)Γ^m-1T^β_AB_m    +Γ^m(∑_k=m+1^N-1T^2_AB_k)^β/2. Similarly, as T^2_AB_j≤∑_k=j+1^N-1T^2_AB_k for j=m+1,⋯,N-2, we get (∑_k=m+1^N-1T^2_AB_k)^β/2 ⩾Γ T^β_AB_m+1+(1+Υ_m+1)(∑_k=m+2^N-1T^2_AB_k)^β/2 ⩾⋯ ⩾Γ(T^β_AB_m+1+⋯+ (1+Υ_m+1)⋯ (1+Υ_N-3)T^β_AB_N-2)     +(1+Υ_m+1)⋯ (1+Υ_N-2)T^β_AB_N-1. Combining Eqs. (<ref>) and (<ref>), we have Theorem 4. Theorem 4 gives another monogamy relation based on the T^t_q-entropy entanglement. Comparing inequality (<ref>) in Theorem 3 with inequality (<ref>) in Theorem 4, one notices that for some classes of states that do not satisfy the conditions in Theorem 3, Theorem 4 works still. [Example 4]. Let us consider the four-qubit generalized W state, |W⟩_ABCD=1/2(|1000⟩+|0100⟩+|0010⟩+|0001⟩). Suppose q = 2. We have T_A|BCD=3/4 and T_AB=T_AC=T_AD=1/4. It is easy to see that this state does not satisfy the condition (<ref>) of Theorem 3. From inequality (<ref>) of Theorem 4, we have T^β_A|BCD≥(5/2×2^β/2-2)(1/4)^β for β≥4, see Fig.<ref>. § TWO NEW KINDS OF MULTIPARTITE ENTANGLEMENT INDICATORS Based on the monogamy relations (<ref>) and (<ref>), we are able to construct two sets of useful entanglement indicators that can be utilized to identify all genuine multi-qubit entangled states even for the cases that the three tangle of concurrence does not work. Let us first recall the definition of tangle. 
The tangle of a bipartite pure states |ψ⟩_AB is defined as <cit.>, τ(|ψ⟩_AB)=2(1- trρ_A^2), where ρ_A= tr_B|ψ⟩_AB⟨ψ|. The tangle of a bipartite mixed state ρ_AB is defined as τ(ρ_AB)=[min_{p_k,|ψ_k⟩}∑_kp_k√(τ(|ψ_k⟩_AB))]^2, where the minimization in Eq. (<ref>) is taken over all possible pure state decompositions of ρ_AB=∑_kp_k|ψ_k⟩_AB⟨ψ_k|. Based on Eq. (<ref>), we can construct a class of multipartite entanglement indicators in terms of the S_t-entropy entanglement, τ_t(ρ_A|B_1…B_N-1)=min∑_ip_iτ_t(|ψ_A|B_1…B_N-1^i⟩), where the minimum is taken over all possible pure state decompositions {p_i,ψ_A|B_1…B_N-1^i} of ρ_AB_1…B_N-1 and τ_t(|ψ_A|B_1…B_N-1^i⟩=E_t^√(2)(ψ_A|B_1…B_N-1^i)-∑_j=1^N-1 E_t^√(2)(ρ_AB_j^i). Similarly, based on Eq. (<ref>), we can construct a class of multipartite entanglement indicators in terms of the T^t_q-entropy entanglement for 5-√(13)/2⩽ q⩽5+√(13)/2, ω_q(ρ_A|B_1…B_N-1)=min∑_ip_iω_q(|ψ_A|B_1…B_N-1^i⟩), where the minimum is taken over all possible pure state decompositions {p_i,ψ_A|B_1…B_N-1^i} of ρ_AB_1…B_N-1 and ω_q(|ψ_A|B_1…B_N-1^i⟩=(𝒯_q^t)^2(ψ_A|B_1…B_N-1^i)-∑_j=1^N-1 (𝒯_q^t)^2(ρ_AB_j^i). In particular, we evaluate Eqs. (<ref>) and (<ref>) for the W-state. The nonzero values of τ_t and ω_q in following example assert their validity as two genuine entanglement indicators. [Example 5]. We consider the N-qubit W state, |W⟩_N=1/√(N)(|10⋯0⟩+|01⋯0⟩+|0⋯01⟩). The three tangle cannot detect the genuine tripartite entanglement of the W-state. However, the indicator τ_t works in this case. By using the multipartite entanglement indicator given in Eq. (<ref>), we have τ_t(|W⟩_N)=h^√(2)(2√(N-1)/N)-(N-1)h^√(2)(2/N). We plot the indicator as a function of N in a N-qubit W state, where the nonzero values imply that the genuine multipartite entanglement is detected, see Fig.<ref>. Moreover, the indicator ω_q effectively detects the genuine multipartite entanglement in this state too. By using the multipartite entanglement indicator given in Eq. (<ref>), we have ω_q(|W⟩_N)=f_q^2(2√(N-1)/N)-(N-1)f_q^2(2/N). We plot the indicator as a function of q for N=3,5,7,10, respectively. It shows that the indicator ω_q(|W⟩) is always positive for q∈[5-√(13)/2,5+√(13)/2], see Fig.<ref>. § CONCLUSION The monogamy relationship of quantum entanglement embodies fundamental properties manifested by multipartite entangled states. We have provided the general monogamy relations for two new entanglement measures in multi-qubit quantum systems, and demonstrated that these inequalities give rise to tighter constraints than the existing ones. Detailed examples have been presented to illustrate the effectiveness of our results in characterizing the multipartite entanglement distributions. Based on these general monogamy relations, we are able to construct the set of multipartite entanglement indicators for N-qubit states, which work well even when the concurrence-based indicators fails to detect the genuine multipartite entanglement. The distribution of entanglement in multipartite systems can be more precisely characterized through stricter monogamy inequalities. Our results may shed new light on further investigations of comprehending the distribution of entanglement in multipartite systems. Acknowledgments This work is supported by the National Natural Science Foundation of China (NSFC) under Grants 12075159, 12171044 and 12301582; the specific research fund of the Innovation Platform for Academicians of Hainan Province; the Start-up Funding of Dongguan University of Technology No. 221110084. 
99 JMHA2017M. Jafarpour, F. K. Hasanvand and D. Afshar, https://doi.org/10.1088/0253-6102/67/1/27 Commun. Theor. Phys. 67, 27 (2017). WMYX2018M. Y. Wang, J. Z. Xu, F. L. Yan and T. Gao, https://doi.org/10.1209/0295-5075/123/60002 Europhys. Lett. 123, 60002 (2018). HHGB2018H. L. Huang, A. K. Goswami, W. S. Bao and P. K. Panigrahi, https://doi.org/10.1007/s11433-018-9175-2 Sci. China-Phys. Mech. Astron. 61, 060311 (2018). DFGR2017 F. G. Deng, B. C. Ren, and X. H. Li, https://doi.org/10.1016/j.scib.2016.11.007 Sci. Bull. 62, 46 (2017). Hill1997S. Hill, and W. K. Wootters, https://doi.org/10.1103/PhysRevLett.78.5022 Phys. Rev. Lett. 78, 5022-5025 (1997). Bennett19963824 C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, https://doi.org/10.1103/physreva.54.3824 Phys. Rev. A 54, 3824 (1996). HHH1996R. Horodecki, P. Horodecki, and M. Horodecki, https://doi.org/10.1016/0375-9601(95)00930-2 Phys. Lett. A 210, 377-381 (1996). Gour2007G. Gour, S. Bandyopadhyay, and B. C. Sanders, https://doi.org/10.1063/1.2435088 J. Math. Phys. 48, 012108 (2007). Kim2010RJ. S. Kim, and B. C. Sanders, https://doi.org/10.1088/1751-8113/43/44/445305 J. Phys. A: Math. Theor. 43, 445305 (2010). LV1998P. T. Landsberg, and V. Vedral, https://doi.org/10.1016/S0375-9601(98)00500-3 Phys. Lett. A 247, 211-217 (1998). Kim2010TJ. S. Kim, https://doi.org/10.1103/PhysRevA.81.062328 Phys. Rev. A 81, 062328 (2010). KimBarry2011J. S. Kim, and B. C. Sanders, https://doi.org/10.1088/1751-8113/44/29/295303 J. Phys. A Math. Theor. 44, 295303 (2011). Yang2023 X. Yang, Y. H. Yang, L. M. Zhao, M. X. Luo, https://doi.org/10.1140/epjp/s13360-023-04259-9 Eur. Phys. J. Plus 138, 654 (2023). CKW2000 V. Coffman, J. Kundu, W. K. Wootters, https://doi.org/10.1103/PhysRevA.61.052306 Phys. Rev. A 138, 052306 (2000). Terhal2004 B. Terhal, https://doi.org/10.1147/rd.481.0071 IBM J. Res. Dev. 48, 71-78 (2004). T.J.Osborne T. J. Osborne, and F. Verstraete, https://doi.org/10.1103/PhysRevLett.96.220503 Phys. Rev. Lett. 96, 220503 (2006). Oliveira2014 T. R. de Oliveira, M. F. Cornelio, and F. F. Fanchini, https://doi.org/10.1103/10.1103/PhysRevA.89.034303 Phys. Rev. A 89, 034303 (2014). Bai3 Y. K. Bai, Y. F. Xu, and Z. D. Wang, https://doi.org/10.1103/PhysRevLett.113.100503 Phys. Rev. Lett. 113, 100503 (2014). Bai2014 Y. K. Bai, Y. F. Xu, and Z. D. Wang, https://doi.org/10.1103/10.1103/PhysRevA.90.062343 Phys. Rev. A 90, 062343 (2014). R2015 W. Song, Y. K. Bai, M. Yang, and Z. L. Cao, https://doi.org/10.1103/PhysRevA.93.022306 Phys. Rev. A 93, 022306 (2016). Luo2016Y. Luo, T. Tian, L. H. Shao, and Y. M. Li, https://doi.org/10.1103/10.1103/PhysRevA.93.062340 Phys, Rev. A 93, 062340 (2016). Khan2019A. Khan, J. ur Rehman, K. Wang, and H. Shin, https://doi.org/10.1038/s41598-019-52817-y Sci. Rep. 9, 16419 (2019). Christandl2004M. Christandl, and A. Winter, https://doi.org/10.1063/1.1643788 J. Math. Phys. 45, 829-840 (2004). Zhu2014X. N. Zhu, and S. M. Fei, https://doi.org/10.1103/PhysRevA.90.024304 Phys. Rev. A, 90, 024304 (2014). Luo2015 Y. Luo, and Y. M. Li, https://doi.org/10.1016/j.aop.2015.08.022 Ann. Phys. 362, 511-520 (2015). JF2017Z. X. Jin, S. M. Fei, https://doi.org/10.1007/s11128-017-1520-3 Quantum Inf.Process. 16, 77 (2017). JF2018Z. X. Jin, J. Li, T. Li, S. M. Fei, https://doi.org/10.1103/PhysRevA.97.032336 Phys.Rev.A 97, 032336 (2018). Walter2013M. Walter, B. Doran, D. Gross, and M. Christandl, https://doi.org/10.1126/science.1232957 Science, 340, 1205 (2013). See2010 M. 
P. Seevinck, https://doi.org/10.1007/s11128-009-0161-6 Quantum Inf. Process. 9, 273 (2010). Ma2011 X. S. Ma, B. Dakic, W. Naylor, A. Zeilinger, P. Walther, https://doi.org/10.1038/NPHYS1919 Nat. Phys. 7, 399 (2011). Ve2013 E. Verlinde, H. Verlinde, https://doi.org/10.1007/JHEP10(2013)107 J. High Energy Phys. 1310, 107 (2013). Rungta2001 P. Rungta, V. Bužek, C. M. Caves, M. Hillery, and G. J. Milburn, https://doi.org/10.1103/PhysRevA.64.042315 Phys. Rev. A 64, 042315 (2001). Wootters1998W. K. Wootters, https://doi.org/10.1103/PhysRevLett.80.2245 Phys. Rev. Lett. 80, 2245 (1998). JQZ. X. Jin, C. F. Qiao, https://doi.org/10.1088/1674-1056/ab6720 Chinese Phys. B 29, 020305 (2020). TYHY. H. Tao, K. Zheng, Z. X. Jin, S. M. Fei, https://doi.org/10.3390/math11051159 Mathematics 11(5), 1159 (2023). AALE2000 A. Acin, A. Andrianov, L. Costa, E. Jane, J. I. Latorre, and R. Tarrach, https://doi.org/10.1103/PhysRevLett.85.1560 Phys. Rev. Lett. 85, 1560 (2000). GXH2008 X. H. Gao, S. M. Fei, https://doi.org/10.1140/epjst/e2008-00694-x Eur. Phys. J. Special Topics 159, 71 (2008). Karmakar(2016) S. Karmakar, A. Sen, A. Bhar, and D. Sarkar, https://doi.org/10.1103/PhysRevA.93.012327 Phys. Rev. A 93, 012327 (2016). YGM2016G. M. Yuan, W. Song, M. Yang, D. C. Li, J. L. Zhao, and Z. L. Cao, https://doi.org/10.1038/srep28719 Sci. Rep. 6, 28719 (2016).
http://arxiv.org/abs/2407.13662v1
20240718163841
Obstructions to homotopy invariance of loop coproduct via parametrised fixed-point theory
[ "Lea Kenigsberg", "Noah Porcelli" ]
math.AT
[ "math.AT", "math.GT", "math.KT", "math.SG" ]
§ ABSTRACT Given f:M → N a homotopy equivalence of compact manifolds with boundary, we use a construction of Geoghegan and Nicas to define its Reidemeister trace [T] ∈π_1^st( N, N). We realize the Goresky-Hingston coproduct as a map of spectra, and show that the failure of f to entwine the spectral coproducts can be characterized by Chas-Sullivan multiplication with [T]. In particular, when f is a simple homotopy equivalence, the spectral coproducts of M and N agree. Studying the Performance of the Jellyfish Search Optimiser for the Application of Projection Pursuit [ July 22, 2024 ==================================================================================================== § INTRODUCTION Let M be a closed smooth oriented manifold, and M its free loop space. There are various structures one can define on the homology of M. The first to be introduced was the Chas-Sullivan product <cit.>: μ^CS: H_*( M)⊗ H_*( M) → H_*-n( M), which, roughly speaking, takes two generic families of loops in M and concatenates them when their starting points agree. There is also the Goresky-Hingston coproduct <cit.>: Δ^GH: H_*( M) →H̃_*+1-n( M/M∧ M/M) which takes a generic family of loops, and for each loop γ in the family and s ∈ [0,1] such that γ(0) = γ(s), contributes the pair of loops (γ|_[0,s], γ|_[s,1]). See <ref>. There are many other structures and constructions of this flavor, all fall under the general umbrella term of string topology. For instance, there is a Lie bracket on equivariant homology H_*^S^1( M) <cit.>. Another example is Cohen-Jones' construction of a unital ring structure <cit.>: M^-TM∧ M^-TM→ M^-TM. This structure recovers the Chas-Sullivan product by taking homology, but also gives operations in other generalised homology theories. The first offering of our paper is a generalization of the Goresky-Hingston coproduct to non-oriented manifolds with corners, and to a map of spectra: Δ: M^-TM/∂ M ^-TM∧ S^1→Σ^∞ M/M∧ M/M, where ∂ M := M|_∂ M is the space of loops γ∈ M with γ(0) ∈∂ M. Note that Δ does not define a coring structure in the usual algebraic sense, since it is not of the form A → A ⊗ A for any A. We still refer to Δ as a coproduct since when M is a closed oriented manifold, Δ is a natural generalisation of the Goresky-Hingston coproduct, which does define a (non-unital) coalgebra structure on H_*+n-1( M, M; k) where k is a field. See Section <ref> for an exact statement and proof. It would be interesting to understand the nature of the algebraic structure that Δ defines. It was shown in <cit.>, <cit.> and <cit.> that the Chas-Sullivan product is preserved by homotopy equivalences, and by Rivera-Wang <cit.> that for simply-connected manifolds the Goresky-Hingston coproduct over is preserved by homotopy equivalences. Motivated by a computation of Naef <cit.>, showing that the Goresky-Hingston coproduct is not a homotopy invariant in general, the first goal of this paper is to characterize the failure of the spectral Goresky-Hingston coproduct to be a homotopy invariant. More precisely, let f: N → Z be a homotopy equivalence of compact manifolds with boundary. Then f induces equivalences of spectra f: Σ^∞ N/N →Σ^∞ Z/Z and f_!: N^-TN/∂ N^-TN Z^-TZ/∂ Z^-TZ. See <ref>. Then the first goal of this paper is to study the failure of the diagram N^-TN/∂ N^-TN∧ S^1[r, "Δ^N"] [d, "f_! ∧ Id_S^1"] Σ^∞ N/N∧ N/N[d, "f ∧ f"] Z^-TZ/∂ Z^-TZ∧ S^1 [r, "Δ^Z"] Σ^∞ Z/Z∧ Z/Z to commute. 
As a first step to addressing the general case, we assume that f is a codimension 0 embedding, and that the complement W := Z ∖ N is an h-cobordism. We then define operations Ξ_l, Ξ_r: Z^-TZ/∂ Z^-TZ∧ S^1 →Σ^∞ Z/Z∧ Z/Z, in the spirit of parameterized Reidemeister traces, following ideas of Geoghegan-Nicas <cit.> and Malkiewich <cit.>. See <ref> for further explanation. The first theorem of this paper is then: Assume f: N → Z is a codimension 0 embedding such that the complement is an h-cobordism. Then the failure of diagram (<ref>) to commute is given by Ξ_r and Ξ_l. That is: Δ^Z ∘ (f_! ∧ Id_S^1) - (f ∧ f) ∘Δ^N ≃Ξ_r - Ξ_l. We next characterize the discrepancy Ξ_r - Ξ_l in terms of familiar operations and invariants. To do this, to f we first associate a parameterized fixed-point invariant: [T]: Σ^∞ S^1→Σ^∞ N/N. Viewed as a framed manifold via the Pontryagin-Thom isomorphism, the class [T] is constructed as in Geoghegan-Nicas <cit.>, and is given by the fixed points of a strong deformation retraction F: W× I → W. See <ref> for further explanation. Then by composing with appropriate anti-diagonal maps we obtain classes: [T_diag], [T_diag]: Σ^∞ S^1→Σ^∞ N × N/N× N. In <ref> we define spectral Chas-Sullivan products: μ_r: M^-TM/∂ M^-TM∧Σ^∞_+ M →Σ^∞_+ M, and μ_l: Σ^∞_+ M ∧ M^-TM/∂ M^-TM→Σ^∞_+ M, which after passing to homology realize the usual homology-level Chas-Sullivan products. Let [Z]: → Z^-TZ/∂ Z^-TZ→ Z^-TZ/∂ Z^-TZ denote the fundamental class of Z. Then the following theorem says that Ξ_r and Ξ_l can be interpreted as the Chas-Sullivan product with [T]: Under the same assumptions as Theorem <ref>, there are homotopies of maps of spectra: Ξ_r ≃μ_r(·× [Z],[T_diag]) and Ξ_l ≃μ_l([T_diag],[Z] ×·), where we use the spectral Chas-Sullivan product for Z × Z, inserting the classes [Z], [T_diag] and [T_diag] as appropriate. In order to reduce the general case to the codimension 0 setting we prove the following stability property: Let e: M ↪^L be an embedding with normal bundle ν; let Dν be the total space of the unit disc bundle of ν, also a compact manifold. Then the coproducts for M and Dν agree. The following corollary is immediate from Theorem <ref>: If N and Z are simple homotopy equivalent closed manifolds, then their coproducts agree. Corollary <ref> has also been proved in recent work of Naef-Safronov <cit.>; see also Remark <ref>. We may extend the construction of the invariant [T] to any homotopy equivalence f: N → Z. Combining Theorems <ref>, <ref> and <ref> in Section <ref>, we deduce the main result of our paper: Let f: N → Z be a homotopy equivalence of compact manifolds with boundary (of any dimensions). Then the failure of f to respect the spectral Goresky-Hingston coproduct is given by: Δ^Z ∘ (f_! ∧ Id_S^1) - (f ∧ f) ∘Δ^N ≃μ_r(·× [Z],[T_diag]) - μ_l([T_diag], [Z] ×·). We now give the corresponding statement on homology. Let h_*: Ω^fr_*(·) → H_*(·) be the Hurewicz homomorphism. Using the results of Sections <ref> and <ref>, which show that after taking homology our spectral constructions agree with their homological counterparts, we obtain the corresponding homological statement: Let f: N → Z be an orientation-preserving homotopy equivalence of closed oriented manifolds. Then for all x ∈ H_p( N): Δ^GH∘ f_*(x) - (f× f)_*∘Δ^GH(x) = (-1)^np+nμ^CS(f_*(x) × [M], h_*[T_diag]) - (-1)^p+nμ^CS(h_*[T_diag], [M] × f_*(x)). where we take the Chas-Sullivan product in Z × Z. 
A variant of formula (<ref>), first conjectured by Naef in <cit.>, has been recently proved by Naef-Safronov <cit.> using different methods. Their formula is similar but instead of h_*[T] uses a different homology class; <ref> below implies that when π_2=0, these homology classes agree. In particular, we expect that in the case π_2=0, Corollary <ref> recovers <cit.>.Another variant of this formula is to appear in upcoming work of Wahl <cit.>, using a differently defined obstruction class. It is natural to conjecture that all of these obstruction classes agree. Lastly, when we assume π_2(N)=0, we can invoke a theorem of Geoghegan and Nicas <cit.> which further identifies [T] with the Dennis trace of the Whitehead torsion of f. More precisely, let tr: K_1([π_1(M)]) → HH_1([π_1(N)]) be the classical Dennis trace. Then after identifying HH_1([π_1(N)]≅ H_1( N) (which requires the π_2=0 assumption), and projecting away from constant loops, the content of <cit.> implies that tr(τ) = h_*[T], where τ is the Whitehead torsion of f. See Section <ref> for more precise statements. We expect that the condition π_2 =0 can be removed by lifting the invariants of <cit.> to live in topological, rather than ordinary, Hochschild homology. See <ref>. Let tr(τ)_diag and tr(τ)_diag be the images of tr(τ) under the antidiagonal maps. Then combining (<ref>) and <ref> we obtain: Let f: N → Z be an orientation-preserving homotopy equivalence of closed oriented manifolds. Suppose that π_2(N)=0. Then for all x ∈ H_p( N): Δ^GH∘ f_*(x) - (f× f)_*∘Δ^GH(x) = (-1)^np+nμ^CS(f_*(x) × [M], tr(τ)_diag) - (-1)^p+nμ^CS( tr(τ)_diag, [M] × f_*(x)). §.§ Future work and directions Let E → B a be smooth fiber bundle with fiber a smooth closed manifold M. Suppose we are given a fiberwise homotopy equivalence f: E → M × B over B. In future work we hope to show that one can build spectral operations in families and define Δ_fib^E, Δ_fib^B× M, Ξ_l^B, Ξ_r^B, μ_l^M × B and μ_r^M × B as morphisms of parametrized spectra. In particular, we conjecture that an analogue of <ref> holds: Δ_fib^B× M∘ f_! - f∧ f ∘Δ_fib^E = Ξ_l^B - Ξ_r^B. We further conjecture that Ξ_l^B -Ξ_r^B can be characterized in terms of multiplication by higher Reidemeister traces. Namely, let (̋M) be the stable h-cobordism space of M. Then we expect that one can extend the constructions of <ref> to define a map: RT: (̋M) →Ω^∞+1Σ^∞ M/M, and show: There are homotopies of maps of parametrised spectra: Ξ_r^B ≃μ_r^M × B(·× [M], [RT_diag]) and Ξ_l^B ≃μ_l^M × B( [RT_diag], [M] ×·). Lastly, to further relate these traces to higher Whitehead torsion, we conjecture a natural generalization of (<ref>) of <cit.>: The following diagram commutes up to natural homotopy: ΩΩ^∞ K[Σ^∞_+Ω M] [r] [d, "Ω tr"] (̋M) [d,"RT"] ΩΩ^∞ THH(Σ^∞_+Ω M) [r] ΩΩ^∞Σ^∞( M/M ) where tr is the Dennis trace on THH due to Bökstedt <cit.>, the top horizontal arrow is given by Waldhausen's splitting theorem, and the bottom arrow is the equivalence: THH(Σ^∞_+Ω M) ≃Σ_+^∞ M. Combined, these conjectures imply that the failure of the Goresky-Hingston coproduct to commute in families can be measured by (suitably interpreted) multiplication with traces of higher Whitehead torsions. §.§ Structure of the paper In Section <ref> we set up conventions and notations. In Section <ref> we define the spectral Goresky-Hingston coproduct. In Section <ref> we define a version of the spectral Chas-Sullivan product. 
In Sections <ref> and <ref> we show that these recover the usual definitions after passing to homology; as an intermediate step, we use models for the string topology operations built using transversality. In Section <ref> we show that the spectral string topology operations are invariant under replacing M with the total space of certain disc bundles over M. From this, we deduce simple homotopy invariance of the coproduct. In Section <ref> we recall and define fixed-point invariants and operations. In Section <ref> we prove Theorem <ref> in the special case that N → Z is a codimension 0 embedding such that the complement Z ∖ N^∘ is an h-cobordism. In Section <ref> we prove Theorem <ref> in general, by using results of Section <ref> to reduce to the codimension 0 case. Appendix <ref> recaps some conventions for signs in stable homotopy theory. §.§ Acknowledgements We are grateful to Florian Naef and Nathalie Wahl for helpful conversations. Lea would like to thank Mohammed Abouzaid, Roger Casals, Inbar Klang, and Cary Malkiewich for helpful conversations and support, and the president post doctoral fellowship program for professional development and creating excellent work conditions. Noah thanks Ilaria Di Dedda, and Oscar Randal-Williams for helpful conversations, and is supported by the Engineering and Physical Sciences Research Council [EP/W015889/1]. § PRELIMINARIES §.§ Loops Let M be a smooth Riemannian manifold. In this section we recall from <cit.> a convenient model for the free loop space of M. A loop γ: I := [0,1] → M is of Sobolev class H^1 if γ and its weak derivative are of class L^2. This means that γ'(t) is defined almost everywhere, and the length: l(γ) = ∫_0^1 |γ'(t)| dt is finite and well defined. The inclusions: C^∞-loops ⊂ piecewise C^∞-loops ⊂ H^1-loops ⊂ C^0-loops are homotopy equivalences. See <cit.> and references therein. A constant speed path is a path γ such that |γ'(t)| is constant where it is defined. For our model of the free loop space, M, we take the space of constant speed H^1 loops. By reparametrising, this space is homotopy equivalent to the space of all H^1-loops. Note that this model depends on the metric on M, but if g and g' are different metrics on M, there is a canonical homeomorphism (M,g) → (M,g') given by reparametrising all loops. In our formulas consisting of operations on loops, we always implicitly reparameterise so that the loops are of constant speed. This makes concatenation strictly associative. More explicitly, if γ, β: [0,1] → M are two constant speed loops, first define σ = l(γ)/(l(γ) + l(β)). Then the concatenation γ⋆β is given by: γ⋆β(t) = γ(t/σ) if 0 ≤ t ≤ σ, and β((t-σ)/(1-σ)) if σ ≤ t ≤ 1. The same convention is used in <cit.>. For the purpose of readability, we use the following notation for concatenation of paths. Given a path γ from x to y and a path δ from y to z, we write x γ⇝ y δ⇝ z for the constant speed concatenation of the two paths. §.§ Suspensions We will write many explicit formulas for maps into or out of suspensions of based spaces so we choose which model for the suspension functor we work with. For L ≥ 0, we give two models for Σ^L X: * [-1,1]^L × X/(∂ [-1,1]^L × X)∪( [-1,1]^L ×{*}) * ^L × X/((^L ∖ (-1,1)^L) × X) ∪( ^L ×{*}) in both cases based at the point which is the image of the collapsed subspace. In both cases, if X is equipped with a basepoint x_0, we further quotient by [-1,1]^L ×{x_0}. We will use these two models interchangeably, noting they are canonically homeomorphic.
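Referring back to the constant-speed concatenation convention fixed in the Loops subsection above, the following small sketch (an illustration of ours, playing no role in the constructions; the names are made up) models a constant-speed loop as a pair (map, length) and checks numerically that the reparametrised concatenation is strictly associative.

```python
import numpy as np

def circle(radius):
    """Constant-speed loop based at the origin: a circle of the given radius, length 2*pi*radius."""
    def f(t):
        return np.array([radius * np.sin(2 * np.pi * t), radius * (1 - np.cos(2 * np.pi * t))])
    return f, 2 * np.pi * radius

def concat(a, b):
    """Constant-speed concatenation: spend the fraction sigma = l(a)/(l(a)+l(b)) of the time on a."""
    (f, la), (g, lb) = a, b
    sigma = la / (la + lb)
    def fg(t):
        return f(t / sigma) if t <= sigma else g((t - sigma) / (1 - sigma))
    return fg, la + lb

a, b, c = circle(1.0), circle(2.0), circle(3.0)
left, right = concat(concat(a, b), c), concat(a, concat(b, c))
ts = np.linspace(0.0, 1.0, 1001)
assert np.allclose([left[0](t) for t in ts], [right[0](t) for t in ts])
print("(a * b) * c == a * (b * c) as parametrised loops")
```

Associativity here is exact up to floating-point error precisely because both sides traverse each constituent loop over a time interval proportional to its length.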
§ SPECTRAL GORESKY-HINGSTON COPRODUCT §.§ Preamble Let M be a compact smooth manifold, possibly with corners. The main goal of this section is to define and study a realization of the Goresky-Hingston coproduct as a map of spectra. Fix an embedding e: M →^L, and let ν_e be the normal bundle (defined to be the orthogonal complement of de(TM)) equipped with the pullback metric. Denote by Dν_e and Sν_e the corresponding unit disk and sphere bundles respectively. Let ev_0: M → M, be the evaluation map sending γ↦γ(0). We use ev_0 to pull back ν_e to a bundle which, by abuse of notation, we write as ν_e → M. The Thom space, M^Dν_e, is defined by: M^Dν_e := Tot(Dν_e → M)/Tot(Sν_e → M), where Tot refers to taking the total space. Similarly to the case of suspensions, this is canonically homeomorphic to: M^Dν_e≅Tot(ν_e → M)/(Tot(ν_e → M) ∖Tot(Dν_e → M)^∘) Let M^-TM be the spectrum given by desuspending this Thom space. That is, it is the sequential spectrum whose i^th space, for i ≫ 0, is given by: M^-TM_i := M^D(^i-L⊕ν_e). In Section <ref> we describe the Goresky-Hingston coproduct as a map of spectra: Δ: M^-TM∧ S^1 →Σ^∞ M/M∧ M/M for a closed smooth manifold. The definition in this case is more transparent and requires less choices than the general case, but already contains most of the main ideas. In Sections <ref> and <ref> we treat the more general case of smooth compact manifolds with corners, and define a map: Δ: M^-TM/∂ M^-TM∧ S^1 →Σ^∞ M/M∧ M/M, where ∂ M := M|_∂ M denotes the space of loops γ such that γ(0) ∈∂ M. We keep track of all the choices involved in the definition, and prove independence of choices in <ref>. In Section <ref> we prove a stability property, from which we deduce simple homotopy invariance of the coproduct. §.§ The closed case In this section M is a smooth closed manifold of dimension n. Let e, ν_e, and Dν_e be as in <ref>. We identify Dν_e with an -tubular neighborhood U ⊂^L by an embedding ρ:Dν_e→ U. Let π: Dν_e→ M be the projection and r: U → M the retraction defined by e ∘π∘ρ^-1. Note that we can choose ρ and so that r(u) is always the closest point to u in M. Recall from <ref> and <ref> our conventions and notation for the concatenation of paths. Moreover, suppose x, y ∈ U ⊂^L are such that U contains the the straight line path between x and y. Denote by x θ⇝ y its retraction to M using r. Let (v, γ, t) ∈ M^Dν_e∧ S^1. That is, γ∈ M, t ∈ S^1 and v ∈ (Dν_e)_γ(0). The unstable coproduct is the map of spaces: Δ_unst: M^Dν_e∧ S^1 →Σ^L M/M∧ M/M sending (v, γ, t) to: (2/(v-γ(t)), γ(0)γ|_[0,t]⇝γ(t)θ⇝γ(0), γ(0) θ⇝γ(t) γ|_[t,1]⇝γ(0)) if ‖ v-γ(t)‖≤ otherwise. where we perform the subtraction in ^L.The (stable) coproduct: Δ: M^-TM∧ S^1 →Σ^∞ M/M∧ M/M is obtained from the unstable coproduct by desuspending Δ_unst L times (see Lemma <ref>). For sufficiently small , the map Δ_unst is a well-defined continuous map. Indeed, first note that for sufficiently small , if ‖ v - γ(t)‖≤ then the straight-line path connecting v and γ(t) lives in U, so the paths γ(t)θ⇝γ(0) and γ(0) θ⇝γ(t) are well defined. For equations of the form of (<ref>), we call the “if” condition (so ‖ v-γ(t)‖ in the case of (<ref>) the incidence condition. Secondly, we defined Δ_unst using coordinates on Tot(Dν_e → M) × I. To show that it descends to the quotient M^Dν_e∧ S^1, we need to check that when either |ρ^-1(v) | = 1, t=0, or t=1, (v, γ, t) is sent to the basepoint. Note that v is a normal vector at γ(0) and that we chose the tubular neighborhood U so that γ(0) is the closest point to v in e(M). 
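To see the geometric content of this formula in a toy case, the following sketch (purely illustrative and ours; it ignores the Thom-space, tubular-neighbourhood and retraction ingredients of the actual definition) takes a figure-eight loop in the plane, locates the interior times t at which γ(t) returns ε-close to the basepoint γ(0), and records the two loops γ|_[0,t] and γ|_[t,1] that the coproduct would contribute.

```python
import numpy as np

eps = 1e-2
gamma = lambda t: np.array([np.sin(4 * np.pi * t), np.sin(2 * np.pi * t)])  # figure-eight, gamma(0) = (0, 0)

ts = np.linspace(0.0, 1.0, 20001)
pts = np.array([gamma(t) for t in ts])
close = np.linalg.norm(pts - gamma(0.0), axis=1) <= eps   # incidence condition ||gamma(t) - gamma(0)|| <= eps
interior = close & (ts > 0.1) & (ts < 0.9)                # discard the trivial hits near t = 0 and t = 1
t_cut = ts[interior].mean()                               # the self-intersection time (about 0.5 here)
print("cut at t =", round(t_cut, 3))

first_loop = pts[ts <= t_cut]     # sample points of gamma|_[0, t]
second_loop = pts[ts >= t_cut]    # sample points of gamma|_[t, 1]
print(len(first_loop), len(second_loop), "sample points in the two output loops")
```

In the actual construction the cutting time is governed by the normal-bundle coordinate v and the two pieces are closed up using the retraction r; the toy above only illustrates the incidence condition and the splitting of the loop.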
This means that when |ρ^-1(v) | = 1, ‖ v -γ(t) ‖≥ε for every t, hence the first entry in <ref> has ‖·‖≥ 2 and (v, γ, t) is sent to the basepoint. Moreover, when t=0, the retraction of the straight line path from v to γ(0) is the constant path at γ(0), since γ(0) is the closest point to v in M. This implies that the second argument in <ref> is sent to the base point. The case of t=1 is similar. We treat independence of choices when we deal with the general case in Lemma <ref>. §.§ Choices In this section we collect all the choices required for our definition of the coproduct when M is a manifold with corners. To define the coproduct we require an embedding e: M →^L, and a tubular neighborhood of e(M). In order to extend the definition of a tubular neighborhood to manifolds with corners, we consider a small “extension” of M, denoted M^ext, and containing M as a codimension 0 submanifold: Let M be a smooth compact manifold with corners. As a topological manifold, M^ext is given by M^ext := M ∪_∂ M∂ M × [0,1]. To equip M^ext with a smooth structure we choose a vector field on M which points strictly inwards at the boundary. Let {ϕ^s}_s ≥ 0 be the associated flow. Then there is a homeomorphism Φ: M^ext→ M sending x ∈ M to ϕ^1(x), and (y,t) ∈ M × [0,1] to ϕ^1-t(y). We equip M^ext with the pullback of the smooth structure on M. Note that M^ext contains a copy of M, which is a codimension 0 submanifold with corners. Furthermore the canonical projection map M^ext→ M is piecewise smooth. The auxiliary data required to define the string coproduct for M is as follows: Let L ≥ 0 be an integer. A choice of embedding data of rank L is a tuple (e, ρ^ext, ζ, V, , λ) consisting of: * A smooth embedding e: M^ext↪^L. We write ν_e for the normal bundle of this embedding, defined to be the orthogonal complement of TM^ext. Note that e canonically equips both TM^ext and ν_e with metrics, by pulling back the Euclidean metric on ^L. Let π_e: ν_e → M^ext be the projection map. * A tubular neighbourhood ρ^ext: D_2 ν_e ↪^L, where D_2 denotes the length-2 disc bundle. More precisely, a smooth embedding, restricting to e on the zero-section. We let Ũ be the image of ρ^ext. We let ρ be the restriction of ρ^ext to the unit disc bundle of ν_e over M, and U the image of ρ. In symbols: ρ :=ρ^ext|_D_1 ν_e|_M, U:= Im(ρ) and Ũ = Im(ρ^ext). From the choices above we obtain a retraction r: Ũ→ M defined to be the composition of (ρ^ext)^-1, the projection to M^ext, and the natural map M^ext→ M. * A real number ζ > 0. We require that ζ is small enough that whenever x, y ∈ M satisfy ‖ x-y‖≤ζ, the straight-line path between them [x,y] lies inside Ũ. * An inwards-pointing vector field, V, on M. We write {ϕ_s}_s ≥ 0 for the flow of this vector field. We require that V is small enough that the following condition holds: for each x ∈ M, the length of the path {ϕ_s(x)}_s ∈ [0,1] is ≤ζ/4. * A real number > 0 sufficiently small such that: * U contains an -neighbourhood of M. * The Euclidean distance: d(ρ(Dν|_ϕ_1(M)), ρ(Dν|_∂ M))) ≥ 2 * If x, y ∈ U and ‖ x-y ‖≤, then the straight-line path [x,y] lies in Ũ, and r([x,y]) has length ≤ζ/4. If this final condition holds, we write θ_xy (or just θ if the endpoints are clear from context) for the path r([x,y]). * λ > 0, large enough such that: λ· d(ρ(Sν_e|_M), e(M)) ≥ 2 where Sν_e is the unit sphere bundle of ν_e; note that this distance on the left hand side is at least , by (<ref>.<ref>). 
We write ED^L(M) for the simplicial set whose k-simplices consist of the set of continuously-varying families of tuples of embedding data, parametrised by the standard k-simplex. There is a forgetful map ED^L(M) →Emb(M^ext, ^L) to the simplicial set of embeddings M^ext↪^L, which forgets all the data except the embedding e. These conditions are used in Lemma <ref> to ensure that the map we use to define the coproduct is well-defined. We indicate how they are used: * In Condition (<ref>.<ref>) we give a precise definition of the tubular neighborhood needed for the definition of the coproduct. The somewhat cumbersome definition stems from the fact that we are dealing with manifolds with boundary or corners. * Condition (<ref>.<ref>) is used in Lemma <ref>, which allows us to discard small loops, of length < ζ. * The choice of vector field, V in (<ref>.<ref>), and the bounds (<ref>.<ref>) are used so that the coproduct sends loops with starting point in ∂ M to the base point. * The choice of λ in (<ref>.<ref>) is a logistical choice, so we can avoid excessive rescaling. It used in ensuring that the coproduct descends to the Thom space. The forgetful map ED^L(M) →Emb(M^ext, ^L) is a trivial Kan fibration and hence a weak equivalence. It follows that ED^L(M) is (L-2n-3)-connected. We let ED_i^L(M) be the simplicial set consisting of tuples consisting of the first i pieces of data of a choice of embedding data; note that the conditions that each piece of data in Definition <ref> must satisfy only involve earlier pieces of data. Then ED_6^L(M) = ED^L(M) and ED^L_1(M) = Emb(M^ext, ^L). There are forgetful maps ED_i^L(M) → ED_i-1^L(M); we argue that each of these is a trivial Kan fibration.It is standard that ED_1^L(M) is a Kan complex. A standard argument (using the implicit function theorem) implies the first forgetful map ED_2^L(M) → ED^L_1(M) is a trivial Kan fibration. For the second forgetful map, note that the condition for ζ holds for sufficiently small ζ; similarly (<ref>.<ref>) holds for any sufficiently small vector fields V. Similarly for (respectively λ), any sufficiently small (respectively large) choice will satisfy the required conditions. All of these arguments also work for families over a simplex, implying that each forgetful map is a trivial Kan fibration. §.§.§ Stabilization There are stabilisation maps: st=st^L,L+1: ED^L(M) → ED^L+1(M) constructed by sending (e, ρ^ext, ζ, V, , λ) ↦ (e', ρ'^ext, ζ, V, , λ). Here e' is given by composing e with the standard embedding ^L ↪⊕^L = ^L+1, and ρ'^ext is the composition: ρ'^ext: D_2 ν_e' = D_2 (⊕ν_e') ⊆ [-2, 2] × D_2 ν_e'→⊕^L = ^L+1, where the final arrow is inclusion on the first factor and ρ^ext on the last factor. It is clear that these are compatible with the natural inclusion, st_Emb: Emb(M^ext, ^L) →Emb(M^ext, ^L+1), given by composing with the inclusion ^L ≅{0}×^L ↪^L+1. Also note that there are natural identifications ν_e' = ⊕ν_e. It is straightforward to check that this data does indeed define embedding data.For L ≤ L', we write st^L,L': ED^L(M) → ED^L'(M) for the composition of L'-L stabilisation maps. §.§ Coproduct Let M be a smooth manifold with corners. In this section we define the coproduct as a map of spectra: Δ: M^-TM/∂ M^-TM∧ S^1 →Σ^∞ M/M∧ M/M, by defining it first unstably as a map of spaces: Δ_unst=Δ^Q_unst: M^Dν_e/∂ M^Dν_e∧ S^1 →Σ^L M/M∧ M/M, for a fixed choice of embedding data Q for M. Before stating the definition of Δ_unst and Δ, we define a map B: M → M which “crushes” small loops to constant loops. 
More precisely: Let Q ∈ ED^L(M) be embedding data. Note that the embedding e: M →^L induces a metric on M. Let M ^≤ζ be the subset of M consisting of loops of length less than ζ. Then there exists a map: B=B^Q: M → M, homotopic to the identity (relative to the space of constant loops) and continuously varying in Q, which sends M ^≤ζ to constant loops. Let M ⊂ M be the inclusion of constant loops. Let s_γ: M → [0,1] be the continuous function defined by s_γ = max {t | ℓ(γ_[0,t] )≤ζ} where ℓ denotes Riemannian length. Define a homotopy H: M × [0,1] → M to send (γ, τ) to γ(0) γ_[0, τ s_γ ]⇝γ( τ s_γ) θ⇝γ(s_γ) γ_[s_γ, 1]⇝γ(1), noting that the path γ( τ s_γ) θ⇝γ(s_γ) is well-defined, by (<ref>.<ref>). Then H_1 is the identity. Moreover, the subset M^≤ζ is sent by H_0 to the subset of constant loops. We now proceed with the definition of Δ_unst. Fix embedding data Q for M. The unstable coproduct, Δ_unst=Δ^Q_unst, is the map of spaces: Δ_unst: M^Dν_e/∂ M^Dν_e∧ S^1 →Σ^L M/M∧ M/M defined as follows. Let (v, γ, t) ∈ M^Dν_e/∂ M^Dν_e∧ S^1: so t ∈ [0,1], γ∈ M, and v ∈ Dν_e lies in the fibre over γ(0). Then Δ_unst(v, γ, t)=[ λ(v-ϕ_1 ∘γ(t)),; B( γ(0) γ|_[0,t]⇝γ(t) ϕ⇝ϕ_1 ∘γ(t) θ⇝γ(0) ),; B( γ(0) θ⇝ϕ_1 ∘γ(t) ϕ⇝γ(t) γ|_[t,1]⇝γ(0) ); ] if ‖ v - ϕ_1 ∘γ(t)‖≤ε otherwise. Note that we have used Convention (<ref>.<ref>) for the target. The path γ(0) θ=θ_v, γ(0)⇝ϕ_1 ∘γ(t) is defined as in <ref>, and γ(t) ϕ⇝ϕ_1 ∘γ(t) denotes the path given by the flow of ϕ. See Figure <ref> for a picture. The second and third entries in (<ref>) each consist of three paths concatenated, but not all are of equal importance: the paths ϕ, ϕ and θ are all “small” and their purpose is to ensure the start and endpoint of the path are the same, whereas the paths γ|_[0,t] and γ|_[t,1] are “big” and are the ones which are “morally” important. When M is closed, for an appropriate choice of embedding data Q, the coproduct in Definition <ref> is homotopic to the coproduct in Definition <ref>, by applying Lemma <ref>. Δ_unst is a well-defined continuous map. We must check that (<ref>) sends (v, γ, t) to the basepoint whenever t ∈{0,1}, |v|=1 or γ(0) ∈∂ M. Once this is verified, it is clear that (<ref>) defines a continuous map.If t =0 and the incidence condition for Δ_unst holds (i.e. ‖ v-ϕ_1 ∘γ(t)‖≤ε), then the first loop in (<ref>): B(γ(0) γ|_[0,t]⇝γ(0) ϕ⇝ϕ_1 ∘γ(0) θ⇝γ(0)) is a constant loop since the path inside the brackets has length ≤ζ, by (<ref>.<ref>) and (<ref>.<ref>).Similarly if t=1 and the incidence condition holds, the second loop in (<ref>) is constant for the same reason.If |v|=1, the first entry in (<ref>) lies outside of [-1,1]^L, by (<ref>.<ref>), so (<ref>) represents the basepoint.If γ(0) ∈∂ M, then by (<ref>.<ref>), the incidence condition can never hold (noting that ‖ v-γ(0)‖≤ and using the triangle inequality). The (stable) string coproduct is the map of spectra Δ=Δ^Q: M^-TM/∂ M^-TM∧ S^1 →Σ^∞ M/M∧ M/M obtained from the unstable coproduct by applying Lemma <ref> to Δ_unst. The coproduct Δ: M^-TM/∂ M^-TM∧ S^1 →Σ^∞ M/M∧ M/M is independent of choices. Let Q be a fixed choice of embedding data. Note that Δ^Q can be alternatively described on the i^th space: ( M^-TM/∂ M^-TM∧ S^1)_i := M^D(^i-L⊕ν_e)/∂ M^D(^i-L⊕ν_e)∧ S^1 by using in <ref> the stabilized embedding data, st^L,i(Q), as defined in <ref> and noting that for e' (the embedding associated to st^L,i(Q)), there is a natural identification M^D(^i-L⊕ν_e)/∂ M^D(^i-L⊕ν_e)∧ S^1 = M^Dν_e'/∂ M^Dν_e'∧ S^1. 
Indeed, this follows by noting that the structure maps: Σ( M^D(^i-L⊕ν_e)/∂ M^D(^i-L⊕ν_e)∧ S^1) → M^D(^1+i-L⊕ν_e)/∂ M^D(^i+1-L⊕ν_e)∧ S^1 send the [-1,1] variable, corresponding to the first suspension factor on the left hand side, to the first variable in the ^1+i-L on the right hand side, and by the identity in all other factors. Hence the diagram: Σ( M^D(^i-L ⊕ν_e)/∂M^D(^i-L ⊕ν_e) ∧S^1) [rr, "ΣΔ_unst"] [d] ΣΣ^L M/M ∧M/M [d] M^D(^1+i-L ⊕ν_e)/∂M^D(^1+i-L ⊕ν_e) ∧S^1 [rr, "Δ_unst"] Σ^L+1 M/M ∧M/M commutes. Here the vertical maps are the structure maps, and the bottom horizontal map is Δ_unst as in <ref> using the stabilized embedding data. Hence Δ can be defined on the i^th space using the stabilised embedding data. Now, for sufficiently large L the space of choices ED^L(M) is connected. Given embedding data Q, Q' ∈ ED^L(M), there is a unique up to homotopy path from Q to Q', giving a (canonical up to homotopy) equivalence of spectra associated to the embeddings e and e', as well as a homotopy between Δ^Q and Δ^Q'. The conclusion follows. § SPECTRAL CHAS-SULLIVAN MODULES Let M be a compact n-manifold, possibly with corners. The purpose of this section is to construct a generalization of the Chas-Sullivan product to maps of spectra: μ_r: M^-TM/∂ M^-TM∧Σ^∞_+ M →Σ^∞_+ M, and μ_l: Σ^∞_+ M ∧ M^-TM/∂ M^-TM→Σ^∞_+ M. These maps, constructed in the spirit of Cohen and Jones <cit.>, are adapted to the case that M has boundary and are best suited for our purposes. In general, M^-TM/∂ M^-TM is a unital ring spectrum, whose multiplication M^-TM/∂ M^-TM∧ M^-TM/∂ M^-TM→ M^-TM/∂ M^-TM realises the Chas-Sullivan product on homology in the case M is closed, see <cit.>. Although we do not prove this here, μ_l and μ_r equip Σ^∞_+ M with the structure of a bimodule over this ring spectrum. In Section <ref> we prove that our model for these module maps does recover the definition of the Chas-Sullivan product given in <cit.> after passing to homology, up to a sign. Let Q be a choice of embedding data for M. The unstable right product is defined to be the map of spaces: μ_r,unst = μ^Q_r, unst: M^Dν_e/∂ M^Dν_e∧ M_+ →Σ^L_+ M sending ((v, γ), δ) to [ λ (v-ϕ_1 ∘δ(0)),; γ(0) γ⇝γ(0) θ⇝ϕ_1 ∘δ(0) ϕ⇝δ(0) δ⇝δ(0) ϕ⇝ϕ_1 ∘δ(0) θ⇝γ(0) ] if ‖ v - ϕ_1 ∘δ(0) ‖≤ otherwise. The unstable left product is defined to be the map of spaces: μ_l, unst=μ_l, unst^Q: M_+ ∧ M^Dν_e/∂ M^Dν_e→Σ^L_+ M sending (δ, (v, γ)) to [ λ (v-ϕ_1 ∘δ(0)),; γ(0) θ⇝ϕ_1 ∘δ(0) ϕ⇝δ(0) δ⇝δ(0) ϕ⇝ϕ_1 ∘δ(0) θ⇝γ(0) γ⇝γ(0) ] if ‖ v - ϕ_1 ∘δ(0) ‖≤ otherwise. The stable left module product μ_l: Σ^∞_+ M ∧ M^-TM/∂ M^-TM→Σ^∞_+ M and the stable right product μ_r: M^-TM/∂ M^-TM∧Σ^∞_+ M →Σ^∞_+ M, are obtained from the unstable counterparts via Lemma <ref>. Arguing exactly as in Lemmas <ref> and <ref> we see that these are well-defined maps of spectra, independent of choices up to homotopy. Let Σ^∞_+ M ≃Σ^∞_+ M ∨Σ^∞ M/M be the cannonical splitting induced by the inclusion of constant loops. Then μ̃_r, unst is the composition: μ̃_r, unst: M^-TM/∂ M^-TM∧Σ^∞ M/M→ M^-TM/∂ M^-TM∧Σ^∞_+ M Σ^∞_+ M →Σ^∞ M/M where the first and second arrows are the canonical inclusion and projection respectively, induced by (<ref>). § STABILITY Let M be a compact manifold, possibly with corners, and let e ∈Emb(M, ^L). In this section we prove that the string topology operations from Sections <ref> and <ref> are invariant under replacing M with the total space of the disc bundle Dν of the normal bundle ν of e. Let π: ν→ M be the projection, and ι: M ↪ν the inclusion of the zero section. 
In the folllowing lemma we first identify the domains of the coproducts for M and Dν: There is a homotopy equivalence of spectra α: M^-TM/∂ M^-TM→ Dν^-TDν/∂ Dν^-TDν Choose embedding data Q for M extending e. We define a homotopy equivalence of spaces α: M^Dν/∂ M^Dν→ Dν/∂ Dν which induces a homotopy equivalence of spectra as desired, via Lemma <ref>.For (v, γ) ∈ M^Dν/∂ M^Dν, we define α(v,γ) := (γ_v) ∈ Dν/∂ Dν where γ_v is the loop v θ⇝γ(0) γ⇝γ(0) θ⇝ v A homotopy inverse to α is given by sending γ to (γ(0), π∘γ). Also note that since the space of embedding data extending e is connected, α is well-defined up to homotopy. By construction , the map α in Lemma <ref> is compatible with fundamental classes (see Definition <ref>), in the sense that the following diagram commutes up to homotopy: [r, "[M]"] [d, "="] M^-TM/∂ M^-TM[r, "i^M"] [d, "≃"] M^-TM/∂ M^-TM[d, "α"] [r, "[Dν]"] Dν^-TDν/∂ Dν^-TDν[r, "i^Dν"] Dν^-TDν/∂ Dν^-TDν where the i^M and i^Dν are induced by the inclusions of constant loops for M and Dν respectively. In the definition of the coproduct, we do not have to quotient by ∂ M^-TM; one would still arrive at a reasonable operation. However if we do not do this, then Lemma <ref> can't hold: the domains of the two coproducts wouldn't be homotopy equivalent.For example, if ν is a trivial vector bundle of rank r and M has no boundary, the spectra Dν^-TDν and Dν^-TDν/∂ Dν^-TDν differ by a shift of degree r. §.§ Coproduct There is a homotopy commutative diagram of spectra: M^-TM/∂ M^-TM∧ S^1 [r, "Δ"][d, "α∧ Id_S^1"] Σ^∞ M/M∧ M/M Dν^-TDν/∂ Dν^-TDν∧ S^1 [r, "Δ"] Σ^∞ Dν/Dν∧ Dν/Dν[u, "π∧π"] where α and π∧π are homotopy equivalences. Choose Q = (e, ρ^ext, ζ, V, , λ) ∈ ED^L(M) embedding data extending e. We define Q' = (e', ρ'^ext, ζ', V', ', λ') ∈ ED^L(Dν) as follows. Let e'=ρ. Note that since this is a codimension 0 embedding, its normal bundle is trivial. We fix a diffeomorphism (Dν)^ext≅ D_2 ν_e, such that the natural map r': (Dν)^ext→ Dν is given by projection to the sphere bundle on D_2ν∖ Dν, and on Dν_e|_M^ext∖ M is a horizontal lift of the map M^ext→ M. In particular, this implies r ∘ r' = r. Let ρ'^ext = ρ^ext. We set ζ' = ζ and assume we have chosen ζ > 0 small enough that (<ref>.<ref>) holds for Dν.Using the induced metrics on M and ν_e|_M, we let Ṽ be the horizontal lift of V to Dν. Let W be the tautological vector field on Dν (i.e. its value at a point v is v). Now choose μ > 0 and let V' = V-μ W. This is an inwards-pointing vector field on Dν, and for μ > 0 small enough, (<ref>.<ref>) holds.Let ' = and λ' = λ, and we may choose them so that is small enough and λ is large enough that (<ref>.<ref>, <ref>, <ref>, <ref>) all hold.We show that the following diagram commutes up to homotopy, with vertical arrows homotopy equivalences, which will imply the desired result, by Lemmas <ref> and <ref>. M^Dν/∂ M^Dν∧ S^1 [r, "Δ_unst^Q"][d, "α∧ Id_S^1"] Σ^L M/M∧ M/M Dν/∂ Dν∧ S^1 [r, "Δ_unst^Q'"] Σ^L Dν/Dν∧ Dν/Dν[u, "π∧π"] Now consider the incidence conditions for Δ^Q_unst and Δ^Q'_unst∘ (α∧ Id_S^1) respectively, for (v, γ, t) ∈ M^Dν/∂ M^Dν∧ S^1. These are the conditions ‖ v-ϕ_1 ∘γ(t) ‖≤, and ‖ v-ϕ'_1 ∘γ_v(t)‖≤ respectively.If the incidence conditions hold, the two ways around the diagram both have the same final two components.We find a homotopy between these two ways around the diagram by linearly interpolating between V and V'. 
Explicitly, this is the homotopy H: [0,1]_u × M^Dν/∂ M^Dν∧ S^1 →Σ^L M/M∧ M/M defined so that H_u sends (v, γ, t) to [ λ(v-ϕ_1^u ∘γ_uv(t)),; B( γ(0) γ|_[0,t]⇝γ(t) ϕ⇝ϕ_1 ∘γ(t) θ⇝γ(0) ),; B( γ(0) θ⇝ϕ_1 ∘γ(t) ϕ⇝γ(t) γ|_[t,1]⇝γ(0) ); ] if ‖ v - ϕ_1^u ∘γ_uv(t)‖≤ε otherwise. where ϕ_1^u is the time-one flow of the vector field V-μ u W (so in particular ϕ^1_1 = ϕ'_1). Note the only difference from (<ref>) is that ϕ is replaced by ϕ^u (which agrees with ϕ on the zero section M). Arguing as in <ref>, we see that (<ref>) is well-defined. We assume ε > 0 is small enough that d(Sν, D_1/2ν) > ε and d(ϕ_1^1/2(D_1ν), Sν) > ε. Then if |v|=1, the incidence condition can't hold: for u ≤ 1/2 this is because ϕ_1^u∘γ_uv⊆ D_1/2ν, so the first condition rules it out, and for u ≥ 1/2 the second condition rules it out. Inspection of (<ref>) and (<ref>) shows that H_0 and Δ^Q_unst agree, and also that H_1 and (π∧π) ∘Δ^Q'_unst∘ (α∧ Id_S^1) agree. It is clear that π∧π is a homotopy equivalence. Let M and M' be closed manifolds which are simple homotopy equivalent. Then their string coproducts agree. More precisely, there is a homotopy commutative diagram of spectra, with vertical arrows homotopy equivalences: M^-TM/∂ M^-TM∧ S^1 [r, "Δ"] [d, "≃"] Σ^∞ M/M∧ M/M[d, "≃"] M'^-TM'/∂ M'^-TM'∧ S^1 [r, "Δ"] Σ^∞ M'/M'∧ M'/M' This in particular implies homeomorphism invariance of the string coproduct, though this could have been proved in a different way (for example, by giving a more general definition that did not make use of the smooth structure on M). By <cit.>, for L ≫ 0, there are embeddings M,M' ↪^L with diffeomorphic tubular neighbourhoods; the result then follows from Theorem <ref>. Alternatively, this corollary follows from Theorem <ref>, which includes the case when M and M' have boundary, and further without assuming M and M' even have the same dimension. §.§ Product The following lemma is stated for μ_r, but a similar one holds for μ_l. There is a homotopy commutative diagram of spectra: M^-TM/∂ M^-TM∧Σ^∞_+ M [r, "μ_r"] [d, "α∧ι"] Σ^∞_+ M Dν^-TDν/∂ Dν^-TDν∧Σ^∞_+ Dν[r, "μ_r"] Σ^∞_+ Dν[u, "π"] where α is as in Lemma <ref>. We choose embedding data Q for M extending the embedding e, and use this to define embedding data Q' for Dν as in the proof of Theorem <ref>. We take α to be as in Theorem <ref>. Then the following diagram commutes up to homotopy, with vertical arrows homotopy equivalences, which implies the desired result by Lemmas <ref> and <ref>: M^Dν/∂ M^Dν∧ M_+ [r, "μ_r, unst^Q"] [d, "α∧ι"] Σ^L_+ M Dν/∂ Dν∧ Dν[r, "μ_r, unst^Q'"] Σ^L_+ Dν[u, "π"] Commutativity is witnessed by a homotopy constructed similarly to the one in Theorem <ref>, interpolating between the different incidence conditions (and first coordinates) obtained from going the two different ways around (<ref>). § HOMOLOGICAL COMPARISONS: COPRODUCT Let M be a closed oriented manifold of dimension n. In this section we prove that by taking homology and applying the Thom isomorphism, the spectral coproduct defined in Section <ref> recovers the Goresky-Hingston coproduct as defined in <cit.>. Note that the homology coproduct currently existing in the literature only deals with the case that M has no boundary, so that's the one we treat in this section. To do the comparison, in <ref> we give a geometric model for the homology coproduct using transversality. It follows the constructions in <cit.>, which gives a similar description for the Chas-Sullivan product, and <cit.>, which gives a similar description for the coproduct for some homology classes.
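Before recalling the definitions, we record the degree bookkeeping as a consistency check; this is only an unwinding of the gradings involved and uses nothing beyond the Thom and suspension isomorphisms. Since M is closed, the Thom isomorphism for the virtual bundle -TM raises degree by n and smashing with S^1 raises it by one, so that
H_*( M^-TM∧ S^1) ≅ H_*+n( M_+ ∧ S^1) ≅ H_*+n-1( M).
Under these identifications Δ_* becomes an operation of degree 1-n on the homology of the free loop space, with target H̃_*( M/M∧ M/M); this matches the degree and target of the Goresky-Hingston coproduct recalled below.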
§.§ Goresky-Hingston coproduct In this section we recap the definition of the Goresky-Hingston coproduct, following <cit.>. The definition we give here differs only in that, corresponding to the conventions in Section <ref>, we restrict to working with constant speed loops in the domain and codomain. This is unproblematic since the inclusion of constant speed loops into all loops induces an isomorphism in homology. That said, it will still be convenient at one stage to consider the space of free loops of not necessarily constant speed, which we denote by M. Assume M is equipped with a Riemannian metric. Let τ_M ∈ H^n(DTM, STM) be the Thom class determined by the given orientation on M. Let Δ: M ↪ M × M be the diagonal embedding. We choose a tubular neighbourhood of the diagonal Δ(M) as follows: let σ_Δ: DTM → M × M send v ∈ (DTM)_p ↦ (p, exp_p(v)) Let U_M = Im(σ_Δ). This also identifies the normal bundle of the diagonal ν_Δ with TM. We may push forward the Thom class τ_M along the diffeomorphism σ_Δ: (DTM, STM) → (U_M, ∂ U_M) to obtain a cohomology class that we also denote by τ_M ∈ H^n(U_M, ∂ U_M). Let e_I: M× [0,1] → M × M send (γ, s) to (γ(0), γ(s)). Then let = e_I^-1(Δ(M)), which we note contains M×{0,1}, and U_GH = e_I^-1 U_M, a neighbourhood of . Let ∂ U_GH = e_I^-1∂ U_M. Let cut: → M × M be the map which sends (γ, s) to (γ|_[0,s], γ|_[s, 1]) (reparametrised appropriately). We pull back τ_M along the map of pairs e_I: (U_GH, ∂ U_GH) → (U_M, ∂ U_M) to obtain a class that we call τ_GH = e_I^*τ_M ∈ H^n(U_GH, ∂ U_GH). Let R_GH: U_GH→ be the retraction which sends (γ, s) to the concatenation (γ(0) γ|_[0,s]⇝γ(s) θ⇝γ(0) θ⇝γ(s) γ|_[s,1]⇝γ(0), s) We parametrise this loop so that it reaches the middle γ(0) at time s (this is unproblematic since if s=0 the first two paths are constant, and similarly for s=1), and so that the loop has constant speed on both [0,s] and [s, 1] separately. The paths θ are there to force a self-intersection at time s. Also note that here we parametrise loops differently to <cit.>, though this is unproblematic since the space of orientation-preserving homeomorphisms of S^1 of Sobolev class H^1 preserving 0 is contractible. Similarly they also concatenate with geodesic paths rather than the θ; again the resulting maps are homotopic. (<cit.>) The Goresky-Hingston coproduct Δ^GH (written ∨_TH in <cit.>) is defined to be the following composition: H_*( M) H_*+1( M × [0,1], M ×{0,1}) H_*+1-n(U_GH, M ×{0,1}) H_*+1-n(, M ×{0,1}) H_*+1-n( M × M, (M × M) ∪ ( M × M)) As in <cit.>, we work with the definitions of the cup and cap products for (co)homology from <cit.>. §.§ Coproduct via geometric intersections In this section we give a definition of the Goresky-Hingston coproduct using transverse intersections. Let X be a closed oriented manifold and f: X → M. We define Y=Y(f, X) to be the closure in X × [0,1] of the space {(x, t) ∈ X × [0,1] | f(x)(t) = f(x)(0) & t ≠ 0}. f is homotopic to a map f': X → M such that Y(f',X) is a transversally cut out submanifold of X × [0,1], with boundary on X ×{0,1} and intersecting it transversally. We first show that the intersection of Y with X × [0, η) can be made smooth, for some small η > 0. Choose a Riemannian metric on M; this induces one on M × M along with a decomposition T(M × M)|_Δ(M)≅ TΔ(M) ⊕ν_Δ, where ν_Δ is the normal bundle of the diagonal.
Then for η>0 small, there are time-dependent sections {α_t}_t ∈ [0, η)⊆Γ((X×{0}, (ev_0) ∘ f)^*TΔ(M)) and {β_t}_t ∈ [0, η)⊆Γ(X ×{0}, (ev_0) ∘ f)^* ν) such that both are identically 0 for t=0, and such that for (x, t) ∈ X × [0, η), f(x)(t) = exp_f(x)(0)(α_t(x) + β_t(x)) The intersection of Y with X × [0, η) is then {(x, t) | β_t(x) = 0}; this may not be smooth.Now let β' ∈Γ(X ×{0}, (ev_0) ∘ f)^* ν) be a generic section, so its zero set S is transversally cut out. Then we may homotope f in X × [0, η), without changing ev_0 ∘ f, so that for (x,t) ∈ X × [0, η), we have that f(x)(t) = exp_f(x)(0)(tβ'(x)) Then the intersection of Y with X × [0, η) is S × [0, η), which is smooth. We may do the same thing on (1-η, 1], so that Y ∩ (X × ([0, η) ∪ (1-η, 1])) is smooth; generically perturbing f, we may then assume Y is smooth everywhere. We may assume the conclusion of Lemma <ref> holds. Then the normal bundle ν_Y ⊆ X× [0,1] of Y in X × [0,1] is canonically identified with the pullback (ev_I ∘ f)^*ν_Δ≅ (ev_0 ∘ f)^*TM; this is oriented and so we obtain a Thom class τ_Y ⊆ X × [0,1] := (f × Id_[0,1])^*τ_GH = (ev_0 ∘ f)^*τ_M for ν_Y ⊆ X × [0,1].We orient Y so that the natural isomorphism T(X × [0,1])|_Y ≅ν_Y ⊆ X × [0,1]⊕ TY is orientation-preserving (similarly to <cit.>). We use the following result of Jakob <cit.>: Let B be a space and A ⊆ B a subspace, such that the pair (B, A) is homotopy equivalent to a CW pair. Let x ∈ H_*(B, A).Then x = f_* (α∩ [X]), where * X is a compact oriented i-manifold, for some i. * f: X → B is some map sending ∂ X to A. * α∈ H^i-p(X). We call such a triple (X^i,f, α) a geometric representative for x. We define the geometric coproduct to be the map Δ^geo: H_*( M) → H_*+1-n( M × M, (M × M) ∪ ( M× M)) defined as follows.Let x ∈ H_p( M), and let (X^i, f, α) be a geometric representative for x.Assume that Y=Y(f, X) satisfies the conclusion of Lemma <ref>. Let g = cut∘ (f × Id_[0,1]): Y → M × M; this sends ∂ Y to ( M × M) ∪ (M × M).We define Δ^geo(x) = (-1)^n(i-p)g_*(α|_Y ∩ [Y]) It is not immediate that the definition for Δ is independent of choices, since the representation x=f_*(α∩ [M]) is not unique. However its failure to be unique is completely classified by Jakob <cit.>. Using this, one could show independence of choices directly.We do not carry this out. Instead, it follows from Proposition <ref> or Proposition <ref> that Δ^geo is well-defined. §.§ From the Goresky-Hingston to the geometric coproduct In this section, we prove: Δ^geo(x) = Δ^GH(x) for all x ∈ H_*( M). This extends <cit.> in the case x= f_*[X] for f: X → M a map from a closed oriented manifold, and is proved similarly. Let x ∈ H_p( M), and assume x has geometric representative (X^i, f, α). Then x × [0,1] = (f × Id_[0,1])_* (α∩ [X × [0,1]]) ∈ H_p+1( M× [0,1], M ×{0,1}) (f × Id_[0,1])_*(α∩ [X × [0,1]]) = (f × Id_[0,1])_* (α∩ ([X] × [0,1]) ) = (f × Id_[0,1])_* ( (α∩ [X]) × (1 ∩ [0,1])) = f_*(α∩ [X]) × (Id_[0,1])_*[0,1] = x × [0,1] Let x ∈ H_p( M), and assume x has geometric representative (X^i, f, α), such that Y = Y(f, X) satisfies the conclusion of Lemma <ref>. Then τ_GH∩ (x × [0,1]) = (-1)^n(i-p)(f × Id_[0,1])_* (α|_Y ∩ [Y]) noting that f × Id_[0,1] sends Y to and sends ∂ Y to M ×{0,1}. 
τ_GH∩ (x × [0,1]) = τ_GH∩(f × Id_[0,1])_* (α∩ [X × [0,1]]) = (f × Id_[0,1])_* ((f × Id_[0,1])^* τ_GH∩(α∩[X × [0,1]])) = (f × Id_[0,1])_* (((f × Id_[0,1])^* τ_GH∪α) ∩[X × [0,1]]) = (f × Id_[0,1])_* ((τ_Y ⊆ X × [0,1]∪α) ∩ [X × [0,1]]) = (-1)^n(i-p) (f× Id_[0,1])_* (α∩ (τ_Y ⊆ X × [0,1]∩ [X × [0,1]]) = (-1)^n(i-p) (f × Id_[0,1])_* (α|_Y ∩ [Y]) The first equality is by Lemma <ref>, the second is by <cit.>, the third by <cit.>, the fourth by (<ref>), the fifth by <cit.> and the sixth by Poincaré duality (see e.g. <cit.>). Let x ∈ H_*( M), and (X^i,f, α) a geometric representative for x. Note that f × Id_[0,1] sends Y to ⊆ U_GH, so R_GH acts on it by the identity. Δ^GH(x) = (cut∘ R_GH)_* (τ_GH∩ [x × [0,1]]) = (-1)^n(i-p) (cut∘ (f × Id_[0,1]))_*(α|_Y ∩ [Y]) = Δ^geo(x) where the second equality is by Lemma <ref>, and the others are by definition. §.§ From the geometric to the spectral coproduct In this section, we prove that taking homology and applying the Thom isomorphism, the spectral coproduct from Section <ref> agrees with the geometric coproduct, up to sign. More precisely: The following diagram commutes up to a sign of (-1)^n: H_*( M^-TM∧ S^1) [r, "Δ_*"] [d, "Thom∧ Id_S^1"] H_*(Σ^∞ M/M∧ M/M) [d, "="] H_*+n( M_+ ∧ S^1) H̃_*( M/M∧ M/M) H_*+n-1( M) [u, "·×[0,1]"] [ur, "Δ^geo"] By Proposition <ref>, it follows that Proposition <ref> also holds with Δ^geo replaced with Δ^GH. Choose an embedding e: M ↪^L for some L ≫ 0 and embedding data for M extending e. Using the identifications from Definitions <ref>, <ref>, we see that it suffices to show that the following diagram commutes: H̃_*( M^Dν_e∧ S^1 ) [r, "(Δ_unst)_*"] [d, "τ_ν_e∩·"] H̃_*(Σ^L M/M∧ M/M) [d, "Φ", shift left=5] H̃_*+n-L( M_+ ∧ S^1) [u, "Θ", shift left=5] H̃_*-L( M/M∧ M/M) [u, "[-1,1]^L ×·"] H_*+n-L-1( M) [u, "·×[0,1]"] [ur, "(-1)^n ·Δ^geo"] where Θ and Φ are which we define shortly, inverse to the corresponding maps in the reverse direction.. Note all vertical maps in (<ref>) are isomorphisms. Let x ∈H̃_*-L( M/M∧ M/M) have geometric representative (X^i, f, α), with f: X → M × M sending ∂ X to ( M × M) ∪ (M × M). Then [-1,1]^L × x = (-1)^L(i-p) (Id_[-1,1]^L× f)_* (α∩ [[-1,1]^L × X]) [-1,1]^L × x = [-1,1]^L × f_*(α∩ [X]) = (Id_[-1,1]^L× f)_*([-1,1]^L × (α∩ [X])) = (-1)^L(i-p) (Id_[-1,1]^L× f)_*(α∩ [[-1,1]^L × X]) where the final equality is by <cit.>. We now define the map Φ from (<ref>). Let x ∈H̃_p(Σ^L M/M∧ M/M), and let (X^i, f, α) be a geometric representative for x, where f: X → [-1,1]^L × M × M sends ∂ X to (∂ [-1,1]^L × M × M) ∪ [-1,1]^L × (( M × M) ∪ (M × M)) Generically perturbing f if necessary, we may assume that f is transverse to {0}× M × M. Let Z = f^-1({0}× M × M).Z is a smooth submanifold of X with normal bundle ν_Z ⊆ X canonically identified with ^L. We orient Z so that the canonical identification TX|_Z ≅^L ⊕ TZ is orientation-preserving. Note that f|_Z sends Z to M × M and ∂ Z to ( M × M) ∪ (M × M). We now define Φ(x) := (-1)^L(i-p) (f|_Z)_*(α|_Z ∩ [Z]) It follows from the following lemma that the definition for Φ(x) is independent of the choice of geometric representative of x. Φ is an inverse to [-1,1]^L ×·. Let x ∈H̃_p( M/M∧ M/M), and let (X^i, f, α) be a geometric representative, where f: X → M × M sends ∂ X to ( M × M) ∪ (M × M). By Lemma <ref>, we have that [-1,1]^L × x = (-1)^L(i-p) (Id_[-1,1]^L× f)_*(α∩[[-1,1]^L × X]) Applying Φ to the right hand side gives a geometric representative with Z={0}× X ≅ X equipped with the same orientation, so we find that Φ([-1,1]^L× x) = x. We now define the map Θ from (<ref>). 
Let x ∈H̃_p( M_+ ∧ S^1) and let (X^i, f, α) be a geometric representative, with f: X → M × [0,1] sending ∂ X to M ×{0,1}. Let X̃ = Tot(f^*Dν_e → X), and let f̃: X̃→Tot(Dν_e → M) × [0,1] be the map induced by f. X̃ is naturally a smooth manifold of dimension i+L-n, and there is a canonical identification TX̃≅ f^*ν_e ⊕ TX. We orient X̃ so that this is orientation-preserving. We now define Θ(x) := (-1)^(L-n)(i-p)f̃_*(α∩ [X̃]) It follows from the following lemma that the definition for Θ(x) is independent of the choice of geometric representative of x. Θ is an inverse to τ_ν_e∩·. Let x, as well as a geometric representative (X^i, f, α) for x, be as above. Then τ_ν_e∩Θ(x) = (-1)^(L-n)(i-p)τ_ν_e∩f̃_* (α∩ [X̃]) = (-1)^(L-n)(i-p)f̃_* ( (f̃^* τ_ν_e∪α) ∩ [X̃]) = f̃_*(α∩ (f̃^*τ_ν_e∩ [X̃])) = (f̃|_X)_*(α∩ [X]) = x noting that the intersection of X̃ with the zero section is exactly X, with the same orientation. Let x ∈ H_p+n-L-1( M). We show that going both ways around (<ref>) to the bottom right gives the same result when applied to x. Let (X^i, f, α) be a geometric representative for x; we may assume the conclusion of Lemma <ref> holds. Let Y= Y(f, X), oriented as in (<ref>). Then by definition, Δ^geo(x) = (-1)^n(i-p-n+L+1) g_*(α|_Y ∩ [Y]) where g = cut∘ (f × Id_[0,1]). By Lemma <ref>, x × [0,1] = (f × Id_[0,1])_*(α∩ [X × [0,1]]) Let X̃ = Tot(f^* Dν_e → X), and f̃: X̃→Tot(Dν_e → M) the natural map. We orient X̃ so that the natural identification TX̃≅ f^*ν_e ⊕ TX is orientation-preserving. Then Θ(x × [0,1]) = (-1)^(L-n)(i+1-p-n+L) (f̃× Id_[0,1])_* (α∩ [X̃× [0,1]]) and so (Δ_unst)_*(Θ(x × [0,1])) = (-1)^(L-n)(i+1-p-n+L) (Δ_unst∘ (f̃× Id_[0,1]))_* (α∩ [X̃× [0,1]]) We next compute Φ(<ref>). Define Y' := (Δ_unst∘ (f̃× Id_[0,1]))^-1({0}× M × M) ⊆X̃× [0,1] Opening up (<ref>), we see that Y' = {(v,x, t) | x∈ X, v ∈ (Dν_e)_f(x), t ∈ [0,1], v=0, f(x)(t) = f(x)(0)} which is canonically identified with Y as smooth manifolds. Examining the two maps Y, Y' → M × M, we see that Φ((Δ_unst)_*(Θ(x × [0,1]))) = (-1)^(L-n)(i+1-p-n+L)(-1)^L(L-n+i+1-p) g_*(α|_Y'∩ [Y']) = (-1)^n(i-p-n+L+1) g_*(α|_Y'∩ [Y']) Note the sign here agrees with that of (<ref>). It remains to compare the orientations on Y' and Y. Consider the following diagram of isomorphisms of vector bundles over Y' ≅ Y (all pulled back appropriately): ν_e ⊕ TM ⊕ TY' [r, "Y' ≅ Y"] [d, "- ⊕ Id_TY'"] ν_e ⊕ TM ⊕ TY [d, "(<ref>),(<ref>)"] ^L ⊕ TY' [d, "(<ref>)"] ν_e ⊕ T(X × [0,1]) [d, "="] T(X̃× [0,1]) [r, "="] ν_e ⊕ TX ⊕ where the isomorphism -: ν_e ⊕ TM →^L sends (u,v) to u-v. Inspecting (<ref>) and (<ref>) shows that the diagram commutes. All isomorphisms except possibly the top horizontal and top left vertical ones are orientation-preserving; the top left vertical one preserves orientation up to (-1)^n (since +: ν_e ⊕ TM →^L is orientation-preserving and TM has rank n), so the diffeomorphism Y' ≅ Y is orientation-preserving up to (-1)^n. Therefore [Y] = (-1)^n [Y'] Comparing this with (<ref>) and (<ref>), the result follows. § HOMOLOGICAL COMPARISONS: PRODUCT In this section we prove that the spectral product we work with in Section <ref> recovers the Chas-Sullivan product by taking homology and applying the Thom isomorphism. A similar result is shown in <cit.>; however, here we work with different sign conventions/twists. Let M be a closed oriented manifold of dimension n. As in Section <ref>, similar methods can be applied to the case where M has boundary. §.§ Chas-Sullivan product In this section we recap the definition of the Chas-Sullivan product, following <cit.>.
Once again we work implicitly with constant-speed loops, but this does not affect the homology-level product operation.Assume M is equipped with a Riemannian metric, and let τ_M, Δ, σ_M, U_M all be as in Section <ref>.We define U_CS = (ev_0 × ev_0)^-1U_M ⊆ M × M, and U_CS = (ev_0 × ev_0)^-1∂ U_M. We pull back τ_M along the map of pairs ev_0 × ev_0: (U_CS, ∂ U_CS) → (U_M, ∂ U_M) to obtain a class τ_CS = (ev_0 × ev_0)^*τ_M ∈ H^n(U_CS, ∂ U_CS).Let R_CS: U_CS→ M ×_M M be the retraction which sends (γ, δ) to (γ, γ(0) θ⇝δ(0) δ⇝δ(0) θ⇝γ(0)) and let concat: M ×_M M → M send (γ, δ) to the concatenation (γ(0) γ⇝γ(0) = δ(0) δ⇝δ(0)). (<cit.>) The Chas-Sullivan product μ^CS (written ∧_TH in <cit.>) is defined to be the following composition: H_*( M) ⊗ H_*( M) H_*( M × M) H_*-n(U_CS) H_*-n( M) §.§ Product via geometric intersections In this section we recap an alternative definition of the Chas-Sullivan product, using transverse intersections, following <cit.> (though with slightly different sign conventions). We define the geometric product to be the map μ^geo: H_*( M) ⊗ H_*( M) → H_*-n( M) defined as follows.Let x ∈ H_p( M) and y ∈ H_q( M). Let (X^i, f, α) and (Y^j, g, β) be geometric representatives for x and y respectively. Generically perturbing if necessary, we may assume that the maps ev_0 ∘ f: X → M and ev_0 ∘ g: Y → M are transverse. We define Z to be the space {(a,b) ∈ X × Y | f(a)(0)=g(b)(0)} which is a smooth manifold of dimension i+j-n by assumption. We orient Z so that the natural isomorphism ν_M ⊕ TZ ≅ TX ⊕ TY is orientation-preserving. Let h: Z → M send (a, b) to concat(f(a), g(b)).We define μ^geo(x,y) = (-1)^i(j-q) + n(i+j-p-q) h_*((α∪β) ∩ [Z]) where we pull α and β back to Z in the natural way. §.§ From the Chas-Sullivan to the geometric product In this section, we prove: μ^CS(x, y) = μ^geo(x,y) for all x ∈ H_p( M), y ∈ H_q( M). This extends <cit.> as well as <cit.>, with a similar proof. Let (X^i, f, α) and (Y^j, g, β) be geometric representatives for x and y respectively. Then τ_CS∩ (x × y) = τ_CS∩(f_*(α∩ [X]) × g_*(β∩ [Y])) = (-1)^i(j-q)τ_CS∩( (f× g)_* ((α∪β) ∩ [X × Y]) ) = (-1)^i(j-q) + n(i+j-p-q) (f× g)_*( (α∪β) ∩( (f× g)^* τ_CS∩ [X × Y])) = (-1)^i(j-q)+n(i+j-p-q) (f × g)_* ( (α∪β) ∩ [Z]) = μ^geo(x,y) §.§ From the geometric to the spectral product In this section, we prove that taking homology and applying the Thom isomorphism, the spectral products (on the left or right) from Section <ref> agree with the geometric product, up to sign. More precisely: The following diagrams commute up to a sign of (-1)^n: H_*( M^-TM∧Σ^∞_+ M) [d, "Thom"] [r, "(μ_r)_*"] H_*(Σ^∞_+ M) [d, "="] H_*(Σ^∞_+ M ∧ M^-TM) [d, "Thom"] [r, "(μ_l)_*"] H_*(Σ^∞_+ M) [d, "="] H_*+n( M × M) H_*( M) H_*+n( M × M) H_*( M) H_*( M ) ⊗ H_*+n( M) [u, "×"] [ur, "μ^geo"] H_*( M ) ⊗ H_*+n( M) [u, "×"] [ur, "μ^geo"] By Proposition <ref>, it follows that Proposition <ref> also holds with μ^geo replaced with μ^CS. We give the proof for the right-hand diagram; the left-hand case is identical. Choose an embedding e: M ↪^L and embedding data for M extending e; since M is closed, we may assume the isotopy {ϕ_s}_s is constant. 
Using the identifications from Definitions <ref>, <ref> and <ref> (choosing sequences (u_i)_i and (v_i)_i with u_L=L and v_L=0), we see that it suffices to show that the following diagram commutes: H̃_r( M^Dν_e∧ M_+) [d, "τ_ν_e∩·"] [r, "(μ_r,unst)_*"] H̃_r (Σ^L_+ M) [d, "Φ", shift left=5] H_r+n-L( M × M) [u, "Θ'", shift left=5] H_r-L( M) [u, "[-1,1]^L ×·"] H_p( M ) ⊗ H_q( M ) [u, "×"] [ur, "(-1)^nμ^geo"] where p+q=r+n-L, Φ is as in (<ref>) and Θ' is defined analogously to (<ref>). Let x ∈ H_p( M) and y ∈ H_q( M); let (X^i, f, α) and (Y^j, g, β) be geometric representatives for x,y respectively. x× y = (-1)^i(j-q) (f × g)_*((α∪β) ∩ [X × Y]) x × y = f_*(α∩ [X]) × g_*(β∩ [Y]) = (f × g)_*((α∩ [X]) × (β∩ [Y])) = (-1)^i(j-q) (f × g)_*( (α∪β) ∩ [X × Y]) where the final equality is by <cit.>. We first compute Θ'(x × y): Θ'(x × y) = (-1)^i(j-q)Θ'( (f × g)_* ( (α∪β) ∩ [X × Y]) ) = (-1)^i(j-q) (-1)^(L-n)(i+j-p-q) (f̃× g)_*((α∪β) ∩ [X̃× Y]) where we define X̃ = Tot(f^* Dν_e → X) and f̃: X̃→Tot(Dν_e → M) is the natural map. The first equality is by Lemma <ref> and the second by definition of Θ'. Therefore (μ_r, unst)_*(Θ'(x × y)) = (-1)^i(j-q)+(L-n)(i+j-p-q)(μ_r,unst∘ (f̃× g) )_* ((α∪β) ∩ [X̃× Y]) Similarly to the proof of Proposition <ref>, we see that Φ((μ_r, unst)_*(Θ'(x × y))) = (-1)^L(i+j+L-n-r) + i(j-q) + (L-n)(i+j-p-q) h_*((α∪β) ∩ [Z']) =(-1)^i(j-q) + n(i+j-p-q) h_*((α∪β) ∩ [Z']) where Z' = (μ_r,unst∘ (f̃× g))^-1({0}× M). Z' is transversally cut out by assumption, and we have a canonical identification Z ≅ Z' as smooth manifolds. Since the sign here agrees with that of (<ref>), it suffices to compare the orientations on Z and Z'; by the same argument as in the proof of Proposition <ref>, their orientations differ by a factor of (-1)^n. Therefore [Z]= (-1)^n[Z']; the result follows. § TRACES AND TORSION Given a homotopy equivalence f: N → Z one could ask whether f is a simple homotopy equivalence. A related question arises when classifying diffeomorphism classes of higher dimensional h-cobordisms. Namely, one could ask whether an h cobordism is smoothly trivial. The two questions are sufficiently related, and in order to prove the main results of this paper we convert the first question into the second. That is, to f we associate a codimension 0 embedding of manifolds with boundary, P ⊂ Q, so that the complement of P in Q is an h-cobordism. We then study the failure of f to be a simple homotopy equivalence by considering instead the triviality of W. In particular, we will study the whitehead torsion, τ (W), and its image under various trace maps. So let W be a smooth h-cobordism of dimension n ≥ 6; we assume its boundary is partitioned into two components M and N. In <cit.> Geoghegan and Nicas study the obstruction to deforming W to M in a fixed point free manner. They do so by considering the fixed point set of a strong deformation retraction F: W× I → W. To such a deformation retraction they associate an algebraic 1-parameter Reidemeister trace: R(W) ∈ HH_1( [π_1 M])/ [π_1 M], and prove the following: Let M be a smooth compact manifold of dimension n ≥ 5, and (̋M) the space of h-cobordisms on M. Suppose π_2(M) = 0. Then the following diagram commutes: K_1 ([π_1(M)]) [r] [d, "tr"] Wh(π_1(M)) ≅π_0(̋M) [d, "-R(W)"] HH_1( [π_1 M])[r] HH_1( [π_1 M])/ [π_1 M]. Here the equivalence Wh(π_1(M)) ≅π_0(̋M) is given by the s-cobordism theorem; tr is the Dennis trace map, and the horizontal maps are the natural quotient maps. 
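As a point of reference, and to indicate that the groups appearing above can be nontrivial, we recall a classical example (it plays no role in what follows). For π_1(M) ≅ ℤ/5 one has Wh(ℤ/5) ≅ ℤ, and a nontrivial element is represented by the unit t + t^-1 - 1 ∈ ℤ[ℤ/5], where t denotes a generator; indeed
(t + t^-1 - 1)(t^2 + t^-2 - 1) = 1 in ℤ[ℤ/5].
By the s-cobordism theorem, any closed manifold of dimension at least 5 with this fundamental group — for instance a 7-dimensional lens space, which moreover satisfies π_2 = 0 — therefore carries h-cobordisms which are not products.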
In order to prove the main results of this paper we need to consider other geometric incarnations of the invariant R(W). In <cit.> Geoghegan and Nicas further define a geometric 1 parameter Reidemeister trace, Θ(W) ∈ H_1(E_F), where E_F is the twisted free loop space defined by: E_F := {γ: I → W × I × W | γ(0) = (x,t,x) and γ(1) = (y, s, F_s(y)) for some x,y,s,t}. They construct a map: Ψ: H_1(E_F) → HH_1( [π_1 M]) and prove: Ψ(Θ(W)) =-R(W). Moreover, when π_2(M) = 0, Θ(W) vanishes if and only if R(W) vanishes. In this section we construct two other variations of the 1 parameter Reidemeister trace. In <ref> we define a framed bordism class [T] ∈Ω_1( W, W), which is used in the statement of our main <ref>. Using the homotopy equivalence r: W → M, this construction gives a well defined map: T_*: π_0 (̋M) →Ω_1( M, M). Combining <ref>, <ref> and <ref> we obtain: Suppose π_2(M) = 0. Then the following diagram commutes: K_1 ([π_1(M)]) [r] [d, "tr"] π_0(̋M) [d, "h_* ∘ T_*"] HH_1( [π_1 M])[r] H_1( M, M). Here h_*: Ω^fr_1( M, M ) → H_1( M, M ) is the Hurewicz homomorphism. The bottom horizontal arrow is the composition: HH_1( [π_1 M]) H_1(E_F) H_1( M) H_1( M, M), μ is given in <ref>, r: W → M is the retraction, Ψ is the isomorphism of <cit.>[6A], and q is the projection map. In <ref> we construct the 1 parameter Reidemeister trace: Tr(W): Σ𝕊→Σ^∞ W/W on spectra. This definition adapts a homotopical construction of the Reidemeister trace to the 1-parameter and relative settings, see for example <cit.>. The invariant Tr(W) is shown to agree with [T] in <ref>. It is also used as a prototype for the definition of the operations: Ξ_l, Ξ_r: Σ W^-TW/∂ W^-TW→Σ^∞ W/W∧ W/W constructed in Section <ref>. The maps Ξ_l and Ξ_r we used in <ref>, and morally speaking correspond to taking the Chas-Sullivan product by the class [T], as we prove in <ref>. §.§ The framed bordism invariant §.§.§ The definition of [T] For the rest of this section, we assume that W is embedded as a codimension 0 submanifold of ^L. Define subsets T̃, T^∘, T of W × [0,1] as follows. T̃ := { (x,t) ∈ W× [0,1] | F_t(x) = x} T^∘ := {(x,t) ∈T̃ | t ≠ 0 and x ∉ M}, and let T = T̅^∘ be the closure of T^∘ in W × [0,1], which we note is compact. There is a small perturbation of F such that T is a smooth 1-dimensional submanifold of [0,1]× W, possibly with boundary which must lie on {0}× W. Note this lemma cannot hold for T̃ instead of T, since T̃ always contains (W ×{0}) ∪ (M × [0,1]). If we could perturb F arbitrarily, standard transversality results would imply the lemma. Instead, F is constrained along M × [0,1], W ×{1} and W ×{0}. We first argue that the lemma holds in some neighbourhood of this region. T̃ does not intersect W ×{1} except along M ×{1}. We may perturb F such that for all x sufficiently close to M, the path {F_t(x)}_t ∈ [0,1] is the embedded geodesic to the closest point in M. Now any point in (x, t) ∈T̃ such that x is near to M must have x ∈ M.It follows that now T can only intersect (W ×{0, 1}) ∪ (M × [0,1]) along W ×{0}.To ensure T is smooth near W ×{0}, we consider the vector field V on W, whose value at p ∈ W is d/ds|_s=0 F_s(p). This is constrained so that it points inwards along N and vanishes along M. We may generically perturb F such that V intersects the zero section transversally away from M. We may further perturb F so that for η > 0 small, for all p ∈ W, the path {F_t}_t ∈ [0,η) is a geodesic. 
Now the intersection of T with W × [0,η) agrees with S × [0,η).Therefore T is smooth near W ×{0}; perturbing generically away from the region on which F is constrained allows us to obtain the lemma. Let i: T ↪ W × [0,1] be the natural inclusion, and denote the normal bundle by ν_i. Let ψ: ν_i →^L be the isomorphism of vector bundles sending (v,t) in the fibre of ν_i over (x,s) to ψ(v,t) := v- dF_(x,s)(v,t). We consider the natural map f: T → W sending (x,t) to the loop F|_[0,t] from x to itself. Note that ψ equips T with a stable framing which therefore defines a class [T] in Ω^fr_1( W,W). The space of strong deformation retractions is contractible. The class [T] ∈Ω_1^fr( W,W) is independent of choices. Let F' be another choice of strong deformation retraction as above. Since the space of such deformation retractions is contractible, there is a 1-parameter family of strong deformation retractions {F^τ: W × I → W}_τ∈ [0,1] such that F^0 = F and F^1 = F'. Generically perturbing {F^τ} relative to {τ∈{0,1}} similarly to Lemma <ref>, and letting S be the closure of S^∘ := {(x, t, τ) ∈ W × [0,1]^2 | F^τ_t(x) = x, t≠ 0, x ∉ M} provides the desired bordism; this can be equipped with a stable framing similarly to (<ref>). The following classes determined by [T] are used in Theorem <ref>: We define classes [T_diag], [T_diag] ∈Ω^fr_1( (W × W), (W × W)) to be the images of [T] under the antidiagonal maps sending γ to (γ, γ) and (γ, γ) respectively. §.§.§ Definition of Θ(W) In this subsection we recall the definition of Θ (W) ∈ H_1(E_F) appearing in <cit.>. Let (x,t),(y,s) ∈ W × [0,1] be two fixed points of F. We say that (x,t) and (y,s) are in the same fixed point set if there is some path γ in W × I from x to y, such that the loop (pr_1∘γ) ⋆ (F∘γ)^-1 is homotopically trivial (where pr_1 projects to the first factor of W × [0,1]). This defines an equivalence relation on the set of fixed points. The manifold T, constructed in <ref>, consists of a union of circles and arcs. Note that fixed points in the same path component of T are in the same fixed point class. A geometric intersection invariant in <cit.> is defined using the submanifold A ⊂ T consisting only of the union of those circles of intersections not in the same fixed point class as the fixed points of F_0 and F_1. In <cit.> an orientation of A is defined as follows: to an isolated fixed point x of F_t, one associates an index i(F_t, x), which is the degree of the map: id -F_t: B_ϵ(x) ∖{x}→^L ∖{0}. Here B_ϵ is a small neighborhood of x in W ×{t} not containing any other fixed point of F_t. The transversality hypothesis implies that generically i(F_t,x) = ± 1, and both values occur on each loop. The orientation on each circle of fixed points, S, is given by picking any (x,t) for which i(F_t, x) =1, and orientating S near (x,t) in the direction of increasing time. Let E_F be the twisted loop space defined in <ref>. Then A is a closed oriented 1-manifold which includes into E_F by constant loops and hence defines a class which we define Θ(W) ∈ H_1(E_F) to be. §.§.§ Relating [T] and Θ(W) To compare Θ(W) and [T] we need to consider the following. Firstly, we need to relate the target of the invariants; the definition of [T] involves the free loop space W while Θ(W) concerns the twisted loop space E_F. Moreover, Θ (W) consists of a choice of orientation and defines a class in H_1(E_F), while [T] consists of a choice of framing, and defines a class in Ω_1^fr( W,W). 
Secondly, Θ(W) is defined by manually discarding circles of intersections in the fixed point class of F_0 and F_1. The analogous procedure in the definition of [T] corresponds to modding out W by constant loops. We show that if π_2(W) = 0, after passing to homology, the two invariants agree. For this to make sense, we must first relate the groups in which these invariants live. There exists a homotopy equivalence μ: E_F → W. We will construct μ as the composition of several homotopy equivalences. Let Ẽ_F be the pullback in the diagram: Ẽ_F [r] [d] ( W)×(I) [d] W×I ×I [r] W×I ×I ×W where the bottom horizontal map is given by (w,t,s) ↦ (w, t,s, F_s(w)), the right vertical map is given by (α, β) ↦ (α(0), β(0), β(1), α(1)), and · denotes the path space. Then Ẽ_F consists of pairs (α, β) ∈(W )×(I) satisfying F_β(1)(α(0)) = α(1) Let γ be a path in E_F, so γ(0) = (x,t,x) and γ(1) = (y, s, F_s(y)). We can decompose γ into components (γ_1, γ_I, γ_2) by projecting into the first, second, and third factors in W × I × W. So that γ_1 is a path from x to y, γ_2 is a path from from x to F_s(y), and γ_I is a path in I from t to s. Define Γ: E_F →Ẽ_F by sending γ to (y γ_1⇝ x γ_2⇝ F_s(y), γ_I) where we choose the concatenation of y γ_1⇝ x γ_2⇝ F_s(y) to happen at time equals to 1 /2. Then Γ is a homotopy equivalence admitting an inverse sending (α, β) to (α_[0,1/2], β, α_[1/2, 1]) (and appropriately rescaling). Note that since (I) is contractible, Ẽ_F is further homotopy equivalent to E̅_F, the pullback of the diagram: E̅_F [r] [d] (W) [d] W×I [r] W ×W where the right vertical map is given by γ→ ( γ(0), γ(1)), and the bottom horizontal map is given by (w,s) → (w, F_s(w)). Then E̅_F consists of pairs (α, s) where α: [0,1] → W is such that α(1) =F_s(α(0)). The homotopy equivalence is given by the forgetful map sending (α, β) → (α, β(1)). We further define Γ̅: E̅_F → W × I by sending (α, s) to: (α(0) α⇝ F_s(α(0)) F|_[0,s]⇝α(0),s). Then Γ̅ is a homotopy equivalence with inverse given by (δ, s) ↦ (δ(0) δ⇝δ(0) F|_[0,s]⇝ F_s(δ(0)), s). Lastly, note that the forgetful map W × I → W is a homotopy equivalence. The homotopy equivalence μ is given by the composition of Γ, Γ̅ and the forgetful map. The homotopy equivalence μ from <ref> induces a map: μ_*: H_1(E_F) → H_1( W), which we can compose with the quotient map: π: H_1( W) → H_1( W,W). To complete the comparison of [T] and Θ (W), we will need to consider the Hurewicz map h_*: Ω^fr_1( W, W ) → H_1( W, W ). In order to define h_*, we must fix conventions for how a stable framing on a manifold induces an orientation. Given a stably framed manifold, one consistent choice of orientation is given as follows. Let [Y] ∈Ω^fr_1( W) be represented by f:Y → W; choose an embedding e: Y →^L+1, with normal bundle ν_Y, and framing ϕ: Y×^L →ν_Y representing the stable framing on Y. Let {v_0, v_1, ... , v_L} be the standard basis of ^L+1 and { v_1, ... , v_L} a basis for ^L. For y ∈ Y, there exists a unique vector v_y ∈ T_y Y ⊂^L+1 such that the matrix (ϕ(y,v_1) , ..., ϕ(y, v_L), v_y) has determinant 1. We orient Y so that the positive orientation points in the direction of v_y. Suppose π_2(W) = 0. Then π∘μ_*(Θ(W)) = h_*([T]). Both invariants are defined starting with the manifold T. Since in the definition of Θ (W) we discard the arcs and circles in T∖ A, we need to consider their contribution to h_*[T]. Note that for (x,t) ∈ T∖ A, the loop F|_[0,t](x) is contractible. Let _0 W be the path component of W consisting of contractible loops. 
When π_2(W) =0, π_1( _0 W) is isomorphic to π_1(W) (by the long exact sequence associated to the fibration Ω_0 W →_0 W → W) and is generated by constant loops. Hence π_1(_0 W, W)=0, and the contributions of T∖ A die in H_1( W, W). By chasing the homotopy equivalence μ we see that μ sends the constant loop at (y, s, F_s(y)), associated to a fixed point (y,s), to the loop F_[0,s] based at y. Hence, up to a question of orientation, we have the equivalence π∘μ_*(Θ(W)) = h_*([T]). So the last thing to consider is the equivalence of orientations. Let x be a fixed point of F_t, such that i(F_t,x) =1. Let (v_1, ..., v_L) be the standard basis for ^L, and (v_0, v_1, ..., v_L) be the standard basis for ^L⊕. This choice of basis induces a trivialization of T(W× [0,1]) ≅^L ⊕. Recall the map Id -F_t: B_ϵ(x) ∖{x}→^L ∖{0} defining the index i(F_t,x). Note that Id-F_t extends to B_ϵ and we denote its differential at x by ϕ. For generic (x, t), ϕ is a linear isomorphism; we may assume this holds. Note if the degree of Id - F_t equals to one, then ϕ is orientation preserving, and hence has positive determinant. Let ψ: ^L ⊕→^L be the map sending (v,s) ∈ T(W × [0,1]) in the fibre over (x,t) to ψ(v,s) := v- dF_(x,t)(v,s). Note that ψ≅ TT. Let ψ̃:^L ⊕→^L ⊕ be the map sending (v,s) in the fibre over (x,t) to ψ̃(v,s) := (s,v- dF_(x,t)(v,s)). Then ψ̃^-1 defines an isomorphism ^L ⊕→^L ⊕ sending the final factor to TT (by the implicit function theorem). The matrix of ψ̃ is given by [ ϕ *; 0 1 ] and hence has positive determinant, and the matrix of ψ̃^-1 is given by: [ ϕ^-1 *; 0 1 ] where the vector τ := [ *; 1 ] = ψ̃^-1[ 0; 1 ]∈ TT_x,t is oriented in the direction of increasing time (because its first coordinate is positive). The first L columns of ψ̃^-1 don't necessarily give a framing of ν_T, but by performing column operations (specifically those which don't change the sign of the determinant), i.e. projecting off of the subspace spanned by τ, we arrive at a matrix (χ, τ) which has positive determinant, and is such that: ^L ^L ⊕^L is the identity and hence induces our choice of framing of T. Note that after possibly rescaling by a positive number, τ defines an orientation of TT, consistent with the Hurewicz isomorphism defined above. Since τ is oriented in the direction of increasing time, it follows that the two conventions for orienting T agree. §.§ The Reidemeister trace of an h-cobordism Let W be a smooth h-cobordism of dimension n. ∂ W consists of two boundary components, which we call M and N. In this section we define the Reidemeister trace of W as a map of spectra: Tr: Σ^∞ S^1→Σ^∞ W/W and show that it is related to the framed bordism invariant [T] by the Pontrjagin-Thom isomorphism in Section <ref>.We will need to make some choices, as in the definition of the coproduct. §.§.§ Choices We choose an extension W^ext:= M× [0,1]∪_M W ∪_N × N × [0,1] of W as in <ref>. Trace data for W is a tuple R̅ = ( e, ρ^ext, ζ, V, ϵ, λ, F) consisting of: * A smooth embedding e: W^ext↪^L. We write ν_e for the normal bundle of this embedding, defined to be the orthogonal complement of TW^ext. Note that e canonically equips both TW^ext and ν_e with metrics, by pulling back the Euclidean metric on ^L. Let π_e: ν_e → W^ext be the projection map. * A tubular neighbourhood ρ^ext: D_2 ν_e ↪^L. More precisely, ρ^ext is a smooth embedding, restricting to e on the zero-section. We let Ũ be the image of ρ^ext. 
We let ρ be the restriction of ρ^ext to the unit disc bundle of ν_e over W, and U the image of ρ; this lies in the interior of Ũ. In symbols: ρ :=ρ^ext|_D_1 ν_e|_W, U:= Im(ρ) and Ũ = Im(ρ^ext). From the choices above we obtain a retraction r: Ũ→ W defined to be the composition of (ρ^ext)^-1, the projection to W^ext, and the natural map W^ext→ W. * ζ > 0 such that ζ is less than half of the injectivity radius of the induced metric on M. * A vector field V on W^ext such that: * V|_W points strictly inwards at N and strictly outwards at M. For simplicity, we require that for (m,t) ∈ M × (0,1], V_(m,t) is a non zero rescaling of V_(m,0), and similarly for (n,t) ∈ N × [0,1]. We denote the flow of V by {ϕ_s(x)}_s ≥ 0. A priori this isn't defined for all time since the flow can leave along one of the components of ∂ W^ext; we define the flow to be constant in s as soon as it hits this component of ∂ W^ext. * Let π: W^ext→ W be the natural projection. For x ∈ W^ext, the length of the path π({ϕ_s(x)}_s ∈ [0,1]) is ≤ζ/4. * A real number > 0 sufficiently small such that: * < ζ/8. * Ũ contains an -neighbourhood of W. * If x∈ U, y ∈ e(W^ext) and ‖ x-y ‖≤ then the straight line path [x,y] lies in Ũ, and r([x,y]) has length ≤ζ/4. * The Euclidean distance: d(ϕ_1(M), ρ(Dν|_ W))) ≥ 2 * The Euclidean distance: d(ϕ_1(N), ρ(Dν|_ N))) ≥ 2 * λ > 0, large enough such that: λ· d(ρ(Sν_e|_W), e(W^ext)) ≥ 2 where Sν_e is the unit sphere bundle of ν_e; note that this distance on the left hand side is at least . * A strong deformation retraction F: W × [0,1] → W onto M. We write TD^L(W) for the simplicial set whose k-simplices consist of the set of continuously-varying families of tuples of trace data, parametrised by the standard k-simplex. The forgetful map TD^L(W) →Emb(M^ext, ^L) which forgets all the data except the embedding e is a trivial Kan fibration and hence a weak equivalence. This lemma is the same as that of <ref>, also using the fact that the space of deformation retractions is contractible. §.§.§ The definition of the trace Fix trace data R̅ = (e, ρ^ext, ζ, V, ϵ, λ, F). Let (v, w, t) ∈W^D_ν_e/∂ W^D_ν_e∧ S^1. So t ∈ [0,1], w ∈ W, and v ∈ (Dν_e)_w. The unstable Trace, Tr_unst, is the composition of the Thom collapse map: Σ^L S^1 →W^D_ν_e/∂ W^D_ν_e∧ S^1 and the map W^D_ν_e/∂ W^D_ν_e∧ S^1 →Σ^L W/W defined by: (v, w, t) ↦[ λ(v-ϕ_1 ∘ F_t(w)),; B( w F|_[0,t]⇝ F_t(w) ϕ⇝ϕ_1 ∘ F_t(w) θ⇝ w ) ] if ‖ v - ϕ_1 ∘ F_t(w)‖≤ε otherwise. Note that we have used convention (<ref>) for a model of the target. Unlike the case of the coproduct, the target of ϕ_1 is W^ext, hence in order to end up with loops in W we need to use the natural projection W^ext→ W. Therefore, in <ref> the path F_t(w) ϕ⇝ϕ_1 ∘ F_t(w) is understood to be its projection to W, and the path ϕ_1 ∘ F_t(w) θ⇝ w is the retraction of the straight line path [v, π∘ F_t(w)] to W. Tr_unst is a well-defined continuous map. Clearly the collapse map is well defined. We must check that (<ref>) sends (t, w, v) to the basepoint whenever t ∈{0,1}, |v|=1 or w∈∂ W. Indeed, if t =0 and the incidence condition holds then the second component simplifies to B(w ϕ⇝ϕ_1(w) θ⇝ w) which is a constant loop since each of the paths has length less than ζ/4 by (<ref>.<ref>) and (<ref>.<ref>).When t=1, F_1(w) is in M, and by (<ref>.<ref>) the incidence condition can not hold so (<ref>) represents the basepoint. Similarly, if w ∈∂ W, then by (<ref>.<ref>) and (<ref>.<ref>) the incidence condition can not hold so (<ref>) represents the basepoint. 
Lastly, if |v|=1, the first entry in (<ref>) lies outside of the cube, by (<ref>.<ref>), so (<ref>) represents the basepoint. The (stable) Trace: Tr: Σ^∞ S^1→Σ^∞ W/W is defined to be the L-times desuspension of Tr_unst: Σ^L S^1 →Σ^L W/W for some trace data R̅. The proof of <ref> carries over word by word to give: The stable Trace is well defined and is independent of choices up to homotopy. Similarly to Definition <ref>, we define Tr_diag and Tr_diag: Σ^∞ S^1 →Σ^∞ (W × W)/W× W to be given by the map Tr composed with the antidiagonals W/W→ (W × W)/W× W sending γ to (γ, γ) and (γ, γ) respectively. §.§ The operations Ξ_l and Ξ_r In the previous section we defined the trace map: Tr: Σ^∞ S^1 →Σ^∞ W/W. In this section we will upgrade the construction and define maps: Ξ_l, Ξ_r: W^-TW/∂ W^-TW∧ S^1 →Σ^∞ W/W∧ W/W. It will prove more useful for the following sections to consider the situation of a cobordism with a filling. That is, let M ⊆ P be a codimension 0 submanifold with corners, with j: M↪ P an embedding which is a homotopy equivalence, and such that ∂ M and ∂ P are disjoint. Let M^∘ = M ∖∂ M be the interior of M and W = P ∖ M^∘, a cobordism from ∂ M to ∂ P. We assume that W is an h-cobordism. Precomposing j by the diffeomorphism Φ of <ref>, we obtain an embedding M^ext↪ P. Note that this defines a collar neighbourhood ∂ M × [0,1] → P by restricting this embedding to M^ext∖ M^∘, and a smooth structure on: W^ext := W∪_∂ M∂ M × [0,1] ∪_∂ P∂ P × [0,1]. A choice of trace data for (M,P,j) is a pair (Q,F) where Q ∈ ED^L(P) is embedding data for P and F:P× I → P is a deformation retraction onto M. We require that: (Q,F)|_W:= (e|_W^ext, ρ^ext|_D_2ν|_W^ext, ζ, V|_W^ext, ε, λ, F) consists of trace data for W. We write TD^L(M j P) for the simplicial set whose k-simplices consist of the set of continuously-varying families of tuples of trace data, parametrised by the standard k-simplex. Fix a choice of trace data R ∈ TD^L(M j P). We define Ξ_l,unst: P^Dν_e/∂ P^Dν_e∧ S^1 →Σ^L P/P∧ P/P to send (v, γ, s) to: [ λ(v-ϕ_1 ∘ F_s ∘γ(0)),; B( γ(0) F|_[0,s]⇝ F_s ∘γ(0) ϕ⇝ϕ_1 ∘ F_s ∘γ(0) θ⇝γ(0) ),; B( γ(0) θ⇝ϕ_1 ∘ F_s ∘γ(0) ϕ̅⇝ F_s ∘γ(0) F̅|_[0,s]⇝γ(0) γ⇝γ(0) ) ] if ‖ v- ϕ_1 ∘ F_s ∘γ(0) ‖≤ε otherwise. and similarly, Ξ_r,unst: P^Dν_e/∂ P^Dν_e∧ S^1 →Σ^L P/P∧ P/P sends (v, γ, s) to [ λ(v-ϕ_1 ∘ F_s ∘γ(0)),; B( γ(0) γ⇝γ(0) F|_[0,s]⇝ F_s ∘γ(0) ϕ⇝ϕ_1 ∘ F_s ∘γ(0) θ⇝γ(0) ),; B( γ(0) θ⇝ϕ_1 ∘ F_s ∘γ(0) ϕ⇝ F_s ∘γ(0) F|_[0,s]⇝γ(0) ) ] if ‖ v- ϕ_1 ∘ F_s ∘γ(0) ‖≤ε otherwise. Ξ_l,unst and Ξ_r,unst are well-defined continuous maps. We prove that (<ref>) sends (v, γ, s) to the basepoint if s ∈{0,1}, γ(0) ∈∂ P or |v|=1; the case of (<ref>) is identical. If s = 0, the second entry in (<ref>) is constant, and so (<ref>) represents the basepoint. If s=1 and γ(0) ∈ W then since F_1(γ(0)) ∈ M by (<ref>.<ref>) the incidence condition can not hold. If γ(0) ∈ M, then the second entry of (<ref>) represents the basepoint. The case of |v|=1 and γ(0) ∈∂ P is the same as in Lemma <ref>. The stable operations: Ξ_l, Ξ_r: P^-TP/∂ P^-TP∧ S^1→Σ^∞ P/P∧ P/P are defined to be the L-times desuspension of Ξ_l,unst and Ξ_r,unst. By a proof similar to that of Lemma <ref>, Ξ_l and Ξ_r are independent of choices. § CODIMENSION 0 COPRODUCT DEFECT Let j: M ⊆ P be a codimension 0 embedding such that the complement W := P ∖ M^∘ is an h-cobordism (in particular, j is a homotopy equivalence). Let F: W × I → W be a strong deformation retraction onto ∂ M, which we extend by the identity on M to a strong deformation retraction F: P × I → P.
Let [T] the associated framed bordism invariant, defined as in Section <ref>. In this section we compare the coproducts on M and P and relate the difference to the diagonal Chas-Sullivan product with [T]. We do so by first relating the difference to the operations Ξ_l and Ξ_r in Section <ref> (<ref>), and then relating Ξ_l and Ξ_r to the diagonal Chas-Sullivan product with [T] in Section <ref> (<ref>). §.§ Coproduct defect is given by Ξ_r-Ξ_l For the rest of this section fix a tuple (Q,F) ∈ TD^L(M j P). We assume that j extends to an embedding j^ext: M^ext↪ P such that j^ext(M^ext) and ∂ P are disjoint. We require that Q ∈ ED^L(P) is a choice of embedding data, such that Q|_M := (e|_M^ext, ρ^ext|_D_2ν|_M^ext, ζ, V|_M, , λ) consists of embedding data for M. For convenience, we write ν_P for ν_e|_P, and similarly for ν_M. Let F:P× I → P be the deformation retraction. Then F_1 induces a map of spaces: F_1: P^Dν_P/∂ P^Dν_P→ M^Dν_M/∂ M^Dν_M by sending F_1(v, γ)= (v, F_1 ∘γ) if γ(0) ∈ M otherwise By passing to spectra, we get a map that we also call F_1: F_1: P^-TP/∂ P^-TP→ M^-TM/∂ M^-TM F_1 is an equivalence of spectra. We prove this at the level of spaces. We define an explicit homotopy inverse G: M^Dν_M/∂ M^Dν_M→ P^Dν_P/∂ P^Dν_P as follows. Choose a collar neighbourhood C: ∂ M × I → M sending ∂ M ×{1} to ∂ M, and choose a map g_1: ∂ M × I → W ∪ C which is given by C|_∂ M×{0} on ∂ M ×{0} and sends ∂ M ×{1} to ∂ P, along with a homotopy {g_t}_t∈[0,1] from g_0=C to g_1 relative to ∂ M ×{0}. This exists since P ∖ M^∘ is an h-cobordism: we essentially have chosen a homotopy inverse (rel boundary) to F_1.Now define G(v, γ) = (v, γ) if γ(0) ∈ M ∖ C ( ṽ, g_1(x,t) g(x,t)⇝ g_0(x,t)=γ(0) γ⇝γ(0) g(x,t)⇝ g_1(x,t)) if γ(0) = C(x,t) where ṽ is given by parallel transporting v along the path {g_τ(x,t)}_τ∈ [0,1].We show by explicit construction of a homotopy that G ∘F_1 ≃ Id_P; the other direction is similar. We do this by concatenating two homotopies H,H': P^Dν_P/∂ P^Dν_P× [0,1] → P^Dν_P/∂ P^Dν_P For s ∈ [0,1], we define H_s(v, γ) to be (v, F_s ∘γ) if γ(0) ∈ M ∖ C (ṽ_s, g_1(x, t) g⇝ g_0(x, t) F_s ∘γ⇝ g_0(x, t) g⇝ g_1(x,t)) if γ(0) ∈ W ∪ C We choose a map δ: (W ∪ C) × [0,1]_τ× [0,1]_t → W ∪ C, which we think of as a family of paths {δ^y_τ}_τ∈ [0,1], y ∈ W∪ C, such that: * δ(y, τ, 0) = y for all y, τ. * δ(y, 1, t) = y for all y, t. * δ(y, 0, ·) is the path y F⇝ F_1(y) = C(x, t) g ⇝ g_1(x, t) for all y, where (x, t) ∈∂ M × [0,1] is determined by F_1(y) = C(x, t) (noting if y ∈ W then t=1, and this path is constant). * δ(y, τ, t) = y for all y ∈ C(∂ M ×{0}). * δ(y, τ, 1) =y for all y ∈∂ P and all τ. These constraints specify δ on ((W ∪ C) ×∂ [0,1]^2) ∪(C(∂ M ×{0}) × [0,1]^2) (and are compatible with each other on overlaps). Since W × [0,1]^2 deformation retracts to this subspace, we can indeed choose such a δ.We define H'_s(v, γ) to be (v, γ) if γ(0) ∈ M ∖ C (ṽ_s, δ^γ(0)_s(1) δ^γ(0)_s⇝γ(0) γ⇝γ(0) δ^γ(0)_s⇝δ^γ(0)_s(1)) otherwise. where ṽ_s denotes v parallel transported along the path δ^γ(0)_s.Then H_1 = G∘F_1, H_0 = H'_1 and H'_0 is the identity. The main result of this section is that Ξ_r and Ξ_l together determine the failure for the coproducts for M and P to agree: There is a homotopy Δ^P - (j∧ j) ∘Δ^M ∘F_1 ≃Ξ_r - Ξ_l between maps of spectra P^-TP/∂ P^-TP∧ S^1 →Σ^∞ P/P∧ P/P. To prove <ref>, we start by defining a map Λ whose boundary will give rise to the required homotopy. 
For the fixed choice of (Q,F), we define a map of spaces: Λ: P^Dν/∂ P^Dν× [0,1]^2_s,t→Σ^L P/P∧ P/P which sends (v, γ, s, t) to [ λ(v-ϕ_1∘ F_s ∘γ(t)),; B( γ(0) F|_[0,s]⇝ F_s ∘γ(0) F_s ∘γ|_[0,t]⇝ F_s ∘γ(t) ϕ⇝ϕ_1 ∘ F_s ∘γ(t) θ⇝γ(0) ),; B(γ(0) θ⇝ϕ_1 ∘ F_s ∘γ(t) ϕ⇝ F_s ∘γ(t) F_s ∘γ|_[t,1]⇝ F_s ∘γ(1) F|_[0,s]⇝γ(1) ) ] if ‖ v-ϕ_1∘ F_s ∘γ(t) ‖≤ε otherwise. Λ is well-defined. Furthermore if both s, t ∈{0,1}, then Λ sends (v, γ, s,t) to the basepoint. For (<ref>) to be well-defined, it must send (v,γ,s,t) to the basepoint whenever |v|=1 or γ(0) ∈∂ P; this holds by the same argument as in Lemma <ref>.Suppose s=0 and t=0 (or 1). Then if the incidence condition holds, the second (or third, respectively) entry in (<ref>) must be constant, by (<ref>.<ref>) and (<ref>.<ref>).Suppose s=1. Then F_s∘γ(t) ∈ M. If γ(0) ∈ W, then by (<ref>.<ref>) the incidence condition can not hold. If γ(0) ∈ M then since F|_M is the identity, the paths F|_[0,s] and F|_[0,s] appearing in (<ref>) are constant. Then if t=0, the second entry of (<ref>) is constant by the same argument as in Lemma <ref>; similarly if t=1 the third entry of (<ref>) is constant. We next analyse the restriction of Λ to each of the four sides of the square [0,1]^2_s,t. The restriction of Λ to the subspace s=0 is denoted by: Λ|_{s=0} := Λ|_(v, γ, 0,t) : P^Dν/∂ P^Dν× [0,1]_t→Σ^L P/P∧ P/P The other sides of the square are denoted in a similar manner. By Lemma <ref>, Λ|_{s=0}, as well as the restriction of Λ to the other sides of the square, descend to maps from P^Dν/∂ P^Dν∧ S^1. Λ|_{s=0} = Δ^P. Since F_0 is the identity on P, this follows by comparing (<ref>) and (<ref>). There is a homotopy Λ|_{t=0}≃Ξ_r,unst, relative to the subspace {s ∈{0,1}, t=0}.Similarly there is a homotopy Λ|_{t=1}≃Ξ_l,unst, relative to the subspace {s ∈{0,1}, t=1}. We first construct the homotopy Λ|_{t=0}≃Ξ_l. We define a homotopy H: [0,1]_τ× P^Dν/∂ P^Dν∧ S^1 →Σ^L P/P∧ P/P by H_τ(γ,s)= (λ (v-ϕ_1 ∘ F_s ∘γ(0)), α_s,γ,τ, β_s,γ,τ) if ‖ v-ϕ_1 ∘ F_s ∘γ(0)‖≤ε otherwise. where α_s,γ,τ = B( γ(0) F|_[0, τ s]⇝ F_τ s∘γ(0) F_τ s∘γ⇝ F_τ s∘γ(0) F|_[τ s, s]⇝ F_s ∘γ(0) ϕ⇝ϕ_1 ∘ F_s ∘γ(0) θ⇝γ(0) ) β_s,γ,τ=B( γ(0) θ⇝ϕ_1 ∘ F_s ∘γ(0) ϕ⇝ F_s ∘γ(0) F|_[0,s]⇝γ(0) ) This is well-defined by the same argument as in Lemma <ref>. Inspection of (<ref>), (<ref>) and (<ref>) shows that H_0 = Ξ_r,unst and H_1 = Λ|_{t=0}, so H is the required homotopy.The other case is similar; explicitly, a homotopy H': [0,1]_τ× P^Dν/∂ P^Dν∧ S^1 →Σ^L P/P∧ P/P between Λ|_{t=1} and Ξ_l,unst is given by H'_τ(γ,s)= (λ (v-ϕ_1 ∘ F_s ∘γ(0)), α̃_s,γ,τ, β̃_s,γ,τ) if ‖ v-ϕ_1 ∘ F_s ∘γ(0)‖≤ε otherwise. where α̃_s,γ,τ = B( γ(0) F|_[0, s]⇝ F_s ∘γ(0) ϕ⇝ϕ_1 ∘ F_s ∘γ(0) θ⇝γ(0) ) β̃_s,γ,τ = B( γ(0) θ⇝ϕ_1 ∘ F_s ∘γ(0) ϕ⇝ F_s ∘γ(0) F|_[τ s,s]⇝ F_τ s∘γ(0) F_τ s∘γ⇝ F_τ s∘γ(0) F|_[0,τ s]⇝γ(0) ) Lastly, we prove the following: Λ|_{s=1} = (j ∧ j) ∘Δ^M ∘F_1. Note that F_1 ∘γ(t) ∈ M. Hence, if γ(0) ∈ W, by (<ref>.<ref>), the incidence condition can not hold. If γ(0)∈ M then F_1(v, γ) = (v, γ), and by our choice <ref>, the equality holds on the nose. Passing to suspension spectra (and desuspending L times), Theorem <ref> follows from Lemmas <ref>, <ref> and <ref>, by using the homotopy Λ. §.§ Characterizing Ξ_l and Ξ_r In this section we relate the Chas-Sullivan product and the framed bordism invariant [T] (defined in Section <ref>) with the operations Ξ_l and Ξ_r (defined in Section <ref>). Let M ⊆ P be a codimension 0 submanifold with corners, such that the complement W := P ∖ M^∘ is an h-cobordism. 
Assume that there exists a codimension 0 embedding e: P →^L.We let: * [P]: → P^-TP/∂ P^-TP be the composition: 𝕊→P^-TP/∂ P^-TP→ P^-TP/∂ P^-TP where the first arrow is the fundamental class (as in Appendix <ref>), and the last arrow is given by inclusion of constant loops. * Tr_diag, Tr_diag: Σ^∞ S^1 → (P × P)/P× P be the maps from Definition <ref> applied to the h-cobordism W, composed with the map induced by the inclusion W ↪ P. * μ̃^P × P_r be the version of the product on P × P considered in (<ref>). Then Ξ_r: P^-TP/∂ P^-TP∧ S^1 →Σ^∞ P/P∧ P/P is homotopic to the following composition: P^-TP/∂ P^-TP∧ S^1 P^-TP/∂ P^-TP∧∧Σ^∞ S^1 P^-TP/∂ P^-TP∧ P^-TP/∂ P^-TP∧Σ^∞ (P × P)/P × P (P × P)^-T(P × P)/∂ (P × P)^-T(P × P)∧Σ^∞ (P × P)/P × PΣ^∞ (P × P)/P × PΣ^∞ P/P∧ P/P Similarly on the left, Ξ_l: P^-TP/∂ P^-TP∧ S^1 →Σ^∞ P/P∧ P/P is homotopic to the following composition: P^-TP/∂ P^-TP∧ S^1 ∧ P^-TP/∂ P^-TP∧Σ^∞ S^1 Σ^∞ S^1 ∧∧ P^-TP/∂ P^-TP Σ^∞ (P × P)/P× P∧ (P × P)^-T(P × P)/∂ (P × P)^-T(P × P)Σ^∞ (P × P)/P × PΣ^∞ P/P∧ P/P We write μ_r((·× [P]), [T_diag]) for the composition (<ref>) and μ_l([T_diag], [P] ×·) for the composition (<ref>). As suggested in the notation in Definition <ref>, the compositions (<ref>) and (<ref>) are the appropriate spectral-level analogues of taking the cross product with the fundamental class [P] and then taking the Chas-Sullivan product with the classes Tr_diag and Tr_diag in π_1^st, and indeed this is exactly what these maps do on any generalised homology theory. The assumption that P embeds as a codimension 0 submaniold of ^L is not necessary, but is sufficient to prove Theorem <ref>. The proof of Theorem <ref> constitutes the rest of this subsection. We show the statement for the right product; the left case is identical. We first make convenient choices of trace data. §.§.§ Convenient data We first choose collars for M and P and trace data so that certain conditions, detailed in <ref>, hold. More precisely let: _P: ∂ P × [0,1] → W be a collar neighbourhood of ∂ P, sending ∂ P ×{1} to ∂ P. We write _P also for its image, and _P^in for the smaller collar neighbourhood _P(∂ P × [1/2, 1]). Similarly, let _M: ∂ M × [0,1] → W, a collar neighbourhood of ∂ M, sending ∂ M ×{0} to ∂ M. We write _M also for its image; we assume this is disjoint from _P. We can choose trace data (Q,F) ∈ TD^L(M j P), as well as collars _P and _M as above, so that the following conditions hold: * If x ∈_P, there is a (necessarily unique) s^+=s^+(x) ∈ [0,1] such that F_[0,s^+](x) ⊆_P is a straight line in the collar direction, and F|_(s^+, 1](x) ⊆ P ∖_P. * Whenever x ∈_M, the path F(x) lies in _M and is a straight line in the collar direction. * For all x ∈_M, the path F(x) has length ≤ζ/4. * For all x ∈_P, F|_[0,s^+](x) has length ≤ζ/4 * V=0 on P ∖ (M ∪_M ∪_P^in) * d(P∖_P, _P^in) > We first choose e and ρ^ext any embeddings as in Definition <ref>, and then ζ > 0 sufficiently small. Next, choose disjoint collar neighbourhoods of the boundaries _M and _P, which are small enough that the straight lines in each collar neighbourhood all have length ≤ζ/4; this ensures (<ref>) and (<ref>) hold.Choose a vector field V on P which points into M along ∂ M and into P on ∂ P, and which satisfies (<ref>), and scale V down to be sufficiently small.Specifying a smooth strong deformation retraction F: P × [0,1] → P is the same as a smoothly-varying family of paths {F_t(x)}_t ∈ [0,1] for x ∈ P. 
We first choose any smooth strong deformation retraction F, then modify F by preconcatenating (and reparametrising appropriately) the paths {F_t(x)}_t ∈ [0,1] with a straight line in the collar direction for all x ∈_P and postcomposing similarly for all x ∈_P; this ensures that (<ref>) and (<ref>) hold.We now choose > 0 sufficiently small that (<ref>) holds. Given F satisfying the conditions in <ref>, let T=T(F) be the framed manifold defined as in <ref> and f: T → P the natural map sending (x,t) to the loop F|_[0,t] from x to itself. Let [T] ∈Ω^fr_1( P/P) be the associated framed bordism class. We can choose (Q, F) ∈ TD^L(M j P) such that the conditions in Lemma <ref> hold, and additionally T has no boundary. Consider the vector field V' on W, where V'(p) = d/ds|_s=0 F_s(p). Zeroes of this vector field in W ∖∂ M biject with points in ∂ T. Since the relative Euler characteristic χ(W, ∂ M) vanishes, we can choose F so that this vector field has no zeros; furthermore this is compatible with the proof of Lemma <ref>. We assume we have chosen (Q, F) so that the conclusion of Lemma <ref> also holds. We consider the following composition, which is the composition (<ref>) on (3L)^th spaces (see Appendix <ref>): P/∂ P∧Σ^L S^0 ∧Σ^L S^1 P/∂ P∧ P/∂ P∧Σ^L_+ (P × P) Σ^3L_+ P × P →Σ^3L P/P∧ P/P where [P]_unst and (Tr_diag)_unst are maps of spaces representing the maps of spectra [P] and Tr_diag respectively, as in Appendix <ref>.To prove Theorem <ref>, it suffices to show that (<ref>) is homotopic to the map sending (γ, u, v, t) (so γ∈ P,u,v ∈ [-1,1]^L and t ∈ S^1) to (u, v, Ξ_r,unst(γ,t)) Though the first map in (<ref>) may depend on the choice of vector field in the proof of Lemma <ref> (which isn't necessarily unique up to homotopy), the total composition does not. §.§.§ Simplifying Ξ_r Let (γ,s) ∈ P/∂ P∧ S^1. If γ(0) lies in M, _M or _P^in, then Ξ_r,unst(γ,s) is given by the basepoint.In particular, if Ξ_r,unst(γ,s) isn't the basepoint, then by (<ref>.<ref>) and (<ref>.<ref>), V vanishes at F_s ∘γ(0). If γ(0) ∈ M, the final term in (<ref>) is constant.If γ(0) ∈_M, then by (<ref>.<ref>) and (<ref>.<ref>), the final term of (<ref>) is again constant.Now suppose γ(0) ∈_P^in. If s ≤ s^+(γ(0)), then by (<ref>.<ref>), the final term of (<ref>) is constant. If instead s ≥ s^+(γ(0)), by (<ref>.<ref>), the incidence condition for (<ref>) can't hold. For λ > 0 large enough, for any (γ,s) ∈ P/∂ P∧ S^1, if Ξ_r,unst(γ, s) is not equal to the basepoint, then (γ(0), s) ∈σ_χ(Dν_i). Same as Lemma <ref>. We now assume we have made choices such that λ satisfies the hypothesis of Lemma <ref>. By Lemmas <ref> and <ref>, we can write an alternative formula for Ξ_r,unst with respect to these choices of data: For (γ, s) ∈ P/∂ P∧ S^1, we have that Ξ_r,unst( γ, s) is equal to [ λ(γ(0)-F_s ∘γ(0)),; B( γ(0) γ⇝γ(0) F|_[0,s]⇝ F_s ∘γ(0) θ⇝γ(0) ),; B( γ(0) θ⇝ F_s ∘γ(0) F|_[0,s]⇝γ(0) ) ] if (γ(0), s) ∈σ_χ(Dν_i) otherwise. Note that (<ref>) is the equation (<ref>), with the incidence condition replaced by that of (<ref>), and with all instances of ϕ removed. §.§.§ Proof Using Lemma <ref>, Lemma <ref>, Lemma <ref> to remove instances of ϕ and then plugging in the definitions, we see that (<ref>) is homotopic to the map which sends (γ, u, x, s) (so γ∈ P, u, x ∈ [-1, 1]^L and s ∈ [0,1]) to: [ λ(γ(0) - x),; λ( u-x),; λ(x-F_s(x)),; B(γ(0) γ⇝γ(0) θ⇝ x F|_[0,s]⇝ F_1(x) θ⇝ x θ⇝γ(0)),; B( u θ⇝ x θ⇝ F_s(x) F|_[0,s]⇝ x θ⇝ u) ] if u ∈ P, x ∈ P, ‖ x-F_s(x)‖≤ and ‖ (γ(0), u) - (x,x)‖≤ otherwise. 
Note that the first two conditions of the incidence condition of (<ref>) are implied by the final two, implying they are redundant and we may therefore drop them.We argue that this map is homotopic to (<ref>). The final terms are homotopic via a homotopy similar to the one between the final terms described in the proof of Lemma <ref>.Then the second entry may be replaced with λ u, by a homotopy which replaces (u-x) with (u-τ x) at time τ∈ [0,1], both in the second entry and in the incidence condition.The the third entry can be replaced by λ(γ(0)- F_s∘γ(0)), by a homotopy which at time τ replaces (x-F_s(x)) with z_τ(x,y)-F_s(z_τ(x,y)) where {z_τ(x,y)}_τ is a straight-line path between x and y, both in the third entry and in the incidence condition.Then the first entry can be replaced with -λ x via a similar argument to the second entry. The resulting map then differs from (<ref>) only by applying the linear transformation [ 0 Id_L; -Id_L 0 ] to the first two entries; this matrix has positive determinant so is homotopic to the identity in O(2L). §.§ T and Tr In this section, we show that [T] ∈Ω^fr_1( W/W) corresponds to Tr=Tr(W) ∈π_1^st( W/W) under the Pontrjagin-Thom correspondence. We work with the same choices of trace data as in the previous section. We consider Pontrjagin-Thom data (see Appendix <ref>) for P and T as follows.For P, we take the embedding e: P ↪^L, which (by rescaling if necessary), we may assume the image of e lies in (-1, 1)^L. Since this is a codimension 0 embedding, no extra data is required.For T, we take * The embedding T i P × [0,1) e × Id (-1,1)^L × (-1,1) * ψ_μ: ν_i →^L is the isomorphism of vector bundles sending (v,t) in the fibre of ν_i over (x,s) to ψ(v,t) :=μ( v- dF_(x,s)(v,t)), where μ>0 is large. * σ_χ: Dν_i → P × [0,1] to send (v,t), lying in the fibre of Dν_i over (x,s) ∈ T, to (x,s) + χ· (v,t), where χ > 0 is small. For χ > 0 sufficiently small, σ_χ is an embedding, with image lying outside of (_M ∪_P) × [0,1].For χ > 0 fixed and μ > 0 sufficiently large, ψ_μ satisfies (<ref>). The first statement follows from the inverse function theorem and the fact that i(T) lies outside (_M ∪_P) × [0,1]. The second statement is clear. For the rest of the section, we fix χ, μ>0 as in Lemma <ref>. We assume the maps [P]_unst and [T]_unst appearing in (<ref>) are taken with respect to these choices of data. For λ > 0 large enough, if Tr(γ,s) is not the basepoint, then (γ, s) ∈σ_χ(Dν_i). Let S = {(x, s) ∈ P × [0,1] | ‖ x-F_s(x)‖≤}∖σ_χ(Dν_i^∘). Since S is compact, for λ > 0 large enough, whenever (γ(0), s) doesn't lie in S, the first term of (<ref>) has large norm. Choosing λ >0 large enough that Lemma <ref> holds and using (<ref>.<ref>) and Lemma <ref>, we have: Tr_unst(x, s) = [ λ(x-F_s(x)); B(x F|_[0,s]⇝ F_s(x) θ⇝ x) ] if (x, s) ∈σ_χ(Dν_i) otherwise. Using the chosen Pontrjagin-Thom data for T (and assuming that λ = μ/χ, which we can do by increasing λ or μ as necessary) and opening up the definition of ψ_μ, we have that [T]_unst(x, s) = [ λ ((x-y) - dF_(y,t) (x-y,s-t)); y F|_[0,s]⇝ y ] if (x, s) ∈σ_χ(Dν_i) otherwise. Here (y, t) ∈ T is the fibre in which σ^-1_χ(x, s) lives, assuming the incidence condition holds. Tr and [T] are homotopic. 
Comparing (<ref>) and (<ref>), we see that they are homotopic, since the first entries agree up to first order (so they are homotopic if we take λ sufficiently large), and in the second entry we can take a homotopy of the form {z_τ(x,y) F|_[0,s]⇝ F_s(z_τ(x,y)) θ⇝ z_τ(x,y)}_τ, where {z_τ(x,y)}_τ follows the straight line between x and y, and also applying Lemma <ref>. § PROOF OF <REF> In this section we prove Theorem <ref> using the results of the previous sections. We first reduce to the case where the homotopy equivalence is a codimension 0 embedding of manifolds with corners, and then appeal to results of Section <ref>.Let f: N → Z be a homotopy equivalence of compact manifolds as in Theorem <ref>. Embed Z into ^L for some large L. Let P be the unit disc bundle of the normal bundle, which we embed as a submanifold of ^L extending the embedding of Z. Composing f with the inclusion of the zero section Z ↪ P gives a map N → P. This is not an embedding, but we can choose a generic perturbation to an embedding N ↪ P ⊂^L. Let M be the unit disc bundle of N, which we can assume embeds as a submanifold of P extending the embedding of N. Let j: M ↪ P be the inclusion. Note j is a codimension 0 embedding. Then there is a homotopy commutative diagram: N [r, "f"] [d, "ι^N"] Z [d, "ι^Z"] M [r, "j"] P where the vertical arrows, ι^N and ι^Z, are the inclusions of the zero sections, and in particular are simple homotopy equivalences.Let ν_N and ν_Z be the normal bundles of the embeddings N, Z ↪^L respectively, so M ≅Tot(Dν_N) and P ≅Tot(Dν_Z). For L sufficiently large, the complement W := P ∖ M^∘ is an h-cobordism. We first argue that the inclusions ∂ M, ∂ P ↪ W induce isomorphisms on π_1. ∂ P ≅Tot(Sν_Z)∪_Tot(Sν_Z|_∂ Z)Tot(Dν_Z|_∂ Z) Since the fibres of the sphere bundle Sν_Z are high-dimensional spheres, by the long exact sequence of a fibration we see that the projections Tot(Sν_Z) → Z and Tot(Sν_Z|_∂ Z) →∂ Z induce isomorphisms on π_1. Therefore by Seifert-van Kampen, we find that π_1(∂ P) ≅π_1(Z) *_π_1 ∂ Zπ_1 ∂ Z ≅π_1 Z It follows that the inclusion ∂ P ↪ P induces an isomorphism on π_1. Exactly the same argument shows that the inclusion ∂ M ↪ M ≃ P does too. Since the handle dimension of M is at most the dimension of N and thus bounded above independently of L, for L sufficiently large any loop in P can be generically perturbed away from the skeleton of some handle decomposition of M, and therefore can be homotoped to live in W. Similarly given any loops in W which are homotopic in P, the homotopy can be generically perturbed away from the same skeleton, and therefore can be homotoped to live in W. It follows that ∂ M, ∂ P ↪ W induce isomorphisms on π_1.Now by excision and using the above isomorphisms on π_1, the relative homology group with universal local coefficients H_*(W, ∂ M; [π_1]) ≅ H_*(P, M; [π_1]) = 0 vanishes. Using Alexander duality, we see also that H_*(W, ∂ P; [π_1]) also vanishes. It follows that W is an h-cobordism. The inclusion j:M ↪ P now satisfies the conditions of Section <ref>. Choose a strong deformation retraction F: W × [0,1] → W and extend it by the identity to F: P × [0,1] → P; let F_1 be as in (<ref>).We next define a map f_!: N^-TN/∂ N^-TN→ Z^-TZ/∂ Z^-TZ, and give an alternative characterisation of it in the case that N and Z have no boundary.Since F_1 and α^Z are homotopy equivalences, we may choose a map f_! 
such that the following diagram commutes up to homotopy, and this choice is well-defined up to homotopy: N^-TN/∂ N^-TN[r, "f_!"] [d, "α^N"] Z^-TZ/∂ Z^-TZ[d, "α^Z"] M^-TM/∂ M^-TM P^-TP/∂ P^-TP[l, "F_1"] Suppose that N and Z are both closed manifolds. Then f_! is homotopic to the following composition: N^-TN N^-f^*TZ Z^-TZ where the first map is given by Atiyah's equivalence <cit.> between -TN and -f^*TZ, as stable spherical fibrations.In particular, if N and Z are oriented and f is orientation-preserving, then the following diagram commutes: H_*+n( N) [r, "( f)_*"] [d, "Thom"] H_*+n( Z) [d, "Thom"] H_*( N^-TN) [r, "(f_!)_*"] H_*( Z^-TZ) We first recap (a version of) the construction of the equivalence of stable spherical fibrations -TN ≃ -f^* TZ from <cit.>. We construct this as a map Ati: f^*Dν_Z → Dν_N of fibre bundles over N, sending boundaries to boundaries, that is a fibrewise homotopy equivalence of pairs. We make use of the fact that using the vector bundle structure, between any two points in the same fibre of the disc bundle of a vector bundle, there is a canonical path given by taking the convex hull of these two points; we call this a fibre line path and write these paths Fib^π for a vector bundle π: E → B; in general it should be unambiguous what the endpoints are.Let j, ι^N, ι^Z be as in <ref>. Let h' be a homotopy from h'_0 = j ∘ι^N to h'_1 = ι^Z ∘ f: N → P, and let h = F_1 ∘ h', a homotopy between ι^N, F_1∘ι^Z ∘ f: N → M. Let x ∈ N, and choose a vector v ∈ (f^*Dν_Z)_x = (Dν_Z)_f(x). Let u = F_1(v) ∈ P ≅ Dν_N. u does not necessarily live in the fibre over x; it instead lives in the fibre over π^N∘ F_1(v). We parallel transport along a natural path between these two points. Consider the path in N: δ^v,x: π^N ∘ F_1(v) π^N ∘ F_1 ∘Fib^π^Z⇝π^N ∘ F_1 ∘ι^Z ∘ f(x) π^N ∘ F_1 ∘ h'(x)⇝π^N ∘ F_1 ∘ j ∘ι^N(x) = x where the first path in the concatenation is π^N∘ F_1 composed with a fibre line path of the disc bundle M → N. We define Ati(v) to be the image of F_1(v) under the parallel transport map along the path δ^v,x; this lives in the fibre over x by construction, and assuming we parallel transport along a metric-compatible connection, if |v|=1 then |Ati(v)|=1, so this induces a well-defined map of spherical fibrations.It suffices to show that the following diagram commutes up to homotopy, which we do by writing down an explicit homotopy: N^f^* Dν_Z[r, "Ati"] [d, "f"] N^Dν_N[dr, "α^N"] Z^Dν_Z[r, "α^Z"] Z/∂ Z[r, "F_1"] M/∂ M We define a homotopy {H_t}_t ∈ [0,1]: N^f^*Dν_Z→ M/∂ M as follows. Choose (γ, v) ∈ N^f^* Dν_Z and t ∈ [0,1].We first define u_t^v,γ∈ P to be the image of v along the parallel transport map along the path in Z: f ∘γ(0) π^Z ∘ h'|_[t,1]∘γ(0)⇝π^Z ∘ h'_t ∘γ(0) Note u_1^v,γ = v. We also define a path δ^v,γ_t in N: π^N ∘ F_1(v) π^N ∘ F_1 ∘Fib^π^Z⇝π^N ∘ F_1 ∘ι^Z ∘ f ∘γ(0) π^N ∘ F_1 ∘ h'|_[t,1]∘γ(0)⇝π^N ∘ F_1 ∘ h'_t∘γ(0) π^N ∘ F_1 ∘Fib^π^Z⇝π^N ∘ F_1 (t · u_t^v,γ) where t · u_t denotes u_t rescaled by t. Let w_t^v,γ∈ M be the image of F_1(v) under the parallel transport along the path δ^v,γ_t; note that w_1^v, γ = F_1(v) since δ_1^v,γ consists of a path concatenated with its inverse. By inspection of (<ref>) we see that δ^v,γ_0 = δ^v, γ(0); from this we also see that w_0^v, γ = Ati_γ(0)(v).We define H_t(v,γ) to be the following loop: w_t^v,γFib^π^N⇝ F_1(t · u_t^v,γ) F_1 ∘Fib^π^Z⇝ h_t ∘γ(0) h_t ∘γ⇝ h_t ∘γ(0) ⇝ F_1(t · u_t^v,γ) ⇝ w_t^v,γ where the last two paths are the reverses of the first two paths. 
Then since w_1^γ, v = F_1(v) and h_1 = F_1 ∘ι^Z ∘ f, we see that H_1(v, γ) = (v,F_1 ∘α^Z ∘ f ∘γ).Similarly, since δ^v,x_0 = δ^v,x, 0 · u_0^γ,v = π^Z u_0^γ,v and h_0 = ι^N, we see that H_0 = α^N ∘Ati. Now consider the following diagram. N^-TN/∂ N^-TN∧ S^1 [rr, "Δ^N"] [dd, "f_! ∧ Id_S^1"] [dr, "α^N ∧ Id_S^1"] Σ^∞ N/N∧ N/N[dd, near start, "f ∧ f"] [dr, "ι^N ∧ι^N"] M^-TM/∂ M^-TM∧ S^1 [rr, near start, "Δ^M"] Σ^∞ M/M∧ M/M[dd, "j ∧ j"] Z^-TZ/∂ Z^-TZ∧ S^1 [dr, swap, "α^Z ∧ Id_S^1"] [ rr, near start, "Δ^Z"] Σ^∞ Z/Z∧ Z/Z[dr, "ι^Z ∧ι^Z"] P^-TP/∂ P^-TP∧ S^1 [rr, "Δ^P"] [uu, near start, "F_1 ∧ Id_S^1"] Σ^∞ P/P∧ P/P where α^N, α^Z are the homotopy equivalences from Lemma <ref>. The back cube is the square (<ref>) whose failure to homotopy commute we wish to determine.The top and bottom squares in (<ref>) homotopy commute by Theorem <ref>. The left square homotopy commutes by construction. The right square homotopy commutes by homotopy commutativity of (<ref>). Let [T] ∈Ω^fr_1( P, P) be the framed bordism fixed-point invariant associated to the inclusion j: M ↪ P, as in Section <ref>. We also write [T]: Σ^∞ S^1 →Σ^∞ Z/Z for the corresponding stable homotopy class under the Pontrjagin-Thom isomorphism.As in Section <ref>, we let [T_diag] and [T_diag] be given by [T] composed with the two antidiagonal maps. A proof similar to Lemma <ref> shows that the class [T] ∈Ω^fr_1( Z, Z) only depends on the homotopy equivalence f: N → Z, and none of the auxiliary choices.The front square of (<ref>) does not necessarily commute, but its failure to commute is determined by Theorems <ref> and <ref>, which together imply that there is a homotopy: Δ^P- (j ∧ j) ∘Δ^M ∘F_1 ≃μ_r((·× [P]), [T_diag]) - μ_l([T_diag], [P] ×·) where the maps on the right are as in Section <ref>. The following diagram commutes up to homotopy: Z^-TZ/∂ Z^-TZ∧ S^1 [d, "α^Z ∧ Id_S^1"] [rrr, "μ_r^Z × Z(·× [Z], [T_diag])"] Σ^∞ Z/Z∧ Z/Z P^-TP/∂ P^-TP∧ S^1 [rrr, "μ^P × P_r(·× [P], [T_diag])"] Σ^∞ P/P∧ P/P[u, "π^Z ∧π^Z"] where the horizontal maps are defined as in Theorem <ref>.A similar diagram commutes with the top and bottom horizontal arrows replaced by μ_l^Z × Z([T_diag], [Z] ×·) and μ_l^P × P([T_diag], [Z] ×·) respectively. Follows from homotopy commutativity of (<ref>) and Theorem <ref>. <ref> then follows from the homotopy commutativity of four of the squares in (<ref>), along with (<ref>) and Lemma <ref>. §.§ Proof of Corollary <ref> Let f: N → Z be an orientation-preserving homotopy equivalence of closed oriented manifolds. Let M be a closed oriented manifold. 
Let τ∈π^st_1( (M × M)/M × M).Then the following diagram commutes up to a factor of (-1)^np: H_p+1-n( M^-TM∧ S^1) [d, "Thom∧ Id_S^1"] [rrr, "(μ_r(·× [M], τ))_*"] H_p+1-n(Σ^∞ M/M∧ M/M) [dd, "="] H̃_p+1( M_+ ∧ S^1) H_p( M) [u, "·×[0,1]"] [rrr, "μ^CS(·× [M], h_*τ)"] H̃_p+1-n( M/M∧ M/M) Similarly, the following diagram commutes up to a factor of (-1)^p: H_p+1-n( M^-TM∧ S^1) [d, "Thom∧ Id_S^1"] [rrr, "(μ_l(τ, [M] ×·))_*"] H_p+1-n(Σ^∞ M/M∧ M/M) [dd, "="] H̃_p+1( M_+ ∧ S^1) H_p( M) [u, "·×[0,1]"] [rrr, "μ^CS(h_*τ, [M] ×·)"] H̃_p+1-n( M/M∧ M/M) Consider the following diagram: H_p+1-n( M^-TM∧ S^1) [d, "Thom"] [r, "≃"] H_p+1-n( M^-TM∧∧Σ^∞ S^1) [d, "Thom"] [r, "Id ∧[M]∧ Id"] H_p+1(( M^-TM)^∧ 2∧ M^-TM∧ S^1) [d, "Thom"] H̃_p+1( M_+ ∧ S^1) [r, "≃"] H_p+1( M_+ ∧∧Σ^∞ S^1) H_p+1+n(Σ^∞ M_+^∧ 2∧Σ^∞ S^1) H_p( M) [u, "·× [0,1]"] [r, "="] H_p( M) [u, "·× [0,1]"] [r, "·× [M]"] [ur, "·× [M] × [0,1]"] H_p+n( M × M) [u, "·× [0,1]"] All of (<ref>) commutes except the top right trapezium, which commutes up to a factor of (-1)^pn, coming from commuting x ∈ H_p( M) past the Thom class of the second copy of -TM. Also consider: H_p+1(( M^-TM)^∧ 2∧ S^1) [d, "Thom"] [r, "Id ∧ Id ∧τ"] H_p+1-n(( M^-TM)^∧ 2∧ (M × M)/M × M) [d, "Thom"] [r, "μ^M × M_r"] H_p+1-n(Σ^∞ (M × M)/M × M) [d, "="] H_p+1-n(Σ^∞ M_+^∧ 2∧Σ^∞ S^1) [r, "Id ∧ Id ∧τ"] H_p+1-n(Σ^∞ M_+^∧ 2∧ (M × M)/M × M) [r, "μ^CS_M × M"] H̃_p+1-n( (M × M)/M × M) H_p+n( M × M) [u, "·× [0,1]"] [r, "·× h_*τ"] H̃_p+1+n( M_+^∧ 2∧ (M × M)/M × M) [u, "="] [r, "μ^CS_M × M"] H̃_p+1-n( (M × M)/M × M) [u, "="] (<ref>) commutes; for the top right square this uses Corollary <ref> applied to M × M (which is even-dimensional).Then the concatenation of (<ref>) and (<ref>), followed by the natural collapse map H̃_*( (M × M)/M × M) →H̃_*(( M/M)^∧ 2) has outer square given by (<ref>), so (<ref>) commutes up to a factor of (-1)^np.Consider the following diagram, analagous to (<ref>): H_p+1-n( M^-TM) [d, "Thom"] [r, "≃"] H_p+1-n(∧ M^-TM∧ S^1) [d, "Thom"] [r, "[M] ∧ Id ∧ Id"] H_p+1-n(( M^-TM)^∧ 2∧ S^1 ) [d, "Thom"] H̃_p+1( M_+ ∧ S^1) [r, "≃"] H_p+1(∧ M_+ ∧Σ^∞ S^1) H_p+1+n(Σ^∞ M_+^∧ 2∧Σ^∞ S^1) H_p( M) [u, "·× [0,1]"] [r, "="] H_p( M) [u, "·× [0,1]"] [ur, "[M] ×·× [0,1]"] [r, "[M] ×·"] H_p+n( M × M) [u, "·× [0,1]"] All of (<ref>) commutes except the top right trapezium, which commutes up to a factor of (-1)^n, coming from commuting [M] ∈ H_n( M) past the Thom class of the second copy of -TM. Also consider: H_p+1-n(( M^-TM)^∧ 2∧ S^1) [d, "Thom"] [r, "Swap"] H_p+1(Σ^∞ S^1 ∧( M^-TM)^∧ 2) [d, "Thom"] [r, "τ∧ Id"] H_p+1(Σ^∞ (M × M)/M × M∧( M^-TM)^∧ 2) [d, "Thom"] H_p+1+n(Σ^∞ M_+^∧ 2∧Σ^∞ S^1) [r, "Swap"] H̃_p+1(Σ^∞ S^1 ∧ M_+^∧ 2) [r, "τ∧ Id"] H̃_p+1( (M × M)/M × M∧ M_+^∧ 2) H_p+n( M × M) [u, "·× [0,1]"] [rr, "h_*τ×·"] [ur, "[0,1] ×·"] H̃_p+1+n( (M × M)/M × M∧ M_+^∧ 2) [u, "="] All of (<ref>) commutes except the bottom left triangle, which commutes up to a sign of (-1)^p +n. Then the diagram obtained by concatenating (<ref>) (<ref>), composing with maps μ_l^M × M and μ_M × M^CS similarly to (<ref>) and then composing with the natural collapse map (<ref>), has outer square given by (<ref>), so (<ref>) commutes up to a factor of (-1)^p. Combining Proposition <ref>, Corollary <ref>, Proposition <ref> and plugging these into Theorem <ref>, we find that for x ∈ H_p( N): (-1)^n Δ^GH∘ f_*(x) - (-1)^n (f× f)_*∘Δ^GH(x) = (-1)^npμ^CS(x × [M], h_*[T_diag]) - (-1)^pμ^CS(h_*[T_diag], [M] ×·) Multiplying through by (-1)^n then gives the result. 
§ CONVENTIONS FOR STABLE HOMOTOPY THEORY We work with spectra throughout this paper. We work with the sign conventions of <cit.>, mirrored: for example, we apply Σ on the left when considering the structure maps of spectrum, whereas loc. cit. applies ·∧ S^1 on the right. In this section, we recap the properties and definitions that we need: all results here are standard, but it will be convenient to have a self-contained treatment of all the sign and order conventions we require. §.§ Spectra When the spaces in the spectra are not of finite type, the definition given below does not necessarily include all morphisms of spectra considered in <cit.>. However all morphisms that we need in this paper are of this form, so the definition given below is sufficient for our purposes. A spectrum X consists of a sequence of based spaces {X_n}_n≫ 0 for n sufficiently large, along with structure maps σ_n^X: Σ X_n → X_n+1. A map of spectra f: X → Y consists of based maps f_n: X_n → Y_n for sufficiently large n, compatible with the structure maps.A homotopy between two maps X → Y consists of homotopies between the corresponding maps X_n → Y_n for sufficiently large n, compatible with the structure maps up to homotopy. We consider two spectra or maps of spectra the same if they agree for sufficiently large n.For k ∈, the functor Σ^k from spectra to itself sends a spectrum X={X_n, σ_n^X}_n ≫ 0 to {X_n+k, σ_n+k^X}_n ≫ 0, and acts similarly on maps of spectra. The homotopy category of spectra is enriched in abelian groups, and as such, given a map of spectra f: X → Y and n ∈, there is a map of spectra n · f: X → Y well-defined up to homotopy. Similarly if i ≥ 1, then the set of homotopy classes of maps of based spaces f:Σ^j X → Y is naturally an abelian group, and there is a map of spaces n· f: Σ^i → Y, well-defined up to homotopy. A suspension spectrum is one in which all structure maps are homotopy equivalences. The sphere spectrum has i^th space Σ^i S^0 ≅ [-1,1]^i/∂ [-1,1]^i. In this paper, we always work in the homotopy category of spectra. For n ≤ n', we sometimes write σ^X_nn' as shorthand for σ^X_n'-1∘…∘Σ^n'-nσ^X_n: Σ^n'-nX_n → X_n'. All spectra that we consider are suspension spectra.The advantage of working with suspension spectra is that we have the following lemmas: Let f, g: X → Y be maps betweem two suspension spectra, and n≫0 large enough that f_n and g_n are defined. Then f and g are homotopic if and only if f_n and g_n are homotopic as maps of spaces. Let X and Y be suspension spectra, and n≫ 0 large enough that X_n and Y_n are defined. Then for any map g: X_n → Y_n there is a (unique up to homotopy) map of spectra f: X → Y whose associated map f_n: X_n → Y_n is g. Since all σ^X and σ^Y are homotopy equivalences, we may choose maps f_i: X_i → Y_i such that the following diagram commutes up to homotopy: Σ^i-nX_n [r, "Σ^i-ng"] [d, "σ^X_ni"] Σ^i-n Y_n [d, "σ^Y_ni"] X_i [r, "f_i"] Y_i These are compatible with the structure maps up to homotopy, by construction. Let X be a spectrum and S a space. The spectrum X ∧ S has i^th space (X ∧ S)_i := X_i ∧ S and structure maps σ^X ∧ S_i := σ^X_i ∧ Id_S. §.§ Homology Let X be a suspension spectrum. We define its homology to be H_*(X) := H̃_*+i(X_i) for some i ≫ 0. We identify these groups for different choices of i as follows: for i ≤ i', we use the isomorphism H̃_*+i(X_i) H̃_*+i'(Σ^i'-iX_i) H̃_*+i'(X_i') These isomorphisms are compatible with each other in the sense that composing (<ref>) for i ≤ i' and i' ≤ i” gives (<ref>) for i ≤ i”. 
§.§ Thom spectra Let E → B be a vector bundle of rank r. We assume that either B is a finite CW complex or that E = f^*E' where E' → B' is a vector bundle over a finite CW complex and f: B → B'.If E is equipped with a metric, we write DE for its unit disc bundle, SE for its unit sphere bundle and B^DE for the Thom space DE / SE. This is canonically homeomorphic to the quotient space E/(E ∖ DE^∘); we use these two models for the Thom space interchangeably. The Thom spectrum B^-E of -E is the suspension spectrum defined as follows. Choose an embedding e: E ↪^L of vector bundles, for some L≫ 0. If B is not finite CW, we assume this embedding is obtained by choosing an embedding E' ↪^L and pulling back.Let ν_e be the orthogonal complement of E in ^L. Then for i ≥ L the i^th space of B^-E is defined to be (B^-E)_i := B^D(^i-L⊕ν_e) = Tot(D(^i-L⊕ν_e) → B)/Tot(S(^i-L⊕ν_e )→ B) The structure maps Σ B^D(^i-L⊕ν_e)→ B^D(^1+i-L⊕ν_e) send the [-1,1]-coordinate from Σ to the first coordinate in ^1+i-L: more precisely, (t, (u, v, b)) is sent to ((t, u), v, b), where t ∈ [-1,1], b ∈ B, u ∈^i-L and v ∈ (Dν_e)_b. This definition depended on a choice of embedding e. For different choices of e, there is a natural identification between the resulting spectra. §.§ Thom isomorphism We work in the same setting as Section <ref>. Assume also that E is oriented, with corresponding Thom class τ_E ∈H̃^r(B^E). The Thom isomorphism is the isomorphism Thom: H_*-r(B^-E) → H_*(B) given by τ_^i-L⊕ν_e∩ - : H̃_*-r+i(B^D(^i-L⊕ν_e)) where τ_^i-L⊕ν_e is a Thom class for the vector bundle ^i-L⊕ν_e, which we orient so that the canonical isomorphiam ^i-L⊕ν_e ⊕ E ≅^i-L⊕^L = ^i is orientation-preserving. This map is independent of choices, in the sense that it is compatible with the maps (<ref>) for different choices of i. §.§ Smash product We recap the construction of the smash product of spectra from <cit.>. Let X, Y be suspension spectra. Choose sequences of nonnegative integers u⃗= (u_i)_i and v⃗=(v_i)_i (which we only require to be defined for sufficiently large i≫ 0) such that * u⃗ and v⃗ are both monotonically increasing and unbounded. * u_i+v_i = i for all i. We define the smash product X ∧ Y as follows. The i^th space is (X∧ Y)_i = X_u_i∧ Y_v_i and the structure maps are as follows.If u_i+1 = u_i + 1 (so v_i+1 = v_i), σ^X ∧ Y_i is the composition Σ(X ∧ Y)_i = Σ X_u_i∧ Y_v_i X_u_i+1∧ Y_v_i+1 = (X ∧ Y)_i+1 If v_i+1 = v_i+1 (so u_i+1 = u_i), σ^X ∧ Y_i is the composition Σ (X ∧ Y)_i = Σ X_u_i∧ Y_v_i X_u_i∧Σ Y_v_i X_u_i+1∧ Y_v_i+1 = (X ∧ Y)_i+1 The definition of smash product above depends on the choice of sequences u⃗ and v⃗; however the resulting spectra for different choices are canonically identified up to homotopy equivalence, see <cit.>. Let X,Y,Z be suspension spectra. Let f: X_i ∧ Y_j → Z_i+j be a map of spaces. We may choose sequences u⃗, v⃗ as in Definition <ref> with u_i+j = i and v_i+j = j and apply Lemma <ref> to obtain a well-defined map of spectra X ∧ Y → Z. Let X be a spectrum. Then there is a homotopy equivalence of spectra f: X ∧→ X Let (u_i)_i, (v_i)_i be sequences as in Definition <ref>. We define f on i^th spaces to be the composition (X ∧)_i = X_u_i∧Σ^v_i S^0 Σ^v_i X_u_i∧ S^0 ≅Σ^v_i X_u_i X_i This is a map of spectra. §.§ Pontrjagin-Thom theory In this section, we record a concrete model for the Pontrjagin-Thom construction, for use in later sections. A stable framing on a manifold X consists of an equivalence class of isomorphisms of vector bundles over X ψ: ^i-k⊕ TX →^i. 
The equivalence relation is generated by the following relations: * ψ, ψ': ^i-k⊕ TX →^i are equivalent if they are homotopic (through isomorphisms of vector bundles). * ψ is equivalent to Id_⊕ψ: ^1+i-k⊕ TX →^1+i. Let A⊆ B be a CW subcomplex of a CW complex, and X^k a compact manifold, possibly with boundary, equipped with a stable framing. Let f: X → B be a map sending ∂ X to A. Pontrjagin-Thom data of rank L for the data above consists of a tuple (i, σ, ψ): * i: X ↪ (-1, 1)^L is an embedding. Write ν_i for the normal bundle of this embedding. * σ: Dν_i ↪ [-1, 1]^L is a tubular neighbourhood of the embedding i. * ψ: ν_i →^L-k is an isomorphism of vector bundles such that the following composition is a representative for the stable framing on X: ^L-k⊕ TX ν_i ⊕ TX ^L and such that |ψ(v)| ≥ |v| for all v ∈ν_i. Given Pontrjagin-Thom data as above, we construct a map of spectra Σ^k →Σ^∞B/A as follows.This map is defined on (L-k)^th spaces to be the composition, which we call [X]_unst: Σ^L S^0 X^Dν_i/∂ X^Dν_iΣ^L-kX/∂ XΣ^L-kB/A Here the first map Collapse sends p ∈ [-1, 1]^L to σ^-1(p) if p ∈Im(p) and to the basepoint otherwise, and the second map ψ sends (v, x) (where x ∈ X and v ∈ (Dν_i)_x) to (ψ(v), x).Standard arguments (e.g. <cit.>) show that Pontrjagin-Thom data always exists, and that the induced map of spectra is independent of the choice of Pontrjagin-Thom data up to homotopy. Let M be a compact manifold, possibly with boundary of corners. Its stable homotopy fundamental class is the map [M]: →M^-TM/∂ M^-TM constructed as follows.Let i: M ↪ (-1, 1)^L be an embedding, and σ A map of spaces [M]_unst is defined to be the map Σ^L S^0 →M^Dν_i/∂ M^Dν_i sending x ∈ [-1,1]^L to σ^-1(x) if x ∈Im(σ), and * otherwise. The map of spectra [M] is then induced by Lemma <ref>. This map of spectra is independent of choices up to homotopy. abbrv
http://arxiv.org/abs/2407.12470v1
20240717104743
Continual Learning for Temporal-Sensitive Question Answering
[ "Wanqi Yang", "Yunqiu Xu", "Yanda Li", "Kunze Wang", "Binbin Huang", "Ling Chen" ]
cs.CL
[ "cs.CL" ]
Continual Learning for Temporal-Sensitive Question Answering Wanqi Yang, Yunqiu Xu, Yanda Li, Kunze Wang, Binbin Huang, Ling Chen Wanqi Yang, Yunqiu Xu, Yanda Li and Ling Chen are with University of Technology Sydney, Sydney, 2007, Australia. (email: wanqi.yang-1@student.uts.edu.au, yunqiuxu1991@gmail.com, Yanda.Li@student.uts.edu.au, ling.chen@uts.edu.au). Kunze Wang is with University of Sydney, Sydney, 2050, Australia. (email: kwan4418@uni.sydney.edu.au). Binbin Huang is with Hangzhou Dianzi University, Hangzhou, 310018, China. (email: huangbinbin@hdu.edu.cn). July 22, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In this study, we explore an emerging research area of Continual Learning for Temporal Sensitive Question Answering (CLTSQA). Previous research has primarily focused on Temporal Sensitive Question Answering (TSQA), often overlooking the unpredictable nature of future events. In real-world applications, it's crucial for models to continually acquire knowledge over time, rather than relying on a static, complete dataset. Our paper investigates strategies that enable models to adapt to the ever-evolving information landscape, thereby addressing the challenges inherent in CLTSQA. To support our research, we first create a novel dataset, divided into five subsets, designed specifically for various stages of continual learning. We then propose a training framework for CLTSQA that integrates temporal memory replay and temporal contrastive learning. Our experimental results highlight two significant insights: First, the CLTSQA task introduces unique challenges for existing models. Second, our proposed framework effectively navigates these challenges, resulting in improved performance. continual learning, temporal-sensitive question, question answering § INTRODUCTION A temporal-sensitive question refers to a question that involves temporal-related details, and modifying this temporal information within the question will result in a different answer <cit.>. Take the question “What was the role of Barack Hussein Obama in YEAR?” as an example. If YEAR = 2006, the answer should be “Federal Senator”; whereas if YEAR = 2016, the answer should be “President of the United States”. In everyday life, we frequently encounter questions influenced by time, with answers that can change as new events occur. This unpredictability highlights the need for a novel task called Continual Learning for Temporal Sensitive Question Answering (CLTSQA), which requires continuously learn a model of temporal sensitive question answering as time progresses. Although some works have been conducted in related areas, two key challenges of CLTSQA have been overlooked: the absence of a suitable dataset, and the scarcity of effective methods in continually dealing with temporal-sensitive questions. While some existing works, e.g., <cit.>, proposed new datasets with the aim of investigating the Temporal-sensitive Question Answering (TSQA) to explore the model's sensitivity and its reasoning capabilities to temporal information. 
They follow the setting of traditional question answering. As shown in Fig. <ref>, TSQA assumes that the entire dataset is available for training the model. It lacks the ability to continuously incorporate updated and new data which could potentially alter the answer to a question as time progresses. In terms of the second challenge, many works have been proposed to retain a model's performance on evolving datasets through continual learning. For example, <cit.> studied continual learning for a single domain (Twitter data from 2018 to 2019), and <cit.> worked on efficient life-long pre-training on emerging data in multiple domains. Currently, there are no existing efforts or studies that specifically address CLTSQA. The objective of the Continual Learning for Temporal Sensitive Question Answering (CLTSQA) task is to simulate a real-world scenario where updates and new knowledge cannot be learned all at once but instead require continual learning. The CLTSQA task examines the degree to which a model forgets knowledge from earlier time periods, as well as its capability to acquire updated and new knowledge over time. To deal with the absence of an available dataset, we construct a new dataset that includes subsets of temporal-sensitive questions, thereby offering a solution to this challenge and facilitating the study of CLTSQA. Then, to make the model capable of effectively handling temporal-sensitive questions in a continual fashion, we propose a novel framework featuring 1) temporal memory replay to alleviate the catastrophic forgetting of past knowledge; and 2) temporal contrastive learning to enhance the model's sensitivity to temporal information and boost its performance on questions with the most up-to-date information. The experimental results show that: 1) the existing models struggle to deal with this challenging task, resulting in poor performance; 2) our proposed framework can effectively help the models to address CLTSQA, demonstrating not only improvement in answering the most up-to-date questions, but also good performance retention when answering historical questions. The main contributions of this work are summarised as follows:
* We propose a novel task called CLTSQA.
* We propose a new dataset to deal with the absence of an available dataset and facilitate the study of CLTSQA.
* We propose a novel framework featuring temporal memory replay and temporal contrastive learning to deal with the model-level challenge in CLTSQA.
* We have obtained experimental findings indicating that: 1) CLTSQA is a challenging yet promising task, and 2) our framework assists the model in effectively addressing CLTSQA.
§ RELATED WORK §.§ Temporal-Sensitive Question Answering Some previous studies have explored the task of Temporal-Sensitive Question Answering by introducing new datasets. The TempQuestions dataset <cit.> provides a clear definition of what constitutes a “temporal question” and utilizes specific trigger words such as “before” and “after”. To investigate temporal questions, <cit.> noted that answers to a question can change over time and created a dataset with 13% temporal-sensitive data. <cit.>, <cit.> and <cit.> also created new datasets, but with a primary focus on TSQA. By evaluating existing models on the proposed datasets, these works showed that answering temporal-sensitive questions is challenging, which serves as a motivation for our study.
Different from them, we not only extend TSQA towards a more realistic and challenging task, CLTSQA, but also offer solutions to enhance model performance in tackling it. In addition to the dataset, temporal-sensitive question learning requires the model to be sensitive to temporal information. Several studies have utilized pre-trained language models to aid in question comprehension. However, these models do not effectively distinguish between different temporal expressions found in free text <cit.>. Inspired by the framework proposed in <cit.>, our framework introduces temporal contrastive learning so that the model learns that the crucial factor lies in recognizing the variation in temporal information, rather than the specific wording of the question. §.§ Continual Learning Numerous research efforts have been dedicated to the examination of continual learning for general QA <cit.>. Through extensive exploration of the general question answering domain, researchers have discovered that temporal-related QA tasks pose greater challenges. <cit.> proposed a dataset named StreamingQA, which aims to investigate models' adaptation to changing knowledge. The dataset's context spans the years 2007 to 2020, with questions that do not involve temporally sensitive information. The StreamingQA dataset employs a specific data format (question date, question, answer, document date, document), and the question date for each query is intentionally set by the authors. However, datasets with additional fields and with narrower timeframes do not inherently enhance the model's robustness and generalizability. <cit.> designed a new continual learning task called continual knowledge learning (CKL). From a task-oriented perspective, the aim of CKL is to consistently enhance the internal knowledge of the language model through ongoing pre-training on new datasets. A noteworthy distinction is that CKL predominantly concentrates on enriching the internal knowledge within the pre-trained model, encompassing a broader domain. In contrast, CLTSQA places a stronger emphasis on a downstream task, wherein the model continuously learns and adapts to temporal-sensitive question answering. Moreover, some temporal-related QA datasets for continual learning were proposed in <cit.> and <cit.>: <cit.> extracted data from Twitter and divided the data into subsets of three months each for continual learning, and <cit.> employed the difference between consecutive snapshots of English Wikipedia and English Wikidata for both training and evaluation purposes. However, they simply used existing classical methods <cit.> that can alleviate catastrophic forgetting in continual learning, instead of proposing improvement strategies based on their datasets. § PRELIMINARIES TSQA The Temporal Sensitive Question Answering (TSQA) task aims to investigate the model's sensitivity and reasoning capabilities concerning temporal information. In TSQA, the model is provided with a context c (e.g., a document, or a series of sentences) and a question q as the input. Then, the model is required to predict the answer a by either extracting it from c or selecting one from a set of answer candidates. The specific task setup for TSQA involves training the model on an entire dataset. In order to answer temporal-sensitive questions, the model is required to not only pay specific attention to temporal information within the question, but also be capable of reasoning over the implicit temporal information within the context.
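Throughout the paper, a single example can be thought of as a (question, context, answer) record together with the time point the question refers to. The following minimal sketch fixes this record type for the later code listings; the field names (and the year field in particular) are ours and are not part of any released data format:

from dataclasses import dataclass

@dataclass
class QAExample:
    question: str  # temporal-sensitive question q
    context: str   # free-text context c
    answer: str    # gold answer a
    year: int      # time point the question refers to (used later to place it in a subset)

# Example from the introduction: changing the year changes the answer.
ex = QAExample(
    question="What was the role of Barack Hussein Obama in 2006?",
    context="Introduction of Barack Hussein Obama ...",
    answer="Federal Senator",
    year=2006,
)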
CLTSQA The TSQA task is conducted under the assumption that the model is trained using a complete dataset. However, it does not possess the capability to continuously integrate updated or new data with temporal information. To relax this assumption, and thus bridge the gap between TSQA and real-world temporal-sensitive problems, we propose a new task, CLTSQA, which forces the model to learn and perform inference in a continual learning manner. Their major difference lies in the dataset and training settings. Instead of assuming the availability of a whole dataset, in CLTSQA we require the model to stay aware of the latest knowledge, while not forgetting the old knowledge. The training data is divided into K subsets 𝒟={𝒟_1,…, 𝒟_K}, with each subset covering time points that are chronologically earlier than those in the subsequent subset, i.e., t_𝒟_k-1 < t_𝒟_k. Given an initial model M_0, it is sequentially trained on the subsets to obtain the corresponding trained models M_1, M_2, ..., M_K, where M_k denotes the model after training on 𝒟_1, 𝒟_2, ..., 𝒟_k. Each subsequent model loads the weights of the previous model and continues training. The model M_k is required to perform well on the current subset 𝒟_k, while not suffering significant performance decay on the previous subsets 𝒟_1, …, 𝒟_k-1. § CLTSQA DATASET In this section, we introduce a new dataset, CLTSQA-Data, with the aim of addressing the aforementioned data-level challenge. Our dataset is built on the basis of TimeQA <cit.>, which extracts time-evolving contexts from WikiData and generates question-answer pairs from these contexts using manual templates. We chose a collection of 20,000 questions and 5,000 contexts sourced from TimeQA. Moreover, we produced a higher volume of context-specific temporal-sensitive questions. As a result, our dataset now encompasses a total of 50,000 questions and 5,000 contexts. We then divide the whole dataset into K temporal-sensitive subsets 𝒟= {𝒟_1, 𝒟_2,…, 𝒟_K}. Fig. <ref> shows some examples, where each subset 𝒟_k consists of questions within a specific time range [ t_k^start, t_k^end ]. We keep the original context unchanged and generate questions based on it, then assign them to subsets with non-overlapping time ranges. For example, given a long context “Introduction of Barack Hussein Obama”, which ranges from 1961 to 2017, we generate a series of related questions, such as “What position did Barack Hussein Obama take in 1963?”, “What position was held by Barack Hussein Obama in 1995?”, and “Barack Hussein Obama took which position in 2010?”, and then put them into different subsets based on their time periods. Besides the explicit questions, whose answers can be directly extracted from the context, we also generate the more challenging implicit questions, whose answers cannot be directly obtained and require the model to reason over implicit temporal relations. For example, given the context “Barack Hussein Obama won re-election in the 2012 presidential election”, the answer to the question “Who is the President of the United States in 2014” should be “Barack Hussein Obama”. Table <ref> shows the statistics of the CLTSQA-Data dataset. Our dataset contains a total of 50,000 questions and 5,000 contexts. We construct K=5 subsets, which cover varying time spans to ensure that they contain similar amounts of data.
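As a concrete illustration of this assignment step, the sketch below places generated questions (records like the QAExample sketched in the Preliminaries) into K=5 non-overlapping time ranges. The year boundaries shown here are illustrative only, since the actual CLTSQA-Data boundaries are chosen so that the subsets end up with similar amounts of data:

from collections import defaultdict

# Hypothetical, non-overlapping time ranges for the K = 5 subsets.
SUBSET_RANGES = [(1900, 1979), (1980, 1994), (1995, 2004), (2005, 2012), (2013, 2020)]

def assign_subset(example):
    """Return the 1-based index k of the subset whose time range contains the question's year."""
    for k, (start, end) in enumerate(SUBSET_RANGES, start=1):
        if start <= example.year <= end:
            return k
    return None  # question falls outside every range

def build_subsets(examples):
    """Group questions into D_1, ..., D_K; the contexts themselves are left unchanged."""
    subsets = defaultdict(list)
    for ex in examples:
        k = assign_subset(ex)
        if k is not None:
            subsets[k].append(ex)
    return [subsets[k] for k in sorted(subsets)]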
The questions can be divided into five types:
* Easy reasoning, where the temporal information in the question is explicitly specified in the context.
* Joining commonsense, which requires the model to understand temporal commonsense knowledge, for example, that 2010 falls within the period 2008-2017.
* Joining multiple descriptions, which requires the model to reason over the context from multiple descriptions within the same paragraph.
* Joining multiple paragraphs, which is a multi-paragraph extension of Joining multiple descriptions - the model is required to reason over the context across multiple paragraphs. Joining multiple paragraphs is not limited to adjoining paragraphs; it also extends to cases where significant temporal gaps exist between the paragraphs that must be integrated. Consider the introductory passage about Giorgos Dedes, where the initial paragraph gives his birth year as 1943 and subsequent paragraphs narrate his life at ages 30 and 40. Failing to incorporate contextual information from earlier periods would make it challenging to answer questions such as “Which team did Giorgos Dedes play for in 1973/1983?”. This underscores the importance of seamlessly weaving together old and new text, and hence the importance of continual learning.
* Unanswerable, where the answer cannot be found in or reasoned from the context. For example, given only the description in a context, “Barack Hussein Obama was born in August 1961”, we cannot answer the question “What position did Barack Hussein Obama hold in 1960?”.
§ CLTSQA FRAMEWORK In this section, we propose a model-agnostic framework, CLTSQA-Framework, to address the aforementioned model-level challenge, thus helping an arbitrary model to learn the CLTSQA task. Fig. <ref> gives an overview of our framework, which consists of two key features: 1) temporal memory replay, and 2) temporal contrastive learning. Initialized with a pre-trained language model M_0, we follow the task setting in the Preliminaries section to sequentially train the model on the different subsets, where M_i denotes the model after training M_i-1 on the subset 𝒟_i. The first key feature is temporal memory replay, which is inherited from continual learning to alleviate the forgetting problem during training on the new subset. Specifically, a portion of the data from the time period preceding the new subset is stored, and then replayed during the learning process on the new subset. The second key feature is temporal contrastive learning, which aims at enhancing the model's sensitivity to the temporal information within the questions. Specifically, it involves creating two additional questions based on the original question, and then combining a context with these questions to form three separate inputs for the model. §.§ Temporal Memory Replay One of the key properties of the CLTSQA task is the continual learning process, which is always accompanied by the catastrophic forgetting problem - the model tends to “forget” the old knowledge while ingesting the new knowledge <cit.>. For temporal-sensitive questions in particular, after acquiring knowledge about a new question, which shares a similar context with an old question except for the temporal information, the model might encounter difficulties when trying to answer the old question again. For example, the model might struggle to answer “Who is the president of United States in 2009” after learning the new knowledge about “Who is the president of United States after 2020?”.
Motivated by memory replay <cit.>, which helps the model remember old knowledge by retaining some old training data and reusing it in the subsequent training process, we propose a temporal memory replay strategy for dealing with catastrophic forgetting of the data from previous time periods. Specifically, as the choice of which data to retain plays a crucial role in temporal memory replay, we aim to prioritize the model's attention towards data that are 1) easily learnable, for efficiently keeping previous knowledge, and 2) susceptible to distraction within the new dataset. Take the model M_i-1 as an example, which has been sequentially trained on the previous subsets 𝒟_1, …, 𝒟_i-1 and will next be trained on the current subset 𝒟_i. 1) To better retain data from previous time periods, we remove the top μ fraction of the hardest samples from the preceding subsets 𝒟_1, …, 𝒟_i-1, while retaining the easily learnable ones. This approach mitigates the challenge of data forgetting. Notably, the term “hard sample” is used to describe a sample that received the lowest evaluation score among the previous subsets. 2) From a temporal perspective, we select a fraction (ν) of data from previous time periods that have the same context but different answers, and incorporate them into the new subset. By introducing these distractors, we aim to enhance the model's robustness and its sensitivity to temporal information. §.§ Temporal Contrastive Learning CLTSQA-Data generates multiple questions based on a single context, where the questions have identical content but vary in their temporal information and expression. To enhance the model's sensitivity to temporal information in questions, and to teach it that differences in question expression do not affect the answer, the strategy of temporal contrastive learning is employed. Fig. <ref> shows the strategy, encompassing the generation procedure for contrastive and similar questions, along with the learning process employed by the model. Generation of Contrastive and Similar Questions. We generate a contrastive question q_contrast and a similar question q_similar for the original question q of each sample in the training dataset. To create the contrastive question q_contrast, we simply substitute the temporal information in the original question with different temporal references while keeping everything else unchanged. For example, the contrastive question of the original question “What position did Barack Hussein Obama hold in 2010?” is “What position did Barack Hussein Obama hold in 1995?”. It should be emphasized that the answer to the contrastive question always differs from the answer to the original question, thereby ensuring their distinctiveness. To generate a similar question q_similar, we maintain the temporal information while modifying the wording of the question. If alternative expressions of the original question are available in the CLTSQA-Data dataset 𝒟, we substitute the expression of the original question with one of those alternatives. The original question “What position did Barack Hussein Obama hold in 2010?” can be transformed into the similar question “Barack Hussein Obama took which position in 2010?”. If no other expression exists in the CLTSQA-Data dataset, we segment the question into tokens and randomly rearrange their positions, excluding the temporal information; a sketch of both of these data-level operations is given below.
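The following sketch illustrates one possible reading of these two data-level operations, namely the replay selection of temporal memory replay and the question perturbations used for contrastive learning, operating on records like the QAExample sketched in the Preliminaries. All helper names are ours, and details such as the tokenizer and the year matcher are deliberately simplified:

import random
import re

YEAR_PATTERN = re.compile(r"\b(?:1|2)\d{3}\b")  # crude year matcher, for illustration only

def select_replay_data(prev_examples, eval_scores, new_examples, mu=0.1, nu=0.1):
    """Temporal memory replay: choose old data to keep while training on the new subset D_i.
    eval_scores maps a question to the score M_{i-1} achieved on it (lower = harder)."""
    # Rule 1: drop the top-mu fraction of hardest old samples, keep the easily learnable ones.
    ranked = sorted(prev_examples, key=lambda ex: eval_scores[ex.question])
    replay_pool = ranked[int(mu * len(ranked)):]
    # Rule 2: pick a nu fraction of old samples whose context also appears in the new subset
    # with a different answer, and mix them into the new subset as temporal distractors.
    new_answers = {ex.context: ex.answer for ex in new_examples}
    distractors = [ex for ex in replay_pool
                   if ex.context in new_answers and ex.answer != new_answers[ex.context]]
    return replay_pool, distractors[:int(nu * len(prev_examples))]

def make_contrastive(question, new_year):
    """Contrastive question: same wording, different temporal reference."""
    return YEAR_PATTERN.sub(str(new_year), question)

def make_similar(question, alternatives=None):
    """Similar question: same temporal information, different wording."""
    if alternatives:  # an alternative expression of the question exists in the dataset
        return random.choice(alternatives)
    years = YEAR_PATTERN.findall(question)
    tokens = [t for t in question.rstrip("?").split() if t not in years]
    random.shuffle(tokens)  # rearrange the non-temporal tokens only
    return " ".join(tokens + years) + "?"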
For example, under this token rearrangement the original question “What position did Barack Hussein Obama hold in 2010?” may become the similar question “position What Barack Hussein Obama did hold in 2010?”. The studies in <cit.> and <cit.> demonstrate that word order does not have a significant impact on model performance across various downstream tasks, including Question Answering (QA). Therefore, we employ the aforementioned approach to strive for consistency between similar questions and the original question. Temporal Contrastive Learning. As Fig. <ref> shows, we concatenate a context c with the original question q_ori, the contrastive question q_con, and the similar question q_sim, respectively, to form the three inputs 𝐱={q_ori, c}, 𝐱_con={q_con, c} and 𝐱_sim={q_sim, c} of the model. These inputs are passed through the model, yielding three representations a_ori, a_con and a_sim. We first apply the TripletMarginLoss <cit.> function to a_ori, a_con and a_sim to obtain L_triple: T(s,p,n) = max{d(s, p)-d(s, n)+margin, 0 } where d(x,y) = ∥ x-y ∥_p and L_triple = T(a_ori,a_sim,a_con). Then a_ori and a_sim are processed by a linear layer to obtain representations â_ori and â_sim. We obtain the answer prediction loss L_predict by applying the CrossEntropy function to the target label a_target and the representation â_ori. Likewise, we obtain the similarity loss L_similar by applying the CrossEntropy function to a_target and the representation â_sim. Finally, we combine L_predict, L_similar and L_triple into the final objective: Loss = α L_predict+β L_similar+γ L_triple where α>0, β>0, γ>0 are weight factors. § EXPERIMENTS In this section, we conduct experiments for the CLTSQA task, and would like to answer the following three research questions: 1) whether the novel task CLTSQA poses new challenges to the existing QA models; 2) whether our framework helps the models to deal with the CLTSQA task; and 3) which part of our framework contributes more to the performance improvement. Data We conduct the experiments on the proposed CLTSQA-Data dataset. Specifically, we use K=5 subsets, each of which consists of around 7,000 training questions, 1,500 validation questions and 1,500 testing questions. Table <ref> shows the statistics of the subsets. Model As illustrated in Sec. <ref>, our framework is model-agnostic and can be applied to arbitrary QA models. We use the following two models as our baselines:
* FiD <cit.>, whose objective is to generate answers sequentially, token by token, in an auto-regressive manner. It has achieved impressive performance on Natural Questions <cit.> and TriviaQA <cit.>.
* BigBird <cit.>, which introduces a sparse attention mechanism that enhances performance across various tasks involving extensive contextual information. This model focuses on extracting the answers from a given sequence and has achieved remarkable outcomes in question answering.
Training We follow <cit.> and <cit.> to construct FiD and BigBird, and initialize the baselines with Natural Questions pre-trained weights. For temporal memory replay, we set μ=10% and ν=10%. For temporal contrastive learning, we set α:β:γ=1:0.5:0.5. During training, we sequentially train the model on the 5 subsets. For each subset, we train the model for 8 epochs with a batch size of 1. The model is optimized using AdamW <cit.> with a learning rate of 5e-5. Evaluation After training on a subset, we evaluate the model on the testing set of this subset as well as on all previous subsets. We use exact match (EM) and F1 score as the evaluation metrics.
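Before turning to the results, the training objective above can be made concrete with a short PyTorch-style sketch. The encoder/head interface below is a simplification we introduce for illustration (FiD is generative and BigBird extractive, so neither exposes exactly this classification head), and the triplet margin is not specified in the paper, so the value here is only a placeholder:

import torch
import torch.nn.functional as F

triplet = torch.nn.TripletMarginLoss(margin=1.0, p=2)  # margin value is a placeholder

def cltsqa_loss(encoder, head, x_ori, x_con, x_sim, target, alpha=1.0, beta=0.5, gamma=0.5):
    """Combine L_predict, L_similar and L_triple with alpha:beta:gamma = 1:0.5:0.5."""
    a_ori, a_con, a_sim = encoder(x_ori), encoder(x_con), encoder(x_sim)  # representations
    l_triple = triplet(a_ori, a_sim, a_con)  # anchor = original, positive = similar, negative = contrastive
    logits_ori, logits_sim = head(a_ori), head(a_sim)  # linear layer over the representations
    l_predict = F.cross_entropy(logits_ori, target)    # answer prediction loss
    l_similar = F.cross_entropy(logits_sim, target)    # similar-question loss
    return alpha * l_predict + beta * l_similar + gamma * l_triple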
§ RESULTS AND DISCUSSIONS §.§ Main Results Table <ref> shows models' evaluation performance after subsequently training on the five subsets. “FiD-CLTSQA” (“BigBird-CLTSQA”) and “FiD-baseline” (“BigBird-baseline”) denote the model trained with / without the proposed CLTSQA-Framework, respectively. The baselines (“FiD-Baseline” and “BigBird-Baseline”), which are trained in a sequential manner but without utilizing the proposed framework (i.e., no temporal memory replay or temporal contrastive learning), exhibit poor performance. In particular, the baselines perform worst when being evaluated on Subset1, which has the greatest temporal difference from the most up-to-date subset (Subset5). Such observations answer our first research question - the current QA models may face challenges when tackling the CLTSQA task. When it comes to the proposed CLTSQA-Framework, it is evident that this framework helps the models to obtain improved performance, especially in those “earlier” subsets. Taking the earliest subset, Subset1, as an example, when equipped with CLTSQA-Framework, the BigBird model demonstrates a 14.69% increase in EM and 6.91% increase in F1 (“BigBird-CLTSQA” v.s., “BigBird-Baseline”). More significant performance improvement could be observed in FiD, which demonstrates a 31.16% increase in EM and 23.20% increase in F1 (“FiD-CLTSQA” v.s., “FiD-Baseline”). Such observations answer our second research question - the proposed framework helps the models to deal with the CLTSQA task. The significant performance improvement could be attributed to two strategies introduced by the proposed CLTSQA-Framework: 1) the temporal memory replay, which helps the model to retain the old knowledge when ingesting the latest knowledge; and 2) the temporal contrastive learning, which helps the model to acquire representations in a manner that captures and distinguishes the temporal information present in the question, thus enhancing model's ability in answering the temporal-sensitive questions. To validate these strategies, Fig.  <ref> shows the testing performance of “FiD-Baseline” and “FiD-CLTSQA” models in different training stages, where M_i denotes the model after training on subset 𝒟_i. It could be observed that while “FiD-Baseline” encounters performance drop in Subset 1, Subset 2 and Subset 3 with the progress of training, “FiD-CLTSQA” retains its performance on those subsets throughout the training process, validating the first strategy. The second strategy could be validated from two perspectives. Firstly, going beyond retaining the performance, the model with CLTSQA-Framework can even improve performance on Subset 1 with the progress of training, showing the enhancement of ability of answering temporal-sensitive questions. Secondly, in the up-to-date subsets such as Subset 4 and Subset 5, where there is reduced necessity to retain the old knowledge, the model with CLTSQA-Framework could still obtain better performance. Table <ref> gives some examples of answers generated by “FiD-Baseline” and “FiD-CLTSQA”. §.§ Ablation Studies §.§.§ The Contributions of TMR and TCL In order to further investigate the contributions of the two strategies brought by CLTSQA-Framework, we conduct ablation studies by building two more model variants upon “FiD-CLTSQA”: * FiD-CLTSQA w/o TCL, which only applies temporal memory replay * FiD-CLTSQA w/o TMR, which only applies temporal contrastive learning. 
Table <ref> shows the final evaluation result, where “FiD-CLTSQA w/o TCL w/o TMR” is indeed the baseline model “FiD-Baseline”. The result answers our third research question: the temporal memory replay effectively alleviates forgetting of the previous knowledge, thus playing a more important role in the old subsets (“FiD-CLTSQA” v.s., “FiD-CLTSQA w/o TMR”). Differently, the temporal contrastive learning brings less significant but consistent performance improvement across all subsets (“FiD-CLTSQA” v.s., “FiD-CLTSQA w/o TCL”). Overall, the CLTSQA-Framework benefits from both modifications. §.§.§ The Novelty of TMR In order to emphasize on the novelty of temporal memory replay, we conduct a comparative experiment by employing two more model variants upon “FiD-Baseline”: * FiD-Baseline with MR, which only applies memory replay which selects 10% old knowledge from each previous subset and reuses them in the subsequent training process. * FiD-Baseline with TMR, which only applies temporal memory replay demonstrated in section <ref>. The experimental results shown in Table <ref> provide compelling proof of the superiority of our temporal memory replay method over the memory replay. § CONCLUSION In this study, we pioneered a novel task, Continual Learning for Temporal Sensitive Question Answering (CLTSQA). We first introduced a new dataset, CLTSQA-Data, to facilitate research in this area, followed by the introduction of a novel framework, CLTSQA-Framework, designed to assist models in handling temporally-sensitive QA in a continual learning context. Our experimental results revealed that while the CLTSQA task poses fresh challenges for existing models, the proposed framework effectively equips the model to overcome these hurdles, resulting in improved performance. We are confident that our contributions, encompassing both the dataset and the framework, will stimulate future research in this innovative direction. As we move forward, there is a need for further exploration of datasets and models to delve deeper into the complexities of CLTSQA. 00 chen2021dataset W. Chen, X. Wang, and W. Y. Wang, “A dataset for answering time-sensitive questions,” 2021. zhang2021situatedqa M. J. Zhang and E. Choi, “Situatedqa: Incorporating extra-linguistic contexts into qa,” 2021. wang2022archivalqa J. Wang, A. Jatowt, and M. Yoshikawa, “Archivalqa: A large-scale benchmark dataset for open-domain question answering over historical news collections,” pp. 3025–3035, 2022. liska2022streamingqa A. Liska, T. Kocisky, E. Gribovskaya, T. Terzi, E. Sezener, D. Agrawal, D. Cyprien De Masson, T. Scholtes, M. Zaheer, S. Young et al., “Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models,” PMLR, pp. 13 604–13 622, 2022. dhingra2022time B. Dhingra, J. R. Cole, J. M. Eisenschlos, D. Gillick, J. Eisenstein, and W. W. Cohen, “Time-aware language models as temporal knowledge bases,” pp. 257–273, 2022. loureiro2022timelms D. Loureiro, F. Barbieri, L. Neves, L. E. Anke, and J. Camacho-Collados, “Timelms: Diachronic language models from twitter,” arXiv preprint arXiv:2202.03829, 2022. qin2022elle Y. Qin, J. Zhang, Y. Lin, Z. Liu, P. Li, M. Sun, and J. Zhou, “Elle: Efficient lifelong pre-training for emerging data,” arXiv preprint arXiv:2203.06311, 2022. jia2018tempquestions Z. Jia, A. Abujabal, R. Saha Roy, J. Strötgen, and G. Weikum, “Tempquestions: A benchmark for temporal question answering,” in Companion Proceedings of the The Web Conference 2018, 2018, pp. 1057–1062. min2020ambigqa S. 
Min, J. Michael, H. Hajishirzi, and L. Zettlemoyer, “Ambigqa: Answering ambiguous open-domain questions,” arXiv preprint arXiv:2004.10645, 2020. ning2020torque Q. Ning, H. Wu, R. Han, N. Peng, M. Gardner, and D. Roth, “Torque: A reading comprehension dataset of temporal ordering questions,” arXiv preprint arXiv:2005.00242, 2020. shang2021open C. Shang, P. Qi, G. Wang, J. Huang, Y. Wu, and B. Zhou, “Open temporal relation extraction for question answering,” in 3rd Conference on Automated Knowledge Base Construction, 2021. han2020econet R. Han, X. Ren, and N. Peng, “Econet: effective continual pretraining of language models for event temporal reasoning,” arXiv preprint arXiv:2012.15283, 2020. shang2022improving C. Shang, G. Wang, P. Qi, and J. Huang, “Improving time sensitivity for question answering over temporal knowledge graphs,” arXiv preprint arXiv:2203.00255, 2022. biesialska2020continual M. Biesialska, K. Biesialska, and M. R. Costa-Jussa, “Continual lifelong learning in natural language processing: A survey,” arXiv preprint arXiv:2012.09823, 2020. ke2022continual Z. Ke and B. Liu, “Continual learning of natural language processing tasks: A survey,” arXiv preprint arXiv:2211.12701, 2022. jang2021towards J. Jang, S. Ye, S. Yang, J. Shin, J. Han, G. Kim, S. J. Choi, and M. Seo, “Towards continual knowledge learning of language models,” arXiv preprint arXiv:2110.03215, 2021. jang2022temporalwiki J. Jang, S. Ye, C. Lee, S. Yang, J. Shin, J. Han, G. Kim, and M. Seo, “Temporalwiki: A lifelong benchmark for training and evaluating ever-evolving language models,” arXiv preprint arXiv:2204.14211, 2022. chen2020recall S. Chen, Y. Hou, Y. Cui, W. Che, T. Liu, and X. Yu, “Recall and learn: Fine-tuning deep pretrained language models with less forgetting,” arXiv preprint arXiv:2004.12651, 2020. kirkpatrick2017overcoming J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in neural networks,” Proceedings of the national academy of sciences, vol. 114, no. 13, pp. 3521–3526, 2017. he2021analyzing T. He, J. Liu, K. Cho, M. Ott, B. Liu, J. Glass, and F. Peng, “Analyzing the forgetting problem in pretrain-finetuning of open-domain dialogue response models,” in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021, pp. 1121–1133. hu2021lora E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, “Lora: Low-rank adaptation of large language models,” arXiv preprint arXiv:2106.09685, 2021. wang2020k R. Wang, D. Tang, N. Duan, Z. Wei, X. Huang, G. Cao, D. Jiang, M. Zhou et al., “K-adapter: Infusing knowledge into pre-trained models with adapters,” arXiv preprint arXiv:2002.01808, 2020. mccloskey1989catastrophic M. McCloskey and N. J. Cohen, “Catastrophic interference in connectionist networks: The sequential learning problem,” in Psychology of learning and motivation.1em plus 0.5em minus 0.4emElsevier, 1989, vol. 24, pp. 109–165. rebuffi2017icarl S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “icarl: Incremental classifier and representation learning,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2017, pp. 2001–2010. balntas2016learning V. Balntas, E. Riba, D. Ponsa, and K. Mikolajczyk, “Learning local feature descriptors with triplets and shallow convolutional neural networks.” in Bmvc, vol. 1, no. 2, 2016, p. 3. izacard2020leveraging G. 
Izacard and E. Grave, “Leveraging passage retrieval with generative models for open domain question answering,” arXiv preprint arXiv:2007.01282, 2020. kwiatkowski2019natural T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee et al., “Natural questions: a benchmark for question answering research,” Transactions of the Association for Computational Linguistics, vol. 7, pp. 453–466, 2019. joshi2017triviaqa M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer, “Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension,” arXiv preprint arXiv:1705.03551, 2017. zaheer2020big M. Zaheer, G. Guruganesh, K. A. Dubey, J. Ainslie, C. Alberti, S. Ontanon, P. Pham, A. Ravula, Q. Wang, L. Yang et al., “Big bird: Transformers for longer sequences,” Advances in neural information processing systems, vol. 33, pp. 17 283–17 297, 2020. loshchilov2017decoupled I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” arXiv preprint arXiv:1711.05101, 2017. sinha2020unnatural K. Sinha, P. Parthasarathi, J. Pineau, and A. Williams, “Unnatural language inference,” arXiv preprint arXiv:2101.00010, 2020. sinha2021masked K. Sinha, R. Jia, D. Hupkes, J. Pineau, A. Williams, and D. Kiela, “Masked language modeling and the distributional hypothesis: Order word matters pre-training for little,” arXiv preprint arXiv:2104.06644, 2021. § CLTSQA-DATA STATISTICS Distribution of Question Types in CLTSQA-Data We investigated the various question types present in our dataset, which encompassed Easy Reasoning, Joining Commonsense, Joining Multiple Descriptions, Joining Multiple Paragraphs, and Unanswerable. Furthermore, we calculated the distribution of these question types within the entire dataset as Fig. <ref> shows. Examples in Question Types As shown in Table <ref>, we present five different question types of our CLTSQA-Data, including context, question, and answer. § ABLATION STUDY ON TEMPORAL MEMORY REPLAY We investigated the performance of temporal memory replay with / w.o the step of removing hard samples, respectively. We assess model M_5 by 𝒟_5^dev. It can be seen from Fig. <ref> that temporal memory replay with step removing hard samples has better performance. Experimental Parameters The parameter settings for the two models, FiD and BigBird, used in the experiment are illustrated in Table <ref> and Table <ref>, respectively. Experimental Results Table <ref>, <ref>, <ref>, <ref> show results of specific performance of each stage in FiD without CLTSQA-Framework, FiD with Temporal Memory Replay, FiD with Temporal Contrastive Learning and FiD with CLTSQA-Framework respectively. Each model M_i is assessed by 𝒟_i^dev and 𝒟_i^test.
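For concreteness, the sketch below illustrates one possible implementation of the temporal memory replay selection analysed in this appendix. The field names (score, context, answer) and the defaults μ = ν = 10% are assumptions made for illustration only, not the project's actual code.

```python
import random

def temporal_memory_replay(prev_data, new_data, mu=0.10, nu=0.10):
    """Drop the hardest old samples, keep the easy ones, and add temporal distractors."""
    # 1) remove the top-mu hardest samples (lowest evaluation score) from previous subsets
    ranked = sorted(prev_data, key=lambda s: s['score'])           # hardest (lowest score) first
    retained = ranked[int(mu * len(ranked)):]

    # 2) take a fraction nu of old samples that share a context with the new subset
    #    but carry a different answer, and mix them in as temporal distractors
    new_pairs = {(s['context'], s['answer']) for s in new_data}
    candidates = [s for s in retained
                  if any(s['context'] == c and s['answer'] != a for c, a in new_pairs)]
    distractors = random.sample(candidates, min(len(candidates), int(nu * len(retained))))

    return retained, new_data + distractors
```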
http://arxiv.org/abs/2407.13042v1
20240717222726
Non-Perturbative Yang-Mills Beyond One-Loop Order
[ "Seth Grable" ]
nucl-th
[ "nucl-th", "hep-th" ]
§ ABSTRACT I present a novel analytic framework for SU(N) Yang-Mills theory in the four-dimensional continuum. Background and effective field theory techniques are used to include non-perturbative contributions from cubic and quartic interactions. This approach is inspired by Savvidy who claims that first-order contributions from quartic interactions stabilize IR divergence found at one-loop order, paving the way for IR finite Yang-Mills calculations. I assess the validity of this claim and discuss the implications of my findings. § ANALYTIC DIFFICULTIES OF YANG-MILLS THEORIES Yang-Mills theories are asymptotically free, explaining phenomena like confinement and Bjorken scaling, providing the foundations of Quantum Chromodynamics (QCD) <cit.>. However, negative beta functions invalidate perturbation theory at low energy scales, where effects like confinement and gluon mass generation occur. Nonetheless, lattice QCD, which is inherently non-perturbative, has successfully calculated effects such as the hadronic spectrum and the critical temperature in QCD <cit.>. Yet a non-perturbative analytic understanding of non-abelian gauge theories remains elusive. Historically, non-perturbative methods date back to Euler and Heisenberg's work on an effective action for QED, and were later extended by Schwinger's proper-time formalism to effective theories more broadly <cit.>. Effective actions for Yang-Mills theories with covariantly constant background field strength tensors have been extensively studied, revealing vacuum instabilities arising from zero eigenvalues of the effective theory at one-loop order <cit.>. Recent research <cit.> indicates that infrared instabilities cancel between gauge and matter contributions in QCD with n_f = 12 flavors. Moreover, Savvidy has shown that instabilities due to zero modes in the effective Lagrangian can be mitigated by including contributions from quartic interactions of gauge fluctuations <cit.>.
This paper aims to compute an effective Yang-Mills Lagrangian beyond one-loop order by incorporating first-order contributions from quartic gauge fluctuations. This is achieved by introducing an auxiliary field through a modified Hubbard-Stratonovich transformation, similar to those found in large-N calculations <cit.>. In Section 3, an outline of the background field methodology is presented. Section 4 gives an eigenspectrum analysis of operators found in the effective Lagrangian at quadratic order. Section 5 presents an auxiliary field method used to set up an effective Lagrangian, and Section 6 presents detailed calculations beyond one-loop order and compares this analysis with existing lattice data. § THE BACKGROUND FIELD SET UP FOR YANG-MILLS Consider the Yang-Mills Lagrangian density for general SU(N) given by L=1/4 g_0^2(ℱ_μν^a)^2. The field strength tensor and covariant derivative are defined in the adjoint representation as: ℱ_μν^a = ∂_μδ^ac A^c_ν(x) -∂_νδ^ac A^c_μ(x) + f^abcA^b_μ(x)A^c_ν(x) D^ac_μ = ∂_μδ^ac + f^abcA^b_μ(x), where f^abc are the structure constants of some SU(N) algebra, and g_0 is the bare Yang-Mills coupling constant. In Euclidean space with the temporal direction compactified on the thermal cylinder, the partition function is <cit.> Z=∫𝒟A e^-1/4g^2_0∫_x (ℱ_μν^a)^2. Equation (<ref>) demands that gauge fields A^a_μ(x) are periodic in the temporal direction giving A^a_μ(0,x⃗) = A^a_μ(β,x⃗) <cit.>. Next, the gauge fields are separated into a background-field B^a_μ(x) and fluctuations a^a_μ(x) such that A^a_μ(x)=B^a_μ(x)+a^a_μ(x). With this, integrals over linear contributions of the fluctuations vanish as do terms that go like B^a_μ(k)a^b_μ(k) due to orthogonality. This yields the field strength tensor as <cit.>: ℱ_μν^a = F_μν^a + D^ac_μ a^c_ν(x) -D^ac_ν a^c_μ(x) + f^abca^b_μ(x)a^c_ν(x) D^ac_μ = ∂_μδ^ac + f^abcB^b_μ(x), where D^ab_μ and F^a_μν are functions of B^c_μ(x). Choosing a covariantly constant self-dual field strength tensor of <cit.>, F^a_μν=[ 0 B^a 0 0; -B^a 0 0 0; 0 0 0 B^a; 0 0 -B^a 0 ], and a background-field configuration of B^a_μ(x) = -1/2F^a_μνx_ν, satisfies the classical source-free Yang-Mills equations <cit.>: D^ab_μ F^b_μν =0. The effective one-loop Lagrangian is now: 1/4g^2(ℱ^a_μν)^2_quadratic = 1/4g^2[(D^ac_μa^c_ν - D^ac_νa^c_μ)^2 + 2f^abcF^a_μν a^b_μ a^c_ν + 2(D^ac_μ a^c_μ)^2 ] +c^a[(D^2)^ac]c^c, and is invariant under the local gauge transformations <cit.> a^a_μ(x) → a^a_μ(x)- f^abcβ^b(x) a^c_μ(x) B^a_μ(x) → B^a_μ(x) + B^a_μ(x)D_μβ^a(x) c^a(x)→ c^a(x) -f^abcβ^b(x) c^c(x). Considering kinetic terms in the action such as (D^ac_μa^c_ν)^2, integration by parts can be performed over the gauge fluctuations a^a_ν as they now transform as matter fields in the adjoint representation <cit.>. With the gauge fixing condition of D^ac_μ a^c_μ(x) =0 <cit.>, equation (<ref>) augmented with ghost fields c^a(x) is Z=∫ dB∫𝒟a𝒟c̅𝒟c e^-∫_x 1/4g^2_0(F^a_μν)^2-S_0-S_I where <cit.> <cit.> S_0 = ∫_x 1/2a^a_μ[-(D^2)^acδ_μν + 2F_μν^cf^abc]a^c_ν + c^a(-D^2)^acc^c S_I=∫_x g_0(D_μ a_ν^a)f^abc a_μ^b a_ν^c -c^a (f^abcD^gb_μ) a_μ^g c^c+ g^2_0/4(f^abca_μ^b a_ν^c)^2. The quadratic terms are relabeled as θ_Glue≡-(D_μ^2)^acδ_μν + 2F_μν^bf^abc, and θ_Ghost≡-(D_μ^2)^ac such that the one-loop effective theory is <cit.> Z≈∫ dB e^-β V/4g^2_0(F^a_μν)^2-1/2ln(θ_Glue)+ln(θ_Ghost). As noted by Savvidy, Leutwyler, and others <cit.>, dropping gauge fluctuations beyond quadratic order generates IR divergences in ln(θ_Glue), indicating the need for regulating terms from higher-order interactions.
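As a quick cross-check of this choice of background, the following numpy snippet (an illustrative sketch with an arbitrary amplitude B, not part of the original analysis) verifies that the field strength tensor above is antisymmetric and self-dual in four Euclidean dimensions.

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four Euclidean dimensions
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = round(np.linalg.det(np.eye(4)[list(p)]))  # signature of the permutation

B = 1.0  # arbitrary amplitude, only used for the check
F = np.array([[0.0,   B, 0.0, 0.0],
              [ -B, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0,   B],
              [0.0, 0.0,  -B, 0.0]])

# dual tensor: \tilde F_{mu nu} = (1/2) eps_{mu nu rho sigma} F_{rho sigma}
F_dual = 0.5 * np.einsum('mnrs,rs->mn', eps, F)

print(np.allclose(F, -F.T))    # antisymmetry
print(np.allclose(F_dual, F))  # self-duality, F = \tilde F
```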
§ THE ONE-LOOP EIGENSPECTRUM To find the eigenspectrum of θ_Glue, contributions from the covariant derivative and the field strength tensor are considered separately, as -(D^2)^acδ_μν and 2F_μν^bf^abc are simultaneously diagonalizable in Lorentz and color space <cit.>. Therefore the sum of eigenvalues of -(D^2)^acδ_μν and 2F_μν^bf^abc equals the eigenvalues of -(D^2)^acδ_μν+2F_μν^bf^abc. Equations (<ref>) and (<ref>) give the covariant derivative in terms of the field strength tensor as D^ac_μ = ∂_μδ^ac +i/2A^acF_μνx_ν where A is the hermitian matrix consisting of the sum of generators in the adjoint representation. Letting the Lorentz index run from 0 to 3 gives <cit.> -(D^2)^ac = -(∂^2_0)δ^ac -(∂^2_1)δ^ac + i(AB)^ac(x_1∂_0-x_0∂_1) + 1/4(A^2B^2)^ac(x_0^2+x_1^2 ) -(∂^2_2)δ^ac -(∂^2_3)δ^ac + i(AB)^ac(x_3∂_2-x_2∂_3) + 1/4(A^2B^2)^ac(x_2^2+x_3^2). The color matrix A has two zero eigenvalues. In the diagonal basis of A these zero eigenvalues reduce -(D^2)^ac to a massless d'Alembert operator which vanishes in zeta-function regularization <cit.>. Letting A' be a six-by-six diagonal matrix containing the non-zero eigenvalues of A, consider the operators c_μ =[ ∂_μ 1/(BA')^1/2 + 1/2(BA')^1/2x̂_μ], c^†_μ = [-∂_μ 1/(BA')^1/2 + 1/2(BA')^1/2x̂_μ]. Using the canonical commutation relation [x̂_μ, -i∂_ν]=iδ_μν it can be shown that [c_i,c^†_i]=1. The six non-trivial components of -(D^2), labeled -(D^2)', are represented as <cit.> -(D^2)' = BA'[(c^†_0+ic^†_1)(c_0-ic_1) + 1] + BA'[(c^†_2+ic^†_3)(c_2-ic_3) + 1], and the commutation relation of (c^†_0+ic^†_1) and (c_0-ic_1) is [(c_0-ic_1),(c^†_0+ic^†_1)] =2. Thus, (c^†_0+ic^†_1)(c_0-ic_1) effectively acts as a double number operator, and -(D^2)' has the form of harmonic oscillators in the 0-1 and 2-3 planes. The eigenspectrum Ω of (D^2_μ)' is Ω= ∑_m,n BA'(2n+1) + BA'(2m+1). Finally, the eigenvalues of F_μν come in complex pairs of ± i B, giving the total eigenspectrum of -(D^2_μ)' as <cit.>, (Λ^+_a)_m,n = (2n+1)Bλ_a +(2m+1)Bλ_a +2Bλ_a (Λ^-_a)_m,n = (2n+1)Bλ_a+(2m+1)Bλ_a-2Bλ_a, with m,n∈ℕ. The eigenstates of -(D^2)' are given as b^a_μ(x)_mn = ((c^†_0)^ab+i(c^†_1)^ab)^n((c^†_2)^ab+i(c^†_3)^ab)^m b^a_μ(x)_00, where the groundstate b_00 is given by b^a_μ(x)_00 = (e^-1/4 Bλ^a(x_ν)^2)_μ. The non-trivial spectrum of -(D_μ)^2 contains a zero eigenvalue where m=n=0, explicitly generating IR divergences in the one-loop approximation <cit.>. To regulate this divergence, the eigenspectrum of -(D_μ^2)' can now be used to calculate lnθ_Glue and lnθ_Ghost, with added contributions from cubic and quartic interactions through the use of an auxiliary field Δ. With this and appropriate minimization conditions for B and Δ, presented in the following sections, an effective form of (<ref>) can be calculated with a saddle point approximation which is asymptotically dominant in the large β V limit. § VACUUM CALCULATIONS WITH THE USE OF AN AUXILIARY FIELD Starting with the gauge-fixed partition function and dropping the linear coupling of a^a_μ to the ghost fields gives Z_0= ∫ dB 𝒟a 𝒟c𝒟c e^-∫_x (1/4g^2_0F^a_μν)^2 -S_0 - S_I where S_0=∫_x 1/2 a^a_μ[θ_Glue]a^c_ν + c^a[θ_Ghost]c^c S_I=∫_x -g_0(D_μ a_ν^a)f^abc a_μ^b a_ν^c + g^2_0/4(f^abca_μ^b a_ν^c)^2.
To integrate the interaction terms at R_0 level <cit.>, a Hubbard-Stratonovich transformation is applied to (<ref>) of the form <cit.>, ∫^∞_-∞𝒟σ∫^∞_0 𝒟ξRe[e^-i∫_x ξ^a_μν(σ^a_μν - f^abca^b_μ a^c_ν)] = 1, under the ansatz that ξ^a_μν is diagonal in Lorentz space, the transformation acts only on such terms, and the remaining terms in S_I will be discarded. The effective action of the gluon contribution is now S^Red_eff= -∫_x 1/2a^a_μ[θ_Glue^ac-iξ^b_μνf^abc]a^c_ν - g_0(D_μ a_ν^a)σ^a_μν + (g^2_0/4σ^a_μν)^2 + iξ^a_μνσ^a_μν. Integrating out σ^a_μν, letting ξ=ξ̅+ξ'(x), and discarding fluctuations ξ'(x) results in an additional term of -(D_μ a^a_ν)^2 in the effective gluon action giving, S^glue_eff= -∫_x 1/2a^a_μ[(D^2_μ)^acδ_μν + 2A^acF_μν-iξ_μν A^ac]a^c_ν -(iξ^a_μν)^2/g^2_0, Invoking this difference of a minus sign on eq (<ref>) gives the operator decomposition of (D^2_μ)^ac as (D^2)^ln = (∂^2_0)δ^ac +(∂^2_1)δ^ac - i(AB)^ac(x_1∂_0-x_0∂_1) -1/4(A^2B^2)^ac(x_0^2+x_1^2 ) +(∂^2_2)δ^ac +(∂^2_3)δ^ac- i(AB)^ac(x_3∂_2-x_2∂_3) - 1/4(A^2B^2)^ac(x_2^2+x_3^2). Then, similar to equation (<ref>), (D^2)' = (BA')(a^†_0-ia^†_1)(a_0+ia_1) + A'B + (BA')(a^†_2-ia^†_3)(a_2+ia_3) + A'B. where a_μ =[ ∂_μ 1/(BA')^1/2 - 1/2(BA')^1/2x̂_μ], a^†_μ = [∂_μ 1/(BA')^1/2 + 1/2(BA')^1/2x̂_μ]. This operator decomposition gives the same eigenvalue spectrum as -(D^2_μ)^ac. However, the eigenstates of (D^2_μ)^ac are exponentially growing as opposed to (<ref>), leading to nonphysical propagators. The variable substitution a^a_μ(x)→ ia^a_μ(x) restores the proper sign of the kinetic term and leaves the eigenspectrum of 2A^acF_μν unchanged. Further, the variable substitution iξ_μν→ BΔδ_μν gives the effective action of S^Glue_eff= -∫_x 1/2a^a_μ[-(D^2_μ)^acδ_μν + 2A'^acF_μν+Δ B A^acδ_μν]a^c_ν -(BΔ^a_μν)^2/g^2_0. Taking the saddle point approximation for B and Δ, the partition function is now Z= ∫𝒟a𝒟c𝒟c Re[ e^-∫_x 1/4g^2_0(F^ab_μν)^2-(BΔ^a_μν)^2/g^2_0+a^a_μ[-(D^2)^acδ_μν +2A^acF_μν + Δ B A^acδ_μν]a^c_ν +c̅^a[(-(D_μ)^2)^ac]c^c]_B,Δ. where Δ and B are minimizations of their respective functions in (<ref>). The form of (<ref>) constrains the saddle equations of Δ to the positive real axis, and likewise the saddle point of B will be found to give B≥ 0. Without this appropriate constraint, the theory diverges due to an infinite number of tachyon solutions. Further, as (BΔ̅^a_μν)^2/g^2_0 is a constant, it can be removed with a counter term, giving an effective Z as Z= e^-β V/g_0^2(Bλ^a)^2-1/2ln[θ^R_0_Glue]+ln[θ_Ghost]|_B̅Δ̅ where θ^R_0_Glue=-(D^2)^ac_μν + 2 A^ac F_μν + Δ B A^acδ_μν. The effective operator θ^R_0_Glue remains gauge invariant under the local transformations of U(x)= e^f^abcβ(x)^b_μ associated with (<ref>), as θ_Glue is invariant under (<ref>) and U^†AU=A. § CALCULATING THE EFFECTIVE ACTION To calculate lnθ_Ghost I will use zeta function regularization of the form <cit.> lnθ =-[d/ds1/Γ[s]∫_0^∞ dττ^s-1 K_θ]_s=0, where K_θ = Tr^ab_μν∑_n,me^-τθ/μ^2 is the heat kernel of the operator θ and μ is an arbitrary renormalization scale. Noting that contributions where λ^a=0 go to zero in zeta-function regularization, the heat kernels are given as <cit.> K_Ghost = ∑_lβ V (Bλ^l)^2/16π^2[1/sinh^2(Bλ^lτ/μ^2)] & K^R_0_Glue =∑_lβ V (Bλ ^l)^2/4π^2(e^-τ B λ^l Δ/μ^2)[2+1/sinh^2(Bλ^lτ/μ^2)]. where l indexes from one to six. The ghost field contribution gives lnθ_Ghost = - ∑_l β V (Bλ^l)^2/48 π ^2[ ln(Bλ^l/μ^2)+ln(2e/A^12)]. where A is the Glaisher constant.
For the gluon contribution, the exponential dependence on Δ is expanded giving -1/2ln[θ^R_0_Glue] =∑_lβ V (Bλ^l)^2 /8 π ^2×d/ds[1/Γ(s)(2∫^∞_0 dττ^s-1 e^-τ Bλ^lΔ/μ^2 + ∫^∞_0 dτ∑^2_n=0τ^n-1(-Bλ^lΔ/μ^2)^n/n!1/sinh^2(Bλ^lτ/μ^2) +∫^∞_0 dτ∑^∞_n=3τ^n-1(-Bλ^lΔ/μ^2)^n/n!1/sinh^2(Bλ^lτ/μ^2))]_s=0. The first integral in (<ref>) is vanishing in the IR, curing the one-loop IR divergence. The second integral containing n<3 contributions gives two UV divergences, one from the n=0 term, and one from the n=2 term. Further, the n=1 and n=2 terms give the only non-vanishing IR contributions. The second sum in (<ref>) contains a series of UV finite contributions, and is real, and absolutely and uniformly convergent for Δ≥ 0 giving commutativity of the limits, collectively yielding, with the sum over color space implied, ln Z/β V = -(Bλ^l)^2/g_0^2-(Bλ^l)^2/48π^2[11 ln(B λ^l /μ ^2) + C +f(Δ)+g(Δ,B,μ) ]|_B̅Δ̅ where C= ln(e/2 A^12) & g(Δ,B,μ)= 3 Δ^2ln(B λ^l /μ ^2)+12 ln (Δ ) & f(Δ)= -Δ(ln(64 π ^6)-3Δln (2)) +12 Δln(Γ[Δ +2/2])- 24 ζ ^(1,0)(-1,Δ +2/2). There are a variety of ways to regulate the divergent gluon contributions, and the self-energy of the gauge fields up to the scale of the Landau pole for each regulated Lagrangian is numerically sensitive to these choices <cit.>. To maintain all IR physics, and remove the UV divergent, and scale-dependent physics associated with Δ I will absorb g(Δ,B,μ) into the running coupling such that <cit.> 1/g^2(μ)= 1/g_0^2+ 1/48 π ^2g(Δ,B,μ), producing an effective action in which all scale dependence couples to the background field, and giving a beta-function of d g(μ)/dln(μ) =-11g^3(μ)/48π^2, which matches the perturbative beta-function at first loop order times a factor of 1/3. Solving (<ref>) for g(μ) gives a renormalized pressure is of ln Z/β V=-∑_a(B λ^l) ^2/48 π ^2[11ln(Bλ^l /Λ_YM ^2)+C +f(Δ)]_B,Δ. The function f(Δ) can be plotted to show the existence of a clear stable minimum. It can be numerically shown that f(Δ) is monotonically increasing many order of magnitude beyond the stable minimum shown in <ref>. The gap equation for Δ is 2 ζ ^(1,1)(-1,Δ +2/2)+γ_E Δ -2 log(Γ(Δ +2/2))-(Δ(H_Δ/2+log (2)))+log (2 π )=0, where H is the harmonic number function. As the gap equation for Δ decouples from λ^l its saddle is identical for all color indices giving Δ→ 2.82898. Plugging (<ref>) into (<ref>) gives ln Z/β V=-∑_l(B λ^l) ^2/48 π ^2[(14.4634 -11 log(B λ^l /Λ_YM^2))]. The non-zero free energy density contribution (-ln Z/β V)^l is plotted showing the existence of stable minimum, and the unstable perturbative minimum at B=0, and is likewise monotonically increasing beyond the stable minimum, structurally matching results of <cit.>. The gap equation for B is (14.4634 -11 log(B λ ^l/Λ_YM ^2))-11 /2=0 giving B→2.25885 Λ_YM ^2/λ^l , B→ 0, where the trivial solution gives an unstable vacuum pressure, associated with the perturbative minimum. Summing over the SU(3 )color index for the stable solution of B gives ln Z/β V =6× 0.0592377 Λ_YM^4. The self energy of the gluon field as due to the gap equation for Δ in(<ref>) is equivalent for all elements where λ^a ≠ 0, and zero else wise. Thus the final effective theory is gauge invariant as the self-energy of the gauge fields is a Lorentz scalar and a gauge singlet, and the remaining terms are comprised of covariant derivatives and the field strength tensor. Now the effective mass per component of the scalar glueball for non-zero λ^a can be read off of (<ref>) as m^a_Glue = √(Bλ^aΔΛ^2_YM)=2.52789Λ_YM. 
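As a quick numerical sanity check of the quoted saddle-point values, the short script below reproduces the numbers cited above; it is an illustrative back-of-the-envelope calculation with Λ_YM set to 1 and the saddle value of Δ taken from the text.

```python
import numpy as np

# gap equation for x = B*lambda^l / Lambda_YM^2:  (14.4634 - 11*log(x)) - 11/2 = 0
x = np.exp((14.4634 - 5.5) / 11.0)
print(x)                                  # ~2.25885, as quoted

Delta = 2.82898                           # saddle of the auxiliary field quoted above

# magnitude of the per-component contribution to ln Z / (beta V) at the stable minimum
coeff = x**2 * (14.4634 - 11.0 * np.log(x)) / (48.0 * np.pi**2)
print(coeff)                              # ~0.0592377; six non-zero color components in total

# effective glueball mass per component, m = sqrt(B*lambda^l*Delta), in units of Lambda_YM
print(np.sqrt(x * Delta))                 # ~2.52789
```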
I cannot currently make predictions on the scale of the Landau pole in our theory Λ_YM, nor have I done any matching condition or reparameterization analysis with other possible renormalization schemes. However, if Λ_YM falls in the range of 400-700 MeV, it gives a mass range of m^a_Glue≈ 1011.16 to 1769.53 MeV, which is in agreement with decades of quenched lattice QCD predictions for the scalar glueball mass <cit.>. § FUTURE WORK The eigenspectrum analysis and general computational methods can be extended to include matter fields <cit.>, finite temperature, and chemical potential in QCD. Further, one could calculate the propagators of the theory in the energy-eigenbasis of -D^2, and confinement parameters such as the Polyakov loop. However, it is imperative to understand the wide range of renormalization schemes available for non-perturbative calculations, as typically RG-flow analysis is unavailable in the large coupling regime and particularly at the scale of the Landau pole. There is a rich tapestry of non-perturbative field theory calculations that can be done, including non-perturbative standard model analysis, which may lead to beyond “perturbative” standard model physics. § ACKNOWLEDGEMENTS This work is supported by DOE award No DE-SC0017905. I am grateful for all the fruitful discussions had with Paul Romatschke, Ryan Weller, Johannes Reinking, Willie Wei Su, and Scott Lawrence.
http://arxiv.org/abs/2407.12728v1
20240717164721
Exploring the interplay of individual traits and interaction dynamics in preschool social networks
[ "Gülşah Akçakır", "Amina Azaiez", "Alberto Ceria", "Clara Eminente", "Guglielmo Ferranti", "Govind Gandhi", "Aishvarya Raj", "Iacopo Iacopini" ]
physics.soc-ph
[ "physics.soc-ph", "cs.SI" ]
§ ABSTRACT Several studies have investigated human interaction using modern tracking techniques for face-to-face encounters across various settings and age groups. However, little attention has been given to understanding how individual characteristics relate to social behavior. This is particularly important in younger age groups due to its potential effects on early childhood development. In this study, conducted during the Complexity 72h Workshop, we analyze human social interactions in a French preschool, where children's face-to-face interactions were monitored using proximity sensors over an academic year. We use metadata from parent surveys and preschool linguistic tests, covering demographic information and home habits, to examine the interplay between individual characteristics and contact patterns. Using a mixture of approaches, from random forest classifiers to network-based metrics at both dyadic and higher-order (group) levels, we identify sex, age, language scores, and number of siblings as the variables displaying the most significant associations with interaction patterns. We explore these variables' relationships to interactions within and outside classrooms and across mixed and single-grade classes. At the group level, we investigate how group affinity affects group persistence. We also find that higher-order network centrality (hypercoreness) is higher among children with siblings, indicating different group embedding despite similar total contact duration. This study aligns with existing literature on early social development and highlights the importance of integrating individual traits into the study of human interactions. Focusing on 2-5-year-olds offers insights into emerging social preferences during critical phases of cognitive development. Future research could use these findings to enhance mechanistic models of complex social systems by incorporating individual traits. § INTRODUCTION Complex social systems, consisting of numerous interacting individuals, can be effectively represented by networks of nodes connected via edges <cit.>. Since many different natural and man-made systems can be characterised through individual interacting parts, complex networks have emerged as a universal representation to analyse and model them <cit.>. While the study of networks began with graph theory in the field of discrete mathematics <cit.>, it has increasingly been applied to a wide range of systems. In the social sciences <cit.>, networks have helped to understand, among other things, the emergence of norms <cit.>, cooperation <cit.>, the dynamics of social contagion <cit.>, and the formation of opinions and consensus <cit.>. Over the past decades, advancements in automated data collection technologies have enhanced researchers' ability to feed empirical data into theoretical approaches, allowing the integration of insights obtained from observational studies of social systems into mathematical modeling. In particular, sensors like Radio Frequency Identification (RFID) tags enable the collection of comprehensive real-time data on close-range or face-to-face interactions <cit.>.
Research in the field of proxemics has uncovered correlations between social and spatial distances, where a closer physical distance between interacting individuals often indicates a more intimate relationship <cit.>. More generally, taken as a proxy for social interactions, these longitudinal data of dyadic contacts between individuals can then be studied within the framework of temporal (social) networks <cit.>, where the strength of connections can map the frequency or the duration of interactions <cit.>. The temporally-resolved nature of these empirical data collections also enables higher-order network representations, where dyadic interactions are replaced by groups <cit.> —and links become hyperlinks connecting an arbitrary number of nodes <cit.>. Such networks can subsequently be analysed in order to learn more about an individual's importance or role within the network, in particular with respect to the context of interaction and the research question behind the study. For example, dominant nodes within a network can be identified through network centrality measures. These measures can then be used to locate and analyse peripheral nodes or, in the context of social systems, isolated members of the group. Significant efforts have been put forward towards developing more realistic models of social interaction starting from data collected through these types of technologies <cit.>, both in pairwise and non-dyadic settings <cit.>. Despite the success of these data-driven modeling approaches, they often neglect the individual characteristics that can influence the observe social dynamical. The integration of heterogeneous traits at the level of nodes could instead lead to a better characterisation of the driver of social interactions, enhancing our understanding of these systems and our ability to inform realistic models. While research on social interaction patterns often falls short of supplementing contact information with social identity markers and individual characteristics beyond basic demographics like gender or age, there are exceptions <cit.>. In one such study <cit.>, investigating the contact preferences of 6–12 years old children in a primary school in France led to evidence for sex-based homophily. The observation that sex-based homophily increases with age is made more complex when considering weak ties. In this case, sex homophily decrease with grade for girls, yet increases with grade for boys <cit.>. Furthermore, even when metadata on the participants are available, studies typically focus on one dimension at a time or, at most, interactions between two dimensions. However, relying on multidimensional traits to assess social interactions holds promise of gaining a better understanding of dynamics of behavioral networks <cit.>. Multidimensional traits can reveal complex interdependencies and patterns that single or two-dimension analyses might overlook, providing deeper insights into how various social factors interplay and influence contact preferences at the dyadic and group level. In this study, we investigate the interplay between individual characteristics and social interactions in a French preschool <cit.>, utilizing time-resolved data on face-to-face proximity collected through wearable sensors over different months throughout the time-span of a year. The study of early childhood interactions provides valuable insights into how developmental factors impact the functioning of human social systems. 
In fact, through interaction with peers and caregivers, children between the ages of 3 and 6 develop skills in social and emotional regulation <cit.>. Moreover, research in the fields of social-emotional competence and school-readiness demonstrate that social competence of children entering kindergarten is a strong predictor for later academic success <cit.>. Getting a deeper understanding of social interaction in early childhood can also inform us about the development of social preferences and homophily. Several studies demonstrate that social links are more likely to be formed between individuals who share traits such as gender, race, and social status <cit.>. Further insights can thus be gained by examining how these phenomena emerge during the critical period when children develop social skills and are influenced by their peers. Controlled experiments conducted in educational institutions of a higher level with young children or adult participants (e.g., <cit.>) do not fully inform us about the factors contributing to formation and dynamics of behavioral networks in the early childhood period. Considering prominent differences in structure and rules both in- and out-of-class settings (e.g., seating arrangements) on top of overall developmental level, one anticipates to find substantial differences in the preferences. Primary school students and beyond typically spend the entire class time seated near peers, either by choice or through assigned seating arrangements. Preschoolers, on the other hand, are usually freer to move around and interact with a broader range of classmates even in class. Research on early childhood social dynamics utilising wireless sensor networks has gained some attention in recent years. The collection of movement and speech data with sensing technologies in pre-school settings has been used to learn about the relationship between language development and social interactions <cit.>. In fact, movement data alone can provide a rich description of social dynamics since this data captures both information on proximity and synchrony, which are distinct indicators used distinguish between friendships and ephemeral interactions to achieve temporary goals <cit.>. Findings on age-related differences in expectations of inter-personal distancing <cit.> show that children develop reason abilities about inter-personal space at an early age, supporting the use of sensor methods in pre-school classrooms. Network approaches to studying peer to peer relationships can reveal links between developmental disabilities such as autism spectrum disorder and lower social connectedness and isolation <cit.>. Similar results were found by Chamberlain et al., in a study on the involvement on autistic children in typical classrooms; children with high functioning autism or autistic spectrum disorder experienced lower centrality, acceptance, companionship, and reciprocity, but not lower levels of loneliness<cit.>. In recent work exploring the links between socio-demographic traits and homophily in pre-school children using a range of individual and group-based indices found that children choose to interact similar others, particularly with respect to sex and linguistic development features. Additionally, sex-based homophily increased with age <cit.>. Previous work on the structure of social groups has shown that tendencies towards cluster formation differs between children and adults. In particular, it was found that the levels of transitive organisation increased from the age of 3 to 11. 
This is said to reflect cognitive development in children giving rise to interpersonal preferences<cit.>. In the following report, we identify the relevant socio-demographic and linguistic features that exhibit variability among the pupils, and the consistent interaction patterns during both in-class and out-of-class periods. We do this by examining both dyadic and higher-order signatures of social interactions, accounting thus for the existence of social groups of different sizes that be combined in non-trivial patterns. Finally, we look for differences in interaction patterns when accounting for individual traits. § MATERIALS AND METHODS §.§ Dataset This work utilizes data from the DyLNet project <cit.>. The goal of the original project was to observe the co-evolution of social networks and language development of children in pre-school age. The dataset comprises information on a total of 164 children and their interactions within school setting. Social interactions were estimated using spatial proximity sensors. Proximity data was collected in the span of 10 months. Specifically, recordings were made during one week in each of the 10 months and in 9 sessions per week. Alongside the information on interactions, the dataset also contains the metadata collected through a survey administered to the parents and language tests administered to the children. The survey is composed of basic socio-demographic information on the children as well as additional information regarding their attitudes, favorite activities outside of school, and home environment. The language tests were designed to assess the level of language development of the participants. The authors of Ref. <cit.> already investigated the relationship between network structure and some additional features by means of homophily. In particular, they investigated whether networks aggregated over the timespan of four months showed homophilic behavior with respect to gender, dominant language of the child, occupation category of the mother, occupation category of the father, education level of the mother, education level of the father, vocabulary size, and syntactic development level. They found that the dimension showing the most significant level of homophily is gender, especially during time spent out of class. Moreover, they found that the level of homophily is very close to the baseline for all dimensions during class time, probably due to the the fact that during class-time children are assigned fixed seats. §.§ Individual characteristics The uniqueness of this dataset lies in the metadata that comes attached with the individual nodes. Parents of the children participating in the study were asked to fill in questionnaires in order to provide a basic socio-demographic characterization of the children (gender, age) as well as other information regarding their environment at home and activity preferences. Information relevant to assess the link between language and social interactions was collected through the questionnaire and a series of language tests. Parents were asked to indicate the main language spoken in the household and whether the children could understand and speak a language other than French. Additionally, children were administered individual language tests in order to evaluate their receptive lexical skills, short-term memory and their receptive syntactic skills, and the individual scores from these tests were recorded in the dataset. 
We first inspected the descriptive statistics of the survey data to identify features that can augment our understanding of social behavioural patterns when combined with the high-resolution contact data. Among the participating children, the distribution of sex is comparable, with 81 female and 83 male. The age gap spans nearly three years, ranging from a minimum of 24 months to a maximum of 59 months. Based on their school level (i.e., 1^st, 2^nd and, 3^rd grade), children were divided into 7 classes. Two classes have mixed grades (3 and 5), with one combining 1^st- and 2^nd-graders, and the other combining 2^nd- and 3^rd-graders. Figure <ref>(a) illustrates the age and grade composition of each class. As expected, the mixed classes show the highest age variability, with ages ranging from 26 to 44 months in Class 3 and 41 to 57 months in Class 5. Overall, we observe that sex balance was maintained across the classes, except for one class where there was a 70-30% split, with male students in the majority. The survey questions regarding children's personality traits were restricted to only two aspects: sociability and talkativeness. Figure <ref>(b) demonstrates the distribution of parent responses to the question of if the child is social or shy, categorized by sex. We observe that parents perceive a higher rate of male pupils as shy compared to females, whereas females are more likely to be categorized as sociable. The information on talkativeness lacks variability and so is not included in the further analyses. As illustrated in Figure <ref>(c), the number of siblings is another variable that exhibits variation among the group of pupils who participated in the original study, with the majority having at least one sibling. The dataset also includes two indicators per both lexical and syntactic skills, as well as short-term memory span evaluations administered at two different time points, resulting in a total of 10 measures. The linguistic evaluations include test items specifically designed for the level of each grade and 10 anchor questions presented to the children whichever their grade —and chosen to be rather adapted to 3rd grade pupils. Since the test items measuring the same type of skills are highly correlated (not shown), we aggregated them by averaging, for each skill, the sum of distinct scores from the two consecutive years. These two resulting aggregated linguistic scores show a correlation, yet we still observe considerable variation across different score ranges. Conversely, short-term memory span does not correlate with linguistic skills. Probably unsurprisingly, these developmental measures, even though separately designed for each grade, turn out to be correlated with age, but at moderate levels. The boxplots for the three resulting development skill test measures are reported in Figure <ref>(d), disaggregated by sex. Parents were also asked to indicate their children's preferred activities from a set of proposed ones, distinguishing between daytime and nighttime activities. Aggregated counters associated to the given answers are reported in Figure <ref>(a), while the correlations between them is displayed in Figure <ref>(b). We first notice that the sample of children is quite heterogeneous, and most of the variables are poorly correlated. Additionally, we also see from (a) that some variables exhibit low variability and may not effectively distinguish social behaviors. 
Nevertheless, we decided to retain all these features to avoid arbitrary selections, while we did not use dimensionality reduction techniques to preserve their individual interpretation. §.§ From proximity data to networks Information on spatial proximity and contact duration was collected using autonomous RFID Wireless sensors installed on participants. These devices tracked face-to-face proximity with a resolution of 5s. The experimental set-up included in-situ checks and controlled experiments, in order to validate the information recorded through the sensors (i.e. making sure it corresponds to non spurious interactions). More details on the validation pipeline are given in Ref. <cit.>. Once validated, the proximity data was aggregated in mutually observed pairs to create an undirected network. For the scope of this study, the data was further aggregated with a time resolution of 10s, associated to a proximity of participants within 2 meters <cit.>. Additional checks were performed to be sure that this choice did not affect the fundamental network properties of the resulting contact network. The obtained network was then used to investigate the social interaction patterns among pupils. We assigned to each link, e.g. the one between nodes/children i and j, a corresponding weight w_ij, accounting for the total amount of time spent together. The individual propensity of a node i to interact with its peers can be quantified via the node strength s_i, which is the total amount of time spent interacting with any other node. Finally, the number of different individuals that a node i interacts with is quantified by its degree, k_i. Figure <ref> shows pictorial visualization of the resulting networks measured during the out-of-class context, aggregated over time. The positioning of the nodes is the same in both panels, but in Figure <ref>(a) nodes are colored according to the sex of the child, while in Figure <ref>(b) nodes are colored according to the class affiliation. We can see from this second case how, even during the out-of-class time, where in principle pupils are free to interact beyond their classmates, the network presents clusters: pupils from the same class preferentially interact with each other. Additionally, the network is polarized with respect to grade: lower grade classes are located at the bottom, while higher grade ones are located at the top. As already known <cit.>, Figure <ref>(a), reveals a clustering according to sex. Interestingly, different behaviors are observed depending on the age of the pupils. Children in classes 1, 2, 3, and 4 tend to cluster first by class, then within their classes by sex. Conversely, children in classes 5, 6, and 7 exhibit a different pattern, clustering first by sex and then by class. This qualitatively observed pattern suggests that younger children prioritize class-based groupings over gender, while older children exhibit stronger gender-based clustering, indicating a developmental shift in social dynamics. This change reflects previous work which finds that sex-based homophily increases with age in young children <cit.>. Moreover, both younger and older children prefer to associate with others similar in age. § RESULTS In this section, we analyse the interplay of individual characteristics with behavioural social patterns starting from the simplest analysis at the level of individual nodes. We will then move to more complex dyadic and group measures that take into account the features of the nodes at different categorical levels. 
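Before moving to the node-level analysis, the snippet below sketches how the weighted contact network and the node-level quantities defined in the Methods (link weights w_ij, strength s_i, degree k_i) can be assembled from validated proximity records. The toy records and the 10-second resolution are illustrative; this is not the project's actual pipeline.

```python
import networkx as nx
import pandas as pd

# one row per validated proximity record (i, j, t), at 10 s resolution (toy data)
contacts = pd.DataFrame({'i': [1, 1, 2, 1], 'j': [2, 2, 3, 3], 't': [0, 10, 10, 20]})

G = nx.Graph()
for (i, j), records in contacts.groupby(['i', 'j']):
    G.add_edge(i, j, weight=10 * len(records))   # w_ij: total interaction time (seconds)

strength = dict(G.degree(weight='weight'))       # s_i: total time spent interacting
degree = dict(G.degree())                        # k_i: number of distinct contacts
print(strength, degree)
```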
The first and most fundamental quantity we consider is the total interaction time. Indeed, at the node level, the strength provides a simple and interpretable measure of the individual propensity to interact. Given the specific age group considered in this study, this represents a particularly interesting factor: it can be interpreted, in a dual manner, as an early indicator of both a tendency to interact and, conversely, as a proxy for a propensity to isolate. §.§ Classification of interaction time using metadata We split the distribution of interaction time into four quantile-based classes, emphasizing the first and last quantiles to represent the most and least socially active children, respectively [Figure  <ref>(a,d)]. We then train a Random Forest classifier <cit.> on the metadata prepared as described in section  <ref> to predict the quantile of each pupil. In order to improve the stability of our results, given the relatively small sample size, we employed stratified k-folding <cit.> for training. The resulting confusion matrices, obtained by taking the average over 20 independent training realizations, are reported in Figure <ref>(b) and (e) for the in-class and out-of-class settings, respectively. Our analysis reveals that even though the metadata do not contain direct information on contacts, it is possible to use them to classify individuals according to their measured level of sociality. The signal, even though globally weak, enables good partial differentiation if one focuses on the two extreme quantiles —which are also the most interesting ones as they depict the most and less active individuals. The signal is stronger in the out-of-class setting [Figure  <ref>(e)], possibly due to the removal of spatial and logistic constraints imposed by the room and the teacher in the classrooms. We identify the most important features for classification, shown in Figure  <ref>(c) and (f). Notice how the top five are consistent in the two settings: age, school-class and language test scores. This preliminary analysis suggests that individual characteristics that do not explicitly include levels of sociality are sufficient to detect the amount of interactions in preschool, in particular for the two most interesting subset of most and less active children. §.§ Pairwise interaction analysis We just saw that age can be a good predictor for duration of contact at the level of the single node. We would like to move a step further by taking into account the coupling at the level of links. Previous works have looked at the effect of age differences in classrooms on childrens' performance, but have not touched upon the social effects and differences associated with it <cit.>. To begin, we construct a weighted undirected network from all the in-class interactions happening between children of the same class, aggregated over all the 10 weeks of observation. In Figure <ref>(a) we plot the distribution of duration of in-class interactions against the age difference of the children involved in the considered pair. We see a peak (longer interactions) for pairs of pupils with less than half a year of age difference, but this quickly diminishes beyond that. To access the effect of network structure and/or the distribution of weights in the observed pattern, we use two reshuffling methods. The “hard” reshuffling removes all edges and reassigns them to random pairs of nodes (conserving only the total number of edges). 
This procedure completely destroys the network structure, and consequently also the duration distribution observed in the data. By contrast, the “soft” reshuffling removes all the weights (without changing the network structure) and reassigns them randomly to the edges. This method does not affect the patterns of interactions, indicating that the effect of how much you interact with somebody instead of another person in your network pales in comparison to the effect of network structure. We then repeat the same construction and reshuffling methods but for out-of-class interactions. While the distribution resembles that of in-class interactions, the extent to which you interact with people in your network matters significantly more compared to the null model. In Figure <ref>(b), we see that pupils with less than half a year age difference interact more than expected when compared to both the hard and the soft reshuffling null models. Interactions between children of larger age differences are also much less than expected from the soft reshuffling model. Having accessed these differences between in- and out-of-class interactions, we now want to look at possible differences between the level of sociality when exposed to different “treatments”. From a social perspective, the most obvious change in mixed-grade classrooms is that students of different ages are forced, when in class, to interact with each other. When this restriction is removed, do these children interact differently? We can artificially explore the differences in “removing” this restriction by looking at interactions happening out-of-class but differentiating for children belonging to mixed- against single-grade classes. Notably, we do not observe significant differences in among the two. Even though a more rigorous check should be performed in further studies, this means that the distribution of duration of out-of-class contacts given an age difference, is not different if a pupil is exposed to larger age differences in class or not. We can look for alternative signals by leveraging information we have about the environment at home. For example, do children with siblings interact outside their age group more than only children? That is, do children who are exposed at home to other children with an age gap of roughly a year or more tend to interact more with varied age groups at school as well? In Figure <ref>(c), we plot the duration of out-of-class interactions against the age difference of interacting pairs of nodes for three cases: (i) both nodes have no siblings, (ii) both nodes have siblings, (iii) only one node has siblings. While this preliminary investigation seem to indicate that having a sibling does not seem to be associated with more interactions outside your age group, pupils who have no siblings seem to have a different pattern in the way duration depends on age difference: instead of observing a smooth decay with age difference, single children display a more pronounced peak for interactions with children within a year apart from each other, followed by an abrupt drop. §.§ Higher-order interaction analysis So far we have discussed, on the one hand, the interplay between the individual characteristics of a child and their individual propensity to interact, and on the other, the propensity of children of similar ages to spend more time together than those of larger age gaps. This latter analysis focused on the amount of time spent together by pairs of children. 
§.§ Higher-order interaction analysis

So far we have discussed, on the one hand, the interplay between the individual characteristics of a child and their individual propensity to interact, and on the other, the propensity of children of similar ages to spend more time together than those with larger age gaps. This latter analysis focused on the amount of time spent together by pairs of children. Nevertheless, as for many other systems, children's interactions can involve more than 2 pupils at the same time <cit.>. Such group (non-dyadic) interactions have been observed in many social settings, such as offices, conferences, primary schools and high-schools, and it has been shown that the higher-order representations offered by hypergraphs, as opposed to pairwise graphs, can lead to interesting emerging phenomena <cit.>. Given the individual metadata available in this study, it is then natural to extend the analysis conducted so far and investigate how the individual traits of group members influence the overall time spent together in a given group gathering, beyond pairs. To address this question, we deduced the group interactions among children from the proximity data discussed above. Interaction data are recorded in the form of tuples ((i,j),t) between pairs of children occurring at a specific time t, where (i,j) is the pair of nodes (pupils) interacting and t is the time at which the interaction was observed. Given the temporal granularity, it is possible to naively construct the group interactions by looking at the cliques formed by pairwise interactions occurring at the same timestamp (following the procedure of Refs. <cit.>). For example, if at a specific timestamp t we observe that nodes i, j and k are all mutually connected by a link in the contact data, i.e. we observe ((i,j),t), ((j,k),t) and ((i,k),t), then the three nodes are considered as jointly interacting in a group at time t. The resulting network can then be considered as what has recently been termed a higher-order network <cit.>, where group interactions among children are represented as hyperlinks, i.e. links that connect an arbitrary number of nodes. Similarly to the pairwise case, we then assign to each hyperlink a weight which accounts for the total temporal duration over which the hyperlink was observed in the contact data. Hyperlinks with null weights (i.e., never observed) were not considered in this analysis. We integrate the individual traits of the different group members into a measure of affinity for the whole group. In particular, we consider two individual characteristics that were included in the previous analyses, i.e. age and the results of the development skill tests. While age is a simple measure given in months, test scores require some pre-processing. We first normalise the test results such that the score obtained by each child is always between 0 and 1. Then, for each pupil, we store these results in a 3-dimensional vector, where the entries correspond to the results obtained in the tests on vocabulary, syntax and memory, respectively. We can now quantify group affinity in terms of how far apart its members are with respect to the considered quantity. A natural way of doing this is to consider individual distances from the barycenter of the group. We thus measure the diversity in age and in development skill test scores by calculating the sum of the absolute distances of each node from the barycentric coordinate, and then dividing this sum by the number of group members. As per the previous analyses, we split interactions into in-class and out-of-class, to distinguish between the two different social contexts in which pupils interact. In Figure <ref> we plot group diversity as a function of the group duration for the two different individual traits and the two different contexts of interaction considered.
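Before turning to the results, the diversity measure just described can be sketched as follows: for each observed group, the mean absolute distance of its members from the group barycenter, computed either on age (a scalar) or on the 3-dimensional vector of normalised test scores. The data structures and names are illustrative assumptions (groups as tuples of pupil ids, traits stored in a dictionary), not the actual code of the study.

import numpy as np

def group_diversity(members, traits):
    """Mean absolute (L1) distance of the group members from the group barycenter.

    traits[i] is either a scalar (age in months) or a 1-d array of normalised
    vocabulary/syntax/memory scores for pupil i.
    """
    x = np.atleast_2d([np.ravel(traits[i]) for i in members]).astype(float)
    barycenter = x.mean(axis=0)
    # sum of absolute distances from the barycenter, divided by the group size
    return np.abs(x - barycenter).sum(axis=1).mean()

# Toy usage: age diversity of a group of three pupils (ages in months)
ages = {"a": 62.0, "b": 65.0, "c": 80.0}
print(group_diversity(("a", "b", "c"), ages))  # larger value = more diverse group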
We observe that longer-lasting groups tend to be less distant, i.e. group affinity is higher, both in terms of age and of development skill performance. Results obtained from the empirical data are compared with randomized reference models, where the distance between a node and its group is instead computed using different nodes taken at random from another group of the same size. In the in-class context [Figure <ref>(a-b)], we see that the empirical groups that last for long present lower diversity than those from the reshuffled model. This trend is much more evident in the case of age diversity (a) than in the case of the development skill tests (b), where appreciable differences only appear for the longer-lasting groups. Signals are stronger in the out-of-class context [Figure <ref>(c-d)], where we can see clearly different trends between the empirical data and the reshuffled cases. The group diversity after reshuffling remains constant as the group duration increases, in striking contrast to the decreasing trend observed in the real-world data. We conclude that, whenever pupils are left free to interact (out-of-class), they gather in groups whose duration depends on group affinity in terms of individual age and development skill test performance, independently of their size.

§.§ Node attributes and network measures

In this final section, we consider node-level measures of interaction computed for individual pupils and look for differences in their distributions when accounting for different categories of individual traits. In particular, we focus on the same characteristics used in the previous sections: sex, grade, development skill test scores, and number of siblings. We find that the overall interaction time of female children in out-of-class settings is higher than that of male children. This is shown in Figure <ref>(b), where the two strength distributions are compared across the two sex categories (significance was checked using a one-sided Mann–Whitney U test, p<0.01). We also notice that the total duration of out-of-class interactions increases with age. This is displayed in Figure <ref>(a), where the strength distribution for both female and male children is plotted for the different grade levels. Although this is not shown in the figure, a temporal analysis indicates that this effect seems to be more pronounced in the first quarter of the academic year. This could be due to differences in the acquaintance network prior to the beginning of the data collection (stronger for 1st graders). We then turn our attention to language development measures in association with the duration of contacts in both in-class and out-of-class settings. We first discard the results of the anchor questions, since they are positively correlated with grade, which we have already shown to be correlated with strength. In all settings we find [Figure <ref>(c-h)] a significant positive correlation for all test scores, in particular for the memory and vocabulary tests (which are not correlated with one another). Finally, we focus on differences in out-of-class contacts for children with and without siblings, both in terms of the duration of interactions and of their patterns (how interactions are arranged across groups). In Figure <ref>(i) we compare the distributions of contact duration across the two categories, finding no significant differences between the two. However, going beyond simple contact duration, differences emerge. This is shown in Figure <ref>(j), where we compare the hypercoreness values of the nodes.
Hypercoreness is a recently-developed measure of centrality for higher-order networks <cit.> that quantifies the extent of a node's interactions within groups, considering at the same time the number of different groups and the group sizes. High hypercoreness values correspond to nodes that interact within many large-sized groups that contain high-degree nodes [More formally, we start by performing a (k, m)-hypercore decomposition <cit.>. The (k, m)-hypercore decomposition is analogous to the k-core decomposition for graphs, involving the recursive removal of nodes with degree k_i < k and of groups (or hyperedges) with size m_e < m. The resulting (k,m)-hypercore is the maximal connected sub-network where all nodes belong to at least k distinct groups and all groups have size at least m. The hypercoreness centrality of a given node i is defined as R(i) = ∑_{m=2}^{M} C_m(i) / (N_m k_max^m), where the m-core number C_m(i) is the value k such that i belongs to the (k,m)-hypercore but not to the (k+1,m)-hypercore, k_max^m is the maximum value of k such that the (k,m)-hypercore is not empty, and N_m is the number of groups of size m in the original network.].

§ DISCUSSION

We investigated the interplay of the individual characteristics of children at preschool with their longitudinal patterns of face-to-face interactions, automatically collected across different classes and contexts of interaction. Starting from the amount of sociality as given by the total time spent interacting/in isolation, we showed that it is possible to make predictions about the social activity of children using individual characteristics. We subsequently used random forest methods to identify the key individual traits playing a role in interactions between students in and out of class as well as across mixed-grade classes. The main traits found were age, sex, number of siblings and test scores (vocabulary, syntax and memory scores). We identified an increasing preference with age for children to interact with same-sex others, as well as a preference for children to associate with classmates outside of class. Furthermore, children were found to prefer to associate with others similar in age to themselves. Our investigation into the age-based in-class and out-of-class interaction patterns among children reveals that the age difference significantly influences the duration of in-class interactions, with the longest interactions occurring among children with less than half a year of age difference. To understand the role of network structure and weight distribution in these patterns, we employed two reshuffling methods: hard reshuffling (randomly reassigning edges) and soft reshuffling (randomly reassigning weights while preserving the network structure). The hard reshuffling did not replicate the observed duration distribution, indicating the importance of network structure, whereas the soft reshuffling showed that the assignment of interaction durations within the network was less critical. Out-of-class interactions mirrored the in-class patterns but highlighted a significantly greater importance of interaction duration even when the network structure is preserved. Further, we examined whether having siblings affected children's interactions across age groups. The analysis indicated no significant difference in interaction patterns based on sibling presence, except for children without siblings, who showed longer interactions within their age group.
These preliminary results suggest that exposure to mixed-age interactions within the family does not significantly influence the age gap in social interactions at school, apart from a different trend in how contact duration decreases as the age gap increases. Nevertheless, additional comprehensive causal work is necessary to make this inference. At the group level, we found differences between children with and without siblings in terms of node centrality in the higher-order networks. In fact, even though no significant differences emerge when comparing group durations, children with siblings display a higher hypercoreness <cit.>, meaning that they engage in more groups of larger sizes. Along this line, an interesting direction to investigate would focus on temporal hypercoreness <cit.>, leveraging the longitudinal nature of the dataset, or on more complex local interaction patterns such as hypermotifs <cit.>. Despite its prevalence, the French double-grade class system has been criticised and linked to poor performance outcomes due to teachers needing to switch attention between two teaching groups. However, some reported benefits of this system are tutoring, imitation and joint supervision. Future work could explore how age differences, the number of students and teaching abilities lead to positive or negative interactions and outcomes <cit.>. In the context of early cognitive development, an important direction of future work could couple neuro-cognitive measures with measures of sociability to learn more about how brain development changes the nature of social interactions. Another natural direction to explore in future work is the use of affinity measures for individual characteristics within group interactions, based on the higher-order definition recently presented in Ref. <cit.>, but generalised to account for labels that can take more than two values. More generally, our preliminary study calls for a more comprehensive and exhaustive investigation of homophilic <cit.> and monophilic <cit.> patterns of group formation and evolution. These signals could then be used to inform mechanistic models of higher-order social networks <cit.>. In fact, recent studies have found consistent dynamical patterns of individual group transitions, group formation and disaggregation phenomena in both preschool and university settings during different activity types (in-class, out-of-class, and weekend) <cit.>. The observed phenomena could be replicated by a synthetic model describing the dynamics of individuals forming groups of different sizes and navigating through them, using a mechanism of short-term memory for group duration (the “long gets longer” effect) and of long-term memory for social contacts. Going beyond these simple signatures of recurrent social contact, the individual preferences analysed in this study could be used to complement and further improve these mechanistic dynamics of social interactions.

§ ACKNOWLEDGEMENTS

This work is the output of the Complexity72h workshop, held at the Universidad Carlos III de Madrid in Leganés, Spain, 24-28 June 2024, <https://www.complexity72h.com>. We thank Alain Barrat for the insightful discussions and contributions during the workshop. The higher-order interaction analysis was performed using the XGI library <cit.>.

§ REFERENCES

[Albert and Barabási(2002)]albert2002statistical Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Rev. Mod. Phys., 740 (1):0 47, 2002. https://doi.org/10.1103/RevModPhys.74.47.
[Newman(2003)]newman2003structure Mark EJ Newman. The structure and function of complex networks. SIAM review, 450 (2):0 167–256, 2003. https://doi.org/10.1137/S003614450342480. [Latora et al.(2017)Latora, Nicosia, and Russo]latora_nicosia_russo_2017 V. Latora, V. Nicosia, and G. Russo. Complex Networks: Principles, Methods and Applications. Complex Networks: Principles, Methods and Applications. Cambridge University Press, 2017. ISBN 9781107103184. URL <https://books.google.it/books?id=qV0yDwAAQBAJ>. [Barrat et al.(2008)Barrat, Barthélemy, and Vespignani]barrat2008dynamical A. Barrat, M. Barthélemy, and A. Vespignani. Dynamical Processes on Complex Networks. Cambridge University Press, 2008. ISBN 9780521879507. URL <https://books.google.at/books?id=TmgePn9uQD4C>. [Vespignani(2012)]vespignani2012modelling Alessandro Vespignani. Modelling dynamical processes in complex socio-technical systems. Nature Physics, 80 (1):0 32–39, 2012. [Euler(1741)]euler1741solutio Leonhard Euler. Solutio problematis ad geometriam situs pertinentis. Commentarii academiae scientiarum Petropolitanae, pages 128–140, 1741. [Wasserman and Faust(1994)]wasserman1994social Stanley Wasserman and Katherine Faust. Social Network Analysis : Methods and Applications (Structural Analysis in the Social Sciences). Cambridge University Press, 1994. ISBN 0-521-38707-8. [Castellano et al.(2009)Castellano, Fortunato, and Loreto]castellano2009statistical Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Rev. Mod. Phys., 810 (2):0 591, 2009. https://doi.org/10.1103/RevModPhys.81.591. [Baronchelli(2018)]baronchelli2018emergence Andrea Baronchelli. The emergence of consensus: A primer. R. Soc Open Sci, 50 (2):0 172189, 2018. [Axelrod and Axelrod(1984)]axelrod1984evolution R. Axelrod and R.M. Axelrod. The Evolution of Cooperation. Basic Books. Basic Books, 1984. ISBN 978-0-465-02121-5. [Mitchell(1973)]mitchell1973networks J Clyde Mitchell. Networks, norms and institutions. Mouton., 1973. [Centola and Macy(2007)]centola2007complex Damon Centola and Michael Macy. Complex contagions and the weakness of long ties. American journal of Sociology, 1130 (3):0 702–734, 2007. [Nowak et al.(1990)Nowak, Szamrej, and Latané]nowak1990private Andrzej Nowak, Jacek Szamrej, and Bibb Latané. From private attitude to public opinion: A dynamic theory of social impact. Psychol. Rev., 970 (3):0 362, 1990. [Sznajd-Weron and Sznajd(2000)]sznajd2000opinion Katarzyna Sznajd-Weron and Jozef Sznajd. Opinion evolution in closed community. International Journal of Modern Physics C, 110 (06):0 1157–1165, 2000. [Isella et al.(2011)Isella, Stehlé, Barrat, Cattuto, Pinton, and Van den Broeck]isella2011s Lorenzo Isella, Juliette Stehlé, Alain Barrat, Ciro Cattuto, Jean-François Pinton, and Wouter Van den Broeck. What's in a crowd? analysis of face-to-face behavioral networks. Journal of theoretical biology, 2710 (1):0 166–180, 2011. [Barrat et al.(2014)Barrat, Cattuto, Tozzi, Vanhems, and Voirin]barrat2014measuring Alain Barrat, Ciro Cattuto, Alberto Eugenio Tozzi, Philippe Vanhems, and Nicolas Voirin. Measuring contact patterns with wearable sensors: methods, data characteristics and applications to data-driven simulations of infectious diseases. Clin. Microbiol. Infect., 200 (1):0 10–16, 2014. https://doi.org/10.1111/1469-0691.12472. [Mastrandrea et al.(2015a)Mastrandrea, Fournet, and Barrat]mastrandrea2015contact Rossana Mastrandrea, Julie Fournet, and Alain Barrat. 
Contact patterns in a high school: a comparison between data collected using wearable sensors, contact diaries and friendship surveys. PloS one, 100 (9):0 e0136497, 2015a. [Cristani et al.(2011)Cristani, Paggetti, Vinciarelli, Bazzani, Menegaz, and Murino]cristani2011towards Marco Cristani, Giulia Paggetti, Alessandro Vinciarelli, Loris Bazzani, Gloria Menegaz, and Vittorio Murino. Towards computational proxemics: Inferring social relations from interpersonal distances. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, pages 290–297. IEEE, 2011. [Holme and Saramäki(2012)]holme2012temporal Petter Holme and Jari Saramäki. Temporal networks. Phys. Rep., 5190 (3):0 97–125, 2012. https://doi.org/10.1016/j.physrep.2012.03.001. [Génois et al.(2019)Génois, Zens, Lechner, Rammstedt, and Strohmaier]genois2019building Mathieu Génois, Maria Zens, Clemens Lechner, Beatrice Rammstedt, and Markus Strohmaier. Building connections: How scientists meet each other during a conference. arXiv preprint arXiv:1901.01182, 2019. [Battiston et al.(2020)Battiston, Cencetti, Iacopini, Latora, Lucas, Patania, Young, and Petri]battiston2020networks F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri. Networks beyond pairwise interactions: Structure and dynamics. Phys. Rep., 874:0 1–92, 2020. 10.1016/j.physrep.2020.05.004. [Battiston et al.(2021)Battiston, Amico, Barrat, Bianconi, Ferraz de Arruda, Franceschiello, Iacopini, Kéfi, Latora, Moreno, Murray, Peixoto, Vaccarino, and Petri]battiston2021physics F. Battiston, E. Amico, A. Barrat, G. Bianconi, G. Ferraz de Arruda, B. Franceschiello, I. Iacopini, S. Kéfi, V. Latora, Y. Moreno, M. Murray, T. Peixoto, F. Vaccarino, and G. Petri. The physics of higher-order interactions in complex systems. Nat. Phys., 170 (10):0 1093–1098, 2021. 10.1038/s41567-021-01371-4. [Torres et al.(2021)Torres, Blevins, Bassett, and Eliassi-Rad]torres2021and Leo Torres, Ann S Blevins, Danielle Bassett, and Tina Eliassi-Rad. The why, how, and when of representations for complex systems. SIAM Rev., 630 (3):0 435–485, 2021. https://doi.org/10.1137/20M1355896. [Bick et al.(2023)Bick, Gross, Harrington, and Schaub]bick2023higher Christian Bick, Elizabeth Gross, Heather A Harrington, and Michael T Schaub. What are higher-order networks? SIAM Rev., 650 (3):0 686–731, 2023. https://doi.org/10.1137/21M1414024. [Stehlé et al.(2010)Stehlé, Barrat, and Bianconi]stehle2010dynamical Juliette Stehlé, Alain Barrat, and Ginestra Bianconi. Dynamical and bursty interactions in social networks. Phys. Rev. E, 810 (3):0 035101, 2010. 10.1103/PhysRevE.81.035101. [Zhao et al.(2011)Zhao, Stehlé, Bianconi, and Barrat]zhao2011social Kun Zhao, Juliette Stehlé, Ginestra Bianconi, and Alain Barrat. Social network dynamics of face-to-face interactions. Phys. Rev. E, 830 (5):0 056109, 2011. 10.1103/PhysRevE.83.056109. [Perra et al.(2012)Perra, Gonçalves, Pastor-Satorras, and Vespignani]perra2012activity Nicola Perra, Bruno Gonçalves, Romualdo Pastor-Satorras, and Alessandro Vespignani. Activity driven modeling of time varying networks. Scientific reports, 20 (1):0 469, 2012. [Starnini et al.(2013)Starnini, Baronchelli, and Pastor-Satorras]starnini2013modeling Michele Starnini, Andrea Baronchelli, and Romualdo Pastor-Satorras. Modeling human dynamics of face-to-face interaction networks. Phys. Rev. Lett., 1100 (16):0 168701, 2013. 10.1103/PhysRevLett.110.168701. 
[Vestergaard et al.(2014)Vestergaard, Génois, and Barrat]vestergaard2014memory Christian L Vestergaard, Mathieu Génois, and Alain Barrat. How memory generates heterogeneous dynamics in temporal networks. Phys. Rev. E, 900 (4):0 042805, 2014. https://doi.org/10.1103/PhysRevE.90.042805. [Karsai et al.(2014)Karsai, Perra, and Vespignani]karsai2014time Márton Karsai, Nicola Perra, and Alessandro Vespignani. Time varying networks and the weakness of strong ties. Sci. Rep., 40 (1):0 1–7, 2014. https://doi.org/10.1038/srep04001. [Nadini et al.(2018)Nadini, Sun, Ubaldi, Starnini, Rizzo, and Perra]nadini2018epidemic Matthieu Nadini, Kaiyuan Sun, Enrico Ubaldi, Michele Starnini, Alessandro Rizzo, and Nicola Perra. Epidemic spreading in modular time-varying networks. Sci. Rep., 80 (1):0 1–11, 2018. https://doi.org/10.1038/s41598-018-20908-x. [Le Bail et al.(2023)Le Bail, Génois, and Barrat]lebail2023modelling Didier Le Bail, Mathieu Génois, and Alain Barrat. Modeling framework unifying contact and social networks. Phys. Rev. E, 107:0 024301, Feb 2023. 10.1103/PhysRevE.107.024301. URL <https://link.aps.org/doi/10.1103/PhysRevE.107.024301>. [Petri and Barrat(2018)]petri2018simplicial Giovanni Petri and Alain Barrat. Simplicial activity driven model. Phys. Rev. Lett., 1210 (22):0 228301, 2018. 10.1103/PhysRevLett.121.228301. [Gallo et al.(2024a)Gallo, Lacasa, Latora, and Battiston]gallo2024higher Luca Gallo, Lucas Lacasa, Vito Latora, and Federico Battiston. Higher-order correlations reveal complex memory in temporal hypergraphs. Nature Communications, 150 (1):0 4754, 2024a. [Iacopini et al.(2023)Iacopini, Karsai, and Barrat]iacopini2023temporal Iacopo Iacopini, Márton Karsai, and Alain Barrat. The temporal dynamics of group interactions in higher-order social networks. arXiv preprint arXiv:2306.09967, 2023. [Kontro and Génois(2020)]kontro2020combining Inkeri Kontro and Mathieu Génois. Combining surveys and sensors to explore student behaviour. Education Sciences, 100 (3):0 68, 2020. [Dai et al.(2022)Dai, Bouchet, Karsai, Chevrot, Fleury, and Nardy]dai2022longitudinal Sicheng Dai, Hélène Bouchet, Márton Karsai, Jean-Pierre Chevrot, Eric Fleury, and Aurélie Nardy. Longitudinal data collection to follow social network and language development dynamics at preschool. Scientific Data, 90 (1):0 777, 2022. [Génois et al.(2023)Génois, Zens, Oliveira, Lechner, Schaible, and Strohmaier]genois2023combining Mathieu Génois, Maria Zens, Marcos Oliveira, Clemens M Lechner, Johann Schaible, and Markus Strohmaier. Combining sensors and surveys to study social interactions: A case of four science conferences. Pers. Sci., 4:0 1–24, 2023. https://doi.org/10.5964/ps.9957. [Stehlé et al.(2013)Stehlé, Charbonnier, Picard, Cattuto, and Barrat]stehle_gender_2013 Juliette Stehlé, François Charbonnier, Tristan Picard, Ciro Cattuto, and Alain Barrat. Gender homophily from spatial behavior in a primary school: A sociometric study. Social Networks, 350 (4):0 604–613, October 2013. ISSN 03788733. 10.1016/j.socnet.2013.08.003. URL <https://linkinghub.elsevier.com/retrieve/pii/S0378873313000737>. [Mastrandrea et al.(2015b)Mastrandrea, Fournet, and Barrat]mastrandrea_contact_2015 Rossana Mastrandrea, Julie Fournet, and Alain Barrat. Contact Patterns in a High School: A Comparison between Data Collected Using Wearable Sensors, Contact Diaries and Friendship Surveys. PLOS ONE, 100 (9):0 e0136497, September 2015b. ISSN 1932-6203. 10.1371/journal.pone.0136497. URL <https://dx.plos.org/10.1371/journal.pone.0136497>. 
[McClelland et al.(2007)McClelland, Cameron, Connor, Farris, Jewkes, and Morrison]mcclelland2007links Megan M McClelland, Claire E Cameron, Carol McDonald Connor, Carrie L Farris, Abigail M Jewkes, and Frederick J Morrison. Links between behavioral regulation and preschoolers' literacy, vocabulary, and math skills. Developmental psychology, 430 (4):0 947, 2007. [Denham(2006)]denham2006social Susanne A Denham. Social-emotional competence as support for school readiness: What is it and how do we assess it? Early education and development, 170 (1):0 57–89, 2006. [Birch and Ladd(1997)]birch1997teacher Sondra H Birch and Gary W Ladd. The teacher-child relationship and children's early school adjustment. Journal of school psychology, 350 (1):0 61–79, 1997. [McPherson et al.(2001)McPherson, Smith-Lovin, and Cook]mcpherson2001birds Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. Annual review of sociology, 270 (1):0 415–444, 2001. [Mayhew et al.(1995)Mayhew, McPherson, Rotolo, and Smith-Lovin]mayhew1995sex Bruce H Mayhew, J Miller McPherson, Thomas Rotolo, and Lynn Smith-Lovin. Sex and race homogeneity in naturally occurring groups. Social Forces, 740 (1):0 15–52, 1995. [Elbaum et al.(2024)Elbaum, Perry, and Messinger]elbaum2024investigating Batya Elbaum, Lynn K Perry, and Daniel S Messinger. Investigating children's interactions in preschool classrooms: An overview of research using automated sensing technologies. Early childhood research quarterly, 66:0 147–156, 2024. [Horn et al.(2024)Horn, Karsai, and Markova]horn2024automated Lisa Horn, Márton Karsai, and Gabriela Markova. An automated, data-driven approach to children's social dynamics in space and time. Child Development Perspectives, 180 (1):0 36–43, 2024. [Santos et al.(2015)Santos, Daniel, Fernandes, and Vaughn]santos2015affiliative António J Santos, Joao R Daniel, Carla Fernandes, and Brian E Vaughn. Affiliative subgroups in preschool classrooms: Integrating constructs and methods from social ethology and sociometric traditions. PloS one, 100 (7):0 e0130932, 2015. [Paulus(2018)]paulus2018preschool Markus Paulus. Preschool children’s and adults’ expectations about interpersonal space. Frontiers in Psychology, 9:0 400891, 2018. [Chen et al.(2019)Chen, Lin, Justice, and Sawyer]chen2019social Jing Chen, Tzu-Jung Lin, Laura Justice, and Brook Sawyer. The social networks of children with and without disabilities in early childhood special education classrooms. Journal of autism and developmental disorders, 49:0 2779–2794, 2019. [Locke et al.(2013)Locke, Kasari, Rotheram-Fuller, Kretzmann, and Jacobs]locke2013social Jill Locke, Connie Kasari, Erin Rotheram-Fuller, Mark Kretzmann, and Jeffrey Jacobs. Social network changes over the school year among elementary school-aged children with and without an autism spectrum disorder. School Mental Health, 5:0 38–47, 2013. [Chamberlain et al.(2007)Chamberlain, Kasari, and Rotheram-Fuller]chamberlain2007involvement Brandt Chamberlain, Connie Kasari, and Erin Rotheram-Fuller. Involvement or isolation? the social networks of children with autism in regular classrooms. Journal of autism and developmental disorders, 37:0 230–242, 2007. [Dai(2022)]dai2022thesis Sicheng Dai. Study of dynamical social networks of pre-school children using wearable wireless sensors. Theses, Université de Lyon ; East China normal university (Shanghai), May 2022. URL <https://theses.hal.science/tel-04010766>. [Leinhardt(1973)]leinhardt1973development Samuel Leinhardt. 
The development of transitive structure in children's interpersonal relations. Behavioral science, 180 (4):0 260–271, 1973. [Dai et al.(2020)Dai, Bouchet, Nardy, Fleury, Chevrot, and Karsai]dai2020temporal Sicheng Dai, Hélène Bouchet, Aurélie Nardy, Eric Fleury, Jean-Pierre Chevrot, and Márton Karsai. Temporal social network reconstruction using wireless proximity sensors: model selection and consequences. EPJ Data Sci., 90 (1):0 19, 2020. https://doi.org/10.1140/epjds/s13688-020-00237-8. [Bastian et al.(2009)Bastian, Heymann, and Jacomy]bastian2009gephi Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. Gephi: an open source software for exploring and manipulating networks. In Proceedings of the international AAAI conference on web and social media, volume 3, pages 361–362, 2009. [Breiman(2001)]breiman2001random Leo Breiman. Random forests. Machine learning, 450 (1):0 5–32, 2001. [Hastie et al.(2009)Hastie, Tibshirani, and Friedman]hastie2009elements Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The elements of statistical learning: data mining, inference, and prediction. Springer Science & Business Media, 2009. [Veenman(1996)]veenman1996effects Simon Veenman. Effects of multigrade and multi-age classes reconsidered. Review of educational research, 660 (3):0 323–340, 1996. [Leroy-Audouin and Suchaut(2007)]leroy2007revisiting Christine Leroy-Audouin and Bruno Suchaut. Revisiting the pedagogical effectiveness of multigrade classes in france. Revue francaise de pedagogie, 1600 (3):0 103–118, 2007. [Lambiotte et al.(2019)Lambiotte, Rosvall, and Scholtes]lambiotte2019networks R. Lambiotte, M. Rosvall, and I. Scholtes. From networks to optimal higher-order models of complex systems. Nat. Phys., 2019. 10.1038/s41567-019-0459-y. [Iacopini et al.(2019)Iacopini, Petri, Barrat, and Latora]iacopini2019simplicial Iacopo Iacopini, Giovanni Petri, Alain Barrat, and Vito Latora. Simplicial models of social contagion. Nat. Commun., 10:0 2485, 2019. https://doi.org/10.1038/s41467-019-10431-6. [Cencetti et al.(2021)Cencetti, Battiston, Lepri, and Karsai]cencetti2021temporal Giulia Cencetti, Federico Battiston, Bruno Lepri, and Márton Karsai. Temporal properties of higher-order interactions in social networks. Sci. Rep., 110 (1):0 1–10, 2021. https://doi.org/10.1038/s41598-021-86469-8. [Mancastroppa et al.(2023)Mancastroppa, Iacopini, Petri, and Barrat]mancastroppa2023hyper Marco Mancastroppa, Iacopo Iacopini, Giovanni Petri, and Alain Barrat. Hyper-cores promote localization and efficient seeding in higher-order processes. Nat. Commun., 14:0 6223, 2023. https://doi.org/10.1038/s41467-023-41887-2. [Mancastroppa et al.(2024)Mancastroppa, Iacopini, Petri, and Barrat]mancastroppa2024structural Marco Mancastroppa, Iacopo Iacopini, Giovanni Petri, and Alain Barrat. The structural evolution of temporal hypergraphs through the lens of hyper-cores. arXiv preprint arXiv:2402.06485, 2024. [Liu et al.(2020)Liu, Yuan, Lin, Qin, Zhang, and Zhou]liu2020efficient Boge Liu, Long Yuan, Xuemin Lin, Lu Qin, Wenjie Zhang, and Jingren Zhou. Efficient (α, β)-core computation in bipartite graphs. The VLDB Journal, 290 (5):0 1075–1099, 2020. [Lotito et al.(2022)Lotito, Musciotto, Montresor, and Battiston]lotito2022higher Quintino Francesco Lotito, Federico Musciotto, Alberto Montresor, and Federico Battiston. Higher-order motif analysis in hypergraphs. Communications Physics, 50 (1):0 79, 2022. [Suchaut(2010)]suchaut2010efficacite Bruno Suchaut. 
Efficacité pédagogique des classes à cours double à l’école primaire: le cas du cours préparatoire. Revue française de pédagogie. Recherches en éducation, 0 (173):0 51–66, 2010. [Veldt et al.(2023)Veldt, Benson, and Kleinberg]veldt2023combinatorial Nate Veldt, Austin R Benson, and Jon Kleinberg. Combinatorial characterizations and impossibilities for higher-order homophily. Science Advances, 90 (1):0 eabq3200, 2023. [Altenburger and Ugander(2018)]altenburger2018monophily Kristen M Altenburger and Johan Ugander. Monophily in social networks introduces similarity among friends-of-friends. Nature human behaviour, 20 (4):0 284–290, 2018. [Gallo et al.(2024b)Gallo, Zappalà, Karimi, and Battiston]gallo2024higher_f2f Luca Gallo, Chiara Zappalà, Fariba Karimi, and Federico Battiston. Higher-order modeling of face-to-face interactions. arXiv preprint arXiv:2406.05026, 2024b. [Landry et al.(2023)Landry, Lucas, Iacopini, Petri, Schwarze, Patania, and Torres]landry2023xgi Nicholas W Landry, Maxime Lucas, Iacopo Iacopini, Giovanni Petri, Alice Schwarze, Alice Patania, and Leo Torres. Xgi: A python package for higher-order interaction networks. Journal of Open Source Software, 80 (85):0 5162, 2023.
http://arxiv.org/abs/2407.13512v1
20240718134301
MIR laser CEP estimation using machine learning concepts in bulk high harmonic generation
[ "Balázs Nagyillés", "Gergely N. Nagy", "Bálint Kiss", "Eric Cormier", "Péter Földi", "Katalin Varjú", "Subhendu Kahaly", "Mousumi Upadhyay Kahaly", "Zsolt Diveki" ]
physics.optics
[ "physics.optics" ]
1 ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary
2 Institute of Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary
3 Laboratoire Photonique Numérique et Nanosciences (LP2N), UMR 5298, CNRS-IOGS-Université Bordeaux, 33400 Talence, France
† Authors contributed equally to this work
* mousumi.upadhyaykahaly@eli-alps.hu
** zsolt.diveki@eli-alps.hu

Monitoring the carrier-envelope phase (CEP) is of paramount importance for experiments involving few-cycle intense laser fields. Common measurement techniques include f-2f interferometry or stereo-ATI setups. These approaches are adequate, but they are challenging to implement on demand, at different locations, as additional metrology tools in intense few-cycle laser-matter interaction experiments, such as those prevalent in sophisticated user beamlines. In addition, there are inherent difficulties in measuring the CEP at non-conventional laser wavelengths (e.g. in the mid-infrared) and above 10 kHz laser repetition rates on a single-shot basis. Here we demonstrate, both by simulations and by experiments, a machine learning (ML) driven method for CEP estimation in the mid-infrared, which is readily generalizable to any laser wavelength and possibly up to MHz repetition rates. The concept relies on the observation of the spectrum of high harmonic generation (HHG) in a bulk material and on the use of ML techniques to estimate the CEP of the laser. Once the ML model is trained, the method provides a way for cheap and compact real-time CEP tagging. This technique can complement the otherwise sophisticated monitoring of the CEP, and is able to capture the complex correlation between the CEP and the observable HHG spectra.

§ INTRODUCTION

High harmonic generation (HHG) relies on the highly non-linear interaction between ultrashort intense laser pulses and matter <cit.>, and has been demonstrated with a wide range of driving laser wavelengths <cit.>. Mid-infrared (MIR) driving lasers in particular have two appealing features. On the one hand, because of the wavelength scaling of the ponderomotive energy <cit.>, U_p ∝ I_L λ^2, the cut-off energy of the harmonics generated in gas can be extended to the keV spectral regime <cit.>, providing X-rays with unmatched temporal and spatial qualities. On the other hand, because of their low linear excitation rate, ultrashort MIR lasers can drive HHG in transparent solid-state media <cit.>, like semiconductors or dielectrics, well below the damage threshold of the material <cit.>. In the few-cycle regime, the carrier-envelope phase (CEP) of an ultrashort laser, which indicates the phase of the carrier wave with respect to the peak of the intensity envelope of the pulse, has a significant impact on the strong-field interaction and the HHG process. Therefore, accurate measurement/monitoring and proper control of the CEP are of paramount importance, and are essential for multiple applications in attoscience relevant to chemistry <cit.>, atomic and molecular physics <cit.> and lightwave electronics <cit.>, to mention a few. There are different ways to measure the CEP of few-cycle pulses, utilizing different types of interactions, for example the observation of half-cycle cut-offs in HHG spectra from gas <cit.> or quantum interference in semiconductors <cit.>. Nonetheless, two other techniques have become predominant.
The first one uses f-2f interferometry, where a large-bandwidth fundamental spectrum overlaps with its second harmonic signal and the appearance of the spectral fringes reveals the relative CEP of the laser <cit.>. Single-shot CEP measurement above 10 kHz is very challenging with f-2f, although several methods allow one to reach MHz repetition rates <cit.>, and the spectral region where the fundamental and the second harmonic overlap might lie outside the range of common detectors. The second technique is based on the measurement of the stereographic above-threshold ionization (Stereo-ATI) signal <cit.>, which is capable of performing single-shot absolute measurements at high repetition rate <cit.>. However, in spite of its advantages, Stereo-ATI needs sophisticated and expensive instrumentation and is an in-vacuum CEP metrology tool. The last point makes it difficult to integrate and to permanently keep in place as a metrology tool inside the existing sophisticated high repetition rate attosecond beamlines, like those at ELI-ALPS <cit.>. Recently, new techniques relying on solid HHG were proposed to measure the (relative) CEP of the laser <cit.>, exploiting the overlap between adjacent harmonic orders. The technique could be scaled to longer driving wavelengths; however, the appearance of an interference pattern between adjacent harmonics is not a general condition. If the latter condition is not fulfilled, alternative approaches were proposed that provide the shot-to-shot CEP stability <cit.> or a time-domain electric field reconstruction in solids using a delayed replica of the driving laser to perturb the HHG process <cit.>. During the last decade, machine learning techniques have gradually entered not just everyday life but also various fields of science. At XFELs <cit.>, they were successfully applied to improve and accelerate the metrology of the emitted radiation. Convolutional neural networks have been applied to reconstruct the temporal shape of femtosecond laser pulses <cit.>. Interestingly, their model was robust enough to retrieve the spectral amplitude and phase from experimental second-harmonic-generation frequency-resolved optical gating spectrograms while being trained only with simulated spectrograms. Recently, and independently from our study, a theoretical proposition was made <cit.> to reconstruct the band structure of a crystal and at the same time characterize the driving few-cycle laser pulse in the solid HHG process, including both its chirp and CEP, relying on the training of deep neural network models. In another theoretical study <cit.>, the CEP dependence of solid HHG was combined with deep learning models to retrieve the band structure of a MgO crystal. In this report, we combine the benefits of solid HHG and machine learning in order to propose a concept that helps tag the driving laser's relative CEP in a simple setup that is instrumentally undemanding while still offering high repetition rate tagging. First, we demonstrate the feasibility of such tagging relying on simulated harmonic spectra from a thin ZnO crystal, based on a simple 1D model for bulk ZnO <cit.> that correctly reproduces its semiconducting features, while proving useful for an accurate retrieval of the spectral phases of the incident few-cycle pulse, despite some amplitude noise and phase jitter. Then we show that, even without a large training dataset, the machine learning model can still achieve good relative CEP estimates.
Next, relying on our estimates of the number of training data needed, we experimentally demonstrate that the solid HHG spectrum is indeed a good indicator of the relative CEP of the driving laser.

§ FORMULATION OF OUR APPROACH

In most ultrafast light-matter interactions, including solid HHG, the temporal/spectral profile of the driving laser exhibits a deterministic influence on the outcome. In order to exploit this feature and utilize it for laser CEP estimation, two prerequisites need to be fulfilled. Firstly, there should be a direct, but not necessarily obvious or even explicit, correlation between the harmonic spectrum and the laser CEP in solid HHG. Secondly, this correlation should manifest as a one-to-one mapping, ensuring that the high harmonic spectrum of a CEP scan exhibits a periodicity of 2π; otherwise CEP prediction would be limited to only a fraction of the full range. In the case of few-cycle lasers, where the amplitude of the electric field changes substantially from one half cycle to the other, interband harmonics are generated at different times in each cycle <cit.>. This causes a phase shift (attochirp) between the same harmonics originating from different half cycles. By changing the CEP of the laser, the timing of the emission of given harmonics can be directly controlled, causing the emergence of CEP-dependent patterns in the harmonic spectrum <cit.>. Therefore, the first requirement, the presence of a CEP-dependent feature in the spectrum, is fulfilled. While the CEP dependence of harmonic generation through intraband processes in the mid-infrared regime is claimed to be insignificant <cit.>, with increasing driving wavelength the signatures of the CEP dependence of intraband harmonics become more prominent <cit.>. The symmetry properties of the target material also have an influence on the CEP response of the harmonics. When the interatomic structure is randomly distributed in the crystal on a scale smaller than the laser wavelength, the harmonic spectrum will exhibit a π periodicity versus the CEP change, because each half-cycle will experience the same average response, as in the case of fused silica <cit.>. Crystalline quartz possesses a non-centrosymmetric crystal structure, which results in an absence of inversion symmetry, thereby leading to a nonzero nonlinear susceptibility tensor. The broken inversion symmetry also ensures that consecutive laser half-cycles experience a different collective response, leading to a 2π CEP dependence. By ensuring interband contributions to the harmonic generation in a material with broken inversion symmetry, one paves the way for a one-to-one mapping of the laser CEP to the generated high harmonic spectrum. This objective is achieved through the utilization of a representative theoretical model for bulk ZnO that incorporates interband <cit.> contributions to harmonic generation (driven by a MIR laser) and possesses the C6v symmetry group, through a 2-band model. Then, a machine learning algorithm can be trained with pairs of known CEP values and solid HHG spectra to estimate the CEP of a new laser pulse based on the generated harmonic spectrum, as depicted in Figure <ref>. First, we test and validate this assumption by simulating solid HHG spectra generated from a ZnO crystal, using theoretically constructed laser pulses with a known absolute CEP parameter. Subsequently, we demonstrate that the effectiveness of this approach does not depend on an extensive dataset, thereby highlighting its practicality.
In the present work, we select three machine learning algorithms: Linear Regression, Extremely Randomized Trees (ExtraTree) and Gradient Boosting. The linear regressor, the simplest of the three, works by assuming a linear relationship between the predictor values X_j and the predicted value Y, expressed as Y = β_0 + ∑_j β_j X_j. Major advantages of this algorithm are its simplicity and fast training. The simple dependence on the β_j coefficients enables a direct estimation of the significance of individual features. However, the model only performs well if there is a direct linear correspondence between X_j and Y <cit.>. The ExtraTree and Gradient Boosting algorithms are both decision-tree-based ensemble models. In a conventional decision tree model, a tree is formed by segmenting the dataset into branches. The model makes decisions at each node based on feature values, eventually arriving at a prediction at the end-points of the branches. In the ExtraTree model, multiple decision trees are built and used concurrently, and the prediction is obtained by averaging the predictions of the independent trees. Different trees are grown from different parts of the training dataset by employing bagging, which decreases the bias and variance of the model compared to a single decision tree. The trees are built by randomly selecting a subset of features in the data, and by using random threshold values (as opposed to a specific criterion, such as information gain) to split the nodes of the tree <cit.>. This high degree of randomization significantly reduces the bias of the model, and makes it less likely to overfit compared to other ensemble models (such as Random Forest) <cit.>. As opposed to the ExtraTree model, where the trees are created by random selections, the Gradient Boosting regression builds the trees iteratively. In each iteration, the model accuracy is estimated by a loss function (typically the mean squared error, MSE), and new trees are created to correct the error of the previous trees. The final prediction is then given as a weighted average of the predictions of the trees. The weights are updated using a gradient descent method in each iteration of the training. This error-correcting strategy enables higher accuracy, but can also make the model more prone to overfitting <cit.>.

§ SIMULATIONS

In order to simulate the CEP-dependent HHG process, we employ a one-dimensional semiclassical model, which provides a computationally efficient way to compute the response of electrons in a periodic potential to a strong laser field <cit.>. Herein we adopt the single-electron approximation to derive the Bloch states and their associated energies, enabling us to discern the distinct contributions of various initial states to the high harmonic radiation. In more detail, for an electron with charge e and mass m, we consider the following Hamiltonian:

H(t) = (1/2m) (𝐩 - e𝐀(t))^2 + U(x),

where the velocity gauge is used, 𝐀(t) denotes the time-dependent vector potential component of the external field along the x direction, and U(x) represents the periodic (model) potential of the solid. This simplified model of the potential adequately captures essential features of the crystal lattice's atomic structure, including the band gap width; however, it ignores the three-dimensional symmetries, including the broken inversion symmetry of C6v. Consequently, in our investigation of ZnO targets, we appropriately consider factors such as the lattice constant of 5.2 Å along the c-axis and a band gap of 3.27 eV.
On the other hand, scattering events and a finite temperature can be taken into account by considering the density matrix ρ instead of pure quantum mechanical states (which are practically Bloch waves in this case). That is, we solve the von Neumann equation

∂ρ(t)/∂t = -(i/ħ) [H(t), ρ(t)] + ∂ρ(t)/∂t|_scatt,

such that the initial density matrix (before the interaction with the laser field) describes thermal equilibrium, and the second term is responsible for the scattering events. We consider the relaxation of the diagonal elements of the density matrix towards thermal equilibrium at a rate γ_d, and also the decay of the off-diagonal matrix elements (i.e., the loss of quantum mechanical coherences) at a rate γ_od. By considering realistic rates (γ_od = 0.25 and γ_d = 0.05), not only can the CEP dependence of the harmonic peaks be determined, but the most important features of the HHG spectra (namely the presence of the plateau, and the intensity dependence of the cutoff and of the heights of the harmonic peaks) can also be investigated <cit.>. A numerical solution is obtained using a Cash-Karp Runge-Kutta algorithm. The sinusoidal model periodic potential U(x) is parametrized by the lattice constant d and the potential depth U_0. Corresponding to the eight valence electrons present in one unit cell of ZnO, the value of U_0 is chosen so that the band gap between the 4th and the 5th band corresponds to the experimentally determined band gap of ZnO, 3.27 eV, as reported in <cit.>. The band structure of this one-dimensional model is presented in Fig. S4 of the Supplementary document. For simulating the HHG process, a laser pulse with t_p = 18 fs duration, λ_0 = 3.2 μm central wavelength and a peak intensity of I_0 = 1.36·10^12 W/cm^2 (corresponding to a peak field of E_0 = 3.20 V/nm) is selected. We employ a cos^2 temporal pulse profile as an approximation to a Gaussian profile, leveraging its computational efficiency. Our objective is to model an experiment that can be readily executed within a conventional HHG beamline setup, without necessitating specialized equipment. To this end, the most accessible means of harmonic radiation detection is the utilization of a spectrometer equipped with a CCD camera. Consequently, we multiply each HHG spectrum by a response function composed of a typical response curve of a grating and of a CCD camera, thereby constraining the range of observable harmonics. Details of this response function are reported in the SM. Using this numerical model, an HHG spectrum is simulated for ∼800 different CEP values between ±0.5π. The result is shown in Fig. <ref>a. After generating the simulated CEP-dependent harmonic spectra, they are partitioned into two datasets through random selection. Eighty percent of the data is allocated to the training dataset, while the remaining twenty percent is assigned to the test dataset. The training dataset is employed to train an ExtraTree model, enabling the recognition of patterns linking the spectrum to a specific CEP value. Subsequently, the trained model processes the spectra from both the training and the test datasets (the latter representing previously unseen spectra), leveraging the learned patterns to estimate the corresponding laser CEP. As anticipated, the estimated CEP values align perfectly for the training set (particularly considering that those data were used to train the model), as shown by the orange rectangles in Figure <ref>b.
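A minimal sketch of this training and evaluation step, written with scikit-learn, is given below. It assumes the simulated data are already available as an array X of spectra (one row per CEP value) and a vector y of CEP labels; the 80/20 split and the ExtraTrees regressor mirror the procedure described above, while the hyperparameters are illustrative defaults rather than the values actually used.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

def train_cep_estimator(X, y, seed=0):
    """Fit an ExtraTrees regressor mapping HHG spectra to CEP values (in rad)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = ExtraTreesRegressor(n_estimators=200, random_state=seed)
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    # relative root-mean-square error in percent of pi, as used in the text
    mse_percent = np.sqrt(np.mean((y_te - y_pred) ** 2)) * 100 / np.pi
    return model, mse_percent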
The predictive accuracy of the model is determined by the relative square root of the mean square error between the true (CEP_t) and predicted (CEP_p) carrier-envelope phase values, expressed in percentages as

MSE% = √( (1/N_test) ∑_{i=1}^{N_test} (CEP_t,i - CEP_p,i)^2 ) · 100/π.

When applying the model to the test set (blue circles), very good agreement is achieved between the true and the estimated CEP values, with an MSE% of 0.06. The outcome proves our assumption that there is a one-to-one link between the harmonic spectrum and the CEP, and that it can be captured with a machine learning model. The excellent agreement observed between the true and estimated CEP values can be attributed in part to the extensive sampling of the training data. Nevertheless, acquiring a sufficient number of experimental data points may pose challenges in practical scenarios. Generally, more training data enhances the model accuracy. Therefore, it is important to discuss the effect of the number of training data (N_train) on the model accuracy (MSE%) evaluated on the test data. Furthermore, in a general case, the data used for model training are picked at randomly selected label values. This has the disadvantage of an uneven sampling of the parameter space, which in turn results in a varying model accuracy over the parameter range. However, in an experimental situation where the tag values are known to lie within a closed range, it is possible to provide the model with evenly sampled training data. To address this question, we train three different models (ExtraTree, Linear Regression and GradientBoosting) using a varying number of training data (N_train) and a fixed number of test data (N_test). We have simulated a total of 800 solid HHG spectra corresponding to an equidistant CEP grid between -0.4π and 0.4π (in the simulation, because of the 1D model, the CEP has a π periodicity, therefore we reduced the examined CEP range to keep a one-to-one mapping). We used random sampling without replacement to select N_train = 1 to 400 spectrum-CEP pairs, and for each case we selected in a similar manner N_test = 150 pairs and calculated the model's performance, shown in Figure <ref>a. This random selection is performed 10 times for each value of N_train. The model is trained using the selected HHG spectra, and then tested using the N_test data points to estimate the CEP from spectra that were not known to the model before. The quality parameter is then obtained by performing a linear fit on the predicted versus actual CEP values of the test dataset. Figure <ref> presents a comparative result of this analysis of the effect of using randomly (a) and equidistantly (b) picked CEP training data. Regarding the models, we can observe similar trends in both cases. The ExtraTree model presents superior accuracy in the case of a low number of training data (N_train < 100), which can be attributed to the algorithm's marked resilience against overfitting. However, the linear model, which is the least accurate in this sparsely sampled regime, achieves superior accuracy in the case of densely sampled training data (N_train > 200). This hints at the existence of a strong linear correlation between the CEP of the laser and the features of the HHG spectra. The GradientBoosting method stays in between for small training sets, and performs comparatively poorly when a high number of training data is available.
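To make this protocol concrete, the sketch below outlines the learning-curve experiment: for each training-set size it draws N_train spectra at random (an equidistant selection over the CEP grid can be substituted for the evenly sampled case), trains a model, and evaluates the MSE% on N_test held-out spectra, repeating the draw several times. The names, the choice of regressor and the hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def mse_percent(y_true, y_pred):
    """Relative root-mean-square error in percent of pi."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) * 100 / np.pi

def learning_curve(X, y, train_sizes, n_test=150, n_repeat=10, seed=0):
    """Average MSE% versus the number of training spectra N_train."""
    rng = np.random.default_rng(seed)
    results = {}
    for n_train in train_sizes:
        scores = []
        for _ in range(n_repeat):
            idx = rng.permutation(len(y))            # sampling without replacement
            tr, te = idx[:n_train], idx[n_train:n_train + n_test]
            model = ExtraTreesRegressor(n_estimators=100).fit(X[tr], y[tr])
            scores.append(mse_percent(y[te], model.predict(X[te])))
        results[n_train] = float(np.mean(scores))
    return results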
As expected, in both sampling cases, at a low number of training data the performance improves exponentially (note the log-log plot) with the number of training data, but around N_train = 200 the prediction accuracy starts to saturate, indicating that additional training points will not substantially improve the model's precision. It is also observed that the equidistant sampling (Figure <ref>b) of the training data results in a significantly improved accuracy in all cases, increasing the accuracy by about an order of magnitude, especially in the case of low N_train (< 200). This implies that the number of training data points required for a given accuracy can be significantly reduced by performing a methodical CEP sampling in the experimental case. The improvement is especially prominent in the case of the ExtraTree model, since it gains an order of magnitude in prediction performance in favor of the evenly sampled scenario.

§ EXPERIMENTAL SETUP

The experiments were carried out using the MIR laser at ELI-ALPS <cit.>. The laser operates at a 100 kHz repetition rate at 3.2 μm and is capable of delivering 140 μJ of pulse energy. The measured spectrum of the driving field can be seen in Figure S1 in the Supplementary document. The 45 fs output pulses are spectrally broadened in BaF2 and Si optical windows and recompressed in bulk BaF2 windows combined with three reflections on negative-TOD dispersive mirrors to reach 18 fs pulse duration. The temporal profile of the pulse was measured by a TIPTOE device by coupling out the beam before the off-axis parabolic mirror. Accurate CEP measurement and control are crucial for this experiment; we used an f-2f setup called Fringeezz by Fastlite, which controls the acousto-optic programmable dispersive filter (AOPDF, Dazzler, Fastlite) in the OPCPA front-end in a closed loop. The device is able to measure CEP values shot-to-shot at 10 kHz, while the laser is capable of delivering 100 mrad CEP stability. We generate high harmonics in a 90 μm thick ZnO crystal by focusing 1.3 μJ pulses using an off-axis parabola with 100 mm focal length. The resulting intensity for the compressed pulses was 1.3·10^12 W/cm^2; by introducing chirp we lowered this intensity to 6.6·10^11 W/cm^2. After solid HHG, a thick, 50 mm long BK7 bulk filters out the MIR driving beam and transmits the harmonics, but cuts off radiation below 350 nm. Subsequently, the remaining visible range of the harmonic spectrum is imaged into a commercial spectrometer (AvaSpec-ULS2048 from Avantes). In this configuration we were able to measure from harmonic order 3 to harmonic order 9. During the measurement we recorded 100 spectra for each CEP setting and used the average of these during the analysis, as shown in Figure <ref>a. The CEP scan was recorded between -π and π with 100 evenly distributed CEP values in between. The spectrum for each CEP value is normalized in this plot. The fringe pattern extending over the NIR region of the harmonic spectrum in Figure <ref> is solely due to the interference of harmonics generated from the front and the rear side of the ZnO crystal <cit.>, and is therefore independent of the CEP. However, the harmonic spectrum exhibits harmonic minima at different CEP values, which is the result of constructive and destructive interference between the XUV bursts generated in consecutive laser half cycles <cit.>. Direct comparison between simulation and experimental data is not straightforward because, on one hand, in simulation the CEP is expressed in absolute terms, while in experiment it is relative.
On the other hand, the simulation does not take into account the complexity of the experimental conditions, and the 1D model cannot be expected to reproduce either the 3D symmetry properties of the real crystal or propagation-related effects such as phase matching. However, the shift of the spectral minima as a function of CEP is visible in both simulations and experiments. It is known that the extent of CEP sensitivity depends on the instantaneous structural changes in the laser-induced lattice, which are not taken into account in our simulations, possibly causing some differences in features. Furthermore, it is important to note that the CEP scan confirms a one-to-one mapping between the CEP and the related spectra, as required for the machine learning model to work. We randomly split the experimentally recorded data in Figure <ref> b into train (80%) and test (20%) sets and present the performance of a trained ExtraTree model in estimating the laser's CEP from the input spectra. The orange rectangles represent the CEP predictions using the spectra from the train data set, showing perfect agreement between the predicted and the true laser CEP. The blue circles present the performance of the model on the previously unseen test data, indicating that the model is well trained to recognize spectral patterns in the solid HHG spectrum and estimate the laser CEP correctly with an MSE% = 0.75. The only major discrepancy between the estimated and the true CEP occurs where there is a gap in the sampling of the train data set (around π rad in the figure). This issue can be overcome by evenly sampling the training dataset, as described in Figure <ref> b. To visualize this effect we resample our dataset into 50% train and 50% test with evenly and randomly sampled input CEP values, as shown in Figure <ref> c and d, respectively. As expected, the evenly sampled scenario (MSE% = 0.61) outperforms the randomly sampled case (MSE% = 1.15) by a factor of 2 in estimating the CEP of the laser from the harmonic spectra. These outcomes clearly demonstrate that our concept for CEP estimation works and could be used for monitoring the laser CEP during experiments. They also highlight that good model performance can only be achieved if the training CEP values are evenly sampled with a small step size. § CONCLUSION AND OUTLOOK In conclusion, we have laid out a conceptual scheme for estimating MIR laser CEPs that relies on the spectrum of high-order harmonics generated from a solid crystal and exploits the complex pattern-recognition ability of a machine learning model. Furthermore, we demonstrated the applicability of this concept both through theoretical simulations and experimental measurements. This proven scheme offers an economical, instrumentally undemanding option to measure the laser CEP. The concept can be generalized to other laser wavelengths, assuming that the combination of the laser and the crystal fulfills the requirements of a one-to-one mapping of the CEP to the solid HHG spectrum and the existence of 2π periodicity. Furthermore, in principle it is possible to perform single-shot recordings of the harmonic spectrum with a sampled beam (only 1% of the total energy was used) while an experiment is carried out with the remaining beam. In the case of a random-CEP laser source, the latter approach will allow CEP tagging. The current study relies on a reference method, as implemented in a fast CEP measurement device (Fringeezz by Fastlite), to train the ML model. 
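To make the training and evaluation procedure described above concrete, the following sketch (not part of the original study; the array names, number of trees, and MSE% normalization are our own assumptions) trains an ExtraTrees regressor on harmonic spectra labeled with reference CEP values and evaluates it on a held-out test set. In the experiment, the labels correspond to the shot-to-shot CEP values provided by the f-2f reference device, and an evenly spaced CEP grid for the training subset plays the role of the equidistant sampling discussed earlier.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

# Hypothetical inputs: `spectra` is an (n_shots, n_pixels) array of harmonic
# spectra, `cep` is an (n_shots,) array of reference CEP values in rad.
def train_cep_model(spectra, cep, test_size=0.2, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(spectra, cep,
                                              test_size=test_size,
                                              random_state=seed)
    model = ExtraTreesRegressor(n_estimators=200, random_state=seed)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # "MSE%" is assumed here to be the mean squared error normalized by the
    # (2*pi)^2 span of the CEP axis, in percent; the paper's exact definition may differ.
    mse_percent = 100.0 * np.mean((pred - y_te) ** 2) / (2.0 * np.pi) ** 2
    return model, mse_percent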
However, with more accurate modelling, the simulated and experimental CEP scans could match better, meaning that simulations could serve as a tool to train the ML model, as was done in <cit.> for FROG scans. An additional benefit of such simulations would be the retrieval of the absolute CEP of the laser, since in simulations it is an input parameter. It is important to note that this study assumes a fixed laser intensity for the training and for subsequent actual measurements and CEP retrieval. The question then arises whether intensity variations would ruin the process. Initial investigations have been carried out that show the ability of the machine learning model to recover CEP values from various field configurations and intensities. However, an in-depth study of the sensitivity to intensity fluctuations is ongoing. Overall, our proposed protocol, combined with precise simulations, opens up the route towards multi-parameter estimation and optimization using machine learning concepts, as has already been proposed by <cit.>. Solid HHG is a process where the laser parameters and the material features have a direct manifestation in the measured spectrum, making it an economical and adaptable tool for monitoring experimental conditions. Here we utilized the HHG signal from a semiconducting ZnO crystal in the MIR regime to infer the CEP, based on ML models trained with combined experimental and simulation results. Our findings demonstrate the effectiveness of ExtraTree-based models in exploiting the one-to-one correlation between CEP and HHG spectra. Our approach can prove instrumental in the in-situ determination and control of laser parameters, such as intensity and CEP, depending on the target material, prior to designing an advanced HHG-based experiment. While it is possible to infer the CEP from spectral measurements, this usually requires extended data acquisition. Our method achieves the same with a significantly smaller number of measurements, thereby helping to reduce the cost of running experimental campaigns, which is relevant for large-scale user facilities like ELI-ALPS. § FUNDING The ELI-ALPS project (GINOP-2.3.6-15-2015-00001) is supported by the European Union and co-financed by the European Regional Development Fund. § ACKNOWLEDGMENTS We thank Rajaram Shrestha for his kind help in processing data and plotting figures for the Supplementary Material.
http://arxiv.org/abs/2407.13407v1
20240718112158
Nonconvex landscapes for $\mathbf{Z}_2$ synchronization and graph clustering are benign near exact recovery thresholds
[ "Andrew D. McRae", "Pedro Abdalla", "Afonso S. Bandeira", "Nicolas Boumal" ]
math.OC
[ "math.OC", "math.ST", "stat.TH" ]
Andrew D. McRae^1, Pedro Abdalla^2, Afonso S. Bandeira^2, and Nicolas Boumal^1. ^1Institute of Mathematics, EPFL, Lausanne, Switzerland. ^2Department of Mathematics, ETH Zurich. Corresponding author: andrew.mcrae@epfl.ch. July 22, 2024. § INTRODUCTION AND RESULT HIGHLIGHTS In a synchronization problem, one wants to estimate n group elements g_1, …, g_n from pairwise relative measurements R_ij≈ g_i g_j^-1. Such problems are important in robotics <cit.>, computer vision <cit.>, signal processing <cit.>, computer science <cit.>, and many other areas. In this paper, we study synchronization over the two-element group _2 formulated as the real orthogonal group O(1) = {± 1} under multiplication. The _2 synchronization approach we consider takes the form of a combinatorial optimization problem (arising from a maximum likelihood and/or least squares formulation of the synchronization problem—see, e.g., <cit.>): max_x∈{± 1}^n ∑_i,j=1^n C_ij x_i x_j = max_x∈{± 1}^n⟨ C, x x^⊤⟩, where C ∈^n× n is a cost matrix to be specified based on the exact application, and ⟨ A, B⟩ = tr(A^⊤ B) denotes the entrywise Euclidean (Frobenius) inner product between real matrices A and B of the same size. As the NP-hard max-cut problem is an instance of (<ref>), we expect the problem to be computationally intractable in general (though not necessarily for particular instances; indeed, in the synchronization context, our results will yield a tractable algorithm for computing the exact solution). A classical (approximate) solution approach is the semidefinite program (SDP) relaxation of <cit.>: max_X ≽ 0⟨ C, X⟩ subject to ddiag(X) = I_n, where ddiag: ^n × n→^n × n extracts the diagonal part of a matrix, and I_n is the n× n identity matrix. For synchronization problems, this has proved to be a powerful and, in many cases, statistically optimal approach <cit.>. However, as we have approximately squared the number of variables compared to the original problem (<ref>), it is still computationally expensive to solve (<ref>) directly for large n. To lighten the computational burden of the SDP approach, we consider its Burer–Monteiro factorization <cit.>, which corresponds to the following family of smooth but nonconvex “partial” relaxations of (<ref>): for any integer r ≥ 2, max_Y ∈^n× r⟨ C, YY^⊤⟩ subject to ddiag(YY^⊤) = I_n. Although (<ref>) is nonconvex, in many cases it has a benign landscape in the sense that every second-order critical point Y (where the gradient is zero and the Hessian is negative semidefinite in a Riemannian geometric sense) is in fact globally optimal and also solves the SDP relaxation (i.e., X = Y Y^⊤ solves (<ref>)). This is generically true when r ≈√(n) <cit.>. However, for the specific problem instances arising in synchronization, there is much empirical and theoretical work showing that (<ref>) has a benign landscape even for small r not increasing with n <cit.>; in this case, we can solve (<ref>) via a local algorithm on the problem (<ref>), which has only rn variables. See, for example, <cit.> for an introduction to this approach for general orthogonal group synchronization problems. 
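To make the Burer–Monteiro relaxation concrete, the following sketch (ours, not from the paper; the step size, iteration count, and initialization are arbitrary illustrative choices) runs Riemannian gradient ascent for ⟨ C, YY^⊤⟩ over the product of unit spheres and then rounds a near-rank-one critical point Y ≈ z u^⊤ to a sign vector via its top left singular vector. In practice one would use a second-order method (e.g., Riemannian trust regions) to certify approximate second-order criticality, but the tangent projection and retraction are the same.

import numpy as np

def burer_monteiro_ascent(C, r, steps=2000, lr=None, rng=None):
    """Maximize <C, Y Y^T> over Y in R^{n x r} with unit-norm rows (a sketch)."""
    rng = np.random.default_rng(rng)
    n = C.shape[0]
    if lr is None:
        lr = 1.0 / (np.linalg.norm(C, 2) + 1e-12)       # crude step size
    Y = rng.standard_normal((n, r))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)       # feasible initialization
    for _ in range(steps):
        G = 2.0 * C @ Y                                  # Euclidean gradient of <C, Y Y^T>
        G -= np.sum(G * Y, axis=1, keepdims=True) * Y    # project onto the tangent of each sphere
        Y += lr * G                                      # ascent step
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # retract rows back to unit norm
    return Y

def rounded_signs(Y):
    """Round Y ~ z u^T to a vector in {+-1}^n via the top left singular vector."""
    u = np.linalg.svd(Y, full_matrices=False)[0][:, 0]
    x = np.sign(u)
    x[x == 0] = 1.0
    return x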
Our main deterministic result (<Ref>) provides sufficient conditions on the _2 synchronization problem (in terms of the noise level and measurement graph connectivity) for which not only does the corresponding problem (<ref>) have a benign landscape for small r, but also the solutions exactly recover the underlying ground-truth signal.[More precisely, we recover the ground truth up to a global sign to which the problem is invariant.] Our conditions are sharper than previous such results. We then combine this deterministic result with probabilistic analyses to obtain asymptotic exact recovery results for several common random models in _2 synchronization and graph clustering. In each case, our results match established optimal thresholds for exact recovery. This was previously only shown for the direct SDP approach or other specialized algorithms; previous analyses of the nonconvex approach fell short of the optimal thresholds. Furthermore, adapting the techniques of <cit.>, we show that our results largely inherit one of the key advantages of the SDP approach, which is its robustness to monotone adversaries (perturbations which seem “helpful” but can disrupt algorithms by changing the problem's random structure). Before describing our results in full generality, we now present in brief these key statistical exact recovery guarantees. §.§ _2 synchronization with Gaussian noise. As a simple example and application of our results, consider the problem of recovering a vector z∈{± 1}^n from the noisy relative measurements R_ij = z_i z_j + σ W_ij (symmetric in i,j), where σ≥ 0 controls the noise strength, and W_ij are independent and identically distributed (i.i.d.) standard normal random variables (modulo symmetry: W_ij = W_ji). Up to rescaling (and the choice of diagonal elements, which have no effect on our problem), W is often called a Gaussian Wigner matrix or a draw from the Gaussian Orthogonal Ensemble. <cit.> introduced this particular model as a simpler version of the well-known angular synchronization problem. <cit.> later used it as an alternative to the Bernoulli-noise problem we consider in <Ref>. We have the following asymptotic recovery result for this model (proved in <Ref>): For n ≥ 2, let C = z z^⊤ + σ W ∈^n × n, where W is a symmetric n × n random matrix with i.i.d. (up to symmetry) standard normal off-diagonal entries. Fix a tolerance ϵ>0 and an integer r_0 ≥ 3 independent of n, and suppose σ≤r_0-3/r_0-1√(n/(2+ϵ)log n). Then, with probability → 1 as n →∞, for all r ≥ r_0, every second-order critical point Y of (<ref>) with cost matrix C satisfies Y = z u^⊤ for some unit-norm u ∈^r. From this, we can recover z up to global sign as the top left singular vector of Y. <cit.> showed that the SDP relaxation achieves exact recovery when σ≤√(n/(2+ϵ)log n) and that this is the best possible threshold in terms of the noise variance σ. Our result shows that nonconvex optimization of (<ref>) achieves the same threshold for large enough r (depending only on ϵ); thus we can approach the optimal recovery threshold by nonconvex optimization over ∼ C(ϵ) n variables rather than ∼ n^2 as with the semidefinite relaxation (<ref>). Previously, <cit.> showed that the nonconvex problem (<ref>) with r = 2 can achieve exact recovery but with suboptimal scaling of σ in the problem size n. More recently, <cit.> proved this result with the optimal scaling, showing that for large enough r, σ≤1/4 + ϵ√(n/log n) suffices. <Ref> improves the constant to reach the exact asymptotic threshold. 
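As a quick numerical illustration of the Gaussian-noise corollary (our own toy experiment, not from the paper; the dimensions and noise level are arbitrary choices below the stated threshold scale), one can generate the model and check recovery with the sketch above:

import numpy as np

rng = np.random.default_rng(0)
n, r = 300, 5
z = rng.choice([-1.0, 1.0], size=n)
sigma = 0.4 * np.sqrt(n / (2.0 * np.log(n)))     # below the sqrt(n / (2 log n)) scale
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2.0)                     # symmetric standard Gaussian noise
np.fill_diagonal(W, 0.0)
C = np.outer(z, z) + sigma * W                   # cost matrix of the model

Y = burer_monteiro_ascent(C, r, rng=1)           # from the sketch above
x = rounded_signs(Y)
print("exact recovery up to sign:", np.all(x == z) or np.all(x == -z))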
§.§ _2 synchronization with partial measurements and Bernoulli noise The model (<ref>) assumes that we have a measurement for every pair (i,j), which is too restrictive for many applications. More generally, we may only have measurements for certain pairs (i,j). We model this by a measurement graph G=(V,E), where V = {1, …, n}, and we only have access to measurements R_ij≈ z_i z_j for pairs (i,j) ∈ E. For now, we consider a simple random graph model (we study more general deterministic graphs in <Ref>). Suppose the measurement graph G = (V, E) is sampled from the random graph distribution (n, p), in which each possible edge (i,j) appears with fixed probability p (independent of the others). Clearly, if the graph G is disconnected, then global synchronization is impossible. The random graph has a connectivity phase transition at p = log n/n. Thus we will consider p=a log n/n for some constant parameter a>1 because only in this regime is G connected with high probability for large n. We consider more general scaling of p with n in <Ref>. Furthermore, although the Gaussian noise in (<ref>) is convenient for analysis, it may not be the most natural noise model. Indeed, we have the prior knowledge that the entries of z (and hence the ideal observations of them) are ± 1. This motivates another multiplicative noise model in which the signs of our observations are flipped with some probability. We assume that the noise acts independently and identically (i.e., each observed sign is flipped with a constant probability) on each pair (i,j): this is the Bernoulli noise model. Specifically, given a sample G = (V, E) from (n,p), we want to recover a vector z∈{± 1}^n from the measurements R_ij = z_i z_j with probability 1 + δ/2 - z_i z_j with probability 1 - δ/2 for (i, j) ∈ E, where δ∈ [0, 1] determines how likely we are to see the correct sign. Larger δ implies better measurements; if δ = 0, the measurements are independent of the signal z (and therefore useless), whereas δ = 1 implies our measurements are entirely uncorrupted. Note that we can also interpret this problem as clustering graph vertices based on signed edges: if z_i is the “true” cluster label of node i, an edge between nodes i and j is likely to be positive (resp. negative) if i and j have the same (resp. different) label. This general problem arose in a machine learning context as “correlation clustering” <cit.>. The “_2 synchronization” interpretation was introduced by <cit.>. <cit.> proposed the specific probabilistic model we present here. Our next result establishes when we can obtain exact recovery of z with nonconvex optimization depending on the graph connectivity and signal strength parameters: Consider the model (<ref>) with δ∈ [0, 1], where the graph G is sampled from 𝒢(n,p) for p=a log n/n with fixed a > 1. Suppose, for some ϵ>0, a (1 - √(1 - δ^2)) ≥ 1 + ϵ. Then there exists a constant r_0 (depending only on a and ϵ) such that, with probability → 1 as n →∞, the following holds: if we choose the cost matrix C according to C_ij = R_ij if (i,j) ∈ E 0 otherwise, then, for all r ≥ r_0, every second-order critical point Y of (<ref>) satisfies Y = z u^⊤ for some unit-norm u ∈^r. This threshold is the best possible for exact recovery <cit.>. In particular, <cit.> showed that this threshold is achieved by the semidefinite relaxation (<ref>); once again, we show that we can reach the same threshold via nonconvex optimization of (<ref>). 
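For the partial-measurement model with Bernoulli noise, the only change with respect to the Gaussian toy example is the construction of the cost matrix, which carries the observed signs on the edges of the measurement graph and zeros elsewhere. A small sketch (ours; the parameters a and δ are arbitrary choices satisfying the recovery condition, and the factorization rank is an illustrative guess):

import numpy as np

rng = np.random.default_rng(0)
n = 500
a, delta = 4.0, 0.8                        # a * (1 - sqrt(1 - delta^2)) = 1.6 > 1
p = a * np.log(n) / n
z = rng.choice([-1.0, 1.0], size=n)

E = np.triu(rng.random((n, n)) < p, 1)     # Erdos-Renyi edges (upper triangle)
flip = np.where(rng.random((n, n)) < (1 - delta) / 2, -1.0, 1.0)
R = np.outer(z, z) * flip                  # correct sign w.p. (1+delta)/2, flipped otherwise
C = np.where(E, R, 0.0)
C = C + C.T                                # cost matrix: R_ij on edges, 0 otherwise

Y = burer_monteiro_ascent(C, r=8, rng=1)   # from the sketch above
x = rounded_signs(Y)
print("exact recovery up to sign:", np.all(x == z) or np.all(x == -z))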
<Ref> is a simplified version of the more general result <Ref> in <Ref>; see the discussion after that theorem for further context and related work. §.§ Graph clustering with the binary stochastic block model We now shift focus to a different measurement model. We are again given a graph and we want to cluster its nodes. In <Ref>, we had signed edges indicating whether nodes are in the same cluster, but now we have no signs and must work only with edge density. We want to assign the nodes to clusters such that the density of edges within clusters is larger than the density between clusters. This is a classical and well-studied problem and is also called “community detection” or the “planted partition” (or bisection) problem. See the surveys of <cit.> and the citations below for further introduction. We focus on one specific setup in this area. The stochastic block model (SBM) for a graph is the following: given “ground truth” clusters, we assume that nodes in the same cluster are connected with an edge with some fixed probability p, and that nodes in different clusters are connected with fixed probability q < p (the existence of each edge being independent of the others). In particular, we consider the binary and symmetric case, in which there are two equally-sized clusters. Let n be even, and consider the following random graph G on vertices 1, …, n. Given a vector z ∈{± 1 }^n of “true” cluster labels (with z = 0, as the cluster have equal size; denotes the all-ones vector), the entries A_ij of the adjacency matrix of G are independent (modulo symmetry) Bernoulli random variables with, for i ≠ j (we assume A has zero diagonal), (A_ij = 1) = p if z_i = z_j, and q if z_i ≠ z_j. The connection to synchronization becomes clear from the fact that A = p - q/2 z z^⊤ + p + q/2^⊤ - p I_n. The last term reflects the fact that the diagonal of A is zero. However, treating this exactly like _2 synchronization by taking cost matrix C = A in (<ref>) would not work, because the all-ones vector is a trivial solution. This is due to the dominance of the all-ones term in (<ref>). We can deal with this by (approximately) subtracting off the mean of the entries of A_ij: we set C_ij A_ij - 1/n^2∑_i,j=1^n A_ij, that is, C A - 1/n^2A^⊤^⊤. If the true average edge probability is known, we can more simply set C A - p + q/2^⊤. Because this graph resembles the random graph considered in the previous section, it is natural once again to set the edge probabilities proportional to log n/n. We then have the following result: Consider the model (<ref>) with parameters p = a log n/n and q = b log n/n for some a, b ≥ 0 satisfying √(a) - √(b) > √(2). There exists a constant r_0 (depending only on a and b) such that, with probability → 1 as n →∞, the following holds: if we choose the cost matrix C according to (<ref>) or (<ref>), then, for all r ≥ r_0, every second-order critical point Y of (<ref>) satisfies Y = z u^⊤ for some unit-norm u ∈^r. The condition on a and b is optimal for exact recovery <cit.>. Once again, we have shown that optimal results for the SDP relaxation (<ref>) (found in <cit.>) can be achieved with nonconvex optimization in (<ref>). This is a simplified version of the more general result <Ref> in <Ref>; see the discussion after that theorem for details and comparison to previous work. 
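For the stochastic block model, the algorithmic ingredient that changes is the centering of the adjacency matrix (the variant that does not require knowledge of p and q). A toy instance (ours; the parameters a, b and the sizes are arbitrary choices with √(a) - √(b) > √(2), and the factorization rank is an illustrative guess):

import numpy as np

rng = np.random.default_rng(0)
n = 400
z = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])   # balanced ground-truth clusters
a, b = 10.0, 1.0                                          # sqrt(a) - sqrt(b) > sqrt(2)
p, q = a * np.log(n) / n, b * np.log(n) / n
prob = np.where(np.outer(z, z) > 0, p, q)                 # within- vs between-cluster edge probability
A = np.triu((rng.random((n, n)) < prob).astype(float), 1)
A = A + A.T                                               # symmetric adjacency, zero diagonal

C = A - A.sum() / n**2                                    # subtract the empirical mean edge weight
np.fill_diagonal(C, 0.0)                                  # the diagonal does not affect the landscape
Y = burer_monteiro_ascent(C, r=6, rng=1)                  # from the sketch above
x = rounded_signs(Y)
print("clusters recovered up to sign:", np.all(x == z) or np.all(x == -z))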
§.§ Paper outline and notation The remainder of this paper is organized as follows: * <Ref> presents a general deterministic result (<Ref>) that describes when, in the context of _2 synchronization, the nonconvex problem (<ref>) has a benign landscape such that all second-order critical points yield exact recovery of the ground truth. * <Ref> uses this deterministic result together with probabilistic analyses to study the random _2 synchronization problems presented in <Ref>. In addition to <Ref> for Gaussian noise, we obtain <Ref>, which is a more general version of <Ref> for random graphs and Bernoulli noise. * <Ref> adapts these arguments to the graph clustering problem presented in <Ref>; we obtain the high-probability asymptotic result <Ref> from which <Ref> is derived. Along the way, we obtain a deterministic condition (<Ref>) that may be of independent interest. * <Ref> discusses the robustness (summarized in <Ref>) of our results to monotone adversaries. In general, we define notation as needed when we use it. However, we include here for completeness some basic definitions that are common in the literature and will appear throughout the paper. 𝟏 denotes the all-ones vector in ^n. ‖v‖ denotes the Euclidean (ℓ_2) norm of a vector v. ‖M‖, ‖M‖_F, and ‖M‖_* respectively denote the operator (in ℓ_2) norm, Frobenius (elementwise ℓ_2) norm, and nuclear norm (sum of singular values) of a matrix M. ⟨·, ·⟩ is the inner product notation both for the standard Euclidean inner product of vectors and the elementwise (Frobenius) inner product of matrices (the distinction will be clear from context). Diag(v), for a vector v ∈^n, denotes the n × n diagonal matrix with the entries of v on the diagonal. ddiag: ^n × n→^n × n extracts the diagonal of a matrix and sets the remaining elements to zero. a ≲ b means there is an unspecified but universal (i.e., not depending on any problem parameters) constant c > 0 such that a ≤ c b (and a ≳ b means b ≲ a). § DETERMINISTIC _2 SYNCHRONIZATION ANALYSIS In this section, we state and prove our main deterministic result for _2 synchronization. We start by stating the problem in greater generality than before. Let A ∈^n × n be the symmetric adjacency matrix of an undirected but potentially weighted graph G on n vertices. Assume the cost matrix C in (<ref>) has the form C_ij = A_ij z_i z_j + Δ_ij, that is, C = Diag(z) A Diag(z) + Δ, where z=(z_1,…,z_n) ∈{± 1}^n is the vector we want to recover, and Δ is a matrix representing noise. Notice that the models (<ref>) and (<ref>) are both instances of (<ref>). In general, we will assume, without loss of generality, that C, A, and Δ have zero diagonal, as this has no effect on the landscape of (<ref>) (only changing the objective value by a constant). We consider the following question: which conditions on the graph G (i.e., its adjacency matrix A) and the noise matrix Δ ensure that we can recover z via (<ref>)? * Connectivity: Clearly, if the graph G is disconnected, synchronization is impossible. Thus we expect some measure of connectivity to be important. In particular, recall that the Laplacian matrix of G is L ≔ Diag(A𝟏) - A = D - A, where D is the n × n diagonal matrix whose diagonal entries are the vertex degrees d_1, …, d_n: D_ii = d_i ≔∑_j ≠ i A_ij. It is well known that L is positive semidefinite with eigenvalues 0=λ_1≤λ_2 ≤…≤λ_n. Furthermore, G is connected if and only if λ_2>0. The quantity λ_2 is called the algebraic connectivity of G and plays a key role in our analysis. 
* Local Stability: Notice that to ensure we recover z exactly even with the original combinatorial problem (<ref>), we must rule out the possibility that flipping one entry of z could increase (or fail to increase) the objective function. In particular, we must have, for all i, Cz z^⊤ > Cz_-i z_-i^⊤, where z_-i:=(z_1,…,z_i-1, -z_i, z_i+1, …,z_n). Expanding this via (<ref>) reveals that this condition is equivalent to d_i > -z_i∑_j = 1 j ≠ i^n Δ_ij z_j ρ_i^Δ. Therefore, it is natural to expect that a control on the quantity ρ^Δmax_i ρ_i^Δ will be necessary. * Noise spectral norm: Beyond the local stability condition above, it is also useful to control the noise Δ via its matrix operator norm Δ. For the Burer–Monteiro approach (<ref>), our main result shows that a suitable condition on those three quantities (depending also on the factorization rank r) leads to exact recovery of z: Let G be a connected graph on n vertices with adjacency matrix A and algebraic connectivity λ_2. Let C be the cost matrix of the _2 synchronization measurement model (<ref>) on G. Let r ≥ 4 be an integer, and suppose ρ^Δ + r+11/r-3Δ≤r-3/r-1λ_2. Then every second-order critical point Y of (<ref>) satisfies Y = z u^⊤ for some unit-norm u ∈^r. The same result holds if r = 3 and Δ = 0. We defer the proof to the end of this section. When Δ = 0, we recover known results <cit.>. The result and its proof build on and resemble those of <cit.>. Compared to <cit.>, our result only depends on λ_2 rather than a condition number and is thus tighter and more general. Compared to <cit.>, our analysis is specialized to the _2 case in order to achieve tighter results in terms of the noise Δ. More broadly, our work builds on recent results in nonconvex optimization for orthogonal group synchronization (often known, for particular problems, as phase/angular synchronization or synchronization of rotations). See, for example, <cit.> for more on this topic. However, most existing results in this area do not directly yield exact recovery guarantees in the presence of noise (in fact, it is not, in general, possible to recover the ground truth exactly with noise because every orthogonal group other than _2 is continuous-valued). Exceptions to this are the results of <cit.> as discussed previously; these papers specifically consider the _2 case. Indirectly, we could combine the more general benign landscape results with other previous work showing that the ground truth is the global optimum of the SDP relaxation (e.g., <cit.>), but the assumptions required on the problem parameters would be suboptimal. <Ref> provides sufficient conditions for exact recovery, but we make no claim that the conditions are necessary in any way (even approximately): there are many instances where (<ref>) has a benign landscape that is not explained by our results. For example, much existing work (e.g., <cit.>, or the vast literature on Kuramoto oscillators) concerns the case r = 2, which our result does not cover. This is particularly relevant to our results' robustness to monotone adversaries, as there are problem instances (not covered by our results) where the nonconvex approach succeeds but is not robust to monotone adversaries; see <Ref>. On the other hand, this illustrates the benefits of requiring r ≥ 3: the Kuramoto oscillator literature presents many examples (even with Δ = 0) for which (<ref>) does not have a benign landscape for r = 2, and the known examples which are not robust to monotone adversaries also have r = 2. 
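The three quantities entering the sufficient condition above are all directly computable for a given instance. The following helper (ours; it simply evaluates the condition ρ^Δ + ((r+11)/(r-3)) ‖Δ‖ ≤ ((r-3)/(r-1)) λ_2 and is not an optimized implementation) makes the roles of the algebraic connectivity, the local-stability quantity ρ^Δ, and the noise spectral norm explicit:

import numpy as np

def deterministic_condition_holds(A, Delta, z, r):
    """Check the sufficient condition of the deterministic theorem, assuming r >= 4,
    for the cost matrix C = Diag(z) A Diag(z) + Delta."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A                       # graph Laplacian of A
    lam2 = np.sort(np.linalg.eigvalsh(L))[1]             # algebraic connectivity lambda_2
    op_norm = np.linalg.norm(Delta, 2)                   # spectral norm of the noise
    rho = max(
        -z[i] * (Delta[i] @ z - Delta[i, i] * z[i])      # rho_i = -z_i * sum_{j != i} Delta_ij z_j
        for i in range(n)
    )
    return rho + (r + 11) / (r - 3) * op_norm <= (r - 3) / (r - 1) * lam2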
The “benign landscape” aspect of our results (i.e., that second-order critical points are global optima) depends fundamentally on the fact that we can recover the ground truth exactly. Without this, landscape analysis becomes more difficult, as it is more complicated to describe the global optimum. This can be seen in the literature on more general orthogonal group synchronization, where we do not expect exact recovery in the presence of noise and, consequently, the best-known conditions for a benign landscape are more strict (see, e.g., the non-exact–recovery results of <cit.>). Nevertheless, outside the exact recovery regime, one could obtain weaker statistical recovery results (second-order critical points are close to the ground truth) as in, for example, <cit.>, but we do not explore this here. We now turn to the proof of <Ref>. The proof resembles that of <cit.>, but, because we are optimizing over spheres rather than more general Stiefel manifolds, we can refine that proof to obtain a stronger result. The first step is to write down the conditions a second-order critical point Y of (<ref>) must satisfy. See <cit.> for an overview for problems of this form. In short, (<ref>) is an optimization problem over a Riemannian manifold (in particular, a product of n (r-1)-spheres embedded in ^r). Setting S(Y) (C Y Y^⊤) - C, a second-order critical point Y must satisfy the following: * S(Y) Y = 0; this is equivalent to the Riemannian gradient being zero. * For any ∈^n × r satisfying (Y ^⊤) = 0, we have S(Y)^⊤≥ 0; this is equivalent to the Riemannian Hessian being negative semidefinite. We can, without loss of generality (see <cit.> for similar arguments), assume that z=, as the change of variables Y ↦(z) Y implies the result for a general z∈{± 1}^n. Although Δ changes, the important quantities Δ and ρ^Δ do not. Under this assumption, we have C = A + Δ, and ρ^Δ_i = - ∑_j = 1^n Δ_ij (recalling that we have assumed Δ is zero on the diagonal). Let L^Δ = (Δ) - Δ be the Laplacian matrix[This Laplacian matrix is not, in general, positive semidefinite like a graph Laplacian, as the entries of Δ are not assumed to be nonnegative.] of Δ, and note that L^Δ = 0 and L^Δ_ij = -ρ^Δ_i if i = j, -Δ_ij otherwise. Note that for i ≠ j, C_ij = - (L_ij + L^Δ_ij). Furthermore, note that adding diagonal entries to C has no effect on S(Y) due to the constraint (Y Y^⊤) = I_n. We can therefore write S(Y) = - ( Y Y^⊤), where := L + L^Δ . We will use the notation Y = Y_1 ⋮ Y_n ∈^n× r and = _1 ⋮ _n ∈^n× r. We will also treat the 1 × r matrix rows as vectors in ^r. To prove Theorem <ref>, it suffices to show that Y= u^⊤ for some u∈^r; the problem constraint then ensures u^2 = Y_1 Y_1^⊤ = 1. To show this, our proof will consider a general decomposition Y= u^⊤ + W, where u ∈^r is chosen such that W^⊤ = 0. The desired result is thus equivalent to W = 0. With this decomposition defined, we can state the following technical lemma (proved at the end of this section) that we will need: Given a feasible point Y of (<ref>), let Q be the n × n matrix with entries Q_ij=1/4Y_i-Y_j^4. Decomposing Y = u^⊤ + W with W^⊤ = 0 and setting a_i = 1/4W_i^4 (where W_i ∈^r is the ith row of W) for i = 1, …, n, we have Q = a ^⊤ + a^⊤ + , where is a matrix satisfying ≤ 14 W^2. The key step in the proof is to apply the second-order criticality inequality S(Y)^⊤≥ 0 for a particular choice of ∈^n × r satisfying the requirement (Y ^⊤) = 0. It is, in fact, convenient to choose randomly and take an expectation. 
Our random choice of is also used by <cit.> and is a special case of those of <cit.>. We choose _i = Γ - ΓY_i Y_i, where Γ is a 1 × r matrix whose entries are i.i.d. standard Gaussian random variables. Note that Y_i_i = 0 for all i, so indeed we have (Y ^⊤) = 0. Next, it follows directly from the choice of that _i_j = r - 2 + Y_iY_j^2 = r - 2+ *1 - 1/2Y_i - Y_j^2^2 = r - 1 - Y_i - Y_j^2 + 1/4Y_i - Y_j^4_Q_ij = r - 3 + 2 Y_iY_j + Q_ij. Equivalently, ^⊤ = (r - 3) ^⊤ + 2 Y Y^⊤ + Q. We apply the second-order criticality condition under the expectation to obtain 0 ≤S(Y)^⊤ = S(Y)(r-3) ^⊤ + 2 Y Y^⊤ + Q (a)=S(Y) (r-3) ^⊤ + Q = L + L^Δ - ((L + L^Δ) Y Y^⊤) (r-3) ^⊤ + Q (b)= - A + ΔQ - (r-3) L + L^ΔY Y^⊤. Equality (a) uses the fact that S(Y)Y Y^⊤ = 0 (this is clear from the first-order criticality condition S(Y) Y = 0, but one can easily verify that the same holds for any feasible Y). Equality (b) uses the facts that is in the null space of both L and L^Δ and that Q is zero on the diagonal. Rearranging, we obtain the inequality (r - 3) L + L^ΔY Y^⊤ + AQ≤ -ΔQ. Recalling the decomposition Y = u^⊤ + W with W^⊤ = 0, note that L + L^ΔY Y^⊤ = L + L^ΔW W^⊤ = L - ΔW W^⊤ - ∑_i=1^n ρ^Δ_i W_i^2 ≥λ_2 - ΔW^2 - ∑_i=1^n ρ^Δ_i W_i^2. The last inequality follow from the fact that for any symmetric matrices A, B with B ≽ 0, AB≥λ_min(A) (B). Because G is a connected graph, unless Y_1 = ⋯ = Y_n (or, equivalently, W = 0), we have AQ = 1/4∑_i,j=1^n A_ijY_i - Y_j^4 > 0. From now on, assume, by way of contradiction, that W ≠ 0, so (<ref>) holds. Next, we upper bound -ΔQ. By <Ref>, -ΔQ = -2 Δa ^⊤ - Δ ≤ -2 ∑_i,j=1^nΔ_ij a_i + 14 ΔW^2 = 1/2∑_i=1^n ρ^Δ_i W_i^4 + 14 ΔW^2. Combining (<ref>), (<ref>), (<ref>), and (<ref>), we obtain (r-3) (λ_2 - Δ) W^2 <14 ΔW^2 + ∑_i=1^n ρ^Δ_i * (r-3) W_i^2 + 1/2W_i^4 ≤ 14 ΔW^2 + ρ^Δ* (r-3) W^2 + 1/2∑_i=1^nW_i^4 . Using the fact that W_i≤ 2 and therefore W_i^4 ≤ 4 W_i^2, we obtain (r - 3)(λ_2 - ρ^Δ - Δ) W^2 < 14 ΔW^2 + 2 (ρ^Δ∨ 0) W^2, where we have used the shorthand a ∨ b = max{a, b}. The last inequality is equivalent to * (r-3) [λ_2 - ρ^Δ] - 2(ρ^Δ∨ 0) - (r + 11) ΔW^2 < 0. If (r-3) [λ_2 - ρ^Δ] - 2(ρ^Δ∨ 0) - (r + 11) Δ≥ 0, we obtain a contradiction (of the supposition W ≠ 0) and thus must have W = 0. It is clear that (<ref>) holds if r = 3 and Δ = 0 (and hence ρ^Δ = 0). If r ≥ 4, considering the two cases ρ^Δ≥ 0 and ρ^Δ≤ 0 reveals that (<ref>) implies (<ref>). This completes the proof of <Ref>. Finally, we prove the technical lemma we used: Expanding the entries of Q, we have Q_ij = 1/4Y_i - Y_j^4 = 1/4W_i - W_j^4 = 1/4*W_i^2 + W_j^2 - 2 W_iW_j^2 = 1/4*W_i^4 + W_j^4 + 2 W_i^2 W_j^2 + 4 W_iW_j^2 - 4(W_i^2 + W_j^2) W_iW_j = (a ^⊤ + a^⊤)_ij + *1/2W_i^2 W_j^2 + W_iW_j^2 - (W_i^2 + W_j^2) W_iW_j__ij. Now we must bound . First, noting that for a matrix M ≽ 0, M = (M), we have (again recalling that each W_i≤ 2) **1/2W_i^2 W_j^2 + W_iW_j^2 _ij = 3/2∑_i=1^n W_i^4 ≤ 6W^2. To bound the remaining terms W_i^2 W_iW_j_ij = W_j^2 W_iW_j_ij, note that the matrix W_i^2 W_iW_j_ij is the Hadamard (entrywise) product of W W^⊤ (whose nuclear norm is W^2) with the matrix v ^⊤, where v_i = W_i^2. The inequality <cit.> bounds the singular values of Hadamard products with such structure: it implies that, because v_i≤ 4 for all i, **W_i^2 W_iW_j_ij≤ 4 W^2. Putting these bounds together with the triangle inequality, we obtain the bound ≤ 14 W^2, completing the proof of <Ref>. 
§ ASYMPTOTIC PROBABILISTIC RESULTS FOR _2 SYNCHRONIZATION In this section, we apply the deterministic result <Ref> to the problems with random noise and graphs described in <Ref>. §.§ Complete graph and Gaussian noise First, we apply <Ref> to the model (<ref>) in <Ref> to prove <Ref>; this is not only a simple and illustrative application of <Ref> but also a warm-up to the more involved probabilistic analyses later in this paper. In this case, the measurement graph G is the complete graph on the vertices 1, …, n, and the noise matrix has the form Δ = σ W, where W is an n × n symmetric random matrix with i.i.d. (modulo symmetry) standard normal entries. To apply <Ref>, we need to estimate three quantities: λ_2, ρ^Δ and Δ. Because G is the complete graph, it has graph Laplacian L = n I_n - ^⊤ and thus algebraic connectivity λ_2 = n. Next, noting that ρ^Δ in (<ref>) is the maximum of n zero-mean Gaussian random variables with variance (n-1) σ^2, standard concentration inequalities for Gaussian variables and matrices (see, e.g., <cit.>) yield that, for any ϵ' > 0, with probability → 1 as n →∞, ρ^Δ≤σ√((2+ϵ')nlog n) and Δ≤ 3σ√(n). On this event, plugging λ_2=n and these upper bounds for ρ^Δ and Δ into <Ref>, we obtain that for r≥ 4, the desired result holds if σ√((2 + ϵ')nlog n) + 45 σ√(n)≤r-3/r-1n. The first term dominates as n →∞. With, for example, ϵ' = ϵ/2 (with ϵ as in the corollary statement), this holds for sufficiently large n when σ≤r-3/r-1√(n/(2+ϵ)log n). In the case r = 3, the condition implies σ = 0 and therefore Δ = 0, so <Ref> still applies. This completes the proof of <Ref>. §.§ random graph and Bernoulli noise We now state and prove a general result for random graphs with Bernoulli noise, as described in <Ref>: Consider the _2 synchronization measurement model with Bernoulli noise (<ref>) for some δ∈ (0, 1] on an random graph G ∼(n, p). Suppose that the problem parameters p, δ, and r ≥ 4 (which can depend on n) satisfy, for some 0 < ϵ≤ 1/3 independent of n, np/log n* 1 - √(1 - *r-3/r-1 - ϵ^2 δ^2 )≥ 1. Then, with probability → 1 as n →∞, for cost matrix C as in (<ref>), every second-order critical point Y of (<ref>) satisfies Y = z u^⊤ for some unit-norm u ∈^r. A more restrictive condition for which the same result holds is that, for some ϵ > 0 independent of n, 1/δ≤r-3/r-1√(np/(2+ϵ) log n). Note that condition (<ref>) is not tight in terms of r in the “noiseless” case δ = 1, as then <Ref> guarantees that even r=3 suffices when the measurement graph is connected (which occurs when np/log n > 1). In fact, by quite different methods, <cit.> show that even r = 2 suffices under similar conditions on p and n. The threshold np/log n (1 - √(1 - δ^2)) = 1, which <Ref> approaches for large r and small ϵ, is the best possible for exact recovery <cit.>. In particular, <cit.> showed that (in the case that p is proportional to log n/n) this threshold is achieved by the semidefinite relaxation (<ref>); we show that we can reach the same threshold via nonconvex optimization of (<ref>). Furthermore, our result, like the SDP approach, is robust to monotone adversaries: see <Ref> for more details. To our knowledge, the only previous work to consider our nonconvex optimization approach for this specific model is that of <cit.>; though phrased differently in terms of Kuramoto oscillators, this is equivalent to our model in the specific case p = 1 (complete graph). Our result improves this previous result by a constant factor. We now prove <Ref> from <Ref>. 
As in the proof of <Ref> in <Ref> we can assume, without loss of generality, that z =. If A is the adjacency matrix of the random graph G, the model (<ref>) with cost matrix C as in (<ref>) gives, independently (modulo symmetry) for all i and j, C_ij = A_ij with probability 1 + δ/2 - A_ij with probability 1 - δ/2. We could directly apply <Ref> with adjacency matrix A and noise matrix Δ = C - A. However, this turns out to be problematic for two reasons: first, this Δ is not zero-mean, and thus Δ may be quite large (though this problem could be remedied with some rescaling tricks similar to what we do below); second and more seriously, the tight coupling between the graph G and the locations of the errors turns out to make the general result <Ref> not as tight as possible when applied directly. We therefore take a different approach. To apply <Ref>, we can decompose C as C = + however we like as long as the entries of are nonnegative. The adjacency matrix A of an graph is well-approximated (up to rescaling) by that of the complete graph. In particular, noting that C = δ p (^⊤ - I_n), we write C = δ p (^⊤ - I_n)_ + C - C_. Now, is the (rescaled) complete-graph adjacency matrix, and includes both the original measurement error and the “sampling noise” of the random graph itself. The scaled graph Laplacian of is = δ p (n I_n - ^⊤), for which λ_2 = ⋯ = λ_n = n δ p. We apply <Ref> with this and . We first need to bound the error operator norm . The following general-purpose result is useful: Let X be a real symmetric n × n random matrix with independent (modulo symmetry) and zero-mean entries X_ij satisfying X_ij≤ 2 almost surely and X_ij^2 ≤ v for all i,j, where v ≥log n/n. Then, with probability at least 1 - n^-3, X≲√(nv). This is a slight generalization of <cit.> and is a corollary of, for example, <cit.>. Using the inequality 1 - √(1 - x)≤ x for x ∈ [0, 1], condition (<ref>) implies np/log nδ^2 ≥ 1; this also implies p ≥log n/n. We can then apply <Ref> with X = = C - C and v = p; it is clear from the definition of that _ij = 0 and _ij≤ 2, while _ij^2 ≤ C_ij^2 = A_ij = p. We then obtain, with probability → 1 in n, ≲√(np)≤n δ p/√(log n). This implies that for large enough n, the term in the condition (<ref>) is negligible compared to λ_2 = nδ p. It thus remains to deal with ρ^ from (<ref>). It suffices to show that, with probability → 1, ρ^/ nδ p≤r-3/r-1 - ϵ' for any ϵ' > 0 independent of n. This ϵ' provides some slack to account for the asymptotically negligible . It turns out we also need slack in another place in the proof, so we set, with ϵ > 0 as in the theorem statement, c r-3/r-1 - ϵ, c' r-3/r-1 - ϵ/2, and we will show that ρ^≤ c' nδ p with high probability. To do so, we need to upper bound the n random variables ρ^_i = -∑_j = 1^n (C_ij - C_ij) = ∑_j = 1 j ≠ i^n (δ p - C_ij). For each i, the sum in j is over n-1 i.i.d. random variables. We can calculate, for t ∈, e^t(δ p - C_ij) = e^t δ p* p 1 + δ/2 e^-t + p 1 - δ/2 e^t + 1 - p, so, using the inequality log (1 + x) ≤ x, log( e^t ρ^_i) = (n-1) log* e^t δ p* p 1 + δ/2 e^-t + p 1 - δ/2 e^t + 1 - p ≤ np *δ t + 1 + δ/2 e^-t + 1 - δ/2 e^t - 1 . 
Recalling from (<ref>) that c = c' - ϵ/2, we then obtain, by a Chernoff bound and union bound, for t ≥ 0, (ρ^≥ c' nδ p) ≤ n max_i  e^t ρ^_i/ e^c' nδ p t ≤ n exp* np *δ (1 - c') t + 1 + δ/2 e^-t + 1 - δ/2 e^t - 1 = exp*log n + np * (1 - c) δ t + 1 + δ/2 e^-t + 1 - δ/2 e^t - ϵδ t/2 - 1 ≤exp*log n + np *1 + cδ/2 e^-t + 1 - cδ/2 e^t - ϵδ t/2 - 1, where the last inequality uses the fact that, for all t ≥ 0, t ≤sinh t = e^t - e^-t/2. Choosing t = 1/2log1 + cδ/1 - c δ, we obtain (ρ^≥ c' nδ p) ≤exp*log n + np *√(1 - c^2 δ^2) - 1 - ϵδ np/4log1 + c δ/1 - cδ ≤exp* - ϵ c δ^2 np/2. The second inequality uses 1/2log1 + x/1-x≥ x for x ≥ 0 and the condition (<ref>). Noting that the same condition (<ref>) implies log n ≤ np * 1 - √(1 - c^2 δ^2)≤ np c^2 δ^2 (using the inequality 1 - √(1 - x)≤ x for x ∈ [0,1]) and that c ≤ 1, we obtain (ρ^≥ c' nδ p) ≤ e^-ϵ/2log n→ 0. This proves the sufficiency of condition (<ref>) in <Ref>. For the simplified condition (<ref>), note that np/2 log n c^2 δ^2 ≥ 1 implies (<ref>) by the inequality 1 - √(1 - x)≥x/2 for x ∈ [0, 1]. Redefining ϵ > 0 as needed, we see that condition (<ref>) is sufficient. § GRAPH CLUSTERING UNDER THE SBM In this section, we adapt our arguments and results in <Ref> to the problem of graph clustering under the binary stochastic block model (SBM) described in <Ref>. The main result, of which <Ref> is a simplified version, is the following theorem: Consider the model (<ref>) with parameters 0 ≤ q < p ≤ 1. Suppose the problem parameters p, q, and r ≥ 4 (which can depend on the problem size n) satisfy, for some 0 < ϵ≤ 1/12 independent of n, n/log n*√( p - *1/r-1 + ϵ (p - q) ) - √(q + *1/r-1 + ϵ (p - q) )^2 ≥ 2. Then, with probability → 1 as n →∞, with cost matrix C as in (<ref>) or (<ref>), every second-order critical point Y of (<ref>) satisfies Y = z u^⊤ for some unit-norm u ∈^r. For sufficiently large r and small ϵ, this approaches the optimal threshold n/log n (√(p) - √(q))^2 = 2 for exact recovery <cit.>. The SDP relaxation (<ref>), which has the key benefit of robustness to monotone adversaries (see <Ref>), was shown to achieve this threshold by <cit.>. Our result <Ref> shows that we can achieve the same threshold (with similar robustness to monotone adversaries—see <Ref> in <Ref>) with continuous, benignly nonconvex optimization while optimizing over fewer variables. The first theoretical work on the nonconvex approach to this problem was by <cit.>, who studied the case r = 2. That paper gives sufficient conditions for high-probability exact recovery, but these conditions scale suboptimally in the problem size n. Recently, <cit.> proved the optimal dimension scaling, showing that, for r ≥ 4 and for some c > 0, n/log n(p-q)^2/p+q≥ c suffices for exact recovery. Our result closes the remaining gap to the optimal threshold. There has been a variety of work on more general graph clustering models that encompass both the _2 synchronization problem with Bernoulli noise of <Ref> and the ordinary clustering problem of <Ref> as special cases (e.g., the “weighted” SBM of <cit.>, the “labeled” SBM of <cit.>, or the “signed” SBM of <cit.>). However, to combine or generalize these models in our algorithmic framework, we must reconcile two factors: * In the Bernoulli noise model, the problem difficulty is invariant to the actual value of the ground truth labels z (this is clear from the proofs, in which we assume without loss of generality that z =). * For the ordinary SBM, the cluster sizes matter greatly. 
It is standard in the literature to assume, as we have, that the clusters have equal size (indeed, this assumption is present even in <cit.>). Otherwise, the problem difficulty depends on the relative cluster sizes (see, e.g., <cit.>). Furthermore, if the parameters p and q are unknown, it becomes necessary to estimate the cluster sizes, which adds considerable technical complication to the algorithm and analysis (again, see <cit.>). Thus to unify the problems, we need either to impose a balancing requirement (which is artificial and unnecessary for the pure synchronization/“signed clustering” problem) or to deal with the complications of the SBM with unbalanced clusters. For simplicity and clarity, we avoid this in the present work. §.§ Deterministic condition for cluster recovery As the first step of proving <Ref>, we consider how to translate the deterministic analysis of <Ref> to the somewhat different graph clustering problem. Recall from <Ref> that we are given the n × n adjacency matrix A of a random graph G with distribution given by (<ref>) for unknown cluster labels z ∈{± 1 }^n and with (potentially unknown) problem parameters 0 ≤ q < p ≤ 1. We have further assumed that the clusters are balanced: z = 0 (hence n must be even). We then attempt to recover z by the problem (<ref>) with cost matrix C = A - 1/n^2A^⊤^⊤ as in (<ref>) or, if the parameters are known, C = A - p+q/2^⊤ as in (<ref>). We can find a condition similar to the local stability for _2 synchronization in (<ref>) as follows: for every i∈{1, …, n}, consider the quantity d^z_i  {neighbors of i from same cluster} - {neighbors of i from other cluster} =  z_i ∑_j = 1 j ≠ i^n A_ij z_j. A necessary condition for ± z to be the unique optimum of (<ref>) is that flipping the sign of one entry of z must strictly decrease the objective function. One can verify that, for any C of the form C = A - α^⊤, this is equivalent to d^z_i + 2 α > 0 for all i. Thus we expect d^z_min:=min_i d^z_i to play a role in the analysis. These quantities have appeared before in the literature. One result of <cit.> says that we can asymptotically obtain exact recovery if and only if n (d^z_i ≤ 0) → 0 as n →∞. For the SDP relaxation, <cit.> show that d^z_min > A - A is a (deterministically) sufficient condition for exact recovery. Our first step is to show that a similar condition suffices for the nonconvex problem (<ref>). A careful application of <Ref> yields the following result: Let r≥ 4 be an integer, suppose p > q, and suppose one of the following holds: * We set the cost matrix C as in (<ref>) and assume d^z_min≥r + 11/r-3 (2 A - A + p) + n(p-q)/r-1. * We set C as in (<ref>) and assume d^z_min≥r + 11/r-3A - A + n(p-q)/r-1. Then every second-order critical point Y of (<ref>) satisfies Y = z u^⊤ for some unit-norm u ∈^r. In the limit r →∞ (which one can take even for fixed n for the full SDP relaxation—see, e.g., <cit.>), (<ref>) becomes d^z_min > A - A; this is identical to the requirement of <cit.>. The extraneous factor of two in (<ref>) (the p term is negligible for large n) is the result of a crude approximation and could be improved at the cost of a slightly more complicated theorem statement. But in any case, the A - A terms of (<ref>) and (<ref>) will prove to be negligible in our asymptotic analysis. The result follows from applying <Ref> to a suitably translated problem. Similarly to the proof of <Ref> in <Ref>, we compare the cost matrix C to that arising from an idealized complete-graph problem. 
We cannot assume now that z = (as the balancing property z = 0 is important), so the notation will be somewhat more cumbersome than before. First, we prove the first part of <Ref>, where C = A - 1/n^2A^⊤^⊤. For convenience, we denote 1/n^⊤ and I_n - = I_n - 1/n^⊤ as the orthogonal projection matrices onto {} and {}^⊥ respectively. We can then more compactly write C = A - A. Recalling the expression for A in (<ref>), note that C = p-q/2 z z^⊤ - p I_n + p . We can then decompose C = p-q/2 z z^⊤ - p I_n + p + C - C = p-q/2 (z z^⊤ - I_n)_(z) (z) - p+q/2 I_n + p + C - C_, where = p-q/2 (^⊤ - I_n) is the complete-graph adjacency matrix with scaling factor p-q/2. As before, we can assume the diagonal elements of C are zero; we therefore ignore the identity term in the decomposition of C and any diagonal elements of . We then apply <Ref> with graph adjacency matrix and noise matrix . Clearly, λ_2 = np-q/2. Next, note that we can bound = p + A - A - (A - A) ≤ p + 2 A - A. Last, we need to bound ρ^ = max_i ρ^_i where ρ^_i = - z_i ∑_j = 1 j ≠ i^n _ij z_j. Recalling (<ref>), note that for i ≠ j, _ij = p/n + C_ij - C_ij = p/n + A_ij - 1/n^2∑_k,ℓ A_kℓ - p-q/2 z_i z_j - p/n = A_ij - A^⊤/n^2 - p-q/2 z_i z_j. Then, noting that ∑_j≠ i z_j = z - z_i = -z_i, we can calculate ρ^_i = -z_i ∑_j ≠ i A_ij z_j + z_i*∑_j≠ i z_j A^⊤/n^2 + (n-1) p-q/2 = - d^z_i - A^⊤/n^2 + (n-1) p-q/2 ≤ - d^z_i + n p-q/2. Then, clearly, ρ^≤ - d^z_min + n p-q/2. To show the condition (<ref>) of <Ref> holds, it then suffices to have - d^z_min + n p-q/2 + r+11/r-3 (p + 2 A - A) ≤r-3/r-1 n p-q/2. The condition (<ref>) is equivalent. This proves the sufficiency of (<ref>) when C is set by (<ref>). The proof of the second part, where C = A - p+q/2^⊤, is simpler. Again recalling (<ref>), we now have C= p-q/2 z z^⊤ - p I_n, so we can decompose C = C + C - C =p-q/2 (z z^⊤ - I_n)_(z) (z) - p+q/2 I_n + A - A_. Noting that now ρ^_i = - (d^z_i - d^z_i) = - d^z_i + n p-q/2 - p, the result follows if - d^z_min + n p-q/2 - p + r + 11/r-3A - A≤r-3/r-1 n p - q/2, which is strictly more permissive than (<ref>). §.§ Asymptotic probabilistic analysis for the SBM Now, we show how <Ref> implies <Ref>. The proof structure is quite similar to that for <Ref> in <Ref>. We will show that, with high probability, the condition (<ref>) is satisfied (in which case the alternative condition (<ref>) also holds). First, we show that the operator norm term in (<ref>) is asymptotically negligible. The condition (<ref>) ensures p ≥log n/n, so we can apply <Ref> from <Ref> with v = p (note that (A_ij - A_ij)^2 ≤ A_ij^2 ≤ p) to obtain A - A≲√(np) with probability → 1 in n. Furthermore, condition (<ref>) implies the second inequality in √(np)/n(p-q)≤1/√(n)√(p) + √(q)/p - q = 1/√(n)(√(p) - √(q)) ≤1/√(2 log n), so, with probability → 1 in n, A - A/n(p - q)≲1/√(log n). Set, similarly to (<ref>), γ = *1/r-1 + ϵ (p - q) and γ' = *1/r-1 + ϵ/2 (p - q) = γ - ϵ(p - q)/2. By (<ref>), if we can show that d^z_min≥γ' n with probability → 1, condition (<ref>) will be satisfied with probability → 1 as n →∞. Note that d^z_min is the minimum of n i.i.d. random variables of the form V - W, where V ∼((n/2) - 1, p) and W ∼(n/2, q). We focus on the lower tail of V - W, or, equivalently, the upper tail of W - V. 
Note that, for any t ∈, e^t(W - V) = (1 - q + q e^t)^n/2(1 - p + p e^-t)^n/2 - 1, and, therefore, log [ e^t(W - V)] = n/2log(1 - q + q e^t) + *n/2 - 1log (1 - p + p e^-t) ≤n/2 (-q + q e^t) + *n/2 - 1 (- p + p e^-t) = n/2 ( - p + p e^-t - q + q e^t ) + p(1 - e^-t) ≤n/2* p e^-t + q e^t - p - q + 1. By a union bound and a Chernoff bound, for any t ≥ 0, (d^z_min≤γ' n ) ≤ n ( W - V ≥ -γ' n ) ≤ n e^t(W - V)/e^-t γ' n ≤exp*log n + n/2* p e^-t + q e^t - p - q + γ' n t + 1 = exp*log n + n *p e^-t + q e^t/2 - p + q/2 + γ t - ϵ(p - q)/2 n t + 1, where the last equality uses (<ref>). Noting that, for all t ≥ 0, t ≤e^t - e^-t/2, we have p e^-t + q e^t/2 + γ t ≤(p - γ) e^-t + (q + γ) e^t/2. If γ≤p-q/2 (which is ensured by the assumption ϵ≤ 1/12), this last expression has a minimum value (over t ≥ 0) of √((p - γ)(q + γ)) for t = 1/2logp - γ/q + γ. Then, with this choice of t, we have p e^-t + q e^t/2 - p + q/2 + γ t ≤√((p - γ)(q + γ)) - p + q/2 = -1/2*√( p - γ) - √(q + γ)^2. The condition (<ref>) then ensures that log n + n *p e^-t + q e^t/2 - p + q/2 + γ t≤log n - n/2*√( p - γ) - √(q + γ)^2 ≤ 0, so we obtain (d^z_min≤γ' n ) ≤exp* 1 - nϵ(p-q)/4logp - γ/q + γ. Denoting ϵ_r = 1/r-1 + ϵ so that γ = ϵ_r(p-q), we have 1/2logp - γ/q + γ = 1/2logp+q/2 + (1-2ϵ_r) p-q/2/p+q/2 - (1-2ϵ_r) p-q/2 ≥2/p+q (1 - 2 ϵ_r) p-q/2 = (1 - 2 ϵ_r) p-q/p+q by the inequality 1/2log1 + x/1-x≥ x. Recalling that r ≥ 4 and ϵ≤1/12, we have 1 - 2 ϵ_r ≥1/6. Furthermore, note that (p-q)^2/p+q ≥*p-q/√(p) + √(q)^2 = (√(p) - √(q))^2 ≥ (√(p-γ) - √(q+γ))^2 ≥2 log n/n by the condition (<ref>), so, putting these inequalities together, we obtain nϵ(p-q)/4logp - γ/q + γ ≥nϵ(p-q)/2(1 - 2 ϵ_r) p-q/p+q ≥ϵ(1 - 2 ϵ_r) log n ≥ϵ/6log n, so (d^z_min≤γ' n ) ≤exp* 1 - ϵ/6log n , which goes to zero as n →∞. Thus we obtain the result <Ref>. § ROBUSTNESS TO MONOTONE ADVERSARIES A useful property for synchronization and clustering algorithms is robustness to a “monotone adversary.” For example, for the synchronization problem of <Ref>, what happens if somebody is allowed to add additional edges to the measurement graph with correct measurements or to delete or correct some of the erroneous measurements? For the graph clustering problem of <Ref>, what happens if somebody adds edges between vertices in the same ground-truth cluster and/or removes edges between vertices in different clusters? With the random base models we have presented, adding the possibility of additional such “helpful” modifications gives a semi-random model <cit.>. Intuitively, such an “adversary” should only be helping us and making the problem easier. However, algorithms that depend on certain regularity properties (e.g., matrix eigenvalues/vectors) of the underlying random model may be severely disrupted. It is in fact possible that the underlying estimation problem becomes fundamentally harder. See, for example, <cit.> for further discussion and references. Nevertheless, certain problem settings and algorithms do exhibit robustness. In our setting, we characterize this robustness as follows. Suppose, given a cost matrix C, the problem (<ref>) has optimal solution z. Now suppose we perturb C by a matrix Δ^+ such that, for all i,j, Δ^+_ij z_i z_j ≥ 0. Clearly, (<ref>) with cost matrix C' = C + Δ^+ will still have z as an optimum (and, if the optimum is unique for the original problem, it also is for the perturbed problem). 
An algorithm is robust to monotone adversaries if, given that it successfully recovers the optimum z from data C, it also recovers z with data C' = C + Δ^+ for any Δ^+ satisfying (<ref>). One can easily verify (see, e.g., <cit.>) that the SDP relaxation (<ref>) of (<ref>) has such robustness. What about the nonconvex partial relaxation (<ref>)? <cit.> showed, by a connection to Kuramoto oscillator networks, that in the case r = 2 the nonconvex approach is not robust to monotone adversaries. However, when r ≥ 3, <cit.> showed that the nonconvex approach is robust in the sense that the benign landscape results in that paper are not harmed by monotone adversaries. The same is true for our results: The results <Ref> and, in the case where C is set by (<ref>), <Ref> are robust to monotone adversaries in the following sense: for each theorem, under the same conditions, if we further perturb C by replacing it with C' = C + Δ^+, where Δ^+ and z satisfy (<ref>), the same conclusion holds. To see this, note that in the context of <Ref>, we can replace the graph adjacency matrix A with the matrix A' defined by A'_ij = A_ij + Δ^+_ij z_i z_j. Because each entry is only increased, the algebraic connectivity λ_2 is only increased. This shows robustness for <Ref>; the same result immediately propagates to that theorem's direct corollaries. For the ordinary graph clustering problem with C chosen as in (<ref>), the situation is less clear, as elementwise perturbations affect the entire cost matrix C via the centering operation. There are certainly ways around this issue for the full SDP relaxation (see <cit.>), but an extension to our nonconvex setting is not trivial, and we do not pursue it in this paper. <Ref> does not say that for every cost matrix C such that (<ref>) happens to have a benign landscape, a monotone perturbation will preserve that landscape. That would be a stronger result and would require quite different proof techniques (as ours fundamentally depend on the problem structure). However, the specific analysis that leads to the result <Ref> and its corollaries is only helped by such perturbations. § DATA AVAILABILITY No new data were generated or analyzed in support of this research. § ACKNOWLEDGEMENTS This work was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract MB22.00027.
http://arxiv.org/abs/2407.12919v1
20240717180006
Nitrogen Abundance Distribution in the inner Milky Way
[ "Jorge L. Pineda", "Shinji Horiuchi", "L. D. Anderson", "Matteo Luisi", "William D. Langer", "Paul F. Goldsmith", "Thomas B. H. Kuiper", "Christian Fischer", "Yan Gong", "Andreas Brunthaler", "Michael Rugel", "Karl M. Menten" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0001-8898-2800]Jorge L. Pineda Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099, USA 0000-0002-8395-3557]Shinji Horiuchi CSIRO Space & Astronomy/NASA Canberra Deep Space Communication Complex, PO Box 1035, Tuggeranong ACT 2901, Australia 0000-0002-7045-9277]L. D. Anderson Department of Physics and Astronomy, West Virginia University, Morgantown, WV 26506, USA Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA Green Bank Observatory, P.O. Box 2, Green Bank, WV 24944, USA 0000-0001-8061-216X]Matteo Luisi Department of Physics, Westminster College, New Wilmington, PA 16172, USA Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099, USA 0000-0002-6622-8396]Paul F. Goldsmith Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099, USA 0000-0003-1798-4918]Thomas B. H. Kuiper Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099, USA 0000-0003-2649-3707]Christian Fischer Deutsches SOFIA Institut, Pfaffenwaldring 29, 70569 Stuttgart, Germany 0000-0002-3866-414X]Yan Gong Max-Planck-Institut für Radioastronomie,Auf dem Hügel 69, 53121 Bonn, Germany Max-Planck-Institut für Radioastronomie,Auf dem Hügel 69, 53121 Bonn, Germany 0009-0009-0025-9286]Michael Rugel M.R.R. is a Jansky Fellow of the National Radio Astronomy Observatory, USA. Center for Astrophysics, Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA National Radio Astronomy Observatory, 1003 Lopezville Rd, Socorro, NM 87801, USA Max-Planck-Institut für Radioastronomie,Auf dem Hügel 69, 53121 Bonn, Germany 0000-0001-6459-0669]Karl M. Menten Max-Planck-Institut für Radioastronomie,Auf dem Hügel 69, 53121 Bonn, Germany

ApJ Jorge L. Pineda Jorge.Pineda@jpl.nasa.gov

§ ABSTRACT We combine a new Galactic plane survey of Hydrogen Radio Recombination Lines (RRLs) with far–infrared (FIR) surveys of ionized Nitrogen, N^+, to determine Nitrogen abundance across Galactic radius. RRLs were observed with NASA DSS–43 70m antenna and the Green Bank Telescope in 108 lines–of–sight spanning -135< l < 60, at b=0. These positions were also observed in [N ii] 122μm and 205μm lines with the Herschel Space Observatory. Combining RRL and [N ii] 122μm and 205μm observations in 41 of 108 samples with high signal–to–noise ratio, we studied ionized Nitrogen abundance distribution across Galactocentric distances of 0–8 kpc. Combined with existing Solar neighborhood and Outer galaxy N/H abundance determinations, we studied this quantity's distribution within the Milky Way's inner 17 kpc for the first time. We found a Nitrogen abundance gradient extending from Galactocentric radii of 4–17 kpc in the Galactic plane, while within 0–4 kpc, the N/H distribution remained flat. The gradient observed at large Galactocentric distances supports inside–out galaxy growth with the additional steepening resulting from variable star formation efficiency and/or radial flows in the Galactic disk, while the inner 4 kpc flattening, coinciding with the Galactic bar's onset, may be linked to radial flows induced by the bar potential. Using SOFIA/FIFI–LS and Herschel/PACS, we observed the [N iii] 57μm line to trace doubly ionized gas contribution in a sub–sample of sightlines. We found negligible N^++ contributions along these sightlines, suggesting mostly singly ionized Nitrogen originating from low ionization H ii region outskirts.

§ INTRODUCTION The regulation of star formation in galaxies is a key driver of galaxy evolution. As galaxies evolve, gas from the circum–galactic medium accretes into their disks, cooling down and forming dense molecular clouds where star formation takes place. As massive stars form, their radiative and mechanical feedback ionizes and disperses their surrounding gas slowing the gravitational collapse and star formation. Stellar feedback can also result in large scale outflows of gas that is transported back to the circum–galactic medium which, depending on how the energy and momentum from stellar feedback couples with the gas, can either accrete back into the disk of the galaxy, restarting the whole process, or be expelled to the inter–galactic medium.
The interplay between accretion and outflows of gas into and out of the disk of galaxies, molecular cloud formation, and the effects of stellar feedback into the interstellar medium determines the efficiency and rate at which gas is converted into stars in galaxies. Therefore understanding these processes individually as well as their interplay is key for understanding the regulation of star formation and the evolution of galaxies. The distribution of elemental abundances in the disk of galaxies provides a fundamental observational constraint for models of the formation and evolution of galaxies. Elements such as C, N, and O, are products of “primary" and "secondary" processes in massive and inter-mediate mass stars <cit.> and therefore their abundances are related to the type of stars, star formation rate, and star formation history at a given location, with each element having a different enrichment timescale. Metallicities are typically referred in terms of the Oxygen abundance with respect to hydrogen, O/H. However the abundance of elements such as Nitrogen and Carbon can also provide important insights on the chemical evolution of galaxies. Oxygen is mostly formed in massive stars and its production timescale is short (∼0.1 Gyr; e.g. ). Carbon and Nitrogen however, can also be produced in intermediate mass stars, and the timescales for their production are much longer than those for Oxygen (1–10 Gyr). Therefore, Nitrogen and/or Carbon abundances, alongside that of Oxygen, can provide important information on the star formation history in galaxies. The distribution of elements in the disk of galaxies provides important constraints on the growth of galaxies as it is related to the radial gas accretion profile in galaxies, which is an important parameter on galaxy chemical evolution models <cit.>. Several studies of the distribution of elemental abundances in the disk of galaxies have been conducted using optical lines <cit.>. <cit.> studied a large sample of galaxies, finding that the abundance of Oxygen relative to that of Hydrogen, O/H, decreases with galactocentric distance with a slope that steepens with the galaxy's stellar mass. For galaxies at the high end of the stellar mass range in their sample (M>10^10.5 M_⊙), a flattening of the O/H distribution is observed. They also studied the Nitrogen abundance relative to that of Oxygen and found that the N/O ratio increases with galactocentric distance and does not flatten in the inner parts of massive galaxies. These observations are interpreted in the context of inside–out growth of galaxy disks in which the central parts of massive galaxies reach a metallicity equilibrium, in which metal production is balanced by metal consumption by star formation expulsion by outflows, and while the outer part continues accreting less enriched gas. Note however that, in the central parts of massive galaxies, the dust extinction becomes significant and therefore enriched gas close to the galaxy's center might be unaccounted for by optical and near–IR observations <cit.>. It is important to test whether the properties of abundance distributions observed in nearby galaxies also apply to the Milky Way. Detailed studies of the properties of the interstellar medium and star formation at high spatial resolution are currently only possible in the Milky Way. 
Therefore, by studying the elemental abundance distribution in the Milky Way, we can obtain a deeper insight into the nature of these distributions, which in turn can be used to interpret the results obtained over large samples of unresolved, external galaxies. The abundance distributions in the Milky Way have been traditionally obtained observing H ii regions with optical lines in a variety of environments. These observations which are mostly focused on nearby H ii regions, where dust extinction obscures these optical lines moderately, have been used to infer that the abundance of Oxygen and Nitrogen increase with Galactocentric distance from the outer galaxy inward to R_ gal= 4 kpc <cit.>. Due to increased dust extinction, optical studies are however unable to probe the inner Galaxy, where most of the star formation takes place, and which is thought to have formed during the early stages of the Milky Way's evolution. Therefore, these observations have been unable to test the flattening in the O/H distribution and a possible increase of the N/O ratio observed in the central parts of external galaxies with similar stellar masses as the Milky Way's. Longer wavelength observations, including those of far–infrared fine structure lines, radio continuum, and hydrogen radio recombination lines can be combined to determine the abundance of elements with the advantage that they are unobscured by dust, enabling elemental abundance determination deep into the inner galaxy <cit.>. These observations however are limited to small samples mostly focused in dense (ultra–)compact H ii regions that might have complex ionization structures, and thus require a large number of spectral line observations to determine elemental abundances. <cit.> presented a survey of 149 lines–of–sight observed uniformly in the Galactic plane in the [N ii] 205μm and 122μm lines at b=0 using the Herschel/PACS instrument. The [N ii] 122μm/205μm ratio provides an accurate determination of the electron density that is independent of Nitrogen abundance and has a weak dependence on electron temperature. The electron densities derived in their sample show little scatter with a mean value of about 30 cm^-3, suggesting that these observations are tracing an extended moderate density ionized gas component in the ISM which is likely to have moderate far–UV (FUV) and extreme–UV (EUV) fields, and thus the ionization structure is simpler than in compact H ii regions. In <cit.> we derived Nitrogen abundances in a sample of 10 sight lines taken from the <cit.> survey by combining [N ii] and RRL observations. We found that the distribution of Nitrogen abundances in the inner Galaxy derived from our data has a linear slope that is consistent with that found in the outer Galaxy in optical studies. In this paper we present an analysis of a larger sample to confirm these results and investigate the distribution of Nitrogen in the inner Galactic plane. We present in this paper a Galactic plane survey of Hydrogen Recombination Lines, covering the range between -135< l < 60 and b=0 using the NASA Deep Space Network Station 43 (DSS–43) in Canberra, Australia, and the Green Bank telescope. The lines–of–sight coincide with those observed by Herschel/HIFI in [C ii] <cit.> and by Herschel/PACS in [N ii] 122μm and 205μm <cit.>. The high–velocity resolution observations of RRLs will allow us to disentangle the source of [N ii] emission in the Galactic plane. 
In this paper we focus on combining the RRL data set with that observed in [N ii] with Herschel to derive the abundance of ionized Nitrogen with respect to ionized Hydrogen in the inner parts of the Milky Way. This data set will be also combined with SOFIA/FIFI–LS and Herschel observations of the [N iii] 57μm to account for the contribution of higher ionization states of Nitrogen, such as N^++. lcccccccc 9 0pt [N ii], [N iii], RRL, and radio continuum intensities for l≥0. 1 LOS l b [N ii] 122μm [N ii] 205μm [N iii] 57μm RRL RRL I.D. <T_ C> [×10^-8] [×10^-8] [×10^-7] free–free at 8.5GHz [] [] [W m^-2 sr^-1] [W m^-2 sr^-1] [W m^-2 sr^-1] [K km s^-1] [K] G000.0+0.0 0.000 0.0 86.165 ± 0.097 26.426 ± 0.089 4.61 ± 0.76 8.23 ± 0.08 H89α 6.97 ± 0.01 G000.5+0.0 0.500 0.0 13.509 ± 0.025 7.296 ± 0.068 – 2.12 ± 0.16 H89α 2.17 ± 0.01 G003.5+0.0 3.478 0.0 2.064 ± 0.005 1.775 ± 0.019 – 0.15 ± 0.03 H89α 0.06 ± 0.01 G004.3+0.0 4.348 0.0 1.580 ± 0.003 1.149 ± 0.021 – 0.09 ± 0.04 H89α 0.05 ± 0.01 G005.2+0.0 5.217 0.0 1.999 ± 0.012 2.566 ± 0.063 – 0.12 ± 0.02 H89α 0.07 ± 0.01 G006.1+0.0 6.087 0.0 3.927 ± 0.006 2.749 ± 0.014 – 0.35 ± 0.04 H89α 0.19 ± 0.01 G007.8+0.0 7.826 0.0 2.486 ± 0.004 1.780 ± 0.018 – 0.19 ± 0.03 H89α 0.15 ± 0.01 G008.7+0.0 8.696 0.0 3.689 ± 0.012 3.707 ± 0.083 – 0.33 ± 0.10 H89α 0.34 ± 0.02 G010.4+0.0 10.435 0.0 7.827 ± 0.017 5.505 ± 0.072 3.61 ± 0.40 0.98 ± 0.06 H89α 0.43 ± 0.02 G011.3+0.0 11.304 0.0 2.798 ± 0.013 3.232 ± 0.097 – 0.28 ± 0.04 H89α 0.09 ± 0.01 G012.2+0.0 12.174 0.0 8.228 ± 0.017 5.864 ± 0.067 <2.62 ± 0.87 0.45 ± 0.05 H89α 0.18 ± 0.01 G013.0+0.0 13.043 0.0 3.306 ± 0.006 2.161 ± 0.019 – 0.35 ± 0.05 H89α 0.20 ± 0.01 G013.9+0.0 13.913 0.0 4.981 ± 0.013 3.793 ± 0.078 – 1.26 ± 0.02 H89α 0.29 ± 0.02 G014.8+0.0 14.783 0.0 4.055 ± 0.012 3.300 ± 0.059 – 0.21 ± 0.04 H89α 0.16 ± 0.01 G016.5+0.0 16.522 0.0 3.791 ± 0.005 2.537 ± 0.009 – 0.32 ± 0.08 H89α 0.13 ± 0.01 G020.0+0.0 20.000 0.0 1.715 ± 0.004 1.717 ± 0.011 – 0.25 ± 0.07 H89α 0.12 ± 0.01 G020.9+0.0 20.870 0.0 5.122 ± 0.012 3.570 ± 0.060 – 0.56 ± 0.05 H89α 0.25 ± 0.01 G021.7+0.0 21.739 0.0 6.533 ± 0.015 4.723 ± 0.064 – 0.22 ± 0.04 H89α 0.22 ± 0.01 G023.5+0.0 23.478 0.0 14.796 ± 0.016 9.960 ± 0.061 <2.42 ± 0.81 1.08 ± 0.06 H89α 0.46 ± 0.01 G024.3+0.0 24.348 0.0 8.428 ± 0.019 6.898 ± 0.063 3.67 ± 0.50 0.73 ± 0.03 H89α 0.30 ± 0.01 G025.2+0.0 25.217 0.0 4.991 ± 0.014 5.093 ± 0.061 – 0.27 ± 0.11 H89α 0.19 ± 0.01 G026.1+0.0 26.087 0.0 18.625 ± 0.021 9.721 ± 0.075 <4.64 ± 1.55 1.15 ± 0.03 H89α 0.45 ± 0.01 G027.0+0.0 26.956 0.0 3.898 ± 0.006 3.141 ± 0.020 – 0.22 ± 0.05 H89α 0.17 ± 0.01 G028.7+0.0 28.696 0.0 11.173 ± 0.013 6.507 ± 0.065 2.64 ± 0.28 0.81 ± 0.06 H89α 0.52 ± 0.02 G030.0+0.0 30.000 0.0 6.596 ± 0.013 4.577 ± 0.075 – 0.38 ± 0.07 H89α 0.40 ± 0.02 G031.3+0.0 31.277 0.0 10.271 ± 0.015 6.857 ± 0.068 – 1.14 ± 0.02 H89α 0.45 ± 0.01 G036.4+0.0 36.383 0.0 2.020 ± 0.011 1.602 ± 0.075 – 0.16 ± 0.04 H89α 0.09 ± 0.01 G037.7+0.0 37.660 0.0 5.400 ± 0.010 4.505 ± 0.073 – 0.67 ± 0.01 H89α 0.25 ± 0.01 G041.5+0.0 41.489 0.0 2.600 ± 0.003 1.387 ± 0.010 – 0.19 ± 0.03 H89α 0.15 ± 0.01 G044.0+0.0 44.043 0.0 1.658 ± 0.002 1.501 ± 0.020 – 0.26 ± 0.06 H89α 0.08 ± 0.01 G045.3+0.0 45.319 0.0 0.812 ± 0.002 0.525 ± 0.012 – 0.14 ± 0.03 H89α 0.07 ± 0.01 G049.1+0.0 49.149 0.0 1.716 ± 0.012 1.447 ± 0.055 – 0.14 ± 0.03 H89α 0.15 ± 0.01 G054.3+0.0 54.255 0.0 1.322 ± 0.003 0.716 ± 0.011 – 0.11 ± 0.03 H89α 0.09 ± 0.01 lcccccccc 9 0pt [N ii], [N iii], RRL, and radio continuum intensities for l<0. 2 LOS l b [N ii] 122μm [N ii] 205μm [N iii] 57μm RRL RRL I.D. 
<T_ C> [×10^-8] [×10^-8] [×10^-7] free–free at 8.5GHz [] [] [W m^-2 sr^-1] [W m^-2 sr^-1] [W m^-2 sr^-1] [K km s^-1] [K] G302.6+0.0 302.553 0.0 2.352 ± 0.003 2.225 ± 0.016 – 0.64 ± 0.09 H91α 0.16 ± 0.00 G305.1+0.0 305.106 0.0 4.656 ± 0.009 6.512 ± 0.070 – 1.55 ± 0.07 H91α 0.44 ± 0.00 G306.4+0.0 306.383 0.0 1.012 ± 0.003 1.134 ± 0.018 – 0.18 ± 0.03 H92α 0.10 ± 0.00 G307.7+0.0 307.660 0.0 2.467 ± 0.012 2.312 ± 0.067 – 0.29 ± 0.01 H92α 0.17 ± 0.00 G310.2+0.0 310.213 0.0 0.999 ± 0.003 0.429 ± 0.011 – 0.27 ± 0.02 H92α 0.15 ± 0.00 G314.0+0.0 314.043 0.0 1.885 ± 0.004 1.336 ± 0.010 – 0.28 ± 0.03 H92α 0.16 ± 0.00 G316.6+0.0 316.596 0.0 5.633 ± 0.014 4.563 ± 0.073 – 0.63 ± 0.03 H92α 0.38 ± 0.00 G317.9+0.0 317.872 0.0 4.064 ± 0.013 3.000 ± 0.060 – 0.60 ± 0.03 H92α 0.37 ± 0.00 G326.8+0.0 326.808 0.0 10.015 ± 0.015 6.501 ± 0.067 – 1.07 ± 0.07 H91α 0.36 ± 0.00 G330.0+0.0 330.000 0.0 1.470 ± 0.004 0.914 ± 0.014 – 0.18 ± 0.09 H91α 0.18 ± 0.01 G331.7+0.0 331.739 0.0 4.953 ± 0.014 3.708 ± 0.071 – 0.22 ± 0.06 H91α 0.27 ± 0.02 G332.6+0.0 332.609 0.0 4.583 ± 0.013 2.878 ± 0.068 – 1.17 ± 0.17 H92α 0.28 ± 0.02 G333.5+0.0 333.478 0.0 9.275 ± 0.011 5.345 ± 0.057 – 0.82 ± 0.12 H91α 0.43 ± 0.02 G336.1+0.0 336.087 0.0 15.073 ± 0.022 10.208 ± 0.061 – 0.94 ± 0.16 H91α 0.53 ± 0.03 G337.0+0.0 336.957 0.0 16.668 ± 0.022 11.856 ± 0.072 – 0.92 ± 0.13 H91α 1.08 ± 0.07 G337.8+0.0 337.826 0.0 12.177 ± 0.014 8.488 ± 0.071 – 0.72 ± 0.11 H91α 0.57 ± 0.04 G338.7+0.0 338.696 0.0 3.371 ± 0.005 2.202 ± 0.014 – 0.60 ± 0.09 H92α 0.24 ± 0.02 G342.2+0.0 342.174 0.0 4.645 ± 0.014 4.807 ± 0.065 – 0.53 ± 0.08 H91α 0.29 ± 0.02 G343.9+0.0 343.913 0.0 3.718 ± 0.013 2.761 ± 0.064 – 0.28 ± 0.08 H91α 0.24 ± 0.02 G345.7+0.0 345.652 0.0 12.527 ± 0.015 5.812 ± 0.074 – 14.12 ± 0.28 H89α 1.73 ± 0.07 G346.5+0.0 346.522 0.0 2.879 ± 0.011 1.899 ± 0.060 – 0.45 ± 0.08 H89α 0.29 ± 0.02 G349.1+0.0 349.130 0.0 21.023 ± 0.025 9.128 ± 0.061 – 3.92 ± 0.05 H89α 1.51 ± 0.06 G350.9+0.0 350.870 0.0 1.410 ± 0.011 1.302 ± 0.060 – 0.33 ± 0.09 H89α 0.14 ± 0.01 G353.5+0.0 353.478 0.0 3.672 ± 0.006 2.945 ± 0.014 – 0.20 ± 0.07 H89α 0.21 ± 0.02 G354.3+0.0 354.348 0.0 2.396 ± 0.012 1.534 ± 0.066 – 0.26 ± 0.04 H89α 0.16 ± 0.01 G355.2+0.0 355.217 0.0 5.180 ± 0.012 1.953 ± 0.051 – 0.49 ± 0.04 H89α 0.46 ± 0.06 G356.1+0.0 356.087 0.0 2.596 ± 0.004 1.628 ± 0.008 – 0.16 ± 0.04 H89α 0.13 ± 0.02 G359.5+0.0 359.500 0.0 17.209 ± 0.027 9.763 ± 0.091 – 0.68 ± 0.15 H89α 1.06 ± 0.01 The paper is organized as follows. In Section <ref> we describe the RRL, [N ii], and [N iii] 57μm observations. In Section <ref>, we describe our method to determine the Nitrogen abundance from our observations and we discuss the properties of the derived Nitrogen abundance distribution. We also discuss the implications of the observed distribution in terms of theoretical predictions from chemical evolution models. In Section <ref> we summarize our results. § OBSERVATIONS We surveyed the Galactic plane in the Hydrogen radio recombination line (RRL) emission, using RRL transitions between H89α and H92α, covering 108 lines–of–sights (LOS) in the -135<l<-60 range in Galactic longitude and at a Galactic latitude of b=0, with the NASA DSN DSS–43 70m telescope and the Green Bank Telescope. These LOS correspond to a subsample of the Herschel Open Time Key project, GOT C+ <cit.>, which observed the [C ii] 158μm line at high spectral resolution with the HIFI instrument. The GOT C+ sample with b=0 was also observed in [N ii] 122μm and 205μm by <cit.>. 
The LOS are sampled every ∼1 in the inner galaxy (|l|<60 ) and every ∼2–5 in the outer galaxy. The GOT C^+ goal of adopting this sampling of the Galactic plane was to obtain a statistical sample of a large number of environments distributed across the Galaxy. In the upper panel of Figure <ref> we show the [N ii] 205μm, 122μm, and RRL integrated intensities as a function of Galactic longitude. These intensities are listed in Tables <ref> and <ref>. The lower panel of Figure <ref> shows the longitude–velocity distribution of the observed RRL emission. The longitude–velocity (LV) map is overlaid with projections of the Scutum–Crux, Sagittarius–Carina, Perseus, and Norma-Cygnus Milky Way spiral arms. For the LV map we used the fits to the parameters determining the logarithmic spiral arms presented by <cit.>. We assumed a flat rotation curve with parameters presented by <cit.>. We see a good correspondence between RRL emission and spiral arms, with an enhanced RRL emission at the spiral arm tangents, which can be explained by the longer path length for a given velocity range in these regions. In Figure <ref> we show the [N ii] 122μm and 205μm intensities as a function of the RRL integrated intensity for our sample. The figure shows a linear correlation that suggests that both [N ii] and RRL line emission are tracing the same ionized gas, and that Nitrogen is mostly in a singly ionized form in our sample, even in the conditions of the Galactic center, whose data points in the top right of Figure <ref>. A more detailed discussion about the ionization of Nitrogen is presented in Section <ref>). §.§ Hydrogen Recombination Line Observations We used the NASA DSN DSS–43 telescope in Camberra, Australia, to observe 45 LOS in our sample, covering the southern sky portion of the Galactic plane. We used X–band receiver in position–switching mode to observe the H91α and H92α hydrogen radio recombination lines at 8.58482 GHz and 8.30938 GHz, respectively. The angular resolution of the DSS–43 at 8.420 GHz is 115. We converted the data from an antenna temperature to a main beam temperature scale using a main beam efficiency of 0.78[Details in the determination of the aperture efficiency and beamsize of the DSS–43 antenna is available on–line at <https://deepspace.jpl.nasa.gov/dsndocs/810-005/>. ]. Both lines were resampled to a common spectral grid and averaged together to increase the signal–to–noise ratio, with the H92α intensities scaled to correspond to that of the H91α lines[As seen in Equation (<ref>), in LTE, the intensity of a RRL is proportional to the EM, the line width, and line frequency. Given that EM and Δ v are intrinsic properties of the source, the intensity of two RRLs are related by the inverse of the ratio of their frequencies <cit.>. In the case when LTE does not apply (Equation <ref>), however, a dependence on electron density and temperature is introduced to the relationship between two line intensities. For lines with similar principal quantum number, n, this effect is negligible, and assuming LTE is appropriate. For larger Δ n, however, NLTE effects need to be taken into account when scaling RRL intensities. ] . We fitted a 3rd order polynomial baseline to our data. The resulting spectra have a typical rms noise of 4 mK in a 1 km s^-1 channel. We also re-observed a subsample of LOS in the H89α (9.17332 GHz) line with the DSS–43 antenna to improve the signal–to–noise using the ROACH2 spectrometer <cit.>. The spectra have a rms noise of 1 mK over a 3 km s^-1 channel. 
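To make the scaling-and-averaging step above concrete, the following rough Python sketch works with synthetic spectra; the frequency-ratio scaling follows the LTE relation quoted in the footnote, while the line parameters and noise levels are purely illustrative.

```python
import numpy as np

# Rough sketch of the RRL scaling-and-averaging step described above, with
# synthetic spectra. Per the LTE relation quoted in the footnote, two RRL
# integrated intensities scale as the inverse ratio of their frequencies, so an
# H92alpha spectrum is placed on the H91alpha scale by multiplying by nu(H92a)/nu(H91a).
NU_H91A, NU_H92A = 8.58482, 8.30938          # GHz

v = np.arange(-200.0, 200.0, 1.0)            # common LSR velocity grid [km/s]

def fake_spectrum(peak, v0, fwhm, rms, seed):
    rng = np.random.default_rng(seed)
    sigma = fwhm / 2.355
    return peak * np.exp(-0.5 * ((v - v0) / sigma) ** 2) + rng.normal(0.0, rms, v.size)

t_h91 = fake_spectrum(0.030, 10.0, 25.0, 0.004, seed=0)                      # K
t_h92 = fake_spectrum(0.030 * NU_H91A / NU_H92A, 10.0, 25.0, 0.004, seed=1)  # intrinsically brighter

t_h92_on_h91_scale = t_h92 * (NU_H92A / NU_H91A)
t_avg = 0.5 * (t_h91 + t_h92_on_h91_scale)   # equal-noise average; rms drops by ~sqrt(2)
print(f"off-line rms of average: {t_avg[v < -100].std() * 1e3:.1f} mK (single spectrum: 4.0 mK)")
```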
We also observed X–band RRLs in 79 LOS in our sample, covering the northern sky portion of the Galactic plane, using the Versatile GBT Astronomical Spectrometer (VEGAS) on the Green Bank Telescope (GBT) in the position–switching mode. The angular resolution of the GBT in X–band is 84. For each observed direction, we simultaneously measured seven Hnα RRL transitions in the 9 GHz band, H87α to H93α, using the techniques discussed in <cit.>, <cit.>, and <cit.>, and averaged all spectra together to increase the signal-to-noise ratio using TMBIDL <cit.>. The data were resampled to a common grid, intensities scaled to correspond to that for the H89α line (9.17332 GHz), and we averaged all lines in the band (2 polarizations per transition) together to increase the signal–to–noise ratio. The GBT data was calibrated using a noise diode of known power. The spectra were later corrected with a 3rd order polynomial baseline and smoothed to ∼1.9 km s^-1. We converted the intensities from an antenna temperature to main beam temperature using a main beam efficiency of 0.94. The typical rms noise of these data is 2.5 mK in a 1.9 km s^-1 channel. §.§ [N ii] 122μm and 205μm observations We used Herschel/PACS data of the [N ii] 205μm and 122μm lines that were surveyed by <cit.> in 149 GOT C+ LOS with b=0. We refer to <cit.> for the details on the reduction of this data set. The PACS instrument has a 5×5 pixel grid, with a pixel separation of ∼9.4 corresponding to a footprint of 47 in the sky. The PACS spectrometer has a resolving power of 1000 at 122μm and 2000 at 205μm, corresponding to a velocity resolution of 300 km s^-1 and 150 km s^-1, respectively, and therefore the emission lines are spectrally unresolved. The Herschel telescope with PACS has a FWHM beam width of 10 at 122μm and 15 at 205μm. To improve the signal–to–noise of the observations, and to reduce the contrast between the angular resolution of the [N ii] and RRL data sets, we used the footprint–averaged spectra, corresponding to an angular resolution of 47. (A discussion about beam dilution effects is presented in Section <ref>.) Note that there is a slight difference between the [N ii] 122μm and 205μm intensities we calculated and those presented by <cit.>, as the latter used a simple average of all pixels in the PACS footprint, while we used a Gaussian–weighted average based on the distance of each pixel to the center of the footprint. The difference between these two approaches is minimal and does not significantly affect the derived electron densities and N^+ column densities. The typical rms noise of the observations are ∼2×10^-7 erg cm^-2 s^-1 sr^-1 for the 122μm line and ∼6×10^-7 erg cm^-2 s^-1 sr^-1 for the 205μm line. §.§ [N iii] 57μm observations To study the ionization structure of Nitrogen, and to estimate the contribution of highly ionized states of Nitrogen to the total Nitrogen abundance, we observed a subsample of 8 LOS in the [N iii] 57μm line with the Far Infrared Field-Imaging Line Spectrometer (FIFI–LS; ) which is an integral field far–infrared spectrometer on SOFIA. These observations were taken as part of the SOFIA ID 08_0023 and 09_0150 projects. The [N iii] 57μm spectra were obtained with pointed observations of the FIFI–LS 5×5 pixel footprint in the blue channel (μm), with an angular resolution of 6 at 57μm. To improve the signal–to–noise ratio we averaged the spectra in each footprint, and therefore the angular resolution of the average corresponds to the 30 size of the FIFI-LS footprint. 
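The footprint averaging can be written in a few lines. The sketch below assumes a 5×5 grid of spaxel spectra with the ∼9.4'' pixel spacing quoted above; the Gaussian kernel width is an illustrative choice (the adopted width is not stated in the text), and uniform weights would reproduce the plain average used for the FIFI–LS footprints.

```python
import numpy as np

# Gaussian-weighted average of a 5x5 PACS footprint (illustrative sketch).
# The ~9.4 arcsec spaxel spacing is from the text; the kernel FWHM below is an
# assumed, illustrative choice, since the adopted width is not quoted.
PIX_SEP = 9.4          # arcsec between neighbouring spaxels
KERNEL_FWHM = 47.0     # arcsec, assumed here to mimic the footprint size

nchan = 64
rng = np.random.default_rng(2)
cube = rng.normal(1.0, 0.1, size=(5, 5, nchan))   # placeholder spaxel spectra

yy, xx = np.mgrid[0:5, 0:5]
r = np.hypot(xx - 2, yy - 2) * PIX_SEP            # distance of each spaxel to the centre
sigma = KERNEL_FWHM / 2.355
w = np.exp(-0.5 * (r / sigma) ** 2)
w /= w.sum()

footprint_avg = np.tensordot(w, cube, axes=([0, 1], [0, 1]))  # weighted mean spectrum
print(footprint_avg.shape)   # (nchan,)
```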
The spectral resolution of FIFI-LS at 57μm is 280 km s^-1. Given that the typical line width of the RRL emission in our sample is 25 km s^-1, the lines observed with FIFI–LS are spectrally unresolved. We computed the footprint–averaged spectrum for each LOS using the SOSPEX software <cit.> and imported the resulting spectra into GILDAS/CLASS for baseline corrections and fitting. The averaged FIFI–LS spectra at 57 μm shows a steep baseline that results from the effects of rapidly changing atmosphere conditions during flight. The shape of this baseline is similar to that seen for the transmission curve of the atmosphere. We fitted a 3rd order polynomial to the flux spectrum, using our knowledge of the location of the spectral lines from the spectrally resolved RRL spectra to define a window in the flux spectrum where the channels are excluded from the fit. The typical rms noise of the observations is 6.51×10^-5 erg s^-1 cm^-2 sr. We detected the [N iii] 57μm line in 4 out of the 10 LOS. For the undetected LOS, we will use their 3σ upper limits to constrain the contribution from doubly ionized Nitrogen in these locations. We complemented our analysis with Herschel/PACS observations of the [N iii] 57μm, [N ii] 122μm, and [O iii] 88μm and 52μm, and SPIRE–FTS [N ii] 205μm emission observed in the Sagittarius A region in the Galactic center. These observations were presented by, and the data reduction is described in, <cit.>. In our analysis we used the PACS footprint averaged spectrum and therefore its angular resolution is ∼47. The SPIRE–FTS [N ii] 205 μm spectrum presented by <cit.> was computed in a 30 aperture. As with the FIFI–LS observations, the spectral lines observed with Herschel/PACS and SPIRE are unresolved in velocity. In Figure <ref> we show the detected SOFIA FIFI–LS [N iii] 57μm spectra. The intensities of the [N iii] 57μm, and 3σ upper limits, are listed in in Table <ref>. We present an analysis of this data set in Section <ref>. §.§ Sample Location with Respect to Known H II Regions To evaluate potential sources of uncertainties in the derived Nitrogen abundances, such as the presence of doubly ionized Nitrogen and beam dilution effects related to the use of observations from different telescopes in compact sources, we need to understand the nature of the regions we are sampling. As discussed in Section <ref>, the sample of LOS used in our study is drawn from the Herschel GOT C+ survey which provided a uniform sampling of the Galactic plane and therefore it did not intentionally targeted to the center of any specific H ii region. <cit.> derived electron densities in this sample showing typical values of about 30 cm^-3 which are higher than what is expected for the Warm Ionized Medium (WIM) but lower than that of compact H ii regions. This result suggests that the [N ii] emission detected in the GOT C+ sample arises from an extended component of the ionized ISM, which is not closely associated with massive stars. To further assess the nature of the sources in our sample we studied the environment traced by our sample LOS by searching for the nearest known H ii regions from the Wide-field Infrared Survey Explorer (WISE) Catalog of Galactic H ii Regions <cit.>. These regions were followed up with RRL and radio continuum observations to confirm that the mid–infrared warm dust emission is associated with ionized gas <cit.>. 
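The beam/H II-region overlap percentages listed in the tables below can be reproduced with a simple circle-intersection calculation. The sketch assumes the relevant beam radius is half of the 84'' GBT beam for the northern lines of sight; under that assumption it recovers, for example, the 44% overlap listed for G010.4+0.0.

```python
import numpy as np

# Fraction of the RRL beam area overlapping a WISE H II region, modelled as
# the intersection of two circles. The 42'' beam radius assumes half of the
# 84'' GBT beam used for the northern LOS (115'' DSS-43 beam -> 57.5'' radius).
def overlap_fraction(d, r_beam, r_hii):
    """d: separation of circle centres; all quantities in arcsec."""
    if d >= r_beam + r_hii:
        return 0.0
    if d <= abs(r_hii - r_beam):
        area = np.pi * min(r_beam, r_hii) ** 2    # smaller circle fully contained
    else:
        a1 = r_beam**2 * np.arccos((d**2 + r_beam**2 - r_hii**2) / (2 * d * r_beam))
        a2 = r_hii**2 * np.arccos((d**2 + r_hii**2 - r_beam**2) / (2 * d * r_hii))
        a3 = 0.5 * np.sqrt((-d + r_beam + r_hii) * (d + r_beam - r_hii)
                           * (d - r_beam + r_hii) * (d + r_beam + r_hii))
        area = a1 + a2 - a3
    return area / (np.pi * r_beam**2)

# G010.4+0.0: beam centre 51.9'' from a 53.6''-radius region.
print(f"{overlap_fraction(51.9, 42.0, 53.6):.0%}")   # ~44%, as tabulated
```

A beam that falls entirely inside a large catalogued region returns 100%, matching the corresponding table entries.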
In Tables <ref> and <ref>, we list the nearest WISE H ii region to each LOS in our sample, the distance between the center of the observations' beam and that of the nearest H ii region, the radius of the nearest H ii region, and the percentage of the RRL beam area that overlaps with the H i region. The radius of a WISE H ii region is defined by that of a circular aperture that encloses its associated mid–infrared emission. We find that the majority of LOS in our sample (40 out of 61) do not overlap with kwown H i regions. There are, however, 12 LOS in which the RRL beam has an overlap with the nearest H ii region of over 50%, of which 7 have a 100% overlap. We compared the average electron temperatures, volumen densities, and ionized Nitrogen abundances (12+log(N^+/H^+)) for the sample with 0% overlap and that with 100% overlap. These average quantities are 8500 K, 25.8 cm^-3, and 7.95, for the sample without overlap, and 8450 K, 30.9 cm^-3, and 7.85, for the sample that fully overlaps with known H ii regions. We find no significant difference in the derived quantities depending on whether they overlap with a known H ii region or not, suggesting that most of the sources in our sample are not associated with compact H ii regions and are part of an extended, low ionization gas component at the outskirts of H ii regions. lcrrr 5 0pt Nearest known H ii region to sample LOS for l≥0. 3 LOS H ii Source Distance H ii Radius Beam overlap % G000.0+0.0 G000.008+00.036 134.9 25.0 0 G000.5+0.0 G000.522-00.011 90.5 27.0 0 G003.5+0.0 G003.462-00.014 75.1 16.7 0 G004.3+0.0 G004.283+00.031 257.1 42.5 0 G005.2+0.0 G005.186-00.083 318.5 21.6 0 G006.1+0.0 G006.057-00.033 158.8 42.5 0 G007.8+0.0 G007.761+00.006 233.7 140.1 0 G008.7+0.0 G008.685-00.047 171.3 287.8 100 G010.4+0.0 G010.441+00.012 51.9 53.6 44 G011.3+0.0 G011.274-00.053 217.9 22.5 0 G012.2+0.0 G012.145-00.001 104.0 402.9 100 G013.0+0.0 G013.135+00.058 391.5 55.5 0 G013.9+0.0 G013.899-00.014 69.3 60.0 30 G014.8+0.0 G014.867+00.060 374.4 23.2 0 G016.5+0.0 G016.560+00.002 141.4 35.5 0 G020.0+0.0 G019.986+00.094 342.6 29.3 0 G020.9+0.0 G020.959+00.055 380.7 47.7 0 G021.7+0.0 G021.746-00.035 125.0 85.4 1 G023.5+0.0 G023.459-00.026 115.5 50.5 0 G024.3+0.0 G024.356+00.048 176.9 75.5 0 G025.2+0.0 G025.179+00.038 194.4 137.2 0 G026.1+0.0 G026.091-00.057 204.1 25.6 0 G027.0+0.0 G026.984-00.062 243.5 155.9 0 G028.7+0.0 G028.702+00.014 57.2 70.4 63 G030.0+0.0 G030.014+00.017 80.5 23.7 0 G031.3+0.0 G031.264+00.031 123.4 71.0 0 G036.4+0.0 G036.380+00.014 53.2 53.2 42 G037.7+0.0 G037.691+00.027 151.1 40.4 0 G041.5+0.0 G041.512+00.021 113.8 86.2 10 G044.0+0.0 G044.007-00.016 138.6 27.1 0 G045.3+0.0 G045.453+00.044 508.0 244.8 0 G049.1+0.0 G049.163-00.066 243.0 203.6 1 G054.3+0.0 G054.376-00.050 469.7 121.9 0 lcrrr 4 0pt Nearest known H ii region to sample LOS for l<0 4 LOS H ii Source Distance H ii Radius Beam overlap % G302.6+0.0 G302.586-00.029 156.6 64.2 0 G305.1+0.0 G305.201+00.009 343.6 27.6 0 G306.4+0.0 G306.361-00.291 1050.1 84.1 0 G307.7+0.0 G307.850+00.015 686.1 28.9 0 G310.2+0.0 G310.227-00.023 96.9 325.3 100 G314.0+0.0 G314.077+00.004 123.5 38.2 0 G316.6+0.0 G316.548-00.003 173.1 134.5 9 G317.9+0.0 G317.870-00.008 26.9 132.4 100 G326.8+0.0 G326.951+00.009 516.1 440.1 0 G330.0+0.0 G329.976-00.002 86.5 121.4 83 G331.7+0.0 G331.760+00.064 242.6 109.7 0 G332.6+0.0 G332.718-00.053 435.7 25.2 0 G333.5+0.0 G333.580+00.058 423.4 293.0 0 G336.1+0.0 G336.097+00.005 40.9 219.4 100 G337.0+0.0 G336.969-00.013 61.7 17.3 3 G337.8+0.0 G337.827+00.056 202.8 135.5 0 
G338.7+0.0 G338.596-00.007 360.8 31.7 0 G342.2+0.0 G342.120+00.001 194.5 295.2 100 G343.9+0.0 G343.912+00.116 418.5 91.4 0 G345.7+0.0 G345.651+00.015 55.5 98.2 90 G346.5+0.0 G346.529-00.013 52.2 79.2 72 G349.1+0.0 G349.126+00.010 40.7 98.9 100 G350.9+0.0 G350.850-00.040 159.2 43.5 0 G353.5+0.0 G353.547-00.013 252.4 123.4 0 G354.3+0.0 G354.356+00.000 28.9 41.5 45 G355.2+0.0 G355.221-00.015 53.6 99.8 93 G356.1+0.0 G356.090-00.075 269.6 71.9 0 G359.5+0.0 G359.486+00.028 115.3 15.9 0 lcccc 5 0pt Summary of Available Radio Continuum Surveys 5 I.D. Frequency Angular Resolution Galactic Longitude Coverage Reference [GHz] [] VLA Galactic Center 0.332 0.72 358< l <2 <cit.> HASLAM 0.408 51 Full Sky <cit.> VGPS 1.4 1 18< l < 67 <cit.> SGPS 1.4 1.6 253< l < 358 <cit.> Effelsberg/GLOSTAR 4.88 2.4 358< l < 60 <cit.> Parkes 64–m 5.0 4.1 190< l < 40 <cit.> Effelsberg/GLOSTAR 6.82 1.8 358< l < 60 <cit.> Nobeyama 45–m 10.3 2.66 355< l < 56 <cit.> § DISCUSSION §.§ Nitrogen Abundance Determination The ionized Nitrogen abundance, X_ N^+, relative to ionized hydrogen is given by the ratio of the ionized Nitrogen, N( N^+), and ionized hydrogen, N( H^+), column densities (see also ), X_ N^+=N( N^+)/N( H^+). We derived the ionized Nitrogen column density from the [N ii] 205μm line intensity (I_ 205 μ m) using <cit.>, N( N^+)= 4π I_ 205 μ m/A_10 h ν_ 205 μ m f_1(n_e,T_e), where the spontaneous decay rate (Einstein's A coefficient) is A_10=2.08×10^-6 s^-1, the rest frequency is ν_205μ m=1.461×10^12 Hz. The fractional population of the ^3 P_1, f_1, is a function of the electron density, n_e, and the electron temperature T_e. In local thermodynamical equilibrium (LTE), the main–beam temperature (in units of K) per unit velocity (km s^-1) of a hydrogen recombination line is related to the emission measure, EM, (in units of cm^-6 pc), as <cit.>, ∫ T^ RRLdv=5.76×10^11 T_ e^-3/2 EM ν_ RRL^-1, where the speed of light, c, is in units of km s^-1, the rest frequency of the RRL, ν_ RRL, in Hz, and the electron temperature, T_e, is in K. The EM is defined as the integral of the electron volume density squared along the line of sight, EM=∫ n^2_e dl. Assuming that the electron density is constant along the line of sight, which is approximately valid for the discrete sources we typically detect, this equation can be simplified to EM=n_eN_e≃ n_eN( H^+). We can thus, re–order equation (<ref>) in terms of the H^+ column density and electron density as, N( H ^+)=∫ T^ RRLdv /(1.87×10^-7ν_RRL^-1 T_ e^-3/2 n_ e) , where n_ e is in units of cm^-3 and N( H ^+) in units of cm^-2. The hydrogen recombination line emission can be affected by deviations from local thermodynamical equilibrium and in this situation this deviation can be defined in terms of the ratio <cit.>, G_ LTE(n_e, T_c)= T^ RRL/T^ RRL_ LTE=b_n [1-1/2τ_ cβ_n ], where b_n and β_n are the departure coefficient and amplification factor for a transition with principal quantum number n, respectively, and T_c and τ_c are the continuum brightness temperature and opacity, respectively, at ν_RRL. The continuum opacity can be derived from observations of T_c, and the electron temperature, using τ_c=T_ c/T_ e. The effects of deviations from local thermodynamical equilibrium are well understood <cit.> and a correction for these effects can be readily applied. 
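As a worked example of the relations above, the following sketch evaluates N(N^+), N(H^+), and the resulting abundance using the G000.0+0.0 entries from the tables in this section; the ^3P_1 fractional population f_1 is an assumed illustrative value rather than the outcome of the level-population calculation.

```python
import numpy as np

# Sketch of the N^+/H^+ determination from the relations quoted above.
# Inputs are the G000.0+0.0 values listed in the tables of this section; f1
# (fractional population of the [N II] ^3P_1 level) is an assumed illustrative number.
H_PLANCK = 6.62607e-34                   # J s
A10, NU_205 = 2.08e-6, 1.461e12          # s^-1, Hz, [N II] 205 um transition

I_205 = 26.426e-8             # W m^-2 sr^-1
rrl_integral = 8.23           # K km/s (H89alpha)
nu_rrl = 9.17332e9            # Hz
T_e, n_e = 11910.0, 121.47    # K, cm^-3
f1 = 0.45                     # assumed fractional population of ^3P_1

# N(N+) = 4 pi I / (A10 h nu f1); convert m^-2 -> cm^-2.
N_nplus = 4.0 * np.pi * I_205 / (A10 * H_PLANCK * NU_205 * f1) * 1e-4

# N(H+) from the RRL relation quoted above (cm^-2).
N_hplus = rrl_integral / (1.87e-7 * nu_rrl**-1 * T_e**-1.5 * n_e)

abundance = N_nplus / N_hplus
print(f"N(N+) = {N_nplus:.2e} cm^-2, N(H+) = {N_hplus:.2e} cm^-2")
print(f"12 + log10(N+/H+) = {12 + np.log10(abundance):.2f}")
```

With this choice of f_1 the sketch returns an N(N^+) close to the value tabulated for this sightline and an abundance near the Galactic-center values discussed below.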
We evaluated the brightness temperature of the continuum at the frequency of the RRL observations by extrapolating the synchrotron and free–free spectral energy distributions from the respective brightness temperature derived for each LOS, as described in Section <ref>. The electron density can be calculated from the [N ii] 205 μm/122 μm intensity ratio and the electron temperature using Equations 21 and 22 in <cit.>, for a range between 10 and 1000 cm^-3. The uncertainties in the determination of the electron density are determined by the uncertainty in the [N ii] 205μm/122μm intensity ratio and in on those of the electron temperature. §.§.§ Electron Temperature Determination The electron temperature in optically thin, ionized gas regions that are in LTE can be derived from the ratio of radio recombination line (RRL) emission to thermal free-free radio continuum emission. This derivation is possible because RRL emission is proportional to the product of the emission measure (EM) and temperature to the power of -2.5, while thermal radio free-free emission is proportional to EM times temperature to the power of -1.35 <cit.>. As a result, the ratio of RRL to thermal radio continuum emission is proportional to temperature to the power of -1.15 and is independent of EM (e.g., ). The electron temperature is therefore given by, T_e/K = [ 6.985 × 10^3 (T_ b/T_ L) (ν_rc/ GHz)^2.1 · (ν_ RRL^-1) · (Δ v)^-1· (1+y)^-1 ] ^0.86956, where T_b and T_L are the brightness temperatures of the free-free continuum and RRL peak intensity, respectively, Δ v is the RRL full width at half maximum, ν_rc is the frequency of the radio continuum emission, ν_RRL is the frequency of the RRL observations, and y is a term related to the contribution of ^4He^+, which is assumed to be 0.08 <cit.>. There are several radio continuum surveys with a spatial coverage that overlaps that from our RRL survey and that have similar angular resolution, so that we can extract intensities for our analysis. These surveys include the VLA Galactic Plane Survey <cit.>, the Southern Galactic Plane Survey <cit.>, the Nobeyama Radio Observatory 45–m telescope survey <cit.>, the Parkes 64–m telescope 6 cm survey[Data from the Nobeyama 45–m and Parkes 64–m surveys are, among other Galactic plane surveys, available for download at the MPIFR's survey sampler at <https://www3.mpifr-bonn.mpg.de/survey.html>. ] <cit.>, and the Effelsberg 100–m part of the GLOSTAR survey <cit.>. In Table <ref> we list the frequency, angular resolution and Galactic longitude coverage of these surveys. These surveys have varying angular resolutions and frequencies, and thus uncertainties in the relative calibration and the correction from the contribution from synchrotron emission can vary from survey to survey. To minimize these uncertainties, in each LOS we corrected all available continuum brightness temperatures for the contribution from synchrotron emission at their frequencies (see below), and estimated the continuum brightness temperatures at the frequency of our RRL observations, 8.5 GHz, assuming a free-free spectrum with spectral index of -2.1. We then averaged all available samples together to obtain an average free–free brightness temperature at 8.5 GHz, which are listed in Tables <ref> and <ref>. In our analysis, we only used radio continuum brightness temperatures with a signal–to–noise ratio larger than 10. 
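The electron-temperature relation above can be evaluated directly. The sketch below assumes frequencies in GHz and the line width in km/s (the units that make the 6.985×10^3 coefficient consistent); the input temperatures are illustrative rather than a specific table entry.

```python
import numpy as np

# Sketch of the LTE electron-temperature estimate from the line-to-continuum
# ratio quoted above. Assumed units: frequencies in GHz, line width in km/s;
# the input numbers below are illustrative only.
def electron_temperature(T_b, T_L, dv_kms, nu_rc_ghz, nu_rrl_ghz, y=0.08):
    """T_b: free-free continuum brightness temperature [K];
    T_L: RRL peak line temperature [K]; dv_kms: RRL FWHM [km/s];
    y: 4He+ correction term, 0.08 as adopted in the text."""
    x = 6.985e3 * (T_b / T_L) * nu_rc_ghz**2.1 / nu_rrl_ghz / dv_kms / (1.0 + y)
    return x ** 0.86956

# Illustrative numbers: a 0.15 K continuum and a 12 mK, 25 km/s wide line at X band.
print(f"T_e ~ {electron_temperature(0.15, 0.012, 25.0, 8.5, 9.17332):.0f} K")
```

For these illustrative inputs the estimate lands near 8000 K, i.e. in the typical range quoted for the sample.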
In Appendix <ref> we show the N^+/H^+ distribution as a function of Galactocentric distance derived using electron temperatures derived from each of the radio continuum surveys listed in Table <ref> individually, showing that its distribution is not significantly affected by the choice of radio continuum survey used to derive electron temperatures. We estimated the contribution of synchrotron emission at a given frequency using the 408 MHz map from <cit.> and assuming a synchrotron spectral index, in brightness temperature scale, of -2.8. The 408 MHz map has an angular resolution of 51 which is significantly larger than that of our observations. Note however, that synchrotron emission in the Galactic plane is expected arise from diffuse spatially extended gas, while free–free emission originates from more compact and denser regions, so that we expect that uncertainties related to the difference in angular resolution are not significant. In the LOS toward the Galactic center region, where several compact non-thermal features are observed, we used the VLA 332 MHz map presented by <cit.> convolved the angular resolution of our observations. We find that the typical contribution from synchrotron to the observed radio continuum brightness temperature in our sample is ∼47% at 1.4 GHz, ∼40% at 5 GHz, ∼36% at at 6.82 GHz, ∼26% at 10.3 GHz. As mentioned above, sources of uncertainties in using this derivation of the electron temperature include measurement uncertainties of the continuum and RRL emission, calibration uncertainties between the different frequency bands, the relative contribution from synchrotron and free-free emission to the observed continuum emission, and non-LTE effects for the RRL intensities. Note that pressure broadening is not expected to be significant in the density regime that we are sampling (). We used the electron densities from the [N ii] lines to account for non–LTE effects on T_L using equation (<ref>). In Appendix B, we compare our methodology to derive electron temperature against electron temperatures derived in a sample of H ii regions by <cit.> in which calibration uncertainties and synchrotron contribution are carefully assessed. We find that for electron temperatures derived using continuum at both 1.4 GHz and 6.82 GHz there is a scatter of about 20–30% which we attribute to the unaccounted uncertainties described above. We also studied whether beam dilution effects resulting from using observations with different angular resolution in the RRL and continuum emission impact our determination of electron temperatures in our sample.To study beam dilution effects in the northern part of our sample, we smoothed the 60 VGPS survey at 1.4 GHz to the 84 and 160 resolutions of the GBT RRL and Nobeyama Radio Continnum data sets, respectively, and studied the intensity ratio at these two different angular resolutions. We found that beam filling effects are small for our sample, with typical variations smaller than 5%. To study beam dilution effects in the southern part of our sample, we also convolved the SGPS data at 100 to the 115 angular resolution of the DSS–43 data. We found small variations in the intensity ratio of the SGPS continuum data at 100 and 115 with typical variations smaller than 2%. Because the variation in intensities due to beam filling in our sample are small, suggesting that most sources are extended, we did not apply a beam filling correction to our data. 
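The synchrotron correction and extrapolation to 8.5 GHz amount to straightforward power-law arithmetic. In the sketch below the spectral indices are those quoted above, while the input brightness temperatures are illustrative.

```python
import numpy as np

# Sketch of the synchrotron correction and free-free extrapolation to 8.5 GHz.
# Brightness-temperature spectral indices: -2.8 for synchrotron and -2.1 for
# free-free, as quoted in the text. Input temperatures are illustrative.
def free_free_at_8p5ghz(T_total, nu_ghz, T_408mhz):
    """T_total: measured continuum at nu_ghz [K]; T_408mhz: 408 MHz brightness
    temperature [K] used to anchor the synchrotron component."""
    T_syn = T_408mhz * (nu_ghz / 0.408) ** -2.8     # synchrotron at the survey frequency
    T_ff = T_total - T_syn                          # free-free at the survey frequency
    return T_ff * (8.5 / nu_ghz) ** -2.1            # extrapolate to 8.5 GHz

# Example: a 1.4 GHz measurement of 12 K toward a sightline with T(408 MHz) = 180 K.
print(f"T_ff(8.5 GHz) ~ {free_free_at_8p5ghz(12.0, 1.4, 180.0):.2f} K")
```

With these illustrative inputs the synchrotron fraction at 1.4 GHz comes out near the ∼47% typical value quoted above.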
In Tables <ref> and <ref> we show the derived ionized Nitrogen abundances, electron densities, electron temperatures, and N^+ and H^+ column densities for our sample. In Figure <ref> we show the distribution of these quantities as a function of Galactic longitude. As seen in Figure <ref>, the derived electron temperatures range between 3500 K and 21000 K, with an average value of T_ e = 8225 K, which is typical of ionized gas regions, with a standard deviation of 3900K. We notice that there is a dependence in the value of T_ e with the signal–to–noise ratio of the RRL observations, with a tendency for temperatures to be higher at lower SNR values. The average temperature for SNR>10 is 7484 K, while for 5<SNR<10, it is 9343 K. This difference in the electron temperature has a small impact on the derived ionized Nitrogen abundances. Using Equations (<ref>), (<ref>), (<ref>), and (<ref>), for the observed range in ionized Nitrogen and RRL intensities, we can derive that N^+/H^+ is proportional to the electron temperature between T^-1.1_ e and T^-1.2_ e. Because N^+/H^+ has an additional dependence on the RRL line intensity as T^-1_ L (Equation <ref>), and the electron temperature depends on the RRL intensity as T^-0.87_ L (Equation <ref>), the resulting dependence of N^+/H^+ on the RRL intensity is weak (∼ T^-0.04_L). Thus, an uncertainty of a factor of 2 arising from T_ L would impact T_ e by the same factor but N^+/H^+ by only a factor of 1.03. Note however, that N^+/H^+ is proportional to ∼ T_ b^-1, and therefore uncertainties from the continuum intensity can have a larger impact in the derived Nitrogen abundance. This result motivated us to adopt a larger SNR>10 criterion for selecting lines-of-sights from the radio continuum data set. §.§.§ The ionization structure of Nitrogen The ionization potential of Nitrogen (14.53 eV) is greater than that of hydrogen (13.60 eV) by 0.93 eV, so Nitrogen can be in neutral form in regions where Hydrogen is fully ionized by photons between 13.60 eV. and 14.53 eV. In the ISM, four primary processes contribute to Nitrogen ionization: EUV (14.53 eV to 124.24 eV) photoionization, electron collisional ionization, proton (H^+) charge transfer, and X-ray photoionization. While electron recombination with N^+ serves as the primary loss mechanism <cit.>. Models show that electron collisional ionization of Nitrogen alone is inefficient at temperatures below ∼10^4 K due to the large ionization potential of atomic Nitrogen. However, proton charge transfer (H^+ +N→ H+ N^+ - 0.93 eV), which has a smaller energy barrier, might be important at temperatures above 5000 K (see Section 4 in and ). Therefore a significant fraction of Nitrogen might be ionized even where few EUV photons are present above 14.54 eV due to collisional and exchange ionization by electrons and protons, respectively. Note, however, that a medium where Nitrogen is fully ionized is difficult to attain without EUV photons because collisional ionization with electrons or charge exchange with protons is generally balanced by electron recombination and therefore roughly independent of electron density <cit.>. It is only where EUV dominates ionization that an increase in photon flux can eventually overcome electron recombination leading to a fully ionized Nitrogen gas. Under typical ISM and H ii region conditions, higher ionization states of Nitrogen, such as N^++, can be maintained only by the presence of an EUV field. 
Models show that an electron temperature greater than about 25000 K is required for collisional ionization to be important. Thus, under typical temperatures and densities of the ionized gas, as determined here, N^++ will not have a significant abundance without a source of EUV photons, such as are typically present in the close vicinity of massive stars. This result suggests that regions with higher ionization levels than N^+, such as can be probed with the [N iii] 57μm line, are likely to be compact and closely associated with H ii regions. It is possible, however, that EUV photons leakage from H ii regions <cit.> might make a more diffuse, extended component of highly ionized Nitrogen possible <cit.>. If a significant fraction of the gas–phase Nitrogen is at ionization levels higher than N^+, our assumption that the N/H abundance ratio can be traced by the N^+/H^+ abundance ratio might not be valid, and therefore it introduces uncertainties into our analysis. Note that the linear correlation between the [N ii] and RRL lines shown in Figure <ref> suggest that if there is any underestimation of the total Nitrogen along the line–of–sight, the effect is not significant. To determine whether higher ionization states, such as N^++, might be a significant source of Nitrogen in our sample, we used SOFIA and Herschel observations of the fine structure line of doubly ionized Nitrogen [N iii] 57μm, to characterize the ionization environment of 8 LOS in our survey, and in the Sgr A region in the Galactic center. Under the typical physical conditions of our LOS sample (temperatures less than 20,000 K), N^++ can only be maintained by EUV photons from massive stars. Therefore, the presence of [N iii] will enable us to determine whether EUV photons play an active role in determining the ionization structure of Nitrogen in our sample. The [N iii] 57μm line was detected in 4 out of the 8 LOS, and their intensities, and 3σ upper limits in case of the non–detections, are listed in Table <ref> and the detected spectra are shown in Figure <ref>. As we can see in Table <ref>, these LOS are either inside (G010.4+0.0 and G028.7+0.0), or in the close vicinity of (G000.0+0.0 and G024.3+0.0), dense H ii regions cataloged with WISE. This association suggests that their environments can be influenced by EUV photons that can further ionize N^+ to N^++. In Table <ref>, we present the results of our analysis of the [N iii] 57μm observations. We derived N^++ column densities from the observed [N iii] 57μm intensity and the electron density derived from the [N ii] 122μm/205μm ratio using, N( N^++)= 4π I_ 57 μ m/A_ul h ν_ 57 μ m f_3/2(n_e), where the spontaneous decay rate for N^++ is A_ul=4.79×10^-5 s^-1, and the rest frequency is ν_57μ m=5.229×10^12 Hz. The fractional population of the ^1/2 P_1/2, f_3/2, is a function of the electron density, n_e, as shown in Figure <ref>. The derived N^++/N^+ ratio for the detected sources ranges from 0.09 to 0.87, suggesting that doubly ionized Nitrogen in these regions is not dominant, but could be a significant fraction of the total Nitrogen, and thus can introduce an underestimation of the total Nitrogen abundance derived from N^+/H^+ between factors of ∼1.09 for G000.0+0.0 and ∼1.87 in G010.4+0.0. A similar range is obtained for the 3σ upper limits. 
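For completeness, the analogous column-density estimate for N^++ can be sketched as follows, again with the fractional population of the upper level as an assumed illustrative input (the text notes it is of order 10% at the densities traced by the [N ii] lines); the [N iii] 57μm intensity is the G000.0+0.0 value from the table above.

```python
import numpy as np

# Sketch of the N^++ column density from the [N III] 57 um relation above.
# f32 (fractional population of the upper level) is an assumed illustrative value.
H_PLANCK = 6.62607e-34              # J s
A_UL, NU_57 = 4.79e-5, 5.229e12     # s^-1, Hz

I_57 = 4.61e-7                      # W m^-2 sr^-1 (G000.0+0.0, from the table above)
f32 = 0.11                          # assumed fractional population of the upper level

N_npp = 4.0 * np.pi * I_57 / (A_UL * H_PLANCK * NU_57 * f32) * 1e-4   # cm^-2
N_np = 3.58e17                      # cm^-2, N(N+) for the same sightline (summary table below)
print(f"N(N++) ~ {N_npp:.2e} cm^-2, N(N++)/N(N+) ~ {N_npp / N_np:.2f}")
```

With f_3/2 ≈ 0.11 the ratio lands near the 0.09 tabulated for this sightline.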
However, these results are based on the assumption that the electron density of the [N iii]–emitting region is the same as that of the [N ii]–emitting region, whereas, given the EUV requirements to produce N^++, it is more likely that [N iii] comes from compact regions closer to the source of H ii regions. Thus, under these assumptions, the derived N^++/N^+ ratio should be considered as an upper limit. As we can see in Figure <ref>, for the typical densities derived from the Nitrogen lines (∼20-100 cm^-3), the population of the ^1/2 P_3/2 level is very small (<10%), and thus a relatively large N^++ column density is needed to reproduce the observed [N iii] 57μm intensities and upper limits. For T_ e=8000 K, the ^1/2 P_3/2 level is 50% populated at about 1000 cm^-3, and assuming such a density in our analysis would result in a N^++ column density and N^++/N^+ ratio that are a factor of ∼10 lower that those resulting from electron densities determined from the [N ii] 122μm/205μm ratio. A more appropriate tracer of the volume density of gas associated with doubly ionized Nitrogen is the [O iii] 52μm/88μm ratio, as the critical density of the [O iii] 52μm line is similar to that for [N iii] 57μm, as is the EUV requirement to produce O^++. The Sagittarius A region in the Galactic center was observed in the [N ii] 122μm and 205μm and the [O iii] 52μm and 88μm lines by Herschel/PACS and represents an ideal location to study the ionization structure of Nitrogen without uncertainties in the volume density determination. We used both the [N ii] 122μm/205μm ratio and the [O iii] 52μm/88μm line ratios to derive the electron volume density in the low and high ionization regions, respectively. We followed the procedure discussed in Section <ref> and used the ratio of the X–band continuum brightness temperature determined from the GBT, and the H92α Hydrogen recombination line data presented by <cit.>, including a correction for the contribution of Synchrotron emission, to derive an electron temperature of 11443 K (see Table <ref>). We derive an electron density of 254 cm^-3 from the [N ii] lines, and 2874 cm^-3, from the [O iii] lines. Using the derived volume densities for low ([N ii]) and high ([O iii]) ionization regions, and the electron temperature, we calculated a column density of singly ionized Nitrogen of 6.3×10^17 cm^-2 and of doubly ionized Nitrogen 3.7×10^16 cm^-2. We find that N^++ represents only a small fraction (6%) of the total N^++N^++. Thus, when appropriate electron densities are used for the different ionization regimes, we find that most of the ionized gas mass in this region is at a low ionization state where most of the Nitrogen is singly ionized. Using the parameters derived above, and assuming that the emission measure from the RRL lines can be separated between that arising from low and high ionization regions using the 6% ratio derived for Nitrogen, we derive an Nitrogen abundance for this region of 12+log(N/H)=7.86, which is consistent to those derived in our sample for the Galactic center. 
lccccccc 4 0pt Single and Doubly ionized Nitrogen Column Densities 6 LOS T_e n_e ([N ii]) N( N^+) n_e ([O iii]) N( N^++) N( N^++)/N( N^+) [K] [cm^-3] [10^17 cm^-2] [cm^-3] [10^17 cm^-2] G000.0+0.0 11910 ± 97 121.47 ± 0.52 3.58 ± 0.01 – 0.32 ± 0.05 0.09 ± 0.01 G007.0+0.0 8000 ± 800 32.77 ± 0.66 0.81 ± 0.01 – <0.43 ± 0.14 <0.53 ± 0.18 G010.4+0.0 6129 ± 367 24.32 ± 0.51 1.01 ± 0.01 – 0.88 ± 0.10 0.87 ± 0.10 G012.2+0.0 5800 ± 648 23.35 ± 0.43 1.09 ± 0.01 – <0.65 ± 0.22 <0.60 ± 0.20 G023.5+0.0 5949 ± 339 25.99 ± 0.25 1.77 ± 0.01 – <0.55 ± 0.18 <0.31 ± 0.10 G024.3+0.0 5744 ± 276 18.15 ± 0.29 1.45 ± 0.01 – 1.15 ± 0.16 0.80 ± 0.11 G026.1+0.0 5630 ± 198 38.83 ± 0.42 1.49 ± 0.01 – <0.71 ± 0.24 <0.47 ± 0.16 G028.7+0.0 8663 ± 612 37.62 ± 0.56 1.06 ± 0.01 – 0.48 ± 0.05 0.45 ± 0.05 SGRA 9601 ± 367 237.63 ± 27.43 6.34 ± 0.63 2686.6±1027.9 0.37 ± 0.01 0.06 ± 0.01 §.§.§ Galactocentric Distance Determination Because our sample in the inner Galaxy is uniformly distributed in Galactic longitude, we expect that the LOS analyzed here have a wide range of galactocentric distances in the inner Galaxy, so that we can study the radial distribution of the ionized Nitrogen abundance in the Milky Way. The traditional method for determining the Galactocentric distances from sources in the Galactic plane is the use of kinematic distances derived from the LSR velocity and Galactic coordinates of the source. However, because of non circular motions associated with the Galactic bar, kinematic distances cannot be accurately determined in the inner R_ gal≲4 kpc of the Galaxy. To ensure that we are able to sample the Galactic plane in the innermost parts of the Galaxy and the Galactic center, we instead use the model of the Galaxy presented by <cit.> to determine the Galactocentric distances to the sources detected in our survey. <cit.> presented a model of the spiral structure of the Milky Way based on 200 trigonometric parallaxes of masers associated with massive star forming regions. Distances measured from maser trigonometric parallaxes have the advantage that they do not rely on any assumption on the kinematics of the Galaxy. The <cit.> model enables us to estimate distances to sources using their Galactic coordinates and LSR velocity. From a distance, D, to a source with Galactic longitude l and latitude b=0, the corresponding Galactocentric distance is given by, R_ gal=√(D^2-2R_⊙Dcos(l)+R_⊙^2), where R_⊙ is the distance from the Sun to the Galactic center which is fitted to be R_⊙ = 8.15 kpc by <cit.>. In Figure <ref>, we show a comparison between Galactocentric distances derived for our sample using the <cit.> model and those derived from kinematic distances, for R_ gal>4 kpc. The kinematic distances are determined for a given velocity component with Galactic longitude l, latitude b, and local standard of rest (LSR) velocity V_ LSR, is given by R_ gal= R_⊙sin (l)cos(b) (V(R_ gal)/V_ LSR+V_⊙sin(l)cos(b) ), where V_⊙ is the orbital velocity of the Sun with respect to the Galactic center, and V(R_ gal) is the rotation curve. We assume a "Universal" rotation curve presented by <cit.> (see Equation 3 in ) assuming the value of the Sun's rotation velocity, V_⊙ = 247 km s^-1, fitted by <cit.>. We tested the dependence of R_ gal with other determinations of V_⊙ <cit.>, finding negligible differences. 
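Both distance estimates discussed above reduce to a few lines. The sketch below uses the R_⊙ = 8.15 kpc and V_⊙ = 247 km s^-1 values quoted in the text, and adopts a flat rotation curve as a simplification of the "Universal" curve used in the actual analysis.

```python
import numpy as np

# Sketch of the two Galactocentric-distance estimates discussed above.
# R0 and V0 are the values quoted in the text; a flat rotation curve is used
# here as a simplification of the "Universal" curve adopted in the paper.
R0, V0 = 8.15, 247.0          # kpc, km/s

def r_gal_from_distance(d_kpc, l_deg):
    """R_gal for a source at heliocentric distance d toward longitude l (b = 0)."""
    l = np.radians(l_deg)
    return np.sqrt(d_kpc**2 - 2.0 * R0 * d_kpc * np.cos(l) + R0**2)

def r_gal_kinematic(v_lsr, l_deg, b_deg=0.0):
    """Kinematic R_gal assuming a flat rotation curve V(R) = V0."""
    slb = np.sin(np.radians(l_deg)) * np.cos(np.radians(b_deg))
    return R0 * slb * V0 / (v_lsr + V0 * slb)

print(f"R_gal(d = 4 kpc, l = 30 deg)        = {r_gal_from_distance(4.0, 30.0):.2f} kpc")
print(f"R_gal(V_lsr = 60 km/s, l = 30 deg)  = {r_gal_kinematic(60.0, 30.0):.2f} kpc")
```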
As we can see, the <cit.> model is in very good agreement with the Galactocentric distances derived from kinematic distances, with values of R_gal derived from the <cit.> model being on average ∼5% lower than those derived from kinematic distances. There are 24 LOS in our sample that have a single RRL velocity component, enabling us to directly associate this emission with the velocity–unresolved [N ii] 205μm and 122μm and radio continuum emission. In the case of multiple-component LOS, we determined R_gal for each velocity component and determined the range of Galactocentric distances that these velocities represent. We assume that the N^+/H^+ gradient is smooth at <1.5 kpc scales, and thus that the velocity components in a given LOS have the same Nitrogen abundance provided they are within 1.5 kpc of each other. There are 14 LOS in this category for which the relative distance between the two components is less than 1.5 kpc. For this sub-sample, we assigned the average radial distance to the derived ionized Nitrogen abundance and considered the radial range as error bars on the X–axis.

There is a subset of 8 LOS, mostly located in the inner ±20° from the Galactic center, where a high LSR velocity component with |v_LSR|>80 km s^-1 is observed together with a low LSR velocity component at about v_LSR≃0 km s^-1. The <cit.> model suggests that the low velocity components are at distances larger than 4 kpc from the Galactic Center, while the components with high LSR velocity are in the 0 kpc<R_gal<4 kpc range. Thus, the high LSR velocity components are likely associated with the Galactic bar and located in the proximity of the Galactic center. In Figure <ref>, we show the result of numerical simulations of gas flows in a barred potential presented by <cit.>, with the left panel showing the predicted gas kinematics in the longitude–velocity map of this region and the right panel the spatial distribution of these different components. We overplotted the fitted velocities of the RRLs for the 18 velocity components in our survey in this region as orange dots. As we can see, the high LSR velocity components are likely associated with the bar potential gas corresponding to the inner 2 kpc of the Galaxy. At the same time, sources with velocities near v_LSR≃ 0 km s^-1 are likely at larger Galactocentric distances.

In the figure, we see that the G000.0+0.0 LOS shows multiple velocity components which are predicted to be associated with the Galactic center. We therefore assume for this LOS that the N^+/H^+, electron temperature, and electron density of the gas are the same for all velocity components, and therefore we can combine the RRL data with the unresolved PACS and continuum data to derive N^+/H^+. Because the observed velocity-unresolved [N ii] 122μm and 205μm emission is the sum of the intensity of these lines arising from both velocity components, to determine N^+/H^+ in the high LSR velocity sources we need to estimate the contribution to the observed [N ii] intensity from the low LSR velocity sources. As discussed above, the <cit.> model suggests that the low velocity components are at distances larger than 4 kpc from the Galactic Center. If that is the case, we can assume that the N^+/H^+ of the low LSR velocity sources is within the range observed for all other sources across the Galaxy with R_gal>4 kpc, and we can determine its value using the fit to the N^+/H^+ distribution as a function of Galactocentric distance derived below (Equation <ref>).
With an assumed N^+/H^+, the intensities of the [N ii] 122μm and 205μm emission for the low velocity component are given by

I^low_122μm = A_21 h ν_122μm X_N^+ EM f_2(n_e,T_e) / (4π n_e)

and

I^low_205μm = A_10 h ν_205μm X_N^+ EM f_1(n_e,T_e) / (4π n_e),

where f_1 and f_2 are the level populations of the ^3P_1 and ^3P_2 levels, respectively, and the EM is derived from the RRL observations for this velocity component using Equation (<ref>). As we can see, these intensities depend on the electron density and temperature of the low velocity component. With the derived [N ii] 122μm and 205μm intensities of the low velocity component we can obtain I^high_122μm = I_122μm - I^low_122μm and I^high_205μm = I_205μm - I^low_205μm, and the electron density for the high velocity component is derived from the [N ii] 122μm/205μm ratio and an electron temperature. With the intensity of either the [N ii] 122μm or 205μm lines, the electron temperature, the electron density, and the measured EM from the RRL, we can derive the ionized Nitrogen abundance using Equations (<ref>), (<ref>), and (<ref>).

Note, however, that the electron temperatures and densities of each LSR velocity component cannot be independently derived. This is because both the [N ii] 122μm and 205μm and the radio continuum intensities, which are used to determine the electron density and temperature, respectively, are velocity unresolved and thus correspond to the sum of the intensities arising from each velocity component. To investigate the range of possible solutions for the N^+/H^+ ratio of the high velocity component given these uncertainties, we evaluated this quantity using the method described above for ranges in the electron temperature of the low and high LSR velocity components from 1000 K to 25000 K, and in the [N ii] 122μm/205μm ratio corresponding to n_e = 10–1000 cm^-3. Additionally, we assumed a range for the N^+/H^+ ratio of the low LSR velocity component that corresponds to the typical standard deviation for R_gal>4 kpc of a factor of 2. Using 10^6 combinations of these parameters, we solved for the N^+/H^+ abundance ratio of the high velocity component. These solutions were constrained using the measured values of the continuum temperature and the [N ii] 205μm and 122μm lines.

Given the known RRL intensity of each component, we used Equation (<ref>) to estimate the continuum brightness temperature of each component that would result from a given electron temperature, with the constraints that the continuum brightness temperature of each source has a SNR above 5 and that the sum of the resulting continuum brightness temperatures of the two components equals the measured value within its uncertainties. Additionally, for a given [N ii] 122μm/205μm ratio and temperature for the low LSR component, we determined the corresponding electron density. With the electron density, electron temperature, and the N^+/H^+ ratio for this velocity component, we evaluated the [N ii] 205μm and 122μm line intensities using Equations (<ref>) and (<ref>). We then used Equations (<ref>) and (<ref>) to evaluate the corresponding [N ii] intensities and electron density of the high LSR velocity component, for a given electron temperature for this velocity component. The solutions were constrained by requiring that the derived [N ii] intensities for the high and low LSR velocity components each have a SNR above 5, and that the sum of the [N ii] lines from these components match the observed values within their uncertainties.
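The constrained Monte Carlo procedure described above can be sketched schematically as follows. The forward model here is a deliberately simplified toy (its density and temperature dependences are placeholders rather than the Equations used in the paper), so the snippet only illustrates the sampling-and-rejection logic; the prior ranges are those quoted in the text, and all specific numbers inside the toy model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_forward_model(params, em_low, em_high):
    """Toy stand-in for the forward model: velocity-unresolved [N ii] intensity and
    continuum brightness as the sum of the two components. The functional forms
    below are schematic placeholders, not the actual level-population and
    free-free expressions."""
    te_lo, te_hi, ne_lo, ne_hi, x_lo, x_hi = params

    def one_component(te, ne, x, em):
        line = x * em / (1.0 + 300.0 / ne)            # toy [N ii] emissivity vs density
        cont = 3.0e-7 * em * (te / 1.0e4) ** (-0.35)  # toy free-free brightness
        return line, cont

    l1, c1 = one_component(te_lo, ne_lo, x_lo, em_low)
    l2, c2 = one_component(te_hi, ne_hi, x_hi, em_high)
    return l1 + l2, c1 + c2

def draw_parameters(x_lo_central):
    """Draw one random combination of the unknowns, with the ranges quoted in the text."""
    te_lo, te_hi = rng.uniform(1.0e3, 2.5e4, size=2)       # K
    ne_lo, ne_hi = 10.0 ** rng.uniform(1.0, 3.0, size=2)   # 10-1000 cm^-3
    x_lo = x_lo_central * 10.0 ** rng.normal(0.0, 0.3)     # ~factor-2 scatter of the R_gal>4 kpc sample
    x_hi = 10.0 ** rng.uniform(-4.5, -3.0)                  # free N+/H+ of the high-velocity component
    return te_lo, te_hi, ne_lo, ne_hi, x_lo, x_hi

def constrained_abundance(obs_line, obs_cont, em_low, em_high, x_lo_central, n_draws=200_000):
    """Keep draws whose summed line and continuum predictions match the measured,
    velocity-unresolved values within their uncertainties; return the mean and
    standard deviation of the accepted high-velocity N+/H+ values."""
    accepted = []
    for _ in range(n_draws):
        p = draw_parameters(x_lo_central)
        line, cont = toy_forward_model(p, em_low, em_high)
        if abs(line - obs_line[0]) < obs_line[1] and abs(cont - obs_cont[0]) < obs_cont[1]:
            accepted.append(p[5])
    return np.mean(accepted), np.std(accepted)
```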
In Tables <ref> and <ref> we list the average value of all possible solutions for N^+/H^+ and the constrained values of the electron temperature, electron density, and N^+ and H^+ column densities for the high LSR velocity component, together with their corresponding standard deviations. In Figure <ref>, we show two examples of the derived N^+/H^+ and constrained parameters, showing that this approach results in well-constrained parameters.

Table: Derived Nitrogen Abundances in the Galactic Plane for l ≥ 0

LOS          l [deg]   b [deg]   n_e [cm^-3]    T_e [K]         N(N^+) [10^17 cm^-2]   N(H^+) [10^20 cm^-2]   12+log(N^+/H^+)   R_gal [kpc]
G000.0+0.0   0.000     0.0       121.5 ± 0.5    11910 ± 97      35.8 ± 0.12            50.6 ± 0.9             7.8 ± 0.0         0.0 ± 0.0
G000.5+0.0   0.500     0.0       40.9 ± 23.4    12855 ± 706     9.9 ± 3.17             49.7 ± 19.3            7.3 ± 0.1         0.1 ± 0.0
G003.5+0.0   3.478     0.0       16.3 ± 0.3     5421 ± 1340     3.9 ± 0.04             2.4 ± 1.1              8.2 ± 0.2         5.2 ± 0.0
G004.3+0.0   4.348     0.0       24.7 ± 0.8     7791 ± 3330     2.2 ± 0.04             1.6 ± 1.4              8.1 ± 0.4         5.2 ± 0.0
G005.2+0.0   5.217     0.0       6.6 ± 0.5      7513 ± 1844     11.7 ± 0.29            7.6 ± 3.8              8.2 ± 0.2         5.2 ± 0.0
G006.1+0.0   6.087     0.0       18.8 ± 7.2     6969 ± 1992     5.3 ± 1.21             4.2 ± 1.5              8.1 ± 0.2         0.9 ± 0.0
G007.8+0.0   7.826     0.0       27.5 ± 0.5     10043 ± 1644    3.4 ± 0.03             4.5 ± 1.5              7.9 ± 0.1         6.0 ± 1.4
G008.7+0.0   8.696     0.0       14.3 ± 0.8     12852 ± 3316    11.1 ± 0.25            21.7 ± 12.0            7.7 ± 0.2         5.5 ± 0.0
G010.4+0.0   10.435    0.0       24.3 ± 0.5     6129 ± 367      10.1 ± 0.13            12.3 ± 1.5             7.9 ± 0.1         1.5 ± 0.0
G011.3+0.0   11.304    0.0       8.3 ± 0.6      4556 ± 707      10.5 ± 0.31            6.8 ± 2.2              8.2 ± 0.1         5.7 ± 0.0
G012.2+0.0   12.174    0.0       23.4 ± 0.4     5800 ± 648      10.9 ± 0.12            5.4 ± 1.2              8.3 ± 0.1         4.5 ± 0.0
G013.0+0.0   13.043    0.0       29.6 ± 0.4     7647 ± 1046     3.8 ± 0.03             5.1 ± 1.4              7.9 ± 0.1         5.7 ± 1.2
G013.9+0.0   13.913    0.0       17.9 ± 0.6     3501 ± 170      7.2 ± 0.15             9.4 ± 0.9              7.9 ± 0.0         4.7 ± 0.0
G014.8+0.0   14.783    0.0       21.6 ± 0.7     10169 ± 1904    7.2 ± 0.13             6.3 ± 2.5              8.1 ± 0.2         4.8 ± 0.0
G016.5+0.0   16.522    0.0       25.9 ± 0.2     5716 ± 1301     4.5 ± 0.02             3.4 ± 1.6              8.1 ± 0.2         5.2 ± 0.6
G020.0+0.0   20.000    0.0       12.5 ± 0.2     6479 ± 1706     4.7 ± 0.03             6.9 ± 3.8              7.8 ± 0.2         3.9 ± 0.6
G020.9+0.0   20.870    0.0       25.0 ± 0.7     6335 ± 566      6.5 ± 0.11             7.2 ± 1.3              8.0 ± 0.1         4.5 ± 0.0
G021.7+0.0   21.739    0.0       29.2 ± 0.7     12850 ± 1891    9.2 ± 0.13             6.9 ± 2.2              8.1 ± 0.1         6.3 ± 0.0
G023.5+0.0   23.478    0.0       26.0 ± 0.2     5949 ± 339      17.7 ± 0.11            12.1 ± 1.4             8.2 ± 0.1         4.1 ± 0.0
G024.3+0.0   24.348    0.0       18.1 ± 0.3     5744 ± 276      14.5 ± 0.13            11.3 ± 1.1             8.1 ± 0.0         3.8 ± 0.3
G025.2+0.0   25.217    0.0       12.8 ± 0.4     9093 ± 3292     14.9 ± 0.18            11.9 ± 9.2             8.1 ± 0.3         4.4 ± 0.6
G026.1+0.0   26.087    0.0       38.8 ± 0.4     5630 ± 198      14.9 ± 0.12            7.8 ± 0.5              8.3 ± 0.0         3.9 ± 0.0
G027.0+0.0   26.956    0.0       22.0 ± 0.3     10085 ± 1915    6.7 ± 0.04             6.6 ± 2.7              8.0 ± 0.2         4.1 ± 0.0
G028.7+0.0   28.696    0.0       37.6 ± 0.6     8663 ± 612      10.6 ± 0.11            10.9 ± 1.6             8.0 ± 0.1         4.0 ± 0.0
G030.0+0.0   30.000    0.0       31.8 ± 0.9     13227 ± 2184    8.6 ± 0.14             11.5 ± 4.0             7.9 ± 0.2         4.1 ± 0.0
G031.3+0.0   31.277    0.0       25.8 ± 0.4     5548 ± 170      12.1 ± 0.12            11.7 ± 0.7             8.0 ± 0.0         4.8 ± 0.6
G036.4+0.0   36.383    0.0       20.8 ± 1.7     7445 ± 1652     3.3 ± 0.16             3.2 ± 1.5              8.0 ± 0.2         5.6 ± 0.0
G037.7+0.0   37.660    0.0       17.1 ± 0.5     5299 ± 214      9.6 ± 0.16             9.8 ± 0.8              8.0 ± 0.0         5.5 ± 0.2
G041.5+0.0   41.489    0.0       46.8 ± 0.5     10619 ± 1481    2.2 ± 0.02             2.8 ± 0.8              7.9 ± 0.1         7.3 ± 0.0
G044.0+0.0   44.043    0.0       13.8 ± 0.3     4308 ± 929      3.4 ± 0.05             3.4 ± 1.5              8.0 ± 0.2         6.5 ± 0.6
G045.3+0.0   45.319    0.0       29.3 ± 1.1     6955 ± 1313     0.9 ± 0.02             1.8 ± 0.7              7.7 ± 0.2         6.1 ± 0.0
G049.1+0.0   49.149    0.0       21.7 ± 1.7     13245 ± 2858    3.3 ± 0.13             6.5 ± 3.0              7.7 ± 0.2         6.2 ± 0.0
G054.3+0.0   54.255    0.0       46.0 ± 1.0     10888 ± 2869    1.1 ± 0.02             1.6 ± 0.9              7.8 ± 0.2         6.6 ± 0.0

Table: Derived Nitrogen Abundances in the Galactic Plane for l < 0

LOS          l [deg]   b [deg]   n_e [cm^-3]    T_e [K]         N(N^+) [10^17 cm^-2]   N(H^+) [10^20 cm^-2]   12+log(N^+/H^+)   R_gal [kpc]
G302.6+0.0   302.553   0.0       12.5 ± 0.2     4015 ± 481      5.3 ± 0.04             7.9 ± 2.0              7.8 ± 0.1         7.1 ± 0.0
G305.1+0.0   305.106   0.0       4.9 ± 0.2      4321 ± 162      31.5 ± 0.34            55.9 ± 4.9             7.8 ± 0.0         7.0 ± 0.0
G306.4+0.0   306.383   0.0       9.9 ± 0.4      8090 ± 1043     3.9 ± 0.06             7.8 ± 2.2              7.7 ± 0.1         6.6 ± 0.0
G307.7+0.0   307.660   0.0       15.4 ± 0.9     8658 ± 309      5.9 ± 0.17             8.7 ± 0.9              7.8 ± 0.0         6.5 ± 0.0
G310.2+0.0   310.213   0.0       62.3 ± 2.1     8792 ± 539      0.6 ± 0.02             1.9 ± 0.3              7.5 ± 0.1         7.4 ± 1.1
G314.0+0.0   314.043   0.0       26.4 ± 0.3     8349 ± 706      2.5 ± 0.02             4.6 ± 0.8              7.7 ± 0.1         6.2 ± 0.0
G316.6+0.0   316.596   0.0       21.0 ± 0.6     8844 ± 372      9.8 ± 0.16             14.5 ± 1.4             7.8 ± 0.0         7.0 ± 0.8
G317.9+0.0   317.872   0.0       25.3 ± 0.9     9234 ± 477      5.9 ± 0.12             11.9 ± 1.4             7.7 ± 0.1         8.0 ± 1.4
G326.8+0.0   326.808   0.0       26.5 ± 0.4     5199 ± 317      11.2 ± 0.12            9.0 ± 1.2              8.1 ± 0.1         5.8 ± 0.0
G330.0+0.0   330.000   0.0       39.0 ± 1.0     13446 ± 6062    1.6 ± 0.02             4.2 ± 4.1              7.6 ± 0.4         5.2 ± 0.5
G331.7+0.0   331.739   0.0       29.5 ± 1.0     16634 ± 4501    7.6 ± 0.14             9.2 ± 5.3              7.9 ± 0.2         5.7 ± 0.0
G332.6+0.0   332.609   0.0       25.6 ± 0.9     3941 ± 571      4.8 ± 0.11             6.5 ± 2.0              7.9 ± 0.1         5.1 ± 0.6
G333.5+0.0   333.478   0.0       36.8 ± 0.6     7703 ± 991      8.7 ± 0.09             8.8 ± 2.4              8.0 ± 0.1         5.3 ± 0.6
G336.1+0.0   336.087   0.0       28.5 ± 0.3     8161 ± 1282     18.5 ± 0.11            14.3 ± 4.7             8.1 ± 0.1         3.9 ± 0.6
G337.0+0.0   336.957   0.0       31.9 ± 0.3     15478 ± 2066    23.0 ± 0.14            32.6 ± 9.0             7.8 ± 0.1         3.8 ± 0.6
G337.8+0.0   337.826   0.0       22.6 ± 10.8    14731 ± 5334    16.7 ± 5.64            13.3 ± 6.2             8.1 ± 0.2         3.1 ± 0.0
G338.7+0.0   338.696   0.0       28.0 ± 0.3     6405 ± 906      3.8 ± 0.03             6.2 ± 1.8              7.8 ± 0.1         5.4 ± 0.0
G342.2+0.0   342.174   0.0       11.0 ± 0.6     8643 ± 1176     11.5 ± 1.98            15.1 ± 2.9             7.9 ± 0.1         3.0 ± 0.0
G343.9+0.0   343.913   0.0       21.7 ± 11.1    16864 ± 5699    4.3 ± 2.09             3.6 ± 2.3              8.1 ± 0.2         3.0 ± 0.0
G345.7+0.0   345.652   0.0       25.2 ± 10.2    15869 ± 6014    10.5 ± 2.68            21.7 ± 12.5            7.7 ± 0.3         2.0 ± 0.0
G346.5+0.0   346.522   0.0       30.4 ± 1.5     8642 ± 1394     3.4 ± 0.11             7.5 ± 2.6              7.7 ± 0.1         5.6 ± 1.3
G349.1+0.0   349.130   0.0       52.2 ± 0.5     5674 ± 207      13.1 ± 0.09            19.7 ± 1.3             7.8 ± 0.0         2.8 ± 0.0
G350.9+0.0   350.870   0.0       14.4 ± 1.3     5845 ± 1572     3.1 ± 0.14             6.5 ± 3.8              7.7 ± 0.2         4.4 ± 0.0
G353.5+0.0   353.478   0.0       17.0 ± 5.1     12417 ± 3271    7.0 ± 1.61             6.9 ± 2.2              8.0 ± 0.1         3.4 ± 0.0
G354.3+0.0   354.348   0.0       31.8 ± 2.1     8488 ± 1212     2.7 ± 0.11             4.0 ± 1.2              7.8 ± 0.1         2.9 ± 0.0
G355.2+0.0   355.217   0.0       87.8 ± 3.0     12528 ± 1706    2.7 ± 0.07             4.7 ± 1.2              7.8 ± 0.1         4.2 ± 0.0
G356.1+0.0   356.087   0.0       35.3 ± 0.3     10465 ± 2529    2.8 ± 0.01             3.1 ± 1.5              8.0 ± 0.2         4.2 ± 0.0
G359.5+0.0   359.500   0.0       47.3 ± 21.4    20202 ± 2209    13.5 ± 3.96            19.9 ± 6.2             7.8 ± 0.1         0.1 ± 0.0

§.§ The distribution of Nitrogen abundances in the disk of the Milky Way.

In Figure <ref>, we show the distribution of Nitrogen abundances as a function of Galactocentric distance derived from our sample in the range from 0 to 8 kpc in the inner Galaxy. We used only data with RRL emission above a SNR of 5, and with a radio continuum brightness temperature, used to determine the electron temperature, with SNR above 10. These criteria result in a sample of 41 positions for which we consider the data to be of high quality. We also show, in dark green, the Nitrogen abundance derived in Sgr A and discussed in Section <ref>. We also include a sample of Nitrogen abundances derived in 42 H ii regions presented by <cit.>, using optical spectral lines, sampling the Galactic plane from 4 to 17 kpc. We find that the Nitrogen abundances derived here and those derived with optical spectral line observations are in excellent agreement over the Galactocentric distance range where they overlap.
Taken together, these data sets represent a continuous sample of the Nitrogen abundance over the disk of the Milky Way from 0 to 17 kpc. Both our Nitrogen abundances and those from <cit.> are determined in low ionization regions that have a negligible fraction of doubly ionized Nitrogen. Therefore we can assume for both data sets that N/H≈N^+/H^+. The agreement between our Nitrogen abundances and those derived using optical data is in contrast to the discrepancies between FIR- and optically-derived abundances and N/O ratios reported by <cit.> and <cit.>. <cit.> compared the optical-based observations of the N/H abundance gradient with those derived using mid– and far–infrared observations by <cit.>, finding that the latter show a significantly steeper gradient and larger dispersion compared to those derived with optical lines. A possible explanation for this discrepancy is that the <cit.> abundance determination is based on observations of spectral lines tracing high ionization states, such as the [N iii] and [O iii] lines, and therefore relies on a correction factor for lower ionization gas that can introduce significant uncertainties in the measurements.

In the left panel of Figure <ref> we show the ionized Nitrogen abundances as a function of Galactocentric distance separated into the different cases in which they were derived, with single velocity components shown in red, double components in black, triple components in green, and LOS associated with the central molecular zone shown in yellow. In the right panel of Figure <ref> we show the ionized Nitrogen abundance as a function of Galactocentric distance separated into those derived for LOS with l ≥ 0 (black) and l<0 (blue). We find that the ionized Nitrogen abundances for LOS with l ≥ 0 are on average 40% larger than for l<0. This difference coincides with a similar asymmetry in the star formation rate distribution in the Milky Way <cit.>, suggesting that metal production is enhanced in the region with l ≥ 0 compared to that with l<0.

We find that the Nitrogen abundance in the Milky Way decreases from about 4 kpc out to 17 kpc, while having a flat distribution from 4 kpc to the Galactic center. Observations of Cepheids and red giants also show that different elements, including iron (Fe), have abundances close to the Galactic center that are lower than predicted by extrapolating the abundance distribution at larger radii <cit.>, in agreement with our results. We did not attempt to fit our data in the 4 kpc<R_gal<8 kpc range, as this range is too narrow to enable an accurate representation of N/H across the Milky Way. Instead, we combined our data set with that from <cit.> to obtain a fit to the distribution of Nitrogen abundance in the Galactic plane between 4 kpc and 17 kpc. We used the orthogonal bi-variate error and intrinsic scatter method <cit.>, including a bootstrap resampling error analysis, resulting in

12+log(N^+/H^+) = (8.30±0.04) - (0.068±0.005) R_gal.

The slope of our fit is consistent within its uncertainties with that derived by <cit.>, -0.057 dex kpc^-1, using optical lines, but is shallower than that derived by <cit.>, -0.076 dex kpc^-1, using the same method presented here in a smaller sample of 11 LOS, and that derived by <cit.>, -0.085 dex kpc^-1. The slope of our fit is steeper than that derived for O/H by <cit.>, -0.042 dex kpc^-1. This difference can be understood as the result of a larger number of older intermediate-mass stars, which contribute additional Nitrogen to the ISM, being present in the inner Galaxy.
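A fit of this kind could be reproduced, in spirit, with an orthogonal distance regression plus bootstrap resampling, as in the short Python sketch below (using scipy.odr). It is a simplified stand-in for the orthogonal bi-variate error and intrinsic scatter method used in the paper (the intrinsic-scatter term is omitted), and the input arrays are placeholders for the measured abundances and distances.

```python
import numpy as np
from scipy import odr

def fit_gradient(rgal, abund, rgal_err, abund_err, n_boot=1000, seed=0):
    """Fit 12+log(N/H) = a + b * R_gal with orthogonal distance regression,
    and bootstrap the sample to estimate the uncertainties on (a, b).
    All inputs are 1-D numpy arrays of equal length."""
    rng = np.random.default_rng(seed)
    model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)

    def one_fit(idx):
        data = odr.RealData(rgal[idx], abund[idx], sx=rgal_err[idx], sy=abund_err[idx])
        return odr.ODR(data, model, beta0=[8.3, -0.07]).run().beta

    best = one_fit(np.arange(len(rgal)))
    boots = np.array([one_fit(rng.integers(0, len(rgal), len(rgal)))
                      for _ in range(n_boot)])
    return best, boots.std(axis=0)

# With the combined FIR + optical sample, a fit of this form should land close to
# the values quoted above, 12+log(N/H) = (8.30 +/- 0.04) - (0.068 +/- 0.005) R_gal.
```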
The typical dispersion from the fit for our data set for R_gal>4 kpc is 0.16 dex, and 0.14 dex when also considering the data set from <cit.>. These dispersions are somewhat larger than those reported by <cit.>, of 0.1 dex, but are consistent with the suggestion from this work that azimuthal variations are not significant for Nitrogen. Note that in <cit.> we assumed electron temperatures from <cit.> instead of deriving them using radio continuum data as is done here. The <cit.> electron temperature gradient was derived from H ii regions between 5 kpc and 8 kpc, but we extrapolated the fit down to the Galactic center. Data presented by <cit.> show electron temperatures in the Galactic center that are significantly larger than predicted by extrapolating the fit from electron temperatures at larger Galactocentric distances inward to R_gal=0 kpc. This underestimation of the electron temperature at the Galactic center resulted in an overestimation of the Nitrogen abundances in this region as found by <cit.>.

§.§.§ Comparison with chemical evolution models of the Milky Way.

Metallicity gradients in the disks of galaxies are formed when star formation is more efficient in their inner parts compared with their outer parts <cit.>. Such a gradient in the star formation efficiency can be produced by "inside–out" galaxy formation, in which the disks of galaxies form by gas accretion at a rate that is faster in the inner Galaxy than in the outer Galaxy <cit.>. In these models, the measured slope of the gradient constrains the galaxy accretion rate. However, other mechanisms can also produce and/or steepen a metallicity gradient, such as the presence of a density threshold for star formation, a star formation efficiency that decreases with Galactocentric distance, and inward radial flows <cit.>.

<cit.> presented a set of chemical evolution models of the Milky Way that assume a two–infall model <cit.>, in which the thick and thin disks were formed in two accretion episodes separated by ∼3.25 Gyr, to study the abundance distribution of several elements as a function of Galactocentric distance. These models include the effects of radially variable star formation efficiency (SFE) and radial flows, in addition to inside–out growth, in the determination of the radial distribution of element abundances in the Milky Way. The mechanism by which a variable SFE induces and/or steepens an abundance gradient is that in the innermost regions of galaxies the star formation rate is enhanced compared to the outer regions, leading to an increased chemical enrichment. Additionally, radial flows can contribute to this effect: as gas moves toward the inner parts of the galaxy, the star formation rate increases, resulting in more significant metal production closer to the center compared to the outer areas.

In Figure <ref> we compare the observed Nitrogen abundance distribution with the models presented by <cit.> for Nitrogen in the Milky Way. In both panels of Figure <ref> we show a model that includes inside–out growth only (dark grey), which shows a shallower distribution compared with the observations, suggesting that this mechanism alone is insufficient to reproduce the observed Nitrogen abundance gradient. In the left panel of Figure <ref> we show models with variable star formation efficiency, labeled B, F, and G, and in the right panel of Figure <ref> we show models with radial flows, labeled C, D, and E in <cit.>. We refer to <cit.> for the specific parameters assumed for these models.
As we can see, in agreement with the conclusion in <cit.>, the variable SFE model F and the radial flow model E agree best with the observed Nitrogen abundance gradients. Thus, in addition to inside-out growth, variable SFE and/or radial flows are necessary to explain the observed Nitrogen abundance gradient for R_gal>4 kpc.

Note that the absolute values of the predicted Nitrogen abundances in the <cit.> models are on average a factor of 2.6 (0.41 dex) larger than the observed values. In Figure <ref> we adjusted the model-predicted absolute Nitrogen abundances by this factor so that their average value coincides with the Nitrogen abundance predicted by the fit in Equation (<ref>) at 4 kpc. The <cit.> models assume stellar yields for massive stars from <cit.>, and for low–intermediate mass stars from <cit.>, and uncertainties in stellar yield calculations might result in an overestimation of the production of Nitrogen. In addition, based on optical spectroscopy of stars showing the presence of two populations of massive stars, of higher and lower metallicity, in the disk of the Milky Way, it has recently been hypothesized that an accretion episode of low-metallicity gas onto the Milky Way disk in the last ∼2.7 Gyr (in addition to the two–infall model) has resulted in a general ISM metal impoverishment <cit.>. Given that the Nitrogen abundances derived here and those by <cit.> trace the recent production of Nitrogen in the Milky Way, our observations would be consistent with this hypothesis.

In Figure <ref>, we see a peak in the N/H abundance at R_gal≃4 kpc, which is associated with the outermost part of the Galactic Bar <cit.>. A bar potential can efficiently redistribute angular momentum and mass in galaxies, and the radial flows produced by such a potential are expected to mix elemental abundances, flattening any abundance gradient over time <cit.>. The star formation rate in the Milky Way peaks at R_gal≈ 4 kpc and does not continue to rise for smaller Galactocentric radii <cit.>. Therefore the formation of new elements in the inner 4 kpc must be less efficient. Note that the observed reduction in the star formation rate in the inner Galaxy is likely due to a reduction of the number of star forming regions per unit area compared with larger Galactocentric distances.

Chemical evolution models predict elemental abundances to reach an equilibrium value in regions where the production of new elements is balanced by metal consumption by star formation and expulsion by outflows <cit.>. The time scales for reaching the equilibrium abundance are different for each element, with elements such as Oxygen and Nitrogen reaching their equilibrium on the timescale at which gas is being depleted by star formation and outflows, while Fe reaches equilibrium on this timescale or that of SNe Ia enrichment (∼1.5 Gyr), whichever is longer <cit.>. <cit.> observed a flattening of the O/H abundance in the center of galaxies with stellar masses similar to that of the Milky Way, but observed that the O/N ratio continues to rise toward the center of these galaxies. They interpreted this result as Oxygen reaching an equilibrium abundance, while Nitrogen is continuously being produced by secondary nucleosynthetic production in longer–lived, intermediate mass stars. We do not observe this effect in the center of the Milky Way, suggesting that the timescale for a Nitrogen abundance equilibrium has not been reached in the inner Milky Way.
Note, however, that the star formation rate in the Milky Way peaks at R_gal=4 kpc <cit.>, and thus the high star formation rate required for the equilibrium hypothesis is not reached at smaller radii. We therefore favor radial flows induced by the stellar bar as the most likely mechanism for the flattening of the Nitrogen abundance in the innermost parts of the Galaxy.

§ SUMMARY

We presented a Galactic plane survey of Hydrogen Radio Recombination Lines (RRLs) observed with the NASA DSS–43 70m antenna and the Green Bank Telescope. We observed 108 lines–of–sight covering a range between -135° < l < 60° and b=0° in the Galactic plane. We combined these observations with observations of the [N ii] 122μm and 205μm lines taken with the Herschel space observatory and of [N iii] 57μm taken with SOFIA/FIFI–LS, and radio continuum data, to characterize the distribution of the Nitrogen abundance across the disk of the Milky Way. In a sample of 41 LOS, where we have a high enough signal–to–noise ratio, we studied the distribution of the ionized Nitrogen abundance relative to ionized Hydrogen covering galactocentric distances between 0 and 8 kpc. Combined with existing determinations of the N/H abundance in the solar neighborhood and outer Galaxy, we are able to study for the first time the distribution of this quantity in the inner 16 kpc of the Milky Way. The results of this work can be summarized as follows:

* We find a Nitrogen abundance gradient extending over Galactocentric distances between 4 and 17 kpc in the Galactic plane, while for 0 to 4 kpc we find a flat N/H distribution.

* The gradient observed at Galactocentric distances larger than 4 kpc supports inside–out galaxy growth, with the additional steepening resulting from variable star formation efficiency and/or radial flows in the Galactic disk.

* The observed flattening of the Nitrogen abundance distribution in the inner 4 kpc, which coincides with the start of the Galactic bar, can be associated with radial flows induced by the bar potential.

* We studied the ionization structure of a sub–sample of 8 LOS for which we obtained [N iii] 57μm observations, and [O iii] 88μm and 52μm observations in Sagittarius A. We find that most of the Nitrogen in our sample is likely singly ionized, which is consistent with their locations being in the low ionization outskirts of H ii regions, and that any highly ionized Nitrogen comes from compact, high electron density H ii regions.

Our observations demonstrate the power of using far–infrared spectral lines and radio recombination lines for an unobscured study of the ionization structure and the Nitrogen abundance distribution in galaxies. Far–infrared observatories, such as the GUSTO and ASTHROS balloons, and a future NASA far–infrared probe mission, together with ground based radio observatories, such as the GBT and NASA DSN antennas, can provide important insights into the chemical evolution of galaxies, which in turn provides important information for models of galaxy evolution.

This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The work in this publication was supported by NASA's Astrophysics Data Analysis Program (ADAP) under grant No. 80NM0018F0610,18-2ADAP18-0196. We would like to acknowledge and express our gratitude to Javier Goicoechea for kindly providing the Herschel/PACS data in Sagittarius A.
We also extend our sincere appreciation to Karla Arellano-Cordova for contributing the optical Nitrogen abundance data and to Marco Palla for supplying the model Nitrogen abundance distributions, and for their insightful discussions. Their inputs and contributions have greatly enriched and strengthened our work. We also thank the anonymous referee for comments that significantly improved the paper. This project made use of the Smithsonian Astrophysical Observatory 4 × 32k-channel spectrometer (SAO32k) and the TAMS observatoryCtrl observing system, which were developed by L. Greenhill (Center for Astrophysics), I. Zaw (New York University Abu Dhabi), D. Price, and D. Shaff, with funding from SAO and the NYUAD Research Enhancement Fund and in-kind support from the Xilinx University Program. We thank West Virginia University for its financial support of GBT operations, which enabled the observations for this project. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. LDA and ML are supported by NSF grant AST1516021 to LDA. We thank the staff of the SOFIA Science Center for their help. U.S. Government sponsorship acknowledged.

Software: TMBIDL <cit.>, GILDAS/CLASS <cit.>. Facilities: GBT, DSN/DSS–43, SOFIA, Herschel.

§ DEPENDENCE OF NITROGEN ABUNDANCE DISTRIBUTION ON ELECTRON TEMPERATURES DERIVED FROM DIFFERENT RADIO CONTINUUM SURVEYS.

In Figure <ref> we show the N^+/H^+ distribution as a function of Galactocentric distance derived using electron temperatures obtained from each of the radio continuum surveys listed in Table <ref>. In general, most LOS with l<0 are covered by the SGPS and/or the Parkes 6 cm survey. For l≥ 0, all sources are covered by the Effelsberg/GLOSTAR survey at 4.88 GHz and 6.82 GHz and/or the Nobeyama 10.3 GHz survey. A subsample is also covered by the VGPS survey at 1.4 GHz. Each panel in Figure <ref> is labeled with the combination of surveys used to cover the entire range of Galactic longitudes. Note that some LOS do not appear in all panels because they do not meet the SNR>10 criterion for the free–free brightness temperature. We also show the N^+/H^+ distribution resulting from using the averaged free–free continuum temperature as described in Section <ref>. We find that the N^+/H^+ distribution is not significantly affected by the choice of radio continuum survey used to derive electron temperatures.

§ COMPARISON WITH BALSER ET AL. 2015 ELECTRON TEMPERATURES.

The uncertainties derived for the electron temperatures described in Section <ref> are based on the uncertainties of the measurements of the continuum and RRL emission, and therefore do not account for calibration uncertainties, which can vary for the different frequency bands used for the continuum emission, nor for the accuracy of our correction for synchrotron emission. To assess the sensitivity of the derived electron temperatures to these uncertainties, we compared electron temperatures derived with our methodology against those derived in a sample of H ii regions by <cit.>, in which calibration uncertainties and the synchrotron contribution are carefully assessed. <cit.> derived electron temperatures in a sample of 21 H ii regions in the Galactic plane in both RRL and continuum emission at 8.7 GHz using the Green Bank Telescope. This sample was selected to have a continuum intensity signal-to-noise ratio greater than 10 and to be isolated enough so that spatial and spectral blending can be avoided.
Because the RRL and continuum observations are done in the same band, with the same telescope, these measurements do not suffer from calibration uncertainties. Also, because the spatial structure in the continuum is well isolated, the free–free and synchrotron emission can be separated by fitting the compact free–free emission with a Gaussian function and the more extended synchrotron emission with a low-order polynomial baseline. Therefore a comparison between the electron temperatures derived by <cit.> and those derived using our methodology can be used to assess the impact of calibration uncertainties and free–free/synchrotron separation on the electron temperatures derived in our sample. There are, however, a few differences between our sources and those used by <cit.>. Their sources are compact H ii regions, while our sources are associated with the outskirts of H ii regions. As discussed in Section <ref>, the beam filling effects in our Galactic plane sample are small, but they are more significant for the <cit.> sample. Also, the electron densities in this sample are likely to be larger than the typical densities in our sample. For this comparison, we do not apply a non–LTE correction, as we did for our sample.

In the left panel of Figure <ref>, we show a comparison between the electron temperatures derived from the RRL and continuum intensities of <cit.> and those derived using our methodology with the RRL intensities from <cit.> but with the continuum intensities extracted from the Effelsberg 100 m GLOSTAR data at 6.82 GHz. The angular resolution of the Effelsberg data set is 106'', which is somewhat larger than the 87'' resolution of the GBT data. We find good agreement between the electron temperatures using the two methods. The scatter from the one–to–one correlation between the electron temperatures derived from these two methods is on average 18%. This difference can arise from the different angular resolution of the data, which can be important due to the compact H ii emission in this sample, from the uncertainties in the relative calibration between the Effelsberg and GBT data sets, and/or from the separation between free-free and synchrotron emission, with the latter estimated to be on average 19% of the total continuum emission at 6.82 GHz.

To better test our prediction of the contribution from synchrotron emission in our method, we use 1.4 GHz data, where synchrotron emission is expected to be more significant. We derive electron temperatures using the VLA Galactic plane survey data set, which covers the Galactic latitude range of the <cit.> sample. This comparison also helps us assess our derivation of electron temperatures for sources in the southern hemisphere, where we used the SGPS data set at 1.4 GHz. We convolved the 60'' resolution VGPS survey to 87'' to match the angular resolution of the GBT data set. In the right panel of Figure <ref> we show a comparison between the electron temperatures derived using our method and those derived from the <cit.> data set. We find that the electron temperatures span a similar range, with scatter around the one-to-one correlation. In this case, the difference between electron temperatures derived using our method and those from <cit.> is on average 33%. As before, these differences can arise from uncertainties in the relative calibrations between the VGPS and GBT data sets and/or from the separation between free-free and synchrotron emission, which is on average 38% of the continuum emission at 1.4 GHz in this sample.
In summary, the comparison of electron temperatures derived with our method from 1.4 GHz and 6.82 GHz observations shows that additional uncertainties of between 20% and 30% can arise from uncertainties in the calibration of the various continuum bands and in our correction for the contribution of synchrotron emission. Note that these uncertainties are reduced by using a combination of several radio continuum data sets in the derivation of electron temperatures, as discussed in Section <ref>.
http://arxiv.org/abs/2407.13191v1
20240718060138
Water drop impact on thin viscous oil layers
[ "Surjit Bharatsingh", "Piyush Sahu", "Gaurav Salwan", "Dileep Mampallil" ]
physics.flu-dyn
[ "physics.flu-dyn", "cond-mat.soft" ]
dileep.mampallil@iisertirupati.ac.in Indian Institute of Science Education & Research Tirupati, Yerpedu P. O. PIN 517619, Tirupati, AP, INDIA § ABSTRACT We conducted experimental investigations into the short-term and long-term impact dynamics of pure water drops on silicone oil-coated surfaces. Our observations revealed distinct phases: initially, rapid maximal spreading followed by retraction; subsequent oscillations characterized by a relatively stable contact angle; and a final phase involving the gradual spreading of the drop on the underlying solid surface due to the rupturing of the oil layer. The maximal spreading radius follows We^1/4 scaling, where We is the Weber number, independent of the viscosity of the underlying oil layer. By introducing fluorescent tracer particles in the oil, we noted that the drop expanded over the oil layer during the initial spreading without displacing oil. Subsequently, as the oscillations dampen, the drop ruptures the oil layer, initiating a dewetting process, and spreads over the solid surface, leaving a tiny oil droplet under the water drop. The onset time for this spreading, which occurs upon completion of the dewetting process, increases nonlinearly with the oil layer thickness as ∼ d_o^2, which we attribute to the dewetting dynamics. Our findings unveil intriguing dynamics of water drops impacting on oil-coated surfaces. Water drop impact on thin viscous oil layers Dileep Mampallil July 22, 2024 ============================================= Keywords: Drop impact, oil film ^† Equal contribution

§ INTRODUCTION

Fundamental understanding of drop impact <cit.> on various surfaces such as pre-wetted <cit.>, superhydrophobic <cit.>, granular <cit.> and liquid <cit.> surfaces, and of the subsequent spreading <cit.>, has implications in various fields such as inkjet printing <cit.>, drop-impact printing <cit.>, cooling systems <cit.>, and spraying of pesticides and medicines <cit.>, to list a few. These applications also involve the impact of drops of complex fluids <cit.>, such as blood drops, where the resulting patterns have implications in forensics <cit.>. Thus, understanding the fundamental dynamics of drop impact is important in many processes.

The dynamics of a liquid drop after impacting a surface depends upon the drop size, impact velocity, properties of the liquid, and the physical nature of the surface, such as wettability and roughness <cit.>. The impact dynamics demonstrate spreading, rebounding, splashing, and break-up depending upon the liquid and surface properties and the impact energy <cit.>. Upon impact, the inertia expands the liquid on the surface, which is balanced by viscous and surface tension forces <cit.>. Drop impact on hydrophobic and superhydrophobic surfaces <cit.> shows bouncing of the drop with a short contact time that depends upon the inertia, surface tension, and liquid-surface interactions <cit.>. A relative temperature difference between the impacting liquid and the solid can also lead to non-sticking effects <cit.>. The ambient medium (usually air) also influences drop impact dynamics on solid surfaces. The local increase in pressure between the impacting face of the drop and the substrate surface deforms the impacting face, eventually entrapping a thin layer of air upon impact <cit.>. Drop impact on granular media <cit.> can demonstrate the effects of this air entrapment as patterns on the granular layer and also microbubble generation <cit.>.
Impact of drops on soft and viscoelastic surfaces <cit.>, and even on immiscible liquids <cit.>, has also attracted scientific interest. The spreading and retraction of drops on soft surfaces strongly depend upon the viscoelastic properties of the surface <cit.>. On such surfaces, the contact angle hysteresis can dissipate much of the impact energy, especially during retraction. In the case of drop impact on liquid layers, the competition between inertial, capillary, and viscous effects determines the dynamics after impact. An example is water drop impact on oil layers <cit.> and oil-infused surfaces <cit.>. These studies show that the impact energy and oil viscosity determine the dynamics of both the drop and the oil <cit.>. Further, Lee et al. showed that the viscosity of the oil film does not have a significant effect on the maximal spreading radius of impacted water drops <cit.>. However, at lower oil viscosity, the bouncing of the drop is enhanced <cit.>. Crown formation of the impacted drop is also vivid at relatively high impact velocities <cit.>.

Previous studies indicate that the interaction between water drops impacting oil layers, or, more broadly, the impact of drops on liquid layers, is a complex process that requires further exploration. In this research, we experimentally study water drops impacting a highly viscous silicone oil layer coated on various substrates and demonstrate several previously unreported dynamics. When the drop impacts, it rapidly spreads without disrupting the oil layer. However, during the subsequent retraction phase, the contact line drags the oil layer inward and thus the drop dissipates the impact energy. Additionally, the drop exhibits damping oscillations and slowly penetrates the oil layer. We find that the time the drop takes for this penetration increases with the oil layer thickness and impact velocity, contrary to the expectation that the drop ruptures the oil layer immediately upon impact. We systematically investigate the different dynamics exhibited by the impacted drop and draw comparisons with established scaling laws.

§ EXPERIMENTS

We spin-coated silicone oil on glass slides sequentially cleaned with isopropyl alcohol and water followed by air drying. The spin coating of silicone oil was performed at different rpm ranging from 500 to 5000. Different rpm resulted in different thicknesses of the oil, depending upon the oil viscosity (μ_o), here 370 and 10000 cSt (Sigma Aldrich). The oil thickness was obtained by comparing the weight of the glass slide in a microbalance (Quintix224-10IN) before and after the uniform coating with the oil. The thickness was obtained as d_o = δm/(ρ_o A), where δm is the oil mass, ρ_o is the density of oil, and A is the area of the glass slide. An average thickness was obtained from five different samples. The relationship between the spin rpm and the oil layer thickness for the different oils is shown in Supplementary Fig. S1. We compared the measured oil thicknesses to the ones obtained using the Meyerhofer model <cit.>, which gives the spin-coated thickness as (3μ_o/(4ρ_o ω^2 T))^1/2, where ω is the angular velocity of spinning and T is the duration of the spinning. The calculated values were higher than the measured ones by up to a factor of two, especially at lower rpm. We used the experimental values of the thickness for further analyses. We coated oil on different substrates such as glass, indium tin oxide (ITO), and polydimethylsiloxane (PDMS).
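As a quick numerical cross-check of the two thickness estimates described above, the Python sketch below evaluates the weighing-based thickness d_o = δm/(ρ_o A) and the long-time Meyerhofer estimate. The example spin parameters (2000 rpm for 60 s) are hypothetical and only illustrate the order of magnitude.

```python
import numpy as np

def thickness_from_mass(delta_m, rho_oil, area):
    """Oil layer thickness from the weighed oil mass: d_o = delta_m / (rho_o * A).
    SI units: kg, kg/m^3, m^2 -> metres."""
    return delta_m / (rho_oil * area)

def thickness_meyerhofer(nu_cSt, rpm, t_spin):
    """Long-time Meyerhofer estimate d = (3*mu / (4*rho*omega^2*T))^(1/2).
    The kinematic viscosity nu is given in cSt, so mu/rho = nu * 1e-6 m^2/s."""
    omega = 2.0 * np.pi * rpm / 60.0
    return np.sqrt(3.0 * nu_cSt * 1.0e-6 / (4.0 * omega**2 * t_spin))

# hypothetical example: 370 cSt oil spun at 2000 rpm for 60 s, thickness in microns
print(thickness_meyerhofer(370.0, 2000.0, 60.0) * 1e6)
```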
On these bare surfaces (without an oil coat), water drops have contact angles of 22 ± 2^∘, 60 ± 2^∘, and 100 ± 2^∘, respectively. Water drops of volume 6.5 μl (with 5% deviation), having radius R = 1.1 mm, were released from a Teflon tubing connected to a syringe pump. The impact height varied from 0 to 0.95 m. The corresponding impact velocity varied between 0 and 4.3 m/s. Upon impact, the dynamics of the drop was imaged using a high-speed camera (Phantom MICRO LAB110) at frame rates up to 5000 frames per second. The images were analyzed using ImageJ software and a homemade Matlab code to obtain the contact angle and base radius as a function of time.

§ RESULTS

To begin, we provide an overview of the observations regarding the behavior of liquid drops impacting on oil-coated glass slides, as shown in Fig. 1. The dynamics of the drop's base radius and contact angle are quantified in Fig. 2(a) and 2(b), respectively. Upon impact, the drop rapidly spreads to reach its maximum base diameter (D_max) in approximately 4 milliseconds. As the impact velocity increases, the time required to achieve maximum spreading becomes shorter (Fig. 1). Just before the drop retracts, the contact angle decreases to a minimum value in proportion to the impact velocity. Subsequently, the drop enters a phase of retraction followed by damping oscillations while maintaining a relatively constant average contact angle of around 100 degrees in a quasi-equilibrium state. Notably, after about a second, the drop undergoes a slow spreading phase, indicated by an increase in the base radius and a decrease in the contact angle. We will delve into these features of the curves in Fig. 2 in the subsequent discussions.

The maximum spreading diameter: Upon impact, in the absence of splash, the maximum spreading diameter <cit.> is an important parameter in several applications involving drop impact. The maximum spreading is characterized by a factor, β, which is the ratio of the diameter upon maximum spreading (D_max) to that of the initial spherical drop (D = 2R). The value of β results from the balance of kinetic energy, surface energy, and viscous dissipation. Considering these aspects, numerous relationships were derived connecting β to dimensionless numbers such as the Weber number, We = ρ_w D u_i^2/σ_w, and the Reynolds number, Re = ρ_w D u_i/μ_w, where ρ_w is the density of the impacting liquid (water), u_i is its impact velocity, σ_w is the surface tension, and μ_w is the viscosity of the impacting drop. In the viscous regime, where spreading occurs by a balance between the kinetic energy and the viscous dissipation, β∼Re^1/5 was reported <cit.>. When viscous effects are small, a comparison of the initial kinetic energy with the capillary energy has predicted β∼We^1/2 in the asymptotic regime of large Weber numbers. Clanet et al. derived another scaling law, β∼We^1/4, considering the volume conservation between the initial spherical and the final pancake shapes <cit.>. They considered that the final thickness of the pancake drop is proportional to the capillary length a_c = √(σ_w/(ρ_w g')), where g' ∼ u_i^2/D is an increased acceleration experienced by the drop during the spreading. In our experiments, we observe the scaling β∼We^1/4. Although the scaling We^1/4 was first demonstrated for low-viscosity drop impact on strongly water repellent surfaces <cit.>, impact on various surfaces <cit.>, including oil-infused surfaces <cit.>, has shown this scaling.
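For reference, the dimensionless numbers for the parameter range quoted above (R = 1.1 mm, impact velocities up to 4.3 m/s, water properties) can be evaluated as in the sketch below; the β ∼ We^1/4 estimate assumes a prefactor of order unity, which is not fixed by the scaling argument itself.

```python
import numpy as np

rho_w, sigma_w, mu_w = 998.0, 0.072, 1.0e-3   # water at room temperature (SI units)
D = 2 * 1.1e-3                                 # drop diameter [m]

def weber(u):
    return rho_w * D * u**2 / sigma_w

def reynolds(u):
    return rho_w * D * u / mu_w

for u in (1.0, 2.0, 4.3):                      # impact velocities [m/s]
    We, Re = weber(u), reynolds(u)
    beta = We**0.25                            # beta ~ We^(1/4), prefactor of order 1 assumed
    print(f"u = {u:4.1f} m/s  We = {We:7.1f}  Re = {Re:7.0f}  beta ~ {beta:4.1f}")
```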
The impact of viscous drops such as water-glycerol (6 & 51 cSt) and blood drops on steel surfaces has also shown We^1/4 scaling <cit.>. Various other works reported different scaling laws, such as We^1/2, We^1/4, Re^1/4, Re^1/5, or complex relationships suggesting a universal scaling <cit.>. Since correlating the scaling behavior with the experimental parameters is difficult, machine learning has recently been employed to explore the maximal spreading dynamics <cit.>.

The scaling law being independent of the viscosity of the oil layer can be understood by comparing the viscous dissipation, ∫μ (∂ u/∂ z)^2 dV, inside the water drop and the underlying oil layer (here dV is the volume element; see also the inset of Fig. 3). During spreading, the ratio of total viscous dissipation in the oil to that in the water can be written as

E_o/E_w = [ μ_o (u_o/d_o)^2 d_o π R_p^2 ] / [ μ_w ((u-u_o)/d_w)^2 d_w π R_p^2 ],

where R_p is the radius of the pancake drop, d_o is the oil layer thickness, d_w is the pancake drop thickness, u is the fluid velocity at the top of the drop, and u_o is the fluid velocity at the water-oil interface (see inset of Fig. <ref>). At the interface between the water and oil, the viscous stresses must balance. Thus, we can write μ_w (u-u_o)/d_w ≈ μ_o u_o/d_o. Therefore, we can rewrite Eq. <ref> approximately as

E_o/E_w ≈ (μ_w/μ_o)(d_o/d_w) ≪ 1.

Eq. <ref> implies that the dissipation of the impact energy inside the oil is negligible compared to that in the spreading water drop <cit.>. Thus, the viscous properties of the underlying oil layer do not play a crucial role in the spreading dynamics, and we can expect the same scaling laws regardless of the oil layer viscosity.

To understand how the initial fast spreading of the water drop displaces the oil, we dispersed red fluorescent polystyrene particles of diameter 2 μm in the oil (viscosity: 370 cSt). We observed the impact from below the glass slide (Fig. <ref>). Upon impact, the drop spreads without creating any noticeable movement of the particles, indicating that the spreading is mostly peripheral. The side view of the drop also showed an advancing contact angle of more than 90^∘. After the maximum spread, during the retraction, the drop drags the oil radially inward, as manifested by the corresponding movement of the fluorescent particles. It demonstrates that the drop dissipates its energy in the oil mainly during the retraction phase. Other effects may also influence the dynamics of impacted drop spreading. For instance, air film entrapment between the impacting drop and the oil layer <cit.> might cause a difference in the spreading dynamics on low- and high-viscosity oil surfaces. Since the surface of the relatively low-viscosity oil deforms faster, a wide air film can be trapped there, which might be minimal with a high-viscosity oil layer. While the specific mechanisms altering the scaling behavior remain unclear, our measurements suggest that changes in the viscosity of the impacting surface strongly affect the dynamics of spreading and thus the scaling behaviour.

Drop oscillations: Upon retraction, the drop executes damping oscillations. The oscillatory motion of the contact line can be seen in Fig. <ref>. A simple analogy with the spring-mass system can capture the damped oscillation dynamics of the drop <cit.>. Therefore, we can write mÿ + cẏ + ky = 0, where m is the mass, y is the displacement, c is the damping coefficient, and k is the spring constant. The general solution to Eq. <ref> is y(t) = A e^-α t sin(γ t + ϕ), where α = c/2m, γ = √(ω^2-α^2), ω^2 = k/m, and A=y(0).
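The fitting of this damped-oscillation solution to the measured traces, described next, can be sketched with scipy.optimize.curve_fit as below. The synthetic trace is only a stand-in for the measured base-radius signal, and a constant offset y0 (the equilibrium base radius) is added to the model for convenience; the numerical values are illustrative, not experimental.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped(t, A, alpha, gamma, phi, y0):
    """General solution A*exp(-alpha*t)*sin(gamma*t + phi), plus an offset y0
    representing the equilibrium base radius."""
    return A * np.exp(-alpha * t) * np.sin(gamma * t + phi) + y0

# synthetic stand-in for a measured base-radius trace (mm vs s)
t = np.linspace(0.0, 0.4, 400)
y = damped(t, 0.3, 8.0, 90.0, 0.5, 1.6) + 0.01 * np.random.default_rng(1).normal(size=t.size)

# a reasonable initial guess is needed for the nonlinear fit to converge
popt, pcov = curve_fit(damped, t, y, p0=[0.3, 10.0, 90.0, 0.0, 1.6])
A, alpha, gamma, phi, y0 = popt
print(f"alpha = {alpha:.1f} s^-1, gamma = {gamma:.1f} rad/s")
```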
In Fig. 3(a), we plotted the frequency γ and the damping factor α, obtained by fitting Eq. <ref> to the oscillating regions of the curves, as a function of We, i.e., with increasing impact velocity. We see that α∼We^-0.15 and γ∼We^0.15, where the same power with opposite signs is expected as γ = √(ω^2-α^2). An increased degree of damping with increasing We is expected. It is important to note that the retraction phase is responsible for dissipating most of the energy. When the impact velocity is higher, the droplet spreads over a larger area, and the dissipation increases proportionally. Thus, with increasing We, the energy available for oscillations becomes less and the initial amplitude of oscillation decreases. Therefore, the effective decay rate α also decreases with We. The damping factor is related to the ratio of viscous force to inertial force, or the inverse of the Reynolds number. Therefore, in Supplementary Fig. S2, we have plotted the values of α as a function of Re, which indicates a relationship α∼Re^-1/4. A typical damping time can be estimated as t_damp = ρ_w R^2/μ_w, which gives a value of about 1 s with R∼ 1 mm. Using the viscosity of oil (μ_o), we estimate a damping time of 1 ms, which is much shorter than the experimentally observed value of about 0.3 s. Thus, we assume that most of the damping occurs within the water drop, also in accordance with Eq. <ref>.

Drop penetration into the oil layer and spreading: Throughout the damping oscillations, both the base radius and the corresponding contact angle maintained a consistent average value for a duration of t_R, as is evident in Fig. 2. For example, the contact angle, constant at approximately 100^∘, decreases to approximately 35^∘ at around t_R. The corresponding increase in the base radius is also apparent. We observed similar spreading also with PDMS and ITO surfaces coated with oil. When an oil layer (7 μm thick) was applied to PDMS and ITO surfaces, after a duration t_R, the final contact angles were measured to be 100 ± 2^∘ and 65 ± 2^∘, respectively. These values closely align with the contact angles observed on the corresponding bare surfaces, namely 100 ± 2^∘ for PDMS and 60 ± 2^∘ for ITO. These findings strongly suggest that the change in contact angle occurs as the drop's contact line penetrates the oil layer and interacts with the underlying substrate. This is expected, as the oil density (971 kg/m^3) is slightly lower than that of water. On glass, the final contact angle did not match the contact angle of water on the bare substrate. For example, while the drop displayed a contact angle of 22^∘ on bare glass, it exhibited a slightly larger angle (approximately 35^∘) after spreading with an oil layer. This difference might be attributed to the potential presence of a thin oil film trapped beneath the spreading drop <cit.>.

We further looked into the details of the water penetration into the oil layer. The bottom view of the drop shows that, before the water drop spreads over the underlying substrate, the water ruptures the oil layer and the dewetting of the oil layer follows. The rupturing and dewetting process starts only a few hundred microseconds after the impact. This short delay might be due to the entrapped air layer between the drop and the oil, as observed in experiments involving water drops being engulfed by oil with some initial delay due to air entrapment <cit.>. The dewetting process is shown in the images in Fig. <ref>(a) and Supplementary video 1.
The entrapped air and the dewetting process delay the penetration of the drop edge into the oil. This overall delay in the drop edge penetration, denoted by t_R, amounts to about one second on 370 cSt oil and over 50 seconds on 10000 cSt oil layers (Fig. <ref>(b)). The rupturing and dewetting of the oil is initiated somewhere under the bulk region of the drop (Fig. <ref>(a) and Supplementary video 1). Under the drop, as the dewetting advances, a hole opens up in the oil layer and the water wets the substrate. Assuming a thin oil layer at the dewetting region, the initial dewetting velocity can be estimated as <cit.> v_dw = k (σ_ow/μ_o) θ_E^3, where σ_ow is the interfacial tension between oil and water, θ_E is the equilibrium contact angle of oil on the glass surface, and k is a numerical coefficient that ranges between 10^-2 and 10^-3 and is related to the molecular properties of the liquid <cit.>. With σ_ow = 39 mN/m <cit.>, θ_E = 87^∘, and k = 10^-2, we calculate a velocity of 3.7 mm/s for the case μ_o = 370 cSt. This value is very close to the measured dewetting velocity of 5 ± 2 mm/s for oil viscosity 370 cSt. The measured value was about 0.03 mm/s for oil viscosity 10000 cSt. As shown in Fig. <ref>(a), the dewetted oil eventually converges to form a small droplet at the central region under the water drop (Supplementary video 1). This oil droplet formation is driven by the relatively strong interfacial tension of the oil/water interface (39 mN/m) <cit.>. Despite the oil having a slightly lower density, the oil droplet adheres to the substrate without ascending within the water drop.

We observed that only when the dewetted oil has converged under the water drop does the edge of the water drop start to spread outward, i.e., at time t_R. Thus, neglecting the delay (a few hundred microseconds) imparted by the air entrapment, the value of t_R is mainly the time for completion of the dewetting process, which can be estimated as t_R ∼ R_b/v_dw, where R_b is the equilibrium base radius of the water drop. With R_b = 1.6 mm, we obtain t_R as 0.43 s and 11 s for oils of viscosity 370 and 10000 cSt, respectively. The t_R values for the highly viscous oil can be even larger, as the values of k may be smaller than 10^-2. Therefore, the estimated values of t_R align closely with the observed ones in our experiments.

We further study the influence of the impact parameters on the delay time t_R. The value of t_R rises with We, as demonstrated in Fig. <ref>(b). This observation may seem counter-intuitive, as one might anticipate that a greater impact velocity would expedite the drop's penetration into the oil layer, resulting in an earlier encounter with the underlying substrate. However, the opposite trend suggests that the impacting drop does not promptly displace the oil layer, as was also evident from Fig. <ref>. We observed that t_R follows a relationship of t_R ∼We^0.25±0.05 for oil with viscosity 370 cSt, and t_R ∼We^0.4±0.1 for oil with viscosity 10000 cSt, as shown in Fig. <ref>(b). Our measurements show that t_R depends upon the oil thickness as well. Remarkably, as illustrated in Fig. <ref>(c), it follows a power-law relationship, t_R ∼ d_o^1.9±0.2. This exponent, exceeding 1.0, implies that the drop's edge is not sinking freely. The observed exponent can be related to the dewetting process. When a hole appears on a viscous thin layer of thickness e, the dewetting proceeds until its thickness increases to a value e_c = 2 κ^-1 sin(θ_E/2), where κ^-1 is the capillary length <cit.>.
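The order-of-magnitude estimates above can be reproduced directly from v_dw = k(σ_ow/μ_o)θ_E^3 and t_R ∼ R_b/v_dw, as in the short sketch below. The interfacial tension, contact angle, coefficient k, and base radius are the values assumed in the text; small differences with respect to the quoted 3.7 mm/s and 0.43 s reflect rounding of μ_o and θ_E.

```python
import numpy as np

sigma_ow = 0.039          # N/m, oil-water interfacial tension (value quoted above)
theta_E = np.radians(87)  # equilibrium contact angle of oil on glass
k = 1.0e-2                # numerical coefficient, as assumed in the text
R_b = 1.6e-3              # m, equilibrium base radius of the water drop
rho_oil = 971.0           # kg/m^3, silicone oil density quoted above

for nu_cSt in (370.0, 10000.0):
    mu_o = rho_oil * nu_cSt * 1.0e-6            # dynamic viscosity [Pa s]
    v_dw = k * sigma_ow / mu_o * theta_E**3     # initial dewetting velocity [m/s]
    t_R = R_b / v_dw                            # time to dewet the drop footprint [s]
    print(f"{nu_cSt:7.0f} cSt:  v_dw ~ {v_dw*1e3:5.2f} mm/s,  t_R ~ {t_R:5.1f} s")
```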
As the layer thickness increases during the dewetting, the velocity thus takes the form <cit.> v_dw ∼ e_c^2 - e^2, with e(t) ≤ e_c, i.e., v_dw decreases as e_c^2-e^2 with time. An estimated value of t_R can be obtained from the average velocity as t_R ∼ R_b/⟨ e_c^2-e^2 ⟩. This aligns with the observation that t_R ∼ d_o^2. The exact expression connecting the temporal variation of v_dw and the observed scaling with initial oil layer thickness (d_o) and Weber number may require further theoretical investigation.

Following the dewetting, the drop spreads over the substrate surface. It shows an increase in the base radius (see the region after about 1 s in Fig. 2(a)). We plotted this region of the curves in Fig. <ref>. Our findings reveal a scaling behavior of R_b ∼ t^0.25 ± 0.07. Typically, the initial spreading dynamics of drops conform to a power law. For instance, in the inertial-capillary regime, a scaling law of R_b ∼ t^0.5 has been documented <cit.>. However, in the viscous-capillary regime, various exponents have been reported <cit.>. As a viscous drop spreads, the exponent diminishes over time, eventually reaching 1/10 on wetting surfaces <cit.>. Despite our drop being a low-viscosity liquid (water), the resistance posed by the viscous oil affects the movement of the contact line. Moreover, the spreading regime does not correspond to early times, as the drop had already partially spread on the oil layer. Consequently, we anticipate a smaller exponent, indicative of the late-time dynamics of viscous droplets, in our case.

§ DISCUSSION

Our observation that the impacted drop initially does not deform the oil layer contrasts with the findings from other studies on water drop impact on oil <cit.>. For instance, Che et al. <cit.> examined water drop impact on silicone oil layers with viscosities ranging from 5 to 50 cSt. They noted that the impacted drop displaces the oil layer, but this effect diminishes with increasing oil viscosity, consistent with our findings. Our measurements, conducted at much higher viscosity (370 and 10000 cSt), reveal that the impacted drop does not disturb the oil layer during its spreading phase. While the drops spread on high-viscosity oil almost as on a solid surface, the retraction phase involves significant viscous dissipation. This could be attributed to the dragging of the oil along with the retracting contact line. Conversely, in experiments with very low viscosity, such as our trials with 20 cSt silicone oil, the retracted water drop retained considerable kinetic energy, resulting in the formation of a vertically elongated jet shape (see Supplementary Fig. S3). At large impact energies, this jet subsequently broke up to form a small drop at the tip. This observation aligns with previous findings indicating that bouncing of the impacted drop occurs when the oil viscosity is low <cit.>.

§ SUMMARY

In our experimental study, we investigated the impact of water drops on thin layers of viscous oil applied to solid surfaces. We observed a series of dynamic phenomena, including maximal spreading and retraction, oscillations, delayed penetration into the oil layer, and spreading on the solid surface. We show that the viscosity of the underlying oil layer does not affect the scaling relationship for the maximum spreading radius (β∼We^1/4). Following spreading and retraction, the drop undergoes oscillations, maintaining a constant average base radius and contact angle for a duration t_R (a few seconds).
During this time, the water slowly ruptures the oil layer and dewets the oil under the drop; the dewetted oil eventually converges into a small droplet under the spreading water drop. Intriguingly, we observed a nonlinear relationship between t_R and the thickness of the oil layer (t_R ∝ d_o^2), which we relate to the dewetting dynamics of the oil layer under the water drop. Similarly, t_R increases with the Weber number approximately as We^1/4, indicating a complex interplay between the dewetting process, oil layer thickness, and impact energy. In summary, we unveil previously unreported short-term and long-term dynamics of drops impacting on oil layers. Our findings may inspire new experimental and theoretical investigations in related areas. § ACKNOWLEDGMENT DM acknowledges IISER Tirupati intramural funds and Science and Engineering Research Board (India) grant CRG/2020/003117. § COMPETING FINANCIAL INTERESTS The authors declare no competing financial interests.
http://arxiv.org/abs/2407.12482v1
20240717110711
Exploring Milky Way rotation curves with Gaia DR3: a comparison between $Λ$CDM, MOND, and General Relativistic approaches
[ "William Beordo", "Mariateresa Crosta", "Mario Gilberto Lattanzi" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO", "gr-qc" ]
§ INTRODUCTION With the advent of the new Gaia Data Release 3 (DR3), it is crucial to assess the accuracy of the dynamical models widely employed in the literature. Recently, carefully selected rotation curves of the Milky Way (MW) were presented by <cit.> for different stellar populations, using data from Gaia DR3. This work served as a follow-up of <cit.>, which utilised the previous Gaia DR2. Both studies focused on a direct comparison between a general relativistic velocity profile derived from the solution of <cit.> and a classical Newtonian model featuring a Navarro-Frenk-White dark matter halo (MWC). In the present work we extend the analysis of <cit.> to two state-of-the-art models in the context of galaxy dynamics: the MOdified Newtonian Dynamics <cit.> paradigm and the ΛCDM model, where the simulation-motivated Einasto profile <cit.> is assumed to describe the CDM density distribution. The following sections provide a description of the two models adopted and the data, as well as a discussion and comparison of the results with the MWC and BG models presented in <cit.>. § MOND In the MOND paradigm, the gravitational acceleration is 𝐠_MOND = η(g_N/g_0) 𝐠_N, where 𝐠_N is the conventional Newtonian acceleration produced by the baryonic matter alone, while the interpolation function η sets the transition between the Newtonian and the deep MOND regimes through the acceleration scale g_0. According to the MOND assumptions, when g_N ≪ g_0 the gravitational acceleration is boosted by a quantity η → √(g_0/g_N) in order to obtain a flat rotation curve, while Newtonian dynamics is restored by requiring η → 1 when g_N ≫ g_0. Here, we adopt the analytical expression proposed by <cit.>, namely η(g_N/g_0) = (1 - e^-√(g_N/g_0))^-1. This interpolation function has been shown to provide an excellent representation of the Radial Acceleration Relation (RAR) observed for external disc galaxies <cit.>. If we equate the gravitational acceleration g_MOND to the centripetal acceleration g = V^2/R of an object orbiting in circular motion with velocity V at a distance R from the centre of a disc galaxy, the Mondian rotation velocity is V_MOND(R, g_N) = √( R g_N / (1 - e^-√(g_N/g_0)) ). Similarly, the magnitude of the Newtonian acceleration originating from the distribution of the baryonic matter alone can be written as g_N = V_bar^2/R; therefore the above expression becomes V_MOND(R, V_bar) = V_bar / √(1 - e^-V_bar/√(R g_0)). Here, we assume the same density distributions used in <cit.> for the baryonic components of the Milky Way (i.e. a Plummer bulge and two Miyamoto-Nagai discs), so that the MWC and MOND models do not differ where Newtonian gravity dominates. While in the MWC model of <cit.> the total rotation curve is given by adding in quadrature the dark matter halo component (namely V_bar^2 + V_h^2, where V_bar^2 = V_b^2 + V_td^2 + V_Td^2), in the MOND model the pure Mondian boost is represented by the denominator of equation (<ref>) and is written explicitly as V^MOND_boost(R, V_bar) = √(V_MOND^2 - V_bar^2) = V_bar √(η(R, V_bar) - 1). The free parameters of the baryonic matter distribution share the same prior distributions as in the MWC model <cit.>. As an additional parameter of the model, we have the acceleration scale g_0, which has been constrained to extremely tight values by the observed RAR of external galaxies <cit.>, namely g_0 = (1.20 ± 0.02) × 10^-10 m s^-2. 
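For reference, the MOND velocity expression above reduces to a few lines of code. The sketch below is only illustrative: the baryonic curve V_bar fed into it is a placeholder, not the Plummer-plus-Miyamoto–Nagai model fitted in this work; only the exponential interpolation function and g_0 = 1.2 × 10^-10 m s^-2 follow the text.

```python
# A minimal numerical sketch of the MOND rotation velocity,
# V_MOND = V_bar / sqrt(1 - exp(-V_bar / sqrt(R * g0))).
import numpy as np

G0 = 1.2e-10          # m s^-2, MOND acceleration scale
KPC = 3.0857e19       # m per kpc

def v_mond(R_kpc, v_bar_kms, g0=G0):
    """MOND rotation velocity (km/s) given radius (kpc) and baryonic velocity (km/s)."""
    R = np.asarray(R_kpc) * KPC
    v_bar = np.asarray(v_bar_kms) * 1e3
    eta = 1.0 / (1.0 - np.exp(-v_bar / np.sqrt(R * g0)))   # interpolation function
    return np.sqrt(eta) * v_bar / 1e3

def v_mond_boost(R_kpc, v_bar_kms):
    """Pure Mondian boost, sqrt(V_MOND^2 - V_bar^2), in km/s."""
    return np.sqrt(v_mond(R_kpc, v_bar_kms) ** 2 - np.asarray(v_bar_kms) ** 2)

# Example: a baryonic curve falling as 1/sqrt(R) beyond 10 kpc is flattened by MOND.
R = np.linspace(5, 25, 5)                              # kpc
v_bar = 190.0 * np.sqrt(10.0 / np.maximum(R, 10.0))    # km/s, purely illustrative
print(np.round(v_mond(R, v_bar), 1))
```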
This scale is supposed to be fixed in the framework of MOND. However, given the small uncertainty on the parameter, setting a Gaussian prior 𝒩(μ = 1.20, σ = 0.02) × 10^-10 m s^-2 does not affect the results <cit.>; therefore, strictly following the Bayesian approach, we prefer not to fix it and to marginalize over it afterwards. § ΛCDM MODEL WITH EINASTO HALO PROFILE We consider the distribution of cold dark matter within the ΛCDM scenario to follow the Einasto density profile <cit.>, namely ρ_Einasto(r) = ρ_s exp{ -(2/α)[(r/r_s)^α - 1] }. Consistently with the prescriptions of <cit.>, the parameters of the Einasto profile are written in terms of the halo concentration C_200 ≡ r_200/r_s, where the virial radius r_200 is defined such that the enclosed average density is 200 times the critical density of the Universe, i.e. ρ_200 = 200 ρ_c = 75 H_0^2 / (π G), with H_0 = 67 km s^-1 Mpc^-1 <cit.>; from here, the rotation velocity and the enclosed halo mass at the virial radius are then V_200 = 10 C_200 r_s H_0 and M_200 = V_200^3 / (10 G H_0). With this redefinition, the following boundaries are set: 0 < C_200 < 100, 10 < V_200 [km s^-1] < 500, and 0 < α < 2 <cit.>. Three prior distributions, coming from N-body simulations within the ΛCDM cosmology, are then imposed to constrain the parameters: * Stellar Halo Mass (SHM) relation <cit.>: M_⋆/M_200 = 2N [(M_200/M_1)^-β + (M_200/M_1)^γ]^-1, where M_⋆ = M_b + M_td + M_Td, log(M_1) = 11.59, N = 0.0351, β = 1.376 and γ = 0.608, with scatter σ(log M_⋆) = 0.15 dex. * Halo mass-concentration relation <cit.>: log(C_200) = a + b log(M_200/[10^12 h^-1 M_⊙]), where a = 0.977 and b = -0.130 assuming the Planck cosmology, with a scatter of 0.11 dex. * Einasto shape parameter as a function of halo mass <cit.>: α = 0.0095 λ^2 + 0.155, with a scatter of 0.16 dex, where log(λ) = -0.11 + 0.146 m + 0.0138 m^2 + 0.00123 m^3 and m = log(M_200/[10^12 h^-1 M_⊙]). Again, the baryonic matter distribution is modelled in the same way as in the MOND and MWC models <cit.>. § DATA The data utilized in this study have been carefully selected from Gaia DR3, following the rigorous criteria outlined in the reference paper <cit.>. The resulting sample boasts high-quality astrometric and spectrophotometric information for a total of 719143 young disc stars located within |z| < 1 kpc and spanning from R = 4.5 to 19 kpc. This includes 241918 O-, B-, A-type stars (OBA), 475520 Red Giant Branch stars (RGB) with nearly-circular orbits, and 1705 classical Cepheids (DCEP), ensuring a comprehensive representation of various stellar tracers of the Galactic disc potential. Leveraging this extensive sample, <cit.> derived rotation curves of the Milky Way for six distinct data sets: the 3 ‘pure’ data sets of OBA, DCEP, and RGB stars, the combined OBA + DCEP and RGB + DCEP samples, and the total sample consisting of all of the disc stars selected combined (OBA + RGB + DCEP, hereafter ALL). Here, in order to determine the error bars of the velocity profiles, the Robust Scatter Estimate (RSE) was adopted as a robust measure of the azimuthal velocity dispersion of the population in each radial bin, instead of performing the bootstrapping technique.[The RSE is defined as (2√(2) erf^-1(4/5))^-1 ≈ 0.390152 × (Q90 - Q10), with Q90 and Q10 being the 90^th and 10^th percentiles of a distribution, and it coincides with the standard deviation in the case of a normal distribution.] 
In fact, as discussed in <cit.>, the stellar azimuthal velocities are considerably dispersed around the median values (the RSE is typically >10 km s^-1), therefore smaller error bars would not encompass the actual variability of the sample. Moreover, with the large amount of stars per radial bin, a bootstrapped quantity would be much smaller than the individual uncertainty on the velocity measurements, representing thus a nonphysical situation. In the present study, we employ these rotation curves to constrain the models outlined in sections <ref> and <ref>. § RESULTS Table <ref> lists, as best-fit estimates, the medians of the posteriors and their 1σ credible intervals. Details about the Bayesian analysis can be found in <cit.>. The parameters of the baryonic matter distribution are found in agreement between different datasets and models, even though the ΛCDM paradigm tends to assign less mass to the baryonic component, as the values of M_ b, M_ td, M_ Td are slightly smaller compared to those estimated with the MOND and MWC models. Additionally, ΛCDM cosmological constraints are observed, as made clear by figure <ref>: specifically, the estimated parameters align within 1-σ with the relations for the halo mass-concentration and the Einasto shape parameter, and within 2-σ with the SHM relation. Rotation curves of the Milky Way for six stellar populations are presented in figure <ref>: the results for MOND and ΛCDM are drawn on top of the corresponding results of <cit.> for the BG and MWC models. Again, the fits of the four models result statistically equivalent to each other, having comparable values for the WAIC and LOO tests. Figure <ref> shows the matter density profiles for the four models along with the only data point considered, which corresponds to the baryonic matter density observed at the Sun <cit.>. Estimates of the local baryonic density (ρ^Λ CDM_ bar, ⊙ and ρ^ MOND_⊙, table <ref>) are consistent with those in <cit.>, being all around 0.080 M_⊙ pc^-3, as well as with estimates found in the literature <cit.>. As for the dark matter density at the Sun (ρ^Λ CDM_ h, ⊙), we recover again the recent values reported in the literature <cit.>, being almost ten times smaller than the local baryonic density: for the full sample, ρ^Λ CDM_ h, ⊙= (0.0090 ± 0.0006) M_⊙ pc^-3, which corresponds to (0.34 ± 0.02) GeV cm^-3. As expected, all density profiles show agreement within the radial range covered by the data: MWC and ΛCDM total matter density profiles are almost coincident, while departing from each other only at very large radii; so do their dark matter density profiles, but the Einasto profiles of the ΛCDM model result larger than the NFW ones both in the inner and outer parts of the Galaxy; this translates into the dynamical mass being supplied by more dark matter in the ΛCDM scenario compared to the case of an NFW halo without cosmological constraints. This is confirmed by table <ref>, where the estimates for the total baryonic and virial masses are reported (respectively M^Λ CDM_ bar and M^Λ CDM_ 200). The baryonic mass predicted by the ΛCDM scenario is around 8–9 × 10^10 M_⊙, smaller than the values of 9–10 × 10^10 M_⊙ expected for the MWC model <cit.>, but still higher compared to <cit.>, which reports a value of (5.43 ± 0.57) × 10^10 M_⊙. 
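The figures quoted in this section can be checked with two short unit computations: the conversion of the local dark-matter density from M_⊙ pc^-3 to GeV cm^-3, and the virial-mass relation M_200 = V_200^3/(10 G H_0) introduced above. The value of V_200 used below is illustrative, chosen only so that the resulting mass falls in the range discussed here; it is not one of our posterior estimates.

```python
# Consistency checks on the quantities quoted in this section (sketch, not analysis code).
MSUN = 1.989e30          # kg
GEV_PER_KG = 5.61e26     # GeV/c^2 per kg
PC_CM = 3.0857e18        # cm per parsec
G = 6.674e-11            # m^3 kg^-1 s^-2
H0 = 67e3 / 3.0857e22    # s^-1 (67 km/s/Mpc)

# Local dark-matter density: 0.0090 Msun/pc^3 expressed in GeV/cm^3.
rho = 0.0090 * MSUN * GEV_PER_KG / PC_CM**3
print(f"rho_DM,sun ~ {rho:.2f} GeV/cm^3")     # ~0.34, as quoted above

# Virial mass from the rotation velocity at r_200 (illustrative V_200, not a fit).
V200 = 180e3                                   # m/s
M200 = V200**3 / (10.0 * G * H0) / MSUN
print(f"M_200 ~ {M200:.1e} Msun")              # ~2e12 Msun for V_200 = 180 km/s
```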
The virial mass, ranging in the interval 1.5–2.5 × 10^12 M_⊙, is up to two times the value of (1.30 ± 0.30) × 10^12 M_⊙ found by <cit.> and the values in <cit.> for the virial mass of the MWC model, significantly higher than those reported by some authors <cit.> and an order of magnitude greater than some very recent values claimed <cit.>. These low values from the literature are the result of a clearly falling rotation curve derived by the authors beyond 18 kpc; some comments about this surprising finding are given in section <ref>. The BG and MOND density profiles are consistent with both the baryonic and total density profiles of the classical models (MWC and ΛCDM). Therefore, no further insight can be gained at this stage without looking at the amount of mass expected. The total mass predicted by MOND (M^ MOND) aligns with the total baryonic mass of the MWC model, while being up to 10 per cent higher than the ΛCDM estimate. Moreover, this value exceeds the estimate of (7.8 ± 0.5) × 10^10 M_⊙ found by <cit.> in the MOND framework. Finally, the BG relativistic mass reported in <cit.> compares favourably with the value of 1–2 × 10^10 M_⊙ expected for the baryonic mass in ΛCDM and MOND, calculated in the region of validity of the BG model. §.§ Comments about the recent claim of a Keplerian rotation curve Recent claims regarding a Keplerian rotation curve extending up to 30 kpc have been brought to light <cit.>, a finding seemingly consistent with various recent studies <cit.>. Despite the resonance found with these works, our analysis is restricted by the range of our selected samples, precluding a direct quantitative comparison. However, several noteworthy distinctions emerge. Firstly, within the overlapping range of 10–18 kpc, our rotation curves exhibit slightly declining profiles, aligning with recent findings that indicate a pronounced decline only beyond 18–19 kpc (see figure <ref>). The profiles of <cit.> and <cit.> are found to be the closest match; however, our rotation curve shows a slight flattening around 15 kpc compared to other profiles (although statistical limitations of our selected sample start to become significant at that distance) and exhibits higher velocities in the inner region close to 5–7 kpc. Secondly, our approach diverges notably in the methodology employed. While we refrained from conducting the Jeans analysis, being found negligible in the DR2 paper <cit.>, we instead implemented an eccentricity selection for the orbits of RGB stars. This adjustment was aimed at removing the effects of the asymmetric drift, to match the OBA and DCEP rotation curves. By conducting the Jeans analysis on our selected sample (as detailed in appendix <ref>), i.e., considering the derived circular velocity profile instead of the azimuthal one, the rotation curve shows a further slight increase within error bars (see figure <ref>), as expected. This suggests that the lack of the Jeans analysis in our procedure is unlikely to be the cause of the discrepancy observed at around 15 kpc. Thirdly, our sample selection criteria differed significantly, as we imposed a stringent requirement of errors on parallaxes smaller than 20%. 
In contrast, the referenced literature employed various techniques to extend the measured rotation curve to 30 kpc, a topic we abstain from delving into here — one of our rigorous requirements in <cit.> was to create the most homogeneous sample ever —, acknowledging the reliability of the methods as asserted by the authors (see the discussion in the related works). Furthermore, we delineated our study by confining our analysis to stars within |z|<1 kpc, deviating from the convention of considering a thicker disc encompassing |z|<3 kpc <cit.>; for instance, a thinner disc selection reflects into higher velocities, especially between 5–15 kpc from the Galactic Centre <cit.>. Additionally, our depiction of error bars on the rotation curves incorporates the significant dispersion of azimuthal velocity among the stars. As shown in figure <ref>, the density plot highlights the azimuthal velocity distribution of our selected sample. This stands in contrast to the rotation curves presented by the cited authors, whose error bars are derived by bootstrapping techniques and typically encompass only systematic sources of uncertainty, remaining relatively small compared to the velocity dispersion observed within each radial bin. Lastly, we adopted quite small radial bins of 0.1 kpc width (except for the DCEP sample for which we chose larger bins of 0.5 kpc) compared to typical values of 0.5 and 1 kpc used in the literature. As discussed in appendix <ref>, this of course plays an important role in the derived rotation curve, as the outer data points are distributed differently (see figure <ref>). However, perfectly consistent results of the model parameters are obtained with either 0.5 or 1 kpc radial bins, indicating the robustness of the rotation curve defined in <cit.>. These differences in methodology and analysis are potential contributors to the observed discrepancy in the estimation of the virial mass compared to recent literature. As a result, our findings suggest that the dynamical mass of the Galaxy, in the classical context with dark matter, is expected to remain on the order of 10^12 M_⊙. §.§ Contributions to the rotation curve We can compare the models by making the contributions to the rotation curves explicit, recalling what was done in section 7 of <cit.>, since different models and samples contribute to the rotation curve differently. Figure <ref>, drawn on top of figure 4 in <cit.>, shows the Newtonian/baryonic counterpart for each model, alongside the corresponding non-Newtonian/non-baryonic contributions. The rotation curve of the BG general relativistic model of <cit.> is split between an effective Newtonian counterpart and a gravitational dragging contribution, which is a purely relativistic effect due to the spacetime geometry. The MWC and ΛCDM rotation curves are given by the quadratic summation of the baryonic and halo contributions, while the Mondian boost responsible for the flat profile is given by equation <ref>. From this plot, it is clear that the non-Newtonian/non-baryonic contributions onset between 10–15 kpc, becoming dominant beyond 15 kpc. Due to the slightly higher baryonic mass in MOND, the baryonic component of the rotation curve tends to be slightly higher compared to the classical models with dark matter. Similarly, within the framework of ΛCDM, the dark matter halo described by the Einasto profile contributes more than the NFW case of the MWC model, where less dark matter and more baryonic matter are attributed. 
§ CONCLUSIONS In this study, we have undertaken a comprehensive analysis of Galactic rotation curves utilizing the latest Gaia Data Release 3, in line with seminal previous results with Gaia DR2 <cit.>. Considering larger samples from the latest Gaia Release, we compared the rotation curves within the frameworks of two prominent dynamical models: MOND and the ΛCDM paradigm. We extended the analysis previously presented in <cit.> with the same set of data, in which the rotation curves were compared using the BG and MWC models. Our analysis revealed several key findings. Firstly, we found that both the MOND and ΛCDM models provided statistically equivalent fits to the Gaia DR3 rotation curve data across various stellar populations. This suggests that all four models are capable of accurately describing the observed dynamics of the Milky Way, albeit with different underlying assumptions. The parameters of the baryonic matter distribution are consistent between the different datasets and models, although the ΛCDM paradigm tends to assign slightly less mass to the baryonic component compared to MOND and the MWC model. Furthermore, our analysis confirms the previously reported tension between our derived dynamical mass estimates and those reported in recent literature claiming the presence of a Keplerian decline beyond 18–19 kpc. Our estimates of the virial mass range from 1.5–2.5 × 10^12 M_⊙ in the ΛCDM framework, which is an order of magnitude higher than some values recently proposed in the literature. While our findings align with recent studies showing near-flat rotation curves within the observed range, we acknowledge the discrepancy and highlight the need for further investigation into the methodologies employed. Additionally, we compared the contributions to the rotation curve from the baryonic and non-Newtonian components for each model. We found that the non-Newtonian/non-baryonic contributions become dominant beyond 10–15 kpc, with MOND predicting a slightly higher baryonic contribution compared to the classical models with dark matter, and ΛCDM attributing more dynamical mass to the dark matter halo described by the Einasto profile compared to the NFW halo in the MWC model. Overall, our study underscores the importance of comparing different dynamical models in understanding the dynamics of galaxies. By leveraging the wealth of data provided by Gaia DR3, we have gained valuable insights into the structure and composition of our own Milky Way galaxy, shedding light on the behaviour and the validity of different theories of gravity. Future work should focus on resolving the discrepancies observed in the dynamical mass estimates and further exploring the implications for our understanding of the dark matter role and galactic structure, especially considering the subtle richness provided by a general relativistic scenario. § DATA AVAILABILITY The data sets used in this analysis are presented by <cit.>. Posterior distributions or other data underlying this article will be shared on reasonable request to the corresponding author. § JEANS ANALYSIS OF THE SAMPLE Under the assumption of axisymmetric and stationary equilibrium, the Jeans equation in cylindrical coordinates is <cit.> ∂(ν⟨ V_R^2 ⟩)/∂R + ∂(ν⟨ V_R V_z ⟩)/∂z + ν( (⟨ V_R^2 ⟩ - ⟨ V_ϕ^2 ⟩)/R + ∂ϕ/∂R ) = 0, where ν is the radial volume density of the Galaxy, ϕ is the gravitational potential, V_R, V_ϕ, V_z are the radial, azimuthal, and vertical velocities respectively, and the brackets ⟨⟩ represent quantities averaged over the velocity space. 
With the circular rotation velocity defined as V_c^2 = R ∂ϕ/∂R |_(z ≈ 0), the Jeans equation for z ≈ 0 becomes [ ∂(ν⟨ V_R^2 ⟩)/∂R + ∂(ν⟨ V_R V_z ⟩)/∂z + ν( ⟨ V_R^2 ⟩ - ⟨ V_ϕ^2 ⟩ + V_c^2 )/R ]_(z ≈ 0) = 0. Considering the vertical gradient of the cross term ⟨ V_R V_z ⟩ negligible, the equation can be written as V_c^2 = [ ⟨ V_ϕ^2 ⟩ - ⟨ V_R^2 ⟩( 1 + ∂ln ν/∂ln R + ∂ln⟨ V_R^2 ⟩/∂ln R ) ]_(z ≈ 0), where the radial volume density is usually assumed to be ν ∝ e^-R/h_R. Widely used values for the radial scale length h_R range from 2 to 5 kpc, therefore we adopt a value of 2.5 kpc <cit.>. The radial gradient of the averaged squared radial velocity ⟨ V_R^2 ⟩, i.e. the last term in equation (<ref>), can instead be derived directly from the data set. Fitting √(⟨ V_R^2 ⟩) with an exponential function, we estimate a scale length of ≈ 26 kpc, in line with other studies <cit.>. The resulting circular velocities for the full sample are plotted in figure <ref>. These values typically exceed the azimuthal velocities by less than 5 per cent and fall well within the error bars, as expected, given that the orbital eccentricity selection performed in <cit.> has already cleared out most of the asymmetric drift. § DEPENDENCE ON THE WIDTH OF RADIAL BINS The choice of the radial bin size strongly affects the derived rotation curve, the more so the larger the bin. This is because some information is lost and some features of the rotation curve are smoothed out by averaging the velocities of stars with different true radial coordinates, especially in less populated bins (i.e. at large radii where data points are crucial in the dynamical mass determination). For the optimal choice of the bin size, in <cit.> we adopted Knuth's Rule <cit.>, a data-based Bayesian algorithm, yielding 0.1 kpc bins (0.5 kpc for the DCEP sample). However, in the literature, observed data are usually grouped in 0.5–1 kpc radial bins. Imposing radial bins of 1 kpc, we find the rotation curve shown in the top panel of figure <ref>. The flattening around 15 kpc, although less clear, is still present; a similar but softer behaviour seems to be found by <cit.> and <cit.> at slightly larger distances and lower velocities. The bottom panel of the same figure shows the rotation curve obtained with 0.5 kpc radial bins instead, highlighting how the data points are distributed differently with respect to the 1 kpc case. However, if error bars are appropriately selected, one should obtain consistent results. In fact, when repeating the fitting process using the rotation curve derived with either 0.5 or 1 kpc bin width for the four models, we obtain estimates of the model parameters that are perfectly consistent with previous ones. This underlines the robustness of the rotation curve defined in <cit.> and used in the present work. On the contrary, it remains unclear to what extent the results are influenced by the radial bin size when error bars are derived via bootstrapping instead, necessitating further verification. We wish to thank Paola Re Fiorentin and Alessandro Spagna for the selection of the stellar samples performed in <cit.>. This work has made use of data products from the ESA Gaia mission (gea.esac.esa.int/archive/), funded by national institutions participating in the Gaia Multilateral Agreement. We are indebted to the Italian Space Agency (ASI) for their continuing support through contract 2018-24-HH.0 and its addendum 2018-24-HH.1-2022 to the National Institute for Astrophysics (INAF).
http://arxiv.org/abs/2407.12749v1
20240717171113
HDLCopilot: Hardware Design Library Querying with Natural Language
[ "Manar Abdelatty", "Sherief Reda" ]
cs.CL
[ "cs.CL" ]
manar_abdelatty@brown.edu Brown University School of Engineering Providence RI USA sherief_reda@brown.edu Brown University School of Engineering Providence RI USA § ABSTRACT Hardware design engineers routinely work with multiple Process Design Kits (PDKs) from various fabrication labs, each containing several standard cell libraries, optimized for specific metric such as speed, power, or density. These libraries include multiple views such as liberty files for timing information, LEF files for abstract layout details, and technology LEF for process design rules. Navigating this complex landscape to retrieve specific information about gates or design rules is often time-consuming and error-prone. To address this, we present HDLCopilot, an LLM-powered PDK query system that allows engineers to streamline interactions with PDKs in natural language format, making information retrieval accurate and more efficient. HDLCopilot achieves an accuracy of 94.23% on an evaluation set comprised of diverse and complex natural language queries. HDLCopilot positions itself as a powerful assistant in the hardware design process, enhancing productivity and reducing potential human errors. HDLCopilot: Hardware Design Library Querying with Natural Language Sherief Reda July 22, 2024 ================================================================== § INTRODUCTION At the core of semiconductor design workflows lies a critical component: the Process Design Kit (PDK). PDKs serve as a comprehensive library of building blocks that are used for synthesizing abstract circuit definitions into manufacturable chips. PDKs usually contain different standard cell libraries, each optimized for a specific metric, such as speed, density, or power. These libraries include detailed files on cell timing information at different process corners, physical layout data, and metal stack properties. Traditionally, hardware engineers navigate this complex landscape manually, parsing through extensive library files containing thousands of cells, each with numerous attributes, in order to locate specific information relevant to their current design task. This manual process is not only time consuming but also prone to human errors. Therefore, there is a need for more automated and efficient tools to assist engineers in managing and utilizing PDKs effectively, potentially accelerating the design process, and enhancing accuracy. Large Language Models (LLMs) have enhanced productivity in various engineering domains, including hardware design. They've shown promise in tasks like Verilog code generation, RTL code bug fixing, and EDA tool scripting <cit.>. However, their application to PDK management remains underexplored. LLMs could potentially enhance engineer-PDK interactions through natural language interfaces, automated data retrieval, and intelligent suggestions, accelerating the design process and reducing errors. Currently, Large Language Models (LLMs) are not inherently aware of Process Design Kits (PDK) specifics. Domain adaptive pre-training could be one way of enhancing LLMs knowledge with PDK-specific data <cit.>. However, it would require expensive and time-consuming pre-training of the LLM on a large scale dataset of different PDKs from different manufacturers and process nodes. Moreover updating the PDK information would require retraining the LLM, making it impractical to maintain the LLM's relevance to the most recent PDK version. 
In contrast, Retrieval Augmented Generation (RAG) offers a more flexible and maintainable solution. RAG enhances LLM capabilities by grounding their responses in external knowledge sources, which can be easily updated without retraining the base model <cit.>. RAG has been applied to various data types: unstructured (e.g., text documents), semi-structured (e.g., JSON), and fully structured (e.g., relational databases) <cit.>. For unstructured data, RAG uses semantic similarity matching, while for semi-structured data, it combines semantic and structural information. With fully structured data, RAG transforms the retrieval into a text-to-SQL task, generating SQL queries based on user questions and database schemas, then using the query results as context for the LLM's response. Since PDKs generally follow a structured format, they are well-suited for conversion into relational databases. By transforming PDK data into a SQL database, we can leverage the benefits of SQL-based retrieval while maintaining the flexibility to update and expand the knowledge base without retraining the underlying LLM. The SQL-based retrieval also allows for human verification of the generated SQL queries, ensuring reliability and transparency in the information retrieval process. In light of this, we introduce HDLCopilot, an LLM-powered PDK query, designed for interacting with Process Design Kits (PDKs) using natural language. HDLCopilot harnesses the power of RAG and text-to-SQL conversion to provide an intuitive, efficient, and accurate interface for hardware designers to access and utilize PDK information. Fig. <ref> presents an overview of the HDLCopilot framework, illustrating the flow of converting natural language queries to SQL queries to retrieve relevant data from the PDK database. Our contributions are summarized as follows: * We introduce HDLCopilot, a multi-agent collaborative framework, designed to streamline interactions with Process Design Kits (PDKs) in natural language format, offering a novel approach for enhancing hardware design engineers efficiency. * We propose a database schema for storing PDK information in relational tables, which facilitates easy and seamless integration with LLM-enabled applications. This structured approach allows LLMs to perform precise data retrieval by dynamically generating SQL queries based on natural language inputs. * Experimental results show that HDLCopilot can answer diverse and complex user questions with high precision across different libraries in the PDK, achieving an accuracy of 94.23%. HDLCopilot also demonstrates high capability in generating efficient SQL queries with an efficiency score of 98.07%. This paper is organized as follows. Section <ref> discusses related work. Section <ref> provides an overview of the HDLCopilot framework. Section <ref> presents experimental results to validate the utility of the HDLCopilot framework. Finally, section <ref> concludes the paper. § RELATED WORK In this section, we review relevant work in three key areas: LLMs for hardware design process in Section. <ref>, Retrieval augmented generation in Section. <ref>, LLMs for text-to-SQL applications in Section <ref>. §.§ LLMs for Hardware Design LLMs have been extensively applied to various tasks in the hardware design process from verilog code generation, EDA scripting to RTL bug fixing. Takhur et al. <cit.> introduced the first finetuned LLM for verilog code generation. Wu et al. 
<cit.> introduced ChatEDA, demonstrating how LLMs can be used for EDA tool scripting and automation. Tsai et al. <cit.> explored the use of LLMs for fixing RTL code bugs. Several studies have also explored LLMs for question-answering tasks in hardware design. For example, Liu et al. <cit.> proposed ChipNeMo, an LLM fine-tuned for hardware design tasks including question answering. §.§ Retrieval Augmented Generation (RAG) Retrieval augmented generation emerged as a powerful approach to enhance the performance and reliability of LLMs by connecting them to external knowledge sources. Lewis et al.  <cit.> introduced the RAG model, which combines a pre-trained neural retriever with a sequence-to-sequence model for open-domain question answering. Guu et al. <cit.> proposed REALM (Retrieval-Augmented Language Model Pre-Training), which integrates retrieval during model pre-training, showing improvements in open domain question-answering tasks. Shuster et al. <cit.> have shown that using retrieval can reduce LLM hallucination in knowledge-grounded dialogue tasks. §.§ LLMs for Text-to-SQL The transformation of natural language queries into SQL using LLMs has been a focus of recent research. Pourezza et al. <cit.> proposed decomposing the user question into sub-questions and feeding the solution of those sub-questions to the LLM to generate the final SQL query. Cyren et al. <cit.> proposed a multi-agent collaborative framework for converting the user question to SQL by first semantically choosing relevant tables, then performing query decomposition into multiple sub-queries. While these works demonstrate the broad applicability of LLMs in hardware design and the potential of text-to-SQL in other domains, the application of text-to-SQL techniques for PDK management remains an underexplored area. HDLCopilot addresses this gap by combining the strengths of LLMs, text-to-SQL conversion, and structured PDK data to provide an efficient and accurate natural language interface for PDK queries. § PROPOSED HDLCOPILOT FRAMEWORK In Fig. <ref>, we present HDLCopilot, a multi-agent collaborative framework for streamlining interactions with Process Design Kits (PDKs) using natural language. HDLCopilot employs four specialized LLM agents, each designed to perform a specific function. This section first provides an overview of the PDK files conversion to a relational database, followed by a detailed explanation of each agent's functionality and purpose. §.§ LLM-Compatible PDK Database Schema To facilitate integration with LLM-based retrieval, we first convert the PDK files into a relational database. LLM agents can then retrieve information from the database by dynamically generating SQL queries based on the user input. We mainly focus on three views in the PDK files: liberty for storing timing information at different operating conditions, LEF for abstract layout information for each cell, and Technology LEF for storing metal stack properties. We propose a schema for each view. These schema are designed to support storing information across different standard cell libraries different operating conditions within the PDK. The liberty schema is shown in Fig. <ref>. This schema comprises tables capturing operating conditions, cell attributes, pin properties, and timing data. The LEF schema is shown in Fig. <ref>. 
This schema comprises tables that capture abstract physical information about various macros (cells), including cell dimensions, obstruction layers in cell layouts, and pin physical attributes such as antenna gate area and pin shapes. Fig. <ref> depicts the schema for technology LEF files, which stores technology-specific information. This includes details about routing layers (such as preferred routing direction, width and spacing rules, and resistance), via layers, and their associated antenna ratios. §.§ LLM Agents HDLCopilot comprises four LLM agents: the Dispatcher, Selector, SQL-Generator, and Interpreter, which collaborate to ensure a reliable and accurate SQL generation pipeline. Dispatcher: The main objective of the dispatcher agent is to route the user question to the appropriate standard cell library, library view, and operating conditions. Given the user question 𝒬, the available libraries ℒ and library views 𝒱 in the PDK, the dispatcher selects the relevant library ℒ^', library view 𝒱^', and operating conditions 𝒞^' (if applicable). The function of the dispatcher agent is described in Eq. <ref>, where f_dispatcher(.|ℳ) represents LLM agent ℳ. {ℒ^', 𝒱^', 𝒞^'} = f_dispatcher(𝒬, ℒ, 𝒱 | ℳ) The selected library view 𝒱^' is then used to filter the PDK tables, retaining only those relevant to the routed view as described in Eq. <ref>. The selected tables 𝒯^' are then passed to the selector agent to perform further fine-grained table selection by choosing only the tables relevant to the user question. 𝒯^' = { t ∈𝒯|𝒱^'} Selector: The selector agent performs a more refined reduction of the tables. Given the user question 𝒬 and the schema description of the routed tables 𝒯^', the selector narrows down the set of the routed tables to only those most relevant to the user question. The function of the selector agent is described in Eq. <ref>. The main purpose of this table reduction process is to make the text-to-SQL task easier by having the SQL-Generator only examine the relevant tables. This focused approach enhances the efficiency and accuracy of the SQL generation process. 𝒯^'' = f_selector(𝒬, 𝒯^'| ℳ) SQL-Generator: The SQL-generator serves as the core agent of the framework. Its primary function is to construct a SQL query that accurately retrieves the required information from the PDK database to address the user's question. The generator employs a query decomposition approach proposed in <cit.> that breaks down the user question into smaller, manageable sub-questions. For each sub-question, the generator produces a corresponding sub-query. These sub-queries are then combined to form the final SQL query. This step-wise approach enhances accuracy and allows for handling complex user questions. The generator function is described in Eq. <ref>. It takes as input the user question 𝒬, the schema description of the selected tables 𝒯^'', the relevant standard cell library ℒ^', and operating conditions 𝒞^'. 𝒮𝒬ℒ = f_generator(𝒬, 𝒯^'', ℒ^', 𝒞^'| ℳ) Interpreter: The interpreter's main role is to translate the raw database results into a coherent, natural language response that directly addresses the user's question. This agent processes the user question 𝒬 and the result ℛ obtained from executing the generated SQL query (Eq. <ref>) and then formulates an output answer 𝒪 in natural language format, as formalized in Eq. <ref>. ℛ = f_execute(𝒮𝒬ℒ, 𝒟ℬ) 𝒪 = f_interpreter(𝒬, ℛ|ℳ) § EXPERIMENTAL RESULTS We conduct all experiments using OpenAI's GPT models. 
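Before detailing the experimental setup, the end-to-end flow of the four agents described above can be summarised schematically. The snippet below is a non-official sketch: the llm() helper, the prompt strings, and the use of SQLite are placeholders standing in for whatever model backend and database engine are actually employed; only the ordering of the Dispatcher, Selector, SQL-Generator, and Interpreter steps mirrors the equations above.

```python
# Schematic sketch of the four-agent query flow (placeholders, not the released code).
import sqlite3

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion API call; assumed, not specified here."""
    raise NotImplementedError

def hdlcopilot_answer(question: str, db_path: str, libraries, views, schemas) -> str:
    # Dispatcher: pick library, view and (optionally) operating condition.
    route = llm(f"Question: {question}\nLibraries: {libraries}\nViews: {views}\n"
                "Return the relevant library, view and corner.")
    # Selector: keep only the tables of the routed view that matter for the question.
    tables = llm(f"Question: {question}\nCandidate tables: {schemas[route]}\n"
                 "Return the relevant table names.")
    # SQL-Generator: decompose the question and emit the final SQL query.
    sql = llm(f"Question: {question}\nTables: {tables}\nRoute: {route}\n"
              "Decompose into sub-questions and return one final SQL query.")
    # Execute, then Interpret: run the query and phrase the rows as an answer.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()
    return llm(f"Question: {question}\nQuery result: {rows}\nAnswer in plain English.")
```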
All models were accessed through their API, specifically gpt-3.5-turbo-0125 for GPT3.5, gpt-4-turbo-2024-04-09 for GPT4, and gpt-4o-2024-05-13 for GPT-4o. For the Process Design Kit (PDK), we utilize the open-source Skywater 130nm <cit.>. This PDK encompasses 6 Standard Cell Libraries (SCLs). First, we converted the PDK files to a database using our proposed schema. The resulting Skywater database comprises 20 tables, 39,576 different cell entries, and 4,986,160 entries for cell timing information at different process corners, with a total size of 7.2. This comprehensive large-scale database provides a robust platform for evaluating our framework. §.§ Evaluation Set To evaluate HDLCopilot, we created an evaluation set of 52 user questions with corresponding SQL queries. This set encompasses a diverse range of complexities, from simple single-table selections to complex multi-table joins with sub-queries and multiple conditions. As shown in Table <ref>, the set incorporates various SQL clauses, aggregation functions, and sub-queries, providing a comprehensive test of the framework's SQL handling capabilities. §.§ Evaluation Metrics Following text-to-SQL work <cit.>, we use the Execution Accuracy (EX) and Valid Efficiency Score (VES) to evaluate the performance of our proposed framework. The Execution Accuracy (EX) quantifies the framework's ability to generate SQL queries that produce correct results. It measures the proportion of questions in the evaluation set where the execution results match those of the ground truth queries. It is formally defined in Eq. <ref>, where N denotes the number of questions in the evaluation set, V_i denotes the set returned by the ground truth SQL query and V̂_i denotes the set returned by the predicted SQL query. 𝟙(·) is an indicator function that is equal to 1 if the ground truth set and the predicted set are equal and 0 otherwise. EX = ∑_i=1^N 𝟙(V_i, V̂_i) / N, where 𝟙(V_i, V̂_i) = 1 if V_i = V̂_i and 0 if V_i ≠ V̂_i. The Valid Efficiency Score (VES) evaluates the correctly generated SQLs by comparing their execution time against those of the ground truth SQLs. VES is formally defined in Eq. <ref>, where R(·) is the relative efficiency of the predicted SQL and the ground truth SQL, and E(·) is the execution time of each SQL in the database. The VES metric provides insights into both the correctness and the computational efficiency of the generated SQL queries. VES = ∑_i=1^N 𝟙(V_i, V̂_i) · R(Y_i, Ŷ_i) / N, with R(Y_i, Ŷ_i) = √(E(Y_i)/E(Ŷ_i)). §.§ Main Results First, we present three qualitative examples that showcase HDLCopilot's capability in generating complex SQL queries, retrieving relevant information, and providing precise answers to user questions. Fig. <ref> shows a user question that asks to compare the width of the 4-input MUX across all libraries. The framework is able to answer the question with high precision. This analysis is potentially useful in providing designers with immediate insights into a specific cell's footprint variation among different library options, helping them determine which library is most suitable for their design requirement. Fig. <ref> also presents a cross-library comparison of leakage power in flip-flop cells. This analysis is useful for designers focusing on low-power applications, allowing them to quickly identify the most power-efficient cell for their specific needs. Fig. <ref> showcases the framework's ability to handle more sophisticated queries. 
In this example, the framework generates and executes a complex SQL query to compare the propagation delay of a 2-input MUX cell between two specific libraries. These three examples highlight the system's ability to perform a diverse set of analyses that would be time-consuming if done manually. We also present a quantitative assessment of the system's performance on the 52 examples in our evaluation set. We first evaluate the accuracy of the dispatcher and the selector independently to determine the most reliable setup for these agents. We conduct evaluations both with and without few-shot examples. Table <ref> shows that few-shot demonstrations generally improve the accuracy for both GPT3.5 and GPT-4 models. GPT-4o achieves the highest overall accuracy even without few-shot examples, showcasing its ability to comprehend and execute the task without additional context. The best setup is achieved by using the GPT-4 and GPT-4o models, with an overall dispatch accuracy of 99.35% across all three routing tasks and a table selection accuracy of 98.07%. Table <ref> shows the Execution Accuracy (EX) and Valid Efficiency Score (VES) of the entire framework. GPT-4o demonstrates superior performance, achieving the highest overall EX of 94.23% and VES of 98.58%. The results demonstrate that liberty queries are notably harder than the LEF and TechLef queries. This is mainly because liberty files contain more attributes and diverse data types. Nonetheless, GPT-4o demonstrates a high execution accuracy of 91.30% for the liberty set, and even generates more efficient SQL queries than the hand-crafted ground truth SQLs, as shown by its VES of 101.57%. § CONCLUSION In this paper, we introduced HDLCopilot, an LLM-powered multi-agent collaborative framework designed to streamline interactions with Process Design Kits (PDKs) in natural language format. To facilitate integration with LLM agents, the PDK information is first converted to a relational database, which the HDLCopilot agents interact with by generating SQL queries to retrieve relevant information. HDLCopilot also has the potential to integrate well with other hardware design copilots in order to give LLMs PDK awareness.
http://arxiv.org/abs/2407.12489v1
20240717111446
Dual-level Adaptive Self-Labeling for Novel Class Discovery in Point Cloud Segmentation
[ "Ruijie Xu", "Chuyu Zhang", "Hui Ren", "Xuming He" ]
cs.CV
[ "cs.CV" ]
Dual-level Adaptive Self-Labeling for NCD in Point Cloud Segmentation Ruijie Xu, Chuyu Zhang et al. ShanghaiTech University, Shanghai, China Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai, China {xurj2022,zhangchy2,renhui,hexm}@shanghaitech.edu.cn Dual-level Adaptive Self-Labeling for Novel Class Discovery in Point Cloud Segmentation Ruijie Xu, Chuyu Zhang (both authors contributed equally), Hui Ren, Xuming He. Code is available at https://github.com/RikkiXu/NCD_PC (GitHub). July 22, 2024 ======================================================================================================================================================================================================================================================================================================================== § ABSTRACT We tackle novel class discovery in point cloud segmentation, which discovers novel classes based on the semantic knowledge of seen classes. Existing work proposes an online point-wise clustering method with a simplified equal class-size constraint on the novel classes to avoid degenerate solutions. However, the inherent imbalanced distribution of novel classes in point clouds typically violates the equal class-size constraint. Moreover, point-wise clustering ignores the rich spatial context information of objects, which results in less expressive representations for semantic segmentation. To address the above challenges, we propose a novel self-labeling strategy that adaptively generates high-quality pseudo-labels for imbalanced classes during model training. In addition, we develop a dual-level representation that incorporates regional consistency into the point-level classifier learning, reducing noise in the generated segmentation. Finally, we conduct extensive experiments on two widely used datasets, SemanticKITTI and SemanticPOSS, and the results show our method outperforms the state of the art by a large margin. § INTRODUCTION Point cloud segmentation is a core problem in 3D perception <cit.> and potentially useful for a wide range of applications, such as autonomous driving and intelligent robotics <cit.>. Recently, there has been tremendous progress in semantic segmentation of point clouds due to the utilization of deep learning techniques <cit.>. However, current segmentation methods primarily focus on a closed-world setting where all the semantic classes are known beforehand. As such, they have difficulty coping with open-world scenarios where both known and novel classes coexist, which are commonly seen in real-world applications. For open-world perception, a desirable capability is to automatically acquire new concepts based on existing knowledge <cit.>. While there has been much effort devoted to addressing the problem of novel class discovery for 2D or RGBD images <cit.>, few works have explored the corresponding task for 3D point clouds. Only recently, Riz et al. <cit.> propose an online point-wise clustering method for discovering novel classes in 3D point cloud segmentation. To avoid degenerate solutions, their method relies on an equal class-size constraint on the novel classes. 
Despite its promising results, such a simplified assumption faces two key challenges: First, the distribution of novel classes in point clouds is inherently imbalanced due to the different physical sizes of objects and the density of points. Imposing the equal-size constraint can be restrictive, causing the splitting of large classes or the merging of smaller ones. In addition, point-wise clustering tends to ignore the rich spatial context information of objects, which leads to less expressive representations for semantic segmentation. To tackle the above challenges, we propose a dual-level adaptive self-labeling framework for novel class discovery in point cloud segmentation. The key idea of our approach is two-fold: 1) We design a novel self-labeling strategy that adaptively generates high-quality imbalanced pseudo-labels for model training, which facilitates clustering novel classes of varying sizes; 2) To incorporate semantic context, we develop a dual-level representation of 3D points by grouping points into regions and jointly learning the representations of novel classes at both the point and region levels. Such a dual-level representation imposes additional constraints on grouping the points likely belonging to the same category. This helps in mitigating the noise in the generated segmentation. Specifically, our framework employs an encoder to extract point features for the input point cloud and average pooling to compute representations of pre-computed regions. Both types of features are fed into a prototype-based classifier to generate predictions across both known and novel categories for each point and region. To learn the feature encoder and class prototypes, we introduce a self-labeling-based learning procedure that iterates between pseudo-label generation for the novel classes and the full model training with cross-entropy losses on points and regions. Here the key step is to generate imbalanced pseudo labels, which is formulated as a semi-relaxed Optimal Transport (OT) problem with adaptive regularization on the class distribution. Along with the training, we employ a data-dependent annealing scheme to adjust the regularization strength. Such a design prevents discovering degenerate solutions while enhancing the model's flexibility in learning the imbalanced data distributions. To demonstrate the effectiveness of our approach, we conduct extensive experiments on two widely-used datasets: SemanticKITTI <cit.> and SemanticPOSS <cit.>. The experimental results show that our method outperforms the state-of-the-art approaches by a large margin. Additionally, we conduct comprehensive ablation studies to evaluate the significance of the different components of our method. The contributions of our method are summarized as follows: * We propose a novel adaptive self-labeling framework for novel class discovery in point cloud segmentation, better modeling imbalanced novel classes. * We develop a dual-level representation for learning novel classes in point cloud data, which incorporates semantic context via augmenting the point prediction with regional consistency. * Our method achieves significant performance improvement on the SemanticPOSS and SemanticKITTI datasets across nearly all the experimental settings. § RELATED WORK *Point cloud semantic segmentation. Point cloud semantic segmentation has attracted much attention in recent years <cit.>. 
While previous methods have made significant progress, their primary focus is on closed-world scenarios that heavily rely on annotations for each class and cannot address open-world challenges. In contrast, we aim to develop a model to discover novel classes in 3D open-world scenarios. In the context of point cloud representation learning, incorporating spatial context is pivotal for enhancing representation learning. Several works <cit.> introduce a hierarchical representation learning strategy that leverages regions as intermediaries to connect points and semantic clusters. Unlike them, we develop a dual-level learning strategy that concurrently learns to map points and regions to semantic classes. Thanks to the learning of region-level representation, our method is less sensitive to the local noises in point clouds. Moreover, we cluster regions into semantic classes by an imbalance-aware self-labeling algorithm instead of simple K-Means. *Novel class discovery. The majority of research on Novel Class Discovery (NCD) has focused on learning novel visual concepts in the 2D image domain via designing a variety of unsupervised losses on novel class data or regularization strategies <cit.>. Among them, EUMS <cit.> addresses novel class discovery in semantic segmentation, employing a saliency model for clustering novel classes, along with entropy ranking and dynamic reassignment for clean pseudo labels. More relevantly, Zhang et al. <cit.> consider the NCD task in long-tailed classification scenarios, and develop a bi-level optimization strategy for model learning. It adopts a fixed regularization to prevent degeneracy, imposing strong restrictions on learned representations, and a complex dual-loop iterative optimization procedure. In contrast, we propose an adaptive regularization strategy, which is critical for the success of our self-labeling algorithm. Moreover, our formulation leads to a convex pseudo-label generation problem, efficiently solvable by a fast scaling algorithm <cit.> (see Appendix A for detailed comparisons). Perhaps most closely related to our work is <cit.>, which explored the NCD problem for the task of point cloud semantic segmentation. Assuming a uniform distribution of novel classes, they develop an optimal-transport-based self-labeling algorithm to cluster novel classes. However, the method neglects intrinsic class imbalance and spatial context in point cloud data, often leading to sub-optimal clustering results. *Optimal transport for pseudo labeling. Unlike naive pseudo labeling <cit.>, Optimal Transport (OT) <cit.>-based methods allow us to incorporate prior class distribution into pseudo-labels generation. Therefore, it has been used as a pseudo-labels generation strategy for a wide range of machine learning tasks, including semi-supervised learning <cit.>, clustering <cit.>, and domain adaptation <cit.>. However, most of these works assume the prior class distribution is either known or simply the uniform distribution, which is restrictive for NCD. By contrast, we consider a more practical scenario, where the novel class distribution is unknown and imbalanced, and design a semi-relaxed OT formulation with a novel adaptive regularization. § METHOD In this section, we first introduce the problem setup of novel class discovery for point cloud segmentation and an overview of our method in Sec.<ref>. We then describe our network architecture, including dual-level representation of point clouds in Sec.<ref>. 
Subsequently, we present in detail our adaptive self-labeling framework for model learning that discovers the novel classes in Sec.<ref>. Finally, we introduce our strategy for estimating the number of novel classes in Sec.<ref>. §.§ Problem Setup and Overview For the task of point cloud segmentation, the novel class discovery problem aims to learn to classify 3D points of a scene into known and novel semantic classes from a dataset consisting of annotated points from the known classes and unlabeled points from novel ones. Formally, we consider a training set of 3D scenes, where each scene comprises two parts: 1) an annotated part of the scene {(x^s_n, y^s_n)}^N_n=1, which belongs to the known classes C^s and consists of original point clouds along with the corresponding labels for each point; 2) an unknown part of the scene {(x^u_m)}^M_m=1, which belongs to the novel classes C^u and does not contain any label information. These two sets C^s and C^u are mutually exclusive, i.e., C^s ∩ C^u = ∅. Our goal is to learn a point cloud segmentation network that can accurately segment new scenes in a test set, each of which includes both known and novel classes. To tackle the challenge of discovering novel classes in point clouds, we introduce a dual-level adaptive self-labeling framework to learn a segmentation network for both known and novel classes. The key idea of our method includes two aspects: 1) utilizing the spatial smoothness prior of point clouds to generate regions and developing a dual-level representation that incorporates regional consistency into the point-level classifier learning; 2) generating imbalanced pseudo-labels with a novel adaptive regularization. An overview of our framework is depicted in Fig.<ref>. §.§ Model Architecture We adopt a generic segmentation model architecture consisting of a feature encoder for the input point cloud and a classifier head to generate the point-wise class label prediction. Note that to capture both known and novel classes, our feature encoder is shared by all the classes C^s ∪ C^u and the output space of our classifier head also includes known and novel classes. Below we first introduce our feature representation and encoder, followed by the classifier head. Dual-level Representation. Instead of treating each point independently, we exploit the spatial smoothness prior of 3D objects in our representation learning. To this end, we adopt a dual-level representation of point clouds that describes the input scene at different granularities. Specifically, given an input point cloud 𝐗, we first use a backbone network f_θ to compute a point-wise feature 𝐙^p={𝐳_i^p}, where 𝐳_i^p∈ℝ^D × 1. In this work, we employ MinkowskiUNet <cit.> for the backbone. In addition, we cluster points into regions based on their coordinates and then compute regional features by average pooling of point features. Concretely, during training, we first utilize DBSCAN <cit.> to generate K_i regions, ℛ={r_k}^K_i_k=1, for the unlabeled points in sample i, and calculate the regional features as follows, {r_k}^K_i_k=1←DBSCAN({x^u_i}^M_i=1), 𝐳^r_k =AvgPool{𝐳_i^p|𝐳_i^p=f_θ(x_i^u), x_i^u∈ r_k}, where 𝐳^r_k is the feature of region r_k. Such a dual-level representation allows us to enforce regional consistency in representation learning. Prototype-based Classifier. We adopt a prototype-based classifier design for generating the point-wise predictions. 
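Before detailing the prototypes, the region construction of the previous paragraph (DBSCAN over the unlabeled coordinates followed by average pooling of point features) can be sketched as follows; the clustering hyperparameters and the random inputs below are placeholders rather than the settings used in this work.

```python
# Minimal sketch of region generation and region-feature pooling (placeholder values).
import numpy as np
from sklearn.cluster import DBSCAN

def region_features(coords, point_feats, eps=0.5, min_samples=10):
    """coords: (M, 3) unlabeled point coordinates; point_feats: (M, D) encoder features."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    regions, feats = [], []
    for k in np.unique(labels):
        if k == -1:                       # DBSCAN noise points: kept point-wise only
            continue
        idx = np.where(labels == k)[0]
        regions.append(idx)
        feats.append(point_feats[idx].mean(axis=0))   # AvgPool over the region
    pooled = np.stack(feats) if feats else np.empty((0, point_feats.shape[1]))
    return regions, pooled

coords = np.random.rand(1000, 3) * 20.0    # placeholder scene coordinates (metres)
feats  = np.random.rand(1000, 32)          # placeholder point features
regions, z_r = region_features(coords, feats)
print(len(regions), z_r.shape)
```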
Specifically, we introduce a set of prototypes for known and novel classes, denoted as h = [h^s, h^u] ∈ℝ^D×(|C^s|+|C^u|), and D denotes the dimension of the last-layer feature. For each point or region, we compute the cosine similarity between its feature and the prototypes, followed by Softmax to predict the class probabilities. Here we use the same set of prototypes for the points and regions, which enforces a consistency constraint within each region and results in a more compact representation for each class. §.§ Adaptive Self-labeling Framework To handle class-imbalanced data, we propose an adaptive self-labeling framework that dynamically generates imbalanced pseudo-labels. To this end, we adopt the following loss function for the known and novel classes, ℒ = ℒ_s+ αℒ_u^p+βℒ_u^r, where ℒ_s is the cross-entropy loss for known classes, ℒ_u^p is point-level loss and ℒ_u^r is region-level loss for novel classes. α and β are weight parameters. For the novel classes, we first generate pseudo-labels for points and regions by solving a semi-relaxed Optimal Transport problem and then adopt the cross-entropy loss with the generated labels. The pseudo-code of our algorithm is shown in Appendix B and below we will focus on our novel pseudo-label generation process. Imbalanced Pseudo Label Generation. The pseudo-labels generation for balanced classes can be formulated as an optimal transport problem as follows <cit.>: min_𝐐1/M⟨𝐐,-log𝐏^u⟩_F, s.t. 𝐐1_|C^u|=1_M, 𝐐^⊤1_M= M/|C^u|1_|C^u|, where 𝐐∈ℝ^M× |C^u| are the pseudo labels of unlabeled data, <,>_F is Frobenius inner product and 𝐏^u are the output probabilities of the model. For imbalanced point cloud data, we relax the second constraint on the class sizes in <ref>, which leads to a parameterized semi-relaxed optimal transport problem as below: min_𝐐ℱ_u(𝐐,γ )=1/M⟨𝐐,-log𝐏^u⟩_F+γ KL(1/M𝐐^⊤1_M,1/|C^u|1_|C^u|) s.t. 𝐐∈{𝐐∈ℝ^M× |C^u||𝐐1_|C^u|=1_M}, where γ is a weight coefficient for balancing the constraint on cluster size distribution in the second term. We further add an entropy term -ϵℋ (1/M𝐐) to <ref> and for any given γ, this entropic semi-relaxed OT problem can be efficiently solved by fast scaling algorithms <cit.>. <ref> outlines the optimization process, and further details are provided in Appendix A. In this work, we propose a novel adaptive regularization strategy that adjusts the weight γ according to the progress of model learning, significantly improving pseudo-label quality. Details of our strategy will be illustrated subsequently. Adaptive Regularization Strategy. The objective <ref> aims to strike a balance between the distribution represented by model prediction 𝐏^u and the uniform prior distribution. A large γ tends to prevent the model from learning a degenerate solution, e.g. assigning all the samples into a single novel class, but it also restricts the model's capacity to learn the imbalanced data. One of our key insights is that the imbalanced NCD learning requires an adaptive strategy for setting the value of γ during the training. Intuitively, in the early training stage where the model performance is relatively poor, a larger constraint on 𝐐^⊤1_M is needed to prevent degenerate solutions. As the training progresses, the model gradually learns meaningful clusters for novel classes, and the constraint should be relaxed to increase the flexibility of pseudo-label generation. 
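For a fixed γ, the entropic semi-relaxed problem above admits a Sinkhorn-like scaling solution. The following is a minimal sketch of this pseudo-label generation step, given for illustration only: it assumes the standard scaling updates for semi-relaxed OT (hard row marginal, KL-relaxed column marginal with exponent γ/(γ+ε)); the actual implementation detailed in Appendix A may differ, and the variable names are ours.

import torch

def semi_relaxed_sinkhorn(cost, gamma, eps=0.05, n_iter=50):
    # cost: (M, |C^u|) matrix, here -log P^u from the model predictions.
    # Rows (points) are constrained to a uniform marginal; columns (novel
    # classes) are softly pushed towards the uniform prior with weight gamma.
    M, Cu = cost.shape
    a = torch.full((M,), 1.0 / M)        # exact row marginal
    b = torch.full((Cu,), 1.0 / Cu)      # relaxed column prior
    K = torch.exp(-cost / eps)
    v = torch.ones(Cu)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = (b / (K.T @ u)) ** (gamma / (gamma + eps))
    plan = u[:, None] * K * v[None, :]
    return plan / plan.sum(dim=1, keepdim=True)   # per-point soft pseudo-labels Q

The returned rows of Q serve as soft pseudo-labels in the cross-entropy losses ℒ_u^p and ℒ_u^r; the schedule for γ itself is described next.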
To achieve this adaptive relaxation, we develop an annealing-like strategy for adjusting γ, inspired by the ReduceLROnPlateau method that reduces the learning rate when the loss does not decrease. Here we employ the KL term in <ref> as a guide for decreasing γ, as the value of the KL term reflects the relationship between the distribution of pseudo labels and the uniform distribution. Specifically, our formulation for the adaptive regularization factor is as follows: γ_t+1 = λγ_t, if KL(1/M𝐐^⊤1_M,1/|C^u|1_|C^u|)≤ρ holds for T consecutive iterations, where ρ, λ, T and γ_0 are hyperparameters. Compared to typical step decay and cosine decay strategies, our adaptive strategy is aware of the model learning process and allows for more flexible control of γ based on the characteristics of the input itself. Hyperparameter Search. To search for the values of our hyperparameters, we design an indicator score that can be computed on the training dataset. Specifically, our indicator regularizes the total loss in <ref> with a KL term that measures the distance between the distribution of novel classes and the uniform distribution. Formally, the indicator is defined as follows: ℐ = ℒ + γ KL(1/M𝐐^⊤1_M,1/|C^u|1_|C^u|), where γ is obtained by <ref>. Empirically, this indicator score provides a balanced evaluation of the model's performance on the known and novel classes. §.§ Estimate the number of novel classes To deal with realistic scenarios, where the number of novel classes |C^u| is unknown, we extend the classic estimation method <cit.> in NCD to point cloud semantic segmentation for estimating |C^u|. Specifically, we extract representations of the training data from a model pre-trained on the known classes, define a range of possible total class counts (|C^s| ≤ |C_all| ≤ a maximum number of classes), and apply K-Means to cluster the labeled and unlabeled point clouds for different |C_all|. Then, we evaluate the clustering performance on the known classes under different |C_all|, and select the |C_all| with the highest clustering performance as the estimated total number of classes. § EXPERIMENTS §.§ Experimental setup *Dataset. We perform evaluation on the widely-used SemanticKITTI <cit.> and SemanticPOSS <cit.> datasets. The SemanticKITTI dataset consists of 19 semantic classes, while the SemanticPOSS dataset contains 13 semantic classes. Both datasets have intrinsic class imbalances. For a fair comparison with existing works <cit.>, we divide each dataset into 4 splits and select one split as novel classes, while treating the remaining splits as the known classes. Additionally, to assess the effectiveness of our method under more challenging conditions, we further split the SemanticPOSS dataset into two parts, selecting one part as novel classes. The dataset details are provided in Appendix C. *Evaluation Metric. Following the official guidelines of SemanticKITTI and SemanticPOSS, we conduct evaluations on sequences 08 and 03, respectively. These sequences contain both known and novel classes. For the known classes, we report the IoU for each class. Regarding the novel classes, we employ the Hungarian algorithm to first match cluster labels with their corresponding ground truth labels. Subsequently, we present the IoU values for each of these novel classes. Additionally, we report the mean over all known and all novel classes. *Implementation Details. We follow <cit.> and adopt the MinkowskiUNet-34C <cit.> network as our backbone.
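For reference, the annealing rule in <ref> amounts to a lightweight scheduler that is stepped once per training iteration with the current value of the KL term. A minimal sketch follows (the class and attribute names are ours, and the initial value γ_0=1 is only a placeholder default):

class AdaptiveGamma:
    # Multiply gamma by lam once the pseudo-label distribution stays close to
    # uniform (KL <= rho) for T consecutive iterations, then reset the counter.
    def __init__(self, gamma0=1.0, lam=0.5, rho=0.005, patience=10):
        self.gamma, self.lam, self.rho, self.patience = gamma0, lam, rho, patience
        self.counter = 0

    def step(self, kl_value):
        # kl_value: KL(Q^T 1 / M, uniform) computed from the current pseudo-labels
        if kl_value <= self.rho:
            self.counter += 1
            if self.counter >= self.patience:
                self.gamma *= self.lam
                self.counter = 0
        else:
            self.counter = 0
        return self.gamma

With λ=0.5, ρ=0.005 and T=10 (the values used in the experiments below), γ is reduced each time the pseudo-label distribution remains within ρ of uniform for T consecutive iterations.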
For the parameters in DBSCAN, we set the min_samples to a reasonable value of 2, and select an epsilon value of 0.5, ensuring that 95% of the point clouds are included in the region branch learning process. A detailed analysis of DBSCAN is included in Appendix J. For the input point clouds, we set the voxel size as 0.05 and utilize the scale and rotation augmentation to generate two views. The scale range is from 0.95 to 1.05, and the rotation range is from -π / 20 to π / 20 for three axes. We train 10 epochs and set batch size as 4 for all experiments. The optimizer is Adamw, and the initial learning rate is 1e-3, which decreases to 1e-5 by a cosine schedule. For the hyperparameters, we set α=β =1 and fix λ at 0.5. We choose T=10 and ρ=0.005 based on the indicator mentioned in <ref> and analyze them in the ablation study. Both the point- and region-level self-labeling algorithms employ the same parameters. All experiments are conducted on a single NVIDIA A100. §.§ Results SemanticPOSS Dataset. As presented in <ref>, our approach exhibits significant improvements in novel classes over the previous method across all four splits. Specifically, we achieve an increase of 12.7% and 6.2% in split 0 and 1, respectively. It is worth noting that the fully supervised upper bounds for novel classes in split 0 and 1 are 72.7% and 53.3%, respectively, and the performance gaps have been significantly reduced. In the more challenging split 2 and split 3, we observe gains of 3.6% and 4.7%, respectively. The corresponding upper bounds for these splits are 26.9% and 33.2%, indicating their increased difficulty compared to splits 0 and 1. On average, we achieve an IoU of 30.2% for novel classes across all four splits, outperforming NOPS (22.5%) by 7.7%. In addition, we provide a detailed comparison with NOPS on head, medium, and tail classes in Appendix D, as well as under a more comparable setting that applies our training strategy to NOPS in Appendix E. To further verify that our method can alleviate the imbalanced problem, we divided the SemanticPOSS dataset into two splits, creating a more severe imbalance scenario that poses a greater challenge for clustering novel classes. As shown in <ref>, on novel classes, our method outperforms NOPS significantly on both splits, with a margin of 7.6% on split 0 and 5.1% on split 1. In particular, for the novel classes, we observe that our improvement mainly stems from the medium classes, such as person and bike. It is worth noting that NOPS employs extra training techniques, such as multihead and overclustering, whereas we use a simpler pipeline without needing them, further demonstrating our effectiveness. SemanticKITTI Dataset. The results in <ref> demonstrate our superior performance compared to previous methods on different splits. Specifically, we achieve significant improvements of 8.6%, 3.3%, and 3.6% on splits 0, 1, and 2, respectively, for novel classes. The supervised upper bounds for these splits are 82.0%, 42.4%, and 39.6%, respectively. In split 3, our results are slightly higher than NOPS by 0.2%, possibly due to the scarce presence of these novel classes in split 3. On average across all four splits, our approach achieves an IoU of 27.5%, surpassing NOPS (23.4%) by 4.1% on novel classes. Visualization Analysis. Additionally, in <ref>, we perform visual comparisons on the results between NOPS and our method, and it is evident that our method shows significant improvements compared to NOPS. 
Specifically, as shown in the first row of <ref>, NOPS produces noisy predictions due to uniform constraints, mixing medium classes (e.g., building) and tail classes (e.g., car). In the second and third rows of <ref>, NOPS often confuses between medium and head classes, such as building and plants, as well as parking and car. In contrast, our method achieves better results for both datasets due to adaptive regularization and dual-level representation learning, generating high-quality imbalanced pseudo labels. More visual comparisons for additional splits are provided in the Appendix K. §.§ Ablation Study Component Analysis. To analyze the effectiveness of each component, we conduct extensive experiments on split 0 of the SemanticPOSS dataset. Here we provide ablation on three components, including Imbalanced Self-Labeling (ISL), Adaptive Regularization (AR), and Region-Level Branch (Region). As shown in <ref>, compared to baseline which employs equal-size constraints, imbalanced self-labeling improves performance by 4.2%. The confusion matrix in <ref> indicates that except for the highly-accurate class “ground", there is a significant improvement in the head and medium classes. This phenomenon is clearly depicted in <ref>, where the predictions of the baseline exhibit noticeable noise. From the second and third rows of <ref>, the adaptive regularization leads to a significant improvement of 8.2% in split0 and 4.5% in overall splits. As shown in <ref>, adaptive regularization enhances the quality of pseudo-labels for each class, especially for the head class (plants). We also visualize the class distribution of pseudo-labels in Appendix F , which shows adaptive regularization provides greater flexibility than fixed regularization term. According to the third, fourth and last rows of <ref>, the inclusion of the region-level branch leads to a 9.1% improvement and an additional 4.2% improvement built upon the AR. In addition, more experiments and analysis on prototype learning are included in Appendix G. In <ref>, there's a significant improvement in pseudo-labels for each category, particularly for the tail class (car) and the head class (plants). From <ref>, it is evident that the region-level branch can correct cases where a single object is mistakenly labeled as multiple categories. Due to the utilization of spatial priors, where closely-located points are highly likely to belong to the same category, our region-level branch can correct misclassifications by considering context from neighboring points, preventing splitting a single object into multiple entities. Those experiments validate the effectiveness of each component in our method. Estimate the number of novel classes. For computational simplicity, we conduct experiments on splits 0 of the SemanticPOSS dataset and randomly sample 800,000 points from all scenes to estimate |C^u|. We set max classes to 50, which is an estimate of the maximum number of new classes that might appear in a typical scene. The estimated |C^u| is 3, which is close to the ground truth value (GT is 4). Finally, we conduct experiments with |C^u| as 3. As <ref> illustrated our method still outperforms NOPS by a large margin. Adaptive Regularization and Hyperparameters Selection. To analyze the impact of adaptive regularization, we compare it with various fixed regularization factors, as illustrated in <ref>. 
We notice that employing a very small fixed γ, such as 0.05 as indicated in the table, results in a weak prior constraint, and the model tends to learn a degenerate solution where all samples are assigned to a single cluster. When the γ increases to 0.5, the model achieves optimal results, but the increment decreases when the γ further increases. Compared with adaptive γ, the optimal results of fixed γ is nearly 8.2% lower, demonstrating that the adoption of an adaptive γ not only enhances the model's flexibility but also prevents any performance degradation. Furthermore, we experiment with the setup adopting the GT class distribution and substituting the KL constraint in <ref> with an equality constraint. Surprisingly, the results indicate that the GT class distribution constraint is not the optimal solution for clustering imbalanced novel classes. At last, in <ref>, we visualize the γ curves for SemanticPOSS in four splits. Split 0 exhibits the highest rate of change, followed by Split 1, while Splits 2 and 3 remain constant, indicating that our strategy is adaptive to each dataset. To further validate the effectiveness of adjusting γ based on KL divergence, we also compare it with typical step decay and cosine annealing strategies. For the step decay, we set the initial γ to 1 and decay it by multiplying it with λ every epoch. For the cosine annealing approach, we also set the initial γ to 1 and reduce it to the minimum value (min γ). From the <ref> and <ref>, we observe that the results of simple step decay and cosine annealing are nearly 10% worse than adaptive γ (which is 44.2). We believe that these two typical strategies lack flexibility compared to adaptive γ. They might not facilitate the adaptive control of the γ decay process based on the model learning process. To choose the hyperparameters ρ and T according to the indicator outlined in <ref>, we conduct experiments for various values of ρ and T. The results are displayed in <ref> and <ref>. Additionally, we plot the indicator's curve for each experiment in <ref> and <ref>. The plots reveal that when ρ falls within the range of 0.01 to 0.005, and T is set between 5 and 20, the indicator value remains low while achieving a high novel IoU. Those results demonstrate the efficiency of our hyperparameters selection strategy and the robustness of our method. Limitations. One limitation is our problem setup which follows  <cit.> and only addresses scenarios where unlabelled data constitutes novel classes. In contrast, a more realistic open-world setting necessitates handling situations where both known classes and novel classes lack labels. Nevertheless, we anticipate that our method will establish a robust baseline and stimulate further research aimed at addressing the challenges presented by practical open-world situations. § CONCLUSION In this paper, we propose a novel dual-level adaptive self-labeling framework for novel class discovery in point cloud segmentation. Our framework formulates the pseudo label generation process as a Semi-relaxed Optimal Transport problem and incorporates a novel data-dependent adaptive regularization factor to gradually relax the constraint of the uniform prior based on the distribution of pseudo labels, thereby generating higher-quality imbalanced pseudo labels for model learning. In addition, we develop a dual-level representation that leverages the spatial prior to generate region representation, which reduces the noise in generated segmentation and enhances point-level classifier learning. 
Furthermore, we propose a hyperparameter search strategy based only on the training set. Extensive experiments on two widely used datasets, SemanticKITTI and SemanticPOSS, demonstrate the effectiveness of each component and the superiority of our method. §.§.§ Acknowledgments This work was supported by the National Science Foundation of China under grant 62350610269, the Shanghai Frontiers Science Center of Human-centered Artificial Intelligence, and the MoE Key Lab of Intelligent Perception and Human-Machine Collaboration (ShanghaiTech University).
http://arxiv.org/abs/2407.12159v1
20240716203037
The IoT Breaches your Household Again
[ "Davide Bonaventura", "Sergio Esposito", "Giampaolo Bella" ]
cs.CR
[ "cs.CR" ]
The IoT Breaches your Household Again* *In Proceedings of the 21st International Conference on Security and Cryptography (Secrypt 2024, https://secrypt.scitevents.org/Home.aspx?y=2024), ISBN 978-989-758-709-2, ISSN 2184-7711, pages 475-482. DOI: https://doi.org/10.5220/0012767700003767 1st Davide Bonaventura, Dipartimento di Matematica e Informatica, Università di Catania, Catania, Italy, d.bonaventura@studium.unict.it, https://orcid.org/0009-0004-4463-7991 2nd Sergio Esposito, Information Security Group, Royal Holloway University of London, Egham, UK, sergio.esposito.2019@live.rhul.ac.uk, https://orcid.org/0000-0001-9904-9821 3rd Giampaolo Bella, Dipartimento di Matematica e Informatica, Università di Catania, Catania, Italy, giamp@dmi.unict.it, https://orcid.org/0000-0002-7615-8643 July 22, 2024 § ABSTRACT Despite their apparent simplicity, devices like smart light bulbs and electrical plugs are often perceived as exempt from rigorous security measures. However, this paper challenges this misconception, uncovering how vulnerabilities in these seemingly innocuous devices can expose users to significant risks. This paper extends the findings outlined in previous work, introducing a novel attack scenario. This new attack allows malicious actors to obtain sensitive credentials, including the victim's Tapo account email and password, as well as the SSID and password of her local network. Furthermore, we demonstrate how these findings can be replicated, either partially or fully, across other smart devices within the same IoT ecosystem, specifically those manufactured by Tp-Link. Our investigation focused on the Tp-Link Tapo range, encompassing smart bulbs (Tapo L530E, Tapo L510E V2, and Tapo L630), a smart plug (Tapo P100), and a smart camera (Tapo C200). Utilizing similar communication protocols, or slight variants thereof, we found that the Tapo L530E, Tapo L510E V2, and Tapo L630 are susceptible to complete exploitation of all attack scenarios, including the newly identified one. Conversely, the Tapo P100 and Tapo C200 exhibit vulnerabilities to only a subset of attack scenarios. In conclusion, by highlighting these vulnerabilities and their potential impact, we aim to raise awareness and encourage proactive steps towards mitigating security risks in smart device deployment. IoT, Tp-Link, Smart Homes, Smart Devices, Smart Bulb, Smart Plug, Smart Camera, Penetration Test, Vulnerability Assessment. § INTRODUCTION The digital revolution in Internet of Things (IoT) devices has led to “smart” devices becoming more and more an integral part of our daily lives. From smart home appliances to industrial sensors, IoT has unlocked a world of convenience, efficiency, and innovation.
The number of IoT devices worldwide is forecast to almost double from 15.1 billion in 2020 to more than 29 billion IoT devices in 2030 <cit.>. The interconnectedness always brings forth significant security challenges that cannot be ignored. Due to their often neglected security, IoT devices are typically preferred devices by attackers. On average, 54% of organizations experience attempted cyberattacks targeting IoT devices every week. This indicates a 41% increase in the average number of weekly attacks per organization targeting IoT devices compared to 2022 <cit.>. More and more inexpensive IoT devices are designed without a security-first mindset <cit.> <cit.>, and their long lifecycles can expose them to evolving threats for years. The consequences of inadequate IoT security can be far-reaching. A compromised IoT device not only poses a risk to the privacy and safety of users but can also serve as a gateway to launch larger-scale attacks on critical infrastructures. We observe that usually different devices produced by the same manufacturer, belonging to the same product line, e.g. Tapo, share parts of the firmware and application protocols used for communication. Following these observations, this paper rests on the following research questions: (i) Do IoT devices from the same vendor share similar vulnerabilities? (ii) What consequences does this have on the end user's security, privacy and safety? §.§ Contributions To answer the research questions we chose Tp-Link's IoT ecosystem as the target. Our experiments are focused on the following Tp-Link Tapo IoT devices. * Tp-Link Tapo Smart Wi-Fi Light device, Multicolor (L530E) <cit.>, targeted by previous work, leading to the discovery of several vulnerabilities <cit.>. * Tp-Link Tapo Smart Wi-Fi Light Bulb, Dimmable (L510E V2) <cit.>. * Tp-Link Tapo Smart Wi-Fi Spotlight, Multicolor (L630) <cit.>. * Tp-Link Tapo Mini Smart Wi-Fi Socket (P100) <cit.>. * Tp-Link Tapo Pan/Tilt Home Security Wi-Fi Camera (C200) <cit.>. We found that the tested Tapo devices, part of Tp-Link's IoT ecosystem, use the protocols outlined in previous work <cit.>, or its variants. Consequently, we had the intuition that all attack scenarios described in previous work, or at least some of them, could most likely be exploited across all devices in the Tp-Link IoT ecosystem. Hence, our findings regarding the tested Tp-Link devices can be summarised as follows: * The L510E V2 and the L630 use the same protocols as the L530E, thereby making all attack scenarios exploitable. * Communications between the Tapo app and the C200 are secured via TLS encryption, limiting exploitation of the vulnerabilities. * The configuration process of the P100 occurs over Bluetooth rather than Wi-Fi, restricting exploitability to attack scenarios that don't target association and configuration processes. Additionally, we introduce a new attack scenario leveraging the first two vulnerabilities outlined in previous work <cit.>. In this scenario, the attacker authenticates as the Tapo device to the Tapo app. As a result, the attacker can obtain the victim's Wi-Fi SSID and password, as well as her Tapo email and password. §.§ Ethics and Responsible Disclosure All experiments only involve resources owned by the authors of this work, including devices, Wi-Fi networks, accounts, emails, and passwords. No user or third-party data was accessed during the experiments. Tp-Link acknowledged the issues we responsibly reported through their Product Security Advisory (PSA) <cit.>. 
We actively collaborated with them, by testing the fixes and confirming the attack scenarios are no longer exploitable or do not give the attacker any advantage. Tp-Link confirmed that they already released the necessary fixes to address the vulnerabilities and that the changes do not affect the normal use and stability of the products. §.§ Paper Summary This document proceeds with a brief overview of relevant literature in the subsequent section (<ref>), followed by a concise summary of prior research (<ref>). Subsequently, the new attack scenario is explained in detail (<ref>). Then, for all devices covered by our study, a detailed description of the applicability or non-applicability of each attack scenario is provided(<ref>). Ultimately, pertinent conclusions are derived (<ref>). § RELATED WORK This section delves into the related work within the field of IoT security. Nebbione et al. <cit.> delved into popular IoT protocols for data sharing and service discovery. They underscored the security risks posed by protocol limitations, device constraints, and vulnerabilities. Their conclusion emphasizes the need for enhancing service discovery protocols, implementing end-to-end security, and raising user awareness about IoT security risks. In the work by Yaacoub et al. <cit.>, the authors underscore the importance of implementing proactive security measures in IoT systems, and highlight the limitations of traditional security methods. Their solution involves periodic ethical hacking simulations and penetration tests across various IoT components. In conclusion, the paper advocates for continuous training for all employees to make IoT systems more secure. Unlike similar studies often focused on individual devices, Heiding et al. <cit.> conducted systematic penetration tests on 22 smart devices across different categories commonly found in connected homes. As a result, a total of 17 vulnerabilities were uncovered and published as new CVEs. These vulnerabilities could grant attackers physical access to homes, posing significant risks to residents. In the work by Akhilesh et al. <cit.>, the authors focus on enhancing the security of smart home-based IoT devices through automated penetration testing. Manual testing of IoT devices is labour-intensive and requires in-depth knowledge. To streamline this process, authors developed an automated penetration testing framework. Five smart home IoT devices were selected for testing, and common vulnerabilities were identified. The Tp-Link devices were found to be the most vulnerable, while the Google Home Mini was the most secure. The study concludes that the framework can be used by non-experts, contributing to improved IoT security and safer smart homes. Researchers are also exploring various approaches to enhance the security levels of the IoT. For example, Hassija et al. <cit.> show how four different technologies, i.e., blockchain, fog computing, edge computing, and machine learning, can be used to increase the level of security in IoT, solving some of the main security issues present in the four layers in which an IoT application can be divided, which are sensing layer, network layer, middleware layer, and application layer. Finally, Salah and Khan <cit.> present and survey major security issues for the IoT environment and show how blockchain can solve many of them. 
§.§ Previous Attacks on Tapo Bulbs Previous work on Tapo L530E smart bulbs <cit.> delineates the communication process between Tapo devices and the Tapo app, comprising three primary macro-steps: (1) Device Discovery - allows the Tapo app to locate the Tapo device within the local network, and to get the Tapo device’s configuration; (2) Tapo Symmetric Key Exchange Protocol (TSKEP) - allows the Tapo app and the Tapo device to exchange a symmetric session key; (3) Tapo device usage - allows the user to use the Tapo device via the Tapo app, by sending get and set messages. Within these macro-steps, authors identify and explain four vulnerabilities: * Vulnerability 1. Lack of authentication of the Tapo device with the Tapo app allows an adjacent attacker to impersonate the Tapo device with the Tapo app during the TSKEP step. * Vulnerability 2. Hard-coded, short shared secret allows an adjacent attacker to obtain the secret for authentication during the Device Discovery phase. * Vulnerability 3. Lack of randomness during symmetric encryption allows an adjacent attacker to make the AES128-CBC scheme deterministic. * Vulnerability 4. Insufficient message freshness allows an adjacent attacker to replay messages both to the Tapo device and the Tapo app. These vulnerabilites were exploited by the authors in five attack scenarios, which we hereby summarise: * Attack Scenario 1, Fake Bulb Discovery Messages Generation, that allows to discover Tapo devices within the network and serve false configurations to the Tapo app. * Attack Scenario 2, Password Exfiltration from Tapo User Account, that allows to get the password in cleartext of the user's Tapo account, and its associated email account in hash form. * Attack Scenario 3, MITM Attack with a Configured Tapo L530E, that allows to perform a Man-in-the-Middle attack and violate the confidentiality and integrity of all messages exchanged between the Tapo app and the Tapo device. This results in the exfiltration of the Tapo account password in cleartext, and the associated email account in hash form. * Attack Scenario 4, Replay Attack with the Smart Bulb as Victim, that allows to replay previously intercepted messages. If the adversary can observe the smart bulb's behaviour when the message arrives, they can infer the message's meaning and reuse it at will. * Attack Scenario 5, MITM Attack with an Unconfigured Tapo L530E, that allows to perform a Man-in-the-Middle attack and intercept traffic between the Tapo app and the Tapo device during configuration. As Tapo username and password, together with the Wi-Fi SSID and Wi-Fi password are sent in Base64 encoding during configuration, the adversary is able to exfiltrate all information. Finally, the authors conduct experiments across three different network setups, denoted as Setup A, Setup B, and Setup C. In Setup A, both the victim (i.e., a phone running the Tapo app) and the adversary are connected to the same network, while the Tapo device is on a separate, remote network; in Setup B, the adversary, the victim and the Tapo device are all connected to the same local network, and the Tapo device is already configured; in Setup C, the adversary keeps deauthenticating <cit.> the Tapo device, resetting it to the unconfigured state, until the user connects it to the adversary's Wi-Fi honeypot, thinking it's their home network. 
§ BREACHING THE HOUSEHOLD AGAIN In this section, we present a novel attack scenario, which we call “Attack Scenario 6 - Passwords exfiltration with an unconfigured Tapo device”, following the enumeration within previous work on Tapo devices <cit.>. In this new attack scenario, the adversary is able to exfiltrate passwords using an unconfigured Tapo device. The devices used during the attack are: * A Wi-Fi switch to provide local connectivity. * A smart bulb Tapo series L530 with Hardware Version 1.0.0 and Firmware Version 1.1.9. * A Samsung smartphone running Android 11 and the Tapo app Version 2.8.14. * An Ubuntu 22.04 machine with 5.15.0-47 kernel. §.§ Setup D The network configuration we use during the attack, which we call Setup D, for consistency with previous work <cit.>, is as follows. * The victim wants to associate an unconfigured Tapo device with her Tapo account. * The Tapo app (hence, the victim) believes to be connected to the network X created by the Tapo device, but is actually connected to a network Y controlled by the attacker's Ubuntu device. This setup requires that the Tapo device has been reset or has not been configured yet. The attacker must only be connected to the network they control, and not to the access point started by the Tapo device. The victim's Tapo app must be connected to the network controlled by the attacker. In this setup, the victim opens the Tapo application and starts the device association process. The network configuration for this setup is shown in Figure <ref>. As shown in previous work <cit.>, the attacker can use the Wi-Fi deauthentication attack <cit.> to easily get the Setup D as well. Initially, the adversary can use the deauthentication attack to disconnect the Tapo device from the network to which it is connected, forcing the victim to reset it. Subsequently, after the Tapo device enters setup mode, the attacker can perform the same attack to deauthenticate the Tapo app from the network started by the Tapo device, trying to get the victim to connect to the network they control. §.§ Attack Scenario 6 In this experiment, we exploited two of the four vulnerabilities, in order: * Vulnerability 2, with the goal of creating fake device discovery response, * Vulnerability 1, with the goal of authenticating as the Tapo device to the Tapo app. The context in which we conduct the experiment is the Setup D(<ref>) previously described. The attack diagram is shown in Figure <ref>. The exploitation begins when the victim starts the association process within the Tapo app. In the beginning, the app starts broadcasting device discovery request. Hence, the attacker exploits his ability to create fake device discovery response to respond to various device discovery request from the victim. The attacker sets the response's messages fields as shown in Listing <ref>: * He sets the device_id and owner fields with random hex values. * He sets the device_type and device_model fields with the name and type of the device he wants to impersonate. * He sets the ip and port fields to point to an adversary-controlled server. * He sets the factory_default field to . This is important because it allows the application to understand that the response is coming from a device not yet associated with any accounts. Note that, differently from the Attack Scenario 2 described in previous work <cit.>, the attacker does not need the victim's owner id. 
[style=code, caption=JSON attack scenario 6, label=lst:attacco 6]
{
  "result": {
    "device_id": "RANDOM.HEX.VALUE",
    "owner": "RANDOM.HEX.VALUE",
    "device_type": "DEVICE.TYPE",
    "device_model": "DEVICE.MODEL",
    "ip": "ATTACKER.IP",
    "mac": "ATTACKER.PORT",
    "factory_default": true,
    "is_support_iot_cloud": false,
    "mgt_encrypt_schm": {
      "is_support_https": false,
      "encrypt_type": "AES",
      "http_port": 80
    }
  },
  "error_code": 0
}
After receiving the response, the Tapo app assumes that it comes from a device that needs to be associated. Therefore, it starts the TSKEP protocol with the attacking device. Because of Vulnerability 1, the TSKEP protocol does not give the Tapo app any evidence about the identity of the interlocutor. For this reason, the Tapo app assumes that the newly received key is shared with the device to be associated, while it is shared with the attacker instead. The attacker must then perform the association process with the Tapo app until the set_qs_info request. At that point, they can get the password of the victim's Tapo account and the associated email address, as well as the SSID and the password of the victim's local network. The attack can be summarised as follows: * The attacker gets the Device Discovery shared key and creates fake device discovery responses. Therefore, the authentication of the device discovery response fails. * The Tapo app executes the TSKEP protocol with the attacker instead of the Tapo device. Therefore, authentication of the Tapo device with the Tapo app fails. This results in an integrity loss. * The Tapo app shares the key with the attacker, hence the distribution of the session key fails. This results in a confidentiality loss. * The attacker can violate the confidentiality of the messages and get the password and the hash of the email of the victim's Tapo account as well as the SSID and the password of the victim's local network. This results in a confidentiality loss.
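To make the first step of this scenario concrete, the sketch below emulates the spoofed discovery responder. It is illustrative only: the UDP port and the request framing are placeholders rather than values taken from this paper, and the message authentication code that a genuine response must carry (computed with the hard-coded shared key of Vulnerability 2) is omitted, as are the subsequent TSKEP and set_qs_info exchanges.

import json
import socket

DISCOVERY_PORT = 20002  # assumption/placeholder: use the port observed in the app's discovery traffic
FAKE_RESPONSE = {
    "result": {
        "device_id": "RANDOM.HEX.VALUE",
        "owner": "RANDOM.HEX.VALUE",
        "device_type": "DEVICE.TYPE",
        "device_model": "DEVICE.MODEL",
        "ip": "ATTACKER.IP",
        "mac": "ATTACKER.PORT",
        "factory_default": True,
        "is_support_iot_cloud": False,
        "mgt_encrypt_schm": {"is_support_https": False, "encrypt_type": "AES", "http_port": 80},
    },
    "error_code": 0,
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", DISCOVERY_PORT))
while True:
    _request, addr = sock.recvfrom(4096)   # broadcast device discovery request from the Tapo app
    sock.sendto(json.dumps(FAKE_RESPONSE).encode(), addr)  # reply with the spoofed configuration from the listing above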
We hereby describe how we apply each attack scenario to the new device, highlighting any differences from previous work <cit.> as necessary. For newly tested devices, we will refer to the L510E's behaviour as a baseline. In later sections, we will only detail Attack Scenarios (AS) that deviate from this baseline. AS1 works with the Tapo L510E firmware tested. The key that this device uses for the Message Authentication Code is static and hardcoded, the same used by the Tapo L530E. Therefore, an attacker can create false device discovery messages for both the bulb and the app. AS2 works with the Tapo L510E firmware tested. The Tapo L510E communicates using the TSKEP protocol with the Tapo app. By creating fake device discovery response, the attacker can impersonate the Tapo L510E, prompting the app to start TSKEP with them. This allows the attacker to get the Tapo password and the hash of the victim's Tapo email. AS3 works with the Tapo L510E firmware tested. TSKEP lacks identity verification, enabling the attacker to perform a MITM attack on the Tapo L510E-Tapo app communication, compromising confidentiality. AS4 works with the Tapo L510E firmware tested. The Tapo L510E accepts all messages without checking their timestamp. This allows attackers to replay sniffed messages with non-expired session keys, enabling arbitrary command execution. AS5 works with the Tapo L510E firmware tested. During pairing, communications between the Tapo L510E and the Tapo app happen over Wi-Fi. Hence, the attacker can perform a MITM attack and hijack the association process. AS6 works with the Tapo L510E firmware tested. During pairing, Tapo L510E and Tapo app communicate via Wi-Fi. TSKEP's identity verification vulnerability allows MITM attacks, compromising the email and password of the victim's Tapo account, as well as the SSID and password of her local network. §.§.§ Tp-Link Tapo Smart Wi-Fi Spotlight, Multicolor (L630) We test an L630 with Hardware v1.0 and Firmware v1.0.3, using a Tapo app v2.16.112. We confirmed this device aligns with our baseline —– mirroring the behavior of the Tapo L510E V2. Thus, it shares all listed vulnerabilities and allows reproduction of all attack scenarios, including the new Attack Scenario 6 introduced in this paper. §.§.§ Tp-Link Tapo Mini Smart Wi-Fi Socket (P100) We test a P100 with Hardware v1.20.0 and Firmwares v1.4.9 and v1.4.16, using Tapo app v2.16.112. This device employs vulnerable protocols (<ref>), lacks HTTPS support, and uses CBC-AES128 encryption, exposing all vulnerabilities. Unlike previous devices, P100 uses Bluetooth for configuration, limiting attack scenarios to those involving already associated devices. Hence, Attack Scenarios 1 to 4 are aligned with our baseline, while Attack Scenarios 5 and 6 cannot be reproduced on Tapo P100 because the adversary is not able to perform the MITM attack during the bulb configuration process. §.§.§ Tp-Link Tapo Pan/Tilt Home Security Wi-Fi Camera (C200) We test a C200 with Hardware v1.0.0 and Firmware v1.1.18, using a Tapo app v2.16.112. Unlike the other analysed devices, the Tapo C200 supports HTTPS, utilizing TLS for TSKEP between the Tapo app and device even during configuration. This limits exposure to only Vulnerability 2. The use of TLS prevents message inference or traffic sniffing by requiring a valid certificate from the attacker. While TSKEP remains vulnerable to replay attacks, TLS encapsulation ensures security. Consequently, only Attack scenario 1 can be reproduced out of six attack scenarios. 
One potential attack involves downgrading the communication channel from HTTPS to HTTP. The attacker may attempt this by replying to the device discovery requests from the application with the same security parameters supported by the Tapo L530E, i.e., HTTPS not supported, as shown in Listing <ref>. However, we verified that this downgrade attack does not succeed. This is because the Tapo application does not accept as valid any device discovery response received from a C200 device that does not support HTTPS.
[style=code, caption=Attack sub-scenario 2 UDP discovery response, label=lst:c200_sub_attack]
{
  "error_code": 0,
  "result": {
    "device_id": "1234...441",
    "device_name": "Tapo_Camera_E3FF",
    "device_type": "SMART.IPCAMERA",
    "device_model": "C200",
    "ip": "192.168.1.55",
    "mac": "AA-BB-CC-DD-EE-FF",
    "hardware_version": "1.0",
    "firmware_version": "1.1.18 Build 220518 Rel.61472n(4555)",
    "factory_default": false,
    "is_support_iot_cloud": false,
    "mgt_encrypt_schm": {
      "is_support_https": false,
      "encrypt_type": "AES",
      "http_port": "Evil.tcp_port"
    }
  }
}
§.§ Firmware With Fixes For each device tested, we diligently communicated the discovered vulnerabilities to Tp-Link. The responsible disclosure process enabled Tp-Link to promptly identify and address the vulnerabilities. They developed new versions of the Tapo app and the Tapo devices' firmware, implementing security updates to resolve the issues. We then actively tested the beta versions of this firmware, confirming the mitigation of potential risks arising from the vulnerabilities, and providing feedback to the manufacturer. Although only three out of the four vulnerabilities, i.e., Vuln. 1, Vuln. 3, and Vuln. 4, were addressed with fixes, removing them indirectly mitigates the risk associated with the remaining vulnerability, i.e., Vuln. 2, making it acceptable. Therefore, even if the last vulnerability is still exposed, it would not pose a significant security risk to the end user. A summary of the vulnerabilities exposed by each target device running a firmware with fixes is shown in Table <ref>. Regarding the attack scenarios, we tested all six of them using the beta version of the Tapo app, specifically Version 2.17.206, and the device firmware provided by Tp-Link. Only one of the six attack scenarios can still be reproduced, i.e., Attack Scenario 1, Fake Bulb Discovery Messages Generation. However, the adversary's inability to reproduce the other scenarios renders Attack Scenario 1 virtually negligible in terms of risk to the victim, thus offering no advantage to the potential attacker. This observation confirms that all attack scenarios are effectively nullified, as none yields any results. A summary of the reproducible attack scenarios on each device running a firmware with fixes is shown in Table <ref>. § CONCLUSIONS In this paper, we attempted to exploit different Tapo devices using vulnerabilities that affected the Tapo L530E smart bulb, which were found in previous work. Results show that said vulnerabilities are present and exploitable in other devices belonging to the Tp-Link ecosystem and not exclusive to a specific Tapo device. More generally, to answer our first research question, this hints at the fact that the stack of technologies underlying IoT devices is shared between devices of the same family, and that advisories published for a single device may actually be helpful to both attackers and defenders in identifying the same vulnerabilities on other devices of the same ecosystem.
This is most likely not unique to the Tapo environment, but verification of this claim is left to future work. Additionally, we expanded previous work by introducing a new Attack Scenario, which we called “Attack Scenario 6”, and a novel network configuration to exploit the vulnerabilities, which we called “Setup D”. We then tested all attack scenarios on different Tapo devices, finding that they are mostly reproducible, with a few exceptions. Hence, we answer our second research question by verifying that exploitable vulnerabilities retain their potential for obtaining the Tapo account password of the victim user, even when exploited on other Tapo devices. This could allow the attacker to access the victim's account and control all associated devices. Additionally, the possibility of obtaining the password of the victim's private network should not be underestimated either, as network access can be the entry point for the attacker to execute different attacks on other devices within the network.
http://arxiv.org/abs/2407.12204v1
20240716221619
Acoustic modulation of individual nanowire quantum dots integrated into a hybrid thin-film lithium niobate photonic platform
[ "Thomas Descamps", "Tanguy Schetelat", "Jun Gao", "Philip J. Poole", "Dan Dalacu", "Ali W. Elshaari", "Val Zwiller" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "physics.app-ph" ]
§ ABSTRACT Surface acoustic waves (SAWs) are a powerful tool for controlling a wide range of quantum systems, particularly quantum dots (QDs) via their oscillating strain fields. The resulting energy modulation of these single photon sources can be harnessed to achieve spectral overlap between two QDs otherwise emitting at different wavelengths. In this study, we integrate InAsP/InP nanowire quantum dots onto a thin-film lithium niobate platform, a strong piezoelectric material, and embed them within Si_3N_4-loaded waveguides. We demonstrate emission wavelength modulation of 0.70nm at 13dBm with a single focused interdigital transducer (FIDT) operating at 400MHz, and achieve twice this modulation by using two FIDTs as an acoustic cavity. Additionally, we bring two QDs with an initial wavelength difference of 0.5nm into resonance using SAWs. This scalable strain-tuning approach represents a significant step towards producing indistinguishable single photons from remote emitters heterogeneously integrated on a single photonic chip, and paves the way for large scale on-chip quantum information processing using photonic platforms. § KEYWORDS quantum dots, single photon source, surface acoustic waves, thin-film lithium niobate, integrated photonics § INTRODUCTION Surface acoustic waves (SAWs), with their capacity to interact mechanically with both the supporting crystal and the materials on its surface, have shown significant interest for controlling various quantum systems, including superconducting qubits <cit.>, spin qubits <cit.>, quantum optomechanical cavities <cit.>, and single-photon emitters based on defect centers <cit.> or III/V semiconductor quantum dots (QDs). In the latter case, the oscillating electric field created by the SAW propagating on a piezoelectric medium was used to transport charge carriers to the QD and to control the emitter's charge state <cit.>. Additionally, the oscillating strain field induced by the SAW modulates the energy levels of the QD <cit.>. Utilizing this property, coherent coupling between acoustic phonons and single photons <cit.> as well as single-photon frequency shifting have been demonstrated <cit.>. These investigations, predominantly focused on a single QD, could be extended to multiple emitters on the same chip, each independently modulated by a SAW to tune their emission wavelengths. This advancement would be of technological interest, as it would address the variance in emission wavelengths of these sources <cit.>, a major limitation for their applications in integrated linear quantum computing <cit.> and quantum communication <cit.>, where photon indistinguishability is paramount. Typically generated by driving an interdigital transducer (IDT) patterned on a piezoelectric substrate with a microwave signal, SAWs offer several advantages over other tuning mechanisms. First, the emission wavelength can be either redshifted or blueshifted, unlike thermo-optic schemes based on local heating of the source, which always result in a redshift <cit.>. Secondly, QDs can be directly modulated without the need for doping the heterostructure and making electrical contacts, as required for Stark effect-based tuning <cit.>. Lastly, the localized strain field and fabrication simplicity of this method make it more scalable and robust compared to other strain mechanisms, such as those using global static fields applied with piezoelectric substrates <cit.> or MEMS technologies employing suspended films <cit.>. 
In this work, we examine InAsP/InP nanowire (NW) quantum dots (NWQDs), which are known for being bright sources of high-purity and indistinguishable single photons <cit.>. Unlike the monolithic approach, where self-assembled QDs are embedded in waveguides etched into the III/V heterostructure <cit.>, the site-controlled NWs are picked up and placed <cit.> onto an unreleased thin-film lithium niobate (LN) platform, as this strong piezoelectric material enables more efficient electro-mechanical transduction. The NWs are then integrated into Si_3N_4-loaded waveguides <cit.>, and positioned at the center of an acoustic delay line consisting of focused interdigital transducers (FIDTs). We achieve a modulation of the emission wavelength of 0.70nm by driving a single FIDT at 400MHz with a microwave power of 13dBm, and we double this modulation by driving two FIDTs as an acoustic cavity. Finally, we demonstrate that two waveguide-integrated NWQDs, whose emission wavelengths differ by 0.5nm, can be brought into resonance using SAWs. This result paves the way for generating indistinguishable single photons from multiple remote QDs on a single heterogeneous photonic integrated chip. § DESIGN AND METHODS An optical microscope image of the hybrid quantum photonic platform developed in this work is shown in <ref>(a), featuring four nanowire quantum dots each integrated into a photonic waveguide and positioned within an acoustic delay line. The wurtzite InP NWs embedding individual InAsP QDs <cit.> emitting around 900nm were picked up with a nano-manipulator inside a scanning-electron microscope (SEM), and transferred to a 300nm-thick Y-cut thin-film LN chip with 4.7 µm of buried SiO_2. The NWs were oriented along the crystallographic Z-axis. A 350nm-thick Si_3N_4 loading layer was then deposited by plasma-enhanced chemical vapor deposition (PECVD) on the whole surface and etched to define the photonic elements. The waveguides were 1.2 µm-wide and terminated with grating couplers used for exciting the QD and collecting the emitted photons. An SEM image of the photonic channel around the NW is presented in <ref>(b), while an SEM image of the waveguide-integrated NW is shown in <ref>(c). The alignment of the waveguide relative to the NW was well-achieved, with a 150nm-wide gap present between them, as the Si_3N_4 did not reproducibly adhere to the InP during deposition. The tapered shape of the NW favors an adiabatic mode transfer of the transverse electric (TE) mode of the NW to the fundamental TE mode of the waveguide. The latter is mainly confined in the LN since Si_3N_4 has a slightly lower refractive index. Finite-difference time-domain simulations (Lumerical) were conducted assuming lossless materials and yielded a coupling efficiency of 78%. Two of the four NWs, hereafter referred to as NW1 and NW2 (with quantum dots QD1 and QD2, respectively), were selected based on their emission properties to be at the center of two acoustic delay lines. Each delay line comprised two opposing FIDTs made of chromium with a common geometric focal point. Both FIDTs feature the same geometry, with a period of Λ = 10 µm repeated N=20 times, a 400 µm focal length and a 45^∘ opening. By orienting the transducers toward the X-axis of the crystal, a shear SAW with a fundamental frequency at ν_0 = 402.4MHz can be excited. The displacement profile of this SH0 mode is shown in <ref>(d).
Based on the delta-function model <cit.>, the bandwidth of the resonance is Δν=17.8MHz, according to the expression Δν = 2 βν_0 / (N π), with sinc(β)=1/√(2). The in-plane displacement is perpendicular to the SAW propagation direction and is mostly confined into the LN and SiO_2 layers. The wave velocity is c_SH0 = Λ×ν_0 = 4024 m/s. The primary component of the associated strain tensor is the shear element ε_zx whose profile is represented in <ref>(e). The presence of non-zero strain at the center of the nanowire, positioned on top of the thin-film LN, indicates that the QD experiences an oscillating strain field as the SAW propagates. Compared to a straight-electrode IDT, which generates plane-wave SAWs, a focused IDT, whose electrodes are shaped as arcs of periodically spaced concentric circles, can be used to enhance the SAW intensity. The SAW radiated by the fabricated FIDT was simulated with COMSOL, and its transverse displacement field is displayed in <ref>(f). The maximum acoustic amplitude is reached at x_0=470 µm, offset by 70 µm from the geometric focal point of the FIDT. The acoustic intensity can be fitted to a Gaussian beam profile to extract a Rayleigh length of x_R=60 µm. Compared to a straight-electrode IDT, the acoustic field at the beamwaist is increased by a factor of 4.1, and at the geometric focal point by a factor of 2.7 (section S3 of the Supporting Information). The strain field experienced by the QD is therefore significantly enhanced due to the focusing capability of the FIDT. The sample was investigated at 1.8K in a dry cryostat configured for confocal micro-photoluminescence (PL) measurements and equipped with high-frequency cables. An 80MHz pulsed-laser (measured 80.026MHz) was focused with a microscope objective on one grating coupler to excite the waveguide-integrated NWQD above-band at 800nm. The PL signal propagating towards the same grating coupler was collected by the same microscope objective, dispersed by a 750cm focal length spectrometer and detected by a liquid nitrogen-cooled charge-coupled device (CCD) camera. A two-channel analog signal generator was used to apply sinusoidal radio frequency (RF) signals with adjustable power P_RF and phase difference Δϕ to one or both FIDTs of the delay line. § RESULTS <ref>(a) displays the PL spectrum of QD1 without acoustic modulation at an excitation power of 500nW. In the following, we investigated the brightest line at 899.46nm, attributed to the charged exciton <cit.>. After filtering with a monochromator (0.1nm bandwidth), the purity of the single photon source was assessed in a Hanbury Brown-Twiss measurement (inset of <ref>(a)). The signal was detected by superconducting nanowire single photon detectors and counted by a time tagging device. The second-order correlation function was fitted with a sequence of equidistant photon pulses assuming a mono-exponential decay, yielding a radiative decay time of τ = 0.88 ±0.02ns. The suppression of the peak at zero time delay indicates strong single-photon emission. The ratio of the area of the zero time delay peak to the area of the finite time delay pulses gives g^(2)(0) = 0.010 ± 0.002. When a 400MHz RF signal is applied to the FIDT, the sinusoidal modulation of the strain field around the QD induces a modulation of its bandgap energy at the same frequency, causing the spectral lines to oscillate around their unstrained energies <cit.>. Spectral detuning already becomes noticeable for all peaks at approximately P_RF=-10dBm and reaches 0.70nm at 13dBm (<ref>(b)).
This optomechanical coupling arises exclusively from shear strain modulating the energy levels of the QD, an effect less commonly studied compared to normal strain coupling. Although the nanowire is not encapsulated inside the waveguide, it maintains good mechanical contact with the lithium niobate thin film even at moderate RF powers, as evidenced by the stable increase in modulation. The broadening also remains symmetric around the unstrained emission, indicating that heating of the QD is effectively mitigated at moderate RF powers <cit.>. By choosing a modulation frequency lower than the decay rate of the emitter, phonon sidebands around the central emission line are avoided. Then, both FIDTs forming the delay line are driven at 400MHz with two independent microwave channels to produce two counter-propagating SAWs whose superposition forms a standing wave. A minor performance discrepancy between the two FIDTs, attributed to fabrication imperfections, is compensated by applying slightly less power to the first FIDT (P_RF,1=12.5dBm) compared to the second (P_RF,2=13dBm). The standing wave generates a pattern of nodes (points of zero displacement) and anti-nodes (points of maximum displacement) whose position with respect to the nanowire can be adjusted by modifying the phase difference Δϕ of the two RF signals. Figure 2(c) illustrates the modulation of the brightest emission line of QD1 as a function of Δϕ. When both signals are in phase, the nanowire lies at an anti-node of the standing wave, resulting in a modulation amplitude that is twice that obtained with a single propagating SAW. Conversely, the acoustic modulation is completely suppressed when a π phase shift is imposed between the two FIDTs. The dynamic spectral broadening 2Δ E was extracted from the data by fitting it to a time-integrated oscillating Lorentzian emission line <cit.>. In <ref>(d), 2Δ E is plotted as a function of Δϕ. Its trend follows the theoretical expression 2Δ E = 2Δ E_0 × 2|cos((Δϕ+γ)/2)|, where 2Δ E_0 is the energy broadening when only one of the FIDTs is excited. The fitting parameter γ=-2.0^∘ represents a residual phase shift attributed to a slight length mismatch of the RF cables within the cryostat. The good fitting also confirms that heating has no noticeable effect, even when both FIDTs are driven simultaneously on the same chip. Similarly to QD1, the modulation performance of QD2 in the second delay line was investigated. For both QDs, the spectral broadening is plotted as a function of the driving RF power on a logarithmic scale (<ref>(a)). Over the studied power range, the modulation of NW2 is, on average, 21% smaller than that of NW1. This discrepancy is attributed to variations in the performance of the FIDTs, and to different adhesions of the NWs on the lithium niobate. In both cases, the strain-induced broadenings follow the power law 2Δ E ∝ (P_RF)^α, where α=0.489 ± 0.001 for NW1, and α=0.474 ± 0.002 for NW2. These coefficients closely approach the ideal value of α = 0.5 expected for deformation potential coupling, indicating that the observed broadening primarily arises from optomechanical coupling <cit.>. As shown in <ref>(b), the wavelength of the charged exciton line of NW2 is 0.5nm greater than that of NW1. The different emission wavelengths can stem from multiple factors, from the growth process <cit.> to the static strain and charge environment after transfer to the host substrate. 
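As noted above, the dynamic broadening 2ΔE is obtained by fitting a time-integrated, sinusoidally oscillating Lorentzian to the measured lines. The sketch below is a minimal illustration of such a model, not the authors' analysis code; the linewidth, amplitude, and sampling values are assumed for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulated_line(wl, wl0, fwhm, delta, amp, n_phase=256):
    """Time-average of a Lorentzian line whose centre oscillates as
    wl0 + delta*sin(phi) over one acoustic cycle (quasi-static limit)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    centres = wl0 + delta * np.sin(phi)          # instantaneous line centre
    lor = 1.0 / (1.0 + ((wl[:, None] - centres) / (fwhm / 2.0)) ** 2)
    return amp * lor.mean(axis=1)                # integrate over the acoustic cycle

# Illustrative use on a synthetic spectrum (wavelengths in nm); the fitted
# parameter `delta` is half the dynamic broadening, i.e. 2*delta corresponds to 2*dE.
wl = np.linspace(899.0, 900.0, 500)
spectrum = modulated_line(wl, 899.46, 0.05, 0.35, 1.0)
popt, _ = curve_fit(modulated_line, wl, spectrum, p0=[899.4, 0.04, 0.30, 0.8])
print(f"dynamic broadening 2*delta = {2 * popt[2]:.2f} nm")   # 0.70 nm in this example
```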
Two separate RF signals at 400MHz were employed to excite one FIDT from each delay line in order to modulate both QDs independently. A higher power was applied to the FIDT of the second delay line so that both nanowires reached identical modulation amplitudes. Over an acoustic cycle, the two QDs emitted at a common wavelength of 899.70nm. To extract the modulated photons from both QDs at this common wavelength, synchronized microwave sources with a π-shift between them are required, ensuring that one photon is blueshifted while the other is redshifted. Depending on the ratio of the exciton lifetime to the SAW period, two different processing schemes can be considered. If the SAW period is longer than the radiative decay time, the strain field and the resulting deformation potential around the QD can be considered quasi-static. Having the SAW frequency be an integer multiple of the repetition rate of the pulsed laser allows for repeated optical excitation of the QD at a fixed point in the acoustic cycle. As a result, the emission would consistently fall within a desired energy range, eliminating the need for spectral filtering. Conversely, if the strain field varies during the exciton recombination time, different emission wavelengths arise and post-emission filtering becomes necessary to ensure spectral overlap. This can be realized with integrated photonic resonators, as illustrated in <ref>(c), which can be tuned using electro-optic schemes <cit.> or SAWs <cit.>, provided that a tunable phase is available to compensate for propagation delay. § DISCUSSION A statistical analysis of similar NWQDs, emitting at slightly longer wavelengths than those investigated here, revealed a Gaussian distribution of the emission wavelengths with a standard deviation of 4.65nm <cit.>. Although the measurement presented above demonstrated that two selected NWQDs could be tuned into resonance, achieving larger spectral modulation would relax the selection process. One straightforward improvement would be to increase the driving RF power beyond 13dBm, provided that sample heating does not deteriorate spectral tuning <cit.>. By extrapolating the power law observed in <ref>(a), we estimate that a dynamic broadening of 1.16nm can be reached at a microwave power of 17.1dBm with a single FIDT, potentially bringing 10% of such NWQDs into resonance. To reduce ohmic losses, a lower resistivity metal such as aluminium, gold or platinum <cit.> could be used instead of chromium for the FIDT electrodes. Placing the QD at an anti-node of a standing wave created by driving both FIDTs of the delay line is another effective approach to improve modulation performance by a factor of two, as demonstrated in <ref>(d). A similar effect can be obtained by positioning the QD between two SAW mirrors and exciting the acoustic cavity with only one IDT <cit.>, thereby reducing the thermal load by half. Furthermore, the SH0 mode profile (<ref>(d)) shows that the SAW is confined in both the LN and silica layers, hence reducing the acoustic energy at the surface. Higher mechanical confinement, and thus enhanced optomechanical modulation, could be achieved by releasing the LN <cit.>, although this would involve a more challenging fabrication process and result in a more fragile device. Our strain-modulation scheme can also be scaled to more than two emitters on the same chip, without additional fabrication complexity. 
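The 10% figure quoted above is consistent with a simple Gaussian estimate: if the unstrained wavelengths are normally distributed with σ = 4.65 nm and each QD can be shifted by ±0.58 nm (half of the 1.16 nm dynamic broadening), roughly one in ten emitters lies close enough to a chosen target wavelength. The sketch below is our reading of that estimate, not the authors' calculation; in particular, the acceptance criterion (a QD within half the tuning range of the target) is an assumption.

```python
from math import erf, sqrt

sigma = 4.65               # std. dev. of NWQD emission wavelengths (nm)
broadening = 1.16          # extrapolated dynamic broadening at 17.1 dBm (nm)
reach = broadening / 2.0   # assumed tuning reach around the unstrained wavelength

# Fraction of QDs whose unstrained wavelength lies within +/- reach of an
# arbitrary target wavelength, for a Gaussian distribution centred on the target.
fraction = erf(reach / (sigma * sqrt(2.0)))
print(f"{fraction:.1%} of NWQDs reachable")   # ~9.9%, i.e. about 10%
```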
Regarding scalability, the footprint of the FIDT can be shrunk from a focal length of 400 μm to 100 μm with a slight reduction of the maximum transverse displacement of the SAW by 15% (section S3 of the Supporting Information). § CONCLUSION We successfully transferred InAsP/InP nanowire quantum dots onto a thin-film lithium niobate platform, and heterogeneously integrated them into hybrid photonic waveguides through Si_3N_4 strip loading. By operating a single focused interdigital transducer at 400MHz, we excited and coupled a shear SAW to the energy levels of a QD, resulting in a modulation of the emission wavelength by 0.70nm at 13dBm. By driving both FIDTs of the delay line, we could either double this modulation or suppress it altogether, depending on the phase difference between the driving RF signals. This local and scalable strain tuning approach allowed us to bring two waveguide-integrated NWQDs with a 0.5nm wavelength difference into resonance. This represents a crucial step towards generating indistinguishable single photons from multiple remote emitters on a single photonic chip. Photons brought into resonance can then be filtered using resonators operating at the same frequency as the FIDTs, and subsequently manipulated with photonic circuits for integrated quantum photonic applications. Device fabrication; Photoluminescence spectrum of NW2; FIDT acoustic field simulations. § AUTHOR CONTRIBUTIONS T.D. and T.S. contributed equally to this work. T.D. and T.S. fabricated the samples, performed the measurements and the simulations, and analyzed the data. P.J.P. and D.D. grew the nanowire quantum dots. All authors contributed to discussion of the results. T.D. and T.S. wrote the manuscript with inputs from all authors. T.D. conceived the experiment. T.D. and V.Z. supervised the project. § ACKNOWLEDGEMENT The work was partially supported by the Knut and Alice Wallenberg (KAW) Foundation through the Wallenberg Centre for Quantum Technology (WACQT). The authors also acknowledge the support from the European Union’s Horizon 2020 Research and Innovation Programme through the project aCryComm, FET Open Grant Agreement no. 899558. §.§ Funding sources The work was partially supported by the Knut and Alice Wallenberg (KAW) Foundation through the Wallenberg Centre for Quantum Technology (WACQT). The authors also acknowledge the support from the European Union’s Horizon 2020 Research and Innovation Programme through the project aCryComm, FET Open Grant Agreement no. 899558. Supporting Information § S1. DEVICE FABRICATION The LNOI surface (<ref>(a)) was coated with positive resist (AR-P 6200.9), and alignment markers were patterned using electron-beam lithography (EBL). After development, a Ti/Au layer was evaporated and subsequently lifted-off (<ref>(b)). The nanowires were transferred from the growth substrate to the chip (<ref>(c)) using nano-manipulators mounted inside a scanning electron microscope (SEM). A 350nm-thick Si_3N_4 loading layer was then deposited at 300°C using plasma-enhanced chemical vapor deposition (PECVD) on the entire surface (<ref>(d)). This process was carried out at 1000mTorr with a gas mixture of 350sccm 5%-diluted SiH_4 and 20sccm NH_3. The deposition involved repeated cycles of high-frequency plasma (13.56MHz - 50W) and low-frequency plasma (100kHz - 50W) for 12s and 8s, respectively. The surface was then coated with negative EBL resist (ma-N 2403) and the photonic elements were patterned by EBL according to the positions of the nanowires. 
The pattern was transferred to the Si_3N_4 by reactive ion etching in a CHF_3/SF_6 plasma to define the photonic elements (<ref>(e)). The waveguides were 1.2 μm-wide and the grating couplers had a period of 590nm with a filling factor of 0.5. Finally, the FIDTs were created by EBL followed by chromium evaporation and lift-off (<ref>(f)). The FIDT had a split-52 design (period shown in inset of <ref>(f)) with an electrode width of 1 μm, allowing for SAW excitation at a fundamental frequency of f_1=402.4MHz and harmonics f_n=nf_1 for n=2,3, and 4. § S2. PHOTOLUMINESCENCE SPECTRUM OF NW2 § S3. FIDT ACOUSTIC FIELD SIMULATIONS The acoustic field generated by the FIDT is simulated using COMSOL. Twenty pairs of chromium electrodes, shaped as arcs of concentric circles, were placed on top of a 300nm-thick Y-cut thin-film LN chip with 4.7 μm buried oxide. The FIDT had a period of 10 μm with two electrodes per period, a 400 μm focal length and a 45^∘ opening angle. An oscillating electric potential at 400MHz was applied to every other electrode while the remaining electrodes were grounded. Perfectly matched layer conditions were imposed on the lateral boundaries of the domain, and the bottom boundary was fixed. The orientation of the axes is the same as shown in Fig. 1 in the main text. <ref> shows the transverse displacement u_z on the top surface along the direction of SAW propagation at a constant z=0 μm. The envelope of the mechanical oscillations can be fitted to a Gaussian beam profile along its center axis u_z(x) ∝ 1/√(1+((x-x_0)/x_R)^2) where x_0 is the position of the beam waist, and x_R is the Rayleigh length. The fitting parameters are x_R=60 μm and x_0=470 μm, indicating that the beam's focus is offset from the geometric focus by 70 μm. A similar simulation was conducted for a straight-electrode IDT with an identical period. In this case, the mechanical oscillations exhibit a nearly constant amplitude over the simulated propagation distance. This amplitude is extracted by fitting the data to a simple sinusoidal function, serving as a baseline to evaluate the performance gain of the FIDT. Compared to the IDT, the FIDT generates an acoustic field at the beam waist that is greater by a factor of 4.1, and at the geometric focal point by a factor of 2.7. The transverse displacement generated by a smaller FIDT with 100 μm focal length but with the same 45^∘ opening angle is shown in <ref>(a). The SAW is also focused, and the maximum displacement occurs offset by 15 μm from the geometric focus. The reduction of the footprint of the FIDT is particularly interesting for increasing the density of modulated sources on the same chip. By fitting the envelope of the displacement around the beam waist, we found that the maximum displacement generated by the 100 μm focal length FIDT is slightly reduced by 15% compared to the 400 μm focal length FIDT (<ref>(b)).
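As an illustration of the envelope fit described in S3, the following sketch extracts x_0 and x_R from a simulated displacement profile. It is a hedged reconstruction of the procedure, not the actual analysis script; the file name, column layout, and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope(x, a, x0, xr):
    """Gaussian-beam amplitude envelope u_z(x) ~ a / sqrt(1 + ((x - x0)/xr)**2)."""
    return a / np.sqrt(1.0 + ((x - x0) / xr) ** 2)

# Placeholder input: position (um) and |u_z| envelope exported from the FEM model.
x, uz_env = np.loadtxt("fidt_envelope.txt", unpack=True)

popt, _ = curve_fit(envelope, x, uz_env, p0=[uz_env.max(), 400.0, 50.0])
a_fit, x0_fit, xr_fit = popt
print(f"beam waist at x0 = {x0_fit:.0f} um, Rayleigh length xR = {xr_fit:.0f} um")
# For the 400 um focal length FIDT the reported values are x0 ~ 470 um, xR ~ 60 um.
```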
BIRA: A Spherical Bistatic Reflectivity Measurement System
Carsten Andrich, Tobias F. Nowack, Alexander Ihlow, Sebastian Giehl, Maximilian Engelhardt, Gerd Sommerkorn, Andreas Schwind, Willi Hofmann, Christian Bornkessel, Reiner S. Thomä, Matthias A. Hein
§ ABSTRACT The upcoming 6G mobile communication standard will offer a revolutionary new feature: Integrated sensing and communication (ISAC) reuses mobile communication signals to realize multi-static radar for various applications including localization. Consequently, applied ISAC propagation research necessitates evolving from classical monostatic radar cross section (RCS) measurement of static targets to bistatic radar reflectivity characterization of dynamic objects. Here, we introduce our “Bistatic Radar” (BIRA) and antenna measurement facility for bistatic spherical positioning with sub-millimeter accuracy on a diameter of up to 7 m and with almost continuous frequency coverage from 0.7 up to 260 GHz. Currently, BIRA is the only bistatic measurement facility capable of unrestricted ISAC research: In addition to vector network analysis, BIRA employs advanced wideband transceiver technology with an instantaneous bandwidth of up to 4 GHz. These transceivers grant BIRA the unique ability to characterize dynamic targets in both Doppler and range, while also significantly accelerating RCS measurements of static objects. Radar reflectivity, radar cross section, bistatic radar measurement, millimeter wave antenna testing, spherical gantry positioner, anechoic chamber, wideband transceiver, integrated sensing and communication. § INTRODUCTION Applied microwave propagation research is indispensable for automated and connected mobility and its vital need for reliable communication and accurate sensing. In this regard, the future mobile communication standard 6G will include a revolutionary new feature: isac. In contrast to traditional, monostatic radar systems, isac is inherently bistatic, therefore necessitating bistatic measurement infrastructure for experimental research. Additionally, the microwave spectrum envisioned for 6G requires antenna characterization in the sub-THz frequency range. The thimo in its core competence “wireless and information technology” has been conducting leading research on wireless transmission related to road and rail traffic and drones, based on its automotive antenna facility vista since 2014. vista comprises an anechoic chamber, a turntable positioner with 6.5 m diameter, and an antenna measurement arch <cit.>. Over the last decade, vista has enabled diverse and novel contributions on, e.g., vehicle-in-the-loop virtual drive testing for mobile communication systems <cit.>, performance evaluation of automotive antennas in their installed state, including phaseless antenna measurements <cit.>, automotive radar testing <cit.>, and realistic emulation of gnss signals <cit.>. Although we have utilized vista for static <cit.> and dynamic <cit.> isac measurements, for lack of suitable mechanical positioning infrastructure, the respective bistatic geometries were restricted to the horizontal plane. This limitation has motivated us to upgrade vista from an antenna measurement chamber to the universal bistatic measurement facility bira with eight mechanical degrees of freedom. 
Its mechanical and spectral capabilities qualify BIRA for comprehensive 6G research, e.g., antenna measurement up to sub-THz frequencies and dynamic radar reflectivity characterization for isac. The paper is structured as follows: <ref> provides a comparison with state-of-the-art bistatic measurement facilities. <ref> describes the mechanical aspects of the bira system and <ref> explains its measurement transceivers and interchangeable microwave probe modules. Then, <ref> illustrates the associated software suite comprising a digital twin and a generic api and abstraction layer. Finally, <ref> summarizes this paper. § STATE OF THE ART Only a few measurement facilities are known to enable measurements of the bistatic reflectivity of extended targets. Those found in the literature will be introduced in the following. To facilitate a comparison by features, their capabilities are contrasted with those of bira in <ref>. The tx and rx positioner characteristics (azimuth and co-elevation) are given within the respective machine coordinate system. Together with the dut rotation (turntable), the mechanical degrees of freedom of the positioners result in a set of reachable bistatic angles from the dut perspective. 1.) The bistatic anechoic chamber (BIANCHA) of the National Institute for Aerospace and Technology (INTA) in Torrejón de Ardoz, Spain, consists of a turntable and two gantry arms, providing four degrees of freedom <cit.>. 2.) In the CACTUS measurement facility at the centre d'etudes scientifiques et techniques d'Aquitaine (Cesta) in France, two pedestals with elevated antennas can move along a circular rail (on the floor), providing two degrees of freedom <cit.>. 3.) Similarly, in the BABI measurement chamber of the office national d'études et de recherches aérospatiales (Onera) in France, the antennas slide on an elevated circular rail <cit.>. 4.) The emsl, located in Ispra, Italy, contains an elevation arch with two sliding antennas and a 360° turntable dut positioner <cit.>. 5.) The lamp, located in Deqing, Zhejiang, China, is a facility similar to emsl. The chamber structure itself with two sliding antennas on an elevation arch and the dut placement onto a turntable on rails is comparable to emsl <cit.>. Only BIANCHA and bira enable unrestricted, bistatic, spherical positioning, i.e., setting the illumination azimuth and co-elevation angles independently of the observation azimuth and co-elevation angles, which we consider mandatory for bistatic radar reflectivity and antenna measurement research. Only bira is large enough for target objects up to the size of a passenger car. Regarding the measurement instrumentation, all facilities use vna. In contrast, bira also features sdr transceivers for Doppler and range resolved measurement of time-variant targets, which are integral to isac research. Additionally, bira supports arbitrary probes and payloads that enable 6G-ready frequency coverage up to 260 GHz through converters. Owing to its unique capabilities, bira is currently the only published bistatic measurement facility capable of performing unrestricted isac research. § CONCEPT AND MECHANICAL PROPERTIES The vista depicted in <ref> provides the ideal prerequisites for bistatic reflectivity and antenna measurements of target objects up to the size of a passenger car. 
vista comprises a shielded, anechoic chamber with 13 m × 9 m × 7.5 m of usable interior volume, a turntable with 6.5 m diameter for loads up to 3000 kg, and a permanently installed SG 3000F multi-probe antenna measurement arch for frequencies from 70 MHz up to 6 GHz <cit.>. The immovable measurement arch imposes two significant constraints on the installation of our bistatic positioning system bira: It must fit within the inner diameter of the arch and, more importantly, it must be removable and re-installable to enable unobstructed use of the multi-probe system. Assembly duration should be no more than a single day and repeated re-installation must not degrade mechanical accuracy. The contractor commissioned with the development and installation of bira was cmts. bira comprises two modular gantry positioners (see <ref>) enabling almost arbitrary positioning of two probes on the same sphere with a radius of approximately 3 m and its origin at the focal point 2.27 m above the turntable in the plane of the mvg arch. Both gantries can be installed independently, enabling single gantry operation. The gantry mounting plates are located below the metallic false floor. When not installed, the respective cutouts in the false floor are covered by metal plates with minimal clearances. Each gantry is composed of multiple modules that can be installed incrementally, with the first module fastened on the mounting plate. Precision positioning pins between the modules ensure repeat accuracy between assembly cycles. Relying on the static gantry (see <ref> a), the moving gantry and the turntable, bira has a total of eight degrees of freedom, separated into the following axes: * Azimuth: The moving gantry is mounted on a semi-circular rail around the focal point, enabling independent azimuth positioning (see <ref> f). The static gantry's mounting plate is fixed directly to the building foundation. With respect to the dut on the turntable, the azimuth positioning is realized by rotating the turntable. * Elevation: Both gantries implement spherical positioning on more than a hemisphere by means of rotating a raised boom in the shape of a flattened quarter circle (see <ref> i). The boom's center of rotation is located in the horizontal plane of the focal point. Consequently, the probe at the boom tip always points radially towards the focal point. Note that due to this geometry the effective azimuth position of the probe is offset by ±90° relative to the center of rotation. * Rotation/Polarization: The gantry geometry inherently guarantees that the probe is not rotated around the radial axis towards the focal point when moving in either azimuth or elevation. A noteworthy exception is the 180° inversion of the rotation angle when the elevation passes the zenith. For the purpose of intentionally rotating the probe, it is fixed to an electronic roll positioner at the tip of the boom (see <ref>). This enables, e.g., dual-polarized measurements with linearly polarized antennas (see <ref>). * Radius: The roll positioner itself is mounted on an electronic linear positioner aligned with the radial axis (see <ref>). This radial positioner can be used, e.g., to account for different probe lengths, to adjust for frequency-variant phase centers, or to augment nearfield to farfield transformation via the radial domain <cit.>. A dedicated laser tracking system was used to repeatedly measure the motion of all axes and to subsequently calibrate and adjust the system. 
These measurements have demonstrated a repeatable positioning accuracy for the azimuth axis of 0.02° (0.05° after reassembly) and for the elevation axes of 0.006° (0.02° after reassembly). The position-dependent elastic deformation from the weight of the positioners and the resulting position error are compensated automatically by the motor controller firmware based on the laser tracker calibration values. The firmware is also responsible for mechanical safety: As the components of both positioners are located predominantly on the surface of the same sphere, collisions are physically possible. The firmware dynamically detects an imminent collision and prevents it by automatically enforcing an emergency stop, which is only the last out of several safety measures. See <ref> for details on the remaining safety procedures. The rotating gantry geometry of bira enables fully covering its surfaces facing the dut with microwave absorbers. This is an advantage over other types of positioners, e.g., traveling trolleys (cf. <ref>), which require an unobstructed rail for their motion. The absorbers are mounted magnetically to facilitate assembly and disassembly. Additionally, this enables the use of multiple sets of absorbers, each optimized for the respective frequency ranges of interest. § UNIVERSAL PROBE AND TRANSCEIVER SUPPORT Versatility and modularity were primary design requirements for bira. These ensure future extensibility and frequency coverage up to the sub-THz range. Therefore, no measurement equipment is firmly integrated into the positioners. Probe flanges on the tips of the rotation positioners enable installation of arbitrary measurement probes (see <ref>). The backside of each gantry boom supports mounting additional payloads, also within generous size and mass constraints (see <ref>). For each probe, bira provides up to 300 W of electrical power and gpio signals for electronic switching of gain and polarization. Additionally, each positioner is equipped with drag chains and empty conduits to enable the mechanically safe and straightforward installation of temporary wiring specific to the probe in use. §.§ Monolithic VNA Transceivers Bistatic radar reflectivity characterization necessitates signal transmission measurements with widely separated tx and rx antennas. Conventional measurement transceivers, e.g., vna, concentrate tx and rx ports into a single device. bira supports such monolithic transceivers with one integrated coaxial cable from each probe to directly adjacent connectors in the anechoic chamber. A central rotary joint in either elevation positioner incurs the mechanical limit of a single such cable per gantry. The tx cable length sums up to 30 m, because it passes through the azimuth rail drag chain of the moving positioner. The rx cable spans only 17 m, as the azimuth position of the static gantry is fixed. These extensive cable lengths require both active and passive conditioning to enable frequency coverage from 0.5 to 26 GHz: Multiple wideband amplifiers compensate for high insertion loss, e.g., 60 dB at 26 GHz in the tx path. Additionally, passive equalizers flatten the sloped frequency response of the cable to prevent non-linear distortion at lower frequencies, where electronic amplification would otherwise vastly exceed cable insertion loss. We employ a Keysight N5222B vna in combination with the coaxial cables described above, to measure the radar reflectivity of static targets or antennas under test. 
To enable time gating for the suppression of parasitic reflections, we typically rely on a 10 MHz step width, resulting in a 30 m spatial ambiguity limit. §.§ Non-converting RF Probes for VNAs When measuring with monolithic transceivers like vna, dual-polarized measurements require electronic switching in combination with the single cable connection between probe and vna port. For our vna applications up to 20 GHz, we rely on a pair of tx/rx probes that do not perform frequency conversion. Each probe implements amplification and polarization switching in the frequency range from 0.7 to 20 GHz. Both probes include a digital step attenuator for gain adjustment. Suitable quad-ridged dual-polarized antennas include a pair of Schwarzbeck CTIA0710 and a pair of RFspin QRH20E. See <ref> for a picture of both rf probes mounted on bira. §.§ Software-defined Parallel Wideband Transceivers Radar reflectivity measurements with vna typically require stepped positioning to avoid signal smear, because of extended sweep times, exacerbated – for bira – by the need to sequentially switch polarizations. With bistatic radar reflectivity measurements encompassing at least four degrees of freedom, the constraint of stepped measurements results in long measurement times. Furthermore, vna are not suitable for radar reflectivity measurement of moving targets: Depending on carrier frequency and target motion speeds, even sweep times in the low millisecond range may be insufficient for Nyquist-compliant Doppler sampling. This constraint renders vna generally unsuitable for dynamic isac research. For non-static measurement scenarios, we employ sdr transceivers with instantaneous bandwidths of 2 GHz or even 4 GHz in combination with IQ mixers or channel bonding <cit.>. These have already enabled unprecedented radar reflectivity and micro-Doppler signature measurements of moving targets <cit.>. sdr also significantly accelerate the bistatic measurement of static radar targets. Their negligible excitation signal periods enable continuous measurement while moving along a trajectory. sdr bear another advantage: We can deploy miniaturized and synchronized sdr directly to each probe. This obviates the need for any actively conditioned, long-haul signal paths from the probes to a central monolithic transceiver. Implicitly, this also eliminates the previous limit of one single path: Multi-channel sdr support parallel polarization measurements without rx switching. The use of orthogonal excitation signals can even render tx switching redundant. We employ two Xilinx RFSoC ZU47DR direct rf sampling transceivers with a custom sdr firmware and software architecture <cit.>. Each transceiver offers eight adc channels and eight dac channels operating at sample rates up to 5 or 10 GSa/s, respectively. Both adc and dac support sampling in the 2nd and 3rd Nyquist zones with a 6 GHz upper cutoff frequency. A centrally generated clock distributed optically via rfof synchronizes both sdr. Integrated 100 Gigabit Ethernet network interfaces stream samples to and from servers conveniently located outside of the anechoic chamber. Capable of arbitrary signal generation and sustained recording, the sdr support a wide array of applications beyond bistatic transmission measurements, e.g., signal and system emulation, hardware-in-the-loop testing, and generic 6G isac demonstrators <cit.>. §.§ Coaxial Quadrature Converters The upper cutoff frequency of the sdr transceivers necessitates frequency converters for measurements above 6 GHz. 
Up to approximately 67 GHz, coaxial technology offers straightforward and affordable dual-polarized antennas and converters. In particular, quadrature mixers available for this frequency range provide frequency flexibility through inherent image suppression without external filters, as well as if bandwidths of several GHz (cf. <ref>). Their quadrature if interface doubles the native instantaneous bandwidth of multi-port transceivers, achieving 4 GHz bandwidth in combination with our sdr. We designed and assembled two pairs of quadrature frequency converters in coaxial technology: One up-converter (tx) and one down-converter (rx) each for the frequency ranges from 5 to 20 GHz and from 18 to 50 GHz. All converters use dual linearly polarized antennas and fully parallel microwave branches for both polarizations, requiring four adc/dac ports per converter. Technical details are summarized in <ref>. The components are integrated into a shielded case with microwave absorbers magnetically mounted on the front and a flange compatible with bira, see <ref>. Phase-aligned operation of tx and rx is ensured by lo sharing: Within the bira setup, the lo is distributed via rfof from a common signal generator (Keysight EXG N5173B). The quadrature probes support vna through analog signal adapters: The IQ channels are externally combined via 90° hybrid couplers and polarization selection is carried out via microwave switches that are controlled remotely with gpio signals. The integrated rf cables (cf. <ref>) then route the resulting single if signal between each probe and the vna. §.§ Waveguide Converters Targeting frequencies above 67 GHz requires switching from coaxial to waveguide components. Three pairs of converters are available for the waveguide bands WR-12 (60 to 90 GHz), WR-6.5 (110 to 170 GHz), and WR-4.3 (170 to 260 GHz), see <ref>. Within these frequency ranges, individual bands can be selected by exchanging band-pass filters. The trade-off between antenna directivity and angular field-of-view is addressed by multiple antenna pairs. All parameters are documented in <ref>. All waveguide converters are implemented single-polarized. Dual-polarized measurements are realized by remote-controlled mechanical rotation of the probes. As with the IQ converters, phase-aligned operation is ensured by lo distribution via rfof and by applying lo multiplication. Hardware integration and supply of the waveguide converters were provided by the contractor bsw TestSystems & Consulting AG. § SOFTWARE The operation of the eight mechanical bira axes is not straightforward. Firstly, its machine coordinates differ from the bistatic azimuth and co-elevation angles of both probes with respect to the dut. The mapping from bistatic angles to machine coordinates is not unique. Secondly, collisions, although prevented by firmware, are hypothetically possible, due to the fact that both gantry positioners move along the same sphere around the focal point. This necessitates careful planning of measurement trajectories, i.e., consecutive lists of positioner waypoints, to ensure uninterrupted operation. The trajectories must pre-emptively circumnavigate gantry collisions while also minimizing the extent of detours incurred by this process. Our bira software suite comprises two primary components: A mechanically exact digital twin and a hardware abstraction layer combined with a generic api. §.§ Interactive Digital Twin We implemented the digital twin using the Python programming language and the Visualization Toolkit (VTK) computer graphics library. 
The twin relies on the actual cad models of bira to implement its geometrically exact replica. Additionally, a simplified bounding box is used for collision detection with a safety clearance of 10 cm. The digital twin includes an optional, interactive graphical user interface (GUI) (see <ref>). The GUI visualizes the machine coordinate system, which facilitates system familiarization for first-time users and accelerates measurement trajectory planning significantly. We also employ the GUI for safe interactive control of bira by visualization of movements prior to execution. The digital twin is essential for mechanical safety. Being realized as a multi-purpose library with a flexible api, we also use it to compute a collision table containing all possible permutations of the three main mechanical axes of bira: Moving gantry azimuth and co-elevation as well as static gantry co-elevation. The remaining axes' range of motion precludes these axes from any contribution to possible collisions. The table contains approximately 10 million entries at 1° angular resolution, see <ref> for an exemplary section of the table. The simplified bounding boxes used to compute intersections result in a short half-hour computation time, facilitating generation and use of collision tables for non-standard probes or even very large dut. A default table for cylindrical probes as depicted in <ref> is used by the firmware and possibly custom tables are used by the software to implement failsafe, multi-layered collision prevention. §.§ Hardware Abstraction Layer and Application Programming Interface (API) The primary design goal of bira was universal usability, requiring a generic api not limited to specific measurement types. This user-facing api must satisfy the following requirements in the order of priority: Mechanical safety, efficiency, and user-friendliness. All requirements rule out granting these users direct access to the positioner hardware. Instead, we opted for a safety enforcing yet straightforward abstraction layer: A trajectory and parameter file in JavaScript Object Notation (JSON). It provides a list of consecutive positions for all eight axes. JSON is a language-independent data interchange format that is natively supported by many programming languages, e.g., MATLAB and R. The remainder of the software was implemented in Python. We developed a kinematic model of all eight axes, which incorporates their velocity-, acceleration-, and jerk-limited motion. This model enables offline verification of user-supplied trajectories: For each movement between subsequent trajectory steps, the kinematic model provides the fine-grained intermediate positions of all axes, which are checked against the collision table computed by the digital twin. We also use the kinematic model to predict exact measurement duration and to check trajectory efficiency, e.g., to identify detours. The actual measurement software interfacing the positioner hardware is strictly isolated from the users for safety reasons. Only operators, i.e., trained, experienced, and subsequently authorized staff members, initiate measurements using JSON trajectory files that have passed offline verification. These operators also select additional axis motion parameters, i.e., velocity, acceleration, and deceleration, based on user requirements, simplifying the users' trajectory preparation from 29 to only eight parameters. This approach ensures safety and efficiency. 
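To make the abstraction concrete, the sketch below shows what such a trajectory file and its offline verification loop could look like. It is an illustration only: the field names, file layout, and helper functions are assumptions, not the actual bira api; the real verification uses the kinematic model and the collision table described above.

```python
import json

# Hypothetical trajectory file: consecutive target positions for all eight axes
# (angles in degrees, radius offsets in millimetres); field names are assumptions.
trajectory = {
    "axes": ["turntable", "moving_azimuth", "moving_elevation", "moving_roll",
             "moving_radius", "static_elevation", "static_roll", "static_radius"],
    "waypoints": [
        [0.0, 45.0, 30.0, 0.0, 0.0, 60.0, 90.0, 0.0],
        [0.0, 50.0, 35.0, 0.0, 0.0, 60.0, 90.0, 0.0],
    ],
}
with open("trajectory.json", "w") as fh:
    json.dump(trajectory, fh, indent=2)

def verify(waypoints, interpolate, collision_table, step=1.0):
    """Offline check: sample the kinematic model between consecutive waypoints
    and look every intermediate pose up in the precomputed collision table."""
    for start, stop in zip(waypoints, waypoints[1:]):
        for pose in interpolate(start, stop, step):   # kinematic-model samples
            az, mv_el, st_el = pose[1], pose[2], pose[5]
            if not collision_table[int(az) % 360][int(mv_el)][int(st_el)]:
                raise ValueError(f"collision predicted near pose {pose}")
    return True
```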
Verified trajectories can be used for long-term, unattended measurements overnight or over weekends. Our measurement software interfaces the turntable, the gantry positioners, and the microwave measurement devices via an Ethernet local area network. The involved industrial motor control interfaces have response latencies around 100 ms. With four bistatic angles and thus four primary degrees of freedom, typical measurement times range from several hours up to multiple days. This implies that even minor efficiency gains can result in significant time savings. Therefore, we implemented fully parallelized network remote control using Python's asyncio library and asynchronous coroutines. We batch movement commands for multiple axes into single network requests and optimize for successful command execution by deferring error checks until after all movement commands have been issued. This way, we achieve a total remote control overhead of only 100 ms. In contrast, multiple sequential network requests would add up request latencies, accumulating up to several hours for long-term measurements. Like bira itself, our software also supports arbitrary applications. This requires the integration of custom user code into the measurement software program flow. Obviously, user-provided program code and data must remain strictly isolated from the positioner hardware for safety reasons. We realized this through a restricted callback api. For stepped measurements, our software calls a user-provided function after all axes have come to a standstill at their respective target position. For continuously moving measurements, our software runs an asynchronous user-provided coroutine in parallel while continuously orchestrating all motions. The user code has read-only access to all states, e.g., current position and velocity, i.e., cannot issue any commands. To facilitate the development of user code, the measurement software can run fully offline without hardware access, partially simulating positioner behavior. However, default implementations for vna and sdr measurements are available and can be used without modification for most measurement applications. § SUMMARY In this paper, we introduced our bira measurement system, which extends the vista with two universal mechanical positioners. Together with dut rotation by the turntable, independent illumination and observation angles, each covering more than a hemisphere (0 to 360° azimuth, 0 to 114° co-elevation), are realized with sub-millimeter accuracy. This comprehensive upgrade was inaugurated in 2023 and addresses the requirements of research for 6G and beyond in the upcoming decades <cit.>. Although motivated by and named after bistatic radar, i.e., isac, our bira system is entirely use-case agnostic and includes a variety of features useful for 3D antenna pattern measurements up to the sub-THz frequency range. bira is a modular spherical positioning system for dut up to the size of a passenger car. Either positioner can be installed and used independently. A universal probe flange, power supply, integrated microwave cabling and lo distribution, and generic low-level api support almost arbitrary payloads and ensure future upgradability. A digital twin ensures the safe operation of the positioners, facilitates measurement planning for researchers, and accelerates software development. 
Currently available microwave probes include coaxial frequency converters for parallel, dual-polarized measurements up to 50 GHz and linearly polarized waveguide converters for almost continuous signal coverage up to 260 GHz. All probes are baseband agnostic and compatible with a monolithic vna as well as distributed sdr transceivers. bira is a bistatic reflectivity measurement facility suitable for unrestricted isac research. It is one of only two installations with at least four spherical degrees of freedom, which is the minimum required for fully bistatic reflectivity measurements. Secondly and more importantly, bira employs distributed sdr transceivers, while other installations use vna <cit.> with theoretically zero instantaneous bandwidth, restricting their applicability to the reflectivity of stationary objects. In contrast, the instantaneous bandwidth of the sdr transceivers (up to 4 GHz) enables dynamic isac measurements with superior range resolution of up to 3.75 cm. bira provides novel and highly relevant experimental access to the bistatic radar reflectivity of extended objects and antenna characterization up to the sub-THz range. Selected examples of both tasks will be presented in a forthcoming version of this paper. With these features, bira represents a unique asset of the Thuringian Center of Innovation in Mobility.