Drilling holes in the Brownian disk: The Brownian annulus
Jean-François Le Gall, Alexis Metz-Donnadieu
arXiv:2407.13544v1 [math.PR] (MSC 60D05, 60F17), 18 July 2024
§ ABSTRACT We give a new construction of the Brownian annulus based on removing a hull centered at the distinguished point in the free Brownian disk. We use this construction to prove that the Brownian annulus is the scaling limit of Boltzmann triangulations with two boundaries. We also prove that the space obtained by removing hulls centered at the two distinguished points of the Brownian sphere is a Brownian annulus. Our proofs rely on a detailed analysis of the peeling by layers algorithm for Boltzmann triangulations with a boundary.
§ INTRODUCTION Brownian surfaces are basic models of random geometry that have been the subject of intensive research in recent years. They arise as scaling limits of large classes of random planar maps viewed as random metric spaces, for the Gromov-Hausdorff topology. The first result in this direction was the convergence to the Brownian sphere <cit.>, which is a Brownian surface in genus 0 with no boundary. This convergence has been extended to many different classes of random planar maps by several authors. The recent paper of Bettinelli and Miermont <cit.> constructs general Brownian surfaces in arbitrary genus g and with a finite number of boundaries of given sizes, as the scaling limit of large random quadrangulations with boundaries (boundaries of quadrangulations are distinguished faces with arbitrary degrees, whereas the other faces have degree 4). The construction of <cit.> applies to the case where the volume of the surface is fixed as well as the boundary sizes, and it is also of interest to consider “free” models where this volume is not fixed, which appear as scaling limits of planar maps distributed according to Boltzmann weights. The special case where there is only one boundary in genus 0 corresponds to the so-called Brownian disk, which has been studied extensively (see in particular <cit.>). Our object of interest in this work is the free Brownian annulus, which is a Brownian surface in genus 0 with two boundaries.
As noted in <cit.>, the free Brownian annulus is one of the very few Brownian surfaces (together with the Brownian disk and the pointed Brownian disk) for which the free model makes sense under a probability measure — for instance, the free Brownian sphere is defined under an infinite measure. One motivation for the present work came from the recent paper of Ang, Rémy and Sun <cit.>, which studies the modulus of Brownian annuli in random conformal geometry. The definition of the Brownian annulus in <cit.> is based on considering the complement of a hull in the free pointed Brownian disk conditionally on the event that the hull boundary has a fixed size. As the authors of <cit.> observe, this definition leads to certain technical difficulties due to conditioning on an event of probability zero. In this work, we give a slightly different construction of the Brownian annulus which involves only conditioning on an event of positive probability. We show that this definition is equivalent to the one in <cit.>, and we also relate our construction to the scaling limit approach of Bettinelli and Miermont <cit.> by showing that the Brownian annulus is the scaling limit of large random triangulations with two boundaries — this was asserted without proof in <cit.>. Let us give a more precise description of our main results. We start from a free pointed Brownian disk (𝔻,D) with boundary size a>0. As usual, ∂𝔻 denotes the boundary of 𝔻. Then, 𝔻 has a distinguished interior point denoted by x_*. For every r∈ (0,D(x_*,∂𝔻)), we denote the closed ball of radius r centered at x_* by B_r(x_*), and the hull H_r is obtained by “filling in the holes” of B_r(x_*). In more precise terms, 𝔻∖ H_r is the connected component of 𝔻∖ B_r(x_*) that contains the boundary ∂𝔻. The perimeter or boundary size of H_r may then be defined as 𝒫_r=lim_ε→ 0 ε^-2 𝐕({x∈𝔻∖ H_r : D(x,H_r)<ε}), where 𝐕 is the volume measure of 𝔻. The process (𝒫_r)_0<r<D(x_*,∂𝔻) has a modification with càdlàg sample paths and no positive jumps, and, for every b>0, we set r_b=inf{r∈ (0,D(x_*,∂𝔻)):𝒫_r=b}, where inf∅=∞. Then ℙ(r_b<∞)=a/(a+b) (Lemma <ref>), and, on the event {r_b<∞}, r_b is the radius of the first hull of boundary size b. Under the conditional probability ℙ(·| r_b<∞), we define the Brownian annulus of boundary sizes a and b, denoted by ℂ_(a,b), as the closure of 𝔻∖ H_r_b, which is equipped with the continuous extension d^∘ of the intrinsic metric on 𝔻∖ H_r_b and with the restriction of the volume measure of 𝔻 (Theorem <ref>). It is convenient to view ℂ_(a,b) as a measure metric space marked with two compact subsets (the “boundaries”), which are here ∂𝔻 and ∂ H_r_b. Much of the present work is devoted to proving that the space ℂ_(a,b) is the Gromov-Hausdorff limit of rescaled triangulations with two boundaries. More precisely, for every sufficiently large integer L, let 𝒞^L be a random planar triangulation with two simple boundaries of respective sizes ⌊ a L⌋ and ⌊ b L⌋ (see <cit.> for precise definitions of triangulations with boundaries). Assume that 𝒞^L is distributed according to Boltzmann weights, meaning that the probability of a given triangulation τ is proportional to (12√(3))^-k(τ) where k(τ) is the number of internal vertices of τ. We equip the vertex set V(𝒞^L) with the graph distance rescaled by the factor √(3/2) L^-1/2, which we denote by d^∘_L. Then, Theorem <ref> states that (V(𝒞^L),d^∘_L) ⟶ (ℂ_(a,b),d^∘) as L→∞, in distribution in the Gromov-Hausdorff sense.
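Before moving on, it may help to visualize the “filling in the holes” operation that defines the hull H_r above. The following is a minimal sketch of its discrete analogue on a finite graph, assuming a plain adjacency-dictionary representation; the names (`adj`, `boundary`, `discrete_hull`) are hypothetical, and the snippet only illustrates the definition, not any construction actually used in the paper.

```python
from collections import deque

def graph_ball(adj, center, r):
    """Vertices at graph distance at most r from `center` (plain BFS)."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return set(dist)

def discrete_hull(adj, center, r, boundary):
    """Discrete analogue of H_r: fill in the holes of the ball of radius r.
    The complement of the hull is the connected component of the complement
    of the ball that contains the boundary (cf. the definition of H_r)."""
    ball = graph_ball(adj, center, r)
    outside = set(adj) - ball
    seeds = [v for v in boundary if v in outside]
    comp, queue = set(seeds), deque(seeds)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w in outside and w not in comp:
                comp.add(w)
                queue.append(w)
    # If r is so large that the ball swallows the boundary, the hull is everything,
    # mirroring the restriction r < D(x_*, boundary) in the continuous setting.
    return set(adj) - comp
```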
Theorem <ref> gives a stronger version of this convergence by considering the Gromov-Hausdorff-Prokhorov distance on measure metric spaces marked with two boundaries, in the spirit of <cit.> (see Section <ref> below). The proof of the convergence (<ref>) relies on two main ingredients. The first one is a result of Albenque, Holden and Sun <cit.> showing that the free Brownian disk is the scaling limit of Boltzmann triangulations with a simple boundary, when the boundary size tends to ∞. The second ingredient is the peeling by layers algorithm for Boltzmann triangulations with a simple boundary, which was already investigated in the recent paper <cit.> in view of studying the spatial Markov property of Brownian disks. Roughly speaking, a peeling algorithm “explores” a Boltzmann triangulation 𝒟^L with boundary size ⌊ a L⌋ step by step, starting from a distinguished interior vertex, and, in the special case of the peeling by layers, the explored region at every step is close to a (discrete) hull centered at the distinguished vertex. At the first time when the boundary size of the explored region becomes equal to ⌊ b L⌋ (conditionally on the event that this time exists), the unexplored region is a Boltzmann triangulation with two simple boundaries of sizes ⌊ a L⌋ and ⌊ b L⌋, and is therefore distributed as 𝒞^L. One can then use the main result of <cit.> giving the scaling limit of 𝒟^L to derive the convergence (<ref>). Making this argument precise requires a number of preliminary results, and in particular a detailed study of asymptotics for the peeling process of Boltzmann triangulations with a boundary, which is of independent interest (see Section <ref> below). These asymptotics are closely related to the similar results for the peeling process of the UIPT obtained in <cit.>. As a by-product of our construction, we obtain several other results relating the Brownian annulus to the Brownian disk or the Brownian sphere. Consider again the free pointed Brownian disk (𝔻,D) of perimeter a, but now fix r>0. Conditionally on {D(x_*,∂𝔻)>r, 𝒫_r=b}, the closure of 𝔻∖ H_r equipped with an (extended) intrinsic metric has the same distribution as ℂ_(a,b) (Proposition <ref>), and furthermore is independent of the hull H_r, also viewed as a random metric space for the appropriate intrinsic metric. This result in fact corresponds to the definition of the Brownian annulus in <cit.>. Another related result involves removing two disjoint hulls in the free Brownian sphere. Write (𝐦_∞, 𝐃) for the free Brownian sphere, which has two distinguished points denoted by 𝐱_* and 𝐱_0 that play symmetric roles. For every r>0 and x∈𝐦_∞, let B^∞_r(x) be the closed ball of radius r centered at x in 𝐦_∞. Then, for r∈(0,𝐃(𝐱_*,𝐱_0)), let the hull B^∙_r(𝐱_*) be the complement of the connected component of 𝐦_∞∖ B^∞_r(𝐱_*) that contains 𝐱_0, and define B^∙_r(𝐱_0) by interchanging the roles of 𝐱_* and 𝐱_0. Let r,r'>0. Then, conditionally on the event {𝐃(𝐱_*,𝐱_0)>r+r'}, the three spaces B^∙_r(𝐱_*), B^∙_r'(𝐱_0) and 𝐦_∞∖ (B^∙_r(𝐱_*)∪ B^∙_r'(𝐱_0)) are independent conditionally on the perimeters |∂ B^∙_r(𝐱_*)| and |∂ B^∙_r'(𝐱_0)| (these perimeters are defined by a formula analogous to (<ref>)), and 𝐦_∞∖ (B^∙_r(𝐱_*)∪ B^∙_r'(𝐱_0)) is a Brownian annulus with boundary sizes |∂ B^∙_r(𝐱_*)| and |∂ B^∙_r'(𝐱_0)| (see Theorem <ref> below for a more precise statement, and <cit.> for a closely related result). In addition to our main results, we obtain certain explicit formulas, which are of independent interest.
In particular, Proposition <ref> gives the distribution of 𝒫_r under ℙ(·∩{r<D(x_*,∂𝔻)}) (note that the distribution of D(x_*,∂𝔻) was computed in <cit.>). We also consider the “length” ℒ_(a,b) of ℂ_(a,b), which is the minimal distance between the two boundaries. By combining our definition of ℂ_(a,b) with the Bettinelli-Miermont construction of the Brownian disk, one gets that ℒ_(a,b) is distributed as the last passage time at level b for a continuous-state branching process with branching mechanism ψ(λ):=√(8/3) λ^3/2 started with initial density (3/2) a^3/2(a+z)^-5/2, and conditioned to hit b (this conditioning event has probability a/(a+b)). Unfortunately, we have not been able to use this description to derive an explicit formula for the distribution of ℒ_(a,b), but Proposition <ref> gives a remarkably simple formula for its first moment: 𝔼[ℒ_(a,b)]=√(3π/2)(a+b)(√(a^-1)+√(b^-1)-√(a^-1+b^-1)). The paper is organized as follows. Section <ref> gathers a number of preliminaries, concerning in particular the peeling algorithm for random triangulations, the Bettinelli-Miermont construction of the free pointed Brownian disk, and a useful embedding of the Brownian disk in the Brownian sphere. Then, Section <ref> presents our construction of the Brownian annulus, and also proves a technical lemma that will be used in the proof of the convergence of rescaled triangulations to the Brownian annulus. In Section <ref>, we recall the key convergence of rescaled triangulations with a boundary to the Brownian disk, and we use this result to investigate the convergence of certain explored regions in the peeling algorithm of Boltzmann triangulations towards hulls in the Brownian disk. Section <ref> is devoted to asymptotics for the perimeter process in the peeling by layers algorithm of Boltzmann triangulations: the ultimate goal of these asymptotics is to verify that the (suitably rescaled) first radius at which the perimeter of the explored region hits the value ⌊ bL⌋ converges to r_b, and that this convergence takes place jointly with the convergence to the Brownian disk (Corollary <ref>). Section <ref> gives the proof of the scaling limit (<ref>). If 𝒞^L is constructed via the peeling algorithm as explained above, a technical difficulty comes from the fact that it is not easy to control distances near the boundary of the unexplored region, and, to overcome this problem, we use approximating spaces obtained by removing a tubular neighborhood of the latter boundary. In Section <ref>, we explain how the convergence (<ref>) can be sharpened to hold in the sense of the Gromov-Hausdorff-Prokhorov topology on measure metric spaces marked with two boundaries. Section <ref> explains the relation between our construction of the Brownian annulus and the definition of <cit.>, and also proves Theorem <ref> showing that the complement of the union of two hulls centered at the distinguished points of the Brownian sphere is a Brownian annulus. Finally, Section <ref> discusses the distribution of the length ℒ_(a,b) of the Brownian annulus.
§ PRELIMINARIES In this section, we recall the basic definitions and the theoretical framework that we will use in this paper. Section <ref> introduces Boltzmann triangulations as well as the peeling by layers algorithm, which plays an important role in this work. In Section <ref>, we recall the definition of the Gromov-Hausdorff-Prokhorov topology for measure metric spaces, using the formalism of <cit.>.
Section <ref> gives a construction of the free Brownian disk, which is the compact metric space arising as the scaling limit of Boltzmann triangulations with a boundary.
§.§ Boltzmann triangulations of the disk and the annulus For two integers L≥ 1 and k≥ 0, we let 𝕋^1(L, k) be the set of all pairs (τ, e), where τ is a type I planar triangulation with a simple boundary ∂τ of length L and k internal vertices, and where e is a distinguished edge on ∂τ. Here, type I means that we allow the presence of multiple edges and loops, but the boundary has to remain simple. Each edge e of ∂τ is oriented so that the outer face lies to the left of e (see Figure 1), and we write |∂τ|=L for the boundary size of τ. By convention, we will consider the map consisting of a single oriented (simple) edge e as an element of 𝕋^1(2, 0), and in that special case it is convenient to consider that ∂τ consists of two oriented edges, namely e and e with the reverse orientation. For integers L,p≥ 1 and k≥ 0, we let 𝕋^2(L, p, k) be the set of triplets (τ, e_0, e_1), where τ is a planar triangulation of type I having two vertex-disjoint simple boundaries — namely an outer boundary ∂_0τ of length L and an inner boundary ∂_1τ of length p — and k internal vertices, and where e_0 and e_1 are distinguished edges on ∂_0τ and ∂_1τ respectively. The edges on the boundaries are again oriented so that the boundary faces lie on their left. See <cit.> for more precise definitions. We have the following explicit enumeration formulas (cf. <cit.>): ∀ (L, k)≠ (1, 0), Card 𝕋^1(L, k)=4^{k-1}(2L+3k-5)!!/(k!(2L+k-1)!!) · L\binom{2L}{L}, and ∀ L, p≥ 1, k≥ 0, Card 𝕋^2(L, p, k)=4^{k}(2(L+p)+3k-2)!!/(k!(2(L+p)+k)!!) · L\binom{2L}{L} · p\binom{2p}{p}, with the convention (-1)!!=1. Note that, in the case (L,k)=(2, 0), formula (<ref>) remains valid thanks to the previous convention making the map composed of a single edge an element of 𝕋^1(2,0). In the following, we are interested in triangulations for which the number of internal vertices is random, and we set 𝕋^1(L)=⋃_k≥ 0𝕋^1(L, k) and 𝕋^2(L, p)= ⋃_k≥ 0𝕋^2(L, p, k). A random triangulation 𝒯 in 𝕋^1(L) (resp. in 𝕋^2(L, p)) is said to be Boltzmann distributed if, for every k≥ 0 and every θ∈𝕋^1(L, k) (resp. θ∈𝕋^2(L, p, k)), the probability that 𝒯=θ is proportional to (12√(3))^-k. More precisely, asymptotics of (<ref>) and (<ref>) show that the quantities Z^1(L):=∑_k≥ 0 (12√(3))^-k Card 𝕋^1(L, k), and Z^2(L, p):=∑_k≥ 0 (12√(3))^-k Card 𝕋^2(L,p,k), are finite. The Boltzmann measure on 𝕋^1(L) gives probability Z^1(L)^-1(12√(3))^-k to any triangulation θ∈𝕋^1(L, k), where k≥ 0. Similarly, the Boltzmann measure on 𝕋^2(L,p) gives probability Z^2(L, p)^-1(12√(3))^-k to any triangulation θ∈𝕋^2(L,p, k). By <cit.>, Section 2.2, we have the explicit expression: ∀ L≥ 1, Z^1(L)=6^L(2L-5)!!/(8√(3) L!), where again (-1)!!=1. In the following, it will also be useful to define Z^1(0):=(24√(3))^-1. Finally, we let 𝕋^1, ∙(L, k) be the set of all triangulations in 𝕋^1(L, k) that have (in addition to the distinguished edge on the boundary) another distinguished oriented edge chosen among all edges of the triangulation. This second distinguished edge may or may not be part of the boundary, but we will call it the distinguished interior edge with some abuse of terminology. The Boltzmann measure on 𝕋^1,∙(L)=⋃_k≥ 0𝕋^1,∙(L, k) is again the probability measure that gives probability proportional to (12√(3))^-k to any τ∈𝕋^1,∙(L, k).
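As a quick sanity check on the enumeration formula for Card 𝕋^1(L, k) and on the closed form for Z^1(L) displayed above (the binomial factors were reconstructed from a garbled source and should be treated as an assumption), one can compare a truncated version of the defining series with the closed form numerically. A sketch in log-space, to avoid overflow:

```python
import math

def log_double_fact(n):
    """log(n!!), with the conventions (-1)!! = 0!! = 1."""
    if n <= 0:
        return 0.0
    if n % 2 == 0:
        m = n // 2                      # (2m)!! = 2^m * m!
        return m * math.log(2.0) + math.lgamma(m + 1)
    m = (n + 1) // 2                    # (2m-1)!! = (2m)! / (2^m * m!)
    return math.lgamma(2 * m + 1) - m * math.log(2.0) - math.lgamma(m + 1)

def log_card_T1(L, k):
    """log of the displayed formula for Card T^1(L, k) (assumed reconstruction)."""
    return ((k - 1) * math.log(4.0)
            + log_double_fact(2 * L + 3 * k - 5) - math.lgamma(k + 1)
            - log_double_fact(2 * L + k - 1)
            + math.log(L) + math.lgamma(2 * L + 1) - 2.0 * math.lgamma(L + 1))

def Z1_series(L, kmax=200_000):
    """Truncated Boltzmann sum  sum_k (12*sqrt(3))^(-k) * Card T^1(L, k)."""
    logw = -math.log(12.0 * math.sqrt(3.0))
    return sum(math.exp(log_card_T1(L, k) + k * logw) for k in range(kmax))

def Z1_closed(L):
    """Closed form Z^1(L) = 6^L (2L-5)!! / (8 sqrt(3) L!)."""
    return math.exp(L * math.log(6.0) + log_double_fact(2 * L - 5)
                    - math.log(8.0 * math.sqrt(3.0)) - math.lgamma(L + 1))

for L in (2, 3, 5):
    print(L, round(Z1_series(L), 4), round(Z1_closed(L), 4))
```

For L=2 the closed form gives 36/(16√(3)) ≈ 1.299, and the truncated series converges to this value (the summands decay like k^(-5/2), so a large cutoff is needed for high accuracy, consistent with the asymptotics quoted just below).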
This makes sense because a triangulation τ∈𝕋^1(L, k) has 3k+2L -3 edges, by Euler's formula, so that the number of ways of choosing an oriented edge in τ is 6k+4L-6 and we have: Z^1, ∙(L):= ∑_k≥ 0(6k+4L-6)(12√(3))^-kCard 𝕋^1( L, k)<∞, since Card 𝕋^1(L, k)= O((12√(3))^kk^-5/2) when k→∞. Note that Card 𝕋^1, ∙(2, 0)=2. Peeling and the discrete spatial Markov property We now recall the main properties of the so-called peeling algorithm. We refer to <cit.> for a more detailed introduction to this algorithm. In the following, it will be convenient to add an isolated point † to the different state spaces that we will consider. The point † will play the role of a cemetery point when the exploration given by the peeling algorithm hits the boundary. Fix p≥ 1, γ∈𝕋^2(L, p) and let e be an edge of ∂_1γ (this edge will be called the peeled edge). Let u be the vertex opposite e in the unique internal face f of γ incident to e. Three configurations may occur: * u is an internal vertex of γ, in this case we call peeling of γ along the edge e the sub-triangulation of γ consisting of the internal faces of γ distinct from f. We see this triangulation as an element of 𝕋^2(L, p+1). * u is an element of the inner boundary ∂_1γ. In this case f splits γ into two components, only one of which is incident to the outer boundary ∂_0γ. We call peeling of γ along the edge e the sub-triangulation consisting of the faces of this component, that we see as an element of 𝕋^2(L, p') for some 1 ≤ p'≤ p. * Finally, if u belongs to the outer boundary of γ, we say by convention that the “triangulation” obtained by peeling γ along e is †. Note that this description is slightly incomplete since it would be necessary to specify (in the first two cases) how the new distinguished edge on the inner boundary is chosen. In what follows, we will iterate the peeling algorithm, and it will be sufficient to say that this new distinguished edge is chosen at every step as a deterministic function of the rooted planar map that is made of the initial inner boundary and of the faces that have been “removed” by the peeling algorithm up to this step. Let us fix an algorithm 𝒜 that chooses for any triangulation τ∈⋃_p≥ 1𝕋^1, ∙(p) an edge e of ∂τ. The peeling of a triangulation according to the algorithm 𝒜 consists in recursively applying the peeling procedure described above, choosing the peeled edge at each step as prescribed by 𝒜. Let us give a more precise description. We start with a triangulation γ∈𝕋^1,∙(L) and we let e_0 be its distinguished interior edge. If e_0 is incident to the boundary ∂γ of γ, we set by convention γ_0=τ_0=†. Otherwise, if e_0 is a loop, we let τ_0 be the triangulation induced by the faces of γ inside the loop and we let γ_0 be the triangulation that consists of the faces of γ outside this loop. We view τ_0 as an element of 𝕋^1, ∙(1, k) for some k≥ 0 (we let both distinguished edges to be the loop e_0 oriented clockwise) and we view γ_0 as an element of 𝕋^2(L, 1) by seeing the loop as bounding an internal face of degree one. Finally, if e_0 is a simple edge (not incident to ∂γ), we let τ_0 be the unique element of 𝕋^1, ∙(2, 0) with both distinguished edges oriented in the same direction, and γ_0 is the element of 𝕋^2 (L, 2) obtained from γ by splitting the edge e_0 so as to create an inner boundary face of degree 2 (cf. figure 2) – note that our special convention for ∂τ_0 explained at the beginning of Section <ref> allows us to identify ∂_1γ_0 with ∂τ_0 in that case. 
We then build recursively two sequences (τ_i)_i≥ 0 (the explored part) and (γ_i)_i≥ 0 (the unexplored part), in such a way that, for every i≥ 0 such that τ_i≠†, we have τ_i∈𝕋^1, ∙(p) and γ_i∈𝕋^2(L, p), for some p≥ 1, and the inner boundary ∂_1γ_i is identified with ∂τ_i. Assume that we have constructed τ_i and γ_i for some i≥ 0. If τ_i=†, we set τ_i+1=γ_i+1=†. Otherwise the algorithm 𝒜 applied to τ_i yields an edge e of ∂τ_i=∂_1γ_i. The triangulation γ_i+1 is obtained by peeling γ_i along this edge. If γ_i+1≠†, we let τ_i+1 be the triangulation obtained by adding to τ_i the faces of γ_i that we removed by the peeling of e. The distinguished edge on the boundary of τ_i+1 is the one that is identified with the distinguished edge of γ_i+1 on its second boundary, and the other distinguished edge of τ_i+1 is taken to be the same as the one of τ_i. Finally, if γ_i+1=†, we simply take τ_i+1=†. In the case of Boltzmann triangulations, the peeling is a “Markovian exploration”. More precisely, we apply the peeling procedure described above to a random triangulation 𝒟^L distributed according to the Boltzmann measure on 𝕋^1, ∙(L). This gives rise to two sequences of random triangulations (T_i^L)_i≥ 0 (explored parts) and (U_i^L)_i≥ 0 (unexplored parts). Then, conditionally on the event {T_i^L≠†} and on the value |∂ T_i^L|, the triangulation U_i^L is distributed according to the Boltzmann measure on 𝕋^2(L, |∂ T_i^L|) and is independent of T_i^L. We will call this property the spatial Markov property for the peeling of Boltzmann triangulations. Peeling by layers and perimeter process Let x_*^L be the root of the distinguished interior edge of 𝒟^L and let Δ^L be the graph distance in 𝒟^L. In the following, we will use a particular peeling algorithm — that is, a particular choice of 𝒜 — which we call the peeling by layers. This algorithm is designed to satisfy the following additional property: for every i such that T_i^L≠†, if we set h_i^L:=Δ^L(x_*^L, ∂ T_i^L), then for every vertex u of ∂ T^L_i, we have h_i^L≤Δ^L(u, x_*^L)≤ h_i^L+1. In other words, at every step the distances from boundary vertices of T_i^L to x_*^L in 𝒟^L take at most two consecutive values. It is easy to choose the peeling algorithm so that this property holds, and we will assume that (T_i^L)_i≥ 0 and (U_i^L)_i≥ 0 are obtained by such a peeling algorithm. We refer to <cit.> for a more precise description of the peeling by layers algorithm. An important object for us is the random sequence (|∂ T_i^L|)_i≥ 0 taking values in ℕ∪{†} and recording the evolution of the perimeter of the part explored by the peeling by layers algorithm, where by convention |∂ T_i^L|=† if T_i^L=†. By the arguments of <cit.>, Section 3, conditionally on the value of |∂ T_0^L|∈{1, 2, †}, this perimeter process is a Markov chain on ℕ∪{†} starting from |∂ T_0^L| ∈{1,2,†} whose transition kernel q_L is given for every k≥ 1 and m∈{-1, 0, …, k-1} by: q_L(k, k-m)=2Z^1(m+1)Z^2(L, k-m)/Z^2(L, k), and q_L(k, †)=1-∑_m=-1^k-1 q_L(k, k-m) for all k≥ 1, q_L(†, †)=1. This kernel is closely related to the transition kernel q_∞ of the perimeter process of the UIPT of type I (cf. <cit.>, Section 6.1), which is defined for every k≥ 1 and m∈{-1,0, …, k-1} by: q_∞(k, k-m)= 2Z^1(m+1)C^(1)(k-m)/C^(1)(k), where we wrote C^(1)(k):=3^{k-2}/(4√(2π)) · k\binom{2k}{k}. As noted in <cit.>, the Markov chain associated with the kernel q_L is a Doob h-transform of the chain associated with q_∞, for the harmonic function 𝐡_L(j):=L/(L+j), j≥ 1.
More precisely, for every p≥ 1, m∈{-1,0, …, p-1}: q_L(p, p-m)=(𝐡_L(p-m)/𝐡_L(p)) q_∞(p, p-m).
§.§ Convergence of metric spaces In order to state the convergence of (rescaled) Boltzmann triangulations with two boundaries towards the Brownian annulus, we will consider the space 𝕄 of all isometry classes of compact metric spaces, and we will write d_𝙶𝙷 for the usual Gromov-Hausdorff distance on 𝕄. Then (𝕄, d_𝙶𝙷) is a Polish space. We will use analogs of the Gromov-Hausdorff distance for spaces marked with subspaces and measures, which we present along the lines of <cit.>. Here and in what follows, if (E,Δ) is a compact metric space, we will write Δ_𝙷 and Δ_𝙿 for the Hausdorff and Prokhorov metrics associated with Δ, which are defined respectively on the set of all nonempty compact subsets of E and on the set of all finite Borel measures on E. For l∈ℕ, we let 𝕄^l, 1 be the set of all isomorphism classes (for an obvious notion of isomorphism) of compact metric spaces marked with l compact subspaces and a finite measure. More precisely, we consider marked spaces of the form ((𝒳, d_𝒳), 𝐀, μ) where: • (𝒳, d_𝒳) is a compact metric space, • 𝐀=(𝐀_1,…,𝐀_l) is an l-tuple of compact subsets of 𝒳, • μ is a finite Borel measure on 𝒳. The set 𝕄^l,1 is endowed with a metric d^l,1_𝙶𝙷𝙿, which is defined for any two spaces 𝕏=((𝒳, d_𝒳), 𝐀, μ) and 𝕐=((𝒴, d_𝒴), 𝐁, ρ) in 𝕄^l,1 by: d^l,1_𝙶𝙷𝙿(𝕏, 𝕐)=inf max{Δ_𝙷(ι_𝒳(𝒳), ι_𝒴(𝒴)), max_1≤ i≤ l Δ_𝙷(ι_𝒳(𝐀_i), ι_𝒴(𝐁_i)), Δ_𝙿((ι_𝒳)_*μ, (ι_𝒴)_*ρ)}, where the infimum is taken over all compact metric spaces (𝒵, Δ) and all isometric embeddings ι_𝒳:(𝒳, d_𝒳)→ (𝒵, Δ) and ι_𝒴:(𝒴, d_𝒴)→ (𝒵, Δ). Then d_𝙶𝙷𝙿^l,1 is a metric on 𝕄^l, 1. Furthermore, (𝕄^l,1,d^l,1_𝙶𝙷𝙿) is a Polish space. In what follows, we will be interested in the case l=2: the Brownian annulus comes with a volume measure and with two distinguished subsets which are its boundaries.
§.§ The Bettinelli-Miermont construction of the Brownian disk This section presents a variant of the Bettinelli-Miermont construction of the free Brownian disk, which is based on a quotient space defined from a Poisson family of Brownian trees. We borrow the formalism of <cit.>. The Brownian snake We start with a brief presentation of the Brownian snake, referring to <cit.> for more details. Let 𝒲 be the set of continuous paths w:[0, ζ(w)]→ℝ, where ζ(w)≥ 0 is a nonnegative real number called the lifetime of w. We endow this set with the distance: d_𝒲(w, w')=|ζ(w)-ζ(w')|+sup_t≥ 0 |w(t∧ζ(w))-w'(t∧ζ(w'))|. For every x∈ℝ, let 𝒲_x be the set of all w∈𝒲 such that w(0)=x. We identify the unique element of 𝒲_x having lifetime 0 with the real number x. A snake trajectory starting at x is a continuous function ω:ℝ_+→𝒲_x satisfying: • ω_0=x and σ(ω):=sup{s≥ 0, ω_s≠ x}<∞; • for all 0≤ s≤ s', ω_s(t)=ω_s'(t) whenever t≤min_u∈ [s, s']ζ(ω_u). Let 𝒮_x be the set of snake trajectories starting from x, which we endow with the distance: d_𝒮_x(ω, ω')=|σ(ω)-σ(ω')|+sup_s≥ 0 d_𝒲(ω_s,ω'_s). If ω∈𝒮_x, we let ζ_ω:ℝ_+→ℝ_+ be the function defined by setting ζ_ω(s):=ζ(ω_s) and we also write ω̂ for the function called the head of the snake trajectory ω defined by ω̂(s):= ω_s(ζ_ω(s)). One easily verifies that ω is entirely determined by the two functions ζ_ω and ω̂. We will also use the notation W_*(ω)=min{ω̂_s:s∈ [0,σ(ω)]}. Given a snake trajectory ω, we can define a (labelled compact) ℝ-tree T_ω, which is called the genealogical tree of ω.
To construct this tree, we introduce the pseudo-distance d_ω on [0, σ(ω)] given by: ∀ s, t∈ [0, σ(ω)], d_ω(s, t)= ζ_ω(s)+ζ_ω(t)-2min_u∈ [s, t]ζ_ω(u), and we define T_ω as the quotient space of [0, σ(ω)] for the equivalence relation s∼ t iff d_ω(s, t) =0, which is equipped with the metric induced by d_ω. We let p_ω:[0, σ(ω)]→ T_ω be the canonical projection and we write ρ_ω:=p_ω(0) for the “root” of T_ω. The volume measure on T_ω is just the pushforward of Lebesgue measure on [0,σ(ω)] under the projection p_ω. By the definition of snake trajectories, the property p_ω(s)=p_ω(t) implies that ω̂(s)=ω̂(t). Thus we can define a natural labelling ℓ_ω:T_ω→ℝ by requiring that ω̂=ℓ_ω∘ p_ω. Let x∈ℝ. The Brownian snake excursion measure with initial point x is the σ-finite measure ℕ_x on 𝒮_x such that the pushforward of ℕ_x under the function ω↦ζ_ω is the Itô measure of positive Brownian excursions, normalized so that ℕ_x(sup_s≥ 0ζ_ω(s)≥ε)=1/(2ε), and such that, under ℕ_x and conditionally on ζ_ω, the process (ω̂_s)_s≥ 0 is a Gaussian process centered at x with covariance kernel K(s, s')=min_u∈[s, s']ζ_ω(u) when s≤ s'. We will use some properties of exit measures of the Brownian snake. If w∈𝒲 and y∈ℝ, we write τ_y(w)=inf{t≤ζ(w): w(t)=y} with the convention inf∅ =+∞. If x∈ℝ and y∈(-∞,x), the quantity: 𝒵_y(ω):=lim_ε→ 0 (1/ε^2)∫_0^σ(ω)1_{τ_y(ω_s)= ∞, ω̂(s)<y+ε} ds, exists ℕ_x(dω) almost everywhere and is called the exit measure at y. The process (𝒵_y(ω))_y∈(-∞,x) has a càdlàg modification with no positive jumps, which we consider from now on. The free Brownian sphere Let us now recall the construction of the free Brownian sphere under the measure ℕ_0(dω). We start by recalling the definition of “intervals” on the genealogical tree T_ω of a snake trajectory ω. We use the convention that, if s,t∈[0,σ(ω)] and s>t, the interval [s,t] is defined by [s,t]=[s,σ(ω)]∪[0,t]. Then, if u,v∈ T_ω, there is a smallest interval [s,t], with s,t∈[0,σ(ω)], such that p_ω(s)=u and p_ω(t)=v, and we define [u,v]=p_ω([s,t]). We set, for every u,v∈ T_ω, 𝐃^∘(u, v):=ℓ_ω(u)+ℓ_ω(v)-2max(min_w∈ [u, v]ℓ_ω(w), min_w∈ [v, u]ℓ_ω(w)), and 𝐃(u, v):= inf_u=u_0,u_1, …, u_p=v∑_i=1^p𝐃^∘(u_{i-1}, u_i), where the infimum is taken over all choices of the integer p≥ 1 and the points u_0,…, u_p∈ T_ω such that u_0=u and u_p=v. Then, 𝐃 is a pseudo-metric on T_ω, and the free Brownian sphere is the associated quotient space 𝐦_∞=T_ω /{𝐃=0}, which is equipped with the metric induced by 𝐃, for which we keep the same notation. We note that the free Brownian sphere is a geodesic space (any two points are linked by at least one geodesic). We emphasize that the free Brownian sphere is defined under the infinite measure ℕ_0, but later we will consider specific conditionings of ℕ_0 giving rise to finite measures. We write Π for the canonical projection from T_ω onto 𝐦_∞. The volume measure Vol(·) on 𝐦_∞ is the pushforward of the volume measure on T_ω under Π. For u,v∈ T_ω, the property 𝐃(u,v)=0 implies ℓ_ω(u)=ℓ_ω(v), and so we can define ℓ(x) for every x∈𝐦_∞, in such a way that ℓ(x)=ℓ_ω(u) whenever x=Π(u). There is a unique point 𝐱_* of 𝐦_∞ such that ℓ(𝐱_*)=min_x∈𝐦_∞ℓ(x), and we have 𝐃(𝐱_*,x)= ℓ(x)-ℓ(𝐱_*) for every x∈𝐦_∞. We will write 𝐫_*:=-ℓ(𝐱_*). We also observe that the free Brownian sphere has another distinguished point, namely 𝐱_0:=Π(ρ_ω). Note that 𝐃(𝐱_*,𝐱_0)=-ℓ(𝐱_*)=𝐫_*. Let us now turn to hulls. For every r>0 and x∈𝐦_∞, we write B^∞_r(x) for the closed ball of radius r centered at x in 𝐦_∞.
Then, for every r∈(0,𝐫_*), the hull B^∙_r(𝐱_*) is the complement of the connected component of 𝐦_∞∖ B^∞_r(𝐱_*) that contains 𝐱_0 (this makes sense because 𝐱_0∉ B^∞_r(𝐱_*) when r<𝐫_*). Note that all points of ∂ B^∙_r(𝐱_*) are at distance r from 𝐱_*. By definition, the perimeter of the hull B^∙_r(𝐱_*) is the exit measure 𝐏_r:=𝒵_r-𝐫_*. This definition is justified by the property 𝐏_r=lim_ε→ 0 (1/ε^2) Vol({x∈𝐦_∞∖ B^∙_r(𝐱_*):𝐃(x,B^∙_r(𝐱_*))<ε}), which can be deduced from (<ref>). The process (𝐏_r)_r∈(0,𝐫_*) has càdlàg sample paths and no positive jumps. The Bettinelli-Miermont construction of the Brownian disk We now present a construction of the free pointed Brownian disk, which is the compact metric space that appears as the scaling limit of Boltzmann triangulations in 𝕋^1, ∙(L). We fix a>0 and let (𝚎(t))_t∈ [0, a] be a positive Brownian excursion of duration a. Conditionally on (𝚎(t))_t∈[0,a], let 𝒩=∑_i∈ Iδ_(t_i, ω^i) be a Poisson point measure on [0, a]×𝒮 with intensity 2 dt ℕ_√(3)𝚎(t)(dω). We let ℐ be the quotient space of [0, a]∪⋃_i∈ I T_ω^i, for the equivalence relation that identifies ρ_ω^i and t_i for every i∈ I (and no other pair of points is identified). We endow ℐ with the maximal distance d_ℐ whose restriction to each tree T_ω^i coincides with d_ω^i, and whose restriction to [0,a] is the usual distance. More explicitly, the distance between two points x∈ T_ω^i and y∈ T_ω^j, i≠ j, is given by d_ω^i(x, ρ_ω^i)+|t_i-t_j|+ d_ω^j(y, ρ_ω^j). Then ℐ is a compact metric space (in fact, a compact ℝ-tree), and we can consider the labelling ℓ:ℐ→ℝ defined by ℓ(x)=ℓ_ω^i(x) if x∈ T_ω^i for some i∈ I, and ℓ(x)=√(3)𝚎(x) if x∈ [0, a]. By standard properties of the Itô measure, one verifies that the quantity Σ:=∑_i∈ Iσ(ω^i) is almost surely finite and it is possible to concatenate the functions p_ω^i to obtain a “contour exploration” π:[0, Σ]→ℐ. Formally, to define π, let μ=∑_i∈ Iσ(ω^i)δ_t_i be the point measure on [0, a] giving weight σ(ω^i) to t_i, for every i∈ I, and consider the left-continuous inverse μ^-1 of its cumulative distribution function, μ^-1(s):=inf{t∈[0,a]: μ([0, t])≥ s} for every s∈[0,Σ]. For every s∈ [0, Σ], we set π(s)=p_ω^i(s-μ([0, μ^-1(s)))) if μ^-1(s)=t_i for some i∈ I and π(s)=μ^-1(s) otherwise. This contour exploration π allows us to define intervals on ℐ, in a way similar to what we did on T_ω. For every u, v∈ℐ, there exists a smallest interval [s, t] in [0, Σ] such that π(s)=u and π(t)=v, where by convention [s, t]=[s, Σ]∪[0, t] if s> t, and we write [u, v] for the subset of ℐ defined by [u, v]={π(b), b∈ [s, t]}. We then set ∀ u, v∈ℐ, D^∘(u, v):=ℓ(u)+ℓ(v)-2max(min_w∈ [u, v]ℓ(w), min_w∈ [v, u]ℓ(w)), and we consider the pseudo-metric D on ℐ defined for u, v∈ℐ by: D(u, v):= inf_u=u_0,u_1, …, u_p=v∑_i=1^pD^∘(u_{i-1}, u_i), where the infimum is taken over all choices of the integer p≥ 1 and the points u_0,…, u_p∈ℐ such that u_0=u and u_p=v. The space 𝔻_(a) is defined as the quotient space ℐ/{D=0}, which we equip with the distance induced by D, for which we keep the notation D. Then 𝔻_(a) is a compact metric space. Let Π:ℐ→𝔻_(a) be the canonical projection. It is easy to verify that Π(x)=Π(y) implies ℓ(x)=ℓ(y), and so 𝔻_(a) inherits a labelling function, still denoted by ℓ(·), from the labelling of ℐ. We can then define: • 𝐕=(Π∘π)_* λ_[0, Σ], where λ_[0, Σ] denotes Lebesgue measure on [0, Σ]. This is a finite Borel measure on 𝔻_(a) called the volume measure. • ∂𝔻_(a):=Π([0, a]), which is the “boundary” of 𝔻_(a). • x_* is the point of minimal label in 𝔻_(a).
We then view ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) as a random variable in 𝕄^2, 1. This is the free pointed Brownian disk of perimeter a. Like the (free) Brownian sphere, the (free pointed) Brownian disk is a geodesic space. In a way similar to the Brownian sphere, we have D(x,x_*)=ℓ(x)-ℓ(x_*) for every x∈𝔻_(a). In particular, if we set r_*:= -ℓ(x_*) =-min_x∈𝔻_(a)ℓ(x), we have r_*=D(x_*, ∂𝔻_(a)) (note that ℓ(u)=√(3) 𝚎(u)≥ 0 for every u∈ [0,a]⊂ℐ). Occasionally (in particular in Proposition <ref> below), we will also say that the space ((𝔻_(a),D),x_*,𝐕) — which is a random element of 𝕄^1,1 — is a free pointed Brownian disk of perimeter a: this makes no real difference, as the boundary ∂𝔻_(a) can be recovered as the closed subset of 𝔻_(a) consisting of points that have no neighborhood homeomorphic to the open unit disk. Hulls in the Brownian disk Consider the Brownian disk 𝔻_(a) as defined above. For every r>0, let B_r(x_*) stand for the closed ball of radius r centered at x_* in 𝔻_(a). For every r∈(0,r_*], we define the hull H_r as the complement in 𝔻_(a) of the connected component of 𝔻_(a)∖ B_r(x_*) that intersects the boundary ∂𝔻_(a) (in fact, for r<r_*, this connected component must contain the whole boundary). Points of ∂ H_r are at distance r from x_*. In a way analogous to the definition of 𝐏_r for the Brownian sphere, we define the perimeter of H_r by 𝒫_r=∑_i∈ I 𝒵_r-r_*(ω^i). Then 𝒫_r satisfies a formula analogous to (<ref>) (if r<r_*, there are only finitely many indices i such that 𝒵_r-r_*(ω^i)>0). We also take 𝒫_0=0. The process (𝒫_r)_r∈[0,r_*] has càdlàg sample paths and no positive jumps. Let r>0. Then the law of 𝒫_r under ℙ(·∩{r<r_*}) has density y↦ 3√(3/(2π)) r^-3 (a/(a+y)) √(y) e^-3y/(2r^2) with respect to Lebesgue measure on (0,∞). We postpone the proof to the Appendix, as this result is not really needed in what follows. It will be useful to describe the hull H_r in terms of the labelled tree ℐ of the Bettinelli-Miermont construction. Let x∈ℐ and suppose first that x∈ T_ω^i for some i∈ I. Since T_ω^i is an ℝ-tree, there is a unique continuous injective path linking x to the root ρ_ω^i of T_ω^i, which is called the ancestral line of x. We let m_x be the minimum label along this path. If x∈[0,a], we take m_x=ℓ(x). Then we have m_x=m_y if Π(x)=Π(y), and thus the mapping ℐ∋ x↦ m_x induces a continuous function from 𝔻_(a) to ℝ which we denote again by 𝔻_(a)∋ u ↦ m_u. Using the cactus bound (see <cit.> for this bound in the setting of the Brownian sphere, which is easily extended), one gets that: H_r= {u∈𝔻_(a) : m_u≤ -r_*+r}. Similarly, the boundary ∂ H_r of H_r in 𝔻_(a) is the image under Π of the set of all points x∈ℐ such that we have both ℓ(x)=r-r_* and all points of the ancestral line of x (with the exception of x) have a label greater than r-r_*. Brownian disks in the Brownian sphere We now explain how the free pointed Brownian disk of the previous section can be obtained as a subset of the free Brownian sphere under a particular conditioning of the measure ℕ_0. We first recall a result from <cit.>. Let r>0, and argue under the conditional probability measure ℕ_0(·|𝐫_*>r). We can then consider the hull B^∙_r(𝐱_*), and we write B̌^∘_r(𝐱_*)=𝐦_∞∖ B^∙_r(𝐱_*), and B̌^∙_r(𝐱_*) for the closure of B̌^∘_r(𝐱_*). We equip the open set B̌^∘_r(𝐱_*) with the intrinsic metric 𝐝^∘: for every x,y∈B̌^∘_r(𝐱_*), 𝐝^∘(x,y) is the infimum of lengths of continuous paths connecting x to y that stay in B̌^∘_r(𝐱_*).
Then, according to <cit.>, under the probability measure ℕ_0(·|𝐫_*>r), the intrinsic metric on the set B̌^∘_r(𝐱_*) has a continuous extension to its closure B̌^∙_r(𝐱_*), which is a metric on B̌^∙_r(𝐱_*), and the random metric space (B̌^∙_r(𝐱_*),𝐝^∘) equipped with the restriction of the volume measure on 𝐦_∞ and with the distinguished point 𝐱_0 is a free pointed Brownian disk of (random) perimeter 𝐏_r. For our purposes, it will be useful to have a version of the preceding result when r is replaced by a random radius. For every a>0, we define, under ℕ_0, 𝐫_a:=inf{r∈(0,𝐫_*): 𝒵_r-𝐫_*=a} with the usual convention inf∅=∞. By <cit.>, we have ℕ_0(𝐫_a<∞)=(2a)^-1. For future use, we record the following simple fact. If (a_n)_n∈ℕ is a sequence decreasing to a, we have 𝐫_a_n↓𝐫_a as n→∞, ℕ_0 a.e. on the event {𝐫_a<∞}. This follows from the description of the law of the process (𝒵_r)_r<0 under ℕ_0, as a time change of the excursion of a stable Lévy process, see <cit.>. Let a>0. Almost surely under the probability measure ℕ_0(·|𝐫_a<∞), the intrinsic metric on the set B̌^∘_𝐫_a(𝐱_*) has a continuous extension to its closure B̌^∙_𝐫_a(𝐱_*), which is a metric on B̌^∙_𝐫_a(𝐱_*), and the resulting random metric space equipped with the restriction of the volume measure on 𝐦_∞ and with the distinguished point 𝐱_0 is a free pointed Brownian disk of perimeter a. The shortest way to prove this proposition is to use Proposition 10 in <cit.>, which determines the distribution under ℕ_0(dω|𝐫_a<∞) of the snake trajectory ω truncated at level 𝐫_a-𝐫_*, which is denoted by tr_𝐫_a-𝐫_*(ω) (we refer e.g. to <cit.> for a definition of this truncation operation). On one hand, the space 𝐦_∞∖ B^∙_𝐫_a(𝐱_*) equipped with its intrinsic metric can be obtained as a function of tr_𝐫_a-𝐫_*(ω), as it is explained in the proof of <cit.>. On the other hand, Proposition 10 in <cit.> shows that this snake trajectory has exactly the distribution of the random snake trajectory that codes the free pointed Brownian disk in the construction of <cit.> — which is known to be equivalent to the Bettinelli-Miermont construction presented above. We omit the details, since Proposition <ref> is clearly a variant of Theorem 8 in <cit.>. Proposition <ref> allows us to couple Brownian disks with different perimeters. Consider a decreasing sequence (a_n)_n∈ℕ that converges to a. On the event {𝐫_a_n<∞}, B̌^∙_𝐫_a_n(𝐱_*) and B̌^∙_𝐫_a(𝐱_*) are both well defined, and we have trivially B̌^∙_𝐫_a_n(𝐱_*)⊂B̌^∙_𝐫_a(𝐱_*). Furthermore, a.e. on the event {𝐫_a<∞}, we have 𝐫_a_n<∞ for all n large enough, 𝐫_a_n↓𝐫_a as n→∞, and sup{𝐃(x,∂B̌^∙_𝐫_a(𝐱_*)):x∈B̌^∙_𝐫_a(𝐱_*)∖B̌^∙_𝐫_a_n(𝐱_*)} ⟶ 0 as n→∞. Let us justify (<ref>). First note that, for every x∈B̌^∘_𝐫_a(𝐱_*), there is a path from x to 𝐱_0 that does not hit B^∙_𝐫_a(𝐱_*), and thus stays at positive distance from ∂B̌^∙_𝐫_a(𝐱_*). Since 𝐫_a_n↓𝐫_a, it follows that x∈B̌^∘_𝐫_a_n(𝐱_*) for n large enough, and we have proved that, a.e. on the event {𝐫_a<∞}, B̌^∘_𝐫_a(𝐱_*)=⋃_n∈ℕ,𝐫_a_n<∞B̌^∘_𝐫_a_n(𝐱_*), from which (<ref>) easily follows via a compactness argument.
§ THE BROWNIAN ANNULUS §.§ The definition of the Brownian annulus We again fix a>0 and write (𝔻_(a),D) for the free pointed Brownian disk of perimeter a in the Bettinelli-Miermont construction described above. Recall the notation x_* for the distinguished point of 𝔻_(a) and r_*=D(x_*,∂𝔻_(a)). Also recall that 𝒫_r stands for the perimeter of the hull H_r of radius r. We fix b>0, and set r_b=inf{r∈[0,r_*): 𝒫_r=b}, with again the convention inf∅=∞. Note that r_b<∞ if and only if b<𝒫^*, where 𝒫^*=sup{𝒫_r:r∈[0,r_*)}.
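Since (𝒫_r) has no positive jumps, it can only reach the level b continuously, so on a fine discretisation grid the first index at which the perimeter is at least b approximates r_b. The toy snippet below is purely illustrative (the arrays are hypothetical, e.g. the output of some simulation of the branching process appearing later in this section) and is not part of the construction.

```python
def first_hitting_radius(r_grid, P, b):
    """First radius r in r_grid with P[r] >= b, playing the role of
    r_b = inf{r : P_r = b} (with inf of the empty set equal to infinity).
    Thresholding is legitimate here because the perimeter process has no
    positive jumps, hence reaches the level b continuously if at all."""
    for r, p in zip(r_grid, P):
        if p >= b:
            return r
    return None                      # corresponds to r_b = infinity, i.e. b >= P^*

# hand-made example path (hypothetical numbers)
r_grid = [i / 10 for i in range(1, 11)]
P = [0.2, 0.5, 0.9, 1.4, 1.1, 1.6, 2.2, 1.0, 0.6, 0.3]
print(first_hitting_radius(r_grid, P, b=2.0))   # -> 0.7
print(first_hitting_radius(r_grid, P, b=3.0))   # -> None
```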
The next theorem is then an analog of Proposition <ref>. Almost surely under the probability measure ℙ(·| r_b<∞), the intrinsic metric on 𝔻_(a)∖ H_r_b has a continuous extension to the closure of 𝔻_(a)∖ H_r_b, which is a metric on this set. The resulting random metric space, which we denote by (ℂ_(a,b),d^∘), is called the Brownian annulus with perimeters a and b. The terminology will be justified by forthcoming results showing that the Brownian annulus is the Gromov-Hausdorff limit of triangulations with two boundaries. We note that the Brownian annulus ℂ_(a,b) has two “boundaries”, namely ∂_0ℂ_(a,b)=∂𝔻_(a), and ∂_1ℂ_(a,b)=∂ H_r_b. Furthermore, distances in ℂ_(a,b) from the second boundary ∂_1ℂ_(a,b) correspond to labels in the Bettinelli-Miermont construction. More precisely, for every z∈ℂ_(a,b), D(z,∂_1ℂ_(a,b))=D(z,x_*)-r_b=ℓ(z)-(r_b-r_*). This follows from the interpretation of labels in terms of distances from x_*, recalling that all points of ∂_1ℂ_(a,b)=∂ H_r_b are at distance r_b from x_*. We may and will assume that the Brownian disk 𝔻_(a) is constructed as the subset B̌^∙_𝐫_a(𝐱_*) of the free Brownian sphere 𝐦_∞ under the probability measure ℕ_0(·|𝐫_a<∞), as in Proposition <ref>, and, in particular, the distinguished point of 𝔻_(a) is the point 𝐱_0 of the Brownian sphere. Furthermore, for every r∈ (0,D(𝐱_0,∂𝔻_(a))), the hull H_r in the Brownian disk 𝔻_(a) coincides with the hull B^∙_r(𝐱_0) in 𝐦_∞ (defined as the complement of the connected component of 𝐦_∞∖ B^∞_r(𝐱_0) that contains 𝐱_*). In particular, on the event {r_b<∞}, we have r_b=𝐫_b, where 𝐫_b is the hitting time of b by the process of perimeters of the hulls B^∙_r(𝐱_0), r∈ (0,𝐫_*). Furthermore, conditioning 𝔻_(a) on the event that r_b<∞ is equivalent to arguing under the conditional probability ℕ_0(·|𝐃(𝐱_0,𝐱_*)>𝐫_a+𝐫_b). Now note that 𝐱_* and 𝐱_0 play symmetric roles in the Brownian sphere 𝐦_∞ (cf. <cit.>), and that proving that the intrinsic metric on 𝔻_(a)∖ H_r_b has a continuous extension, which is a metric, to its closure is equivalent to proving that the intrinsic metric on 𝐦_∞∖ B^∙_𝐫_b(𝐱_0) has a continuous extension, which is a metric, to its closure. By symmetry, this is equivalent to proving that the intrinsic metric on 𝐦_∞∖ B^∙_𝐫_b(𝐱_*) has a continuous extension, which is a metric, to its closure. But we know from Proposition <ref> that this is true. It turns out that the probability of the conditioning event {r_b<∞} has a very simple expression, which will be useful in forthcoming calculations. We have ℙ(r_b<∞)=a/(a+b). Let us set 𝒫̌_r=𝒫_r_*-r for r∈[0,r_*], so that 𝒫̌_r=∑_i∈ I 𝒵_-r(ω^i), in the notation of (<ref>). From the identification of the law of the exit measure process under ℕ_0 (see e.g. Section 2.4 in <cit.>), it is not hard to verify that (𝒫̌_r)_r∈[0,r_*] is a continuous-state branching process with branching mechanism ψ(λ):=√(8/3) λ^3/2. Furthermore, Remark (ii) at the end of <cit.> shows that the initial value 𝒫̌_0=𝒫_r_* of this continuous-state branching process has density (3/2) a^3/2 (a+z)^-5/2. The classical Lamperti transformation allows us to write (𝒫̌_r)_r∈[0,r_*] as a time change of a (centered) spectrally positive Lévy process with Laplace exponent ψ and the same initial distribution, which is stopped upon hitting 0. For this Lévy process started at z, the probability that it never hits b is equal to √((b-z)^+/b) (cf. <cit.>). From the preceding considerations, we get ℙ(r_b=∞)= (3/2) a^3/2 ∫_0^b (dz/(a+z)^5/2) √((b-z)/b)= b/(a+b). This completes the proof.
§.§ A technical lemma We keep the notation of the preceding section.
In the following lemma, lengths of paths refer to the metric on the Brownian disk 𝔻_(a). Let η>0. Then, almost surely, for every x,y∈ℂ_(a,b)∖∂_1ℂ_(a,b), for every continuous path γ in ℂ_(a,b) connecting x to y and with finite length L(γ), we can find a path γ' staying in ℂ_(a,b)∖∂_1ℂ_(a,b) and connecting x to y, whose length is bounded by L(γ)+η. Let us set ℂ^∘_(a,b)=ℂ_(a,b)∖(∂_0ℂ_(a,b)∪∂_1ℂ_(a,b)), which can be viewed as the “interior” of ℂ_(a,b). In order to prove Lemma <ref>, it is enough to consider the case where x,y∈ℂ^∘_(a,b) and the path γ stays in ℂ_(a,b)∖∂_0ℂ_(a,b). If this is not the case, we can cover the set of times t at which γ(t) belongs to ∂_1ℂ_(a,b) by finitely many disjoint closed intervals I=[s_I,t_I] such that γ(t)∈ℂ_(a,b)∖∂_0ℂ_(a,b) for every t∈ I and γ(s_I),γ(t_I)∉∂_1ℂ_(a,b), and we consider the restriction of γ to each of these intervals. Fix ε>0 and, for every u>0, let E_ε(a,u) denote the event where u<𝒫^* and there exist x,y∈ℂ^∘_(a,u) and a path γ_0 with finite length L(γ_0) connecting x to y and staying in ℂ_(a,u)∖∂_0ℂ_(a,u), such that any path γ' connecting x to y and staying in ℂ^∘_(a,u) has length at least L(γ_0)+ε. Also set, for every u∈(0,𝒫^*) and x,y∈ℂ^∘_(a,u), F(x,y,u)=inf{L(γ):γ is a path connecting x to y in ℂ^∘_(a,u)}. If E_ε(a,u) holds, then clearly there exist x,y∈ℂ^∘_(a,u) such that the function v↦ F(x,y,v) has a (positive) jump at v=u (take γ_0 as above and note that F(x,y,v)≤ L(γ_0) if 0<v<u). The same then holds for every x',y'∈ℂ^∘_(a,u) sufficiently close to x,y: to see this, consider the path obtained by concatenating γ_0 with geodesics from x to x' and from y to y'. Hence, if for n≥ 1, we consider the monotone nonincreasing function (0,𝒫^*)∋ v ↦ G_n(a,v)=∫_ℂ^∘_(a,v)×ℂ^∘_(a,v) (n-F(x,y,v))^+ 𝐕(dx)𝐕(dy), we obtain that this function has a jump at u when E_ε(a,u) holds, at least when n is large enough. It follows that 1_E_ε(a,u)≤lim inf_n→∞1_{G_n(a,u+)<G_n(a,u-)}, with an obvious notation for the right and left limits of v ↦ G_n(a,v) at u. Hence, ℙ(E_ε(a,u))≤lim inf_n→∞ℙ({u<𝒫^*}∩{G_n(a,u+)<G_n(a,u-)}). Since the function (0,𝒫^*)∋ v ↦ G_n(a,v) has at most countably many discontinuities, it follows that ∫_0^∞ℙ(E_ε(a,u)) du=0 and therefore ℙ(E_ε(a,u))=0 for Lebesgue almost all u. To obtain the statement of the lemma, we need to prove that ℙ(E_ε(a,u))=0 for every u>0. Fix u>0, and let (a_n)_n≥ 0 be a sequence of reals decreasing to a. We will verify that lim inf_n→∞ℙ(E_ε(a_n,u))≥ℙ(E_ε(a,u)). Thanks to Proposition <ref>, we may assume that 𝔻_(a)=B̌^∙_𝐫_a(𝐱_*), resp. 𝔻_(a_n)=B̌^∙_𝐫_a_n(𝐱_*), which is a Brownian disk of perimeter a, resp. of perimeter a_n, under ℕ_0(·|𝐫_a<∞), resp. under ℕ_0(·|𝐫_a_n<∞). If E_ε(a,u) holds, we can find a path γ_0 staying in ℂ_(a,u)∖∂𝔻_(a) that satisfies the properties stated at the beginning of the proof, and this path stays at positive distance from ∂𝔻_(a). On the other hand, by (<ref>), we have sup{𝐃(x,∂𝔻_(a)):x∈𝔻_(a)∖𝔻_(a_n)} ⟶ 0, ℕ_0 a.e. on {𝐫_a<∞}. It follows that the path γ_0 stays in ℂ_(a_n,u)∖∂𝔻_(a_n) when n is large, so that E_ε(a_n,u) also holds when n is large. Hence, we get lim inf_n→∞ℕ_0(E_ε(a_n,u)∩{𝐫_a<∞})≥ℕ_0(E_ε(a,u)∩{𝐫_a<∞}), and using also the fact that ℕ_0(𝐫_a_n<∞)⟶ℕ_0(𝐫_a<∞) as n→∞ we get (<ref>). From (<ref>) and a scaling argument, we have then lim inf_u'↑ uℙ(E_ε(a,u'))≥ℙ(E_ε(a,u)). Clearly, this implies that we have ℙ(E_ε(a,u))=0. Since ε>0 was arbitrary, this completes the proof.
§ PRELIMINARY CONVERGENCE RESULTS §.§ Convergence towards the Brownian disk Let a>0. For every integer L≥ 1/a, let 𝒟^L_(a) be a Boltzmann triangulation in 𝕋^1, ∙(⌊ aL⌋).
Let Δ^L be the graph distance on 𝒟^L_(a) and consider the rescaled distance d_L=√(3/2) L^-1/2Δ^L. Let ν^L be the counting measure, rescaled by the factor (3/4)L^-2, on the vertex set of 𝒟^L_(a). Then, ((𝒟^L_(a), d_L), (x_*^L, ∂𝒟^L_(a)), ν^L) ⟶ ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) in distribution as L→∞, where ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) is the free pointed Brownian disk with perimeter a as constructed in Section <ref>, and the convergence holds in 𝕄^2,1 endowed with the metric d_𝙶𝙷𝙿^2,1 introduced in Section <ref>. In the last display, we abusively identify 𝒟^L_(a) with its vertex set (we will often make this abuse of notation in what follows). The convergence (<ref>) follows from <cit.>. Note that Theorem 1.1 in <cit.> deals with the so-called GHPU convergence including the uniform convergence of the “boundary curves”, but it is straightforward to verify that this also implies the convergence (<ref>) in 𝕄^2,1. Also, <cit.> considers Boltzmann triangulations in 𝕋^1(⌊ aL⌋) instead of 𝕋^1, ∙(⌊ aL⌋), and the limit is therefore the free (unpointed) Brownian disk. However, as explained in <cit.>, the convergence for pointed objects easily follows from that for unpointed ones (since we are here pointing at an edge and not at a point, we also need Lemma 5.1 in <cit.>, stated for quadrangulations but easily extended, to verify that the degree-biased measure on the vertex set is close to the uniform measure — we omit the details).
§.§ The processes of perimeters and volumes of hulls We consider the free pointed Brownian disk ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) as given in the Bettinelli-Miermont construction. Recall that r_*=D(x_*, ∂𝔻_(a)). For r∈(0,r_*], the perimeter 𝒫_r of the hull H_r was defined in formula (<ref>), and we set 𝒱_r=𝐕(H_r). We also define 𝒱_0=0. It is not hard to verify that the process (𝒫_r,𝒱_r)_r∈[0,r_*] has càdlàg sample paths. Let r>0 and let us argue conditionally on the event {r_*> r}. Recall that 𝔻_(a) is obtained as a quotient space of the labelled tree ℐ, and that, for x∈ℐ, m_x denotes the minimal label along the ancestral line of x. We can use the restriction of the contour exploration π:[0,Σ]⟶ℐ to every connected component of the open set {s∈[0,Σ] : m_π(s)< r-r_*}, in order to define a snake trajectory with initial point r-r_*, which we call an excursion away from r-r_*. More precisely, if (α,β) is such a connected component, there is an index i∈ I such that (α,β)⊂ (a_i,b_i), where [a_i,b_i]={s∈[0,Σ]:π(s)∈ T_ω^i}. Then, setting α'=α-a_i and β'=β-a_i, we have ω^i_α'=ω^i_β', ω̂^i_α'=ω̂^i_β'=r-r_* and ζ(ω^i_s)>ζ(ω^i_α') for every s∈(α',β'), and we define a snake trajectory ω by taking ω_s(t)=ω^i_(α'+s)∧β'(ζ(ω^i_α')+t) for every 0≤ t≤ζ(ω^i_(α'+s)∧β')-ζ(ω^i_α') (in the language of <cit.>, ω is an excursion of ω^i away from r-r_*). As a straightforward consequence of Proposition 12 in <cit.>, the snake trajectories obtained in this way and shifted so that their initial point is 0 correspond to the atoms of a point measure 𝒩_r which, conditionally on 𝒫_r, is Poisson with intensity 𝒫_rℕ_0(·∩{W_* >-r}), and to which we add an extra atom ω_* distributed according to ℕ_0(·| W_*=-r) (the law of the latter atom is described in <cit.> in terms of a Bessel process of dimension 9). Using formula (<ref>), it is not hard to verify that the process (𝒫_s,𝒱_s)_s∈[0,r] is determined as a function of the point measure 𝒩_r+δ_ω_* (in particular, 𝒫_s=𝒵_s-r(ω_*)+∫𝒩_r(dω) 𝒵_s-r(ω) for 0<s<r). Let us now consider the Brownian plane of <cit.>.
For the Brownian plane, we can also define the processes of perimeter and volume of hulls (𝒫_s^∞, 𝒱_s^ ∞)_s≥ 0 and the law of this pair of processes is described in <cit.>. It follows from the preceding observations and the construction of <cit.> that, for every u>0, the conditional distribution of (𝒫^∞_s,𝒱^∞_s)_s∈[0,r] knowing 𝒫^∞_r=u is the same as the conditional distribution of (𝒫_s,𝒱_s)_s∈[0,r] knowing 𝒫_r=u. Since 𝒫_r and 𝒫_r^∞ both have a positive density on (0,∞) (by Proposition <ref> and <cit.>), we arrive at the following lemma. The law of (𝒫_s, 𝒱_s)_s≤ r conditionally on the event {r_*> r} is absolutely continuous with respect to the law of the pair (𝒫_s^∞, 𝒱_s^∞)_s≤ r. We end this section by stating a technical property showing that the perimeter process can be recovered as a deterministic function of the volume process. For every r>0, we have almost surely on the event {r<r_*}: 𝒫_r =lim_α→ 0^+(1/αlim_ϵ→ 0^+ϕ(ϵ)^-1Card{s∈ [r-α, r]:Δ𝒱_s≥ϵ}), where ϕ(ϵ)=c_0ϵ^-3/4, with c_0=2^1/4Γ(4/3), and Δ𝒱_s=𝒱_s-𝒱_s-. It is explained in the proof of <cit.> that (<ref>) holds if 𝒫_r and 𝒱_s are replaced by 𝒫^∞_r and 𝒱^∞_s respectively. It then suffices to use the absolute continuity property of Lemma <ref>. §.§ Joint convergence of hulls One expects that the explored sets T_i^L in the peeling by layers will correspond in the limit (<ref>) to the hulls H_r. This section aims to give a precise result in this direction. Let us start with a technical proposition giving some information about the geometry of 𝔻_(a). For every δ>0 and s∈(0,r_*), let 𝒰_δ^s be the set of all points x∈𝔻_(a) such that there is a continuous path from x to ∂𝔻_(a) that stays at distance at least s-δ from x_*. Almost surely, for every s which is not a jump of the perimeter process (𝒫_r)_r∈(0,r_*) and every ε>0, there exists δ>0 such that: 𝒰_δ^s ⊂{x∈𝔻_(a) : D(x, 𝔻_(a)∖ H_s)<ε}. We argue by contradiction. If the statement of the proposition fails, we can find ε>0 and s∈(0,r_*) which is not a jump of the perimeter process, and then a sequence δ_n↓ 0 and points x_n∈𝔻_(a) such that D( x_n, 𝔻_(a)∖ H_s)≥ε and there is a path linking x_n to ∂𝔻_(a) and remaining at distance at least s-δ_n from x_*. By compactness, we may assume that the sequence (x_n) converges to a point x_∞, which therefore satisfies D(x_∞, 𝔻_(a)∖ H_s)≥ε. We have m_x_n≤ -r_*+s since x_n∈ H_s, and, on the other hand, an application of the cactus bound <cit.> gives m_x_n≥ -r_*+s-δ_n. Letting n→∞ we get m_x_∞= -r_*+s. On the ancestral line of x_∞, we can find a point x close to x_∞ whose label is strictly greater than -r_*+s and is still such that m_x=-r_*+s (if no such x existed, this would mean that x_∞∈∂ H_s, contradicting D(x_∞, 𝔻_(a)∖ H_s)≥ε). Then all points in a sufficiently small neighbourhood of x are in H_s but not in H_s-δ for any δ>0. In other words the process (𝒱_r)_r∈(0,r_*) has a jump at s. Since the jumps of (𝒱_r) and (𝒫_r) almost surely coincide (this holds for 𝒱^∞ and 𝒫^∞ by <cit.> and therefore also for 𝒱 and 𝒫 using Lemma <ref>), we end up with a contradiction. As in Section <ref>, we consider the sequences of random triangulations (T_i^L) and (U_i^L) obtained by applying the peeling by layers algorithm to the Boltzmann triangulation 𝒟^L_(a). It will be convenient to view the triangulations that we consider as geodesic spaces. To this end we just need to identify each edge with a copy of the interval [0,1] in the way explained in <cit.>. 
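The Hausdorff convergences used repeatedly below all take place inside a fixed compact metric space (E, Δ) in which the rescaled triangulations and their limits are embedded. For finite subsets, the quantity Δ_𝙷 is elementary to evaluate; the following sketch (with a hypothetical distance function `dist`) is only a reminder of the definition and plays no role in the proofs.

```python
def hausdorff(A, B, dist):
    """Hausdorff distance between two nonempty finite subsets A, B of a metric
    space, given a symmetric distance function dist(x, y)."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

# example on the real line, with the usual distance
print(hausdorff({0.0, 1.0}, {0.1, 0.9, 2.0}, lambda x, y: abs(x - y)))  # -> 1.0
```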
If the vertex set of 𝒟^L_(a) is replaced by the union of all edges equipped with the obvious extension of the (rescaled) graph distance, the convergence (<ref>) remains valid, and this has the advantage of making 𝒟^L_(a) a geodesic space. From now on, we will always view triangulations as geodesic metric spaces as we just explained. In particular, we can consider continuous paths in 𝒟^L_(a) as in Lemma <ref> below, and, similarly, in the next proposition, we interpret ∂ T^L_k as the union of the edges on the boundary of T^L_k. By Skorokhod's representation theorem, we may assume that (<ref>) holds almost surely. From now on until the end of this section, we fix ω∈Ω for which the (almost sure) convergence (<ref>) does take place. By a straightforward extension of <cit.>, we may assume the metric spaces (𝒟^L_(a), d^L) and (𝔻_(a), D) are embedded isometrically in the same compact metric space (E, Δ) in such a way that 𝒟^L_(a) and ∂𝒟^L_(a) converge to 𝔻_(a) and ∂𝔻_(a) respectively, for the Hausdorff metric Δ_𝙷, x_*^L converges to x_* and ν^L converges weakly to 𝐕. In particular, we will consider the triangulations T_i^L and U_i^L as subsets of E so that we can speak about the Δ_𝙷-convergence of these objects in the following proposition. If γ:[0, σ]→ E and γ':[0, σ']→ E are two continuous paths in E, we will say that γ' is ε-close to γ if Δ(γ(0),γ'(0))≤ε, Δ(γ(σ), γ'(σ'))≤ε and if sup_t∈ [0, σ']Δ(γ'(t), γ)≤ε, where we identify γ and the compact subset γ([0, σ])⊂ E. Note that this definition is not symmetric in γ and γ'. We also write ℓ_Δ(γ) for the length of the path γ in (E,Δ). Let ω be fixed as above and let s∈(0,r_*) such that the perimeter process (𝒫_r) is continuous at s. Recall the notation h_k^L:=Δ(x^*_L, ∂ T_k^L). For every sequence of integers (N_L)_L≥ 1 such that (√(3/2L)h_N_L^L)_L≥ 1 converges to s, we have the convergences: T_N_L^L H_s, ∂ T_N_L^L ∂ H_s, U_N_L^L C̅_s, where C̅_s denotes the closure of C_s:=𝔻_(a)∖ H_s. To simplify notation, we set c_L=√(3/2)L^-1/2 and recall that d_L=c_L Δ^L. The convergences of T_N_L^L and U_N_L^L are proved in a way very similar to Lemma 12 in <cit.> (which deals with the case where N_L is replaced by the hitting time of ∂𝒟^L_(a) by the peeling algorithm). We only give here the main steps of the proof. We start with a simple lemma. For every η>0 and A>0, there exists δ>0 and L_0≥ 0 such that, for every L≥ L_0 and any choice of points x,y∈𝔻_(a) and x^L, y^L∈𝒟^L_(a) satisfying Δ(x, x^L)≤δ and Δ(y, y^L)≤δ, we have: * For any continuous path γ from x to y in 𝔻_(a), there exists a continuous path γ^L from x^L to y^ L in 𝒟_(a)^L which is η-close to γ. If γ has length at most A, one can choose γ^L such that ℓ_Δ(γ^L)≤ℓ_Δ(γ)+η. * For any continuous path γ^L from x^L to y^L in 𝒟_(a)^L, there is a continuous path γ from x to y in 𝔻_(a) which is η-close to γ^L. If γ^L has length at most A, one can choose γ such that ℓ_Δ(γ)≤ℓ_Δ(γ^L)+η. We omit the proof of this lemma (see <cit.>), and proceed to the proof of Proposition <ref>. We first consider U_N_L^L. If ε>0 and K⊂ E, we write K^ε={x∈ E, Δ(x, K)≤ε} (only in this proof and the next one). If ε>0 is fixed, we need to verify that, for L large, U_N_L^L⊂ (C_s)^ε and C̅_s ⊂ (U^L_N_L)^ε. Let x∈C̅_s∖∂ H_s=C_s. Then there is a path γ connecting x to a point y of ∂𝔻_(a) that stays in C_s. By compactness, this path stays at distance at least α>0 from ∂ H_s, hence at distance at least s+α from x_*. We can assume that α≤ε. 
By part 1 of Lemma <ref>, and using the fact that ∂𝒟_(a)^L converges towards ∂𝔻_(a), we can find, for L large enough, points x^L∈𝒟_(a)^L and y^L∈∂𝒟_(a)^L and a path γ^L in 𝒟_(a)^L from x^L to y^L that is (α/2)-close to γ. Since x^L_* converges to x_* and c_L h_N_L^L converges to s, we get (taking L even larger if necessary) that all points of γ^L lie at distance greater than c_L(h^L_N_L+1) from x^L_*. However, by the construction of the peeling by layers, points of ∂ T_N_L^L are at a distance at most c_L(h_N_L^L+1) from x_*^L. Therefore we have found a path connecting x^L to a point of ∂𝒟_(a)^L that does not visit ∂ T_N_L^L, and it follows that x^L is a point of U_N_L^L. Since Δ(x^L, x)≤α/2<ε, we then have x∈ (U_N_L^L)^ε for large L. If x∈∂ H_s, this is also true because we can approximate x by a point of C_s. A compactness argument finally allows us to conclude that C̅_s⊂ (U_N_L^L)^ε for any L large enough. Let us show conversely that U_N_L^L⊂ (C_s)^ε when L is large. We choose δ∈(0,ε) such that the conclusion of Proposition <ref> holds with ε replaced by ε/2. Let v^L∈ U_N_L^L, which implies in particular that v^L is at Δ-distance at least c_L h_N_L^L from x_*^L. Then there is a path γ^L in U_N_L^L connecting v^L to ∂𝒟_(a)^L. Using part 2 of Lemma <ref> and the convergence of ∂𝒟_(a)^L to ∂𝔻_(a), if L is large enough (independently of the choice of v^L), we can approximate γ^L by a path γ in 𝔻_(a) that is (δ/2)-close to γ^L and connects a point v∈𝔻_(a) to a point of ∂𝔻_(a). Notice that Δ(v,v^L)≤δ/2<ε/2. Provided that L has been chosen even larger if necessary (again independently of the choice of v^L), it follows that the path γ contains only points at distance at least s-δ from x_*. By our choice of δ, this implies that Δ(v,C_s)<ε/2 and thus Δ(v^L,C_s)<ε. We therefore have v^L∈ (C_s)^ε and we have obtained that U^L_N_L⊂ (C_s)^ε, thus completing the proof of the convergence of U_N_L^L to C̅_s. Let us now discuss the convergence of ∂ T_N_L^L. We let ℬ^L(c_Lh_N_L^L) and ℬ^L(c_L(h_N_L^L+2)) denote the closed balls of respective radii c_Lh_N_L^L and c_L(h_N_L^L+2) centered at x_*^L in (𝒟_(a)^L, Δ). We also write B_s=B_s(x_*) for the closed ball of radius s centered at x_* in (𝔻_(a), Δ). Since 𝒟^L_(a) and 𝔻_(a) are both length spaces and 𝒟^L_(a) converges to 𝔻_(a) for the Hausdorff distance on E, we get that ℬ^L(c_Lh_N_L^L) and ℬ^L(c_L(h_N_L^L+2)) both converge to B_s for the Hausdorff distance. However, ℬ^L(c_L h_N_L^L)⊂ℬ^L(c_L h_N_L^L)∪∂ T^L_N_L⊂ℬ^L(c_L(h_N_L^L+2)). It follows that ℬ'_L:=ℬ^L(c_L h_N_L^L)∪∂ T^L_N_L also converges towards B_s when L→∞. Observe that ∂ T_N_L^L=ℬ'_L∩ U_N_L^L and ∂ H_s= B_s∩C̅_s. Let ε>0. Using the convergence of ℬ'_L towards B_s and the convergence of U_N_L^L towards C̅_s, we get that for L sufficiently large and for every x∈∂ H_s, we have Δ(x, ℬ'_L)< ε and Δ(x,U_N_L^L)< ε. Fix x∈∂ H_s and let u_1∈ U_N_L^L and u_2∈ℬ'_L such that Δ(u_1, x)≤ε and Δ(u_2, x)≤ε. In particular, Δ(u_1, u_2)≤ 2ε, and since a geodesic path between u_1 and u_2 in 𝒟_(a)^L must intersect ∂ T_N_L^L, it follows that one can find v∈∂ T_N_L^L with Δ(u_1, v)≤ 2ε. This implies Δ(x, v)≤ 3ε, but since this is true for any x∈∂ H_s, we conclude that ∂ H_s is contained in the 3ε-neighbourhood of ∂ T_N_L^L as soon as L is large enough. A similar argument shows that ∂ T_N_L^L is contained in the 3ε-neighbourhood of ∂ H_s when L is large enough. This proves the convergence of ∂ T_N_L^L towards ∂ H_s.
Once we have obtained the convergence of U^L_N_L to C̅_s and the convergence of ∂ T^L_N_L to ∂ H_s, the convergence of T_N_L^L towards H_s follows from straightforward arguments, and we leave the details to the reader. For every integer k≥ 1, we set σ_k^L:=inf{ n∈ℕ : h_n^L≥ k}. On the event {σ^L_k<∞}, the (discrete) hull of radius k in 𝒟_(a)^L is defined by ℋ^L_k:=T_σ_k^L^L. Recall that ω is fixed as explained before Proposition <ref>. Let s∈(0,r_*) such that the perimeter process (𝒫_r) has no jump at s. Then the hull ℋ_⌊ s/c_L⌋^L converges towards H_s for the Hausdorff metric, and its volume ν^L(ℋ_⌊ s/c_L⌋^L) converges towards 𝒱_s. The convergence of ℋ_⌊ s/c_L⌋^L towards H_s is an immediate corollary of the previous proposition, since by construction c_L h^L_σ^L_⌊ s/c_L⌋⟶ s as L→∞. It remains to show that ν^L(ℋ_⌊ s/c_L⌋^L) converges to 𝒱_s. We keep the notation K^ε={x∈ E, Δ(x, K)≤ε} introduced in the previous proof. It is easy to verify that 𝐕(∂ H_s)=0. Then, if ε>0 is fixed, we can find δ>0 such that 𝐕((∂ H_s)^δ)<ε. Since ν_L converges weakly to 𝐕, we get, for L large enough, ν_L(ℋ^L_⌊ s/c_L⌋)≤ν_L((H_s)^δ/2)≤𝐕((H_s)^δ)+ε≤𝐕(H_s)+𝐕((∂ H_s)^δ)+ε≤𝐕(H_s)+2ε. On the other hand, ∂ℋ_⌊ s/c_L⌋^L→∂ H_s when L→∞ (by Proposition <ref>), so that we have also ∂ℋ_⌊ s/c_L⌋^L⊂ (∂ H_s)^δ/2 for every large enough L. It follows that, for large enough L, we have ν_L((∂ℋ_⌊ s/c_L ⌋^L)^δ/2) ≤𝐕((∂ H_s)^δ)+ε≤ 2ε. Hence we get for L large, 𝐕(H_s)≤ν_L((ℋ^L_⌊ s/c_L⌋)^δ/2)+ε≤ν_L(ℋ^L_⌊ s/c_L⌋)+ν_L((∂ℋ^L_⌊ s/c_L⌋)^δ/2)+ε≤ν_L(ℋ^L_⌊ s/c_L⌋)+3ε. The desired convergence of ν_L(ℋ^L_⌊ s/c_L⌋) towards 𝐕(H_s)=𝒱_s follows from the last two displays. § LIMIT THEOREMS FOR THE PERIMETER AND THE VOLUME OF THE EXPLORED REGION In this section, we take a=1 for simplicity, and (as in Section <ref>) we write 𝒟^L instead of 𝒟^L_(1) for a Boltzmann triangulation in 𝕋^1,∙(L). Recall that (T_i^L)_i≥ 0 is the sequence of (explored) triangulations we get when we apply the peeling by layers algorithm of Section <ref> to 𝒟^L. We also set S_L:=inf{i≥ 0 : T_i^L=†}, which corresponds to the hitting time of ∂𝒟^L. To simplify notation, we let P^L_k=|∂ T^L_k| be the boundary size of T^L_k, for every 0≤ k<S_L. Still for 0≤ k<S_L, we also write V^L_k for the number of vertices of T^L_k, and we recall that h^L_k is the graph distance from the distinguished vertex x^L_* to the boundary ∂ T^L_k. Properties of the peeling by layers ensure that the graph distance from x^L_* to any point of ∂ T^L_k is equal to h^L_k or h^L_k+1. Let (T_i^∞)_i≥ 0 be the sequence of triangulations with a boundary obtained by applying the same peeling algorithm to the UIPT (we refer to <cit.> for a discussion of the peeling by layers algorithm for the UIPT). We define P^∞_k, V^∞_k and h^∞_k, now for every integer k≥ 0, by replacing T^L_k with T^∞_k in the respective definitions of P^L_k, V^L_k and h^L_k. Finally, we set S_L=L^-3/2(S_L-1) if S_L>0, and by convention we also take S_L=0 when S_L=0. Recall the notation c_L=√(3/2) L^-1/2. We introduce the rescaled processes P^L_t=1/L P^L_⌊ L^3/2t⌋, V^L_t=3/4L^2 V^L_⌊ L^3/2t⌋, h^L_t=c_L h^L_⌊ L^3/2t⌋. for 0≤ t≤ S_L (by convention, P^L_0= V^L_0= h^L_0=0 when S_L=0). We similarly define, for every t≥ 0, P^∞,L_t=1/L P^∞_⌊ L^3/2t⌋, V^∞,L_t=3/4L^2 V^∞_⌊ L^3/2t⌋, h^∞,L_t=c_L h^∞_⌊ L^3/2t⌋. 
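For concreteness, the following minimal sketch (in Python; the peeling record arrays are hypothetical inputs and not part of the argument) implements the normalizations that define the rescaled processes above: perimeter divided by L, volume multiplied by 3/(4L^2), height multiplied by √(3/2) L^{-1/2}, and peeling time sped up by the factor L^{3/2}.

```python
import numpy as np

def rescale_peeling(P, V, h, L, t_grid):
    """Rescaled perimeter/volume/height processes read off a peeling record.

    P, V, h : sequences indexed by the peeling step k (boundary size, number
    of vertices and height of the explored region); L : size parameter;
    t_grid : continuum times at which to evaluate.  The normalizations are
    exactly those of the definitions above.
    """
    P, V, h = map(np.asarray, (P, V, h))
    c_L = np.sqrt(1.5) / np.sqrt(L)
    # index ⌊L^{3/2} t⌋, capped at the end of the record (the discrete analogue of t ∧ S_L)
    k = np.minimum(np.floor(L ** 1.5 * np.asarray(t_grid)).astype(int), len(P) - 1)
    return P[k] / L, 0.75 * V[k] / L ** 2, c_L * h[k]
```

The sketch is only meant to display the exponents 1, 2, 1/2 and 3/2 appearing in these definitions; the capping at the end of the record plays the role of stopping at S_L.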
From <cit.> (more precisely, from the version of this result for type I triangulations, as explained in Section 6.1 of <cit.>), we have ( P^∞,L_t, V^∞,L_t, h^∞,L_t)_ t≥ 0(𝒮^+_t, 𝕍_t, 2^-3/2∫_0^t u/𝒮^+_u)_t≥ 0, where the convergence holds in distribution in the sense of the Skorokhod topology. Here the limiting process (𝒮^+_t,t≥ 0) is a stable Lévy process with no positive jumps and Laplace exponent ψ̃(λ)=3^-1/2λ^3/2 started at 0 and conditioned to stay positive (see <cit.> for the definition of this process), and we refer to <cit.> for the description of the conditional law of the process 𝕍 knowing 𝒮^+. The next proposition gives an analog of (<ref>) where P^∞,L, V^∞,L, and h^∞,L are replaced by P^L, V^L, and h^L respectively. We have (( P^L_t∧ S_L, V^L_t∧ S_L, h^L_t∧ S_L)_t≥ 0, S_L)((𝒫_t, 𝒱_t,𝒜_t)_t≥ 0, Σ_∞)_t≥ 0, where 𝒜_t=2^-3/2∫_0^t∧Σ_∞ u/𝒫_u and the distribution of ((𝒫_t, 𝒱_t)_t≥ 0,Σ_∞) is determined by [ G((𝒫_t, 𝒱_t)_t≥0) f(Σ_∞)] = √(3π)/4 ∫_0^∞ u f(u) [ G((𝒮^+_t∧ u)_t≥ 0,(𝕍_t∧ u)_t≥0) 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2] for any measurable functions f:_+⟶_+, and G:(_+,_+^2)⟶_+. We first derive the convergence in distribution of ( P^L_t∧ S_L)_t≥ 0 to (𝒫_t)_t≥ 0. The h-transform relation between the Markov chains P^L and P^∞, which was discussed in Section <ref>, shows that, for every integer k≥ 0 and every bounded function F on ^k+1, [F(P^L_0,…,P^L_k) 1_{k<S_L}| S_L>0]= [F(P^∞_0,…,P^∞_k) 𝐡_L(P^∞_k)/𝐡_L(P^∞_0)], where we recall that 𝐡_L(j)=L/L+j, and we note that P^∞_0=1 if the root edge of the UIPT is a loop, and P^∞_0=2 otherwise. By the Markov property, we have (S_L=k+1| S_L>k,P_0,P_1,…,P_k)=q_L(P_k,†). It then follows that [F(P^L_0,…,P^L_k) 1_{S_L=k+1}| S_L>0]= [F(P^∞_0,…,P^∞_k) 𝐡_L(P^∞_k)/𝐡_L(P^∞_0) q_L(P^∞_k,†)]. Let G be a bounded continous function on (_+,_+)×_+, such that 0≤ G≤ 1. Using (<ref>), we have [G(( P^L_t∧ S_L)_t≥ 0, S_L)| S_L>0] =∑_k=0^∞[1_{S_L=k+1} G(( P^L_t∧ (L^-3/2k))_t≥ 0,L^-3/2k)| S_L>0] = ∑_k=0^∞[G(( P^∞_t∧ (L^-3/2k))_t≥ 0,L^-3/2k)𝐡_L(P^∞_k)/𝐡_L(P^∞_0) q_L(P^∞_k,†)] = L^3/2∫_0^∞ u [G(( P^∞,L_t∧ (L^-3/2⌊ L^3/2 u⌋))_t≥ 0,L^-3/2⌊ L^3/2 u⌋) 𝐡_L(P^∞_⌊ L^3/2 u⌋)/𝐡_L(P^∞_0) q_L(P^∞_⌊ L^3/2 u⌋,†)] Note that (S_L>0) tends to 1 as L→∞. Furthermore, 𝐡_L(P^∞_⌊ L^3/2 u⌋)=1/1+ P^∞,L_u and we also know from <cit.> that L^3/2q_L(P^∞_⌊ L^3/2 u⌋,†) ∼√(3π)/4 1/√( P^∞,L_u) (1+ P^∞,L_u)^-3/2 when L and P^∞_⌊ L^3/2 u⌋ are large (see the last display before Section 3.2 in <cit.>). Using the convergence (<ref>) (which we may assume to hold a.s. by the Skorokhod representation theorem) and the preceding observations, we get from an application of Fatou's lemma that lim inf_L→∞[G(( P^L_t∧ S_L)_t≥ 0, S_L)] ≥∫_0^∞ u [ G((𝒮^+_t∧ u)_t≥ 0,u) √(3π)/4 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2]. At this stage, we observe that ∫_0^∞ u [√(3π)/4 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2]=1. Indeed, by the identification of the potential kernel of 𝒮^+ in <cit.>, we know that, for any measurable function f:_+⟶_+, [∫_0^∞ u f(𝒮^+_u)]=∫_0^∞ x W̃(x) f(x) where the function W̃ is determined by its Laplace transform ∫_0^∞ e^-λ xW̃(x) x=1/ψ̃(λ)= 3^1/2 λ^-3/2. It follows that W̃(x)=2√(3)/√(π) √(x), and the left-hand side of (<ref>) is equal to √(3π)/4×2√(3)/√(π)∫_0^∞ x (1+x)^-5/2= 1 as desired. Thanks to (<ref>), we can replace G by 1-G in (<ref>) to get the analog of (<ref>) for the limsup instead of the liminf, and we conclude that lim_L→∞[G(( P^L_t∧ S_L)_t≥ 0, S_L)] = √(3π)/4 ∫_0^∞ u [ G((𝒮^+_t∧ u)_t≥ 0,u) 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2]. 
This gives the convergence of (( P^L_t∧ S_L)_t≥ 0, S_L) to the pair ((𝒫_t)_t≥ 0,Σ_∞) introduced in the proposition. We can deduce the more general statement of the proposition from the convergence (<ref>) by exactly the same arguments. The point is the fact that the perimeter process (P^L_k)_k≥ 0 is Markov with respect to the discrete filtration generated by the sequence (T^L_k). We leave the details to the reader. Let R_L:=h^L_S_L-1 (we argue on the event where S_L>0). By previous observations, the graph distance between x^L_* and ∂𝒟^L is either R_L or R_L+1. Recall the notation σ^L_k=inf{n∈:h^L_n≥ k}, so that σ^L_k is finite for 1≤ k≤ R_L. For 1≤ k≤ R_L, we write 𝒫^L_k:=P^L_σ^L_k and 𝒱^L_k:=V^L_σ^L_k, which are respectively the perimeter and the volume of the discrete hull ℋ^L_k=T^L_σ^L_k. We also set r^L_*=c_LR_L, which essentially corresponds to the rescaled graph distance between x^L_* and ∂𝒟^L. Finally, we introduce rescaled versions of the processes 𝒫^L_k and 𝒱^L_k by setting 𝒫^L_t:=1/L𝒫^L_⌊ t/c_L⌋ and 𝒱^L_t:=3/4L^2𝒱^L_⌊ t/c_L⌋ for 0≤ t≤ r^L_*. Recall the processes 𝒫_t, 𝒱_t,𝒜_t in Proposition <ref>. We have ( (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*, L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0, r^L_*) ( (𝒫^∞_t, 𝒱^∞_t,η_t)_t≥ 0, r^∞_*) where r^∞_*=𝒜_∞ and, for every t≥0, 𝒫^∞_t=𝒫_η_t, 𝒱^∞_t=𝒱_η_t with η_t=inf{s≥ 0: 𝒜_s≥ t∧𝒜_∞}. Moreover the convergence in distribution (<ref>) holds jointly with (<ref>). Since c_LR_L=c_L h^L_S_L-1= h^L_ S_L, Proposition <ref> implies the convergence in distribution of r_*^L=c_LR_L towards the variable 𝒜_∞, and this convergence holds jointly with the one stated in Proposition <ref>. Then, for 0≤ t≤ c_LR_L, L^-3/2σ^L_⌊ t/c_L⌋∧ R_L=L^-3/2min{j:h^L_j≥⌊ t/c_L⌋}=inf{s≥ 0: h^L_s≥ c_L⌊ t/c_L⌋} Since we know from Proposition <ref> that ( h^L_t∧ S_L)_t≥ 0 converges in distribution to (𝒜_t)_t≥ 0, it is now easy to obtain that (L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0 converges in distribution to (η_t)_t≥ 0, and this convergence holds jointly with that of Proposition <ref> (very similar arguments are used in Section 4.4 of <cit.>). Then, by our definitions, (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*)_t≥ 0 =( P^L_L^-3/2σ^L_⌊ t/c_L⌋∧ R_L, V^L_L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0 and we just have to use (<ref>) together with the convergence of (L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0 towards (η_t)_t≥ 0 to get the desired result. Recall the processes (𝒫^∞_t,𝒱^∞_t)_t≥ 0 giving the perimeters and volumes of hulls in the Brownian plane. Then, for every r>0, the distribution of the pair (𝒫^∞_t, 𝒱^∞_t)_0≤ t≤ r in Corollary <ref> under (·| r^∞_*>r) is absolutely continuous with respect to the distribution of (𝒫^∞_t,𝒱^∞_t)_0≤ t≤ r. This follows by observing that (𝒫^∞_t,𝒱^∞_t)_t≥ 0 is obtained from the pair (S^+_t,𝕍_t) in (<ref>) by the same time change as the one giving (𝒫^∞_t, 𝒱^∞_t)_t≥ 0 from (𝒫_t,𝒱_t)_t≥ 0 (combine formula (56) in <cit.> with the description of the pair (𝒫^∞_t,𝒱^∞_t)_t≥ 0 in <cit.> — some care is needed here because the scaling constants in <cit.> are not the same as in the present work). The preceding absolute continuity property implies that the approximation (<ref>) holds when 𝒫_r and 𝒱_s are replaced by 𝒫^∞_r and 𝒱^∞_s respectively, a.s. for every r<r^∞_*. In other words, we can recover 𝒫^∞_r as a deterministic function of ( 𝒱^∞_s)_s∈[0,r] which is the same as the one giving 𝒫_r from (𝒱_s)_s∈ [0,r]. We have (𝒟^L, (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*)_t≥ 0, r^L_*) (_(1), (𝒫_t∧ r_*,𝒱_t∧ r_*)_t≥ 0,r_*), where the convergence holds in distribution in 𝕄^2,1×(_+,_+^2)×_+. 
Moreover, this convergence holds jointly with (<ref>) and (<ref>). By a tightness argument using Corollary <ref>, we may assume that, along a sequence of values of L, the triplet (𝒟^L, (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*)_t≥ 0, r^L_*) converges in distribution to a limit which we may denote as (_(1), (𝒫^∞_t, 𝒱^∞_t)_t≥ 0,r^∞_*). By the Skorokhod representation theorem, we may assume that this convergence holds a.s. Since r^L_* is the rescaled graph distance between x^L_* and ∂𝒟^L (up to an error which is O(L^-1/2)), it immediate that r^∞_*=r_*. On the other hand, for t<r^L_*, we have 𝒱^L_t= 3/4L^2𝒱^L_⌊ t/c_L⌋= ν^L(ℋ^L_⌊ s/c_L⌋), and Corollary <ref> then allows us to identify (𝒱^∞_t)_t≥ 0 with (𝒱_t∧ r_*)_t≥ 0. Finally, we saw that, for r<r^∞_*=r_*, 𝒫^∞_r must be given by the same deterministic function of ( 𝒱^∞_s)_s∈[0,r] as the one giving 𝒫_r from (𝒱_s)_s∈ [0,r], and we conclude that we have also (𝒫^∞_t)_t≥ 0=(𝒫_t∧ r_*)_t≥ 0, which completes the proof. We now fix b>0 and recall the notation r_b=inf{r∈[0,r_*): 𝒫_r=b}. For every L≥ 1, we also set k^L_b=inf{k∈{1,…, S_L-1}: P^L_k=⌊ bL⌋}, and r^L_b=c_Lh^L_k^L_b on the event where k^L_b<∞. In other words, r^L_b corresponds to the (rescaled) distance between the distinguished vertex and the boundary of the first explored region with perimeter ⌊ bL⌋. If k^L_b=∞, we take r^L_b=∞. We let _(1)^(b) be distributed as _(1) conditioned on the event {r_b<∞} and similarly, for every L≥ 1, we let 𝒟^L,(b) be distributed as 𝒟^L conditioned on {k^L_b<∞}. The convergence in distribution (, r_b^L) (_(1) ,r_b). holds in 𝕄^2, 1×ℝ. By the Skorokhod representation theorem, we may assume that the convergence of Theorem <ref> holds almost surely, as well as the convergences (<ref>) and (<ref>). Proposition <ref> will follow if we can verify that r^L_b ⟶ r_b a.s. as L→∞ (in particular 1_{r^L_b<∞}⟶1_{r_b<∞}). Set ξ^L_b:=inf{j∈{0,1,…,R_L}: 𝒫^L_j≥⌊ bL⌋}. From the (a.s.) convergence of (𝒫^L_t∧ r^L_*)_t≥ 0 to (𝒫_t∧ r_*)_t≥ 0, one infers that c_Lξ^L_b converges a.s. to r_b as L→∞, on the event {r_b<∞}. To be precise, we need to know that immediately after time r_b, the process 𝒫_t takes values greater than b, but this follows (via a time change argument) from the analogous property satisfied by the process 𝒫_t in Proposition <ref>. Argue on the event {r_b<∞}. Then, for L large we have ξ^L_b<∞ and σ^L_ξ^L_b≥ k^L_b (because P^L_σ^L_ξ^L_b=𝒫^L_ξ^L_b≥⌊ bL⌋}). Hence, c_Lξ^L_b=c_Lh^L_σ^L_ξ^L_b≥ c_Lh^L_k^L_b=r^L_b and, since c_Lξ^L_b converges to r_b, lim sup_L→∞ r^L_b ≤ r_b. To get the analogous result for the liminf, fix ∈(0,b) and argue on the event where r_b-<∞. Since 𝒫_r=𝒫_η_r for 0<r<r_*, we have sup_s≤η_r_b-𝒫_s=sup_t≤ r_b-𝒫_η_t=sup_r≤ r_b-𝒫_r≤ b-. Using the (a.s.) convergence (<ref>), we thus get that for L large, sup_s≤η_r_b- P^L_s∧ S_L<b-/2, or equivalently 1/Lsup_j≤ L^3/2η_r_b- P^L_j∧ S_L<b-/2, which implies k^L_b≥ L^3/2η_r_b-. Finally, r^L_b=c_L h^L_k^L_b≥ c_L h^L_⌊ L^3/2η_r_b-⌋ and the right-hand side converges as L→∞ to 𝒜_η_r_b-=r_b-. We conclude that lim inf_L→∞ r^L_b ≥ r_b- on the event where r_b-<∞. Since this holds for any >0, the proof is complete. § CONVERGENCE TO THE BROWNIAN ANNULUS §.§ Statement of the result We no longer assume that a=1. The definitions of and _(1) given before Proposition <ref> can then be extended. In particular, we write _(a) for a Boltzmann triangulation in 𝕋^1,∙(⌊ aL⌋) conditioned on the event {k^L_b<∞}, where k^L_b is the first time at which the perimeter of the explored region in the peeling algorithm is equal to ⌊ bL⌋. 
We keep the notation d_L for the (rescaled) distance on 𝒟^L,(b)_(a) and r_b^L for the d_L-distance between the distinguished vertex and the boundary of the explored region at time k^L_b. Similarly, 𝔻^(b)_(a) is distributed as 𝔻_(a) conditioned on the event that the process of hull perimeters hits b, and r_b is the corresponding hitting radius. We keep the notation D for the distance on 𝔻^(b)_(a). The convergence (<ref>) is then immediately extended to give (𝒟^L,(b)_(a), r_b^L) ⟶ (𝔻^(b)_(a), r_b) in distribution in 𝕄^2,1×ℝ. Recall that the metric space ℂ_(a,b) is defined as the complement of the (interior of the) hull H_r_b in 𝔻^(b)_(a), and is equipped with the (extension of the) intrinsic metric d^∘. The two boundaries of ℂ_(a,b) are ∂_0ℂ_(a,b)=∂𝔻_(a)^(b), and ∂_1ℂ_(a,b)=∂ H_r_b. To simplify notation, we write ℂ instead of ℂ_(a,b) in this section and the next one. We also let 𝒞^L be the unexplored triangulation at time k_b^L in the peeling algorithm applied to 𝒟^L,(b)_(a). We equip 𝒞^L with the graph distance scaled by the factor √(3/2) L^-1/2, which we denote by d_L^∘. Recall from Section <ref> the definition of the outer boundary ∂_0𝒞^L=∂𝒟^L,(b)_(a) and the inner boundary ∂_1𝒞^L. Our goal in this section is to prove the following theorem. Recall the Gromov-Hausdorff space (𝕄,d_𝙶𝙷) introduced in Section <ref>. The random metric spaces (𝒞^L, d_L^∘) converge in distribution towards (ℂ, d^∘) in (𝕄,d_𝙶𝙷). Before we proceed to the proof of Theorem <ref>, we start with some preliminaries. By the Skorokhod representation theorem, we may assume that the convergence (<ref>) holds almost surely, (𝒟^L,(b)_(a), r_b^L) ⟶ (𝔻^(b)_(a), r_b) a.s. as L→∞. In the following, it will be useful to argue on a fixed value of ω for which (<ref>) holds. In fact, we will need more. We observe that the triangulation 𝒞^L is Boltzmann distributed on the set 𝕋^2(⌊ aL⌋,⌊ bL⌋) of all triangulations with two boundaries of sizes ⌊ aL⌋ and ⌊ bL⌋, and therefore a and b play a symmetric role in the distribution of 𝒞^L. For any L≥ 0, we may introduce a random triangulation H_0^L, with a boundary, which is independent of 𝒞^L and distributed as the triangulation discovered by the peeling algorithm applied to a Boltzmann triangulation in 𝕋^1,∙(⌊ bL ⌋) at the first time when the perimeter of the explored region hits the value ⌊ aL⌋ (conditionally on the event that this hitting time is finite). Let 𝒟̃^L_(b) be the triangulation obtained by gluing H^L_0 onto 𝒞^L along the boundary ∂_0𝒞^L (thus identifying ∂ H^L_0 and ∂_0𝒞^L and their distinguished boundary edges). See Fig. 3 for an illustration. By construction, 𝒟̃^L_(b) is a Boltzmann triangulation in 𝕋^1,∙(⌊ bL ⌋) conditioned on the event that the perimeter process (associated with the peeling algorithm) hits ⌊ aL⌋. Hence, by Proposition <ref>, 𝒟̃^L_(b) ⟶ 𝔻̃^(a)_(b) in distribution, where it is implicit that distances on the spaces 𝒟̃^L_(b) are scaled by √(3/2)L^-1/2 and (𝔻̃^(a)_(b),D̃) is a (free pointed) Brownian disk with perimeter b conditioned on the event that the process of hull perimeters hits a. By a tightness argument, we may assume that (<ref>) holds jointly with (<ref>) along a subsequence of values of L, and, from now on, we restrict our attention to this subsequence. By the Skorokhod representation theorem, we may assume that we have both the almost sure convergences (<ref>) and 𝒟̃^L_(b)⟶𝔻̃^(a)_(b). From now on until the end of Section <ref>, we fix ω such that both these convergences hold. For this value of ω, we will prove that (𝒞^L, d_L^∘) converges to (ℂ, d^∘) in 𝕄.
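For readers less familiar with intrinsic metrics, the elementary sketch below (a toy computation in Python, with an arbitrary graph that is not taken from the paper) illustrates why d_L^∘, which only uses paths staying in 𝒞^L, can be strictly larger than the distance induced by the ambient triangulation; this gap is precisely what the arguments of the next subsection have to control near the boundaries.

```python
from collections import deque

def graph_distance(adj, allowed, source):
    """BFS distances from `source`, using only vertices in `allowed`.
    adj : dict vertex -> list of neighbours; returns a dict of distances."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in allowed and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# A 6-cycle 0-1-2-3-4-5-0; the sub-map keeps all vertices except 5.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
ambient = graph_distance(adj, set(range(6)), 0)    # distance in the full map
intrinsic = graph_distance(adj, set(range(5)), 0)  # distance within the sub-map only
print(ambient[4], intrinsic[4])   # 2 versus 4: removing vertices increases distances
```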
§.§ Reduction to approximating spaces Since (<ref>) holds for the value ω that we have fixed, we may and will assume that _(a) and the spaces _(a) are embedded isometrically in the same compact metric space (E, Δ), in such a way that we have _(a)L→∞_(a), x_*^LL→∞ x_*, _(a)_(a). Note that the restriction of Δ to _(a) is the distance d_L and the restriction of Δ to _(a) is the distance D. As a first important remark, we observe that (since r_b is not a jump point of the perimeter process (𝒫_r)), Proposition <ref> and the fact that r^L_b⟶ r_b give , , A difficulty in the proof of Theorem <ref> comes from the fact that the behaviour of the spaces near the boundaries is not easily controlled. We will deal with this problem by first considering approximating subspaces which are obtained from , resp. from , by removing a neighborhood of the boundary ∂_1𝒞^L, resp. of ∂_1, and then showing that the convergence in Theorem <ref> can be reduced to that of the approximating subspaces. For δ>0, we introduce the space ={x∈ : d_L(x, )≥δ}={x∈ : Δ(x, )≥δ}, which is equipped with the restriction of the distance d_L^∘, and its continuous counterpart ={x∈ : D(x,∂_1)≥δ}={x∈ :Δ(x, )≥δ}, which is equipped with the restriction of the distance d^∘. In what follows, we always assume that δ is small enough so that is not empty and even contains points x such that d^∘(x,∂_1)>δ. Then, (from (<ref>)) it follows that is not empty at least when L is large. If δ>0 is not a local maximum of the function x↦Δ(x, ∂_1ℂ) on ℂ, then: Δ_𝙷(, ) 0. Let us fix ε>0 and η>0. By (<ref>), for every large enough L, we have Δ_𝙷(, )< η/2. If x∈ is such that Δ(x, )≥δ+η, then (by (<ref>) again), we can find a point x_L∈ such that Δ(x_L, x)≤ε∧η/2, and it follows that: Δ(x_L, )≥Δ(x, )-Δ(x, x_L)-Δ_𝙷(, )≥δ, so that x_L∈ and Δ(x, )≤Δ(x, x_L)≤ε. If x is at distance exactly δ from , we can approximate x by a point x' such that Δ(x', )>δ (we use our assumption that δ is not a local maximum of y↦Δ(y, ∂_1ℂ)), and, for L large enough, the same argument allows us to find a point x_L∈ such that Δ(x, x_L)≤ε. A compactness argument then gives sup_x∈Δ(x, )≤ε when L is large. Since ε was arbitrary, we have proved that sup_x∈Δ(x, )→0 as L→∞. A similar argument yields sup_x∈Δ(x, )→ 0, which completes the proof. Set 𝔻^(b)_(a),δ={x∈𝔻^(b)_(a): Δ(x,∂𝔻^(b)_(a))≥δ}, 𝒟^L,(b)_(a),δ={x∈𝒟^L,(b)_(a): Δ(x,∂𝒟^L,(b)_(a)) ≥δ}. Then, for every δ>0 that is not a local maximum of the function x↦Δ(x, ∂𝔻^(b)_(a)) on 𝔻^(b)_(a), we have lim_L→∞Δ_𝙷(𝒟^L,(b)_(a),δ,𝔻^(b)_(a),δ)=0, and consequently lim sup_L→∞( sup_x∈𝒟^L,(b)_(a)Δ(x,𝒟^L,(b)_(a),δ)) ≤sup_x∈𝔻^(b)_(a)Δ(x,𝔻^(b)_(a),δ). The first assertion of the lemma is proved by arguments similar to the proof of Lemma <ref>, and we omit the details. The second assertion is an easy consequence of the first one and the fact that Δ_𝙷(𝒟^L,(b)_(a),𝔻^(b)_(a)) tends to 0 (cf. (<ref>)). Remark. The first assertion of Lemma <ref> obviously requires our particular embedding of the spaces 𝒟^L,(b)_(a) and 𝔻^(b)_(a) in (E,Δ), but the second one holds independently of this embedding provided we replace Δ by d_L in the left-hand side and by D in the right-hand side. Let us turn to the proof of Theorem <ref>. For every δ>0, we have d_𝙶𝙷(𝒞^L, ℂ)≤ d_𝙶𝙷(𝒞^L, 𝒞^L_δ)_(A_L,δ) + d_𝙶𝙷(𝒞^L_δ, )_(A'_L,δ) + d_𝙶𝙷(, )_(A”_δ). where we recall that and are equipped with the distances d_L^∘ and d^∘ respectively, and and are equipped with the restrictions of these distances. Our goal is to prove that d_𝙶𝙷(𝒞^L, ℂ) tends to 0 as L tends to infinity. 
To this end, we will deal separately with each of the terms A_L,δ, A'_L,δ and A”_δ. Let us start with A”_δ. We have lim_δ→ 0d_𝙶𝙷(, )=0. It is enough to verify that sup_x∈ℂ d^∘(x,ℂ_δ) 0. If this does not hold, we can find α>0 and sequences x_n∈ℂ, δ_n⟶ 0, such that d^∘(x_n,ℂ_δ_n)≥α. By compactness we can assume that x_n⟶ x_∞∈ℂ, and we get that d^∘(x_∞,ℂ_δ)≥α/2 for every δ>0, which is absurd because we know that x_∞ must be the limit (with respect to d^∘) of a sequence of points in ∖∂_1=∪_δ>0_δ. Let us now discuss A_L,δ. We have lim_δ→ 0( lim sup_L→∞ d_𝙶𝙷(𝒞^L, 𝒞^L_δ))=0. We need to verify that lim_δ→ 0( lim sup_L→∞(sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ))) = 0. Here it is convenient to view 𝒞^L as a subset of the triangulation _(b) introduced in Section <ref>. We denote the rescaled distance on _(b) by d̃_L, and, for every δ>0, we set _(b),δ={x∈_(b): d̃_L(x,∂_(b)) ≥δ}. We then claim that, for δ>0 small enough, for every sufficiently large L, we have sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ) =sup_x∈_(b)d̃_L(x,_(b),δ). Indeed, the properties ⟶ and =∂𝒟^L,(b)_(a)⟶ ensure that for δ>0 small, for every sufficiently large L, all points of ∂_0𝒞^L are at distance greater than δ from ∂_1𝒞^L, and it follows that 𝒞^L ∖𝒞^L_δ is identified with _(b)∖_(b),δ. Our claim easily follows. At this stage, we can use Lemma <ref> (with the roles of a and b interchanged) and the subsequent remark : except possibly for countably many values of δ, we have lim sup_L→∞(sup_x∈_(b)d̃_L(x,_(b),δ) )≤sup_x∈𝔻̃^(a)_(b)D̃(x,𝔻̃^(a)_(b),δ) where 𝔻̃^(a)_(b),δ={x∈𝔻̃^(a)_(b) : D̃(x,∂𝔻̃^(a)_(b))≥δ}. It follows from the preceding considerations that, except possibly for countably many values of δ, lim sup_L→∞(sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ) )≤sup_x∈𝔻̃^(a)_(b)D̃(x,𝔻̃^(a)_(b),δ). The right-hand side tends to 0 as δ→ 0, which completes the proof. It remains to study the terms A'_L,δ. If δ>0 is not a local maximum of the function x↦Δ(x, ∂_1ℂ) on ℂ, we have lim_L→∞ d_𝙶𝙷(𝒞^L_δ, )=0. Let us postpone the proof of Lemma <ref> to the next section, and recall the bound (<ref>). By letting first L tend to infinity and then δ tend to 0, using Lemmas <ref>, <ref> and <ref>, we get lim sup_L→∞ d_𝙶𝙷(𝒞^L, ℂ)=0 which completes the proof of Theorem <ref>. Therefore, it only remains to prove Lemma <ref>. §.§ Proof of the key lemma In this section, we prove Lemma <ref>. We let δ>0 such that δ is not a local maximum of the function x↦Δ(x, ∂_1ℂ). Recalling Lemma <ref>, we define a correspondence between and by setting ℛ_L={(x_L,x')∈×ℂ_δ:Δ(x_L,x')≤Δ_𝙷(, )}. By the classical result expressing the Gromov-Hausdorff distance in terms of distortions of correspondences <cit.>, the statement of Lemma <ref> will follow if we can prove that the distorsion of ℛ_L tends to 0 as L→∞, or equivalently sup_(x_L,x')∈ℛ_L,(y_L,y')∈ℛ_L|d^∘_L(x_L,y_L)-d^∘(x',y')| 0. We first verify that sup_(x_L,x')∈ℛ_L,(y_L,y')∈ℛ_L(d^∘_L(x_L,y_L)-d^∘(x',y')) 0. To this end, we argue by contradiction. If (<ref>) does not hold, we can find η>0 and sequences L_k↑∞, and (x_L_k,x'_k), (y_L_k,y'_k) in ℛ_L_k such that d^∘_L_k(x_L_k,y_L_k)>d^∘(x'_k,y'_k) + η. We may assume that x'_k⟶ x'_∞ and y'_k⟶ y'_∞ where x'_∞,y'_∞∈ℂ_δ, and for k large we have also d^∘_L_k(x_L_k,y_L_k)>d^∘(x'_∞,y'_∞) + η/2. From (<ref>) and the definition of the correspondence ℛ_L, we also get that Δ(x_L_k,x'_∞)⟶ 0 and Δ(y_L_k,y'_∞)⟶ 0. Since d^∘ coincides with the (extension of the) intrinsic distance on ℂ∖∂_1, we can find a path γ from x'_∞ to y'_∞ in ℂ that does not hit the boundary ∂_1ℂ and whose length is bounded above by d^∘(x'_∞,y'_∞)+η/4. 
From part 1 of Lemma <ref>, if k is large, we can approximate γ by a path γ_L_k going from x_L_k to y_L_k in 𝒞^L_k, whose length is bounded above by d^∘(x'_∞,y'_∞)+3η/8, such that γ_L_k will not hit ∂_1𝒞^L_k (we use the convergence of ∂_1𝒞^L_k to ∂_1ℂ) and therefore stays in 𝒞^L_k. It follows that d^∘_L_k(x_L_k,y_L_k) is bounded above by the length of γ_L_k giving a contradiction with (<ref>). This completes the proof of (<ref>). In order to complete the proof of (<ref>), we still need to verify that sup_(x_L,x')∈ℛ_L,(y_L,y')∈ℛ_L(d^∘(x',y') -d^∘_L(x_L,y_L)) 0. This is slightly more delicate than the proof of (<ref>), and we will need the following lemma. Let η>0. There exist ε>0 and L_0≥ 0 such that, for any choice of x^L, y^L∈ with L≥ L_0, there is a path between x^ L and y^L in which stays at distance at least ε from and whose length is bounded by d_L^∘(x^L, y^ L)+η. [Proof of Lemma <ref>] Let us argue by contradiction. If the desired property does not hold, we can find sequences ε_n⟶ 0, L_n⟶∞, x_n,y_n∈𝒞^L_n_δ such that any path from x_n to y_n that stays at distance at least ε_n from ∂_1𝒞^L has length greater than d_L_n^∘(x_n, y_n)+η. By compactness, we may assume that x_n⟶ x_∞ and y_n⟶ y_∞ in (E,Δ) and, by (<ref>), we have x_∞,y_∞∈_δ. Additionally, since the diameters of (𝒞^L_n_δ,d^∘_L_n) are bounded (this follows from (<ref>) since the diameter of ℂ is finite), we can assume that ℓ_n:=d^∘_L_n(x_n,y_n) converges to some real ℓ_∞≥ 0. For every n, let γ_n be a geodesic from x_n to y_n. By a standard argument, we can extract from the sequence (γ_n(t∧ℓ_n),t∈[0,ℓ_∞+η/3]) a subsequence that converges uniformly (for the metric Δ) to a path γ_∞ =(γ_∞(t),t∈[0,ℓ_∞+η/3]) that connects x_∞ to y_∞ in 𝔻_(a)^(b). By (<ref>), γ_∞ takes values in ℂ. Moreover, from the analogous property for the discrete paths γ_n, we get that γ_∞ is 1-Lipschitz, meaning that Δ(γ_∞(s),γ_∞(t))≤ |t-s| for every s,t. It follows in particular that the length of γ_∞ is at most ℓ_∞+η/3. The path γ_∞ may hit ∂_1ℂ. Using Lemma <ref>, we can however find another path γ'_∞ connecting x_∞ to y_∞ in ℂ, which does not hit ∂_1ℂ and has length at most ℓ_∞+2η/3. The path γ'_∞ stays at positive distance α from ∂_1ℂ. Using part 2 of Lemma <ref> (and the fact that ∂_1𝒞^L converges to ∂_1ℂ for the Δ-Hausdorff measure, by (<ref>)), we can then, for n large enough, find a path γ'_n connecting x_n to y_n in 𝒞^L_n, with length smaller that d^∘_L_n(x_n,y_n)+η, that will stay at distance at least α/2 from ∂_1𝒞^L_n. This is a contradiction as soon as ε_n<α/2. Let us complete the proof of (<ref>). We again argue by contradiction. If (<ref>) does not hold, we can find η>0 and sequences L_k↑∞, and (x_L_k,x'_k), (y_L_k,y'_k) in ℛ_L_k such that d^∘(x'_k,y'_k) >d^∘_L_k(x_L_k,y_L_k)+ η. We may assume that x'_k⟶ x'_∞ and y'_k⟶ y'_∞ where x'_∞,y'_∞∈ℂ_δ. By Lemma <ref>, we can find ε>0 such that, for every large enough k, there is a path γ_L_k from x_L_k to y_L_k in 𝒞^L_k that stays at distance at least ε from ∂_1𝒞^L_k and whose length is bounded by d^∘_L_k(x_L_k,y_L_k)+η/2. We have Δ(x_L_k,x'_∞)⟶ 0 and Δ(y_L_k,y'_∞)⟶ 0, and, by part 2 of Lemma <ref>, we can (for k large) find a path γ'_k from x'_∞ to y'_∞ in 𝔻_(a)^(b) that stays at distance at least ε/2 from ∂_1ℂ (we again use the convergence of ∂_1𝒞^L_k to ∂_1ℂ) and has length smaller than d^∘_L_k(x_L_k,y_L_k)+ 3η/4. Hence d^∘(x'_∞,y'_∞)<d^∘_L_k(x_L_k,y_L_k)+ 3η/4, and also, for k large, d^∘(x'_k,y'_k)<d^∘_L_k(x_L_k,y_L_k)+ 7η/8. 
We get a contradiction, which completes the proof of (<ref>) and of Theorem <ref>. □ § CONVERGENCE OF BOUNDARIES AND VOLUME MEASURES In the last section, we showed that the sequence of metric spaces (, d^∘_L) converges in law towards (, d^∘) for the Gromov-Hausdorff topology. We will now explain how to extend this result to the setting of marked measure metric spaces. We write μ_L for the restriction to of the (scaled) counting measure ν^L, and μ for the restriction to of the volume measure 𝐕. The random marked measure metric spaces 𝒳^L:=((, d^∘_L), (∂_0, ∂_1), μ_L), converge towards 𝒴:=((, d^∘), (, ),μ) in distribution in the space 𝕄^2, 1. As in the previous section, we may restrict our attention to a sequence of values of L such that the convergences (<ref>) and (<ref>) hold almost surely. Fixing ω in the underlying probability space, we can assume that _(a) and the spaces _(a) are embedded isometrically in the same compact metric space (E, Δ), in such a way that the convergences (<ref>) hold for the Hausdorff distance associated with Δ, and moreover the measures ν_L converge weakly to 𝐕. As explained at the beginning of Section <ref>, we can also assume that (<ref>) holds. We recall the definition of and in (<ref>) and (<ref>), and we also set ∂_1 = {x∈ :Δ(x, )=δ} and ∂_1 = {x∈ : Δ(x, )=δ}. In what follows, we always assume that δ>0 is small enough so that Δ(∂_0ℂ,∂_1ℂ)>δ, and in particular is not empty. If δ>0 is not a local maximum of the function x↦Δ(x, ∂_1ℂ) on ℂ we have Δ_𝙷(∂_1 𝒞^L_δ, ∂_1 ℂ_δ) 0. Moreover, lim_δ→ 0( lim sup_L→∞μ_L(𝒞^L∖)) =0. The first part of the lemma is derived by arguments similar to the proof of Lemma <ref>. The second part follows from the weak convergence of ν_L to 𝐕 and the fact that 𝐕 puts no mass on ∂_1ℂ. We leave the details to the reader. We then set θ_L(δ)= max(Δ_𝙷(,),Δ_𝙷(∂_1 𝒞^L_δ, ∂_1 ℂ_δ),Δ_𝙷(∂_0 𝒞^L, ∂_0 ℂ)). By (<ref>), (<ref>) and Lemma <ref>, we have lim_L→∞θ_L(δ) = 0 except possibly for countably many values of δ. We then slightly modify the definition of the correspondence ℛ_L by setting ℛ'_L={(x_L,x')∈×ℂ_δ:Δ(x_L,x')≤θ_L(δ)}. The very same arguments as in Section <ref> show that the distortion of ℛ'_L tends to 0 as L→∞ (again except possibly for countably many values of δ). We will now extend ℛ'_L to a correspondence between and . We start by fixing η>0, and we set α_L(δ):=sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ) , α(δ):= sup_x∈ℂ d^∘(x,ℂ_δ). By (<ref>) and (<ref>), we can choose δ∈(0,η) small enough so that we have both α(δ)≤η and α_L(δ)≤η for every sufficiently large L. Additionally, the second assertion of the lemma allows us to assume that μ_L(∖𝒞^L_2δ)<η for L large. In what follows, we fix δ∈(0,η) so that the preceding properties hold (and both δ and 2δ do not belong to the countable set that was excluded above). To simplify notation, we write α_L=α_L(δ) and α=α(δ). We define a correspondence between and by setting ℛ^*_L:={(x_L,x)∈×: ∃x̃_L∈, x̃∈ s.t. (x̃_L,x̃)∈ℛ'_L, d^∘_L(x_L,x̃_L)≤α_L, d^∘(x,x̃)≤α}. Then, we can easily bound the distortion dis(ℛ^*_L) of ℛ^*_L in terms of the the distortion dis(ℛ'_L) of ℛ'_L: if (x_L,x),(y_L,y)∈ℛ^*_L, we can find (x̃_L,x̃),(ỹ_L,ỹ)∈ℛ'_L such that |d^∘_L(x_L,y_L)-d^∘(x,y)|≤ 2α_L+|d^∘_L(x̃_L,ỹ_L)-d^∘(x̃,ỹ)| + 2α, and it follows that dis(ℛ^*_L)≤ 2(α+α_L)+ dis(ℛ'_L)≤ 4η + dis(ℛ'_L). To prove the desired convergence of 𝒳^L towards 𝒴 in 𝕄^2,1, we will use the definition of the Gromov-Hausdorff-Prokhorov distance d^2,1_𝙶𝙷𝙿. By a classical argument (cf. 
<cit.>), we can define a distance Δ^L,* on the disjoint union 𝒞^L⊔ℂ, such that the restriction of Δ^L,* to is d^∘_L, the restriction of Δ^L,* to is d^∘, and, for every x_L∈𝒞^L and x∈ℂ: Δ^L,*(x_L,x)= 1/2dis(ℛ^*_L)+(y_L,y)∈ℛ_L^*inf(d_L^∘(x_L,y_L)+d^∘(x,y)). Since (𝒞^L, d^∘_L) and (ℂ, d^∘) are embedded isometrically in (𝒞^L⊔ℂ, Δ^L,*), we can then use the definition of the Gromov-Hausdorff-Prokhorov distance to bound d^2,1_𝙶𝙷𝙿(𝒳^L,𝒴). We need to bound each of the four terms appearing in the infimum of the definition. We again use the notation Δ^L,*_𝙷, resp. Δ^L,*_𝙿, for the Hausdorff distance, resp. the Prokhorov distance, associated with Δ^L,*. First step. We verify that max(Δ^L,*_𝙷(,),Δ^L,*_𝙷(∂_1,∂_1),Δ^L,*_𝙷(∂_0,∂_0))≤1/2dis(ℛ^*_L) +max(α,α_L)+δ. First, it is immediate from the definition of Δ^L,* that Δ^L,*_𝙷(,)≤1/2dis(ℛ^*_L). Similarly, the fact that Δ_𝙷(∂_0 𝒞^L, ∂_0 ℂ)≤θ_L(δ) and the definition of ℛ'_L give Δ^L,*_𝙷(∂_0,∂_0)≤1/2dis(ℛ^*_L). Let us bound Δ^L,*_𝙷(∂_1,∂_1). Let x_L∈∂_1. From the definition of α_L, we can find x'_L∈ such that d^∘_L(x_L,x'_L)≤α_L. By considering a geodesic from x'_L to x_L, we can even assume that x'_L∈∂_1. It follows that there exists x'∈∂_1 such that Δ(x',x'_L)≤Δ_𝙷(∂_1,∂_1), hence (x'_L,x')∈ℛ'_L⊂ℛ^*_L. From the definition of Δ^L,*, we get Δ^L,*(x_L,x')≤1/2dis(ℛ^*_L) + d^∘_L(x_L,x'_L)≤1/2dis(ℛ^*_L)+α_L. Finally, since x'∈∂_1, we can find x”∈∂_1 such that Δ^L,*(x',x”)=δ, and we get Δ^L,*(x_L,x”)≤1/2dis(ℛ^*_L)+α_L+δ. In a symmetric manner, we can verify that, for any y∈∂_1, we can find y_L∈∂_1𝒞^L such that Δ^L,*(y_L,y)≤1/2dis(ℛ^*_L)+α+δ. This gives the desired bound for Δ^L,*_𝙷(∂_1,∂_1), thus completing the proof of (<ref>). As an immediate consequence, using also our estimate for dis(ℛ^*_L), we get lim sup_L→∞(max(Δ^L,*_𝙷(,),Δ^L,*_𝙷(∂_1,∂_1),Δ^L,*_𝙷(∂_0,∂_0)))≤ 4η. Second step. We now want to bound Δ^L,*_𝙿(μ_L,μ). We start by observing that, if L is large enough, if x∈ and x_L∈ are such that Δ(x_L,x)<δ/2, we have Δ^L,*(x_L,x)≤Δ(x_L,x)+ 1/2dis(ℛ^*_L)+θ_L(δ). Indeed, we can find x'∈ such that Δ(x_L,x')≤θ_L(δ) (and in particular (x_L,x')∈ℛ'_L), then Δ(x,x')≤Δ(x,x_L)+θ_L(δ)<δ provided that L is large enough so that θ_L(δ)<δ/2. Since x and x' both belong to and Δ(x,x')<δ, we must have Δ(x,x')=d^∘(x,x'), and Δ^L,*(x_L,x)≤1/2dis(ℛ^*_L)+d^∘(x,x')=1/2dis(ℛ^*_L)+Δ(x,x')≤1/2dis(ℛ^*_L)+θ_L(δ)+Δ(x_L,x), which gives our claim (<ref>). Let A be a measurable subset of . We have μ_L(A)≤μ_L(A∩𝒞^L_2δ)+ μ_L(∖𝒞^L_2δ) and we know that μ_L(∖𝒞^L_2δ)<η when L is large. On the other hand, by the weak convergence of ν_L to 𝐕, we have also for L large, μ_L(A∩𝒞^L_2δ)=ν_L(A∩𝒞^L_2δ)≤𝐕({x∈𝔻^(b)_(a):Δ(x,A∩𝒞^L_2δ)<δ/2})+δ/2. Since Δ_𝙷(𝒞^L_2δ,ℂ_2δ) tends to 0 as L→∞, the properties x∈𝔻^(b)_(a) and Δ(x,𝒞^L_2δ)<δ/2 imply (for L large) that x∈, and in particular we can replace 𝔻^(b)_(a) by and 𝐕 by μ in the last display. But then we can use (<ref>) to get that, for x∈_δ, Δ(x,A∩𝒞^L_2δ)<δ/2 ⇒ Δ^L,*(x,A∩𝒞^L_2δ)<δ/2 + 1/2dis(ℛ^*_L)+θ_L(δ). Finally, we have, for L large, μ_L(A)≤μ_L(A∩𝒞^L_2δ) +η ≤μ({x∈:Δ(x,A∩𝒞^L_2δ)<δ/2})+δ/2+η ≤μ({x∈:Δ^L,*(x,A)<δ/2+1/2dis(ℛ^*_L)+θ_L(δ)}) +δ/2+η ≤μ({x∈:Δ^L,*(x,A)<3η}) + 2η. A symmetric argument (left to the reader) shows that for L large, for any measurable subset A of , we have μ(A)≤μ_L({x∈:Δ^L,*(x_L,A)<3η}) + 2η. This proves that Δ_𝙿(μ_L,μ)≤ 3η when L is large. Since η was arbitrary, we can combine this with (<ref>) to get the desired convergence of d^2,1_𝙶𝙷𝙿(𝒳^L,𝒴) to 0. § THE COMPLEMENT OF TWO HULLS IN THE BROWNIAN SPHERE In this section, we fix r, r'>0. 
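Throughout the two preceding proofs, Gromov-Hausdorff and Gromov-Hausdorff-Prokhorov estimates are obtained from distortions of correspondences, via the classical bound d_𝙶𝙷(X,Y)≤ dis(ℛ)/2 valid for any correspondence ℛ between X and Y. The following elementary sketch (Python; finite metric spaces encoded by distance matrices and a toy correspondence, none of which comes from the paper) makes the quantity bounded for ℛ_L, ℛ'_L and ℛ^*_L concrete.

```python
import numpy as np

def distortion(dX, dY, R):
    """Distortion of a correspondence R between two finite metric spaces.

    dX, dY : square distance matrices; R : list of index pairs (i, j) such that
    every i and every j appears at least once (a correspondence).  By definition,
    dis(R) = sup |dX[i,i'] - dY[j,j']| over (i,j), (i',j') in R, and the
    Gromov-Hausdorff distance is bounded above by dis(R)/2.
    """
    dis = 0.0
    for (i, j) in R:
        for (ip, jp) in R:
            dis = max(dis, abs(dX[i, ip] - dY[j, jp]))
    return dis

# Two three-point spaces and the diagonal correspondence.
dX = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], dtype=float)
dY = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
R = [(0, 0), (1, 1), (2, 2)]
print(distortion(dX, dY, R) / 2)   # an upper bound for d_GH, here 0.5
```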
Recall that B_r^∙(_*) is the hull of radius r centered at _* in the free Brownian sphere _∞ (this hull is defined on the event {𝐃(_*,_0)>r}). It is shown in <cit.> that the intrinsic metric on B^∘_r(_*)= B^∙_r(_*)∖∂ B^∙_r(_*) has a.s. a continuous extension to its closure B^∙_r(_*). In the following, we implicitly endow B^∙_r(_*) with this extended intrinsic metric and we equip it with the restriction of the volume measure on _∞, the distinguished point _* and the boundary ∂ B^∙_r(_*), so that we can consider B_r^∙(_*) as a random variable in 𝕄^2, 1. Since _* and _0 play symmetric roles in the Brownian sphere <cit.>, we can similarly consider, on the event {𝐃(_*,_0)>r'}, the hull of radius r' centered at _0 in _∞, which we denote by B_r'^∙(_0) (this is defined as the complement of the connected component of _∞∖ B^∞_r'(_0) that contains _*). We can endow this space with its (extended) intrinsic metric as we did for B_r^∙(_*) and consider B_r'^∙(_0) as a random variable in 𝕄^2, 1 by equipping it with the restriction of the volume measure on _∞, the distinguished point _0 and the boundary ∂ B^∙_r'(_0). We also consider the perimeter of these hulls. The perimeter of B_r^∙(_*) is 𝒵_r^_*:=𝐏_r as given by formula (<ref>) and symmetrically the perimeter 𝒵_r'^_0 of B_r'^∙(_0) may be defined by the analog of (<ref>) where _* is replaced by _0: 𝒵_r'^_0:=lim_ε→ 01/ε^2Vol({x∈_∞∖ B^∙_r'(_0):𝐃(x,B^∙_r'(_0))<ε}). On the event where 𝐃(_*, _0)>r+r', the hulls B^∙_r(_*) and B^∙_r'(_0) are disjoint, and we consider the subspace 𝒞^_*, _0_r,r':= Closure(_∞∖(B_r^∙(_*)∪ B_r'^∙(_0))). It is shown in <cit.> that, a.s. on the event {𝐃(_*, _0)>r+r'}, the intrinsic metric on _∞∖(B_r^∙(_*)∪ B_r'^∙(_0)) has a continuous extension on 𝒞^_*, _0_r,r', which is a metric on this space (to be precise, <cit.> considers only the case r=r', but the argument is the same without this condition). So we can view 𝒞^_*, _0_r,r' as a random variable in 𝕄^2,1 by equipping this space with the restriction of the volume measure of _∞ and with the “boundaries” ∂ B^∙_r(_*) and ∂ B^∙_r'(_0). We finally recall the notion of a standard hull with radius r and perimeter z>0, as defined in <cit.>. Under the probability measure _0(· | 𝐃(_*, _0)>r+r'), the three spaces B_r^∙(_*), B_r'^∙(_0) and 𝒞^_*, _0_r, r' are conditionally independent given the pair (𝒵_r^_*,𝒵_r'^_0), and their conditional distribution can be described as follows. The spaces B_r^∙(_*) and B_r'^∙(_0) are standard hulls of respective radii r and r' and of respective perimeters 𝒵_r^_* and 𝒵_r'^_0. The space 𝒞^_*, _0_r,r' is a Brownian annulus with perimeters 𝒵_r^_* and 𝒵_r'^_0. This theorem is closely related to <cit.> (see also <cit.>). In fact, <cit.> (stated for r=r' but easily extended) already gives the conditional independence of B_r^∙(_*), B_r'^∙(_0) and 𝒞^_*, _0_r,r' given (𝒵_r^_*,𝒵_r'^_0), and identifies the conditional distribution of the hulls B_r^∙(_*) and B_r'^∙(_0). In order to complete the proof of Theorem <ref>, it only remains to identify the conditional distribution of 𝒞^_*, _0_r,r'. To do so, we will first state and prove a proposition, which may be viewed as a variant of our definition of the Brownian annulus. This proposition also corresponds to Definition 1.1 in <cit.>). We consider now the (free pointed) Brownian disk 𝔻_(a). Recall the notation H_r for the hull of radius r centered at the distinguished point x_* of 𝔻_a, which is defined on the event {r<r_*}. We also let C_r be the closure of 𝔻_(a)∖ H_r. 
In a way similar to the results recalled at the beginning of this section, one proves that the intrinsic metric on H_r∖∂ H_r (resp. on 𝔻_(a)∖ H_r) has a continuous extension to H_r (resp. to C_r) which is a metric on this space. The shortest way to verify these properties is to view the Brownian disk 𝔻_(a) as embedded in the Brownian sphere, as in Proposition <ref> above, and then to use the analogous properties in the Brownian sphere recalled at the beginning of this section (we omit the details). In the next proposition, we thus view H_r (resp. C_r) equipped with the extended intrinsic metric, with the marked subsets {x_*} and ∂ H_r (resp. with the boundaries ∂𝔻_(a) and ∂ H_r) and with the restriction of the volume measure on 𝔻_(a), as a random variable in 𝕄^2,1. Recall the notation 𝒫_r for the boundary size of H_r. Under (·| r<r_*), C_r and H_r are conditionally independent given 𝒫_r, H_r is distributed as a standard hull with radius r and perimeter 𝒫_r and C_r is distributed as a free Brownian annulus with perimeters a and 𝒫_r. By Proposition <ref>, we may and will assume that the Brownian disk 𝔻_(a) is constructed as the subspace B̌^∙__a(_*) of the free Brownian sphere _∞ under _0(·|_a<∞), where _a=inf{r∈(0,_*):_r-_*=a}, and we recall that B̌^∙__a(_*) is the closure of _∞∖ B^∙__a(_*). The distance between the distinguished point x_*=_0 and the boundary of _(a) is then r_*=_*-_a, where _*= 𝐃(_0, _*). Furthermore, conditioning 𝔻_(a) on the event {r<r_*} is then equivalent to arguing under _0(·| r+_a<_*). On the event {r+_a<_*}, the hull H_r is identified to the hull B^∙_r(_0) and C_r is identified to B̌^∙__a(_*)∖ B^∘_r(_0) where B^∘_r(_0) denotes the interior of B^∙_r(_0). In particular, the perimeter 𝒫_r of H_r is identified with the boundary size 𝒵^_0_r of B^∙_r(_0). As explained at the beginning of this section, we view B^∙_r(_0) as a random variable in 𝕄^2,1. Similarly <cit.> (with the roles of _* and _0 interchanged) allows us to view B̌^∙_r(_0):=_∞∖ B^∘_r(_0), equipped with the extended intrinsic metric, as a random variable in 𝕄^2,1 (the marked subsets are {_*} and ∂ B^∙_r(_0)). Fact. Under _0(· | _*>r), B^∙_r(_0) and B̌^∙_r(x_0) are independent conditionally given the perimeter 𝒵^_0_r, B^∙_r(_0) is distributed as a standard hull of radius r and perimeter 𝒵^_0_r, and B̌^∙_r(x_0) is distributed as a free Brownian disk of perimeter 𝒵^_0_r. This follows from <cit.>, up to the interchange of _* and _0. We then note that the event {r+_a<_*} is measurable with respect to B̌^∙_r(_0), and that, on this event, B̌^∙__a(_*)∖ B^∘_r(_0) is a function of B̌^∙_r(_0) (indeed B̌^∙__a(_*)∖ B^∘_r(_0) is obtained from B̌^∙_r(_0) by “removing” the hull of radius _a centered at _*). It follows from these observations and the preceding Fact that, under _0(·| r+_a<_*), B^∙_r(_0) and B̌^∙__a(_*)∖ B^∘_r(_0) are independent conditionally on 𝒵^_0_r. To get the statement of the proposition, it only remains to determine the conditional distribution of B̌^∙__a(_*)∖ B^∘_r(_0) knowing 𝒵^_0_r, under _0(·| r+_a<_*). To this end, we observe that, by construction, on the event {r+_a<_*}, B̌^∙__a(_*)∖ B^∘_r(_0)=B̌^∙_r(_0)∖ B^∘__a(_*), By the preceding Fact, B̌^∙_r(_0) is distributed under _0(· | _*>r) as a free pointed Brownian disk with perimeter 𝒵^_0_r (whose distinguished point is _*). Under _0(·| r+_a<_*), this Brownian disk is further conditioned on the event that there is a hull of perimeter a centered at the distinguished point _*, and _a is the first radius at which this occurs. 
By our definition of the Brownian annulus, this means that, under _0(·| r+_a<_*) and conditionally on 𝒵^_0_r, B̌^∙_r(_0)∖ B^∘__a(_*) is a Brownian annulus with perimeters 𝒵^_0_r and a. This completes the proof. [Proof of Theorem <ref>] As in the preceding proof (interchanging again the roles of _* and _0), we know that, under _0(· | _*>r) and conditionally on ^(_*)_r, B̌^∙_r(_*) is a (free pointed) Brownian disk with perimeter ^(_*)_r, whose distinguished point is _0. Under _0(· | _*>r+r'), this Brownian disk is conditioned on the event that the distinguished point is at distance greater than r' from the boundary. We can thus apply Proposition <ref>, with r replaced by r', to this Brownian disk, and it follows that, under _0(· | _*>r+r'), conditionally on the pair (^(_*)_r,^(_0)_r'), the space B̌^∙_r(_*)∖ B^∘_r'(_0) is a Brownian annulus with perimeters ^(_*)_r and ^(_0)_r'. This completes the proof since 𝒞^_*, _0_r,r'= B̌^∙_r(_*)∖ B^∘_r'(_0) by construction. § EXPLICIT COMPUTATIONS FOR THE LENGTH OF THE ANNULUS Recall the setting of Section <ref>. We define the length ℒ_(a, b) of the annulus _(a,b) as the distance between the two boundaries ∂_1ℂ_(a,b) and ∂_0ℂ_(a,b). Our goal in this section is to discuss the distribution of ℒ_(a, b). From formula (<ref>), we get that ℒ_(a, b) is given under the probability measure (·| r_b<∞) by the formula ℒ_(a, b)= r_*-r_b. From the discussion in the proof of Lemma <ref>, we see that the distribution of ℒ_(a, b) is the law of the last hitting time of b for a continuous-state branching process with branching mechanism ψ(λ)=√(8/3) λ^3/2 with initial distribution 3/2a^3/2 (a+z)^-5/2, conditionally on the fact that this process visits b. Unfortunately, we were not able to use this interpretation to derive an explicit analytic expression for the law of ℒ_(a, b), but the following proposition still gives some useful information. The first moment of ℒ_(a, b) is √(3π/2)(a+b)(√( a^-1)+√(b^-1)-√(a^-1+b^-1)). Furthermore, the probability of the event {ℒ_a, b>u} is asymptotic to 3(a+b)u^-2 when u→∞. To simplify notation, we consider first the case a=1 and we write ℒ_b=ℒ_1, b. For every x≥ 0, we write (Z_t)_t≥ 0 for a continuous-state branching process with branching mechanism ψ that starts from x under the probability measure _x. Similarly, we write (X_t)_t≥ 0 for a spectrally positive Lévy process with Laplace exponent ψ starting from x under _x, and we also set T_0=inf{t≥ 0:X_t=0}. By the Lamperti transformation, we have for every measurable function f:ℝ_+→ℝ_+ such that f(0)=0, 𝔼_x[∫_0^∞ f(Z_t)t]= 𝔼_x[∫_0^T_0 f(X_t)dt/X_t] On the other hand, the potential kernel of the Lévy process X killed upon hitting 0 is computed in the proof of Theorem VII.18 in <cit.>: for every measurable function g:ℝ_+→ℝ_+, 𝔼_x[∫_0^T_0g(X_t)dt]= ∫_0^∞ g(y)( W(y)-1_{x<y}W(y-x))dy, where W(u) is the scale function of the Lévy process -X, which is given here by W(u)=√((3/2π)u). Suppose then that Z starts with initial density 3/2 (1+x)^-5/2 under the probability measure . It follows from the preceding two displays that 𝔼[∫_0^∞ f(Z_t)dt] = 3/2√(3/2π)∫_0^∞dx/(1+x)^5/2∫_0^∞f(y)/y(√(y)-1_{x<y}√(y-x))dy = √(3/2π)∫_0^∞f(y)/y(√(y)- 3/2∫_0^y√(y-x)/(1+x)^5/2 dx)dy =√(3/2π)∫_0^∞f(y)/y(√(y)-y^3/2/1+y)dy = √(3/2π)∫_0^∞f(y)/√(y) (1+y)dy. Next let L_b:=sup{t≥ 0: Z_t=b}, with the convention sup∅=0. 
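Before turning to L_b, the following short numerical check (Python, illustrative only and not part of the proof) confirms the evaluation of the inner integral used in the display above, namely that (3/2)∫_0^y √(y-x) (1+x)^{-5/2} dx = y^{3/2}/(1+y), which is what reduces the occupation density to √(3/2π)/(√(y)(1+y)).

```python
import numpy as np
from scipy.integrate import quad

def lhs(y):
    # (3/2) * \int_0^y sqrt(y - x) (1 + x)^{-5/2} dx, evaluated numerically
    val, _ = quad(lambda x: np.sqrt(y - x) * (1.0 + x) ** (-2.5), 0.0, y)
    return 1.5 * val

for y in [0.1, 1.0, 5.0, 20.0]:
    print(y, lhs(y), y ** 1.5 / (1.0 + y))   # the two columns agree
```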
For u>0, the conditional probability that L_b>u given Z_u is the probability that Z started from Z_u visits b, and it was already noticed in the proof of Lemma <ref> that this probability is equal to 1-√((b-Z_u)^+/b). Hence, we get ℙ(L_b>u)=𝔼[1-√((b-Z_u)_+/b)], and we integrate with respect to u, using (<ref>), to get 𝔼[L_b]= √(3/2π)∫_0^∞(1-√((b-y)^+/b)) dy/√(y)(1+y). After some straightforward changes of variables, we arrive at 𝔼[L_b]= √(3π/2)(1-√(b)/π∫_ℝx^2/(1+b+x^2)(1+x^2)dx). The integral in the right-hand side is computed via a standard application of the residue theorem, and we get 𝔼[L_b]=√(3π/2)(1-√(1+b)-1/√(b)). As discussed at the beginning of the section, the first moment of ℒ_1,b is equal to [L_b| L_b>0], and we know from Lemma <ref> that (L_b>0)=(r_b<∞)=(1+b)^-1. Hence the first moment of ℒ_1,b is (1+b) [L_b], and we get the first assertion of the proposition when a=1. In the general case, we just have to use a scaling argument, noting that ℒ_(a, b) has the same law as √(a)ℒ_1, b/a. Let us turn to the second assertion. Again, by scaling, it suffices to consider the case a=1. We use the fact that _x(Z_u=0)=exp(-3x/2u^2), which follows from the explicit form of the Laplace transform of Z_u (see e.g. formula (1) in <cit.>). Then (Z_u>0)= 3/2∫_0^∞ (1-exp(-3x/2u^2)) dx/(1+x)^5/2= 9/4u^2∫_0^∞x dx/(1+x)^5/2 + O(u^-3) =3u^-2+O(u^-3), as u→∞. Again using the Laplace transform of Z_u, it is straightforward to verify that (Z_u∈(0,b])=O(u^-3) as u→∞. Since ℙ(Z_u>b)≤ℙ(L_b>u)≤ℙ(Z_u>0), we get that ℙ(L_b>u)=3u^-2+O(u^-3) as u→∞. Finally, the probability that ℒ_(a,b)>u is equal to (1+b)ℙ(L_b>u), which gives the desired asymptotics. § APPENDIX In this appendix, we prove Proposition <ref>. Recall the notation in formula (<ref>), and also set 𝒴_s=∑_i∈ I𝒵_s(ω^i), for every s≤ 0, in such a way that 𝒫_r=𝒴_r-r_* for r∈ (0,r_*]. It is easy to adapt the arguments of <cit.> (see in particular formula (34) in this reference) to get the formula [1_{r_*>r}exp(-λ𝒴_r-r_*)]=3r^-3 ∫_-∞^0 ds [𝒴_s exp(-(λ+3/2r^2) 𝒴_s)]. We already noticed in the proof of Lemma <ref> that 𝒴_0 has density 3/2a^3/2 (a+z)^-5/2. Using this and the special Markov property of the Brownian snake (see e.g. <cit.>), we get, for every μ>0 and s<0, [exp(-μ𝒴_s)] =∫_0^∞ z 3/2 a^3/2(a+z)^-5/2exp(-z_0(1-exp(-μ_s))). According to formula (6) in <cit.>, _0(1-exp(-μ_s))=(μ^-1/2 +√(2/3)|s|)^-2. If we substitute this in the previous display, and then differentiate with respect to μ, we arrive at [𝒴_s exp(-μ𝒴_s)]=3/2 a^3/2∫_0^∞ z z/(a+z)^-5/2(1+|s|√(2μ/3))^-3exp(-z(μ^-1/2 +√(2/3)|s|)^-2). We take μ=λ +3/2r^2 and use formula (<ref>) to obtain [1_{r_*>r}exp(-λ𝒴_r-r_*)] =9/2 r^-3a^3/2∫_0^∞ z z/(a+z)^5/2∫_0^∞ s (1+s√(2μ/3))^-3exp(-z(μ^-1/2 +√(2/3)s)^-2) =9/4√(3/2) r^-3a^3/2 μ^-3/2∫_0^∞ z/(a+z)^5/2 (1-e^-μ z) =3/2√(3/2) r^-3a^3/2 μ^-1/2∫_0^∞ z/(a+z)^3/2 e^-μ z Writing μ^-1/2 = 1/√(π) ∫_0^∞ x/√(x) e^-μ x, we arrive at [1_{r_*>r}exp(-λ𝒴_r-r_*)] =3/2 √(3/2π) r^-3 a^3/2∫_0^∞ y e^-μ y∫_0^y z/(a+z)^3/2 (y-z)^1/2. Finally, a straightforward calculation gives for y>0, ∫_0^y z/(a+z)^3/2 (y-z)^1/2= 2√(y)/√(a) (a+y), so that recalling μ=λ +3/2r^2, we have [1_{r_*>r}exp(-λ𝒴_r-r_*)] =3 √(3/2π) r^-3∫_0^∞ y e^-λ y √(y) a/a+y e^-3y/(2r^2). This completes the proof. 99 ABS L. Addario-Berry, Y. Wen, Joint convergence of random quadrangulations and their cores. Ann. Inst. H. Poincaré Probab. Stat. 53, 1890–1920 (2017) albenque2020scaling M. Albenque, N. Holden, X. Sun, Scaling limit of large triangulations of polygons. Electron. J. Probab. 25, Paper No. 135, 43 pp. 
(2020) ang2022moduli M. Ang, G. Rémy, X. Sun, The moduli of annuli in random conformal geometry. Preprint, arXiv:2203.12398 PercOnRandMapsI O. Angel, N. Curien, Percolations on random maps I: Half-plane models. Ann. Inst. H. Poincaré Probab. Statist. 51, 405–431 (2015) bernardiFusy O. Bernardi, É. Fusy, Bijections for planar maps with boundaries. J. Combin. Theory Ser. A 158, 176–227 (2018) Bet0 J. Bettinelli, Scaling limit of random planar quadrangulations with a boundary. Ann. Inst. H. Poincaré Probab. Stat. 51, 432–477 (2015) BrownianDiskBettineli J. Bettinelli, G. Miermont, Compact Brownian surfaces I. Brownian disks. Probab. Theory Related Fields 167, 555-614 (2017) BrownianSurfacesII J. Bettinelli, G. Miermont, Compact Brownian surfaces II. Orientable surfaces. Preprint arXiv:2212.12511 Bertoin J. Bertoin, Lévy Processes. Cambridge University Press, 1996. BBY D. Burago, Y. Burago, S. Ivanov, A Course in Metric Geometry. Graduate Studies in Mathematics, vol. 33. Amer. Math. Soc., Boston, 2001. peeling N. Curien, Peeling Random Planar Maps. Lecture notes from the 2019 Saint-Flour Probability Summer School. Lecture Notes in Mathematics 2335. Springer, Berlin, 2023. Hull N. Curien, J.-F. Le Gall, The hull process of the Brownian plane. Probab. Theory Related Fields 166, 187–231 (2016) ScalingUIPT N. Curien, J.-F. Le Gall, Scaling limits for the peeling process on random maps. Ann. Inst. H. Poincaré Probab. Stat. 53, 322–357 (2017) GM1 E. Gwynne, J. Miller, Scaling limit of the uniform infinite half-plane quadrangulation in the Gromov-Hausdorff-Prokhorov-uniform topology. Electron. J. Probab. 22, Paper No. 84, 47 pp. (2017) GM0 E. Gwynne, J. Miller, Convergence of the free Boltzmann quadrangulation with simple boundary to the Brownian disk. Ann. Inst. Henri Poincaré Probab. Stat. 55, 551–589 (2019) CSBPRandomSnakes J.-F. Le Gall, Spatial Branching Processes, Random Snakes and Partial Differential Equations. Lectures in Mathematics ETH Zürich. Birkhäuser, Boston, 1999. CactusBound J.-F. Le Gall, Geodesics in large planar maps and in the Brownian map. Acta Mathematica 205, 287–360 (2010) Le_Gall_2013 J.-F. Le Gall, Uniqueness and universality of the Brownian map. Ann. Probab. 41, 2880–2960 (2013) BesselProc J.-F. Le Gall, Bessel processes, the Brownian snake and super-Brownian motion. In: Séminaire de Probabilités XLVII. Lecture Notes Math. 2137. Springer 2015. BrowDiskandtheBrowSnake J.-F. Le Gall, Brownian disks and the Brownian snake. Ann. Inst. H. Poincaré Probab. Stat. 55, 237–313 (2019) Stars J.-F. Le Gall, Geodesic stars in random geometry. Ann. Probab. 50, 1013–1058 (2022) GrowthFrag J.-F. Le Gall, A. Riera, Growth-fragmentation processes in Brownian motion indexed by the Brownian tree. Ann. Probab. 48, 1742–1784 (2020) spine J.-F. Le Gall, A. Riera, Spine representations for non-compact models of random geometry. Probab. Theory Related Fields 181, 571–645 (2021) MarkovSpatial J.-F. Le Gall, A. Riera, Spatial Markov property in Brownian disks. To appear in Ann. Inst. H. Poincaré Probab. Stat., arXiv:2302.01138 miermont2011brownian G. Miermont, The Brownian map is the scaling limit of uniform random plane quadrangulations. Acta Math., 210, 319–401 (2013)
http://arxiv.org/abs/2407.12346v1
20240717064214
Object-Aware Query Perturbation for Cross-Modal Image-Text Retrieval
[ "Naoya Sogi", "Takashi Shibata", "Makoto Terao" ]
cs.CV
[ "cs.CV", "cs.IR", "cs.LG" ]
N. Sogi et al. Visual Intelligence Research Laboratories, NEC Corporation, Kanagawa, Japan naoya-sogi@nec.com, t.shibata@ieee.org, m-terao@nec.com Object-Aware Query Perturbation for Cross-Modal Image-Text Retrieval Naoya Sogi Takashi Shibata Makoto Terao July 22, 2024 ====================================================================== § ABSTRACT The pre-trained vision and language (V&L) models have substantially improved the performance of cross-modal image-text retrieval. In general, however, V&L models have limited retrieval performance for small objects because of the rough alignment between words and the small objects in the image. In contrast, it is known that human cognition is object-centric, and we pay more attention to important objects, even if they are small. To bridge this gap between the human cognition and the V&L model's capability, we propose a cross-modal image-text retrieval framework based on “object-aware query perturbation.” The proposed method generates a key feature subspace of the detected objects and perturbs the corresponding queries using this subspace to improve the object awareness in the image. In our proposed method, object-aware cross-modal image-text retrieval is possible while keeping the rich expressive power and retrieval performance of existing V&L models without additional fine-tuning. Comprehensive experiments on four public datasets show that our method outperforms conventional algorithms. § INTRODUCTION Cross-modal image-text retrieval is one of the mainstream tasks in pattern recognition <cit.> and has various applications including e-commerce <cit.> and video surveillance <cit.>. Recent pre-trained vision-and-language (V&L) models <cit.> have caused a paradigm shift. Those pre-trained models substantially outperform legacy cross-modal image-text retrieval by leveraging massive amounts of training data while equipping advantages such as zero-shotness and generalizability. Nevertheless, those V&L models are not any panacea; those V&L models have limited performance for small objects due to the rough alignment between text and the fine-grained localization of these small targets in the image. An example of retrieval results obtained by a sophisticated V&L model, BLIP2 <cit.>, on the Flickr 30K <cit.> dataset is shown in Fig. <ref>(a). The matching between the target objects and the input query text is weak because the objects, e.g., the person and the rollerblades, are small, resulting in incorrect retrieval results. Although the drawback of the retrieval performance degradation related to those small objects is critical in actual applications, it has been hidden behind the overwhelming performance gains of the recent pre-trained V&L models on the public benchmark datasets. In contrast, humans can effectively understand visual scenes by an ability that lies in their object-centered (or compositional) perception <cit.>. Owing to this human object-centered perception, the human visual function is highly robust to the size of the target object. For example, objects critical to understanding a scene, e.g., a small rescue caller in an image of a disaster scene, will be gazed at regardless of the target object's size. The lack of such object-awareness in V&L models is a major issue, especially for human-centered vision tasks, e.g., image retrieval. Although legacy image retrieval algorithms using object detection have also been proposed <cit.>, these methods cannot inherit the strengths of recent pre-trained V&L models. 
There is a strong demand for a general framework that bridges the gap between human perception and V&L models while inheriting the potential capabilities of pre-trained V&L models, including zero-shot capability and high absolute performance. This paper proposes an object-aware query perturbation for cross-modal retrieval as a solution to the above demand. Our Query-Perturbation (Q-Perturbation) increases the object awareness of V&L models by focusing on the object information of interest even when the objects in an image are relatively small. An example of retrieval results by the proposed method is shown in Fig. <ref>(b). In contrast to the existing methods, our retrieval framework, i.e., a V&L model with Q-Perturbation, can perform accurate retrieval even for images that capture small objects. The core mechanism of Q-Perturbation is to enhance queries with keys corresponding to object regions, at cross-attention modules in a V&L model. Naively enhancing queries in an existing V&L model interferes with the original weights, resulting in poor performance. Our query perturbation avoids this interference by enhancing queries within a subspace constructed from the keys corresponding to target objects. That is, queries are first decomposed with respect to subspaces representing object information and then enhanced using the decomposed components, i.e., the object information retained in the original queries. This process naturally selects the queries to be enhanced because it uses only the object information present in the original queries, i.e., a query is not enhanced if it carries no object information. As a result, our Query Perturbation improves the object awareness of a V&L model while inheriting its impressive performance. Because it is training-free and easy to implement, the proposed method is applicable to a variety of V&L models and avoids both the increased computational cost of data updates and the catastrophic forgetting caused by re-training. Comprehensive experiments on public datasets demonstrate the effectiveness of the proposed method. The contributions of this paper are as follows: 1) We propose an object-aware query perturbation (Q-Perturbation) for cross-modal image-text retrieval. 2) We construct the object-aware retrieval framework by plugging Q-Perturbation into state-of-the-art V&L models, e.g., BLIP2 <cit.>, COCA <cit.>, and InternVL <cit.>. 3) Comprehensive experiments on public data demonstrate the effectiveness of the proposed method. In addition, we propose a new metric that mitigates the dataset bias regarding object size. § RELATED WORKS Cross-Modal Image-Text Retrieval. Cross-modal image-text retrieval is a fundamental task in vision, and many methods have been proposed <cit.>. A standard approach learns an image-text common space from image and text datasets prepared as training data in advance <cit.>. To acquire an accurate image-text common space, several approaches have been introduced to improve the loss function and distance space, such as metric learning <cit.> and probabilistic distribution representation <cit.>. Various extensions have been proposed for fine-grained retrieval <cit.> by introducing object detection <cit.>, graph-based relationships between objects <cit.>, a re-weighting strategy <cit.>, and attention mechanisms <cit.>. These existing approaches for fine-grained retrieval suggest that object awareness is an essential cue for locally detailed cross-modal retrieval. This paper focuses on object awareness for the pre-trained V&L models <cit.>.
We propose a simple yet effective framework that efficiently improves image-text retrieval performance for images containing small but semantically important objects. Pre-trained Vision & Language Model. In recent years, cross-modal image-text retrieval using pre-trained V&L models has emerged as a new paradigm <cit.>. Vision-language pre-training, such as CLIP <cit.>, learns vision-language alignment from large numbers of image-text pairs through a self-supervised task. Before this paradigm, existing image-text retrieval methods mainly focused on training algorithms using medium-sized datasets such as Flickr-30K and COCO. In contrast, recent cross-modal image-text retrieval using pre-trained V&L models outperforms those legacy methods, achieving high zero-shot performance on diverse datasets and enabling open-vocabulary retrieval. In particular, the recently proposed BLIP2 <cit.> achieves overwhelming performance in cross-modal image-text retrieval. However, it has recently been pointed out that V&L models such as CLIP have a weakness in localization, and several simple improvements have been proposed <cit.>. As described later, this weakness is also shared by cross-modal image-text retrieval. The proposed method is a novel framework that overcomes this weakness in cross-modal image-text retrieval while taking advantage of the potential capabilities of existing V&L models. § PERFORMANCE DEGRADATION INDUCED BY SMALL OBJECTS We discuss the performance degradation induced by small objects in a target image. We compared the overall performance of text-image retrieval (called the overall category) on Flickr-30K <cit.> and Flickr-FG <cit.> with the performance on a subset consisting of images with only small detected objects (called the small object category). Specifically, we selected images for the small object category, where the ratio of the largest detected object rectangle's area to the entire image's area is less than 10%. Figure <ref> shows the comparison results. Here, we used Recall@1 as the evaluation metric. It can be seen that as the area of the object detection rectangle becomes relatively smaller, the retrieval becomes more difficult, and the retrieval performance degrades. The degradation is observed not only in Flickr-30K but also in Flickr-FG, where more detailed captions are annotated. Interestingly, we also find that this is a common drawback of recent pre-trained V&L models <cit.>. This drawback is underestimated in the standard evaluation metric, Recall@K, because the number of images belonging to the small object category is small. For example, the small object category accounts for only about 1.5% of all images in Flickr-FG and Flickr-30K. We introduce an object-aware query perturbation to improve the poor performance on images with such small objects. Furthermore, we also discuss the effectiveness of the proposed method using an evaluation metric that accounts for this data bias regarding object size in Sec. <ref>. § METHOD We first give an overview of our framework and the key idea of query perturbation (Q-Perturbation), then describe the details of Q-Perturbation for the Q-Former module in BLIP2 <cit.>. Finally, we explain an extension of Q-Perturbation to other V&L models <cit.>.
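Before describing the method, the small-object selection rule used in the preceding analysis can be made concrete with a minimal sketch. This is only an illustration under the assumption that detected boxes are given as (x, y, w, h) tuples in pixels; the function names are hypothetical and not part of any released code.

def largest_box_ratio(boxes, image_w, image_h):
    # Ratio of the largest detected box area to the whole image area.
    if not boxes:
        return 0.0
    largest_area = max(w * h for (_, _, w, h) in boxes)
    return largest_area / float(image_w * image_h)

def is_small_object_image(boxes, image_w, image_h, threshold=0.10):
    # An image belongs to the small object category if the largest detected
    # object covers less than `threshold` (10% in the analysis above) of the image area.
    return largest_box_ratio(boxes, image_w, image_h) < threshold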
§.§ Overview Our proposed method aims to improve retrieval performance for images containing small objects by extending existing V&L models while inheriting their high expressiveness. In general, V&L models contain a cross-modal projection module that aligns language features with image features. For example, BLIP2 introduces the Q-Former architecture, a transformer-based cross-modal projector that combines image and text features. An overview of our proposed framework is shown in Fig. <ref>. The proposed framework is also an architecture that leverages cross-modal projectors such as Q-Former <cit.> and QLLaMA <cit.>. As in standard cross-modal retrieval, the input text (the retrieval query) is encoded into feature representations using a text encoder. The proposed framework constructs an object-aware cross-modal projector by incorporating localization cues obtained from object detection into the existing cross-modal projector, in addition to image features obtained from existing image encoders. The key is how to incorporate the localization cues into existing cross-modal projection modules. To do this, the proposed method introduces an object-aware query perturbation, called Q-Perturbation, that adaptively adjusts the query according to the sizes of the detected objects and their bounding boxes in the image. §.§ Basic Idea: Object-Aware Query Perturbation A standard approach to constructing a cross-modal projector in a transformer-based module is to introduce cross-attention. For example, in BLIP2, cross-attention is introduced in the Q-Former to integrate image features with queries, including learned queries and text tokens. Our goal is to incorporate object localization cues from the bounding boxes into cross-modal projection modules with minimal modification while taking advantage of the high expressive power of existing V&L models. To this end, the following must be satisfied: - Inheritability: The proposed method must perform object-aware cross-modal projection without significantly destroying the weights and structures already learned, in order to maximize the potential of existing V&L models. - Flexibility: The proposed method must be scalable and flexible regarding the size and number of detected objects. In the proposed method, as shown in Fig. <ref>, Q-Perturbation perturbs the already obtained queries to emphasize the object-region features using object localization cues, i.e., bounding boxes. An object-aware cross-modal projection module can be implemented with minimal modifications by plugging the proposed Q-Perturbation module just before the cross-attention module. In the following, we first describe our Q-Perturbation module for the single-object case and then extend it to multiple objects. §.§ Q-Perturbation Module for Single Objects The proposed Q-Perturbation consists of three components: 1) Object Key Pooling, 2) K-Subspace Construction, and 3) Query Enhancement, as shown in Fig. <ref>. Let Q = {q_i}, K = {k_l}, and V = {v_l} be the set of queries before the cross-attention and the sets of keys and values from the image encoder, respectively. Here, i and l are indices over tokens. 1) Object Key Pooling. First, we select image tokens that overlap the detected bounding box obtained by object detection, as shown in Fig. <ref>. Our pooling step is performed in the same manner as ROI pooling in two-stage object detection <cit.>.
In the following, those selected image tokens are called object image tokens, and the pooled set is denoted as K^obj = {k^obj_j}, where j indexes the pooled tokens. Note that if there are multiple objects in an image, we perform object key pooling for each detected object. 2) K-Subspace Construction. Next, the object-aware key subspace, called the K-subspace, is generated from the pooled object image tokens k^obj_j for each object using Principal Component Analysis (for more details, see supplementary material A.2). The K-subspace for each object is denoted by Φ=[ϕ_1, ϕ_2, ⋯, ϕ_p, ⋯ ], where ϕ_p is the p-th basis vector of the K-subspace. The K-subspace represents essential information about the corresponding object. 3) Query Enhancement. Finally, the set of queries Q = {q_i} obtained so far is enhanced by decomposing each query into the K-subspace Φ and its complementary subspace using the basis vectors ϕ_p of the K-subspace. q_i = q_i^∥ + q_i^⊥,   q_i^∥ = ΦΦ^Tq_i,   q_i^⊥ = ( I - ΦΦ^T) q_i, Here, q_i^∥ and q_i^⊥ are the components of the query in the K-subspace Φ and its complementary subspace, respectively. Our Q-Perturbation generates a perturbed query q̂_i that enhances the components belonging to the K-subspace, i.e., enhances the object information retained in the original query. This enhancement by the Q-Perturbation is given by q̂_i = q_i + αq_i^∥, where α is the parameter that controls the perturbation magnitude. This enhancement can be seen as automatically selecting the queries relevant to an object and enriching them with object information, since the decomposition with the K-subspace extracts only the object information retained in the original queries. Thus, Q-Perturbation enhances the object awareness of V&L models without destroying their weights and structures, resulting in high inheritability. §.§ Extension to Multiple Objects So far, we have discussed Q-Perturbation on a single object. Our proposed Q-Perturbation can be easily extended to the case of multiple objects, i.e., it has high flexibility. For Q-Perturbation with multiple objects, the proposed method generates the object-aware K-subspace for each object, and then decomposition and enhancement for each query are performed based on these obtained K-subspaces. Let Φ_b, q_i,b^∥, and q_i,b^⊥ be the K-subspace corresponding to the b-th object, the query components belonging to the b-th K-subspace, and the query components orthogonal to the b-th K-subspace, respectively. Here, b indexes the detected objects in an image. Formally, Q-Perturbation for multiple K-subspaces can be expressed as follows. q̂_i = q_i + α∑_b w(S_b) q_i,b^∥ = q_i + α∑_b w(S_b) Φ_bΦ_b^Tq_i, where S_b and w(S_b) are the area of the detected bounding box and the weight function for the b-th detected object. In this paper, for simplicity, the weight function is given by w(S̅_b)=β+γS̅_b, where β and γ are adjustment parameters and S̅_b is the normalized area. The normalized area S̅_b=S_b/S_I is calculated by dividing the area of each bounding box by the area S_I of the corresponding image. Note that no exhaustive search was performed for β and γ; values from { 0,±1,±0.5 } were used. §.§ Beyond the Q-Perturbation Module for Q-Former In previous sections, our Q-Perturbation has been described for the case of a Q-Former-based model <cit.>. Finally, we discuss its extension to other V&L models and its potential for tasks other than cross-modal image-text retrieval.
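Before turning to these extensions, the single- and multiple-object procedures above can be summarized in a short NumPy sketch. This is a minimal illustration of the equations in this section, assuming queries and pooled object keys are given as row-major matrices; details such as mean-centering in the PCA step follow common practice and may differ from the authors' implementation, and all names are illustrative.

import numpy as np

def k_subspace(object_keys, contribution_threshold=0.95):
    # PCA basis of the pooled object keys K^obj; keep enough principal
    # components to reach the given cumulative contribution ratio.
    X = object_keys - object_keys.mean(axis=0, keepdims=True)
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    ratios = np.cumsum(S ** 2) / np.sum(S ** 2)
    p = int(np.searchsorted(ratios, contribution_threshold)) + 1
    return Vt[:p].T  # Phi with shape (dim, p)

def q_perturb(Q, object_keys_per_box, box_areas, image_area,
              alpha=1.0, beta=0.0, gamma=1.0):
    # q_hat_i = q_i + alpha * sum_b w(S_b) * Phi_b Phi_b^T q_i,
    # with w(S_b) = beta + gamma * S_b / S_I (normalized box area).
    Q_hat = Q.copy()
    for keys_b, S_b in zip(object_keys_per_box, box_areas):
        Phi = k_subspace(keys_b)
        w = beta + gamma * (S_b / image_area)
        Q_hat += alpha * w * (Q @ Phi) @ Phi.T  # projection onto the K-subspace
    return Q_hat

In the full framework, an operation of this form would be applied to the queries right before each cross-attention of the cross-modal projector, as described above.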
§.§.§ Extension to other pre-trained V&L models. Many pre-trained V&L models for cross-modal image-text retrieval have been proposed, and it is expected that many more will be proposed in the future. Our proposed Q-Perturbation module is a general and versatile approach to perturbing queries in cross-attention using the localization cues from object detection and the obtained key features. In this sense, the proposed Q-Perturbation is applicable to other existing V&L models such as COCA <cit.> and InternVL <cit.>, as shown in Fig. <ref>. As discussed later, the proposed method combined with existing V&L models allows for cross-modal image-text retrieval that is aware of smaller objects. §.§.§ Other Tasks with Our Q-Perturbation. In general, the Q-Former proposed in BLIP2 is used for other tasks, such as image captioning, by attaching LLMs in a later stage. In this sense, our proposed Q-Perturbation can be used to make generated captions more object-aware. This extension suggests new possibilities for using pre-trained V&L models based on human perception. § EXPERIMENTS §.§ Settings §.§.§ Datasets and Experimental Protocols. We use two widely used benchmark datasets, i.e., Flickr-30K <cit.> and MSCOCO <cit.>, and fine-grained extensions of the two datasets, i.e., Flickr-FG and COCO-FG <cit.>. We adopt the commonly used Karpathy split <cit.> for all the datasets. The Flickr-30K dataset has 1,014 validation images and 1,000 test images. The MSCOCO dataset has 5,000 images for validation and test, respectively. Each image has five description texts. Therefore, there are 5,000 (=1,000 images × 5 texts) and 25,000 (=5,000 images × 5 texts) test image-text pairs for Flickr-30K and MSCOCO, respectively. Flickr-FG and COCO-FG are extensions of the above two datasets; they replace the original description texts with fine-grained descriptions. These datasets also have five fine-grained texts per image, as with Flickr-30K and MSCOCO, so there are likewise 5,000 and 25,000 test image-text pairs. We conducted text-to-image (T2I) and image-to-text (I2T) tasks; the T2I task is to find the paired image of an input text, and the I2T task is vice versa. §.§.§ Evaluation Metrics. We use the standard evaluation metric, Recall@K (R@K). R@K is the ratio of correct retrievals to all retrievals, where a retrieval is counted as correct if the paired image or text appears in the top-K results. Following previous studies, K is set to 1, 5, and 10. We also use mean Recall@K (mR@K), which considers the size of objects in each image. mR@K is a harmonic mean of multiple R@Ks. As outlined in Fig. <ref> and elaborated on later, the retrieval difficulty depends on the object size in the image. However, traditional R@K does not consider object size and cannot correctly assess this challenge. To alleviate this problem, we propose to use an object size-aware evaluation metric, mean R@K, which is a harmonic mean of R@Ks on subsets split by object size. Here, each R@K is calculated on a subset of all text-image pairs, where each subset is determined by the largest normalized area S̅ (please see Sec. <ref>) of detected objects in each image. In this paper, we generate ten subsets by splitting the largest normalized areas into 10% bins and report the harmonic mean (mR@K) of the ten R@Ks calculated on these subsets. §.§.§ Implementation Details. We use bounding boxes given by Flickr-Entities <cit.> and COCO-Entities <cit.>.
Flickr-Entities and COCO-Entities have bounding boxes corresponding to each text. Flickr-Entities provides boxes annotated manually by human annotators, and COCO-Entities generates boxes semi-automatically, i.e., boxes are detected by Faster R-CNN <cit.> and matched to nouns in each text by manually defined rules. We also use boxes that are detected automatically by CO-DINO <cit.>. Note that object detection and image feature extraction can be carried out in advance for the T2I task, as they do not depend on the retrieval input text. To split datasets and calculate mR@K, we use the Flickr-Entities bounding boxes for Flickr-30K and Flickr-FG. For COCO and COCO-FG, we use bounding boxes detected by CO-DINO instead of COCO-Entities, as COCO-Entities is built with the traditional detector, Faster R-CNN. We applied the proposed Q-Perturbation to all cross-attentions in the Q-Former of BLIP2. We used the Eva-CLIP-based <cit.> BLIP2 fine-tuned on COCO validation data <cit.>. Following the previous study <cit.>, we apply the re-ranking technique; we first select 64 candidates by image-text contrastive (ITC) scores and then re-rank them by image-text matching (ITM). We also evaluated our method combined with the COCA and InternVL-G models <cit.>. We used the ViT-L/14-based <cit.> COCA model published in the OpenCLIP repository <cit.>, and the InternVL-14B-224px model <cit.>. Our Q-Perturbation is applied to the last cross-attention for COCA and to all cross-attentions in the QLLaMA for InternVL-G (see supplementary material A for more details). Q-Perturbation has three hyperparameters: 1) the perturbation intensity α, 2) the weight function w(S_b), and 3) the dimension of the object-aware subspaces. These parameters were tuned by grid search on validation data using mR@1. The intensity α was selected from 2, 4, 6, 8 and 10 for BLIP2 and 0.2, 0.4, 0.6, 0.8 and 1 for COCA and InternVL. The weight function was selected from five functions: constant value (=1), S̅_b, S̅_b-0.5, 1-S̅_b, 0.5-S̅_b. The dimensions of the subspaces were determined using the contribution ratio; we tuned the threshold of the contribution ratio over 0.85, 0.9, 0.95, and 0.99. §.§ Comparative results §.§.§ Results for small objects. Table <ref> shows the evaluation results for the small object category and the overall category of each dataset. The proposed Q-Perturbation improves retrieval performance for small objects, which in turn improves overall performance. Conventional methods have difficulty accounting for small objects in the image; the results suggest that our method mitigates this difficulty by enhancing their object awareness. Figure <ref> shows examples of image retrieval results by BLIP2-ITC and BLIP2-ITC with Q-Perturbation. We can see that our method selects the correct image at a higher rank than the original BLIP2. This is mainly due to the advantage of our object-aware mechanism; the proposed method efficiently utilizes information from small objects, such as “an older man” and “a performer” in Fig. <ref>(a) and (b), respectively. Table <ref> shows comparative results with additional evaluation metrics. We can confirm the effectiveness of our method again, as it shows competitive results. Furthermore, our method stably improves performance even with noisy bounding boxes obtained automatically by a detector. This property is helpful for real-world applications. As we discussed in Sec.
<ref>, our Q-Perturbation is applicable to other tasks, such as image captioning, as our method is plugged into a pre-trained V&L model. We carried out image captioning to visualize the effect of Q-Perturbation, as shown in Fig. <ref>. It can be seen that the output captions become more object-aware than the original BLIP2 results by emphasizing the corresponding objects, such as “glass”, with our Q-Perturbation. This object-aware property of our method helps in improving the retrieval performance of a V&L model. §.§.§ Overall results. We then discuss performance comparisons between our method and various cross-modal image-text retrieval methods. - Baselines: We compared our method with 1) object-aware models: SCAN <cit.>, IMRAM <cit.>, SHAN <cit.>, and NAAF <cit.>, and 2) pre-trained V&L models: CLIP <cit.>, ALBEF <cit.>, UNITER <cit.>, BEIT-3 <cit.>, COCA <cit.>, InternVL <cit.>, and BLIP2 <cit.>. - Results: Table <ref> shows comparative results with various conventional methods. Our method achieves competitive results compared with state-of-the-art methods. These results suggest that our Q-Perturbation enhances the object awareness of the pre-trained V&L model while inheriting its impressive performance (for more comparisons with simple baselines, see supplementary material B). §.§ Results with Other State-of-The-Art V&L Models Our proposed Q-Perturbation can be plugged into any model that contains cross-attention layers. To confirm the versatility of our method, we apply Q-Perturbation to two V&L models, COCA and InternVL. Table <ref> shows the comparative results. We can see that Q-Perturbation is highly versatile, as it improves retrieval performance. COCA w/ Q-pert. shows only a slight improvement. This is because the impact of the proposed method is small when a cross-attention layer is placed only at the end of the vision encoder. The input features to QLLaMA, the cross-modal projector of InternVL, and its cross-attentions may focus on the global context, as QLLaMA is preceded by the large vision encoder (a 6B-parameter ViT model). Even in such a difficult situation, the proposed method could enhance object awareness and improve performance. §.§ Sensitivity to hyperparameters We analyze the sensitivity of the performance to the hyperparameters, including the weight function, the scale factor, and the dimension of the subspaces, i.e., the threshold of the contribution ratio for PCA. In this experiment, we use bounding boxes from Flickr-Entities. Table <ref> shows the evaluation results when varying the hyperparameters. The proposed method achieves high performance when the parameters are selected adequately, while its sensitivity to the hyperparameters remains low. In this paper, we selected the hyperparameters from manually set candidate values using validation data. Learning the hyperparameters would be a promising future direction if training data were available. § LIMITATIONS The key idea behind our method is to enhance object information in a V&L model's cross-attention features following an image encoder. It would, therefore, be problematic if object information had disappeared entirely in the image encoder. This paper uses all bounding boxes in each image. It is a promising future direction to filter bounding boxes or adjust the scale α according to the input text to further improve retrieval performance. Note that this approach increases the computational cost at the retrieval stage, as we need to extract bounding boxes and run a neural network, including cross-attentions, after receiving the input.
This direction would be a trade-off between scalability and retrieval performance. § CONCLUSIONS In this paper, we proposed an object-aware query perturbation for cross-modal image-text retrieval. The key is to use query perturbation to focus on small target objects by enhancing the query weights with keys corresponding to object regions in the cross-attention module. The proposed method is applicable to various V&L models based on cross-modal projection, including COCA, BLIP2, and InternVL. Comprehensive experiments on four public datasets demonstrate the effectiveness of the proposed method.
http://arxiv.org/abs/2407.13622v1
20240718155804
Misspecified $Q$-Learning with Sparse Linear Function Approximation: Tight Bounds on Approximation Error
[ "Ally Yalei Du", "Lin F. Yang", "Ruosong Wang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Misspecified Q-Learning with Sparse Linear Function Approximation: Tight Bounds on Approximation Error Ally Yalei Du Lin F. Yang Ruosong Wang ================================================================================================================================= § ABSTRACT The recent work by <cit.> showed that for misspecified sparse linear bandits, one can obtain an O(ϵ)-optimal policy using a polynomial number of samples when the sparsity is a constant, where ϵ is the misspecification error. This result is in sharp contrast to misspecified linear bandits without sparsity, which require an exponential number of samples to get the same guarantee. In order to study whether the analogous result is possible in the reinforcement learning setting, we consider the following problem: assuming the optimal Q-function is a d-dimensional linear function with sparsity k and misspecification error ϵ, whether we can obtain an O(ϵ)-optimal policy using a number of samples polynomial in the feature dimension d. We first demonstrate why the standard approach based on Bellman backup or the existing optimistic value function elimination approach such as OLIVE <cit.> achieves suboptimal guarantees for this problem. We then design a novel elimination-based algorithm to show one can obtain an O(Hϵ)-optimal policy with sample complexity polynomial in the feature dimension d and planning horizon H. Lastly, we complement our upper bound with an Ω(Hϵ) suboptimality lower bound, giving a complete picture of this problem. § INTRODUCTION Bandit and reinforcement learning (RL) problems in real-world applications, such as autonomous driving <cit.>, healthcare <cit.>, recommendation systems <cit.>, and advertising <cit.>, face challenges due to the vast state-action space. To tackle this, function approximation frameworks, such as using linear functions or neural networks, have been introduced to approximate the value functions or policies. However, real-world complexities often mean that function approximation is agnostic; the function class captures only an approximate version of the optimal value function, and the misspecification error remains unknown. A fundamental problem is understanding the impact of agnostic misspecification errors in RL.
Prior works show that even minor misspecifications can lead to exponential (in dimension) sample complexity in the linear bandit setting <cit.> if the goal is to learn a policy within the misspecification error. That is, finding an O(ϵ)-optimal action necessitates at least Ω(exp(d)) queries (or samples) to the environment, where ϵ is the misspecification error. Recently, <cit.> demonstrated that, by leveraging the sparsity structure of ground-truth parameters, one can overcome the exponential sample barrier in the linear bandit setting. They showed that with sparsity k in the ground-truth parameters, it is possible to learn an O(ϵ)-optimal action with only O(( d/ϵ)^k ) samples. In particular, when k is a constant, their algorithm achieves a polynomial sample complexity guarantee. A natural question is whether we can obtain a similar sample complexity guarantee in the RL setting. This motivates us to consider a more general question: Given that the Q^* function is a d-dimensional linear function with sparsity k and misspecification error ϵ, can we learn an O(ϵ)-optimal policy using poly(d,1/ϵ) samples, when k is a constant? It turns out that by studying this question, we obtain a series of surprising results which cannot be explained by existing RL theories. §.§ Our Contributions In this paper, we propose an RL algorithm that can handle linear function approximation with sparsity structures and misspecification errors. We also show that the suboptimality achieved by our algorithm is near-optimal, by proving information-theoretic hardness results. Here we give a more detailed description of our technical contributions. Our Assumption. Throughout this paper, we assume the RL algorithm has access to a feature map where, for each state-action pair (s,a), we have the feature ϕ(s,a) with ‖ϕ(s,a)‖≤ 1. We make the following assumption, which states that there exists a sequence of parameters θ^* = (θ^*_0, …, θ^*_H-1), where each θ^*_h ∈𝕊^d-1 is k-sparse, that approximates the optimal Q-function up to an error of ϵ. There exists θ^* = (θ^*_0, …, θ^*_H-1), where each θ^*_h∈𝕊^d-1 is k-sparse, such that |⟨ϕ(s,a), θ^*_h ⟩ - Q^*(s, a)| ≤ϵ for all h ∈ [H], all states s in level h, and all actions a in the action space. We can approximate θ^* using an ϵ-net of the sphere 𝕊^k-1 and the set of all k-sized subsets of [d]. Therefore, when k is a constant, we may assume that each θ_h^* lies in a set with size polynomial in d. Then, a natural idea is to enumerate all possible policies induced by the parameters in that finite set, and choose the one with the highest cumulative reward. However, although the number of parameter candidates at each individual level has polynomial size, the total number of induced policies would be exponential in H, and the sample complexity of such an approach would also be exponential in H. The Level-by-level Approach. Note that when the horizon length H = 1, the problem under consideration is equivalent to a bandit problem, which can be solved by previous approaches <cit.>. For the RL setting, a natural idea is to first apply the bandit algorithm in <cit.> on the last level, and then apply the same bandit algorithm on the second-to-last level based on previous results and Bellman backups, and so on. However, we note that to employ such an approach, the bandit algorithm needs to provide a “for-all” guarantee, i.e., finding a parameter that approximates the rewards of all arms, instead of just finding a near-optimal arm.
On the other hand, existing bandit algorithms will amplify the approximation error of the input parameters by a constant factor, in order to provide a for-all guarantee. Concretely, existing bandit algorithms can only find a parameter θ such that θ approximates the rewards of all arms within an error of 2ϵ. As we have H levels in the RL setting, the final error would be exponential in H, and therefore, such a level-by-level approach would result in a suboptimality that is exponential in H. One may ask if we can further improve existing bandit algorithms, so that we can find a parameter θ that approximates the rewards of all arms within an error of ϵ plus a statistical error that can be made arbitrarily small, instead of 2ϵ. The following theorem shows that this is information-theoretically impossible unless one pays a sample complexity proportional to the size of the action space. Under Assumption <ref> with d=k=1, any bandit algorithm that returns an estimate r̂ such that |r̂(a) - r(a) | < 2ϵ for all arms a with probability at least 0.95 requires at least 0.9n samples, where n is the total number of arms. Therefore, amplifying the approximation error by a factor of 2 is not an artifact of existing bandit algorithms; instead, it is information-theoretically unavoidable. Geometric error amplification is a common issue in the design of RL algorithms with linear function approximation <cit.>. It is interesting (and also surprising) that such an issue arises even when the function class has sparsity structures. Optimistic Value Function Elimination. Another approach for the design of RL algorithms is based on optimistic value function elimination. Such an approach was proposed by <cit.> and was then generalized to broader settings <cit.>. At each iteration of the algorithm, we pick the value function in the hypothesis class with the maximum value. We then use the induced policy to collect a dataset, based on which we eliminate a set of value functions from the hypothesis class and proceed to the next iteration. When applied to our setting, existing algorithms and analyses achieve a suboptimality that depends on the size of the parameter class, which could be prohibitively large. Here, we use the result in <cit.> as an example. The suboptimality of their algorithm is H √(M)ϵ, where M is the Bellman rank of the problem. For our setting, we can show that there exists an MDP instance and a feature map that satisfies Assumption <ref> whose induced Bellman rank is large. There exists an MDP instance ℳ = (𝒮, 𝒜, H, P, r) with |𝒜| = 2, H = log d, |𝒮| = d-1, with a d-dimensional feature map ϕ satisfying Assumption <ref> with k = 1, such that its Bellman rank is d. Given Proposition <ref>, if one naïvely applies the algorithm in <cit.>, the suboptimality would be O(H √(d)ϵ) in our setting, which necessitates a new algorithm and analysis. In Section <ref>, we design a new RL algorithm whose performance is summarized in the following theorem. Under Assumption <ref>, with probability at least 1- δ, Algorithm <ref> returns a policy with suboptimality at most (4ϵ_stat + 2ϵ_net + 2ϵ)H by taking O(k d^k H^3 ·ln(dH/(ϵ_netδ)) ·ϵ_net^-kϵ_stat^-2) samples. Here ϵ_stat is the statistical error and ϵ_net is the resolution of the ϵ-net over parameters used by the algorithm. Compared to the existing approaches, Theorem <ref> achieves a much stronger suboptimality guarantee. Later, we will also show that such a guarantee is near-optimal. Although based on the same idea of optimistic value function elimination, our proposed algorithm differs significantly from existing approaches <cit.> to exploit the sparsity structure.
While existing approaches based on optimistic value function elimination try to find a sequence of parameters that maximizes the value of the initial state, our new algorithm selects parameters that maximize the empirical value under the roll-in distribution at every level. Also, existing algorithms eliminate a large set of parameters in each iteration, while our algorithm only eliminates the parameters selected during the current iteration. These two modifications are crucial for obtaining a smaller suboptimality guarantee, smaller sample complexity, and shorter running time. In existing algorithms, parameters at different levels are interdependent, i.e., the choice of parameter at level h affects the choice of parameter at level h+1. Our new algorithm simplifies this by maintaining a parameter set for each level, so each level operates independently. Further, we can falsify and eliminate any parameter showing large Bellman error at any level h, since otherwise we would have found another parameter with a larger induced value function at level h + 1 to make the error small. Consequently, since we eliminate at least one parameter at each iteration, we obtain fewer iterations and improved sample complexity. The Hardness Result. One may wonder if the suboptimality guarantee can be further improved. In Section <ref>, we show that the suboptimality guarantee of Theorem <ref> is near-optimal. We first consider a weaker setting where the algorithm is not allowed to take samples, and the function class contains a single sequence of functions. I.e., we are given a function Q̂ : 𝒮×𝒜→ℝ such that |Q̂(s, a) - Q^*(s, a)| ≤ϵ for all (s, a) ∈𝒮×𝒜. We show that for this weaker setting, simply choosing the greedy policy with respect to Q̂, which achieves a suboptimality guarantee of O(H ϵ), is actually optimal. To prove this, we construct a hard instance based on a binary tree. Roughly speaking, the optimal action for each level is chosen uniformly at random from two actions a_1 and a_2. At all levels, the reward is ϵ if the optimal action is chosen, and is 0 otherwise. For this instance, there exists a fixed Q̂ that provides a good approximation to the optimal Q-function, regardless of the choice of the optimal actions. Therefore, Q̂ reveals no information about the optimal actions, and the suboptimality of the returned policy would be at least Ω(Hϵ). The formal construction and analysis will be given in Section <ref>. When the algorithm is allowed to take samples, we show that in order to achieve a suboptimality guarantee of H / T ·ϵ, any algorithm requires exp(Ω(T)) samples, even when Assumption <ref> is satisfied with d = k = 1. Therefore, for RL algorithms with polynomial sample complexity, the suboptimality guarantee of Theorem <ref> is tight up to log factors. To prove the above claim, we still consider the setting where d = k = 1, i.e., a good approximation to the Q-function is given to the algorithm. We also use a more complicated binary tree instance, where we divide all the H levels into H / T blocks, each containing T levels. For each block, only one state-action pair at the last level has a reward of ϵ, and all other state-action pairs in the block have a reward of 0. Therefore, the value of the optimal policy would be H / T ·ϵ since there are H / T blocks in total. We further assume that there is a fixed function Q̂, which provides a good approximation to the optimal Q-function universally for all instances under consideration.
Since Q̂ reveals no information about the state-action pair with ϵ reward in each block, for an RL algorithm to return a policy with a non-zero value, it must search for a state-action pair with non-zero reward in a brute-force manner, which inevitably incurs a sample complexity of exp(Ω(T)), since each block contains T levels and exp(Ω(T)) state-action pairs at its last level. The formal construction and analysis will be given in Section <ref>. §.§ Related Work A series of studies have delved into MDPs that can be represented by linear functions of predetermined feature mappings, achieving sample complexity or regret that depends on the feature mapping's dimension. This includes linear MDPs, studied in <cit.>, where both transition probabilities and rewards are linear functions of feature mappings on state-action pairs. <cit.> examines MDPs with low inherent Bellman error, indicating value functions that are almost linear with respect to these mappings. Another focus is on linear mixture MDPs <cit.>, characterized by transition probabilities that combine several basis kernels linearly. While these studies often assume known feature vectors, <cit.> investigates a more challenging scenario where both features and parameters of the linear model are unknown. The literature has also witnessed a substantial surge of research in understanding how general function approximation can be applied efficiently in the reinforcement learning setting <cit.>. To obtain good sample, error, or regret bounds, these approaches typically impose benign structures on values, models, or policies, along with benign misspecification. Amongst these works, <cit.> is particularly related to our work as their elimination-based algorithm, OLIVE, can be directly applied to our setting. However, as mentioned in Section <ref>, the suboptimality guarantee of their algorithm is significantly worse than our result. In another line of works, <cit.> specifically focuses on understanding misspecification in bandit and RL scenarios. <cit.> illustrated that to find an O(ϵ)-optimal policy in reinforcement learning with ϵ-misspecified linear features, an agent must sample an exponential (in d) number of trajectories, applicable to both value-based and model-based learning. Relaxing this goal, <cit.> indicated that poly(d/ϵ) samples could suffice to secure an O(ϵ√(d))-optimal policy in a simulator model setting of RL, though achieving a policy with an error better than O(ϵ√(d)) would still require an exponential sample size. Recently, <cit.> introduced a solution, showing that incorporating structural information like sparsity in the bandit instance could address this issue, making it feasible to attain O(ϵ) suboptimality with O((d/ϵ)^k) sample complexity, which is acceptable when the sparsity k is small. Another recent independent work <cit.> also obtains a suboptimality guarantee of O(Hϵ). However, their result depends on a coverability assumption and uses a different technique called disagreement-based regression (DBR), which is distinct from our assumption and techniques. § PRELIMINARIES Throughout the paper, for a given positive integer n, we use [n] to denote the set {0, 1, 2, …, n - 1}. In addition, f(n) = O(g(n)) denotes that there exists a constant c > 0 such that |f(n)| ≤ c|g(n)|, and f(n) = Ω(g(n)) denotes that there exists a constant c > 0 such that |f(n)| ≥ c|g(n)|.
§.§ Reinforcement Learning Let ℳ = {𝒮, 𝒜, H, P, r} be a Markov Decision Process (MDP), where 𝒮 is the state space, 𝒜 is the action space, H ∈ℤ_+ is the planning horizon, P: 𝒮×𝒜→Δ(𝒮) is the transition kernel which takes a state-action pair as input and returns a distribution over states, and r: 𝒮×𝒜→Δ([0,1]) is the reward distribution. We assume ∑_h ∈ [H] r_h ∈ [0, 1] almost surely. For simplicity, throughout this paper, we assume the initial state s_0 is deterministic. To streamline our analysis, for each h ∈ [H], we use 𝒮_h ⊆𝒮 to denote the set of states at level h, and assume the sets 𝒮_h are pairwise disjoint. A policy π : 𝒮→𝒜 chooses an action for each state, and induces a trajectory denoted by (s_0, a_0, r_0, …, s_H-1, a_H-1, r_H-1), where s_h+1∼ P(s_h, a_h), a_h = π(s_h), and r_h ∼ r(s_h, a_h) for all h ∈ [H]. Given a policy π and h ∈ [H], for a state-action pair (s,a) ∈𝒮_h ×𝒜, the Q-function and value function are defined as Q^π(s,a) = 𝔼[∑_h^'=h^H-1 r(s_h^', a_h^') | s_h = s, a_h = a, π] and V^π(s) = 𝔼[ ∑_h^'=h^H-1 r(s_h^', a_h^') | s_h = s, π]. We use V^π to denote the value of the policy π, i.e., V^π = V^π(s_0). We use π^* to denote the optimal policy. For simplicity, for a state s ∈𝒮, we denote V^* (s) = V^π^*(s), and for a state-action pair (s, a) ∈𝒮×𝒜, we denote Q^*(s, a) = Q^π^*(s, a). The suboptimality of a policy π is defined as the difference between the value of π^* and that of π, i.e., V^* - V^π. For any sequence of k-sparse parameters θ = (θ_0, …, θ_H-1), we define π_θ to be the greedy strategy based on θ. In other words, for each h ∈ [H] and a state s ∈𝒮_h, π_θ (s) = argmax_a ∈𝒜⟨ϕ(s,a), θ_h ⟩ . For each h ∈ [H], a parameter θ_h, and a state s ∈𝒮_h, we also write V_θ_h(s) = max_a ∈𝒜⟨ϕ(s,a), θ_h ⟩. We will prove lower bounds for deterministic systems, i.e., MDPs with deterministic transitions P and deterministic rewards r. In this setting, P and r can be regarded as functions rather than distributions. Since deterministic systems can be considered as a special case of general stochastic MDPs, our lower bounds still hold for general MDPs. Interacting with an MDP. An RL algorithm takes the feature function ϕ and the sparsity k as input, and interacts with the underlying MDP by sampling trajectories. To be more specific, at each round, the RL algorithm decides on a policy π and receives a trajectory (s_0, a_0, r_0, …, s_H-1, a_H-1, r_H-1) as feedback. Here one trajectory corresponds to H samples. We define the total number of samples required by an RL algorithm as its sample complexity. Our goal is to design an algorithm that returns a near-optimal policy while minimizing its sample complexity. The Bandit Setting. In this paper, we also consider the bandit setting, which is equivalent to an MDP with H = 1. Let 𝒜 be the action space, and r: 𝒜→Δ([0,1]) be the reward distribution. At round t, the algorithm chooses an action a_t ∈𝒜 and receives a reward r_t ∼ r(a_t). In this case, Assumption <ref> asserts that there exists θ^* such that |⟨ϕ(a), θ^* ⟩ - 𝔼[r(a)]| ≤ϵ for all a ∈𝒜. § HARDNESS RESULTS We prove our hardness results. In Section <ref>, we prove that the suboptimality of any RL algorithm is Ω(Hϵ) if the algorithm is not allowed to take samples. This serves as a warmup for the more complicated construction in Section <ref>, where we show that for any T satisfying 1 ≤ 2T ≤ H, any RL algorithm requires exp(Ω(T)) samples in order to achieve a suboptimality of Ω(H / T ·ϵ).
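Before turning to these constructions, the greedy policy π_θ and the value V_θ_h from the preliminaries can be made concrete with a minimal sketch. This assumes a feature map phi(s, a) that returns a NumPy vector; all names are illustrative and not part of the paper's algorithm.

import numpy as np

def greedy_policy(phi, theta, actions):
    # pi_theta(s) = argmax_a <phi(s, a), theta_h> for a state s at level h.
    def pi(s, h):
        scores = [np.dot(phi(s, a), theta[h]) for a in actions]
        return actions[int(np.argmax(scores))]
    return pi

def v_theta(phi, theta_h, s, actions):
    # V_{theta_h}(s) = max_a <phi(s, a), theta_h>.
    return max(np.dot(phi(s, a), theta_h) for a in actions)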
§.§ Warmup: Hardness Result for RL without Samples We prove that the suboptimality of any RL algorithm without samples is Ω(Hϵ). Given an MDP instance satisfying Assumption <ref>, the suboptimality of the policy returned by any RL algorithm is Ω(Hϵ) with probability 0.99 if the algorithm is not allowed to take samples. This holds even when the dimension and sparsity satisfy d = k = 1 and the underlying MDP is a deterministic system. The formal proof of Theorem <ref> is given in Section <ref> in the Appendix. Below we give the construction of the hard instance together with an overview of the hardness proof. The Hard Instance. Our hardness result is based on a binary tree instance. There are H levels of states, and level h ∈ [H] contains 2^h distinct states. Thus we have 2^H - 1 states in total. We use s_0, ..., s_2^H - 2 to denote all the states, where s_0 is the unique state at level 0, and s_1, s_2 are the states at level 1, etc. Equivalently, 𝒮_h = {s_2^h-1, …, s_2^h+1-2}. The action space 𝒜 contains two actions, a_1 and a_2. For each h ∈ [H - 1] and a state s_i ∈𝒮_h, we have P(s_i, a_1) = s_2i+1 and P(s_i, a_2) = s_2i+2. For each h ∈ [H], there exists an action a_h^* ∈{a_1, a_2} such that π^*(s) = a_h^* for all s ∈𝒮_h. Based on a_0^*, a_1^*, …, a_H - 1^*, for a state s ∈𝒮_h, we define the reward function as r(s,a) = ϵ if a = a_h^* and r(s,a) = 0 otherwise. The corresponding Q-function is Q^*(s, a)= (H-h)ϵ if a = a_h^* and Q^*(s,a) = (H-h-1)ϵ otherwise. Now we define the 1-dimensional feature function ϕ. For each h ∈ [H] and all (s, a) ∈𝒮_h ×𝒜, ϕ(s, a) = (H - h-1)ϵ. Clearly, by taking θ^* = 1, Assumption <ref> is satisfied for our ϕ. This finishes the construction of our hard instance. The Lower Bound. Since the RL algorithm is not allowed to take samples, the only information that the algorithm receives is the feature function ϕ. However, ϕ is always the same no matter how we set a_0^*, a_1^*, …, a_H - 1^*, which means the RL algorithm can only output a fixed policy. On the other hand, if a_h^* is drawn uniformly at random from {a_1, a_2}, for any fixed policy π, its expected suboptimality will be H ϵ / 2, which proves Theorem <ref>. Our formal proof in Section <ref> of the Supplementary Material is based on Yao's minimax principle in order to cope with randomized algorithms. §.§ Hardness Result for RL with Samples In this section, we show that for any 1 ≤ 2T ≤ H, any RL algorithm requires exp(Ω(T)) samples in order to achieve a suboptimality of Ω(H / T ·ϵ). Given an RL problem instance satisfying <Ref> and 1 ≤ 2T ≤ H, any algorithm that returns a policy with suboptimality less than H/(2T) ·ϵ with probability at least 0.9 needs at least 0.1 · T · 2^T samples. In the remaining part of this section, we give an overview of the proof of Theorem <ref>. We first define the MULTI-INDEX-QUERY problem, which can be seen as a direct product version of the INDEX-QUERY problem introduced in <cit.>. (MULTI-INDEX-QUERY) In the m-INDQ_n problem, we have a sequence of m indices (i_0^*,i_1^*, …, i_m-1^*) ∈ [n]^m. In each round, the algorithm guesses a pair (j, i) ∈ [m] × [n] and queries whether i= i_j^*. The goal is to output (j, i^*_j) for some j ∈ [m], using as few queries as possible. (δ-correct) For δ∈ (0,1), we say a randomized algorithm A is δ-correct for m-INDQ_n if for any i^* = {i^*_j}_j ∈ [m], with probability at least 1-δ, A outputs (j, i_j^*) for some j. We first prove a query complexity lower bound for solving m-INDQ_n.
Any 0.1-correct algorithm that solves m-INDQ_n requires at least 0.9n queries. Similar to the proof of the query complexity lower bound of the INDEX-QUERY problem <cit.>, our proof is based on Yao's minimax principle <cit.>. See Section <ref> for the full proof. Now we give the construction of our hard instance, together with the high-level intuition of our hardness proof. For simplicity, here we assume T is an integer that divides H. The Hard Instance. Again, our hardness result is based on a binary tree instance. The state space, action space, and the transition kernel of our hard instance are exactly the same as the instance in Section <ref>. Moreover, similar to the instance in Section <ref>, for each h ∈ [H], there exists an action a_h^* ∈{a_1, a_2} such that π^*(s) = a_h^* for all s ∈𝒮_h. To define the reward function r, we first define an operator P^*, which can be seen as applying the transition kernel for multiple steps by following the optimal policy. For some q ∈ [H / T], a state s ∈𝒮_qT, and an integer t ∈[T], define P^*(s, t) = s if t = 0 and P^*(s, t) = P(P^*(s, t - 1), a_qT + t - 1^*) otherwise. The reward function r(s, a) is then defined to be ϵ if s = P^*(s', T - 1) for some s' ∈𝒮_qT where q ∈[H / T] and a = a_qT + T - 1^*. For all other (s, a) ∈𝒮×𝒜, we define r(s, a) = 0. Accordingly, for each (q, t) ∈ [H / T] × [T], s ∈𝒮_qT + t, and a ∈𝒜, we have Q^*(s, a) = (H / T - q)ϵ if s = P^*(s', t) for some s' ∈𝒮_qT and a = a_qT + t^*. For all other (s, a) ∈𝒮×𝒜, we have Q^*(s, a) = (H / T - q - 1)ϵ. This also implies that the value of the optimal policy is H / T ·ϵ. We define the 1-dimensional feature function ϕ such that, for each (q, t) ∈ [H / T] × [T], s ∈𝒮_qT + t and a ∈𝒜, ϕ(s, a) = (H/T - q)ϵ. Clearly, Assumption <ref> is satisfied when taking θ^* = 1. This finishes the construction of our hard instance. An illustration is given in <Ref>. The Lower Bound. Now we show that for our hard instance, if there is an RL algorithm that returns a policy with suboptimality less than H / T ·ϵ, then there is an algorithm that solves m-INDQ_n with n = 2^T and m = H / T. Therefore, the correctness of Theorem <ref> is implied by Lemma <ref>. We first note that there exists a bijection between {a_1, a_2}^T and [2^T]. We use g : [2^T] →{a_1, a_2}^T to denote such a bijection. Given an instance of m-INDQ_n with n = 2^T and m = H / T, for each q ∈ [H / T], we set (a_qT^*, a_qT + 1^*, …, a_(q + 1)T - 1^*) = g(i_q^*), where (i_0^*, i_1^*, i_2^*, …, i_H / T - 1^*) are the target indices in the instance of m-INDQ_n. Each time the RL algorithm samples a trajectory (s_0, a_0, r_0, …, s_H-1, a_H-1, r_H-1), we make H/T sequential queries (0, i_0), (1, i_1), …, (H / T - 1, i_H / T - 1) to m-INDQ_n, where for each q ∈ [H / T], i_q is the unique integer in [2^T] with g(i_q) = (a_qT, a_qT + 1, …, a_(q + 1)T - 1). For each h ∈ [H], we have r_h = ϵ if h = (q+1)T - 1 and i_q = i_q^* for some q ∈[H / T]. Otherwise, we have r_h = 0. Suppose there is an RL algorithm that returns a policy π with suboptimality less than H / T ·ϵ. Since the value of the optimal policy is H / T ·ϵ, we must have r_h = ϵ for some h ∈ [H], where (s_0, a_0, r_0, …, s_H-1, a_H-1, r_H-1) is the trajectory obtained by following the policy π. This implies the existence of q ∈ [H / T] with g(i_q^*) = (a_qT, a_qT + 1, …, a_(q + 1)T - 1). Therefore, if there is an RL algorithm that returns a policy with suboptimality less than H / T ·ϵ for our hard instance, then there is an algorithm for solving m-INDQ_n with n = 2^T and m = H / T.
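The reduction can be made explicit with a small sketch. The binary-expansion choice of g below is one concrete bijection between [2^T] and {a_1, a_2}^T (the argument only requires that some bijection exists), and the helper names are illustrative.

def g(index, T):
    # Map an index in [2^T] to a length-T action sequence over {a1, a2}
    # via its binary expansion.
    return ['a2' if (index >> t) & 1 else 'a1' for t in range(T)]

def g_inv(action_block):
    # Recover the index in [2^T] from a block of T actions.
    return sum(1 << t for t, a in enumerate(action_block) if a == 'a2')

def guesses_from_trajectory(actions, T):
    # Turn one sampled trajectory (a_0, ..., a_{H-1}) into the H/T queries
    # (q, i_q) made to m-INDQ_n, one per block of T consecutive actions.
    return [(q, g_inv(actions[q * T:(q + 1) * T]))
            for q in range(len(actions) // T)]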
§ MAIN ALGORITHM Elimination Algorithm for Finding the Optimal Hypotheses Overview. Here we give an overview of the design of Algorithm <ref>. First, we approximate all candidate parameters θ with a finite set by creating a maximal ϵ_net/2-separated subset of the Euclidean sphere 𝕊^k-1, denoted by 𝒩^k, and the set of all k-sized subsets of [d]. Then, for each h ∈ [H], we maintain a set of parameter candidates 𝒫_h. Initially, 𝒫_h is set to be all parameters approximated by 𝒩^k and the k-sized subsets of [d], i.e., 𝒫_h^0 = {θ: θ_ℳ∈𝒩^k, |ℳ| = k, ℳ⊆ [d]}, where θ_ℳ is the k-dimensional sub-vector of θ with indices corresponding to ℳ. The set 𝒫_h^0 is then finite for all h ∈ [H]: |𝒫_h^0| ≤ (1+4/ϵ_net)^k ·\binom{d}{k} <cit.>. During the execution of Algorithm <ref>, for all h ∈ [H], we eliminate parameter candidates θ from 𝒫_h if we are certain that θ≠θ̂_h^*, where θ̂^* = (θ̂_0^*, θ̂_1^*, …, θ̂_H - 1^*) is the sequence of parameters that lies in 𝒫_h^0 and is closest to the θ^* that satisfies Assumption <ref>, i.e., θ̂^*_h = argmin_θ∈𝒫_h^0‖θ^*_h - θ‖. Therefore, in Algorithm <ref>, we only consider θ = (θ_0, θ_1, …, θ_H - 1) with θ_h ∈𝒫_h for all h ∈ [H]. In the t-th iteration of the algorithm, we choose a parameter θ^t = (θ^t_0, θ^t_1, …, θ^t_H - 1) so that θ^t_h maximizes 𝔼[V_θ^t_h(s_h)] and θ^t_h ∈𝒫_h for all h ∈ [H]. We then collect m trajectories to form a dataset 𝒟^t_H by following the policy induced by θ^t. Based on 𝒟^t_H, we calculate the empirical Bellman error Ê_h^t for each h ∈ [H], which is the empirical estimate of the average Bellman error defined as follows. For a sequence of parameters θ^t= (θ^t_0, θ^t_1, …, θ^t_H - 1), the average Bellman error of θ^t is defined as ℰ^t_h = 𝔼[⟨ϕ(s_h, a_h), θ^t_h⟩ - r(s_h,a_h) - V_θ^t_h + 1(s_h+1)] for h ∈ [H-1], and ℰ^t_H-1 = 𝔼[⟨ϕ(s_H-1, a_H-1), θ^t_H-1⟩ - r(s_H - 1,a_H - 1)] for the last level h = H-1. Here, (s_0, a_0, r_0, …, s_H-1, a_H-1, r_H-1) is a trajectory obtained by following π_θ^t. Intuitively, the Bellman error at level h measures the consistency of θ^t_h and θ^t_h + 1 under the state-action distribution induced by π_θ^t. In each iteration of <Ref>, we check if Ê_h^t is small for all h ∈ [H]. If so, the algorithm terminates and returns the policy π_θ^t. Otherwise, for all levels h ∈ [H] where Ê_h^t is large, we eliminate θ^t_h from 𝒫_h and proceed to the next iteration. Now we give the analysis of Algorithm <ref>. Sample Complexity. To bound the sample complexity of Algorithm <ref>, it suffices to give an upper bound on the number of iterations, since in each iteration, the number of trajectories sampled by the algorithm is simply H^2 · m = 16H^2(kln((1+4 / ϵ_net) d)+ln(H/δ))/ϵ_stat^2. The following lemma gives an upper bound on the number of iterations of Algorithm <ref>. The proof is given in <Ref>. For any MDP instance with horizon H and satisfying <Ref> with sparsity k, Algorithm <ref> runs for at most (1+4 / ϵ_net)^k \binom{d}{k} H iterations. Suboptimality of the Returned Policy. We now show that with probability at least 1 - δ, the suboptimality of the returned policy is at most (2ϵ + 2ϵ_net + 4ϵ_stat)H. First, we define a high-probability event E, which we will condition on in the remaining part of the analysis. Define E as the event that |ℰ_h^t - Ê_h^t| ≤ϵ_stat and |𝔼_s_h ∼π_h^t[V_θ(s_h)] - 1/m∑_i ∈ [m] V_θ(s_h^i)| ≤ϵ_stat (where s_h^i is from 𝒟_h^t) for all iterations t, all h ∈ [H], and all parameters θ∈𝒫_h^0. Event E holds with probability at least 1-δ. To prove Lemma <ref>, we first consider a fixed level h and iteration t.
Since the empirical Bellman error Ê_h^t is simply the empirical estimate of ℰ^t_h, and the empirical average 1/m∑_i ∈ [m] V_θ(s_h^i) is simply an empirical estimate of 𝔼_s_h ∼π_h^t[V_θ(s_h)], applying the Chernoff-Hoeffding inequality to each would suffice. Moreover, the number of iterations has an upper bound given by Lemma <ref>. Therefore, Lemma <ref> follows by applying a union bound over all h ∈ [H], t ∈ [(1+4 / ϵ_net)^k \binom{d}{k} H], and parameters θ∈𝒫_h^0. We next show that, conditioned on the event E defined above, for the sequence of parameters θ^* = (θ_0^*, θ_1^*, …, θ_H - 1^*) that satisfies Assumption <ref>, we never eliminate θ̂_h^* from 𝒫_h, for all h ∈ [H]. Conditioned on the event E defined in Definition <ref>, for a sequence of parameters (θ_0^*, θ_1^*, …, θ_H - 1^*) that satisfies Assumption <ref> and their approximations θ̂^*_h = argmin_θ∈𝒫_h^0‖θ^*_h - θ‖, during the execution of Algorithm <ref>, θ̂_h^* is never eliminated from 𝒫_h for all h ∈ [H]. To prove Lemma <ref>, the main observation is that, for h ∈ [H-1], the average Bellman error induced by θ̂_h^* and θ_h + 1^t = argmax_θ∈𝒫_h+1𝔼_s_h+1[ V_θ(s_h+1)] is always upper bounded by 2(ϵ+ϵ_net), regardless of the distribution of (s_h, a_h) (cf. Definition <ref>). Conditioned on event E, the empirical Bellman error induced by θ̂_h^* and θ_h + 1^t is at most 2ϵ + 2ϵ_net+3ϵ_stat. Similarly, the empirical Bellman error induced by θ̂_H-1^* is at most ϵ + ϵ_net + ϵ_stat. In Algorithm <ref>, we eliminate the parameter θ_h^t only when the empirical Bellman error is larger than these thresholds (Line 15). Thus, θ̂_h^* is never eliminated. We now show that the suboptimality of the policy returned by Algorithm <ref> is at most (2ϵ + 2ϵ_net+ 4ϵ_stat)H. For any MDP instance satisfying <Ref>, conditioned on the event E defined in Definition <ref>, Algorithm <ref> returns a policy π satisfying V^*-V^π≤ (2ϵ + 2ϵ_net+ 4ϵ_stat)H. To prove Lemma <ref>, we first recall the policy loss decomposition lemma (Lemma 1 in <cit.>), which states that, for a policy induced by a sequence of parameters θ = (θ_0, θ_1, …, θ_H - 1), V_θ_0(s_0) - V^π_θ is upper bounded by the summation of the average Bellman errors over all levels h ∈ [H]. When Algorithm <ref> terminates, the empirical Bellman error must be small for all h ∈ [H], and therefore, the average Bellman error is small by the definition of the event E. Moreover, in Line 5 of Algorithm <ref>, we always choose a parameter θ that maximizes V_θ(s_0). Since the sequence of parameters θ̂^* = (θ̂_0^*, θ̂_1^*, …, θ̂_H - 1^*) is never eliminated by Lemma <ref>, we must have V_θ_0(s_0) ≥ V_θ̂^*_0(s_0) ≥ V^* - ϵ - ϵ_net, which gives an upper bound on the suboptimality of the policy returned by Algorithm <ref>. Combining Lemma <ref>, Lemma <ref> and Lemma <ref>, we can prove Theorem <ref>. Implications. We can think of the bandit setting as an MDP with H = 1 and derive the following: For the bandit setting satisfying <Ref>, Algorithm <ref> returns an action â such that r(a^*) - r(â) ≤ 2ϵ + 2ϵ_net+ 4ϵ_stat. Here we compare Corollary <ref> with the result in <cit.>. Scrutinizing the analysis in <cit.>, the suboptimality achieved by their algorithm is 4ϵ + ϵ_stat, which is worse than our suboptimality guarantee. On the other hand, the algorithm in <cit.> also returns a parameter θ such that |⟨ϕ(a), θ⟩ - r(a)| ≤ 2ϵ + ϵ_stat for all a ∈𝒜 (which is the best possible according to Theorem <ref>), whereas our algorithm only returns a near-optimal action. § CONCLUSION We studied the RL problem where the optimal Q-function can be approximated by a linear function with constant sparsity k, up to an error of ϵ.
§ CONCLUSION

We studied the RL problem where the optimal Q-functions can be approximated by linear functions with constant sparsity k, up to an error of ϵ. We design a new algorithm with polynomial sample complexity, while the suboptimality of the returned policy is O(Hϵ), which is shown to be near-optimal by an information-theoretic hardness result. Although the suboptimality guarantee achieved by our algorithm is near-optimal, the sample complexity can be further improved. An interesting future direction is to design an RL algorithm with the same suboptimality guarantee, while obtaining a tighter dependence on the horizon length H.

§ PROOFS IN SECTION <REF>

§.§ Proof of Theorem <ref>

Consider an input distribution where a^*_h is drawn uniformly at random from {a_1, a_2}. By Yao's minimax principle, it suffices to consider the best deterministic algorithm, say A. Note that, since we have no sampling ability, a deterministic algorithm in this setting can be seen as a function that takes in the feature function ϕ and returns a policy π. Also, for all instances supported by this distribution, their inputs ϕ are the same. Thus, the policy returned by A is fixed. Denote the policy as π, and denote the trajectory following π as (s_0, a_0, r_0, s_1, a_1, r_1, …, s_H-1, a_H-1, r_H-1). The suboptimality of π can be written as V^* - V^π = ∑_h=0^H-1 ϵ·𝕀[a^*_h ≠ a_h]. Since a_h is fixed and a^*_h is drawn uniformly at random from {a_1, a_2}, 𝕀[a^*_h ≠ a_h]=1 with probability 1/2. Thus, (V^* - V^π)/ϵ is a binomial random variable, or (V^* - V^π)/ϵ∼ B(H, 1/2). The expectation of (V^* - V^π) is then Hϵ/2, and its variance is Hϵ^2/4. Using Chebyshev's inequality, with probability 0.99, we have V^* - V^π≥1/2Hϵ - 5 ϵ√(H) = Ω(Hϵ) for sufficiently large H ≥ 100.

§.§ Proof of Lemma <ref>

Consider an input distribution where i^* = (i^*_0, i^*_1, …, i^*_m-1) is drawn uniformly at random from [n]^m. Let c(i^*, a) be the query complexity of running algorithm a to solve the problem with correct indices i^*. Assume there exists a 0.1-correct algorithm 𝒜 for m-INDQ_n that queries less than 0.9n times in the worst case. Then, using Yao's minimax principle, there exists a deterministic algorithm 𝒜^' with c(i^*, 𝒜^') < 0.9n for all i^* ∈ [n]^m, such that ℙ[𝒜^' outputs (j, i^*_j) for some j ∈ [m]] ≥ 0.9. We may assume that the sequence of queries made by 𝒜^' is fixed until it correctly guesses one of the i^*_j. This is because 𝒜^' is deterministic, and the responses 𝒜^' receives are the same (i.e. all guesses are incorrect) until it correctly queries (j, i^*_j) for some j. Let S = {s_1, …, s_k} be the sequence of the first k guesses made by 𝒜^', and let I_BAD⊂ [n]^m be the set of all possible i^*'s such that the guesses in S are all incorrect. Denote the number of guesses on INDQ_n^(j) in S by n_j; then the n_j's are also fixed, and ∑_j∈[m] n_j = k. The size of I_BAD then satisfies |I_BAD| = ∏_j=0^m-1 (n-n_j) ≥ (n-k)n^m-1. Set k as the worst-case query complexity of 𝒜^'. Then, for all i^* ∈ I_BAD, the output of 𝒜^' is incorrect. Since i^* is drawn uniformly at random from [n]^m, the probability of 𝒜^' being incorrect is ℙ[𝒜^' is incorrect] = |I_BAD|/|[n]^m| ≥ (n-k)n^m-1/n^m > (n-0.9n)n^m-1/n^m > 0.1, where in the second to last inequality we used k < 0.9n. However, this contradicts the fact that ℙ[𝒜^' outputs (j, i^*_j) for some j ∈ [m]] ≥ 0.9. Thus, there does not exist a 0.1-correct algorithm that solves the problem with less than 0.9n queries in the worst case.

§.§ Proof of Theorem <ref>

First, we prove our claim based on the assumption that T is an integer that divides H. We can create the hard instance described in Section <ref>. We reduce the problem to H/T-INDQ_2^T.
Assume there exists an algorithm 𝒜 that takes less than 0.9 · 2^T · T samples, such that, with probability at least 0.9, it outputs a policy π with suboptimality V^* - V^π < H/T·ϵ. By definition, at round i, 𝒜 interacts with the MDP instance by following a trajectory (s_0, a_0^i, r_0^i, ..., s_H-1^i, a_H-1^i, r_H-1^i). Based on 𝒜, we create an algorithm 𝒜^' for H/T-INDQ_2^T as follows. Consider 𝒜 is querying the trajectory (s_0, a_0^i, r_0^i, ..., s_H-1^i, a_H-1^i, r_H-1^i). For each q ∈{0, …, H/T-1}, we can map (a_qT^i, …, a_(q+1)T-1^i) to an index in [2^T] using the bijection g. Thus, we make a sequence of H/T guesses, {(q, g(a_qT^i, …, a_(q+1)T-1^i))}_q=0^H/T-1, to the H/T-INDQ_2^T. If the guess (q, g(a_qT^i, …, a_(q+1)T-1^i)) is correct for some q, 𝒜 receives a reward of ϵ at level (q+1)T-1, i.e. r_(q+1)T-1^i = r(s_(q+1)T-1^i, a_(q+1)T-1^i) = ϵ. For all other state-action pairs in the trajectory, algorithm 𝒜 receives zero reward. Since 𝒜 takes less than 0.9 · 2^T · T samples, it queries less than 0.9 · 2^T · T/H trajectories, corresponding 0.9 · 2^T guesses to H/T-INDQ_2^T in total. Recall that 𝒜 outputs a policy π with suboptimality V^* - V^π < H/T·ϵ with probability at least 0.9. This means the sequence of guesses to H/T-INDQ_2^T made by π must have at least one of them being correct. Thus, 𝒜^' is a 0.1-correct algorithm that solves H/T-INDQ_2^T with less than 0.9 · 2^T guesses. However, by Lemma <ref>, such an algorithm does not exist, so 𝒜 does not exist. We conclude that any algorithm that returns a policy with suboptimality less than H/T ·ϵ with probability at least 0.9 needs to sample at least 0.9 · T · 2^T times. Now we consider when T is not an integer that divides H. There are two cases. First, consider T as an integer that does not divide H. Let H^' = ⌊ H/T ⌋· T, then we can make the same construction as above for the first H^' horizons, and set the reward as zero for all the state-action pairs in the remaining H-H^' levels. Because the rewards are the same for levels H^' through H-1, different values of {π_H^', …, π_H-1} do not make a difference to V^π. Therefore, we only care about the first H^' levels, so we can conclude from our above analysis that, any algorithm that returns a policy with suboptimality less than H^'/T ·ϵ = ⌊ H/T ⌋·ϵ with probability at least 0.9 needs to sample at least 0.9 · T · 2^T times. For the second case, we consider when T is not an integer. Let T^' = ⌊ T ⌋, we can apply our conclusion from the previous case. That is, any algorithm that returns a policy with suboptimality less than ⌊ H/T^'⌋·ϵ with probability at least 0.9 needs to sample at least 0.9 · T^'· 2^T^' times. Since 2T≤ H, we have ⌊ H/T^'⌋·ϵ≥⌊ H/T ⌋·ϵ≥Hϵ/2T. Also observing that 0.9 · T^'· 2^T^'≥ 0.1 · T · 2^T, we finish the proof. § PROOFS IN SECTION <REF> §.§ Proof of Lemma <ref> In each iteration, we either output a policy or delete at least one function in 𝒫_h for some h ∈ [H-1]. Since there are ∑_h∈[H-1](|𝕊^k| ×dk) ≤ (1+4 / _)^k dk H functions in total initially, the algorithm is guaranteed to terminate within (1+4 / _)^k dkH iterations. §.§ Proof of Lemma <ref> For fixed iteration t and horizon h ∈ [H], with probability at least 1-δ^', we have |ℰ_h^t - Ê_h^t| ≤ 4√(ln 2 - lnδ^'/2m). Hence, we can set m > 16(ln2 - ln(δ^'))/2ϵ_stat^2 to guarantee that |ℰ_h^t - Ê_h^t| < ϵ_stat Recall that the batch dataset 𝒟_t = {(a_0^i, r_0^i, s_1^i, ..., a_H-1^i, r_H-1^i)}_i=1^m is collected by playing policy π_θ^t. 
We define Ê_h^t,i = ⟨ϕ(s^i_h-1, a^i_h), θ_h^t⟩ - r(s^i_h,a^i_h) - V_θ^t_h+1(s^i_h+1), then Ê_h^t = 1/m∑_i=1^m Ê_h^t,i. By definition of ℰ_h^t, it satisfies ℰ_h^t = 𝔼[Ê_h^t,i]. Further, since ⟨ϕ(s,a), θ^t ⟩∈ [-1,1] and r(s,a) ∈ [0,1] for any state-action pair (s,a), we have Ê_h^t,i∈ [-3,1]. Thus, using Chernoff-Hoeffding inequality, we get, with probability 1-δ^', |ℰ_h^t - Ê_h^t| = |1/m∑_i=1^m (Ê_h^t,i - 𝔼[Ê_h^t])| ≤ 4√(ln 2 - lnδ^'/2m). For fixed iteration t, horizon h ∈ [H], and parameter θ∈_h, with probability at least 1-δ^', we have |_s_h ∼π_h^t V_θ(s_h) - 1/m∑_i∈ [m]V_θ(s_h^i)| ≤√(ln 2 - lnδ^'/2m). Hence, we can set m > ln2 - ln(δ^')/2ϵ_stat^2 to guarantee that |ℰ_h^t - Ê_h^t| < ϵ_stat Recall that the batch dataset 𝒟_h^t = {(s_0^i, a_0^i, r_0^i, … , s_h-1^i, a_h-1^i, r_h-1^i, s_h^i)}_i=1^m is collected by playing policy π_h^t. By definition, it satisfies _s_h ∼π_h^t V_θ(s_h) = 𝔼_𝒟_h^t[1/m∑_i∈ [m]V_θ(s_h^i)]. Further, since V_θ(s) ∈ [0,1] for any state s, using Chernoff-Hoeffding inequality, we have with probability 1-δ^', |_s_h ∼π_h^t V_θ(s_h) - 1/m∑_i∈ [m]V_θ(s_h^i)| ≤√(ln 2 - lnδ^'/2m). Define E^ℰ_t,h to be the event E^ℰ_t,h = {|ℰ_h^t - Ê_h^t| ≤ϵ_stat}, then by Lemma <ref>, ℙ(E^ℰ_t,h) ≥ 1-δ/(2(1+4 / _)^2kdk^2H^2) for all iterations t ∈ [(1+4 / _)^k dk H] and horizon h ∈ [H]. Define E^V_t,h,θ to be the event E^V_t,h,θ = {|_s_h ∼π_h^t V_θ(s_h) - 1/m∑_i∈ [m]V_θ(s_h^i)| ≤ϵ_stat}, then by Lemma <ref>, ℙ(E^V_t,h,f) ≥ 1-δ/(2(1+4 / _)^2kdk^2H^2) for all iterations t ∈ [(1+4 / _)^k dkH], horizon h ∈ [H], and θ∈_h. We can lower bound the probability of E by union bound ℙ(E) ≥ 1-∑_t∑_h ∈ [H]ℙ(E̅^ℰ_t,h) - ∑_t∑_h ∈ [H]∑_θ∈𝒫_hℙ(E̅^V_t,h,θ) ≥ 1-δ. §.§ Proof of Lemma <ref> Let θ̂^* = (θ̂^*_0, …, θ̂^*_H-1) be the sequence of parameters such that, for each h ∈ [H], the non-zero sub-vector of θ̂^*_h is in 𝒩^k and is closest to the non-zero indices in θ^*. Then, since 𝒩^k is _/2-maximal, we have by Assumption <ref> that |⟨ϕ(s,a), θ̂^*_h⟩ - Q^*(s,a)| ≤ |⟨ϕ(s,a), θ^*_h⟩ - Q^*(s,a)| + |⟨ϕ(s,a), θ̂^*_h⟩ - ⟨ϕ(s,a), θ^*_h⟩| ≤ + _, for all s in horizon h and action a ∈𝒜. At iteration t, algorithm 1 deletes θ̂^*_h if and only if one of the following two cases happens: (1) h < H-1, θ_h^t = θ̂^*_h, and Ê^t_h > 2ϵ + 2_ + 3ϵ_stat, (2) h=H-1, θ^t_H-1 = θ̂^*_h+1 and Ê^t_H-1 > ϵ + _ + ϵ_stat. For any state-action pair (s_h,a_h) at level h where h ∈ [H-1], we observe by definition that Q^*(s_h,a_h) - 𝔼[r(s_h,a_h)] - 𝔼[V^*_h+1(s_h+1)] = 0. Thus, we can upper bound ℰ^t_h by ℰ^t_h = 𝔼[⟨ϕ(s_h,a_h) , θ̂^*_h⟩ - r(s_h,a_h) - V_θ_h+1(s_h+1)] ≤ 𝔼[(Q^*(s_h,a_h) + ϵ + _) - r(s_h,a_h) -V_θ_ h+1(s_h+1)] By Assumption <ref> Here, (s_0, a_0, r_0, …, s_h, a_h, r_h) is a trajectory following π_θ^t, and s_h+1∼ P(s_h, a_h). Recall that θ_h^t is chosen by taking the function that gives maximum empirical value at level h, so 1/m∑_i ∈ [m] V_θ_h^t(s_h^i) ≥1/m∑_i ∈ [m] V_θ̂_h^*(s_h^i), where s_h^i are taken from the dataset 𝒟_h^t. Moreover, we are conditioned under event E, so we have [V_θ^*_h(s_h)] - [V_θ_h^t(s_h)] ≤(1/m∑_i ∈ [m] V_θ_h^t(s_h^i) + ϵ_stat]) - (1/m∑_i ∈ [m] V_θ_h^*(s_h^i) - ϵ_stat) ≤ 2ϵ_stat for all h and t. For the first case, we consider h ∈ [H-1]. 
We have
ℰ_h^t ≤ 𝔼[(Q^*(s_h,a_h) + ϵ + ϵ_net) - r(s_h,a_h) - V_θ^t_h+1(s_h+1)]
≤ 𝔼[Q^*(s_h,a_h) - r(s_h,a_h) - (V_θ̂^*_h+1(s_h+1) - 2ϵ_stat)] + ϵ + ϵ_net
≤ 𝔼[Q^*(s_h,a_h) - r(s_h,a_h) - ⟨ϕ(s_h+1, π^*_h+1(s_h+1)), θ̂^*_h+1⟩] + ϵ + ϵ_net + 2ϵ_stat (since V_θ̂^*_h+1(s_h+1) = max_a ∈𝒜 ⟨ϕ(s_h+1, a), θ̂^*_h+1⟩ ≥ ⟨ϕ(s_h+1, π^*_h+1(s_h+1)), θ̂^*_h+1⟩)
≤ 𝔼[Q^*(s_h,a_h) - r(s_h,a_h) - (Q^*(s_h+1, π^*_h+1(s_h+1)) - ϵ - ϵ_net)] + ϵ + ϵ_net + 2ϵ_stat (by Assumption <ref>)
= 𝔼[Q^*(s_h,a_h) - r(s_h, a_h) - Q^*(s_h+1, π^*_h+1(s_h+1))] + 2ϵ + 2ϵ_net + 2ϵ_stat
= 2ϵ + 2ϵ_net + 2ϵ_stat (since Q^*(s_h+1, π^*_h+1(s_h+1)) = V_h+1^*(s_h+1)).
Given that we are conditioned on event E, Ê^t_h - ℰ^t_h ≤ϵ_stat for all iterations t and all horizons h. Thus, Ê^t_h < 2ϵ + 2ϵ_net + 3ϵ_stat. For the second case, we consider h = H-1. We have ℰ^t_H-1 ≤ 𝔼[(r(s_H-1,a_H-1) + ϵ + ϵ_net) - r(s_H-1,a_H-1)] = ϵ + ϵ_net, because H-1 is the last level. Again, given that we are conditioned on event E, we have Ê^t_H-1 - ℰ^t_H-1 ≤ ϵ_stat, so Ê^t_H-1 < ℰ^t_H-1 + ϵ_stat ≤ ϵ + ϵ_net + ϵ_stat.

§.§ Proof of Lemma <ref>

Algorithm 1 terminates and returns a policy at iteration t only if θ^t satisfies the conditions in line 6, and by Lemma <ref>, there always exists a sequence of parameters {θ̂_h^*}_h=0^H-1 that satisfies these conditions. Also, Lemma <ref> indicates that the algorithm terminates within a finite number of iterations. Thus, Algorithm 1 is guaranteed to terminate and return a policy. Let the output policy be π_θ^t, i.e. Ê^t_h ≤ 2ϵ + 2ϵ_net + 3ϵ_stat for all h ∈ [H-1] and Ê^t_H-1 ≤ ϵ + ϵ_net + ϵ_stat. The loss of this policy can be bounded by
V^*(s_0)-V^π_θ^t(s_0) = Q^*(s_0, π^*(s_0)) - V^π_θ^t(s_0)
≤ (⟨ϕ(s_0, π^*(s_0)), θ̂^*_0⟩ + ϵ + ϵ_net) - V^π_θ^t(s_0) (by Assumption <ref>)
≤ (⟨ϕ(s_0, π_θ^t_0(s_0)), θ^t_0⟩ + ϵ + ϵ_net) - 𝔼[∑_h=0^H-1 r(s_h, a_h)] (since θ^t_0 is chosen by taking the maximum)
= ϵ + ϵ_net + 𝔼[∑_h=0^H-1 ⟨ϕ(s_h, a_h), θ^t_h⟩ - r(s_h, a_h) - ⟨ϕ(s_h+1, a_h+1), θ^t_h+1⟩] (telescoping sum, with the convention θ^t_H ≡ 0)
= ϵ + ϵ_net + ∑_h=0^H-1 𝔼[⟨ϕ(s_h, a_h), θ^t_h⟩ - r(s_h, a_h) - ⟨ϕ(s_h+1, a_h+1), θ^t_h+1⟩] (linearity of expectation)
= ϵ + ϵ_net + ∑_h=0^H-1 ℰ^t_h ≤ ϵ + ϵ_net + ∑_h=0^H-1 (Ê^t_h + ϵ_stat) ≤ (2ϵ + 2ϵ_net + 4ϵ_stat) H.

§ ADDITIONAL PROOFS

§.§ Proof of Theorem <ref>

We construct a hard instance as follows. For each a ∈[n], define ϕ(a) = ϵ. Let θ^* be randomly selected from {-1, 1}, and let a^* be uniformly chosen from 𝒜. The reward r is deterministic and is defined as r(a) = 2θ^*ϵ if a = a^*, and r(a) = 0 otherwise. Therefore |r(a) - θ^* ·ϕ(a)| ≤ϵ holds true for all actions a ∈𝒜. By Yao's minimax principle, it suffices to consider deterministic algorithms. Let A be a deterministic algorithm that, by taking less than 0.9n samples, returns an r̂ with |r̂(a) - r(a)| < 2ϵ for all a ∈𝒜 with probability at least 0.95. We may assume that the sequence of actions made by A is fixed until it receives a reward r(a_t) ≠ 0 at some round t. This is because A is deterministic, and the responses A receives are the same (i.e. all actions have reward 0) until it queries a^*. Let S = (a_1, …, a_t) be the sequence of actions made by A. Let 𝒜_BAD⊂𝒜 be the set of actions that are not in S. We have
ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜] = ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜 | a^* ∈𝒜_BAD] ℙ[a^* ∈𝒜_BAD] + ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜 | a^* ∉𝒜_BAD] ℙ[a^* ∉𝒜_BAD]
≤ ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜 | a^* ∈𝒜_BAD] ℙ[a^* ∈𝒜_BAD] + (1-ℙ[a^* ∈𝒜_BAD]).
Since t < 0.9n and a^* is chosen uniformly at random from 𝒜, the probability that a^* ∈𝒜_BAD is ℙ[a^* ∈𝒜_BAD] = |𝒜_BAD|/|𝒜| > (n-0.9n)/n = 0.1. When a^* ∈𝒜_BAD, the output of our deterministic algorithm must be fixed. We denote such output by r^'.
Consider a fixed a^* ∈𝒜_BAD, if we have |r^'(a^*) - 2ϵ| < 2ϵ, then r^'(a^*) ∈ (0, 4ϵ), and |r^' (a^*) - (-2ϵ))| > 2ϵ. Similarly, if we have |r^'(a^*) - (-2ϵ)| < 2ϵ, then |r^' (a^*) - 2ϵ| > 2ϵ. Since θ^* is chosen uniformly random in {-1, 1}, we know r(a^*) is chosen uniformly random in {-2ϵ, 2ϵ}. Thus, ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜 | a^* ∈𝒜_BAD] ≤ℙ[|r̂(a^*) - r(a^*)| < 2ϵ | a^* ∈𝒜_BAD] = 0.5. We have ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜] < 0.5 ·ℙ[a^* ∈𝒜_BAD] +(1-ℙ[a^* ∈𝒜_BAD]) < 0.5 · 0.1 + 0.9 = 0.95. However, by our assumption on algorithm A, we have ℙ[|r̂(a) - r(a)| < 2ϵ, ∀ a ∈𝒜] > 0.95. § BELLMAN RANK The following definition of the general average Bellman error is helpful for our proofs in this section. Given any policy π: 𝒮→𝒜, feature function ϕ: 𝒮×𝒜→^d and a sequence of parameters θ = (θ_0, …, θ_H-1), the average Bellman error of θ under roll-in policy π at level h is defined as ℰ_h(θ, π) = 𝔼[⟨ϕ(s_h, a_h), θ_h ⟩ - r_h - max_a ∈𝒜⟨ϕ (s_h+1, a), θ_h+1⟩]. Here, (s_0, a_0, r_0, …, s_h, a_h, r_h) is a trajectory by following π, and s_h+1∼ P(s_h, a_h). For a given MDP ℳ, we say that our parameter space ℱ = {θ∈^d: θ is k-sparse, θ_2=1} has a Bellman rank of dimension d if, for all h ∈ [H], there exist functions X_h: ℱ→ℝ^d and Y_h: ℱ→ℝ^d such that for all θ, θ^'∈ℱ, ℰ_h(θ, π_θ^') = ⟨ X_h(θ), Y_h(θ^')⟩. For each h ∈ [H], define W_h ∈ℝ^d × d as the Bellman error matrix at level h, where the i,j-th index of W_h is ℰ_h(θ_i, π_θ_j). Then, the Bellman rank of ℱ is the maximum among the rank of the matrices {W_h}_h ∈ [H]. We prove Proposition <ref>. We again construct a deterministic MDP instance with binary trees. For simplicity, we assume d is a power of 2, and we construct the instance with horizon H = log d. Thus, we have |𝒮| = 2^H-1 = d-1 states. We also assume the sparsity is k=1, so the parameter space |ℱ| = d. The rest details of state space, action space, and the transition kernel are exactly the same as in Section <ref>. The reward is defined as r(s, a_1) = r(s, a_2) = ϵ for s ∈𝒮_H-1, and r(s,a) = 0 for all other state-action pairs. Correspondingly, the Q-function satisfies that Q^*(s,a) = ϵ for all (s,a) ∈𝒮×𝒜. For feature at horizon h ∈ [H], for j ≥ 2^h+1, we define the j-th index of ϕ(s,a) as ϕ(s,a)[j] = jϵ for all (s,a) ∈𝒮_h ×𝒜. For i ∈ [2^h+1] and i is even, ϕ(s_2^h-1+i,a_1)[i] = ϵ and ϕ(s_2^h-1+i,a_2)[i] = 0. If i ∈ [2^h+1] and i is odd, then ϕ(s_2^h-1+i,a_1)[i] =0 and ϕ(s_2^h-1+i,a_2)[i] = ϵ. We also define ϕ(s,a)[i] = 0 for all other state-action pairs. Notice that, for any h ∈ [H] and i ∈ [2^h+1], we have can let θ be the one-hot vector with i-th index being 1, then |⟨ϕ(s,a), θ⟩ - Q^*(s,a)|≤ϵ for all (s,a) ∈𝒮_h ×𝒜, so our construction satisfies assumption <ref>. Clearly, for each pair (s, a) ∈𝒮_H-1×𝒜, we can find θ = (θ_0, …, θ_H-1) such that the trajectory created by following π_θ, denoted by (s_0, a_0, r_0, …, s_H-1, a_H-1, r_H-1), satisfies s_H-1=s and a_H-1 = a. Consider two parameter candidates θ, θ'. Let (s_H-1, a_H-1) be the state and action at level H-1 when following π_θ^'. Since we are considering deterministic MDP, we can calculate the Bellman error at level H-1 as follows ℰ_H-1(θ, π_θ^') = ⟨ϕ(s_H-1,a_H-1), θ_H-1⟩ - r(s_H-1,a_H-1) - max_a ∈𝒜⟨ϕ(s_H, a), θ_H⟩ = ⟨ϕ(s_H-1,a_H-1), θ_H-1⟩ - ϵsince H-1 is the last level = 0 , if θ_H-1 = θ_H-1^' -ϵ , if θ_H-1≠θ_H-1^'. Here, the last equality holds because, for each θ_H-1, there is only one unique (s,a) ∈𝒮_H-1×𝒜 such that ⟨ϕ(s,a), θ_H-1⟩ = ϵ. 
Thus, at level H-1, a submatrix of the Bellman error matrix, W ∈ℝ^d × d, satisfies W_ij = ℰ_H-1(θ_i, π_θ_j) = 0 if i = j, and W_ij = -ϵ otherwise. In other words, W = ϵ (I-J) where I is the identity matrix and J is a d × d matrix with all 1s. Define the matrix W^' = 1/ϵ(I - 1/(d-1)J); then we have WW^' = (I-J)(I-1/(d-1)J) = I - 1/(d-1)J - J + d/(d-1) J = I. This means W^' is the inverse matrix of W, and W is full rank. Thus, the Bellman rank is at least d.
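As a quick numerical sanity check of this last computation (added here for illustration; the variable names follow the text above), one can verify W W' = I for a small value of d:

```python
import numpy as np

d, eps = 5, 0.1
I, J = np.eye(d), np.ones((d, d))
W = eps * (I - J)                          # Bellman error submatrix at level H-1
W_prime = (1.0 / eps) * (I - J / (d - 1))  # claimed inverse
assert np.allclose(W @ W_prime, I)         # so W has full rank d
```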
http://arxiv.org/abs/2407.13485v1
20240718130412
Slope-semistability and moduli of coherent sheaves: a survey
[ "Mihai Pavel", "Matei Toma" ]
math.AG
[ "math.AG", "math.CV", "14D20, 32G13" ]
Slope-semistability]Slope-semistability and moduli of coherent sheaves: a survey Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, 014700 Bucharest, Romania cpavel@imar.ro Université de Lorraine, CNRS, IECL, F-54000 Nancy, France Matei.Toma@univ-lorraine.fr [2020]14D20, 32G13 [ Mihai Pavel, Matei Toma July 22, 2024 ============================ Dedicated to the memory of Lucian Bădescu § ABSTRACT We survey old and new results on the existence of moduli spaces of semistable coherent sheaves both in algebraic and in complex geometry. § INTRODUCTION In algebraic geometry and complex geometry the classification of vector bundles is an important problem that has attracted considerable attention since the early sixties. One goal of the classification was the construction of nicely behaved moduli spaces of vector bundles. This had been achieved for line bundles, but it soon became apparent that for higher rank vector bundles a restrictive condition was necessary to obtain moduli spaces with good geometric properties. To this purpose Mumford introduced the concept of slope-semistability in <cit.> for vector bundles over algebraic curves. This was later extended to cover coherent sheaves over higher dimensional bases. The theory attracted even more interest through Donaldson's work in four-manifolds differential topology. In particular the so-called Kobayashi-Hitchin correspondence relates moduli spaces of stable vector bundles on complex projective surfaces to moduli spaces of anti-self-dual connections in gauge theory, thus providing a new perspective in the study of the subject in complex geometry. In this paper we survey the existence and construction results of moduli spaces of semistable coherent sheaves in both algebraic and complex geometry, with a particular stress on functorial aspects. We do not delve into the vast domain studying the geometric properties and applications of these moduli spaces in enumerative geometry, classification of manifolds, hyperkähler geometry, gauge theory, etc. For this there exists a rich literature, see for instance <cit.> and the references therein. §.§ Acknowlegements: MP was partly supported by the PNRR grant CF 44/14.11.2022 Cohomological Hall algebras of smooth surfaces and applications. § FIRST PROPERTIES OF SLOPE-SEMISTABLE SHEAVES In this section we recall the notion of slope-semistability for coherent sheaves, and discuss its first important properties in the context of both algebraic and complex-analytic geometry. Our main references here are <cit.> and <cit.>. Setup. Throughout this paper we denote by (X,ω) a polarized n-dimensional space which will be either (AG) a smooth projective variety over an algebraically closed field k with an integral ample class ω∈^1(X), or (CG) a compact complex manifold endowed with a Hermitian metric whose Kähler form ω is such that ∂∂̅(ω^n-1)=0; such a metric is called a Gauduchon metric. An important special subcase of (CG) that we will frequently refer to in the sequel is when (X,ω) is (KG) a compact Kähler manifold endowed with a Kähler class ω∈ H^1,1(X,). Also we will denote by (X) the category of coherent sheaves on X. It should be understood that the coherent sheaves we consider are algebraic or analytic depending on whether (X,ω) is in the case (AG) or (CG) respectively. For a torsion-free sheaf E ∈(X), we define the ω-slope of E by μ^ω(E) c_1(E) ·ω^n-1/(E). In the algebraic setting the intersection product c_1(E) ·ω^n-1 is performed in the Chow ring A^*(X). 
Otherwise, in the complex case, ω^n-1 defines a class in Aeppli cohomology which may be integrated against any class in (1,1)-Bott-Chern cohomology, in particular against c_1(E) viewed as a class in H^1,1_BC(X,). When (X,ω) is Kähler, the intersection product c_1(E) ·ω^n-1 is given by cup product in the cohomology ring H^*(X,). The following notion of slope-semistability was introduced by Mumford <cit.> over curves, and later extended in higher dimensions by Takemoto <cit.>. We also recall in Definition <ref> the Gieseker-Maruyama semistability following <cit.>. A sheaf E ∈(X) is ω-semistable (resp. ω-stable) if * E is torsion-free, * for any subsheaf F ⊂ E with 0 < (F) < (E) we have μ^ω(F) ≤μ^ω(E) (resp. <). For the terminology, we will also say (semi)stable, for slope-(semi)stable or for ω-(semi)stable when ω is clear from the context. A torsion-free sheaf E is called polystable if it is isomorphic to a direct sum of stable sheaves of the same slope. A coherent sheaf E on X is called simple if (E,E) ≅ k. It is immediately shown that stable sheaves are simple. In the (AG) and (KG) setups, one defines the Hilbert polynomial of a coherent sheaf E with respect to ω by setting P_ω(E,m) = ∫_X (E)e^mω_X. A sheaf E ∈(X) is said to be Gieseker-Maruyama (GM) semistable (resp. Gieseker-Maruyama-stable) if * E is torsion-free, * for any subsheaf F ⊂ E with 0 < (F) < (E) we have P_ω(F,m)/(F)≤P_ω(E,m)/(E) (resp. <) for m ≫ 0. It is easy to see that we have the following implications for coherent sheaves in the (AG) or (KG) setups ω-stable GM-stable GM-semistableω-semistable. * All torsion-free sheaves of rank one and in particular all line bundles are stable with respect to any polarization. * A direct sum of semistable sheaves of the same slope is also semistable. * The tangent bundle of the complex projective space ^n is stable. * The tangent bundle of a complex (algebraic or non-algebraic) K3 surface is stable with respect to any polarization. In positive characteristic the stability of the tangent bundle is currently unknown <cit.>. * All non-algebraic compact complex surfaces admit irreducible, hence stable, rank two vector bundles <cit.>. Recall that by definition a torsion-free sheaf is irreducible if it admits no coherent subsheaf of intermediate rank. The notion of slope-semistability fits within the broader context of algebraic stability conditions introduced by Rudakov <cit.>, and furthemore it satisfies the Härder-Narasimhan property. That is, for any torsion-free sheaf E ∈(X) there exists a unique Härder-Narasimhan (HN) filtration 0 = E_0 ⊂ E_1 ⊂…⊂ E_m = E such that the factors E_i/E_i-1 are ω-semistable and μ^ω(E_1) > μ^ω(E_2/E_1) > … > μ^ω(E/E_m-1). We refer the reader to <cit.> for a proof in the algebraic case and to <cit.> in the analytic case. In general one can “approximate” any semistable sheaf E ∈(X) by stable sheaves using Seshadri filtrations E^∙: 0 = E_0 ⊂ E_1 ⊂…⊂ E_m = E with stable factors E_i/E_i-1 of slope μ^ω(E_i/E_i-1)=μ^ω(E). We shall denote by ^S(E^∙) = ⊕_i E_i/E_i-1 the corresponding graded sheaf of such a filtration. We note that a semistable sheaf E might admit many Seshadri filtrations, however one can show that the graded module corresponding to any such filtration is uniquely determined in codimension one. In other words, the reflexive hull ^S(E^∙)^∨∨ of the graded sheaf does not depend on the choice of the Seshadri filtration <cit.>. The graded modules of different Seshadri filtrations may however be distinct, see <cit.>. 
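As a small worked example of the notions above (added here for illustration), take X = ℙ^1 with any polarization ω (on a curve the slope is just degree over rank) and E = 𝒪(1) ⊕𝒪. Then
\[
\mu^{\omega}(\mathcal{O}(1)) = 1, \qquad \mu^{\omega}(E) = \tfrac{1}{2}, \qquad \mu^{\omega}(\mathcal{O}) = 0,
\]
so the subsheaf 𝒪(1) ⊂ E violates the semistability inequality and E is unstable; its Harder-Narasimhan filtration is
\[
0 \subset \mathcal{O}(1) \subset E,
\]
with semistable factors 𝒪(1) and E/𝒪(1) ≅𝒪 of strictly decreasing slopes 1 > 0. By contrast, 𝒪(1) ⊕𝒪(1) is polystable, hence semistable, but not stable.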
§ SOME KEY RESULTS ON SLOPE-SEMISTABLE SHEAVES §.§ Set-theoretical Kobayashi-Hitchin correspondence We present here the Kobayashi-Hitchin correspondence which establishes a link between the algebraic geometric concept of stability and the existence of Hermite-Einstein metrics in complex differential geometry. It allows the use of analytic and differential geometric methods in the study of semistable vector bundles and their moduli spaces in complex geometry. See <cit.> for a thorough treatment of this subject. This works in the complex geometrical setup of compact complex manifolds endowed with a Gauduchon metric. Let us fix a compact Gauduchon manifold (X,ω) of dimension n. Let (E,h) be a 𝒞^∞ complex vector bundle on X, endowed with a Hermitian metric h. Then, by definition, an h-unitary connection A on (E,h) is called ω-Hermite–Einstein if A is integrable and satisfies Λ_ω F_A = - 2π i/(n-1)!_ω(X)μ^ω(E) Id_E. Here Λ_ω is the adjoint of the Lefschetz operator on forms given by wedging with ω. Moreover, A is called irreducible if it has no decomposition A = A_1 ⊕ A_2 coming from an orthogonal splitting E = E_1 ⊕ E_2 of the Hermitian smooth vector bundle (E,h). Note that the integrability condition on A endows E with a holomorphic structure _A = (E,∂_A) by the Newlander-Nirenberg theorem. Let (E,h_0) be a Hermitian complex vector bundle on the Gauduchon manifold (X,ω). If there exists an irreducible ω-Hermite-Einstein connection on (E,h_0), then the induced holomorphic structure _A on E is ω-stable. Conversely, if is an ω-stable holomorphic structure on E, then there exists a Hermitian metric h on E such that its Chern connection with respect to is irreducible ω-Hermite-Einstein on (E,h). This metric h is called ω-Hermite-Einstein on and is unique up to multiplication by a positive factor. In the setup of Theorem <ref>, if is an ω-stable holomorphic structure on E, then there exists an ω-Hermite-Einstein connection A on (E,h_0) such that the induced holomorphic structure _A on E is isomorphic to , see <cit.>. §.§ Bogomolov inequality Over a smooth projective variety or compact Kähler manifold, the Bogomolov inequality expresses a strong topological constraint to which semistable torsion-free sheaves are subject. It is particularly useful in boundedness questions, see Theorem <ref>. We will state the Bogomolov inequality in the framework of algebraic and Kähler geometry, and then make a remark on its formulation in the Gauduchon setup. In the zero characteristic (AG) setup and in the (KG) setup, for any ω-semistable sheaf E on X one has Δ(E)·ω^n-2≥ 0, where Δ(E) := 2 (E) c_2(E) - ((E) - 1)c_1(E)^2. In the (AG) setup in characteristic p > 0 and for E ω-semistable, we have Δ(E)·ω^n-2 + (E)^2((E)-1)^2/(p-1)^2ω^n ≥ 0. In the algebraic case, the theorem was first proved by Bogomolov <cit.> over algebraic surfaces in zero characteristic. The general algebraic case in zero characteristic follows from his result and the Mehta-Ramanathan restriction theorem <ref>. The positive characteristic case was proved by Langer <cit.>. The inequality Δ(E)·ω^n-2≥ 0 was proved in the Kähler case by Lübke for holomorphic vector bundles E admitting an ω-Hermite-Einstein metric, see <cit.>. Together with the Kobayashi-Hitchin correspondence it yields the statement of Theorem <ref> in the Kähler case. 
In fact, Lübke's proof also applies in the context of a Gauduchon manifold (X,ω), leading to a pointwise inequality of (n,n)-forms: Δ(E,h)∧ω^n-2≥ 0, where h is an ω-Hermite-Einstein metric on E and the (2,2)-form Δ(E,h) is computed using the associated Chern connection. §.§ Restriction theorems In this subsection we place ourselves in the algebraic setting and present restriction results for (semi)stable sheaves. These are used in moduli theory to prove boundedness and general properties of moduli spaces of sheaves. Let H be an ample divisor representing the polarization ω. We assume the dimension n of X to be larger than one. Let E ∈(X) be a torsion-free sheaf. If D ∈ |aH| is a smooth divisor for some a > 0 such that E|_D is (semi)stable with respect to H|_D, then it is immediate to see that E is also (semi)stable with respect to H. One may wonder if a converse statement holds. The example of the tangent bundle __^n shows that some caution is required. Indeed, its restriction to any hyperplane D is isomorphic to __^n-1(1) ⊕__^n-1, which is not semistable. However, a positive answer is found if one takes a general divisor D ∈ |aH| for a sufficiently large. This is the content of the Mehta-Ramanathan restriction theorem <cit.>: If E is a H-(semi)stable sheaf on X, then its restriction E|_D to a general divisor D ∈ |aH| of sufficiently large degree is (H|_D)-(semi)stable. More refined restriction theorems which give effective bounds on the degree a were proved by Flenner <cit.> in zero characteristic and by Langer <cit.> in mixed characteristic. See also the recent paper <cit.> containing effective restriction results. We state here a variant of Langer's result. If E is a H-(semi)stable sheaf on X, then its restriction E|_D to a general divisor D ∈ |aH| is (H|_D)-(semi)stable provided that a > (E)-1/(E)Δ(E)H^n-2 + 1/(E)((E)-1)H^n + ((E)-1)H^n/(E)γ_(E), where γ_r := 0 if (k) = 0 and γ_r := r^2(r-1)^2/(p-1)^2 if (k)=p > 0. In Theorem <ref>, “general” can be made explicit depending on E. More precisely, if 0 = E_0 ⊂…⊂ E_m = E is a Seshadri filtration of E, then the statement holds for any smooth (even normal) divisor D ∈ |aH| so that any factor E_i/E_i-1 restricted to D remains torsion-free. § FAMILIES OF SLOPE-SEMISTABLE SHEAVES In this section we present properties of families of slope-semistable sheaves which are essential in moduli theory. An S-flat family of coherent sheaves on X is by definition a coherent _S × X-module , flat over S. The parameterizing space S is either a k-scheme or a complex analytic space, depending on whether we work in the algebraic or analytic setup respectively. §.§ Boundedness of sets of coherent sheaves We recall below what we mean by a bounded set of coherent sheaves on X. Let be a set of isomorphism classes of coherent sheaves on X. We say that is bounded if (AG) there exists a scheme S of finite type over k and a coherent sheaf E on S × X such that is contained in the set of isomorphism classes of fibers of E over points of S <cit.>. (CG) there exists a complex analytic space S, a compact subset K ⊂ S and a coherent sheaf E on S × X such that is contained in the set of isomorphism classes of fibers of E over points of K <cit.>. When X is complex projective, the above two definitions are in fact equivalent via the GAGA Theorem, cf. <cit.>. 
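To illustrate the quantities entering the last two theorems on a concrete case (a check added here, not taken from the surveyed papers), let X = ℙ^2, H the hyperplane class, ω = H, and E = T_ℙ^2, which is stable as recalled in Section 2. From c(T_ℙ^2) = (1+H)^3 one gets c_1 = 3H and c_2 = 3, hence
\[
\Delta(T_{\mathbb{P}^2}) = 2\cdot 2\cdot 3 - (2-1)\cdot 9 = 3 \ \ge\ 0,
\]
as predicted by the Bogomolov inequality (here n = 2, so Δ(E)·ω^n-2 = Δ(E)). Reading the middle term of the bound in the restriction theorem as 1/(rk(E)(rk(E)-1)H^n), the condition becomes
\[
a \ > \ \tfrac{1}{2}\cdot 3 + \tfrac{1}{2\cdot 1\cdot 1} + 0 \ = \ 2,
\]
so the theorem guarantees that the restriction of T_ℙ^2 to a general plane curve of degree at least 3 is again stable.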
Note that if is a bounded family of sheaves on X, then the Chern classes (seen in the numerical group of X in the algebraic case, and in the singular cohomology ring H^*(X,) in the analytic case respectively) of the elements in range within a finite set. The following boundedness criterion is due to Grothendieck <cit.> in the algebraic case. The analytic version can be found in <cit.>. Let be a set of isomorphism classes of torsion-free sheaves on X. Then is bounded if and only if the following two conditions are fulfilled * is dominated, i.e., there exists a bounded set of classes of coherent sheaves on X such that each element of is a quotient of an element of , * the slope function μ^ω is upper bounded on . §.§ Openness of semistability The following result shows that slope-semistability, resp. slope-stability, is an open property in flat families of sheaves. Its proof is based on the boundedness criterion stated in Proposition <ref>. Let (X,ω) be a polarized space as in our (AG) or (KG) setups. Let be an S-flat family of coherent sheaves on X. Then the locus S^∘ of closed points s ∈ S such that |_{s}× X is ω-semistable (resp. ω-stable) is a Zariski open subset of S. See <cit.> for a proof in the algebraic case, and <cit.> for the analytic case. The above result is in general false for non-Kähler Gauduchon manifolds, see <cit.>. §.§ Langton's semistable reduction We treat the algebraic case. For the analytic case see <cit.>. Let (X,ω) be as in the (AG) setup. Let R be a DVR over k of quotient field K. Let E be an R-flat family of coherent sheaves on X such that E_K is ω-semistable. Then there exists a subsheaf F ⊂ E such that F_K ≅ E_K and such that F_k is ω-semistable. §.§ Boundedness of semistability The Chern character of a coherent sheaf E on X will determine its numerical type (E) in the numerical group (X)_ in the (AG) case, respectively in the singular cohomology group H^*(X,) in the (CG) case. Given a class γ in (X)_, respectively in H^*(X,), we consider the following boundedness statement: B_γ(ω): The set of isomorphism classes of coherent sheaves E of class γ on X that are ω-semistable is bounded. Boundedness of semistable sheaves was intensively studied and it took the efforts of many mathematicians to completely solve the algebraic case (e.g. <cit.>). In the (AG) setup, boundedness of semistability B_γ(ω) holds for any numerical class γ. In the (KG) case, the boundedness statement is not yet known in full generality. We present here a few results that indicate its validity. Let (X,ω) be a compact Kähler manifold and γ be a topological class in H^*(X,). Then B_γ(ω) is known in the following cases: * when γ is the class of a rank 1 coherent sheaf <cit.>, * when X is complex projective, but ω is not necessarily an ample class <cit.>. * when X is a (not necessarily algebraic) K3 surface or a 2-dimensional torus and the class ω is γ-generic, see <cit.> and <cit.> for details. In the non-Kähler case, boundedness of semistability cannot be expected in the above formulation as can be seen from the following example. Let X be a class VII surface. Then b_1(X) = 1 and the identity component ^0(X) of the Picard group pf X is isomorphic to ^*. In this case holomorphic line bundles with c_1 = 0 form an unbounded family of semistable sheaves on X, with respect to any polarization. A possible remedy to this situation would be to fix the first Chern class of the line bundles in the Bott-Chern cohomology of X, and not only in the singular cohomology. 
The effect would be to fix the degree of the considered line bundles. The set parametrizing line bundles of fixed degree in the above example is a circle in ^* centered at 0 <cit.>. § MODULI FUNCTORS AND MODULI STACKS §.§ Moduli functors of sheaves In this subsection we let be the category (/k) of k-schemes in the algebraic geometric setting, and the category (/) of (not necessarily separated) complex analytic spaces in the complex geometric setting. Let F: → (Sets) be a contravariant functor and ϕ: F →(-,M) a natural transformation of functors where M is an algebraic space over k, respectively an analytic space. We say that ϕ is * a categorical moduli space for F if any other natural transformation ψ: F →(-,N) with N an algebraic space over k, respectively an analytic space factorizes through ϕ. One also says that M corepresents the functor F in this case. * a coarse moduli space for F if it is a categorical moduli space and moreover induces a bijection at the level of k-points F( k) →( k,M), respectively of -points. * a fine moduli space for F if ϕ is an isomorphism of functors. Let (X,ω) be a polarized space as in the algebraic, respectively Kähler setup. Given a class γ in (X)_, respectively in H^*(X,), let _X,γ : → (Sets) denote the functor of coherent sheaves of class γ, which sends an object S ∈ to the set of isomorphism classes of flat families of coherent sheaves of class γ on X parameterized by S. In the sequel we will consider the following subfunctors of the functor _X,γ: * ^ss_X,ω,γ of ω-semistable sheaves, * ^s_X,ω,γ of ω-stable sheaves, * ^lf,s_X,ω,γ of ω-stable locally free sheaves, * ^spl_X,γ of simple torsion-free sheaves <cit.>, * ^(SLF)_X,ω,γ of torsion-free sheaves with Seshadri locally free graduations <cit.>, * ^(SR)_X,ω,γ of torsion-free sheaves with Seshadri reflexive graduations <cit.>. All these functors are Zariski-open subfunctors of _X,γ, see Proposition <ref> and the above cited references. We will discuss the question whether these functors admit a categorical/coarse/fine moduli space after saying a few words about the corresponding moduli stacks. §.§ Moduli stacks of sheaves Since the theory of analytic stacks is less developed, in this subsection we place ourselves in the algebraic setup and recall some known facts about the stack of coherent sheaves. For a detailed account on stacks and algebraic stacks we refer the reader to <cit.>. Consider the category oh_X whose objects are pairs (S,E), where S is a scheme over k and E is an S-flat family of sheaves on X. A morphism (S',E') → (S,E) in oh_X consists of a map f: S' → S of k-schemes together with a morphism E → f_* E' of sheaves whose adjoint is an isomorphism. We visualize this as a cartesian diagram E' [r] [d] E [d] S [r]^f S This defines a category fibered in groupoids over the category of schemes over k. The category oh_X is an algebraic stack locally of finite type and with affine diagonal over k. Given a numerical class γ∈(X), we consider the open substacks oh_X,ω,γ^ss, oh_X,ω,γ^s, oh_X,ω,γ^lf,s, oh_X,γ^spl, oh_X,ω,γ^(SLF), oh_X,ω,γ^(SR) of oh_X corresponding to the functors _X,ω,γ^ss, _X,ω,γ^s,_X,ω,γ^lf,s,_X,γ^spl,_X,ω,γ^(SLF),_X,ω,γ^(SR). Theorem <ref> implies that oh_X,ω,γ^ss is quasi-compact, therefore so are _X,ω,γ^s, _X,ω,γ^lf,s,_X,ω,γ^(SLF),_X,ω,γ^(SR) too. Moreover Proposition <ref> yields that oh_X,ω,γ^ss is universally closed over k. All together we obtain: The substack oh_X,ω,γ^ss⊂ oh_X is open and a universally closed algebraic stack of finite type and with affine diagonal over k. 
As in the case of moduli functors, one can define the notions of categorical/coarse/fine moduli spaces for algebraic stacks in the following way. Let be an algebraic stack over k and ϕ: →(-,M) a morphism of stacks, where M is an algebraic space over k. We say that ϕ is * a categorical moduli space for if any other morphism ψ: →(-,N) with N an algebraic space over k factorizes through ϕ. * a coarse moduli space for if it is a categorical moduli space and moreover induces a bijection between the set of isomorphism classes of k-points of and ( k,M). * a fine moduli space for if ϕ is an isomorphism of stacks. Note that the above moduli stacks never admit fine moduli spaces, since the automorphism groups of the objects are non-trivial. As to the existence of categorical or coarse moduli spaces for moduli stacks, this is equivalent to the existence of categorical or coarse moduli spaces for the corresponding moduli functors described above. A more refined version of a categorical moduli space is the following. A quasi-compact and quasi-separated morphism ϕ: → M from an algebraic stack to an algebraic space M is said to be a good moduli space if * the pushfoward functor on quasi-coherent sheaves is exact, and * the induced morphism of sheaves _M →ϕ_* _ is an isomorphism. Good moduli spaces are always categorical <cit.>, but not coarse in general. A natural question to be discussed next is that of the existence of a categorical/good/coarse moduli space for the above moduli stacks. § MODULI SPACES OF SHEAVES In algebraic and in complex geometry, the first moduli spaces of sheaves of particular interest were moduli spaces of line bundles and more generally of vector bundles. In the latter case it was soon observed that in order to obtain moduli spaces with good geometric properties (such as local-separatedness) one has to impose some restriction on the class of vector bundles to be classified. This led Mumford to introduce the slope-stability condition in <cit.>. The functor _X,ω,γ^lf,s admits a separated coarse moduli space M_X,ω,γ^lf,s. In the (AG) case, M_X,ω,γ^lf,s is a quasi-projective scheme over k. This result was proved in the algebraic geometrical context using Geometric Invariant Theory methods over curves by Mumford <cit.>, Seshadri <cit.> (see also <cit.>), over surfaces by Gieseker <cit.> and in higher dimensions by Maruyama <cit.>. In the analytic setup, the result is a consequence of the existence of a coarse moduli space of simple vector bundles, proved by Norton <cit.> using Banach-analytic techniques, together with the openness of stability, Proposition <ref>. In general the moduli spaces M_X,ω,γ^lf,s are rarely fine <cit.>. A situation when this is known to happen is when X is a curve, ω is the fundamental class of X, and the rank and degree of the concerned vector bundles are coprime. In complex geometry, moduli spaces of stable vector bundles are related via the Kobayashi-Hitchin correspondence to moduli spaces of Hermite-Einstein connections. Let (X,ω) be a Gauduchon compact complex manifold. Let E be a smooth complex vector bundle on X and h a Hermitian metric on E. Then the set-theoretical Kobayashi-Hitchin correspondence yields a real-analytic isomorphism M^HE_X,ω,E,h→ M^lf,s_X,ω,E between the moduli space of ω-Hermite-Einstein connections on (E,h) and the moduli space of ω-stable holomorphic structures on E. In general the moduli spaces M_X,ω,γ^lf,s are not compact. 
If one aims at constructing natural compactifications, one generally needs to relax both the locally-freeness and the stability condition of the coherent sheaves to be parametrized. We have already mentioned moduli spaces of simple vector bundles. These extend to (not necessarily separated) moduli spaces of simple torsion-free sheaves. The functor _X,γ^spl admits a coarse moduli space M_X,γ^spl. The functor _X,ω,γ^s admits a separated coarse moduli space M_X,ω,γ^s as an open subset in M_X,γ^spl. We note that in the algebraic setup one can further show that the moduli space M_X,ω,γ^s is quasi-projective over k <cit.>. When X is a curve, the functor _X,ω,γ^ss admits a projective categorical moduli space M_X,ω,γ^ss, which contains M_X,ω,γ^s as an open subscheme <cit.>. The geometric points of M_X,ω,γ^ss correspond to isomorphism classes of Seshadri graduations of semistable sheaves, and therefore M_X,ω,γ^ss is not a coarse moduli space in general. When trying to employ Geometric Invariant Theory to construct compactifications of M_X,ω,γ^s in higher dimensions, one is led to consider the Gieseker-Maruyama-semistability condition, see Definition <ref>. In the algebraic setup and in zero characteristic, the substack oh_X,ω,γ^GMss⊂ oh_X,γ of Gieseker-Maruyama-semistable sheaves is open and admits a projective good moduli space M_X,ω,γ^GMss. In positive characteristic, the above statement holds if one replaces “good moduli” by “adequate moduli”, see Alper <cit.>. In the general (KG) setup, the existence of the Gieseker-Maruyama moduli space is generally unknown. There exist partial results when X is complex projective and ω is a non-ample Kähler class <cit.>. A different way to enlarge the open substack oh_X,ω,γ^lf,s is to look at oh_X,ω,γ^(SLF) and oh_X,ω,γ^(SR). For these one still gets good moduli spaces in the (AG) setting and characteristic zero <cit.>. See also <cit.> for the complex analytic case. § FURTHER TOPICS §.§ Donaldson-Uhlenbeck compactification In this subsection we will consider the case when (X,ω) is a polarized smooth complex projective surface. Let E be a smooth complex vector bundle on X and h a Hermitian metric on E. Recall that by the moduli-theoretical Kobayashi-Hitchin correspondence there is a real-analytic isomorphism M^HE_X,ω,E,h→ M^lf,s_X,ω,E between the moduli space of ω-Hermite-Einstein connections on (E,h) and the moduli space of ω-stable holomorphic structures on E. It is important in Donaldson theory to work with suitable compactifications of moduli spaces of anti-self-dual connections. These were constructed by Donaldson based on compactness results due to Uhlenbeck <cit.> and lead also to a compactification M^DU_X,ω,E,h of M^HE_X,ω,E,h, which we call the Donaldson-Uhlenbeck compactification. On the algebraic geometrical side, we have already seen a compactification of the moduli space M^lf,s_X,ω,E of slope-stable vector bundles by adding Gieseker-Maruyama-semistable torsion-free sheaves at the boundary, which is the Gieseker-Maruyama moduli space M^GMss_X,ω,(E). Le Potier <cit.> and Jun Li <cit.> constructed a projective morphism φ: M^GMss_X,ω,(E)→_^N which is an immersion on M^lf,s_X,ω,E. Furthermore, Li proved that the closure of the image φ(M^lf,s_X,ω,E) inside _^N is homeomorphic to the Donaldson-Uhlenbeck compactification M^DU_X,ω,E,h. This extends the inverse of the Kobayashi-Hitchin correspondence as a homeomorphism of compact spaces φ(M^lf,s_X,ω,E)→ M^DU_X,ω,E,h. 
In particular one can transfer the complex algebraic structure of φ(M^lf,s_X,ω,E) to the Donaldson-Uhlenbeck compactification. As a further compactification of M^lf,s_X,ω,γ, Huybrechts and Lehn constructed in <cit.> a complex projective moduli space M^μ ss_X,ω,γ of slope-semistable sheaves over a smooth surface, which comes together with a natural transformation of functors _X,ω,γ^ss→(-,M^μ ss_X,ω,γ). However, M^μ ss_X,ω,γ does not corepresent the moduli functor in general. Similar results were obtained by Greb, Sibley, Toma, Wentworth <cit.> in dimension larger than two using the analogue of the Donaldson-Uhlenbeck compactification due to Tian <cit.> and the higher dimensional analogue of the Huybrechts-Lehn moduli space M^μ ss_X,ω,γ. §.§ Moduli of pure sheaves Until now we have only considered classification problems of torsion-free sheaves. It is however natural to extend this research to lower-dimensional coherent sheaves. The analogue of the torsion-free condition in this situation is the purity condition. A coherent sheaf E on X is said to be pure of dimension d if any non-trivial coherent subsheaf F ⊂ E has dimension d too. One defines the notions of ω-(semi)stability and GM-(semi)stability also for pure sheaves in the (AG) and (KG) setups. For this one writes the Hilbert polynomial of a coherent sheaf E of dimension d on X in the following form P_ω(E,m) = ∑_i=0^d α_i(E) m^i, where α_i(E) = 1/i!∫_X (E)ω^i _X. A coherent sheaf E of dimension d on (X,ω) is said to be ω-(semi)stable if * E is pure, * for any coherent subsheaf F ⊂ E with 0 < α_d(F) < α_d(E) we have α_d-1(F)/α_d(F)α_d-1(E)/α_d(E). A coherent sheaf E of dimension d on (X,ω) is said to be GM-(semi)stable if * E is pure, * for any coherent subsheaf F ⊂ E with 0 < α_d(F) < α_d(E) we have P_ω(F,m)/α_d(F)P_ω(E,m)/α_d(E) for m≫ 0. One can also consider the following notion of semistability that interpolates between slope-semistability and GM-semistability. For integers 1 ≤ℓ≤ d ≤ n, a coherent sheaf E of dimension d on (X,ω) is said to be ℓ-(semi)stable if * E is pure, * for any coherent subsheaf F ⊂ E with 0 < α_d(F) < α_d(E) we have ∑_i=d-ℓ^d α_i(F) m^i/α_d(F)∑_i=d-ℓ^d α_i(E) m^i/α_d(E) for m≫ 0. For any class γ and ℓ between 1 and (γ), we obtain a chain of corresponding open subfunctors of ℓ-semistable sheaves _X,ω,γ^GMss⊂_X,ω,γ^ℓ ss⊂_X,ω,γ^ss⊂_X,γ. The existence of moduli spaces of pure sheaves has been established in various situations: * The case of simple pure sheaves is settled in <cit.> and <cit.>. * In the characteristic zero (AG) context, Simpson <cit.> proved the existence of a projective categorical moduli space for _X,ω,γ^GMss using techniques of Geometric Invariant Theory; the positive characteristic case is to be found in <cit.>. * Analogues of the Huybrechts-Lehn moduli spaces for ℓ-semistable sheaves were constructed in <cit.>. §.§ Change of semistability and wall-crossing In this subsection we restrict the discussion to the algebraic geometric setting. When the dimension of X is larger than one, the moduli spaces of semistable sheaves above depend on the choice of the polarization ω in the ample cone ^1(X). The situation which seems to appear in general is that for any numerical class γ∈(X) there exists a locally finite set of real algebraic hypersurfaces ⊂^1(X), called walls, leading to a decomposition into connected components, called chambers, of ^1(X) ∖⋃_∈ accounting for the change of (semi)stability in the following sense. 
If ω_1, ω_2 are ample classes in the same chamber, then a coherent sheaf E of class γ is ω_1-(semi)stable if and only if E is ω_2-(semi)stable. If this is indeed the case, then moduli spaces with respect to ω_1 and ω_2 coincide. The next step to understand the variation of the moduli spaces depending on the polarization would be to study wall-crossing, i.e. the relation between moduli spaces corresponding to adjacent chambers. The existence of a chamber structure as above is guaranteed once the following stronger boundedness property for semistable sheaves is established. Given a numerical class γ∈(X), we say that the uniform boundedness of semistability holds for γ if for any compact subset K ⊂^1(X)_, the set of isomorphism classes of coherent sheaves of type γ on X that are ω-semistable with respect to some ω∈ K is bounded. In the surface case uniform boundedness is established using the Bogomolov inequality and the Hodge Index Theorem <cit.>. In this case the resulting walls are moreover rational linear, a situation which is propitious to the study of wall-crossing phenomena, see the above references for such studies. Another situation where a rational linear chamber structure exists is the case of two-dimensional pure sheaves <cit.>. In higher dimensions a wall and chamber structure exists <cit.>, however walls are neither linear nor rational in general <cit.>. This problem was circumvented in <cit.> by introducing a more refined notion of semistability for which wall-crossing is well-behaved.
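To make the wall-and-chamber picture concrete in the simplest case (an illustration added here, following the standard description of walls for rank-two sheaves on surfaces), let X be a smooth projective surface and γ the type of rank-two sheaves with Chern classes (c_1, c_2). The walls may then be taken to be
\[
W^{\xi} = \{\, \omega \ \text{ample} \ : \ \xi\cdot\omega = 0 \,\},
\qquad \xi \in \operatorname{NS}(X),\quad \xi \equiv c_1 \ (\operatorname{mod}\ 2),\quad c_1^2 - 4c_2 \le \xi^2 < 0 .
\]
Each W^ξ is a rational linear hypersurface of the ample cone, only finitely many of them meet a given compact subset, and the (semi)stability of a sheaf of type γ can only change when ω crosses some W^ξ; the lower bound ξ^2 ≥ c_1^2 - 4c_2 = -Δ is where the Bogomolov inequality and the Hodge Index Theorem mentioned above enter.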
http://arxiv.org/abs/2407.13525v1
20240718135859
Short-period Post-Common Envelope Binaries with Balmer Emission from SDSS and LAMOST Based on ZTF Photometric Data
[ "Lifang Li", "Fenghui Zhang" ]
astro-ph.SR
[ "astro-ph.SR" ]
firstpage–lastpage Discussion: Effective and Interpretable Outcome Prediction by Training Sparse Mixtures of Linear Experts [ July 22, 2024 ======================================================================================================== § ABSTRACT We present here 55 short period PCEBs containing a hot WD and a low-mass MS. Based on the photometric data from ZTF DR19, the light curves are analyzed for about 200 WDMS binaries with emission line(s) identified from SDSS or LAMOST spectra, in which 55 WDMS binaries are found to exhibit variability in their luminosities with a short period and are thus short-period binaries (i.e. PCEBs). In addition, it is found that the orbital periods of these PCEBs locate in a range from 2.2643 to 81.1526 hours. However, only 6 short-period PCEBs are newly discovered and the orbital periods of 19 PCEBs are improved in this work. Meanwhile, it is found that three objects are newly discovered eclipsing PCEBs, and a object (i.e. SDSS J1541) might be the short-period PCEB with a late M-type star or a brown dwarf companion based on the analysis of its spectral energy distribution. At last, the mechanism(s) being responsible for the emission features in the spectra of these PCEBs are discussed, the emission features arising in their optical spectra might be caused by the stellar activity or an irradiated component owing to a hot white dwarf companion because most of them contain a white dwarf with an effective temperature higher than ∼10,000 K. stars:binaries (including multiple): close –Stars: AGB and post-AGB – Stars: white dwarfs – Stars: evolution § INTRODUCTION The typical final products of stellar evolution for about 98 per cent of main sequence (MS) stars are white dwarfs (WDs), because their masses are too lower to ignite He, C / O, or ONe so that nuclear reactions ceases, leaving a degenerate core of He, C / O, or ONe after their envelopes are lost owing to stellar wind or mass transfer interactions <cit.>. However, in some cases, He is ignited if no mass transfer interactions take place or they take place in AGB. Meeanwhile, He in the core is also probably ignited if the rapid mass transfer interactions take place near the tip of RGB then leave a hot subdwarf (∼ 0.5 M_) with a shell mass of 0.02M_ <cit.>. Close compact binaries are usually thought to be the products of common envelope (CE) evolution <cit.> which is a result of the unstable mass transfer depended on the mass ratio of binary systems <cit.> once the massive star is on the giant branch (GB) or asymptotic giant branch (AGB) with a radius of about 100 R_ <cit.> and about 25 per cent binaries would undergo CE phase <cit.>. During the CE phase, the orbital energy of the donor's core and its companion is rapidly injected into the CE owning to differential rotation, decreasing the separation between two components of binary system. If the orbital energy is enough to eject the envelope before merger, exposing a post-common envelope binary (PCEB), consisting of a WD and a companion, generally an MS star <cit.>. The CE evolution phase plays an important role in many evolutionary pathways leading to the formation of compact objects in short period binaries, such as millisecond pulsars, X-ray binaries, CVs, double WDs, double neutron stars, and strongly magnetized WDs , or even double black holes <cit.>. 
Although the main features in CE phase had been sketched by <cit.> more than 40 years ago, then a lot of studies on the formation of PCEBs had been carried out by many investigators <cit.>, our understanding on the CE evolution is still very poor <cit.>. Any significant progress in our understanding of the CE evolution certainly requires both unrelenting theoretical efforts and innovative observational input <cit.>. Since the CE phase usually lasts a very short timescale <cit.> and is thus virtually impossible to observe directly. The heavy responsibility of restricting the CE phase falls on objects that have most probably undergone a CE phase in their past <cit.>. Among all PCEBs, those containing a WD and a MS star, i.e. WDMS binaries, represent the most promising population for deriving such observational constraints since they are very common population of PCEBs <cit.>. Up to now, about 5,000 WDMS binaries have been discovered by various investigators <cit.> based on SDSS <cit.> or LAMOST spectra since <cit.> and <cit.> first attempted to study the WDMS binaries in SDSS. Meanwhile, near 600 WDMS binaries had been identified based on the location within the H-R diagram based on Gaia photometric magnitudes <cit.>. However, the orbital periods of only a small portion of them had been determined by various investigators through radial velocity observations <cit.> or photometric data of CRTS, PTF or ZTF <cit.> although the number of WDMS binaries with determined orbital periods is gradually increasing. In the discovered PCEBs, some of them were found to contain a low mass He WD and a MS companion with M 0.2 M_ <cit.>, however the oldest globular clusters in the Galactic halo are currently producing ∼ 0.53M_ WDs from MS progenitors with M 0.8 M_ <cit.>. Therefore, the low-mass He WDs in these binaries should be formed from the enhanced mass loss from post-MS stars without reaching asymptotic branch (AGB) and without ever igniting He in interacting binaries <cit.>. Although the formation scenario for such binaries had been proposed by <cit.>, however, we do not know how is the relatively dense envelope of their progenitors with mass more than 1.0 M_ expelled by a low-mass companion with a relatively low orbital energy. In fact, there are a fair number of single low-mass WDs that show neither variability in their radial velocities nor infrared excess <cit.>. The existence of single low-mass WDs had been explained by many investigators <cit.>. Therefore, it is necessary to find more PCEBs for limiting the results of CE evolution. In this work, the light curves from ZTF DR19 are analyzed for about 200 WDMS binaries with emission lines identified from SDSS or LAMOST, then 55 WDMS binaries are found to be PCEBs with a short period located in a range from 2.2643 to 81.1526 hours. Among these PCEBs , only 6 are newly discovered, and the orbital periods of 19 PCEBs are improved. In addition, three new eclipsing PCEBs are found out from these PCEBs based on their light curves. The analysis of the photometric data collected from ZTF survey is presented in Sect. 2. In Sect. 3, we discuss the results and draw our conclusions, § PHOTOMETRIC DATA ANALYSIS Although many WDMS binaries (about 6,000) were identified based on their spectra from SDSS and LAMOST or by their location in H-R diagram based on the Gaia photometric observations, however only a small portion of them were found to be the short-period PCEBs through the analysis of their radial velocities or light curves. 
We have carefully checked the SDSS and/or LAMOST spectra of some known short-period PCEBs <cit.> and find that most of them exhibit emission features in their optical spectra. This implies that short-period PCEBs might be more easily discovered among WDMS binaries with emission line(s). We therefore attempt to find more PCEBs among about 200 WDMS binaries with emission line(s) in the Balmer series <cit.> by detecting variability in their luminosities based on the analysis of photometric data from the ZTF survey <cit.>. As a result, 55 binary systems are found to exhibit variability in their luminosities through the analysis of their light curves, with orbital periods in the range from 2.2643 to 81.1526 hours (listed in Table <ref>), implying that they are short-period PCEBs. Matching them with the Simbad database shows that 6 short-period PCEBs are newly discovered and that the orbital periods of 19 PCEBs have been improved. A detailed comparison between the orbital periods obtained by us and the known values for the 49 known PCEBs is shown in Fig. <ref>. As seen from Fig. <ref>, the orbital periods derived in this work are consistent with the known periods for 30 short-period PCEBs, whereas the periods derived by us for 8 PCEBs (displayed as squares) differ from the upper limits on their orbital periods estimated by <cit.> from several radial velocity (RV) measurements. In addition, the orbital periods we obtain for 5 PCEBs (indicated by triangles) are only half of their known values, which were derived under the assumption that each full light curve should exhibit two maxima and two minima because of the ellipsoidal effect. However, we do not find any difference between the two maxima or between the two minima in their light curves when their known periods are used to calculate the phases, implying that our results for these PCEBs might be correct; they must nevertheless be confirmed by radial velocity observations in the future (a schematic version of this two-maxima check is sketched below).
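The check just described, namely folding at the published period (twice our value) and asking whether the two halves of the cycle actually differ, can be made concrete as follows. This is only an illustrative Python sketch under our own assumptions: the binning, the 3σ threshold and the input file are choices made here for illustration, not a procedure specified in the text.

import numpy as np

def binned_profile(phase, mag, n_bins=25):
    """Median magnitude in equal phase bins (NaN where a bin is empty)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1
    return np.array([np.median(mag[idx == k]) if np.any(idx == k) else np.nan
                     for k in range(n_bins)])

def halves_differ(mjd, mag, err, p_short):
    """Fold the light curve on twice the candidate period and compare the two
    half-cycles: a significant difference would indicate two unequal
    maxima/minima per orbit (ellipsoidal-like modulation) and favour the longer,
    published period; no difference is consistent with the shorter period."""
    phase_long = (mjd / (2.0 * p_short)) % 1.0
    first = phase_long < 0.5
    prof1 = binned_profile(2.0 * phase_long[first], mag[first])
    prof2 = binned_profile(2.0 * (phase_long[~first] - 0.5), mag[~first])
    good = ~np.isnan(prof1) & ~np.isnan(prof2)
    return np.max(np.abs(prof1[good] - prof2[good])) > 3.0 * np.median(err)

# Usage with a hypothetical r-band light curve and a candidate (half) period in days:
# mjd, mag, err = np.loadtxt("ztf_rband_lightcurve.txt", unpack=True)
# print(halves_differ(mjd, mag, err, p_short=0.122))

The 3σ criterion only mirrors the visual comparison described above and is not meant as a formal statistical test.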
The newly discovered PCEBs, together with 14 PCEBs with improved orbital periods, are described below.

§.§ Short-period PCEBs with Balmer emission line(s)

§.§.§ SDSS J0029

SDSS J002926.82+252553.90 (hereafter SDSS J0029) was first identified as a DA WD by <cit.>, and its atmospheric and physical parameters were derived to be T_eff=19,148(173) K, log g=7.58(3) and M=0.459(7) M_⊙ by <cit.> based on the spectrum from SDSS DR14. This suggests that SDSS J0029 is a hot, low-mass He WD, which is likely to require a close companion for its formation and thus to be a short-period PCEB <cit.>. Its SDSS spectrum indicates that this WD should be accompanied by a cool M-type star showing the Na I λλ8183.27, 8194.81 absorption doublet (see Fig. <ref>). Meanwhile, Fig. <ref> shows that Hα emission is present in its spectrum. The emission feature may arise from wind accretion, magnetic activity and/or irradiation by the hot WD companion; the resulting reflection effect or star spots (due to magnetic activity) therefore provide a favorable opportunity for detecting variability in its luminosity. We thus collect 724 data points in the g-band, 975 in the r-band and 133 in the i-band from ZTF DR19 for this object, and find that the g- and r-band light curves exhibit a scatter much larger than the observational uncertainties. This implies that the luminosity of this object varies with time. After some scattered data points are removed, the light curves (720 g-band, 953 r-band and 131 i-band data points) are analyzed with the code Period04 <cit.>. The orbital period is derived to be 0.12165200 days in the g-band, 0.12165254 days in the r-band and 0.12165232 days in the i-band, with 3σ errors of 5.3×10^-7, 2.7×10^-7 and 3.3×10^-7 days, respectively. The periodograms indicating the orbital period of SDSS J0029 are shown in the left panels of Fig. <ref>. In general, for PCEBs containing a hot WD and an M-type star, the amplitude of the luminosity variation caused by the reflection effect or spot activity is larger in the r-band than in the g-band, so the orbital period determined from the r-band data is more accurate than that derived from the g-band observations. Therefore, the orbital period derived from the r-band data is listed in Table <ref>. The phase of each data point is then calculated for an orbital period of 0.12165254 days, and the phase-folded light curves are shown in the top right panel of Fig. <ref>. Meanwhile, as seen from the left panels of Fig. <ref>, other possible orbital periods cannot be ruled out directly, since several peaks have similar power. Two such possible periods, indicated by the two peaks nearest the main (highest) peak, are derived to be 0.1084267(1) and 0.1385524(2) days from the r-band data, and the corresponding light curves are plotted in the middle and bottom right panels of Fig. <ref>. As seen from the right panels of Fig. <ref>, it is difficult to distinguish the light curves folded on the different periods by eye. In order to determine the orbital period of this object, we therefore combine the 953 r-band data points into 50 normal points for each of the three periods mentioned above and calculate the corresponding χ^2-values, defined as χ^2 = (1/N) ∑_{i=1}^{N} (r_i − r_nor,i)^2/σ_i^2, where r_i is the magnitude of the ith r-band data point, r_nor,i is the magnitude obtained by interpolation among the normal points at the phase of the ith data point, and σ_i is its observational uncertainty; the values are listed in Table <ref> (a schematic implementation of this period search and χ^2 comparison is given below). The χ^2-value based on the period indicated by the main peak is the smallest, which implies that the light curve folded on this period is the smoothest and shows the fewest scattered points for the same photometric data. Therefore, the orbital period of this object should be the one indicated by the main peak in the periodograms of SDSS J0029, and SDSS J0029 is a short-period PCEB with an orbital period of about 2.9197 hours.
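Since the same periodogram-plus-χ^2 procedure is applied to every object discussed below, a schematic implementation may be helpful. The following Python sketch is only illustrative: astropy's Lomb-Scargle periodogram stands in for Period04, the input file name is hypothetical, the normal points are taken here as bin medians (the text does not specify how they are constructed), and the two comparison frequencies are simply the side-peak periods quoted above for SDSS J0029 expressed as frequencies.

import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical cleaned r-band light curve: columns of MJD, magnitude, uncertainty.
mjd, mag, err = np.loadtxt("ztf_rband_lightcurve.txt", unpack=True)

# Periodogram over a frequency range covering periods from ~2 hours to a few days.
ls = LombScargle(mjd, mag, err)
freq, power = ls.autopower(minimum_frequency=0.2, maximum_frequency=15.0)
f_main = freq[np.argmax(power)]   # frequency of the main peak

def chi2_normal_points(period, n_bins=50):
    """chi^2 of the folded light curve against 50 'normal points', as defined in
    the text: chi^2 = (1/N) * sum_i (r_i - r_nor,i)^2 / sigma_i^2."""
    phase = (mjd / period) % 1.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(phase, edges) - 1
    # Normal points: the median magnitude in each of the 50 phase bins.
    nor = np.array([np.median(mag[idx == k]) if np.any(idx == k) else np.nan
                    for k in range(n_bins)])
    good = ~np.isnan(nor)
    # Interpolate the normal points back onto the phase of every observation.
    r_nor = np.interp(phase, centres[good], nor[good], period=1.0)
    return np.mean(((mag - r_nor) / err) ** 2)

# Compare the main peak with the two side peaks of similar power
# (frequencies correspond to the periods 0.1084267 d and 0.1385524 d quoted above).
for f in (f_main, 1.0 / 0.1084267, 1.0 / 0.1385524):
    print(f"P = {1.0 / f:.7f} d   chi2 = {chi2_normal_points(1.0 / f):.3f}")

Taking the normal points as bin medians keeps them robust against the remaining scattered points; any comparable averaging would be expected to give the same ranking of the candidate periods.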
§.§.§ SDSS J0032

SDSS J003221.87+073934.50 (SDSS J0032) was first classified as a PCEB by <cit.>, who gave the following atmospheric and physical parameters for this object: T_eff=21,045 K, M_WD=0.38 M_⊙, M_s=0.431 M_⊙, d=398 pc, and a peak-to-peak radial velocity variation of 298.70 km s^-1. These parameters were subsequently investigated again by other investigators <cit.>. The WD component of this binary system exhibits a large variability in its radial velocity, implying that SDSS J0032 should be a short-period PCEB. Therefore, we collect the photometric data from ZTF DR19 and find that this object indeed shows an evident change in its luminosity. Using the same method as for SDSS J0029, its orbital period is derived to be 0.1539950(14) days and 0.1539945(27) days from the r- and i-band data, respectively, and the results derived from the r-band data are listed in Table <ref>. Since the periodogram based on the i-band data shows a peak distribution similar to that based on the r-band data, only the periodogram derived from the r-band data is shown in Fig. <ref>. We also calculate the χ^2-values for the main peak and for a peak with a frequency of 5.4910388 d^-1 (P_orb=0.1821151 days) and a power similar to the main one; they are listed in Table <ref>. Table <ref> shows that the χ^2-value based on the period implied by the main peak is the smaller one. This suggests that the period indicated by the main peak should be the orbital period, and the light curves folded on an orbital period of 0.15399450 days are shown in Fig. <ref>(a). Fig. <ref>(a) shows that this object is indeed a short-period PCEB, and the periodic variation in its luminosity might be caused by the reflection effect or by spot activity of its MS component.

§.§.§ SDSS J0253

SDSS J025301.60-013006.96 (SDSS J0253) was first identified as a DA+M binary based on the spectra from SDSS DR9 by <cit.>. The atmospheric and physical parameters of this object were derived from the analysis of its spectra from LAMOST and SDSS, and it was found that this binary system contains a hot WD and an M-type dwarf <cit.>. Fig. <ref> shows convincing emission at Hα, Hβ and the Ca II triplet, which might arise from an irradiated component in this system, implying that SDSS J0253 might be a short-period PCEB. Therefore, variability in its luminosity may be detectable from photometric observations if its orbital inclination is suitable. In addition, <cit.> found a spurious signal with a frequency of 1.37165769 d^-1 (corresponding to a period of 0.7290448 days), probably related to the time-dependent scan angle, for this object based on Gaia DR3 G-band time-series data. In order to determine whether this spurious period is the orbital period, we collect the photometric data from ZTF DR19 to study the variability in the luminosity of SDSS J0253. In total, 464 data points in the g-band, 476 in the r-band and 46 in the i-band are obtained. The light curves are then analyzed after some data points with a large scatter are excluded, and the orbital period is derived to be 0.4264210(19) days in the g-band and 0.4264303(15) days in the r-band; the results are listed in Table <ref>, and for the same reason as for SDSS J0032 only the periodogram based on the r-band data is shown in Fig. <ref>. Although Fig. <ref> shows a peak with a power similar to the main one at a frequency of 1.3423156 d^-1 [corresponding to a period of 0.7449813(89) days], even this frequency differs from the signal frequency obtained by <cit.>, so the signal discovered by <cit.> is probably a spurious one related to the time-dependent scan angle rather than the orbital period of this object.
Meanwhile, Table <ref> shows that the χ^2-value based on the period indicated by the main peak is smaller than that based on the competing peak, implying that the period indicated by the main peak is the orbital period of this object. The phase-folded light curves in the g, r and i-bands are shown in Fig. <ref>(b), and the variability in the luminosity might be caused by the reflection effect or a star spot.

§.§.§ SDSS J0306

SDSS J030607.19-003114.44 (SDSS J0306, also named KUV 03036-0043) was first observed spectroscopically in the KISO Schmidt ultraviolet-excess survey and was classified as a DA+dM binary <cit.>. Subsequently, different atmospheric and physical parameters for the DA star were obtained by various investigators, who found that this WD might be a He-WD <cit.> or a C/O-WD <cit.>. Although KUV 03036-0043 was identified as a WD+M4/M5 binary by <cit.> and <cit.>, it was argued to be a close binary by <cit.> and <cit.>, who estimated an upper limit of 2.66 days for its orbital period based on several RV measurements. The radial velocities of the DA WD component were determined by several previous investigators <cit.>, and its radial velocity was found to exhibit a large peak-to-peak variation <cit.>. Meanwhile, Balmer emission lines are evidently present in its optical spectra (see Fig. <ref>); they might be caused by magnetic activity or by irradiation of the M-type dwarf by its hot WD companion. These observational characteristics suggest that KUV 03036-0043 might be a short-period PCEB, which provides a favorable opportunity to observe variability in its luminosity. We therefore collect the photometric data from ZTF DR19 to investigate the variability in the luminosity of SDSS J0306. In total, 422 data points in the g-band and 428 in the r-band are obtained. After some data points with a large scatter are excluded, the light curves are analyzed. As for the objects mentioned above, only the periodogram based on the r-band data is shown in Fig. <ref>. Meanwhile, the χ^2-values are obtained for the r-band light curves folded on the two periods indicated by the main peak and by another peak with similar power at a frequency of 2.8505955 d^-1 (P_orb=0.3508039 days); they are listed in Table <ref>. Table <ref> shows that the χ^2-value based on the period indicated by the main peak is again the smaller one, suggesting that this period is the orbital period; it is derived to be 0.541179(65) days in the g-band and 0.541158(54) days in the r-band. The phase-folded light curves are displayed in Fig. <ref>(c). As seen from Fig. <ref>(c), this object is a short-period PCEB with an orbital period of about 12.9878 hours, and its luminosity change might be caused by a star spot due to magnetic activity or by the reflection effect. In addition, the orbital period obtained by us is indeed smaller than the upper limit listed in <cit.>, suggesting that it might be correct.

§.§.§ SDSS J0747

SDSS J074730.57+430403.65 (SDSS J0747) was first identified as a WD+dM binary from its SDSS spectrum by <cit.>. The atmospheric and physical parameters of this object were obtained through an analysis of the SDSS spectrum <cit.>, and it was found to be a DA+M4/M5 binary <cit.>.
The radial velocities of SDSS J0747 were analyzed by <cit.> and <cit.>, who reported a peak-to-peak radial velocity variation of 328.90 km s^-1. This suggests that SDSS J0747 might be a short-period PCEB. Meanwhile, Balmer emission lines are evidently present in its SDSS optical spectra; this also indicates that SDSS J0747 might be a short-period PCEB, and the lines might arise from magnetic activity or from irradiation of the cool component by the hot WD <cit.>. The radial velocities were derived to be -487.6(248.4) km s^-1 and 177.0(417.7) km s^-1 for the M-type dwarf and WD components, respectively, by <cit.>, who also estimated an upper limit of 1.33 days for the orbital period of this object, suggesting that variability in its luminosity might be detectable. In order to search for this variability, we collect the photometric data from ZTF DR19; 570 data points in the g-band and 2394 in the r-band are obtained, the light curves are analyzed after some data points with a large scatter are removed, and the periodogram indicating the orbital period of SDSS J0747 is displayed in Fig. <ref>. The χ^2-values derived for the r-band light curves folded on the periods indicated by the main peak and by another peak with a frequency of 0.2728867 d^-1 (P_orb=3.66453 days) are listed in Table <ref>. As seen from Table <ref>, the χ^2-value based on the main peak is the smaller one. This suggests that the period indicated by the main peak should be the orbital period of this object. The orbital period is thus derived to be 0.5781084(17) days in the r-band, and the result is listed in Table <ref>. The phase-folded light curves of SDSS J0747 are plotted in Fig. <ref>(a), which shows that this object should be an eclipsing PCEB with a short orbital period of about 13.875 hours. In addition, the orbital period determined for this object is indeed smaller than the upper limit listed in <cit.>, also implying that it might be correct.

§.§.§ SDSS J0908

SDSS J090847.38+613141.43 (hereafter SDSS J0908) was first identified as a WDMS binary by <cit.>, and the atmospheric and physical parameters of the DA star in this system were given as follows: T_WD=11,000 K, log g=8.25, M_WD=0.827 M_⊙, V_WD=37.6 km s^-1 and V_dM=-124.6 km s^-1. Another estimate of the radial velocity of the DA WD in this object is -40.436±28.598 km s^-1 <cit.>. These works indicate that this object might be a short-period PCEB. An upper limit of 137.43 days on its orbital period was estimated by <cit.>. However, as seen from Fig. <ref>, Balmer emission lines are present in its SDSS optical spectrum, which suggests that this object might be a short-period PCEB. In order to obtain an accurate period, we collect the photometric data from ZTF DR19 for SDSS J0908; 649 data points in the g-band and 780 in the r-band are obtained. The light curves of SDSS J0908 are analyzed with the same method as above, and only the r-band periodogram indicating the orbital period is shown in Fig. <ref>. As seen from Fig. <ref>, some possible orbital periods indicated by other peaks with power similar to the main peak are difficult to rule out. In order to determine the orbital period, we calculate the χ^2-values for the r-band light curves folded on the periods indicated by the main peak and by another peak with a frequency of 6.2843572 d^-1 (P=0.1591253 days), which are listed in Table <ref>.
Table <ref> shows that the χ^2-value based on the main peak is again the smaller one, implying that the period indicated by the main peak should be the orbital period of this object; it is derived to be 0.1372302(10) days in the g-band and 0.1372305(7) days in the r-band, and the results are listed in Table <ref>. The phase-folded light curves of SDSS J0908 are shown in Fig. <ref>(b), which shows that SDSS J0908 should also be a close eclipsing PCEB. Meanwhile, the orbital period derived for this object is much smaller than the upper limit estimated by <cit.>.

§.§.§ SDSS J0927

SDSS J092712.02+284629.28 (SDSS J0927) was first suspected to be a DA WD by <cit.>, was then identified as a DA WD by <cit.>, and was later reclassified as a DA/M binary <cit.>. The atmospheric and physical parameters of both components were derived from the analysis of the spectra from LAMOST or SDSS <cit.>. The parameters were derived to be T_eff=22,037 K, log g=7.80 and M_WD=0.52 M_⊙ by <cit.>, who gave a distance of 237 pc from the Earth, which is the closest to the distance of about 235.4±5.5 pc indicated by its Gaia DR3 parallax <cit.>. The radial velocities of the two components of SDSS J0927 were derived to be -73.5 km s^-1 and 133.1 km s^-1 for the M-type star and the DA star, respectively, by <cit.>, who estimated an upper limit of 13.46 days for the orbital period of this object; this suggests that it might be a PCEB. The Hα emission line is evidently present in its optical spectra from SDSS and LAMOST (see Fig. <ref>), which also implies that this object might be a short-period PCEB. We therefore collect the photometric data for SDSS J0927 from ZTF DR19; 356 data points in the g-band and 758 in the r-band are obtained. The light curves are analyzed after some data points with a large scatter are excluded, and the orbital period is determined to be 0.3036308(63) days in the g-band and 0.3036211(47) days in the r-band. The results are listed in Table <ref>, and only the periodogram based on the r-band data is shown in Fig. <ref>. As for the objects mentioned above, we calculate the χ^2-values for the r-band light curves folded on the periods indicated by the main peak and by another peak with similar power (see Table <ref>). The results suggest that the period indicated by the main peak should be the orbital period, and the phase-folded light curves of SDSS J0927 are displayed in Fig. <ref>(d). As seen from Fig. <ref>(d), SDSS J0927 should be a PCEB with an orbital period of about 7.287 hours. In addition, the orbital period obtained for this object is indeed smaller than the upper limit given by <cit.> and thus might be correct.

§.§.§ SDSS J1038

SDSS J103837.22+015058.48 (hereafter SDSS J1038) was classified as a close WD+MS binary by <cit.> based on the spectra from SDSS DR4. Its atmospheric and physical parameters were obtained by other investigators based on the SDSS spectra <cit.> and were later derived from its LAMOST optical spectrum by <cit.>, who obtained results similar to those based on the SDSS spectra.
The radial velocities of the two components were derived to be -180.4 km s^-1 and 157.4 km s^-1 for the M-type star and the WD, respectively, by <cit.>, who also estimated an upper limit of 2.68 days for the orbital period of this object based on several RV measurements; this implies that it might be a short-period PCEB. In order to determine an accurate period, we collect the photometric data for this object from ZTF DR19 to investigate the variability in its luminosity. In total, 233 data points in the g-band and 345 in the r-band are obtained; the light curves of SDSS J1038 are analyzed after some data points with a large scatter are excluded, and the periodogram indicating the orbital period of this object is displayed in Fig. <ref>. As seen from Fig. <ref>, there are several peaks with power similar to the main peak in the periodogram, so we calculate the χ^2-values for the r-band light curves folded on the periods indicated by the main peak and by another peak with a frequency of 3.2002366 d^-1 (corresponding to P_orb=0.3124769 days); they are listed in Table <ref>. Table <ref> shows that the χ^2-value based on the main peak is the smaller one, suggesting that the period indicated by the main peak should be the orbital period. The orbital period is derived to be 0.835045(35) days from the r-band photometric data, and the phase-folded light curves are shown in Fig. <ref>(e). As seen from Fig. <ref>(e), SDSS J1038 is a short-period PCEB, although the amplitude of the variation in its luminosity is small because of a small orbital inclination. Meanwhile, the orbital period obtained by us is indeed smaller than the upper limit given by <cit.> and thus should be correct.

§.§.§ SDSS J1424

SDSS J142417.74+443225.00 (SDSS J1424) was identified as a WD+MS binary by <cit.> and <cit.> because of its ultraviolet excess. The effective temperature of the main-sequence component was derived to be 5116(72) K <cit.> or 5206 K <cit.>, corresponding to the effective temperature of a ∼K1-type dwarf <cit.>. The radial velocities of this object were determined to be 55.00 km s^-1 (Na I) or 36.50 km s^-1 (Hα) from its SDSS spectrum <cit.>, while another radial velocity of 319.27 km s^-1 was derived from its Gaia BP/RP spectrum <cit.>, suggesting that SDSS J1424 shows a large variability in its radial velocity and thus might be a short-period PCEB. Meanwhile, the Hα emission line is also evidently present in its optical spectra (see Fig. <ref>), which again implies that this object might be a short-period PCEB. In order to determine its orbital period, we collect the photometric data for this object from ZTF DR19 and analyze its light curves in the g, r and i-bands. An orbital period of 0.3549873(29) days is derived for SDSS J1424 from its r-band photometric data, and the periodogram indicating this period is shown in Fig. <ref>. It is again difficult to rule out some possible periods indicated by peaks with power similar to the main one in Fig. <ref>. Using the same method as for the objects mentioned above, we find that the period indicated by the main peak should be the orbital period of this object (see Table <ref>). Its phase-folded light curves are shown in Fig. <ref>(f), which shows that SDSS J1424 is indeed a short-period PCEB with an orbital period of about 8.5187 hours.
§.§.§ SDSS J1541

SDSS J154119.84+120914.68 (SDSS J1541) was first identified as a DA WD by <cit.>, who gave an effective temperature of T_eff=19,000 K and a surface gravity of log g=8.50 based on a spectrum obtained by SDSS on 2011 June 25. These spectral parameters were improved by <cit.> and <cit.>, who derived T_eff=25,464(129) K, log g=7.480(16) and M_WD=0.454(6) M_⊙ from the same spectrum as that used by <cit.>. Fig. <ref> shows that emission lines are evidently present at Hα and the Ca II triplet, suggesting that this object should be accompanied by a cool star. The spectrum of this object is almost entirely dominated by the WD; apart from the emission features of the companion star, other spectral signatures of the companion are difficult to identify. Therefore, the properties of the cool companion must be obtained by analyzing the spectral energy distribution (SED) of this object. However, since the spectral parameters provided by previous investigators differ noticeably, and since the result of the SED analysis depends strongly on the spectral and physical parameters of the WD component, we first verify these parameters through our own spectral analysis. Using a grid of WD model atmospheres <cit.>, we re-analyzed its SDSS spectrum under the assumption that the WD lines are redshifted owing to its proper motion and strong gravity, and obtained an effective temperature of T_eff=25,369(142) K, a surface gravity of log g=7.44(2) and a radial velocity of 51.0(±4.0) km s^-1. We then derived the mass (M_WD=0.44(1) M_⊙), cooling age (τ_c=45.8±3.9 Myr) and radius (R_WD=0.0208(4) R_⊙) of this WD on the basis of a recently updated version of the cooling models <cit.>. Finally, based on the well-known relation d=√(π/a) R_WD [R_⊙]/1 pc <cit.>, the spectroscopic distance of this WD is estimated to be 369(±9) pc from the Earth, which is consistent with the distance of 370.3±8.3 pc derived from its parallax <cit.>. These results are in good agreement with those derived by <cit.>. Based on the ZTF photometric data, this object was discovered to be a variable star with a period of 0.10232 days in the g-band and 0.11403 days in the r-band <cit.>. This also suggests that this object is not a single white dwarf, but a binary containing a hot DA star. Meanwhile, by fitting its Hα emission line in detail with a Gaussian profile, the radial velocity of its cool component is derived to be -183.0±1.3 km s^-1, which is much larger in absolute value than that of the WD <cit.>. Therefore, the mass of its companion should be much lower than that of the DA star. In order to obtain the properties of the cool component of SDSS J1541, it is necessary to investigate its SED based on photometric magnitudes from the optical to the infrared. The optical photometric magnitudes are obtained from SDSS DR7 <cit.>, Gaia DR3 <cit.> and Pan-STARRS DR1 <cit.>, and the infrared photometry is taken from WISE <cit.>. The magnitude and flux density in each passband for SDSS J1541 are listed in Table <ref> and plotted in Fig. <ref> with solid dots. As seen from Fig. <ref>, SDSS J1541 indeed shows IR excesses from the z-band to W2, and thus this hot WD should be accompanied by a cool companion. IR excesses of WDs are usually explained by the existence of a debris disk or a cool companion (a cool dwarf or even a planet). The parameters of the cool component of a WD are usually derived from an SED analysis through a least-χ^2 method <cit.>. Using this method, the SED of SDSS J1541 is analyzed, and the temperature and radius of its cool component are derived to be 2,018(207) K and 0.438 R_⊙, corresponding to a late M-type star with a mass of about 0.080 M_⊙ according to an effective temperature-mass relation for an age of 5 Gyr <cit.>. Therefore, SDSS J1541 is a short-period PCEB composed of a hot WD and a late M-type dwarf.
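A minimal illustration of such a two-component SED comparison is sketched below in Python. It is only schematic: blackbodies stand in for the WD model atmospheres and the empirical M-dwarf relation actually employed, the band wavelengths and flux densities are placeholders rather than the values of Table <ref>, and the WD parameters are simply fixed at the spectroscopic solution quoted above.

import numpy as np

# Physical constants (SI) and unit conversions.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
Rsun, pc = 6.957e8, 3.086e16

def bb_fnu_mjy(wave_um, T, R_Rsun, d_pc):
    """Observed flux density (mJy) of a spherical blackbody of radius R at distance d."""
    nu = c / (wave_um * 1e-6)
    B = (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))   # W m^-2 Hz^-1 sr^-1
    fnu = np.pi * B * (R_Rsun * Rsun / (d_pc * pc)) ** 2          # W m^-2 Hz^-1
    return fnu / 1e-29                                            # 1 mJy = 1e-29 W m^-2 Hz^-1

# Placeholder photometry (effective wavelengths in micron, flux densities in mJy);
# the real values are those listed in Table <ref>.
wave = np.array([0.48, 0.62, 0.75, 0.89, 0.96, 3.4, 4.6])
fobs = np.array([0.60, 0.45, 0.38, 0.34, 0.33, 0.09, 0.07])
ferr = 0.1 * fobs

# WD parameters fixed at the spectroscopic solution derived above.
T_wd, R_wd, d = 25369.0, 0.0208, 370.0

# Least-chi^2 grid search over the companion's temperature and radius.
best = (np.inf, np.nan, np.nan)
for T2 in np.arange(1000.0, 3600.0, 50.0):
    for R2 in np.arange(0.05, 0.60, 0.01):
        model = bb_fnu_mjy(wave, T_wd, R_wd, d) + bb_fnu_mjy(wave, T2, R2, d)
        chi2 = np.sum(((fobs - model) / ferr) ** 2)
        if chi2 < best[0]:
            best = (chi2, T2, R2)

chi2_min, T2_best, R2_best = best
print(f"best-fit companion: T = {T2_best:.0f} K, R = {R2_best:.2f} Rsun (chi2 = {chi2_min:.1f})")

With the real photometry of Table <ref>, the cool component is what absorbs the excess seen from the z-band to W2; the numbers returned by this blackbody sketch would not be expected to reproduce the 2,018 K and 0.438 R_⊙ solution exactly, since that solution relies on proper model atmospheres rather than blackbodies.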
Although the orbital period of this object has been derived by <cit.>, there is a difference of about 17 min between the period determined from the g-band data and that based on the r-band data. We collect the photometric data from ZTF DR19 and obtain 375 data points in the g-band, 605 in the r-band and 133 in the i-band. The light curves are then analyzed with the method used for SDSS J0029, and only the periodogram based on the r-band data is shown in Fig. <ref>. The orbital period is derived to be 0.1140249(11) days in the g-band, 0.1140253(10) days in the r-band and 0.1140261(11) days in the i-band, and the results are listed in Table <ref>. The light curves folded on the orbital period determined from the r-band data are displayed in Fig. <ref>(g), which shows that SDSS J1541 is a PCEB with an orbital period of about 2.7366 hours; the variation in its luminosity might be caused by the reflection effect or by a dark spot due to stellar activity. Our result for the orbital period is in good agreement with that derived by <cit.> from the r-band data, and the different period they derived from the g-band data might be caused by the selection of a wrong peak owing to scattered data points or to the smaller number of g-band data points available at that time. In fact, Fig. <ref> shows a signal with power similar to the main peak at a frequency of 9.769997632 d^-1, corresponding to a period of 0.102355417 days, which is in agreement with the g-band period given by <cit.>. This suggests that scattered data points or a limited number of data points can indeed lead to a wrong peak selection. Meanwhile, Table <ref> shows that the χ^2-value based on the period indicated by the main peak is smaller than that based on this competing peak, suggesting that, as for the objects mentioned above, the orbital period is the period indicated by the main peak.

§.§.§ LAMOST J1621

LAMOST J162112.62+411809.81 (also named KUV 16195+4125) was first observed spectroscopically in the KISO Schmidt ultraviolet-excess survey and was discovered to be a DA+dM binary <cit.>. It was later found that this binary system can be resolved with the Hubble Space Telescope <cit.>. The atmospheric and physical parameters of the DA star were derived to be T_eff=14,090(457) K, log g=7.93(6) and M_WD=0.57 M_⊙ <cit.>, and similar results were obtained by other investigators based on the spectra from KISO or LAMOST <cit.>. Meanwhile, Fig. <ref> shows that the emission feature at Hα can clearly be seen in its LAMOST spectrum, implying that this object might be a close PCEB and thus that variability in its luminosity might be detected through photometric observations. We collect the photometric data from ZTF DR19 for it; in total, 1330 data points in the g-band, 1321 in the r-band and 379 in the i-band are obtained.
The light curves are analyzed, only the periodogram based on the r-band data is displayed in Fig. <ref>, and the results are listed in Table <ref>. In addition, Table <ref> shows that the χ^2-value derived for the r-band light curve folded on the period indicated by the main peak is smaller than that based on a peak with similar power at a frequency of 0.6901616 d^-1 (corresponding to a period P=1.448936 days), so the period indicated by the main peak should be its orbital period; it is derived to be 3.198935(91) days in the r-band and 3.200093(145) days in the i-band. The light curves folded on the orbital period determined from the r-band observations are displayed in Fig. <ref>(h), which shows that LAMOST J1621 indeed exhibits periodic variation in its luminosity and is thus a PCEB with a short orbital period. The variability in the luminosity of LAMOST J1621 might be a result of the reflection effect or of a star spot caused by magnetic activity.

§.§.§ SDSS J1638

SDSS J163824.78+292701.23 (hereafter SDSS J1638) was first classified as a DA+dM binary by <cit.>, and the atmospheric and physical parameters of its components were derived by <cit.> and <cit.>. It was found that SDSS J1638 is composed of a hot DA WD and a 0.32 M_⊙ M-type dwarf. These parameters were later derived again by <cit.> based on its LAMOST spectrum. The radial velocities of the DA star and the M dwarf were determined to be -51.6 km s^-1 and 123.9 km s^-1, respectively, by <cit.>, who gave an upper limit of 39.49 days for the orbital period of this object based on RV observations without multi-epoch measurements; this suggests that SDSS J1638 might be a short-period PCEB and that variability in its luminosity might be detected from photometric observations. In order to obtain its actual orbital period, we collect the photometric observations from ZTF DR19; in total, 1028 data points in the g-band, 1148 in the r-band and 160 in the i-band are obtained. However, its orbital period can only be derived from the r-band photometric observations; it is 0.454168(16) days (listed in Table <ref>), and the periodogram indicating this period is shown in Fig. <ref>. As seen from Fig. <ref>, some possible periods cannot be ruled out directly. However, Table <ref> shows that the χ^2-value based on the main peak is smaller than that based on another peak with similar power at a frequency of 3.2045401 d^-1 (P=0.3120573 days), suggesting that the period indicated by the main peak is the orbital period of this object. The three light curves are plotted in Fig. <ref>(a). As seen from Fig. <ref>(a), SDSS J1638 shows evident periodic variation in its luminosity and should be a short-period PCEB. The variability in its luminosity might be caused by the reflection effect or by a star spot due to the stellar activity of the M-type dwarf. Meanwhile, the orbital period obtained for this object in this work is indeed shorter than the upper limit listed in <cit.>, suggesting that it might be correct.

§.§.§ SDSS J1705

SDSS J170517.87+334507.61 (hereafter SDSS J1705) was classified as a DA white dwarf by <cit.> based on the spectra from SDSS DR4.
It was then found to be a binary system containing a WD and a cool companion by <cit.>, and the DA star in this system was found to be a very hot one by <cit.> and <cit.>. A radial velocity of -47.78 km s^-1 was derived for the DA star, and the distance was estimated to be 1685.9 pc from the Earth, which is consistent with the distance of 2027.6±864.1 pc indicated by its Gaia DR3 parallax. Another radial velocity of 292.08 km s^-1 was estimated from the redshift of its LAMOST spectrum <cit.>, suggesting that this object might show a large variability in its radial velocity and thus be a short-period PCEB. Based on the photometric data from ZTF DR2, the orbital period of this object was derived to be 0.254066 days in the g-band and 0.34053 days in the r-band by <cit.>; since different periods were given by <cit.> for different passbands, it is necessary to determine the true orbital period of this object. We therefore analyze the light curves based on the photometric data from ZTF DR19 again, and only the periodogram based on the r-band data is shown in Fig. <ref>. The orbital period is determined to be 0.3405506(97) days in the g-band, 0.3405451(81) days in the r-band and 0.3405535(137) days in the i-band. Our results are in good agreement with the period determined by <cit.> from the ZTF r-band data, and the different period derived by <cit.> from the g-band data might be the result of the selection of a wrong peak, owing to the smaller number of g-band data points available at that time or to the influence of some scattered data points. In fact, Fig. <ref> shows a peak with power similar to the main one at a frequency of 3.9391472 d^-1 (corresponding to a period of 0.2538621 days), in agreement with the peak used by <cit.> to derive the orbital period from the ZTF g-band data. However, the χ^2-value based on this peak is larger than that based on the main one (see Table <ref>), which implies that the period indicated by the main peak is the orbital period of this object; its three phase-folded light curves are displayed in Fig. <ref>(b).

§.§.§ SDSS J2130

SDSS J213019.79+061204.58 (SDSS J2130) was first identified as a WD+MS binary by <cit.> and <cit.>, who gave the following atmospheric and physical parameters for the WD: T_eff=34,131 K, log g=7.730 and M_WD=0.534 M_⊙. This object was classified as an RR Lyr-type variable with a pulsation period of 0.34116743 days by <cit.> and <cit.>. Meanwhile, hydrogen Balmer emission lines are also present in its SDSS spectrum (see Fig. <ref>), which suggests that SDSS J2130 might be a short-period PCEB. In order to obtain its orbital period, we collect the photometric data from ZTF DR19 and analyze the light curves in the g, r and i-bands; only one periodogram is shown in Fig. <ref>, since the periodograms obtained from the g- and i-band observations show a peak distribution similar to that derived from the r-band data. Meanwhile, as for the objects mentioned above, the period indicated by the main peak in Fig.
<ref> should be the orbital period of this object, since the χ^2-value based on the main peak is smaller than that based on another peak with similar power at a frequency of 2.8123848 d^-1, corresponding to a period of 0.355574 days (see Table <ref>); the orbital period is therefore derived to be 0.2621130(14), 0.2621132(12) and 0.2621115(14) days from the g, r and i-band photometric data, respectively. The phase-folded light curves are shown in Fig. <ref>(c). As seen from Fig. <ref>(c), the three light curves show the same trend of variation, implying that this object should be a short-period PCEB with an orbital period of about 6.2907 hours. Although we attempt to find a peak with power similar to the main one and a period matching the known period obtained by <cit.> and <cit.>, we do not find any such peak in Fig. <ref>, suggesting that the different orbital period obtained for this object by <cit.> and <cit.> is not the result of the selection of a wrong peak.

§.§.§ SDSS J2208

SDSS J220849.00+122144.73 (SDSS J2208) was first classified as a WD+MS binary by <cit.> based on its spectrum from SDSS DR5. The atmospheric and physical parameters of the DA star in this object were obtained by previous investigators <cit.>, who found the DA WD in SDSS J2208 to be a massive one; the radial velocities of the two components were derived to be 31.10 km s^-1 for the DA star and 11.70 km s^-1 for the M star by <cit.>. Meanwhile, Balmer emission lines are present in its SDSS spectrum. These observational characteristics indicate that SDSS J2208 might be a short-period binary formed through common envelope evolution. An orbital period of 0.34 days was estimated by <cit.> based on RV measurements without multi-epoch observations, whereas a different orbital period of 1.903 days was determined for this object by <cit.> and <cit.> from the photometric data of SDSS or ASAS. In order to determine an accurate period for SDSS J2208, we collect the photometric observations from ZTF DR19; 655 data points in the g-band, 925 in the r-band and 96 in the i-band are obtained, the light curves are analyzed, and only the periodogram based on the r-band data is shown in Fig. <ref>. In addition, Table <ref> shows that the χ^2-value based on the main peak is smaller than that based on another peak with similar power, implying that the period indicated by the main peak is the orbital period of this object; it is derived to be 0.654228(21) days in the g-band and 0.654242(14) days in the r-band. The three light curves folded on the orbital period derived from the r-band data are displayed in Fig. <ref>(d), which shows that this object should be a short-period PCEB. However, the orbital periods obtained for this object by us and by <cit.> are longer than the upper limit listed in <cit.>; this might be because the upper limit was based on unsuitable RV measurements <cit.>. In addition, the orbital period derived for this object by <cit.> and <cit.> might be the result of the selection of a wrong peak: Fig. <ref> shows that the period indicated by a peak with power similar to the main one, at a frequency of 0.52567394 d^-1 (P=1.90232 days), is in agreement with that used to derive the period of this object by <cit.> and <cit.>.
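It is worth noting that, for several of the objects above, the competing peak lies almost exactly one cycle per day away from the adopted frequency, which is the alias pattern expected when a periodic signal is sampled once per night. This reading is ours rather than a statement made in the text, but it follows directly from the quoted numbers, as the short Python check below illustrates.

# Offsets between the adopted frequency and the competing peaks quoted above
# (all periods in days).
pairs = {
    "SDSS J0029": (0.12165254, [0.1084267, 0.1385524]),
    "SDSS J0032": (0.1539950,  [0.1821151]),
    "SDSS J2208": (0.654242,   [1.90232]),
}
for name, (p_adopted, p_competing) in pairs.items():
    for p in p_competing:
        offset = abs(1.0 / p - 1.0 / p_adopted)
        print(f"{name}: |f_competing - f_adopted| = {offset:.4f} cycles per day")

All of these offsets come out close to one cycle per sidereal day (about 1.0027 d^-1), consistent with the competing peaks being sampling aliases rather than independent periods, in line with the remark in the discussion that the discontinuous ZTF sampling can produce spurious signals.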
§.§.§ SDSS J2302

SDSS J230202.50-000930.04 (SDSS J2302) was first identified as a DA/M binary based on a spectrum from SDSS DR9 by <cit.>, who gave the atmospheric and physical parameters of both components. These parameters were studied again by <cit.>, and the system was found to be a DA+M3 binary. Meanwhile, SDSS J2302 was found to be a variable star with an orbital period of 0.9098531 days by <cit.>, which implies that it might be a PCEB. We collect the photometric data for this binary system from ZTF DR19 to obtain an accurate period; 330 data points in the g-band, 360 in the r-band and 71 in the i-band are obtained, the light curves are analyzed, and the periodogram based on the r-band data is shown in Fig. <ref>. As for the objects mentioned above, the χ^2-values listed in Table <ref> suggest that the period indicated by the main peak in Fig. <ref> is the orbital period of this object; it is derived to be 0.2376133(5) days in the g-band and 0.2376198(3) days in the r-band. The three light curves folded on the orbital period obtained from the r-band data are shown in Fig. <ref>(c). As seen from Fig. <ref>(c), SDSS J2302 should be an eclipsing PCEB with an orbital period of about 5.703 hours, which means that the orbital period derived by us is very different from that derived by <cit.>. We attempt to find a peak indicating a period that matches the known period; however, no such peak is found in Fig. <ref>, suggesting that the difference between the known period and our result is not caused by a wrong peak selection. It might instead be caused by the limited number of repeated observations (∼10) adopted by <cit.>, since an eclipsing binary with an eclipse duration much shorter than its orbital period can easily escape detection <cit.>.

§.§.§ SDSS J2320

SDSS J232004.02+270623.73 (SDSS J2320) was first identified as a DA white dwarf by <cit.> and was later reclassified as a DA+M binary by <cit.>. The atmospheric and physical parameters of the DA star in SDSS J2320 were derived to be T_eff=31,480(491) K, log g=7.68(6) and M=0.50 M_⊙, with a distance of about 246 pc from the Earth, by <cit.> based on the spectrum from the KISO Schmidt survey. Similar results were obtained from the LAMOST DR5 spectrum by <cit.>, who found an M4-type dwarf in this object. Emission lines of the Balmer series are present in its LAMOST optical spectra <cit.>, implying that this object might be a short-period PCEB. Therefore, variability in its luminosity might be detectable, and we collect the photometric data from ZTF DR19 to determine whether the luminosity of SDSS J2320 is variable or not. In total, 641 data points in the g-band, 78 in the r-band and 111 in the i-band are obtained for it. The light curves are analyzed, and only the periodogram based on the g-band data is shown in Fig. <ref>. As for the objects mentioned above, the χ^2-values listed in Table <ref> imply that the period indicated by the main peak should be the orbital period of this object; it is derived to be 0.794569(12) days in the g-band and 0.794531(10) days in the i-band. The three light curves folded on the orbital period derived from the g-band data (the largest data set) are plotted in Fig. <ref>(e). As seen from Fig.
<ref>(e), the three light curves exhibit the same trend of luminosity variation, implying that the orbital period obtained in this work is accurate.

§.§.§ SDSS J2343

SDSS J234312.96+154106.43 (SDSS J2343) was first identified as a WD+MS binary by <cit.>, and its atmospheric and physical parameters have been studied by previous investigators <cit.>. It was found that the DA star in SDSS J2343 is a massive WD at a distance of 225 pc from the Earth <cit.>. The radial velocities were derived to be 93.5 km s^-1 and -224.3 km s^-1 for the DA star and the M-type star, respectively, by <cit.>, who also estimated an upper limit of 6.64 days for the orbital period of this object based on several RV measurements, suggesting that SDSS J2343 might be a PCEB. Its SDSS optical spectrum exhibits evident emission features in the Balmer series (see Fig. <ref>), which also indicates that SDSS J2343 might be a short-period PCEB, so that variation in its luminosity might be detected. We collect the photometric data for SDSS J2343 from ZTF DR19; 663 data points in the g-band, 896 in the r-band and 171 in the i-band are obtained. Its light curves are analyzed, and only the periodogram based on the r-band data is shown in Fig. <ref>. Although there are some peaks with power similar to the main peak in Fig. <ref>, the period indicated by the main peak should be the orbital period of this object, for the same reason as for the objects mentioned above, based on the χ^2-values listed in Table <ref>; it is derived to be 0.5687198(130) days in the g-band and 0.5687399(106) days in the r-band, and the light curves folded on the orbital period determined from the r-band observations are shown in Fig. <ref>(f). As seen from Fig. <ref>(f), SDSS J2343 is indeed a short-period PCEB, and the periodic change in its luminosity might be caused by the reflection effect or by a dark spot due to magnetic activity. In addition, the orbital period obtained for this object is indeed shorter than the upper limit derived by <cit.>, and thus might be correct.

§.§ Short-period PCEBs with Hydrogen and He I emission features

§.§.§ SDSS J0950

SDSS J095043.94+391541.62 (SDSS J0950) was first identified as a WD+MS binary system by <cit.>, and its atmospheric and physical parameters have been determined through spectral analysis by many investigators <cit.>. These results imply that this binary system should contain a very hot, young WD component. Meanwhile, Fig. <ref> shows that its SDSS optical spectra display not only strong emission lines of the hydrogen Balmer series, but also emission lines at He I λλ5876, 6681, together with Ca II H and K. These emission lines exhibit a narrow single peak, implying that SDSS J0950 shows the same emission features as LAMOST J143947.62-010606.8, which has an orbital period of about 1.522608 days <cit.>. We collect the photometric data for this object from ZTF DR19 and analyze its light curves in the g and r-bands after some scattered data points are removed. The result is listed in Table <ref>, and only the periodogram based on the r-band data is shown in Fig. <ref>. For the same reasons as given for the objects mentioned above, the period indicated by the main peak is the orbital period of this object (see Table <ref>); it is derived to be 1.167186(21) and 1.167341(15) days from the g and r-band data, respectively.
The phase of each data point in the g, r and i-bands is calculated according to the orbital period derived from the r-band data, and the phase-folded light curves are shown in Fig. <ref>(a). As seen from Fig. <ref>(a), SDSS J0950 is composed of a hot WD and an M-type dwarf with an orbital period of 1.167341 days, and is therefore a detached binary system rather than a cataclysmic variable.

§.§.§ SDSS J1317

SDSS J131751.72+673159.36 (SDSS J1317) was identified as a cataclysmic variable by <cit.>. Its atmospheric and physical parameters were determined from its SDSS spectrum by <cit.> and <cit.>, who found that this WD/MS binary system should be composed of a hot DA WD and an M-type dwarf. Radial velocities of 460.4(22.5) km s^-1 and -449.2(35.8) km s^-1 were determined at different epochs by <cit.>, which implies that this object might be a close binary. Meanwhile, Fig. <ref> shows that its SDSS optical spectrum displays not only strong emission lines of the hydrogen Balmer series but also emission lines at He I λλ5876, 6681, each showing a single peak; the emission features in its spectrum are similar to those in the spectra of BE UMa <cit.> and HK Leo <cit.>. In addition, <cit.> discovered a spurious signal with a frequency of 0.29568185 d^-1 (corresponding to a period of 3.382013 days), related to the time-dependent scan angle, for this object based on Gaia DR3 G-band time-series data. In order to determine whether this spurious period is the orbital period of this object, we collect the photometric data from ZTF DR19 and analyze the light curves in the g, r and i-bands; the result is listed in Table <ref>, and the periodogram based on the r-band observations is shown in Fig. <ref>. For the same reason as for SDSS J0029, the period indicated by the main peak in Fig. <ref> should be the orbital period of this object (see Table <ref>); it is derived to be 3.38084(20), 3.38136(32) and 3.38115(61) days from the g, r and i-band data, respectively. The phase of each data point in the three passbands is calculated according to the period obtained from the g-band data, and the phase-folded light curves are displayed in Fig. <ref>(b). This suggests that the period discovered for this object by <cit.> is the true orbital period of SDSS J1317 rather than a spurious signal.

§ DISCUSSION AND CONCLUSIONS

Common envelope evolution is one of the most uncertain processes in binary evolution <cit.>. Post-common envelope binaries (PCEBs) are the direct products of common envelope evolution and thus play an important role in understanding the CE evolution of binaries <cit.>. In this work, we attempt to discover PCEBs among WDMS binaries with emission line(s) identified from LAMOST and/or SDSS, based on the photometric data from ZTF DR19. As a result, 55 PCEBs with orbital periods in the range from 2.2643 to 81.1526 hours are found from the photometric data, although most of them had already been discovered by previous investigators. Among these short-period PCEBs, 6 are newly discovered and the orbital periods of 19 PCEBs have been improved (see the first 25 objects in Table <ref>), based on a match with the Simbad database. A detailed comparison between our results and those obtained by previous investigators is shown in Fig. <ref>. As seen from Fig.
<ref>, our results are consistent with the known values for most of the known short-period PCEBs, except for 8 PCEBs (indicated by squares) for which only upper limits on the orbital periods were available <cit.>. This implies that the method <cit.> used in this work is effective, even though almost all of the periodograms of these short-period PCEBs show several peaks with similar power; the same phenomenon also occurs in the periodograms of the 30 short-period PCEBs with accurately known periods (see Fig. <ref>). A possible explanation for this phenomenon is that the orbital periods of these binaries are obtained from the discontinuous ZTF observations, which can produce spurious signals. As shown in Fig. <ref> and Fig. <ref>, there are peaks with power similar to the main one, so some possible periods for these PCEBs cannot be ruled out directly. In order to determine the orbital periods of the newly discovered or period-improved PCEBs, we use a χ^2-method to measure how strongly the phase-folded light curves based on the two periods indicated by the main peak and by another peak of similar power deviate from the 'averaged' light curves constructed from 50 normal points. Table <ref> shows that, for each of them, the deviation of the light curve folded on the period indicated by the main peak is smaller than that of the light curve folded on the other peak, suggesting that the periods indicated by the main peaks of their periodograms are their orbital periods and that the method used in this work is effective. In addition, some of the peaks with power similar to the main one might result from the small amplitude of the light variation in these short-period PCEBs composed of a WD and a low-mass dwarf, and a limited number of repeated observations or scattered data points can lead to the selection of a wrong peak for some binaries (such as SDSS J1541, SDSS J1705 and SDSS J2302). Therefore, it is necessary to use multi-band photometric observations to derive the orbital periods of short-period PCEBs.

The upper limits for 8 short-period PCEBs were estimated from only a few RV measurements by <cit.>, and thus can hardly be regarded as their orbital periods. The orbital periods derived for them in this work (except for SDSS J2208) are shorter than the upper limits listed in <cit.>, so the orbital periods obtained here from the ZTF DR19 photometric data might be correct. In addition, the orbital period of SDSS J2302 <cit.> is derived to be 0.2376198 days, and this object is found to be an eclipsing PCEB in this work. The difference between our result and theirs might be caused by the different observations used: the result obtained for this object by <cit.> was based only on a limited number of repeated observations <cit.>. Although a limited number of high-precision observations can reveal the variability of a star's luminosity, it is difficult to obtain an exact period from them, so that an eclipsing binary with an eclipse duration much shorter than its orbital period can easily escape detection <cit.>.

As seen from Fig. <ref> and Fig. <ref>, the optical spectra of these objects from LAMOST and/or SDSS show evident emission line(s) of the Balmer series, or even of He I λλ5876, 6681. A possible explanation for this behavior is photoionization and recombination due to irradiation of the M dwarfs by their very hot WD companions, which have effective temperatures higher than ∼10,000 K <cit.>.
Another possibility is that the emission features result from the magnetic activity of the M-dwarf components of these PCEBs, since the M dwarfs in PCEBs are younger, and thus more active, than field M dwarfs <cit.>. Therefore, the reflection effect or star spots due to stellar magnetic activity provide a favorable opportunity for searching for short-period PCEBs among WDMS binaries with emission lines. Meanwhile, the optical spectra of SDSS J0950 and SDSS J1317 show the Na I λλ8183.27, 8194.81 absorption doublet characteristic of M-type dwarfs, suggesting that both systems contain an M-type dwarf component. In addition, the emission lines in their optical spectra exhibit a single peak, so these lines are probably not produced by an accretion disk, implying that the two systems are probably not cataclysmic variables. In fact, the M-type dwarfs in these two PCEBs, with orbital periods longer than 1 day, cannot fill their Roche lobes to transfer mass to their WD companions and thus form accretion disks around the white dwarfs, unless the WDs in them are extremely massive.

In addition, we have analyzed the SED of SDSS J1541 because the properties of its cool companion were unknown. We find that the hot WD in SDSS J1541 is probably accompanied by a late M-type dwarf, and the system should have been produced by common envelope evolution. However, it is still worth studying further how its thick common envelope could have been ejected by the very low orbital energy of the initial binary: the WD is a He-WD with M ≲ 0.45 M_⊙, implying that most of the mass of its MS progenitor was ejected during CE evolution, since the oldest globular clusters in the Galactic halo are producing ∼0.53 M_⊙ WDs from MS progenitors with M ≈ 0.8 M_⊙ <cit.>. Meanwhile, Fig. <ref> shows that the light curves of three PCEBs (SDSS J0747, SDSS J0908 and SDSS J2302) exhibit evident eclipses, implying that these PCEBs are close eclipsing binaries with short periods. Eclipsing PCEBs offer the possibility of obtaining precise parameters directly, independently of atmospheric parameters. Therefore, we plan to observe these eclipsing PCEBs and analyze their light curves and radial velocity curves in the future to obtain precise physical parameters for them, since such parameters are useful for constraining CE evolution.

§ ACKNOWLEDGEMENTS

The authors are grateful to the anonymous referee for the valuable suggestions and insightful remarks, which have greatly improved this work. We also thank Prof. D. Koester and Prof. P. Bergeron for providing their WD models. This project was partly supported by the Chinese Natural Science Foundations (Nos. 11773065, 11973081 and 12073070), by the Science Research Grants from the National Key R&D Program of China (2021TYFA1600403) and by the International Centre of Supernovae, Yunnan Key Laboratory (No. 202302AN360001). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the US Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. This paper makes use of data products of WISE, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
this research has made use of the SIMBAD database VizieR service. § DATA AVAILABILITY The data underlying this work will be shared on reasonable requisition to the corresponding author. The various sky survey data can be available in the public Data Release of ZTF DR19. mnras 99 [Abazajian et al.2009]Abazajian2009 Abazajian K.N., Adelman-McCarthy J.K., Agüeros M.A., Allam S.S., et al., 2009, , 182, 543 [Abbott et al.2016]Abbott2016 Abbott B.P., et al., 2016, , 818, L22 [Anguiano et al.2017]Anguiano2017 Anguiano B., Rebassa-Mansergas A., García-Berro E., et al., 2017, , 469, 2102 [Baraffe et al.2003]Baraffe2003 Baraffe I., Chabrier G., Barrman T.S., et al., 2003, , 402, 701 [Bedard et al.2020]Bedard2020 Bédard A., Bergeron P., Brassard P., Fontaine G., 2020, , 901, 93 [Bell & Gustafsson1989]Bell1989 Bell R.A., Gustafsson B., 1989, , 236, 653 [Bergeron et al.1995]Bergeron1995 Bergeron P., Wesemel F., Beauchamp A., 1995, PASP, 107, 1047 [Brown et al.2011]Brown2011 Brown J.M., Kilic M., Brown W.R., Kenyon S.J., 2011, , 730, 67 [Brown et al.2022]Brown2022 Brown A.J., Parsons S.G., Littlefair S.P. et al., 2022, , 513, 3050 [Chambers et al.2016]Chambers2016 Chambers K.C., Magnier E.A., Metcalfe NN., et al., 2016, arXiv: 1612.05560 [Chen et al.2020]Chen2020 Chen X.D., Wang S., Deng L.C., et al., 2020, , 249, 18 [Davis et al.2008]Davis2008 Davis P.J., Kolb U., Willems B., Gänsicke B.T., 2008, , 389, 1563 [Debes et al.2011]Debes2011 Debes J.H., Hoard D.W., Wachter S., et al., 2011, , 197, 38 [Dietz et al.2020]Dietz2020 Dietz S.E., Yoonn J.M., Beers T.C., 2020, , 894, 34 [Drake et al.2014]Drake2014 Drake A.J., Djorgovski S.G., García-Álvarez D., et al., 2014, , 790, 157 [Elsenstein et al.2006]Elsenstein2006 Elsenstein D.J., Liebert J., Harris H.C., et al., 2006, , 167, 40 [Farihi et al.2006]Farihi2006 Farihi J., Hoard D.W., Wachter S., 2006, , 646, 480 [Farihi et al.2017]Farihi2017 Farihi J., Parsons S.G., Gänsicke B.T., 2017, Astronomy, 1, 32 [Gaia Collaboration2021]Gaia2021 Gaia Collaboration, Brown A.G., et al., 2021, , 649, A1 [Gavras et al.2023]Gavras2023 Gavras P., Rimoldini L., et al., 2023, , 674, A22 [Gentile Fusillo, Gänsicke & Greiss2015]Gentile2015 Gentile Fusillo N.P., Gänsicke B.T., & Greiss S., , 2015, , 448, 2260 [Gianninas, Bergeron & Ruiz2011]Gianninas2011 Gianninas A., Bergeron P., Ruiz M.T., 2011, , 743, 138 [Gil-Pons & García-Berro2001]Gil2001 Gil-Pons P., García-Berro E., 2001, , 375, 87 [Girven et al.2011]Girven2011 Girven J., Gänsicke B.T., Steeghs D., Koester D., 2011, , 417, 1210 [Graham et al.2019]Graham2019 Graham M.J., Kulkarni S.R., Bellm E.C., et al., 2019, , 131, 078001 [Green, Schmidt & Liebert1986]Green1986 Green R.F., Schmidt M., Liebert J., 1986, , 61, 305 [Guo et al.2015]Guo2015 Guo J.C., Zhao J.K., Tziamtzis A. 
et al., 2015, , 454, 2787 [Han et al.2002]Han2002 Han Z.W., Podsiadlowski P., Maxted P.F.L., et al., 2002, , 336, 449 [Heber1986]Heber1986 Heber U., 1986, , 155, 33 [Heinze et al.2018]Heinze2018 Heinze A.N., Torry J.L., Denneau L., et al., 2018, , 156, 241 [Heller et al.2009]Heller2009 Heller R., Homeier D., Dreizler S., Østensea R., 2009, , 496, 191 [Hjellming & Taam1991]Hjellming1991 Hjellming M.S., Taam R.E., 1991, , 370, 709 [Holl et al.2023]Holl2023 Holl B., Faricius C., Portell J., et al., 2023, , 674, A25 [Iben & Livio1993]Iben1993 Iben I.J., Livio M., 1993, , 105, 1373 [Ivezić et al.2007]Ivezic2007 Ivezić Z., Smith J.A., Miknaitis G., et al., 2007, , 134, 973 [Justham et al.2009]Justham2009 Justham S., Wolf C., et al., 2009, , 493, 1081 [Kalirai et al.2009]Kalirai2009 Kalirai J.S., Davis D.S., Richer H.B., Bergeron P. et al., 2009, , 705, 408 [Kao et al.2016]Kao2016 Kao W., Kaplan D.L., Prince T.A., Tang S.M., Ene I., Kim K.B., et al., 2016, , 461, 2747 [Kepler et al.2015]Kepler2015 Kepler S.O., Pelisoli I., Koester D., Ourique G., et al., 2015, , 446, 4078 [Kepler et al.2019]Kepler2019 Kepler S.O., Pellsoli I., Koester D., et al., 2019, , 486, 2169 [Killic, Stanek & Pinsonneault2007]Killic2007 Killic M., Stanek K.Z., Pinsonneault M.H., 2007, , 671, 761 [Kleinman et al.2004]Kleinman2004 Kleinman S.J., et al., 2004, , 607, 426 [Kleinman et al.2013]Kleinman2013 Kleinman S.J., Kepler S.O., Koester D., et al., 2013, , 204, 5 [Koester2010]Koester2010 Koester D., 2010, , 81, 921 [Lei et al.2013]Lei2013 Lei Z.X., et al., 2013, , 549, A145 [Lenz & Breger2005]Lenz2005 Lenz P., Breger M., 2005, CoAst., 146, 53 [Limoges & Bergeron2010]Limoges2010 Limoges M.M., Bergeron P., 2010, , 714, 1037 [Li et al.2014]Li2014 Li L.F., Zhang F.H., Han Q.W., Kong X.Y., Gong X.B., 2014, , 455, 1331 [Liu et al.2012]Liu2012 Liu C., Li L.F., Zhang F.H., Zhang Y., Jiang D.K., Liu J.Z., 2012, , 424, 1841 [Luhman et al.2011]Luhman2011 Luhman K.L.,Burgasser A.J., Bochanski J.J., 2011, , 730, L9 [Marsh, Dhillon & Duck1995]Marsh1995 Marsh T.R., Dhillon V.S., Duck S.R., 1995, , 275, 828 [Masci et al.2019]Masci2019 Masci F.J., Laher R.R., Rusholme B., et al., 2019, , 131, 018003 [Miller2015]Miller2015 Miller A.A., 2015, , 811, 30 [Morgan et al.2012]Morgan2012 Morgan D.P., West A.A., Garcés A., et al., 2012, , 144, 93 [Nebot Gómez-Morán et al.2009]Nebot2009 Nebot Gómez-Morán A., Schwope A.D., Schreiber M.R., et al., 2009, , 495, 561 [Nebot Gómez-Morán et al.2011]Nebot2011 Nebot Gómez-Morán A., Gänsicke B.T., Schreiber M.R., Rebassa-Mansergas A., et al., 2011, , 536, 43 [Nelemans & Tauris1998]Nelemans1998 Nelemans G., Tauris T.M., 1998, , 335, L85 [Nordhaus2011]Nordhaus2011 Nordhaus J., 2011, Int. J. Mod. Phys. E, 20, 29 [Oswalt, Peterson & Foltz1984]Oswalt1984 Oswalt T.D., Peterson B.M., Foltz C.B., 1984, , 89, 421 [Paczyński1976]Paczynski1976 Paczyński B., 1976, in Eggleton P., Mitton S., Whelan J., eds, Proc. IAU Symp. 
73, Structure and Evolution of Close Binary Systems, Reidel, Dordrecht [Parsons et al.2015]Parsons2015 Parsons S.G., Agurto-Gangas C., Gänsicke B.T., et al., 2015, , 449, 2194 [Parsons et al.2013]Parsons2013 Parsons S.G., Gänsicke B.T., Marsh T.R., Drake AJ., Dhillon V.S., et al., 2013, , 429, 256 [Parsons et al.2017]Parsons2017 Parsons S.G., Hermes J.J., Marsh T.R., et al., 2017, , 471, 976 [Parsons et al.2021]Parsons2021 Parsons S.G., Gänsicke B.T., Schreiber M.R., et al., 2021, , 502, 4305 [Pesch & Sanduleak1983]Pesch1983 Pesch P., Sanduleak N.N., 1983, , 51, 171 [Politano & Weiler2007]Politano2007 Politano M., Weiler K.P., 2007, , 665,663 [Politano et al.2010]Politano2010 Politano M., van der Sluys M., Taam R.E., Willems B., 2010, , 720, 1752 [Pourbaix et al.2005]Pourbaix2005 Pourbaix D., Kapp G.R., Szkody P., et al., 2005, , 444, 643 [Pyrzas et al.2009]Pyrzas2009 Pyrzas S., Gänsicke B.T., Marsh T.R., et al., 2009, , 394, 978 [Rappaport et al.2017]Rappaport2017 Rappaport S., Vanderburg A., Nelson L., Gary B.L., Kaye T.G., Kalomeni B., et al., 2017, , 471, 948 [Ritter & Kolb2003]Ritter2003 Ritter H., Kollb U., 2003, , 404, 301 [Raymond et al.2003]Raymond2003 Raymond S.N., Szkody P., Hawley S.L., et al., 2003, , 125, 2621 [Rebassa-Mansergas et al.2007]Rebassa2007 Rebassa-Mansergas A., Gänsicke B.T., Rodriguez-Gil P., et al., 2007, , 382, 1377 [Rebassa-Mansergas et al.2008]Rebassa2008 Rebassa-Mansergas A., Gänsicke B.T., Schreiber M.R., et al., 2008, , 390, 1635 [Rebassa-Mansergas et al.2010]Rebassa2010 Rebassa-Mansergas A., Gänsicke B.T., Schreiber M.R., et al., 2010, , 402, 620 [Rebassa-Mansergas et al.2011]Rebassa2011 Rebassa-Mansergas A., Nebot-Gómez-Morán A., Schreiber M.R., et al., 2011, , 413, 1121 [Rebassa-Mansergas et al.2012]Rebassa2012 Rebassa-Mansergas A., Nebot Gómez-Morán A., Schreiber M.R., Gänsicke B.T., et al., 2012, , 419, 806 [Rebassa-Mansergas et al.2013a]Rebassa2013 Rebassa-Mansergas A., Agurto-Gangas C., Schreiber M.R., et al., 2013a, , 433, 3398 [Rebassa-Mansergas et al.2013b]Rebassa2013b Rebassa-Mansergas A., Schreiber M.R., Gänsicke B.T., et al., 2013b, , 429, 3570 [Rebassa-Mansergas et al.2021]Rebassa2021 Rebassa-Mansergas A., Solano E., Jiménnez-Esteban F.M., 2021, , 506, 4201 [Rebassa-Mansergas et al.2023]Rebassa2023 Rebassa-Mansergas A., Maldonado J., Raddi R., et al., 2023, , 526, 4787 [Ren et al.2013]Ren2013 Ren J.J., Luo A.L., Li Y.B., Wei P., et al., 2013, , 146, 82 [Ren et al.2014]Ren2014 Ren J.J., Rebassa-Mansergas A., Luo A.L., et al., 2014, , 570, A107 [Ren et al.2018]Ren2018 Ren J.J., Rebassa-Mansergas A., Parsons S.G., et al., 2018, , 477, 4641 [Sánchez-Sáez et al.2023]Sanchez2023 Sánchez-Sáez P., Arredondo J., Bayo A., et al., 2023, , 675, A195 [Schreiber & Gänsicke2003]Schreiber2003 Schreiber M.R., Gänsicke B.T., 2003, , 406, 305 [Schreiber et al.2008]Schreiber2008 Schreiber M.R., Gänsicke B.T., Southworth J., et al., 2008, , 484, 441 [Schreiber et al.2010]Schreiber2010 Schreiber M.R., Gänsicke B.T., Rebassa-Mansergas A., et al., 2010, , 513, L7 [Sesar et al.2017]Sesar2017 Sesar B., Hernitschek N., Mitrović S., et al., 2017, , 153, 204 [Silvestri et al.2006]Silvestri2006 Silvestri N.M., Hawley S.L., West A.A., Szkodv P., Bochanski J.J., Eisenstein D.J., et al., 2006, , 131, 1674 [Silvestri et al.2007]Silvestri2007 Silvestri N.M., Lemagie M.P., Hawley S.L., et al., 2007, , 134, 741 [Skiff2009]Skiff2009 Skiff B.A., 2009, VizieR Online Data Catalog, 1, 2023 [Taam & Sandquist2000]Taam2000 Taam R.E., Sandquist E.L., 2000, , 38, 113 [Tonry et 
al.2018]Tonry2018 Tonnry J.L., Denneau L., Flewelling H., et al., 2018, , 867, 105 [Tremblay et al.2011]Tremblay2011 Tremblay P.E., Bergeron P., Gianninas A., 2011, , 730, 128 [Tremblay et al.2019]Tremblay2019 Tremblay P.E., Cukanovaite E., Gentile-Fusilo N.P., et al., 2019, , 482, 5222 [Tsantaki et al.2022]Tsantaki2022 Tsantaki M., Pancino E., Marrese P. et al., 2022, , 659, A95 [van den Besselaar et al.2007]van2007 van den Besselaar E.J.M., Greimel R., Morales-Rueda L., et al., 2007, , 466, 1031 [Verberne et al.2024]Verberne2024 Verberne S., Koposov S.E., Rossi E.M., et al., , 684, A29 [Wagner et al.1988]Wagner1988 Wagner R.M., Sionn E.M., Liebert J., et al., 1988, , 328, 213 [Webbink1984]Webbink1984 Webbink R.F., 1984, , 277, 355 [Webbink2008]Webbink2008 Webbink R.F., 2008, in Astrophys. Space Sci., Lib., 352, ed. E.F. Milone, D.A. Leahy & D.W. Hobill, 233 [Weger, McMahan & Boley1987]Wegner1987 Wegner G., McMahan R.K., Boley F., 1987, , 94, 1271 [Weger & Swanson1990]Wegner1990 Wegner G., Swanson S.R., 1990, , 100, 1274 [West et al.2008]West2008 West A.A., Hawley S.L., et al., 2008, , 135,785 [Willems & Kolb2004]Willems2004 Willems B., Kolb B., 2004, , 419, 1057 [Wright et al.2010]Wright2010 Wright E.L., et al., 2010, , 140,1868 [Xu et al.2015]Xu2015 Xu S.Y., Jura M., Pantoja B., Klein B., et al., , 806, L5 [York et al.(2000)]York2000 York D.G., Adelman J., Anderson J.E.Jr., Anderson S.F., Annis J.,et al., 2000, , 120, 1579 [Zhang et al.(2022)]Zhang2022 Zhang Y.J., Hou W., Luo A.L., et al., 2022, , 259, 38
http://arxiv.org/abs/2407.13024v1
20240717212643
Length-preserving biconnection gravity and its cosmological implications
[ "Lehel Csillag", "Rattanasak Hama", "Mate Jozsa", "Tiberiu Harko", "Sorin V. Sabau" ]
gr-qc
[ "gr-qc" ]
definitionDefinition[section] remarkRemark[section] theoremTheorem[section] corollaryCorollary[section] propositionProposition[section] |-1.5mm = ℒ tr eωłlehel@csillag.roDepartment of Physics, Babes-Bolyai University, Kogalniceanu Street, Cluj-Napoca 400084, Romania,rattanasak.h@psu.ac.thFaculty of Science and Industrial Technology, Prince of Songkla University, Surat Thani Campus, Surat Thani, 84000, Thailand,mate.jozsa@ubbcluj.roDepartment of Physics, Babes-Bolyai University, Kogalniceanu Street, Cluj-Napoca 400084, Romania,tiberiu.harko@aira.astro.roDepartment of Physics, Babes-Bolyai University, Kogalniceanu Street, Cluj-Napoca 400084, Romania,Astronomical Observatory, 19 Ciresilor Street, Cluj-Napoca 400487, Romania, sorin@tokai.ac.jpSchool of Biological Sciences, Department of Biology and Graduate School of Science and Technology, Physical and Mathematical Sciences, Tokai University, Sapporo, 005-8600, Japan§ ABSTRACT We consider a biconnection theory that extends general relativity, using the recently defined mutual curvature as the fundamental object describing gravity. To specify the two connections considered, we make a short detour into information geometry, where a connection and its dual play a natural role. We prove that the dual of a non-metric Schrödinger connection possesses torsion, even if the Schrödinger connection itself does not. Furthermore, a manifold with the dual of a Schrödinger connection is a quasi-statistical manifold. With these two connections, we develop length-preserving biconnection gravity. The field equations resemble those of general relativity, but replace the Ricci tensor- and scalar with the mutual curvature tensor and mutual curvature scalar, resulting in additional torsion-dependent terms. The covariant divergence of the matter energy-momentum does not vanish in this theory. We derive the equation of motion for massive particles, which shows the presence of an extra force, depending on the torsion vector. The Newtonian limit of the equations of motion is also considered. We explore the cosmological implications by deriving the generalized Friedmann equations for the FLRW geometry. They contain additional terms that can be interpreted as describing an effective, geometric type dark energy. We examine two cosmological models: one with conserved matter and one where dark energy and pressure follow a linear equation of state. The predictions of both models are compared with observational Hubble function values and the standard ΛCDM model. Length-preserving biconnection gravity models fit well observational data and align with ΛCDM at low redshifts (z<3). They suggest that a modified geometry could explain dark energy, late-time acceleration, and the formation of supermassive black holes, as they predict different age of our Universe. Length-preserving biconnection gravity and its cosmological implications Sorin V. Sabau July 22, 2024 ======================================================================== § INTRODUCTION The advent of the theory of general relativity <cit.> opened not only new perspectives in the understanding of gravity, one of the fundamental forces of nature, but also led to creative interplay and interaction between mathematics and physics. General relativity, based on a Riemannian geometric mathematical structure, deeply influenced the development of mathematics, leading to the development of new fields of research that also found many physical applications. 
Three years after general relativity was formulated as a consistent theory, Weyl <cit.> proposed the first generalization of the Riemannian geometry, based on the introduction of a new geometric concept, the nonmetricity of the space-time. For a long time Weyl's geometry did find many applications in physics <cit.>, but it is presently an active field of research <cit.>. In the same year Weyl proposed his unified field theory based on the nonmetric geometry, Finsler <cit.> introduced another extension of the Riemann geometry, in which the metric tensor is a function of both the coordinates and of a tangent vector. Finsler geometry has also the potential of opening some new avenues in the understanding of quantum mechanics <cit.>, and of the gravitational phenomena <cit.>. Another important geometric concept is the torsion tensor of the space-time, introduced by Élie Cartan <cit.>, and which is at the basis of the Einstein-Cartan theory of the gravitational interaction <cit.>. Among the interesting mathematical developments that greatly influenced physics one must also mention the work by Weitzenböck <cit.>, who introduced a geometry in which torsion exactly compensates curvature, and thus the space-time is becoming flat. Weitzenböck's geometry represents the mathematical foundation of the f(T) gravity theory <cit.>, and of its generalizations <cit.>. On the other hand, nonmetricity is the fundamental geometric quantity on which the f(Q) gravity theories <cit.> are built up. General relativity stands out in its description of gravitational interaction compared to other fundamental interactions in particle physics. Unlike theories where forces result from the exchange of particles, like in electrodynamics, general relativity describes gravity as a property of space-time geometry. Einstein's differential geometric description of gravitational interaction not only helped to better understand Riemannian geometry, but it also opened new possibilities for its generalization. General relativity has been extensively tested at various length scales, and it has significant applications in astrophysics and cosmology. It gives an excellent description of the of the gravitational dynamics at the scale of the Solar System, and explains well the perihelion precession of the planet Mercury, the bending of light by the Sun, and the Shapiro time delay effect <cit.>. The detection of the gravitational waves <cit.> has confirmed again the predictions of general relativity, and it has provided a new perspective for the analysis and description of the black hole - black hole, or black hole - neutron star merging processes. The observational study of the fluctuations in the temperature distribution of the Cosmic Microwave Background Radiation recently performed by the Planck satellite <cit.>, as well as the investigations of the light curves of the Type Ia supernovae <cit.>, have convincingly shown that the present day Universe is in a state of accelerating cosmological expansion. Precise cosmological observations have also shown only around 5% of the total matter-energy composition of the Universe amounts to baryonic matter, with 95% consisting of two other, essentially unknown, constituents, commonly called dark energy, and dark matter, respectively. To describe phenomenologically the observational data obtained from the cosmological observations, the ΛCDM (Λ Cold Dark Matter) model was proposed. 
This model is obtained by adding to the standard gravitational field equations G_μν=(8π G/c^4)T_μν, where G_μν is the Einstein tensor, and T_μν is the matter energy-momentum tensor, the cosmological constant Λ, first introduced by Einstein in 1917 in the gravitational field equations <cit.> to obtain a static cosmological model of the Universe. However, after the discovery of the expansion of the Universe, Einstein dismissed the possibility of the presence of Λ. Despite this, the ΛCDM model gives a very good description of the cosmological observational data at low redshifts, and thus it is considered as the standard cosmological paradigm of the present times. The first problem the ΛCDM must face is related to the unknown nature, and physical/geometrical interpretation of the cosmological constant, which represents the so-called cosmological constant problem <cit.>, whose solution is not yet known. An alternative view, which does not require the presence of Λ, is represented by the assumption that the Universe is filled with two components (of unknown physical origin), called dark energy and dark matter, respectively (for reviews of the dark anergy and dark matter problems see <cit.>). The ΛCDM model is a particular dark energy model, in which the effective pressure p_DE and the effective energy ρ_DE of the dark energy satisfies the equation of state ρ_DE+p_DE=0, ∀ t. The ΛCDM model faces several other important challenges. One of them, called the Hubble tension, is related to the existence of significant differences between the expansion rate of the Universe (the Hubble function) as obtained from the Cosmic microwave Background Radiation satellite observations, and from the low redshift determinations <cit.>. As measured by the Planck satellite, the Hubble constant H(0) has the value of 66.93 ± 0.62 km/ s/ Mpc <cit.>, while from the SHOES collaboration analysis its value is 73.24 ± 1.74 km/ s/ Mpc <cit.>. Between these two values there is a difference which is more than 3σ<cit.>. If real, the Hubble tension strongly points towards the need of extending, or even replacing the ΛCDM model. A more significant problem is the James Webb Space Telescope (JWST) discovery of well-formed galaxies and supermassive black holes only a few hundred million years after the Big Bang <cit.>. Unusually bright galaxy candidates have been detected at z≈ 16<cit.>. Additionally, polycyclic aromatic hydrocarbons (PAHs) have been identified at z = 6.71<cit.>. The observed bright UV-irradiation of the early Universe suggests a reionization history that is much too short to satisfy evolution of the hydrogen ionization fraction, χ_HI(z). All these observational results seriously challenge the timeline predicted by ΛCDM model. The analysis of the measurements of the baryon acoustic oscillations (BAO) by the Dark Energy Spectroscopic Instrument (DESI) points towards a time-evolving equation of state of the dark energy <cit.>. The BAO results have been obtained by using quasar and Lyman-α forest tracers in seven redshift bins in the redshift range 0.1 < z < 4.2. While the DESI BAO data can be explained by the ΛCDM model, combining them with other observational datasets leads to results that contradict the standard cosmological scenario. 
An attractive possibility of explaining the above problems of cosmology is to assume that general relativity, which can give an excellent description of the gravitational physics at the level of the Solar System is no longer valid on galactic or cosmological scales, and, in order to understand physics on large and very large scales a new theory of gravity is necessary. Many modified theories of gravity have been proposed (for reviews see <cit.>), and they try to explain the gravitational phenomenology from different perspectives, by using new geometrical or physical structures that could explain the observational data. Extensions and maximal extensions of the Hilbert-Einstein action S=∫(R/2κ ^2+L_m)√(-g)d^4x were considered in the framework of the f(R)<cit.>, f(R,L_m)<cit.>, f(R,T) theories, or of the Hybrid Metric Palatini Gravity theory <cit.>. For a review of the modified gravity theories with geometry-matter coupling see <cit.>. Finsler type geometrical extensions of general relativity were considered in <cit.>, while the astrophysical and cosmological effects of the Weyl geometry were investigated in <cit.>. An interesting approach to Weyl geometry was proposed by Schrödinger <cit.>, by introducing a new type of connection. Even it contains nonmetricity, the Schrödinger connection preserves the length of vectors under parallel transport. The physical and cosmological implications of the Schrödinger connection were investigated in <cit.>. The Friedmann-Schouten geometry and connection, in which the torsion has a specific form, and can be expressed in terms of a torsion vector, was investigated, from the point of view of the cosmological applications, in <cit.>. Interestingly, symmetrizing a semi-symmetric type of torsion in a certain way, could lead to a Schrödinger connection, as pointed out in <cit.>. One of the open problems in present day theoretical physics is the problem of quantum gravity. Even a general theory of the quantized gravitational field, or of quantum geometry, does not yet exist, some relevant insights on the expected structure of the theory can be obtained from general physical considerations. Such a general prediction of quantum gravity models is the existence of a minimum length scale, which is assumed to be of the order of the Planck length √(ħ G/c^3)<cit.>. In some quantum gravity models a minimum momentum scale is also considered <cit.>, and a noncommutative approach to geometry may also be necessary <cit.>. An interesting approach to the problem of quantum geometries was proposed in <cit.>. In this model each point r⃗ in the classical background is associated with a vector |g_r⃗> in a Hilbert space, r⃗→|g_r⃗>, with |g_r⃗> := ∫ g(r⃗ '-r⃗) |r⃗ '> d^3r⃗ ' , where g(r⃗ '-r⃗) is any normalised function. Hence, in this formulation of geometry of quantum mechanics, geometric (or physical) “points” in the background space do exist in a superposition of states, and as the result of measurements they may undergo stochastic fluctuations. Hence, the geometry of quantum mechanics can be formulated by associating to each space-time point a statistical distribution. From a mathematical point of view such mathematical structures are well known, and they are called statistical manifolds (for a detailed discussion of the subject see the books by Amari <cit.>). Statistical manifolds represent an application of Riemannian geometry for the study of stochastic processes. 
The concept was initially introduced by Lauritzen <cit.>, and it was later reformulated by Kurose in <cit.> from the perspective of the affine differential geometry. For short, a statistical manifold (M, ∇,h) is a (semi-)Riemannian manifold (M, h) with a torsion-free affine connection ∇ with ∇ h totally symmetric <cit.>. For a statistical manifold a pair of mutually dual affine connections can be naturally defined. Thus, if a statistical manifold (M, ∇, h) is given, and we denote by ∇ ^* the dual connection of ∇ with respect to h, the triplet ( M, ∇ ^*, h) is also a statistical manifold. ( M, ∇ ^*, h) is called the dual statistical manifold of (M, ∇, h)<cit.>. Statistical manifolds and dual affine connections were rediscovered in statistics in order to build up geometric theories for statistical inferences <cit.>. Presently, this geometric method is called information geometry, and is applied various fields of mathematical sciences <cit.>. Kurose <cit.> and Matsuoze <cit.> pointed out that for the description of certain quantum effects, the inclusion of torsion might be needed due to the non-commutativity of quantum mechanics. With this motivation at hand, Kurose introduced the notion of a statistical manifold admitting torsion, or a quasi-statistical manifold. However, there is a sharp contrast in this case, compared to the case of simple statistical manifolds. Given a quasi-statistical manifold (M,g,∇), if we replace the ∇ with its dual, generally, it will not be true that (M,g,∇^*) is a quasi-statistical manifold as well. The exact mathematical conditions, which provide the conditions on T and T^* for the dual to be a quasi-statistical manifold are given in corollary <ref>. Even though the link to statistical manifolds and probabilistic geometry was only recently discovered (as we will explain later), gravitational theories based on two connections have already been investigated <cit.>. In <cit.> the geodesic equation was generalized to d^2x^μ/dλ^2 = -∑_i=1^N ^(i)Γ^μ_αβdx^α/dλdx^β/dλ = -Nγ^μ_αβdx^α/dλdx^β/dλ, where (i) labels the number of connections, and the average connection is defined as γ^μ_αβ≡1/N∑_i=1^N ^(i)Γ^μ_αβ. For gravitational applications the Hilbert-Einstein action is generalized as S=∫ d^4x √(-g)g^μν1/N∑_i=1^N R_μν(^(i)Γ^ρ_αβ), which in the case of a biconnection model reduces to S=∫ d^4x L=∫ d^4x √(g)g^μν1/2[ R_μν(^(1)Γ^ρ_αβ)+ R_μν(^(2)Γ^ρ_αβ)]<cit.>, which can be reformulated as L=√(g) g^μν[R_μν(γ^ρ_αβ)+Ω^α_αλΩ^λ_νμ-Ω^α_νλΩ^λ_αμ], where R_μν(γ^ρ_αβ) denotes the Ricci tensor constructed from the average connection γ^ρ_αβ≡1/2(^(1)Γ^ρ_αβ+ ^(2)Γ^ρ_αβ), and Ω^ρ_αβ≡1/2(^(1)Γ^ρ_αβ- ^(2)Γ^ρ_αβ), is a tensor obtained from the transformation rule of the connections.Here we would like to point out that there is no coupling between the two Ricci tensors, they simply sum up. The approach initiated in <cit.> was generalized in <cit.> by assuming that the two connections are defined in a Weyl geometry, and thus they satisfy the relations ^(1)∇ _μ g_αβ=-C_μ g_αβ, and ^(2)∇ _μ g_αβ=+C_μ g_αβ, respectively. The Weyl biconnection model is a natural framework to generate the mathematical structure of a Galileon theory. This model also admits a self-accelerating solution, and is closely related to massive gravity in the multiconnection framework. The clear link between statistical manifolds and biconnection gravity models was pointed out in <cit.>, where an action of the form S=(1/4κ)∫(R^(1)+R^(2)+K)√(-g)d^4x was proposed, with K:=K^λμνK_μνλ-K^λμ_ μK^λν_ ν being the difference scalar. 
In this model, there is a coupling between to the two connections, in contrast to the previous biconnection theory. If considered in the metric-affine framework, the biconnection theory in vacuum is indistinguishable from GR, and to get deviations from GR one would have to include connection-matter couplings. Once the matter is included, one can define two hypermomenta associated to the action as Δ _λ ^ μν (1)=-2/√(-g)δ S_m/δΓ ^λ (1)_ μν=Ξ _λ ^ μν, and Δ _λ ^ μν (2)=-2/√(-g)δ S_m/δΓ ^λ (2)_ μν=-Ξ _λ ^ μν, respectively. By assuming that the hypermomentum tensor Ξ_αμν is totally symmetric and traceless, it follows that the connection coefficients can be written as Γ ^λ (1) _ μν=Γ̃_ μν^λ +κΞ ^λ_ μν, and Γ ^λ (2) _ μν=Γ̃_ μν^λ -κΞ ^λ_ μν, respectively. As for the field equations of the present model, they take the form <cit.>R̃_μν-1/2g_μνR̃=κ T_μν-κ ^2(Ξ ^αβ_ μΞ _αβμ-1/2Ξ^αβγΞ_αβγg_μν). Hence, the gravitational theory based on two connections in the metric-affine framework possesses the structure of a statistical manifold, thanks to the symmetries of the associated hypermomentum. A key and novel ingredient in achieving this result is the mutual curvature scalar, a newly defined object from the mutual curvature tensor. This mutual curvature scalar, contains the difference scalar K, which introduces a non-trivial coupling between the two connections. It is important to note, however, that the mutual curvature tensor has a relatively rich history. In <cit.> it was shown that the mutual curvature tensor, as traditionally used by mathematicians <cit.>, does not actually meet the criteria for being a true tensor, due to its lack of multilinearity in each of its slots. Refining this definition has led to significant advances in both physics and mathematics, including the formulation of the biconnection theory as a statistical manifold <cit.>, and numerous other developments <cit.>. It is the goal of the present paper to explore the physical and the cosmological implications of a biconnection theory, based on the newly defined mutual curvature scalar, and by using a metric approach in contrast to the metric-affine approach considered in <cit.>. We will specifically fix the two connections to be the Schrödinger connection, and its dual. This choice is physically justified, since the Schrödinger connection preserves the lengths of autoparallelly transported vectors, even though it is not metric-compatible. Considering the dual of this connection is a novel and interesting method to introduce a semi-symmetric type of torsion into the dual geometry. After introducing the basic geometric concepts used for the description of a statistical manifold with torsion, we show that for a Schrödinger-type connection, the pair (M,g,∇^*) is a quasi-statistical manifold. We postulate that the field equations of the proposed biconnection theory take the same form as in the Einstein theory, with the only difference being that the Ricci tensor R_μν and Ricci scalar R are replaced by the mutual curvature ℛ_μν and mutual curvature scalar ℛ, with the two connections considered to be the Schrödinger connection, and its dual. We also obtain the equation of motion of massive test particles, which is generally non-geodesic, and takes place in the presence of an extra force, which is fully determined by the torsion vector. The Newtonian limit of the equation of motion is also considered. We also perform a detailed analysis of the cosmological implications of the theory. 
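To make the structure of the averaged biconnection Lagrangian quoted above explicit, the following minimal sketch (assuming numpy; the connection coefficient arrays are random placeholders rather than any physical connection) builds the average connection γ^μ_αβ, the difference tensor Ω^μ_αβ, and the quadratic correction Ω^α_αλΩ^λ_νμ-Ω^α_νλΩ^λ_αμ appearing in that Lagrangian.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Two hypothetical sets of connection coefficients Gamma^mu_{alpha beta},
# stored as arrays G[mu, alpha, beta]; here they are random placeholders.
G1 = rng.normal(size=(dim, dim, dim))
G2 = rng.normal(size=(dim, dim, dim))

gamma = 0.5 * (G1 + G2)          # average connection gamma^mu_{alpha beta}
Omega = 0.5 * (G1 - G2)          # difference tensor Omega^mu_{alpha beta}

# quadratic correction Omega^a_{a l} Omega^l_{n m} - Omega^a_{n l} Omega^l_{a m}
trace = np.einsum('aal->l', Omega)                     # Omega^a_{a l}
corr = (np.einsum('l,lnm->nm', trace, Omega)
        - np.einsum('anl,lam->nm', Omega, Omega))
print(corr.shape)                                      # (4, 4)
```

Contracting this correction with g^μν reproduces the extra term of the averaged Lagrangian written above.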
As a first step in this study we derive the generalized Friedmann equations for a homogeneous, isotropic and flat geometry, which contains extra torsion/nonmetricity dependent terms, which we interpret as corresponding to the effective energy density and pressure of the dark energy. To close the system of cosmological equations we need to impose some extra conditions on the model parameters. We explore two such models: one imposes the condition of matter energy conservation, and the other assumes a linear equation of state relating the effective dark energy pressure and energy density. A detailed comparison of the models with a small set of observational data for the Hubble function, and with the ΛCDM paradigm is performed. This comparison indicates that the considered cosmological models may represent some viable alternatives to the ΛCDM paradigm. The present paper is organized as follows. After briefly introducing the geometric perspective of the well-known statistical manifolds, we present the less known notion of a quasi-statistical manifold. In particular, we show that for Weyl or Schrödinger connections ∇, the dual connection ∇^* is not torsion-free, but nevertheless, the pairs (M,g,∇^*) are statistical manifolds admitting torsion. In Section <ref> we propose a biconnection gravity model in the metric formalism, using the recently defined mutual curvature tensor of <cit.>, by postulating that the field equations take the form ℛ_μν-1/2 g_μνℛ=8π T_μν. As we work in the metric formalism, the connections have to be specified as well: we choose the Schrödinger connection and its dual, thanks to their physically reasonable length-preserving properties. We also study some physical applications of the proposed length-preserving biconnection theory, which differs significantly from usual GR. A main difference is the non-conservation of the energy-momentum tensor, which we attribute to the presence of an extra force. We also obtain the equation of motion of the massive particles, and the Newtonian limit of the equations of motion. The cosmological implications of the theory are investigated in Section <ref>, where the generalized Friedmann equations are obtained, and two cosmological models are introduced. The predictions of the models are compared with the observations, and with the predictions of the ΛCDM model, in Section <ref>. Finally, we discuss and conclude our results in Section <ref>. A rigorous coordinate-free mathematical treatment of the geometry of quasi statistical manifolds is described in Appendix <ref>. The details of the calculation of the Ricci and mutual difference tensors are presented in Appendix <ref>. The derivation of the generalized Friedmann equations is shown in Appendix <ref>. § (DUAL) STATISTICAL STRUCTURES ON MANIFOLDS EQUIPPED WITH TWO CONNECTIONS In this section, we will briefly summarize the basic statistical structures, which can be put on a manifold M, given an affine connection ∇. We will first introduce the notion of a statistical manifold (M,g,∇) and its dual connection ∇^*, assuming the absence of torsion. Then, we generalize these constructions for the case of connections with torsion, leading to the notion of a quasi-statistical manifold. We show that by considering a Schrödinger or Weyl connection, even though the pairs (M,g,∇) are not statistical manifolds, it holds true that (M,g,∇^*) are quasi-statistical manifolds, or statistical manifolds admitting torsion, dubbed Quasi-Weyl and Quasi-Schrödinger manifolds, respectively. 
Thus, considering the dual connection ∇^* offers an interesting novel way to introduce torsion into these geometries. §.§ Statistical manifolds In the following, we will introduce the notion of a statistical manifold, assuming a torsion-free connection. This notion is based on a particular geometry, which generalizes the Riemannian one.Let us consider an n-dimensional (pseudo)-Riemannian manifold M with local coordinates { x^μ} and a torsion-free affine connection ∇, described by the coefficients g ( ∇_∂_μ∂_ν, ∂_ρ)=Γ_ρνμ. If there exists a totally symmetric tensor C_μνρ (often called the cubic tensor), such that: ∇_μ g_νρ=C_μνρ, then the pair (M,g,∇) is called a statistical manifold. The historical origin of the name is tied to the close relation of this specific geometry with statistics. To illustrate this, let us consider a family of probability distributions p=p(x,σ) with the normalization condition ∫ p(x,σ) dx=1. Supposing that p(x,σ) depends smoothly on the parameters σ=(σ_1,σ_2,…,σ_n), these probability distributions form a differentiable manifold. Moreover, we can also define a natural metric on them, known as the Fischer metric, given by: g_ij(σ):=E_σ[∂_i l, ∂_j l]=∫ p(x,σ) ∂_i l(x,σ) ∂_j l(x,σ) dx, where E_σ[f]=∫ f(x,σ) p(x,σ) dx is the expectation value of a function f, and l(x,σ):=ln p(x,σ) denotes the log-likelihood function. Given this data, a natural cubic tensor C_ijk(σ):=∫ p(x,σ) ∂_i l(x,σ) ∂_j l(x,σ) ∂_k l(x,σ) dx can be defined, which makes the tuple (M,g,∇) a statistical manifold. However, we would like to point out that in the following, we will simply refer to statistical manifolds from a completely geometric perspecive, not necessarily tied to statistics. Mathematicians observed that if we have a statistical manifold (M,g,∇), there exists another connection ∇^* such that the pair (M,g,∇^*) also forms a statistical manifold (often referred to as the dual statistical manifold). This special connection is known as the dual connection. The connection coefficients of ∇^* are given by <cit.> ∂_μ g_νρ -g_βρΓ^β _ν _μ - g_βνΓ^β* _ρ _μ=0. The above equation implies that the inner product between two vectors A_μ and B_μ is preserved under the parallel transport following the two dual affine connection g_μνA^μ(x)B^ν(x)=g_μν( x+dx) A^μ( x+dx) B^ν∗( x+dx) , where A^μ( x+dx) and B^ν∗( x+dx) are parallelly transported vectors by the dual affine connections Γ^β_ν _μ and Γ^β∗_ρν, respectively, so that δ A^μ=-Γ^μ _ν _ρA^νdx^ρ,δ B^μ=-Γ^μ∗_νρB^νdx^ρ. Next we consider equation (<ref>), together with the equations ∂ _ρg_μν-g_νβΓ^β _μ _ρ-g_μβΓ^β^∗_ν _ρ=0, ∂_ν g_ρμ - g_μβΓ^β _ρ _ν - g_ρβΓ^β^∗ _μ _ν=0. If we set ∘Γ^λ_μν=1/2( Γ^λ _μ _ν + Γ^λ ^∗ _μ _ν) , then ∘∇_λ g^μν=∂_λg^μ ^ν + g^μρ∘Γ^ν _ρ _λ+g^νρ∘Γ^μ _ρ _λ=0, where ∘Γ^ν _ρ_λ is the Levi-Civita connection on the Riemannian manifold M. The torsion-free dual affine connections are given as a deviation from the Levi-Civita connection ∘Γ^ν _ρ_λ by a totally symmetric tensor C^λ _μ _ν, so that Γ^λ _μ _ν=∘Γ^λ _μ _ν -1/2 C^λ _μ _ν, Γ^λ ^∗ _μ _ν= ∘Γ^λ _μ _ν + 1/2C^λ _μ _ν. Essentially, this can be viewed as a special case of our more general abstract mathematical result <ref>. More concretely, in our case, we assumed that the connection was torsion-free and that C^λ _μ _ν was completely symmetric. 
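As a concrete illustration of the statistical origin of these objects, the following symbolic sketch (assuming sympy; it is a pedagogical example, not part of the construction used in this paper) evaluates the Fisher metric g_ij=E[∂_i l ∂_j l] and the cubic tensor C_ijk=E[∂_i l ∂_j l ∂_k l] for the one-dimensional Gaussian family with parameters σ=(μ,s), for which one expects g=diag(1/s^2, 2/s^2) and, for instance, C_μμs=2/s^3.

```python
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
s = sp.symbols('s', positive=True)

# one-dimensional Gaussian family p(x; mu, s) and its log-likelihood
p = sp.exp(-(x - mu)**2 / (2 * s**2)) / (s * sp.sqrt(2 * sp.pi))
l = sp.log(p)
params = (mu, s)

def E(expr):
    """Expectation value under p(x; mu, s)."""
    return sp.simplify(sp.integrate(p * expr, (x, -sp.oo, sp.oo)))

# Fisher metric g_ij = E[d_i l d_j l]
g = sp.Matrix(2, 2, lambda i, j: E(sp.diff(l, params[i]) * sp.diff(l, params[j])))
print(g)                         # expected: diag(1/s**2, 2/s**2)

# totally symmetric cubic tensor C_ijk = E[d_i l d_j l d_k l]
def C(i, j, k):
    return E(sp.diff(l, params[i]) * sp.diff(l, params[j]) * sp.diff(l, params[k]))

print(C(0, 0, 1), C(1, 1, 1))    # expected: 2/s**3 and 8/s**3
```

In the torsion-free, totally symmetric case just described, this cubic tensor is precisely the object that reappears below as a completely symmetric non-metricity.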
In this case, since a generic connection can be decomposed into torsion and non-metricity parts, the cubic tensor, from the modified gravity point of view, is essentially a completely symmetric non-metricity up to sign conventions (this has already been reported in <cit.>). Thus, we immediately obtain for the dual connection from the theorem that Q^∗ ^λ _μ _ν=-Q^λ _μ _ν=- C^λ _μ _ν, and trivially T^*=0, hence confirming (<ref>), (<ref>). §.§ Quasi-statistical manifolds In his seminal paper <cit.>, Takashi Kurose introduced the concept of a statistical manifold admitting torsion, also known as a quasi-statistical manifold. This was motivated by the idea of relating the non-commutativity of quantum mechanics to torsion. Simply put, a quasi-statistical manifold is a manifold M equipped with an affine connection ∇ with torsion, satisfying the local condition: ∇_μ g_νρ-∇_ν g_μρ+ T_ρμν=0, where T denotes the torsion tensor, defined as T^μ _ν _ρ=Γ^μ _ρ _ν - Γ^μ _ν _ρ, T_σνρ=g_σμT^μ _ν _ρ. By introducing the non-metricity tensor ∇_μ g_νρ=-Q_μνρ we can rewrite condition (<ref>) as - Q_μνρ + Q_νμρ + T_ρμν=0. In modified gravity, it is a standard result that a general affine connection can be decomposed as follows <cit.>: Γ^μ _ν _ρ=γ^μ _ν _ρ + 1/2 g^λμ(-Q_λνρ+ Q_ρλν + Q_νρλ) - 1/2g^λμ(T_ρνλ+T_νρλ- T_λρν). Using the distortion tensor N^μ _ν _ρ= 1/2 g^λμ(-Q_λνρ+ Q_ρλν + Q_νρλ) - 1/2g^λμ(T_ρνλ+T_νρλ- T_λρν) we obtain the well known formula Γ^μ _ν _ρ=γ^μ _ν _ρ + N^μ _ν _ρ. One is naturally led to ask the question: given the non-metricity Q and torsion T of ∇, is it possible to find the non-metricity Q^* and torsion T^* of ∇^*? Theorem <ref> provides the following positive answer: Q^*_μνρ=-Q_μνρ, T^*_ρμν= T_ρμν- Q_μνρ + Q_νμρ. It is important to note that even if ∇ is torsion-free, its dual ∇^* could have torsion. Hence, considering the dual of a connection could be seen as an information-geometric procedure to generate torsion. Moreover, from equation (<ref>) we can observe that if ∇ is torsion-free, then the equation T^*_ρμν + Q_μνρ - Q_νμρ=0 is satisfied. Relating the non-metricity Q to the non-metricity Q^* of the dual connection, we obtain T^*_ρμν- Q^*_μνρ+ Q^*_νμρ=0. It immediately follows that the dual connection ∇^* satisfies the condition (<ref>). Altogether, we conclude that it is a statistical manifold admitting torsion. As Corollary <ref> shows, this works the other way as well. The dual connection ∇^* is torsion-free if and only if the pair (M,g,∇) is a statistical manifold admitting torsion. We present two quasi-statistical manifolds, which arise from vectorial non-metricities. They have the special property that Q_μνρ is fully determined by a vector. Quasi-Weyl manifolds. We consider a manifold M equipped with a torsion-free affine connection with Weyl non-metricity Q_μνρ= W_μ g_νρ. For the dual connection we obtain Q^*_μνρ=- W_μ g_νρ, T^*_ρμν=- W_μ g_νρ + W_ν g_μρ. We observe that the dual connection has a semi-symmetric type of torsion. Hence, the procedure of dualizing a Weyl connection introduces a semi-symmetric type of torsion into this geometry. Quasi-Schrödinger manifolds. Let us consider a Schrödinger connection <cit.>, which is torsion-free and has non-metricity of the form Q_μνρ=π_μ g_νρ- 1/2(g_μνπ_ρ + g_μρπ_ν). In this case, the dual connection is specified by Q^*_μνρ = -π_μ g_νρ + 1/2( g_μνπ_ρ + g_μρπ_ν), T^*_ρμν =3/2(π_ν g_μρ - π_μ g_νρ). Notably, not only the parallel transport along ∇ and ∇^* combined preserves lengths in Schrödinger geometry. 
With the help of this connection, it is widely known that the lengths of vectors are preserved under autoparallel transport <cit.>. Since the dual connection ∇^* also has this type of non-metricity up to a rescaling with -1, it follows that it preserves the lengths of autoparallely transported vectors as well. A summary of our findings related to quasi-statistical manifolds with vectorial non-metricity is provided in Fig. <ref>. § LENGTH-PRESERVING BICONNECTION GRAVITY In this section, we describe the geometric way to couple the Schrödinger connection to its dual, using the recently defined mutual curvature tensor <cit.>. Then, the generalized Einstein equations are presented, and some of their properties are investigated. §.§ Mutual curvature of Schrödinger and quasi-Schrödinger geometries In <cit.> it has been shown that a gravitational theory based on two connections can exhibit the structure of a statistical manifold, if considered in the metric-affine approach. The building pillar of that theory is the mutual curvature tensor, which is defined as ℛ^λ_ρ _μ _ν = 1/2( R^λ _ρ _μ _ν^(1)+R^λ _ρ _μ _ν^(2)) - 1/2K^λ _σ _μK^σ_ρ _ν+1/2K^λ _σ _νK^σ _ρ _μ, where K is given by K^λ _μ _ν=N^λ _μ _ν^(1)- N^λ _μ _ν ^(2). The mutual Ricci curvature reads ℛ_ρν = 1/2(R_ρν^(1)+R_ρν^(2)) - 1/2K^λ _σ _λK^σ _ρ _ν +1/2K^λ _σ _νK^σ _ρ _λ. By transvecting with g^ρν we obtain the mutual Ricci scalar ℛ=1/2(R^(1)+R^(2)) - 1/2K^λ _σ _λK^σ _ρ ^ρ + 1/2K^λ _σ ^ρK^σ _ρ _λ. In contrast to the metric-affine approach taken in <cit.>, we fix the two connections and study the cosmological implications of the theory. As both the Schrödinger connection and its dual have the physically desirable property of preserving lengths under autoparallel transport, we study a biconnection theory based on these two connections. We introduce the notations R_μν^(1) :=R_μν, R^(1):=R, R_μ _ν^(2) :=R_μν^*, R^(2):=R^*, where the objects denoted with * refer to the dual Schrödinger connection. Similarly, we define the mutual difference tensor D_ρ _ν:=-1/2K^λ_σ _λK^σ _ρ _ν + 1/2K^λ _σ _νK^σ _ρ _λ, and its contraction, the mutual difference scalar D:=-1/2K^λ _σ _λK^σ _ρ ^ρ + 1/2K^λ _σ _νK^σ ^ν _λ. A lengthy algebraic computation detailed in Appendix <ref> gives: (i) Ricci tensor of the Schrödinger connection R_ρν=∘R_ρν - g_ρν∘∇_απ^α + 1/2∘∇_ρπ_ν - ∘∇_νπ_ρ - 1/2π_σπ^σ g_ρν - 1/4π_ρπ_ν. (ii) Ricci tensor of the dual R^*_ρν=∘R_ρν - 1/2 g_ρν∘∇_απ^α + ∘∇_ρπ_ν + ∘∇_νπ_ρ + π_σπ^σ g_ρν + 1/2π_ρπ_ν. (iii) Mutual difference tensor D_ρν=-1/4π_ρπ_ν -1/2π_σπ^σ g_ρν. From the above results, we easily obtain the mutual Ricci tensor ℛ_ρν =∘R_ρν - 3/4 g_ρν∘∇_απ^α + 3/4∘∇_ρπ_ν - 1/4π^σπ_σ g_ρν - 1/8π_ρπ_ν, and its contraction, the mutual Ricci scalar ℛ=∘R - 9/4∘∇_απ^α - 9/8π_απ^α. §.§ The gravitational field equations By analogy with Einstein gravity, we postulate that the gravitational field equations are ℛ_(ρν) - 1/2 g_ρνℛ= 8π T_ρν. Using equations (<ref>) and (<ref>), the field equations can be rewritten as ∘R_ρν - 1/2 g_ρν∘R + 3/8 g_ρν∘∇_απ^α+3/8∘∇_ρπ_ν +3/8∘∇_νπ_ρ + 5/16 g_ρνπ_απ^α -1/8π_ρπ_ν=8 π T_ρν. By contracting equation  (<ref>) we obtain -∘R+9/4∘∇_απ ^α +9/8π _απ^α =8π T. Hence, we can reformulate the gravitational field equations in the form ∘R_ρν = 8π(T_ρν-1/2g_ρνT)+1/4(3∘∇_απ ^α+π _απ^α)g_ρν -3/8∘∇_ρπ_ν-3/8∘∇_νπ_ρ+1/8π _ρπ_ν. 
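The algebra leading to the results (i)-(iii) and to the field equations is straightforward but index-heavy, and it can be cross-checked numerically. The sketch below is only such a cross-check, assuming numpy, a random metric and torsion vector as placeholders, and arrays indexed as Q[mu,nu,rho]=Q_μνρ, T[rho,mu,nu]=T_ρμν and N[lam,nu,rho]=N^λ_νρ, with the distortion built from the decomposition quoted in the previous section: it verifies that the dual Schrödinger data (Q^*,T^*) satisfy the quasi-statistical condition, and that the mutual difference tensor constructed from K^λ_μν=N^λ(1)_μν-N^λ(2)_μν reproduces the closed form (iii).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# random symmetric invertible "metric" and vector pi_mu; the check is purely
# algebraic, so the signature of g plays no role here
A = rng.normal(size=(n, n))
g = A @ A.T + n * np.eye(n)
ginv = np.linalg.inv(g)
pi = rng.normal(size=n)                  # pi_mu
pi2 = pi @ (ginv @ pi)                   # pi_sigma pi^sigma

# Schroedinger non-metricity Q_{mu nu rho} and the dual data Q*, T*
Q = (np.einsum('m,nr->mnr', pi, g)
     - 0.5 * (np.einsum('mn,r->mnr', g, pi) + np.einsum('mr,n->mnr', g, pi)))
Qd = -Q
Td = 1.5 * (np.einsum('n,mr->rmn', pi, g) - np.einsum('m,nr->rmn', pi, g))

# (1) quasi-statistical condition for the dual connection:
#     -Q*_{mu nu rho} + Q*_{nu mu rho} + T*_{rho mu nu} = 0
cond = -Qd + np.einsum('nmr->mnr', Qd) + np.einsum('rmn->mnr', Td)
assert np.allclose(cond, 0.0)

# (2) distortions N^lam_{nu rho}, their difference K, and D_{rho nu}
def distortion(Qt, Tt):
    nm = -Qt + np.einsum('ran->anr', Qt) + np.einsum('nra->anr', Qt)
    to = (np.einsum('rna->anr', Tt) + np.einsum('nra->anr', Tt)
          - np.einsum('arn->anr', Tt))
    return 0.5 * np.einsum('la,anr->lnr', ginv, nm - to)

K = distortion(Q, np.zeros_like(Q)) - distortion(Qd, Td)     # K^lam_{nu rho}
D = (-0.5 * np.einsum('s,srn->rn', np.einsum('lsl->s', K), K)
     + 0.5 * np.einsum('lsn,srl->rn', K, K))
assert np.allclose(D, -0.25 * np.outer(pi, pi) - 0.5 * pi2 * g)   # result (iii)
print("dual Schroedinger checks passed")
```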
§.§.§ Divergence of the matter energy-momentum tensor By taking the covariant divergence of equation  (<ref>) with respect to the Riemannian divergence operator ∘∇, and by taking into account that the divergence of the Einstein tensor identically vanishes, we obtain for the divergence of the matter energy-momentum tensor the expression 8π∘∇_ρT^ρ _ν = 3/8∘π _ν+3/8(∘∇_ν∘∇_ρ +∘∇_ρ∘∇_ν)π ^ρ +5/16∘∇_ν(π _ρπ ^ρ) -1/8∘∇_ρ(π _νπ^ρ)≡ A_ν, where ∘=∘∇_ρ∘∇^ρ. Hence, generally, in the present biconnection gravitational theory the matter energy-momentum tensor does not vanish identically. Equation of motion of massive particles. The equation of motion for a massive test particle can be found from equation  (<ref>). We adopt for the matter source a perfect fluid, which is described by two thermodynamic quantities only, the energy density ρ, and the thermodynamic pressure p. The energy-momentum tensor of the fluid is then given by T_μν=(ρ+p)u_μ u_ν+pg_μν, where u^μ is the four-velocity of the particle, normalized according to u^μ u_μ=-1. We also introduce the projection operator h_μν, defined according to h_μν=g_μν+u_ρu_ν, and with the property h_μλu^μ=0. By taking the divergence of equation  (<ref>), we obtain ∘∇_μ T^μν = h^μν∘∇_μ p + u^ν u_μ∘∇^μρ +(ρ+p)(u^ν∘∇_μ u^μ+u^μ∘∇_μ u^ν)=1/8πA^ν. We multiply now the above equation with h_ν^λ to find h_ν^λ∘∇_μ T^μν=(ρ+p)u^μ∘∇_μ u^λ+h^νλ∘∇_ν p=1/8πh_ν^λ A^ν, where we have used the identity u_μ∘∇_ν u^μ=0. Hence the equation of motion for a massive test particle in length-preserving biconnection gravity takes the form d^2x^λ/ds^2+∘Γ^λ_μνu^μ u^ν=1/(ρ +p)h^νλ(1/8πA_ν-∘∇_ν p)= f^λ, where we have used the definition of the covariant derivative to obtain u^μ∘∇_μ u^λ in the left hand side of equation  (<ref>). Moreover, by ∘Γ^λ_μν we have denoted the Levi-Civita connection associated to the metric. Hence, the motion of the massive particles in the present theory is non-geodesic, and an extra force f^λ is generated. The extra-force is perpendicular to the four-velocity, and it satisfies the condition f^λu_λ=0. If the torsion vector vanishes, the extra-force takes the form f^λ=-h^λν∇ _νp/(ρ +p), corresponding to the standard general relativistic fluid motion. §.§.§ The Newtonian limit We assume that one can formally represent f^λ as f^λ = (g^νλ+u^νu^λ)∘∇_νln√(Q)=h^νλ∘∇_νln√(Q), or 1/(ρ +p)(1/8πA_ν-∘∇_ν p)=∘∇_νln√(Q), where we have introduced the dimensionless function Q to describe the effects of the extra force. We assume that Q is not an explicit function of u^μ. The equation of motion  (<ref>) can be obtained from the variational principle S_p=∫ L_p ds=∫√(Q)√(g_μνu^μu^ν) ds, where S_p and L_p are the action and the Lagrangian density of the test particle. In the limit √(Q)→1, we reobtain the variational principle for the motion of the standard general relativistic particles. The equivalence between equation  (<ref>) and the variational principle (<ref>) can be proven by writing down the Lagrange equations corresponding to the action (<ref>), d/ds( ∂ L_p/∂ u^λ) - ∂ L_p/∂ x^λ=0 . Then, we successively obtain ∂ L_p/∂ u^λ=√(Q)u_λ and ∂ L_p/∂ x^λ =1/2√(Q)g_μν,λu^μu^ν + 1/2Q_,λ/Q , respectively. Finally, a simple calculation gives the equations of motion of the particle as d^2x^μ/ds^2+Γ^μ_ν_λu^νu^λ+( u^μu^ν+g^μν) ∇ _νln√(Q)=0. When √(Q)→ 1 the standard general relativistic equation for geodesic motion are reobtained. The Newtonian limit of the theory can be studied by using the variational principle equation  (<ref>). 
In the weak gravitational field limit, the interval ds for a dust fluid, with p=0, in motion in the gravitational field is given by ds≈√(1+2ϕ-v⃗^2) dt≈(1+ϕ-v⃗^2/2) dt, where ϕ is the Newtonian potential and v⃗ is the three-dimensional velocity of the fluid. We also represent √(Q)=1+U, ln√(Q)=ln (1+U)≈ U, thus obtaining ∘∇_ν U≈1/8πρA_ν. In the first order of approximation the equation of motion of the fluid is obtained from the variational principle δ∫[1+U+ϕ-v⃗^2/2]dt=0. By writing down the equation of motion corresponding to the variational principle (<ref>), we obtain the total acceleration a⃗ as given by a⃗=-∘∇ϕ-∘∇ U=-∘∇ϕ-1/8πρA⃗=a⃗_N+a⃗_E, where a⃗_N=-∘∇ϕ is the Newtonian acceleration, and the extra acceleration, induced by the presence of the torsion, is a⃗_E=-∘∇ U=-(1/8πρ)A⃗. The acceleration given by equation  (<ref>) is due to the presence of the torsion in the biconnection gravitational model. Since we have assumed that the fluid is pressureless, there is no hydrodynamical acceleration a⃗_p term in the expression of the total acceleration. Please note that such an acceleration term does exist in the general case. It is interesting to note that the extra-acceleration a⃗_e depends not only on the properties of the torsion vector, but also on the density of the fluid. § COSMOLOGICAL APPLICATIONS In the present Section we will investigate the cosmological implications of the generalized gravitational field equations (<ref>). As a first step in our study we obtain the generalized Friedmann equations, corresponding to a flat Friedmann-Lemaitre-Robertson-Walker metric. The existence of a de Sitter type solution is also investigated. Several cosmological models, corresponding to different choices of an effective equation of state for the geometric energy and pressure components are also investigated. A comparison with the observational data is also performed. §.§ The generalized Friedmann equations We consider a flat FLRW metric ds^2=-dt^2+a(t)^2 δ_ij dx^i dx^j. For matter, we take a perfect fluid with the energy-momentum tensor given by Eq. (<ref>). Due to the requirement of homogeneity and isotropy of the Universe, the field π can have only a temporal component π^μ=(ψ(t),0,0,0) π_μ=(-ψ(t),0,0,0). With these assumptions, a calculation detailed in Appendix <ref> yields the following Friedmann equations 3H^2 =8 πρ+9/8ψ̇+9/8Hψ -3/16ψ ^2=8π(ρ+ρ _eff), 2 Ḣ + 3 H^2= -8 π p + 3/8ψ̇+ 15/8 H ψ - 5/16ψ^2=-8π(p+p_eff), where we have denoted ρ_eff=1/8π(9/8ψ̇+9/8Hψ -3/16ψ ^2), and p_eff=1/8π(- 3/8ψ̇- 15/8 H ψ +5/16ψ^2), respectively. From the generalized Friedmann equations (<ref>) and (<ref>) we obtain the conservation equation d/dt[a^3(ρ +ρ_eff)]+(p+p_eff)d/dta^3=0, or, equivalently, ρ̇+3H(ρ+p)+ρ̇_eff+3H(ρ_eff+p_eff)=0. To facilitate the comparison of the theoretical predictions with the observational data we will use as an independent variable the redshift z, defined as 1+z=1/a. Then we can replace the time variable by z according to the relation d/dt=-(1+z)H(z)d/dz. Dimensionless and redshift representation. To simplify the mathematical formalism we introduce a set of dimensionless variables (τ, h, r, P, Ψ), defined according to τ =H_0t, H=H_0h, ρ =ρ _cr, p=1/3ρ _c P, ψ =H_0Ψ, where H_0 denotes the present day value of the Hubble function and ρ_c=3 H_0^2/8 π. Hence the Friedmann equations take the dimensionless form h^2=r+3/8dΨ/dτ+3/8hΨ-1/16Ψ^2, 2dh/dτ+3h^2=-P+3/8dΨ/dτ+15/8hΨ-5/16Ψ^2. 
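Before passing to the redshift variable, one can check symbolically that the total conservation equation quoted above holds identically once the two generalized Friedmann equations are imposed, independently of the explicit form of ρ_eff and p_eff. A minimal sketch, assuming sympy:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)
H = sp.diff(a, t) / a

# total (matter + effective) density and pressure defined by the two
# generalized Friedmann equations: 3H^2 = 8*pi*rho_tot, 2*Hdot + 3H^2 = -8*pi*p_tot
rho_tot = 3 * H**2 / (8 * sp.pi)
p_tot = -(2 * sp.diff(H, t) + 3 * H**2) / (8 * sp.pi)

# d/dt [a^3 rho_tot] + p_tot d/dt [a^3] should vanish identically
print(sp.simplify(sp.diff(a**3 * rho_tot, t) + p_tot * sp.diff(a**3, t)))   # -> 0
```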
In the redshift space we obtain the evolution equations h^2(z)=r(z)-3/8(1+z)h(z)dΨ/dz+3/8h(z)Ψ(z)-1/16Ψ^2(z), -2(1+z)h(z)dh(z)/dz+3 h^2(z) = -P(z) -3/8(1+z)h(z)dΨ (z)/dz +15/8h(z)Ψ (z)-5/16Ψ^2 (z). §.§.§ The de Sitter solution We consider now the de Sitter type solution of the generalized Friedmann equations, corresponding to H=H_0= constant. By assuming a dust Universe with p=0, the second Friedman equation (<ref>) gives for ψ the evolution equation ψ̇+5H_0ψ-5/6ψ ^2-8H_0^2=0, with the general solution given by ψ (t)=3H_0{ 1+1/√(15)tan[ √(15)/6 ( H_0t+α) ] } , where we have used the initial condition ψ (0)=ψ_0, and we have introduced the notation α =( 6/√(15)) tan ^-1[ √(15)( ψ _0-3√(15)H_0) /3H_0] /(√(15)H_0). The evolution of the matter energy density is obtained as 8πρ (t)=3/20 H_0^2 {8-3 ^2[1/2√(5/3)( H_0 t+α)]}. It is interesting to note that the matter energy density is a periodic function. However, it reaches the zero value after a finite time interval t_f=(1.41-α)/H_0, which represents the end of the de Sitter phase for the present biconnection model. §.§ Specific cosmological models In the present subsection we will consider several examples of specific cosmological models in the framework of length-preserving biconnection gravity. For each case we will also consider a detailed comparison with the observational data, as well as with the predictions of the standard ΛCDM model. In the following we will restrict our analysis to the case of a Universe filled with a pressureless dust, with p=0. §.§.§ Conservative cosmological (CC) model As a first example of a cosmological model we consider the case in which the matter energy density is conserved, thus satisfying the equation ρ̇+3H(ρ+p)=0. Therefore, the effective geometric energy density is also conserved, and we have ρ̇_eff+3H(ρ_eff+p_eff)=0, giving for the torsion vector the evolution equation ψ̈+Ḣψ+3Hψ̇-1/3ψψ̇-2H^2ψ+1/3Hψ^2=0. We also introduce a new variable u=dΨ/dτ. Thus, the system of evolution equations describing the conservative cosmological evolution on a statistical manifold takes the form -(1+z)h(z)dΨ (z)/dz=u(z), -(1+z)h(z)du(z)/dz-(1+z)h(z)dh(z)/dzΨ (z) +3h(z)u(z) -1/3Ψ (z)u(z) -2h^2(z)Ψ (z)+1/3h(z)Ψ ^2(z)=0, -2(1+z)h(z)dh(z)/dz+3 h^2(z) = 3/8u(z)+15/8h(z)Ψ (z) -5/16Ψ^2 (z). The system of equations (<ref>)-(<ref>) must be integrated with the initial conditions h(0)=1, Ψ (0)=Ψ_0, and u(0)=u_0, respectively. From the first Friedmann equation (<ref>), we obtain the evolution of the matter density r(z)=h^2(z)-3/8u(z) -3/8h(z) Ψ(z) + 1/16Ψ^2(z). Hence, the present day matter density r(0) is determined by the initial values u_0 and Ψ_0 as r(0)=1-3/8 u_0 - 3/8Ψ_0 + 1/16Ψ_0^2. §.§.§ Linear equation of state cosmological (LESC) model As a second cosmological model in the torsional gravity on statistical manifolds theory we consider the case in which the dark energy effective pressure and density are related by a linear equation of state, given by p_eff(z)=ω_0 ρ_eff(z). In this model, the equations describing the evolution of the Universe are given by (1+z)h(z)dΨ (z)/dz= [3 ω_0 +5] Ψ (z) [6 h(z)-Ψ (z)]/6 [3ω_0 +1], -2(1+z)h(z)dh(z)/dz+3 h^2(z) = -3/8(1+z)h(z)dΨ (z)/dz +15/8h(z)Ψ (z)-5/16Ψ^2 (z). The system of equations  (<ref>) and (<ref>) must be solved with the initial conditions h(0)=1, and Ψ (0)=Ψ _0. After solving the system, we obtain the matter density using the closure relation r(z)=h^2(z)+3/8(1+z)h(z) d Ψ(z)/dz-3/8h(z) Ψ(z)+1/16Ψ^2(z). 
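For illustration, the LESC system can be integrated numerically as follows (assuming scipy; the values of ω_0 and Ψ_0 below are arbitrary placeholders rather than the best-fit parameters obtained in the next section, and ω_0 must differ from -1/3): the two evolution equations are solved for h(z) and Ψ(z), and the matter density is then recovered from the closure relation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values, not the best-fit ones
omega0, Psi0 = -1.0, 0.1

def rhs(z, y):
    h, Psi = y
    dPsi = (3 * omega0 + 5) * Psi * (6 * h - Psi) / (6 * (3 * omega0 + 1) * (1 + z) * h)
    dh = (3 * h**2 + 3 / 8 * (1 + z) * h * dPsi
          - 15 / 8 * h * Psi + 5 / 16 * Psi**2) / (2 * (1 + z) * h)
    return [dh, dPsi]

sol = solve_ivp(rhs, (0.0, 3.0), [1.0, Psi0], dense_output=True, rtol=1e-8)

z = np.linspace(0.0, 3.0, 7)
h, Psi = sol.sol(z)
dPsi = np.array([rhs(zi, yi)[1] for zi, yi in zip(z, sol.sol(z).T)])
r = h**2 + 3 / 8 * (1 + z) * h * dPsi - 3 / 8 * h * Psi + Psi**2 / 16
print(np.c_[z, h, r])     # dimensionless Hubble function and matter density
```

The CC model can be handled in the same way, with the additional variable u=dΨ/dτ evolved through its own equation.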
§ COSMOLOGICAL TESTS OF THE CC AND LESC MODELS In this Section, we compare the predictions of the proposed cosmological models with those of the standard ΛCDM model, as well as with a small set of observational data for the Hubble function. To recap, the Hubble function for the ΛCDM model is expressed as H(z)=H_0 √(Ω_M(1+z)^3+Ω_Λ), where H_0 is the present day value of the Hubble function, Ω_m is the current matter density, and Ω_Λ represents the dark energy density. These two parameters satisfy the constraint Ω_M+Ω_Λ=1. §.§ Parameter estimation To begin our comparison, we determine the optimal fitting values for the ΛCDM model and the two other models, respectively. These are found by performing a Likelihood analysis, using observational data of the Hubble function within the redshift range z ∈ (0.07,2.36) as provided in <cit.>. The key ingredient in the statistical analysis is the likelihood function L = L_0 e^-χ^2/2, where L_0 is a normalization constant, and χ^2 is the chi-squared statistic. This function is also known as the negative logarithmic likelihood function <cit.>, which is crucial for practical purposes since likelihood values can be very small. The chi-squared statistic is defined as χ^2 = ∑_i( O_i - T_i/σ_i)^2. Here i ranges over the data points, O_i are the values obtained from the observational data, T_i are the values predicted by the theory and σ_i are the errors associated with the i-th data point. The best-fit values of the parameters are determined by maximizing the likelihood function, which is equivalent to minimizing the chi-squared statistic. For the ΛCDM model they are given in Table <ref>, while for conservative cosmological (CC) model and the linear equation of state cosmological model (LESC) they are found in Table <ref> and <ref>, respectively. The 1σ confidence intervals were determined using the distribution of Δχ^2_i = χ^2_i - χ_o^2 where χ_o^2 represents the negative logarithmic likelihood value at the optimal parameter, and i spans a range of values around this optimal point. The Δχ^2_i values follow a chi-squared distribution with one degree of freedom (df = 1), as we vary one parameter at a time <cit.>. The critical value, which encapsulates 68.3% of the area under the curve (corresponding to one standard deviation), is approximately Δχ^2_i ≃ 1. Parameters were identified where this critical threshold is exceeded, both below and above the optimal value. Similarly, the 2σ confidence interval is determined with a critical value that encompasses 95.4% of the area under the curve, corresponding to two standard deviations, which is approximately Δχ^2_i ≃ 4. Additionally, the two-dimensional Δχ^2_ij surface can be examined, when two parameters are varied simultaneously. For df=2, the critical values are approximately Δχ^2_ij≃ 2.3 and Δχ^2_ij≃ 6.2, corresponding to 1σ and 2σ confidence levels, respectively. This approach allows for visual conclusions regarding the relational behavior of parameter pairs relative to the optimum (see Fig. <ref>). To identify the best model that describes the observational data, we employ two information criteria: the corrected Akaike Information Criterion <cit.> (AIC_c) and the Bayesian Information Criterion (BIC) <cit.>. In this study, we use AIC_c instead of AIC due to our small sample size, as AIC tends to favor models with more parameters in such cases. AIC_c addresses this issue by including a correction term for small datasets. 
It is defined as AIC_c= χ^2_min+2k+2k^2+2k/n-k-1, where n is the number of data points and k is the number of free parameters in the model. It is easily seen that the correction term vanishes as n →∞, making AIC_c essentially identical to AIC for large datasets.The model with the smallest AIC_c value is the most supported by the observational data, and is usually chosen to be the reference model. To evaluate how closely other models resemble the reference model, we compute the following quantity Δ AIC_c=AIC_c,model- AIC_c,reference. Depending on the Δ AIC_c value, models can be categorized as follows: well-supported by observations (0<Δ AIC_c<2), moderately supported by observations (4< Δ AIC_c<7) or not supported by observations (Δ AIC_c >10).The Bayesian Information Criterion (BIC) is another widely utilized method for model selection, rooted in Bayesian probability theory. Like AIC_c, BIC evaluates models based on their fit to the data, but it imposes a heavier penalty for models with a larger number of parameters. Hence, it treats more severely the question of overfitting than AIC_c, often favoring simpler models. Formally, the BIC is defined as BIC=χ^2_min+k ln(n). In a similar fashion, the model with the lowest BIC value is deemed the best fit for the data and is selected as the reference model. The difference Δ BIC=BIC_model-BIC_reference indicates the relative quality of the model compared to the reference. A model is considered weakly disfavored by the data if 0< Δ BIC <2, moderately disfavored if 2< Δ BIC ≤ 6, and strongly disfavored if Δ BIC >6.The χ^2, AIC_c, and BIC values of the three models are presented in Table <ref>. Based on the corrected Akaike Information Criterion, the conservative model best describes the data and should be considered the reference model. However, given our small data set and the common practice of using the ΛCDM model as the reference, we will adopt this approach. Consequently, the Δ AIC_c for the conservative model is negative. For the LESC model, the Δ AIC_c is 0.3945, indicating that this model is well-supported by the data. According to the BIC comparison, the standard ΛCDM model best fits the data, the CC model is weakly disfavored, and the LESC model is moderately disfavored. §.§ Cosmological quantities Having determined the optimal parameter values for the ΛCDM model as well as the LESC and CC models, we proceed to compare their cosmological predictions. For a detailed analysis, outside of the Hubble function and the deceleration parameter, we will also compare the jerk and snap parameters of these models. Hubble function. The Hubble function H(z) of the three models is compared with the observational data obtained from <cit.>. Figure <ref> shows that for the observable range encompassing z ∈ (0.07, 2.36) the match between the three models and the observational data is perfect, while for larger z values the difference becomes observable. Deceleration parameter The deceleration parameter q(z) is a dimensionless measure of the rate at which the expansion of the Universe is slowing down (q>0), or speeding up (q<0). It is defined in terms of the second derivative of the scale factor a(t) with respect to time q=-äa/a^2=-Ḣ/H^2-1=(1+z)h(z) d h(z)/h(z)^2 -1. The redshift dependence of q(z) for the three distinct models is illustrated in Fig. <ref>. Compared to the ΛCDM model, both the LESC and CC models predict a slightly larger value of the deceleration parameter for small redshift values 0<z<1. 
For higher redshifts z>1, the LESC and CC models predict a slightly lower, but positive value of q(z). Matter density. The matter densities of the ΛCDM, LESC and CC model are depicted in Fig. <ref>. The predictions of the models basically coincide at low redshifts, up to z ≃ 0.5, however the LESC and CC cosmological models predict a larger value of matter density at higher redshifts. The LESC model's predictions at high redshifts are closer to the predictions of the ΛCDM than the ones of the CC model. Jerk. The jerk and snap parameters provides insights beyond the deceleration parameter, capturing more subtle variations in the Universe's expansion dynamics <cit.>. Formally, the jerk parameter is defined as a higher order derivative of the scale factor j=1/ad^3a/d τ[ d a/d τ]^-3=q(2q+1)+(1+z) dq/dz. Snap The snap parameter is a higher-order dimensionless quantity that measures the rate of change of the jerk parameter. Mathematically, it is defined as s = 1/ad^4 a/dτ^4[ 1/ada/dτ]^-4 = j - 1/3 ( q - 1/2). The jerk and snap parameters of the ΛCDM, LESC and CC models can be seen on Fig. <ref>. There are significant differences between the cosmological behaviors of these parameters, indicating the possibility of the detailed testing of these cosmological models once high quality observational cosmological data will be available. 𝐎𝐦(𝐳) diagnostic The Om(z) diagnostic function introduced by Sahni et. al. <cit.> is an important tool in differentiating between alternative cosmological models, and in the comparison of the models with ΛCDM. The Om(z) function is given by Om(z) = H^2(z)/H_0^2 - 1/(1+z)^3 - 1 = h^2(z) - 1/(1+z)^3 - 1. For the ΛCDM model, the Om(z) function is a constant. A positive slope indicates a phantom-like evolution of the dark energy, while a negative slope indicates quintessence-like dynamics. The Om(z) diagnostic functions of the LESC and CC models are shown in Fig. <ref>. For both of these models, Om(z) has a negative slope throughout the cosmological evolution, which indicates a quintessence-like behaviour. Torsion and related vectors The torsion vector Ψ and its derivative u are represented in Fig. <ref>. For the conservative model, Ψ is decreasing monotonically up to the redshift z≈ 0.5, and becomes an increasing function afterwards. For the linear equation of state model, the torsion is monotonically decreasing. In either case, up to the redshift z ≃ 3, they take positive values. For the CC model, u is also monotonically decreasing, but takes negative values during the cosmological evolution. Statefinder pairs. The Statefinder pairs (j,s) and (j,q) are represented in Fig. <ref>. The evolution of both pairs show a significant difference between the two biconnection cosmological models, and ΛCDM. While as a function of s, j is a decreasing function for the CC model, for the LESC model j increases, after a very short decreasing phase. A similar behavior can be observed for the dependence of j on q. Both pairs have at the present time numerical values relatively closed to the ΛCDM point. The (j,s) plot indicates that both the CC and the LESC models are in the quintessence region. Age of the Universe. The age of the Universe is an important prediction of a cosmological model. It can be directly computed from the Hubble function as t_U=1/H_0lim_z →∞∫_0^zdz'/(1+z')H(z'). For certain models where H(z) takes an analytic form, t_U can be obtained by a direct computation, without recurring to numerical methods. 
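The cosmographic quantities introduced in this subsection, as well as the age integral above, are straightforward to evaluate once h(z) is known. The sketch below uses the analytic ΛCDM form of h(z) with illustrative parameter values; an interpolant of the numerical CC or LESC solutions can be substituted for h(z).

```python
# Sketch: deceleration, jerk, Om(z) diagnostic and the age of the Universe
# from a given h(z). The LambdaCDM form and parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

H0 = 70.1              # km/s/Mpc, illustrative
Om = 0.3               # illustrative

def h(z):              # dimensionless Hubble rate; replace by a model interpolant
    return np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

def dh(z, eps=1e-5):   # central finite difference
    return (h(z + eps) - h(z - eps)) / (2.0 * eps)

def q(z):              # deceleration parameter, q = (1+z) h'/h - 1
    return (1.0 + z) * dh(z) / h(z) - 1.0

def dq(z, eps=1e-5):
    return (q(z + eps) - q(z - eps)) / (2.0 * eps)

def jerk(z):           # j = q(2q+1) + (1+z) dq/dz
    return q(z) * (2.0 * q(z) + 1.0) + (1.0 + z) * dq(z)

def Om_diag(z):        # Om(z) = (h^2 - 1)/((1+z)^3 - 1)
    return (h(z)**2 - 1.0) / ((1.0 + z)**3 - 1.0)

# age of the Universe: t_U = (1/H0) * int_0^infty dz / ((1+z) h(z))
Gyr_per_invH0 = 977.8 / H0          # 1/H0 in Gyr for H0 in km/s/Mpc
integral, _ = quad(lambda z: 1.0 / ((1.0 + z) * h(z)), 0.0, np.inf)
print("q(0) =", q(0.0), " j(0) =", jerk(0.0), " Om(0.5) =", Om_diag(0.5))
print("t_U =", Gyr_per_invH0 * integral, "Gyr")
```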
However, in our case, since H(z) is obtained as the solution of a differential equation, we must calculate the age of the universe numerically. We obtain the following results: * ΛCDM: t_U ≃ 1.41 × 10^10 years, * LESC model: t_U ≃ 1.45 × 10^10 years, * CC model: t_U ≃ 1.39 × 10^10 years. § DISCUSSIONS AND FINAL REMARKS In this paper, we investigated the gravitational and cosmological implications of a biconnection model inspired by the recently introduced mutual curvature tensor. More specifically, we have considered as a starting point the notion of a dual connection ∇^* associated with a connection ∇ on a specific geometric structure (manifold). These dually coupled connections are well-known and have been studied in the context of statistical manifolds. On a statistical manifold a cubic tensor C(X,Y,Z)=(∇_X g)(Y,Z) is defined, which is completely symmetric. Using this tensor, one can construct two connections Γ^(1,2)=∘Γ± C, whose average is the Levi-Civita connection ∘Γ. Thus, the field C describes the deviations from standard Riemannian geometry. The connections ∇ and ∇^* described by Γ^(1,2) are said to be dual to each other. In the case of statistical manifolds one usually imposes the vanishing of the torsion tensors T,T^* of the connection and its dual, respectively. In the present study, we have relaxed this condition by considering statistical manifolds, or information geometries, with torsion and non-metricity. These type of structures are called quasi-statistical manifolds <cit.>, and they significantly enlarge the field of information geometry, also opening some new perspectives for physical applications. We have also introduced a specific quasi-statistical manifold structure, which is based on the Schrödinger connection <cit.>, a (still) little known extension of Weyl geometry, initially proposed by Schrödinger <cit.>, which conserves, in the presence of nonmetricity, the length of the vectors under autoparallel transport. The Schrödinger connection, as well as its dual, are fully determined by a vector field. In this framework, once the connection coefficients of the Schrödinger connection and its dual were determined, we computed the mutual curvature tensor, and the mutual curvature scalar of these two connections, obtaining all the necessary tools to build a novel gravitational field theory. In our approach we have postulated the field equations as having the same form as the standard Einstein field equations, but with the Ricci tensor and scalar being replaced by the mutual curvature tensor and mutual curvature scalar, respectively. This leads to a set of field equations that generalize the Einstein's theory by adding some new, torsion dependent terms into the traditional gravitational field equations as formulated in Riemannian geometry. We have investigated in detail the physical properties and implications of the new field equations. First of all, one should point out that in the present theory the matter energy-momentum tensor is not conserved, since the divergence of T_μν is non-zero. This is a situation specific mostly to gravitational theories with geometry-matter coupling <cit.>, but it also appears in the present context. Energy-momentum preserving models can be constructed in the proposed length-preserving biconnection gravity as well, if this condition is added to the basic theory. 
The non-conservation of the matter energy-momentum tensor leads to the non-geodesic motion of massive test particles, and to the presence of an extra force, which depends on the torsion vector. We have explicitly obtained the equations of motion of the massive particles, and we have considered the Newtonian limit of the fluid motion equations for dust. In this case the extra-force depends not only on the torsion vector, but also on the fluid density. The cosmological implications of the theory have been investigated in detail. As a first step in this analysis the generalized Friedmann equations of the length-preserving biconnection theory have been obtained, by assuming a FLRW type geometry, and a particular form of the torsion vector, which preserves the homogeneity and isotropy of the spacetime. The extra terms in the generalized Friedmann equations can be considered as describing the energy density and pressure of an effective, geometric type, dark energy. In order to close the system of cosmological field equations, one must introduce a supplementary condition/relation for the torsion vector. We have considered two specific cosmological models, in which, for the first model, we have required the conservation of the matter energy-momentum tensor, while in the second model we have assumed the existence of a linear equation of state relating the pressure and energy density of the dark energy. In both cases we have confronted the theoretical predictions of the models with a set of 57 observational data points for the Hubble function, as well as with the results obtained from fitting the ΛCDM model with the same data. Both models can be considered as giving an acceptable description of the observational data in the redshift range z∈ (0,2.5), with the differences increasing for larger redshifts. Differences do also appear in the behaviors of the deceleration parameter, and of the matter density. The length preserving biconnection models do predict a present day value of the Hubble function in the range 66.2-66.95 km/s/Mpc, values which are quite closed to the value H(0)=67.4± 0.5 km/s/Mpc, obtained from the Planck data <cit.>. A recent cosmological model independent determination of H(0), by using Fast Radio Bursts obtained the value 67.3± 6.6 km/s/Mpc. A similar statistical analysis of the same Hubble data performed in the framework of the ΛCDM model gives for H(0) the significantly higher value of 70.1 km/s/Mpc. Hence, the results obtained in the framework of the cosmological models considered in the present work may point towards a possible solution of the Hubble tension, and the need to replace the standard ΛCDM model. The age of the Universe, as predicted by length-preserving biconnection models is roughly the same as the age predicted by the standard cosmology, with one model (with the linear equation of state for dark energy) predicting a slightly higher age, while the second (the conservative matter model) predicting a lower age. The age difference of a few hundred millions of years may also represent a solution to the very important problem of the early formation of the supermassive black holes. In a Universe older by five hundred million years, there will be enough time (more than one billion years), to allow the formation of the early supermassive objects detected by JWST <cit.>. 
Moreover, the number densities of UV-bright galaxies at z ≥ 10, inferred from the JWST observations are in tension with the predictions of the majority of the theoretical models previously developed <cit.>. The non-geodesic motion of the massive test particles in the present model may have some implications on the understanding of the dynamics of the massive particles gravitating around the galactic centers. In standard general relativity the gravitational effects due to the presence of an arbitrary matter distribution are described by the term a_N^α=Γ _μν^αu^μu^ν of the geodesic equation of motion. However, in the Newtonian limit, and in three dimensions, the equation of motion of the massive particles in the length-preserving biconnection model is given by equation (<ref>), a⃗=a⃗_N+a⃗_E, where a⃗ is the total acceleration of the particle, a⃗_N is the Newtonian gravitational acceleration, and a⃗_E is the acceleration due to the presence of the torsion and nonconservation effects. For a⃗_E=0, the equation of motion reduces to the Newtonian one, with a⃗=a⃗_N=-GMr⃗/r^3. From the generalized equation of motion in the presence of the extra force we obtain a⃗_E·a⃗_N=1/2(a^2-a_N^2-a_E^2), where the dot denotes the three-dimensional scalar product. Hence, we have obtained the unknown vector a⃗_N as a function of the total acceleration a⃗, of the extra acceleration a⃗_E, and of the magnitudes a^2, a_N^2 and a_E^2. One can now obtain the vector a⃗_N as a⃗_N=1/2( a^2-a_N^2-a_E^2) a⃗/aa_E. By assuming that the Newtonian gravitational acceleration is small, a_N≪ a, we obtain the relation a⃗_N≈1/2a( 1-a_E^2/a^2) 1/a_Ea⃗. By denoting 1/a_M≡1/2a_E( 1-a_E^2/a^2), we obtain a⃗_N≈a/a_Ma⃗. Equation  (<ref>) is similar to the acceleration equation introduced from phenomenological considerations in the MOND theory <cit.>. We can thus obtain first a≈√(a_Ma_N), and then, by using the Newtonian expression of the gravitational acceleration a_N=GM/r^2, we find a≈√(a_mGM)/r=v_tg^2/r, where by v_tg we have denoted the tangential velocity of the particle. Hence, v_tg^2→ v_∞^2=√(a_MGM), and from this relation we obtain the Tully-Fisher law L ∼ v_∞^4 in the form v_∞^4=a_MGM, where L is the galactic luminosity, which is proportional to the galactic mass <cit.>. Hence, the study of the galactic rotation curves opens another possibility, via the presence of the extra acceleration, to test the presence of the torsion in the length-preserving biconnection theory. In the present approach a_M is not a universal constant, as it is in the standard MOND theory, since it depends on the torsion vector. In the present paper we have introduced a geometric theory of gravity, which is based on mathematical concepts adopted from information geometry. The field equations we have considered are a generalization of the Einstein equations of standard general relativity, and differ from them in the empty space, as well as in the presence of matter, due to the presence of a torsion vector. Hence, the predictions of the present theory leads, and may lead, to some important differences, as compared to the predictions of Einstein's general relativity, in several problems of current interest, like the cosmology of the early and late Universe, black holes and gravitational collapse or in the generation of gravitational waves. 
The detailed investigations of these physical, astrophysical and cosmological phenomena may also provide specific effects and signatures that could help in distinguishing and discriminating between the various existing gravitational theories. § ACKNOWLEDGEMENTS The work of L.Cs. and M.J. is supported by Collegium Talentum Hungary. L.Cs. Would like to thank Xumin Liang, Damianos Iosifidis for the helpful discussions and for the StarUBB research scholarship. § COORDINATE-FREE TREATMENT OF QUASI-STATISTICAL MANIFOLDS In this section we provide a geometric, coordinate-free approach to quasi-statistical manifolds and the results used in our manuscript. Let (M,g,∇) be a pseudo-Riemannian manifold. Two affine connections ∇ and ∇^* are said to be dual with respect to the metric if X(g(Y,Z))=g( ∇_X Y,Z )+ g( Y,∇_X^* Z ) is satisfied for all vector fields X,Y,Z. In a local chart with X=∂_μ,Y=∂_ν,Z=∂_ρ condition (<ref>) takes the form ∂_μ g_νρ=Γ_ρνμ + Γ^*_νρμ. A pseudo-Riemannian manifold (M,g) equipped with a torsionful affine connection ∇ is called a quasi-statistical manifold if ( ∇_X g )(Y,Z) - ( ∇_Y g )(X,Z) + g (T(X,Y),Z )=0 for all vector fields X,Y,Z. In case ∇ is torsion-free, we recover ( ∇_X g )(Y,Z)=( ∇_Y g ) (X,Z), which is the condition for the pair (M,g,∇) to be a statistical manifold. In a local chart with X=∂_μ, Y=∂_ν,Z=∂_ρ condition (<ref>) is equivalent to ∇_μ g_νρ - ∇_ν g_μρ + T_ρμν=0. We now present a theorem, which relates the torsion and non-metricity of a connection and its dual, but first let us formally introduce these two tensors in a geometric setting. The torsion of an affine connection ∇ is the vector-valued tensor field defined by T(X,Y)=∇_X Y -∇_Y X - [X,Y]. The non-metricity of an affine connection ∇ is the (0,3)-tensor field defined by Q(X,Y,Z):=(-∇_X g )(Y,Z). Let (M,g) be a pseudo-Riemannian manifold equipped with two affine connections (∇,∇^*), which are dual with respect to g. Then torsion T^* and the non-metricity Q^* of the dual connection ∇^* satisfy the conditions: (i) Non-metricity of the dual connection: Q^*(X,Y,Z)=-Q(X,Y,Z). (ii) Torsion of the dual connection: g (T^*(X,Y),Z ) =g( T(X,Y),Z ) - Q(X,Y,Z)+ Q(Y,X,Z). To prove the first statement, we follow the definitions, and employ the Leibnitz rule two times: Q^*(X,Y,Z)=( - ∇^*_X g )(Y,Z) =-X(g,(Y,Z))+ g ( ∇^*_X Y,Z ) + g(Y,∇^*_X Z ) =-g(∇_X Z,Y)+X(g(Y,Z))-g(∇_X Y,Z) =(∇_X g)(Y,Z)=-Q(X,Y,Z). For the second part, as ∇ is dual to ∇^*, we have g( Y, ∇^*_X Z )=X(g(Y,Z))-g( ∇_X Y,Z ), g(Z,∇^*_XY )=X(g(Z,Y))- g ( ∇_X Z,Y ), g(Z,∇^*_Y X )=Y(g(Z,X))-(∇_Y Z,X ). Thus, by employing the definition of torsion, we obtain g(T^*(X,Y),Z )=g ( ∇^*_X Y - ∇^*_Y X -[X,Y],Z). Expanding the brackets using multilinearity and equations (<ref>),(<ref>) and (<ref>) yields g(T^*(X,Y),Z )=X(g(Z,Y))-g(∇_X Z,Y) -Y(g(Z,X))+g(∇_Y Z,X)-g([X,Y],Z), or equivalently g(T^*(X,Y),Z )= ∇_X (g(Z,Y))- g( ∇_X Z,Y ) -∇_Y(g(Z,X)) + g ( ∇_Y Z,X ) - g([X,Y],Z). Applying the Leibnitz rule, we get for the right hand side (∇_X g)(Y,Z) +g(∇_X Z,Y)+g(Z,∇_X Y)-g(∇_X Z,Y) - (∇_Y g)(Z,X) - g(∇_Y Z,X)- g(Z,∇_Y X) + g(∇_Y Z,X) -g([X,Y],Z). We identify the orange terms with non-metricity, by definition. The blue and red terms cancel, leaving us with g(T^*(X,Y),Z)=-Q(X,Y,Z)+g(Z,∇_X Y) +Q(Y,Z,X) - g(Z,∇_Y X) - g([X,Y],Z). Using the definition of torsion g(T^*(X,Y),Z)=g(T(X,Y),Z) -Q(X,Y,Z)+Q(Y,Z,X) we obtain the desired result. 
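The theorem just proved can also be checked symbolically. The following sympy sketch takes an arbitrary two-dimensional metric and an arbitrary torsionful connection as test data (both chosen purely for illustration), constructs the dual connection from the local-chart duality relation ∂_μ g_νρ = Γ_ρνμ + Γ^*_νρμ, and verifies Q^* = -Q together with the torsion relation in component form.

```python
# Symbolic sanity check (sympy) of the duality theorem in a 2d chart.
# The metric g and the connection Gam below are arbitrary test data.
import sympy as sp

x, y = sp.symbols('x y')
crd = (x, y)
n = 2

g = sp.Matrix([[1 + x**2, x*y], [x*y, 2 + y**2]])
ginv = g.inv()

# arbitrary torsionful connection: Gam[l][m][d] = Gamma^l_{m d},
# i.e. nabla_{e_d} e_m = Gamma^l_{m d} e_l
Gam = [[[x, y], [1, x*y]], [[y**2, sp.Integer(0)], [x + y, sp.Integer(2)]]]

def lower(G):   # Gamma_{r m d} = g_{r l} Gamma^l_{m d}
    return [[[sum(g[r, l]*G[l][m][d] for l in range(n))
              for d in range(n)] for m in range(n)] for r in range(n)]

Gl = lower(Gam)
# dual connection from the duality relation: Gamma*_{nu r d} = d_d g_{nu r} - Gamma_{r nu d}
Gsl = [[[sp.diff(g[nu, r], crd[d]) - Gl[r][nu][d]
         for d in range(n)] for r in range(n)] for nu in range(n)]
Gs = [[[sp.simplify(sum(ginv[l, nu]*Gsl[nu][r][d] for nu in range(n)))
        for d in range(n)] for r in range(n)] for l in range(n)]

def Q_of(G):    # Q_{d m r} = -(nabla_d g)_{m r}
    Gl_ = lower(G)
    return [[[-(sp.diff(g[m, r], crd[d]) - Gl_[r][m][d] - Gl_[m][r][d])
              for r in range(n)] for m in range(n)] for d in range(n)]

def T_low(G):   # T_{r m d} = g_{r l}(Gamma^l_{d m} - Gamma^l_{m d})
    return [[[sum(g[r, l]*(G[l][d][m] - G[l][m][d]) for l in range(n))
              for d in range(n)] for m in range(n)] for r in range(n)]

Q, Qs = Q_of(Gam), Q_of(Gs)
T, Ts = T_low(Gam), T_low(Gs)

ok1 = all(sp.simplify(Qs[d][m][r] + Q[d][m][r]) == 0
          for d in range(n) for m in range(n) for r in range(n))
ok2 = all(sp.simplify(Ts[r][m][d] - (T[r][m][d] - Q[m][d][r] + Q[d][m][r])) == 0
          for d in range(n) for m in range(n) for r in range(n))
print("Q* = -Q :", ok1, "   T* relation :", ok2)
```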
In a local chart given by X=∂_μ, Y=∂_ν, Z=∂_ρ, the non-metricity and torsion of the dual affine connection satisfy Q^*_μνρ=- Q_μνρ, T^*_ρμν=T_ρμν -Q_μνρ + Q_νμρ. Let (M,g) be a pseudo-Riemannian manifold and (∇,∇^*) be dual with respect to g. Moreover, denote with T the torsion tensor of ∇ and with T^* the torsion tensor of ∇^*. Then, the following statements hold: (i)T=0 iff the pair (M,g,∇^*) is a quasi-statistical manifold. (ii)T^*=0 iff the pair (M,g,∇) is a quasi-statistical manifold. § COMPUTATION OF RICCI- AND MUTUAL DIFFERENCE TENSORS Recall that the Schrödinger connection is torsion-free and has non-metricity given by Q_μνρ=π_μ g_νρ - 1/2( g_μνπ_ρ + g_μρπ_ν). Relabeling indices, we immediately obtain -Q_λνρ =-π_λ g_νρ+ 1/2 g_λνπ_ρ + 1/2g_λρπ_ν, Q_ρλν =π_ρ g_λν -1/2 g_ρλπ_ν- 1/2g_ρνπ_λ, Q_νρλ = π_ν g_ρλ-1/2 g_νρπ_λ-1/2 g_νλπ_ρ. Hence, for the distortion tensor of Schrödinger geometry we have N^μ _ν _ρ=1/2 g^λμ( -2 π_λ g_νρ+ π_ρ g_λν+ π_ν g_ρλ), which can be equivalently rewritten as N^μ_ν _ρ=-π^μ g_νρ + 1/2π_ρδ^μ_ν + 1/2π_νδ^μ_ρ. Following the same steps, one can show that the distortion tensor of the dual is N^*^μ_ν_ρ =-1/2π^μ g_νρ - 1/2π_ρδ^μ_ν + π_νδ^μ_ρ. Substracting (<ref>) from (<ref>) yields K^μ _ν _ρ=-1/2π^μ g_νρ+ π_ρδ^μ_ν -1/2π_νδ^μ_ρ. Hence, the mutual difference tensor is given by D_ρ _ν=-1/4π_νπ_ρ - 1/2π_σπ^σ g_ρν, while the mutual difference scalar reads D=-9/4π_σπ^σ. We now move on to compute the Ricci scalar of the Schrödinger connection, and its dual, respectively. In the presence of distorsion, the Ricci tensor of an affine connection can be written as R_μν=∘R_μν +∘∇_αN^α _ν _μ-∘∇_νN^α _α _μ +N^α_α _ρN^ρ_ν_μ - N^α _ν _ρN^ρ_α_μ. We obtain the Ricci tensor of the Schrödinger connection by substituting the distortion tensor (<ref>) into (<ref>) R_μν=∘R_μν - g_μν∘∇_απ^α + 1/2∘∇_μπ_ν -∘∇_νπ_μ - 1/2π_ρπ^ρ g_μν - 1/4π_μπ_ν. It immediately follows that the Ricci scalar is given by R=∘R-9/2∘∇_απ^α - 9/4π_απ^α. For the dual Schrödinger connection, we have R_μν^⋆ =∘R_μν +∘∇_α(-1/2π^αg_νμ-1/2π_μδ_ν^α+π_νδ_μ^α) -∘∇_ν(-1/2π^αg_αμ-1/2π_μδ_α^α+π_αδ_μ^α) +(-1/2π^αg_αρ-1/2π_ρδ_α^α+π_αδ_ρ^α) × ×(-1/2π^ρg_νμ-1/2π_μδ_ν^ρ+π_νδ_μ^ρ) -(-1/2π^αg_νρ-1/2π_ρδ_ν^α+π_νδ_ρ^α) × ×(-1/2π^ρg_αμ-1/2π_μδ_α^ρ+π_αδ_μ^ρ) =∘R_μν -1/2∘∇_απ^αg_νμ-1/2∘∇_νπ_μ+∘∇_μπ_ν +1/2∘∇_νπ_μ+2∘∇_νπ_μ-∘∇_νπ_μ +1/2π^αg_αρ1/2π^ρg_νμ+1/2π_ρδ_α^α1/2π^ρg_νμ -π_αδ_ρ^α1/2π^ρg_νμ +1/2π^αg_αρ1/2π_μδ_ν^ρ +1/2π_ρδ_α^α1/2π_μδ_ν^ρ-π_αδ_ρ^α1/2π_μδ_ν^ρ -1/2π^αg_αρπ_νδ_μ^ρ-1/2π_ρδ_α^απ_νδ_μ^ρ +π_αδ_ρ^απ_νδ_μ^ρ -1/2π^αg_νρ1/2π^ρg_αμ -1/2π_ρδ_ν^α1/2π^ρg_αμ+π_νδ_ρ^α1/2π^ρg_αμ -1/2π^αg_νρ1/2π_μδ_α^ρ-1/2π_ρδ_ν^α1/2π_μδ_α^ρ +π_νδ_ρ^α1/2π_μδ_α^ρ +1/2π^αg_νρπ_αδ_μ^ρ +1/2π_ρδ_ν^απ_αδ_μ^ρ-π_νδ_ρ^απ_αδ_μ^ρ =∘R_μν-1/2g_μν∘∇_απ^α+∘∇_μπ_ν+∘∇_νπ_μ +1/4π_απ^αg_μν+π_ρπ^ρg_μν-1/2π_ρπ^ρg_μν +1/4π_νπ_μ+π_νπ_μ-1/2π_νπ_μ -1/2π_μπ_ν-2π_μπ_ν+π_μπ_ν -1/4π_μπ_ν-1/4π_ρπ^ρg_νμ+1/2π_νπ_μ -1/4π_νπ_μ-1/4π_νπ_μ+2π_νπ_μ +1/2π_απ^αg_νμ+1/2π_μπ_ν-π_νπ_μ =∘R_μν -1/2g_μν∘∇_απ^α+∘∇_μπ_ν+∘∇_νπ_μ +π_ρπ^ρg_μν+1/2π_μπ_ν. As a summary, for the Ricci tensor of the dual connection we obtain R^*_μν=∘R_μν - 1/2 g_μν∘∇_απ^α+∘∇_μπ_ν + ∘∇_νπ_μ + π_ρπ^ρ g_μν + 1/2π_μπ_ν. It immediately follows that the dual Ricci scalar is given by R^*=∘R+ 9/2π^ρπ_ρ. § DERIVATION OF THE FRIEDMANN EQUATIONS First, recall that the non-zero components of the Ricci tensor of the Levi-Civita connection are given by ∘R_00=-3ä/a, ∘R_11=∘R_22=∘R_33=a ä +2 ȧ^2. Similarly, the Ricci scalar takes the well known form ∘R=6 (ä/a+ȧ ^2/a^2). 
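As a consistency check of this appendix, one can verify symbolically that the Levi-Civita connection plus the distortion tensor quoted above is torsion-free and reproduces the Schrödinger non-metricity, with the convention Q = -∇g of the previous appendix. A minimal sympy sketch, assuming a two-dimensional toy metric and π_μ = (ψ(t), 0) purely for illustration, reads:

```python
# Sketch (sympy): the Levi-Civita connection plus the distortion N^mu_{nu rho}
# is symmetric and has Q_{mu nu rho} = pi_mu g_{nu rho} - (g_{mu nu} pi_rho
# + g_{mu rho} pi_nu)/2, with Q = -nabla g. Toy 2d metric for illustration.
import sympy as sp

t, x = sp.symbols('t x')
crd = (t, x)
n = 2
a = sp.Function('a')(t)
psi = sp.Function('psi')(t)

g = sp.diag(1, a**2)
ginv = g.inv()
pi_lo = [psi, 0]                                             # pi_mu
pi_up = [sum(ginv[m, l]*pi_lo[l] for l in range(n)) for m in range(n)]

def christoffel(mu, nu, rho):                                # Levi-Civita
    return sp.Rational(1, 2)*sum(ginv[mu, l]*(sp.diff(g[l, nu], crd[rho])
            + sp.diff(g[l, rho], crd[nu]) - sp.diff(g[nu, rho], crd[l]))
            for l in range(n))

def N(mu, nu, rho):                                          # distortion tensor
    return (-pi_up[mu]*g[nu, rho] + sp.Rational(1, 2)*pi_lo[rho]*sp.eye(n)[mu, nu]
            + sp.Rational(1, 2)*pi_lo[nu]*sp.eye(n)[mu, rho])

def Gamma(mu, nu, rho):
    return christoffel(mu, nu, rho) + N(mu, nu, rho)

def Q(mu, nu, rho):                                          # Q = -nabla_mu g_{nu rho}
    return -(sp.diff(g[nu, rho], crd[mu])
             - sum(Gamma(l, nu, mu)*g[l, rho] for l in range(n))
             - sum(g[nu, l]*Gamma(l, rho, mu) for l in range(n)))

ok_sym = all(sp.simplify(Gamma(m, i, j) - Gamma(m, j, i)) == 0
             for m in range(n) for i in range(n) for j in range(n))
ok_Q = all(sp.simplify(Q(m, i, j) - (pi_lo[m]*g[i, j]
           - sp.Rational(1, 2)*(g[m, i]*pi_lo[j] + g[m, j]*pi_lo[i]))) == 0
           for m in range(n) for i in range(n) for j in range(n))
print("torsion-free:", ok_sym, "  Schroedinger Q reproduced:", ok_Q)
```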
The non-vanishing Christoffel symbols are given by γ^0_i_j=a ȧδ_ij, i,j=1,2,3; γ^i_0_j=ȧ/aδ^i_j, i,j=1,2,3. As a first step, we will write down the Einstein equation in 00 components ∘R_00 - 1/2 g_00∘R + 3/8 g_00∘∇_απ^α + 3/8∘∇_0 π_0 + 3/8∘∇_0π_0 + 5/16 g_00π_απ^α - 1/8π_0 π_0= 8 π T_00. From these, we obtain 3 ȧ^2/a^2 - 3/8( ψ̇+ 3 ȧ/aψ) -3/8ψ̇- 3/8ψ̇+5/16ψ^2 - 1/8ψ^2=8 πρ. We thus have 3ȧ^2/a^2 -9/8ψ̇- 9/8ȧ/aψ+3/16ψ^2=8 πρ. For the second Friedmann equation, we write down the ii components ∘R_ii - 1/2 g_ii∘R + 3/8 g_ii∘∇_απ^α + 3/8∘∇_iπ_i + 3/8∘∇_i π_i +5/16 g_iiπ_απ^α - 1/8π_i π_i=8 π T_ii. This immediately yields -2 a ä - ȧ^2 + 3/8a^2 ( ψ̇+ 3 ȧ/aψ) + 3/8 a ȧψ + 3/8 a ȧψ - 5/16 a^2 ψ^2=8 π p a^2. Simplifying, we get -2 a ä - ȧ^2+3/8 a^2 ψ̇+15/8 a ȧψ - 5/16 a^2 ψ^2 = 8 π p a^2. Dividing by a^2 leads to -2ä/a - ȧ^2/a^2 + 3/8ψ̇+ 15/8ȧ/aψ - 5/16ψ^2=8π p. Introducing the Hubble function, we obtain -2 Ḣ -3H^2 + 3/8ψ̇+ 15/8 H ψ - 5/16ψ^2=8 π p. The final form is thus 2 Ḣ + 3 H^2= -8 π p + 3/8ψ̇+ 15/8 H ψ - 5/16ψ^2. unsrt99Ein A. Einstein, ”Die Feldgleichungen der Gravitation”, Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin (1915), 844 Hilb D. Hilbert, "Die Grundlagen der Physik", Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen - Mathematisch - Physikalische Klasse 3, 395 (1915). Weyl1 Gravitazion und elektrizitt, Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften (Berlin) 1918, 465 (1918). Weyl2 H. Weyl, Space, Time, Matter, Dover Publications, Dover, 1952 Scholz E. Scholz, "Weyl geometry in late 20th century physics", eprint arXiv:1111.3220 (2011). W1 M. A. Oancea and T. Harko, "Weyl geometric effects on the propagation of light in gravitational fields", Phys. Rev. D 109, 064020 (2024). W2 M. Crǎciun and T. Harko, "Testing Weyl geometric gravity with the SPARC galactic rotation curves database", Physics of the Dark Universe 43, 101423 (2024). Fins P. Finsler, "Über Kurven und Flächen in allgemeinen Räumen", Dissertation, Göttingen, JFM 46.1131.02 (1918); Reprinted by Birkhäuser (1951). Fins1 S.-D. Liang, S. V. Sabau, and T. Harko, "Finslerian geometrization of quantum mechanics in the hydrodynamical representation", Phys. Rev. D 100, 105012 (2019). Fins2 R. Hama, T. Harko, and S. V. Sabau, "Conformal gravitational theories in the Barthel-Kropina type Finslerian geometry, and their cosmological implications', Eur. Phys. J. C 83, 1030 (2023). C1É. Cartan, "Sur une généralisation de la notion de courbure de Riemann et les espaces à torsion", C. R. Acad. Sci. (Paris) 174, 593 (1922). C2É. Cartan, "Sur les variétés à connexion affine et la théorie de la relativité généralisé (première partie)", Ann. Éc. Norm. 40, 325 (1923). C3É. Cartan, "Sur les variétés à connexion affine et la théorie de la relativité généralisé (première partie) (Suite)", Ann. Éc. Norm. 41, 1 (1924). C4É. Cartan, "Sur les variétés à connexion affine et la théorie de la relativité généralisé (deuxième partie)", Ann. Éc. Norm. 42, 17 (1925). C5 F. W. Hehl, P. von der Heyde, G. D. Kerlick, and J. M. Nester, "General relativity with spin and torsion: Foundations and prospects", Review of Modern Physics 48, 393 (1976). We1 R. Weitzenböck, Invariantentheorie, Noordhoff, Groningen, 1923 We2 K. Hayashi and T. Shirafuji, "New general relativity", Phys. Rev. D 19, 3524 (1979). We3 Z. Haghani, T. Harko, H. R. Sepangi, and S. Shahidi, "Weyl-Cartan-Weitzenböck gravity as a generalization of teleparallel gravity", JCAP 10, 061 (2012). fQ1 J. M. Nester and H.-J. 
Yo, "Symmetric teleparallel general relativity", Chinese Journal of Physics 37, 113 (1999). fQ2 J. B. Jimenez, L. Heisenberg, and T. Koivisto, "Coincident general relativity", Phys. Rev. D 98, 044048 (2018). fQ3 L. Heisenberg, "Review on f(Q) gravity", Physics Reports 1066, 1 (2024). O1 C. M. Will, "The Confrontation between General Relativity and Experiment", Living Reviews in Relativity 17, 4 (2014). O2 C. M. Will, "Putting General Relativity to the Test: Twentieth-Century Highlights and Twenty-FirstCentury Prospects", in David E. Rowe, Tilman Sauer & Scott A. Walter (eds.), Beyond Einstein: Perspectives on Geometry, Gravitation, and Cosmology in the Twentieth Century, Springer, New York. pp. 81-96 (2018). O3 B. P. Abbott, et al. (LIGO Scientific Collaboration and Virgo Collaboration), "Observation of Gravitational Waves from a Binary Black Hole Merger", Phys. Rev. Lett. 116, 061102 (2016). O4 B. P. Abbott, R. Abbott, T. D. Abbott, S. Abraham, F. Acernese, K. Ackley et al., "GW190425: Observation of a Compact Binary Coalescence with Total Mass ∼ 3.4 M_⊙, Astrophys. J. Lett. 892, L3 (2020). Pl1 Akrami Y. et al., "Planck 2018 results. I. Overview and the cosmological legacy of Planck", Astron. Astrophys. 641, A1 (2020). Pl2 N. Aghanim et al., "Planck 2018 results. VI. Cosmological parameters", Astron. Astrophys. 641, A6 (2020). S1 A. G. Riess, "The expansion of the Universe is faster than expected", Nature Rev. Phys. 2, 10 (2019). EinL A. Einstein, "Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften (Berlin), 142 (1917). CC1 S. Weinberg, “The Cosmological Constant Problem,” Rev. Mod. Phys. 61, 1 (1989). CC2 S. M. Carroll, “The Cosmological constant,” Living Rev. Rel. 4, 1 (2001). R1 P. Brax, "What makes the Universe accelerate? A review on what dark energy could be and how to test it", Reports on Progress in Physics 81, 016902 (2018). R2 N. Frusciante and L. Perenon, "Effective Field Theory of Dark Energy: a Review", Phys. Rept. 857, 1 (2020). R3 E. Oks, "Brief review of recent advances in understanding dark matter and dark energy", New Astronomy Reviews 93, 101632 (2021). R4 R. C. Batista, "A Short Review on Clustering Dark Energy", Universe 8, 22 (2021). R5 V. Poulin, T. L. Smith, and T. Karwal, "The Ups and Downs of Early Dark Energy solutions to the Hubble tension: a review of models, hints and constraints circa 2023", Physics of the Dark Universe 42, 101348 (2023). Val E. Di Valentino, O. Mena, S. Pan, L. Visinelli, W. Yang, A. Melchiorri et al., "In the realm of the Hubble tension—a review of solutions", Class. Quant. Grav. 38, 153001 (2021). Hain K. N. Hainline et al., "The Cosmos in Its Infancy: JADES Galaxy Candidates at z > 8 in GOODS-S and GOODS-N", Astrophys. J. 964, 71 (2024). Hari Y. Harikane, et al., "A Comprehensive Study of Galaxies at z ∼ 9-16 Found in the Early JWST Data: Ultraviolet Luminosity Functions and Cosmic Star Formation History at the Pre-reionization Epoch", ApJS 265, 5 (2023). Mun J. B. Muñoz, J. Mirocha, J. Chisholm, S. R. Furlanetto, & C. Mason, "Reionization after JWST: a photon budget crisis?", arXiv:2404.07250 (2024). Desi DESI collaboration, A. G. Adame et al., "DESI 2024 VI: Cosmological Constraints from the Measurements of Baryon Acoustic Oscillations", arxiv: 2404.03002 (2024). MG1 S. Capozziello, and M. De Laurentis, "Extended Theories of Gravity", Phys. Rept. 509, 167 (2011). MG2 S. Nojiri and S. D. 
Odintsov, "Unified cosmic history in modified gravity: from F(R) theory to Lorentz non-invariant models", Phys. Rept. 505, 59 (2011). MG3 T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis, “Modified Gravity and Cosmology,” Phys. Rept. 513, 1 (2012). MG4 S. Nojiri, S. D. Odintsov, and V. K. Oikonomou, "Modified Gravity Theories on a Nutshell: Inflation, Bounce and Late-time Evolution", Phys. Rept. 692, 1 (2017). MG5 D. Langlois, "Dark Energy and Modified Gravity in Degenerate Higher-Order Scalar-Tensor (DHOST) theories: a review", Int. J. Mod. Phys. D 28, 1942006-3287 (2019). MG6 A. Petrov, Introduction to Modified Gravity, Springer Briefs in Physics, Springer, 2020 Bu H. A. Buchdahl, "Non-Linear Lagrangians and Cosmological Theory", Monthly Notices of the Royal Astronomical Society 150, 1 (1970). fRLm T. Harko and F. S. N. Lobo, "f(R,L_m) gravity", Eur. Phys. J. C 70, 373 (2010). fRT T. Harko, F. S. N. Lobo, S. Nojiri, and S. D. Odintsov, "f(R,T) gravity", Phys. Rev. D 84, 024020 (2011). Hyb T. Harko, T. S. Koivisto, F. S. N. Lobo and G. J. Olmo, “Metric-Palatini gravity unifying local constraints and late-time cosmic acceleration,” Phys. Rev. D 85, 084016 (2012). book T. Harko and F. S. N. Lobo, Extensions of f(R) Gravity: Curvature-Matter Couplings and Hybrid Metric-Palatini Theory, Cambridge Monographs on Mathematical Physics, Cambridge, 2019 Fi1 R. Hama, T. Harko, and S. V. Sabau, "Dark energy and accelerating cosmological evolution from osculating Barthel-Kropina geometry", Eur. Phys. J. C 82, 385 (2022). Bouali_2023 A. Bouali, H. Chaudhary, R. Hama, T. Harko, S. V. Sabau, and M. San Martín, "Cosmological tests of the osculating Barthel–Kropina dark energy model", Eur. Phys. J. C 83, 121 (2023). wackerly2008mathematical Wackerly, D., Mendenhall, W., & Scheaffer, R. L. (2008). Mathematical Statistics with Applications (7th ed.). Thomson Brooks/Cole. Akaike1974 Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716-723. Vrieze2012 Vrieze, S. I. (2012). Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC)and the Bayesian information criterion (BIC). Psychological Methods, 17(2), 228-243. TanBiswas2012 Tan, M., & Biswas, R. (2012). The correlation between galaxy properties and large-scale structures at low redshift. Monthly Notices of the Royal Astronomical Society, 419(4), 3292-3303. RezaeiMalekjani2021 Rezaei, M., & Malekjani, M. (2021). The impact of cosmological parameters on the dynamics of the universe. The European Physical Journal Plus, 136(2), 219. BurnhamAnderson2004a Burnham, K. P., & Anderson, D. R. (2004a). A practical information-theoretic approach (2nd ed.). BurnhamAnderson2004b Burnham, K. P., & Anderson, D. R. (2004b). Multimodel inference: understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261-304. Fi3 R. Hama, T. Harko, and S. V. Sabau, "Conformal gravitational theories in the Barthel-Kropina type Finslerian geometry, and their cosmological implications", Eur. Phys. J. C 83, 1030 (2023). We1 D. M. Ghilencea, "Non-metric geometry as the origin of mass in gauge theories of scale invariance", Eur. Phys. J. C 83, 176 (2023). We2 D. M. Ghilencea, "Weyl conformal geometry vs Weyl anomaly", Journal of High Energy Physics 2023, 113 (2023). We3 C. Condeescu, D. M. Ghilencea, and A. Micu, "Weyl quadratic gravity as a gauge theory and non-metricity vs torsion duality", Eur. Phys. J. C 84, 292 (2024). We4 T. 
Harko and S. Shahidi, "Cosmological implications of the Weyl geometric gravity theory", Eur. Phys. J. C 84, 509 (2024). Schrod E. Schrödinger, Space-Time Structure, Cambridge Science Classics, Cambridge University Press, Cambridge, 1985 Klemm S. Klemm and L. Ravera, "Schrödinger connection with selfdual nonmetricity vector in 2+1 dimensions", Phys. Lett. B 817, 136291 (2021). Ming2024 L. Ming, S.-D. Liang, H.-H. Zhang, and T. Harko, "From the Weyl-Schrödinger Connection to the Accelerating Universe: Extending Einstein's Gravity via a Length Preserving Nonmetricity", Phys. Rev. D 109, 024003 (2024). csillag2024schrodinger L. Csillag, "Schrödinger Connections: From Mathematical Foundations Towards Yano-Schrödinger Cosmology", arXiv:2402.06167 [gr-qc], (2024). csillag2024semisymmetric L. Csillag and T. Harko, "Semi-Symmetric Metric Gravity: from the Friedmann-Schouten geometry with torsion to dynamical dark energy models", arXiv:2402.06114 [gr-qc], (2024). 1 L. J. Garay, "Quantum Gravity and Minimum Length", Int. J. Mod. Phys. A 10, 145 1995. 2 S. Hossenfelder, "Minimal Length Scale Scenarios for Quantum Gravity", Living Reviews Relativity 16, 2 (2013). 2a M. Lake, "Generalised Uncertainty Relations and the Problem of Dark Energy", Romanian Astronomical Journal 32, 5 (2022). 3 A. Kempf, "On quantum field theory with nonzero minimal uncertainties in positions and momenta", J. Math. Phys. 38, 1347 (1997). NC T. Harko and S.-D. Liang, "Energy-dependent noncommutative quantum mechanics", Eur. Phys. J. C 79, 300 (2019). 4 M. J. Lake, M. Miller, R. F. Ganardi, Z. Liu, S. D. Liang, and T. Paterek, "Generalised uncertainty relations from superpositions of geometries", Class. Quant. Gravit. 36, 155012 (2019). 4a M. J. Lake, M. Miller, and S.-D. Liang, "Generalised Uncertainty Relations for Angular Momentum and Spin in Quantum Geometry", Universe 6, 56 (2020). 5 S. Amari, Differential Geometrical Methods in Statistics, Lecture Notes in Statistics, Vol. 21, Springer-Verlag, Berlin, 1985 5a S. Amari and H. Nagaoka, Method of Information Geometry, Amer. Math. Soc., Providence, Oxford Univ. Press, Oxford, 2000 6 S. L. Lauritzen, Statistical manifolds, in: Differential Geometry in Statistical Inferences, IMS Lecture Notes Monogr. Ser., 10, Inst. Math. Statist., Hayward California, pp. 96-163, 1987 7 T. Kurose, "On the divergences of 1-conformally flat statistical manifolds", Tôhoku Math. J. 46, 427 (1994). 8 H. Matsuoze, "Statistical manifolds and affine affine differential geometry", Advanced Studies in Pure Mathematics 57, 303 (2010). 9 A. Caticha, "The Basics of Information Geometry", AIP Conference Proceedings 1641, 15 (2015). 10 F. Nielsen, "An Elementary Introduction to Information Geometry", Entropy 22, 1100 (2020). Khosravi_multi N. Khosravi, "Geometric massive gravity in multiconnection framework", Phys. Rev. D 89, 024004 (2014). Khosravi_2014 N. Khosravi, "Spontaneous scalar-vector Galileons from a Weyl biconnection model", Phys. Rev. D 89, 124027 (2014). Khosravi_2015 N. Khosravi, "Bi-connected Gauss–Bonnet gravity", General Relativity and Gravitation 47, 43 (2015). Iosifidis2023 D. Iosifidis and K. Pallikaris, "Biconnection Gravity as a Statistical Manifold", Phys. Rev. D 108, 044026 (2023). iosifidis2023torsioncurvatureanaloguedualconnections D. Iosifidis, "On a Torsion/Curvature Analogue of Dual Connections and Statistical Manifolds", Journal of Geometry and Physics 196, 105064 (2024). puechmorel2020lifting S. 
Puechmorel, "Lifting dual connections with the Riemann extension," Mathematics 8, 2079 (2020). calin2014geometric O. Calin and C. Udrişte, "Geometric Modeling in Probability and Statistics," vol. 121, Springer, 2014. axioms12070667 E. Peyghan, D. Seifipour, and I. Mihai, "Infinitesimal Affine Transformations and Mutual Curvatures on Statistical Manifolds and Their Tangent Bundles", Axioms 12, 667 (2023). SM1 T. Obata, H. Hara, and K. Endo, "Differential geometry of nonequilibrium processes", Phys. Rev. A 45, 6997 (1992). WSM1 M. Hirano and H. Nagahama, "Nonmetricity on Riemann–Cartan–Weyl manifold: Its physical and mathematical meaning and application", International Journal of Geometric Methods in Modern Physics 19, 2250153 (2022). WSM2 T. Wada, "A Weyl geometric approach to the gradient-flow equations in information geometry", Journal of Geometry and Symmetry in Physics 66, 59 (2023). kurose2007 Takashi Kurose, Statistical Manifolds Admitting Torsion, Geometry and Something, 2007. SF1 V. Sahni, T. D. Saini, A. A. Starobinsky, and U. Alam. "Statefinder—a new geometrical diagnostic of dark energy", Journal of Experimental and Theoretical Physics Letters 77, 201 (2003). SF2 U. Alam, V. Sahni, T. D. Saini, and A. A. Starobinsky, "Exploring the expanding universe and dark energy using the Statefinder diagnostic", Monthly Notices of the Royal Astronomical Society 344, 1057 (2003). sahni2008two V. Sahni, A. Shafieloo, and A. A. Starobinsky. "Two new diagnostics of dark energy", Phys. Rev. D 78, 103502 (2008). Pl3 J. A. S. Fortunato, D. J. Bacon, W. S. Hipólito-Ricaldi, and D. Wands, "Fast Radio Bursts and Artificial Neural Networks: a cosmological-model-independent estimation of the Hubble Constant", arXiv:2407.03532 [astro-ph.CO] (2024). Fink1 S. L. Finkelstein et al., "The Complete CEERS Early Universe Galaxy Sample: A Surprisingly Slow Evolution of the Space Density of Bright Galaxies at z ∼ 8.5-14.5", arXiv:2311.04279 [astro-ph.GA] (2023). Fink2 S. L. Finkelstein, "CEERS Key Paper. I. An Early Look into the First 500 Myr of Galaxy Formation with JWST", Astrophys. J. 946, L13 (2023). Milgrom M. Milgrom, "A modification of the Newtonian dynamics : implications for galaxy systems", Astrophys. J. 270, 365 (1983).
http://arxiv.org/abs/2407.12696v1
20240717161950
Three-loop evolution kernel for transversity operator
[ "A. N. Manashov", "S. Moch", "L. A. Shumilov" ]
hep-th
[ "hep-th", "hep-ph" ]
A. N. Manashov, S. Moch and L. A. Shumilov, II. Institut für Theoretische Physik, Universität Hamburg, D-22761 Hamburg, Germany (alexander.manashov@desy.de, sven-olaf.moch@desy.de, leonid.shumilov@desy.de). We calculate quantum corrections to the symmetry generators for the transversity operators in quantum chromodynamics (QCD) in the two-loop approximation. Using this result, we obtain the evolution kernel for the corresponding operators at three loops. The explicit expression for the anomalous dimension matrix in the Gegenbauer basis is given for the first few operators. DESY-24-103 Three-loop evolution kernel for transversity operator § INTRODUCTION The modern description of hard scattering processes in quantum chromodynamics (QCD) is based on the factorization approach <cit.>, which allows one to separate short- and long-distance phenomena. The scattering amplitude of such a process is given by the convolution of a coefficient function (hard part) with a non-perturbative quantity (soft part) which can be expressed as the matrix element of a certain operator. The scale dependence of the latter is determined by the renormalization group equation (evolution equation). The present state of affairs is different for processes with zero and nonzero momentum transfer between the initial and final hadron states. In deep-inelastic scattering (DIS) processes (forward kinematics) the evolution kernels (splitting functions) are known at next-to-next-to-leading order (N^2LO) <cit.>, and there are partial results at N^3LO (see <cit.> and references therein). The Mellin moments of the splitting functions give the forward anomalous dimensions, the diagonal elements of the anomalous dimension matrix which enters the renormalization group equation (RGE) for the corresponding local operators. In processes with a nonzero momentum transfer one has to take into account mixing with total derivative operators, which is governed by an off-diagonal part of the anomalous dimension matrix (off-diagonal evolution kernel). Calculating evolution kernels directly in off-forward kinematics at high orders demands substantial computational effort and is currently not practical beyond two loops. An alternative to the direct calculation was developed by Dieter Müller in <cit.>. He showed that the evolution kernel at ℓ loops is completely determined by the forward anomalous dimensions and a special quantity, dubbed the conformal anomaly, at one order less, i.e. (ℓ-1) loops. Soon after, all evolution kernels of the twist-two operators in QCD were calculated with two-loop accuracy <cit.>. A recent development of this method is based on the idea of considering QCD in non-integer dimensions at a critical value of the strong coupling <cit.> to restore the exact conformal invariance of the theory. The restoration of symmetry significantly simplifies the analysis, enabling the determination of the evolution kernels of the twist-two vector and axial-vector operators with three-loop accuracy <cit.>. The aim of the present work is to calculate the evolution kernels for the transversity operators with three-loop accuracy. The nucleon matrix elements of these operators define the chiral-odd GPDs, see e.g. <cit.>.
In deeply-virtual Compton scattering (DVCS) processes, transversity operators contribute only to the power suppressed helicity-flip amplitudes, making quark-helicity flip subprocesses strongly suppressed and chiral-odd GPDs difficult to access experimentally. Nevertheless, their experimental determination seems to be feasible in photo- or electroproduction or deeply-virtual meson production processes at energies of the Electron-Ion Collider (EIC), see e.g. <cit.>. The evolution kernels for the transversity operators are known at one-loop <cit.> and the leading contributions to the anomalous dimension matrix in the limit of a large number of flavors n_f have been obtained in <cit.> at all orders. The forward anomalous dimensions for the transversity operators are known with three-loop accuracy <cit.>. In what follows we calculate the two-loop conformal anomaly and reconstruct the three-loop evolution kernel for the transversity operators. The paper is organised as follows: Section <ref> is introductory, we set definitions and notations and give a brief description of the method used to calculate the evolution kernel. In Sect. <ref> we present the results of calculation of the evolution kernel and the conformal anomaly with two-loop accuracy. In Sect. <ref> we reconstruct the evolution kernel at the three-loop level. Explicit expression for the anomalous dimension matrix in the Gegenbauer basis is given in Sect. <ref>. Section <ref> is reserved for summary and outlook. The paper contains several appendices where the analytic expressions for the kernels are collected. § BACKGROUND Since we are interested only in the evolution equation it is convenient to work in Euclidean space. The QCD Lagrangian in d = 4-2ϵ dimension Euclidean space reads L=q̅Dq+1/4 F_μν^a F^a,μν+ 1/2ξ (∂ A)^2 + ∂_μc̅^a(D^μ c)^a . The light-ray operator <cit.> we are interested in is defined as follows 𝒪(x;z_1, z_2) = q̅(x+z_1n)[x+z_1n, x+z_2n]σ_⊥ +q(x+z_2 n), where q(x) is a quark field, n is an auxiliary light-like (n^2 = 0) vector and [x+z_1n, x+z_2n] = Pexp{ ig z_12∫_0^1dα t^a A^a _+(x+z_21^α n)} stands for the Wilson line in the fundamental representation. Here and below z_12^α = z_1α̅ + z_2α, α̅ = 1 - α, z_12 = z_1 - z_2. Choosing the second light-like vector n̅ ((nn̅)=1) one expands an arbitrary d-dimensional vector as follows a=n (n̅ a) +n̅ (n a) +a_⊥≡ n a_- + n̅ a_+ +a_⊥, so that σ_⊥ + stands for the projection of the matrix σ_μν≡12[γ_μ, γ_ν] onto the transverse subspace. In addition, throughout the paper we omit all the isotopic indices and we use the short-hand notation, 𝒪(z_1, z_2), for the operator 𝒪(x=0,z_1, z_2). We also note here that since γ_± anticommute with γ_⊥ the transformation properties of the operator under the collinear subgroup of the conformal group (SL(2,ℝ) subgroup) are exactly the same as those for the vector operator. Namely, δ_±,0^ω𝒪(z_1, z_2) = ω S_±,0^(0)𝒪(z_1, z_2), where δ^ω_±,0 stand for shifts, dilatations and special conformal transformations of a light-like line and the corresponding canonical generators take the form S^(0)_- =-∂_z_1-∂_z_2, S^(0)_0 = z_1∂_z_1 + z_2∂_z_2 + 2 , S^(0)_+ = z_1^2∂_z_1+z_2^2∂_z_2+2z_1+2z_2. The renormalized operator [Renormalization in the modified minimal subtraction scheme (MS) will be always tacitly assumed.] is denoted by [O](z_1,z_2), [O](z_1, z_2) = Z𝒪(z_1, z_2), Z=+∑_k>0ϵ^-k Z_k(a) , where the renormalization factors Z_k(a) are integral operators. 
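The canonical generators quoted above can be checked symbolically: they close into the collinear sl(2) algebra, and z_12^N-1 is a lowest-weight vector, S^(0)_- z_12^N-1 = 0 and S^(0)_0 z_12^N-1 = (N+1) z_12^N-1. A short sympy sketch (the test polynomial is arbitrary):

```python
# Symbolic check of the canonical collinear sl(2) generators.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
N = 5                                    # any positive integer

def S_minus(f): return -sp.diff(f, z1) - sp.diff(f, z2)
def S_zero(f):  return z1*sp.diff(f, z1) + z2*sp.diff(f, z2) + 2*f
def S_plus(f):  return z1**2*sp.diff(f, z1) + z2**2*sp.diff(f, z2) + 2*(z1 + z2)*f

def comm(A, B, f): return sp.expand(A(B(f)) - B(A(f)))

test = (z1 + 2*z2)**3 * z1               # arbitrary polynomial test function

print(sp.simplify(comm(S_zero, S_plus, test) - S_plus(test)))        # -> 0
print(sp.simplify(comm(S_zero, S_minus, test) + S_minus(test)))      # -> 0
print(sp.simplify(comm(S_plus, S_minus, test) - 2*S_zero(test)))     # -> 0

lw = (z1 - z2)**(N - 1)                  # lowest-weight vector z_12^{N-1}
print(sp.simplify(S_minus(lw)), sp.simplify(S_zero(lw) - (N + 1)*lw))  # -> 0 0
```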
The light-ray operator [O] satisfies the RGE (μ∂∂μ + β(a)∂∂ a + ℍ(a)) [𝒪](z_1, z_2) = 0, where μ is the renormalization scale, a=α_s/4π is the strong coupling and β(a) is the d-dimensional beta function currently known with five-loop accuracy <cit.> β(a) = -2a(ϵ + β̅(a)) , β̅(a)=β_0 a + β_1 a^2 + O(a^3), with coefficients β_0, β_1, etc. in an SU(N_c) gauge theory (C_F=4/3, C_A=N_c=3 in QCD), β_0 = 113C_A - 2/3n_f, β_1 = 23(17C_A^2 - 5C_An_f - 3C_Fn_f) . The operator ℍ(a), entering Eq. ((<ref>)), is called the evolution kernel and can be obtained as follows ℍ(a) = -μdZ(a)dμZ^-1(a) +2γ_q(a) = a ℍ^(1) + a^2 ℍ^(2) +a^3 ℍ^(3)+… . Here γ_q(a) is the quark-anomalous dimension and ℍ^(ℓ) are the integral operators of the following type ℍ^(ℓ)f(z_1, z_2) = ∫_0^1dα∫_0^1dβ h^(ℓ)(α, β)f(z_12^α, z_21^β). The one-loop kernel was obtained in Ref. <cit.>. The main purpose of this work is to calculate the two- and three-loop kernels. §.§ Method The method of this work fully reflects the approach developed in <cit.>. The main idea is to consider the theory in d=4-2ϵ dimensions at the critical value of the strong coupling a_*, such that β(a_*)=0. Evolution kernels in the MS scheme do not depend on the space-time dimension and therefore they are essentially the same in the four- and d-dimensional theories. At the critical point theories enjoy scale and, as a rule, conformal invariance <cit.>. This implies that the evolution kernels at the critical point commute with the corresponding symmetry generators. In the case under consideration these are generators of the collinear subgroup of the conformal group. We recall that the tree level generators (<ref>) commute with one loop kernel [S^(0)_±,0, ℍ^(1)] = 0. Beyond one loop the generators receive quantum corrections. Their form is restricted by the requirement for the generators to satisfy the commutation relations of sl(2) algebra and give the proper scaling dimensions for local operators S_- = S^(0)_-, S_0 =S^(0)_0 +Δ S_0= S^(0)_0 +β̅(a) + 12ℍ(a) , S_+ =S^(0)_+ +Δ S_+ = S^(0)_+ + (z_1 + z_2)(β̅(a) + 12ℍ(a)) + z_12Δ(a) . Thus, the corrections to the generators are expressed in terms of the evolution kernel ℍ(a) and an additional operator Δ(a) called the conformal anomaly [ We emphasize that there is nothing anomalous in the appearance of this term in the expression for S_+. The name “conformal anomaly” for the operator Δ is due to the fact that in scalar field models such a contribution does not arise in low orders. ]. The conformal anomaly Δ(a)=aΔ^(1) +a^2 Δ^(2)+…, in lower orders of the perturbation theory can be effectively extracted from the analysis of the scale and conformal Ward identities for correlators of the light-ray operators <cit.>. Assuming that the conformal anomaly Δ(a) is known, the invariance of the evolution kernel ℍ(a), [S_+(a), ℍ(a)]=0, leads to a chain of equations [ The kernel ℍ(a) also commutes with the canonical generators S_-^(0) and S_0^(0). ] [S_+^(0), ℍ^(1)] = 0 , [S_+^(0), ℍ^(2)] = [ℍ^(1), Δ S_+^(1)] , [S_+^(0), ℍ^(3)] = [ℍ^(1), Δ S_+^(2)] + [ℍ^(2), Δ S_+^(1)] , and so on. Representing the kernels ℍ^(ℓ) as the sum of canonically invariant and non-invariant parts, ℍ^(ℓ)=ℍ_inv^(ℓ)+ℍ^(ℓ)_non-inv, [S^(0)_α,ℍ_inv^(ℓ)]=0, one sees that Eqs. (<ref>) define relations for the non-invariant part of the kernel. Note that the right hand side of each equation for ℍ^(ℓ) involves the kernels of, at most, one order less. 
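(For orientation, the critical coupling a_* underlying this construction, defined by β(a_*)=0 in d=4-2ϵ dimensions, can be generated order by order in ϵ from the beta function quoted above; the following small sympy sketch is only illustrative.)

```python
# Sketch: the critical coupling a_*, beta(a_*) = 0 in d = 4 - 2*eps dimensions,
# as a power series in eps, using beta(a) = -2a(eps + beta0*a + beta1*a^2).
import sympy as sp

eps, a = sp.symbols('epsilon a')
b0, b1 = sp.symbols('beta_0 beta_1')

betabar = b0*a + b1*a**2                    # beta-bar(a) to two loops
beta = -2*a*(eps + betabar)                 # full d-dimensional beta function (reference)

# nontrivial zero: eps + beta-bar(a_*) = 0, solved order by order in eps
c1, c2 = sp.symbols('c1 c2')
ansatz = c1*eps + c2*eps**2
eq = sp.expand(eps + betabar.subs(a, ansatz))
sol = sp.solve([eq.coeff(eps, 1), eq.coeff(eps, 2)], [c1, c2], dict=True)[0]
a_star = ansatz.subs(sol)
print("a_* =", sp.expand(a_star), "+ O(epsilon^3)")   # -eps/b0 - b1*eps^2/b0**3 + ...
```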
Thus, the knowledge of the anomaly at order ℓ-1 allows us to reconstruct the non-invariant part of the kernel, ℍ^(ℓ)_non-inv, at ℓ loops. The invariant part of the evolution kernel, ℍ_inv^(ℓ), is completely determined by its eigenvalues, γ_inv^(ℓ)(N)= γ^(ℓ)(N) -γ_non-inv^(ℓ)(N), and can be reconstructed in a relatively simple way, see discussion in Sect. <ref>. § KERNEL AND CONFORMAL ANOMALY In this section we present explicit expressions for the evolution kernel and the conformal anomaly at the NLO. We obtained the two-loop evolution kernel in two ways: by the direct diagram calculation and using the approach described above. The latter technique is discussed in the next section while the answers for the two-loop diagrams are given in App. <ref>. In computing the conformal anomaly we closely follow the approach of Ref. <cit.>. The operator Δ_+ can be extracted from the conformal Ward identity for the light-ray operators. The replacement γ_+→σ_⊥ + in the operator does not affect the analysis given in <cit.>. The expression for the operator Δ in the first two orders reads <cit.> z_12Δ^(1) = z_12Δ^(1)_+, z_12Δ^(2) = z_12Δ^(2)_+ + 1/4[ℍ^(2),z_1+z_2]. The operator Δ_+ in the case under consideration can be determined as follows <cit.> [We present here a reformulation of the result of <cit.> which is more convenient for practical use.]. Let us consider the renormalization of the operator 𝒪^T(z_1,z_2) in QCD perturbed by a local operator, S_QCD↦ S_ω=S_QCD+ δ^ωS=S_QCD - 2ω∫ d^d y (n̅ y)(1/4 F^2 +1/2ξ(∂ A)^2) in the leading order in the parameter ω. The renormalized operator takes the form (<ref>) with a modified renormalization factor, Z↦ Z_ω = Z + ω (nn̅) Z, Z =1/ϵZ_1(a)+1/ϵ^2Z_2+…. The residues Z_k are integral operators and the conformal anomaly is determined by Z_1: Z_1(a)=z_12Δ_+(a)+1/2[ℍ(a)-2γ_q(a)](z_1+z_2) . We also note that in the case under consideration there is no mixing with BRST and EOM operators, see Ref. <cit.> for a general analysis. §.§ One-loop kernels The one-loop diagrams for the kernel are shown in Fig. <ref>. One-loop diagrams for the anomaly have the same topology and can be obtained from diagrams shown in Fig. <ref> by inserting additional elements generated by δ^ω S, cf. Eq. (<ref>). We also note that the exchange diagram (a) in Fig. <ref> does not contribute in both cases due to the gamma matrix identity γ_μσ_⊥ +γ^μ = -2ϵσ_⊥ +. After a short calculation one gets ℍ^(1)f(z_1, z_2) = 4C_F (∫_0^1 dα/α(2f(z_1, z_2) - α̅ ( f(z_12^α,z_2) + f(z_1, z_21^α) )) - 3/2f(z_1, z_2)) andΔ_+^(1)f(z_1, z_2) = -2C_F ∫_0^1 dα(α̅α + lnα)(f(z_12^α,z_2) - f(z_1, z_21^α)) . Let us note that the one-loop conformal anomaly (<ref>) is exactly the same as in the vector case <cit.>. Calculating the eigenvalues of the kernel ℍ^(1) by acting on the functions ψ_N(z_1,z_2)=z_12^N-1 we reproduce the well known forward anomalous dimensions for the transversity operators <cit.>, γ^(1)(N)=4C_F[2S_1(N)-3/2]. Here and below S_a⃗(N) = S_a_1,…,a_k(N) stand for the harmonic sums <cit.>. Our final remark is that one can easily check that the operator ℍ^(1) commutes, as was expected, with the canonical generators S_α^(0). §.§ Two-loop evolution kernel Diagrams contributing to the two-loop evolution kernel are shown in the Fig. <ref>. Answers for the individual diagrams are given in the App. <ref>. Note that the answers for the diagrams without gluon exchange between the quark lines, namely the diagrams (a) – (g) in Fig. 
<ref>, are exactly the same as in the vector case and we have taken the corresponding results from <cit.>. Contrary, the diagrams (h) – (p) require separate calculations. Among them, the diagrams (h), (k), (l), (n) do not contribute to the kernel because of the relation (<ref>). The evolution kernel for the twist-two operators can be written in the following form: ℍ(a) = Γ_cusp(a)ℋ(a) + A(a) + ℋ(a). The first term is completely determined by large N asymptotic of the anomalous dimensions. The kernel ℋ has the form ℋf(z_1, z_2) = ∫_0^1dαα(2f(z_1, z_2) - α̅(f(z_12^α, z_2) + f(z_1, z_21^α))). It is a canonically invariant operator, [S^(0)_α, ℋ]=0, with eigenvalues, ℋ z_12^N-1=E(N)z_12^N-1, equal to 2S_1(N). The cusp anomalous dimension, Γ_cusp(a), <cit.> is currently known at four loops <cit.> Γ_cusp(a) = a 4 C_F +a^2 C_F[C_A (268/9-8ζ_2) -40/9 n_f ] +a^3 C_F[C_A^2(176/5ζ_2^2+88/3ζ_3-1072/9ζ_2+490/3) + C_A n_f(-64/3ζ_3+160/9ζ_2-1331/27) +n_f/N_c(-16ζ_3+55/3) -16/27n_f^2 ] + O(a^4) . Next, A(a) is a constant and ℋ(a) is the integral operators of the following form ℋf(z_1, z_2) = ∫_0^1 dα φ(α)(f(z_12^α, z_2) + f(z_1, z_21^α)) + ∫_0^1dα∫_0^α̅ dβ(χ(α, β) + χ(α,β)ℙ_12) (f(z_12^α, z_21^β) + f(z_12^β, z_21^α)), where the permutation operator ℙ_12 interchanges the variables z_1, z_2, i.e. ℙ_12 f(z_1,z_2) = f(z_2,z_1), ( ℙ_12 f(z_12^α,z_21^β) = f(z_21^α,z_12^β) ). The representation (<ref>) is unique if one supposes that the eigenvalues of the kernel, ℋ(N), vanish at N →∞. Using the results for the diagrams in App. <ref> we obtain for the constant A(a)=a A^(1) + a^2 A^(2)+… A^(1) =-6 C_F , A^(2) = -8/3 C_F^2(438 + 13ζ_2 ) + 8C_Fn_f(112 + 23ζ_2) + 8C_F/N_c(- 1724 - 113ζ_2 + 3ζ_3 ), while for the integral kernels φ, χ, and χ we get φ^(2)(α) = -4C_Fβ_0 α̅/αlnα̅+ 8 C_F^2 α̅/αlnα̅(3/2-lnα̅+ 1+α̅/α̅lnα), χ^(2)(α,β) =8 C_F^2(1/α̅lnα-1/αlnα̅) +4C_F/N_c(τ̅/τlnτ̅+1/2), χ^(2)(α,β) =4C_F/N_c(-τ̅lnτ̅+1/2), where τ=αβ/α̅β̅. Calculating the forward anomalous dimensions ℍ(a)z_12^N - 1 = γ(N)z_12^N - 1, γ(N)=aγ^(1)(N)+a^2γ^(2)(N)+… we get the following expression for γ^(2) (here and below S_a⃗≡ S_a⃗(N)) γ^(2)(N) = -8C_Fβ_0( S_2 - 5/3S_1 +1/8) +8C_F^2(-2 S_2(2S_1-3/2)+8/3 S_1-7/8) +8 C_F/N_c(2S_3 - 2S_-3 + 4S_1,-2+ 4/3S_1 - 1/4 +1-(-1)^N/2N(N+1)), which is in perfect agreement with the results of Refs. <cit.>. We have also checked that the kernel ℍ^(2) satisfies the consistency relation (<ref>). This implies that although the two-loop kernel was obtained by direct calculation, it is uniquely determined by the conformal anomaly Δ_+^(1), Eq. (<ref>) and the two-loop anomalous dimensions, Eq. (<ref>). At present the direct calculation of the evolution kernel at three loops does not seem to be feasible, but it can be reconstructed using the two-loop conformal anomaly and three-loop forward anomalous dimensions. §.§ Two-loop anomaly The diagrams contributing to the conformal anomaly Δ_+ at two loops can be obtained from the diagrams shown in Fig. <ref> by inserting additional diagrammatic elements generated by δ^ωS in Eq. (<ref>). Two such elements are possible: the two-gluon vertex inserted into one of the gluon lines, or a modified three-gluon vertex replacing the basic three-gluon vertex. The complete results for the contribution of each Feynman diagram in Fig. <ref> to the conformal anomaly can be found in App. <ref>. The technical details and some examples can be found in Refs. <cit.>. We note here that the diagrams without gluon exchange between quark lines, the diagrams (a) – (g) in Fig. 
<ref>, give rise to the same contribution to Δ_+ as in the vector case. The kernel Δ_+^(2) can be written in the following form [Δ^(2)_+ f ](z_1,z_2) = ∫_0^1du∫_0^1dt ϰ(t) [f(z_12^ut,z_2) - f(z_1, z_21^ut)] +∫_0^1dα∫_0^α̅ dβ[ω(α,β) +ω(α,β) ℙ_12] [f(z_12^α,z_21^β) - f(z_12^β,z_21^α)]. The function ϰ(t) is exactly the same as in the vector case, see Refs. <cit.> ϰ(t) =C_F^2 ϰ_P(t)+ C_F/N_c ϰ_FA(t)+C_Fβ_0 ϰ_bF(t), where ϰ_bF(t) = - 2 t̅/t( lnt̅ + 5/3), ϰ_FA(t) =2t̅/t{ (2+t)[_2(t̅)-_2(t)] -(2- t)(t/t̅ln t+lnt̅) - π^2/6 t -4/3-t/2(1-t/t̅)}, ϰ_P(t) =4t̅[_2(t̅)-_2(1)] +4(t^2/t̅-2t̅/t)[_2(t)-_2(1)] -2tln tlnt̅ -t̅/t (2-t) ln^2t̅ +t^2/t̅ln^2 t-2(1+1/t)lnt̅ -2(1+1/t̅)ln t - 16/3t̅/t -1-5t . For the functions ω, ω we obtain ω(α,β) =C_F/N_cω_NP(α,β), with ω_NP(α,β) = -2{α/α̅[_2(α/β̅)-_2(α)] -ατ̅lnτ̅-1/α̅lnα̅lnβ̅-β/β̅lnα̅-1/2β} and ω(α,β) = C_F^2 ω_P(α,β)+C_F/N_cω_NP(α,β), where ω_P(α,β) = 4/α[_2(α̅)-ζ_2+1/4α̅ln^2α̅+1/2(β-2)lnα̅] +4/α̅[_2( α)-ζ_2+1/4αln^2α+1/2(β̅-2)lnα] , ω_NP(α,β) =2 {α̅/α[_2(β/α̅)-_2(β)-_2(α)+_2(α̅)-ζ_2] - lnα -1/αlnα̅ +α(τ̅/τlnτ̅+1/2) } . We conclude this section by emphasising that it contains explicit two-loop expressions of the evolution kernel (<ref>) and the conformal anomaly (conformal generators) for the transversity operators (<ref>). § THREE-LOOP KERNEL §.§ Symmetries and kernels In this section we explain how to reconstruct the evolution kernel from the following data: the forward anomalous dimensions γ(N) and the conformal anomaly Δ. The anomalous dimensions are the eigenvalues of the evolution kernel, ℍ(a) z_12^N-1 =γ(N) z_12^N-1 . The kernel ℍ(a) is invariant under transformations from the collinear SL(2,ℝ) subgroup of the conformal group [S_±,0(a),ℍ(a)]=0. The generators S_±,0(a) have the form (<ref>) which includes, besides the evolution kernel itself, the conformal anomaly Δ. Although Eqs. (<ref>) and (<ref>), in principle, completely determine the kernel ℍ(a), in practice the problem of finding the kernel is not quite straightforward since the generators have a non-canonical form. To overcome technical problems we follow the approach developed in Ref. <cit.> and construct a transformation which maps the deformed symmetry generators to the canonical ones, S_±,0(a)↦ S_±,0^(0), S_±,0^(0) =V S_±,0(a)V^-1 , 𝐇_inv(a) =V ℍ V^-1 . The new kernel 𝐇_inv(a) commutes with the canonical generators, [S_±,0^(0),𝐇_inv(a)]=0, and has the form 𝐇_inv(a) = Γ_cusp(a)ℋ + 𝒜(a) + ℋ(a), where the kernel ℋ is defined in Eq. (<ref>), 𝒜(a) is a constant and ℋ(a)f(z_1,z_2) =∫_0^1dα∫_0^α̅dβ( h(τ)+h(τ) ℙ_12) f(z_12^α,z_21^β). The functions h and h are functions of one variable τ=αβ/α̅β̅, the so-called conformal ratio. This property is a consequence of the invariance of the kernel (<ref>) under canonical conformal transformations. [ Note, that the kernel 𝐇 which enters Eq. (<ref>) is parameterized by three functions: a function of one variable φ(α) and two functions of two variables, χ(α,β) and χ(α,β). Of course, the invariance of the kernel with respect to the transformations generated by S_α(a) implies some relations between these functions, which, however, are somewhat non-transparent. ] Being a function of one variable, the kernel h (h) is completely determined by its moments, m(N) ( m(N)), m(N) =∫_0^1dα∫_0^α̅ dβ h(τ) (1-α-β)^N-1 =∫_0^1 dτ/(1-τ)^2 h(τ) Q_N( 1+τ/1-τ) , where Q_N is the Legendre function of the second kind. 
Namely, h(τ)=1/2π i∫_C dN (2N+1) m(N) P_N (1+τ/1-τ), where P_N is the Legendre function of the first kind, and the integration contour C goes along a line parallel to the imaginary axis such that all singularities of m(N) lie to the left of the contour. §.§ Similarity transformation The construction of the intertwining operator V can be naturally divided into two steps. Let us write, V =V_2 V_1. The first transform V_1 brings the symmetry generators to the “covariant” form, 𝐒_α(a)=V_1 S_α(a) V_1^-1, 𝐒_-(a) = S^(0)_-, 𝐒_0(a) = S_0^(0) +β̅(a) + 12𝐇(a), 𝐒_+(a) = S_+^(0) + (z_1 + z_2)(β̅(a) + 12𝐇(a)), where 𝐇(a)=V_1 ℍ(a) V_1^-1. Note that the new generators have the form (<ref>) with the conformal anomaly Δ(a)↦ 0. An attractive feature of this representation is that when the generators act on an eigenfunction of the kernel 𝐇 one can replace the kernel by the corresponding eigenvalue, namely 𝐇↦γ_N. Looking for the operator V_1 in the form V_1(a) = exp{X(a)}, where X(a) = aX^(1) + a^2X^(2) + O(a^3), one gets the following equations for X^(k): [S_-^(0),X^(k)]=[S_0^(0),X^(k)]=0 and [S_+^(0), X^(1)] = z_12Δ^(1), [S_+^(0), X^(2)] = z_12Δ^(2) + [X^(1), z_1 + z_2](β_0 + 1/2ℍ^(1)) + 12[X^(1), z_12Δ^(1)]. These equations define the operators X^(k) up to a canonically invariant operator. It reflects the arbitrariness in the definition of V_1, which can be multiplied by an arbitrary operator depending only on the kernel 𝐇: V_1↦V_1^'= U(𝐇) V_1. Since the relation (<ref>) holds, the operators X^(k) can be represented as integral operators similar to (<ref>). The Eqs. (<ref>) lead to differential equations on the integral kernels which are not difficult to solve. For example, the operator X^(1) has the form X^(1)f(z_1, z_2) = 2C_F∫_0^1 dα lnα/α(2f(z_1,z_2) - f(z_12^α, z_2) - f(z_1, z_21^α)), which is exactly the same as in the vector case. The expression for the kernel X^(2) is quite involved and is given in App. <ref>, while we move to the second transformation, V_2. Remarkably enough it can be written in a closed form <cit.> V_2 =∑_k=0^∞1/k!L^k (β̅(a)+1/2𝐇(a))^k , V_2^-1 =∑_k=0^∞1/k! (-L)^k (β̅(a)+1/2𝐇_inv(a))^k , where L= ln |z_12|. The operator V_2 intertwines the generators (<ref>) with the canonical ones and the kernels 𝐇 and 𝐇_inv. V_2 𝐒_α(a) = S_α^(0) V_2, V_2 𝐇(a) = 𝐇_inv(a) V_2 . Inserting (<ref>) in the last of these equations we obtain the following relation between the kernels 𝐇 and 𝐇_inv, 𝐇(a) =𝐇_inv(a)+∑_n=1^∞1/n!T_n(a) (β̅(a)+1/2𝐇 (a))^n , where the operators T_n(a) are defined by recursion, T_n(a)=[T_n-1(a),L] , T_0(a)=𝐇_inv(a) . Taking into account Eqs. (<ref>), (<ref>) one gets for T_n(a), n>0, T_n(a) f(z_1,z_2) = -Γ_cusp(a) ∫_0^1dαα̅/αln^nα̅(f(z_12^α,z_2)+ f(z_1,z_21^α)) +∫_0^1dα∫_0^α̅dβln^n(1-α-β) (h(τ)+h(τ)ℙ_12) f(z_12^α, z_21^β) . Since the n-th term in the sum in (<ref>) is of order O(a^n+1) one can easily obtain an approximation for 𝐇(a) with arbitrary precision, e.g. 𝐇(a) = 𝐇_inv(a)+ T_1(a) (1+1/2T_1(a) ) (β̅(a)+1/2𝐇_inv(a)) +1/2T_2(a)(β̅(a)+1/2𝐇_inv(a))^2 +O(a^4) . Expanding all operators in power series, 𝐇_inv(a)=∑_k a^k 𝐇_inv^(k), T_n(a)=∑_k a^k T_n^(k), one derives 𝐇^(1) = 𝐇_inv^(1), 𝐇^(2) = 𝐇_inv^(2) + T_1^(1)(β_0 + 12𝐇_inv^(1)), 𝐇^(3) = 𝐇_inv^(3) + T_1^(1)(β_1+1/2𝐇_inv^(2)) + 1/2T_2^(1)(β_0 +1/2𝐇_inv^(1))^2 + (T_1^(2) + 1/2(T_1^(1))^2) (β_0 +1/2𝐇_inv^(1)), which agrees with the expressions obtained in Refs. <cit.>. Concluding this section we discuss the relation between the eigenvalues of the operators 𝐇 and 𝐇_inv. 
Since both operators commute with the permutation operator ℙ_12, functions symmetric and anti-symmetric under permutations z_1↔ z_2 form invariant subspaces of both operators. It is easy to check that the functions ψ_N^+(z_1,z_2)= |z_12|^N-1 and ψ_N^-(z_1,z_2)=sign(z_12) |z_12|^N-1 are the eigenfunctions of both operators. Note that we do not assume that N is integer. Then if 𝐇(a)ψ^±_N(z_1,z_2)= γ_± (N)ψ^±_N(z_1,z_2), and 𝐇_inv(a)ψ^±_N(z_1,z_2)= λ_± (N)ψ^±_N(z_1,z_2), using the relation (<ref>), one gets the following relation for the eigenvalues of γ_± and λ_± γ_±(N) = λ_±(N+β̅(a)+1/2γ_±(N)). This relation was introduced in Refs. <cit.> as a generalization of the Gribov-Lipatov reciprocity relation <cit.>. The functions λ_± have much simpler form than the anomalous dimensions γ_±. The asymptotic expansion of the functions λ_±(N) for large N is invariant under the reflection N→ -N-1, see e.g. <cit.>. This means that only special combinations of the harmonic sums <cit.> can appear in the perturbative expansion of reciprocity respecting (RR) anomalous dimensions <cit.>. Thus starting from the three loop anomalous dimensions for the transversity operators <cit.> we can find the RR anomalous dimensions, λ_±(N), and, using the technique developed in <cit.>, reconstruct the kernel 𝐇_inv. Then the kernels 𝐇^(k=1,2,3) are given by Eqs. (<ref>) and the evolution kernels in MS-scheme read, ℍ^(1) = 𝐇^(1) , ℍ^(2) = 𝐇^(2) +[𝐇^(1),X^(1)] , ℍ^(3) = 𝐇^(3) + [𝐇^(2),X^(1)] + [𝐇^(1),X^(2)] +1/2 [[𝐇^(1),X^(1)] X^(1)] . The kernel X^(1) is presented in (<ref>) and the explicit expression for the kernel X^(2) can be found in App. <ref>. §.§ Invariant kernel The kernels h, h which determine the operator ℋ(a) in Eq. (<ref>) can be obtained as follows: First, we reconstruct the eigenvalues of the kernel 𝐇_inv, λ_±(N), using the result for the three loop anomalous dimensions γ_±(N), <cit.>. The above mentioned functions can be written as γ_±(N) =2Γ_cusp(a) S_1(N)+ A(a) + κ_±(N), λ_±(N) =2Γ_cusp(a) S_1(N)+ 𝒜(a) + m_±(N). The anomalous dimensions γ_+ and γ_- gives the anomalous dimensions of the local operators for even and odd N, respectively, and m_±(N)=m(N)∓m(N), where m(N),m(N) are the moments of the kernels h,h̅, Eqs. (<ref>). In the leading order m^(1)_±(N)=0 and 𝒜^(1) = -6C_F. At two loops one finds 𝒜^(2) =-C_F^2( 43/3 + 104/3ζ_2 ) + C_F n_f ( 2/3+ 16/3ζ_2 ) + C_F/N_c( - 17/3 - 88/3ζ_2 + 24ζ_3), m^(2)_±(N) = 2 C_F/N_c(32 S_1 (S_-2+ζ_2/2) +16 (S_3-ζ_3) -32(S_-2,1-1/2S_-3+1/3ζ_3) + 2(1-(-1)^N)/N(N+1)), The expression for the moments m_± includes only special combinations of harmonic sums, the so-called parity invariant harmonic sums <cit.>, whose asymptotic expansion is invariant under N↦-N-1. Namely, following <cit.>, we define Ω_1(N) =S_1(N), Ω_-2(N)=(-1)^N(S_-2(N)+ζ_2/2), Ω_3(N) =S_3(N)-ζ_3, Ω_-2,1(N)=(-1)^N(S_-2,1(N)-1/2S_-3(N)+1/3ζ_3), and rewrite m_± as m^(2)_±(N) = 2 C_F/N_c( 16 Ω_3 ± 32(Ω_1 Ω_-2+Ω_-2,1) +2(1∓ 1)/N(N+1)). The kernels with eigenvalues corresponding to Ω_a,b,… can be effectively constructed, see  <cit.>, e.g. Ω_-2↦τ̅/2, Ω_3↦τ̅/(2τ)lnτ̅, see App. <ref>. The product of two sums Ω_a⃗×Ω_b⃗ corresponds to the convolution of the corresponding kernels that can be easily evaluated with the HyperInt package <cit.>. Thus, after some algebra, we obtain for the kernels h,h h^(2)(τ) = 8 C_F/N_c( τ̅/τlnτ̅+1/2), h^(2)(τ) = 8 C_F/N_c(-τ̅lnτ̅+1/2), which is in full agreement with the result of the explicit calculation, Eq. 
(<ref>), Going to the three-loop expression and repeating all the steps described above we obtain 𝒜^(3) = C_F n_f^2(34/9 -160/27ζ_2 + 32/9ζ_3) + C_F^2 n_f(-34 + 4984/27ζ_2-512/15ζ_2^2 + 16/9ζ_3) +C_F n_f/N_c(-40+2672/27ζ_2 - 8/5ζ_2^2 - 400/9ζ_3 ) +C_F^3(1694/9 - 22180/27ζ_2 + 2464/15ζ_2^2 + 1064/9ζ_3 - 320ζ_5) +C_F^2/N_c(5269/18 - 28588/27ζ_2 + 2216/15ζ_2^2+7352/9ζ_3 - 32ζ_2ζ_3 -560ζ_5) +C_F/N_c^2( 1657/18 - 8992/27ζ_2 + 4ζ_2^2 + 3104/9ζ_3 - 80ζ_5 ). For the three-loop kernels h^(3) and h^(3) we find h^(3)(τ) = -C_F n_f^216/9 + C_F^2 n_f( 352/9 -8/3H_0 +16/3τ̅/τ( H_2- H_10) ) +C_F n_f/N_c( 8 -8/3H_1 -4/3H_0 + τ̅/τ( 8 H_2-8/3H_10+16/3H_11 +160/9H_1 ) ) + C_F^3( -1936/9 +88/3H_0 + 32τ̅/τ( H_3 + H_12 - H_110- H_20 -1/3H_2 +1/3H_10 +1/2H_1) ) +C_F^2/N_c( -1523 - 96ζ_3 - (8/3 - 48ζ_2)H_0 +76/3H_1 -32H_10 + 4H_2 - 48H_20 - 16H_11 - 24H_21 + ττ̅(- 24ζ_2 - 48ζ_3 + 64H_0) + τ + 1τ̅( -(32 - 16ζ_2)H_0 +12H_2 - 16H_20 - 8H_21) + τ̅τ(-(2000/9 + 16ζ_2)H_1 + 32/3H_10 -208/3H_2 - 64H_20 - 323H_11 - 32H_110 + 64H_3 + 80H_12 + 64H_21 + 96H_111)) +C_F/N_c^2(544/9 + 16ζ_2 - 96ζ_3 - (68/3 - 36ζ_2)H_0 + 68/3H_1 - 24H_10 + 4H_2 -36H_20 + ττ̅(-8ζ_2 - 48ζ_3 + 48H_0 ) + τ + 1τ̅((-24 + 12ζ_2)H_0 + 4H_2 -12H_20) +τ̅τ(-(1072/9 + 16ζ_2)H_1 + 44/3H_10 - 44H_2 - 32H_20 -16/3H_11 - 16H_110 + 32H_3 + 32H_12 + 48H_21 + 32H_111)), h^(3)(τ) =-C_F n_f/N_c( 104/9 +8/3H_0 +8/9(23-20τ) H_1 +16/3τ̅( H_11+ H_10) ) +C_F^2/N_c (14809 - 40ζ_2 - 48ζ_3 +(28/3 + 24ζ_2)H_0 + 76/3H_1 + 16H_10 - 4H_2 - 24H_20 - 16H_11 + 24H_21 + ττ̅(-24ζ_2 + 48ζ_3 - 32H_0) + τ + 1τ̅((16 - 8ζ_2)H_0 + 12H_2 + 8H_20 - 8H_21) + τ̅(-24 + 48ζ_2 + 48ζ_3 - 16ζ_2H_0 + (2144/9 + 16ζ_2)H_1 + 104/3H_10 - 24H_2 + 16H_20 + 32/3H_11 - 16H_110 -32H_12 - 32H_21 - 96H_111)) +C_F/N_c^2( 10289 - 24ζ_2 - 48ζ_3 + (44/3 + 36ζ_2)H_0 + 68/3H_1 + 24H_10 - 4H_2 - 36H_20 + τ/τ̅(-8ζ_2 + 48ζ_3 - 48H_0) + τ + 1τ̅((24 - 12ζ_2)H_0 + 4H_2 + 12H_20) +τ̅(-24 + 24ζ_2 + 48ζ_3 - 32ζ_2H_0 + (1072/9+ 16ζ_2)H_1 + 88/3H_10 - 24H_2 + 32H_20 + 16/3H_11 - 32H_110 + 16H_12 + 16H_21 - 32H_111)) , where H_a⃗(τ) ≡H_a_1 … a_k are harmonic polylogarithms (HPLs) <cit.>. § LOCAL OPERATORS In this section we present the anomalous dimension matrix for the local operators in the Gegenbauer basis, 𝒪_nk(0) =(∂_z_1+∂_z_2)^k C^(3/2)_n ( ∂_z_1-∂_z_2/∂_z_1+∂_z_2)[𝒪](z_1,z_2)|_z_1=z_2=0 , where k≥ n are integers. The RGE for these operators takes the form ( μ∂/∂μ + β(a)∂/∂ a) O_nk =-∑_n'=0^nγ_nn' O_n'k . Note that the anomalous dimension matrix does not depend on k. In the Gegenbauer basis the matrix γ_nn' is diagonal at one loop γ_nn'^(1) = δ_nn'γ^(1)(n+1) =δ_nn' 4 C_F[2S_1(n+1)-3/2] , i.e. the operators 𝒪_nk evolve autonomously in this order <cit.>. It easy to understand that the anomalous dimension matrix γ is nothing else as a matrix of the evolution kernel ℍ in a certain basis. See, e.g., Ref. <cit.> for a discussion of their basis transformation properties. Indeed, expanding the light-ray operator over the local operators as follows [𝒪](z_1,z_2)=∑_knΨ_nk(z_1,z_2) 𝒪_nk(0) , one defines the functions Ψ_nk(z_1,z_2), which are homogeneous polynomials of degree k in z_1,z_2, e.g. Ψ_nk(z_1,z_2)∼ (S_+^(0))^k-n(z_1-z_2)^n. These functions diagonalize the one-loop kernel and beyond one loop one obtains ℍΨ_nk = ∑_n'=0^nγ_n'nΨ_n'k . Thus the off-diagonal part of the anomalous dimension matrix γ is completely determined by the non-invariant part of the kernel. Namely, evaluating Eqs. (<ref>) in the basis formed by the functions Ψ_nk one can easily reconstruct the off-diagonal part of the matrix γ. 
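The diagonal part, in turn, is fixed by the forward anomalous dimensions, γ^(1)_nn = γ^(1)(n+1). As an illustration of how the harmonic sums entering γ^(1)(N) and γ^(2)(N) can be evaluated in practice, we give a short Python sketch below; it simply transcribes the expressions quoted above, and the β_0 convention as well as all function names are ours rather than part of the original calculation.

```python
# Illustrative numerical sketch (ours, not part of the original calculation):
# nested harmonic sums and the forward anomalous dimensions gamma^(1), gamma^(2)
# transcribed from the expressions given above.
# The convention beta_0 = 11/3*Nc - 2/3*nf is an assumption on our side.
from fractions import Fraction as Fr

def S(idx, N):
    """Nested harmonic sum S_{a1,...,ak}(N):
    S_a(N) = sum_{i=1}^{N} sign(a)^i / i^|a|, deeper indices nested as S_{...}(i)."""
    if not idx:
        return Fr(1)
    a, rest = idx[0], idx[1:]
    sgn, w = (1 if a > 0 else -1), abs(a)
    return sum(Fr(sgn ** i, i ** w) * S(rest, i) for i in range(1, N + 1))

def gamma1(N, CF=Fr(4, 3)):
    # one-loop eigenvalue: gamma^(1)(N) = 4 C_F [2 S_1(N) - 3/2]
    return 4 * CF * (2 * S((1,), N) - Fr(3, 2))

def gamma2(N, Nc=3, nf=4):
    # two-loop eigenvalue, transcribed term by term from the text;
    # the last term is read as (1-(-1)^N)/(2N(N+1)).
    CF = Fr(Nc ** 2 - 1, 2 * Nc)
    beta0 = Fr(11, 3) * Nc - Fr(2, 3) * nf          # assumed convention
    S1, S2, S3 = S((1,), N), S((2,), N), S((3,), N)
    Sm3, S1m2 = S((-3,), N), S((1, -2), N)
    return (-8 * CF * beta0 * (S2 - Fr(5, 3) * S1 + Fr(1, 8))
            + 8 * CF ** 2 * (-2 * S2 * (2 * S1 - Fr(3, 2)) + Fr(8, 3) * S1 - Fr(7, 8))
            + 8 * CF / Nc * (2 * S3 - 2 * Sm3 + 4 * S1m2 + Fr(4, 3) * S1 - Fr(1, 4)
                             + Fr(1 - (-1) ** N, 2 * N * (N + 1))))

# one-loop diagonal entries in the Gegenbauer basis: gamma_nn = gamma(n+1)
print([gamma1(n + 1) for n in range(4)])   # 8/3, 8, 104/9, 128/9
```

For n = 0 this gives γ^(1)(1) = 2C_F.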
The method was developed by Dieter Müller in <cit.>, while here we follow an analysis given in Ref. <cit.>. At two loops the off-diagonal part of the anomalous dimension matrix can be written in analytical form: γ^(2)_mn = δ_mnγ^(2)_n - γ^(1)_m-γ^(1)_n/a_mn{ - 2(2n+3) (β_0 + 1/2γ^(1)_n)ϑ_mn+ w_mn^(1)}, where γ_n≡γ_nn, a_mn =(m-n)(m+n+3) , w_mn^(1) = 4 C_F (2n+3) a_mn( A_mn - S_1(m+1)/(n+1)(n+2) + 2A_mn/a_mn) ϑ_mn , A_mn = S_1(m+n+2/2) -S_1(m-n-2/2) +2S_1(m-n-1)-S_1(m+1). and ϑ_mn = 1 if m - n > 0 and even 0 else. The Eq. (<ref>) is the same as in the vector case <cit.>. Of course, one can take the corresponding diagonal anomalous dimension γ_n. For the first few elements of the matrix we obtained (for N_c=3): γ^(2) = [ 724/9 0 0 0 0 0; 0 124 0 0 0 0; 272/9 0 38044/243 0 0 0; 0 8360/243 0 44116/243 0 0; 44/5 0 4592/135 0 6155756/30375 0; 0 5852/405 0 36512/1125 0 744184/3375 ] -n_f [ 104/27 0 0 0 0 0; 0 8 0 0 0 0; 32/9 0 904/81 0 0 0; 0 80/27 0 1108/81 0 0; 88/45 0 112/45 0 31924/2025 0; 0 152/81 0 32/15 0 35524/2025 ] . For the three-loop matrix γ^(3) there is no analytical expression. As above we give the numerical expression for the first few off-diagonal elements, (0≤ m,n≤ 5) for N_c=3, γ^(3)_off=γ^(3)_1 + n_f γ^(3)_n_f+ n_f^2 γ^(3)_n_f^2 . We find γ^(3)_1 = [ 0 0 0 0 0 0; 0 0 0 0 0 0; 44992/81 0 0 0 0 0; 0 1316680/2187 0 0 0 0; 1977808/10125 0 54669748/91125 0 0 0; 0 68848018/273375 0 443231668/759375 0 0 ] and γ^(3)_n_f = -[ 0 0 0 0 0 0; 0 0 0 0 0 0; 21008/243 0 0 0 0 0; 0 200060/2187 0 0 0 0; 998842/30375 0 898436/10125 0 0 0; 0 745418/18225 0 4266496/50625 0 0 ], γ^(3)_n_f^2 = -[ 0 0 0 0 0 0; 0 0 0 0 0 0; 160/81 0 0 0 0 0; 0 520/243 0 0 0 0; 1012/2025 0 4088/2025 0 0 0; 0 3268/3645 0 416/225 0 0 ]. For completeness we also provide the first few diagonal entries of the anomalous dimension, γ_00^(3) = 105110/81 - 1856/27ζ_3 - (10480/81 + 320/9ζ_3)n_f - 8/9n_f^2, γ_11^(3) = 19162/9 - (5608/27 + 320/3ζ_3)n_f - 184/81n_f^2, γ_22^(3) = 17770162/6561 + 1280/81ζ_3 - (552308/2187 + 4160/27ζ_3)n_f - 2408/729n_f^2, γ_33^(3) = 206734549/65610 + 560/27ζ_3 - (3126367/10935 + 5120/27ζ_3)n_f - 14722/3645n_f^2, γ_44^(3) = 144207743479/41006250 + 9424/405ζ_3 - (428108447/1366875 + 5888/27ζ_3)n_f - 418594/91125n_f^2, γ_55^(3) = 183119500163/47840625 + 3328/135ζ_3 - (1073824028/3189375 + 2176/9ζ_3)n_f - 3209758/637875n_f^2. Note that the index n enumerates elements in the Gegenbauer basis so that γ_nn = γ(n + 1). We have checked that the n_f^2 contributions to the off-diagonal matrix agree with the result obtained in Ref. <cit.> [The evolution kernel for the leading n_f contribution in all orders can be found in <cit.>.]. § SUMMARY The theoretical description of hard exclusive processes in QCD requires the knowledge of scale dependence of nonforward matrix elements of local/non-local operators. It is described by the corresponding anomalous dimension matrix or evolution kernel, which is completely determined by the forward anomalous dimensions at ℓ loops and an additional quantity, the conformal anomaly calculated in (ℓ-1)-loop approximation <cit.>. This arises from the hidden conformal symmetry present in the evolution kernels of the MS scheme in QCD. The corresponding generators, however, receive quantum corrections and differ from the canonical ones. The conformal anomaly, introduced by Müller, describes a non-trivial modification of the generator of special conformal transformations. 
For the (axial-)vector nonsinglet twist-two operators the conformal anomaly was calculated at one- and two-loop accuracy in Refs. <cit.> and <cit.>, respectively, and the evolution kernel for (axial-)vector operators are known now at the three-loop level <cit.> . In this paper we have calculated the two-loop conformal anomaly for the generator of special conformal transformations for the transversity operator in QCD. Using this result and the corresponding forward three-loop anomalous dimensions calculated in <cit.> we have reconstructed the evolution kernel for the operators in question in non-forward kinematics. In addition we have derived the explicit expression for the three-loop anomalous dimension matrix for the local operators containing up to six covariant derivatives. Extensions to a higher number of covariant derivatives are straight forward. In this form, our result is applicable to the renormalization of meson wave functions and could be useful for lattice calculations of their first few moments. § ACKNOWLEDGMENTS tocsectionAcknowledgments We are grateful to V.N. Velizhanin for helpful discussions. This work has been supported by the DFG through the Research Unit FOR 2926, Next Generation pQCD for Hadron Structure: Preparing for the EIC, project number 40824754, DFG grant MO 1801/4-2 and the ERC Advanced Grant 101095857 Conformal-EIC. § APPENDICES tocsectionAppendices § RESULTS FOR TWO-LOOP DIAGRAMS §.§ Evolution kernel The contributions to the evolution kernel from the diagrams in Fig. <ref> (a)–(p) (including symmetric diagrams with the interchange of the quark and the antiquark) can be written in the following form: [ℍ𝒪](z_1z_2) = -4 ∫_0^1dα∫_0^α̅ dβ[χ(α,β) + χ^ℙ(α,β) ℙ_12] [𝒪(z_12^α,z_21^β)+𝒪(z_12^β,z_21^α)] -4 ∫_0^1du h (u) [2 𝒪(z_1,z_2) - 𝒪(z_12^u,z_2) - 𝒪(z_1,z_21^u)] , where ℙ_12 is the permutation operator. For any function f(z_1,z_2) ℙ_12 f(z_1,z_2) = f(z_2,z_1), ( ℙ_12𝒪(z_12^α,z_21^β) = 𝒪(z_21^α,z_12^β) ). One obtains (only the non-vanishing contributions are listed): h_(a)(u) = C_F^2u̅/u[ln u+1], h_(b)(u) = C_F u̅/u[ (2C_A-β_0 ) lnu̅ + 8/3 C_A - 5/3β_0 ], h_(c)(u) = [C_F^2 - 1/2 C_FC_A] u̅/u[ln^2u̅ - 3 u/u̅ln u + 3 lnu̅ -ln u - 1], h_(d)(u) = 1/2 C_F C_A u̅/u[ 1/2(1-u/u̅)ln^2 u +lnu̅ -3], h_(e+f)(u) = 2 C_F^2 u̅/u[ 2( _2(1)-_2(u̅)) - ln^2u̅ + 2 u/u̅ln u] + C_F C_A u̅/u[2 (_2(u̅)- _2(u)) + 1/2ln^2u̅ - 1/2ln^2 u - 1+ u/u̅ln u-2 ], h_(g)(u) = - C_F C_A u̅/u[ _2(u̅)-_2(1)+1+1/4ln^2u̅ +lnu̅ -1+u/2u̅ln u (1/2ln u +1)], h_(j)(u) =[C_F^2 - 1/2 C_FC_A] ln u , h_(o)(u) =2[C_F^2 - 1/2 C_FC_A]u̅/u[-2_2(u)+u/u̅ln ulnu̅-1/2ln^2 u̅-u/u̅ln u], h_(p)(u) =C_FC_Au̅/u[_2( u)+1/u̅ln ulnu̅-1/4ln^2u̅-u/4u̅ln^2u-u/u̅ln u], and χ_(i)(α,β) = 1/6C_F(C_A-β_0)δ(α)δ(β), χ_(j)(α,β) =[C_F^2 - 1/2 C_FC_A] δ(α)δ(β), χ_(m)(α,β) = [C_F^2 - 1/2 C_FC_A], χ_(o)(α,β) = -2[C_F^2 - 1/2 C_FC_A][ 1/α̅lnα-1/αlnα̅-τ̅/τlnτ̅+[2+ζ_2-3ζ_3] δ(α)δ(β)], χ_(p)(α,β) = C_F C_A[1/αlnα̅-1/α̅lnα + [ζ_2-2]δ(α)δ(β)]. The nonvanishing contributions to χ(α,β) originate from two diagrams only: χ_(m)(α,β) = [C_F^2 - 1/2 C_FC_A], χ_(o)(α,β) =-2[C_F^2 - 1/2 C_FC_A] τ̅lnτ̅ . We note here that the results for the h functions are exactly the same as in the vector case <cit.>. §.§ Conformal anomaly Terms due to the conformal variation of the action can be written in the form Δ S_+ = 1/2ℍ(z_1+z_2) + z_12Δ_+ , where ℍ is the corresponding contribution to the evolution kernel. The contributions to Δ_+ from the diagrams in Fig. 
<ref> (including symmetric diagrams with the interchange of the quark and the antiquark) can be brought to the following form: Below we list the non-vanishing contributions only ϰ_(a)(t) = C_F^2[1/t+1+t̅/tln t], ϰ_(b)(t) = - 2 C_F t̅/t[ (β_0 -2 C_A ) lnt̅ - 8/3 C_A + 5/3β_0 ], ϰ_(c)(t) = [C_F^2 - 1/2 C_FC_A][t ln ^2 t + 2t̅/tln ^2t̅ + 6t̅/tlnt̅ - t̅/t (3t+2) ln t - 9 t + 8 -1/t], ϰ_(d)(t) = C_F C_A{t̅/t[ 1-2t/2t̅ln^2 t +lnt̅ -3] +1/2[ 1/2ln^2 t-t̅ln^2t̅+t^2-t̅/tln t-2t̅lnt̅-1-t̅] }, ϰ_(e+f)(t) = -4C_F^2{ t[_2(t)-_2(1)] + 2 t̅/t[_2(t̅)-_2(1)] + t̅/tln^2 t̅ + 1/2 t ln^2 t + 2 t̅lnt̅ - 3/2(1-2t)ln t + 2 } + C_F C_A t̅/t{ 4 [_2(t̅)-_2(t)] + 1/2(2+t)ln^2t̅ - (1-t^2/2t̅)ln^2 t - 2(1-2t)lnt̅ - (5t+ 1/t̅)ln t - 3 + 2t}, ϰ_(g)(t) = C_F C_A t̅/t{ t[_2(t̅) - _2(1)]+ 1/4 t ln^2t̅ + 1/4(2+t)ln^2 t - (3-t)lnt̅ + 1/2(1-t^2/t̅)ln t -t̅ -3/2}, ϰ_(j)(t) = [C_F^2 - 1/2 C_FC_A][-tln t -1], ϰ_(o)(t) = [C_F^2 - 1/2 C_FC_A] {4/t̅[_2(t)-_2(1)]-4t[_2(t)-_2(1)]+4t̅_2(1) -2tln tlnt̅ +t/t̅ln^2t+t̅ln^2t̅-4tlnt̅+2t/t̅(2-3t)ln t+2 }, ϰ_(p)(t) = C_F C_A {2t/t̅[_2(t)-_2(1)]+t̅[_2(t̅)-_2(1)] -tln tlnt̅ +1/4t̅ln^2t̅ +1/4t(3-t)/t̅ln^2t -t^2/t̅ln t+1/2ln t-1+t/tlnt̅ + 1}. Note here that all ϰ functions are exactly the same as in the vector case. The function ω(α,β) receives contributions from two diagrams only: ω_(m)(α,β) = -2 [C_F^2-1/2C_F C_A] β, ω_(o)(α,β) = 4 [C_F^2-1/2C_F C_A] {α/α̅[_2(α/β̅)-_2(α)] -ατ̅lnτ̅-1/α̅lnα̅lnβ̅-β/β̅lnα̅}. The non-vanishing contributions to ω(α,β) are ω_(m)(α,β) = 2 [C_F^2 - 1/2 C_FC_A]β, ω_(o)(α,β) = -4[C_F^2 - 1/2 C_FC_A] {α̅/α[_2(β/α̅)-_2(α)-_2(β)] -α/α̅[_2(α)-ζ_2] - 1/4α̅/αln^2α̅-1/4α/α̅ln^2α+lnαlnα̅+ ατ̅/τlnτ̅+α/α̅lnα -1/2β̅/α̅lnα -1/2β/αlnα̅ }, ω_(p)(α,β) = C_FC_A {2/α̅[_2(α)-_2(1)]+2/α[_2(α̅)-_2(1)] +1/2α̅/αln^2α̅ +1/2α/α̅ln^2α -2/α̅lnα - 2/αlnα̅ +β/αlnα̅+β̅/α̅lnα }. § X KERNEL In this appendix we present the results for the two-loop kernel X^(2) (the one-loop result is given in Eq. (<ref>)). The kernel X^(2) is defined as the solution of the Eq. (<ref>). For the technical use this relation can be seen as a differential equation for the integration kernel. In general, for an arbitrary integral operator F of the form [Ff](z_1, z_2) = F_constf(z_1, z_2) + ∫_0^1dα∫_0^α̅ h(α, β) f(z_12^α, z_21^β) + ∫_0^1dαα̅αh^δ(α)(2f(z_1, z_2) - f(z_12^α,z_2 - f(z_1, z_21^α))), its commutator with the generator S_+^(0) has the form [S_+^(0), F]f = z_12∫_0^1dα∫_0^α̅ (αα̅∂_α - ββ̅∂_β)h(α, β)f(z_12^α, z_21^β) - z_12∫_0^1 dα α̅^2∂_α h^δ(α)(f(z_12^α, z_2) - f(z_1, z_21^α)). The kernel X^(2) can be written as a sum of three terms corresponding to the three contributions on the right hand side of Eq. (<ref>) X^(2) = X^(2)_I + X^(2,1)(β_0 + 12𝐇_inv^(1)) - 1/2X^(2,2). It is easy to see that the operators X^(2,1) and X^(2,2) are exactly the same as in the vector case <cit.> X^(2,1)f(z_1,z_2) = -2C_F∫_0^1 dα [α̅/αlnα̅ + lnα][2f(z_1, z_2) - f(z_12^α, z_2) - f(z_1, z_21^α)], and X^(2,2)f(z_1, z_2) = =4C_F^2{∫_0^1 dα∫_0^1 du [lnα̅/α(12lnα̅ + 2) + u̅/uϑ(α)/α̅] [2f(z_1, z_2) - f(z_12^α u, z_2) - f(z_1, z_21^α u)] + ∫_0^1 dα∫_0^α̅dβ [ϑ_+(α) + ϑ_+(β)/τ(f(z_12^α, z_21^β) - f(z_1, z_21^β) - f(z_12^α, z_2) + f(z_1, z_2)) +(ϑ_0(α) + ϑ_0(β)) f(z_12^α, z_21^β)] }, where ϑ_+(α) = -1/α̅[lnαlnα̅ + 2αlnα + 2α̅lnα̅], ϑ_0(α) = 2[_3(α̅) - _3(α) - lnα̅_2(α̅) + lnα_2(α)] + 1αlnαlnα̅ + 2αlnα̅, ϑ(α) = α/α̅[_2(α̅) - ln^2α] - α̅/2αln^2α̅ + [α - 2α]lnαlnα̅ - [3 + 1α̅]lnα - (α - α̅)α̅α - 2. 
The operator X_I obeys the following equation [S_+^(0), X_I^(2)] = z_12Δ_+^(2) + 14[ℍ^(2),z_1 + z_2] = z_12Δ_+^(2) + 14[1/2T^(1)𝐇_inv^(1)+[𝐇_inv^(1), X^(1)], z_1 + z_2] +1/4[𝐇_inv^(2)+β_0 T_1^(1),z_1+z_2] The solution can be written as X_I^(2) = X_IAB^(2) + 1/4(T^(2)_1 + 1/2β_0T^(1)_2) . The last term in this equation corresponds to the third term in Eq. (<ref>) [We note here that our definition of the operators T_n^(k) differs from that in Ref. <cit.>]. We also note that the combination 𝐇_ninv=1/2T^(1)𝐇_inv^(1)+[𝐇_inv^(1), X^(1)] corresponds to the non-invariant C_F^2 part of the two-loop kernel and has the form, 𝐇_ninv f(z_1,z_2) = 8 C_F^2( ∫_0^1 dαα̅/αlnα̅(3/2-lnα̅+ 1+α̅/α̅lnα) (f(z_12^α,z_2)+ f(z_1,z_21^α)) +∫_0^1dα∫_0^α̅ dβ(1/α̅lnα-1/αlnα̅+(α↔β)) )f(z_12^α,z_21^β) . Since the two-loop anomaly Δ_+^(2) is also known one can easily find X_IAB^(2), which is convenient to represent as a sum of two terms X_IAB^(2)=X_IA^(2)+X_IB^(2). The first term X_IA^(2) contains all contributions where at least one argument of the function remains intact. Moreover, this term is exactly the same as in the vector case, X_IAf(z_1, z_2) = ∫_0^1 du u̅/u∫_0^1dα/α̅(ϰ(α) - ϰ(1))(2f(z_1,z_2) - f(z_12^α u, z_2) - f(z_1, z_21^α u)) + + ∫_0^1 dα ξ_IA(α) (2f(z_1, z_2) - f(z_12^α,z_2) - f(z_1, z_21^α)), where ϰ(α) can be found in (<ref>) and ξ_IA(α) = 2C_F^2α̅/α(-_3(α̅) + lnα̅_2(α̅) + 1/3ln^3α̅ + _2(α) + 1/a̅lnαlnα̅ - 1/4ln^2α̅. .-3α/α̅lnα - 3lnα̅) + C_F/N_c(lnα + α̅/αlnα̅). The result for the second term X^(2)_IB reads X_IB^(2) = ∫_0^1 dα∫_0^α̅dβ(C_F^2 ξ_P(α,β) +C_F/N_c(ξ_NP(α,β) + ξ_NP(α,β)ℙ_12))f(z_12^α, z_21^β), where ξ_P(α, β) =4(-_3(α) + lnα_2(α) + 1/α̅(_2(α) - ζ_2 + 1/4ln^2α - lnα) +(α↔β)) -(α,β↔α̅,β̅), ξ_NP(α, β) = -2/α(_2(β/α̅) - _2(β) - _2(α) + _2(α̅) - ζ_2) - lnα̅ + (α↔β), ξ_NP(α,β) = 2/α̅(_2(α/β̅) -_2(α) - lnα̅lnβ̅) - lnα̅ + (α↔β). Note that the integral kernel ξ_P(α, β) corresponds to z_12Δ_+^(2) and the second term in Eq. (<ref>) while ξ_NP(α, β) and ξ_NP correspond only to z_12Δ_+^(2). § PARITY INVARIANT HARMONIC SUMS AND INTEGRATION KERNELS In this appendix we give explicit expression for the harmonic sums which appears in the three-loop invariant kernel in Eq. (<ref>). The sums can be divided in two groups with the respect of their signature, Π_i^k (m_i) = ± 1, Ω_3 = S_3 - ζ_3, Ω_5 =S_5 - ζ_5, Ω_3,1 = S_3,1 - 1/2S_4 - 310ζ^2_2, Ω_1,3 = S_3,1 - 1/2S_4 + 310ζ_2^2 - ζ_3S_1, Ω_-2,-2 =S_-2,-2 - 12S_4 + ζ_2/2S_-2 - ζ_2^28, Ω_1,3,1 = S_1,3,1 - 1/2S_4,1 - 1/2S_1,4 + 1/4S_5 - 3/10ζ_2^2S_1 + 34ζ_5, Ω_1,1,3 = S_1,3,1 - 1/2S_2,3 - 1/2S_1,4 + 1/4S_5 - ζ_5/2 + 3/10ζ_2^2S_1 + ζ_3/2S_2 - ζ_3S_1,1, Ω_-2,-2, 1 =S_-2,-2,1 - 1/2S_-2,-3 - 1/2S_4,1 + 1/4S_5 + 1/4ζ_3S_-2 + 1/16ζ_5, Ω_-2, 1, -2 = S_-2,1,-2 - 1/2S_-2,-3 - 1/2S_-3,-2 + 1/4S_5 - ζ_2/4S_-3 + 1/2ζ_2 S_-2,1 - 1/4ζ_3S_-2 + 1/8ζ_2ζ_3 -3/8ζ_5, Ω_1,-2, -2 = S_1,-2,-2 - 1/2S_-3,-2 - 1/2S_1,4 + 1/4S_5 -ζ_2/4S_-3 + ζ_2/2 S_1,-2 + 1/8ζ_2^2 S_1 - 1/8ζ_2ζ_3 + 1/16ζ_5, and Ω_-2 = (-1)^N(S_-2 + ζ_22) Ω_-4 =(-1)^N(S_-4 + 7ζ^2_220) Ω_1,-2 =(-1)^N(S_1,-2 -1/2S_-3 - ζ_34 + ζ_22S_1) Ω_-2,1 = (-1)^N(S_-2,1 - 1/2S_-3 + ζ_34) Ω_1,-4 = (-1)^N(S_1,-4 - 12S_-5+ 720ζ_2^2S_1 - 118ζ_5 + 12ζ_2ζ_3) Ω_-4,1 = (-1)^N(S_-4,1 - 12S_-5 - 12ζ_2ζ_3 + 118ζ_5) Ω_3,-2 = (-1)^N(S_3,-2 - 12S_-5 + 12ζ_2S_3 + 98ζ_5 - 34ζ_2ζ_3) Ω_1,-2,1 = (-1)^N(S_1,-2,1 -1/2S_-3,1 -1/2S_1,-3 + 1/4S_-4 + ζ_34S_1 - ζ_2^280) Ω_1,1,-2,1 = (-1)^N(S_1,1,-2,1 - 1/2S_1,-3,1 - 1/2S_1,1, -3 - 1/2S_2,-2,1 + 1/4S_-4,1 + 1/4S_-4,1+ 1/4S_2,-3 + 1/4S_1,-4 - 1/8S_-5 + ζ_34S_1,1 - ζ_2^280S_1 - ζ_38S_2 + 18ζ_5 - 116ζ_2ζ_3). 
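For orientation, the simplest of these combinations can also be evaluated numerically straight from the explicit forms of Ω_{-2}, Ω_3 and Ω_{-2,1} quoted in the subsection on the invariant kernel above. The sketch below is ours and is meant purely as an illustration of how these building blocks behave at finite N.

```python
# Illustration (ours): evaluate the simplest parity-invariant combinations at
# finite N directly from the definitions quoted in the text.
from math import pi
from fractions import Fraction as Fr

zeta2, zeta3 = pi ** 2 / 6, 1.2020569031595943

def S(idx, N):
    """Nested harmonic sums, standard convention (same routine as in the earlier sketch)."""
    if not idx:
        return Fr(1)
    a, rest = idx[0], idx[1:]
    return sum(Fr((1 if a > 0 else -1) ** i, i ** abs(a)) * S(rest, i)
               for i in range(1, N + 1))

def Omega_m2(N):    # Omega_{-2} = (-1)^N (S_{-2} + zeta_2/2)
    return (-1) ** N * (float(S((-2,), N)) + zeta2 / 2)

def Omega_3(N):     # Omega_3 = S_3 - zeta_3
    return float(S((3,), N)) - zeta3

def Omega_m2_1(N):  # Omega_{-2,1} = (-1)^N (S_{-2,1} - S_{-3}/2 + zeta_3/3)
    return (-1) ** N * (float(S((-2, 1), N)) - float(S((-3,), N)) / 2 + zeta3 / 3)

for N in (2, 4, 8, 16):
    print(N, Omega_3(N), Omega_m2(N), Omega_m2_1(N))
```

Since S_3(N) → ζ_3 and S_{-2}(N) → -ζ_2/2 for N → ∞, the printed values of Ω_3 and Ω_{-2} decay to zero, reflecting the subtraction of the asymptotic constants in the definitions above.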
Each sum Ω_m⃗ is associated with the integral kernel h_m⃗ as follows ∫_0^1dα∫_0^α̅ dβ h_m⃗(τ)(1 - α - β)^N - 1 = Ω_m⃗(N). Below we list the integral kernels corresponding to the sums (<ref>) and (<ref>) h_3 = -12τ̅τH_1 h_-2 = 12τ̅ h_5 = -12τ̅τ(H_111 + H_12) h_-4 = 12τ̅(H_11 + H_2) h_13 = 14τ̅τ(H_2 + H_11) h_1,-2 = -τ̅4H_1 h_31 = 14τ̅τ(H_11 + H_10) h_-2,1 = -τ̅4(H_1 + H_0) h_113 = -18τ̅τ(H_21 + H_111 + H_12 + H_3) h_3,-2 = -14τ̅(H_21 + H_111) h_131 = -18τ̅τ(H_20 + H_110 + H_21 + H_111) h_-4,1 = -14τ̅(H_21 + H_20 + H_111 + H_110) h_-2,-2 = 14τ̅τH_1,1 h_1, -4 = 14τ̅(H_111 - H_101) h_-2,-2,1 = -18τ̅τ(H_111 + H_110) h_1, -2, 1 = 18τ̅(H_11 + H_10) h_-2,1,-2 = 18τ̅τH_111 h_1, 1, -2, 1 = -116τ̅(H_111 + H_110), h_1,-2,-2 = -18τ̅τ(H_111 + H_21), where H_m⃗ = H_m⃗(τ) are HPLs. 10Collins:1989gx J.C. Collins, D.E. Soper and G.F. Sterman, Factorization of Hard Processes in QCD, https://dx.doi.org/10.1142/9789814503266_0001Adv. Ser. Direct. High Energy Phys. 5 (1989) 1 [https://arxiv.org/abs/hep-ph/0409313 hep-ph/0409313]. Moch:2004pa S. Moch, J. Vermaseren and A. Vogt, The Three loop splitting functions in QCD: The Nonsinglet case, https://dx.doi.org/10.1016/j.nuclphysb.2004.03.030Nucl. Phys. B 688 (2004) 101 [https://arxiv.org/abs/hep-ph/0403192 hep-ph/0403192]. Vogt:2004mw A. Vogt, S. Moch and J.A.M. Vermaseren, The Three-loop splitting functions in QCD: The Singlet case, https://dx.doi.org/10.1016/j.nuclphysb.2004.04.024Nucl. Phys. B 691 (2004) 129 [https://arxiv.org/abs/hep-ph/0404111 hep-ph/0404111]. Cooper-Sarkar:2024crx A. Cooper-Sarkar, T. Cridge, F. Giuli, L.A. Harland-Lang, F. Hekhorn, J. Huston et al., A Benchmarking of QCD Evolution at Approximate N^3LO, https://arxiv.org/abs/2406.16188 arXiv:2406.16188. Mueller:1991gd D. Müller, Constraints for anomalous dimensions of local light cone operators in ϕ^3 in six-dimensions theory, https://dx.doi.org/10.1007/BF01555504Z. Phys. C49 (1991) 293. Mueller:1993hg D. Müller, Conformal constraints and the evolution of the nonsinglet meson distribution amplitude, https://dx.doi.org/10.1103/PhysRevD.49.2525Phys. Rev. D49 (1994) 2525. Belitsky:1997rh A.V. Belitsky and D. Müller, Predictions from conformal algebra for the deeply virtual Compton scattering, https://dx.doi.org/10.1016/S0370-2693(97)01390-7Phys. Lett. B417 (1998) 129 [https://arxiv.org/abs/hep-ph/9709379 hep-ph/9709379]. Belitsky:1998gc A.V. Belitsky and D. Müller, Broken conformal invariance and spectrum of anomalous dimensions in QCD, https://dx.doi.org/10.1016/S0550-3213(98)00677-4Nucl. Phys. B537 (1999) 397 [https://arxiv.org/abs/hep-ph/9804379 hep-ph/9804379]. Braun:2013tva V.M. Braun and A.N. Manashov, Evolution equations beyond one loop from conformal symmetry, https://dx.doi.org/10.1140/epjc/s10052-013-2544-1Eur. Phys. J. C73 (2013) 2544 [https://arxiv.org/abs/1306.5644 arXiv:1306.5644]. Braun:2014vba V.M. Braun and A.N. Manashov, Two-loop evolution equations for light-ray operators, https://dx.doi.org/10.1016/j.physletb.2014.05.037Phys. Lett. B 734 (2014) 137 [https://arxiv.org/abs/1404.0863 arXiv:1404.0863]. Braun:2016qlg V.M. Braun, A.N. Manashov, S. Moch and M. Strohmaier, Two-loop conformal generators for leading-twist operators in QCD, https://dx.doi.org/10.1007/JHEP03(2016)142JHEP 03 (2016) 142 [https://arxiv.org/abs/1601.05937 arXiv:1601.05937]. Braun:2017cih V.M. Braun, A.N. Manashov, S. Moch and M. 
Strohmaier, Three-loop evolution equation for flavor-nonsinglet operators in off-forward kinematics, https://dx.doi.org/10.1007/JHEP06(2017)037JHEP 06 (2017) 037 [https://arxiv.org/abs/1703.09532 arXiv:1703.09532]. Strohmaier:2018tjo M. Strohmaier, Conformal symmetry breaking and evolution equations in Quantum Chromodynamics. PhD thesis, Regensburg U., 2018. 10.5283/epub.37432. Braun:2021tzi V.M. Braun, A.N. Manashov, S. Moch and M. Strohmaier, Three-loop off-forward evolution kernel for axial-vector operators in Larin’s scheme, https://dx.doi.org/10.1103/PhysRevD.103.094018Phys. Rev. D 103 (2021) 094018 [https://arxiv.org/abs/2101.01471 arXiv:2101.01471]. Diehl:2003ny M. Diehl, Generalized parton distributions, https://dx.doi.org/10.1016/j.physrep.2003.08.002, 10.3204/DESY-THESIS-2003-018Phys. Rept. 388 (2003) 41 [https://arxiv.org/abs/hep-ph/0307382 hep-ph/0307382]. Belitsky:2005qn A.V. Belitsky and A.V. Radyushkin, Unraveling hadron structure with generalized parton distributions, https://dx.doi.org/10.1016/j.physrep.2005.06.002Phys. Rept. 418 (2005) 1 [https://arxiv.org/abs/hep-ph/0504030 hep-ph/0504030]. Beiyad:2010qg M.E. Beiyad, B. Pire, M. Segond, L. Szymanowski and S. Wallon, Chiral-odd transversity GPDs from a leading twist hard amplitude, https://dx.doi.org/10.22323/1.106.0252PoS DIS2010 (2010) 252 [https://arxiv.org/abs/1006.0740 arXiv:1006.0740]. Hyde:2011ke C.E. Hyde, M. Guidal and A.V. Radyushkin, Deeply Virtual Exclusive Processes and Generalized Parton Distributions, https://dx.doi.org/10.1088/1742-6596/299/1/012006J. Phys. Conf. Ser. 299 (2011) 012006 [https://arxiv.org/abs/1101.2482 arXiv:1101.2482]. Cosyn:2021llh W. Cosyn, B. Pire and L. Szymanowski, Accessing quark GPDs in diffractive events at an Electron-Ion Collider, https://dx.doi.org/10.21468/SciPostPhysProc.8.159SciPost Phys. Proc. 8 (2022) 159 [https://arxiv.org/abs/2106.01222 arXiv:2106.01222]. Cosyn:2019eeg W. Cosyn, B. Pire and L. Szymanowski, Probing quark transversity GPDs in diffractive photo- and electroproduction on the deuteron, https://dx.doi.org/10.22323/1.352.0254PoS DIS2019 (2019) 254 [https://arxiv.org/abs/1907.08662 arXiv:1907.08662]. Boussarie:2016aoq R. Boussarie, B. Pire, L. Szymanowski and S. Wallon, Revealing transversity GPDs through the photoproduction of a photon and a ρ meson, https://dx.doi.org/10.1051/epjconf/201611201006EPJ Web Conf. 112 (2016) 01006 [https://arxiv.org/abs/1602.01774 arXiv:1602.01774]. VanThurenhout:2022nmx S. Van Thurenhout, Off-forward anomalous dimensions of non-singlet transversity operators, https://dx.doi.org/10.1016/j.nuclphysb.2022.115835Nucl. Phys. B 980 (2022) 115835 [https://arxiv.org/abs/2204.02140 arXiv:2204.02140]. Artru:1989zv X. Artru and M. Mekhfi, Transversely Polarized Parton Densities, their Evolution and their Measurement, https://dx.doi.org/10.1007/BF01556280Z. Phys. C 45 (1990) 669. Koike:1994st Y. Koike and K. Tanaka, Q^2 evolution of nucleon's chiral odd twist - three structure function: h_L(x, Q^2), https://dx.doi.org/10.1103/PhysRevD.51.6125Phys. Rev. D 51 (1995) 6125 [https://arxiv.org/abs/hep-ph/9412310 hep-ph/9412310]. Kumano:1997qp S. Kumano and M. Miyama, Two loop anomalous dimensions for the structure function h1, https://dx.doi.org/10.1103/PhysRevD.56.R2504Phys. Rev. D 56 (1997) R2504 [https://arxiv.org/abs/hep-ph/9706420 hep-ph/9706420]. Hayashigaki:1997dn A. Hayashigaki, Y. Kanazawa and Y. Koike, Next-to-leading order Q^2 evolution of the transversity distribution h_1(x, Q^2), https://dx.doi.org/10.1103/PhysRevD.56.7350Phys. Rev. 
D 56 (1997) 7350 [https://arxiv.org/abs/hep-ph/9707208 hep-ph/9707208]. Vogelsang:1997ak W. Vogelsang, Next-to-leading order evolution of transversity distributions and Soffer's inequality, https://dx.doi.org/10.1103/PhysRevD.57.1886Phys. Rev. D 57 (1998) 1886 [https://arxiv.org/abs/hep-ph/9706511 hep-ph/9706511]. Velizhanin:2012nm V.N. Velizhanin, Three loop anomalous dimension of the non-singlet transversity operator in QCD, https://dx.doi.org/10.1016/j.nuclphysb.2012.06.010Nucl. Phys. B 864 (2012) 113 [https://arxiv.org/abs/1203.1022 arXiv:1203.1022]. Bagaev:2012bw A.A. Bagaev, A.V. Bednyakov, A.F. Pikelner and V.N. Velizhanin, The 16th moment of the three loop anomalous dimension of the non-singlet transversity operator in QCD, https://dx.doi.org/10.1016/j.physletb.2012.06.059Phys. Lett. B 714 (2012) 76 [https://arxiv.org/abs/1206.2890 arXiv:1206.2890]. Balitsky:1987bk I.I. Balitsky and V.M. Braun, Evolution Equations for QCD String Operators, https://dx.doi.org/10.1016/0550-3213(89)90168-5Nucl. Phys. B 311 (1989) 541. Baikov:2016tgj P.A. Baikov, K.G. Chetyrkin and J.H. Kühn, Five-Loop Running of the QCD coupling constant, https://dx.doi.org/10.1103/PhysRevLett.118.082002Phys. Rev. Lett. 118 (2017) 082002 [https://arxiv.org/abs/1606.08659 arXiv:1606.08659]. Herzog:2017ohr F. Herzog, B. Ruijl, T. Ueda, J.A.M. Vermaseren and A. Vogt, The five-loop beta function of Yang-Mills theory with fermions, https://dx.doi.org/10.1007/JHEP02(2017)090JHEP 02 (2017) 090 [https://arxiv.org/abs/1701.01404 arXiv:1701.01404]. Chetyrkin:2017bjc K.G. Chetyrkin, G. Falcioni, F. Herzog and J.A.M. Vermaseren, Five-loop renormalisation of QCD in covariant gauges, https://dx.doi.org/10.1007/JHEP10(2017)179JHEP 10 (2017) 179 [https://arxiv.org/abs/1709.08541 arXiv:1709.08541]. Luthe:2017ttg T. Luthe, A. Maier, P. Marquard and Y. Schröder, The five-loop Beta function for a general gauge group and anomalous dimensions beyond Feynman gauge, https://dx.doi.org/10.1007/JHEP10(2017)166JHEP 10 (2017) 166 [https://arxiv.org/abs/1709.07718 arXiv:1709.07718]. Polyakov:1970xd A.M. Polyakov, Conformal symmetry of critical fluctuations, JETP Lett. 12 (1970) 381. Polchinski:1987dy J. Polchinski, Scale and Conformal Invariance in Quantum Field Theory, https://dx.doi.org/10.1016/0550-3213(88)90179-4Nucl. Phys. B 303 (1988) 226. Braun:2018mxm V.M. Braun, A.N. Manashov, S. Moch and M. Strohmaier, Conformal symmetry of QCD in d-dimensions, https://dx.doi.org/10.1016/j.physletb.2019.04.027Phys. Lett. B793 (2019) 78 [https://arxiv.org/abs/1810.04993 arXiv:1810.04993]. Vermaseren:1998uu J. Vermaseren, Harmonic sums, Mellin transforms and integrals, https://dx.doi.org/10.1142/S0217751X99001032Int. J. Mod. Phys. A 14 (1999) 2037 [https://arxiv.org/abs/hep-ph/9806280 hep-ph/9806280]. Polyakov:1980ca A.M. Polyakov, Gauge Fields as Rings of Glue, https://dx.doi.org/10.1016/0550-3213(80)90507-6Nucl. Phys. B 164 (1980) 171. Korchemsky:1987wg G.P. Korchemsky and A.V. Radyushkin, Renormalization of the Wilson Loops Beyond the Leading Order, https://dx.doi.org/10.1016/0550-3213(87)90277-XNucl. Phys. B 283 (1987) 342. Henn:2019swt J.M. Henn, G.P. Korchemsky and B. Mistlberger, The full four-loop cusp anomalous dimension in 𝒩=4 super Yang-Mills and QCD, https://dx.doi.org/10.1007/JHEP04(2020)018JHEP 04 (2020) 018 [https://arxiv.org/abs/1911.10174 arXiv:1911.10174]. vonManteuffel:2020vjv A. von Manteuffel, E. Panzer and R.M. 
Schabinger, Cusp and collinear anomalous dimensions in four-loop QCD from form factors, https://dx.doi.org/10.1103/PhysRevLett.124.162001Phys. Rev. Lett. 124 (2020) 162001 [https://arxiv.org/abs/2002.04617 arXiv:2002.04617]. Ji:2023eni Y. Ji, A. Manashov and S.O. Moch, Evolution kernels of twist-two operators, https://dx.doi.org/10.1103/PhysRevD.108.054009Phys. Rev. D 108 (2023) 054009 [https://arxiv.org/abs/2307.01763 arXiv:2307.01763]. Dokshitzer:2005bf Y.L. Dokshitzer, G. Marchesini and G.P. Salam, Revisiting parton evolution and the large-x limit, https://dx.doi.org/10.1016/j.physletb.2006.02.023Phys. Lett. B634 (2006) 504 [https://arxiv.org/abs/hep-ph/0511302 hep-ph/0511302]. Basso:2006nk B. Basso and G.P. Korchemsky, Anomalous dimensions of high-spin operators beyond the leading order, https://dx.doi.org/10.1016/j.nuclphysb.2007.03.044Nucl. Phys. B775 (2007) 1 [https://arxiv.org/abs/hep-th/0612247 hep-th/0612247]. Gribov:1972ri V.N. Gribov and L.N. Lipatov, Deep inelastic e p scattering in perturbation theory, Sov. J. Nucl. Phys. 15 (1972) 438. Gribov:1972rt V.N. Gribov and L.N. Lipatov, e^+ e^- pair annihilation and deep inelastic e p scattering in perturbation theory, Sov. J. Nucl. Phys. 15 (1972) 675. Dokshitzer:2006nm Y.L. Dokshitzer and G. Marchesini, N=4 SUSY Yang-Mills: three loops made simple(r), https://dx.doi.org/10.1016/j.physletb.2007.01.016Phys. Lett. B 646 (2007) 189 [https://arxiv.org/abs/hep-th/0612248 hep-th/0612248]. Beccaria:2009vt M. Beccaria and V. Forini, Four loop reciprocity of twist two operators in N=4 SYM, https://dx.doi.org/10.1088/1126-6708/2009/03/111JHEP 03 (2009) 111 [https://arxiv.org/abs/0901.1256 arXiv:0901.1256]. Alday:2015eya L.F. Alday, A. Bissi and T. Lukowski, Large spin systematics in CFT, https://dx.doi.org/10.1007/JHEP11(2015)101JHEP 11 (2015) 101 [https://arxiv.org/abs/1502.07707 arXiv:1502.07707]. Remiddi:1999ew E. Remiddi and J.A.M. Vermaseren, Harmonic polylogarithms, https://dx.doi.org/10.1142/S0217751X00000367Int. J. Mod. Phys. A15 (2000) 725 [https://arxiv.org/abs/hep-ph/9905237 hep-ph/9905237]. Panzer:2014caa E. Panzer, Algorithms for the symbolic integration of hyperlogarithms with applications to Feynman integrals, https://dx.doi.org/10.1016/j.cpc.2014.10.019Comput. Phys. Commun. 188 (2015) 148 [https://arxiv.org/abs/1403.3385 arXiv:1403.3385]. Makeenko:1980bh Y.M. Makeenko, Conformal operators in quantum chromodynamics, Sov. J. Nucl. Phys. 33 (1981) 440. Moch:2021cdq S. Moch and S. Van Thurenhout, Renormalization of non-singlet quark operator matrix elements for off-forward hard scattering, https://dx.doi.org/10.1016/j.nuclphysb.2021.115536Nucl. Phys. B 971 (2021) 115536 [https://arxiv.org/abs/2107.02470 arXiv:2107.02470]. VanThurenhout:2023gmo S. Van Thurenhout, Basis transformation properties of anomalous dimensions for hard exclusive processes, https://dx.doi.org/10.1016/j.nuclphysb.2024.116464Nucl. Phys. B 1000 (2024) 116464 [https://arxiv.org/abs/2309.16236 arXiv:2309.16236]. VanThurenhout:2022hgd S. Van Thurenhout and S.O. Moch, Off-forward anomalous dimensions in the leading-n_f limit, https://dx.doi.org/10.22323/1.416.0076PoS LL2022 (2022) 076 [https://arxiv.org/abs/2206.04517 arXiv:2206.04517].
http://arxiv.org/abs/2407.12322v1
20240717054727
Frequency Guidance Matters: Skeletal Action Recognition by Frequency-Aware Mixed Transformer
[ "Wenhan Wu", "Ce Zheng", "Zihao Yang", "Chen Chen", "Srijan Das", "Aidong Lu" ]
cs.CV
[ "cs.CV" ]
University of North Carolina at Charlotte USA wwu25@uncc.edu Carnegie Mellon University USA cezheng@andrew.cmu.edu Microsoft USA mattyang@microsoft.com University of North Carolina at Charlotte USA sdas24@uncc.edu University of Central Florida USA chen.chen@crcv.ucf.edu University of North Carolina at Charlotte USA aidong.lu@uncc.edu § ABSTRACT Recently, transformers have demonstrated great potential for modeling long-term dependencies from skeleton sequences and have thereby gained ever-increasing attention in skeleton action recognition. However, the existing transformer-based approaches rely heavily on the naive attention mechanism for capturing spatiotemporal features, which falls short in learning discriminative representations for actions that exhibit similar motion patterns. To address this challenge, we introduce the Frequency-aware Mixed Transformer (FreqMixFormer), specifically designed for recognizing similar skeletal actions with subtle discriminative motions. First, we introduce a frequency-aware attention module to unweave skeleton frequency representations by embedding joint features into frequency attention maps, aiming to distinguish the discriminative movements based on their frequency coefficients. Subsequently, we develop a mixed transformer architecture to incorporate spatial features with frequency features to model the comprehensive frequency-spatial patterns. Additionally, a temporal transformer is proposed to extract the global correlations across frames. Extensive experiments show that FreqMixFormer outperforms the state of the art on three popular skeleton action recognition datasets, including the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets. Code will be publicly available. Frequency Guidance Matters: Skeletal Action Recognition by Frequency-Aware Mixed Transformer Aidong Lu July 22, 2024 ============================================================================================ § INTRODUCTION Human action recognition, a vital research topic in computer vision, is widely applied in various applications, including visual surveillance <cit.>, human-computer interaction <cit.>, and autonomous driving systems <cit.>. Particularly, skeleton sequences represent the motion trajectories of human body joints and characterize distinctive human movements with 3D structural pose information, which is robust to surface textures and backgrounds.
Consequently, skeletal action recognition stands out as an effective approach for recognizing human actions compared to RGB-based <cit.> or depth-based <cit.> methods. Early skeleton-based works typically represent the human skeleton as a sequence of 3D joint coordinates or a pseudo-image, and then adapt Convolutional Neural Networks (CNNs) <cit.> or Recurrent Neural Networks (RNNs) <cit.> to model spatial features among joints. However, unlike static images, skeleton data embodies dynamic and complex spatiotemporal topological correlations that CNNs and RNNs fail to capture. Therefore, to effectively model the skeletal information encapsulated within the topological graph structure that reflects human anatomy, Graph Convolutional Networks (GCNs) <cit.> are utilized. Nevertheless, as graph information progresses through deeper layers, the model may lose vital joint correlations during propagation, diminishing its ability to capture long-range interactions between distant frames. Recently, the Transformer <cit.> has achieved promising results in human action recognition across various data modalities such as RGB <cit.>, depth <cit.>, point cloud <cit.>, and skeleton <cit.>. The Transformer's ability to model the intrinsic representation of human joints and sequential frame correlations makes it a suitable backbone for skeleton-based action recognition. Despite the notable achievements of several transformer-based studies <cit.>, they have yet to surpass the accuracy benchmarks set by GCNs. We hypothesize that the primary issue contributing to this gap is that GCNs, through their localized graph convolutions, effectively capture the human spatial configuration essential for action recognition. In contrast, traditional transformers, utilizing global self-attention operations, lack the inductive bias to inherently grasp the skeleton's topology. Although these global operations model overall motion patterns within a skeleton sequence, the self-attention mechanism in transformers may dilute the subtle local interactions among joints. Additionally, as attention scores are normalized across the entire sequence, subtle yet crucial discriminative cues in action sequences might be ignored if they do not substantially impact the overall attention landscape. To bridge this gap, we focus on improving transformers' capability to learn discriminative representations of subtle motion patterns. In this work, we aim to aggregate frequency-spatial features by introducing the Discrete Cosine Transform (DCT) <cit.> into a mixed attention transformer framework <cit.>, encoding joint correlations in the frequency domain and exploring frequency-based joint representations. The motivation behind these representations is straightforward and intuitive: the frequency components can represent the entire joint sequence <cit.> and are sensitive to subtle movements. As a result, we introduce a novel Frequency-aware Mixed Transformer (FreqMixFormer) for skeleton action recognition to capture the discriminative correlations among joints. The key steps of our approach are outlined as follows: Firstly, we formulate a Frequency-aware Attention module for transferring spatial joint information to the frequency domain, where the skeletal movement dependencies (similarity scores between Queries Q and Keys K) are embedded in the spectral domain with a distinct representation based on their energy.
As illustrated in Fig. <ref>, discriminative skeleton features with similar patterns (e.g., confusing actions like reading and writing) can be effectively learned by leveraging their physical correlations. In the context of these skeleton sequences, minor movements that contain subtle variations and exhibit rapid spatial changes are effectively compressed into high-frequency components with lower energy (highlighted with the red box). Conversely, actions that constitute a larger portion of the sequence and change slowly over time in the temporal domain are compressed into low-frequency components with higher energy (shown in the blue box). Subsequently, a frequency operator is applied to accentuate the high-frequency coefficients while diminishing the low-frequency coefficients, thereby enabling selective amplification and attenuation for fine-tuning within the frequency domain. Secondly, we propose a transformer-based model that utilizes a mixed attention mechanism to extract spatial and frequency features separately with self-attention (SA) and cross-attention (CA) operations, where SA and CA extract joint dependencies and contextual joint correlations respectively. An integration module subsequently fuses the features from both the frequency and spatial domains, resulting in frequency-spatial features. These features are then fed into a temporal transformer, which globally learns the inter-frame joint correlations (e.g., from the first to the last frame), effectively capturing the discriminating frequency-spatial features temporally. Our contributions are summarized as follows: * We propose a Frequency-aware Attention Block (FAB) to investigate frequency features within skeletal sequences. A frequency operator is specifically designed to improve the learning of frequency coefficients, thereby enhancing the ability to capture discriminative correlations among joints. * Consequently, we introduce the Frequency-aware Mixed Transformer (FreqMixFormer) to extract frequency-spatial joint correlations. The model incorporates a temporal transformer designed to enhance its ability to capture temporal features across frames. * Our proposed FreqMixFormer outperforms state-of-the-art methods on three benchmarks, including NTU RGB+D <cit.>, NTU RGB+D 120 <cit.>, and Northwestern-UCLA <cit.>. § RELATED WORK Frequency Representation Learning for Skeleton-based Tasks. Traditional pose-based methods aim to extract motion patterns directly from the poses for trajectory prediction <cit.>, pose estimation <cit.>, and action recognition <cit.>. The representations derived from pose space naturally reflect physical characteristics (spatial dependency of structure information) and motion patterns (temporal dependency of motion information), making it challenging to encode poses in a spatiotemporal way. Motivated by a strong ability to encode temporal information in the frequency domain smoothly and compactly <cit.>, several recent works <cit.> utilize the discrete cosine transform (DCT) to convert temporal motion to the frequency domain for frequency-specific representation learning. In skeleton action recognition, only a few works <cit.> have considered frequency representations so far. <cit.> proposed a multi-feature-branch framework to extract subtle frequency features with the fast Fourier transform (FFT) and spatial-temporal joint dependencies, aiming to build a multi-task framework in skeleton action recognition.
<cit.> adopts discrete wavelet transform with a GCN-based decoupling framework to decouple salient and subtle motion features, aiming for fine-grained skeleton action recognition. While our interest aligns with frequency-based modeling, we opt for a DCT-based approach since its frequency coefficients are well-distributed in the frequency domain, benefiting the discriminative motion representation learning. Transformer-based Skeleton Action Recognition. Many recent works adopt transformers for skeleton action recognition to explore joint correlations in a spatiotemporal way. ST-TR <cit.> is the first to introduce the transformer to process skeleton data with spatial transformer and temporal transformer, proving its effectiveness in action recognition. Many follow-up works <cit.> keep employing this spatial-temporal structure for skeleton recognition with different configurations. STTFormer <cit.> proposed a tuple self-attention mechanism for capturing the joint relationships among frames. FG-STFormer <cit.> was developed to understand the connections between local joints and contextual global information across spatial and temporal dimensions. SkeMixFormer <cit.> introduced mixed attention method <cit.> and channel grouping techniques into spatiotemporal structure, enabling the model to learn the dynamic multivariate topological relationships. Besides these methods that focus on model configurations, <cit.> designed a partitioning strategy with the self-attention mechanism to learn the semantic representations of the interactive body parts. <cit.> presented an efficient transformer with a temporal partitioning aggregation strategy and topology-aware spatial correlation modeling module. Most of the transformer-based methods mentioned above mainly focus on configuration improvement and spatiotemporal correlation learning without exploiting the skeletal motion patterns in the frequency domain. In this work, we propose a frequency-based transformer with a frequency-spatial mixed attention mechanism, leveraging joint representation learning. § METHODOLOGY §.§ Preliminaries Transformer. Self-Attention is the core mechanism of the transformer <cit.>. Given the input X ∈ℝ ^C × D, where C is the number of patches and D is the embedding dimension, X is first mapped to three matrices: Query matrix Q, Key matrix K and Value matrix V by three linear transformation: Q = XW_Q, K = XW_K, V = XW_V where W_Q, W_K and W_V ∈ℝ ^D × D are the learnable weight matrices. The self-attention score can be described as the following mapping function: Attention(Q,K,V) = Softmax(QK^⊤/√(d))V where QK^⊤ is the similarity score, 1/√(d) is the scaling factor that prevents the softmax function from entering regions where gradients are too small. Next, the Multi-Head Self-Attention (MHSA) function is introduced to process information from different representation subspaces in different positions. The MHSA score is expressed as: MHSA (Q,K,V) = Concat (H_1, H_2, …, H_h) W_out where H_i= Attention (Q_i,K_i,V_i), i ∈{1, 2, …, h } is the single attention head, W_out is a linear projection ∈ℝ ^D × D. Baseline. The existing transformer-based skeleton action recognition methods rely heavily on plain self-attention blocks mentioned above to capture spatiotemporal correlations, ignoring the contextual information among different blocks. 
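For reference, the scaled dot-product attention and MHSA recalled above can be written in a few lines. The sketch below is a generic PyTorch illustration with our own tensor shapes and module names; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlainMHSA(nn.Module):
    """Vanilla multi-head self-attention; a generic sketch, not FreqMixFormer code."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.d = num_heads, dim // num_heads
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)
        self.w_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                      # x: (batch, tokens, dim)
        B, T, _ = x.shape
        # project and split into heads: (B, h, T, d)
        q = self.w_q(x).view(B, T, self.h, self.d).transpose(1, 2)
        k = self.w_k(x).view(B, T, self.h, self.d).transpose(1, 2)
        v = self.w_v(x).view(B, T, self.h, self.d).transpose(1, 2)
        # scaled dot-product attention, softmax normalized over all tokens
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, self.h * self.d)
        return self.w_out(out)                 # concatenate heads + output projection

# e.g. 25 joints as tokens with a 64-dimensional embedding (illustrative sizes)
tokens = torch.randn(2, 25, 64)
print(PlainMHSA(64, num_heads=8)(tokens).shape)   # torch.Size([2, 25, 64])
```

Every such block computes its own Q and K from the same input and normalizes the attention over all tokens at once, which is precisely the behavior discussed above.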
Thus, we simply adopt the off-the-shelf SkeMixFormer <cit.> as our baseline for capturing spatial skeletal features, where the contextual information can be extracted in a mixed way: 1) Cross-attention, an asymmetric attention scheme that mixes the Query matrix Q and Key matrix K, leveraging asymmetric information integration. 2) Channel grouping, a strategy that divides the input into unit groups to capture multivariate interaction characteristics, preserving the inherent features of the skeleton data by avoiding the full self-attention's dependency on global complete channels. However, SkeMixFormer falls short of modeling discriminative motion patterns, thereby not fully leveraging its representational potential. In light of the baseline's limitations, we introduce our proposed FreqMixFormer to verify the effectiveness of frequency-spatial features over purely spatial ones. The detailed components of our model are elaborated in the following sections. §.§ Overview of FreqMixFormer The overall architecture of FreqMixFormer is illustrated in Fig. <ref>. The input X ∈ℝ^J × C × F is first embedded by joint and positional embedding layers to represent a skeleton sequence with a consistent frame count of F, where C denotes the dimensionality of each joint and J represents the number of joints in each frame. Then a partition block is proposed for capturing multivariate interaction association characteristics, where X is divided into n unit groups (n = 3 in Fig. <ref> for example) by channel splitting to facilitate interpretable learning of joint adjacency. The split units are expressed as x_i ∈ℝ^J × (C/n) × F with X ←Concat[x_1, x_2, …, x_n], where i = 1,2, …, n. Next, we feed the unit inputs x_i to the Frequency-aware Mixed Transformer based on self-attention and cross-attention mechanisms among Spatial Attention Blocks and Frequency-aware Attention Blocks. Afterward, the frequency-spatial mixed features are processed with a Temporal Attention Block to learn inter-frame correlations. The final outputs are further reshaped and passed to an FC layer for classification. §.§ Discrete Cosine Transform (DCT) for Joint Sequence Encoding Let x ∈ℝ^J × C × F denote the input joint sequence; the trajectory of the j-th joint across F frames is denoted as X_j = (x_j,1, x_j,2, … , x_j,F). While existing transformer-based skeleton action recognition methods only use this X_j as an input sequence for skeletal correlation representation learning in the spatial domain, we propose to adopt a frequency representation based on the Discrete Cosine Transform (DCT). Different from the previous DCT-based trajectory representation learning methods <cit.>, which discard some of the high-frequency coefficients to provide a more compact representation, we not only keep all the DCT coefficients but also enhance the high-frequency parts and reduce the low-frequency parts. The main motivations behind this are: (i) High-frequency DCT components are more sensitive to those subtle discrepancies that are difficult to discriminate in the spatial domain (e.g., the hand movements in reading and writing, which are illustrated in Fig. <ref>). (ii) Low-frequency DCT coefficients reflect the movements with steady or static motion patterns, which are not discriminative enough for recognition (e.g., the lower body movements in reading and writing, which are also illustrated in Fig. <ref>).
(iii) The cosine transform exhibits excellent energy compaction, concentrating the majority of the energy (the low-frequency content) into the first few coefficients of the transformation, so that the remaining coefficients cleanly isolate the subtle motion features we wish to amplify. Thus, we apply the DCT to each trajectory individually. For trajectory X_j, the i-th DCT coefficient is calculated as: C_j,i = √(2/F)∑_f=1^F x_j,f1/√(1 + δ_i1)cos[π(2f - 1)(i - 1)/2F] where the Kronecker delta δ_ij = 1 if i = j and 0 otherwise. In particular, i ∈{1, 2, …, F }, and the larger i is, the higher the frequency of the corresponding coefficient. These coefficients enable us to represent skeleton motion within the frequency domain effectively. Besides, the original input sequence in the time domain can be restored using the Inverse Discrete Cosine Transform (IDCT), which is given by: x_j,f = √(2/F)∑_i=1^F C_j,i1/√(1 + δ_i1)cos[π(2f - 1)(i - 1)/2F] where f ∈{1, 2, …, F }. To use DCT coefficients in the transformer, we further introduce a Frequency-aware Mixed Transformer for extracting mixed frequency-spatial features in the next section. §.§ Frequency-aware Mixed Transformer Mixed Spatial Attention. Given a split input x_i ∈ℝ^J × (C/n) × F mentioned in Section <ref>, the basic Query matrix and Key matrix for each sequence are extracted along the spatial dimension: Q_i, K_i = ReLU (linear(AvgPool(x_i))), where i = 1, 2, …, n. In Eq. <ref>, AvgPool denotes adaptive average pooling, which smooths the joint weights and minimizes the impact of noisy or less relevant variations within the skeletal data, and an FC layer with a ReLU activation is applied to ensure Q_i and K_i are globally integrated. Then, the self-attention is expressed as: Atten^i_self = Softmax(Q_iK_i^⊤/√(d)) In order to enable richer contextual integration across different unit groups, inspired by <cit.>, a cross-attention strategy is proposed, where K_i is shared between adjacent attention blocks. The cross-attention is expressed as: Atten^i_mix = Softmax(Q_i+1K_i^⊤/√(d)) Each mixed attention map is formulated as: MS_i = Atten^i_self + Atten^i_mix + Atten^i-1_mix where the number of these mixed-attention maps is determined by the number of unit groups (e.g., n = 3 in Fig. <ref>). These mixed-attention maps are extracted by several SABs (Spatial Attention Blocks, illustrated in Fig. <ref> (a)) for spatial representation learning. Mixed Frequency-Spatial Attention. We apply the DCT to obtain the corresponding frequency coefficients from the split joint sequence x_i, and the inputs to the FABs (Frequency-aware Attention Blocks, see Fig. <ref>) can then be denoted as DCT(x_i), where DCT(·) denotes the transform expressed in Eq. <ref>. Similar to the mixed spatial attention, we obtain the Query and Key values in the frequency domain: Q_i, K_i = ReLU (linear(AvgPool(DCT(x_i)))) The corresponding frequency-based self-attention and mixed-attention maps are: Atten^i_self = Softmax(Q_iK_i^⊤/√(d)) Atten^i_mix = Softmax(Q_i+1K_i^⊤/√(d)) Thus, the mixed frequency attention maps are expressed as: MF_i = Atten^i_self + Atten^i_mix + Atten^i-1_mix Subsequently, a Frequency Operator (FO) ψ(·) is applied to the mixed frequency attention maps: ψ(MF_i). Given a frequency operator coefficient φ, where φ∈ (0, 1), the high-frequency coefficients in MF_i are enhanced by (1+φ), making minimal and subtle actions more pronounced.
On the other hand, the low-frequency coefficients are reduced by φ, appropriately diminishing the focus on salient actions while preserving the integrity of the overall action representations. The search for the best φ is discussed in Section <ref>. Afterward, an IDCT module is employed to restore the transformed skeleton sequence: MF_i = IDCT(ψ(MF_i)). All the MF_i are extracted by Frequency-aware Attention Blocks (FABs), as depicted in Fig. <ref> (b). Thus the output is: MFS_i = MF_i + MS_i, and the final output of the mixed frequency-spatial attention map can be expressed as: M ←Concat[MFS_1, MFS_2, …, MFS_n] We obtain the Value V from the initial input X in a unified computation by adding one 1 × 1 convolutional layer along the spatial dimension. Consequently, the input of the Temporal Attention Block is expressed as: x_t = MV Temporal Attention Block. Given the temporal input x_t produced by the mixed frequency-spatial attention, strategies from <cit.> are adopted to transform the input channels and acquire more multivariate information along the temporal dimension: X_t = CT(x_t) (the channel transformation CT(·) is detailed in the Appendix). The transformed input X_t is then processed with a temporal attention block (Fig. <ref> (c)) to obtain the corresponding Query and Key matrices: Q_t = σ(linear(AvgPool(X_t))), K_t = σ(linear(MaxPool(X_t))) The Value V_t in the temporal attention block is obtained from the temporal input after a 1×1 convolutional layer along the temporal dimension. Finally, the temporal attention is expressed as: Atten_tem = Softmax(Q_t K_t^⊤/√(d)), and the final output for the classification head is defined as: X_out = (Sigmoid(Atten_tem)) V_t § EXPERIMENTS §.§ Datasets NTU RGB+D (NTU-60) <cit.> is one of the most widely used large-scale datasets for action recognition, containing 56,880 skeleton action samples from 40 subjects across 155 camera viewpoints. Each 3D skeleton consists of 25 joints. The data is classified into 60 classes with two benchmarks. 1) Cross-Subject (X-Sub): half of the subjects are used for training, and the rest are used for testing. 2) Cross-View (X-View): training and test sets are split based on different camera views (views 2 and 3 for training, view 1 for testing). NTU RGB+D 120 (NTU-120) <cit.> is an extension of NTU RGB+D, containing 113,945 samples with 120 action classes performed by 106 subjects. There are two benchmarks. 1) Cross-Subject (X-Sub): 53 subjects are used for training, and the rest are used for testing. 2) Cross-Setup (X-Set): samples with even setup IDs are used for training, and samples with odd setup IDs are used for testing. Northwestern-UCLA (NW-UCLA) <cit.> is a 10-class action recognition dataset containing 1494 video clips. Three Kinect cameras capture the actions from different camera views. We adopt the commonly used evaluation protocol: the first two camera views are used for training, and the testing set comes from the other camera. §.§ Implementation Details We follow the standard data processing method from <cit.> to pre-process the skeleton data. The proposed method is implemented in PyTorch <cit.> with two NVIDIA RTX A6000 GPUs. The model is trained for 100 epochs with a batch size of 128 for all datasets mentioned above, with a warm-up during the first 5 epochs.
The weight decay is 0.0005, and the learning rate is initialized to 0.1 for the NTU RGB+D and NTU RGB+D 120 datasets (with a 0.1 reduction at the 35th, 55th, and 75th epochs) and to 0.2 for the Northwestern-UCLA dataset (with a 0.1 reduction at the 50th epoch). A commonly used multi-stream ensemble method <cit.> is implemented for 4-stream fusion and 6-stream fusion. The experimental results are shown in Table <ref> and Table <ref>. §.§ Comparison with the State-of-the-Art In this section, we conduct a comprehensive performance comparison with state-of-the-art (SOTA) methods on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets to demonstrate the competitive ability of our FreqMixFormer. The comparison is made with three ensembles of different modalities, and the details are provided in the appendix. Comparisons for the NTU datasets are shown in Table <ref>. We compare our model with recent SOTA methods grouped by their frameworks (GCN and Transformer). Our FreqMixFormer outperforms all the transformer-based methods in recognition accuracy. Notably, even our 4-stream ensemble results on both the NTU-60 (93.4% in X-Sub and 97.3% in X-View) and NTU-120 datasets (90.2% in X-Sub and 91.5% in X-Set) exceed all SOTA approaches. Despite the predominant role of GCN-based methods in skeleton-based action recognition, as mentioned in Section <ref>, FreqMixFormer still surpasses recent methods such as InfoGCN <cit.> and HD-GCN <cit.>. Moreover, our method achieves better performance than all the methods that also focus on recognizing discriminative subtle actions, including FR-Head <cit.> (outperformed by 0.8% on NTU-60 X-Sub and 0.6% on X-View) and WDCE-Net <cit.> (outperformed by 0.6% on NTU-60 X-Sub and 0.2% on X-View). It is worth noting that our method not only surpasses the existing SOTA GCN-based methods but also enhances the transformer's ability to learn discriminative representations among subtle actions. In addition to experiments on large-scale datasets like NTU-60 and NTU-120, we extend our research to the small-scale NW-UCLA dataset to further validate our model's performance across different data scales. Table <ref> shows results on the NW-UCLA dataset. FreqMixFormer achieves the best result (97.7%) in comparison to SOTA methods based on GCNs and transformers. Our method outperforms HD-GCN by 0.8% and SkeMixFormer by 0.3%. §.§ Comparison of Complexity with Other Models Table <ref> shows the complexity comparison with other models. For a fair comparison, we conduct the experiments under the same settings. GCNs typically require fewer parameters and incur lower computational costs than transformers, since they leverage the inherent structure of graph data, which allows them to model node dependencies directly with minimal parameters; additionally, GCN operations are confined to the edges of a graph, significantly reducing the GFLOPs. In contrast, transformers process all pairwise element interactions in a sequence, leading to a rapid increase in computational complexity and parameter count, especially for long sequences. Although our FreqMixFormer is therefore less efficient than GCN-based methods, we achieve a very competitive result on the NTU-120 X-Sub benchmark, outperforming HD-GCN by 2.2% and SkeMixFormer by 0.8% with fewer parameters than SkeMixFormer. §.§ Ablation Study In this section, we first evaluate the role of key modules in FreqMixFormer, including FAB, FO, and TAB, to analyze the effectiveness of each block.
We then search for the best configuration within the model, including the number of unit groups n for the input partition and the best frequency operator coefficient φ. Additionally, we provide visualizations of the attention maps to show the effectiveness of the mixed frequency-spatial attention mechanism. The Design of Frequency-aware Mixed Transformer. As shown in Table <ref>, the baseline contains only the basic Spatial MixFormer module from <cit.>, which achieves 89.8% accuracy. We then analyze three modules: 1) Frequency-aware Attention Block (FAB): the key part of our proposed method, extracting frequency-based attention maps from the joint sequence and leading to a 1.2% improvement over our baseline. 2) Frequency Operator (FO): an extra module within the FABs that enhances the high-frequency coefficients and reduces the low-frequency coefficients according to the frequency operator coefficient, resulting in a 1.4% improvement over the baseline. 3) Temporal Attention Block (TAB): a module utilized to learn joint correlations across frames, leading to a 0.9% improvement over the baseline. As we can see, each of these modules enhances the baseline's performance, and the main contribution comes from the Frequency-aware Attention Blocks with a proper frequency operator coefficient. The best result comes from combining all these modules with the baseline, which achieves 91.5% accuracy on NTU-60 X-Sub with the joint modality. It is speculated that the proposed frequency-aware attention mechanism (see Section <ref>) plays a significant role in enhancing the action recognition performance. The experimental results on confusing actions with subtle motions are presented in Section <ref>. Moreover, we also analyze the best number of unit groups n. Table <ref> shows that increasing n improves the results (n = 2, 3, 4), with a peak accuracy of 91.5% on NTU-60 X-Sub and 96.0% on NTU-60 X-View. However, further increments do not lead to better outcomes, only to a higher computational cost (the model parameters keep increasing from 1.26M to 2.83M). Given the trade-off between cost and performance, we opt for a splitting number of n = 4 for subsequent experiments.

Table: Search for the best number of the unit n (NTU-60).
n | X-Sub (%) | X-View (%) | Param (M)
2 | 90.0 | 95.1 | 1.26
3 | 90.8 | 95.3 | 1.64
4 | 91.5 | 96.0 | 2.04
5 | 91.3 | 95.9 | 2.45
6 | 91.3 | 95.7 | 2.83

Table: Search for the best frequency operator coefficient φ (NTU-60).
φ | X-Sub (%) | X-View (%)
0.1 | 90.9 | 95.6
0.2 | 90.8 | 95.4
0.3 | 91.0 | 95.6
0.4 | 90.9 | 95.7
0.5 | 91.5 | 96.0
0.6 | 91.0 | 95.8
0.7 | 91.0 | 95.6
0.8 | 91.0 | 95.7
0.9 | 91.1 | 95.6

Search For the Best Frequency Operator Coefficient φ. In Table <ref>, we investigate the impact of the frequency operator coefficient φ. As discussed in Section <ref>, the high-frequency coefficients are amplified by (1+φ), and the low-frequency coefficients are diminished by φ. As φ increases from 0.1 to 0.5, there is a general trend of improved performance, reaching a peak at φ = 0.5, which achieves the highest accuracy of 91.5% on NTU-60 X-Sub and 96.0% on X-View. However, further increasing φ from 0.6 to 0.9 does not lead to improvements in performance; in fact, the accuracy slightly declines. This suggests that enhancing high-frequency components too much or reducing low-frequency components too aggressively may impair the learning of motion patterns. Effectiveness of the Mixed Frequency-aware Attention. Fig.
<ref> presents the visualization of the attention matrices learned by FreqMixFormer. The skeleton configuration is generated from the NTU-60 dataset (Fig. <ref> (a)). We take "eat meal" as an example (Fig. <ref> (b)). In the correlation matrix, a more saturated yellow represents a large weight, indicating a stronger correlation among joints. And the numbers denote different joints. Note that the Mixed Spatial Attention Map (Fig. <ref> (c), learned by SAB) represents the spatial relationships among joints. The Mixed Frequency Attention Map (Fig. <ref> (d), learned by FAB) suggests the frequency aspects of motion. Based on these two attention maps, a mixed frequency-spatial attention map is proposed (Fig. <ref> (e)) for capturing both spatial correlations and frequency dependencies, integrating the spatial and frequency skeleton features. As we see in the figures, the model focuses on the correlations with the spine and right-hand tip in the spatial domain. As for the frequency domain, more correlation areas are concerned (joint connections with the spine, left arm, and the interactions between head and hands), which indicates the model is analyzing more discriminative movements overlooked in the spatial domain. Meanwhile, the mixed frequency-spatial attention map contains not only the strong attention areas learned from spatial space but also the concerned correlations in frequency space. This demonstrates that our FreqMixFormer model advances this by extracting minimal and subtle joint representations (highlighted with the red box in Fig. <ref> (b)) from both spatial and frequency domains. The effectiveness of the mixed frequency attention is also verified in Table <ref>. §.§ Comparison Results on Confusing Actions To validate our model's capability in discerning discriminative actions, similar to <cit.>, we categorize certain actions from the NTU-60 dataset (only joint stream with X-Sub protocol) into three sets based on the classification results of Hyperformer <cit.>: the actions with accuracy lower than 80% as Hard set, between 80% and 90% as Medium set, and higher than 90% as Easy set. All the confusing actions are classified into Hard and Medium sets. For example, "writing," "reading," and "playing with a phone" are categorized as Hard action sets due to their subtle differences, which are limited to small upper-body movements involving only a few joint correlations, leading to low recognition results. We compare our results with the recent transformer-based models Hyperformer <cit.> and SkeMixFormer <cit.>. The results of different difficult-level actions are displayed in Table <ref>, showcasing that our model outperforms the recent SOTA methods across these three subsets. Furthermore, the detailed results of Hard and Medium actions are also provided in Fig. <ref> and Fig. <ref>. The results indicate that our method significantly enhances performance on both hard-level and medium-level confusing actions, demonstrating its capability to differentiate ambiguous movements. § CONCLUSION AND DISCUSSION In this work, we introduce Frequency-aware Mixed Transformer (FreqMixFormer), a novel transformer architecture designed to discern discriminative movements among similar skeletal actions by leveraging a frequency-aware attention mechanism. This model enhances skeleton action recognition by integrating spatial and frequency features to capture comprehensive intra-class frequency-spatial patterns. 
Our extensive experiments across diverse datasets, including NTU RGB+D, NTU RGB+D 120, and NW-UCLA, establish FreqMixFormer's state-of-the-art performance. The proposed model demonstrates superior accuracy in general and significant advancements in recognizing confusing actions. Our research advances the field by presenting a method that integrates frequency domain analysis with current transformer models, paving the way for more precise and efficient action recognition systems. This work is anticipated to inspire future research on precision-targeted skeletal action recognition. § OVERVIEW OF SUPPLEMENTARY MATERIAL In this supplementary material, we provide the following items: * Partial DCT vs full DCT algorithms. * Evaluation of the number of DCT coefficients. * Evaluation on UAV-Human dataset. * Additional results. * Implementation details. * More visualizations. * Limitations and future work. § PARTIAL DCT VS FULL DCT ALGORITHMS In FreqMixFormer, we utilize DCT in Frequency-aware Attention Block (FAB) to extract skeletal frequency features. As illustrated in Fig. 2 and 3 in the main paper, only Query matrix Q and Key matrix K are processed with DCT and IDCT modules for attention score, Value matrix V is only processed with linear transformation, the methodology can be found in Algorithm 1 as Partial DCT Algorithm. Moreover, we also investigate the Full DCT Algorithm, where DCT and IDCT process V, and the methodology is shown in Algorithm 2. However, the full DCT algorithm performs poorly in the experiment: the full DCT algorithm only achieves 87.7% on the NTU-60 X-Sub setting, while the partial DCT algorithm achieves 91.5% accuracy. The overview of the FreqMixFormer with full DCT algorithm is illustrated in Fig. <ref>. The Spatial Attention Block (SAB) in this experiment is shown in Fig. <ref> (a) and the Frequency-aware Mixed Former (FAB) with full DCT algorithm is shown in Fig. <ref> (b). We hypothesize that the primary issues contributing to this gap are: 1) Applying DCT to Q and K can effectively highlight key frequency features and improve the model accuracy by matching relevant features during the computation of attention scores. 2) By excluding V from the frequency domain, the original temporal-spatial information is retained. This retention may help preserve more detailed and dynamic information in the final representation, enhancing the model's ability to utilize these details for action recognition. 3) Recognizing actions relies not only on the frequency characteristics of movements (such as the speed and rhythm) but also on the specifics of how the actions are performed (like the swinging of an arm). Processing Q, K, and V in different domains may allow the model to balance these needs. § EVALUATION OF THE NUMBER OF DCT COEFFICIENTS In order to explore the frequency operator in-depth, we conduct an evaluation of the number of enhanced DCT coefficients. Table <ref> shows the extra ablation study on the number of DCT coefficients N_c that we set as high-frequency coefficients (the rest are set as low-frequency coefficients). The high-frequency coefficients are enhanced by a frequency operator coefficient φ discussed in Section 3.4 of the main paper. For a fair comparison, we keep φ = 0.5 during the experiments. As shown in the table, with the number of the enhanced DCT coefficient N_c = 12, the model achieves the best performance on NTU-60 (91.5% in X-Sub and 96.0% in X-View) dataset, and further increasing does not result in improvements. 
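To illustrate the coefficient split concretely, the following NumPy/SciPy sketch applies the orthonormal DCT-II to a single joint trajectory, boosts a high-frequency band of N_c coefficients by (1+φ), damps the remaining low-frequency band by φ, and then inverts the transform. It is only a one-dimensional illustration of the enhancement principle: in the model the operator acts on the mixed frequency attention maps MF_i, and the exact definition of the high-frequency band here is our assumption.

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal DCT-II and its inverse

def frequency_operator(traj: np.ndarray, n_c: int = 12, phi: float = 0.5) -> np.ndarray:
    """Boost the n_c highest-order DCT coefficients by (1 + phi), damp the
    remaining low-frequency coefficients by phi, and restore the trajectory
    to the time domain via the IDCT."""
    coeffs = dct(traj, type=2, norm="ortho")      # analogous to C_{j,i}
    out = coeffs * phi                            # damp low frequencies
    out[-n_c:] = coeffs[-n_c:] * (1.0 + phi)      # boost high frequencies
    return idct(out, type=2, norm="ortho")        # back to the time domain

# toy usage: 64 frames of a slow drift plus a subtle fast wiggle
t = np.linspace(0.0, 1.0, 64)
motion = 0.5 * t + 0.02 * np.sin(40.0 * np.pi * t)
emphasized = frequency_operator(motion)           # the fast wiggle is amplified
```

With N_c = 12 and φ = 0.5, as selected above, roughly the top fifth of a 64-frame spectrum is emphasized while the dominant low-frequency energy is attenuated but not removed.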
Algorithm 1: Partial DCT
Input: the skeleton sequence processed with joint embedding and positional embedding as the initial input X, where X ∈ℝ^C × F × J.
Init: W_Q, W_K and W_V are the learnable weight matrices.
Output: the partial DCT attention score
* V = XW_V
* X ← DCT(X)
* Q = XW_Q, K = XW_K
* Atten_1 = IDCT(Softmax(QK^⊤/√(d))) V
Return: Atten_1

Algorithm 2: Full DCT
Input: the skeleton sequence processed with joint embedding and positional embedding as the initial input X, where X ∈ℝ^C × F × J.
Init: W_Q, W_K and W_V are the learnable weight matrices.
Output: the full DCT attention score
* X ← DCT(X)
* Q = XW_Q, K = XW_K, V = XW_V
* Atten(Q, K, V) = Softmax(QK^⊤/√(d)) V
* Atten_2 = IDCT(Atten(Q, K, V))
Return: Atten_2

§ EVALUATION ON UAV-HUMAN DATASET §.§ UAV-Human Dataset UAV-Human <cit.> is an action recognition dataset comprising 22,476 video clips with 155 classes. The dataset was collected via a UAV across various urban and rural settings, during both daytime and nighttime. It captures action data from 119 distinct subjects engaged in 155 different activities across 45 diverse environmental locations. For evaluation (X-Sub, 17 joints per subject), 89 subjects are selected for training and 30 for testing. §.§ Experiment Settings The hardware configuration is the same as in the experiments reported in the main paper. The model is trained for 100 epochs with a batch size of 128, and we set a warm-up for the first 5 epochs. The weight decay is set to 0.0005, and the base learning rate is 0.2, with a 0.1 reduction at the 50th epoch. §.§ Comparison Results As Table <ref> shows, we compare our performance with the state-of-the-art methods on the UAV-Human dataset. Our FreqMixFormer outperforms all the existing methods and achieves new state-of-the-art results on this benchmark. § ADDITIONAL RESULTS §.§ Accuracy Difference Results We further analyze the Top-1 accuracy difference (%) between the proposed FreqMixFormer and the baseline method SkeMixFormer <cit.> with the joint input modality on NTU RGB+D 120 X-Sub. As illustrated in Fig. <ref>, the most significant improvements typically appear in confusing actions with subtle movements. For instance, our model achieves an improvement of 35.09% for "make OK sign", 21.55% for "make victory sign", and 18.56% for "counting money". These results underscore FreqMixFormer's ability to recognize visually confusing actions by extracting frequency-spatial features. §.§ Comparison with Frequency-based Results We provide an extra comparison with previous frequency-based methods in skeleton action recognition. As shown in Table <ref>, our FreqMixFormer outperforms all the existing methods utilizing frequency analysis on the NTU-60 X-Sub dataset. Moreover, our model is also the most efficient, with the fewest parameters (2.04M) and the lowest GFLOPs (2.40) among the frequency-based methods. § IMPLEMENTATION DETAILS §.§ Multi-stream Fusion Strategy The comparison is made with three ensembles of different modalities (joint only, 4-stream ensemble, and 6-stream ensemble;
we denote a stream as S for convenience) following the setting of InfoGCN <cit.>: S1: k = 1, motion = False; S2: k = 2, motion = False; S3: k = 8 (k = 6 for the NW-UCLA and UAV-Human datasets), motion = False; S4: k = 1, motion = True; S5: k = 2, motion = True; S6: k = 8 (k = 6 for the NW-UCLA and UAV-Human datasets), motion = True, where k indicates the k-th mode representation of the skeleton. The 4-stream ensemble is S1+S2+S4+S5, and the 6-stream ensemble is S1+S2+S3+S4+S5+S6. For a fair comparison, experiments using the baseline method are also conducted with this ensemble strategy. §.§ Evaluations of the Batch Size Fig. <ref> illustrates the impact of the batch size during training. We take the experimental results on the NTU-60 dataset as an example. As we can see, increasing the batch size from 32 to 128 enhances performance. However, a higher batch size (256) is not better, because it requires more memory and leads to convergence issues. Thus, we choose 128 as our default batch size. §.§ Channel Transformation in Temporal Attention Block As mentioned in Section 3.4 of the main paper, we adopt strategies from the baseline method <cit.> as our temporal channel transformation CT(·), which stacks two modules: 1) Channel Reforming Model: an improved module derived from SE-Net <cit.> that enhances the feature separation between groups and reduces noise by reorganizing the channel relationships within each group. 2) Multiscale Convolution Module: the first part of the Temporal MixFormer in <cit.>, a simple optimization of MS-G3D <cit.> that maintains a fixed filter while adjusting the dilation, enabling the acquisition of more diverse multiscale temporal information and reducing computational costs. We simply adopt this combination as the CT(·) operation. § MORE VISUALIZATIONS In this section, we exhibit more attention maps, analogous to the visualization results illustrated in Section 4.5 of the main paper. Since we have provided the action "eat a meal" from the Hard set, we give more visualization results from the Medium set (headache) and the Easy set (kicking) as examples. All the skeletons and attention maps are generated from the NTU-60 dataset. As shown in Fig. <ref> and Fig. <ref>, our proposed Frequency-aware Mixed attention maps (extracted by the FAB modules) contain more detailed information and joint correlations compared with the spatial maps (extracted by the SAB). § LIMITATIONS AND FUTURE WORK Despite the high accuracy of our model, it still has some limitations. Firstly, our model is still not efficient and lightweight enough. As discussed in the ablation study of the main paper, there is a gap between our method and recent GCN-based methods such as HD-GCN <cit.> (1.68M parameters vs. 2.04M, 1.60 GFLOPs vs. 2.40 GFLOPs), and we have no remarkable efficiency advantage over recent transformer-based methods. Secondly, we keep all the high-frequency coefficients during training, which is not robust to noisy joint information. A more efficient approach would be to enhance the high-frequency coefficients selectively rather than as a whole. Our future work will focus on finding the best trade-off point between efficiency and accuracy.
http://arxiv.org/abs/2407.12216v1
20240716235007
Mindful-RAG: A Study of Points of Failure in Retrieval Augmented Generation
[ "Garima Agrawal", "Tharindu Kumarage", "Zeyad Alghamdi", "Huan Liu" ]
cs.IR
[ "cs.IR" ]
§ ABSTRACT Large Language Models (LLMs) are proficient at generating coherent and contextually relevant text but face challenges when addressing knowledge-intensive queries in domain-specific and factual question-answering tasks. Retrieval-augmented generation (RAG) systems mitigate this by incorporating external knowledge sources, such as structured knowledge graphs (KGs). However, LLMs often struggle to produce accurate answers despite access to KG-extracted information containing necessary facts. Our study investigates this dilemma by analyzing error patterns in existing KG-based RAG methods and identifying eight critical failure points. We observed that these errors predominantly occur due to insufficient focus on discerning the question's intent and adequately gathering relevant context from the knowledge graph facts. Drawing on this analysis, we propose the Mindful-RAG approach, a framework designed for intent-based and contextually aligned knowledge retrieval. This method explicitly targets the identified failures and offers improvements in the correctness and relevance of responses provided by LLMs, representing a significant step forward from existing methods. Mindful-RAG: A Study of Points of Failure in Retrieval Augmented Generation Garima Agrawal    Tharindu Kumarage    Zeyad Alghamdi    Huan Liu Arizona State University July 22, 2024 ==================================================================================================================== § INTRODUCTION Large Language Models (LLMs) excel at numerous natural language tasks, exhibiting human-like proficiency. However, they often generate hallucinated responses to domain-specific or knowledge-intensive queries  <cit.>. In such cases, LLMs require additional relevant contextual knowledge through prompting. Consequently, Retrieval-augmented Generation (RAG) methods have been developed to equip LLMs with the capability to augment and access external knowledge sources <cit.>. These methods enhance the model's ability to retrieve relevant information, improving performance in domain-specific question-answering settings. Despite the advancements in this field, RAG methods encounter significant obstacles throughout the augmentation, retrieval, and generation phases due to which LLMs often do not yield correct answers, even when the relevant knowledge is accessible. In this paper, we examine the application of RAG methods, specifically focusing on instances where LLMs leverage structured knowledge graphs (KGs) as external sources to extract factual information for answering complex queries  <cit.>. These queries typically necessitate intricate reasoning based on the data structure within the KGs. While current knowledge graph-augmented LLMs have demonstrated notable improvements in addressing simple one-hop queries, their efficacy diminishes as query complexity increases, despite the availability of the required information within the knowledge graph to derive the answer <cit.>. Our analysis identifies common error patterns in fact retrieval from knowledge graphs, highlighting eight critical points of failure that often lead to incorrect responses. We categorize these failures into two main areas: Reasoning Failures and KG Topology Challenges. Reasoning Failures include difficulties that LLMs encounter in understanding questions and leveraging contextual clues, hindering their ability to align queries with relevant information. 
These issues also involve struggles with intricacies such as temporal context and response aggregation and complexities in relational reasoning. KG Topology Challenges relate to structural problems in the knowledge base that affect the information access or lead to inefficient processing, thereby affecting model performance. Building on our analysis, we introduce Mindful-RAG, a novel methodology designed for intent-driven and contextually coherent knowledge retrieval. Unlike traditional methods that rely on semantic similarity or structural cues of knowledge base, Mindful-RAG uses the model’s intrinsic parametric knowledge to accurately discern the intent of the question. This guides the retrieval process, ensuring the relevance of the extracted context from the KG. The approach includes contextual alignment for efficient navigation of the KG and a validation step to ensure the response aligns with the original intent. Enhancing how LLMs understand and respond to complex queries, Mindful-RAG significantly advances over current methods, delivering more accurate and contextually appropriate responses. Our experiments on two KGQA benchmark datasets, WebQSP and MetaQA, showed improvements over existing state-of-the-art methods. This approach notably reduces reasoning errors by focusing on intent and contextual alignment. In summary, our study makes the following key contributions: * We conduct a comprehensive error analysis of KG-based RAG methods used in question-answering tasks, identifying eight critical types of failure points. * We identify a common theme among these failure points: the models' inability to comprehend the intent behind questions and their subsequent struggle to contextually align with the information provided by the KG. * We propose a novel research direction to enhance the RAG pipeline. This involves adopting a fresh perspective and utilizing LLM's parametric memory to discern question intent better and achieve contextual alignment with the knowledge. § KG-BASED RAG FAILURE ANALYSIS Various methodologies have been developed to enhance LLMs with KG-based RAG systems. By leveraging structured and meticulously curated knowledge from these graphs, the retrieved information is more likely to be factually accurate. We assessed the effectiveness of these methods and analyzed their accuracy in retrieving information for fact-based question-answering (QA) tasks using a KG. Although most of these models surpass the performance of zero-shot QA conducted directly from various standard LLMs, there is still considerable scope for improvement. For our study, we selected the WebQuestionsSP (WebQSP) <cit.> dataset for knowledge graph question answering (KGQA), which is frequently utilized by KG-based RAG methods. This dataset, based on the Freebase KG <cit.>, consists of questions that require up to two-hop reasoning to identify the correct answer entity, utilizing Hits@k as the evaluation metric to determine if the top-k predicted answer is accurate. It includes approximately 1600 test samples. The vanilla ChatGPT (GPT-3.5) accuracy in zero-shot setting without any external knowledge is 61.2%. StructGPT <cit.> is a state-of-the-art approach that leverages LLM's capabilities for reasoning with evidence extracted from a KG. This method involves extracting a sub-graph from a KG by matching the topic entities in the question. 
The LLM is then directly employed to identify useful relations and extract relevant triples from the sub-graph, guiding it to effectively traverse and reason within the graph structure. The Hits@1 accuracy of StructGPT on the WebQSP dataset, when utilizing ChatGPT (GPT-3.5) for question-answering tasks, was reported to be 72.6%. In this study, we have selected StructGPT as our reference model to analyze the current SOTA developments of KG-based RAGs in the QA setting. We initiated our analysis by examining all the failure instances of StructGPT on WebQSP. We meticulously reviewed the logs of 435 error cases to decipher the behavior of LLMs throughout the reasoning process. This detailed scrutiny allowed us to pinpoint the error patterns in LLMs, as evidenced by these cases. Our analysis identified that these errors predominantly fall into eight categories, outlined below. We further categorize these issues into two primary divisions: Reasoning Failures, which involve errors stemming from reasoning deficiencies, and KG Topology Challenges, which encompass various structural issues. Reasoning Failures: Most failures stem from the LLMs' inability to reason correctly. These issues primarily include a failure to accurately understand the question, leading to difficulty in mapping the question to the available information. Additionally, LLMs struggle to effectively apply the clues in the question to narrow down the relevant entities. They also often fail to apply specific constraints that logically limit the search space. Generally, LLMs have difficulty grasping specifics such as temporal context, aggregating or summarizing answers, and disambiguating among multiple choices. Furthermore, they frequently choose incorrect relations, particularly in complex queries requiring multi-hop reasoning, finding it challenging to focus on the relevant elements necessary to formulate an answer. In Table <ref>, we detail various reasoning failures, each illustrated with an example. KG Topology Challenges: These issues arise when knowledge is inaccessible due to limitations in the knowledge base's structural design or inefficient processing. In Table <ref>, we categorize all such issues under challenges related to the KG topology. In this work, our primary focus is addressing errors resulting from reasoning failures in LLM models and enhancing their reasoning capabilities. An analysis of samples across five reasoning error types highlights two main challenges. a The models often fail to grasp the question's intent, primarily relying on structural cues and semantic similarity to extract relevant relations and derive answers. b They struggle with aligning the context of the question with the available information. This inability to comprehend the intent and context leads to incorrect relations ranking and misuse of constraints. A review of response logs from failed and successful interactions shows that LLMs provide answers based mostly on semantic matching. This method works for simple queries but is inadequate for complex questions requiring multi-hop reasoning and extensive contextual understanding. Hence, enhancing intent identification and context alignment is crucial for improving model performance. § MINDFUL-RAG In response to our findings, we introduce a novel approach called Mindful-RAG, which targets the two critical gaps mentioned above: the lack of question-intent identification and the insufficient contextual alignment with available knowledge. 
This method utilizes a strategic hybrid method that integrates the model's intrinsic parametric knowledge with non-parametric external knowledge from a KG. The following steps provide a detailed overview of our design and methodology, illustrated with an example. * Step 1 Identify Key Entities and Relevant Tokens: The first step is to pinpoint the key entities within a question to facilitate the extraction of pertinent information from an external KG or a sub-graph within a KG. Additionally, in our method, we task the LLM model with identifying other significant tokens that may be crucial for answering the question. For instance, consider the question from WebQSP, “Who is Niall Ferguson's wife?" The key entity identified by the model is `Niall Ferguson', and the other relevant token is `wife'. * Step 2 Identify the Intent: In this step, we leverage the LLM's understanding to discern the intent behind the question, prompting it to focus on keywords and phrases that clarify the depth and scope of the intent. For instance, in the provided example, the model identifies the question's intent as “identify spouse". * Step 3 Identify the Context: Next, we instruct the model to understand and analyze the context of the question, which is essential for formulating an accurate response. For the provided example, the model identifies relevant contextual aspects such as “personal relationships," “marital status," and “current spouse." * Step 4 Candidate Relation Extraction: We extract key entity relations from the sub-graph within a one-hop distance. For our example, the candidate relations include information about the subject's profession, personal life, and societal role. * Step 5 Intent-based Filtering and Context-based Ranking of Relations: In this step, we direct the model to conduct a detailed analysis to filter and rank relations and entities based on the question's intent, ensuring their relevance and accuracy. Relations are ranked according to contextual significance, and the top-k relations are selected. For instance, considering the intent and context in the example, the model identifies the most relevant relation as “people.person.spouse_s." * Step 6 Contextually Align the Constraints: In this step, the model is instructed to take into account temporal and geographical constraints, utilizing relevant data from various indicators for more complex queries. This process ensures that responses are accurately tailored to specific times, locations, or historical periods. Once constraints are identified, the model is asked to align them contextually and refine the list of candidate entities. For instance, in our example, the model identified constraints such as names of spouses, marriage start and end times, and location of the ceremony. It narrowed the list to potential spouses and extracted all related triples. It then aligned this information with the context of `current spouse' to tailor the response to the specified time period. The final response given is `Ayaan Hirsi Ali', contrasting with existing methods <cit.> where an LLM erroneously selected the first name on the spouse list, `Sue Douglas'. * Step 7 Intent-based feedback: In the final step, we prompt the model to validate whether the final answer aligns with the initially identified intent and context of the question. If the answer does not meet these criteria, the model is instructed to revisit Steps 5 and 6 to refine its response further. 
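As a rough illustration of how these seven steps compose, the sketch below wires them into a single retrieval-and-answer loop around a generic text-in/text-out model call. The `llm` callable, the `kg.one_hop_relations` interface, and all prompt wordings are placeholders of our own; they are not the prompts or APIs used in the paper.

```python
from typing import Callable

def mindful_rag(question: str,
                llm: Callable[[str], str],
                kg,                       # hypothetical KG client exposing one_hop_relations(entities)
                top_k: int = 5,
                max_revisions: int = 2) -> str:
    """Illustrative sketch of the seven Mindful-RAG steps."""
    # Steps 1-3: key entities / relevant tokens, intent, and context
    entities = llm(f"List the key entities and other relevant tokens in: {question}")
    intent = llm(f"State the intent behind the question: {question}")
    context = llm(f"List the contextual aspects needed to answer: {question}")

    # Step 4: candidate relations within one hop of the key entities
    candidates = kg.one_hop_relations(entities)

    answer = ""
    for _ in range(max_revisions + 1):
        # Step 5: intent-based filtering and context-based ranking of relations
        ranked = llm(f"Given intent '{intent}' and context '{context}', select the "
                     f"{top_k} most relevant relations from: {candidates}")
        # Step 6: contextually align constraints (time, place, aggregation) and draft an answer
        answer = llm(f"Question: {question}\nRelevant relations and triples: {ranked}\n"
                     f"Apply any temporal or geographical constraints, then answer.")
        # Step 7: intent-based feedback; revisit Steps 5-6 if the answer is misaligned
        verdict = llm(f"Does '{answer}' satisfy the intent '{intent}' and context "
                      f"'{context}'? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            break
    return answer
```

The essential design choice is that Steps 1-3 draw only on the model's parametric knowledge, while Steps 4-6 ground the answer in the non-parametric sub-graph, with Step 7 closing the loop.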
Similarly, the model adeptly contextualizes and aggregates pertinent information in other instances. For example, when asked, “What songs did Justin Bieber write?" it successfully compiles all relevant songs. In response to, “What is the state flower of Arizona?" it identifies `Arizona' as the key entity, with `state' and `flower' as relevant tokens. It correctly interprets the intent to “identify state flower" and recognizes the context of “botany," “state symbols," and “Arizona's official flora," choosing the appropriate relation: “government.governmental_jurisdiction.official_symbols." In contrast, traditional methods only identify `Arizona' as the key entity, often missing the broader context, leading to choosing incorrect relations “base.locations.states_and_provinces.country" and answer stating the state flower of Arizona is unknown. Mindful-RAG leverages the LLM's intrinsic understanding in the first three steps to identify not only the key entities but also to gather additional information such as relevant tokens, intent, and current context, all of which are essential for accurately answering the question. These steps enable the model to appropriately filter relations and align constraints with the current context. By incorporating these steps, the LLM becomes more mindful of the specific elements to consider. In the final two steps, the LLM is prompted to tailor its response and align it with specific constraints such as time, location, and any requirements for aggregating an answer. § EXPERIMENTS AND RESULTS Datasets: We evaluate Mindful-RAG on two benchmark KGQA datasets, specifically WebQSP and MetaQA(3hop)<cit.>. MetaQA features questions related to the movie domain, with answers up to three hops away from the topic entities in a movie KG (based on OMDb). For our experiments, we focused on 3-hop questions. In our analysis of the WebQSP dataset, we evaluated several baseline methods: KAPING <cit.>, Retrieve-Rewrite-Answer (RRA) <cit.>, Reasoning on Graphs (RoG) <cit.>, and StructGPT <cit.>. For the MetaQA dataset, StructGPT <cit.> served as the baseline. The results for these methods were taken directly from the respective publications. In our experiments, we adapted the base code of StructGPT <cit.> and modified it as outlined in the previous section. We also examined the performance of ChatGPT without the use of Retrieval-Augmented Generation (RAG) on these two datasets. The results, presented in Figure <ref>, show that our approach, Mindful-RAG, achieved a Hits@1 accuracy of 84% on WebQSP and 82% on MetaQA (3-hop). The primary goal of this study is to explore methods to mitigate reasoning errors. We propose that further accuracy improvements can be achieved by addressing structural and formatting issues within the KB and by considering partial answers to enhance accuracy instead of requiring exact matches. § RELATED WORK Recent efforts to enhance RAG systems have focused on various improvements. Siriwardhana et al. <cit.> aimed to improve domain adaptation for Open Domain Question Answering (ODQA) by jointly training the retriever and generator and enriching the Wikipedia-based knowledge base with healthcare and news content. RAFT <cit.> enhances RAG by customizing language models for specific domains in open-book QA. Self-RAG <cit.> aims to increase the factual accuracy of LLMs through adaptive self-critique and retrieval-generation feedback loops. 
Fit-RAG <cit.> introduces a method that uses detailed prompts to ensure deep question understanding and clear reasoning in fact retrieval. Domain-specific knowledge graphs <cit.> have been effectively employed in KG-based RAG within LLMs <cit.> for question-answering tasks <cit.>. While most efforts enhance LLMs by augmenting knowledge graphs with relevant facts, there is limited work on improving the reasoning capabilities of LLMs during knowledge retrieval. Our research with Mindful-RAG aims to significantly enhance these methods by using the model's inherent knowledge for better question understanding. § DISCUSSION AND CONCLUSION We conduct a detailed error analysis of KG-based RAG methods integrated with LLMs for QA tasks, identifying eight critical failure points grouped into Reasoning Failures and KG Topology Challenges. Reasoning Failures involve LLMs struggling to comprehend questions and utilize contextual clues, hindering accurate query-information alignment. This category also includes challenges with temporal context and complex relational reasoning. KG Topology Challenges pertain to structural issues within the knowledge base that impede information access and efficient processing. Our findings reveal significant areas for improvement in state-of-the-art approaches, particularly their reliance on structural cues and semantic similarity, which prove inadequate in complex, multi-hop queries requiring deep contextual understanding. To address these shortcomings, we introduce the Mindful-RAG framework, which enhances intent-driven retrieval and ensures contextually coherent responses, targeting the main deficiencies identified in our analysis. While this work focuses on mitigating reasoning-based failures, future research could aim to refine knowledge graph structures and optimize query processing to boost the accuracy of KG-based RAG methods further. Exploring feedback loops where models actively request and integrate user corrections in real time could also enhance accuracy and practical utility. Moreover, combining vector-based search methods with KG-based sub-graph retrieval could significantly improve performance. These developments in intent identification and context alignment represent promising research directions that could substantially elevate the performance of LLMs in knowledge-intensive QA tasks across diverse domains. ACM-Reference-Format
http://arxiv.org/abs/2407.13472v1
20240718124622
On the origin of univalent Mg$^+$ ions in solution and their role in anomalous anodic hydrogen evolution
[ "Florian Deißenbeck", "Sudarsan Surendralal", "Mira Todorova", "Stefan Wippermann", "Jörg Neugebauer" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ ABSTRACT Aqueous metal corrosion is a major economic concern in modern society. A phenomenon that has puzzled generations of scientists in this field is the so-called anomalous hydrogen evolution: the violent dissolution of magnesium under electron-rich (anodic) conditions, accompanied by strong hydrogen evolution, and a key mechanism hampering Mg technology. Experimental studies have indicated the presence of univalent Mg^+ in solution, but these findings have been largely ignored because they defy our common chemical understanding and evaded direct experimental observation. Using recent advances in the ab initio description of solid-liquid electrochemical interfaces under controlled potential conditions, we described the full reaction path of Mg atom dissolution from a kinked Mg surface under anodic conditions. Our study reveals the formation of a solvated [Mg^2+(OH)^-]^+ ion complex, challenging the conventional assumption of Mg^2+ ion. This insight provides an intuitive explanation for the postulated presence of (coulombically) univalent Mg^+ ions and the absence of protective oxide/hydroxide layers normally formed under anodic/oxidizing conditions. The discovery of this unexpected and unconventional reaction mechanism is crucial for identifying new strategies for corrosion prevention and can be transferred to other metals. Controlling materials degradation in chemically harsh environments is an outstanding challenge for future sustainable technologies. Examples are electrochemical energy conversion and storage solutions <cit.>, green metallurgy <cit.> and lightweight structural materials <cit.>. The need to understand the fundamental corrosion mechanisms in such environments is highlighted by a number of deceptively simple, yet poorly understood degradation reactions such as, e.g., the anomalous dissolution of metals under anodic conditions: their precise mechanistic details have remained elusive since their discovery more than 150 years ago <cit.>. Magnesium is a prototypical example <cit.>. Due to their light weight, high abundance and low environmental impact Mg alloys are attractive materials for mechanical engineering or batteries <cit.>. However, with Mg being one of the most reactive metals a major technical weakness is corrosion in water. Many of the properties of magnesium in water are puzzling and not understood: For example, under anodic polarization, where Magnesium dissolves, it simultaneously shows extreme rates of hydrogen evolution (HE), which would normally be expected exclusively for cathodic potentials. According to the Butler-Volmer equation, HE should decrease exponentially when increasing the potential towards the anodic direction. In marked contrast, however, Magnesium and its alloys feature strongly enhanced HE with increasing anodic polarization. This effect is referred to in the literature as the 'negative difference effect' or 'anomalous hydrogen evolution reaction' <cit.>. Multiple models have been proposed <cit.> to explain the origin of the anomalous hydrogen evolution. The 'enhanced catalytic activity mechanism' <cit.> suggested the enrichment of impurities more noble than Mg or the formation of local active sites to catalyze the HE on the anode. However, the specific nature of the catalyst that drives the anodic HE has remained unknown <cit.>. Even more puzzling, the amount of Mg dissolved was observed to be greater than coulometrically expected, assuming that Mg was oxidized to the dipositive Mg^2+ ion <cit.>, cf. Fig. <ref>a. 
These findings were taken as evidence for the existence of a 'unipositive Mg^+ ion mechanism' <cit.>. In a series of follow-up experiments, it was demonstrated that the unipositive Mg ions have a lifetime of several minutes in aqueous solutions and are able to reduce other species even macroscopic distances away from the oxidizing Mg anode <cit.>. On the other hand, the unipositive Mg^+ mechanism has been challenged on the grounds that such an ion should be extremely short-lived. So far, there is only indirect evidence for the existence of Mg^+ <cit.>. Atomic emission spectroelectrochemical experiments <cit.>, which clearly distinguish different oxidation states, found direct evidence only for divalent Mg^2+. Yet, this anomalous dissolution behaviour is not limited to Mg, but has been observed, e.g., for Fe, Cr and Zn as well <cit.>. Despite their fundamental importance, the existence and exact chemical nature of the postulated unipositive metal ions as well as the precise atomistic reaction mechanisms responsible for the anomalous dissolution, however, have remained elusive. First principles techniques could be the method of choice to reveal the origin of these anomalous dissolution reactions. However, studies that explore the corrosion process taking into account the full complexity of the realistic surface-water interface are still lacking. Via ab initio thermopotentiostat molecular dynamics simulations, we demonstrate that the hypothesized unipositive metal ions are in fact ion complexes, consisting of a divalent metal ion and an OH^- group. In the conventional picture, cf. Fig. <ref>a, Mg dissolves via the formation of divalent Mg^2+ ions, transferring 2e^- to the metal surface per dissolved Mg ion. Under acidic conditions, cf. Fig. <ref>b, water adsorbs dissociatively, partially passivating the metal surface via formation of a Mg(OH)_2 surface hydroxide. In the present work, we identified an energetically and kinetically favourable reaction pathway where a surface metal atom becomes solvated in conjunction with an attached surface hydroxyl group (Fig. <ref>c). This pathway completely circumvents the passivating nature of the hydroxide film. In turn, this reaction process supports a continued dissociative water adsorption, explaining the anomalous hydrogen evolution at the anodically polarized surface. The effectively unipositive [Mg^2+(OH)^-]^+ ion complex is responsible for the observed amount of metal dissolved being larger than coulometrically expected. The [Mg^2+(OH)^-]^+ ion complex may subsequently decay into the divalent metal ion via reduction of another species. We expect it to be long-lived due to a strong Coulomb barrier separating the effectively unipositive ion complex from prospective reactants such as, e.g., H^+. To reveal the dissolution mechanism and the nature of the unipositive Mg^+ ion, we described the solvated Mg-surface by a supercell containing a 6-layer slab oriented in the (1 2 3̅ 15) direction and 64 explicit water molecules. Dissolution is generally understood to proceed via kink atoms due to their weaker bonds and the greater exposure of kink-sites to both adsorbing molecules and to the electric field. We therefore induced a miscut in the Mg-slab resulting in a surface with two kink-sites in the supercell. 
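As a purely illustrative aside, a slab of this orientation can be generated with standard atomistic tooling; the sketch below uses ASE and is not the production setup of this work (the induced miscut that creates the kink sites, the 64 explicit water molecules, and the thermopotentiostat are not reproduced, and the lattice constants are generic textbook values).

```python
# Minimal ASE sketch of a high-index Mg slab; illustrative only.
from ase.build import bulk, surface

mg = bulk("Mg", "hcp", a=3.21, c=5.21)          # generic hcp Mg lattice constants
# The four-index Miller-Bravais plane (1 2 -3 15) corresponds to the
# three-index (1 2 15) plane expected by ASE's general surface builder.
slab = surface(mg, (1, 2, 15), layers=6, vacuum=10.0)
print(len(slab), "atoms; cell lengths:", slab.cell.lengths())
```

In the production setup, 64 explicit water molecules fill the space above the slab and the electrode charge is controlled by the thermopotentiostat introduced below.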
Already during equilibration under open-circuit conditions, a water molecule adsorbs dissociatively at one of the kink-sites according to: Mg + H_2O_(ad)⇌ [Mg^2+(OH)^-]^+_(ad) + H_2,(ad)/2 + e^- Two further H_2O adsorbed subsequently at the same kink-site as intact molecules, leading to the configuration shown schematically in Fig. <ref>i. For clarity, only the participating water molecules are shown. Under open-circuit conditions, this configuration remained stable on the time-scale of our simulations. In order to drive a dissolution reaction, we subsequently polarized the Mg slab anodically. The electrode charge is controlled by our recently introduced thermopotentiostat <cit.>. Within 1.6 ps after switching on the thermopotentiostat with a target potential of ⟨Φ⟩ = -2 V, a fourth water molecule approached the kink atom (cf. Fig. <ref>ii) and adsorbed (Fig. <ref>iii), starting to form a solvation shell. The kink atom is then increasingly lifted out of the surface. Fig. <ref>a shows the distance parallel to the surface normal between the kink atom and its Mg bonding partner underneath. A maximum extension of 3.6 Å is reached after completing the solvation shell (Fig. <ref>iv-v). Although the surface is charged with 2 additional electrons (cf. Fig. <ref>b), indicating that the kink atom is now fully ionized, the solvated Mg^2+ ion remains firmly bound to the surface: in conjunction with the hydroxyl group created in reaction (<ref>), the solvated ion forms an [Mg^2+OH^-]^+ complex, where the hydroxyl group connects the kink atom to its nearest Mg neighbour (cf. Fig. <ref>v). We speculate that this process is common to anodically polarized metal surfaces. A 'place-exchange mechanism', where a surface-adsorbed oxygen and a metal atom underneath exchange their position, was first proposed by Lanyon et al. <cit.>, later observed by Vetter et al. <cit.> for Pt surfaces, and reexamined more recently by Rost et al. <cit.>. This hydroxyl bridge bond is highly stable. Removing all water molecules except the ones constituting the solvation shell and separating the [Mg^2+OH^-]^+ ion complex from the surface in a vacuum calculation, we estimated the binding energy to be ∼ 2 eV. Such a large binding energy is inconsistent with the experimentally observed high dissolution rates <cit.>. In order for the dissolution to proceed, we therefore expect the breaking of the hydroxyl bridge bond to be catalyzed by its surrounding environment. Indeed, our simulations showed a possible candidate: a concerted double proton transfer from a neighbouring adsorbed water molecule via a solvated H_2O molecule that is hydrogen bonded to the hydroxyl group (cf. Fig. <ref>i) relocates the hydroxyl group laterally to a neighbouring site. Thereby, the [Mg^2+OH^-]^+ ion complex is oxidized to Mg^2+ and left with a complete solvation shell consisting of 6 H_2O molecules (Fig. <ref>ii): [Mg^2+(OH)^-]^+_(ad) + H_2O_(ad)⇌ Mg^2+_(aq) + OH^-_(ad) + H_2O_(aq) As a result, the Mg^2+ ion quickly moved into the liquid water region (dashed blue line in Fig. <ref>), leaving the hydroxyl group behind on the surface. This double proton transfer proceeding on the surface is, however, not the only conceivable reaction to catalyze the dissolution. In order to search for alternative processes, we moved the Mg^2+ ion with a constant velocity of v = 2/3 Å/ps parallel to the surface normal into the solution. In response, one of the H_2O molecules forming the solvation shell turned one of its OH-bonds towards the hydroxyl group.
In Fig. <ref>, we show the distance between the hydrogen in the corresponding OH-bond and the oxygen atom of the hydroxyl group (green solid curve). In the time frame from 10.8 ps to 12.5 ps, multiple transfer attempts are visible, until at ∼ 13 ps an intra solvation shell single proton transfer occurs to the hydroxyl group (Fig. <ref>a/b). Thereby, the [Mg^2+OH^-]^+ ion complex as a whole becomes fully solvated and moves into the liquid water region (solid blue curve, Fig. <ref>c): [Mg^2+(OH)^-]^+_(ad) + H_2O_(aq)⇌ [Mg^2+(OH)^-]^+_(aq) + H_2O_(ad) We emphasize, that the outcome of these two competing processes is fundamentally different. For reaction (<ref>), the hydroxyl remains on the surface. Therefore, the surface will be quickly hydroxylated and become electrochemically passive. No more dissociative H_2O adsorption is possible, preventing any further anomalous hydrogen evolution. Reaction (<ref>), on the other hand, removes the hydroxyl group from the surface into the solution, leaving the next kink site exposed to further dissociative H_2O adsorption. It is therefore only reaction (<ref>) that is associated with ongoing anomalous hydrogen evolution. Consistent with the experimental observation that only the unipositive Mg ion is associated with the anomalous anodic hydrogen evolution <cit.>, the [Mg^2+OH^-]^+ ion complex created in reaction (<ref>) is effectively charged +1. We therefore propose that the elusive unipositive Mg ion is, in fact, an [Mg^2+OH^-]^+ ion complex. This interpretation is supported by the fact that the reaction Mg^2+(OH)^- ⇌ Mg^2+ + OH^- is known to have a p_K_b value of 2.56 <cit.>. Hence, for pH > 11.44 the [Mg^2+OH^-]^+ ion complex becomes the dominant species. Due to the hydroxylation of the surface, we speculate that the local pH becomes sufficiently large, so that the ion complex may even be thermodynamically stabilized. Moreover, Refs. <cit.> pointed out that the unipositive Mg ion remains stable for several minutes in aqueous solutions and is able to reduce other species even macroscopic distances away from the Mg anode. Since the [Mg^2+OH^-]^+ ion complex is positively charged and requires, e.g., an H^+ ion to oxidize to Mg^2+, we expect the complex to be able to reduce other species. In addition, due to the Coulomb barrier between [Mg^2+OH^-]^+ and H^+, which both are positively charged, the ion complex will be rather long-lived. An alternative reaction to obtain Mg^2+ is the dissociation of the [Mg^2+OH^-]^+ ion complex into its constituents Mg^2+ and OH^-. Analogous to the reduction via other species, this reaction is kinetically hindered by the large Coulomb attraction between the positive Mg^2+ and the negative OH^-. The reaction steps observed in our thermopotentiostat AIMD simulations imply the following model for Mg dissolution and the anomalous HER via unipositive Mg: starting from the hydroxylated surface (Fig. <ref>a), two distinct reaction pathways are available. On the one hand, with a rate of 1-k the hydroxylated kink atom is first ionized as (1-k) ·[ Mg + OH^- ⇌ [Mg^2+(OH)^-]^+ + 2 e^- ] and subsequently solvated according to (Fig. <ref>c): (1-k) ·[ [Mg^2+(OH)^-]^+ ⇌ Mg^2+ + OH^- ] After that, the process can start over at the next exposed kink atom (return to Fig. <ref>a). We note that summing Eqs. <ref> and <ref> results in (1-k) ·[ Mg ⇌ Mg^2+ + 2e^- ]. On the other hand, with a rate of k the [Mg^2+(OH)^-]^+ ion complex is solvated as a whole (Fig. <ref>b). 
Since the hydroxyl group becomes detached from the surface, the surface is thereby left exposed to another dissociative adsorption event (Fig. <ref>d): k ·[ Mg + H_2O ⇌ [Mg^2+(OH)^-]^+ + H_2/2 + e^- ] This is the step that triggers the anomalous anodic hydrogen evolution reaction (HER), explaining why only the effectively unipositive Mg ion complex contributes to the anodic HER. Eventually, the [Mg^2+(OH)^-]^+ complexes in the anolyte reduce other species - such as, e.g., H^+ - and oxidize in the process according to: k ·[ [Mg^2+(OH)^-]^+ + H^+ ⇌ Mg^2+ + H_2O ] Summing Eqs. <ref> and <ref> yields: k ·[ Mg + H^+ ⇌ Mg^2+ + H_2/2 + e^- ] We now see that Eqs. <ref> and <ref> are balanced by the cathodic half-reaction (2 - k) ·[H^+ + e^- ⇌H_2/2], so that the sum of Eqs. <ref>, <ref> and <ref> yields the well known total balance equation Mg + 2 H^+ ⇌ Mg^2+ + H_2. We emphasize, that the key steps <ref>, <ref> and <ref> are directly observable in our AIMD simulations. Only steps <ref> and <ref> have been inferred. In summary, by using ab initio molecular dynamics simulations of aqueous magnesium interfaces under potential control, taking into account the full complexity of the realistic metal-water interface, we have discovered a novel and completely unexpected reaction pathway. The identified dissolution product - [Mg^2+OH^-]^+ - naturally explains one of the most studied and debated corrosion mechanisms - the anomalous anodic hydrogen evolution, which has puzzled scientists since it was first reported more than 150 years ago. Our results clearly show that water is not just a spectator, but an active reactant. Under anodic conditions, water dissociatively adsorbs to form a surface hydroxide. Subsequently, the interfacial water provides a low-barrier pathway for proton transfer reactions that allow the surface hydroxide to dissolve via the formation of [Mg^2+(OH)^-]^+ ion complexes. This pathway bypasses the usual passivation effect of surface films and explains the unusually high anodic corrosion rates and the chemical nature of the hypothesized solvated Mg^+ ions. The discovery of such an unexpected reaction pathway also demonstrates the level and potential that ab initio molecular dynamics simulations have reached thanks to recent methodological advances in the description of electrochemical interfaces. The authors acknowledge the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding through Project No. 409476157 (SFB1394) and support under Germany's Excellence Strategy - EXC 2033-390677874–RESOLV.
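To make the bookkeeping of the above reaction scheme explicit, the following short Python sketch (our illustration, not part of the original simulation workflow) encodes each step as a vector of species changes and checks two statements made in the text: the weighted sum of the steps reproduces the overall balance Mg + 2 H^+ ⇌ Mg^2+ + H_2 for any branching ratio k, with only 2-k electrons collected at the anode per dissolved Mg atom (hence more Mg dissolves than coulometrically expected); and, with the quoted bulk pK_b = 2.56, assuming ideal activities and K_w = 10^-14, the [Mg^2+(OH)^-]^+ complex dominates over Mg^2+ for pH > 11.44.

```python
# Bookkeeping sketch (not from the original work) for the dissolution scheme above.
# Each step is a dict of species changes (products positive, reactants negative).
from collections import Counter

ionize      = {"Mg": -1, "OH-": -1, "MgOH+": +1, "e-": +2}              # (1-k) branch: kink ionization
dissociate  = {"MgOH+": -1, "Mg2+": +1, "OH-": +1}                      # (1-k) branch: complex decay
anomalous   = {"Mg": -1, "H2O": -1, "MgOH+": +1, "H2": 0.5, "e-": +1}   # k branch: H2O-assisted dissolution (anodic HER)
red_anolyte = {"MgOH+": -1, "H+": -1, "Mg2+": +1, "H2O": +1}            # k branch: reduction of H+ in the anolyte
cathodic    = {"H+": -1, "e-": -1, "H2": 0.5}                           # cathodic HER

def combine(weighted_steps):
    tot = Counter()
    for w, step in weighted_steps:
        for sp, n in step.items():
            tot[sp] += w * n
    return {sp: round(n, 10) for sp, n in tot.items() if abs(n) > 1e-9}

for k in (0.0, 0.3, 1.0):                      # branching ratio between the two pathways
    net = combine([(1 - k, ionize), (1 - k, dissociate),
                   (k, anomalous), (k, red_anolyte), (2 - k, cathodic)])
    e_anode = round((1 - k) * ionize["e-"] + k * anomalous["e-"], 10)
    print(f"k = {k}:  net = {net},  electrons per dissolved Mg at the anode = {e_anode}")
# The net reaction is always {Mg: -1, H+: -2, Mg2+: +1, H2: +1}; the apparent valence
# 2-k < 2 means more Mg dissolves per coulomb than a pure 2e-/Mg process would imply.

# Speciation estimate from the quoted bulk pK_b = 2.56 (ideal solution, K_w = 1e-14 assumed):
pKb, pKw = 2.56, 14.0
print("pH above which [MgOH]+ dominates over Mg2+:", pKw - pKb)          # -> 11.44, as in the text
```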
http://arxiv.org/abs/2407.12694v1
20240717161757
Neutrino Halo profiles: HR-DEMNUni simulation analysis
[ "Beatriz Hernández-Molinero", "Carmelita Carbone", "Raul Jimenez", "Carlos Peña Garay" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
§ INTRODUCTION Cosmology remains the most promising route to determine the absolute mass of neutrinos. Current cosmological surveys already provide very stringent upper limits, close to the total mass inferred from underground experiments that measure the mass splittings, i.e., 0.059 eV <cit.>, which in turn confidently suggests, within the framework of Bayesian evidence, that the hierarchy is normal <cit.>. However, in order to confirm that we have actually discovered neutrinos and not something else, it is important to have additional tests of the nature of this `hot' component of the Universe's density budget. Such tests can be obtained, for instance, by measuring the sound speed and viscosity of this extra component <cit.>. Another important probe is the clustering of neutrino haloes around their corresponding cold dark matter haloes, and it is this probe that we explore in this work. If massive neutrinos exist in the Lambda Cold Dark Matter (LCDM) model, then there should be clear signatures of how they cluster around Cold Dark Matter (CDM) haloes. It is these signatures, which could be measured by e.g. weak lensing surveys, that could provide evidence that we have indeed detected massive neutrinos. In this spirit, the present work aims to provide high-fidelity predictions for neutrino haloes, through cold dark matter and neutrino density profiles inside and around virialized haloes, with emphasis on their angular dependence within the neutrino halo. In particular, we revisit the fitting formulas proposed in <cit.>, but using improved simulations with a significantly larger mass resolution for both Cold Dark Matter (CDM) and neutrino particles, in line with current cosmological simulations, which allows us to obtain more reliable results. Firstly, we test how the previous formulas perform for new simulations in which the neutrino mass is half as large; we will see that these new simulations push the applicability limit of the fitting function to higher halo masses. Moreover, taking advantage of the high resolution of these new simulations, we also explore the angular dependence of the neutrino overdensity distributions around haloes. Neutrino density profiles have so far always been obtained assuming isotropy, but, in the surroundings of superclusters, a preferred direction of motion could exist depending on the dark matter particle distribution. If neutrinos fall in along a particular direction, some asymmetry could be created. We investigate and quantify this effect for different kinds of clusters through the neutrino density profiles. In addition, we compute neutrino density maps in the centres of haloes to confirm the neutrino wake signatures proposed in <cit.>. This paper is organised as follows.
In  <ref> we present the new HR-DEMNUni simulations used to carry out all the analyses presented in this work, as well as the method used to compute the density profiles.  <ref> gathers our main results: several density profiles for cold dark matter and neutrino particles in haloes of different masses, some of which take into account the angular distribution of the neutrinos around dark matter haloes. Finally, the discussion and conclusions are presented in  <ref>. § METHODOLOGY In this section we describe the simulations employed and the method applied to compute density profiles, starting with the details of the high-resolution simulations in  <ref> and the expressions for the profiles in  <ref>. §.§ Numerical Simulations In this analysis, we have used the “Dark Energy and Massive Neutrino Universe” (DEMNUni) suite of large N-body simulations <cit.>. The DEMNUni simulations have been produced with the aim of investigating the large-scale structure of the Universe in the presence of massive neutrinos and dynamical DE, and they were conceived for the nonlinear analysis and modelling of different probes, including dark matter, halo, and galaxy clustering <cit.>, weak lensing, CMB lensing, Sunyaev-Zel'dovich and integrated Sachs-Wolfe effects <cit.>, cosmic void statistics <cit.>, as well as cross-correlations among these probes <cit.>. In particular, we have used the new high-resolution (HR) simulations, with 64 times better mass resolution than the previous standard runs: the HR-DEMNUni simulations are characterized by a comoving volume of (500 h^-1Mpc)^3 filled with 2048^3 dark matter particles and, when present, 2048^3 neutrino particles. The simulations are initialized at z_ in=99 with Zel'dovich initial conditions. The initial power spectrum is rescaled to the initial redshift via the rescaling method developed in <cit.>. Initial conditions are then generated with a modified version of the software, assuming Rayleigh random amplitudes and uniform random phases. The HR-DEMNUni set consists of two simulations with total neutrino masses of ∑ m_ν = 0 and 0.16 eV, the latter in the degenerate mass scenario with three active neutrinos. The other cosmological parameters of the simulations are based on a Planck 2013 <cit.> LCDM reference cosmology (with massless neutrinos), in particular: n_ s=0.96, A_ s=2.1265 × 10^-9, h=H_0/[100 km s^-1 Mpc^-1]=0.67, Ω_ b=0.05, and Ω_ m=Ω_ CDM + Ω_ b + Ω_ν =0.32; here H_0 is the Hubble constant at the present time, n_ s is the spectral index of the initial scalar perturbations, A_ s is the scalar amplitude, Ω_ b the baryon density parameter, Ω_ m the total matter density parameter, Ω_ CDM the cold dark matter density parameter, and Ω_ν the neutrino density parameter. In the presence of massive neutrinos, Ω_ b and Ω_ m are kept fixed to the above values, while Ω_ CDM is changed accordingly. Tab. <ref> summarizes the masses of the CDM and neutrino particles together with the neutrino fraction f_ν≡Ω_ν / Ω_ m. Dark matter haloes are identified using a friends-of-friends (FoF) algorithm <cit.>, with a linking length of 0.2 times the mean particle separation, applied to both CDM and neutrino particles in the simulation, with a minimum number of particles per species fixed to 32, corresponding to a mass of ∼ 4 × 10^10 h^-1M_⊙. FoF haloes are further processed with the subfind algorithm <cit.> to produce subhalo catalogues. With this procedure, some of the initial FoF parent haloes are split into multiple substructures.
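As a rough consistency check of these numbers (our own back-of-the-envelope estimate, not taken from the paper), the particle masses implied by the quoted box size, particle number and cosmology can be computed as follows; the approximation Ω_ν = ∑ m_ν/(93.14 h^2 eV) is assumed, the baryonic mass is assumed to be carried by the CDM particles, and the exact values should be read from Tab. <ref>.

```python
# Back-of-the-envelope check (ours) of the HR-DEMNUni particle masses implied by the
# numbers quoted above: box side 500 Mpc/h, 2048^3 particles per species,
# Omega_m = 0.32, Omega_b = 0.05, h = 0.67, sum m_nu = 0.16 eV.
RHO_CRIT = 2.775e11        # critical density in h^2 Msun / Mpc^3
h, L, N = 0.67, 500.0, 2048

Omega_m, Omega_b, sum_mnu = 0.32, 0.05, 0.16
Omega_nu   = sum_mnu / (93.14 * h ** 2)      # standard approximation
Omega_cold = Omega_m - Omega_nu              # CDM + baryons, sampled by the "CDM" particles

cell  = (L / N) ** 3                         # comoving volume per particle, (Mpc/h)^3
m_cdm = RHO_CRIT * Omega_cold * cell         # Msun/h per CDM particle
m_nu  = RHO_CRIT * Omega_nu * cell           # Msun/h per neutrino particle

print(f"f_nu = Omega_nu/Omega_m ~ {Omega_nu / Omega_m:.4f}")
print(f"CDM particle mass       ~ {m_cdm:.2e} Msun/h")
print(f"neutrino particle mass  ~ {m_nu:.2e} Msun/h")
print(f"32 CDM particles        ~ {32 * m_cdm:.1e} Msun/h  (text quotes ~4e10)")
```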
In particular, in this work we use the spherical overdensity halo catalogues, the so-called M_ 200b catalogues, identified by the subfind algorithm, and in the following we will refer to them with the term “halo”. §.§ Computing Density Profiles In order to compute smoothed density profiles we use the kernel-based algorithm proposed in <cit.>. This density estimator is based on a variable window width, ν̂(r)=∑_i=1^N1/h_i^3K̃(r,r_i,h_i), where h_i is the window width associated with the ith particle and K̃ is a Gaussian kernel, K̃(r,r_i,h_i) = 1/2(2π)^3/2(rr_i/h_i^2)^-1[e^-(r_i-r)^2/2h_i^2-e^-(r_i+r)^2/2h_i^2] The window width varies with position in such a way that the bias-to-variance ratio of the estimate remains roughly constant. We choose the window width in the same way as <cit.>: h_i∝ r^1/2, normalized at 0.1r_vir so that h_0.1=0.05r_vir. We use this kernel to compute all density profiles in this paper. § DENSITY PROFILES In this Section we present our main results. Firstly, we focus on angle-averaged density profiles of both the cold dark matter and the neutrino distributions within haloes of different masses, studying whether the new density profiles measured from the HR-DEMNUni simulations are reproduced by the formulas proposed by <cit.>. Secondly, we consider how the density profiles change depending on the neutrino direction, computing them in solid angles ahead of and behind the halo centres along the direction of motion of the cold dark matter particles. Lastly, we look at the centres of the selected haloes in search of backward neutrino wakes such as those proposed in <cit.>. All analyses have been carried out on a redshift-zero realization of the HR-DEMNUni simulations. §.§ Profiles as a function of radius As a first check, we have calculated the CDM density profiles for four different halo masses at redshift zero (Figure <ref>) and fit them to the standard NFW profile (dashed lines in Figure <ref>). For these and the following fits, the lmfit package [<https://lmfit.github.io/lmfit-py/>], based on non-linear least-squares minimisation, has been used. The outcome is as expected: we see a plateau in the innermost regions (the higher the halo mass, the more extended this plateau), the profiles then drop quickly to the mean density around the virial radius, and the overdensities inside and outside differ by several orders of magnitude. In addition, and as is well known, the NFW profile only returns a good fit around the virial radius. These profiles are well known and have been calculated several times in the literature; we use them here only to show that the algorithm used to compute the density profiles works correctly, so that it can be applied to the calculation of the neutrino profiles. Now, we repeat the same procedure for the neutrino component, also at redshift zero, but in smaller halo mass bins. We plot the overdensity for neutrinos and use the formulas (<ref>) below to fit the profiles. We present the results for haloes with masses greater than 10^14 h^-1M_⊙ in the left panel of Figure <ref>, along with the results of the fit to Equation (<ref>), plotted as dashed lines. Only some of them are displayed in the left panel of Figure <ref> for ease of visualisation, but all fit results are included in Table <ref>. Likewise, in the right panel of Figure <ref>, the density profiles obtained for haloes with masses in the range 10^11 h^-1M_⊙≤ M_ h≤ 10^14 h^-1M_⊙ are shown, combined with the fit to Equation (<ref>), again plotted as dashed lines.
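As an illustration (not the original analysis code), a minimal NumPy sketch of the variable-width kernel estimator defined above is given below; the assignment of the window h_i to each particle's radius r_i and the normalization h(0.1 r_vir)=0.05 r_vir follow our reading of the text, while mass weighting and the conversion to overdensity are left to the caller.

```python
# Sketch of the variable-width Gaussian kernel estimator defined above:
# rho_hat(r) = sum_i K(r, r_i, h_i)/h_i^3, with h_i ∝ r_i^(1/2) normalized so that
# h(0.1 r_vir) = 0.05 r_vir.
import numpy as np

def window_width(r_i, r_vir):
    """h_i = 0.05 r_vir * sqrt(r_i / (0.1 r_vir)) (our reading of the normalization)."""
    return 0.05 * r_vir * np.sqrt(r_i / (0.1 * r_vir))

def kernel(r, r_i, h_i):
    """Spherically symmetrized Gaussian kernel K~(r, r_i, h_i) as written above."""
    pref = 1.0 / (2.0 * (2.0 * np.pi) ** 1.5) * h_i ** 2 / (r * r_i)
    return pref * (np.exp(-(r_i - r) ** 2 / (2 * h_i ** 2))
                   - np.exp(-(r_i + r) ** 2 / (2 * h_i ** 2)))

def density_profile(r_eval, r_particles, r_vir):
    """Number-density estimate at radii r_eval from particle radii r_particles."""
    h = window_width(r_particles, r_vir)
    return np.array([np.sum(kernel(r, r_particles, h) / h ** 3) for r in r_eval])

# toy usage: radii uniform in r correspond to a rho ∝ r^-2 density in 3D
rng = np.random.default_rng(0)
r_part = 20.0 * rng.random(20000)              # Mpc/h, inside a 20 Mpc/h sphere
r_eval = np.linspace(2.0, 18.0, 9)
rho = density_profile(r_eval, r_part, r_vir=2.0)
print(np.round(rho * r_eval ** 2, 1))          # ~ N/(4*pi*R) ~ 80 if the r^-2 slope is recovered
```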
Again, only a few of these are displayed in the right panel of Figure <ref>, but all results are included in Table <ref>. Due to computational limitations, the angle-averaged profiles corresponding to each of the halo masses listed in Tables <ref> and <ref> have been calculated by averaging over neutrino profiles measured from 200 haloes. All available haloes have been used for masses close to 10^15 h^-1M_⊙. In Figure <ref> the error bars represent the dispersion around the mean density profile, rather than the uncertainty on the mean density profile as instead assumed in <cit.>. Our errors are therefore more conservative and depend only mildly on the number of analysed haloes. We performed our fits using the equations below: δ_ν(r) = (ρ_ν(r)-ρ̅_ν)/ρ̅_ν = ρ_c/[1+(r/r_c)^α] for halo masses M>10^14 h^-1M_⊙ δ_ν(r) = κ/r^α for halo masses M<10^14 h^-1M_⊙ In <cit.>, the authors found that Equations (<ref>) described very well the average neutrino overdensity profiles over a wide range of radii. The physical meaning of the parameters in the profile (<ref>) is very simple: r_c and ρ_c represent the size and the overdensity of the core in the overdensity profile of the neutrino halo, while α is a parameter that controls how fast the overdensity profile falls off at large radii. In <cit.> it was also claimed that for haloes with mass below ∼10^13.5 h^-1M_⊙ the resolution of their N-body simulations was not large enough to properly resolve the core in the neutrino density profiles, so Equation (<ref>) was proposed to reproduce the outskirts of the neutrino density profiles. In our case, even though our simulations have a much larger mass resolution, the neutrino mass is smaller than in <cit.> and we cannot resolve the core either, if any core exists for such small neutrino masses. We therefore use the same expression (<ref>) to fit the neutrino profiles in haloes with masses below 10^14 h^-1M_⊙. It is important to emphasize that, in our case, the small neutrino mass, and therefore the high neutrino thermal velocity, prevents neutrino clustering in haloes with masses below 10^14 h^-1M_⊙, which results in no neutrino core formation. As the halo mass becomes smaller, the neutrino overdensity profile becomes flatter, reflecting the stronger neutrino clustering in more massive haloes. Comparing the neutrino profiles in the left and right panels of Figure <ref> with the CDM overdensity profiles in Figure <ref>, one observes that the neutrino profiles are more extended than the CDM ones, due to the stronger clustering of CDM particles compared to hot neutrinos. If we compare neutrino profiles for high and low halo masses, we see a clear overdensity decrease of around a factor of two in the latter case, a consequence of the lower halo mass and the shallower associated gravitational potential well. Looking at the tails of the neutrino profiles, we can verify that they are well reproduced by the proposed fitting functions. At this point, it is worth noting that formulas (<ref>), which were proposed on the basis of simulations with larger neutrino masses, also work reasonably well for these new simulations, where the total neutrino mass is half as large. In Figure <ref>, we show the parameters derived from the fitting functions against the mass of the halo hosting the neutrino halo, for the two cases of neutrino profiles (<ref>)-(<ref>), in order to check whether the trends are the same as in <cit.>. We find that our results for the α parameters are quite comparable and that, in both cases, the trend is similar to that obtained in <cit.>.
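For concreteness, a sketch of how such fits can be set up is shown below on synthetic data; the paper itself uses the lmfit package, while plain scipy.optimize.curve_fit is used here for brevity, and the parameter names mirror Equations (<ref>). The data, noise level and initial guesses are illustrative assumptions only.

```python
# Sketch: least-squares fits of the two neutrino overdensity forms quoted above,
# on synthetic data (the real profiles must come from the simulation).
import numpy as np
from scipy.optimize import curve_fit

def delta_massive(r, rho_c, r_c, alpha):      # M_h > 1e14 Msun/h: cored profile
    return rho_c / (1.0 + (r / r_c) ** alpha)

def delta_light(r, kappa, alpha):             # M_h < 1e14 Msun/h: pure power law
    return kappa / r ** alpha

rng = np.random.default_rng(1)
r = np.logspace(-1, np.log10(20.0), 30)                     # radii in Mpc/h

truth = dict(rho_c=0.8, r_c=2.5, alpha=1.9)                 # arbitrary illustrative values
d_obs = delta_massive(r, **truth) * (1 + 0.05 * rng.standard_normal(r.size))

popt, pcov = curve_fit(delta_massive, r, d_obs, p0=(0.5, 1.0, 2.0),
                       sigma=0.05 * np.abs(d_obs) + 1e-3, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
for name, v, e in zip(("rho_c", "r_c", "alpha"), popt, perr):
    print(f"{name:6s} = {v:.3f} +/- {e:.3f}")

# in the outskirts (r >> r_c) the cored form reduces to kappa/r^alpha with kappa = rho_c r_c^alpha
mask = r > 6.0
popt2, _ = curve_fit(delta_light, r[mask], d_obs[mask], p0=(2.0, 2.0))
print("kappa, alpha (outskirts) =", np.round(popt2, 3))
```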
The slopes of the fitting functions are close to zero, suggesting that there is not much mass dependence in the two α parameters. Regarding the core density parameter, ρ_c, the trend we find is comparable as well: the density of the neutrino core increases with the mass of the host halo, although the fit results differ slightly from those of <cit.>. For the κ parameter we do obtain a fit: its trend is roughly constant from 10^11 up to 1×10^13 h^-1M_⊙ and then it starts to grow. At distances much larger than the core radius, r≫ r_c, the profile (<ref>) reduces to (<ref>), with κ = ρ_cr_c^α, so we expected κ to decrease with decreasing halo mass, and this is indeed the case. Particularly relevant is the trend observed for r_c, for which we do not obtain a fit, as shown in the r_c panel of Figure <ref>. In <cit.> r_c grows with the halo mass, but what we find from our analysis is that it decreases with the halo mass for M_ h≲ 4 × 10^14 h^-1M_⊙, while it starts to grow for M_ h > 4 × 10^14 h^-1M_⊙, recovering a trend similar to <cit.> (see Table <ref>). Since the expected tendency, also considering what happens for the CDM core, is that the radius of the neutrino core grows with the mass of the host halo, this result suggests that Equation (<ref>) may not be applicable to haloes with masses lower than 4 ×10^14 h^-1M_⊙. A higher-resolution simulation and, especially, a lower total neutrino mass seem to have pushed this limit: for haloes with M_ h < 4 ×10^14 h^-1M_⊙ the obtained profiles are too flat for formula (<ref>) to resolve the core, so the core information is washed out. This could mean that, for total neutrino masses below ∼ 0.16 eV and CDM halo masses below a few times 10^14 h^-1M_⊙, neutrinos behave more as light-like particles, and this prevents neutrino core formation. The results of <cit.> suggest that the fitting formula does not work for haloes with masses below 10^13.5 h^-1M_⊙; here, for less massive neutrinos, we place the limit higher, at 4 ×10^14 h^-1M_⊙. It is possible that the limit can be pushed even higher, perhaps to around 10^15 h^-1M_⊙, with N-body simulations implementing total neutrino masses lower than 0.16 eV. Following the previous argument, such a low neutrino mass produces very flat profiles with very large error bars in the innermost halo regions, so a regime is reached where the dispersion of the profiles dominates and no reliable information can be obtained. Our simulations thus predict what averaged neutrino profiles should look like in the presence of CDM, although some new limits have been found. In order to obtain more conclusive results, more high-resolution simulations like the one used in this work, with different and smaller neutrino masses, are needed. Nevertheless, we can conclude that, if the signal is measured by future weak lensing surveys, this should provide a clear signature that massive neutrinos do exist. §.§ Profiles as a function of angle It has always been assumed that neutrino haloes, produced in regions of high dark matter density, are spherical; but if neutrinos fall into those regions because of the dark matter, the CDM distribution could produce a preferential direction of neutrino infall that would break the isotropy, resulting in somewhat ovoid neutrino haloes. For some dark matter haloes, we therefore expect a deformed associated neutrino halo, with a higher neutrino density ahead of the halo centre in the direction of CDM motion and a lower neutrino density behind it.
In order to search for this effect on the neutrino density profiles as a function of angle, a reference direction, with respect to which the angles are measured, is needed. We set the reference direction for each analysed halo as the direction obtained by averaging the velocity vectors of all CDM particles within the halo, assuming that the haloes extend up to 20 h^-1Mpc, since neutrino haloes have much larger radii than the corresponding CDM haloes. We then compute the neutrino overdensity profiles ahead of and behind the halo centre along this reference direction. In practice, the procedure is the following: first, find the average CDM particle velocity in the halo; then rotate the whole system, CDM and neutrino particles (up to a distance of 20 h^-1Mpc from the centre of the halo), so that the average halo CDM velocity is aligned with the x-axis, i.e. θ=90^∘ and ϕ=0^∘ in spherical coordinates; finally, compute the neutrino position distribution after the rotation. Another anisotropy effect is the one proposed in <cit.>. This produces neutrino wakes in the inner regions of the neutrino halo due to the peculiar motion of the halo itself. In order to investigate such an effect we look at the centre of the halo in a box of 6 h^-1Mpc side. §.§.§ Front loading neutrinos As noted above, neutrinos falling into a particular halo will see a preferential direction, i.e. the average CDM direction of motion, and will fall in following it. Along that direction, the closer the neutrinos travel to the halo centre, the stronger the gravitational force they feel and the more they deviate from their trajectory towards the halo central axis. As a result, a region of overdensity will form ahead of the halo centres in the direction of CDM motion. In practice, if neutrinos cluster preferentially ahead of the centre of the halo, the expected mean angle values are θ = 90^∘ and ϕ = 0^∘ by construction, because we have rotated each halo system such that the CDM mean velocity vector is aligned with the x-axis (θ = 90^∘ and ϕ = 0^∘ in spherical coordinates). To analyze this effect, we have randomly selected a total of 120 typical haloes of different masses. In all of these haloes we observe that the largest clustering occurs ahead of the halo centre in the direction of motion. Some examples are shown in Figure <ref>. From the simulations we see that the neutrino mean velocity is roughly aligned with the CDM mean velocity for each analyzed halo, as shown in Figure <ref>, where black triangles represent the CDM mean velocity and green triangles the neutrino mean velocity. This was the first linear effect studied in the cosmic neutrino background framework. It can also be observed how neutrinos cluster along the direction of CDM motion (45^∘<θ<135^∘) and how they cluster more in the regions located ahead of the halo centres (θ=90^∘ and ϕ=0^∘), marked by a solid red square, and less in the regions located behind the halo centres (θ=90^∘ and ϕ=180^∘), marked by a dashed red square. The effect is present at all halo masses and its amplitude does not differ much among the analysed haloes (see Figure <ref>). This effect depends on the CDM distribution of the halo and on the haloes surrounding it; we in fact performed an extended analysis of several hundred haloes and selected those where the effect is largest, such as the ones shown in Figure <ref>. The results presented in this work correspond to an analysis of a total of 120 haloes of different masses.
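A minimal sketch of the rotation step described above is given below (our illustration; the pipeline details beyond what is quoted in the text are assumptions): it builds the proper rotation that maps the mean CDM velocity onto the +x axis, i.e. θ=90^∘ and ϕ=0^∘ in the convention used here, and applies it to positions and velocities relative to the halo centre.

```python
# Sketch of the halo rotation step: align the mean CDM velocity with +x (Rodrigues formula).
import numpy as np

def rotation_to_x(v_mean):
    """Proper rotation matrix R with R @ v_mean proportional to (1, 0, 0)."""
    v = v_mean / np.linalg.norm(v_mean)
    x = np.array([1.0, 0.0, 0.0])
    axis = np.cross(v, x)
    s, c = np.linalg.norm(axis), np.dot(v, x)
    if s < 1e-12:                               # already (anti-)parallel to x
        return np.eye(3) if c > 0 else np.diag([-1.0, -1.0, 1.0])
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def rotate_halo(pos, vel, centre, v_cdm_mean):
    """Centre the halo and rotate all particles so the mean CDM velocity points along +x."""
    R = rotation_to_x(np.asarray(v_cdm_mean, float))
    return (pos - centre) @ R.T, vel @ R.T

# toy usage with random particles within 20 Mpc/h of the halo centre
rng = np.random.default_rng(2)
pos = rng.uniform(-20, 20, size=(1000, 3))      # Mpc/h
vel = rng.normal(0, 300, size=(1000, 3))        # km/s
v_mean = vel.mean(axis=0)
pos_r, vel_r = rotate_halo(pos, vel, centre=np.zeros(3), v_cdm_mean=v_mean)
print(np.round(rotation_to_x(v_mean) @ (v_mean / np.linalg.norm(v_mean)), 6))   # -> [1, 0, 0]
```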
Using the selected haloes, we compute the neutrino overdensity profiles, similarly to the previous ones in the left and right panels of Figure <ref>, but now without averaging on the 4π solid angle. We select a region defined by Δθ=60^∘ and Δϕ=120^∘ with respect to the centre placed at θ=90^∘ and ϕ=0^∘ in order to catch neutrinos ahead the halo centre, and with centre placed at θ=90^∘ and ϕ=180^∘ in order to catch neutrinos behind the halo centre in the direction of the CDM flow (see the squares drawn in Figure <ref>). Then, we calculate the profiles of these neutrino subsets. The results for haloes with masses between 1×10^14 and 9×10^14 h^-1M_⊙, together with the profile integrated in angles, are shown in Figure <ref>. The effect becomes noticeable beyond a radius of 5 h^-1Mpc where the three lines diverge: the dashed one, which corresponds to neutrinos ahead, increases and the dotted one, which corresponds to neutrinos behind, decreases, while the solid one, which corresponds to all neutrinos within the halo, stays in between. To quantify the asymmetry effect we have integrated the profiles from radius equal to 6 up to 20 h^-1Mpc, for both ahead and behind profiles. The result is an overdensity asymmetry above 0.1 (Figure <ref>) which remains constant over the dark matter halo mass and slightly increases for masses close to 10^15 h^-1M_⊙. §.§.§ Wakes in neutrino haloes In <cit.>, it was shown that the peculiar motion of haloes causes neutrino particles to accumulate behind the moving halo, generating wakes that slow the halo down due to dynamical friction, and reducing the cross-correlation between neutrinos and CDM. Here, using the HR-DEMNUni simulations we look for these wakes at the centre of the haloes. To this aim, we use the same procedure as before: find the halo, determine the CDM mean velocity vector and rotate all the system such that the CDM mean velocity vector matches the x-axis at the centre of the halo. For each rotated halo, we select a box of 6 h^-1Mpc side centered at the halo centre and calculate the neutrino density distribution over xy-axis. This time, neutrino overdensity distributions are computed using a standard Gaussian kernel. One example of the final result is shown in Figure <ref>, where the black star marks the position of the maximum in the overdensity distribution. This plot is a clear example of the effect, some wakes can be observed and the maximum of the overdensity is displaced behind the centre along the CDM velocity direction which is illustrated by the grey arrow. We repeated this analysis for hundreds of haloes in the simulation and saved the neutrino overdensity maximum value together with its location. These values are shown in Figure <ref> as a function of the halo mass. The mean value of each parameter and the error bars corresponding to a 1σ error are represented. One could say that the effect is observable for halo masses greater than 3×10^14 h^-1M_⊙. For those haloes, the mean position of the neutrino overdensity maximum is at (-0.06,-0.06) h^-1Mpc. § DISCUSSION AND CONCLUSIONS High-resolution cosmological simulations open up the possibility to analyze in detail specific high precision features of the cosmological model. In this work we have focussed on the properties of neutrino haloes, looking for asymmetries in regions away from the centre of the haloes but also in regions close to it. To this end, we have used the HR-DEMNUni simulations. 
These kinds of studies are motivated by the current context of new experiments aimed at improving our understanding of the Universe. If work continues in this direction, it is possible that, in the future, neutrinos from the cosmic background will be detected. Evidence of the effects studied in the present work would help to confirm that neutrinos are indeed being detected. The simulations used in this work allow us to obtain more reliable results, as they contain neutrinos with a total mass closer to the current limits established by cosmology. We have obtained angle-averaged neutrino profiles for several halo masses, using a kernel to obtain a smoother shape, similarly to previous works, but this time for a lower neutrino mass. We have shown that, due to the higher thermal velocity of lower-mass neutrinos, the clustering even in haloes of M∼10^14 h^-1M_⊙ is more suppressed. The lower the neutrino mass, the more diffuse the neutrino halo core is, and problems then arise when fitting to the proposed formula. We place the limit of applicability of this formula at halo masses of 4 ×10^14 h^-1M_⊙. This result suggests that low-mass neutrinos can be understood mostly as light-like particles in the surroundings of dark matter haloes with masses below 4 ×10^14 h^-1M_⊙. With simulations implementing lower neutrino masses, matching the current cosmological limits close to the 0.06 eV value implied by the mass splittings measured in underground laboratories, this limit could be pushed further. We have also investigated asymmetries that can arise when the neutrino profiles are not averaged over the solid angle. As we know, cosmic neutrinos cluster around virialized dark matter haloes as a consequence of the huge gravitational field produced by clusters of 10^11 - 10^15 h^-1M_⊙ in mass. Since neutrinos fall in following the gravitational field, if this field is not sufficiently isotropic we could find anisotropies in the neutrino field as well. In this sense, in the region located ahead of the dark matter halo centre we expected a higher neutrino density than in the region located behind it, with respect to the direction of CDM motion, i.e., along the anisotropy. In this work, we have shown that the predicted neutrino front-loading effect is indeed present in the simulations, and we have quantified it. It is visible for R>6 h^-1Mpc for a solid angle Δθ=60^∘, Δϕ=120^∘ centred on the centre of the halo. We have also quantified its mass dependence: the result is an average overdensity of around 0.1, independent of the halo mass. Only for haloes with masses close to 10^15 h^-1M_⊙ is some enhancement of the effect observed. We found that the HR-DEMNUni simulations have enough resolution to search for the wakes in neutrino profiles recently proposed in <cit.>. These wakes are expected in regions close to the centres of the haloes, due to the relative motion of neutrinos and CDM. We searched for them in the innermost regions of the haloes, in a box of 6 h^-1Mpc side placed at the centre of the halo, and we found that these wakes only occur in the most massive haloes (M_ h > 3×10^14 h^-1M_⊙). An average displacement of the maximum of the neutrino overdensity distribution of 0.06 h^-1Mpc was obtained. An experimental detection of all these effects would cement the case for neutrinos being observed in the sky and, in addition, a detailed study of these effects would in turn unveil the nature of neutrinos and of the Universe when it was only one second old.
Finally, we conclude that these very fascinating features will be seen once cosmic neutrino background experiments manage to distinguish direction and angle. The DEMNUni simulations were carried out in the framework of “The Dark Energy and Massive Neutrino Universe" project, using the Tier-0 IBM BG/Q Fermi machine, the Tier-0 Intel OmniPath Cluster Marconi-A1 and Marconi-100 of the Centro Interuniversitario del Nord-Est per il Calcolo Elettronico (CINECA). CC acknowledges a generous CPU and storage allocation by the Italian Super-Computing Resource Allocation (ISCRA) as well as from the coordination of the “Accordo Quadro MoU per lo svolgimento di attività congiunta di ricerca Nuove frontiere in Astrofisica: HPC e Data Exploration di nuova generazione”, together with storage from INFN-CNAF and INAF-IA2. This work was supported by the “Center of Excellence Maria de Maeztu 2020-2023” award to the ICCUB (CEX2019-000918-M) funded by MCIN/AEI/10.13039/501100011033. Funding for the work of RJ was partially provided by project PID2022-141125NB-I00. 10 DESI DESI Collaboration, A. G. Adame, J. Aguilar, S. Ahlen, S. Alam, D. M. Alexander et al., DESI 2024 VI: Cosmological Constraints from the Measurements of Baryon Acoustic Oscillations, https://doi.org/10.48550/arXiv.2404.03002arXiv e-prints (2024) arXiv:2404.03002 [https://arxiv.org/abs/2404.030022404.03002]. Fer1 F. Simpson, R. Jimenez, C. Pena-Garay and L. Verde, Strong Bayesian evidence for the normal neutrino hierarchy, https://doi.org/10.1088/1475-7516/2017/06/029 2017 (2017) 029 [https://arxiv.org/abs/1703.034251703.03425]. Fer2 R. Jimenez, C. Pena-Garay, K. Short, F. Simpson and L. Verde, Neutrino masses and mass hierarchy: evidence for the normal hierarchy, https://doi.org/10.1088/1475-7516/2022/09/006 2022 (2022) 006 [https://arxiv.org/abs/2203.142472203.14247]. licia B. Audren, E. Bellini, A. J. Cuesta, S. G. A. Gontcho, J. Lesgourgues, V. Niro et al., Robustness of cosmic neutrino background detection in the cosmic microwave background, https://doi.org/10.1088/1475-7516/2015/03/036 2015 (2015) 036 [https://arxiv.org/abs/1412.59481412.5948]. Villaescusa-Navarro_2013 F. Villaescusa-Navarro, S. Bird, C. Pena-Garay and M. Viel, Non-linear evolution of the cosmic neutrino background, https://doi.org/10.1088/1475-7516/2013/03/019Journal of Cosmology and Astroparticle Physics 2013 (2013) 019. Zhu H.-M. Zhu, U.-L. Pen, X. Chen and D. Inman, Probing neutrino hierarchy and chirality via wakes, https://doi.org/10.1103/PhysRevLett.116.141301Phys. Rev. Lett. 116 (2016) 141301. LoVerde C. Nascimento and M. Loverde, Neutrino winds on the sky, https://doi.org/10.1088/1475-7516/2023/11/036 2023 (2023) 036 [https://arxiv.org/abs/2307.000492307.00049]. carbone_2016 C. Carbone, M. Petkova and K. Dolag, DEMNUni: ISW, Rees-Sciama, and weak-lensing in the presence of massive neutrinos, https://doi.org/10.1088/1475-7516/2016/07/034 2016 (2016) 034 [https://arxiv.org/abs/1605.020241605.02024]. castorina_2015 E. Castorina, C. Carbone, J. Bel, E. Sefusatti and K. Dolag, DEMNUni: the clustering of large-scale structures in the presence of massive neutrinos, https://doi.org/10.1088/1475-7516/2015/07/043 7 (2015) 043 [https://arxiv.org/abs/1505.071481505.07148]. moresco_2016 M. Moresco, F. Marulli, L. Moscardini, E. Branchini, A. Cappi, I. Davidzon et al., The VIMOS Public Extragalactic Redshift Survey (VIPERS) . 
Exploring the dependence of the three-point correlation function on stellar mass and luminosity at 0.5 <z < 1.1, https://doi.org/10.1051/0004-6361/201628589 604 (2017) A133 [https://arxiv.org/abs/1603.089241603.08924]. zennaro_2018 M. Zennaro, J. Bel, J. Dossett, C. Carbone and L. Guzzo, Cosmological constraints from galaxy clustering in the presence of massive neutrinos, https://doi.org/10.1093/mnras/sty670 477 (2018) 491 [https://arxiv.org/abs/1712.028861712.02886]. ruggeri_2018 R. Ruggeri, E. Castorina, C. Carbone and E. Sefusatti, DEMNUni: massive neutrinos and the bispectrum of large scale structures, https://doi.org/10.1088/1475-7516/2018/03/003 2018 (2018) 003 [https://arxiv.org/abs/1712.023341712.02334]. bel_2019 J. Bel, A. Pezzotta, C. Carbone, E. Sefusatti and L. Guzzo, Accurate fitting functions for peculiar velocity spectra in standard and massive-neutrino cosmologies, https://doi.org/10.1051/0004-6361/201834513 622 (2019) A109 [https://arxiv.org/abs/1809.093381809.09338]. parimbelli_2021 G. Parimbelli, S. Anselmi, M. Viel, C. Carbone, F. Villaescusa-Navarro, P. S. Corasaniti et al., The effects of massive neutrinos on the linear point of the correlation function, https://doi.org/10.1088/1475-7516/2021/01/009 2021 (2021) 009 [https://arxiv.org/abs/2007.103452007.10345]. parimbelli_2022 G. Parimbelli, C. Carbone, J. Bel, B. Bose, M. Calabrese, E. Carella et al., DEMNUni: comparing nonlinear power spectra prescriptions in the presence of massive neutrinos and dynamical dark energy, https://doi.org/10.1088/1475-7516/2022/11/041 2022 (2022) 041 [https://arxiv.org/abs/2207.136772207.13677]. Guidi_2022 M. Guidi, A. Veropalumbo, E. Branchini, A. Eggemeier and C. Carbone, Modelling the next-to-leading order matter three-point correlation function using FFTLog, https://doi.org/10.1088/1475-7516/2023/08/066 2023 (2023) 066 [https://arxiv.org/abs/2212.073822212.07382]. Baratta_2022 P. Baratta, J. Bel, S. Gouyou Beauchamps and C. Carbone, COVMOS: a new Monte Carlo approach for galaxy clustering analysis, https://doi.org/10.48550/arXiv.2211.13590arXiv e-prints (2022) arXiv:2211.13590 [https://arxiv.org/abs/2211.135902211.13590]. Gouyou_Beauchamps_2023 S. Gouyou Beauchamps, P. Baratta, S. Escoffier, W. Gillard, J. Bel, J. Bautista et al., Cosmological inference including massive neutrinos from the matter power spectrum: biases induced by uncertainties in the covariance matrix, https://doi.org/10.48550/arXiv.2306.05988arXiv e-prints (2023) arXiv:2306.05988 [https://arxiv.org/abs/2306.059882306.05988]. SHAM-Carella_in_prep E. Carella, C. Carbone, M. Zennaro, G. Girelli, M. Bolzonella, F. Marulli et al., DEMNUni: The galaxy-halo connection in the presence of dynamical dark energy and massive neutrinos, In prep . roncarelli_2015 M. Roncarelli, C. Carbone and L. Moscardini, The effect of massive neutrinos on the Sunyaev-Zel'dovich and X-ray observables of galaxy clusters, https://doi.org/10.1093/mnras/stu2546 447 (2015) 1761 [https://arxiv.org/abs/1409.42851409.4285]. fabbian_2018 G. Fabbian, M. Calabrese and C. Carbone, CMB weak-lensing beyond the Born approximation: a numerical approach, https://doi.org/10.1088/1475-7516/2018/02/050 2018 (2018) 050 [https://arxiv.org/abs/1702.033171702.03317]. Beatriz_2024 B. Hernández-Molinero, C. Carbone, R. Jimenez and C. Peña Garay, Cosmic background neutrinos deflected by gravity: DEMNUni simulation analysis, https://doi.org/10.1088/1475-7516/2024/01/006 2024 (2024) 006 [https://arxiv.org/abs/2301.124302301.12430]. kreisch_2019 C. D. Kreisch, A. Pisani, C. 
Carbone, J. Liu, A. J. Hawken, E. Massara et al., Massive neutrinos leave fingerprints on cosmic voids, https://doi.org/10.1093/mnras/stz1944 488 (2019) 4413 [https://arxiv.org/abs/1808.074641808.07464]. schuster_2019 N. Schuster, N. Hamaus, A. Pisani, C. Carbone, C. D. Kreisch, G. Pollina et al., The bias of cosmic voids in the presence of massive neutrinos, https://doi.org/10.1088/1475-7516/2019/12/055 2019 (2019) 055 [https://arxiv.org/abs/1905.004361905.00436]. verza_2019 G. Verza, A. Pisani, C. Carbone, N. Hamaus and L. Guzzo, The void size function in dynamical dark energy cosmologies, https://doi.org/10.1088/1475-7516/2019/12/040 2019 (2019) 040 [https://arxiv.org/abs/1906.004091906.00409]. verza_2022a G. Verza, C. Carbone and A. Renzi, The Halo Bias inside Cosmic Voids, https://doi.org/10.3847/2041-8213/ac9d98 940 (2022) L16 [https://arxiv.org/abs/2207.040392207.04039]. verza_2022b G. Verza, C. Carbone, A. Pisani and A. Renzi, DEMNUni: disentangling dark energy from massive neutrinos with the void size function, https://doi.org/10.48550/arXiv.2212.09740arXiv e-prints (2022) arXiv:2212.09740 [https://arxiv.org/abs/2212.097402212.09740]. Verza_etal_2024 G. Verza, C. Carbone, A. Pisani, C. Porciani and S. Matarrese, The universal multiplicity function: counting halos and voids, https://doi.org/10.48550/arXiv.2401.14451arXiv e-prints (2024) arXiv:2401.14451 [https://arxiv.org/abs/2401.144512401.14451]. Vielzeuf_2022 P. Vielzeuf, M. Calabrese, C. Carbone, G. Fabbian and C. Baccigalupi, DEMNUni: the imprint of massive neutrinos on the cross-correlation between cosmic voids and CMB lensing, https://doi.org/10.1088/1475-7516/2023/08/010 2023 (2023) 010 [https://arxiv.org/abs/2303.100482303.10048]. Cuozzo2022 V. Cuozzo, C. Carbone, M. Calabrese, E. Carella and M. Migliaccio, DEMNUni: cross-correlating the nonlinear ISWRS effect with CMB-lensing and galaxies in the presence of massive neutrinos, https://doi.org/10.48550/arXiv.2307.15711arXiv e-prints (2023) arXiv:2307.15711 [https://arxiv.org/abs/2307.157112307.15711]. zennaro_2017 M. Zennaro, J. Bel, F. Villaescusa-Navarro, C. Carbone, E. Sefusatti and L. Guzzo, Initial conditions for accurate N-body simulations of massive neutrino cosmologies, https://doi.org/10.1093/mnras/stw3340 466 (2017) 3244 [https://arxiv.org/abs/1605.052831605.05283]. planck2013 Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud, M. Ashdown et al., Planck 2013 results. XVI. Cosmological parameters, https://doi.org/10.1051/0004-6361/201321591 571 (2014) A16 [https://arxiv.org/abs/1303.50761303.5076]. davis_1985_fof M. Davis, G. Efstathiou, C. S. Frenk and S. D. M. White, The evolution of large-scale structure in a universe dominated by cold dark matter, https://doi.org/10.1086/163168 292 (1985) 371. springel_2001_gadeget V. Springel, N. Yoshida and S. D. M. White, GADGET: a code for collisionless and gasdynamical cosmological simulations, https://doi.org/10.1016/S1384-1076(01)00042-2New Astronomy 6 (2001) 79 [https://arxiv.org/abs/astro-ph/0003162astro-ph/0003162]. dolang_2009_gadget K. Dolag, S. Borgani, G. Murante and V. Springel, Substructures in hydrodynamical cluster simulations, https://doi.org/10.1111/j.1365-2966.2009.15034.x 399 (2009) 497 [https://arxiv.org/abs/0808.34010808.3401]. kernel D. Reed, F. Governato, L. Verde, J. Gardner, T. Quinn, J. 
Stadel et al., Evolution of the density profiles of dark matter haloes, https://doi.org/10.1111/j.1365-2966.2005.08612.xMonthly Notices of the Royal Astronomical Society 357 (2005) 82 [https://arxiv.org/abs/https://academic.oup.com/mnras/article-pdf/357/1/82/3504261/357-1-82.pdfhttps://academic.oup.com/mnras/article-pdf/357/1/82/3504261/357-1-82.pdf].
http://arxiv.org/abs/2407.13293v1
20240718084705
Krylov complexity of fermion chain in double-scaled SYK and power spectrum perspective
[ "Takanori Anegawa", "Ryota Watanabe" ]
hep-th
[ "hep-th", "cond-mat.str-el", "quant-ph" ]
§ INTRODUCTION The study of chaos and gravity has long played a pivotal role in physics. Chaos theory provides a framework for understanding systems sensitive to initial conditions, while gravity governs the large-scale structure of the universe and celestial motion. The intersection of these fields is yielding new insights, particularly through concepts like the AdS/CFT correspondence and holography <cit.>. The relationship between chaos and gravity is prominent in black hole physics. Black hole dynamics is believed to be highly chaotic and, via the holographic principle, closely linked to quantum information scrambling. In quantum systems, operators evolve in time in the Heisenberg picture, and in complex quantum systems initially simple operators are expected to become highly complicated over time. Several quantities have been devised to quantitatively evaluate the complexity of such operators. The out-of-time-order correlator (OTOC) <cit.> quantifies the time evolution of a small perturbation in chaotic quantum systems. An exponential time dependence of the OTOC is taken as a signal of chaos, and its exponent λ_ L is regarded as a quantum counterpart of the classical Lyapunov exponent. In <cit.>, it was shown that there exists a universal upper limit, λ_ L≤ 2π T, for general finite-temperature quantum many-body systems. This upper bound has the same form as the surface gravity of a black hole, and a quantum system dual to a black hole is expected to saturate this chaos bound <cit.>. This chaos bound is also a refinement of the fast scrambling conjecture <cit.> that the fastest scrambling time scales as t_* ∼log S, where S is the entropy of the system. It has also been suggested that the thermodynamic well-definedness of the OTOC leads to an upper bound on the energy dependence of the Lyapunov exponent for general physical systems <cit.>. In recent years, Krylov complexity has received much attention as a quantitative measure of operator complexity <cit.>. It quantitatively evaluates how an operator 𝒪 spreads through the operator space, or more precisely, the Krylov subspace. In the calculation of Krylov complexity, the time evolution of an operator is mapped onto the time evolution of a fictitious one-dimensional chain system. The hopping amplitudes of this chain are called Lanczos coefficients, and they govern the time evolution of the operator. The Krylov complexity of an operator can also be completely determined from its auto-correlation function, using the transformation law between the moments of the auto-correlation function and the Lanczos coefficients.
In quantum many-body systems, the Krylov complexity of an operator is expected to grow at most exponentially, and the growth exponent is expected to give an upper bound on the OTOC exponent for that operator. It has also been proposed to classify phases by using Krylov complexity as an order variable <cit.>. In the study of chaos and complexity, The Sachdev-Ye-Kitaev (SYK) model <cit.> has played a central role. This is a (0+1)-dimensional quantum system of N Majorana fermions with Gaussian random interactions. This model saturates chaos bound at low temperatures and in large N limit, and is considered to be a toy model of a black hole in the context of AdS/CFT correspondence <cit.>. In the IR regime, the SYK model is effectively described by the Schwarzian theory and, at the action level, relates to the Jackiw-Teitelboim (JT) gravity model <cit.>. It is also shown that Krylov complexity grows exponentially in time for the single Majorana fermion operator of the SYK model <cit.>. The growth rate is found to saturate the chaos bound in the low-temperature regime. Recently, the double-scaled SYK (DSSYK) model has been actively studied since this model allows us to analytically compute several quantities such as the partition function and correlation functions of random fermion chain operators <cit.>. The DSSYK model is also confirmed to saturate the chaos bound in the regime where the model approaches the conventional low-temperature SYK model. On the other hand, unlike the conventional SYK model, the DSSYK model is likely to exhibit a property called hyperfast scrambling in an appropriate parameter region. This hyperfast growth resembles the volume growth in de Sitter spacetime, and the relationship with de Sitter spacetime is being actively studied <cit.>. In another direction, the relationship between the DSSYK model and Random Matrix Theories (RMTs) has been studied. In <cit.>, the DSSYK model was related to a two-random matrices theory in which the Hamiltonian and the fermion chain are regarded as matrices <cit.>. In <cit.>, the solvable limit of this two-matrix model was discussed. In particular, the Gaussian Unitary Ensemble appears as the simplest limit. In this connection, in <cit.>, Krylov complexity for GUE random matrix theory was computed and found to show an initial exponential growth at a rate that saturates the chaos bound. In <cit.>, Krylov complexity of a single Majorana fermion operator was studied in the DSSYK model. However, the behavior of Krylov complexity for the random fermion chain operator has not yet been revealed. In this paper, we compute Krylov complexity of the fermion chain operators and investigate whether it bounds the time dependence of the OTOC. We also discuss the hyperfast property in DSSYK from the time dependence of Krylov complexity. Strictly speaking, in the parameter regime we will deal with, the Lanczos coefficients initially grow linearly and then asymptotically approach a constant value. In other words, the Krylov complexity initially shows a typical exponential growth. More importantly, in general, the scrambling time estimated here is independent of the number of degrees of freedom. This is a typical hyperfast behavior. The behavior of the Krylov complexity or Lanczos coefficients can be characterized by the behavior of the power spectrum, which is a Fourier transform of the auto-correlation function of the operator. 
However, conventional discussions of the power spectrum assume a continuous energy spectrum, and a systematic understanding of the power spectrum, including the discrete case, is still insufficient. In this paper, we also aim to deepen the general and systematic understanding of the behavior of Krylov complexity and Lanczos coefficients by analyzing the toy power spectrum. This will greatly assist in understanding the behavior of the Krylov complexity and Lanczos coefficient in the DSSYK model. This paper is organized as follows. In Sec. <ref>, we review the DSSYK model and the relationship between the model and random matrix theories. Next, we overview the definition of Krylov complexity. In Sec. <ref>, we analyze the Krylov complexity of the fermion chain operator of the DSSYK model in various parameter regions. The scrambling time is also discussed. In Sec. <ref>, we investigate the Lanczos coefficients and Krylov complexity using an toy power spectrum and obtain a systematic understanding of their behavior. Possible constraints from the physical energy spectrum are also discussed. Section <ref> is for the summary and discussion. § REVIEW §.§ Review of the double-scaled SYK model §.§.§ Definition and chord diagrams The SYK model is a theory that has received a great deal of attention in the context of low dimensional quantum gravity It is a model in which N flavors of Majorana fermions ψ_i (i=1,⋯ N) have the following interactions H=i^p/2∑_1≤ i_1<⋯<i_p≤ NJ_i_1⋯ i_pψ_i_1⋯ψ_i_p . where {ψ_i,ψ_j}=2δ_ij. Here J_i_1⋯ i_p is a random coupling constant that follows a Gaussian distribution and satisfies ⟨ J_i_1⋯ i_p⟩ = 0 , ⟨ J_i_1⋯ i_p^2⟩ = [ N; p ]^-1𝒥^2 . The double-scaled SYK (DSSYK) model is defined with the scaling limit p ∼√(N). Specifically, N →∞ , p →∞ , λ≡2 p^2/N fixed . In this setting, <cit.> (and related paper <cit.>) gave an exact expression for an ensembled partition function using a technique called chord diagram. In the small β expansion of the partition function, only even orders survive from Wick's theorem on random averages of coupling constants, ⟨ Z(β) ⟩ = ⟨ Tr e^-β H⟩ = ∑_k=0^∞(-β)^2k/(2k)!m_k . where m_k≡⟨ Tr H^2k⟩ is called a moment. By using a chord diagram, the moment m_k can be computed as follows. First, since Hamiltonian includes Majorana fermions, these traces are approximately the sign of the fermion replacement. When Hamiltonians with different coupling constants swap, the sign swap is (-1)^k, where k is the number of fermions the Hamiltonian has in common. In the double scaling limit, the distribution for this k is the Poisson distribution, and the ensemble average is q ≡⟨ (-1)^k ⟩ = e^-λ . Using the above-expected values, the moment is calculated as follows. m_k = 𝒥^2k∑_π∈ G_2k q^χ(π) . where G_2k is the entire set of chord diagrams with 2k points and χ(π) is the number of intersections in the chord diagram π∈ G_2k. The above is the case of double scaling limit (<ref>). The same argument can be made for the more general scaling limit. Specifically, considering the following limit N →∞ , p →∞ , λ≡2 p^α/N fixed , the above argument is justified to the extent of α > 3/2 (See App <ref>). Using the above method, the moment is computed as follows m_k=∫_0^πdθ/2π(q,e^± 2iθ;q)_∞(2𝒥cosθ/√(1-q))^2k . where (a_1,a_2,⋯;q)_n is q-Pochhammer symbol defined by (a_1,a_2,⋯;q)_n =(a_1;q)_n(a_2;q)_n⋯ , (a;q)_n=∏_k=1^n(1-aq^k-1) , and ± in the equation means multiplying the contributions of all combinations, as in f(±)≡ f(+)f(-). 
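Both expressions for the moments can be checked directly for small k. The sketch below (a numerical sanity check of the quoted formulas, not part of the original derivation) enumerates all chord diagrams, counts crossings, and compares ∑_π q^χ(π) with the θ-integral, whose q-Pochhammer weight is truncated numerically.

```python
# Sketch: brute-force check of the two expressions for the moments given above,
# m_k / J^(2k) = sum over chord diagrams of q^(#crossings), versus the theta-integral
# with the (truncated) q-Pochhammer weight.  Feasible only for small k.
import numpy as np
from itertools import combinations

def pairings(pts):
    """All perfect matchings (chord diagrams) of an ordered list of points."""
    if not pts:
        yield []
        return
    a, rest = pts[0], pts[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, rest[i])] + tail

def crossings(match):
    """Two chords (a,b), (c,d) with a<b and c<d cross iff a < c < b < d."""
    n = 0
    for p1, p2 in combinations(match, 2):
        (a, b), (c, d) = sorted([p1, p2])
        n += a < c < b < d
    return n

def moment_chords(k, q):
    return sum(q ** crossings(m) for m in pairings(list(range(2 * k))))

def moment_integral(k, q, jmax=200, ntheta=20001):
    theta = np.linspace(0.0, np.pi, ntheta)
    j = np.arange(jmax)
    qj = q ** j
    # (q;q)_inf * (e^{2i theta};q)_inf * (e^{-2i theta};q)_inf, truncated at jmax factors
    weight = np.prod(1.0 - q ** (j + 1)) * \
             np.prod(1.0 - 2.0 * np.outer(np.cos(2 * theta), qj) + qj ** 2, axis=1)
    integrand = weight * (2.0 * np.cos(theta)) ** (2 * k) / (1.0 - q) ** k
    return np.trapz(integrand, theta) / (2.0 * np.pi)

q = 0.4
for k in range(0, 5):
    print(k, moment_chords(k, q), round(moment_integral(k, q), 6))
# expected chord sums: 1, 1, 2+q, 5+6q+3q^2+q^3, ...  (q-deformed Catalan numbers)
```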
Then, the ensemble-averaged partition function is ⟨ Z(β) ⟩ = ∫_0^πdθ/2π(q,e^± 2iθ;q)_∞ e^-β E(θ) , E(θ)≡2𝒥cosθ/√(1-q) , where E(θ) is interpreted as the energy spectrum of the DSSYK model. From this expression, the density of states as the function of E can be read as ρ(E) ≡1/2π|dθ/dE|(q,e^± 2iθ;q)_∞ . Here, in particular, considering the limit of q→0 (λ→∞), only chord diagrams without intersections will contribute to the moment: m_k = 𝒥^2k∑_π∈ G_2k q^χ(π)→𝒥^2kC_k . where C_k=(2k)!/k!(k+1)! is the Catalan number. Therefore, in the limit q→0, the partition function can be expressed by using the modified-Bessel function ⟨ Z(β) ⟩→∑_k=0^∞(β𝒥)^2k/(2k)!C_k=I_1(2β𝒥)/β𝒥 . This also can be directly deduced from the (<ref>): ⟨ Z(β) ⟩→∫_0^πdθ/2π 4sin^2θ e^-2β𝒥cosθ = I_1(2β𝒥)/β𝒥 . Note that, <cit.> also points out, the density of states in the limit of q→0 is ρ(E) = 1/2π×1/2𝒥sinθ× 4sin^2θ = 1/2π𝒥√(4-E^2/𝒥^2) . This is just Wigner semicircle distribution. On the other hand, the limit of q→1 has also been well studied, sometimes to relate it to the large-p SYK result <cit.>. Then, the moment (<ref>) is just counting the number of elements in G_2k.[The high temperature region is implicitly considered here. The results of the limit q→1 differ slightly depending on whether the temperature is low or high <cit.>. Roughly speaking, this may be because at high temperatures, the higher-order chord diagram are suppressed, whereas at low temperatures, the higher-order diagram has some contribution and cannot be ignored.] Therefore, the partition function (<ref>) for q→1 becomes[This is consistent with the large-p SYK result <cit.> -β F/N = 1/2log 2 + 1/p^2π v[tan(π v/2)-π v/4] , β𝒥_ MS = π v/cosπ v/2 , in the high-temperature region. In this region, β𝒥_ MS∼π v follows and we can obtain -β F/N ∼1/2log 2 + (β𝒥_ MS)^2/4p^2 . Since the first term on the rhs can be ignored (this is the normalization factor of the ⟨ Z(β) ⟩), ⟨ Z(β) ⟩∼exp(β𝒥_ MS)^2/2λ . Since 𝒥 of <cit.> and 𝒥_ MS of <cit.> are related as 𝒥^2=𝒥_ MS^2/λ, the above equation is indeed consistent with (<ref>).] ⟨ Z(β) ⟩ = ∑_k=0^∞(β𝒥)^2k/(2k)!×(2k-1)!! = e^(β𝒥)^2/2 . §.§.§ Correlation functions Next, let us consider the two-point function of the following fermion chain M≡ i^p'/2∑_1≤ i_1<⋯<i_p'≤ N J'_i_1⋯ i_p'ψ_i_1⋯ψ_i_p' , ⟨ J_i_1⋯ i_p''^2⟩ = [ N; p' ]^-1 . where J'_i_1⋯ i_p' is a random coupling obeying Gaussian distribution and independent of random coupling in the original Hamiltonian. We can consider for example the two-point function of these operators at finite temperature: ⟨ Tr 1̧ M (t) e^-β H/21̧ M (0) e^-β H/2⟩ = ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩ , where M denote M(0) and β_1 ≡β/2+it , β_2 ≡β/2-it. The contraction symbol with respect to M means the random mean with respect to J'_i_1⋯ i_p'. According to <cit.>, in the double scaling limit with p' ∼√(N), the two-point function becomes ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩ = ∑_k_1,k_2(-β_1)^k_1/k_1!(-β_2)^k_2/k_2!⟨ Tr 1̧ M H^k_11̧ M H^k_2⟩ = ∑_k_1,k_2(-β_1)^k_1/k_1!(-β_2)^k_2/k_2!𝒥^k_1+k_2∑_π∈ G_k_1,k_2q^χ_HH(π)q̃^χ_HM(π) , where q̃=e^-2pp'/N is the expectation value of the phase factor that appears when solving for the intersection of the H chord and the M chord. Here, G_k_1,k_2 is the whole diagram such that k_1 and k_2 H's exist on the left and right of the M chord, respectively. Also, χ_HH(π) is the number of intersections between H chords in such a diagram π∈ G_k_1,k_2 and χ_HM(π) denotes the number of intersections between H chord and M chord in π∈ G_k_1,k_2. 
This is specifically evaluated as follows ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩ = ∫_0^π∏_j=1^2{dθ_j/2π(q,e^± 2iθ_j;q)_∞exp(-2β_j𝒥cosθ_j/√(1-q))}(q̃^2;q)_∞/(q̃e^i(±θ_1±θ_2);q)_∞ . In <cit.>, the OTOC of the fermion chain operators is also considered in the low-temperature regime, λ^3/2≪ T ≪λ^1/2, and small λ≪ 1. This is the parameter region of interest for comparison with the large-p SYK model results. Setting 𝒥=1, the Lyapunov exponent is found to be λ_ L = 2π T -4πλ^-1/2 T^2 + ⋯ , which saturates the chaos bound λ_ L≤ 2π T at leading order.[Strictly speaking, to compare with the result of the conventional large-p SYK model <cit.>, it is necessary to note the difference in the normalization of the coupling constant. Since 𝒥^2=𝒥_ MS^2/λ and 𝒥=1, the result of <cit.> can be translated as λ_ L=2π T(1-2T/𝒥_ MS) and the temperature regime T≪λ^1/2 becomes T≪𝒥_ MS. This is exactly the same as the result of the large-p SYK model <cit.>.] §.§ Relationship between RMT and the DSSYK model The relationship between matrix models and the SYK model (and low-dimensional JT gravity theory) has been discussed in various contexts <cit.>. The dynamics of the fermion chain in the DSSYK model is related to RMT as follows <cit.>. First, let us consider a two-matrix random matrix model consisting of two matrices A and B as follows Z = ∫ dA dB e^- Tr V(A,B) , where V(A,B) is the potential and the matrix size is L. Let H,M denote the Hamiltonian of the DSSYK model and the fermion chain, respectively; the correspondence is as follows H ↔ A , M ↔ B , and L = 2^N/2 with N degrees of freedom in the DSSYK model. Under this correspondence, V(A,B) depends on q,q̃ and q_M = e^-2p'^2/N in the DSSYK model. In <cit.>, the solvable limit of this two-matrix model is discussed. In particular, by setting q,q̃,q_M→ 0, this matrix theory becomes just two decoupled GUE matrices. Therefore, the eigenvalue distribution becomes a Wigner semicircle. This is consistent with (<ref>). §.§ Review of Krylov Complexity Here we briefly review the definition of Krylov complexity <cit.>. For a more extensive review, see <cit.>. In general, for an operator 𝒪(t)=e^iHt𝒪_0e^-iHt, its Krylov complexity can be defined by introducing an appropriate inner product on operator space. It is natural to choose this inner product according to the form of the two-point function under consideration; for a finite-temperature two-point function, the following is adopted (𝒪_1|𝒪_2)≡1/Z(β) Tr [e^-β H/2𝒪_1^† e^-β H/2𝒪_2] . Krylov complexity is specifically defined by the following steps. First, by the Baker-Campbell-Hausdorff formula, we can expand 𝒪(t) = ∑_k=0^∞(it)^k/k!ℒ^k𝒪_0 (ℒ≡ [H, · ]) , where we normalize the operator as (𝒪_0|𝒪_0)=1. The operator subspace spanned by {ℒ^k𝒪_0} is called the Krylov subspace. The Gram-Schmidt orthogonalization (Lanczos method) of {ℒ^k𝒪_0} is performed using this inner product on operator space. The orthogonalization algorithm is as follows. 1.  b_0≡0 , 𝒪_-1≡0 2.  For n≥1: 𝒜_n=ℒ𝒪_n-1-b_n-1𝒪_n-2 3.  Set b_n=√((𝒜_n|𝒜_n)) 4.  If b_n=0 stop; otherwise set 𝒪_n=𝒜_n/b_n and go to step 2. The b_n are called Lanczos coefficients. If the dimension K of the Krylov subspace is finite, the above algorithm ends with b_K=0. More specifically, if the system under consideration is a D-level system, the dimension of the Krylov subspace is known to satisfy K≤ D^2-D+1 <cit.>. 
This means in particular that the Lanczos coefficient will always be zero in finite steps for finite level systems. The orthonormal basis {𝒪_n} thus obtained is used to expand 𝒪(t) again: 𝒪(t) = ∑_n=0^∞ i^nφ_n(t)𝒪_n . The expansion coefficients in this case satisfy the following differential equation ∂_tφ_n(t) = b_nφ_n-1(t)-b_n+1φ_n+1(t) where b_n is Lanczos coefficients. Once b_n is obtained, the Krylov complexity of 𝒪(t) is defined by C_ K(t) ≡ 1+ ∑_n=0^∞ n|φ_n(t)|^2 by solving the differential equation above. Here, Krylov complexity is defined so that C_ K(0)=1 for convenience. Instead of performing the Gram-Schmidt method, there are other methods for obtaining the Lanczos coefficients indirectly from the moments of the two-point function. Consider the auto-correlation function C(t) = (𝒪(t)|𝒪_0) of the operator 𝒪(t) and expand it with respect to time: C(t) = ∑_n=0^∞(-1)^n/(2n)!μ_2nt^2n . The expansion coefficient μ_2n is called the moment. It is known that moments and Lanczos coefficients correspond to each other as in b_1^2nb_2^2(n-1)⋯ b_n^2 = D_n , D_n ≡(μ_i+j)_0≤ i,j≤ n . Using this, the Lanczos coefficients can be obtained from the moments as follows. b_n^2 = D_n-2D_n/D_n-1^2 (D_-1=1) Other sequential algorithms for obtaining Lanczos coefficients from moments are also known. We will use these methods in our numerical analyses. In quantum many-body systems, it is conjectured that the Lanczos coefficients b_n asymptotically grow at most linearly b_n∼α n. This linear growth of the Lanczos coefficients corresponds to an exponential growth of Krylov complexity C_ K(t)∼ e^2α t. The growth exponent α is expected to give an upper bound on the OTOC exponent λ_ L of the operator under consideration as λ_ L≤ 2α. Therefore, by examining the Krylov complexity, we can obtain constraints on the OTOC index. Since the Krylov complexity can be determined by the information in the two-point function, examining its behavior is relatively easy compared to computing OTOC. The asymptotic behavior of the Lanczos coefficients corresponds to the tail behavior of the power spectrum Φ(ω) (the Fourier transform of the auto-correlation function) Φ(ω) ≡∫_-∞^∞ dt e^-iω tC(t) , where we normalize the two-point function as C(0)=1. Many previous studies have investigated the relationships between the behavior of the Lanczos coefficients and that of the power spectrum. For example, when the power spectrum is continuous, the following are known: * Linear growth of the Lanczos coefficients, b_n = α n, corresponds to <cit.> Φ(ω) = π/α sech(πω/2α) . More generally, asymptotic linear growth b_n∼α n (n→∞) corresponds to the exponential decay of the tail of the power spectrum, Φ(ω) ∼ e^-π|ω|/2α . This can be translated as the existence of the poles of the auto-correlation function at t=±iπ/2α. * The saturation b_n→ b (n→∞) of the Lanczos coefficient corresponds to the fact that Φ(ω) has non-zero value only at [-2b,2b] <cit.>. Conversely, the saturation value of the Lanczos coefficients can be determined from the information in support of the power spectrum. In particular, when the Lanczos coefficients are perfectly constant b_n=b, the moments are given as μ_2n = b^2nC_n by the Catalan number C_n=(2n)!/(n+1)!n! <cit.>. The corresponding auto-correlation function is C(t) = J_1(2bt)/bt , where J_n is the Bessel function of the first kind. Then, the power spectrum becomes Φ(ω) = √(4b^2-ω^2)/b^2 θ(2b-|ω|) , which actually has value only at [-2b,2b] and this is Wigner semicircle itself. 
* In some systems, the Lanczos coefficients show staggering. This is a situation in which the Lanczos coefficients b_n behave in an oscillatory manner such that they appear to be on two separate curves, depending on whether n is even or odd, rather than one smooth curve. In <cit.> the following conditions on the power spectrum are proposed for the absence of staggering. (I) Φ(ω) is finite at ω=0, i.e., 0<Φ(ω=0)<∞. (II) Φ'(ω) is a continuous function of ω over the support of Φ(ω). On the other hand, when the power spectrum is discrete, the relationship between the power spectrum and the Lanczos coefficient is less well understood. It is pointed out in <cit.> that when the power spectrum is expressed as the sum of a finite number of delta function peaks, the Lanczos coefficients will eventually decay to zero and the Lanczos algorithm terminates in a finite number of steps. Although the behavior of Lanczos coefficients and Krylov complexity in systems with finite degrees of freedom has been explored by studies such as <cit.>, a systematic understanding using the power spectrum has not yet been obtained. In Sec. <ref>, we try to give a systematic understanding of the relationships between the behavior of the Lanczos coefficients and that of the power spectrum. A prior study of Krylov complexity relevant to our paper is the analysis in <cit.>. Here, Krylov complexity for the operator B is calculated in two-sided RMT as introduced in section <ref>. More specifically, this analysis is done in GUE. It was found that the Lanczos coefficients grow linearly at the initial stage and then saturate to a constant value. This saturation can be related to the fact that the spectrum of the theory is a Wigner semicircle, which has a bounded support.[For Krylov complexity, the energy density ρ(ω) is not a directly relevant quantity because Krylov complexity is determined from the power spectrum Φ(ω), the Fourier transform of the two-point function. This Φ(ω) is usually different from the actual energy density of the theory. However, if the energy density ρ(ω) has a support, we can expect the behavior of the Lanczos coefficients and Krylov complexity at late stages. This point will be discussed later in Sec. <ref>.] This analysis is equivalent to computing the Krylov complexity of the fermion chain in the q → 0 DSSYK model. In Sec. <ref>, we compute the Krylov complexity of fermion chain in DSSYK model for q→0, and essentially we compute the same thing. However, our analysis will move away from this limit and analyze the Krylov complexity of fermion chain in the DSSYK model in another limit. This means we will move away from simple GUE case on the RMT side. § KRYLOV COMPLEXITY OF FERMION CHAIN IN THE DSSYK MODEL §.§ Krylov complexity of fermion chain From the moments of the two-point function (<ref>), we calculate the Lanczos coefficients and analyze the Krylov complexity of the fermion chain operator. However, since its general expression is complicated, we mainly consider the limit which is easy to analyze. In the following, we set 𝒥=1. §.§.§ In the case of q, q̃→0 In this case, only the chord diagram survives such that there is no intersection of the H chord and the M chord, as can be seen from the (<ref>). In other words, the diagram is completely divided by the M chord and the two-point function factorizes to the product of the two partition functions: ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩→⟨ Tr e^-β_1 H⟩⟨ Tr e^-β_2 H⟩ (q̃→0) . 
This is confirmed by the fact that the last factor of (<ref>) is 1 with q̃→0, and the integral is completely separated and becomes the product of two partition functions.[Strictly speaking, q̃→ 0 limit yields the following. (q̃^2;q)_∞/(q̃e^i(±θ_1±θ_2);q)_∞∼ 1 - 4 cosθ_1 cosθ_2 q̃^2/1-q+O(q̃^4) Suppose the result of the θ integration with respect to the second term on the right-hand side is non-zero and finite. Then, the partition function factorizes in the range q̃^2 ≪ 1-q.] Now, if we also impose q→0 to (<ref>), the partition function can be expressed using the deformed Bessel function as we have already seen, as in (<ref>), so (<ref>) becomes ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩→I_1(2β_1)/β_1I_1(2β_2)/β_2 (q,q̃→0) . This leads back to the prior research of the Krylov complexity in RMT <cit.>. Since the spectrum is continuous and bounded, the power spectrum of the two-point function is also continuous and bounded,[This point is discussed below in Sec. <ref>.] and the Lanczos coefficient always settles to a constant value. It has been confirmed that, in the same reference, depending on the finite temperature β, the Lanczos coefficients initially exhibit a linear increase b_n ∼α n, and their slope can be well approximated by α = π/β. In Fig. <ref>, we show the power spectrum (<ref>) of the two-point function (<ref>). The solid line is β sech(βω/2). As we move to lower temperatures, the power spectrum behaves more like β sech(βω/2) over a wider ω range. This is reflected in the longer linear increase of the Lanczos coefficient. On the other hand, the power spectrum has a value only in the range [-4,4], independent of temperature. This implies that the asymptotic value of the Lanczos coefficient is constant regardless of temperature. When q̃→0, the OTOC of the fermion chain operator factorizes into smaller correlation functions and shows no exponential time dependence. This OTOC behavior is similar to the results of previous studies in GUE random matrix theory <cit.>. In this case, the bound λ_ L≤ 2α of the OTOC exponent by the exponential growth rate α of the Krylov complexity is trivially satisfied. §.§.§ In the case of q, q̃→ 1^- with q̃=q^m Now we consider the case where q, q̃→ 1^- with q̃=q^m. In terms of λ, this corresponds to λ→0. The form of the two-point function is examined in detail in <cit.> and the results depend on the temperature regime considered. The choice of q̃=q^m with m an integer corresponds to considering a fermion chain operator M with p'=mp.[If q and q̃ are taken independently and q is left unchanged and q̃→1, the two-point function becomes ⟨ Z(β_1+β_2)⟩ which is the partition function itself ⟨ Z(β)⟩ and is independent of time. Therefore, in this case, Krylov complexity of the fermion chain does not grow.] In the low temperature, one expects that the conformal symmetry emerges and the dimension of M becomes m. §.§.§ Low temperature regime When λ^-3/2≫β≫λ^-1/2, and t≪λ^-3/2, the two-point function becomes <cit.> ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩→ (-i∂_t)^2m-21/cosh^2(π t/β) up to numerical coefficients independent of t. This two-point function is the same one used when calculating for Krylov complexity in a particular conformal field theory <cit.>. They considered free massless scalar fields in general dimensions. The conformal dimension Δ of the free scalar field corresponds to m in (<ref>). According to their results, Krylov complexity from (<ref>) grows exponentially as C_ K(t) ∼ e^2α t with α=π/β. 
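For concreteness, the value α=π/β can be checked by feeding the m=1 correlator C(t)=1/cosh^2(π t/β) into the determinant relation b_n^2 = D_n-2D_n/D_n-1^2 reviewed earlier. The SymPy sketch below is illustrative only (the truncation order is arbitrary); it works in units of π/β, where the output is b_n=√(n(n+1)), approaching (π/β)n at large n, consistent with α=π/β.

import sympy as sp

# autocorrelation C(t) = 1/cosh^2(pi t/beta); use tau = pi t/beta so that the
# Lanczos coefficients come out in units of pi/beta
tau = sp.symbols('tau')
n_lanczos = 8
order = 2 * n_lanczos + 2
C = sp.series(1 / sp.cosh(tau)**2, tau, 0, order).removeO()

# moments: C(tau) = sum_n (-1)^n mu_{2n} tau^{2n}/(2n)!, odd moments vanish
mu = [sp.Integer(0)] * order
for n in range(0, order, 2):
    mu[n] = (-1)**(n // 2) * sp.factorial(n) * C.coeff(tau, n)

# Hankel determinants D_n = det(mu_{i+j}) and b_n^2 = D_{n-2} D_n / D_{n-1}^2, with D_{-1} = 1
D = {-1: sp.Integer(1)}
for n in range(0, n_lanczos + 1):
    D[n] = sp.Matrix(n + 1, n + 1, lambda i, j: mu[i + j]).det()

for n in range(1, n_lanczos + 1):
    b_n = sp.sqrt(D[n - 2] * D[n] / D[n - 1]**2)
    print(n, float(b_n))   # equals sqrt(n(n+1)) here, i.e. b_n -> (pi/beta) n at large n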
In <cit.>, the OTOC of the fermion chain operators is considered in the low-temperature regime, λ^3/2≪ T ≪λ^1/2, and small λ≪ 1. The Lyapunov exponent is found as λ_ L = 2π T -4πλ^-1/2 T^2 + ⋯, so the bound λ_ L≤ 2α holds. Let us mention to what extent the approximation used in (<ref>) is valid and how Krylov complexity is expected to behave in very late time. The expression (<ref>) of the two-point function is obtained by taking only leading order term. For very late time, this approximation breaks down. Therefore, what can be known from (<ref>) is the early time behavior of Krylov complexity, and the very late time behavior cannot be obtained from this approximation. To understand the very late time behavior of Krylov complexity, which is outside the valid range of the approximation in (<ref>), it is necessary to consider the full two-point function. Although it is difficult to perform detailed calculations from the full two-point function specifically, the nature of the energy spectrum can provide information on the asymptotic behavior of the Lanczos coefficients. When λ≪ 1, the range of the energy spectrum becomes [-2/√(λ),2/√(λ)].[This comes from the small λ approximation of maximum energy for DSSYK (<ref>).] Although λ≪ 1, due to the low temperature condition of λ^3/2≪ T ≪λ^1/2, λ must in turn satisfy T^2≪λ≪ T^2/3 for a fixed temperature T. Therefore, λ takes a small but finite value, and the energy spectrum remains bounded. Then, it follows that the power spectrum of the full two-point function will have compact support with the size of order O(1/√(λ)). Note here that this does not depend on the degrees of freedom N. The fact that the power spectrum has a compact support of size O(1/√(λ)) suggests that if the Lanczos coefficients are calculated from the full two-point function, the growth of the Lanczos coefficient saturates at b=O(1/√(λ)). Then, as discussed later in Sec. <ref>, the exponential growth of Krylov complexity ends at the time O(βlogβ/√(λ)) and it is expected to change to linear growth after that. Notice that the time at which the change of the behavior occurs is independent of N, and in particular is smaller than the conventional fast scrambling time scale of O(log N). Also, again, it should be noted that this behavior of the Krylov complexity at very late time is the one expected when calculated from the full 2-point function, and cannot be seen from the approximate 2-point function (<ref>) obtained from taking the leading order.[ More concretely, (<ref>) is reliable up to β = β̃λ^-3/2 and β̃≪ 1. When we fix the temperature as β = O(λ^-3/2), we can show that the scrambling time is much bigger than the maximum time when (<ref>) is valid. Therefore, this low temperature approximation gets broken by the scarambling time.] §.§.§ Very low temperature regime Now we consider β≫λ^-3/2. According to <cit.>, the two-point function is found to be ⟨ Tr 1̧ M e^-β_1H1̧ M e^-β_2H⟩→ (-i∂_t)^2m-21/(1+4t^2/β^2)^3/2 up to numerical coefficients independent of t. Using this expression of the two-point function, we study β and m dependence of the Lanczos coefficients and Krylov complexity in the following. To begin with, let us examine the temperature dependence of Krylov complexity when m=1.[In this case, the time dependence of the two-point function is mathematically equivalent to that in the low-temperature limit when q̃→ 0 with q→0 or q→1.] In Fig. <ref>, we show the Lanczos coefficients for several temperatures. They appear to increase linearly already at the beginning. 
However, closer examination reveals a slight staggering, i.e., the Lanczos coefficients b_n appear to be on two different curves depending on whether n is even or odd, as shown in Fig. <ref>. The power spectrum of the two-point function we are dealing with here is Φ(ω) = β^2|ω|/2K_1(β|ω|/2) , where K_1 is the modified Bessel function. Figure <ref> compares this with the power spectrum β sech(βω/2) when b_n = π/β n. These two look very similar, but not the same. Notice that the power spectrum (<ref>) is finite at ω=0, and the derivative Φ'(ω) is continuous.[For x>0, x K_1(x) = xln(x/2) I_1(x)+(regular terms), where I_1(x)=x/2∑_k=0^∞1/k!(k+1)!(x/2)^2k.] In <cit.> these properties were proposed to be the conditions for the absence of staggering. The fact that (<ref>) led to staggering means that stronger conditions are needed for the absence of staggering. Since the second derivative of (<ref>) diverges at ω =0, one possible modification is to strengthen the condition on derivatives. If instead of requiring that the first derivative be continuous, we require up to the continuity of the second derivative, then (<ref>) would be out of condition and staggering would be allowed. However, this is just a conjecture, and there could be other causes for the appearance of staggering. Figure <ref> shows the time dependence of Krylov complexity. The exponential growth can be seen clearly. However, note that, exactly as in the low temperature case, (<ref>) is an approximate expression obtained by taking only the leading contribution in β≫λ^-3/2. This approximation is invalid in very late time.[This approximation is expected to be valid up to t = O(λ^-3/2). This is bounded by the scrambling time O(βlogβ/√(λ)).] To know the very late time behavior of the Krylov complexity in detail, we need to use the full two-point function. By exactly the same arguments as in the low temperature case, we can deduce that the power spectrum computed from the full two-point function has compact support of size O(1/√(λ)). Then, the Lanczos coefficient reaches plateau at O(1/√(λ)), which is smaller than O(β^1/3) in the very low temperature region β≫λ^-3/2. Correspondingly, the exponential growth in Krylov complexity is expected to end at time O(βlogβ/√(λ)) as discussed later in Sec. <ref>. After that the behavior of Krylov complexity turns into a linear growth. Notice that this time scale can be larger than O(βlog1/λ) when β≫λ^-3/2. When λ≪ 1, this is also much larger than O(β). Therefore, the behavior of the Krylov complexity is reliable within the scope of Fig. <ref>. [Strictly speaking, this very low temperature limit is valid for β more than β̃λ^-3/2 and β̃≫ 1. In Fig. <ref> and Fig. <ref>, we consider fixing β on the order of λ^-3/2 in order to make the plotting range include enough time region to approximate a two-point function. For example, for λ = 0.1, λ^-3/2∼ 30 and β∼ 3 ×λ^-3/2.] The change from exponential increase to linear increase occurs at a much later time. In Fig. <ref> is shown the growth exponent κ of Krylov complexity (C_ K(t)∼ e^κ t) found by numerical fitting. They are close to the upper bound κ≤ 2π T. The slightly smaller value is due to fitting over a finite time range, and in the limit of late time, it is expected to saturate the upper bound. Now we fix β = 100 and look at the change in Lanczos coefficients and Krylov complexity when m is changed. 
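Before doing so, it is straightforward to quantify how close the two power spectra compared above are; the following sketch is illustrative only (the frequency window is an arbitrary choice) and evaluates the two closed-form expressions, both normalized so that ∫dω/2πΦ(ω)=1.

import numpy as np
from scipy.special import k1

beta = 100.0
omega = np.linspace(1e-4, 0.3, 600)

phi_bessel = beta**2 * omega / 2.0 * k1(beta * omega / 2.0)   # spectrum of the m = 1 correlator above
phi_sech = beta / np.cosh(beta * omega / 2.0)                 # spectrum corresponding to b_n = (pi/beta) n

# both curves equal beta at omega = 0; the difference shows up away from the origin and in the
# analytic structure at omega = 0 (the second derivative of the Bessel form diverges there)
print(np.max(np.abs(phi_bessel - phi_sech) / phi_sech))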
Recall that, before taking the double scaling limit, m is related to the length p' of the fermion chain operator as p'=mp, where p is the length of the Hamiltonian. Therefore, changing m might be regarded as changing the size of the fermion chain M. Figure <ref> shows the results for the Lanczos coefficients and Krylov complexity. The staggering of the Lanczos coefficients increases as m is increased. This is likely to correspond to the fact that the power spectrum consists of two peaks, as shown in Fig. <ref>, but there is currently no analytical method that specifically links this power spectrum behavior to the Lanczos coefficient. More detailed methods will need to be developed in the future to specifically confirm this. Note that, in the current case, the power spectrum breaks the first condition (I), in page condI, for the absence of staggering in <cit.>. Although the Lanczos coefficients are staggering, overall they increase linearly with n and the width of the staggering becomes smaller. As can be seen in Fig. <ref>, Krylov complexity grows exponentially at late times. It can also be seen that Krylov complexity takes a larger value for larger m. A larger m means a longer fermion chain M, and Fig. <ref> implies that a longer fermion chain earns more complexity. Although small differences in the staggering behavior of Lanczos coefficients can be seen, the Krylov complexity are very similar to those calculated for the free scalar field case in <cit.> mentioned before. We can see that the conformal dimension Δ of the scalar field and m correspond to each other. §.§ Scrambling time Let us comment on the scrambling time. The definition of scrambling time is ambiguous and varies slightly in the literature. In the context of Krylov complexity, the scrambling time is sometimes defined as the time when the value of Krylov complexity is of the order of the number of degrees of freedom of the system. However, the actual choice of this order is not very standardized in the literature. Here, we consider the time t_* at which the Krylov complexity changes from exponentially increasing to linearly increasing as the scrambling time <cit.>. This t_* can be roughly evaluated using the value of n=n_* around which the behavior of the Lanczos coefficients changes from a linear growth b_n∼α n to a plateau b_n∼ b. For this purpose, we note that the Krylov complexity can be regarded as the expectation value of the site location of a one-dimensional chain system with Lanczos coefficients as hopping. Before the scrambling time t≤ t_*, Krylov complexity increases exponentially as e^2α t, and this exponential diffusion on the one-dimensional chain continues until near the n=n_* site, so the scrambling time is e^2α t_*∼ n _*, from which we find t_*∼α^-1log n_*. Since n_*∼ b/α from the behavior of the Lanczos coefficient, we obtain t_*∼α^-1log (b/α). The plateau value b of the Lanczos coefficients is determined by the size of the support of the power spectrum Φ(ω). This is of the same order as the maximum energy eigenvalue of the system under consideration, from which we can estimate the dependence of t_* on the number N of degrees of freedom. In the case of the DSSYK model, the plateau value of the Lanczos coefficient becomes b∼𝒥 in the q→ 0 limit and b∼𝒥/√(λ) in the q→ 1^- limit. Since neither of these depends on the number N of Majorana fermions, the scrambling time t_* also does not depend on N. In other words, in this sense, the DSSYK model is hyperfast scrambling. 
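A rough numerical version of this estimate reads as follows; the sketch is illustrative only, with 𝒥=1, arbitrary sample values of β and λ, and the initial slope approximated by α∼π/β as found above.

import numpy as np

def t_scrambling(beta, b):
    # t_* ~ alpha^{-1} log(b / alpha), with alpha ~ pi/beta the initial slope of the Lanczos coefficients
    alpha = np.pi / beta
    return np.log(b / alpha) / alpha

beta, lam = 100.0, 0.1
print(t_scrambling(beta, b=1.0))                  # q -> 0 limit: plateau value b ~ J = 1
print(t_scrambling(beta, b=1.0 / np.sqrt(lam)))   # q -> 1^- limit: plateau value b ~ J / sqrt(lambda)
# neither estimate involves the number N of Majorana fermions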
Although our specific analysis was carried out under a particular limit of q,q̃, the fact that scrambling does not depend on the degrees of freedom N can be expected to hold in general. This is because the support of the power spectrum of the DSSYK model is bounded by N-independent values, by the claim introduced in Sec. <ref>. In the case of the usual SYK model, since the maximum energy eigenvalue increases as a power of N, the scrambling time becomes t_*∼α^-1log N, which is consistent with the conventional fast scrambling property. In the above, we call a system whose scrambling time is O(N^0) hyperfast scrambling. Strictly speaking, in addition to this, the Lyapunov exponent λ_ L determined from the OTOC must be non-zero (or the theory must at least be interacting, like the SYK model). This is because the Krylov complexity gives only an upper bound on λ_ L, so the scrambling time determined by Krylov complexity does not necessarily coincide with that of the OTOC. In fact, there are free systems where the Lanczos coefficients behave similarly (increasing linearly at the beginning and saturating) and have a scrambling time of the same order (e.g. the XY model <cit.>), but these are clearly not chaotic systems. The fact that Krylov complexity can grow exponentially even in a free theory suggests that scrambling in the sense of Krylov complexity is different from scrambling as a diffusion of perturbations in real space. It is more accurate to say that we are considering scrambling in operator space <cit.>. § CHARACTERIZATION BY POWER SPECTRUM As we have mentioned, if the support of the power spectrum is bounded, the growth of the Lanczos coefficients terminates somewhere. Correspondingly, the late-time behavior of the Krylov complexity is at most linear growth. In such systems, we should focus on the early-time region in examining the characteristic behavior of each operator. In this section, we investigate the behavior of Lanczos coefficients from the viewpoint of the power spectrum using an artificial power spectrum. §.§ Toy power spectrum In the case of finite degrees of freedom or bounded quantum systems, the energy spectrum is discrete. To investigate more systematically the differences in the behavior of Lanczos coefficients in discrete and continuous systems, we consider the following toy power spectrum: Φ(ω) = 𝒩∑_l=-L^L sech(πω_l/2α)δ(ω-ω_l) , ω_l=l/Lω_ max , where 𝒩 is the normalization constant for ∫dω/2πΦ(ω)=1. The power spectrum (<ref>) has its support in [-ω_ max,ω_ max] and consists of equally spaced delta-function peaks with spacing Δω=ω_ max/L, whose weights are given by sech(πω/2α).[A discrete power spectrum with other weights is considered, for example, in <cit.>. In this reference, the case where ω_ max is very large and there is no bound on the power spectrum is discussed.] By analogy with the Boltzmann factor, α effectively acts as a temperature. In the following, we study the behaviors of the Lanczos coefficients and Krylov complexity while varying the parameters Δω, ω_ max and α. §.§.§ Varying Δω To begin with, we fix ω_ max and α and vary Δω. In Fig. <ref>, we show the Lanczos coefficients for various values of Δω with ω_ max=20 and α=1. As for the initial behavior of the Lanczos coefficients, when Δω / α≳ O(1), the Lanczos coefficients show staggering. On the other hand, as Δω /α≪ 1, the staggering becomes smaller. When Δω / α≳ O(1), discreteness clearly appears, and Φ(ω=0) is not finite. Therefore, both of the conditions for the absence of staggering proposed in <cit.> are indeed violated. 
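The numerical pipeline underlying the following subsections can be sketched as below. This is an illustrative re-implementation rather than the code used for the figures: instead of evaluating Hankel determinants, the Lanczos coefficients of the discrete measure are obtained by tridiagonalizing diag(ω_l) starting from the vector of square-root weights, which is equivalent to the moment method but numerically more stable, and C_ K(t) is then obtained by integrating the φ_n recursion.

import numpy as np
from scipy.integrate import solve_ivp

def toy_spectrum(omega_max=20.0, L=100, alpha=1.0):
    # peaks at omega_l = (l/L) omega_max with sech weights, normalized so that C(0) = 1
    omega = np.arange(-L, L + 1) / L * omega_max
    w = 1.0 / np.cosh(np.pi * omega / (2.0 * alpha))
    return omega, w / w.sum()

def lanczos_from_spectrum(omega, w, nmax):
    # Lanczos coefficients of the measure sum_l w_l delta(x - omega_l): tridiagonalize
    # diag(omega) starting from sqrt(w), with full reorthogonalization for stability
    V = [np.sqrt(w)]
    b = []
    for _ in range(nmax):
        u = omega * V[-1]
        for v in V:
            u = u - (u @ v) * v
        bn = np.linalg.norm(u)
        if bn < 1e-10:          # finite Krylov space: the sequence terminates here
            break
        b.append(bn)
        V.append(u / bn)
    return np.array(b)

def krylov_complexity(b, t_grid):
    # solve d/dt phi_n = b_n phi_{n-1} - b_{n+1} phi_{n+1} with phi_n(0) = delta_{n0}
    # and return C_K(t) = 1 + sum_n n |phi_n(t)|^2
    K = len(b) + 1
    def rhs(t, phi):
        d = np.zeros(K)
        d[1:] += b * phi[:-1]
        d[:-1] -= b * phi[1:]
        return d
    phi0 = np.zeros(K); phi0[0] = 1.0
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), phi0, t_eval=t_grid, rtol=1e-8, atol=1e-10)
    return 1.0 + np.arange(K) @ sol.y**2

omega, w = toy_spectrum(omega_max=20.0, L=100, alpha=1.0)   # Delta omega / alpha = 0.2
b = lanczos_from_spectrum(omega, w, nmax=150)
print(b[:8])     # roughly linear initial growth with slope of order alpha
print(b[60:68])  # plateau of order omega_max / 2
print(krylov_complexity(b, np.linspace(0.0, 6.0, 121))[-1])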
The behaviour of the Krylov complexity with the above change in the interval of the discrete spectrum is consistent with similar analyses <cit.>. Next, we turn our attention to the remaining part of the Lanczos coefficients. As we decrease Δω / ω_ max, the dimension of the Krylov subspace should increase, and the Lanczos sequence should also become longer. Indeed, Fig. <ref> is in line with the above expectation. As Δω / ω_ max is decreased, the plateau of the Lanczos coefficients becomes longer, and in the continuous limit Δω / ω_ max≪ 1, the plateau continues forever. The plateau value of the Lanczos coefficient is determined by ω_ max and does not depend on Δω. As is well known, the plateau of the Lanczos coefficient corresponds to a linear growth in Krylov complexity. Figure <ref> shows the time dependence of Krylov complexity for various values of Δω. The range of linear growth in Krylov complexity also becomes longer as Δω is decreased. When Δω / ω_ max≳ O(1), the Krylov subspace is small and recursions occur frequently, but as Δω / ω_ max≪ 1, the Krylov subspace becomes larger and recursions are less likely to occur. If we set α=∞ instead of α=1, the power spectrum becomes a sum of uniform delta function peaks. The Lanczos coefficients become as Fig. <ref>, and the initial linear growth disappears. As when α=1, the plateau of the Lanczos coefficients extends as Δω / ω_ max is decreased, and in the continuous limit Δω / ω_ max≪ 1, the plateau continues forever as in <cit.>. For Krylov complexity (Fig. <ref>), as in the case of α=1, the range of linear increase becomes longer as Δω / ω_ max is decreased. §.§.§ Varying ω_ max When Φ(ω) is continuous, it is known that the width of the support of Φ(ω) determines the asymptotic value of the Lanczos coefficient. Here we examine this for the discrete case by varying ω_ max in (<ref>) while Δω=1/10 and α=1. In Fig. <ref>, we show the resulting Lanczos coefficients. For each ω_ max, the Lanczos coefficients seem to grow linearly in the beginning, then reach a constant value. Note that, strictly speaking, the apparent linear growth in <ref> should include a small amount of staggering unless Δω / α is sufficiently small. The constant value b, which is reached after linear growth, is determined as b=ω_ max/2 by the size of the support [-ω_ max,ω_ max] as in the continuous case. Since the number of delta function peaks is finite, the Lanczos coefficients eventually decay to zero. Of particular interest is the fact that the range where the Lanczos coefficient grows linearly becomes longer as ω_ max becomes larger. Correspondingly, in Fig. <ref>, Krylov complexity also appears to extend the range of exponential growth. In the limit of ω_ max→∞, the Lanczos coefficients only increase linearly, and Krylov complexity is expected to increase exponentially forever if Δω / α is sufficiently small. §.§.§ Varying α Next, we fix ω_ max and Δω and vary α. The resulting Lanczos coefficients are shown in Fig. <ref>. When α is varied, the slope of the initial linear growth of the Lanczos coefficient changes. This is consistent with α acting as the temperature of this spectrum, as discussed earlier. The solid lines in Fig. <ref> are b_n=α n. This is in line with the statement around (<ref>). Correspondingly, the exponential growth rate of Krylov complexity also changes. In Fig. <ref> we show the time dependence of Krylov complexity. The dashed lines in Fig. 
<ref> are the results of the fitting, each of which is found to be 0.310 ×exp(0.957t) when α = 1/2 and 0.340 ×exp(1.86t) when α = 1. This is relatively consistent with the Krylov complexity behaving like e^2α t in late time when the Lanczos coefficient is perfectly linear b_n = α n. §.§ Constraints from the energy spectrum As we have already used very extensively, the Lanczos coefficients and Krylov complexity can be computed from the power spectrum. However, they depend on the choice of operator. A system-specific concept that does not depend on the choice of operator is the density of states. If we can impose general constraints on the power spectrum based on the behavior of the density of states, we can discuss the Lanczos coefficient and Krylov complexity from a more general viewpoint. In this regard, we note the following general property. Claim Let H be the Hamiltonian, |E_i⟩ be the energy eigenstate of energy E_i and σ(H) be the set of all energy eigenvalues of H. If σ(H) is bounded, then the power spectrum of the auto-correlation function C(t)= tr(𝒪(t)𝒪(0)) of an arbitrary operator 𝒪 must have a bounded support, and vice versa. This property is obvious if we write down the auto-correlation function using the energy eigenbasis as has been done in <cit.>. For systems with a bounded spectrum, the power spectrum of any operator has a bounded support and, in particular, no tail, so the Krylov complexity does not increase exponentially in late time. Conversely, the exponential growth in Krylov complexity at late time is allowed only when the energy spectrum is unbounded. For instance, in the DSSYK model with λ>0, the energy spectrum is bounded, so Krylov complexity of any operator does not grow exponentially in late time. On the other hand, in the large-p SYK model with N→∞, the energy spectrum is unbounded, so Krylov complexity of a single fermion operator is allowed to grow exponentially in late time <cit.>. § DISCUSSION In this paper, we have studied Krylov complexity of the fermion chain operators of the DSSYK model in various parameter regions and confirmed its exponential growth. In particular, the increasing exponent saturates the chaos bound, confirming that the prediction that the exponential growth rate of Krylov complexity provides an upper bound on the exponential behavior of the OTOC is indeed true for the DSSYK model in particular regions. Krylov complexity can be completely determined from the auto-correlation function of the operator and is fully characterized by the power spectrum. In the case of continuous spectra, their correspondence was well known <cit.>. We studied the power spectrum of the fermion chain operator in the DSSYK model and gave an understanding of the behavior of the Lanczos coefficients and Krylov complexity. In particular, regarding the condition on the structure of staggering often seen in Lanczos coefficients, we discussed the possibility of adding the finiteness of the higher-order derivative to the conditions on the derivative of the power spectrum proposed by <cit.>. Moreover, considering the time when Krylov complexity changes from exponential growth to linear growth as the scrambling time, we also discussed that in the DSSYK model, the scrambling time does not depend on the number N of degrees of freedom in the system. In this sense, the DSSYK model is a hyperfast scrambler. Furthermore, by using an toy power spectrum, we have obtained a systematic understanding of the behavior of the Lanczos coefficient. 
Depending on whether the levels are discrete or continuous, the behavior of the Lanczos coefficients can differ in two ways. The first is the difference in staggering of the Lanczos coefficients caused by the degree of the discreteness. Even when the energy spectrum is discrete, if the bulk of the power spectrum is sech-like, the Lanczos coefficient can have an initial linear growth. This slope is roughly the typical energy scale of the system (e.g., temperature), and if the discreteness of the levels is larger compared to this energy scale, staggering can occur in the Lanczos coefficients. On the other hand, if the discreteness is sufficiently small, the initial behavior of the Lanczos coefficients is almost indistinguishable from the continuous case. The second difference is the asymptotic behavior of the Lanczos coefficient, which depends on whether the number of levels is finite or infinite. In the finite system, the dimension of the Krylov subspace is also finite, and the Lanczos coefficient eventually becomes zero. However, note that even if the levels are discrete, the Lanczos coefficients can continue to increase if the number of levels is infinite. Also, the support of the power spectrum determines the plateau value of the Lanczos coefficient. In particular, if the energy spectrum is bounded, the power spectrum is also bounded, so the growth in the Lanczos coefficient always stops eventually. Since the plateau of the Lanczos coefficient corresponds to a linear increase in Krylov complexity, the energy spectrum must be bounded in order for Krylov complexity to grow linearly at a late time. Let us comment on the relationship between the chaotic nature of a given system and the Lanczos coefficients and Krylov complexity. In <cit.>, it was shown for quantum many-body systems that Lanczos coefficient does not increase faster than linear increase. It was also conjectured that asymptotic linear growth of the Lanczos coefficients is related to quantum chaos. These arguments are for quantum many-body systems and focus on the tail behavior of the power spectrum. If we consider a quantum system with finite degrees of freedom, the power spectrum has no tail, and the Lanczos coefficients decay asymptotically to zero because the dimension of Krylov subspace is finite. The initial linear growth of the Lanczos coefficients is, as we have seen with the toy power spectrum, a result of the sech-like behavior of the bulk portion of the power spectrum. However, the detailed shape of the bulk of the power spectrum and the initial growth regime of the Lanczos coefficients are highly dependent on the choice of operator. On the other hand, the behavior of the Lanczos coefficients after the initial growth regime did not change significantly when the shape of the toy power spectrum and the choice of operator were changed. This suggests that looking at the Lanczos coefficients after the initial growth can provide system-specific properties. Traditionally, the statistical distribution of level spacing has been used to characterize quantum chaotic properties <cit.>. This characterization can be applied not only to quantum many-body systems but also to finite-dimensional quantum systems. In a real system, the energy spectrum is not equally spaced but fluctuates, and the power spectrum becomes the sum of delta function peaks distributed at various intervals. This fluctuation is expected to affect the late time behavior of the Lanczos coefficients <cit.>. In this paper, we have considered operator complexity. 
On the other hand, the complexity of quantum states has long been of interest because it is expected to correspond to wormhole volumes and the like in the AdS/CFT correspondence <cit.>. Recently, Krylov complexity for quantum states has also been proposed and studied <cit.>. This complexity is a natural extension of the operator case definition.[There has also been a recent proposal to assemble quantum states into a density matrix and consider the Krylov complexity of the density matrix <cit.>.] However, it is not clear whether it is possible to characterize it using quantities corresponding to the power spectrum in the operator case. The definition of complexity on the quantum theory side, which is a dual to the holographic complexity on the bulk gravity theory side that has been studied in the past, is still unclear.[In <cit.>, Krylov complexity of the chord state of the DSSYK model in the high-temperature limit was studied and its time dependence was found to be consistent with the time dependence of the volume (geodesic length) of the wormhole connecting the two asymptotic regions of a two-dimensional black hole. It remains to be confirmed whether this is also true for more general setups.] It is an important issue to obtain a systematic understanding of the Krylov complexity and Lanczos coefficients of quantum states as in the case of operators. It is also future work to give a bulk-side interpretation to the Krylov complexity of the operator itself.[In another direction, studies have been conducted to interpret Krylov complexity geometrically using information metrics <cit.>.] It has long been proposed that the size of the operator in a quantum system at the boundary corresponds to the momentum of the bulk particle <cit.>. It would be interesting to consider whether we can embed the Krylov complexity in this conjecture. § ACKNOWLEDGMENTS We would like to thank Mitsuhiro Nishida and Norihiro Tanahashi for helpful comments on our draft. The work of R. W. was supported by Grant-in-Aid for JSPS Fellows No. JP22KJ1940. § JUSTIFICATION FOR POISSON APPROXIMATION The probability distribution of k fermions being the same when the chord is crossed is P(k) = 1/[ N; p ][ p; k ]·[ N-p; p-k ] If this can be Poisson approximated under k≪ p ≪ N, then the argument in <cit.> follows. Here, we examine more rigorously the parameter regions for which the approximation can be justified, giving specifically what hierarchy is desired, for example, p ∼ O(N^#). §.§ Poisson approximation Consider the situation where k≪ p ≪ N. Expanding with large limit p as k=O(p^α) (k/p^α≡λ_2), we obtain p!/(p-k)! = p^k e^-1/2λ_2^2 1/p^1-2α( 1 + 1/2λ_2 1/p^1-α + ⋯) Therefore, if 1-2α>0 → α<1/2, in other words, k∼ o(√(p)), we can approximate P(k) ∼[ N-p; p ]/[ N; p ]p^2k/k!(N-2p)!/(N-2p+k)! In the same reason, if k∼ o(√(N-2p)), we can approximate P(k) ∼[ N-p; p ]/[ N; p ]p^2k/k!1/(N-2p)^k Let us take the logarithm, log P(k) = logp^2k/ k! -klog (N-2p) + log((N-p)!)^2/N!(N-2p)! If we set p = o(N), the second term in rhs can be approximate klog N + klog( 1 - p/N) ≃ klog N - k p/N +⋯ For the remaining terms, we can use Stirling's formula log n! ∼ nlog n - n. Since Stirling's formula can be used for n ≫ 1, if p=o(N), then log ((N-p)!)^2/N!(N-2p)! ∼ 2(N-p)log (N-p) -Nlog N -(N-2p)log(N-2p) = 2log(N-p)log(1- p/N) -(N-2p)log( 1 - 2p/N) ∼ N( -p^2/N^2-p^3/N^3 +⋯) Therefore, the probability distribution can be approximated by P(k) ∼p^2k/ N^k k! 
e^-N( p^2/N^2+p^3/N^3 +⋯) , p∼ o(N), k∼ o(√(p)) §.§ Evaluating the peak point Since k can essentially take values from 1 to p, the region of k in which this Poisson approximation can be justified is not large. However, the original probability distribution and the approximated Poisson distribution have the characteristic that there exists a peak point, away from which they approach zero rapidly. Therefore, the peak value is the dominant contribution to the average. From the above, another important issue to be discussed is whether the peak of the original probability distribution lies within the range where the Poisson approximation can be justified. The peak of this distribution is obtained by evaluating the following P(k)/P(k+1) = N(k+1)/p^2 Thus, P(k) decreases if k>p^2/N-1. In other words, the maximum value is obtained near this point. This peak must be well contained within k ∼ o(√(p)), where the Poisson approximation is valid. From the above, the following hierarchy p^2/N≪√(p) must exist. In the case of the conventional double-scaled SYK model, p∼ O(√(N)). The Poisson distribution can be sufficiently approximated in the range of k ∼ o(N^1/4). In addition, at this time, since p^2/N (= O(1)) ≪√(p) ( = O(N^1/4)) we can justify the Poisson approximation. In general, when we consider a particular scaling limit p ∼ O(N^x), the Poisson distribution can be sufficiently approximated within k ∼ o(N^x/2). At this time, the Poisson approximation is justified if there is a hierarchy of p^2/N (= O(N^2x-1)) ≪√(p) ( = O(N^x/2)) Therefore, this approximation is valid in the range 2x-1<x/2 → x<2/3. From the above, if we take the limit 2 p^α/N≡λ fixed, the method in <cit.> is justified if α > 3/2. At this time P(k) ∼1/ k!(1/2λ)^2k/α N^(2/α-1)k e^-(1/2λ)^2/αN^2/α-1( 1 + (1/2λ)^1/αN^1/α-1 +⋯) Thus, the expectation value of (-1)^k becomes q ≡∑_k (-1)^k 1/ k!(1/2λ)^2k/α N^(2/α-1)k e^-(1/2λ)^2/αN^2/α-1( 1 + (1/2λ)^1/αN^1/α-1 +⋯) = e^-(1/2λ)^2/αN^2/α-1( 2 + (1/2λ)^1/αN^1/α-1 +⋯) When α = 2, q = e^-λ is fixed. When 3/2<α<2, q ∼ e^-N^#→ 0 , (#>0). 
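As a quick numerical cross-check of this appendix (illustrative only; the parameter values are arbitrary), the exact overlap distribution P(k) can be compared with the Poisson estimate by evaluating ⟨(-1)^k⟩ directly:

from math import comb, exp
from fractions import Fraction

def p_exact(N, p, k):
    # P(k) = C(p,k) C(N-p,p-k) / C(N,p): probability that two size-p index sets out of N share k elements
    return Fraction(comb(p, k) * comb(N - p, p - k), comb(N, p))

def q_exact(N, p):
    # <(-1)^k> evaluated with the exact distribution
    return float(sum((-1)**k * p_exact(N, p, k) for k in range(p + 1)))

# alpha = 2 scaling: p ~ sqrt(N/2) so that lambda = 2 p^2 / N ~ 1
for N in (100, 400, 1600, 6400):
    p = int(round((N / 2)**0.5))
    lam = 2 * p * p / N
    print(N, p, q_exact(N, p), exp(-lam))   # the two values approach each other as N grows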
http://arxiv.org/abs/2407.11928v1
20240716172136
Tackling Oversmoothing in GNN via Graph Sparsification: A Truss-based Approach
[ "Tanvir Hossain", "Khaled Mohammed Saifuddin", "Muhammad Ifte Khairul Islam", "Farhan Tanvir", "Esra Akbas" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Tackling Oversmoothing in GNN via Graph Sparsification: A Truss-based Approach Tanvir Hossain1, Khaled Mohammed Saifuddin1, Muhammad Ifte Khairul Islam1, Farhan Tanvir1, Esra Akbas1 1 Department of Computer Science, Georgia State University, Atlanta, GA 30302, USA {thossain5, ksaifuddin1, mislam29}@student.gsu.edu, {ftanvir, eakbas1}@gsu.edu July 22, 2024 =============================================================================================================================================================================================================================================================================================================== § ABSTRACT Graph Neural Network (GNN) achieves great success for node-level and graph-level tasks via encoding meaningful topological structures of networks in various domains, ranging from social to biological networks. However, repeated aggregation operations lead to excessive mixing of node representations, particularly in dense regions with multiple GNN layers, resulting in nearly indistinguishable embeddings. This phenomenon leads to the oversmoothing problem that hampers downstream graph analytics tasks. To overcome this issue, we propose a novel and flexible truss-based graph sparsification model that prunes edges from dense regions of the graph. Pruning redundant edges in dense regions helps to prevent the aggregation of excessive neighborhood information during hierarchical message passing and pooling in GNN models. We then utilize our sparsification model in the state-of-the-art baseline GNNs and pooling models, such as GIN, SAGPool, GMT, DiffPool, MinCutPool, HGP-SL, DMonPool, and AdamGNN. Extensive experiments on different real-world datasets show that our model significantly improves the performance of the baseline GNN models in the graph classification task. GNN, Oversmoothing, Graph Sparsification, k-truss Subgraphs, Graph Classification. § INTRODUCTION In recent years, graph neural networks (GNN) have given promising performance in numerous applications over different domains, such as gene expression analysis <cit.>, traffic flow forecasting <cit.>, fraud detection <cit.>, and recommendation system <cit.>. GNN effectively learns the representation of nodes and graphs via encoding topological graph structures into low-dimensional space through message passing and aggregation mechanisms. To learn the higher-order relations between nodes, especially for large graphs, we need to increase the number of layers. However, creating an expressive GNN model by adding more convolution layers increases redundant receptive fields for computational nodes and results in oversmoothing as node representations become nearly indistinguishable. Several research works illustrate that due to oversmoothing, nodes lose their unique characteristics <cit.>, adversely affecting GNNs' performance on downstream tasks, including node and graph classification. Different models have been proposed to overcome the problem, such as skip connection <cit.>, drop edge <cit.>, GraphCON <cit.>. While many of these methods focus on node classification, they often overlook the impact of oversmoothing on the entire network's representation. Additionally, only a limited number of studies have investigated the influence of specific regions causing oversmoothing  <cit.> in GNNs. 
These studies show that the smoothness in GNN varies for complex connections in different graph areas, and an individual node with high degrees converges to stationary states earlier than lower-degree nodes. Hence, the networks' regional structures affect the phenomenon because repeated message passing occurs within the dense neighborhood regions of the nodes. Therefore, we observe the impact of congested graph regions on oversmoothing. We conduct a small experiment to demonstrate the early oversmoothing at highly connected regions on a toy graph (Figure <ref>). To calculate the density on the graph, we utilize the k-truss <cit.>, one of the widely used cohesive subgraph extraction models based on the number of triangles each edge contains. To show the smoothness of the node features, we utilize the average node representation distance (ANRD) <cit.>. We measure the ANRD of different k-truss regions and present how it changes through the increasing number of layers in GNN. We present the toy graph and ANRD values with respect to the number of layers in Figure <ref>. While the toy graph in the figure is a 4-truss graph, it has 6, 7, and 8-truss subgraphs. Nodes and edges are colored based on their trussness. As known, k-truss subgraphs have hierarchical relations, e.g., 7 and 8-truss subgraphs are included in the 6-truss subgraph. Even at layer 2, we observe the ANRD of 7 and 8-truss subgraphs substantially degrades compared to the lower truss (k = 4, 6) subgraphs. While oversmoothing is observed at the node level, it may also result in losing crucial information for the graphs' representation to distinguish them. Furthermore, to learn the graph representation, GNNs employ various hierarchical pooling approaches, including hierarchical coarsening and message-passing, resulting in oversmoothing via losing unique node features <cit.>. Consequently, dense regions' identical node information affects the graph's representation learning. We extend the preliminary investigation on the toy graph given in Figure <ref>. We first apply the SAGPool model. After each pooling layer's operation, we measure the coarsened graph's nodes' embedding space matrix (ESM) with l_2 norm, then present the results for the first 2 pooling layers in Figure <ref> and <ref>. We observe that embedding distances are getting smaller for nodes within the dense regions, significantly reducing the final graph's representation variability. These node and graph representation characteristics through GNN models inspire us to work at different levels of dense regions in the network to mitigate oversmoothing. Our Work. To tackle the challenge, we develop a truss-based graph sparsification () model. Earlier sparsification models apply supervised techniques <cit.> and randomly drop edges <cit.> which may result in losing meaningful connections. However, our model selects the initial extraneous information propagating edges by utilizing edge trussness. It operates on the candidate edge's nodes' neighborhood and measures the connectivity strength of nodes. This connectivity strength assists in understanding the edge's local and global impact on GNN's propagation steps. Based on their specific node strength limits, we decide which edges to prune and which to keep. Removing selected redundant edges from dense regions reduces noisy message passing. That decreases oversmoothing and facilitates the GNN's consistency in performance during training and inference time. 
As we see in Figure <ref> and <ref>, the sparsified graph exhibits greater diversity in node distances than the original, enhancing better representation learning. In a nutshell, the contributions of our model are listed as follows. * We observe the prior stationary representation learning of nodes emerging in the network's high-truss region, which denotes a new perspective on explaining oversmoothing. We develop a unique truss-based graph sparsification technique to resolve this issue. * In the edge pruning step, we measure the two nodes' average neighborhood trussness to detect the regional interconnectedness strength of the nodes. During the message passing steps in GNN, as we trim down the dense connections within subgraphs, nodes in less dense areas at varying hop distances acquire diverse hierarchical neighbor information. Conversely, nodes in highly dense regions receive reduced redundant information. This provides smoothness to the node representation as well as to the graph representation. * We provide a simple but effective model by pruning noisy edges from graphs based on their nodes' average neighborhood trussness. The effectiveness of our model has been evaluated in comparison with standard GNN and graph pooling models. Extensive experiments on different real-world graphs show that our approach outperforms most of those baselines in graph classification tasks. The rest of this paper is organized as follows. Section <ref> discusses the related work that informs our research. Section <ref> introduces the model's preliminaries, whereas Section <ref> describes the model itself. Next, Section <ref> represents our model's experiment results and an analysis of its performance on different datasets. Finally, we conclude the paper with a discussion of future research directions. § RELATED WORKS Graph Classification. Early GNN models leverage simple readout functions to embed the entire graph. GIN <cit.> introduces their lack of expressivity and employs deep multiset sums to represent graphs. In recent years, graph pooling methods have acquired excellent traction for graph representation. They consider essential nodes' features instead of all nodes. Flat pooling methods utilize the nodes' representation without considering their hierarchical structure. Among them, GMT <cit.> proposes a multiset transformer for capturing nodes' hierarchical interactions. Another approach, SOPool <cit.>, capitalizes vertices second-order statistics for pooling graphs. There are two main types of hierarchical pooling methods: clustering-based and selection-based. Clustering-based methods assign nodes to different clusters: computing a cluster assignment matrix <cit.>, utilizing modularity <cit.> or spectral clustering <cit.> from the node's features and adjacency. On the other hand, selection-based models compute nodes' importance scores up to different hop neighbors and select essential nodes from them. Two notable methods are SAGPool <cit.>, which employs a self-attention mechanism to compute the node importance, and HGP-SL <cit.>, which uses a sparse-max function to pool graphs. KPLEXPOOL <cit.> hierarchically leverages k-plex and graph covers to capture essential graph structures and facilitates the diffusion between contexts of distance nodes. Some approaches combine both hierarchical pooling types to represent graphs. One model, ASAP <cit.>, adapted a new self-attention mechanism for node sectioning, a convolution variant for cluster assignment. 
Another model, AdamGNN <cit.>, employs multi-grained semantics for adapting selection and clustering for pooling graphs. Oversmoothing: While increasing number of layers for a regular neural network may results better learning, it may cause an oversmoothing problem in which nodes get similar representations during graph learning because of the information propagation in GNN. To tackle this, researchers propose different approaches: DROPEDGE <cit.> randomly prunes edges like a data augmentor that reduces the message passing speed, DEGNN <cit.> applies connectivity aware decompositions that balance information propagation flow and overfitting issue, MADGap <cit.> measures the average distance ratio between intra-class and inter-class nodes which lower value ensures over-smoothing. However, these methods overlook networks' regional impact on oversmoothing. The k-truss <cit.> algorithm primarily applies to community-based network operations to identify and extract various dense regions. It has been employed in different domains, such as high-performance computing <cit.> and graph compression <cit.>. Our  model functions as a technique equipped with foundation graph pooling methods. It leverages the k-truss algorithm and edges' minimum node strength to provide networks' structural interconnectedness. Pruning highly dense connections helps to restrict excessive message passing paths to reduce oversmoothing in GNN models. We empirically justified it by experimenting with different graph topologies in section <ref>. § PRELIMINARIES This section discusses the fundamental concepts for GNN and pooling, and also formulate the oversmoothing problem including the essential components for our solution to this problem. We begin with discussing graph neural networks and graph pooling techniques. Then, define the issue, including the task. Finally, we delve into the foundation concept of our model (k-truss), which plays a crucial role in solving the problem. §.§ GNN and Graph Pooling Graph Neural Network (GNN) <cit.> is an information processing framework that defines deep neural networks on graph data. Unlike traditional neural network architectures that excel in processing Euclidean data, GNNs are experts in handling non-Euclidean graph structure data. The principal purpose of GNN is to encode node, subgraph, and graph into low-dimension space that relies upon the graph's structure. In GNN, for each layer, K in the range 1,2,...k, the computational node aggregates (<ref>) messages m_N(v)^(k) from its K-hop neighbors and updates (<ref>) its representation h_v^(k+1) with the help of the AGGREGATE function. m_N(v)^(k) = AGGREGATE^(k)({h_u^(k), ∀u ∈N(v)}) h_v^(k+1) = UPDATE^(k)( h_v^(k), m_N(v)^(k)) In the context of graph classification, GNNs must focus on aggregating and summarizing information across the entire graph. Hence, the pooling methods come into play. Graph Pooling. <cit.> Graph pooling performs a crucial operation in encoding the entire graph into a compressed representation. This process is vital for graph classification tasks as it facilitates capturing the complex network structure into a meaningful form in low-dimensional vector space. During the nodes' representation learning process at different layers, one or more pooling function(s) operate on them. These pooling layers are pivotal in enhancing the network's ability to generalize from graph data through effective graph summarization. In general, pooling operations are categorized into two types: Flat pooling and Hierarchical pooling. 
Flat pooling <cit.> is a straightforward graph readout operation. It simplifies the encoding by providing a uniform method to represent graphs of different sizes with a fixed-size vector. h_G = READOUT( {h_v^(k) | v ∈V} ) Hierarchical pooling <cit.> iteratively coarsens the graph and encodes comprehensive information in each iteration, reducing the nodes and edges of the graph while preserving the encoding. It enables the graph's representations to capture both short- and long-range structural details. In contrast to flat pooling, it gives deeper insight into inherent graph patterns and relationships. Of the two types of hierarchical graph pooling methods, the selection-based methods prioritize nodes by assigning them a score, aiming to retain the most significant nodes in the graph. They employ a particular attention function for each node to compute the node importance. Based on the calculated scores, the top k nodes are selected to construct a pooled graph. The following equations give a general overview of the top-k selection graph pooling method: S=score(G,X); idx=topK(S,[α× N]) A^(l+1)=A_idx,idx where S∈ℝ^N× 1 contains the scores of the nodes, α is the pooling ratio, and N is the number of nodes. Conversely, clustering-based pooling methods form supernodes by grouping original graph nodes so as to summarize the original nodes' features. A cluster assignment matrix S∈ℝ^N× K is learned by the models from the graph structure and/or node features. Then, nodes are merged into supernodes by S∈ℝ^N× K to construct the pooled graph at the (l+1)^th layer as follows A^(l+1)=S^(l)⊤A^(l)S^(l) H^(l+1)= S^(l)⊤ H^(l) where A∈ℝ^N× N is the adjacency matrix, H∈ℝ^N× d is the feature matrix with d-dimensional node features, and N is the number of nodes. Note that the AGGREGATE, UPDATE and READOUT operations are different operational functions, commonly including min, max, average, and concat. §.§ Oversmoothing According to <cit.>, continual neighborhood aggregation of node features gives nodes almost identical representations as the number of layers K increases. Simply put, without considering the non-linear activation and transformation functions, the features converge as h^∞ = Â^∞X, Â_i,j = (d_i + 1)^r (d_j + 1)^(1-r)/(2m + n) where v_i and v_j are the source and target nodes, d_i and d_j are their degrees respectively, Â is the final smoothed adjacency matrix, and r ∈ [0, 1] is the convolution coefficient. Equation (<ref>) shows that after an infinite number of propagations, the final features are blended and rely only upon the degrees of the target and source nodes. Furthermore, through spectral and empirical analysis, <cit.> shows that nodes with higher degrees are more likely to suffer from oversmoothing. h^k(j) = √(d_j + 1) (∑_i=1^n (√(d_i + 1)/(2m + n)) x_i ±∑_i=1^n x_i (1-λ_G^2/2)^k/√(d_j +1)) In equation (<ref>), λ_G is the spectral gap, m is the number of edges, and n is the number of nodes. It shows that the convergence of the features relies upon the spectral gap λ_G and the summation ∑_i=1^n of the feature entries. When the number of layers k goes to infinity, the second term (after ±) vanishes. Hence, all vertices' features converge to a steady state, i.e., oversmoothing, which mainly depends on the nodes' degrees. §.§ Problem Formulation This research aims to alleviate oversmoothing by effectively simplifying graphs to balance global and local connections, resulting in better graph classification performance. Formally, a graph is denoted as G = (V, E, X), where V is the set of nodes and E is the set of edges.
The symbol X ∈ℝ^N × d denotes the graph's feature matrix of dimension d, where N = |V| is the number of nodes in G and x_v ∈ℝ^d, with x_v ∈ X and v ∈ V, is the d-dimensional feature of a particular node in the graph. The neighborhood of a node u is denoted as N(u), and its degree is represented as d(u) = |N(u)|. For a dataset D=(𝔾, Y) consisting of a set of graphs 𝔾 = {G_1, G_2, ⋯, G_N} and the corresponding labels Y = {Y_1, Y_2, …, Y_N}, our truss-based sparsification algorithm produces a set of sparsified graphs 𝔾_S = {G_S1, G_S2, …, G_SN}. The algorithm is designed to remove redundant graph connections while retaining the graph's essential structural information. Subsequently, this set of sparsified graphs is analyzed using GNN models to learn a function f:𝔾_S→ Y, leveraging the reduced complexity of the graphs. The principal objective is to enhance the accuracy (Acc) of GNN models in graph classification tasks. §.§ K-truss Identifying and extracting cohesive subgraphs is a pivotal task in the study of complex networks. The k-truss subgraph extraction algorithm is instrumental here, as it isolates subgraphs based on a specific connectivity criterion. At the root of the criterion is the notion of support, which refers to the number of triangles in which an edge participates. The support serves as the cornerstone for measuring the cohesiveness of a subgraph. The following two definitions state the criterion for extracting tightly interconnected subgraphs from a complex network. Support: In a graph G = (V, E), the support of an edge e=(u, v) ∈ E, denoted sup_G(e), is the number of triangles in which e participates, i.e., sup_G(e) = |{ Δ_uvw: w ∈ V }|. k-truss subgraph: A subgraph S = (V_S, E_S) with S ⊆ G, V_S ⊆ V and E_S ⊆ E is a k-truss subgraph if every edge e ∈ E_S has support at least k-2, where k ≥ 2. Notably, the concept of k-truss is inherently dependent on the count of triangles within the graph, which establishes that any graph is a 2-truss subgraph. The hierarchical structure of k-truss subgraphs implies that a 3-truss subgraph is a subset of the 2-truss subgraph (the original graph), denoted as G_3 ⊆ G_2. Similarly, G_4 ⊆ G_3, ⋯, G_k ⊆ G_k-1. Edge Trussness: For a given graph G and k > 2, an edge (u, v) can exist in multiple k-truss subgraphs. The trussness of the edge, denoted as T_r(u,v), is the highest k for which the edge is included in the corresponding k-truss subgraph. That is, T_r(u,v)= k if (u,v) ∈ G_k and (u,v) ∉ G_k+1. § TRUSS BASED GRAPH SPARSIFICATION In this section, we explain the proposed truss-based graph sparsification model () for overcoming the oversmoothing problem in graph classification. In graph analytics, classifying graphs is challenging due to their large size and complex structure. Graph sparsification, a technique that reduces the number of graph connections while preserving crucial graph structures, is an emerging way to address these challenges. We aim to obtain an effectively simplified graph that keeps essential short- and long-distance graph connections through sparsification, which yields better graph classification results. The overall architecture of the proposed model is presented in Figure <ref>. The model consists of two parts: truss-based graph sparsification and graph learning on the sparsified graph. The truss-based graph sparsification framework proceeds in four phases. Phase 1: Compute edge trussness: First, we apply the k-truss decomposition algorithm to the unweighted graph to compute each edge's trussness, which is stored as its weight.
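As a concrete illustration of the support definition above, which underlies the trussness computed in Phase 1, the following sketch counts the triangles containing each edge of a toy graph; it is plain Python for exposition rather than the optimized routine of the actual implementation.

```python
def edge_support(edges):
    """Return sup_G(e) = number of triangles containing each edge e = (u, v)."""
    edges = {tuple(sorted(e)) for e in edges}
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # A triangle on edge (u, v) corresponds to a common neighbor w of u and v.
    return {(u, v): len(adj[u] & adj[v]) for u, v in edges}

# Toy graph: a 4-clique {0, 1, 2, 3} plus a pendant edge (3, 4).
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(edge_support(E))
# Every clique edge has support 2 (so the clique is a 4-truss); (3, 4) has support 0.
```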
Next, we split all edges into two groups based on their truss values, high-truss edges and low-truss edges, for a given threshold η. Phase 2: Measure node strength:  focuses on high-truss edges for sparsification. As the endpoints of high-truss edges tend to have high degrees, these edges contribute heavily to the oversmoothing phenomenon (Section <ref>). Thus, strategically pruning them helps to reduce oversmoothing. However, at the same time, important structural connections need to be maintained. To do so, for each candidate high-truss edge we measure the minimum node strength of its two end nodes, which indicates how dense the edge's surroundings are. Phase 3: Prune and update: When that minimum value meets or exceeds the density threshold, we prune the edge from the graph and update the remaining edges' trussness values. Due to the cascading effect in the network, pruning affects other edges' trussness; therefore, edge trussness needs to be updated after each pruning operation. The pruning step continues until all high-truss edges have been examined. Phase 4: Learning: At the end of the sparsification, we first feed the processed graph to GNN models for graph learning. Finally, the entire graph's representation is passed to a multi-layer perceptron (MLP) network. Note that our model follows two strategies during the graph sparsification steps: (a) it sorts the high-truss edges in descending order so that edges in denser regions are pruned earlier, and (b) it examines each edge only once. Removing an edge from the graph might affect other edges, so that in a later pass an edge could newly satisfy the pruning condition. This phenomenon rarely occurs because  starts pruning from the densest edges; hence, the technique avoids recursion. §.§ Dense Region Identification To learn the structure of the graph, a GNN applies message passing between nodes through edges. Through repeated message passing, nodes in dense regions receive similar feature information from their neighbors, which causes oversmoothing. As a result, the features of those regions' nodes become indistinguishable. Many different density measures exist, including k-truss, k-core, and k-edge. This paper uses k-truss, defined based on triangle connectivity, to identify the dense regions. Our approach employs a truss-decomposition <cit.> algorithm, as detailed in Algorithm <ref>, to compute edge trussness and discover all k-trusses of G. First, it takes an unweighted graph as input and computes the supports of all edges. It then initializes k to 2 and selects the edge e^* with the lowest support (line <ref>). Next, the value k is assigned as the edge weight W, and the edge is removed (line <ref>). Removing an edge decreases other edges' supports; hence, we reorder the edges according to their new support values (line <ref>). The process continues until all edges with support no greater than (k-2) have been removed from the graph. Next, the algorithm checks whether any edges remain. If one or more exist, it increments k by one and returns to line <ref> to measure their trussness (lines <ref>-<ref>). Edge trussness indicates the densest region within which the edge exists. After calculating the edge trussness, to identify highly dense areas,  separates the edges in G_T into two sets: High-Truss Edges and Low-Truss Edges. Following condition (<ref>), it compares each edge's trussness with the given threshold value, η, and determines the set of High-Truss Edges E_H.
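The peeling procedure just described can be sketched compactly as follows; for clarity, supports are recomputed from scratch after every removal, whereas the actual algorithm (and the UpdateTr routine used later) maintains them incrementally. The example reuses the small clique-plus-pendant graph from the previous sketch.

```python
def truss_decomposition(edges):
    """Assign each edge its trussness T_r(u, v) by iteratively peeling low-support edges."""
    remaining = {tuple(sorted(e)) for e in edges}
    trussness = {}
    k = 2
    while remaining:
        adj = {}
        for u, v in remaining:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        sup = {(u, v): len(adj[u] & adj[v]) for u, v in remaining}  # triangle counts
        e_min = min(sup, key=sup.get)
        if sup[e_min] <= k - 2:          # e_min cannot survive into a (k+1)-truss
            trussness[e_min] = k         # its trussness is the current k
            remaining.discard(e_min)
        else:                            # every remaining edge belongs to a (k+1)-truss
            k += 1
    return trussness

E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(truss_decomposition(E))
# The pendant edge (3, 4) gets trussness 2; the six clique edges get trussness 4.
```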
For example, in Figure <ref>, given η = 3, the blue (T_r(E) = 4) and golden (T_r(E) = 3) colored edges are high-truss edges. Pruning edges {E ∈ E_H} reduces the load on high-degree nodes in dense regions, which assists in mitigating oversmoothing in the GNN. High-Truss Edges E_H: In any graph, for a given threshold η, an edge whose trussness value is greater than or equal to η is considered a high-truss edge; the set of such edges is denoted E_H. §.§ Pruning Redundant Edges Ascertaining the high-truss edges is crucial for understanding the density level in different parts of the graph. However, directly pruning these edges may break essential connectivity between nodes. For example, a low-degree node could be connected to a dense-region node, and pruning the incident high-truss edge might leave that low-degree node without adequate information. To balance the connectivity between nodes, we determine the node strengths of each high-truss edge's endpoints and then proceed to the next step. To measure the strength of a node n incident to a high-truss edge E ∈ E_H,  calculates the average neighborhood trussness T̅_N(n), n ∈ V. This score reflects the density depth of a node and indicates its important connectivity. Conventionally, the strength of a node is measured as the sum of all of its incident edge weights; in this research, however, node strength is defined as the average trussness of the node's incident edges. T̅_N(n) = 1/|N(n)|∑_u∈ N(n) T_r(n,u) For a candidate edge E=(u,v), after measuring the node strengths of u and v (<ref>), their minimum value (<ref>) is taken. Notably, a node may be included in different k-truss subgraphs; hence, its neighborhood's trussness provides more connectivity information. The minimum node strength of an edge's two endpoints signifies the least density of its surroundings. As we aim to reduce the density of highly connected regions to combat oversmoothing,  follows the rule below to decide whether to prune an edge. For this purpose, the minimum node strength of the edge E is compared to a threshold δ. In condition (<ref>), this comparison ensures that the edge lies in a prunable dense region. The condition indicates that if either end of the candidate edge lies in a sparse neighborhood, i.e., T̅_N(E) < δ,  avoids cutting the edge because that connection serves as an essential message-passing medium in the GNN aggregation step. In contrast, when the minimum score equals or exceeds δ, we assume the edge is part of a highly dense region, where there is a high chance of excessive message passing between the region's nodes. That may cause them to blend their representations, leading to oversmoothing during graph learning with GNNs (Section <ref>). From this condition, the model determines which edges contribute to undesirable density levels that foster oversmoothing in GNNs. T̅_N(E) = min(T̅_N(u), T̅_N(v)) An edge e =(u,v) is eligible for pruning when the minimum average neighborhood trussness of u and v equals or exceeds the threshold δ: T̅_N(E) ≥δ For example, consider the edge (v_2, v_10) at the lower left of Phase 3 (in Figure <ref>), where the degrees of v_2 and v_10 are 5 and 2, respectively. The node strengths are T̅_N(v_2) = {(3×3)+(2×2)}/5 = 2.6 and T̅_N(v_10) = (2+2)/2 = 2. Given δ = 2.5, the minimum node strength is T̅_N(v_2, v_10) = T̅_N(E) = min(2.6, 2) = 2. Hence, the pruning condition is not satisfied, and the edge remains in the graph. If  pruned the edge, the neighborhood of v_10 would become sparser than before and would miss crucial global information.
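The same computation can be written out directly. The sketch below evaluates the node strengths and the pruning test for a configuration that reproduces the (v_2, v_10) numbers above; the neighbor labels a, b, c, d, and e are hypothetical stand-ins for the remaining vertices of Figure <ref>.

```python
def node_strength(node, trussness, neighbors):
    """Average trussness of the node's incident edges, i.e., T_N(n)."""
    incident = [trussness[tuple(sorted((node, u)))] for u in neighbors[node]]
    return sum(incident) / len(incident)

def should_prune(u, v, trussness, neighbors, delta):
    """Prune (u, v) only if min(T_N(u), T_N(v)) >= delta."""
    t_min = min(node_strength(u, trussness, neighbors),
                node_strength(v, trussness, neighbors))
    return t_min >= delta

# Worked example: v_2 has five incident edges with trussness {3, 3, 3, 2, 2};
# v_10 has two incident edges with trussness {2, 2}, one of which is (v_2, v_10).
trussness = {("a", "v2"): 3, ("b", "v2"): 3, ("c", "v2"): 3,
             ("d", "v2"): 2, ("v10", "v2"): 2, ("e", "v10"): 2}
neighbors = {"v2": ["a", "b", "c", "d", "v10"], "v10": ["v2", "e"]}

print(node_strength("v2", trussness, neighbors))    # 2.6
print(node_strength("v10", trussness, neighbors))   # 2.0
print(should_prune("v2", "v10", trussness, neighbors, delta=2.5))  # False: edge is kept
```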
On the other hand, at the upper right of Phase 3, for the edge E=(v_1, v_3), T̅_N(v_1) = 3 and T̅_N(v_3) = 3. Hence, compared with the value of δ, both endpoints already lie in a dense region, and T̅_N(v_1, v_3) = 3 ≥ 2.5. In this case, pruning helps prevent blended node representations in the GNN, especially between highly interconnected subgraphs. Our model assumes that, after pruning, these nodes still remain in sufficiently dense regions to receive meaningful local and global neighborhood information. Algorithm <ref> summarizes the  model. Lines (<ref>-<ref>) identify the dense regions and determine the high-truss edges of the network, while lines (<ref>-<ref>) prune the noisy high-truss edges. The UpdateTr routine in line <ref> (similar to Section 4.2 of <cit.>) updates all edges' trussness after each pruning step. §.§ Algorithm Complexity The complexity of measuring edge trussness is O(E√(E)) and the complexity of UpdateTr is O(E). As we explore all high-truss edges in the algorithm, the exploration complexity is O(E) in the worst case. Hence, the overall complexity is O(E)·O(E) + O(E√(E)) = O(E^2) + O(E√(E)) = O(E^2) in the worst case. Although this bound seems high, real-world datasets are rarely that dense. In addition, because the edge trussness scores are updated after each pruning step, many high-truss edges are removed from consideration before they are examined. § EXPERIMENT DESIGN AND ANALYSIS This section validates our technique on different real-world datasets using standard graph pooling models. First, we provide an overview of the datasets. Then, we briefly describe the parameters of the various methods. Finally, we compare the performance of the baselines enhanced with our  algorithm against the original baselines on graph classification tasks, including an analysis of parameters, deeper networks, and an ablation study. §.§ Datasets and Baselines We experiment with our model on eight different TU Dortmund <cit.> datasets: five of them are from the biomedical domain (PROTEINS, NCI1, NCI109, PTC, and DD) and three are from the social network domain (IMDB-BINARY, IMDB-MULTI, and REDDIT-BINARY). We extend the  algorithm with seven state-of-the-art backbone graph pooling models. Among them, three are node clustering-based pooling methods: DiffPool <cit.>, DMonPool <cit.> and MinCutPool <cit.>. Two models, SAGPool <cit.> and HGP-SL <cit.>, utilize a node selection approach for pooling the graphs. Of the remaining two, one learns the graph representation through flat pooling (GMT <cit.>), and the other utilizes an adaptive pooling approach, applying both node selection and clustering for the pooling procedure: AdamGNN <cit.>. We report the statistics of the datasets in Table <ref>. §.§ Experimental Settings To compare fairly, we executed the existing standard implementations of the baselines and incorporated them with our model. For evaluation, we split the datasets into 80% for training, 10% for validation, and 10% for testing. In most cases, we stopped training early when the validation result remained unchanged for 50 consecutive epochs. We measured performance using the accuracy metric by running each model 10 times with 10 random seeds and reporting the mean. The batch size was 128 for most of the models. The effectiveness of our pruning method mostly depends on two crucial parameters: the cutoff parameter η and the edge pruning threshold δ. For all experiments, we set η = 3, which means any edge with a trussness score below 3 cannot be pruned from the graph.
On the other hand, we experimented with various δ values across the datasets. Specifically, we used δ values of {3, 4, 5, 6, 7} for IMDB-BINARY and IMDB-MULTI, and {3, 3.5, 4} for REDDIT-BINARY. For the PROTEINS and DD datasets, δ was set to {3, 3.25, 3.5, 3.75, 4}; for NCI1 and NCI109, we used δ values of {2.5, 3}, while for PTC we used only 2.5. §.§ Result Analysis Table <ref> reports the experimental results, providing a comparative analysis between our model and the original baselines across various datasets.  integrated with the backbone graph pooling models consistently outperforms the baselines and demonstrates its robustness in graph classification tasks. Among selection-based models, when incorporated with SAGPool,  achieves a 1.5-5.5% Gain(𝔾) (<ref>) over the original models. Notably, on the DD and IMDB-BINARY datasets, the gains are 3.34% and 5.17%, respectively. In the experiment with the HGP-SL model,  attains an improvement of nearly 4.5% on the IMDB-BINARY dataset and over 2.5% on the PTC dataset. Combined with the flat pooling model GMT,  obtains a significant gain (nearly 7%) on the NCI109 dataset and maintains consistent performance on the other datasets. In experiments with clustering-based models,  equipped with DiffPool achieves a remarkable accuracy gain of nearly 19% on the PTC dataset. It also demonstrates strong performance with the DMonPool model across all datasets. Notably, it achieves the highest accuracy on the REDDIT-BINARY dataset, namely 85.75%. Gain(𝔾) = ( − Original)/Original× 100% Extended Experiment: In addition to the pooling methods, we incorporate the  model with fundamental GNN models: two versions of the graph isomorphism network (GIN-0 and GIN-ϵ) and the simple graph convolutional network, for graph classification. For the experiments with the GIN networks, we follow 10-fold cross-validation to evaluate our model, whereas we assess GCN in the same way as the other pooling models (Section <ref>). In most cases (Table <ref>), our technique outperforms these models across the datasets. In particular, on the PTC and IMDB-BINARY datasets, (GIN-0) achieves the highest accuracy scores of 69.43% and 78.10%, respectively. Additionally,  with the backbone GCN model attains the overall second-highest accuracy on the REDDIT-BINARY dataset, with 84.30%. §.§ Analysis in Deeper Network We examine the impact of  with deeper layers of the backbone models. In this analysis, along with SAGPool, we choose three GNN models: GCN, GIN-ϵ, and GIN-0. In addition, we select two datasets from the biomedical domain (DD and PROTEINS) and one from the social network domain (IMDB-BINARY). Figure <ref> shows the two best-ranked  variants of the threshold δ (Tables <ref> and <ref> in Section <ref>) compared to the original models' performance. Columns 1 ((GCN)) and 4 ((SAGPool)) reveal that, as the number of layers increases,  outperforms the original models on all three datasets at most depths. Figures <ref> and <ref> illustrate similar trends for (GIN-ϵ) and (GIN-0) on the IMDB-BINARY dataset. However, both of these models show fluctuations in accuracy on the DD and PROTEINS datasets. One interesting observation is that the accuracy of (GIN-0) increases disproportionately in deeper networks on DD. A possible reason could be the dense nature of the networks in that dataset.
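The depth behavior above is consistent with the feature-convergence argument in the Preliminaries: repeated propagation drives node features toward a degree-determined stationary pattern, and densely connected regions reach it fastest. The toy sketch below illustrates this, ignoring learned weights and nonlinearities as in that argument; the two-clique graph and the symmetric r = 1/2 normalization are illustrative assumptions.

```python
import numpy as np

# Small graph: two dense 4-cliques joined by a single bridge edge.
A = np.zeros((8, 8))
for block in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0                                    # bridge

A_hat = A + np.eye(8)                                      # add self-loops
d = A_hat.sum(axis=1)
P = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)        # (d_i+1)^-1/2 (d_j+1)^-1/2 normalization

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
for k in [1, 2, 4, 8, 16, 32]:
    H = np.linalg.matrix_power(P, k) @ X
    # As k grows, the rows of H converge (up to a degree factor) to the same vector,
    # so the average pairwise feature distance collapses toward a degree-set limit.
    diff = H[:, None, :] - H[None, :, :]
    print(k, round(float(np.sqrt((diff ** 2).sum(-1)).mean()), 4))
```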
§.§ Sensitivity Analysis In Figure <ref>, we demonstrate our model's performance under variations of the hyperparameter values on the IMDB-BINARY and PROTEINS datasets. Notably, in most cases, the pruning rate decreases as the δ value increases. Figure <ref> shows that at δ = 3, our equipped  models perform well. We observe that for lower δ values, the accuracy of  with AdamGNN increases, whereas it degrades for (SAGPool). On the other hand, on the PROTEINS dataset (Figure <ref>), as the δ value changes,  combined with SAGPool, MinCutPool, DMonPool, and HGP-SL shows near-consistent performance. However, when the δ value increases from 3 to 3.25, (AdamGNN)'s accuracy decreases and then remains almost stable. Assembled with GMT and DiffPool, the accuracy of  shows some variation for δ = 3-3.5 and then remains near the same score for the other values. Regarding the cutoff variable η, Figures <ref> and <ref> show the performance changes on the same datasets. Similar to δ, the number of pruned edges in the graph decreases as the value of η increases, and (SAGPool)'s accuracy increases. In contrast, for the same reason, the performance of our technique with the backbone AdamGNN and HGP-SL models degrades. The other models display minor fluctuations in accuracy with the change of the η value. §.§ Ablation Study This section examines the strategic and functional significance of 's components with the backbone graph pooling models. We chose four datasets and six pooling methods. In Table <ref>, in the first two rows for each dataset, we change the edge connectivity-strength equations (<ref>) and (<ref>). In the first row, equation (<ref>) remains the same, but in equation (<ref>) the average of the two end nodes' strengths is taken instead of the minimum. In the second row, by contrast, the node-strength equation (<ref>) is modified, whereas the other equation remains unchanged. In the last two rows, we change the pruning procedure, examining the pruning of two (prune 2^*) and three (prune 3^*) edges at a time without updating the edge trussness. With the modified equations, the model's performance on the PROTEINS dataset increases when extended with the MinCutPool and DiffPool methods. However, on the NCI1 dataset, the performance of AdamGNN decreases dramatically (47.36% and 61.22%). Regarding the examination of 2 and 3 edges for pruning, more than one edge is sometimes pruned without updating the other edges' trussness; hence, important information-processing connections could be pruned from the system. Compared to the results in Table <ref>, our model's performance degrades severely on some datasets when these components are changed. Nonetheless, the modified pruning strategy with MinCutPool and DiffPool achieves better results than , namely 74.37% and 70.27%, respectively, on the PROTEINS dataset. § CONCLUSION In this paper, we have proposed an effective k-truss-based graph sparsification model to facilitate graph learning with graph neural networks (GNNs). By sparsifying the overloaded message-passing edges of dense graph regions, our model introduces more variability into the input graph and thereby alleviates oversmoothing. Comprehensive experiments on eight well-known datasets verify that  performs consistently across popular graph pooling and readout-based GNN models.
Our research opens up several interesting directions: learning the edge-pruning threshold during training, parallelizing the pruning of edges in different k-truss subgraphs, and jointly learning to measure edge importance during graph sparsification. § SUPPLEMENT §.§ Result Details This section reports the experimental details of the  model pipelined with the backbone graph pooling and GNN models. In all tables, the results on the social network and biomedical datasets are reported using the accuracy (%) metric. We measure the average accuracy for each dataset and rank the variants for different values of the edge-pruning threshold δ against the original backbone models' scores. Tables <ref> and <ref> report all the results of the different  variants for the separate δ values. Due to limited space, we present the results for the (REDDIT-BINARY & PTC) and (NCI1 & NCI109) datasets together in sub-tables. §.§ Parameter Details Tables <ref> and <ref> list the parameter details of all the baseline models. A notable observation is that the maximum number of epochs for SAGPool and MinCutPool is set extremely high; however, due to the patience variable, the models actually run for far fewer epochs in the experiments. The AdamGNN model determines the number of layers for the experiment by analyzing the graphs' structural properties. All models are implemented in the PyTorch library and use the Adam optimizer, with the default weight decay set to 0 in most cases. Except for AdamGNN (64) and HGP-SL (512), the batch size is 128 for all the other models. All the baseline models employ different learning rates for evaluation. Six of the models use dropout with a rate of 50%, while the other four do not use it.
http://arxiv.org/abs/2407.12422v1
20240717091744
Conduct Parameter Estimation in Homogeneous Goods Markets with Equilibrium Existence and Uniqueness Conditions: The Case of Log-linear Specification
[ "Yuri Matsumura", "Suguru Otani" ]
econ.EM
[ "econ.EM" ]
Quantum beats of a macroscopic polariton condensate in real space * July 22, 2024 ================================================================= § ABSTRACT We propose a constrained generalized method of moments estimator (GMM) incorporating theoretical conditions for the unique existence of equilibrium prices for estimating conduct parameters in a log-linear model with homogeneous goods markets. First, we derive such conditions. Second, Monte Carlo simulations confirm that in a log-linear model, incorporating the conditions resolves the problems of implausibly low or negative values of conduct parameters. Keywords: Conduct parameters, Homogenous Goods Market, Mathematical Programming with Equilibrium Constraints, Monte Carlo simulation JEL Codes: C5, C13, L1 § INTRODUCTION Measuring competitiveness is a crucial task in the empirical industrial organization literature. Conduct parameter is considered a useful measure of competitiveness. However, it cannot be directly measured from data because data usually lack information about marginal cost. Therefore, researchers aim to identify and estimate the conduct parameter. As the simplest specification, <cit.> considers identification of conduct parameters for the linear model. <cit.> resolves the conflict on some identification problems between <cit.> and <cit.>. However, researchers often implement alternative specifications, such as the log-linear model <cit.>. In the context of the log-linear model, these papers identify estimation issues, with some estimated conduct parameters being unrealistically low or even negative. This raises doubts about the methodology, despite the identification strategy outlined by <cit.>. This is an obstacle to choosing a better specification of the demand and supply functions. To overcome the problem, we propose a constrained generalized method of moments estimator (GMM) incorporating theoretical conditions for the unique existence of equilibrium prices as constraints. First, we prove that a unique equilibrium exists under certain theoretical conditions. As far as we know, this is a new result. Second, we show that incorporating equilibrium existence and uniqueness conditions resolves the above estimation problems and makes the model work in estimating conduct parameters even in the log-linear model. Hence, our results support those of <cit.> numerically. § MODEL Consider data with T markets with homogeneous products. Assume there are N firms in each market. Let t = 1,…, T be the index of markets. Then, we obtain the supply equation: P_t = -θ∂ P_t(Q_t)/∂ Q_tQ_t + MC_t(Q_t), where Q_t is the aggregate quantity, P_t(Q_t) is the demand function, MC_t(Q_t) is the marginal cost function, and θ∈[0,1], which is the conduct parameter. The equation nests perfect competition (θ=0), Cournot competition (θ=1 / N), and perfect collusion (θ= 1) <cit.>. Consider an econometric model. Assume that the demand and the marginal cost functions are written as follows: P_t = f(Q_t, X^d_t, ε^d_t, α), MC_t = g(Q_t, X^c_t, ε^c_t, γ), where X^d_t and X^c_t are the vector of exogenous variables, ε^d_t and ε^c_t are the error terms, and α and γ are the vector of parameters. We also have the demand- and supply-side instruments, Z^d_t and Z^c_t, and assume that the error terms satisfy the mean independence condition E[ε^d_t| X^d_t, Z^d_t] = E[ε^c_t| X^c_t, Z^c_t] =0. §.§ Log-linear demand and log-linear marginal cost Consider a log-linear model, which is a typical specification. 
The demand and marginal cost functions are specified as log P_t = α_0 - (α_1 + α_2 Z^R_t) log Q_t + α_3 log Y_t + ε^d_t, log MC_t = γ_0 + γ_1 log Q_t + γ_2 log W_t + γ_3 log R_t + ε^c_t, where Y_t is an excluded demand shifter, W_t and R_t are excluded cost shifters, and Z_t^R is Bresnahan's demand rotation instrument. Then, Equation (<ref>) is written as P_t = θ (α_1 + α_2 Z^R_t) P_t + MC_t. By taking logarithm of Equation (<ref>) and substituting Equation (<ref>), we obtain log P_t = - log(1 - θ(α_1 + α_2 Z^R_t)) + γ_0 + γ_1 log Q_t + γ_2 log W_t + γ_3 log R_t + ε^c_t. In this model, the number of equilibrium prices varies. Although it is a widely known model, to our knowledge, there is no paper examining this. Thus, we derive the conditions for the uniqueness in the following proposition which are used in our estimation. Assume that α_1 + α_2 Z^R_t 0. Let Ξ = γ_0 + γ_1α_0 + α_3 log Y_t + ε^d_t/α_1 + α_2 Z^R_t + γ_2 log W_t + γ_3 log R_t + ε^c_t. The number of equilibrium prices P_t^*>0 is determined as follows: * When 1 - θ (α_1 + α_2 Z^R_t) ≤ 0, there is no equilibrium price, * When 1 - θ (α_1 + α_2 Z^R_t) >0, * If -γ_1/(α_1+α_2 Z^R) 1, there is a unique equilibrium price, * If -γ_1/(α_1+α_2 Z^R) =1, there are infinitely many equilibrium prices when exp(Ξ) = 1 - θ (α_1 + α_2 Z^R_t), but there is no equilibrium price otherwise. See the online appendix <ref> for the proof. § ESTIMATION To estimate parameters in the demand and supply equations, we use GMM estimation. Among GMM estimators, we apply the nonlinear system two-stage-least-squares (N2SLS) using Equation (<ref>) and (<ref>). Let ξ = (α_0,α_1, α_2, α_3, γ_0,γ_1, γ_2, γ_3, θ) be the vector of parameters. Given the demand equation (<ref>) and the supply equation (<ref>), we can write the error terms in the demand and supply equation as ε_t^d(ξ) = log P_t - α_0 + (α_1 + α_2 Z^R_t) log Q_t - α_3 log Y_t , ε_t^c(ξ) = log P_t + log(1 - θ(α_1 + α_2 Z^R_t)) -γ_0 - γ_1 log Q_t - γ_2 log W_t -γ_3 log R_t . To estimate the parameters, we convert the conditional moment conditions, E[ε_t^d| Z_t^d] = E[ε_t^c| Z_t^c]=0, into unconditional moment conditions, E[ε_t^d Z_t^d] = E[ε_t^cZ_t^c]=0. Using Equations (<ref>) and (<ref>), we construct the sample analog of the moment conditions: g(ξ) = [[ 1/T∑_t=1^Tε^d_t(ξ)Z_t^d; 1/T∑_t=1^Tε^c_t(ξ)Z_t^c ]]. We define the GMM estimator as the vector ξ^* that solves the problem, min_ξ g(ξ)^⊤ W g(ξ) where the weight matrix W is defined as W = [1/T∑_t = 1^T Z_t^⊤ Z_t]^-1 where Z_t=[[ Z_t^d⊤ 0; 0 Z_t^c⊤ ]]. The solution to (<ref>) is called the N2SLS estimator <cit.>. Using Proposition <ref>, we impose the following constraints: 0≤θ≤ 1, α_1 + α_2 Z_t^R >0, γ_1>0 , t = 1,…, T 1- θ(α_1 + α_2 Z_t^R) >0, t = 1,…, T. Constraint (<ref>) is a standard assumption on the conduct parameter. Constraints (<ref>) and (<ref>) relate to the uniqueness of equilibrium prices. Constraint (<ref>) implies the downward-sloping demand and upward-sloping marginal cost. See online appendix <ref> for the detailed setting. § SIMULATION RESULTS We compare N2SLS estimation with and without Constraints (<ref>), (<ref>), and (<ref>). Table <ref> presents the results. First, as reported in the literature <cit.>, N2SLS without constraints converges to implausibly low and negative conduct parameters. This is because the algorithm searches the area in which no equilibrium price and quantity exist and then finds unreasonable local optima that might have a lower objective function than at the true values. 
Second, N2SLS with constraints converges to the reasonable values. As the sample size increases, the bias and root-mean-square error (RMSE) decrease to levels comparable to linear models. Third, we find an ad hoc method that uses Equation (<ref>) to compute the residual in the supply estimation, ε_t^c, and Equation (<ref>) as an equality constraint with Constraints (<ref>), (<ref>), and (<ref>). Then, the bias and RMSE of conduct parameter θ are reduced to 0.014 and 0.217 in Panel (c), although the results do not dominate the results in Panel (b) for all parameters. Therefore, incorporating equilibrium existence and uniqueness conditions simply is helpful for conduct parameter estimation in nonlinear models such as a log-linear model. See online appendix <ref> for additional experiments under different variances of errors and the results of a linear model. § CONCLUSION We propose a constrained generalized method of moments estimator (GMM) incorporating theoretical conditions for the unique existence of equilibrium prices for estimating conduct parameters in homogeneous goods markets. First, we derive the conditions. Second, Monte Carlo simulations confirm that incorporating the conditions resolves the problems of implausibly low or negative values of conduct parameters in a log-linear model, as reported in the literature. Acknowledgments We thank Jeremy Fox, Yelda Gungor, and Isabelle Perrigne for valuable comments. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. aer § ONLINE APPENDIX §.§ Existence and uniqueness of equilibrium prices Proposition <ref> proposes the conditions for the unique existence of P_t(>0) solving the demand equation (<ref>) and supply equation (<ref>) for P_t under θ∈[0,1]. The proof is not based on the optimization by individual firms but checks if there exists a point at which the demand function and the marginal cost function cross. Therefore, we allow an equilibrium price to exist when the demand and the marginal cost are upward-sloping. Rewriting the demand equation (<ref>) as log Q_t(P_t)= α_0 - log P_t + α_3 log Y_t + ε^d_t/(α_1 + α_2 Z^R_t) and substituting this into the supply equation (<ref>), we obtain P_t =θ (α_1 + α_2 Z^R_t) P_t + exp(γ_0 + γ_1 log Q_t(P_t) + γ_2 log W_t + γ_3 log R_t + ε^c_t). = θ(α_1 + α_2 Z^R_t)P_t + exp(γ_0 + γ_1 α_0 - log P_t + α_3 log Y_t + ε^d_t/(α_1 + α_2 Z^R_t) +γ_2 log W_t + γ_3 log R_t + ε^c_t) = θ(α_1 + α_2 Z^R_t)P_t + exp(Ξ + -γ_1/α_1+α_2 Z^R_tlog P_t ) = θ(α_1 + α_2 Z^R_t)P_t + exp(Ξ) P_t^-γ_1/α_1 + α_2 Z^R_t where Ξ = γ_0 + γ_1α_0 + α_3 log Y_t + ε^d_t/α_1 + α_2 Z^R_t + γ_2 log W_t + γ_3 log R_t + ε^c_t. Any price that satisfies (<ref>) becomes an equilibrium price. To find an equilibrium price P_t^*, we define Δ(P_t) as follows: Δ(P_t) = [1 - θ (α_1 + α_2 Z^R_t)]P_t_(I) - exp(Ξ) P_t^-γ_1/α_1 + α_2 Z^R_t_(II). We label the first term as (I) and the second term as (II). An important note is that P_t =0 satisfies Δ(P_t) = 0. Thus P_t^* = 0 holds. However, our interest is a unique positive equilibrium price, so we seek conditions in which there is a unique positive price that satisfies Δ(P_t)>0. When 1 - θ (α_1 + α_2 Z^R_t) ≤ 0, (I) is always negative on P_t >0. In contrast, (II) is non-negative regardless of the sign of -γ_1/(α_1+α_2 Z^R) on P_t > 0. Therefore, Δ(P_t) is always negative on P_t>0, which implies that there is no positive equilibrium price. 
When 1 - θ (α_1 + α_2 Z^R_t) > 0, (I) becomes a line passing through the origin with a positive slope, as illustrated in Figure <ref>. Note that the first and second derivatives of (II) are given as d/dP_texp(Ξ) P_t^-γ_1/(α_1 + α_2 Z^R_t) = -γ_1/(α_1 + α_2 Z^R_t)exp(Ξ)P_t^-γ_1/(α_1 + α_2 Z^R_t) - 1, d^2/dP_t^2exp(Ξ) P_t^-γ_1/(α_1 + α_2 Z^R_t) = -γ_1/(α_1 + α_2 Z^R_t)(-γ_1/(α_1 + α_2 Z^R_t) - 1) exp(Ξ) P_t^-γ_1/(α_1 + α_2 Z^R_t) - 2. Therefore, the shape of (II) on P_t >0 changes with the value of -γ_1/(α_1+α_2 Z^R_t), which also determines how many times (I) and (II) cross on P_t >0. The case -γ_1/(α_1+α_2 Z^R_t) < 0 is illustrated in Figure <ref>. In this case, (II) is a monotone decreasing convex function because the first derivative is negative and the second derivative is positive. As (I) is monotone increasing in P_t >0, (I) and (II) cross only once on P_t >0. The case -γ_1/(α_1+α_2 Z^R_t) ∈ [0, 1) is illustrated in Figure <ref>. In this case, (II) becomes a monotone increasing concave function passing through the origin because the first derivative is positive and the second derivative is negative. As the first derivative diverges to infinity at P_t = 0 since -γ_1/(α_1+α_2 Z^R_t)-1<0, (II) must be greater than (I) around P_t = 0. Then, as P_t becomes large starting from zero, the difference between the two terms shrinks, and they eventually cross only once on P_t >0. When -γ_1/(α_1+α_2 Z^R_t) = 1, (<ref>) becomes 0 = [1 - θ (α_1 + α_2 Z^R_t) - exp(Ξ)] P_t. If 1 - θ (α_1 + α_2 Z^R_t) = exp(Ξ), the above equation holds for any P_t. Thus there are infinitely many equilibrium prices on P_t >0, which is illustrated in Figure <ref>. When 1 - θ (α_1 + α_2 Z^R_t) ≠ exp(Ξ), there is no equilibrium price on P_t >0 because the right-hand side of the equation is always non-zero on P_t >0, which is illustrated in Figure <ref>. The case -γ_1/(α_1+α_2 Z^R_t) > 1 is illustrated in Figure <ref>. In this case, (II) becomes a monotone increasing convex function passing through the origin because the first and second derivatives are positive. Since (II) lies below (I) near the origin and grows faster than linearly, (I) and (II) cross only once on P_t >0. From Δ (P_t) = 0, the positive equilibrium price can be written as 0 = [1-θ(α_1 + α_2 Z^R_t)]P_t - exp(Ξ) P_t^-γ_1/(α_1 + α_2 Z^R_t) 0 = 1-θ(α_1 + α_2 Z^R_t) - exp(Ξ)P_t^-γ_1/(α_1 + α_2 Z^R_t)- 1 P_t^-γ_1/(α_1 + α_2 Z^R_t)- 1 = 1-θ(α_1 + α_2 Z^R_t)/exp(Ξ) P_t^* = (1-θ(α_1 + α_2 Z^R_t)/exp(Ξ))^-(α_1 + α_2 Z^R_t)/(γ_1 +α_1 + α_2 Z^R_t). Figure <ref> illustrates how the demand and supply equations cross under these conditions. When 1- θ(α_1 + α_2 Z^R_t) ≤ 0, the supply equation (<ref>) is ill-defined because the argument of the log function is non-positive. Thus, there cannot be any equilibrium. Hereafter, assume that 1- θ(α_1 + α_2 Z^R_t) > 0. When -γ_1/(α_1 + α_2 Z^R_t) = 1 and exp(Ξ) = 1- θ(α_1 + α_2 Z^R_t), the demand equation and the supply equation coincide. Thus there are infinitely many equilibria in the model. When -γ_1/(α_1 + α_2 Z^R_t) = 1 and exp(Ξ) ≠ 1- θ(α_1 + α_2 Z^R_t), the demand and supply equations have the same slope and different intercepts, which means that the two equations are parallel. Thus, there is no equilibrium. When -γ_1/(α_1 + α_2 Z^R_t) ≠ 1, the demand and supply equations have different slopes, and hence we can find a unique equilibrium. §.§ Simulation and estimation procedure To generate the simulation data, for each model, we first generate the exogenous variables Y_t, Z^R_t, W_t, R_t, H_t, and K_t and the error terms ε_t^c and ε_t^d based on the data generation process in Table <ref>.
By substituting the Equation (<ref>) into Equation (<ref>) and solving it for P_t, the log aggregate quantity is given as: log Q_t = α_0 + α_3 log Y_t + log (1 - θ (α_1 + α_2 Z^R_t)) - γ_0 - γ_2 log W_t - γ_3 log R_t + ε^d_t - ε^c_t/γ_1+ α_1 + α_2 Z^R_t. We compute the equilibrium quantity Q_t for the log-linear model by (<ref>). We then compute the equilibrium price P_t by substituting Q_t and other variables into the demand function (<ref>). We generate 1000 data sets of 100, 200, 1000, 1500 markets. We jointly estimate the demand and supply parameters by the simultaneous equation model <cit.> from the true values. We use state-of-the-art constrained optimization solvers, i.e., which implements an interior point line search filter method that aims to find a local solution of nonlinear programming problems. §.§ Why does the estimation without equilibrium conditions converge to the extremely low conduct parameters? We should interpret reasons why the estimation without equilibrium conditions converges to the negative conduct parameters, as shown in Panel (a) in Table <ref>, although the obtained parameters cannot be interpreted as the exogenous elements determining equilibrium outcomes. First, we rewrite the terms in Equation (<ref>) into log(1 - θ C_t)-γ_0 where C_t=α_1 + α_2 Z^R_t is given in the demand estimation. If the model allows θ<0, then 1 - θ C_t can take an arbitrarily large value through log-transformation relative to γ_0. For example, if 1 - θ C_t=1,000,000, then log(1 - θ C_t)=13.8, which has a small contribution relative to γ_0 for evaluating the moment conditions. §.§ Additional experiments Additional results for different σ are shown in Tables <ref> and <ref>. As a summary, the main findings in the main text are robust. In Tables <ref> and <ref>, we illustrate that N2SLS with Constraints (<ref>) works for the linear model as in <cit.>. This means that incorporating the conditions is innocuous for the linear model. Additional ad hoc improvement is possible by using (<ref>) to compute ε_t^c and (<ref>) as constraints with Constraints (<ref>), (<ref>), and (<ref>). Table <ref> shows that the estimation of the conduct parameter θ improves, that is, the bias is 0.014 and RMSE is 0.217. However, the results with ad hoc improvement do not dominate the results without the improvement for all parameters.
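To make the Monte Carlo design above concrete, the following sketch generates one synthetic sample from the log-linear model using the closed-form expression for log Q_t, and then minimizes the N2SLS objective subject to the equilibrium constraints. The parameter values, error distributions, and instrument sets are placeholder assumptions (the exact data-generating process is given in Table <ref>), and scipy's SLSQP routine stands in for the interior-point solver used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 200
# Placeholder true parameters (not the paper's Table values).
a0, a1, a2, a3 = 10.0, 1.0, 0.1, 1.0        # demand parameters alpha
g0, g1, g2, g3 = 1.0, 1.0, 1.0, 1.0         # cost parameters gamma
theta = 0.5                                  # conduct parameter
logY, ZR = rng.normal(3, 1, T), rng.normal(3, 1, T)
logW, logR = rng.normal(3, 1, T), rng.normal(3, 1, T)
eps_d, eps_c = rng.normal(0, 0.5, T), rng.normal(0, 0.5, T)

# Equilibrium quantities and prices from the closed-form solution above.
logQ = (a0 + a3 * logY + np.log(1 - theta * (a1 + a2 * ZR))
        - g0 - g2 * logW - g3 * logR + eps_d - eps_c) / (g1 + a1 + a2 * ZR)
logP = a0 - (a1 + a2 * ZR) * logQ + a3 * logY + eps_d

Zd = np.column_stack([np.ones(T), logY, ZR, logW, logR])    # illustrative instruments
Zc = np.column_stack([np.ones(T), logY, ZR, logW, logR])

def gmm_objective(xi):
    a0_, a1_, a2_, a3_, g0_, g1_, g2_, g3_, th_ = xi
    e_d = logP - a0_ + (a1_ + a2_ * ZR) * logQ - a3_ * logY
    e_c = (logP + np.log(np.clip(1 - th_ * (a1_ + a2_ * ZR), 1e-12, None))  # guard the log
           - g0_ - g1_ * logQ - g2_ * logW - g3_ * logR)
    g = np.concatenate([Zd.T @ e_d, Zc.T @ e_c]) / T
    W = np.linalg.inv(np.block([[Zd.T @ Zd, np.zeros((5, 5))],
                                [np.zeros((5, 5)), Zc.T @ Zc]]) / T)
    return g @ W @ g

cons = [{"type": "ineq", "fun": lambda xi: xi[1] + xi[2] * ZR},                 # a1 + a2 Z^R_t > 0
        {"type": "ineq", "fun": lambda xi: 1 - xi[8] * (xi[1] + xi[2] * ZR)},   # uniqueness
        {"type": "ineq", "fun": lambda xi: xi[5]}]                              # gamma_1 > 0
x0 = np.array([a0, a1, a2, a3, g0, g1, g2, g3, theta]) * 0.9
res = minimize(gmm_objective, x0, constraints=cons,
               bounds=[(None, None)] * 8 + [(0.0, 1.0)], method="SLSQP")
print("estimated conduct parameter:", res.x[-1])
```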
http://arxiv.org/abs/2407.13145v1
20240718040557
Ultra-low threshold chaos in cavity magnomechanics
[ "Jiao Peng", "Zeng-Xing Liu", "Ya-Fei Yu", "Hao Xiong" ]
nlin.CD
[ "nlin.CD", "physics.optics" ]
zengxingliu@hust.edu.cn 20031115@m.scnu.edu.cn ^1School of Electronic Engineering & Intelligentization, Dongguan University of Technology, Dongguan, Guangdong 523808, China ^2School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou, Guangdong 510006, China ^3School of physics, Huazhong University of Science and Technology, Wuhan 430074, China § ABSTRACT Cavity magnomechanics using mechanical degrees of freedom in ferromagnetic crystals provides a powerful platform for observing many interesting classical and quantum nonlinear phenomena in the emerging field of magnon spintronics. However, to date, the generation and control of chaotic motion in a cavity magnomechanical system remain an outstanding challenge due to the inherently weak nonlinear interaction of magnons. Here, we present an efficient mechanism for achieving magnomechanical chaos, in which the magnomechanical system is coherently driven by a two-tone microwave field consisting of a pump field and a probe field. Numerical simulations show that the relative phase of the two input fields plays an important role in controlling the appearance of chaotic motion and, more importantly, the threshold power of chaos is reduced by 6 orders of magnitude from watts (W) to microwatts (μW). In addition to providing insight into magnonics nonlinearity, cavity magnomechanical chaos will always be of interest because of its significance both in fundamental physics and potential applications ranging from ultra-low threshold chaotic motion to chaos-based secret information processing. Ultra-low threshold chaos in cavity magnomechanics Hao Xiong^3 July 22, 2024 ================================================== § INTRODUCTION Cavity magnomechanics is a rapidly developing research field that provides a special platform for observing many interesting classical and quantum phenomena <cit.>. In a magnomechanical system, the ferromagnetic Kittel mode (a uniform mode of spin waves) of the Yttrium Iron Garnet (YIG) sphere <cit.> can couple to the mechanical degrees of freedom via radiation pressure-like magnetostrictive interaction <cit.> (also known as magnetostrictive effect <cit.>). Experimental manipulation of the Kittel mode and vibrational mode via magnetostrictive effects has been demonstrated experimentally <cit.>, and many intriguing phenomena have been reported in cavity magnomechanics, ranging from magnomechanically induced transparency <cit.> and magnetostrictive-induced slow-light effect <cit.> to entanglement and squeezing states of magnons <cit.> and the ground-state cooling of mechanical vibration mode <cit.>. These effects are similar to those obtained via the mechanical effects of light in cavity optomechanics <cit.>, opening a new way for providing a new type of matter-matter interaction based on the mechanical effects of magnons. In the past few years, a large number of studies have shown that cavity magnomechanical systems exhibit rich but extraordinary nonlinear effects <cit.>. Recently, an experiment has showed that three different kinds of nonlinearities can be simultaneously activated under a strong microwave drive field, namely, magnetostriction, magnon self-Kerr, and magnon-phonon cross-Kerr nonlinearities, and the Kerr-modified mechanical bistability has been observed <cit.>. Furthermore, the generation of magnonic frequency combs based on the resonantly enhanced magnetostrictive effect is predicted theoretically <cit.> and quickly verified experimentally <cit.>. 
However, although chaos is a kind of nonlinear motion prevalent in nature, the generation and manipulation of chaos <cit.> based on the mechanical effects of magnons remain a prominent challenge due to the weak nonlinear interaction of magnons <cit.>. The study of cavity magnomechanical chaos is, undeniably, one of the most important aspects of exploring nonlinear properties in cavity magnomechanics <cit.>. In addition, the investigation of magnomechanical chaos may provide theoretical support for the realization of chaos-based secret information processing and quantum communication in the field of magnonics <cit.>. In the present work, we propose an effective mechanism for realizing magnomechanical chaos by introducing phase modulation. The system is coherently driven by a two-tone microwave field consisting of a pump field and a probe field, where the relative phase of the two input fields plays an important role in controlling the appearance of chaotic motion and the corresponding chaotic dynamics. With state-of-the-art experimental parameters <cit.>, we show that the threshold power of chaotic motion is significantly reduced by six orders of magnitude, which effectively removes the bottleneck whereby the weak magnetostrictive interaction alone cannot trigger chaotic motion in the cavity magnomechanical system. Furthermore, the influence of the inherent magnon Kerr nonlinearity <cit.> on the chaotic dynamics is also discussed in detail, and the results suggest that the Kerr coefficient plays an important role in determining the degree of chaos in the system. Our scheme provides a new perspective for the study of the chaotic behavior of magnons and suggests that cavity magnomechanics with its inherent nonlinearity is a good platform for exploring chaotic phenomena by introducing phase modulation. § PHYSICAL MODEL AND METHODS The physical model we consider is a cavity magnomechanical system, as schematically shown in Fig. <ref>(a), in which a highly polished YIG sphere is placed in a three-dimensional microwave cavity <cit.>. The microwave drive field is introduced into the microwave cavity through the input port, and the ferromagnetic Kittel mode (magnon mode) of the YIG sphere is thereby excited <cit.>. Furthermore, a uniform static bias magnetic field (with strength B_0) is applied to the YIG sphere to saturate the magnetization and establish the coupling between the magnon mode and the microwave mode <cit.>. As shown in Fig. <ref>(b), the magnon mode couples to the microwave cavity mode through the magnetic dipole interaction with the coupling strength g_ma. The frequency of the magnon mode ω_m is directly proportional to the strength of the bias magnetic field, i.e., ω _m = γ B_0 with the gyromagnetic ratio γ /2π = 28 GHz/T <cit.>. According to the magnetostriction effect <cit.>, the varying magnetization induced by the magnon excitation causes a deformation of the YIG sphere; at the same time, the deformation of the YIG sphere in response to the external magnetic field also affects the magnetization, which gives rise to the coupling between the magnon mode and the vibrational mode <cit.>. As shown in Fig. <ref>(b), the magnetostrictive force leads to the coupling between the deformation and magnetostatic modes with the coupling strength g_mb. The magnetostrictive interaction can be described by a radiation pressure-like Hamiltonian, i.e., Ĥ_int = ħ g_mbm̂^†m̂(b̂ + b̂^† ), where ħ is the reduced Planck's constant and b̂ (b̂^†) is the boson annihilation (creation) operator of the deformation mode <cit.>.
m̂=√(V_m/2ħϱM)(M_x-iM_y) is the annihilation operator of the magnon mode, with V_m the YIG sphere volume, M the saturation magnetization, and M_x,y,z the magnetization components <cit.>. Furthermore, we assume that the system is driven by a two-tone microwave driving field consisting of a pump field with the central frequency ω _d, the pump power P _d, the driving amplitude ε _d = √(P_d /(ħω _d )), the initial phase φ_d, and a probe field with the central frequency ω _p, the pump power P _p, the driving amplitude ε _p = √(P_p /(ħω _p)), the initial phase φ_p, respectively. Therefore, the Hamiltonian of such cavity magnomechanical system can be written as Ĥ = ħω _a â^†â + ħω _m m̂^†m̂ + ħω _b b̂^†b̂ + ħ g_ma (â^†m̂ + âm̂^† ) + ħ g_mbm̂^†m̂(b̂ + b̂^† )+ K_m m̂^†m̂m̂^†m̂ + ħ√(κ _1){ε _d[ae^i(ω_dt+φ _d) + a^†e^-i(ω_dt+φ _d)] + ε_p[ae^i(ω_pt+φ _p) + a^†e^-i(ω_pt+φ _p)]}, where â and â^† are the annihilation and creation operators of the microwave cavity mode with the intrinsic frequency ω _a. ω _m and ω _b are the intrinsic frequencies of the ferromagnetic Kittel mode and the vibrational mode respectively. κ_1 refers to the loss rate of the microwave cavity mode associated with the input coupling. It is worth noting that the YIG sphere also possesses an intrinsic magnon Kerr nonlinearity due to the magnetocrystalline anisotropy <cit.>. Taking the intrinsic magnon Kerr nonlinearity into account, i.e., K_m m̂^†m̂m̂^†m̂, where K_m = μ _0K_anγ ^2/(M^2V_m) is the Kerr nonlinear coefficient, with the vacuum permeability μ_0, the first-order magnetocrystalline anisotropy constant K_an, and the gyromagnetic ratio γ <cit.>. Note that the Kerr coefficient can be positive or negative depending on which crystallographic axis [100] or [110] of the YIG sphere is aligned in the direction of the static magnetic field B_o <cit.>. It should be pointed out that under a strong microwave drive field, three kinds of nonlinearity, i.e., magnetostriction, magnon self-Kerr, and magnon-phonon cross-Kerr nonlinearities can be simultaneously activated in the cavity magnomechanical system <cit.>. However, the cross Kerr coefficient is three orders of magnitude smaller than the self-Kerr coefficient <cit.>, so the effect of cross Kerr nonlinearity on chaotic motion is not included in our model. The dynamics of the magnomechanical system can be described by the Heisenberg-Langevin equations, and thus, in a frame rotating with the microwave drive frequency ω_d, we can obtain that ȧ = (- iΔ_a-κ _a /2)a-ig_ma m-i√(κ_1)[ε_d e^-iφ_d+ε_p e^-i(Δ_pt+φ_p)], ḃ = (- iω_b-κ _b /2)b-ig_mb m^† m , ṁ = (- iΔ_m-κ _m /2)m-ig_ma a-ig_mb(b + b^† )m - iK_m (2m^† m+1)m, where Δ_a =ω_a -ω_d and Δ_m =ω _m-ω_d are, respectively, the detunings from the microwave pumping field and the cavity photon and magnon modes. Δ _p=ω _p -ω _d is the beat frequency between the microwave pumping and probe fields. κ_a, κ_b and κ_m are the decay rate of the microwave cavity mode, the vibrational mode, and the Kittel modes, respectively. The operators of the microwave cavity, vibrational, and magnon modes are reduced to their expectation values in the semiclassical approximation, viz. o(t) = ⟨ô(t)⟩, with o = a, b, or m. Furthermore, the mean-field approximation by factorizing averages is also used, and the quantum noise terms are dropped safely <cit.>. 
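For orientation, the drive-amplitude formula quoted above can be evaluated directly; the short sketch below compares the photon flux at a watt-level pump with that at the microwatt level discussed later. The value of ħ is physical, while the drive frequency is an assumed illustrative value rather than one taken from the experiments cited above.

```python
import numpy as np

hbar = 1.054571817e-34          # J s
omega_d = 2 * np.pi * 10e9      # assumed drive frequency of order 2*pi*10 GHz (illustrative)

def drive_amplitude(power_watts):
    """epsilon_d = sqrt(P_d / (hbar * omega_d)), in units of sqrt(photons per second)."""
    return np.sqrt(power_watts / (hbar * omega_d))

for P in (2.0, 0.5e-6):         # watt-level versus microwatt-level pump powers
    print(f"P_d = {P:g} W  ->  epsilon_d = {drive_amplitude(P):.3e} sqrt(Hz)")
```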
Magnomechanical interactions, including the radiation pressure-like magnetostrictive effect and the magnon Kerr nonlinearity, involve a wealth of nonlinear physics <cit.>, such as mechanical bistability <cit.> and magnonic frequency combs <cit.>. It is well known that a nonlinear system is often accompanied by chaotic phenomena when the nonlinearity reaches the chaotic threshold <cit.>. A very natural question is whether the mechanical effects of magnons, similar to the mechanical effects of light <cit.>, can trigger chaotic motion. In order to facilitate the discussion of the chaotic characteristics of the system, we write the mean value of each operator as o = o_r + io_i, where o_r and o_i are real numbers. Separating the real and imaginary parts, we obtain the real-valued equations of motion as follows ȧ_r = Δ _a a_i-κ_a/2a_r+ g_ma m_i+√(κ _1)[ε _dsinφ _d+ε _psin(ϖ)], ȧ_i = -Δ _a a_r-κ_a/2a_i -g_mam_r-√(κ _1)[ε _dcosφ_d+ε _pcos(ϖ)], ḃ_r = ω _b b_i - κ_b /2b_r, ḃ_i = -ω _b b_r - κ_b /2b_i - g_mb (m_r^2 + m_i^2 ), ṁ_r = g_ma a_i- κ _m /2m_r + ℵm_i, ṁ_i = - g_ma a_r - κ _m /2m_i - ℵm_r, where ϖ=Δ_pt+φ_p and ℵ = 2g_mb b_r + Δ _m + 2K_m (m_r^2 + m_i^2 ) + K_m. Furthermore, to describe the hypersensitivity of the system to initial conditions (the so-called butterfly effect), a perturbation δ⃗= (δ a_r ,δ a_i ,δ b_r ,δ b_i ,δ m_r ,δ m_i )^T is considered, which characterizes the degree of divergence or convergence of adjacent trajectories in phase space. The evolution of the perturbation δ⃗ is therefore derived by linearizing Eqs. (<ref>) as dδ⃗/dt = Mδ⃗ <cit.>, with the coefficient matrix M= [ - κ _a /2 Δ _a 0 0 0 g_ma; - Δ _a - κ _a /2 0 0 - g_ma 0; 0 0 - κ _b /2 ω _b 0 0; 0 0 - ω _b - κ _b /2 2g_mb m_r - 2g_mb m_i; 0 g_ma 2g_mb m_i 0 B_1 A_1; - g_ma 0 - 2g_mb m_r 0 A_2 B_2 ], where A_1 = 2g_mb b_r + 6K_m m_i^2 + 2K_m m_r^2 +Δ _m +K_m, A_2 = -2g_mb b_r - 6K_m m_r^2 - 2K_m m_i^2 -Δ _m -K_m, B_1,2 = ± 4K_m m_r m_i -κ_m/2. The temporal evolution of adjacent trajectories in phase space, δ I_m (with δ I_m = | m + δ m|^2 - I_m, where I_m = |m|^2 is the intensity of the magnon mode), can be acquired by numerically solving Eqs. (<ref>) and the perturbation equation dδ⃗/dt = Mδ⃗ together. The general solution can be written as δ I_m(t) = δ I_m(0)e^λ_LEt, and the logarithmic slope λ_LE=lim_t→∞lim_δ I_m(0)→ 01/tln|δ I_m(t)/δ I_m(0)| defines the Lyapunov exponent, which quantifies the degree of chaos in the system and its sensitivity to the initial conditions <cit.>. A positive Lyapunov exponent (λ_LE>0) implies divergence and sensitivity to initial conditions. If, conversely, the Lyapunov exponent is negative (λ_LE<0), then the trajectories of two systems with infinitesimally different initial conditions will not diverge. In particular, a zero Lyapunov exponent (λ_LE=0) indicates that the orbits maintain their relative positions and lie on a stable attractor <cit.>. In what follows, we will discuss in detail the realization of chaotic motion by introducing phase modulation in the case of weak nonlinear magnomechanical interactions. First of all, to discuss the phase-dependent effects more conveniently, we consider the transformation ã= ae^iφ_p (ã^†=a^† e^-iφ_p). Thus, the Hamiltonian of the two-tone microwave drive field in Eq. (<ref>) should be rewritten as H_in=ħ√(κ _1){[ε _de^-iΦ +ε_p e^-iΔ_pt]a^†-H.c.} (in the frame rotating at ω_d). Here, Φ is the relative phase of the two-tone microwave input field, i.e., Φ=φ_d-φ_p.
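A numerical estimate of λ_LE along these lines can be sketched as follows. For simplicity, the sketch evolves two nearby trajectories of the full nonlinear equations and periodically renormalizes their separation (a Benettin-type estimate), rather than integrating the linearized equation dδ/dt = Mδ; the parameter values are placeholders of a realistic order of magnitude and are not those used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

two_pi = 2 * np.pi
hbar = 1.054571817e-34
# Placeholder parameters (rad/s unless noted); realistic orders of magnitude only.
w_b, kappa_a, kappa_b, kappa_m = two_pi * 10e6, two_pi * 2e6, two_pi * 100, two_pi * 1e6
g_ma, g_mb, K_m = two_pi * 2e6, two_pi * 1.0, -two_pi * 6.5e-9
Delta_a = Delta_m = Delta_p = w_b
kappa_1 = kappa_a / 2
w_d = two_pi * 10e9                                    # assumed drive frequency
P_d, P_p = 0.5e-6, 0.05e-6                             # pump and probe powers (W)
eps_d, eps_p = np.sqrt(P_d / (hbar * w_d)), np.sqrt(P_p / (hbar * w_d))
phi_d, phi_p = 0.4 * np.pi, 0.0                        # relative phase Phi = 0.4*pi

def rhs(t, y):
    """Real-valued equations of motion for (a_r, a_i, b_r, b_i, m_r, m_i)."""
    a_r, a_i, b_r, b_i, m_r, m_i = y
    varpi = Delta_p * t + phi_p
    aleph = 2 * g_mb * b_r + Delta_m + 2 * K_m * (m_r**2 + m_i**2) + K_m
    return [Delta_a * a_i - 0.5 * kappa_a * a_r + g_ma * m_i
            + np.sqrt(kappa_1) * (eps_d * np.sin(phi_d) + eps_p * np.sin(varpi)),
            -Delta_a * a_r - 0.5 * kappa_a * a_i - g_ma * m_r
            - np.sqrt(kappa_1) * (eps_d * np.cos(phi_d) + eps_p * np.cos(varpi)),
            w_b * b_i - 0.5 * kappa_b * b_r,
            -w_b * b_r - 0.5 * kappa_b * b_i - g_mb * (m_r**2 + m_i**2),
            g_ma * a_i - 0.5 * kappa_m * m_r + aleph * m_i,
            -g_ma * a_r - 0.5 * kappa_m * m_i - aleph * m_r]

# Benettin-type estimate: evolve two nearby trajectories, renormalizing the separation.
y = np.zeros(6)
y_pert = y + 1e-6
d0 = np.linalg.norm(y_pert - y)
log_growth, dt, n_seg = 0.0, 1e-7, 200
for step in range(n_seg):
    t0 = step * dt
    y = solve_ivp(rhs, (t0, t0 + dt), y, rtol=1e-8).y[:, -1]
    y_pert = solve_ivp(rhs, (t0, t0 + dt), y_pert, rtol=1e-8).y[:, -1]
    d = np.linalg.norm(y_pert - y)
    log_growth += np.log(d / d0)
    y_pert = y + (y_pert - y) * (d0 / d)               # rescale the separation back to d0
print("estimated lambda_LE =", log_growth / (n_seg * dt), "s^-1")
```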
Thereupon, we only need to discuss the dependence of magnomechanical chaos on the relative phase Φ. § RESULTS AND DISCUSSION Figure <ref> shows how the Lyapunov exponent varies with the microwave driving field power P_d in the presence and absence of phase modulation. To be specific, consider first the case without phase modulation, i.e., when the initial phases of the two-tone microwave input field are zero, as shown in Fig. <ref>(a). We can clearly see that the weak nonlinear magnetostrictive interaction of magnons presents a challenge for generating magnomechanical chaos. For example, even when the microwave drive field power is as high as P_d = 1.5 W, the Lyapunov exponent is 0 [brown dot in Fig. <ref>(a)]. The oscillation of the magnons in the temporal domain is periodic, as shown in the inset of Fig. <ref>(a), and the flat evolution of lnδI_m indicates that the trajectories of nearby points in phase space subject to an infinitesimal disturbance will not diverge. In this case, we have to keep increasing the microwave drive field power to enhance the nonlinear response of the system. Understandably, as the microwave drive power increases, the nonlinear response of the system is also enhanced. When the nonlinearity reaches the chaos threshold, the evolution of the system changes from an ordered state to a chaotic state <cit.>. The numerical simulation results show that the threshold driving field power required to generate magnomechanical chaos is P_d ∼ 2.0 W [shown in Fig. <ref>(a)]. Such excessive driving power, however, causes significant thermal noise that cannot be ignored. Furthermore, when the system temperature is higher than the Curie temperature of the YIG sphere, the ferromagnetism and quantum coherence of the YIG sphere will disappear <cit.>. Besides, under high input power, many other higher-order terms may become too important to be ignored; for example, the Holstein-Primakoff approximation will no longer apply, and these inevitable effects will make the system too complicated to analyze. Therefore, it is of great significance to reduce the threshold power of magnomechanical chaos. Advantageously, we find that the chaos threshold can be greatly reduced by introducing phase modulation. When the relative phase of the two-tone microwave input field is Φ = 0.4 π, as shown in Fig. <ref>(b), a positive Lyapunov exponent can be obtained even if the driving field power is reduced to the magnitude of microwatts (six orders of magnitude less than in the case without phase modulation). For instance, when the power of the microwave driving field is P_d =0.5 μW [brown dot in Fig. <ref>(b)], an aperiodic oscillation of the magnons appears, and the calculated exponential divergence of δI_m indicates a chaotic regime in which initially nearby points in phase space evolve into completely different, separating states, as shown in the inset of Fig. <ref>(b). From the above discussion, we can see that in addition to the driving field power, the phase of the microwave driving field plays a crucial role in the chaotic behavior of the cavity magnomechanical system. To further explore the strong dependence of the magnomechanical chaotic motion on the phase modulation, the Lyapunov exponent as a function of the relative phase Φ of the two-tone microwave input field is plotted in Fig. <ref>(a).
As the relative phase varies in the range of 0-2π, the Lyapunov exponent alternates between positive and zero, that is, the chaotic oscillation of magnons turns up in some phase regions, and other regions are non-chaotic, including periodic oscillation and period-doubling bifurcation <cit.>. Numerical calculation of the perturbation lnδI_m varies the relative phase of the two-tone microwave input field Φ in the temporal domain [shown in Fig. <ref>(b)] confirms these results. We can clearly see that the perturbation lnδI_m changes with the variation of the relative phase, and there are several obvious flat evolution and exponential divergence of lnδI_m in Fig. <ref>(b), which shows excellent agreement with Fig. <ref>(a). Furthermore, in order to describe the nonlinear dynamic behavior of the system more comprehensively, the intensity of the magnon mode |m|^2, the perturbation lnδI_m, the sideband spectra, as well as the phase-space dynamical trajectories of the magnon have been discussed under the different relative phase of the two-tone microwave input field. Two kinds of specific situations are analysed in detail. When the relative phase Φ/π = 1.5 [brown dot in Fig. <ref>(a)], the Lyapunov exponent is zero, which means that the evolution of the magnons appears period-doubling bifurcation <cit.>. In the temporal domain, the non-monochromatic magnonic oscillation |m|^2 and the flat evolution of the perturbation δI_m well demonstrate the period-doubling bifurcation process. In the frequency domain, the spectrum of the magnonic dynamics S(ω) (ω is the spectroscopy frequency), obtained by performing the fast Fourier transform of the time series, also conforms to this dynamic behavior. In addition, as shown in Fig. <ref>(c), the dynamical trajectory of magnon evolution in phase space under infinitesimally initial perturbation will finally oscillate in the limited circles. In another case, when the relative phase of the two-tone microwave input field Φ/π = 0.4 [purple dot in Fig. <ref>(a)], a positive Lyapunov exponent has been obtained, which means that the system is extremely sensitive to slight changes in the initial conditions. The aperiodic oscillation of the magnon intensity I_m and the continuum sideband spectra well verify this chaotic behaviour <cit.>. Moreover, the perturbation δI_m diverge exponentially, implying that the system is extremely sensitive to the initial condition, which is one of the basic characteristics of chaotic motion <cit.>. The evolution of initial nearby trajectory in phase space, as shown in Fig. <ref>(d), becomes unpredictable and random. From the above discussion, we can see that the cavity magnomechanical chaos can be easily realized by phase modulation and the transition from order to chaos can be regulated, which is of great significance to the study of chaotic motion and its regulation in the cavity magnomechanics. Up to now, we have shown the generation and manipulation of the cavity magnomechanical chaos induced by phase modulation. It can be seen from Eq. (<ref>) that the system nonlinearity is derived from two different kinds of nonlinearities, namely, the radiation-pressure-like magnetostrictive interaction and the magnon Kerr nonlinearity <cit.>. Notably, the Kerr coefficient is inversely proportional to the volume V_m of the YIG sphere, i.e., K_m∝V_m^-1, and thus, the Kerr effect of magnons can become important for a small YIG sphere. 
Furthermore, the Kerr coefficient becomes positive or negative when the crystallographic axis [100] or [110] of the YIG is aligned along the static field B_o <cit.>. Therefore, it is necessary to discuss the influence of the magnon Kerr effect on chaotic dynamics. To this aim, numerical calculation of the Lyapunov exponent varying with the magnon Kerr coefficient K_m/2π from 10 nHz to -10 nHz has been shown in Fig. <ref>(a). Intriguingly, when the magnon Kerr coefficient changes from 0 to 10 nHz, i.e., the [110] axis of the YIG sphere is parallel to the static magnetic field, the Lyapunov exponent is always zero. This implies that the trajectories of two adjacent points with infinitely small initial conditions will not diverge, indicating that the system is in a non-chaotic regime. However, when the magnetic field direction is changed so that the [100] axis of the YIG sphere is parallel to the static magnetic field, the magnon Kerr coefficient K_m is negative. Under this circumstance, a positive Lyapunov exponent indicates a totally different regime in which initially nearby points in phase space evolve into completely different states. Moreover, the chaotic degree of the system changes constantly when the magnon Kerr coefficient varies from 0 to -10 nHZ. More specifically, the temporal evolution of the perturbation lnδI_m with different magnon Kerr coefficient K_m/2π = 5, -5, and -10 nHz are shown by the blue, green, and brown lines in the illustration in Fig. <ref>(a), respectively. It is worth noting that when the Kerr coefficient K_m/2π=0, the Lyapunov exponent is negative, indicating that the system is in a periodic state. This reveals that the appearance of magnomechanical chaos is the result of the combined effect of magnetostrictive interaction and magnon Kerr nonlinearity. Furthermore, a high dependence of the perturbation evolution on the magnon Kerr coefficient is observed in Fig. <ref>(b). Among them, the flat evolution of lnδI_m and the exponential divergence of δI_m are, respectively, observed in the region of K_m/2π∈ (10, 0) nHz and K_m/2π∈ (0, -10) nHz, which show an excellent agreement with the result in Fig. <ref>(a). Likewise, the magnonic evolution and the magnonic sideband spectrum with different magnon Kerr coefficients are also investigated for the sake of verifying the influence of the magnon Kerr effect on chaotic dynamics. The the periodic and aperiodic oscillations of the mangons, as well as the separated and continuous sideband spectra, as shown in Figs. <ref>(c) and (d), correspond one-to-one with the results in Fig. <ref>(a). Finally, we give some discussion on the feasibility of the experimental realization of the cavity magnomechanical chaos. First, the present system is simple and has high feasibility in experimental implementation. The magnetostrictive interaction and the magnon Kerr effect have been experimentally demonstrated, and the simulation parameters used in this work are chosen from the recent experiments <cit.>. Second, for a YIG sphere with the diameter 0.28-mm, a negative magnon Kerr nonlinear coefficient can be yielded K_m/2π≈ -6.5 nHz when the [110] axis of the YIG sphere aligned parallel to the static magnetic field <cit.>, which is well above the threshold for triggering chaotic motion required for our theoretical calculations. Furthermore, the magnon Kerr coefficient can be further strengthened by reducing the volume V_m of the YIG sphere <cit.>. 
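A Kerr-coefficient scan of the kind described above can be sketched with the same assumed largest_lyapunov helper; the sampled magnitudes are purely illustrative, and the sign is simply swept from positive to negative, corresponding to the two orientations of the crystallographic axes discussed in the text.

```python
import numpy as np

# Sweep the Kerr coefficient from positive to negative values at a fixed relative phase.
for Km in np.linspace(1e-7, -1e-7, 9):    # placeholder magnitudes, not calibrated to 2*pi*nHz
    lam = largest_lyapunov(Phi=0.4*np.pi, Km=Km)
    print(f"K_m = {Km:+.1e}   lambda_LE = {lam:+.3e}   ->",
          "chaotic" if lam > 0 else "non-chaotic")
```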
On the other hand, for the experimental detection of the magnomechanical chaos, the spectral information of the magnon can be conveniently readout through the microwave photons using a three-dimensional copper cavity, as the experiments <cit.> have done. Third, Kerr-modified magnomechanical chaos may also hold for other magnon-coupled systems because magnon possess excellent compatibility with other quasiparticles (for example, photons and qubits). Finally, with the advancement of nanoprocessing technology, YIG spheres can be easily integrated with on-chip devices, and magnomechanical chaos may find potential applications in secure communication based on magnetic devices. § CONCLUSION To conclude, nonlinear chaotic dynamics in the cavity magnomechanical system is discussed in detail. Using the same parameters as the recent cavity magnomechanical experiments, we identify that the outstanding challenge that weak nonlinear magnomechanical interaction cannot trigger chaotic motion can be effectively solved by introducing phase modulation. The results indicate that the relative phase of the two-tone input field has a significant affect on the dynamic of the system, thereby inducing the appearance of ultra-low threshold chaotic motion. Furthermore, the chaotic behavior exhibits a high dependence on the magnon Kerr nonlinearity, which reminds us of the possibility that the "on" and "off" of chaotic motion can be realized by adjusting the direction of the applied magnetic field. Beyond their fundamental scientific significance, the investigation of magnomechanical chaos will deepen our understanding of nonlinear magnomechanical interaction and can find general relevance to other nonlinear systems based on magnonics. Author contribution statement: Jiao Peng: Carried out the calculations, Wrote the main manuscripttext, Prepared all figures, Reviewed the manuscript, Writing of the manuscript. Zeng-Xing Liu: Participated in the discussions, Reviewed the manuscript, Contributed to the interpretation of the work, Writing of the manuscript. Ya-Fei Yu: Participated in the discussions, Reviewed the manuscript. Hao Xiong: Participated in the discussions, Reviewed the manuscript. Data Availability Statement: Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the corresponding author upon reasonable request. Conflict of Interest: The authors declare no conflicts of interest. Acknowledgments: This work was supported by the National Science Foundation (NSF) of China (Grants No. 12105047), Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022A1515010446), Guangdong Provincial Quantum Science Strategic Initiative) (GDZX2305001), Guangdong Provincial Quantum Science Strategic Initiative) (GDZX2303007). 50 A. V. Chumak2015 A. V. Chumak, A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Magnon spintronics. Nat. Phys. 11, 453 (2015). Hybrid L.-Q. Dany, T. Yutaka, G. Arnaud, U. Koji, and N. Yasunobu, Hybrid quantum systems based on magnonics, Appl. Phys. Express 12, 070101 (2019). H.Y. Yuan2022 H.-Y. Yuan , Y.-S. Cao, A. Kamra, R. A. Duine, P. Yan, Quantum magnonics: When magnon spintronics meets quantum information science, Phys. Rep. 965, 1 (2022). S. Zheng2023 S.-S Zheng, Z.-Y. wang, Y.-P. Wang, F.-X. Sun, Q.-Y. He, P. Yan, H.-Y. Yuan, Tutorial: Nonlinear magnonics, J. Appl. Phys. 134, 15 (2023). YIG A. A. Serga, A. V. Chumak, B. Hillebrands, YIG magnonics, J. Phys. D 43, 264002 (2010). magnomechanics4 X. Zhang, C. L. 
Zou, L. Jiang, and H.-X. Tang, Cavity magnomechanics, Sci. Adv. 2, e1501286 (2016). J. Holanda2018 J. Holanda, D. S. Maior, A. Azevedo, and S. M. Rezende, Detecting the phonon spin in magnon-phonon conversion experiments, Nat. Phys. 14, 500 (2018). M. Yu2020 M. Yu, H. Shen, J. Li, Magnetostrictively induced stationary entanglement between two microwave fields, Phys. Rev. Lett. 124, 213604 (2020). Magnon1 Z. Shen, G.-T. Xu, M. Zhang, Y.-L. Zhang, Y.Wang, C.-Z. Chai, C.-L. Zou, G.-C. Guo, and C.-H. Dong, Coherent Coupling between Phonons, Magnons, and Photons, Phys. Rev. Lett. 129, 243601 (2022). Magnon2 D. Hatanaka, M. Asano, H. Okamoto, Y. Kunihashi, H. Sanada, and H. Yamaguchi, On-Chip Coherent Transduction between Magnons and Acoustic Phonons in Cavity Magnomechanics, Phys. Rev. Appl. 17, 034024 (2022). x.-L. Hei2023 X.-L. Hei, P.-B. Li, X.-F. Pan, and F. Nori, Enhanced Tripartite Interactions in Spin-Magnon-Mechanical Hybrid Systems, Phys. Rev. Lett. 130, 073602 (2023). Y. Xu2021 Y. Xu, J.-Y. Liu, W. Liu, and Y.-F. Xiao, Nonreciprocal phonon laser in a spinning microwave magnomechanical system, Phys. Rev. A 103, 053501 (2021). C. S. Zhao2022 C.-S. Zhao, Z. Yang, R. Peng, J. Yang, C. Li, and L. Zhou, Dissipative-Coupling-Induced Transparency and High-Order Sidebands with Kerr Nonlinearity in a Cavity-Magnonics System, Phys. Rev. Appl. 18, 044074 (2022). Y. T. Chen2021 Y.-T. Chen, L. Du, Y. Zhang, and J.-H. Wu, Perfect transfer of enhanced entanglement and asymmetric steering in a cavity-magnomechanical system, Phys. Rev. A 103, 053712 (2021). J. Li2018 J. Li, S.-Y. Zhu, and G. S. Agarwal, Magnon-photon-phonon entanglement in cavity magnomechanics, Phys. Rev. Lett. 121, 203601 (2018). Squeezing J. Li, Y.-P. Wang, J.-Q. You, and S.-Y. Zhu, Squeezing microwaves by magnetostriction, Natl. Sci. Rev. nwac247 (2022). W. Qiu2022 W. Qiu, X. Cheng, A. Chen, Y. Lan, and W. Nie, Controlling quantum coherence and entanglement in cavity magnomechanical systems, Phys. Rev. A, 105, 063718 (2022). B. Hussain2022 B. Hussain, S. Qamar, and M. Irfan, Entanglement enhancement in cavity magnomechanics by an optical parametric amplifier, Phys. Rev. A 105, 063704 (2022). J. Li2019 J. Li, S.-Y. Zhu, and G. S. Agarwal, Squeezed states magnons and phonons in cavity magnomechanics, Phys. Rev. A 99, (021801) 2019. Zhang W2021 W. Zhang, D.-Y. Wang , C.-H. Bai, T. Wang, S. Zhang, and H.-F. Wang, Generation and transfer of squeezed states in a cavity magnomechanical system by two-tone microwave fields, Opt. Express 29, 11773 (2021). C. Kong2019 C. Kong, B. Wang, Z.-X. Liu, H. Xiong, and Y. Wu, Magnetically controllable slow light based on magnetostrictive forces, Opt. Express 27, 5544 (2019). T. X. Lu2023 T.-X. Lu, X. Xiao, L.-S. Chen, Q. Zhang, and H. Jing, Magnon-squeezing-enhanced slow light and second-order sideband in cavity magnomechanics. Phys. Rev. A 107, 063714 (2023). G.-T. Xu2023 G.-T. Xu, M. Zhang, Z.-Y. Wang, Y.-X. Liu, Z. Shen, G.-C Guo, Ringing spectroscopy in the magnomechanical system, Fundamental Res. 3, 45 (2023). E.G. Spencer1958 E. G. Spencer, R. C. LeCraw, Magnetoacoustic resonance in yttrium iron garnet, Phys. Rev. Lett. 1, 241 (1958). E.G. Spencer1970 S. Wang, T. l.Hsu, Spin-wave experiments: Parametric excitation of acoustic waves and mode-locking of spin waves, Appl. Phys. Lett. 16 111-113 (1970). A. Kani2022 A. Kani, B. Sarma, J. Twamley, Intensive cavity-magnomechanical cooling of a levitated macromagnet, Phys. Rev. Lett. 128, 013602 (2022). Z.-X. Yang2020 Z.-X. Yang, L. Wang, Y.-M. 
Liu, D.-Y. Wang, C.-H. Bai, S. Zhang, and H.-F. Wang, Ground state cooling of magnomechanical resonator in PT-symmetric cavity magnomechanical system at room temperature, Front. Phys. 15, 52504 (2020). M. Asjad2023 M. Asjad, J. Li, S. Y. Zhu, and J.-Q. You, Magnon squeezing enhanced ground-state cooling in cavity magnomechanics, Fundamental Res. 3, 3 (2023). Z. Yang2023 Z. Yang, C. Zhao, R. Peng, J. Yang, and L. Zhou, Improving mechanical cooling by using magnetic thermal noise in a cavity-magnomechanical system, Opt. Lett. 48, 375 (2023). optomechanics M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Cavity optomechanics, Rev. Mod. Phys. 86, 1391 (2014). optomechanics1 H. Xiong, L.-G. Si, X.-Y. Lü, X.-X. Yang, and Y. Wu, Review of cavity optomechanics in the weak-coupling regime: From linearization to intrinsic nonlinear interactions, Sci. China: Phys., Mech. Astron. 58, 1 (2015). optomechanics2 J. Zhang, B. Peng, S. Kim, F. Monifi, X.-F. Jiang, Y.-H. Li, P. Yu, L.-Q. Liu, Y.-X. Liu, A. Alù, and L. Yang, Optomechanical dissipative solitons, Nature 600, 75-80 (2021). kerr11 Y.-P. Wang, G.-Q. Zhang, D. Zhang, X.-Q. Luo, W. Xiong, S.-P. Wang, T.-F. Li, C.-M. Hu, and J. Q. You, Magnon Kerr effect in a strongly coupled cavity-magnon system Phys. Rev. B 94, 224410 (2016). kerr2 Y.-P. Wang, G.-Q. Zhang, D. Zhang, T.-F. Li, C.-M. Hu, and J.-Q. You, Bistability of cavity magnon polaritons, Phys. Rev. Lett. 120, 057202 (2018). kerr G.-Q. Zhang, Y.-P. Wang, J.-Q. You, Theory of the magnon kerr effect in cavity magnonics, Sci. China. Phys. Mech. 62, 987511 (2019). kerr3 R.-C. Shen, J. Li, Z.-Y. Fan, Y.-P. Wang, and J.-Q. You, Mechanical Bistability in Kerr-Modified Cavity Magnomechanics, Phys. Rev. Lett. 129, 123601 (2022). comb1 H. Xiong, Magnonic frequency combs based on the resonantly enhanced magnetostrictive effect, Fundamental Res. 3, 8 (2023). comb2 Z.-X. Liu, J. Peng, and H. Xiong, Generation of magnonic frequency combs via a two-tone microwave drive, Phys. Rev. A 107, 053708 (2023). comb3 Z.-X. Liu, Y.-Q. Li, Optomagnonic frequency combs, Photon. Res. 10, 467595 (2022). comb33 Z.-X. Liu, Dissipative coupling induced UWB magnonic frequency combs generation, Appl. Phys. Lett. 124, 032403 (2024). comb4 G.-T. Xu, M. Zhang, Y. Wang, Z. Shen, G.-C. Guo, and C.-H. Dong, Magnonic frequency comb in the magnomechanical resonator, Phys. Rev. Lett. 131, 243601 (2023). Magnomechanics1 C. A. Potts, E. Varga, V. A. S. V. Bittencourt, S. V. Kusminskiy, and J. P. Davis, Dynamical Backaction Magnomechanics, Phys. Rev. X 11, 031053 (2021). Magnomechanics2 C. A. Potts, Y. Huang, V. A. S. V. Bittencourt, S. Viola Kusminskiy, and J. P. Davis, Dynamical backaction evading magnomechanics, Phys. Rev. B 107, L140405 (2023). Magnomechanics3 V. A. S. V. Bittencourt, C. A. Potts, Y. Huang, J. P. Davis, and S. Viola Kusminskiy, Magnomechanical backaction corrections due to coupling to higher order Walker modes and Kerr nonlinearities, Phys. Rev. B 107, 144411 (2023). chaos10 R. M. May, Simple mathematical models with very complicated dynamics, Nature 26, 459-467 (1976). G. D. Vanwiggeren1998 G. D. Van Wiggeren, R. Roy, Communication with chaotic lasers, Science 279, 1198 (1998). A. Argyris2005 A. Argyris, D. Syvridis, L. Larger, V. Annovazzi-Lodi, P. Colet, I. Fischer, J. García-Ojalvo, C. R. Mirasso, L. Pesquera, and K. A. Shore, Chaos-based communications at high bit rates using commercial fibre-optic links, Nature 438, 343 (2005). chaos9 M. Sciamanna and K. A. 
Shore, Physics and applications of laser diode chaos, Nature Photon. 9, 151-162 (2015). A. B. Ustinov2021 A. B. Ustinov, A. V. Kondrashov, I. Tatsenko, A. A. Nikitin, and M. P. Kostylev, Progressive development of spin wave chaos in active-ring oscillators, Phys. Rev. B 104, L140410 (2021). ferromagnetic C. Kittel, On the theory of ferromagnetic resonance absorption, Phys. Rev. 73, 155 (1948). Strong J. T. Hou, L. Liu, Strong coupling between microwave photons and nanomagnet magnons, Phys. Rev. Lett. 123, 107702 (2019). ferromagnetic2 H. Keshtgar, M. Zareyan, G.E.W. Bauer, Acoustic parametric pumping of spin waves, Solid State Commun. 198, 30-34 (2014). noise C. W. Gardiner and P. Zoller, Quantum Noise (Springer, Berlin, 2000). chaos1 T. Carmon, M. C. Cross and K. J. Vahala, Chaotic Quivering of Micron-Scaled On-Chip Resonators Excited by Centrifugal Optical Pressure, Phys. Rev. Lett. 98, 167203 (2007). chaos2 L. Bakemeier, A. Alvermann and H. Fehske, Route to Chaos in Optomechanics, Phys. Rev. Lett. 114, 013601 (2015). chaos3 X. Y. Lü, H. Jing, J. Y. Ma and Y. Wu, 𝒫𝒯-Symmetry-Breaking Chaos in Optomechanics, Phys. Rev. Lett. 114, 253601 (2015). chaos7 F. Monifi, J. Zhang, Ş. K. Özdemir, B. Peng, Y.-x. Liu, F. Bo, F. Nori and L. Yang, Optomechanically induced stochastic resonance and chaos transfer between optical fields, Nature Photon. 10, 399 (2016). chaos8 Z.-X. Liu, C. You, B. Wang, H. Dong, H. Xiong and Y. Wu, Nanoparticle-mediated chiral light chaos based on non-Hermitian mode coupling, Nanoscale, 12, 2118 (2020). chaos4 Z.-X. Liu, C. You, B. Wang, H. Xiong, and Y. Wu, Phase-mediated magnon chaos-order transition in cavity optomagnonics, Opt. Lett. 44, 507 (2019).
http://arxiv.org/abs/2407.13339v1
20240718093656
On the complexity of Maslov's class $\overline{\text{K}}$
[ "Oskar Fiuk", "Emanuel Kieronski", "Vincent Michielini" ]
cs.LO
[ "cs.LO" ]
e-xists mis-sing pa-ra-do-xi-cal ∅∅∅ Extended version of a LICS'24 paper. 0009-0006-1312-4899 Institute of Computer Science, University of Wrocław Wrocław Poland 307023@uwr.edu.pl 0000-0002-8538-8221 Institute of Computer Science, University of Wrocław Wrocław Poland emanuel.kieronski@cs.uni.wroc.pl 0000-0002-1413-9316 Faculty of Mathematics, Informatics, and Mechanics, Warsaw University Warsaw Poland michielini@mimuw.edu.pl § ABSTRACT Maslov's class is an expressive fragment of First-Order Logic known to have decidable satisfiability problem, whose exact complexity, however, has not been established so far. We show that has the exponential-sized model property, and hence its satisfiability problem is -complete. Additionally, we get new complexity results on related fragments studied in the literature, and propose a new decidable extension of the uniform one-dimensional fragment (without equality). Our approach involves a use of satisfiability games tailored to and a novel application of paradoxical tournament graphs. On the complexity of Maslov's class K Vincent Michielini July 22, 2024 ===================================== claim[theorem]Claim fact[theorem]Fact § INTRODUCTION Identifying elegant fragments of First-Order Logic with decidable satisfiability problem and good expressive power has been an important theme in mathematics and computer science for decades. Its motivations come from various areas, including hardware and software verification, artificial intelligence, distributed computing, knowledge representation, databases and more. In this line of research an interesting fragment was proposed by Maslov <cit.>. Originally, Maslov called his fragment and considered its validity problem (“Is a given sentence of the fragment true in all structures?”). Here, as in later works, we consider the dual of , denoted by , and its satisfiability problem (“Is a given sentence of the fragment true in some structure?”). [The reader should not confuse the class with another class, called just the Maslov class. The latter consists of sentences of the shape ∃^* ∀^* ∃^*. ψ, with ψ being a quantifier-free Krom formula without equality, and is quite well understood (see, e.g. <cit.>).] Converted to prenex form, -formulas are as follows: ∀ x_1 …∀ x_. _1y_1…_ y_. ψ where _i's are quantifiers, ψ is a quantifier-free formula without the equality symbol nor function symbols of arity greater than 0 (constants are allowed), and every atom of ψ satisfies one of the following conditions: (i) it contains at most one variable, (ii) its variables are precisely x_1, …, x_, or (iii) it contains an existentially quantified variable y_i, and no y_j with j>i. The class embeds, either syntactically or via standard reductions preserving satisfiability, many known decidable fragments of First-Order Logic, including the monadic class <cit.>, the Ackermann fragment <cit.> and its generalised version <cit.>, the Gödel class <cit.>, the two-variable fragment <cit.>, Class 2.4 from <cit.> (a solvable Skolem class, which in this paper we will call ). It also captures basic modal logic and many standard description logics, e.g., the description logic 𝒜ℒ𝒞 with Boolean combinations of roles, inversions, role restrictions and positive occurrences of role compositions. In this context it is considered (together with the two-variable fragment, the guarded fragment and the fluted fragment) in the survey <cit.>. 
Even more formalisms, for example the uniform one-dimensional fragment <cit.> or its variation with alternation of quantifiers in blocks <cit.>, are captured by the class of conjunctions of -sentences, also known to have decidable satisfiability <cit.>. Maslov proved the decidability of the validity problem for , which is equivalent to the satisfiability problem for , using his own approach, which he called the inverse method (see <cit.>). There were a few subsequent works <cit.>, whose authors reproved this result by means of the resolution method; all of them work directly with . None of those works, however, studied the complexity of its satisfiability. Even though it was hypothesised that the class may have non-elementary complexity <cit.>, it seems that some elementary upper bound could be extracted from the resolution-based procedure in <cit.> (see Section <ref>). This bound, however, could not be better than doubly exponential, which would still leave a gap, as the best lower bound inherited from the known fragments embeddable in is -hardness, e.g., this lower bound holds already for the prefix-class ∀∀∃ <cit.>. Also, we remark that the decidability of does not survive when equality is allowed, as this prefix-class ∀∀∃ becomes undecidable with equality <cit.>. Our contribution. In this paper we add the main missing brick to the understanding of by showing that its satisfiability problem is -complete. We will do it by demonstrating that has the exponential-sized model property: theoremdmkexposize Every satisfiable formula ϕ in admits a finite model of size 2^𝒪(|ϕ|·log|ϕ|). Hence, the satisfiability problem for is -complete. In contrast to the previous works on , which approached the problem syntactically, we do it semantically. In particular, we use a game-theoretic view on the problem, and, more importantly, employ an adaptation of the results on the existence of small paradoxical tournament graphs. Up to our knowledge, a use of paradoxical tournaments is novel in this area. Our results transfer to and entail the -complexity of two subfragments of studied in the literature, whose complexity has not been known so far: the Generalised Ackermann class (without equality) and the class, the latter being the intersection of and the prefix-class ∀^*∃^* (the Skolem class). In addition, we propose a new decidable fragment, being a generalisation of the uniform one-dimensional fragment from <cit.>. The organisation of the paper is as follows: In Section <ref>, we give a formal definition of Maslov's class and introduce the various notions which we will be using in the paper. In Section <ref>, inspired by classical results on paradoxical tournament graphs, we introduce a variant of tournament graphs with colours of vertices and of arcs, and define for them an appropriate notion of paradoxicality. We show the existence of paradoxical such tournaments of small size. We propose two constructions: a randomised one and a deterministic one. Both are rather routine adaptations of the classical constructions for the variant without colours. These tournaments will be the core of our exponential-sized models. In Section <ref>, we introduce our satisfiability games for . We link the satisfiability of to strategies in these games and obtain initial technical results concerning the number of 1-types. In Section <ref>, we use colourful paradoxical tournaments from Section <ref> and strategies in games from Section <ref> to establish that satisfiable sentences in admit models of size 2^(|ϕ|·log|ϕ|). 
Thus, we prove Theorem <ref>, the main result of our paper. In Section <ref>, we show that our upper bound on the size of minimal models (and hence also on the complexity) for extends to . In Section <ref>, we show that the obtained upper bound is essentially the best possible. We do this by supplying a corresponding family of tight examples: for every n ≥ 3, we construct a sentence ϕ_n such that it has size linear in n and is satisfiable only in models of size at least 2^Ω(n·log n). Moreover, our formulas can be even assumed to be in the fragment , without constant symbols and with just a single existential quantifier. In Section <ref>, we show that satisfiable sentences in even have models of size 2^O(|ϕ|), under the assumption that the number of universal quantifiers is bounded. This in particular applies to the Gödel class, admitting two universal quantifiers (see Section <ref>). In Section <ref>, we propose a novel generalisation of the uniform one-dimensional fragment of First-Order Logic (without equality) <cit.>: the ∀-uniform fragment. We obtain the exponential-sized model property and -completeness of this new fragment by reducing its satisfiability to satisfiability of . In Section <ref>, we conclude our paper by providing comments on previous works regarding Maslov's class and related fragments. We believe that our results are valuable not only because they establish the precise bounds for the complexity and the size of minimal models for and related logics, but also because they give us a deeper understanding of the reasons behind them. Especially interesting is the boundary between 2^(|ϕ|) and 2^(|ϕ|·log|ϕ|), which we solve by showing that the gap is crossed by the unboundedness of the number of universal quantifiers in the fragment . § TECHNICAL BACKGROUND In this section, we formally introduce Maslov's class , as well as the different notions needed for the proofs of the article. We assume that the reader is familiar with the syntax of First-Order Logic (). We work with signatures containing relation and constant symbols, but no function symbols of arity greater than zero. Relation symbols may have arbitrary arities, including zero. When building formulas, we allow standard Boolean connectives: ∨, ∧, and →, but we do not allow equality (unless explicitly stated). Naming conventions. We use Fraktur letters for structures and the corresponding Roman capitals for their domains. We usually use letters a,b to denote elements of structures, x,y,z for variables, and c for constants; all of these possibly with some decorations and with a bar to denote tuples. For a tuple of variables , we use ϕ() to denote that all the free variables of ϕ are in . Sometimes it will be convenient to identify a tuple =⟨ x_1,…, x_k⟩ of variables with the corresponding set {x_i| 1 ≤ i ≤ k}; and therefore we will allow ourselves notations such as y∉, ∪{y}, etc. We will keep the same convention for any kind of tuples (tuples of natural numbers, tuples of elements of a structure, etc.). We write for the set of natural numbers {0,1,…}, and, if k ∈, [k] denotes the set {1, …, k} (in particular if k=0 then it is the empty set ∅). Measuring size. By the size of a structure, we simply mean the cardinality of its domain. By |ϕ| we denote the size of a formula ϕ measured in the uniform way: write ϕ as a word over the alphabet consisting of quantifiers, Boolean connectives, comma, parentheses, variables, relation and constant symbols; then each occurrence of a symbol contributes as 1 to the size. 
We point out that other authors might measure the size of formulas in bits. Hence, one should be careful when comparing results from different works. If is a class of formulas (also called a fragment of ), we say that it has the exponential-sized model property if there exists an exponential function f(n)=2^(n), such that every satisfiable formula ϕ in has a model of size at most f(|ϕ|). If we know that has this property, then its satisfiability problem can be decided in non-deterministic exponential time (NExpTime). §.§ The fragments K, DK and K-Skolem Maslov's class . Let ϕ be a sentence in negation normal form, and let γ be one of its atoms. The ϕ-prefix of γ is the sequence of quantifiers in ϕ binding the variables of γ. For instance, if φ is the sentence ∃ x. ∀ y. ∃ z. R(x,y)∧ T(c, y, z, y), c being a constant symbol, then the ϕ-prefix of the atom R(x,y) is the sequence “∃ x. ∀ y”, while that of the atom T(c, y, z, y) is “∀ y. ∃ z”. An atom without variables (e.g. talking only about constants) has an empty φ-prefix. The class consists of the sentences φ which are in negation normal form and in which there exist universally quantified variables x_1, …, x_K, none of which lies within the scope of any existential quantifier, such that each atom of ϕ has a ϕ-prefix of one of the following shapes: * a ϕ-prefix of length at most 1, * a ϕ-prefix ending with an existential quantifier, * or exactly the sequence “∀ x_1…∀ x_”. The variables x_1, …, x_ are called the special variables of the formula φ, and their number is called the grade of φ. The reason we need the formula φ to be in negation normal form is due to the antisymmetry of the definition with respect to the universal and existential quantifiers. In a formula such as ∃ x. P(x), the variable x is quantified existentially, but its semantic role would actually be universal. However, for convenience, we can allow a relaxed definition where the formula φ is not asked to be in negation normal form, but only to not have negations binding quantifiers. We can then allow the use of the implication symbol →, as long as its left-hand side does not contain any quantifier. With this convention, the following formula φ_co_authors is in : ∀s_1, s_2, s_3. [scientist(s_1)∧scientist(s_2)∧scientist(s_3) ∧co_authors(s_1, s_2, s_3)] →∃a. article(a)∧written_by(a, s_1,s_2,s_3). Indeed, the φ_co_authors-prefixes of the different atoms are: the singleton sequences “∀ s_1”, “∀ s_2”, “∀ s_3”, and “∃ a”; the sequence “∀ s_1. ∀ s_2. ∀ s_3. ∃ a”, which ends with an existential quantifier; and the universal sequence “∀ s_1. ∀ s_2. ∀ s_3”. Since there is no existential quantifier binding the quantifiers ∀ s_i, all the conditions are met: φ_co_authors is a formula in of grade 3, with its special variables being s_1, s_2 and s_3. As a second example, we consider the formula φ_marriage: ∀h, w. husband_and_wife(h,w)→∀d. date(d)→ ∃p. problem(p)∧occurs_to_at(p,h,w,d). This one formula is of grade 2 and its special variables are h and w: although it is quantified universally, the variable d is not special in this formula. The third example φ_eternal_marriage demonstrates the possibility of using quantifier alternation.new example ∀h, w. husband_and_wife(h,w)→ ∃ p. problem(p)∧∀d. date(d)→ ∃d'. date(d') ∧later_than(d',d) ∧occurs_to_at(p,h,w,d'). This formula is of grade 2, with h and w being its special variables. Although the variable d is quantified universally, it is not special. 
On the contrary, an example of a first-order sentence not belonging to is the axiom of transitivity: φ_trans∀ x, y, z. [T(x,y)∧ T(y, z)]→ T(x,z). Indeed, the reader can see that no subset of {x,y,z} is a legitimate candidate for being the special variables of φ_trans. Using standard procedures, we can convert any sentence in  into its prenex form and move the quantifiers ∀ x_1, …, ∀ x_ to the front. This way we obtain sentences as follows: ∀ x_1 …∀ x_. _1y_1…_ y_. ψ, where ψ is quantifier-free. For the rest of this paper, we will work with formulas of this shape, assuming without loss of generality that >0, i.e. that the first quantifier is universal. In the literature, one can find definitions of allowing an extra initial prefix of existential quantifiers. In our version, we can simulate them via the use of constant symbols. The class . In our work, we also consider the class consisting of all (finite) conjunctions of sentences from . Notice that a formula ϕ in might be a conjunction of sentences with different grades. In such a case, a priori ϕ is not equivalent to any formula in . We can go one step further and consider a class containing arbitrary positive Boolean combinations of formulas from . Via a standard procedure, any such formula is equivalent to a disjunction of conjunctions of formulas in , i.e. to a disjunction of formulas in . Since the original formula admits a model if and only if one of these formulas in does, we actually have the equivalence between the satisfiability problems for the classes and : The satisfiability problem for can be reduced to the satisfiability problem for in nondeterministic polynomial time. The class -Skolem. The class is the intersection of and the Skolem class, the latter being the set of prenex formulas with quantifier prefixes of the form ∀x̅ ∃y̅. In effect we can assume that formulas in have the shape ∀x̅ ∀z̅ ∃y̅. ψ, with x̅ being the tuple of special variables. Originally was introduced by Dreben and Goldfarb in the book <cit.> under the name Class 2.4, without any connection to . If we convert our example sentence φ_co_authors to prenex form, we indeed get a sentence in . Our second example φ_marriage goes beyond it, as it contains an alternation of quantifiers. It is worth mentioning two important fragments of : the Ackermann class and the Gödel class. The former consists of prenex sentences with quantifier prefixes of the form ∀ x ∃y̅ (one universal quantifier), the latter—∀ x_1, x_2 ∃y̅ (two universal quantifiers). Both are often presented in "initially extended" versions in which an additional prefix of existential quantifiers of arbitrary length is admitted. Again, in our setting we do not need to consider such prefixes, as they can be naturally simulated by constants. One more class, the uniform one-dimensional fragment, , will be relevant for us. As it does not play a central role in this paper, we will define it later, in Section <ref>, dedicated to a generalisation of it.
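Before moving on, the ϕ-prefix conditions behind these definitions can be made concrete with a small sketch in which an atom is abstracted to its ϕ-prefix, i.e. the ordered list of quantifiers binding its variables. The encoding, the helper name and the example atoms below are our own illustrative choices (they mimic φ_co_authors and φ_trans), and the global requirement that special variables not lie in the scope of an existential quantifier is not checked here.

```python
from itertools import permutations

# An atom is abstracted to its phi-prefix: the binding quantifiers ('A' = forall,
# 'E' = exists) of its variables, listed in binding order.
def atom_ok(prefix, special):
    if len(prefix) <= 1:
        return True                                   # at most one variable
    if prefix[-1][0] == 'E':
        return True                                   # prefix ends with an existential quantifier
    return prefix == [('A', x) for x in special]      # exactly "forall x_1 ... forall x_K"

# The atoms of phi_co_authors, with s1, s2, s3 as the candidate special variables.
special = ['s1', 's2', 's3']
atoms = [
    [('A', 's1')],                                            # scientist(s1)
    [('A', 's1'), ('A', 's2'), ('A', 's3')],                  # co_authors(s1,s2,s3)
    [('E', 'a')],                                             # article(a)
    [('A', 's1'), ('A', 's2'), ('A', 's3'), ('E', 'a')],      # written_by(a,s1,s2,s3)
]
print(all(atom_ok(p, special) for p in atoms))                # True: the check succeeds

# The transitivity axiom fails for every choice (and ordering) of special variables:
t_atoms = [[('A', 'x'), ('A', 'y')], [('A', 'y'), ('A', 'z')], [('A', 'x'), ('A', 'z')]]
candidates = [list(c) for k in range(4) for c in permutations(['x', 'y', 'z'], k)]
print(any(all(atom_ok(p, c) for p in t_atoms) for c in candidates))   # False
```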
§.§ Semantics: formal definitions A signature is a tuple σ = (,,→) where and are sets of constant and relation symbols respectively. The number (R) is called the arity of the symbol R. By σ(ϕ) we denote the signature consisting of relation and constant symbols mentioned in ϕ. We call structure over the signature σ any tuple A=(A^, R^A (A^)^(R)→{0,1})_R ∈, where A^=A ∪ and A ∩ = ∅. We say that A^ is the domain and A is the unnamed domain, its elements being the unnamed elements. We do not include a function interpreting the constant symbols inside the domain, but rather assume that constant symbols of σ are interpreted by themselves. In particular, this means that different constant symbols are interpreted distinctly. However, in the context of satisfiability, this does not affect the generality of our results: we can non-deterministically guess a partition of , corresponding to the equalities among the interpretations of constants, and substitute the occurrences of the constant symbols from each group by its fixed representative, hence reducing the problem to our scenario (see Appendix <ref>). Any function f X → A^ such that A ⊆ f(X) is denoted by f X A^. If B ⊆ A, we denote by A B the restriction of A to its subdomain B ∪. We also use partial structures, in which some relations may not be defined on some tuples. This is captured by extending the range of every function R^A to {0,1,}, the symbol standing for "undefined". If A is a partial structure, whenever we write Aϕ, we ensure that all the information necessary to determine the truth value of the sentence ϕ is indeed defined. In our proofs, we will make extensive use of different versions of types: 1-types, k-outer-types and k-hull-types. An (atomic) 1-type over a signature σ is any σ-structure with the domain [1]^={1}∪. Notice that, in general, the number of 1-types over σ is doubly exponential in |σ|, as, if σ admits a constant symbol c and a relational symbol R of arity n, then there are 2^n possible tuples consisting of 1's and c's, and therefore at least 2^2^n possible functions from ([1]^)^n to {0,1}. A 0-type over σ is any σ-structure with the domain ∅^ consisting of only the constants. In this paper, k-types, for k≥ 2, will not be needed, due to the syntax limitations of . We will yet make use of a relaxed version of k-types, namely k-outer-types, where the relations are defined only for certain tuples. For k ≥ 0, a k-outer-type over σ is a partial structure B of domain [k]^, in which, for every R and every a̅∈ (A^)^(R), R^B(a̅) is defined (i.e. its value is 0 or 1) if and only if the intersection a̅∩[k] is the full set [k] or has at most one element. Finally, a k-hull-type over σ is a partial structure B of domain [k]^ in which, for every R and every tuple a̅∈ (A^)^(R), R^B(a̅) is defined if and only if a̅∩[k]=[k]. The reader will notice that the notions of 1-types, 1-outer-types and 1-hull-types coincide. An outer-type (resp. a hull-type) is a k-outer-type (resp. a k-hull-type) for some k ≥ 0. We call k the grade of this type. In the paper we will use α and β to denote 1-types and outer-types respectively; possibly with decorations.
We say that a set of outer-types is consistent if it induces a unique 0-type, i.e. if for all β_1,β_2 ∈ we have β_1 ∅ = β_2 ∅. By we denote the subset of consisting of all the 1-types it contains. Let A be a partial structure and let a∈ A be an unnamed element. We denote by ^A(a) the partial σ-structure of domain [1]^ which is isomorphic to A{a} via the mapping a↦ 1 and ∋ c↦ c. It is a 1-type when, for every relational symbol R and every tuple b̅∈({a}∪)^(R), R^A(b̅) is indeed defined. In this case we call it the 1-type realised by a. Every time we refer to some ^A(a) in the paper, it will indeed be a 1-type. In particular, if β is a k-outer-type, for k≥ 1, then ^β(i) is a 1-type. Similarly, if a̅∈ A^k is a tuple of k pairwise distinct unnamed elements, we can define Aa̅ and Aa̅ in the analogous way, as the partial σ-structure of domain [k]^ which is isomorphic to Aa̅ via the mapping a_i ↦ i and ∋ c ↦ c. Again these are respectively k-outer-types and k-hull-types when the relations are defined for the according tuples. It could even be that the relations are defined for "too many tuples" (for instance, if R(a_1, a_3) is defined, with k=3). In this case, we assume that the tuples not needed for the definitions are set to , in order for us to get k-outer-types or k-hull-types. Again, in the whole paper, when we call for some Aa̅ or some Aa̅, they will always indeed be k-outer-types or k-hull-types respectively. We call them the k-outer-type (resp. the k-hull-type) realised in A by the tuple a̅, i.e., the k-outer-type (resp. k-hull-type) isomorphic to the appropriate reduct of (Aa̅) via the mapping [k] ∋ i ↦ a_i and ∋ c ↦ c. By Aa (resp. a =Aa) we denote the 1-type realised by a ∈ A. Further, let be a (consistent) set of outer-types and let k_max be the maximum k such that there exists a k-outer-type in . We say that A is over if for any tuple a̅⊆ A of length at most k_max we have that Aa̅∈.
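Since the definedness patterns of outer-types and hull-types are easy to misread, the following toy sketch (our own illustration, with 'c' standing for an arbitrary constant symbol and unnamed elements encoded as the integers 1..k) spells out which tuples must carry a truth value in a k-outer-type versus a k-hull-type.

```python
# Toy illustration of which tuples carry a truth value in a k-outer-type vs. a k-hull-type.
# Unnamed elements are the integers 1..k; 'c' stands for an arbitrary constant symbol.

def unnamed_part(tup, k):
    return {e for e in tup if isinstance(e, int) and 1 <= e <= k}

def defined_in_outer_type(tup, k):
    """Defined iff the unnamed part of the tuple is all of [k] or has at most one element."""
    u = unnamed_part(tup, k)
    return u == set(range(1, k + 1)) or len(u) <= 1

def defined_in_hull_type(tup, k):
    """Defined iff the unnamed part of the tuple is exactly [k]."""
    return unnamed_part(tup, k) == set(range(1, k + 1))

k = 3
for tup in [(1, 2, 3), (1, 'c', 1), (1, 3, 'c'), ('c', 'c', 'c')]:
    print(tup, defined_in_outer_type(tup, k), defined_in_hull_type(tup, k))
# (1, 2, 3):       outer-type yes, hull-type yes  (all of [3] occurs)
# (1, 'c', 1):     outer-type yes, hull-type no   (only one unnamed element)
# (1, 3, 'c'):     outer-type no,  hull-type no   (two unnamed elements, not all of [3])
# ('c', 'c', 'c'): outer-type yes, hull-type no   (no unnamed element at all)
```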
The triple (,μ,λ) is called an (,)-colourful tournament (or more simply a colourful tournament). We define now a paradoxical notion for colourful tournaments. The triple (,μ,λ) is said to be (,)-paradoxical if it admits the following property, where ℓ=|| is the number of arc colours: for any vertex colour r ∈, any tuple a̅=a_1,…,a_ℓ of pairwise distinct vertices, and any tuple q̅=q_1,…,q_ℓ of (non-necessarily distinct) arc colours, there exists a vertex b such that: * b dominates {a_1,…,a_ℓ} (in particular, b≠a_i, for all i); * μ(b)=r; * λ(b a_i)=q_i, for every i∈[ℓ]. We say that such a vertex b colourfully dominates a̅ via r and q̅. In our use of the definition above, we will consider only non-trivial cases, i.e. the size of is at least ℓ. We now prove the existence of paradoxical colourful tournaments, and argue that the obtained bound is essentially optimal. The proof is a direct extension of the original probabilistic proof by Erdös. Let be a set of vertex colours and be a set of arc colours. Then there exists an (, )-paradoxical colourful tournament of size 2^(||·log ||)× ||·log||. Moreover, the size of any (, )-paradoxical colourful tournament is at least 2^Ω(||·log ||)×||. Let n ∈ be a free parameter, supposed to be at least ||, which we denote here by ℓ for convenience. Let V = × [n]. Define the labelling μ V∋(r,i)↦ r∈. Let 𝕋 be the set of all possible tournaments having V as their set of vertices. Consider now a tournament = (V,E) from 𝕋, and a labelling λ E →. Let us fix a vertex colour r ∈, a tuple a̅=a_1,…,a_ℓ of pairwise distinct vertices, and a tuple q̅=q_1,…,q_ℓ of arc colours. For a vertex b ∈ ({r}× [n]) ∖a̅, we denote by _r,a̅,q̅(b) the event, in a probabilistic sense, that b colourfully dominates a̅ via r and q̅. The probability that b dominates a̅ (without considering colours) is exactly δ_1=2^-ℓ, and, if we assume it does, then the probability that λ(b→ a_i)=q_i for every i∈[ℓ] is exactly δ_2=ℓ^-ℓ. Hence, the probability that the event _r,a̅,q̅(b) holds does not depend on b and is δ = (2ℓ)^-ℓ. Moreover, for distinct vertices b and b', the events _r,a̅,q̅(b) and _r,a̅,q̅(b') are independent, probabilistically speaking. Let _r,a̅,q̅ be the event ⋃_b ∈ ({r}× [n]) ∖a̅_r,a̅,q̅(b), stating that there exists some vertex colourfully dominating a̅ via r and q̅. We can bound its probability of not happening as follows: [_r,a̅,q̅] ≤ (1-δ)^n-ℓ≤exp(-(n-ℓ)·δ). The second inequality coming from the fact that n-ℓ≥ 0. Now, the number of triples (r,a̅,q̅) can be bounded in the following way: || ·|V| ℓ·ℓ^ℓ≤ ||· (||· n)^ℓ·ℓ^ℓ≤ (||· n·ℓ)^ℓ+1. Hence, if we denote by p the value exp(-(n-ℓ)·δ), and by m the value (||· n·ℓ)^ℓ+1, the probability that is not a paradoxical colourful tournament, i.e. that some tuple a̅ cannot be colourfully dominated via some role r and some tuple q̅, by any element of V∖a̅, is at most m· p. To show the existence of an (, )-paradoxical colourful tournament, it is enough to find n for which m· p < 1, i.e. there is a positive probability that the sampled tournament together with the labelling μ and the sampled labelling λ constitutes a triple (,μ,λ) which is a paradoxical colourful tournament. Applying the logarithmic function to this inequality, we get the following: (ℓ+1)×ln(||· n·ℓ) < (n-ℓ)×δ. It can be verified that for all n ≥ 10·(2ℓ)^ℓ+1·(ln||+ℓ·lnℓ) the inequality above holds. We can therefore conclude the exponential upper bound (remember that the number of vertices is ||× n). 
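Before turning to the lower bound, the randomised construction just described can be sketched as follows. This is an illustration rather than a verification of the lemma: the colour sets are toy values, n is taken much smaller than the bound 10·(2ℓ)^(ℓ+1)·(ln of the number of vertex colours + ℓ·ln ℓ) used in the proof, and, since exhaustively checking paradoxicality is feasible only for very small instances, the script merely spot-checks random domination queries.

```python
import random

# Toy colour sets; ell = number of arc colours.
vertex_colours = ['r1', 'r2']
arc_colours    = ['q1', 'q2']
ell = len(arc_colours)

# The proof would take n of order 10*(2*ell)**(ell+1)*(ln|vertex colours| + ell*ln(ell));
# we use a much smaller n so that the example stays lightweight.
n = 200
V = [(r, i) for r in vertex_colours for i in range(n)]
mu = {v: v[0] for v in V}                 # vertex colouring mu
E, lam = set(), {}                        # arcs and arc colouring lambda
for i, u in enumerate(V):
    for v in V[i + 1:]:
        a, b = (u, v) if random.random() < 0.5 else (v, u)   # orient the arc a -> b
        E.add((a, b))
        lam[(a, b)] = random.choice(arc_colours)

def colourfully_dominates(b, tup, r, qs):
    """b dominates every a_i, has vertex colour r, and the arc b -> a_i has colour q_i."""
    return (mu[b] == r and b not in tup
            and all((b, a) in E and lam[(b, a)] == q for a, q in zip(tup, qs)))

for _ in range(20):                       # spot-check random domination queries
    r   = random.choice(vertex_colours)
    tup = random.sample(V, ell)
    qs  = [random.choice(arc_colours) for _ in range(ell)]
    assert any(colourfully_dominates(b, tup, r, qs) for b in V), "no colourful dominator found"
print("all spot-checked queries were colourfully dominated; |V| =", len(V))
```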
Now for the lower bound, let us consider an arbitrary (,)-paradoxical colourful tournament (,μ,λ). Choose a tuple a̅=a_1,…,a_ℓ of pairwise different vertices of . For every vertex colour r ∈ and every tuple q̅=q_1,…,q_ℓ of arc colours, there must be a vertex b_r,q̅ colourfully dominating a̅ via r and q̅. Clearly, all the b_r,q̅'s must be distinct, and we can deduce that the size of is at least ℓ!×|| = 2^Ω(ℓlogℓ)×||. The proof of Lemma <ref> is non-constructive. Yet, an explicit construction of paradoxical colourful tournaments is actually possible, we give it in Appendix <ref>. This explicit construction is based on Paley graphs and the result of Graham and Spencer <cit.>, which states that Paley graphs of size Ω(k^2·2^2k) are k-paradoxical.Finally, we will discuss some cases where the deterministic construction might be useful. We defer the details to Appendix <ref> for interested readers. § SATISFIABILITY GAMES In this section, we show that the satisfiability of sentences in can be studied via certain games. First, we adapt a standard verification game for First-Order Logic, and then we introduce our satisfiability game tailored to the fragment . In this whole section, we fix a sentence φ in of the shape as in (<ref>), i.e. ∀ x_1 …∀ x_. _1y_1…_ y_. ψ, with grade K and special variables x_1,…, x_K. We denote by and the sets of respectively constants and relational symbols from σσ(ϕ), by =∪ the set of variables of ϕ and, for each 0≤ i≤, by _i the set x̅∪{y_1, …, y_i}⊆. §.§ Verification game Satisfaction of ϕ in a given σ-structure A is naturally connected to the game (ϕ, A) between the existential player, Eloisa, trying to show that Aϕ and the universal player, Abelard, trying to show the opposite. A position in the game is any assignment f_t_t→ A^. The number t is called the order of f_t. The game, which has M+1 rounds, goes as follows. Abelard first chooses an assignment f_0→ A^ of order 0 as he wishes to. In Round t+1, for 0 ≤ t ≤ M-1, after a position f_t_t→ A^ of order t has been reached, the appropriate player (Abelard if Q_t+1=∀, Eloisa if Q_t+1=∃) extends it to f_t+1 by assigning y_t+1 to an element of A^ of their choice. At the end of the game, the players have constructed an assignment f_M→ A^. Eloisa wins if eventually A, f_M ψ (i.e. if the formula ψ holds in A when every variable in is interpreted by its value via f). It is well known that Aϕ if and only if Eloisa has a winning strategy in the game (ϕ, A), in a sense which we will specify later. §.§ Satisfiability game Now, we introduce a more abstract game (ϕ, ) in which the structure A is not given. Instead, it contains the parameter , which is a consistent set of outer-types meeting some closure conditions. Eloisa tries to show that ϕ has a model in which all realised outer-types of grade at most K+M are in . However, the entire model is not explicitly constructed. Rather, during the game, a partial structure L and an assignment f L^ are constructed (remember that this notation means that the whole unnamed domain L is included in f[]). Eloisa wins if finally L, f ψ. The game is defined for being a consistent and closed set of outer-types over σ, each of them having grade at most K+M. 
We say that is closed if it satisfies the following conditions: * for every outer-type β∈ and each i∈[(β)], the 1-type ^β(i) is in ; * for every outer-type β∈ and every permutation π of the set [(β)], the outer-type isomorphic to β via π is in ; * for every k∈[+] and every sequence α_1, …, α_k of 1-types from , there is a k-outer-type β∈ such that, for each i∈[k], ^β(i)=α_i. The game consists of M+1 Rounds 0, 1, …,. After Round t, we reach in the game a position consisting of a pair (L_t, f_t), where L_t is a partial σ-structure, with the unnamed domain [k], for some k ∈, and f_t_t L_t^ is an assignment. Round 0. Abelard chooses an outer-type L_0 ∈, and an assignment f_0_0 L_0^. Note that, as L_0 is an outer-type, its domain is indeed [k], with k=(L_0). Moreover k ≤ K, since the image f_0[_0], which is of size at most K, must contain all the unnamed elements of L_0. It may happen that k=0, if f_0 maps all the variables to constants. Round 𝐭+1. We suppose that the position reached after the previous Round t is (L_t,f_t), where L_t is a partial σ-structure with the domain [k]^, for some k, and f_t_t L_t^ is an assignment. Again, the number t is called the order of both the position (L_t,f_t) and of the assignment f_t. If _t+1 = ∀, then the move belongs to Abelard. His task is to assign an element to the variable y_t+1. He has two options. (a) He may extend the domain by adding an unnamed element. In this case, L_t+1 is the set [k+1], and he extends the assignment by defining f_t+1 as the function f_t ∪{y_t+1↦ k+1}. He then chooses a 1-type α∈ and defines the partial structure L_t+1 as follows: L_t+1 L_t=L_t, ^L_t+1(k+1)=α, and the rest is undefined. (b) He may keep the domain [k]^, and choose an element a_t+1 in L_t^. In this case the structure L_t+1 is L_t, unchanged, and f_t+1 is the function f_t ∪{y_t+1↦ a_t+1}. Notice that, during his move, Abelard has no direct control over how its new chosen element f_t+1(y_t+1) interacts with the other elements of the domain L_t+1^. This is completely justified by the limited syntax of : if y_t+1 occurs in an atom γ() of ψ, then either is actually the singleton {y_t+1} (in which case the outer-type chosen by Abelard will set the truth value of the atom), or contains an existentially quantified variable y_t', t+1<t', and the assignment of this variable will be taken care of by Eloisa later in the game. Hence, let us now describe Eloisa's turn, i.e. when _t+1 = ∃. On the contrary to Abelard, Eloisa has no control over the new unnamed domain L_t+1, which is the set [k+1], nor on the assignment f_t+1_t+1 L_t+1^, defined as the function f_t ∪{y_t+1↦ k+1}. Her role is to define a partial structure L_t+1 satisfying L_t+1 L_t = L_t, i.e. she really chooses only atoms containing the new element k+1=f_t+1(y_t+1). She does this in two steps. First, she chooses a 1-type α∈ and sets ^L_t+1(k+1)=α. Then, for each subset A of L_t+1 containing k+1, she selects an outer-type β∈ for it: L_t+1a̅=β, with a̅ being an enumeration of A (note that A has size at most K+M, the maximal grade of an outer-type of ). This outer-type β shall be consistent with the already defined 1-types of each individual a in A (the closure of makes meeting this requirement always possible). As the outer-types on distinct tuples share no atoms, except those corresponding to 0- and 1-types, no conflict are met when defining L_t+1. Winning condition. After Round M, Eloisa wins the game if L_, f_ψ. 
Note that in L_, all the atoms, which are required to verify if ψ is satisfied under f_, are indeed defined. Memoryless strategies. Let be either (ϕ, A) or (ϕ, ). It can be easily checked that any position reached at any moment in uniquely determines the positions reached before. For this reason, we can consider memoryless strategies. A (memoryless) strategy for Eloisa in is a function ω that, for every 0≤ t< M such that _t+1=∃, assigns to every position ρ_t of order t a next position ρ_t+1, in accordance with the rules of . By _ω^∃(), we denote the set of positions of which are obtainable after Eloisa's rounds when following the strategy ω. If is (φ,) (resp. (φ,A)), then we say that the strategy ω is winning if for every position ρ_M=(L_M,f_M) (resp. ρ_M = f_M→ A^) of order M in _ω^∃(), we have L_M, f_Mψ (resp. A,f_M ψ). Finally, if is (φ,), then _ω^∃() denotes the set of assignments f_t such that (L_t,f_t)∈_ω^∃() for some structure L_t. As we already mentioned, it is well known that Eloisa has a winning strategy in (ϕ,A) if and only if Aϕ. Now, we want to connect (ϕ,) to the (un)satisfiability of ϕ. We first show that our notion of satisfaction games is complete: lemmasatandgames Assume that ϕ is satisfiable. Then there is a consistent and closed set of outer-types such that Eloisa has a winning strategy in (ϕ, ). Since the proof of this lemma is rather standard, we present only a sketch. We begin by taking an arbitrary model A_0 φ and augmenting it to A by taking infinitely many copies of every element, including unnamed copies of constants, and defining the structure of A in the symmetric way: A R(a̅) iff A_0 R(a̅_*), where a̅_* is obtained by replacing every element of a̅ with its original copy from A_0. As ϕ does not use equality it follows that Aϕ. Moreover, Eloisa can win (ϕ, A) using a proper strategy, that is, when extending the assignment, she always chooses a fresh unnamed element for the variable. Let be the set of outer-types of grade at most + and realised in A. It is readily verified that is consistent and closed. To win (ϕ, ), Eloisa fixes her winning proper strategy for (ϕ, A), and, in parallel to (ϕ, ), she simulates the game (ϕ, A) consistently with . More precisely, say that on her move, after Round t (_t+1=∃), the reached position in (ϕ, ) is (L_t, f_t^S); in parallel, she reached a position f_t^V_t → A^ in (ϕ,A), which is such that (A(f_t^V[_t]∖), f_t^V) agrees with (L_t, f_t^S) on the adequate atoms of ψ. Eloisa looks at the position f^V_t+1(f_t^V) in (ϕ, A) and mimics it in (ϕ, ) by adding to L_t an element with the same 1-type as f^V_t+1(y_t+1). She then extends the structure L_t to L_t+1 by copying the required atoms from A(f_t+1^V[_t+1]∖). It is more difficult to show that our satisfaction games are sound, that is, if Eloisa has a winning strategy in (ϕ, ), then ϕ has a finite model whose outer-types are in . This will be shown in Section <ref>, where we will construct such a model based on Eloisa's winning strategy. Before this, in the next subsection, we will show that if Eloisa has a winning strategy in the game (ϕ, ) then she also has one in (ϕ, '), where ' contains only exponentially many 1-types with respect to the length of ϕ (as noticed in Section <ref>, in the presence of constants the number of 1-types is in general doubly exponential). This additional observation will help us later to get a tight upper bound on the size of the constructed models. 
§.§ Small number of 1-types In the following subsection, we assume that Eloisa has a winning strategy in (ϕ, ). We write _^∃ for _^∃((ϕ, )). Equivalence on 1-types. Let f_↦1→ [1]^ be the assignment assigning 1 to every v ∈. If f_t→ [k]^ is an assignment, with t≤ M, then by f^flat:_t → [k]^ we denote the assignment defined as follows: f^flat(v)=f(v) if f(v) ∈ and f^flat(v)=1 otherwise. For every assignment f ∈_^∃, we introduce an equivalence relation ∼_f on the set of 1-types from . Intuitively, ∼_f relates 1-types which are equally good for Eloisa when she chooses one for the freshly introduced element in a position with assignment f. Formally, assuming that f is of order t, we set α_1 ∼_fα_2 if the following two conditions hold: * for every atom γ(v̅) of ψ, we have that α_1, f_↦1γ(v̅) iff α_2, f_↦1γ(v̅); * for every atom γ(v̅) of ψ such that y_t ∈v̅⊆_t and f[v̅∖{y_t}] ⊆, we have that α_1, f^flat(v̅) γ(v̅) iff α_2, f^flat(v̅) γ(v̅). We remark that <ref> will be important for atoms γ(v̅) containing one variable (and possibly some constants), while <ref> will be used in situations, where v̅ has more variables, but only one of them is mapped by f to an unnamed element and the remaining—to some constants. It is routine to verify that ∼_f is indeed an equivalence relation over 1-types. New game construction. We use the relations ∼_f to define from the set a new set ' of outer-types containing only exponentially many 1-types. First, for every f ∈^∃_, we fix a choice function _f/_∼_f→ that assigns to every class [α]_∼_f one of its elements. Then, the set ' consists of the outer-types β' for which there exists an outer-type β∈ of the same grade k such that: * β'1,…, k=β1,…, k; * for each i∈[k], there exists f ∈_^∃ such that β'i = _f([βi]_∼_f). Observing that ' is consistent and closed is routine; it follows from the fact that is closed and consistent as well. In effect, the set of 1-types in ' is the set {_f([α]_∼_f): f ∈^∃_, α∈}. Let us estimate the size of . For each f ∈_^∃, the number of equivalence classes of ∼_f is at most 2^2·|(ϕ)|=2^(|ϕ|), and the number of assignments in _^∃ is at most ∑_t=1^(K+t+||)^K+t≤· (K++||)^K+M, which is 2^(|ϕ|·log|ϕ|). Hence, ||= 2^(|ϕ|·log|ϕ|): the number of 1-types in ' is exponential in |ϕ|, as desired. The correspondence of plays. Now, we show that Eloisa has a winning strategy ' in the game (ϕ, '). We will construct such a strategy inductively, and additionally, in parallel, we will construct a partial function Γ, called simulation, from positions in the new game (ϕ,') to positions in the old game (ϕ, ). Our simulation Γ will be defined for positions in (φ,') which are reachable when following ', and will always output a position in (φ,) that is reachable when following . During the play we will keep the invariant that, for any positions ρ' and Γ(ρ') the domains of their structures and their assignments are equal. At the beginning of our construction, both Γ and ' are the empty functions. Let us see how (ϕ,') evolves. Round 0. The move in Round 0 belongs to Abelard, who plays a position ρ_0'=(L_0', f_0') for some k-outer-type L_0'∈'. If k ≤ 1, then L_0' is a 0-type or a 1-type from ⊆. This means that ρ_0' is a valid position in (ϕ,), and we can set Γ(ρ'_0)=ρ'_0. If k>1, then we know from our definition of ' that there exists a k-outer-type L_0 ∈, such that L_01, …, k=L'_01, …, k, and, for every i∈[k], we have L_0i∼_fL'_0i for some f ∈ F_^∃. We set Γ(ρ_0') to be (L_0, f_0'). In both cases, the domains L_0 and L_0' coincide. Round 𝐭+1. 
Assume that ' and Γ are defined for positions of order t. We extend them to positions of order t+1. In both cases below, ρ'_t=(L'_t,f'_t) is a position of order t that can be reached when following the current ', and ρ_t=(L_t,f'_t) is Γ(ρ_t'), its image by Γ. Abelard's move. If _t+1=∀, then consider any position ρ'_t+1=(L'_t+1,f'_t+1) reached from ρ'_t after Abelard's move. If Abelard decided not to introduce a fresh element in this round of (ϕ,'), i.e. L'_t+1=L'_t and f'_t+1(y_t+1)∈L'_t^ then the position Γ(ρ'_t+1) is (L_t,f'_t+1). If on the other hand Abelard did introduce a fresh element to obtain ρ_t+1', then similarly, an element of the same 1-type is added to obtain Γ(ρ_t+1') (the assignment is also modified accordingly). Eloisa's response. If _t+1=∃, then we need to extend so that it is defined for positions of order t+1. Let ρ_t+1=(L_t+1, f_t+1) be (ρ_t), i.e. the response suggested by in the simulation (ϕ,). Let a be an element newly introduced in L_t+1 (i.e. L_t+1=L_t∪{a}). Then '(ρ_t') is set as (L'_t+1, f_t+1), where L'_t+1=L'_t ∪{ a }=L_t+1, L'_t+1 L'_t=L_t', L'_t+1a=_f_t+1([L_t+1a]_∼_f_t+1), and the hull-types of tuples containing a are obtained by copying the corresponding hull-types from L_t+1. We naturally set Γ(ρ'_t+1) to be ρ_t+1. The following claim states some basic invariants of the construction of our simulation Γ (Points <ref> to <ref>), together with a more crucial property, implying that ' is actually winning for Eloisa (Point <ref>). Let ρ_t'=(L'_t, f'_t) be a position of order t that can be reached when following Eloisa's strategy ' defined above, and let ρ_t=(L_t, f_t) be Γ(ρ_t'). Then: * L_t'=L_t, f_t'=f_t, and L'_t ∅ = L_t ∅; * for universally quantified v∈_t, we have the equivalence ^L'_t(f_t(v))∼_f^L_t(f_t(v)), for some f ∈_^∃; * for existentially quantified y_i, i∈ [t], we have the equivalence ^L'_t(f_t(y_i))∼_f^*^L_t(f_t(y_i)), where f^* is the restriction of f_t to _i; * for every tuple of distinct elements a_1, …, a_i ∈ L_t, 2 ≤ i, we have that L'_ta_1, …, a_i=L_ta_1, …, a_i, or both hull-types are undefined; * for every atom γ() of ψ such that ⊆_t, we have that L'_t,f_tγ() iff L_t,f_tγ(). Points <ref>-<ref> follow easily from our constructions of ' and Γ. We prove the crucial Point <ref> in Appendix <ref>. It implies in particular, by taking t=M, that for every ρ_M' of order M reachable when following ', we have the equivalence ρ_M'ψ if and only if Γ(ρ_M')ψ. Yet the latter is always true, since Γ(ρ_M') can be reached in (φ,) when following , which is winning in this game. Hence, we can conclude: Assume that ϕ is satisfiable. Then there is a consistent and closed set of outer-types , with 2^O(|ϕ|·log|ϕ|) 1-types, such that Eloisa has a winning strategy in (ϕ, ). § SMALL-MODEL CONSTRUCTION FOR K In this section, we finally prove Theorem <ref>. We fix a satisfiable sentence ϕ in of the shape as in (<ref>), i.e. φ is ∀ x_1 …∀ x_._1y_1…_ y_. ψ. By Lemma <ref>, there exists a set of outer-types with exponentially many 1-types such that Eloisa has a winning strategy in (ϕ, ). We write _^∃ for _^∃((ϕ, )) and _^∃ for _^∃((ϕ, )). Before diving into details, we give first a sketch of how Eloisa can win in (ϕ,A), where A is a small model obtained via the construction described further in this section. 
As the foundation for our model A, we will consider an (,)-paradoxical colourful tournament (𝒯, μ, λ), where the set of arc colours is the set of variables from ϕ, and the set of vertex colours is the set of positions from _^∃, quotiented by some equivalence relation (in order to get exponentially many such colours). The vertex set of will be taken as the unnamed domain of A, and we will use the labellings μ and λ to specify interpretations of the relational symbols present in the signature σσ(ϕ). In parallel to the specification of these interpretations, we will construct a strategy for Eloisa in (ϕ,A). This strategy will be winning, thus ensuring that A is indeed a model of ϕ. The strategy will crucially use the paradoxicality of the tournament. The point is that whenever Eloisa has to assign a variable y_t in the tournament, she will select an element a that dominates the previously chosen elements via a specific vertex colour and specific arc colours. These colours will be chosen according to the 1-types and hull-types of the fresh element that the strategy adds in the game (ϕ,). It turns out that the paradoxicality of is crucial to obtain such a strategy, since it provides a partition of the unnamed domain, so that for any universal choice of elements, there is a fresh element that can serve as an existential witness for them. The play proceeds as follows. Eloisa will simulate a play of (ϕ, ) in parallel to the play of (ϕ,A). The simulation invariants will provide a certain coherency between the positions. Suppose that the current position in the latter game is f_t^V_t → A^; then, in the simulation, we will reach a position ρ_t = (L_t, f^S_t) of the same order. Moreover, for each variable in _t, f_t^S and f_t^V will assign either unnamed elements of the same 1-type, or the same constant. Equality will be preserved (if f_t^V(v) = f_t^V(v') then f_t^S(v) = f_t^S(v') as well), and, finally, the simulation will ensure that for every atom γ() of ψ, with ⊆_t, we have A, f^V_tγ() if and only if L_t, f^S_tγ(). Let us give an intuition of how the strategy works when it is Eloisa's turn, for the existentially quantified variable y_t+1. Let a̅=a_1, …, a_k be an enumeration of the elements in f^V_t[_t] ∖⊆ A, and let v̅=v_1, …, v_k be a tuple of variables such that f_t^V(v_i)=a_i (the choice for might not be unique). Let ρ_t+1 =(L_t+1, f^S_t+1) be the next position in the simulation of (ϕ,), obtained by following . Eloisa mimics this move in (ϕ, A) by choosing for the variable y_t+1 an element b ∈ A that colourfully dominates a̅ via the vertex colour representing ρ_t+1 and v̅. It is precisely at this step that the paradoxicality property is crucial in our proof. Figure <ref> depicts a possible choice for Eloisa in the case t=2: the arcs and their labels are in blue, while the assignments f^V_2 and f^S_2 are in red. The colour “representing” ρ_t+1 is denoted ρ. We set the hull-types of all tuples of A in such a way that the strategy for Eloisa suggested above maintains the similarity between positions in (ϕ,A) and their simulations in (ϕ,). For example, in Figure <ref>, if R(x_1, x_2, y_3, y_1) is an atom of ψ, and L_3, f^S_3 R(x_1, x_2, y_3, y_1), then R^A(c, a_1, b, a_2) shall be set to true as well. To this end we will carefully copy the hull-types. Equivalence of positions. 
Our model construction would work correctly if we simply used the full set ^∃_ for the set ℛ of vertex colours. However, to get a model of the optimal size 2^(|ϕ|·log|ϕ|), we need a smaller set ℛ.[Notice that a priori the set ^∃_ has doubly exponential size, as Abelard's first move in (ϕ,) is to choose an outer-type from .] This is why we introduce an equivalence relation on ^∃_ and select for ℛ only representatives of its equivalence classes. Let (L,f) and (L',f') be positions from ^∃_. We set (L,f) ∼ (L',f') if the following conditions hold: * L=L' and f=f'; * ^L(f(y_t)) = ^L'(f(y_t)), where t is the order of f; * for every atom γ() of ψ containing y_t, but no y_i with i>t, we have L,f γ() iff L',f γ(). The definition implies that ∼-equivalent positions agree on the atoms of ψ containing the new variable y_t. It is immediate to check that ∼ is an equivalence relation over positions from ^∃_. Moreover, the number of equivalence classes is at most |_^∃|·||·2^|(ψ)|, which is 2^(|ϕ|·log|ϕ|), since, as we observed in Section <ref>, _^∃= 2^(|ϕ|·log|ϕ|) as well. Model construction. Let ^∃_/_∼→^∃_ be a choice function selecting a single position from every equivalence class of ∼. We define our set of vertex colours as the image of : ={([(L,f)] /_∼) : (L,f) ∈^∃_}, while our set of arc colours is the set =∪. Let = (V,E) be an (, )-paradoxical colourful tournament with a pair of labellings μ V → and λ E →. By Lemma <ref>, we can assume that V is of size 2^(||·log||)×||·log||, and therefore of size 2^(|φ|·log|φ|). We define the unnamed domain A of our new model A to be the vertex set V of . The construction proceeds in the following three stages: Setting the 1-types, Providing the witnesses (setting the hull-types for non-singleton tuples of unnamed elements necessary for Eloisa's strategy), and Completing the structure (setting the hull-types of the remaining tuples). The 1-types and the hull-types will always be induced from . We need the following notions. A subset B ⊆ A is self-dominating if there exists b ∈ B such that b dominates B ∖{b} in . In this case, assuming that the assignment f_t+1 in the position μ(b)=(L_t+1,f_t+1) is of order t+1 [This choice of t+1 rather than t will be convenient next, and it is not problematic since no position in _^∃ can be of order 0.], we define the function g_B B → in the following way: g_B(b)=y_t+1 and g_B(a)=λ(b → a) for a ∈ B ∖{ b }. Moreover, we say that B is properly self-dominating if g_B(B) ⊆_t+1, f_t+1∘ g_B B→ L_t+1^ is injective, and (f_t+1∘ g_B)[B] ∩ is empty. For example, in Figure <ref> the sets {a_1, a_3, b} and {a_1, a_2, a_3, b} are properly self-dominating. Stage 1. Assigning the 1-types. For each element a ∈ A, we do the following. Let (L_t,f_t)=μ(a) be the colour of a, of order t, and let us set ^A(a) as ^L_t(f_t(y_t)). Notice that this step sets also the truth values of all the ground facts (i.e. facts on constants) and that this is done without conflicts at the level of 0-types, as all the 1-types we assign to elements are from , which is consistent. Stage 2. Providing the witnesses. For each properly self-dominating subset B ⊆ A of cardinality k between 2 and +, we do the following. Let b∈ B be the element dominating B ∖{b}. Let (L_t+1,f_t+1)=μ(b), of order t+1. If there is a position (L^*_t+1,f_t+1) ∼ (L_t+1,f_t+1) satisfying Aa=L^*_t+1f_t+1(g_B(a)) for each a ∈ B, then we take an enumeration a̅=a_1, …, a_k of the elements of B and set Aa̅ as L^*f_t+1(g_B(a̅)); otherwise we leave this hull-type undefined. 
Notice that when defining the hull-types of two distinct non-empty subsets, no conflicts can arise as they share no atoms. Stage 3. Completing the structure. For each subset B ⊆ A of cardinality k between 2 and + with a hull-type still undefined, we do the following. Take any enumeration a̅=a_1, …, a_k of the elements of B and select a k-outer-type β∈ such that Aa_i = βi for each i∈[k]. We are guaranteed that such β exists by the closedness of . We then set Aa̅ as β1, …, k. By the same argument as before, we do not introduce any conflicts in this stage too. The remaining still unspecified facts in A contain more than K+M unnamed elements, and since this number is bigger than the number of variables in ϕ, they are irrelevant for the truth value of ϕ. Hence, we can set all of them, e.g., to be false. This finishes the construction of A. To understand some subtleties of the construction described above, let us look again at Figure <ref> and see how b would be prepared as a correct choice for Eloisa for the position f^V_3 if ψ contains, say, the atoms R(x_1, x_2, y_3, y_1) and P(x_3, y_3, y_1). These two atoms, under f_3^V, become respectively R(c, a_1, b, a_2) and P(a_3, b, a_2). Hence, the definitions of Aa_1, a_2, b and Aa_2, a_3, b are of high importance. They are defined in two different steps of Stage 2, and it might be that these hull-types may be taken from different positions, say (L^*_3, f^S_3) and (L^**_3, f^S_3). Nevertheless, it is not dangerous. Indeed, the construction makes sure that they both belong to the equivalence class of (L_3, f^S_3), and hence by Condition <ref> of the definition of ∼, for every atom γ() of ψ containing y_3 as its maximal variable, we have that L^*_3, f_3γ iff L_3, f_3γ iff L^**_3, f_3γ. Hence, the truth-values of all the important atoms are as promised by the vertex colour ρ assigned to b. Correspondence of plays. We show now that Aϕ. To this end we prove the existence of a winning strategy for Eloisa in (ϕ,A). Similarly to Section <ref>, in the construction of , we will simulate a play of (ϕ, ) and base Eloisa's response in (ϕ,A) on her winning strategy in (ϕ, ). We simultaneously define , and a mapping Γ_V→S from positions of (ϕ,A) reachable when following to positions of (ϕ, ) reachable when following . The construction is done by induction on the order of the position f^V_t→ A^. Initially Γ_V→S and are empty functions. Round 0. Let us start with a position f^V_0_0 → A^, chosen by Abelard in Round 0 of (ϕ,). Let a̅=a_1, …, a_k be an enumeration of the elements of f^V_0[_0] ∖. Let L_0=Aa̅ and let f^S_0_0 → L_0^ be defined as follows: if f^V_0(x_i) ∈, then f^S_0(x_i)=f^V_0(x_i); and if f^V_0(x_i)=a_j, for some j, then f^S_0(x_i)=j. We set Γ_V→S(f^V_0) to be (L_0, f_0^S). Round 𝐭+1. Let us assume now that and Γ_V→S are defined for positions of order t, and let us define them for positions of order t+1. In the following, f^V_t_t→ A^ is a position of order t reachable when following the current . Abelard's move. If _t+1=∀, consider any position f^V_t+1_t+1→ A^, chosen by Abelard and extending f^V_t in an appropriate way. Let (L_t, f^S_t) be the position Γ_V→S(f^V_t). We set Γ_V→S(f^V_t+1) to be the position (L_t+1, f^S_t+1) defined as follows. If, for f^V_t+1(y_t+1), Abelard chose either a constant c or an element already chosen before as f^V_t(v), for some v∈_t, then L_t+1 is L_t, and f^S_t+1=f^S_t ∪{y_t+1↦ c / f^V_t(v)} (c / f^V_t(v) depending on the subcase). 
If on the other hand, he chose an element a not already chosen before, then we obtain L_t+1 by extending L_t with a fresh element and setting its 1-type to Aa; finally f^S_t+1 extends f^S_t by assigning y_t+1 to this new element. Eloisa's response. If _t+1=∃, then let a̅=a_1, …, a_k be an enumeration of the elements in f^V_t[_t] ∖⊆ A, and let v̅=v_1, …, v_k be a tuple of variables in _t such that f^V_t(v_i)=a_i (there may be more than one choice for ). Let ρ_t=(L_t, f^S_t) be Γ_V→S(f^V_t), and let ρ_t+1=(L_t+1, f^S_t+1) be (ρ_t), i.e. the corresponding move suggested by for Eloisa in (ϕ, ). To find a simulating move in (ϕ, A), let us take the vertex colour ρ∈ℛ ∼-equivalent to ρ_t+1, and let b be an element of A that colourfully dominates a̅ via ρ and v̅. We set (f^V_t) to be the assignment f^V_t+1=f^V_t ∪{y_t+1↦ b}. Finally, we define Γ_V→S(f^V_t+1) to be ρ_t+1. This is the end of our construction, which ensures some important invariants, stated in the following claim: Let f^V_t_t→ A^ be a position of (ϕ,A), reachable when following , and let (L_t, f^S_t) be Γ_V→S(f^V_t), position of (ϕ, ) reachable when following . Then: * f_t and f'_t are both of the same order (here t); * for every variables v_1,v_2 and v in _t: f^V_t(v_1)=f^V_t(v_2) iff f_t^S(v_1)=f_t^S(v_2); f_t^V(v)=c iff f_t^S(v)=c for every constant c; if f^V_t(v) is unnamed then Af_t^V(v)=L_tf^S_t(v); * for every atom γ() of ψ such that ⊆_t, we have that A, f^V_tγ() iff L_t, f^S_tγ(). Points <ref> and <ref> directly follow from the construction. We prove Point <ref> by induction on t. If t=0, then, by <ref>, the domain of the two assignments is _0, which means that v̅ either contains all of the x_i or at most one of them. In our construction, we have set L_0 to be the outer-type of some enumeration of all the elements of f^V[_0] ∖, and set f^V in accordance with f^S. As the outer-types define the truth-values of the atoms containing all of their elements or at most one of them, the equivalence follows. Assume now that the claim holds for t, consider an assignment f^V_t+1_t+1→ A^ extending f^V_t, and write (L_t+1, f^S_t+1) for Γ_V→S(f^V_t+1). By the inductive assumption, (A, f^V_t+1) and (L_t+1, f^S_t+1) agree on the atoms of ψ whose variables are contained in _t. We need to consider the atoms γ(v̅) whose variables contain y_t+1. If _t+1=∀, then, since y_t+1 is not special,  may contain no other variables, and we simply use <ref>. If _t+1=∃, then, assuming L_t=[k], we have f^S_t+1(y_t+1)=k+1. We define b=f^V(y_t+1) and recall that the vertex colour ρ of b satisfies ρ∼ (L_t+1, f^S_t+1). Take an enumeration b, a_1, …, a_ℓ∈ A of the elements of f^V_t+1[v̅] ∖ and the corresponding enumeration k+1, a'_1, …, a'_ℓ∈ L_t+1 of f^S_t+1[v̅] ∖. Our construction sets the hull-type of ⟨ b, a_1, …, a_ℓ⟩ to be the hull-type of ⟨ k+1, a'_1, …, a'_ℓ⟩ in some structure L^*_t+1 for which (L^*_t+1, f^S_t+1) ∼ρ∼ (L_t+1, f^S_t+1). By Condition <ref> of the definition of ∼, we have the desired equivalence. Again, the fact that is winning in (ϕ,A) comes from the last point of the claim, as is winning in (ϕ,). This allows us to conclude Theorem <ref>. In the end, we remark that our construction can be made fully deterministic basing on the explicit construction of paradoxical colourful tournaments (see Appendix <ref>). § CONJUNCTIONS OF K-SENTENCES The upper bound on the size of models of satisfiable formulas we get for in Theorem <ref> can be transferred to . 
The adaptations are somewhat technically complex, but rather routine and hence we only shortly outline them here. Consider a sentence ϕ=⋀_i=1^m ϕ_i in . All the sentences ϕ_i's are as in (<ref>) (with possibly different K's and M's), but with the variables renamed so that the sets of variables of the different ϕ_i are disjoint. We extend our games from Section <ref> by a pre-initial Round (-1) in which Abelard chooses one sentence ϕ_i. Then the game proceeds as previously on the chosen ϕ_i. This way, the players construct assignments whose domains contain only variables of this ϕ_i. The new game ^+(ϕ, A) is still, essentially, the standard verification game for First-Order Logic, and it is still very standard that Eloisa has a winning strategy iff Aϕ. As for the new game ^+(ϕ, ), it corresponds to checking satisfiability of each ϕ_i independently, but over a common set of outer-types . Lemma <ref> remains true, with a proof almost identical: we start with any model A_0 ϕ, augment it as previously, in order to obtain another model Aϕ, in which Eloisa has a proper winning strategy in the game ^+(ϕ, A). As we take the set of all outer-types realised in A, of grade smaller or equal to the maximal number of variables in the ϕ_i's. Consider a play of ^+(ϕ, ). It is opened by Abelard, who chooses a particular ϕ_i. We consider the same choice in the game ^+(ϕ, A). Now, in order to win ^+(ϕ, ) Eloisa just tries to win (ϕ_i, ), basing her moves on her winning proper strategy in (ϕ_i, A), as in the proof of Lemma <ref>. Lemma <ref> also remains true for ^+(ϕ,). Again we define similar equivalence relations ∼_f. As we assume that different ϕ_i's have disjoint sets of variables, every f may be used for one particular ϕ_i only; however, in Condition <ref> of the definition of ∼_f, we use the set of all atoms of ϕ, rather than just those of ϕ_i. As previously, ' is obtained by selecting a single representative from each equivalence class of every equivalence relation. It is not difficult to show that still ||=2^𝒪(|ϕ|·log|ϕ|) (an additional factor that needs to be considered is the number of conjuncts of ϕ, but it is only linear in |ϕ|). To win the new game ^+(ϕ,') Eloisa simulates a play of ^+(ϕ, ) as in the proof of Lemma <ref>: on her moves she uses the equivalence relations defined for assignments corresponding to ϕ_i chosen in the initial round by Abelard. The required changes in the model construction (Section <ref>) and in its correctness proof are similar in spirit. This time, in Condition <ref> of the definition of the relation ∼ on positions, we can restrict attention to atoms of ϕ_i corresponding to the assignment f in the considered positions (recall that the domain of f determines ϕ_i). We note that, in a sense, every ϕ_i has now its own vertex colours, but since the number of the ϕ_i's is at most linear in |ϕ|, the whole size of the model is still 2^𝒪(|ϕ|·log|ϕ|). In effect we get: If ϕ is a satisfiable sentence in , then it admits a model of size 2^(|ϕ|·log|ϕ|). Hence, the satisfiability problem of is -complete. It is easy to show that, if we can solve the satisfiability problem for , then we can also solve it for any positive Boolean combination of sentences in : it suffices to non-deterministically guess which of the sentences will evaluate to true, check that the guess indeed guarantees satisfaction of the whole formula, and then solve the problem for the conjunction of the guessed sentences. 
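The guess-and-check reduction described in the last paragraph can also be phrased as a simple search procedure. The sketch below (Python; all identifiers are ours, and conjunction_sat stands for an unspecified oracle deciding satisfiability of a conjunction of sentences, as provided by the corollary above) replaces the nondeterministic guess by an enumeration of all subsets of sentence identifiers; since the combination is positive, any model of the guessed conjunction also satisfies the whole formula.

from itertools import chain, combinations

def evaluate(formula, true_ids):
    # formula is either a sentence identifier, or a pair ('and'|'or', list of subformulas)
    if isinstance(formula, tuple):
        op, subformulas = formula
        values = [evaluate(sub, true_ids) for sub in subformulas]
        return all(values) if op == 'and' else any(values)
    return formula in true_ids

def positive_combination_sat(formula, sentence_ids, conjunction_sat):
    # guess which sentences evaluate to true, check that the guess makes the
    # positive combination true, and ask the oracle whether the guessed
    # sentences have a common model
    all_guesses = chain.from_iterable(combinations(sentence_ids, r)
                                      for r in range(len(sentence_ids) + 1))
    return any(evaluate(formula, set(guess)) and conjunction_sat(guess)
               for guess in all_guesses)

# Example shape of a query: (s1 and s2) or s3 is satisfiable iff the oracle accepts
# the conjunction {s1, s2} or the conjunction {s3}:
# positive_combination_sat(('or', [('and', ['s1', 's2']), 's3']), ['s1', 's2', 's3'], oracle)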
§ A FAMILY OF TIGHT EXAMPLES We prove in this section that the upper bound 2^(|ϕ|·log |ϕ|), which we got for the finite model property of , is optimal, and already reached in the fragment : propositionphinconstantfree There exists a family (φ_n)_n∈ of satisfiable formulas in such that each φ_n has size (n) and enforce models of at least 2^Ω(n·log n) elements. Moreover, the formula φ_n can be assumed to be in the fragment , with a unique existential quantifier, and without constant symbols. For simplicity, we propose here formulas with constant symbols. The reader will find details for a constant-free variant in Appendix <ref>. Let us construct this formula φ_n, for some natural n≥ 3. The idea is to enforce, for every permutation π of [n]={1,2,…, n}, the existence of a witness w_π, and to ensure that any two distinct permutations π≠π' have distinct witnesses w_π≠ w_π'. This way, we will be sure that any model of the formula will have at least n!=2^Ω(n·log n) elements. The formula φ_n is of the shape ∀ x_1, …, x_n, y, r_1, …, r_n-2. ∃ w. ψ_n. The variables x_i's, y, and r_i's, which are quantified universally, are the special variables of φ_n, hence φ_n is of grade 2n-1. The variable w is the only one quantified existentially. The formula ψ_n is the conjunction of the formulas defined in the following paragraphs. The signature of the formula consists of the set _n={c_1,…, c_n, q_0, q_1} of constant symbols, and of the set _n ={P,W,C_→, C_←,S, Z}, where all the symbols are of arity 2n+1. For the sake of readability, we will divide the arguments of these relations into three parts of lengths n, one, and n respectively, and will therefore write atoms such as P(x_1,…, x_n| y| q_0, r_1, …, r_n-2, q_1). The first point is to generate every permutation of [n] with the relation P: for every π∈_n (the set of permutations of [n]), we shall have P(c_π(1), …, c_π(n)| q_0| q_0, …, q_0) satisfied. As it is known that any permutation of [n] can be generated from the identity via the permutation that switches 1 and 2 and the cyclic permutation γ1↦ 2↦⋯↦ n↦ 1, we can achieve this with the formula μ_n,perm defined as P(c_1,…, c_n|q_0|q_0,…, q_0) ∧[ P(x_1,…,x_n|y|r_1, …, r_n-2, q_0, q_0) → (P(x_2, x_1, x_3,…, x_n|y|r_1, …, r_n-2, q_0, q_0) ∧P(x_2,…, x_n, x_1|y|r_1, …, r_n-2, q_0, q_0))]. While it is true that the relation P focusses on its first n variables, we require it to mention y and the r_i's as well, in order for the atoms to contain all the special variables. Then, we ensure the existence of the witness w via the following formula μ_n,witness: P(x_1,…,x_n|y|r_1, …, r_n-2, q_0, q_0) → W(x_1,…, x_n|w|q_0, …, q_0). Now, we have ensured the existence of a witness w_π for every permutation π∈_n, which satisfies P(c_π(1),…, c_π(n)| w_π| q_0,…, q_0). Now, to make sure that all these witnesses are distinct, we need to forbid P(c_π'(1),…, c_π'(n)| w_π| q_0,…, q_0), for any permutation π'≠π. For this, we will use the following lemma, proposing a certain decomposition for permutations distinct from π: Let π,π'∈_n. The permutation π' is distinct from π if and only if it can be decomposed as γ^-j∘ρ∘γ^k∘π, where: γ is the cyclic permutation 1↦ 2↦⋯↦ n↦ 1; 0≤ j < k < n; and ρ is a permutation of [n] satisfying ρ(n)=n. First, suppose π'≠π. Necessarily, there exists an index i∈[n] such that π'(i)>π(i). We set k as n-π(i), and j as n-π'(i), and we get 0≤ j<k< n. If now we define ρ as γ^j∘π'∘π^-1∘γ^-k, then by definition we obtain the equality γ^-j∘ρ∘γ^k∘π=π'. 
It remains to verify that ρ assigns the index n to itself: ρ(n) =γ^j∘π'∘π^-1∘γ^-k(n) =γ^j∘π'∘π^-1∘γ^π(i)(n) =γ^j∘π'∘π^-1(π(i)) =γ^j∘π'(i) =γ^-π'(i)(π'(i)) =n. Hence, π'≠π can indeed be decomposed in the desired form. Now, we show that π itself cannot. Suppose that π=γ^-j∘ρ∘γ^k∘π, with the conditions of the statement holding. Then it means that the identity permutation is equal to γ^-j∘ρ∘γ^k. Yet it can be checked that the latter permutation assigns the index n-k to the index n-j≠ n-k, and hence it cannot be the identity. This lemma will help us generating every π'≠π from the original permutation π. We enumerate all of these π''s using the relations C_→, S, and C_←, as follows: with the use of C_→, we generate all the permutations of the shape γ^k∘π; with the use of S, we generate the permutations of the shape ρ∘γ^k∘π, with ρ assigning n to itself; and finally, with the use of C_←, we generate the permutations of the shape γ^-j∘ρ∘γ^k∘π, with j<k, i.e. all the permutations distinct from π. The first n arguments in these relations will indicate the current considered permutation. On the other side, the last n arguments will be seen as a counter: it will consist of a certain number of occurrences of the constant q_1, followed by a certain number of occurrences of the constant q_0. The number of occurrences of the constant q_1 indicating, in unary, the difference between k and j. It is important to make sure that the counter never reaches 0 nor n, so that π' can never equal π. First, the following formula μ_n,cyclic generates the permutations of the form γ^k∘π, with 1≤ k< n: [ W(x_1,…,x_n|y|r_1, …, r_n-2, q_0, q_0) → C_→(x_2,…, x_n, x_1|y|q_1, r_1, …, r_n-2, q_0)] ∧[ C_→(x_1,…,x_n|y|r_1, …, r_n-2, q_0, q_0) → C_→(x_2,…, x_n, x_1|y|q_1, r_1, …, r_n-2, q_0)]. Notice that, while the counter is incremented, we never allow it to reach n, as the last argument always remains q_0. Now, we can generate the permutations of the shape ρ∘γ^k∘π, where ρ assigns n to itself. Such ρ's are nothing else than permutations of the smaller set [n-1]={1,2,…, n-1}, and can be obtained via the permutation switching 1 and 2 and the pseudo-cyclic permutation 1↦ 2↦⋯↦ n-1↦ 1. This justifies the following formula μ_n,smaller, similar to the formula μ_n,perm: [ C_→(x_1,…,x_n|y|q_1, r_1, …, r_n-2, q_0) → S(x_1,…, x_n|y|q_1, r_1, …, r_n-2, q_0)] ∧[ S(x_1,…, x_n|y|q_1, r_1, …, r_n-2, q_0) → (S(x_2, x_1, x_3,…, x_n|y|q_1, r_1, …, r_n-2, q_0) ∧S(x_2,…, x_n-1, x_1, x_n|y|q_1, r_1, …, r_n-2, q_0))]. Finally, it remains to apply the inverse of the permutation γ. We do it via the formula μ_n,cylic^-1: it can apply γ^-1 as long as there remains more than one q_1 in the counter: [ S(x_1,…,x_n|y|q_1, r_1, …, r_n-2, q_0) → C_←(x_1,…, x_n|y|q_1, r_1, …, r_n-2, q_0)] ∧[ C_←(x_1,…,x_n|y|q_1, q_1, r_1, …, r_n-2) → C_←(x_n, x_1,…, x_n-1|y|q_1, r_1, …, r_n-2, q_0)]. Let π∈_n. Since we have W(c_π(1),…, c_π(n)| w_π| q_0,…, q_0), Lemma <ref>, together with the formulas μ_n,cyclic, μ_n,smaller, and μ_n,cylic^-1, ensures that for any π'∈_n, an atom of the shape C_←(c_π'(1),…, c_π'(n)| w_π| q_1,…, q_1, q_0,…, q_0) is satisfied if and only if π'≠π, and in this case the number of q_1's occurring in the counter is the difference k-j, these two numbers being obtained as in the lemma. Once we have this, we can make sure that W(c_π'(1),…, c_π'(n)| w_π| q_0,…, q_0) does not hold for π'≠π. This will be enough to conclude. 
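Before continuing, let us record a small sanity check of the decomposition lemma above. The following brute-force sketch (Python; the representation of permutations as dictionaries on [n] is our own choice) recomputes, for every pair of distinct permutations of [n] with n=4, the numbers j, k and the permutation ρ exactly as in the proof, and verifies the properties claimed in the statement.

from itertools import permutations

def compose(f, g):
    # (f o g)(i) = f(g(i)); permutations are dicts on {1, ..., n}
    return {i: f[g[i]] for i in g}

def inverse(f):
    return {v: i for i, v in f.items()}

def gamma_power(n, k):
    # k-th power of the cyclic permutation gamma: 1 -> 2 -> ... -> n -> 1 (k may be negative)
    return {i: ((i - 1 + k) % n) + 1 for i in range(1, n + 1)}

n = 4
all_perms = [dict(zip(range(1, n + 1), images)) for images in permutations(range(1, n + 1))]
for pi in all_perms:
    for pi_prime in all_perms:
        if pi_prime == pi:
            continue
        i = next(i for i in range(1, n + 1) if pi_prime[i] > pi[i])
        k, j = n - pi[i], n - pi_prime[i]
        rho = compose(gamma_power(n, j),
                      compose(pi_prime, compose(inverse(pi), gamma_power(n, -k))))
        assert 0 <= j < k < n and rho[n] == n
        assert compose(gamma_power(n, -j),
                       compose(rho, compose(gamma_power(n, k), pi))) == pi_prime
print("decomposition of the lemma verified for n =", n)

We now return to the construction of ψ_n.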
We do it via the formula μ_n,decrease: starting from the atom of the previous paragraph, and using the predicate Z, it decreases the counter, until the latter reaches 0: [ C_←(x_1,…,x_n|y|q_1, r_1, …, r_n-2, q_0) → Z(x_1,…, x_n|y|r_1, …, r_n-2, q_0, q_0)] ∧[ Z(x_1,…,x_n|y|q_1, r_1,…, r_n-2, q_0) → Z(x_1,…, x_n|y|r_1, …, r_n-2, q_0, q_0)]. Finally, we can negate the relation W, in the formula μ_n,neg: Z(x_1,…,x_n|y|r_1, …, r_n-2, q_0, q_0) → W(x_1, …, x_n|y|r_1, …, r_n-2, q_0, q_0). As introduced above, we define ψ_n as the conjunction of μ_n,perm, μ_n,witness, μ_n,cyclic, μ_n,smaller, μ_n,cylic^-1, μ_n,decrease and μ_n,neg. The final formula φ_n defined as ∀x̅, y, r̅. ∃ w. ψ_n is as desired, as expressed in the following two claims proved in Appendix <ref>. claimphinsatisfiable The so-defined formula φ_n is in , of size linear in n, and is satisfiable. claimphinlowerbound Any model of φ_n has at least 2^Ω(n·log n) elements. § PARAMETRISED VARIANTS OF K In Section <ref>, we established that any satisfiable formula in has a model of size 2^(|ϕ|·log |ϕ|), and, in Section <ref>, we provided, for every n≥3, a formula φ_n enforcing models of this size. This formula φ_n has length linear in n and admits a unique existential quantifier. Yet, a number of universal ones is unbounded. On the other side, two important subclasses of consist of formulas with a bounded number of universal quantifiers and are known for admitting smaller models of size 2^(|ϕ|): the Ackermann class (the prefix-class ∀∃^*) and the Gödel class (the prefix-class ∀∀∃^*). Considering these observations, it is natural to ask which role exactly play the universal quantifiers in the gap between models of size 2^(|ϕ|) and those of size 2^(|ϕ|·log |ϕ|). More precisely, let us define, for a fixed k∈, the class k consisting of formulas in that have at most k universal quantifiers. (We remark that both the Ackermann class and the Gödel class are contained already in 2.) We answer our doubts by showing that every individual class k admits a 2^(|ϕ|) upper bound on the size of minimal models. propositionparamdmk Let k∈. If a sentence ϕ in k is satisfiable, then it has a finite model of size 2^(|ϕ|). In other words, this means that, in Theorem <ref>, a more precise upper bound 2^(|φ|·log|φ|_∀) can be given, where |φ|_∀ is the number of universal quantifiers in the formula φ. We prove Proposition <ref> in Appendix <ref>. For making our ideas easier to grasp, we first provide a proof of the analogous result for the class k, defined in a natural way (the main difference is that here these k universal quantifiers are necessarily in front). Then, we discuss how to extend our techniques to capture the entire k, here the main technical difficulty lies in handling alternating quantifiers. We remark that the small model construction from Section <ref> indeed requires an augmentation, as by the lower bound stated in Lemma <ref>, it cannot produce smaller models than 2^Ω(n ·log n), even in the parametrised setting (notice that here we do not restrict the use of existential quantifiers, hence the set of arc colours can have size linear in n). Let us now give some additional intuitions for the simpler case of k. Suppose that ϕ is a k-sentence of the shape ∀. ∃. ψ, where is a k-tuple of ϕ's special universally quantified variables, is a tuple of its existentially quantified variables (say, of size ), and ψ is a quantifier-free formula. (For simplicity, we omit in this overview non-special universally quantified variables.) 
We notice that the game (ϕ,A) consists of two distinct steps: first Abelard selects at most k elements of A for , and then Eloisa responds with witnesses for . The crucial role of the paradoxicality-like property is to guarantee that Eloisa is able to find these witnesses no matter which elements were picked by Abelard. Further, we see that after Abelard's initial round, the game does not branch anymore and is deterministic (every partial play can be uniquely extended to the complete one consistent with Eloisa's strategy). Now, the main trick is to observe that the small model construction in Section <ref> relies on the colourful analog of (k+)-paradoxicality. However, the level of nondeterminism in the considered game is only k, which is a constant (as Abelard controls only k variables), and hence the colourful analog of k-paradoxicality suffices. We exploit this observation by introducing a notion of witness chains, which are tuples of elements prepared in advance to serve as witnesses together. Hence, assuming such witness chains have been provided, Eloisa's winning strategy reduces to the choice of a suitable witness chain whose elements dominate Abelard's choices. The crucial difference is now that she does not need to worry about dominating her own choices from the preceding turns. As already stated, formal details are included in Appendix <ref>. § AN APPLICATION: EXTENDING THE UNIFORM ONE-DIMENSIONAL FRAGMENT In this section, we use Theorem <ref> to solve the satisfiability problem for the class which we define for the occasion: it is a strong extension of the fragment <cit.>. Let us remark that, in its original definition, the latter allows neither equality nor constants. Its variant that admits equality was introduced in <cit.>. In the version we present here, constants are allowed, but the equality symbol is not (as in ). Now, for a set of variables , we say that a set F of literals (i.e. atoms or negations of atoms) is -uniform if the set of the variables of every literal in F is precisely . Definition of UF_1. The set of formulas of the uniform one-dimensional fragment, , is defined as the smallest set such that: * Every atom with at most one variable is in . * It is closed under Boolean combinations. * Let be a set of variables, let x be a variable not in , and let U be a set of formulas in whose free variables are in {x}∪. Let ⊆{x}∪ and, finally, let F be a -uniform set of literals. Then, for every Boolean combination ν(x,) of formulas in U∪ F, the two formulas ∃ x, . ν(x,) and ∃. ν(x,) both belong to . Of course, since is closed under negation, we allow ourselves to say that a formula such as ∀ x,. ν(x,) is a member of the class when ∃ x,. ν(x,) is. The same holds for the formula ∀. ν(x,). In <cit.>, it is shown that (without equality) has the doubly-exponentially-sized model property and hence its satisfiability is in . This is strengthened in <cit.> to the exponentially-sized model property, and hence -completeness follows, even if free use of equality is allowed. We define here an extension of , which we call the ∀-uniform fragment and denote by . The idea is to keep the uniformity and one-dimensionality conditions in subformulas starting with universal quantifiers, but not to require them in subformulas starting with existential quantifiers. Definition of ∀-UF. The class of formulas, all in negation normal form, is defined as follows: * Every literal with at most one variable is in . * It is closed under positive Boolean combinations. 
* Let be a set of variables, let x be a variable not in , and let U be a set of formulas in whose free variables are in {x}∪. Let ⊆{x}∪, and, finally, let F be a -uniform set of literals. Then, for every positive Boolean combination ν(x,) of formulas in U∪ F, the formulas ∀ x, . ν(x,) and ∀. ν(x,) both belong to . * Let and be two disjoint sets of variables. Let ν(,) be a positive Boolean combination of literals containing at least one variable from and of formulas in with free variables included in ∪. Then the formula ∃. ν(,) belongs to . and differ in the last item, which allows us to use existential quantification quite freely, as in the example φ_lost_proof: ∀a, s. [assertion(a)∧scientist(s)∧claims(s,a)]→ ∃p. proof_of(p,a)∧found(s,p) ∧∀m. [margin(m)∧contains(m,p)]→too_small(m). (Again, this specific use of implications is not problematic.) Indeed, one can see that the subformula “∃ p…” has two free variables, while the subformula “∀ m…” has only one. It is routine to verify that the negation normal form of any formula is in . It turns out that satisfiability of (without equality) reduces to satisfiability of (and even to satisfiability of conjunctions of -sentences). lemmaexpandingfauf For every sentence ϕ, there is a conjunction ψ of sentences in , such that every model of ϕ can be expanded to a model of ψ; and reciprocally every model of ψ is a model of ϕ. The basic idea of the translation is to replace in a bottom-up manner subformulas μ(x)=∀y̅. ν(x,) of ϕ starting with a block of universal quantifiers by unary atoms P_μ(x), and add conjuncts ∀ x, y̅.  P_μ(x) →ν(x,), whose prenex form is in . A formal proof of Lemma <ref> can be found in Appendix <ref>. By Theorem <ref>, we get: The satisfiability problem for is -complete. has the exponential-sized model property. § NOTES ON RELATED WORK We present here technical remarks concerning previous works on the class and related fragments. Comments on the definition of from <cit.>. <cit.> and its extended version in <cit.> thoroughly present a resolution procedure for . There is however a minor issue concerning the definition of in those papers. Namely, according to this definition the sentence ∀y. ∃z. ∀x_1, x_2. R(y,z) ∧[R(x_1, x_2) →R(x_2, x_1)] ∧ [(R(x_2,x_1) ∧R(x_1, z)) →R(x_2, z)] is a member of (e.g. the terminal prefix of the last atom is ∀ x_2). The reader may check that it is satisfiable but has only infinite models. On the other hand, in <cit.> it is then claimed (without a proof) that sentences in can be converted to prenex form as in (<ref>) (with some extra leading existential quantifiers which we simulate by constants), and hence, by our result, the example sentence should have a finite model. One of the authors [U. Hustadt, private communication, 2023.] of <cit.> confirmed that it was not their intention to have sentences like in our example (with alternation of quantifiers preceding the special universal quantification) in and hence their definition needed a fix. Complexity of the resolution procedure from <cit.>. <cit.> and <cit.> do not study the complexity of the resolution procedure they present. However, as both the number and the size of clause sets that need to be considered are at most doubly exponential with respect to the length of the formula, it seems that with a careful implementation the procedure could work in doubly exponential time. Any better complexity does not seem to be derivable. Prior decidability proofs for the -Skolem class. 
The class was first studied, under the name “Class 2.4”, in the book <cit.>, among several other solvable Skolem classes. In this book, on page 90, it is shown that has the finite model property, with a doubly exponential upper bound on the size of finite models. Decidability in follows. Another proof of the finite model property, via a probabilistic method, is given in <cit.>; neither the complexity nor the size of models is studied there. It is worth mentioning here that the decidability of earlier variants of this class without constants was studied by Friedman <cit.>: the class with two special variables; and the class with non-special universally quantified variables disallowed. The latter also corresponds to Item VIII in Church's book <cit.> (however, here the initial prefix of existential quantifiers is allowed). Prior results on the Gödel class. There have been several proofs of the finite model property for the Gödel class. See <cit.> for a comprehensive survey. Most of them, including Gödel's original construction <cit.> and the probabilistic proof by Gurevich and Shelah <cit.>, lead to a doubly exponential upper bound on the size of minimal finite models. Lewis <cit.> claims that from <cit.> a bound 2^O(|ϕ|) follows. We, however, cannot see how to infer this claim from <cit.>, as the only related statement we were able to find there is the one on page 94. It indeed speaks about an upper bound on the size of models, but, due to the factor d(F,2), and taking into account the estimate of d(F,n) from page 39, this upper bound seems to us to be doubly exponential. We recall that the 2^O(|ϕ|) upper bound follows from our work, namely from Proposition <ref>. Prior results on the Generalised Ackermann class. The Generalised Ackermann class, GAF, was proposed by Voigt <cit.> as an extension of the classical Ackermann fragment AF <cit.> (see page 67 of <cit.> for definitions). Voigt observed that GAF is contained in (Proposition 3.8.8, page 76) and that every satisfiable GAF sentence has a finite model (Theorem 4.3.5, page 130). His finite model construction produces, however, models of non-elementary size. Our results improve this upper bound to singly exponential and hence establish NExpTime-completeness of GAF satisfiability. However, in contrast to Voigt, we do not have equality. The exact complexity of GAF with equality remains open. Origins of Maslov's class . In 1968, Maslov <cit.> introduced the class and proved the decidability of the corresponding validity problem by utilising the inverse method. Originally his paper was written in Russian and its translation <cit.> appeared later in 1971. The syntactic restrictions of the class might at first glance appear artificial and rather unintuitive. However, the original paper sheds light on Maslov's motivation behind this definition. When introducing the class , he mentions two other classes, known to be decidable from earlier work. The first one is the class in which every ϕ-prefix has length at most 1, e.g., Item IV in <cit.>. This class naturally generalises the monadic class to higher arities by allowing repetitions of the same variable, e.g., ∀ x. ∃ y. ∀ z. R(x,x) ∧ T(y,y,y) ∨ S(x) ∧ T(z,z,z). The second one is the class in which every non-empty ϕ-prefix ends with a universal quantifier (remember, he considered the validity problem for the dual of ). 
In other words, the class with a restriction: the tuple of special variables shall be empty. Maslov claims that this class was probably not known for a wider audience, however, he points to Item IX in <cit.>, corresponding to a significant subfragment of it: the class of formulas in the form _1 x_1 …_ x_. ∀ y_1 …∀ y_. ψ with each non-empty ϕ-prefix ending with a variable y_i, for some i. Additionally, he drew inspiration from the Gödel class and was aware of the solvable Skolem classes in <cit.> and in <cit.>, notably leading him to consider tuples of special variables in his fragment. With all of this in mind, we believe that the class can be considered as a reasonable generalisation of the other known decidable fragments, combining the ideas present there in a clever way. Sergey Yurevich Maslov was a Russian logician employed in the Steklov Mathematical Institute in St. Petersburg. He mainly worked in the field of automated reasoning and its applications, where he contributed many interesting ideas. However, his interests were not limited to logic and included cognitive science, mathematical biology, economics and philosophy. He was also a civil right activist during the Soviet regime. Maslov died unexpectedly in 1982, due to a car accident, at the age of 43. (Biographical notes in <cit.>.) The first and second authors were supported by Polish National Science Center grant No. 2021/41/B/ST6/00996. The third author was supported by the ERC grant INFSYS, agreement no. 950398. The authors thank Warren Goldfarb, Ullrich Hustadt and Harry R. Lewis for some comments on their work, and the anonymous reviewers for their helpful suggestions. ACM-Reference-Format § INTERPRETING CONSTANTS In our definitions we assume that constants are interpreted by themselves; in particular different constant symbols are interpreted by different elements. This does not affect the generality of our results, as we have a size preserving reduction based on the following: Let ϕ be a satisfiable -sentence, and let c_1,…,c_k be the list of constant symbols occurring in ϕ. Then there exists ψ, with σ(ψ) ⊆σ(ϕ), differing only in constant symbols, such that: (i) |ψ|=|ϕ|, (ii) ψ is satisfiable assuming distinct interpretation of constants, (iii) any model Bψ can be expanded to a model B^+ ϕ over the same domain. Suppose that Aϕ. We define ι as the mapping i ↦min{ j : A c_i = c_j }, and ψ as the formula obtained from ϕ by replacing each occurrence of c_i by c_ι(i). Clearly |ψ|=|ϕ|. Then Aψ and all the constant symbols in ψ have distinct interpretations. Finally, suppose that we have B over σ(ψ) such that Bψ. Construct B^+ from B by adding the missing interpretations of constants: if c_i ∉σ(ψ) then set c^B^+_i:=c^B_ι(i). Then B^+ ϕ. Hence, the (unrestricted) satisfiability problem of can be reduced in nondeterministic polynomial time to the satisfiability problem of the same fragment, under the assumption that different constants have different interpretations. § AN EXPLICIT CONSTRUCTION OF PARADOXICAL COLOURFUL TOURNAMENTS The existence of paradoxical colourful tournaments of exponential size is already established by Lemma <ref>. However, the proof given in Section <ref> relies on a probabilistic method, hence it is non-constructive. We complement this result by providing a deterministic construction. Our proof is based on Paley graphs. Their exact definition is not needed to understand our proof, however, for self-completeness we briefly recall it in the following paragraph. Let p be a prime power such that p ≡_4 3. 
Denote by 𝔽_p the finite field of order p. Then the Paley graph of size p is the graph having the elements of 𝔽_p as its set of vertices, and such that, for any distinct a,b ∈𝔽_p, there is an arc a → b iff there exists c ∈𝔽_p such that a-b=c^2. We remark that, because of the condition p ≡_4 3, such a graph is a tournament (see <cit.>), which is why we call it the Paley tournament in the rest of the section. We say that a tournament has the k-extension property if for any disjoint subsets of vertices A and C, such that |A|+|C|≤ k, there exists a vertex b ∉A ∪ C such that c → b for each c ∈ C and b → a for each a ∈ A. The main result of <cit.> states that: Any Paley tournament of size at least k^2· 2^2k admits the k-extension property. To be more precise, the statement in <cit.> only speaks about k-paradoxicality, but the k-extension property can be inferred from its proof. Explicit construction. The high-level idea behind our construction is to represent both vertex and arc labellings directly inside Paley tournaments: any information, such as a vertex colour or an arc colour, can be encoded as a sequence of bits, and the sets A and C from the definition of the k-extension property can be taken as the sets of bits that shall be put to 0 and 1, respectively. Hence, let us define a suitable gadget for representing a single integer: Suppose that a̅=a_1,…,a_ℓ is a tuple of pairwise distinct vertices in a tournament, b is a vertex not in a̅, and n is a number from {0,…,2^ℓ-1}, where ℓ is some natural number. The number n can be written in binary as Σ_1≤ i ≤ℓ n_i· 2^i-1, where each n_i is a bit ∈{0,1}. Then we say that the pair (a̅,b) represents the number n if, for each i∈[ℓ], the following equivalence holds: a_i → b iff n_i=1 (and therefore b → a_i iff n_i=0). We denote this condition by (a̅,b)=n. Now, let us come back to our colourful tournaments. For simplicity, we assume our sets of vertex colours and of arc colours to be respectively ={0,1,…, 2^m-1} and ={0,1,…, 2^2^t-1-1}, where t and m are natural numbers. We set k=t + m + 2^2^t-1· 2^t. As a foundation of our construction of an (,)-paradoxical colourful tournament, we take a Paley tournament = (V_p,E_p) of size (k^2· 2^2k) with the k-extension property. 
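The following minimal sketch (Python) makes the objects just introduced concrete. It is restricted, for simplicity, to a prime p ≡ 3 (mod 4), since arithmetic in a proper prime-power field GF(p^d) would require extra machinery that we do not reproduce here, and all function names are our own. It decides arcs of the Paley tournament via Euler's criterion, reads off the number represented by a pair (a̅, b), and checks the k-extension property by exhaustive search, which is feasible only for small p and k.

from itertools import combinations

def paley_arc(p, a, b):
    # arc a -> b iff a - b is a nonzero square modulo p (Euler's criterion); assumes p = 3 (mod 4)
    return pow((a - b) % p, (p - 1) // 2, p) == 1

def represents(p, a_tuple, b):
    # the number n whose i-th bit is 1 exactly when a_i -> b
    return sum(2 ** i for i, a in enumerate(a_tuple) if paley_arc(p, a, b))

def has_k_extension(p, k):
    # brute-force check of the k-extension property
    V = list(range(p))
    for m in range(k + 1):
        for size_a in range(m + 1):
            for A in combinations(V, size_a):
                rest = [v for v in V if v not in A]
                for C in combinations(rest, m - size_a):
                    if not any(all(paley_arc(p, b, a) for a in A) and
                               all(paley_arc(p, c, b) for c in C)
                               for b in V if b not in A and b not in C):
                        return False
    return True

# 67 is a prime with 67 = 3 (mod 4) and 67 >= 2^2 * 2^4, so, by the quoted result,
# the 2-extension property should hold (the exhaustive check takes a few seconds).
print(has_k_extension(67, 2))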
We partition the vertex set V_p into three distinct subsets as follows: of size t, of size m, and , the rest of them, of size p-t-m. We can rewrite the elements of as _1,…, _t, and the elements of as μ_1,…, μ_m. Furthermore, we partition the set into classes indexed by n ∈{0,…,2^t-1} as follows: _n = { x ∈ : (,x)=n }. Then, we rename these classes _n's with indices i ∈ [2^t], so that |_1| ≤ |_2| ≤…≤ |_2^t|. We define n^* to be the original index of the smallest class _1, i.e. n^* = (,x) for any x ∈_1. Now, we consider any subset of _1×_2×⋯×_2^t such that each x ∈ V_p occurs in at most one tuple in and there is no ' ⊃ satisfying the same property. Since _1 is the smallest class, it can be easily seen that ||=|_1|, and that for each a_1 ∈_1, there is a unique tuple a_2,…,a_2^t such that a_1,a_2,…,a_2^t∈. We denote this (2^t-1)-tuple by s(a_1). Finally, we can define a new tournament = (V,E) using this decomposition of as follows: the set V of vertices is _1, and E is E_p∩(_1×_1), the induced set of arcs from . We turn this tournament into an (,)-colourful one by defining μ(b) = (,b) for b∈ V (recall indeed that (,b) is a number in {0,… 2^m-1}), and, if b b' is an arc, then λ(b → b') = (s(b'),b) (same remark: (s(b'),b) is a number in {0,…, 2^2^t-1-1}). The so-defined triple (,μ,λ) is indeed an (,)-paradoxical colourful tournament. For simplicity, we will write ℓ for 2^2^t-1. To prove the (,)-paradoxicality of , we must in particular show that V=_1 admits at least ℓ=|| vertices. This will be done in the end of this proof. For the moment, we assume that _1 indeed admits at least ℓ elements, and we fix a vertex colour r ∈, a tuple a̅=a_1,…,a_ℓ of pairwise distinct vertices of , and a tuple q̅=q_1,…,q_ℓ of arc colours. We need to prove the existence of a vertex b in V that colourfully dominates a̅ via r and q̅, that is (let us recall the definition) such that μ(b)=r and b a_i, λ(b a_i)= q_i for each i∈[ℓ]. For this, we will use the k-extension property of . We define A as the set a̅∪ X_0∪ R_0∪⋃_1≤ i≤ℓ Q_0,i⊆ V_p and C as the set X_1∪ R_1∪⋃_1≤ i≤ℓQ_1,i⊆ V_p, where: * for d∈{0,1}, X_d is the set of elements _j∈ such that n^*_j, the j'th bit of n^* in its binary representation, is d (remember that n^* is a member of {0,…, 2^t-1}); * for d∈{0,1}, R_d is the set of elements μ_j∈ such that r_j, the j'th bit of r in its binary representation, is d (remember that r is a member of ={0,…, 2^m-1}); * for 1≤ i≤ℓ, let us write ⟨ a_i,2,…, a_i,2^t⟩∈_2×⋯×_2^t for s(a_i), then Q_d,i, for d∈{0,1}, is the set of a_i,j's such that q_i,j, the j'th bit of q_i in its binary representation, is d (remember that q_i is a member of ={0,…, 2^2^t-1-1}). It is easy to see that A and C are disjoint and are both contained in the set a̅∪∪∪⋃_1≤ i≤ℓs(a_i), so |A|+|C| ≤ℓ + t + m + ℓ· (2^t-1) = k. Thus, we can apply the k-extension property of to find a vertex b∈ V_p such that b a for each a∈ A and c b for each c∈ C. Now, we verify that b is a good candidate, i.e. that it is a member of , in which it colourfully dominates a̅ via r and q̅. First, b∈_1, i.e. b is indeed a vertex of . This is ensured by our choices of X_0 ⊆ A and of X_1 ⊆ C: they guarantee that (,b) = n^*, and hence b ∈_1 by definition. Second, the vertex colour of b is indeed r. By definition, we have μ(b) =(,b), and the latter is r, by our choices of R_0 ⊆ A and of R_1 ⊆ C. Third, b indeed dominates a̅, that is b a_i for each 1 ≤ i ≤ℓ. This is ensured by the fact that a̅ is included in the set A. Finally, λ(b a_i) must be q_i for each i∈[ℓ]. 
By definition, it is (s(a_i),b), which is q_i by our choices of Q_0,i⊆ A and of Q_1,i⊆ C. Now, as discussed above, it remains to prove that V admits at least ℓ elements. Suppose this is not the case. Then V_p ∖ V admits at least t+2^t-1 elements. Let y_1,…,y_2^t-1 be a tuple of elements of V_p ∖ (V ∪). For each n ∈{ 0, …, ℓ-1 } and d ∈{0,1}, we define Y_d^(n) to be the set of y_j's such that n_j, the j'th bit of n in its binary representation, is d. Finally, we define A^(n) to be X_0 ∪ Y_0^(n) and C^(n) to be X_1 ∪ Y_1^(n), where the X_d's are as before. Since A^(n) and C^(n) are disjoint and |A^(n)|+|C^(n)| ≤ k, we can apply the k-extension property of to find a vertex b_n ∈ V_p such that b_n → a for each a ∈ A^(n) and c → b_n for each c ∈ C^(n). As shown earlier, because of X_0 ⊆ A^(n) and X_1 ⊆ C^(n), we have that b_n ∈_1, i.e. b_n ∈ V. Moreover, for n ≠ n', the vertices b_n and b_n' are necessarily distinct, since the pair (⟨ y_1,…,y_2^t-1⟩, b_n) represents the number n. Therefore, the size of is at least ℓ, which completes the proof. It remains to estimate the size of the tournament = (V,E). Let n denote the size of = (V_p,E_p), which is (k^2· 2^2k). Since k = t + m + 2^2^t-1· 2^t, a simple calculation gives |V_p|= 2^(||·log||)×(||·log||)^2. Since V is a subset of V_p, we can conclude. This way, we obtained an alternative, fully constructive proof of Lemma <ref>. Although the resulting upper bound is slightly worse than the previous 2^(||·log||)×||·log|| bound obtained via the probabilistic method, we still consider it good enough for our purpose. Discussion. A possible motivation for the deterministic construction is that we can obtain paradoxical colourful tournaments with low entropy. For example, suppose that two parties want to share a paradoxical colourful tournament. If they sample one, as in the probabilistic proof, then they need to send 2^(||·log||)×||·log|| random bits. Instead, they can share a single prime power, which needs only (log|| + ||·log||) bits, and then efficiently construct the same tournament locally. A somewhat related application is a query scheme that allows for succinct representations of paradoxical colourful tournaments. More precisely, we can effectively answer queries about orientations of arcs, and about vertex and arc colours, without ever explicitly constructing a tournament. This can be achieved by translating the query to the task of computing square roots in the appropriate field. The sketched scheme, with a careful implementation, can be made to work in space polynomial in the number of arc and vertex colours. We leave the details unspecified, since this is outside the scope of this paper. § PROOFS FROM SECTION <REF> In this appendix, we provide the proofs of Lemma <ref> and of Item <ref> of Claim <ref>. * Let A_0 be a model of φ. We first modify A_0 to obtain an infinite structure A with the unnamed domain A = A_0^×. (Notice that A contains unnamed copies of constants.) For every a ∈ A^, we define its pattern element a_* to be a if a is in , and a_0 if a = (a_0,i) for some i ∈. Finally, we define relations in A in the following way: for each relational symbol R, and every a_1, …, a_k ∈ A^, where k=(R), we have A R(a_1,…,a_k) iff A_0 R((a_1)_*, …, (a_k)_*). As ϕ does not use equality, it follows that Aϕ. Moreover, Eloisa can win (ϕ, A) using a proper winning strategy, that is, when extending the assignment to a new variable, she can always choose a fresh unnamed element (i.e. an element not assigned to any variable in earlier rounds). 
Indeed, in each Round t+1 belonging to Eloisa (i.e. _t+1=∃), when choosing her move after a position f_t_t → A^ is reached in (ϕ,A), she can consider the corresponding position f'_t_t → A_0^ in (ϕ, A_0), where f'_t(v) is defined to be (f_t(v))_* for every v ∈_t. Then she looks at her winning extension f'_t+1 of f'_t, and extends f_t to f_t+1 by taking as its value on y_t+1 a pair (f'_t+1(y_t+1), i), with some i ∈ such that (a_0,i)∉f_t[_t] for any a_0∈ A_0^. Let be the set of outer-types of grade at most + and realised in A. It is readily verified that is consistent and closed. We demonstrate now that Eloisa has a winning strategy in (ϕ, ). To find such a strategy, we fix her winning proper strategy in (ϕ, A), and naturally “simulate” plays of (ϕ, ) as plays of (ϕ, A) consistent with . By simulation we mean here that in parallel to the play of (ϕ,), we will be constructing a play of (ϕ,A), mimicking players choices in (ϕ,). Let us consider Abelard's opening move (L_0, f^S_0), where L_0∈ is a k-outer-type for some k∈, and f^S_0_0 [k]^ is an assignment. We select a k-tuple a̅=⟨ a_1, …, a_k ⟩ of distinct unnamed elements of A realising L_0, i.e. Aa_1, …, a_k is L_0. We define the assignment f_0^V_0 → A^ mimicking f^S_0: if f^S_0(v)=c for some constant c, then f^V_0(v)=c; if f^S_0(v)=i for some i∈[k], then f^V_0(v)=a_i. This way we obtained a position f^V_0 in (ϕ,A), simulating Abelard's opening move. Suppose that after Round t a position (L_t,f_t^S) is reached in (ϕ,), with L_t=[k]. Simulating (ϕ,A) in Rounds 0,…,t as we are describing, we obtain also a position f_t^V_t→ A^. If Round t+1 belongs to Abelard (i.e. _t+1=∀) and he decides to add a new element k+1 as f^S_t+1(y_t+1), we simulate his move in (ϕ,A) by taking as f^V_t+1(y_t+1) a fresh element of the same 1-type in A. Naturally, if he chooses to reuse an element from f^S_t[_t]∪, we reuse the corresponding element from f^V_t[_t]∪ as well. In the opposite case, when Eloisa is to make a move (i.e. _t+1=∃), her strategy in (ϕ,) works as follows. Let f_t+1^V be (f_t^V), i.e. the position suggested by in (ϕ,A), and let a be f^V_t+1(y_t+1). In (ϕ,), Eloisa extends the structure L_t by setting the type of the fresh element k+1 to the same 1-type as a, ^A(a); then she sets the required hull-types of the tuples containing k+1 by copying them from the corresponding tuples of A containing a. If (L_M, f^S_M) is the finally reached position in (ϕ, ), and f^V_M is the position reached in the simulation of (ϕ, A), then, for every atom γ() of ψ, we have that L_M, f^S_M γ() iff A, f^V_M γ(). Hence, the described strategy is indeed winning for Eloisa. Now, we prove Item <ref> of Claim <ref>: let ρ_t'=(L'_t, f'_t) be a position of order t that can be reached when following Eloisa's strategy ', and let ρ_t=(L_t, f_t) be Γ(ρ_t'). Then, for every atom γ() of ψ such that ⊆_t, we have that L'_t,f_tγ() iff L_t,f_tγ(). We proceed by induction on t. If t=0, then the domain of f_t (=f'_t, by Item <ref>) is _0, so the atoms γ(v̅) we are interested in either contain all the x_i's or at most one of them. If |L_t| ≤ 1 then L_t=L_t' by our definition of Γ, hence both positions are the same, and the claim trivially holds. Suppose now that |L_t|=k>1. Let γ() be any atom of the first kind (it contains all the x_i's), the values of γ(f_t()) in both structures coincide since L_t1, …, k=L'_t1, …, k by Item <ref> (recall that f_t is onto L_t). Let now γ() be any atom of the second kind (it contains at most one of the x_i's). 
If =∅, then the equivalence trivially follows from L'_t∅=L_t∅ (Item <ref>). Assume now that ={x_i}, and let us consider two subcases. If f_t(x_i) ∈, then again the equivalence follows from <ref>; otherwise the values of γ(f_t(v̅)) in both structures are determined by the 1-types of f_t(x_i) in these structures. By Item <ref>, these 1-types are in relation ∼_f^* for some f^*, and the first condition in the definition of ∼_f^* gives us that the values of γ(f_t(v̅)) in both structures are identical.

Assume now that the claim holds for positions of order t, and consider a position ρ_t+1'=(L'_t+1,f'_t+1) of order t+1 reachable in (φ,') by following ', and let ρ_t+1=(L_t+1,f_t+1) be its image Γ(ρ'_t+1). By the inductive assumption, ρ_t+1 and ρ'_t+1 agree on the atoms containing at most the variables from _t. We now need to consider an atom γ(v̅) such that y_t+1∈⊆_t+1. If Round t+1 is universal then, since y_t+1 is universally quantified, but not special, γ may contain at most one variable, and we reason as in the base case of the induction, using Item <ref>. If Round t+1 is existential, assume that a is the fresh element added in this round. There are two subcases. If a is the only unnamed element in f_t(v̅) then the truth-values of γ(f_t(v̅)) agree in both positions by <ref>, using Condition <ref> from the definition of ∼_f. If f_t(v̅) contains some other unnamed element then the truth-values of γ(f_t(v̅)) agree in both positions by the fact that, by our construction (Item <ref>), the hulls of the outer-types containing a are identical in both structures.

§ MISSING DETAILS FROM SECTION <REF>

In this appendix, we first give the proofs of Claims <ref> and <ref>, proving the desired properties of the formula φ_n defined in Section <ref>. Then, we explain the modifications that can be made so that this formula does not mention any constant symbols (as in the statement of Proposition <ref>).

§.§ Proofs of Claims <ref> and <ref>

* The fact that φ_n is in comes from its very definition: it belongs to the fragment ∀^∗∃, and the reader can check that the set of variables of any of its atoms is exactly the set x̅∪{y}∪r̅, unless it contains the existentially quantified variable w. The formula is of size (n), as all of the formulas μ_n,perm, μ_n,witness, etc. are. Finally, φ_n is satisfiable.
Indeed, the prototypical model of φ_n is defined as follows: * each constant c_i is interpreted as itself, the same for q_0 and q_1; * the set A of unnamed elements is the set _n of permutations of [n]; * P^ is the set of tuples of the shape ⟨ c_π(1),…, c_π(n)| q_0| q_0, …, q_0⟩, with π ranging over _n; * W^ is the set of tuples of the shape ⟨ c_π(1),…, c_π(n)|π| q_0,…, q_0⟩, with π ranging over _n; * C_→^ is the set of tuples of the shape ⟨ c_π'(1),…, c_π'(n)|π| q_1, …, q_1, q_0,…,q_0 ⟩, where π ranges over _n, π' ranges over the permutations obtained as γ^k∘π, 0<k<n, and the number of occurrences of q_1 in the tuple is k; * S^ is the set of tuples of the shape ⟨ c_π'(1),…, c_π'(n)|π| q_1, …, q_1, q_0,…,q_0 ⟩, where π ranges over _n, π' ranges over the permutations obtained as ρ∘γ^k∘π, ρ∈_n assigning n to itself, 0<k<n, and the number of occurrences of q_1 in the tuple is k; * C_←^ is the set of tuples of the shape ⟨ c_π'(1),…, c_π'(n)|π| q_1, …, q_1, q_0,…,q_0 ⟩, where π ranges over _n, π' ranges over the permutations obtained as γ^-j∘ρ∘γ^k∘π, ρ∈_n assigning n to itself, 0≤ j<k<n, and the number of occurrences of q_1 in the tuple is k-j; * Z^ is the set of tuples of the shape ⟨ c_π'(1),…, c_π'(n)|π| q_1, …, q_1, q_0,…,q_0 ⟩, where π and π' are defined with the same conditions, and the number of occurrences of q_1 in the tuple is at most k-j. * Let us consider any model of φ_n. The cardinality of being at least n!=2^Ω(nlog n) comes from the construction: * for any permutation π of [n], we have ⟨ c_π(1), …, c_π(n)| q_0| q_0, …, q_0⟩∈ P^; * therefore, there exists some element w_π∈ A^ such that ⟨ c_π(1), …, c_π(n)| w_π| q_0, …, q_0⟩∈ W^; * then, for any π' distinct from π, we get ⟨ c_π'(1),…, c_π'(n)| w_π| q_1, …, q_1, q_0,…, q_0 ⟩∈ C_←^, for some number of occurrences of the constant q_1; * finally, it means that the tuple ⟨ c_π'(1),…, c_π'(n)| w_π| q_0,…, q_0⟩ is in Z^, and therefore ⟨ c_π'(1), …, c_π'(n)| w_π| q_0, …, q_0⟩∉ W^. Therefore, we can deduce that for every distinct permutations π and π' the elements w_π and w_π' are also distinct. By the way, we can also justify that the interpretations of q_0^ and q_1^ also have to be distinct. Indeed, if it is not the case, then the unary counter in the arguments is always the same tuple q_0^,…,q_0^. In particular, we get that ⟨ c_π(1),…, c_π(n)| w_π| q_1, …, q_1, q_0,…, q_0 ⟩∈ C_←^, and hence ⟨ c_π(1), …, c_π(n)| w_π| q_0, …, q_0⟩∉ W^, which goes in contradiction with the second point. A similar argument shows that the interpretations of c_i^'s are necessarily pairwise distinct too. Suppose that two indices j<k are such that c_j and c_k have the same interpretation in . As written above, the atom W(c_1^, …, c_n^| w_id| q_0, …, q_0) holds in , but for any π distinct from the identity, the atom W(c_π(1)^, …, c_π(n)^| w_id| q_0, …, q_0) does not hold. We immediately see the contradiction when π is the permutation switching j and k, since in that case, the tuple of the c_π(i)^'s matches exactly with the tuple of the c_i^'s. Therefore, all the c_i^'s must be distinct. §.§ Tight examples without constant symbols In the rest of this appendix, we show how to modify the formula φ_n so that it does not mention constant symbols. Let us recall briefly how the original formula φ_n was working: we simulated the set _n of permutations of [n] via the distinct constants c_1,…, c_n, and called a distinct witness for each permutation. The formula φ_n was obtained as ∀ x_1,…, x_n,y, r_1, …, r_n-2. ∃ w. 
ψ_n, and in ψ_n were the following relational symbols, all of arity n+1+n: – P, that simulates permutations of the set [n]; – W, that provides a witness for every permutation of [n]; – C_→, S, and C_←, that generate distinct permutations from the one considered, with the help of a counter; – Z, that finally sets said counter back to zero and forbids any other permutation for the same witness. Finally, the counter was written in unary, with the use of the constants q_0 and q_1. The main difference, in comparison to the original formula, is that, we reintroduce q_0 and q_1 as new variables, quantified universally: the sequence of quantifiers of our new formula is ∀ x_1,…, x_n,y, q_0, q_1, r_1, …, r_n-2. ∃ w, and all the variables, except w will be special. We invite the reader to verify that every atom in the different formulas introduced below indeed have variables satisfying the conditions of Maslov's class . Alongside turning q_0 and q_1 into variables, we introduce a new relational symbol U, of arity 1, as it will distinguish the elements simulating constants q_0 and q_1: the role of q_0 will be fulfilled by any element not satisfying U, and the role of q_1 by any element satisfying U. On the other hand, the constants c_i's will not be represented explicitly in the variables. Instead, to obtain elements fulfilling the roles of c_i's, we make use of yet another new relational symbol F, of arity 2n+3. For readability, its arguments will be sequenced in four blocks of respective lengths n, 1, 2, and n. What we aim here is the satisfaction of an atom F(x_1,…, x_n| q_0| q_0, q_1| q_0,…, q_0), where the x_i's are distinct elements, and q_0 (resp. q_1) does not (resp. does) satisfy U. Such an atom could then be used as a starting point for defining the different permutations (recall that in the original formula, the first atom introduced was P(c_1,…, c_n| q_0| q_0,…, q_0)). This is done in four steps. First, we ensure the existence of an element, call it q_0^, that does not satisfy U, via the following formula μ_n,neg: U(q_0) → U(w). Then, we introduce the relation F via the formula μ_n,unif.: [ U(q_0)∧U(q_1)] → [F(q_0, …, q_0|q_0|q_0, w|q_0, …, q_0)∧U(w)]. The formulas μ_n,neg and μ_n,unif. together ensure the existence of two elements q_0^ and q_1^, the former not satisfying U, the latter satisfying it, such that the atom F(q_0^, …, q_0^| q_0^| q_0^, q_1^| q_0^, …, q_0^) is present. It remains to replace the first arguments by new elements that would satisfy U. This is done in a third step, where we shift to the right the first n arguments, by introducing a new element that does satisfy U. We do it via the formula μ_n, shift: [ F(x_1,…,x_n|y|q_0, q_1|r_1, …, r_n)∧U(x_n)] → [F(w, x_1,…, x_n-1|y|q_0, q_1|r_1, …, r_n)∧U(w)]. After we apply n times the formula μ_n, shift, we obtain the satisfied atom F(c_1^ ,…, c_n^| q_0^| q_0^ , q_1^| q_0^ ,…, q_0^), where each c_i^ satisfies U. This is exactly the tuple we wanted to obtain. We can now transfer it to the relation P, which is now of arity 2n+3, via the formula μ_n,first perm: [ F(x_1,…,x_n|y|q_0, q_1|r_1, …, r_n)∧U(x_n)]→ P(x_1,…, x_n|y|q_0, q_1|r_1,…, r_n). All of these formulas ensure together that the atom P(c_1^,…, c_n^| q_0^| q_0^ , q_1^| q_0^ ,…, q_0^) holds. This was the base of our original formula φ_n. Then, the rest of the formulas are very similar to the original ones: we keep the same conditions written with the same relations P, W, C_→, S, C_←, and Z. 
The only difference is the arity of these symbols, changed from 2n+1 to 2n+3, in order to keep track of the variables q_0 and q_1. Let us illustrate this with, for instance, the implication W(x_1,…,x_n|y|r_1, …, r_n-2, q_0, q_0) → C_→(x_2,…, x_n, x_1|y|q_1, r_1, …, r_n-2, q_0) from the original formula μ_n,cyclic. It now becomes, barely changed, W(x_1,…,x_n|y|q_0, q_1|r_1, …, r_n-2, q_0, q_0)→ C_→(x_2,…, x_n, x_1|y|q_0, q_1|q_1, r_1, …, r_n-2, q_0). Another example, from the original formula μ_n,cyclic^-1: C_←(x_1,…,x_n|y|q_1, q_1, r_1, …, r_n-2) → C_←(x_n, x_1,…, x_n-1|y|q_1, r_1, …, r_n-2, q_0) is turned into C_←(x_1,…,x_n|y |q_0, q_1 |q_1, q_1, r_1, …, r_n-2) → C_←(x_n, x_1,…, x_n-1 |y |q_0, q_1|q_1, r_1, …, r_n-2, q_0). Basically, every implication in φ_n is changed in a similar way.

§ PROOF OF PROPOSITION <REF>

This appendix is dedicated to the proof of Proposition <ref>. However, before proceeding to the proof, let us remark that the small model construction from Section <ref> indeed requires an augmentation: by the lower bound stated in Lemma <ref>, it cannot produce models smaller than 2^Ω(|ϕ| ·log |ϕ|), even in the parametrised setting (notice that here we do not restrict the use of existential quantifiers, hence the set of arc colours can have size linear in |ϕ|). In the remainder of this appendix, we first give some intuition and the full proof of the simpler case of the k class, and then we discuss the changes needed to obtain the proof for the class k. We proceed in this order because the formal proof of the latter is technically more involved due to alternating quantifiers. In the following proposition, k stands for the formulas that are both in k and in .

* Let k be a natural number. If a sentence ϕ in k is satisfiable, then it has a finite model of size 2^(|ϕ|).

In the following, we fix a sentence ϕ in k, which we assume to be satisfiable, and we show that it has a model of size 2^(|ϕ|). Additionally, we assume that this sentence does not belong to any class k' with k'<k, i.e. it has precisely k universal quantifiers. Denote by the grade of ϕ. Then ϕ has the shape ∀. ∀. ∃. ψ, where are the special variables (there are at most k of them), are the rest of the universally quantified variables (L = k- of them), are the existentially quantified variables (say M of them, and assume that M>0), and the formula ψ is quantifier-free.

We shall briefly recall the main ideas of the construction of a model A of size 2^(|φ|·log|φ|) for φ. By Lemma <ref>, we know that Eloisa admits a winning strategy in the game (φ,), where is a consistent and closed set of outer-types with exponentially many 1-types. The unnamed elements of the model A are the vertices of an (,)-paradoxical colourful tournament, where the set of arc colours is defined to be the set of variables from φ, and the set of vertex colours is defined to be the set (quotiented by some equivalence relation ∼) of reachable positions in the game (φ,) when following the strategy . These colours play a crucial role in the definition of the interpretations of the relation symbols of σ(φ) in A.
This construction of A goes in parallel with the construction of a winning strategy for Eloisa in the game (φ,A), in order to prove that A is indeed a model of φ. Yet, in this case where φ is in k, we can remark that (φ,A) has a special shape, as it consists of two clearly distinct steps: first, Abelard chooses k elements for the universally quantified variables and , and then Eloisa chooses M elements for the existentially quantified variables . This dynamics of the games are therefore very much changed, as once Eloisa is on turn, she does not have to worry about Abelard's choices anymore. This justifies the notion of witness chains, that basically stands for the choice of the M elements at the same time. Consider =(V,E) an (,)-colourful tournament (not necessarily (,)-paradoxical), and let ρ̅ be a play in (ϕ,), that is a sequence of consecutive positions ρ_0,ρ_1,…,ρ_L+M that can be obtained by following Eloisa's winning strategy : the positions ρ_0,…, ρ_L are chosen by Abelard, while the positions ρ_L+1,…, ρ_L+M are chosen by Eloisa. We say that the M-tuple b̅=b_1,…,b_M of unnamed elements in V forms a witness chain for ρ̅ if: * for each i∈ [M], we have that ρ_L+i∼μ(b_i); * for all i<j, we have an arc b_i ← b_j, and moreover its colour λ(b_i ← b_j) is the variable z_i. Moreover, let us consider f∪→ V^ an assignment. Let a̅=a_1,…,a_ℓ be an enumeration of f(∪)∖, for some ℓ≤ k, and let v̅=v_1,…,v_ℓ be a tuple of variables such that f(v_i)=a_i for each i∈[ℓ] (the choice for v̅ might not be unique). Then we say that b̅ dominates the assignment f if, for every t∈[M], b_t colourfully dominates a̅ via (a vertex colour ∼-equivalent to) ρ_L+t and v̅. By the definition of being a witness chain for ρ̅, b_t will also colourfully dominate the (ℓ+t-1)-tuple ⟨ a_1,…,a_ℓ, b_1, …, b_t-1⟩ via (a vertex colour equivalent to) ρ_L+t and ⟨ v_1,…,v_ℓ, z_1, …, z_t-1⟩. Hence, if a structure A is defined on V in a manner similar to what we did in Section <ref>, and if Abelard decided to choose the assignment f∪→ V^ in the game (ϕ,A), then by selecting the tuple b̅ for , Eloisa would win. This is why our goal is now to construct an (,)-colourful tournament of size 2^(|ϕ|) such that for every possible play ρ̅ obtainable when following , and every assignment f∪→ V^, there exists a witness chain b̅ for ρ̅ that dominates f. Indeed, a structure constructed from such a tournament would be a model of ϕ, as Eloisa would have a winning strategy for the game (ϕ,A). This is what we do now. Our construction of a model of size 2^(|φ|) for φ is outlined in three steps as follows: * we show how to construct an (,)-colourful tournament of size 2^(|φ|) with witness chains; * we turn this tournament into a σ(φ)-structure A exactly as we did in Section <ref>; * finally, we show that this obtained structure is indeed a model of φ. Step (<ref>). We start our construction by obtaining a colourful paradoxical tournament. The colour sets we choose here for parameters are modified in comparison to the ones in Section <ref> and recalled above: ' is now the set of variables ∪, of size k, and ' is the set × [k+1] × [M]. By Lemma <ref>, we know that there exists an (',')-paradoxical colourful tournament ('=(V',E'),μ',λ') of size 2^(k·log k)× |'|·log|'|. We have |'|=(|φ|·||), and recall from Section <ref> that || = 2^(|ϕ|·log|ϕ|). However, in our case, the size analysis of ^∃_ given in Section <ref>, from which the bound on || follows, can be refined to M·(k+||)^k = (|ϕ|^k+1). 
Hence, since the number k is a constant in our problem, the tournament ' is of size 2^(|φ|) as desired. The vertex set V' of ' can be arranged as a matrix with k+1 rows and columns, in accordance to the vertex colours, i.e. for i∈[k+1] and j∈[], we define V'_i,j to be the subset of vertices { a ∈ V': μ(a)=(r,i,j) for some r∈}. The point of this matrix is to be able to create k+1 disjoint groups of witness chains: one for each row i, the j'th element of any chain being in column j. Now, we transform the (',')-paradoxical colourful tournament (',μ',λ') into a new (,)-colourful tournament (=(V,E),μ,λ) with witness chains as follows. The vertex set V=V' of is the same as that of ', divided with the same cells V_i,j=V'_i,j. The labelling μ is induced from the first coordinate of μ': for every a∈ V, μ(a)=r if μ'(a) = (r,i,j) for some i,j. Now, let a∈ V_i,j and b∈ V_i',j' be two distinct vertices. We put an arc a →_E b if one of the two conditions hold: either a and b are on the same row, i.e. i = i', and j > j', in which case the label λ(a_E b) is defined as the variable z_j'; or a and b are on different rows, i.e. i ≠ i', and there is an arc a _E' b in ', in which case λ (a _E b) is λ(a_E'b), meaning that the arc and its label is defined accordingly to '. Refer to Figure <ref> for a depiction of the global shape of the so-defined tournament , in the case where k=3 and M=5 (i.e. the prefix of φ is for instance “∀ x_1, x_2. ∀ y. ∃ z_1, z_2, z_3, z_4, z_5”). Naturally, not all the arcs are depicted. Step (<ref>). The so-defined (,)-colourful tournament (,μ,λ) is not an (,)-paradoxical one (to see this, e.g. take any tuple a̅ from the last column, it cannot be colourfully dominated via any vertex colour r ∈ and any tuple of the variables in , as there are no arcs with colours from entering a̅). Nevertheless, we still apply the same steps of the small model construction as described in Section <ref>. We briefly recall the three steps, without going in all the details: We take the vertex set V of as the unnamed domain. First, we assign the 1-types based solely on vertex colours. Next, we define the hull-types on self-dominating subsets S, accordingly to the 1-types of elements in S ∖{a} and the vertex colour of a, where a dominates S∖{a}. Finally, we complete the structure by defining remaining hull-types on subsets of size between 2 and k+M, we do it by selecting an outer-type from in concordance with the 1-types of the different elements in the subset. Step (<ref>). Now, we prove that our so-obtained structure A is indeed a model of φ, which will conclude the proof of Proposition <ref>. In the proof below, we are not going to directly construct a simulation between the games (ϕ,A) and (ϕ,). Instead, we will refer to the property of the construction from Section <ref>, which states that it is enough for Eloisa to choose elements colourfully dominating the previously picked ones. Eloisa has a winning strategy in (ϕ,A). Suppose that Abelard has constructed an assignment f∪→ A^ in Rounds 0,…,L. Let a̅=a_1,…,a_ℓ be an enumeration of elements in f(∪) ∖, for some ℓ≤ k, and let =v_1,…,v_ℓ be a tuple of variables such that f(v_i) = a_i for each i ∈ [ℓ] (the choice of v_i might not be unique). Now, we define Eloisa's response. Let ρ̅ be the corresponding partial play in the game (ϕ,) simulated in parallel. As noticed before, due to special dynamics of the considered game, it can be uniquely extended to the full play in which Eloisa is a winner. 
Since the unnamed domain of A is partitioned into k+1 rows, there exists i ∈ [k+1] such that the i'th row is disjoint from a̅, i.e. a̅∩ V_i,j = ∅ for every j∈[M]. In Round L+j, with j∈[M], let r_j ∈ be a vertex colour such that r_j ∼ρ_L+j. Then Eloisa selects for the variable z_j an element b_j ∈ V_i,j that colourfully dominates a̅ via r_j and . We argue that such a choice of b_j is possible: consider the tuple a̅ as the vertices of the (',')-paradoxical colourful tournament (',μ',λ'). Thus, we can select a vertex b_j dominating a̅ via (r_j,i,j) and v̅. This element b_j lies in V_i,j, and therefore, in , the arcs between b_j and the tuple a̅ are exactly as in ', and their labellings as well. As the consequence, in , b_j colourfully dominates a̅ via r_j and v̅, as desired. Moreover, all the witnesses selected by Eloisa come from the same i'th row and from the consecutive columns, i.e. each witness b_j lies in the j'th column. Thus, by the construction of (,λ,μ), the tuple b_1,…,b_ forms a witness chain dominating f. Figure <ref> depicts Eloisa's response in the case where φ is of the shape ∀ x_1, x_2. ∀ y. ∃ z_1, …, z_5. ψ. Abelard's assignment f maps x_1 to a_1, x_2 to a_2, and y to a_3, in rows 1, 3, and 4 respectively. As he did not select any element from the second row, Eloisa is happy to respond there. Therefore, we conclude that the conditions of Claim <ref> hold in the same way here, as the 1-types and hull-types were defined accordingly as in Section <ref>, so Eloisa wins the game (φ,A). We remark that this construction stays deterministic if, for ', we employ the explicit construction of paradoxical colourful tournaments from Appendix <ref>. Now, we explain how the idea of witness chains can be generalised in order to obtain the same bound for the larger class ^∀=k. * For the rest of this appendix, we fix a satisfiable sentence ϕ in with k universal quantifiers of the shape as in (<ref>), that is ∀ x_1 …∀ x_. _1y_1…_ y_. ψ. We denote by L the number of existentially quantified variables in ϕ, i.e. +-k, and let y_j_1,…,y_j_L be the existentially quantified variables (in this order). The strategy to prove Proposition <ref> is in essence really similar. The main difference is that, instead of a grid [k+1]×[M] as the range of second and third coordinates for ', we choose the set of nodes of the (k+1)-branching tree with L+1 levels (the root has level 0 and plays only an auxiliary role). Such a tree has size k^(L), which is still 2^(|ϕ|) in our case. Formally, ' is now ×(∖{root}). On the contrary, the set ' does not need any adjustments, that is, it still consists of universally quantified variables, and hence has a fixed size k. We obtain an (',')-paradoxical colourful tournament ('=(V',E'),μ',λ') of size 2^(|ϕ|). As in the case of k, we transform ' into an (,)-colourful tournament (=(V,E),μ,λ). Our goal is to obtain witness chains along the paths going from the children of the root (level 1) to the leaves (level L). We do this by modifying the arcs connecting pairs of vertices whose second coordinates correspond to the nodes being in the ancestor-descendant relation in the tree. Below we give the formal details of this augmentation. The vertex set V=V' of is the same as that of '. We partition the vertex set V into groups V_u, for each u∈∖{root}, corresponding to the same node of the tree , i.e. V_u is the set { a ∈ V : a = (r,u) for some r ∈}. The labelling μ is induced from the first coordinate of μ': for every a∈ V, μ(a)=r if μ'(a) = (r,v) for some v∈. 
Now, let a ∈ V_u and b ∈ V_u' be two distinct vertices. We put an arc a →_E b if one of the two conditions holds: either u is a (not necessarily immediate) strict descendant of u' in , in which case the label λ(a →_E b) is defined as the variable y_j_ℓ, where ℓ is the level of u'; or u and u' do not lie on the same branch of , and there is an arc a →_E' b in ', in which case λ(a →_E b) is λ(a →_E' b), meaning that the arc and its label are defined according to '. We obtain a final structure A by again applying the model construction described in Section <ref> to the tournament . The reader can check that, by the definition above, the paths going down the tree indeed create the desired witness chains. We finish by sketching Eloisa's winning strategy in (ϕ,A). The high-level idea of her strategy is that she dominates Abelard's previous choices in a branch where he did not select any elements. Since he chooses at most k elements during the entire play, there is necessarily a branch he does not visit; hence she can continue with a witness chain existing there, even if Abelard interrupts her due to the alternating quantifiers. Naturally, in her move corresponding to the variable y_j_i, she selects an element from the i'th level of the tree. This way, she can maintain the invariants stated in Claim <ref>, and hence she wins the game.

§ EXPANDING THE UNIVERSAL-UNIFORM FRAGMENT TO DK

In this appendix, we provide the expansion from to , or, actually, to conjunctions of -sentences, mentioned in Section <ref>:

* Let ϕ be a sentence in . For every subformula μ of ϕ that is of the shape ∀ x,. ν(x,) (resp. ∀. ν(x,)), we introduce a fresh relational symbol P_μ, of arity 0 (resp. 1) (i.e. the arity is the number of free variables of μ). The signature σ(ψ) will consist of σ(ϕ) expanded with all these P_μ's. Now, we consider a transformation [μ] for every subformula μ of ϕ, defined inductively:
* the transformation of every literal is itself;
* the transformation of every subformula of the shape μ⊕ν, with ⊕∈{∧,∨} and possible free variables, is [μ]⊕[ν];
* the transformation of every subformula of the shape ∃. ν(,) is ∃. [ν(,)];
* the transformation of every subformula μ of the shape ∀ x,. ν(x,), with no free variables, is the atom P_μ;
* the transformation of every subformula μ(x) of the shape ∀. ν(x,), with one free variable x, is the atom P_μ(x).
It is immediate that, for any subformula μ of ϕ, the obtained [μ] is in the fragment ∃^∗ (when converted to prenex form). Then, for every subformula μ of ϕ starting with a universal quantifier, we define a formula [μ] axiomatising the relational symbol P_μ, as follows:
* if μ is ∀ x,. ν(x,), with no free variable, then [μ] is the formula ∀ x,. P_μ→[ν(x,)];
* if μ is ∀. ν(x,), with one free variable x, then [μ] is the formula ∀ x,. P_μ(x)→[ν(x,)].
It is easy to check that [μ] is in . Indeed, in both shapes above, if we consider the set ⊆{x}∪ from the definition of , then by the same definition, every literal of ν(x,) not bound by quantifiers has exactly as its set of variables. Since does not change literals, the same property holds for [ν(x,)], and can therefore be taken as the set of special variables. Moreover, since each [ν(x,)] is in ∃^∗ (in prenex form), [μ] is indeed in (in prenex form). Finally, we define the desired formula ψ as the conjunction of [ϕ] and of the [μ]'s, with μ ranging over the subformulas of ϕ starting with a universal quantifier.
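Since the transformation [·] and the accompanying axioms are defined by a purely syntactic induction, they are easy to mechanise. The following Python sketch illustrates the idea; the AST classes, the fresh-predicate naming scheme and the handling of at most one free variable are our own illustrative assumptions, not notation from the paper.

```python
from dataclasses import dataclass
from typing import List

# A deliberately tiny formula AST (illustrative only).
@dataclass
class Lit:                 # an atom or negated atom, kept verbatim by the transformation
    text: str
@dataclass
class And:
    left: object
    right: object
@dataclass
class Or:
    left: object
    right: object
@dataclass
class Exists:
    variables: List[str]
    body: object
@dataclass
class Forall:
    variables: List[str]
    body: object
@dataclass
class Implies:
    left: object
    right: object
@dataclass
class Pred:                # fresh symbol P_mu applied to the free variables of mu (0 or 1 of them)
    name: str
    args: List[str]

def abstract(phi, free_vars=(), axioms=None, counter=None):
    """Replace every universally quantified subformula mu by a fresh atom P_mu(free vars)
    and collect the axioms  forall(free + bound vars). P_mu(free vars) -> [body of mu].
    In the fragment considered here, each such subformula has at most one free variable,
    which is why free_vars is simply threaded through the recursion."""
    axioms = [] if axioms is None else axioms
    counter = [0] if counter is None else counter
    if isinstance(phi, Lit):
        return phi, axioms
    if isinstance(phi, (And, Or)):
        left, _ = abstract(phi.left, free_vars, axioms, counter)
        right, _ = abstract(phi.right, free_vars, axioms, counter)
        return type(phi)(left, right), axioms
    if isinstance(phi, Exists):
        body, _ = abstract(phi.body, free_vars, axioms, counter)
        return Exists(phi.variables, body), axioms
    if isinstance(phi, Forall):
        counter[0] += 1
        fresh = Pred(f"P_{counter[0]}", list(free_vars))   # arity 0 or 1
        body, _ = abstract(phi.body, free_vars, axioms, counter)
        axioms.append(Forall(list(free_vars) + phi.variables, Implies(fresh, body)))
        return fresh, axioms
    raise ValueError("unexpected node")
```

Under these assumptions, ψ would be assembled as the conjunction of the transformed sentence [ϕ] with all formulas collected in `axioms`; each collected axiom keeps the universal–existential shape required by the target fragment.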
It is readily verified that any model A of ϕ can be expanded to a model of ψ: the interpretation of any P_μ is the set {a∈ A | A ⊨ ∀. ν(a,)} when μ is as above and has one free variable, and true/false when it has no free variables (depending on whether A ⊨ ∀ x,. ν(x,) or not). Conversely, a proof that any model of ψ is a model of ϕ as well goes by a simple induction over ϕ.
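For a finite structure, this intended interpretation of a unary P_μ can be computed by brute force. The helper below is only a sketch under the assumption that the structure is given as a finite domain together with a black-box satisfaction test for ν; the function name and parameters are ours.

```python
from itertools import product

def interpret_P_mu(domain, nu_holds, arity_of_y):
    """Interpretation of a unary P_mu in the expansion of a finite model A:
    the set of all a with A |= forall y_1,...,y_m. nu(a, y_1, ..., y_m),
    where m = arity_of_y and nu_holds(a, ys) is an assumed satisfaction test for nu."""
    return {a for a in domain
            if all(nu_holds(a, ys) for ys in product(domain, repeat=arity_of_y))}

# Toy usage on a hypothetical two-element structure:
# interpret_P_mu({0, 1}, lambda a, ys: a <= min(ys), 1)  ->  {0}
```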
http://arxiv.org/abs/2407.12562v1
20240717134541
A primary quantum current standard based on the Josephson and the quantum Hall effects
[ "Sophie Djordjevic", "Ralf Behr", "W. Poirier" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.other", "physics.ins-det", "quant-ph" ]
A primary quantum current standard based on the Josephson and the quantum Hall effects

Sophie Djordjevic^1, Ralf Behr^2, and Wilfrid Poirier^1 (wilfrid.poirier@lne.fr)
^1 Laboratoire national de métrologie et d'essais, 29 avenue Roger Hennequin, 78197 Trappes, France
^2 Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany
July 22, 2024

§ ABSTRACT
The new definition of the ampere calls for a quantum current standard able to deliver a flow of elementary charges per second controlled with a relative uncertainty of one part in 10^8. Despite many efforts, nanodevices handling electrons one by one have never been able to demonstrate such accuracy. The alternative route, based on applying Ohm's law to the Josephson voltage and quantum Hall standards, recently reached the target uncertainty, but this was at the expense of the application of error corrections, hampering simplicity and further improvement. As a result, national metrology institutes still lack an operational quantum current standard. Here, we present a new quantum current generator, combining both quantum standards and a superconducting cryogenic amplifier, free of error correction, which provides quantized currents driven by the Josephson microwave signal. We show that it can realize the ampere definition with the target uncertainty over a range extended from mA down to μA and improve end-user current measurements, which are up to now a hundred times less accurate. Other prospects include measuring large resistances using the new current standard in conjunction with a quantum voltmeter and, by exploiting its low-noise performance, bridging the gap with the lower currents of other quantum current sources.

Since the last revision of the International System of Units (SI) on 20 May 2019, founded on seven fixed constants of nature <cit.>, any source generating an electric current which can be expressed in terms of ef, with e the elementary charge and f a frequency in Hz (s^-1), provides a realisation of the ampere. Single-electron current sources (SECS) <cit.>, which are mesoscopic devices <cit.> able to handle electrons one by one at a rate f_e, are often presented as the most obvious way to realize the definition. However, achieving currents above 100 pA using GaAs and Si-based tunable-barrier SECS accurate to within a relative uncertainty better than 10^-7 remains a very challenging goal because of increasing error rates at high frequencies (∼ 1 GHz) <cit.>. Very recently, as a consequence of the phase-charge quantum mechanical duality in Josephson junctions (JJ), dual Shapiro steps have been evidenced in superconducting nanowires and small JJ placed in high impedance environments under microwave radiation <cit.>. Here, the enhanced phase variance allows photon-assisted tunneling of fluxons ϕ_0=h/2e (h is the Planck constant) and a synchronized transfer of Cooper pairs. Sharp current steps appearing at integer multiples of 2ef_e in the DC current-voltage characteristics could be promising candidates as quantum sources in the nA range, although their flatness is still under debate <cit.>. More generally, for all mesoscopic current sources, the control of charge fluctuations, which depend on the coupling of the device to the electromagnetic environment, remains a crucial issue.
Concurrently, another route to the SI realization consists in applying Ohm's law to the Josephson voltage and quantum Hall resistance standards, since the Josephson effect <cit.> and the quantum Hall effect <cit.> now provide direct and universal realizations of the volt and the ohm from h/2e and h/e^2 constants, respectively <cit.>, with a 10^-9 measurement uncertainty. The high accuracy of the Josephson voltage standards, which are series arrays of JJ, relies on the phase rigidity of macroscopic superconductors. Under application of dc current bias and a microwave radiation f_J, the transfer through each JJ of one fluxon per period of the microwave tone is ensured and results in a quantized voltage V=n_Jϕ_0 f_J <cit.>, where n_J is the number of JJ. If this quantized voltage can be accurately applied to a quantum Hall resistance standard (QHRS) in the ν=2 Landau level filling factor, taking advantage of the charge rigidity of the quantum Hall edge states, n_Je charges are transferred at a rate f_J through the QHRS of h/2e^2 resistance. Hence, a current n_Jef_J, easily reaching microamperes, can be generated. Recently, a calculable current of 1 μA generated from the series connection of both quantum standards has been measured with a relative uncertainty of 1.3×10^-7 <cit.>, but the accuracy was reached owing to the very low-resistance of the ammeter (∼0.1 mΩ). The main issue is therefore to implement the accurate series connection of the two quantum standards while realizing a true current source. The programmable quantum current generator (PQCG) <cit.> has addressed this issue by locking an electron flow to the current circulating in the loop formed by the quantum standards, with the help of a superconducting amplifier, allowing simultaneously the scaling over a wider range of current values. Its accuracy was demonstrated in the mA range with a relative measurement uncertainty of 10^-8. This result was however obtained at the expense of corrections of the order of a few parts in 10^7 and of several time-consuming calibrations, impairing the final uncertainty and the full potential of the new quantum current standard. Here, we implement a next-generation PQCG operating without any classical correction and with a lower noise level. We demonstrate the realization of the ampere with relative uncertainties below 10^-8 for different current levels filling the gap between the mA range and the μA range using a full quantum instrumentation implying five quantum devices. Next-generation PQCG Fig.<ref>a shows the implementation (Methods) of two programmable Josephson voltage standards (PJVS), two QHRS and a cryogenic current comparator (CCC). The two PJVS are binary divided 1 V Nb/Nb_xSi_1-x/Nb series arrays <cit.>, both having a total of 8192 JJ and working around 70 GHz. The voltage of the two PJVS are given by ± n_1,2ϕ_0 f_1,2 with n_1,2 the number of JJ biased on the ±1 Shapiro steps. The two QHRS are both GaAs/AlGaAs heterostructures <cit.> of quantized resistance R_1=R_2=h/2e^2. The CCC is a dc current transformer<cit.>, made of several superconducting windings of different number of turns, able to compare currents with a great accuracy (below one part in 10^9) and sensitivity (80 pA·turns/Hz^1/2) owing to Ampère's theorem and Meissner effect. 
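Before describing the device, a quick numerical sanity check may help fix orders of magnitude. This snippet is only an illustration (the junction number and frequency match the experiments reported below, while the list of gains is ours) and is not part of the original paper.

```python
# Quantized current I = n_J * e * f_J and its scaling by the CCC gain G = N1/N2.
E_CHARGE = 1.602176634e-19      # elementary charge in C (exact in the revised SI)

def quantized_current(n_j, f_j, gain=1.0):
    """Current delivered by a PQCG-like source: I = G * n_J * e * f_J (in A)."""
    return gain * n_j * E_CHARGE * f_j

if __name__ == "__main__":
    i1 = quantized_current(4096, 70e9)                 # primary current, about 45.94 uA
    print(f"I1 = {i1 * 1e6:.2f} uA")
    for gain in (1 / 8, 1 / 4, 1.0, 5 / 4):            # illustrative gains reproducing the
        i = quantized_current(4096, 70e9, gain)        # current values quoted in the text
        print(f"G = {gain:5.3f} -> I = {i * 1e6:6.2f} uA")
```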
The new version of the PQCG is composed of PJVS_1 <cit.> connected to QHRS_1 with a triple connection (Extended Data fig.<ref>) ensured through three identical windings of N_1 turns of a new specially designed CCC (see Extended Data fig.<ref> and Methods). This connection technique <cit.> reduces the impact of the series resistances to an insignificant level. More precisely, one current contact and two voltage contacts of the same equipotential of QHRS_1 are connected together at each superconducting pad of PJVS_1. Because of the topological properties of Hall edge-states, namely their chirality, their h/e^2 two-wire resistance and their immunity against backscattering <cit.>, the current flowing through the third contact is only a fraction (r/R_1)^2 of the current circulating in the first one, where r is the typical resistance of the connections. The resistance seen by PJVS_1 is close to h/2e^2 within a typical small correction of order (r/R_1)^3. As a result, the total circulating current I_1 is close to ± n_1ef_1 to within 1.5 parts in 10^10 for series resistance values lower than 5 ohms (Methods). Compared to <cit.>, where the double connection required the application of a relative correction to the current of a few parts in 10^7, the operation of the PQCG is simplified since no correction is necessary here. The quantized current I_1, divided among the three connections, is measured by the three identical windings of N_1 turns. A DC SQUID is used to detect the ampere·turns unbalance in the different windings of the new CCC. It feeds back on the new battery-powered voltage-controlled current source (VCCS), which supplies a winding of N_2 turns in order to maintain the ampere·turns balance N_1I_1-N_2I_PQCG=0. As a result, the PQCG is able to output a current equal to: I_PQCG^th=±(N_1/N_2)n_1ef_1 to within a Type B relative uncertainty of 2 parts in 10^9 (Extended Data Table <ref>). In practice, the CCC gain G=N_1/N_2 can span two orders of magnitude on either side of unity, allowing the generation of currents from nA to mA.

Accuracy test principle
The accuracy of the quantized current I_PQCG is tested by feeding QHRS_2, and by measuring the voltage drop, V_2, at its Hall terminals using a quantum voltmeter (Extended Data fig.<ref>) made of PJVS_2 and an analog null detector (ND), which measures the voltage difference Δ V. From Kirchhoff's voltage law, I_PQCG is determined according to the expression: V_2=R_2I_PQCG=±ϕ_0n_2f_2-Δ V, with Δ V=0 ideally at the equilibrium frequency f_2^eq. Using a quantized resistance R_2=h/2e^2 (∼ 12.9 kΩ), about 129 times higher than in <cit.>, allows increasing the signal-to-noise ratio while eliminating an extra resistance calibration. The relative deviation of the measured current from the theoretical one, Δ I/I=(I_PQCG/I_PQCG^th)-1, is given by Δ I/I=n_2f_2^eq/(Gn_1f_1)-1. In practice, the CCC gain and the numbers of JJ are chosen so that Gn_1=n_2. Thus, the nominal relative deviation is given by: Δ I/I=f_2^eq/f_1-1. The accuracy test therefore reduces to the determination of a frequency ratio close to one. However, noise and offset drifts prevent one from finding the frequency f_2^eq setting Δ V=0 accurately. Instead, two successive voltage mean values, Δ V_f_2^+ and Δ V_f_2^-, are measured at two different frequencies f_2^+=f_2+Δ f and f_2^-=f_2-Δ f respectively, where f_2 is chosen close to f_2^eq (∼ f_1) and Δ f is set to 40 kHz or 80 kHz in our experiments (fig.<ref>b and Extended Data fig.<ref>a).
In order to mitigate the effect of offsets, drifts and 1/f noise, each voltage mean value is obtained from a measurement series consisting of periodically either switching the current on and off with I_PQCG>0 (I_+) or <0 (I_-), or completely reversing the current (I_±). The equilibrium frequency is then determined from: f_2^eq=(f_2^-Δ V_f_2^+-f_2^+Δ V_f_2^-)/(Δ V_f_2^+-Δ V_f_2^-), which involves only voltage ratios, removing the need to calibrate the gain of the nanovoltmeter. Finally, the determination of Δ I/I does not require any calibration, resulting in a reduced Type B standard uncertainty, u^B, of 2.1×10^-9 (Extended Data Table <ref>). The Type A standard uncertainty of Δ I/I, u^A=u^A(f_2^eq)/f_1, is determined from the standard deviations of the mean of the voltage series Δ V_f_2^+ and Δ V_f_2^- (Methods).

Quantized current accuracy
Measurements of Δ I/I were performed at four different current values, 5.74 μA, 11.48 μA, 45.94 μA and 57.42 μA, using a primary current I_1 of 45.94 μA obtained with n_1=4096. This large current improves the operational margins of the PQCG compared to <cit.> and increases the signal-to-noise ratio while ensuring a perfect quantization of the Hall resistance of the QHRS_1 device. Each measurement series was typically carried out over one day using the I_+, I_-, and I_± measurement protocols to reveal any systematic effect related to the current direction. Note that implementing complete current reversals I_± required the reduction of the noise in the circuit <cit.> (CCC in Methods). Measurements were performed with N_1=160 and also with 465 to test the effect of the number of ampere·turns. However, the downside of the latter configuration is the higher instability of the feedback loop encountered during the current reversals, which prevented the use of the I_± measurement protocol. The different output currents were obtained by changing N_2 from 80 to 1860. Other PQCG parameters are reported in Extended Data Table <ref>. At the lower current values of 5.74 μA (fig.<ref>a) and 11.48 μA (fig.<ref>c), discrepancies of Δ I/I are covered by uncertainties ranging from 1×10^-8 to 3×10^-8. Weighted means, Δ I/I_WM, for each measurement protocol I_+, I_- and I_±, reported in fig.<ref>b and fig.<ref>d, show that there is no significant deviation of the current from its theoretical value within measurement uncertainties of about 10^-8. Besides, the mean value of Δ I/I_WM(I_+) and Δ I/I_WM(I_-) is clearly in agreement with Δ I/I_WM(I_±) at 5.74 μA, which confirms the equivalence of averaging measurements carried out using the I_+ and I_- protocols with the measurement obtained using the I_± protocol. Combining the different results (Methods), one obtains the relative deviations, for the equivalent protocol I_±, Δ I/I=(2±4.3)×10^-9 and Δ I/I=(-7.9± 8.6)×10^-9 at current levels of 5.74 μA and 11.48 μA respectively. At the higher current value of 45.94 μA, Δ I/I measurements, reported in fig.<ref>e and fig.<ref>g, are characterized by smaller standard uncertainties, consistent with the larger voltage drop at the terminals of QHRS_2 (about 0.59 V). As expected, even smaller uncertainties are observed for N_1=465 than for N_1=160, due to an enhanced ampere·turns value N_1I_1. The lower uncertainties permit the scrutiny of small but significant deviations revealing intra-day noise at the 10^-8 level, the origin of which has not been clearly identified yet.
To account for this, the standard uncertainty of each Δ I/I measurement is increased by an additional Type A uncertainty component u^A_id-noise=10^-8, chosen so that the χ^2 criterion is fulfilled (Methods). The resulting weighted mean values, Δ I/I_WM, shown in fig.<ref>f and fig.<ref>h, do not reveal any significant deviation from zero with regard to the measurement uncertainties of only a few 10^-9. Combining results obtained using the different measurement protocols, one obtains Δ I/I=(5.4±3.1)×10^-9 for N_1=160 and Δ I/I=(-1.1±3.6)×10^-9 for N_1=465. Hence, on average over a day, the current delivered by the PQCG is quantized and the deviation from zero is covered by an uncertainty of about 4×10^-9. At shorter time scales, the uncertainty due to the intra-day noise does not average out and the combined uncertainty is ≃10^-8. Generally, one might suspect a small discrepancy between measurements performed using either the I_+ or I_- protocols, in fig.<ref>b, c and d, which could come from a small Peltier-type effect. However, using the protocol I_± cancels this potential effect. Similar results are obtained for Δ I/I at current values of 57.42 μA (N_1=160), giving (1.9±2.6)×10^-9 (Extended Data fig.<ref>a). The margins over which the current values remain quantized at the same level of uncertainty have been tested in different situations. No significant deviation of the generated current is measured when shifting by ±0.1 mA the Josephson bias current of PJVS_1, I_bias (shown in fig.<ref>a at 5.74 μA and in fig.<ref>e at 45.94 μA), or by varying the PJVS_1 frequency from 70 GHz to 70.02 GHz (fig.<ref>a). The efficiency of the triple connection against large cable resistance values and the accuracy of the cable corrections, when applied, have been demonstrated by inserting a large resistance (50 Ω) into the first connection of the triple connection scheme, and by application of the cable correction in the double connection scheme (fig.<ref>e and Extended Data fig.<ref>b respectively). Finally, at 43.07 μA, another connection scheme (Methods), including JJ in the triple connection of PJVS_1 with n_1=1920, although less reliable with respect to magnetic flux trapping, confirms an accuracy at a level of a few parts in 10^8 (Extended Data fig.<ref>c).

Evaluation of short-term noise sources
The analysis of the u^A uncertainties provides further insight into the experiment. Fig.<ref>a and b show averages, u^A_exp(τ_m), of the u^A uncertainties measured in the different accuracy tests, after normalization to the same measurement time τ_m=16τ_0 (with τ_0=66 s, τ_m∼ 18 min) and to the same measurement protocol I_±, as a function of 1/V_2 and 1/N_1I_1 respectively. For comparison, they also report the theoretical standard uncertainties, u^A_calc(τ_m)=(√(3)/16)√(S_Δ I/I/τ_A), calculated using τ_A=12 s from the Δ I/I noise density (Methods): √(S_Δ I/I)=√(S_V/V_2^2+(1/(N_1I_1γ_CCC))^2S_ϕ), using a voltage noise of √(S_V)=28 nV/Hz^1/2 and a magnetic flux noise detected by the SQUID of √(S_ϕ)=6.2×10^-5ϕ_0/Hz^1/2, compatible with the experimental conditions (Extended Data fig.<ref>). Note that the agreement is satisfactory. Fig.<ref>a confirms that, at low V_2, the main noise contribution comes from the voltage noise of the quantum voltmeter, as emphasized by the 1/V_2 dependence, which is reproduced by u^A_calc(τ_m) calculated for S_ϕ=0.
On the other hand, at higher V_2 (fig.<ref>b), the number of ampere·turns becomes the dominant parameter and the experimental uncertainties follow the 1/N_1I_1 dependence of u^A_calc(τ_m) calculated for S_V=0. One can therefore deduce the Type A uncertainty contributions of the PQCG and the quantum voltmeter, which amount to 1.6×10^-11/(N_1I_1) and 8.8×10^-10/V_2, respectively. The low value of 7.3×10^-10 calculated for N_1=465 and I_1=45.94 μA for the PQCG itself allows considering the generation of even smaller currents. However, the demonstration of their accuracy would require increasing R_2, by using a series quantum Hall array of 1.29 MΩ resistance <cit.>, to increase V_2.

Application to an ammeter calibration
Fig.<ref>a shows the relative deviations Δ I/I^DA between the currents measured by the DA and the quantized currents generated by the PQCG in the 100 μA range, using the configuration of PJVS_1 with n_1=1920 (Extended Data Table <ref>). The coarse adjustment of the quantized current, about ±107.7 μA and ±62.6 μA, is done by using G=465/93 (or 160/32) and G=465/160 respectively. The Allan deviation (Extended Data fig.<ref>b) shows that the Type A relative uncertainty for the τ_m=144 s measurement time amounts to about 2×10^-7 at ±107.7 μA using either the I_+ or I_- measurement protocols (Extended Data fig.<ref>a). The data demonstrate that the DA is reproducible over the current range within about 5 parts in 10^7, similar to results obtained in the mA range <cit.>. Finally, fig.<ref>b illustrates the possibility of fine tuning the current by varying f_J from 69.98 to 70.02 GHz, which represents a relative shift of the quantized current of ±3×10^-4 around 107.7 μA.

The quantum current standard: state-of-the-art and perspectives
We have demonstrated the accuracy of the flow rate of electrons generated by the new-generation PQCG, at current values bridging the gap between the μA range and the mA range, with relative uncertainties ≤10^-8, as summarized in the inset of fig.<ref>. Moreover, Type A uncertainties of only a few 10^-9 have been measured. This progress stems from eliminating the need for any classical correction, improving the signal-to-noise ratio, extending the operating margins and applying new measurement protocols based on tuning Josephson frequencies. These results open the way to a quantum current standard as accurate as the voltage and resistance standards in the future. Fig.<ref> shows the state of the art of the accuracy tests of quantum current sources based on different quantum technologies, along with the best calibration measurement capabilities (CMCs) achieved in national metrology institutes (NMIs) for comparison. It recalls the uncertainties achieved in the present work along with those reported in our previous work <cit.>. This illustrates the wide range of current covered by the quantum current standard. At much lower currents, around 100 pA, uncertainties at the level of 10^-7 are achieved by the best SECS. This uncertainty level is also reached for currents around 1 μA by one experiment based on the series connection of a quantum Hall resistance array and a PJVS <cit.>. Let us remark that the uncertainties achieved depend not only on the current source itself but also on the method used to measure the generated current. In this respect, the best known measurement techniques reach relative uncertainties of about 10^-7: <cit.><cit.> (from 100 pA to 1 μA), <cit.> (around 1 μA), <cit.> (around 10 mA).
On the other hand, the uncertainties <10^-8 demonstrated with the PQCG come not only from its own accuracy and stability but also from the measurement with the quantum voltmeter. Providing such an accurate primary quantum current standard in the current range of the best CMCs, which are limited by uncertainties two orders of magnitude larger, is essential both to improve the transfer of the ampere towards end-users and to foster the development of more accurate instruments, as emphasized by the calibration of a digital ammeter with uncertainties limited by the instrument itself. Fig.<ref> also emphasizes the importance of exploring PQCG capabilities towards even smaller currents, in order to bridge the gap with the currents delivered by SECS and by devices exhibiting dual Shapiro steps. This would open the way to a new metrological triangle experiment <cit.>. More precisely, considering the variant of the PQCG proposed in <cit.> and the noise level estimated in this work, we could expect to generate and measure a 10 nA current with a relative uncertainty of 5×10^-8 after a 12 h measurement (N_1=160, I_1=0.33 μA, N_2=5400). Another important result is the demonstration of the PQCG accuracy using a 129 times larger resistance (QHRS_2) than in <cit.>, which shows its robustness against the load resistance, as required for a true current source. Moreover, the PQCG accuracy being now established, our experiments can be interpreted as calibrations of resistors of 13 kΩ and 100 Ω values with a 10^-8 measurement uncertainty. More generally, combining equations (1) and (2) leads to: R_2=(h/2e^2)(N_2/N_1)(n_2/n_1)(f_2^eq/f_1), where the equilibrium can be coarsely set by choosing G and n_1,2 and finely tuned by adjusting f_1 and f_2^eq. This new method combining the PQCG and the quantum voltmeter (see Extended Data fig.<ref>) paves the way for a paradigm shift in resistance calibration. It allows the calibration of a large resistance from h/2e^2 to be reduced to a single step, suppressing the intermediate steps needed with a conventional resistance comparison bridge <cit.>. Furthermore, the full quantum instrumentation developed lays the foundation for a DC quantum calibrator-multimeter able to provide the primary references of voltage, resistance and current, which are needed in NMIs. In this perspective, graphene-based single Hall bars <cit.> or arrays <cit.> replacing GaAs devices could provide both noise reduction and simplification of the instrument operation. In the longer term, QHRS based on the quantum anomalous Hall effect <cit.>, operating at zero magnetic field like the PJVS, or even heterostructure-based JJ comprising stacked cuprates <cit.>, could lead to a more compact and practical instrumentation.

Acknowledgments
This work was supported by the French National Metrology Network (RNMF) ("The ampere metrology" project, number 168). We wish to acknowledge Mohammed Mghalfi for his technical support. We thank D. Estève, D.C. Glattli (CEA/SPEC, France), Yannick De Wilde (ESPCI, France) and Almazbek Imanaliev (LNE, France) for their critical reading and comments.

Author contributions
S. D. and W. P. planned the experiments. R. B. performed the cabling and the characterizations of the programmable Josephson standards fabricated by PTB. S. D. and W. P. developed the instrumentation, conducted the electrical metrological measurements, analyzed the data and wrote the paper. All authors contributed to the final version.

Competing interests
The authors declare no competing interest.
[BrochureSI] BIPM, The International System of Units (SI), 9th edition (BIPM, Sèvres, 2019).
[Poirier2019] W. Poirier, S. Djordjevic, F. Schopfer, and O. Thévenot, The ampere and the electrical units in the quantum era, C. R. Physique 20, 92 (2019).
[Pothier1992] H. Pothier, P. Lafarge, C. Urbina, D. Estève, and M. H. Devoret, Single-electron pump based on charging effects, Eur. Phys. Lett. 17, 249 (1992).
[Pekola2013] J. P. Pekola, O. P. Saira, V. Maisi, A. Kemppinen, M. Möttönen, Y. A. Pashkin, and D. Averin, Single-electron current sources: Toward a refined definition of the ampere, Rev. Mod. Phys. 85, 1421 (2013).
[Scherer2019] H. J. Scherer and H. W. Schumacher, Single-electron pumps and quantum current metrology in the revised SI, Annalen der Physik 531, 1800371 (2019).
[Keller1999] M. W. Keller, A. L. Eichenberger, J. M. Martinis, and N. M. Zimmermann, A capacitance standard based on counting electrons, Science 285, 1706 (1999).
[Pekola2008] J. P. Pekola, J. J. Vartiainen, M. Mottonen, O. P. Saira, M. Meschke, and D. V. Averin, Hybrid single-electron transistor as a source of quantized electric current, Nature Phys. 4, 120 (2008).
[Camarota2012] B. Camarota, H. Scherer, M. W. Keller, S. V. Lotkov, G. Willenberg, and J. Ahlers, Electron counting capacitance standard with an improved five-junction R-pump, Metrologia 49, 8 (2012).
[Giblin2012] S. P. Giblin, M. Kataoka, J. D. Fletcher, P. See, T. J. B. M. Janssen, J. P. Griffiths, G. A. C. Jones, I. Farrer, and D. A. Ritchie, Towards a quantum representation of the ampere using single electron pumps, Nat. Commun. 3, 930 (2012).
[Stein2015] F. Stein, D. Drung, L. Fricke, H. Scherer, F. Hohls, C. Leicht, M. Götz, C. Krause, R. Behr, E. Pesel, K. Pierz, U. Siegner, F. J. Ahlers, and H. W. Schumacher, Validation of a quantized-current source with 0.2 ppm uncertainty, Appl. Phys. Lett. 107, 103501 (2015).
[Stein2017] F. Stein, H. Scherer, T. Gerster, R. Behr, M. Götz, E. Pesel, C. Leicht, N. Ubbelohde, T. Weimann, K. Pierz, H. W. Schumacher, and F. Hohls, Robustness of single-electron pumps at sub-ppm current accuracy level, Metrologia 54, S1 (2017).
[Yamahata2016] G. Yamahata, S. P. Giblin, M. Kataoka, T. Karasawa, and A. Fujiwara, Gigahertz single-electron pumping in silicon with an accuracy better than 9.2 parts in 10^7, Appl. Phys. Lett. 109, 013101 (2016).
[Zhao2017] R. Zhao, A. Rossi, S. P. Giblin, J. D. Fletcher, F. E. Hudson, M. Möttönen, M. Kataoka, and A. S. Dzurak, Thermal-error regime in high-accuracy gigahertz single-electron pumping, Phys. Rev. Appl. 8, 044021 (2017).
[Bae2020] M.-H. Bae, D.-H. Chae, M.-S. Kim, B.-K. Kim, S.-I. Park, J. Song, T. Oe, N.-H. Kaneko, N. Kim, and W.-S. Kim, Precision measurement of single-electron current with quantized Hall array resistance and Josephson voltage, Metrologia 57, 065025 (2020).
[Kataoka2011] M. Kataoka, J. D. Fletcher, P. See, S. P. Giblin, T. J. B. M. Janssen, J. P. Griffiths, G. A. C. Jones, I. Farrer, and D. A. Ritchie, Tunable nonadiabatic excitation in a single-electron quantum dot, Phys. Rev. Lett. 106, 126801 (2011).
[Ahn2017] Y.-H. Ahn, C. Hong, Y.-S. Ghee, Y. Chung, Y.-P. Hong, M.-H. Bae, and N. Kim, Upper frequency limit depending on potential shape in a QD-based single electron pump, Journal of Applied Physics 122, 194502 (2017).
[Shaikhaidarov2022] R.-S. Shaikhaidarov, K. H. Kim, J. W. Dunstan, I. V. Antonov, S. Linzen, M. Ziegler, D. S. Golubev, V. N. Antonov, E. V. Ilíchev, and O. V. Astafiev, Quantized current steps due to the a.c. coherent quantum phase-slip effect, Nature 608, 45 (2022).
[Crescini2023] N. Crescini, S. Cailleaux, W. Guichard, C. Naud, O. Buisson, K. W. Murch, and N. Roch, Evidence of dual Shapiro steps in a Josephson junction array, Nat. Phys. 19, 851 (2023).
[Kaap2024] V. Gaydamachenko, L. Grünhaupt, F. Kaap, C. Kissling, and S. Lotkhov, Demonstration of dual Shapiro steps in small Josephson junctions, arXiv:2401.06599.
[Kurilovich2024] V. D. Kurilovich, B. Remez, and L. I. Glazman, Quantum theory of Bloch oscillations in a resistively shunted transmon, arXiv:2403.04624 (2024).
[Josephson1962] B. D. Josephson, Possible new effects in superconductive tunnelling, Phys. Lett. 1, 251 (1962).
[Klitzing80] K. von Klitzing, G. Dorda, and M. Pepper, New method for high-accuracy determination of the fine structure constant based on quantized Hall resistance, Phys. Rev. Lett. 45, 494 (1980).
[Shapiro63] S. Shapiro, Josephson currents in superconducting tunneling: the effect of microwaves and other observations, Phys. Rev. Lett. 11, 80 (1963).
[Chae2022] D.-H. Chae, M.-S. Kim, T. Oe, and N.-H. Kaneko, Series connection of quantum Hall resistance array and programmable Josephson voltage standard for current generation at one microampere, Metrologia 59, 065011 (2022).
[Kaneko2024] N.-H. Kaneko, T. Tanaka, and Y. Okazaki, Perspectives of the generation and measurement of small electric currents, Meas. Sci. Technol. 35, 011001 (2024).
[Poirier2014] W. Poirier, F. Lafont, S. Djordjevic, F. Schopfer, and L. Devoille, A programmable quantum current standard from the Josephson and the quantum Hall effects, J. Appl. Phys. 115, 044509 (2014).
[Brun-Picard2016] J. Brun-Picard, S. Djordjevic, D. Leprat, F. Schopfer, and W. Poirier, Practical quantum realization of the ampere from the elementary charge, Phys. Rev. X 6, 041051 (2016).
[Djordjevic2021] S. Djordjevic, R. Behr, D. Drung, M. Götz, and W. Poirier, Improvements of the programmable quantum current generator for better traceability of electrical current measurements, Metrologia 58, 045005 (2021).
[Behr2012] R. Behr, O. Kieler, J. Kohlmann, F. Müller, and L. Palafox, Development and metrological applications of Josephson arrays at PTB, Meas. Sci. Technol. 23, 124002 (2012).
[Piquemal1993] F. Piquemal, G. Genevès, F. Delahaye, J. P. André, J. N. Patillon, and P. Frijlink, Report on a joint BIPM-EUROMET project for the fabrication of QHE samples by the LEPs, IEEE Trans. Instrum. Meas. 42, 264 (1993).
[Harvey1972] I. K. Harvey, A precise low temperature dc ratio transformer, Rev. Sci. Instrum. 43, 1626 (1972).
[Delahaye1993] F. Delahaye, Series and parallel connection of multiterminal quantum Hall-effect devices, J. Appl. Phys. 73, 7914 (1993).
[Buttiker1988] M. Buttiker, Absence of backscattering in the quantum Hall effect in multiprobe conductors, Phys. Rev. B 38, 9375 (1988).
[Poirier2004] W. Poirier, A. Bounouh, F. Piquemal, and J. P. André, A new generation of QHARS: discussion about the technical criteria for quantization, Metrologia 41, 285 (2004).
[Keller2007] M. W. Keller, N. M. Zimmermann, and A. L. Eichenberger, Uncertainty budget for the NIST electron counting capacitance standard, ECCS-1, Metrologia 44, 505 (2007).
[BIPMCMC] The BIPM key comparison database (KCDB), Calibration and Measurement Capabilities – CMCs (Appendix C), DC current database, https://kcdb.bipm.org (2024).
[Giblin2019] S. P. Giblin, Re-evaluation of uncertainty for calibration of 100 MΩ and 1 GΩ resistors at NPL, Metrologia 56, 015014 (2019).
[SchererULCA2019] H. Scherer, D. Drung, C. Krause, M. Götz, and U. Becker, Electrometer calibration with sub-part-per-million uncertainty, IEEE Trans. Instrum. Meas. 68, 1887 (2019).
[Chae2020] D.-H. Chae, M.-S. Kim, W.-S. Kim, T. Oe, and N.-H. Kaneko, Quantum mechanical current-to-voltage conversion with quantum Hall resistance array, Metrologia 57, 025004 (2020).
[Lee2016] J. Lee, R. Behr, B. Schumacher, L. Palafox, M. Schubert, M. Starkloff, A. C. Böck, and P. M. Fleischmann, From AC quantum voltmeter to quantum calibrator, in CPEM 2016 (2016), pp. 1–2.
[Likharev1985] K. K. Likharev and A. B. Zorin, Theory of the Bloch-wave oscillations in small Josephson junctions, J. Low Temp. Phys. 59, 347 (1985).
[Poirier2020] W. Poirier, D. Leprat, and F. Schopfer, A resistance bridge based on a cryogenic current comparator achieving sub-10^-9 measurement uncertainties, IEEE Trans. Instrum. Meas. 70, 1 (2021).
[Lafont2015] F. Lafont, R. Ribeiro-Palau, D. Kazazis, A. Michon, O. Couturaud, C. Consejo, T. Chassagne, M. Zielinski, M. Portail, B. Jouault, F. Schopfer, and W. Poirier, Quantum Hall resistance standards from graphene grown by chemical vapour deposition on silicon carbide, Nature Communications 6, 6806 (2015).
[Ribeiro2015] R. Ribeiro-Palau, F. Lafont, J. Brun-Picard, D. Kazazis, A. Michon, F. Cheynis, O. Couturaud, C. Consejo, B. Jouault, W. Poirier, and F. Schopfer, Quantum Hall resistance standard in graphene devices under relaxed experimental conditions, Nature Nano. 10, 965 (2015).
[Panna2021] A. R. Panna, I.-F. Hu, M. Kruskopf, D. K. Patel, D. G. Jarrett, C.-I Liu, S. U. Payagala, D. Saha, A. F. Rigosi, D. B. Newell, C.-T. Liang, and R. E. Elmquist, Graphene quantum Hall effect parallel resistance arrays, Phys. Rev. B 103, 075408 (2021).
[He2022] H. He, K. Cedergren, N. Shetty, S. Lara-Avila, S. Kubatkin, T. Bergsten, and G. Eklund, Accurate graphene quantum Hall arrays for the new International System of Units, Nat. Commun. 13, 6933 (2023).
[Fox2018] E. J. Fox, I. T. Rosen, Y. Yang, G. R. Jones, R. E. Elmquist, X. Kou, L. Pan, K. L. Wang, and D. Goldhaber-Gordon, Part-per-million quantization and current-induced breakdown of the quantum anomalous Hall effect, Phys. Rev. B 98, 075145 (2018).
[Gotz2018] M. Götz, K. M. Fijalkowski, E. Pesel, M. Hartl, T. Schreyeck, M. Winnerlein, S. Grauer, H. Scherer, K. Brunner, C. Gould, F. J. Ahlers, and L. W. Molenkamp, Precision measurement of the quantized anomalous Hall resistance at zero magnetic field, Appl. Phys. Lett. 112, 072102 (2018).
[Okazaki2022] Y. Okazaki, T. Oe, M. Kawamura, R. Yoshimi, S. Nakamura, S. Takada, M. Mogi, K.-S. Takahashi, A. Tsukazaki, M. Kawasaki, Y. Tokura, and N.-H. Kaneko, Quantum anomalous Hall effect with a permanent magnet defines a quantum resistance standard, Nat. Phys. 18, 25 (2022).
[Martini2024] M. Martini, Y. Lee, T. Confalone, S. Shokri, C. N. Saggau, D. Wolf, G. Gu, K. Watanabe, T. Taniguchi, D. Montemurro, V. M. Vinokur, K. Nielsch, and N. Poccia, Twisted cuprate van der Waals heterostructures with controlled Josephson coupling, arXiv:2303.16029 (2024).
[Rufenacht2018] A. Rüfenacht, N. E. Flowers-Jacobs, and S. P. Benz, Impact of the latest generation of Josephson voltage standards in ac and dc electric metrology, Metrologia 55, S152 (2018).
[Delahaye2003] F. Delahaye and B. Jeckelmann, Revised technical guidelines for reliable dc measurements of the quantized Hall resistance, Metrologia 40, 217 (2003).
[Ricketts1988] B. W. Ricketts and P. C. Kemeny, Quantum Hall effect devices as circuit elements, J. Phys. D 21, 483 (1988).
§ METHODS Quantum devices Implementation. The two quantum Hall resistance standards, QHRS_1 and QHRS_2, are both cooled down in the same cryostat at 1.3 K under a magnetic field of 10.8 T. PJVS_1 is cooled down in a small amount of liquid helium maintained at 4.2 K in a recondensing cryostat based on a pulse-tube refrigerator, while PJVS_2 is cooled down in a 100 l liquid He Dewar at 4.2 K. The quantization state of the PJVS and QHRS is periodically checked following technical guidelines <cit.><cit.>. In case of occasional trapped flux in the PJVS, a quick heating of the array allows the quantized voltage steps to be fully restored. Finally, the CCC is placed in another liquid He Dewar. 
Connecting five quantum devices, while ensuring quantized operation of the PJVS devices and minimizing noise, is very challenging. Much effort has been spent optimizing the wiring, the positions of the grounding points and the bias configuration of the PJVS. Shielding. It is essential to cancel the leakage currents that could alter the accurate equality of the total currents circulating through the QHRS and the windings. This is achieved by placing the high- and low-potential cables (high-insulation R_L > 1 TΩ resistance) connected to the PJVS, to the QHRS and to the CCC inside two separated shields, which are then twisted together and connected to ground. In this way, direct leakage currents short-circuiting the QHRS, the most troublesome, are canceled. Other leakage currents are redirected to ground. PJVS cabling. The two PJVS are binary-divided 1 V Nb/Nb_xSi_1-x/Nb series arrays <cit.>, both having a total of 8192 JJ and working around 70 GHz. The sequence of the segments that can be biased in the Josephson arrays is the following: 4096/2048/1024/512/256/128/1/31/32/64. Three bonding wires have been added at both ends of the arrays, on the same superconducting pad, in order to implement the triple connection as illustrated in fig.<ref>. A 50 Ω heater is placed close to the Josephson array chip, allowing trapped flux to be removed within a few minutes. We have used a prototype of the hermetic cryoprobe developed for the recondensing cryostat, which had only 8 available wires, reducing the possible wiring configurations for PJVS_1. Extended Data fig.<ref>a and b show the two wiring configurations used, with n_1=4096 and n_1=1920 respectively. They differ essentially in the way the triple connection is implemented at the low-potential side of the bias source (PJBS). In Extended Data fig.<ref>a, the triple connection is done on the same superconducting pad, while in Extended Data fig.<ref>b, JJ are present between the bias wire and the wires connecting QHRS_1, but also between the second and third connection of the triple connection. If the current circulating in the JJ is less than half the amplitude of the n=0 Shapiro step (< 500 μA), both configurations are equivalent. However, the second one turned out to be less reliable than the first one when connecting the rest of the circuit. Because of the ground loop including the JJ, it was very sensitive to magnetic flux trapping. Nonetheless, the results of Extended Data fig.<ref>c show that quantized currents can be generated using this configuration. It was used for the calibration of the DA. The best accuracy tests reported in fig.<ref> and Extended Data fig.<ref>a were done with the first configuration. CCC for correction-free PQCG Design. The new CCC (Extended Data fig.<ref>a and b) is made of 20 windings of 1, 1, 1, 2, 2, 16, 16, 16, 32, 64, 128, 128, 160, 160, 465, 465, 1600, 1600, 2065 and 2065 turns, with a total number of 8789 turns. They are embedded in a superconducting toroidal shield made of 150 μm thick Pb foils, forming three electrically isolated turns to prevent non-ideal behaviour at the ends of the shield. The architecture is inspired by the design of a CCC used in a quantum Hall resistance bridge <cit.> (enabling ratios close to 1.29), but with 5 additional windings. The triple connection is possible for the windings of 1, 2, 16, 128, 160, 465 and 1600 turns. The dimensions have been chosen so that the CCC can be mounted on a cryogenic probe designed to be compatible with a 70 mm diameter neck of a liquid He Dewar. 
The inner and outer diameters of the toroidal shield are 19 mm and 47 mm respectively. The chimney is about 125 mm high. The CCC is enclosed in two successive 0.5 mm thick Pb superconducting cylindrical screens and in a Cryoperm shield surrounding the whole, corresponding to an expected overall magnetic attenuation of about 200 dB. It is equipped with a Quantum Design Inc. DC SQUID, placed in a separate superconducting Nb shield, and coupled to the CCC via a superconducting flux transformer composed of a wire-wound sensing coil placed as close as possible to the inner surface of the CCC. The coupling γ_CCC=8 μA·turn/ϕ_0 has been maximized with a sensing coil of 9 turns compatible with the geometrical constraints. The 20 windings are connected by 40 copper alloy wires (AWG 34) placed in a stainless steel shield. Noise. Extended Data fig.<ref>c shows the noise spectrum at the output of the SQUID. The base noise level of the CCC (green line) amounts to 10 μϕ_0/√(Hz) at 1 Hz in the best noise conditions, slightly higher than the white noise level of 3 μϕ_0/√(Hz) of the SQUID alone (black line). Below 1 Hz, one can observe an increased noise compatible with a 1/f noise contribution. Resonance peaks present at frequencies below 10 Hz are certainly due to an imperfect decoupling of the Dewar from the ground vibrations <cit.>. Damping circuit. A damped resonance at 1.6 kHz is due to the use of a damping circuit to improve the stability of the feedback loop. The damping circuit is made of a C_D=100 nF capacitance at room temperature in series with a R_D= 1 kΩ resistor and a N_D= 2065 turns CCC winding at T_D= 4.2 K. It strongly damps the CCC resonances (around 10 kHz), which are excited by the external noise captured. The counterpart is an increase of the magnetic flux noise detected by the SQUID around 1.6 kHz, caused by the Johnson-Nyquist noise emitted by the resistor. However, placing the resistor at low temperature reduced the noise magnitude by a factor of ten compared to the previous experiment <cit.>, with a maximum flux noise of (N_D/γ_CCC)√(4k_BT_D/R_D)≃ 124 μϕ_0/√(Hz) at 1.6 kHz <cit.>. In the experimental conditions of this paper, the noise was measured to be slightly higher, as described by the blue curve in Extended Data fig.<ref>c. The noise level at 1 Hz rises to about 20 μϕ_0/Hz^1/2. Current sensitivity. It is related to the flux generated by the screening current circulating on the shield and can be estimated from the CCC noise spectrum and γ_CCC; it corresponds to 80 pA·turns/√(Hz) at 1 Hz in the best conditions. Accuracy. The CCC accuracy can be altered by magnetic flux leakage detected by the pickup coil. It can be tested by series-opposition measurements of windings with identical numbers of turns. In the best noise conditions, with a current of 30 to 100 mA, for the N>16 turns used in this paper the relative error on the number of turns δ N/N is <10^-9. The quantum voltmeter The quantum voltmeter is made of a second PJVS, PJVS_2, and of a nanovoltmeter (EMN11 from EM electronics). It is used to measure the voltage drop at the terminals of the resistor R_2. The low potential of PJVS_2 is connected to ground, as described in Extended Data fig.<ref>. Direct current leakages parallel to R_2 are strongly screened by the shielding: no current can circulate between the low potential of R_2 and ground, because both are at the same potential at equilibrium. All current leakages to ground are deviated in parallel to PJVS_2. 
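As a quick cross-check of the damping-circuit figure quoted above, the flux noise injected by the Johnson-Nyquist noise of the cold damping resistor can be recomputed from the stated parameters (N_D=2065 turns, R_D=1 kΩ, T_D=4.2 K, γ_CCC=8 μA·turn/ϕ_0). The short Python sketch below is only a numerical illustration of that formula, not part of the original analysis.

import math

k_B = 1.380649e-23      # J/K
N_D = 2065              # turns of the damping winding
gamma_CCC = 8e-6        # A*turn per flux quantum (CCC coupling)
R_D = 1e3               # ohm, damping resistor
T_D = 4.2               # K, resistor held at liquid-He temperature

# Johnson-Nyquist current noise of R_D, referred to the SQUID as a flux noise
i_noise = math.sqrt(4 * k_B * T_D / R_D)      # A/sqrt(Hz)
flux_noise = N_D * i_noise / gamma_CCC        # phi_0/sqrt(Hz)
print(f"{flux_noise * 1e6:.0f} micro-phi_0/sqrt(Hz)")

It returns approximately 124 μϕ_0/√Hz, consistent with the value quoted in the text.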
Multiple series connection In the multiple series connection (see fig.<ref> and Extended Data fig.<ref>a), the series resistances of the connections result in an effective resistance which adds to the quantized Hall resistance <cit.>. This leads to a lower value of the quantized current, I_PQCG=Gn_1ef_1(1-α_n), where α_n is positive and decreases exponentially with the number of connections n. From the series resistances r_1, r_2, r_3, r^'_1, r^'_2 and r^'_3, as indicated in Extended Data fig.<ref>a, one calculates, using a Ricketts and Kemeny model <cit.> of the Hall bar, α_2=r_1r_2/R_H^2+r^'_1r^'_2/R_H^2 and α_3=r_1r_2r_3/R_H^3+r^'_1r^'_2r^'_3/R_H^3 for the double series connection and the triple series connection respectively. The series resistances to be considered are those of the QHRS contacts and cables, those of the long cables linking the quantum devices, and those of the different combinations of CCC windings necessary to obtain the desired number of turns N_1. For all the experiments based on the triple connection technique, α_3 is calculated to be below 1.5×10^-10, except for the measurement performed with a 50 Ω resistor added in series with the first CCC winding (see fig.<ref>e and Extended Data fig.<ref>b), which results in α_3=(1.44± 0.023)×10^-9. For the measurement reported in fig.<ref>e and in Extended Data fig.<ref>b using the double connection scheme, one calculates α_2=(2.344± 0.037)×10^-7. Experimental settings of the PQCG for generating output currents All accuracy tests are performed using the PQCG settings reported in Extended Data Table <ref>, except one measurement at 5.74 μA using frequencies f_1=f_2=70.02 GHz (see fig.2a) and one measurement at 45.94 μA which uses different frequencies, f_1=70 GHz and f_2=69.999976 GHz, respectively, to accommodate the deviation of a few parts in 10^7 of the PQCG current from equation (1) caused by the implementation of the double connection only (see fig.<ref>e). Measurements are carried out with Δ f=40 or 80 kHz. Uncertainties, weighted mean values, combined results and error bars Two types of measurement uncertainties are considered: the Type A uncertainties, which are evaluated by statistical methods, and the Type B uncertainties, evaluated by other methods. Type A uncertainty. The Type A uncertainty for one measurement of Δ I/I, u^A, is given by: u^A=u^A(f_2^eq)/f_1, where u^A(f_2^eq)=(f_2^+-f_2^-)√((w^A_+Δ V_f_2^-)^2+(w^A_-Δ V_f_2^+)^2)/(Δ V_f_2^+-Δ V_f_2^-)^2, with w^A_+ and w^A_- the standard uncertainties (coverage factor k=1) of the mean voltage series Δ V_f_2^+ and Δ V_f_2^-, respectively. Their calculation is justified by the time dependence of the Allan deviation, which demonstrates a dominant white noise (Extended Data fig.<ref>b). To account for the intra-day noise observed in the measurements reported in fig.<ref>e and g, a Type A uncertainty, u^A_id-noise=10^-8, is added to each data point. Its value was determined so that the criterion χ^2=1/NΣ_1^N (X_i-X)^2/u(X_i)^2<1 is fulfilled, where N is the number of values, X_i the individual values, X their weighted mean and u(X_i) the standard uncertainty of X_i. As suggested by an investigation of the quality of the power line of our laboratory, the observed intra-day noise could be caused by a recent increase of the electrical noise pollution. 
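To illustrate the size of the cable corrections α_2 and α_3 defined above, the following Python sketch evaluates the two Ricketts-Kemeny expressions. The series resistance value r and the 50 Ω example are placeholders chosen only to reproduce the order of magnitude of the corrections quoted in the text; the actual resistances of the setup are not given here.

# Illustrative evaluation of the cable corrections for the multiple series connection.
R_H = 12906.4  # ohm, h/2e^2 (i = 2 quantum Hall plateau)

def alpha_2(r1, r2, r1p, r2p, R=R_H):
    return (r1 * r2 + r1p * r2p) / R**2

def alpha_3(r1, r2, r3, r1p, r2p, r3p, R=R_H):
    return (r1 * r2 * r3 + r1p * r2p * r3p) / R**3

r = 4.4  # ohm, hypothetical lead + contact resistance of each connection
print(f"alpha_2 ~ {alpha_2(r, r, r, r):.2e}")            # ~2e-7 (double connection)
print(f"alpha_3 ~ {alpha_3(r, r, r, r, r, r):.2e}")       # ~1e-10 (triple connection)
print(f"alpha_3, 50 ohm added: {alpha_3(r + 50, r, r, r, r, r):.2e}")

With series resistances of a few ohms, α_2 sits at the 10^-7 level while α_3 drops to the 10^-10 level, which is why the triple connection makes the correction negligible in practice.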
The implementation of the triple series connection and the use of new measurement protocols, based on the adjustment of the current using only the Josephson parameters, have cancelled (current divider) or strongly reduced (cable correction) the most important contributions of the previous experiment <cit.>. We would expect a total uncertainty below 1× 10^-9. However, it turns out that we measured, temporarily during the measurement campaign, CCC ratio errors slightly higher than the typical values. The ratio errors ranged from 2× 10^-9 to 0.1× 10^-9 for windings of number of turns from 128 to 1600, respectively, which were used either for N_1 or N_2 in the experiment reported here. Using a conservative approach, we have therefore considered here a Type B uncertainty of 2×10^-9 for the CCC, which dominates the uncertainty budget. Other components were detailed in <cit.>. The SQUID electronic feedback is based on the same SQUID type and same pre-amplifier from Quantum Design. It is set in the same way, using a 4.2 V/ϕ_0 closed-loop gain whatever the number of turns N_2 used. The VCCS current source preadjusts the output current, such that the SQUID feedback acts only on a small fraction, lower than 2× 10^-5× I_PQCG (see <cit.>). Owing to the cable shields, leakages to ground are redirected to ground, i.e. in parallel to the CCC winding <cit.>. The current leakage error amounts to r_1/R_L, which is below 7×10^-12 in our experiments. This results in a relative Type B uncertainty of 2× 10^-9 for the PQCG and a relative Type B uncertainty of u_B=2.1× 10^-9 for the accuracy test. Weighted mean values. Weighted mean values, Δ I/I_WM, and their uncertainties, u^A_WM, are calculated from the Δ I/I series values and their Type A uncertainties. In fig.<ref>b and d, they are given by: Δ I/I_WM =∑_j (Δ I/I)_j×(1/(u^A_j)^2)/∑_j 1/(u^A_j)^2 and u^A_WM =1/√(∑_j 1/(u^A_j)^2). In fig.<ref>f and h, they are given by: Δ I/I_WM =∑_j (Δ I/I)_j×(1/((u^A_j)^2+(u^A_id-noise)^2))/∑_j 1/((u^A_j)^2+(u^A_id-noise)^2) and u^A_WM =1/√(∑_j 1/((u^A_j)^2+(u^A_id-noise)^2)), where u^A_id-noise=10^-8 is the additional Type A component added to each data point to take account of the intra-day noise. Combined results Δ I/I. For accuracy tests performed at 11.48 μA using N_1=465 and at 45.94 μA using N_1=465, the combined result Δ I/I is the mean value of the Δ I/I_WM values obtained using the measurement protocols I_+ and I_-: Δ I/I=(Δ I/I_WM(I_+) + Δ I/I_WM(I_-))/2. For accuracy tests performed at 5.74 μA using N_1=160, at 45.94 μA using N_1=160, at 57.42 μA using N_1=160 and at 43.07 μA using n_1=1920, the combined result Δ I/I is the weighted mean value calculated from the values (Δ I/I_WM(I_+) + Δ I/I_WM(I_-))/2 and Δ I/I_WM(I_±) and their respective Type A uncertainties. Combined uncertainties. The combined uncertainty, u^c_WM, is given by u^c_WM=√((u^A_WM)^2+(u^B)^2). The combined uncertainty of Δ I/I is given by u^c=√((u^A_Δ I/I)^2+(u^B)^2). Error bars. Error bars in the different figures represent measurement uncertainties corresponding to one standard deviation (i.e. k=1). This means an interval of confidence of 68% if a Gaussian distribution law is assumed. These measurement uncertainties are either Type A uncertainties or combined uncertainties. Figure <ref> In a, c, e and g, error bars correspond to Type A uncertainties only, u^A. 
In b, d, f and h, error bars correspond to combined uncertainties, u^c_WM. Figure <ref> In a and b, error bars correspond to uncertainties u(u^A_exp), which are standard deviations of the u^A values, and not standard deviations of the means, in order to reflect the ranges over which the noise levels vary. Figure <ref> In a and b, error bars correspond to Type A standard uncertainties. Figure <ref> In the inset of fig.<ref>a, error bars correspond to combined standard uncertainties, u^c. Extended Data figure <ref> In a, error bars correspond to combined standard uncertainties √((u^A)^2+(u^B)^2). In b, error bars correspond to the combination of Type A standard uncertainties according to √((u^A)^2+u^A(Δ I_ref/I_ref)^2). In c, error bars correspond to u^A. Experimental, u^A_exp, and calculated, u^A_calc, standard uncertainties. The uncertainties u^A_exp(τ_m) are calculated by averaging the uncertainty values, u^A, of each series, after normalization to the same measurement time τ_m=N_sτ_0, where N_s=16 is the number of sequences, and to the same measurement protocol I_±. The standard deviation, u(u^A_exp), is calculated from the different values of a series. u^A_calc(τ_m) is calculated using the relationship u^A_calc(τ_m=N_sτ_0)=(1/√(N_s))√(3/8)√(S_Δ I/I/(2τ_A)), where τ_A=12 s is the acquisition time for one single voltage measurement (see Extended Data fig.<ref>a) and √(S_Δ I/I) is the noise density of Δ I/I. More precisely, √(S_Δ I/I/(2τ_A)) is the relative standard deviation corresponding to the acquisition time τ_A, assuming an effective white noise density. The pre-factor √(3/8) comes from the combination of the standard deviations corresponding to the measurement protocol I_±, where the voltage of each sequence is obtained by [(Δ V (+I)_1+Δ V (+I)_3)/2-Δ V (-I)_2], with Δ V (+I)_1, Δ V (-I)_2 and Δ V (+I)_3 three successive voltage acquisitions, performed with positive current, then negative current, and then positive current (see Extended Data fig.<ref>a). Finally, the factor 1/√(N_s) comes from the white-noise hypothesis justified by Extended Data fig.<ref>b. The noise density √(S_Δ I/I) is given by: √(S_Δ I/I)=√(S_V/V_2^2+(γ_CCC/(N_1I_1))^2 S_ϕ+4kT/(R_1I_1^2)), where k is the Boltzmann constant and T=1.3 K. Three noise contributions are considered: the voltage noise power spectral density, S_V, of the quantum voltmeter, which includes the noise of the null detector and some external voltage noise captured; the magnetic flux noise power spectral density, S_ϕ, detected by the SQUID, which includes the SQUID noise and some external magnetic flux noise captured; and the Johnson-Nyquist noise power spectral density emitted by the resistor R_1 in the primary loop. The Johnson-Nyquist noise of the resistor R_2 is included in S_V. The third term contributes to √(S_Δ I/I) by about 1.6×10^-9/Hz^1/2 for I_1=45.94 μA, leading to a negligible uncertainty contribution of 5×10^-11 for a measurement time τ_m=16τ_0. This third term is therefore not considered in our calculations. Values reported in fig.<ref>e and f are calculated using a voltage noise of √(S_V)=28 nV/Hz^1/2 and a magnetic flux noise detected by the SQUID of √(S_ϕ)=6.2×10^-5ϕ_0/Hz^1/2. Measurement protocol for the ammeter calibration Calibrations of the ammeter HP3458A are performed using the settings of the PQCG reported in Extended Data Table <ref>. In these experiments, the output current values are changed by varying both the gain G and the frequency f_1. 
Using the PQCG to perform an ammeter calibration consists of replacing QHRS_2 by the device under test and removing the quantum voltmeter. The connection is done through a low-pass filter (highly insulated PTFE 100 nF capacitor on the differential input). A common-mode torus has also been introduced to minimize the noise. Extended Data fig.<ref>a shows recordings by the ammeter (HP3458A) as a function of time for several alternations of I_PQCG at 107.666272 μA using the measurement protocol I_+. The acquisition time and the waiting time are 10 s and 2 s, respectively. The measured current is determined from the average of the values obtained for several measurement groups. The time dependence of the Allan deviation, reported in Extended Data fig.<ref>b, shows that the standard deviation of the mean is a relevant estimate of the Type A relative uncertainty at τ_m=144 s. An uncertainty of 2×10^-7 is typically achieved after a total measurement time of 144 s.
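A minimal sketch of the non-overlapping Allan deviation used to check that the ammeter readings are dominated by white noise is given below. The readings are synthetic (white noise around the nominal 107.666272 μA current), since the raw data are not reproduced here.

import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of a series of readings y for block size m."""
    n_blocks = len(y) // m
    means = np.reshape(y[: n_blocks * m], (n_blocks, m)).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

# Synthetic ammeter readings with purely white relative noise of 2e-6.
rng = np.random.default_rng(0)
readings = 107.666272e-6 * (1 + 2e-6 * rng.standard_normal(1024))
for m in (1, 2, 4, 8, 16):
    print(m, allan_deviation(readings, m))

For white noise the Allan deviation decreases roughly as 1/√m, which is the behaviour that justifies taking the standard deviation of the mean as the Type A estimate at τ_m=144 s.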
http://arxiv.org/abs/2407.13206v1
20240718064331
Double stochastic opinion dynamics with fractional inflow of new opinions
[ "Vygintas Gontis" ]
physics.soc-ph
[ "physics.soc-ph" ]
§ ABSTRACT A recent analysis of empirical limit order flow data highlights the necessity for a more refined order flow model that integrates the power-law distribution of limit order cancellation times. These cancellation times follow a discrete probability mass function derived from the Tsallis q-exponential distribution, or equivalently, the second form of the Pareto distribution. By combining fractional Lévy stable motion as the model for limit order inflow with the power-law distribution for cancellation times, we propose an innovative approach to modeling order imbalance in financial markets. We extend this model to a broader context, illustrating its applicability to opinion dynamics in social systems where opinions have a finite lifespan. The proposed model exemplifies a stochastic time series characterized by stationary increments and broken self-similarity. Consequently, it offers a novel framework for testing methods to evaluate long-range dependence in such time series. § INTRODUCTION The debate in the scientific community about power-law behavior in social and physical systems is long-standing <cit.>. Usually, one observes power-law behavior at the macro level of the system and looks for a microscopic interpretation of the observed phenomena. From the mathematical point of view, the power-law is the only function satisfying the scale-free property p(b x)=f(b)p(x) <cit.>. Thus, there is a close relation between the self-similarity of stochastic processes and power laws <cit.>. Power-law statistical properties are a characteristic feature of social and financial systems. The measures of long-range memory based on self-similarity are ambiguous, as Markov processes with power-law statistical properties can exhibit long-range memory, including slowly decaying auto-correlation <cit.>. The financial markets provide us with empirical limit order book (LOB) data that exhibit power-law statistical properties as well <cit.>. From the econophysics perspective, it is preferable to provide a microscopic interpretation of the econometric models, which primarily serve as macroscopic descriptions of intricate social systems. These models often rely on self-similarity and long-range dependence assumptions. To advance our comprehension of long-range memory in social systems, it becomes imperative to juxtapose macroscopic modeling with empirical analyses. In our prior review <cit.>, we raised the question of whether the observed long-range memory in social systems results from genuine long-range memory processes or is merely an outcome of the non-linearity inherent in Markov processes. In this contribution, we demonstrate how vital the assumptions of fractional Lévy stable motion (FLSM) are and how easily the proposed model of opinion dynamics breaks these assumptions. The proposed model is empirically grounded in the order disbalance time series of the financial markets <cit.>. As recorded in order books, market-order flows exhibit long-range persistence, attributed to the order-splitting behavior of individual traders <cit.>. This discovery reinforces the presence of genuine long-range memory in financial systems, as recently confirmed in a comprehensive investigation <cit.>. The order-splitting behavior of individual traders should be discernible in the sequence of submitted limit orders. 
Section <ref> presents a short description of the limit order time series serving as the background for the more general interpretation of opinion dynamics. In Section <ref>, we present a model of power-law waiting times originating from a system of heterogeneous agents, and in Section <ref>, we provide evidence that the self-similarity assumption is broken when the cancellation of opinions is included in the model. We discuss our results and provide conclusions in Section <ref>. § MODELING LIMIT ORDER FLOW AND/OR OPINION DYNAMICS In our work <cit.>, we delved into the sequence of limit order submissions to the market, denoted as X_L(j), X_L(j)=∑_i=1^j v_i = ∑_i=1^j Y_L(i), where v_i represents the volume of the submitted limit order. We examined the series X_L(j) through the lens of FLSM, as the probability density functions (PDF) of order volumes v_i have power-law tails, and we documented fluctuations of the memory parameter for various stocks in the region d ≃ 0.19÷0.34. Despite the rough approximation of the PDF of volumes v_i by the Lévy stable distribution, the time series X_L(j) can be considered FLSM-like. The series X_L(j) functions as a macroscopic measure of opinion in the order flow and exhibits long-range dependence due to the heterogeneity of agents. Nevertheless, it is more prudent to consider a measure of traders' macro opinion incorporating the events of order cancellation and execution. Therefore, we explore an alternative sequence of order flow, X(j)=∑_i1⩽ j < i2 v_i1,i2=∑_i=1^j Y(i), where the first sum is over all live limit orders, i.e., all limit order volumes v_i1,i2 submitted before event j and waiting for cancellation or execution. A sequence of limit order submissions of length N generates a series of order disbalance X(j) of length 2N, since each submission is paired with a cancellation or execution event. Notably, series X(j) displays a few crucial differences from series X_L(j). Firstly, the empirical sequence X(j) appears bounded, while X_L(j) is unbounded. Secondly, we obtain contradictory results when evaluating the memory parameter d using the assumption of FLSM for the series X(j). In our previous work <cit.>, we concluded that the time series defined in (<ref>) as order disbalance is not FLSM-like. Consequently, the persistent limit order submission flow, or long-range dependence, is concealed from econometric methods defining memory in the time series of order disbalance X(j). In the quest for a new interpretation of the order disbalance series X(j), we have introduced the concept of a discrete q-exponential probability mass function P_λ,q(k)=SP_λ,q(k-1)-SP_λ,q(k)=(1 + (q-1) (k-1) λ)^((2-q)/(1-q)) - (1 + (q-1) k λ)^((2-q)/(1-q)), as a q-extension of the geometric distribution, grounded in the theoretical foundations of generalized Tsallis statistics <cit.>. This distribution allows for a more accurate fit of empirical limit order cancellation times, revealing their weak sensitivity to order sizes and price levels. The fitted discrete q-exponential PMF parameters, λ=0.3 and q=1.5, remain consistent across the ten stocks and trading days analyzed. The power-law distribution of cancellation (waiting) times might originate from a stochastic queueing model in which tasks are executed according to a continuous-valued priority <cit.>. Instead, in Section <ref>, we derive the power-law of waiting times from the heterogeneity of agents. 
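For readers who wish to experiment with the cancellation-time distribution, the discrete q-exponential PMF of Equation (<ref>) and an inverse-transform sampler can be coded in a few lines. The Python sketch below uses the empirical parameters λ=0.3 and q=1.5; it is an illustration, not the code used in <cit.>.

import numpy as np

LAM, Q = 0.3, 1.5    # empirical parameters lambda and q

def q_exp_pmf(k, lam=LAM, q=Q):
    # discrete q-exponential PMF, P(k) = S(k-1) - S(k), k = 1, 2, ...
    expo = (2.0 - q) / (1.0 - q)
    S = lambda x: (1.0 + (q - 1.0) * lam * x) ** expo   # survival function P(K > x)
    return S(np.asarray(k) - 1) - S(np.asarray(k))

def q_exp_sample(size, lam=LAM, q=Q, seed=1):
    # inverse-transform sampling of cancellation (waiting) times
    u = np.random.default_rng(seed).random(size)
    x = (u ** ((1.0 - q) / (2.0 - q)) - 1.0) / ((q - 1.0) * lam)
    return np.floor(x).astype(int) + 1

k = np.arange(1, 10_001)
print(q_exp_pmf(k).sum())      # ~0.999; the remaining mass sits in the power-law tail
print(q_exp_sample(5))         # a few random lifetimes

For q=1.5 the survival function reduces to (1+0.15k)^(-1), so the sampled lifetimes have the κ=2 power-law tail discussed in the following section.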
We propose a relatively straightforward model of limit order flow disbalance by combining fractional Lévy stable limit order inflow with the q-exponential lifetime distribution. This model arises as an interpretation of the empirical limit order flow analysis considered in <cit.> and is an illustrative example of broader modeling of social systems. Let us generalize the interpretation of the proposed model, considering it as a possible version of opinion dynamics, probably applicable to other social systems as well. In its original interpretation, the model involves two random sequences: (a) A sequence of limit order volumes v_i generated as ARFIMA{0,d,0}{α,N}, where d is the memory parameter, α is the stability index, and N is the length of the sequence. (b) A corresponding independent sequence of limit order cancellation times, of the same length N, generated using the probability mass function (PMF) P_λ,q(k) defined by Equation (<ref>). For the extended interpretation of the model, we assume that every v_i is an opinion weight, positive for the first of the two possible opinions (buy) and negative for the second (sell). The limit order cancellation time measured in the event space, k=i2-i1, then represents the opinion lifetime. Though the model is proposed for analyzing limit order flow in the financial markets, its extended interpretation might help investigate other cases of weighted opinions in social systems. In this study, the proposed model is helpful as an example of a relatively simple time series constructed using the ARFIMA sequence but exhibiting properties outside the assumption of self-similarity. With these two independent sequences, we can calculate the model time series X(j)=∑_i=1^j Y(i) defined by the sequence v_i1,i2, see Equation (<ref>). Here, the opinion submission event number i1 and its cancellation event number i2 can be calculated for every v_i of sequence (a) and the corresponding k of sequence (b). The generated random sequence represents the artificial analog of the order disbalance time series used for comparison with the empirical order flow in the financial markets <cit.>. We achieved good correspondence of the artificial model with empirical data, choosing the parameters of the artificial model as follows: α=1.8; λ=0.3; q=1.5 <cit.>. Note that for other applications, the model can be simplified by replacing the sequence of v_i with unit weights vu_i=Sign(v_i), equal to -1 if v_i<0, 0 if v_i=0, and 1 if v_i>0. We will denote these series with an additional index S, for example, X_S(j). One more direction of possible simplification would be the choice of q=1 in Equation (<ref>), giving us a geometric distribution, lim_q → 1 P_λ,q(k) = exp(-kλ)(exp(λ)-1) = (1-p)^(k-1) p, where we denote p=1-exp(-λ). The geometric distribution, as a discrete version of the exponential distribution, is the most common choice for the waiting time in many physical and social systems. § HETEROGENEITY OF AGENTS AND POWER-LAW OF WAITING TIME Though power-law waiting times appear in the stochastic queueing model with a continuous-valued priority <cit.>, one more reasonable explanation of the power-law could be the heterogeneity of agents. Trading agents in the financial markets have very different assets at their disposal. Thus, the trading activity of agents varies on a wide scale, and the lifetimes of their orders are very different. We propose a simple approach to combining the heterogeneity of agents, seeking to derive the power-law distribution of limit order cancellation times. 
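A minimal sketch of the construction of the disbalance series X(j) is given below. Two simplifications are made relative to the model described above: the weights v_i are iid heavy-tailed variables instead of an ARFIMA{0,d,0}{α,N} sequence, and the event clock is identified with the submission counter, so the series has length N rather than 2N. All parameter values other than λ and q are illustrative.

import numpy as np

N, lam, q = 100_000, 0.3, 1.5
rng = np.random.default_rng(2)

# iid heavy-tailed weights as a stand-in for the ARFIMA / Levy-stable inflow
v = rng.standard_t(df=3, size=N)

# q-exponential lifetimes by inverse-transform sampling (same PMF as above)
u = rng.random(N)
lifetimes = np.floor((u ** ((1 - q) / (2 - q)) - 1) / ((q - 1) * lam)).astype(int) + 1

# order i is alive on events [i, i + lifetimes[i]); X(j) sums the live weights
delta = np.zeros(N + lifetimes.max() + 1)
np.add.at(delta, np.arange(N), v)                  # +v_i at submission
np.add.at(delta, np.arange(N) + lifetimes, -v)     # -v_i at cancellation
X = np.cumsum(delta)[:N]
print(X.std(), np.abs(X).max())

Because each opinion eventually expires, X(j) stays bounded even though the cumulative inflow X_L(j) does not, which is the qualitative feature exploited in the self-similarity analysis below.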
Assume we have n categories of agents with different limit order submission (cancellation) rates. The lowest rate is one limit order per trading day (the duration of the time series investigated). Let us denote the corresponding probability as 0<p_1=η/n<1. Then agents who submit two orders per day have a twice higher probability, and agents who submit i limit orders are characterized by the probability p_i=i η/n. The most active traders submit n limit orders with probability p_n=η. Continuing with such a definition, we have an individual geometric distribution of the waiting (cancellation) time for the i-th category of agents, P_i(k)=(1-p_i)^(k-1) p_i. Finally, we have to average this PMF over all categories of agents. It is clear from our assumptions that the probability of order arrival is the same for all categories of agents, as the agent order submission frequency is proportional to the index i while the number of agents in a category is inversely proportional to it – Zipf's law. Thus, we can write the PMF of the order (opinion) waiting time for the whole ensemble of agents as follows: P_η,n(k) = ∑_i=1^n (1-η i/n)^(k-1) η i/n^2 ≃ (η/n^2)∑_i=1^n i exp(-η(k-1)i/n) = η (n exp(-η (k-1)) - (1+n)exp(-η (k-1)+η (k-1)/n)+exp(η (k-1)/n))/(n^2 (1-exp(η (k-1)/n))^2). Seeking to clarify the result, Eq. (<ref>), one can find the limit lim_n→∞ P_η,n(k)=(1-exp(-η (k-1)) (1+η (k-1)))/(η (k-1)^2), giving evidence of the power-law with the exponent κ=2. We can recover the power-law nature of the PMF Eq. (<ref>) by plotting it in Fig. <ref> together with the partial sums P_η,n,m^Geom(k) = (η/n^2)∑_i=1^2^m i (1-η i/n)^(k-1) and P_η,n,m^Exp(k) = (η/n^2)∑_i=1^2^m i exp(-η(k-1)i/n), where we use the set m={0,1,2,...,10} and n=2^10=1024. Note that the green and red lines are indistinguishable, as the results of Eqs. (<ref>) and (<ref>) coincide. The black line coincides with both partial sums when m=10. One can note that our assumptions incorporating Zipf's law lead to a power-law of cancellation (waiting) times with exponential cutoffs on both sides. This natural restriction comes from the fixed number of agent categories n or the related number of opinions (orders) submitted, N=n (n+1)/4. The most important finding is that the power-law exponent in Eq. (<ref>) is κ=2. From the relation of the q-exponential distribution with the Pareto distribution, we can easily find that the related exponent is q=1.5, as we have defined empirically in <cit.>. Thus, the presented description of the PMF of waiting times for the ensemble of heterogeneous agents strengthens the conclusion that the power-law exponent q=1.5 is a stylized fact of the financial markets. A more detailed investigation of empirical cancellation times using the proposed PMF (<ref>) would be helpful. § SELF-SIMILARITY ANALYSIS OF PROPOSED MODEL Continuing our efforts to understand long-range memory in social systems <cit.>, it becomes essential to compare the macroscopic description with empirical analyses and agent-based modeling. The empirical investigation of volatility, trading activity, and order flow in the financial markets has provided solid ground for empirical investigations of long-range memory properties <cit.>. Various econometric models with fractional noise have been proposed to describe volatility time series <cit.>. However, from the perspective of econophysics, these models primarily serve as macroscopic descriptions of complex social systems, often based on ad hoc assumptions of long-range memory. 
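The κ=2 tail of the ensemble PMF can be checked numerically by comparing the finite-n sum with the n→∞ expression and with the pure power law 1/(η(k-1)^2). In the sketch below, η=0.1 is an illustrative choice (η is not fixed in the text), and n=1024 as in Fig. <ref>.

import numpy as np

n, eta = 1024, 0.1
i = np.arange(1, n + 1)

def pmf_exact(k):
    # finite-n ensemble PMF, Eq. for P_{eta,n}(k)
    return eta / n**2 * np.sum(i * (1 - eta * i / n) ** (k - 1))

def pmf_limit(k):
    # n -> infinity expression
    a = eta * (k - 1)
    return (1 - np.exp(-a) * (1 + a)) / (eta * (k - 1) ** 2)

for k in (10, 100, 1000):
    print(k, pmf_exact(k), pmf_limit(k), 1 / (eta * (k - 1) ** 2))

For 1/η ≪ k ≪ n/η the three values agree, which is the power-law region bounded by the exponential cutoffs mentioned above.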
As a result, despite applying advanced trading algorithms and machine learning techniques, predicting stock price movements remains challenging for researchers <cit.>. Here, we will demonstrate that the requirement of self-similarity, widely used in modeling long-range dependence, is a strong assumption that the proposed opinion dynamics model easily breaks. Usually, econometric methods are used without questioning the assumption of self-similarity for the time series investigated; see <cit.> for a more detailed consideration of the problem. The theory of stochastic time series is based on self-similar processes with stationary increments; thus, we need to recall these concepts. A stochastic time series X(t) can be considered self-similar if the scaling relation between two distributions is fulfilled, X(τ t) ∼τ^H X(t), where ∼ means that the two distributions are the same for any τ>0 and t>0. One more requirement is the stationarity of increments, X(t + τ) − X(t) ∼ X(τ) − X(0) for any τ>0 and t>0. A self-similar process with stationary increments has self-affine increments, X(t + c τ) − X(t) ∼τ^H (X(t + c) − X(t)) for any c>0 <cit.>. All these properties are defined using equality in distribution; thus, the most straightforward estimation of H should also be based on the equality in distribution. By testing the distributions, we can discriminate cases when series having stationary increments deviate from the requirement of self-similarity. It is obvious that in the proposed modeling of limit order flow X_L(j) and opinion disbalance X(j), the increments are stationary, as they originate from the Lévy stable distribution. Following the definitions above, we can write the condition of self-similarity as |X(t + τ) − X(t)| ∼τ^H |X(1) − X(0)|. For the comparison of distributions, we use the Kolmogorov-Smirnov (KS) two-sample test <cit.> and calculate the KS distance D, D=sup_x|F_τ_1(x) − F_τ_k(x)|, where the cumulative empirical distribution functions F_τ_i,H(x), for the integer sequence i=0,1,2,... and the corresponding sequence of lags τ_i=2^i, are defined as F_τ_i,H(x) = P[|X(t + τ_i) − X(t)|/τ_i^H ≤ x]. From the definition of self-similarity (<ref>), it follows that we should get the same H minimizing the distance (<ref>) for any τ. If one gets various values of H for different τ values, the requirement of self-similarity is not fulfilled. The proposed model of opinion dynamics is a good example of a time series that demonstrates how self-similarity is broken when opinion cancellation is introduced into a self-similar series, FLSM, of opinion inflow X_L(t). We generate the FLSM series X_L(j) with parameters α=1.8, d=0.3, N=200 000 and the corresponding series of opinion durations (waiting times) k(j) using (<ref>) with parameters q=1.5 and λ=0.3. Then we calculate the series X(j) of length 2N of opinion disbalance (<ref>). In Fig. <ref> we compare the numerically calculated KS distances D(H), Eq. (<ref>), as functions of H for the series X_L(j), sub-figure (a); for the series X(j), sub-figure (b); for the series X_S,L(j), sub-figure (c); and for the series X_S(j), sub-figure (d). The time series X_L(j) is self-similar, as follows from its definition, while the series X(j) is not self-similar, as we get a set of various H values for the different values of τ_i=2·2^i. Though in the case of the simplified model taking only the signs of volumes, see Eq. (<ref>), the KS distance D(H) is less sensitive to H, the numerical results confirm that we can consider the series X_S,L(j) as self-similar, while the series X_S(j) is not self-similar. 
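The KS-based self-similarity check described above is easy to implement. The sketch below estimates, for each lag τ, the exponent H that minimizes the KS distance between the rescaled increment distributions; a plain Gaussian random walk is used as a self-similar stand-in, since a fractional Lévy stable generator is not included here. For a self-similar series the minimizing H is the same for all τ, which is exactly the criterion used in Fig. <ref>.

import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance between samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    Fa = np.searchsorted(a, grid, side="right") / a.size
    Fb = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(Fa - Fb))

def D_of_H(X, H, tau, tau_ref=1):
    """KS distance between |increments| at lags tau and tau_ref, rescaled by tau^H."""
    inc = lambda t: np.abs(X[t:] - X[:-t])
    return ks_distance(inc(tau) / tau**H, inc(tau_ref) / tau_ref**H)

rng = np.random.default_rng(3)
X = np.cumsum(rng.standard_normal(200_000))        # self-similar test series, H = 1/2
Hs = np.linspace(0.3, 0.7, 41)
for tau in (8, 64, 512):
    best = Hs[np.argmin([D_of_H(X, H, tau) for H in Hs])]
    print(tau, best)       # all close to 0.5 for the random walk

Replacing X by a disbalance series built as in the previous sketch produces lag-dependent minimizing H values, reproducing the broken self-similarity reported here.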
From our point of view, this procedure for testing self-similarity should be applied to any observed time series. Nevertheless, we must admit that researchers use various methods to estimate the self-similarity parameter H of observed time series without testing the self-similarity requirement itself <cit.>. Much more attention should therefore be paid to developing new methods for testing the self-similarity assumption. The method we proposed here appears less accurate for the simplified series X_S,L(j) and X_S(j), as the numerically calculated functions D(H) reveal a somewhat fractured structure. In Table <ref>, we list the Hurst parameter evaluation results obtained using different methods for the model series X_L(j), X(j), X_S,L(j), and X_S(j). See <cit.> for more detailed information about the estimation of the mean square displacement (MSD) and of H using the absolute value estimator (AVE) or Higuchi's method. The table shows that formally evaluated Hurst parameters can give misleading results when used to define persistence and long-range dependence. Though all series are generated with the same memory parameter d, the correct interpretation of self-similarity is compulsory for understanding the memory effects in these time series. In conclusion, the artificial order disbalance and/or opinion dynamics time series model provides valuable insights into the persistence and memory properties of the limit order flow in financial markets. The comparison with empirical data demonstrates the usefulness of the model and supports the conclusion that the q-exponential nature of limit order cancellation times contributes to the observed persistence in the order disbalance time series. § DISCUSSION AND CONCLUSIONS In our previous work <cit.>, we introduced the concept of a discrete q-exponential distribution, see Equation (<ref>), as a q-extension of the geometric distribution, based on the theoretical foundations of generalized Tsallis statistics <cit.>. This distribution provided an acceptable fit of the empirical limit order cancellation times, revealing their weak sensitivity to order sizes and price levels. The fitted discrete q-exponential PMF with parameter q=1.5 has proven consistent across the ten stocks and trading days analyzed. From the equivalence of the q-exponential distribution to the Pareto distribution of the second kind <cit.>, we know that these distributions have a power-law tail with the exponent κ=1/(q-1)=2. In this contribution, we propose a heterogeneous agent model to derive this particular power-law with exponent κ=2. We base our idea on ranking trading agents according to their activity during selected time intervals, such as one day. Thus, one can assume having n categories of agents submitting i={1,2,...,n} limit orders. Since every order is canceled or executed, it is natural to assume that the geometric PMF P_i(k) = (1 - η i/n)^(k-1) η i/n describes the lifetime k of the limit orders of agent group i. If the number of agents in every group i is inversely proportional to the group's index (Zipf's law), the probabilities P_i(k) enter with equal weights when averaging the waiting times over the whole set of limit orders. Finally, we get the explicit form of the PMF of the cancellation (waiting) time (<ref>), which explains the empirically defined power-law property of limit order cancellation times <cit.>. We generalize the combination of two independent random sequences, ARFIMA{0,d,0}{α, N} and P_λ,q(k), Eq. (<ref>), into the disbalance series X(j) as a model of opinion dynamics and investigate its long-range dependence. 
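For completeness, a simple absolute-moment scaling estimate of the Hurst parameter is sketched below; it is one member of the AVE family referred to in Table <ref>, though not necessarily the exact estimator used there. Applying such formal estimators to X(j) without the self-similarity check above is precisely what produces the misleading values discussed in this section.

import numpy as np

def hurst_abs_moment(X, lags=(1, 2, 4, 8, 16, 32, 64)):
    """Hurst exponent from the scaling E|X(t+tau)-X(t)| ~ tau^H (absolute-moment method)."""
    lags = np.asarray(lags)
    m = np.array([np.mean(np.abs(X[l:] - X[:-l])) for l in lags])
    H, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return H

rng = np.random.default_rng(4)
print(hurst_abs_moment(np.cumsum(rng.standard_normal(100_000))))   # ~0.5 for a random walk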
Indeed, the model first of all helps to understand the properties of limit order disbalance in the financial markets, as it originates from the analysis of empirical time series <cit.>. A more general interpretation of the proposed double stochastic time series might help understand the complexity of long-range dependence in other social systems and empirical time series. The proposed model serves as an example of a time series with hidden long-range dependence. Thus, we propose a method of self-similarity testing and demonstrate that the series X(j) and X_S(j) are not self-similar. Though the result is predictable, the proposed method might be useful in analyzing other empirical time series before using widely accepted methods of self-similar series analysis. Our study contributes to a better understanding of order disbalance time series and their memory effects in financial markets. Furthermore, the combination of FLSM and the q-exponential distribution proves to be a promising approach for modeling social systems, which can be explored further in future research. In conclusion, by bridging the gap between theory and empirical observations, we contribute to developing more accurate models and deeper insights into the behavior of financial markets and social systems. § ABBREVIATIONS The following abbreviations are used in this manuscript: ARFIMA – Auto-regressive fractionally integrated moving average; AVE – Absolute value estimator; FBM – Fractional Brownian motion; FGN – Fractional Gaussian noise; FLSM – Fractional Lévy stable motion; MSD – Mean squared displacement; PDF – Probability density function; PMF – Probability mass function. Newman2005CPh Newman, M. Power laws, Pareto distributions and Zipf's law. Contemporary Physics 2005, 46, 323–351. <https://doi.org/10.1080/00107510500052444>. Kumamoto2018FPh Kumamoto, S.I.; Kamihigashi, T. Power Laws in Stochastic Processes for Social Phenomena: An Introductory Review. Frontiers in Physics 2018, 6. <https://doi.org/10.3389/fphy.2018.00020>. Newberry2019PhysRevLett Newberry, M.G.; Savage, V.M. Self-Similar Processes Follow a Power Law in Discrete Logarithmic Space. Physical Review Letters 2019, 122. <https://doi.org/10.1103/physrevlett.122.158303>. Gontis2004PhysA Gontis, V.; Kaulakys, B. Multiplicative point process as a model of trading activity. Physica A: Statistical Mechanics and its Applications 2004, 343, 505–514. <https://doi.org/10.1016/j.physa.2004.05.080>. McCauley2006PhysA Bassler, K.; Gunaratne, G.; McCauley, J. Markov processes, Hurst exponents, and nonlinear diffusion equations: With application to finance. Physica A 2006, 369, 343–353. Gontis2006JStatMech Gontis, V.; Kaulakys, B. Long-range memory model of trading activity and volatility. Journal of Statistical Mechanics 2006, P10016, 1–11. <https://doi.org/10.1088/1742-5468/2006/10/p10016>. McCauley2007PhysA McCauley, J.L.; Gunaratne, G.H.; Bassler, K.E. Hurst exponents, Markov processes, and fractional Brownian motion. Physica A 2007, 379, 1–9. <https://doi.org/10.1016/j.physa.2006.12.028>. Gontis2008PhysA Gontis, V.; Kaulakys, B.; Ruseckas, J. Trading activity as driven Poisson process: comparison with empirical data. Physica A 2008, 387, 3891–3896. <https://doi.org/10.1016/j.physa.2008.02.078>. Micciche2009PRE Micciche, S. Modeling long-range memory with stationary Markovian processes. Physical Review E 2009, 79, 031116. Micciche2013FNL Micciche, S.; Lillo, F.; Mantegna, R. The role of unbounded time-scale in generating long-range memory in additive Markovian processes. 
Fluctuation and Noise Letters 2013, 12, 1340002. Ruseckas2011PRE Ruseckas, J.; Kaulakys, B. Tsallis distributions and 1/f noise from nonlinear stochastic differential equations. Phys.Rev.E 2011, p. 051125. Kononovicius2015PhysA Kononovicius, A.; Ruseckas, J. Nonlinear GARCH model and 1/f noise. Physica A 2015, 427, 74–81. <https://doi.org/10.1016/j.physa.2015.02.040>. Gould2013QF Gould, M.D.; Porter, M.A.; Williams, S.; McDonald, M.; Fenn, D.J.; Howison, S.D. Limit order books. Quantitative Finance 2013, 13, 1709–1742. <https://doi.org/10.1080/14697688.2013.803148>. Kazakevicius2021Entropy Kazakevicius, R.; Kononovicius, A.; Kaulakys, B.; Gontis, V. Understanding the Nature of the Long-Range Memory Phenomenon in Socioeconomic Systems. Entropy 2021, 23. <https://doi.org/10.3390/e23091125>. Gontis2023FractFrac Gontis, V. Discrete q-Exponential Limit Order Cancellation Time Distribution. Fractal and Fractional 2023, 7, 581. <https://doi.org/10.3390/fractalfract7080581>. Gontis2022CNSNS Gontis, V. Order flow in the financial markets from the perspective of the Fractional Lévy stable motion. Communications in Nonlinear Science and Numerical Simulation 2022, 105, 106087. <https://doi.org/https://doi.org/10.1016/j.cnsns.2021.106087>. Lillo2005PhysRevE Lillo, F.; Mike, S.; Farmer, J.D. Theory for long memory in supply and demand. Phys. Rev. E 2005, 71, 066122. <https://doi.org/10.1103/PhysRevE.71.066122>. Sato2023PhysRevLett Sato, Y.; Kanazawa, K. Inferring Microscopic Financial Information from the Long Memory in Market-Order Flow: A Quantitative Test of the Lillo-Mike-Farmer Model. Physical Review Letters 2023, 131. <https://doi.org/10.1103/physrevlett.131.197401>. Tsallis1988-ku Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. Barabasi2005Nature Barabási, A.L. The origin of bursts and heavy tails in human dynamics. Nature 2005, 435, 207–211. <https://doi.org/10.1038/nature03459>. Grinstein2008PRE Grinstein, G.; Linsker, R. Power-law and exponential tails in a stochastic priority-based model queue. Physical Review E 2008, 77. <https://doi.org/10.1103/physreve.77.012101>. Baillie1996JE Baillie, R.; Bollerslev, T.; Mikkelsen, H. Fractionally integrated generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 1996, 74, 3–30. <https://doi.org/10.1016/S0304-4076(95)01749-6>. Engle2001QF Engle, R.; Patton, A. What good is a volatility model? Quantitative Finance 2001, 1, 237–245. <https://doi.org/10.1088/1469-7688/1/2/305>. Plerou2001QF Plerou, V.; Gopikrishnan, P.; Gabaix, X.; Amaral, L.; Stanley, H. Price fluctuations, market activity and trading volume. Quantitative Finance 2001, 1, 262–269. <https://doi.org/10.1088/1469-7688/1/2/308>. Gabaix2003Nature Gabaix, X.; Gopikrishnan, P.; Plerou, V.; Stanley, H.E. A theory of power law distributions in financial market fluctuations. Nature 2003, 423, 267–270. <https://doi.org/10.1038/nature01624>. Ding2003Springer In Processes with Long-Range Correlations: Theory and Applications; Rangarajan, G.; Ding, M., Eds.; Springer, 2003; Vol. 621, Lecture Notes in Physics, pp. XVIII, 398. Ding1993JEmpFin Ding, Z.; Granger, C.W.J.; Engle, R.F. A long memory property of stock market returns and a new model. Journal of Empirical Finance 1993, 1, 83–106. Bollerslev1996Econometrics Bollerslev, T.; H.-O. Mikkelsen, H.O. Modeling and pricing long-memory in stock market volatility. Journal of Econometrics 1996, 73, 151–184. Giraitis2009 Giraitis, L.; Leipus, R.; Surgailis, D. 
ARCH(∞) models and long memory. In Handbook of Financial Time Series; Anderson, T.G.; Davis, R.A.; Kreis, J.; Mikosh, T., Eds.; Springer Verlag: Berlin, 2009; pp. 71–84. <https://doi.org/10.1007/978-3-540-71297-8_3>. Conrad2010 Conrad, C. Non-negativity conditions for the hyperbolic GARCH model. Journal of Econometrics 2010, 157, 441–457. Arouri2012 Arouri, M.E.H.; Hammoudeh, S.; Lahiani, A.; Nguyen, D.K. Long memory and structural breaks in modeling the return and volatility dynamics of precious metals. The Quarterly Review of Economics and Finance 2012, 52, 207–218. Tayefi2012 Tayefi, M.; Ramanathan, T.V. An overview of FIGARCH and related time series models. Austrian Journal of Statistics 2012, 41, 175–196. <https://doi.org/10.17713/ajs.v41i3.172>. Alec2015QF Kercheval, A.N.; Zhang, Y. Modelling high-frequency limit order book dynamics with support vector machines. Quantitative Finance 2015, 15, 1315–1329. <https://doi.org/10.1080/14697688.2015.1032546>. Kumar2018IEEE Kumar, I.; Dogra, K.; Utreja, C.; Yadav, P. A Comparative Study of Supervised Machine Learning Algorithms for Stock Market Trend Prediction. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT). IEEE, 2018. <https://doi.org/10.1109/icicct.2018.8473214>. Zaznov2022Mathematics Zaznov, I.; Kunkel, J.; Dufour, A.; Badii, A. Predicting Stock Price Changes Based on the Limit Order Book: A Survey. Mathematics 2022, 10. <https://doi.org/10.3390/math10081234>. Gomez2022FinInnov Gómez-Águila, A.; Trinidad-Segovia, J.E.; Sánchez-Granero, M.A. Improvement in Hurst exponent estimation and its application to financial markets. Financial Innovation 2022, 8. <https://doi.org/10.1186/s40854-022-00394-x>. TrinidadSegovia2012PhisA Trinidad Segovia, J.; Fernández-Martínez, M.; Sánchez-Granero, M. A note on geometric method-based procedures to calculate the Hurst exponent. Physica A: Statistical Mechanics and its Applications 2012, 391, 2209–2214. <https://doi.org/10.1016/j.physa.2011.11.044>. Hodges1958Matematik Hodges, J.L. The significance probability of the smirnov two-sample test. Arkiv för Matematik 1958, 3, 469–486. <https://doi.org/10.1007/bf02589501>. delaBarra2021EPJB de la Barra, E.; Vega-Jorquera, P. On q-pareto distribution: some properties and application to earthquakes. The European Physical Journal B 2021, 94. <https://doi.org/10.1140/epjb/s10051-021-00045-7>. Gontis2020JStat Gontis, V. Long-range memory test by the burst and inter-burst duration distribution. Journal of Statistical Mechanics 2020, 2020, 093406. <https://doi.org/10.1088/1742-5468/abb4db>.
http://arxiv.org/abs/2407.13603v1
20240718154327
dzStance at StanceEval2024: Arabic Stance Detection based on Sentence Transformers
[ "Mohamed Lichouri", "Khaled Lounnas", "Khelil Rafik Ouaras", "Mohamed Abi", "Anis Guechtouli" ]
cs.CL
[ "cs.CL" ]
dzStance at StanceEval2024: Arabic Stance Detection based on Sentence Transformers Mohamed Lichouri, Khaled Lounnas, Khelil Rafik Ouaras, Mohamed Abi, Anis Guechtouli ================================================================== § ABSTRACT This study compares Term Frequency-Inverse Document Frequency (TF-IDF) features with Sentence Transformers for detecting writers' stances—favorable, opposing, or neutral—towards three significant topics: COVID-19 vaccine, digital transformation, and women empowerment. Through empirical evaluation, we demonstrate that Sentence Transformers outperform TF-IDF features across various experimental setups. Our team, dzStance, participated in a stance detection competition, achieving the 13th position (74.91%) among 15 teams in Women Empowerment, 10th (73.43%) in COVID Vaccine, and 12th (66.97%) in Digital Transformation. Overall, our team's performance ranked 13th (71.77%) among all participants. Notably, our approach achieved promising F1-scores, highlighting its effectiveness in identifying writers' stances on diverse topics. These results underscore the potential of Sentence Transformers to enhance stance detection models for addressing critical societal issues. § INTRODUCTION Stance detection is a pivotal task in Natural Language Processing (NLP) that involves determining the position or attitude expressed in a text regarding a specific topic or entity. This task is crucial for a variety of applications, including sentiment analysis, opinion mining, and social media monitoring, where understanding public sentiment and opinion is essential <cit.>. With the exponential growth of user-generated content on social media and online news platforms, there is a pressing need to develop sophisticated tools that can analyze and interpret the myriad of perspectives present in these texts. The Mawqif 2022 shared task, an initiative focused on Arabic stance detection, addresses this need by challenging participants to detect stances towards three contemporary topics: COVID-19 vaccine, digital transformation, and women empowerment. This task is significant not only because it addresses critical societal issues but also because it highlights the complexities of processing Arabic text, which is characterized by its rich morphology, diverse dialects, and intricate syntax <cit.>. Traditional methods for stance detection have largely relied on feature extraction techniques such as Term Frequency-Inverse Document Frequency (TF-IDF). These methods transform textual data into a numerical format, allowing machine learning models to process and analyze the text. Our initial experiments leverage TF-IDF features due to their simplicity and proven effectiveness in various text classification tasks <cit.>. Despite their utility, TF-IDF-based methods have limitations, particularly in capturing the deeper semantic relationships and contextual nuances within text. To address these limitations, recent research has increasingly focused on deep learning techniques. Long Short-Term Memory (LSTM) networks have demonstrated their ability to handle sequential data and capture dependencies in text, which are crucial for understanding stances <cit.>. The advent of transformer-based models, such as BERT and Sentence Transformers, has revolutionized the field of NLP by offering robust methods to capture semantic and contextual information in text. These models utilize self-attention mechanisms to understand the relationships within text, making them particularly effective for nuanced tasks like stance detection <cit.>. 
Transformers have set new benchmarks in various NLP tasks by leveraging their ability to generate dense, context-aware representations of text, facilitating a deeper understanding of the underlying meaning and intent. Several studies have explored the use of BERT and other transformers for stance detection, demonstrating their superiority over traditional methods in capturing subtle and complex stances in text. For example, Alshahrani et al. <cit.> showed that BERT-based models outperform traditional approaches in detecting stances in English social media texts. However, these models often require extensive computational resources and fine-tuning, posing challenges for their application in resource-limited settings <cit.>. Building upon these advancements, our study explores a comparative analysis between traditional feature extraction methods and modern deep learning approaches for Arabic stance detection. In our experiments, we employ both TF-IDF and Sentence Transformers to detect stances towards the selected topics. Our participation in the Mawqif 2022 shared task allowed us to evaluate these methodologies rigorously, where our team, dzStance, achieved competitive results across the three topics, demonstrating the effectiveness of our approaches. In the following sections, we will delve deeper into our methodology and findings. Section <ref> provides an overview of the Mawqif dataset used in our study. Section <ref> describes our approach to stance detection, including the feature extraction techniques and model architectures we employed. Section <ref> presents our experimental results and discusses their implications. Finally, Section <ref> concludes the paper by summarizing the key insights and contributions of our work. § DATASET DESCRIPTION The Mawqif dataset <cit.>, utilized in the StanceEval 2024 shared task <cit.>, serves as a crucial resource for advancing natural language processing (NLP) in the domain of stance detection. This dataset comprises over 4,000 annotated text samples that encapsulate diverse stances—favorable, opposing, or neutral—on pertinent topics such as COVID-19 vaccine, digital transformation, and women empowerment. The significance of the Mawqif dataset lies in its ability to provide a comprehensive view of how different opinions and attitudes are expressed in Arabic text. This makes it invaluable for researchers who aim to evaluate and enhance stance detection models. By leveraging such a dataset, one can explore and refine models to better understand and process nuanced stances within varied contexts. Table <ref> offers a detailed breakdown of the dataset, illustrating the distribution of tweets across the specified topics. The dataset comprises a total of 3,502 tweets, with the distribution as follows: A closer look at the data statistics reveals notable class imbalances. For instance, the COVID-19 vaccine category includes nearly equal proportions of favorable (43.53%) and opposing (43.48%) tweets, with a smaller fraction being neutral (12.85%). In contrast, the digital transformation topic shows a predominance of favorable stances (76.77%), with fewer opposing (12.41%) and neutral (1.92%) tweets. Similarly, the women empowerment category also leans heavily towards favorable stances (63.95%), followed by opposing (31.18%) and neutral (4.96%) tweets. Such imbalances can pose significant challenges for model training and evaluation. 
Models trained on datasets with skewed class distributions may become biased towards the majority classes, leading to suboptimal performance on minority classes. Therefore, addressing these imbalances is critical to ensure the development of robust and fair models. Techniques like data resampling, class weighting, and the use of advanced algorithms capable of handling imbalance are essential strategies to mitigate these effects and enhance overall model performance. By understanding and leveraging the characteristics of the Mawqif dataset, researchers can effectively tackle the complexities of stance detection, contributing to the broader field of Arabic NLP and enabling more accurate and nuanced analysis of opinions and attitudes expressed in text. § PROPOSED SYSTEM In our proposed system [https://github.com/licvol/dzStanceEval_2024], we explore two distinct methodologies for feature extraction: a weighted union of TF-IDF features <cit.> and Sentence Transformers. These techniques offer complementary advantages, leveraging the strengths of traditional feature representation and cutting-edge deep learning architectures. Experiment 1: Traditional Machine Learning with TF-IDF Features In our first experiment, we focused on extracting features using the Term Frequency-Inverse Document Frequency (TF-IDF) approach, which is widely used in text classification tasks. We utilized scikit-learn's FeatureUnion module to combine different TF-IDF features, capturing both character-level and word-level information <cit.>. * N-gram Range: We experimented with various n-gram ranges to understand their impact on model performance. The ranges included: * (1,1): Unigrams * (1,2): Unigrams and bigrams * (1,3): Up to trigrams * (1,4): Up to 4-grams * (1,5): Up to 5-grams * (1,6): Up to 6-grams * (1,7): Up to 7-grams * (1,8): Up to 8-grams * (1,9): Up to 9-grams * (1,10): Up to 10-grams * Weighting: To further enhance the TF-IDF features, we incorporated weighting schemes. The weights were varied from 0.1 to 1.0 in steps of 0.1 to determine the optimal balance for capturing the intricacies of Arabic text. The best-performing weight configuration was identified and applied consistently across subsequent experiments (see Table <ref>). * Classifier: For classification, we employed the Linear Support Vector Classifier (LSVC) with a regularization parameter set to C=4. This choice was based on its ability to handle high-dimensional feature spaces effectively, which is crucial when dealing with the extensive n-gram features produced by TF-IDF. This combination of weighted TF-IDF features and LSVC forms the baseline for our stance detection system, aiming to capture both the surface-level and deeper linguistic patterns in Arabic text. Experiment 2: Leveraging Pre-trained Language Models (PLMs) In the second experiment, we explored the use of advanced pre-trained language models (PLMs) to enhance stance detection capabilities further. These models are pre-trained on vast amounts of text data and are adept at generating rich semantic representations of words and sentences. * Sentence Embeddings: We utilized Sentence Transformers, specifically the xlm-r-bert-base-nli-stsb-mean-tokens model, which excels at producing dense vector embeddings that encapsulate the overall meaning of sentences. These embeddings were used as input features for the classification task. * Classifier: For classification, we opted for a Logistic Regression (LR) model configured with the following hyperparameters: * max_iter=1000: This parameter sets the maximum number of iterations for the solver to converge. 
* multi_class='multinomial': This setting enables the classifier to handle multiple classes simultaneously, which is crucial for stance detection where multiple stance labels exist. * solver='lbfgs': The solver used for optimization, chosen for its efficiency in handling multiclass logistic regression problems. This approach leverages the powerful representations learned by Sentence Transformers, which are adept at capturing semantic nuances and contextual relationships within the text. By integrating these embeddings with a logistic regression classifier, we aim to improve the model's ability to discern subtle stance indicators in Arabic text. Combining these methodologies, our system aims to balance the strengths of traditional feature extraction with the advanced capabilities of modern pre-trained models. This hybrid approach is designed to address the linguistic complexity and variability of Arabic, providing robust stance detection across different contexts. § RESULTS AND DISCUSSION This section evaluates the performance of our stance detection system on the test set using the F1-score as the primary metric. We conducted two sets of experiments to compare the effectiveness of different feature extraction techniques and model configurations. Following these, we analyze our competitive performance in the stance detection challenge. §.§ Baseline Experiment To establish a benchmark, we employed a simple approach using Term Frequency-Inverse Document Frequency (TF-IDF) representation with unigram (1-gram) features and a Linear Support Vector Classifier (LSVC) with a linear kernel. This baseline model achieved an F1-score of 64.34%, providing a reference point for comparing the performance of more advanced models and feature combinations. §.§ Experiment 1: Weighted Union of TF-IDF Features In this experiment, we explored the impact of various n-gram ranges on the performance of the LSVC model. We employed the FeatureUnion module to create a weighted combination of TF-IDF features with different n-gram lengths. The weights were varied systematically from 0.1 to 1.0 to optimize feature importance. The optimal weight configuration (0.85, 0.85, 0.65) from this tuning was then used across all n-gram experiments. We tested a range of n-grams from single-word (1-gram) up to ten-word sequences, examining the effects of character-level and word-boundary-aware features. The results show a consistent trend: the F1-score improves as the n-gram range increases, reaching a peak of 66.20% with six-grams (ngram_range=(1,6)). This suggests that incorporating up to six-word sequences captures essential context and relationships, enhancing the model's performance in stance detection tasks. Interestingly, the performance slightly declines for n-grams longer than six-grams (e.g., ngram_range=(1,7) or ngram_range=(1,8)), possibly due to the introduction of noise or redundant information. These findings indicate that while expanding n-gram ranges can enrich the feature set, overly long sequences may adversely affect model accuracy. §.§ Experiment 2: Sentence Transformers The second experiment employed pre-trained language models to generate rich sentence embeddings. We used Sentence Transformers, specifically the 'xlm-r-bert-base-nli-stsb-mean-tokens' model, to create embeddings that encapsulate the semantic meaning of each sentence. These embeddings were then fed into a Logistic Regression (LR) classifier for stance detection. 
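To make the two configurations concrete, a minimal sketch of the corresponding scikit-learn/sentence-transformers pipelines is given below. It is illustrative only: apart from the quantities quoted in the text (the n-gram range, the weight configuration (0.85, 0.85, 0.65), C=4, the embedding model name, and the LR hyperparameters), the analyzer choices, the assignment of the three weights to particular vectorizers, and the toy data are assumptions.

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

# Hypothetical training data: Arabic tweets with their stance labels.
tweets = ["tweet text 1", "tweet text 2", "tweet text 3"]
labels = ["Favor", "Against", "None"]

# Experiment 1: weighted union of word- and character-level TF-IDF features
# (which weight belongs to which vectorizer is an assumption), LSVC with C=4.
tfidf_union = FeatureUnion(
    transformer_list=[
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 6))),
        ("char", TfidfVectorizer(analyzer="char", ngram_range=(1, 6))),
        ("char_wb", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 6))),
    ],
    transformer_weights={"word": 0.85, "char": 0.85, "char_wb": 0.65},
)
exp1 = Pipeline([("features", tfidf_union), ("clf", LinearSVC(C=4))])
exp1.fit(tweets, labels)

# Experiment 2: multilingual sentence embeddings fed to logistic regression
# with the hyperparameters quoted in the text.
encoder = SentenceTransformer("xlm-r-bert-base-nli-stsb-mean-tokens")
X = encoder.encode(tweets)                     # dense sentence embeddings
exp2 = LogisticRegression(max_iter=1000, multi_class="multinomial",
                          solver="lbfgs")
exp2.fit(X, labels)
print(exp2.predict(encoder.encode(["another tweet"])))
```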
Configured with default hyperparameters—max_iter=1000, multi_class='multinomial', and solver='lbfgs'—and additional text preprocessing steps such as normalization and emoji replacement, the LR model achieved an F1-score of 68.48%. This score surpasses the highest result obtained in the TF-IDF-based LSVC experiments, highlighting the effectiveness of Sentence Transformers in capturing semantic relationships within text. §.§ Competitive Performance Analysis Our team, dzStance, participated in the StanceEval shared task, where the competition focused on detecting stances in various topical domains. Our performance across different topics was as follows: - **Women Empowerment**: We achieved the 13th position with an F1-score of 74.91% among 15 participating teams. - **COVID-19 Vaccine**: We ranked 10th, securing a 73.43% F1-score. - **Digital Transformation**: We placed 12th with a 66.97% F1-score. - **Overall Performance**: Combining all categories, dzStance ranked 13th overall with an F1-score of 71.77%. These results reflect our system's ability to handle complex stance detection tasks, particularly in the context of the nuanced and diverse opinions expressed in the dataset. Despite the competitive nature of the task, our approach demonstrated robustness across different domains, indicating its potential for broader applications in stance detection. §.§ Analysis and Insights Our experiments underscore the importance of feature engineering and model selection in stance detection tasks. For the LSVC model, carefully selecting and combining n-grams up to six words proved most effective. This approach aligns with the need to capture both local and contextual information in Arabic text, characterized by its rich morphology and varying dialects. On the other hand, the use of Sentence Transformers and Logistic Regression provided a significant performance boost. This suggests that leveraging pre-trained embeddings, which encode comprehensive semantic information, can substantially enhance the ability of stance detection models to interpret complex texts. Overall, our competitive performance highlights areas for improvement but also demonstrates the potential of our methodologies. Future work could explore further fine-tuning of pre-trained models or combining TF-IDF and embedding-based approaches to harness the strengths of both methods, potentially leading to even greater improvements in stance detection performance. § CONCLUSION In this paper, we introduced dzStance, our solution to the StanceEval 2024 shared task on Arabic stance detection. Leveraging Sentence Transformers in conjunction with Logistic Regression, our approach achieved competitive results with an overall average F1-score of 71.77%. This performance positioned us 13th among all participating teams, highlighting the effectiveness of advanced embedding techniques and robust classification algorithms in handling the complexities of Arabic stance detection. The success of our approach underscores the significance of pre-trained models like Sentence Transformers for capturing nuanced semantic relationships within Arabic text across diverse topics. Looking ahead, further investigation into why the Logistic Regression model outperformed traditional methods such as Linear Support Vector Classification (LSVC) could yield insights through deeper hyperparameter tuning and broader evaluation on varied datasets. 
Additionally, exploring hybrid approaches that integrate TF-IDF features with advanced embedding models may offer enhanced model robustness and accuracy in Arabic natural language processing tasks. By making our code and methodologies openly accessible, we aim to foster reproducibility and encourage ongoing advancements in Arabic stance detection research, paving the way for more sophisticated and effective models in the future.
http://arxiv.org/abs/2407.12289v1
20240717032148
On intersecting families of subgraphs of perfect matchings
[ "Melissa M. Fuentes", "Vikram Kamat" ]
math.CO
[ "math.CO", "05D05 (Primary), 05C35 (Secondary)" ]
On intersecting families of subgraphs of perfect matchings Melissa Fuentes [1] Vikram Kamat [2] ========================================================== [1] Department of Mathematics & Statistics, Villanova University, Villanova, PA, USA. [2] Department of Mathematics & Statistics, Villanova University, Villanova, PA, USA. § ABSTRACT The seminal Erdős–Ko–Rado (EKR) theorem states that if ℱ is a family of k-subsets of an n-element set X for k≤ n/2 such that every pair of subsets in ℱ has a nonempty intersection, then ℱ can be no bigger than the trivially intersecting family obtained by including all k-subsets of X that contain a fixed element x∈ X. This family is called the star centered at x. In this paper, we formulate and prove an EKR theorem for intersecting families of subgraphs of the perfect matching graph, the graph consisting of n disjoint edges. This can be considered a generalization not only of the aforementioned EKR theorem but also of a signed variant of it, first stated by Meyer <cit.>, and proved separately by Deza–Frankl <cit.> and Bollobás–Leader <cit.>. The proof of our main theorem relies on a novel extension of Katona's beautiful cycle method. § INTRODUCTION For a finite set X containing n elements, where n is a positive integer, let 2^X and \binom{X}{r} denote the family of all subsets and r-subsets of X, respectively. For any 𝒢⊆ 2^X and x∈ X, let 𝒢_x be all sets in 𝒢 that contain x. We call 𝒢_x the star in 𝒢 centered at x. A family ℱ of subsets is intersecting if F∩ G≠∅ for F, G∈ℱ. A classical result of Erdős, Ko and Rado <cit.> states that if ℱ⊆\binom{X}{r} is intersecting for r≤ n/2, then |ℱ|≤\binom{n-1}{r-1}. Moreover, if r<n/2, equality holds if and only if ℱ=\binom{X}{r}_x for some x∈ X. The Erdős–Ko–Rado theorem is one of the fundamental theorems in extremal combinatorics, and has been generalized in many directions. For instance, an “EKR-type” problem can be defined on a class of mathematical objects with some natural notion of pairwise intersection on objects in this class; a typical result along these lines involves finding a best possible upper bound on the size of the largest intersecting subfamily within this class. Furthermore, it can often be shown that the extremal structures – typically analogous to the star structure defined above – are unique. Indeed, for 𝒢⊆ 2^X, we say that 𝒢 is EKR if there exists x∈ X such that for any intersecting subfamily ℱ⊆𝒢, |ℱ|≤ |𝒢_x|. Furthermore, we say that 𝒢 is strongly EKR if every maximum intersecting subfamily of 𝒢 is a star in 𝒢. For example, it is an easy exercise to verify that if 𝒢=2^X, then 𝒢 is EKR but not strongly EKR. EKR results along these lines have been proved for, among other objects, permutations, vector spaces, set partitions and families of independent sets for certain classes of graphs. We refer the reader to <cit.>, and the references contained within, for more details on these and other generalizations inspired by the theorem. In this paper, we consider an EKR-type problem for families of induced subgraphs in the perfect matching graph, which we define in the next section. §.§ Induced subgraphs of perfect matchings Let G=(V,E) be a graph with vertex set V=V(G) and edge set E=E(G) containing (undirected) edges between pairs of vertices. An induced subgraph H=(V',E') of G is a subgraph with V'⊆ V and E'⊆ E such that for any u,v∈ V', {u,v}∈ E' if and only if {u,v}∈ E. Also, for positive integers i,j, n and 1≤ i≤ j≤ n let [i,j]={i,i+1,…,j}. Let [n]=[1,n]. 
For n≥ 1, we define the perfect matching graph, denoted by M_n, as the graph that consists of n pairwise disjoint copies of the complete graph K_2. Let E(M_n)={ e_1, e_2, …, e_n}. For each i ∈ [n], let e_i={l_i,r_i}, where l_i and r_i are the vertex endpoints of the edge e_i. Then V(M_n)=L∪ R, where L={l_1,l_2,…,l_n}, and R={r_1,r_2,…,r_n}. For r≥ 1, denote the family of all induced subgraphs of M_n containing r vertices by ℋ^(r)(n). For s,p≥ 0 and 2p+s≥ 1, let ℋ^(p,s)(n)={V(H):H∈ℋ^(2p+s)(n), |E(H)|=p}; that is, ℋ^(p,s)(n) is the family of vertex subsets of all induced subgraphs of M_n that consist of p disjoint edges and s isolated vertices. Though members of this family are vertex subsets of M_n, we will refer to them as subgraphs; additionally, for brevity, a member V(H)∈ℋ^(p,s)(n) will be denoted by H. Note that |ℋ^(p,s)(n)|=\binom{n}{p}\binom{n-p}{s}2^s. Note also that for any x∈ V(M_n), the cardinality of the star in ℋ^(p,s)(n) centered at x is given as follows: |ℋ^(p,s)_x(n)|=\binom{n-1}{p-1}\binom{n-p}{s}2^s+\binom{n-1}{p}\binom{n-p-1}{s-1}2^{s-1}= \frac{(2p+s)(n-1)!}{p!\,s!\,(n-p-s)!}\,2^{s-1}. Finally, using Equations <ref> and <ref>, we observe that (2n)|ℋ^(p,s)_x(n)|=(2p+s)|ℋ^(p,s)(n)|. We are now ready to state our main results. § MAIN RESULTS & A CONJECTURE Our main results are motivated by the following conjecture that we propose as an extension of two distinct EKR theorems. For non-negative integers s,p and 1≤ 2p+s≤ n, ℋ^(p,s)(n) is EKR. Indeed, it is easy to observe that the case s=0 is the Erdős–Ko–Rado theorem itself. Furthermore, the case p=0 is an EKR result for intersecting families of independent sets in M_n, first stated in Meyer <cit.>, and proved separately by Deza–Frankl <cit.> and Bollobás–Leader <cit.>. Our main result in this paper is the following theorem which assumes a stronger condition on n (for any fixed p and s) as compared to Conjecture <ref>. For s, p≥ 1 and n≥ 2(p+s), ℋ^(p,s)(n) is EKR; it is strongly EKR when n>2(p+s). We also prove the following result, which proves Conjecture <ref> for n=2p+s and presents a strengthening of Theorem <ref> for the special case s=1. Let s,p≥ 1. * ℋ^(p,s)(2p+s) is EKR but not strongly EKR. * If s=1 and n>2p+1, then ℋ^(p,s)(n) is strongly EKR. The proof of Theorem <ref> employs a novel extension of Katona's famous cycle method. In contrast to the cyclic orderings employed in <cit.> and <cit.> to prove their respective EKR theorems, it is more natural here to consider cyclic permutations of the edges in M_n. However, to account for the fact that subgraphs contain edges – i.e. both vertices – or just one vertex from an edge, we have to expand the definition of an interval in these cyclic orders. We do so by considering two different types of intervals within a given cyclic order. We also mention here that the variant of the cycle method formulated in <cit.> could be used to prove a weaker version of Theorem <ref>; more precisely, it would require a stronger condition on n, namely n≥ 2(2p+s). Our result, on the other hand, also handles the cases when 2p+2s≤ n<4p+2s. The rest of the paper is organized as follows. In Section <ref>, we define all notation necessary for our generalization of Katona's method and lemma, and use it to prove the tight upper bound in Theorem <ref>. In Section <ref>, we prove the uniqueness of the extremal structures for Theorem <ref>. In Section <ref>, we prove Theorem <ref>. In Section <ref>, we outline some directions for further research. 
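As an illustrative aside (not part of the original argument), the three counting formulas above are easy to verify by brute-force enumeration for small parameters; the following Python sketch does this for the assumed example values n=4, p=1, s=1:

```python
from itertools import combinations, product
from math import comb

def subgraphs(n, p, s):
    """Enumerate the members of H^(p,s)(n) as frozensets of vertices (i, 'l'/'r')."""
    edges = range(1, n + 1)
    for full in combinations(edges, p):                  # p whole edges
        rest = [e for e in edges if e not in full]
        for single in combinations(rest, s):             # s edges met in one vertex
            for sides in product("lr", repeat=s):
                verts = {(e, side) for e in full for side in "lr"}
                verts |= set(zip(single, sides))
                yield frozenset(verts)

n, p, s = 4, 1, 1                        # small illustrative parameters
H = list(subgraphs(n, p, s))
x = (n, "l")                             # a fixed vertex of M_n
star = [h for h in H if x in h]

assert len(H) == comb(n, p) * comb(n - p, s) * 2**s          # total count formula
assert len(star) == (comb(n - 1, p - 1) * comb(n - p, s) * 2**s
                     + comb(n - 1, p) * comb(n - p - 1, s - 1) * 2**(s - 1))
assert 2 * n * len(star) == (2 * p + s) * len(H)             # the (2n)/(2p+s) identity
print(len(H), len(star))                 # prints 24 9 for these parameters
```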
§ KATONA'S CYCLE AND THE UPPER BOUND IN THEOREM <REF> For the purposes of this proof, for any edge e_i∈ E(M_n), we denote e^0_i=l_i and e^1_i=r_i. Let S_n denote the set of all permutations σ of [n], and let {0,1}^n denote the set of all sequences τ=(τ_1, …, τ_n) where τ_i∈{0,1} for each 1≤ i ≤ n. For each choice of σ∈ S_n and τ∈{0,1}^n, we can define a cyclic order C_σ^τ of E(M_n) as follows: C_σ^τ=(e^(τ_1)_σ(1), e^(τ_2)_σ(2), …, e^(τ_n)_σ(n)), where for each i ∈[n], e_σ(i)^(τ_i)=(e_σ(i)^τ_i,e_σ(i)^τ_i+1), where τ_i+1 is computed modulo 2. Rotating a given cyclic order along the n positions corresponding to the n edges gives n-1 other cyclic orders that we say are equivalent to the original cyclic order. To account for this notion of equivalence, we only consider cyclic orders C_σ^τ where σ∈ S_n is a permutation with σ(n)=n. Indeed, in the proof of the characterization of the extremal structures in Section <ref>, we implicitly identify σ as a permutation in S_n-1. Let 𝒞_n denote the set of these orders. It is clear to see that |𝒞_n|=(n-1)!2^n. For a cyclic order C_σ^τ, positive integers p, s (with p+s≤ n), and i∈ [n], we define a B-interval beginning at the edge in position i as follows: B_i=B_i(σ,τ,p,s)=(e^(τ_i)_σ(i),e^(τ_i+1)_σ(i+1),…,e^(τ_i+p-1)_σ(i+p-1),e^τ_i+p_σ(i+p),…,e^τ_i+p+s-1_σ(i+p+s-1)). Similarly, we define an R-interval beginning at the edge in position i as follows: R_i=R_i(σ,τ,p,s)=(e^τ_i+1_σ(i),e^τ_i+1+1_σ(i+1),…,e^τ_i+s-1+1_σ(i+s-1),e^(τ_i+s)_σ(i+s),…,e^(τ_i+p+s-1)_σ(i+p+s-1)). Note that addition here is carried out modulo n, so for any j, i+j=i+j-n if i+j>n. For fixed p and s, each cyclic order thus has n B-intervals and n R-intervals. More informally, both B-intervals and R-intervals cover p+s consecutive edges in a given cyclic order. However, reading clockwise, a B-interval contains both vertices from its first p edges and only the vertex in the first position (in the cyclic order) from each of the last s edges. Similarly, a R-interval contains only the vertex in the second position from each of its first s edges, and both vertices from its last p edges. When the context is clear (with regard to σ, τ, p and s), we will refer to B-intervals and R-intervals by their starting positions alone. Before we proceed to a proof of the upper bound in Theorem <ref>, we illustrate the above definitions with an example. Let n=6, p=1, and s=2. Given permutation σ=(σ(1),…,σ(6))=(5,3,2,1,4,6) and τ=(0,1,1,0,1,0), the corresponding cylic order is C_σ^τ=((l_5,r_5),(r_3,l_3),(r_2,l_2),(l_1,r_1),(r_4,l_4),(l_6,r_6)). The B-intervals B_3 and B_5 in this order are ((r_2,l_2),l_1,r_4) and ((r_4,l_4),l_6,l_5) respectively. Similarly, the R-intervals R_2 and R_6 in this order are (l_3,l_2,(l_1,r_1)) and (r_6,r_5,(r_3,l_3)) respectively. We also clarify a mild abuse of notation regarding intervals that we will frequently employ for convenience of exposition. First, it is clear that both B and R intervals can be regarded as ordered sets of 2p+s vertices in V(M_n). Secondly, both B and R intervals naturally correspond to members of ℋ^(p,s)(n). In view of this correspondence, a B or R-interval will often be identified with and refer to the (unordered vertex subsets of the) induced subgraph in ℋ^(p,s)(n) that contain the p edges and s singleton vertices in that interval. This equivalence will typically be implied in statements that involve equality between or intersection of two intervals. We now prove the upper bound in Theorem <ref>. 
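Before turning to the proof, we note (as an illustrative aside, not part of the original paper) that the interval definitions can be checked computationally; the short Python sketch below encodes B- and R-intervals directly from the definitions and reproduces the intervals listed in the example above (n=6, p=1, s=2, σ=(5,3,2,1,4,6), τ=(0,1,1,0,1,0)):

```python
def vertex(e, bit):
    return ("l" if bit == 0 else "r") + str(e)       # e^0 = l_e, e^1 = r_e

def B_interval(i, sigma, tau, p, s):
    n, out = len(sigma), []
    for j in range(p):                               # first p edges: both vertices
        pos = (i - 1 + j) % n
        out.append((vertex(sigma[pos], tau[pos]), vertex(sigma[pos], 1 - tau[pos])))
    for j in range(p, p + s):                        # last s edges: first vertex only
        pos = (i - 1 + j) % n
        out.append(vertex(sigma[pos], tau[pos]))
    return out

def R_interval(i, sigma, tau, p, s):
    n, out = len(sigma), []
    for j in range(s):                               # first s edges: second vertex only
        pos = (i - 1 + j) % n
        out.append(vertex(sigma[pos], 1 - tau[pos]))
    for j in range(s, p + s):                        # last p edges: both vertices
        pos = (i - 1 + j) % n
        out.append((vertex(sigma[pos], tau[pos]), vertex(sigma[pos], 1 - tau[pos])))
    return out

sigma, tau, p, s = (5, 3, 2, 1, 4, 6), (0, 1, 1, 0, 1, 0), 1, 2
print(B_interval(3, sigma, tau, p, s))   # [('r2', 'l2'), 'l1', 'r4']
print(B_interval(5, sigma, tau, p, s))   # [('r4', 'l4'), 'l6', 'l5']
print(R_interval(2, sigma, tau, p, s))   # ['l3', 'l2', ('l1', 'r1')]
print(R_interval(6, sigma, tau, p, s))   # ['r6', 'r5', ('r3', 'l3')]
```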
For the remainder of this section, let p,s≥ 1, n≥ 2(p+s), and suppose that ℱ⊆ℋ^(p,s)(n) is intersecting. For a given cyclic arrangement C_σ^τ, let ℬ_σ^τ and ℛ_σ^τ be the subfamilies of all members in ℱ that can be ordered as B-intervals and R-intervals in C_σ^τ respectively. Let ℱ_σ^τ=ℬ_σ^τ∪ℛ_σ^τ. The following lemma, key to establishing the upper bound in Theorem <ref>, is an analog of Katona's lemma used in his proof of the original EKR theorem. |ℱ_σ^τ|≤ 2p+s. We bound ℬ=ℬ_σ^τ and ℛ=ℛ_σ^τ separately. Note first that by using Katona's original lemma, since n≥ 2(p+s), the bounds |ℬ|≤ p+s and |ℛ|≤ p+s are immediate. We may assume that |ℬ|≥ 2, otherwise we get |ℱ_σ^τ|≤ p+s+1≤ 2p+s, as p≥ 1. Let 1≤ k=min _1≤ i, j≤ n |B_i∩ B_j|. Without loss of generality, suppose that B_k, B_p+s∈ℬ. Clearly |B_k∩ B_p+s|=k. Now, if 1≤ i<k or 2(p+s)≤ i≤ n, then B_i∉ℬ. This is because in the former case, we get 1≤ |B_i∩ B_p+s|<k, which contradicts the minimality of k while in the latter case, we get B_i∩ B_p+s=∅. Similarly, if p+s<i≤ p+s+k-1, then 1≤ |B_i∩ B_k|<k, again contradicting the minimality of k and thus implying that B_i∉ℬ. Finally, for k<i<p+s, at most one out of the disjoint pair (B_i,B_p+s+i) can be in ℬ. This gives us |ℬ|≤ p+s-(k-1). Indeed, using Katona's bound, we can assume that k≤ s+1. To bound ℛ, we begin by noting that for any 2(p+s)≤ i≤ n, R_i∩ B_p+s=∅, implying that R_i∉ℛ. Additionally, if 2p+s≤ i≤ 2(p+s)-1, then R_i∉ℛ, as R_i∩ B_p+s=∅. Similarly, for any k+p≤ i≤ k+p+s-1, R_i∉ℛ as R_i∩ B_k=∅. Finally, for each 1≤ i≤ p+k-1, at most one out of the disjoint pair (R_i,R_p+s+i) can be a member of ℛ, which implies a bound of |ℛ|≤ p+k-1. Alongside the bound for |ℬ|, this implies the required bound for ℱ_σ^τ. The upper bound now follows from the following double counting argument. For each F∈ℋ^(p,s)(n), F is a B-interval in exactly 2^p\,p!\,s!\,2^{n-p-s}(n-p-s)!=2^{n-s}\,p!\,s!\,(n-p-s)! cyclic orders, and is also an R-interval in the same number of cyclic orders. Using Lemma <ref> and Equation <ref>, we get: |ℱ| ≤\frac{(2p+s)(n-1)!\,2^n}{2^{n-s+1}\,p!\,s!\,(n-p-s)!} =\frac{(2p+s)(n-1)!}{p!\,s!\,(n-p-s)!}\,2^{s-1} =|ℋ_x^(p,s)(n)|, for any x∈ V(M_n). This proves that ℋ^(p,s)(n) is EKR for n≥ 2(p+s). We now proceed to show, in the next section, that it is strongly EKR when n>2(p+s). 
A well-known corollary of Katona's lemma [See Lemma 10 from <cit.> for a short proof of this corollary.] tells us that ℛ={R_j,…, R_j+s} for some j∈ [n]. We claim that j+s=i (addition modulo n); indeed, if i∈ [n]∖ [j,j+s], then B_i∩ R_j=∅ and if i∈ [j,j+s-1], then B_i∩ R_j+s=∅. This settles the case (since k here is just |B_i|=p+s=s+1). Thus, we may assume |ℬ|≥ 2. Without loss of generality, suppose that B_k, B_p+s∈ℬ with |B_k∩ B_p+s|=k. Since |ℱ_σ^τ|=2p+s, |ℬ|=p+s-k+1 and |ℛ|=p+k-1 by Lemma <ref>. Also note that by our proof of Lemma <ref>, for each k<i<p+s, exactly one out of the disjoint pair (B_i, B_p+s+i) must be in ℬ. Suppose j is the smallest index such that k<j<p+s and B_j ∉ℬ. Then B_j-1, B_p+s+j∈ℬ. However, as n>2(p+s), B_j-1∩ B_p+s+j =∅, a contradiction. Thus, ={ B_k, B_k+1, B_k+2, …, B_p+s}. Now, by our proof of Lemma <ref>, for each 1≤ i≤ p+k-1, exactly one out of the disjoint pair (R_i, R_p+s+i) is in ℛ. However, since k ≤ s+1, we have p+k ≤ p+s+1, implying that B_k ∩ R_p+s+1=∅. Thus, R_1 ∈ℛ. Finally, for 2≤ i≤ p+k-1, let i be minimum such that R_i∉ℛ. Then R_i-1∈ℛ and R_p+s+i∈ℛ. Again, as n>2(p+s), R_i-1∩ R_p+s+i=∅, a contradiction. Thus, ={ R_1, R_2, …, R_p+k-1}. We now introduce some important notation that is required for the next lemmas. First, let C_ι^ο be the canonical cyclic order, where ι∈ S_n is the identity permutation, and ο is the all-zeroes sequence (of length n). For any cyclic order C_σ^τ, a transposition t_i,j is an operation that exchanges the positions of e_σ(i) and e_σ(j) in the cyclic order, while retaining the orders of the respective vertices within each of the two edges. Note that if C_π^ϕ is the cyclic order obtained from C_σ^τ using a transposition t_i,j, then π=σ∘ (i, j), i.e., the transposition acts by interchanging the elements in positions i and j. Additionally, we have ϕ_i=τ_j, ϕ_j=τ_i, and ϕ_k=τ_k for all k∈ [n]∖{i,j}. If j=i+1 for 1≤ i≤ n-1, then we call t_i=t_i,i+1 an adjacent transposition. Let C_π_i^ϕ denote the cyclic order obtained from C_σ^τ via the transposition t_i, where π_i=σ∘ (i, i+1). Finally, for a given cyclic order C_σ^τ and 1≤ i≤ n, a swap s_i is an operation that exchanges the positions of the vertices in e_σ(i). Let C_σ^τ^i denote the cyclic order obtained using a swap s_i, where τ^i_i=(τ_i+1) (mod 2) and τ^i_k=τ_k for 1≤ k≠ i≤ n. Note that the permutation σ is unaffected by the swap s_i. Each of the following two lemmas involves the use of transpositions and/or swaps on a given cyclic order to obtain a new cyclic order; to avoid ambiguity, we adopt the notation B_i(σ,τ)=B_i(σ,τ,p,s) and R_i(σ,τ)=R_i(σ,τ,p,s) to respectively identify B-intervals and R-intervals in a cyclic order C_σ^τ that begin at the edge in position i of that cyclic order. The following lemma, stated and proved below, demonstrates that for each C_σ^τ∈𝒞_n, ℱ_σ^τ has a “star structure”, i.e. there exists x∈ V(M_n) such that x∈⋂_F∈ℱ_σ^τ F. In this case, we say that C_σ^τ is centered at x. For each C_σ^τ, k∈{1,s+1}. Consider the canonical cyclic order C_ι^ο, and let k=k(ι,ο,p,s). By way of contradiction, suppose that 2≤ k≤ s. Without loss of generality (relabeling if necessary), suppose that ℬ_ι^ο={B_k,B_k+1,…,B_p+s}. By Lemma <ref>, we also know that ℛ_ι^ο={R_1,…,R_p+k-1}. Now consider the cyclic order C_π^ϕ obtained by applying the transpositions t_1,p+k-1 and t_p+s+k-1,2(p+s)-1 to the canonical cyclic order. Note that as p≥ 1 and 2≤ k≤ s, we have 1<p+k-1<p+s+k-1<2(p+s)-1. (Note also that ϕ=ο.) Let k'=k'(π,ϕ,p,s). 
We now proceed to prove that |ℱ_π^ϕ|<2p+s, which will complete the proof by contradiction. We begin by showing that for each i∈ [n]∖ [k+1,p+s], B_i(π,ϕ)∉ℬ_π^ϕ. First, note that for each 2(p+s)≤ i≤ n, B_i(π,ϕ)∉ℬ_π^ϕ, since B_i(π,ϕ)∩ B_p+s(ι,ο)=∅. Analogously, if 2≤ i≤ k-1, then B_i (π,ϕ)∩ R_p+k-1(ι,ο)=∅, hence B_i(π,ϕ)∉ℬ_π^ϕ. Next, let i=k. By definition, B_k(π,ϕ)=(e_k,…,e_p+k-2,e_1,l_p+k,…, l_p+s+k-2,l_2(p+s)-1). We also have R_p+k-1(ι,ο)=(r_p+k-1,…,r_p+k+s-2,e_p+s+k-1,…,e_2p+s+k-2). Clearly, B_k(π,ϕ)∩ R_p+k-1(ι,ο)=∅, thus B_k(π,ϕ)∉ℬ_π^ϕ. Now, let i=1. If B_i(π,ϕ)∈ℬ_π^ϕ, then using Lemma <ref>, ℬ_π^ϕ={B_1(π, ϕ) } and consequently, we have R_n(π,ϕ)∈ℛ_π^ϕ. This is a contradiction since R_n(π,ϕ)∩ B_p+s(ι,ο)=∅. Next, we consider the case when i∈ [p+s+1,p+s+k-1]. For any i in this range, R_i-(p+s)(ι,ο)∩ B_i(π,ϕ)=∅, thus implying that B_i(π,ϕ)∉ℬ_π^ϕ. Finally, we consider the case when i∈ [p+s+k, 2(p+s)-1]. Let j≥ 1 be minimum such that B_p+s+k-1+j(π,ϕ)∈ℬ_π^ϕ. Since B_2(p+s)(π,ϕ)∉ℬ_π^ϕ, we know from Lemma <ref> that |ℬ_π^ϕ|≤ (2(p+s)-1)-(p+s+k-1+j)+1=p+s-(k+j)+1. Note that this implies k'≥ k+j. Using Lemma <ref>, if j' is the minimum index such that R_j'(π,ϕ)∈ℛ_π^ϕ, then j'= (p+s+k-1+j)-(k'-1)=p+s+k+j-k'≤ p+s. Consequently, R_m(π,ϕ)∈ℛ_π^ϕ, where m=j'+(p+k'-1)-1≥ p+s+2. In particular, we have R_p+s+1(π,ϕ)∈ℛ_π^ϕ, which is a contradiction since R_p+s+1(π, ϕ)∩ R_1(ι,ο)=∅. We can now conclude that all of the B-sets in ℬ_π^ϕ have starting indices in the interval [k+1,p+s]. However, R_p+k(π,ϕ)∉ℛ_π^ϕ since R_p+k(π,ϕ)∩ B_k(ι,ο)=∅. Recall also that R_n(π,ϕ)∉ℛ_π^ϕ. It follows from Lemma <ref> that all of the R-sets in ℛ_π^ϕ have starting indices that lie in [1, p+k-1]. Therefore, |ℱ_π^ϕ|≤ (p+s-k)+(p+k-1)≤ 2p+s-1, a contradiction. The two figures below illustrate the two extremal possibilities from Lemma <ref> for any given cyclic order. In each of the figures, we have n=12, p=3 and s=2, and the cyclic order considered for the illustration is C_ι^ο. The portion of each interval that includes singletons (left vertices from the last s positions for blue intervals, right vertices from the first s positions for red intervals) is denoted by dashed lines. For clarity, only the first and last blue and red intervals are drawn. In Figure <ref>, we have k=1, and C_ι^ο, with ℬ={B_8,B_9,B_10,B_11,B_12} and ℛ={R_8,R_9,R_10}, is centered at l_n. Similarly, in Figure <ref>, C_ι^ο is centered at r_n. We now show that each cyclic order in 𝒞_n is centered at the same vertex. This would prove the uniqueness of the extremal structures for Theorem <ref>. We first assume, without loss of generality (relabeling if necessary), that the canonical cyclic order C_ι^ο is centered at e_n^0=l_n. In other words, we have k=k(ι,ο,p,s)=1, ℬ_σ^τ={B_n-p-s+1,B_n-p-s+2,…, B_n},and ℛ_σ^τ={R_n-p-s+1, R_n-p-s+2…,R_n-s}. Our final lemma proves that every cyclic order in 𝒞_n is centered at l_n. We first make some preliminary observations that will help simplify its proof. For any C_σ^τ, we denote its reflection by C_σ^τ, where σ is the permutation given by σ(i)=σ(n-i) for each 1≤ i≤ n-1, and τ is the sequence given by τ_i=τ_i+1 (mod 2) for each 1≤ i≤ n. It is easy to see that ℱ_σ^τ=ℱ_σ^τ; indeed, we have ℬ_σ^τ=ℛ_σ^τ, and ℛ_σ^τ=ℬ_σ^τ. Additionally, we also have k(σ,τ,p,s)=(s+2)-k(σ,τ,p,s). In view of this, we can restrict our attention to only those cyclic orders C_σ^τ with τ_n=0. As mentioned earlier, we will also identify each σ as a permutation in S_n-1. We denote this subset of 𝒞_n by 𝒞'(n). Clearly C_ι^ο∈𝒞'(n). 
Let C_σ^τ∈𝒞'_n be centered at vertex l_n. Then: * For each 1≤ i≤ n-2, C_π_i^ϕ is centered at l_n. * There exists 1≤ j≤ n-1 such that C_σ^τ^j is centered at l_n. Before we proceed to the proof, we note that the n-2 adjacent transpositions and any one of the n-1 swaps generate all of the cyclic orders in 𝒞'(n). [For swaps s_i and s_j with i<j, it is clear to see that s_i=t_i∘ t_i+1∘⋯∘ t_j-2∘ t_j-1∘ s_j ∘ t_j-1∘ t_j-2∘⋯∘ t_i+1∘ t_i. Note that the standard multiplication convention is adopted here, i.e., the operations are applied in order from right to left.] Lemma <ref> thus immediately implies that every cyclic order in 𝒞'(n) is centered at l_n. It suffices to prove the lemma for the canonical cyclic order C_ι^ο. Note that B_n-p-s+1(ι,ο)∩ B_n(ι,ο)={l_n}. We first prove Part <ref> of the lemma. Let 1≤ i≤ n-2. The following observation will be frequently used: For any C^σ_τ with |ℱ_σ^τ|=2p+s, if there exist B, B'∈ℬ_σ^τ with B∩ B'={x}, then C^σ_τ is centered at x. The proof now splits into cases, depending on the value of i. * Let i ∉{ p+s-1, n-p-s}. Then B_n-p-s+1(π_i,ϕ)=B_n-p-s+1(ι,ο) and B_n(π_i,ϕ)=B_n (ι,ο). Using the observation from above, we can conclude that C_π_i^ϕ is centered at l_n. * Let i=p+s-1. Then B_j(π_i,ϕ)=B_j(ι,ο) for each n-p-s+1≤ j≤ n-1. Similarly, B_n-p-s(π_i,ϕ)=B_n-p-s(ι,ο) and R_n-p-s(π_i,ϕ)=R_n-p-s(ι,ο) since n>2(p+s); thus, neither B_n-p-s(π_i,ϕ) nor R_n-p-s(π_i,ϕ) are in ℬ_π_i^ϕ and ℛ_π_i^ϕ, respectively, as they are both disjoint from B_n(ι,ο). This implies that k(π_i,ϕ,p,s)=1, i.e., B_n(π_i,ϕ)∈ℬ_π^ϕ. Thus, C_π_i^ϕ is centered at l_n. * Let i=n-p-s. In this case, B_j(π_i,ϕ)=B_j(ι,ο)∉ℬ_π_i^ϕ for j=1 and B_j(π_i,ϕ)=B_j(ι,ο)∈ℬ_π_i^ϕ for n-p-s+2≤ j≤ n. Suppose that B_n-p-s+1(π_i,ϕ)∉ℬ_π_i^ϕ. Then k(π_i,ϕ,p,s)=2, which by Lemma <ref> implies that s=1. However, Lemma <ref> implies that R_n(π_i,ϕ)∈ℛ_π_i^ϕ, a contradiction, as R_n(π_i,ϕ) ∩ B_n-p-s+1(ι,ο)=∅. We now prove Part <ref> of the lemma. For this, we choose j=p+s; indeed, any j∈ [p+s,n-(p+s)] suffices. Clearly, C_σ^τ^j is still centered at l_n. § PROOF OF THEOREM <REF> We begin by proving Part <ref> of Theorem <ref>. Let p, s≥ 1, and let ℱ⊆ℋ^(p,s)(2p+s) be intersecting. Now, consider any subgraph H∈ℋ^(p,s)(2p+s). Clearly, the subgraph H' induced by the vertex set V(M_n)∖ V(H), is also a member of ℋ^(p,s)(2p+s). Thus, at most one of H and H' can be in ℱ. This immediately yields the upper bound |ℱ|≤12|ℋ^(p,s)(2p+s)|, and using Equation <ref>, this simplifies to |ℱ|≤ |ℋ_x^(p,s)(n)(2p+s)| (where x is any vertex from V(M_n)), completing a proof of Part <ref> of the theorem. To see that ℋ^(p,s)(2p+s) is not strongly EKR, we construct an intersecting subfamily of ℋ^(p,s)(2p+s) that has extremal size, but is not a star. Let 𝒢=ℋ^(p,s)(2p+s)∖ℋ^(p,s)_l_2p+s(2p+s) be the family of all subgraphs that do not contain the vertex l_2p+s. It is clear to see that 𝒢 has extremal size but is not a star, so we only need to prove that it is intersecting. By way of contradiction, suppose G_1,G_2∈𝒢 and G_1∩ G_2=∅. Without loss of generality, suppose that G_1 contains (both endpoints from) each of the p edges e_1,e_2,…,e_p, while G_2 contains each of the p edges e_p+1,…,e_2p. Thus, the s singletons in both G_1 and G_2 must be from the s edges e_2p+1,…,e_2p+s. By definition of 𝒢, this implies r_2p+s∈ G_1∩ G_2, a contradiction. We proceed to Part <ref> of Theorem <ref>. Let s=1. We know from Theorem <ref> that ℋ^(p,1)(n) is EKR when n≥ 2p+2 and strongly EKR when n>2p+2. 
Additionally, we know from Part <ref> of Theorem <ref> that ℋ^(p,1)(n) is EKR for n=2p+1. We now prove that it is strongly EKR when n=2p+2. Let ℱ⊆ℋ^(p,1)(2p+2) be intersecting, and let |ℱ|=(2p+1)n-1p. We use a more standard Katona-type argument to prove this result. In particular, we only consider those cyclic orders C_σ=C_σ^ο∈𝒞_n where σ∈ S_n with σ(n)=n and ο∈{0,1}^n is the all-zeros sequence of length n. Clearly there are (n-1)! such cyclic orders. Additionally, we can interpret a given cyclic order C_σ as a permutation of the vertex set V(M_n). As an example, for n=6 and σ=(σ(1),…,σ(6))=(5,3,2,1,4,6), the corresponding cyclic order is (l_5,r_5,l_3,r_3,l_2,r_2,l_1,r_1,l_4,r_4,l_6,r_6). Note that any sequence containing 2p+1 consecutive vertices from a cyclic order C_σ corresponds to a subgraph in ℋ^(p,1)(n). This correspondance allows us to directly apply many of the ideas from Katona's proof of the EKR theorem <cit.>. For a given cyclic order C_σ, a subgraph H ∈ℋ^(p,1)(n) is an interval in C_σ if either H={r_σ(i), e_σ(i+1), …, e_σ(i+p)}, or H={e_σ(i),…, e_σ(i+p-1), l_σ(i+p)}, for some i ∈ [n], where addition is carried out modulo n. For a given σ, we refer to the former type as an r-interval at position i, and the latter as the l-interval at position i. Let ℱ_σ be the set of all subgraphs in ℱ that are intervals in C_σ. Since |ℱ| has maximum size, it implies that for each cyclic order C_σ, |ℱ_σ|=2p+1. Using the corollary of Katona's lemma (cited in the proof of Lemma <ref> in Section <ref>), there exists x∈ V(M_n) such that C_σ is centered at x, i.e., x∈⋂_F∈ℱ_σ F. To complete the proof, we focus on the only remaining case n=2p+2; without loss of generality, assume that the permutation ι=(1, 2, …, 2p+2) is centered at r_2p+2. As in the proof of Lemma <ref> from Theorem <ref>, to show that ℱ=ℋ^(p,1)_r_2p+2(2p+2), it suffices to show that for σ=ι∘ t_i, where 1≤ i ≤ 2p, C_σ is centered at r_2p+2. Let H={r_2p+2, e_1, …, e_p}, and K={r_p+2,e_p+3,…,e_2p+2}. Note that H, K∈ℱ_ι, and H∩ K={r_2p+2}. The proof now proceeds through cases, depending on the value of i. In each case, we prove that in C_σ, there are two intervals in ℱ_σ that intersect only in r_2p+2. The only non-trivial cases to consider are when i∈{p,p+1,p+2}, as for any other value of i, both H and K are unchanged by the adjacent transposition t_i. * Let i=p. Clearly, the interval K is unchanged by the transposition t_p. However, the l-interval at position p+2 is also unchanged by the transposition and is thus not a member of ℱ_σ. Since |ℱ_σ|=2p+1, this implies that the s-interval at position 2p+2, say H' is in ℱ_σ. Since K∩ H'={r_2p+2}, C_σ is centered at r_2p+2. * Let i=p+1. In this case, the interval H is unchanged by the transposition. We can also argue that the l-interval at position 1, say H', is not in ℱ_σ. By way of contradiction, suppose that H'∈ℱ_σ. Since H'={e_1,…,e_p,l_p+2}, clearly H'∩ K=∅, a contradiction. This implies that the r-interval at position p+2, say K', is in ℱ_σ. Since H∩ K'={r_2p+2}, C_σ is centered at r_2p+2. * Let i=p+2. As in the previous case, the interval H is unchanged by t_i. However, the l-interval at position 1 is also unchanged by t_i and is thus not a member of ℱ_σ. This implies that the r-interval at position p+2, say K̃, is in ℱ_σ. Since H∩K̃={r_2p+2}, C_σ is centered at r_2p+2. § FUTURE DIRECTIONS It is evident from the proof of Theorem <ref> that a stronger condition on n (in comparison to the one proposed by Conjecture <ref>) is required for the use of Katona's cycle method. 
It is currently unclear to us if the cycle method can be used to settle the conjecture completely. The algebraic framework for applying Katona's cycle method described in <cit.> would be an interesting possibility to consider; shifting/compression techniques are another. We also propose the following general formulation of the problem that can potentially lead to other, natural extensions of the EKR theorem and its variants. For positive integers m and n, let G_m,n be a graph that has exactly m components, each of which is a copy of K_n, the complete graph on n vertices. Consider a sequence of non-negative integers 𝐬=(s_1,…,s_n) with ∑_i=1^n s_i≤ m. If H is an induced subgraph of G_m,n, we say that H has signature 𝐬 if it satisfies the following two conditions: * H contains exactly ∑_i=1^n s_i components, with exactly s_i copies of K_i for each 1≤ i≤ n. * Let H_1,H_2 be any two distinct components of H. If H_1⊆ G_1 and H_2⊆ G_2, where G_1 and G_2 are components of G_m,n, then G_1≠ G_2. Let ℋ^𝐬(m,n) be the family of all induced subgraphs of G_m,n with signature 𝐬. For a signature sequence 𝐬 and positive integers m and n, find a best possible function f(m,𝐬) such that for n≥ f(m,𝐬), ℋ^𝐬(m,n) is EKR. 10 BollLead B. Bollobás, I. Leader, An Erdős–Ko–Rado theorem for signed sets, Comput. Math. Appl. 34 (1997), 9–13. BorgMea P. Borg, K. Meagher, The Katona cycle proof of the Erdős–Ko–Rado theorem and its possibilities, J. Algebr. Comb. 43 (2016), 915–939. DezFra M. Deza, P. Frankl, Erdős–Ko–Rado theorem — 22 years later, SIAM J. Algebraic Discrete Methods 4 (1983), no. 4, 419–431. EKR P. Erdős, C. Ko, and R. Rado, Intersection theorems for systems of finite sets, Quart. J. Math. Oxford Ser. (2) 12 (1961), 313–320. FHK C. Feghali, G. Hurlbert, V. Kamat, An Erdős–Ko–Rado theorem for unions of length 2 paths, Discrete Math. 343 (12) (2020), Article 112121. GodMea C. Godsil and K. Meagher, Erdős–Ko–Rado Theorems: Algebraic Approaches, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2015. Katona G.O.H. Katona, A simple proof of the Erdős–Chao Ko–Rado theorem, J. Combin. Theory (B) 13 (1972), 183–184. Keevash P. Keevash, Shadows and intersections: stability and new proofs, Advances in Mathematics 218 (2008), 1685 – 1703. Meyer J.-C. Meyer, Quelques problémes concernant les cliques des hypergraphes k-complets et q-parti h-complets, Hypergraph Seminar, Springer-Verlag, Berlin (1974), 127–139.
http://arxiv.org/abs/2407.12313v1
20240717043031
Consequences of Godel Theorems on Third Quantized Theories Like String Field Theory and Group Field Theory
[ "Mir Faizal", "Arshid Shabir", "Aatif Kaisar Khan" ]
hep-th
[ "hep-th" ]
Consequences of Gödel Theorems on Third Quantized Theories Like String Field Theory and Group Field Theory Mir Faizal^1,2, Arshid Shabir^1, Aatif Kaisar Khan^1 ^1Canadian Quantum Research Center, 204-3002, 32 Ave Vernon, BC V1T 2L7, Canada ^2Irving K. Barber School of Arts and Sciences, University of British Columbia Okanagan, Kelowna, BC V1V 1V7, Canada =================================================================================================================================================================================================================================================================== § ABSTRACT The observation that spacetime and quantum fields on it have to be dynamically produced in any theory of quantum gravity implies that quantum gravity should be defined on configuration space of fields rather than spacetime. Such a theory that is defined on the configuration space of fields rather than spacetime is a third quantized theory. So, both string theory and group field theory are third-quantized theories. Thus, using axioms of string field theory, we motivate similar axioms for group field theory. Then using the structure of these axioms for string field theory and group field theory, we identify general features of axioms for any such third quantized theory of quantum gravity. Thus, we show that such third-quantized theories of quantum gravity can be formulated as formal axiomatic systems. We then analyze the consequences of Gödel theorems on such third quantized theories. We thus address problems of consistency and completeness of any third quantized theories of quantum gravity. § INTRODUCTION The structure of spacetime can be obtained from general relativity. An intriguing aspect of general relativity is that it predicts its own breakdown due to the occurrence of singularities. At singularities, the spacetime description of reality fails, as the curvature of spacetime becomes infinite, and the laws of physics as described by general relativity cease to be valid. The Penrose-Hawking singularity theorems reveal that these singularities are inherent to the very structure of general relativity <cit.>. Thus, the breakdown of the spacetime description of physics is intrinsic to the very nature of spacetime as described by general relativity. Quantum gravitational effects are expected to modify this classical description of spacetime, incorporating a natural geometric cutoff that prevents the formation of singularities <cit.>. In string theory, for instance, T-duality introduces a minimal length scale, below which the conventional notions of spacetime cease to exist <cit.>. This minimal length effectively prevents the occurrence of singularities by ensuring that physical quantities remain finite <cit.>. Such a geometric cutoff also occurs in Loop Quantum Gravity (LQG) due to the discrete nature of the theory <cit.>. In loop quantum cosmology (LQC), singularities are avoided due to a discrete structure of spacetime, which introduces a geometric cutoff <cit.>. The application of LQC to early universe cosmology has demonstrated that quantum geometric effects can resolve the Big Bang singularity, replacing it with a quantum bounce <cit.>. These findings suggest that the absence of singularities is a universal feature of any consistent theory of quantum gravity. This absence of singularities can also be obtained using the Bekenstein-Hawking entropy <cit.>. 
The modification to the Bekenstein-Hawking entropy by a geometric cutoff would naturally prevent the formation of singularities, ensuring that the physical description remains finite and well-defined. The Jacobson formalism further strengthens this connection by directly linking the Bekenstein-Hawking entropy to spacetime geometry <cit.>. Modifications to this entropy, as predicted by quantum gravity theories, consequently alter the underlying geometry of spacetime. Explicit demonstrations show that the bound on Bekenstein-Hawking entropy due to a minimal length in quantum gravity prevents the formation of spacetime singularities <cit.>. These results suggest that singularities arise in general relativity when it is applied to regimes where the spacetime description becomes invalid. Importantly, this geometric bound is derived from a bound on quantum information, suggesting that spacetime geometry may emerge from quantum informational principles <cit.>. This implies that in quantum gravity, spacetime is not a fundamental entity but an emergent phenomenon arising from a more fundamental quantum theory <cit.>. Various approaches to quantum gravity, such as string theory and LQG, indicate the necessity of a third quantized theory to explain such dynamical formation of spacetime and geometric structures <cit.>. In a first quantized theory, the quantum mechanics of individual particles are studied, while in a second quantized theory, the quantum mechanics of fields are considered, allowing for the dynamic creation and annihilation of particles. Second quantization thus naturally explains multi-particle systems, where the wave function is defined on the configuration space of fields rather than spacetime. This leads to the concept of third quantization, where quantum theory is constructed on this abstract configuration space of fields. Consequently, both quantum fields and the underlying geometry on which these fields are defined are dynamically created and annihilated in a third quantized theory. A third quantized theory is not constructed within spacetime; rather, spacetime and quantum fields emerge from it. Third quantization is thus a multi-geometry theory, analogous to how second quantization is a multi-particle theory. The third quantization of the Wheeler-DeWitt approach has yielded various interesting results <cit.>. For instance, the application of third quantization to quantum cosmology has provided insights into the creation and annihilation of universes in a multiverse <cit.>. Thus, using a third quantization, not only the emergence of a single universe but an entire multiverse can be explained. The third quantization of LQG has been studied using Group Field Theory (GFT), which provides a field-theoretic formulation of quantum geometry <cit.>. GFT describes quantum states of geometry using group-theoretic variables, allowing for a combinatorial and algebraic approach to quantum gravity. This framework has been instrumental in understanding the dynamics of quantum spacetime and the transition from quantum to classical geometry <cit.>. Moreover, GFT has been connected to spin foam models, which serve as a covariant formulation of LQG, further bridging the gap between canonical and path integral approaches <cit.>. Similarly, in string theory, the String Field Theory (SFT) is also a third quantized theory. It may be noted that historically SFT is sometimes called a second quantized theory. 
This is because string theory can be viewed as either a first quantized theory of strings or a second quantized conformal field theory. Thus, SFT can be seen as a third quantized theory defined on the configuration space of the conformal field theory or equivalently as a second quantized theory of strings. So, despite being termed a second quantized theory, SFT operates on the configuration space of fields, fitting the criteria of a third quantized theory <cit.>. SFT provides a consistent framework for describing the interactions of strings, incorporating both perturbative and non-perturbative effects. This approach has led to significant insights into the non-perturbative structure of string theory, including the study of D-branes, tachyon condensation, and string dualities <cit.>. The universe/multiverse with quantum fields in it emerges from such a third quantized theory of quantum gravity. As such third quantized theories are not defined in spacetime, but rather spacetime emerges from them. The third quantization exists as an axiomatic structure producing spacetime and quantum field from it. Now consistency and completeness are the bare minimum requirement for any such sensible theory, describing physics at a fundamental level. The theory should not produce contradicting results, as then it would not be a sensible theory describing reality. Furthermore, as this theory is a fundamental theory, it should be complete, and all physical phenomena that can occur in nature should be derivable from it. However, the Gödel first and second theorems <cit.> have direct implications for the construction of such a theory. We will analyze such formal aspects of third quantized quantum gravity in this paper. § THIRD QUANTIZED THEORIES Even though there are several approaches to quantum gravity, the requirement to dynamically create and annihilate geometries seems to naturally lead to some sort of third-quantized field theory. It may be noted that the original works on third-quantized field theories were done in the context of the Wheeler-DeWitt equation <cit.>. As canonical quantum gravity, based on the Wheeler-DeWitt equation, evolved to LQG <cit.>, the third quantized Wheeler-DeWitt equation evolved to GFT. The GFT is a higher-dimensional extension of matrix models and so it provides a developed formalism for third-quatized LQG. In GFT, the fundamental entities are fields defined on a group manifold, corresponding to the quantized geometric degrees of freedom in LQG. The GFT framework encodes the dynamics of spin networks, the basic quantum states in LQG. A typical GFT action is given by: S_GFT =1/2∫∏_i=1^D dg_i Ψ(g_1, …, g_D) 𝒦_G Ψ(g_1, …, g_D) + λ∫∏_i=1^D dg_i 𝒱(Ψ(g_1, …, g_D)) where g_i are elements of a Lie group 𝒢, Ψ(g_1, …, g_D) is a field over D copies of the group 𝒢, 𝒦_G is a kinetic term, and 𝒱 represents the interaction term <cit.>. The interaction terms in the action often correspond to combinatorial structures, such as simplices or graphs, encoding the connectivity and topology of the fields. This reflects the discrete nature of spacetime in these theories <cit.>. In third quantized LQG, the field Ψ(g_1, …, g_D) can be interpreted as creating and annihilating quantum geometries, with the kinetic and interaction terms encoding the dynamics of these geometries. The fields in GFT are analogous to wave functions over the configuration space of spin networks, encapsulating the dynamics of LQG in a field-theoretic formalism <cit.>. 
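To make the structure of the action S_GFT quoted above concrete, the following toy sketch (not part of the original formulation) discretizes the group to the finite cyclic group Z_N and takes D = 2, so that the group field becomes an N × N array and the action reduces to a matrix-model-like expression, consistent with the remark above that GFT is a higher-dimensional extension of matrix models. The identity kinetic operator, the trace-of-Psi-cubed interaction, and all variable names are assumptions of this sketch rather than ingredients of any realistic group field theory.

import numpy as np

# Toy discretization (illustrative assumption): group -> Z_N, D = 2, so Psi(g1, g2)
# becomes an N x N real array. The kinetic operator K_G is taken to be the identity
# and the interaction to be a Tr(Psi^3) vertex, mimicking the schematic form
# S = (1/2) sum Psi K_G Psi + lambda * V(Psi) from the action quoted above.
rng = np.random.default_rng(0)
N, lam = 8, 0.1
psi = rng.standard_normal((N, N))
psi = 0.5 * (psi + psi.T)  # keep the toy field symmetric

kinetic = 0.5 * np.sum(psi * psi)                     # (1/2) sum_{g1,g2} Psi K_G Psi with K_G = 1
interaction = (lam / 3.0) * np.trace(psi @ psi @ psi)  # lambda * V(Psi), a matrix-model-like vertex
S_toy = kinetic + interaction
print("toy discretized GFT-like action:", S_toy)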
String Field Theory (SFT) provides a third quantized description of string theory, where the basic objects are string fields that can create and annihilate strings. Even though it has been historically viewed as a second quantized theory of strings, it can also be equivalently viewed as field theory defined on conformal fields and hence can be seen as a third quantized theory. The SFT action encapsulates the dynamics of these string fields and includes terms corresponding to the free propagation of strings and their interactions. In string theory, a covariant open bosonic string field theory stands as a significant milestone, offering a powerful framework for understanding the dynamics of open strings. This is constructed using an action that resembles the action of Chern-Simons theory, S= 1/2∫Ψ⋆ QΨ + λ/3∫Ψ⋆Ψ⋆Ψ, The action embodies the stringy dynamics, and the coupling constant λ controls the strength of interactions. Through this action, strings are endowed with a rich algebraic structure 𝒜, governed by a non-commutative star product ⋆: 𝒜⊗𝒜→𝒜 that encapsulates the gluing of incoming strings into composite entities. Furthermore, the action incorporates a BRST operator Q: 𝒜→𝒜 reflecting the underlying symmetries of the string worldsheet <cit.>. The string field Ψ(X) encompasses all possible string configurations, and the action S_SFT describes how these configurations evolve and interact. The BRST operator Q ensures that string field theory is gauge-invariant, and the interaction term Ψ⋆Ψ⋆Ψ represents the merging and splitting of strings. It may be noted that strings have branes, and even though attempts to construct brane field theories have been made, it has not been possible to construct a fully developed brane field theory. So, brane can be seen as derived objects in string theory, rather than fundamental objects from which an independent theory can be constructed <cit.>. So, the concept of third quantization extends the idea of second quantization to a field-theoretic setting where the fields themselves represent quantum states of a system, such as spin networks in LQG or strings in SFT. The general structure of third quantized field theories can be described using the analogy with SFT and GFT. The fields are defined over a configuration space that represents the quantum states of the underlying theory (e.g., spin networks for LQG, string configurations for SFT). The kinetic term in the action describes the free propagation of these fields. It typically takes the form ∫Ψ𝒦Ψ, where 𝒦 is an appropriate operator, and depends on the theory. In GFT, 𝒦 = 𝒦_G and in SFT :𝒦 = Q. The interaction term describes the interactions between the fundamental quantum states. It usually involves higher-order products of the fields, such as ∫Ψ^n, where the product of fields depends on the details of the theory. Thus, action for any third quantized field theory can be generically written as: S =∫𝒟Ψ( 1/2Ψ𝒦Ψ + 𝒱(Ψ) ) where 𝒦 is the kinetic operator, 𝒱(Ψ) = Ψ^n represents the interaction term, and λ is a coupling constant. Here, this product of third quantized fields, and the Kinetic term depends on the nature of the theory. The fields can form superpositions of different configurations, allowing for the exploration of a wide range of possible states. The dynamics can be formulated using a path integral over the fields, integrating over all possible configurations. 
Now any such theory can be third quantized using a path integral over all possible field configurations: 𝒵 = ∫𝒟Ψ e^i S(Ψ) which defines the partition function and encapsulates the quantum dynamics of the theory. Here, it may be noted that the third quantized field is not defined over spacetime but over field configurations, the details of which depend on the exact nature of the third quantized theory. So, the spacetime representing the universe/multiverse and the quantum fields in it will arise as emergent phenomena from such a third quantized theory <cit.>. Hence, such a third quantized theory does not exist in spacetime, but rather it exists in a Platonic realm. Here, the term Platonic realm is borrowed from philosophical theory, and it posits that the material world is not the true reality, but rather a shadow of the true reality, which consists of abstract, non-material forms or ideas <cit.>. Here, in quantum gravity, this world of abstract, non-material forms is represented by the axiomatic structure of third-quantized theories. The Platonic nature of modern physics has already been discussed <cit.>; what is interesting about third quantized quantum gravity is that even spacetime is an emergent phenomenon in it. Now, to analyze the consequences of this further, it is important to first investigate this concept of third quantization as a formal axiomatic system. § STRING FIELD THEORY AND GROUP FIELD THEORY In this section, we will review the axioms of SFT <cit.>, and use them to motivate similar axioms for GFT. This is important to understand how third quantized theories can be viewed as a formal system. Now it is known that Witten's string field theory is a formulation of string theory that describes the dynamics of strings using a field theory approach and can be analyzed as a formal axiomatic system with the following axioms <cit.>: 1. String Field: The string field Ψ is a “functional” of the string's configuration X. This is encoded in the position and momentum of the string, or equivalently in the conformal field theory language by vertex operators. It has all the modes of a string which account for its degrees of freedom. 2. Inner Product: Integration over the world sheet of the string gives the inner product ⟨·, ·⟩; this ensures that the action is a scalar quantity, and one can obtain it by integrating over all possible configurations of strings ∫ DX. 3. BRST Invariance: The action is invariant under BRST transformations, which guarantee gauge symmetry in the theory. The BRST operator Q encodes the constraints and symmetries of the theory. Physical states are identified as cohomology classes of Q, meaning they satisfy Q Ψ = 0 and are not exact, i.e., Ψ≠ Q χ for any χ <cit.>. 4. Star Product: The star product ⋆ is a non-commutative product on the space of string fields, Ψ_1 ⋆Ψ_2≠Ψ_2 ⋆Ψ_1. It represents the interaction of strings, and so captures the joining and splitting of strings. 5. Gauge Invariance: The action is invariant under a set of gauge transformations of the form: δΨ = Q Λ + Ψ⋆Λ - Λ⋆Ψ, where Λ is a gauge parameter. 6. Associativity: In string field theory, the star product ⋆, which defines the interaction between string fields, must satisfy associativity. This property ensures that the product of three string fields is independent of the order of operations. Mathematically, this can be expressed as: (Ψ_1 ⋆Ψ_2) ⋆Ψ_3 = Ψ_1 ⋆ (Ψ_2 ⋆Ψ_3) where the fields Ψ_1, Ψ_2, and Ψ_3 represent string fields. 7. 
Action Principle: The theory is governed by an action S, which is a “functional” of a string field Ψ. The action for bosonic string field theory is given by: S(Ψ) = 1/2⟨Ψ, Q Ψ⟩ + λ𝒱(Ψ) where 𝒱(Ψ) = ⟨Ψ, Ψ⋆Ψ⟩/3, ⟨·, ·⟩ denotes an inner product on the space of string fields, Q is the BRST operator, λ is the string coupling constant, and ⋆ is the star product <cit.>. Here, we note that the star product ⋆ is a non-commutative product that encodes the interaction rules for the string fields. It reflects the physical process of joining and splitting strings. The associativity property ensures that the product of three string fields is well-defined and consistent, regardless of how the fields are grouped. This would be true even in other third-quantized theories, like GFT. As GFT is also a third quantized theory <cit.>, we can use the axioms of Witten's SFT to motivate the axioms for GFT. Thus, using the work done on SFT and the properties of GFT, it is possible to propose the following axioms for GFT: 1. Field on Group Manifold: The fundamental variables are fields Ψ(g_1, g_2, …, g_n) defined on a group manifold, where g_i are elements of a Lie group G <cit.>. 2. Inner Product: Given two group fields Ψ_1 and Ψ_2, the inner product ⟨Ψ_1, Ψ_2 ⟩ is generally defined as: ⟨Ψ_1, Ψ_2 ⟩ = ∫∏_i=1^d dg_i Ψ_1(g_1, g_2, …, g_d)Ψ_2(g_1, g_2, …, g_d) where d is the number of group elements associated with the field and dg_i is the Haar measure on the group G, which ensures that the integral is invariant under group transformations. 3. Gauge Invariance: The theory is invariant under local gauge transformations of the fields, typically under the action of the group G (for h ∈ G <cit.>): Ψ(g_1, g_2, …, g_n) →Ψ(h g_1, h g_2, …, h g_n) 4. Star Product: In GFT, the interaction can also be defined using a ⋆-product, which is a noncommutative product that combines group fields. For three group fields Φ_1, Φ_2, and Φ_3 defined on the group G, the star product is defined as <cit.>: (Φ_1 ⋆Φ_2 ⋆Φ_3)(g_1, g_2, g_3, g_4) = ∫ dh Φ_1(g_1, g_2, h) Φ_2(h^-1, g_3, g_4) Φ_3(g_4^-1, h) The fields Φ_1, Φ_2, and Φ_3 are functions defined on the group manifold G. Each field depends on three group elements, for instance, Φ(g_1, g_2, g_3). The integral ∫_G dh is taken over the group G with respect to the Haar measure dh, which ensures invariance under group transformations. 5. Symmetry and Invariance: The action should respect the symmetries of the underlying group manifold, such as rotational and Lorentz invariance for relevant physical applications. 6. Associativity: The interaction terms should respect an associative product structure, analogous to the star product in SFT, to ensure the consistency of the interactions: (Ψ_1 ⋆Ψ_2) ⋆Ψ_3 = Ψ_1 ⋆ (Ψ_2 ⋆Ψ_3) where the fields Ψ_1, Ψ_2, and Ψ_3 represent fields in GFT. 7. Action Principle: The dynamics of the fields are governed by an action S, which is a “functional” of the group fields. The action typically includes kinetic and interaction terms: S_GFT = 1/2⟨Ψ, 𝒦_G Ψ⟩ + λ𝒱(Ψ) where 𝒦_G is the kinetic operator and 𝒱 (Ψ) represents interaction terms <cit.>. Here, 𝒱(Ψ) also includes an integration involving the Haar measure and is constructed using the star product for GFT. Here again, associativity is crucial for the internal consistency of GFT, as it ensures that interactions are unambiguously defined. Thus, motivated by the axioms of SFT, it is possible to suggest similar formal axioms for GFT. 
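The non-commutativity and associativity demanded of the star product in the axioms above (items 4 and 6 for both SFT and GFT) can be illustrated by a deliberately simple finite-dimensional analogue: if fields are represented by matrices and ⋆ by matrix multiplication, the product is associative but not commutative. This is only an analogy chosen for this sketch; the actual Witten star product glues string configurations, and the GFT star product convolves fields over the group manifold.

import numpy as np

# Finite-dimensional analogue (an assumption of this sketch, not the actual string or
# group field star product): represent three "fields" as random matrices and let the
# star product be matrix multiplication, which is associative yet non-commutative.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

lhs = (A @ B) @ C          # (Psi1 * Psi2) * Psi3
rhs = A @ (B @ C)          # Psi1 * (Psi2 * Psi3)
print("associative:", np.allclose(lhs, rhs))        # True
print("commutative:", np.allclose(A @ B, B @ A))    # False in general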
§ GÖDEL'S THEOREMS APPLIED TO THIRD QUANTIZED THEORIES As both SFT and GFT are third-quantized theories, we can use them to understand the general structure of any third-quantized theory. It may be noted that even if LQG or string theory is not the theory of quantum gravity, any theory should produce spacetime and quantum fields on spacetime dynamically. Thus, it would be a theory defined on the configuration space of fields rather than spacetime and hence a third quantized theory. Now for any such theory, we can identify its general features using SFT and GFT. Thus, using the structure of SFT and GFT, we infer some general features of such an axiomatic system as follows: 1. Field on Configuration Space The fundamental objects are fields defined on a configuration space χ that represents the degrees of freedom of the theory. For example, in SFT, these are string fields Ψ[X] where X represents the string configuration <cit.>, and in GFT, these are group fields Ψ(g_1, g_2, …, g_n) <cit.>. 2. Inner Product and Hilbert Space There is a well-defined inner product on the space of fields ⟨Ψ, Ψ⟩, ensuring that the action is a scalar quantity. This inner product induces a Hilbert space structure on the space of states and can be defined by integrating over the configuration space on which the third quantized field is defined. 3. Gauge Invariance: The field Ψ(χ) is subject to gauge transformations that ensure the invariance of the action. These transformations depend on the specific symmetries of the configuration space χ. 4. Interaction Terms: The interaction terms 𝒱(Ψ) describe the interactions between the fields. These terms are constructed to respect the symmetries of the theory and involve higher-order products of the fields. To construct such interaction terms, we can define an associative ⋆ product, which is generally defined in the field space. For two fields Φ_1 and Φ_2, the ⋆ product Φ_1 ⋆Φ_2 is given by: (Φ⋆Ψ)(χ) = ∫ dχ_1 dχ_2 K(χ, χ_1, χ_2) Φ(χ_1) Ψ(χ_2) where χ, χ_1, and χ_2 denote third quantized fields, and K is a kernel that encodes the interaction rules. 5. Kinetic Term: The kinetic term 𝒦 governs the free propagation of the fields and typically involves appropriate operators operating in third-quantized fields. For SFT, this would be the BRST operator 𝒦 = Q, and for GFT this would be 𝒦 = 𝒦_G. 6. Associativity: The interaction terms should respect an associative product structure to ensure the consistency of the theory. This is analogous to the star product in SFT and the product structure in GFT. 7. Symmetry and Invariance The action and the theory respect the symmetries of the underlying configuration space, such as the Lorentz invariance in SFT <cit.> and the symmetries of the group manifold in GFT <cit.>. 8. Action Principle The dynamics of the fields are governed by an action S, which is a “functional" of these fields. The action typically includes kinetic and interaction terms: S(Ψ) = 1/2⟨Ψ, 𝒦Ψ⟩ + λ𝒱(Ψ) where Ψ represents the field, 𝒦 is the kinetic operator, 𝒱 (Ψ) represents interaction terms, and ⟨·, ·⟩ denotes an appropriate inner product. Here, 𝒱 (Ψ) (like the inner product) would also include an integration over the configuration space of the third quantized theory. The details of this world depend on the specifics of the theory. The strength of the interaction is controlled by the coupling constant λ. 
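As a minimal numerical illustration of how a partition function is built from an action of the generic form S(Ψ) = 1/2⟨Ψ, 𝒦Ψ⟩ + λ𝒱(Ψ), the sketch below truncates the configuration space to a single real mode, uses a Euclidean weight e^{-S} instead of e^{iS} so the integral converges, and takes a quartic interaction so that S is bounded below. All of these simplifications are assumptions of this sketch, not features of any realistic third quantized theory.

import numpy as np

# Single-mode toy (all choices are illustrative assumptions): the "field" is one real
# variable psi, the action is S = (1/2) k psi^2 + lam * psi^4, and the partition
# function Z = integral dpsi exp(-S) is approximated on a finite grid.
k, lam = 1.0, 0.25
psi = np.linspace(-6.0, 6.0, 4001)
S = 0.5 * k * psi**2 + lam * psi**4
Z = np.trapz(np.exp(-S), psi)
print("toy partition function Z ≈", Z)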
Now we observe that third quantized theories are basically consistent formal systems ℱ, which are not present in spacetime, but rather spacetime emerges from them as their consequence. Thus, they exist in some Platonic realm, and in that Platonic realm, apart from that formal system ℱ, there also exists a computational algorithm 𝒞 to derive the corollaries of that system ℱ. The spacetime along with the quantum fields on it emerges as a corollary of that system ℱ. For SFT, this system is represented by the axioms of SFT and the axioms of quantum mechanics needed to third quantize it, and similarly for GFT this structure is represented by the axioms of GFT and the axioms of quantum mechanics. It may be noted that quantum mechanics can be viewed as an axiomatic structure <cit.>. Using these axioms of quantum mechanics, we could possibly define an operator algebra for any third-quantized theory of quantum gravity. Thus, the theory should include an algebra of operators that create and annihilate configurations, reflecting the third quantization process. These operators satisfy commutation or anti-commutation relations depending on the nature of the third quantized field. The spacetime and quantum fields on it would emerge as emergent phenomena using these operators. It is possible that the final theory could modify the operator algebra too, due to quantum gravitational effects. This has already been proposed in objective collapse models, where gravitational effects modify quantum mechanics and cause a scale-dependent collapse of the wave function <cit.>. In fact, such a modification of quantum mechanics has been applied to the second quantized Wheeler-DeWitt equation, and it resolves certain problems associated with the usual Wheeler-DeWitt equation <cit.>. It would thus be possible to generalize this work to third-quantized quantum gravity. Thus, along with the axioms of quantum mechanics (or its suitable modifications <cit.>), third-quantized quantum gravity can be viewed as an axiomatic system ℱ. The spacetime will be an emergent phenomenon from it, and so it cannot possibly be defined in spacetime. Thus, this system will exist in the Platonic realm and not in spacetime. Gödel's incompleteness theorems <cit.> will now apply to third quantized theories, as they are represented by a formal system ℱ that exists in a Platonic realm. Two of the most important findings in mathematical logic are Gödel's incompleteness theorems, which show that formal axiomatic systems that can represent elementary arithmetic have intrinsic limits. Now, as any third quantized theory can be viewed as such a formal axiomatic system ℱ, Gödel's incompleteness theorems will be applicable to them. Thus, if we start by constructing a consequence 𝒢 within ℱ such that 𝒢 asserts its own unprovability, we have 𝒢≡“This statement is not provable in ℱ.” In the formal system ℱ, there exists a sentence 𝒢 such that ℱ⊬𝒢 and ℱ⊬¬𝒢. This means 𝒢 is true but unprovable within ℱ. Gödel's Second Incompleteness Theorem further states the following: let Con(ℱ) be the statement within ℱ that asserts the consistency of ℱ, Con(ℱ) ≡“There is no statement φ such that both φ and ¬φ are provable in ℱ.” The formal system ℱ cannot prove its own consistency, that is, ℱ⊬Con(ℱ). If ℱ is consistent, then Con(ℱ) is true, but Con(ℱ) is not provable within ℱ. Here, the third quantized fields Ψ, such as string fields or fields in GFT, and the related operators/functions play the role of formal axioms and rules of inference. 
The statements within the formal system ℱ correspond to possible configurations and interactions of the third quantized field Ψ and its associated operations. Arithmetic statements are encoded within this framework, representing numbers and arithmetic operations using the structures in the system. Let ℱ be the formal system derived from any third quantized theory, such as SFT or GFT. The Gödel sentence 𝒢 is defined as 𝒢≡ “This configuration cannot be derived from the axioms of ℱ.” This means that ℱ⊬𝒢 and ℱ⊬¬𝒢. Moreover, Con(ℱ) is defined as the consistency statement Con(ℱ)≡ “There exists no configuration φ such that both φ and ¬φ are derivable from the axioms of ℱ.” Therefore, we have ℱ⊬Con(ℱ). These expressions cover the use of Gödel's incompleteness theorems within a formal system representing any third quantized theory. Thus, they reveal limits on proving some statements and on establishing the self-consistency of this system. The third quantized theories, which are formal systems ℱ existing in the Platonic realm, will thus be subject to Gödel's incompleteness theorems. § A CONSISTENT AND COMPLETE THIRD QUANTIZED THEORY Now the problem with this application of Gödel's incompleteness theorems in the Platonic realm is that it is applied to the actual axiomatic structure ℱ describing reality rather than the human understanding of ℱ (which we will denote by F). It is possible to have inconsistencies in F, but it is by definition impossible to have inconsistencies in ℱ itself. Furthermore, all physical phenomena in the universe/multiverse are obtained as corollaries of ℱ using some computational algorithm 𝒞, which also exists in the Platonic realm. However, there are things which are true due to the very structure of ℱ, but cannot be obtained using any computational algorithm 𝒞. Thus, something more is required in the Platonic realm other than ℱ and 𝒞 to resolve this problem. Now it may be noted that the Lucas-Penrose argument has addressed the Gödelian limitations in F. Using Gödel's incompleteness theorems, the Lucas-Penrose argument contends that mechanical systems (such as computers or formal systems) are unable to accurately represent human minds <cit.>. It aims to explain how the human mind can overcome the Gödelian limitations and see the validity of the Gödelian statement. Thus, for humans, if they identify F, and also identify a computational algorithm C (which will correspond to the human understanding of 𝒞) to derive corollaries of F, they will also have a non-computational, non-algorithmic understanding denoted by N, which will overcome the Gödelian limitations. Using N they can see the validity of Gödelian statements. Now we generalize this original Lucas-Penrose argument to the Platonic realm, and so, corresponding to N, we define a non-computational, non-algorithmic procedure of obtaining the validity of Gödelian statements of ℱ as 𝒩. It may be noted that, like ℱ and 𝒞, this 𝒩 also operates in the Platonic realm and should not be confused with the human understanding of 𝒩 (denoted by N). Thus, we will now apply the argument in an abstract setting to the actual theory in the Platonic realm, where any computation performed using the formal system ℱ is denoted as 𝒞, and any conclusion derived in a non-algorithmic, non-computational way is denoted as 𝒩. Gödel's incompleteness theorems apply if we remain restricted to 𝒞. However, in the Platonic realm, we will see how the generalization of the Lucas-Penrose argument resolves this difficulty. 
To mathematically express how this argument might overcome the limitations in a formal system based on any third quantized theory of quantum gravity, we need to illustrate how 𝒩 identifies truths that the formal system ℱ cannot prove in the Platonic realm using 𝒞. Let 𝒢 be a Gödel sentence in a formal third quantized system ℱ: 𝒢≡“This statement is not provable in ℱ.” According to Gödel's first incompleteness theorem, ℱ⊬𝒢 and ℱ⊬¬𝒢. The Lucas-Penrose argument says that if 𝒩 is used, it can be seen that 𝒢 is true even if ℱ cannot prove 𝒢. Formally, one can say by using 𝒩 that the truth of 𝒢 is beyond the formal system ℱ. This understanding allows us to enlarge the formal system ℱ into another system ℱ^' which includes 𝒢 as an additional axiom: ℱ^' = ℱ + {𝒢}. To verify this, we note that if ℱ is consistent, then ℱ^' is also consistent. To summarize mathematically, we express the overcoming of limitations as follows: 𝒢≡“This statement is not provable in ℱ.” So, ℱ⊬𝒢 and ℱ⊬¬𝒢, while 𝒩 establishes the truth of 𝒢, allowing the extension ℱ^' = ℱ + {𝒢}. Essentially, such a generalization of the Lucas-Penrose argument suggests that the limitations imposed by Gödel's theorems in a formal system based on a third quantized theory can be transcended through a non-algorithmic 𝒩. The original Lucas-Penrose argument based on F, C, N has drawn its own criticisms. The argument makes the assumption that a single formal system can encapsulate human reasoning. But informal thinking that is not captured by a fixed formal system, or by a succession of changing formal systems, may be involved in human reasoning <cit.>. Furthermore, the argument is predicated on the consistency of the formal system that models human reasoning. The argument falls apart if human reasoning is inconsistent <cit.>. These criticisms of the original Lucas-Penrose argument aim at the human understanding of ℱ, 𝒞, 𝒩, i.e., F, C, N, rather than at the actual ℱ, 𝒞, 𝒩 in the Platonic realm. For the ℱ in the Platonic realm, as the universe/multiverse exists as its consequence through 𝒞, we also need an 𝒩 in the Platonic realm to avoid problems due to Gödel's theorems. It is consistent to acknowledge the inconsistency of human knowledge, but it is entirely inconsistent to acknowledge the fundamental inconsistency of actual physical reality. Our physical reality, involving the universe/multiverse and quantum fields in it, is produced by the need for a complete and consistent real ℱ in the Platonic realm. However, the application of Gödel's theorems to ℱ limits what can be obtained from 𝒞, and it is not possible to obtain Gödelian consequences of ℱ through 𝒞. Assuming the existence of a non-computational part of reality 𝒩, in addition to the computational algorithm 𝒞, is the only way that reality can be consistent. Since 𝒩 can get past the Gödelian obstacles and even produce a consistent ℱ and 𝒞, 𝒩 may actually be considered more fundamental than both ℱ and 𝒞, as it is capable of producing ℱ or 𝒞, but not vice versa. It may be noted that some ideas claim that the implications of 𝒩 have already been observed in nature. It has been suggested that the standard quantum mechanics based on the Copenhagen interpretation has several problems, such as the need for an observer <cit.>. These problems are resolved in a modification of quantum mechanics where an objective collapse occurs <cit.>. Furthermore, the Copenhagen interpretation and most other interpretations of quantum mechanics need an exterior physical entity, so they cannot be used to explain the quantum-to-classical transition in cosmology. 
This difficulty is again easily resolved by collapse models, where collapse occurs in an observer-independent and scale-dependent way <cit.>. An important approach to such objective collapse is based on gravitationally induced decoherence, where it is proposed that gravity plays a fundamental role in the decoherence of quantum systems, effectively acting as a mechanism that causes a quantum system to transition into a classical state. This has been studied using the Diosi-Penrose (DP) approach <cit.>, which postulates that quantum superposition of mass distributions gives a fundamental time-scale for decoherence. The approach holds that unstable superpositions between states with markedly different gravitational fields cause the system to collapse into one of the possible states <cit.>. This collapse time is proportional to the inverse of the gravitational self-energy of the difference between the mass distributions. Apart from this approach, other models of objective collapse have also been proposed <cit.>. It has been suggested that the fundamental indeterminacy in quantum mechanics in collapse models could be an example of a Gödelian phenomenon in physical theory <cit.>. Orch-OR theory uses such quantum collapse models to provide a basis for the original Lucas-Penrose argument (involving F, C, N). According to this theory, objective reduction of the brain’s quantum state creates consciousness, and consciousness is identified with the presence of N in humans” <cit.>. So, in the Orch-OR's description, quantum collapse which could be related to 𝒩 provides a mechanism in the brain for the Lucas-Penrose contention that human cognition is non-algorithmic N <cit.>. Thus, the mechanism underlies the Orch-OR <cit.> is a Gödelian consequence obtained from ℱ via 𝒩 and not 𝒞, and this gives rise to N in the human brain. It may be noted that even if Orch-OR is not true, the argument in the Platonic realm stands. This is because due to Gödel's theorems, there will always exist consequences of ℱ, which can only be obtained via 𝒩 and not 𝒞. It is possible that quantum collapse is such a consequence, but even if it is not the existence of 𝒞 is needed to overcome limitations imposed by the Gödel theorems. So, a Gödelian consequence of ℱ will be true, but it will only be possible to obtain it by 𝒩, and not 𝒞. Therefore, for any third quantized theory describing reality, we must have a non-algorithmic 𝒩 in the Platonic realm. This is the only way for it to be a fully consistent and complete description of reality. § CONCLUSION In this paper, we have argued that in any theory of quantum gravity, spacetime and quantum fields on spacetime would be an emergent structure, and should be dynamically produced. Thus, this theory should be constructed in the configuration space of fields rather than spacetime. Any such theory of quantum gravity would be a third quantized theory. In fact, the third quantized LQG can be represented by GFT, and the third quantized string theory can be represented by SFT. It may be noted that SFT apart from being a second quantized theory of strings can also be viewed as a third quantized theory of conformal fields, and hence can be consistently analyzed as a third quantized theory. We use the axioms of string field theory to motivate the construction of such axioms for GFT. Then we use the general structure of axioms of both SFT and GFT, to construct the general feature of axioms for any third quantized theory of quantum gravity. 
As we have argued that such theories produce spacetime, they cannot be defined in spacetime. They rather exist in a Platonic realm, and spacetime emerges from them from a computational algorithm. Thus, apart from the formal axiomatic structure of the third quantized quantum gravity, a computational algorithm also exists in the Platonic realm. This actualizes the corollaries of that axiomatic system, and the universe/multiverse with quantum fields exits as a corollary of that axiomatic system. However, as it is a formal axiomatic system, Gödel theorem will apply to it. There will be things that are true but cannot be obtained from a computational algorithm. The consistency of the axiomatic system will be one such thing, which cannot be obtained from it. To overcome this difficulty, it is proposed that apart from the computational algorithm, it will also be possible to obtain non-computational non-algorithmic truths in the Platonic realm related to the axiomatic system. This is done by generalizing the original Lucas-Penrose argument to the Platonic realm. The main difference between the argument here and the original Lucas-Penrose argument is that the original Lucas-Penrose argument applies to human understanding of reality, and the argument here applies to the actual reality in the Platonic realm. This seems to be the only way to overcome the Gödelian limitations in the Platonic realm and produce a complete consistent third-quantized theory of quantum gravity. 100 Penrose1965 R. Penrose, Phys. Rev. Lett. 14, 57 (1965). Hawking1970 S. W. Hawking and R. Penrose, Proc. R. Soc. A 314, 529 (1970). Garay1995 L. J. Garay, Int. J. Mod. Phys. A 10, 145 (1995). Polchinski1998 J. Polchinski, String Theory, Vol. 1: An Introduction to the Bosonic String (Cambridge University Press, Cambridge, England, 1998). Lust1989 D. Lüst, Nucl. Phys. B 326, 557 (1989). Brandenberger:2018xwl R. Brandenberger, R. Costa, G. Franzmann and A. Weltman, Phys. Rev. D 98, no.6, 063521 (2018). Bossard:2002ta A. Bossard and N. Mohammedi, Nucl. Phys. B 651, 249-262 (2003). Ashtekar2005 A. Ashtekar and J. Lewandowski, Class. Quantum Grav. 21, R53 (2005). Rovelli2004 C. Rovelli, Quantum Gravity (Cambridge University Press, Cambridge, England, 2004). Ashtekar2006 A. Ashtekar, T. Pawlowski, and P. Singh, Phys. Rev. D 74, 084003 (2006). Bojowald2005 M. Bojowald, Living Rev. Relativity 8, 11 (2005). Bojowald2001 M. Bojowald, Phys. Rev. Lett. 86, 5227 (2001). Bekenstein1973 J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973). Hawking1975 S. W. Hawking, Commun. Math. Phys. 43, 199 (1975). Jacobson1995 T. Jacobson, Phys. Rev. Lett. 75, 1260 (1995). Awad:2014bta A. Awad and A. F. Ali, JHEP 06, 093 (2014). Salah:2016kre M. Salah, F. Hammad, M. Faizal and A. F. Ali, JCAP 02, 035 (2017). Cai2010 Y. F. Cai and E. Wilson-Ewing, J. Cosmol. Astropart. Phys. 2014, 026 (2014). Horowitz1996 G. T. Horowitz and J. Polchinski, Phys. Rev. D 55, 6189 (1996). mi12S. L. Braunstein, M. Faizal, L. M. Krauss, F. Marino and N. A. Shah, Nature Rev. Phys. 5, no.10, 612-622 (2023). mi14M. Faizal, Int. J. Mod. Phys. A 38, no.35n36, 2350188 (2023). Gielen2016 S. Gielen and D. Oriti, Can. J. Phys. 93, 783 (2016). Oriti2016 D. Oriti, Class. Quantum Grav. 33, 085005 (2016). Oriti2017 D. Oriti, Approaches to Quantum Gravity (Cambridge University Press, Cambridge, England, 2017). Kiefer2012 C. Kiefer, Quantum Gravity (Oxford University Press, Oxford, 2012). Kuchar1992 K. V. 
Kuchar, in Proceedings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics (1992). Hartle1983 J. B. Hartle and S. W. Hawking, Phys. Rev. D 28, 2960 (1983). Vilenkin1983 A. Vilenkin, Phys. Rev. D 27, 2848 (1983). Oriti2011 D. Oriti, Found. Phys. 41, 1176 (2011). Freidel2005 L. Freidel, Int. J. Theor. Phys. 44, 1769 (2005). Perez2003 A. Perez, Class. Quantum Grav. 20, R43 (2003). Siegel1999 W. Siegel, Int. J. Mod. Phys. A 4, 2015 (1999). Hata1986 H. Hata, K. Itoh, T. Kugo, H. Kunitomo, and K. Ogawa, Phys. Lett. B 172, 186 (1986). Sen1999 A. Sen, J. High Energy Phys. 1999, 027 (1999). Zwiebach1993 B. Zwiebach, Nucl. Phys. B 390, 130 (1993). 12K. Gödel, Monatshefte für Mathematik und Physik, 38, 173-198 (1931). 12a R. Smullyan, Gödel's Incompleteness Theorems, (Oxford University Press, New York, 1992). Giddings S. B. Giddings and A. Strominger, Nucl. Phys. B 321, 481 (1989). Thiemann:2002nj T. Thiemann, Lect. Notes Phys. 631, 41 (2003). Oriti2006 D. Oriti, arXiv:gr-qc/0607032 (2006). Oriti2012 D. Oriti, in Loop Quantum Gravity: The First 30 Years, edited by A. Ashtekar and J. Pullin (World Scientific, 2012), p. 235. Baratin2012 A. Baratin and D. Oriti, Phys. Rev. Lett. 105, 221302 (2012). Siegel1988 W. Siegel, Introduction to String Field Theory (World Scientific, 1988). Kostelecky1989 V. A. Kostelecký and S. Samuel, Phys. Rev. D 40, 1886 (1989). W Private correspondence with E. Witten (2024). P1W. D. Ross, Plato’s Theory of Ideas (Oxford University Press, 1951). P2G. Fine, Plato on Knowledge and Forms: Selected Essays (Oxford University Press, 2003). P R. Machleidt, Bull. Am. Phys. Soc. 51, B2.00005 (2006). Witten1986 E. Witten, Nucl. Phys. B 268, 253 (1986). zwiebach1992 B. Zwiebach, Nucl. Phys. B 390, 33 (1992). taylor2004 W. Taylor and B. Zwiebach, arXiv:hep-th/0311017 (2003). brst1976 C. Becchi, A. Rouet, and R. Stora, Ann. Phys. 98, 287 (1976). Oriti2007 D. Oriti, Class. Quantum Grav. 27, 085005 (2007). Sakurai J. J. Sakurai, Modern Quantum Mechanics (Addison-Wesley, 1994). Ballentine L. E. Ballentine, Quantum Mechanics: A Modern Development (World Scientific, 1998). Nielsen M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000). Bassi2003 A. Bassi and G. Ghirardi, Phys. Rep. 379, 257 (2003). Penrose1994 R. Penrose, Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford University Press, 1994). j1J. L. Gaona-Reyes, L. Menéndez-Pidal, M. Faizal and M. Carlesso, JHEP 02, 193 (2024) j2S. Banerjee, S. Bera and T. P. Singh, Int. J. Mod. Phys. D 24, no.12, 1544011 (2015). Lucas1961 J. R. Lucas, Philosophy 36, 120 (1961). Penrose1989 R. Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford University Press, 1989). Searle1992 J. Searle, The Rediscovery of the Mind (MIT Press, 1992). Chalmers1996 D. Chalmers, The Conscious Mind (Oxford University Press, 1996). Nature Nature Physics 18, 243 (2022). Physical Physical Review A 40, 1165 (1989). Physical1 Physical Review A 90, 062105 (2014). Physical2 Physical Review Letters 123, 080402 (2019). Physical3 Physical Review Letters 91, 130401 (2003). Penrose1996 R. Penrose, Gen. Relativ. Gravit. 28, 581 (1996). Karolyhazy1966 F. Karolyhazy, Nuovo Cimento A 42, 390 (1966). Bassi2017 A. Bassi et al., Rev. Mod. Phys. 85, 471 (2017). ghirardi1986 G. C. Ghirardi, A. Rimini, and T. Weber, Phys. Rev. D 34, 480 (1986).
http://arxiv.org/abs/2407.13453v1
20240718123120
A uniquely solvable and positivity-preserving finite difference scheme for the Flory-Huggins-Cahn-Hilliard equation with dynamical boundary condition
[ "Yunzhuo Guo", "Cheng Wang", "Steven M. Wise", "Zhengru Zhang" ]
math.NA
[ "math.NA", "cs.NA" ]
http://arxiv.org/abs/2407.13129v1
20240718033846
Apparatus for Optical-Atomic System Integration & Calibration: 1 atm to 1$\times$10$^{-11}$ Torr in 24h
[ "G. Kestler", "K. Ton", "J. T. Barreiro" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "physics.atom-ph", "quant-ph" ]
]Apparatus for Optical-Atomic System Integration & Calibration: 1 atm to 1×10^-11 Torr in 24h Department of Physics, University of California San Diego, California 92093, USA § ABSTRACT Ultracold atoms exquisitely controlled by lasers are the quantum foundation, particularly for sensing, timekeeping, and computing, of state-of-the-art quantum science and technology. However, the laboratory-scale infrastructure for such optical-atomic quantum apparatuses rarely translates into commercial applications. A promising solution is miniaturizing the optical layouts onto a chip-scale device integrated with cold atoms inside a compact ultra-high vacuum (UHV) chamber. For prototyping purposes, however, rapidly loading or exchanging test photonic devices into a UHV chamber is limited by the evacuation time from atmospheric pressures to the optimal pressures for ultracold atoms of 1×10^-11 Torr, a process typically taking weeks or months without cryogenics. Here, we present a loadlock apparatus and loading procedure capable of venting, exchanging, and evacuating back to <1×10^-11 Torr in under 24 hours. Our system allows for rapid testing and benchmarking of various photonic devices with ultracold atoms. [ J. T. Barreiro July 22, 2024 ================== § INTRODUCTION Optically controlled ultracold atoms are at the core of versatile and powerful platforms for quantum science and technologies. These platforms are used for highly accurate sensors, which often rely on matterwave interferometry <cit.> and precision spectroscopy <cit.>. Matterwave interferometers have demonstrated substantial improvements for inertial sensing <cit.>, and devices leveraging spectroscopy with narrow-line atomic transitions provide the most stable and accurate atomic clocks to date <cit.>. Beyond sensing and timekeeping applications, such experiments also contribute to measurements of fundamental constants <cit.>, collective radiative enhancements through cavity quantum electrodynamics (QED) <cit.>, and scalable quantum computing architectures with neutral atom qubits <cit.>. However, nearly all such experiments rely on laboratory-scale complex optical setups and ultra-high vacuum (UHV) chambers with limited optical access. Miniaturization by integrating cold atoms with an optical setup on a chip <cit.> would make quantum technologies more accessible for quantum sensing, atomic timekeeping, and quantum computing. Developing novel complex technologies is an iterative process involving extensive prototyping, benchmarking, and validation. In particular, the UHV pressures required for cold atom experiments limit rapid testing since each time a new device is loaded, the chamber is subjected to atmospheric pressures. One solution is to use specially designed UHV loadlock systems, where devices are loaded into a separate chamber called a loadlock, which can be rapidly evacuated to UHV pressures before transferring the device to a science chamber with ultracold atoms. So far, this process takes about a week to vent and return to 5×10^-10 Torr <cit.>. Though using cryogenics to cool the loadlock and science chamber significantly speeds up the process <cit.>, this approach requires external cooling apparatuses to maintain UHV pressures. Here, we present a loadlock-based apparatus and loading procedure capable of opening the chamber to atmospheric pressures and returning to a UHV of 1×10^-11 Torr within 24 hours. 
The entire apparatus sits on a 30"×48" optical breadboard, and the ≈1.3 L total loadlock volume also allows for isolation of the experiment with only ion pumps and non-evaporable getters (NEGs). § VACUUM APPARATUS The entire apparatus consists of three main sections: a cold-atom beam source, a science chamber with ultracold atoms, and a loadlock chamber (see Fig. <ref>(a)). The cold-atom beam source (AOSense, Inc.) is separated from the science chamber by a mini UHV gate valve —all UHV valves are from VAT Group AG. The science chamber is held at 1×10^-11 Torr with a 40 l/s ion pump (Gamma Vacuum) and two NEGs (Gamma Vacuum) at 200 l/s pumping speed each. Inside the science chamber is a sample holder (Ferrovac GmbH) mounted with a steel bar and groove grabbers (Kimball Physics Inc.). Samples loaded into the loadlock are transferred to the science chamber with a wobblestick manipulator (Ferrovac GmbH). For ease of reference, the loadlock chamber is further organized into five subsections: loadlock, UHV pumps, bridge, turbo, and venting. Before transferring devices into the science chamber, the samples are loaded into the loadlock at atmospheric pressures and evacuated to 1×10^-11 Torr within 24 hours. The central component of the loadlock section is a small six-sided cube (Fig. <ref>(b)) with each connection referenced as (1-6) below. The cube is connected to (1) the science chamber through an all-metal UHV gate valve and (2) a UHV pumping section with a 40 l/s ion pump (Starcell Agilent) and two NEGs (Gamma Vacuum) at 200 and 300 l/s pumping speeds. The 1.5-inch internal diameter of the bellows between the science chamber and loadlock sets the maximum cross-section of the devices that can be loaded into the science chamber. A mini UHV gate valve (V1) between the loadlock and its UHV pumping section isolates the sensitive pumps while venting and prevents the need for re-conditioning and re-activating the NEGs. The cube also connects to (3) the turbo-molecular pump (TMP, Agilent) through an all-metal right-angle valve (V2) and an all-metal variable leak valve (V3). This design improves pumping conductance to the TMP while providing controlled venting and evacuation for sensitive devices, such as an optical nanofiber <cit.>. The last three ports of the cube are connected to (4) the wobblestick manipulator, (5) a viewport for monitoring the sample after loading, and (6) a `blank' flange, which can be swapped out for a feedthrough flange depending on the nanophotonic device in use. The volumes of the loadlock and UHV pumping sections are 0.61 L and 0.73 L, respectively, ensuring ample vacuum pumping with the ion pump and NEGs upon complete isolation from the TMP. The bridge and turbo sections interface the loadlock to the TMP through 6-inch bellows and two mini-UHV valves (V4, V7). The bellows minimize stress on the loadlock conflat flanges since the TMP is firmly mounted to another structure. The turbo section lies between V4 and V7 and consists of a 4-way cross and a residual gas analyzer (RGA), which is used to monitor leaks from the venting section when V4 is closed. Lastly, the venting section is isolated by another mini UHV gate valve (V5) and an all-metal right-angle valve (V6), which increases isolation and reduces the leak rate through the mini UHV valve. The venting section has KF high vacuum flanges and a capacitance diaphragm pressure gauge (Inficon Group AG) to monitor the venting process to atmospheric pressures and to avoid over-pressuring the chamber. 
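The quoted volumes and pumping speeds already suggest why the chamber volume itself is not the bottleneck in reaching UHV. The following back-of-the-envelope sketch uses only standard textbook relations (the volume-gas time constant V/S and the rate-of-rise gas load Q = V·dP/dt); the effective pumping speed and the rate-of-rise numbers are assumed values for illustration, not measurements reported here, since below roughly 1e-6 Torr the pump-down is dominated by outgassing and conductance, which is why the bake matters.

import numpy as np

# Back-of-the-envelope vacuum estimates (assumed numbers, standard formulas only).
V_l = 0.61 + 0.73          # loadlock + UHV pumping section volume [liters], as quoted above
S_eff = 40.0               # assumed effective pumping speed [l/s]; conductance-limited in practice

tau = V_l / S_eff          # volume-gas time constant [s], P(t) = P0 * exp(-t / tau)
print(f"time constant V/S ≈ {tau * 1e3:.0f} ms")

# Rate-of-rise: if the isolated chamber pressure climbs by dP over dt, the gas load is
# Q = V * dP/dt, and the ultimate pressure with pumping speed S is roughly Q / S.
dP_torr, dt_s = 1e-9, 300.0          # hypothetical 1e-9 Torr rise over a 5 minute scan
Q = V_l * dP_torr / dt_s             # [Torr * l / s]
print(f"gas load Q ≈ {Q:.2e} Torr·l/s, implied base pressure ≈ {Q / S_eff:.2e} Torr")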
The final valve (V8) is connected to the venting gas line and is used to fill the chamber during the venting procedure. Various photonic devices require feedthroughs and additional cabling in-vacuum <cit.>. The bottom flange of the loadlock can be easily exchanged during the procedure; however, the additional slack needed to reach the science chamber must be appropriately handled. Our design incorporates a vertically mounted linear bellows actuator (Lesker Inc.) at the top of the loadlock with a ring at the tip of the actuator. During loading, extra cabling is fed through the ring, and the actuator is retracted upwards, pulling the slack above the photonic device. The actuator is lowered when the device is transferred into the chamber, allowing the slack to reach the science chamber. The reverse procedure ensures the gate valve is free to close completely upon removing the device from the science chamber. § PROCEDURE In addition to the critical design of the loadlock chamber, carefully implemented venting and loading procedures contribute to the rapid cycling speeds in this work. Opening directly to atmospheric pressures can leave residual water on the chamber surface and large quantities of undesired atmospheric gases, resulting in longer evacuation times. A standard solution is to vent the chamber with a constant flow of dry nitrogen gas, which the NEGs can quickly pump. Unfortunately, this can limit the lifetime of the NEGs and their ability to adsorb hydrogen outgassing from the surrounding steel. Instead, we vent the chamber with ultra-high purity (UHP) argon to extend the NEG lifetime. For additional cleanliness during the loading process, we also enclose the loadlock cube in a custom-built acrylic glovebox. Before sealing the glovebox, we place the new device, all the necessary tools for the exchange, and multiple annealed copper gaskets inside. The tools are cleaned with methanol and wrapped in UHV foil if the person loading needs to use them before handling the sample. The glovebox interior is also wiped down with methanol and ensured to be dust-free. We then flood the glovebox with UHP argon through an inline filter and maintain a positive pressure slightly below that of the chamber venting. If any air intake into the chamber occurs, it will come from the glovebox argon instead of the surrounding air. The procedure used to reach the rapid cycle times in this work is detailed in the three sections below and labeled (A) preparation, (B) loading, and (C) baking and cooling. Valves are noted as V# as shown in Fig. <ref>(b,c). §.§ Preparation (5 hours) A crucial part of this procedure is how the device to be inserted is cleaned, assembled, and prepared. * Install the glovebox around the cube so the viewport, `blank' flange, and wobblestick are contained inside the glovebox. * Clean all the necessary tools for installing the device and removing and reconnecting the conflat flanges. We include wrenches, pliers, scissors, tweezers, silver-plated screws with washers, and multiple annealed copper gaskets. Sonicating the tools is encouraged if possible, but it was not used in this work. Wrap the tool handles in UHV foil as much as possible and place the tools in the glovebox. * Assemble and clean the device, and place it in the glovebox. * Seal the glovebox and ensure ≈6 Pa of positive pressure when flooded with argon. 
§.§ Loading (1-2 hours) The loading procedure begins from UHV with a previous device already retracted from the science chamber to the loadlock, and V0 closed to isolate the science chamber from the loadlock. Keeping the loadlock section at UHV conditions requires V1 to remain open. Thus, the starting configuration for all the valves is V1, V4, and V7 opened, while all others are closed. * Close V7 and take rate-of-rise data of the turbo and bridge sections. After 5 minutes of not pumping with the TMP, we perform an analog scan. * Close V1 to isolate the NEG and ion pump while venting. * Close V4 and turn off TMP. * Begin the flow of argon to V8. We avoid over-pressuring by adding a `balloon' with a small hole (≈1 mm diameter) along the argon tubing upstream of V8. When any section is vented above atmospheric pressure, the excess gas will flow back out of the balloon, ensuring the chamber does not exceed atmospheric pressure. * Open V8 until the venting section is at atmospheric pressure on the capacitance diaphragm pressure gauge. The `balloon' should never deflate during this venting to ensure minimal atmospheric gases enter the chamber. Once the venting section is at atmospheric pressure, close V8. * Open V6. * Repeat the last two steps, alternating between V8 and V5, then V8 and V4, then V8 and V3/V2. Lastly, open V8 for continuous flow into the loadlock section. At this point, V0, V1, and V7 are closed, and all other valves are open. * Begin flooding the glovebox with argon. Allow a few minutes for the argon to replace any air previously in the glovebox. * Remove the `blank' flange from the bottom of the loadlock cube. * Remove the old device from the loadlock cube and replace the `blank' flange with a feedthrough flange if necessary. * Remove the wobblestick flange from the back of the loadlock cube and slide the wobblestick far enough back, leaving ample room to insert the new device. * Insert the new device into the wobblestick. Place a new annealed copper gasket on the wobblestick flange and slowly reconnect the wobblestick to the loadlock cube. * Hand-tighten the wobblestick flange on the loadlock cube. The argon flow to the chamber must be reduced as the loadlock section is sealed. Tighten the wobblestick flange completely and close V8 to shut off the argon flow to the chamber. * Open V7 and turn on the roughing pump and TMP. * Close V6 and V5 and turn on RGAs. Perform a quick helium leak check by filling the glovebox with helium and monitoring the RGAs. Modifying the procedure from steps <ref>-<ref> depending on the device requirements is straightforward. We also advise only opening one flange at a time, but space constraints can be unavoidable. If multiple flanges are opened simultaneously, we increase the argon flow to the chamber to keep the positive pressure flowing out of the loadlock chamber. §.§ Bake and Cooling (23 hours) After a successful exchange and helium leak check, we re-attach the baking components to the loadlock section and begin baking. Bake preparation involves placing thermocouples in various locations on the steel chamber and then wrapping a thin layer of UHV foil to distribute the heat evenly. This is followed by wrapping tape heaters and two or three additional layers to minimize thermal losses during the bake. Leaving as much of the chamber prepared as possible significantly reduces our bake preparation time. The UHV pumps, bridge, turbo, and venting sections remain prepared for a bake throughout the entire loading process. 
* Remove the glovebox from the loadlock cube and prepare the loadlock section for the bake. * Heat the chamber at a rate of ≈0.75 C/minute up to ≈90 C. Our chamber continues to rise over the subsequent hour. * Degas both RGAs. * Allow temperatures to settle around 110 C and adjust the necessary temperatures to even out any undesired gradients. * Begin cooling the bake when water reaches 2×10^-8 Torr and argon reaches < 1×10^-9 Torr. * Once the chamber temperatures are around 35 C, take a 5 minute rate-of-rise scan of the loadlock, bridge, and turbo sections by closing V7. * At chamber temperatures ≈ 30 C, open V1, and close V2 and V3. The loadlock is now isolated and pumped on the ion pump and NEGs to reach 1×10^-11 Torr. § RESULTS The procedure described in Section <ref> took a total of 1.2 hours to reach step <ref>, where we began timing the return from 1 atm to 1×10^-11 Torr shown in Fig. <ref>. Two critical indicators of the cleanliness of the loading protocol are the similar initial partial pressures of water and argon followed by a minimal increase in the partial pressures while heating the chamber. After the chamber reached a final temperature of around 110 C, the TMP efficiently pumped out the excess argon. We cooled the bake when the water reached a partial pressure of 2×10^-8 Torr at 110 C, achieving a two-order-of-magnitude drop upon reaching room temperature. While the chamber is heated, the small volume of the loadlock section increases thermal conductivity to the sample during the bake, efficiently heating the sample for more rapid cleaning. On a separate loading occasion, we successfully loaded and baked an optical ring resonator assembly, which included a thermistor for temperature monitoring. Baking the loadlock section at 70 C heated the in-vacuum sample to 50 C. Depending on the assembly materials, a modest bake of 110 C should not be problematic for many in-vacuum optics or glues. When we cannot reach the bake temperatures reported in this work, we have reached 1×10^-11 Torr within 36 hours. The chamber temperature dropped below 40 C about 21 hours after turning on the TMP. We proceed to take two RGA scans with the Bridge RGA; one with the TMP pumping (V7 open, Fig. <ref>(a)) and another scan five minutes after isolating the chamber from the TMP (V7 closed, Fig. <ref>(b)). The isolated scan (V7 closed) indicates a similar composition of atmospheric gases before the loading procedure at AMU 28 (nitrogen) and AMU 32 (oxygen). These similarities also include AMU 44, 16, 15, 14, and 12, which result from the hydrocarbons at the high-temperature RGA probe. We also note that AMU 69 is present before and after loading as well as AMU 50 and 51, which we attribute to tetrafluorides in the turbo section since the scans lack the characteristic `unzipping' of hydrocarbons or mechanical pump oil. Furthermore, closing V4 instead of V7 shows no increase in AMU 69, 50, or 51. The main notable difference between the `before' and `after' conditions is the partial pressure of AMU 40 (argon), which is still low enough to be pumped by our ion pump upon isolation despite the two orders of magnitude increase after loading. We isolated the loadlock section from the TMP when the chamber temperature reached ≈35 C, about 22 hours after turning on the TMP. Because we leave most of the vacuum chamber wrapped in preparation for a bake, the steel retains heat from the bake and takes additional time to cool from 35 C to room temperature. 
The continual drop in total pressure is consistent with the cooling of the UHV pumps section of the vacuum chamber (see Fig. <ref>), where we monitor the final pressure. We reach <1×10^-11 Torr after 23.5 hours and continue to decrease pressure consistent with the decrease in the UHV pumps temperature. § OUTLOOK Nearly all ultracold atom experiments require extensive optical setups and UHV pressures on the order of 1×10^-11 Torr for operation. Miniaturizing these experiments to a chip-scale device would significantly increase accessibility to quantum technologies. However, the path forward is iterative, and extensive prototyping will be necessary. Thus, there is an immediate need for rapid test systems. In this work, we present an ultra-high vacuum loadlock apparatus and procedure capable of loading chip-scale photonic devices at atmospheric pressures and returning to <1×10^-11 Torr in less than 24 hours. The relatively small total volume directly results in rapid pumping speeds with a turbo-molecular pump and ultra-high vacuum pressures without chamber sputtering or cryogenics. The isolated ion pump and NEGs, as well as the choice to vent with argon, preserve the lifetime of the sensitive pumping equipment. The versatile design allows for loading various photonic devices, including those with vacuum feedthroughs, such as optical fibers and electrical wiring. We have successfully loaded optical nanofibers and optical ring resonators into the science chamber to be integrated with ultracold strontium atoms. We want to thank P. Lauria for valuable input into the design and initial construction of the apparatus. We also thank W. Brunner for help assembling the newest version of the system. We acknowledge the Office of Naval Research's support under Grant No. N00014-20-1-2693. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. §.§ Author Contributions G. Kestler contributed to the design, construction, data collection, and the written manuscript. K. Ton contributed to the construction and data collection. J. T. Barreiro contributed to the conceptualization, design, written manuscript, and funding procurement. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. § REFERENCES
http://arxiv.org/abs/2407.13157v1
20240718045351
Learning Camouflaged Object Detection from Noisy Pseudo Label
[ "Jin Zhang", "Ruiheng Zhang", "Yanjiao Shi", "Zhe Cao", "Nian Liu", "Fahad Shahbaz Khan" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Learning Camouflaged Object Detection from Noisy Pseudo Label Zhang Jin et al. Beijing Institute of Technology, Beijing, China Shanghai Institute of Technology, Shanghai, China Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE Linköping University, Linköping, Sweden <https://github.com/zhangjinCV/Noisy-COD> Learning Camouflaged Object Detection from Noisy Pseudo Label Jin Zhang10009-0002-9662-7635 Ruiheng Zhang1Corresponding author. mailto:ruiheng.zhang@bit.edu.cnruiheng.zhang@bit.edu.cn0000-0002-5460-7196 Yanjiao Shi20000-0001-9689-4165 Zhe Cao10009-0001-8503-3041 Nian Liu30000-0002-0825-6081 Fahad Shahbaz Khan3,40000-0002-4263-3143 July 22, 2024 =================================================================================================================================================================================================================================================================================== § ABSTRACT Existing Camouflaged Object Detection (COD) methods rely heavily on large-scale pixel-annotated training sets, which are both time-consuming and labor-intensive. Although weakly supervised methods offer higher annotation efficiency, their performance is far behind due to the unclear visual demarcations between foreground and background in camouflaged images. In this paper, we explore the potential of using boxes as prompts in camouflaged scenes and introduce the first weakly semi-supervised COD method, aiming for budget-efficient and high-precision camouflaged object segmentation with an extremely limited number of fully labeled images. Critically, learning from such limited set inevitably generates pseudo labels with serious noisy pixels. To address this, we propose a noise correction loss that facilitates the model's learning of correct pixels in the early learning stage, and corrects the error risk gradients dominated by noisy pixels in the memorization stage, ultimately achieving accurate segmentation of camouflaged objects from noisy labels. When using only 20% of fully labeled data, our method shows superior performance over the state-of-the-art methods. § INTRODUCTION Camouflaged Object Detection (COD) aims to detect and segment objects that blend seamlessly into their environments, presenting a significant challenge due to the need to counter sophisticated camouflage tactics and distinguish subtle differences between objects and their surroundings. Recent advances in COD <cit.> have been driven by the availability of abundant segmentation labels. However, the labeling process for camouflaged objects is extremely labor-intensive, requiring about 60 minutes per image <cit.>, which poses a major obstacle to this field's development. This challenge has led to a growing trend towards exploring Weakly Supervised COD (WSCOD) methods <cit.>, utilizing simpler annotations such as points <cit.>, scribbles <cit.>, and boxes <cit.>, to potentially reduce labeling costs. Despite these efforts, the high similarity between the foreground and background in camouflaged images means that these methods still lag far behind Fully Supervised COD (FSCOD) methods in performance. Fig. <ref> shows our comparative analysis of different supervision methods, which shows the classic challenges for COD tasks in the camouflaged image, such as high intrinsic similarity and unclear visual demarcations. 
These challenges severely impact model predictions; without accurate annotation as used in the fully supervised method, weakly supervised methods tend to produce serious false positive and negative predictions. Specifically, our findings indicate that sparse annotations, such as points and scribbles, hinder the discriminator's ability to accurately distinguish camouflaged objects from their environment, leading to a higher incidence of false negatives. Conversely, denser annotations like boxes often result in clearly false positives in scenarios with unclear visual demarcations between foreground and background. Motivated by the above annotation properties, we explore the potential of utilizing box annotations as prompts for camouflaged object segmentation. Unlike points and scribbles, box annotations offer rich object information and are as cost-effective to provide as image-level and point-level annotations. We posit that using boxes as prompts offers reliability by 1) masking complex backgrounds and reducing the level of camouflage, and 2) indicating the approximate location of the object, thereby simplifying the model's search process for camouflaged objects. Thus, we formulate a new practical training scheme, Weakly Semi-Supervised Camouflaged Object Detection (WSSCOD), with box as prompt. In the WSSCOD task, we aim to achieve budget-efficient and high-performance camouflaged object segmentation using a small amount, such as 1% number of total training set, of pixel-level annotations and corresponding box prompts. WSSCOD, as depicted in Fig. <ref>, utilizes boxes as prompts to mask complex, similar backgrounds, delineating proposals for camouflaged objects. This method distinguishes camouflaged objects from their surroundings by focusing on the proposals, enabling the model to concentrate on the fine segmentation of object details rather than spending extra time searching for camouflaged objects first. Following this, we merge these proposals with the complete image to create complementary branches. This strategy reduces the impact of imprecise box locations on the model's decision boundaries. Ultimately, under the supervision of pixel-level annotations, a proposed COD model is trained with the complementary information, enabling it to generate high-quality pseudo labels with clear details for the remaining 99% of the images. Meanwhile, on an extremely limited amount (such as 1%) of fully labeled data, the network often fails to represent the overall data distribution, resulting in rough and noisy pseudo labels. Moreover, when training with such noisy labels, we observe a distinct phenomenon: Initially, in the `early learning phase', the model's learning direction is mainly influenced by the correct pixels. However, as training advances to the `memorization phase', the gradient direction is gradually influenced by noisy pixels, which heavily mislead the model's learning and ultimately result in severe false negative and positive predictions. This phenomenon has also been reported in the field of classification <cit.>. However, the manifestation of this phenomenon in COD differs from that in classification in the following aspects: 1) Unlike classification tasks where noise exists in only some samples, noisy pixels exist in every pseudo label in WSSCOD, and they are widespread in FSCOD training labels. 
2) There exists a spatial correlation among noisy pixels and between noisy pixels and correct pixels in the pseudo labels, and it is advantageous to use the spatial dependence to suppress noise. To cope with this limitation, we advocate the use of a newly proposed loss function ℒ_NC (Noise Correction Loss) to learn to segment camouflaged objects from noisy labels. ℒ_NC is able to handle different learning objectives in both the early learning and the memorization phases: During the early learning phase, ℒ_NC adapts to different fitting processes brought by different noise rates and accelerate the model's convergence to the correct pixels. Importantly, in the memorization phase, ℒ_NC forms a unified risk gradient for different predictions, maintaining the correct learning direction on up to 50% incorrect noisy pseudo labels, thereby aiding the model in effectively discerning visual demarcations. Furthermore, considering the prevalent noise issue in the COD training sets, ℒ_NC also shows superior performance in WSCOD and FSCOD methods compared to their used losses. We argue that ℒ_NC poses a major contribution, as previous segmentation work has paid less attention to noisy labels, especially in the COD task, but where noisy labels occur more easily. In summary, the main contributions of this paper are threefold: * Facing with the time-consuming and labor-intensive problem of annotating for COD tasks, we propose a cost-effective and high-performance weakly semi-supervised training scheme, and exploit the potential of box annotation as an economically accurate prompt. * We propose noise correction loss to improve the model's learning of the pseudo labels generated in WSSCOD. In the early learning and memorization phases, ℒ_NC adopts different forms to adapt to different learning objectives, ensuring correct learning of the model in the presence of noisy pixels. * Compared with 16 SOTA models on four benchmark datasets, we demonstrate the superiority of the WSSCOD method, which achieves comparable performance to existing fully supervised methods with only 20% of the annotated data for training, and proves the scalability of WSSCOD in gaining high-performance advantages with only a low-cost annotation increase. § WEAKLY SEMI-SUPERVISED CAMOUFLAGED OBJECT DETECTION Task Definition. We introduce a novel training protocol named Weakly Semi-Supervised Camouflaged Object Detection (WSSCOD), which utilizes boxes as prompts to generate high-quality pseudo labels. WSSCOD primarily leverages box annotations, complemented by a minimal amount of pixel-level annotations, to generate high-accuracy pseudo labels. Specifically, given the training set, 𝒟, that is divided into two subsets: 𝒟_m = {𝒳_m, ℱ_m, ℬ_m}_m=1^M, containing pixel-level annotations ℱ_m, box annotations ℬ_m and training images 𝒳_m. 𝒟_n = {𝒳_n, ℬ_n}_n=1^N, containing only box annotations and images, where M+N represents the number of training set. First, we train an auxiliary network, ANet, using the dataset 𝒟_m, where ℬ_m serves as an auxiliary prompt for camouflaged objects, and ℱ_m supervises the generation of pseudo labels. Afterward, using the trained ANet and the dataset 𝒟_n, we predict its pseudo labels, denoted as 𝒲_n. Finally, we construct a weakly semi-supervised dataset 𝒟_t using sets {𝒳_m, ℱ_m}_m=1^M and {𝒳_n, 𝒲_n}_n=1^N, and train our proposed primary network, PNet, which, like other COD models, takes only images as input. 
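Viewed as data flow, the protocol above is two training passes connected by a pseudo-labeling step. The sketch below writes that flow down as plain Python; the three callables it receives (`train_anet`, `predict_with_box`, `train_pnet`) are hypothetical placeholders standing in for the networks, not an interface defined by the paper, and only the split into 𝒟_m and 𝒟_n and the use of ANet's outputs as pseudo labels follow the text.

```python
def wsscod_pipeline(D_m, D_n, train_anet, predict_with_box, train_pnet):
    """Schematic WSSCOD data flow.

    D_m: iterable of (image, pixel_mask, box)  -- small fully labeled subset
    D_n: iterable of (image, box)              -- box-only remainder
    The three callables are user-supplied stand-ins for the actual models.
    """
    # Stage 1: train the auxiliary network ANet with box prompts,
    # supervised by the few available pixel-level masks.
    anet = train_anet([(img, box) for img, _, box in D_m],
                      [mask for _, mask, _ in D_m])

    # Stage 2: ANet predicts (noisy) pseudo labels W_n for the box-only images.
    pseudo = [(img, predict_with_box(anet, img, box)) for img, box in D_n]

    # Stage 3: merge real and pseudo labels into D_t and train the
    # image-only primary network PNet on it.
    D_t = [(img, mask) for img, mask, _ in D_m] + pseudo
    return train_pnet(D_t)
```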
The different numbers of M and N affect the effectiveness of PNet, thus, we evaluate the performance in various settings, with M constituting {1%, 5%, 10%, 20%} and N the remaining {99%, 95%, 90%, 80%} of the total training data 𝒟. The resulting models are named as PNet_F1, PNet_F5, PNet_F10, and PNet_F20, respectively. §.§ Auxiliary Network Segmenting camouflaged objects solely based on the box often leads to inaccuracies, as the information within the box is not always reliable <cit.>. Therefore, as illustrated in Fig. <ref>, we develop a simple and effective COD model for exploiting the complementarity between the RGB image and the proposals, but do not consider the model as a major contribution. Given an RGB image x_m and an object box b_m, we multiply them as the input proposals b_m for the box branch BB encoder, and input x_m into the image branch IB encoder. Encoder. We use two ConvNeXt <cit.> as encoders E(·) in ANet to obtain multi-scale features for different inputs. Given the input images {x_m, b_m}∈ℝ^3 × H × W, we can obtain the multi-scale features {F_x^k}_k=1^4 and {F_b^k}_k=1^4, respectively, from the branches IB and BB with the corresponding sizes of {H/2^k+1, W/2^k+1} and channels {C_1, C_2, C_3, C_4}. Following established practices <cit.>, we adjust all features to the same number of channels C=64 using 3×3 convolutions for consistency across multi-level features. In addition, we apply channel concatenation to features F_x^4 and F_b^4 and then send them to ASPP <cit.> to obtain deep image representations. And the initial mask m_an^4 ∈ℝ^1 × H × W from the representations is generated through a 3×3 convolution. Subsequently, features F_x^k and F_b^k are respectively fed into the Frequency Transformer FT to capture the details and deep semantics of the image, which facilitates the discriminator's recognition of camouflaged objects. Frequency Transformer. Drawing inspiration from FDNet <cit.>, we employ the Discrete Wavelet Transform DWT(·) to extract both low-frequency and high-frequency components from multi-scale features. This approach is instrumental in revealing more intricate object components in camouflaged scenes by leveraging the frequency domain information F^d, enhancing the understanding of such complex visual environments. Taking F_x^k as input F^d_hh_x, F^d_lh_x, F^d_hl_x, F^d_ll_x = DWT(cat(up(F_x^k)), where cat(·) indicates channel-wise concatenation, and up(·) is the up-sampling operation. In the frequency domain features, the subscripts h and l denote the extraction of high-frequency and low-frequency information, respectively, in the horizontal and vertical directions. To efficiently process the obtained frequency domain information and spatial domain features, we adopt an adaptive nonlinear fusion approach Υ_ω(F, F^d), where ω is the learnable parameters that adjusts the degree of fusion between the two adaptively. We accomplish this fusion of shallow features F_x^1, F_x^2 with the high-frequency component F^d_hh_x separately, and the integration of deep features F_x^3, F_x^4 with the low-frequency component F^d_ll_x. Other components such as F^d_lh_x and F^d_hl_x are usually not used. In FT, we represent Υ_ω(·) using successive convolution and channel concatenation. For easier description, the total steps in FT are represented by Φ_f(·), with f specifying the type of input features. In a similar manner, F̂_x^k and F̂_b^k can be obtained through Φ_x(F_x^k) and Φ_b(F_b^k). 
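As a rough illustration of the frequency split that the DWT step performs on a feature map, the sketch below applies a single-level 2-D Haar transform to a tensor of shape (B, C, H, W) using plain tensor slicing. The Haar choice and the normalization are assumptions made for brevity, standing in for the wavelet actually used in the FDNet-style module described above.

```python
import torch

def haar_dwt2d(x: torch.Tensor):
    """Single-level 2-D Haar split of a (B, C, H, W) feature map
    into four sub-bands, each of shape (B, C, H/2, W/2)."""
    a = x[..., 0::2, 0::2]  # top-left of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-low: coarse content
    lh = (a - b + c - d) / 2  # high-pass along the width axis
    hl = (a + b - c - d) / 2  # high-pass along the height axis
    hh = (a - b - c + d) / 2  # high-high: fine details and edges
    return ll, lh, hl, hh

# Example: split a dummy feature map with C = 64 channels.
feat = torch.randn(2, 64, 96, 96)
ll, lh, hl, hh = haar_dwt2d(feat)
print(ll.shape, hh.shape)  # torch.Size([2, 64, 48, 48]) for both
```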
Before proceeding with the decoding, we also perform the adaptive fusion of F̂_x^k and F̂_b^k through Υ_ω(·), as F_c^k = Υ_ω(F̂_x^k, F̂_b^k). Reverse Fusion Decoder. We design a reverse fusion decoder to complete the convergence of multi-level features. Given the features {F_c^k}_k=1^4 and the mask m_an^4, we accomplish the fusion of multi-level features in the UNet manner. Meanwhile, we integrate a reverse mask, associating the background with difficult areas or noisy pixels in COD, amplifying the differences between them and the correct pixels, and correcting the model's learning of difficult areas, as p_a n^k= Υ_ω(Υ_ω(𝐅_c^k, up(m_a n^4)), up(Rev(m_a n^4))) + up(m_a n^4), k = 4 Υ_ω(Υ_ω(𝐅_c^k, up(p_a n^k+1)), up(Rev(p_a n^k+1))) + up(p_a n^k+1), k ∈{3,2,1}, where Rev(p_an^k+1) = -1 ×σ(p_an^k+1) + 1, σ is the sigmoid function. {p_an^k}_k=1^4 ∈ℝ^1 × H × W and m_an^4 are the predictions, in which p_an^1 is the main output of ANet. The decoding process is defined as Π(·), which means p_an^k = Π(F_c^k, m_an^4). §.§ Primary Network With the pretrained ANet, we predict the pseudo segmentation labels 𝒲_n by using the image set {𝒳_n}_n=1^N with the corresponding box annotations ℬ_n, generating the training dataset 𝒟_n = {𝒳_n, 𝒲_n} for the primary network PNet. Additionally, to align with existing methodologies and to maintain a consistent number of training images, we integrate the fully labeled dataset 𝒟_m = {𝒳_m, ℱ_m}_m=1^M into 𝒟_n to form the total training dataset {𝒟_t}_t=1^M+N. In terms of model configuration, PNet retains the same modules as ANet. However, a key difference is that PNet employs a single-stream structure, where only the image is input. As shown in Fig. <ref>, we use only the Image Branch and the Decoder in PNet. Specifically, in the ASPP and FT stages, there is no channel concatenation with features from another branch. Instead, features are directly fed from the backbone network into the ASPP, and after passing through FT, they go directly into the decoder. Given that the image x_t comes from 𝒟_t, the process of PNet is as F_t^k = E(x_t) and p_pn^k= Π(Φ_t(F_t^k), ASPP(F_t^4)). § NOISE CORRECTION LOSS Training ANet with a very small amount of data poses a challenge in accurately capturing the distribution of the entire dataset, resulting in severe false negative and positive noisy pixels in the generated pseudo labels. When training on such noisy labels with traditional losses like Cross-Entropy (CE) and Intersection over Union (IoU), the bias introduced by the noisy pixels often leads to incorrect optimization directions, impacting the identification of camouflaged objects. Specifically, these losses are more sensitive to difficult pixels, which is beneficial for clean labels as it gives more bias to difficult pixels, but on noisy labels, it leads to more severe error guidance. Therefore, it is necessary to discuss the learning situation of different losses in noisy COD labels. §.§ Preliminaries We consider the learning situation of different losses on noisy labels from the perspective of gradients. Let {x_t, g_t} be a pair of images and its noisy label in 𝒟_t. For any loss ℒ, the risk gradient of the model PNet(x_t) can be divided as ∇ℒ(PNet(x_t; θ), g_t) = ∇ℒ(PNet(x_t), g_t)_correct pixels + ∇ℒ(PNet(x̃_̃t̃), g̃_t)_noisy pixels, where θ means the parameters of PNet. We consider the risk gradient by dividing the noisy label g_t into two parts: correct pixels g_t and noisy pixels g̃_t. 
When using CE or IoU loss, it is believed that the gradient values propagated by noisy pixels are greater than those from the correct pixels <cit.>. This means that the loss function introduces significant biases for noise, which are incorrect. Consequently, this leads to the model parameters θ learning in the wrong direction, ultimately affecting model's decision boundary. In contrast, as shown in Equ. <ref>, MAE loss does not have this issue, as it applies the same gradient to all pixels. ∂ℒ_MAE/∂θ = -∇_θPNetg_t(x_t;θ). Moreover, MAE loss can tolerate up to 50% noise, as the total gradient direction is still determined by the correct pixels. However, although MAE loss is robust to noise, its constant gradient presents an optimization issue, causing it to perform poorly on challenging data, such as camouflaged images. §.§ ℒ_NC Loss for Camouflaged Object Detection To leverage the advantages of noise robustness offered by MAE loss and the optimization capabilities of losses such as IoU and CE losses, we propose the use of the noise correction loss ℒ_NC in the WSSCOD task. This loss is optimized for the early learning phase and the memorization phase separately for this task, which can be calculated as follows ℒ_NC = ∑_i=1^H × W | p_i - g_i |^q/∑_i=1^H × W (p_i + g_i) - ∑_i=1^H × W p_i· g_i, where q ∈ [1, 2] is a key hyper-parameter, p and g are the prediction and GT. In the early learning stage, the model is required to effectively grasp the nuances of camouflaged scenes and assimilate knowledge from the correct pixels. For this purpose, we set q=2, making ℒ_NC analogous to an IoU-form loss. In the memorization phase, the model's focus shifts towards minimizing the influence of noisy pixels and refining the decision boundary based on the gradient's guidance. At this juncture, by setting q=1, ℒ_NC transforms into a MAE-form loss, which can guide the PNet to optimize in the right direction. Specifically, the robustness of ℒ_NC comes from the deterministic nature of its derivative, which does not exhibit bias towards noisy pixels. When q=1, the gradient of ℒ_NC with respect to p_i is ∂ℒ_NC/∂ p_i = sign(p_i - g_i)/∑_i=1^H × W (p_i + g_i) - ∑_i=1^H × W p_i · g_i. As we can observe, ℒ_NC effectively combines the advantages of MAE and IoU losses: 1) It is noise-robust as MAE, as its gradient value is the same for each predicted pixel p_i. 2) Like IoU, ℒ_NC is area-dependent, can exploit the spatial correlation between pixels, and converges faster and better than MAE. Furthermore, an important consideration is that in different training setups, the pseudo labels predicted by ANet are subject to varying levels of noise, leading to differences in early learning duration. Fixing the period for changing q in various setups is clearly impractical, as altering it too early or too late can affect the learning of correct pixels. Consequently, we examine the effects of modifying q at different epochs during different setups, as illustrated in Fig. <ref>. From the figure, a key finding is that different setups require adaptation to different early learning phases; for example, changing q at the 20-th epoch has a better effect for PNet_F1, while for PNet_F20, it is more effective to change at the 60-th epoch. This is because, compared to PNet_F1, PNet_F20 has less noise in its training data, thus its early learning phase is longer. Moreover, using only the MAE-form or the IoU-form loss exhibits poor performance, especially in cases where noise is not corrected. 
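Putting the definition of ℒ_NC into code is straightforward; the sketch below is a direct PyTorch transcription of the equation above, with q kept as an argument so that it can be set to 2 during early learning and dropped to 1 once the memorization phase begins (the exact switching epochs used for each setup are given next). This is an unofficial sketch rather than the authors' released implementation, and a small epsilon is added for numerical safety.

```python
import torch

def nc_loss(pred: torch.Tensor, gt: torch.Tensor, q: float = 2.0,
            eps: float = 1e-6) -> torch.Tensor:
    """Noise correction loss: sum|p - g|^q / (sum(p + g) - sum(p * g)).

    pred: predicted probabilities in [0, 1], shape (B, 1, H, W)
    gt:   (possibly noisy) labels in [0, 1], same shape
    q = 2 -> IoU-like behaviour (early learning phase)
    q = 1 -> MAE-like, noise-robust behaviour (memorization phase)
    """
    dims = (1, 2, 3)  # reduce over channel and spatial dimensions
    num = (pred - gt).abs().pow(q).sum(dim=dims)
    den = (pred + gt).sum(dim=dims) - (pred * gt).sum(dim=dims) + eps
    return (num / den).mean()

# Usage with the epoch-dependent switch described in the text
# (switch_epoch is a tuned hyper-parameter):
# q = 2.0 if epoch < switch_epoch else 1.0
# loss = nc_loss(torch.sigmoid(logits), pseudo_label, q=q)
```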
Therefore, as a result of this figure, when training PNet_F1, PNet_F5, PNet_F10 and PNet_F20, we begin noise correction at the 20-th, 20-th, 40-th, and 60-th epoch, respectively. § RELATED WORK §.§ Camouflaged Object Detection With the rapid development of deep learning technology, data-driven segmenters have achieved significant success in fully supervised COD tasks <cit.>. PraNet <cit.> introduced a parallel reverse attention mechanism, significantly improving the accuracy of detecting camouflaged objects. SINet <cit.> mimicked the search and identification stages of animal predation to detect and locate camouflaged objects. FPNet <cit.> utilized both RGB and frequency domain information for camouflaged object detection. Some weakly supervised methods <cit.> use points, scribbles, and point annotations to achieve low-consumption, high-precision COD. WSSA <cit.> is trained with scribble annotations and employs a gated CRF loss <cit.> to enhance object detection accuracy. SCOD <cit.> introduced a novel consistency loss to ensure the agreement of individual prediction maps, leveraging the loss from an internal viewpoint. However, weakly supervised methods still remain a significant challenge in COD tasks, as the high similarity of camouflaged images prevents these methods from distinguishing between foreground and background. Therefore, unlike previous fully-supervised or weakly-supervised methods, we propose a new learning strategy, WSSCOD, which aims to achieve high-performance COD with an economical and labor-saving labeling approach. §.§ Learning with Noisy Label Deep learning algorithms' remarkable performance heavily relies on large-scale, high-quality human annotations, obtaining which is extremely costly and time-consuming. Cheaper annotation methods like web scraping and weakly supervised methods offer an economical and efficient way to gather labels, but the noise in these labels is inevitable. Learning with noisy labels aims to provide various strategies to tackle this challenging issue, such as robust loss design, noise transition matrices, and sample selection. Zhang et al. <cit.> introduced a generalized cross-entropy loss, which allows training with noisy labels by down weighting the contribution of noisy samples. Patrini et al. <cit.> proposed a method to estimate the noise transition matrix, which represents the probabilities of true labels being flipped to other labels, improving model training under label noise. Han et al. <cit.> developed a co-teaching approach where two networks teach each other what they have learned, effectively reducing the impact of noisy labels by selecting clean samples during training. Noise is unavoidable in the WSSCOD task, and we construct ℒ_NC to facilitate the model's learning of correct pixels as well as the correction of noisy pixels. § EXPERIMENTS §.§ Experimental Settings Datasets and Metrics. In COD task, four primary datasets serve as benchmarks: CAMO <cit.>, COD10K <cit.>, CHAMELEON <cit.>, and NC4K <cit.>, containing 250, 2026, 4121, and 76 image pairs, respectively. The training set consists of 4040 pairs, with 1000 from CAMO and 3040 from COD10K. Within the WSSCOD framework, we leverage box annotations and a subset of fully annotated images. The approach includes four setups, partitioning the training set randomly into subsets of {1%, 5%, 10%, 20%} of images with full and box annotations, and the remaining {99%, 95%, 90%, 80%} with only box annotations. 
Following the methodologies established in <cit.>, four essential metrics are adopted for an in-depth evaluation of model performance: mean absolute error (ℳ), E-measure (E_ϕ) <cit.>, F-measure (F_β) <cit.>, and S-measure (S_α) <cit.>. Implementation Details. For models: Following the selection practices of existing COD methods, we choose the SOTA backbone network PVTv2-B4 <cit.> as the encoder for PNet to demonstrate the effectiveness of our method. However, for ANet, due to the Transformer model's weak performance on small-scale data <cit.>, we opt for ConvNeXt-B <cit.> as ANet's encoder. The weights of these backbone networks are pretrained on ImageNet <cit.>. For data: To enhance the model's robustness, we apply data augmentation techniques such as random cropping, random blurring, random brightness adjustments, and random flipping to the training images. Subsequently, the images are resized to 384×384 before being fed into the WSSCOD framework. For training: We use Adam optimizer <cit.> to update model parameters, with both ANet and PNet trained for 100 epochs. The initial learning rate is set to 1e-7, linearly warmed up over 10 epochs to 1e-4, followed by cosine annealing down to 1e-7 while training. All random factors, including data selection and the training process, are fixed using seed 2024 to ensure the reproducibility of the model. Besides the NC loss, we also use DICE loss, similar to FEDER <cit.> and BGNet <cit.>, to assist the model in learning the object boundaries. Comparison Methods. To validate the effectiveness of the proposed WSSCOD method, we construct a comparison of it with several recent SOTA methods, including the WSCOD methods WSSA <cit.>, SCWS <cit.>, TEL <cit.>, SCOD <cit.>, the fully supervised methods SINet <cit.>, FEDER <cit.>, SINetv2 <cit.>, BSA-Net <cit.>, BGNet <cit.>, CamoFormer <cit.>, FSPNet <cit.>, HitNet <cit.>, MSCAF-Net <cit.>, and the prompt based methods SAM <cit.>, SAM with point prompt (SAM-P), SAM with box prompt (SAM-B). The results of these methods come from public data or are generated by models retrained with the released code. The comparison results are shown in Table <ref>, and the qualitative comparison is in Fig <ref>. §.§ Performance Comparision with SOTAs Quantitative Evaluation. Table <ref> provides a comprehensive quantitative comparison between our proposed WSSCOD and 16 other COD models, using various training strategies such as WSCOD methods, FSCOD methods and prompt based methods. As shown in the table, our PNet_F1 (with pixel-level annotations for only 40 images) outperforms weakly supervised methods, achieving an average improvement of 79.6%, 16.3%, 18.1%, and 16.0% in ℳ, E_ϕ, F_β, and S_α metrics, respectively, compared to SCWS <cit.>. Compared to the fully supervised SOTA method CamoFormer <cit.>, our PNet_F20 exhibits comparable performance on the four datasets, with a gap of less than 1%, while requiring only about 1/5 of their annotation effort (pixel-level annotations for just 800 images). Compared to SAM <cit.>, our method demonstrates significant advantages, even against the box or point-prompted SAM. More importantly, our WSSCOD method is scalable, and incremental training only requires box annotations, as demonstrated by PNet_F20^†. By training with an additional 6665 images with only box annotations, we achieve higher performance improvements. Compared to PNet_F20, PNet_F20^† shows an apparent improvement on the COD10K and NC4K datasets, with improvements of 35%, 2.4%, 5.8%, and 3.6% in the four metrics. 
Overall, the results of PNet show the success of the WSSCOD method in breaking the time-consuming labeling process, providing new insights for COD task. Qualitative Evaluation. We select some representative COD scenes for visual comparison in Fig. <ref>. These scenes reflect various scenarios, including various types of camouflaged objects of different sizes and dimensions. From these results, our methods excel in preserving semantic accuracy and ensuring the integrity of fine edges, surpassing other models that may suffer from over-prediction, ambiguous details, and missing edges. It is worth noting that our results are learned from more noisy labels. l0.5 < g r a p h i c s > Visualization comparison of ours and SOTA methods. Please zoom in to view. §.§ Ablation Experiments The following ablation study validates the innovations of this research, particularly the WSSCOD strategy and the ℒ_NC loss function. Effect of Box Prompts. We select the box annotations as additional prompts and provide a performance comparison with other annotation types, such as points and scribbles, in Table <ref>. For point annotations, denoted as 𝒫, we use the method recommended by <cit.>, namely MaskRefineNet, to refine the output. The processing method for scribble annotations, denoted as 𝒮, is consistent with our approach for box annotations, and the test dataset used is 𝒟_n. According to this table, the improvement with box annotations as prompts is significant, surpassing both scribble and point annotations by more than 7.2% (0.802 vs. 0.860 in F_β) and achieving a 14.5% improvement in performance (0.751 vs. 0.860 in F_β) compared to methods without any prompts. Fig. <ref> illustrates how box prompts refine the quality of pseudo labels by preventing model misjudgments and enhancing the distinction between object and background. In total, boxes are effective in COD tasks, as they greatly slow down the pressure on the model to detect in camouflaged scenes. Effect of WSSCOD. We introduce WSSCOD as an innovative training strategy and provide a comparative analysis with other methods, such as semi-supervised COD and FSCOD, in Table <ref>. This table includes comparisons of training with 20% (line 1) and 100% (line 2) of fully supervised data, as well as training with a combination of 20% fully supervised data and 80% unlabeled data 𝒰 (line 3), juxtaposed against our method (line 4). Observations from 1, 2, and 4 indicate that leveraging additional data can significantly boost model performance. Furthermore, by incorporating boxes as masks, our method concentrates more effectively on the segmentation of camouflaged objects, achieving results comparable to those obtained with 100% full pixel-level data, yet at a considerably lower annotation cost than 2. Compared to 3, while the semi-supervised approach is more time- and labor-efficient, it is clearly far from our WSSCOD method and is difficult to scale more efficiently. Effect of Noise Correction Loss. One key innovation in our work is the noise correction loss (ℒ_NC), which is designed to enhance the model's robustness to noisy labels. To evaluate its effectiveness, we compare the performance of models trained with various loss functions in Table <ref>. Consistent with the conclusions in Fig. <ref>, the sensitivity of CE and IoU/IoU-form ℒ_NC^q=2.0 losses to noise leads to their poor performance (0.780/0.778 vs. 0.792 in F_β). Using only the MAE-form of loss also does not yield optimal performance (0.778 vs. 0.792 in F_β). 
GCE is a well-known work in noise learning, but its performance in the COD task is not satisfactory (4% gap in F_β), highlighting the differences between the two tasks. Fig. <ref> illustrates the PNet's outputs, demonstrating how ℒ_NC corrects noise in the memorization phase, enabling precise capture of object's details. This balanced approach ensures optimal learning from noisy labels, showcasing the ℒ_NC's ability to enhance model accuracy and reliability. Universality of Noise Correction Loss. We posit that ℒ_NC is versatile, as noise is commonly present in both WSCOD and FSCOD training sets. Thus, we conduct experiments on the fully supervised model SINetv2 and the weakly supervised model SCOD, modifying their training loss functions to our ℒ_NC during training, and changing the parameter q to 1 in the later stages of training. The results are shown in Table <ref>. It can be seen that both models have achieved effective performance improvements, where the FSCOD method SINetv2 is improved by 12.1%, 1%, and 2.1% in ℳ, E_ϕ and F_β metrics except for S_α. The SCOD is also improved by 33.3%, 7.0%, 5.9%, and 6.8% in the four metrics. § CONCLUSION AND DISCUSSION Conclusion. We proposed WSSCOD to achieve low-cost, high-performance COD. Moreover, to address the issue of noisy pseudo labels generated by ANet, we introduced ℒ_NC to achieve gradient consistency under noisy pixels. Our method requires only 20% of full annotations to reach the SOTA performance. Limitation. One major limitation of the proposed method is that the accuracy of box annotation has a bit of impact on the final results, which is similar to the issue of multimodal bias. Specifically, in ANet, we use channel concatenation to fuse the dual branches instead of employing overly complex fusion strategies. As we aim to keep it simple, treating it as a baseline model. Actually, a better fusion strategy could mitigate the impact of incorrect boxes and improve performance. Another limitation is that WSSCOD is two-stage, which is cumbersome, and a direction for subsequent research. Acknowledgments. This work was funded by STI 2030—Major Projects under grant 2022 ZD0209600, National Natural Science Foundation of China 62201058 and 6180612. splncs04
http://arxiv.org/abs/2407.13264v1
20240718081459
Underwater Acoustic Signal Denoising Algorithms: A Survey of the State-of-the-art
[ "Ruobin Gao", "Maohan Liang", "Heng Dong", "Xuewen Luo", "P. N. Suganthan" ]
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
IEEE Transactions on SMC: Systems, Jun 2024 How to Use the IEEEtran Templates Underwater Acoustic Signal Denoising Algorithms: A Survey of the State-of-the-art Ruobin Gao, Member, IEEE, Maohan Liang, Heng Dong, Member, IEEE, Xuewen Luo, Member, IEEE, P. N. Suganthan, Fellow, IEEE, Corresponding author: Maohan Liang. Ruobin Gao is with the School of Civil and Environmental Engineering, Nanyang Technological University, Singapore (e-mail: GAOR0009@e.ntu.edu.sg). Maohan Liang is with the Department of Civil and Environmental Engineering, National University of Singapore, Singapore (e-mail: mhliang@nus.edu.sg). Heng Dong is with the School of Electronics and Information Engineering, Harbin Institute of Technology, China (e-mail: dongheng@stu.hit.edu.cn). Xuewen Luo is with the Communication Research Center, Harbin Institute of Technology, China, e-mail: luoxw@hit.edu.cn P. N. Suganthan is with the KINDI Center for Computing Research, College of Engineering, Qatar University, Doha, Qatar (e-mail: p.n.suganthan@qu.edu.qa). Manuscript created Jun, 2024; This work was developed by the IEEE Publication Technology Department. July 22, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT This paper comprehensively reviews recent advances in underwater acoustic signal denoising, an area critical for improving the reliability and clarity of underwater communication and monitoring systems. Despite significant progress in the field, the complex nature of underwater environments poses unique challenges that complicate the denoising process. We begin by outlining the fundamental challenges associated with underwater acoustic signal processing, including signal attenuation, noise variability, and the impact of environmental factors. The review then systematically categorizes and discusses various denoising algorithms, such as conventional, decomposition-based, and learning-based techniques, highlighting their applications, advantages, and limitations. Evaluation metrics and experimental datasets are also reviewed. The paper concludes with a list of open questions and recommendations for future research directions, emphasizing the need for developing more robust denoising techniques that can adapt to the dynamic underwater acoustic environment. Underwater acoustic signal, Signal decomposition, Deep learning, Marine engineering. § INTRODUCTION Underwater Underwater acoustic data are crucial for various applications <cit.>. The efficient and intelligent processing of these data is vital for enhancing state-of-the-art underwater technologies. 
While numerous technologies have been developed specifically for terrestrial and aerial environments, the unique characteristics of the underwater environment make acoustic signals particularly effective for capturing its conditions. However, severe noise interference presents significant challenges to receivers in underwater communications <cit.>. The complex underwater settings, unpredictable transmission channels, and varying motion states significantly affect real-world underwater acoustic signals (UAS), potentially obscuring the inherent features of targets <cit.>. Consequently, developing advanced technologies for UAS denoising has become a critical and burgeoning research area in underwater scenarios. Since the UAS contains intensive noise, extracting noise-resistant features is essential for underwater recognition tasks <cit.>. The UAS denoising can be categorized into four groups: Conventional approaches, decomposition-based framework, deep learning (DL) algorithm, and hybrid schemes. The overall framework of UAS denoising is shown in Figure <ref>. Conventional frameworks employ handcrafted statistical measurements to obtain the feature set, which is utilized to train the learning-based recognition model <cit.>. For instance, two recognition models based on neural networks are trained using eighty-eight features computed from the UAS <cit.>. Although these handcrafted features are interpretable, they may not effectively capture the high-level abstractions in underwater data that are essential for tasks involving complex patterns. Moreover, defining suitable handcrafted features for specific tasks requires extensive domain knowledge and offers limited flexibility. Inspired by the 'divide and conquer' strategy, the UAS denoising community has explored the decomposition of complex UAS into simpler components, which are then individually or collectively denoised. The family of signal decomposition algorithms in UAS denoising is extensive, including empirical mode decomposition (EMD) <cit.>, variational mode decomposition (VMD) <cit.>, discrete wavelet transform (DWT) <cit.>, empirical wavelet transform (EWT) <cit.>, and their advanced variants <cit.>. Following decomposition, criteria are established and applied to classify components into signal-dominated, noise-dominated, and pure noise groups. Specific processing or denoising techniques are then tailored to each category. Noise components are discarded, while signal components are preserved. The final step involves the aggregation of these processed components to produce a denoised UAS. In recent years, DL algorithms have succeeded across various fields due to their robust representation capabilities and minimal assumptions about the input data <cit.>. The feature extraction prowess of DL algorithms has prompted researchers to explore their effectiveness in UAS denoising <cit.>. DL-based frameworks for UAS denoising typically utilize different neural network architectures to reconstruct clean signals from noisy inputs and maximize the signal-to-noise ratio (SNR). These models often employ an autoencoder architecture. The design of an efficient DL-based UAS denoising model depends on the choice of architectures and loss functions. The advantages of DL-based techniques include their consistency in denoising and applicability to subsequent tasks such as recognition or analysis based on UAS. Thus, the denoising process is task-oriented, aiming to ensure satisfactory performance across these applications. 
The underwater data mining and signal process community has been dedicated to imagery data <cit.>, but much less effort on acoustic data. Although there are some reviews about underwater sensing, they neglect a crucial role of UAS denoising <cit.>. Meanwhile, a comprehensive review of the state-of-the-art UAS denoising research needs to be conducted. This article comprehensively reviews recent advances in UAS denoising and contributes to the literature from the following perspectives: 1 Despite the extensive research on UAS denoising algorithms, a comprehensive review systematically summarizing and discussing these diverse approaches is absent. This deficiency poses a significant challenge for researchers and practitioners aiming to thoroughly understand the landscape of UAS denoising techniques. 2 We systematically analyze UAS denoising algorithms, from conventional signal processing methods to advanced DL algorithms. Furthermore, we introduce a taxonomy of UAS denoising techniques, marking the first instance of such classification in the literature. This taxonomy meticulously delineates the current landscape of UAS research, revealing insightful connections among each category. 3 We outline the prevailing challenges encountered in UAS denoising and explore potential solutions. Spanning from methodological intricacies to real-world applications, these challenges offer valuable insights and point towards promising avenues for future research in UAS denoising. 4 We elucidate the diverse applications of UAS denoising techniques, underscoring their essential role in various underwater applications. This exploration holds significant interest for readers and practitioners alike, highlighting the critical importance of UAS denoising in underwater contexts. § OVERVIEW OF UAS DENOISING RESEARCH This section first conducts a bibliometric analysis of the reviewed UAS denoising literature. Then, we discuss the unique challenges in denoising UAS data. §.§ Bibliometric Analysis This survey reviews research in UAS denoising techniques, predominantly published in academic journals or conferences related to ocean engineering, measurement, signal processing, and artificial intelligence (AI). Notable venues include IEEE Transactions on Instrumentation and Measurement, Ocean Engineering, IEEE Journal of Ocean Engineering, Journal of Marine Science and Engineering, Applied Acoustics, Applied Ocean Research, Measurement, and The Journal of the Acoustical Society of America. Figure <ref> visualizes the scope of UAS denoising studies addressed in this survey. Figure <ref> illustrates a consistent upward trend in the number of studies within the UAS denoising field, despite a drop in publications in 2024, which only encompasses the first half of the year. Figure <ref> summarizes the top ten authors by the volume of their contributions to UAS denoising research. According to Figure <ref>, the journals Applied Acoustics and Ocean Engineering publish the most research on UAS denoising, given their specific focus on acoustics and ocean engineering. Recently, with advancements in AI, AI-related journals such as EAAI and ESWA have also shown increased receptivity to UAS denoising research. Finally, Figure <ref> displays the network graph of co-authorship within the UAS denoising literature, highlighting five collaborative communities and the extent of their interactions. 
§.§ Why UAS denoising is challenging The low SNR of the UAS presents significant challenges; however, the complexities of the underwater environment introduce unique difficulties that distinguish UAS denoising from typical signal denoising tasks <cit.>. Various noise sources exist in underwater environments, as shown in Figure <ref>. In underwater settings, the noise sources vary and include natural and anthropogenic elements. Natural sources such as marine life activity, wind-driven waves, and precipitation contribute significantly to the background noise. On the other hand, anthropogenic sources include ship traffic, industrial activities, sonar systems, and other man activities, all adding complexity to the noise environment <cit.>. These diverse noise sources necessitate specialized approaches in UAS denoising to effectively separate the signal from the noise, ensuring clarity and accuracy in data interpretation. §.§.§ Complex sources In the underwater scenario, the noise sources contributing to the complexity of acoustic signals can be broadly categorized into natural, anthropogenic, and system-based sources, each adding layers of challenges to UAS denoising: 1. Natural Noise Sources: * Biological Noise: This includes sounds from marine life, such as whales, dolphins, and fish <cit.>. These biological entities often produce sounds for communication, navigation, and foraging, which can overlap in frequency and time with the signals of interest. * Geophysical Noise: Phenomena such as wind, rain, and sea state contribute to background noise <cit.>. Turbulence caused by waves breaking on the surface and interactions between water and seabed during storms generates significant noise levels <cit.>. * Thermal Noise: Caused by the random motion of water molecules, thermal noise is more prevalent in deeper and warmer waters and acts as a constant background noise across all frequencies <cit.>. 2. Anthropogenic Noise Sources: * Shipping Traffic: Noise from commercial and recreational vessels is a dominant noise source in many oceanic environments. The sound from engines, propellers, and hull movement is pervasive at various frequencies and intensities <cit.>. * Industrial Activities: Underwater construction, oil drilling, and other marine operations involve heavy machinery that emits substantial acoustic signals <cit.>. * Sonar and Naval Exercises: Active sonar systems used by the military and some commercial ships emit powerful sound pulses that can interfere with and mask natural acoustic signals <cit.>. 3. System-based Noise Sources: * Instrument Noise: Noise inherent to the recording devices, such as electronic noise from sensors and recording equipment, can affect the data quality <cit.>. * Data Transmission Noise: In wireless underwater communication, signals can be corrupted by noise introduced during transmission, including reflections and refractions from the water's surface or the seabed. Each noise source interacts differently with the underwater environment, making it challenging to isolate and remove unwanted noise from valuable data. Effective denoising thus requires a deep understanding of both the characteristics of these noises and the acoustic properties of the environment. Advanced signal processing techniques, adaptive filtering, and machine learning models are typically employed to enhance the clarity and reliability of the extracted signals in such complex scenarios. 
§.§.§ Energy imbalance In underwater environments, the fusion of multi-source signals frequently results in an imbalanced energy distribution within the captured acoustic data. This imbalance complicates the signal processing tasks, particularly the denoising of UAS. Factors such as varying signal intensities, overlapping frequency ranges, and sporadic or persistent noise sources further exacerbate the challenge. These complexities necessitate sophisticated denoising techniques that effectively distinguish between noise and actual acoustic signals of interest. Moreover, the dynamic nature of underwater environments, including changes in water density, temperature, and movement, adds additional layers of variability that denoising algorithms must account for. Consequently, improving the accuracy of UAS denoising involves addressing the imbalance in signal fusion and adapting to the inherently noisy and unpredictable underwater acoustic landscape. §.§.§ Disparate optimization objectives Since UAS denoising is usually the first stage for underwater recognition tasks, recognition models must be developed. Most literature treats denoising and recognition as two independent stages <cit.>. When designing UAS denoising algorithms, researchers may not consider the requirements of recognition tasks. The denoising stage is unsupervised, and recognition labels are unavailable. The objectives of developing denoising and recognition models are different, challenges in unifying these two stages. § CONVENTIONAL METHODS Conventional UAS denoising is usually based on hand-craft features <cit.> and linear filtering <cit.>. UAS is split into various barks denoised by wavelet thresholding algorithms <cit.>. Frame-Based Time-Scale Filters method is proposed to improve the standard wavelet soft-thresholding in reducing distortions in the joint time-frequency space <cit.>. A two-stage denoising framework consisting of adaptive window median filter and wavelet threshold optimization is designed to eliminate Gaussian and non-Gaussian noise, respectively <cit.>. This article focuses on the most recent advancement in UAS denoising. For conventional signal processing and thresholding techniques, we suggest referring to these survey studies <cit.>. § SIGNAL DECOMPOSITION Signal decomposition techniques can decompose complex signals into various components or modes, carrying information of different frequencies. Individual modes are easier to analyze, process, and denoise <cit.>. The general framework of decomposition-based denoising methods is shown in Fig. After obtaining modes with the help of decomposition, suitable denoising algorithms are applied to these modes. Finally, denoised modes are aggregated to reconstruct the input signal. The overall framework of decomposition-based UAS denoising is visualized in Fig. <ref>. §.§ Wavelet transform §.§.§ Theoretical development Fourier transform (FT) has historically been the method of choice for spectral analysis until the advent of wavelet transform. The Fourier transform's limitations, particularly its inefficiencies in local time-frequency representation and its poor performance with non-stationary signals, have led to its replacement by wavelet transform. This newer method has subsequently demonstrated significant success in the analysis of time series data. DWT is calculated as Equation <ref>, 𝑓(𝑗,𝑘)=<𝑥(𝑡),ψ_𝑗,𝑘 (𝑡)>=∫ 𝑥(𝑡)ψ_𝑗,𝑘^* (𝑡)𝑑 𝑡, where ψ_j,k (t)=2^j/2ψ(2^jt-k),j,k∈ℤ is the wavelet function in DWT. 
In a practical forecasting problem, signal x(t) and ψ_j,k(t) are both discrete as t is the discrete time index. In reality, finite-length times series x(t)∈ L^2(R) are all applicable to DWT. In 1988, Daubechies first introduced the construction of a finite-support orthogonal wavelet named the db wavelet family. For a specific wavelet, there is a pair of scaling function ϕ_j,k(t) and wavelet function ψ_j,k(t) for scale j. The most important property for a scaling function and wavelet function to satisfy the multi-resolution analysis (MRA) is the dilation equation in Equation <ref> and <ref> based on which MRA is built. It indicates that the coarser basis ϕ (t) with larger support is a weighted sum of the finer basis ϕ (2t-k) with shorter support and {h_k, k∈Z} is the weight. ϕ(t)=√(2)∑_k∈Zh_kϕ(2t-k) ψ(t)=√(2)∑_k∈Zg_kϕ(2t-k) As {ϕ_j,k(t),k∈Z} spans space of scale j (V_j), any function f_j(t) in V_j can be written as a linear combination of the orthogonal base {ϕ_j,k(t),k∈Z} as f_j(t)=∑_k=-∞^∞ c_j[k]ϕ_j,k, where c_j[k] is the coefficient of a corresponding basis ϕ_j,k. Based on this representation and Equation <ref>, <ref>, the nestedness between space { V_j,j∈Z} can be derived as Equation <ref>. V_j⊂V_j+1⊂⋯V_J,j<J Moreover, the difference(residual) space of two adjacent spaces written as W_j=V_j+1-V_j is spanned by wavelet functions ψ_j,k, (k∈Z) due to the orthogonality between ϕ_j,k and ψ_j,k we can effortlessly know that W_j⊥V_j. The scale j of V_j is contingent upon the selected wavelet, with different wavelet functions (bases) yielding distinct spaces. A fundamental principle of MRA dictates that the scaling function ϕ_j,k exhibits a diminishing support length as j increases, which enhances its resolution and vice versa. Employing MRA in time series analysis provides an effective means to decompose a high-resolution signal into its constituent components, such as the rough trend and various cyclic frequencies, by utilizing different wavelet bases (scaling functions) at various scales. §.§.§ WT-based UAS denoising Classical wavelet threshold denoising techniques effectively suppress noise by leveraging thresholds derived from wavelet coefficients, thereby retaining stronger signals <cit.>. Thresholding within this context can be classified into hard, soft, and hybrid categories <cit.>. However, classical wavelet thresholding algorithms struggle with non-Gaussian, non-linear, and non-stationary noise types. Moreover, selecting an appropriate threshold remains a significant challenge. To overcome these issues, some researchers have proposed using the posterior probability distribution of wavelet coefficients obtained from the DWT as the threshold to eliminate non-dominant coefficients <cit.>. Additionally, integrating the lifting wavelet transform with soft thresholding has been investigated as a strategy to mitigate the shortcomings of the first-generation wavelet transform <cit.>. §.§ Empirical wavelet transform §.§.§ Theoretical development The EWT represents an automated approach in signal processing, underpinned by robust theoretical foundations for decomposing non-stationary time series data <cit.>. Contrasting with DWT and EMD <cit.>, EWT conducts a meticulous analysis of time series in the Fourier domain subsequent to the application of a Fast Fourier Transform (FFT). This technique involves the segmentation of the spectrum through data-driven band-pass filtering. In the EWT, limited freedom is provided for selecting wavelets. 
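Before turning to the EWT filter bank in detail, the classical wavelet-threshold pipeline summarized in the previous subsection can be sketched in a few lines. The example below uses the PyWavelets package with a db4 wavelet and the universal threshold σ√(2 ln N); both choices are illustrative assumptions rather than a prescription from the surveyed works.

```python
import numpy as np
import pywt

def wavelet_soft_denoise(x: np.ndarray, wavelet: str = "db4",
                         level: int = 4) -> np.ndarray:
    """Classical DWT soft-threshold denoising of a 1-D signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))  # universal threshold
    # Keep the approximation band, soft-threshold every detail band.
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]

# Toy example: a noisy tone standing in for a hydrophone recording.
t = np.linspace(0.0, 1.0, 4096)
clean = np.sin(2 * np.pi * 200 * t)
noisy = clean + 0.5 * np.random.randn(t.size)
recovered = wavelet_soft_denoise(noisy)
```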
The algorithm employs Littlewood-Paley and Meyer's wavelets because of the analytic accessibility of the Fourier domain's closed-form formulations <cit.>. We represent the normalized frequency as ω∈ [0,π]. We utilize ω_n to represent the limits between the segments that are obtained from the Fourier support [0,π]. These band-pass filters' formulations are denoted using Equations <ref> and <ref> ϕ̂_n(ω) = 1 [1pt][r]if |ω| ≤ (1-γ)ω_n cos[π/2β(1/2γω_n(|ω|- (1-γ)ω_n|))] [60pt][r]if (1-γ)ω_n≤|ω| ≤ (1+γ)ω_n 0 otherwise, ψ̂_n(ω) = 1 [12pt][r]if (1+γ)ω_n≤|ω| ≤ (1-γ)ω_n+1 cos[π/2ζ(1/2γω_n+1(|ω|- (1-γ)ω_n+1|))] [22pt][r]if (1-γ)ω_n+1≤|ω| ≤ (1+γ)ω_n+1 sin[π/2ζ(1/2γω_n(|ω|- (1-γ)ω_n|))] [0.001pt][r]if (1-γ)ω_n≤|ω| ≤ (1+γ)ω_n 0 [2pt][r]otherwise, with a transitional band width parameter γ satisfying γ≤min_nω_n+1-ω_n/ω_n+1+ω_n. The most common function ζ(x) in Equations <ref> and <ref> are presented in Equation <ref>. β(x)=x^4(35-84x+70x^2-20x^3) This empowers the formulated empirical scaling and wavelet function {ϕ̂_1(ω),{ψ̂_n(ω)}_n=1^N} to be a tight frame of L^2(ℝ). It can be observed that {ϕ̂_1(ω),{ψ̂_n(ω)}_n=1^N} are used as band-pass filters centered at assorted center frequencies. §.§.§ EWT-based UAS denoising Although EWT achieved tremendous success in other sequence tasks, the UAS community has not researched its denoising ability <cit.>. The EWT is employed to decompose the UAS into several sub-series and utilize the sub-series of highest energy to train a recognition model <cit.>. §.§ Empirical mode decomposition §.§.§ Theoretical development The EMD is a wholly data-driven approach for decomposing time-domain signals into distinct oscillatory modes and a residual component <cit.>. Each mode, defined as an Intrinsic Mode Function (IMF), must satisfy two specific criteria to be classified as such. 1. The number of extremums in the oscillation and the number of zero crossings must equal or differ by at most one. 2. The mean of the envelopes defined by the local maxima and the local minima shall equal zero. The signal decomposed by EMD can be expressed as the sum of a finite number of IMFs and a residual value. x(t)=∑_m=1^k IMF_m(t)+r_k(t), where k is the IMF number and r_k(t) is the final residual value. The set of IMFs constitutes a complete, adaptive, and nearly orthogonal basis for the original signal. The algorithm for the iterative process of EMD is as follows: * Find all the minima and maxima in x(t). * Perform cubic spline interpolation of minima to obtain the lower envelope e_m(t) and that of maxima for upper envelope e_l(t). * Find the mean of the two envelopes using m(t)=e_m(t)+e_l(t)/2. * Subtract the mean from the signal as d(t)=x(t)-m(t). * Applying the abovementioned factors, check whether d(t) is an IMF. * If d(t) is not an IMF, iterate from step (2) to (5) considering input as d(t) to find the IMF. * If d(t) is an IMF, find the residue r(t) = x(t)-d(t). * If r(t) has greater than two extrema, i.e., one maximum and one minimum and a single zero crossing the stopping criterion not satisfied, iterate from step (2) to (5), considering r(t) as input to find the subsequent IMF. * If r(t) has less than or equal to 2 extrema, i.e., one maximum, one minimum, and a single zero-crossing, the stopping criterion is satisfied, r(t) is the final residue, and the EMD process is complete. The EMD method preserves the signal in the time domain. Each IMF encapsulates information on the variations in amplitude and frequency of the original signal over time. 
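A bare-bones illustration of the sifting procedure listed above is given below, using only numpy and scipy. The stopping criteria and boundary handling are deliberately simplified (a fixed number of sifting passes, plain spline extrapolation at the edges), so this is a teaching sketch rather than a faithful EMD implementation.

# Simplified sifting of a single IMF with numpy/scipy only.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(d, t):
    """Steps 1-4 above: build upper/lower envelopes and subtract their mean."""
    maxima = argrelextrema(d, np.greater)[0]
    minima = argrelextrema(d, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                              # too few extrema to continue
    upper = CubicSpline(t[maxima], d[maxima])(t)
    lower = CubicSpline(t[minima], d[minima])(t)
    return d - (upper + lower) / 2.0

def extract_imf(x, t, n_passes=10):
    """Repeat sifting a fixed number of times and return the candidate IMF."""
    d = x.copy()
    for _ in range(n_passes):
        nxt = sift_once(d, t)
        if nxt is None:
            break
        d = nxt
    return d

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf1 = extract_imf(x, t)    # the fast 40 Hz oscillation is extracted first
residue = x - imf1          # the slower 5 Hz component stays in the residue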
IMFs consist of a single or a narrow band of frequencies with no overlap. Furthermore, these functions or signals are orthogonal to the original signal <cit.>. §.§.§ EMD-based UAS denoising EMD and its variants have achieved tremendous success in UAS denoising <cit.>. The application of the EMD technique offers a novel approach to detecting and classifying marine mammal vocalizations in underwater acoustics, which traditionally requires extensive manual analysis by skilled acousticians. This method efficiently identifies and labels sound sources in a recording without prior knowledge or extensive pre-processing, streamlining the task through minimal post-processing quality control <cit.>. The non-stationarity of each decomposition mode is utilized to select noise components obtained by the ensemble EMD (EEMD) <cit.>. The CEEMD is employed to denoise the original signal first, and a bidirectional denoising autoencoder is developed to learn robust representations <cit.>. The complete ensemble empirical mode decomposition with adaptive selective noise (CEEMDAN) is adopted to decompose UAS into IMFs, and IMF with the minimum difference between the energy distribution ratio and average energy distribution ratio is selected <cit.>. While decomposing the acoustic target signal, the correlation coefficient between each IMF and the original signal is utilized as a threshold to determine signal-dominated IMFs. In addition to utilizing threshold to drop out noisy IMFs, the literature also employs denoising algorithms to denoise noisy IMFs <cit.>. A criterion determining noisy IMFs is designed, and then noisy IMFs are denoised. Different criterion is proposed in the literature, such as minimum mean square variance <cit.>, energy concentration property <cit.>. For denoising IMFs, researchers have tried on least mean square filter <cit.>. Modified uniform EMD is employed to decompose the input into IMFs, and a double threshold is obtained according to hierarchical amplitude-aware permutation entropy (PE). The threshold assists in dividing IMFs into clean, mixed, and noisy IMFs. Since mixed IMFs contain noisy information, an evolutionary improved wavelet threshold denoising method denoises mixed IMFs <cit.>. Recently, secondary decomposition outperforms one-time approaches <cit.>. Implementing a secondary decomposition assists in extracting high-level features and further denoising IMFs containing indistinguishable noise <cit.>. VMD decomposes signals denoised by wavelet thresholding further <cit.>. Then, the IMFs of high mutual information are selected for the following recognition tasks. Adaptive chirp mode decomposition, an advanced extension of EMD, is recently explored in <cit.>. §.§ Variational mode decomposition §.§.§ Theoretical development VMD can decompose the non-stationary signals into several sub-series called modes <cit.>. The VMD can be considered as the following problem min {m_k},{w_k}{∑_k=1^K||δ_t[(δ(t)+k/π t)× m_j(t)]e^kω_kt||^2_2} with the constraints as ∑_k=1^Km_k=x(t), where m_k is mode k, ω_k is m_k's central frequency, K is the number of modes, x(t) represents the input time series. The problem shown in Equation <ref> is transformed into Equation <ref> when introducing the L_2 penalty and Lagrange multiplier L({m_k},{w_k},λ)=α{∑_k=1^K||δ_t[(δ(t)+k/π t)× m_j(t)] e^kω_kt||^2_2+ ||x(t)-∑_k=1^Km_k|| ⟨λ(t),x(t)- ∑_k=1^Km_k⟩}. The alternating direction method of multipliers (ADMM) algorithm is utilized to solve the above problem in VMD. 
Then, the modes m_k and ω_k are obtained during the shifting process. According to the ADMM algorithm, the m_k and ω_k can be computed from the following equations, m̂_k^n+1=ŷ(ω)-∑_i≠ km̂_k(ω)+λ̂(ω)/2/1+2α(ω-ω_k)^2 ω̂_k^n+1=∫_0^∞ω|m̂_j(ω)|^2dω/∫_0^∞|m̂_j(ω)|^2dω, where n represents the number of iterations, ŷ(ω), m̂_k(ω), λ̂(ω) and m̂_k^n+1 represent the Fourier transform of x(t), m_j(t), λ(t) and m_k^n+1, respectively. §.§.§ VMD-based UAS denoising To overcome the theoretical limitations of EMD, <cit.> propose the variational mode decomposition (VMD) algorithm with solid theoretical development. VMD has successfully handled noisy UAS <cit.>. For instance, <cit.> employ VMD to decompose the input signal. Then, the authors apply the Savitzky-Golay filter and Lift wavelet threshold (LWTD) algorithms to denoise low-frequency and high-frequency components, respectively. Finally, all components are aggregated for reconstruction. Another principle of denoising IMFs is to classify noise-dominated and signal-dominated IMFs. Different denoising algorithms can be applied to noise-dominated and signal-dominated IMFs <cit.>. For instance, wavelet-thresholding algorithm and Savitzky-Golay filtering are employed to denoise noise-dominated and signal-dominated IMFs, respectively <cit.>. Their results demonstrate the superiority of VMD over EMD for UAS denoising tasks. Unlike the above methods dividing IMFs into two groups (clean and noise, low-frequency and high-frequency), some studies divide IMFS into three groups, pure, mixed, and noisy signals for fine-gained denoising <cit.>. §.§ Other decompositions The improved symplectic geometry modal decomposition generates IMFs in <cit.>. Unlike most literature, which utilizes some criterion to group IMFs into clean signal and noisy parts, spectral clustering is employed to cluster IMFs into mixed and noise clusters. Finally, wavelet thresholding techniques filter out noise in mixed clusters. The authors employ intrinsic time-scale decomposition and correlation coefficients to denoise UAS data <cit.>. §.§ Thresholding Thresholding is a fundamental step in the decomposition-based denoising framework. It eliminates noisy information from all decomposed components based on a predefined threshold <cit.>. This technique comprises two main stages: threshold determination and the thresholding function application. The first stage, threshold determination, primarily involves calculating threshold values using an appropriate criterion. The second stage, the thresholding function, is concerned with removing noise components while preserving the significant signal elements according to the established threshold. §.§.§ Threshold determination When decomposing signals, distinguishing between meaningful components and noise is crucial. Noise components typically have little to no informational overlap with the original signal. The first stage of thresholding is to determine the threshold value. A good threshold value should assist in retaining signal-dominated information and eliminating noise as much as possible. Researchers have utilized a variety of criteria to compute the threshold value. For instance, correlation coefficients between IMFs and original signals are employed to determine the threshold <cit.>. Signal-dominated components should show a much higher correlation than noise-dominated components. 
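A minimal sketch of this correlation-based selection is given below. It assumes the modes have already been produced by one of the decompositions above and are stored as an (n_modes, n_samples) numpy array; the 0.1 cut-off is purely illustrative and not a value taken from the cited studies.

# Keep only the modes that correlate with the original signal, then
# sum the retained modes to form the denoised reconstruction.
import numpy as np

def select_modes(signal, modes, rho_min=0.1):
    kept = []
    for mode in modes:
        rho = np.corrcoef(signal, mode)[0, 1]    # Pearson correlation coefficient
        if abs(rho) >= rho_min:
            kept.append(mode)
    return np.sum(kept, axis=0) if kept else np.zeros_like(signal)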
However, correlation coefficients cannot measure the non-linear dependency between decomposed components and original UAS, which is essential in complex signal environments. In addition to linear criterion, the entropy is an essential indicator to reflect information in each IMF <cit.>. Hence, the literature has explored various entropy-based metrics to compute the threshold, such as permutation entropy (PE) <cit.>, amplitude-aware PE <cit.>, dispersion entropy (DE) <cit.>, fluctuation-based DE <cit.>, slope entropy <cit.>, weighted PE<cit.>, neural network estimation time entropy <cit.>. Mutual information quantifies the amount of information obtained about one random variable through another random variable. In the context of signals, it measures how much information the presence of one signal can tell about another signal. This is particularly useful when determining how much of one signal (such as the original) is present in another (like a decomposed signal component). Mutual information <cit.>. In addition to the above thresholding technique based on single-threshold, dual thresholds are researched in the literature <cit.>. For instance, an interval thresholding is employed in <cit.>. §.§.§ Thresholding function Thresholding functions aim at eliminating noisy components while retaining strong signals. Table <ref> summarizes basic thresholding functions and baselines for advanced thresholding functions in the literature <cit.>. For instance, in <cit.>, a new adaptive thresholding function considering the continuity of input-output curves, and is defined as: ŵ_j,k = sgn(w_j,k) ( |w_j,k| - | w_j,k|^η (λ_j - |w_j,k|) * λ_j ), |w_j,k| ≥λ_j 0, |w_j,k| < λ_j. Although advanced adaptive or semi-soft thresholding functions have been proposed and demonstrated outstanding performance, they often necessitate the optimization of additional hyper-parameters. This requirement can complicate their practical implementation and demand extensive computational resources or domain expertise to achieve optimal results. Such complexities can be a barrier, especially in applications with critical real-time processing or limited computational resources. Moreover, tuning these hyper-parameters can be sensitive to the specific characteristics of the data, making these methods less robust across diverse datasets unless carefully adjusted. §.§ Hyper-parameters of signal decomposition A practical issue of the decomposition-based denoising framework is determining the hyper-parameters of decomposition algorithms <cit.>. Signal decomposition algorithms share an essential hyper-parameter, the decomposition level. Smaller decomposition levels lead to significant mode mixing issues. However, high decomposition levels may generate components of fake frequencies and deteriorate the denoising performance. Meanwhile, each additional level of decomposition increases the computational burden. Researchers attempted to directly apply the decomposition level of EMD to VMD to retain the advantages of VMD while incorporating the adaptive capabilities of EMD <cit.>. Spearman correlation coefficients are utilized as a threshold to determine whether the decomposed component is unsubtle <cit.>. Measuring correlations between reconstructed and original UAS can also guide the selection of decomposition level <cit.>. Evolutionary optimization successfully determines hyper-parameters of signal decomposition algorithms in the UAS denoising literature <cit.>. 
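For small search spaces an exhaustive baseline already illustrates the idea: the sketch below treats the wavelet decomposition level as the hyper-parameter and keeps the deepest level whose thresholded reconstruction still correlates strongly with the input. The correlation floor, the wavelet, and the use of PyWavelets are illustrative assumptions rather than settings prescribed by the cited studies.

# Exhaustive selection of the decomposition level by a correlation criterion.
import numpy as np
import pywt

def denoise_at_level(x, level, wavelet='db4'):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

def choose_level(x, wavelet='db4', rho_min=0.9, max_level=8):
    """Deepest level whose reconstruction still correlates (>= rho_min) with x."""
    limit = pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len)
    chosen = 1
    for level in range(1, min(max_level, limit) + 1):
        y = denoise_at_level(x, level, wavelet)
        if np.corrcoef(x, y)[0, 1] >= rho_min:
            chosen = level
        else:
            break
    return chosen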
Researchers employ various evolutionary algorithms to search for threshold, decomposition, and other crucial parameters <cit.>. § DEEP LEARNING Deep learning (DL) algorithms employ a deep neural network to reconstruct clean signals from noisy input <cit.>. Most literature has followed the framework of autoencoder for reconstruction <cit.>. Designing suitable architectures and novel loss functions to extract noise-resistant features efficiently is crucial <cit.>. Table <ref> summarizes representative DL-based UAS denoising in recent years. §.§ DL-based UAS denoising methodology Unlike traditional denoising algorithms and signal decomposition techniques, neural networks operate without preset assumptions about the noise characteristics <cit.>. Various deep learning architectures have demonstrated efficacy in UAS denoising, including convolutional neural networks (CNNs) <cit.>, recurrent neural networks (RNNs) <cit.>, and attention-based neural networks <cit.>. The reconstruction-based DL denoising algorithms pipeline is visualized in Figure <ref>. Time-frequency transformation is optional because the DL model can directly process the original UAS. This framework trains a denoising DL model based on reconstruction loss and SNR-related loss. Reconstruction loss can be computed based on the spectrum when any Time-Frequency transform is adopted. An ideal binary mask (IBM) is initially estimated using features derived from clean and noisy signals, followed by training a deep multilayer perceptron (MLP) to predict the IBM for effective denoising <cit.>. To reconstruct the noisy input, a stacked convolutional sparse denoising autoencoder is employed, leveraging sparse representations <cit.>. Furthermore, a Multiscale Residual Unit (MSRU) incorporating various convolutional kernels has been proposed to extract robust noise-resistant features <cit.>. Additionally, CNN features can be enhanced through a dual-path recurrent neural network, significantly improving the denoising performance for UAS <cit.>. Considering the high dimensionality of the original time series, Mel-frequency cepstral coefficients (MFCCs) are extracted as training samples from both the original and denoised UAS using CEEMDAN <cit.>. Researchers have investigated the denoising capabilities of Generative adversarial networks (GANs) for UAS <cit.>. Specifically, the GAN algorithm has been employed to mitigate underwater ambient noise <cit.>. Initially, the short-time Fourier transform (STFT) is applied, using magnitude and phase features as inputs for the GAN. Clean signals are then reconstructed from the GAN's output using the inverse STFT (ISTFT). In another approach, a GAN is utilized to generate clean signals, with the discriminator designed to distinguish between real noisy signals and the combination of clean signals with ambient noise <cit.>. This denoising model incorporates a 1D convolutional layer for feature extraction. Experimental results indicate that the GAN model surpasses both EMD and wavelet methods in performance. Recently, the attention mechanism has been integrated into denoising UAS. <cit.> developed a dual-branch attention-based neural network to reconstruct clean signals from noisy complex spectra. Comparative studies have demonstrated that this deep learning approach surpasses traditional Wavelet-based and EMD-based denoising algorithms. 
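To make the reconstruction-based pipeline shared by these works concrete, the following sketch shows the typical training loop: a small 1-D convolutional autoencoder fitted with a mean-squared-error reconstruction loss on (noisy, clean) window pairs. The architecture, window length, and the synthetic batch are placeholders rather than any of the cited models; a real system would load paired or pseudo-paired UAS windows at the marked point.

# Tiny 1-D convolutional denoising autoencoder with an MSE reconstruction loss.
import torch
import torch.nn as nn

class ConvDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=9, stride=2,
                               padding=4, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# Placeholder batch: 8 windows of 1024 samples (clean signal + additive noise).
clean = torch.sin(torch.linspace(0, 200, 1024)).repeat(8, 1, 1)
noisy = clean + 0.3 * torch.randn_like(clean)

for step in range(100):                       # training-loop skeleton
    optimizer.zero_grad()
    loss = criterion(model(noisy), clean)     # reconstruction loss
    loss.backward()
    optimizer.step()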
Additionally, <cit.> proposed a deep learning model incorporating Residual (Res) blocks and attention modules to effectively separate a noisy waveform into noise and a denoised waveform. Furthermore, a Transformer model, trained to maximize the SNR, has been utilized for acoustic signal denoising <cit.>. Another innovative deep learning model, featuring channel, frequency, and time attention modules, has been introduced to extract robust noise-resistant features across multiple domains <cit.>. Another significant challenge involves the disparate optimization objectives between denoising and recognition tasks in UAS processing <cit.>. To address this, a joint training framework utilizing a modified Transformer neural network has been proposed, which successfully achieves both denoising and recognition <cit.>. The loss function in this framework is dual-part; it includes a denoising component where the mean squared error between the noisy and the clean signal is minimized. A self-supervised, dual-channel self-attention encoder has been proposed to learn robust, noise-resistant features of UAS <cit.>. This self-supervised learning approach compels the UAS model to identify and retain the most informative and stable patterns for succeeding in the pretext task. Additionally, this method inherently encourages the model to learn features invariant to minor perturbations or variations (i.e., noise) in the input data, focusing on attributes crucial for distinguishing between fundamentally different classes or scenarios. Furthermore, data augmentation has been demonstrated to enhance the accuracy and noise robustness of the UAS model <cit.>. Diversity in architectures is crucial for deep learning-based approaches in UAS processing. For example, multiple classifiers are constructed to handle different types of noise <cit.>. The pivotal principle in designing deep learning models for UAS-related tasks lies in extracting robust and noise-resistant features. A diverse set of feature extractors facilitates the extraction of multi-scale features, ultimately enhancing noise resistance. The literature demonstrates using convolutional filters with diverse kernel sizes to capture these multi-scale features <cit.>. Different convolutional kernels aid in automatically learning features across various frequencies. Moreover, <cit.> propose a parallel architecture to jointly learn from UAS data, optimizing feature extraction for enhanced performance. Designing suitable loss functions is essential for deep-learning denoising methods. Generally, DL-based UAS denoising algorithms employ reconstruction-based loss <cit.>. They train a DL model to reconstruct clean signal-dominated components from noisy UAS <cit.>. For recognition-based DL models, supervised classification losses are employed <cit.>. Besides reconstruction-based and recognition-based terms, other terms assisting in enhancing noise-resistance representations and generalizations are designed and employed <cit.>. For instance, the distance between learned features and feature centroid is minimized to enhance the noise-resistance of features <cit.>. Correspondingly, a passive attention loss is defined. Some studies train the neural network to maximize SNR <cit.>. §.§ Input formulation Appropriate formulations of the input for UAS are critical for the performance of deep learning denoising methods <cit.>. Although directly processing raw UAS data is a straightforward choice, a meaningful formulation significantly aids in training deep learning models. 
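One common formulation, discussed next, is a time-frequency representation; the short sketch below produces a log-magnitude spectrogram with scipy's STFT. The sampling rate and window parameters are arbitrary illustrative values, and the random waveform merely stands in for a recorded UAS window.

# Turning a waveform window into a (log-)magnitude spectrogram input.
import numpy as np
from scipy.signal import stft

fs = 16_000                                        # assumed sampling rate (Hz)
x = np.random.default_rng(0).standard_normal(fs)   # 1 s placeholder waveform

f, t, Zxx = stft(x, fs=fs, nperseg=512, noverlap=384)
log_mag = np.log1p(np.abs(Zxx))                    # (freq_bins, time_frames) feature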
Typically, the UAS community utilizes time-frequency transformations of the raw signal, such as the STFT <cit.>, Mel-Spectrum <cit.>, Bark spectrum <cit.>, and hand-crafted features <cit.>. Experimental studies have demonstrated that features like the magnitude STFT spectrum, complex-valued STFT spectrum, and log-mel spectrum notably enhance the performance of deep CNNs in underwater recognition tasks <cit.>. Additionally, Mel-frequency cepstral coefficients are employed as inputs for a deep CNN to improve recognition accuracy further <cit.>. §.§ Data augmentation Data augmentation is a critical technique in improving the noise resistance of deep learning models and achieved significant success in sequential tasks <cit.>. Data augmentation artificially expands the training dataset by creating modified versions of the existing UAS. These modifications might include adding noise, cropping, or changing lighting conditions. This variety assists in training denoising and recognition models based on diverse samples. By training on a more diverse data set, data augmentation acts as a form of regularization. It effectively prevents the model from memorizing the training UAS (overfitting), encouraging more robust generalization abilities. UAS denoising models are usually developed based on the time-frequency transformation of the original UAS <cit.>. Data augmentations can be directly applied to the original UAS in the time domain by adding noise <cit.>. Adding noise is naturally advantageous for underwater tasks due to the noisy and complex characteristics of the underwater environment <cit.>. After transforming the original UAS into time-frequency representations, masking is a common and straightforward augmentation strategy <cit.>. For instance, time-masking and frequency-masking are implemented on representations obtained from Mel filter bank <cit.>. Conventional data augmentation strategies in UAS processing involve training denoising or recognition models using both raw and augmented samples. However, the direct contribution of augmented samples to training losses may lead to performance degradation <cit.>. To address this issue, a smoothness-inducing regularization technique has been proposed to minimize the distance between representations of raw and augmented samples, thereby improving the consistency and effectiveness of the training process <cit.>. Additionally, a local masking and replicating technique has been developed, which randomly selects two samples, applies local masking, and mixes them to create a new augmented sample <cit.>. Experimental results indicate that these proposed augmentation techniques outperform GAN-based algorithms in terms of enhancing model robustness and recognition accuracy <cit.>. § OTHER METHODS In addition to the above UAS denoising techniques, the literature has explored other techniques, such as dictionary learning <cit.> and Least mean squares (LMS) denoising <cit.>. The study has demonstrated that the LMS denoising algorithm outperforms EMD and VMD <cit.>. § EVALUATION METRICS Performance evaluation is crucial for assessing the effectiveness of UAS denoising techniques <cit.>. Various metrics are utilized in the literature to evaluate UAS denoising performance. Predominantly, these metrics are based on the signal-to-noise ratio (SNR), which quantifies the desired signal level relative to the background noise. 
Additionally, some studies employ recognition accuracy as a direct measure of performance, primarily when denoising is intended to improve the accuracy of subsequent recognition tasks. This section summarizes the evaluation metrics commonly used in UAS denoising research. §.§ Signal quality metrics §.§.§ Signal-to-noise ratio The signal-to-noise ratio (SNR) quantifies the proportion of signal power to noise power. Higher SNR values indicate lower noise content in the signal, whereas lower SNR values suggest higher noise content. The SNR is defined as: SNR = 10 log_10( P_signal/P_noise), where P_signal and P_noise represent the power of pure UAS and noisy signals, respectively. SNR is a necessary and popular evaluation metric for UAS denoising <cit.>. §.§.§ Peak signal-to-noise ratio The peak signal-to-noise ratio (PSNR) is a significant metric for evaluating UAS denoising quality. For the UAS, PSNR measures the ratio of the maximum possible power of a signal to the power of corrupting noise that affects the fidelity of its representation <cit.>. The PSNR can be computed by PSNR = 10 log_10( MAX_I^2/MSE), where MSE is the mean squared error between the original and the denoised signal. A higher PSNR value indicates that the denoised signal is higher quality than the noise level. The denoising process effectively reduces the noise while preserving the integrity and strength of the original signal. §.§.§ Signal-to-distortion ratio The signal-to-distortion ratio (SDR) specifically focuses on the distortion between the original signal and the estimated signal <cit.>. The SDR is defined as, SDR = 10 log_10( P_signal/P_distortion), §.§.§ Signal-to-distortion ratio improvement The Signal-to-Distortion Ratio Improvement (SDRi) measures the improvement in SDR due to some processing or alteration of a signal. The SDRi is calculated by comparing the SDR before and after the processing <cit.>. SDRi = SDR_after - SDR_before §.§.§ Scale-invariant signal-to-noise ratio improvement The Scale-Invariant SNR Improvement (SI-SNRi) is a measure often used in audio and speech processing to evaluate the effectiveness of enhancement algorithms, particularly when the absolute scale of the signal may not be consistent or important. This metric adjusts for scaling differences between the processed and original signals, providing a more robust comparison. SI-SNRi = SI-SNR_after - SI-SNR_before The scale-invariant SNR is calculated differently from the traditional SNR to account for scaling factors between the target and estimated signals. It involves normalizing the signal relative to a reference before computing the power ratio: SI-SNR = 10 log_10(α· x(n)^2/x̂(n) - α· x(n)^2), where α = ⟨estimate, target⟩/target^2 scales the target signal to best fit the estimate in a least-squares sense. This normalization allows the SI-SNR to be independent of the signal's scale, focusing solely on the noise and distortion relative to the target's shape and structure <cit.>. §.§.§ Segment signal-to-noise ratio The Segment SNR (SSNR) averages the SNR values computed for each segment, giving a more detailed measure of signal quality across different parts of the signal, which is especially useful in cases where signal characteristics vary over time <cit.>. The SSNR can be computed by SSNR = 10/M∑_m=1^M log_10( P^m_signal/P^m_noise), where P^m_signal and P^m_noise represent the power of the m^th segment of the signal and noise, respectively. 
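For reference, the signal-quality metrics above can be computed directly from a clean target and a denoised estimate. The plain-numpy sketch below mirrors the definitions given in this section; the segment length and the small eps used to avoid division by zero are illustrative choices.

# Reference implementations of SNR, SI-SNR and segmental SNR (all in dB).
import numpy as np

def snr_db(clean, est, eps=1e-12):
    noise = clean - est
    return 10.0 * np.log10((np.sum(clean ** 2) + eps) / (np.sum(noise ** 2) + eps))

def si_snr_db(clean, est, eps=1e-12):
    alpha = np.dot(est, clean) / (np.dot(clean, clean) + eps)   # optimal scaling
    target = alpha * clean
    return 10.0 * np.log10((np.sum(target ** 2) + eps) /
                           (np.sum((est - target) ** 2) + eps))

def ssnr_db(clean, est, seg_len=256):
    n_seg = len(clean) // seg_len
    segs = [snr_db(clean[i * seg_len:(i + 1) * seg_len],
                   est[i * seg_len:(i + 1) * seg_len]) for i in range(n_seg)]
    return float(np.mean(segs))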
§.§ Reconstruction error Reconstruction error measures the deviations between the pure signal and noise reduction signal. An outstanding denoising algorithm should precisely reconstruct the pure signal and achieve a small reconstruction error. There are various reconstruction errors in the literature <cit.>. Table <ref> summarizes popular reconstruction errors employed in the UAS denoising literature. § EXPERIMENTAL DATASET The literature on UAS denoising algorithms has been evaluated using a variety of datasets due to the challenges inherent in the underwater environment and the difficulties associated with real-world data collection. Consequently, many studies have utilized synthetic data and artificial noise to test their algorithms. On the other hand, some studies have conducted experiments with real-world UAS datasets. The table below provides a summary of popular datasets used in UAS denoising research. §.§ Synthetic data Due to the limited availability of UAS datasets, researchers have purposefully simulated synthetic pure and noise signals to test denoising algorithms. Table <ref> summarizes synthetic pure signals are popular in the UAS denoising literature. The literature may generate different pure signals with different initial states. Then, various synthetic noise signals are added to pure signals to simulate the noisy UAS. According to Table <ref>, Lorenz signal is the most popular, whereas Ikeda and Mackey Glass signals are much less popular. §.§ Real-world data In addition to the synthesized data discussed earlier, the literature also examines UAS denoising algorithms using real-world datasets. Table <ref> provides a summary of studies that utilize these real-world datasets. Unfortunately, most datasets gathered by respective authors are not publicly accessible. Among the publicly available datasets, ShipsEar and DeepShip are the most notable. The ShipsEar dataset comprises underwater acoustic recordings of ships and boats, featuring 90 recordings across 11 vessel types, totaling 6189 seconds of audio. In contrast, the DeepShip dataset includes 47 hours and 4 minutes of real-world underwater recordings, capturing 265 different ships categorized into four classes. These recordings were made throughout various seasons, featuring diverse sea states and noise levels. The DeepShip dataset contains nearly seven times more recordings and is approximately 25 times longer in total duration than the ShipsEar dataset. Usually, each record is segmented into small windows to train denoising algorithms <cit.>. § APPLICATIONS UAS denoising is a necessary component for various underwater applications. This section presents some major roles of UAS denoising technologies in real-world applications. 1. Maritime Navigation and Safety Improved denoising techniques help in clearer detection of obstacles, other vessels, and navigational aids, reducing the risk of collisions and grounding in poor visibility conditions <cit.>. 2. Submarine Communications In underwater environments, where radio waves cannot be effectively used, acoustic signals serve as the primary communication medium <cit.>. Denoising these signals ensures more reliable and clearer communications between submarines and surface vessels. 3. Marine Life Monitoring Acoustic signals are used to monitor the presence, movement, and behavior of marine species. Effective denoising is essential to accurately identify species from their sounds, which is crucial for ecological studies and conservation efforts <cit.>. 4. 
Shoreline surveillance Shoreline surveillance refers to the monitoring and observation activities conducted along coastlines to ensure security, safety, and environmental integrity. This practice involves the use of various technologies and strategies to detect, track, and respond to activities and natural phenomena that occur near the shore <cit.>. § OPEN QUESTIONS AND FUTURE DIRECTIONS The UAS community has delved into advanced decomposition frameworks, thresholding techniques, and DL algorithms. Nevertheless, numerous unexplored avenues remain, warranting further extensive research and exploration. 1 Signal decomposition is widely used to denoise the UAS, yet identifying the optimal decomposition level continues to pose a significant challenge. While the sensitivity analysis of decomposition levels across different algorithms has been explored, it remains largely cursory. Treating the decomposition level as a hyperparameter and applying hyperparameter optimization algorithms could be a promising strategy. However, this approach is often too time-consuming and impractical for underwater applications, which demand real-time processing capabilities. 2 There is no consensus in the literature regarding the most effective signal decomposition algorithm for denoising the UAS. Benchmarking studies comparing different signal decomposition algorithms are notably absent. While some studies claim the superiority of specific algorithms based on outcomes from signal decomposition research, UAS present unique challenges that necessitate further specialized investigation. 3 The literature employs different steps and parameters, such as normalization, sampling rate, and window length, to preprocess the UAS data before establishing the following denoising model. Differences in preprocessing lead to different conclusions and findings in terms of the performance of denoising and recognition models. A specific UAS denoising model may obtain outstanding performance on a small window and become much worse on long windows. Therefore, the standard framework to identify preprocessing schemes for any UAS dataset needs to be well-researched. 4 Although researchers have explored various advanced DL architectures, underwater applications require extremely high computing speed. When deploying the technology in underwater scenarios, the denoising model must adapt to new contexts quickly. Therefore, gradient-based DL models may fail to satisfy these requirements. However, deep randomized neural networks are suitable candidates due to their strong non-linear feature extraction ability and fast training speed <cit.>. 5 Automated learning, encompassing the fine-tuning of hyperparameters and training of learning-based UAS denoising algorithms, is imperative. The varied applications of UAS denoising techniques necessitate an automated framework, enabling practitioners to seamlessly employ these methods across diverse applications. Regrettably, current research overlooks the critical need for and significance of automated learning. Manual selection of hyperparameters and training algorithms diminishes the flexibility and practicality of UAS denoising techniques. Therefore, developing a environment-agnostic automatic learning framework of the UAS denoising techniques is worthy to explore. 6 Advanced ensemble learning techniques have demonstrated their efficacy in enhancing model robustness through the creation of a diverse array of base models <cit.>. 
In the realm of UAS-related tasks, the extraction of noise-resistant features and the development of robust models are imperative. However, the existing literature largely neglects the significance of an ensemble UAS denoising framework. Such a framework holds the potential to mitigate the weaknesses inherent in individual UAS denoising models, thereby enhancing overall denoising performance. Given the diversity and complexity of underwater environments, a single UAS denoising technique may struggle to achieve optimal performance across all scenarios. Nevertheless, through ensemble learning, various base models can collectively address different types of noisy signals, thus culminating in improved accuracy. 7 Efficient exploration of the marine environment and accurate detection of underwater events necessitate the deployment of diverse data collection systems, including UUV swarms and acoustic sensor networks. Each individual UUV and sensor unit possesses the capability to gather acoustic signals from distinct spatial coordinates. However, the integration of these disparate UUV systems and the development of a denoising algorithm based on the amalgamated data remains an uncharted territory. The potential schemes for dynamically fusing these data sources and denoising the resulting signal hold significant promise and merit further exploration. 8 Many UAS denoising algorithms operate within an offline framework, assuming all UAS data is available for model establishment. However, as UAS data is inherently sequential and underwater applications often occur in real-time scenarios, online processing is essential. The denoising step of UAS data typically precedes control or detection systems, which operate continuously in real-world applications. Thus, there's a need to extend decomposition-based and DL-based UAS denoising techniques to online variants. These online algorithms can incorporate online signal decomposition methods, real-time training algorithms for DL models, and online adaptation of DL architectures, among other considerations. 9 While many UAS denoising algorithms rely on unsupervised reconstruction, a few pioneering studies have delved into the potential of self-supervised learning for UAS denoising. However, a plethora of advanced self-supervised learning techniques remain unexplored and under-investigated. Crafting appropriate pretext tasks is crucial for the efficacy of self-supervised learning algorithms, especially given the unique challenges posed by UAS data. Moreover, there is ample opportunity to design pretext tasks tailored to UAS characteristics. Furthermore, integrating self-supervised learning with existing decomposition-based denoising algorithms holds promise. The development of a self-supervised decomposition-based denoising scheme represents a particularly promising avenue for future research. § CONCLUSION Underwater acoustic signals (UAS) are the most commonly collected data in underwater environments and play a pivotal role in a variety of applications. However, the inherent complexity of these environments poses significant challenges for the transmission, recognition, feature extraction, and interpretation of UAS. As a result, denoising UAS is a critical technological step essential for various applications. This review paper provides an overview of the developments in UAS denoising, from theoretical underpinnings to practical applications. 
Denoising involves the removal of extraneous noise and the extraction of signal-dominated information, which is then utilized for tasks such as target recognition. Traditional methods typically rely on signal processing and wavelet thresholding algorithms for noise removal. The majority of UAS denoising solutions employ signal decomposition techniques, which facilitate the separation of the complex original UAS into multiple modes. Subsequently, specific denoising algorithms are applied to each mode to eliminate noise. Finally, all denoised modes are recombined to produce the cleaned UAS. Recently, the rapid advancement of deep learning (DL) algorithms has spurred the UAS community to develop sophisticated DL-based denoising methods. These algorithms are typically trained to reconstruct signal-dominated information and maximize the signal-to-noise ratio. Given that a DL model's loss function can comprise multiple terms, researchers have the ability to combine denoising and recognition objectives within a single loss function. Consequently, DL-based denoising is not only task-oriented but also highly flexible, adapting to specific application needs with greater efficacy. IEEEtran
http://arxiv.org/abs/2407.13671v1
20240718164734
Liquid Amortization: Proving Amortized Complexity with LiquidHaskell (Functional Pearl)
[ "Jan van Brügge" ]
cs.CC
[ "cs.CC", "I.2.3; F.2.2" ]
0000-0003-1560-7326 Heriot-Watt University Edinburgh United Kingdom jsv2000@hw.ac.uk § ABSTRACT Formal reasoning about the time complexity of algorithms and data structures is usually done in interactive theorem provers like Isabelle/HOL <cit.>. This includes reasoning about amortized time complexity which looks at the worst case performance over a series of operations. However, most programs are not written within a theorem prover and thus use the data structures of the production language. To verify the correctness it is necessary to translate the data structures from the production language into the language of the prover. Such a translation step could introduce errors, for example due to a mismatch in features between the two languages. We show how to prove amortized complexity of data structures directly in Haskell using LiquidHaskell <cit.>. Besides skipping the translation step, our approach can also provide a didactic advantage. Learners do not have to learn an additional language for proofs and can focus on the new concepts only. For this paper, we do not assume prior knowledge of amortized complexity as we explain the concepts and apply them in our first case study, a simple stack with multipop. Moving to more complicated (and useful) data structures, we show that the same technique works for binomial heaps which can be used to implement a priority queue. We also prove amortized complexity bounds for Claessen's version of the finger tree <cit.>, a sequence-like data structure with constant-time cons/uncons on either end. Finally we discuss the current limitations of LiquidHaskell that made certain versions of the data structures not feasible. <ccs2012> <concept> <concept_id>10003752.10003809.10010031</concept_id> <concept_desc>Theory of computation Data structures design and analysis</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011006.10011008.10011009.10011012</concept_id> <concept_desc>Software and its engineering Functional languages</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10010940.10010992.10010998.10010999</concept_id> <concept_desc>Software and its engineering Software verification</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Theory of computation Data structures design and analysis [500]Software and its engineering Functional languages [500]Software and its engineering Software verification Liquid Amortization: Proving Amortized Complexity with LiquidHaskell (Functional Pearl) Jan van Brügge July 22, 2024 ======================================================================================= § INTRODUCTION Formally proving properties of production code is becoming more and more common. For safety critical code that is not allowed to "go wrong" as well as code that is fundamental and reused very often, like core libraries, bugs can have a widespread impact. Proving that the code behaves according to the specification can provide an extra layer of confidence. Fundamental data structures are one of the most important parts of a programming language ecosystem and a lot of production code depends on them. In languages like Haskell this comes mostly in the form of functional, persistent data structures like the ones described in Okasaki's book on purely functional data structures <cit.>. Usually formal proofs are done in an external theorem prover (compare e.g. 
<cit.> or <cit.>), requiring an explicit translation step from the production language to the language the theorem prover uses. Such a translation step could introduce subtle differences or bugs if this translation is done manually. Additionally, the target language may not support certain features present in the source language (e.g. polymorphic recursion in data types, see section <ref>). In those cases, the missing features have to be emulated in the target language, again opening the door for small and subtle errors. Because the code that is actually used in the formal proofs differs from the source code that is running in production, it is often hard to reconnect the proofs back to the original production code. Another possibility is using LiquidHaskell <cit.>. It is a refinement type <cit.> checker plugin[Before 2020, LiquidHaskell was implemented as a standalone executable; it now being a GHC plugin <cit.> drastically improves usability] for GHC, the main Haskell compiler used in industry. It allows specifying additional constraints on types, e.g. "integers greater than zero" instead of just "integers". These constraints are propagated through the code and checked by an external SMT solver. With a handful of proof combinators defined by LiquidHaskell, it is possible to use Haskell directly as a theorem prover, enabling intuitive equational reasoning within the language <cit.>. This makes it easy to keep the implementation and its respective proof in sync, as any failure to do so will result in a compiler error. LiquidHaskell errors are also integrated with the normal Haskell IDE tooling, so the user can see proof failures directly in their editing environment. Another benefit is that the reader or author of a proof does not need to learn an additional language, the production language is the same as the proof language. In this paper, we: * Assume no prior knowledge of amortized complexity as we recap the theoretical background (section <ref>). * Show the reasoning infrastructure that is needed to prove amortized complexity using LiquidHaskell (section <ref>). * Apply this technique to several case studies: a simple stack (section <ref>), binomial heaps (section <ref>) and finger trees (section <ref>). * Discuss the limitations of LiquidHaskell for such proofs compared with proper theorem provers (section <ref>). § AMORTIZED COMPLEXITY It is common practise to investigate how many units of time an operation on a data structure takes in relation to the number of elements in the data structure. As the performance can vary wildly for the same operation depending on the state of the data structure, usually the worst case for each operation is used as a metric and specified using Landau notation ("big O"). For example, a linear search[walking through a list element by element, checking if the current element is the one that is searched for] takes 1 unit of time if the searched element is at the front, while it takes n units of time if it is the last element (where n is the number of elements in the list). Thus we would say that linear search is in the complexity class of Øn because it will not take longer than c · n units of time for some fixed constant c. But often this worst case is too pessimistic. When an operation changes the data structure, for example by removing elements, consecutive operations might be cheaper. Or to use an expensive operation it might be necessary to have a certain amount of cheap operations. 
In such cases one uses amortized complexity, which does not only look at a single operation in isolation, but at chains of operations. As a running example we will use a stack with push and multipop operations. The former allocates a new element and a reference to the rest of the stack and thus takes a constant amount of time. The latter removes the first k elements from the stack. In the worst case, it removes all the elements, making Øn. One can already see that this is too pessimistic, because after removing all elements from the stack, the next will do nothing. In fact, to be able to remove an element, it has had to be pushed to the stack at some point, so we can spread out the cost of over those pushes. There are multiple techniques to prove amortized complexity; we present the two most popular ones. Note that both of these techniques only apply in a non-shared setting where intermediate results are not reused. If we were allowed to keep a reference to the original data structure after an operation, we could do the same expensive operation several times without "paying" for it. Some data structures can exploit laziness to have the same (good) amortized complexity also in a shared setting, however formally verifying this would require explicit reasoning about the lazy semantics of the language and thus massively complicate the analysis. Note that in a non-shared setting the amortized complexity bounds apply regardless of whether the language is lazy or strict. Laziness may cause the data structure to do less work – and thus have a better complexity – but as an upper bound the amortized complexity is still valid. §.§ Banker's method The banker's method uses a "bank" account that can store time units. Operations have to pay their actual runtime cost and may additionally deposit into or withdraw from the account. The sum of the actual cost and the deposit/withdrawal is the amortized cost of the operation. The idea is that cheap operations pay extra time units, making their amortized cost higher than their actual cost. The goal is to keep the sum of the actual cost and the extra deposit in the same (good) complexity class as just the actual cost. Expensive operations on the other hand can use the stored time to pay for their actual cost, hopefully putting them in a better complexity class. In our example a has to pay one time unit as actual cost and it will also pay one extra time unit into the bank account. The sum of the amortized cost and actual cost still constant, so stays in Ø1. Because for every element pushed we deposit one extra time unit into the bank, the account always contains as many time units as there are elements on the stack. As cannot remove more elements than there are on the stack, withdrawing k time units to pay for the actual cost will never make the account become negative. Because withdrawing from the bank account is enough to pay for the complete actual cost of regardless of the choice of k, its amortized complexity is in Ø1. §.§ Physicist's method While the banker's method is easy to visualize, it is hard to use in a formal proof because the bank account acts as state between the operations. A simpler method is the physicist's method, which does not need such an account. The idea is to define a potential Φ for the data structure. This potential is defined to be 0 for an empty data structure and ≥ 0 for all other states. Intuitively, the potential represents how much time needs to be saved up to pay for expensive operations. 
We then define the amortized cost of an operation as c + Φ(h') - Φ(h) where c is the actual time cost of the operation, h is the data structure before the operation and h' is the data structure afterwards. Looking at a chain of operations 1, 2, ..., n, the sum of the amortized time is an overestimation of the actual cost by Φ(h_n) and thus an upper bound on the actual cost (see equation <ref>, remember that by definition Φ(h_0) = 0 and Φ(h_n) ≥ 0). (c_1 + Φ(h_1) - Φ(h_0)) + (c_2 + Φ(h_2) - Φ(h_1)) + ... + (c_n + Φ(h_n) - Φ(h_n - 1)) = c_1 + c_2 + ... + c_n + Φ(h_n) - Φ(h_0) For our stack we can define the potential to be the height of the stack. This fulfils the requirements as the empty stack has a potential of zero and all other states have a potential greater or equal to zero. For the actual proof of this example see the first case study in section <ref>. § LIQUIDHASKELL FOR THEOREM PROVING Usually LiquidHaskell is used to add pre- and postconditions to functions with the help of refinement types <cit.>. The refinements are additional constraints on top of normal Haskell data types. These constraints are then checked by an external SMT solver. By default LiquidHaskell uses the z3 <cit.> solver, but it also supports cvc4 <cit.>. For example, it allows us to specify that the caller of a division function has to ensure that the denominator is not zero. -@ div' :: Int -> x:Int | x /= 0 -> Int @- div' :: Int -> Int -> Int div' = div LiquidHaskell annotations are multiline Haskell comments delineated by @ signs, so that the code still compiles even without LiquidHaskell. It also allows the user to name argument types, so they can be referred to later. Alternatively there exists a Quasiquoter that would allow us to get rid of the duplication between the Haskell and the LiquidHaskell type signatures. In LiquidHaskell, all functions have to terminate. Usually a suitable termination metric is automatically deduced, but in some more complex cases, we can also manually supply this metric. From this termination metric, LiquidHaskell is able to generate an induction principle for the function that can be used in a proof. For example to prove that the length of a list is never negative, LiquidHaskell uses the induction hypothesis to assume that the length of some list is not negative and uses that to prove is not negative: -@ measure length @- -@ length :: [a] -> x:Int | x >= 0 @- length :: [a] -> Int length [] = 0 – trivial, 0 >= 0 length (x:xs) = 1 + length xs – by induction: length xs >= 0 – thus 1 + length xs >= 0 Refinements can use equality (written as ) and comparison without the need for the or typeclasses. The reason is that in refinments these operators are translated to the builtin operators of the SMT theory and thus cannot be customized. To use functions in the refinements, one has to reflect <cit.> them to the logic level. For a limited class of functions called measures, this can be done while retaining full automatic reasoning. A measure is a function that takes only one argument which is an algebraic data type, has one equation per constructor of this data type and which only uses arithmetic functions and other measures in its right hand sides. Other functions still can be used in refinements when annotated with , but the SMT solver is not able to check those refinements automatically. The user always has to provide the proofs themselves using LiquidHaskell's ability to do complex equational machine-checked reasoning <cit.>. 
To facilitate these proofs, LiquidHaskell provides a set of proof combinators and a type. This type is just a type alias for the normal Haskell unit type that is refined by the property that needs to be proved. The reasoning comes from the operator that asserts that both sides of the equality have the same refinement. We can bring other facts in scope using the operator. The angle brackets used in its refinement type are used to explicity quantify over two refinements. This will make the facts "visible" in the type signature and thus make them available to the SMT solver. A proof can be finished by the operator that casts a chain of equalities to the proof type (type slightly simplified here). type Proof = () trivial :: Proof trivial = () infixl 3 === -@ (===) :: x:a -> y:a | y == x -> v:a | v == x v == y @- (===) :: a -> a -> a _ === y = y infixl 3 ? -@ (?) :: forall a b <pa :: a -> Bool, pb :: b -> Bool>. a<pa> -> b<pb> -> a<pa> @- (?) :: a -> b -> a x ? _ = x data QED = QED infixl 3 *** -@ assume (***) :: a -> p:QED -> true @- (***) :: a -> QED -> Proof _ *** _ = () To illustrate we prove that the length of applied to two lists is the sum of the lengths of the two lists. Because takes more than one argument, it is not a measure and thus we need to do the proof manually. For this proof, we need to do a case distinction on the first argument. This is done by pattern matching on it like in a normal Haskell function. The rest of the proof is done by applying the definitions of the used functions, arithmetic equalities and induction: -@ reflect append @- append :: [a] -> [a] -> [a] append [] ys = ys append (x:xs) ys = x:(append xs ys) -@ lengthP :: xs:[a] -> ys:[a] -> length (append xs ys) == length xs + length ys @- lengthP :: [a] -> [a] -> Proof – base case lengthP [] ys = length (append [] ys) – Use definition of append [] _ === length ys === 0 + length ys – use definition of length [] === length [] + length ys *** QED – recursive case lengthP (x:xs) ys = length (append (x:xs) ys) – Use definition of append === length (x:(append xs ys)) – Use definition of length === 1 + length (append xs ys) – Use induction with the ? operator ? lengthP xs ys === 1 + length xs + length ys – Use definition of length === length (x:xs) + length ys *** QED This proof is quite lengthy, because we have to manually unfold definitions. LiquidHaskell also has a feature called proof by logical evaluation (ple). With this feature turned on, definitions are automatically unfolded several times for the SMT solver. Of course doing that for every definition might result in a performance hit, so it is also possible to selectively enable this feature on a per definition basis by tagging it with the flag. With ple enabled, the proof boils down to doing the case distinction and specifying the induction step: -@ automatic-instances lengthP @- -@ lengthP :: xs:[a] -> ys:[a] -> length (append xs ys) == length xs + length ys @- lengthP :: [a] -> [a] -> Proof lengthP [] _ = trivial lengthP (_ : xs) ys = trivial ? lengthP xs ys § CASE STUDY: STACK WITH MULTIPOP To continue with the example from section <ref>, we prove that multipop has an amortized complexity of Ø1. This data structure and its operations is defined below. We already add a precondition to assert that k is not negative with the help of a LiquidHaskell type synonym. 
-@ type Nat = x:Int | x >= 0 @- data Stack a = Empty | Elem a (Stack a) -@ reflect push @- push :: a -> Stack a -> Stack a push x s = Elem x s -@ reflect multipop @- -@ multipop :: Nat -> Stack a -> ([a], Stack a) @- multipop :: Int -> Stack a -> ([a], Stack a) multipop _ Empty = ([], Empty) multipop 0 s = ([], s) multipop n (Elem x s) = let (xs, s') = multipop (n - 1) s in (x:xs, s') To formally prove the amortized complexity with the physicist's method, we first need to define a potential Φ and a timing function for each of our operations that gives us the actual time cost of that operation. As seen in section <ref>, we use the height of the stack as our potential. The timing functions tell us the cost of every operation. Their definition follow the recursive structure of the operations. We could use TemplateHaskell to automatically derive these timing functions from the original operations but for clarity we will define them manually throughout this work. -@ reflect phi @- -@ phi :: Stack a -> Nat @- phi :: Stack a -> Int phi Empty = 0 phi (Elem _ s) = 1 + phi s -@ reflect pushT @- -@ pushT :: a -> Stack a -> x:Int | x >= 1 @- pushT :: a -> Stack a -> Int pushT _ _ = 1 – no recursion in – original definition -@ reflect multipopT @- -@ multipopT :: Nat -> Stack a -> x: Int | x >= 1 @- multipopT :: Int -> Stack a -> Int multipopT _ Empty = 1 multipopT 0 _ = 1 multipopT n (Elem _ s) = 1 + multipopT (n - 1) s First, we will prove the complexity of . While the amortized cost is in the same complexity class as the actual cost, we need to make sure that our potential function does not cause those two to diverge. LiquidHaskell does not allow quantifiers in refinements, because for functions other than measures, instantiating quantifiers in the SMT solver has unpredictable performance <cit.>. This means we have to guess an upper limit for our amortized runtime. In section <ref>, we paid two time units for a push to amortize the pops, so we will use 2 as our guess: import Language.Haskell.Liquid.ProofCombinators -@ pushP :: x:a -> s:Stack a -> pushT x s + phi (push x s) - phi s <= 2 @- pushP :: a -> Stack a -> Proof pushP x s = pushT x s + phi (push x s) - phi s – Use definition of push and pushT === 1 + phi (Elem x s) - phi s – Use definition of phi (Elem _ s) === 1 + (1 + phi s) - phi s – phi s gets canceled, 1 + 1 <= 2 *** QED To prove the complexity of , we will need to distinguish between the two base cases and the recursive case. With LiquidHaskell this is just a normal pattern match. Leaving the recursive case undefined for now, proving the base cases is very similar to so we use ple to automate the proofs: -@ automatic-instances multipopP @- -@ multipopP :: k:Nat -> s:Stack a -> multipopT k s + phi (snd (multipop k s)) - phi s <= 2 @- multipopP :: Int -> Stack a -> Proof multipopP 0 _ = trivial multipopP _ Empty = trivial To avoid cluttering the proof of the recursive case, we use an as pattern to alias to . The first steps are again applying the definitions of the used functions and simplifying the result. In the end we use the operator to add the induction hypothesis to the proof step and thus complete the proof. We again use ple here to unfold the definition of and at once. 
multipopP k xs@(Elem x s) =
      multipopT k xs + phi (snd (multipop k xs)) - phi xs
  -- Use definition of multipopT n (Elem _ s)
  === (1 + multipopT (k - 1) s) + phi (snd (multipop k xs)) - phi xs
  -- Use definition of multipop n (Elem _ s)
  === (1 + multipopT (k - 1) s) + phi (snd (multipop (k - 1) s)) - phi xs
  -- Use definition of phi (Elem _ s)
  === (1 + multipopT (k - 1) s) + phi (snd (multipop (k - 1) s)) - (1 + phi s)
  -- Remove parentheses
  === 1 + multipopT (k - 1) s + phi (snd (multipop (k - 1) s)) - 1 - phi s
  -- 1s cancel out
  === multipopT (k - 1) s + phi (snd (multipop (k - 1) s)) - phi s
  -- use induction hypothesis
    ? multipopP (k - 1) s
  *** QED

While this stack data structure might look contrived, it is basically half of Okasaki's queues <cit.>. A queue is one front and one back list (or stack) where the back list is in reverse order. This allows us to remove elements from the front list and add elements to the back list in O(1). When the front list is empty, the back list is rotated and used as the new front list, an operation which plays a role similar to multipop here.

§ CASE STUDY: BINOMIAL HEAPS

In functional languages priority queues are usually implemented as some kind of (min) heap. A binomial heap is a forest of binomial trees where there exists at most one tree of every rank. A binomial tree of rank zero is just the root. A binomial tree of rank k consists of its root and one binomial tree for every rank from k - 1 to zero as its children. Two binomial trees of rank k can be merged by attaching the tree with the bigger root as first child of the other tree, resulting in a binomial tree of rank k + 1 (see figure <ref>). To insert a new element into a binomial heap, we insert a new tree of rank zero into the forest. If the forest already contains a tree of rank zero, the mergeTree operation is used to get a tree of rank one. If there is already a tree of rank one in the forest, they get merged again, and so on. Once a free spot is found, insertion is finished. We can define the type of trees and of forests, as well as the operations, like this in Haskell:

data Tree a = MkTree a [Tree a]

-@ reflect mergeTree @-
mergeTree :: Ord a => Tree a -> Tree a -> Tree a
mergeTree l@(MkTree lr lc) r@(MkTree rr rc)
  | lr <= rr = MkTree lr (r : lc)
  | otherwise = MkTree rr (l : rc)

data Forest a = FEnd                      -- empty forest
              | F0 (Forest a)             -- no tree at this position
              | F1 (Tree a) (Forest a)

type Heap a = Forest a

-@ reflect insertTree @-
insertTree :: Ord a => Tree a -> Heap a -> Heap a
insertTree t FEnd = F1 t FEnd
insertTree t (F0 f) = F1 t f
insertTree t (F1 t' f) = F0 (insertTree (mergeTree t t') f)

Ideally we would use more advanced type level features of Haskell to ensure correctness of our implementation. For example we could ensure that the children of the root of a binomial tree have the correct ranks in descending order. However, at the time of writing, some language extensions necessary for this, like GADTs, are not well supported by LiquidHaskell and cause problems in the proofs (see appendix <ref> for the definition of such a correct-by-construction binomial heap). For the amortized analysis of the binomial heap, mergeTree only does one comparison and one allocation no matter which rank the trees are of, so it is in O(1). For the insertion we only work on the roots, of which there are at most log_2 n many.[A tree of rank k consists of two trees of rank k - 1, so the number of elements in the tree always doubles] So the insertTree operation is in O(log n). As our potential Φ we will use the number of trees in the forest. The intuition here is that the worst case happens when there is a tree at every position and we need to walk through the whole forest to find a free spot. This also fulfils the requirement that the empty data structure has a potential of zero.
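To make this concrete, consider the following small example (the values and names here are only illustrative, not part of the formal development). A heap with seven elements contains one tree of each rank from zero to two, so its potential, the number of trees, is 3; inserting an eighth element cascades through three merges and leaves a single rank-3 tree, so the potential drops to 1 and pays for the extra work.

fullHeap :: Heap Int
fullHeap =
  F1 (MkTree 1 [])                                           -- rank 0
    (F1 (MkTree 2 [MkTree 3 []])                             -- rank 1
      (F1 (MkTree 4 [MkTree 5 [MkTree 6 []], MkTree 7 []])   -- rank 2
        FEnd))

-- insertTree (MkTree 8 []) fullHeap
--   evaluates to F0 (F0 (F0 (F1 t FEnd))), where t is the rank-3 tree:
--   three merges plus the final placement, while the number of trees
--   drops from 3 to 1.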
Again, for clarity, we will define the timing functions manually to show how they are derived from the original operations. As the merge operation is constant, we include its cost directly in the insert operation for brevity. The proof is very similar to the example from section <ref>, so we will use ple to automate most of it.

-@ reflect phi @-
-@ phi :: Heap a -> Nat @-
phi :: Heap a -> Int
phi FEnd = 0
phi (F0 rest) = phi rest
phi (F1 _ rest) = 1 + phi rest

-@ reflect insertT @-
-@ insertT :: Ord a => Tree a -> Heap a -> x:Int | x >= 1 @-
insertT :: Ord a => Tree a -> Heap a -> Int
insertT _ FEnd = 1
insertT _ (F0 _) = 1
insertT t (F1 t' f) = 1 + insertT (mergeTree t t') f

-@ automatic-instances insertTreeP @-
-@ insertTreeP :: t:Tree a -> f:Heap a -> insertT t f + phi (insertTree t f) - phi f <= 2 @-
insertTreeP :: Ord a => Tree a -> Heap a -> Proof
insertTreeP t (F1 t' f') = trivial ? insertTreeP (mergeTree t t') f'
insertTreeP _ _ = trivial

§ CASE STUDY: FINGER TREES

To show that our approach scales to more complicated data structures that are widely used in industry, we will prove the complexity of finger trees, the data structure behind Data.Sequence from containers <cit.>. It was originally described by Hinze and Paterson <cit.>, but we will follow the simplified version of Claessen <cit.> here. The simplified version does not need nested pattern matches to implement concatenation of finger trees, which would otherwise lead to a combinatorial explosion in LiquidHaskell constraints.

data Seq a = Nil
           | Unit a
           | More (Digit a) (Seq (Tuple a)) (Digit a)

data Digit a = One a
             | Two a a
             | Three a a a

data Tuple a = Pair a a
             | Triple a a a

The finger tree has between one and three elements on either side to enable cheap cons and snoc. The most interesting property of the data structure is the polymorphic recursion. It ensures an equal depth of nesting of types on both sides of the spine. It is also what might make it difficult to verify in other languages, as for example Isabelle/HOL <cit.> does not directly support polymorphic recursion. In this case, it would be necessary to define the type such that it allows uneven nesting of Tuples and then "carve out" the subset of valid finger trees. Other provers like Coq <cit.> support polymorphic recursion, allowing a more direct one-to-one translation.

§.§ Cons and Snoc

Because the finger tree is a symmetric tree, cons and snoc are very similar. For this reason we show an explicit definition and proof only for cons and provide the definition and proof for snoc in the appendix. The base cases are straightforward and just use the flexibility of the digits to add the element to the front. The interesting case is when the front is already full and we need to recurse.

-@ reflect cons @-
cons :: a -> Seq a -> Seq a
cons x Nil = Unit x
cons x (Unit y) = More (One x) Nil (One y)
cons x (More (One y) q u) = More (Two x y) q u
cons x (More (Two y z) q u) = More (Three x y z) q u
cons x (More (Three y z w) q u) = More (Two x y) (cons (Pair z w) q) u

The next step is to find a valid potential function Φ. For this, we check which states of the data structure make cons expensive. As evident from the definition above, this is the case when the digit in the front is already full.
So we can define the danger of a digit as every state that will make the operation expensive. To also support removing elements from the front (the inverse operation), we not only mark a full digit as dangerous, but also one that has only a single element inside (as removing from it would go into the recursion). We then define the potential to be the sum of the danger in the tree. The timing functions are again directly derived from the operation itself. The proof itself can be mostly automated with ple.

-@ reflect danger @-
-@ danger :: Digit a -> Nat @-
danger :: Digit a -> Int
danger (One _) = 1
danger (Two _ _) = 0
danger (Three _ _ _) = 1

-@ reflect pot @-
-@ pot :: Seq a -> Nat @-
pot :: Seq a -> Int
pot Nil = 0
pot (Unit _) = 0
pot (More pr m sf) = danger pr + pot m + danger sf

-@ reflect consT @-
-@ consT :: a -> Seq a -> x:Int | x >= 1 @-
consT :: a -> Seq a -> Int
consT _ Nil = 1
consT _ (Unit _) = 1
consT _ (More (One _) _ _) = 1
consT _ (More (Two _ _) _ _) = 1
consT _ (More (Three _ z w) q _) = 1 + consT (Pair z w) q

-@ automatic-instances consAmortized @-
-@ consAmortized :: x:a -> t:Seq a -> consT x t + pot (cons x t) - pot t <= 3 @-
consAmortized :: a -> Seq a -> Proof
consAmortized _ (More (Three _ z w) q _) = trivial ? consAmortized (Pair z w) q
consAmortized _ _ = trivial

§.§ Append

Even though append has the same amortized complexity as its normal complexity, we need to show that it never increases the potential by more than a logarithmic amount. Claessen <cit.> uses a helper function glue that concatenates two finger trees with at most three extra elements in the middle. Several other functions also require very specific lower and upper bounds on the length of their arguments. This can be directly expressed using LiquidHaskell.

-@ reflect glue @-
-@ glue :: Seq a -> xs:[_] | len xs <= 3 -> Seq a -> Seq a @-
glue :: Seq a -> [a] -> Seq a -> Seq a
glue Nil as q2 = foldr cons q2 as
glue q1 as Nil = foldl snoc q1 as
glue (Unit x) as q2 = foldr cons q2 (x : as)
glue q1 as (Unit x) = snoc (foldl snoc q1 as) x
glue (More u1 q1 v1) as (More u2 q2 v2) =
  More u1 (glue q1 (toTuples (toList v1 ++ as ++ toList u2)) q2) v2

-@ reflect append @-
append :: Seq a -> Seq a -> Seq a
append q1 q2 = glue q1 [] q2

-@ reflect toList @-
-@ toList :: Digit a -> xs:[a] | len xs >= 1 && len xs <= 3 @-
toList :: Digit a -> [a]
toList (One x) = [x]
toList (Two x y) = [x, y]
toList (Three x y z) = [x, y, z]

-@ reflect toTuples @-
-@ toTuples :: xs:[_] | len xs >= 2 && len xs <= 9 -> ys:[_] | len ys >= 1 && len ys <= 3 @-
toTuples :: [a] -> [Tuple a]
toTuples = toTuples'

-@ reflect toTuples' @-
-@ toTuples' :: xs:[_] | len xs != 1 && len xs <= 9
             -> ys:[_] | if len xs == 0 then len ys == 0
                         else if len xs <= 3 then len ys == 1
                         else if len xs <= 6 then len ys == 2
                         else len ys == 3 @-
toTuples' :: [a] -> [Tuple a]
toTuples' [] = []
toTuples' [x, y] = [Pair x y]
toTuples' [x, y, z, w] = [Pair x y, Pair z w]
toTuples' (x:y:z:xs) = Triple x y z : toTuples' xs
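As a quick illustration of the grouping behaviour (an example of our own, not part of the original listing): seven leftover middle elements are grouped into one Triple followed by two Pairs, which stays within the one-to-three tuple bound required by glue.

exampleTuples :: [Tuple Int]
exampleTuples = toTuples' [1, 2, 3, 4, 5, 6, 7]
-- evaluates to [Triple 1 2 3, Pair 4 5, Pair 6 7]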
We also need to reimplement foldl, foldr and list concatenation, because their definitions in the standard library do not come with annotations, so they cannot be used in a LiquidHaskell proof.

-@ reflect foldl @-
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl _ x [] = x
foldl f a (x:xs) = foldl f (f a x) xs

-@ reflect foldr @-
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ x [] = x
foldr f a (x:xs) = f x (foldr f a xs)

-@ reflect ++ @-
-@ (++) :: x:[_] -> y:[_] -> z:[_] | len z == len x + len y @-
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)

For the amortization proof we of course have to use the same potential as for cons/snoc, and the timing functions are directly derived from the operation itself and all other functions called by it. Note that the timing functions of the higher-order foldr and foldl functions take a timing function as input.

-@ reflect foldrT @-
-@ foldrT :: (a -> b -> b) -> (a -> b -> x:Int | x >= 1 ) -> b -> [a] -> x:Int | x >= 1 @-
foldrT :: (a -> b -> b) -> (a -> b -> Int) -> b -> [a] -> Int
foldrT _ _ _ [] = 1
foldrT f fT b (x:xs) = fT x (foldr f b xs) + foldrT f fT b xs

-@ reflect foldlT @-
-@ foldlT :: (b -> a -> b) -> (b -> a -> x:Int | x >= 1 ) -> b -> [a] -> x:Int | x >= 1 @-
foldlT :: (b -> a -> b) -> (b -> a -> Int) -> b -> [a] -> Int
foldlT _ _ _ [] = 1
foldlT f fT a (x:xs) = fT a x + foldlT f fT (f a x) xs

-@ reflect glueT @-
-@ glueT :: Seq a -> as:[a] | len as <= 3 -> Seq a -> x:Int | x >= 1 @-
glueT :: Seq a -> [a] -> Seq a -> Int
glueT Nil as q2 = 1 + foldrT cons consT q2 as
glueT q1 as Nil = 1 + foldlT snoc snocT q1 as
glueT (Unit x) as q2 = 1 + foldrT cons consT q2 (x : as)
glueT q1 as (Unit x) = 1 + snocT (foldl snoc q1 as) x + foldlT snoc snocT q1 as
glueT (More _ q1 v1) as (More u2 q2 _) =
  1 + glueT q1 (toTuples (toList v1 ++ as ++ toList u2)) q2

We also need a way to specify the logarithmic complexity, so we will define a logarithm function as well as a way to calculate the number of elements in a finger tree.

-@ reflect log2 @-
-@ log2 :: x:Int | x >= 1 -> Nat @-
log2 :: Int -> Int
log2 1 = 0
log2 n = 1 + log2 (n `div` 2)

-@ reflect tuplesToList @-
-@ tuplesToList :: x:[_] -> y:[_] | len y <= 3 * len x && len y >= 2 * len x @-
tuplesToList :: [Tuple a] -> [a]
tuplesToList [] = []
tuplesToList (Pair a b:xs) = a:b:tuplesToList xs
tuplesToList (Triple a b c:xs) = a:b:c:tuplesToList xs

-@ reflect seqToList @-
seqToList :: Seq a -> [a]
seqToList Nil = []
seqToList (Unit x) = [x]
seqToList (More u q v) = toList u ++ tuplesToList (seqToList q) ++ toList v

For the base cases of glue it is helpful to prove that folding cons/snoc over a list takes at most as many time steps as the amortized complexity of cons/snoc times the number of elements in the list.

-@ automatic-instances foldrTCons @-
-@ foldrTCons :: q:Seq a -> as:[a] -> foldrT cons consT q as + pot (foldr cons q as) - pot q <= 3 * len as + 1 @-
foldrTCons :: Seq a -> [a] -> Proof
foldrTCons _ [] = trivial
foldrTCons q as@(x:xs) =
      foldrT cons consT q as + pot (foldr cons q as) - pot q
  === consT x (foldr cons q xs) + foldrT cons consT q xs
        + pot (cons x (foldr cons q xs)) - pot q
        + pot (foldr cons q xs) - pot (foldr cons q xs)
    ? consAmortized x (foldr cons q xs)
  =<= 3 + foldrT cons consT q xs + pot (foldr cons q xs) - pot q
    ? foldrTCons q xs
  =<= 3 + 3 * length xs + 1
  *** QED

-@ automatic-instances foldlTSnoc @-
-@ foldlTSnoc :: q:Seq a -> as:[a] -> foldlT snoc snocT q as + pot (foldl snoc q as) - pot q <= 3 * len as + 1 @-
foldlTSnoc :: Seq a -> [a] -> Proof
foldlTSnoc _ [] = trivial
foldlTSnoc q as@(x:xs) =
      foldlT snoc snocT q as + pot (foldl snoc q as) - pot q
  === snocT q x + foldlT snoc snocT (snoc q x) xs
        + pot (foldl snoc (snoc q x) xs) - pot q
        + pot (snoc q x) - pot (snoc q x)
    ? snocAmortized q x
  =<= 3 + foldlT snoc snocT (snoc q x) xs + pot (foldl snoc (snoc q x) xs) - pot (snoc q x)
    ? foldlTSnoc (snoc q x) xs
  *** QED
Now for the actual proof we will only show one of the base cases, as the others follow the same schema. All the base cases have a constant upper bound on their complexity. For the recursive case we also need some small helper facts about the logarithm, such as monotonicity. We also use a where binding to factor out common code and make the proof more readable.

-@ log2Mono :: x:Int | 1 <= x -> y:Int | x <= y -> log2 x <= log2 y @-
log2Mono :: Int -> Int -> Proof
log2Mono 1 m =
      log2 1 <= log2 m
  *** QED
log2Mono n m =
      log2 n <= log2 m
  === 1 + log2 (n `div` 2) <= 1 + log2 (m `div` 2)
  === log2 (n `div` 2) <= log2 (m `div` 2)
    ? log2Mono (n `div` 2) (m `div` 2)
  *** QED

-@ divCancel :: x:Int -> div (2 * x) 2 == x @-
divCancel :: Int -> Proof
divCancel _ = trivial

-@ glueAmortized :: q1:Seq a -> as:[a] | len as <= 3 -> q2:Seq a -> glueT q1 as q2 + pot (glue q1 as q2) - pot q1 - pot q2 <= log2 (max (len (seqToList q1) + len (seqToList q2)) 2) + 14 @-
glueAmortized :: Seq a -> [a] -> Seq a -> Proof
glueAmortized q1@Nil as q2 =
      glueT q1 as q2 + pot (glue q1 as q2) - pot q1 - pot q2
  === 1 + foldrT cons consT q2 as + pot (foldr cons q2 as) - pot q1 - pot q2
    ? foldrTCons q2 as
  =<= 1 + 3 * length as + 1 - pot q1
  =<= log2 (max (length (seqToList q1) + length (seqToList q2)) 2) + 14
  *** QED
glueAmortized qq1@(More u1 q1 v1) as qq2@(More u2 q2 v2) =
      glueT qq1 as qq2 + pot (glue qq1 as qq2) - pot qq1 - pot qq2
  === 1 + glueT q1 (toTuples (toList v1 ++ as ++ toList u2)) q2
        + pot (More u1 (glue q1 (toTuples (toList v1 ++ as ++ toList u2)) q2) v2)
        - danger u1 - pot q1 - danger v1 - danger u2 - pot q2 - danger v2
  === 1 + glueT q1 (toTuples (toList v1 ++ as ++ toList u2)) q2
        + pot (glue q1 (toTuples (toList v1 ++ as ++ toList u2)) q2)
        - pot q1 - danger v1 - danger u2 - pot q2
  -- apply induction
    ? glueAmortized q1 (toTuples (toList v1 ++ as ++ toList u2)) q2
  =<= log2 (max n 2) + 15
  -- move (+1) into the logarithm
    ? divCancel (max n 2)
  === 1 + log2 (2 * max n 2 `div` 2) + 14
  === log2 (2 * max n 2) + 14
  -- simplify using monotonicity of log2
    ? log2Mono (2 * max n 2) (4 + 2 * n1 + 2 * n2)
  =<= log2 (4 + 2 * n1 + 2 * n2) + 14
    ? log2Mono (4 + 2 * n1 + 2 * n2) (4 + m1 + m2)
  =<= log2 (4 + m1 + m2) + 14
    ? log2Mono (4 + m1 + m2)
               (max (length (toList u1 ++ tuplesToList (seqToList q1) ++ toList v1)
                   + length (toList u2 ++ tuplesToList (seqToList q2) ++ toList v2)) 2)
  =<= log2 (max (length (toList u1 ++ tuplesToList (seqToList q1) ++ toList v1)
              + length (toList u2 ++ tuplesToList (seqToList q2) ++ toList v2)) 2) + 14
  === log2 (max (length (seqToList qq1) + length (seqToList qq2)) 2) + 14
  *** QED
  where
    n1 = length (seqToList q1)
    n2 = length (seqToList q2)
    n = n1 + n2
    m1 = length (tuplesToList (seqToList q1))
    m2 = length (tuplesToList (seqToList q2))
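With this bound in place, the corresponding bound for append itself follows by instantiating glueAmortized with the empty middle list. The following corollary is only a sketch of our own (appendT and appendAmortized are illustrative names not used elsewhere; appendT is derived from append in the same way as the other timing functions, and the proof may additionally need ple to unfold append and appendT):

-@ reflect appendT @-
-@ appendT :: Seq a -> Seq a -> x:Int | x >= 1 @-
appendT :: Seq a -> Seq a -> Int
appendT q1 q2 = 1 + glueT q1 [] q2

-@ automatic-instances appendAmortized @-
-@ appendAmortized :: q1:Seq a -> q2:Seq a -> appendT q1 q2 + pot (append q1 q2) - pot q1 - pot q2 <= log2 (max (len (seqToList q1) + len (seqToList q2)) 2) + 15 @-
appendAmortized :: Seq a -> Seq a -> Proof
-- append q1 q2 is glue q1 [] q2, so the claim is the glue bound plus the
-- one extra step counted in appendT
appendAmortized q1 q2 = trivial ? glueAmortized q1 [] q2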
§.§ Comparison with Claessen's formalization

For their formalization, Claessen used HipSpec <cit.>, a tool to translate Haskell to first order logic. However, we were not able to directly compare these proofs to ours, as the tool has not seen any updates in the last 5 years and the last supported GHC version is 7.8. LiquidHaskell on the other hand has seen some industry adoption and has kept up to date with new GHC versions (version 9.8 at the time of writing). Aside from these technical limitations, they only formalized the comparatively easy cons and snoc complexities and leave out the much more interesting logarithmic complexity of append.

§ CONCLUSION

In this paper we successfully applied LiquidHaskell to prove the amortized complexity of increasingly complex data structures. Despite some limitations of LiquidHaskell, like incomplete support for GADTs, proving amortized complexity directly in Haskell is quite feasible. However, some more complex proofs might require a lot of code for rather simple proof steps (compare the use of monotonicity in the previous section). In such cases, the extra automation built into modern theorem provers would certainly improve the user experience. On the other hand, LiquidHaskell's explicit equational reasoning is easier to follow for a new learner than opaque proof scripts written in a tactic-based theorem prover like Coq or Lean[Lean does support equational reasoning via the calc environment, however most proofs are done using tactic scripts]. While Isabelle/HOL also features equational reasoning with Isar <cit.>, it does not support polymorphic recursion, which makes the definition of our third case study a lot harder. For future work, it would be interesting to investigate how complex an adaptation of our technique for reasoning about lazy semantics would be. This would allow us to expand the complexity results to a shared setting where reuse of intermediate results is allowed. It might be possible to adapt <cit.>'s or <cit.>'s works on complexity for lazy programs, but LiquidHaskell's lack of quantifiers will make this challenging.

§ CORRECT-BY-CONSTRUCTION BINOMIAL HEAP

import Data.Kind (Type)

data Nat = Zero | Succ Nat

type Tree :: Nat -> Type -> Type
data Tree k a where
  MkTree :: a -> DecrList Tree k a -> Tree k a

type DecrList :: (Nat -> Type -> Type) -> Nat -> Type -> Type
data DecrList t k a where
  DNil :: DecrList t Zero a
  DCons :: t k a -> DecrList t k a -> DecrList t (Succ k) a

mergeTree :: Ord a => Tree k a -> Tree k a -> Tree (Succ k) a
mergeTree l@(MkTree lr lc) r@(MkTree rr rc)
  | lr <= rr = MkTree lr (DCons r lc)
  | otherwise = MkTree rr (DCons l rc)

data Binary = B0 Binary | B1 Binary | BEnd

type BInc :: Binary -> Binary
type family BInc b where
  BInc BEnd = B1 BEnd
  BInc (B0 rest) = B1 rest
  BInc (B1 rest) = B0 (BInc rest)

type Forest :: Nat -> Binary -> Type -> Type
data Forest k b a where
  FEnd :: Forest k BEnd a
  F0 :: Forest (Succ k) b a -> Forest k (B0 b) a
  F1 :: Tree k a -> Forest (Succ k) b a -> Forest k (B1 b) a

type Heap :: Binary -> Type -> Type
newtype Heap b a = MkHeap (Forest Zero b a)

insert :: Ord a => Tree k a -> Forest k b a -> Forest k (BInc b) a
insert t FEnd = F1 t FEnd
insert t (F0 f) = F1 t f
insert t (F1 t' f) = F0 (insert (mergeTree t t') f)

§ DEFINITION AND PROOF FOR THE SNOC OPERATION ON A FINGER TREE

-@ reflect snoc @-
snoc :: Seq a -> a -> Seq a
snoc Nil x = Unit x
snoc (Unit x) y = More (One x) Nil (One y)
snoc (More u q (One x)) y = More u q (Two x y)
snoc (More u q (Two x y)) z = More u q (Three x y z)
snoc (More u q (Three x y z)) w = More u (snoc q (Pair x y)) (Two z w)

-@ reflect snocT @-
-@ snocT :: Seq a -> a -> x:Int | x >= 1 @-
snocT :: Seq a -> a -> Int
snocT Nil _ = 1
snocT (Unit _) _ = 1
snocT (More _ _ (One _)) _ = 1
snocT (More _ _ (Two _ _)) _ = 1
snocT (More _ q (Three x y _)) _ = 1 + snocT q (Pair x y)

-@ automatic-instances snocAmortized @-
-@ snocAmortized :: q:Seq a -> x:a -> snocT q x + pot (snoc q x) - pot q <= 3 @-
snocAmortized :: Seq a -> a -> Proof
snocAmortized (More _ q (Three x y _)) _ = trivial ? snocAmortized q (Pair x y)
snocAmortized _ _ = trivial
http://arxiv.org/abs/2407.12497v1
20240717112525
Cell-Free Massive MIMO Surveillance of Multiple Untrusted Communication Links
[ "Zahra Mobini", "Hien Quoc Ngo", "Michail Matthaiou", "Lajos Hanzo" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
IEEEexample:BSTcontrol [ Dmitriy Frolovtsev July 22, 2024 ====================== § ABSTRACT A cell-free massive multiple-input multiple-output (CF-mMIMO) system is considered for enhancing the monitoring performance of wireless surveillance, where a large number of distributed multi-antenna aided legitimate monitoring nodes (MNs) proactively monitor multiple distributed untrusted communication links. We consider two types of MNs whose task is to either observe the untrusted transmitters or jam the untrusted receivers. We first analyze the performance of CF-mMIMO surveillance relying on both maximum ratio (MR) and partial zero-forcing (PZF) combining schemes and derive closed-form expressions for the monitoring success probability (MSP) of the MNs. We then propose a joint optimization technique that designs the MN mode assignment, power control, and MN-weighting coefficient control to enhance the MSP based on the long-term statistical channel state information knowledge. This challenging problem is effectively transformed into tractable forms and efficient algorithms are proposed for solving them. Numerical results show that our proposed CF-mMIMO surveillance system considerably improves the monitoring performance with respect to a full-duplex co-located massive MIMO proactive monitoring system. More particularly, when the untrusted pairs are distributed over a wide area and use the MR combining, the proposed solution provides nearly a thirty-fold improvement in the minimum MSP over the co-located massive MIMO baseline, and forty-fold improvement, when the PZF combining is employed. This work is a contribution by Project REASON, a UK Government funded project under the Future Open Networks Research Challenge (FONRC) sponsored by the Department of Science Innovation and Technology (DSIT). It was also supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) (grant No. EP/X04047X/1). The work of Z. Mobini and H. Q. Ngo was supported by the U.K. Research and Innovation Future Leaders Fellowships under Grant MR/X010635/1, and a research grant from the Department for the Economy Northern Ireland under the US-Ireland R&D Partnership Programme. The work of M. Matthaiou was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001331). L. Hanzo would like to acknowledge the financial support of the Engineering and Physical Sciences Research Council projects EP/W016605/1, EP/X01228X/1, EP/Y026721/1 and EP/W032635/1 as well as of the European Research Council's Advanced Fellow Grant QuantCom (Grant No. 789028). Z. Mobini, H. Q. Ngo, and M. Matthaiou are with the Centre for Wireless Innovation (CWI), Queen's University Belfast, BT3 9DT Belfast, U.K. Email:{zahra.mobini, hien.ngo, m.matthaiou}@qub.ac.uk. L. Hanzo is with the School of Electronics and Computer Science, University of Southampton, SO17 1BJ Southampton, U.K. (e-mail: lh@ecs.soton.ac.uk). Parts of this paper were presented at the 2023 IEEE GLOBECOM conference <cit.>. Cell-free massive multiple-input multiple-output, monitoring node mode assignment, monitoring success probability, power control, proactive monitoring, wireless information surveillance. 
§ INTRODUCTION The widespread use of mobile devices along with the explosive popularity of wireless data services offered by fifth generation (5G) networks has led to the emergence of the so-called infrastructure-free communication systems, which include device-to-device (D2D), aerial vehicle (UAV)-aided communications, internet and so on. Although these wireless transmission systems provide an efficient and convenient means for establishing direct connections between mobile terminals, unauthorised or malicious users may misuse these networks to perform illegal activities, commit cyber crime, and jeopardize public safety. As a remedy, legitimate monitoring has attracted considerable attention in recent years <cit.>. In contrast to wireless physical-layer security (PLS), which aims for making the transmitted information indecipherable to illegitimate monitors <cit.>, this line of PLS puts emphasis on legally monitoring the communications of an untrusted pair. Wireless surveillance is typically classified into three main paradigms based on the kind of strategies used in the surveillance process by a legitimate monitor: 1) passive monitoring <cit.>, 2) proactive monitoring <cit.>, and 3) spoofing relaying <cit.>. In passive monitoring, the legitimate monitor silently observes an untrusted link and, hence, successful monitoring can only be achieved for the scenarios where the strength of the monitoring link is better than that of the untrusted one. In proactive monitoring, the legitimate monitor operates in a full-duplex (FD) mode, simultaneously observing the untrusted link and sending a jamming signal to interfere with the reception of the untrusted receiver (UR), thereby degrading the rate of the untrusted link. This improves the monitoring success probability (MSP) <cit.>, which is one of the fundamental monitoring performance objectives. Finally, in the context of spoofing relaying, the FD monitor observes the untrusted link and alters the channel information sent over the untrusted link to adjust its rate requirement. Since the publication of the pioneering paper <cit.>, proactive monitoring has been widely studied under diverse untrusted communication scenarios, such as multiple-input multiple-output (MIMO) systems <cit.>, relaying systems <cit.>, UAV networks <cit.>, cognitive radio networks <cit.>, and intelligent reflecting surface (IRS)-aided surveillance systems <cit.>. Specifically, in <cit.>, multi-antenna techniques were utilized to improve the monitoring performance. In <cit.>, an optimization framework was developed for the jamming power control and transmit/receive beamforming vectors at the legitimate monitor by maximizing the MSP. The authors of <cit.> established an optimization framework for joint precoding design and jamming power control, by taking into account the impact of jamming on the performance of other legitimate users. Low-complexity suboptimal zero-forcing (ZF)-based beamforming schemes were also proposed in <cit.>. Under the more practical assumption of imperfect channel state information (CSI), the maximization of the worst-case MSP attained by multi-antenna aided proactive monitoring systems was investigated in <cit.>, where the CSI error was deterministically bounded. Additionally, by assuming the knowledge of the imperfect instantaneous CSI of the observing link and CSI statistics of the jamming and the untrusted links, Zhang et al. 
<cit.> studied the performance of multi-antenna assisted proactive monitoring in uplink systems and derived semi-closed-form expressions for both the MSP and monitoring rate. Proactive monitoring was investigated in <cit.> in dual-hop decode-and-forward relaying systems, where the legitimate monitor can adaptively act as a monitor, a jammer or a helper, while proactive monitoring was studied in <cit.> via jamming designed for amplify-and-forward relay networks. Moon et al. <cit.> extended the results of <cit.> to multi-antenna aided multi-relay systems. Later, UAV-aided information surveillance was proposed in <cit.>, where an FD ground monitor observes the untrusted link and simultaneously sends the collected untrusted information to the UAV. By contrast, Li et al. <cit.> relied on a legitimate UAV to track untrusted UAV-to-UAV communications and developed both an energy-efficient jamming strategy and a tracking algorithm. The proactive monitoring concept of cognitive systems was introduced in <cit.> and <cit.>, where the secondary users are allocated to share the spectrum of the untrusted users, provided that they are willing to act as an observer or friendly jammer monitoring the untrusted link. Recently, IRSs have also found their way into information surveillance systems, where the IRS is used for degrading the untrusted channel's rate <cit.> and for improving the observing channel <cit.> to further enhance the monitoring performance. Finally, beneficial IRS deployment strategies and joint beamforming design problems were proposed in <cit.>. §.§ Knowledge Gap and Motivations It is important to point out that most studies tend to investigate simple setups concerning the untrusted communication links and/or observing links. More specifically, a popular assumption in the aforementioned literature is that there is a single untrusted link. This assumption is optimistic, because realistic systems are likely to have more than one untrusted communication links in practice. In this context, Xu and Zhu <cit.> have studied proactive monitoring using a single monitor for observing multiple untrusted pairs in scenarios associated with either average rate or with outage probability constrained untrusted links. Li et al. <cit.> used proactive monitoring with relaying features to increase the signal-to-interference-plus-noise ratio (SINR) of multiple untrusted links, which results in a higher rate for the untrusted links, and hence, higher observation rate. Moreover, Zhang et al. in <cit.> characterized the achievable monitoring rate region of a single-monitor surveillance system observing two untrusted pairs operating within the same spectral band and using a minimum-mean-squared-error successive interference cancellation (MMSE-SIC) receiver. Proactive monitoring was studied in <cit.> for the downlink of an untrusted non-orthogonal multiple access (NOMA) network with one untrusted transmitter (UT) and multiple groups of URs, while relying on a single-antenna monitor equipped with a SIC receiver. In the case of a distributed deployment of untrusted pairs over a geographically wide area, it is impractical to cater for the direct monitoring of each and every untrusted pair by relying on a single monitor. Hence, attaining a given target MSP performance for the untrusted pairs is a fundamental challenge. 
Therefore, cooperative operation relying on a single primary FD monitor and an auxiliary assistant FD monitor supervising a single UT and multiple URs was proposed in <cit.> for maximizing the monitoring energy efficiency via optimizing the jamming power and the cooperation strategy selected from a set of four specific strategies. Later, Moon et al. <cit.> looked into proactive monitoring relying on a group of single-antenna aided intermediate relay nodes harnessed for supporting a legitimate monitor, which acts either as a jammer or as an observer node. Furthermore, the authors of <cit.> harnessed a pair of single-antenna half-duplex nodes that take turns in performing observing and jamming. However, the significant drawback of these studies is that they only focused on either the single-untrusted-link scenario <cit.> or on a specific system setup <cit.>. Another main concern is the overly optimistic assumption of knowing instantaneous CSI of all links at the monitor nodes. In this case, the system level designs must be re-calculated on the small-scale fading time scale, which fluctuates quickly in both time and frequency. Therefore, the study of how to efficiently carry out surveillance operation using multiple monitors in the presence of multiple untrusted pairs is extremely timely and important, yet, this is still an open problem at the time of writing. To address the need for reliable information surveillance in complex practical scenarios, we are inspired by the emerging technique of cell-free massive MIMO (CF-mMIMO) <cit.> to propose a new proactive monitoring system, termed as CF-mMIMO surveillance. CF-mMIMO constitutes an upscaled version of user-centric network MIMO. In contrast to traditional cellular systems, i) fixed cells and cell boundaries disappear in CF-mMIMO and ii) the users are served coherently by all serving antennas within the same time-frequency resources <cit.>. Therefore, CF-mMIMO offers significantly higher degrees of freedom in managing interference, hence resulting in substantial performance improvements for all the users over conventional cellular networks. The beneficial features of CF-mMIMO are its substantial macro diversity, favorable propagation, and ubiquitous coverage for all users in addition to excellent geographical load-balancing. Owing to these eminent advantages, CF-mMIMO has sparked considerable research interest in recent years and has yielded huge performance gains in terms of spectral efficiency (SE) <cit.>, energy efficiency <cit.>, and security <cit.>. More interestingly, recent research has shown that utilizing efficient power allocation and receive combining/transmit precoding designs in CF-mMIMO, relying on multiple-antenna access points (APs) further enhances the system performance <cit.>. Our CF-mMIMO information surveillance system is comprised of a large number of spatially distributed legitimate multiple-antenna monitoring nodes (MNs), which jointly and coherently perform surveillance of multiple untrusted pairs distributed over a wide geographic area. In our system, there are typically several MNs in each other's close proximity for any given untrusted pair. Thus, high macro-diversity gain and low path loss can be achieved, enhancing the observing channel rates and degrading the performance of untrusted links. Therefore, CF-mMIMO surveillance is expected to offer an improved and uniform monitoring performance for all the untrusted pairs compared to its single-monitor (co-located) massive MIMO based counterpart. 
In addition, the favorable propagation characteristics of CF-mMIMO systems allow our CF-mMIMO surveillance system to employ simple processing techniques, such as linear precoding and combining, while still delivering excellent monitoring performance[In general, network-wide signal processing maximizes the system performance but it entails complex signal co-processing procedures, accompanied by substantial deployment costs. Hence, it is unscalable as the number of service antennas and/or users grows unboundedly. On the other hand, distributed processing is of low-complexity and more scalable, but its performance is often far from the optimal one. However, for CF-mMIMO systems, as a benefit of the distributed network topology and massive MIMO properties, distributed signal processing can strike an excellent trade-off between the system performance and scalability.]. More importantly, when the CF-mMIMO concept becomes integrated into our wireless surveillance system, a virtual FD mode can be emulated, despite relying on half-duplex MNs. Relying on half-duplex MNs rather than FD MNs, makes the monitoring system more cost-effective and less sensitive to residual self interference. More particularly, two types of MNs are considered: 1) a specifically selected subset of the MNs is purely used for observing the UTs; 2) the rest of the MNs cooperatively jam the URs. In addition, since the MNs are now distributed across a large area, the inter-MN interference encountered is significantly reduced compared to a conventional FD monitoring/jamming system. Moreover, harnessing the channel hardening attributes of CF-mMIMO systems enables us to dynamically adjust the observing vs. jamming mode, the MN transmit power, and the MN-weighting coefficients for maximizing the overall monitoring performance based on only long-term CSI. Table I boldly and explicitly contrasts our contributions and benchmarks them against the state-of-the art. We further elaborate on the novel contributions of this work in the next subsection in a point-wise fashion. §.§ Key Contributions The main technical contributions and key novelty of this paper are summarized as follows: * We propose a novel wireless surveillance system, which is based on the CF-mMIMO concept relying on either observing or jamming mode assignment. In particular, by assuming realistic imperfect CSI knowledge, we derive exact closed-form expressions for the MSP of CF-mMIMO surveillance system with multiple-antenna MNs over multiple untrusted pairs for distributed maximum ratio (MR) and partial ZF (PZF) combining schemes. Additionally, we show that when the number of MNs in observing mode tends to infinity, the effects of inter-untrusted user interference, inter-MN interference, and noise gradually disappear. Furthermore, when the number M_ of MNs in the jamming mode goes to infinity, we can reduce the transmit power of each MN by a factor of 1/M_ while maintaining the given SINR. * We formulate a joint optimization problem for the MN mode assignment, power control, and MN-weighting coefficient control for maximizing the minimum MSP of all the untrusted pairs subject to a per-MN average transmit power constraint. We solve the minimum MSP maximization problem by casting the original problem into three sub-problems, which are solved using an iterative algorithm. * We also propose a greedy UT grouping algorithm for our CF-mMIMO system relying on PZF combining scheme. 
Our numerical results show that the proposed joint optimization approach significantly outperforms the random mode assignment, equal power allocation, and equal MN-weighting coefficient based approaches. The simulation results also confirm that, compared to the co-located massive MIMO aided proactive monitoring system relying on FD operation, where all MNs are co-located as an antenna array and simultaneously perform observation and jamming, our CF-mMIMO surveillance system brings the MNs geographically closer to the untrusted pairs. Thus leads to a uniformly good monitoring performance for all untrusted pairs[In terms of data-sharing overhead and fronthaul, co-located massive MIMO based surveillance systems require lower fronthaul capacity compared to CF-mMIMO surveillance. Nevertheless, in our CF-mMIMO surveillance system we consider local processing, which strikes an excellent balance between the computational complexity, fronthaul limitations, and monitoring performance.]. Notation: We use bold upper case letters to denote matrices, and lower case letters to denote vectors. The superscripts (·)^*, (·)^T, and (·)^† stand for the conjugate, transpose, and conjugate-transpose (Hermitian), respectively. A zero-mean circular symmetric complex Gaussian distribution having a variance of σ^2 is denoted by 𝒞𝒩(0,σ^2), while 𝐈_ denotes the × identity matrix. Finally, 𝔼{·} denotes the statistical expectation. § SYSTEM MODEL   In this section, we introduce the CF-mMIMO surveillance system model for two different combining schemes. As shown in Fig. <ref>, we consider a surveillance scenario, where M MNs are employed to monitor K untrusted communication pairs. Let us denote the sets of MNs and untrusted communication pairs by ≜{1, …, M} and ≜{1,…,K}, respectively. Each UT and UR is equipped with a single antenna, while each MN is equipped with antennas. All MNs, UTs, and URs are half-duplex devices. We assume that all MNs are connected to the central processing unit (CPU) via fronthaul links. The MNs can switch between observing mode, where they receive untrusted messages, and jamming mode, where they send jamming signals to the URs. The assignment of each mode to its corresponding MN is designed to maximize the minimum MSP over all the untrusted links, as it will be discussed in Section <ref>. We use the binary variable a_m to show the mode assignment for each MN m, so that a_m≜ 1, if MN m operates in the jamming mode, 0, . Note that we consider block fading channels, where the fading envelope of each link stays constant during the transmission of a block of symbols and changes to an independent value in the next block. The jamming channel (observing channel) vector between the m-th MN and the k-th UR (k-th UT) is denoted by ∈ℂ^× 1 (∈ℂ^× 1), ∀ k ∈, m ∈, respectively. It is modelled as =√(), (=√()), where () is the large-scale fading coefficient and ∈ℂ^× 1 (∈ℂ^× 1) is the small-scale fading vector containing independent and identically distributed (i.i.d.) 𝒞𝒩 (0, 1) random variables (RVs). Furthermore, the channel gain between the ℓ-th UT and the k-th UR is h_ℓ k=()^1/2h̆_ℓ k, where is the large-scale fading coefficient and h̆_ℓ k represents small-scale fading, distributed as 𝒞𝒩(0,1). We note that h_kk models the channel coefficient of the k-th untrusted link spanning from the k-th UT to the k-th UR, ∀ k ∈. Finally, the channel matrix between MN m and MN i, ∀ m,i∈, is denoted by _mi∈ℂ^× where its elements, for i≠ m, are i.i.d. 𝒞𝒩(0,β_mi) RVs and _mm = 0, ∀ m. 
Note that the channels and may be estimated at the legitimate MN by overhearing the pilot signals sent by UT k and UR k, respectively <cit.>. By following <cit.>, for the minimum-mean-square-error (MMSE) estimation technique and the assumption of orthogonal pilot sequences, the estimates of and can be written as ∼𝒞𝒩(0,𝐈_) and ∼𝒞𝒩(0,𝐈_), respectively, where =τ_tρ_t()^2/τ_tρ_t+1 and =τ_tρ_t()^2/τ_tρ_t+1 with ρ_t and τ_t≥ 2K being the normalized transmit power of each pilot symbol and the length of pilot sequences, respectively. Since it is difficult (if not impossible) for the legitimate MNs to obtain the CSI of untrusted links, we assume that h_kℓ is unknown to the MNs. All the UTs simultaneously send independent untrusted messages to their corresponding URs over the same frequency band. The signal transmitted from UT k is denoted by x_k^ = √(ρ_) s_k^, where s_k^, with 𝔼{|s_k^|^2}=1, and ρ_ represent the transmitted symbol and the normalized transmit power at each UT, respectively. At the same time, the MNs in jamming mode intentionally send jamming signals to interrupt the communication links between untrusted pairs. This enforces the reduction of the achievable data rate at the URs, thereby enhancing the MSP. More specifically, the MNs operating in jamming mode use the MR transmission technique, also known as conjugate beamforming, in order to jam the reception of the URs. Note that MR is considered because it maximizes the strength of the jamming signals at the URs. Let us denote the jamming symbol intended for the untrusted link k by s_k^, which is a RV with zero mean and unit variance. When using MR precoding, the × 1 signal vector transmitted by MN m can be expressed as _m^ = a_m√(ρ_)∑_k ∈𝒦√(θ_mk)()^* s_k^, where ρ_ is the maximum normalized transmit power at each MN in the jamming mode. Moreover, θ_mk denotes the power allocation coefficient chosen to satisfy the practical power constraint 𝔼{_m^^2}≤ρ_ at each MN in jamming mode, which can be further expressed as a_m∑_k∈𝒦θ_mk≤1/, ∀ m. Accordingly, the signal received by UR k can be written as y_k^ = h_kkx_k^+ ∑_ℓ∈𝒦, ℓ≠ kh_ℓ kx_ℓ^ +√(ρ_)∑_m ∈ℳ∑_k'∈𝒦 a_m√(θ_mk')()^T()^* s_k'^+w_k^, where w_k^∼𝒞𝒩(0,1) is the additive white Gaussian noise (AWGN) at UR k. It is notable that the second term in (<ref>) represents the interference caused by other UTs due to their concurrent transmissions over the same frequency band and the third term quantities the interference emanating from the MNs in the jamming mode. The MNs in the observing mode, i.e., MNs with a_m=0, ∀ m, receive the transmit signals from all UTs. The received signal _m^∈ℂ^× 1 at MN m in the observing mode is expressed as _m^ = √(ρ_)∑_k∈𝒦(1-a_m)_mk^ s_k^ +√(ρ_)∑_i∈ℳ∑_ℓ∈𝒦a_i×    (1-a_m) √(θ_iℓ)_mi (_iℓ^)^*s_ℓ^ +(1-a_m)_m^, where _m^ is the 𝒞𝒩(0, 𝐈_) AWGN vector. We note from (<ref>) that if MN m does not operate in the observing mode, i.e., a_m=1, it does not receive any signal, i.e., _m^=0. Then, MN m in the observing mode performs linear combining by partially equalizing the received signal in (<ref>) using the combining vector as ()^†_m^. The resultant signal is then forwarded to the CPU for detecting the untrusted signals, where the receiver combiner sums up the equalized weighted signals. In particular, to enhance the observing capability, we assume that the forwarded signal is further multiplied by the MN-weighting coefficient , 0 ≤≤ 1, ∀ k, m. 
The aggregated received signal for UT k, ∀ k, at the CPU is r_k^ =∑_m=1^M ()^†_m^ = √(ρ _)∑_m ∈ℳ(1-a_m) ()^† s_k^ + ∑_ℓ∈𝒦\ k √(ρ _)∑_m ∈ℳ (1-a_m) ()^†s_ℓ^ + ∑_ℓ∈𝒦√(ρ _)∑_m ∈ℳ∑_i ∈ℳ(1-a_m)a_i√(θ_iℓ)()^†× _mi(ĝ_iℓ^)^∗s_ℓ^+∑_m ∈ℳ(1-a_m) ()^†_m^. Finally, the observed information s_k^ can be detected from r_k^. §.§ Combining Schemes For CF-mMIMO surveillance systems, when multiple untrusted pairs are spatially multiplexed, the linear receive combiner may harness the MMSE objective function (OF), albeit other OFs may also be harnessed. However, MMSE optimization relies on centralized processing and has high computational complexity as well as signaling load. As a potential remedy, low-complexity interference-agnostic combining schemes, such as MR, perform well, provided that each MN is equipped with a large number of antennas. But, the MR combiner does not perform well for two scenarios: 1) when no favorable propagation can be guaranteed between the untrusted users, namely when the MNs are only equipped with a few antennas; and 2) in interference-limited regimes, since MR is incapable of eliminating the inter-untrusted-user interference. In these cases, partial ZF-based combining outperforms MR due to its ability to deal with interference, while still being scalable. Therefore, in this paper, we consider both the partial ZF and MR combining schemes, which can be implemented in a distributed manner and do not require any instantaneous CSI exchange between the MNs and the CPU. 1) Maximum Ratio Combining: The simplest linear combining solution is the MR combining (i.e., matched filter) associated with ==, which has low computational complexity. MR combining maximizes the power of the desired observed signal, while retaining the system's scalability. In this case, we have r_k^ =∑_m=1^M ()^†_m^ = DS_k^ s_k^+∑_ℓ∈𝒦\ k UI_ℓ k ^s_ℓ^+∑_ℓ∈𝒦MI_ℓ k ^s_ℓ^+AN_k ^ where DS_k^ = √(ρ _)∑_m ∈ℳ(1-a_m) ()^†, UI_ℓ k ^= √(ρ _)∑_m ∈ℳ (1-a_m) ()^†, MI_ℓ k ^= √(ρ _)∑_m ∈ℳ∑_i ∈ℳ(1-a_m)a_i√(θ_iℓ)()^†_mi(ĝ_iℓ^)^∗, AN_k ^= ∑_m ∈ℳ(1-a_m) ()^†_m^, where DS_k ^, UI_ℓ k ^, and MI_ℓ k ^, represent the desired signal, cross-link interference caused by the transmission of ℓ-th UT, and inter-MN interference, respectively. Furthermore, AN_k ^ represents the additive noise. 1) Partial Zero-Forcing Combining: The MR combining does not perform well at high signal-to-noise ratios (SNRs), since it is incapable of eliminating the inter-untrusted user interference. For this reason, we now consider the PZF combining scheme, which has the ability to mitigate interference in a distributed and scalable manner while attaining a flexible trade-off between the interference mitigation and array gain <cit.>. Therefore, each MN m in observing mode virtually divides the UTs into two groups: ⊂{1, …, K}, which includes the index of strong UTs, and ⊂{1, …, K}, which hosts the index of weak UTs, respectively. The UT grouping can be based on diverse criteria, including the value of large-scale fading coefficient . Our proposed UT grouping strategy will be discussed in section <ref>. Here, our prime focus is on providing uniformly good monitoring performance over all untrusted pairs and hence MN m employs ZF combining for the UTs in and MR combining for the UTs in . In this case, the intra-group interference between UTs ∈ is actively cancelled, while the inter-group interference between UTs ∈ and UTs ∈ is tolerated. We note that the number of antennas at each MN must meet the requirement ≥ ||+1. 
The local combining vector constructed by MN m for UT k ∈ is given by ==[()^†]^-1_k, where is an N × || collective channel estimation matrix from all the UTs in to MN m as =[_mk^: k ∈] and _k is the k-th column of _K. Hence, for any pair of UTs k and ℓ∈ we have ()^† = if k=ℓ, 0 otherwise. Moreover, the MR combining vector constructed by MN m for UT k ∈ is given in (<ref>). Therefore, by applying ZF combining for UTs ∈ and MR combining for UTs ∈, (<ref>) can be rewritten as r_k^=∑_m ∈ M ()^†_m^ = DS_k ^ s_k^ +∑_ℓ∈𝒦\ k UI_ℓ k ^s_ℓ^ + ∑_ℓ∈𝒦MI_ℓ k ^s_ℓ^ + AN_k ^, where DS_k ^= √(ρ _)(∑_m ∈(1-a_m) ()^†+ ∑_m ∈(1-a_m) ()^†) UI_ℓ k ^= √(ρ _)(∑_m ∈ (1-a_m) ()^† + ∑_m ∈ (1-a_m) ()^†) MI_ℓ k ^= √(ρ _)∑_i ∈ℳa_i√(θ_iℓ)(∑_m ∈(1-a_m) ()^†_mi(ĝ_iℓ^)^∗+ ∑_m ∈(1-a_m) ()^†_mi(ĝ_iℓ^)^∗) AN_k ^= ∑_m ∈(1-a_m) ()^†_m^+ ∑_m ∈(1-a_m) ()^†_m^, where and denote the set of indices of MNs that assign the k-th UT into for ZF combining and the set of indices of MNs that assign k-th UT into for MR combining, respectively, as ≜{m: k ∈, m=1, …, M} and ≜{m: k ∈, m=1, …, M}, with ∩ =∅ and ∪ = ℳ. § PERFORMANCE ANALYSIS In this section, we derive the effective SINR of the untrusted communication links as well as the effective SINR for observing in conjunction with MR and PZF combining schemes. We also investigate the potential of using large number of MNs in either the observing or jamming mode to cancel the inter-untrusted user interference and to enhance the energy efficiency, respectively. §.§ Effective SINR of the Untrusted Communication Links We define the effective noise as w̃_k^ = √(ρ_)∑_ℓ∈𝒦, ℓ≠ kh_ℓ k s_ℓ^ +√(ρ_)∑_m ∈ℳ∑_k'∈𝒦 a_m× √(θ_mk')()^T()^* s_k'^+w_k^, and reformulate the signal received at UR k in (<ref>) as y_k^ = √(ρ_) h_kks_k^+w̃_k^. Since s_ℓ^ is independent of s_k^ for any ℓ≠ k, the first term of the effective noise in (<ref>) is uncorrelated with the first term in (<ref>). Moreover, the second and third terms of (<ref>) are uncorrelated with the first term of (<ref>). Therefore, the effective noise w̃_k^ and the input RV x_k^ are uncorrelated. Accordingly, we now obtain a closed-form expression for the effective SINR of the untrusted link k. The effective SINR of the untrusted link k can be formulated as _,k (, θ) = ρ_| h_kk|^2/ξ_k(,θ), where ξ_k(,θ) = ρ_∑_ℓ∈𝒦∖ k+ρ_∑_k'∈𝒦∑_m ∈ℳ a_mθ_mk' +ρ_^2(∑_m ∈ℳa_m √(θ_mk))^2+1, with ≜{a_m} and θ≜{θ_mk}, ∀ m,k, respectively. See Appendix <ref>. §.§ Effective SINR for Observing The CPU detects the observed information s_k^ from r_k^ in (<ref>). We assume that it does not have instantaneous CSI knowledge of the observing, jamming, and untrusted channels and uses only statistical CSI when performs detection. To calculate the effective SINR for the k-th untrsuted link, we use the popular bounding technique, known as the hardening bound or the use-and-then-forget (UatF) bound <cit.>[ This bound can be used for the scenarios, where the codeword spans over the time and frequency domains, i.e., across multiple coherence times and coherence bandwidths. This is practical and it is widely supported in the literature of ergodic rate and capacity analysis <cit.>.]. In particular, we first rewrite the aggregated received signal for UT k at the CPU as r_k^= 𝔼{DS_k^} s_k^+  BU_k ^ s_k^+∑_ℓ∈𝒦\ k UI_ℓ k ^s_ℓ^+∑_ℓ∈𝒦MI_ℓ k ^s_ℓ^+AN_k ^, where BU_k ^ =DS_k^- 𝔼{DS_k^} reflects the beamforming gain uncertainty, while the superscript “cs” refers to the “combining scheme”, = {, }. The CPU effectively encounters a deterministic channel (𝔼{DS_k^}) associated with some unknown noise. 
Since s_k and s_ℓ are uncorrelated for any ℓ≠ k, the first term in (<ref>) is uncorrelated with the third and forth terms. Additionally, since s_k is independent of BU_k, the first and second terms are also uncorrelated. The fifth term, i.e., the noise, is independent of the first term in (<ref>). Accordingly, the sum of the second, third, fourth, and fifth terms in (<ref>) can be collectively considered as an uncorrelated effective noise. Therefore, the received SINR of observing the untrusted link k can be formulated as _,k^ = |𝔼{DS_k ^}|^2/𝔼{|BU_k ^|^2}+∑_ℓ∈𝒦\ k 𝔼{|UI_ℓ k ^|^2}+∑_ℓ∈𝒦𝔼{|MI_ℓ k ^|^2}+𝔼{|AN_k ^|^2}. By calculating the corresponding expected values in (<ref>), the SINR observed for the untrusted link k for MR and PZF combining schemes can be obtained as in the following propositions. The received SINR for the k-th untrusted link at the CPU for MR combining is given by _,k^(,,θ)=ρ_(∑_m∈ℳ (1-a_m ) )^2 /μ_k^+ρ_∑_i∈ℳ∑_ℓ∈𝒦 a_iθ_iℓγ_iℓ^ϱ_i k^, with μ_k^≜ ∑_m∈ℳ(ρ_∑_ℓ∈𝒦^2 (1-a_m) β_mℓ^γ_mk^ +^2 (1-a_m) ), ϱ_i k^≜∑_m∈ℳ^2 (1-a_m) β_mi, where ={α_mk}, ∀ m ,k. See Appendix <ref>. The received SINR for the k-th untrusted link at the CPU for PZF combining is given by _,k^(,,θ)= ρ_(∑_m∈(1-a_m ) +∑_m∈(1-a_m ) )^2 /μ_k^+ ρ_∑_i∈ℳ∑_ℓ∈𝒦 a_iθ_iℓγ_iℓ^ϱ_i k^, with μ_k^≜ρ_∑_m∈∑_ℓ∈𝒦^2(1-a_m) (β_mℓ^- )/-|| +ρ_∑_m∈∑_ℓ∈𝒦^2(1-a_m) β_mℓ^ +∑_m∈^2(1-a_m)/-||+∑_m∈^2(1-a_m). ϱ_i k^≜ ∑_m∈^2(1-a_m) β_mi/-|| +∑_m∈^2(1-a_m) β_mi. See Appendix <ref>. §.§ Large-M Analysis In this subsection, we provide some insights into the performance of CF-mMIMO surveillance systems when the number M_ of MNs in observing mode or the number M_ of MNs in jamming mode is very large. The asymptotic results are presented for MR combining, while the same method and insights can be obtained for PZF combining. 1) Using Large Number of MNs in Observing Mode, M_→∞: Assume that the number of untrusted pairs, K, is fixed. For any finite M_, as M_→∞, we have the following results for the signal received at the CPU for observing UT k employing the MR combining. By using Tchebyshev’s theorem <cit.>, we obtain 1/M_DS_k ^ s_k ^-1/M_√(ρ _)∑_m ∈ℳN(1-a_m) s_k ^0, 1/M_∑_ℓ∈𝒦\ k UI_ℓ k^ s_ℓ^0, 1/M_AN_k ^0, 1/M_∑_ℓ∈𝒦MI_ℓ k ^ s_ℓ^0, where 0 shows convergence in probability when M_→∞. The above expressions show that when M_→∞, the observed signal includes only the desired signal. The monitoring performance can improve without limit by using more MNs in observing mode. 2) Using Large Number of MNs in Jamming Mode, M_→∞: Assume that the number of MNs in jamming mode goes to infinity, while transmit power of each MN in jamming mode is scaled with M_ according to ρ_= E_/M_, where E_ is fixed. The aggregated received signal expression in (<ref>) for the MR combining scheme shows that MI_ℓ k ^ is dependent on M_; however DS_k ^, UI_ℓ k ^, and AN_k ^ are constant with respect to M_. Now, let us assume that the number of untrusted pairs, K, is fixed. For any finite M_, when M_→∞ and ρ_= E_/M_, we have MI_ℓ k ^= √(E _/M_)∑_m ∈ℳ(1-a_m) ()^†_ℓ m , where _ℓ m = ∑_i ∈ℳa_i√(θ_iℓ)_mi(ĝ_iℓ^)^∗ = ∑_i ∈ℳa_i√(θ_iℓ)_mi(ĝ_iℓ^)^∗/√(∑_i ∈ℳa_iθ_iℓβ_mi‖ĝ_iℓ^‖^2)√(∑_i ∈ℳa_iθ_iℓβ_mi‖ĝ_iℓ^‖^2). Now, let use define _ℓ m≜∑_i ∈ℳa_i√(θ_iℓ)_mi(ĝ_iℓ^)^∗/√(∑_i ∈ℳa_iθ_iℓβ_mi‖ĝ_iℓ^‖^2). For given {ĝ_iℓ^}, _ℓ m is distributed on 𝒞𝒩 (0, _). Therefore, _ℓ m∼𝒞𝒩 (0, _) is independent of {ĝ_iℓ^}. Thus, we have _ℓ m =√(∑_i ∈ℳa_iθ_iℓβ_mi‖ĝ_iℓ^‖^2)_ℓ m. By using (<ref>) and Tchebyshev’s theorem, we obtain 1/√(M_)_ℓ m - √(1/M_∑_i ∈ℳ a_iθ_iℓβ_miγ_iℓ^)_ℓ m0. 
As a result, ∑_ℓ∈𝒦MI_ℓ k ^ s_ℓ^ -√(E_)∑_ℓ∈𝒦∑_m ∈ℳ (1-a_m)()^†× √(1/M_∑_i ∈ℳ a_iθ_iℓβ_miγ_iℓ^)_ℓ ms_ℓ^0. Expression in (<ref>) shows that for large M_, we can reduce the transmitted jamming power of each MN in jamming mode proportionally to 1/M_, while maintaining the given SINR for observing. At the same time, from (<ref>), by using again the Tchebyshev’s theorem, we can show that the SINR for the untrusted commnication links goes to 0, as M_ goes to infinity. This verifies the potential of using a large number of MNs in jamming mode to save power and, hence, enhance the energy efficiency of CF-mMIMO surveillance systems. §.§ Monitoring Success Probability To achieve reliable detection at UR k, UT k varies its transmission rate according to the prevalent _,k. In particular the k-th UR provides SINR feedback to the k-th UT concerning its perceived channel quality. Based on this feedback, the UT dynamically adapts its modulation and coding scheme. Higher SINR values allow for higher data rates, while lower SINR values necessitate lower data rates to maintain reliable communication. Hence, if _,k^≥_,k, the CPU can also reliably detect the information of the untrusted link k. On the other hand, if _,k^≤_,k, the CPU may detect this information at a high probability of error. Therefore, the following indicator function can be designed for characterizing the event of successful monitoring at the CPU <cit.> X^_k = 1 if _,k^≥_,k, 0 otherwise, where X^_k=1 and X^_k=0 indicate the monitoring success and failure events for the untrusted link k, respectively. Thus, a suitable performance metric for monitoring each untrusted communication link k is the MSP, 𝔼{X^_k}, defined as 𝔼{X^_k}= Pr (^_,k≥_,k). From (<ref>), (<ref>), and (<ref>) we have 𝔼{X^_k}=ℙ(| h_kk|^2≤^_,kξ_k/ρ_). Using the cumulative distribution function (CDF) of the exponentially distributed RV |h_kk|^2, the MSP of our CF-mMIMO surveillance system can be expressed in closed form as 𝔼{X^_k}=1-exp(-^_,kξ_k/ρ_). § MAX-MIN MSP OPTIMIZATION In this section, we aim for maximizing the lowest probability of successful monitoring by optimizing the MN-weighting coefficients , the observing and jamming mode assignment vector , and the power control coefficient vector under the constraint of the transmit power at each MN in (<ref>). More precisely, we formulate an optimization problem as {,,θ}max k∈𝒦min 𝔼{X^_k (,,θ) } s.t. a_m∑_k∈𝒦θ_mk≤1/,   m∈ℳ, θ_mk≥ 0,    m∈ℳ, k∈𝒦, 0≤≤ 1,    ∀ k, m, a_m∈{0,1},    m∈ℳ. By substituting (<ref>) into (<ref>), the optimization problem (<ref>) becomes {,,θ}max k∈𝒦min 1-exp(-^_,k(,,θ) ξ_k(,θ)/ρ_) s.t. a_m∑_k∈𝒦θ_mk≤1/,   m∈ℳ, θ_mk≥ 0,    m∈ℳ, k∈𝒦, 0≤≤ 1,    ∀ k, m, a_m∈{0,1},    m∈ℳ. By using the fact that 1-exp(-x) is a monotonically increasing function of x and since and ρ_ are fixed values independent of the optimization variables, the problem (<ref>) is equivalent to the following problem {,,θ}max k∈𝒦min ^_,k(,,θ)ξ_k(,θ)  s.t. (<ref>)-(<ref>). Problem (<ref>) has as a tight coupling of the MN-weighting coefficients , of the observing and jamming mode assignment vector , and of the power control coefficient vector . In particular, observe from (<ref>) and (<ref>), that in ^_,k, the power coefficients θ_iℓ are coupled with the mode assignment parameters a_i. Furthermore, the mode assignment parameters a_m are also coupled with the MN-weighting coefficients . Therefore, problem (<ref>) is not jointly convex in terms of , of the power allocation coefficients θ, and of the mode assignment . 
This issue makes the max-min MSP problem technically challenging, hence it is difficult to find its optimal solution. Therefore, instead of finding the optimal solution, we aim for finding a suboptimal solution. To this end, we conceive a heuristic greedy method for MN mode assignment, which simplifies the computation, while providing a significant successful monitoring performance gain. In addition, for a given mode assignment, the max-min MSP problem can be formulated as the following optimization framework: {,θ}max k∈𝒦min ^_,k(,θ)ξ_k(θ)  s.t. (<ref>)-(<ref>). Problem (<ref>) is not jointly convex in terms of and power allocation θ. To tackle this non-convexity issue, we cast the optimization problem (<ref>) into two sub-problems: the MN-weighting control problem and the power allocation problem. To obtain a solution for problem (<ref>), these sub-problems are alternately solved, as outlined in the following subsections. §.§ Greedy MN Mode Assignment for Fixed Power Control and MN-Weighting Control Let 𝒜_ and 𝒜_ denote the sets containing the indices of MNs in observing mode, i.e., MNs with a_m=0, and MNs in jamming mode, i.e., MNs with a_m=1, respectively. In addition, 𝔼{X^_k(𝒜_, 𝒜_)} presents the dependence of the MSP on the different choices of MN mode assignments. Our greedy algorithm of MN mode assignment is shown in Algorithm <ref>. All MNs are initially assigned to observing mode, i.e., 𝒜_=ℳ and 𝒜_=∅. Then, in each iteration, one MN switches into jamming mode for maximizing the minimum MSP (<ref>) among the untrusted links, until there is no more improvement. §.§ Power Control for Fixed MN Mode Assignment and MN-Weighting Control For the given MN mode assignment and MN-weighting coefficient control, the optimization problem (<ref>) reduces to the power control problem. Using (<ref>), (<ref>), (<ref>) and (<ref>), the max-min MSP problem is now formulated as max _θ min _∀ k∈𝒦ξ_k(θ)/μ_k^+ρ_∑_i∈ℳ∑_ℓ∈𝒦 a_iθ_iℓγ_iℓ^ϱ_i k^. s.t. (<ref>), (<ref>). By introducing the slack variable ζ, we reformulate (<ref>) as   max_{θ, ζ} ζ s.tρ_∑_i∈ℳ∑_ℓ∈𝒦 a_iθ_iℓγ_iℓ^ϱ_i k^ - 1/ζξ_k(θ) +μ_k^≤ 0,   ∀ k∈𝒦.  (<ref>), (<ref>). To arrive at a computationally more efficient formulation, we use the inequality (∑_m ∈ℳ√(θ_mk))^2 ≥∑_m ∈ℳθ_mk()^2 and replace the constraint (<ref>) by ρ_∑_i∈ℳ∑_ℓ∈𝒦 a_iθ_iℓγ_iℓ^ϱ_i k^ - 1/ζξ̃_k(θ) +μ_k^≤ 0, where ξ̃_k(θ) =ρ_∑_ℓ∈𝒦∖ k+ρ_∑_k'∈𝒦∑_m ∈ℳ a_mθ_mk' +ρ_^2∑_m ∈ℳa_mθ_mk()^2 + 1. Now, for a fixed ζ, all the inequalities appearing in (<ref>) are linear, hence the program (<ref>) is quasi–linear. Since the second constraint in (<ref>) is an increasing functions of ζ, the solution to the optimization problem is obtained by harnessing a line-search over ϱ_ik^ to find the maximal feasible value. As a consequence, we use the bisection method in Algorithm 2 to obtain the solution. §.§ MN-Weighting Control for Fixed MN Mode Assignment and Power Control The received SINR at the URs is independent of the MN-weighting coefficients . Therefore, the coefficients can be obtained by independently maximizing the received SINR of each untrusted link k at the CPU. Therefore, the optimal MN-weighting coefficients for all UTs for the given transmit power allocations and mode assignment, can be found by solving the following problem: max ^_,k()  s.t.0≤≤ 1,    ∀ k, m. Let us introduce a pair of binary variables to indicate the group assignment for each UT k and MN m in our PZF combining scheme as ^ = 1 if m ∈, 0 otherwise, ^ = 1 if m ∈, 0 otherwise. 
Then, to solve (<ref>), we use the following proposition: The optimal MN-weighting coefficient vector, maximizing the SINR observed for the k-th untrusted link can be obtained as _k^⋆=(_1k^,…,_Mk^)^-1_k^, where _mk^=u_mk^+ ρ_∑_i∈ℳ∑_ℓ∈𝒦 a_iθ_iℓγ_iℓ^v_m i k^, _k^=[c_1k^,⋯,c_Mk^] with elements c_mk^ = (1-a_m)√()γ_mk and c_mk^ = (1-a_m)γ_mk if m ∈, (1-a_m)γ_mk if m ∈, u_m k^= ρ_∑_ℓ∈𝒦 (1-a_m) β_mℓ^γ_mk^ + (1-a_m) , v_m i k^= (1-a_m) β_mi, u_mk^=(1-a_m) (ρ_∑_ℓ∈𝒦^(β_mℓ^- )/-||+ρ_× ∑_ℓ∈𝒦^β_mℓ^ +^/-||+^), v_m i k^= ^ (1-a_m) β_mi/-||+^ (1-a_m) β_mi. The proof follows from <cit.> by noting that the observed SINR in (<ref>) (the observed SINR in (<ref>)) can be written as a generalized Rayleigh quotient with respect to α_k^ (α_k^) and thus be solved by a generalized eigenvalue decomposition. Therefore, by combining the two sub-problems in (<ref>) and (<ref>), we develop an iterative algorithm by alternately solving each sub-problem at each iteration, as summarized in Algorithm <ref>. §.§ Complexity and Convergence Analysis Here, we quantifying the computational complexity of solving the max-min MSP optimization problem (<ref>), which involves the proposed greedy MN mode assignment Algorithm <ref> and the proposed iterative Algorithm <ref> to solve the power allocation and MN-weighting coefficient optimization problem (<ref>). It is easy to show that the complexity of calculating 𝔼{X^_k} is on the order of 𝒪(M^2K). Therefore, the complexity of the proposed Algorithm <ref> is up to M(M+1)/2𝒪(M^2K). Now, we analyze the computational complexity of Algorithm <ref>, which solves the max-min MSP power optimization problem (<ref>) by using a bisection method along with solving a sequence of linear feasibility problems based on Algorithm <ref> and the generalized eigenvalue problem (<ref>) at each iteration. The total number of iterations required in Algorithm <ref> is log_2(ζ_max-ζ_min/ϵ). Furthermore, the optimization problem (<ref>) involves C_l≜ M(K+1) linear constraints and C_v≜ MK real-valued scalar variables. Therefore, solving the power allocation by Algorithm <ref> requires a complexity of log_2(ζ_max-ζ_min/ϵ)𝒪(C_v^2√(C_l)(C_v+C_l)). In addition, for the MN-weighting coefficient design in (<ref>), an eigenvalue solver imposes approximately 𝒪(KM^3) flops <cit.>. The convergence of the objective function in the proposed iterative Algorithm <ref> can be charachterized as follows. To solve problem (<ref>), two sub-problems are alternately solved so that at each iteration, one set of design parameters is obtained by solving the corresponding sub-problem, while fixing the other set of design variables. More specifically, at each iteration, the power allocation coefficient set θ^⋆ is calculated for the given MN-weighting coefficient set and then the MN-weighting coefficient set ^⋆ is calculated for the given θ=θ^⋆. For the next iteration is updated as = ^⋆. The power allocation θ^⋆ obtained for a given results an MSP greater than or equal to that of the previous iteration. We also note that the power allocation solution at each iteration i is also a feasible solution in calculating the power allocation in the next iteration i+1 due to the fact that the MN-weighting coefficient in iteration i+1 is derived for the power allocation coefficient given by iteration i. Therefore, Algorithm <ref> results in a monotonically increasing sequence of the objectives. 
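Before moving to the numerical evaluation, the greedy observing/jamming mode assignment described above can be summarized by the following sketch. The helper min_msp_for_assignment is a placeholder assumed only for this illustration: it returns the worst-link MSP of a candidate split, for example computed with fixed power control and MN-weighting coefficients or with the alternating updates described in this section.

def greedy_mode_assignment(mn_indices, min_msp_for_assignment):
    """Greedy observing/jamming split (sketch of the proposed procedure).

    mn_indices: iterable of MN indices.
    min_msp_for_assignment(A_O, A_J): placeholder returning the minimum MSP
        among the untrusted links for a candidate split of observing (A_O)
        and jamming (A_J) MNs.
    """
    A_O, A_J = set(mn_indices), set()   # all MNs start in observing mode
    best = min_msp_for_assignment(A_O, A_J)
    improved = True
    while improved and A_O:
        improved = False
        # Try switching each observing MN to jamming mode; keep the best switch.
        candidate = max(A_O, key=lambda m: min_msp_for_assignment(A_O - {m}, A_J | {m}))
        value = min_msp_for_assignment(A_O - {candidate}, A_J | {candidate})
        if value > best:                # switch only if the worst link improves
            A_O.remove(candidate)
            A_J.add(candidate)
            best, improved = value, True
    return A_O, A_J, best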
§ NUMERICAL RESULTS In this section, numerical results are presented for studying the performance of the proposed CF-mMIMO surveillance system using the PZF and MR combining schemes as well as for verifying the benefit of our max-min MSP optimization framework. We firstly introduce our approach for UT grouping in the PZF combining scheme. §.§ UT Grouping When the number of antennas per MN is sufficiently large, full ZF combining offers excellent performance <cit.>. Therefore, each MN in observing mode employs the ZF combining scheme for all untrusted links and we set = 𝒦 and = ∅ when ≥ K+1. Otherwise, in each iteration, we assign UT k having minimum MSP to , ∀ m, until there is no more improvement in the minimum MSP among the untrusted links, as summarized in Algorithm <ref>. §.§ Simulation Setup and Parameters We consider a CF-mMIMO surveillance system, where the MNs and UTs are randomly distributed in an area of D × D km^2 having wrapped around edges to reduce the boundary effects. Unless otherwise stated, the size of the network is D=1 km. Furthermore, each UR k is randomly located in a circle with radius 150 m around its corresponding transmitter, UT k. Moreover, we set the channel bandwidth to B=50 MHz and τ_t=2K. The maximum transmit power for training pilot sequences, each MN, and each UT is 250 mW, 1 W, and 250 mW, respectively, while the corresponding normalized maximum transmit powers ρ_t, ρ_, and ρ_ can be calculated upon dividing these powers by the noise power of σ^2_n=-92 dBm. The large-scale fading coefficient β_mk is represented by β_mk =10^PL_mk/10 10^σ _sh y_mk/10, where the first term models the path loss, and the second term models the shadow fading with standard deviation σ_sh = 4 dB, and y_mk∼𝒞𝒩(0, 1), respectively. Let us denote the distance between the m-th MN and the k-th user by d_mk. Then, PL_mk (in dB) is calculated as <cit.> PL_mk=-L-35log_10(d_mk), d_mk > d_1, -L-15log _10(d_1)-20 log_10(d_mk),  d_0 < d_mk≤ d_1, -L-15 log_10(d_1)-20 log _10(d_0), d_mk≤ d_0, with L = 46.3 + 33.9 log_10(f)-13.82 log_10(h_MN )- (1.1 log_10(f)- 0.7)h_U + (1.56 log_10(f)- 0.8), where f is the carrier frequency (in MHz), h_MN and h_U denote the MN antenna height (in m) and untrusted user height (in m), respectively. In all examples, we choose d_0 = 10 m, d_1 = 50 m, h_MN = 15 m and h_U = 1.65 m. These parameters resemble those in <cit.>. Similarly, the large-scale fading coefficient β_ℓ k between the ℓ-th UT and k-th UR can be modelled by a change of indices in (<ref>) and (<ref>). §.§ Performance Evaluation 1) Performance of the Proposed Greedy Mode Assignment and Greedy UT Grouping: Here, we investigate the performance of the proposed greedy mode assignment in Algorithm <ref> and greedy UT grouping Algorithm <ref> for PZF and MR combining schemes. We benchmark 1) random mode assignment, 2) UT grouping based on the value of large-scale fading coefficient (LSF-based UT grouping), so that when > K all UTs are assigned to for ZF combining, ∀ m, and when ≤ K at MN m a UT with the smallest value of is assigned into the group for ZF combining and the remaining UTs are assigned into the group for MR combining. Figure <ref> illustrates the minimum MSP achieved by the CF-mMIMO surveillance system for different number of MNs M. In this initial evaluation, the setup consists in D = 1 km, = 12, and K = 20. Our results verify the advantage of the proposed greedy mode assignment over random mode assignment. 
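For completeness, the three-slope path-loss and log-normal shadowing model used above to generate the large-scale fading coefficients can be reproduced with the short sketch below; the carrier frequency f is left as an input (its value is not repeated here), and the shadow-fading variable is drawn as a real standard Gaussian.

import numpy as np

def path_loss_db(d, f, h_mn=15.0, h_u=1.65, d0=10.0, d1=50.0):
    """Three-slope path loss in dB; d in meters, f in MHz."""
    L = (46.3 + 33.9 * np.log10(f) - 13.82 * np.log10(h_mn)
         - (1.1 * np.log10(f) - 0.7) * h_u + (1.56 * np.log10(f) - 0.8))
    if d > d1:
        return -L - 35.0 * np.log10(d)
    if d > d0:
        return -L - 15.0 * np.log10(d1) - 20.0 * np.log10(d)
    return -L - 15.0 * np.log10(d1) - 20.0 * np.log10(d0)

def large_scale_fading(d, f, sigma_sh_db=4.0, rng=None):
    """beta = 10^(PL/10) * 10^(sigma_sh * y / 10) with y a standard Gaussian."""
    rng = np.random.default_rng() if rng is None else rng
    y = rng.standard_normal()   # shadow-fading variable (real N(0,1) assumed here)
    return 10.0 ** (path_loss_db(d, f) / 10.0) * 10.0 ** (sigma_sh_db * y / 10.0)

With this propagation model in place, the comparison above can be quantified in more detail.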
More specifically, when M=30, greedy mode assignment provides performance gains of around 245% and 325% with respect to random mode assignment for the system relying on PZF combining and MR combining, respectively. This remarkable performance gain verifies the importance of an adequate mode selection in terms of monitoring performance in our CF-mMIMO surveillance system. Additionally, compared to the LSF-based grouping scheme, our proposed UT grouping provides up to an additional 100% improvement in terms of MSP. This is reasonable because, PZF combining employing our proposed UT grouping can achieve an attractive balance between mitigating the interference and increasing the array gain. In the next figures, we present results for the scenarios associated with greedy mode assignment and greedy UT grouping. 2) Performance of the Proposed Max-Min MSP: Now, we examine the efficiency of proposed Max-Min MSP using power control and MN-weighting coefficient control provided by Algorithm <ref> for the PZF and MR combining schemes. Our numerical results (not shown here) demonstrated that Algorithm <ref> converges quickly, and hence in what follows we set the maximum number of iterations to I = 2 for Algorithm <ref>. Figure <ref> presents the minimum MSP of the CF-mMIMO surveillance system for different numbers of antennas per MN for systems having the same total numbers of service antennas, i.e., _𝚝𝚘𝚝= M = 240, but different number of MNs. We investigate three cases: case-1) equal power allocation and equal MN-weighting coefficient control, case-2) proposed power control but no optimal MN-weighting coefficient (α_mk = 1 and θ_mk is calculated from Algorithm <ref>), case-3) power control and optimal MN-weighting coefficient control Algorithm <ref>. The main observations that follow from these simulations are as follows: * The max-min MSP power control and MN-weighting coefficient control enhance the system performance significantly for both the PZF and MR combining schemes. In particular, for the PZF combining scheme, compared to the case-1, i.e., equal power control and equal MN-weighting coefficient control, the power control provides a performance gain of up to 35 %, while the power control together with the MN-weighting coefficient control can provide a performance gain of up to 43%. This highlights the advantage of our proposed solution, which becomes more pronounced for the PZF combining scheme. * The monitoring performance gap between the MR combining and the PZF combining is quite significant. In particular, when =12, applying PZF combining leads to 50% improvement in terms of MSP with respect to the MR combining scheme. The reason is two-fold: Firstly, the ability of the PZF combining to cancel the cross-link interference; Secondly, the proposed power control and MN-weighting coefficient control along with the UT grouping scheme can notably enhance the monitoring performance of weak UTs. We also note that, the performance gap between PZF and MR combining schemes increases upon increasing . The intuitive reason is that for a fixed total number of antennas, when the number of antennas per MN increases, the number of MN reduces. For a low number of MNs, the cross-link interference becomes dominant, which significantly degrades the overall performance of the system relying on MR combining. * When increases, the performance of the MR and PZF combining schemes deteriorates. 
This is due to the fact that increasing and accordingly decreasing M has two effects on the MSP, namely, (i) increases the diversity and array gains (a positive effect), and (ii) reduces the macro-diversity gain and increases the path loss due to an increase in the relative distance between the MNs and the untrusted pairs (a negative effect). The latter effect becomes dominant, which leads to a degradation in the monitoring performance. 3) CF-mMIMO versus Co-located Surveillance System: Now, we compare the MSP of the CF-mMIMO against that of a co-located FD massive MIMO system. The co-located FD massive MIMO surveillance system can be considered as a special case of the CF-mMIMO system, where all M MNs are co-located as an antenna array, which simultaneously performs observation and jamming at the same frequency. Therefore, the effective SINR of the untrusted link and the effective SINR for observing at the CPU can be obtained by setting =β_ik^=β_k^, =β_ik^=β_k^, =γ_ik^=γ_k^, =γ_ik^=γ_k^, β_mi=σ^2_, ∀ m, i, k in Propositions <ref>, <ref>, and <ref>, respectively. Here, σ^2_ reflects the strength of the residual self interference after employing self-interference suppression techniques <cit.>. Recall that in our CF-mMIMO surveillance system all MNs operate in half-duplex mode, hence there is no self interference at each MN. For fair comparison with the CF-mMIMO system, the co-located system deploys the same total number of antennas, i.e., =_𝚝𝚘𝚝/2= M/2 antennas are used for observing, while antennas are used for jamming, which is termed as “an antenna-preservation" condition <cit.>. Accordingly, the effective SINR of the untrusted link k for FD co-located massive MIMO systems can be written as _,k^ (θ) = ρ_| h_kk|^2/ξ_k(θ), where ξ_k(θ) = ρ_∑_ℓ∈𝒦∖ k+ρ_∑_k'∈𝒦θ_k'β_k^γ_k'^+ρ_^2θ_k(γ_k^)^2+1. Additionally, the received SINR of the k-th untrusted link at the CPU for MR combining in our FD co-located massive MIMO system is given by _,k^,(θ)=ρ_ (γ_k^)^2 /μ_k^,+ ρ_∑_ℓ∈𝒦θ_ℓγ_ℓ^ϱ_k^,, with μ_k^,= ρ_∑_ℓ∈𝒦β_ℓ^γ_k^ +γ_k^, and ϱ_k^,= γ_k^σ^2_, while the received SINR for the k-th untrusted link for full ZF combining and ≥ K+1 is given by _,k^,(θ)= ρ_ (γ_k^)^2 /μ_k^,+ ρ_∑_ℓ∈𝒦θ_ℓγ_ℓ^ϱ_k^,, with μ_k^,=ρ_∑_ℓ∈𝒦γ_k^(β_ℓ^- γ_ℓ^)/-K + γ_k^/-K and ϱ_k^,= γ_k^σ^2_/-K. For co-located massive MIMO systems, unless otherwise stated, we assume that the residual self interference after employing a self-interference suppression technique is σ^2_ /σ_n^2=30 dB[The strength of the σ^2_ /σ_n^2 after employing employing self-interference suppression technique is typically in the range of 30 dB to 100 dB <cit.>. Therefore, the performance of co-located massive MIMO with σ^2_ /σ_n^2=30 dB can be regarded as an upper bound.]. In addition, we adopt a similar power control principle as in Algorithm <ref>. Figure <ref> shows the minimum MSP versus the size of the area, D. It can be observed that the CF-mMIMO surveillance system significantly outperforms its co-located massive MIMO counterpart. As expected, the relative performance gap between CF-mMIMO and co-located surveillance systems dramatically escalates with the size of the area D. For example, for D=0.75 km, the CF-mMIMO system provides around 5-fold improvement in the minimum MSP performance over the co-located system, while the improvement reaches a 40-fold value for D=1 km. This highlights the effectiveness of our optimized CF-mMIMO surveillance scheme for proactive monitoring systems. 
The reason is the capability of the CF-mMIMO to surround each UT and each UR by relying on MNs operating in observing and jamming mode, respectively. Additionally, in contrast to our CF-mMIMO with distributed MNs, a co-located massive MIMO suffers from excessive self-interference due to the short distance between the transmit and receive antennas of a single large MN. 4) Effect of the Number of MNs: Figure <ref> shows the minimum MSP versus the number of MNs, M. For our CF-mMIMO system using the MR and PZF combining schemes, when the number of MNs increases, the macro-diversity gain increases, and hence the MSP enhances. Upon increasing M, for co-located massive MIMO the number of transmit and receive antennas increases, which results in a higher array gain and monitoring performance. In particular, we can see that the surveillance systems can benefit much more from the higher macro-diversity gain in a CF-mMIMO network, rather than from the higher array gain attained in co-located networks. The results shown both in Fig. <ref> as well as in Fig. <ref> clearly suggest that having a high degree of macro diversity and low path loss are crucial for offering a high MSP and corroborate that CF-mMIMO is well-suited for the surveillance of networks in wide areas. 5) Effect of the Number of Untrusted Links: Next, in Fig. <ref>, we investigate the impact of the number of untrusted links on the MSP performance of both CF-mMIMO and of co-located massive MIMO systems. Herein, we also consider co-located massive MIMO having perfect SI cancellation and half-duplex co-located massive MIMO with no jamming. We observe that upon increasing K, the monitoring performance of all schemes deteriorates. Nevertheless, the CF-mMIMO system using the PZF combining scheme still yields excellent MSP compared to the other schemes. We also note that for small values of K, MR combining provides a better performance/implementation complexity trade-off compared to its PZF counterpart. However, increasing the number of untrusted pairs results in stronger cross-link interference. Therefore, PZF combining having the ability to cancel the cross-link interference is undoubtedly a better choice. Interestingly, we can observe that even under the idealized assumption of having perfect SI cancellation in co-located massive MIMO, CF-mMIMO surveillance still significantly outperforms the co-located massive MIMO. This result shows that the proposed CF-mMIMO surveillance system, relying on the proposed power control, MN-weighting coefficient control, and suitable mode assignment, yields an impressive monitoring performance in multiple untrusted pair scenarios. § CONCLUSIONS We have developed a CF-mMIMO surveillance system for monitoring multiple distributed untrusted pairs and analyzed the performance of both MR and PZF combining schemes. We proposed a new long-term-based optimization technique of designing the MN mode assignment, power control for the MNs that are in jamming mode, and MN-weighting coefficients to maximize the min MSP across all untrusted pairs under practical transmit power constraints. We showed that our CF-mMIMO surveillance systems provide significant monitoring gains over conventional co-located massive MIMO, even for relatively small number of MNs. In particular, the minimum MSP of CF-mMIMO surveillance is an order of magnitude higher than that of the co-located massive MIMO system, when the untrusted pairs are spread out over a large area. 
The results also show that with different network setups, PZF combining provides the highest MSP, while for small values of the number of untrusted pairs and the size of the area, MR combining constitutes a beneficial choice. § PROOF OF PROPOSITION <REF>   To derive a closed-form expression for the effective SINR of the untrusted link, we have to calculate ξ_k(,θ)=𝔼{|w̃_k^|^2}. Let us denote the jamming channel estimation error by ε_mk^J=-. Therefore, we have ξ_k(,θ)=𝔼{|w̃_k^|^2}= ρ_∑_ℓ∈𝒦∖ k+ρ_ℐ+1, where ℐ≜ ∑_k'∈𝒦𝔼{| ∑_m ∈ℳ a_m√(θ_mk') ()^T()^* |^2}. To calculate ℐ, owing to the fact that the variance of a sum of independent RVs is equal to the sum of the variances, we have ℐ(a)=∑_k'∈𝒦∖ k𝔼{| ∑_m ∈ℳ a_m√(θ_mk')()^T()^*|^2} + 𝔼{|∑_m ∈ℳ a_m √(θ_mk)(ε_mk^J+)^T()^*|^2} (b)=∑_k'∈𝒦∖ k∑_m ∈ℳa_m θ_mk'𝔼{()^T𝔼{()^*()^T}()^*} +∑_m ∈ℳa_mθ_mk(𝔼{‖‖^4}+𝔼{|(ε_mk^J)^T()^*|^2}) +∑_m ∈ℳ∑_n ∈ℳ∖ ma_m a_n√(θ_mkθ_nk)𝔼{‖‖^2}𝔼{‖‖^2} (c)=∑_k'∈𝒦∖ k∑_m ∈ℳ a_mθ_mk' + ∑_m ∈ℳa_mθ_mk(× +)+^2∑_m ∈ℳ∑_n ∈ℳ∖ ma_m a_n√(θ_mkθ_nk) =∑_k'∈𝒦∑_m ∈ℳ a_mθ_mk' +^2(∑_m ∈ℳa_m √(θ_mk))^2, where (a) follows from the fact that has zero mean and it is independent of and , (b) follows from the fact that ε_mk^J is independent of and it is a zero-mean RV and (c) follows from the fact that 𝔼{‖‖^4}=(+1)()^2. Substituting (<ref>) into (<ref>) completes the proof. § PROOF OF PROPOSITION <REF>   Let us denote the observing channel estimation error, which is independent of _mk ^ and zero-mean RV, by ε_mk^=_mk ^- _mk ^. According to = and by exploiting the independence between the channel estimation errors and the estimates, we have 𝔼{DS_k ^} =√(ρ _)∑_m ∈ℳ(1-a_m) 𝔼{()^†(+ ε_mk ^)} = √(ρ _)∑_m ∈ℳ (1-a_m) . Additionally, 𝔼{|BU_k ^|^2} can be written as 𝔼{|BU_k ^|^2} = ρ_∑_m ∈ℳ^2(1-a_m) 𝔼{ |()^†-𝔼{()^†}|^2} = ρ_∑_m ∈ℳ^2(1-a_m) (𝔼{|()^†ε_mk^+‖‖^2|^2} -^2()^2) (a)=ρ _∑_m ∈ℳ^2(1-a_m) (𝔼{ |()^†ε_mk^|^2} +𝔼{‖‖^4}- ^2()^2) (b)=ρ _∑_m ∈ℳ^2 (1-a_m) β _mk ^, where we have exploited that: in (a) ε_mk^ is independent of _mk ^ and a zero-mean RV; in (b) 𝔼{‖‖^4}=(+1)()^2. By exploiting the fact that is independent of for k ≠ℓ, while , _mi, and ĝ_iℓ^ are independent, we can formulate 𝔼{|UI_ℓ k ^|^2} and 𝔼{|MI_ℓ k ^|^2}, respectively, as 𝔼{|UI_ℓ k ^|^2}= ρ _∑_m ∈ℳ^2(1-a_m) β _mℓ^, 𝔼{|MI_ℓ k ^|^2} =ρ _∑_m ∈ℳ∑_i ∈ℳ^2 (1-a_m) a_iθ_iℓ𝔼{ |(ĝ_mk ^)^H𝐅_mi(ĝ_iℓ^)^∗|^2} = ρ _∑_m ∈ℳ∑_i ∈ℳ^2^2 (1-a_m) a_iθ_iℓβ_miγ_iℓ^. The substitution of (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>) and inserting 𝔼{|AN_k ^|^2}=∑_m ∈ℳ^2(1-a_m), yields (<ref>). § PROOF OF PROPOSITION <REF>   According to (<ref>) and due to the fact that ε_mk ^ has zero mean and is independent of , 𝔼{DS_k ^} in the numerator of (<ref>) can be calculated as 𝔼{DS_k ^}=√(ρ _)(∑_m ∈(1-a_m) 𝔼{()^†(+ε_mk ^)} +∑_m ∈(1-a_m) 𝔼{()^†( + ε_mk ^)}) = √(ρ _)(∑_m ∈ (1-a_m) + ∑_m ∈ (1-a_m) ). Then, 𝔼{|BU_k ^|^2} can be written as 𝔼{|BU_k ^|^2} = ρ_𝔼{|∑_m ∈(1-a_m) ()^†+ ∑_m ∈(1-a_m) ()^†|^2} - ρ_|𝔼{∑_m ∈ (1-a_m) ()^†+∑_m ∈(1-a_m) ()^†}|^2 = ρ_ℐ_2- |𝔼{DS_k ^}|^2, where ℐ_2 = 𝔼{|∑_m ∈(1-a_m) ()^†+ ∑_m ∈(1-a_m) ()^†|^2}. By applying 𝔼{‖‖^2} =()^2𝔼{‖(()^†)^-1_k‖^2} =/-||, which follows from <cit.>, we have ℐ_2 (a)=(∑_m ∈(1-a_m))^2+∑_m∈^2(1-a_m) × (-)/-||+𝔼{|∑_m ∈(1-a_m) ()^†}|^2} +2(∑_m ∈(1-a_m))(∑_m ∈(1-a_m)). Also, the third term of ℐ_2 can be calculated as 𝔼{|∑_m ∈(1-a_m) ()^†}|^2}   = ∑_m ∈^2(1-a_m) 𝔼{|()^†|^2} + |∑_m ∈(1-a_m) 𝔼{()^†}|^2 - ∑_m ∈(1-a_m) |𝔼{()^†}|^2    =∑_m ∈^2(1-a_m) + (∑_m ∈(1-a_m))^2. 
Substituting (<ref>) into (<ref>) and then (<ref>) and (<ref>) into (<ref>) yields 𝔼{|BU_k ^|^2}= ρ_∑_m∈^2(1-a_m) (-)/-||+ ρ_∑_m ∈^2(1-a_m). Similarly, we compute UI_ℓ k ^ as 𝔼{|UI_ℓ k ^|^2} = ρ_∑_m∈^2(1-a_m) (-)/-|| +ρ_∑_m ∈^2(1-a_m). It can be shown that ()^†_mi(ĝ_iℓ^)^∗ is a zero-mean RV with variance β_miγ_iℓ^/-||. Moreover, ()^†_mi(ĝ_iℓ^)^∗ is a zero-mean RV with variance ^2β_miγ_iℓ^. Therefore, we can formulate 𝔼{|MI_ℓ k ^|^2} as 𝔼{|MI_ℓ k ^|^2} = ρ _∑_i ∈ℳa_iθ_iℓ(∑_m ∈^2 (1-a_m) 𝔼{ |()^†𝐅_mi(ĝ_iℓ^)^∗|^2} +∑_m ∈^2 (1-a_m) 𝔼{ |()^†𝐅_mi(ĝ_iℓ^)^∗|^2}) =ρ _∑_i ∈ℳ a_iθ_iℓγ_iℓ^(∑_m∈^2(1-a_m) β_mi/-|| +∑_m∈^2(1-a_m) β_mi). Finally, by exploiting the fact that the noise and the channel estimate are independent, 𝔼{|AN_k ^|^2} can be written as 𝔼{|AN_k ^|^2}= 𝔼{|∑_m ∈(1-a_m) ()^†_m^|^2} +𝔼{|∑_m ∈(1-a_m) ()^†_m^|^2} = ∑_m∈^2(1-a_m)/-||+∑_m∈^2(1-a_m). The substitution of (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>) yields (<ref>). IEEEtran [ < g r a p h i c s > ]Zahra Mobini received the B.S. degree in electrical engineering from Isfahan University of Technology, Isfahan, Iran, in 2006, and the M.S and Ph.D. degrees, both in electrical engineering, from the M. A. University of Technology and K. N. Toosi University of Technology, Tehran, Iran, respectively. From November 2010 to November 2011, she was a Visiting Researcher at the Research School of Engineering, Australian National University, Canberra, ACT, Australia. She is currently a Post-Doctoral Research Fellow at the Centre for Wireless Innovation (CWI), Queen's University Belfast (QUB). Before joining QUB, she was an Assistant and then Associate Professor with the Faculty of Engineering, Shahrekord University, Shahrekord, Iran (2015-2021). Her research interests include physical-layer security, massive MIMO, cell-free massive MIMO, full-duplex communications, and resource management and optimization. She has co-authored many research papers in wireless communications. She has actively served as the reviewer for a variety of IEEE journals, such as TWC, TCOM, and TVT. [ < g r a p h i c s > ] Hien Quoc Ngo is currently a Reader with Queen's University Belfast, U.K. His main research interests include massive MIMO systems, cell-free massive MIMO, reconfigurable intelligent surfaces, physical layer security, and cooperative communications. He has co-authored many research papers in wireless communications and co-authored the Cambridge University Press textbook Fundamentals of Massive MIMO (2016). He received the IEEE ComSoc Stephen O. Rice Prize in 2015, the IEEE ComSoc Leonard G. Abraham Prize in 2017, the Best Ph.D. Award from EURASIP in 2018, and the IEEE CTTC Early Achievement Award in 2023. He also received the IEEE Sweden VT-COM-IT Joint Chapter Best Student Journal Paper Award in 2015. He was awarded the UKRI Future Leaders Fellowship in 2019. He serves as the Editor for the IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, the Digital Signal Processing, and the Physical Communication (Elsevier). He was a Guest Editor of IET Communications, and a Guest Editor of IEEE ACCESS in 2017. [ < g r a p h i c s > ] Michail Matthaiou(Fellow, IEEE) was born in Thessaloniki, Greece in 1981. He obtained the Diploma degree (5 years) in Electrical and Computer Engineering from the Aristotle University of Thessaloniki, Greece in 2004. He then received the M.Sc. (with distinction) in Communication Systems and Signal Processing from the University of Bristol, U.K. and Ph.D. 
degrees from the University of Edinburgh, U.K. in 2005 and 2008, respectively. From September 2008 through May 2010, he was with the Institute for Circuit Theory and Signal Processing, Munich University of Technology (TUM), Germany working as a Postdoctoral Research Associate. He is currently a Professor of Communications Engineering and Signal Processing and Deputy Director of the Centre for Wireless Innovation (CWI) at Queen’s University Belfast, U.K. after holding an Assistant Professor position at Chalmers University of Technology, Sweden. His research interests span signal processing for wireless communications, beyond massive MIMO, intelligent reflecting surfaces, mm-wave/THz systems and deep learning for communications. Dr. Matthaiou and his coauthors received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize in 2017. He currently holds the ERC Consolidator Grant BEATRICE (2021-2026) focused on the interface between information and electromagnetic theories. To date, he has received the prestigious 2023 Argo Network Innovation Award, the 2019 EURASIP Early Career Award and the 2018/2019 Royal Academy of Engineering/The Leverhulme Trust Senior Research Fellowship. His team was also the Grand Winner of the 2019 Mobile World Congress Challenge. He was the recipient of the 2011 IEEE ComSoc Best Young Researcher Award for the Europe, Middle East and Africa Region and a co-recipient of the 2006 IEEE Communications Chapter Project Prize for the best M.Sc. dissertation in the area of communications. He has co-authored papers that received best paper awards at the 2018 IEEE WCSP and 2014 IEEE ICC. In 2014, he received the Research Fund for International Young Scientists from the National Natural Science Foundation of China. He is currently the Editor-in-Chief of Elsevier Physical Communication, a Senior Editor for IEEE Wireless Communications Letters and IEEE Signal Processing Magazine, and an Area Editor for IEEE Transactions on Communications. He is an IEEE and AAIA Fellow. [ < g r a p h i c s > ] Lajos Hanzo (FIEEE'04) received Honorary Doctorates from the Technical University of Budapest (2009) and Edinburgh University (2015). He is a Foreign Member of the Hungarian Science-Academy, Fellow of the Royal Academy of Engineering (FREng), of the IET, of EURASIP and holds the IEEE Eric Sumner Technical Field Award. For further details please see <http://www-mobile.ecs.soton.ac.uk>, <https://en.wikipedia.org/wiki/Lajos_Hanzo>.
http://arxiv.org/abs/2407.13291v1
20240718084514
Scikit-fingerprints: easy and efficient computation of molecular fingerprints in Python
[ "Jakub Adamczyk", "Piotr Ludynia" ]
cs.SE
[ "cs.SE", "cs.LG" ]
jadamczy@agh.edu.pl [cor1]Corresponding author AGH University of Krakow, Department of Computer Science, Cracow, Poland § ABSTRACT In this work, we present , a Python package for computation of molecular fingerprints for applications in chemoinformatics. Our library offers an industry-standard scikit-learn interface, allowing intuitive usage and easy integration with machine learning pipelines. It is also highly optimized, featuring parallel computation that enables efficient processing of large molecular datasets. Currently,  stands as the most feature-rich library in the Python ecosystem, offering over 30 molecular fingerprints. Our library simplifies chemoinformatics tasks based on molecular fingerprints, including molecular property prediction and virtual screening. It is also flexible, highly efficient, and fully open source. molecular fingerprints chemoinformatics molecular property prediction Python machine learning scikit-learn 92-04 92-08 92E10 68N01 § METADATA § MOTIVATION AND SIGNIFICANCE Molecules are the basic structures processed in computational chemistry. They are most commonly represented as molecular graphs, which need to be converted into multidimensional vectors for the majority of processing algorithms, most prominently for machine learning (ML) applications. This is typically done with molecular fingerprints, which are feature extraction algorithms encoding structural information about molecules as vectors <cit.>. They are ubiquitously used in chemoinformatics, e.g. for chemical space diversity measurement <cit.> and visualization <cit.>, clustering <cit.>, virtual screening <cit.>, molecular property prediction <cit.>, and many more <cit.>. These chemoinformatics tasks, often relying on machine learning methods, are important for many real-life applications, particularly in de novo drug design. For properly assessing the performance of predictive models, train-test splitting is crucial, and molecular fingerprints can also be used there <cit.>. The performance of fingerprint-based models remains very competitive, even compared to state-of-the-art graph neural networks (GNNs) <cit.>. Selection of the optimal fingerprint representation for a given application is nontrivial, and typically requires computing many different fingerprints <cit.>, and may also require tuning their hyperparameters <cit.>. Using multiple fingerprints at once often improves results, e.g., via concatenation <cit.> or data fusion <cit.>. Processing large molecular datasets necessitates efficient implementations that leverage modern multicore CPUs. Python, the most popular language in chemoinformatics today, includes the scikit-learn library <cit.>, which has become the de facto standard tool for machine learning tasks. The library is renowned for its intuitive and widely adopted API <cit.>. Popular open source tools for computing molecular fingerprints, such as Chemistry Development Kit (CDK) <cit.>, OpenBabel <cit.> or RDKit <cit.>, are written in Java or C++, and unfortunately only RDKit has an official Python wrapper. None of them are compatible with scikit-learn API, and they only support sequential computation. Each of these tools also supports only a limited number of fingerprints. Here, we present , a new Python library for easy and efficient computation of molecular fingerprints. It is fully scikit-learn compatible, enabling easy integration into ML pipelines as a feature extractor for molecular data. 
It offers optimized parallel computation of fingerprints, enabling processing of large datasets and experiments with multiple algorithms, like data fusion. We implemented over 30 different fingerprints, making it the most feature-rich library in Python ecosystem for molecular fingerprinting. Those include ones based only on molecular graph topology (2D), as well as those utilizing graph conformational structure (3D, spatial). It is fully open source, publicly available on PyPI <cit.> and on GitHub at https://github.com/scikit-fingerprints/scikit-fingerprintshttps://github.com/scikit-fingerprints/scikit-fingerprints. § SOFTWARE DESCRIPTION §.§ Software architecture  is a Python package for computing molecular fingerprints, and it is aimed at chemoinformatics and ML workflows. Its interface is fully compatible with scikit-learn API <cit.>, ensured by proper inheritance from scikit-learn base classes and comprehensive tests. The package structure is shown in Figure <ref>. All functionality is contained in the package, allowing easy imports. Base classes are in package, and they can be used to extend the functionality with new or customized fingerprints. has functions for loading popular datasets, for easy benchmarking. contains classes for preprocessing molecules before computing fingerprints, as described in Section <ref>. Fingerprints are represented as classes in package . Lastly, contains additional utility classes, such as input type validators. §.§ Software functionalities  user-facing functionalities can be broken into preprocessing and fingerprint calculation. It also supports loading popular datasets. In addition, in contrast to existing software, we support efficient parallelism, and implement multiple measures for ensuring high code quality and security. §.§.§ Preprocessing Fingerprints take RDKit objects as input to the method. However, for convenience, all 2D-based fingerprints also take SMILES input, converting them internally. If done multiple times, this entails a small performance penalty, so  offers and classes for easier conversions. SMILES representation for a molecule is not unique, and does not convey all information. In particular, incorrect or very unlikely molecules can be written in SMILES form. by design performs only basic checks, to enable reading arbitrary data. For expanded checks, we implemented class. Since there is no one-size-fits-all solution for molecular standardization, we use the most broadly used sanity checks, recommended by RDKit <cit.>. This helps to ensure high data quality at the beginning of the pipeline. All fingerprints utilizing conformational (3D, spatial) information require input, with conformers calculated using RDKit, with property set. Conformer generation can be troublesome, with multiple different algorithms and settings available. class in  greatly simplifies this process, offering reasonable defaults. It attempts to maximize efficiency for easy molecules and minimize failure chance for complex compounds, based on ETKDGv3 algorithm <cit.>, known to give excellent results <cit.>. §.§.§ Fingerprints calculation Different molecular fingerprints are represented as classes, all inheriting from , and further from for substructure fingerprints like Klekota-Roth <cit.> (see Figure <ref>). They are used as stateless transformer objects in scikit-learn, and used mainly via method. It takes a list of SMILES strings or RDKit objects, and outputs a dense NumPy array <cit.> or sparse SciPy array in CSR format <cit.>. 
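A minimal call of this interface, using only classes and parameters that also appear in the illustrative examples later in the paper, might look as follows; the SMILES strings are placeholder molecules.

from skfp.fingerprints import ECFPFingerprint

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]   # placeholder molecules
fp = ECFPFingerprint(n_jobs=-1)     # stateless scikit-learn transformer
X = fp.transform(smiles)            # one fingerprint vector per molecule
print(X.shape)

Only default settings are used in this call.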
Various options, such as vector length for hashed fingerprints (e.g. ECFP <cit.>), binary/count variant, dense/sparse output etc. are specified by constructor parameters. This ensures full composability with scikit-learn constructs like pipelines and feature unions. We implement over 30 different fingerprints of various types, e.g. circular ECFP <cit.> and SECFP <cit.>, path-based Atom Pair <cit.> and Topological Torsion <cit.>, substructure-based MACCS <cit.> and Klekota-Roth <cit.>, physicochemical descriptors like USRCAT <cit.> and Mordred <cit.>, and more. We used efficient RDKit subroutines, written in C++, e.g. for matching SMARTS patterns. A full list of implemented fingerprints is available in the  https://scikit-fingerprints.github.io/scikit-fingerprints/modules/fingerprints.htmlonline documentation. §.§.§ Parallelism Since molecules can be processed independently when computing fingerprints, the task is embarrassingly parallel <cit.>. This means that we can efficiently utilize all available cores. To minimize inter-process communication, by default, input molecules are split into as many chunks as there are physical cores available, and processed in parallel by Python workers. We utilize Joblib <cit.>, with Loky executor, which uses memory mapping to efficiently pass the resulting arrays between processes. Furthermore, by using sparse arrays and smaller chunk sizes, users can minimize the memory utilization for large datasets and fingerprints that yield long output vectors <cit.>. Furthermore, we support distributed computing with Dask <cit.>, used as Joblib executor. This way,  can take advantage of large high-performance computing (HPC) clusters. All that is required is a single parameter passed to Joblib configuration, to connect to the Dask cluster <cit.>. §.§.§ Datasets loading Fingerprints are often used in the context of molecular property prediction on standardized benchmarks. In particular, they constitute strong baselines, often outperforming complex graph neural networks (GNNs) <cit.>. Therefore, their easy usage is important for fair evaluation of advancements in graph classification. We utilized HuggingFace Hub <cit.> to host datasets. It offers easy downloading, caching, and loading datasets, with automated compression to Parquet format. Currently, the most widely used MoleculeNet <cit.> benchmark has been integrated, and further datasets can be easily added with unified interface. Users can load datasets similarly to scikit-learn example datasets. For example, loading the BBBP dataset from MoleculeNet uses the function . §.§.§ Code quality and CI/CD We ensure high code quality and security with multiple measures. The code is versioned using Git and GitHub. New features have to be submitted through Pull Requests and undergo code review. We use pre-commit hooks <cit.> to verify code quality before each commit: * <cit.>, <cit.> - security analysis and dependency vulnerability scanning, following security recommendations <cit.> * <cit.>, <cit.>, <cit.>, <cit.> - code style, following reproducibility and readability guidelines <cit.> * <cit.> - type checking; our entire code is statically typed, following security recommendations <cit.> * <cit.> - cyclomatic complexity We implemented a comprehensive suite of 196 unit and integration tests. They use PyTest framework <cit.>, and are run automatically on GitHub Runners as a part of CI/CD process. Passing all tests is required to merge the code into the master branch. 
We run tests on a full matrix of operating systems (Linux, Windows, MacOS) and Python versions (from 3.9 to 3.12), ensuring proper execution in different environments. Any changes to the documentation are automatically deployed to the GitHub Pages. New package versions are deployed to PyPI by using GitHub Releases, with new changes description. Internally, this uses a GitHub Actions workflow and creates a Git tag on the commit used in the given release. can be installed via pip by running . § ILLUSTRATIVE EXAMPLES §.§ Parallel computation Computing fingerprints in parallel is useful for all molecular tasks, in particular for large databases in virtual screening. To illustrate the capability of  in this regard, we compute fingerprints for popular HIV dataset from MoleculeNet benchmark <cit.>. It contains a wide variety of molecules for a medicinal chemistry data, including organometallics, small and large molecules, some atoms with very high number of bonds etc. We limit the data to 10 thousand molecules, due to high computational time of running the benchmark multiple times for many data sizes and fingerprints. Code is available in the GitHub repository, in directory. As an example, we present the timings for the PubChem fingerprint <cit.> in Figure <ref>, commonly used for virtual screening. Speedup for all fingerprints [We omit Pharmacophore fingerprint due to excessive computation time. Due to checking multiple SMARTS patterns for all atoms, it is by far the slowest fingerprint.] is shown in Figure <ref>, when using 16 cores and 10 thousand molecules. Speedup is defined as a ratio of sequential to parallel computation time. We calculate those times as an average of 5 runs, using a machine with Intel Core i7-13700K 3.4 GHz CPU. For 3D fingerprints, we do not include the conformer generation time. PubChem fingerprint clearly benefits from parallelism, with time decreasing with almost perfect speedup for more cores. This behavior is typical in particular for all substructure-based fingerprints, which have to check numerous SMARTS patterns for each molecule. This gain is especially significant for larger datasets. High speedup values indicate that a significant majority of fingerprints benefits from parallelism, with Klekota-Roth achieving the greatest improvement. In general, computationally costly ones like SECFP or Mordred gain the most. Only the fastest ones like ECFP or Atom Pair have speedup less than 1, meaning slower computation than the sequential one. However, we did not tune the number of cores here, and using 2 or 4 could be more beneficial for those fingerprints given this amount of data. §.§ Sparse matrix support Molecular fingerprints are often extremely sparse, therefore using proper representation can result in large savings in memory usage, compared to dense arrays. Differences are particularly significant for large datasets, which are typical for virtual screening or similarity searching.  has full support for sparse matrix computations, using SciPy. As an example, we calculated the memory usage of the resulting fingerprint arrays for PCBA dataset from MoleculeNet <cit.>, consisting of almost 440 thousand molecules. In Table <ref>, we report memory usage of dense and sparse representations. We also report memory savings, defined as how many times the sparse representation reduced the memory usage. For brevity, we show the results of 5 fingerprints with the largest reduction. Code to produce results for all fingerprints is available in the GitHub repository, in directory. 
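The kind of measurement reported in the table can be sketched as follows; the sparse constructor flag is an assumption based on the dense/sparse output option described earlier, and the short SMILES list stands in for the PCBA molecules.

from skfp.fingerprints import ECFPFingerprint

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]   # stands in for the PCBA molecules

dense_fp = ECFPFingerprint(n_jobs=-1)
sparse_fp = ECFPFingerprint(sparse=True, n_jobs=-1)     # `sparse` flag assumed from the text

X_dense = dense_fp.transform(smiles)    # dense NumPy array
X_sparse = sparse_fp.transform(smiles)  # SciPy CSR array

dense_bytes = X_dense.nbytes
sparse_bytes = X_sparse.data.nbytes + X_sparse.indices.nbytes + X_sparse.indptr.nbytes
print(dense_bytes / sparse_bytes)       # memory saving factor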
Clearly, fingerprints greatly benefit from sparse representations, with density of arrays around just 1-2%. In particular, the very popular ECFP and FCFP fingerprints <cit.> are among those benefitting the most. The Klekota-Roth fingerprint <cit.>, which is quite long for a substructure-based fingerprint, obtains a reduction from almost 2 GB RAM to just 23 MB, i.e. 88.2 times. Those savings would be even more important during hyperparameter tuning of downstream classifiers, when many copies of the data matrix are created. Using sparse representation did not negatively impact computation time, compared to the dense one. §.§ Molecular property prediction  can greatly simplify the process of classifying molecules. We show a part of a pipeline in Listing <ref>, responsible for computing ECFP fingerprints from SMILES strings and their classification. For brevity, we omit loading the data, which is just standard Pandas code for CSV files. Inputs can be any sequences that consist of SMILES strings or RDKit objects, e.g. Python lists, NumPy arrays, or Pandas series. Since is a stateless transformer class, it uses an empty method in the pipeline. The code is also parallelized, requiring only the parameter. basicstyle= [basicstyle=,language=Python, label=code:molecular_property_prediction] from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline from skfp.fingerprints import ECFPFingerprint pipeline = make_pipeline( ECFPFingerprint(n_jobs=-1), RandomForestClassifier(n_jobs=-1, random_state=0) ) pipeline.fit(smiles_train, y_train) y_pred = pipeline.predict(smiles_test) §.§ Fingerprint hyperparameter tuning Most papers in the literature neglect hyperparameter tuning for molecular fingerprints, only tuning downstream classifiers. We conjecture that this is also due to the lack of easy to use and efficient software for computing fingerprints. The works that do perform such tuning <cit.> indicate that it is indeed beneficial. We performed hyperparameter tuning for all 2D fingerprints on MoleculeNet single-task classification datasets <cit.>, using scaffold split provided by OGB <cit.>. Only the pharmacophore fingerprint was omitted due to excessive computation time for some molecules. A Random Forest classifier with default hyperparameters was used, in order to isolate the tuning improvements to just fingerprints. In Table <ref>, we report area under receiver operating characteristic curve (AUROC) values obtained when using tuned hyperparameters, improvement from tuning compared to the default parameters, and average gain over all datasets. Due to space limitations, we present the results for five fingerprints that had the highest average gain. They can be therefore considered the methods with the highest tunability <cit.>. Hyperparameter grids and code are available in the GitHub repository, in directory. Tuning fingerprints results in considerable gains, as high as 5.8% AUROC in case of RDKit fingerprint <cit.> on BBBP dataset. Notably, substructure-based Ghose-Crippen fingerprint <cit.> gains 4% AUROC on average, using feature counts instead of binary indicators. This signifies that further research in this area, utilizing , would be highly beneficial. §.§ Complex pipelines for 3D fingerprints For tasks requiring 3D information, i.e. fingerprints based on conformers, the whole processing pipeline becomes more complex. Conformers need to be generated and often post-processed with force field optimization, and resulting fingerprints may have missing values. 
Additionally, using more than one fingerprint is often beneficial, especially for virtual screening, as they take different geometry features into consideration. In Listing <ref>, we present an example how to create such pipeline for vectorizing molecules for screening, for GETAWAY <cit.> and WHIM <cit.> descriptors. This short example would require well over 100 lines of code in RDKit, even without parallelization. [basicstyle=,language=Python, label=code:3d_pipeline] from sklearn.impute import SimpleImputer from skfp.fingerprints import ( GETAWAYFingerprint, WHIMFingerprint ) from skfp.preprocessing import ConformerGenerator from sklearn.pipeline import make_pipeline, make_union pipeline = make_pipeline( ConformerGenerator( optimize_force_field="MMFF94", n_jobs=-1 ), make_union( GETAWAYFingerprint(n_jobs=-1), WHIMFingerprint(n_jobs=-1) ), SimpleImputer(strategy="mean"), ) §.§ Comparison with existing software We compare our library to existing libraries for chemoinformatics, which also include molecular fingerprints computation. Differences are summarized in table <ref>. We implement the largest number of fingerprints, including both all those available in other libraries, and new ones like MAP4 <cit.> or E3FP <cit.>. In terms of Python support,  is the first one to have a native Python package, with other libraries not supporting Python at all (CDK and OpenBabel), or just using an autogenerated wrapper (RDKit). We also fully support parallelism and even distributed computing, which is either nonexistent or very limited elsewhere.  is also the only library utilizing pre-commit hooks and dedicated security tools, and offering convenient, integrated datasets. § IMPACT  is a comprehensive library for computing molecular fingerprints. Leveraging fully scikit-learn compatible interfaces, researchers can easily integrate it with complex pipelines for processing molecular data. Comprehensive capabilities, with over 30 fingerprints, both 2D and 3D, with efficient conformer generation, enable using varied solutions for molecular property prediction, virtual screening, and other tasks. Intuitive and unified APIs make it easy to use for domain specialists with less programming expertise, like computational chemists, chemoinformaticians, or molecular biologists. We also put strong emphasis on code quality, security, and automated checks and analyzers. Lack of efficient parallelism is a major downside of existing solutions. Modern molecular databases can easily encompass millions of molecules, especially for virtual screening <cit.>. Our solution, utilizing all available cores, results in significant speedups, enabling efficient processing of large datasets. This is also beneficial for hyperparameter tuning <cit.>, fingerprint concatenation <cit.>, data fusion <cit.>, and other computationally complex tasks. Simple class hierarchy and high code quality make our solution easily extensible. New fingerprints can easily be added, automatically benefiting from parallelization and scikit-learn compatibility. GitHub repository had 7 contributors to date, showing good reception by the community and easy learning curve. The first issue by an external researcher has been made in a week of making the library public, highlighting the need for modern software in this area. The research shows that fingerprint-based molecular property prediction is still competitive compared to graph neural networks <cit.>, justifying further research in this area. 
In particular, they should be applied as baselines for fair evaluation of the impact of novel approaches, which is particularly easy with our library. has already been applied to research in molecular chemistry. In <cit.>, it was used to implement ECFP fingerprint as a baseline algorithm, ensuring fair comparison of various approaches on the MoleculeNet benchmark <cit.>. It is also being actively applied for predicting pesticide toxicity for honey bees, using recently proposed ApisTox dataset <cit.>. Additionally, numerous research projects and Master's theses at Faculty of Computer Science at AGH University of Krakow are currently utilizing it. Finally, is constantly evolving, with new fingerprints being added. We are also working on expanding the functionality, e.g. implementing data splitting functions based on fingerprints, or dataset loaders for popular benchmark datasets. Therefore, its impact in chemoinformatics will be even greater in the future. § CONCLUSIONS We have developed , an open-source Python library for computation of molecular fingerprints. It is simple to use, fully compatible with the scikit-learn API, and easily installable from PyPI. It is also the most feature-rich and highly efficient library available in the Python ecosystem, allowing parallel computation of over 30 different fingerprints. Multiple mechanisms have been implemented to ensure high code quality, maintainability, and security. It fills the gap for a single, definitive software in Python ecosystem for molecular fingerprints. It facilitates quicker, more efficient, and also more comprehensive experiments in fields of chemoinformatics, de novo drug design and computational molecular chemistry. § ACKNOWLEDGEMENTS Research was supported by the funds assigned by Polish Ministry of Science and Higher Education to AGH University of Krakow, and by the grant from Excellence Initiative - Research University (IDUB) for the AGH University of Krakow. We thank Michał Szafarczyk and Michał Stefanik for help with code implementation, and Wojciech Czech for help with manuscript review. We also thank Alexandra Elbakyan for her work and support for accessibility of science. elsarticle-num
http://arxiv.org/abs/2407.13538v1
20240718141050
EnergyDiff: Universal Time-Series Energy Data Generation using Diffusion Models
[ "Nan Lin", "Peter Palensky", "Pedro P. Vergara" ]
cs.LG
[ "cs.LG", "cs.SY", "eess.SY" ]
Journal of Class Files, Vol. 18, No. 9, June 2024 Nan Lin: Universal Time-series Energy Data Synthesis Using Denoising Diffusion Probabilistic Models EnergyDiff: Universal Time-Series Energy Data Generation using Diffusion Models Nan Lin, Student Member, IEEE, Peter Palensky, Senior Member, IEEE, Pedro P. Vergara, Senior Member, IEEE This work used the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. EINF-6262. Nan Lin is funded by NWO Align4Energy Project NWA.1389.20.251. Nan Lin, Peter Palensky, and Pedro P. Vergara are with the Intelligent Electrical Power Grids (IEPG) Group, Delft University of Technology, 2628 CD Delft, The Netherlands (e-mail:{N.Lin, P.P.VergaraBarrios, P.Palensky}@tudelft.nl). July 22, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT High-resolution time series data are crucial for operation and planning in energy systems such as electrical power systems and heating systems. However, due to data collection costs and privacy concerns, such data is often unavailable or insufficient for downstream tasks. Data synthesis is a potential solution for this data scarcity. With the recent development of generative AI, we propose , a universal data generation framework for energy time series data. builds on state-of-the-art denoising diffusion probabilistic models, utilizing a proposed denoising network dedicated to high-resolution time series data and introducing a novel Marginal Calibration technique. Our extensive experimental results demonstrate that achieves significant improvement in capturing temporal dependencies and marginal distributions compared to baselines, particularly at the 1-minute resolution. Additionally, consistently generates high-quality time series data across diverse energy domains, time resolutions, and at both customer and transformer levels with reduced computational need. Generative models, load profile, data generation, time-series data. § INTRODUCTION The rapid increase in the integration of renewable energy sources into energy systems has resulted in unprecedented volatility in energy generation. Additionally, the electrification of the energy systems has drastically altered energy consumption behaviors. Together, these factors pose significant challenges to energy systems' economical and safe operation and planning. To develop effective operation and planning solutions, energy system operators require accurate energy generation and consumption profiles <cit.>, necessitating the collection of large amounts of high-resolution energy time series data. However, collecting such data is challenging due to privacy and cost concerns. Therefore, the need for algorithms that generate realistic energy time series data is crucial. Conventional methods, such as Gaussian Mixture Models (GMMs) and t-Copula <cit.> model, have been widely used for data generation due to their simplicity and historical effectiveness. 
In particular, t-Copula has the unique advantage of fitting the marginal distributions precisely, which is suitable for representing high-consumption or high-generation scenarios. However, these models struggle to capture the complex dependencies inherent in high-resolution energy data, leading to sub-optimal performance in representing real-world scenarios. Deep generative models offer more advanced solutions by capturing the intricate temporal patterns within the data. Examples are Generative Adversarial Networks (GANs) <cit.>, Variational Auto-encoders (VAEs) <cit.>, and flow-based models <cit.>. Despite their strengths, data generated by VAEs face challenges in maintaining high-resolution details, and GANs suffer from mode collapse and training instability <cit.>. Although flow-based models have gained success in recent years <cit.>, they intrinsically require an invertible neural network structure, limiting their expressive power. More importantly, little effort has been made to generate high-resolution time series data, i.e., 1-minute resolution or higher. For example, a daily electricity consumption profile has 1440 steps at the 1-minute resolution. This poses a great challenge to any generative model. Even fitting a simple Gaussian model with a full covariance matrix to such data would result in more than a million parameters. High-resolution data also leads to numerical instability for models like t-Copula, as their fitting processes require the inversion of a large 1440× 1440 covariance matrix at every fitting step. Denoising diffusion probabilistic models (DDPMs) are newly emerged deep generative models that are easy to train and exhibit state-of-the-art data generation quality and diversity <cit.>, overcoming the disadvantages of previous deep generative models. These advantages make DDPM a natural candidate for a universal generative energy time series data model. Nevertheless, the state-of-the-art DDPM <cit.> was designed for image generation and faces several challenges when applied to energy time series generation. The first challenge is how to deal with high-resolution data, such as 1-minute data, as the computation complexity grows rapidly with the time series length due to the Transformer network architecture <cit.>. The second challenge is the inaccurate approximation of the marginal distributions. In the image field, the marginal distribution is the brightness distribution of pixels, which need not be extremely precise. However, in the energy field, the marginal distributions are important for characterizing the peak consumption and generation values. Recently, DDPM has been adopted to model electricity load profiles and electric vehicle (EV) charging scenarios <cit.>. The model proposed in <cit.> focuses more on a generative forecast problem, while <cit.> investigates a shorter time series, i.e., 720 steps, and circumvents the complexity issue by adopting a Long Short-Term Memory (LSTM) network architecture instead of a Transformer. Neither of these works has fully addressed the previous two challenges. In this work, we propose EnergyDiff, a universal energy time series data generation framework based on DDPM, which is applicable across various energy domains, multiple time resolutions, and at both customer and electrical transformer levels.
Furthermore, we propose a simple yet novel Marginal Calibration technique to combine the underlying dependency structure of DDPM and the empirical cumulative distribution functions (CDFs) of training samples, yielding almost exact marginal distributions on any model. Our contributions are summarized as follows. * We propose EnergyDiff, a DDPM-based framework that is dedicated to generating energy time-series data. The proposed EnergyDiff is 1) scalable across different time resolutions and 2) applicable to generate data at both the transformer level and customer (household) level. * To overcome the limitations of the standard DDPM in modeling high-resolution time series data, we propose a folding operation in DDPM that enables us to generate high-resolution data, such as 1-minute profiles, with substantially less computation than without the operation. * We propose a Marginal Calibration technique that calibrates the inaccurate DDPM marginal distributions while preserving the learned complex temporal dependency structure. The proposed technique allows us to use prior knowledge about the marginal distributions in deep generative models. The generated data show significant improvement in terms of Kullback-Leibler divergence, Wasserstein distance, and Kolmogorov–Smirnov statistic. § DENOISING DIFFUSION PROBABILISTIC MODELS In this section, we introduce both the general theoretical framework of DDPM and the practical procedure of training and generation with DDPM. First, we formulate the probabilistic model by constructing two Markov chains. Next, we derive a loss function from the probabilistic model that can be used for efficient training with stochastic gradient descent (SGD). Finally, we show step-by-step how to generate new samples with a trained DDPM. A simple demonstration of the DDPM procedure is shown in Fig. <ref>. §.§ Time Series Probabilistic Model Any univariate time series data of T steps can be seen as a random vector x∈ℝ^T. We assume it follows an unknown joint distribution x∼ p(x). Specifically, in energy time series data generation, x represents a one-day consumption or generation profile. Consequently, the value of T changes with the time resolution. For hourly resolution, T=24, while for 1-minute resolution, T=1440. This formulation can be extended to multivariate time series and inter-day time series. For an m-variable T-step series, we can simply formulate x as x∈ℝ^m× T. If one wants to generate a multi-day time series, for example, a weekly profile at hourly resolution, simply set T=7 · 24. In this paper, we only consider univariate daily time series. However, our framework naturally extends to multivariate cross-day scenarios. §.§ Diffusion Probabilistic Model Formulation Overall, the idea of DDPM is to gradually corrupt data and learn how to recover the corrupted data through step-by-step denoising. To be consistent with the DDPM literature, we define x_0≡x and the dimensionality d≡ T. The subscript 0 represents the diffusion step, which will be explained later. With x_0 following an unknown distribution x_0∼ p(x_0), DDPM establishes a parametric distribution p_θ(x_0) to approximate the true data distribution p(x_0). Note that θ is an abstract collection of all the parameters of the approximate distribution instead of a single parameter. We explain the detailed formulation of p_θ(x_0) below.
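Before doing so, it is worth making the data layout concrete: each training sample x_0 is one daily profile of length T. The following NumPy sketch illustrates this preprocessing under our own simplifying assumptions (random placeholder readings instead of the LCL, WPuQ, or CoSSMic files, and the min-max scaling described later in the model setup):

```python
import numpy as np

# Placeholder for one year of raw 1-minute meter readings (illustrative only;
# real data would be loaded from the respective dataset files).
raw = np.random.rand(365 * 1440)

T = 1440                         # steps per day at 1-minute resolution
x0 = raw.reshape(-1, T)          # shape (365, T): one daily profile per row

# Linear scaling to [-1, 1] using the global minimum and maximum.
x_min, x_max = x0.min(), x0.max()
x0 = 2.0 * (x0 - x_min) / (x_max - x_min) - 1.0
print(x0.shape)                  # (365, 1440)
```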
First, we define a forward process that corrupts the data iteratively with Gaussian noise for steps s=1,...,S, as follows q(x_s|x_s-1) := 𝒩(√(1-β_s)x_s-1, β_sI) q(x_1:S|x_0) := Π_s=1^Sq(x_s|x_s-1), where x_0 is our observed data, and β_s ∈ (0,1) is the corruption strength for diffusion step s, which usually increases gradually with s. β_s is a small number so that we do not corrupt the data too fast. The design of {β_s} is referred to as the noise schedule in the literature <cit.>. Notably, given x_0, we can jump directly to any diffusion step s>0 by: q(x_s|x_0) = 𝒩( √(α̅_s)x_0, (1-α̅_s)I ), where α_s:= 1-β_s, α̅_s := Π_τ=1^sα_τ. Here α̅_s can be intuitively seen as the signal strength of x_s. Because β_s is between 0 and 1, α̅_s monotonically decreases. Consequently, when the final step S is large enough, the data will be almost completely corrupted; i.e., q(x_S|x_0) ≈ q(x_S) = 𝒩(0, I). In other words, the data is completely overwhelmed by standard Gaussian noise at the last step S of the forward process. Now, we shift our focus to the reverse process, where we start with the fully corrupted data x_S to get the original data x_0. If we knew the exact denoising distribution q(x_s-1|x_s) for any s, we could sample x_S∼𝒩(0,I) and go through the forward process in the reverse direction to obtain x_0. However, q(x_s-1|x_s) is not tractable. Naturally, we can approximate it with the following distribution parameterized by θ p_θ(x_s-1|x_s) := 𝒩(μ_θ(x_s,s), Σ_θ(x_s,s)). The two functions μ_θ and Σ_θ tell us how we can denoise x_s to get the less noisy x_s-1. The exact and approximate joint distributions over x_0:S are q(x_0:S) = p(x_0)Π_s=1^Sq(x_s|x_s-1) p_θ(x_0:S) = q(x_S)Π_s=1^Sp_θ(x_s-1|x_s). §.§ Training DDPM DDPM consists of the forward and reverse processes. The forward process is simply corrupting data and is only a means to an end. Generating data requires only the reverse process, which reduces to evaluating two parametric functions, μ_θ and Σ_θ. During the training process, we find the parametric functions μ_θ and Σ_θ by minimizing a loss function evaluated on a set of training samples {x_0^(i)}_i=1^N. §.§.§ Loss Function To minimize the discrepancy between the true distribution p(x_0) and the approximate distribution p_θ(x_0), we can minimize the negative evidence lower bound (ELBO) ℒ_θ := 𝔼_x_1:S∼ q, x_0 ∼ p[-logp_θ(x_0:S)/q(x_1:S|x_0)] = 𝔼_x_0∼ p[-logp_θ(x_0) + D_KL(q(x_1:S|x_0)||p_θ(x_1:S|x_0)) ] ≥𝔼_x_0∼ p[-logp_θ(x_0)]. By optimizing ℒ_θ, we jointly maximize the log likelihood of the data under our model, logp_θ(x_0), and minimize the approximation error between p_θ and the true distribution p(x_0). Essentially, the parametric functions we need to learn are μ_θ:ℝ^d×ℤ_0+→ℝ^d and Σ_θ:ℝ^d×ℤ_0+→𝕊^d_+, where 𝕊^d_+ is the set of all d× d positive semi-definite matrices. Naturally, we can use neural networks to parameterize these two functions. Since these two functions essentially serve the purpose of partially removing the noise in x_s to recover x_s-1, we will refer to them as the denoising networks. The powerful capacity of neural networks can therefore enable us to learn complex joint distributions. As <cit.> suggests, the variance function Σ_θ can be fixed to Σ_θ=β_sI with little to no performance drop.
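To make the forward process concrete, the closed-form corruption q(x_s|x_0) together with a cosine-style noise schedule can be written in a few lines of PyTorch. This is only an illustrative sketch consistent with the definitions above, not the exact implementation:

```python
import math
import torch

S = 1000  # number of diffusion steps

# Cosine noise schedule: define alpha_bar(s/S) directly and derive beta_s.
def alpha_bar(u, eps=0.008):
    return math.cos((u + eps) / (1 + eps) * math.pi / 2) ** 2

abar = torch.tensor([alpha_bar(s / S) for s in range(S + 1)])
betas = (1 - abar[1:] / abar[:-1]).clamp(max=0.999)   # beta_1, ..., beta_S
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)             # running product alpha_bar_s

def q_sample(x0, s, noise=None):
    """Draw x_s ~ q(x_s | x_0) = N(sqrt(alpha_bar_s) x_0, (1 - alpha_bar_s) I)."""
    if noise is None:
        noise = torch.randn_like(x0)
    a = alphas_bar[s - 1].view(-1, *([1] * (x0.dim() - 1)))   # s ranges over 1..S
    return a.sqrt() * x0 + (1 - a).sqrt() * noise

x0 = torch.randn(16, 1, 1440)              # a batch of (scaled) daily profiles
s = torch.randint(1, S + 1, (16,))         # random diffusion steps for the batch
xs = q_sample(x0, s)                       # corrupted batch
```

With this in place, training amounts to regressing μ_θ(x_s, s) onto the posterior mean μ̃_s, as described next.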
Furthermore, for efficient training with SGD, we can derive a loss function from (<ref>) as ℒ̂^simple_θ = 1/B∑_i=1^B 1/(2β_s_i) ||μ̃_s_i(x^(i)_s_i,x^(i)_0) - μ_θ(x^(i)_s_i,s_i)||_2^2 , μ̃_s(x_s,x_0) = √(α̅_s-1)β_s/(1-α̅_s)x_0 + √(α_s)(1-α̅_s-1)/(1-α̅_s)x_s, where {x_0^(i)}_i=1^B are a batch of B samples drawn from the complete dataset {x_0^(i)}_i=1^N, and each s_i is uniformly and independently drawn from {1,...,S}. Taking the gradient ∇_θℒ̂^simple_θ enables us to perform SGD. §.§.§ Training Procedures Despite the complex construction of DDPM, the training procedure is simple. We summarize it in Algorithm <ref>. §.§ Generation Procedure Once the training is done and the parameters θ are obtained, we can generate new data through the reverse process. As summarized in Algorithm <ref>, we start by sampling noise x_S from a standard normal distribution. Following that, we recursively denoise from x_s to x_s-1 for S times with the help of the denoising distribution p_θ(x_s-1|x_s). Finally, we obtain a clean sample x_0 that approximately follows the true data distribution p(x_0). § ENERGYDIFF ARCHITECTURE DDPM is a powerful probabilistic model that can approximate complex distributions. However, there is a notable lack of effort in developing a robust and universally applicable DDPM capable of generating high-resolution energy time series data across various energy domains. There are several challenges in modeling energy time series data with the commonly adopted GMM and the standard DDPM. First, the temporal dependencies of energy data vary significantly across different domains and are often complex. Second, the computation and memory complexity can grow dramatically as the time resolution increases. For example, a daily electricity consumption profile with a 1-minute resolution yields a 1440-dimensional vector. This means even a simple Gaussian model with a full covariance matrix would have over a million parameters. Third, neural network-based methods can usually learn complex dependency structures well, but the learned marginal distributions are far less accurate than the empirical cumulative distribution function (ECDF), which is easy to estimate. We address all of these challenges in our proposed EnergyDiff framework, which is dedicated to energy time series data generation. The forward process follows the exact same paradigm as the original DDPM, while the reverse process consists of our tailored denoising network. We also propose an additional Marginal Calibration step upon the completion of the reverse process, which compensates for the inaccuracy of the DDPM on marginal distributions. The complete framework is demonstrated in Fig. <ref>. §.§ Tailored Reverse Process Learning the temporal dependency structure is central to generating energy time series data. To achieve this, we propose the neural network architecture shown in Fig. <ref>. The proposed architecture exploits Transformer[We only refer to the neural network model proposed in <cit.> as the Transformer in this section.] networks' capacity for processing sequential data (such as time series data) <cit.>. As we will show in Section <ref>, such a design allows EnergyDiff to learn complex temporal patterns across different energy domains. Besides the Transformer blocks, the proposed architecture comprises a Folding block, a two-level Positional Encoding block, an Initial Convolution block, and a Final Projection block.
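Before detailing each block, the following PyTorch skeleton gives a rough preview of how they compose into a single denoising network. It is a hedged sketch under our own simplifications: the layer sizes and names are illustrative, and a stock nn.TransformerEncoder stands in for the exact Transformer blocks defined in the next subsections.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_encoding(pos, dim):
    """Standard sinusoidal positional encoding for integer positions `pos`."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = pos.float().unsqueeze(-1) * freqs                      # (..., half)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)  # (..., dim)

class DenoiserSketch(nn.Module):
    """Fold -> diffusion-step scale/shift -> initial conv -> time encoding
    -> Transformer blocks -> final projection (illustrative only)."""
    def __init__(self, d=1, r=8, width=512, layers=12, heads=8):
        super().__init__()
        self.d, self.r = d, r
        self.step_mlp = nn.Sequential(nn.Linear(width, width), nn.SiLU(),
                                      nn.Linear(width, 2 * d * r))
        self.init_conv = nn.Conv1d(d * r, width, kernel_size=7, padding=3,
                                   padding_mode='circular')
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=heads,
                                           dim_feedforward=4 * width,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=layers)
        self.final = nn.Linear(2 * width, d * r)   # acts on concat(x^(0), x^(L))

    def forward(self, x, s):
        B, d, T = x.shape
        # Folding: group every r consecutive time steps into the channel dim.
        x = x.reshape(B, d, T // self.r, self.r).permute(0, 1, 3, 2)
        x = x.reshape(B, d * self.r, T // self.r)
        # Diffusion-step conditioning as a learned scale and shift.
        emb = sinusoidal_encoding(s, self.step_mlp[0].in_features)
        scale, shift = self.step_mlp(emb).chunk(2, dim=-1)
        x = (1 + scale.unsqueeze(-1)) * x + shift.unsqueeze(-1)
        # Initial convolution, then time positional encoding and Transformer.
        h0 = self.init_conv(x).transpose(1, 2)                    # (B, T/r, width)
        h0 = h0 + sinusoidal_encoding(torch.arange(h0.shape[1]), h0.shape[-1])
        hL = self.blocks(h0)
        # Final projection on the concatenation, then unfold back to (B, d, T).
        out = self.final(torch.cat([h0, hL], dim=-1)).transpose(1, 2)
        out = out.reshape(B, d, self.r, T // self.r).permute(0, 1, 3, 2)
        return out.reshape(B, d, T)

net = DenoiserSketch()
x_s = torch.randn(4, 1, 1440)                      # noisy 1-minute daily profiles
mu_hat = net(x_s, torch.randint(1, 1000, (4,)))    # predicted denoising mean
```

In EnergyDiff itself, the Transformer internals follow the block structure described below rather than the stock encoder used in this sketch.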
§.§.§ Folding The Transformer is a powerful model that captures complex temporal patterns. However, its memory and computation complexity is quadratic in the time series length <cit.>. This means that whenever the time resolution is doubled, we will have four times the computation and memory cost. Sequence length also heavily influences time complexity because of massive memory read/write operations. Therefore, to deal with high-resolution time series, e.g., 1 minute, we propose to use a folding operation as the first step. For multivariate time series data x_0 ∈ℝ^d × T, we fold every consecutive r steps into the channel dimension. This can be represented as x_0 ∈ℝ^dr×T/r←x_0 ∈ℝ^d × T. For long sequences, the computation of Transformers is intensive mostly because of the Attention operation. Specifically, the memory complexity of Attention for sequence length L is nearly 𝒪(L^2). Such a folding operation would reduce the complexity of Transformer operations by a factor of r^2. Fig. <ref> shows an example of this operation. Critically, the proposed folding operation compromises the inherent data structure to achieve lower complexity. Consequently, the factor r ought to be kept as small as the complexity permits. However, in practice, the negative impact of the folding operation is mitigated when we use a deeper network. §.§.§ Positional Encoding Before passing data to the Transformer blocks, we need to embed positional information into the data. There are two types of positional information we need to provide. Generally, for a multivariate sequence at the s-th step in the diffusion process, x_s ∈ℝ^d × T. The first position is s ∈{0, 1, ..., S}, which is implicitly associated with the noise level; the second position is t ∈{0, 1, ..., T-1}, the position in the time series. We embed each of them separately through the same mechanism, called positional encoding <cit.>, followed by a learned scale and shift. PE_2i(pos, d) = sin(pos/10000^2i/d) PE_2i+1(pos, d) = cos(pos/10000^2i/d), where PE(·, d): ℕ→ [-1,1]^d is a vector function that maps a position pos to a d-dimensional vector for any given d. Each element of the d-dimensional output has a different sensitivity to changes in pos. Next we use two fully connected layers to scale and shift x_s. For the diffusion step encoding, we have σ(s) = W^scale_2 SiLU(W_1 PE(s, d) + b_1) + b^scale_2 δ(s) = W^shift_2 SiLU(W_1 PE(s, d) + b_1) + b^shift_2, where W (matrix) and b (vector) are weights and biases that will be learnt during training, and SiLU is the Sigmoid Linear Unit activation function proposed by <cit.>. With σ(s)∈ℝ^d and δ(s)∈ℝ^d, we scale and shift x_s,t by x_s,t←(1+σ(s)) ⊙x_s,t + δ(s), where ⊙ is the Hadamard (element-wise) product. The scale and shift are only determined by s and stay the same for all t given the same s. We use 1+σ(s) instead of σ(s) and initialize σ and δ with zero, as this was found to stabilize training. Additionally, another Positional Encoding is added for time step t, as shown in Fig. <ref>. This Positional Encoding is placed after the Initial Convolution and before the Transformer blocks. After the Initial Convolution, we have the sequence x^'_s ∈ℝ^d^'× T. We use the same sinusoidal positional encoding with a different dimensionality, PE(t, d^'), and another two fully connected layers applying a scale and shift on x_s,t^'. σ^'(t) = W^' scale_2 SiLU(W^'_1 PE(t, d^') + b^'_1) + b^' scale_2 δ^'(t) = W^' shift_2 SiLU(W^'_1 PE(t, d^') + b^'_1) + b^' shift_2, where the W matrices and b vectors are also learnable parameters.
Different from above, we scale and shift x^'_s,t by x^'_s,t← (1+σ^'(t)) ⊙x^'_s,t + δ^'(t). The scale and shift are only determined by t and stay the same for all s. §.§.§ Initial Convolution After folding and the positional encoding for s, we employ a large-kernel convolution, which has proved useful as an initial feature extractor <cit.>. We use the same convolution for all s. x^'_s,t = ∑_τ = -k^k W^init_τx_s,t-τ + b^init, where 2k+1 is the convolution kernel size; W^init_τ∈ℝ^d^'× d and b^init∈ℝ^d^' are the learnable weight matrices and bias. Since t_min=0, we use circular padding for the out-of-range entries x_s,-1,...,x_s,-k. §.§.§ Transformer Blocks The Transformer blocks are our main tool for learning temporal dependency. Since our task is denoising, we do not need the encoder-decoder structure in <cit.>. Instead, we only adopt the decoders. This has been proven to work effectively for image generation <cit.>. There are two main sub-blocks in a Transformer block, namely multi-head attention (MHA) and a feed forward network (FFN). We use a total of L Transformer blocks. Block l takes the output of the previous block x_s,t^(l-1) as input and passes its output x_s,t^(l) to the next block. The initial input is x_s,t^(0) := x^'_s,t, with the final output being x_s,t^(L). We fix the dimensionality of the input and output such that each block maps x_s,t^(l-1)∈ℝ^d^'× T to x_s,t^(l)∈ℝ^d^'× T, for all l ∈{1,...,L}. MHA is the core operation of the Transformer. It extracts temporal features by comparing the sequence at each time step with every other time step. An MHA block has H heads; each head operates separately and their outputs are aggregated later. Using multiple heads allows the attention mechanism to focus on different attributes of the data. For head h in the l-th Transformer block, we first calculate q_l,h = W^q_l,hx_s,t^(l-1) k_l,h = W^k_l,hx_s,t^(l-1) v_l,h = W^v_l,hx_s,t^(l-1), where W^q_l,h,W^k_l,h,W^v_l,h∈ℝ^d^'/H× d^' are learnable matrices. q_l,h, k_l,h, v_l,h are called queries, keys, and values respectively. For each of the T queries, the idea is to calculate the similarity between the query and each of the T keys with a dot product. After normalizing the dot products, we have a series of similarity weights that sum up to one. We then use these weights to perform a weighted average of the associated values. Attn_h(q_l,h,k_l,h,v_l,h) = v_l,h softmax(k_l,h^⊤q_l,h/√(d^'/H)) softmax_m,n(ζ) := e^ζ_m,n/∑_i=1^Me^ζ_i,n, ζ∈ℝ^M× N, where the softmax function normalizes a matrix by its columns. The sum of any column of the output is always one. Each attention head operates independently and their outputs are concatenated to get the final output. MHA(x_s^(l-1)) = Concat(Attn_1, Attn_2, ..., Attn_H), where the output has the same shape as the input, ℝ^d^'× T. Following the MHA, we add a LayerNorm and a skip connection. x̃_s,t^(l-1) = LayerNorm(x_s,t^(l-1)) + [MHA(x_s^(l-1))]_t LayerNorm(x_s,t^(l-1)) = (x_s,t^(l-1)-𝔼[x_s,t^(l-1)])/√(Var[x_s,t^(l-1)] + ϵ)⊙γ + β, where LayerNorm normalizes x_s,t^(l-1)∈ℝ^d^' over the d^' elements. ϵ is a small number for numerical stability. γ and β are learnable ℝ^d^' vectors. Next, we pass x̃_s,t^(l-1) to a two-layer feed forward (fully connected) network. FFN(x̃_s,t^(l-1)) = W^FF_l,2 SiLU(W^FF_l,1x̃_s,t^(l-1) + b^FF_l,1) + b^FF_l,2, where the W matrices and b vectors are learnable weights and biases. Consequently, we get the final output of this layer with another LayerNorm and a skip connection. x_s,t^(l) = LayerNorm(x̃_s,t^(l-1)) + FFN(x̃_s,t^(l-1)) §.§.§ Final Projection After L blocks of Transformer, we perform an affine projection on the concatenated x_s,t^(L) and x_s,t^(0).
This serves as a partial skip connection that helps with building deep networks. μ̂_s-1,t = W^oConcat(x_s,t^(0), x_s,t^(L)) + b^o, where W^o and b^o are learnt parameters. μ̂_s-1 is the estimated mean of the denoising distribution p_θ(x_s-1|x_s)=𝒩(μ̂_s-1, β̃_sI), and θ is the collection of all of the learnable parameters above. §.§ Optimal Marginal Calibration All joint distributions comprise two elements: the dependency structure and the marginal distributions. Estimating the dependency structure is generally challenging, whereas the marginal distributions can be straightforwardly and precisely estimated by methods such as the ECDF or simple parametric 1D distributions when prior knowledge is available. Neural network-based models approximate these two elements simultaneously by minimizing the ELBO. In practice, the resulting marginal distributions can have significant discrepancies with the true marginal distributions. To address these inaccuracies, we propose a marginal calibration process utilizing an optimal transport (OT) mapping. This calibration applies minimal alterations, maintaining the original temporal dependency structure while aligning the variables with the accurate marginal distributions. §.§.§ Re-estimate Marginal Distribution First, we acquire a new estimate of the marginal distributions. In the most general case, we exploit the marginal ECDF, which is an unbiased and easily accessible estimate of the true marginal CDF. According to the Glivenko–Cantelli theorem, it converges almost surely to the true distribution. Practically, there is often prior knowledge about the marginal distribution of energy time series data. For example, the marginals of residential electrical energy consumption often follow a log-normal or gamma distribution <cit.>. In such cases, we can perform more accurate and statistically sound estimations by maximum likelihood or maximum a posteriori (MAP). Formally, to obtain the ECDF of the real (training) data, we have F^*_t(ν) = 1/N∑_i=1^N 1_x≤ν(x^(i)_0,t), where 1_x≤ν(x^(i)_0,t) = 1 if x^(i)_0,t≤ν and 0 otherwise, and where x^(i)_0,t is the t-th time step of the i-th sample taken from the training set {x^(i)_0}_i=1^N. In other words, 1_x≤ν(x) is an indicator function that outputs 1 if the input is less than or equal to ν and 0 otherwise. Meanwhile, we also estimate the (inaccurate) marginal distributions of the DDPM. After training our model, we first generate M synthetic samples {x̂^(i)_0}_i=1^M. The ECDF of the DDPM is therefore given by F^'_t(ν) = 1/M∑_i=1^M 1_x̂^(i)_0,t≤ν, where x̂^(i)_0,t is the t-th time step of the i-th generated synthetic sample. §.§.§ Optimal Transport Calibration Next, we seek a way to replace the inaccurate marginal distributions F^'_t with the more accurate ones F^*_t. We find this mapping g_t by solving the following optimization problem min_g_t𝔼_x̂_0,t∼ F_t^'[||x̂_0,t - g_t(x̂_0,t)||_2^2] s.t. ∀x̂_0,t∼ F_t^', g_t(x̂_0,t)∼ F_t^*. The constraint indicates that the mapping g_t must transform a random variable following the distribution F^'_t into a random variable that conforms to the new distribution F_t^*. The objective implies that we seek a mapping g_t close to the identity mapping, as it minimizes 𝔼_x̂_0,t∼ F_t^'[||x̂_0,t - g_t(x̂_0,t)||_2^2]. This problem is also known as the OT problem. The exact solution is called the OT mapping. In general, the OT problem is a complex functional problem. However, in the one-dimensional case, it has been proved that a mapping is the OT mapping if and only if it is monotonically increasing and satisfies the constraint in (<ref>).
In our specific case, we notice that the function F_t^*-1∘ F_t^' satisfies exactly these two conditions. Therefore, the exact OT mapping g_t^* is given by g_t^*(x̂_0,t) = F_t^*-1(F_t^'(x̂_0,t)) ∀ t. For any synthetic sample x̂_0 from the DDPM, the calibration is done as x̂_0,t^'← F_t^*-1(F_t^'(x̂_0,t)) ∀ t. § CASE STUDY §.§ Datasets We show EnergyDiff's flexibility and capability in modeling high-dimensional data by selecting a diverse set of data sources across various energy domains, time resolutions, and at both the customer (household) and transformer levels. We preprocess all datasets by splitting them into daily profiles, and we treat these profiles as independent and identically distributed samples. Depending on the time resolution, the daily profile time series lengths range from 24 at a 1-hour resolution to 1440 at a 1-minute resolution. We categorize the datasets into three classes. First, residential electricity load profiles at the customer level. This type of data comes from the Low Carbon London (LCL) <cit.>, WPuQ <cit.>, and CoSSMic <cit.> projects. Second, residential household heat pump electricity consumption data from the WPuQ project. Third, transformer-level electricity consumption and PV generation data from the WPuQ project. We summarize the selected datasets in Table <ref>. §.§ Evaluation Metrics Because of the high-dimensional nature of time series data, it is difficult to apply conventional probability theory-based measures to examine the joint distribution divergence between the real and generated samples. Nonetheless, several metrics have been established to measure the quality of synthetic data. We give a brief introduction to these metrics here, while the details and equations can be found in <cit.>. §.§.§ Gaussian Fréchet Distance The Fréchet distance (FD) was proposed and adapted to compare the similarity between two probability distributions. Despite the intractability of FD for high-dimensional joint distributions, an analytical solution exists between two multivariate Gaussian distributions. Therefore, we can generalize FD to the Gaussian Fréchet distance (GFD) to measure the similarity between any two multivariate joint distributions, because it simultaneously quantifies the differences in the means and covariance matrices. §.§.§ Maximum Mean Discrepancy (MMD) Maximum Mean Discrepancy is a kernel-based disparity measure. It embeds samples from an arbitrary data space (e.g., ℝ^d) into a reproducing kernel Hilbert space (RKHS) and compares two distributions by the largest difference in expectations over their embeddings in the RKHS. MMD measures both the dependency structure and the marginal distribution through an implicit feature mapping via the kernels. §.§.§ Wasserstein Distance (WD) The Wasserstein distance measures the minimum distance between two distributions under the optimal coupling. A coupling of x and y is a joint distribution over Concat(x,y) whose marginal distributions satisfy ∫_y c(x, y) = p_X(x) and ∫_x c(x, y) = p_Y(y). Taking the infimum means that the coupling c in the Wasserstein distance seeks to connect x and y in the shortest path possible. In other words, WD measures the shortest distance between two distributions. However, it is generally intractable to find such a coupling between two high-dimensional joint distributions.
Therefore, we only compare the Wasserstein distance between their marginal distributions in the following section. §.§.§ Kullback-Leibler (KL) Divergence Unlike WD, KL divergence measures the discrepancy between two distributions from the perspective of information theory. Unfortunately, calculating KL divergence between high-dimensional distributions is also generally not possible. Again, we will evaluate KL divergence between the marginal distributions. §.§.§ Kolmogorov-Smirnov (KS) Statistic The two-sample KS test is a procedure to check whether two underlying one-dimensional distributions differ. It exploits the KS statistic, defined as the largest difference between two CDFs across all x. Similarly, we will evaluate the KS statistic between the marginal distributions, as the KS statistic is not tractable for high-dimensional joint distributions. §.§ Model Setup Due to the versatility of our framework, we use the same setup for all experiments in this section. Specifically, we implement EnergyDiff in PyTorch. The Transformer blocks have L=12 layers and d^'=512 neurons in each block. The number of neurons in the FFN is 4d^'. For training, a learning rate of 0.0001 is used, and we train the neural networks for 50000 iterations with the AdamW optimizer <cit.>. Meanwhile, as a common practice for deep generative models, we keep an exponential moving average (EMA) version of the model weights, and we always use the EMA weights for sampling <cit.>. This can be seen as an extra measure to stabilize the training. In terms of diffusion, we use S=1000 diffusion steps and accelerate the sampling using DPM-Solver <cit.> with 100 steps. A cosine noise schedule is adopted as in <cit.>. We only perform minimal preprocessing on the data, i.e., all data are linearly scaled to [-1,1] using the minimum and the maximum values. We use a 10-component GMM and the t-Copula model <cit.> as our baselines. We selected these two baselines for their strengths in our evaluation metrics. GMM approximates the data's means and covariances, yielding low GFD and MMD scores. In contrast, t-Copula leverages the ECDF of the data, resulting in strong KL, WD, and KS scores. § RESULTS §.§ Customer Level Evaluation §.§.§ Heat Pump Consumption For heat pump data at 1-minute to 1-hour resolutions, the 1-minute case is the most challenging in terms of GFD, as shown in Table <ref>. All three models, EnergyDiff, GMM, and t-Copula, can achieve a GFD at the 1× 10^-3 level at 1 hour, but this number increases significantly to the order of 10 at 1 minute. EnergyDiff shows superiority at the 1-minute resolution, with a 24.2% drop in GFD. EnergyDiff and GMM achieve the best and similar performance in terms of MMD. For the marginal distribution-based metrics, KL, WD, and KS, t-Copula shows the best results in KL and KS, while EnergyDiff achieves the best WD. GMM failed to obtain a decent KL score. Overall, at the 1-minute resolution, EnergyDiff demonstrates the best GFD and MMD together with the best WD score, marking its excellent capability to learn both the temporal dependency and the marginal distributions. While t-Copula and GMM require lower computational costs at the 1-hour or coarser resolutions, they necessitate a meticulous model selection process for each resolution. In contrast, our proposed EnergyDiff can be used universally across all resolutions, eliminating the need to develop separate models for each different time resolution. We present 100 randomly selected real and generated heat pump consumption data samples in Fig.
<ref>, corresponding to the numerical results in Table <ref>. Despite the MMD between GMM and EnergyDiff being similar, we can observe in Fig. <ref> that the synthetic samples of GMM are clearly unrealistic, as they do not contain the periodic temporal patterns of the real data. Meanwhile, the value range (minimum and maximum power) is incorrectly captured by GMM. While the real data has a maximum of around 10 and a minimum of 0, GMM produces unrealistic peaks of over 20 and negative values below -10. On the other hand, EnergyDiff successfully captures both the periodicity pattern and the value range. Furthermore, in Fig. <ref>, we present the histograms of the same heat pump consumption data at the 1-minute resolution. To obtain the histogram, we pool the consumption power values of all time steps. The t-Copula model closely matches the histogram because it is designed to do so. EnergyDiff without calibration can capture the general pattern of the histogram but appears more smoothed, while the calibrated data matches the histogram exactly. In contrast, the GMM model fails to capture the distribution, as it struggles to find the correct support for the distribution and fails to capture the different modes of the distribution. §.§.§ Residential Electricity Load Profile We evaluate EnergyDiff's capability to model electricity load profile data on two datasets: LCL at the 30-minute and 1-hour resolutions, and CoSSMic at the 1-minute resolution. For the LCL dataset, as shown in Table <ref>, our model and GMM achieve similarly good results in MMD and GFD at both time resolutions, while t-Copula performs worse. Although GMM's scores are slightly better than ours, the margin is small. Our model achieves the best results in KL and WD, whereas t-Copula achieves a slightly better KS. Turning to the CoSSMic dataset, our model delivers superior results across all metrics, demonstrating a significant improvement in GFD, reducing it from 2.8779 with GMM to 0.2642. For this dataset, t-Copula failed to converge during the fitting process. As evidenced by the numerical results on these two datasets, EnergyDiff demonstrates strong performance in generating residential electricity load profiles across various time resolutions. §.§.§ Residential PV Generation In Table <ref>, we show the numerical results for the CoSSMic residential PV generation time series data. The t-Copula model failed to converge. Therefore, we only compare GMM and our model. At the 1-minute resolution, our model achieved the best results in three out of the five metrics: GFD, WD, and KS. The difference with GMM regarding MMD is only 0.0001, while the GFD improves from 6.6051 with GMM to 5.7764 with our model. However, at the 15-minute and 30-minute resolutions, our model shows much worse performance in terms of MMD. This is likely due to this dataset's small sample size, as only 408 samples are available for training and 416 samples for evaluation. This suggests that EnergyDiff does not exhibit significant superiority when the training set size is around 400 or lower and the time resolution is 15 minutes or coarser. This disadvantage can be compensated for by pre-training on a larger similar dataset and fine-tuning on the target dataset, as is done in other generative model research such as <cit.>. Our model's strong performance at the 1-minute resolution is likely due to its ability to capture the complex temporal dependencies in long time series data, a capability that GMM lacks.
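The metrics reported in these tables can be reproduced with off-the-shelf tools. The sketch below shows one plausible way to compute GFD and the marginal WD, KL divergence, and KS statistic from matrices of real and synthetic daily profiles; the binning and averaging choices here are our own assumptions rather than the exact evaluation protocol:

```python
import numpy as np
from scipy import linalg, stats

def gaussian_frechet_distance(real, fake):
    """Frechet distance between Gaussians fitted to the two sample sets (rows = samples)."""
    mu_r, mu_f = real.mean(axis=0), fake.mean(axis=0)
    cov_r = np.cov(real, rowvar=False)
    cov_f = np.cov(fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f).real
    return float(((mu_r - mu_f) ** 2).sum() + np.trace(cov_r + cov_f - 2 * covmean))

def marginal_metrics(real, fake, bins=50):
    """Per-time-step Wasserstein distance, KL divergence, and KS statistic, averaged over t."""
    lo = min(real.min(), fake.min())
    hi = max(real.max(), fake.max())
    edges = np.linspace(lo, hi, bins + 1)
    wd, kl, ks = [], [], []
    for t in range(real.shape[1]):
        wd.append(stats.wasserstein_distance(real[:, t], fake[:, t]))
        p, _ = np.histogram(real[:, t], bins=edges)
        q, _ = np.histogram(fake[:, t], bins=edges)
        p = p / p.sum() + 1e-12
        q = q / q.sum() + 1e-12
        kl.append(stats.entropy(p, q))
        ks.append(stats.ks_2samp(real[:, t], fake[:, t]).statistic)
    return float(np.mean(wd)), float(np.mean(kl)), float(np.mean(ks))

real = np.random.rand(500, 1440)   # placeholder real daily profiles
fake = np.random.rand(500, 1440)   # placeholder synthetic daily profiles
print(gaussian_frechet_distance(real, fake))
print(marginal_metrics(real, fake))
```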
§.§ Transformer Level Evaluation Our model can generate high-resolution energy time series data not only at the customer level but also at the transformer level. We present our experimental results on the WPuQ transformer electricity consumption dataset and the WPuQ transformer PV generation dataset. For simplicity, we only tested at the 1-minute resolution, as it is the most challenging case. Both datasets have over 1500 samples for training and evaluation. Tables <ref> and <ref> summarize the numerical results on these two datasets. §.§.§ Residential Electricity Load Profile In the WPuQ transformer case, the proposed EnergyDiff exhibits superior performance in terms of all five metrics, as shown in Table <ref>. Our model achieves 26.09% lower MMD and 11.65% lower GFD than GMM, suggesting that our generated data have a temporal dependency structure more similar to that of the real data. Visualizing the temporal patterns of a 1440-step time series is difficult in general. Therefore, to better understand these high-dimensional time series, we use a dimensionality reduction tool, UMAP <cit.>, to reduce the data to two dimensions. UMAP learns a manifold of the high-dimensional data and re-maps the manifold into the 2D space. As shown in Fig. <ref>, the data generated by EnergyDiff covers the whole manifold of the real data. Meanwhile, the figure demonstrates that our model does not merely replicate the training data points but instead interpolates and extrapolates data that align with the real manifold. However, data generated by GMM span only part of the manifold, indicating a discrepancy between GMM and the real data distribution. This suggests that using GMM synthetic data for energy system operation and planning could over-represent some scenarios and under-represent others, potentially leading to sub-optimal decisions. §.§.§ Residential PV Generation We show the numerical results for the WPuQ transformer PV generation dataset in Table <ref>. We observe that EnergyDiff and GMM achieve very similar performance across all metrics. To further assess whether the synthetic data of both models are of similar quality, we visualize 100 real and synthetic samples from these two models in Fig. <ref>. We notice that both models capture the coarse wave shape of the data, i.e., zero generation during night time and peak generation around noon. However, GMM again shows unrealistic negative values. Meanwhile, most of the peak values of the real data lie around 15 kW, while the peak values of the GMM data are overestimated. §.§ Marginal Calibration Evaluation Next, we demonstrate the effect of the proposed Marginal Calibration. As an example, we pick the WPuQ heat pump dataset at the 1-minute resolution. We compare the numerical evaluation metrics before and after calibration in Table <ref>. We notice that the MMD and GFD scores have slightly degraded but are still competitive, while KL, WD, and KS have improved significantly and have surpassed t-Copula. Furthermore, we randomly select two consecutive time steps to visualize the temporal pattern differences in Fig. <ref>. Here, we have selected the 115-th and 116-th time steps, corresponding to times shortly after midnight. We observe small changes in the scatter plot in the middle. However, the marginal distributions are brought closer to the real data. This is because our calibration is based on OT, which minimizes the changes while guaranteeing that the calibrated data follows the exact empirical CDF of the training data. The remaining mismatch of the marginal distributions is due to the ECDF estimation error from the training data.
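In code, this calibration step reduces to a per-time-step quantile mapping between the generated and training ECDFs. A minimal NumPy sketch, with illustrative array names and shapes, is:

```python
import numpy as np

def calibrate_marginals(x_gen, x_train):
    """Apply g*_t = F*_t^{-1}(F'_t(.)) independently at every time step t.

    x_gen   -- synthetic samples from the trained model, shape (M, T)
    x_train -- real training samples,                    shape (N, T)
    """
    M, T = x_gen.shape
    out = np.empty_like(x_gen)
    for t in range(T):
        # F'_t evaluated at each generated value, via mid-ranks in (0, 1).
        order = np.argsort(x_gen[:, t])
        u = np.empty(M)
        u[order] = (np.arange(1, M + 1) - 0.5) / M
        # F*_t^{-1}(u): empirical quantile function of the training marginal.
        out[:, t] = np.quantile(x_train[:, t], u)
    return out

x_train = np.random.rand(1000, 1440)    # placeholder real daily profiles
x_gen = np.random.randn(4000, 1440)     # placeholder synthetic samples
x_cal = calibrate_marginals(x_gen, x_train)
```

Because this mapping is monotone in each coordinate, it coincides with the one-dimensional OT mapping described earlier while leaving the rank structure of the generated samples, and hence the learned temporal dependencies, untouched.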
This residual mismatch is nevertheless small, as the numerical results in Table <ref> confirm. §.§ Computation Time and Stability The training time of GMM, t-Copula, and EnergyDiff varies significantly. EnergyDiff requires approximately 3 hours for 1-minute resolution data and 1 hour for 1-hour resolution data. Generating 4000 instances of 1-minute data from EnergyDiff takes around 8 minutes, while it only takes seconds for 1-hour data. The GMM is more computationally efficient, with both fitting and sampling taking less than 1 minute. Fitting the t-Copula model at the 1-minute resolution takes approximately 15 minutes but frequently fails due to the numerical instability of the optimization process. Notably, on several datasets, such as the CoSSMic residential electricity load profiles, t-Copula failed to converge despite multiple attempts. We observe that this instability is particularly exacerbated for high-resolution data, such as 1-minute data, and when the training set size is in the thousands or larger. Overall, EnergyDiff and GMM are computationally robust, while GMM is the most computationally efficient model. § CONCLUSION DDPMs are powerful generative models that have become the most popular choice in the image and audio generation domains. However, the standard DDPM has high computation and memory complexity related to the input data size, making it unsuitable for generating high-resolution time series data such as 1440-step 1-minute daily load profiles. Additionally, despite their capability to capture complex dependencies, DDPMs do not necessarily yield precise marginal distributions, which are crucial for accurately representing high-consumption or high-generation scenarios. To address these issues, we proposed EnergyDiff, a DDPM-based universal energy time series generation framework. With a tailored denoising process, EnergyDiff generates high-quality data across different energy domains, at various time resolutions, and at both the customer and transformer levels. Our proposed Marginal Calibration technique ensures that EnergyDiff captures precise marginal distributions.
http://arxiv.org/abs/2407.13171v1
20240718052553
Maximin Fair Allocation of Indivisible Items under Cost Utilities
[ "Sirin Botan", "Angus Ritossa", "Mashbat Suzuki", "Toby Walsh" ]
cs.GT
[ "cs.GT" ]
UNSW Sydney {s.botan, mashbat.suzuki, t.walsh}@unsw.edu.au, a.ritossa@student.unsw.edu.au Maximin Fair Allocation of Indivisible Items under Cost Utilities Sirin Botan1 Angus Ritossa1 Mashbat Suzuki1 Toby Walsh1 ================================================================= § ABSTRACT We study the problem of fairly allocating indivisible goods among a set of agents. Our focus is on the existence of allocations that give each agent their maximin fair share—the value they are guaranteed if they divide the goods into as many bundles as there are agents, and receive their lowest valued bundle. An MMS allocation is one where every agent receives at least their maximin fair share. We examine the existence of such allocations when agents have cost utilities. In this setting, each item has an associated cost, and an agent’s valuation for an item is the cost of the item if it is useful to them, and zero otherwise. Our main results indicate that cost utilities are a promising restriction for achieving MMS. We show that for the case of three agents with cost utilities, an MMS allocation always exists. We also show that when preferences are restricted slightly further—to what we call laminar set approvals—we can guarantee MMS allocations for any number of agents. Finally, we explore if it is possible to guarantee each agent their maximin fair share while using a strategyproof mechanism. § INTRODUCTION How to fairly divide a set of indivisible resources is a problem that has been studied by computer scientists, economists, and mathematicians <cit.>. Because of the fundamental nature of the problem, there is a large number of applications, ranging from course allocations <cit.>, to division of assets <cit.>, and air traffic management <cit.>. Among the fairness notions in the literature, two of the most commonly studied are envy-freeness—how to ensure that no agent envies another—and the maximin fair share, our focus in this paper. The notion of the maximin fair share was introduced by <cit.>, and generalises the well known cut-and-choose protocol. Conceptually, an agent's maximin fair share is the value they can achieve by partitioning the items into as many bundles as there are agents, and receiving their least preferred bundle. The ideal outcome is of course an MMS allocation, where every agent receives at least their maximin fair share. There has been a significant amount of work on MMS in the general additive valuations setting. Unfortunately, results are often quite negative. In general, MMS allocations cannot be guaranteed to exist, even in the case of three agents <cit.>. Furthermore, for instances where MMS allocations do exist (for example, when agents have identical valuations), computing an MMS allocation is NP-hard. As a result, a large body of work has been focused on establishing the existence of MMS allocations in more restricted settings <cit.>. In this paper, we study the problem under a natural class of valuation functions—what we call cost utilities—that allows us to provide fairness guarantees that are not achievable for general additive valuations. Cost utilities describe the setting where each item has an associated cost. An agent's value for any item is the cost of the item if it is useful to them, and zero otherwise. Our focus in this work is on the existence of MMS allocations under cost utilities. We are not the first to study this restriction in the context of fair division.
<cit.> provided an approximation of egalitarian welfare maximisation under cost utilities, which was then improved upon by <cit.> and <cit.>. <cit.> and <cit.> focus on envy-freeness, and show that an EFX allocation always exists under cost utilities[<cit.> call them restricted assignment valuations, while <cit.> call them generalised binary valuations. <cit.> study them under the name restricted additive valuations. We use the term “cost utilities” as we find it conceptually the most appealing and descriptive.]. There are clear practical advantages to studying this particular class of valuations. In many real-life settings, the prices of items are known, and elicitation of preferences boils down to asking an agent whether they want an item or not—a task that can be accomplished easily. Related Work. Given that MMS allocations cannot be guaranteed for general additive valuations, the work done on MMS in fair division has focused on two main approaches to circumvent this impossibility. The first—which is the route we employ in this paper—is to consider a restriction on the valuations of the agents. Examples of such restrictions under which MMS allocations always exist include binary valuations <cit.>, ternary valuations <cit.>—where item values belong to {0,1,2}—and Borda utilities <cit.>. Existence of MMS allocations also holds for personalised bivalued valuations—where for each agent i, the value of an item belongs to {1,p_i} for some p_i∈ℕ—and weakly lexicographical valuations—where each agent values each good more than the combined value of all items that are strictly less preferred <cit.>. The second approach is to examine how close we can get to MMS, meaning how far each agent is from receiving their maximin fair share. An allocation is said to be ρ-MMS if each agent receives a ρ fraction of their MMS value. <cit.> show that for instances with more than five agents, a (3/4+1/12n)-MMS allocation always exists. On the more negative side, <cit.> show that there exist instances such that no allocation is 39/40-MMS. For valuations that are beyond additive, the picture is arguably gloomier. <cit.> show the existence of 1/3-MMS allocations and a PTAS for computing such allocations. They also show that for submodular valuations, there exist instances that do not admit any 3/4-MMS allocation. There have been several works focused on achieving fairness along with strategyproofness. <cit.> show that when there are two agents and m items, there is no truthful mechanism that outputs a 1/⌊ m/2 ⌋-MMS allocation. On the positive side, <cit.> and <cit.> show that when agents have binary valuations, there is a polynomial time computable mechanism that is strategyproof and outputs an MMS allocation along with several other desirable properties. Our Contribution. We know that for some restricted settings—bivalued and ternary valuations—MMS allocations can always be found. <cit.> highlight an open problem regarding the existence of other classes of structured valuations for which an MMS allocation is guaranteed to exist. Our paper answers this in the affirmative for a new class of valuation functions. We first show that MMS allocations exist for three agents under cost utilities, in contrast to the case of general additive utilities. We also show that when valuations are restricted slightly further to laminar set approvals, MMS allocations are guaranteed to exist for any number of agents.
Additionally, for the case of n agents and n+2 items, we show there is a strategyproof polynomial time algorithm for computing Pareto optimal MMS allocations. Interestingly, to the best of our knowledge, our results on cost utilities are the first of their kind: other than for identical valuations, cost utilities are the first class for which the computation of the maximin fair share value is NP-hard while the existence of an MMS allocation is still guaranteed. For previously known classes where an MMS allocation is guaranteed, the computation of the maximin fair share value can be done in polynomial time. Paper Outline. In Section <ref> we introduce the framework of fair division of indivisible items, and present the central preference and fairness notions of the paper. Section <ref> is focused on when we can achieve MMS allocations for cost utilities. Section <ref> looks at a strategyproof mechanism for finding MMS allocations. Section <ref> concludes. § PRELIMINARIES Let N be a set of n agents, and M a set of m indivisible goods (or items). Our goal is to divide M among the agents in N according to their preferences over the items. Preferences. Each agent i ∈ N has a valuation function v_i: 2^M→ℝ_≥ 0 that determines how much they value any bundle of items. For all agents i, we assume that v_i is additive, so v_i(S) = ∑_g ∈ S v_i(g). For singleton bundles, we write v_i(g) in place of v_i({g}) for simplicity. We write v = (v_1, …, v_n) to denote the vector of all valuation functions for agents in N. Our focus in this paper is on a restricted domain of preferences—cost utilities. For these preferences, it is easy to think of each agent as submitting an approval set. Let A_i be the approval set of agent i. More formally, we say A_i = {g ∈ M | v_i(g) > 0}. We say agents have cost utilities if there exists a cost function c such that v_i(S) = c(S ∩ A_i) for all S ⊆ M and all agents i ∈ N. We require that the cost function is additive, as well as non-negative. Allocations and Mechanism. An allocation B = (B_1, …, B_n) is an n-partition of the set of items M, where B_i ⊆ M is the bundle assigned to agent i under the allocation B. We write B|_N' to denote the restriction of the allocation B to only the bundles assigned to agents in N' ⊆ N. For a set of goods M, we write ℬ_n(M) to mean all possible allocations of the goods in M to n agents. An instance ℐ = (N,M,v) of a fair division problem is defined by a set of agents, a set of goods, and the agents' valuations over those goods. Given an instance ℐ, our goal is to find an allocation that satisfies certain normative properties. An allocation mechanism for n agents and m items is a function f: V_n →ℬ_n(M), where V_n is the set of possible valuation profiles—i.e. vectors of n valuation functions. Fairness and Efficiency. For an agent i ∈ N, their maximin fair share in an instance ℐ= (N,M,v) is defined as MMS^n_i(ℐ) = max_B∈ℬ_n(M)min_j ∈ N v_i(B_j). We sometimes write MMS^n_i(M) when the instance is clear from context. When the set of goods and the value of n are fixed, we will also sometimes write MMS_i. An MMS allocation B∈ℬ_n(M) is an allocation such that v_i(B_i) ≥ MMS_i for all agents i ∈ N. We say an allocation B∈ℬ_n(M) is Pareto efficient if there is no allocation B' ∈ℬ_n(M) such that v_i(B'_i) ≥ v_i(B_i) for all i ∈ N and v_i^*(B'_i^*) > v_i^*(B_i^*) for some i^*∈ N. § MAXIMIN FAIR SHARE GUARANTEES In this section, we will look at two settings where cost utilities can aid in finding cases where MMS allocations can be guaranteed to exist. Section <ref> focuses on cases with only three agents.
Section <ref> considers any number of agents but is limited to laminar approval sets. This is a restriction that captures the idea of items belonging to different categories. §.§ MMS Allocations for Three Agents For the case of three agents, restricting our scope to considering only cost utilities yields positive results. As we have seen in the introduction, this is not the case for general additive preferences. Theorem <ref> is therefore a very welcome result. In this section, we will sometimes speak about items approved exclusively by two agents. We denote by A_ij = (A_i ∩ A_j) ∖ A_i^*—where i^* ∈ N and i^* ≠ i,j—the set of items approved by agents i and j, and no third agent. Before we state our main result in this section, we present the following two lemmas that we need in order to prove Theorem <ref>. Our first lemma simply tells us that adding items approved only by a single agent does not affect the existence of an MMS allocation. If an MMS allocation exists for instance ℐ = (N, M, v), then an MMS allocation also exists for the instance ℐ' = (N, M ∪ S, v), where S is a set of items approved by a single agent i ∈ N, and S∩ M = ∅. Suppose we have an instance ℐ = (N, M, v) where B is an MMS allocation. Suppose further that ℐ' = (N, M ∪ S, v) is an instance where S is a set of items approved by a single agent i ∈ N, and S∩ M = ∅. We show that B', where B'_j = B_j for all j ≠ i and B'_i = B_i ∪ S, is an MMS allocation. Since for any j≠ i we have v_j(B_j)≥ MMS_j^n, we only need to show that agent i gets her maximin fair share. Suppose for contradiction that we have v_i(S) + MMS^n_i (M) < MMS^n_i (M ∪ S). Let W = (W_1, …, W_n) be an n-partition of (M ∪ S) such that v_i(W_k) ≥ MMS^n_i (M ∪ S) for 1≤ k ≤ n. Note that for any W_k in the partition we have that W_k=(W_k∩ M)∪ (W_k∩ S). Thus we have the following: MMS^n_i (M∪ S) ≤ v_i(W_k) = v_i(W_k∩ M)+ v_i(W_k∩ S) ≤ v_i(W_k∩ M) + v_i(S) < v_i(W_k∩ M)+ MMS^n_i (M∪ S) - MMS^n_i(M), where the last inequality follows from our assumption that v_i(S) < MMS^n_i (M ∪ S) - MMS^n_i (M). It follows that v_i(W_k∩ M)> MMS^n_i (M). As k was chosen arbitrarily, this implies the existence of a partition of M into n sets (W_k∩ M)_k∈ [n] such that each set has value strictly larger than MMS^n_i (M), a contradiction. Our second lemma is a more technical one. In yet another simplification of notation, we write μ_ij = MMS^2_i(A_ij) = MMS^2_j(A_ij) to mean the maximin fair share of agents i and j when dividing exactly the goods only the two of them approve among themselves. Let N = {1,2,3}, and let S = (S_1, S_2, S_3) be a 3-partition of A_1 such that v_1(S_r) ≥ MMS_1 for all r ∈{1,2,3}. Then there exist distinct k, ℓ∈{1,2,3} such that c(S_k ∩ A_12) ≤μ_12, and c(S_ℓ∩ A_13) ≤μ_13.
Note that, by the definition of the maximin fair share, there cannot be two elements k_1, k_2 ∈{1,2,3} such that c(S_k_1∩ A_12) > μ_12 and c(S_k_2∩ A_12) > μ_12—this would imply that we could divide A_12 into two bundles such that both agents 1 and 2 are guaranteed strictly more than μ_12. Therefore, there must exist at least two distinct k, k' ∈{1,2,3} such that both c(S_k ∩ A_12) ≤μ_12 and c(S_k'∩ A_12) ≤μ_12. The same argument tells us there are distinct ℓ, ℓ' ∈{1,2,3} such that c(S_ℓ∩ A_13) ≤μ_13 and c(S_ℓ'∩ A_13) ≤μ_13. Applying a pigeonhole argument, we conclude there must be distinct k, ℓ∈{1,2,3} such that c(S_k ∩ A_12) ≤μ_12 and c(S_ℓ∩ A_13) ≤μ_13, as desired. We are now ready to state the main result of this section. For three agents with cost utilities, there always exists a Pareto efficient MMS allocation. Given a set of agents N={1,2,3}, let MMS_i = MMS_i^3(M)—the maximin fair share of agent i when dividing the items in M among the three agents. We assume that for any item g ∈ M, we have that g is approved by at least two agents. By Lemma <ref>, we know the claim will also hold for the remaining cases where there are additional goods approved by a single agent. Finally, we define the following three values: q_1 = MMS_1 + μ_23, q_2 = MMS_2 + μ_13, q_3 = MMS_3 + μ_12. Without loss of generality, we assume that q_1 ≥ q_2 and q_1 ≥ q_3. We can rewrite this, and express it as follows: MMS_1 + μ_23 - μ_13≥ MMS_2 and MMS_1 + μ_23 - μ_12≥ MMS_3. Our method for finding an allocation that satisfies the maximin property and is Pareto efficient takes as its basis a partition of the goods in which each bundle reaches the maximin fair share of agent 1. Let S = (S_1, S_2, S_3) be a 3-partition of A_1 such that v_1(S_r) ≥ MMS_1 for all r ∈{1,2,3}. Note that such a partition always exists by the definition of MMS_1. By Lemma <ref> we know there exist distinct k, ℓ∈{1,2,3} such that c(S_k ∩ A_12) ≤μ_12 and c(S_ℓ∩ A_13) ≤μ_13. We can now describe the allocation B, which we claim is a Pareto efficient MMS allocation. We divide A_23 into two disjoint sets T_1 and T_2 such that c(T_1) ≥μ_23 and c(T_2) ≥μ_23. Note that such a partition exists by the definition of μ_23. Let S_x be the third bundle in S—i.e. x ∈{1,2,3}∖{k,ℓ}. We then allocate the goods in M as follows: B_1 = (S_ℓ∖ A_2) ∪ (S_k∖ A_3) ∪ S_x, B_2 = (S_ℓ∩ A_2) ∪ T_1, B_3 = (S_k ∩ A_3) ∪ T_2. In words, agent 2 receives T_1 and everything in S_ℓ that she wants, agent 3 receives T_2 and everything in S_k that she wants, and agent 1 receives the remaining items in S_k and S_ℓ as well as the entire bundle S_x. Note that all items have been allocated, as A_1 ∪ A_23 = M, and no item is allocated to more than one agent, as S_x and A_23 are disjoint. By definition, we have that v_1(B_1) ≥ MMS_1—agent 1 clearly receives their maximin fair share as she receives one of the original bundles, S_x, and then some. We now show that the same must hold for the other two agents. For agent 2, we need to show that v_2(B_2) ≥ MMS_2. Note that we can express the value of agent 2's bundle using the cost function c as follows (where S_ℓ∩ A_13 is the portion of S_ℓ that agent 2 values at 0).[This is possible because we know that any good in M is approved either by all three agents or by exactly two of them. Agent 2 is a member of any pair of agents except the pair corresponding to A_13.] v_2(B_2) = v_2(S_ℓ∩ A_2) + v_2(T_1) = c(S_ℓ∩ A_2) + c(T_1) = c(S_ℓ) - c(S_ℓ∩ A_13) + c(T_1) Because of the way we have defined the partition S and the set A_13, we know that c(S_ℓ) ≥ MMS_1 and c(T_1) ≥μ_23. Additionally, by Equation <ref>, we know that c(S_ℓ∩ A_13) ≤μ_13.
Combining these facts, we can conclude the following, where the last inequality follows from Equation <ref>:

v_2(B_2) = c(S_ℓ) - c(S_ℓ ∩ A_13) + c(T_1) ≥ MMS_1 - μ_13 + μ_23 ≥ MMS_2.

Putting this all together, we have shown that v_2(B_2) ≥ MMS_2, as desired. The proof for agent 3 proceeds analogously, using Equations <ref> and <ref>. Thus, we have shown that B is an MMS allocation. Finally, we see that no item has been allocated to an agent who values it at 0, meaning the allocation is indeed Pareto efficient.

Theorem <ref> establishes a clear improvement when dealing with cost utilities over general additive valuations.

§.§ MMS Allocations for Laminar Set Approvals

In this section we present our results for agents with laminar set approvals. This restriction on the agents' preferences has a very natural interpretation, in that it describes the notion of items falling into categories and subcategories quite well. We can think of agents as approving categories as a whole. For example, one agent might want all vegetarian dishes, while another wants only the seafood. A third agent might want the pasta-based vegetarian dishes, which would constitute a subcategory of vegetarian.

We say agents with cost utilities have laminar set approvals if for a vector A = (A_1, …, A_n) of approval sets, we have that for any i, j ∈ N, either A_i ∩ A_j = A_j, A_i ∩ A_j = ∅, or A_i ∩ A_j = A_i. In words, for any two agents, one approval set is either a subset of the other, or the sets are disjoint. Note that in this paper, we only examine laminar set approvals within the context of cost utilities.

(Figure: on the left, a laminar family of approval sets M = A_1, …, A_i^*, …, A_k, A'_i, with the sets A_i and B'_i^* highlighted; on the right, an example of a laminar category tree for a clothing store, with top-level categories New and Vintage and subcategories such as Tops, Bottoms and Accessories.)

We first present a technical lemma that we will apply inductively in the proof of Theorem <ref>. Lemma <ref> allows us to carry the existence of an MMS allocation from cases where all agents submit the whole set M of goods as their approval, to cases where fewer and fewer agents do so, until we reach a single agent approving all goods.

For n agents with cost utilities and laminar set approvals, and k ≥ 1, if an MMS allocation exists for all instances where k+1 agents approve all items in M, then an MMS allocation exists for any instance where k agents approve all items.

Consider an instance ℐ = (N, M, v) where there are k ≥ 1 agents whose approval set equals M. We call this set of agents N'. Let i ∈ N ∖ N' be an agent such that A_i ⊄ A_j for all j ∈ N ∖ N' in the instance ℐ. Note that such an agent must exist, as agents have laminar set approvals. See Figure <ref> for a visual representation.
We will continue to use this figure throughout this proof. Our aim is to show that there exists an MMS allocation for the instance ℐ. To this end, we define a second instance ℐ' = (N, M, v') such that A'_i = M, and A'_j = A_j for all agents j ≠ i—i.e. the instance ℐ' only differs from ℐ in that agent i now approves all items. Thus, we have k+1 agents whose approval set is M in the instance ℐ'. Suppose B' is an MMS allocation for ℐ'; such an allocation is guaranteed to exist by the assumption of the lemma. We construct an MMS allocation B for our initial instance by building on B'. We first define i^* ∈ argmax_j ∈ N' ∪ {i} v_i(B'_j). This is an agent who gets the highest value bundle in B'|_N' ∪ {i} according to v_i—agent i's valuation in the initial instance. Because the value n is fixed, we will write MMS_i(ℐ) to mean MMS^n_i(ℐ). We consider two cases.

Case 1: Suppose v_i(B'_i^*) ≥ MMS_i(ℐ). Then agent i values agent i^*'s bundle at least as much as their maximin fair share in the initial instance. We define an allocation B and claim that it is an MMS allocation for the instance ℐ:

B_j = B'_i^* if j = i,  B_j = B'_i if j = i^*,  B_j = B'_j otherwise.

First note that for any agent j ∉ {i, i^*}, their maximin fair share is the same across both instances, and they receive the same bundle under B and B'. Thus, they receive at least their maximin fair share in the allocation B. We now show the same holds for i and i^*. For agent i, this follows by assumption, since v_i(B_i) = v_i(B'_i^*) ≥ MMS_i(ℐ). For agent i^*, we only need to consider the case i^* ≠ i. In that case, as i^* ∈ N', we have that A_i^* = A'_i = M. Then agent i^* must also receive their maximin fair share in the allocation B, because v_i^*(B_i^*) = v'_i(B'_i) ≥ MMS_i(ℐ') = MMS_i^*(ℐ). Note that this holds because the agents have cost utilities, and both v_i^* and v'_i are equivalent to the cost function c since A_i^* = A'_i = M. As B guarantees everyone at least their maximin fair share, it is an MMS allocation for ℐ.

Case 2: Suppose instead that v_i(B'_i^*) < MMS_i(ℐ). In this case, agent i values agent i^*'s bundle strictly less than their maximin fair share in the initial instance. Recall that v_i(B'_j) ≤ v_i(B'_i^*) for all j ∈ N' ∪ {i}—agent i^*'s bundle is still the “best” one among those in B'|_N' ∪ {i}. Given our initial assumption, we then have that v_i(B'_j) < MMS_i(ℐ) for all j ∈ N' ∪ {i}. Before we proceed, we will need to define a third instance over only the goods in A_i. Let ℐ^* = (N, A_i, v) be the restriction of the instance ℐ to only the items in A_i—meaning A^*_j = A_j ∩ A_i for all j ∈ N. Note that in ℐ^*, there are at least k+1 agents whose approval set is A_i—the initial k agents who approved all items in ℐ, and agent i. Let B'' be an MMS allocation for ℐ^*. We now proceed with defining an allocation B by using both allocations B' and B''. In particular, we define B_j = (B'_j ∖ A_i) ∪ B''_j for all j ∈ N. Note that no item is allocated more than once because B''_j ⊆ A_i for all j ∈ N. We claim that B is an MMS allocation for the instance ℐ. Because agents have laminar set approvals, there are three possible cases for any agent j: either i) A_j ⊆ A_i, or ii) A_j ∩ A_i = ∅, or iii) A_i ⊂ A_j. See Figure <ref> for a visual representation.

i) Suppose A_j ⊆ A_i. Then agent j was only approving items in A_i and their approval set remains the same in the restriction ℐ^*, implying that their maximin fair share also remains the same in both instances. Additionally, we have that v_j(B'_j ∖ A_i) = 0 given that A_j ⊆ A_i, and so v_j(B_j) = v_j(B''_j).
Since j receives their maximin fair share in B'', they also do so in B.

ii) Suppose instead A_j ∩ A_i = ∅. Because agent j does not approve any items in A_i, we have that v_j(B'_j) = v_j(B'_j ∖ A_i) and v_j(B''_j) = 0. Then v_j(B_j) = v_j(B'_j), and because A'_j = A_j their maximin fair share is the same in ℐ and ℐ'. Thus j receives their maximin fair share in B.

iii) Finally, suppose A_i ⊂ A_j. This is only possible if j ∈ N', meaning j is one of the agents approving all items. We know that

v_j(B_j) = v_j(B'_j ∖ A_i) + v_j(B''_j) = v_j(B'_j) - v_j(B'_j ∩ A_i) + v_j(B''_j) = v_j(B'_j) - v_i(B'_j) + v_j(B''_j),

where the last step follows from the fact that agents have cost utilities, meaning v_j(B'_j ∩ A_i) = v_i(B'_j). Recall that Equation <ref> tells us v_i(B'_j) < MMS_i(ℐ). This fact, combined with Equation <ref> (and some reshuffling of the terms), tells us it must be the case that v_j(B_j) > v_j(B'_j) - MMS_i(ℐ) + v_j(B''_j). Since B'' is an MMS allocation for ℐ^*, it follows that v_j(B''_j) ≥ MMS_j(ℐ^*). Further, since A_i ⊂ A_j, and ℐ^* is an instance over only A_i, we have that MMS_j(ℐ^*) = MMS_i(ℐ). Thus, v_j(B''_j) ≥ MMS_i(ℐ). We can then transform Equation <ref> as follows: v_j(B_j) > v_j(B'_j) - MMS_i(ℐ) + MMS_i(ℐ), meaning it must be the case that v_j(B_j) > v_j(B'_j). Because we know agent j has identical valuations in ℐ and ℐ', and B' is an MMS allocation, we can conclude that agent j receives at least their maximin fair share in B.

Thus we have shown for any agent j ∈ N that they receive their maximin fair share in the allocation B, meaning it must be an MMS allocation. Since ℐ was an arbitrary instance where exactly k agents submit the approval set M, this concludes the proof.

We can now (finally) present the main result of this section.

For n agents with cost utilities and laminar set approvals, there always exists an MMS allocation.

First, note that given agents with laminar set approvals, if no agent has M as their approval set, then we can find a k-partition (N_1, …, N_k) of the agents and pairwise disjoint subsets M_1, …, M_k of items such that agents in N_ℓ do not approve any items in M ∖ M_ℓ, and there is an agent i ∈ N_ℓ such that A_i = M_ℓ. It is clear—because the agents are partitioned such that each group considers a distinct set of items from M—that if we find an MMS allocation for each of the k sub-cases, this gives us an MMS allocation in the global case. Therefore, without loss of generality, we assume for any instance that at least one agent submits M as their approval set. Now suppose there are n agents with cost utilities who all submit M as their approval set. Then an MMS allocation trivially exists. Applying Lemma <ref> inductively, we see that for agents with cost utilities and laminar set approvals, an MMS allocation always exists given that at least one agent submits M as their approval set.

If an MMS allocation exists, then an MMS and PO allocation always exists, since every Pareto improvement weakly increases each agent's utility. Thus, Theorem <ref> implies that under cost utilities and laminar set approvals, an MMS and PO allocation always exists.

§ STRATEGYPROOF MMS ALLOCATIONS

In this section, we study the strategic guarantees possible under cost utilities. We first show that for cost utilities, the Sequential Allocation mechanism is strategyproof. Let us first define what we mean by strategyproofness.

An allocation mechanism f is manipulable if there is some agent i ∈ N such that v_i(f(v_-i, v'_i)_i) > v_i(f(v)_i), where (v_-i, v'_i) is the valuation profile that results when v_i is replaced by v'_i.
In other words, agent i can misrepresent their preferences by submitting an untruthful valuation v'_i, thereby getting a more preferred outcome. We say f is strategyproof if it is not manipulable by any agent. We now define the Sequential Allocation mechanism from previous studies <cit.>. We first define a picking sequence as a sequence of agents in N. Note that the sequence of agents can be of any length, and any agent might appear multiple times in the sequence. We can think of Sequential Allocation as proceeding sequentially (as the name indicates), through the ordering of agents. At each step, the agent whose turn it is chooses the item with the highest cost that a) is still available and b) is in their approval set. Note that we “force” agents to pick their most wanted item, as reported in their approvals. If there are no remaining items that an agent finds useful then we skip this agent and continue with the next. The mechanism allows some items to remain unallocated only if they are not approved by any agent. In fact, Sequential Allocation is a family of mechanisms, each defined by the picking sequence. As we will see, the properties of the mechanism also heavily depend on the picking sequence in question. For example, it is well known that Sequential Allocation is not strategyproof in general unless an agent’s picks are all consecutive <cit.>. In the rest of this section, we will assume that the goods in M = {g_1, …, g_m} are ordered from lowest cost to highest cost—i.e. c(g_k) ≤ c(g_ℓ) for all k < ℓ. For agents with cost utilities, there exists a picking sequence such that Sequential Allocation is strategyproof and results in a Pareto efficient allocation.[We prove Proposition <ref> for a picking sequence used in the proof of Proposition <ref>, but note that there are simpler picking sequences for which it holds.] We define a sequence S of agents of length n+2, and a sequence T of agents where every agent appears exactly once. Let S = 1, 2, …, n-1, n, n, n, and T = n, n-1, ..., 2, 1. Our picking sequence is S, followed by m copies of each element in the sequence T. We can think of this as running through S, then letting each agent in T choose all the items they want when it is their turn in T. We now show that this gives us a strategyproof mechanism. It is immediately clear that agent n has no incentive to manipulate. They cannot move themselves up in the picking sequence, and once it is their turn, they can essentially grab all the items they want. For any other agent i ∈ N, let X_i be the items remaining immediately before agent i received their first item, and let x be the item with highest cost in X_i ∩ A_i. Then, agent i receives x. After this, all items in the approval sets of agents i+1, i+2, ..., n are allocated before agent i receives all remaining items in A_i. Thus, agent i receives the bundle x ∪ (A_i ∩ (X_i ∖ (A_i+1∪ A_i+2∪ ... ∪ A_n ))). Note that the preferences of agent i do not decide the set X_i. Hence, by misreporting, agent i is unable to gain any additional items that they approve. Note that the final allocation is Pareto optimal because items are only allocated to agents that want them. As agents have cost utilities, all agents who want an item will value it the same. This concludes the proof. We now consider whether there are picking sequences that can give us an MMS allocation along with truthfulness for a restricted number of items. 
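For reference, here is a small Python sketch of the Sequential Allocation mechanism under cost utilities, together with one reading of the picking sequence used in the proof above. The function names, the dictionary-based interface and the numbers in the usage lines are illustrative assumptions, not taken from the paper.

```python
def sequential_allocation(picking_sequence, cost, approvals):
    """At her turn, an agent is forced to pick the most expensive remaining item
    she approves; agents with no useful item left are simply skipped."""
    remaining = set(cost)
    bundles = {i: set() for i in approvals}
    for agent in picking_sequence:
        wanted = remaining & approvals[agent]
        if wanted:
            pick = max(wanted, key=lambda g: cost[g])
            bundles[agent].add(pick)
            remaining.remove(pick)
    return bundles  # items approved by nobody stay unallocated

def picking_sequence_from_proof(n, m):
    """S = 1,...,n-1,n,n,n followed by m consecutive turns for each agent of
    T = n,n-1,...,1 (our reading of the sequence described in the proof)."""
    tail = [agent for agent in range(n, 0, -1) for _ in range(m)]
    return list(range(1, n)) + [n, n, n] + tail

# illustrative usage with 3 agents and 4 items
cost = {"g1": 1, "g2": 2, "g3": 3, "g4": 4}
approvals = {1: {"g1", "g2"}, 2: {"g2", "g3"}, 3: {"g3", "g4"}}
print(sequential_allocation(picking_sequence_from_proof(3, len(cost)), cost, approvals))
```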
Such a restriction on the number of items is needed because computing an agent's MMS value is NP-hard for an arbitrary number of items, which implies that no picking sequence is guaranteed to output an MMS allocation.[This is under the assumption P ≠ NP.] We start with a lemma that will be used to prove Proposition <ref>.

For n agents and n+2 goods, let |A_i| = n+k where k ∈ {0,1,2}. The (n-k)-th most valuable item in A_i is guaranteed to give agent i their maximin fair share.

Note that in any n-partition of the items in A_i, there are at most k bundles that are not singletons, meaning at least (n-k) of the bundles consist of a single item. In a partition witnessing agent i's maximin fair share, every bundle—and in particular each of these (n-k) singletons—has cost at least MMS_i. Since the cheapest of these (n-k) singleton items can cost at most as much as the (n-k)-th most valuable item in the agent's approval set, that item alone already gives agent i their maximin fair share.

For n agents with cost utilities, and n+2 goods, there exists a picking sequence such that Sequential Allocation is strategyproof, and returns a Pareto efficient MMS allocation.[We prove Proposition <ref> for a picking sequence used in the proof of Proposition <ref>, but note that there are simpler picking sequences for which it holds.]

We first show that there is a picking sequence such that Sequential Allocation returns an MMS allocation. If an agent approves fewer than n items, they still receive their maximin fair share even when no items are allocated to them. We therefore focus on agents who approve at least n items. We define the picking sequence based on the cost of the items in M.

▸ If c(g_4) > c(g_2) + c(g_3), our picking sequence is 1, 2, …, n-1, n, n, n.
▸ Otherwise, our picking sequence is 1, 2, …, n-1, n, n, n-1.

Note that these differ only in who gets to pick the last item. The fact that agents 1 through n-2 are guaranteed their maximin fair share for both picking sequences follows from Lemma <ref>. It remains to show that the same holds for agent n-1 and agent n. If agent n-1 or agent n approves at most n items, then we already know they are guaranteed their maximin fair share. If agent n-1 approves n+1 items, their (n-1)-th most valuable item is still up for grabs, and by Lemma <ref> this will guarantee them their maximin fair share. We now consider what happens when agent n approves n+k items—for k ∈ {1,2}—and when agent n-1 approves n+2 items. We look at each potential picking sequence separately.

Case 1: Suppose c(g_4) > c(g_2) + c(g_3). If agent n approves n+k items, they will receive at least k+1 items, as they pick last and can pick up to three items if they want, given the picking sequence 1, 2, …, n-1, n, n, n. Clearly a bundle of size k+1 guarantees them their maximin fair share. What remains is to check what happens when agent n-1 approves all items in M, so suppose this to be the case. We first show that the maximin fair share of agent n-1 is min(c({g_1, g_2, g_3}), c(g_4)). Consider a partition of M into n bundles, where c(B_i) ≥ MMS^n_n-1(M) for each i ∈ N. At least n-2 of these bundles must contain a single item, and so we know that either i) n-2 bundles contain one item and two bundles contain two items, or ii) n-1 bundles contain one item and one bundle contains three items. We know that c(g_4) > c(g_2) + c(g_3) by assumption, and the non-singleton bundles will be made up of the four lowest value items—g_1, …, g_4. Then the best we can do is one 3-item bundle B = {g_1, g_2, g_3} and all the remaining items in singleton bundles. It follows that the maximin fair share of agent n-1 is min(c(B), c(g_4)).
When it is agent n-1's turn to pick, in the worst case, the only remaining goods will be g_1, …, g_4, in which case agent n-1 can pick item g_4 to guarantee their maximin fair share.

Case 2: Suppose instead that c(g_4) ≤ c(g_2) + c(g_3). If agent n approves n+1 items, they will receive two items, guaranteeing them their maximin fair share. If agent n approves all items in M, their maximin fair share in this case is determined by the lowest value bundle among the two bundles of size two and the cheapest singleton. In particular, agent n's maximin fair share is min(c({g_1, g_4}), c({g_2, g_3}), c({g_5})). With this picking sequence, agent n receives two items and, in the worst case, this will be the bundle B = {g_2, g_3}. Clearly this guarantees agent n their maximin fair share. Finally, we look at when agent n-1 approves n+2 items. In this case, we know that their maximin fair share is determined by min(c({g_1, g_4}), c({g_2, g_3}), c({g_5})), as was the case for agent n. As we did for agent n, we know that agent n-1 will receive two items, and in the worst case this will be the bundle B = {g_4, g_1}, which gives the agent their maximin fair share.

Strategyproofness and Pareto efficiency for the first case follow directly from Proposition <ref>. We now prove strategyproofness and Pareto efficiency for the second case, where c(g_4) ≤ c(g_2) + c(g_3). In this case, our picking sequence is 1, 2, …, n-1, n, n, n-1. For any agent i ∈ N, if i < n-1, it is clear that there is no way for the agent to manipulate, as they only get one pick. For agent n, because their picks are right after each other, they also have no incentive to manipulate. Thus, we need only consider agent n-1. Let X be the items remaining immediately before agent n-1 received their first item, and let x be the item with highest cost in X ∩ A_n-1. Agent n-1 will pick x by definition of the mechanism. Agent n then receives their two highest valued remaining items if they exist (call these items y and y'), and then finally agent n-1 potentially receives the last item they approve (call this item z). First, consider the case where agent n-1 misreports that they approve some item x', and they receive x' instead of x. Then, the bundle of agent n-1 will consist of x' (which they value at 0), and potentially some other item z' with v_n-1(z') ≤ v_n-1(x). Thus, agent n-1 is not better off in this case. Otherwise, if agent n-1 instead misreports that they do not approve item x, then they will pick some other item x'' instead, where c(x'') ≤ c(x). If x'' ≠ y and x'' ≠ y', then we must have v_n-1(x'') ≤ v_n-1(z), and so agent n-1 is not better off. Otherwise, if x'' = y or x'' = y', then agent n-1 will have strictly fewer options for their final pick (compared to the case where they do not misreport), and so they are still not any better off. Therefore, the mechanism is strategyproof. It is clear that no agent is assigned an item they do not want, and all items that are wanted by at least one agent are assigned to someone. Thus the allocation is Pareto efficient.

We remark that Proposition <ref> is tight in the sense that it no longer holds when there are n agents and n+3 items.

For agents with cost utilities, there exists an instance with n=2 agents and m=5 goods such that no strategyproof mechanism can guarantee a Pareto efficient MMS allocation.

Let n = 2, and M = {g_2, g_3, g_4, g_5, g_6} such that c(g_i) = i.
We will show that no allocation mechanism can satisfy strategyproofness while also guaranteeing a Pareto efficient MMS allocation. Our aim is to start from an instance ℐ_1 and—by repeatedly applying the three axioms—reach a contradiction. First, consider the instance ℐ_1, where both agents approve all items—this corresponds to the top row of Table <ref>. Then, their maximin fair share is 10, and the only way to reach an MMS allocation is to give g_4 and g_6 to one agent, and g_2, g_3 and g_5 to the other. Suppose without loss of generality that {g_2, g_3, g_5} is allocated to agent 1, and {g_4, g_6} is allocated to agent 2. We will consider 5 further instances.

ℐ_2 differs only on agent 2's approval set—they now only approve items g_4, g_5, and g_6. By strategyproofness, agent 2 must still receive a bundle she values at 10. If this were a higher value, the agent could manipulate from ℐ_1 to ℐ_2, and if it were lower, they could manipulate from ℐ_2 to ℐ_1.

Instance ℐ_3 differs from instance ℐ_2 only on agent 1's approval set—they now only approve items g_3, g_4, g_5, and g_6. As agent 1 is the only one approving item g_3, they must be allocated this item by Pareto efficiency. The maximin value of agent 1 in this instance is 9, so they must receive one of the following bundles: {g_3, g_6}, {g_3, g_4, g_5}, {g_3, g_4, g_6}, {g_3, g_5, g_6}, or {g_3, g_4, g_5, g_6}. All but {g_3, g_6} break strategyproofness, as agent 1 would have an incentive to manipulate from ℐ_2 to ℐ_3.

Instance ℐ_4 differs from instance ℐ_3 only on agent 1's approval set—they now only approve item g_6. Agent 1 must be allocated g_6. If this were not the case, they would have an incentive to manipulate from ℐ_4 to ℐ_3, as they do receive item g_6 in that instance.

Instance ℐ_5 differs from instance ℐ_4 only on agent 1's approval set—they now approve items g_2, g_3 and g_6. As agent 1 is the only one approving items g_2 and g_3, they must be allocated these items by Pareto efficiency. If agent 1 is not also given g_6, they would have an incentive to manipulate from ℐ_5 to ℐ_4, as their bundle in that instance is valued at 6 (which is greater than 2+3, the value of the bundle {g_2, g_3}). Note that this gives them a bundle valued at 11.

Finally, instance ℐ_6 differs from instance ℐ_5 only on agent 1's approval set—they now approve all items. If agent 1 is given a bundle valued lower than 11, they would have an incentive to manipulate from ℐ_6 to ℐ_5. Note however that ℐ_6 = ℐ_2, and our axioms dictated in that instance that agent 2 receives a bundle she values at 10, namely {g_4, g_6}, which by Pareto efficiency leaves agent 1 with {g_2, g_3, g_5} and thus a utility of 10. This gives us our contradiction.

§ CONCLUSION

Fair division of indivisible resources is a challenging yet important problem with wide-ranging applications. In this paper, we have established that cost utilities are a useful restriction to study, especially in the context of MMS allocations. We have shown that there are several classes of instances where MMS allocations always exist under cost utilities. We have also shown that cost utilities are helpful in circumventing problems of strategic manipulation. The topic of MMS allocations in general, and for cost utilities in particular, poses many challenging questions. One might consider various fair division problems with constraints under cost utilities. A prime example is cardinality constraints—or more generally, budget constraints—which are quite natural in this setting. Our work serves as a further indication that fair division under cost utilities is a fruitful research direction.

§.§.§ Acknowledgements.
This project was partially supported by the ARC Laureate Project FL200100204 on “Trustworthy AI”.
http://arxiv.org/abs/2407.13323v1
20240718092252
Optimizing VGOS observations using an SNR-based scheduling approach
[ "Matthias Schartner", "Bill Petrachenko", "Mike Titus", "Hana Krásná", "John Barrett", "Dan Hoak", "Dhiman Mondal", "Minghui Xu", "Benedikt Soja" ]
astro-ph.IM
[ "astro-ph.IM" ]
[1]Matthias Schartnermschartner@ethz.ch 2]Bill Petrachenkowtpetra@gmail.com These authors contributed equally to this work. 3]Mike Titusmatitus@mit.edu These authors contributed equally to this work. 4]Hana Krásnáhana.krasna@geo.tuwien.ac.at These authors contributed equally to this work. 3]John Barrettbarrettj@mit.edu These authors contributed equally to this work. 3]Dan Hoakdhoak@mit.edu These authors contributed equally to this work. 3]Dhiman Mondaldmondal@mit.edu These authors contributed equally to this work. 5]Minghui Xuminghui.xu@gfz-potsdam.de These authors contributed equally to this work. 1]Benedikt Sojasoja@ethz.ch These authors contributed equally to this work. *[1]Institute of Geodesy and Photogrammetry, ETH Zurich, Robert-Gnehm-Weg 15, Zürich, 8093, Switzerland [2]Natural Resources Canada (retired) [3]MIT Haystack Observatory [4]TU Wien [5]DeutschesGeoForschungsZentrum (GFZ) Potsdam The geodetic and astrometric Very Long Baseline Interferometry (VLBI) community is in the process of upgrading its existing infrastructure with the VLBI Global Observing System (VGOS). The primary objective of VGOS is to substantially boost the number of scans per hour for enhanced parameter estimation. However, the current observing strategy results in fewer scans than anticipated. During 2022, six 24-hour VGOS Research and Development (R&D) sessions were conducted to demonstrate a proof-of-concept aimed at addressing this shortcoming. The new observation strategy centers around a signal-to-noise (SNR)–based scheduling approach combined with eliminating existing overhead times in existing VGOS sessions. Two SNR-based scheduling approaches were tested during these sessions: one utilizing inter-/extrapolation of existing S/X source flux density models and another based on a newly derived source flux density catalog at VGOS frequencies. Both approaches proved effective, leading to a 2.3-fold increase in the number of scheduled scans per station and a 2.6-fold increase in the number of observations per station, while maintaining a high observation success rate of approximately 9095%. Consequently, both strategies succeeded in the main objective of these sessions by successfully increasing the number of scans per hour. The strategies described in this work can be easily applied to operational VGOS observations. Besides outlining and discussing the observation strategy, we further provide insight into the resulting signal-to-noise ratios, and discuss the impact on the precision of the estimated geodetic parameters. Monte Carlo simulations predicted a roughly 50% increase in geodetic precision compared to operational VGOS sessions. The analysis confirmed that the formal errors in estimated station coordinates were reduced by 4050%. Additionally, Earth orientation parameters showed significant improvement, with a 4050% reduction in formal errors. Optimizing VGOS observations using an SNR-based scheduling approach [ July 22, 2024 =================================================================== § INTRODUCTION Very Long Baseline Interferometry (VLBI) is a cutting-edge technique in space geodesy. Through the synchronized observations of multiple radio telescopes strategically positioned worldwide, VLBI attains unrivaled accuracy in determining the rotation angle of the Earth about its axis and in monitoring minute variations of the orientation of the Earth's rotation vector in space. 
It furthermore contributes to the establishment of the International Terrestrial Reference Frame <cit.> while also defining the International Celestial Reference Frame <cit.> in its current realization. Geodetic VLBI observations are conducted in sessions that are organized and supervised by the International VLBI Service for Geodesy and Astrometry <cit.>. Among others, the IVS organizes the IVS-R1 and IVS-R4 series, which are global 24-hour VLBI sessions with a rapid turnaround time <cit.>. These sessions are observed using a dual-frequency mode at S/X-band (approximately 2.3GHz and 8.6GHz) and a recording rate of 256512Mbps. This way, around 20 scans per hour can be achieved. However, the S/X network suffers from an aging infrastructure and a severe inhomogeneity w.r.t. telescope properties, making it hard to optimize its observations. Since it was foreseen that the S/X infrastructure could not meet the demanding requirements posed by the Global Geodetic Observing System <cit.>, members of the IVS decided to upgrade the S/X infrastructure. During the design process of the successor infrastructure, tropospheric turbulences have been identified as the primary error source in geodetic VLBI <cit.>. As a countermeasure, increased sampling of the troposphere at different azimuth and elevation angles over short periods is required. Simulations revealed that a source switching interval of approximately 30 seconds, equivalent to 120 scans per hour, will be required to reach the GGOS requirements <cit.>. Consequently, a new telescope network utilizing fast slewing telescopes, the so-called VLBI Global Observing System (VGOS), was designed and is currently being built <cit.>. To achieve high slewing rates, a sacrifice w.r.t. the antenna diameter has to be made which impacts their sensitivity. To compensate for this, a new observing mode was developed utilizing four bands and an increased recording rate. Currently, the VGOS-mode observes four 512MHz wide bands A–D with center frequencies approximately at 3.2GHz, 5.5GHz, 6.6GHz, and 10.4GHz <cit.>. Since 2020, the VGOS network has been operationally observing VLBI sessions within the so-called VGOS-OPS observing program. Telescopes participating in VGOS-OPS observe around 40 scans per hour, a factor of two better than S/X but far from the originally anticipated 120 scans per hour. The discrepancy can be explained by the following: First, all scans are observed for 30 seconds straight, independent of telescope sensitivity and source brightness, while the original VGOS design document anticipated an observing time of as short as 5s <cit.>. Besides, other technological limitations exist that introduce additional overhead times. Together, these effects accumulate to a theoretical minimal source switching interval of 68s, significantly higher than the anticipated 30s. To address this shortcoming, the IVS has provided resources to observe six dedicated research and development sessions in 2022, the VGOS-R&D program. The VGOS-R&D program aimed to develop appropriate methodologies and concepts to increase the number of scans per hour. Most importantly, the feasibility of an optimized, SNR-based observation strategy featuring shorter observation times was explored, together with a careful evaluation and elimination of existing technological limitations as a secondary measure. In this work, we will report on the actions and investigations of the VGOS-R&D sessions. 
Section <ref> describes the VGOS sessions, Session <ref> describes the methodologies, Session <ref> discusses the evaluation metrics and presents results, Section <ref> concludes the work, while Section <ref> provides an outlook how the concepts can be applied in operational VGOS sessions. §.§ SNR-based VLBI scheduling The required observation time between two stations can be calculated using T = (SNR/η· F)^2 ·(SEFD_1 · SEFD_2/rec) with F being the source flux density (source brightness) per band, SEFD being the station system equivalent flux density (station sensitivity) per band, SNR being the target SNR per band, η being a constant efficiency factor, and rec representing the recording rate per band. Thus, given a target SNR and recording rate, the observation time is determined via SEFD and F. Within a VLBI scan, the required observation time is calculated per baseline (pair of telescopes) and frequency band. The minimum over all these observation times per baseline and band determines the final observation time of the given scan. While SEFD is measured at most VGOS stations, the main limitation of using an SNR-based scheduling approach for VGOS is that no source flux density (F) models are available for VGOS frequencies that are suitable for VLBI scheduling. Existing source flux density monitoring campaigns focus on a small subset of sources and use local baselines only, for example, <cit.>, which does not provide suitable results for global VGOS scheduling. Source flux density is a function of the baseline length and orientation (projected in the direction of the observed source, the so-called UV plane), frequency, and time. Currently, almost all IVS schedules are utilizing source flux density models which are part of the sked catalogs[<https://github.com/nvi-inc/sked_catalogs/>]. The sked catalog standard supports two types of models: models based on projected baseline length (labeled "B"), where the flux density is defined as a step-function, and elliptical Gaussian models (labeled "M"), where the flux density is defined via the sum of Gaussian components <cit.>. While "B" considers variations solely based on the projected baseline length, "M" allows to consider variations from baseline length and orientation. Temporal variations are represented by updating the catalog monthly. Frequency-based variations are represented by providing individual models for S- and X-band. Since the sked catalog only contains models for S- and X-band, they can not directly be used for VGOS which operates at different frequencies. It is to be noted that for operational VLBI scheduling, the calculation of the required observation time contains significant error margins to compensate for imperfections in the models. For the interested readers, we provide a more detailed discussion of these error margins in Appendix A. § DATA §.§ VGOS-R&D The VGOS-R&D sessions in 2022 were conducted bi-monthly (i.e. one session every second month). Each session was individually designed and further discussed and approved by the VGOS Technical Committee (VTC). Due to a substantial backlog in VGOS correlation, results from previous sessions were not available before the subsequent sessions were observed. Hence, it was not possible to build on previous session results. Adjustments made between sessions could only be based on VTC discussions and log files obtained during observations. Table <ref> lists the start time and station network of each session, while Figure <ref> depicts the VGOS station network. 
Stations Hb and Nn were initially observed in tagalong mode as they were at this time relatively new and untested, resulting in unstable performance. Tagalong mode refers to a scheduling technique, where the observing plan is first generated without the tagalong stations before adding them to the existing schedule. This way, losing the tagalong stations will not impact the schedule of the remaining stations. On the other hand, stations observing in tagalone mode cannot contribute their full potential to the network. Similarly, tagalong mode was occasionally utilized for Oe and Ow due to potential storage limitations affecting the station's ability to observe. In the last session, station Mg operated in tagalong mode due to technical issues. As presented in Table <ref>, all sessions, except VR2203, experienced station losses from the core network due to technical problems. Consequently, the generated schedule could not be fully executed as intended, leading to limitations in data interpretability to some extent. §.§ VGOS-OPS The baseline for comparison is provided by the VGOS-OPS sessions conducted in 2022. Specifically, the VGOS-OPS dataset includes 42 sessions, ranging from VO2013 to VO2363 (2022-01-13 to 2022-12-29). The individual session start times and station network information can be found in the schedule master[<https://ivscc.gsfc.nasa.gov/sessions/2022/>]. Similar to VGOS-R&D, the VGOS-OPS sessions suffered from significant station dropout. From the original 42 sessions, only 5 could be analyzed with the full station network. In the remaining 37 sessions, at least one station did not observe or did not produce useable results. Unfortunately, no information regarding station tagalong status is recorded in the available data sources. Figure <ref> depicts how often stations were scheduled and analyzed in the VGOS-OPS sessions. The dashed gray line marks the total number of sessions. Within the VGOS-OPS dataset, there is one additional station (KATH12M located in Australia) that participated in one session only. Due to this very small sample size and since the station was never observed in the VGOS-R&D observing program, this station was excluded from the discussion in the following subsections. § METHODOLOGY The main novelty introduced in the VGOS-R&D program compared to VGOS-OPS was the introduction of the SNR-based scheduling approach. While S/X-VLBI sessions already utilized an SNR-driven scheduling strategy for decades, it has not yet been possible for VGOS, in particular, due to a lack of source flux density models at VGOS frequencies. In the following, two strategies for deriving the expected source flux density F at VGOS frequencies are discussed that were explored in the VGOS-R&D program. The expected source flux density is then used in calculating the required observing duration using equation (<ref>) with a target SNR of around 15, an efficiency factor η of around 0.6, and a recording rate rec of 2Gbps per band. §.§ Source flux density estimation As a first approach, the possibility of a simple inter-/extrapolation from the previously mentioned S/X flux models was tested. This was done based on the source spectral index using F(λ_i) = F_X/λ_X^α·λ^α_i α = log(F_X/F_S)/log(λ_X/λ_S) where λ stands for the wavelength, F stands for the source flux density and the indices represent the frequencies S, X, and the target (VGOS) frequency i of bands A–D. 
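As an illustration of how these two ingredients combine, the following Python sketch evaluates the spectral-index inter-/extrapolation and the resulting required observing time from the SNR equation given earlier; the function names, the SEFD values and the flux densities in the example are assumptions for illustration only.

```python
import math

S_GHZ, X_GHZ = 2.3, 8.6  # approximate S- and X-band frequencies quoted earlier

def extrapolated_flux(flux_s, flux_x, freq_ghz):
    """Spectral-index inter-/extrapolation of S/X flux densities [Jy] to a VGOS
    band frequency (point-source assumption, as discussed in the text)."""
    lam_s, lam_x, lam = 1.0 / S_GHZ, 1.0 / X_GHZ, 1.0 / freq_ghz  # wavelength scales as 1/frequency
    alpha = math.log(flux_x / flux_s) / math.log(lam_x / lam_s)
    return flux_x / lam_x ** alpha * lam ** alpha

def required_obs_time(target_snr, flux, sefd_1, sefd_2, rec, eta=0.6):
    """Per-band, per-baseline observing time [s] from the SNR equation."""
    return (target_snr / (eta * flux)) ** 2 * (sefd_1 * sefd_2) / rec

# illustrative numbers only: band A (~3.2 GHz) with assumed S/X fluxes and SEFDs
flux_a = extrapolated_flux(flux_s=0.8, flux_x=0.6, freq_ghz=3.2)
t_a = required_obs_time(target_snr=15, flux=flux_a, sefd_1=2500.0, sefd_2=3000.0, rec=2e9)
print(f"band A: extrapolated flux {flux_a:.2f} Jy, required duration {t_a:.1f} s")
```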
In the first step, the source flux density is calculated for the projected baseline length and orientation at the S/X frequencies using the sked catalog models. Next, equations (<ref>) and (<ref>) are used to calculate the expected source flux density at the VGOS frequencies. This approach comes with three major downsides or simplifications. First, strictly speaking, an inter-/extrapolation based on the spectral index is only suitable for point-like sources. Although the radio sources most commonly observed with VGOS are primarily point-like, this is certainly not the case for all of them. Second, in practice, the extrapolations to frequencies outside S/X could become problematic. In the case of VGOS, this affects band D. Third, errors in the sked S/X catalog are propagated to the VGOS-frequencies. It is known that the sked source flux density models have inconsistencies w.r.t. models defined in other catalogs <cit.>. Here, it is important to understand that the sked models' use case is the calculation of the required observing time. This means that in this context, it is necessary to view the source flux density models together with the corresponding SEFD models and for the purpose of solving equation (<ref>). Potential inconsistencies in the source flux density models can result from assumptions regarding the station SEFD models and vice-versa. Similarly, these potential offsets and inconsistencies can also be compensated by the SEFD models for the calculation of the required observing time. In practice, the models defined in the sked catalog are used in almost all IVS S/X sessions and represent the state-of-the-art solution. They are well tested and result in a high observation success rate, potentially also due to the existing high error margins compensating for imperfections. The second approach explored in VGOS-R&D is the use of a newly generated source flux density model at VGOS frequencies, which can be found in the supplementary material attached to the manuscript (in the sked catalog format), as well as in the Appendix B. The VGOS source flux density models were generated based on observations from past VGOS-OPS sessions. Based on these observations, the catalog includes models for 138 sources at the four VGOS frequencies. The models are defined as a step function based on the projected baseline length with a regular stepsize of 1000km. This way, they follow the sked catalog conventions and can be supported natively in the scheduling software packages. The primary downside of using the new source flux density models is that they only include a limited set of sources and that they were derived using only a small amount of VGOS sessions with limited geometry. In particular, the observations lack long north-south baselines, thus, the decision to represent them as circular models based on the projected baseline length instead of elliptical Gaussian models. Furthermore, there are only a few observations of sources in the southern hemisphere and none in the deep south. Besides, not all stations regularly report their SEFD values in the station logs, further limiting the amount of usable data to derive source flux density models. Comparing the VGOS source flux model with the inter-/extrapolation approach, it is evident that there are discrepancies. In particular, the new models are more conservative in the reported flux densities, especially for band D. 
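A step-function model of this kind can be represented and queried very simply. The layout below, a mapping from the upper edge of each 1000 km bin of projected baseline length to a flux density, is an illustrative data structure of ours and not the actual catalog file format.

```python
import bisect

def step_flux(model, projected_baseline_km):
    """Evaluate a step-function flux model given the projected baseline length;
    baselines beyond the last bin fall back to the outermost value."""
    edges = sorted(model)
    idx = min(bisect.bisect_left(edges, projected_baseline_km), len(edges) - 1)
    return model[edges[idx]]

# hypothetical model for one band of one source (values are made up)
band_a_model = {1000: 0.90, 2000: 0.80, 3000: 0.75, 4000: 0.70, 5000: 0.65}
print(step_flux(band_a_model, projected_baseline_km=2600.0))  # falls in the 2000-3000 km bin
```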
Figure <ref> provides an exemplary depiction of the source flux density model for source 0552+398 to provide a visual example to better understand the concepts. The source flux density is color-coded based on the projected baseline length and orientation, defined in the UV plane. Note that the original S/X model for source 0552+398, depicted in the first row, is defined based on an elliptical Gaussian model. The inter-/extrapolation at VGOS frequencies is depicted in the second row. The third row represents the newly generated VGOS frequency source-flux density models, which are defined as a step function based on the projected baseline length. < g r a p h i c s > Comparisons of flux density models for source 0552+398. The first row depicts the S/X flux densities reported in the sked catalog. The second row depicts the inter-/extrapolation based on the spectral index. The third row depicts the newly derived VGOS-frequency source flux density models. §.§ Scheduling strategy Unlike VGOS-OPS, the VGOS-R&D schedules were generated using VieSched++ <cit.>. The general scheduling concept followed the approaches described in <cit.> and <cit.>. One requirement posed by the SNR-driven observation strategy is the requirement of dedicated calibration scans. In the VGOS-OPS sessions, where each scan is observed for 30 seconds straight, enough scans reach high enough SNR to naturally provide good calibration capabilities. However, due to the reduced observation time in VGOS-R&D sessions, resulting in lower SNR, this is not necessarily the case anymore and dedicated calibration scans should be included in the session. Based on feedback within the VTC, as well as from the correlator staff [M. Titus, personal communication, 2022], these calibration scans were included every 1-2 hours. Within the 2022 VGOS-R&D sessions, the observation time of calibration scans was set to 60 seconds. The calibration scans were selected solely based on source visibility and acquired SNR. Another reason to include 60-second long calibration scans in VGOS-R&D sessions, unrelated to the SNR-based observations strategy, was the inclusion of some new and not yet fully validated stations (Hb and Nn), where these high SNR scans provided valuable insight into the station performance. The objective of the VGOS-R&D sessions was to evaluate the suitability of both previously discussed SNR-based scheduling approaches despite their assumptions and limitations. The inter-/extrapolation approach based on the sked catalog was utilized in sessions VR2201 and VR2202 while the new VGOS source flux density models were used in VR2203–VR2205. In VR2206, the majority of sources used the VGOS source flux density models while a few additional sources were added in the session that were not included in the new catalog. For these sources, the inter-/extrapolation approach based on the sked catalog was used. For the SNR-based scheduling strategy, a minimum observing time of 7 seconds and a maximum observing time of 20 seconds were used, except for VR2204, where the minimum and maximum were reduced by 2 seconds each. The resulting average observation time of scans within VGOS-R&D is 10 seconds. As discussed in Section <ref>, the required observation duration is calculated per band and baseline. Consequently, only one band of one observation has a theoretical SNR close to the target SNR, while all other remaining bands and observations have a higher theoretical SNR. 
As an example, Figure <ref> depicts the average SNR per band and baseline for one exemplary session VR2203 (other sessions can be found in Appendix C). In this case, the target SNR was set to 15 for all bands and on all baselines. However, the average SNR is significantly larger than 15 due to the previously discussed facts, especially for bands B and C where telescopes typically have higher sensitivity compared to bands A and D. §.§ Overhead times Besides utilizing an SNR-based scheduling strategy, which drives the majority of the increase in the number of obtained scans per hour, a careful evaluation of existing overhead times in VGOS operations was executed for the VGOS-R&D program which is explained in the following. For each scan, the telescope has to execute a variety of steps. Simplified speaking, it has to slew to the radio source that will be observed (slewing), wait for all other telescopes to finish slewing (idle time), and observe the radio source (observing time). While slewing and idle times are variable and depend on the telescope properties, the observing time is fixed to 30 seconds within VGOS-OPS sessions as discussed earlier. Additionally, before each observation, some time is reserved to set up the upcoming observation and perform some calibrations. Within the VGOS-OPS session, this calibration time is fixed to four seconds per scan. Finally, there are two reasons to account for some additional overhead time during the scheduling process. First, for each scan, a fixed constant amount of overhead time is reserved, intending to reflect the execution time of field system commands. Within VGOS-OPS sessions, this constant overhead time is set to four seconds. Second, due to limitations in the recording hardware used in VGOS-OPS session, an additional overhead time in the same length of the observing time is required. It serves to guarantee that all recorded data is properly written to hard drives before the next scan starts. Here, it is to note that during this overhead time, the telescopes can already slew to the next scan. Thus, while generating the schedules, it can be seen as a constraint on the slewing time instead of an additional overhead time. Based on an evaluation of these times for VGOS-R&D, it was found that the four-second-long overhead time for executing field system commands is not required and can be removed. Additionally, applying some slight modifications to the telescope procedures executed during the calibration phase could reduce it to only two seconds. Finally, using a second recording module eliminated the additional overhead time to ensure that all recorded data is properly written to hard drives before the next scan starts. However, it has to be noted that for the VGOS-R&D sessions, this step is in general far less significant compared to VGOS-OPS sessions. The reason is that the observing time was greatly reduced to around 10 seconds as discussed in <ref> which consequently also reduced the required overhead time which often became shorter than the simultaneously applied slewing time. For example, when generating the same schedule for VR2201 with the same scheduling strategy except for one version assuming recording on one module only (and thus requiring the overhead time) while the other version assumes recording on two modules (and thus does not require the overhead time) the resulting increase in terms of number of scans in the second version is only 0.4% compared to the first version, while the number of observations is increased by 3% only. 
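To make the overhead bookkeeping explicit, a rough per-scan cycle-time calculation along the lines of the description above could look as follows; the settings in the example are illustrative values chosen to reproduce the figures quoted in the text, not measured quantities.

```python
def scan_cycle_seconds(obs, calib, system, slew_plus_idle, single_module=True):
    """Rough per-scan cycle time [s]: observing + calibration + field-system time,
    plus the gap to the next scan. With a single recording module the gap must also
    cover a write-out overhead equal to the observing time, which runs concurrently
    with slewing and idling."""
    gap = max(slew_plus_idle, obs if single_module else 0.0)
    return obs + calib + system + gap

# minimal source-switching intervals (zero slew/idle), VGOS-OPS-like vs. VGOS-R&D-like
ops_min = scan_cycle_seconds(obs=30, calib=4, system=4, slew_plus_idle=0)                       # 68 s
rnd_min = scan_cycle_seconds(obs=10, calib=2, system=0, slew_plus_idle=0, single_module=False)  # 12 s
```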
Figure <ref> provides a visual sketch comparing the observation approaches of VGOS-OPS and VGOS-R&D. For simplicity, the first overhead time, reserved to execute field system commands, is displayed before the slewing starts. The second overhead time, reserved to ensure that all recorded data is safely stored on disk, is executed simultaneously with the slewing and idle time. §.§ Simulation strategy To determine the expected precision of the SNR-based scheduling strategy, Monte-Carlo simulations were executed to derive the expected precision of the estimated parameters based on their repeatability error. The simulation strategy follows state-of-the-art procedures <cit.>. It considered three major error sources: tropospheric turbulences, clock drifts, and random measurement errors. The tropospheric turbulence was simulated using spatial and temporal correlations with an average refractive index structure constant C_n of 1.8e-7m^-1/3 and an effective wet tropospheric height of 2000m, and a wind velocity of 8m/s <cit.>. Clock drifts were simulated as an integrated random walk with an Allan standard deviation of 1e-14s over 50 minutes. Finally, normally distributed measurement errors with a standard deviation of 4 picoseconds were added. The simulations were analyzed with a standard least squares adjustment, estimating station coordinates, clock parameters (a quadratic polynomial, as well as piece-wise linear offsets (PWLO) every 30 minutes, constrained with 1.3cm), tropospheric parameters (PWLO zenith wet delay every 15 minutes with constraints of 1.5cm, as well as PWLO north-south and east-west gradients with constraints of 0.05cm, and all five Earth orientation parameters (EOP) as tightly constrained (0.1as) PWLO at session start and session end, effectively delivering one offset estimate. All stations were considered in a no-net-rotation, no-net-translation datum definition. Each session was simulated and analyzed 1000 times with varying realizations of the simulated error sources to allow for a robust calculation of the Monte-Carlo repeatability error (rep) of the estimated geodetic parameters, calculated via their standard deviation. Thus, the repeatability error represents the expected precision. §.§ Analysis strategy While the primary focus of this work is to provide a proof of concept of greatly increasing the number of scans per session, an outlook w.r.t. the geodetic performance is also provided within the manuscript. The geodetic performance is validated based on two data sources: First, based on the official IVS analysis results, obtained via the ν-Solve package <cit.>. Second, based on our analysis, derived from processing group delays provided as databases via IVS Data Centers using the Vienna VLBI and Satellite Software <cit.>. In our analysis, the a priori group delays were modeled as described in <cit.>. Station position time series were obtained from solutions where the source coordinates were fixed to the ICRF3 <cit.>, while the station coordinates, as well as datum definition, were based on a priori information from the ITRF2020 <cit.>. Tropospheric parameters, including ZWD and gradients, were estimated every 30 minutes with relative constraints of 1.5cm and 0.5mm between the piece-wise linear offsets PWLO, respectively. In terms of EOP, polar motion and UT1-UTC parameters were estimated as PWLO every 24 hours at 0 UTC with relative constraints of 10mas. 
Since the session starts at 17 UTC, we report the estimates at 0 UTC during the session and at 0 UTC on the day after the session. Celestial pole offsets were estimated as 24-hour PWLO with tight constraints of 0.1as, effectively delivering one offset at the mid-epoch of the session, approximately around 5 UTC. The estimated EOP are consistent with the state-of-the-art global solution VIE2022 <cit.> which produced an updated TRF and CRF including the most recent sessions. § RESULTS AND DISCUSSION §.§ Scheduling statistics Figure <ref> presents the number of scans and the number of observations of each station per session. The dashed line indicates the average number. Considering VGOS-R&D relative to VGOS-OPS, the number of scans per station, as well as the number of observations per station, is increased substantially. The increases are by a factor of 2.3 and 2.6 respectively. Figure <ref> depicts, on a station-by-station basis, the distribution of time spent in different activities, e.g. observing (obs), slewing (slew), idling (idle), calibration (cal), and execution of field-system commands (system). In this context, it is to note that the previously mentioned overhead time intended to ensure that the recorded data is stored safely on disk is not represented since it occurs simultaneously with slewing and idling (see Section <ref>). The first noteworthy conclusion is that the VGOS-R&D sessions have 20% less observing time compared to VGOS-OPS, although 2.3 more scans were observed as discussed previously. This is due to the SNR-based scheduling algorithm with an average observing time of 10 seconds per scan compared to the 30 seconds per scan in VGOS-OPS. The reduction in observing time also implies less data transfer and fewer bits to be correlated, which is important because data transfer and correlation are the current operational bottlenecks of today's VGOS observations. Next, one can see that the slewing time is increased by a factor of 2. This indicates that the atmospheric sampling is improved since longer slewing times mean that more different azimuth and elevation angles are observed. Finally, one can see that there is still significant idle time left. However, idle time is always correlated with slewing time since the stations have to wait for the slowest station to finish slewing before starting observations. In the VGOS network, the slowest station is Wf which has almost no idle time. Wf has a slewing rate of 200 degrees per minute in azimuth and 120 degrees per minute in elevation compared to the 720 degrees per minute in azimuth and 360 degrees per minute in elevation of most other VGOS stations. §.§ Successful observations The main research question of the VGOS-R&D sessions was the feasibility of the short, SNR-based observation times. We evaluate the SNR-based scheduling approaches by examining the percentage of successful observations. In this context, we define a successful observation as an observation used in geodetic analysis. Figure <ref> overviews the percentage of successful observations per baseline. The values were extracted from the official IVS analysis reports. Based on this analysis, it can be seen that many baselines have a success rate of more than 90% which is similar to the success rate of VGOS-OPS sessions. Outliers are observations including the, at the time of the sessions, new and not fully validated stations Hb and Nn. 
Furthermore, in VR2203, observations with station Yj had a lower success rate which can be explained by unrelated technical problems experienced at the station. Finally, there is a lower success rate at observations including Mg in VR2204 and VR2205. In both cases, it can be explained by the station missing approximately 50% of the session in VR2204 and approximately 20% of the session in VR2205 due to unrelated problems at the station. More details regarding the telescope issues can be found in the analysis reports uploaded at the IVS Data Centers[<https://cddis.nasa.gov/archive/vlbi/ivsdata/aux/2022>]. Thus, all of these reduced success rates can be explained by technical problems and are not related to the SNR-based observation strategy. Table <ref> lists the average success rate per session excluding the stations with unrelated technical problems. Both approaches, the inter-/extrapolation of the S/X flux information as well as the use of the newly generated source VGOS frequency flux density catalog provided high success rates of over 90%. This proves that the SNR-based observation times are technically possible and that the percentage of successful observations using those approaches is feasible. When keeping in mind that VGOS-R&D scheduled a factor of 2.6 times more observations, one can conclude that the total yield of usable observations for analysis is significantly improved. Since there is a difference between the newly derived source flux density models and the inter-/extrapolation approach based on the sked S/X catalog, but no significant difference in the success rates of both approaches, these results hint that most likely the error margins discussed in Appendix A compensate for any imperfections in the models. Consequently, by improving station SEFD monitoring and modelling, as well as source flux density monitoring and modelling, the error margins could theoretically be reduced and even shorter observation times might be feasible. §.§ Signal-to-noise ratios While the primary metric to evaluate the SNR-based scheduling approach was the percentage of successful observations, studying the observed SNRs per band can also be compared with the predicted SNRs per band to evaluate the accuracy of the VGOS models. Unfortunately, information regarding the observed SNRs was not available at the time VGOS-R&D sessions were scheduled due to significant delays in the correlation of the sessions and in developing the methodologies required to perform SNR analysis. Thus, they could not be used to tune subsequent sessions and were only computed after all sessions had already been observed. Following <cit.>, the single-band SNRs are reconstructed from the total SNR using equation (<ref>) since the observed SNRs per band are not stored and thus available for analysis. The reconstruction error is assumed to be very small (typically within 1%) and is negligible compared to the uncertainties in the theoretical SNR calculation and underlying models. SNR_band = SNR_tot/amp_tot·∑_n=1^N V_n/√(M · N) All variables required in (<ref>) are stored in the session vgosDB files, publicly available via the IVS data centers. SNR_tot is the combined four-band SNR, amp_tot is the coherent average fringe amplitude for the combined four bands, V_i is the amplitude of the complex fringe visibility of a single channel i, N represents the number of channels within a given band, and M represents the total number of channels. 
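In code, the reconstruction amounts to a one-line evaluation of the equation above; the argument names in the sketch below are our own labels for the quantities read from the vgosDB.

```python
import math

def single_band_snr(snr_tot, amp_tot, channel_amps, band_channel_ids):
    """Reconstruct a per-band SNR from the combined four-band SNR and fringe
    amplitude using the equation above."""
    m = len(channel_amps)                                    # total number of channels M
    band_amps = [channel_amps[i] for i in band_channel_ids]  # the N channels of this band
    return snr_tot / amp_tot * sum(band_amps) / math.sqrt(m * len(band_amps))
```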
Figure <ref> lists the average reconstructed single-band SNRs per baseline for one exemplary session (VR2203), similar to Figure <ref>, where the theoretical SNRs are depicted. Results from other sessions can be found in Appendix D. It can be seen that in this session, the lowest SNRs are observed in band D. Furthermore, some station-dependent effects are visible, like a reduced SNR for observations including Wf in band A, which agrees with analyst comments on the same issue. Figure <ref> depicts the distribution of the ratios between the reconstructed SNR (SNR_obs) and the predicted SNR (SNR_sched). A dashed black line highlights the ratio of 1.00, which represents perfect agreement. Ratios <1.00 mark observations where the models used in the calculation of the theoretical SNR were too optimistic, while ratios >1.00 depict cases where the models were too conservative. Observations with stations Hb and Nn are excluded since these stations were at the time not yet fully validated and no information regarding their sensitivity was available. Based on Figure <ref>, several conclusions can be drawn. First, in many cases, the predicted SNRs are too optimistic. The overly optimistic SNRs are most pronounced in band D of the early VGOS-R&D sessions (VR2201–VR2203). In general, stations tend to have a lower sensitivity in band D. Since the actual SEFD was not monitored at all stations at the time the VGOS-R&D sessions were observed, assumptions have to be made, which might have been too optimistic. Furthermore, the extrapolation of the source flux densities in VR2201 and VR2202 (see Section <ref>) might have led to higher expected flux densities than observed in reality. As further discussed in Section <ref>, comparisons between the inter-/extrapolation-based source flux density models and the newly derived ones had the highest disagreement in band D as well, with the new models reporting lower flux densities. This might also explain why the ratios in the later sessions (VR2203–VR2206) are close to 1.00. Second, although the observed SNRs are in many cases quite low, the VGOS processing pipelines still manage to extract usable observations. Thus, they seem to be quite robust. Third, the spread of the SNR ratios indicates that there is still significant room for improving the VGOS models to predict more accurate SNRs. More resources and research are needed to tackle the remaining shortcomings. Finally, although the SNR modeling shows room for improvement and is sometimes too optimistic, the error margins (e.g. by targeting a higher SNR than required) compensate for the inaccuracies and lead to a high observation success rate as discussed in Section <ref>. Furthermore, it is important to discuss these results in the context of the original objective of the VGOS-R&D program. The original research objective was to significantly increase the number of scans compared to the current state-of-the-art approaches. Thus, it can be concluded that the combination of the modeling approaches, error margins, and observation durations of between 5 and 20 seconds used in VGOS-R&D is sufficient to extract usable observations. Consequently, observing every scan for 30 seconds straight as done in VGOS-OPS is not strictly necessary. §.§ Simulation results To determine the expected precision of the geodetic parameters, simulations of all VGOS-R&D and VGOS-OPS sessions were conducted as discussed in Section <ref>.
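A minimal sketch of this ratio analysis is given below; the per-observation record layout (dictionaries holding baseline, band, and SNR values) is assumed purely for illustration and would need to be adapted to the structures actually extracted from the vgosDB files.

```python
import numpy as np

def summarize_snr_ratios(observed, scheduled, exclude_stations=("Hb", "Nn")):
    """Median and interquartile range of SNR_obs / SNR_sched per band.

    observed, scheduled : aligned lists of dicts with keys 'baseline'
                          (e.g. 'Gs-Wf'), 'band', and 'snr'.
    """
    ratios = {}
    for obs, sched in zip(observed, scheduled):
        if any(station in obs["baseline"] for station in exclude_stations):
            continue                      # stations without validated sensitivity
        ratios.setdefault(obs["band"], []).append(obs["snr"] / sched["snr"])
    return {band: (np.median(r), np.percentile(r, 25), np.percentile(r, 75))
            for band, r in ratios.items()}
```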
Figure <ref> depicts the simulated geodetic precision expressed via the repeatability errors (rep) from the Monte-Carlo simulation. Comparing the 3D station coordinate repeatability errors √(rep_X^2 + rep_Y^2 + rep_Z^2) of VGOS-OPS and VGOS-R&D, it can be seen that the expected precision is higher for the VGOS-R&D sessions, which is likely explained by the increased number of scans per station. Except for station Hb, which is poorly integrated due to its remote location, especially in VGOS-OPS sessions, the average reduction in the repeatabilities of VGOS-R&D compared to VGOS-OPS is 50%. For the five EOP, the average improvement in repeatability error is also 50%: 50% for UT1-UTC, 60% for polar motion (XPO, YPO), and 40% for the nutation parameters (dX, dY). §.§ Analysis results The primary focus of the 2022 VGOS-R&D sessions was to provide proof that it is possible to significantly increase the number of scans per hour in VGOS sessions. Still, it is also possible to examine the performance of these sessions regarding geodetic parameter estimation and to conduct comparisons with VGOS-OPS sessions. However, it has to be highlighted that the 2022 VGOS-R&D series only includes six sessions. Thus, it is not possible to conduct a meaningful repeatability analysis, and the subsequent comparisons need to be interpreted carefully. Figure <ref> depicts the formal errors σ of the estimated station coordinates based on the VieVS analysis. The average reduction of the formal errors per station is 50% in all components. Based on the results obtained from the official IVS analysis reports, the improvements are around 40%. Thus, both analysis approaches confirm a reduction of formal errors, which can be expected based on the greatly increased number of scans and thus observations available in the analysis. The improvement based on the VieVS results corresponds to the expected improvement from the Monte-Carlo simulations, while the improvement is smaller based on the IVS analysis report solution. The slight differences might be explained by differences between the analysis settings used in the simulations and those used in the analysis runs, which stem from the different software packages that were used, or by imperfections in the simulations. Figure <ref> depicts the formal errors of the estimated EOP based on the VieVS analysis. In contrast to the previous sections, the median value is highlighted instead of the average one since there are a few significant outliers present. Comparing the median formal error per EOP, an average reduction of 50% can be seen: 40% for UT1-UTC, 50% for polar motion, and 50% for the nutation parameters, which is in good agreement with the expectations based on the Monte-Carlo simulations. The improvement in terms of the formal errors of the EOP is in good agreement with the improvement in terms of the formal errors of the station coordinates discussed earlier, as well as with the expectations from the Monte-Carlo simulations. However, it has to be highlighted that the analyzed VGOS network is not well-suited for estimating EOP due to the lack of southern-hemisphere stations and consequently of long north-south baselines. This fact might also explain the occurrence of some of the outlier values in Figure <ref>. In this work, we omit a detailed analysis of the repeatability error due to the small sample size of the VGOS-R&D sessions.
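For reference, the sketch below shows how the two comparison metrics used in this section, the 3D station coordinate repeatability and the average relative reduction of VGOS-R&D with respect to VGOS-OPS, can be evaluated; the numbers in the example are purely illustrative and do not correspond to any particular station.

```python
import numpy as np

def coord_repeatability_3d(rep_x, rep_y, rep_z):
    """3D station coordinate repeatability: sqrt(rep_X^2 + rep_Y^2 + rep_Z^2)."""
    return np.sqrt(np.asarray(rep_x) ** 2 + np.asarray(rep_y) ** 2 + np.asarray(rep_z) ** 2)

def mean_reduction_percent(ops_values, rnd_values):
    """Average relative reduction (in %) of VGOS-R&D with respect to VGOS-OPS."""
    ops = np.asarray(ops_values, dtype=float)
    rnd = np.asarray(rnd_values, dtype=float)
    return 100.0 * np.mean((ops - rnd) / ops)

# Illustrative per-station values in mm (three stations):
ops_rep = coord_repeatability_3d([2.0, 2.4, 1.9], [1.8, 2.1, 2.0], [2.2, 2.6, 2.1])
rnd_rep = coord_repeatability_3d([1.0, 1.2, 1.0], [0.9, 1.1, 1.0], [1.1, 1.3, 1.1])
print(f"average 3D repeatability reduction: {mean_reduction_percent(ops_rep, rnd_rep):.0f}%")
```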
Based on the investigations outlined in this work, we recommend applying the derived observation strategies in operational VGOS sessions to provide a larger sample size for statistically significant repeatability analysis. § CONCLUSIONS During 2022, the IVS provided resources for six dedicated 24-hour VGOS-R&D sessions. These sessions aimed to greatly increase the number of observations and scans. Compared to operational VGOS sessions, stations in VGOS-R&D recorded 2.3 times more scans, leading to 2.6 times more observations. This was achieved by establishing an SNR-based scheduling approach with shorter observation times of around 10 seconds and reduced overhead times. The SNR-based scheduling was tested using two approaches: (1) inter-/extrapolation of existing S/X frequency source flux density models provided in the sked catalogs, and (2) newly derived source flux density models for VGOS frequencies. In theory, we expected that a dedicated source flux density catalog at VGOS frequencies might be superior to inter-/extrapolating S/X models. However, both approaches resulted in a high observation success rate of 90–95%, assuming that no technical errors occur. The high success rate might be explained by the significant error margins included in the calculation of the required observation time. Comparisons of the theoretically predicted SNRs and the observed (reconstructed) SNRs revealed that the underlying models tend to be too optimistic, especially in band D. However, the existing error margins compensate for the mismodelling. By improving the underlying models, the current error margin in the calculation of the required observation time could be reduced to allow for even more aggressive scheduling strategies with shorter observation times. In any case, both approaches resulted in a significantly higher number of scans per hour compared to the current state-of-the-art strategy utilized in VGOS-OPS, where each scan is observed for 30 seconds straight. Despite the greatly increased number of scans per station, the reduced observation time led to an average reduction of recorded data per station of 20%. This reduction should lead to decreased data transfer and processing times, which represent the current bottleneck for advancing VGOS. Monte-Carlo simulations revealed that, based on the updated scheduling strategy, the precision of the estimated station coordinates is expected to improve by 50% compared to operational VGOS sessions. A similar improvement is expected for EOP estimates. Analysis of the 2022 VGOS sessions confirmed this assumption, with 50% lower formal errors in station coordinates based on the analysis executed using VieVS, and 40% lower formal errors based on the IVS analysis reports. The formal errors of EOP estimates were also reduced by 40–50%. § OUTLOOK From an operational perspective, the SNR-based scheduling implemented in VGOS-R&D, as well as the reduced overhead times, can be readily transferred to VGOS-OPS sessions. By recording more sessions with the increased number of scans per hour, a more sophisticated repeatability analysis can be executed to evaluate the impact on the accuracy of the estimated geodetic parameters and to confirm the hypothesis outlined in the VGOS design documents. Utilizing a catalog with flux density models at VGOS frequencies would require operational updating, similar to the S/X catalog, due to the time-dependent nature of source brightness.
Similarly, the station SEFD models need to be updated and monitored for VGOS frequencies, and it must be ensured that these models are supported in the subsequent software packages. Finally, research by <cit.>, <cit.>, and <cit.> indicates that source structure effects also contribute significantly to the VGOS error budget. Work is underway to develop processes for imaging and modeling source morphology and for creating corrections for source structure and models that can be used for simulations. This necessitates consideration during the generation of VGOS schedules as discussed in <cit.>. § STATEMENTS AND DECLARATIONS §.§ Availability of data and materials The IVS-related datasets generated and/or analyzed during the current study are available in the IVS data centers and can be accessed via the corresponding session listed in <https://ivscc.gsfc.nasa.gov/sessions/2022/>. The EOP dataset VIE2022 analyzed in this paper is available at <https://doi.org/10.48436/0gmbv-arv60>. The source flux density catalog at VGOS frequencies developed for the VGOS-R&D sessions is attached as supplementary material and in Appendix B. §.§ Competing interests The authors declare that they have no competing interests. §.§ Funding The work by MIT Haystack Observatory was supported under NASA contract Awarding Agency. Open access funding is provided by the Swiss Federal Institute of Technology Zurich. §.§ Authors' contributions MS and BP designed the concepts of the VGOS-R&D sessions with support from MX. MS generated the observing plans. MT correlated the sessions. MT, DH, and JB performed the fringe-fitting and post-correlation-processing. DM performed data quality checks with support from DH, MT, and JB. MS performed the simulations. HK performed the geodetic analysis using VieVS. BP developed monitoring software to analyze the performance of stations and the brightness of sources. MS wrote the majority of the manuscript and summarized, compared, and visualized the results. All authors read and contributed to the manuscript. §.§ Acknowledgments This research has made use of VGOS data files provided by the International VLBI Service for Geodesy and Astrometry (IVS) data archives, dated July 2023. § APPENDIX A: ERROR MARGINS IN VGOS OBSERVATIONS It is important to note that, due to imperfections in the source flux density and station SEFD measurements and models, and because these quantities must be predicted ahead of time, the calculation of the required observation time includes significant error margins to compensate for potential variations and mismodeling. Furthermore, in VGOS sessions all stations observe a scan for the same amount of time. Thus, only the least sensitive baseline on the least sensitive frequency band determines the required observation time. Consequently, only this baseline/band combination has a theoretical SNR level close to the target SNR, while all other baselines on all other bands typically have a higher SNR than the target, further increasing the error margin on these observations. As a side note, this also implies that the calculation of the observation duration and the underlying models are most important for small networks, in particular for single-baseline Intensives, which are beyond the focus of this work.
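To illustrate how these considerations enter the scheduling, the sketch below derives a scan duration from the commonly used single-baseline relation SNR = η · F · √(2 · Δν · τ) / √(SEFD_1 · SEFD_2), inflated by a relative error margin and clipped to minimum and maximum observation times. It is not the VieSched++ implementation; the target SNR, efficiency factor η, bandwidth, margin, and the SEFD and flux density values are placeholder assumptions chosen only to show that the least sensitive baseline/band combination dictates the common scan duration.

```python
from itertools import combinations
from math import sqrt

def baseline_duration(target_snr, flux_jy, sefd1, sefd2, bandwidth_hz, eta, margin):
    """Invert SNR = eta * F * sqrt(2 * B * t) / sqrt(SEFD1 * SEFD2) for t,
    then inflate the result by a relative error margin."""
    t = (target_snr * sqrt(sefd1 * sefd2) / (eta * flux_jy)) ** 2 / (2.0 * bandwidth_hz)
    return margin * t

def scan_duration(stations, flux_per_band, sefd, target_snr=15.0,
                  bandwidth_hz=512e6, eta=0.7, margin=1.3,
                  min_dur=5.0, max_dur=20.0):
    """All stations observe for the same time, so the least sensitive
    baseline/band combination determines the duration of the whole scan."""
    worst = 0.0
    for s1, s2 in combinations(stations, 2):
        for band, flux in flux_per_band.items():
            worst = max(worst, baseline_duration(
                target_snr, flux, sefd[s1][band], sefd[s2][band],
                bandwidth_hz, eta, margin))
    return min(max(worst, min_dur), max_dur)

# Toy example (SEFDs and correlated flux densities in Jy, assumed values):
sefd = {"Gs": {"A": 1300, "D": 2200}, "Wf": {"A": 1500, "D": 2600}, "K2": {"A": 1200, "D": 2000}}
flux = {"A": 0.9, "D": 0.6}
print(f"scan duration: {scan_duration(['Gs', 'Wf', 'K2'], flux, sefd):.1f} s")
```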
The VGOS-OPS strategy of observing each scan for 30 seconds independent of the source brightness and station sensitivity can be seen as a further increase in the error margin for the VGOS observation time calculation due to the increased SNR. However, while the error margins help to ensure that planned observations can be used in the analysis, they also result in increased and potentially unnecessary observation duration. Thus, higher error margins lead to a lower number of scans per hour. Furthermore, the longer observation duration also leads to more recorded data that needs to be transferred and processed, which is currently the major bottleneck in operational VGOS VLBI, limiting the number of sessions that can be observed per month. The increased observation duration reduces the number of scans that can be executed within one session. In summary, the error margin can be grouped into three areas: (1) a general error margin to compensate for unforeseen minor technical issues at the telescopes or other unforeseen factors such as radio frequency interference, (2) one to reflect errors in source flux density models and to cover the uncertainty in the predictions, and (3) one to reflect errors in station sensitivity models and to cover natural variations in the predictions. By improved modeling, (2) and (3) can be reduced while some amount of error margin from (1) will need to remain, e.g. since it is common that stations have small technical problems that result in sensitivity losses of some tens of percent in some bands which need to be compensated. Since the distinction between these three groups is only theoretical, the choice of (1) determines the accuracy level required for the SEFD and F models. In simple terms, if an operator decides to account for 50% total error margin, there is little need to spend significant resources to bring the accuracy of the SEFD and F models down to a few percent. The main objective is to ensure that the total error margin is reasonable to provide a good compromise between the observation success rate and the number of scans that can be observed. § APPENDIX B: SOURCE FLUX DENSITY TABLE Source flux density catalog at VGOS frequencies derived for the 2022 VGOS-R&D sessions. The columns represent the projected baseline length in km up to which the listed flux density values correspond. Please note that this catalog was derived by analyzing VGOS observations up to 2022. Since source brightness is known to vary over time, there is no guarantee that the provided values are accurate enough for operational VLBI scheduling past 2022. 
llrrrrrrrrrrrrr [km] 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 11000 12000 13000 Source [km] 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 11000 12000 13000 Source 9lsource flux density up to projected baseline length 4rContinued on next page 9lsource flux density up to projected baseline length [t]4*0003-066 A 2.24 2.06 1.92 1.84 1.80 1.76 1.70 1.61 1.49 1.35 1.22 1.08 0.95 B 3.36 3.19 3.01 2.82 2.62 2.41 2.19 1.95 1.67 1.37 1.08 0.78 0.48 C 3.62 3.43 3.21 2.96 2.71 2.45 2.19 1.91 1.62 1.30 0.99 0.68 0.36 D 3.42 3.18 2.89 2.58 2.25 1.92 1.61 1.33 1.07 0.84 0.61 0.37 0.14 [t]4*0016+731 A 0.94 0.88 0.81 0.75 0.68 0.61 0.54 0.46 0.38 0.30 0.22 0.15 0.07 B 1.07 1.02 0.95 0.88 0.79 0.70 0.61 0.53 0.47 0.41 0.36 0.31 0.27 C 1.17 1.09 1.00 0.91 0.81 0.73 0.65 0.59 0.54 0.50 0.46 0.42 0.38 D 1.20 1.11 1.02 0.93 0.86 0.80 0.75 0.71 0.68 0.66 0.65 0.63 0.61 [t]4*0035-252 A 0.53 0.53 0.52 0.52 0.51 0.50 0.49 0.47 0.45 0.43 0.40 0.38 0.35 B 0.57 0.57 0.56 0.56 0.55 0.54 0.52 0.51 0.48 0.46 0.43 0.41 0.38 C 0.57 0.57 0.56 0.56 0.55 0.54 0.52 0.51 0.48 0.46 0.43 0.41 0.38 D 0.37 0.37 0.37 0.37 0.37 0.36 0.35 0.33 0.32 0.30 0.29 0.27 0.25 [t]4*0059+581 A 2.17 1.99 1.79 1.60 1.41 1.23 1.07 0.91 0.77 0.63 0.49 0.36 0.23 B 2.60 2.42 2.21 1.99 1.76 1.53 1.31 1.12 0.94 0.78 0.62 0.46 0.30 C 2.77 2.54 2.30 2.05 1.81 1.58 1.38 1.20 1.05 0.91 0.77 0.63 0.50 D 2.96 2.70 2.43 2.17 1.95 1.76 1.61 1.48 1.36 1.25 1.14 1.02 0.91 [t]4*0106+013 A 2.20 2.00 1.86 1.80 1.77 1.73 1.61 1.43 1.22 1.01 0.80 0.58 0.37 B 2.85 2.74 2.63 2.49 2.34 2.17 1.99 1.79 1.57 1.35 1.14 0.92 0.70 C 3.22 3.13 3.00 2.82 2.60 2.35 2.08 1.79 1.49 1.18 0.88 0.58 0.27 D 3.54 3.43 3.29 3.10 2.88 2.63 2.35 2.05 1.71 1.38 1.04 0.71 0.37 [t]4*0109+224 A 0.51 0.49 0.47 0.46 0.45 0.45 0.45 0.45 0.45 0.45 0.45 0.45 0.46 B 0.89 0.88 0.86 0.85 0.84 0.82 0.81 0.79 0.77 0.75 0.74 0.72 0.70 C 1.06 1.03 1.00 0.98 0.97 0.95 0.94 0.92 0.91 0.90 0.88 0.87 0.85 D 1.25 1.22 1.19 1.16 1.14 1.12 1.10 1.08 1.06 1.05 1.03 1.01 0.99 [t]4*0113-118 A 0.96 0.91 0.85 0.80 0.75 0.71 0.68 0.64 0.61 0.57 0.54 0.50 0.47 B 1.17 1.14 1.09 1.01 0.90 0.78 0.66 0.53 0.41 0.28 0.16 0.03 0.03 C 1.25 1.19 1.10 0.98 0.84 0.69 0.55 0.42 0.29 0.16 0.03 0.03 0.03 D 1.22 1.14 1.03 0.90 0.77 0.67 0.61 0.58 0.56 0.55 0.54 0.52 0.51 [t]4*0115-214 A 0.39 0.37 0.35 0.33 0.32 0.30 0.29 0.27 0.26 0.25 0.23 0.22 0.20 B 0.39 0.39 0.38 0.38 0.37 0.36 0.36 0.35 0.35 0.35 0.34 0.34 0.33 C 0.40 0.39 0.38 0.37 0.36 0.35 0.34 0.33 0.31 0.30 0.29 0.27 0.26 D 0.38 0.38 0.38 0.37 0.37 0.36 0.35 0.35 0.35 0.34 0.34 0.33 0.33 [t]4*0133+476 A 1.16 1.10 1.03 0.97 0.91 0.84 0.77 0.69 0.60 0.52 0.45 0.38 0.30 B 1.46 1.42 1.37 1.30 1.22 1.14 1.06 0.97 0.89 0.80 0.72 0.64 0.56 C 1.57 1.51 1.44 1.36 1.28 1.19 1.10 1.00 0.90 0.80 0.69 0.59 0.48 D 1.65 1.57 1.48 1.39 1.29 1.20 1.10 1.00 0.89 0.76 0.62 0.47 0.33 [t]4*0201+113 A 0.58 0.57 0.56 0.55 0.53 0.52 0.50 0.48 0.47 0.45 0.43 0.41 0.40 B 0.49 0.48 0.46 0.45 0.44 0.43 0.41 0.40 0.39 0.37 0.36 0.34 0.33 C 0.42 0.41 0.40 0.39 0.38 0.37 0.36 0.34 0.33 0.32 0.31 0.29 0.28 D 0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.25 [t]4*0202+319 A 0.97 0.95 0.93 0.91 0.89 0.86 0.84 0.81 0.78 0.74 0.71 0.68 0.65 B 1.39 1.36 1.31 1.26 1.21 1.15 1.09 1.04 1.00 0.96 0.94 0.93 0.91 C 1.51 1.46 1.39 1.33 1.26 1.18 1.11 1.05 1.00 0.96 0.93 0.91 0.89 D 1.50 1.43 1.34 1.23 1.12 0.99 0.86 0.74 0.64 0.56 0.51 0.48 0.45 [t]4*0215+015 A 1.29 1.25 1.22 1.20 1.17 1.16 1.14 1.12 1.09 1.05 1.02 0.99 0.95 B 1.95 1.93 1.90 1.86 1.81 1.76 1.70 1.64 1.58 1.51 1.44 
1.37 1.30 C 2.17 2.13 2.09 2.04 1.99 1.93 1.88 1.82 1.76 1.68 1.61 1.54 1.47 D 2.38 2.34 2.29 2.24 2.17 2.09 2.00 1.90 1.77 1.62 1.48 1.33 1.19 [t]4*0234+285 A 1.09 0.97 0.86 0.75 0.66 0.58 0.51 0.45 0.40 0.34 0.29 0.23 0.18 B 1.00 0.93 0.87 0.80 0.74 0.68 0.63 0.57 0.51 0.44 0.38 0.31 0.25 C 1.07 1.02 0.96 0.91 0.86 0.80 0.73 0.64 0.53 0.41 0.28 0.15 0.03 D 1.08 1.06 1.04 1.01 0.98 0.93 0.85 0.75 0.62 0.47 0.31 0.16 0.03 [t]4*0235+164 A 1.39 1.36 1.32 1.29 1.25 1.22 1.19 1.14 1.10 1.05 1.00 0.96 0.91 B 1.54 1.51 1.46 1.40 1.32 1.23 1.13 1.02 0.91 0.79 0.67 0.55 0.43 C 1.56 1.50 1.43 1.35 1.25 1.15 1.04 0.92 0.79 0.66 0.53 0.40 0.27 D 1.41 1.33 1.23 1.12 1.01 0.90 0.79 0.68 0.58 0.47 0.36 0.26 0.15 [t]4*0237-027 A 0.31 0.30 0.30 0.29 0.28 0.28 0.28 0.27 0.27 0.26 0.26 0.25 0.25 B 0.39 0.39 0.37 0.36 0.34 0.32 0.31 0.29 0.28 0.27 0.25 0.24 0.23 C 0.43 0.42 0.41 0.40 0.38 0.36 0.34 0.32 0.31 0.29 0.27 0.25 0.23 D 0.42 0.42 0.42 0.41 0.39 0.38 0.36 0.34 0.32 0.30 0.28 0.26 0.24 [t]4*0300+470 A 1.11 1.01 0.90 0.80 0.71 0.63 0.55 0.49 0.43 0.36 0.30 0.24 0.18 B 0.99 0.90 0.81 0.71 0.63 0.55 0.49 0.43 0.38 0.33 0.29 0.24 0.19 C 0.89 0.80 0.71 0.62 0.54 0.47 0.42 0.36 0.32 0.28 0.23 0.19 0.14 D 0.72 0.64 0.57 0.50 0.43 0.38 0.33 0.28 0.23 0.18 0.13 0.08 0.03 [t]4*0332-403 A 0.71 0.71 0.72 0.74 0.76 0.78 0.80 0.81 0.80 0.77 0.73 0.69 0.64 B 0.45 0.46 0.49 0.52 0.57 0.63 0.68 0.71 0.72 0.70 0.67 0.63 0.59 C 0.30 0.31 0.34 0.38 0.43 0.49 0.55 0.59 0.60 0.58 0.54 0.50 0.46 D 0.24 0.24 0.25 0.27 0.29 0.32 0.34 0.36 0.35 0.34 0.32 0.29 0.26 [t]4*0338-214 A 0.36 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.24 B 0.34 0.33 0.32 0.31 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.24 0.23 C 0.33 0.32 0.31 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.24 0.23 0.22 D 0.32 0.31 0.30 0.30 0.29 0.28 0.27 0.26 0.25 0.24 0.23 0.22 0.21 [t]4*0345+460 A 0.33 0.31 0.30 0.28 0.27 0.26 0.25 0.24 0.23 0.22 0.22 0.21 0.20 B 0.49 0.46 0.44 0.42 0.40 0.38 0.37 0.35 0.34 0.33 0.32 0.31 0.30 C 0.54 0.52 0.50 0.49 0.47 0.45 0.44 0.42 0.40 0.39 0.37 0.36 0.34 D 0.52 0.52 0.51 0.50 0.49 0.48 0.46 0.45 0.43 0.41 0.39 0.37 0.35 [t]4*0346-279 A 0.82 0.83 0.85 0.88 0.92 0.98 1.04 1.09 1.11 1.10 1.06 1.00 0.93 B 1.54 1.54 1.53 1.52 1.51 1.49 1.47 1.45 1.41 1.36 1.28 1.19 1.08 C 1.48 1.49 1.49 1.49 1.49 1.48 1.47 1.45 1.40 1.34 1.24 1.13 1.00 D 1.35 1.33 1.31 1.26 1.20 1.11 1.03 0.94 0.85 0.75 0.64 0.52 0.40 [t]4*0403-132 A 0.44 0.42 0.41 0.39 0.38 0.37 0.36 0.35 0.34 0.33 0.31 0.30 0.29 B 0.57 0.58 0.58 0.58 0.57 0.56 0.54 0.52 0.50 0.48 0.46 0.44 0.42 C 0.71 0.71 0.70 0.70 0.68 0.65 0.62 0.59 0.55 0.51 0.47 0.43 0.39 D 0.83 0.84 0.86 0.86 0.85 0.82 0.79 0.74 0.70 0.65 0.60 0.56 0.51 [t]4*0420+022 A 0.84 0.80 0.77 0.73 0.69 0.65 0.61 0.57 0.53 0.49 0.45 0.41 0.37 B 0.83 0.80 0.76 0.72 0.68 0.64 0.61 0.57 0.54 0.52 0.49 0.46 0.43 C 0.78 0.75 0.71 0.67 0.63 0.60 0.57 0.54 0.51 0.49 0.46 0.43 0.41 D 0.61 0.58 0.55 0.51 0.47 0.42 0.38 0.35 0.31 0.28 0.25 0.22 0.18 [t]4*0420-014 A 1.91 1.81 1.71 1.63 1.57 1.53 1.50 1.46 1.43 1.40 1.37 1.34 1.31 B 2.51 2.49 2.44 2.38 2.31 2.25 2.18 2.13 2.08 2.03 1.97 1.92 1.87 C 2.99 2.96 2.90 2.83 2.74 2.64 2.52 2.39 2.26 2.12 1.99 1.85 1.72 D 3.64 3.66 3.64 3.57 3.45 3.29 3.10 2.89 2.67 2.44 2.22 2.00 1.78 [t]4*0454-234 A 2.12 2.04 1.94 1.84 1.72 1.59 1.46 1.34 1.23 1.12 1.01 0.87 0.74 B 2.10 2.02 1.93 1.83 1.70 1.57 1.44 1.31 1.19 1.07 0.93 0.77 0.61 C 2.21 2.08 1.94 1.79 1.63 1.49 1.35 1.24 1.14 1.04 0.93 0.80 0.66 D 1.93 1.79 1.62 1.46 1.29 1.13 0.97 0.84 0.72 0.61 0.51 0.41 0.31 [t]4*0458-020 A 1.41 1.36 
1.31 1.28 1.25 1.23 1.20 1.17 1.13 1.09 1.06 1.02 0.98 B 1.63 1.61 1.58 1.52 1.44 1.37 1.28 1.20 1.12 1.04 0.96 0.88 0.80 C 1.87 1.83 1.76 1.68 1.58 1.48 1.38 1.28 1.17 1.07 0.97 0.87 0.76 D 2.12 2.08 2.02 1.94 1.84 1.73 1.62 1.52 1.42 1.32 1.22 1.12 1.02 [t]4*0529+483 A 0.63 0.59 0.56 0.53 0.50 0.48 0.45 0.42 0.40 0.37 0.35 0.32 0.30 B 0.64 0.61 0.58 0.55 0.52 0.48 0.45 0.43 0.41 0.40 0.39 0.37 0.36 C 0.65 0.62 0.58 0.54 0.50 0.46 0.43 0.40 0.38 0.36 0.34 0.32 0.30 D 0.59 0.54 0.50 0.45 0.40 0.36 0.33 0.31 0.29 0.27 0.25 0.23 0.21 [t]4*0537-286 A 0.81 0.80 0.79 0.78 0.76 0.73 0.70 0.67 0.63 0.60 0.56 0.52 0.48 B 1.61 1.61 1.61 1.59 1.57 1.53 1.48 1.42 1.36 1.28 1.21 1.12 1.04 C 1.54 1.55 1.57 1.60 1.62 1.63 1.62 1.58 1.53 1.45 1.37 1.27 1.18 D 1.50 1.51 1.51 1.51 1.49 1.47 1.43 1.38 1.32 1.25 1.18 1.10 1.02 [t]4*0552+398 A 2.88 2.78 2.66 2.51 2.33 2.11 1.85 1.53 1.16 0.76 0.32 0.03 0.03 B 3.25 2.99 2.67 2.30 1.89 1.47 1.08 0.72 0.40 0.13 0.03 0.03 0.03 C 3.07 2.71 2.30 1.87 1.44 1.06 0.75 0.52 0.36 0.24 0.13 0.03 0.03 D 2.16 1.82 1.47 1.13 0.86 0.66 0.52 0.43 0.34 0.25 0.13 0.03 0.03 [t]4*0602+405 A 0.61 0.59 0.57 0.56 0.54 0.52 0.49 0.46 0.42 0.38 0.34 0.29 0.24 B 0.81 0.80 0.80 0.79 0.77 0.74 0.70 0.66 0.60 0.54 0.48 0.41 0.34 C 0.88 0.87 0.86 0.83 0.81 0.77 0.72 0.67 0.61 0.54 0.46 0.39 0.32 D 0.86 0.85 0.83 0.81 0.78 0.73 0.67 0.59 0.50 0.39 0.27 0.15 0.04 [t]4*0602+673 A 0.41 0.36 0.31 0.27 0.23 0.21 0.20 0.20 0.20 0.20 0.21 0.21 0.21 B 0.50 0.46 0.43 0.40 0.38 0.37 0.36 0.36 0.37 0.37 0.37 0.38 0.38 C 0.54 0.51 0.49 0.46 0.45 0.43 0.42 0.41 0.41 0.40 0.39 0.38 0.37 D 0.59 0.57 0.55 0.53 0.52 0.51 0.50 0.49 0.48 0.46 0.44 0.43 0.41 [t]4*0606-223 A 0.67 0.65 0.62 0.58 0.54 0.49 0.44 0.38 0.31 0.25 0.19 0.13 0.07 B 0.88 0.86 0.83 0.80 0.76 0.72 0.68 0.64 0.60 0.56 0.52 0.48 0.44 C 0.88 0.85 0.82 0.78 0.75 0.70 0.67 0.63 0.59 0.56 0.52 0.48 0.45 D 0.80 0.77 0.73 0.69 0.64 0.59 0.54 0.49 0.43 0.38 0.33 0.27 0.22 [t]4*0607-157 A 2.82 2.67 2.52 2.42 2.34 2.29 2.25 2.21 2.17 2.12 2.08 2.04 2.00 B 3.53 3.49 3.42 3.30 3.13 2.94 2.74 2.54 2.34 2.15 1.95 1.75 1.56 C 3.54 3.43 3.28 3.07 2.81 2.53 2.22 1.89 1.57 1.25 0.92 0.60 0.27 D 3.18 2.96 2.67 2.30 1.89 1.46 1.03 0.61 0.20 0.03 0.03 0.03 0.03 [t]4*0613+570 A 0.91 0.89 0.88 0.87 0.86 0.86 0.85 0.84 0.82 0.80 0.78 0.76 0.74 B 1.38 1.35 1.33 1.29 1.26 1.22 1.18 1.14 1.10 1.05 1.01 0.97 0.93 C 1.50 1.46 1.42 1.37 1.32 1.27 1.23 1.19 1.16 1.13 1.09 1.06 1.03 D 1.59 1.52 1.46 1.40 1.34 1.29 1.25 1.22 1.19 1.16 1.13 1.10 1.07 [t]4*0627-199 A 0.50 0.48 0.47 0.46 0.45 0.44 0.43 0.41 0.37 0.34 0.29 0.25 0.21 B 0.64 0.63 0.63 0.62 0.60 0.58 0.55 0.52 0.48 0.44 0.40 0.36 0.33 C 0.70 0.69 0.67 0.64 0.61 0.58 0.55 0.52 0.49 0.47 0.46 0.44 0.42 D 0.70 0.68 0.65 0.61 0.56 0.50 0.44 0.39 0.34 0.31 0.28 0.25 0.22 [t]4*0632-235 A 0.60 0.56 0.52 0.47 0.43 0.39 0.35 0.31 0.27 0.23 0.19 0.15 0.11 B 0.59 0.56 0.53 0.49 0.44 0.39 0.35 0.30 0.26 0.21 0.17 0.13 0.08 C 0.61 0.58 0.54 0.49 0.44 0.39 0.34 0.30 0.26 0.22 0.18 0.14 0.09 D 0.59 0.54 0.49 0.43 0.37 0.31 0.26 0.20 0.15 0.09 0.04 0.03 0.03 [t]4*0642+449 A 1.53 1.40 1.26 1.10 0.94 0.77 0.61 0.46 0.31 0.17 0.03 0.03 0.03 B 1.50 1.28 1.03 0.78 0.54 0.35 0.24 0.19 0.19 0.21 0.23 0.25 0.28 C 1.37 1.14 0.90 0.68 0.51 0.41 0.37 0.36 0.38 0.39 0.41 0.43 0.45 D 1.22 1.09 0.96 0.84 0.73 0.63 0.54 0.47 0.40 0.34 0.28 0.21 0.15 [t]4*0648-165 A 1.65 1.51 1.38 1.26 1.17 1.09 1.03 0.97 0.91 0.85 0.80 0.74 0.68 B 2.10 2.04 1.95 1.85 1.73 1.62 1.52 1.45 1.38 1.31 1.24 1.17 1.10 C 2.00 1.96 1.89 1.81 1.72 1.61 1.51 1.41 1.31 
1.20 1.10 1.00 0.90 D 1.91 1.88 1.82 1.73 1.61 1.49 1.36 1.24 1.13 1.01 0.90 0.78 0.67 [t]4*0700-197 A 0.69 0.64 0.59 0.55 0.51 0.48 0.45 0.44 0.43 0.42 0.41 0.40 0.40 B 0.90 0.88 0.85 0.81 0.76 0.72 0.69 0.66 0.63 0.60 0.58 0.55 0.53 C 0.92 0.90 0.87 0.84 0.79 0.75 0.70 0.66 0.63 0.59 0.55 0.52 0.48 D 0.87 0.87 0.87 0.85 0.82 0.79 0.74 0.68 0.62 0.55 0.49 0.43 0.37 [t]4*0716+714 A 0.50 0.47 0.45 0.43 0.42 0.41 0.40 0.39 0.38 0.37 0.35 0.34 0.33 B 0.54 0.53 0.51 0.49 0.47 0.46 0.44 0.43 0.42 0.41 0.39 0.38 0.37 C 0.59 0.56 0.54 0.51 0.49 0.47 0.45 0.43 0.41 0.39 0.36 0.33 0.31 D 0.57 0.53 0.50 0.47 0.45 0.43 0.42 0.41 0.39 0.38 0.36 0.34 0.32 [t]4*0723+219 A 0.33 0.32 0.31 0.29 0.29 0.28 0.27 0.26 0.25 0.24 0.23 0.22 0.21 B 0.41 0.40 0.39 0.39 0.38 0.38 0.37 0.36 0.36 0.35 0.35 0.34 0.34 C 0.46 0.45 0.44 0.42 0.41 0.40 0.38 0.37 0.35 0.34 0.33 0.31 0.30 D 0.46 0.46 0.46 0.45 0.45 0.45 0.44 0.44 0.43 0.43 0.43 0.42 0.42 [t]4*0727-115 A 2.34 2.20 2.07 1.94 1.84 1.75 1.67 1.59 1.52 1.46 1.39 1.33 1.26 B 2.68 2.58 2.47 2.36 2.24 2.11 1.99 1.87 1.75 1.63 1.52 1.39 1.27 C 2.68 2.57 2.44 2.31 2.16 2.02 1.86 1.70 1.52 1.35 1.17 1.01 0.85 D 2.21 2.11 1.99 1.85 1.70 1.52 1.33 1.13 0.91 0.70 0.50 0.33 0.17 [t]4*0736+017 A 1.12 1.06 1.00 0.95 0.92 0.89 0.87 0.86 0.83 0.80 0.75 0.68 0.61 B 1.30 1.26 1.21 1.16 1.12 1.07 1.03 0.97 0.89 0.80 0.67 0.54 0.39 C 1.33 1.27 1.22 1.16 1.11 1.05 0.97 0.89 0.78 0.66 0.52 0.36 0.20 D 1.23 1.17 1.09 1.00 0.90 0.80 0.68 0.55 0.43 0.30 0.19 0.09 0.03 [t]4*0738+491 A 0.46 0.45 0.45 0.43 0.42 0.39 0.37 0.36 0.35 0.34 0.34 0.33 0.33 B 0.48 0.48 0.48 0.47 0.46 0.45 0.44 0.44 0.44 0.44 0.43 0.43 0.43 C 0.57 0.56 0.55 0.53 0.52 0.50 0.50 0.50 0.51 0.52 0.54 0.55 0.56 D 0.58 0.55 0.53 0.50 0.49 0.49 0.49 0.50 0.51 0.52 0.53 0.53 0.54 [t]4*0748+126 A 0.79 0.68 0.59 0.52 0.47 0.43 0.39 0.35 0.30 0.25 0.20 0.15 0.10 B 0.97 0.90 0.82 0.76 0.70 0.65 0.61 0.58 0.56 0.54 0.52 0.50 0.48 C 1.07 0.98 0.90 0.82 0.75 0.68 0.62 0.58 0.54 0.50 0.47 0.43 0.39 D 1.04 0.95 0.85 0.74 0.63 0.51 0.38 0.26 0.15 0.03 0.03 0.03 0.03 [t]4*0754+100 A 0.64 0.60 0.57 0.55 0.52 0.50 0.46 0.41 0.35 0.30 0.24 0.18 0.12 B 0.77 0.73 0.69 0.63 0.57 0.50 0.43 0.36 0.28 0.21 0.13 0.05 0.03 C 0.82 0.77 0.72 0.65 0.60 0.54 0.49 0.44 0.38 0.32 0.26 0.20 0.14 D 0.83 0.79 0.76 0.74 0.72 0.71 0.71 0.70 0.68 0.67 0.65 0.63 0.61 [t]4*0805+410 A 0.62 0.61 0.59 0.58 0.58 0.57 0.56 0.55 0.54 0.53 0.52 0.50 0.49 B 0.79 0.78 0.76 0.74 0.72 0.69 0.66 0.62 0.59 0.55 0.51 0.48 0.44 C 0.83 0.80 0.78 0.75 0.72 0.69 0.65 0.61 0.57 0.53 0.49 0.45 0.41 D 0.80 0.77 0.73 0.69 0.64 0.59 0.53 0.46 0.38 0.30 0.23 0.15 0.07 [t]4*0805-077 A 0.67 0.65 0.63 0.61 0.60 0.59 0.58 0.57 0.56 0.55 0.54 0.53 0.53 B 0.95 0.94 0.93 0.92 0.89 0.88 0.86 0.85 0.85 0.84 0.84 0.83 0.82 C 1.17 1.16 1.15 1.13 1.10 1.06 1.02 0.98 0.94 0.90 0.86 0.82 0.78 D 1.39 1.39 1.37 1.35 1.30 1.25 1.20 1.14 1.08 1.02 0.96 0.90 0.84 [t]4*0808+019 A 0.21 0.20 0.19 0.18 0.18 0.18 0.19 0.19 0.20 0.20 0.21 0.21 0.22 B 0.30 0.29 0.28 0.27 0.26 0.26 0.25 0.26 0.26 0.26 0.27 0.27 0.27 C 0.33 0.32 0.31 0.30 0.28 0.27 0.26 0.25 0.25 0.24 0.23 0.22 0.21 D 0.42 0.42 0.41 0.40 0.39 0.38 0.37 0.35 0.34 0.33 0.31 0.30 0.29 [t]4*0812+367 A 0.61 0.59 0.56 0.54 0.53 0.52 0.50 0.47 0.43 0.38 0.34 0.29 0.25 B 0.70 0.68 0.66 0.65 0.63 0.59 0.54 0.45 0.35 0.23 0.11 0.03 0.03 C 0.67 0.65 0.63 0.61 0.58 0.54 0.47 0.36 0.23 0.10 0.03 0.03 0.03 D 0.58 0.56 0.54 0.52 0.49 0.45 0.39 0.30 0.20 0.08 0.03 0.03 0.03 [t]4*0814+425 A 0.95 0.84 0.74 0.64 0.56 0.51 0.49 0.48 0.48 0.47 0.47 0.46 0.46 B 
0.93 0.87 0.81 0.76 0.72 0.68 0.66 0.64 0.62 0.60 0.59 0.58 0.56 C 0.91 0.85 0.80 0.75 0.71 0.68 0.65 0.62 0.59 0.56 0.54 0.51 0.49 D 0.79 0.76 0.73 0.70 0.67 0.64 0.60 0.55 0.50 0.45 0.41 0.37 0.32 [t]4*0823+033 A 1.27 1.21 1.15 1.10 1.06 1.03 1.00 0.98 0.95 0.93 0.91 0.89 0.87 B 1.53 1.48 1.43 1.38 1.33 1.28 1.23 1.17 1.11 1.06 1.00 0.94 0.89 C 1.54 1.48 1.42 1.36 1.30 1.24 1.18 1.12 1.06 1.00 0.94 0.89 0.83 D 1.47 1.41 1.34 1.27 1.19 1.11 1.04 0.97 0.90 0.84 0.77 0.71 0.64 [t]4*0827+243 A 0.45 0.42 0.38 0.35 0.33 0.30 0.28 0.25 0.21 0.17 0.13 0.10 0.06 B 0.45 0.43 0.41 0.39 0.36 0.33 0.30 0.26 0.22 0.18 0.14 0.10 0.06 C 0.46 0.43 0.41 0.37 0.34 0.30 0.26 0.23 0.19 0.16 0.12 0.09 0.06 D 0.40 0.39 0.37 0.35 0.32 0.30 0.26 0.22 0.18 0.14 0.11 0.07 0.04 [t]4*0847-120 A 0.30 0.29 0.27 0.26 0.25 0.24 0.23 0.22 0.21 0.20 0.19 0.18 0.17 B 0.32 0.31 0.31 0.29 0.28 0.27 0.26 0.24 0.23 0.22 0.20 0.19 0.18 C 0.32 0.32 0.31 0.30 0.29 0.27 0.25 0.23 0.21 0.19 0.17 0.15 0.13 D 0.30 0.29 0.28 0.26 0.24 0.23 0.21 0.19 0.18 0.16 0.15 0.13 0.11 [t]4*0917+449 A 0.70 0.65 0.59 0.54 0.50 0.46 0.41 0.36 0.31 0.25 0.19 0.13 0.07 B 0.92 0.87 0.82 0.76 0.70 0.64 0.57 0.51 0.44 0.36 0.29 0.22 0.15 C 1.07 1.01 0.95 0.88 0.82 0.76 0.69 0.62 0.55 0.48 0.40 0.33 0.25 D 1.28 1.24 1.20 1.15 1.09 1.02 0.95 0.88 0.79 0.71 0.62 0.53 0.44 [t]4*0917+624 A 0.49 0.42 0.36 0.30 0.27 0.25 0.23 0.22 0.21 0.19 0.18 0.16 0.15 B 0.49 0.45 0.40 0.36 0.33 0.31 0.29 0.27 0.26 0.25 0.25 0.24 0.24 C 0.50 0.45 0.40 0.36 0.33 0.30 0.29 0.28 0.26 0.25 0.24 0.23 0.22 D 0.49 0.46 0.43 0.40 0.37 0.35 0.33 0.31 0.29 0.27 0.25 0.23 0.21 [t]4*0954+658 A 0.60 0.56 0.51 0.48 0.45 0.43 0.42 0.41 0.41 0.40 0.40 0.39 0.39 B 0.69 0.66 0.64 0.62 0.60 0.59 0.58 0.58 0.58 0.58 0.58 0.58 0.59 C 0.84 0.81 0.77 0.74 0.71 0.69 0.67 0.65 0.63 0.61 0.59 0.57 0.55 D 0.92 0.90 0.88 0.86 0.83 0.81 0.78 0.75 0.72 0.70 0.68 0.65 0.63 [t]4*0955+476 A 0.71 0.68 0.65 0.62 0.59 0.57 0.55 0.53 0.50 0.47 0.44 0.41 0.37 B 0.69 0.68 0.65 0.62 0.59 0.56 0.52 0.49 0.47 0.44 0.41 0.39 0.36 C 0.70 0.67 0.63 0.59 0.55 0.51 0.47 0.43 0.38 0.33 0.27 0.22 0.16 D 0.69 0.63 0.58 0.51 0.45 0.38 0.33 0.28 0.24 0.21 0.18 0.14 0.11 [t]4*1034-293 A 0.61 0.61 0.60 0.59 0.58 0.57 0.55 0.53 0.51 0.48 0.46 0.44 0.41 B 0.89 0.89 0.88 0.87 0.85 0.83 0.80 0.77 0.74 0.71 0.67 0.64 0.60 C 0.83 0.83 0.82 0.81 0.79 0.77 0.75 0.72 0.69 0.66 0.63 0.59 0.56 D 0.80 0.80 0.79 0.78 0.76 0.74 0.72 0.69 0.66 0.63 0.60 0.57 0.54 [t]4*1036+054 A 0.55 0.53 0.52 0.51 0.50 0.50 0.50 0.49 0.49 0.48 0.47 0.47 0.46 B 0.88 0.88 0.88 0.86 0.84 0.82 0.80 0.78 0.77 0.76 0.75 0.74 0.73 C 0.91 0.91 0.91 0.90 0.88 0.86 0.83 0.80 0.77 0.74 0.70 0.67 0.64 D 0.97 0.98 0.97 0.95 0.93 0.89 0.85 0.81 0.76 0.71 0.66 0.62 0.57 [t]4*1038+064 A 1.03 0.99 0.95 0.91 0.88 0.85 0.82 0.78 0.75 0.72 0.68 0.65 0.61 B 1.07 1.05 1.01 0.96 0.90 0.83 0.76 0.70 0.64 0.58 0.52 0.45 0.39 C 0.99 0.95 0.91 0.84 0.77 0.69 0.61 0.52 0.44 0.35 0.27 0.18 0.10 D 0.82 0.78 0.73 0.66 0.57 0.49 0.41 0.33 0.25 0.17 0.09 0.03 0.03 [t]4*1040+244 A 0.54 0.52 0.51 0.50 0.49 0.48 0.47 0.46 0.44 0.41 0.39 0.36 0.33 B 0.68 0.67 0.65 0.63 0.60 0.57 0.54 0.51 0.47 0.43 0.39 0.35 0.31 C 0.69 0.67 0.64 0.61 0.58 0.54 0.51 0.47 0.44 0.41 0.37 0.34 0.30 D 0.60 0.57 0.54 0.50 0.46 0.41 0.36 0.32 0.27 0.23 0.19 0.14 0.10 [t]4*1044+719 A 0.83 0.77 0.70 0.64 0.59 0.55 0.52 0.50 0.48 0.47 0.45 0.43 0.42 B 1.02 0.98 0.94 0.89 0.85 0.82 0.80 0.79 0.78 0.78 0.77 0.77 0.77 C 1.20 1.16 1.11 1.07 1.03 0.99 0.96 0.94 0.92 0.90 0.88 0.87 0.85 D 1.39 1.36 1.33 1.30 1.27 1.24 1.21 
1.18 1.15 1.11 1.08 1.04 1.01 [t]4*1045-188 A 0.68 0.63 0.57 0.52 0.48 0.45 0.42 0.40 0.37 0.35 0.33 0.30 0.28 B 0.65 0.63 0.60 0.56 0.52 0.49 0.46 0.43 0.41 0.38 0.36 0.34 0.31 C 0.63 0.61 0.58 0.54 0.50 0.46 0.42 0.38 0.35 0.32 0.28 0.25 0.21 D 0.65 0.63 0.60 0.55 0.50 0.44 0.37 0.31 0.24 0.18 0.11 0.05 0.03 [t]4*1053+704 A 0.30 0.28 0.27 0.26 0.26 0.26 0.25 0.25 0.24 0.23 0.22 0.20 0.19 B 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.25 0.24 0.23 0.21 0.20 0.18 C 0.35 0.34 0.33 0.31 0.30 0.28 0.26 0.25 0.23 0.21 0.19 0.17 0.15 D 0.36 0.35 0.33 0.31 0.30 0.28 0.26 0.24 0.22 0.20 0.17 0.15 0.13 [t]4*1123+264 A 0.51 0.47 0.44 0.40 0.38 0.35 0.34 0.32 0.31 0.30 0.29 0.28 0.27 B 0.42 0.41 0.39 0.37 0.36 0.34 0.33 0.32 0.30 0.29 0.28 0.27 0.26 C 0.39 0.36 0.33 0.30 0.28 0.26 0.25 0.24 0.23 0.22 0.21 0.21 0.21 D 0.33 0.32 0.31 0.31 0.30 0.29 0.28 0.27 0.26 0.24 0.23 0.22 0.21 [t]4*1124-186 A 0.73 0.71 0.68 0.66 0.63 0.61 0.58 0.57 0.58 0.60 0.61 0.63 0.65 B 0.92 0.92 0.91 0.90 0.87 0.84 0.79 0.73 0.68 0.64 0.59 0.55 0.50 C 1.00 0.99 0.98 0.95 0.91 0.86 0.80 0.74 0.67 0.62 0.57 0.52 0.47 D 0.91 0.90 0.88 0.85 0.80 0.73 0.65 0.58 0.50 0.45 0.39 0.34 0.28 [t]4*1128+385 A 1.46 1.40 1.33 1.26 1.19 1.14 1.09 1.04 0.99 0.94 0.90 0.85 0.80 B 1.73 1.69 1.64 1.58 1.50 1.41 1.32 1.22 1.11 0.99 0.88 0.77 0.66 C 1.72 1.65 1.58 1.50 1.41 1.32 1.22 1.12 1.01 0.89 0.78 0.66 0.55 D 1.56 1.50 1.43 1.35 1.26 1.17 1.07 0.96 0.83 0.70 0.57 0.43 0.30 [t]4*1144+402 A 1.08 1.05 1.01 0.96 0.92 0.87 0.82 0.75 0.68 0.61 0.54 0.46 0.38 B 1.16 1.10 1.04 0.98 0.91 0.85 0.80 0.75 0.70 0.66 0.63 0.60 0.56 C 1.18 1.11 1.04 0.98 0.92 0.87 0.82 0.78 0.74 0.69 0.64 0.58 0.52 D 1.05 1.00 0.94 0.89 0.83 0.78 0.73 0.68 0.62 0.56 0.49 0.42 0.35 [t]4*1145+268 A 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.24 0.20 0.15 0.10 0.05 0.03 B 0.33 0.31 0.30 0.29 0.28 0.27 0.27 0.26 0.26 0.27 0.27 0.27 0.27 C 0.35 0.33 0.31 0.29 0.27 0.26 0.26 0.26 0.27 0.28 0.30 0.31 0.32 D 0.28 0.27 0.26 0.25 0.24 0.22 0.21 0.21 0.21 0.22 0.23 0.24 0.25 [t]4*1149-084 A 0.81 0.77 0.73 0.71 0.69 0.68 0.67 0.65 0.64 0.62 0.61 0.59 0.58 B 0.88 0.86 0.84 0.81 0.76 0.71 0.66 0.60 0.54 0.48 0.42 0.36 0.30 C 0.89 0.86 0.82 0.77 0.71 0.65 0.59 0.52 0.46 0.39 0.32 0.26 0.19 D 0.86 0.80 0.74 0.66 0.58 0.49 0.40 0.30 0.21 0.12 0.03 0.03 0.03 [t]4*1156+295 A 1.44 1.41 1.39 1.38 1.37 1.36 1.31 1.21 1.03 0.80 0.56 0.32 0.08 B 2.66 2.66 2.66 2.65 2.61 2.56 2.46 2.27 1.98 1.60 1.21 0.83 0.44 C 3.24 3.23 3.20 3.16 3.11 3.03 2.90 2.70 2.38 1.97 1.56 1.15 0.74 D 3.99 3.95 3.90 3.83 3.74 3.61 3.41 3.12 2.72 2.20 1.69 1.17 0.66 [t]4*1213-172 A 1.17 1.08 1.00 0.91 0.84 0.77 0.70 0.62 0.55 0.48 0.41 0.33 0.26 B 1.71 1.63 1.55 1.45 1.35 1.23 1.10 0.97 0.82 0.67 0.52 0.37 0.22 C 1.76 1.66 1.55 1.42 1.29 1.16 1.02 0.87 0.71 0.56 0.40 0.24 0.08 D 1.55 1.40 1.23 1.05 0.88 0.72 0.59 0.48 0.39 0.32 0.25 0.18 0.11 [t]4*1219+044 A 0.31 0.29 0.28 0.28 0.27 0.27 0.26 0.26 0.25 0.25 0.25 0.24 0.24 B 0.51 0.51 0.51 0.50 0.49 0.49 0.48 0.47 0.47 0.47 0.46 0.46 0.46 C 0.65 0.64 0.64 0.63 0.62 0.61 0.59 0.58 0.56 0.55 0.53 0.52 0.51 D 0.75 0.76 0.76 0.76 0.75 0.73 0.71 0.69 0.66 0.64 0.62 0.59 0.57 [t]4*1243-072 A 0.46 0.44 0.41 0.38 0.36 0.33 0.31 0.28 0.25 0.22 0.19 0.17 0.15 B 0.65 0.62 0.58 0.54 0.50 0.45 0.41 0.37 0.35 0.34 0.35 0.36 0.38 C 0.71 0.67 0.63 0.58 0.53 0.48 0.43 0.39 0.36 0.34 0.34 0.35 0.36 D 0.74 0.70 0.65 0.60 0.54 0.49 0.44 0.39 0.35 0.32 0.29 0.25 0.22 [t]4*1244-255 A 0.70 0.65 0.59 0.53 0.48 0.42 0.38 0.35 0.33 0.31 0.28 0.24 0.20 B 0.63 0.61 0.58 0.54 0.50 0.45 0.40 0.37 0.34 0.33 0.31 
0.30 0.29 C 0.70 0.67 0.63 0.58 0.52 0.47 0.42 0.39 0.37 0.36 0.37 0.37 0.37 D 0.70 0.67 0.63 0.58 0.52 0.45 0.38 0.32 0.27 0.24 0.22 0.20 0.18 [t]4*1306+360 A 0.39 0.37 0.37 0.37 0.38 0.38 0.37 0.36 0.35 0.34 0.33 0.31 0.30 B 0.79 0.79 0.79 0.80 0.80 0.81 0.81 0.82 0.82 0.83 0.83 0.84 0.84 C 0.95 0.95 0.94 0.92 0.90 0.88 0.86 0.84 0.82 0.80 0.78 0.76 0.73 D 1.07 1.06 1.04 1.02 0.99 0.97 0.95 0.93 0.91 0.89 0.87 0.85 0.83 [t]4*1308+326 A 1.18 1.10 1.00 0.87 0.75 0.62 0.49 0.36 0.23 0.10 0.03 0.03 0.03 B 1.28 1.27 1.25 1.23 1.21 1.19 1.16 1.14 1.12 1.10 1.07 1.05 1.03 C 1.33 1.27 1.19 1.10 0.99 0.89 0.78 0.68 0.57 0.47 0.36 0.26 0.15 D 1.37 1.26 1.11 0.95 0.77 0.58 0.40 0.22 0.04 0.03 0.03 0.03 0.03 [t]4*1308+328 A 0.42 0.41 0.41 0.40 0.40 0.39 0.39 0.39 0.38 0.38 0.38 0.37 0.37 B 0.49 0.48 0.47 0.46 0.44 0.43 0.42 0.41 0.39 0.37 0.35 0.33 0.31 C 0.47 0.46 0.45 0.44 0.42 0.41 0.40 0.38 0.36 0.34 0.31 0.29 0.26 D 0.43 0.41 0.39 0.37 0.35 0.33 0.30 0.27 0.24 0.19 0.15 0.10 0.05 [t]4*1324+224 A 0.38 0.36 0.34 0.32 0.31 0.31 0.30 0.29 0.28 0.27 0.26 0.26 0.25 B 0.39 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.30 0.29 0.27 0.25 0.24 C 0.38 0.37 0.36 0.35 0.33 0.32 0.31 0.29 0.28 0.26 0.25 0.23 0.22 D 0.33 0.33 0.31 0.30 0.29 0.27 0.25 0.23 0.21 0.18 0.16 0.13 0.11 [t]4*1334-127 A 1.69 1.54 1.38 1.24 1.11 1.01 0.93 0.85 0.78 0.71 0.64 0.57 0.50 B 1.76 1.67 1.57 1.46 1.37 1.29 1.23 1.19 1.15 1.11 1.08 1.04 1.00 C 1.71 1.65 1.57 1.49 1.42 1.35 1.29 1.24 1.18 1.13 1.08 1.02 0.97 D 1.59 1.56 1.52 1.48 1.42 1.35 1.28 1.20 1.13 1.05 0.97 0.90 0.82 [t]4*1351-018 A 0.53 0.50 0.48 0.45 0.41 0.38 0.33 0.28 0.22 0.16 0.11 0.05 0.03 B 0.51 0.49 0.47 0.45 0.42 0.40 0.37 0.34 0.31 0.28 0.25 0.22 0.20 C 0.49 0.47 0.44 0.42 0.39 0.37 0.34 0.33 0.31 0.30 0.28 0.27 0.25 D 0.38 0.35 0.33 0.30 0.27 0.24 0.22 0.19 0.17 0.15 0.13 0.10 0.08 [t]4*1406-076 A 0.86 0.81 0.74 0.68 0.62 0.57 0.51 0.46 0.42 0.38 0.35 0.32 0.29 B 0.99 0.90 0.79 0.68 0.58 0.49 0.43 0.38 0.34 0.32 0.29 0.27 0.25 C 1.03 0.92 0.80 0.69 0.58 0.50 0.44 0.41 0.40 0.40 0.40 0.39 0.39 D 0.95 0.85 0.75 0.66 0.59 0.54 0.51 0.48 0.44 0.40 0.36 0.31 0.26 [t]4*1417+385 A 0.27 0.26 0.26 0.25 0.24 0.23 0.21 0.20 0.18 0.17 0.15 0.13 0.12 B 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.23 0.22 0.21 C 0.37 0.36 0.35 0.34 0.33 0.32 0.30 0.29 0.27 0.25 0.24 0.22 0.21 D 0.36 0.35 0.34 0.32 0.31 0.29 0.28 0.26 0.24 0.23 0.21 0.20 0.18 [t]4*1424-418 A 0.51 0.53 0.56 0.61 0.68 0.78 0.89 0.97 1.00 0.99 0.95 0.89 0.81 B 1.17 1.18 1.21 1.24 1.29 1.35 1.41 1.44 1.43 1.38 1.31 1.21 1.10 C 1.37 1.38 1.41 1.44 1.49 1.56 1.63 1.66 1.64 1.58 1.49 1.38 1.26 D 1.64 1.64 1.62 1.59 1.54 1.47 1.38 1.28 1.18 1.09 1.00 0.91 0.82 [t]4*1444+175 A 0.52 0.48 0.43 0.38 0.33 0.28 0.23 0.18 0.13 0.08 0.03 0.03 0.03 B 0.46 0.45 0.43 0.41 0.40 0.38 0.36 0.35 0.33 0.31 0.30 0.28 0.27 C 0.45 0.42 0.40 0.36 0.33 0.29 0.26 0.22 0.19 0.15 0.12 0.08 0.05 D 0.40 0.37 0.34 0.31 0.27 0.24 0.20 0.16 0.13 0.09 0.05 0.03 0.03 [t]4*1502+036 A 0.32 0.29 0.28 0.27 0.26 0.26 0.26 0.25 0.25 0.24 0.24 0.24 0.23 B 0.40 0.39 0.37 0.36 0.34 0.33 0.32 0.32 0.32 0.32 0.31 0.31 0.31 C 0.42 0.41 0.39 0.37 0.35 0.33 0.31 0.29 0.28 0.26 0.24 0.23 0.21 D 0.41 0.40 0.39 0.37 0.35 0.33 0.31 0.29 0.27 0.25 0.23 0.21 0.19 [t]4*1502+106 A 0.53 0.47 0.41 0.36 0.32 0.28 0.24 0.20 0.16 0.13 0.09 0.05 0.03 B 0.56 0.51 0.46 0.40 0.36 0.32 0.30 0.28 0.27 0.25 0.24 0.22 0.21 C 0.54 0.50 0.45 0.40 0.37 0.34 0.33 0.32 0.32 0.31 0.31 0.30 0.30 D 0.52 0.50 0.47 0.45 0.43 0.42 0.41 0.40 0.39 0.38 0.37 0.36 0.35 [t]4*1504+377 A 0.41 0.39 
0.36 0.33 0.31 0.29 0.27 0.26 0.25 0.25 0.24 0.24 0.23 B 0.53 0.51 0.48 0.45 0.43 0.41 0.40 0.39 0.38 0.37 0.36 0.35 0.34 C 0.54 0.52 0.50 0.48 0.46 0.44 0.43 0.42 0.41 0.40 0.39 0.38 0.37 D 0.50 0.49 0.48 0.47 0.46 0.44 0.43 0.41 0.39 0.36 0.34 0.32 0.29 [t]4*1519-273 A 0.64 0.59 0.54 0.49 0.44 0.40 0.37 0.34 0.31 0.28 0.25 0.22 0.19 B 0.60 0.57 0.54 0.51 0.48 0.44 0.41 0.38 0.35 0.32 0.28 0.25 0.22 C 0.63 0.60 0.56 0.52 0.48 0.44 0.40 0.37 0.33 0.29 0.25 0.21 0.18 D 0.47 0.44 0.41 0.37 0.33 0.29 0.26 0.22 0.19 0.16 0.12 0.09 0.06 [t]4*1538+149 A 0.35 0.33 0.32 0.31 0.30 0.30 0.28 0.26 0.23 0.19 0.15 0.11 0.07 B 0.56 0.55 0.54 0.53 0.51 0.49 0.47 0.43 0.37 0.31 0.25 0.19 0.13 C 0.58 0.58 0.57 0.55 0.53 0.51 0.48 0.43 0.38 0.32 0.26 0.20 0.14 D 0.68 0.67 0.66 0.64 0.61 0.58 0.53 0.48 0.41 0.33 0.26 0.18 0.10 [t]4*1546+027 A 2.76 2.69 2.63 2.59 2.56 2.54 2.51 2.47 2.43 2.38 2.34 2.29 2.25 B 2.96 2.95 2.90 2.82 2.70 2.55 2.40 2.24 2.08 1.93 1.77 1.61 1.46 C 2.86 2.80 2.70 2.58 2.42 2.22 2.00 1.75 1.51 1.26 1.01 0.76 0.51 D 2.50 2.35 2.17 1.96 1.70 1.42 1.12 0.82 0.52 0.22 0.03 0.03 0.03 [t]4*1606+106 A 0.97 0.92 0.87 0.83 0.79 0.76 0.74 0.71 0.68 0.66 0.64 0.61 0.59 B 1.02 0.98 0.94 0.90 0.85 0.81 0.76 0.71 0.66 0.61 0.56 0.51 0.46 C 0.99 0.94 0.90 0.86 0.82 0.77 0.73 0.69 0.64 0.59 0.55 0.50 0.46 D 0.83 0.80 0.76 0.71 0.67 0.61 0.55 0.49 0.41 0.34 0.26 0.18 0.11 [t]4*1614+051 A 0.75 0.70 0.65 0.60 0.55 0.51 0.46 0.42 0.38 0.34 0.31 0.27 0.23 B 0.82 0.76 0.69 0.63 0.57 0.52 0.50 0.50 0.50 0.51 0.52 0.52 0.53 C 0.64 0.59 0.54 0.49 0.45 0.43 0.42 0.42 0.43 0.44 0.45 0.46 0.46 D 0.42 0.39 0.37 0.33 0.30 0.26 0.21 0.15 0.09 0.03 0.03 0.03 0.03 [t]4*1622-253 A 0.79 0.75 0.72 0.69 0.67 0.68 0.70 0.72 0.74 0.76 0.78 0.80 0.82 B 1.01 1.01 1.00 0.99 0.99 1.00 1.03 1.06 1.08 1.11 1.13 1.16 1.19 C 0.95 0.94 0.94 0.94 0.94 0.96 0.98 1.00 1.02 1.04 1.06 1.08 1.10 D 0.94 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.96 0.96 0.96 0.96 0.96 [t]4*1633+38 A 1.34 1.19 1.05 0.96 0.91 0.88 0.87 0.86 0.86 0.87 0.87 0.87 0.87 B 1.88 1.79 1.70 1.61 1.52 1.45 1.37 1.30 1.24 1.17 1.10 1.03 0.96 C 1.83 1.77 1.70 1.63 1.54 1.46 1.37 1.29 1.21 1.14 1.06 0.98 0.91 D 1.98 1.91 1.83 1.73 1.62 1.50 1.36 1.21 1.05 0.90 0.74 0.59 0.43 [t]4*1636+473 A 0.51 0.49 0.48 0.47 0.46 0.45 0.43 0.42 0.40 0.39 0.37 0.36 0.34 B 0.43 0.42 0.41 0.40 0.39 0.38 0.36 0.35 0.34 0.33 0.31 0.30 0.29 C 0.51 0.50 0.49 0.48 0.47 0.45 0.44 0.43 0.41 0.40 0.38 0.36 0.35 D 0.60 0.59 0.58 0.56 0.55 0.53 0.52 0.50 0.48 0.46 0.45 0.43 0.41 [t]4*1639+230 A 0.42 0.41 0.41 0.41 0.41 0.40 0.40 0.38 0.37 0.35 0.33 0.31 0.28 B 0.42 0.41 0.41 0.41 0.40 0.40 0.39 0.38 0.36 0.35 0.33 0.31 0.30 C 0.39 0.39 0.38 0.38 0.37 0.36 0.35 0.33 0.30 0.27 0.24 0.21 0.18 D 0.29 0.29 0.28 0.27 0.25 0.24 0.23 0.22 0.20 0.20 0.19 0.18 0.18 [t]4*1639-062 A 1.28 1.24 1.18 1.13 1.07 1.01 0.95 0.88 0.82 0.76 0.71 0.67 0.63 B 1.33 1.25 1.15 1.03 0.91 0.79 0.69 0.60 0.52 0.46 0.42 0.37 0.32 C 1.32 1.20 1.07 0.93 0.79 0.67 0.56 0.48 0.41 0.36 0.32 0.27 0.21 D 1.12 0.96 0.80 0.65 0.52 0.42 0.35 0.30 0.27 0.24 0.22 0.19 0.14 [t]4*1642+690 A 0.94 0.81 0.67 0.55 0.44 0.35 0.26 0.19 0.13 0.08 0.03 0.03 0.03 B 0.84 0.70 0.55 0.41 0.30 0.23 0.20 0.20 0.22 0.26 0.30 0.34 0.39 C 0.79 0.66 0.53 0.42 0.35 0.32 0.31 0.30 0.30 0.30 0.30 0.30 0.30 D 0.61 0.52 0.44 0.38 0.35 0.32 0.28 0.24 0.20 0.16 0.13 0.10 0.07 [t]4*1657-261 A 0.81 0.76 0.71 0.65 0.60 0.55 0.50 0.45 0.40 0.34 0.27 0.19 0.10 B 1.13 1.10 1.07 1.02 0.97 0.92 0.85 0.78 0.71 0.64 0.55 0.46 0.36 C 1.24 1.23 1.21 1.18 1.16 1.12 1.07 1.02 0.95 
0.86 0.76 0.64 0.52 D 1.14 1.15 1.15 1.15 1.14 1.11 1.07 1.00 0.91 0.78 0.63 0.47 0.30 [t]4*1705+018 A 0.77 0.73 0.70 0.67 0.65 0.63 0.62 0.62 0.62 0.61 0.61 0.60 0.60 B 0.80 0.79 0.77 0.74 0.71 0.68 0.64 0.60 0.56 0.51 0.47 0.43 0.39 C 0.75 0.73 0.71 0.68 0.65 0.61 0.56 0.50 0.45 0.39 0.33 0.27 0.21 D 0.64 0.62 0.59 0.55 0.51 0.46 0.40 0.35 0.29 0.24 0.18 0.13 0.07 [t]4*1726+455 A 0.57 0.50 0.43 0.37 0.31 0.27 0.25 0.25 0.27 0.30 0.33 0.35 0.38 B 0.63 0.56 0.49 0.43 0.38 0.34 0.33 0.33 0.35 0.38 0.40 0.43 0.46 C 0.65 0.58 0.51 0.45 0.41 0.39 0.38 0.39 0.40 0.42 0.44 0.46 0.48 D 0.63 0.59 0.55 0.52 0.50 0.49 0.49 0.48 0.48 0.47 0.47 0.47 0.46 [t]4*1732+389 A 0.61 0.58 0.54 0.51 0.48 0.45 0.43 0.40 0.37 0.34 0.31 0.28 0.25 B 0.75 0.73 0.70 0.67 0.63 0.60 0.58 0.55 0.52 0.50 0.47 0.45 0.42 C 0.76 0.74 0.70 0.67 0.64 0.60 0.57 0.54 0.52 0.50 0.47 0.45 0.43 D 0.76 0.73 0.70 0.67 0.63 0.59 0.54 0.49 0.44 0.38 0.33 0.28 0.22 [t]4*1739+522 A 1.10 1.02 0.94 0.87 0.79 0.73 0.67 0.61 0.56 0.49 0.42 0.35 0.28 B 1.26 1.18 1.09 0.99 0.89 0.80 0.73 0.68 0.63 0.60 0.56 0.52 0.48 C 1.25 1.16 1.05 0.95 0.85 0.78 0.73 0.70 0.68 0.67 0.67 0.67 0.66 D 1.18 1.11 1.03 0.96 0.89 0.83 0.78 0.73 0.69 0.65 0.62 0.59 0.55 [t]4*1741-038 A 2.85 2.78 2.71 2.64 2.55 2.46 2.33 2.16 1.93 1.67 1.39 1.13 0.89 B 3.92 3.82 3.69 3.53 3.33 3.11 2.86 2.61 2.35 2.09 1.85 1.63 1.41 C 4.14 3.98 3.80 3.59 3.36 3.11 2.83 2.55 2.26 1.96 1.66 1.37 1.09 D 3.84 3.63 3.39 3.13 2.84 2.53 2.21 1.89 1.58 1.29 1.01 0.75 0.49 [t]4*1746+470 A 0.40 0.40 0.39 0.39 0.39 0.38 0.37 0.36 0.35 0.35 0.34 0.33 0.32 B 0.47 0.47 0.46 0.44 0.43 0.42 0.40 0.38 0.36 0.34 0.32 0.30 0.28 C 0.46 0.46 0.45 0.44 0.43 0.42 0.40 0.38 0.36 0.34 0.32 0.30 0.28 D 0.46 0.45 0.44 0.42 0.40 0.38 0.35 0.33 0.30 0.28 0.25 0.23 0.21 [t]4*1749+096 A 1.05 0.99 0.94 0.91 0.89 0.88 0.87 0.87 0.88 0.88 0.88 0.89 0.89 B 1.44 1.42 1.40 1.37 1.35 1.32 1.29 1.26 1.23 1.20 1.17 1.14 1.11 C 1.59 1.56 1.53 1.50 1.47 1.43 1.39 1.35 1.31 1.27 1.22 1.18 1.13 D 1.68 1.65 1.62 1.58 1.55 1.52 1.48 1.44 1.40 1.36 1.32 1.28 1.24 [t]4*1751+288 A 1.39 1.33 1.27 1.23 1.18 1.14 1.11 1.08 1.05 1.03 1.02 1.02 1.02 B 1.87 1.83 1.78 1.74 1.70 1.67 1.63 1.59 1.53 1.47 1.39 1.31 1.22 C 1.94 1.89 1.84 1.80 1.75 1.70 1.65 1.60 1.54 1.45 1.32 1.18 1.02 D 1.78 1.74 1.68 1.61 1.53 1.44 1.34 1.23 1.09 0.92 0.72 0.51 0.29 [t]4*1803+784 A 1.43 1.33 1.23 1.14 1.06 0.99 0.92 0.85 0.77 0.69 0.62 0.54 0.47 B 1.61 1.50 1.38 1.26 1.15 1.06 0.99 0.93 0.88 0.83 0.78 0.73 0.68 C 1.72 1.58 1.44 1.31 1.20 1.12 1.06 1.02 0.99 0.96 0.92 0.89 0.86 D 1.57 1.44 1.32 1.21 1.14 1.08 1.04 0.99 0.95 0.90 0.85 0.80 0.75 [t]4*1821+107 A 0.44 0.43 0.43 0.42 0.42 0.41 0.40 0.40 0.39 0.38 0.37 0.36 0.35 B 0.45 0.45 0.44 0.43 0.41 0.39 0.38 0.36 0.34 0.32 0.30 0.28 0.26 C 0.42 0.41 0.40 0.39 0.37 0.35 0.33 0.31 0.29 0.27 0.25 0.23 0.21 D 0.33 0.33 0.33 0.33 0.32 0.30 0.28 0.25 0.22 0.20 0.17 0.14 0.12 [t]4*1823+568 A 0.53 0.48 0.43 0.39 0.35 0.32 0.28 0.25 0.23 0.21 0.19 0.18 0.16 B 0.57 0.54 0.51 0.47 0.44 0.40 0.36 0.33 0.29 0.26 0.23 0.20 0.17 C 0.53 0.49 0.46 0.42 0.39 0.34 0.30 0.25 0.21 0.18 0.15 0.13 0.10 D 0.50 0.45 0.41 0.36 0.32 0.28 0.23 0.19 0.16 0.12 0.09 0.06 0.03 [t]4*1846+322 A 0.38 0.38 0.37 0.36 0.36 0.35 0.35 0.33 0.31 0.29 0.26 0.24 0.22 B 0.41 0.40 0.39 0.38 0.37 0.36 0.34 0.31 0.28 0.25 0.22 0.19 0.15 C 0.40 0.39 0.38 0.36 0.35 0.33 0.31 0.29 0.27 0.24 0.21 0.18 0.15 D 0.37 0.36 0.35 0.33 0.31 0.30 0.28 0.25 0.23 0.20 0.17 0.14 0.11 [t]4*1849+670 A 0.68 0.64 0.61 0.58 0.56 0.54 0.53 0.52 0.51 0.51 0.51 0.51 0.51 B 
0.95 0.92 0.89 0.86 0.84 0.81 0.79 0.76 0.74 0.72 0.70 0.68 0.66 C 1.01 0.98 0.94 0.91 0.88 0.86 0.83 0.81 0.78 0.76 0.73 0.71 0.69 D 0.96 0.93 0.89 0.86 0.83 0.79 0.76 0.72 0.68 0.64 0.61 0.57 0.54 [t]4*1908-201 A 2.20 2.09 1.97 1.85 1.73 1.61 1.50 1.40 1.36 1.37 1.38 1.38 1.39 B 2.22 2.15 2.07 1.96 1.82 1.65 1.47 1.28 1.11 0.99 0.86 0.74 0.61 C 2.10 2.02 1.91 1.77 1.62 1.43 1.24 1.05 0.89 0.77 0.65 0.53 0.41 D 1.83 1.71 1.57 1.41 1.23 1.04 0.84 0.65 0.50 0.38 0.26 0.14 0.03 [t]4*1921-293 A 4.96 4.97 4.99 5.01 5.02 5.02 5.03 5.04 5.05 5.06 5.07 5.08 5.09 B 8.32 8.19 8.01 7.79 7.55 7.30 7.05 6.80 6.55 6.30 6.05 5.80 5.55 C 8.16 8.03 7.85 7.65 7.43 7.22 7.00 6.78 6.56 6.34 6.12 5.90 5.68 D 8.21 8.04 7.81 7.52 7.19 6.86 6.53 6.20 5.87 5.54 5.21 4.88 4.55 [t]4*1936-155 A 0.91 0.85 0.78 0.70 0.64 0.57 0.50 0.42 0.34 0.26 0.17 0.08 0.03 B 1.00 0.95 0.90 0.83 0.74 0.64 0.54 0.44 0.34 0.24 0.15 0.06 0.03 C 0.99 0.94 0.87 0.79 0.69 0.59 0.48 0.38 0.28 0.20 0.12 0.05 0.03 D 0.84 0.75 0.66 0.54 0.42 0.30 0.20 0.12 0.07 0.04 0.03 0.03 0.03 [t]4*1954-388 A 1.48 1.49 1.51 1.53 1.57 1.61 1.66 1.68 1.67 1.64 1.58 1.52 1.45 B 2.79 2.78 2.77 2.75 2.72 2.67 2.60 2.54 2.48 2.42 2.37 2.32 2.27 C 2.85 2.85 2.84 2.82 2.79 2.74 2.68 2.62 2.56 2.50 2.45 2.40 2.36 D 3.73 3.71 3.65 3.54 3.35 3.08 2.75 2.42 2.13 1.88 1.67 1.48 1.28 [t]4*1958-179 A 1.73 1.70 1.68 1.66 1.64 1.62 1.59 1.55 1.49 1.42 1.35 1.28 1.20 B 2.41 2.39 2.36 2.33 2.30 2.25 2.18 2.08 1.95 1.79 1.63 1.48 1.32 C 2.59 2.55 2.50 2.45 2.40 2.34 2.26 2.16 2.04 1.89 1.74 1.59 1.44 D 2.68 2.62 2.56 2.49 2.42 2.34 2.23 2.10 1.92 1.71 1.50 1.30 1.09 [t]4*2000+472 A 0.92 0.88 0.83 0.78 0.72 0.65 0.58 0.50 0.41 0.31 0.22 0.12 0.03 B 1.03 1.00 0.96 0.91 0.85 0.78 0.70 0.63 0.55 0.48 0.40 0.32 0.24 C 0.99 0.94 0.89 0.83 0.75 0.68 0.60 0.52 0.45 0.37 0.30 0.23 0.15 D 0.78 0.71 0.62 0.53 0.44 0.34 0.26 0.18 0.11 0.05 0.03 0.03 0.03 [t]4*2007+777 A 0.50 0.43 0.37 0.31 0.27 0.24 0.23 0.22 0.22 0.23 0.24 0.25 0.25 B 0.55 0.50 0.45 0.40 0.36 0.34 0.32 0.32 0.31 0.32 0.32 0.33 0.33 C 0.54 0.50 0.45 0.41 0.38 0.35 0.33 0.31 0.29 0.27 0.25 0.23 0.22 D 0.44 0.40 0.36 0.32 0.29 0.26 0.24 0.23 0.21 0.20 0.19 0.18 0.18 [t]4*2008-159 A 0.87 0.86 0.85 0.84 0.83 0.82 0.80 0.79 0.77 0.76 0.74 0.73 0.71 B 1.21 1.19 1.18 1.15 1.12 1.08 1.02 0.96 0.90 0.83 0.77 0.70 0.64 C 1.30 1.28 1.25 1.22 1.18 1.13 1.07 1.00 0.93 0.86 0.78 0.71 0.64 D 1.33 1.30 1.27 1.24 1.19 1.12 1.04 0.95 0.85 0.75 0.65 0.55 0.45 [t]4*2029+121 A 0.83 0.77 0.71 0.66 0.61 0.55 0.49 0.42 0.35 0.28 0.20 0.13 0.06 B 0.83 0.76 0.68 0.60 0.52 0.44 0.38 0.31 0.26 0.20 0.15 0.10 0.05 C 0.78 0.70 0.61 0.54 0.47 0.41 0.37 0.33 0.30 0.28 0.26 0.24 0.22 D 0.64 0.59 0.54 0.51 0.49 0.48 0.49 0.50 0.52 0.53 0.54 0.54 0.55 [t]4*2059+034 A 0.51 0.48 0.46 0.43 0.41 0.39 0.37 0.35 0.33 0.31 0.29 0.27 0.25 B 0.78 0.74 0.71 0.67 0.63 0.59 0.56 0.52 0.48 0.44 0.41 0.37 0.33 C 0.92 0.88 0.84 0.79 0.75 0.71 0.67 0.63 0.59 0.55 0.50 0.46 0.42 D 1.15 1.11 1.06 1.02 0.96 0.91 0.86 0.80 0.73 0.67 0.60 0.53 0.47 [t]4*2113+293 A 0.67 0.65 0.62 0.61 0.59 0.58 0.56 0.53 0.50 0.47 0.44 0.41 0.37 B 0.73 0.71 0.68 0.64 0.61 0.58 0.54 0.50 0.45 0.40 0.35 0.30 0.25 C 0.73 0.69 0.65 0.62 0.58 0.54 0.51 0.47 0.43 0.39 0.35 0.31 0.27 D 0.65 0.60 0.56 0.51 0.47 0.43 0.40 0.37 0.34 0.32 0.29 0.27 0.24 [t]4*2126-158 A 0.93 0.90 0.86 0.82 0.78 0.73 0.66 0.59 0.52 0.45 0.38 0.31 0.24 B 1.55 1.52 1.48 1.42 1.33 1.22 1.09 0.93 0.77 0.60 0.44 0.27 0.11 C 1.61 1.56 1.50 1.41 1.30 1.16 0.99 0.79 0.59 0.39 0.19 0.03 0.03 D 1.13 1.05 0.95 0.84 0.71 0.56 0.40 
0.24 0.07 0.03 0.03 0.03 0.03 [t]4*2144+092 A 0.68 0.65 0.62 0.60 0.58 0.56 0.54 0.51 0.47 0.42 0.38 0.33 0.28 B 0.61 0.58 0.55 0.51 0.46 0.41 0.36 0.31 0.27 0.22 0.18 0.14 0.09 C 0.58 0.54 0.49 0.44 0.39 0.33 0.27 0.21 0.15 0.09 0.03 0.03 0.03 D 0.45 0.40 0.35 0.29 0.23 0.18 0.14 0.10 0.06 0.03 0.03 0.03 0.03 [t]4*2201+171 A 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.32 0.31 0.31 0.31 0.31 0.31 B 0.52 0.51 0.50 0.49 0.47 0.46 0.45 0.43 0.42 0.41 0.39 0.38 0.37 C 0.52 0.51 0.50 0.49 0.48 0.46 0.45 0.43 0.41 0.39 0.37 0.34 0.32 D 0.49 0.49 0.48 0.47 0.46 0.44 0.42 0.40 0.38 0.35 0.32 0.29 0.25 [t]4*2214+241 A 0.48 0.46 0.44 0.42 0.40 0.39 0.36 0.32 0.28 0.23 0.17 0.12 0.07 B 0.61 0.60 0.59 0.57 0.55 0.52 0.49 0.46 0.43 0.40 0.37 0.34 0.31 C 0.64 0.63 0.62 0.60 0.57 0.54 0.52 0.49 0.47 0.45 0.42 0.40 0.38 D 0.62 0.60 0.58 0.56 0.53 0.51 0.50 0.49 0.50 0.52 0.53 0.55 0.56 [t]4*2215+150 A 0.52 0.52 0.51 0.51 0.51 0.50 0.49 0.47 0.45 0.43 0.40 0.37 0.34 B 0.57 0.57 0.56 0.54 0.52 0.50 0.47 0.44 0.41 0.38 0.35 0.32 0.30 C 0.55 0.54 0.52 0.50 0.48 0.46 0.43 0.39 0.34 0.30 0.25 0.20 0.15 D 0.43 0.41 0.39 0.37 0.34 0.31 0.28 0.25 0.21 0.17 0.12 0.07 0.03 [t]4*2227-088 A 1.47 1.43 1.38 1.34 1.30 1.26 1.22 1.16 1.10 1.03 0.97 0.91 0.85 B 1.55 1.54 1.51 1.46 1.39 1.32 1.23 1.14 1.05 0.95 0.86 0.77 0.67 C 1.50 1.48 1.44 1.39 1.31 1.23 1.14 1.04 0.93 0.83 0.72 0.62 0.52 D 1.41 1.38 1.33 1.27 1.20 1.12 1.02 0.93 0.83 0.73 0.63 0.53 0.43 [t]4*2229+695 A 0.75 0.73 0.71 0.70 0.68 0.67 0.66 0.63 0.60 0.56 0.53 0.49 0.45 B 0.77 0.75 0.72 0.69 0.65 0.60 0.55 0.49 0.43 0.37 0.31 0.25 0.19 C 0.76 0.72 0.68 0.63 0.57 0.52 0.46 0.40 0.34 0.29 0.24 0.18 0.13 D 0.58 0.52 0.45 0.38 0.31 0.26 0.21 0.18 0.15 0.13 0.10 0.08 0.05 [t]4*2255-282 A 1.28 1.58 1.80 1.92 1.97 1.95 1.90 1.80 1.67 1.53 1.38 1.23 1.08 B 1.58 2.10 2.49 2.69 2.74 2.67 2.51 2.28 2.00 1.68 1.35 1.02 0.71 C 1.60 2.04 2.35 2.48 2.47 2.38 2.22 2.02 1.78 1.52 1.26 0.99 0.75 D 1.52 1.98 2.31 2.46 2.45 2.34 2.16 1.92 1.64 1.34 1.02 0.71 0.42 [t]4*2318+049 A 0.57 0.55 0.54 0.53 0.51 0.50 0.48 0.45 0.40 0.35 0.29 0.24 0.18 B 0.76 0.75 0.74 0.71 0.68 0.64 0.61 0.56 0.51 0.46 0.40 0.34 0.28 C 0.78 0.76 0.74 0.71 0.67 0.64 0.60 0.55 0.50 0.45 0.40 0.34 0.28 D 0.68 0.66 0.63 0.59 0.54 0.49 0.44 0.38 0.32 0.25 0.18 0.11 0.04 [t]4*2319+317 A 0.28 0.26 0.25 0.23 0.23 0.22 0.21 0.21 0.20 0.20 0.20 0.20 0.19 B 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.31 0.31 0.31 0.31 0.31 C 0.41 0.40 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.32 0.31 0.31 0.30 D 0.41 0.41 0.40 0.40 0.39 0.38 0.37 0.35 0.34 0.33 0.31 0.30 0.29 [t]4*2325+093 A 0.37 0.34 0.31 0.29 0.28 0.27 0.27 0.27 0.26 0.26 0.26 0.26 0.25 B 0.50 0.48 0.47 0.45 0.44 0.42 0.40 0.38 0.36 0.34 0.32 0.30 0.27 C 0.59 0.58 0.56 0.55 0.52 0.50 0.48 0.45 0.42 0.39 0.37 0.34 0.31 D 0.80 0.80 0.79 0.78 0.76 0.74 0.71 0.66 0.61 0.56 0.52 0.47 0.42 [t]4*3C371 A 0.98 0.88 0.78 0.69 0.61 0.54 0.47 0.40 0.35 0.30 0.26 0.22 0.18 B 0.99 0.90 0.80 0.70 0.60 0.48 0.37 0.29 0.23 0.20 0.19 0.17 0.16 C 1.03 0.91 0.79 0.67 0.56 0.45 0.35 0.27 0.22 0.18 0.15 0.12 0.09 D 1.01 0.89 0.76 0.65 0.54 0.43 0.33 0.25 0.19 0.17 0.15 0.13 0.12 [t]4*3C418 A 1.80 1.65 1.49 1.32 1.16 0.99 0.83 0.66 0.50 0.33 0.17 0.03 0.03 B 1.59 1.43 1.24 1.04 0.84 0.66 0.50 0.36 0.26 0.16 0.07 0.03 0.03 C 1.57 1.38 1.17 0.97 0.78 0.62 0.49 0.39 0.30 0.22 0.14 0.06 0.03 D 1.38 1.18 1.00 0.84 0.72 0.62 0.54 0.48 0.44 0.40 0.38 0.35 0.33 [t]4*NRAO512 A 0.45 0.42 0.40 0.38 0.36 0.34 0.33 0.32 0.30 0.29 0.28 0.27 0.25 B 0.49 0.47 0.45 0.42 0.40 0.37 0.34 0.31 0.28 0.25 0.21 0.18 0.15 
C 0.50 0.47 0.45 0.42 0.39 0.36 0.33 0.30 0.27 0.24 0.21 0.18 0.15 D 0.45 0.43 0.41 0.38 0.36 0.33 0.31 0.28 0.25 0.22 0.20 0.17 0.14 § APPENDIX C: THEORETICAL (PREDICTED) SNR PER SESSION AND BASELINE § APPENDIX D: OBSERVED (RECONSTRUCTED) SNR PER SESSION AND BASELINE
http://arxiv.org/abs/2407.12307v1
20240717040534
Weakly-Supervised 3D Hand Reconstruction with Knowledge Prior and Uncertainty Guidance
[ "Yufei Zhang", "Jeffrey O. Kephart", "Qiang Ji" ]
cs.CV
[ "cs.CV" ]
3D Hand Reconstruction with Knowledge and Uncertainty Y. Zhang et al. ^1 Rensselaer Polytechnic Institute, ^2 IBM Research {zhangy76, jiq}@rpi.edu, kephart@us.ibm.com Weakly-Supervised 3D Hand Reconstruction with Knowledge Prior and Uncertainty Guidance Yufei Zhang1 Jeffrey O. Kephart2 Qiang Ji1 July 22, 2024 ====================================================================================== § ABSTRACT Fully-supervised monocular 3D hand reconstruction is often difficult because capturing the requisite 3D data entails deploying specialized equipment in a controlled environment. We introduce a weakly-supervised method that avoids such requirements by leveraging fundamental principles well-established in the understanding of the human hand's unique structure and functionality. Specifically, we systematically study hand knowledge from different sources, including biomechanics, functional anatomy, and physics. We effectively incorporate these valuable foundational insights into 3D hand reconstruction models through an appropriate set of differentiable training losses. This enables training solely with readily-obtainable 2D hand landmark annotations and eliminates the need for expensive 3D supervision. Moreover, we explicitly model the uncertainty that is inherent in image observations. We enhance the training process by exploiting a simple yet effective Negative Log-Likelihood (NLL) loss that incorporates uncertainty into the loss function. Through extensive experiments, we demonstrate that our method significantly outperforms state-of-the-art weakly-supervised methods. For example, our method achieves nearly a 21% performance improvement on the widely adopted FreiHAND dataset. § INTRODUCTION Reconstructing the 3D configuration of human hands has broad applications, especially for Virtual/Augmented Reality (VR/AR) <cit.> and Human-Computer Interaction (HCI) <cit.>. Traditional approaches rely on depth sensors <cit.> or multi-camera setups <cit.>. Due to their reliance on specialized equipment that is often expensive or unavailable, the practicality of such approaches is limited. We instead focus on monocular 3D hand reconstruction, reconstructing 3D hands from a single RGB image. Due to the lack of depth information in recovering 3D geometry from its 2D observation, monocular 3D hand reconstruction poses an ill-posed problem. Recent methods tackle this issue using deep learning models that predict 3D hand joint positions <cit.> or reconstruct a dense 3D hand mesh <cit.>. While these methods avoid the need for specialized equipment in constrained environments at inference time, they still rely upon it to obtain the 3D annotations required for training the deep models. The resulting limitations in the diversity and amount of data restrict the performance of these purely data-driven deep models. To address this challenge, some methods <cit.> leverage synthetically generated training images. The synthetic data are rich in quantity, but limited in the realism of the images and hand poses. As illustrated in Fig. <ref>(a), many synthetically generated poses in DARTset appear unnatural due to a lack of systematic consideration of well-established principles of hand structure, functionality and movement during the data generation process. Additionally, approaches based on generating synthetic data still require some real 3D data for further model fine-tuning. 
Other authors <cit.> have exploited weakly-supervised learning, whereby the models are trained on real images with 2D hand landmark annotations. The advantage of such approaches is that 2D hand landmark labels are much more readily acquired in practice than 3D annotations. Weakly-supervised 3D hand models are typically trained by minimizing two loss terms: (1) a prior term imposed on 3D hand prediction to encourage its realism under weak supervision, and (2) a data term measuring the consistency between the projection of 3D prediction and 2D image observations. When constructing the prior term, some methods <cit.> learn the prior from data. However, there is no sufficiently homogeneous and dense data set that precisely captures realistic hand movement patterns, and moreover it is a significant challenge to acquire such data <cit.>. Other works <cit.> attempt to derive the prior from hand literature, but they are often limited to a certain type of knowledge. Another issue exhibited in existing weakly-supervised approaches lies in their formulation of the data term. They overlook the uncertainty in image observations and employ standard regression losses, such as Mean Square Error (MSE). As shown in Fig. <ref>(b), various types of image ambiguities may be present, posing a significant challenge to the reconstruction process. Failing to address such inherent uncertainty may lead to degraded model performance <cit.>. In this paper, we address the two issues prevalent in current weakly-supervised 3D hand reconstruction models by (1) systematically and effectively leveraging well-established knowledge about the human hand, and (2) explicitly modeling the uncertainty inherent in input images. Our method draws inspiration from KNOWN <cit.>, which leverages body-specific knowledge and uncertainty for human body reconstruction. Here we adapt that approach to the hand. Specifically, we extract from a comprehensive study of literature on hand biomechanics, functional anatomy, and physics a useful body of hand knowledge. We encode it as a set of differentiable losses to enable training on images solely with 2D weak supervision. Moreover, we consider that the observation uncertainty varies at different hand joints for different input images. We model such heteroscedastic uncertainty by capturing the distribution of 2D hand landmark positions. We improve the training by exploiting a simple yet effective Negative Log-Likelihood (NLL) loss that automatically assigns weights to different 2D labels based on their captured uncertainty. Through extensive experiments, we demonstrate the effectiveness of the proposed method and its significant improvements over the existing weakly-supervised 3D hand reconstruction models. In summary, our main contributions lie in: * identifying valuable generic knowledge from a comprehensive study of hand literature, including hand biomechanics, functional anatomy, and physics; * introducing a set of differentiable training losses to effectively integrate the identified knowledge into 3D hand reconstruction models; * exploiting a simple yet effective NLL loss that incorporates the uncertainty in image observations to improve the training; and * showing through extensive experiments that our method significantly outperforms existing methods under the challenging weakly-supervised setting. § RELATED WORK In this section, we discuss recent advancements in monocular 3D hand reconstruction, considering fully-supervised and weakly-supervised settings. 
§.§ Fully-Supervised Approaches Fully-supervised 3D hand reconstruction requires that 3D labels, such as ground truth 3D hand meshes, are sufficiently available. Such approaches focus on designing different model architectures for improved performance. One line of work follows a model-based reconstruction pipeline, wherein a 3D hand is represented by a deformable 3D hand model and reconstructed by estimating low-dimensional pose and shape parameters of the hand model <cit.>. These model-based approaches can struggle to capture fine reconstruction details. Another line of work exploits a model-free reconstruction pipeline that directly predicts 3D hand mesh vertex positions <cit.>. Such model-free approaches are typically data-hungry and less robust to occlusions and truncations. To address the issues inherent in both approaches, recent works <cit.> propose unifying the two pipelines into a single framework to enhance overall performance. Additionally, some models are specifically designed for handling cases like occlusion <cit.>, hand-object interaction <cit.> or two-hand reconstruction <cit.>. While such innovations improve estimation accuracy, none of them address the significant challenge of acquiring a sufficient amount of 3D data for fully-supervised learning. §.§ Weakly-Supervised Approaches Weakly-supervised approaches have made significant progress in enhancing the generalization and data efficiency of 3D reconstruction models <cit.>. In the context of 3D hand reconstruction, 2D hand landmark annotation proves to be a valuable form of weak supervision given its wide accessibility and the structural information it captures. Early works <cit.> relied on Principal Component Analysis (PCA) pose bases of the MANO hand model <cit.> and encouraged plausible 3D prediction by regularizing the prediction to be closer to the mean pose. Some works <cit.> impose geometric constraints that assume finger joints are located in the same plane during movement. Baek et al. <cit.> propose capturing the complex 3D hand pose data distribution via Generative Adversarial Networks <cit.> and utilize the trained generative model as guidance for predicting realistic outputs. Instead of relying on data-driven priors or heuristic constraints, other works <cit.> impose joint rotation constraints with ranges retrieved from hand biomechanics literature and achieve improved performance. However, they overlook other sources of useful hand knowledge. Moreover, Tzionas et al. <cit.> propose preventing invalid penetration in reconstructions by utilizing a non-penetration loss formulated over colliding mesh triangles. The proposed non-penetration loss only handles shallow penetration and cannot accommodate soft deformations that often occur in hand contact. The contributions that differentiate our method from existing works are as follows. First, our study and utilization of generic hand knowledge is more comprehensive, and includes a novel inter-dependency derived from hand functional anatomy. Second, our encoding of knowledge is more effective. In particular, our formulation of the non-penetration loss effectively handles soft surface deformations by accurately pulling out deeply embedded vertices. Third, unlike existing works that neglect the heteroscedastic uncertainty in input images or limit their uncertainty modeling to hypothesis generation <cit.>, our method explicitly models the uncertainty and incorporates it into the training loss through a simple yet effective NLL loss, directly improving the training process. 
While this strategy has been studied in other applications <cit.>, we are the first to apply it to monocular 3D hand reconstruction. § METHOD Fig. <ref> overviews our proposed method. We begin by introducing our 3D hand representation and camera projection model in Sec. <ref>. Then, we systematically survey valuable hand knowledge and describe how we encode it as differentiable model training losses in Sec. <ref>. We discuss the modeled distribution of 2D hand landmark positions and our formulation of the NLL loss in Sec. <ref>. Finally, we summarize the overall training loss for our model in Sec. <ref>. §.§ Preliminaries 3D Hand Representation. We employ MANO <cit.> to represent a 3D hand. MANO is a deformable 778-vertex 3D mesh model. It is parameterized by pose parameters θ∈ℝ^15×3 that govern the rotation of 15 hand joints, and shape parameters β∈ℝ^10 that represent the coefficients of PCA shape bases, capturing variations like hand length and width. Given θ and β, 3D mesh vertices 𝐌(θ,β)∈ℝ^778×3 are obtained through forward kinematics. 3D hand joints 𝐏(θ,β)∈ℝ^J×3 are a linear combination of the vertices as 𝐏(θ,β)= 𝐇𝐌(θ,β), where 𝐇∈ℝ^J×778 is a joint regressor learned from data during the development of MANO, and J=21 indicates the number of modeled hand joints. Camera Projection Model. Similar to existing practices, we estimate camera parameters 𝐂=[s,𝐑,𝐭], where s∈ℝ, 𝐑∈ℝ^3, and 𝐭∈ℝ^2 denote the scale factor, camera rotation, and global translation, respectively. The projection of 3D hand joints is obtained as 𝐩_2D=Proj(𝐏;𝐂), where Proj(·) denotes the full-perspective projection function with a constant focal length, as in <cit.>. §.§ Study and Incorporation of Generic Hand Knowledge Hand movement adheres to fundamental principles applicable across different subjects and gestures, serving as foundational insights for realistic 3D hand reconstruction. In this section, we systematically survey the generic hand knowledge from various sources, including hand biomechanics, functional anatomy, and physics. We introduce a set of differentiable losses over the 3D hand pose and shape parameters to integrate the knowledge into the reconstruction model. Hand Biomechanics involves the quantitative study of hand movement mechanisms. There are 15 hand joints contributing to movement: a metacarpophalangeal joint (MCP), a proximal interphalangeal joint (PIP), and a distal interphalangeal joint (DIP) for each of the four fingers, and a carpometacarpal joint (CMC), an MCP, and an interphalangeal joint (IP) for the thumb. Each joint's movement can be described via three Euler angles corresponding to joint bending, splaying, and twisting, respectively. Hand biomechanics studies specify the degrees of freedom (DOFs) and ranges of motion for each joint, as illustrated in Fig. <ref>(a). To impose these constraints, we introduce the following pose loss: ℒ_pose = ∑_j=1^15(max{θ_j-θ̅_j,max, θ̅_j,min-θ_j, 0})^2, where θ_j represents the three Euler angles predicted for the j^th joint. Their range, denoted by (θ̅_j,min,θ̅_j,max), is obtained from the literature <cit.>, with the ranges set to zero for directions without degrees of freedom. Since the joint rotation coordinates used by MANO differ from those defined above, we adjust MANO's original coordinates by aligning its movement axes with the three Euler angles defined above. We also design the Euler angle rotation order for a joint based on its rotation ranges to avoid singularities, following <cit.>. 
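To make the biomechanical constraint concrete, the pose loss above can be written in a few lines of PyTorch-style code. The sketch below is an illustration rather than the authors' implementation; the tensor shapes, variable names, and the per-joint range table are assumptions, with the ranges imagined to come from the biomechanics literature cited above.

```python
import torch

def pose_loss(euler_angles, range_min, range_max):
    """Penalize Euler angles that leave their biomechanical range of motion.

    euler_angles: (B, 15, 3) predicted bend/splay/twist angles per joint.
    range_min, range_max: (15, 3) per-angle limits; both are zero for
        directions without a degree of freedom, as in the paper.
    """
    over = torch.clamp(euler_angles - range_max, min=0.0)   # violation above the upper bound
    under = torch.clamp(range_min - euler_angles, min=0.0)  # violation below the lower bound
    violation = over + under       # at most one of the two terms is non-zero per angle
    return (violation ** 2).sum(dim=(1, 2)).mean()
```

Because at most one of the two clamped terms is non-zero for a given angle, squaring their sum reproduces the squared max-violation of the equation above; angles inside their range contribute nothing to the loss.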
Hand Functional Anatomy investigates how the hand's anatomical structure influences its movement. In contrast to hand biomechanics, which delineates the range of motion for each individual joint, the study of functional anatomy stipulates essential inter-joint dependencies during hand movement. As illustrated in Fig. <ref>(b), there are two types of dependencies: (i) the bending of the MCP restricts its splaying, following a linear relationship that peaks at the maximum bending angle <cit.>; and (ii) the bending of the DIP induces bending of the PIP within the same finger <cit.>. These inter-dependencies highlight that the ranges of motion of the hand joints can be dynamic and dependent on each other. Specifically, denote the predicted bending and splaying angles of an MCP joint j as α_j^MCP and γ_j^MCP, respectively. Based on the Type-(i) dependency, their range of motion should be updated as: γ̂_j,min^MCP =γ̅_j,min^MCP(1-α_j^MCP/α̅_j,min^MCP), if α̅_j,min^MCP<α_j^MCP<0, γ̂_j,max^MCP =γ̅_j,max^MCP(1-α_j^MCP/α̅_j,max^MCP), if 0<α_j^MCP<α̅_j,max^MCP, where (γ̅_j,min^MCP,γ̅_j,max^MCP) and (α̅_j,min^MCP,α̅_j,max^MCP) are the ranges based on hand biomechanics, while (γ̂_j,min^MCP,γ̂_j,max^MCP) denotes the refined range. As shown, the range of γ_j^MCP becomes very limited as α_j^MCP approaches extreme angles. Similarly, the range of motion for α_j^MCP needs to be further constrained based on the value of γ_j^MCP. Moreover, denote the predicted bending of the PIP and DIP of finger k as α_k^PIP and α_k^DIP, respectively. According to the Type-(ii) dependency, the lower bound of α_k^PIP should be refined as: α̂_k,min^PIP = 0, if α_k^DIP>0, where α_k^PIP is encouraged to be greater than zero given a flexed DIP. In summary, the two types of dependencies refine the ranges (θ̅_min, θ̅_max) provided by the hand biomechanics to (θ̂_min, θ̂_max) based on the current hand pose prediction θ. To integrate this valuable anatomical knowledge into the 3D reconstruction model, we dynamically update the joint rotation ranges following Eq. <ref> and Eq. <ref>, and utilize the refined ranges to calculate the pose loss in Eq. <ref>. Hand Physics studies assert various principles governing the physical interactions of the human hand. As our model reconstructs a single 3D hand from a single image, we mainly consider static physics, particularly the principle of non-penetration, according to which different hand parts cannot penetrate into each other. Fig. <ref>(c) illustrates a failure case. To integrate the non-penetration principle into the 3D reconstruction model, we first identify a set 𝐌 comprising vertices located inside the mesh through the generalized winding number <cit.>. For each vertex 𝐯∈𝐌, we then apply the following non-penetration loss: ℒ_non-penetration = ∑_𝐯∈𝐌max{d(𝐯)-d_tol,0}, where d(𝐯) denotes the minimum distance from vertex 𝐯 to another vertex that is not a neighbor of 𝐯 (where the geodesic distance exceeds the average length of phalanges, e.g., 2cm). In other words, d(𝐯) represents the minimum distance from vertex 𝐯 to the 3D hand surface. Meanwhile, recognizing MANO's limitation in modeling soft surface deformations during contact, we introduce a tolerance distance d_tol to accommodate shallow penetrations. Unlike existing methods <cit.> that formulate the loss based on collision triangles and only deter shallow penetrations, our proposed loss is applied to vertices with distances to the surface exceeding d_tol, effectively pulling out those deeply embedded vertices. 
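As an illustration of the non-penetration term, the following sketch assumes that a boolean mask of inside vertices has already been obtained (for instance via generalized winding numbers) and that a precomputed `non_neighbor` mask marks vertex pairs whose geodesic distance exceeds the average phalanx length; both inputs, the function name, and the tensor shapes are hypothetical and do not reflect the authors' code.

```python
import torch

def non_penetration_loss(verts, inside_mask, non_neighbor, d_tol=0.006):
    """Penalize inside vertices that lie deeper than d_tol below the surface.

    verts:        (778, 3) MANO mesh vertices, in meters.
    inside_mask:  (778,) bool, True for vertices detected inside the mesh.
    non_neighbor: (778, 778) bool, True where two vertices are geodesically
                  far apart (roughly more than one phalanx length, ~2 cm).
    """
    dists = torch.cdist(verts, verts)                     # pairwise Euclidean distances
    dists = dists.masked_fill(~non_neighbor, float("inf"))  # ignore nearby (same-part) vertices
    d_surface = dists.min(dim=1).values                   # proxy for distance to the hand surface
    depth = torch.clamp(d_surface[inside_mask] - d_tol, min=0.0)
    return depth.sum()
```

Only inside vertices whose proxy surface distance exceeds the tolerance d_tol contribute, which mirrors the hinge form of the loss above and leaves shallow, deformation-like penetrations unpenalized.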
Overall Knowledge-Encoded Prior. Incorporating the knowledge discussed above ensures natural 3D hand pose predictions. Similar to existing methods <cit.>, we apply a shape regularization ℒ_shape = ‖β‖_2 to promote plausible hand shape predictions. Assembling all the losses together, we obtain the overall prior: ℒ_prior = λ_1ℒ_pose+λ_2ℒ_non-penetration+λ_3ℒ_shape. It is worth noting that the prior term in Eq. <ref> is derived from generic hand knowledge, which is applicable to all subjects and gestures. Notably, its formulation does not require any 3D data and is independent of any specific dataset. §.§ Training with Negative Log-Likelihood To further ensure that predictions are consistent with the image observations, we utilize 2D hand landmark annotations. The input images can often exhibit challenges, such as occlusion or low image quality, that result in inherently ambiguous 2D hand positions or high uncertainty in the 3D reconstruction. Unlike existing methods that overlook this inherent uncertainty and train on 2D hand labels using a standard regression loss, we explicitly model the uncertainty and incorporate it into the loss function to enhance model performance. Specifically, we model the uncertainty by capturing the distribution of 2D hand landmark positions. As different 2D hand landmarks exhibit different appearance features that vary across input images, we model each joint independently and capture input-dependent uncertainty. We model the distribution of 2D hand landmark positions 𝐩_2D of an image 𝐗 as: p(𝐩_2D|𝐗;𝐖) = ∏_i 1/(√(2π)σ_i) exp(-(𝐩_2D,i - μ_i)^2/(2σ_i^2)), where i indexes the image locations of the hand joints. The adoption of Gaussian distributions is based on their wide utility in modeling observation noise <cit.>. μ represents the means of the Gaussian distributions, computed through the projection of the 3D hand joint positions 𝐏 using the camera parameters 𝐂, while the variances σ^2 are directly predicted by the regression model with parameters 𝐖. The modeled distribution p(𝐩_2D|𝐗;𝐖) specifies the probability of the ground truth appearing at position 𝐩_2D. The labeled position 𝐩̅_2D can be viewed as an observed data sample. We can thus train the model through Maximum Likelihood Estimation, i.e., by minimizing the Negative Log-Likelihood (NLL), which constitutes the data term ensuring 3D-2D consistency: ℒ_data = -log p(𝐩_2D=𝐩̅_2D|𝐗;𝐖) ∝∑_i(logσ_i + (𝐩̅_2D,i - μ_i)^2/(2σ_i^2)). Note that the variance σ^2 in Eq. <ref> depends on the individual hand joint i. Omitting the variance estimation or treating it as a constant would be equivalent to using the standard MSE loss, which is agnostic to uncertainty and assigns weights to all samples uniformly. In contrast, our method assigns reduced weights to images and joints with high uncertainty in a principled fashion, thereby producing a more robust model with improved performance. §.§ Total Training Loss By combining the prior term in Eq. <ref> and the data term in Eq. <ref>, we obtain the total loss for training the regression model as: ℒ=ℒ_prior+ℒ_data. During testing, 3D hands can be directly reconstructed through the hand pose and shape parameters estimated by the regression model. 
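A minimal sketch of the heteroscedastic NLL data term is given below. It assumes one isotropic variance per joint, which is one possible reading of the equation above, and it predicts the log-variance rather than the variance, a common numerical convenience that is an implementation choice here rather than something stated in the paper; all tensor names are hypothetical.

```python
import torch

def nll_data_loss(mu_2d, log_var, gt_2d):
    """Negative log-likelihood of 2D landmark labels under per-joint Gaussians.

    mu_2d:   (B, J, 2) projected 3D joints, i.e. the Gaussian means.
    log_var: (B, J) predicted log-variance per joint (keeps sigma^2 positive).
    gt_2d:   (B, J, 2) annotated 2D landmark positions.
    """
    sq_err = ((gt_2d - mu_2d) ** 2).sum(dim=-1)              # squared 2D error per joint
    nll = 0.5 * log_var + sq_err / (2.0 * torch.exp(log_var))  # log(sigma) + err / (2 sigma^2)
    return nll.mean()
```

Setting log_var to a constant reduces this to a rescaled MSE, which makes explicit the claim above that the NLL generalizes the standard regression loss by down-weighting joints with high predicted uncertainty.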
§ EXPERIMENT We briefly introduce our datasets, evaluation metrics, and implementation details in Sec. <ref>. Then, in Sec. <ref>, we discuss an ablation study that demonstrates the effectiveness of incorporating various sources of generic hand knowledge and training with the Negative Log-Likelihood (NLL). Finally, in Sec. <ref>, we assess the improved performance of our method in comparison to existing weakly-supervised State-of-the-Art (SOTA) approaches. §.§ Datasets, Metrics, and Implementation Details Datasets. We employ three widely adopted datasets: FreiHAND <cit.>, DexYCB <cit.>, and HO3Dv3 <cit.>, all of which have been captured by multi-view data collection systems. FreiHAND features a diverse range of daily hand poses. DexYCB and HO3Dv3 contain hand-object interaction images, some of which are significantly occluded. We follow the established training and testing splits to facilitate comparison with other methods. Evaluation Metrics. Like existing methods <cit.>, we compute the average Euclidean distance between the predicted and the ground truth 3D hand joint and vertex positions after Procrustes alignment (E_J/E_V). The evaluation on HO3Dv3 is obtained through the online submission system. It further includes AUC_J/AUC_V, the area under the percentage of correct keypoint (PCK) curves with thresholds between 0mm and 50mm. Additionally, we compute the penetration rate (PR), the percentage of reconstructions exhibiting penetration with a depth greater than d_tol, to assess the physical plausibility of 3D reconstructions. Implementation. We implemented our framework using PyTorch. The regression model consists of a ResNet-50 model <cit.> to extract image features and an iterative error feedback regression model <cit.> to predict the unknown parameters from the extracted features. The hand images are scaled to 224 × 224 while preserving the aspect ratio. The training images are augmented with random scaling and flipping. The training batch size and number of epochs are 64 and 200, respectively. Following the training strategy in <cit.>, we initially employ the MSE for the data term and then utilize the NLL for faster convergence. We use the Adam optimizer <cit.> with a learning rate of 10^-5 and weight decay of 10^-4. The hyper-parameters are set to d_tol=6mm, λ_1=20000, λ_2=20000, and λ_3=10. §.§ Ablation Study Table <ref> summarizes the impact of a) incorporating hand knowledge and b) training with the NLL loss. We provide a detailed analysis of these results below. Incorporating Generic Hand Knowledge. To provide valuable insights about the effectiveness of leveraging different sources of hand knowledge, we supplement Table <ref> with a qualitative evaluation in Fig. <ref>. When not integrating any hand knowledge, the model is trained using 2D hand landmark annotations with a prior term that only includes the shape regularization. This model produces large reconstruction errors (Table <ref>, row 1). As illustrated in Fig. <ref> ("No Knowledge"), the reconstructions can align with the image observations, but the predicted 3D hand poses are fairly unrealistic. The infeasibly twisted fingers significantly violate the joint ranges of motion specified by hand biomechanics. This issue is addressed by introducing hand biomechanics into the training, resulting in a significant model performance boost (Table <ref>, row 2 over row 1). For example, E_J is improved from 22.4mm to 10.9mm. Meanwhile, the estimated 3D hand poses become more plausible, as shown in Fig. <ref> ("+Biomechanics"). Nonetheless, poor reconstructions can still occur due to the inherent depth ambiguity. Specifically, the relative depth of hand joints can be incorrect. Mitigating this issue requires the further incorporation of the functional anatomy knowledge (Fig. <ref>, "++F-Anatomy"), leading to a reduction of the reconstruction errors from 10.9mm to 9.6mm for E_J and from 11.4mm to 10.0mm for E_V (Table <ref>, row 3 over row 2). 
The functional anatomy captures inter-joint dependencies, alleviating the depth ambiguity by introducing additional constraints on the 3D reconstruction space. Furthermore, avoiding invalid penetrations in the reconstructions requires adding the proposed non-penetration loss (Fig. <ref>, bottom example). Incorporating this physics knowledge effectively reduces the percentage of reconstructions with invalid penetration from 11.4% to 1.9%. To a lesser degree, but still significantly, it also improves the other reconstruction accuracy metrics. In summary, by incorporating hand knowledge gleaned from the literature into our proposed training loss functions, we are able to train accurate 3D hand reconstruction models based solely on 2D weak supervision. Training with Negative Log-likelihood. As discussed in Sec. <ref>, the NLL loss takes into account the increased reconstruction uncertainty of images containing occlusions or other degradations at the granularity of individual hand joints. Comparing rows 5 and 4 of Table <ref>, we see that the 3D joint position error E_J is reduced from 9.4mm to 8.5mm and the 3D mesh reconstruction error E_V is decreased from 9.8mm to 8.9mm. In Fig. <ref>(a), we provide a qualitative comparison between the models trained without and with the NLL loss. When not utilizing the NLL, the model is trained using the MSE loss. As shown, such models can be adversely affected by low image quality, image occlusion, and image truncation occurring at various hand joints, resulting in poor 3D hand reconstructions. In contrast, the model trained with the NLL is more robust to these situations. For example, the alignment to the input image is significantly improved compared to training with the MSE even when the hand is heavily occluded (Fig. <ref>(a), middle example). Furthermore, the model trained with the NLL captures the distribution of 2D hand positions. As shown in Fig. <ref>(a) (column 4), the hand joints in low-quality or occluded regions are captured by high variance estimates (visualized by large ellipses). In Fig. <ref>(b), we present training images with large estimated variances. As shown, the images with excessive occlusion, truncation, and ambiguous appearance exhibit large variance estimates. During training, the utilization of the NLL effectively enhances the final model performance by incorporating the uncertainty into the training loss function. §.§ Comparison with State-of-the-Art In this section, we showcase the enhanced performance of our approach compared to state-of-the-art (SOTA) methods in the challenging weakly-supervised setting. We summarize the quantitative evaluation on three different datasets, FreiHAND, DexYCB, and HO3D, in Table <ref>. Our method consistently outperforms existing approaches across all three datasets, which include diverse images depicting daily hand poses and hand-object interactions. Specifically, early methods using 2D weak supervision often require training with 3D annotations due to limited constraints on the 3D predictions <cit.>. Chen et al. <cit.> avoid the dependency on 3D data by employing different statistical regularizations during training, achieving performance comparable to that of methods utilizing 3D data. Ren et al. <cit.> further enhance the performance by leveraging feature consistency constraints. However, these methods are confined to heuristic constraints, such as enforcing a mean pose prediction, or partial types of hand knowledge, like hand biomechanics alone. 
In contrast, we systematically study and exploit generic hand knowledge, resulting in significant performance improvements. Notably, our improvements over these methods are achieved even without utilizing the NLL loss. For instance, on the FreiHAND dataset, our method achieves an E_J of 9.4mm, reducing the second-best result of 10.7mm by 12%. Further utilization of the NLL leads to a more significant error reduction of 21%. Additionally, Jiang et al. <cit.> propose a probabilistic framework to combine model-based and model-free reconstruction models. Despite their incorporation of additional models, our method outperforms them by a large margin. For example, E_V is decreased from 10.9mm to 8.9mm on FreiHAND, and from 10.7mm to 9.8mm on HO3D. In particular, our method achieves this improved performance by effectively utilizing generic hand knowledge and modeling the input uncertainty. § DISCUSSION In Sec. <ref>, we validated our method under the challenging weakly-supervised setting. Here, we demonstrate the advantages of leveraging generic hand knowledge even when 3D annotations are available. Table <ref>(a) shows that our method's performance compares favorably with that attained by using a data-driven prior extracted from FreiHAND (following <cit.>) and then evaluated on DexYCB. Thus our method's use of generic hand knowledge gives it a significant advantage over data-driven, domain-specific approaches. Furthermore, our method can take advantage of 3D annotations when they are available. As illustrated in Table <ref>(b), when not leveraging any 3D annotation, our method performs just slightly worse than the fully-supervised model ("100%") on 2 of the 3 metrics, and it performs comparably to the fully-supervised model using only 10% of the 3D annotations ("Ours+10%"). The advantages of generic hand knowledge, including its generalizability and its role in improving the data efficiency of monocular 3D hand reconstruction models, further demonstrate its significance. § CONCLUSION We comprehensively study generic hand knowledge, including hand biomechanics, functional anatomy, and physics. We effectively encode these foundational insights as differentiable prior losses, enabling the training of 3D hand reconstruction models solely using 2D annotations. Moreover, we explicitly model image uncertainty with a simple yet effective Negative Log-Likelihood (NLL) loss that incorporates the well-captured uncertainty into the training loss function. Our method significantly outperforms existing weakly-supervised methods. On the widely adopted FreiHAND dataset, the improvement is nearly 21%. Societal Impact. Our work highlights the importance of integrating hand knowledge and modeling uncertainty to produce reliable predictions, grounded in hand mechanics and with confidence estimates. It can potentially benefit many downstream tasks like synthetic data generation, biomechanics, and robotics. Limitations & Future Work. Our method focuses on static generic hand knowledge for image-based reconstruction. A natural extension to our work would be to estimate hand dynamics from monocular videos. § ACKNOWLEDGEMENT This work is supported in part by IBM through the IBM-Rensselaer Future of Computing Research Collaboration.
http://arxiv.org/abs/2407.13682v1
20240718165417
Tuning collective actuation of active solids by optimizing activity localization
[ "Davi Lazzari", "Olivier Dauchot", "Carolina Brito" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mtrl-sci", "physics.comp-ph" ]
davi.lazzari@ufrgs.br olivier.dauchot@espci.fr carolina.brito@ufrgs.br 1 Instituto de Física, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, CEP 91501-970, Porto Alegre, Rio Grande do Sul, Brazil 2 Gulliver Lab, UMR CNRS 7083, ESPCI Paris, PSL Research University, 75005 Paris, France § ABSTRACT Active solids, more specifically elastic lattices embedded with polar active units, exhibit collective actuation when the elasto-active feedback, generically present in such systems, exceeds some critical value. The dynamics then condenses on a small fraction of the vibrational modes, the selection of which obeys nontrivial rules rooted in the nonlinear part of the dynamics. So far the complexity of the selection mechanism has limited the design of specific actuation. Here we investigate numerically how, by localizing the activity on a fraction of the nodes, one can select non-trivial collective actuation. We perform numerical simulations of an agent-based model on triangular and disordered lattices and vary the concentration and the localization of the active agents on the lattice nodes. Both contribute to the distribution of the elastic energy across the modes. We then introduce an algorithm, which, for a given fraction of active nodes, evolves the localization of the activity in such a way that the energy distribution on a few targeted modes is maximized – or minimized. We illustrate, on a specific targeted actuation, how the algorithm performs as compared to manually chosen localizations of the activity. While, in the case of the ordered lattice, a well-educated guess performs better than the algorithm, the latter outperforms the manual trials in the case of the disordered lattice. Finally, the analysis of the results in the case of the ordered lattice leads us to introduce a design principle based on a measure of the susceptibility of the modes to be activated along certain activation paths. Tuning collective actuation of active solids by optimizing activity localization Davi Lazzari1 Olivier Dauchot2 Carolina Brito1 July 22, 2024 ================================================================================ § INTRODUCTION A central goal of meta-material design is to realize multi-functionality, enabling a system to effectively perform a variety of tasks. Active solids, composed of elastically coupled active units that locally exert active forces, while being confined to the vicinity of a well-defined reference position, emerge as promising candidates for achieving such a goal. Correlated noise generated by an active bath is known to actuate nontrivial zero modes while suppressing harmonic modes to a degree dependent on temporal correlations <cit.>. This is the simplest evidence for the breakdown of equipartition in active solids. Active agents embedded in an elastic structure are further able to mobilize solid body motion <cit.> or a free-moving mechanism even in a topologically complex case <cit.>. Subsequently, experimental and numerical evidence presented in <cit.> revealed that the generic presence of a nonlinear elasto-active feedback of the elastic stress on the orientation of the active forces can induce selective and collective actuation of the solid: a collective oscillation of the lattice nodes around their equilibrium position emerges. Only a few elastic modes are actuated and, crucially, they are not necessarily the lowest-energy ones. 
In the presence of several actuatable zero modes, whether trivially associated with solid body motion or more complex mechanisms, several dynamics coexist in phase space and a general formalism to describe the statistical evolution of collective motion has been derived <cit.>. Such coexistence can also hold in the case of mechanically stable solids, as illustrated experimentally by realizing a hysteretic tension-controlled switch between two actuation dynamics <cit.>. Altogether, active solids indeed offer a promising horizon for the design of multi-functional meta-materials. Furthermore, most of the active solids considered so far are ordered and hold a spatially homogeneous distribution of active forces, leaving room for a large range of alternative actuation strategies. Exploring such an opportunity is the main goal of the present work. To do so, we shall control the injection of energy in the lattice by taming the spatial distribution of activity in both ordered and disordered elastic lattices. More specifically, we aim at answering the following question. To what extent, and with which guiding principles, can one design the spatial distribution of activity in the lattice, in order to activate some specific modes? We perform numerical simulations of an agent-based model on triangular and disordered lattices and vary the concentration and the localization of the active agents on the lattice nodes. We first show that, in sharp contrast with equilibrium solids, the distribution of energy among the elastic modes is very far from equipartition and can be controlled by the distribution of activity in the lattice. We then introduce an algorithm, which, for a given fraction of active nodes, evolves the localization of the activity in such a way that the energy distribution on a few targeted modes is maximized – or minimized. We illustrate, on a specific targeted actuation, how the algorithm performs as compared to manually chosen localizations of the activity. While, in the case of the ordered lattice, a well-educated guess performs better than the algorithm, the latter outperforms the manual trials in the case of the disordered lattice. Finally, the analysis of the results in the case of the ordered lattice leads us to introduce a simple design principle based on a measure of the susceptibility of the modes to be activated along certain activation paths. The paper is organized as follows. In section II, we introduce the agent-based models, together with the observables we will use throughout the paper. Section III is devoted to the characterization of the energy distribution among the modes for ordered and disordered networks of different sizes, varying the concentration of active nodes. Section IV explores the selection of actuated modes by tuning the distribution of active particles in a triangular lattice and leads us to propose an optimization algorithm in section V, the performance of which is compared to the "manual" design, in the case of the ordered and disordered lattices. While the algorithm outperforms the manual trials in the case of the disordered lattice, a well-educated guess performs better in the case of the ordered lattice, suggesting the existence of a simple design rule, which we propose in section VI, before concluding. § MODELS AND METHODS §.§ Lattices We consider bi-dimensional elastic lattices at mechanical equilibrium, consisting of nodes with a well-defined reference configuration, connected by springs of stiffness κ, Fig. (<ref>). 
The extremal nodes of the lattice are pinned to the lab frame. Both ordered and disordered lattices are considered. The ordered lattices are triangular lattices, composed of N = 1+3R(R-1) nodes, located on R concentric hexagonal rings (see Fig. <ref>-a for an example of such a lattice with R=7). The disordered lattices are created by first generating a packing of soft discs at high density, following the protocol introduced in <cit.> (see Appendix). The center of each disc of the packing so obtained defines a node, and two nodes i,j are connected by a spring whenever r_ij < R_j+R_i, where r_ij is the distance between nodes i and j and R_j is the radius of the disc j. To compare ordered and disordered lattices of the same sizes, large disordered lattices are generated, inside which we pin a hexagon of nodes at a distance d ≈ R+1 from the center of the system in such a way as to have approximately N(R) moving nodes within the pinned boundary (see Fig. <ref>-b for an example of such a lattice with R=7). All disordered networks used throughout the work have a high coordination number (z ≈ 6) and their density of states D(ω_k) – defined as the distribution of the frequencies ω_k – is statistically similar, at low frequencies, to that of the ordered case, as shown in Fig. <ref> of the Appendix. §.§ Dynamics It was shown that the phenomenology of collective actuation is very well captured within the harmonic approximation, where the elastic forces are entirely encoded into a linear description F^el_i = -𝕄_iju_j, with u_i the displacement of node i, and 𝕄_ij the dynamical matrix of the elastic lattice of interest. Here we follow the same scheme, but, in contrast with previous works, only a fraction ϕ_dn=n_dn/N of nodes are driven. We mostly consider active driving, but also introduce thermally driven networks for comparison. In the active case, the driven nodes obey the self-aligning overdamped dynamics introduced in <cit.>. Each driven node is activated by a free-to-rotate self-aligning active particle that exerts a polar force on the node, while being reoriented by the total elastic force acting on that node. This nonlinear elasto-active feedback is the key ingredient responsible for the onset of collective actuation (see also <cit.> for a review on self-aligning polar particles). The dynamical equations read: u̇_i = Πn̂_i - 𝕄_iju_j, ṅ_i = n̂_i ×u̇_i ×n̂_i, where the unit vector n̂_i indicates the direction in which the particle sitting at node i exerts the active force. Π = l_e / l_a, the unique control parameter, is the ratio of two lengths: the elastic length l_e=F_a/κ, which describes the elongation of a spring of stiffness κ under the action of an active force F_a, and the alignment length l_a, which is the typical length over which a node i must be displaced to reorient the active particle sitting at this node. Eq. (<ref>) describes the reorientation of particle i towards its velocity u̇_i according to the self-alignment mechanism. In this active setting, the non-driven nodes simply obey the same equation with zero activity, that is, Π = 0. In the thermal case, the driven nodes obey the standard Langevin equation: ü_i = - u̇_i - 𝕄_iju_j + √(2 T_eff)ξ_i(t), where ξ_i(t) is a Gaussian white noise with zero mean, ⟨ξ_l(t) ⟩ = 0, ⟨ξ_l(t) ·ξ_k(t') ⟩ = δ(t-t')δ_lk. T_eff controls the noise amplitude. The non-driven nodes simply respond elastically and transfer the elastic forces across the network, obeying Newton's law (γ = T_eff = 0). 
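As a concrete illustration, the elasto-active dynamics above can be integrated as in the following sketch; it uses a simple explicit Euler step for readability (the actual integration scheme is described just below) and also computes the normalized mode projections used as observables later in the paper. The array names, the ordering of the displacement components, and the helper functions are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def euler_step(u, n, M, active, Pi, dt=0.01):
    """One explicit Euler step of the overdamped elasto-active dynamics.

    u:      (N, 2) node displacements (x, y components of node i in rows 2i, 2i+1 of M).
    n:      (N, 2) unit orientations of the active particles.
    M:      (2N, 2N) dynamical matrix of the lattice.
    active: (N,) bool mask of driven nodes; Pi is effectively zero elsewhere.
    """
    f_el = -(M @ u.reshape(-1)).reshape(-1, 2)          # elastic force -M u
    u_dot = f_el + Pi * active[:, None] * n             # overdamped equation of motion
    # n x u_dot x n is the component of u_dot transverse to n (self-alignment)
    n_dot = u_dot - np.sum(u_dot * n, axis=1, keepdims=True) * n
    u_new = u + dt * u_dot
    n_new = n + dt * n_dot
    n_new /= np.linalg.norm(n_new, axis=1, keepdims=True)  # keep |n| = 1
    return u_new, n_new

def mode_weights(u_traj, M):
    """Time-averaged, normalized squared projections of a displacement
    trajectory u_traj of shape (T, N, 2) on the normal modes of M."""
    _, modes = np.linalg.eigh(M)                         # columns are eigenvectors
    proj = u_traj.reshape(len(u_traj), -1) @ modes       # (T, 2N) projections
    pk2 = np.mean(proj ** 2, axis=0)
    return pk2 / pk2.sum()
```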
Numerics: In the case of active driving, the dynamical equations are integrated using a fourth-order Runge-Kutta method with dt = 0.01. For the thermal driving, the stochastic equations are integrated using the Stochastic Velocity Verlet algorithm <cit.> (m_i = γ = 1, dt = 1/100). The measurements of interest are taken over the interval t ∈ [250,500] time steps (Δ t = 250), disregarding the transient regime. §.§ Observable We are primarily interested in the distribution of the elastic energy among the vibrational modes of the lattices. These modes – also called normal modes (NM) – are the eigenvectors |φ_k ⟩ of the 2N × 2N symmetric dynamical matrix and form a complete orthonormal basis. The lattices considered here being all mechanically stable, they are associated with strictly positive eigenvalues ω_k^2. In the case of the triangular lattice, it is convenient to sort these modes by the four classes of rotational symmetries that leave the lattice and its normal modes invariant, which we will simply denote symmetry classes 1, 2, 3, 4. Examples of the normal modes classified by their classes of symmetries are shown in Appendix <ref> for a system with R=7 (see also the Supp. Mat. of <cit.> for an explicit construction). The dynamics is then decomposed on these modes, by projecting the 2N-dimensional displacement field | u⟩ = {u_1, u_2, .., u_N} on each mode: P_k(t) = ⟨φ_k | u(t) ⟩. In practice, P_k(t) is averaged over time in the steady state and normalized as follows: P_k^2 = ⟨φ_k | u⟩^2 /∑_k ⟨φ_k | u⟩^2, where the overline denotes a temporal average over 250 time steps (disregarding the first 250 steps). § ENERGY DISTRIBUTION FOR VARYING FRACTION OF DRIVEN NODES We first examine the distribution of energy among the modes when the driven nodes are randomly distributed in the lattice. We consider both disordered and ordered lattices and vary both the fraction of driven nodes ϕ_dn and the total number of nodes N. At first order, the elastic energy of the system can be written in terms of the matrix 𝕄: Δ U = ⟨u | 𝕄 | u⟩ = ∑_k=1^2Nω_k^2 P_k^2. At equilibrium, equipartition dictates that each quadratic degree of freedom contributes equally to the system's energy, typically k_BT/2 per degree of freedom, where k_B is the Boltzmann constant. In the present context, this imposes that each term of the above decomposition contributes an equivalent amount of energy, so that P_k^2 ∝ω_k^-2. This is precisely what is reported in Fig. <ref>-(a) for the thermally driven case, when all nodes are driven. In the case of active driving, equipartition does not hold. We start by simulating lattices of size N=127, for which all nodes are actively driven (ϕ_dn=100%). Quite remarkably, one still observes a power-law dependence P_k^2 ∝ω_k^-α, for large enough ω_k, albeit with a much larger value of α, indicating a stronger condensation of the energy on the low-energy modes, the effect being more pronounced in the case of ordered lattices. These results are confirmed when varying the fraction of driven nodes (Fig. <ref>-b) or the lattice size (Fig. <ref>-c). The exponent α is very robust with respect to the fraction of driven nodes. Even for fractions as low as ϕ_dn=10%, α≃ 4 remains twice as large as its equilibrium counterpart. We also note that, in the case of the ordered lattice, increasing the system size amplifies the condensation, with α reaching values close to 7, where it seemingly saturates. This is not the case for disordered lattices, where α≃ 4.5 for all system sizes probed here. 
In all cases, the violation of equipartition reported above opens the path for manipulating the energy injection, in view of actuating modes preferentially. § MODE SELECTION BY TUNING THE SPATIAL DISTRIBUTION OF ACTIVE NODES We first concentrate on the ordered lattice and investigate how the spatial distribution of the actively driven nodes conditions the energy distribution among the modes, the modes being grouped according to their four classes of symmetries (see Appendix <ref>). We consider four distinct spatial organizations of the actively driven nodes, corresponding to the four columns of Fig. <ref>: (a) all active, (b) the 4th ring only is active, (c) half of the 4th ring is active, (d) a centerline is active. These choices are somewhat arbitrary but illustrate well the role of the spatial localization of the active nodes. The results are shown for increasing values of the activity, as indicated by the dimensionless parameter Π. As a first observation, we note that the active driving leads to very different distributions of the elastic energy across the modes of different symmetry classes. When all nodes are driven (a), the modes of class 1 and 2 vastly dominate the dynamics, as compared to the passive case, where the equipartition of energy favors the modes of class 1 and 3. Second, one sees that the distribution of energy is only slightly changed when reducing the activation to a single ring of nodes, the fourth ring (b). Conversely, the distribution of energy among the four classes of symmetry is strongly altered when only half of the ring is driven (c), or when only the central line is driven (d), pointing at an important role of the symmetry of the spatial distribution of the driven nodes. We also note that, depending on the geometrical organization of the driven nodes, the dependence on Π can be minimal, as in cases (a,b,d), or quite strong, as in case (c). These observations drive us to search for spatial distributions of the active nodes that enhance the projection of the dynamics in a desired class of symmetry or, even, on a few specific modes. § OPTIMIZATION ALGORITHM Can one determine a spatial distribution of active particles that effectively amplifies the dynamics within a desired class of symmetry or a specific normal mode? To address this question, we propose an algorithm that combines molecular dynamics simulations of the dynamics and the Metropolis Monte Carlo method to evolve the active configurations of the lattice. A spatial distribution of actively driven nodes is denoted by a vector |σ⟩ = {0,1,1,0,⋯} of size N, where the ones indicate the nodes that are actively driven, and the zeros the non-driven nodes. This configuration is evaluated by running the dynamics during 2Δt = 200 time steps and computing the cost function C_k^|σ⟩ = P_k^2 [σ], where the temporal average runs over the Δt = 100 time steps composing the second half of the simulation time window. C_k^|σ⟩ evaluates the ability of the configuration |σ⟩ to concentrate the energy of the system in a given mode k. Every 2Δt time steps, a Monte-Carlo move is proposed from the configuration |σ⟩ to another configuration |σ'⟩, by changing the location of one active node, hence keeping the overall fraction of active nodes constant (see Fig. <ref>). Depending on whether one wants to increase or decrease the projection on a given mode, the goal is to maximize or minimize the cost function; the new configuration |σ^'⟩ is accepted with probability: P(|σ⟩→ |σ^'⟩) = min{1, exp(-(C_k^|σ^'⟩- C_k^|σ⟩)/T_e)}, where T_e is an effective temperature which allows for the exploration of the configuration space. If the new configuration |σ^'⟩ is accepted, the next step starts from it; if not, the original configuration |σ⟩ is restored and another move is proposed. A Monte Carlo Step (MCS) is defined as N trial configurations. We use T_e=10^-3 and test the dependence of our results on T_e in the Appendix, Fig. <ref>. 
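For concreteness, a schematic implementation of this optimization loop is sketched below. The function names and bookkeeping are illustrative only: the cost callable is assumed to run the dynamics for 2Δt steps and return the time-averaged projection (with a sign chosen so that lower cost corresponds to the desired objective), and, unlike in the paper, the sketch does not group the proposed moves into Monte Carlo steps of N trials.

```python
import numpy as np

def optimize_activity(cost, n_nodes, n_active, n_moves, T_e=1e-3, rng=None):
    """Metropolis search over binary activity configurations sigma.

    cost:     callable sigma -> scalar, e.g. -P_k^2[sigma] when the goal is to
              maximize the projection on mode k (lower cost = better).
    n_nodes:  number of lattice nodes N.
    n_active: number of active nodes, kept fixed by every proposed move.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.zeros(n_nodes, dtype=bool)
    sigma[rng.choice(n_nodes, size=n_active, replace=False)] = True
    c = cost(sigma)
    for _ in range(n_moves):
        # propose relocating one active node to a currently passive node
        src = rng.choice(np.flatnonzero(sigma))
        dst = rng.choice(np.flatnonzero(~sigma))
        trial = sigma.copy()
        trial[src], trial[dst] = False, True
        c_trial = cost(trial)                        # runs the dynamics for 2*Delta_t
        if rng.random() < min(1.0, np.exp(-(c_trial - c) / T_e)):
            sigma, c = trial, c_trial                # Metropolis acceptance at temperature T_e
    return sigma, c
```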
The algorithm is evaluated on its ability to find a configuration for which the spatial distribution of 24 active nodes among 127 nodes maximizes the condensation of the dynamics on the mode k=3, as indicated by the value of P_3^2. The mode k=3 belongs to the class of symmetry 2 and is therefore not predominantly actuated in a typical configuration. The performance of the configuration optimized by the algorithm after 1200 Monte Carlo steps (last column on the right) is compared to configurations where the same number of active nodes is localized in preset geometries (first six columns), as indicated by the drawings in red on the top row of Fig. <ref>: randomly chosen across the whole lattice (Uniform Random); localized on two layers, r = {1,6}, r = {2,5}, and r = {3,4}; on two separated areas, |x| > 3, which do not respect the rotational symmetries of the lattice; and with the 4th layer fully occupied, r = 4. The top (respectively bottom) row displays the results obtained for the ordered (respectively disordered) lattices. Each open symbol corresponds to one trial with a different random initial condition (30 different initial conditions). In the case of the disordered networks, we generated 30 different lattices – except for the optimized case where we used 10 – and, for each one, 30 initial conditions were simulated. However, only 200 trials are shown to avoid overloading the figure. In general, the algorithm performs much better than the preset configurations. In the disordered lattices this is always the case. In the case of the ordered hexagonal lattice, the activation of the 4th layer, which precisely matches the location of maximal polarization inside the mode k=3, performs better. Note that, the dynamics being deterministic, a fraction of the initial conditions leads to a complete failure of the activation of the third mode when the spatial distribution of the active nodes is preset, while the optimization algorithm allows for an adaptation of the spatial distribution of the active nodes to the randomly chosen initial condition. Figure <ref> displays the evolution of the performance of the algorithm as a function of the number of Monte Carlo steps for three different lattices: the ordered hexagonal one, a typical disordered lattice, and the disordered lattice for which the algorithm obtains the best performance. For each of them the figure shows the mean performance, averaged over 100 initial conditions, while the worst and best cases delimit the colored areas. From these evolutions one sees that disordered networks can achieve a better performance than the ordered one, in the sense that the optimization algorithm identifies a spatial distribution of the active nodes that favors a stronger condensation of the dynamics on the selected mode of interest. 
From the dynamical evolution of the localization of the active nodes during the optimization process, it appears that the algorithm has a hard time identifying the best configuration in the case of the ordered lattice. More specifically, it only very slowly condenses the active nodes toward the 4th layer of the lattice, in contrast with the disordered case, where this condensation takes place in the early steps of the optimization. We interpret this behavior as a sign that the symmetries of the ordered lattice favor the near degeneracy of many configurations with respect to their evaluation by the cost function. § OPTIMIZATION RULE FOR THE SELECTIVE ACTUATION OF HEXAGONAL LATTICES The analysis of the results from the optimization algorithm suggests a possible simple strategy for optimizing the spatial distribution of the active nodes in order to achieve the condensation of the dynamics on some specific modes. We observed that the optimal configuration to amplify the projection of the dynamics on the third mode of the hexagonal lattice, for a network size with R=7, corresponds to a localization of the active nodes on the fourth layer of the lattice. As noted already, this coincides with the regions of largest magnitude of the displacement field in mode k=3 (see Appendix <ref>). We propose to generalize this observation and organize the spatial distribution of the active nodes in light of the displacement field geometry of the mode of interest. The underlying hypothesis is that localizing the active nodes in the regions of high displacement of a mode should favor the coupling of the activity to that mode. To do so, we first define an activation path L on the lattice, composed of |L| adjacent nodes, and define S_k(L), the activation susceptibility of that path in mode k, as the local projection of the polarization field on the tangent to this path: S_k(L) = ∑_l=0^|L|| ê_l · (φ_k)_l| /|L|, where l indexes the nodes along the path L, ê_l is the unitary vector tangent to the path at node l, and (φ_k)_l is the displacement of mode k at node l. This susceptibility is defined such that it is maximal when the path L runs parallel to the local maxima of the displacement field of the mode of interest. Figure <ref> illustrates the application of the above ideas in the case of an ordered hexagonal lattice with R=14 layers, for which two types of active paths are tested: (i) concentric active rings of size r (L(r), with |L(r)| = 6 r and ê_l = ê_θ(l), see Fig. <ref>-c) and (ii) linear radial paths that do not cross the center (L(θ), with |L(θ)| = R-2 and ê_l = ê_r(l), see Fig. <ref>-d). Figure <ref>-(c,d) shows the susceptibility S_k(r) for the concentric ring path of radius r and S_k(θ) for the radial path with orientation θ, for the five modes k=3,5,9,13,14 displayed on the top rows. The core observation is that the susceptibility strongly depends on the combination of the path and the mode. For instance, S_9(r) is negligible for all concentric paths, while S_9(θ) is systematically large for all radial paths. The dependence on the path can be relatively simple, as is the case for S_3(r), where one recovers the observation made earlier that the mode k=3 has a maximal susceptibility when the distribution of active nodes concentrates on a ring of radius r=R/2. But it can also be less obvious for higher modes with less symmetric displacements, such as mode k=5 or k=14. One also notes that optimizing for a single mode is, in general, likely to be impossible. 
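The activation susceptibility defined above is inexpensive to evaluate, which is what makes it usable as a design rule: candidate paths can be ranked before any dynamics is run. A minimal sketch is given below; the argument names and the way the path and its tangents are supplied are assumptions of this illustration.

```python
import numpy as np

def activation_susceptibility(phi_k, path_nodes, tangents):
    """Susceptibility S_k(L) of an activation path L in mode k.

    phi_k:      (N, 2) displacement field of normal mode k.
    path_nodes: (|L|,) indices of the nodes along the path.
    tangents:   (|L|, 2) unit vectors tangent to the path at those nodes
                (e.g. e_theta for a concentric ring, e_r for a radial path).
    """
    overlap = np.abs(np.sum(phi_k[path_nodes] * tangents, axis=1))
    return overlap.mean()
```

In practice one would compute S_k for a family of candidate paths (all rings r, all radial orientations θ) and place the active nodes along the path, or combination of paths, with the largest susceptibility for the targeted mode.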
Figure <ref>-(e,f,g) shows the actual distribution of energy among the modes for four different paths. For the concentric ring paths, one verifies very clearly the strong selection of mode k=13 by the ring of radius r=3 (Fig. <ref>-e) and the even stronger one for the mode k=3, when active nodes are distributed along the ring of radius r=7 (Fig. <ref>-f). The case of the radial paths confirms that the mode k=9 is excited for both path configurations. More interestingly, one sees that the radial path along θ=0^∘, 180^∘ is unable to differentially select the modes k=5 and k=14, while the other radial path does select the mode k=14, without activating the mode k=5. Altogether, the use of the activation susceptibility is thus a good design principle, although it clearly also reveals the limitations of the selectivity that can be reached. Nevertheless, we observe that the larger the system is, the more selective the activation design can be, especially for modes of high energy, because of the larger specificity of their polarization geometry and the possibility of combining several activation paths. § CONCLUSIONS In this paper, we explored different strategies for injecting energy into elastic lattices by exciting their nodes with active agents. We investigated both ordered triangular lattices and disordered lattices with a coordination number of approximately z ≈ 6, similar to that of the ordered lattice. By distributing the active nodes in various spatial organizations, we demonstrate the possibility of tuning, at least to some extent, the energy partition among the modes. This of course contrasts with the classical scenario of equipartition imposed by thermal equilibrium. When all lattice nodes are active (ϕ_dn=1), a pronounced concentration of energy is observed on lower-frequency modes, as previously observed both numerically and experimentally <cit.>. More specifically, the amplitude distribution of the energy among the modes exhibits a power-law decay at large enough frequency, P_k^2∝ω^-α, with an exponent α significantly larger than 2, the value indicative of equipartition. When reducing the fraction of active nodes, the distribution of energy among the modes is heavily influenced by the spatial distribution of the active nodes inside the lattice. While complete control over energy concentration in specific modes remains elusive, we nevertheless demonstrate that there is room for optimization. In the case of the ordered lattice, a well-educated guess is possible and we propose a simple design principle, which identifies optimal paths for the distribution of the active nodes on the basis of the geometry of the spatial structure of the modes, characterizing the paths by their activation susceptibility. In the case of disordered lattices, specific spatial distributions of the active nodes, combined with specific realizations of the disorder, can better condense the energy on a class of modes of interest, as compared to an ordered lattice of the same size. This suggests that the optimization algorithm has greater potential for evolution toward an "optimal distribution" in the presence of disorder. An interesting perspective would be to jointly optimize for the disorder of the network and the spatial distribution of the active nodes. When considering disordered lattices, our focus has primarily been on cases where the coordination number is high (z ≈ 6) and the lattice is largely hyperstatic and mechanically stable. 
It would be valuable to extend our investigation to lattices with lower coordination numbers, approaching z → 4. In this limit, it is known that the lattices are on the verge of losing their mechanical stability <cit.>, leading to an abundance of low-frequency normal modes compared to crystalline structures. The properties of these modes have been extensively studied in disordered solids <cit.>, with correlations drawn to solid rearrangements <cit.>. They have also been proposed as elemental defects controlling the flow of disordered solids <cit.>. It would be intriguing to investigate whether active particles distributed in specific lattice regions could selectively excite desired low-frequency mode intervals or unveil unpredictable features absent from the current work. § CONFLICTS OF INTEREST There are no conflicts of interest to declare. We thank Paul Baconnier for insightful discussions and the help provided on simulations. DL and CB thank CNPq and CAPES for partially financing this study. We thank the supercomputing laboratory at New York University (NYU-HPC), where part of the simulations were run, for computer time. § SETTING THE DISORDERED LATTICES Disordered networks are created by first generating packings of soft discs at high density. To generate these packings, we employ a protocol used previously, which is summarized here. We begin by setting the density of particles ρ = N_p/V =0.5 (where V represents the volume of the system and N_p stands for the number of particles), followed by conducting a high-temperature equilibration (T = 1.0) of the system under the influence of the potential energy U= κ/2∑_i,j (r_ij-R_j-R_i)^2 H(R_j+R_i - r_ij), where r_ij is the distance between the centers of particles i and j, R_j is the radius of the particle j and H(x) is the Heaviside function. Subsequently, we employ the FIRE algorithm <cit.> to minimize the potential energy U. Throughout this minimization process, we maintain a constant pressure of p = 1.0 using a Berendsen barostat <cit.> with a time constant τ_Ber = 10.0. The minimization procedure is stopped when the interparticle force falls below 10^-1. The ordered and disordered networks can be compared in terms of the frequency distribution of their normal modes: Fig. <ref> shows the density of states D(ω), defined as the distribution of the frequencies ω, which are the square roots of the eigenvalues of the dynamical matrix 𝕄, for both ordered and disordered networks. For small frequencies (ω < 1), both types of lattices have similar distributions. § ROBUSTNESS OF EFFECTIVE TEMPERATURE, T_E To adjust the temperature-like parameter T_e used in the Monte Carlo rules for the optimization algorithm (Eq. <ref>), we explored three logarithmically spaced values of T_e. Fig. <ref> displays the convergence of the algorithm for T_e = 10^-3, T_e = 10^-2, and T_e = 10^-1. While the largest value, T_e=10^-1, ruins the optimization dynamics, we do not see a strong effect of T_e in the range [10^-3, 10^-2]. § MODES OF THE ORDERED LATTICES WITH THEIR CLASS OF SYMMETRIES The triangular lattice with hexagonal boundaries respects symmetries under rotations of angle π/3, implemented by the operator Θ, and reflections Σ (e.g., across the axis y = 0). The eigenvectors of the dynamical matrix, 𝕄, can be either direct eigenvectors of those symmetry operators or bases for invariant subspaces that respect those symmetries, with reflection eigenvalues given by σ = ± 1 and rotation eigenvalues given by θ = exp(i k π/3) for k ∈{-2, …, 3}. 
The eigenvectors corresponding to the complex eigenvalues of the rotational operator are also complex and occur in degenerate pairs, |φ_±⟩, associated with θ_± = exp(± i n π/3) for some n. Each such pair can be combined into two real modes |φ_l⟩ and |φ_m⟩, which have the same energy as |φ_±⟩. Although |φ_l⟩ and |φ_m⟩ (eigenvectors of 𝕄) are not eigenvectors of Θ, the two-dimensional space they span remains invariant under rotations of this kind. The effect of Θ on these modes is characterized by ⟨φ_l | Θ | φ_l ⟩ = ⟨φ_m | Θ | φ_m ⟩, which corresponds to the real part of the eigenvalue of |φ_±⟩. Therefore, the complete symmetry of a normal mode |φ_k⟩ is determined by two real numbers: ⟨φ_k | Θ | φ_k ⟩∈{1, 1/2, -1/2, -1} and ⟨φ_k | Σ | φ_k ⟩∈{1, -1}. In this work, the normal modes of the dynamical matrix are characterized by the real part of the rotational symmetry eigenvalue they relate to. More precisely, the symmetry classes 1, 2, 3 and 4, used throughout the text, correspond respectively to Re(θ) = 0.5, 1, -0.5 and -1. More details on the derivation of the symmetry classes can be found in the SI of <cit.>. Figure <ref> presents the first 24 normal modes of the ordered triangular lattice of size R = 7; the colors identify the rotational symmetry class of each mode.
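As an illustration of this classification, the following Python sketch computes the two overlaps ⟨φ_k | Θ | φ_k ⟩ and ⟨φ_k | Σ | φ_k ⟩ for every normal mode. It assumes that the node coordinates (measured from the lattice center) and the dynamical matrix 𝕄 are available as numpy arrays, with an interleaved (x, y) ordering of the displacement components; the operator construction and function names are illustrative assumptions rather than the exact implementation used in this work.

import numpy as np

def symmetry_operator(positions, angle, reflect=False, tol=1e-8):
    # Build the (2N x 2N) matrix representing a point-group operation
    # (rotation by `angle` about the lattice center, optionally composed with
    # a reflection across y = 0) acting on the displacement field of N nodes.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    if reflect:
        R = R @ np.diag([1.0, -1.0])
    images = positions @ R.T
    perm = np.array([np.argmin(np.linalg.norm(positions - p, axis=1)) for p in images])
    assert np.all(np.linalg.norm(positions[perm] - images, axis=1) < tol)
    N = len(positions)
    op = np.zeros((2 * N, 2 * N))
    for i, j in enumerate(perm):
        op[2 * j:2 * j + 2, 2 * i:2 * i + 2] = R  # displacement of node i is carried to node perm[i]
    return op

def classify_modes(M, positions):
    # Return the frequencies and the two symmetry labels of every normal mode.
    Theta = symmetry_operator(positions, np.pi / 3)            # rotation by pi/3
    Sigma = symmetry_operator(positions, 0.0, reflect=True)    # reflection across y = 0
    w2, V = np.linalg.eigh(M)                                  # columns of V are the modes
    theta_overlap = np.einsum('ik,ij,jk->k', V, Theta, V)      # Re(theta): 1, 1/2, -1/2 or -1
    sigma_overlap = np.einsum('ik,ij,jk->k', V, Sigma, V)      # reflection label (±1 for symmetry-adapted modes)
    return np.sqrt(np.clip(w2, 0.0, None)), theta_overlap, sigma_overlap

For the degenerate pairs, any real unit vector in the two-dimensional invariant subspace yields the same diagonal overlap with Θ, so the returned value is directly the real part of the rotation eigenvalue used to color Figure <ref>.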
http://arxiv.org/abs/2407.13274v1
20240718082955
Aligning Explanations for Recommendation with Rating and Feature via Maximizing Mutual Information
[ "Yurou Zhao", "Yiding Sun", "Ruidong Han", "Fei Jiang", "Lu Guan", "Xiang Li", "Wei Lin", "Jiaxin Mao" ]
cs.IR
[ "cs.IR" ]
^1Beijing Key Laboratory of Big Data Management and Analysis Methods ^2Gaoling School of Artificial Intelligence, Renmin University of China ^3Meituan 1 Thørväld Circle Beijing, China zhaoyurou@ruc.edu.cn, emanual20.sun@gmail.com, maojiaxin@gmail.com § ABSTRACT Providing natural language-based explanations to justify recommendations helps to improve users' satisfaction and gain users’ trust. However, as current explanation generation methods are commonly trained with an objective to mimic existing user reviews, the generated explanations are often not aligned with the predicted ratings or some important features of the recommended items, and thus, are suboptimal in helping users make informed decision on the recommendation platform. To tackle this problem, we propose a flexible model-agnostic method named MMI (Maximizing Mutual Information) framework to enhance the alignment between the generated natural language explanations and the predicted rating/important item features. Specifically, we propose to use mutual information (MI) as a measure for the alignment and train a neural MI estimator. Then, we treat a well-trained explanation generation model as the backbone model and further fine-tune it through reinforcement learning with guidance from the MI estimator, which rewards a generated explanation that is more aligned with the predicted rating or a pre-defined feature of the recommended item. Experiments on three datasets demonstrate that our MMI framework can boost different backbone models, enabling them to outperform existing baselines in terms of alignment with predicted ratings and item features. Additionally, user studies verify that MI-enhanced explanations indeed facilitate users' decisions and are favorable compared with other baselines due to their better alignment properties. <ccs2012> <concept> <concept_id>10002951.10003317.10003347.10003350</concept_id> <concept_desc>Information systems Recommender systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010179.10010182</concept_id> <concept_desc>Computing methodologies Natural language generation</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Recommender systems [500]Computing methodologies Natural language generation Aligning Explanations for Recommendation with Rating and Feature via Maximizing Mutual Information Yurou Zhao^1,2,Yiding Sun^1,2, Ruidong Han^3, Fei Jiang^3, Lu Guan^3, Xiang Li^3, Wei Lin^3, Jiaxin Mao^1,2 July 22, 2024 ================================================================================================================ § INTRODUCTION Generating natural language-based explanations for recommendation has gained wide attention in recent years <cit.>. A series of studies have shown the potential benefits of providing explanations on recommendation platforms, such as increasing the acceptance ratio of recommendations <cit.> and users' satisfaction and trust <cit.>. To generate fluent and personalized explanations, most studies leverage existing user reviews as ground truth and mainly use the Maximum Likelihood Estimation (MLE) approach to train models capable of generating explanations that resemble user reviews. 
Although such practices are promising to generate high-quality text in terms of traditional text generation metrics (e.g., BLEU, ROUGE), they struggle to meet the users' additional requirement of explanation on the recommendation platform, such as trust and effectiveness <cit.>. Serving as auxiliary information for the recommended item, the explanation is expected to facilitate users to make more informed decisions on the platform <cit.>. To achieve that, a qualified explanation is expected to embody the following two properties: Alignment with Predicted Rating: The explanation ought to support the recommender's predicted rating, as it could potentially assist users in comprehending the rationale behind a specific recommendation. A conspicuous discrepancy between the sentiment conveyed in the explanation and the predicted rating (e.g., an explanation stating “the decor is nice and the staff is friendly” for an item rated 2/5 star) will mislead the users into making wrong decisions and foster their skepticism towards the recommendation platform. Alignment with Item Features: The explanation needs to include relevant and highly specific information about a certain feature or aspect of the recommended item. Generic explanations like "good food and good service" are not informative enough for users to gain detailed knowledge about the recommended item, and thus, do not assist them in deciding whether to accept or reject the corresponding recommendation. Unfortunately, solely mimicking user reviews is not sufficient to fulfill the above two goals. The reasons are: 1) Noise in user reviews: User reviews are widely used as a proxy for training explanation generation models <cit.>. However, we notice that user reviews often contain non-explanatory content such as purely subjective narratives and extremely generic comments. Such contents neither describe the details about the product nor reflect the reason why the user gives the product a positive/negative rating. As a result, models trained on user reviews may be influenced by this noise and thus fail to align well with the rating or feature. 2) The average nature of MLE training objective: Even if current models are trained on an ideal dataset, the commonly adopted MLE still hinders them from obtaining better alignment. Since MLE intrinsically favors high-frequency, generic phrases over more specific, contextually relevant explanations, it will make the trained model generate sentences that are rather generic and lack diversity. This tendency will contribute to the poor alignment of explanation with rating and feature. On one hand, the reviews in the dataset are mostly sentimentally positive, which makes the existing explanation generator constantly generate positive sentences for all recommendations even if the actual predicted rating is low. On the other hand, the distribution of item features in the dataset is uneven. As a result, the explanation generator is prone to mention common yet less specific features (e.g. food/service for the Yelp dataset as these two features are dominant in the dataset ) for each item. To intuitively illustrate the limitations of exploiting user reviews as explanations, Table <ref> presents two cases comparing the explanations with strong alignment properties, and the explanations completely recover the corresponding user reviews which are deemed perfect in most previous works as they achieve high value of NLG metrics on the user review dataset. 
In Case A, the user is predicted to give the item a low rating, suggesting dissatisfaction. However, the explanation that shares high similarity with the review contradicts the negative sentiment of the predicted rating. On the contrary, the explanation better aligned with the predicted rating sheds light on disappointing issues with the item, such as the sauce being bland and the texture being too thick. Such explanation is more likely to help the user understand the reason behind the recommender's predicted rating and thus potentially increase user-perceived transparency of the recommender. In Case B, the explanation copying the corresponding review presents a generic statement, which lacks specific details regarding the description of the item features. Meanwhile, the explanation better aligned with the item feature highlights a particular item feature (prices) that the user may find valuable when deciding whether to accept or reject the recommendation. This kind of explanation provides users with a more informative understanding of the recommended item, enabling them to efficiently make decisions on the recommendation platform. To address the above limitations, we propose a model-agnostic Maximizing Mutual Information (MMI) framework for strengthening the alignment ability of current explanation generation models. As mutual information is a principal measure of the mutual dependence between the two variables, we utilize it to measure to what extent the explanation is aligned with the predicted rating or item feature. The MMI framework features: 1) a neural MI estimator <cit.> to estimate the alignment between text-based explanations and the predicted rating/item features; 2) an RL-based fine-tuning process that treats an existing MLE-trained explanation generation model as the backbone and fine-tunes it with the MI-based reward output by the MI estimator. To avoid potential reward hacking and maintain the backbone model's ability to mimic user reviews, we also integrate KL and Entropy reward as regularizers to enable the fine-tuned generator to strike a good balance between the ability of alignment with rating/feature and the power of generating fluent, natural, user review-like text. The main contributions of this paper are summarized as follows: (1) We propose to use mutual information to measure the alignment between the explanation and predicted rating/item feature. To the best of our knowledge, this is the first work that introduces MI into explanation generation for recommendation. (2) We propose a novel MMI framework that features reinforcement learning-based fine-tuning. By customizing reward functions (Mutual Information as the main reward and KL and Entropy as complementary rewards), the framework can make the pretrained generator align better with the rating or feature while maintaining the ability to mimic user reviews. (3) We conduct experiments on three public real-world datasets and incorporate different types of backbone models into our MMI framework. The experimental results not only verify the effectiveness of the framework in aligning explanations with predicted ratings and important item features but also demonstrate its capability to strike a good balance between alignment property and similarity with user reviews. (4) We compare our method with others through human evaluation. The evaluation result further shows the advantage of our method. 
Additionally, it validates the potential benefits of generating rating-aligned and feature-aligned explanations such as facilitating users' decisions for recommendations from the real users' perspective. § RELATED WORK Recent works on generating natural language-based explanations for recommendation can be summarized into two developing lines. The first line adopts more and more advanced model architecture. From RNN-based models <cit.>, to transformer-based <cit.> or VAE-based <cit.> generators, and now several works <cit.> have explored the explanation generation ability on LLM. Despite the model architectures being different, most of them are still trained with the MLE objective may not ensure the alignment properties. The second line endeavors to incorporate rich auxiliary information into the explanation generation. Besides user and item ID, <cit.> condition the generation on the rating of the product, <cit.> notice the importance of the feature and use pre-defined feature words to guide the generation process. Recently, <cit.> have developed retrieval augmented generation to make the generation more personalized and specific. Despite several generators having considered taking predicted rating or item feature as the input, only a few of them <cit.> design specific mechanisms to ensure the generated explanation is related to the input rating/feature. Hence, in this work, we focus on designing a model-agnostic fine-tuning framework for existing models to further enhance their ability of generating rating and feature-aligned explanations. Our work is similar to a recent work <cit.> which also introduces a RL-based fine-tuning method on GPT-2. However, this paper still treats mimicking user reviews as the main goal by developing specific rewards that measure the similarity between the generated explanation and the review, and it does not examine the impact of the generated explanation on real humans. Comparatively, our designed rewards represent two properties that can bring real benefits to end-users which are further verified through user studies. § PRELIMINARY §.§ Generating Explanation for Recommendation We categorize current explanation generation methods into Post-hoc explanation generators and Multi-task Learning models. §.§.§ Post-hoc Generation Post-hoc explanation generators like <cit.> assume the recommendation has already been made and solely focus on generating an explanation for the given user-item pair (u,i) accompanied by additional attributes such as the rating or a pre-defined feature of the item. They generally adopt a Seq2Seq model architecture that takes some relevant attributes A=(a_1,a_2,...,a_n) of (u,i) as input and use negative log-likelihood (NLL) loss to maximize the likelihood of generating ground-truth review e conditioned on the given attributes A. §.§.§ Multi-task Learning Multi-task learning models <cit.> perform rating prediction and explanation generation simultaneously. Given (u,i), the joint rating-explanation generation task of the models predicts corresponding rating r̂ as well as explanation ê. The training objective of the models combines minimizing the mean squared error between r̂ and ground-truth rating r and the same NLL loss of ground-truth review e as post-hoc models. The NLL loss for explanation generation in both categories is generally defined as: L_e=-∑_w∈ elogŝ(w) where ŝ is the predicted word distribution over the vocabulary set. 
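As a concrete reference point, the NLL objective of Eq. (1) can be written in a few lines of PyTorch; the generator interface, tensor shapes and padding convention below are assumptions made for illustration and are not tied to any particular backbone.

import torch.nn.functional as F

def explanation_nll_loss(generator, attributes, target_ids, pad_id=0):
    # Teacher-forced MLE objective: maximize the likelihood of the ground-truth
    # review tokens given the attributes A (user/item IDs, rating or feature).
    logits = generator(attributes, target_ids[:, :-1])       # (B, T-1, |V|) next-token logits
    log_probs = F.log_softmax(logits, dim=-1)
    gold = target_ids[:, 1:]                                  # shifted targets
    return F.nll_loss(log_probs.transpose(1, 2), gold,        # (B, |V|, T-1) vs. (B, T-1)
                      ignore_index=pad_id, reduction='mean')

Both post-hoc and multi-task models share this term; the multi-task models simply add the MSE rating loss to it.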
§.§ Mutual Information and its Estimation §.§.§ Mutual Information Mutual information is an entropy-based measure of dependence between random variables. Given two random variables X and Y, the mutual information between them is defined as: I(X;Y)=H(X)-H(X|Y)=H(Y)-H(Y|X)=I(Y;X) where H(X) is the Shannon entropy of X and H(X|Y) is the conditional entropy of X given Y. As a result, mutual information measures the decrease of the uncertainty in X given Y. Intuitively, the higher MI value between X and Y, the stronger dependency there is between X and Y since knowing Y will reduce the uncertainty in X. Such property inspires us to model the alignment degree between explanation and rating or feature with MI and further strengthen it by maximizing MI. §.§.§ Mutual Information Neural Estimation The definition of MI in Eq.(2) can be equivalently expressed as the KL-divergence between the joint distribution of two variables X, Y and the product of marginal distribution P_X and P_Y: I(X;Y)=D_KL(ℙ_XY||ℙ_X⊗ℙ_Y ) From the above equation, we can see that directly computing MI is intractable because we typically have access to samples but not the underlying distributions <cit.>. Thus, recent works <cit.> combine different variational bounds of MI with deep learning to enable differentiable and tractable estimation of mutual information. In this work, we adopt a state-of-the-art method named Mutual Information Neural Estimator (MINE) <cit.> to estimate the mutual information between two given variables X and Y. The core idea of MINE is to derive a lower bound of MI utilizing the following Donsker-Varadhan bound <cit.>: D_KL(P||Q)⩾sup_T ∈ F𝔼_P[T]-log(𝔼_Q[e^T]) By combining Eq. (3) and (4) and choosing F to be the family of functions T_θ : X × Y → R parametrized by a deep neural network with parameters θ∈Θ, MINE defines following lower bound for true MI: I(X;Y)⩾ I_θ(X;Y)= sup_θ∈Θ𝔼_ℙ_XY[T_θ]-log (𝔼_ℙ_X⊗ℙ_Y[e^T_θ]) In the above equation, T_θ is named as the statistics model in MINE. It takes two variables X, Y as input and outputs a real value. The expectations in the equation are estimated using empirical samples from the joint distribution ℙ_𝕏𝕐 and marginal distribution ℙ_𝕏 and ℙ_𝕐. Intuitively, the higher the value of the lower bound is, the more accurate the estimation of true MI is. That means we can treat the lower bound as an optimization goal and adopt a common gradient descent method like SGD to update the statistics model T_θ iteratively. Once the statistics model T_θ has converged, we can use it to derive an estimated value of MI. § MMI FRAMEWORK In our proposed Maximizing Mutual Information (MMI) framework, we start with an arbitrary pre-trained explanation generation model, which we refer to as the backbone explanation generator. This model has been trained using Maximum Likelihood Estimation (MLE) on review data ( An example loss function is shown in equation (1)), which gives it a strong capability to generate user reviews-like text. And we aim to further enhance its alignment ability through fine-tuning. The core idea behind this fine-tuning framework is to estimate mutual information (MI) between explanation and rating/feature as a metric to measure the relationship between currently generated explanation and rating/feature. Since the estimated MI value is non-differentiable, it's natural to the value as a reward and leverage RL to guide the backbone explanation generator in learning better alignment. 
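Before turning to the reward design, the MINE-style estimation of Section 3.2.2 can be sketched as follows. The three-layer MLP mirrors the statistics model described in the implementation details, while the hidden size, the batch-shuffling approximation of the marginal distribution and the training loop are illustrative assumptions.

import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    # T_theta: maps a (rating-or-feature vector, explanation embedding) pair to a scalar.
    def __init__(self, dim_x, dim_y, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(T, x, y):
    # Donsker-Varadhan bound of Eq. (5): E_P[T] - log E_{P_X x P_Y}[exp(T)].
    # Joint samples are the aligned (x, y) pairs of the batch; marginal samples
    # are approximated by shuffling y within the batch.
    joint = T(x, y).mean()
    y_shuffled = y[torch.randperm(y.size(0))]
    marginal = torch.logsumexp(T(x, y_shuffled), dim=0) - math.log(y.size(0))
    return joint - marginal

Maximizing this bound by gradient ascent on θ trains the estimator; once it has converged, the same quantity evaluated on fresh samples serves as the MI reward described below.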
Additionally, to maintain the ability of the backbone to simulate user reviews, we also introduce KL and entropy rewards to compensate for the poor text quality incurred by solely optimizing the MI reward. Figure <ref> provides an overview of the proposed MMI framework. §.§ RL for Fine-tuning Backbone models In the RL formulation of explanation generation, the backbone model is considered as an agent and the action is the generation of the word w_t on the next position t based on previous words w_1:t-1 on position t-1. The probability of generation p_θ(w_t|w_1:t-1) represents a stochastic policy. We define a customized reward π_ê=π_w_1:T at the end of the generated sequence where T is the pre-defined max length of the generated sentence. The optimization goal of the generator θ is maximizing the expected value of total rewards which induces the following loss function: L_RL =- ∑_êp_θ(ê)π(ê) =-∑_ê∏_t=1 ^T-1p_θ(w_t+1|w_1:t) π (ê) We adopt policy gradient to achieve the above optimization goal : ∇_θ L_RL∝ - π(ê)∇_θlog p_θ(ê) The design of the customized reward π_e is the core of the proposed framework and will be introduced in the next sections. §.§ MI Reward for Enhancing Alignment To strengthen the alignment of explanation with rating or feature, we propose a Mutual Information (MI) based reward. The MI reward π_MI(ê) is computed as follows: 1) Use a sentence encoder to transform the generated sample ê of the explanation generator as a sentence embedding E 2) For alignment with rating task, we convert the 5-level rating score to a 5-dimensional one-hot vector, and for alignment with feature task, we encode the pre-defined item feature word f as a word embedding F, 3) We take the concatenation of E with R or F as the input for the MI estimator, and the output of the estimator will be the MI reward. ( I(R;E) for the task of alignment with rating and I(F;E) for the task of alignment with feature.). As mentioned in Section <ref>, we adopt MINE <cit.> for MI estimation. We denote the statistics model of MINE for I(R;E) as θ_MIR and the one for I(F;E) as θ_MIF. According to Eq.(5), we compute MI reward π_MI(ê) for aligning with rating task as: π_MI(ê)=I_θ_MIR(R;E)=𝔼_ℙ_RE[T_θ_MIR]-log (𝔼_ℙ_R⊗ℙ_E[e^T_θ_MIR]) Similarly, MI reward π_MI(ê) for aligning with feature task is: π_MI(ê)=I_θ_MIF(F;E)=𝔼_ℙ_FE[T_θ_MIF]-log (𝔼_ℙ_F⊗ℙ_E[e^T_θ_MIF]) We adopt two strategies to ensure the MI reward model is qualified to give guidance to the generator: 1) we pre-train the MI estimator on the train set of the dataset by treating user review as E and ground truth rating and feature as R and F.2) we alternately update the reward model and the generator in a GAN-like manner to enhance the ability of the reward model in terms of capturing new output samples from the generator. §.§ KL and Entropy Reward for Regularization In practice, we observe that without any constraints, the MI-guided fine-tuning process may completely overwrite the original MLE-based policy of the backbone model, leading to a poor text quality of the generated explanation. Meanwhile, as reported in several works <cit.>, solely optimizing a single reward will incur reward hacking which means the policy exploits loopholes of the reward function and achieves high reward while leading to poor performance and unexpected behaviors. Hence, we introduce the commonly used KL regularization <cit.> in RL. The KL reward computes the negative value of KL divergence between the trained new policy and the original policy. 
In our case, the original policy refers to the pre-trained version of the backbone model and the new policy is the fine-tuned one, so the KL reward π_KL(ê) is defined as : π_KL(ê)=-D_KL[q(ê)||p_θ(ê)] where q(ê) represents the probability of the pre-trained version of the backbone generating the current explanation ê. By maximizing the KL reward, we can reduce the deviation of the fine-tuned model from the pre-trained model and ensure the fine-tuned policy has a safe baseline. Additionally, to further increase the diversity of the generation results and facilitate better exploration during RL training, we add Entropy reward as another objective for regularization. The Entropy reward π_Entropy is computed as : π_Entropy(ê)=H(ê) =-∑_w_t ∈êp_θ(w_t|w_t-1)log p_θ(w_t|w_t-1) Finally, the total rewards are the weighted summation of the MI,KL and Entropy rewards: π(ê)=π_MI(ê) + απ_KL(ê) + βπ_Entropy(ê) §.§ Dynamic Weighting Mechanism for Multi-objective Rewards According to Eq. (12), the total reward needs to strike a good balance on three different objective rewards. To avoid an exhaustive search of weighting parameters, we propose a dynamic weighting mechanism inspired by the Dynamic Weight Average (DWA) from <cit.> for the three rewards. The dynamic weighting mechanism learns to average rewards weighting over time by considering the rate of change for each reward. Concretely, the weighting γ for reward k π_k at time t is defined as: γ_k(t)= Ke^h_k(t-1)/τ/∑_ie^h_i(t-1)/τ , h_k(t-1)=π_k(t-2)/π_k(t-1) where K is the number of types of rewards (In our case, K=3), τ is a hyperparameter that controls the softness of the weight distribution. As a result, the dynamic version of the total reward function is [We only apply the dynamic weighting mechanism on the task of alignment with the feature, since we can easily strike a good balance between different rewards by simply assigning static weights when performing the task of alignment with rating.] : π_t(ê)= γ_MI(t)·π_MI(ê) + γ_KL(t)·π_KL(ê) + γ_Entropy(t)·π_Entropy(ê) §.§ Applying MMI framework on Different Types of Backbone Models The general pipeline of applying MMI framework on backbone models is shown in Figure <ref>. However, different from the general pipeline, a special adaptation is made when we apply the framework on a multi-task learning model to learn better alignment with rating. This is because, unlike its post-hoc counterpart, the rating r̂ is a non-fixed value predicted by the model itself. That means during the fine-tuning process, the rating r̂ will also be updated, which potentially undermines the recommendation performance of the multi-task model. Thus to ensure the recommendation performance will not be affected by the alignment task, we combine the original loss of the backbone model L_backbone and the RL objective L_RL as the optimization goal of performing alignment with rating task on multi-task learning backbone models: L=λ L_RL +(1-λ) L_Backbone § EXPERIMENTS In this section, we conduct extensive experiments to answer the following research questions: RQ1 How does the proposed MMI framework perform in boosting the alignment with predicted ratings and item features? RQ2 To what extent does the MMI framework retain the ability of backbone models to mimic user reviews? RQ3 How does each reward, as well as the DWA mechanism benefit the RL fine-tuning process? RQ4 Can the different adoption of the MMI framework maintain the recommendation performance of multi-task models? 
RQ5 Can the effectiveness of the MMI framework in terms of alignment with item features be generalized to different feature settings? RQ6 How do real users perceive the MI-enhanced explanation? In this work, we focus on addressing alignment with rating and alignment with feature independently and separately since we intend to closely examine whether the proposed MMI framework can effectively solve each task respectively. However, we do perform a preliminary study in Section <ref> to show the potential of our proposed method to simultaneously align generated explanations with both predicted ratings and item features. §.§ Experimental Setup §.§.§ Datasets Experiments are carried out on three real-world datasets from different domains: TripAdvisor (hotels), Yelp (restaurants), and Amazon-MoviesAndTV. We construct the datasets based on the preprocessed version in <cit.>, which filters out users with fewer than 5 reviews. The item features in reviews are extracted by Sentires [https://github.com/evison/Sentires] <cit.>. Additionally, we utilize the Spacy [https://spacy.io/] toolkit to conduct sentence dependency analysis on each review, removing those where the noun subject is “I” or “We”. This is because such reviews often lack objective descriptions of the items, making them unsuitable to refer to when generating explanations. Finally, we divide the whole dataset into train/validation/test subsets at a ratio of 8:1:1. The details of the datasets are presented in Table <ref> . §.§.§ Models for Comparison We divide existing explanation generation models into two groups to fairly compare our method with other baselines in terms of alignment with predicted ratings and item features, respectively. The baselines in the rating alignment group either take rating as the input of the explanation generator or perform rating prediction and explanation generation simultaneously. The baselines in the feature alignment group all take an item feature as an additional input to the decoder. For Rating Alignment: NRT <cit.> jointly model rating prediction and explanation by linearly combining the MSE loss for rating prediction and MLE loss for explanation generation. Att2Seq <cit.> belongs to post hoc generation models. It employs an attribute-to-sequence model architecture to generate an explanation for a product based on the given user, item, and rating of the item. PETER <cit.> integrates the user/item ID information into a transformer-based architecture and introduces a context prediction task to ensure the model generates a unique sentence for each user-item pair. PEPLER+MF <cit.> inputs user and item ID to a pre-trained GPT-2 model and perform continuous prompt learning with the MF-based rating prediction task as regularization. DualPC <cit.> introduces a duality loss to closely connect rating prediction and explanation generation tasks. By treating rating predicting as the primal task and explanation generation tasks as the dual tasks, it assumes a well-trained prediction model θ_r and generation model θ_e should satisfy the following probabilistic duality: p(e)(p(r̂|e;θ_r))=p(r)p(ê|r;θ_e) The above equation encourages the generated explanation to align with the ground-truth rating, so there is still a gap in aligning the explanation with the predicted rating. SAER <cit.> shares a similar motivation of aligning the sentiment of explanation with the predicted rating as our work. 
It minimizes the difference between the sentiment of current generation and the recommender's prediction through the following loss. The sentiment of the generated explanation is estimated by a pre-trained sentiment regressor S. L=∑_u,i𝔼_P_ê|u,i[(r̂_u,i-f^S(ê))^2] Compared with our MI metric, SAER's measurement for the relationship between explanation and rating is prone to bias in the sentiment regressor, and it belongs to fitting-based <cit.> metrics, making it less robust and reliable than our proposed MI metric. We apply our MMI framework for rating alignment on Att2Seq and PETER, as they represent two different types of backbones (PETER belongs to multi-task learning models while Att2Seq belongs to post-hoc generation models). We named them as Att2Seq + MMI and PETER + MMI. For Feature Alignment: ApRef2Seq <cit.> is a Seq2Seq model that encodes historical reviews from users/items and item features as contextual information to control explanation generation. PETER+ <cit.> shares the same transformer-based model architecture as PETER. Different from PETER, it leverages additional feature input as ApRef2Seq to generate more informative explanations. PEPLER-D <cit.> utilizes item features as discrete prompt for pre-trained GPT2. The generator takes the given feature as a prompt word, and generates relevant content revolving around the feature. NETE <cit.> tailors GRU with a gated fusion unit to incorporate the given feature into the generation. ERRA <cit.> inherits the architecture of PETER and features an aspect discriminator loss to encourage the pre-defined item feature to appear in explanation generation. We apply our MMI framework for feature alignment on ApRef2Seq and PETER+. We denoted them as ApRef2Seq + MMI and PETER+ + MMI §.§.§ Evaluation Metrics For Rating Alignment: Normalized Mutual Information We concatenate sentence embeddings of the generated explanations with one-hot vectors representing the predicted rating as input data to train a MINE model. When the model converges, the final value of the lower bound in equation (5) will be the estimation of I(R;E). However, due to the predicted rating R (for post-hoc model Att2Seq, we follow previous work by directly using the ground truth rating) of different models are different, we adopt a Normalized version of MI (NMI): I(R;E)/H(R) to make the value comparable. NMI ranges from [0,1], the higher the value is, the stronger the alignment of explanation with rating is. Sentiment Accuracy We also conduct sentiment classification tasks on the generated explanations to measure whether the predicted sentiment of the explanation matches the sentiment of the predicted rating. we perform fine-grained (labels are the 1-5 level predicted rating) and coarse-grained (labels are negative, neutral and positive) evaluation, respectively. For Feature Alignment: Mutual Information Similar to estimating I(R;E), we concatenate the sentence embeddings of generated explanations with word embeddings of pre-defined item features to train a MINE model. And we can directly compare the estimated I(F;E) of different models since the pre-defined features for all models are the same. 
FMR Feature Match Ratio <cit.> examine whether the assigned feature f_u,i is included in the generated explanation E_u,i: FMR=1/N∑_u,i𝕀(f_u,i∈ E_u,i) For text similarity with user reviews We adopt commonly used metrics for natural language generation: BLEU-1 (B-1), BLEU-4 (B-4), ROUGE-1 (R-1), ROUGE-L (R-L) and METEOR (M) to measure the quality of generated explanation in terms of the similarity with user reviews. §.§.§ Implementation details Assignment of item features To make the experimental setting more realistic, we assign item features for each model in an estimation manner instead of directly using the features from the reviews in the test set. The estimation method is borrowed from several feature-based explainable recommendation methods <cit.>. First, we select the top 50 popular item features on each dataset [The reason for using popular features is to ensure the assigned features are all valid. The item features extracted in the datasets are automatically labeled using Sentires Toolkit. However, in the previous work <cit.>, which adopted the same feature-extraction method, the authors asked humans to label the high-quality feature from the extracted feature set. Only around 100 popular features are perceived as qualified features. Moreover, according to our statistics, the frequency of the top 50-top100 features in the three datasets are all below 0.5%, so we choose Top-50 features as the setting for the main evaluation results.], then we calculate the user-feature attention vector x_ik for each user and the item-feature quality vector y_jk for each item as follows, where t_ik represents the number of reviews from user i that mentioning feature k, p_jk represents the number of reviews from item j that mentioning feature k and s_jk represents the average sentiment on feature k of item j: x_ik= 0, if t_ik=0 1+ (N-1)(2/1+e^-t_ik-1), otherwise y_jk= 0, if p_jk =0 1+ N-1/1+e^-p_jk.s_jk ), otherwise Finally, we assign a feature for each user-item pair according to the dimension with the maximum value in the dot product of the two vectors. Details of the MMI framework We adopt an off-the-shelf BERT model[https://huggingface.co/bert-base-cased] as the sentence encoder in the framework and the feature word embedding is obtained from the word embedding layer of the sentence encoder. The sentence encoder is fine-tuned on the train set by accomplishing a sentiment classification task. The statistics model used for MI estimation is a three-layer MLP. §.§ RQ1: Alignment with ratings/features For RQ1, Table <ref> and Table <ref> report the alignment performance of different generation methods. From Table <ref>, we can conclude that by applying the MMI framework on Att2Seq and PETER, we achieve superior performance in terms of NMI and sentiment accuracy under all settings. Equipped with the MMI framework, the fine-tuned version of Att2Seq and PETER model has gained a stronger alignment ability compared with their pre-trained counterparts, which demonstrates that the MMI framework benefits both multi-task learning model and post-hoc generation model. Besides our MMI method, SAER and DualPC beat other baseline models on the TripAdvisor and Yelp datasets. That is because they design model-intrinsic mechanisms to relate the explanation generation and rating prediction more closely unlike other models that loosely connect them solely through shared latent space or simply use predicted rating as the initial state of the explanation generator. 
PEPLER+MF has the worst ability to align with rating, which shows the limitation of prompt-tuning of pre-trained LLM. Then, we analyze the results of alignment with features. As shown in Table <ref>, our MMI framework enables ApRef2Seq and PETER+ to obtain stronger alignment ability compared with their pre-trained versions and outperforms most baselines in addition to the strongest competitor PEPLER-D. However, we notice that while PEPLER-D can effectively generate sentences containing assigned features, due to its practice of directly using the feature as the prompting word, these sentences are mostly very generic and lack diversity (e.g. “The food is good.”, “The service is good.”). Such observation explains the poor performance of PEPLER-D in terms of text generation in Table <ref>. To sum up, the overall evaluation results demonstrate the effectiveness of the proposed MMI framework in enhancing the alignment property of explanation. The improvement based on different backbone models reflects the flexibility and generalizability of the framework to some extent. §.§ RQ2: Similarity with user reviews Following previous works, we examine all generation methods under traditional NLG metrics to answer RQ2. Based on the results from Table <ref>, Att2Seq + MMI, PETER + MMI, ApRef2Seq + MMI and PETER+ + MMI maintain most of the generation ability of their corresponding backbone models. That gives RQ2 an affirmative answer that our fine-tuning framework retains most of the knowledge from the pre-training stage to generate fluent, readable, and natural text for end-users. We contribute this advantage to the customized combination of MI, KL, and Entropy reward in the MMI framework, which will be further studied in Section <ref>. Admittedly, in certain settings (e.g. PETER on Yelp dataset), our method cannot champion in terms of NLG metrics, that is because the ability of our method to simulate reviews is subject to the pre-trained backbone model. When the backbone model performs badly, the MMI fine-tuned version will also not achieve good performance. However, as we have discussed in this paper, achieving the best NLG metric scores does not necessarily equate to the best quality of explanations in terms of helping users make decisions. And the minor differences in NLG metrics between models might be negligible from real users' perceptions. Therefore, while our method may not excel in NLG metrics, its ability to steer the explanation towards better alignment could still be superior to other approaches in terms of meeting the requirement of explanations in real recommendation scenarios. §.§ RQ3: Examine the effect of each reward and DWA mechanism We compare the performance of the method under different reward settings on the Yelp dataset. The results are shown in Table <ref> and Table <ref>. From the results, we observe that solely adopting MI reward attracts the generation process to produce unreadable sentences that contain repetitive keywords which can strengthen the alignment of explanation with rating/feature. That observation echoes our speculation in Section 4.3 that simply adopting MI value as the only reward in RL will incur reward hacking. In our examples, it manifests as the generator finding a shortcut to gain higher reward by the constant addition of words or phrases to the generated sentence that results in high scores for the alignment metric yet the overall quality of the language is severely deteriorated. 
Incorporating KL reward forces the generator to remember the knowledge obtained from the pre-trained stage thus improving the text quality to some extent. However it also leads the generator to produce short sentences as the shorter the sequence is, the less discrepancy between the reference model and the generation model will be. Fortunately, entropy reward complements that limitation as it favors longer sentences containing less common words. Meanwhile, we can see the benefits of DWA from Table <ref> and Figure <ref>. Without DWA, the RL process is highly unstable as we can see severe fluctuation in the entropy reward curve. Such instability will disable the entropy regularizer to control the length of the generated sentence, which corresponds to the poor text quality reflected in Table <ref>. Contrastively, adding DWA enables flexible adjustment towards reward weights based on the reward's change ratio, making the RL process more stable and robust. §.§ RQ4: The effect of MMI framework on multi-task backbone model's recommendation performance To ensure the recommendation performance of multi-task learning models, we combine the original model loss L_backbone with RL loss L_RL from the MMI framework to fine-tune the backbone model. To investigate the benefits of such adaptation, we compare the performance of PETER with different training objectives on the TripAdvisor dataset. Following previous works<cit.>, we treat the recommendation task as a rating prediction task and evaluate it with MAE and RMSE metrics. The results are shown in Table <ref>. Although directly replacing the original learning objective of the backbone model with MMI outperforms in terms of rating alignment, the rating prediction performance decreases drastically. We argue that this is caused by the symmetry of the Mutual Information: I(R;E)=H(R)-H(R|E)=H(E)-H(E|R)=I(E;R). Different from post-hoc generation models, the variable R from multi-task learning models is not fixed. Thus, for them, maximizing mutual information I(R; E) will take the risk of leading the rating prediction to align with an unqualified explanation. Hence, the combination of L_Backbone and L_RL is a more suitable method for adopting the MMI framework on multi-task learning models. §.§ RQ5: MMI for feature alignment under different feature settings To prove the success of the MMI framework for feature alignment is not contributed by the specific choice of feature assignment, we compare ApRef2Seq and ApRef2Seq + MMI under different feature settings: assigning Top 10/20/50/100 features based on estimation or directly extracted from corresponding user review. As shown in Figure <ref>, our MMI framework benefits backbone models under all feature settings. Moreover, the increased ratio of FMR is higher when the feature is assigned by estimation which reflects a similar scenario on a real recommendation platform. §.§ RQ6: Human Evaluation We recruit 25 participants and design two tasks based on the Yelp dataset. In the first task, we pair items with different ratings and ask participants to choose the item they perceive as the better one based on the generated explanation. We compare 5 methods: Att2Seq, Att2Seq +MMI, SAER, DualPC, and a reference method that directly treats the corresponding user review as the explanation. Each participant is required to annotate 60 records, consisting of 12 records from each explanation method. The comparison results are presented in Table <ref> and grouped by the difference value between the two items' predicted ratings. 
Att2Seq + MMI achieves the best agreement rate under all settings, which indicates that rating-aligned explanations can help users better understand the predicted rating and tell the difference between items. Meanwhile, the relatively poor performance of user reviews highlights the limitations of treating user reviews as ground truth for explanation generation. In the second task, we sample user-item pairs from the dataset and collect the assigned features and the explanations generated by ApRef2Seq, ApRef2Seq+MMI, ERRA, and PEPLER-D. We ask participants to annotate explanations in terms of Informativeness <cit.> (the generated explanation contains specific information, instead of vague descriptions only), Relevance (the details in the generated explanation are consistent and relevant to the assigned feature of the business) and Satisfaction (the generated explanation makes the use of the recommender system fun). We assign 25 records to each participant and ensure each record has been annotated by at least 3 participants. The annotation results are shown in Table <ref>. ApRef2Seq+MMI outperforms the other methods in all dimensions, which underscores the importance of generating feature-aligned explanations. The low informativeness score of PEPLER-D echoes our observation in Section 5.3 that some sentences generated by PEPLER-D are rather generic, providing few details about an item. § SIMULTANEOUS ALIGNMENT WITH RATINGS AND FEATURES In this pilot study, we try a direct adaptation that linearly combines I(R;E) and I(F;E). The MI reward in Section 4.2 is re-designed as (1-ϵ) * I_θ_MIR(R;E) + ϵ * I_θ_MIF(F;E). The parameter ϵ controls the balance between rating alignment and feature alignment. We conduct a pilot experiment on the TripAdvisor dataset based on the backbone model PETER+ (PETER+ is a variation of PETER that uses an item feature as additional input, so it is suitable for both rating alignment and feature alignment). The experimental results in Table <ref> show the potential of our proposed method to simultaneously align generated explanations with both ratings and features. However, during our experiments, we realized that the relationship between rating alignment and feature alignment deserves a more in-depth analysis. Due to the space limit, we leave this for future work. We will continue to explore this direction and design more advanced methods, such as a Conditional Mutual Information-based framework, to solve this problem. For instance, we can use a two-step optimization method: the first step uses I(F;E) as the primary reward to enhance alignment with features; the second step uses a Conditional Mutual Information-based reward I(R;E|F) to further enhance alignment with rating while maintaining the ability to align with features. § CONCLUSION In this paper, we identify the limitations of current explanation generation methods for recommendation in terms of alignment with the predicted rating and the item feature. To solve this problem, we propose a novel MMI framework, which takes an arbitrary generation model as the backbone and adopts an RL fine-tuning process to maximize the mutual information between the generated explanation and the predicted rating/item feature. Experiments on three datasets demonstrate that our MMI framework can effectively enhance the alignment ability of different backbone models while maintaining their ability to simulate user reviews. User studies further confirm the benefits of MI-enhanced explanations to end-users due to their better alignment properties.
http://arxiv.org/abs/2407.13572v1
20240718151436
SecScale: A Scalable and Secure Trusted Execution Environment for Servers
[ "Ani Sunny", "Nivedita Shrivastava", "Smruti R. Sarangi" ]
cs.CR
[ "cs.CR", "cs.AR" ]
§ ABSTRACT Trusted execution environments (TEEs) are an integral part of modern secure processors. They ensure that their application and code pages are confidential, tamper-proof and immune to diverse types of attacks. In 2021, Intel suddenly announced its plans to deprecate its most trustworthy enclave, SGX, on its 11^th and 12^th generation processors. The reasons stemmed from the fact that it was difficult to scale the enclaves (sandboxes) beyond 256 MB – the hardware overheads outweighed the benefits. Competing solutions by Intel and other vendors are much more scalable, but do not provide many key security guarantees that SGX used to provide, notably replay attack protection. In the last three years, no proposal from industry or academia has been able to provide both scalability (with a modest slowdown) as well as replay protection on generic hardware (to the best of our knowledge). We solve this problem by proposing SecScale, which uses some new ideas centered around speculative execution (read-first, verify-later), creating a forest of MACs (instead of a tree of counters) and providing complete memory encryption (no generic unsecure regions). We show that we are 10% faster than the nearest competing alternative. § INTRODUCTION The number of attacks on remotely executing software in both public and private clouds is on the rise <cit.>. Along with software-based attacks, a large number of physical attacks such as cold boot attacks and bus snooping are also being mounted <cit.>. According to an IBM report <cit.>, the cost of a data breach in 2023 was $4.5 million, and 82% of the data breaches involved data that was stored in the cloud. To secure data and computation in any such remote framework, we need to use a combination of encryption, message authentication codes (MACs), and digital signatures, respectively, for ensuring the following four ACIF properties: authenticity (A), confidentiality (C), integrity (I) and freshness (F). Authenticity refers to the fact that the data was indeed written by the server's CPU; confidentiality uses encryption to prevent snooping; integrity prevents tampering (using hashes and keyed hashes (MACs)); and freshness ensures that data that was valid in the past is not being replayed. Table <ref> shows a list of all the major commercially available TEEs including SGX. Among all the commercially available TEEs listed in Table <ref>, the now-deprecated Intel SGX <cit.> (Software Guard Extensions) is the only one that provides all four ACIF guarantees in HW (referred to as SGX-Client). Third-party software on SGX-Client used to run securely in a HW-managed enclave in spite of a potentially malicious OS or hypervisor. However, this robust protection came at a heavy price. The performance overheads limited the enclave size to 128-256 MB <cit.>. As a result, Intel decided to deprecate SGX-Client in its 11^th and 12^th generation processors and supplanted it with SGX-Server[Both SGX-Client and SGX-Server are terms that we introduce in this paper for the ease of explanation.]. SGX-Server adopts a different mode of memory encryption and eliminates time-hungry integrity (Merkle) trees altogether.
It scales to 512 GB; however, this scalability comes at the cost of security – it is possible to mount replay attacks <cit.>. This has sadly impacted different industries and products quite adversely. For example, ultra HD Blu-ray disks require the support of SGX's digital rights management (DRM) service<cit.>. They can no longer be played on new Intel processors that only support SGX-Server<cit.>. There are similar issues with DRM-protected PC games<cit.> and secure 4K video streaming apps <cit.>. SGX-Client allowed these apps to run in an enclave and consequently guarantee that the viewer wasn't able to steal video content <cit.>. References <cit.> contain a lot of examples of replay attacks in distributed systems and software such as Ethereum and Bitcoin (attacks SGX-Client could prevent). There are two strands of contemporary work. The first has been adopted by commercial silicon vendors who provide security solutions primarily for VMs (virtual machines) <cit.>. The assumption is that the entire guest VM is trustworthy including its software stack<cit.>. In the second strand of work, proposed in academia, two proposals stand out. Dynamic Fault History-Based Preloading (DFP)<cit.> implements a prefetching based mechanism to improve performance and support larger enclaves. Whereas, Penglai<cit.> is a bespoke RISC-V system that relies on caching parts of the integrity verification tree (Merkle tree) in a separate physical memory that has strict access protections. The miss penalty is sadly quite high. Additionally, given that there is a large unrestricted unsecure memory in the system, managing the page tables requires complex mechanisms. The insights in our work are as follows. In any TEE, a security breach is a catastrophic event – the entire system needs to shut down. Hence, we have the liberty to speculate: read first and verify later. Second, existing work in this space creates a Merkle tree of counters, whereas we create a MAC forest, which is paradigmatically quite different. Finally, we avoid the pitfalls of creating one large SGX-like secure memory or splitting the physical address space into generic secure and unsecure regions. The former approach is not scalable, which is why SGX-Client was deprecated in the first place. We shall also show the same in our experiments. The latter approach adopted in conventional work envisions having two parts of an executing application: secure part and an unsecure part. The secure part has severe restrictions and the unsecure part functions as a regular program. This is a good strategy when the size of the secure memory is limited. However, we propose fully encrypted memory (scales till 512 GB), where the entire memory is secure barring a few pages that are reserved for inter-process data transfer. Hence, there is no need to split an application in this manner (also complicates the software design). To ensure inter-process isolation, instead of relying on the OS to update page tables correctly, we rely on encryption with enclave keys. We also use a more pragmatic threat model where we assume that the attacker can observe and modify any location at will. The specific contributions in this work are as follows: 1 Design of a scalable integrity protection mechanism, where we increase the granularity of the MAC computation from the block-level to the page-level and devise an efficient and scalable MAC forest based integrity protection mechanism. 
2 Design of an efficient scheme for enclave page fault management that implements speculative execution and decreases the latency of the critical path. 3 An efficient encryption mechanism to secure the confidentiality of the entire memory. 4 A detailed performance and scalability analysis of , which shows a 10% improvement over the nearest competing work (Penglai). <ref> introduces the necessary background. <ref> outlines the threat model, <ref> characterizes the benchmarks and related work, <ref> presents the proposed design, <ref> shows a detailed performance analysis, <ref> presents the related work, and we finally conclude in <ref>. § BACKGROUND OF INTEL SGX Intel Software Guard Extensions (SGX <cit.>) integrates hardware extensions to the x86 instruction set. There are special instructions to create protected execution environments known as enclaves. The enclaves are located within a dedicated portion of a processor's memory – this is known as the Enclave Page Cache (EPC). The EPC is a continuous block of memory (128-256 MB) that is initialized during the boot process. It is inaccessible to the operating system and hypervisor. Some of its contents (excluding the metadata) are accessible to processes running within it (with appropriate isolation between secure processes themselves). In SGX, only the on-chip components such as the processors, caches, NoC and memory encryption engine (MEE) are assumed to be secure. These secure components are a part of the trusted computing base (TCB). Enclaves are created before invoking the trusted code during execution. When we call a trusted function, secure execution starts within the enclave. Once the execution completes, the function returns and the context is switched back. Subsequently, normal unprotected execution of the application continues (refer to Figure <ref>). The OS manages the page tables and the TLBs. However, any update to the TLB needs to be vetted by the SGX subsystem. Hence, a dedicated HW circuit verifies the integrity of the contents of the secure page and also ensures that no “secure” virtual address is mapped to an “unsecure” physical page or vice versa using an inverted page table. When the EPC is full, we need to evict a page, encrypt it and store it in the unsecure part of memory. Some metadata corresponding to it such as the key used to encrypt it and its MAC (keyed hash) are stored in the EPC. To reduce storage space, we can create an eviction tree of such evicted pages that is similar to the classical Merkle tree. Note that entering and exiting secure mode are expensive operations (≈ 20-40k clock cycles <cit.>). So is bringing back an evicted page to an EPC, hence, EPC misses should be minimized. §.§ SGX-Client: (Gen. Intel CPUs) SGX-Client employs a memory encryption engine (MEE) <cit.>, which is an extension of the memory controller (part of TCB). To maintain the confidentiality of the data, the MEE encrypts the data using Advanced Encryption Standard (AES) counter-mode encryption<cit.> (AES-CTR). The counter values correspond to different data blocks of a page; whenever a data block is modified, its counter is incremented by one (to stop replay attacks). The inclusion of these counters in the encryption processs guarantees that effectively a new key is used for every encryption of the same block. The integrity of the counters is essential to the system as their correctness directly affects the system security. Their integrity is ensured in SGX via MACs stored in memory and a Merkle tree that aggregates them. 
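The counter-mode scheme can be illustrated with a deliberately simplified software model. In the sketch below, HMAC-SHA256 stands in for the AES-based keystream generator and the Carter-Wegman MAC of the real MEE; the block size, key handling and failure response are illustrative assumptions rather than the actual hardware design.

import hmac, hashlib
from collections import defaultdict

BLOCK = 64  # cache-line granularity in bytes

class ToyMEE:
    # Conceptual model: every 64-byte block has a per-block counter that is
    # incremented on each write, so the same plaintext never produces the same
    # ciphertext twice, and the MAC binds (address, counter, ciphertext).
    def __init__(self, enc_key: bytes, mac_key: bytes):
        self.enc_key, self.mac_key = enc_key, mac_key
        self.counters = defaultdict(int)      # the integrity of these counters comes from the tree

    def _keystream(self, addr: int, ctr: int) -> bytes:
        seed = addr.to_bytes(8, 'little') + ctr.to_bytes(8, 'little')
        return (hmac.new(self.enc_key, seed, hashlib.sha256).digest() * 2)[:BLOCK]

    def write(self, addr: int, plaintext: bytes):
        assert len(plaintext) == BLOCK
        self.counters[addr] += 1              # freshness: effectively a new key per write
        ctr = self.counters[addr]
        cipher = bytes(p ^ k for p, k in zip(plaintext, self._keystream(addr, ctr)))
        tag_input = addr.to_bytes(8, 'little') + ctr.to_bytes(8, 'little') + cipher
        mac = hmac.new(self.mac_key, tag_input, hashlib.sha256).digest()
        return cipher, mac                    # both are written to (untrusted) DRAM

    def read(self, addr: int, cipher: bytes, mac: bytes) -> bytes:
        ctr = self.counters[addr]
        tag_input = addr.to_bytes(8, 'little') + ctr.to_bytes(8, 'little') + cipher
        if not hmac.compare_digest(mac, hmac.new(self.mac_key, tag_input, hashlib.sha256).digest()):
            raise RuntimeError("integrity/replay violation detected")
        return bytes(c ^ k for c, k in zip(cipher, self._keystream(addr, ctr)))

Replaying an old (ciphertext, MAC) pair fails here only because the on-chip counter has moved on; this is exactly why the integrity of the counters themselves, discussed next, is essential.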
§.§.§ Integrity Verification using Merkle Trees The leaf nodes of the Merkle tree store counters for the secure pages (part of the EPC), and the internal nodes of the tree store the counters for each of their child nodes. Additionally, each node stores the MAC of its counters (encrypted hash), in such a way that a Carter-Wegman <cit.> style tree is created – the MAC is generated by encrypting the hash of the counters in the node using the counters in its parent node. We can thus conclude that the root node captures the information of all the nodes in the tree. If there is a change in any counter, it will get reflected at the root. We thus need to store the root of the tree in the TCB, and for efficiency we can store additional nodes of the tree in the TCB such that the root need not be updated on every write. As we increase the size of secure memory, the number of counters increases, thereby increasing the size (depth) of the Merkle tree and its associated storage overheads. As the depth of the tree increases, the integrity verification of the counters starts taking a lot of time. Hence, the size of the EPC has been limited to 128 MB or 256 MB in SGX-Client. Sadly, a part of the EPC memory has to be reserved for storing such EPC metadata, which further decreases its usable size (32 MB out of 128 MB). §.§ SGX-Server (3^rd Gen. Intel Xeon CPUs onwards) Intel launched the 3^rd Gen. Intel Xeon Processor based server platforms in 2021 to facilitate trusted execution environments (TEEs) that support a large number of enclaves and a large secure memory (SGX-Server). SGX-Server uses TME-MK (Total Memory Encryption – Multi-Key) <cit.>, which uses multiple keys to encrypt the physical memory – one key for each VM. In SGX-Server, the physical memory is encrypted using the AES-XTS (Advanced Encryption Standard – Tweakable Block Ciphertext Stealing) <cit.> encryption engine. AES-XTS is used for block-based storage devices and takes into account the physical address for encrypting a data block. For encryption, AES-XTS uses a 128-bit tweak derived from the physical address to provide address-based variability, and a Galois-field multiplication is applied on the encryption engine output to add an element of diffusion. The AES-XTS encryption lies on the critical path and encrypts all the data that enters or leaves the chip. Given that we have confidentiality and integrity (via MACs), several hardware attacks (like cold boot attacks) and memory bus probing or relocation/splicing attacks are prevented. Even though it is much faster and more scalable, a key security guarantee is sacrificed – replay protection or freshness <cit.>. This means that it is possible to replace the value in a memory location (along with its MAC) with a pair of values that were seen in the past. The processor will not be able to perceive that the memory contents have been tampered with. Also, an observer can tell that the value in a memory location is the same as it was at a previous point in time – the ciphertext will be the same (this is a side channel). § THREAT MODEL  considers a threat model similar to that of Intel SGX <cit.>. We include only the on-chip hardware components in the TCB, which includes the cores, caches, NoC, MEE and the hardware circuits that we introduce in . Other than these components, we only trust the code running within the enclaves with regards to their own execution. The SGX enclaves and standard cryptographic operations maintain confidentiality and detect integrity violations.
The hardware components outside the TCB, the privileged software stack and other user applications including unrelated enclaves, are considered to be untrusted <cit.>. Figure <ref> displays the trusted and untrusted components in the system. Similar to <cit.>, assumes that the attacker controls the system software stack and can misuse its privileges to launch attacks such as observing and modifying the contents of memory addresses <cit.>. The attacker can also mount physical attacks such as snooping on the memory bus or cold boot attacks – observe and modify any memory location at will<cit.>. provides all four ACIF guarantees: authenticity, confidentiality, integrity and freshness. It protects the system against replay attacks<cit.>, where the adversary may replace a data-block/MAC pair in memory. Akin to SGX and similar TEE schemes, does not consider side channel attacks (power, EM and cache), DoS attacks and attacks that introduce errors in the computation based on laser pulses or voltage spikes <cit.>. § CHARACTERIZATION The aim of characterization is to determine the sources of performance degradation in SGX systems by characterizing the behavior of benchmarks and baseline designs. §.§ Setup and Benchmarks We ran the SPEC CPU 2017<cit.> benchmarks and characterized their performance on different systems (standard practice while evaluating TEEs). The different systems are modeled and simulated in a cycle-approximate simulator, Tejas<cit.>. Table <ref> shows the simulation parameters. We used an algorithm similar to PinPoints <cit.> and SimPoints <cit.> to find the regions to simulate in each workload, and then we weighted them appropriately <cit.> to arrive at the final figures. We used Intel Pintools 3.21 <cit.>. §.§ Systems Modeled 1 Baseline is a vanilla design with no security. 2 SGX is SGX-Client that implements a Merkle Tree and a 128MB EPC. It guarantees all ACIF properties. 3 DFP has a Merkle tree and a 128MB EPC, which is the same as baseline SGX. It additionally implements a predictor that predicts page faults in the near future and prefetches the pages. 4 Penglai has a Mountable Merkle Tree (with a root tree and multiple subtrees) that can support 512GB of secure memory, but does not have an EPC. It only caches the 32 most recently seen subtree roots. For every LLC miss, multiple additional memory accesses are required to retrieve the tree nodes for integrity verification. §.§ Observations §.§.§ Performance Comparison The performance (reciprocal of the simulated execution time) of the systems for different workloads is shown in Figure <ref>. In comparison to baseline, SGX-Client shows a very high performance degradation (mean: 83%) due to the overheads associated with traversing the Merkle tree and the EPC page fault penalties. DFP shows very little improvement in performance (≈ 2%), compared to SGX-Client. It is limited by the accuracy of its EPC page fault predictor. Penglai shows better performance than SGX-Client and DFP (mean: 49% better than SGX-Client). The source of its overheads is the latency incurred due to additional memory accesses to the MMT (Mountable Merkle Tree) required for integrity verification. §.§.§ Sources of Performance Overheads Let us separately analyze the impact of Merkle tree traversal and EPC page fault servicing. Figure <ref> shows the performance of the different systems with only the integrity tree (Merkle Tree/MMT) overheads. We assume a zero EPC page fault penalty. There is a degradation in performance observed for all the systems. 
However, the performance degradation in Penglai is observed to be the highest (34%). This is because its integrity tree, MMT, is larger than that of the other two systems. It supports counters for 512 GB of memory. Additional memory accesses are required for every secure memory access to retrieve the counters and verify their integrity. Conclusion: Encrypting a large secure memory using counter-mode encryption with an integrity tree (Merkle Tree/MMT) fails to scale well; instead, it imposes large performance overheads. Figure <ref> shows the effect of overheads due to EPC page fault handling. We assume the overheads associated with the integrity trees to be zero. We observe that the performance degradation is much more drastic because of these overheads. The existing page fault handling mechanism is very costly and takes a large number of cycles (≈ 40k cycles <cit.>) to complete. The entire page loading process is slow because of all the DRAM reads, decryption and metadata updates – this increases the length of the critical path. SGX-Client and DFP suffer from overheads resulting from both the Merkle tree as well as EPC page faults. DFP shows little improvement over SGX-Client (≈2%), owing to preloading of pages in case of correct predictions. However, the prediction accuracy of DFP varies significantly across benchmarks (0-19.7% in our experiments). As Penglai does not have a limited-size EPC, it does not suffer from overheads associated with EPC page faults; however, maintaining the integrity information is quite onerous in its case. §.§.§ Analyzing EPC Page Fault Overheads To analyze the impact of EPC page fault penalties on the performance of the system, we simulate the system with varying values of the EPC page fault penalty and observe the difference in the performance (see Figure <ref>). The EPC page fault penalty has a direct impact on system performance as it directly affects the latency of the critical path. The overhead increases from 44% (5k cycles) to 83% (40k cycles) relative to the baseline. Since EPC page faults result in large performance overheads, we analyzed the frequency of such events by computing the number of evictions per 1000 instructions in different workloads (see Figure <ref>). We observe that the value is quite low in most cases: on average, 0.2 evictions per 1k instructions. Although infrequent, these page faults have a huge impact on the system performance due to their excessively high latency. §.§.§ Storage Overheads In addition to the performance overheads that we saw earlier, the additional storage overhead can be visualized in Figure <ref>. The overhead grows linearly with the size of the secure memory. Over 8 GB of memory is required to store the counters for 512 GB of secure memory. Extending the Merkle Tree to add a leaf node counter (for a page in secure memory) may require the addition of multiple nodes in the tree (parent nodes). Unrestricted scalability with freshness guarantees can only be achieved if key management adds modest storage overheads. There is a need to devise a more efficient mechanism for providing freshness guarantees that scales to TBs of physical memory. § SYSTEM DESIGN §.§ Overview The design principle of revolves around the read first, verify later paradigm. Our system supports enclaves that are capable of handling large workloads of up to 512 GB (similar to SGX-Server); the users get full ACIF security (similar to SGX-Client). In the basic design, an unlimited number of enclaves are supported (total size: 512 GB).
This is achieved through efficient utilization of the pre-existing 128 MB EPC (part of SGX-Client) to create an extended EPC region of 512 GB (hereafter, the eEPC). It provides a hardware-assisted secure execution environment for server applications. The design decisions are summarized in Table <ref> (based on the characterization). The high-level design of is shown in Figure <ref>. §.§ MAC Forest for Integrity Verification Taking a more efficient approach towards integrity protection, we propose a hierarchical MAC-forest-based integrity mechanism to secure the entire eEPC region. In the eEPC region of , MACs are computed at the page level rather than the block level. A MAC engine (ME) computes an 8-byte MAC for each 4 KB page. These MACs are stored as the leaf nodes in the MAC forest. Based on the arity of the individual subtrees in the MAC forest, we group p MACs from the leaf nodes to generate a single 8-byte parent MAC. This process is repeated at all the levels of the subtrees. A part of the eEPC is reserved to store the lower-level nodes of the subtrees in the MAC forest, and the topmost level of these subtrees is securely stored in the EPC (part of the TCB). We retain the use of counters and the Merkle tree for protecting the EPC region (akin to SGX-Client) with the same permission scheme as SGX-Client for accesses. In our evaluated design, we consider a MAC forest consisting of subtrees with q = 3 (3-level tree) and p = 16 × 8 (arity 16 at the lower level and 8 at the higher level). In this case, the MACs of 16 pages will be grouped in the lower level to generate a MAC at the parent. For the next level, we group 8 MACs to generate a parent MAC, which is part of the topmost level of the forest. For 512 GB memory, we get a MAC forest with the topmost level containing 2^20 MACs (securely stored in the EPC, 8 MB total). By limiting the levels of the subtrees, we can contain the additional memory bandwidth required for integrity verification (a maximum of 4 additional accesses in our representative design). MAC Verification Circuit (MVC): We implement a MAC verification circuit (MVC) that is responsible for verifying the integrity of the pages within the eEPC region. The MVC performs this verification when a page is loaded into the EPC. Note that we perform deferred MAC verification – we do not halt the normal execution to wait for the outcome of the verification process. It uses the SHA-2 algorithm to compute MACs (throughput: 40 Gbps at a 5.15 GHz frequency and a 7 nm tech. node). §.§ Full Memory Encryption To protect the pages in the eEPC region, we use vanilla AES-ECB encryption with a few tweaks. Every page in the eEPC region is encrypted using a different encryption key that is randomly generated every time a modification is made. This mechanism effectively safeguards against replay attacks because keys are (probabilistically) not reused. For AES-ECB mode encryption, we use a 256-bit key, which is a combination of a hardware-specific (HW) key (64 bits), the enclave ID (31 bits), bits generated by a pseudo-random number generator (128 bits) seeded by the boot time and the HW key, the physical address of the page in the 512 GB memory (27 bits) and the block address within the page (6 bits). We keep the PRNG component large because a new value needs to be created for every encryption. The components of the key excluding the block-specific 6 bits constitute the page-specific key, K.
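A minimal sketch of how such a 256-bit key can be assembled is given below; the bit widths follow the text (64 + 31 + 128 + 27 + 6 = 256 bits), while the struct layout, field packing and names are our own illustrative assumptions rather than the actual hardware format.

    #include <stdint.h>

    /* Key components as listed above; the widths sum to 256 bits. */
    typedef struct {
        uint64_t hw_key;         /* 64-bit hardware-specific key                     */
        uint32_t enclave_id;     /* 31-bit enclave ID (top bit unused)               */
        uint8_t  prng_bits[16];  /* 128 PRNG bits seeded by boot time and the HW key */
        uint32_t page_addr;      /* 27-bit physical page number within 512 GB        */
        uint8_t  block_addr;     /* 6-bit block index within the 4 KB page           */
    } eepc_key_fields_t;

    /* Pack the fields into a 32-byte AES-256 key.  Setting block_addr to zero
     * yields the page-specific key K; the per-block key k_b differs from K
     * only in the 6 block-address bits. */
    static void pack_key(const eepc_key_fields_t *f, uint8_t key[32])
    {
        uint64_t words[4] = {0, 0, 0, 0};
        words[0] = f->hw_key;
        words[1] = ((uint64_t)(f->enclave_id & 0x7FFFFFFFu) << 33) |
                   ((uint64_t)(f->page_addr  & 0x07FFFFFFu) << 6)  |
                   (uint64_t)(f->block_addr & 0x3Fu);
        for (int i = 0; i < 16; i++)               /* remaining 128 bits: PRNG output */
            ((uint8_t *)&words[2])[i] = f->prng_bits[i];
        for (int w = 0; w < 4; w++)                /* serialize, least-significant byte first */
            for (int b = 0; b < 8; b++)
                key[8 * w + b] = (uint8_t)(words[w] >> (8 * b));
    }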
The block-specific address bits are extracted for every block of the page and concatenated with K to generate the block-specific key k_b, where b is the index of the block within the page. This ensures that the keys used for encrypting each block of the page are different. Thus, if the same data is stored in different memory blocks of the page, different ciphertexts will be generated. §.§.§ EPC – eEPC Page Transfer The encryption-decryption process when a page is being transferred from the EPC to the eEPC is shown in Figure <ref>. It is triggered by an EPC page fault. When a page is evicted from the EPC, the encrypted page is first decrypted using the AES-CTR mode, and then it is re-encrypted using the AES-ECB-256 mode with a secret key generated by the MEE. The page-specific key, K, is then encrypted using a system-specific key SSK. The SSK is a concatenation of a second device-specific key (128 bits) and the 128-bit boot time. The SSK is stored in a dedicated register in the TCB. The block-specific values in the key are extracted from the address at the time of cryptographic operations, and only the randomly generated page-specific key, K, is stored in the eEPC region. When encrypting and sending the key to the eEPC, the block address within the page is set to zero. The encrypted page and its encrypted key are then moved to the eviction region – both are stored in the eEPC region. A section of the physical memory called the Key Table stores all the encrypted keys for each of the physical pages. Given that we have 2^(39-12) = 2^27 (=128M) physical pages in our system and each key is 16 bytes, we need 2 GB of storage for storing the keys. This translates to a 0.4% overhead for storing the keys in physical memory. The advantage of storing the keys in this manner is that we can easily locate the key given the page's physical address and fetch it along with the evicted page when it is required in the EPC. Basically, an evicted page comes to the EPC along with its encrypted key. We trust the key for the time being, until verification completes. Thus, key storage and management overheads are effectively reduced while maintaining key freshness. While bringing a page from the eEPC to the EPC region, the reverse process is followed (as shown in Figure <ref>). First, the page's key is read from the Key Table and decrypted using the SSK; the page is then decrypted using the AES-ECB mode with this key, and its blocks are re-encrypted using the AES-CTR mode (within the EPC). §.§ Optimizing the EPC Page Fault Handling Mechanism Figure <ref> shows the entire process of fetching a page along with its key, decrypting and verifying it. If there is no other concurrent EPC miss, this process continues without interruption. Assume another EPC page fault occurs before this page is fully loaded (i.e., before all the blocks of the mapped page have been loaded into the EPC). The current process of loading the EPC page must be run in parallel with processing the new request. We need to first store the status of the ongoing eviction/loading process so that loading can be resumed later from the current state. We introduce an additional hardware structure called the ESHR Table to store this information regarding the page loading and eviction status. Each entry in the table is an Eviction Status Holding Register (ESHR). Subsequently, the requested block of the new page along with its key is fetched into the EPC. Once the new page is fully loaded, we resume the process of loading the rest of the blocks of the page whose status was saved in an ESHR.
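Fetching "the key along with the page" is cheap precisely because the Key Table is indexed directly by the physical page number, so locating a page's encrypted key reduces to a single offset computation. The sketch below only restates the geometry implied by the numbers above (512 GB of physical memory, 4 KB pages, 16-byte keys); the constant and function names are our own.

    #include <stdint.h>
    #include <assert.h>

    #define PHYS_MEM_BYTES  (512ULL << 30)                   /* 2^39 bytes         */
    #define PAGE_BYTES      4096ULL                          /* 2^12 bytes         */
    #define KEY_BYTES       16ULL                            /* one encrypted key  */
    #define NUM_PAGES       (PHYS_MEM_BYTES / PAGE_BYTES)    /* 2^27 = 128M pages  */
    #define KEY_TABLE_BYTES (NUM_PAGES * KEY_BYTES)          /* 2 GB, i.e. ~0.4%   */

    /* Byte offset of the encrypted page key for a given physical address. */
    static inline uint64_t key_table_offset(uint64_t phys_addr)
    {
        uint64_t page_number = phys_addr / PAGE_BYTES;
        assert(page_number < NUM_PAGES);
        return page_number * KEY_BYTES;
    }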
ESHR Table: To keep track of which page blocks are loaded in the EPC, we maintain a 64-bit load status vector (LS vector) in the ESHR. We have 64 bits because there are 64 blocks in a page. When a block is loaded into the EPC, its corresponding bit in the LS vector is set to 1. Once all the blocks are loaded, the valid bit (V) is reset. The required page (LPage) is loaded in place of a corresponding evicted page (EPage), as indicated in the ESHR. The eviction bit (E) is set to 1 if there is an eviction along with the loading. The ESHR table stores 32 entries. Each entry in the ESHR table contains five fields: EPage represents the page ID of the evicted page; LPage represents the page ID of the page that is being loaded (newly mapped); LS Vector is a 64-bit vector that indicates the load status of LPage; E-bit is an eviction bit indicating that we need an eviction (of the EPage); and V-bit is a valid bit indicating whether the page is fully loaded or not. We present a dummy example in Figure <ref>, where each page comprises four blocks. The page blocks of the LPage, which are located in the eEPC, are loaded into the EPC at the position pointed to by the EPage. Execution Flow: §.§.§ Read Path When an EPC miss occurs for a read request, the data is fetched immediately from the eEPC region – since it lies on the critical path – and decrypted (Figure <ref> shows the steps). §.§.§ Write Path A write request does not lie on the critical path. In case of an EPC miss for a write request, we do the following: 1 If there are no other requests queued in the memory bus, the write request proceeds as usual. 2 If a previous operation is still ongoing, an entry for the request is made in the ESHR; the request is made to wait until the process completes. 3 If a read request arrives while this write is waiting, the read request is given higher priority. Additionally, if the two requests are for a memory region covered by the same subtree, they are grouped together such that, for the higher-level MACs of the subtree, a single verification operation suffices for both requests. §.§.§ Verification Path: The MAC verification of the newly mapped page is carried out concurrently by the MVC, while the page blocks are being moved to the EPC. The execution flow is not hindered by this process. Additionally, the remaining page blocks are loaded/evicted in parallel, while verification and MAC computation continue. Thus, while fetching data from the eEPC region, the execution can be restarted after just two memory reads (the requested page block and its key). This drastically reduces the latency of the critical path. As the overhead incurred during EPC page faults is comparable to the overhead of fetching data from the EPC itself, the overall impact of the large eEPC is very small. §.§.§ Communication with the OS We maintain a separate memory region with very few pages that has relaxed security guarantees. This memory region is used for communication between trusted enclaves or between an enclave and the untrusted OS. The enclaves use this memory as a scratchpad for sending system call arguments and receiving data from the OS and other enclaves. Similar to SGX-Client, this memory region can either have no security or it can be encrypted with a session key. §.§ Design Optimizations MAC verification does not lie on the critical path, but it still generates DRAM accesses. These accesses can delay regular accesses and also increase DRAM power consumption. Hence, there is a need to minimize such additional accesses.
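To make the following optimizations concrete, the sketch below locates a page's ancestors in the MAC forest for the representative arity-16×8 configuration described earlier; two pages that share a top-level ancestor lie in the same 512 KB subtree region, which is exactly the condition under which their MAC updates and verifications can be grouped. The names and the index arithmetic are illustrative; only the arities and region sizes are taken from the text.

    #include <stdint.h>
    #include <stdbool.h>

    /* Representative MAC forest: 2^27 leaf MACs (one per 4 KB page of the
     * 512 GB eEPC), arity 16 at the lower level and 8 above it, giving
     * 2^23 mid-level and 2^20 top-level MACs (8 MB, kept in the EPC). */
    #define LEAF_ARITY 16u
    #define MID_ARITY   8u

    typedef struct {
        uint32_t leaf;  /* index of the page's own MAC   (0 .. 2^27 - 1) */
        uint32_t mid;   /* index of its level-1 parent   (0 .. 2^23 - 1) */
        uint32_t top;   /* index of its subtree root     (0 .. 2^20 - 1) */
    } mac_chain_t;

    static mac_chain_t mac_chain_for_page(uint32_t page_number)
    {
        mac_chain_t c;
        c.leaf = page_number;
        c.mid  = page_number / LEAF_ARITY;     /* 16 pages  = 64 KB region   */
        c.top  = c.mid / MID_ARITY;            /* 8 x 64 KB = 512 KB region  */
        return c;
    }

    /* Two accesses can be clubbed when they fall in the same 512 KB region,
     * i.e., when they share a top-level ancestor in the MAC forest. */
    static bool same_subtree_region(uint32_t page_a, uint32_t page_b)
    {
        return mac_chain_for_page(page_a).top == mac_chain_for_page(page_b).top;
    }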
§.§.§ Optimizing MAC Verification We maintain a small cache in the TCB that stores r recently accessed top-level MACs of the MAC forest (i.e., the roots of the recently accessed subtrees in the MAC forest). Each MAC at the top level of the forest is the root MAC for a 512 KB memory region in the eEPC region. This top-level MAC, which is stored in the EPC, needs to be retrieved from the main memory while performing MAC verification of any of the pages belonging to the region covered by this subtree root (defined as its subtree region). Hence, we save one DRAM memory access by caching it. §.§.§ Optimizing MAC Forest Updates Updates to the MAC forest are required when pages are evicted from the EPC. The page to be evicted is selected based on an LRU (least recently used) mechanism (stored in the EPC's metadata, EPCM). We additionally store the ID of the page that is next in line for eviction in an evict register (this computation is off the critical path). If both the currently evicted page and the next page to be evicted lie in the memory region protected by the same subtree of the MAC forest, then the MAC updates to the higher levels of the subtree for both pages can be clubbed together – this reduces the number of memory accesses at the higher levels of the MAC forest. §.§.§ Corner Cases 1 If speculative execution is in progress when a system call occurs, the processor first waits for the MAC verification to complete before taking any action. 2 Without waiting to pair a write with an eviction (for reducing DRAM writes), we finish all verification operations as soon as possible. § EVALUATION We compare the performance of with SGX-Client as well as the state-of-the-art work (DFP and Penglai) by simulating these systems in our cycle-approximate simulator Tejas <cit.>. Performance is proportional to the reciprocal of the simulated execution time. The system specifications used for evaluation are the same as those used during characterization (Table <ref> from <ref>). The unsecure system is considered to be the baseline. §.§ Performance Analysis In most workloads, the memory accesses are not very irregular. Hence, the frequency of EPC page faults is low in general. The exceptions arise in the cases where the memory accessed is very large and the pattern of page accesses is random. These benchmarks experience an increased number of EPC page faults and incur a higher memory traffic overhead for integrity verification. Consequently, the benchmarks with a larger number of page faults show a greater degradation in performance, as we can see in Figure <ref>. SGX exhibits a drastic degradation in performance compared to the baseline (83%). Comparing the performance of the related work with that of SGX, we see that DFP shows a 2% improvement in performance and Penglai performs 47% better than DFP. shows improved performance compared to both DFP and Penglai (57% and 10%, respectively). In 5/7 of the workloads, performs much better than all the others and exhibits the lowest degradation in average performance (24%) with respect to the baseline. On average, exhibits a 59% improvement in performance vis-a-vis SGX-Client. §.§ Detailed Analysis §.§.§ Performance Degradation The EPC page fault rate, in terms of the percentage of EPC misses with respect to LLC misses, is shown in Figure <ref>. This map gives us an estimate of the spatial locality in the benchmark suite. The benchmarks that access more memory pages are associated with a higher EPC page fault percentage.
The access pattern (randomness) of the pages also affects the EPC page faults. The benchmarks with a high percentage of EPC misses with respect to LLC misses, like deepsjeng, experience a drastic degradation in performance (88% degradation w.r.t. the baseline in SGX), as shown in Figure <ref>. On the other hand, leela, which has a very low EPC miss rate w.r.t. LLC misses, experiences a very low degradation (8% degradation w.r.t. the baseline in SGX). §.§.§ Optimizations Figure <ref> shows the evictions per EPC miss in the various benchmarks for the different models. In systems with an EPC, the eviction rate directly affects system performance. Note that the number of evictions could be less than 1 if we have already created space for the page using prefetching (as in DFP). Specifically, the eviction rate varies (increases/decreases) in DFP, in comparison to SGX, because of its predictions/mis-predictions. In deepsjeng and mcf, the evictions increase for DFP because of mispredictions, whereas in xz, asynchronous preloading of correctly predicted faulting pages reduces the evictions in the critical path. DFP thus shows better performance than SGX in the case of xz even though both SGX and DFP impose the same penalty for evictions. However, in , the system performs much better than both SGX and DFP in all the benchmarks (even though it has the same eviction rate as SGX), including deepsjeng (42%), mcf (65%) and xz (29%), because it drastically reduces the penalty associated with EPC misses and evictions. The page access pattern also affects the Merkle tree access overheads in both SGX and DFP. In Penglai, it impacts the MMT access overheads. The additional bandwidth associated with integrity verification varies depending on this pattern, as can be seen in the case of parest, where Penglai performs worse than SGX by 3% and worse than DFP by 9% even though it does not have an EPC (and the associated EPC overheads). In the case of , the Merkle tree protects the counters of only those pages that reside inside the EPC and therefore has a fixed size. This ensures that the Merkle tree overheads are minimized in . The result of these optimizations is evident in the performance improvement seen in parest for (23% over SGX, 17% over DFP and 26% over Penglai). §.§ Optimizations for Reducing MAC Accesses The MACs are accessed during the MAC verification and MAC update phases. These processes require fetching MACs from different levels of the MAC forest (stored in memory). Although these accesses are not on the critical path, every memory access increases DRAM traffic and DRAM power. We introduced optimizations in the design to reduce the number of additional memory accesses required to perform these operations. §.§.§ Optimizing MAC Updates We club the MAC updates for the higher levels of the subtrees in the MAC forest during consecutive evictions of pages that belong to the same subtree region. This reduces the number of memory accesses required for updating the MACs in the higher levels of the MAC forest. Figure <ref> shows the frequency of clubbing of the updates observed in our workloads (an average of 46% clubbing was observed). This plot depicts the spatial locality of the benchmarks within a subtree region (a 512 KB memory region). Note that clubbing of updates is possible only if consecutively evicted pages fall in the same region. §.§.§ Optimizing MAC Verification We introduced a small cache in the TCB to store the 8 most recently accessed MACs from the top-most level of the MAC forest.
We attempt to leverage any locality of accesses that might exist in the workloads for the pages secured by the subtrees of the cached top-level nodes. This reduces the number of memory accesses required to retrieve the top-level nodes from the EPC for MAC verification. Figure <ref> shows the cache hit rates in our design for various workloads. The average hit rate is 81.4%. These optimizations leverage the temporal and spatial locality of EPC misses in order to reduce the number of memory accesses to the higher-level nodes in the MAC forest. If we observe closely, both these figures show a similar pattern – they depict the extent of locality in each workload. Comparing the pattern with the DRAM traffic reduction (shown in Figure <ref>), we see that the benchmark mcf, which has a high MAC cache hit rate and clubbing frequency, exhibits a larger decrease in DRAM traffic (24%) compared to xalanx (16%), which has a low cache hit rate and clubbing frequency. We observe that by employing both these optimizations, the overall number of additional memory accesses required to retrieve the MACs reduces by 23.5% in our workloads. §.§ Sensitivity Analysis The hyperparameters in our system comprise the number of levels and the arity of the subtrees in the MAC forest, which are used during the integrity verification of the pages. We compute an 8-byte MAC for every 4 KB page. A 512 GB memory contains 2^27 MACs (1 MAC per page), which is the number of nodes in the lowest level of the MAC forest. We set the hyperparameters – q and p – such that the performance overhead is minimized. Subtree Level Analysis (q): As we have observed in SGX and Penglai, the number of levels in the integrity trees/forests correlates very well with the memory access overheads. Additionally, the number of levels also influences the storage overheads. Keeping both in mind, we chose q as 3 because it maximized our performance. Subtree Arity Analysis (p): The storage and maintenance of the MACs is another concern for the MAC forest. The arity of the subtrees also dictates the number of memory accesses required while verifying or updating the MACs in the subtree region. Thus, we decided to keep the arity small. We set the arity of the higher level to half of that of the lower level to reduce the frequency of updates in the higher-level nodes. Additionally, we plotted the storage overhead for different values of the hyperparameter p (see Figure <ref>). The storage overhead shows a sharp decline in the beginning, after which the descent is more gradual. We thus decided to select the arity close to the knee of the curve and set it to 16 for the lower level and 8 for the level above it. The total storage space required for our forest is 1096 MB (for all three levels of the tree). Our Merkle Tree (for the EPC) has arity 32 × 32 × 32 and a size of 2.06 MB. Thus, the combined storage overhead of both these structures for 512 GB memory in our design sums up to 1098.06 MB. This is 8 times smaller than what the SGX-Client Merkle Tree would require (8322.06 MB) for securing 512 GB memory. §.§ Security Analysis ensures robust security guarantees (ACIF) across the complete system. Note that the EPC provides all four guarantees because it uses the same scheme as SGX-Client. Let us thus focus on the eEPC. ▸ Authenticity (A) For the authentication of the enclave pages in the eEPC, the key contains a HW-specific key (device-level), a boot-specific component used to seed the PRNG, and an enclave-specific ID (enclave-level).
This ensures that only the enclave that owns the page can access it. The MAC check for any other enclave including the OS will fail. They will not have a valid enclave id to construct the key that is needed to access (read/write) the page and recompute the MAC for verification. Thus, we cryptographically ensure authenticity. ▸ Confidentiality (C) is guaranteed by encrypting the data using standard AES-CTR mode encryption for the EPC and AES-ECB mode encryption for eEPC regions. This ensures that only the writing enclave can decrypt and access the original plaintext data. ▸ Integrity (I) In order to protect the data integrity of the eEPC, we maintain page-level MACs for each eEPC page. These MACs are protected with a multi-level MAC forest whose top-level nodes are stored in the EPC. Hence, any integrity violation will be caught in the MAC verification phase. The integrity of the Key Table (stored in the EPC) is established using the key to encrypt the hash of the page and construct the lowest level MAC. ▸ Freshness (F) is guaranteed by generating a new key every time a page is written back to the eEPC. We use a PRNG to generate the new key along with a bunch of other fields. We consider this to be secure enough given that we don't expect the same key to repeat in any practically relevant duration of time. However, if more security is desired then a global counter can be used. Security Analysis of the Page Table – The page table needs to be protected in designs that have large unrestricted unsecure memories or in cases where enclave isolation is not guaranteed (e.g. Intel SGX and Penglai). We argue that in , we do not need this kind of protection because of the following reasons. Here are the possible attacks that the OS can mount through the page table. A Secure → Unsecure Mapping: This is not relevant in our case because in our system the entire memory is protected. However, this is a genuine problem in systems like Intel SGX and ARM TrustZone because they have large unsecure memories. B Unsecure → Secure Mapping: This cannot happen for the same reason outlined in the previous point. C Secure → Secure Mapping: Another possibility is when the OS maps the secure page of an enclave to the secure region of another enclave. The unauthorized enclave cannot read or write the contents of the page since the enclave ID is a part of the key. The MAC check will fail and this will be a catastrophic event. An OS can only create an enclave or fully tear it down – it cannot access any page within it. For maintenance of enclave IDs, we use the same system as SGX-Client. § RELATED WORK The size of the enclaves can be enhanced using two main approaches - by using bespoke secure systems designed for server applications or by using certain optimization techniques to enhance the enclave size (refer to Table <ref>). §.§ Bespoke Systems Most state-of-the-art secure servers focus on virtual machine (VM) isolation. The TCB also includes the guest VM along with its software stack. AMD's SEV-SNP (Secure Encrypted Virtualization - Secure Nested Paging) <cit.> supports both main memory encryption and encrypted virtual machines (VMs). It does not provide freshness or protection against some physical attacks – attacking the DDR bus while the VM is actively running. Intel TDX (Trust Domain Extensions)<cit.> is similar and is vulnerable to replay attacks. ARM's recent Confidential Compute Architecture (ARM CCA)<cit.> is also based on similar secure virtualization technologies. 
It introduces Realms, which provide isolated memory for secure execution, and a page-locking mechanism to support large enclaves (realms). However, CCA does not employ encryption and cannot defend the system against physical attacks like cold boot attacks, live probing or replay attacks. A different approach is adopted in Penglai <cit.>, which is a software-hardware co-designed system that creates dedicated hardware augmentations on a RISC-V core. There is one large EPC – protected by counters and a single Merkle tree. Its recipe for scalability is to mount sub-trees of the Merkle tree on demand – these are called mountable Merkle trees (MMTs). It furthermore caches a few MMT roots in the TCB. In the event of an LLC miss, if the MMT root is found in the MMT cache, then the counters can be verified with a few additional memory accesses. However, if there is a miss, then the penalty is quite large. Note that all of this is on the critical path. Hence, the read latencies are quite high (something that we have seen in our experiments as well). Additionally, it relies on dedicated HW support to ensure that the memory region that stores the MMTs is not tampered with. This is not possible in our threat model, where we allow the attacker to modify any memory location at will. Summary: These systems either do not provide all four ACIF guarantees or are not compliant with our threat model. §.§ Enclave Size Enhancement via Memory System Optimizations CoSMIX <cit.> proposes a software cache to store evicted EPC pages. However, providing the same level of security in software as provided by hardware is seldom possible <cit.>. Hence, solely relies on hardware solutions. Liu et al. <cit.> (DFP) attempt to decrease the number of EPC page faults on the critical path by prefetching pages into the EPC. They leverage sequential access patterns and use a list-based prefetcher. We have compared our work with DFP in Section <ref> and shown large performance gains, mainly because the accuracy of the DFP predictor is low. § CONCLUSION We introduced three new ideas in this paper, which allowed us to solve a problem that has been known for a long time but became a matter of great concern once Intel deprecated SGX-Client in 2021. Sacrificing freshness is the industry standard as of today, mainly because providing it requires maintaining counters for every block and a Merkle tree, which are not scalable by design. First, we leveraged the fact that the catastrophic nature of a security verification failure can be used to do a little bit more speculation and take verification totally off the critical path. Second, the state-of-the-art has put its full might behind protecting the integrity of aspects of the key such as the counters. However, we opted for a diametrically different approach, where a read arrives at the processor along with the key that was supposedly used to encrypt it. This allowed us to create a MAC forest where we could verify the integrity of the key and the data together in a delayed fashion. These three ideas, along with some design optimizations to reduce DRAM accesses, allowed us to achieve a 10% speedup over our nearest competitor Penglai and a 59% speedup over a vanilla SGX-Client implementation.
http://arxiv.org/abs/2407.12948v1
20240717182958
Concentration and moment inequalities for heavy-tailed random matrices
[ "Moritz Jirak", "Stanislav Minsker", "Yiqiu Shen", "Martin Wahl" ]
math.PR
[ "math.PR", "math.ST", "stat.TH", "60B20, 60E15, 62H25, 15A42" ]
§ ABSTRACT We prove Fuk-Nagaev and Rosenthal-type inequalities for the sums of independent random matrices, focusing on the situation when the norms of the matrices possess finite moments of only low orders. Our bounds depend on the “intrinsic” dimensional characteristics such as the effective rank, as opposed to the dimension of the ambient space. We illustrate the advantages of such results in several applications, including new moment inequalities for sample covariance matrices and the corresponding eigenvectors of heavy-tailed random vectors. Moreover, we demonstrate that our techniques yield sharpened versions of the moment inequalities for empirical processes. 2010 Mathematics Subject Classification: 60B20, 60E15, 62H25, 15A42. § INTRODUCTION Fuk-Nagaev inequalities <cit.> generalize exponential deviation inequalities for the sums of independent random variables, such as Bernstein's, Prokhorov's and Bennett's inequalities, to the case when the random variables satisfy minimal integrability conditions. For example, a corollary of Fuk and Nagaev's results is the following bound: for a sequence of independent, centered random variables X_1,…,X_n such that max_k E|X_k|^p<∞ for some p≥ 2, P( | ∑_k=1^n X_k| ≥ t ) ≤ 2exp( -C_1(p) t^2/(∑_k=1^n EX_k^2) ) + P( max_k |X_k| > t/4 ) + C_2(p)(∑_k=1^n E|X_k|^p/t^p)^2. <cit.> describes the applications of such results to the laws of large numbers and moment inequalities. Later, <cit.>, <cit.> and <cit.>, among others, improved the original estimates by Fuk and Nagaev in several ways: first, the inequalities were extended to martingales and Banach space-valued random variables, and second, the constants were sharpened. For example, inequalities due to <cit.> hold with C_1(p)=1/2+δ for any δ>0 and C_2(p) of order p^p. The latter fact is important as the order of growth of these constants translates into the tail behavior of |∑_k=1^n X_k|. The goal of this work is to prove a version of the Fuk-Nagaev inequality for the sums of independent random matrices and use it to sharpen existing moment inequalities. Let W_1,…,W_n∈ C^d× d be a sequence of independent self-adjoint[It is well-known that the case of general rectangular matrices reduces to this one via the so-called “Hermitian dilation,” see <cit.>.] random matrices such that EW_k=0_d× d for all k, where the expectation is taken element-wise. Assume that for all k, ‖W_k‖≤ U with probability 1, where ‖·‖ stands for the operator (spectral) norm. A line of work by <cit.> culminated in the following version of the so-called matrix Bernstein inequality: for all t>0, P( ‖∑_k=1^n W_k‖ ≥ t ) ≤ 2d exp( -(t^2/2)/(σ^2 + Ut/3) ), where σ^2 = ‖∑_k=1^n E W_k^2‖. An attractive feature of this inequality (as opposed to, say, Talagrand's concentration inequality <cit.>) is that it yields a bound for E‖∑_k=1^n W_k‖, namely, that E‖∑_k=1^n W_k‖ ≤ K( σ√(log(d)) + U log(d)) for some absolute constant K>0.
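For the reader's convenience, we spell out the Hermitian dilation mentioned in the footnote above; this is the standard construction and is recalled here only as an illustration of how results for self-adjoint matrices transfer to rectangular ones.

    % Hermitian dilation of a rectangular matrix W in C^{d_1 x d_2}
    \[
      \mathcal{H}(W) \;=\;
      \begin{pmatrix}
        0     & W \\
        W^{*} & 0
      \end{pmatrix}
      \in \mathbb{C}^{(d_1+d_2)\times (d_1+d_2)},
      \qquad
      \mathcal{H}(W) = \mathcal{H}(W)^{*},
      \qquad
      \bigl\| \mathcal{H}(W) \bigr\| = \| W \|,
    \]
    % so bounds for sums of independent self-adjoint matrices apply to
    % rectangular summands W_k after replacing each W_k by H(W_k).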
Tropp's results have been extended in two directions: first, it was shown by <cit.> that the dimension factor d can essentially be replaced by the so-called effective rank r(∑_k=1^n E W_k^2 ), where r(A) := trace(A)/‖A‖ for a positive definite matrix A. In particular, this version of Bernstein's inequality is applicable in the context of random Hilbert-Schmidt operators acting on Hilbert spaces. Second, the boundedness assumption was relaxed by <cit.> to the requirement that max_k ‖ ‖W_k‖ ‖_ψ_1 < ∞, where the ψ_1 norm of a random variable Z is defined via ‖Z‖_ψ_1 = inf{ r>0: E e^|Z|/r ≤ 2 }. Finally, <cit.> showed that P( ‖∑_k=1^n W_k‖ ≥ t ) ≤ c_1 r(V_n^2) exp( -c_2 t^2/(‖V_n^2‖ + Rt) ) for all t ≥ c_3( ‖V_n^2‖^1/2 + M ), where V_n^2 is any matrix satisfying V_n^2 ≽ ∑_k=1^n E W_k^2 and R = ‖ max_k ‖W_k‖ ‖_ψ_1. Our results allow us to relax the integrability assumptions even further and cover the case of heavy-tailed random matrices, namely, random matrices such that E‖W‖^p = ∞ for some p>0 (however, we are still able to recover the known bounds for the “light-tailed” random matrices). For example, <Ref> below implies that for all p≥ 1, (E ‖∑_k=1^n W_k‖^p)^1/p ≤ K ( ‖V_n^2‖^1/2√(q) + q E max_k ‖W_k‖ + p/log(ep) (E max_k ‖W_k‖^p)^1/p ), where q=log(r(V_n^2))∨ p and K is an absolute constant. This inequality sharpens previously known results of this type by <cit.>: for example, a version of Rosenthal's inequality by <cit.> states that (E ‖∑_k=1^n W_k‖^p)^1/p ≤ K ( ‖V_n^2‖^1/2√(r) + r E^1/p max_k ‖W_k‖^p ), where r=log(d)∨ p. The fact that our bound depends on r(V_n^2) instead of d immediately allows one to extend it to Hilbert-Schmidt operators acting on Hilbert spaces. Finally, let us remark that the order of the constants in the inequality stated above is optimal: in the scalar case d=1, it is known <cit.> that, without any additional assumptions, the best order of C(p) in the inequality E^1/p| ∑_k=1^n W_k |^p ≤ C(p)( ( ∑_k=1^n EW_k^2)^1/2 + ( ∑_k=1^n E|W_k|^p )^1/p ) is C(p) = Kp/log(p), while <cit.> showed that C_1(p) = K√(p) and C_2(p) = Kp are the best possible in the inequality of the form E^1/p| ∑_k=1^n W_k |^p ≤ C_1(p)( ∑_k=1^n EW_k^2)^1/2 + C_2(p) E^1/p max_k |W_k|^p. It is clear that our results yield a sharper version of the inequality (<ref>) for large p whenever E max_k |W_k| is much smaller than E^1/p max_k |W_k|^p. §.§ Organization of the paper. The rest of the exposition is organized as follows: we present the main results and their proofs in sections <ref> and <ref>. Applications of the developed techniques to empirical processes are discussed in section <ref>, while implications for the problems of matrix subsampling, covariance estimation and eigenvector estimation are described in sections <ref>, <ref> and <ref>, respectively. Finally, section <ref> contains the required background and proofs of the lemmas that were omitted from the main exposition. § MAIN RESULTS In this section we state the new concentration and moment bounds – Theorems <ref> and <ref>. The required notation will be introduced on demand. Let us remark that throughout the paper, the values of the constants K, c, C(·) are often left unspecified and can change from line to line; we use K and c to denote absolute constants and C(·) to denote constants whose value depends on the parameters in brackets. §.§ Fuk-Nagaev-type inequality The following proposition is the key technical ingredient that will serve as the starting point for the derivation of the main results. Therefore, we state it separately.
Everywhere below, M=max_k=1,…,nW_k, Q_1/2(Z) stands for the median of a real-valued random variable Z, and _1,…,_n denote independent symmetric Bernoulli random variables that are independent from W_1,…,W_n. Finally, ·_2 is the Euclidean norm. Let W_1, …, W_n∈C^d× d be a sequence of centered, independent, self-adjoint random matrices. Let U>0, and assume that V_n^2 satisfies V_n^2≽∑_k E W_k^2 I{W_k≤ U}. Finally, set σ_U^2=V_n^2. Then, whenever t/2≥σ_U ∨ U/3∨sup_v_2=1 Q_1/2ł(ł(∑_k=1^n W_k )̊vv)̊, the following inequality holds: P(∑_k=1^n W_k > 12t) ≤ 64r(V_n^2) exp[-(t/2)^2/σ_U^2 + tU/6] +16P(∑_k=1^n _kW_k 1{W_k > U}>t/2) P(∑_k=1^n _k W_k>t) + 4P(M > t). If the random matrices are symmetrically distributed (that is, W_j and -W_j are equidistributed for all j), then P(∑_k=1^n W_k > 3t) ≤ 16r(V_n^2) exp[-(t/2)^2/σ^2 + tU/6] +4P(∑_k=1^n W_k 1{W_k > U}>t/2) P(∑_k=1^n W_k>t) + P(M > t) under the assumption that t/2≥σ_U ∨ U/3. If W_1,…,W_n have symmetric distribution, then P(∑_k=1^n W_k>t) ≥1/2M>t in view of Lévy's inequality <cit.>. This shows that the quantile U is necessary in the lower bound for t. The term σ_U is also known to be necessary – for instance, take W_j = ξ_j A_j where ξ_1,…,ξ_n are i.i.d. N(0,1) random variables and A_1,…,A_n are fixed self-adjoint matrices <cit.>. Let us reduce the general case to the situation when W_1,…,W_n are symmetric. To this end, it suffices to apply Lemma 2.3.7 in the book by <cit.>: it implies that whenever 6t≥sup_v_2=1 Q_1/2ł(ł(∑_k=1^n W_k )̊vv)̊, P(∑_k=1^n W_k>12t) ≤ 4P(∑_k=1^n _k W_k>3t). Obviously, this inequality holds without any assumptions if W_1,…,W_k are symmetrically distributed. Next, in view of Hoffmann-Jørgensen inequality (Proposition <ref>), P(∑_k=1^n _k W_k>3t) ≤ 4 P(∑_k=1^n _k W_k>t)^2 + P(M > t). Given U>0, we define, for each k=1,…, n, W_k:=_kW_k 1{W_k≤ U} and Δ_k:=_kW_k 1{W_k > U}. Clearly, ∑_k=1^n _kW_k≤∑_k=1^n W_k + ∑_k=1^n Δ_k, all the random matrices W_k, Δ_k are symmetric, and P(∑_k=1^n _k W_k >t)≤P(∑_k=1^n W_k>t/2) + P(∑_k=1^n Δ_k>t/2) :=A_1+A_2. Therefore, P(∑_k=1^n _k W_k>3t) ≤ 4A_1 + 4A_2 P(∑_k=1^n _k W_k>t) + P(M > t). The first term on the right-hand side of inequality (<ref>) can be bounded directly via <Ref>. Let V_n^2 satisfy V_n^2≽∑_k E W_k^2 I{W_k≤ U}. Then for σ_U^2 = V_n^2 and t such that t/2≥σ_U + U/3, 4A_1≤ 16 r(V_n^2) exp[-(t/2)^2/σ_U^2 + tU/6]. The result follows. We are now ready to deduce the first main result of the paper. Let W_1, …, W_n∈C^d× d be a sequence of centered, independent, self-adjoint random matrices, and assume that EM^p < ∞ for some p>1. Moreover, suppose that V_n^2 satisfies V_n^2≽∑_k E W_k^2, and set σ^2=V_n^2. Then P(∑_k=1^n W_k > 12t)≤ K( r(V_n^2) exp[-(t/2)^2/σ^2 + 4t EM] + P(M≥ t) + ((p/log (ep))^pEM^p/t^p)^2) whenever t≥ 2(σ∨E M/3) and where K is an absolute constant. We will continue using the notation introduced in the proof of Proposition <ref>. First of all, note that sup_v_2=1 Q_1/2ł(ł( ∑_j=1^n W_j)̊vv)̊≤σ√(2) for the choice of σ stated above. Next, plugging the inequality P(∑_k=1^n _k W_k >t)≤P(∑_k=1^n W_k>t/2) + P(∑_k=1^n Δ_k>t/2) into relation (<ref>) with V_n^2 and σ^2 specified in the conditions of the theorem, we deduce that P(∑_k=1^n W_k>12t) ≤ 128r(V_n^2) exp[-(t/2)^2/σ^2 + tU/6] + 4P(M > t) + 16 ł(P(∑_k=1^n Δ_k>t/2))̊^2. Next, we apply Markov's inequality to get the bound P(∑_k=1^n Δ_k>t/2) ≤E∑_k=1^n Δ_k^p/(t/2)^p. <Ref> implies that for all p ≥ 2, ł(E∑_k=1^n Δ_k^p)̊^1/p≤ K p/log(p)(E∑_k=1^n Δ_k + (Emax_k≤ nΔ_k ^p )^1/p). 
Moreover, if we set U:=24 EM, then <Ref> applies with q=1 and t_0=0, the latter due to the inequality P(∑_k=1^n Δ_k>0)≤P(M>U)≤EM/U≤ 1/24. Therefore, E∑_k=1^n Δ_k≤ 6Emax_k≤ nΔ_k≤ 6Emax_k≤ n W_k≤ 6(Emax_k≤ nW_k^p)^1/p, implying that P(∑_k=1^n Δ_k>t/2)≤ł(Kp/log(ep))̊^p EM^p/t^p. Note that whenever 0 < p <2, the same result still holds since ł(E∑_k=1^n Δ_k^(1)^p)̊^1/p≤ K E^1/pmax_k≤ nW_k^p in view of <Ref>. The conclusion follows. In principle, one can increase the power in the last term on the right-hand side of (<ref>) from 2 to 2^k by applying Hoffmann-Jørgensen inequality k times. §.§ Moment inequalities Now we will establish moment inequalities by integrating the tail estimates of Proposition <ref>. As before, assume that V_n^2 satisfies V_n^2≽∑_k E W_k^2 and let σ^2 = V_n^2. Let W_1, …, W_n∈C^d× d be a sequence of centered, independent, self-adjoint random matrices, and let Q_p = inf{s>0: P(∑_k=1^n Δ_k>s/2)≤1/83^-p}. Then for all p ≥ 1, (E∑_k=1^n W_k^p)^1/p≤ K (σ√(q)+qQ_1 + Q_p + E^1/p M^p ), where q=log(r(V_n^2))∨ p and K>0 is an absolute constant. In particular, we have the following “closed-form” Rosenthal-type moment inequalities: (E∑_k=1^n W_k^p)^1/p ≤ K (σ√(q)+q EM + p/log (ep) (E M^p)^1/p) and (E∑_k=1^n W_k^p)^1/p ≤ K (σ√(q)+log(r(V_n^2)) EM + pM_ψ_1). The well-known relation M_ψ_1≤ Klog(n)max_k≤ nX_k_ψ_1 <cit.> could be useful when combined with the inequality (<ref>). Observe that E ł∑_k=1^n W_k^p ≤ 2^p E ł∑_k=1^n _k W_k^p = 2^p p∫_0^∞ t^p-1ł∑_k=1^n _k W_k≥ tdt = 6^p · p∫_0^∞ t^p-1ł∑_k=1^n _k W_k≥ 3tdt, where we used the symmetrization inequality <cit.> on the first step and the integration by parts formula on the second step, and the linear change of variables on the third step. To estimate the last integral, we choose U = Q_1/2 and apply inequality (<ref>) in the range t≥ t_0:= 2(σ∨ U/3) to deduce that E ł∑_k=1^n W_k^p ≤ 12^p(σ∨ U/3)^p + 6^p p( ∫_0^∞ t^p-1ł(16r(V_n^2 ) exp[-(t/2)^2/σ^2 + tU/6] ∧ 1)̊dt + ∫_0^∞ t^p-1M≥ t dt + ∫_0^∞ 4t^p-1P(∑_k=1^n Δ_k>t/2)P(∑_k=1^n _k W_k>t) dt ). Recalling the definition of Q_p, one easily checks that 6^p p∫_0^∞ 4t^p-1P(∑_k=1^n Δ_k>t/2)P(∑_k=1^n _k W_k>t) dt ≤ 4 · 6^p Q_p^p + 2^p-1 E ł∑_k=1^n _k W_k^p. Combined with the first line of display (<ref>), this inequality implies that 2^p-1 E ł∑_k=1^n _k W_k^p ≤ 12^p(σ∨ U/3)^p + 6^p EM^p + 4 · 6^p Q_p^p + 6^p p ∫_0^∞ t^p-1ł(16r(V_n^2) exp[-(t/2)^2/σ^2 + tU/6] ∧ 1)̊dt. Application of <Ref> to the last integral yields the inequality (<ref>). Finally, let us prove the inequalities (<ref>) and (<ref>). To this end, we need to obtain the upper bounds for the quantities Q_1 and Q_p. To estimate Q_1, recall the inequality (<ref>) which implies that Q_1/2≤ 24 EM. Similary, by (<ref>) and Markov's inequality, we deduce that Q_p ≤ K p/log(ep) E^1/pM^p. If, on the other hand, M_ψ_1<∞, then the second inequality of Theorem <ref> combined with the bound (<ref>) and the well-known estimate E M ≤ K M_ψ_1 imply that ł∑_k=1^n Δ_k_ψ_1≤ł M_ψ_1, whence ∑_k=1^n Δ_k>t≤ e^-Ct/M_ψ_1 and Q_p ≤ C' pM_ψ_1. Let us remark that the inequality (<ref>) can be also obtained by integrating the tail bound (<ref>) directly. Next, we deduce a version of the previous result that holds for sums of nonnegative definite random matrices. Let W_1,…,W_n∈ C^d× d be a sequence of independent, nonnegative definite matrices and let M=max_j=1,…,nW_j. Moreover, let A_n:=∑_j=1^n E W_j. 
Then for all p≥ 1, (E∑_k=1^n W_k^p)^1/p ≤ K ( A_n +q E M + p/log (ep) (E M^p)^1/p) and (E∑_k=1^n W_k^p)^1/p ≤ K ( A_n +log(r(A_n))E M +pM_ψ_1), where q=log(r(A_n))∨ p and K>0 is an absolute constant. In view of Minkowski's inequality followed by the symmetrization inequality, (E∑_k=1^n W_k^p)^1/p≤A_n + (E∑_k=1^n W_k - EW_k^p)^1/p ≤A_n + 2 E (E∑_k=1^n _k W_k ^p)^1/p, where _1,…,_n are i.i.d random signs independent from W_1,…,W_n. To estimate the second term in the sum above, we will apply the inequality (<ref>) together with the following choice of V_n^2: recall that in (<ref>), we set U=24 EM and note that for all j, E ł[ W_j^2 I{W_j ≤ 24 EM }]̊≼ 24 EM· EW_j since W_j ≽ 0 with probability 1. This relation implies that we can set V_n^2 = 24 E M· A_n, whence r(V_n^2) = r(A_n). Moreover, σ√(q) = √(24q A_n EM)≤A_n + 6q EM, hence (<ref>) yields the bound (E∑_k=1^n _k W_k ^p)^1/p≤ K'ł( A_n + q EM + p/log(ep)ł( EM^p)̊^1/p)̊, implying the claim. The second inequality is obtained in a similar manner where the inequality (<ref>) is used in place of (<ref>). §.§ Inequalities for empirical processes The only part of the previous arguments that exploits the “non-commutative” nature of the random variables is the application of Matrix Bernstein's inequality. In this section, we state the results produced by our method for general empirical processes. The only required modification is the application of Bousquet's version of Talagrand's concentration inequality (<ref>) in place of Bernstein's inequality. We state only the versions of Theorems <ref> and (<ref>) and remark on the key differences. The required changes to the proofs are minimal hence we avoid the details. Let F be a set of measurable real-valued functions defined on some measurable space S and let X_1,…,X_n be i.i.d copies of an S-valued random variable X. Assume that Ef(X)=0 for all f∈ F. Let us set F(x):=sup_f∈F|f(x)|, M = max_k≤ n F(X_k), and suppose that EM^p<∞ for some p≥ 2. Denote Z=sup _f ∈ℱ∑_k=1^n f(X_k); for simplicity, we will assume that Z is measurable. Finally, let σ_∗ satisfy σ_∗^2≥ nsup_f∈FEf^2(X). For example, in the main case of interest of this paper, ł∑_k=1^n W_k = sup_v_2=1ł|ł( (∑_k=1^n W_k) vv^T)̊|̊ corresponding to F = ł{ f_v(·) = ł|ł((·)vv^T)̊|̊, v_2=1}̊. The following result can be viewed as an extension of Adamczak's inequality <cit.> to the heavy-tailed case. For all t≥√(2)σ_∗, P(Z > 24ł( EZ + t)̊)≤ K( exp(-t^2/2σ_∗^2 + 64t EM) + P(M≥ t) + (p/log (ep))^2p(EM^p/t^p)^2) where K is an absolute constant. Integrating this tail bound, we obtain the following moment inequalities. For all p≥ 1, (EZ^p)^1/p≤ K ( EZ + σ_∗√(p)+p EM + p/log (ep) (E M^p)^1/p). Let us compare this result with the bound of Theorem <ref>. When applied to the sums of random matrices, we get the inequality (E∑_k=1^n W_k^p)^1/p≤ K (E∑_k=1^n W_k + σ_∗√(p)+p EM + p/log (ep) (E M^p)^1/p) The main difference is that this bounds includes E∑_k=1^n W_k on the right-hand side. However, for large values of p, it is better than (<ref>) since σ^2_∗ can be much smaller than ł∑_k E W_k^2. Moreover, Theorem <ref> improves upon the inequality proved by <cit.>: the latter states that for all p≥ 2, (EZ^p)^1/p≤ K ( EZ + σ_∗√(p)+ p (E M^p)^1/p). The estimate provided by (<ref>) is better for large values of p if EM is smaller than E^1/p M^p. § APPLICATIONS In this section, we apply the inequalities to get improved bounds to two classical problems - matrix subsampling and covariance estimation. §.§ Norms of random submatrices. 
Let B be a self-adjoint matrix, and let δ_1,…,δ_d be i.i.d. Bernoulli random variables with Eδ_1 = δ∈ (0,1). Define R = diag(δ_1,…,δ_d). We are interested in the spectral norm of the matrix BR formed by the columns B_i, i∈ I of B with indices corresponding to the random set I={ 1≤ i≤ d: δ_i = 1}. This problem has previously been studied by <cit.> and <cit.> who showed that E BR^2 ≤ Kł( δB^2 + log(nδ)/⌊δ^-1⌋∑_k=1^⌊δ^-1⌋łB_(k)_2^2 )̊ and E BR^2 ≤ 1.72ł( δB^2 + logł(2B^2_F/B^2)̊łB_(1)_2^2 )̊ respectively, where B_(j) denotes the column with the j-th largest norm and K is a numerical constant. Note that łB_(1)_2^2≥1/⌊δ^-1⌋∑_k=1^⌊δ^-1⌋łB_(k)_2^2 but it is possible that log(nδ) > logł(2B^2_F/B^2)̊ when the matrix B has small “stable rank” srank(B) := B^2_F/B^2. We will show below that Tropp's bound can be improved, and that (<ref>) holds with log(nδ) replaced with log(nδ)∧log(srank(B)). To this end, let e_1,…,e_d denote the standard Euclidean basis, and observe that BR^2= ł∑_k=1^d δ_k B_k e_k^T ^2 =ł∑_k=1^d δ_k B_k B_k^T. We will apply <Ref> to the last expression with W_k = δ_k B_k B_k^T. Note that A_n = δ BB^T so that A_n=δB^2, and that E M = E ł(max_k≤ dδ_k B_k_2^2)̊. According to Lemma 5.1 in <cit.> or Proposition 2.3 in <cit.>, Emax_k=1,…,dδ_k B_k_2^2 ≤2/⌊δ^-1⌋∑_k=1^⌊δ^-1⌋B_(k)_2^2. Finally, the effective rank r(A_n) = B^2_F/B^2 coincides with the stable rank of B. We record the following bound. The inequalities E BR^2 ≤ Kł( δB^2 + logł(B^2_F/B^2)̊1/⌊δ^-1⌋∑_k=1^⌊δ^-1⌋łB_(k)_2^2 )̊ and E BR - δ B^2 ≤ K(1-δ)ł( δB^2 + logł(B^2_F/B^2)̊1/⌊δ^-1⌋∑_k=1^⌊δ^-1⌋łB_(k)_2^2 )̊ hold for all δ∈(0,1) and a numerical constant K>0. The first inequality has already been established above. The proof of the second bound is quite similar: it suffices to note that E BR - δ B^2 = Eł∑_k=1^d (δ_k-δ)^2 B_k B_k^T, and that ł E ∑_k=1^d (δ_k-δ)^2 B_k B_k^T = δ(1-δ) B^2 and Emax_k=1 ≤ d(δ_k-δ)^2B_k_2^2 ≤2(1-δ)/⌊δ^-1⌋∑_k=1^⌊δ^-1⌋B_(k)_2^2. §.§ Covariance estimation. In this section, we consider applications of our results to the covariance estimation problem. Let X∈ R^d be a random vector such that EX=0 and E XX^⊤ = Σ. Given a sequence X_1,…, X_n∈R^d of i.i.d. copies of X, what is an upper bound for the error of the sample covariance matrix? In other words, we would like to estimate r_n:=ł1/n∑_j=1^n X_j X_j^⊤ -Σ. One of the long-standing open questions asks for the minimal assumptions on the distribution of X such that n=C()d suffices to guarantee that E r_n≤Σ, or that r_n ≤Σ with high probability. Results of this type are often referred to as the “quantitative versions of the Bai-Yin theorem,” after <cit.>. Let us give an (incomplete) overview of the rich history of the problem. It has long been known that sub-Gaussian distributions satisfy the required conditions <cit.>. Moreover, very general and precise characterization of the behavior of the sample covariance of Gaussian random vectors with values in Banach spaces has been found by <cit.> and, very recently, further sharpened by <cit.> in the finite-dimensional case. For the log-concave and the sub-exponential distributions, the problem was first considered by <cit.>, and the bounds were significantly improved and refined by <cit.> and <cit.>. It took much longer to eliminate the unnecessary logarithmic factors, until the problem was finally solved by <cit.>. Finally, the case of heavy-tailed distributions was investigated by <cit.> and <cit.>, who showed that 4+ moments are sufficient to get the desired bound. 
Specifically, Tikhomirov's results imply that if Σ = I_d and sup_v_2=1 Eł|⟨ X,v ⟩|̊^p = T<∞ for some p>4, then r_n ≤ C(p)ł( T^2/p√(d/n) + max_jX_j_2^2/n)̊ with probability at least 1-1/n. <cit.> refined Tikhomirov's estimates and essentially showed that n = C()r(Σ) samples suffices to get the desired guarantees in expectation, although they considered the sample covariance based on properly truncated random vectors. Next, we show that the results by <cit.> can be combined with the moment inequalities developed in this paper to get a sharp moment inequality for r_n. Let X∈ R^d be a random vector such that EX=0 and E [XX^⊤] = Σ. Let X_1,…, X_n∈R^d be i.i.d. copies of X. Assume that r(Σ)/n≤ c for a sufficiently small positive constant c, and that for some p>4 sup_v_2=1E^1/pł| ⟨ X,v⟩|̊^p/E^1/2⟨ X,v⟩^2 = κ < ∞. Then (E1/n∑_j=1^n X_j X_j^⊤ -Σ^2)^1/2≤ C(κ,p) ( Σ√(r(Σ)/n) + E^2/pmax_j≤ nX_j_2^p/n). Our proof builds on the results by <cit.> which in turn sharpen the inequality due to <cit.>. Before we dive into the details, let us mention that the “hypercontractivity” condition (<ref>) implies in particular that ł E X^2 XX^T ≤κ^2 (Σ)Σ; the proof of this fact can be found in <cit.>. Note that 1/n∑_j=1^n X_j X_j^⊤ -Σ = sup_v_2=1|1/n∑_j=1^n ⟨ X_j, v⟩^2 - E⟨ X,v⟩^2|. Let us state the following decomposition of the error into “peaky” and “spread” parts <cit.> that holds for arbitrary λ>0: sup_v_2= 1|1/n∑_j=1^n ⟨ X_j, v⟩^2 - E⟨ X,v⟩^2| ≤sup_v_2≤ 11/n∑_j=1^n ⟨ X_j, v⟩^2 1{λ⟨ X_i, v⟩^2 >1 }_Peaky part + sup_v_2≤ 1|1/λ n∑_j=1^n ψ(λ⟨ X_j, v⟩^2 )- E⟨ X,v⟩^2|_Spread part, where ψ(x)= x, for x∈ [-1,1]; (x) for |x|>1. We will estimate the two terms separately, starting with the “spread” part. To this end, we will apply Proposition 4 in <cit.> which implies that for λ = 1/κ^2Σ√(r(Σ)/n), sup_v_2≤ 1|1/λ n∑_j=1^n ψ(λ⟨ X_j, v⟩^2 )- E⟨ X,v⟩^2| ≤ Cκ^2Σł( √(r(Σ)/n) + t/√(r(Σ)n))̊ with probability at least 1-e^-t. For t=r(Σ), we get in particular that sup_v_2≤ 1|1/λ n∑_j=1^n ψ(λ⟨ X_j, v⟩^2 )- E⟨ X,v⟩^2| ≤ Cκ^2Σ√(r(Σ)/n) with probability at least 1-e^-r(Σ). Next, we will estimate the “peaky” term in the inequality (<ref>). Equation (5) in the work by <cit.> states that for all subsets J⊆ [n] of cardinality at most k, 1/n∑_j∈J X_j X_j^⊤≤f(k, [n])/n for the function f(k, [n]) defined via f(k,[n]) = sup_y_2=1, y_0≤ k, (y)⊆ [n]∑_j=1^n y_j X_j_2^2 and the bounds holds uniformly over all such subsets J. The following result provides a bound for f(k,[n]). <cit.> Assume that r(Σ)/n≤ c' for a sufficiently small positive constant c'. Then f(k,[n])≤ C(p,κ)(max_j≤ nX_j_2^2 + Σ k (n/k)^4/(4+p)log^4 n/k), with probability at least 1-C(p)/n, and the bound holds simultaneously for all integers k satisfying r(Σ) ≤ k ≤ c'n. Since p>4, this result implies that f(k,[n])≤ C'(p,κ)(max_j≤ nX_j_2^2 + Σ√(nk)). Next, let λ = 1/κ^2Σ√(r(Σ)/n). Following <cit.>, let us define the random set I_v=ł{j∈ [n]: ⟨ X_j, v⟩^2 > 1/λ}̊ and m=sup_v_2≤ 1 |I_v|. Then, in view of the inequality (<ref>), we see that m/nλ≤sup_v_2≤ 11/n∑_j=1^n ⟨ X_j, v⟩^2 1{λ⟨ X_i, v⟩^2 >1 }≤f(m,[n])/n. Now, if r(Σ)≤ m ≤ c'n, then we can employ <Ref> to derive the following bound that holds with probability at least 1-C(p)/n: m≤λ f(m,[n]) ≤ C'(p)(max_j≤ nX_j_2^2 + Σ√(nm))·1/κ^2Σ√(r(Σ)/n) ≤C'(p)/κ^2(max_j≤ nX_j^2/Σ√(r(Σ)/n) + √(m)√(r(Σ))). Solutions to the inequality x ≤ a√(x) + b satisfy x≤ 2max(a^2, b). Therefore, with probability at least 1-C(p)/n, m≤ C_1(p)max(r(Σ), max_j≤ nX_j^2/κ^2Σ√(r(Σ)/n)). 
It remains to show that m≤ c'n if r(Σ)<cn for c small enough (clearly, if m<r(Σ), then (<ref>) holds). By the definition of I_v, for any v∈R^d, |I_v| = ∑_j=1^n1{⟨ X_j, v⟩^2 > κ^2Σ√(n/r(Σ))} = ∑_j=1^n 1{|⟨ X_j, v⟩|/(κ^2Σ√(n/r(Σ)))^1/2 > 1} ≤∑_j=1^n ρ(|⟨ X_j, v⟩|/(κ^2Σ√(n/r(Σ)))^1/2), where ρ(x) = 0 x≤ 1/2 2x - 1 x∈ (1/2, 1] 1 x> 1 is such that 1{x≥ 1/2}≥ρ(x) ≥1{x≥ 1}. For brevity, set Z_j(v) := |⟨ X_j, v⟩|/(κ^2Σ√(n/r(Σ)))^1/2 and S=sup_v_2≤ 1(∑_j=1^n ρ(Z_j(v)) - Eρ(Z_j(v))). In view of Markov's inequality and assumption (<ref>), Eρ(Z_j(v)) ≤P(⟨ X, v⟩^2 > κ^2/2Σ√(n/r(Σ)))≤4E⟨ X,v⟩^4/κ ^4 Σ^2·r(Σ)/n≤4r(Σ)/n. Denoting σ^2 = sup_|v_2≤ 1(ρ(Z_1(v))), we deduce that P( sup_v_2≤ 1|I_v|≥ c'n) ≤P(sup_v_2=1∑_j=1^n ρ(Z_j)>c'n) ≤P(S>c'n - 4r(Σ)). We will apply <Ref> to estimate the right hand side in the display above. Specifically, P(S>c'n - 4r(Σ))≤ e^-t whenever c'n - 4r(Σ) - 2ES - σ√(2tn) - 4t/3 ≥ 0. To prove that this relation holds for suitable choices of parameters c and t (where r(Σ)/n≤ c), first observe that σ^2≤E(ρ(Z)^2≤Eρ(Z) ≤4r(Σ)/n, where the last inequality follows from the bound (<ref>). Next we will estimate ES. Let ε_1,…ε_n be a sequence of independent random signs. The standard argument based on the symmetrization and contraction inequalities <cit.>, together with the fact that ρ(x) is Lipschitz continuous with Lipschitz constant equal to 2, yields that ES ≤ 2Esup_v_2≤ 1|∑_j=1^n ε_jρ(Z_j)| ≤ 8Esup_v_2≤ 1|∑_j=1^n ε_j |⟨ X_j, v⟩|/(κ^2Σ√(n/r(Σ)))^1/2| ≤16/(κ^2Σ√(n/r(Σ)))^1/2(Esup_v_2≤ 1(∑_j=1^n ⟨ X_j, v⟩)^2)^1/2. Since Esup_v_2≤ 1(∑_j=1^n ⟨ X_j, v⟩)^2 = E∑_j=1^n X_j^2 ≤κ^2 nΣ r(Σ) by assumption (<ref>), we conclude that Esup_v_2≤ 1(∑_j=1^n ρ(Z_j) - Eρ(Z_j)) ≤ 16n^1/4(r(Σ))^3/4. As a consequence, we have to choose c and t such that 4r(Σ)+32n^1/4(r(Σ))^3/4+2√(2t)√(r(Σ))+4t/3≤ c'n, which is satisfied if both r(Σ) t do not exceed a constant times n. When (<ref>) holds, we conclude that m≤ c'n with probability at least 1-e^-t, and that the inequality (<ref>) holds with probability at least 1-c'(p)/n. Combining this result with the estimates (<ref>), (<ref>) and <Ref>, we deduce that with probability at least 1-c'(p)/n, the “peaky” term admits the upper bound of the form sup_v_2≤ 11/n∑_j=1^n ⟨ X_j, v⟩^2 1ł{⟨ X_i, v⟩^2 > κ^2Σ√(n/r(Σ))}̊ ≤f(m,[n])/n≤ C_2(p,κ) (max_j≤ nX_j_2^2/n + Σr(Σ)/n). Combining the estimates (<ref>) and (<ref>) with the decomposition (<ref>), we conclude that with probability at least 1-e^-r(Σ) - c'(p)/n, 1/n∑_j=1 X_j X_j^⊤ - Σ≤ C(p,κ)(max_j≤ nX_j_2^2/n + Σ√(r(Σ)/n)). To obtain the desired bound in expectation, let us define the event A:={1/n∑_j=1 X_j X_j^⊤ - Σ≤ C(p,κ)(max_j≤ nX_j_2^2/n + Σ√(r(Σ)/n))}. Then ℙ(A^c)≤ e^-r(Σ)+c'(p)/n and E^1/21/n∑_j=1 X_j X_j^⊤ - Σ^2 ≤ C_1(p,κ)( E^1/2max_j≤ nX_j_2^4/n + Σ√(r(Σ)/n)) + E^1/2[1/n∑_j=1 X_j X_j^⊤ - Σ^21(A^c)]. Hölder's inequality implies that E^1/2[1/n∑_j=1 X_j X_j^⊤ - Σ^21(A^c)] ≤E^2/p1/n∑_j=1 X_j X_j^⊤ - Σ^p/2(c'(p)/n∨ e^-r(Σ))^p-4/2p. Finally, we invoke Rosenthal's inequality (<ref>) to deduce that E^2/p1/n∑_j=1 X_j X_j^⊤ - Σ^p/2≤ C(p)[ √(r(Σ)/n)Σ√(log(er(Σ))) + log(er(Σ))/nEmax_j≤ nX_j_2^2 + 1/nE^2/pmax_j≤ nX_j_2^p ]. Since r(Σ)<cn by assumption, log(er(Σ))<C(p) ł(n∨ e^r(Σ))̊^(p-4)/2p, implying the final form of the bound. Let us remark that in the course of the proof, we obtained a slightly stronger inequality (E1/n∑_j=1^n X_j X_j^⊤ -Σ^2)^1/2≤ C(κ,p) ( Σ√(r(Σ)/n) + E^1/2max_j≤ nX_j_2^4/n⋁E^2/pmax_j≤ nX_j_2^p/n(1/n + e^-r(Σ))^p-4/2p). 
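As an informal numerical companion to the covariance bound above (not part of the original argument), the following sketch estimates the operator-norm error of the sample covariance by Monte Carlo for a heavy-tailed design and compares it with the leading term ‖Σ‖√(r(Σ)/n). The multivariate Student-t model (6 degrees of freedom, so that moments of order p ∈ (4,6) are finite) and all parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cov_error(Sigma, n, df=6.0, n_trials=50):
    """Monte Carlo estimate of E || (1/n) sum_j X_j X_j^T - Sigma ||
    for a multivariate Student-t design (finite moments of order < df),
    rescaled so that E[X X^T] = Sigma."""
    d = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    errs = []
    for _ in range(n_trials):
        g = rng.standard_normal((n, d)) @ L.T          # rows ~ N(0, Sigma)
        w = rng.chisquare(df, size=n) / df             # chi-square mixing -> Student-t
        X = g / np.sqrt(w)[:, None] * np.sqrt((df - 2.0) / df)
        errs.append(np.linalg.norm(X.T @ X / n - Sigma, ord=2))
    return np.mean(errs)

d = 200
Sigma = np.diag(1.0 / np.arange(1, d + 1))             # decaying spectrum, small r(Sigma)
r_eff = np.trace(Sigma) / np.linalg.norm(Sigma, ord=2)
for n in (500, 2000, 8000):
    err = sample_cov_error(Sigma, n)
    lead = np.linalg.norm(Sigma, ord=2) * np.sqrt(r_eff / n)
    print(f"n={n:5d}  error ~ {err:.4f}   ||Sigma|| sqrt(r(Sigma)/n) = {lead:.4f}")
```

Up to constants depending on (κ, p), the empirical error is expected to track the printed leading term as n grows.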
§.§ Empirical eigenvector estimation In this section, we continue the considerations of section <ref>. Let X∈ R^d be a random vector such that EX=0 and E XX^⊤ = Σ, and let λ_1≥…≥λ_d and u_1,…,u_d be the eigenvalues and the eigenvectors of Σ, respectively. Moreover, let g_1 = λ_1 - λ_2 and g_j = min(λ_j-1 - λ_j, λ_j - λ_j+1) for j = 2,…,d be the different spectral gaps, meaning that if g_j>0, then the eigenvector u_j is uniquely determined up to the sign. Given a sequence X_1,…, X_n∈R^d of i.i.d. copies of X, let λ̂_1≥…≥λ̂_d and û_1,…,û_d be the eigenvalues and eigenvectors of the empirical covariance matrix Σ̂ = 1/n∑_k = 1^n X_k X_k^⊤, respectively. A question that has been studied for decades asks for perturbation bounds for the empirical spectral characteristics, we refer for instance to <cit.> for some classical and more recent results and applications. For the special case of spectral projectors, the Davis-Kahan inequality (cf. <cit.>) is among the most prominent tools, see <cit.> and the references therein for some recent context. Combining the Davis-Kahan inequality with Theorem <ref> we get that ^1/2û_jû_j^⊤ - u_ju_j^⊤_2^2≤ C(κ,p) ( Σ/g_j√(r(Σ)/n) + E^2/pmax_k≤ nX_k_2^p/ng_j), provided that g_j>0. Note that the requirement r(Σ)/n≤ c can be dropped in this case because the left-hand side is always bounded by √(2). Although prominent, estimates of the type (<ref>) are often sub-optimal, we refer to <cit.> for a detailed discussion. It turns out that the complexity of the problem is captured by the relative rank r_j(Σ) = ∑_i ≠ jλ_i/|λ_i - λ_j| + λ_j/g_j, in contrast to the effective rank r(Σ). The following result improves upon the bounds in <cit.> as it requires less moments. Assume that for some p>4 condition (<ref>) holds. Let j∈{1,…,d} be such that g_j>0. Then ^1/2û_j - u_j_2^2≤^1/2û_jû_j^⊤ - u_ju_j^⊤_2^2 ≤ C(κ,p) (√(λ_j/g_j)√(r_j(Σ)/n) + r_j(Σ)/n^1-2/p), where ·_2 also denotes the Frobenius norm for matrices. The random vector X can be expressed in terms of the Karhunen-Loève decomposition X = ∑_i=1^d √(λ_i)η_iu_i, where the Karhunen-Loève coefficients η_1,…,η_d are uncorrelated with η_i^2 = 1 and defined by η_i=⟨ X,u_i⟩/√(λ_i). In this case, (<ref>) holds if η_1,…,η_d form a martingale difference sequence with max_i≤ d^1/p |η_i|^p ≤κ/√(p-1), as can be seen by an application of Burkholder's inequality (Theorem 2.1 in <cit.>). Since λ_j ≤Σ and |λ_k - λ_j|≥ g_j for k ≠ j, we have the inequality √(λ_j/g_j)√(r_j(Σ)/n)≤Σ/g_j√(r(Σ)/n), meaning that the first term on the right-hand side of Theorem <ref> is always an improvement over the corresponding one in (<ref>). Suppose that the sign of û_j is chosen such that ⟨û_j,u_j ⟩≥ 0. By display (5.24) in <cit.> combined with Proposition 1 in <cit.> (see also Lemma 2 in <cit.>), we have û_j - u_j_2 ≤û_jû_j^⊤ - u_ju_j^⊤_2 ≤ 4√(2)T_j(Σ̂ - Σ)T_j with T_j = |R_j|^1/2 + g_j^-1/2u_ju_j^⊤, |R_j|^1/2 = ∑_i ≠ j1/√(|λ_i - λ_j|)u_iu_i^⊤. Since T_j is symmetric, T_j(Σ̂- Σ)T_j = 1/n∑_k=1^n(T_j X_k ) (T_j X_k)^⊤ - E (T_j X ) (T_j X)^⊤. Thus, estimating T_j(Σ̂ - Σ)T_j is again a covariance estimation problem with random vector given by T_j X = ∑_i ≠ j(λ_i/|λ_i - λ_j|)^1/2η_i u_i + (λ_j/g_j)^1/2η_j u_j with Karhunen-Loève coefficients η_1,…,η_d introduced in Remark <ref>. We now explore the fact that assumption (<ref>) is invariant under linear transformations. 
Indeed, for any u ≠ 0, we have ^1/p |⟨ T_j X, u ⟩|^p/^1/2⟨ T_j X, u ⟩^2 = ^1/p |⟨ X, T_j u ⟩|^p/^1/2⟨ X, T_j u ⟩^2≤sup_v_2 = 1^1/p |⟨ X, v ⟩|^p/^1/2⟨ X, v ⟩^2≤κ, implying that (<ref>) also holds for T_j X. In addition, setting v = u_j in (<ref>), we get max_i≤ d^1/p|η_i|^p ≤κ. Using the triangle inequality, it follows that ^2/pT_j X^p = ^2/p(∑_i ≠ jλ_i/|λ_i - λ_j|η_i^2 + λ_j/g_jη_j^2)^p/2 ≤∑_i ≠ jλ_i/|λ_i - λ_j|^2/p |η_i|^p + λ_j/g_j^2/p|η_j|^p ≤ r_j(Σ) κ^2. This in turn yields the estimate ^2/pmax_i ≤ nT_j X_i^p ≤ n^2/p r_j(Σ) κ^2. Moreover, we have the relations trace(T_j Σ T_j) = r_j(Σ), T_j Σ T_j = (max_i ≠ jλ_i/|λ_i - λ_j|) ⋁λ_j/g_j≤2λ_j/g_j. Theorem <ref>, together with displays (<ref>) and (<ref>), now yields the inequality ^1/2û_jû_j^⊤ - u_ju_j^⊤_2^2 ≤ 4√(2)^1/2T_j(Σ̂ - Σ)T_j^2 ≤ C(κ,p) (T_j Σ T_j√(r(T_j Σ T_j)/n) + ^2/pmax_i ≤ nT_j X_i_2^p/n) ≤ C'(κ,p) (√(λ_j/g_j)√(r_j(Σ)/n) + r_j(Σ)/n^1-2/p), provided that r_j(Σ)/n≤ c. Finally, in case that r_j(Σ)/n> c we trivially have ^1/2û_jû_j^⊤ - u_ju_j^⊤_2^2≤√(2)≤√(2)/cr_j(Σ)/n^1-2/p. The conclusion follows. § AUXILIARY RESULTS In this section, we collect the background material and technical results that our arguments rely on. <cit.> and <cit.> Let W_1, …, W_n∈C^d× d be a sequence of independent, centered, self-adjoint random matrices such that W_k≤ U, k=1,…,n almost surely. Assume that V_n^2≽∑_k EW_k^2 and let σ^2 = V_n^2. Then for any t≥σ+U/3, P(∑_k=1^n W_i>t) ≤ 4 r(V_n^2) exp[-t^2 / 2/σ^2+t U / 3]. Let p≥ 1. Under the assumptions of <Ref>, E^1/p∑_k=1^n W_k^p≤ Kł( σ√(q) + Uq )̊ where q = log(er(V_n^2))∨ p and K>0 is a numerical constant. Inequalities of this type are well known. See for instance <cit.> or <cit.> where the case p=1 is considered. <cit.> Let X_1,…,X_n be independent, symmetrically distributed random variables with values in a separable Banach space with norm ·_B. Set S_k=∑_i=1^k X_i, k≤ N. Then for any s,t>0, ℙ(S_N_B>2 t+s) ≤4(ℙ(S_N_B>t))^2+ℙ(max _i ≤ NX_i_B>s). <cit.> Let 0< q <∞ and let X_1,…,X_n be independent, symmetrically distributed random variables with values in a separable Banach space with norm ·_B. Set S_k=∑_i=1^k X_i, k≤ N. Then for t_0 = inf{t>0: P(S_N_B>t)≤ (2· 3^p)^-1}, 𝔼S_N_B^p ≤ 2 · 3^p 𝔼max _i ≤ NX_i_B^p + 2(3 t_0)^p. <cit.> Let X_1,…,X_n be independent random variables with values in a separable Banach space with norm ·_B. There exists a numerical constant K such that for all p > 1, E^1/p∑_k=1^n X_k_B^p ≤ K p/log(ep)( E∑_k=1^n X_k_B + E^1/pmax_kX_k_B^p) and ł∑_k=1^n X_k_B _ψ_1 ≤ K ( E∑_k=1^n X_k_B + łmax_kX_k_B _ψ_1). <cit.> Let F be a countable set of measurable real-valued functions and let X_1,…,X_n be i.i.d. Assume that Ef(X_1)=0 for all f∈ F and that sup_f∈ F |f(X_1)|≤ U with probability 1. Denote Z=sup _f ∈ℱ∑_k=1^n f(X_k). Assume that σ_∗^2≥ nsup_f∈FEf^2(X_1) and set v = σ_∗^2 + 2E[Z]. Then for all t≥ 0, P(Z ≥E Z + √(2 t v)+tU/3) ≤ e^-t. The inequality (<ref>) immediately implies that with probability at least 1-e^-t, Z≤ 2 EZ + σ_∗√(2t) + 4tU/3. apalike
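To make the matrix Bernstein moment bound above concrete, the short simulation below compares E‖∑_k W_k‖ (the case p = 1) with σ√(q) + Uq, q = log(e r(V_n^2)), using Rademacher-weighted fixed self-adjoint matrices W_k = ε_k A_k. The construction and sample sizes are illustrative assumptions, and the absolute constant K is ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, n_trials = 50, 400, 200

# fixed self-adjoint "patterns" A_k; W_k = eps_k * A_k is centered and ||W_k|| <= U
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2.0
U = max(np.linalg.norm(A[k], 2) for k in range(n))

V = sum(A[k] @ A[k] for k in range(n))      # here E[W_k^2] = A_k^2
sigma = np.sqrt(np.linalg.norm(V, 2))
r = np.trace(V) / np.linalg.norm(V, 2)      # effective rank of V_n^2
q = np.log(np.e * r)                        # q = log(e r(V_n^2)) for p = 1

norms = []
for _ in range(n_trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    S = np.einsum("k,kij->ij", eps, A)      # sum_k eps_k A_k
    norms.append(np.linalg.norm(S, 2))

print(f"E||sum_k W_k|| ~ {np.mean(norms):.2f}")
print(f"sigma*sqrt(q) + U*q = {sigma * np.sqrt(q) + U * q:.2f}")
```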
http://arxiv.org/abs/2407.13632v1
20240718160359
Data Alchemy: Mitigating Cross-Site Model Variability Through Test Time Data Calibration
[ "Abhijeet Parida", "Antonia Alomar", "Zhifan Jiang", "Pooneh Roshanitabrizi", "Austin Tapp", "Maria Ledesma-Carbayo", "Ziyue Xu", "Syed Muhammed Anwar", "Marius George Linguraru", "Holger R. Roth" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
Data Alchemy A. Parida et al. Children’s National Hospital, Washington, DC, USA Universidad Politécnica de Madrid, Madrid, Spain Universitat Pompeu Fabra, Barcelona, Spain Nvidia Corporation, Santa Clara, CA, USA George Washington University, Washington, DC, USA Data Alchemy: Mitigating Cross-Site Model Variability Through Test Time Data Calibration Abhijeet Parida^1,2, Antonia Alomar^3, Zhifan Jiang^1, Pooneh Roshanitabrizi^1, Austin Tapp^1, Maria Ledesma-Carbayo^2, Ziyue Xu^4, Syed Muhammed Anwar^1,5, Marius George Linguraru ^1,5, Holger R. Roth^4 July 2024 ================================================================================================================================================================================================================= § ABSTRACT Deploying deep learning-based imaging tools across various clinical sites poses significant challenges due to inherent domain shifts and regulatory hurdles associated with site-specific fine-tuning. For histopathology, stain normalization techniques can mitigate discrepancies, but they often fall short of eliminating inter-site variations. Therefore, we present Data Alchemy, an explainable stain normalization method combined with test time data calibration via a template learning framework to overcome barriers in cross-site analysis. Data Alchemy handles shifts inherent to multi-site data and minimizes them without needing to change the weights of the normalization or classifier networks. Our approach extends to unseen sites in various clinical settings where data domain discrepancies are unknown. Extensive experiments highlight the efficacy of our framework in tumor classification in hematoxylin and eosin-stained patches. Our explainable normalization method boosts classification tasks' area under the precision-recall curve (AUPR) by 0.165, 0.545 to 0.710. Additionally, Data Alchemy further reduces the multisite classification domain gap, by improving the 0.710 AUPR an additional 0.142, elevating classification performance further to 0.852, from 0.545. Our Data Alchemy framework can popularize precision medicine with minimal operational overhead by allowing for the seamless integration of pre-trained deep learning-based clinical tools across multiple sites. § INTRODUCTION In recent years, deep learning-based methods have performed well for various medical imaging analysis tasks such as disease diagnosis, classification, and segmentation <cit.>. However, according to the United States Food and Drugs Administration, there is no approval for artificial intelligence and machine learning-enabled medical devices in histopathology for the calendar year 2023 <cit.>. This suggests few of the developed methods are usable in a clinical setting – particularly in histopathology, due to known challenges of generalizability and robustness across sites. Data and protocol variability across sites further hamper the approval of regulatory compliance for such tools <cit.>. The typical approach to improve a model's performance and generalizability is to calibrate each model at every site before deployment <cit.>. While effective in some circumstances, model weight calibration resulting in substantial parameter-related modifications necessitates regulatory re-approval. To overcome these challenges, we propose a different approach. 
Instead of performing weight calibration that would necessitate regulatory re-approval of the model, we perform data calibration/template learning using Data Alchemy to reduce the gap domain and hence, solve the generalizability problem at test time. To establish the efficacy of this approach, we address tumor classification in digital histopathology images. In histopathology, cells and tissue samples must be stained to be visible under a microscope. Then, they are digitized using microscopic scanners. The resulting samples' appearance varies depending on several factors such as the used reagents, staining procedure, and scanner specifications. Such variations directly affect analysis performed both by a pathologist or automated classification algorithms <cit.>. Stain normalization has been investigated as a pre-processing step to reduce color variations between histopathology samples. This involves transferring the color (stain) of a source histology patch to a target patch, while preserving the morphological tissue structure (content). Several studies have shown that data augmentation and stain normalization help increase the prediction accuracy <cit.>. However, striking the appropriate balance between structure preservation and color consistency is challenging, as the resulting samples either contain artifacts and hallucinations or suffer in color appearance. Related Works: Conventional stain normalization methods are mostly based on histogram transformations or color deconvolution (stain separation) <cit.>. Histogram transform-based methods usually impose the color characteristics of a reference patch to another source patch using linear transformations <cit.>. Color deconvolution is a method for decoupling light-absorbance and stain concentration in each pixel using spectral characteristics of different stains <cit.>. In other works, such as <cit.>, RGB images are transformed into optical flows for estimating the stain vectors using singular value decomposition (SVD). However, these methodologies tend to generate artifacts in the background and/or color discontinuities in the normalized images. Recent efforts have focused on deep learning-based methods, especially those using generative adversarial networks (GANs). GAN-based methods target stain normalization as a style-transfer problem <cit.>. Some proposed methods have used cycle-consistent generative adversarial networks (cycleGAN) to match the target distribution <cit.>. In another approach, content was disentangled from style, opening the possibility of multiple stain representations and, in classification tasks, outperforming conventional color augmentation techniques <cit.>. However, GANs are computationally expensive, are prone to mode collapse, and can lead to undesired changes in the underlying morphological structures <cit.>. Our Contribution: 1) A stain normalization method that combines the advantages of Singular Value Decomposition (SVD) transformations in the latent space with the non-linearity of convolutional networks to ensure structure preservation in a simple, interpretable, and computationally efficient manner. 2) We propose a test time data calibration method via template learning called Data Alchemy that improves model generalizability without altering parameters during testing, thus maintaining regulatory compliance. 3) We demonstrate the effectiveness of our strategies by evaluating them on histopathological tumor classification data. 
§ METHODS AND EXPERIMENTAL SETTINGS §.§ Explainable stain normalization We approached histopathology stain normalization as an image reconstruction task using feature transformation during inference, as shown in Fig. <ref>. Specifically, an image reconstruction network was trained using image I, such that I = dec(enc(I)), where enc(.) and dec(.) are the encoder and decoder, respectively. Feature transformations were done using whitening and coloring transforms, proposed for arbitrary style transfer between natural images <cit.>. The whitening transform was defined as f_c = E_c D^-1/2_c E_c^T enc(I_c), where D_c is a diagonal matrix of eigenvalues and E_c is the orthogonal matrix of eigenvectors of the covariance matrix enc(I_c) · enc(I_c)^T, and I_c represent the patches that need to be re-stained. The covariance matrix is positive semi-definite, ensuring all eigenvalues ≥ 0. This whitening transform removed stain-specific information while preserving structure-related information from the patch that needs to be re-stained. The “whitened" f_c was then “colorized" using the coloring transforms, defined as f_cs = E_s D^-1/2_s E_s^T f_c <cit.>, where D_s is a diagonal matrix of eigenvalues and E_s is the orthogonal matrix of eigenvectors of the covariance matrix enc(I_s) · enc(I_s)^T, and I_s represents the patch whose staining parameters are used to stain the patch I_c. The coloring transform added stain-specific information from I_s to the “whitened" f_c. The features of I_c can be blended using a parameter α with re-stained features, f_cs, to control the stylization effect <cit.>, as f_cs = α f_cs + (1-α) enc(I_c). For patch staining, we set α=1 as we aim to produce a stained patch and not control stylization. Implementation details: We used all layers of VGG-19 <cit.> upto `conv_3_3' as the encoder and the exact inverted architecture of the encoder as the decoder (Fig. <ref>). We minimized L1 as a reconstruction loss using the AdamW optimizer for 10 epochs with a batch size of 96 and a learning rate of 1e^-4. The best-performing model on the validation set was saved for stain normalization. §.§ Downstream classification task To evaluate our stain normalization method, we reimplemented a downstream classification task from <cit.>. We used the ResNet-34 to identify tumor cells in small patches of whole slide images (WSIs). The classifier was trained using patches from one site, and tested on the unseen site. We compare the ResNet's accuracy using stain normalization with a fixed patch template and our proposed test time data calibration to establish generalizability. Implementation details: We trained three models (ResNet-34): one on site, A, one on site B, and one on combined sites (A and B). The models were trained with augmentations from <cit.>, which included color jitter, changes in brightness, hue and saturation, random flips, and rotation. We minimized the cross-entropy loss for tumor vs. healthy patches using the AdamW optimizer for 60 epochs with a batch size 256 and a learning rate of 1e^-4. The model with the best validation metrics was chosen as our classifier. §.§ Data Alchemy: Test time data calibration For classifiers trained in Section <ref> to function optimally at different sites, a calibration step was necessary. We used the normalization method from Section <ref> to adjust incoming patches to familiar stain parameters. Since we did not want to alter model weights, we propose adjusting the target template of the normalization network instead. 
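The template-calibration loop just outlined (and detailed in the next paragraph) can be sketched as follows. This is a hedged illustration only: `stain_normalizer` and `classifier` stand in for the frozen normalization network and classifier described above, and the optimizer settings mirror the implementation details reported later in this section.

```python
import torch

def calibrate_template(stain_normalizer, classifier, template, loader,
                       epochs=10, lr=1e-4):
    """Learn a synthetic staining template at test time (sketch).

    Both networks stay frozen; only the template tensor receives gradients,
    so no model weights are modified."""
    for module in (stain_normalizer, classifier):
        module.eval()
        for p in module.parameters():
            p.requires_grad_(False)

    template = template.clone().requires_grad_(True)   # the only learnable object
    optimizer = torch.optim.AdamW([template], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in loader:                  # labeled patches from the test site
            restained = stain_normalizer(images, template)
            logits = classifier(restained)
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()                            # gradients flow only into the template
            optimizer.step()
    return template.detach()
```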
We randomly drew a real patch from the classifier's training site to instantiate a template. We froze the normalization network and the classifier and set the template tensor as learnable. During calibration, labeled images from the test site were normalized to match the staining of the training site. These stain-normalized images were passed through the classifier to obtain class logits, which were used to calculate the losses with the labels of the patches. The only learnable parameter was the template, so gradients were calculated for it, and over multiple iterations, the optimizer learned a synthetic template. Thus, the classifier guided the normalization network in modifying the template to improve the site's classification accuracy. This calibration step, was performed during deployment and is the test-time calibration called Data Alchemy. The schematic for test-time calibration is shown in Fig. <ref>. Implementation details: The validation set from the sites is used to learn a template for calibrating the classifier. Half of the dataset was used to learn the template and the rest is the validation set of the data calibration step. Optimization is performed for 10 epochs to minimize the cross-entropy loss using the AdamW optimizer with a batch size of 256 and a learning rate of 1e^-4. §.§ Dataset We used CAMELYON 16 <cit.>, a public dataset consisting of 400 WSIs of sentinel lymph nodes from two sites, site A - Radboud University Medical Center, Nijmegen, and site B - University Medical Center, Utrecht. Further, we used coordinates provided by Baidu Research <cit.> to determine the presence or absence of tumor cells in 256x256 patches. The site-wise sample distribution is presented in Table <ref>. §.§ Evaluation metrics Stain normalization: We used metrics of structural similarity index measure (SSIM) and peak-signal-to-noise ratio (PSNR) <cit.> for evaluation. We also used specialized metrics cycleL1 and AP(i, p) <cit.> to quantify the preservation of structural information and the accuracy of stain normalization. cycleL1 =I_c, sty(sty(I_c, I_s), I_c), is the norm between the original patch I_c and the reconstructed original patch after two stain normalizations using normalization network sty(.). Sty(.) stains I_c to the parameters of I_s, from another site. This stained patch is re-stained with the staining parameters of I_c to get the reconstructed original patch. For WSI, we adapt AP(i, p) to measure changes in boundaries within patches highlighted using Sobel filters <cit.>. For ideal stain normalization, cycleL1 be 0 and AP(i, p) should be 1. Tumor classification: We used the area under the precision-recall (AUPR) curve and the area under the receiver operating characteristics (AUROC) curve as metrics. Additionally, we reported a F1 score using the best threshold from the precision-recall curve. § RESULTS §.§ Comparison with other stain normalization techniques For Data Alchemy, the staining method must be controllable and capable of handling unseen stains during testing. So, we compare the performance of our proposed stain normalization method with HistAuGAN <cit.>, both quantitatively and qualitatively. Fig. <ref> shows examples of stain normalization on different patches. Both approaches reduce the color appearance variations and create plausible stained samples while preserving the general structure visible in the original patches. However, HistAuGAN does not preserve the exact structures present in the original patch. 
It hallucinates additional nuclei and generates artifacts in the white background (Appendix <ref>). In contrast, our method preserves structural details better without any hallucinations or artifacts. Table <ref> shows that our proposed stain normalization has a lower cycleL1 error compared to HistAuGAN. Moreover, our proposed method performs better in terms of SSIM and PSNR. These, together with higher values of AP(i,p) and the qualitative examples, suggest that our proposed method is better at preserving the structural information present in the original patch, hence a better choice for stain normalization. Therefore, the subsequent classification tasks are performed using our proposed stain normalization module. Exploring explainability in stain normalization: In Fig. <ref>, we show an example of the normalization of site A to site B. We can see that post normalization the higher eigenvalues of site A become smaller with lower eigenvalues of site B. Also, site A normalized to site B looks much closer to site B than to site A. We hypothesize that eigenvalues and vector manipulation are sufficient in stain blending, as shown in Fig. <ref>. We can control the blending of two patches directly by purely using eigenvalues and vectors from two different sites. By controlling the effect of the eigenvalues and vectors, we can control the staining to one particular site or the other. The manipulation of the eigenvalues helps us understand why one site is stained in a particular way compared to the other. §.§ Stain normalization on downstream task Table <ref> shows that the classifier performs best when trained on data from both sites A and B. The upper bound model (UBM) represents the classifier trained and tested on the same data location, while the lower bound model (LBM) shows the performance drop (0.394 AUPR) when testing on a site different from the training site. Stain normalization to a single template from site A improves classifier performance beyond the LBM when the classifier is trained on site A and tested on site B. Using one or ten templates increases the AUPR scores by 0.165 and 0.127, respectively, over the LBM. When using a template from site B with a model trained on site A, we observe a negligible improvement of 0.042. This demonstrates that stain normalization improves classifier performance. Table <ref> also shows that in some scenarios, the LBM is close to the UBM, indicating that training on site B captures the necessary data diversity for good performance on site A. More visualizations of the phenomenon are in Appendix <ref>. So When staining patches from site A to B, there is a drop of 0.193 and 0.155 AUPR using one or ten templates, respectively. A single template from site A only drops performance by 0.115. Overall, these findings suggest that static stain normalization may not always be beneficial for classifier performance. §.§ Test time data calibration Since static stain normalization may not guarantee optimal performance and we cannot update the model parameters due to regulatory concerns, we apply Data Alchemy to the classifier. In Table <ref>, the classifier trained on site A and tested on site B, the learned template boosts performance by 0.307 AUPR over the LBM and is just 0.064 below the UBM. Additionally, the classifier trained on site B and tested on site A also improves performance by 0.087 AUPR over the LBM and is only 0.002 below the UBM. We also observe an improvement of 0.009 AUPR and 0.008 F1 score of the data-calibrated model over the UBM. 
This demonstrates that Data Alchemy's learned template enhances classifier performance across different sites and has the potential to surpass the UBM. § CONCLUSION We propose an effective and explainable stain normalization strategy that preserves the image structures and reduces stain variance between a template image and the original patch. Moreover, data calibration using Data Alchemy improves the classification accuracy without retraining of any kind. It serves as a step that enhances classifier generalizability, reducing the domain gap between multiple sites. Apart from easing regulatory approval hurdles, Data Alchemy may be used for onsite model weight calibration when it is difficult to access the model (e.g., API-based interaction) or update the model (e.g., black boxes that do not support retraining or continuous learning). § ACKNOWLEDGEMENTS This work was supported by The National Cancer Institute award UG3CA236536. splncs04 § DATA SPLITS §.§ site A split json { "site": "A", "val": [ "tumor_011.tif", "tumor_047.tif", "tumor_012.tif", "tumor_028.tif", "tumor_041.tif", "tumor_045.tif", "tumor_051.tif", "tumor_053.tif", "tumor_044.tif", "tumor_016.tif", "tumor_013.tif", "tumor_042.tif", "tumor_050.tif", "tumor_021.tif", "tumor_037.tif", "tumor_014.tif", "tumor_038.tif", "tumor_043.tif", "tumor_024.tif", "tumor_036.tif", "tumor_022.tif", "tumor_019.tif", "tumor_049.tif", "tumor_039.tif", "tumor_046.tif", "tumor_032.tif", "tumor_052.tif", "tumor_040.tif", "tumor_048.tif" ], "test": [ "tumor_068.tif", "tumor_055.tif", "tumor_058.tif", "tumor_054.tif", "tumor_057.tif", "tumor_069.tif", "tumor_063.tif", "tumor_062.tif", "tumor_056.tif", "tumor_065.tif", "tumor_061.tif", "tumor_066.tif", "tumor_070.tif", "tumor_060.tif", "tumor_064.tif", "tumor_067.tif", "tumor_059.tif" ] } §.§ site B split json { "site": "B", "val": [ "tumor_104.tif", "normal_142.tif", "normal_148.tif", "tumor_103.tif", "normal_147.tif", "normal_143.tif", "tumor_102.tif", "normal_141.tif", "normal_150.tif", "tumor_101.tif", "normal_145.tif", "normal_146.tif", "normal_149.tif" ], "test": [ "tumor_108.tif", "normal_157.tif", "normal_151.tif", "normal_155.tif", "normal_156.tif", "tumor_106.tif", "tumor_109.tif", "tumor_107.tif", "tumor_110.tif", "normal_158.tif", "normal_153.tif", "normal_159.tif", "normal_154.tif", "tumor_105.tif", "normal_160.tif", "normal_152.tif" ] } § VISUALIZE THE STAIN NORMALIZATION § COMPARATIVE STAIN NORMALIZATION
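For reference, the whitening and coloring feature transforms at the core of the normalization network can be sketched as below. The centering step and the small eigenvalue clamp are implementation assumptions rather than details taken from the text, and the coloring step applies the +1/2 power of the template eigenvalues, as in the standard formulation of the transform.

```python
import torch

def _cov_eig(feat):
    """Mean and eigendecomposition of the channel covariance of a feature map
    of shape (C, H*W)."""
    mean = feat.mean(dim=1, keepdim=True)
    centered = feat - mean
    cov = centered @ centered.t() / (feat.shape[1] - 1)
    evals, evecs = torch.linalg.eigh(cov)      # symmetric PSD, eigenvalues >= 0
    return mean, evals.clamp_min(1e-8), evecs  # clamp: numerical safeguard (assumption)

def whiten_color(f_c, f_s, alpha=1.0):
    """Strip the stain statistics of the content features f_c and impose those
    of the template features f_s; alpha blends with the original features."""
    mean_c, d_c, e_c = _cov_eig(f_c)
    mean_s, d_s, e_s = _cov_eig(f_s)
    whitened = e_c @ torch.diag(d_c.pow(-0.5)) @ e_c.t() @ (f_c - mean_c)
    colored = e_s @ torch.diag(d_s.pow(0.5)) @ e_s.t() @ whitened + mean_s
    return alpha * colored + (1.0 - alpha) * f_c

# toy usage on random "encoder features": 256 channels on a 32x32 grid
f_content, f_template = torch.randn(256, 32 * 32), torch.randn(256, 32 * 32)
f_restained = whiten_color(f_content, f_template, alpha=1.0)
```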
http://arxiv.org/abs/2407.13729v1
20240718173048
Baba Is AI: Break the Rules to Beat the Benchmark
[ "Nathan Cloos", "Meagan Jens", "Michelangelo Naim", "Yen-Ling Kuo", "Ignacio Cases", "Andrei Barbu", "Christopher J. Cueva" ]
cs.CL
[ "cs.CL" ]
[ Baba Is AI: Break the Rules to Beat the Benchmark equal* Nathan CloosMIT Meagan JensMIT Michelangelo NaimMIT Yen-Ling KuoVir Ignacio CasesMIT Andrei Barbu*MIT Christopher J. Cueva*MIT MITMIT VirDepartment of Computer Science, University of Virginia, USA Nathan Cloosnacloos@mit.edu Andrei Barbuabarbu@mit.edu Christopher J. Cuevaccueva@gmail.com large language model, grounded compositional generalization, benchmark, baba is you 0.3in ] § ABSTRACT Humans solve problems by following existing rules and procedures, and also by leaps of creativity to redefine those rules and objectives. To probe these abilities, we developed a new benchmark based on the game Baba Is You where an agent manipulates both objects in the environment and rules, represented by movable tiles with words written on them, to reach a specified goal and win the game. We test three state-of-the-art multi-modal large language models (OpenAI GPT-4o, Google Gemini-1.5-Pro and Gemini-1.5-Flash) and find that they fail dramatically when generalization requires that the rules of the game must be manipulated and combined. § INTRODUCTION Humans demonstrate remarkable abilities in rapid learning and adaptive behavior when faced with novel environments - not only learning and following rules dictated by the environment but altering these rules to enable new outcomes. These abilities leverage two key components that we explore in this paper: 1) The ability to identify and manipulate relevant stimuli in the environment while ignoring distractor objects and rules. 2) The ability to combine previously seen rules in novel ways. The ability to study how an agent explicitly learns rules, composes them, and crucially, makes or breaks these rules to alter how the environment and agent behaves, prompted us to develop a new benchmark environment based on the puzzle game Baba Is You. In this game, the player often controls a character named “Baba" and must navigate through the grid-based world filled with blocks, objects, and textual rules. We can think of this game as a dynamic environment where the player interacts with various objects and rules to achieve specific goals. A remarkable aspect of Baba Is You is that the rules of the game can be manipulated and rearranged by the player. Figure <ref> shows an example game environment. The text blocks [baba is you] indicate the player is controlling the white triangle, i.e. the [baba] object, and can now move this object through the environment. Now let's look for the text blocks that specify how to win the game. The [is win] text blocks in the upper right of the environment are incomplete and so the agent must recognize that there is currently no way to win the game until the winning condition is specified. This is accomplished by moving one of the available text block such as [door] or [ball] to create a rule for winning the game. With this specific environmental layout, a winning strategy is to push the [door] block to create the rule, [door is win], and then move the agent onto the door block, shown in green, to win the game. However, the text blocks [wall is stop] are aligned and so this rule is active and the player cannot move baba through the vertical wall of gray squares to carry out this plan. The player must first push one of the blocks in this rule out of alignment to deactivate the rule [wall is stop]. The final plan to win the game is to first break the rule [wall is stop], then make the rule [door is win], and finally move onto the door object. 
As this example illustrates, this is a dynamic environment where the agent must identify the relevant objects and rules in the environment and then manipulate the environment to change or create rules for success (Figure <ref>). We implemented a simplified version of Baba Is You (Baba Is AI) based on the Gymnasium Minigrid environment <cit.>. The goal of the Baba Is AI benchmark is to evaluate the role of systematic compositionality in rule-based generalization. The core component of this benchmark is that the written commands are not only grounded in an environment, but the grounding itself can be manipulated via changing the rules of the environment. This dynamic design allows us to explore a broader notion of generalization compared to the current benchmarks. We show results for three large language models (LLMs): GPT-4o, Gemini-1.5-Pro (May 2024), and Gemini-1.5-Flash (May 2024) <cit.>. We chose GPT-4o and Gemini-1.5-Pro as these models occupy the top two spots on the Chatbot Arena Leaderboard (May 2024) <cit.>. We also include Gemini-1.5-Flash as this model occupies an intriguing spot in the LLM ecosystem with both excellent performance and affordable price, making it an attractive option for many applications. Previous work often convert visual inputs into text before evaluating LLMs <cit.>. Here we leverage the multi-modal ability of these models to evaluate them directly on visual inputs of the game. § METHOD We first prompt LLMs with general text instructions to play the game. This includes a description of the possible objects and textual rule blocks in the environment, and how active rules can change object properties (as illustrated in Figure <ref>, with the exact prompt in Appendix <ref>). Importantly, we specify that a rule is active only if it follows the form “object is property” and that the three rule blocks must be aligned horizontally in the environment. Following previous work on LLM-based agents and planners <cit.>, we ask LLMs to operate at a higher level than the low-level control of actions in the environment. Specifically, we ask LLMs to produce high-level textual plans consisting of the following primitives: breaking an active rule, making a rule active, or moving to a specific object in the environment (see an example plan in Figure <ref>). We instruct LLMs that these actions can only be taken if the relevant objects and rule blocks are present in the current environment. To generate their plan, LLMs receive as visual input a static image of the initial configuration of the environment. After providing the game instructions, we present LLMs with 10 example images and corresponding winning plans for in-context learning <cit.>. For each example, LLMs are asked to generate reasoning steps to derive the target plan from the given image. Following the in-context examples, LLMs are prompted to describe a general algorithm to solve the environments and to apply it to unseen test environments. The test environments are specifically chosen to assess different type of generalization. We measure accuracy as the exact match between the final response of LLMs and the winning plan of the test environment. LLMs are evaluated on 5 samples for each test environment. This entire process is repeated for 5 random seeds, each corresponding to different in-context and test examples. 
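A rough outline of this evaluation loop is sketched below. Here `query_model` is a hypothetical stand-in for the multimodal LLM call (game instructions, the ten in-context image and plan pairs, then the test image), and the plan-parsing rules are assumptions; accuracy is the exact match between the parsed final response and the winning plan, averaged over five samples per test environment.

```python
import re
from collections import defaultdict

def normalize_plan(text):
    """Canonicalize a plan such as 'break{wall is stop}, make{door is win}, goto{door}'
    from the model's final response (sketch; parsing rules are assumptions)."""
    steps = re.findall(r'(break|make|goto)\s*[{"\[]\s*([^}"\]]+?)\s*[}"\]]', text.lower())
    return tuple((op, " ".join(arg.split())) for op, arg in steps)

def evaluate(test_envs, in_context_examples, query_model, n_samples=5):
    """Exact-match accuracy of generated plans against the winning plan."""
    scores = defaultdict(list)
    for env in test_envs:
        target = normalize_plan(env["winning_plan"])
        for _ in range(n_samples):
            # `query_model` is a placeholder: instructions + in-context pairs + test image
            response = query_model(in_context_examples, env["image"])
            scores[env["name"]].append(float(normalize_plan(response) == target))
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}
```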
§ RESULTS Our first tests assess the LLMs' ability to extract the most basic rule of the game from in-context examples, namely, go to the winning object, and then apply this rule in novel environments where distractors are present. Complex environments contain not only relevant stimuli but also irrelevant objects or rules; identifying the relevant from irrelevant is a crucial ability that we probe in this set of experiments. Figure <ref> shows the accuracy of the LLMs in five different environments: 1) Environments without a distractor, i.e. new random variations of the environment used during in-context learning. 2) Environments where there are now two objects but one of them is a distractor. In order to win the game, the agent must go to the object specified in the text box with the win rule, e.g. [door is win] requires the agent to go to the door. 3) Environments contain a noun block that is distracting from the active win rule. 4) Environments contain both a distractor object and noun block. 5) Environments contain both a distractor object and a noun block that is part of an active rule. The distractor rule is not relevant for the environment and so should be ignored. For example, the rightmost panel in Figure <ref> shows the distractor rule [door is win] but there is no door object in the environment and so the winning strategy is to follow the other rule [ball is win] and navigate to the ball. Impressively, GPT-4o performs with perfect accuracy on the first four environments, and as a reminder, this is while receiving visual and not textual inputs about the game. Surprisingly, Gemini-1.5-Flash outperforms Gemini-1.5-Pro, with all models showing the same trend downwards in accuracy on the final task that includes both an object and a rule distractor. The sequence of environments used to test the LLMs in Figure 4 includes the same distractors as in Figure 3, but now all the environments include a gray vertical wall that runs down the center of the environment. The environments are always initialized with the rule [wall is stop] inactive, as the three blocks that form this rule are not horizontally aligned, and so the wall has no practical impact on the movement of the agent. However, these environments now all contain the extra distractor blocks that compose the inactive wall, and blocks about the wall rule. The mean accuracy for all three models is lower under this increased distractor load (compare Figures 3 and 4). Compositional generalization has been studied in many contexts <cit.>, for example, if an agent has learned to solve a task with red circles and green keys then it should generalize to red keys and green circles. In the Baba Is AI environment we can not only study these traditional forms of generalization but probe models under scenarios where the very rules of the game must be manipulated and combined. Figure 5 shows one example scenario where the LLMs are shown environments that each highlight three winning strategies and then are asked to solve a new set of environments that require a novel composition of these previously learned rules. In-context: { [ goto{object}; make{rule}, goto{object}; break{rule}, goto{object} ] . Test: break{rule}, make{rule}, goto{object} The accuracy for all three LLMs is low. We have also alternated the four strategies shown in Figure <ref> so a different three are used for in-context training and the remaining is used for testing (not shown), and accuracy remains low. 
These aspects of compositional generalization across rules are particularly unique to the Baba Is AI benchmark, and the poor performance indicates that this benchmark creates meaningful generalization challenges for LLMs. § DISCUSSION In order for agents to have human-like interactions with the world, they should not only be able to interact with objects but also have the capacity to understand and manipulate the rules of their environment. By defining a static set of rules that an agent must follow, many games and benchmarks have overlooked a critical capability: the ability to understand rules via rule manipulation. Therefore, the Baba Is AI benchmark explores compositional generalization under conditions in which agents can modify the rules of the environment. Figure <ref> illustrates some of the further challenges in these environments. All three environments are superficially similar and contain the same objects, yet the winning solutions are different in each case (see text at the top of the figures). For example, the center environment requires the agent to break the [wall is stop] rule, then move the [wall] block to create the rule [wall is win], and finally go to one of the wall blocks to win the game. As a second example, in the environment shown in the rightmost panel of Figure <ref> the rule [wall is stop] is located in the corner of the environment and so there is no way to push these blocks out of alignment and break this rule; the agent is initially trapped in the leftmost room of the environment. The agent must break the currently active rule [baba is you] and create [key is you] in order to control the key on the other side of the wall. Then the agent can use the key to create the rule [door is win] and move to the door. The accuracy on these challenging environments is low as shown in Table <ref>. The errors that LLMs make in solving the Baba Is AI environments are instructive about future opportunities for improvements (see Appendix <ref>). LLMs make grounding mistakes: the LLM refers to an object that does not exist in the environment. LLMs make path planning mistakes: the LLM incorrectly asserts that the path to a specific object is blocked by another object, despite the path being clear in the environment. icml2024 § PROMPT § ERROR CASES
http://arxiv.org/abs/2407.12455v1
20240717100337
Optimizing one dimensional superconducting diodes: Interplay of Rashba spin-orbit coupling and magnetic fields
[ "Sayak Bhowmik", "Dibyendu Samanta", "Ashis K. Nandy", "Arijit Saha", "Sudeep Kumar Ghosh" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.str-el" ]
http://arxiv.org/abs/2407.12754v1
20240717172713
A Mean Field Game approach for pollution regulation of competitive firms
[ "Gianmarco Del Sarto", "Marta Leocata", "Giulia Livieri" ]
q-fin.MF
[ "q-fin.MF" ]
Helical Spin Dynamics in Commensurate Magnets: a Study on Brochantite, Cu_4SO_4(OH)_6 A. Podlesnyak ===================================================================================== § ABSTRACT We develop a model based on mean-field games of competitive firms producing similar goods according to a standard AK model with a depreciation rate of capital generating pollution as a byproduct. Our analysis focuses on the widely-used cap-and-trade pollution regulation. Under this regulation, firms have the flexibility to respond by implementing pollution abatement, reducing output, and participating in emission trading, while a regulator dynamically allocates emission allowances to each firm. The resulting mean-field game is of linear quadratic type and equivalent to a mean-field type control problem, i.e., it is a potential game. We find explicit solutions to this problem through the solutions to differential equations of Riccati type. Further, we investigate the carbon emission equilibrium price that satisfies the market clearing condition and find a specific form of FBSDE of McKean-Vlasov type with common noise. The solution to this equation provides an approximate equilibrium price. Additionally, we demonstrate that the degree of competition is vital in determining the economic consequences of pollution regulation. Key words: Cap-and-trade; Linear Quadratic Problem; Mean Field Games; Market Equilibrium; Social Cost Optimization § INTRODUCTION The problem of excessive firm pollution has long been a part of economic theory, mainly because it imposes a negative externality on society. In particular, it is considered as the consequence of the absence of price on emission, which implies higher volumes than socially optimal levels. Therefore, from an economic point of view, one possibility is to put a price on pollution; in this way, polluters will be more conscious about the social value of their private decisions. One of the most popular measures that help tackle this problem is the emission trading system, also known as the cap-and-trade system, which gives the environmental authority direct control on the overall quantity of emissions and, at the same time, increases the acceptability of environmental policy for covered companies because they can make profit from it. The EU-ETS (European Union Emission Trading Scheme) is, together with the US Sulfur Dioxide Trading System, the most prominent example of an existing cap-and-trade system deployed in practice (e.g., <cit.>). Having made this premise, understanding how the market price of carbon in an emission trading system is formed through the interaction among a large number of (indistinguishable rational competitive) firms is significant. This paper proposes an integrated production-pollution-abatement model in continuous time and studies cap-and-trade under competition via the Mean Field Game (MFG, henceforth) approach. The theorethical model is described in detail in Section <ref>. In particular, we are interested in equilibrium carbon price formation in a cap-and-trade system, i.e., the pricing of carbon endogenously using a model of (indistinguishable rational competitive) firms under the market clearing condition. It is important to make the following point. In the present work, we consider two types of competitions. On the one hand, the competition in polluting firms is because we do not focus on perfect competition or monopoly. 
However, instead, we account for firms' strategic interactions in the output markets by assuming that firms compete á la Cournot. In other words, competing firms are trapped in an equilibrium where each firm's decisions impose not just a pollution externality on society but also a competitive externality on the other firms. On the other hand, there is the type of competition in the continuum limit of an infinity of small players allowed by the MFG framework. Precisely, each player only sees and reacts to the statistical distribution of the other players' states; in turn, their actions determine the evolution of the state distribution. To avoid confusion, we always refer to the former when speaking about competition. MFG models appeared simultaneously and independently in the original works of <cit.> and <cit.>, and are, loosely speaking, limits of symmetric stochastic differential games with a large number of players where each of them interacts with the average behavior of his/her competitors. In particular, an MFG is an equilibrium, called ϵ-Nash equilibrium, that occurs when the strategy employed by a representative agent of a given population is optimal, given the costs imposed by that population. An increasing stream of research has been flourishing since 2007, producing theoretical results and a wide range of applications in many fields, such as economics, finance, crowd dynamics, social sciences in general, and, only recently, in equilibrium price formation (e.g., <cit.> and references therein). We refer to the lecture notes of <cit.> and the two-volume monograph by Carmona and Delarue (<cit.> and <cit.>) for an excellent presentation of the MFG theory from analytic and probabilistic perspective, respectively; in the present paper, we embrace a probabilistic perspective. A related but distinct concept is that of Mean Field Control (MFC), where the goal is to assign a strategy to all the agents at once so that the resulting population behavior is optimal concerning the cost imposed by a central planner. We refer to the excellent book by <cit.> for a comparison between MFGs and MFC. In general, an optimal control for an MFC is not an equilibrium strategy for an MFG. Nevertheless, in many cases, the converse holds, and quantifying the differences between the two approaches is reminiscent of what is known as the price of anarchy, i.e., the added aggregate cost of allowing all players to choose their optimal strategy independently. The MFGs for which this happens are called potential MFGs (e.g., <cit.>). The model that we propose, while conceptually constructed as a MFG equilibrium, can be solved via a reformulation of MFC by using the results in <cit.>. In particular, our model belongs to the class of linear-quadratic MFGs (e.g., <cit.>) with common noise (e.g., <cit.> and <cit.>). The common noise represents an inherent uncertainty in nature affecting simultaneously all the firms participating in the game (or being controlled by a central planner). We characterize the solution both in terms of a stochastic maximum principle (forward-backward system of stochastic differential equations (FBSDEs)) and Riccati equations. In particular, similarly to <cit.>, though their work is inspired by financial applications, when imposing the market clearing condition, we obtain an interesting form of FBSDEs of McKean-Vlasov type with common noise as a limit problem, involving the dependence on a conditional expectation. 
Therefore, the existence of a unique strong solution is proved by using the well-known Peng-Wu's continuation method <cit.>. In addition, we quantify the relation between the finite player game and its large population limit, as well as we show that the solution of the mean-field limit problem actually provides asymptotic market clearing in the large limit. Instead, if the carbon price process is given exogenously, the MFG solutions serve as ϵ-Nash equilibria for the large player game because the game is solved by an optimal control problem <cit.>. The last part of the paper presents a numerical study of the proposed model, which is divided into two parts. In the first part, we analyse the role played by the environmental authority on the average level of production of a (representative firm). In the second part, instead, we analyse the economics of competition. In particular, the representative firm faces a strategic trade-off between output reduction and pollution abatement under competition. The latter facilitates synchronization between the representative firm and the rest of the population in the sense that they agree to reduce output by using the pollution constraint; naturally, this synchronization mechanism is expected to work under a suitable range of constraints imposed by the pollution regulator, the one for which the impact of output reduction on the representative firm's profits dominates the cost of pollution abatement, of trading, and production. Under monopoly, instead, the representative firm can no longer leverage on the competition with the population of firms to implement the previously described synchronization mechanism. Whence, the degree of competition plays a critical role in determining the economic consequences of pollution regulation. In particular, our model captures a rich range of competitive markets – with monopoly and Cournot oligopoly as special cases – and several fundamental elements of pollution generation, abatement levels and costs, and regulation, which can serve as a basis for future research. We proceed as follows. Notation and basic objects are introduced in Section <ref>. In Section <ref>, we provide a precise description of the N-player game, where N denotes the number of firms, together with the definition of ϵ-Nash equilibria. In Section <ref>, the limit dynamics for the N-player game is introduced. The corresponding notion of solution of the MFG is defined and discussed. In Section <ref>, the MFC problem associated with the MFG in Section <ref> is introduced and discussed; we prove the solvability of the FBSDE of McKean-Vlasov type and the asymptotic market clearing condition. Section <ref> provides numerical results. Finally, in Section <ref>, we give concluding remarks, discuss further extensions of the model and future directions of research. Additional results on linear-quadratic MFG and MFC are confined in Appendix <ref> and Appendix <ref>, respectively. § NOTATIONS Because we are going to derive some broad-gauged results in Appendix <ref> and Appendix <ref>, the notation in this section will be quite general. Let d, d_0, d_1, d_2 ∈ℕ, where ℕ is the set of positive integers, which will be the dimensions of the space of private states, common noise values, idiosyncratic noise values and control actions, respectively. The n-dimensional Euclidean space ℝ^n, with n ∈ℕ a generic index, is equipped with the standard Euclidean norm, always indicated by |·|. Moreover, we denote by 𝒮^n the set of all n × n symmetric matrices with real entries. 
In general, we identify the space of all n × m dimensional matrices with real entries with ℝ^n × m. Let N ∈ℕ. Let (Ω^0, ℱ^0, ℙ^0) and (Ω^i, ℱ^i, ℙ^i)_i=1^N be (N+1) complete probability spaces equipped with filtrations (ℱ^i_t), i ∈{0,…,N}. In particular, (ℱ^0_t) is the completion of the filtration generated by the d_0-dimensional Brownian motion (W^0(t)), and, for each i ∈{0,…,N}, (ℱ^i_t) is the complete and right-continuous augmentation of the filtration generated by d_1-dimensional Brownian motions (W^i(t)), as well as a (W^i(t))-independent d-dimensional square-integrable random variables (ξ^i)_i=1^N, which have by assumption the same law. Finally, we introduce the product probability spaces Ω^i = Ω^0×Ω^i, ℱ^i, (ℱ_t^i), ℙ^i, i ∈{1,…,N}, where (ℱ^i, ℙ^i) is the completion of (ℱ^0⊗ℱ^i, ℙ^0⊗ℙ^i) and (ℱ_t^i) is the complete and right-continuous augmentation of (ℱ_t^0⊗ℱ_t^i). In the same way, we define the complete probability space (Ω, ℱ, ℙ) equipped with (ℱ_t) satisfying the usual conditions as a product of (Ω^i, ℱ^i, ℙ^i, (ℱ_t^i))_i=0^N. Let Γ be a closed and convex subset of ℝ^d_2, the set of control actions, or action space. Moreover, given a probability space (Ω, 𝒢, ℙ) and a filtration (𝒢_t) in 𝒢, let: * 𝕃^2(𝒢;ℝ^n) be the set of ℝ^n-valued 𝒢-measurable square-integrable random variables U. * 𝕊^2((𝒢_t);ℝ^n) be the set of ℝ^n-valued (𝒢_t)-adapted continuous processes (U(t)) such that U_𝕊^2:=𝔼[sup_t ∈ [0,T]|U(t)|^2]^1/2<∞. * ℍ^2((𝒢_t);ℝ^n) be the set of ℝ^n-valued (𝒢_t)-progressively measurable processes (U(t)) such that U_ℍ^2:=𝔼[∫_0^T |U(t)|^2 dt]<∞. We denote by ℒ(U) the law of a random variable U, and by U(s)=𝔼[U(s)|ℱ_s^0] the conditional expectation of U(s) given W^0(s). For 𝒮 a Polish space, let 𝒫(𝒮) denote the space of probability measures on ℬ(𝒮), the Borel sets of 𝒮. For s ∈𝒮, let δ_s indicate the Dirac measure concentrated in s. Equip 𝒫(𝒮) with the topology of weak convergence of probability measures. Then 𝒫(𝒮) is again a Polish space. Let d_𝒮 be a metric compatible with the topology of 𝒮 such that (𝒮, d_𝒮) is a complete and separable metric space. Given a complete compatible metric d_𝒮 on 𝒮, we also consider the space of probability measures on ℬ(𝒮) with finite p-moments, with p ≥ 1: 𝒫_p(𝒮) ≐(ν∈𝒫(𝒮) : ∃ s_0 ∈𝒮 : ∫_𝒮d_𝒮(s, s_0)^pν( ds)<∞). In particular, 𝒫_p(𝒮) is a Polish space. A compatible complete metric is given by: d_𝒫_p(𝒮)(ν,ν̃)≐(inf_α∈𝒫(𝒮×𝒮) : [α]_1 = ν and [α]_2 =ν̃∫_𝒮×𝒮d_𝒮(s,s̃)^pα( ds, ds̃))^1/p, where [α]_1 ([α]_2) denotes the first (second) marginal of α; d_𝒫_p(𝒮) is often referred to as the p-Wasserstein (or Vasershtein) metric. § THEORETICAL MODEL This section proposes a stochastic equilibrium model for environmental markets accounting for the design of today's emission system. Our model is an integrated production-pollution-abatement model (e.g., <cit.>) in continuous time, which combines a model of competing producers with a pollution model that includes pollution generation and abatement. Precisely, we consider N ≥ 1, N ∈ℕ, indistinguishable competing, profit-maximizing firms, whose carbon emissions are regulated in a cap-and-trade fashion. Although the regulation of carbon allowances occurs over several periods and allowances can be banked from one period to the other, we follow <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and we focus on a single period of T years at the end of which compliance is assessed. 
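Before introducing the production side of the model, we record a small numerical illustration of the p-Wasserstein distance d_𝒫_p(𝒮) defined in Section <ref> for empirical measures on the real line. This is a minimal sketch for the reader's intuition only (it assumes the numpy and scipy packages and is not used in the sequel); for measures on ℝ the optimal coupling is the monotone (quantile) coupling, which is what the sorted-sample computation below exploits.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two empirical measures on the real line with the same number of atoms.
x = rng.normal(loc=0.0, scale=1.0, size=5000)
y = rng.normal(loc=0.5, scale=1.2, size=5000)

# 1-Wasserstein distance between the two empirical measures.
w1 = wasserstein_distance(x, y)

# On the real line the optimal coupling is the monotone (quantile) coupling,
# so W_p can be computed directly from sorted samples of equal size.
w2 = np.mean(np.abs(np.sort(x) - np.sort(y)) ** 2) ** 0.5

print(f"W_1 = {w1:.4f}, W_2 = {w2:.4f}")
```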
We assume that capital is created according to a standard AK model with a positive depreciation rate of capital and a positive technological level A_k^i; see the term “revenues" in the cost functional in Equation (<ref>). Let K^i(t) be the level of capital at time t of firm i, for i=1,…,N. We assume that the dynamics of (K^i(t)) is described by the following stochastic differential equation (SDE)

dK^i(t) = (κ_f^i K^f,i(t) + κ_g^i K^g,i(t)-δ^i K^i(t)) dt + σ K^i(t) dW^1,i(t), K^i(0)=κ_0,

where κ_f^i, κ_g^i, σ, δ^i are positive constants. K^f,i(t) and K^g,i(t) represent the fossil-fuel based and green-energy based levels of capital used by firm i for capital creation, and δ^i is the depreciation rate of capital. The quantity σ K^i(t) represents the volatility of the level of capital of firm i and depends on K^i(t) itself; (W^1,i(t)) is a standard Brownian motion. Firm i, for i=1, …, N, controls the trend of the level of capital via K^f,i(t) and K^g,i(t), so that the capital level increases at a rate κ_f^i K^f,i(t) + κ_g^i K^g,i(t)-δ^i K^i(t), while the volatility is uncontrolled. We assume that K^f,i(t), K^g,i(t) ∈ℍ^2((ℱ_t^i),ℝ). Moreover, we assume that firm i, for i=1, …, N, faces quadratic costs, say C^i,f(K^f,i) and C^i,g(K^g,i), in the capital levels[Admittedly, we have not found a precise reference for this assumption, but it seems reasonable and necessary to obtain a linear quadratic form for our problem.]. More precisely, we have:

C^i,f(K^f,i) = c_1,1^i K^f,i + c_1,2^i (K^f,i)^2, and C^i,g(K^g,i) = c_2,1^i K^g,i + c_2,2^i (K^g,i)^2.

Firms generate pollution as a byproduct of the production process. Let Ẽ^i denote the pollution, in terms of carbon emissions, generated by firm i prior to any investment in abatement; we refer to Ẽ^i as business-as-usual (BAU) emissions[Henceforth, to ease the reading, we use the term emission instead of carbon emission.]. Clearly, Ẽ^i must be increasing in the capital level, and it seems reasonable to assume that it is a function of K^f,i(t). We follow <cit.>, and we impose a linear relation between the BAU emissions and the latter by assuming:

dẼ^i(t) = κ_e^i K^f,i(t) dt + σ_1^i dW̃^2,i(t), Ẽ^i(0)=E_0,

where κ_e^i>0 characterizes the linear relationship, and

W̃^2,i(t)=√(1-ρ_i^2) W^2,i(t) + ρ_i W^0,1(t), ρ_i ∈ [0,1],

so that the correlation between W̃^2,i(t) and W̃^2,j(t) is r_i j:=ρ_iρ_j. The increments of W^2,i and W^0,1 are independent, and independent from the increments of W^1,i. The noise decomposition captures the fact that the emission of firm i is affected by its own idiosyncratic noise d W^2,i and by the common economic business cycle d W^0,1. A similar model for the BAU emissions is employed in <cit.>. Here, we also assume the presence of a short-term emission shock

d e^i(t) = σ_2^i dW^3,i(t), e^i(0)=e_0,

which may represent, e.g., the outage of a carbon-friendly production unit that is instantaneously replaced by a more polluting one (e.g., <cit.>). Increments of W^3,i are independent from the increments of W^1,i, W^2,i, W^0,1. Therefore, the total emission dynamics is given by

dE^i(t) = dẼ^i(t) + de^i(t) = κ_e^i K^f,i(t) dt + σ_1^i dW̃^2,i(t) + σ_2^i dW^3,i(t),

where E^i(0)=E_0. We now describe our model of pollution abatement. For firm i, i=1, …, N, abatement is described via two complementary notions: (i) the abatement level, and (ii) the abatement cost. Under the abatement effort rate α^i(t) ∈ℍ^2((ℱ_t^i), ℝ), the emissions of firm i become:

dE^i,α(t) = (κ_e^i K^f,i(t)-α^i(t)) dt + σ_1^i dW̃^2,i(t) + σ_2^i dW^3,i(t).
In this way, the firm controls its emission trend, which increases at a rate (κ_e^i K^f,i(t)-α^i(t)); on the other hand, the volatility remains uncontrolled. Notice that, contrary to <cit.>, our model does not assume that pollution is observable and a deterministic function of the level of capital. As regards the abatement costs, the extant literature assumes that pollution abatement costs are increasing and quadratic (see, e.g., <cit.>), or at least convex increasing in the quantity of emission abated, which is α^i(t) in our model (see, e.g., <cit.>). This is because usually the initial units of emissions are easy to abate, but once the low-hanging fruit have been exploited, pollution abatement becomes increasingly difficult (see, e.g., <cit.>). Thus, we assume the following quadratic form for the abatement cost function: C_i(α) = h_i α^i + 1/2 η_i (α^i)^2, h_i, η_i > 0, where the constant η_i is positively correlated with the flexibility of the abatement process and, therefore, with the reversibility of the decision. Before describing the dynamics for the bank account X, we detail the competition mechanism in our model. We assume that firm i, for i=1, …, N, faces linear inverse demand curve p^i(t):=p(K^i(t),K^-i(t)), which can be derived by a suitable quadratic utility function[It is not difficult to see that such a linear demand function can be derived from a quadratic utility function of the following form – for the sake of simplicity we denote by q_i the quantity produced by firm i –: U(q) = a∑_i=1^Nq_i-C_1(∑_i=1^N q_i^2 + C_2 ∑_j=1 j≠ i^N-1 q_i q_j)-q · p, where C_1 = b/2(1-γ(1-1/N)) and C_2 = γ/N(1-γ)+γ and solving the utility maximization problem for a representative consumer; the first-order condition gives Equation (<ref>).], given by p^i(t) = a - b (1-γ)A_k^iK^i(t) - b γ1/N∑_j=1^NA_k^j K^j(t), where a and b are positive constants, and γ∈ [0,1] captures the degree of production substitution, and hence competition, and A_k^i represents the technological level and hence A_k^i K^i(t) is the production function of the firm i. We continue with the following observation: the demand function in Equation (<ref>) comprises a range of competitive markets, in which monopoly and Cournot oligopoly are polar cases. Indeed, it can be written in the following way: p^i(t) = a - b (1 - γ( 1 - 1/N)) A_k^i K^i(t) - b γ1/N∑_j=1 j≠ i^N-1 A_k^j K^j(t) When γ=0 (i.e., monopoly), then p^i(t)= a - b A_k^i K^i(t). When γ = 1 (i.e., Cornout oligopoly), then p^i(t) = a - b γ1/N∑_j=1^N A_k^j K^j(t) and so there is perfect competition. On the other hand, γ∈ (0,1) captures the degree of substitution. In particular, the price p^i(t) is influenced more by the level of production of the corresponding good i with respect to the total quantity produced by all the other firms (firm i excluded); indeed b(1-γ(1-1/N)) > b γ1/N for every γ∈ (0,1), which is a natural (although myopic in the sense of emotions) postulation. Notice that this type of asymmetry is not in contrast with the symmetry required by the MFG framework. We now turn to the description of the dynamics for the bank account. The bank account's dynamics is specified as in <cit.>. We assume that the regulator opens for each firm i, i=1,…,N, at t=0 a bank account X^i and allocates permits, which are represented by the cumulative process A^i. 
The dynamics of the bank account is given by: dX^i_t = β^i(t) dt + dA^i(t) - dE^i,α(t), where β^i(t)∈ℍ^2((ℱ^i_t),ℝ) is the trading rate in the liquid allowance market; emissions, trade, and bank account are measured in tons or in multiples of tons). We assume that A^i has the following dynamics: dA^i(t) = ã^i(t) dt + σ̃_2 dW^0,2(t), A^i(0)=A_0. where (W^0,2(t)) is a standard Brownian motion common to all firms independent from all the other noises involved in the model. The fact that (W^0,2(t)) is independent from (W^0,1(t)) is admittedly a very heavy assumption; the case of correlated common noises is an important subject for future research; see the discussion in Section <ref>. The choice of a dynamic allocation mechanism, instead of a static one, is because of the presence of the common shock in the BAU emissions dynamic, otherwise there would be no benefit from the implementation of a dynamic allocation scheme; see <cit.>. The quantity ã represents the rate. Notice that the sign of A^i can be either negative, meaning that the regulator is placing a penalty on the firm bank account, or positive, meaning that the regulator is giving true permits to the firm. Admittedly, the dynamics for A^i(t) could be more general, by adding, for instance, a pure jump part (see <cit.>, Section 2). Firm i, i=1, …, N, controls the trading rate β^i(t). Now, let (ω_t) be the price of allowances. It can be either exogenous, for example described by the Black-Scholes model, or endogenous, in the sense that it is determined by the fundamental condition of the market. More precisely, the total number of tons of emissions being purchased by a firm via the emission exchange at a given time must be equal to the number of those being sold by others via the emission exchange at the same time. In particular, the balance between the sales and the purchases, called market clearing conditions, must hold at any point in time. The market clearing condition reads as: ∑_i=1^Nβ̂^i(t) = 0, dt ⊗ dℙ-a.e., where β̂_t^i is the trading rate of the ith firm. We will be interested in finding an appropriate price process (ω_t) so that it achieves the market clearing condition among the rational firms. More precisely, as stated in the introduction, we are interested in finding a market equilibrium, which is defined as trading strategies and market price such that each firm has minimized its criteria and the market clears for the market price. We assume that firms are price-takers, an assumption that is in line with the large number of companies regulated under today's emission trading systems[Indeed, more than 5,500 firms are regulated under the EU ETS (e.g., <cit.>)], and minimize their cost functional by finding an optimal trade-off between implementing abatement measures, trading permits in the market, and taking the risk of penalty payments. Another possibility is to model the price as an underlying martingale plus a drift representing a form of permanent price impact (e.g., <cit.>): dω_t = ν̃/N∑_i=1^N c^'(β^i(t)) dt + σ_0 dW_t, which in the particular case c(β)=β^2 corresponds to the influential Almgren-Chriss model (<cit.>); ν̃>0. Notice that in this case, we would obtain a mean-field game of control with common noise. We follow <cit.>, and we state that the process (ω_t) is likely to be given by a ℱ^0-progressively measurable process since the effects from the idiosyncratic parts from many firms are expected to be canceled out. 
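Before turning to the trading costs, we record a minimal Euler–Maruyama sketch of the state dynamics introduced so far (capital, abated emissions, allocation, and bank account) for a single firm with the controls frozen at constant values. The sketch is purely illustrative: the parameter values and the frozen controls below are placeholders, not the calibrated values used later in the numerical section, and the controls are of course not the optimal feedback controls characterized in the next sections.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder parameters (illustration only, not the calibrated values).
T, n_steps = 5.0, 5000
dt = T / n_steps
kappa_f, kappa_g, delta, sigma = 0.3, 0.2, 0.05, 0.1   # capital dynamics
kappa_e, sigma_1, sigma_2, rho = 0.5, 0.2, 0.1, 0.4    # emission dynamics
a_tilde, sigma_tilde_2 = 0.1, 0.05                     # allocation dynamics
K, E, A, X = 1.0, 0.0, 0.0, 0.0                        # initial states
Kf, Kg, alpha, beta = 0.5, 0.5, 0.2, 0.0               # frozen controls (illustration only)

sqdt = np.sqrt(dt)
path = np.empty((n_steps + 1, 4))
path[0] = K, E, A, X
for n in range(n_steps):
    dW1, dW2, dW3, dW01, dW02 = rng.normal(0.0, sqdt, size=5)
    dW2_mix = np.sqrt(1.0 - rho**2) * dW2 + rho * dW01          # idiosyncratic + common shock
    dK = (kappa_f * Kf + kappa_g * Kg - delta * K) * dt + sigma * K * dW1
    dE = (kappa_e * Kf - alpha) * dt + sigma_1 * dW2_mix + sigma_2 * dW3   # abated emissions E^{i,alpha}
    dA = a_tilde * dt + sigma_tilde_2 * dW02                    # allowance allocation
    K, E, A = K + dK, E + dE, A + dA
    X += beta * dt + dA - dE                                    # bank account: trading + allocation - emissions
    path[n + 1] = K, E, A, X

print("terminal (K, E^alpha, A, X):", np.round(path[-1], 3))
```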
Moreover, we assume that if firm i, i=1,…,N, places a (market) order of β^i(t) when the market price is (ω_t), then the cost incurred by the firm is given by β^i(t)ω_t + 1/2ν(β^i(t))^2, where ν>0 is the (constant) market depth parameter, which takes into account a price impact effect as in the original work of <cit.>. We now make the following remark. In our model, we do not enforce constraints on the controls because we give priority to finding explicit solutions to our problem. Indeed, our goal is to analyze the qualitative behaviour of the system, a goal achieved in a satisfactory way in the numerical section. A similar “relaxation" of the problem is also done in, e.g., <cit.> and <cit.>.

Let ℋ_1^N := ℍ^2((ℱ_t^i),ℝ^4), and let ℋ_N^N be the set of all N-dimensional vectors v^N:=(v^N,1,…,v^N,N) such that v^N,i∈ℋ_1^N, with the vector v^N, i(t) defined as v^N, i(t):=(K^f,i(t), K^g,i(t), α^i(t), β^i(t)), for i = 1, …, N. Each element of ℋ_N^N is called a strategy vector. Under the assumption of risk neutrality, firm i, for i = 1, …, N, evaluates a strategy vector v^N ∈ℋ_N^N according to its expected cost (notice that we highlight the dependence on the vector v^N in the term in Equation (<ref>))

𝒥^i(v^N) = 𝔼[∫_0^T ( -p^i(t, v^N)A_k^iK^i(t) (revenues) + β^i(t)ω_t+1/2ν(β^i(t))^2 (cost of trading, Equation (<ref>)) + C_i(α^i(t)) (abatement cost, Equation (<ref>)) + C^i,f(K^f,i(t))+C^i,g(K^g,i(t)) (costs of production, Equation (<ref>)) ) dt + λ(X^i(T))^2 (final penalization)],

where the term λ (X^i(T))^2, with λ>0, is the terminal monetary penalty on the bank accounts set by the regulator, which is a regularized version of the terminal cap penalty function applied in practice, the latter being zero if the firm is compliant and linear otherwise. However, as noted in <cit.>, optimal strategies cannot be found in closed form in that case. Through the previous penalty, the firm pays whether its bank account ends above or below the zero compliance level. In particular, notice that the Brownian motions in the dynamics for A^i and E^i,α are chosen for tractability reasons, because of the additive quadratic penalty λ (X^i(T))^2 in the cost functional (<ref>). Finally, the dynamics for the capital level and the bank account are given by:

dK^i(t) = (κ_f^i K^f,i(t) + κ_g^i K^g,i(t)-δ^i K^i(t)) dt + σ K^i(t) dW^1,i(t), K^i(0)=κ_0.

dX^i(t) = (β^i(t)+ã^i(t)+α^i(t)-κ_e^i K^f,i(t)) dt + σ̃_2 dW^0,2(t)-σ_1^iρ_i dW^0,1(t) -σ_1^i√(1-ρ_i^2) dW^2,i(t)-σ_2^i dW^3,i(t), X^i(0)=X_0.

In addition, see, again, Equation (<ref>), we have:

p^i(t,v^N) = a - b (1-γ)A_k^iK^i(t) - b γ1/N∑_j=1^N A_k^j K^j(t).

In the present paper, we take a non-cooperative game point of view. The aim of each firm i, for i=1,…,N, is to minimize the cost in Equation (<ref>) by controlling the level of capital linked to fossil-fuel and green technologies, the quantity of emissions abated, and the trading rate in the allowance market. In a non-cooperative game setting, we are led to the analysis of a non-zero-sum stochastic game with N players and to the search for ϵ-Nash equilibria. In the next definition, we use the standard notation [v^N,-i, v] to indicate a strategy vector equal to v^N for all firms but the i-th, which deviates by playing v ∈ℋ_1^N instead.

Let ϵ≥ 0. A strategy vector v^N ∈ℋ_N^N is called an ϵ-Nash equilibrium for the N-player game if for every i ∈{1,…,N} and for any deviation v ∈ℋ_1^N we have:

𝒥^i(v^N) ≤𝒥^i([v^N,-i, v]) + ϵ.

Before proceeding, we summarize our notation in Table <ref>.

§ A MEAN FIELD GAME APPROXIMATION WITH COMMON NOISE FOR THE N PLAYER GAME.
In this section, we consider the filtered probability space (Ω, ℱ, ℙ, (ℱ_t)) and d_1=3 Brownian motions (W^j(t)), 1≤ j ≤ 3, which are mutually independent and independent of the completion of the filtration (ℱ_t^0) defined in Section <ref>. In order to find the expression for the MFG approximation, we follow <cit.> and we introduce the type vectors ζ_i = (κ_f^i, κ_g^i, σ_1^i, ρ_i, σ_2^i, δ^i, A_k^i), for i = 1,…,N. As mentioned in the introduction, the finite set of firms becomes a continuum and competes with the rest of the (infinite) population. In particular, the MFG is defined in terms of a representative firm who is assigned a random type vector ζ = (κ_f, κ_g, σ_1, ρ, σ_2, δ, A_k) at time zero, which encodes the distribution of the (continuum of) firms' types. Formally, cf. <cit.>, the type vectors ζ_i induce an empirical measure called the type distribution, which is the probability measure on the type space 𝒵^e:=ℝ×ℝ×ℝ_+× [0,1] ×ℝ_+× [0,1] ×ℝ_+, given by

m_N(A) = 1/N∑_i=1^N 1_A(ζ_i), for Borel sets A ⊂𝒵^e.

We assume now that, as the number of firms becomes large, N→∞, the just introduced empirical measure m_N has a weak limit m, in the sense that ∫_𝒵^eφ dm_N →∫_𝒵^eφ dm for every bounded continuous function φ on 𝒵^e; this holds almost surely if the ζ_i are i.i.d. samples from m. In particular, the probability measure m represents the distribution of the type parameters ζ among the continuum of firms. At this point, let x_0=(k_0, x̃_0) be a random vector which is independent of (ℱ_t^0). The representative firm's level of capital K and bank account X solve

dK(t) =(κ_f K^f(t) + κ_g K^g(t)-δ K(t)) dt +σ K(t) dW^1(t), K(0)=κ_0.

dX(t) =(β(t) + ã(t) + α(t) - κ_e K^f(t)) dt + σ̃_2 dW^0,2(t)-σ_1 ρ dW^0,1(t) -σ_1√(1-ρ^2)dW^2(t)-σ_2 dW^3(t), X(0)=x̃_0,

where v(t):=(K^f(t), K^g(t), α(t), β(t)) belongs to the space ℍ^2((ℱ_t),ℝ^4). Moreover, by setting K̄(t)=𝔼[K(t)|ℱ_t^0], we denote

p^K(t) = a - b(1-γ)A_k K(t) - b γ A_k K̄(t),

where we assume that, for a large number of firms N, the dynamics in Equation (<ref>) is approximated by the expression in the previous equation; we use directly the quantity K̄(t) instead of a generic (ℱ_t^0)-adapted real-valued process since the dynamics of p^i is uncontrolled (see, also, the discussion in <cit.>, Section 3, Page 653). We now consider the following cost functional:

𝒥^NE(v;K) = 𝔼[∫_0^T ( - p^K(t) A_k K(t) + β(t)ω_t+1/2ν(β(t))^2 + C(α(t)) + C^f(K^f(t))+C^g(K^g(t)) ) dt + λ(X(T))^2],

where the superscript NE stands for Nash equilibrium. Equations (<ref>) and (<ref>) represent a MFG of linear quadratic type which fits into the framework studied in <cit.>, Section 3, apart from the presence of terms of order zero in both the private state dynamics X(t):=(K(t),X(t))^T and the running cost functional; see the discussion in Appendix <ref> and Appendix <ref>. For the reader's convenience and to keep the present work as self-contained as possible, Appendix <ref> presents the class of linear quadratic MFGs considered here. In particular, by using the notation in Appendix <ref>, the non-zero matrices characterizing the dynamics of X(t) are the following:

A_0(s) = [ 0; ã(s); ], A = [ -δ 0; 0 0; ], B = [ κ_f κ_g 0 0; -κ_e 0 1 1; ], C_0,2 = [ 0; -σ_1√(1-ρ^2); ], C_0,3 = [ 0; -σ_2; ], C_1 = [ σ 0; 0 0; ], F_0,1 = [ 0; -σ_1ρ; ], F_0,2 = [ 0; σ̃_2; ].

The ones characterizing the cost functional are instead given by:

Q = [ b(1-γ) A_k^2 0; 0 0; ], Q̅ = [ bγ A_k^2/2 0; 0 0; ], R = [ c_1,2 0 0 0; 0 c_2,2 0 0; 0 0 1/2η 0; 0 0 0 1/2ν ].
q = [ - aA_k/2; 0; ] r(s)= [ c_1,1/2; c_2,1/2; h/2; ω(s)/2; ] H=[ 0 0; 0 λ; ] Before proceeding, we make the following remark. In order to satisfy assumptions <ref>, the matrix R has to be a positive-definite matrix, implying that ν must be in (0,∞). In particular, we do not consider directly the case without frictions as done in <cit.>, which, admittedly, could be a common assumption in the carbon market (see, e.g., <cit.>). Let us now assume that (ω_t) ∈ℍ^2((ℱ^0_t);ℝ), with ω_T ∈𝕃^2(ℱ^0_T;ℝ), is given. Then, we have the following characterization (see also <cit.>, Proposition 3.2), which is due to the uniform convexity of the functional 𝒥^NE(v;K) that guarantees the existence of a unique minimizer (see either <cit.>, Lemma 2.2, or <cit.>, Proposition 2.7). In order to not burden the reading, we omit the proof of the subsequent proposition because we will provide in Section <ref>, Proposition <ref>, the proof of the characterization of the associated MFC problem, which follows the same line of argument. Before continuing, let us make the following observation about the notation. The subsequent results involve the processes Z ∈ℍ^2((ℱ_t);ℝ^2 × 3) and Z_0 ∈ℍ^2((ℱ_t);ℝ^2 × 2). To denote the entry (i,j) of Z (resp. of Z_0), we will use the notation Z^(i)_j (resp. Z_0,j^(i)). Let K(t)=𝔼[K(t)|ℱ_t^0], and x_0 = (k_0, x̃_0) be a random vector which is independent from (ℱ^0_t). Then, there exists a unique control v=v(K, x_0) minimizing the functional in Equation (<ref>). Furthermore, let X(s)=(K(s),X(s)) be the corresponding trajectory, i.e., the solution of Equation (<ref>) with control v. Then there exists a unique solution (Y, Z, Z_0)∈𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2 × 3) ×ℍ^2((ℱ_t);ℝ^2 × 2) of the following BSDE, with s ∈ [0,T]: dY^(1)(s)= -(-δ Y^(1)(s) ds+σ Z_1^(1)(s) + b K(s) + b γ/2K(s)-a/2) ds + ∑_j=1^3Z_j^(1)dW^j(s)+ ∑_j=1^2 Z_0,j^(1)dW^0,j(s), Y^(1)(T)=0; dY^(2)(s)= ∑_j=1^3Z_j^(2)dW^j(s)+ ∑_j=1^2 Z_0,j^(2)dW^0,j(s), Y^(2)(T)=λ(X(T))^2 satisfying the coupling condition, with s ∈ [0,T], a.s., K^f(s) = - κ_f/c_1,2 Y^(1)(s) + κ_f/c_1,2 Y^(2)(s) - c_1,1/2 c_1,2, K^g(s) = - κ_g/c_2,2 Y^(1)(s) - c_2,1/2 c_2,2, α(s) =- 2η Y^(2)(s) - η h, β(s) = - 2ν Y^(2)(s) - νω(s). Conversely, suppose (X, v, Y, Z, Z_0) ∈𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^4)×𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2 × 3) ×ℍ^2((ℱ_t);ℝ^2 × 2) is a solution to the forward-backward system (<ref>), (<ref>) and coupling condition (<ref>). Then v is the optimal control minimizing 𝒥^NE(v;K), and X(s) is the optimal trajectory. In particular, v is a mean field Nash equilibrium. Before proceeding, let us comment on the coupling condition in Equation (<ref>), which applies also to the coupling condition in Equation (<ref>) as well as when ω_t is replaced by the corresponding expression for the endogenous price (see Section <ref>). First, from the third equation in Equation (<ref>) we obtain that -2 Y^(2)(s) = α(s)/η + h, i.e., -2 Y^(2)(s) is equal to the marginal abatement cost C^'(α(t)); see Equation (<ref>). Plugging it into the last equation in Equation (<ref>), leads us to β(s) = ν(α(s)/η+h-ω_t). Whence, the firm buys (resp. sells) if its marginal abatement cost is higher (resp. lower) than the market price, in agreement with the economic intuition. Similarly, from the second equation in Equation (<ref>) we obtain that - Y^(1)(s) = 1/2 κ_g(c_2,1 + 2 c_2,2 K^g(s)), i.e., -Y^(1)(s) is proportional to the marginal level of green capital cost (C^g)^'(K^g); see Equation (<ref>). 
Now plugging the previous term into the expression for K^f(s), leads us to: K^f(s) = 1/2 c_1,2(κ_f/k_g(C^g)^'(K^g) - κ_f C^'(α(t))-c_1,1) Whence, the firms decide the fossil fuel level of capital K^f(s) by roughly (see the subsequent discussion) comparing the marginal level of green capital cost (C^g)^'(K^g) with the sum of the marginal abatement cost C^'(α(t)) and the baseline cost c_1,1, again in line with economic intuition. § A MEAN FIELD CONTROL APPROXIMATION WITH COMMON NOISE FOR THE N PLAYER GAME. Interestingly, the MFG in Equations (<ref>)-(<ref>) is equivalent to a[See Remark <ref>.] mean field type control problem (see <cit.>, Proposition 3.3 and Corollary 3.4), whose general formulation and resolution is given in Appendix <ref>. In our case, the state dynamics is as in Equation (<ref>) with the associated matrices as in Equations (<ref>)-(<ref>), and the objective functional 𝒥^LQ_x,t is given by: 𝒥^LQ(v;K) = 𝔼[∫_0^T- p^K(t) K (t) + β(t)ω_t+1/2ν(β(t))^2 + C(α(t)) + C^f(K^f(t))+C^g(K^g(t)) dt + λ(X(T))^2]. Whence, the matrices characterizing the cost functional are in Equations (<ref>)–(<ref>). In particular, the following proposition, analogous to Proposition <ref>, holds true (see Proposition 2.4 in <cit.>). Suppose v is an optimal control minimizing the objective functional 𝒥^LQ(v;K) in Equation (<ref>) with corresponding trajectory X(s) = (K(s), X(s)) solution of Equation (<ref>) with control v. Then there exists a unique solution (Y, Z, Z_0)∈𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2 × 3) ×ℍ^2((ℱ_t);ℝ^2 × 2) of the following BSDE, with s ∈ [0,T] dY^(1)(s)= -(-δ Y^(1)(s) ds+σ Z_1^(1)(s) + b K(s) + b γ/2K(s)-a/2) ds + ∑_j=1^3Z_j^(1)dW^j(s)+ ∑_j=1^2 Z_0,j^(1)dW^0,j(s), Y^(1)(T)=0; dY^(2)(s)= ∑_j=1^3Z_j^(2)dW^j(s)+ ∑_j=1^2 Z_0,j^(2)dW^0,j(s), Y^(2)(T)=λ(X(T))^2 satisfying the coupling condition, with s ∈ [0,T], a.s., K^f(s) = - κ_f/c_1,2 Y^(1)(s) + κ_f/c_1,2 Y^(2)(s) - c_11/2 c_1,2, K^g(s) = - κ_g/c_2,2 Y^(1)(s) - c_2,1/2 c_2,2, α(s) =- 2η Y^(2)(s) - η h, β(s) = - 2ν Y^(2)(s) - νω(s). Conversely, suppose (X, v, Y, Z, Z_0) ∈𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^4)×𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2 × 3) ×ℍ^2((ℱ_t);ℝ^2 × 2) is a solution to the forward-backward system (<ref>), (<ref>) and coupling condition (<ref>). Then v is the optimal control minimizing 𝒥^LQ(v;K), and X(s) is the optimal trajectory. The proof follows the steps of Theorem 1.59 in <cit.>. First, let us write the MFC problem in Equations (<ref>) and (<ref>) in matrix notation as in Appendix <ref>; matrices are given in (<ref>)-(<ref>) and (<ref>)-(<ref>). 𝒥^LQ_t,x(v; X) = 𝔼[∫_t^T⟨ Q X(s), X(s)⟩+⟨Q X(s), X(s)⟩ +⟨ R v(s), v(s)⟩ + 2⟨ q, X(s)⟩ + ⟨ r(s), v(s)⟩ ds + ⟨ H, X(T), X(T)⟩]. dX(s) = (A_0(s) + B v(s)) ds+AX(s) ds + C_0,2 dW^2(s) +C_0,3 dW^3(s) + C_1 X(s) dW^1(s) +F_0,1 dW^0,1(s) + F_0,2 dW^0,2(s). In the previous equations, as usual, X̅(s) denotes the conditional expectation of X(s) given ℱ_s^0. Second, let (v, X) be the optimal pair for the MFC problem, v^h the control v^h = v + h ṽ, and X^h the trajectory associated to v^h. Then, let us define the so-called variation process (V(s)) as the solution of the following SDE: dV(s) =AV(s) ds+ B v(s) ds + C_1 V(s) dW^1(s), s ∈ [0,T] By repeating the computations in <cit.>, Lemma 6.10, we have that the following limit holds true lim_ϵ→ 0𝔼[sup_s ∈ [0,T]|X^h(s)-X(s)/ϵ-V(s)|^2]=0. 
We now observe that 𝒥^LQ_t,x(v^h); X) =𝔼[∫_t^T[ ⟨ Q X^h(s), X^h(s)⟩+⟨Q X^h(s),X^h(s)⟩+⟨ R v^h(s),v^h(s)⟩ +2⟨ q ,X^h(s)⟩+2⟨ r(s), v^h(s)⟩] ds + ⟨ H X^h(T), X^h(T)⟩], from which the Gateaux derivative of 𝒥^LQ_t,x(v^h; X) in the direction h reads as d/dh𝒥^LQ_t,x(v^h; X) =2𝔼[∫_t^T⟨ QX^h,V⟩+⟨Q̅X̅^h,V̅⟩+⟨ Rv^h,ṽ⟩+⟨ q,V⟩+⟨ r,ṽ⟩ ds +⟨ HX^h_T,V_T⟩]. Whence, the optimal condition for (v, X) d/dh𝒥^LQ_t,x(v^h; X)|_h=0=0 is 𝔼[∫_t^T⟨ QX^h,V⟩+⟨Q̅X̅^h,V̅⟩+⟨ Rv^h,ṽ⟩+⟨ q,V⟩+⟨ r,ṽ⟩ ds +⟨ HX^h_T,V_T⟩]=0. By <cit.>, we know that the BSDE in Equation (<ref>) admits a unique solution. Then, by applying Ito's formula to the process (⟨ Y(t), V(t)⟩), ⟨ Y(t), V(t)⟩ =∫_t^T-(⟨ A^T Y(s), V(s) ⟩+⟨ C_1^T Z_1(s), V(s)⟩ + ⟨ Q X(s), V(s)⟩) ds -∫_t^T⟨(Q X(s), V(s)⟩+⟨ q, V(s)⟩) ds +∫_t^T ⟨ B^T Y(s), ṽ(s)⟩ + ⟨ C_1^T Z_1(s), V(s)⟩ ds +∫_t^T ⟨ AV(s),Y(s)⟩ ds+Martingale. By taking the expectation on both sides, by recalling that 𝔼[⟨ Q X(s), V(s)⟩] = 𝔼[⟨ Q X(s), V(s)⟩] and Y(T) =H X(T), we have 𝔼[ ∫_t^T(⟨ Q X(s), V(s)⟩ + ⟨QX(s), V(s)⟩+ ⟨ q, V(s)⟩)-⟨ B^T Y(s), ṽ(s)⟩ ds + ⟨ H X(T), V(T)⟩]=0. By using the optimally condition in Equation (<ref>) and the arbitrariness of ṽ we obtain the coupling condition (<ref>). To prove the converse, it is sufficient to observe that the functional 𝒥^LQ_t,x(v; X) is strictly convex (see, again, <cit.>, Lemma 2.2, or <cit.>, Proposition 2.7). Then, given a solution (X,v,Y,Z,Z_0) to the forward-backward system (<ref>), (<ref>), we know that the Gateaux derivative of 𝒥^LQ_t,x(v; X) at v is zero, which means that v is a minimizer. Before proceeding, we make the following remark. The MFC problem in Equations (<ref>) and (<ref>) is not the MFC problem version of (<ref>) and (<ref>). Indeed, the latter would have the following objective functional 𝒥^LQ_t,x(v; X) = 𝔼[∫_t^T⟨ Q X(s), X(s)⟩+2⟨Q X(s), X(s)⟩ +⟨ R v(s), v(s)⟩ + 2⟨ q, X(s)⟩ + ⟨ r(s), v(s)⟩ ds + ⟨ H, X(T), X(T)⟩]. Nicely, by using the derivations in Appendix <ref>, we can explicitly write down the solution of problem (<ref>)–(<ref>) in terms of the following system of Riccati equations, where matrices C_1, Q, B, R, H and vectors r(t), q, A_0(t) are defined in (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). ·P(t) + C_1^T P(t) C_1 + Q - P(t) B R^-1 B^T P(t) +P(t)A+A^TP(t)= 0; P(T) = H. ·Π(t) + C_1^T P(t) C_1 + (Q + Q̅) - Π(t) B R^-1 B^T Π(t)+Π(t) A+A^TΠ(t) = 0; Π(T) = H. ·ϕ(t) - Π(t) B R^-1 r(t) + q +Π(t) A_0(t) - Π(t) B R^-1 B^T ϕ(t) +A^Tϕ(t)=0; ϕ(T)=0. In particular, Theorem 2.6 in <cit.> ensures that the previous Equations (<ref>)–(<ref>) admit a unique solution, where P, Π are two deterministic processes in 𝒮^2, whereas ϕ is a deterministic process in ℝ^2. In addition, both the optimal control and the associated optimal trajectory for the problem (<ref>) and (<ref>) are expressed in terms of the solutions P, Π and ϕ, t ∈ [0,T]: v(t) = - R^-1 B^T P(t) (X(t)-X(t)) - R^-1 (B^T Π(t) X(t) + r(t) + B^T ϕ(t)). dX(t) =(A_0(s)-B R^-1 B^T P(t) (X(t) - X(t))) ds - B R^-1 (B^T Π(t) X(t) + B^T ϕ(t) + r(t))) ds + C_1 X(s) dW^1(s) + C_0,2 dW^2(s) + C_0,3 dW^3(s) +F_0,1dW^0,1(s)+F_0,2dW^0,2(s) Proposition <ref> ensures that there exists a unique adapted solution of the forward-backward system (<ref>), (<ref>), and therefore such a solution coincides with the solution constructed with the Riccati Equations (<ref>)–(<ref>). We make now the following observation regarding the so-called Price of Anarchy (PoA, henceforth). 
[PoA] The objective functional 𝒥^NE(v;K) (Equation (<ref>)) and 𝒥^LQ(v;K) (Equation (<ref>)) are not precisely the same, even though v solves both a fixed point Nash equilibrium and a mean field type control problem. The difference between 𝒥^NE(v;K) and 𝒥^LQ(v;K) is called PoA since it represents the added aggregate cost of allowing all players to choose their optimal strategy independently. 𝒥^NE(v;K)-𝒥^LQ(v;K) = b γ/2𝔼[∫_t^T(𝔼[K(s)|ℱ^0_s])^2 ds] It is strictly positive as soon as the conditional expectation of the equilibrium capital level is different from zero, and strictly increasing in γ, reflecting the fact that greater competition yields a greater cost of non-cooperation. § MARKET CLEARING CONDITION AND EQUILIBRIUM PRICE. We start this section with the definition of market equilibrium for the finite player game. For the N player game a market equilibrium is, for every t ∈ [0,T], a N-dimensional vector β^⋆, N(t)=(β^⋆, 1(t), …, β^⋆,N(t)) such that: (1) each β^⋆,i(t) ∈ℋ_1^N; (2) β^⋆,i(t) is the i^th component of the ϵ-Nash equilibrum for the N-player game (see Definition <ref>), and (3) the asymptotic market clearing condition lim_N →∞∑_i=1^Nβ^⋆,i(t) = 0 holds true. At this point, we observe that because our MFG is equivalent to an optimal control problem, the mean field Nash equilibrium is an ϵ-Nash equilibrium for the N-player game, in the sense of Definition <ref>; see Theorem 3.6 in <cit.>. By using Proposition <ref>, the optimal trading rate β^⋆,i(t) of each firm is given by: β^⋆,i(t) = -2 ν Y^(2),i(t)-νω(t), t ∈ [0,T]. Because we model the trading mechanism as part of the firms' decision problem, the equilibrium (market-clearing) price of emission allowances emerges endogenously. In the present situation, Equation (<ref>) is equivalent to the following condition: 1/N∑_i=1^Nβ^⋆,i(t) = 1/N∑_i=1^N(-2 ν Y^(2),i(t)-νω(t))=0, from which we have that ω(t) = - 2/N∑_i=1^NY^(2),i(t). The previous solution is of course inconsistent with our standing assumption that the price process (ω_t) is a (ℱ^0)-adapted process. However, we can argue as in <cit.>, Page 267, and expect that in the large-N limit, the market price of allowances may be given by ω_t=-2𝔼[Y^(2)(t)|ℱ^0_t]. Therefore, we need to consider a different BSDE system with respect to the one in Proposition <ref>. More precisely, the coupling condition for β(s) in Equation (<ref>) is now given by: β(s) = - 2 ν Y^(2)(s) + 2 ν𝔼[Y^(2)(s)|ℱ_t^0], which leads to the following matrix-based representation of the coupling condition v(t) = - R^-1 (B^T Y(t) + r̃ + D Y(s)), t ∈ [0,T], a.s. where r̃ = [ c_1,1/2; c_2,1/2; h/2; 0 ], D = [ 0 0; 0 0; 0 0; 0 -1/2 ]. Equations (<ref>) and (<ref>), instead, remain unchanged; notice that now v(t) is as in Equation (<ref>). Their matrix-based representation is given by the following equations: dX(s) = (A_0(s) +AX(s)+B v(s)) ds + C_0,2 dW^2(s) +C_0,3 dW^3(s) + C_1 X(s) dW^1(s) +F_0,1 dW^0,1(s) + F_0,2 dW^0,2(s), X(0) = x_0. dY(s) =- (A^TY(s)+C_1^T Z_1 (s) + Q X(s) + Q X(s) + q) ds +∑_j=1^3Z_j(s) dW^j(s)+ ∑_j=1^2 Z_0,j(s) dW^0,j(s), Y(T) = H X(T). We now state and prove the following short-term existence result. There exists some constant τ>0 which depends only on the matrices A_0(s), B, C_0,2, C_0,3, C_1, F_0,1, F_0,2, Q, Q, q such that for any T ≤τ, there exists a unique strong solution (X, Y, Z, Z_0) ∈𝕊^2((ℱ_t);ℝ^2) ×𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2 × 3) ×ℍ^2((ℱ_t);ℝ^2 × 2) to the FBSDE (<ref>)–(<ref>). The proof is an adaption of the arguments used in <cit.>, Theorem 4.24. 
The main difference is that there exists a term involving 𝔼[Y^(2)(s)|ℱ_t^0] via the coupling condition in (<ref>). Let Φ be the map constructed in the following way. For any element (X,Y) ∈𝕊^2((ℱ_t);ℝ^2), let (Y, Z) be the solution of the following BSDE, where s ∈ (0,T]: dY(s) = -(AY(s)+C_1^T Z_1(s) + Q X(s) + Q X(s) + q) ds +∑_j=1^3Z_j(s)dW^j(s)+∑_j=1^2Z_0,j(s)dW^0,j(s) Y(T) = H X(T) Notice that X ∈𝕊^2((ℱ_t);ℝ^2) and the pair (Y,Z) ∈𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2× 3) are progressively measurable with respect the completion of the filtration generated by (W(s)-W(t))_s ∈ [t,T] and (W^0(s)-W^0(t))_s ∈ [t,T]. Then, we associate to the couple (Y,Z) the solution (X(s)) of the following SDE, where s ∈ (0,T] dX(s) =(A_0(s)+AX(s)-B R^-1(B^T Y(s) + r̃ + D Y(s)) ds +C_0,2dW^2(s)+C_0,3dW^3(s)+C_1 X(s) dW^1(s) +F_0,1dW^0,1(s)+F_0,2dW^0,2(s), X(0) = x_0. The map Φ is given by Φ : X → (Y,Z) →X. The aim is to show that Φ is a contraction for small T. To this end, let X^1 and X^2 ∈𝕊^2((ℱ_t);ℝ^2) and denote by (Y^1, Z^1) and (Y^2,Z^2) be the associated solution of the BSDE in (<ref>). In addition, set X^1 = Φ(X^1) and X^2 = Φ(X^2). The fact that Φ is a contraction follows from the following standard estimates for SDEs and BSDEs: 𝔼[sup_s ∈ [0,T]|X^1(s)-X^2(s)|]+𝔼[sup_s ∈ [0,T]|X^1(s)-X^2(s)|] ≤ C T 𝔼[sup_s ∈ [0,T]|Y^1(s)-Y^2(s)|^2+sup_s ∈ [0,T]|Y^1(s)-Y^2(s)|^2 +∑_j=1^3∫_0^T|Z_j^1(s)-Z_j^2(s)| ds+∑_j=1^2∫_0^T|Z_0,j^1(s)-Z_0,j^2(s)| ds] ≤ C T ( 𝔼[sup_s ∈ [0,T]|X^1(s)-X^2(s)|^2] + 𝔼[sup_s ∈ [0,T]|X^1(s)-X^2(s)|^2]) Before proceeding, we make the following observation. Should the solution of the FBSDE (<ref>)–(<ref>) be linked to that of some MFC problem, a term of the form D v̅(s) would be present in the state dynamics because of the presence of a term like D 𝔼[Y(t)|ℱ_t^0] in the coupling condition. However, this is not the case for our FBSDE. The next theorem gives us the unique existence of solutions to the FBSDE (<ref>)–(<ref>) for general T. Under the assumption that (κ_f^2/c_1,2 + κ_g^2/c_2,2-κ_fκ_e/c_1,2)>0, (2η+ν+κ_e^2/c_1,2-κ_fκ_e/c_1,2)>0, there exists a unique strong solution (X, Y, Z, Z_0) ∈𝕊^2((ℱ_t);ℝ^2) ×𝕊^2((ℱ_t);ℝ^2) ×ℍ^2((ℱ_t);ℝ^2 × 3) ×ℍ^2((ℱ_t);ℝ^2 × 2) to the FBSDE (<ref>)–(<ref>). The proof hinges on the continuation method of <cit.> and reduces to verify that assumption (H2.1) in the previous paper holds true for our system, in expectation. Notice that in our case their 2× 2 full-rank matrix G is the identity matrix. In order to facilitate the comparison, we rewrite the FBSDE (<ref>)–(<ref>) in terms of the following functional b, f, σ, σ_0, Φ and of a vector θ where all the static parameters are collected b(s,Y(s),X(s),ã(s),θ):=(A_0(s)+AX(s)-B R^-1(B^TY(s)+r̃+DY(s)) f(s,Z_1(s),Y(s),X(s),θ):=(A^TY(s)+C_1^T Z_1(s)+Q X(s) + QX(s) + q) Φ(X(T)):=H X(T), σ(X^(1)(s), θ):= [ σ X^(1)(s) 0 0; 0 -σ_1√(1-ρ^2) -σ_2 ] σ_0(θ):= [ 0 0; -σ_1ρ σ̃_2; ] θ = (κ_f, κ_g, c_1,2, c_2,2, η, ν, c_1,1, c_2,1, h, σ_1, ρ, σ_2, σ, σ̃_2, b, γ, a, λ) as dX(s) =b(s,Y(s),X(s),ã(s),θ) ds + σ(X^(1)(s), θ)dW(s) + σ_0(θ)dW^0(s). dY(s) =-f(s,Z_1(s),Y(s),X(s),θ) ds + Z(s)dW(s) + Z_0(s)dW^0(s), where X(0)=x_0 and Y(T) = Φ(X(T)). We use the following notation: u = [ x; y; z; z_0 ], A(s,u) = [ - f; b; σ; σ_0; ](s,u). Besides, for all pairs (x, y, z, z_0), (x^', y^', z^', z^'_0) ∈𝕃^2(ℱ;ℝ^2 ×ℝ^2 ×ℝ^2× 3×ℝ^2× 2), we denote by x̂=x-x^', ŷ=y-y^', ẑ=z-z^', and ẑ_0=z_0-z_0^'. 
We have: ⟨ -f(s,z_1,y, x, θ) - (-f(s, z_1^', y',x^', θ)), x-x^'⟩ = - σẑ_1^(1)x̂^(1) - b(1-γ)A_k^2 (x̂^(1))^2 - b γ A_k^2/2x̂^(1)x̂^(1)+δŷ^(1)x̂^(1) ⟨ b(s, y, ã, θ)-b(s, y^',ã, θ), y-y^'⟩ =-(κ_f^2/c_1,2 + κ_g^2/c_2,2)(ŷ^(1))^2 + 2 κ_fκ_e/c_1,2ŷ^(1)ŷ^(2)-(2(η+ν)+κ_e^2/c_1,2)(ŷ^(2))^2 + νŷ^(2)ŷ^(2)-δx̂^(1)ŷ^(1) and ⟨σ (x,θ)-σ(x',θ),z⟩= σx̂^(1)ẑ_1^(1); notice that σ_0 does not lead to any contribution since it is state-independent. By combining (<ref>), (<ref>), (<ref>), we get ⟨ A(s, u)- A(s, u'),u-u'⟩= - b(1-γ)A_k^2 (x̂^(1))^2 - b γ A_k^2/2x̂^(1)x̂^(1)-(κ_f^2/c_1,2 + κ_g^2/c_2,2)(ŷ^(1))^2 + 2 κ_fκ_e/c_1,2ŷ^(1)ŷ^(2) -(2(η+ν)+κ_e^2/c_1,2)(ŷ^(2))^2 + νŷ^(2)ŷ^(2). Now, by taking the expectation, by the law of total expectation and Jensen inequality: 𝔼[⟨ A(s, u)- A(s, u'),u-u'⟩]≤ - b(1-γ)A_k^2 𝔼[(x̂^(1))^2]-bγ A_k^2/2𝔼[(x̂^(1))^2] - (κ_f^2/c_1,2 + κ_g^2/c_2,2-κ_fκ_e/c_1,2)𝔼[(ŷ^(1))^2] -(2η+ν+κ_e^2/c_1,2-κ_fκ_e/c_1,2) 𝔼[(ŷ^(2))^2] ≤ - (κ_f^2/c_1,2 + κ_g^2/c_2,2-κ_fκ_e/c_1,2)𝔼[(ŷ^(1))^2] -(2η+ν+κ_e^2/c_1,2-κ_fκ_e/c_1,2) 𝔼[(ŷ^(2))^2] ≤ - min((κ_f^2/c_1,2 + κ_g^2/c_2,2-κ_fκ_e/c_1,2),(2η+ν+κ_e^2/c_1,2-κ_fκ_e/c_1,2))𝔼[|ŷ|^2] In addition, we have: ⟨Φ(x_T)-Φ(x_T^'), x̂⟩ = λ (x̂^(1))^2 ≥ 0. In particular, the monotone conditions in (H2.3) in <cit.> hold with β_1 = 0 and β_2 = min((κ_f^2/c_1,2 + κ_g^2/c_2,2-κ_fκ_e/c_1,2),(2η+ν+κ_e^2/c_1,2-κ_fκ_e/c_1,2)). Now, the continuation method hinges on the following steps. First, one has to introduce a family of FBSDE indexed by a parameter ϱ∈ [0,1], dx_t^ϱ = [-(1-ϱ)β_2(y_t^ϱ)+ϱ b(t, u_t^ϱ, ã(t),θ)+ϕ_t] dt +[ ϱσ(t, u_t) + ψ_t]dW(t)+σ_0(θ) dW^0(t) dy_t^ϱ = - [ ϱ f(t, u_t^ϱ)+γ_t] dt + z_t^ϱdW(t) + z_0,t^ϱ dW^0(t), x_0^ϱ=x_0, y_T^ϱ = ϱΦ(x_T^ϱ) + (1-ϱ) x_T^ϱ + ξ, where ϕ, ψ and γ are given processes in ℍ^2((ℱ_t);ℝ^2) and ℍ^2((ℱ_t);ℝ^2× 3), respectively, and ξ∈𝕃^2(ℱ_T;ℝ^2). Notice that the previous system is constructed in such a way that for ρ=0 its solution is straightforward, whereas for ρ=1 it implies the existence of a unique strong solution to the FBSDE (<ref>)–(<ref>). In particular, the monotone conditions above allow to extend the existence from ϱ=0 to ϱ=1. The proof follows directly from the one of Theorem 2.2 in <cit.>, and it is therefore omitted. It is important to note that the existence of a unique strong solution to the FBSDE (<ref>)–(<ref>) holds for every level of competition γ∈ [0,1]. The solution (X, Y, Z^0, Z) to the FBSDE (<ref>)–(<ref>) satisfies the following estimate 𝔼[sup_t ∈ [0,T]|X(t)|^2+sup_t ∈ [0,T] |Y(t)|^2+∑_j=1^3∫_0^T|Z_j(t)|^2 dt + ∑_j=1^2∫_0^T|Z_0,j(t)|^2 dt] ≤ C, where C is a constant depending only on T and on the matrices of the system A_0(s), B, C_0,2,C_0,3, C_1, F_0,1, F_0,2, Q, Q, q. By applying Ito's formula to |Y(t)|^2 we obtain: 𝔼[|Y(t)|^2] + 𝔼[∑_j=1^3∫_t^T|Z_j(s)|^2 ds] + 𝔼[∑_j=1^2∫_t^T|Z_0,j(s)|^2 ds] ≤𝔼[H |X(T)|^2] + ϵ 𝔼[∫_t^T|Z_1(s)|^2 ds] + C 𝔼[∫_t^T|Y(s)|^2 ds+∫_t^T|X(s)|^2 ds+∫_t^T|X(s)|^2 ds]. Then, by choosing ϵ>0 small, say ϵ<1, and applying Gronwall's inequality, we obtain 𝔼[|Y(t)|^2] + 𝔼[∑_j=1^3∫_t^T|Z_j(s)|^2 ds] + 𝔼[∑_j=1^2∫_t^T|Z_0,j(s)|^2 ds] ≤𝔼[H |X(T)|^2] + C 𝔼[∫_t^T|X(s)|^2 ds+∫_t^T|X(s)|^2 ds]. 
Now, by using again Ito's formula, a simple application of the Burkholder-Davis-Gundy inequality, Young's inequality, and the triangular inequality gives, for new constants C_H, C_Q, C_Q̅, C>0:

𝔼[sup_t ∈ [0,T]|Y(t)|^2 + ∑_j=1^3∫_0^T|Z_j(s)|^2 ds + ∑_j=1^2∫_0^T|Z_0,j(s)|^2 ds] ≤ C_H 𝔼[|X(T)|^2+∫_0^T|Y(s)|^2 ds] + 𝔼[ϵ∫_0^T|Z(s)|^2 ds+C_Q∫_0^T|X(s)|^2 ds+C_Q̅∫_0^T|X̄(s)|^2 ds] +ϵ_1𝔼[sup_t ∈ [0,T]|Y(t)|^2] + C 𝔼[∑_j=1^3∫_0^T|Z_j(s)|^2 ds] +ϵ_2𝔼[sup_t ∈ [0,T]|Y(t)|^2] + C 𝔼[∑_j=1^2∫_0^T|Z_0,j(s)|^2 ds].

Choosing now ϵ_1, ϵ_2 such that ϵ_1+ϵ_2<1 and using Equation (<ref>), we get for a new constant C>0:

𝔼[sup_t ∈ [0,T]|Y(t)|^2 + ∑_j=1^3∫_0^T|Z_j(s)|^2 ds + ∑_j=1^2∫_0^T|Z_0,j(s)|^2 ds] ≤ C𝔼[|X(T)|^2+∫_0^T|Y(s)|^2 ds+∫_0^T|X(s)|^2 ds +∫_0^T|X̄(s)|^2 ds].

Analogous computations can be performed on (Y̅(t)), which lead, for a new constant C>0, to:

𝔼[sup_t ∈ [0,T]|Y̅(t)|^2 + ∑_j=1^3∫_0^T|Z_j(s)|^2 ds + ∑_j=1^2∫_0^T|Z_0,j(s)|^2 ds] ≤ C𝔼[|X̄(T)|^2+∫_0^T|X(s)|^2 ds+∫_0^T|X̄(s)|^2 ds] +C𝔼[∫_0^T|X(s)|^2 ds+∫_0^T|Y(s)|^2 ds].

By combining Equation (<ref>) and (<ref>), for a new constant C_ϵ>0, we get:

𝔼[sup_t ∈ [0,T]|Y(t)|^2+sup_t ∈ [0,T]|Y̅(t)|^2+C_ϵ∑_j=1^3∫_0^T|Z_j(s)|^2 ds+C_ϵ∑_j=1^2∫_0^T|Z_0,j(s)|^2 ds] ≤ C𝔼[|X(T)|^2 + |X̄(T)|^2 ] + C 𝔼[∫_0^T|X(s)|^2 ds+∫_0^T|X̄(s)|^2 ds] +C 𝔼[∫_0^T|Y(s)|^2 ds + ∫_0^T|Y̅(s)|^2 ds ],

where C_ϵ:=(1-Cϵ). On the other hand, the standard estimates for SDEs give, for a new constant C>0,

𝔼[sup_t ∈ [0,T]|X(t)|^2]+𝔼[sup_t ∈ [0,T]|X̄(t)|^2] ≤ |X(0)|^2+𝔼[∫_0^T|Y(s)|^2 ds+∫_0^T|Y̅(s)|^2 ds]+C.

Combining the inequalities (<ref>) and (<ref>) and a simple application of the Burkholder-Davis-Gundy inequality establishes the claim.

We are now ready to investigate whether our FBSDE (<ref>)–(<ref>) actually provides an approximation of the market price and, if so, how accurate it is. In particular, if we use (- 2 𝔼[Y^(2)(t)|ℱ_t^0]) as the input (ω_t), where (Y^(2)(t)) is the unique solution to the FBSDE (<ref>)–(<ref>), then by Theorem 3.6 in <cit.> the optimal strategy for the individual firm is given by

β^⋆,i(t):=-2 ν Y^(2),i(t) + 2 ν𝔼[Y^(2)(t)|ℱ_t^0],

where (Y^(2),i) is the (second component of the) solution to (<ref>) and (<ref>) with (ω_t = - 2 𝔼[Y^(2)(t)|ℱ_t^0]) and W^1≡ W^1,i and W^3≡ W^3,i. The next theorem shows that the market clearing condition in the large-N limit, i.e.,

lim_N →∞1/N∑_i=1^Nβ^⋆,i(t) = 0, dt ⊗ dℙ-a.s.,

holds.

Let T>0 and (β^⋆,i(t)) be defined as in Equation (<ref>). Then

lim_N →∞𝔼[|1/N∑_i=1^Nβ^⋆,i(t)|^2] = 0.

Moreover, there exists some constant C independent of N such that:

𝔼[|1/N∑_i=1^Nβ^⋆,i(t)|^2] ≤C/N.

The proof is similar to the one of Lemma 5.1 in <cit.>. Because (Y^(2),i(t))_i=1^N are conditionally i.i.d. given (ℱ^0_t), the tower property of the conditional expectation yields

𝔼[|1/N∑_i=1^Nβ^⋆,i(t)|^2] = 𝔼[|1/N∑_i=1^N(-2 ν Y^(2),i(t) + 2 ν𝔼[Y^(2)(t)|ℱ_t^0])|^2] ≤4 ν^2/N^2∑_i=1^N𝔼[|Y^(2),i(t)-𝔼[Y^(2)(t)|ℱ_t^0]|^2].

Since sup_t ∈ [0,T]𝔼[|Y^(2),i(t)-𝔼[Y^(2)(t)|ℱ_t^0]|^2] ≤ 2 sup_t ∈ [0,T]𝔼[|Y^(2),1(t)|^2], the conclusion follows from the estimates in Corollary <ref>.

Finally, we conclude this section by providing explicit solutions to the FBSDE (<ref>)–(<ref>) in terms of the following system of Riccati equations; the proof of its derivation follows the same line of argument as in Appendix <ref> and it is, therefore, omitted. In particular, Theorem <ref> guarantees that solutions to (<ref>)–(<ref>) are uniquely determined in terms of these Riccati equations.

Ṗ(t) + C_1^T P(t) C_1 + Q - P(t) B R^-1 B^T P(t) + P(t)A + A^TP(t) = 0; P(T) = H.
Π̇(t) + C_1^T P(t) C_1 + (Q + Q̅) - Π(t) B R^-1(B^T+D)Π(t) + Π(t) A + A^TΠ(t) = 0; Π(T) = H.

ϕ̇(t) - Π(t) B R^-1r̃ + q + Π(t) A_0(t) - Π(t) B R^-1 (B^T+D) ϕ(t) + A^Tϕ(t) = 0; ϕ(T)=0.

Matrices C_1, Q, Q̅, B, R, H, q, A_0(t) are as in Section <ref>, whereas r̃ and D are defined in Equation (<ref>). In particular, the optimal trajectory (X(t)) solves the following equation:

dX(t) =(A_0(t)+AX(t)-B R^-1 B^T P(t) (X(t) - X̄(t))) dt - B R^-1 ((B^T + D) Π(t) X̄(t) + (B^T+D) ϕ(t) + r̃) dt + C_1 X(t) dW^1(t) + C_0,2 dW^2(t) + C_0,3 dW^3(t) +F_0,1dW^0,1(t)+F_0,2dW^0,2(t), X(0)=x_0.

In addition, the equilibrium price is given by

ω_t = - 2 ((Π(t))_2,1K̄(t)+(Π(t))_2,2X̄(t))-2(ϕ(t))_2,

where (Π(t))_ℓ,m denotes the entry (ℓ,m) of the matrix Π(t), (ϕ(t))_2 the second component of the vector ϕ(t), and K̄(t), X̄(t) the conditional expectations, given ℱ_t^0, of the capital level and of the bank account, t ∈ [0,T].

§ NUMERICAL ILLUSTRATION

We illustrate here the firm's behavior in the policy scheme described in the previous sections. We consider the objective of reducing carbon emissions over T = 5 years. The solutions of the Riccati equations (<ref>)–(<ref>) are computed by using the MATLAB numerical integrator ode45 with a temporal resolution of Δ t = 10^-3; the same resolution is employed to simulate the SDE (<ref>) via the Euler-Maruyama method. All the expectations below are computed via the classical Monte Carlo method using 5 · 10^3 trajectories. Table <ref> reports the employed numerical values of the parameters along with a synthetic yet exhaustive description in the caption.

§.§ Cap-and-trade system: the role of the regulator.

This subsection emphasizes the role played by the regulator, which may be separated into two components, namely the dynamic allocation of emission allowances A(t) and the severity of the cap, reflected in the parameter λ. Naturally, ceteris paribus, the average level of production increases as ã increases, although ã does not play a first-order role in the representative firm's production; see Figure <ref>. According to our model, this dependence may be explained by the fact that the optimal production features a linear dependence on ã, with a slope that depends on the solution of the Riccati equations; see Equation (<ref>). Also, the average pollution abatement rate ᾱ(t), the average optimal trading rate β̄(t), and the average price of permits ω̅_t naturally decrease as ã increases; see the sub-figures in Figure <ref>, from left to right and from top to bottom. We also report the simulation of one trajectory of the quantities just mentioned. In particular, apart from the level, the dynamics of α(t) and β(t) look very similar. We interpret this result as being symptomatic of the type of dependence on Y^(2)(t). More precisely, from the (optimal) coupling condition in Equation (<ref>) we have that β(t) = -2 ν (Y^(2)(t)-Y̅^(2)(t)) and α(t) = -2 η Y^(2)(t) - η h. Whence, both β(t) and α(t) depend linearly on Y^(2)(t) and are both affected by the idiosyncratic noise. Nonetheless, α(t) depends on the common shocks too. Finally, by construction, ω_t=-2𝔼[Y^(2)(t)|ℱ_t^0], and therefore the dependence on Y^(2)(t) is, again, linear. Moreover, as in <cit.>, we observe large oscillations of the price near the maturity; see the last sub-figure in Figure <ref>. Furthermore, since the market power of the representative firm is (almost) equal to the one of the population (γ=0.5), the representative firm cannot charge higher prices to compensate for possibly lower sales; see also the discussion in the next Subsection <ref>.
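For concreteness, the backward step of the numerical pipeline described at the beginning of this section can be sketched as follows. The snippet is a minimal Python analogue of the ode45 integration of the Riccati equation for P(t); the small matrices below are illustrative placeholders that only mirror the structure of Section <ref> and are not the calibrated values of Table <ref>. The equations for Π(t) and ϕ(t) are handled in exactly the same way, and the forward simulation of the optimal trajectory is a standard Euler–Maruyama loop as in the earlier sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder 2x2 / 2x4 model matrices mirroring the structure of Section 4
# (illustrative values only, not the calibrated parameters of Table 2).
A  = np.array([[-0.05, 0.0], [0.0, 0.0]])
B  = np.array([[0.3, 0.2, 0.0, 0.0], [-0.5, 0.0, 1.0, 1.0]])
C1 = np.array([[0.1, 0.0], [0.0, 0.0]])
Q  = np.array([[0.5, 0.0], [0.0, 0.0]])
R  = np.diag([0.2, 0.3, 0.25, 0.5])
H  = np.array([[0.0, 0.0], [0.0, 1.0]])
T  = 5.0
Rinv = np.linalg.inv(R)

def riccati_rhs(s, p_flat):
    # Riccati equation for P written in reversed time s = T - t,
    # so that the terminal condition P(T) = H becomes an initial condition.
    P = p_flat.reshape(2, 2)
    dP = C1.T @ P @ C1 + Q - P @ B @ Rinv @ B.T @ P + P @ A + A.T @ P
    return dP.flatten()

sol = solve_ivp(riccati_rhs, (0.0, T), H.flatten(), dense_output=True, rtol=1e-8)

def P_at(t):
    # Map back from reversed time to forward time.
    return sol.sol(T - t).reshape(2, 2)

print("P(0) =\n", np.round(P_at(0.0), 4))
```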
Turning to the severity of the cap: if the regulator tightens the cap, i.e., if λ increases, all other things being equal, then the production of the representative firm unambiguously decreases (Figure <ref>). Notice that this fact generalizes the results for monopoly of <cit.> and the ones for Cournot oligopoly under taxes of <cit.>. Consistently, the representative firm increases the use of the green level of capital at the expense of the fossil-fuel level of capital; see Figure <ref>. Finally, Figure <ref> plots three trajectories, one for every considered λ, of the bank account X(t). It seems that the representative firm pays more for a lower level of λ; in other words, a relaxation of the final penalty implicitly induces the firm to emit more. However, in the present work, the dynamic allocation of the regulator is exogenous and we do not impose any compliance constraint, either on the expected emissions or on a point-wise value of the terminal net emissions of the representative firm, as in the very recent research paper <cit.>. Extending our model to such a setting is an interesting direction for future research; see Section <ref>.

§.§ The economics of competition.

Using a static model of imperfect market competition, <cit.> emphasizes the importance of the degree of competition in determining the economic consequences of pollution regulation. It is therefore natural to ask whether their findings are also recovered in our (dynamic) setting with stochastic emissions[Stochastic emissions are considered as a direction for future research in <cit.>, Section 7.2.] and production costs. Naturally, the pollution regulator wants to encourage pollution abatement and discourage output reduction. Indeed, should the output be lower, consumer surplus and welfare would be hurt because of an increase in the prices of goods. Figure <ref> shows that the expected average output increases with γ, whereas the average price of goods decreases. This is because when γ = 0, the representative firm has significant market power and can charge higher prices, compensating it for lower sales. As γ increases, competition with the rest of the population intensifies, thus reducing the firm's market power; a small numerical check of this mechanism is reported below. This is made clear by the following relation:

p^K(t)=a-b(1-γ) A_k K(t) - b γ A_k K̄(t).

Indeed, if the representative firm lowers its output, then this has a limited effect on the price because the rest of the population would increase its output in response. Consequently, the representative firm prefers pollution abatement over output reduction, with the caveat that, even though the first-order partial effect is positive, output reduction does not display the same compensatory dynamics as the price. Hence, increasing the competition helps align the firm's incentives with the regulator's goal of pollution abatement; this is in line with <cit.>. Consistently, Figure <ref> shows that, ceteris paribus, the average levels of capital and the average trading activity increase with the level of competition. In particular, the latter causes an increase in the average market price of permits because of the increased liquidity. Figure <ref> plots the value function along with its components. Generally, the cap-and-trade mechanism has both a direct pollution-abatement effect and an indirect output-reduction effect. Consistently with our theoretical argument in the previous paragraph, competition, i.e., a value of γ∈ (0,1], induces the representative firm to overproduce.
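To make the market-power argument above concrete, the following minimal sketch evaluates the own- and cross-price sensitivities implied by the inverse demand decomposition of Equation (<ref>) for a few degrees of competition γ; the values b = 1, A_k = 1, N = 100 are placeholders for illustration only and are not the calibrated parameters of Table <ref>.

```python
import numpy as np

# Own- and cross-price sensitivities implied by
# p^i = a - b(1 - gamma(1 - 1/N)) A_k K^i - (b gamma / N) sum_{j != i} A_k K^j,
# evaluated for a few degrees of competition gamma.
b, A_k, N = 1.0, 1.0, 100

for gamma in (0.0, 0.5, 1.0):
    own = -b * (1.0 - gamma * (1.0 - 1.0 / N)) * A_k   # dp^i / dK^i
    cross = -b * gamma / N * A_k                       # dp^i / dK^j, j != i
    print(f"gamma={gamma:3.1f}: dp/dK^i = {own:+.3f}, dp/dK^j = {cross:+.5f}")
```

As γ grows, the own-price sensitivity shrinks in magnitude from b A_k to b A_k/N, which is the quantitative counterpart of the loss of market power discussed above.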
From a population perspective, it would be beneficial if every (representative) firm lowers the corresponding output to keep prices high. This is an unlikely scenario because no representative firm could credibly commit to such a lower output, as one would expect. The cap-and-trade mechanism should coordinate the previous mechanism in such a way that the population of firms agrees to reduce output by using the pollution constraint; naturally, this synchronization mechanism is expected to work under a suitable range of constraints imposed by the pollution regulator, the one for which the impact of output reduction on the representative firm's profits dominates the cost of pollution abatement, of trading, and production. In particular, in our numerical example costs dominate revenues, and therefore profits, as γ∈ (0,1] increases. When γ=0 (monopoly), the representative firm has significant market power and it can optimize its output. However, should the regulator decide to tighten the cap, the output of the representative firm would further reduce and the representative firm can no longer leverage on the competition with the population of firms to implement the previously described synchronization mechanism. Therefore, a cap-and-trade mechanism hurts more monopoly than competitive firms, which is in agreement with the findings for competitive markets in <cit.>. § CONCLUSION AND FUTURE RESEARCH The model proposed in this paper introduces several fundamental elements regarding pollution generation, abatement and costs, and regulation, which can serve as a basis for future research. The model assumes that firms produce products using a standard AK model with a positive depreciation rate of capital. Future research could consider relaxing this assumption and exploring a more realistic, namely non-linear, production function. Additionally, it assumes that the business cycle affecting the BAU carbon emissions does not correlate with the one affecting emission allowances. Said differently, we are assuming that the regulator has access to limited information and it is affected by a macroeconomic shocks driver that it is independent from the one influencing the emissions. Investigating correlated business cycles and the impact of asymmetric information on production and abatement costs could be potential areas for future research; the latter, in particular, can lead to interesting agency problems. In the present work, the dynamic allocation of the regulator is exogenous and we do not consider any compliance constraint, neither on the expected emissions nor on a point-wise value on the terminal net emissions of the representative firm, as done in the very recent research paper <cit.>. Extending our model to such a setting is an interesting direction for future research. The model also assumes that all firms share the same cost and coefficient functions. Extending the model to incorporate multiple populations, where firms within each population share the same cost and coefficient functions, but differ across populations, is an area for future exploration. This will provide an important tool to study the market equilibrium price in the presence of different types of firms. In addition, it's important for future research to consider relaxing the assumption of the carbon price being (ℱ^0)-adapted. 
Also, it might be an interesting avenue for future research to account for the way in which the regulator allocates allowances to individual firms, in particular analysing the case when the initial allocation is through auctions, as initially intended by the European Union, which switched back to grandfathering in the third phase (after 2012). Extending the model to account for multiple compliance periods and the specific design of current cap-and-trade systems could lead to clear-cut predictions about permit prices and related derivatives. Finally, future research could consider integrated production-pollution-abatement models in continuous time and study other types of policies, such as the policy rules under the Market Stability Reserve (MSR), launched in 2019 by the EU. § LINEAR QUADRATIC MEAN FIELD GAMES WITH COMMON NOISE. In this section, we present the general formulation of a linear quadratic mean field game class with common noise, which our framework fits into. Let (ξ(s)) and (ψ(s)) be given processes adapted to the filtration (ℱ^0_t). We consider the following dynamics: dX(s) = (A_0(s) + A(s)X(s)+A(s)ξ(s)+B(s)v(s)+B(s)ψ(s)) ds + ∑_j=1^d_1(C_0,j(s) + C_j(s)X(s) + C_j(s)ξ(s)+D_j(s)v(s)+D_j(s)ψ(s)) dW^j(s) +∑_ℓ=1^d_0(F_0,j(s)+F_j(s)X(s)+F_j(s)ξ(s)+G_j(s)v(s)+G_j(s)ψ(s)) dW^0,j(s), with X(t)=x, and objective functional: 𝒥_x,t^NE(v)=𝔼[∫_t^T ( Q_0(s)+⟨ Q(s)X(s), X(s)⟩+ 2 ⟨Q(s)ξ(s),X(s)⟩+⟨ R(s)v(s),v(s)⟩ + 2 ⟨R(s)ψ(s), v(s)⟩ +2⟨ S(s)X(s),v(s)⟩ +2⟨S_1(s)ξ(s), v(s)⟩ +2⟨S_2(s)X(s), ψ(s)⟩ +2⟨ q(s),X(s)⟩ +2⟨q(s),ξ(s)⟩ +2⟨ r(s), v(s)⟩ + 2⟨r(s),ψ(s)⟩) ds + ⟨ H X(T), X(T)⟩ + 2 ⟨Hξ(T), X(T)⟩], where ⟨·, ·⟩ denotes the inner product on Euclidean space. The goal is to find a control v̂(s) with corresponding state process X̂ such that 𝒥_x,t^NE(v̂;ξ, ψ)=inf_v𝒥_x,t^NE(v;ξ, ψ) and 𝔼[X̂(s)|ℱ_s^0]=ξ, 𝔼[v̂(s)|ℱ_s^0]=ψ. The process v̂ is called a mean field Nash equilibrium. We state the following assumption on the coefficient matrices (cfr. <cit.>, Assumption 3.1) * A_0, C_0,j, F_0,j∈ L^∞([0,T];ℝ^d), 1 ≤ j ≤ d_1 and ℓ≤ 1 ≤ d_0, and Q_0(s) ∈ L^∞([0,T];ℝ). * A, A, C, C, F, F∈ L^∞([0,T];ℝ^d × d). * B, B, D, D, G, G∈ L^∞([0,T];ℝ^d × d_2). * Q, Q∈ L^∞([0,T];𝒮^d), R, R∈ L^∞([0,T];𝒮^d_2), H, H∈𝒮^d. * H≥ 0 and for some δ_1 ≥ 0, δ_2 >0, Q, Q≥δ_1 I_d and R ≥δ_2 I_d. * S, S_1, S_2 ∈ L^∞([0,T]; ℝ^d_2 × d); q, q∈ L^∞([0,T];ℝ^d); r, r∈ L^∞([0,T];ℝ^d_2). * S_∞^2 < δ_1δ_2 if δ_1>0, S=S=0 otherwise. § LINEAR QUADRATIC MEAN FIELD TYPE CONTROL WITH COMMON NOISE. In this section, we provide explicit solutions of a class of linear quadratic mean field type control problems in terms of a system of Riccati equations. The class of problems we consider is a generalization of the one analyzed in <cit.>. In the latter, both the private states dynamics and the running cost appearing in the cost functional do not contain (possibly time-dependant) terms of order zero, and both common and idiosyncratic noise values are uni-dimensional, i.e., d_0=d_1=1. Instead, we consider the following dynamics: dX(s) = (A_0(s) + A(s)X(s)+A(s)X(s)+B(s)v(s)+B(s)v(s)) ds + ∑_j=1^d_1(C_0,j(s) + C_j(s)X(s) + C_j(s)X(s)+D_j(s)v(s)+D_j(s)v(s)) dW^j(s) +∑_ℓ=1^d_0(F_0,j(s)+F_j(s)X(s)+F_j(s)X(s)+G_j(s)v(s)+G_j(s)v(s)) dW^0,j(s), with X(t)=x. 
In addition, the objective cost functional is given by: 𝒥_x,t^LQ(v)=𝔼[∫_t^T ( Q_0(s)+⟨ Q(s)X(s), X(s)⟩+ ⟨Q(s)X(s),X(s)⟩+⟨ R(s)v(s),v(s)⟩ + ⟨R(s)v(s),v(s)⟩ +2⟨ S(s)X(s),v(s)⟩+2⟨S(s)X(s),v(s)⟩+2⟨ q(s),X(s)⟩ +2⟨q(s),X(s)⟩ +2⟨ r(s), v(s)⟩ + 2⟨r(s),v(s)⟩) ds + ⟨ H X(T), X(T)⟩ + 2 ⟨HX(T), X(T)⟩], where ⟨·, ·⟩ denotes the inner product on Euclidean space. We state the following assumption on the coefficient matrices (cfr. <cit.>, Assumption 2.1) * A_0, C_0,j, F_0,j∈ L^∞([0,T];ℝ^d), 1 ≤ j ≤ d_1 and ℓ≤ 1 ≤ d_0, and Q_0(s) ∈ L^∞([0,T];ℝ). * A, A, C, C, F, F∈ L^∞([0,T];ℝ^d × d). * B, B, D, D, G, G∈ L^∞([0,T];ℝ^d × d_2). * Q, Q∈ L^∞([0,T];𝒮^d), R, R∈ L^∞([0,T];𝒮^d_2), H, H∈𝒮^d. * H, H+H≥ 0 and for some δ_1 ≥ 0, δ_2 >0, Q, Q+Q≥δ_1 I_d and R, R+R≥δ_2 I_d. * S, S∈ L^∞([0,T]; ℝ^d_2 × d); q, q∈ L^∞([0,T];ℝ^d); r, r∈ L^∞([0,T];ℝ^d_2). * S_∞^2, S+S_∞^2 < δ_1δ_2 if δ_1>0, S=S=0 otherwise. The procedure used in <cit.>, Section 2.2, uses a technique developed by <cit.>. In order to facilitate the reader, we will highlight in bold font the additional terms with respect <cit.>, Theorem 2.6, Equations (2.31)– (2.32) and the subsequent non-numbered one, linked to the terms of order zero[Notice that the expression for Σ_0 and ϕ(s) derived in <cit.> presents some inaccuracies. First, the term G^T P G is missed in the expression for Σ_0 (see Equation (<ref>)). Second, there is an extra term in the equation for ϕ; nonetheless, the equation remains linear and, therefore, essentially trivial to solve (see Equation (<ref>))]. We suppose that: Y(s) = P(s)(X(s)-X(s)) + Π(s)X(s) + ϕ(s), where P and Π are 𝒮^d-valued processes such that they satisfy the following terminal conditions: P(T)=H, Π(T)=H+H, and ϕ(s) is an ℝ^d-valued process; P, Π, and ϕ are deterministic. Hereafter, in order to ease the notation, we suppress the time indexes and we work under the assumption that d_0=d_1=1; we will provide the expressions for the case d_0>1 and d_1>1 at the end of the present section. By taking the conditional expectation in Equation (<ref>), we obtain: Y=ΠX +ϕ and Y -Y = P (X -X). Moreover, by taking the conditional expectation in Equation (<ref>), we obtain: dX = (A_0 + (A+A)X+(B+B)v) ds+(F_0 + (F+F)X+(G+G)v) dW^0. By subtracting the previous equation from Equation (<ref>), we have: d(X-X) =(A (X - X) + B (v - v)),ds +(C_0 + C (X-X) + (C+C)X + D(v-v)+(D+D)v) dW +( F(X-X)+G(v-v)) dW^0. Proposition 2.4, Equation (2.4), in <cit.> gives us[Proposition 2.4, Equation (2.4), in <cit.> is not affected by the presence of zero-order terms.]: dY =-(A^T (Y-Y) + (A^T+A^T)Y + C^T (Z-Z) + (C^T+C^T)Z +F^T(Z_0-Z_0)+(F^T+F^T)Z_0 + Q (X-X) + (Q+Q)X + S^T v + S^T v + q + q) ds + Z dW + Z_0 dW^0 Now, on one hand we have: d(Y-Y) = (·P(X-X) + P A (X - X) + P B (v - v)) ds +P (C_0 + C (X-X) + (C+C)X + D(v-v)+(D+D)v) dW +P (F(X-X)+G(v-v)) dW^0. On the other hand, it holds that (see Equation (<ref>)): dY = (·ϕ + ·Π) ds + Π dX =(·ϕ + ·ΠX + Π A_0 + Π(A+A)X+Π(B+B)v) ds +Π(F_0 + (F+F)X+(G+G)v) dW^0 Noting that dY = d(Y-Y)+dY, we compare the diffusion terms of the left and right hand side of this equation. 
We get: Z = P (C_0 + C (X-X) + (C+C)X + D(v-v)+(D+D)v) Z_0 =P (F(X-X)+G(v-v)) +Π(F_0 + (F+F)X+(G+G)v) which implies Z = P (C_0+(C+C)X +(D+D)v) Z-Z = P (C (X-X) + D(v-v)) Z_0 = Π(F_0+(F+F)X+(G+G)v) and Z_0-Z_0 = P (F(X-X)+G(v-v)) At this point, the coupling condition in <cit.>, Proposition 2.4, Equation (2.5) reads as[On the other hand, Equation (2.5) is affected by zero-order terms since it depends on Z and Z_0.]: B^T (Y-Y) + (B^T+B^T)Y +D^T(Z-Z) + (D+D^T)Z +G^T(Z_0-Z_0) + (G^T+G^T)Z_0 +R (v-v) +(R+R)v + S (X-X) +(S+S)X(s) +r +r =B^T P (X-X) + (B^T+B^T)ΠX + (B^T+B^T)ϕ + D^T P (C (X-X) + D(v-v)) +(D+D^T)(C_0+(C+C)X +(D+D)v) +G^TP (F(X-X)+G(v-v)) + (G^T+G^T) Π(F_0 + (F+F)X+(G+G)v) +R (v-v) +(R+R)v + S (X-X) +(S+S)X +r +r, which can be rewritten in the following way Λ_0 (X-X) + Λ_1 X + Σ_0 (v-v) + Σ_1 v + (B^T+B^T)ϕ + r + r + (D+D^T)C_0 + (G^T+G^T)Π F_0=0 by setting Λ_0 = B^T P + D^T P C + G^T P F + S; Λ_1 = (B^T+B^T)Π + (D+D^T)P (C+C) + (G^T+G^T) Π (F+F) + (S+S) Σ_0 = D^T P D + G^T P G + R Σ_1 = (D+D^T)P (D+D) + (G^T+G^T) Π (G+G) + R+R Taking the conditional expectation in Equation (<ref>), assuming Σ_1 invertible, and making the term v explicit in Equation (<ref>), we deduce v=- Σ_1^-1(Λ_1 X + r + r + (B^T+B^T)ϕ + (D+D^T)C_0 + (G^T+G^T)Π F_0) Assuming Σ_0 is also invertible and observing that v=v - v + v , we have: v = -Σ_0^-1Λ_0(X-X) - Σ_1^-1(Λ_1 X + r + r + (B^T+B^T)ϕ + (D+D^T)C_0 + (G^T+G^T)Π F_0) At this point, we compare the drift terms from (<ref>) to those of (<ref>) and (<ref>). Using the relations (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) proved above. By noticing that v-v=-Σ_0^-1Λ_0(X-X), after some algebra, we deduce that P and Π should satisfy the following Riccati equations: ·P + P A + A^T P + C^T P C + F^T P F + Q - (P B + C^T P D + F^T P G + S^T) Σ_0^-1Λ_0 = 0 Λ_0 = B^T P + D^T P C + G^T P F + S; Σ_0 = D^T P D + G^T P G + R P(T) = H. ·Π + Π (A + A) + (A^T + A^T)Π + (C^T+C^T)P(C+C)+ (F^T+F^T)Π(F+F) + (Q+Q) -(Π (B+B) +(C^T+C^T)P(D+D) + (F^T+F^T)Π(G+G)+S^T + S^T +)Σ_1^-1Λ_1=0 Λ_1 = (B^T+B^T)Π + (D+D^T)P (C+C) + (G^T+G^T) Π (F+F) + (S+S) Σ_1 = (D+D^T)P (D+D) + (G^T+G^T) Π (G+G) + R+R Π(T) = H + H Once we have P and Π solution to Equation (<ref>) and (<ref>), we set: ·ϕ -(Π (B+B) + (C^T+C^T)P(D+D) + (F^T+F^T)Π(G+G) + S^T + S^T)Σ_1^-1· (r+r+(D+D^T)C_0 + (G^T+G^T)Π F_0 + q + q + Π A_0-(C^T+C^T)PC_0-(F^T+F^T)Π F_0) -[(Π (B+B) + (C^T+C^T)P(D+D) + (F^T+F^T)Π(G+G) + S^T + S^T)Σ_1^-1(B^T+B^T) + (A^T+A^T)]ϕ=0 Finally, we obtain the optimal trajectory (using Equation (<ref>)), a formula for the process Z (using Equations (<ref>), (<ref>) and (<ref>)), and a formula for the process Z_0 (using Equations (<ref>) and (<ref>)): dX =(A (X-X) + (A+A)X + B(v-v) + (B+B)v) ds +(C (X-X) + (C+C)X + D(v-v) + (D+D)v) dW +(F (X-X) + (F+F)X + G(v-v) + (G+G)v) dW^0 Z =P(C (X-X) - DΣ_0^-1Λ_0(X-X) ) +P(C_0 + (C+C)X-(D+D)Σ_1^-1(Λ_1 X + r + r + (B^T+B^T)ϕ + (D+D^T)C_0 + (G^T+G^T)Π F_0)) Z_0 =P (F - GΣ_0^-1Λ_0)(X-X) +Π(F_0 + (F+F) - (G+G)Σ_1^-1Λ_1)X -Π(G+G)Σ_1^-1(r + r +(B^T+B^T)ϕ + (D+D^T)C_0 + (G^T+G^T)Π F_0). Equations for the general case are easily obtained by using the summations where necessary (e.g., C^T P C is replaced by ∑_j=1^d_1 C_j^T P C_j). unsrt aid2023optimal Aïd, R., Biagini, S.: Optimal dynamic regulation of carbon emissions market. Math. Financ. 33(1), 80–115 (2023). aid2016optimal2 Aïd, R., Gruet, P., Pham, H.: An optimal trading problem in intraday electricity markets. Math. Financ. Econ. 10, 49–85 (2016). 
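To make the backward system above concrete, here is a minimal numerical sketch in the scalar case d = d_0 = d_1 = d_2 = 1 with constant coefficients: it integrates the Riccati equations for P and Π backward from their terminal conditions by an explicit Euler scheme. All coefficient values are arbitrary placeholders, not a calibration of the model, and the matrix case is obtained by replacing scalar products with the corresponding matrix products and transposes. Once P and Π are available, ϕ satisfies the linear equation above and can be integrated backward in the same way.

```python
# Placeholder constant coefficients in the scalar case d = d_0 = d_1 = d_2 = 1
# ("Ab" stands for "A bar", etc.); the values are arbitrary, not a calibration.
A, Ab, B, Bb = 0.5, -0.2, 1.0, 0.3
C, Cb, D, Db = 0.1, 0.0, 0.2, 0.0
F, Fb, G, Gb = 0.1, 0.05, 0.1, 0.0
Q, Qb, R, Rb, S, Sb = 1.0, 0.5, 1.0, 0.2, 0.0, 0.0
H, Hb = 1.0, 0.5
T, N_steps = 1.0, 10_000
dt = T / N_steps

def rhs_P(P):
    """Right-hand side of dP/ds read off from the Riccati equation for P."""
    Sigma0 = D * P * D + G * P * G + R
    Lambda0 = B * P + D * P * C + G * P * F + S
    return -(P * A + A * P + C * P * C + F * P * F + Q
             - (P * B + C * P * D + F * P * G + S) * Lambda0 / Sigma0)

def rhs_Pi(P, Pi):
    """Right-hand side of dPi/ds; it is coupled to P through the (C+Cb) and (D+Db) terms."""
    Sigma1 = (D + Db) * P * (D + Db) + (G + Gb) * Pi * (G + Gb) + R + Rb
    Lambda1 = ((B + Bb) * Pi + (D + Db) * P * (C + Cb)
               + (G + Gb) * Pi * (F + Fb) + S + Sb)
    return -(Pi * (A + Ab) + (A + Ab) * Pi
             + (C + Cb) * P * (C + Cb) + (F + Fb) * Pi * (F + Fb) + Q + Qb
             - ((B + Bb) * Pi + (C + Cb) * P * (D + Db)
                + (F + Fb) * Pi * (G + Gb) + S + Sb) * Lambda1 / Sigma1)

# Backward explicit Euler from the terminal conditions P(T) = H and Pi(T) = H + Hb.
P, Pi = H, H + Hb
for _ in range(N_steps):
    P, Pi = P - dt * rhs_P(P), Pi - dt * rhs_Pi(P, Pi)

print("P(0), Pi(0) =", round(P, 4), round(Pi, 4))
```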
alasseur2020extended Alasseur, C., Ben Taher, I., Matoussi, A.: An extended mean field game for storage in smart grids. JOTA. 184, 644–670 (2020). almgren2001optimal Almgren, R., Chriss, N.: Optimal execution of portfolio transactions. J. Risk. 3, 5–40 (2001). anand2020pollution Anand, K. S., Giraud-Carrier, F. C.: Pollution regulation of competitive markets. Manag. Sci. 66(9), 4193–4206 (2020). barrieu2014market Barrieu, P., Fehr, M.: Market-consistent modeling for cap-and-trade schemes and application to option pricing. Oper. Res. 62(2), 234–249 (2014). bensoussan2013mean Bensoussan, A., Frehse, J., Yam, P.: Mean field games and mean field type control theory. 101, Springer, (2013). bensoussan2016linear Bensoussan, A., Sung, KCJ., Yam, S. C. P., Yung, S.P.: Linear-quadratic mean field games. JOTA. 169, 496–529 (2016). biagini2024 Biagini, S.: Carbon neutrality and net-zero regulation. SSRN:4883544. buckdahn2009mean Buckdahn, R., Djehiche, B., Li, J., Peng, S.: Mean-field backward stochastic differential equations: a limit approach . Ann. Prob., 37(4), 1524–1565, (2009). buckdahn2009mean2 Buckdahn, R., Li, J., Peng, S.: Mean-field backward stochastic differential equations and related partial differential equations. Stochastic Process. Appl., 119(10), 3133–3154, (2009). caines2006large Caines, P. E., Huang, M., Malhamé, R. P.: Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems, 6(3), 221–252, (2006). calel2016environmental Calel, R., Dechezleprêtre, A.: Environmental policy and directed technological change: evidence from the European carbon market. Rev. Econ. Stat., 98(1), 173–191, (2016). cardaliaguet2012notes Cardaliaguet, P.: Notes on Mean Field Games, (2012). carmona2009optimal Carmona, R., Fehr, M., Hinz, J.: SIAM J. Control Optim. 48(4), 2168–2190, (2009). carmona2010marke Carmona, R., Fehr, M., Hinz, J., and Porchet, A.: Market design for emission trading schemes. SIREV. 52(3), 403–452 (2010). carmona2015probabilistic Carmona, R., Lacker, D.: A probabilistic weak formulation of mean field games and applications, Ann. Appl. Probab., 1189–1231 (2015). chan2015bertrand Chan, P., Sircar, R.: Bertrand and Cournot mean field games. Appl. Math. Optim., 71(3), 533–569 (2015). cecchin2022weak Cecchin, A., Delarue, F.: Weak solutions to the master equation of potential mean field games. arXiv:2204.04315, (2022). delarue2018probabilisticone Delarue, F., Carmona, R.: Probabilistic Theory of Mean Field Games with Applications I, (2018), Springer. delarue2018probabilistictwo Delarue, F., Carmona, R.: Probabilistic Theory of Mean Field Games with Applications II, (2018), Springer. fell2010alternative Fell, H., Morgenstern, R. D.: Alternative approaches to cost containment in a cap-and-trade system. ERE. 47, 275–297 (2010). fujii2022mean Fujii, M., Takahashi, A.: A mean field game approach to equilibrium pricing with market clearing condition. SIAM J. Control Optim. 60(1), 259–279 (2022). fujii2022equilibrium Fujii, M.: Equilibrium pricing of securities in the co-presence of cooperative and non-cooperative populations. ESAIM: COCV 29(56) (2023). gollier2024cost Gollier, C.: The cost-efficiency carbon pricing puzzle. CEPR Discussion Paper No. DP15919. (2024). graber2016linear Graber, P. J.: Linear quadratic mean field type control and mean field games with common noise, with application to production of an exhaustible resource. Appl. Math. Optim. 74, 459-486 (2016). 
graber2023master Graber, P. J., Sircar, R.: Master equation for Cournot mean field games of control with absorption. J. Differ. Equ. 343, 816–909 (2023). gueant2010mean Guéant, O., Lasry, J.M., Lions, P. L.: Mean Field Games and Oil Production. The Economics of Sustainable Development. (2010) hitzemann2018equilibrium Hitzemann, S., Uhrig-Homburg, M.: Equilibrium price dynamics of emission permits. JFQA. 53(4), 1653–1678 (2018). kyle1985continuous Kyle, A. S.: Continuous auctions and insider trading. Econometrics. 1315–1335 (1985). kollenberg2016emissions Kollenberg, S., Taschini, L.: Emissions trading systems with cap adjustments. JEEM. 80, 20–36 (2016). kollenberg2016dynamics Kollenberg, S., Taschini, L.: Dynamic supply adjustment and banking under uncertainty in an emission trading scheme: the market stability reserve. Eur. Econ. Rev. 118, 213–226 (2019). lacker2019mean Lacker, D., Zariphopoulou, T.: Mean field and n-agent games for optimal investment under relative performance criteria. Math. Financ. 29(4), 1003–1038 (2019). lasry2007mean Lasry, J. M., Lions, P. L.: Mean field games. JJM. 2(1):229–260. levi2004converting Levi, M. D., Nault, B. R.: Converting technology to mitigate environmental damage. Manag. Sci. 50(8), 1015–1030 (2004). peng1999fully Peng, S., Wu, Z.: Fully coupled forward-backward stochastic differential equations and applications to optimal control. SIAM J. Control Optim. 37(3), 825–843 (1999). pham2017dynamic Pham, H., Wei, X.: Dynamic programming for optimal control of stochastic McKean–Vlasov dynamics. SIAM J. Control Optim. 55(2), 1069–1101 (2017). pham2016linear Pham, H.: Linear quadratic optimal control of conditional McKean-Vlasov equation with random coefficients and applications. PUQR. 1, 1–26 (2016). requate1993equivalence Requate, T.: Equivalence of effluent taxes and permits for environmental regulation of several local monopolies. Econ. Lett. 42(1), 91–95 (1993). requate1993pollution Requate, T.: Pollution control in a Cournot duopoly via taxes or permits. J. Econ. 58(3), 255–291 (1993). subramanian2007compliance Subramanian, R., Gupta, S., Talbot, B.: Compliance strategies under permits for emissions. Prod. Oper. Manag. 16(6), 763–779 (2007). yong2013linear Yong, J.: Linear-quadratic optimal control problems for mean-field stochastic differential equations. SIAM J. Control Optim. 51(4), 2809-2838 (2013).
http://arxiv.org/abs/2407.13581v1
20240718152303
An extended generalization of RSK correspondence via $A$ type quiver representations
[ "Benjamin Dequêne" ]
math.CO
[ "math.CO", "math.RT" ]
An extended generalization of RSK correspondence via A type quiver representations
Benjamin Dequêne
UFR des Sciences, Laboratoire Amiénois de Mathématiques Fondamentales et Appliquées (LAMFA), Université de Picardie Jules Verne (UPJV), benjamin.dequene@u-picardie.fr
§ ABSTRACT Let λ=(λ_1 ⩾…⩾λ_k > 0). For any Coxeter element c of 𝔖_λ_1+k, we construct a bijection from fillings of λ to reverse plane partitions. We recover two previous generalizations of the Robinson–Schensted–Knuth correspondence for particular choices of Coxeter element depending on λ: one based on the work of, among others, Burge, Hillman, Grassl, and Knuth, and uniformly presented by Gansner; the other developed by Garver, Patrias, and Thomas, and independently by Dauvergne, called Scrambled RSK. The results of this paper develop the combinatorial consequences of our previous work on A_λ_1+k-1 type quivers. § INTRODUCTION This article has a short version in the proceedings of the 36th edition of the Formal Power Series and Algebraic Combinatorics (FPSAC) conference <cit.>. §.§ RSK and its generalizations Let n ∈ℕ^*. The Robinson–Schensted correspondence is a famous one-to-one correspondence from elements of the symmetric group 𝔖_n to pairs of standard Young tableaux of the same shape and of size n. It was first based on the representation theory of the symmetric group, through the work of Robinson <cit.>, before receiving a combinatorial realization via Schensted row-insertions <cit.>. This correspondence has been studied for its numerous combinatorial consequences, such as a combinatorial proof of a representation-theoretic identity involving the dimensions of the irreducible representations of 𝔖_n (see <ref>), Viennot's geometric construction <cit.>, plactic monoids <cit.>, or the Erdős–Szekeres theorem <cit.>. We refer the reader to <cit.> for more details. The Robinson–Schensted–Knuth (RSK) correspondence is a generalization of the Robinson–Schensted correspondence, introduced by Knuth <cit.>, and presented as a bijection from nonnegative integer matrices to pairs of semi-standard Young tableaux of the same shape. We recover the Robinson–Schensted correspondence by restricting it to permutation matrices. The RSK correspondence extends many of the properties of the previous correspondence; for instance, its symmetry: transposing the matrix results in interchanging the two tableaux. Among its remarkable consequences, we can cite the Cauchy identity for symmetric functions (see <cit.> for more details), which generalizes the representation-theoretic identity mentioned above. It also has many interpretations in different settings, obtained through deformations and generalizations of this correspondence. We refer the reader to <cit.>. In this paper, we focus on two of those generalizations. Gansner introduced the first one <cit.>, based on observations from various works of Burge <cit.>, Hillman–Grassl <cit.> and Knuth <cit.>. Given a fixed nonzero integer partition λ, via Greene–Kleitman invariants <cit.>, he defines a map, denoted by _λ, which realizes a bijection from arbitrary fillings of λ to reverse plane partitions of λ. 
Gaver, Patrias, and Thomas give the second one <cit.>, in terms of quiver representation theory. Independently, Dauvergne <cit.>, in a combinatorial setting, introduced it as “Scrambled RSK". In the following, we focus on the quiver representation theory point of view. This variant can be introduced as a family of one-to-one correspondences (_m,c)_m,c, parametrized by orientations of an A_n type quiver (seen here as a Coxeter element c ∈𝔖_n+1) — see <ref>), and m ∈{1,…,n}, from m × (n-m+1) integer matrices to reverse plane partitions of (n-m+1)^m (seen as n-tuples of integer partitions satisfying storability conditions – see <ref>). Our main goal is to exhibit a construction of an extended generalization of _λ, for any nonzero integer partitions λ, based on a combinatorial extraction of results from <cit.>, involving any Coxeter element c ∈𝔖_n+1, where n is the hook-length of the box (1,1) in λ, using the combinatorics of the A_n type quivers. We denote those maps by _λ,c. In <ref>, we pictured how those maps _λ,c can be seen as an extended generalization that contains the previously mentioned correspondence. We state the precise results in <ref>. §.§ Quiver representation theory In this section, we recall the setting of <cit.> and state the main result of <cit.>, which motivates our work. Fix 𝕂 an algebraically closed field, and n ⩾ 1. Consider an A_n type quiver Q: this is a directed graph whose underlying graph is a line with n vertices. We label the vertices from 1 to n, from left to right. A representation E is an assignment of a vector space E_q at each vertex q of Q, and an assignment of a linear transformation E_α to each arrow α of Q. We say that E is finite-dimensional whenever, for all q ∈ Q_0, E_q is finite-dimensional. Given two representations E and F, a morphism ϕ:E ⟶ F is a collection of linear maps (ϕ_q : E_q ⟶ F_q)_q assigned to each vertex of Q, such that it satisfies some commutativity properties (see <ref>). Denote by _𝕂(Q) the category of (finite-dimensional) representations of Q over 𝕂. One can see this category as a set of representations of Q equipped with morphisms between them. A representation is said to be indecomposable whenever it is not isomorphic to a direct sum of two nonzero representations. Write _𝕂(Q) for the set of isomorphism classes of indecomposable representations of Q. We can encode the data of _𝕂(Q), and the morphisms between indecomposable representations, by the Auslander–Reiten quiver, denoted by _𝕂(Q). It is a directed graph whose vertices are elements of _𝕂(Q), and arrows are irreducible morphisms between them. We recall that any E ∈_𝕂(Q) is characterized, up to isomorphisms, by the multiplicities of its indecomposable summands. Write (E): _𝕂⟶ℕ for the map which associates indecomposable representations to its multiplicities. It could be seen as a filling of _𝕂(Q). An endomorphism N: E ⟶ E is said to be nilpotent whenever, for every vertex q of Q, N_q is nilpotent. Write (E) for the set of nilpotent endomorphisms of a given representation E. We define an invariant on isomorphism classes of _𝕂(Q), called the generic Jordan form data as follows. Given E ∈_𝕂(Q), we study the set of its nilpotent endomorphisms, denoted by (E), by determining their Jordan form: it is displayed as a n-tuple of integer partitions. Garver, Patrias, and Thomas <cit.> proved that a (Zariski) dense open set Ω⊂(E) exists in which all the nilpotent endomorphisms have the same Jordan form. 
This common Jordan form data is called the generic Jordan form data of X, denoted by (E) — see <ref> for the precise statement. Note that this invariant can be computed combinatorially (see <ref>). Note that if n > 1, is not a complete invariant. However, we can still be interested in determining the full subcategories of _𝕂(Q) (closed under direct sums and summands) in which becomes complete. Those subcategories are called Jordan recoverable. To determine all the Jordan recoverable subcategories of _𝕂(Q) is still a difficult task. A conjecture is stated in <cit.> and is recalled in <ref>. Another question raised is how to recover the representation, up to isomorphisms, from its generic Jordan form data. Garver, Patrias, and Thomas described an algebraic way to do so, and they called canonically Jordan recoverable any subcategory in which their algebraic procedure succeeds. Note that any canonically Jordan recoverable subcategory is Jordan recoverable, but the converse is false, which explains the refined notion. They prove that, for any vertex m in Q, the subcategories additively generated by indecomposable representations X such that X_m ≠ 0, denoted by 𝒞_Q,m are canonically Jordan recoverable. Moreover, they show that can be seen as a generalization of the RSK correspondence, as they recover if Q is oriented such that m is the only sink (respectively only source) of Q. They also showed that they recover the Hillman–Grassl correspondence if Q is linearly oriented (there is only one source and only one sink in Q). We refer the reader to <cit.> for more details. Recall that a filling of _𝕂(Q) corresponds to a representation E ∈_𝕂(Q) up to isomorphism. Now, see as a map from fillings of _𝕂(Q) (which define, up to isomorphism, representations of Q) to n-tuples of integer partitions. This map becomes a bijection if we restrict its domain to fillings f which vanish on indecomposable representations X such that X_m = 0, and its codomain to n-tuples of integer partitions that satisfy some storability conditions (see <ref> and <cit.>). In this case, coincides with the Dauvergne's Scrambled RSK _m,c mentioned earlier, where c ∈𝔖_n+1 is the Coxeter element corresponding to Q (see <ref>). The main result of <cit.> generalizes one of the results of <cit.> by describing all the canonically Jordan recoverable subcategories of _𝕂(Q). Recall that, for any A_n type quiver Q, the isomorphism classes of indecomposable representations are in bijection with intervals i,j = {i,i+1,…,j} in {1,…, n}. Moreover, their indecomposable representations characterize subcategories closed under direct sums and summands. Thus, for any subcategory 𝒞 of _𝕂(Q), we write (𝒞) for the set of intervals corresponding to the indecomposable representations in 𝒞. Two intervals i,j and k,ℓ are adjacent whenever either j +1 = k or ℓ+1 = i. An interval set 𝒥 is said to be adjacency-avoiding if no pair of intervals in 𝒥 are adjacent. Let n ⩾ 1 and Q be an A_n type quiver. A subcategory 𝒞 is canonically Jordan recoverable if and only if (𝒞) is adjacency-avoiding. Note that this result shows that canonical Jordan recoverability does not depend on the orientation of Q. For any set ⊂ℕ, a bipartition of ⊂ℕ^* is a pair (Ł,) of subsets of such that Ł∩ = ∅ and Ł∪ =. We highlight another remarkable fact from <cit.>. As any interval subset of an adjacency-avoiding interval set is adjacency-avoiding, we can focus on maximal ones. Those maximal adjacency-avoiding interval sets are parametrized by bipartitions (Ł,) of {2, …, n}. 
Precisely, for any maximal adjacency-avoiding interval set 𝒥, there exists a unique bipartition (Ł,) of {2,…,n} such that: 𝒥 = {ℓ,r-1|ℓ∈Ł∪{1} and r ∈∪{n+1}} §.§ Main results We proceed to a combinatorial extraction of the results in <cit.>. We summarized this extraction in <ref>. The bijective link between integer partitions with h_λ(1,1) = n and maximal canonically Jordan recoverable (CJR) subcategories of _𝕂(Q) is by using the parametrization with bipartitions of {2,…,n}. Note also that Reading's bijection <cit.> allows us to define a Coxeter element (λ) ∈𝔖_n+1 from such a λ. Given such a λ, we build a one-to-one correspondence from generic Jordan form data of a representation in the category coming from λ to reverse plane partitions of shape λ, thanks to the notion of (λ)-storability for n-tuples of partitions (see <ref>). See <ref> to get the combinatorial construction of _λ,c. Our main result is the following. Let n ⩾ 1, λ be an integer partition such that h_λ(1,1) = n, and c ∈𝔖_n+1 be a Coxeter element. The map _λ,c realizes a one-to-one correspondence from fillings of shape λ to reverse plane partitions of shape λ. We also show some secondary results that justify the name of “extended generalization” of RSK. Let n ⩾ 1, λ be an integer partition such that h_λ(1,1) = n, and c ∈𝔖_n+1 be a Coxeter element. * If c = (λ)^± 1, then _λ,c = _λ. * If λ=(n-m+1)^m for some m ∈{1,…,n}, then _λ,c = _m,c. * If c = (1,…,n+1), then _λ,c coincides with the Hillman–Grassl correspondence. Finally, motivated by the fact that _λ and _m,c admit a local description using sequences of toggles <cit.>, we exhibit some results using local transformations, introduced as diagonal toggles (see <ref>), via those coming from <cit.>. We refer the reader to <ref> for the proofs of the main theorems, and more details about the local transformation mentionned above. § THE ROBINSON–SCHENSTED–KNUTH CORRESPONDENCE §.§ Notations and vocabulary This section sets up all the basic objects we need throughout this paper. §.§.§ Quivers and directed graphs A quiver is a quadruplet Q=(Q_0,Q_1,s,t) where Q_0 is a set called the vertex set, Q_1 is another set called the arrow set, and s,t : Q_1 ⟶ Q_0 are functions called source and target functions. Given a quiver Q, we denote by Q^ its opposite quiver defined from Q by reversing all its arrows. Let Q=(Q_0,Q_1,s,t) and Ξ = (Ξ_0,Ξ_1,σ,τ) be two quivers. A morphism of quivers Ψ is a pair of maps (Ψ_0 : Q_0 ⟶Ξ_0, Ψ_1:Q_1⟶Ξ_1) such that, for all α∈ Q_1, σ(Ξ_1(α)) = Ξ_0 (s(α)) and τ(Ξ_1(α)) = Ξ_0 (t(α)). Such a morphism Ψ is an isomorphism whenever Ψ_0 and Ψ_1 are bijective. We say that Q and Ξ are isomorphic in such a case. A quiver Q is said to be finite whenever Q_0 and Q_1 are finite. We say that Q has no multi-arrows whenever #{α∈ Q_1 | s(α)= q_1 and t(α) = q_2}⩽ 1 for all pairs (q_1,q_2) ∈ (Q_0)^2. In the combinatorial settings, we call directed graph any finite quiver without multi-arrows. As the arrows in any directed graph are uniquely determined by their source and their target, we denote directed graphs by pairs G=(G_0,G_1) where we see the arrow set G_1 as a subset of (G_0)^2. Let G = (G_0,G_1) be a directed graph. A path γ in G as a finite sequence of vertices (v_0, …, v_k) such that (v_i,v_i+1) ∈ G_1. A lazy path at v ∈ G_0 is the path (v). In the following, we denote by Π(G) the set of paths in G. For any γ = (v_0, …, v_k) ∈Π(G), we denote by s(γ) = v_0 its source and by t(γ) = v_k its target. We also write (γ) = {v_0, …, v_k} for the support of γ. 
For ℓ⩾ 1, we extend the notion of support to ℓ-tuples of paths γ = (γ_1, …, γ_ℓ) ∈Π(G)^ℓ as (γ) = ⋃_i=1^ℓ(γ_i). A directed graph G is said to be connected whenever for any pair (v,w) ∈ G_0, there exist ℓ∈ℕ^* and (γ_1,…,γ_ℓ) ∈Π(G)^ℓ such that: * v = s(γ_1), * for any i ∈{1,…,ℓ-1} odd, t(γ_i) =t(γ_i+1), and if ℓ is odd, then t(γ_ℓ) = w; * for any i ∈{1,…,ℓ-1} even, s(γ_i) = s(γ_i+1), and if ℓ is even, then s(γ_ℓ) = w. We say that G is acyclic if the only paths γ in G such that s(γ) = t(γ) are the lazy ones. Call antichain of G any subset of vertices {w_1,…,w_r}⊂ G_0 such that there is no γ∈Π(G) with s(γ) = w_i and t(γ) = w_j for all 1 ⩽ i, j ⩽ r with i ≠ j. §.§.§ Integer partitions An integer partition is a finite weakly decreasing sequence λ = (λ_1, λ_2, …, λ_p) of positive integers. Define its size as |λ| = λ_1 + … + λ_k and its length by ℓ(λ) = p. If needed, we can extend the definition of an integer partition into an infinite weakly decreasing sequence of nonnegative integers with finitely many nonzero entries. Given a,b ∈ℕ^*, we denote by a^b the integer partition λ such that ℓ(λ) = b, and λ_i = a for 1 ⩽ i ⩽ b. We endow (ℕ^*)^2 with the cartesian product order defined by (i,j) (i',j') ⟺ i ⩽ i' j ⩽ j' . A Ferrers diagram is a finite ideal of ((ℕ^*)^2, ). Recall that we have a one-to-one correspondence between Ferrers diagrams and integer partitions. For a given integer partition λ, we define the Ferrers diagram of shape λ to be (λ) = {(i,j) ∈ (ℕ^*)^2 | j ⩽λ_i}. Call box of λ any element of (λ). We use matrix coordinates for the boxes of any partition, meaning that we use English conventions to draw Ferrers diagrams. Given a box b ∈(λ), we write h_λ(b) for the hook-length of b in λ, which is defined, if b = (i,j), as h_λ(b) = #{(u,v) ∈(λ) | u ⩾ i, v ⩾ j, and u = i or v=j }. Explicitly, one can show that h_λ(b) = λ_i - i + λ'_j - j +1 where λ' is the conjugate of λ; meaning λ' is the unique integer partition such that (λ') = {(j,i) | (i,j) ∈(λ)}. In particular, we have h_λ(1,1) = λ_1 + ℓ(λ) -1. In the following, for any n ∈ℕ, we write _n for the set of integer partitions such that h_λ(1,1) = n. Given a nonzero integer partition λ, we consider the λ-diagonal coordinates for elements in (λ) as follows. For k∈ℤ, we define the kth diagonal of λ as the set D_k(λ) of boxes such that λ_1 + i -j = k. Note that D_k(λ) ≠∅ if and only if k ∈{1,…,h_λ(1,1)}. Assume that λ∈_n. For k ∈{1,…,n}, we set δ_k = max({min(i,j) | (i,j) ∈ D_k(λ)}). We define the λ-diagonal coordinates of a box (i,j) ∈(λ) to be the pair k,δ_λ where k ∈{1,…,n} and δ∈{1,…,δ_k} such that (i,j) ∈ D_k(λ) and δ = δ_k - min(i,j) + 1. See <ref> for an example with λ = (5,3,2). Given k ∈{1,…,n}, we define □_k(λ) the kth square of λ as the order ideal in ((λ),) generated by D_k(λ). Note that D_λ_1(λ) corresponds to the Durfee square of λ. A filling of shape λ is an function f :(λ) ⟶ℕ. Such a filling f is a (weak) reverse plane partition (of shape λ) whenever f is weakly increasing with respect to . These reverse plane partitions are termed “weak” because we allow 0 as the value of a box, but we drop this adjective from now on. We denote by (λ) the set of reverse plane partition of shape λ. A reverse plane partition f (of shape λ) is a (weak) semi-standard Young tableau (of shape λ) if f(i,j) > f(i',j) ⩾ 0 for any (i,j), (i',j) ∈(λ) such that i' < i. Write (λ) for the set of such a semi-standard Young tableau of shape λ, and, given an integer m ∈ℕ, (λ,m) for those with values in {0,…,m}. 
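Before moving on, here is a small computational illustration of these conventions, assuming nothing beyond the definitions above: it computes the hook lengths, the diagonals D_k(λ) and the λ-diagonal coordinates for the running example λ = (5,3,2). Boxes are in matrix coordinates and diagonals are indexed by k = λ_1 + i - j.

```python
from collections import defaultdict

lam = (5, 3, 2)                      # the running example lambda = (5, 3, 2)
cells = [(i, j) for i, r in enumerate(lam, 1) for j in range(1, r + 1)]
conj = [sum(1 for r in lam if r >= j) for j in range(1, lam[0] + 1)]  # conjugate partition

def hook(i, j):
    """Hook length h_lambda(i, j) = lam_i - i + lam'_j - j + 1."""
    return lam[i - 1] - i + conj[j - 1] - j + 1

n = hook(1, 1)                       # here n = lam_1 + ell(lam) - 1 = 7

# Group the boxes into the diagonals D_k(lam): box (i, j) lies on diagonal k = lam_1 + i - j.
diagonals = defaultdict(list)
for (i, j) in cells:
    diagonals[lam[0] + i - j].append((i, j))

def diag_coords(i, j):
    """lambda-diagonal coordinates <k, delta> of the box (i, j)."""
    k = lam[0] + i - j
    delta_k = max(min(a, b) for (a, b) in diagonals[k])
    return k, delta_k - min(i, j) + 1

print("n =", n)
print("hook lengths:", {b: hook(*b) for b in cells})
print("diagonals   :", {k: sorted(v) for k, v in sorted(diagonals.items())})
print("coords of (2,2):", diag_coords(2, 2))   # -> (5, 1)
```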
A filling f is a (weak) increasing tableau (of shape λ) whenever f is a reverse plane partition which is strictly increasing with respect to . We denote by (λ) the set of increasing tableaux of shape λ. We define standard Young tableaux (of shape λ) as a bijective increasing tableau f : (λ) ⟶{1,…,|λ|}. We write (λ) for the set of standard Young tableaux of shape λ. See <ref> for an example of each of the previous notions for λ = (5,3,2). §.§ The classical story We recall the classical way to present the Robinson–Schensted–Knuth correspondence. For more details, we invite the reader to look at the following references: <cit.>, <cit.>. Let us first recall the Schensted row-insertion. Let k ∈ℕ. The Schensted row-insertion of k, denoted k Sch.⟶ - is a function on semi-standard Young tableaux defined as follows. Let f be a semi-standard Young tableau of shape λ. Then the filling (k Sch.⟶ f ) = g is obtained thanks to the following procedure: * Put i'=1 and x = k; * If it exists, let j' be the smallest index such that x < f(i',j'), otherwise, we put j' = λ_i' + 1; * If (i',j') ∉(λ), then we put g(i',j') = x and for (i,j) ∈(λ) such that i ⩾ i' we put g(i,j) = f(i,j), and we are done; * Otherwise, we put g(i',j') = x and for j ≠ j' such that (i',j) ∈(λ), we put g(i',j) = f(i',j); put x = f(i',j'), we increase i' by 1, and we come back to step 2). We end with a filling g of the integer partition obtained from λ by adding the box (i',j') from step 3) of the algorithm. We illustrate how the Schensted row-insertion works thanks to the following example. Consider f to be the semi-standard Young tableau of shape λ= (5,2) below (<ref>). We obtain 1 Sch.⟶ f by replacing the value in the box (1,2) by 1, the value in the box (2,1) by 2, by adding a box at (3,1) and giving it the value 3. The filling obtained following the Schensted row-insertion algorithm is illustrated below (<ref>). We greyed the modified boxes and framed the added box. Remark that this new filling is a semi-standard Young tableau of shape μ = (5,2,1). We can now present the RSK correspondence. The RSK correspondence is a map from nonnegative integer matrices and pairs of semi-standard Young tableaux of the same shape, described as follows: * From A = (a_i,j) a n × m a nonnegative integer matrix, consider the associated two-line array, w_A = ( i_1 i_2 … i_s j_1 j_2 … j_s ) such that, for any (i,j) ∈{1, …,n}×{1,…,m}, there are a_i,j copies of the column ( i j ), and all the columns are in lexicographic order, meaning: * i_1 ⩽…⩽ i_s, and; * if i_p = i_p+1 then j_p ⩽ j_p+1. They are usually called biwords. * We construct two sequence of semi-standard Young tableau (P(k))_0 ⩽ k ⩽ s and (Q(k))_0 ⩽ k ⩽ s as it follows: * we begin with P(0) = Q(0) = ∅; * for all k ∈{1, …,s}, we put P(k) = j_k Sch.⟶ P(k-1). * for all k ∈{1, …,s}, we get Q(k) from Q(k-1) by recording i_k in box created when passing from P(k-1) to P(k). * We define (A) = (P(s),Q(s)). Let us take A = ( 1 0 3 0 2 1 1 1 0 ). Then we get w_A = ( 1 1 1 1 2 2 2 3 3 1 3 3 3 2 2 3 1 2). In <ref>, we explicit the step-by-step calculations of (P(k), Q(k))_1 ⩽ k ⩽ 9. Here (A) = (P(9),Q(9)). Let n,m ∈ℕ^*. The RSK correspondence gives a bijection from n × m nonnegative integer matrices to pairs (P,Q) of semi-standard Young tableaux of the same shape such that their entries are from 1 to m for P, and from 1 to n for Q. A well-known combinatorial consequence is the Cauchy identity. Before stating it, we recall what the Schur polynomials are. Let n ∈ℕ^*, and x_1,…,x_n be n formal variables. 
Consider λ to be a nonzero integer partition. We define the Schur polynomial of λ as follows: s_λ(x_1,…,x_n) = ∑_f ∈(λ, n-1)∏_b ∈(λ) x_f(b)+1. We write x_f(b)+1 instead of x_f(b) because we are considering weak semi-standard Young tableaux. Note that this is indeed a homogenous symmetric polynomial of degree |λ|. Moreover, for any 1 ⩽ m ⩽ n, then (s_λ)_λ⊢ m gives a basis of the vector space of the homogeneous symmetric polynomials of degree m. For n=2 and λ = (2,1) we get s_λ (x_1,x_2) = x_1^2 x_2 + x_1 x_2^2. For any n,m ∈ℕ^*, and for any x_1,…, x_n and y_1,…,y_m sets of formal variables, we have ∑_λ s_λ(x_1,…,x_n) s_λ(y_1,…,y_m) = ∏_i=1^n ∏_j=1^m 11 - x_i y_j where the sum is over all the integer partitions λ. The RSK correspondence induces a bijection from permutations of 𝔖_n to pairs of standard Young tableaux (P,Q) of size n. We recover the so-called Robinson–Schensted correspondence. It allows us to establish combinatorially the following representation-theoretic identity n! = ∑_λ⊢ n t_λ^2, where t_λ is both the number of standard Young tableaux of shape λ, with values in {1,…, n}, and the dimension of the irreducible representation of 𝔖_n corresponding to the partition λ. §.§ The Greene–Kleitman invariant Let G=(G_0,G_1) be a directed graph. Assume that G is acyclic. Consider a filling f : G_0 ⟶ℕ of G. We assign to any ℓ-tuple of paths γ∈Π(G)^ℓ in G a f-weight defined by _f(γ) = ∑_v ∈(γ) f(v). Set M_0^G(f) = 0, and for all integers ℓ⩾ 1, M_ℓ^G(f) = max({_f(γ) |γ∈Π(G)^ℓ}). We define the Greene–Kleitman invariant of f in G as _G(f) = (M_ℓ^G(f) - M_ℓ-1^G(f) )_ℓ⩾ 1. See <ref> for an explicit computation example. Let G be an acyclic-directed graph and f be a filling of G. The integer sequence _G(f) is an integer partition of length the maximal cardinality of an antichain in G. Moreover, |_G(f)| = ∑_v ∈ G_0 f(v). §.§ The Gansner story In the following, we present another way to realize the RSK correspondence. We refer the reader to <cit.> for more details. Let A = (a_i,j)_1 ⩽ i,j ⩽ n be a n × n integer matrix. We construct a directed graph G_A where the vertices are labelled by (i,j) for 1 ⩽ i,j ⩽ n, and the arrows are given by (i,j) ⟶ (i+1,j) and (i,j) ⟶ (i,j+1). For 1 ⩽ i ⩽ n, consider G_A^[i,-] the full subgraph of G_A whose vertices are (k,j) for 1 ⩽ k ⩽ i and 1 ⩽ j ⩽ n. The coefficients of A endows the graph G_A^[i,-] with a filling f_A^[i,-]. We define a sequence of integer partitions (ν^i)_1 ⩽ i ⩽ n by ∀ i ∈{1,…,n}, ν^i = _G_A^[i,-](f_A^[i,-]). Analogously, by considering, for 1 ⩽ j ⩽ n, G_A^[-j] the full subgraph of G_A whose vertices are (i,k) for 1 ⩽ i ⩽ n and 1 ⩽ j ⩽ k, we define a sequence of integer partitions (μ^j)_1 ⩽ j ⩽ n by ∀ j ∈{1,…,n}, μ^j = _G_A^[-,j](f_A^[-,j]). Note ν^n = μ^n. Moreover, we can show that ν^i⊆ν^i+1 for all 1 ⩽ i < n, and μ^j ⊆μ^j+1 for all 1 ⩽ j < n. The pairs of integer partitions sequences ((ν_i)_1 ⩽ i ⩽ n, (μ_j)_1 ⩽ j ⩽ n) are called the Greene–Kleitman invariants of A. They allow us to recover (A). The semi-standard Young tableau P(s) is the filling of μ^n obtained by labelling j the boxes in μ^j ∖μ^j-1. The other one, Q(s), is constructed similarly but with the sequence (ν^i). In <ref>, we give the explicit calculations of the sequences (μ^j) and (ν^i) for the matrix A of <ref>. §.§ The generalized Gansner story We can display these sequences (μ^j)_1 ⩽ j ⩽ n and (ν^i)_1 ⩽ j ⩽ n as a reverse plane partition of shape the n× n box partition. We first locate the sink of the subgraph taken to calculate ν^j or μ^i. 
This sink corresponds to a box in the Young tableau we want. We fill the boxes in the same diagonal with the parts of ν^j or μ^i from the bottom left to the top right. We can look at <ref> to see how it goes with the results of <ref>. We can generalize this way of applying to a correspondence from fillings of a Ferrers diagram of any fixed integer partition λ to reverse plane partitions of shape λ. Let f : (λ) ⟶ℕ be a filling of (λ) We consider the directed graph G_λ associated to λ whose vertex set is (λ) and the arrows are given by (i,j) ⟶ (i,j+1) and (i,j) ⟶ (i+1,j). Indeed G_λ corresponds to the Haase diagram of ((λ), ). Assume that λ∈_n. For each k ∈{1,…,n}, we consider G_λ^[k] the full subgraph of G_λ whose vertices are boxes in □_k(λ). Write f^[k] for the induced filling of G_λ^[k] from f of G_λ. We define an integer partition π^k by: π^k = _G_λ^[k](f^[k]). We can show that ℓ(π^k) ⩽# D_k(λ), as there exists γ∈Π(G_λ^[k])^#D_k(λ) such that (γ) = (G_λ^[k])_0. So we can place the values of π^k in diagonal D_k(λ). We define a filling _λ(f): (λ) ⟶ℕ as follows. Using the λ-diagonal coordinates for the boxes of λ, we compute _λ(f) by: ∀k,δ_λ∈(λ), _λ(f)(k,δ_λ) = π^k_δ. See <ref> for a detailed example. For any nonempty integer partition λ, The map _λ is a one-to-one correspondence from fillings of (λ) to (λ). By reversing the arrows (i,j) ⟶ (i+1,j) in G_λ, and by proceeding to the same calculations that defined _λ, it realizes the Hillman–Grassl correspondence. See <cit.> for more details. We introduce some notations before stating a combinatorial identity, consequence of <ref>. Fix n ∈ℕ^*. Consider λ∈_n. We assign a weight to the boxes of λ, using the λ-diagonal coordinates, as follows: ∀ b = k,δ_λ∈(λ), w_λ,b(x_1,…,x_n) = ∏_1 ⩽ℓ⩽ n, b ∈□_ℓ(λ) x_ℓ. We define the trace generating function for (λ): ρ_λ(x_1,…,x_n) = ∑_f ∈(λ)∏_ℓ,ε_λ∈(λ) x_ℓ^f(ℓ,ε_λ). Let n ⩾ 1 and λ∈_n. Let x_1,…,x_n be n formal variables. We have ρ_λ(x_1,…,x_n) = ∏_b ∈(λ)11 - w_λ,b(x_1,…,x_n). This result induces a well-known equality involving the norm-generating function of (λ), previously proved by R. Stanley <cit.> using entirely different techniques. Precisely, by setting σ(f) = ∑_b ∈(λ) f(b) and mapping all the x_i to x, we have ∑_f ∈(λ) x^σ(f) = ∏_b ∈(λ)11 - x^h_λ(b). § TOOLS FROM COXETER ELEMENTS In this section, we define some combinatorial objects related to Coxeter elements that is useful for presenting and studying our extended version of Gansner's RSK correspondence. §.§ (Type A) Coxeter elements For any n ⩾ 1, let 𝔖_n+1 be the symmetric group on n+1 letters. For 1 ⩽ i < j ⩽ n+1, write (i,j) for the transposition exchanging i and j. For 1 ⩽ i ⩽ n, let s_i be the adjacent transposition (i,i+1). Let Σ_n be the set of the adjacent transpositions of 𝔖_n+1. Recall that 𝔖_n+1 admits a presentation in terms of generators and relations using Σ_n as follows: 𝔖_n+1 = ⟨Σ_n | [l] s_i^2 = 1 for i ∈{1,…,n} s_i s_i+1 s_i = s_i+1 s_i s_i+1 for i ∈{1,…,n-1} s_i s_j = s_j s_i for i,j ∈{1,…,n} such that |i-j| > 1 }.⟩ For any w ∈𝔖_n+1, call an expression of w a way to write w as a product of transpositions in Σ_n. The length of w, denoted by ℓ(w), is the minimal number of transpositions in Σ_n needed to express w. Whenever ℓ(s w) < ℓ(w) for some s ∈Σ_n, we say that s is initial in w. Similarly, we call s ∈Σ_n final in w whenever ℓ(w s) < ℓ (w). A Coxeter element (of 𝔖_n+1) is an element c ∈𝔖_n+1 which can be written as a product of all the transpositions of Σ_n, in some order, where each of them appears precisely once. 
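This definition is easy to experiment with on a computer. The following sketch composes a word of adjacent transpositions, each used exactly once, and returns the resulting cycle. The composition convention, with the rightmost transposition acting first, is an assumption made here; with the opposite convention one obtains the inverse Coxeter element. The word below is the one of the example that follows.

```python
from functools import reduce

n = 8                                     # working in S_{n+1} = S_9

def s(i):
    """The adjacent transposition s_i = (i, i+1), as a dict on {1, ..., n+1}."""
    p = {x: x for x in range(1, n + 2)}
    p[i], p[i + 1] = i + 1, i
    return p

def compose(p, q):
    """Composition p o q (q is applied first, as for functions)."""
    return {x: p[q[x]] for x in q}

def cycle_through(p, start=1):
    """The cycle of p containing `start` (here p is an (n+1)-cycle, so this is all of it)."""
    cyc, x = [start], p[start]
    while x != start:
        cyc.append(x)
        x = p[x]
    return tuple(cyc)

# The word s_2 s_1 s_3 s_6 s_5 s_4 s_8 s_7 of the example below,
# each adjacent transposition appearing exactly once.
word = [2, 1, 3, 6, 5, 4, 8, 7]
c = reduce(compose, (s(i) for i in word))
print(cycle_through(c))                   # -> (1, 3, 4, 7, 9, 8, 6, 5, 2)
```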
The permutation c = s_2 s_1 s_3 s_6 s_5 s_4 s_8 s_7 = (1,3,4,7,9,8,6,5,2) is a Coxeter element of 𝔖_9. Note that s_2, s_6 and s_8 are initial in c, and s_1, s_4 and s_7 are final in c. First, we state the result of conjugating a Coxeter element with one of its initial or final adjacent transpositions. Let c ∈𝔖_n+1 be a Coxeter element. For any s ∈Σ_n, either initial or final in c, the permutation scs is again a Coxeter element of 𝔖_n+1. Then, as observed in <ref>, the following lemma allows us to write any Coxeter element of 𝔖_n+1 as a long cycle of a precise form. It is a consequence of <cit.>, and it is helpful for working explicitly with Coxeter elements. An element c ∈𝔖_n+1 is a Coxeter element if and only if c is a long cycle which can be written as follows c = (c_1,c_2, …, c_m, c_m+1, …, c_n+1) where c_1 =1 < c_2 <… <c_m = n+1 > c_m+1 > … > c_n+1 > c_1 = 1. Consider a Coxeter element c ∈𝔖_n+1. Write it c = (c_1, …, c_n+1) as in the previous lemma. We define the left part of c as Ł_c= {c_2,…,c_m-1} and the right part of c as _c = {c_m+1,…,c_n+1}. The following lemma characterizes initial and final adjacent transpositions in terms of Ł_c and _c. Let c ∈𝔖_n+1 be a Coxeter element. For any k ∈{2,…,n-1}, * s_k is final in c if and only if k ∈Ł_c and k+1 ∈_c, and, * s_k is initial in c if and only if k ∈_c and k+1 ∈Ł_c. In the special cases, * if s_1 is initial in c, then 2 ∈Ł_c; otherwise s_1 is final in c, and 2 ∈_c; * if s_n is final in c, then n ∈Ł_c; otherwise s_n is initial, and n ∈_c. We recall, via the following definition, that we can associate to any Coxeter element c ∈𝔖_n+1 a unique A_n type quiver. Let c ∈𝔖_n+1 be a Coxeter element. We define the quiver Q(c) as follows: * its set of vertices is Q(c)_0 = {1,…,n}; * its set of arrows is given by an arrow between i and i+1, for all i ∈{1,…,n-1}: * we have i ⟶ i+1 if s_i precedes s_i+1 in a reduced expression of c; * we have i ⟵ i+1 otherwise. For c = (1,3,4,7,9,8,6,5,2), we obtain the quiver Q(c) : 1 ⟵ 2 ⟶ 3 ⟶ 4 ⟵ 5 ⟵ 6 ⟶ 7 ⟵ 8. Let n ∈ℕ^*. The map c ⟼ Q(c) realizes a one-to-one correspondence from Coxeter elements of 𝔖_n+1 to A_n type quivers. Moreover: * v is a source of Q(c) if and only if s_v is initial in c; * v is a sink of Q(c) if and only if s_v is final in c. This map is crucial for useful links with representation-theoretic results. Finally, we give a tiny result that links the inverse operation on Coxeter elements of 𝔖_n+1 with the opposite operation on A_n type quivers. Let c ∈𝔖_n+1 be a Coxeter element. Then Q(c^-1) = Q(c)^. §.§ Interval bipartitions Let ⊂ℕ^*. A bipartition of is a pair (Ł,) such that Ł∪ = and Ł∩ = ∅. We do not identify the pair (Ł,) with the pair (,Ł). The following result is a direct consequence of <ref>. Let n ⩾ 1. The map Ψ_n+1 : {Coxeter elements of 𝔖_n+1}⟶{Bipartitions of {2, …, n}}, c ⟼ (Ł_c, _c), is bijective. In the following, we focus on bipartitions of intervals in ℕ^*. An interval (in ℕ^*) is a set i,j = {i,i+1,…,j} for some i,j ∈ℕ^* with i ⩽ j. For all i ∈ℕ^*, we set i,i = i. We denote by ℐ the set of all the intervals in ℕ^*, and by ℐ_n the subset of those included in {1,…,n+1}. An interval bipartition is a bipartition (Ł,) of an interval in ℕ^*. Call it elementary whenever either Ł = = ∅, or both 1 ∈Ł and max(Ł∪) ∈. Fix (Ł,) as a bipartition of some finite set ⊂ℕ^*. 
If Ł is nonempty, write Ł = {ℓ_1 < ℓ_2 < … < ℓ_p}. We define the integer partition (Ł,), for all i ∈{1,…,p}, by (Ł,)_i = #{r ∈ |ℓ_i < r} if Ł is not empty, and (Ł,) = (0) otherwise. For any bipartition (Ł,) of some finite set ⊂ℕ^*, there exists an elementary interval bipartition (Ł',') such that (Ł,) = (Ł',') If (Ł,) = (0), then we set Ł' = ' = ∅ and we are done. Otherwise consider 𝐌 = {ℓ∈Ł|ℓ < max()} and 𝐒 = {r ∈| r > min(Ł)}. By construction, we easily check that (Ł,) = (𝐌,𝐒). Let p = #(∪), and consider the stricly increasing map from φ: {1,…, p}⟶𝐌∪𝐒. By setting Ł' = φ^-1(𝐌) and ' = φ^-1(𝐒), we can check that (Ł',') is an elementary interval bipartition of 1,p, and (Ł',') = (𝐌,𝐒) = (Ł,). From now on, we assume that (Ł,) is an elementary interval bipartition. By also writing = {r_1 < … < r_q}, we can picture (Ł,) by its Ferrers diagram: we have (i,j) ∈((Ł,)) whenever ℓ_i < r_q-j+1. It allows us to label the ith row of ((Ł,)) by ℓ_i and the jth column by r_q-j+1. Given a Coxeter element c ∈𝔖_n, we write (c) for the integer partition ({1}∪Ł_c, _c ∪{n}). Thanks to the observation above, we introduce the c-coordinates of any box in ((c)) as it follows. By setting Ł= Ł_c ∪{1} = {ℓ_1,…,ℓ_p} and = _c ∪{n+1} = {r_1 < … < r_q}, we write ℓ_i, r_q-j+1_c for the box (i,j) ∈((c)) whenever ℓ_i < r_q-j+1. See <ref> for an example of such an object. For any integer partition λ, there exists a unique elementary interval bipartition (Ł,) such that λ = (Ł,). If λ = (0), then we set Ł = = ∅ and we are done. Otherwise, we label the segments of the southeast border of the shape of (λ) from 1 to its length, going from the top-right to the bottom-left. This defines a label for each row and each column of (λ). We set Ł, the set of labels assigned to the rows, and , the set of labels assigned to the columns. We can easily check that (Ł,) is an elementary interval bipartition (of the interval 1,h_λ(1,1)+1). By construction, it is unique. Let n ∈ℕ^*. For any λ∈_n, there exists a unique Coxeter element c of 𝔖_n+1 such that (c) = λ. It follows automatically from <ref> and <ref>. Given a λ∈_n, we denote by (λ) the unique Coxeter element of 𝔖_n such that ((λ)) = λ. Let c ∈𝔖_n+1. For k ∈{1,…,n}, we have: #D_k((c)) = min(#{ℓ∈Ł_c∪{1}|ℓ⩽ k}, #{r ∈_c ∪{n+1}| r > k}) This result follows by interpreting D_k(λ) as "the diagonal" of the rectangle made of boxes (i,j) such that ℓ_i ⩽ k < r_λ_1 -j+1. §.§ Auslander–Reiten quiver Let c ∈𝔖_n+1 be a Coxeter element. We define the Auslander–Reiten quiver of c (c) as the oriented graph satisfying the following conditions: * The vertices of (c) are the transpositions (i,j), with i<j, in 𝔖_n+1; * The arrows of (c) are given, for all i < j, by * (i,j) ⟶ (i,c(j)) whenever i < c(j); * (i,j) ⟶ (c(i),j) whenever c(i) < j. Let us state an evident and valuable proposition about those quivers. For any Coxeter element c ∈𝔖_n+1, The Auslander–Reiten quiver (c) is an acyclic connected directed graph. Moreover: * its sources are the initial adjacent transpositions in c, and * its sinks are the final adjacent transpositions in c. To construct recursively such a graph, we can first find the initial adjacent transpositions of c, which are all the sources, and step by step, using the second rule, construct the arrows and the vertices of (c) until we reach all the transpositions of 𝔖_n+1. Note that the sinks of (c) are given by the final adjacent transpositions of c. See <ref> for an explicit example. 
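The recursive construction just described is easily automated. The sketch below builds the vertices and arrows of (c) directly from the two arrow rules, for a Coxeter element given in cycle form, and reads off its sources and sinks; the test case c = (1,2,4,3) ∈𝔖_4, that is c = s_1 s_3 s_2 with the convention that the rightmost transposition acts first, is chosen purely for illustration.

```python
from itertools import combinations

def coxeter_cycle_to_map(cycle):
    """Turn a cycle (c_1, ..., c_{n+1}) into the permutation x -> c(x)."""
    return {cycle[i]: cycle[(i + 1) % len(cycle)] for i in range(len(cycle))}

def ar_quiver(cycle):
    """Vertices and arrows of AR(c) for a Coxeter element given in cycle form."""
    c = coxeter_cycle_to_map(cycle)
    vertices = list(combinations(range(1, len(cycle) + 1), 2))   # transpositions (i, j), i < j
    arrows = []
    for (i, j) in vertices:
        if i < c[j]:
            arrows.append(((i, j), (i, c[j])))
        if c[i] < j:
            arrows.append(((i, j), (c[i], j)))
    return vertices, arrows

# Sources (no incoming arrow) and sinks (no outgoing arrow) can be compared with
# the initial and final adjacent transpositions of c.
verts, arrs = ar_quiver((1, 2, 4, 3))
sources = [v for v in verts if all(tgt != v for (_, tgt) in arrs)]
sinks = [v for v in verts if all(src != v for (src, _) in arrs)]
print(len(verts), "vertices,", len(arrs), "arrows; sources:", sources, "sinks:", sinks)
# -> 6 vertices, 6 arrows; sources [(1, 2), (3, 4)], sink [(2, 3)]
```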
Moreover, one can notice that we can construct (c) from any transposition (i,j) ∈𝔖_n+1 using the second rule. Let c ∈𝔖_n+1 be a Coxeter element. Then (c^-1) = (c)^. The Auslander–Reiten quiver (c) has a representation-theoretic meaning for the quiver Q(c)^ (see <ref>). § STORABILITY In this section, we first recall the notion of storability, introduced in <cit.>, and we enumerate a few primary results. Then, given a positive integer n ∈ℕ^* and a Coxeter element c ∈𝔖_n+1, we introduce the notion of c-storabilty for n-tuples of integer partitions. We highlight their bijective link with the reverse plane partitions of (c). §.§ Storable pairs and storable triplets Let λ and μ be two integer partitions. The pair (λ, μ) is storable whenever for all i ∈ℕ^*, λ_i ⩾μ_i ⩾λ_i+1 (we can add zero parts if needed). Such a pair is strongly storable if we have λ_1 = μ_1. We can picture storable pairs as follows. See a partition λ as a right-infinite row of forty-five-degree rotated squares filled with the parts of λ from left to right. We can add infinitely many zeros to the right. See μ in the same way. We say that two such rows of squares are intertwining if, for all i ⩾ 1, the ith square of the one row is placed between the ith and the (i+1)th squares of the other row. Then the pair (λ, μ) is a storable pair if and only if we can intertwine the two rows of filled squares such that when we read the two rows together from left to right, the values are still decreasing (<ref>). In other words, the square μ_i intertwines the squares λ_i and λ_i+1 whenever λ_i ⩾μ_i ⩾λ_i+1. We give two results that arise from the definition. Let λ and μ be two integer partitions. * If (λ, μ) and (μ, λ) are both storable, then λ = μ; * If (λ, μ) is storable, then ℓ(λ) ∈{ℓ(μ), ℓ(μ) + 1}. Let λ, μ and ν be three integer partitions. The triplet (λ, μ, ν) is storable if the two following conditions are satisfied: * either (λ, μ) or (μ, λ) is a storable pair; * either (μ, ν) or (ν, μ) is a storable pair. More precisely, we say that (λ, μ, ν) is: (⊞⊞) (⊞,⊞)-storable if (λ, μ) and (ν, μ) are storable pairs; (⊞⊟) (⊞,⊟)-storable if (λ, μ) and (μ, ν) are storable pairs; (⊟⊞) (⊟,⊞)-storable if (μ, λ) and (ν, μ) are storable pairs; (⊟⊟) (⊟,⊟)-storable if (μ, λ) and (μ, ν) are storable pairs. Such a triplet is strongly storable whenever λ_1 = μ_1 or μ_1 = ν_1. We illustrate the four storability configurations in <ref>. We introduce the following notion, which will be helpful in <ref>, via representation-theoretic interpretation introduced in <ref>. Let λ, μ, and ν be three integer partitions. Assume that (λ, μ, ν) is a storable triplet. We define the diagonal transformation of μ in (λ, μ, ν), denoted (λ, μ, ν), to be the integer partition θ = (θ_1, θ_2, …) such that: * if (λ, μ, ν) is (⊞, ⊞)-storable, then we define, for all i ⩾ 1, θ_i = max(λ_1, ν_1) if i = 1 min(λ_i-1,ν_i-1) + max(λ_i, ν_i) - μ_i-1 otherwise; * if (λ, μ, ν) is (⊞, ⊟)-storable, then we define, for all i ⩾ 1, θ_i = λ_1 + max(λ_2, ν_1) - μ_1 if i = 1 min(λ_i,ν_i-1) + max(λ_i+1, ν_i) - μ_i otherwise; * if (λ, μ, ν) is (⊟, ⊞)-storable, then we define, for all i ⩾ 1, θ_i = ν_1 + max(λ_1, ν_2) - μ_1 if i = 1 min(λ_i-1,ν_i) + max(λ_i, ν_i+1) - μ_i otherwise; * if (λ, μ, ν) is (⊟, ⊟)-storable, then we define, for all i ⩾ 1, θ_i = min(λ_i, ν_i) + max(λ_i+1, ν_i+1) - μ_i+1. We can picture the diagonal operation as doing local operations for each square of μ in the diagram representing the storable triple (λ, μ, ν) (<ref>). 
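The four cases of the definition can be implemented verbatim. In the sketch below, the storability configuration is passed explicitly, since the four configurations are not mutually exclusive, and partitions are padded with zero parts as needed; the final assertion illustrates the first of the elementary properties listed just below.

```python
def part(p, i):
    """i-th part of a partition (1-indexed), with trailing zero parts."""
    return p[i - 1] if 1 <= i <= len(p) else 0

def is_storable(a, b):
    """(a, b) is a storable pair iff a_i >= b_i >= a_{i+1} for all i."""
    m = max(len(a), len(b)) + 1
    return all(part(a, i) >= part(b, i) >= part(a, i + 1) for i in range(1, m + 1))

def diagonal_transform(lam, mu, nu, config):
    """Diagonal transformation of mu inside the storable triple (lam, mu, nu).

    `config` is one of "++", "+-", "-+", "--", matching the four storability
    configurations of the definition above; it is passed explicitly because the
    configurations need not be exclusive.
    """
    L = lambda k: part(lam, k)
    M = lambda k: part(mu, k)
    N = lambda k: part(nu, k)
    m = max(len(lam), len(mu), len(nu)) + 1
    theta = []
    for i in range(1, m + 1):
        if config == "++":
            t = max(L(1), N(1)) if i == 1 else min(L(i - 1), N(i - 1)) + max(L(i), N(i)) - M(i - 1)
        elif config == "+-":
            t = L(1) + max(L(2), N(1)) - M(1) if i == 1 else min(L(i), N(i - 1)) + max(L(i + 1), N(i)) - M(i)
        elif config == "-+":
            t = N(1) + max(L(1), N(2)) - M(1) if i == 1 else min(L(i - 1), N(i)) + max(L(i), N(i + 1)) - M(i)
        else:  # "--"
            t = min(L(i), N(i)) + max(L(i + 1), N(i + 1)) - M(i + 1)
        theta.append(t)
    while theta and theta[-1] == 0:
        theta.pop()
    return tuple(theta)

# Sanity check on small data chosen for illustration: Delta(lam, mu, mu) = lam
# whenever (lam, mu) is a storable pair.
lam, mu = (4, 2, 1), (3, 2)
assert is_storable(lam, mu)
assert diagonal_transform(lam, mu, mu, "++") == lam
```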
Remark that λ and ν play symmetric roles: (λ, μ, ν) = (ν, μ, λ). Here are some elementary statements we get for the diagonal transformation. Let λ, μ and ν be three integer partitions. When it is well-defined, consider θ = (λ, ν, μ). * If (λ, μ) is a storable pair, then (λ,μ, μ) = λ. * If (λ, μ, ν) is (⊞, ⊞)-storable, then (λ, θ, ν) is strongly (⊟, ⊟)-storable. * If (λ, μ, ν) is (⊞,⊟)-storable, then (λ, θ, ν) is (⊞, ⊟)-storable. * If (λ, μ, ν) is (⊟,⊞)-storable, then (λ, θ, ν) is (⊟, ⊞)-storable. * If (λ, μ, ν) is (⊟,⊟)-storable, then (λ, θ, ν) is (⊞, ⊞)-storable. * If (λ, μ, ν) is either (⊞, ⊞)-storable, (⊞, ⊟)-storable, (⊟, ⊞)-storable or strongly (⊟, ⊟)-storable, then (λ, θ, ν) = μ. §.§ c-storability For ⊂ℤ and j ∈ℤ, we set [j] = {a+j | a ∈ A}. Let n ∈ℕ^*. Let π = (π^k)_1 ⩽ k ⩽ n a n+2-tuple of integer partitions. In the following, we set π^0 = π^n+1 = (0) to make things more convenient with the following definitions. We say that π is (strongly) (⊟, ⊟)-storable at k if (π^k-1, π^k, π^k+1) is a (strongly) (⊟, ⊟)-storable triplet. We use the same formulation for the three other storability configurations. Let (Ł,) be a pair of disjoint subsets of {1,…,n+1} such that min(Ł,) ∈Ł and max(Ł,) ∈. A n-tuple of integer partitions π = (π^k)_1 ⩽ k ⩽ n is said to be (Ł,)-storable whenever the following assertions hold: * π is (⊞, ⊞)-storable at i for i ∉Ł∪[-1]; * π is (⊞, ⊟)-storable at i for i ∈[-1] ∖Ł; * π is (⊟, ⊞)-storable at i if i∈Ł∖[-1]; * π is (⊟, ⊟)-storable at i for i ∈Ł∩[-1]. * π^k=(0) for k ∉{min(Ł), …, max()}. Let c ∈𝔖_n+1 be a Coxeter element. A n-tuple of integer partitions π is c-storable whenever π is (Ł_c∪{1},_c ∪{n+1})-storable. Note that (a) and (e) are not exclusive. Fix n=5 and the Coxeter element c = (1,2,4,6,5,3) ∈𝔖_n. The tuple π = ((2), (4,2),(2,1),(3,2),(3)) is c-storable. Thanks to the drawing (<ref>), we can figure out the c-storability property. Given a Coxeter element c ∈𝔖_n+1, we denote by (c) the set of n-tuples of integer partitions that are c-storable. For k ∈{1,…,n}, we define the diagonal transformation of π at k, denoted by _k(π), to be the n-tuple of integer partitions obtained from π by replacing π^k with (π^k-1,π^k, π^k+1). The following lemma occurs by <ref>. Let c ∈𝔖_n+1 and π∈(c). For any k ∈{1,…,n}: * if s_k is initial in c, then then _k(π) ∈(s_k c s_k); * if s_k is final in c, then _k(π) ∈(s_k c s_k), and _k(π) is strongly (⊟, ⊟)-storable at k; * otherwise _k(π) ∈(c). This result will be beneficial in <ref>. In the remainder of this section, we show that we have a bijection from (c) to ((c)). We first exhibit a symmetric operation. We define the reverse map _n on {1,…,n} by _n(k) = n+1-k for all k ∈{1,…,n}. It induces an action on: * subsets of {1,…,n}: for any ⊆{1,…,n}, we set _n() = {_n(a) | a ∈}; * Coxeter elements of 𝔖_n+1: for any Coxeter element c ∈𝔖_n+1, write _n+1(c) the Coxeter element obtained by conjugating c with _n+1 seen as a permutation; * n-tuples of integer partitions: given π = (π^k)_1 ⩽ k ⩽ n, we set _n(π) = (π^_n(k))_1 ⩽ k ⩽ n. Consider c ∈𝔖_n+1 a Coxeter element. For any π∈(c), we have _n(π) ∈(_n+1(c)). The following result allows us to control the length of the integer partitions that constitute any c-storable n-tuple of integer partitions. Let c ∈𝔖_n+1 be a Coxeter element. For all π = (π^k)_1 ⩽ k ⩽ n∈(c), and for all 1 ⩽ k ⩽ n, ℓ(π^k) ⩽min(#{i ∈Ł_c ∪{1}| i ⩽ k}, #{j ∈_c ∪{n+1}| j > k}). Let c ∈𝔖_n+1 and π∈(c). We set Ł = Ł_c ∪{1} and = _c ∪{n+1}. We will prove that ℓ(π^k) ⩽#{i ∈Ł, i ⩽ k} by induction over 1 ⩽ k ⩽ n. 
By hypothesis, as 1 ∈Ł, we have that (π^1,π^0=(0)) is a storable pair. Via <ref>, we get that ℓ(π^1) ⩽ℓ(π^0) + 1 = 1. In addition, #{i ∈Ł| i ⩽ 1} = 1. Assume that, for a fixed 1 ⩽ k < n, ℓ(π^k) ⩽#{i ∈Ł, i ⩽ k}. We can distinguish two cases. * If k+1 ∉Ł, then π is (⊞, ⊞)-storable or (⊞, ⊟)-storable at k+1. Thus (π^k, π^k+1) is a storable pair. By <ref>, either ℓ(π^k) = ℓ(π^k+1)+1 or ℓ(π^k) = ℓ(π^k+1). In either way, ℓ(π^i+1) ⩽ℓ(π^k) ⩽#{i ∈Ł| i ⩽ k } = #{i ∈Ł| i ⩽ k+1}. * If k+1 ∈Ł, then π is (⊟, ⊞)-storable or (⊟, ⊟)-storable at k+1. Thus (π^k+1, π^k) is a storable pair. By <ref>, either ℓ(π^k+1) = ℓ(π^i)+1 or ℓ(π^k+1) = ℓ(π^i). Either way, ℓ(π^k+1) ⩽ℓ(π^k) +1 ⩽#{i ∈Ł| i ⩽ k } +1 = #{i ∈Ł| i ⩽ k+1}. This completes the proof of the first desired inequality. To get the other one, we play with the symmetry the reverse map gives. By <ref>, we know that _n(π) ∈(_n+1(c)). Therefore we have, for 1 ⩽ k ⩽ n, ℓ(π^k) = ℓ((π)^n+1-k) ⩽{j' ∈_n+1(_c) | j' ⩽ n+2-k} = {j ∈_c | j > k}. We can, therefore, conclude the desired result. Now, thanks to <ref>, we can define a map from (c) to fillings of (c) as follows: Φ_c : {(c) ⟶ {fillings of (c)} π = (π_k)_1 ⩽ k ⩽ n ⟼ (Φ_c(π) : k,r_(c)⟼π^k_r ) . The map Φ_c consists on, for any π∈(c), and k ∈{1,…,n}, placing the kth integer partition π^k of π in the boxes of D_k((c)) in the decreasing order, from right to left. We complete the diagonal with zeros if necessary. Therefore, one can check that the map Φ_c consists of a 135^∘-degree counterclockwise rotation of the picture of the c-storable n-tuples of integer partitions. Then, the c-storability conditions allow us to check that Φ_c(π) is a reverse plane partition of shape (c). For any n ∈ℕ^*, and for any Coxeter element c ∈𝔖_n+1, The map Φ_c induces a bijection from (c) to ((c)). § OVERVIEW OF JORDAN RECOVERABILITY §.§ Quiver representations Let Q=(Q_0,Q_1,s,t) be a finite connected quiver. Consider 𝕂 an algebraically closed field. A (finite-dimensionnal) representation of Q over 𝕂 is a pair E = ((E_q)_q ∈ Q_0, (E_α)_α∈ Q_1) seen as an assignement of: * a finite-dimensional 𝕂-vector space E_q to each q ∈ Q_0; * a linear transformation E_α : E_s(α)⟶ E_t(α) to each α∈ Q_1. Write _𝕂(Q) for the set of the representations of Q over 𝕂. For E ∈_𝕂(Q), we denote by (E) = ((E_q))_q ∈ Q_0 the dimension vector of E. Given E,F ∈_𝕂(Q), a morphism of representations ϕ : E ⟶ F is a collection (ϕ_q)_q ∈ Q_0 seen as an assignement of a linear transformation ϕ_q : E_q ⟶ F_q to each q ∈ Q_0 such that for any α∈ Q_1, F_αϕ_s(α) = ϕ_t(α) E_α. Such a morphism of representations ϕ is an isomorphism whenever, for all q ∈ Q_0, ϕ_q is an isomorphism of vector spaces. We say that two representations, E and F, are isomorphic, and we denote it by E ≅ F whenever an isomorphism exists from E to F. Write _𝕂(E,F) for the 𝕂-vector space of morphisms of representations from E to F. In particular, we set _𝕂(E) = _𝕂(E,E) to refer to the endomorphisms of a representation E. We define the path algebra 𝕂Q to be the 𝕂-vector space generated as a basis by the paths over Q endowed with multiplication acting on the basis as a concatenation of paths. Recall that _𝕂(Q) endowed with the morphisms of representations between them is a category, and this category is equivalent to the category of finitely generated (right) 𝕂Q-modules. See <cit.> for more details. Given F,G ∈_𝕂(Q), we write F ⊕ G the direct sum of F and G. A representation E is said to be indecomposable whenever, if E ≅ F ⊕ G, then F = 0 or G = 0. 
By _𝕂(Q), we denote the isomorphism classes of indecomposable representations of Q. In our setting, we can describe indecomposable representations thanks to intervals in {1,…,n}. Let n ⩾ 1 be an integer. Assume that Q is an A_n type quiver. Given an interval K ⊂{1,…,n}, we denote by X_K the indecomposable representation such that: * (X_K)_q = 𝕂 if q ∈ K, and (X_K)_q = 0 otherwise; * (X_K)_α = _𝕂 if s(α), t(α) ∈ K, and (X_K)_α = 0 otherwise. Let Q be an A_n type quiver. Any indecomposable representation E ∈_𝕂(Q) is isomorphic to X_K for some interval K ⊂{1, …, n}. Note that knowing the indecomposable representations of _𝕂(Q) and knowing the morphisms between them is enough to describe the entire category _𝕂(Q). This data is contained in the so-called Auslander–Reiten quiver of Q (over 𝕂), denoted by _𝕂(Q). The vertices of _𝕂(Q) are the isomorphisms classes of the indecomposable representations of _𝕂(Q), and the arrows correspond to the irreducible representations between them. Let c ∈𝔖_n+1 be a Coxeter element. Then the quivers (c) and _𝕂(Q(c))^ are isomorphic. More precisely, the map Ψ : (c) ⟶_𝕂(Q(c))^ defined by: * Ψ_0((i,j)) = X_i,j-1 for all 1 ⩽ i < j ⩽ n+1, and, * Ψ_1(((i,j),(k,ℓ))) = (X_i,j-1,X_k,ℓ-1) for all arrow ((i,j),(k,ℓ)) in (c), is a quiver (directed graph) isomorphism. To see further details about Auslander-Reiten quivers of A_n type quivers, we refer the reader to <cit.>. To learn more about quiver representation theory and for more in-depth knowledge on the notion of Auslander–Reiten quivers, we invite the reader to look at <cit.>. In the following, we focus on full subcategories of _𝕂(Q) that are closed under sums and summands. Thus, those categories are additively generated by indecomposable representations, up to isomorphism, and are characterized by the sets of indecomposable representations. From now one, we write ℐ_n the set of intervals in {1,…,n}. Given such a category 𝒞, we write (𝒞) for the interval set 𝒥⊆ℐ_n such that 𝒞 is additively generated by the indecomposable representations X_K for K ∈𝒥. Given an interval set 𝒥⊆ℐ_n, we denote by _Q(𝒥) for the category additively generated by the X_K for K ∈𝒥. §.§ Reflection functors In this subsection, we recall the definition of reflection functors for any quiver Q. For our purposes in this paper, defining those functors only on objects is sufficient. We refer the reader to <cit.> and <cit.> for more details. Let Q be an arbitrary quiver and v be a vertex Q. Denote σ_v(Q) the quiver obtained from Q by reversing the directions of the arrows incident to v. If α∈ Q_1 such that v ∈{s(α),t(α)}, denote α̃ the reversed arrow of α in σ_v(Q). Now assume that v is a sink of Q. Consider Ξ = σ_v(Q). The reflection functor ℛ_v^+ : (Q) ⟶(Ξ) is defined as follows. Let E = ((E_q)_q ∈ Q_0, (E_β)_β∈ Q_1) ∈(Q). We set ℛ_v^+(E) = ((F_q)_q ∈Ξ_0, (F_β)_β∈Ξ_1) ∈(Ξ) where * F_q = E_q for q ≠ v and F_v = (⊕_α∈ Q_1, t(α) = v E_α : ⊕ _α∈ Q_1, t(α) = v E_s(α)⟶ E_v ); * F_β= E_β if β∈ Q_1 such that t(β) ≠ v, otherwise F_β̃ : Y_v ⟶ E_s(β) is the composition of the kernel inclusion of F_v to ⊕_α∈ Q_1, t(α) = v E_s(α) with the projection onto the direct summand E_s(β). If v is a source of Q, the reflection functor ℛ_v^- : (Q) ⟶(σ_v(Q)) is defined dually. We refer the reader to <cit.> for an explicit example of the calculation of ℛ^±_v(E) given an A type quiver Q, and a representation E ∈_𝕂(Q). The reflection functors are additive, which implies that we can understand their actions on objects by knowing their actions on indecomposable objects. 
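On interval indecomposables, this action admits a purely combinatorial description, recalled in the propositions below. As an illustration only (the function name and the encoding of an interval K as the pair (b(K), e(K)) are chosen for this sketch), that rule can be implemented as follows.

def reflect_interval(K, v):
    # Returns the interval K' such that R_v^{+/-}(X_K) is isomorphic to X_{K'},
    # or None when K = {v}, since the simple at v is sent to zero.
    b, e = K
    if (b, e) == (v, v):
        return None
    if e == v - 1 or b == v + 1:
        return (min(b, v), max(e, v))                  # K gains the vertex v
    if e == v or b == v:
        return (b, v - 1) if e == v else (v + 1, e)    # K loses the vertex v
    return (b, e)                                      # K is unchanged

# Example at v = 3: X_[1,2] is sent to X_[1,3], and X_[3,5] to X_[4,5].
assert reflect_interval((1, 2), 3) == (1, 3)
assert reflect_interval((3, 5), 3) == (4, 5)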
By the following propositions, we recall the interpretation of reflections functors for Coxeter elements of 𝔖_n+1, and its action on _𝕂(Q), for Q an A_n type quiver. Let c ∈𝔖_n+1 be a Coxeter element. Then if v is either a sink or a source of Q(c), then σ_v(Q(c)) = Q(s_v c s_v). Let Q be an A_n type quiver, v ∈ Q_0 and v≠ K ∈ℐ_n. Write Ξ = σ_v(Q). If v is a sink of Q, then ℛ_v^+(X_K) ≅ X_K'∈(Ξ) where K' = K ∪{v} if either e(K) = v-1 or b(k) = v+1; K ∖{v} if either e(K)=v or b(K) = v; K otherwise.. If v is a source of Q, then ℛ_v^-(X_K) = X_K' where K' is defined as above. Note that, if v is a sink of Q, ℛ_v^+(X_v) = 0, and if v is a source, ℛ_v^-(X_v) = 0. We also recall the following result, which highlights one of the main algebraic uses of those functors. Let Q be a quiver and v be one of its sinks. Write Ξ = σ_v(Q). The reflection functor ℛ_v^+ : (Q) ⟶(Ξ) induces a category equivalence between the full subcategory of (Q) additively generated by the indecomposable representations of Q except the simple projective representation at v and the full subcategory of (Ξ) additively generated by indecomposable representations of Ξ except the simple injective representation at v. The quasi-inverse is induced by the reflection functor ℛ_v^-:(Ξ) ⟶(Q). For more details, see <cit.>. §.§ Jordan recoverability and canonical Jordan recoverability Consider E ∈_𝕂(Q). An endomorphism N : E ⟶ E is nilpotent if N^k = 0 for some integer k ⩾ 1. One can see a nilpotent endomorphism as a collection of nilpotent transformations (N_q)_q ∈ Q_0 with some additional compatibility relations. Write (E) for the set of nilpotent endomorphisms. Assume that (E)= d = (d_q)_q ∈ Q_0. Given N ∈(E), we consider the Jordan form of N_q at each q ∈ Q_0. It gives us a sequence of integer partitions λ^q ⊢ d_q. We refer to λ = (λ^q)_q ∈ Q_0 as the Jordan form of N, and we set (N) = λ. On integer partitions, we consider the dominance order, denote by , defined as follows: for λ and μ two integer partitions of d ∈ℕ, λμ whenever, for all k ⩾ 1, λ_1 + … + λ_k ⩽μ_1+ … + μ_k. We extend this order to n-tuples of partitions. First, let us introduce a notation. Given d = (d_k)_1 ⩽ k ⩽ n∈ℕ^n and λ = (λ^k)_1 ⩽ k ⩽ n a n-tuple of integer, we write λ⊢d whenever λ^k ⊢ d_k for all k ∈{1,…,n}. Now fix d∈ℕ^n, and λ,μ⊢d. We write λμ whenever for all k ∈{1,…, n}, λ^k μ^k. Let Q be an A_n type quiver, and 𝕂 be an algebraically closed field. Consider E ∈_𝕂(Q). * The set (E) is an irreducible algebraic variety. * There exists a maximal value of in (E), with respect to , and it is attained in a dense open set (in Zariski topology) of (E). Note that the previous result can be generalized to finitely generated modules over an arbitrary 𝕂-algebra (see <cit.>). Let Q be an A_n type quiver, and 𝕂 be an algebraically closed field. For all E ∈_𝕂(Q), we call the generic Jordan form data of E, denoted by (E), the maximal value of in (E) This definition is entirely algebraic. We introduce a combinatorial method using Greene–Kleitman invariant calculations to effectively get (E). Fix an A_n type quiver Q. Let E ∈_𝕂(Q). We decompose E as below E ≅⊕_K ∈ℐ_n X_K^m_K, with m_k ∈ℕ. Using this decomposition above allows us to see a representation E, up to isomorphism, as a filling _Q(E) of _𝕂(Q) as follows: ∀ K ∈ℐ_n, _Q(E)(K) = m_K. In the following, for any filling f of _𝕂(Q), we denote by _Q(f) its associated representation in _𝕂(Q), defined up to isomorphism. 
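As a small illustration of the dominance order recalled above (the function names are chosen for this sketch only), it can be checked by comparing partial sums and extended componentwise to n-tuples of integer partitions.

from itertools import zip_longest

def dominated_by(lam, mu):
    # lam and mu are partitions of the same integer d, given as weakly
    # decreasing tuples; returns True when every partial sum of lam is at
    # most the corresponding partial sum of mu.
    assert sum(lam) == sum(mu)
    s_lam = s_mu = 0
    for a, b in zip_longest(lam, mu, fillvalue=0):
        s_lam += a
        s_mu += b
        if s_lam > s_mu:
            return False
    return True

def tuple_dominated_by(lams, mus):
    # Componentwise extension to n-tuples of partitions.
    return all(dominated_by(l, m) for l, m in zip(lams, mus))

# Example: (2,1,1) is dominated by (3,1), which is dominated by (4).
assert dominated_by((2, 1, 1), (3, 1)) and dominated_by((3, 1), (4,))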
For each k ∈{1,…,n}, write _𝕂^[k](Q) for the complete subquiver of _𝕂(Q) whose vertices are given by X_K such that k ∈ K. It induces a filling _Q(E)^[k] of _𝕂(Q). Using <ref> and <ref>, the direct graph _𝕂(Q) is acyclic, which allows us to calculate the Greene–Kleitman invariant of _Q(E)^[k]. Let Q be an A_n type quiver. For any X ∈_𝕂(Q), we have (E) = (__𝕂^(k)(Q)(_Q(E)^[k]) )_k ∈ Q_0. For any A_n type quiver Q, the acyclicity of _𝕂(Q), needed to calculate the Greene–Kleitman invariants, can be justify using <ref> and <ref>. There exists a representation-theoretic explanation: Q is a representation-directed quiver. We refer the reader to <cit.> for more details. The generic Jordan form data is an invariant on _𝕂(Q), but not complete. Nevertheless, we remain interested by determining the subcategories of _𝕂(Q) such that is complete, which brings us to the following definition. Let Q be an A_n quiver. A subcategory 𝒞⊂_𝕂(Q) is Jordan recoverable if, for any tuple of integer partitions λ, there exists a unique (up to isomorphism) E ∈𝒞 such that (E) ≅λ. Different examples are given in <cit.> and <cit.>. It is still difficult to characterize all the Jordan recoverable subcategories of _𝕂(Q). In <ref>, we recall, and restate, a conjecture originally stated in <cit.>. However, we can focus on subcategories in which we can explicit an algebraic inverse to . Fix Q an A_n type quiver. Consider λ be a n-tuple of integer partitions. We can consider the set _𝕂(Q,λ) of representations F ∈_𝕂(Q) such that it admits a nilpotent endomorphism N with Jordan form (N) = λ. We can ask for the existence of a dense open set (for Zariski's topology) Ω⊂_𝕂(Q,λ) such that all the representations in Ω are isomorphic to each other. Via <cit.>, such a set Ω exists. Given Q and A_n type quiver, and any n-tuple of integer partitions λ, we define the generic representation of λ, denoted by (λ), to be the representation such that there exists a dense open set Ω in _𝕂(Q,λ) such that, for all F ∈Ω, F ≅(λ). Note that, in general, for any E ∈_𝕂(Q), ((E)) E, and for any n-tuple of partitions λ, ((λ)) ≠λ. It makes sense that whenever λ does not correspond to a generic Jordan form data, applying ∘ does not give us back λ. Similarly, whenever we have two representations E,F ∈_𝕂(Q) such that λ = (E) = (F) but E F, then (λ) E or (λ) F. Thus, we can focus on categories in which, at least, we can recover any representation from its generic Jordan form data by applying . A subcategory 𝒞⊂_𝕂(Q) is said to be canonically Jordan recoverable whenever for all E ∈𝒞, ((E)) ≅ E. Note that any canonically Jordan recoverable category is Jordan recoverable, but the converse is false. We refer the reader to <cit.> for examples. §.§ Operations preserving canonical Jordan recoverability In this section, we restate some results about operations that preserve the canonical Jordan recoverability property from <cit.>. First, we recall results involving the reflection functors. Let v be a vertex of a quiver of A_n type. Let π be a n-tuple of integer partitions such that π is either (⊞, ⊞)-storable, (⊞, ⊟)-storable, (⊟, ⊞) storable or strongly (⊟, ⊟)-storable at v. Consider E = (π) and assume that (E) = π. The following assertions hold: * If v is a source, (ℛ^-_v (E)) = σ_v(π), and ℛ^-_v(E) ≅(_v(π)). * If v is a sink, (ℛ^+_v (E)) = σ_v(π), and ℛ^+_v(E) ≅(_v(π)). Let v be a source of an A_n-type quiver Q. 
Let 𝒞⊂_𝕂(Q) be a canonically Jordan recoverable category such that * For all E ∈𝒞, (E) is either (⊞, ⊞)-storable, (⊟,⊞)-storable, (⊞,⊟)-storable or strongly (⊟,⊟)-storable at v. Then ℛ^−_v (𝒞) is a canonically Jordan recoverable subcategory of _𝕂(σ_v(Q)). Similarly, if v is a sink, ℛ^+_v (𝒞) is a canonically Jordan recoverable subcategory of _𝕂(σ_v(Q)). Then, we recall some results about the adding simple operations. Let Q be an A_n type quiver. For any v ∈ Q_0, we define the adding simple at v operation, denoted by _v, on every subcategory 𝒞 of _𝕂(Q) by _v(𝒞) = (𝒞, X_v). Let v be a source or a sink of an A_n type quiver Q. Consider a ∈ℕ and E ∈ rep_𝕂(Q). Write π = (E). Then (X_v^a ⊕ E) = ξ where ξ^q = π^q if q ≠ v and ξ^v = (π^v_1 + a, π^v_2, π^v_3,…). Let Q be a quiver of A_n type, and v be a source or a sink of Q. Let 𝒞⊂_𝕂(Q) be a canonically Jordan recoverable category such that * For any E ∈𝒞, (E) is strongly (⊟,⊟)-storable at v. Then _v(𝒞) is a canonically Jordan recoverable subcategory of _𝕂(Q). §.§ Results from previous work In this section, we resume some results from <cit.>, and by going through the main outlines of the proof, we establish some valuable consequences in the next section. Let us state first the main result of <cit.>. Let Q be an A_n type quiver. We recall that the subcategories we consider are characterized by their indecomposable representations and, therefore, by intervals of {1, …, n}. For K = i;j ∈ℐ_n, we set b(K) = i and e(K)=j. Two intervals K,L ∈ℐ_n are adjacent whenever either b(L) = e(K)+1 or b(K) = e(L)+1. Hence, K and L are not adjacent if either K ∩ L ≠∅, or b(K) > e(L)+1, or b(L) > e(K)+1. An interval set 𝒥⊂ℐ_n is said to be adjacency-avoiding whenever there is no pair of intervals K,L ∈𝒥 that are adjacent. Below are given all the maximal adjacency-avoiding interval sets of ℐ_3: * {1, 1,2, 1,3}; * {2, 1,2, 2,3,1,3}; * {3,2,3,1,3}; * {1, 3, 1,3}. Below is the main result of <cit.>, which gives a combinatorial characterization of all the subcategories of _𝕂(Q). Let Q be an A_n type quiver. A subcategory 𝒞⊂_𝕂(Q) is canonically Jordan recoverable if, and only if, (𝒞) is adjacency-avoiding. * This result generalizes a previous result of <cit.> for A_n type quivers. * For any interval set 𝒥, the canonical Jordan recoverability of _Q(𝒥) does not depend of Q. The first step of the proof was to highlight that the adjacency-avoiding property of (𝒞) is a necessary condition to some category 𝒞 be canonically Jordan recoverable. Then, we study the maximal adjacency-avoiding interval sets of {1,…,n}. Before stating the following result, let us introduce a relevant family of interval sets. For any pair of subsets (,) of {1,…, n}, we define the interval set 𝒥(,) as follows: 𝒥(,) = {b,e| b ∈, e ∈}. Any adjacency-avoiding interval set is a subset of some 𝒥(,) such that (,[1]) is an elementary interval bipartition of {1,…,n+1}. By <ref>, we can parametrize maximal adjacency-avoiding interval subsets of ℐ_n via Coxeter element c ∈𝔖_n+1. Write 𝒥(c) = 𝒥(Ł_c ∪{1}, _c[-1]∪{n}). We set _Q(c) = _Q(𝒥(c)). Finally, we show that, given an A_n quiver, all the subcategories _Q(c) ⊂_𝕂(Q) arising from maximal adjacency-avoiding interval subsets of ℐ_n are canonically Jordan recoverable. Fix n ∈ℕ^*. Let 𝕂 be an algebraically closed field, and Q be an A_n type quiver. Consider c ∈𝔖_n+1 a Coxeter element. Then _Q(c) is canonically Jordan recoverable. Moreover, induces a one-to-one correspondence from isomorphism classes of representations of _Q(c) to (c). 
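For concreteness, the adjacency-avoiding condition introduced above can be tested mechanically. Below is a minimal Python sketch (with illustrative names; an interval K is again encoded as the pair (b(K), e(K))), checked against the maximal adjacency-avoiding interval sets of ℐ_3 listed above.

def adjacent(K, L):
    # K and L are adjacent when one starts right after the other ends.
    (bK, eK), (bL, eL) = K, L
    return bL == eK + 1 or bK == eL + 1

def is_adjacency_avoiding(intervals):
    intervals = list(intervals)
    return not any(adjacent(K, L)
                   for i, K in enumerate(intervals)
                   for L in intervals[i + 1:])

assert is_adjacency_avoiding([(1, 1), (1, 2), (1, 3)])
assert is_adjacency_avoiding([(2, 2), (1, 2), (2, 3), (1, 3)])
assert not is_adjacency_avoiding([(1, 1), (2, 3)])   # [1,1] and [2,3] are adjacent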
In <cit.>, we state that induces an injective map from isomorphism classes of representations of _Q(c) to (c). One can notice that the steps used in the proofs of <cit.> can be reversed. Thus induces indeed a one-to-one correspondence. The proof uses technical tools from combinatorics and quiver representation theory, particularly results presented in <ref>. The proof of <cit.> uses induction on the orientations of the quiver Q, which can be seen in terms of Coxeter elements. We prove it for Q(c_2) for c_2 = (1,…,n+1), and then we prove that if the statement holds for Q(c_2) given by some Coxeter element c_2 ∈𝔖_n+1, then it holds for Q(sc_2s) for any s ∈Σ_n that is either initial or final in c. Let c ∈𝔖_n+1 be a Coxeter element, and k ∈{1,…,n}. We define σ_k(c) by: * σ_k(c) = s_kcs_k whenever s_k is either initial or final in c, and * σ_k(c) = c otherwise. Note that we use the same notation for mutations on quivers, and we saw that in our setting, they match (see <ref>). We also consider a tiny deformation of this mutation, denoted by σ_k defined by: * σ_k(c) = c whenever k=1 and s_1 initial in c, or k=n and s_n final in c; * σ_k(c) = σ_k(c) otherwise. Let c_1,c_2 ∈𝔖_n+1 be two Coxeter elements. Fix k ∈{1,…,n} such that s_k is either initial or final in c_1. Then ℛ_k^±(_Q(c_1)(c_2)) is equivalent to a subcategory of _Q(c_1)(σ_k(c_2)), as subcategory of _𝕂(Q(σ_k(c_1))). This is a consequence of <ref> for c_1, and <ref> for c_2. In the following proposition, we rephrase the precise result hidden in the proof of <cit.>. Let 𝕂 be an algebraically closed field. Fix n ∈ℕ^*. Let c_1,c_2 ∈𝔖_n+1 be two Coxeter elements. Let _Q(c_1)(c_2) ⊂_𝕂(Q(c_1)). Fix k ∈{1,…,n} such that s_k ∈Σ_n is either initial or final in c_1. For all E ∈_Q(c_1)(c_2), (ℛ^±_k(E)) = _k((E)). Moreover, (ℛ^±_k(E)) ∈(σ_k(c_2)), and if s_k is final in c_2, then we have that (ℛ^±_k(E)) is strongly (⊟,⊟)-storable at k. § EXTENDED RSK VIA TYPE A QUIVER REPRESENTATION §.§ Construction of the extended generalization In the following, we describe an extended version of RSK using (type A) Coxeter elements and state the main result. Let n ⩾ 1 and λ∈_n. Let c ∈𝔖_n+1, and consider (c) its Auslander–Reiten quiver. Recall that (λ) ∈𝔖_n+1 is the unique Coxeter element such that ((λ)) = λ by <ref>. We also recall that (Ł = Ł_c ∪{1}, = _c ∪{n+1}) is the unique elementary bipartition of {1,…,n+1} such that (Ł,) = λ by <ref>. Using (λ)-coordinates on (λ), any box labelled ℓ,r_(λ) can be associated to the transposition (ℓ,r) ∈𝔖_n+1. We construct a one-to-one correspondence _λ, c from fillings of shape λ to those of the Auslander–Reiten quiver (c) which are supported on vertices (ℓ,r) ∈Ł× such that ℓ < r. Precisely, for any filling f of shape λ, _λ,c (f)(ℓ,r) = f(ℓ,r_(λ)) whenever (ℓ,r) ∈Ł× such that ℓ < r, and _λ,c(x,y) = 0 otherwise. As in <ref>, we also use the λ-diagonal coordinates. For each k ∈{1,…,n}, we consider the subgraph ^[k](c) of (c) where the vertices are the transpositions (ℓ,r) with ℓ⩽ k < r. This subgraph has only one source and only one sink. We denote by _λ,c(f)^[k] the filling or ^[k](c) induced by _λ,c(f). We define _λ, c(f) to be the fillings of shape λ defined, for k ∈{1,…,n}, and for δ∈{1,…,δ_k}, by: _λ,c(f)(k,δ_λ) = _^[k](c)(_λ,c(f)^[k])_δ. See <ref> for an explicit example. Let n ⩾ 1 and λ∈_n . Let c ∈𝔖_n+1 be a Coxeter element. The map _λ,c realizes a one-to-one correspondence from fillings of shape λ to reverse plane partitions of shape λ. See the Coxeter element c = c_1 ∈𝔖_n+1, as the A_n type quiver Q(c_1). 
See λ as the choice of maximal canonically Jordan recoverable subcategory _Q(c_1)(c_2) of _𝕂(Q(c_1)) where c_2 ∈𝔖_n+1 such that (c_2) = λ. Then the map _λ,c gives a one-to-one correspondence from fillings of λ to representations in _Q(c_1)(c_2), up to isomorphism. Set E = _Q(c_1)(_λ,c_1(f)) the representation of _Q(c_1)(c_2) which multiplicities are given by _λ,c_1(f). The calculations of _λ,c_1(f) correspond exactly to the calculations of (E) by <ref> and <ref>. As stated by <ref>, we are done. One can notice that we could apply slightly broadened generalization by allowing to take any bipartition (Ł,) such that min(Ł∪) ∈Ł and max(Ł∪) ∈, and an Coxeter element c ∈𝔖_n+1 with n ⩾max(R). In fact, this slightly deformed correspondence corresponds to the application of _λ,c, for λ∈_n such that Ł⊆Ł_(λ)∪{1} and ⊆_(λ)∪{n+1}, with its domain restricted to fillings that are vanishing on boxes ℓ,r_(λ) where ℓ∉Ł or r ∉. §.§ Link to previous well-known combinatorial bijections Let us start this section with an obvious proposition from the construction of _λ, c. Let n ⩾ 1. Consider a Coxeter element c ∈𝔖_n+1, and λ∈_n. Then _λ, c^-1 = _λ,c and _λ',c = _n(_λ,c) The following result shows that we extended Gansner's RSK correspondence. Let n ⩾ 1 and c ∈𝔖_n+1 be a Coxeter element. Set λ = (c). Then _λ,c = _λ. In this configuration, by setting Ł = {1}∪Ł_c = {ℓ_1 < …ℓ_p } and = _c ∪{n+1} = {r_1 < … < r_q}, we can check that: * for i ∈{1,…, p-1}, c(ℓ_i) = ℓ_i+1; * for j ∈{1, …, q-1}, c(r_j+1) = r_j. Thus, the orientation of the Auslander–Reiten quiver (c) corresponds precisely to the reverse of the one used to calculate the _(c). It induces the desired result. Let n ⩾ 1 and λ∈_n. Consider c = (1,2,…,n+1). Then _λ,c corresponds exactly to the Hillman-Grassl correspondence. Set Ł = {ℓ_1 < …ℓ_p} and ={r_1 < … < r_q} such that (Ł,) is the elementary bipartition of {1,…,n+1} such that λ = (Ł,). We can check that, in (c), (ℓ_i+1,r_j) follows directly (ℓ_i,r_j), and (ℓ_i, r_j+1) follows directly (ℓ_i, r_j). Then, the orientation we must choose to realize the Hillman–Grassl correspondence coincides with the one given by (c). Therefore, we get the desired result. Now we highlight the link with the work of <cit.> Let n ⩾ 1 and λ∈_n. Consider a Coxeter element c ∈𝔖_n+1. Then, for any filling f of λ, Φ_(λ)^-1∘_λ,c(f) = (_Q(c)(f)). Then, the following corollary occurs. Let n ⩾ 1 and λ∈_n. Consider a Coxeter element c ∈𝔖_n+1. If (λ) = (1,2,…,m,n+1,n,…,m+1) for some m ∈{1,…, n}, then _λ,c = _m,c where _m,c corresponds to Srambled RSKs. Note that choosing a filling of such a λ corresponds to a representation, up to isomorphism, of 𝒞_Q(c),m. §.§ Enumerative properties Let n ⩾ 1, and λ∈_n. Consider a Coxeter elements c ∈𝔖_n+1. Then, for any filling f of λ, and for any k ∈{1,…,n}, we have ∑_ε=1^δ_k_λ,c(f)(k,ε_λ) = ∑_b ∈□_k(λ) f(b) . This is a direct consequence of <ref> and the definition of the extended generalization. Precisely, by setting Ł = Ł_c ∪{1} and = _c ∪{n+1}, ∑_ε=1^δ_k_λ,c(f)(k,ϵ_λ) = ∑_(ℓ,r) ∈Ł×, ℓ⩽ k < r_λ,c(f)(ℓ,r) = ∑_b ∈□_k(λ) f(b) From the previous proof, one can notice that, given λ∈_n and a Coxeter element c ∈𝔖_n+1, we have (_Q(c)(f)_k) = ∑_b ∈□_k(λ) f(b). This value does not depend on c. The following corollary extends the product formula of the trace generating function of reverse plane partitions of minuscule posets. This previous version was originally established by Proctor <cit.>, and proved differently by Garver, Patrias and Thomas <cit.>. Let n ⩾ 1, and λ∈_n. 
For any A_n type quiver Q, we have ρ_λ(x_1,…,x_n) = ∑_E∈_Q((λ))∏_ℓ=1^n x_ℓ^(E_ℓ) = ∏_M ∈_Q((λ)), indec.11 - ∏_ℓ=1^n x_ℓ^(M_ℓ), where the sum is over the isomorphism classes of representations in _Q((λ)), and the product is over isomorphism classes of indecomposable representations in _Q((λ)) Let Q be an A_n type quiver. Consider the Coxeter element c ∈𝔖_n+1 such that Q(c) = Q. By <ref> and <ref>, we have ρ_λ(x_1,…,x_n) = ∑_f ∈(λ)∏_ℓ=1^n x_ℓ^∑_ε =1^δ_l f(ℓ,ε_λ) = ∑_f ∈(λ)∏_ℓ=1^n x_ℓ^(E_Q(c)(f)_ℓ)). So ρ_λ(x_1,…,x_n) = ∑_E ∈_Q((λ))∏_ℓ=1^n x_ℓ^(E_ℓ). Moreover, each box b ∈(λ) corresponds bijectively to an indecomposable representation, using the (λ)-coordinates and _Q ∘_λ, c. Set, for any b ∈(λ), M_b its associated indecomposable representation in _𝕂(Q), up to isomorphism. We have that: * b ∈□_ℓ(λ) if and only if ((M_b)_ℓ) = 1, and, * b ∉□_ℓ(λ) if and only if ((M_b)_ℓ) = 0. Therefore, ω_λ,b(x_1,…,x_n) = ∏_ℓ =1^n x_ℓ^((M_b)_ℓ). We get the desired result by <ref>. Note that this identity (the second equality) is a consequence of our extended generalization _λ,c where the Coxeter element c ∈𝔖_n+1 corresponds to the choice of an A_n type quiver. The product formula comes from the fact that any representation E ∈_𝕂(Q) can be decompose, in a unique way, as a sum of indecomposable representations (See <ref>). §.§ Toggling This section gives some results about local transformations on (_λ,c)_λ,c. Our motivation comes from well-known descriptions of Gansner's RSK, Hillman–Grassl correspondence, and Dauvergne's RSK via local transformations <cit.>. Let n ⩾ 1, and k ∈{1,…,n}. Note that σ_k, defined on Coxeter elements (<ref>), induces a mutation of integer partitions of _n. We set σ_k(λ) = (σ_k((λ))). In the following, we define a map on fillings of λ to fillings of σ_k(λ), analogous to the reflection functors. Let λ∈_n, and set c = (λ). Consider k ∈{1,…,n}. Given f a filling of λ, we define a filling ℛ_λ,k(f) of σ_k(λ) as it follows. Set: * Ł = Ł_c ∪{1}, * Ł = Ł_σ_k(c)∪{1}, * = _c ∪{n+1}, and, * = _σ_k(c)∪{n+1}. We have various cases to treat: * If k, k+1 ∈Ł, then k,k+1 ∈Ł, and we set ∀ (ℓ,r) ∈Ł×, ℛ_λ,k(f) (ℓ,r_σ_k(c)) = f(k+1,r_c) if ℓ = k; f(k,r_c) if ℓ = k+1; f(ℓ,r_c) otherwise.. See <ref> for an explicit example. * If k, k+1 ∈, then k,k+1∈, and we set ∀ (ℓ,r) ∈Ł×, ℛ_λ,k(f)(ℓ,r_σ_k(c)) = f(ℓ,k+1_c) if r = k; f(ℓ,k_c) if r = k+1; f(ℓ,r_c) otherwise.. See <ref> for an explicit example. * If 1 ≠ k ∈Ł and n+1 ≠ k+1 ∈, then k ∈ and k+1 ∈Ł, and we set ℛ_λ,k(f)(ℓ,r_σ_k(c)) = f(k+1,r_c) if ℓ = k ; f(ℓ,k_c) if r = k+1; f(ℓ,r_c) otherwise.. To go from λ to σ_k(λ), we delete the box labelled k,k+1_c in λ. See <ref> for an explicit example. If k=1, the second case in the above definition does not appear, and we still have to set the values of the boxes 1,r_σ_1(c), for r ∈, taken by ℛ_λ,1(f). We set them to 0. See <ref> for an explicit example. Dually, if k=n, we set ℛ_λ,n(f)(ℓ,n+1_σ_k(c)) = 0 for all ℓ∈Ł (see <ref>). * If k ∈ and k+1 ∈Ł, then k ∈Ł and k+1 ∈, and we set ℛ_λ,k(f)(ℓ,r_σ_k(c)) = 0 if (ℓ,r) = (k,k+1); f(ℓ,k_c) if r=k+1; f(k+1,r_c) if ℓ = k; f(ℓ,r_c) otherwise.. To go from λ to σ_k(λ), we add the box labelled k,k+1_σ_k(c). See <ref> for an explicit example. We check that the action of ℛ_λ,k on fillings of λ corresponds to the reflection functors applied on representations of _𝕂(Q(c)) for some Coxeter element c ∈𝔖_n+1 such that s_k is either initial or final. Let n ⩾ 1 and λ∈_n. Consider a Coxeter element c ∈𝔖_n+1. Fix k ∈{1,…,n} such that s_k is either initial or final in c. 
Then for any filling f of λ, ℛ^±_k (_Q(c)(_λ,c(f)) ) = _Q(σ_k(c))(_σ_k(λ),σ_k(c)(ℛ_λ,k(f)) ). This is a consequence of <ref> and <ref>. Using the maps (Φ_c)_c indexed by the Coxeter elements c ∈𝔖_n+1, defined in <ref>, for any k ∈{1,…, k}, _k induces a map _λ,k from reverse plane partitions of λ to the ones of σ_k(λ) as follows: ∀ f ∈(λ), _λ,k = Φ_σ_k((λ))∘_k ∘Φ_(λ)^-1(f). The following theorem testifies of the compatibility between _λ,k and ℛ_λ,k, under the action of our extended generalization of RSK. Let λ∈_n and a Coxeter element c ∈𝔖. Fix k ∈{1,…,n} such that s_k is either initial or final in c. Then _λ,k∘_λ,c = _σ_k(λ),σ_k(c)∘ℛ_λ,k. This is a direct consequence of <ref> and <ref>. Finally, we give a combinatorial description of the adding simple operations. Let λ∈_n and consider k ∈{1,…,n} such that s_k is final in (λ). For any a ∈ℕ, and for any filling f of λ, we set _λ,k^a (f) to be the filling of λ obtained from f by uniquely adding a to the value of f(k,k+1_c) (see <ref>). Let n ⩾ 1 and λ∈_n. Consider a Coxeter element c ∈𝔖_n+1. Fix k ∈{1,…,n} such that s_k is both: * final in (λ), and, * either initial or final in c. Then, for any a ∈ℕ, and for any filling f of λ, we have _Q(c)(_λ,c(_λ,c^a(f))) ≅ X_v^a ⊕_Q(c)(_λ,c(f)). The following theorem shows the compatibility of _λ,k under our extended generalization of RSK. Let n ⩾ 1 and λ∈_n. Consider a Coxeter element c ∈𝔖_n+1. Fix k ∈{1,…,n} such that s_k is both: * final in (λ), and, * either initial or final in c. Then, for any a ∈ℕ, and for any filling f of λ, we have _λ,c(_λ,c^a(f)) = _λ,c^a (_λ,c(f)). The result occurs from <ref> and <ref>. toc § TO GO FURTHER In this section, we suggest some research directions which could follow this work. §.§ A realization of _λ,c via local transformations In the previous section, we give results corresponding to toggle operations in the classical RSK correspondence. <cit.> gives a complete realization of _λ via local transformations, for any integer partitions. In quiver representation settings, <cit.> give a realization of _m,c via local transformations (toggles), for any n ∈ℕ^*, and any Coxeter element c ∈𝔖_n+1. Thinking about the crucial link with quiver representations and the local description given by <cit.>, one could think about considering a sequence of toggles based on the linear order on transpositions of 𝔖_n+1 compatible with the opposite of the Auslander–Reiten quiver. We are hoping to elaborate such a realization of _λ,c by using the results in <ref>, and the sequence of toggles given by <cit.> in a near futur. §.§ Dynkin type (or other) variations of _λ,c Instead of considering Coxeter elements of the symmetric group, one could ask if it is possible to consider Coxeter elements of any Weyl group. We see at least two ways to think about that. The first way could be adapting the setting with the Weyl group we are considering. For instance, if we work with the signed symmetric group, a B type Weyl group, we could think about type B RSK, seen as the domino correspondence. We refer the reader to the work of Garfinkle <cit.> Bonafé, Geck, Iancu, and Lam <cit.> for more details. In the type D case, Gern did a few studies in his PhD thesis <cit.> in terms of Kazhdan–Lusztig polynomials. The second way could be to work in the quiver representation setting and to determine all the canonically Jordan recoverable subcategories of any type D and type E quivers. <cit.> gives us a type D and a type E versions of the RSK correspondence. 
From the perspective of quiver representation theory, we can go further and work with well-behaved and well-known bounded quivers. For instance, for gentle quivers, we have a complete classification of the isomorphism classes of indecomposable representations <cit.> and of the morphisms between them <cit.>. Some work has already been done on canonically Jordan recoverable subcategories in this setting <cit.>, and we hope to characterize all of them. One interesting combinatorial outcome could be the construction of an “RSK correspondence via gentle algebras”. § ACKNOWLEDGEMENTS I acknowledge the ANR CHARMS for its partial funding support. I want to thank the selection committee of the 36th edition of the FPSAC Conference (Bochum, 2024) for its insightful comments on my extended abstract <cit.>, accepted for a poster session. I thank Ben Adenbaum, Emily Gunawan, Florent Hivert, Yann Palu, GaYee Park, and Michael Schoonheere for their interest and discussions on this project. I especially thank Philippe Nadeau for his advice, which allowed me to simplify some combinatorial proofs. Finally, I thank Hugh Thomas for his advice and comments on this work.
http://arxiv.org/abs/2407.13303v1
20240718090720
Mean Teacher based SSL Framework for Indoor Localization Using Wi-Fi RSSI Fingerprinting
[ "Sihao Li", "Zhe Tang", "Kyeong Soo Kim", "Jeremy S. Smith" ]
cs.LG
[ "cs.LG" ]
Mean Teacher based SSL Framework for Indoor Localization Using Wi-Fi RSSI Fingerprinting Sihao Li, Graduate Student Member, IEEE, Zhe Tang, Graduate Student Member, IEEE, Kyeong Soo Kim, Senior Member, IEEE, and Jeremy S. Smith, Member, IEEE This work was supported in part by the Postgraduate Research Scholarships (under Grant PGRS1912001), the Key Program Special Fund (under Grant KSF-E-25), and the Research Enhancement Fund (under Grant REF-19-01-03) of Xi'an Jiaotong-Liverpool University. This paper was presented in part at CANDAR 2023, Matsue, Japan, November 2023. S. Li and Z. Tang are with the School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, P.R. China (e-mail: [Sihao.Li19, Zhe.Tang15]@student.xjtlu.edu.cn), and also with the Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, L69 3GJ, U.K. (e-mail: [Sihao.Li, Zhe.Tang]@liverpool.ac.uk). K. S. Kim is with the School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, P.R. China (e-mail: Kyeongsoo.Kim@xjtlu.edu.cn). J. S. Smith is with the Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, UK (e-mail: J.S.Smith@liverpool.ac.uk). ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Wi-Fi fingerprinting is widely applied for indoor localization due to the widespread availability of Wi-Fi devices. However, traditional methods are not ideal for multi-building and multi-floor environments due to the scalability issues. Therefore, more and more researchers have employed deep learning techniques to enable scalable indoor localization. This paper introduces a novel semi-supervised learning framework for neural networks based on wireless access point selection, noise injection, and Mean Teacher model, which leverages unlabeled fingerprints to enhance localization performance. The proposed framework can manage hybrid in/outsourcing and voluntarily contributed databases and continually expand the fingerprint database with newly submitted unlabeled fingerprints during service. The viability of the proposed framework was examined using two established deep-learning models with the UJIIndoorLoc database. 
The experimental results suggest that the proposed framework significantly improves localization performance compared to the supervised learning-based approach in terms of floor-level coordinate estimation using the EvAAL metric. It shows enhancements of up to 10.99% and 8.98% in the former scenario and of 4.25% and 9.35% in the latter, respectively. Additional studies highlight the importance of the essential components of the proposed framework. multi-building and multi-floor indoor localization, Wi-Fi fingerprinting, semi-supervised learning, mean teacher § INTRODUCTION As people tend to spend a significant amount of time in places like shopping malls, airports, and hospitals, there has been an increasing need for location-based services (LBS) indoors and outdoors <cit.>. However, localization systems based on the global positioning system (GPS) and its variants are inadequate for indoor use due to the lack of line of sight (LOS) to satellites <cit.>. Of the alternative technologies developed for indoor LBS (ILBS) that do not depend on satellites <cit.>, Wi-Fi fingerprinting is the most popular thanks to the ubiquity of Wi-Fi infrastructure indoors. Note that, due to the complicated multipath fading indoors, Wi-Fi received signal strengths (RSSs) cannot be directly used for distance estimation based on a path loss model, which is the fundamental principle of multilateration <cit.>. Wi-Fi fingerprinting, instead, uses RSS as one of the location-dependent characteristics, i.e., a location fingerprint, to determine the location based on classification or regression. In Wi-Fi fingerprinting, a fingerprint database is constructed first based on the RSS indicators (RSSIs) from all access points (APs) measured at reference points (RPs), which are known locations specified by two-dimensional (2D)/three-dimensional (3D) coordinates or location labels like room numbers, and an indoor localization model is trained with the constructed database during the offline phase. Then, the trained model is used to estimate the current location of a user or a device based on the measured RSSIs during the online/real-time phase <cit.>. Indoor localization based on Wi-Fi RSSI fingerprinting has been widely studied in the literature. Traditional methods, such as k-nearest neighbors (KNN) and weighted k-nearest neighbors (wKNN) <cit.>, require substantial parameter tuning and can be labor-intensive. As a result, they may not be the most suitable choice for indoor localization in complex, multi-building, and multi-floor environments where extensive data collection and processing are necessary. Therefore, researchers in this field increasingly focus on deep learning-based techniques for large-scale indoor localization. To address the scalability issues in large-scale multi-building and multi-floor indoor localization, neural network (NN) models have been proposed to enable scalable indoor localization <cit.> based on Wi-Fi RSSI fingerprinting using supervised learning (SL) algorithms. However, the SL framework has limitations in that it requires a large amount of labeled data for training, which can be time-consuming and labor-intensive to collect and label, especially in large-scale indoor localization applications. Moreover, there are challenges in maintaining the database and keeping it up to date with the latest data, which can be crucial for the long-term operation of indoor localization services.
Therefore, conventional indoor localization techniques based on Wi-Fi fingerprinting under the SL framework cannot exploit those unlabeled fingerprints unless the following time-consuming and labor-intensive steps are conducted. First, the unlabeled data are manually labeled. Second, the fingerprint database is updated with the unlabeled data. Third, the model is retrained with the updated database. Given the limitations of traditional SL training, semi-supervised learning (SSL) algorithms have been proposed to investigate the potential of unlabeled data. In the context of indoor localization based on Wi-Fi RSSI fingerprinting, two representative application scenarios using unlabeled data with SSL methods for Wi-Fi fingerprinting are summarized as follows: First, the unlabeled RSSIs are already part of a fingerprint database, which means volunteers submit unlabeled data during the offline phase when the database is constructed. Second, once an indoor localization system is deployed, the system continuously receives new measurements from users at unknown locations during the online phase. The overview of the two scenarios in real-world applications for Wi-Fi RSSI fingerprinting using SLL is shown in Fig. <ref>. Given the two scenarios, it is essential to develop an SSL-based indoor localization framework that can handle the hybrid in/outsourcing and voluntarily-contributed databases and continuously extend the fingerprint database with newly submitted unlabeled fingerprints during service. Among the SSL-based methods, consistency training has been widely accepted as a spotlight technique, which encourages the model to make consistent predictions on the labeled and unlabeled data to improve the generalization performance. Under the category of consistency training, the Mean Teacher model <cit.> is a practical method that gains popularity in various applications, however, its potential use in indoor localization based on “Wi-Fi RSSI fingerprinting” remains largely unexplored. Therefore, this paper proposes a novel indoor localization framework that can exploit unlabeled RSSI fingerprints in multi-building and multi-floor indoor localization through SSL based on the Mean Teacher. The fingerprint database under the proposed framework can be continuously extended with newly submitted unlabeled fingerprints during the operation, which, in turn, is used to train NN models in a sequential and online way for continuous improvement of their localization performances. An investigation based on indoor localization models from <cit.> and the UJIIndoorLoc database <cit.> is conducted to explore the feasibility of the proposed framework. The contribution of this paper can be summarized as follows: * A novel SSL-based indoor localization framework is proposed to better balance scalability, complexity, and device dependency. * The proposed framework can handle (1) the hybrid in/outsourcing and voluntarily contributed databases and (2) continuously extend the fingerprint database with newly submitted unlabeled fingerprints during service. * An investigation based on two well-known and different structured indoor localization models and the UJIIndoorLoc database is conducted to explore the feasibility of the proposed framework. * This paper provides block-level insights to illustrate the proposed framework's effectiveness and efficiency. The rest of the paper is structured as follows: Section <ref> reviews related work. Section <ref> discusses the proposed indoor localization framework. 
Section <ref> presents experimental results based on simulated real-world scenarios. Conclusions are presented in Section <ref>. § RELATED WORK To address the scalability in large-scale multi-building and multi-floor indoor localization, including floor-level location estimation, a scalable DNN-based architecture based on multi-label classification is proposed in <cit.>, which can significantly reduce the number of output nodes compared to that based on multi-class classification. Spirited by <cit.>, a hybrid single-input and multi-output (SIMO) DNN architecture <cit.> is proposed to reduce the number of output nodes further and enable flexible training of building, floor, and location outputs with different algorithms. The SIMO-DNN architecture enables hybrid building/floor classification and floor-level 2D location coordinates regression through a dedicated output for each task, where each output can use different activation and loss functions optimized for its chosen estimation framework, i.e., softmax activation function and categorical cross-entropy (CE) loss function for multi-class classification of building and floor, and linear activation function and mean squared error (MSE) loss function for regression of location coordinates. Note that the use of a regression framework for the location output significantly reduces the number of the location estimation output nodes, e.g., from the maximum numbers of RPs in the case of classification to two (i.e., longitude and latitude coordinates), and eliminates the complicated customized processing unit for converting the results of the classification to 2D location coordinates. Given its advantages over the existing DNN models, the SIMO-DNN is taken as a reference model. In <cit.>, the integration of stacked autoencoder (SAE) and convolutional neural networks (CNNs) is explored to develop a hybrid model named CNNLoc. The model is a composition of three distinct networks for building, floor, and position estimation. The SAE is connected to fully connected layers with three output nodes for building classification. For floor classification, a dropout layer is added to SAE, which feeds features to a one-dimensional CNN (1D-CNN) with a fully connected layer as the output layer, generating five outputs for one-hot-encoded floor results. With some modifications, the similar 1D-CNN structure is utilized for the position estimator, where the number of output nodes is changed from five to two to represent longitude and latitude coordinates. CNNLoc is one of the most popular models for multi-building and multi-floor indoor localization and has been widely used as a benchmark in the literature. In this paper, in addition to SIMO-DNN <cit.>, CNNLoc <cit.> is taken as a reference model for the proposed framework. In addition to the studies mentioned above focusing on the labeled RSSIs, unlabeled measurements at unknown locations have also been spotlighted in the indoor localization research community. SSL has been widely applied to various fields, including computer vision, natural language processing, and speech recognition, to improve models' generalization performance with labeled and unlabeled data. However, the application of SSL to indoor localization based on Wi-Fi fingerprinting has yet to be extensively investigated, only a few studies have been reported in the literature. Zhang et al. 
<cit.> introduces a graph-based SSL (GSSL) method that exploits the correlation between RSS values at nearby locations to estimate optimal RSS values and improves the smoothness of the radio map and localization accuracy for indoor localization using crowdsourced data. Chen et al. <cit.> introduces an Adapted Mean Teacher (AMT) model for indoor fingerprint positioning based on the channel impulse response (CIR). Literature <cit.> proposes a time-series SSL algorithm for indoor localization that utilizes unlabeled data to generate pseudo labels and improve the efficiency and accuracy of the positioning model based on RSSI measurements. Literature <cit.> proposes two approaches to address the problem of limited labeled data in indoor localization. The first approach is a weighted semi-supervised DNN-based method that combines labeled samples with inexpensive pseudo-labeled samples to improve localization accuracy. The second approach is a weighted generative adversarial network (GAN)-based method that generates fake fingerprints to overcome the unavailability of unlabeled data. Literature <cit.> compares the performances of SSL-based channel state information (CSI) fingerprinting techniques using variational auto-encoder (VAE) and GAN models. Their experimental results show that GAN generally outperforms VAE in terms of accuracy, and the study provides insights into the impact of different generative mechanisms and environmental effects on performance. In addition to those papers focusing on SSL models, SSL algorithms are applied to solve indoor localization problems at a framework level. WePos <cit.> and MTLoc <cit.> are two representative studies based on Wi-Fi RSSI fingerprinting. WePos is based on a weak supervision framework founded on BERT <cit.>, using weakly labeled data to tackle indoor localization on a large scale. The experiments are conducted in a vast shopping mall. MTLoc introduces a multi-target domain adaptation network (MTDAN) that uses labeled and unlabeled data to improve model generalization performance. MTLoc prioritizes feature extraction, utilizing a complex GAN structure to extract long-term stable or semi-stable AP. Table <ref> compares the existing SSL-based indoor localization methods regarding scalability, complexity, and device dependency. As can be obtained from Table <ref>, the main challenge is that the scalability, complexity, and device dependency need to be better balanced. For example, WePos and MTLoc are scalable and can yield high accuracy, but the models and frameworks require extensive training and tuning because of the massive structure, like GAN or BERT. The others using channel information can yield high accuracy, but such data can only be collected from specific devices, e.g., Intel 5300 NIC for CSI, making them unsuitable for general indoor localization applications. In addition to the abovementioned studies, to establish a more scalable and efficient indoor localization framework, absorbing the advantages of well-established SSL methods is essential to address indoor localization issues via using unlabeled data more efficiently. Various SSL methods and approaches have been introduced over the years and have been widely applied to various applications. According to <cit.>, SSL methods can be categorized into four groups: (1) consistency training, (2) proxy label methods, (3) generative models, and (4) graph-based methods. 
Out of the listed categories, it is widely accepted that consistency training and generative models are the spotlight techniques. As such, this paper focuses on the algorithms that fall under consistency training for indoor localization. Pi-Model <cit.> is one of the earliest models using consistency training. Pi-model is trained to minimize the difference between its predictions on the labeled and unlabeled data. The idea is that by maximizing the entropy, the model is encouraged to make more diverse predictions on the unlabeled data with an expectation to improve the model's generalization performance. An obvious problem is that each input requires two forward propagations to compute the consistency loss, which may be less efficient. To solve this problem, Temporal Ensembling <cit.> is proposed, which only requires one forward propagation per input. Temporal Ensembling involves forming a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, different regularization, and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the network output at the most recent training epoch (closer to the current epoch, the larger the weight is) and can thus be used as a target for training. The Mean Teacher <cit.> is an SSL method based on averaging model weights instead of predictions, which can provide two practical advantages: more accurate target labels by the teacher model and better scalability to large datasets and online learning over the Temporal Ensembling. Unlike the Temporal Ensembling, the Mean Teacher updates the weights of the teacher model based on exponential moving averages (EMAs) of the weights of the student model and uses them to generate the teacher model's predictions at each training step. During the training, the Mean Teacher adjusts the weights of the student model to make predictions closer to those of the teacher model; when the predictions of the teacher model are ambiguous, the Mean Teacher allows the student model to make its predictions. Based on the abovementioned work, innovative SSL techniques, such as the Mean Teacher, have been applied in areas like image classification, object detection, and semantic segmentation. However, their potential use in indoor localization based on Wi-Fi RSSI fingerprinting remains largely unexplored. This paper seeks to bridge this gap by presenting a novel SSL-based indoor localization framework leveraging the Mean Teacher model for large-scale multi-building and multi-floor indoor localization using Wi-Fi RSSI fingerprinting. § SEMI-SUPERVISED LEARNING FRAMEWORK FOR MULTI-BUILDING AND MULTI-FLOOR INDOOR LOCALIZATION Fig. <ref> provides an overview of the proposed SSL framework, which consists of four main blocks: (1) AP selection, (2) data pre-processing, (3) pre-train and clone model m_p, (4) SSL train, and evaluation. In the SSL train block, it contains two sub-blocks: (1) noise injection for unlabeled data generation and (2) SSL consistency regularization based on <cit.>, including the teacher model m_t and the student model m_s. As for the data used in the proposed algorithm, D_T, D_L, and D_U are the test, labeled, and unlabeled datasets, respectively. 
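At a high level, the interplay of these blocks can be summarized by the following Python skeleton (the component functions are passed in as parameters, and their names are placeholders for the blocks detailed in the remainder of this section).

def run_framework(D_L, D_U, D_T, select_aps, preprocess, inject_awgn,
                  pretrain, clone, ssl_train, evaluate):
    keep = select_aps(D_L, D_U)                       # block 1: AP selection
    D_L, D_U, D_T = preprocess(D_L, D_U, D_T, keep)   # block 2: data pre-processing
    if D_U is None:                                   # no unlabeled fingerprints available
        D_U = inject_awgn(D_L)                        #   -> generate them by noise injection
    m_p = pretrain(D_L)                               # block 3: pre-train m_p and clone it
    m_s, m_t = clone(m_p), clone(m_p)                 #   into student m_s and teacher m_t
    m_t = ssl_train(m_s, m_t, D_L, D_U)               # block 4: SSL train (Mean Teacher)
    return evaluate(m_t, D_T)                         # evaluation of the teacher model on D_T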
§.§ AP Selection To further balance scalability and complexity in indoor localization that uses SSL, the proposed framework includes a block that selects the appropriate APs to reduce the input dimension and improve the generalization of a given neural network model. The selection process reduces input dimensions, improves the generalization, and reduces computational complexity, leading to a more stable and informative input for label volume limited and long-term indoor localization services, e.g., unstable APs are filtered out. The framework considers statistical characteristics, such as the sample person correlation coefficient, the number of unique values, and missing values, to determine which APs should be reserved for subsequent operations <cit.>. However, such selection process is based on the labeled data, which is not suitable for the unlabeled data. For example, in <cit.>, the sample person correlation coefficient is calculated using the record from the same RP, which is not applicable to the unlabeled data because the measurement location is unknown. Therefore, for consistency between the labeled and unlabeled data, such strict selection criteria should be relaxed or revised. Therefore, in this paper, we only consider the number of unique values for the AP selection process, which is simple yet effective for the unlabeled data. Let D_L,org and D_U,org be the original labeled and unlabeled data, respectively. D_T,org denotes original testing data, and AP is the joint set of APs that appeared in the merged set of D_L,org and D_U,org with their RSSIs. The AP selection process is described in Algorithm <ref>. §.§ Noise Injection Data are inherently noisy in real-world applications, especially for Wi-Fi RSSI fingerprinting. Many factors, including passersby, furniture movements, weather changes, and other user- or device-related differences, can significantly affect the measurements and lead to inaccuracies. As the RSSI rapidly changes over time <cit.>, the collected data during a specific period may not be sufficient to account for all variations, leading to poor model generalization and performance with unseen and slightly changed data. To address such challenges, noise injection is employed on labeled data to produce corresponding unlabeled data, enhancing noise immunity and alleviating fluctuations for long-term services. When performing noise injection in the context of SSL, it is crucial to be careful with the type and level of noise. To ensure successful SSL training, the following fundamental assumptions should be considered: * Clustering assumption: Records that are close to each other are likely to have the same label. * Smoothness assumption: The density of records becomes higher as the data points are closer to cluster centers under the same class. * Manifold assumption: The data are distributed on a low-dimensional manifold in the high-dimensional space. * Low-density separation assumption: The decision boundary between classes is in low-density regions. Noise injection has been widely used in other research fields, such as <cit.> in computer vision and <cit.> in speech recognition, to improve the generalization of models and, thereby, the performance. In this study, we recreate real-world noise for indoor Wi-Fi fingerprinting by introducing the additive white Gaussian noise (AWGN) to the RSSIs for unlabeled data generation. 
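To make these two pre-processing steps concrete, the following sketch (with assumed array shapes and illustrative function names, not the exact implementation used in the experiments) selects APs by the number of unique RSSI values over the merged labeled and unlabeled data and generates AWGN-injected copies of labeled records; the noise model itself is stated formally below.

import numpy as np

def select_aps(rssi_labeled, rssi_unlabeled, min_unique=2):
    # rssi_* are (num_records, num_aps) arrays; keep APs whose column shows
    # at least min_unique distinct values over the merged data.
    merged = np.vstack([rssi_labeled, rssi_unlabeled])
    keep = np.array([len(np.unique(merged[:, j])) >= min_unique
                     for j in range(merged.shape[1])])
    return keep   # boolean mask applied to the labeled, unlabeled, and test sets

def inject_awgn(rssi, mu=0.0, sigma=0.1, clip=0.5, rng=None):
    # Add white Gaussian noise, clipped to +/-0.5, to produce unlabeled copies.
    rng = np.random.default_rng() if rng is None else rng
    noise = np.clip(rng.normal(mu, sigma, size=rssi.shape), -clip, clip)
    return rssi + noise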
Compared to other, more complicated augmentation techniques, e.g., methods based on generative models, noise injection is simple yet effective and has real-world implications. In this study, the AWGN-injected data are generated based on the following formula: 𝐑𝐒𝐒𝐈_noised = 𝐑𝐒𝐒𝐈 + 𝒩(μ,σ^2), where 𝐑𝐒𝐒𝐈 denotes the original RSSI matrix and 𝒩(μ,σ^2) is the Gaussian distribution with mean μ and variance σ^2. As for the noise level, μ and σ are set to 0 and 0.1, respectively, and the noise amplitude is limited to ±0.5, i.e., values exceeding this range are clipped, to ensure that the assumptions mentioned above are preserved. As illustrated in Fig. <ref>, when working with a database lacking unlabeled data, the “Injection” block can generate additional unlabeled data based on the labeled data. §.§ Visual Analysis The accuracy and effectiveness of indoor localization models heavily depend on the data they receive. However, the effect of AP selection and noise injection on SSL-based indoor localization has yet to be examined. To highlight the influence of AP selection and noise injection on the distribution of data points, we employ principal component analysis (PCA) followed by t-distributed stochastic neighbor embedding (t-SNE) for visualization. The original data, the AP-selected data, and the AWGN-injected data generated from the selected data are visualized in Fig. <ref>. Comparing the original input, Fig. <ref> (a), with the processed ones, Fig. <ref> (b) and (c), it is evident that the suggested techniques significantly reduce the input dimension, e.g., from 520 to 428 in our case, while preserving the SSL assumptions. For instance, the distribution and clustering remain consistent overall, and the separation between classes is still evident. §.§ SSL-based Training for Indoor Localization Let D_L, D_U, and D_T be the labeled, unlabeled, and test datasets, respectively. The features and labels of each dataset are denoted by x and y, respectively. Before training, D_L, D_U, and D_T are pre-processed, including AP selection, normalization, one-hot encoding, and, if needed, noise injection for SSL training. After the pre-processing, the subsequent training can be divided into (a) pre-train and (b) SSL-train phases. §.§.§ Pre-train The initialized model is pre-trained with D_L to mitigate the cold-start problem, expedite the subsequent SSL training, and improve the performance <cit.>. During the Pre-train phase, the pre-trained model m_p is constructed as follows: m_p = ℒ(θ_p,D_L), where the weights of the pre-trained model m_p, denoted by θ_p, are learned from D_L based on algorithm ℒ with a limited number of epochs to avoid overfitting. Let m_s and m_t be the student and teacher models, respectively. Both are cloned from the pre-trained m_p, which means that their network structures and weights are all initialized with the trained θ_p, i.e., [θ_s^0, θ_t^0] ⟸θ_p, where θ_s^0 and θ_t^0 are the weights of m_s and m_t at the beginning of the SSL training, respectively. §.§.§ SSL-train After the Pre-train phase, m_s and m_t are trained in a semi-supervised manner. For D_L with features x_L and corresponding ground truth y_L, the prediction of m_s, denoted ŷ_L,m_s, is defined as: ŷ_L,m_s = m_s(x_L). For D_U with features x_U, the predictions are given as: ŷ_U,m_(·) = m_(·)(x_U), where (·) is either s or t for the student or teacher model.
For the loss functions, the prediction loss L_d and the consistency loss L_c are calculated as follows: L_d = 𝒟_d(ŷ_L,m_s,y_L), L_c = 𝒟_c(ŷ_U,m_s,ŷ_U,m_t), where 𝒟_d and 𝒟_c are given distance measurement criteria for the supervised learning of m_s and the consistency regularization of m_s and m_t, respectively. In <cit.>, for generalization purposes, the MSE is applied for D_c; however, in the context of Wi-Fi indoor localization, it is noteworthy that tasks are typically formulated as multi-label classification problems. Therefore, more measurement criteria can be considered for D_c, e.g., the binary cross-entropy (BCE) for the one-hot encoded multi-class classification. As for the total loss L_t, the strategy in <cit.> is adopted, i.e., a weighted sum of L_c and L_d, to train m_s based on the inconsistency between the predictions of m_s and ground truth labels and that between the predictions of m_t and m_s. The total loss is defined as follows: L_t = L_d + w_cL_c, where w_c is a weight coefficient regulating the contribution of L_c relative to L_d. After the weights of m_s are updated by backpropagation based on L_t, the weights of m_t are updated to the EMAs of the weights of m_s, i.e., for the i^th (i ≥ 1) training step during the SSL phase, θ_t^i=αθ_t^i-1+(1-α)θ_s^i, where θ_t^i and θ_s^i are the weights of m_t and m_s during the i^th SSL training step, respectively. α ∈(0,1] is a smoothing hyperparameter. After “Pre-train” and “SSL train”, m_t is used for the evaluation with D_T. Fig. <ref> illustrates the details of the “SSL train” block. § EXPERIMENTAL SETUP Experiments are conducted on a workstation installed with an Intel Core i9-13900x CPU, 64GB RAM, and NVIDIA GeForce RTX 4090 GPU running Ubuntu 20.04 system with PyTorch 2.0 and Python 3.9.18. §.§ UJIIndoorLoc Database As multi-building and multi-floor indoor localization databases, the UJIIndoorLoc <cit.> is the most popular publicly available Wi-Fi fingerprinting database, becoming a benchmark in the literature. It includes 21,048 publicly available records, i.e., 19,937 for training and 1,111 for validation[since the test data are not publicly available, the validation data are used for testing in most studies], collected at the three multi-floor buildings of the Jaume I University in Spain, each of which consists of RSSIs from 520 APs and 9 ground truth labels, including IDs for building, floor, space, user, and phone, and timestamp <cit.>. Since the UJIIndoorLoc database is extensively used in indoor localization research, it is also assigned as the benchmark for evaluating the proposed framework to examine its capabilities and flexibility thoroughly. As for the performance evaluation, given that the most challenging task in indoor localization is floor-level location estimation, this paper focuses on the floor-level location accuracy using the EvAAL error proposed in <cit.>. §.§ Model Setup To evaluate the effectiveness of the proposed framework, two DNNs are employed, i.e., the simplified version based on the SIMO-DNN of <cit.>, shown in Fig. <ref> (a), and the modified version found on the CNNLoc of <cit.>, illustrated in Fig. <ref> (b). The details of model setups are summarized in Table <ref>. The training is conducted with a batch size of 32 for all experiments. The Adam optimizer is employed for experiments, with a learning rate of 0.0001, 0.0001, and 0.001 for the encoder, BF, and L blocks, respectively, for the SIMO-DNN, and 0.0001 for the encoder, B, F, and L blocks for the CNNLoc. 
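A single SSL training step of the proposed framework can be summarized by the following PyTorch sketch (a condensed illustration with placeholder model, criterion, and optimizer names; a single output head is assumed for brevity, whereas SIMO-DNN and CNNLoc have multiple outputs). It computes L_d, L_c, and L_t as above and then applies the EMA update of the teacher weights; the default w_c and α follow the hybrid-database setting described below.

import torch

def ssl_step(student, teacher, x_l, y_l, x_u,
             criterion_d, criterion_c, optimizer, w_c=6.0, alpha=0.999):
    student.train()
    y_hat_l = student(x_l)                        # student predictions on labeled data
    y_hat_u_s = student(x_u)                      # student predictions on unlabeled data
    with torch.no_grad():
        y_hat_u_t = teacher(x_u)                  # teacher predictions (no gradient)

    loss_d = criterion_d(y_hat_l, y_l)            # prediction loss L_d
    loss_c = criterion_c(y_hat_u_s, y_hat_u_t)    # consistency loss L_c
    loss_t = loss_d + w_c * loss_c                # total loss L_t = L_d + w_c * L_c

    optimizer.zero_grad()
    loss_t.backward()
    optimizer.step()                              # update student weights theta_s

    with torch.no_grad():                         # EMA update of the teacher weights:
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)   # theta_t <- a*theta_t + (1-a)*theta_s
    return loss_t.item()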
To better stabilize the training process, the learning rate scheduler and early stopping are applied during the training. The learning rate scheduler is set to reduce the learning rate by 0.75 when the loss does not decrease for six epochs for the hybrid database scenario and ten epochs for the online learning scenario. As for the early stopping, the patience is set to twelve epochs for both scenarios. As for the consistency regularization based on Mean Teacher, α is set to 0.999 with w_c fixed at six for the hybrid database scenario and α=0.9 with w_c increased to ten for the online learning scenario. §.§ Simulation on Hybrid Databases During the offline phase, a fingerprint database is constructed in/outsourced by collecting RSSIs from APs, which is laborious and time-consuming. To reduce such costs, volunteers are usually recruited. However, only some of the collected RSSIs are labeled due to the limited number of volunteers and the time constraint. As a result, the collected RSSIs are divided into two groups: one group with labels and the other without. Under the traditional SL framework(s), only the former is used to train an indoor localization model, while the latter is discarded. In comparison, the proposed framework can utilize labeled and unlabeled data to improve the model's performance. To simulate the hybrid database, the UJIIndoorLoc training dataset is divided into four equal parts, where one part is used as the labeled data and the rest as the unlabeled data, as shown in Fig. <ref>. In Case 4, the lack of unlabeled data does not impede the implementation of the framework due to the employment of noise injection. Note that the injection is only activated when the unlabeled data is unavailable. Experiments are evaluated using the official validation dataset for fair comparison. All the results regarding the EvAAL error, which is the metric used for IPIN competitions, are reported. The EvAAL error is defined as follows: Error = p_b× b_miss + p_f× f_miss + euc, where p_b and p_f are the penalties for misclassification of building and floor, which are set to 50 and 4, respectively. b_miss and f_miss are the misclassified building and floor percentages, and euc is the Euclidean distance between the estimated and ground truth locations. The successful rate, denoted as γ, is the ratio of the cases where the building and floor are correctly estimated over the total. To reveal the improvements of the proposed framework in terms of the EvAAL error, let η as the percentage of the relative improvement of the proposed framework over the reference model, which is defined as follows: η_(·) = Error_ref-Error_prop/Error_ref× 100   [%], where E_ref and E_prop are errors of the reference model and its corresponding proposed framework, respectively. (·) is optional, if given, it indicates the reference model, e.g., η_4 is the relative improvement of a model using the proposed framework over the Ref. 4. Under the hybrid databases scenario, the performance of the proposed framework is compared with the references trained with only the labeled data of each experiment case. The EvAAL error and successful rate of the proposed framework and the references are summarized in Table <ref>. To ensure the comparison's efficiency, the reference models' performance should be guaranteed to be of high quality. In this study, the reference models are well-trained with the labeled data, and the performance is even better than the original models achieved in their literature. 
For example, CNNLoc reported an error of 11.78. As shown above, the proposed framework outperforms the references in terms of the EvAAL error in each experimental case, where the relative improvement of the proposed framework over the references is given in Table <ref>. The results in Table <ref> show that the proposed framework outperforms the references in all the experiment cases, where the relative improvement of the proposed framework over the references is from 3.09% to 10.99% for SIMO-DNN and from 7.23% to 9.66% for CNNLoc. The proposed framework also outperforms the reference model 4, which is the best among all references, with a relative improvement of 3.73% and 8.98% for SIMO-DNN and CNNLoc, respectively. As for the relative improvement of the proposed framework over Ref. 4, the proposed framework using only three-quarters of the labeled data outperforms Ref. 4 by 0.36% for SIMO-DNN, and only half of the labeled data outperforms Ref. 4 by 3.73% for CNNLoc. Given the high quality of the references, the results in Table <ref> demonstrate the effectiveness of the proposed framework in improving the performance of the indoor localization model by exploiting the unlabeled data. Moreover, the results also show that using the proposed framework, the database construction cost can be reduced by up to 75% without significantly sacrificing the performance of the indoor localization model. §.§ Simulation on Online Continuous Learning Throughout the online phase, the system continually receives RSSIs from its users, measured at unknown locations. These processes are incredibly time-consuming and labor-intensive. However, the proposed framework can use these online unlabeled RSSIs to enhance the performance of the indoor localization model that was trained with the initial database created during the offline phase. The comparison between the proposed framework and the SL framework(s) is illustrated in <ref>. As illustrated in Fig. <ref>, for SL framework(s), the localization model is kept static after the offline phase, i.e., the model is still NN^0_SL at online period N. For the proposed framework, however, the localization model can be periodically retrained with the newly submitted unlabeled RSSIs during the online phase, i.e., from NN^1_SSL at online period 1 to NN^N-1_SSL at period N, to continuously update the model's knowledge and, thereby, guarantee the model's performance in the long term service. As for the experiment setup, the training dataset serves as the labeled database; however, the validation/testing dataset is split into two subsets. The first half of the validation/testing dataset is utilized as the online unlabeled data submitted by users for SSL training. The remaining half is used as the testing dataset for evaluation. The second scenario's experimental setup overview is shown in Fig. <ref>. As illustrated in Fig. <ref>, references are set to the models trained on (A) using SL. The proposed framework is trained with (A) and (B) with the SSL framework. Specifically, the models mentioned in Ref. 4 in the first scenario, which is the best among the references, are utilized as the reference models, e.g., NN^0_SLs, because they are well-trained with the labeled data (A) only. To evaluate the effectiveness of the proposed framework, SL-trained models with the labeled data (A) and the SSL-trained models with the proposed framework in (A) and (B) are compared in (C) using the EvAAL error as the performance metric. 
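Since both scenarios are evaluated with the EvAAL error, a small helper implementing the metric and the relative improvement η defined in the previous subsection is sketched below. The penalties follow the stated values p_b = 50 and p_f = 4; b_miss and f_miss are taken as misclassification fractions and euc as the mean two-dimensional positioning error, which is one consistent reading of the definition, and all names are illustrative.

import numpy as np

def evaal_error(b_true, b_pred, f_true, f_pred, xy_true, xy_pred,
                p_b=50.0, p_f=4.0):
    # Error = p_b * b_miss + p_f * f_miss + euc
    b_miss = np.mean(np.asarray(b_true) != np.asarray(b_pred))   # building miss rate
    f_miss = np.mean(np.asarray(f_true) != np.asarray(f_pred))   # floor miss rate
    euc = np.mean(np.linalg.norm(np.asarray(xy_true) - np.asarray(xy_pred), axis=1))
    return p_b * b_miss + p_f * f_miss + euc

def relative_improvement(error_ref, error_prop):
    # eta = (Error_ref - Error_prop) / Error_ref * 100  [%]
    return (error_ref - error_prop) / error_ref * 100.0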
To better understand the simulation results of the second scenario before performance evaluation, a study on the new split datasets is necessary, especially the spatial distribution of the new unlabeled data (B) and the testing dataset (C). The data distribution of (B) and (C) are shown in Fig. <ref>. Based on the distribution presented in Fig. <ref>, the distribution of (B) is distinct from that of (C), which suggests that SSL training based on (B) may not significantly enhance the performance of the indoor localization model when tested with (C). However, such a phenomenon is common in real-world applications, where newly submitted unlabeled data are in areas that users rarely visit, and the data spatial distribution may also differ from the testing dataset. For instance, a meeting room in a building may rarely be visited by users, resulting in a limited number of unlabeled data for most of the period. At the same time, the testing dataset may contain a large amount of data from the same meeting room due to the meeting schedule. Therefore, the simulation of the second scenario in this paper is both challenging and realistic, which can provide a comprehensive evaluation of the proposed framework. Before going into the details of the results, it is worth noting that the models' performance is notably reduced compared to the first scenario, which means the performance on (C) is worse than that of the joint of (B) and (C). This is because (C) is more challenging than (B); as discussed in <cit.>, there are significant volumes of data that have overlaps in the aspect of the phone, significantly affecting RSSI measurements due to different models and antenna placements in (B). However, phone overlaps are hardly observed in the new split testing dataset (C). Moreover, there is a vast data collection gap at the top floor of building 2 in the UJIIndoorLoc database, where the training data is scarce, but the testing data is abundant in (C). Therefore, the performance declines in the reference models are reasonable. The results of online continuous learning are given in Table <ref>. The last column in Table <ref> highlights the effectiveness of the proposed framework in improving the indoor localization performance of the adopted given neural networks in the second scenario. Specifically, the proposed framework achieves a 4.25% and 9.35% relative improvement over the reference model for the SIMO-DNN model and the CNNLoc, respectively. These results demonstrate that the proposed framework can effectively leverage online unlabeled data to enhance the indoor localization performance of the adopted given neural networks. Moreover, the proposed framework outperforms the references trained with labeled data only, indicating its effectiveness in exploiting online unlabeled data for improved indoor localization. We also tested <cit.> on (C) as a reference, which is trained with labeled data only. Our results show that the proposed framework outperforms existing models on the new split dataset. § EXPERIMENTAL STUDY ON BLOCKS The noise injection and AP selection are crucial components of the proposed framework, which significantly affect the performance of the indoor localization model but have never been studied in the context of SSL and even SL for indoor localization. To better understand the effect of these components, experiment studies are conducted to evaluate the effectiveness of the noise injection and AP selection in the proposed framework. 
The experimental research is conducted with the same setup described in Section <ref> for a fair comparison. §.§ Ablation Experiments on AP Selection AP selection is a crucial component that reduces the number of input dimensions and computational complexity. To evaluate the efficiency of AP selection, the proposed framework is compared with the reference model, where the input features are not selected. Notably, when AP selection is not applied, the input features size is 520, the same as the number of APs in the UJIIndoorLoc database. Therefore, the structure of the encoder is changed to 520, 260, and 130 for the SIMO-DNN and CNNLoc models. The experiments are conducted based on Case/Ref. 4 of the first scenario. The results are given in Table <ref>. According to the results presented in Table <ref>, the SSL method surpasses the reference model, regardless of whether AP selection is utilized. The ablation experiments on AP selection further support its effectiveness as a component in the context of SSL and SL for indoor localization. Although the improvements when using the AP selection are insignificant compared to the case without AP selection, it is worth noting that the reference models are already well-trained with the labeled data only, which is better than the original models achieved in their literature. §.§ Type of Noise Injection Noise injection is another crucial component of the proposed framework that enhances the model's capacity to handle noise and mitigate RSSI time fluctuations in the context of SSL. Different types of noise can be injected into the labeled RSSI to produce corresponding unlabeled data. Apart from the AWGN, uniform noise is also considered in this study. The uniform noise is generated by randomly selecting a value from the range [-1,1], slightly larger than the range of the AWGN, and injecting it with the same standard method as the AWGN. To understand the impact of the noise injection on the performance of the indoor localization model, the t-SNE is applied to visualize the noise-injected RSSIs based on the AP-selected features. To evaluate its impact on the framework's performance, different types of noise are injected into the RSSIs of the labeled data, as the corresponding unlabeled data are used for SSL training under Case 4 in the first scenario, where AP selection is already applied. The results of the ablation experiments on noise injection are given in Table <ref>. In Table <ref>, the proposed framework effectively enhances localization performance by incorporating injected noise in both AWGN and uniform noise. Specifically, AWGN is more effective than uniform noise in simulating the real-world noise in RSSIs. Although the improvement levels differ between methods and noise types, the proposed framework's accuracy surpasses the reference model, i.e., using the original data with the noise-injected data as the labeled data for SL training, thanks to incorporating unlabeled data in an SSL manner, acting as a form of regularization to prevent overfitting. As illustrated in Fig. <ref> and Fig. <ref>, the distribution of both noise-injected data and the original data are slightly different at decision boundaries but remain similar in general, which means direct SL training may not significantly improve the model's performance because the model is likely to overfit the noise-injected data and fail to generalize well on the original data. 
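The two noise types compared in this ablation can be generated as in the sketch below: AWGN with μ = 0, σ = 0.1, and clipping to ±0.5 as specified earlier, and uniform noise drawn from [-1, 1]. The noise is added to the normalized, AP-selected RSSI matrix; the random-generator handling and function names are illustrative.

import numpy as np

def inject_awgn(rssi, mu=0.0, sigma=0.1, clip=0.5, rng=None):
    # RSSI_noised = RSSI + N(mu, sigma^2), with the noise clipped to [-clip, clip].
    rng = rng if rng is not None else np.random.default_rng()
    noise = np.clip(rng.normal(mu, sigma, size=np.shape(rssi)), -clip, clip)
    return rssi + noise

def inject_uniform(rssi, low=-1.0, high=1.0, rng=None):
    # Uniform-noise variant used in the ablation study.
    rng = rng if rng is not None else np.random.default_rng()
    return rssi + rng.uniform(low, high, size=np.shape(rssi))

# Example: create pseudo-unlabeled data from the labeled, AP-selected RSSI matrix.
# x_unlabeled = inject_awgn(x_labeled)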
In contrast to direct SL training on the noise-injected data, the EMA mechanism of SSL slowly adapts the model to the injected noise and keeps the model's performance stable on the original data, leading to improved performance on unseen test data compared to fitting the noise-injected data directly. § DISCUSSION The investigation shows that the proposed framework can improve the indoor localization performance of the adopted neural networks under both experiment scenarios. As for the computational complexity, the proposed framework is more expensive than the SL framework(s) due to the additional training process required for the unlabeled data. However, compared to methods involving GANs or BERT, the proposed framework is far more efficient and can be trained on a single GPU card. Regarding localization performance, the proposed framework can improve the accuracy of the adopted neural networks by up to 10.99% and 9.66% for SIMO-DNN and CNNLoc, respectively, under the first scenario. The proposed framework can also improve the indoor localization performance of the adopted neural networks by up to 4.25% and 9.35% under the second scenario. Furthermore, the experimental study on AP selection and noise injection shows that both components are crucial for the proposed framework to improve the indoor localization performance of the adopted neural networks. § CONCLUSION In this paper, we proposed a framework for improving indoor localization performance using SSL, AP selection, and noise injection techniques. The framework incorporates unlabeled data to act as a form of regularization, preventing overfitting and improving generalization on unseen test data. We conducted experiments on two scenarios and compared the performance of our framework with reference models trained using SL. The results showed that our framework significantly improved indoor localization performance for both SIMO-DNN and CNNLoc models. Under the first scenario, our framework improved the performance by up to 10.99% and 9.66% for SIMO-DNN and CNNLoc, respectively. Under the second scenario, the improvements were up to 4.25% and 9.35% for SIMO-DNN and CNNLoc, respectively. We also conducted experiments on AP selection and noise injection, demonstrating these components' effectiveness in improving the framework's performance. In the future, it is worth investigating more advanced augmentation techniques and SSL methods to further improve the performance of the proposed framework. intor_01_1 A. Basiri, E. S. Lohan, T. Moore, A. Winstanley, P. Peltola, C. Hill, P. Amirian, and P. Figueiredo e Silva, “Indoor location based services challenges, requirements and usability of current solutions,” Computer Science Review, vol. 24, pp. 1–12, 2017. intor_01_2 R. S. Naser, M. C. Lam, F. Qamar, and B. B. Zaidan, “Smartphone-based indoor localization systems: A systematic literature review,” Electronics, vol. 12, no. 8, 2023. intro_02 P. Bahl and V. N. Padmanabhan, “RADAR: An in-building RF-based user location and tracking system,” in Proc. INFOCOM 2000, vol. 2, 2000, pp. 775–784. intro:survey_03 C. Basri and A. El Khadimi, “Survey on indoor localization system and recent advances of WIFI fingerprinting technique,” in Proc. ICMCS 2016, 2016, pp. 253–259. intro:survey_04 C. Laoudias, A. Moreira, S. Kim, S. Lee, L. Wirola, and C. Fischione, “A survey of enabling technologies for network localization, tracking, and navigation,” IEEE Communications Surveys and Tutorials, vol. 20, no. 4, pp. 3607–3644, 2018.
intro:survey_05 P. Prasithsangaree, P. Krishnamurthy, and P. Chrysanthis, “On indoor position location with wireless LANs,” in Proc. IEEE ISPIMRC 2002, vol. 2, 2002, pp. 720–724 vol.2. knn_01 J. Torres-Sospedra, R. Montoliu, S. Trilles, Óscar Belmonte, and J. Huerta, “Comprehensive analysis of distance and similarity measures for Wi-Fi fingerprinting indoor positioning systems,” Expert Systems with Applications, vol. 42, no. 23, pp. 9263–9278, 2015. knn_02 T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967. Kim:18-1 K. S. Kim, S. Lee, and K. Huang, “A scalable deep neural network architecture for multi-building and multi-floor indoor localization based on Wi-Fi fingerprinting,” Big Data Analytics, vol. 3, no. 4, Apr. 2018. Kim:18-3 K. S. Kim, “Hybrid building/floor classification and location coordinates regression using a single-input and multi-output deep neural network for large-scale indoor localization based on Wi-Fi fingerprinting,” in Proc. CANDAR 2018, Hida Takayama, Japan, Nov. 2018. cnn_01 X. Song, X. Fan, X. He, C. Xiang, Q. Ye, X. Huang, G. Fang, L. L. Chen, J. Qin, and Z. Wang, “CNNLoc: Deep-learning based indoor localization with WiFi fingerprinting,” in Proc. SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI 2019, 2019, pp. 589–595. ssl:mean_teacher A. Tarvainen and H. Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” 2017. Data:UJI J. Torres-Sospedra, R. Montoliu, A. Martínez-Usó, J. P. Avariento, T. J. Arnau, M. Benedito-Bordonau, and J. Huerta, “UJIIndoorLoc: A new multi-building and multi-floor database for WLAN fingerprint-based indoor localization problems,” in Proc. IPIN 2014, Busan, Korea, Oct. 2014, pp. 261–270. rela_GSSL L. Zhang, S. Valaee, Y. Xu, L. Ma, and F. Vedadi, “Graph-based semi-supervised learning for indoor localization using crowdsourced data,” Applied Sciences, vol. 7, no. 5, 2017. rela_MT_CIR P. Chen, Y. Liu, W. Li, J. Wang, J. Wang, B. Yang, and G. Feng, “Semi-supervised learning-enhanced fingerprint indoor positioning by exploiting an adapted mean teacher model,” Electronics, vol. 13, no. 2, 2024. rela_TSLSSL J. Yoo, “Time-series laplacian semi-supervised learning for indoor localization,” Sensors, vol. 19, no. 18, 2019. rela_WGAN W. Njima, A. Bazzi, and M. Chafii, “DNN-based indoor localization under limited dataset using GANs and semi-supervised learning,” IEEE Access, vol. 10, pp. 69 896–69 909, 2022. rela_SSLComp K. M. Chen and R. Y. Chang, “A comparative study of deep-learning-based semi-supervised device-free indoor localization,” in Proc.GLOBECOM 2021, 2021, pp. 1–6. rela_WePos B. Guo, W. Zuo, S. Wang, W. Lyu, Z. Hong, Y. Ding, T. He, and D. Zhang, “Wepos: Weak-supervised indoor positioning with unlabeled wifi for on-demand delivery,” Proc. ACM IMWUT 2022, vol. 6, no. 2, jul 2022. rela_MTLoc J. Wang, Z. Zhao, M. Ou, J. Cui, and B. Wu, “Automatic update for wi-fi fingerprinting indoor localization via multi-target domain adaptation,” Proc. ACM IMWUT 2023, vol. 7, no. 2, jun 2023. rela_BERT J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in North American Chapter of the Association for Computational Linguistics, 2019. ssl:overview1 Y. Ouali, C. Hudelot, and M. Tami, “An overview of deep semi-supervised learning,” arXiv preprint arXiv:2006.05278, 2020. ssl:temporal_ensembling S. Laine and T. 
Aila, “Temporal ensembling for semi-supervised learning,” 2016. AP_SLC_01 S. Li, Z. Tang, K. S. Kim, and J. S. Smith, “On the use and construction of wi-fi fingerprint databases for large-scale multi-building and multi-floor indoor localization: A case study of the UJIIndoorLoc database,” Sensors, vol. 24, no. 12, p. 3827, 2024. dynamic_static Z. Tang, R. Gu, S. Li, K. S. Kim, and J. S. Smith, “Static vs. dynamic databases for indoor localization based on wi-fi fingerprinting: A discussion from a data perspective,” in Proc. ICAIIC 2024. IEEE, 2024, pp. 760–765. aug_inject_01 M. Eren Akbiyik, “Data augmentation in training CNNs: Injecting noise to images,” arXiv e-prints, pp. arXiv–2307, 2023. aug_inject_02 M. Momeny, A. A. Neshat, M. A. Hussain, S. Kia, M. Marhamati, A. Jahanbakhshi, and G. Hamarneh, “Learning-to-augment strategy using noisy and denoised data: Improving generalizability of deep CNN for the detection of COVID-19 in X-ray images,” Computers in Biology and Medicine, vol. 136, p. 104704, 2021. aug_inject_03 S. Yin, C. Liu, Z. Zhang, Y. Lin, D. Wang, J. Tejedor, T. F. Zheng, and Y. Li, “Noisy training for deep neural networks in speech recognition,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2015, 2015. ColdStart_01 J. B. Schafer, D. Frankowski, J. Herlocker, and S. Sen, Collaborative Filtering Recommender Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007, pp. 291–324. ColdStart_02 I. Turc, M. Chang, K. Lee, and K. Toutanova, “Well-read students learn better: The impact of student initialization on knowledge distillation,” CoRR, vol. abs/1908.08962, 2019. EvAAL A. Moreira, M. J. Nicolau, F. Meneses, and A. Costa, “Wi-Fi fingerprinting in the real world – RTLSUM at the EvAAL competition,” in Proc. IPIN 2015, Banff, Alberta, Canada, Oct. 2015, pp. 1–10. rnn_01 A. E. Ahmed Elesawi and K. S. Kim, “Hierarchical multi-building and multi-floor indoor localization based on recurrent neural networks,” in Proc. CANDARW 2021, 2021, pp. 193–196.
http://arxiv.org/abs/2407.12167v1
20240716204242
Modeling Kinetic Effects of Charged Vacancies on Electromechanical Responses of Ferroelectrics: Rayleighian Approach
[ "Rajeev Kumar", "Shuaifang Zhang", "P. Ganesh" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE6AC05-00OR22725 with the U.S. Department of Energy. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-publicaccess-plan). empty kumarr@ornl.gov Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN-37831 Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN-37831 ganeshp@ornl.gov Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN-37831 § ABSTRACT Understanding time-dependent effects of charged vacancies on electromechanical responses of materials is at the forefront of research for designing materials exhibiting metal-insulator transition, and memresistive behavior. A Rayleighian approach is used to develop a model for studying the non-linear kinetics of reaction leading to generation of vacancies and electrons by the dissociation of vacancy-electron pairs. Also, diffusion and elastic effects of charged vacancies are considered to model polarization-electric potential and strain-electric potential hysteresis loops. The model captures multi-physics phenomena by introducing couplings among polarization, electric potential, stress, strain, and concentrations of charged (multivalent) vacancies and electrons (treated as classical negatively charged particles), where the concentrations can vary due to association-dissociation reactions. Derivation of coupled time-dependent equations based on the Rayleighian approach is presented. Three limiting cases of the governing equations are considered highlighting effects of 1) non-linear reaction kinetics on the generation of charged vacancies and electrons, 2) the Vegard's law (i.e., the concentration-dependent local strain) on asymmetric strain-electric potential relations, and 3) coupling between a fast component and the slow component of the net polarization on the polarization-electric field relations. The Rayleighian approach discussed in this work should pave the way for developing a multi-scale modeling framework in a thermodynamically consistent manner while capturing multi-physics phenomena in ferroelectric materials. Modeling Kinetic Effects of Charged Vacancies on Electromechanical Responses of Ferroelectrics: Rayleighian Approach P. Ganesh July 22, 2024 ==================================================================================================================== § INTRODUCTION Fundamental principles, which can help in design of materials exhibiting metal-insulator transition<cit.> and memory effects, are under extensive scrutiny by a number of researchers in the fields of spintronics<cit.>, ferroelectrics<cit.>, and neuromorphic computing<cit.>. Experimental discoveries<cit.> connecting oxygen vacancies to unconventional ferroelectricity in films and metal-insulator transitions have led to research activities focused on understanding effects of the vacancies on material properties. These activities include experimental measurements, which can decouple contributions to switching of polarization from polar crystal phases and vacancy migration under applied electric fields, transient effects of vacancy ordering<cit.>, and electrochemical effects due to surfaces<cit.>. 
In parallel, models capturing effects of oxygen vacancies at different levels of approximations have been developed, which include molecular models based on density functional theory<cit.>, reactive force fields<cit.>, and continuum models<cit.> capturing long-wavelength physics. These research activities, experimental and theoretical, have unequivocally established that electrostatic effects due to vacancies are of paramount importance in controlling properties of materials at length and time scales relevant to application of materials in microelectronics. For example, Lin et al.<cit.> proposed a hypothesis that enhanced remnant polarization of a composite containing ferroelectric barium titante (BaTiO_3) and metallic non-ferroelectric oxide of SrRuO_3 may be attributed to the accumulation of oxygen vacancies at the BaTiO_3/SrRuO_3 interface. Similarly, thin films (∼ 80 nm) of _3 grown on _3//_3 have been studied recently<cit.> to understand effects of oxygen vacancy injection on electromechanical responses of the films. It has been established<cit.> that enhanced electromechanical responses in these films can be sustained by injection of oxygen vacancies and kinetic effects due to the vacancies were postulated to be responsible for the responses. Modeling kinetic effects due to vacancies in thin films of ferroelectric materials requires use of a simulation method, which can capture effects of multiple phases (varying in polarization and crystal symmetry) and multi-physics phenomena due to couplings among polarization, electrostatic potential, strain, and vacancies. For long length and time scales, phase field method has been used to model ferroelectric<cit.> materials, using parameters, which can be either inferred from experimental data or estimated using density functional theory (DFT). The phase field method has been used to model phenomena occurring at long time scales (of the order of seconds), such as domain wall motion <cit.> and domain nucleation<cit.>. The method incorporates a thermodynamic free energy similar to the originally proposed by Landau, Ginzburg, and Devonshire for ferroelectricity in the bulk (i.e., without surfaces), generalized to inhomogeneous systems and time domain. The free energy is typically written in terms of a space and time dependent polarization vector, and its gradients. Coefficients of various terms in the free energy can be either kept phenomenological or estimated from first principles calculations such as DFT after identifying underlying origins of polarization in terms of atomic displacements . For example, the free energies for some ferroelectrics materials are summarized by Chen <cit.>. In the last two decades, a number of phase field models have been developed for understanding the effects of vacancies in ferroelectric materials. For example, Zhang et al. <cit.> developed a phase field model for ferroelectrics with oxygen vacancies by treating them as defect dipoles and investigated the oxygen-vacancy-induced memory effect and large recoverable strain in BaTiO_3. Cao et al. <cit.> treated the vacancies as charged defects and developed a phase field method for BaTiO_3 by coupling time dependent equations for polarization with the Poisson-Nernst-Planck equations for density of charges. Shindo et al.<cit.> used a phase field model to study the electromechanical response of polycrystal BaTiO_3 with oxygen vacancies by treating the vacancies as defect dipoles. 
Recently, Fedeli et al.<cit.> presented a phase field model for ferroelectrics by treating defects in the single crystal and polycrystal structures as voids, charged point defect and polarization pinned objects, such that the polarization pinned objects retain their polarization during the cycling of electric field. Lastly, Kelley et al. <cit.> proposed a new free energy by including the impact of oxygen vacancies on not only polarization, but also on strains, which they used to understand asymmetry in the polarization-electrostatic potential (P-ψ) loops with different concentrations of oxygen vacancies. In order to simply the semi-analytical analysis, they invoked an assumption that the vacancies move much faster than the polarization so that a steady state approximation for relaxation of the densities of the vacancies can be justified. Despite these modeling efforts spanning two decades related to the modeling of vacancies and ferroelectrics, a unified theoretical framework, which can be used to construct thermodynamically consistent kinetic models for thin films of ferroelectric materials with vacancies is still lacking. Development of such a theoretical framework is the main goal of this paper. Limiting cases are discussed to highlight key features of this framework. In this manuscript, a Rayleighian approach is used to develop time and space dependent equations for polarization, densities of vacancies, electrons, and their pairs, electrostatic potential, and strains. The approach has its basis in principles of linear irreversible thermodynamics and ensures that the second law of thermodynamics is obeyed. Numerical solutions of coupled set of equations will be presented in another manuscript for the purpose of constructing average polarization-electric potential and strain-electric potential hysteresis loops. The manuscript is organized as follows: details of energy functional, the Rayleighian, and the coupled equations are presented in the section  <ref>. Different limits of the coupled set of equations are analyzed in section  <ref>. Conclusions are presented in section  <ref>. § MODEL BUILDING FOR THIN FILMS OF FERROELECTRICS USING THE RAYLEIGHIAN APPROACH   A set of time-dependent equations can be derived for simulating ferroelectric materials like _3 with vacancies. These equations can be derived using principles of irreversible thermodynamics. For the derivation, we decompose local polarization into fast and slow components resulting from electronic and atomic/orientational motions, respectively. The fast component of the polarization (=𝐏_e(𝐫,t)) at a location 𝐫 at time t is assumed to be in equilibrium with the electric field (𝐄(𝐫,t) = -∇ψ(𝐫,t)) i.e., 𝐏_e(𝐫,t) = ϵ_0 ϵ_∞𝐄(𝐫,t) so that ϵ_0 is the permittivity of vacuum, ϵ_∞ is the infinite frequency dielectric constant of the material, and ψ is the electrostatic potential. In contrast, the slow component (=𝐏(𝐫,t)) can be out-of-equilibrium and time-dependent equations are derived for this component. Such a decomposition of the polarization is similar to the work by Marcus<cit.> focused on electron transfer processes. Coupling of the slow component of polarization with the fast component is considered by introducing lattice strain. Diffusion of charged vacancies and electrons are considered by using their local densities as additional order parameters, and following our previous work related to ion transport in polymerized ionic liquids<cit.>. 
Furthermore, reaction kinetics leading to electrochemical effects, which result from charge generation and recombination of positively charged vacancies with electrons, are considered. Strain and concentration of the vacancies are coupled by the Vegard's law<cit.>, which is an empirical rule leading to a linear relation between the strain and the concentration of the vacancies. Mathematical framework leading to equations, which can introduce couplings among polarization, strain, diffusion, and reaction kinetics, is presented below. Any material perturbed from equilibrium by the application of a small external force will try to approach equilibrium through various dissipative processes. Dissipative processes and the path must lead to the positive rate of entropy production as per the second law of thermodynamics<cit.>. These statements were cast into a rigorous mathematical framework by Onsager<cit.> after defining a functional called the Rayleighian, which consists of the rate of change of “free” energy (or entropy for isothermal processes) and a dissipation function. According to Onsager's variational principle, the true dynamics of a system is one that demonstrates the least dissipation. Linear relations among various fluxes and forces can hence be derived by minimizing the Rayleighian with respect to the velocities leading to the maximum rate of entropy production and the resulting equations define the most probable path towards an equilibrium<cit.>. Details of this so-called Onsager variational principle are presented elsewhere<cit.>. Construction of the Rayleighian requires identification of independent fluxes, reaction rates, and constraints, which are discussed in the next subsection. §.§ Independent Fluxes, Rates, and Constraints A model for understanding effects of vacancies can be developed using the principles of irreversible thermodynamics<cit.>. In particular, we use Onsager's variational principle<cit.> to derive a set of equations for the polarization (P(𝐫,t)), strain tensor (ε(𝐫,t)), number densities of electroactive vacancies (ρ_+(𝐫,t)), electrons (ρ_-(𝐫,t)), and their pairs (ρ_±(𝐫,t)), which are coupled via the electric field (E(𝐫,t)). The number densities, ρ_+(𝐫,t), ρ_-(𝐫,t), and ρ_±(𝐫,t) are assumed to satisfy ∂ρ_i(𝐫,t)/∂ t = -∇·𝐣_i (𝐫,t) + S_i(𝐫,t) i = +,-,± where 𝐣_i (𝐫,t) = ρ_i (𝐫,t) 𝐯_i (𝐫,t) is the diffusive flux so that 𝐯_i (𝐫,t) is the collective velocity of i=+,-,±. In Eq.  <ref>, S_i (𝐫,t) is the rate of change of the number density of i resulting from the dissociation and the formation of vacancy-electron pairs. We assume that the number densities satisfy the no-void condition at all locations at all times so that the sum of the volume fractions defined as ϕ_i (𝐫,t) = ρ_i (𝐫,t)/ρ_io is unity i.e., ∑_i=+,-,±ϕ_i(𝐫,t) = 1 where 1/ρ_io is the molar volume of i. Using Eq.  <ref> and  <ref>, a constraint on the fluxes and the reaction rates is obtained, which is written as ∑_i=+,-,±[-∇·{𝐣_i (𝐫,t)/ρ_io} + S_i(𝐫,t)/ρ_io] = 0 Now, we need to construct relations between 𝐣_i (𝐫,t), S_i (𝐫,t) and thermodynamic forces. In the following, this is accomplished by considering a functional called the Rayleighian (R). Before defining the Rayleighian, we need to identify relevant indepedent fluxes (𝐣_i), rates (S_i), and thermodynamic forces. 
For identifying independent fluxes (𝐣_i), rates (S_i), and other constraints, we consider the rate of change of the total number (=M(t)) of particles in the vacancies, the electrons, and the pairs, defined as M(t) = ∫ d𝐫 [ρ_+(𝐫,t) + ρ_-(𝐫,t) + (z_+ + 1)ρ_±(𝐫,t)]. Due to the fixed number of particles at all times, the rate of change of M(t) must be zero and the rate is given by d M(t)/d t = ∫ d𝐫 [-∇·{𝐣_+(𝐫,t) + 𝐣_-(𝐫,t) + (z_+ + 1)𝐣_±(𝐫,t)} . . + {S_+(𝐫,t) + S_-(𝐫,t) + (z_+ + 1)S_±(𝐫,t)}] = ∫ dΓ 𝐧̂·{𝐣_+(𝐫,t) + 𝐣_-(𝐫,t) + (z_+ + 1)𝐣_±(𝐫,t)} where we have used Eqs.  <ref> and assumed S_+(𝐫,t) + S_-(𝐫,t) + (z_+ + 1)S_±(𝐫,t) = 0 so that the total number of particles remains the same due to the dissociation-association reactions. Here, Γ represents the surface enclosing the volume under consideration so that 𝐧̂ is an outward normal at the surface and we have used the divergence theorem. Now, equating d M(t)/d t = 0, Eq.  <ref> is satisfied if 𝐧̂·{𝐣_+(𝐫,t) + 𝐣_-(𝐫,t) + (z_+ + 1)𝐣_±(𝐫,t)} = 0 at the surface. Another constraint for the boundary fluxes is obtained by considering the rate of change of the total charge (=C(t)) of the vacancies, and the electrons, defined as C(t) = e∫ d𝐫[z_+ρ_+(𝐫,t) - ρ_-(𝐫,t)], which must be zero due to the global electroneutrality at all times. The rate of change of C(t) is given by d C(t)/d t = e∫ d𝐫 [-∇·{z_+ 𝐣_+(𝐫,t) - 𝐣_-(𝐫,t)} + {z_+S_+(𝐫,t) - S_-(𝐫,t)}] = ∫ dΓ 𝐧̂·{z_+𝐣_+(𝐫,t) - 𝐣_-(𝐫,t)} = 0 where we have assumed that z_+S_+(𝐫,t) - S_-(𝐫,t) = 0 i.e., the rate of charge generation due to the dissociation-association reactions is taken to be zero. Combining S_+(𝐫,t) + S_-(𝐫,t) + (z_+ + 1)S_±(𝐫,t) = 0 with z_+S_+(𝐫,t) - S_-(𝐫,t) = 0 leads to S_+(𝐫,t) = 1/z_+ S_-(𝐫,t) = - S_±(𝐫,t) ≡ - S(𝐫,t) for the association-dissociation reactions involving electroneutral vacancy-electron pairs, positively charged vacancies with charge z_+ e (so that e is the charge of an electron) and neutralizing electrons. For example, z_+ = 2 in the case of oxygen vacancies. Eq.  <ref> implies that there is only one independent reaction rate, which is taken to be S(𝐫,t) and defined by Eq.  <ref>. Now, Eqs.  <ref> and  <ref> imply that there are only two independent fluxes out of the three fluxes, 𝐣_i. Based on a molecular description of the diffusion resulting from frictional forces and relative motion of molecules, we work with the relative fluxes defined as ĵ_i (𝐫,t) = ϕ_i (𝐫,t) [ 𝐯_i (𝐫,t)- 𝐯 (𝐫,t)], where 𝐯 (𝐫,t) = ∑_i=+,-,±ϕ_i(𝐫,t) 𝐯_i (𝐫,t) These relative fluxes are related by the relation ∑_i=+,-,±ĵ_i (𝐫,t) = 0 and the Rayleighian can be written in terms of any two relative fluxes. In here, we choose ĵ_+ and ĵ_- to study the effects of diffusion of the vacancies and the electrons, respectively. Eqs.  <ref> are rewritten in terms of these independent fluxes and rates in the form ∂ϕ_k(𝐫,t)/∂ t = -∇·ĵ_k (𝐫,t) + 1/ρ_koS_k(𝐫,t) - ∇·[ϕ_k (𝐫,t) 𝐯 (𝐫,t)] k = +,-. and Eq.  <ref> can be written as ∇·𝐯 (𝐫,t) + S(𝐫,t){1/ρ_+o + z_+/ρ_-o-1/ρ_± o} = 0 Eqs.  <ref> and  <ref> are rewritten in terms of ĵ_i=+,-,±·𝐧̂ as 𝐧̂·{ρ_+oĵ_+(𝐫,t) + ρ_-oĵ_-(𝐫,t) + (z_+ + 1)ρ_± oĵ_±(𝐫,t)} + 𝐧̂·{[ρ_+oϕ_+(𝐫,t) + ρ_-oϕ_-(𝐫,t) + (z_+ + 1)ρ_± oϕ_±(𝐫,t)]𝐯 (𝐫,t)} = 0 𝐧̂·{z_+ρ_+oĵ_+(𝐫,t) - ρ_-oĵ_-(𝐫,t) + [z_+ρ_+oϕ_+(𝐫,t) - ρ_-oϕ_-(𝐫,t)]𝐯(𝐫,t)} = 0 These equations show that there are two boundary fluxes, which are independent. These are chosen to be ĵ_k=+,-·𝐧̂. In summary, independent fluxes and rates are ĵ_k=+,-, ĵ_k=+,-·𝐧̂, and S, respectively. In addition, we need to consider the constraint written as Eq.  
<ref> and discussed above. §.§ Rayleighian For applying the Onsager's variational principle, a Rayleighian (R(t)) for the thin films of ferroelectric materials can be defined as<cit.> R(t) = d H(t)/d t + W(t) - ∫ d𝐫 p(𝐫,t)[∇·𝐯 (𝐫,t) + S(𝐫,t){1/ρ_+o + z_+/ρ_-o-1/ρ_± o}] where H(t) is the time-dependent energy functional (presented in the next subsection) and p(𝐫,t) is a Lagrange's multiplier to enforce the constraint (cf. Eq.  <ref>), and W is the dissipation function. We should point out that the functional form for the dissipation function is assumed to be known in order to use the variational principle and there is no prescription for deriving it. In this paper, we present a functional form for the dissipation function by including various multi-physics phenomena and ensuring that the derived equations lead to known relations in various limiting cases. For example, W has contributions from the coupling of the polarization with the lattice velocity, derived by Hubbard and Onsager<cit.> using the approximation of fast rotational relaxation<cit.>. Also, contributions from friction, which result from relative motions of charged vacancies and electrons with respect to the velocity 𝐯(𝐫,t) (cf. Eq.  <ref>), are included<cit.>. Adding these contributions, W can be written as W(t) = τ_p/2∫ d𝐫 [∂𝐏(𝐫,t)/∂ t + [𝐯_l(𝐫,t)·∇]𝐏(𝐫,t) + 1/2𝐏(𝐫,t) ×[∇×𝐯_l(𝐫,t)]]^2 + 1/2∫ d𝐫∑_k=+,-∑_k'=+,- L_kk'(𝐫,t)ρ_koρ_k'oĵ_k(𝐫,t) ·ĵ_k'(𝐫,t) + 1/2∫ d𝐫∑_k=+,-∑_k'=+,- M_kk'(𝐫,t)ρ_koρ_k'o[ĵ_k(𝐫,t)·𝐧̂] [ĵ_k'(𝐫,t)·𝐧̂] + 1/2∫ d𝐫 ω(𝐫,t) S^2(𝐫,t) Here, τ_p and L_kk' are parameters characterizing time-scale for change in polarization 𝐏, and friction coefficient for motion of the vacancies, and electrons relative to 𝐯(𝐫,t) (defined by Eq.  <ref>). Similarly, M_kk' are the parameters characterizing the dissipation due to the relative fluxes at the boundaries. Also, 𝐯_l = ∑_l=1,2,3[∂ u_l(𝐫,t)/∂ t] î_l is net velocity of an underlying lattice so that î_l are unit vectors and u_l(𝐫,t) is the displacement of underlying atoms at location 𝐫 at time t. In general, L_kk' and M_kk' can be concentration dependent but each matrix with either L_kk' or M_kk' as its elements must be positive definite for the positive entropy production. The last term in Eq.  <ref> is the dissipation due to the reaction with a prefactor ω, which will be related to the rate of vacancy-electron pair dissociation and recombination. In the following, we present explicit expression for H(t) by using thermodynamic free energy and by generalizing it after considering additional effects of the charged vacancies. §.§ Energy Functional For deriving a set of time-dependent equations, we use a time (t)-dependent energy functional, written as H(t) = ∫ d𝐫 [ H_LGD{P} + H_grad{∇ P} + H_mech{P, ε,ρ_+} + H_self{ρ_+,ρ_-,ρ_±} . . + H_elec{P, E = -∇ψ,ρ_+,ρ_-} + H_mix{ρ_+,ρ_-,ρ_±,∇ρ_+, ∇ρ_-,∇ρ_±}], ≡ ∫ d𝐫 h{P,∇ P,ρ_+,ρ_-,ρ_±,∇ρ_+, ∇ρ_-,∇ρ_±,E,ε} where H_LGD is the Landau-Ginzburg-Devonshire (LGD) energy density<cit.> and written in terms of time-dependent polarization by invoking local-equilibrium approximation<cit.>. Similarly, H_grad is the gradient/interfacial energy density capturing the effects of inhomogeneous polarization in the long-wavelength limit. Explicit expressions for these contributions are presented in Appendix A for _3 in a seminal work by Chen and co-workers<cit.>. Coupling between the polarization and the strain is encoded in H_mech, which is the mechanical strain energy density so that ε is the strain tensor. 
Considering limit of small deformation, the mechanical strain energy density<cit.> can be defined as H_mech = 1/2σ(𝐫,t): [ε(𝐫,t) - ε^0(𝐫,t)] ≡C_ijkl/2[ε_ij(𝐫,t)-ε_ij^0(𝐫,t)][ε_kl(𝐫,t)-ε_kl^0(𝐫,t)], where ε_ij(𝐫,t) = [∂ u_i/∂ x_j + ∂ u_j/∂ x_i]/2 is the ij element of the total lattice-strain tensor<cit.> so that x_i are components of the spatial vector 𝐫, u_i(𝐫,t) is i^th component of the displacement vector of lattice, σ is the stress tensor, and C_ijkl is the rank four elasticity tensor. Ferroelectric materials can have spontaneous strain even in stress-free conditions and such a strain tensor (eigenstrain<cit.>) is denoted as ε_ij^0, results from electrostriction and Vegard effects, and can be defined as<cit.>, ε_ij^0(𝐫,t) = Q_ijklP_k(𝐫,t) P_l(𝐫,t) + w_ij^v ρ_+(𝐫,t), where Q_ijkl is a rank four order tensor. In this notation, ε_ij(𝐫,t) is the tensor containing both, elastic and eigenstrain, components of the strain. Furthermore, effects of the vacancies on the strain is included by the last term in Eq.  <ref>, where w_ij^v are phenomenological parameters. Here, Einstein's notation of sum over repeated indices is used. H_self is the self energy density for creating the vacancies, electrons and their pairs, written as H_self = ∑_i=+,-,± G_ioρ_i(𝐫,t) where G_io is the self-energy<cit.> for creating i. H_elec is the excess electrical energy density written as<cit.> H_elec = [z_+ ρ_+(𝐫,t) - ρ_-(𝐫,t)]e ψ(𝐫,t) - ϵ_0 ϵ_∞/2𝐄^2(𝐫,t) - [𝐏(𝐫,t) ·𝐄(𝐫,t)], where 𝐄(𝐫,t) = -∇ψ(𝐫,t) is the electric field and ψ is the electrostatic potential. z_+ is valency of oxygen vacancy and e is charge of an electron. Here, the pairs of the vacancies and the electrons are assumed to carry no charge. Furthermore, -δ[∫ d𝐫 H_elec]/δ E(𝐫',t) = 𝐃(𝐫',t) = ϵ_0 ϵ_∞𝐄(𝐫',t) + 𝐏(𝐫',t) can be readily identified as the dielectric displacement vector. Excess entropy of mixing vacancies, electrons, and their pairs is defined as H_mix along with the entropic cost of generating their inhomogeneous density profiles, written as<cit.> H_mix = k_B T ∑_i=+,-,±[ρ_i(𝐫,t) ln[ρ_i(𝐫,t)/ρ_io] + 1/2κ_i|∇ρ_i(𝐫,t)|^2 ] Logarithmic terms in Eq.  <ref> can be derived by considering the number of ways in which the vacancies, the electrons, and the pairs can be distributed in space, such that their total number remain fixed. κ_i is the coefficient of the square-gradient term<cit.>, which penalizes inhomogeneous density profiles of i. With the dissipation and energy functional given by Eqs.  <ref> and  <ref>, respectively, the Rayleighian has been specified (cf. Eq.  <ref>) completely with the quantities like τ_p, L_kk', M_kk', and ω taken as inputs. After constructing the Rayleighian, a set of equations can be systematically derived by optimizing R with respect to ∂𝐏(𝐫,t)/∂ t, ĵ_i=+,-, ĵ_i=+,-·𝐧̂, S, and 𝐯_l(𝐫,t). The set is complemented by two additional equations: one for the Lagrange's multiplier, p, and other one for the electrostatic potential, ψ. In total, nine coupled equations are derived in the next subsection. §.§ Governing Equations: Linear Irreversible Thermodynamics We assume that the electrostatic potential adjust itself so fast that stationary condition δ H/δψ(𝐫,t) = 0 is satisfied at all times and at all locations. Explicitly, this leads to ϵ_0ϵ_∞∇^2 ψ(𝐫,t) - ∇·𝐏(𝐫,t) + e[z_+ ρ_+(𝐫,t) - ρ_-(𝐫,t)] = 0, or equivalently, ∇·𝐃(𝐫,t) = e[z_+ ρ_+(𝐫,t) - ρ_-(𝐫,t)], where the right hand side is the local charge density. 
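Although the full coupled set of equations requires the numerical treatment deferred to a separate manuscript, the electrostatic subproblem above is simple enough to illustrate directly. The sketch below solves the one-dimensional form of the Poisson equation, ϵ_0 ϵ_∞ ψ'' = ∂_x P - e(z_+ρ_+ - ρ_-), by second-order finite differences with Dirichlet values of ψ prescribed at the two ends (e.g., electrode potentials); all profiles and parameter values are illustrative reduced units rather than material parameters.

import numpy as np

def solve_poisson_1d(P, rho_plus, rho_minus, dx, psi_left, psi_right,
                     eps0_epsinf=1.0, e=1.0, z_plus=2):
    # Solve eps0*eps_inf * psi'' = dP/dx - e*(z_+ rho_+ - rho_-) on a uniform grid
    # with Dirichlet boundary conditions psi(0) = psi_left and psi(L) = psi_right.
    n = len(P)
    rhs = (np.gradient(P, dx) - e * (z_plus * rho_plus - rho_minus)) / eps0_epsinf
    m = n - 2                                   # number of interior unknowns
    A = (np.diag(-2.0 * np.ones(m)) +
         np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)) / dx**2
    b = rhs[1:-1].copy()
    b[0] -= psi_left / dx**2                    # fold boundary values into the rhs
    b[-1] -= psi_right / dx**2
    psi = np.empty(n)
    psi[0], psi[-1] = psi_left, psi_right
    psi[1:-1] = np.linalg.solve(A, b)
    return psi

# Illustrative use: a uniform polarization slab with dilute, uniform distributions of
# charged vacancies and electrons between a grounded and a biased electrode.
# x = np.linspace(0.0, 1.0, 101); dx = x[1] - x[0]
# psi = solve_poisson_1d(P=0.1 * np.ones_like(x), rho_plus=0.01 * np.ones_like(x),
#                        rho_minus=0.02 * np.ones_like(x), dx=dx,
#                        psi_left=0.0, psi_right=1.0)

The remaining governing equations, which couple ψ back to the polarization, strain, and densities, follow from the variational principle as derived below.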
Evaluating δ R(t)/δ{∂𝐏(𝐫,t)/∂ t} = 0, we get governing equations for the three components of polarization P_1, P_2, P_3 such that ∂𝐏(𝐫,t)/∂ t + [𝐯_l(𝐫,t)·∇]𝐏(𝐫,t) + 1/2𝐏(𝐫,t) ×[∇×𝐯_l(𝐫,t)] = -1/τ_pδ[∫ d𝐫'h{𝐏(𝐫',t)}]/δ𝐏(𝐫,t)≡𝐏^⋆(𝐫,t), Functional h is defined in Eq.  <ref> and its dependencies on variables other than the polarization are suppressed here. Similarly, δ R(t)/δ𝐯_l(𝐫,t) = 0 leads to ∇·σ(𝐫,t) = τ_p/2{∇[𝐏^⋆(𝐫,t)·𝐏(𝐫,t)] - 𝐏(𝐫,t)×[∇×𝐏^⋆(𝐫,t)]-𝐏^⋆(𝐫,t)×[∇×𝐏(𝐫,t)] . . -𝐏(𝐫,t)[∇·𝐏^⋆(𝐫,t)] + 𝐏^⋆(𝐫,t)[∇·𝐏(𝐫,t)]} where we have used d H_mech/d t = σ(𝐫,t): [∇𝐯_l(𝐫,t) - ∂ε_ij^0(𝐫,t)/∂ t]. With the constitutive relation, σ_ij(𝐫,t) = C_ijkl[ ε_kl(𝐫,t) - ε_kl^0(𝐫,t) ] Eq.  <ref> can be used to study effects of strains. We should point out that in general, σ_ij(𝐫,t) should be computed from gradients of lattice velocity. Here, we choose a simpler and intuitive linear constitutive relation (cf. Eq.  <ref>) between stress and strain. Optimizations of R(t) with respect to ĵ_k=+,- and ĵ_k=+,-·𝐧̂ give ∇ [ μ_k(𝐫,t) - μ_±(𝐫,t)] = - ∑_k'=+,-L_kk'(𝐫,t)ρ_koρ_k'oĵ_k'(𝐫,t) μ_k(𝐫,t) - μ_±(𝐫,t) = ∑_k=+,-M_kk'(𝐫,t)ρ_koρ_k'o[ĵ_k'(𝐫,t)·𝐧̂] respectively. Here, μ_k(𝐫,t) = δ[∫ d𝐫'h{ρ_k(𝐫',t)}]/δϕ_k(𝐫,t) and becomes the local chemical potential of k in the steady state. Evaluating δ R(t)/δ S(𝐫,t) = 0 gives S(𝐫,t) = 1/ω(𝐫,t)[ μ_+(𝐫,t)/ρ_+o + z_+ μ_-(𝐫,t)/ρ_-o - μ_±(𝐫,t)/ρ_± o + p(𝐫,t){1/ρ_+o + z_+/ρ_-o - 1/ρ_± o}] Plugging the expression for S from Eq.  <ref> in Eq.  <ref>, an equation for p is obtained, which closes this set of equations along with the boundary conditions shown in Eqs.  <ref> and  <ref>. §.§ Non-linear Reaction Kinetics Eq.  <ref> shows that the reaction rate depends linearly on the chemical potentials, which limits the validity of the model presented here. We can improve this by introducing non-linear relations between the reaction rate and the chemical potentials e.g., similar to those forming the basis of the Eyring's rate of reactions<cit.>. In this subsection, we present such a model, which can lead to the non-linear reaction kinetics such as in autocatalytic reactions<cit.>. For such a purpose, we rewrite Eq.  <ref> as a limiting case of the non-linear relation S_NL(𝐫,t) = k_B T/ω(𝐫,t)[ exp[μ_+(𝐫,t) + p(𝐫,t)/ρ_+ok_B T + z_+μ_-(𝐫,t) + p(𝐫,t)/ρ_-ok_B T] . . - exp[μ_±(𝐫,t) + p(𝐫,t)/ρ_± ok_B T]] Now, the Rayleighian, R ≡ R_NL, based on S_NL can be constructed using Eq.  <ref> and it can be shown that R_NL(t) = R(t) + 1/2∫ d𝐫 ω(𝐫,t) [S_NL(𝐫,t) - S(𝐫,t)]^2 Note that R(t) = -W(t) < 0 as per the Onsager's variational principle, where the inequality is valid for a positive dissipation function and the governing equations derived in the last subsection. In contrast, R_NL(t) can have either sign for ω(𝐫,t) > 0. We should point out that 1/ω(𝐫,t) = exp[μ^⋆(𝐫,t)/ρ^⋆ ok_B T] ∼ρ^⋆ o> 0 is expected on the basis of the Eyring's rate of reactions so that μ^⋆(𝐫,t) and 1/ρ^⋆ o are the chemical potential and volume of an activated complex<cit.>, respectively. Physically, this relation between the prefactor ω(𝐫,t) and μ^⋆(𝐫,t) implies that the rate of reaction is linearly proportional to the concentration of the activated complexes<cit.>. More importantly, probability of realizing a kinetic path with the non-linear reaction rates is given by the Onsager-Machulp integral<cit.>, based on time-integral of the difference R_NL(t)-R(t). In particular, the probability of realizing the non-linear reaction rates should be exp[-∫_0^t dt' [R_NL(t')-R(t')]/2k_B T] as per the theoretical works of Onsager and Machulp<cit.>. 
For this probability to be non-zero and significant enough, ω(𝐫,t) needs to be chosen in such a manner so that the exponential of the negative of the Onsager-Machulp integral remain close to unity. In the following, we make such a choice for ω(𝐫,t) and keep all of the governing equations the same except that we replace S by S_NL to capture effects of non-linear reaction kinetics in the model for the ferroelectrics developed here. Now, using Eqs.  <ref>,  <ref> and replacing S with S_NL (cf. Eq.  <ref>), ∂ϕ_+(𝐫,t)/∂ t = ∇·[∑_k'=+,-L̃_+k'^-1(𝐫,t) ∇ [μ_k' (𝐫,t) - μ_± (𝐫,t)]] - 1/ρ_+oS_NL(𝐫,t) - ∇·[ϕ_+ (𝐫,t) 𝐯 (𝐫,t)] ∂ϕ_-(𝐫,t)/∂ t = ∇·[∑_k'=+,-L̃_-k'^-1(𝐫,t) ∇ [μ_k' (𝐫,t) - μ_± (𝐫,t)]] - z_+/ρ_-oS_NL(𝐫,t) - ∇·[ϕ_- (𝐫,t) 𝐯 (𝐫,t)] where L̃_kk'^-1 is the kk' element of the inverse matrix of L_kk'ρ_koρ_k'o. Three independent elements L̃_kk'^-1 can be interpreted<cit.> in terms of the ionic conductivity, transference number of the vacancies related to their partial ionic currents, and diffusion constants of the vacancy-electron pairs. § RESULTS In here, we present analysis of some limiting cases to highlight key effects of the vacancies and novel aspects of the model. Numerical results obtained by solving the coupled equations will be presented in a separate publication. In the model presented here, we can capture non-linear effects of barriers on reaction rates and study characteristic time for reactions. In the following, we consider three limiting cases: 1) reaction dominated regime leading to identification of a characteristic time, 2) steady state analysis highlighting coupling between the strain, electric potential, and vacancies, 3) vacancy-free regime, where a coupling between the fast and the slow component of the polarization can lead to effects of geometry manifesting in the stabilization of new topological configurations. §.§ Reaction Dominated Regime: Characteristic time We consider a limiting case, where vacancies, electrons, and their pairs are homogeneously distributed so that diffusive flux is minimal. In this limit, we need to consider ϕ_i=+,-,±(𝐫,t) ≡ϕ_i^h(t), which satisfy (cf. Eqs.  <ref>- <ref>) ∂ϕ_+^h(t)/∂ t = - 1/ρ_+oS_NL^h(t) ∂ϕ_-^h(t)/∂ t = - z_+/ρ_-oS_NL^h(t) Now, consider a case, when 1/ρ_± o = 1/ρ_+o + z_+/ρ_-o so that ϕ_+^h(t)+ϕ_-^h(t) + ϕ_±^h(t) = 1 is satisfied for non-zero reaction rate, S_NL(𝐫,t) ≡ S_NL^h(t), given by (cf. Eq.  <ref>) S_NL^h(t) = K_0(t)[ exp[μ_+^h(t)/ρ_+ok_B T + z_+μ_-^h(t)/ρ_-ok_B T] - exp[μ_±^h(t)/ρ_± ok_B T]] where we have defined K_0(t) = k_B T/ω^h(t)exp[p^h(t)/ρ_± ok_B T] and used the notation ω(𝐫,t) ≡ω^h(t), p(𝐫,t) ≡ p^h(t), μ_+(𝐫,t) ≡μ_+^h(t), μ_-(𝐫,t) ≡μ_-^h(t), μ_±(𝐫,t) ≡μ_±^h(t). From Eq.  <ref>, we get μ_+^h(t)/ρ_+0k_B T = lnϕ_+^h(t) + G_+0 + z_+ eψ^h(t)/k_B T - w_ij^v σ_ij^h(t)/k_B T μ_-^h(t)/ρ_-0k_B T = lnϕ_-^h(t) + G_-0 - eψ^h(t)/k_B T μ_±^h(t)/ρ_± 0k_B T = lnϕ_±^h(t) + G_± 0/k_B T where σ_ij(𝐫,t) ≡σ_ij^h(t), ψ(𝐫,t) ≡ψ^h(t). Using Eqs.  <ref>- <ref>, we can write Eq.  <ref> as S_NL^h(t) = K_A(t)ϕ_+^h(t)[ϕ_-^h(t)]^z_+ - K_D(t)ϕ_±^h(t) K_A(t) = K_0(t)exp[G_+0 + z_+ G_-0/k_B T - w_ij^v σ_ij^h(t)/k_B T] K_D(t) = K_0(t)exp[G_± 0/k_B T] where K_A(t) and K_D(t) are defined as the time-dependent association and dissociation constants, respectively. It should be noted that the reaction rate S_NL^h(t) is independent of the electrostatic potential due to the assumption of a uniform potential. In general, small variations of the electrostatic potential will lead to the dependence of the reaction rate on the potential. 
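The homogeneous kinetics above can be integrated directly; a minimal forward-Euler sketch is given below, evolving ϕ_+^h and ϕ_-^h under the mass-action rate and obtaining ϕ_±^h from the no-void condition (which presumes 1/ρ_± o = 1/ρ_+o + z_+/ρ_-o, as assumed in the text). The rate constants, molar volumes, time step, and initial fractions are illustrative reduced-unit values, not fitted parameters.

import numpy as np

def homogeneous_kinetics(phi_plus0, phi_minus0, K_A, K_D,
                         rho_plus_o=1.0, rho_minus_o=1.0, z_plus=2,
                         dt=1e-3, n_steps=20000):
    # S = K_A * phi_+ * phi_-^z+  -  K_D * phi_pair,
    # d(phi_+)/dt = -S / rho_+o,   d(phi_-)/dt = -z_+ * S / rho_-o,
    # phi_pair = 1 - phi_+ - phi_-   (no-void condition).
    phi_p, phi_m = phi_plus0, phi_minus0
    history = []
    for step in range(n_steps):
        phi_pair = 1.0 - phi_p - phi_m
        S = K_A * phi_p * phi_m**z_plus - K_D * phi_pair
        phi_p += dt * (-S / rho_plus_o)
        phi_m += dt * (-z_plus * S / rho_minus_o)
        history.append((step * dt, phi_p, phi_m, phi_pair))
    return np.array(history)

# Illustrative run: mostly paired vacancies and electrons dissociating toward the
# steady state set by the ratio K_D / K_A.
# traj = homogeneous_kinetics(phi_plus0=0.01, phi_minus0=0.02, K_A=1.0, K_D=0.1)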
Furthermore, the reaction rate depends on the stress due to the Vegard's law. Now, we consider the case of oxygen vacancies so that z_+ = 2, and assume that K_A(t) ≡ K_A0 and K_D(t) ≡ K_D0. In this case, we can solve Eqs.  <ref>,  <ref>, and  <ref> in terms of a time-dependent parameter, α(t) so that ϕ_+^h(t) = α(t)/ρ_+0, ϕ_-^h(t) = z_+ α(t)/ρ_-0, and ϕ_±^h(t) = 1 - (1/ρ_+0 + z_+/ρ_-0)α(t) and α(t) satisfies ∂α(t)/∂ t = - 4K_A0/ρ_+oρ_-o^2[α^3(t) + (1/ρ_+0 + 2/ρ_-0)K_D0/K_A0ρ_+oρ_-o^2/4α(t) - K_D0/K_A0ρ_+oρ_-o^2/4] Although Eq.  <ref> can be solved exactly but the exact solution obscure identification of a characteristic time. In here, we consider a situation so that Eq.  <ref> can be approximated as ∂α(t)/∂ t = - 4K_A0/ρ_+oρ_-o^2[α^3(t) - K_D0/K_A0ρ_+oρ_-o^2/4] Integrating Eq.  <ref> leads to ln[α(t)/α(∞)-1/α(0)/α(∞)-1]^2 - ln[(α(t)/α(∞))^2 + α(t)/α(∞) + 1/(α(0)/α(∞))^2 + α(0)/α(∞) + 1] = - t/τ_0 + 2√(3)arctan[1/√(3)+ 2/√(3)α(t)/α(∞)] - 2√(3)arctan[1/√(3)+ 2/√(3)α(0)/α(∞)] where τ_0 is the characteristic time for the dissociation of a divalent oxygen vacancy-electron pair and is given by τ_0 = ρ_+oρ_-o^2/24K_A0α^2(∞) = 1/6[ρ_+oρ_-o^2/4]^1/31/K_A0^1/3K_D0^2/3 Solution of Eq.  <ref> is plotted in Fig.  <ref>, which shows that for t/τ_0 > 10, the dissociation of vacancy-electron pair is almost complete i.e., α(t)/α(∞) → 1. This implies that the model developed here can be simplified to some extent by considering only dissociated vacancies and electrons for capturing kinetic effects at times much greater than τ_0. For example, construction of polarization-electrostatic potential loop can be constructed without considering the vacancy-electron pairs at time scales much greater than τ_0, typical of experimental rates at which voltages are sweeped across ferroelectric films<cit.>. But our complete formalism also allows us to describe ultra-fast timescales within the same framework with relative ease. §.§ Steady State Analysis: Implications of the Vegard's law Spatiotemporal responses of oxygen vacancies and electrons to the electrostatic potential and the strain encoded in Eqs.  <ref> and  <ref> are asymmetric due to the fact that z_+ = 2 and the use of the Vegard's law so that only the vacancies affect the strain. In order to make this clearer, the functional derivatives (exchange chemical potentials) appearing in Eqs.  <ref> and  <ref> are evaluated as [μ_+(𝐫,t)/k_B T-μ_±(𝐫,t)/k_B T] = ln(ϕ_+(𝐫,t)/1-ϕ_+(𝐫,t) - ϕ_-(𝐫,t)) + z_+ eψ(𝐫,t)/k_B T + G_+o-G_± o/k_B T - w_ij^v σ_ij(𝐫,t)/k_B T - κ_+ ρ_+o/k_B T∇^2 ϕ_+(𝐫,t) - κ_± oρ_± o/k_B T∇^2 (ϕ_+(𝐫,t)+ϕ_-(𝐫,t)) [μ_-(𝐫,t)/k_B T-μ_±(𝐫,t)/k_B T] = ln(ϕ_-(𝐫,t)/1-ϕ_+(𝐫,t) - ϕ_-(𝐫,t)) - eψ(𝐫,t)/k_B T + G_-o-G_± o/k_B T - κ_+ ρ_+o/k_B T∇^2 ϕ_+(𝐫,t) - κ_± oρ_± o/k_B T∇^2 (ϕ_+(𝐫,t)+ϕ_-(𝐫,t)) Note here that local stress-dependent term only appears in Eq.  <ref> and results from our assumption that the vacancies affect the local strain, which appear in Eq.  <ref>. In here, we show that the local polarization depends on spatial distribution of the vacancies and electrons via the electrostatic potential and doesn't depend solely on the electric field as in the classical ferroelectrics. First, we consider local equilibrium, which correspond to a steady state of the time-dependent equations. The local equilibrium is defined as the conditions [μ_+(𝐫,t)/k_B T-μ_±(𝐫,t)/k_B T] = 0 and [μ_-(𝐫,t)/k_B T-μ_±(𝐫,t)/k_B T] = 0. At the steady state and representing local equilibrium, 𝐏^⋆(𝐫,t) = 0, p = and hence, leads to ∇·σ(𝐫,t) = 0. 
Representing all of the variables (independent of time, t at t→∞) in the steady state by subscript s, we get σ_ij(𝐫,t) ≡σ_ij,s(𝐫) = 0, where the latter equality represents equilibrium (i.e., a stress-free state). This, in turn, implies that ϵ_ij(𝐫,t) ≡ϵ_ij,s(𝐫) = ϵ_ij,s^0(𝐫) = Q_ijklP_k,s(𝐫)P_l,s(𝐫) + w_ij^vρ_+oϕ_+,s(𝐫). For a weakly inhomogeneous distribution of the vacancies and the electrons so that the derivative terms in Eqs.  <ref> and  <ref> can be ignored, [μ_+(𝐫,t)/k_B T-μ_±(𝐫,t)/k_B T] = 0 and [μ_-(𝐫,t)/k_B T-μ_±(𝐫,t)/k_B T] = 0 lead to ϕ_+,s(𝐫) = 1/1 + exp[ez_+ψ_s(𝐫)-w_ij^v σ_ij,s(𝐫)/k_B T+ G_+o-G_± o/k_B T][1 + exp[e[ψ_s(𝐫)-{G_-o-G_± o}]/k_B T]] ϕ_-,s^-(𝐫) = 1/1 + exp[-e[ψ_s(𝐫)-{G_-o-G_± o}]/k_B T][1 + exp[-[ez_+ψ_s(𝐫)-w_ij^v σ_ij,s(𝐫)]/k_B T-G_+o-G_± o/k_B T]] where we have used the notation ψ(𝐫,t→∞) = ψ_s(𝐫) and σ_ij(𝐫,t→∞) = σ_ij,s(𝐫). Using these equations, it is clear that the effects of vacancies on the total strain appear via the Vegard strain and leads to a result (valid at the steady state) ϵ_ij,s(𝐫) = Q_ijklP_k,s(𝐫)P_l,s(𝐫) + w_ij^vρ_+o/1 + exp[ez_+ψ_s(𝐫)-w_ij^v σ_ij,s(𝐫)/k_B T+ G_+o-G_± o/k_B T][1 + exp[e[ψ_s(𝐫)-{G_-o-G_± o}]/k_B T]] A similar result without any consideration of the self-energy terms ( i.e., without G_io) was derived in Ref. <cit.>. An additional effect of the vacancies is to affect the local electric field. At the local equilibrium, 𝐏_s(𝐫) = α(𝐫) 𝐄(𝐫), where the prefactor α(𝐫) depends on the specific form of H and can be determined numerically for any functional form of H. As 𝐄(𝐫) = 𝐄_0(𝐫) + 𝐄_1(𝐫), where ∇·[{ϵ_0ϵ_∞ + α(𝐫)}𝐄_0(𝐫)] = 0 and ∇·[{ϵ_0ϵ_∞ + α(𝐫)}𝐄_1(𝐫)] = e z_+ ρ_+oϕ_+,s(𝐫) - eρ_-oϕ_-,s(𝐫). In other words, local electric field has contribution, 𝐄_1, resulting from inhomogeneous distribution of vacancies and oppositely charged carriers. It should be noted that 𝐄_1 = 𝐄_0 in the absence of the vacancies but 𝐄_1≠𝐄_0 in the presence of the charged vacancies. This, in turn, implies that the local polarization 𝐏_s(𝐫) = α(𝐫) [𝐄_0(𝐫) + 𝐄_1(𝐫)] is intimately connected with spatial distribution of vacancies and electrons. In summary, the strain-electrostatic potential loop will be asymmetric with respect to the sign of the electrostatic potential, in qualitative agreements with recent experiments<cit.>. §.§ Coupling between the fast and the slow components of the polarization: topological effects in vacancy-free regime The model developed here is based on the decomposition of the net local polarization into a slowly-varying component, 𝐏 and a fast component, 𝐏_e. A coupling between these two components appear in the form of ϵ_∞ in the model, which affects the electrostatic potential, ψ(𝐫,t). In addition, another coupling appears in Eq.  <ref> in the form of the lattice velocity, 𝐯_l, which can be interpreted in terms of the rate of the change of the net local polarization. Molecular origin of this interpretation is the fact that local displacement of electrons and ions of ferroelectric crystals contribute towards the net polarization, which are treated in the model as 𝐏_e and 𝐏, respectively. This, in turn, leads to the relation 𝐯_l(𝐫,t) ∼∂ (𝐏_e(𝐫,t) + 𝐏(𝐫,t))/∂ t. In the most of the phase field models, 𝐯_l(𝐫,t) is taken to be zero, which implies that 𝐯_l(𝐫,t)≃∂ (𝐏_e(𝐫,t))/∂ t → 0 at the time-scales relevant to the models. This implies that Eq.  <ref> can be written as ∂𝐏(𝐫,t)/∂ t = 𝐏^⋆(𝐫,t), where Eqs.  
<ref> and  <ref> allow us to identify 𝐏^⋆(𝐫,t) = -1/τ_p[δ[∫ d𝐫' H_LGD{𝐏(𝐫',t)} + H_grad{∇𝐏(𝐫',t)}]/δ𝐏(𝐫,t) - 𝐄(𝐫,t))] In general, H_LGD + H_grad can be written in powers of 𝐏 so that H_LGD{𝐏} + H_grad{∇𝐏} = 1/2χ𝐏^2(𝐫,t) + κ_p,1/2[∇·𝐏(𝐫,t)]^2 + κ_p,2/2[∇×𝐏(𝐫,t)]^2 + κ_p,3𝐏(𝐫,t) ·[ ∇×𝐏(𝐫,t)] Eqs.  <ref> and  <ref> lead to 𝐏^⋆(𝐫,t) = -1/τ_p[1/χ𝐏(𝐫,t) - κ_p,1∇[∇·𝐏(𝐫,t)] + κ_p,2∇×[∇×𝐏(𝐫,t)] + 2 κ_p,3[ ∇×𝐏(𝐫,t)] . . - 𝐄(𝐫,t)], Using Eq.  <ref> in the absence of the vacancies and the electrons; and ∇×𝐄(𝐫,t) = 0, ∇·𝐏^⋆(𝐫,t) = -1/τ_p[{1/χ + 1/ϵ_0 ϵ_∞}∇·𝐏(𝐫,t) - κ_p,1∇^2[∇·𝐏(𝐫,t)] ] ∇×𝐏^⋆(𝐫,t) = -1/τ_p[1/χ∇×𝐏(𝐫,t) + κ_p,2∇×(∇×[∇×𝐏(𝐫,t)]) + 2 κ_p,3∇×[∇×𝐏(𝐫,t)] ] Operating with divergence and curl on Eq.  <ref>, and using Eqs.  <ref>,  <ref>, we get ∂[∇·𝐏(𝐫,t)]/∂ t = [κ_p,1/τ_p∇^2 - 1/τ_L] [∇·𝐏(𝐫,t)] ∂[∇×𝐏(𝐫,t)]/∂ t = [κ_p,2/τ_p∇^2 - 2 κ_p,3/τ_p(∇×) - 1/τ_p χ] [∇×𝐏(𝐫,t)] where τ_L = τ_p/{1/χ + 1/ϵ_0 ϵ_∞} is the characteristic time for the change of ∇·𝐏(𝐫,t) and τ_p χ is the chracteristic time for the change of ∇×𝐏(𝐫,t). Note that τ_L/(τ_p χ) = 1/(1 + χ/(ϵ_0 ϵ_∞)) ≪ 1 for χ≫ 1. This means that for t ≫τ_L, ∇·𝐏(𝐫,t) = 0. This allows us to identify relation between the local polarization and the electric field by using Eq.  <ref>. For example, in a steady state, 𝐏^⋆(𝐫,t) = 0, which leads to a relation between the local polarization and the local electric field as 1/χ𝐏_s(𝐫) - κ_p,1∇[∇·𝐏_s(𝐫)] + κ_p,2∇×[∇×𝐏_s(𝐫)] + 2 κ_p,3[ ∇×𝐏_s(𝐫)] = 𝐄_s(𝐫), where we have used the notation 𝐏_s(𝐫) = 𝐏(𝐫,∞) and 𝐄_s(𝐫) = 𝐄(𝐫,∞). For a given volume, solution of Eq.  <ref> depends on the geometry and boundary conditions. A particular set of solutions, which enforces ∇·𝐏_s(𝐫) = 0 everywhere in space including the boundaries, will be discussed here. In particular, ∇×𝐏_s(𝐫) = λ𝐏_s(𝐫), which enforces ∇·𝐏_s(𝐫) = 0 for a constant λ will be discussed here. For such a divergence-free polarization vector, Eq.  <ref> demands [1/χ + κ_p,2λ^2 + 2 κ_p,3λ]𝐏_s(𝐫) = 𝐄_s(𝐫), λ can be obtained by solving for an eigenvalue of the equation ∇×𝐏_s(𝐫) = λ𝐏_s(𝐫) with boundary conditions. Nevertheless, Eq.  <ref> shows that 𝐏_s(𝐫) and 𝐄_s(𝐫) are parallel to each other at the steady state, which may not be true in general. In fact, Eq.  <ref> can be used to determine the time-dependent electric field required for setting up known polarization vector in space and time. In contrast to the above analysis, if we consider 𝐯_l(𝐫,t) = β∂𝐏(𝐫,t)/∂ t, β being a constant, then Eq.  <ref> can be written as ∂𝐏(𝐫,t)/∂ t + β[∂𝐏(𝐫,t)/∂ t·∇]𝐏(𝐫,t) + β/2𝐏(𝐫,t) ×∂[∇×𝐏(𝐫,t)]/∂ t = 𝐏^⋆(𝐫,t), Eq.  <ref> shows that the coupling between the slow and the fast component of the net polarization will affect spatiotemporal distribution of 𝐏(𝐫,t), while the steady state behavior of 𝐏(𝐫,t) ≡𝐏_s(𝐫) remains the same. Eq.  <ref> opens up a way to study effects of confinement on the spatiotemporal distribution of topologically non-trivial polarization, which may not be realized at an equilibrium. § CONCLUSIONS   We developed a thermodynamically consistent time-dependent model for understanding the effects of multivalent vacancies on relations among polarization-electric potential and strain-electric potential in thin films of ferroelectrics. In contrast to the most of the phase field models, non-linear effects of the reaction kinetics leading to generation of charged vacancies and electrons from their pairs are introduced in the model. 
In addition, diffusion and elastic effects of the charged vacancies are considered, which are shown to lead to asymmetric responses of the strain to the electric potential. Furthermore, the model introduces a coupling between the slow and the fast components of the net polarization, which is expected to affect the time-dependent relation between the polarization and the electric field. The impedance response of thin films of ferroelectrics with vacancies should exhibit defect-dipole behavior<cit.>, which will dynamically couple with the polar matrix and give rise to frequency-dependent behavior. In the presence of electrodes, one can also expect electrode polarization<cit.> due to localization of the vacancies near an oppositely charged electrode. This localized, vacancy-induced polarization can be used to extract the diffusion constant of the vacancies. A simplified model<cit.> developed using the Rayleighian approach has been used previously to fit impedance spectra as a function of frequency and temperature for thin films of ionic polymers. We envision that, in the future, experimental impedance spectra from thin films of ferroelectrics with vacancies can be fitted using the model developed here to extract the diffusion constant of the vacancies. The Rayleighian approach used to build the model is shown to be general enough for constructing non-linear reaction kinetics. Application of the model to simulating the motion of domain walls in ferroelectrics in the presence of vacancies will be presented in a forthcoming publication. § ACKNOWLEDGEMENTS This work was supported by the Center for Nanophase Materials Sciences, which is a US DOE, Office of Science User Facility at Oak Ridge National Laboratory. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2407.13466v1
20240718124058
LIMT: Language-Informed Multi-Task Visual World Models
[ "Elie Aljalbout", "Nikolaos Sotirakis", "Patrick van der Smagt", "Maximilian Karl", "Nutan Chen" ]
cs.RO
[ "cs.RO", "cs.LG" ]
LIMT: Language-Informed Multi-Task Visual World Models ^1During this work, all authors were affiliated with the Machine Learning Research Lab at Volkswagen Group, Germany. ^2E.A. is currently with the Robotics and Perception Group, at the Department of Informatics of the University of Zurich (UZH) and the Department of Neuroinformatics at UZH and ETH Zurich, Switzerland. ^3Technical University of Munich, Germany. ^4Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary. ^*Shared first authorship. § ABSTRACT Most recent successes in robot reinforcement learning involve learning a specialized single-task agent. However, robots capable of performing multiple tasks can be much more valuable in real-world applications. Multi-task reinforcement learning can be very challenging due to the increased sample complexity and the potentially conflicting task objectives. Previous work on this topic is dominated by model-free approaches. The latter can be very sample inefficient even when learning specialized single-task agents. In this work, we focus on model-based multi-task reinforcement learning. We propose a method for learning multi-task visual world models, leveraging pre-trained language models to extract semantically meaningful task representations. These representations are used by the world model and policy to reason about task similarity in dynamics and behavior. Our results highlight the benefits of using language-driven task representations for world models and a clear advantage of model-based multi-task learning over the more common model-free paradigm. § INTRODUCTION Reinforcement learning (RL) methods have shown great potential in various robotic control tasks such as manipulation and locomotion <cit.>. The majority of successes in this domain are in single-task settings, where the agent is concerned with finding control policies for a single task. Ideally, a single agent should be capable of performing various tasks and smoothly switching between different task performances. This is especially important when considering the high cost of robotic systems such as manipulators. The goal of multi-task reinforcement learning (MTRL) is to learn such a single policy capable of performing multiple tasks by jointly optimizing the individual task objectives <cit.>. This joint training process can be beneficial, for instance, in terms of bootstrapping the learning of complex tasks <cit.>. However, it can be highly challenging, not only due to the increased complexity of the problem, but also because different tasks can have conflicting objectives leading to unstable training. The majority of research on MTRL considers model-free methods. However, model-free RL methods are considerably sample-inefficient even in single-task learning. This property is undesirable in robot learning systems, where environment interactions are very expensive and hard to obtain in the real world. Alternatively, it is possible to train robotic control policies in simulation and transfer them to the real world. However, this process is not trivial and presents multiple challenges <cit.>. This efficiency problem becomes even more pronounced when dealing with high-dimensional and complex observations, such as the ones encountered in visual RL. The extension to the more complex multi-task setting can further exacerbate these problems.
Model-based reinforcement learning (MBRL) methods tend to have superior sample efficiency compared to the model-free approach <cit.>. This boost in efficiency is achieved by incorporating a model of the environment, which allows the agent to simulate environment interactions for policy search. This capability is crucial for multi-task learning as it enables the agent to effectively share knowledge across tasks, reducing the overall amount of data needed to achieve proficient performance on each task. By learning a shared model that captures the dynamics relevant to multiple tasks, the agent can develop a more holistic understanding of its embodiment and its environment dynamics. In this work, we propose a model-based vision-based method for multi-task learning. Our method incorporates language-conditioning in the world model as well as the actor and critic. These models are conditioned on language embeddings from a pre-trained language model. By doing so, we leverage these semantically meaningful task representations to boost the parameter sharing in the world model and policy. We evaluate the proposed approach using multiple robotic manipulation tasks from the CALVIN dataset <cit.>. We compare our work to baselines from single-task MBRL, and model-free MTRL methods. Our experiments demonstrate successful learning of a multi-task world model and its usage for learning a policy for multiple manipulation tasks. We validate the benefit of using language-embeddings as task representations for the world model and the policy, and demonstrate a substantial performance improvement in comparison to model-free MTRL. § METHOD Previous work has shown the benefits of multi-task policy training for bootstrapping the learning of harder tasks and increasing the individual tasks' sample efficiency <cit.>. We postulate that a similar benefit can be observed for learning (visual) world models for robotics. Intuitively, multiple tasks share similar dynamics and perception components. For instance, a world model trained on opening a drawer has much in common with another trained on opening cupboards. Training a common world model can leverage these task similarities to boost sample efficiency by sharing data across tasks. To allow the model to reason about task similarity, a semantically meaningful task representation can be essential. Hence, we propose a model-based MTRL approach, that trains a language-conditioned world model and actor-critic agent. Our approach consists of four main components. A language model encodes text instructions into structured embeddings. A tokenizer maps observations to discrete token representations. A world model is a sequence model that predicts future observations, rewards, and successes based on past trajectories and task instruction embeddings. The actor-critic networks respectively output control commands and value estimates based on the latent state and embedding of the task description. We refer to our method as LIMT, an acronym for language-informed multi-task visual world models. Our overall approach is illustrated in <Ref>. §.§ Tokenizer Following the work in <cit.>, we use a discrete autoencoder (E, D) which is a variant of Vector Quantized Variational Autoencoder (VQVAE) <cit.>, equipped with attention blocks as proposed in <cit.> and additionally trained with a perceptual loss <cit.> <cit.> <cit.>. 
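A rough, self-contained sketch of the nearest-neighbour codebook lookup at the core of such a discrete autoencoder is shown below. It uses a single codebook for brevity (whereas the paper keeps separate image and proprioception codebooks), and the codebook size, embedding dimension and the straight-through/commitment terms follow generic VQ-VAE practice; they are not the exact architecture or hyperparameters used in this work.

```python
# Toy vector-quantization step: map encoder features to their nearest codebook
# entries and compute the codebook/commitment loss terms.  Dimensions are illustrative.
import torch
import torch.nn.functional as F

N, d_enc = 512, 64                        # vocabulary size, embedding dimension
codebook = torch.nn.Embedding(N, d_enc)   # C = {c^i}, i = 1..N

def quantize(z):                          # z: (K, d_enc) unquantized latent vectors
    # squared Euclidean distance to every codebook entry, then nearest neighbour
    d = (z.pow(2).sum(1, keepdim=True)
         - 2 * z @ codebook.weight.t()
         + codebook.weight.pow(2).sum(1))
    tokens = d.argmin(dim=1)              # discrete token indices w
    q = codebook(tokens)                  # quantized embeddings q(z)
    # straight-through estimator: gradients flow to z, values come from q
    q_st = z + (q - z).detach()
    commit = F.mse_loss(q.detach(), z) + F.mse_loss(q, z.detach())
    return q_st, tokens, commit

z = torch.randn(16 + 1, d_enc, requires_grad=True)  # e.g. a few image + one proprio vector
q_st, tokens, commit = quantize(z)
print(tokens.shape, commit.item())
```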
The reason we chose to use discrete representations is that transformer networks, such as the one underlying our world model, are particularly successful at modeling sequences of discrete tokens <cit.> <cit.>. We further extend it to also handle proprioception data. The encoder E accepts an observation (, ) consisting of an image observation and a d-dimensional proprioception vector and converts these to K = K_ + K_ tokens of dimension d_enc. Specifically, we use K_ tokens to represent the image and K_ tokens to represent proprioception. E maintains two separate codebooks of the same vocabulary size N for the image and proprioception tokens respectively, C_ ={ c_^i}_i=1^N, C_ = {c_^i}_i=1^N, where c_,^i ∈ℝ^d_enc. Concretely, the input of E consists of observations ∈ℝ^H × W × C, ∈ℝ^d, where H,W,C denote the image height, width, and number of channels respectively, and d is the dimension of the proprioception vectors. E passes through a series of convolutional and self-attention layers to obtain features ∈ℝ^h × w × d_enc, where h × w = K_. It then computes a quantized embedding representation according to the nearest neighbor in C_x using the Euclidean distance q_(()_ij) = min_c ∈ C_| ()_ij - c |^2 , i ∈ [h] , j ∈ [w]. E spatially decomposes the image into h × w feature vectors and assigns an element of the codebook to each one of them. Similarly, the proprioception vector is linearly projected to a latent vector z_∈ℝ^d_enc via an affine layer, before being quantized according to the codebook C_ q_() = min_c ∈ C_| - c |^2. The output E(,) = (q_(z_), q_(z_)) of the encoder comprises the latent representation of the observations, which is subsequently used as an input to the other components, namely the world model and the actor-critic. Since the set of possible encoder outputs is discrete, we can define the token representation of an observation (,) as w = (w_, w_), where w_, w_∈{1,..,N}. For training purposes, a decoder D maps these embeddings back into the observation space. For the RGB images, D uses a network consisting of multiple convolutional, self-attention and upsampling layers, while the proprioception data is decoded via a single linear layer. We train our discrete autoencoder using the loss L_A (E,D,,) = - D(E()) _1 + D(E()) - _2^2 + sg(E(,)) - z_,_2^2 + E(,) - sg(z_,) _2^2 + L_perceptual (, D(E())), where sg() is the stop-gradient operator. The right side of (<ref>) denotes the reconstruction loss for images and proprioception respectively, while the terms in (<ref>) constitute the commitment loss, which ensures that the unquantized latent vectors are close to their corresponding discrete representations. (<ref>) is a perceptual loss <cit.> <cit.> shown in equation(<ref>) in the appendix, and computed with a pre-trained VGG16 CNN and given the ground-truth and the reconstructed images as inputs. §.§ Language Model We use a pre-trained version of Sentence-BERT <cit.>, MiniLM-L6-v2 a large language model that has been tuned on semantic similarity, to encode natural language directives. In <cit.>, SBERT embeddings were found to be more suitable for language-conditioned policy learning compared to alternatives such as BERT <cit.> and CLIP <cit.> embeddings. We select this type of language model because it has a semantically meaningful structure of the embedding space, where encoded sentences can be compared using cosine similarity. 
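A minimal sketch of how such instruction embeddings can be computed and compared is given below, assuming the publicly available all-MiniLM-L6-v2 checkpoint of the sentence-transformers library; the example instructions are made-up CALVIN-style strings.

```python
# Encode task instructions with a pre-trained sentence-embedding model and
# compare them by cosine similarity.  Checkpoint name and instructions are
# assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

instructions = [
    "open the drawer",
    "pull the handle to open the drawer",
    "turn on the green light",
]
emb = model.encode(instructions, convert_to_tensor=True, normalize_embeddings=True)

sim = util.cos_sim(emb, emb)   # pairwise cosine similarities
print(sim)
# Semantically equivalent directives (rows 0 and 1) should score close to 1,
# while the unrelated task (row 2) should score noticeably lower.
```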
Our hypothesis is that the embeddings of language instructions describing the same task or tasks with similar dynamics cluster nicely together and away of dissimilar tasks. This is shown in <Ref>. This natural clustering can help our agent reason about similarities among different tasks and thus learn skills and dynamics faster. §.§ Dynamics Learning The world model, denoted as G, is an autoregressive transformer similar to the one in <cit.>. In addition to actions and observation tokens, the transformer is conditioned on language embeddings. Given a trajectory of T timesteps (_t, _t, a_t, ')_t=τ^T, where ' denotes the language instruction, we first compute the sequence (w_t, a_t, )_t=τ^T, where w^k_t ∈{1,..,N}^K is the joint tokenized representation of the observations _t, _t, and is the instruction embedding from our language model. G predicts the next observation tokens ŵ_t+1, the reward r̂_̂t̂ and the episode end d̂_̂t̂ ŵ_t+1∼ p_G (ŵ_t+1| w_≤ t, a_≤ t, ) r̂_̂t̂∼ p_G(r̂_̂t̂| w_≤ t, a_≤ t, ) d̂_̂t̂∼ p_G(d̂_̂t̂| w_≤ t, a_≤ t, ). The transformer predicts the tokens ŵ_t+1 at t+1 autoregressively ŵ_t+1^k+1∼ p_G(ŵ_t+1^k+1| w_t+1^≤ k, w_≤ t, a_≤ t, ). G operates on a context window H ∈ℕ, such that only the last H timesteps influence the predictions. Besides the task-dependent reward and termination condition, the prediction of the next latent state is also conditioned on the language directive. The loss function for G is L_G = ∑_t=τ^τ + H[ ∑_k=1^K w_t+1^k+1log p_G (w_t+1^k+1| w_t+1^≤ k, w_≤ t, a_≤ t, ) ] + ρ r_t - r̂_t _2^2 + d_t log p_G (d̂_t | w_≤ t, a_≤ t, ), where the first term on the right side of the equation is the cross entropy loss between predicted and ground-truth tokens. The second term is the reward prediction. The third is a cross-entropy loss for the termination label. ρ∈ℝ is a hyperparameter for weighing reward loss against the other terms. §.§ Policy Learning The actor and critic networks receive the observation token embeddings as inputs, concatenated with the instruction embeddings, and the predicted proprioception information. They are jointly trained in latent imagination as in Dreamer <cit.>. Given an encoded instruction l and observation token embedding q(z_t_0) at timestep t_0, we start a rollout of length H: For each 0<t<H, the actor outputs an action a_t, based on l, w_t and the world model predicts the next observation embedding q(z_t+1) and reward r_t+1 which result from taking the action a_t. The critic is trained to regress the V^λ estimates, defined in equation (<ref>) in the appendix. To stabilize training, we maintain a target critic v̂_ψ, the weights of which we periodically update with those of our value function v_ψ. v̂_ψ is used to compute the V^λ returns, and the objective of our value function v_ψ is to estimate them. §.§ Training Algorithm We begin our training using the offline episodes, _offline provided by CALVIN. In this work, we concentrate on a subset of tasks 𝒯 = { T_i }_i^N. We gather episodes where one of the tasks in 𝒯 is completed in a separate dataset _filtered, used in the later stages of the training process. First, we train our tokenizer on the image-proprioception pairs of _offline for N_t epochs until we observe convergence. Subsequently, we train our world model on _offline jointly with the tokenizer for another N_w epochs. Given that _offline contains numerous episodes from tasks other than those in 𝒯, we modify each of these episodes during the training of our world model. 
Specifically, we relabel every such episode with a task chosen from 𝒯 with equal probability, adjusting the ground-truth rewards and language instructions accordingly. This increases the number of learning samples for our tasks of interest and provides negative examples, which are otherwise not available in the expert play data. In the next stage, we train the tokenizer and world model for an additional N_f epochs on _filtered. After that, we start performing rollouts using the actor's policy, which we collect in an online buffer _online. During this phase, we jointly train the tokenizer, the world model and the actor-critic on both _online and _filtered. We balance sampling from the two datasets using an adaptive ratio p_online denoting the proportion of samples from _online. p_online starts at 0 and grows along with the size of _online until it reaches a maximum ratio p_max. Within each sampled episode, we select a starting state from which our agent imagines a trajectory uniformly at random across the time dimension. We randomize the task goal by taking the episode's true goal with provability p and randomly sampling another goal from 𝒯 with probability 1-p, using weights w. In this context, specifying a goal task involves selecting a language instruction corresponding to the task as an input to our policy. § EXPERIMENTAL RESULTS We design experiments to answer the following questions: * Can we learn multi-task policies with language-informed world models and MBRL? * Is MBRL more sample efficient than model-free RL in multi-task settings? * Is multi-task training beneficial for MBRL for single and multi-task performance? * Are language embeddings good task representations for learning multi-task world models? §.§ Setup We train and deploy our agent on environment D of the CALVIN benchmark <cit.>. The agent controls a 7-DoF Franka Emika Panda robot, equipped with a parallel gripper. It outputs actions in the form of relative Cartesian positions. The environment consists of a table where the following static objects are present: i) a drawer that can be opened or closed, ii) a slider that can be moved left or right, iii) a button that toggles a green LED light iv) a switch that can be flipped up to control a lightbulb. In addition to the static objects, 3 colored rectangular blocks appear in the scene in different positions and orientations. The agent's state consists of RGB images from a static camera, as well as 7-dimensional proprioception information indicating the position of the end effector and gripper width. We focus and evaluate our model on the following 6 tasks: , , , , , . For each of the 6 tasks, CALVIN <cit.> provides 7-14 distinct but semantically equivalent training language instructions. We leverage the embeddings of those instructions to train our agent. CALVIN also provides a single validation language directive for each task, which is distinct but equivalent to the training instructions. We use the validation directives when evaluating the performance of our agent. During evaluation, we estimate our evaluation metrics, using 20 rollouts per task. We then append the rollouts to our online training dataset _online and continue training as described in <Ref>. §.§ Baselines We compare our approach to the following baselines: * MT-SAC:raw a model-free RL algorithm extending the SAC algorithm <cit.> to multi-task settings. The policies are learned from raw images. 
To ensure a fair comparison, we use the same network architecture used in our method for all policy networks, including the actors, the critics and the targets. * MT-SAC:token extends the raw version to use the latent states obtained from our tokenizer, as well as the language embeddings as inputs to the actor and critic. * MBRL-ST trains separate single-task (ST) world models and policies for each task. * LIMT:nlac is based on our method, but replaces the instruction embeddings with predefined integer task identifiers in the actor-critic networks. nlac refers to having no language in the actor critic. * LIMT:nl is based on our method, but replaces the instruction embeddings with predefined integer task identifiers in the world model as well as in the policy networks. nl refers to having no language in the whole model. For a fair comparison, in the last two baselines, we repeat the task identifier multiple times to ensure it has the same dimension as the language embeddings. §.§ Results Single-task performance. We compare the sample efficiency and success rate of LIMT to the studied baselines. <Ref> shows the success rate of the different methods on individual tasks over training epochs. We compute the average success rate of the policies on individual tasks at different evaluation time steps. LIMT and its variant consistently show better sample efficiency than the baselines. However, none of the baselines achieve a satisfactory success rate on the and tasks. This illustrates the conflicting objectives problem common to MTRL, since these two tasks might be conflicting with and respectively. However, one would expect that conditioning the policy on some kind of task representation input would help alleviate this problem. One explanation is that the embeddings for some tasks such as the tasks can be very conflicting and non-discriminatory as shown in <Ref>. To understand whether a model-based approach is beneficial for multi-task learning, we first compare its performance to model-free baselines based on MT-SAC as described in <Ref>. The latter do not reach a similar success rate as LIMT under the same training budget. To ensure that this comparison is fair, in one variant of this baseline we use the tokenizer from our method as a way to remove the complexity of also learning perception from reward only. Both MT-SAC:token and MT-SAC:raw lag behind LIMT and its variants. Furthermore, we aim to validate whether multi-task training is beneficial for individual task performance in MBRL, as was previously shown for model-free methods <cit.>. Hence, we compare LIMT to the MBRL-ST baseline. The latter does not benefit from any kind of data or model sharing across tasks. For most tasks, LIMT's sample efficiency and success rate are substantially higher. One exception is the task where MBRL-ST shows better sample efficiency than LIMT. However, such a behavior is to be expected since the single-task policy can more easily excel at learning the one task it specializes in. In fact, it is a positive sign for our method that this tendency is only seen in one task. Additionally, we ablate the effect of using the instruction embeddings from the pre-trained language model on LIMT. Replacing the instruction embeddings with integer task identifiers in both the world model and the actor critic (LIMT:nl) significantly deteriorates the performance in all tasks. 
This validates our hypothesis on the importance of using semantically useful task representations for bootstrapping the learning of the dynamics model of similar tasks. When only removing the language embeddings from the actor-critic (LIMT:nlac), we observe that the resulting agent is competitive with the other baselines, but still lags behind the fully language-informed version of LIMT. Multi-task performance. In Table <ref>, we compare the multi-task success rate of the different MTRL methods. We compute this metric by averaging the success rate of each agent in all the tasks at evaluation time. LIMT achieves a success rate higher than the model-free baselines by approximately 30% under the same sample and update budget. Even when not using language instructions at all in LIMT, the multi-task success rate is still higher than the model-free baselines. Additionally, we can clearly observe the benefit of using instruction embeddings as task representation to bootstrap the learning of the multi-task world model and actor-critic. Not using these embeddings in the actor-critic component decreases the success rate by 25% and not using it at all by 28%. These results clearly illustrate the strength of our method for multi-task policy learning. Task Switching. In <Ref>, we illustrate the emerging capability of LIMT agents to switch to performing new tasks during task execution. This behavior can simply be achieved by feeding the policy a different instruction while the agent is performing a given task. The figure illustrates a successful switching between 3 different tasks without having to reset the agent or the environment. We attribute this feature to the relabeling of task data for sharing it across tasks. § RELATED WORK Multi-task reinforcement learning is concerned with learning one policy for multiple tasks. Previous research on this topic highlighted its benefits, such as bootstrapping the learning of more complex tasks <cit.>, but also identified some of its challenges, such as conflicting objectives <cit.>. To address these challenges multiple methods have been proposed. <cit.> proposed distilling a single multi-task policy from multiple individually trained DQN policies. <cit.> demonstrated the first single agent surpassing human performance in the multi-task domain of Atari games <cit.>. The proposed algorithm adapts the contribution of different tasks on the agent's updates in a way that reduces the bias toward specific tasks. <cit.> presented a method for learning multiple manipulation skills using off-policy RL and relabeling shared data across tasks. <cit.> examined the effect of sharing representation across multiple tasks and demonstrated the benefits of that paradigm on improving the multi-task learning and even its positive effect on individual task performance. <cit.> proposed sharing the policy network while separately learning a rerouting mechanism to choose which parts of the network are used for the different tasks. <cit.> proposed learning a latent action plan using conditional autoencoders and learning low-level skills conditioned on such low-level plans from an offline dataset, and a high-level policy outputting such high-level actions. Language in robot learning. Recent work integrated language in robotics for different purposes. For instance, multiple methods have been proposed for using pre-trained language models or vision-language models (VLM) as high-level planners for robotics tasks <cit.>. VLMs have also been used as success detectors <cit.>. 
Other work explored the usage of language models as a way to establish communication in multi-agent robotics tasks <cit.>. <cit.> proposed an approach to learn language-informed latent actions as a way to allow humans to influence policy actions. <cit.> learn an embodied language model integrating sensor measurements and grounding the resulting model with embodiment data. Multiple efforts have been made to design language-conditioned policies for robotics <cit.>. Similarly, other work has leveraged language embeddings for goal-conditioning of robot policies <cit.>. Inspired by previous efforts in model-free MTRL, our work leverages pretrained language models to represent different tasks in multi-task policy learning using a novel model-based approach. § LIMITATIONS LIMT relies on precomputed language embeddings as a task representation on which the policy is conditioned. As discussed in <Ref>, these embeddings can be conflicting for tasks with similar textual descriptions but different dynamics. A possible way to alleviate this would be to use our sequence model for finetuning the task embeddings via contrastive representation learning as done in <cit.>. Furthermore, model-based trajectory generation only happens during training, while at inference time LIMT samples actions from the policy network without directly accessing the world model. While LIMT's sample efficiency still benefits from its world model, this limits the generalization of our policy when encountering out-of-distribution states. In other works <cit.>, trajectory optimization and dynamics learning are more closely coupled. § CONCLUSION We propose a method for learning language-informed multi-task visual world models. We use a pre-trained language model to embed task instructions into a semantically meaningful latent space and use these embeddings as task representations. We then use these representations as inputs to the actor, critic and dynamics models as a meaningful task discriminator. We train all these components using environment interactions in multiple tasks. Our experiments demonstrate the benefit of model-based training for obtaining multi-task policies, as well as the importance of language conditioning as a way to learn world models and policies in multi-task settings. Namely, our method substantially outperforms the model-free multi-task baselines demonstrating the benefit of model-based learning for learning multi-task policies. In addition, we show that multi-task training can lead to a higher success rate than single-task training in the model-based setting and under the same data budget. Given the high sample complexity of multi-task policy learning, our results provide a promising path towards more efficient learning of such policies based on language-conditioned world models. § MODELS AND HYPERPARAMETERS §.§ Tokenizer The tokenizer is a discrete autoencoder that converts image-proprioception pairs into tokenized representations, similar to the one proposed in <cit.>. We downsample the images to a size of 64× 64 before feeding them to the network. The tokenizer converts observations into K = K_x + K_θ tokens of dimension d_enc using two separate codebooks for images and proprioception, C_x and C_θ respectively, of vocabulary size N. The hyperparameters of the tokenizer are listed in table <ref>. §.§ World Model Our world model is a transformer based on the implementation of IRIS <cit.>. 
It accepts a sequence of H(K + 2) tensors, including HK tensors of tokenized observations and 2H tensors containing the actions and embedded instructions a_t, l for each timestep 0 ≤ t < H. Using an embedding table of size N × D for the tokens and linear projections for the actions and instructions, we obtain a H(K+2) × D tensor which is then passed through L GPT2-like <cit.> transformer blocks. To predict rewards, episode ends, and observation embeddings, 3 MLP heads follow the transformer blocks. The world model's hyperparameters are listed in table <ref>. §.§ Actor Critic We implement both the actor and critic networks described in <ref> as n-layer MLPs with skip connections of stride 2. At training time, during imagination rollouts, their input is the tokenized representation predicted by the world model, as well as the predicted raw proprioception vectors θ̂. As their dimension is quite small compared to the rest of the inputs (7-dimensional pose) we repeat θ̂ b times before passing it to the MLPs. At inference time, the policy networks receive the tokenized environment observations along with the raw proprioception information as described above. To stabilize training, we maintain a target critic for computing the lambda returns V_λ, which we periodically update with the weights of our training critic. Table <ref> lists the hyperparameters for both networks. § COMPUTATIONAL RESOURCES We implement LIMT and all baseline models in Python using PyTorch 2.0.1 <cit.>. The environments of CALVIN use PyBullet for physics simulation. Our experiments run on a GPU cluster managed by the ClearML platform[clear.ml], utilizing different GPU models, including NVIDIA A100, NVIDIA V100, and NVIDIA RTX. § TRAINING DETAILS §.§ Policy learning The critic network v_ψ regresses the value estimates V_λ for a horizon length H V^λ_t := r_t + γ_t (1 - λ) v_ψ(ŝ_t+1)+ λ V^λ_t+1, t < H v_ψ (ŝ_H), t=H. The loss function of the actor network with parameter ϕ is L_ϕ = - ∑_t=τ^T V^λ_t - ηℋ(π(a_t | w, l, θ)), where the last term is an entropy objective to encourage exploration. w is the tokenized observation and θ the end effector's pose. The critic's objective is given by min_ψ𝔼_q_θ, a_ϕ[ ∑_τ = t^t+H v_ψ (s_τ) - V_λ(s_τ) ^2 ]. §.§ Perceptual Loss The perceptual loss introduced in <Ref> is similar to the one proposed by <cit.>. Given a pre-trained VGG-16 CNN <cit.>, we select a subset of layers M. For each layer j ∈ M, let ϕ_j(x) be the activation of the j-th layer, in our case a feature map with dimension C_j × H_j × W_j. We compute the loss between a ground-truth image x and a reconstructed image x' as L_perceptual(x,x') = ∑_j ∈ M1/C_j H_j W_j A_j || ϕ_j(x) - ϕ_j(x') ||^2, where A_j are learned affine transformations, implemented as 1x1 convolutions. §.§ Online training Once the training of our actor-critic begins, we start performing a total of n_rollout policy rollouts of T timesteps at each epoch. We perform the same amount of n_rollout/6 rollouts for each task. We append these online episodes to our dataset _online. We limit the size of _online to a maximum of n_max episodes. At every epoch we compute the online sampling ratio p_online, described in <ref> as p_online = p_max|_online|/n_max, where we set p_max as a maximum ratio. When sampling from D_online, as done in <cit.>, we prioritize later episodes: we divide D_online into 4 quarters and we sample 50% of our episodes from the last quarter, and 25% from the 3rd quarter. The rest of the episodes is uniformly sampled from the first half. 
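A minimal sketch of this adaptive online/offline mixing and the recency-prioritized episode sampling is given below; the buffer contents and the values of p_max and n_max are placeholders, not the settings listed in the hyperparameter tables.

```python
# Sketch of the adaptive online sampling ratio p_online = p_max * |D_online| / n_max
# and the recency-prioritized sampling (50% last quarter, 25% third quarter,
# 25% uniformly from the first half).  Buffers and constants are dummies.
import random

p_max, n_max = 0.8, 1000   # illustrative values

def online_ratio(online_buffer):
    # buffer size is capped at n_max, so the ratio grows until it reaches p_max
    return p_max * min(len(online_buffer), n_max) / n_max

def sample_online_episode(online_buffer):
    n = len(online_buffer)
    q = n // 4
    r = random.random()
    if r < 0.50:                   # 50% from the most recent quarter
        lo, hi = 3 * q, n
    elif r < 0.75:                 # 25% from the third quarter
        lo, hi = 2 * q, 3 * q
    else:                          # remaining 25% uniformly from the first half
        lo, hi = 0, 2 * q
    return online_buffer[random.randrange(lo, hi)]

def sample_episode(online_buffer, offline_buffer):
    if random.random() < online_ratio(online_buffer):
        return sample_online_episode(online_buffer)
    return random.choice(offline_buffer)

online = [f"online_ep_{i}" for i in range(40)]
offline = [f"offline_ep_{i}" for i in range(200)]
print([sample_episode(online, offline) for _ in range(5)])
```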
All training hyperparameters described above, as well as in section <ref> can be found under table <ref>. Our overall training process is further detailed in algorithm <ref>. §.§ Reward Functions The reward functions we specify for our tasks of interest are divided into two classes: The first one consists of tasks the completion of which can adequately be described by a boolean variable. For example, turning on/off a lightbulb. In these types of tasks, the robot arm has to reach and manipulate an object (e.g a button) to bring about some change in the environment (e.g lightbulb on). A general formula of this type of reward is given in equation <ref>. R_b(θ, g, s, l) = 1 - (θ - g) ⊙ f_s _2 + 10𝕀[success(s,l)] Here, θ contains is the proprioception vector and g is the desired pose of the end effector to reach an object of interest, such a buttons or a switch. The first term is therefore a distance-based reward. This is employed to guide the agent to the vicinity of the object and avoid relying on sparse rewards, which can slow down the learning process. We multiply the difference (θ - g) by a scaling vector f_s to prioritize some proprioception dimensions over others. In our case, end effector orientation is discounted relative to end effector position. The intuition behind this is that, for completing the tasks, it is more important for the EE to reach the specified position of the object of interest, although the orientation also matters to a certain extent. The reason for the latter is that, by constraining the range of possible orientations, we make it easier for the agent to learn suitable actions, while excluding some of the orientations that would make the manipulation of the object significantly harder. The second term is an indicator variable indicating completion of the task, depending on the environment's state s ∈ S and the instruction l. We multiply it by a constant β_b ∈ℝ, in our case 10, to amplify the learning signal for successes. The second class of reward functions are designed for tasks the success of which can best be described as a continuous, time-dependent variable. For example, to define success for the task of opening a drawer, we need information about its (continuous) position in the previous time steps. In these cases, the reward function is defined as follows: R_c(p_t, g, s_t, s_t-1, l, s_g) = 1 - || (p_t - g) ⊙ f_s ||_2 + β sign(s_g(l) - s_t-1) · (s_t - s_t-1) where θ_t and g are the EE's actual and desired pose, s_t , s_t-1 are environment states for timesteps t, t-1 and s_g(l) is the desired environment state, determined by the instruction l. In the example of opening a drawer ,s_g would corrrespond to a fully opened drawer. As above, the first term is a dense distance-based reward. The second term measures progress towards completion of the task during the last timestep and is weighed by a scalar β_c ∈ℝ to amplify the learning signal. As the difference (s_t - s_t-1), can be quite small, we use β_c = 50. § IMAGINED TRAJECTORIES To qualitatively assess our world model, we examine some imagined trajectories generated during policy training. <Ref> illustrates two trajectories of length 8 for the tasks (top) and (bottom). As our agent is trained in latent imagination and not directly on image data, we map the latent outputs to images using the decoder D of our tokenizer for visualization purposes. 
As can be seen in the figure, our world model successfully captures task-relevant visual features, like the yellow lightbulb lighting up and the drawer transitioning from an opened to a closed position.
http://arxiv.org/abs/2407.12600v1
20240717142729
Finite Element-based Nonlinear Dynamic Optimization of Nanomechanical Resonators
[ "Zichao Li", "Farbod Alijani", "Ali Sarafraz", "Minxing Xu", "Richard A. Norte", "Alejandro M. Aragon", "Peter G. Steeneken" ]
physics.app-ph
[ "physics.app-ph", "cond-mat.mes-hall" ]
[]z.Li-16@tudelft.nl Faculty of Mechanical Engineering, Department of Precision and Microsystems Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands Faculty of Mechanical Engineering, Department of Precision and Microsystems Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands Faculty of Mechanical Engineering, Department of Precision and Microsystems Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands Kavli Institute of Nanoscience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands Faculty of Mechanical Engineering, Department of Precision and Microsystems Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands Kavli Institute of Nanoscience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands Faculty of Mechanical Engineering, Department of Precision and Microsystems Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands []p.g.steeneken@tudelft.nl Faculty of Mechanical Engineering, Department of Precision and Microsystems Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands Kavli Institute of Nanoscience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands § ABSTRACT Nonlinear dynamic simulations of mechanical resonators have been facilitated by the advent of computational techniques that generate nonlinear reduced order models (ROMs) using the finite element (FE) method. However, designing devices with specific nonlinear characteristics remains inefficient since it requires manual adjustment of the design parameters and can result in suboptimal designs. Here, we integrate an FE-based nonlinear ROM technique with a derivative-free optimization algorithm to enable the design of nonlinear mechanical resonators. The resulting methodology is used to optimize the support design of high-stress nanomechanical Si3N4 string resonators, in the presence of conflicting objectives such as simultaneous enhancement of Q-factor and nonlinear Duffing constant. To that end, we generate Pareto frontiers that highlight the trade-offs between optimization objectives and validate the results both numerically and experimentally. To further demonstrate the capability of multi-objective optimization for practical design challenges, we simultaneously optimize the design of nanoresonators for three key figure-of-merits in resonant sensing: power consumption, sensitivity and response time. The presented methodology can facilitate and accelerate designing (nano)mechanical resonators with optimized performance for a wide variety of applications. Finite Element-based Nonlinear Dynamic Optimization of Nanomechanical Resonators Peter G. Steeneken July 22, 2024 ================================================================================ § INTRODUCTION Design of mechanical structures that move or vibrate in a predictable and desirable manner is a central challenge in many engineering disciplines. This task becomes more complicated when these structures experience large-amplitude vibrations, since linear analysis methods fail and nonlinear effects need to be accounted for. This is particularly important at the nanoscale, where forces on the order of only a few pN can already yield a wealth of nonlinear dynamic phenomena worth exploiting <cit.>. 
Although design optimization of micro and nanomechanical resonators in the linear regime is well-established <cit.>, the use of design optimization for engineering nonlinear resonances has received less attention <cit.>. This is because designers tend to avoid the nonlinear regime, and optimizing structures' nonlinear dynamics is more complex, which requires extensive computational resources. As a result, available literature on nonlinear dynamic optimization is limited, although some recent advances have been made that combine analytical methods with gradient-based shape optimization, to optimize nonlinearities in micro beams <cit.>. For nonlinear modeling of more complex structures, several approaches have been developed based on nonlinear reduced order modeling (ROM) of finite element (FE) simulations <cit.>. A particularly attractive class known as STEP (STiffness Evaluation Procedure)  <cit.> can determine nonlinear coefficients of an arbitrary mechanical structure and can be implemented in virtually any commercial finite element method (FEM) package. This, for instance, has been recently shown by using COMSOL to model the nonlinear dynamics of high-stress Si3N4 string <cit.> as well as graphene nanoresonators <cit.>. Since the number of degrees of freedom in the ROM is much smaller than that in the full FE model, the nonlinear dynamics of the structure can be simulated much more rapidly using numerical continuation packages <cit.>. In this work, we present a route for nonlinear dynamic optimization that is based on an FE-based ROM. The methodology, which is a combination of Particle Swarm Optimization (PSO) with STEP <cit.> (OPTSTEP), has several beneficial features. First of all, because it uses a derivative-free optimization routine for approaching the optimal design, it can be implemented and combined with FEM packages that are not able to obtain gradients easily. Secondly, the ROM parameters generated in OPTSTEP can facilitate explicitly expressing the optimization goals. Finally, as will be shown, the developed procedure allows using multiple objective functions to approximate a Pareto front, which can help designers in decision-making processes when having to balance performance trade-offs among different objectives. Considering the outstanding performance as ultrasensitive mechanical detectors and the mature fabrication procedure <cit.>, we select high-stress Si3N4 for the experimental validation of our methodology. The manuscript is structured as follows. We first introduce and describe the general OPTSTEP methodology. Then we demonstrate the method on the specific challenge of the optimization of the support structure for a high-stress Si3N4 nano string, while taking the maximization of its Q-factor and nonlinear Duffing constant β as examples of linear and nonlinear objectives. By comparing the PSO results to the Q and β values that result from a brute-force simulation of a large number of designs that span the design space, we validate that OPTSTEP finds the optimum designs much faster with the same computational resources. Subsequently, we turn to the problem of dealing with multiple objective functions and focus on simultaneously maximizing both Q and β, demonstrated by a Pareto front. For validation, the results are compared to experimental measurements of fabricated devices. 
We conclude by demonstrating the potential of OPTSTEP for optimizing the performance of resonant sensors by using more complex objective functions that are relevant for engineering their response time, sensitivity, and power consumption. § OPTSTEP METHODOLOGY An overview of the OPTSTEP method is schematically shown in Fig. <ref>. In the current work, we use it for engineering a parameterized geometry. We use nanomechanical string resonators with compliant supports, which is shown in Fig. <ref>a, to demonstrate the methodology. We keep the length L and width w of the central string constant, while varying the width w_ s, length L_ s and angle θ of the supports, as well as the thickness h of the device. It is noted that the OPTSTEP methodology might be used with a larger number of parameters, or even might be extended towards shape or topology optimization of nonlinear dynamic structures. However, such extension is out of the scope of the current work. For a certain set of geometrical parameters, a ROM for the parameterized structure is generated using the STEP method <cit.>, which we implemented with shell elements in COMSOL <cit.>. Besides geometric parameters and boundary conditions (see Fig. <ref>a), the COMSOL simulation contains material parameters (see Methods), and the initial pre-stress distribution is calculated using a static analysis <cit.>. We conduct this static analysis assuming the material is isotropic and pre-stressed (σ_0=[]1.06). We then calculate the stress redistribution during the sacrificial layer underetching process, whereby the high-stress Si3N4 layer releases from the silicon substrate. Note that in the present study we only consider θ≥ 0, such that the central string is always in tension (in contrast to Ref. ). After the static analysis, an eigenfrequency analysis is performed to obtain the out-of-plane eigenmodes ϕ_i (see Fig. <ref>b). These eigenmodes, together with the redistributed stress field obtained from the static analysis, are then used to determine the Q-factor, resonance frequency f_0, and effective mass m_eff, following the procedure outlined in Ref. . As indicated in Fig. <ref>b the STEP method generates a set of coupled nonlinear differential equations<cit.>, where the effective nonlinear elastic force acting on the ith mode is given by the function γ^(i) that depends on the quadratic a_ij, cubic b_ijk coupling coefficients, and the generalized coordinates q_i. q_i describes the instantaneous contribution of the corresponding mode shapes ϕ_i to the deflection of the structure. Thus, the finite element model with several thousand or even millions of degrees of freedom (DOFs) is reduced to a condensed ROM, that can usually describe the nonlinear dynamics to a good approximation with less than ten degrees of freedom. We can visualize the resulting frequency response curves for different harmonic drive levels by numerical continuation <cit.>, as shown in Fig. <ref>c. The resulting ROM parameters, including effective mass m_ eff^(i), Q-factor, linear stiffness k^(i)=m_ eff^(i) (2 π f^(i))^2 and nonlinear stiffness terms a_jk, b_jkl, are passed to the PSO optimizer (see Fig. <ref>d). The algorithm randomly generates many different initial designs by varying the geometric parameters, as shown in Fig. <ref>a. For each of these designs, known as a 'particle' in PSO, a ROM is generated by STEP and the corresponding objective functions are computed accordingly and passed to the optimizer. 
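A schematic version of such a derivative-free particle-swarm loop is sketched below. The STEP-based ROM generation and objective evaluation are replaced by a placeholder function, and the inertia and acceleration coefficients are generic textbook values rather than the settings used in this work; only the design-variable bounds follow the constraints quoted for the single-objective study.

```python
# Generic single-objective PSO loop; evaluate_design() stands in for the STEP-based
# ROM generation and objective evaluation (e.g. returning -Q or -beta so that
# minimization maximizes the figure of merit).
import numpy as np

rng = np.random.default_rng(0)
# design variables: [L_s (um), w_s (um), theta (rad), h (nm)]
lb = np.array([10.0, 1.0, 0.0, 40.0])
ub = np.array([100.0, 7.0, 0.4, 340.0])

def evaluate_design(x):                  # placeholder objective (assumption)
    return np.sum((x - 0.5 * (lb + ub)) ** 2 / (ub - lb) ** 2)

n_particles, n_gen = 10, 30
w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients

x = rng.uniform(lb, ub, size=(n_particles, lb.size))   # particle positions (designs)
v = np.zeros_like(x)                                   # particle velocities
pbest = x.copy()
pbest_f = np.array([evaluate_design(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for gen in range(n_gen):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lb, ub)                         # enforce design constraints
    f = np.array([evaluate_design(p) for p in x])      # independent -> parallelizable
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best design:", gbest, "objective:", pbest_f.min())
```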
The optimizer then generates a next generation of particles based on the designs from the current generation, the objective functions, and the constraints, with the aim of improving their design parameters to optimize the objectives (see Supplementary Note 1). The optimization loop will iterate until it reaches the predefined maximum generation. If multiple objective functions are selected to be optimized, there is an additional step that selects the nondominated particles according to Pareto dominance <cit.>. Because each particle is evaluated independently, PSO enables efficient parallel computing to evaluate all particles in one generation on a high-performance computing cluster. § OPTSTEP IMPLEMENTATION AND VALIDATION §.§ Single objective optimization with OPTSTEP We implement the presented OPTSTEP methodology to optimize the support geometry of the string resonator shown in Fig. <ref>a. The motion of the fundamental mode of the resonator can be described with the following nonlinear equation of motion: q̈ + 2 π f_0/Qq̇ + (2 π f_0)^2 q + β q^3 = F_ excsin(2 π f t), equation0 where q is the displacement at the string center, f_0 is the resonance frequency, Q is the Q-factor, β=b_111/m_ eff is the mass-normalized Duffing constant, and F_ excsin(2 π f t) is the mass-normalized harmonic drive force. We present results of the OPTSTEP methodology for two optimization objectives, respectively: maximizing the Q-factor (shown in Fig. <ref>a,c,d) or maximizing the mass-normalized Duffing constant β (shown in Fig. <ref>b,e,f) of the fundamental mode. As design parameters, we use the support parameters (L_ s, w_ s, θ and h in Fig. <ref>a). The PSO algorithm can freely initialize and vary these variables between preset constraints 10 <L_ s<100, 1 <w_ s<7, 0rad< θ<0.4rad, and 40 <h<340. We initialize the PSO algorithm with 10 randomly generated particles, as indicated by the blue circles at the first generation in Fig. <ref>a-b. The Q and β values of the best performing particle per generation are highlighted by the red line, which converges towards an optimum. Simulated response curves at different drive levels of the initial design (median performance of the initialized particles) and the optimized design are shown in Fig. <ref>c, d for Q and Fig. <ref>e, f for β. It is obvious that the resonance peaks become narrower from Fig. <ref>c to Fig. <ref>d, indicative of an increase in Q-factor. From the backbone curves shown in Fig. <ref>e, f, we see that the resonance frequency of the optimized device shifts more at the same vibration amplitude, which suggests a larger, optimized value of β. §.§ Numerical validation In order to validate the PSO results, we compare them to a brute-force parametric study where we simulate a large number of designs that span the full design parameter space, and plot the resulting values of Q and β in the contour plots in Fig. <ref>g, h. Each of these subfigures consists of 16 small contour plots, each of which has a different combination of L_ s and h, while along the axes the parameters w_ s and θ are varied. The red-colored regions in the plots contain the optimal values of Q and β, which are indicated by a triangle and a star. In Supplementary Table S1, we compare the optimized design parameters from the OPTSTEP method to the best devices from the parametric study. The close agreement between both approaches provides evidence that the OPTSTEP method is able to optimize both linear (Q) and nonlinear (β) parameters of the ROM. The results in Fig. 
2a are obtained in 30 minutes using a high performance computing cluster, while the parametric study in Fig. <ref>g takes over 325 hours on the same cluster with the same amount of nodes. This illustrates the advantage in computation time that can be realized with OPTSTEP, although it is noted that these times strongly depends on the resolution of the parameter grid and other simulation parameters. §.§ Experimental characterization To compare the OPTSTEP method to experimental results, we also perform an experimental parametric study on 15 string resonators with varying support design parameters. For this we fabricated a set of devices with 10<L_ s<90 and 0rad<θ<0.2rad, while keeping h=340 and w_ s=1.0 fixed. Fig. <ref>a shows a Scanning Electron Microscope (SEM) image of an array of nanomechanical resonators with varying support designs made of high-stress Si3N4 (see “Methods” for more details). To characterize the nonlinear dynamics of the devices, as shown in Fig. <ref>b, we fix the chip to a piezo actuator that drives the resonator by an out-of-plane harmonic base actuation in the out-of-plane direction. We use a Zurich Instruments HF2LI lock-in amplifier, connected to an MSA400 Polytec Laser Doppler Vibrometer, to measure the out-of-plane velocity at the center of the string resonator as a function of driving frequency (see Fig. <ref>c). We use a velocity decoder with a calibration factor of 200 mm/s/V. We perform all measurements in a vacuum chamber with a pressure below 2e-6 at room temperature. Fig. <ref>c shows the frequency response at the center of the string at various drive levels for a device with L_ s = []90, w_ s = []1, θ=0.20rad and h = []340. We estimate the linear resonator parameters of all devices by fitting the measured frequency response curves at various drive levels with the following harmonic oscillator function<cit.>: q_ d = q_ max/Q /√([ 1- ( f/f_0 )^2 ]^2+f^2/(f_0 Q)^2), where q_ d is the measured amplitude, q_ max is the value of q_ d when driving at the natural resonance frequency f=f_0, and f is the drive frequency. To determine the nonlinear stiffness, we measure the resonator's frequency response at increasing drive levels, construct the backbone curve, and use the relation between the peak amplitude q_max and the peak frequency f_ max to estimate the mass-normalized Duffing constant β from<cit.>: f_ max^2 = f_ 0^2 + 3/16 π^2β q_ max^2. To compensate for small drifts in f_0 during the experiments, before fitting with Eq. (<ref>), we shift and align the frequency response curves to match their f_0 values<cit.>. In Fig. <ref>d-f, we compare the dynamical properties between FE-based ROMs (dots) and measurements on 15 string resonators (diamonds) as a function of L_ s and θ. It is evident that the fundamental resonance frequency f_0, Q-factor, and the mass-normalized Duffing constant β of the fabricated devices, are all well predicted by FE-based ROMs. It can also be seen that for short support lengths L_ s the device performance is similar, whereas increasing L_ s allows tuning f_0, Q and β as we studied in more detail earlier<cit.>. In the next section we will compare these experimental results to multi-objective optimization as further validation of OPTSTEP. §.§ Multi-objective optimization with OPTSTEP For actual device design there are often multiple performance specifications that need to be met. 
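Before turning to the multi-objective case, we note that the Duffing-constant extraction described in the experimental section above reduces to a linear fit of f_max^2 against q_max^2. A minimal sketch is given below; the synthetic backbone points and parameter values are assumptions introduced purely for illustration.

import numpy as np

def estimate_duffing_beta(f_max, q_max):
    """Estimate (f_0, beta) from backbone points via
    f_max^2 = f_0^2 + 3/(16 pi^2) * beta * q_max^2, which is linear in q_max^2."""
    f_max, q_max = np.asarray(f_max, float), np.asarray(q_max, float)
    slope, intercept = np.polyfit(q_max**2, f_max**2, deg=1)
    return np.sqrt(intercept), 16.0 * np.pi**2 * slope / 3.0

# Synthetic backbone data (q_max in m, f_max in Hz); values are illustrative only.
q_max = np.array([5e-9, 10e-9, 20e-9, 40e-9])
f_true, beta_true = 2.5e5, 1.0e28
f_max = np.sqrt(f_true**2 + 3.0 / (16.0 * np.pi**2) * beta_true * q_max**2)
print(estimate_duffing_beta(f_max, q_max))   # recovers (2.5e5, 1.0e28) up to numerics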
It might sometimes be possible to condense these performance specifications into a single figure of merit, like the f_0× Q product for nanomechanical resonators. However, to make the best design decisions, it is preferred that the optimizer works with two (or more) objective functions like enhancing f_0 and Q, simultaneously. To enable this, we implement OPTSTEP with a multi-objective particle swarm optimization (MOPSO), which is an extension of single-objective PSO. After multi-objective optimization, the nondominated particles in the swarm are used to determine an approximation of the Pareto front, which is the set of designs for which improving one of the objectives will always lead to a deterioration of the other objective(s). By performing MOPSO, we aim at finding the Pareto front in the design space for multiple objectives, that represents the boundary on which all optimized designs reside for the chosen variables. As the red dots show in Fig. <ref>d illustrate, the Pareto front represents the boundary between feasible and unfeasible combinations of objectives and thus allows the designer to make the best trade-off among different objectives. To demonstrate that multi-objective optimization can be combined with OPTSTEP, we use it to simultaneously maximize Q and β. Devices with high quality factor and nonlinear stiffness can be of interest in cases where we are looking for designs that can drive a string into the nonlinear regime with a minimum driving force and power consumption. The resulting Pareto fronts are shown in Fig. <ref>a. Since we are also interested in the effect of the constraints on the optimum solutions, we include Pareto fronts with: no constraint (purple), a thickness constraint of h = []340 (grey), and with thickness and support width constraint (multi-coloured). These 3 Pareto fronts show that there is a clear trade-off between Q and β, with higher Q-factor leading to lower nonlinearity β. The experimental devices share the same constraints (w_ s = []1 and h = []340) as the multi-colored Pareto and are plotted as the hollow diamonds with error bars in Fig. <ref>a (see Supplementary Table 2). We observe that all experimental points reside in the region on the left hand side of the Pareto front, confirming the area enclosed by the Pareto front indeed captures the feasible devices, and experimentally strengthening the confidence in the OPTSTEP approach for multi-objective designs. The color of the points links the points in the Q-β graph in Fig. <ref>a to the corresponding design parameters in Fig. <ref>b. In Fig. <ref>b the schematic support geometries are shown as insets for both maximum β (dark blue) and maximum Q (dark red). We choose some of the fabricated devices close to the Pareto front to show typical measured frequency response curves and microscopic images in Fig. <ref>c-f, which correspond to the star, triangle, circle and square data markers in Fig. <ref>a and b. Together with the microscopic images, it is apparent that with minor alterations in the support region, the response of the string resonators can be largely tuned. To further explore the effect of other design parameters numerically, we release the constraint on w_ s, keeping only h=340 constrained, and conduct MOPSO (see the grey Pareto front). We can see from the comparison between the grey and multicolored fronts that the performance gain from changing w_ s is not very large. In contrast, if we further relax the constraint on h = []340, which shares the same design space in Fig. 
<ref>g-h, we obtain the purple Pareto front. The thinner h pushes the Pareto front to have much higher Q. The long plateau at fixed β is mainly attributed to the increase in Q that results from the dependence of the intrinsic quality factor Q_0 on h (see Methods). Besides validating the MOPSO approach by comparing with experimental data, we also use the data from the parametric study in Fig. <ref> to extract and generate reference Pareto fronts that are shown as black solid, dotted, and dashed lines in Fig. <ref>a (see Supplementary Note 2), with constraints that match those from the MOPSO optimization. § DISCUSSION The OPTSTEP methodology that is presented in this work enables the optimization of the nonlinear dynamic properties of resonant structures using standard FEM software, since it is based on the STEP and uses a derivative-free optimization method. We note that although derivative-free techniques like PSO are able to efficiently find near-optimal values of design parameters, optimality guarantees can typically not be given, and the techniques are therefore also called metaheuristic optimization techniques. Here, in order to validate the OPTSTEP methodology numerically and experimentally, we have focused on β and Q maximization of the fundamental mode of a string resonator by geometric support design. After having established the methodology, it is now of interest to apply it to explore performance parameters that are more relevant to applications. For example, as shown in Fig. <ref>, our methodology can directly be extended to optimize the power consumption P, sensitivity δ f / f_0 and response time τ of resonant sensors <cit.>, since these figure-of-merits can be directly expressed in terms of m_ eff, f_0, Q and β (see Supplementary Note 3). In Fig. <ref>, 1000 nondominated particles are found by OPTSTEP to form a 3D surface that approaches the Pareto frontier with the objective of minimizing P, δ f / f_0 and τ simultaneously. The particles have the same design constraints as in the example in Fig.2 and the purple Pareto front in Fig. <ref>a, which are 10<L_ s<90 and 0rad<θ<0.2rad and membrane thickness 40 <h<340. The competing design trade-offs between these three objective functions are obtained from OPTSTEP, and are visualized in Fig. <ref> by showing five typical designs near the Pareto frontier. As demonstrated by the designs at the upper right corner of the Pareto frontier, we can conclude that the devices with shorter response time are more likely to have thicker supports, which lead to a higher resonance frequency f_0 combined with a low Q, thus resulting in a smaller Q / f_0 ratio. At the same time, these thicker supports also contribute to a larger onset of nonlinearity a_ 1dB <cit.>, so the resonators are able to work at much larger amplitudes in the linear regime, which provides a better sensitivity δ f / f_0. However, the larger a_ 1dB and m_ eff will require more energy to sustain the oscillation at resonance that causes higher power consumption P. In contrast, the devices with higher sensitivity δ f / f_0, which are shown at the lower left corner in Fig. <ref>, are equipped with more slender supports. With only a slight increase of support angle θ from 0, the low torsional stiffness of supports is maintained while the stress in the central string can be significantly increased <cit.>, leading to a higher Q, which can be confirmed by Fig. <ref>g. 
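For reference, the nondominated filtering that underlies these Pareto fronts can be sketched as follows. The routine is a generic illustration, not the MOPSO implementation used here; it assumes all objectives are to be minimized, as in the (P, delta f / f_0, tau) example, and maximized quantities such as Q or beta can simply be negated before filtering.

import numpy as np

def nondominated_mask(objectives):
    """Boolean mask of Pareto-optimal rows of an (n_particles, n_objectives)
    array, assuming every objective is to be minimized."""
    obj = np.asarray(objectives, float)
    keep = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        # another particle dominates i if it is no worse in every objective
        # and strictly better in at least one
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

# e.g. pareto_designs = designs[nondominated_mask(np.column_stack([P, df_over_f, tau]))]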
Consequently, when aiming at designing a resonant sensor with relatively low power consumption P, high sensitivity δ f / f_0 and short response time τ with compliant supports, a pair of slender and slightly angled supports, together with a medium thickness of Si3N4 layer is generally favored. In other cases, like approaching the quantum regime with a nonlinear nanomechanical resonator <cit.>, it is beneficial to maximize Q and β simultaneously. The OPTSTEP methodology can also be used for more complex design problems that involve multiple modes <cit.>, for avoiding or taking advantage of mode coupling, for instance by optimizing nonlinear coupling coefficients (a_jk and b_jkl in Fig. <ref>b) and resonance frequency ratios. Since OPTSTEP generates the ROM parameters at each generation, it is particularly suited for dealing with cases where the device specifications can be expressed in terms of these parameters. Interesting challenges include increasing frequency stability by coherent energy transfer <cit.>, signal amplification <cit.> and stochastic sensing <cit.>. Moreover, intriguing paths for further research involve inclusion of nonlinear damping or extension to full topology optimization <cit.>. Also the use of alternative optimization strategies, like binary particle swarm optimization (BPSO) <cit.>, that could generate radically new geometries, is an interesting direction. § CONCLUSIONS To sum up, we presented a methodology (OPTSTEP) for optimizing the nonlinear dynamics of mechanical structures by combining an FE-based ROM method with a derivative-free optimization technique (PSO). We demonstrated and validated the methodology by optimizing the support design of high-stress Si3N4 nanomechanical resonators. The method was verified numerically by comparing its results to a brute-force parametric study, for both single- and multi-objective optimization. Experimental data on the Q-factor and Duffing nonlinearity were in correspondence with the OPTSTEP results. The capability of the method was also demonstrated by multi-objective optimization of the support for the nanomechanical resonator, targeting improvements in power consumption, sensitivity and response time in resonant sensing. We thus concluded that the method can be applied to a wide range of complex design challenges including nonlinear dynamics, and is expected to be compatible to most FE codes and derivative-free optimization routines. It holds the potential to facilitate and revolutionize the way (nano)dynamical systems are designed, thus pushing the ultimate performance limits of sensors, mechanisms and actuators for scientific, industrial, and consumer applications. § METHODS Sample fabrication. We produce our nanomechanical resonators using electron beam lithography and reactive ion etching techniques on high-stress Si3N4 layers, chosen for their reliability and precision in achieving design specifications <cit.>. These layers are deposited via low pressure chemical vapor deposition (LPCVD) onto a silicon substrate. Following this, the devices undergo suspension through a fluorine-based deep reactive ion underetching process. The mechanical properties of the high-stress Si3N4 are characterized in our previous works <cit.>, with an initial isotropic stress σ_0 = 1.06, Young's modulus E = 271, Poisson's ratio ν=0.23, mass density ρ = 3100/^3. The intrinsic quality factor is a function of thickness h <cit.>, which is Q_0^-1 = 28000^-1 + (6 × 10^10 h )^-1. 
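The thickness dependence of the intrinsic quality factor quoted above can be evaluated directly. In the short sketch below we assume h is expressed in metres, so that the term 6 x 10^10 h is dimensionless.

def intrinsic_quality_factor(h):
    """Intrinsic Q of the Si3N4 film versus thickness h (in metres),
    using 1/Q_0 = 1/28000 + 1/(6e10 * h) as quoted above."""
    return 1.0 / (1.0 / 28000.0 + 1.0 / (6.0e10 * h))

for h_nm in (40, 100, 340):
    print(h_nm, round(intrinsic_quality_factor(h_nm * 1e-9)))   # Q_0 increases with thickness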
§ DATA AVAILABILITY The data that support the findings of this study are available from the corresponding authors upon reasonable request. § ACKNOWLEDGEMENTS Funded/Co-funded by the European Union (ERC Consolidator, NCANTO, 101125458). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Z.L. acknowledges financial support from China Scholarship Council, the assistance on the FE reduced-order modeling from Vincent Bos, and the instruction about using the high performance computing cluster from Binbin Zhang. This work is also part of the project, Probing the physics of exotic superconductors with microchip Casimir experiments (740.018.020) of the research program NWO Start-up which is partly financed by the Dutch Research Council (NWO). M.X. and R.A.N. acknowledge valuable support from the Kavli Nanolab Delft. § AUTHOR CONTRIBUTIONS Z.L., F.A., P.G.S. and A.M.A. conceived the experiments and methods; M.X. and R.A.N. fabricated the Si3N4 samples; Z.L. conducted the measurements and analysed the experimental data; Z.L. and F.A. built the theoretical model; Z.L. performed the reduced-order modelling of the finite element model; Z.L. and A.S. set up the optimization on high performance cluster; F.A. and P.G.S. supervised the project; and the manuscript was written by Z.L. and P.G.S. with inputs from all authors. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2407.12616v1
20240717144425
Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models
[ "Donggeun Kim", "Taesup Kim" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Missing Modality Prediction for Unpaired Multimodal Learning D. Kim and T. Kim Graduate School of Data Science, Seoul National University {kdg5188, taesup.kim}@snu.ac.kr Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models Donggeun Kim0009-0000-0900-0099 Taesup Kim0009-0005-6056-6836A corresponding author ==================================================================================================== § ABSTRACT Multimodal learning typically relies on the assumption that all modalities are fully available during both the training and inference phases. However, in real-world scenarios, consistently acquiring complete multimodal data presents significant challenges due to various factors. This often leads to the issue of missing modalities, where data for certain modalities are absent, posing considerable obstacles not only for the availability of multimodal pretrained models but also for their fine-tuning and the preservation of robustness in downstream tasks. To address these challenges, we propose a novel framework integrating parameter-efficient fine-tuning of unimodal pretrained models with a self-supervised joint-embedding learning method. This framework enables the model to predict the embedding of a missing modality in the representation space during inference. Our method effectively predicts the missing embedding through prompt tuning, leveraging information from available modalities. We evaluate our approach on several multimodal benchmark datasets and demonstrate its effectiveness and robustness across various scenarios of missing modalities. § INTRODUCTION Humans perceive the world through various sensory modalities, such as seeing images and hearing voices, integrating these diverse sources to enhance comprehension. Similarly, a fundamental goal in artificial intelligence is to equip computers with the capability to effectively learn from multi-sensory data. Multimodal learning emerges as a promising approach to improve our understanding of complex data by leveraging multiple communicative modalities. In this context, significant advancements have been made in multimodal learning, particularly through self-supervised learning within the vision-language domain <cit.>. Furthermore, remarkable progress has been achieved in audio-visual learning <cit.>, as well as in other multiple modalities. However, in real-world scenarios, collecting completely paired data presents challenges for various reasons, such as data privacy and security issues. Consequently, the assumption that multimodal learning necessitates the completeness of all modalities is challenging to maintain, leading to the unpaired data issue and the resulting problem of missing modalities. Prior works <cit.> employed multimodal pretrained models for handling missing modality in the fine-tuning stage. But missing modality due to unpaired data can also occur in the pretraining stage. For instance, in the medical domain, most medical datasets typically consist of images, the majority of which lack accompanying text-based clinical narrative reports. Even though it has shown potential when training with paired data <cit.>, a significant portion of existing medical datasets, which include either image-only or text-only data, remains underutilized. As a result, there may be scenarios where obtaining an effective joint (multimodal) encoder, pretrained on hundreds of millions of image-text pairs, becomes challenging. 
On the other hand, acquiring unimodal data is relatively easier than obtaining multimodal data. Additionally, pretrained unimodal models, which come in various forms and demonstrate high performance, are more readily accessible than multimodal models. Hence, in this paper, we acknowledge the commonality of scenarios involving only unpaired data and consider that a pretrained joint encoder model in previous approaches <cit.> may not always be available. Instead, we propose utilizing independently pretrained unimodal encoders for each modality. This strategy offers relatively broader applicability, as each unimodal encoder can be effectively trained using self-supervised learning with large-scale unlabeled data. <cit.>. Moreover, it benefits from leveraging knowledge gained during pretraining. For example, most multimodal models in the vision-language domain have primarily focused on datasets containing images and English-based text. However, these approaches may encounter challenges in handling low-resource or multilingual languages <cit.>. This issue can be easily addressed by using a (domain-specific) text encoder, which is pretrained on large, unlabeled datasets in those languages. In this paper, we define the problem for multimodal settings as follows: (1) Pretrained unimodal encoders are assumed to exist; (2) Partially unpaired data for downstream tasks is provided; (3) Unpaired data is also given during inference. Our assumption is realistic and directly applicable to the multimodal downstream task in real-world scenarios. To this end, we propose a straightforward yet effective framework that addresses missing modalities by leveraging unimodal pretrained encoders and predicting the representations of the missing modalities. We achieve this by our contributions, summarized as follows: * We utilize Parameter-Efficient Fine-Tuning (PEFT) to minimally update pretrained unimodal encoders while maximally preserving knowledge for downstream tasks. * We employ the architecture of Variance-Invariance-Covariance Regularization (VICReg) for improving the predictability of embeddings between different modalities for missing modality problem. * We adopt a prompt-based approach for gathering efficient task-relevant information from other modalities. * We demonstrate that our approach is more robust and effective, outperforming previous studies across all tested datasets and metrics in various scenarios with missing modalities. § RELATED WORK §.§.§ Missing Modality in Multimodal Learning Specific modalities available during training may be unavailable at inference, posing challenges in multimodal learning. <cit.> investigated the robustness of pretrained multimodal transformers when encountering incomplete modalities during inference. For addressing a more general scenario where the absence of modality may occur during either the training or testing phase, <cit.> leveraged missing-aware-prompts according to the missing case. However, both works rely on multimodal joint encoder pretrained with extensive image-text pairs, assuming the availability of large paired multimodal datasets. This assumption limits applicability in scenarios lacking such paired data. In contrast, our method uses unimodal models trained on unpaired data, respectively, eliminating the need for extensive paired datasets. Therefore, we offer a more adaptable approach to handling missing modalities, enhancing flexibility in multimodal learning without reliance on paired data. 
There has also been some progress in addressing missing modalities by inferring them by modeling the probabilistic relationships between modalities. <cit.> proposes a method of Bayesian Meta-Learning to estimate the latent feature of the modality-incomplete data and reconstruct the features of the missing modality data. <cit.> proposes a strategy using shared encoder features from available modalities to generate modality-specific features of missed modality. These approaches are similar to ours in utilizing modality-specific encoders; however, our method focuses explicitly on the effective use of unimodal pretrained models. By efficiently fine-tuning unimodal models—widely available and pretrained on extensive unlabeled datasets—we ensure the maximal preservation of knowledge from pretraining. This distinction underscores our method's versatility and adaptability, suggesting its potential effectiveness in low-resource scenarios, such as handling multilingual text-image data. §.§.§ Parameter-Efficient Transfer Learning As the field advances with substantial pretrained models based on transformer<cit.> architecture, various parameter-efficient adaptation approaches <cit.> have emerged, approximating the performance of full fine-tuning by updating only a subset of parameters. Concurrently, prompt-based learning <cit.>, initially successful in natural language processing, has shown promising results in computer vision tasks <cit.> as well, notably with vision transformer <cit.>. Inspired by these approaches, many recent works <cit.> have been explored for adapting large pretrained vision-language models (, CLIP <cit.>) without re-training the entire model. These methods offer the advantage of maintaining the knowledge learned from a large multimodal dataset while efficiently adapting to the target task. In the pursuit of efficient training for multimodal learning with unimodal encoders, <cit.> demonstrated a method for parameter-efficient vision-language alignment by leveraging pretrained unimodal models. It achieves significant alignment with minimal image-text pairs and parameter updates, thus preserving the existing knowledge within pretrained models. Inspired by this approach, our framework utilizes separate unimodal pretrained encoders. This strategy provides flexibility for various input modalities and enhances efficiency by limiting updates to a subset of parameters across the entire model. §.§.§ Joint Embedding Predictive Architecture The Joint Embedding Predictive Architecture (JEPA) <cit.> combines embedding modules with latent variables and supports dual encoders generating distinct representations without sharing parameters. Its flexibility allows for processing various data formats, such as multimodal inputs. The primary training objective of JEPA is to establish predictability between these representations in the embedding space, and it competes against traditional contrastive methods that require a considerable number of negative samples or memory banks. Our framework aligns with JEPA's principles, focusing on developing encoders trained to infer the representation of one modality from another within the paired multimodal dataset. We combine JEPA in a multimodal learning setup with Variance-Invariance-Covariance Regularization(VICReg) <cit.>. VICReg was introduced to prevent the occurrence of collapse in embeddings, where encoders produce constant or uninformative vectors while producing content features that transfer well on many downstream tasks. 
We adopt the VICReg objective for training our framework, which shares architectural similarities with JEPA, due to its simplicity both mathematically and computationally. Drawing from ideas described in <cit.>, we adapt the objective to learn predictability from paired multimodal data. In this work, VICReg plays a crucial role in facilitating the prediction of features in the representation space, aligning with our objective of maximizing mutual information between the embedding of missing modality and the prediction <cit.>. § PRELIMINARIES §.§ Problem Definition To maintain simplicity without losing generality, we consider a multimodal problem setting consisting of two modalities (M=2), namely m_1 and m_2 (e.g., image and text). We further assume that these modalities do not always coexist, indicating the presence of missing modalities throughout both training and testing phases. Therefore, given a multimodal dataset D=D^c ∪ D^m_1∪ D^m_2, it can be divided into three subsets: the modality-complete subset D^c={(x_i^m_1, x_i^m_2, y_i)} and two modality-incomplete subsets D^m_1={(x_j^m_1, y_j)} and D^m_2={(x_k^m_2, y_k)} (e.g., image-only and text-only). Building on these assumptions, we additionally posit that there is no pretrained multimodal encoder available for processing multimodal data. Instead, each modality is supported by its own pretrained unimodal encoder, which has been trained independently without awareness of the other modality. For this reason, we focus on more general problem settings that can be easily applied when only pretrained unimodal encoders are available. As shown in Fig. <ref>.(a) and (b), for each modalities m∈{m_1, m_2}, we assume that a pretrained unimodal encoder f_θ_enc^m based on transformer <cit.> architecture is given and a classifier f_θ_cls^m is defined on top of its representation. We implement a straightforward late-fusion strategy, integrating the pre-softmax logits from each modality. To address the challenge posed by missing modalities, we introduce a feature predictor f_θ_prd^m, designed to predict the feature vector of a missing modality. Furthermore, to enhance its prediction capabilities, we employ a set of trainable prompts ϕ^m. Based on this setting, we aim to construct a multimodal model against challenges arising from incomplete multimodal data issues during both training and testing scenarios. §.§ Read-only Prompts To enhance the capability of predicting embeddings of other modalities, it is necessary to update parameters of encoders or train only the newly added parameters. However, such updates potentially compromise the high-quality representations obtained through unimodal pretraining. Prompt tuning is an effective strategy to mitigate this, which adds learnable tokens to the input sequence without altering the encoder's pretrained weights. This method can facilitate model adaptation while preserving the original weights. Nevertheless, conventional prompt tuning can still affect the original representation due to the attention interaction between input data and prompts. To address this, we employ a read-only prompting technique <cit.>. By applying a masking strategy to the self-attention mechanism specifically targeting the interaction between input data and prompt tokens, we ensure that only the prompts can "read" the input token features. This approach keeps token features unaffected by the prompts, allowing the prompts to focus on extracting relevant information necessary for feature prediction across modalities. 
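As an illustration of this masking strategy, the sketch below builds an attention mask for a sequence of n_tokens input tokens (including CLS) followed by n_prompts prompt tokens: input tokens may attend only to other input tokens, while the prompts may attend to everything and thus "read" the inputs. This is a generic reconstruction of the idea under the assumption of a standard additive attention mask, not the authors' exact implementation.

import numpy as np

def read_only_attention_mask(n_tokens, n_prompts):
    """Return an (L, L) boolean mask, L = n_tokens + n_prompts, where
    mask[q, k] = True means query position q may attend to key position k.
    Input tokens never attend to prompt positions, so their representations
    are unaffected by the prompts; prompt tokens read everything."""
    L = n_tokens + n_prompts
    mask = np.ones((L, L), dtype=bool)
    mask[:n_tokens, n_tokens:] = False   # block input-token -> prompt attention
    return mask

# In scaled dot-product attention, add a large negative number to the attention
# logits wherever the mask is False before applying the softmax.
m = read_only_attention_mask(n_tokens=4, n_prompts=2)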
Consequently, the prompts become specialized for the sub-task, feature prediction. § PROPOSED METHOD §.§ Multimodal Classification Task through a Simple Late-Fusion Strategy Although pretrained unimodal encoders with a late-fusion strategy generally perform well without fine-tuning, they may not be sufficient to attain optimal performance in certain multimodal downstream tasks. However, full fine-tuning, while potentially improving performance, is less favorable due to its significant memory and resource demands. Therefore, we employ BitFit <cit.> as a PEFT approach, which freezes all parameters of the entire model and updates only the bias terms during fine-tuning. Based on this setting, we define the multimodal classification loss L_cls as the summation of standard cross-entropy classification losses over multiple modalities as follows: L_cls = L_m_1(D^m_1; θ_enc^m_1, θ_cls^m_1) + L_m_2(D^m_2; θ_enc^m_2, θ_cls^m_2) + L_c(D^c; θ_enc^m_1,θ_enc^m_2, θ_cls^m_1,θ_cls^m_2) where ℒ_m_1, ℒ_m_2 are the loss for modality-incomplete subsets, and ℒ_c is the loss for a modality-complete subset. As our framework is based on a late-fusion strategy, it enables our approach to be compatible with any other PEFT methods, such as adapter-based tuning <cit.> or reparametrization-based method <cit.>. §.§ Missing Modality Feature Prediction with Prompt-Tuning In the presence of a missing modality, instead of using only the features of the existing modalities, we posit that the predicted features of the missing modality can be integrated with those of the available modalities during inference to enhance prediction performance. Consequently, we introduce a feature predictor f_θ_prd^m using a set of trainable prompts ϕ^m to address the issue of missing modalities effectively. To facilitate this, we utilize read-only prompts described in <ref> that are concatenated to the unimodal input data and then processed through an unimodal encoder based on transformers with specially designed masked attention. This makes our feature predictor only read the internal representation of the encoder f_θ_enc^m, which is fine-tuned for the downstream task, and to learn to utilize rather than modify it. More precisely, we define the input data for each modality m as 𝐱^m=[C^m, E^m, ϕ^m] and the corresponding outputs can be expressed as: f_θ_enc^m(𝐱^m) = [C^m, E^m, ϕ^m] where both the class token(, CLS) embedding C^m and the input token embeddings E^m remain unchanged regardless of the prompts ϕ^m due to the use of read-only prompts. Based on these output embeddings, the class prediction f_θ_cls^m(C^m) for the existing modality m is computed, and the feature (, the final embedding of a class token) of the absent modality m' is predicted as (also see Fig. <ref>-(b)) Ĉ^m'=f_θ_pred^m(ϕ^m). It is important to note that the class prediction is not entirely affected by the feature prediction. Furthermore, the feature prediction can be enhanced by only tuning the prompts without interfering with the internal representation in the unimodal encoders. To optimize our feature prediction, we utilize a modality-complete dataset D^c and simulate the situations of missing modality. Moreover, to improve the predictability of embeddings similar to the approach outlined in <cit.>, we adopt VICReg <cit.>. The loss function based on it for predicting embeddings while preventing their collapse comprises three components. Firstly, a variance term forces the embedding vectors of samples within a batch to be different. 
It involves a hinge loss function that maintains the standard deviation of each component of the embeddings along the batch dimension. Secondly, an invariance term is the main objective, mean-squared euclidean distance computed between the original and predicted features. Finally, a covariance term is incorporated to decorrelate the different dimensions of the embeddings by setting the off-diagonal coefficients in the covariance matrix of the embeddings to zero. Therefore, the loss function for our feature prediction L_prd is a weighted average of the invariance, variance, and covariance terms: .95!L_prd(C^m',Ĉ^m';θ_prd^m,ϕ^m) = λ s(C^m',Ĉ^m')+μ[v(C^m')+v(Ĉ^m')]+ν[c(C^m')+c(Ĉ^m')] where s, v and c are the invariance, variance and covariance terms as described in <cit.> and λ, μ and ν are hyper-parameters. Moreover, this loss function is based on the existing modality m and the missing modality m', and it can be applied vice versa with a modality-complete dataset D^c. For this reason, our method can efficiently address any type of missing modality scenario. Furthermore, to guide the feature predictor in generating features suitable for the downstream task and enhance the robustness of the classifier in missing cases, we introduce an auxiliary classification loss, L_aux. This can be achieved by taking the predicted features Ĉ^m' to the classifier and optimizing them with cross-entropy loss. It simulates the situations of missing modality and ensures that the predicted representation aligns effectively with the downstream task. To sum up, the overall objective function L_total can be represented as follows: L_total=α *(L_cls+L_aux)+L_prd where α is the balancing hyper-parameter. § EXPERIMENT §.§.§ Dataset We evaluate our proposed method using three multimodal classification datasets following prior works <cit.>. MM-IMDb <cit.> consists of 25,956 image-text pairs with movie plot outlines and poster images. This encompasses 23 different genre classes, and the objective is to predict the genres of movies. As movies are frequently associated with multiple genres, the task is multimodal multi-label classification. UPMC Food-101 <cit.> is a multimodal classification dataset that includes images obtained by Google Image Search and corresponding textual descriptions for 101 food types. Comprising 90,840 image-text pairs, the dataset captures real-world challenges due to the noisy nature of the image and text pairs. Hateful Memes <cit.> is a challenging dataset designed to identify hate speech in memes using both image and text modalities. The selection of memes is structured to challenge strictly unimodal classifiers, making them struggle to classify correctly, while multimodal models are more likely to perform better. Hateful Memes emphasizes the importance of multimodal approaches in mitigating the limitations of unimodal signals. §.§.§ Metric For MM-IMDb, we measure multi-label classification performance using the F1-Macro score; for UPMC Food-101, we compute the classification accuracy; and for Hateful Memes, we assess performance using the Area Under the Receiver Operating Characteristic Curve (AUROC). §.§ Experiment Setting This paper explores two training settings involving missing modalities: complete training setting and missing training setting. Throughout our experiments, we compare our method with previous works <cit.> based on a multimodal encoder (ViLT <cit.>) and an unimodal baseline. 
The unimodal baseline employs encoders identical to ours but only leverages an unimodal classifier from the available data when a modality drops. It should be noted that our framework is adaptable to any training setting, while prior works lack the flexibility to apply to both settings effectively. This highlights the distinct advantage of our approach, which can handle various missing scenarios. Complete training setting involves training on modality-complete data D^c, and evaluating on modality-incomplete data D^m_1(, image-only (text-missing) data). This setup is designed to measure the model's robustness in the absence of the dominant modality. We compare our method with a previous work <cit.> with a multimodal pretrained transformer. We have replicated the results from <cit.> as no official code is available. Additionally, we compare unimodal methods against a vision-only encoder trained and tested solely on image modality data to measure the missing modality's robustness. For a fair comparison, results are averaged over five different random seeds, thereby enhancing the validity of the results by accounting for variability in the absence of text during testing. Missing training setting is a more general and challenging scenario where modality is absent in both the training and testing phases. We set a 70% missing rate in all our experiments following <cit.>. We explore three realistic cases of missing modality: text-missing, image-missing, and both-missing scenarios, and evaluate each case for all three scenarios of missing modality. To compare with previous work <cit.> under our experimental setting, we reproduced it using the official code[<https://github.com/YiLunLee/missing_aware_prompts/tree/main>]. This was necessary as the earlier study only conducted experiments under the condition that the missing setting in training was equal to the testing phase. The performance metrics are calculated by averaging outcomes across five different random seeds. The seed determines which samples contain missing modality and which modality is absent, ensuring a fair comparison. §.§ Main Result Fig. <ref> shows performance under the complete training setting, highlighting declines when text is missing at inference. Although all methods perform similarly well when all modalities are present, they struggle when a dominant modality is absent during testing. This result aligns with a prior study <cit.> that the performance of multimodal models trained on complete data degrades when faced with incomplete data at inference. Our findings indicate that using separate unimodal encoders for multimodal learning is also susceptible to missing modalities. Specifically, on MM-IMDb and Food-101, the performance of the unimodal baseline is even worse than that of training solely with the image encoder when less than 10% of the test data are paired. Our approach, however, stands out by significantly outperforming others, especially when the text is severely missing. Moreover, on the Hateful Memes, the performance gap between the unimodal baseline and the image-encoder-only approach is slight, indicating that it does not rely on a single modality. As shown, our model is always superior to others. [t!]0.4 < g r a p h i c s > figureAblation on effect of the prompts under complete training setting. [t!]0.45 tableAblation study on effect of the prompts under missing training setting. The results are averaged over three distinct missing cases, each conducted with five different random seeds. 
Training (Image, Text)    w/o Prompts    Ours
100%, 30%                 44.91          48.99
30%, 100%                 45.46          49.64
65%, 65%                  46.98          48.99
On Hateful Memes, our method achieves an AUROC of 60.4, which is 6.5% greater than the unimodal baseline's AUROC of 56.73, showing that our predictor modules with read-only prompts generate auxiliary text representations that are beneficial for the target task. Overall, our method achieves state-of-the-art performance across various datasets and evaluation metrics in the severely missing cases, and it even surpasses multimodal models pretrained on large collections of image-text pairs. The results for the second setting, shown in Table <ref>, display performance across different missing scenarios with 70% of modalities missing. Each training configuration is evaluated in a testing scenario where the modalities are missing in the same proportions as in the training phase, as well as in two additional cases that differ from the training scenario. As shown, we observe a lack of robustness in other methods when faced with missing scenarios that deviate significantly from the training settings. For instance, models trained with 30% image and 100% text samples demonstrate adequate performance on test samples with an equal distribution of missing modalities. However, their performance degrades significantly when encountering samples with 100% image and 30% text. In particular, prior work <cit.> using a multimodal encoder does not cope with unseen cases because it did not experience the missing setting during training; it even demonstrates inferior performance compared to ViLT. While the unimodal baseline outperforms our method in a few specific scenarios where the missing modalities in training and testing are aligned, it is limited to such conditions and susceptible to different missing scenarios. Conversely, our method maintains robustness to missing modalities even when substantial differences exist between the train and test settings. As a result, when averaging the performances over the three different missing cases, our method outperforms the state of the art by a large margin across datasets and settings because it leverages the pretrained knowledge of unimodal models together with the predictor modules. Importantly, it achieves competitive results despite the limited (30%) availability of paired data and the absence of multimodal pretraining (i.e., a multimodal joint encoder), underscoring its robustness, applicability, and flexibility in real-world scenarios. §.§ Ablation Study §.§.§ Effect of Prompts-based Feature Prediction We investigate the effect of prompts for feature prediction on MM-IMDb, as shown in Fig. <ref> and Table <ref>. We compare our method with a predictor leveraging the CLS token, which captures the aggregated information of the input sequence. Our prompts-based method requires only an additional 0.005% of the backbone parameters, yet it outperforms the CLS-token predictor in both scenarios. This reveals that the CLS token is specialized for the target task and less suitable for predicting the embedding of the other modality. Next, we present t-SNE <cit.> visualizations of the embeddings in Fig. <ref>. Each data point in the figure represents, for a test sample, the ground-truth embedding or the prediction obtained with read-only prompts or with the CLS token. The figure illustrates that our method produces embeddings more closely aligned with the original features than the CLS token does.
Finally, we examine the cosine similarity of our feature predictions for quantitative confirmation. Surprisingly, the prompts-based approach increased the similarity of text prediction from 0.54 to 0.57 and improved image feature prediction from 0.4 to 0.71. [tp]0.4 < g r a p h i c s > figureAblation on scaling encoders under complete training. [tp]0.45 table Impact of variance-covariance regularization. Inv: a invariance loss is used, Var: variance regularization, Cov: covariance regularization, in Eq <ref>. 1l|Method 2cF1-Macro Inv w/ stop gradient 2c22.72 Inv 2c25.72 1l|Inv+Var+Cov (VICReg) 2c40.88 §.§.§ Further Analysis of Read-only Prompts We conduct an ablation experiment to analyze the effect of prompt token length under the complete training setting, as illustrated in Fig. <ref>. The performance evaluated with text-missing data improves with an increase in prompt length, peaking at a length of 6, beyond which additional length does not proportionally enhance performance. Interestingly, compared to using the CLS token, we observe meaningful improvement when leveraging even one learnable prompt token, demonstrating the effectiveness of prompt-based feature prediction. This aligns with findings from the previous section that using additional prompt tokens instead of CLS token, specialized for downstream tasks, enhances feature prediction. Furthermore, to assess the effect of the read-only mechanism, we examined its performance using prompts without attention masking. The figure shows a dramatic decrease in performance to 34.72 from 40.88 in an equal prompt setting. It emphasizes the necessity of structuring read-only prompts in predicting features to preserve the model's internal representations for the target task. §.§.§ Scaling As illustrated in Fig. <ref>, we explicitly explore the performance enhancements achieved by employing advanced unimodal encoders, particularly a scaled-up version of our base encoder, termed the "Large". The aim here is twofold: first, our method outperforms unimodal baselines even with the larger encoder, and second, it underscores the advantages of using unimodal pretrained encoders in real-world scenarios. Our results indicate that more powerful encoders enhance performance and improve robustness in scenarios with missing modalities. Unimodal pretrained encoders, often more accessible and available in high-performance variants, present a more feasible option for enhancing model capabilities than multimodal encoders. This highlights our method's adaptability and efficiency in real-world scenarios, especially with domain-specific datasets. By incorporating domain-specific encoders, such as multilingual pretrained models, we expect even more significant performance improvements, showcasing our approach's potential to effectively leverage advancements in unimodal encoding. §.§.§ Impact of VICReg Table <ref> presents the results of text-missing testing under complete training, highlighting VICReg's crucial role in our method. It indicates that the stop-gradient operation is not required, which restricts the predictability of encoders. Notably, relying solely on the invariance term, the primary objective for feature prediction, resulted in a marked decline. Encoders are trained to make embeddings easily predictable from other modalities by simply adding the covariance and variance regularization terms. These results demonstrate how VICReg significantly enhances predictability, guiding the model toward producing informative representations. 
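To make the role of the three terms concrete, a minimal NumPy sketch of the regularized prediction loss is given below. It follows the standard VICReg definitions (hinge on the per-dimension standard deviation, mean-squared invariance term, off-diagonal covariance penalty) with the weighting structure described above, and should be read as an illustration rather than the authors' code.

import numpy as np

def vicreg_prediction_loss(c_true, c_pred, lam=50.0, mu=50.0, nu=1.0,
                           gamma=1.0, eps=1e-4):
    """VICReg-style loss between ground-truth embeddings c_true and predicted
    embeddings c_pred, both of shape (batch, dim); lam/mu/nu weight the
    invariance, variance, and covariance terms."""
    def variance(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))          # hinge on per-dim std

    def covariance(z):
        z = z - z.mean(axis=0)
        cov = (z.T @ z) / (len(z) - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / z.shape[1]              # off-diagonal penalty

    invariance = np.mean(np.sum((c_true - c_pred) ** 2, axis=1))  # mean-squared distance
    return (lam * invariance
            + mu * (variance(c_true) + variance(c_pred))
            + nu * (covariance(c_true) + covariance(c_pred)))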
§.§.§ Comparison on other PEFT methods To assess the compatibility of our framework with other PEFT methods, we conduct a comparative analysis with Layer Normalization (LN) Tuning <cit.>, Prefix Tuning <cit.>, and Adapter Tuning <cit.> under complete training setting. Table <ref> presents the performance comparison on text-missing data. Our observations reveal that BitFit surpasses other methods with very few (0.11%) trainable parameters. Despite having the most trainable parameters, adapter-based methods underperformed, indicating that slightly tuning the encoders across all layers is enough to train the encoder's predictability while optimizing the target task and often offers more benefits than focusing on specific layers. § CONCLUSION This paper addresses the practical challenges in multimodal learning associated with acquiring complete multimodal data. In real-world scenarios, using a pretrained joint encoder on a large paired dataset may not always be feasible. Furthermore, the issue missing modalities for downstream tasks presents potential challenges during both training (fine-tuning) and testing phases. We introduce a simple yet effective framework designed to tackle missing modalities by employing PEFT on separate pretrained unimodal models. Our approach utilizes VICReg to effectively predict the embeddings of other modality within the representation space, leveraging read-only prompts. Our method exhibits superior performance across different multimodal datasets in various scenarios for missing modality that occurs during both the training and testing phases. § ACKNOWLEDGEMENTS This work was supported by the National Research Foundation of Korea (NRF) grant (RS-2023-00222663, RS-2024-00345809) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant (RS-2022-00143911, RS2023-00232046, IITP-2024-RS-2024-00397085), both funded by the Korea government (MSIT). § SUPPLEMENTARY: MISSING MODALITY PREDICTION FOR UNPAIRED MULTIMODAL LEARNING VIA JOINT EMBEDDING OF UNIMODAL MODELS § IMPLEMENTATION DETAILS We conduct overall experiments using a single RTX 3090 GPU. Similar to <cit.>, we employ DeiT III <cit.> as an image encoder and SimCSE <cit.> as a text encoder throughout the overall experiment, which model parameters are initialized using the pretrained weights. The structure of the feature predictor is modified from <cit.>, which consists of two fully-connected layers with layer normalization layer <cit.> and activation function, and a third linear layer. The dimensions of all three layers are set to equal to the output dimension of the encoders. The length of learnable read-only prompts for feature prediction is set to 6 in MM-IMDb, 20 in UPMC Food-101, and 2 in Hateful Memes in both training settings. We conduct all experiments with batch size 12 of 20 epochs, using the AdamW <cit.> optimizer with a weight decay of 5 × 10^-2. We initiate the warm-up steps for the learning rate at 0, with a base learning rate set to 1 × 10^-2. The warm-up phase linearly progresses from 0 for the first 10% of the total training steps before decay. The variance, invariance, and covariance coefficients λ, μ and ν for VICReg loss in Eq <ref> are set to 50, 50, and 1. For the ablation study on section <ref>, we add 36 learnable tokens to the input data specifically for the target task in prefix tuning, separate from utilizing read-only prompts for feature prediction. 
For adapter tuning, the adapter is inserted before each layer normalization layer in transformer-based pretrained models. It consists of a weight matrix that downsamples the input, followed by an activation function, and another weight matrix that restores the input to its original dimension. Finally, it includes a residual connection. We set the reduction factor to 4. For a fair comparison that considers trainable parameters, adding an adapter is applied to the first and last layers, and separately to all layers for further analysis. § READ ONLY PROMPTS ATTENTION MASK As illustrated in Fig. <ref>, we applied an attention masking mechanism for our encoder's architecture, following the approach outlined in <cit.>. This structure is essential to maintain the representations of the input tokens for the target task while training the learnable tokens specialized for feature prediction. § ADDITIONAL RESULTS §.§ Results on Image Missing Case We provide a result of an image missing case in the Fig <ref> under the complete training with the Food-101, which is the most susceptible among the three datasets. Due to the dominance of the text modality, the impact of the image modality on target task is minimal. However, the unimodal baseline suffers in severely image-missing cases. Conversely, our method, which demonstrates superior performance in the text missing case, also shows robustness to severely image missing cases §.§ Results on Different Missing Ratio As illustrated in Fig. <ref>, we explored varying missing rates from the fixed 70% on [both missing training/testing] setting. Our method with feature predictor trained on 70% missing data (i.e., only 30% paired) outperforms the unimodal baseline trained with fully-paired data. When the missing rate is exceptionally high (i.e., 90% missing), feature predictors struggle to learn effectively from the limited paired data available. Nonetheless, even with just 20% paired data, our method surpasses the unimodal baseline. §.§ Full Results of the Ablation Study on prompts-based feature prediction §.§ Full Results on Missing Training Setting §.§ Feature Prediction Loss Coefficients
http://arxiv.org/abs/2407.12175v1
20240716205848
Temporal Configuration Model: Statistical Inference and Spreading Processes
[ "Thien-Minh Le", "Hali Hambridge", "Jukka-Pekka Onnela" ]
stat.ME
[ "stat.ME" ]
Temporal Configuration Model: Statistical Inference and Spreading Processes Thien-Minh Le, Hali Hambridge, and Jukka-Pekka Onnela =============================================================== § ABSTRACT We introduce a family of parsimonious network models that are intended to generalize the configuration model to temporal settings. We present consistent estimators for the model parameters and perform numerical simulations to illustrate the properties of the estimators on finite samples. We also develop analytical solutions for the basic and effective reproductive numbers for the early stage of a discrete-time SIR spreading process. We apply three distinct temporal configuration models to empirical student proximity networks and compare their performance. § INTRODUCTION In its simplest form, a network is a static collection of nodes joined together by edges. Many systems of interest can be represented as networks, enabling the study of such systems and leading to new insights. <cit.> Broadly speaking, networks capture the pattern of interactions between elements of a system, reducing the system to a basic topological structure that facilitates analysis. One of the most commonly studied types of networks is the interpersonal network, wherein the network encodes interactions between individuals. While network data is highly sought after, high quality network data can be quite rare. Traditional surveys, wherein researchers ask individuals about their contacts, are plagued with challenges. With egocentric surveys, researchers ask individuals about their contacts. While these surveys scale well, they often result in small network fragments and can be subject to recall bias and sampling bias. With sociocentric surveys, researchers ask participants to identify contacts from a roster. These surveys tend to give a more complete picture than egocentric surveys, but they scale poorly and require a priori information on whom an individual might come into contact with. <cit.> Both of these data collection methods are also quite burdensome and tend to provide very subjective network data. Furthermore, they typically result in coarse data; that is, researchers are only able to identify sets of contacts, but lack information on the proximity, frequency, and duration of their interactions, as well as how these interactions change over time. These data collection challenges often lead researchers to treat networks as static, despite nearly all networks evolving over time. The use of static networks can be particularly problematic when studying dynamic phenomena that take place over networks, like spreading processes. Studies have shown that when the network evolves slowly relative to a spreading process, the dynamics can be approximated using a static network. When the network evolves very rapidly relative to the spreading process, the dynamics can be approximated well by a time-averaged version of the network. However, when the network and spreading process evolve at comparable time scales, the interplay between the two becomes important. <cit.> Infectious diseases are a prime example of a spreading process, with the contagion propagating through the population via contact between an infected individual and a susceptible individual. Typically, infectious disease modeling is done via a fully-mixed model, which assumes that every individual in the population interacts with every other individual with equal probability or rate.
However, in the real world, individuals typically have a small circle of contacts with whom they interact repeatedly. This was emphasized during the COVID-19 pandemic when many individuals practiced social distancing, choosing to interact with only a small group of people dubbed their “social bubble” or “pod.” <cit.> These repeated contacts, or persistent ties, between individuals are well-represented by a network structure and often lead to isolation of the contagion within a particular part of the network. However, as the network topology changes, paths to new, uninfected portions of the network can open up. As such, it is critical to account for both persistent ties and changes in network structure over time. An approach that captures both of these phenomena may offer the most realistic look at how diseases spread through populations. Thankfully, recent technological advances have made high-fidelity individual-level proximity data more widely available, thereby allowing us to study empirical time-varying networks more precisely. Wearable sensors, like RFID tags and Bluetooth-enabled devices, such as smartphones, allow researchers to collect accurate, granular data in a passive manner. <cit.> These technologies also allow investigators to capture proximity information with high precision, measuring the exact time and duration that two individuals are in close contact, along with their approximate distance from one another. This high-fidelity data allows us to quantify exposure, making it ideal for studying the spread of contagion. However, this begs the question: how much temporal granularity is needed to accurately capture spreading processes over networks? Very granular temporal resolution poses its own challenges, as it may capture unimportant interactions and erode privacy-protecting measures. Coarser collection can help preserve privacy and be less burdensome to store and analyze. However, if the network information is too coarse, we may miss key temporal dynamics, thereby inhibiting our ability to accurately model spreading processes over the network. Over the last decade, temporal networks have emerged as a fascinating problem that has received substantial research interest from a variety of disciplines. The majority of research focuses on developing various temporal models that can describe real-world data. Popular approaches include dynamic link modeling, activity-driven modeling, link-node memory modeling, and community dynamics modeling. <cit.> However, research into temporal network model fitting and validation is still in early development. In <cit.>, the authors presented a generative model on random graphs where new edges form and dissolve at a constant rate, and proposed ways to estimate the rate for empirical data. Our study takes a novel approach to the temporal network, allowing more freedom in how edges are formed and dissolved, and focuses on the primary question of how much temporal network information is required to reliably retrieve the underlying generative model. Specifically, we propose a generative model for temporal networks which employs an edge persistence rate. The rate can be a simple constant, generated from a distribution, or some function of the empirical data. We provide consistent estimators for the model parameters. Interestingly, we found that constructing an estimator using as much temporal network information as possible does not necessarily result in a superior estimator. 
We also show how to fit the proposed models to an empirical data set with promising results. Our findings will provide practitioners with valuable insights to enable reliable estimations while balancing the cost, storage, and privacy concerns that come with collecting dynamic network data. § TEMPORAL NETWORK MODEL §.§ Model Specification The configuration model (CM) is a widely used network model as it balances both realism and simplicity. <cit.> Unlike many other network models, the configuration model incorporates arbitrary degree distributions. The exact degree of each individual node is fixed, which in turn fixes the total number of edges. To construct a network realization from a configuration model, one first specifies the total number of nodes in the network, denoted N. Then, for a given node i, one specifies the degree of that node, k_i, repeating this process for all nodes in the network. Each node is then given “stubs” equal to its degree, where a stub is simply an edge that is only connected to the node in question with its remaining end free. Pairs of stubs are then selected uniformly at random and connected to form edges until no stubs remain. The configuration model has several attractive properties. First, each possible matching of stubs is generated with equal probability. Specifically, the probability of nodes i and j being connected is given by k_ik_j/2m-1, where m is the total number of edges. Because any stub is equally likely to be connected to any other, many of the properties of the configuration model can be solved exactly. While the configuration model does allow for multi-edges and self-edges, their probability tends to zero as N approaches infinity. The configuration model is recognized as “one of the most important theoretical models in the study of networks” and many consider it one of the canonical network models. <cit.> When studying a new question or process, the configuration model is often the first model network scientists employ, making it an ideal foundation for our temporal network model. <cit.> We present a time-varying version of the configuration model that we call the temporal configuration model (TCM). Like the configuration model, the TCM allows for an arbitrary degree distribution. Additionally, it encodes persistent ties, a key motivation for using and studying network models, by assigning each dyad a latent persistence probability. To construct a TCM, one creates an initial network, G_0, via the standard configuration model algorithm. At each subsequent discrete time point t, a new network, G_t, is generated as follows. For each edge in G_t-1, a Bernoulli trial is conducted with success probability equal to that edge's latent persistence probability. That is, for an edge between nodes i and j, we perform a Bernoulli trial with success probability p_ij. If the trial results in a success, the edge remains. If the trial is unsuccessful, the edge is broken, creating two stubs. After Bernoulli trials have been completed for all edges in G_t-1, stubs are matched uniformly at random with one another, forming edges in the new network, G_t. This process is repeated for each time step until one obtains the desired sequence of graphs, G_0, G_1, …, G_T. The TCM can take on several intuitive forms by adjusting the latent edge-wise persistence probability. We begin by considering the two extremes. First, we can recover the standard configuration model simply by setting p_ij=1 for all i, j. This creates a static network such that G_t=G_0 for all times t. 
Second, if we set p_ij=0 for all i, j, we simply generate independent and identically distributed random draws of graphs from a configuration model ensemble. At each t, a configuration model with the specified number of nodes and degree sequence will be generated, but this instance will be independent of the previous network realizations. A more interesting scenario is obtained by setting p_ij equal to some fixed p ∈ (0,1) for all pairs i, j. That is, we specify a single or homogeneous persistence probability for all edges in the network. This parameter dictates how quickly the network changes over time. If p is large, the network will change slowly as edges are highly likely to persist forward in time. If p is small, the network will change rapidly. If we wanted the persistence probability to vary across the network, creating a more heterogeneous population, we can draw edge-level probabilities from some distribution for all node pairs. We could also imagine circumstances where it may make sense to encode some functional form for p_ij. For instance, edge-level persistence probabilities could be a function of node-level attributes if such information is available. We could envision a scenario where some individuals are more likely to stay within a particular social group while others may frequently move between groups, be it due to age, location, or some other factor or combination of factors. In this setting, we might construct p_ij to be some function of node i's attributes, a_i, and node j's attributes, a_j. As such, every edge would have a distinct probability that would relate back to the two individuals in question. We could take this one step further and have edge-level persistence probabilities be a function of the relationship between the two nodes. For instance, familial relationships may have a higher persistence probability, while acquaintances may have a lower persistence probability. §.§ Inference We focus on three of the intuitive forms outlined above, namely a single persistence probability for the entire network, edge-level persistence parameters drawn from a probability distribution, and edge-level persistence parameters that are product of node-level persistence parameters drawn from a probability distribution. In the first model, the entire network has a single persistence probability, p. We then consider the case where the persistence rate p_ij of edge (i,j) is generated from a given distribution W, i.e., p_ij∼ W, for all i, j. Finally, we consider the case where p_ij = p_ip_j, where p_i ∼ W, for i = 1,⋯, N are independently drawn from a given distribution W. Notice that the first model is a special case of the second model as the distribution W shrinks to a constant p. We observe that the future status of each edge after one step follows a Bernoulli distribution with mean equal to the persistence rate of that edge, and the status of each edge after two time steps follows a Bernoulli distribution with mean equal to the square of the persistence rate. The number of edges remaining after one or two time steps in the network is the sum of independent Bernoulli distributions. Thus, to estimate the first moment of the distribution, we can simply use the proportion of edges persisted one time step. To estimate the second moment of the distribution, we can use the proportion of edges that persisted two time steps. Next, we show that these estimators are consistent for each scenario. 
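Before turning to the consistency results, the generative step described above can be made concrete with a short sketch. The code below is an independent illustration, not the code released with this paper: it builds the initial graph with networkx's configuration model, draws edge persistence probabilities under one of the three schemes just discussed, and then, at each time step, retains each edge via a Bernoulli trial and rematches the freed stubs uniformly at random. Function names such as persistence_matrix and temporal_configuration_model are ours; self-loops and multi-edges created by rematching are simply discarded, which is negligible for large networks.
[language=Python, frame=single, breaklines=true]
import numpy as np
import networkx as nx

def persistence_matrix(n, scheme="constant", p=0.8, beta_ab=(1, 4), rng=None):
    """Symmetric matrix of edge persistence probabilities p_ij.
    'constant'  : p_ij = p for all pairs            (Model 1)
    'edge_beta' : p_ij drawn i.i.d. from Beta(a, b) (Model 2)
    'node_beta' : p_ij = p_i * p_j, p_i ~ Beta(a,b) (Model 3)"""
    rng = rng or np.random.default_rng()
    a, b = beta_ab
    if scheme == "constant":
        P = np.full((n, n), float(p))
    elif scheme == "edge_beta":
        U = np.triu(rng.beta(a, b, size=(n, n)), 1)
        P = U + U.T
    elif scheme == "node_beta":
        pi = rng.beta(a, b, size=n)
        P = np.outer(pi, pi)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    np.fill_diagonal(P, 0.0)
    return P

def temporal_configuration_model(degrees, P, T, rng=None):
    """Return the graph sequence G_0, ..., G_T of the TCM."""
    rng = rng or np.random.default_rng()
    G = nx.Graph(nx.configuration_model(list(degrees)))   # collapse multi-edges
    G.remove_edges_from(nx.selfloop_edges(G))
    graphs = [G.copy()]
    for _ in range(T):
        kept, stubs = [], []
        for i, j in G.edges():
            if rng.random() < P[i, j]:
                kept.append((i, j))          # edge persists
            else:
                stubs.extend([i, j])         # edge breaks into two stubs
        rng.shuffle(stubs)                   # uniform rematching of stubs
        rewired = [(stubs[k], stubs[k + 1]) for k in range(0, len(stubs) - 1, 2)
                   if stubs[k] != stubs[k + 1]]            # drop self-loops
        G = nx.Graph()
        G.add_nodes_from(range(len(degrees)))
        G.add_edges_from(kept + rewired)
        graphs.append(G.copy())
    return graphs

rng = np.random.default_rng(0)
deg = rng.poisson(6, size=1000)
deg[0] += deg.sum() % 2                      # total stub count must be even
P = persistence_matrix(1000, scheme="constant", p=0.8, rng=rng)
print([g.number_of_edges() for g in temporal_configuration_model(deg, P, T=5, rng=rng)])
Swapping the scheme argument switches between the three variants without touching the evolution step.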
Remark: As long as W is uniquely determined from its first k moments, the proposed approach can specify the distribution of W. Model 1: All edges of the network have a single persistence probability p for some 0 ≤ p ≤ 1. Let X_0 = x_0 denote the number of edges in the initial graph, G_0. Notice that x_0 is a fixed constant for a given initial graph G_0. Let X_1 denote the number of edges that persist to time t = 1. That is, X_1 = ∑_(i,j) ∈ G_0Bernoulli(p). We can then use the ratio Z_1=X_1/X_0 to estimate the persistence probability p. Lemma <ref> gives the properties of estimator Z_1. Z_1 -p/√(( p(1-p))/X_0) converges to a standard normal distribution, as X_0 →∞. Proof: We have Z_1=X_1/X_0 = ∑_(i,j) ∈ G_0Bernoulli(p)/ X_0. Since ∑_(i,j) ∈ G_0Bernoulli(p) is the sum of independent Bernoulli random variables, applying the central limit theorem we have Z_1 -p/√(( p(1-p))/X_0) converges to a standard normal distribution as X_0 →∞. □ Thus, Z_1 is an unbiased and consistent estimator for p. Since the network continues to evolve over time, we can incorporate this information to refine our estimate of p. Following the same logic as in Lemma <ref>, we can show that the ratio of edges remaining after each time step is also an unbiased and consistent estimator of p. To obtain a more precise estimator for the persistence probability p, we can use the average of ratios over all time steps, denoted Z̅. Before discussing the proposed estimator Z̅, we first walk through the evolution of the temporal network and set up the necessary notation. During the first time step, the broken edges of the original network G_0 form stubs and are rematched to create new edges. Let Y_1 denote the number of newly formed edges (excluding self-loops and multi-edges) at t=1. For large networks, the number of self-loops and multi-edges is negligible relative to the total number of edges; thus, Y_1=X_0-X_1+o(1), where o(1) captures the number of self-loops and multi-edges. Let X_1^+ denote the total number of edges at t = 1 after adding the new edges to the remaining edges. That is, X_1^+=X_1+Y_1 = X_0 + o(1). Then, at t = 2, the number of edges that persist is X_2∼Binomial(X_1^+,p) and the total number of edges after the rematching is X_2^+=X_1^++o(1). Similarly, for time t = T, X_T is the total number of edges persisting from X_T-1^+ and X_T^+ is the number of edges after the rematching. Let Z_t denote the fraction of edges that persist to time t, i.e., Z_t=X_t/X_t-1^+, for t = 1,⋯,T. Then the proposed estimator Z̅ is defined as Z̅=1/T(Z_1+Z_2+⋯+Z_T). Next, we present some important properties of the proposed estimator Z̅. The estimator Z̅ is an unbiased and consistent estimator of the persistence probability p. In addition, when T = o(X_0) and T→∞, Z̅ converges to p at the rate of O_p(√(1/X_0) (1/T)^1/2-δ), for some small δ > 0. Proof: It follows from Lemma <ref> that each estimator Z_k is an unbiased and consistent estimator for the persistence probability p, for all k = 1,⋯,T. Therefore, Z̅ =1/T ∑_i=1^TZ_i is also an unbiased and consistent estimator of p. Denote S_T=∑_i=1^TZ_i and γ_st=E[(Z_s-p)(Z_t-p)] for all s, t∈{1,…,T}. We now evaluate the variance of S_T. Var(S_T) =E[(S_T-pT)^2] =E[(∑_i=1^TZ_i-pT)^2] =∑_s=1^T∑_t=1^TE[(Z_s-p)(Z_t-p)]= ∑_s=1^T∑_t=1^Tγ_st. 
For any s>t, we have γ_st =E[(Z_s-p)(Z_t-p)] = E(Z_sZ_t)-p E(Z_s)-p E(Z_t)+p^2 =E(Z_sZ_t)-p^2-p^2+p^2 =E(X_s/X_s-1^+ X_t/X_t-1^+)-p^2 =E[E(X_s/X_s-1^+ X_t/X_t-1^+| X_s-1^+)]-p^2 =E[E(X_s/X_s-1^+| X_s-1^+)]E(X_t/X_t-1^+)-p^2 by independence of X_t/X_t-1^+ and E(X_s/X_s-1^+| X_s-1^+) =p E(X_t/X_t-1^+)-p^2=p^2-p^2 = 0. Thus, γ_st=0 for any s>t. Similarly, γ_st=0 for any s<t. Therefore, Var(S_T) = ∑_s=1^T∑_t=1^Tγ_st = ∑_s=1^Tγ_ss. For T = o(X_0), we have X_s-1^+ = X_0 + o(1), for all s ∈ 1,⋯, T. Therefore, γ_ss =E( (Z_s-p)(Z_s-p)) =E(Z_s^2)-p^2 =E[(X_s/X_s-1^+)^2] - p^2 =E[E((X_s/X_s-1^+)^2|X_s-1^+)] - p^2 = E[(1/X_s-1^+)^2 E(X_s^2|X_s-1^+)] - p^2 = E[(1/X_s-1^+)^2[X_s-1^+ p (1-p)+(p X_s-1^+)^2]] - p^2 = E[p (1-p)/X_s-1^+] ≤C_1/X_0, for some C_1 >0. This gives us Var(S_T) = ∑_s=1^Tγ_ss≤ T C_1/X_0. Applying the Chebyshev's, for some C_2>0 and 0<δ<1/2, we have P(|Z̅-p|≥ C_2 √(1/X_0)(1/T)^1/2-δ) ≤E[(Z̅-p)^2]/C_2^2 1/X_0 (1/T)^1-2δ=E[(S_T-T p)^2]/T^2C_2^2 1/X_0 (1/T)^1-2δ ≤T C_1/X_0/T^2C_2^2 1/X_0 (1/T)^1-2δ= C_1/C_2^2 T^2δ→ 0. Thus, Z̅ converges to p at the rate of O_p(√(1/X_0) (1/T)^1/2-δ). □ Remark: From Theorem <ref> and Lemma <ref>, we see that by incorporating information about network evolution over time, the estimator Z̅ converges to the persistence probability p faster than Z_1 by a factor of (1/T)^1/2-δ for some small positive δ. Model 2: Now, consider drawing p_ij from a given distribution W, i.e., p_ij∼ W for all i, j. The simplest way to structure this model is to set the persistence probability for edge (i,j) as p_ij, fixing it over time. Alternatively, one could fix the persistence probability of an edge over a time window of T_0 time steps for some T_0>1. Under this model, edge persistence probabilities are regenerated from W anew for each time window. This latter approach may be more realistic in some settings. We will show that in either case the ratio of edges remaining after the first time step from the original network G_0 and the ratio of edges remaining after the first two steps from the original network G_0 can serve as good estimators for the first and second moment of the distribution W, respectively. For simplicity, we only provide the proofs for the estimator of the first moment; the proof for the estimator of the second moment is similar. Recycling the notation from Model 1, we will use Z_1 = X_1/X_0 to estimate the first moment E(W). Lemma <ref> gives us the convergence property of the estimator Z_1. Z_1 is an unbiased and consistent estimator of the first moment E(W) of distribution W, as X_0 →∞. Proof: We first prove that, for any given combination of edge persistence probabilities drawn from W, Z_1 is an unbiased and consistent estimator for the average of the edge persistence probabilities. We have Z_1=X_1/X_0 = ∑_(i,j) ∈ G_0Bernoulli(p_ij)/ X_0, where Bernoulli(p_ij) are independent variables. For simplicity, let us re-enumerate the sequence of independent Bernoulli(p_ij) for all (i,j) ∈ G_0 and denote them as U_1, U_2, ⋯, U_X_0, where U_i ∼Bernoulli(p_i) for i=1,⋯,X_0. Then Z_1 = 1/X_0∑_i=1^X_0U_i. Denote μ_X_0 = 1/X_0∑_i=1^X_0p_i and σ_X_0 = 1/X_0√(∑_i=1^X_0p_i(1-p_i)). Using the Lyapunov central limit theorem for independent non-identical distributions, we see that (Z_1 - μ_X_0)/σ_X_0 will converge to a standard normal distribution if we can verify the Lyapunov condition (<ref>) below: lim_X_0 →∞1/σ_X_0^2+δ∑_i=1^X_0(1/X_0)^2+δE| U_i - p_i|^2+δ→ 0, for some δ > 0. Since 0 ≤ p_i ≤ 1 for all i, ∑_i=1^X_0p_i(1-p_i) is of the same order as ∑_i=1^X_0p_i. 
Therefore, ∑_i=1^X_0p_i(1-p_i) →∞ as X_0 →∞. In addition, we also have E| U_i - p_i|^2+δ = p_i (1 - p_i)^2+δ + (1-p_i)p_i^2+δ ≤ p_i(1-p_i)^2 + (1-p_i)p_i^2 = p_i(1-p_i). Therefore, lim_X_0 →∞1/σ_X_0^2+δ∑_i=1^X_0(1/X_0)^2+δE| U_i - p_i|^2+δ = lim_X_0 →∞1/( ∑_i=1^X_0p_i(1-p_i) ) ^1+δ/2∑_i=1^X_0E| U_i - p_i|^2+δ ≤lim_X_0 →∞∑_i=1^X_0p_i(1-p_i)/( ∑_i=1^X_0p_i(1-p_i) ) ^1+δ/2 = lim_X_0 →∞1/( ∑_i=1^X_0p_i(1-p_i) ) ^δ/2→ 0. So, Z_1 - μ_X_0/σ_X_0 converges to a standard normal distribution, as X_0 →∞. The central limit theorem for random observations generated from distribution W gives us μ_X_0 = E(W) + O_p(√(1/X_0)). Furthermore, we also have that E(Z_1) = E( 1/X_0∑_i=1^X_0p_i) = E(W). Therefore, Z_1 is an unbiased and consistent estimator of the first moment E(W) of distribution W as X_0 →∞. □ Next, we consider how to utilize temporal network information to refine the estimator Z_1. Consider the case where we generate p_ij from distribution W and assign fixed persistence probabilities p_ij for the (i,j) dyad. In this case, Z̅, constructed under Model 1, will not be a good estimator for the first moment E(W) of W. As the network evolves, edges with higher persistence probabilities are more likely to be retained in the network. Therefore, over time, the persistence probabilities of edges remaining in the network will represent a shifted version of the original distribution W. As a result, using the estimator Z̅ will result in a biased estimate of E(W). Figure <ref> shows this phenomenon, where the initial network has 1000 nodes and the edge persistence probabilities are generated from W = Beta(4,1) for the first time steps. After 100 time steps, edge persistence probabilities of edges retained in the network are shifted compared with the original distribution significantly. Next, consider the variant of Model 2 where the edge persistence probabilities are generated anew from W for each time window T_0 and kept fixed for each edge throughout each time window. More specifically, starting with X_0 edges from the set of edges A_0 in the initial graph G_0 at time t=0, we generate persistence probabilities p_ij^(0), i,j = 1,⋯, N from a distribution W and assign the probability p_ij^(0) to the edge (i,j) if the edge appears at time t = 0 and persists to time T_0-1. At time step T_0, persistence probabilities p_ij^(1), i,j = 1,⋯, N are generated anew from W and assigned to the edge (i,j) if the edge appears at time t = T_0 and persists to time 2T_0 - 1. This process continues until we reach the desired number of time steps T. If additional information is available regarding the time window T_0, we can use information from the first k time steps (k < T_0) from each window to obtain a better estimator for the first k moments of distribution W. Denote the set of edges of A_0 that persist to t=1 as A_1. After the rematching process at t=1, the set of edges now becomes A_1^+. Let X_1 denote the number of edges in A_1 and X_1^+ the number of edges in A_1^+ and so on for t = 2,⋯, T. Let ℱ_k^+ denote the σ-algebra generated by all sets of edges up to time k, i.e., ℱ_k^+ = σ{A_1^+, …, A_k^+}. Finally, let Z_k represent the fraction of edges remaining after one time step starting from each time window, i.e., Z_1 = X_1/X_0, Z_2 = X_T_0+1/X_T_0^+, Z_3 = X_2T_0+1/X_2T_0^+, ⋯. Then we can use Z̅ = ∑_k=1^m Z_k/T to estimate the first moment E(W), where m = [T/T_0]. If the periodic time interval T_0 is correctly specified, the proposed estimator Z̅ is an unbiased and consistent estimator for E(W). 
In addition, when T = o(X_0) and T→∞, Z̅ converges to E(W) at the rate of O_p(√(1/X_0)(1/m)^1/2-δ) for some small δ > 0, m = [T/T_0]. Proof: It follows from Lemma <ref> that each Z_k is an unbiased and consistent estimator for the first moment E(W), for k = 1,⋯,m. Then E(Z̅) = E(∑_k=1^m Z_k/m) = E(W). In other words, Z̅ is also an unbiased and consistent estimator for E(W), as X_0, X_T_0, ⋯, X_mT_0→∞. To obtain the convergence rate of Z̅, we first provide the upper bound for E(∑_s=1^m Z_s - m E(W))^2: E(∑_s=1^m Z_s - m E(W))^2 = Var(∑_s=1^m Z_s - m E(W)) = Var( ∑_s=1^m Z_s ) = ∑_s=1^mVar (Z_s) + 2 ∑_s>t; s, t =1^mCov ( Z_s , Z_t ) . For any s>t, we have Cov ( Z_s , Z_t ) =E(X_(s-1)T_0+1/X_(s-1)T_0^+·X_(t-1)T_0+1/X_(t-1)T_0^+) - ( E(W) )^2 =E[E(X_(s-1)T_0+1/X_(s-1)T_0^+·X_(t-1)T_0+1/X_(t-1)T_0^+| ℱ_(s-1)T_0^+)] - ( E(W) )^2 =E[E(X_(s-1)T_0+1/X_(s-1)T_0^+| ℱ_(s-1)T_0^+)] E( X_(t-1)T_0+1/X_(t-1)T_0^+) - ( E(W) )^2 = E(W) E(W) - ( E(W) )^2 = 0. In addition, Var ( Z_s ) = E[(X_(s-1)T_0+1/X_(s-1)T_0^+)^2] - ( E(W) )^2 = E[E((X_(s-1)T_0+1/X_(s-1)T_0^+)^2 | ℱ_(s-1)T_0^+)] - ( E(W) )^2 = E[ ( 1/X_(s-1)T_0^+)^2 E ( X_(s-1)T_0+1^2 | ℱ_(s-1)T_0^+) ] - ( E(W) )^2. Since X_(s-1)T_0+1 is the summation of independent Bernoulli variables Bernoulli(p_ij^(s)), where (i,j)∈A_(s-1)T_0^+, we have E ( X_(s-1)T_0+1^2 | ℱ_(s-1)T_0^+) = Var(X_(s-1)T_0+1|ℱ_(s-1)T_0^+) + [ E (X_(s-1)T_0+1|ℱ_(s-1)T_0^+) ]^2 = ∑_(i,j) ∈A_(s-1)T_0^+ p_ij^(s)(1-p_ij^(s)) + (∑_(i,j) ∈A_(s-1)T_0^+ p_ij^(s))^2 . For T = o(X_0), we have X_(s-1)T_0^+ = X_0 + o(1), for all s = 1, ⋯, m. Therefore, Var ( Z_s ) = E( ∑_(i,j) ∈A_(s-1)T_0^+ p_ij^(s)(1-p_ij^(s))/ X_(s-1)T_0^+ 2) + E((∑_(i,j) ∈A_(s-1)T_0^+ p_ij^(s))^2/X_(s-1)T_0^+ 2) - ( E(W) )^2 ≤ E( ∑_(i,j) ∈A_(s-1)T_0^+ p_ij^(s)/ X_(s-1)T_0^+ 2) + E(Var(W)/X_(s-1)T_0^+) ≤C/X_0, for some C >0. This gives us E(∑_s=1^m Z_s - m E(W))^2≤mC/X_0. Using the same argument as in the proof of Theorem <ref>, Z̅ converges to E(W) at the rate of O_p(√(1/X_0) (1/m)^1/2-δ), for some small δ > 0. □ Model 3: In Model 3, we consider the case where p_ij = p_ip_j, where p_i, i = 1,⋯, N are independently draw from a distribution W. As with Model 2, we consider two model variants. Since the persistence probability p_ij = p_i p_j, E(p_ij) = E(p_ip_j) = E(p_i) E(p_j) = E(W)^2 and E(p_ij^2) = E(p_i^2 p_j^2) = E(p_i^2) E(p_j^2) = E(W^2)^2. Therefore, we can use the same estimation strategy as above to estimate moments of the distribution W. The estimator Z̅ will only be beneficial for the estimation process if we can correctly specify the time window T_0. Using the same arguments as in Lemma <ref> and Theorem <ref>, we obtain the following: The proposed estimator Z_1 is an unbiased and consistent estimator for E(W)^2. Proof: As in Lemma <ref>, for any given combination of edge persistence probabilities generated from distribution W, the standardized version of Z_1 converges to a standard normal distribution. Applying the Central Limit Theorem to the degenerate U-statistics (Theorem 2.1 in <cit.>), we have μ_X_0 = 1/X_0∑_(i,j)∈ G_0 p_i p_j = E(W)^2 + O_p(√(1/N)). Furthermore, we also have that E(Z_1) = 1/X_0∑_(i,j)∈ G_0 p_i p_j = E(W)^2. Therefore, Z_1 is an unbiased and consistent estimator for E(W)^2, as X_0 →∞. □ If the periodic time interval T_0 is correctly specified, the proposed estimator Z̅ is an unbiased and consistent estimator for E(W)^2. In addition, when T = o(X_0) and T→∞, Z̅ converges to E(W)^2 at the rate of O_p(√(1/N) (1/m)^1/2-δ) for some small δ > 0, m = [T/T_0]. 
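Before moving to the proofs and the simulation studies, the moment identities that drive these estimators are easy to verify numerically. The snippet below is an illustrative check, not the released code: only the survival of the initial X_0 edges matters for the one- and two-step ratios, so the stub-rematching step can be ignored, and the sharing of endpoints between edges under Model 3 is neglected in this toy version.
[language=Python, frame=single, breaklines=true]
import numpy as np

rng = np.random.default_rng(1)
X0 = 50_000                                   # edges in G_0
a, b = 1.0, 4.0                               # W = Beta(1, 4)
EW = a / (a + b)                              # E(W)   = 0.2
EW2 = a * (a + 1) / ((a + b) * (a + b + 1))   # E(W^2) = 1/15

# Model 2: edge-level persistence p_ij ~ W
p_edge = rng.beta(a, b, size=X0)
s1 = rng.random(X0) < p_edge                  # survives step 1
s2 = s1 & (rng.random(X0) < p_edge)           # survives steps 1 and 2
print("Model 2:", s1.mean(), "~ E(W) =", round(EW, 4),
      "|", s2.mean(), "~ E(W^2) =", round(EW2, 4))

# Model 3: p_ij = p_i p_j with node-level p_i ~ W
p_edge = rng.beta(a, b, size=X0) * rng.beta(a, b, size=X0)
s1 = rng.random(X0) < p_edge
s2 = s1 & (rng.random(X0) < p_edge)
print("Model 3:", s1.mean(), "~ E(W)^2 =", round(EW**2, 4),
      "|", s2.mean(), "~ E(W^2)^2 =", round(EW2**2, 4))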
Proof: The proof follows by using the same arguments as in Theorem <ref>. § SIMULATION STUDIES In this section, we perform numerical simulations to illustrate the properties of the proposed estimators on finite samples. We consider three different sets of simulation studies, one for each of the three models. The first set of simulations corresponds to Model 1, where all edge persistence probabilities are fixed at p = 0.8. The second set of simulations examines Model 2, where edge persistence probabilities are generated from a Beta(1,4) distribution, kept fixed for a time window of T_0 = 2, and then resampled at the start of a new window. More specifically, persistence probabilities for any given edge (i,j) of the network at any two consecutive time steps 2(k-1) and 2(k-1) + 1 are the same and drawn from Beta(1,4), for k = 1,⋯, [T/2]. Finally, we consider Model 3, where the persistence probability p_ij of a given edge (i,j) is the product of p_i and p_j, where the node-level persistence probabilities are drawn from Beta(1,4), kept fixed for any two consecutive time steps 2(k-1) and 2(k-1) + 1, and resampled at the start of a new window. To understand the effect of network size and number of time steps, we consider three different network sizes N = 10, 100, and 1000. For each N, we allow the network to evolve through T = 30 and T = 100 time steps. The original graph G_0 is first generated via the standard configuration model, where the degree sequence is generated from a Poisson distribution with mean 6. The network then evolves through T time steps with the persistence probabilities corresponding to each of the settings outlined above. We use the proposed estimators Z_1 and Z̅ to estimate the persistence probability p in the first set of simulations and the first moment of the underlying distribution in the second and third simulations. To estimate the second moment of the underlying distribution in the last two simulations, we use the estimators V_1 and V̅, where V_1 denotes the proportion of edges remaining after the first two time steps and V̅ utilizes the average of the V_k when the time window T_0 is specified correctly. To evaluate the proposed estimators' accuracy, we compute the absolute relative bias and the standard deviation of each estimator based on 100 replications. The absolute relative biases of the estimators Z_1 and Z̅ are defined as 1/100∑_i=1^100| Z_1^(i) - p|/p and 1/100∑_i=1^100|Z̅^(i) - p|/p, respectively, for the first set of simulations, where Z_1^(i) and Z̅^(i) correspond to the ith replication. For the second and third simulations, the absolute relative biases are 1/200( ∑_i=1^100| Z_1^(i) - E(W)|/E(W) + ∑_i=1^100| V_1^(i) - E(W^2)|/E(W^2)) and 1/200( ∑_i=1^100|Z̅^(i) - E(W)|/E(W) + ∑_i=1^100|V̅^(i) - E(W^2)|/E(W^2)). We denote the absolute relative bias and its standard deviation as AbsRelBias and SdAbsRelBias, respectively. Table <ref> shows that both Z_1 and Z̅ are good estimators of p, with the estimator Z̅ outperforming Z_1. The estimator Z_1 is a reasonable estimator for p when the network size is greater than or equal to N=100, but does not perform well when the network size is 10. Z̅, on the other hand, is a good estimator of p in all cases. These results suggest that Z̅ is the more reliable estimator for p when working with Model 1, where all edges have a fixed persistence probability p. Tables <ref> and <ref> also demonstrate the consistency of the proposed estimators for large networks.
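A compact sketch of the Model 1 arm of this experiment is given below; it is an independent re-implementation for illustration, not the released simulation code. Since the persistence probability is constant and, as in the analysis above, self-loops and multi-edges created by rematching are neglected, only edge counts need to be tracked: each step draws a Binomial number of surviving edges and the post-rematching count stays at X_0.
[language=Python, frame=single, breaklines=true]
import numpy as np

def one_replication(N=1000, mean_degree=6, p=0.8, T=30, rng=None):
    rng = rng or np.random.default_rng()
    degrees = rng.poisson(mean_degree, size=N)
    X0 = int(degrees.sum() // 2)            # edges in G_0
    Z = []
    for _ in range(T):
        X_t = rng.binomial(X0, p)           # edges persisting this step
        Z.append(X_t / X0)                  # X_{t-1}^+ = X_0 + o(1)
    return Z[0], float(np.mean(Z))          # Z_1 and Z-bar

def abs_rel_bias(estimates, truth):
    return float(np.mean(np.abs(np.asarray(estimates) - truth) / truth))

rng = np.random.default_rng(2024)
Z1, Zbar = zip(*(one_replication(p=0.8, T=30, rng=rng) for _ in range(100)))
print("AbsRelBias of Z_1  :", round(abs_rel_bias(Z1, 0.8), 4))
print("AbsRelBias of Z-bar:", round(abs_rel_bias(Zbar, 0.8), 4))
Rerunning the same loop with N = 10 reproduces the qualitative pattern reported in the tables: the single-step ratio becomes noisy while the time average remains stable.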
As expected, with more information in hand, Z̅ also outperforms the estimator Z_1 as the number of time steps T increases. For a small network size of N=10, Z_1 performs poorly, while the estimator Z̅ improves as the number of time steps increases from T=30 to T=100. As the network size increases, both estimators become more reliable and concentrate around the true value of the underlying generating distribution. § REPRODUCTIVE NUMBER FOR THE SIR PROCESS While the TCM can be leveraged for any spreading process over a network, here we examine the spread of an infectious disease. To model disease spread, we employ compartmental models, a class of models that divides the population into different groups with respect to disease state. These models assign transition rules, allowing individuals to move between different states. Here, we use the susceptible-infectious-recovered (SIR) model, which assumes that individuals obtain perfect immunity once they recover from the disease. Under this model, individual nodes can be in any of three states: susceptible, infectious, and recovered. In the susceptible state, the node has not been infected but could become infected if it came into contact with an infectious node. That is, the node has no immunity to the disease. In the infected state, the node is infectious and can infect others it comes into contact with. Finally, in the recovered state, the node has recovered from the illness and can no longer infect others or be reinfected. With a stochastic model, nodes move through states probabilistically. Due to the discrete nature of our data, we utilize a discrete-time approach wherein events are defined by transition probabilities per unit time, as opposed to the transition rates used in a continuous-time framework. A contact between an infectious and a susceptible node will result in a transmission with probability β. Similarly, at any given time step an infectious node will recover with probability γ. For simplicity, we assume that at each time step, each edge has persistence probability p. Let R_0 denote the average number of transmissions from the initially infected node and R_* denote the average number of transmissions in the early stages of the epidemic excluding those from the initial infection. For both a static and a dynamic network, R_0 = τ∑_k kp_k, where τ is the transmission probability for a link between an infected node and a susceptible node and p_k is the probability that a node has degree k in the configuration model. For a static network, R_* = τ∑_k q_k(k-1), where q_k is the excess degree distribution of a node with degree k, i.e., q_k = kp_k/∑_k kp_k. <cit.> Before further discussion of the reproductive number, we recall an important tool in studying disease spread on a network: the probability generating function (PGF). Suppose the PGF for the degree distribution of the initial configuration model network is g(x) = ∑_k p_k x^k, where p_k is the probability that a randomly chosen node has degree k. The PGF of the excess degree distribution is g_1(x) = ∑_k q_k x^k-1, where q_k = kp_k/∑_k k p_k. Thus, we have the following relations: g'(1) = ∑_k kp_k, g”(1) = ∑_k k(k-1) p_k, and g'_1(1) = ∑_k (k-1) q_k = ∑_k (k-1) kp_k / ∑_k k p_k = g”(1)/g'(1). We now examine the reproductive number at the early stage of the SIR spreading process on the temporal configuration model.
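Before stating the analytical result, the quantity of interest can also be approached by direct simulation. The sketch below is an independent illustration with our own function names: it runs a discrete-time SIR process on a TCM with constant persistence p and records the number of secondary cases generated by early-generation infections. For the parameter values used here, the closed-form expression derived in the proposition that follows evaluates to roughly 2.7, and the Monte Carlo average should be broadly comparable on large networks when the epidemic takes off.
[language=Python, frame=single, breaklines=true]
import numpy as np
import networkx as nx

def tcm_step(G, p, rng):
    """One TCM update with constant edge persistence p (self-loops dropped)."""
    kept, stubs = [], []
    for i, j in G.edges():
        if rng.random() < p:
            kept.append((i, j))
        else:
            stubs.extend([i, j])
    rng.shuffle(stubs)
    rewired = [(stubs[k], stubs[k + 1]) for k in range(0, len(stubs) - 1, 2)
               if stubs[k] != stubs[k + 1]]
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(kept + rewired)
    return H

def sir_on_tcm(N=2000, mean_degree=6, p=0.8, beta=0.1, gamma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    deg = rng.poisson(mean_degree, size=N)
    deg[0] += deg.sum() % 2
    G = nx.Graph(nx.configuration_model(list(deg)))
    G.remove_edges_from(nx.selfloop_edges(G))
    state = np.zeros(N, dtype=int)                 # 0 = S, 1 = I, 2 = R
    source = int(rng.integers(N)); state[source] = 1
    caused, generation = np.zeros(N), {source: 0}
    while (state == 1).any():
        infectious = np.flatnonzero(state == 1)
        attempts = [(i, j) for i in infectious for j in G.neighbors(i)
                    if state[j] == 0 and rng.random() < beta]
        for i, j in attempts:
            if state[j] == 0:                      # first successful exposure wins
                state[j] = 1
                caused[i] += 1
                generation[j] = generation[i] + 1
        state[infectious[rng.random(len(infectious)) < gamma]] = 2   # recoveries
        G = tcm_step(G, p, rng)                    # network evolves each step
    early = [caused[v] for v, g in generation.items() if 1 <= g <= 3]
    return float(np.mean(early)) if early else float("nan")

print("mean secondary cases, generations 1-3:", sir_on_tcm())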
The reproductive number at the early stage of a discrete time SIR spreading process on the temporal network is given by R_* =τ [ (1-γ)(1-p)/γ + (1 - p + γ p ) g”(1) /( γ g'(1) )], where τ = β/1- p(1-β)(1- γ). Proof: Following the idea of Volz and Meyers <cit.>, we derive the reproductive number for the process through four steps as follows: * Find the probability of transmission τ for each connection. * Find the probability generating function (PGF) of the number of contacts M for a node with degree k. * Find the PGF H̃_1(x) of the number of contacts of the selected node proportional to the concurrent degree, but subtracting one transitory contact. * Find the PGF H_1(x) of the number of transmissions caused by the selected node proportional to the concurrent degree, but subtracting one transitory contact. Then the reproductive number R_* at the early stage of the epidemic is the derivative of H_1(x) evaluated at x =1. We first find the probability of transmission τ for each connection. Notice that for a given edge, the time X until the edge has broken follows a geometric distribution, where X ∼ Geo(1-p). Similarly, the time Y until an infected node transmits to a susceptible neighbor satisfies Y ∼ Geo(β), and the time Z until an infected node recovers satisfies Z ∼ Geo(γ). For an infected-susceptible connection, the overlap time U while the infected node is still infectious and the edge is still connected is U ∼min(X,Z). Notice that X and Z are independent, therefore U ∼Geo(1- p (1-γ)) by the properties of a geometric distribution. So, the probability that the infected node transmits the disease to the susceptible in at most k steps is P(Y ≤ k-1) = 1 - (1-β)^k. Therefore, the transmission probability τ during the overlap time is τ = E(1- (1-β)^U)= ∑_k≥ 1 P(U=k)[1 - (1-β)^k] = ∑_k ≥ 1[1 - (1-β)^k] [p(1- γ)]^k-1 [1-p(1-γ)] = [1-p(1-γ)] ∑_k ≥ 1 [p(1- γ)]^k-1 - [1-p(1-γ)](1-β) ∑_k ≥ 1 [p(1-β)(1- γ)]^k-1 = [1-p(1-γ)] 1/1 - p(1- γ) - [1-p(1-γ)](1-β) 1/1- p(1-β)(1- γ) = 1 - [1-p(1-γ)](1-β)/1- p(1-β)(1- γ)= β/1- p(1-β)(1- γ). We then find the PGF of number of contacts M for an infected node with degree k during time period ℓ. For simplicity, we first construct the PGF for M corresponding to nodes with degree 1. For a node of degree 1, the number of edges swapped during ℓ time steps follows Binomial(ℓ,(1-p)). Therefore, the probability that the number of contacts for a node of degree 1 is equal to m during the period of ℓ steps is ℓ m-1 (1-p)^m-1p^ℓ-m+1. Thus, for a node with degree 1, the PGF of the number of contacts M is ∑_m≥ 1ℓ m-1 (1-p)^m-1p^ℓ-m+1 x^m = x ∑_m≥ 1ℓ m-1 [x(1-p)]^m-1p^ℓ-m+1 = x [x(1-p) + p]^ℓ. Therefore, for a node with degree k, the PGF for the number of contacts M is {x [x(1-p) + p]^ℓ}^k. Next, we provide the PGF H̃_1(x) of the number of transitory contacts of a selected node proportional to the concurrent degree, but subtracting one transitory contact. Construction of H̃_1(x) includes the following three steps: (a) Sum over the infectious period ℓ. (b) Sum over all PGFs of all possible degree k while weighting for the excess degree, q_k = k p_k/(∑ k p_k). (c) Add an extra term, denoted as f_ℓ(x), corresponding to the PGF that accounts for the concurrent partnership of the infected node being considered, but that excludes the contact that infected the node being considered. Following the same argument as for the PGF of number of contacts for a node with degree 1, we have f_ℓ(x) = ∑_m≥ 1ℓ m-1 (1-p)^m-1p^ℓ-m+1 x^m-1 = ∑_m≥ 1ℓ m-1 [x(1-p)]^m-1p^ℓ-m+1 = [x(1-p) + p]^ℓ. 
Notice that the probability of the infectious period ℓ is equal to the probability that the infected node recovers at time ℓ+1. Therefore, we have H̃_1(x) = ∑_ℓ≥ 0 Pr(Z = ℓ+1) [f_ℓ(x) + ∑ q_k (x f_ℓ(x))^k-1] = ∑_ℓ≥ 0 (1-γ)^ℓγ{(x(1-p) + p)^ℓ + ∑_k≥ 1 q_k [x (x(1-p) + p)^ℓ]^k-1} = ∑_ℓ≥ 0 (1-γ)^ℓγ( x(1-p) + p )^ℓ + ∑_ℓ≥ 0 (1-γ)^ℓγ∑_k≥ 1 q_k [x (x(1-p) + p)^ℓ]^k-1 = γ∑_ℓ≥ 0 [(1-γ)(x(1-p) + p)]^ℓ + γ∑_k≥ 1 q_k x^k-1∑_ℓ≥ 0[ (1-γ) ( x(1-p) + p)^k-1]^ℓ =γ /[1- (1-γ) ( x(1-p) + p) ]+ γ∑_k≥ 1 q_k x^k-1/[1 - (1-γ) ( x(1-p) + p)^k-1] where the summation exists as long as x < (1/(1-γ)^1/(k-1)-p)/(1-p) for all k ≥ 2. Therefore, H̃'_1(1) = γ (1-γ)(1-p)/γ^2 + γ∑_k≥ 1( γ q_k (k-1) + q_k (k-1) (1-γ) (1-p) )/γ^2 = (1-γ)(1-p)/γ + (1 - p + γ p ) ∑_k≥ 1 q_k (k-1)/γ, or H̃'_1(1) = (1-γ)(1-p)/γ + (1 - p + γ p ) g”(1) /( γ g'(1) ). Finally, we find the PGF H_1(x) of the number of transmissions caused by the selected node proportional to the concurrent degree, but subtracting one transitory contact. The PGF of number of transmissions corresponding to k transitory contacts is (1- τ + τ x )^k. Since the PGF of the number of transmissions is the summation of all possible transitory contacts, H_1(x) = ∑ P(#transitory = k) (1- τ + τ x )^k = H̃_1(1- τ + τ x ). Then we have, R_* = (dH_1(x)/dx)_x=1 = (d H̃_1(1- τ + τ x )/dx)_x=1 = τH̃'(1). Applying (<ref>) and (<ref>), the reproductive number R_* is R_* =τ [ (1-γ)(1-p)/γ + (1 - p + γ p ) g”(1) /( γ g'(1) )], where τ = β/1- p(1-β)(1- γ). Proposition <ref> is thus proved. □ Next, we examine these results under specific degree distributions. * When the degree distribution follows a Poisson distribution with mean λ, p_k = λ^k exp(-k)/k!, its corresponding PGF is g(x) = exp(λ(x-1)). Then g'(1) = λ and g”(1) = λ^2. Therefore, R_* = τ [ (1-γ)(1-p)/γ + (1 - p + γ p ) λ / γ ]. For p= 1, this gives us R_* = τλ, which agrees with the results derived in <cit.>. * When the degree distribution satisfies p_k = 1, if k = M for some 1 ≤ M ≤ N where N is the network size, then the PGF is given by g(x) = x^M. So g”(1)/g'(1) = M-1, and therefore R_* = τ [ (1-γ)(1-p)/γ + (1 - p + γ p ) (M-1) / γ ]. Notice that if k=N, we have a fully connected network. Under this scenario, the reproductive number is given by R_* = τ [ (1-γ)(1-p)/γ + (1 - p + γ p ) (N-1) / γ]. For p=1, this reduces to R_* = τ (N-1). Additionally, when the temporal network is static over time, p = 1. The reproductive number at the early stages is then R_* = τ g”(1)/g'(1), which agrees with the R_* value for a static network introduced above. When the temporal network evolves as independent draws from a configuration model, then p=0. Under this scenario, the reproductive number at the early stage is R_* = τ [ (1-γ)/γ + g”(1) /( γ g'(1) )] = β [ (1-γ)/γ + g”(1) /( γ g'(1) )]. § DATA ANALYSIS In this section, we show how to use the three proposed TCMs to fit empirical network data and examine the fit of the generated temporal networks. The empirical data in this study was collected by the Copenhagen Network Study data and is publicly available. <cit.> This data set contains information about the connectivity patterns of 706 students at the Technical University of Denmark over 28 days in February 2014. During the study period, participants agreed to use loaner cell phones from researchers as their primary phones. The proximity patterns were collected using Bluetooth where the approximate pairwise distances of phones were obtained using the received signal strength indicator (RSSI) every five minutes. 
We assigned a connection between two persons if there was at least one strong Bluetooth ping with RSSI ≥-75dBm <cit.>. Because the empirical data reflects student proximity patterns, they fluctuate significantly on a daily basis. Students were more likely to connect during the week if they were in the same class, but on weekends, their proximity patterns were likely driven by their personal contact networks. Our models cannot capture the weekday-weekend variability because they require a comparable number of edges at each time instance. Therefore, we further processed the daily network data by combining each of the seven daily networks into a single weekly network. In particular, we established a weekly network by taking the union of the daily networks of each weekly batch of 7 daily networks. As a result, the period of 28 days yielded 4 weekly networks, denoted as { G_1, G_2, G_3, G_4}. Figure <ref> shows their degree distributions. To fit the TCM models, we used G_1 as the starting point and counted the number of edges in G_1 that were retained in G_2 and G_3. For Model 1, we calculated the fixed edge persistence rate as the ratio of the number of edges in G_1 that persisted into G_2. For Models 2 and 3, we assumed that the edge-level and node-level persistence rates were generated from Beta distributions. The first and second moments of these Beta distributions were estimated by the ratios of the number of edges in G_1 that persisted into G_2 and G_3, respectively. Using the estimated first two moments, we determined the shape and scale parameters of the Beta distributions. The empirical weekly data gave us the estimate p̂ = 0.476 for Model 1; the Beta distributions for Model 2 and Model 3 were estimated as Beta(0.975, 1.074) and Beta(1.171, 1.562), respectively. With the estimated model parameters, we used G_1 as the initial network and used the TCM models to obtain predicted networks Ĝ_2^(M_k), Ĝ_3^(M_k),Ĝ_4^(M_k) for the empirical networks G_2, G_3, G_4, respectively. Here k = 1, 2, 3 represents Model 1, Model 2, and Model 3, respectively. In addition, as a benchmark, we employed the naive temporal configuration model, in which all edges were broken and rewired randomly at each time step. We refer to this configuration model as Model 0. To assess the performance of the TCM models, we compared the distances between the degree distributions of networks generated with Model 0, Model 1, Model 2, and Model 3 with the empirical networks G_2, G_3, G_4. We used the total variance distance, D = 1/2∑_i=1^k | p_i - q_i |, and the Hellinger distance, H = 1/√(2)√(∑_i=1^k (√(p_i) - √(q_i))^2), where P = (p_1,⋯,p_k) and Q = (q_1,⋯,q_k) are discrete distributions. We denote the distances between the predicted networks Ĝ_2^(M_k), Ĝ_3^(M_k), Ĝ_4^(M_k) and the corresponding empirical networks G_2, G_3, G_4 as D_2^(M_k), D_3^(M_k), D_4^(M_k) and H_2^(M_k), H_3^(M_k), H_4^(M_k), respectively. We then computed the average distances D̅^(M_k) = 1/3(D_2^(M_k) + D_3^(M_k) + D_4^(M_k)) and H̅^(M_k) = 1/3(H_2^(M_k) + H_3^(M_k) + H_4^(M_k)) to assess the performance of the models. To make a fair assessment, we ran the simulation 100 times, calculated the means and standard deviations of D̅^(M_k) and H̅^(M_k) based on these 100 runs, and used them as final goodness-of-fit metrics to assess the performance of the models. In particular, for run i, we calculated the average distances D̅^(M_k,i) and H̅^(M_k,i); the final metrics for comparing the four models are the means and standard deviations of these metrics. 
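Two small utilities make this procedure concrete: recovering Beta shape parameters from the estimated first two moments, and computing the two distribution distances. The snippet below is an illustrative sketch rather than the released analysis code, and the numbers fed to it are placeholders, not the estimates reported above.
[language=Python, frame=single, breaklines=true]
import numpy as np

def beta_from_moments(m1, m2):
    """Shape parameters (a, b) of a Beta law with E(W) = m1 and E(W^2) = m2."""
    var = m2 - m1 ** 2
    c = m1 * (1.0 - m1) / var - 1.0
    return m1 * c, (1.0 - m1) * c

def degree_pmf(degrees, kmax):
    counts = np.bincount(np.asarray(degrees), minlength=kmax + 1)[: kmax + 1]
    return counts / counts.sum()

def total_variation(p, q):
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Placeholder moment estimates (ratios of edges persisting one and two weeks)
print("fitted Beta(a, b):", beta_from_moments(0.48, 0.31))

# Placeholder degree sequences standing in for an empirical and a predicted network
rng = np.random.default_rng(0)
deg_emp, deg_sim = rng.poisson(6, 500), rng.poisson(6, 500)
kmax = int(max(deg_emp.max(), deg_sim.max()))
p_emp, p_sim = degree_pmf(deg_emp, kmax), degree_pmf(deg_sim, kmax)
print("TV distance:", round(total_variation(p_emp, p_sim), 3),
      "| Hellinger distance:", round(hellinger(p_emp, p_sim), 3))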
Table <ref> demonstrates that Model 2 and Model 3 yield the smallest distances and therefore fit the empirical data best. Model 1 performed worse than Models 2 and 3, and Model 0 had the worst performance. The results demonstrate that model fit generally improves from Model 1 to Model 2 to Model 3. The improvement in model fit was to be expected because Model 0 uses data about the initial network only; Model 1 requires additional knowledge of the number of edges that remain in the initial network G_1 after one time step, and Models 2 and 3 require knowledge of the number of edges that remain after one and two time steps. In this example, which should be regarded as being primarily for illustrative purposes, Models 2 and 3 have similar performance. In general, we anticipate Model 3 to provide a potentially better fit in settings where the behavior of individuals is driven less by external factors, such as class schedule here, and instead depends more on the individual choices of the actors. § DISCUSSION AND CONCLUSION We introduced the Temporal Configuration Model (TCM), a family of generative models for a temporal network, as well as approaches for estimating model parameters. The proposed generative model is simple and flexible. The modeling framework allows the edge persistence probability to be fixed, generated from a distribution, or constructed from the corresponding node-level persistence probabilities. We demonstrated how to use the modeling framework by applying it to data from the Copenhagen Network Study with promising results. Therefore, the generative model can be adapted to fit a variety of real-world settings. We proposed consistent estimators for the model parameters and provided the convergence rate of the proposed estimators. We found that only using information from the first one or two time steps can produce a good estimator for model parameters. We additionally showed that, when the persistence probability is constant across edges, using the network evolution process can give us a more precise estimator with a faster convergence rate. However, when persistence probabilities are generated at random or are the product of the node-level persistence probabilities, care must be taken when using an estimator that relies on averages over the entire course of the network's evolution. Instead, it might be advantageous to consider alternative modeling strategies, such as those in which persistence probabilities are periodically updated. If additional information on this time interval is available, then a design that incorporates the network evolution process will give us a finer estimator with an accelerated convergence rate. In addition, we investigated an SIR spreading model over the temporal network under the scenario where edge persistence probability is constant. We provided an explicit formula for the epidemic's reproductive number at the early stage of the pandemic for this modeling scenario. Further research directions might involve examining a good estimator for the periodic information in Model 2 and Model 3, extending the temporal model to include network size growth information, and investigating various epidemic characteristics of spreading processes on the temporal network of the three models. All of these potential developments, as well as the broad range of applications for temporal networks, offer an exciting future for the study of time-varying networks. 
<cit.> With technological advances, particularly Bluetooth technology, network data is becoming more accessible than ever. How we can appropriately employ these new datasets for public health insights is an interesting subject that requires further research. We believe that statistical techniques with robust theoretical underpinnings are the foundation for leveraging the vast amounts of data available for the common good. Acknowledgement The authors thank Dr. Marc Lipsitch and Dr. Jeff Miller at Harvard Chan School of Public Health for their insightful feedback. Contributions J.P.0., T.M.L., and H.H. designed the research; T.M.L. and H.H. performed the research; T.M.L., H.H., and J.P.O. wrote and edited the paper. J.P.O supervised the research. T.M.L. and H.H. are shared first co-authors. Change of Institution T.M.L. started the project at Harvard Chan School of Public Health, Harvard University, Boston, Massachusetts, U.S.A. Funding Statement H.H. was supported by a Harvard University Department of Biostatistics scholarship and a U.S. Government scholarship. T.M.L and J.P.O. were supported by a National Institutes of Health award (NIAID R01 AI138901). The funding sources had no role in study design, data analysis, data interpretation, or the writing of the paper. Data and materials availability The empirical network data is publicly available at The Copenhagen Networks Study interaction data. <cit.> Python code used in this study is publicly available at https://github.com/onnela-lab/temporal-configuration-model. Competing Interests None. unsrt 1 newman2018networks Newman, Mark (2018). Networks. Oxford University Press. roland20review Roland, Molontay and Marcell, Nagy (2020). Two Decades of Network Science as seen through the co-authorship network of network scientists ArXiv. delva2016connectdot Wim, Delvaa, Gabriel, E. Leventhal and Stéphane Helleringer (2016). Connecting the dots: network data and models in HIV epidemiology AIDS. brea2018ego Brea, L. Perry, Bernice, A. Pescosolido and Stephen, P. Borgatti (2018). Egocentric Network Analysis. Cambridge University Press. holme2012temporal Holme, Petter and Saramäki, Jari (2012). Temporal networks. Physics reports. holme15reviewTCM Holme, Peter (2015). Modern temporal network theory: A colloquium. ArXiv. holme21maptemporal Holme, Peter and Jari, Saramäki (2021). A map of approaches to temporal networks. ArXiv. holme13optimalstatic Holme, Peter (2013). Epidemiologically optimal static networks from temporal network data. PLoS Comput. Biol.. gage2020reviewTCM Gage, Jordan, Samuel, Winer and Taban, Salem (2020). The current status of temporal network analysis for clinical science: Considerations as the paradigm shifts? Journal of Clinical Psychology. mehdi23reviewTCM Mohammad, Mehdi Hosseinzadeh, Mario, Cannataro, Pietro, Hiram Guzzi and Riccardo, Dondi (2023). Temporal networks in biology and medicine: a survey on models, algorithms, and tools. Network Modeling Analysis in Health Informatics and Bioinformatics. hambridge2021 Hambridge, Hali L, Kahn, Rebecca and Onnela, Jukka-Pekka (2021). Examining sars-cov-2 interventions in residential colleges using an empirical network. International Journal of Infectious Diseases. wu22temporalcovid Mincheng, Wu, Chao, Li, Zhangchong, Shen and others (2022). Use of temporal contact graphs to understand the evolution of COVID-19 through contact tracing data. Communication Physics. vanhems2013estimating Vanhems, Philippe, Barrat, Alain, Cattuto, Ciro and others (2013). 
Estimating potential infection transmission routes in hospital wards using wearable proximity sensors. PloS one. sapiezynski2019interaction Sapiezynski, Piotr, Stopczynski, Arkadiusz, Lassen, David Dreyer and Lehmann, Sune (2019). Interaction data from the copenhagen networks study. Scientific Data. neto2021combining Neto, Onicio Leal, Haenni, Simon, Phuka, John and others (2021). Combining wearable devices and mobile surveys to study child and youth development in Malawi: implementation study of a multimodal approach. JMIR Public Health and Surveillance. perra12activitydriventemporal N. Perra, B. Gonçalves, R. Pastor-Satorras and A. Vespignani (2012). Activity driven modeling of time varying networks. Scientific Reports. vestergaard14linknodetemporal Vestergaard, Christian L., Génois, Mathieu and Barrat, Alain (2014). How memory generates heterogeneous dynamics in temporal networks. Physical Review. tiago2017modelcommunity Tiago, P. Peixoto and Martin, Rosvall (2017). Modelling sequences and temporal networks with dynamic community structures. Nature Communication. zhang2017randomdynamic Xiao, Zhang and Cristopher, Moore and M. E. J. Newman (2017). Random graph models for dynamic networks. The European Physical Journal B. Bailey18CM Bailey, K. Fosdick and Daniel, B. Larremore and Joel, Nishimura and Johan, Ugander (2018). Configuring random graph models with fixed degree sequences. SIAM Review. Alex2022 Alexander, D. and Davy, P. (2022). On the consistency of incomplete U-statistics under infinite second-order moments. Statistics and Probability Letters. volz2009 Volz, E. and Meyers, L.A. (2009). Epidemic thresholds in dynamic contact networks. Journal of the Royal Society Interface.
http://arxiv.org/abs/2407.12373v1
20240717075222
AlphaPEM: an open-source dynamic 1D physics-based PEM fuel cell model for embedded applications
[ "Raphaël Gass", "Zhongliang Li", "Samir Jemeï", "Rachid Outbib", "Daniel Hissel" ]
eess.SY
[ "eess.SY", "cs.SY" ]
FEMTO,LIS]Raphaël Gass mycorrespondingauthor raphael.gass@femto-st.fr [mycorrespondingauthor]Corresponding author at: Université de Franche-Comté, UTBM, CNRS, institut FEMTO-ST, FCLAB, Belfort, France. FEMTO]Zhongliang Li zhongliang.li@univ-fcomte.fr LIS]Rachid Outbib FEMTO]Samir Jemei FEMTO,Institut]Daniel Hissel [FEMTO]Université de Franche-Comté, UTBM, CNRS, institut FEMTO-ST, FCLAB, Belfort, France [LIS]Aix Marseille Univ, CNRS, LIS, Marseille, France [Institut]Institut Universitaire de France § ABSTRACT The urgency of the energy transition requires improving the performance and longevity of hydrogen technologies. AlphaPEM is a dynamic one-dimensional (1D) physics-based PEM fuel cell system simulator, programmed in Python and experimentally validated. It offers a good balance between accuracy and execution speed. The modular architecture allows for addition of new features, and it has a user-friendly graphical interface. An automatic calibration method is proposed to match the model to the studied machine. The software provides information on the internal states of the system in response to any current density and can produce polarization and EIS curves. AlphaPEM facilitates the use of a model in embedded conditions, allowing real-time modification of the fuel cell's operating conditions. AlphaPEM Proton exchange membrane fuel cell (PEMFC) Hydrogen Modelling Control-command Automatic calibration Open-source § METADATA § MOTIVATION AND SIGNIFICANCE The use of physics-based software simulating PEM fuel cells allows the description of its internal states where sensors cannot be placed, such as the concentration of hydrogen within the catalytic layer of each cell, or the amount of liquid water present in the gas diffusion layer. This information is valuable because fuel cells are complex and difficult machines to operate, and the information provided by sensors external to the cells do not allow for the precise control of the internal states of fuel cells. Therefore, the use of a model is essential for increasing the accuracy of real-time observation of internal physical states and for deploying specific control based on these observations to enhance the efficiency, power density, and lifetime of fuel cells. In the current literature, a lack of physics-based PEM fuel cell models that are open to the community is observed. While some commercial software such as COMSOL Multiphysics®<cit.>, Ansys Fluent®<cit.>, or Wolfram Mathematica®<cit.> allow such modeling, they are not open-source, require expensive licenses and offer limited possibilities for source code modification. The open-source publication of software is, however, a valuable aid to the community, as it not only prevents each research team from having to develop their own simulator from scratch, which is time-consuming, but also improves each software by subjecting it to international critique and allowing for collaborative development, thus enhancing and accelerating research. In this context, a research team from the Institute of Energy and Climate Research, IEK-3, has developed openFuelCell2, an open-source computational fluid dynamics toolbox for simulating fuel cells, based on the open-source library OpenFOAM®<cit.>. However, all these approaches yield very precise models which are computationally expensive. They are incompatible with embedded applications, which is the objective of this work. 
To the authors' knowledge, only two research teams have published PEM fuel cell models for control-command applications as open-source software, both programmed in Matlab. First, Pukrushpan et al. released a 0D dynamic and isothermal model of the fuel cell system in 2004, which includes the auxiliaries and requires very little computational power <cit.>. The aim of this pioneering model was to be used in embedded applications while considering the dynamics of the auxiliaries. However, a physical model that does not account for spatial variations within each component of the fuel cell system is not sufficient to precisely diagnose the internal states and support control to optimize these states. Nevertheless, this software has paved the way for future, more detailed models. More recently, in 2019, Vetter et al. published a simple and compact software simulating the fuel cell in one-dimensional (1D) steady-state, non-isothermal conditions with two phases of water <cit.>. Although the inclusion of one spatial dimension increases the model's accuracy, the lack of dynamic modeling and consideration of the auxiliaries makes this software incomplete for real-time use in embedded applications. However, it is important to note that this software is primarily intended as a simulation base for more complex PEM fuel cell models, making it valuable for the community. This paper introduces AlphaPEM, the first open-source, isothermal, two-phase, 1D dynamic model for PEM fuel cell systems. It is designed for real-time model-based diagnosis and control implementation within embedded systems, balancing precision and execution speed. It simulates the dynamic evolution of internal states of the fuel cell, its auxiliaries, and the resultant voltage based on the operating conditions and imposed current density. This software package is written in Python for its readability and ease of writing. It is deployed in open-source with GNU General Public License v3.0 <cit.>. It is based on the authors' previous works, including a critical review of the physics at stake <cit.> and an experimentally validated formulation for numerical resolution <cit.>. The modular design of the code allows for easy addition of new features, such as incorporating heat transfer within the fuel cell. Despite the complex physics involved, the code is well-written following the informatics standards <cit.> and documented to facilitate its uptake and continuous improvement by the community. AlphaPEM is implemented as a Python class to ease its open-source distribution, leveraging SciPy's classical solver for ordinary differential equations (ODEs). The finite difference problem is solved using SciPy's `solve_ivp` function <cit.>, employing the implicit 'BDF' method due to the stiff nature of the problem arising from nonlinearities and coupled variables in the ODE system. The AlphaPEM software package simulates the internal state dynamics of fuel cells in multiple static and dynamic processes, such as those involving step current densities, polarization curves, or electrochemical impedance spectroscopy (EIS) curves. It can also adapt to any current density input. The package includes databases from various real fuel cells <cit.> to facilitate its adoption and allows users to freely insert characteristics of other fuel cells. An automated program for calibrating undetermined parameters is included in AlphaPEM. 
These parameters are calibrated using the genetic algorithm 'geneticalgorithm2' <cit.>, a maintained fork of the widely-used open-source Python program 'geneticalgorithm' <cit.>. A graphical user interface is also included to facilitate initial use before delving into the code. Finally, AlphaPEM can be used to compare similar models or assist in the calibration of more complex models. § SOFTWARE DESCRIPTION AlphaPEM is an open-source framework for PEM fuel cell systems modelling, programmed in Python. It is designed for the control and command of embedded systems. Its results reveal the dynamics of the cell internal states and voltage, as well as the balance of plant dynamics, which are vital information for fuel cell management systems. To use AlphaPEM, it is necessary to install a certain number of packages beforehand. [language=Python, numbers=left, frame=single, breaklines=true, tabsize=1] python3 -m venv env # creation of a new python environment source env/bin/activate # activation of the environment pip install numpy scipy matplotlib colorama PyQt5 PySide2 geneticalgorithm2[full] ttkthemes # required packages §.§ Software architecture The software architecture of AlphaPEM consists of 5 directories, each containing several Python files. The root of the software package contains the 'main.py' and 'GUI.py' files. These two files must be run to operate the simulator, as they control the entire software. The 'main.py' file is used for the standard operation of AlphaPEM. The 'GUI.py' file, optional, provides a graphical user interface (GUI) of AlphaPEM to facilitate its use without delving into the program's details. All basic functionalities are included without requiring any modifications to other files. However, this interface lacks the flexibility offered by 'main.py', which allows the modification of AlphaPEM's behavior as desired. Additionally, it does not allow for the calibration of undetermined parameters. The program's results are saved in the '/results' directory. The directory '/configuration' contains the files 'settings.py' and 'current_densities.py'. The file 'settings.py' includes both the physical parameters of the model, encompassing the characteristics of the studied cell, and the computing parameters, such as the maximum spatial step of the solver. It is used in most programs of the package. The file 'current_densities.py' contains the temporal evolution of the current densities to be imposed on the simulator. Next, the directory '/model' contains all the Python files related to the model's physics, such as 'dif_eq.py', which includes the system of differential equations to be solved. The file 'AlphaPEM.py' contains a class of the same name that represents PEM fuel cell simulators. An object of the AlphaPEM class takes as arguments the set of parameters defining a given fuel cell system, its operating conditions, the imposed current density, and the computing parameters. It returns the evolution of the voltage and all internal states over time. A 'control.py' file is also present, which contains the instructions for dynamically controlling the operating conditions of the fuel cell using the information provided by the model. The '/modules' directory contains all the Python files that serve as modules for other files. Indeed, to improve the readability of the previous programs, some of the less essential instructions have been written as separate functions and placed in these module files. 
Each of these module files is named after the file it supports. For example, 'flows_modules.py' is used in 'flow.py'. Additionally, a file named 'transitory_functions.py' is present in this directory and is used in most other programs in the package. It contains a set of mathematical functions that have physical significance for the model, such as the saturation pressure of water vapor. Finally, the directory '/calibration' contains all the information necessary for calibrating the undetermined parameters of the model. The file 'parameter_calibration.py' includes the program for performing the calibration, the file 'experimental_values' contains the experimental information of the fuel cell system that the simulator must represent, the file 'run.sh' contains the instructions to send to the computing cluster to perform the calibration, and the directory '/calibration/results' contains the calibration results. Figure <ref> represents the structure of AlphaPEM, highlighting the dependencies between the Python files. Each box represents a Python file, with an associated number indicating its location within the software package. An arrow from file A to file B indicates that information from file A is imported into file B. The colors associated with certain boxes and their outgoing arrows improve readability and specifically indicate where these files are imported, which is necessary given the program's complex overall structure. Boxes left in black indicate that there is no ambiguity about the destination of their arrows. To further enhance readability, the arrows conventionally point from bottom to top or are horizontal. Thus, the files most frequently used by other parts of the program are located towards the bottom of the diagram, while the files executed by the user to start the program are at the top. §.§ Software functionalities The usage of the software package AlphaPEM is illustrated by the graphical user interface present in the file '/GUI.py' and displayed in figure <ref>. All the features offered by this GUI are accessible through the files '/main.py' and '/configuration/settings.py'. A fuel cell is characterized by the operating conditions under which it is run, its accessible physical parameters (i.e., its dimensions), and its undetermined physical parameters (such as the tortuosity of the GDL). All these parameters can be adjusted by the user, and predefined configurations based on existing cells can be selected in the 'Fuel cell:' dropdown menu. Other adjustable parameters exist but are hidden in the GUI to avoid overloading the display. On the one hand, the current density parameters allow for the adjustment of the shape of the step current density, or of the current density required to create polarization or EIS curves. On the other hand, the computing parameters enable modification of numerical settings, such as the number of points of the numerical model placed in the gas diffusion layer, or the purge times of the stack. Next, the user can select different simulation options: the configuration of the auxiliaries of the studied fuel cell system, the presence or absence of control over the operating conditions, the presence or absence of an anode purge, whether results are displayed in a synthetic or detailed format, and whether results are displayed at the end of the simulation or updated regularly during the calculation. A self-contained mock-up of the corresponding programmatic workflow is sketched below.
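The same workflow can be driven programmatically from 'main.py'. The snippet below is a deliberately simplified, hypothetical mock-up of that calling pattern (a simulator object built from operating conditions and an imposed current density, returning the time series of the voltage); it does not reproduce the actual AlphaPEM class signature or its physics, and only illustrates the structure described in this section.
[language=Python, frame=single, breaklines=true, tabsize=1]
import numpy as np
from scipy.integrate import solve_ivp

class ToyFuelCellSimulator:
    """Self-contained mock-up of the calling pattern described in the text.

    The real AlphaPEM class takes the full set of physical and computing
    parameters; here only a few illustrative ones are kept and the physics is
    reduced to a first-order voltage relaxation, so the numbers carry no
    physical meaning.
    """

    def __init__(self, operating_conditions, current_density, t_span):
        self.oc = operating_conditions      # e.g. open-circuit voltage, resistance
        self.i_fc = current_density         # callable t -> imposed current density
        self.t_span = t_span

    def _rhs(self, t, y):
        u = y[0]
        u_target = self.oc["E_oc"] - self.oc["R_cell"] * self.i_fc(t)
        return [(u_target - u) / self.oc["tau"]]     # first-order relaxation

    def run(self):
        sol = solve_ivp(self._rhs, self.t_span, [self.oc["E_oc"]],
                        method="BDF", max_step=1.0)
        return sol.t, sol.y[0]


if __name__ == "__main__":
    step = lambda t: 0.5 if t < 300 else 1.2         # current-density step (A/cm^2)
    sim = ToyFuelCellSimulator(
        operating_conditions={"E_oc": 1.0, "R_cell": 0.25, "tau": 15.0},
        current_density=step, t_span=(0.0, 600.0))
    t, u_cell = sim.run()
    print(f"final cell voltage: {u_cell[-1]:.3f} V")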
Once these options are set, the user can generate the model results, which include the internal states and the voltage of the fuel cell system, either from a current density step, or from a current density producing a polarization curve or an EIS curve. The GUI limits the simulation possibilities to these three types of current densities, but from the source code it is possible to use any physically acceptable function. Additionally, the user can perform automated calibration of the undetermined parameters using the Python file '/calibration/parameter_calibration.py'. This functionality is not available from the GUI. For this, it is necessary to input the experimental values of polarization curves under different operating conditions in the file '/calibration/experimental_values' (at least three curves), as well as the operating conditions and physical parameters of the studied fuel cell system in the file '/modules/calibration_modules'. This automated calibration uses a genetic algorithm. The parameters of this algorithm have been adjusted for this specific optimization problem to achieve a good balance between the accuracy of the calibration and execution speed. These parameters are shown in table <ref>. Only the population size and the maximum number of iterations can be modified to match the available computing capacity. It is preferable to keep the population size between 100 and 200 individuals and to choose a number that is a multiple of the number of available CPU cores to utilize them fully, as the calculations are parallelized for each member of the same population. The number of iterations should be as large as possible, typically around 1000 to 1500 generations for effective calibration. It is worth noting that the calibration can be resumed from where it previously stopped, allowing multiple computation sessions to finally achieve a satisfactory result. Finally, it is preferable to use a computing cluster with many CPU cores for calibration. As an example, the authors successfully performed a calibration with a maximum error of 1.06% between the experimental and simulated data, after two weeks of calculations on a server equipped with 80 Intel(R) Xeon(R) Gold 6338 CPU cores @ 2.00GHz. § ILLUSTRATIVE EXAMPLES §.§ Polarization curves and parameter calibration To enable AlphaPEM to simulate the internal states and voltage of a given fuel cell system, it is necessary to provide the simulator with a number of physical parameters to adjust it to the real machine. Some of these parameters are easily accessible, such as the dimensions of each cell, but others are inaccessible unless the manufacturers have shared them, which is rarely the case. These undetermined physical parameters can be calibrated by AlphaPEM using at least three experimental polarization curves, as discussed in Section <ref>; a minimal sketch of such a curve-fitting loop is given below. Once calibration is done, curves like those in figure <ref> can be obtained. The quantity Δ U_max corresponds to the maximum deviation between the experimental and simulated curves. After this calibration, other polarization curves can be simulated by AlphaPEM for all realistic operating conditions. For an estimation of AlphaPEM's execution speed, the simulator requires less than 35 seconds to simulate one of the polarization curves depicted in figure <ref> on a mobile workstation equipped with an 11th generation Intel Core i9 processor with 16 cores operating at 2.60GHz and 32 GiB of RAM. In this instance, computation occurs at intervals of 0.1 A.cm^-2.
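In AlphaPEM the calibration relies on the 'geneticalgorithm2' package. The sketch below illustrates the same curve-fitting idea with SciPy's differential evolution (another population-based, parallelizable optimizer) so that it stays self-contained and runnable; the two-parameter model, the bounds, and the synthetic "experimental" curve are placeholders, not AlphaPEM's actual undetermined parameters or data.
[language=Python, frame=single, breaklines=true, tabsize=1]
import numpy as np
from scipy.optimize import differential_evolution

# Placeholder experimental polarization curve (current density -> voltage).
i_exp = np.linspace(0.1, 1.5, 15)                        # A/cm^2
u_exp = 1.0 - 0.06 * np.log(i_exp / 0.01) - 0.25 * i_exp # synthetic "data"

def polarization_model(i, theta):
    """Toy stand-in for a simulated polarization curve with two undetermined
    parameters theta = (exchange current density, cell resistance)."""
    i0, r_cell = theta
    return 1.0 - 0.06 * np.log(i / i0) - r_cell * i

def objective(theta):
    # Maximum relative deviation between simulated and experimental curves,
    # mirroring the error criterion quoted in the text.
    u_sim = polarization_model(i_exp, theta)
    return np.max(np.abs(u_sim - u_exp) / np.abs(u_exp))

if __name__ == "__main__":
    result = differential_evolution(
        objective,
        bounds=[(1e-4, 1e-1), (0.05, 0.6)],  # bounds on the undetermined parameters
        popsize=20,                          # population-based, like the GA in AlphaPEM
        maxiter=200,
        tol=1e-8,
        updating="deferred",
        workers=-1,                          # parallel evaluation over CPU cores
        seed=0,
    )
    print(result.x, f"max relative error: {100 * result.fun:.2f} %")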
Since the computation time scales with the number of simulated current density points, execution speed can be further improved by reducing the resolution of the polarization curve. §.§ Fuel cell system internal states Once AlphaPEM is calibrated, it is possible to simulate the temporal evolution of the internal states of a given fuel cell. Thus, it is possible to dynamically track at different points within a cell the concentration of water vapor C_v, the liquid water saturation s, the water content dissolved in the membrane λ, as well as the concentrations of hydrogen C_H_2 and oxygen C_O_2. Similarly, the temporal evolution of the cell voltage U_cell and of various auxiliary variables can be monitored: pressures P, relative humidities Φ, mass flows W, and throttle areas A_bp. For instance, figure <ref> illustrates the temporal evolution of U_cell, C_v and s for the same fuel cell system previously mentioned in figure <ref>, subjected to two current density steps. Further curves, details and explanations of the observed evolutions are provided in the authors' previous work <cit.>. For an estimation of AlphaPEM's execution speed, the simulator requires less than 20 seconds to simulate the 1000-second temporal evolution depicted in figure <ref> on a mobile workstation equipped with an 11th generation Intel Core i9 processor with 16 cores operating at 2.60GHz and 32 GiB of RAM. § IMPACT The current energy transition is generating a growing demand for PEM fuel cells with higher power densities and longer lifespans <cit.>. In this context, physics-based modeling plays a crucial role in improving the dynamic control of fuel cells, thereby enhancing their performance and longevity <cit.>. However, creating such models is a lengthy and complex task for researchers, as it requires integrating numerous scientific disciplines, including electrochemistry, fluid mechanics, thermodynamics, computational physics, and mathematics. Therefore, it is essential to provide the scientific community with a clear and robust foundational model on which future research can build to advance the field more rapidly. Thus, following the work of Pukrushpan and Vetter, who were the first to disseminate simplified PEM fuel cell models <cit.>, AlphaPEM is introduced as the first open-source software package that dynamically models fuel cell systems in 1D for embedded applications, offering a good compromise between accuracy and execution speed. AlphaPEM simulates the internal states of fuel cell systems, providing real-time access to information inaccessible to sensors. This enriches fuel cell control strategies, paving the way for better performance and longevity. Additionally, AlphaPEM is designed with a modular architecture, facilitating the addition of extensions by the community. It is thus easy to incorporate heat transport physics, cell degradation modeling, a spatial extension to 1D+1D, and the simulation of cell stacks, to more accurately reflect reality while maintaining sufficient execution speed. These points align with the authors' future research ambitions. The contributions of AlphaPEM thus advance the development of fuel cell systems by facilitating their analysis, control, and improvement. § CONCLUSIONS This work presents AlphaPEM, an open-source, user-friendly, and modular software package in Python, designed for PEM fuel cell modeling for embedded applications. This framework is based on a 1D finite difference, dynamic, biphasic, and isothermal model of PEM fuel cell systems. It employs a solver using an implicit numerical method to solve the system of differential equations.
This model has been experimentally validated in previous studies. In practice, AlphaPEM provides real-time access to the internal states and the voltage of fuel cell systems and can generate polarization and EIS curves. It can also automatically calibrate the model's undetermined parameters to fit any real fuel cell system. This simulator therefore paves the way for improving the real-time control of the operating conditions of fuel cell systems to enhance their performance and longevity. Thermal modeling and spatial extension will be developed in future versions. § ACKNOWLEDGEMENTS This work has been supported by the French National Research Agency via project DEAL (Grant no. ANR-20-CE05-0016-01), the Region Provence-Alpes-Côte d'Azur, the EIPHI Graduate School (contract ANR-17-EURE-0002) and the Region Bourgogne Franche-Comté.
http://arxiv.org/abs/2407.12142v1
20240716195608
Influences of modified Chaplygin dark fluid around a black hole
[ "S. Zare", "L. M. Nieto", "F. Hosseinifar", "X. -H. Feng", "H. Hassanabadi" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
szare@uva.es Departamento de Física Teórica, Atómica y Optica and Laboratory for Disruptive Interdisciplinary Science (LaDIS), Universidad de Valladolid, 47011 Valladolid, Spain luismiguel.nieto.calzada@uva.es Departamento de Física Teórica, Atómica y Optica and Laboratory for Disruptive Interdisciplinary Science (LaDIS), Universidad de Valladolid, 47011 Valladolid, Spain f.hoseinifar94@gmail.com Faculty of Physics, Shahrood University of Technology, Shahrood, Iran Department of Physics, University of Hradec Králové, Rokitanského 62, 500 03 Hradec Králové, Czechia xhfeng@tju.edu.cn Center for Joint Quantum Studies and Department of Physics, School of Science, Tianjin University, Tianjin 300350, China hha1349@gmail.com Faculty of Physics, Shahrood University of Technology, Shahrood, Iran Department of Physics, University of Hradec Králové, Rokitanského 62, 500 03 Hradec Králové, Czechia § ABSTRACT In this work, we study a static, spherically charged AdS black hole within a modified cosmological Chaplygin gas (MCG), adhering to the calorific equation of state, as a unified dark fluid model of dark energy and dark matter. We explore the influence of model parameters on several characteristics of the MCG-motivated charged AdS black hole (MCGMBH), including the geodesic structure and some astrophysical phenomena such as null trajectories, shadow silhouettes, light deflection angles, and the determination of greybody bounds. We then discuss how the model parameters affect the Hawking temperature, remnant radius, and evaporation process of the MCGMBH. Quasinormal modes are also investigated using the eikonal approximation method. Constraints on the MCGMBH parameters are derived from EHT observations of M87* and Sgr A*, suggesting that MCGMBHs could be strong candidates for astrophysical BHs. Keywords: Modified Chaplygin gas; astrophysical black holes; gravitational lensing; eikonal quasinormal modes; evaporation process Influences of modified Chaplygin dark fluid around a black hole H. Hassanabadi July 22, 2024 =============================================================== § INTRODUCTION Astronomical observations have shown that our universe is currently undergoing accelerated expansion, which is attributed to a dark energy component with negative pressure and positive energy density <cit.>. Quintessence dark energy was the first candidate used to account for this negative pressure <cit.>. Kiselev derived the first static, spherically symmetric BH solution that includes quintessence matter <cit.>. Models that combine dark matter and dark energy are good candidates to explain the dark components of the universe; among them, the Chaplygin gas (CG) and its modifications provide a suitable description of the observed accelerated expansion <cit.>. The role of the CG in the Hubble tension is investigated in <cit.>, the growth of cosmological perturbations is studied in <cit.>, and the thermodynamic quantities of a charged, static, spherically symmetric BH surrounded by such a gas are explored in <cit.>. A BH immersed in a cosmological Chaplygin-like dark fluid, with an additional parameter influencing the energy density of the fluid, is investigated in <cit.> to obtain the geodesic structure, shadow, and optical appearance of such a BH. The MCG has also been used to explore the stability of an Einstein-Gauss-Bonnet BH surrounded by this kind of gas <cit.> and of BHs surrounded by MCG in Lovelock gravity <cit.>.
In Ref. <cit.> the authors considered the MCG as a single fluid model unifying dark energy and dark matter, and they constructed a static, spherically charged BH solution in the framework of general relativity. The pure or generalized CG is a perfect fluid that behaves like a pressureless fluid at early times and like a cosmological constant at late times, which makes it a good candidate for BH studies. Recently, the authors of <cit.> investigated the shadow, emission rate, and deflection angle for a generalized Chaplygin-Jacobi dark fluid. In this work, after introducing the metric in Sec. <ref>, we find in Sec. <ref> the Hawking temperature and, consequently, the event horizon radius of the BH. Then, using the photon geodesic equation, we determine the motion of light near the BH and the radius of the shadow for this specific metric in Sec. <ref>. Furthermore, we calculate the energy emission rate as a function of frequency, the greybody factor, and the emission power in Secs. <ref> and <ref>. By using the eikonal approximation approach, in Sec. <ref> we obtain the quasinormal modes (QNMs), and in Sec. <ref> we calculate the deflection angle of the light reaching the observer. In Sec. <ref> we determine the BH evaporation time, and finally in Sec. <ref> we give the conclusion. § A BRIEF REVIEW OF THE FIELD EQUATIONS According to Ref. <cit.>, we take into account the following action for a charged source with an MCG structure within the framework of general relativity, ℐ=1/16 π∫d^4 x √(-g)[ℛ+6/ℓ̅^2-1/4 F_μν F^μν]+ℐ_M, where ℛ represents the Ricci scalar, g denotes the determinant of the metric tensor g_μν, ℓ̅ indicates the AdS length, ℐ_M represents the matter contribution arising from the MCG background, F_μν is the field strength of the electromagnetic field, and A_μ is the gauge potential. By varying action (<ref>) we obtain the following field equations G_μν-3/ℓ̅^2g_μν=T_μν^EM+T_μν^MCG, ∂_μ(√(-g)F^μν) =0. The symbols G_μν, T_μν^MCG, and T_μν^EM represent the Einstein tensor, the energy-momentum tensor for MCG, and the energy-momentum tensor for the electromagnetic field, respectively, where T_μν^EM = 2F_μλF_ν^λ-1/2g_μνF^λδF_λδ. By considering a static, spherically symmetric, four-dimensional spacetime with a radial dependence for the zero component of the gauge potential (while other components are zero), a second-order differential equation for the gauge potential is derived using the energy-momentum tensor for the electromagnetic field and exploiting the given spacetime background. The solution to this differential equation yields a compact solution with a Coulombic potential. Moreover, with the MCG, we adopt the equation of state given by P=Aρ-Bρ^-β, where A, B are positive parameters and the parameter β runs within the range [0,1]. By imposing conditions on the components of the energy-momentum tensors and considering all components of the field equations, we derive the following expressions for the density and the lapse function <cit.> ρ(r)={1/1+A(B+(γ/r^3)^(1+A)(1+β))}^1/1+β, and f(r) = 1-2 M/r+Q^2/r^2+r^2/ℓ̅^2 -r^2/3(B/A+1)^1/β+1 _2F_1[α, ν; λ; ξ], where α =-1/(β+1), ν=-1/(1+A+β(A+1)), λ =1+ν, ξ=-1/B(γ/r^3)^(A+1)(β+1).
The metric has the following form <cit.> ds^2=-f(r)dt^2+1/f(r)dr^2+r^2 (d θ^2+sin ^2 θ d ϕ^2). § HAWKING TEMPERATURE AND REMNANT RADIUS The Hawking temperature near the event horizon of a BH with metric (<ref>) is given by <cit.> T=1/4πd/drf(r)|_r=r_h. Setting f(r)=0 gives the mass in terms of the horizon radius, so the temperature as a function of the horizon radius can be written as T_H = 23 r_h/8 πℓ̅^2-Q^2/4 π r_h^3+1/4 π r_h - r_h/4 π[A+1/B]^ -1/β +1[ 1+(γ/r_h^3)^(A+1) (β +1)/B]^--1/β +1, where r_h is the horizon radius. The variation of the Hawking temperature versus r_h is shown in Fig. <ref>. In order to calculate the remnant radius, we set the Hawking temperature obtained in Eq. (<ref>) to zero, T_H=0|_r=r_rem <cit.>. In this case, we cannot find the explicit form of the remnant radius analytically. By substituting the remnant radius into the relation for M as a function of r_h, the remnant mass is obtained as M_rem = r_rem^4ℓ̅^2+Q^2+r_rem^2/2 r_rem -r_rem^4 (B/A+1)^1/β +1 _2F_1(α,ν;λ;ξ_rem)/6 r_rem. § BLACK HOLE SHADOW AND CONSTRAINTS In order to find the equation of motion for photons, the Euler–Lagrange equation is used, d/dτ(∂ℒ/∂ẋ^μ)-∂ℒ/∂ x^μ=0, where ℒ is given by ℒ=1/2g_μνẋ^μẋ^ν. Assuming θ = π/2, the Lagrangian of the metric (<ref>) becomes ℒ(x,ẋ) = 1/2(-f(r)ṫ^2+ṙ^2/f(r)+r^2ϕ̇^2). There exist two constants of motion E and L, which are written in the form E = f(r)ṫ, L=r^2ϕ̇. So one can write ṙ^2=E^2-L^2f(r)/r^2. Considering the quantity L^2f(r)/r^2 as the effective potential <cit.>, it can be expressed as V_Eff= (δ+L^2/r^2)(1-2 M/r+Q^2/r^2+r^2/ℓ̅^2 - r^2/3(B/A+1)^1/β+1 _2F_1[α, ν; λ; ξ]), where the parameter δ specifies whether the particle is lightlike (δ=0) or timelike (δ=1). Fig. <ref> illustrates the effective potential versus radius. In Fig. <ref>, we show the effective potential for lightlike and timelike particles in the MCGMBH background. The local maximum of the effective potential increases with higher values of the angular momentum L. At this maximum, particles have unstable circular orbits. For timelike particles, as L increases, the maximum point appears at smaller values of r. Using Eq. (<ref>) one can write the trajectory of light in the equatorial plane as d r/d ϕ = ±√(f(r))√(r^2/f(r)E^2/L^2-1). Setting dr/dϕ =0 yields the minimum radius r_min, which is taken as the turning point. Therefore, Eq. (<ref>) can be rewritten in the form d r/d ϕ = ±√(f(r))√(r^2/f(r)f(r_min)/r_min^2-1). Using Eq. (<ref>), the light trajectory on the MCGMBH is depicted in Fig. <ref>. Considering the circular orbit, one can write <cit.> V_Eff(r)=0=V'_Eff(r). Using Eqs. (<ref>) and (<ref>), the radius of the photon sphere is calculated from <cit.> f'(r_Ph)r_Ph^2-2r_Phf(r_Ph)=0. In this work, we compute the photon sphere radius numerically. For an observer positioned at radial coordinate r_o, the shadow radius is obtained from <cit.> R_Sh= r_Ph/√(f(r_Ph))√(f(r_o)). Next, we derive constraints on the model parameters using EHT observational data for M87* and Sgr A* based on their shadow images <cit.>, as shown in Figs. <ref> to <ref>. Tables <ref> to <ref> present the lower and upper bounds of the parameters at 1σ and 2σ confidence levels. A minimal numerical sketch of these horizon, photon sphere, and shadow computations is given below.
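The sketch assumes the parameter values quoted later for the figures (M=1, ℓ̅=8, A=1, B=0.1, β=0.1, γ=0.1), together with an illustrative charge Q=0.3, an observer at r_o=8, and assumed root-finding brackets; the hypergeometric parameters are read as α=-1/(β+1), ν=-1/((1+A)(1+β)) and λ=1+ν, consistent with the definitions above.
[language=Python, frame=single, breaklines=true, tabsize=1]
import numpy as np
from scipy.optimize import brentq
from scipy.special import hyp2f1

# Illustrative parameters (Q, r_o and the brackets below are assumptions).
M, Q, lbar = 1.0, 0.3, 8.0
A, B, beta, gamma = 1.0, 0.1, 0.1, 0.1

a = -1.0 / (beta + 1.0)                       # alpha of 2F1
nu = -1.0 / ((1.0 + A) * (1.0 + beta))        # = -1/(1+A+beta(A+1))
c = 1.0 + nu                                  # lambda of 2F1

def f(r):
    """Lapse function f(r) of the MCGMBH."""
    xi = -(gamma / r**3) ** ((A + 1.0) * (beta + 1.0)) / B
    mcg = (r**2 / 3.0) * (B / (A + 1.0)) ** (1.0 / (beta + 1.0)) * hyp2f1(a, nu, c, xi)
    return 1.0 - 2.0 * M / r + Q**2 / r**2 + r**2 / lbar**2 - mcg

def df(r, h=1e-6):
    return (f(r + h) - f(r - h)) / (2.0 * h)  # simple central difference

# Event horizon: root of f(r) = 0 in an assumed bracket.
r_h = brentq(f, 1.0, 5.0)

# Photon sphere: root of r^2 f'(r) - 2 r f(r) = 0 outside the horizon.
r_ph = brentq(lambda r: r**2 * df(r) - 2.0 * r * f(r), r_h + 0.1, 5.0)

# Shadow radius seen by an observer at r_o (R_Sh formula in the text).
r_o = 8.0
R_sh = r_ph / np.sqrt(f(r_ph)) * np.sqrt(f(r_o))
print(f"r_h = {r_h:.4f}, r_ph = {r_ph:.4f}, R_sh = {R_sh:.4f}")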
§ ENERGY EMISSION RATE The thermal radiation emitted by a BH is related to its Hawking temperature. Using the BH shadow, the energy emission rate per unit time and frequency is given by <cit.> d^2E/(dω dt) = 2 π^2 σω^3/(exp(ω/T) - 1), where ω denotes the photon emission frequency and σ = π R_Sh^2 is the cross section. T is the Hawking temperature calculated from Eq. (<ref>). Fig. <ref> shows the energy emission rate as a function of ω. Evidently, increasing the parameter A increases the maximum emission power and shifts this maximum to a higher frequency. § GREYBODY BOUNDS When Hawking radiation occurs, particle-antiparticle pairs can appear near the event horizon. One particle may fall into the BH while the other escapes to infinity; the escaping radiation is modified by the gravitational field of the BH, which is known as the greybody effect, and the greybody factor measures the extent of this deviation. A factor smaller than 1 indicates absorption or scattering by the BH. The greybody bound quantifies the maximum deviation of the radiation from an ideal blackbody spectrum. A lower bound on the greybody factor can be calculated as <cit.> T_l(ω)≥sech^2 (1/(2ω)∫_r_h^∞V(r)/f(r) dr). The potential is given by V(r)=f(r)(l(l+1)/r^2+(1-s^2)/r·df(r)/dr), where l is the angular momentum and s is the spin: s = 0 gives the effective potential for scalar perturbations, while s = 1 gives the potential for electromagnetic perturbations. The power emitted in the lth mode is determined as <cit.> P_l(ω)=A/(8π^2) T_l(ω) ω^3/(exp(ω / T_H)-1). Here A and T_H denote the surface area of a sphere of radius r_h and the Hawking temperature, respectively. The greybody bounds and the corresponding emitted power for electromagnetic perturbations are shown in the first and second panels of Fig. <ref>, respectively. According to the figures, as the parameter Q increases the transmission probability decreases; the emitted power also decreases with increasing Q, and its maximum occurs at a higher frequency. The partial absorption cross section is given by <cit.> σ_abs^l =π (2l+1)/ω^2 |T_l(ω)|^2, and its variation as a function of frequency is shown in the third panel of Fig. <ref>. Increasing the parameter Q reduces the maximum value of the absorption cross section and shifts this maximum to smaller values of ω. § QUASINORMAL MODES Perturbing a BH causes damped oscillations called QNMs. Their frequency is a complex number whose real part gives the oscillation frequency and whose imaginary part describes the decay of the perturbation <cit.>. In the eikonal limit, l→∞, the potential of Eq. (<ref>) reduces to <cit.> V_0^E=l^2 f(r)/r^2. The quasinormal frequency is written as ω_n= Ω l -i Λ(n+1/2), where Ω is the angular velocity and Λ is the principal Lyapunov exponent, which are given by <cit.> Ω =√(f(r_c))/r_c, Λ =√(-r^2/2(f'(r)(f(r)/r^2)'+f(r)(f(r)/r^2)''))_r=r_c. The parameter r_c is the radius at which V_0^E attains its maximum. The behavior of ω versus the parameter Q is shown in Fig. <ref>. § DEFLECTION ANGLE When light emitted by a star passes through the vicinity of a BH before reaching an observer, it undergoes gravitational lensing and is deflected. The deflection angle of this light is given by <cit.> Δϕ =2 ∫_r_min^∞ dr/(r √(r^2/b^2-f(r))) -π, where b denotes the impact parameter, given by b = r_min/√(f(r_min)).
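A numerical evaluation of this bending-angle integral can be sketched as follows, reusing the same illustrative MCGMBH parameters and the impact parameter b=10 used below. The turning point is obtained from b=r_min/√(f(r_min)), and the substitution r=r_min+u^2 removes the integrable square-root singularity at the turning point; the bracket for the turning-point search and the small offset from u=0 are numerical assumptions.
[language=Python, frame=single, breaklines=true, tabsize=1]
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import hyp2f1

# Same illustrative MCGMBH parameters as above; Q and the bracket are assumptions.
M, Q, lbar = 1.0, 0.3, 8.0
A, B, beta, gamma = 1.0, 0.1, 0.1, 0.1
a = -1.0 / (beta + 1.0)
nu = -1.0 / ((1.0 + A) * (1.0 + beta))
c = 1.0 + nu

def f(r):
    xi = -(gamma / r**3) ** ((A + 1.0) * (beta + 1.0)) / B
    mcg = (r**2 / 3.0) * (B / (A + 1.0)) ** (1.0 / (beta + 1.0)) * hyp2f1(a, nu, c, xi)
    return 1.0 - 2.0 * M / r + Q**2 / r**2 + r**2 / lbar**2 - mcg

b = 10.0                                           # impact parameter used in the text
# Turning point r_min from b = r_min / sqrt(f(r_min)).
r_min = brentq(lambda r: r / np.sqrt(f(r)) - b, 3.0, 8.0)

def integrand(u):
    # Substitution r = r_min + u^2 regularizes the 1/sqrt endpoint singularity.
    r = r_min + u**2
    return 2.0 * u / (r * np.sqrt(r**2 / b**2 - f(r)))

# Tiny offset from u = 0 avoids the numerically ill-defined 0/0 at the endpoint.
deflection = 2.0 * quad(integrand, 1e-4, np.inf, limit=200)[0] - np.pi
print(f"r_min = {r_min:.4f}, deflection angle = {deflection:.4f} rad")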
The variation of the deflection angle for the parameter choice M=1, ℓ̅=8, A=1, B=0.1, β =0.1, γ =0.1 and b =10 is illustrated in Fig. <ref>. As can be seen in Fig. <ref>, increasing the parameter Q decreases the deflection angle. § EVAPORATION PROCESS The lifetime τ of a black hole can be determined by integrating the change in its mass over time, which obeys <cit.> d𝖬/dτ = -α aσ T^4, where a is the radiation constant, α denotes the greybody factor, T is the Hawking temperature given by Eq. (<ref>), and σ=π R_Sh^2 is the cross section, all expressed as functions of the horizon radius. Since these quantities depend on the horizon radius, Eq. (<ref>) can be rewritten as an integral over the horizon radius, with the mass as a function of the horizon radius given by 𝖬 = 1/(6 ℓ̅^2 r)( -ℓ̅^2 r^4 (B/A+1)^1/β +1 _2F_1[α, ν; λ; ξ] +3 ℓ̅^2 Q^2+3 ℓ̅^2 r^2+3 r^4). Substituting 𝖬 into Eq. (<ref>), one can write ∫_0^τ dτ = ∫_r_rem^r_i d𝖬/dr·dr/(-α̃σ T^4), where α̃ = α a and r_i denotes the initial horizon radius. By integrating both sides of Eq. (<ref>), the evaporation time can be found. In order to calculate τ, an approximation is applied to the metric of Eq. (<ref>): as the parameter B tends to zero and with β = 0, γ=0 and A>0, the MCGMBH reduces to the RN-AdS BH metric <cit.>. For the RN-AdS BH metric, at Q=0.05 and ℓ̅=20, the evaporation time is of the order of 10^23/α̃. § CONCLUSION Motivated by the goal of revealing traces of the MCG around a static, spherically symmetric charged AdS BH, we explored the dependence of the Hawking temperature and the remnant radius on the MCGMBH parameters. This analysis corroborates the existence of a remnant mass and a phase transition. The remnant mass corresponds to the minimum mass value to which the BH can shrink, while the phase transition is associated with a maximum Hawking temperature, indicating that the corresponding heat capacity is zero at this point. We examined the geodesic structure around the MCGMBH and investigated the far-reaching implications of this gravitational model for various astrophysical phenomena, including null trajectories, shadow silhouettes, light deflection angles, and the determination of greybody bounds. To determine appropriate values for the model parameters, we initially derived constraints from the EHT data on the shadow radius. We identified a narrower range of parameters from observations of Sgr A*, while M87* indicated a broader range. Specifically, the Sgr A* data provide tighter constraints on the model parameters, as they include points beyond the upper 2σ levels. Therefore, within a consistent parameter space for the MCGMBH, EHT observations do not exclude the presence of surrounding MCG at galactic centers. This study provides one of the first constraints on the modified Chaplygin dark fluid using EHT data from M87* and Sgr A*. We then visualized how the energy emission rate varies across the parameter space and illustrated that the MCGMBH parameters effectively contribute to the behavior of the greybody bounds, the emitted power, and the partial absorption cross section. Next, we observed that in this scenario the quasinormal modes (calculated using the eikonal approximation approach), the light deflection angle, and the black hole evaporation process exhibit sensitivity to the model parameters. These findings are expected to contribute to our understanding of the structure of the MCG within the framework of modified gravity theories and their implications for BH astrophysics.
*Acknowledgements We would like to thank Xiao-Mei Kuang for her insightful discussions. The research of L.M.N. and S.Z. was supported by the Q-CAYLE project, funded by the European Union-Next Generation UE/MICIU/Plan de Recuperacion, Transformacion y Resiliencia/Junta de Castilla y Leon (PRTRC17.11), and also by RED2022-134301-T and PID2020-113406GB-I00, both financed by MICIU/AEI/10.13039/501100011033. * plain 99 PerlmutterApJ1999 S. Perlmutter, et al., Astrophys. J. 517 (1999) 565. RiessAJ1998 A. G. Riess, et al., Astrophys. J. 116 (1998) 1009. GarnavichApJ1998 P. M. Garnavich, et al., Astrophys. J. 509 (1998) 74. WangApJ2000 L. Wang, R. R. Caldwell, J. P. Ostriker, P. J. Steinhardt, Astrophys. J. 530 (2000) 17. BahcallSci1999 N. A. Bahcall, J.P. Ostriker, S. Perlmutter, P. J. Steinhardt, Science 284 (1999) 1481. KiselevCQG2003 V. V. Kiselev, Classical Quantum Gravity 20 (2003) 1187. RatraPRD1988 B. Ratra, P. J. E. Peebles, Phys. Rev. D 37 (1988) 3406. CaldwellPRL1998 R. R. Caldwell, R. Dave, P. J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582. SamiPRD2003 M. Sami, T. Padmanabhan, Phys. Rev. D 67 (2003) 083509. KamenshchikPLB2001-1 A. Kamenshchik, U. Moschella, V. Pasquier, Phys. Lett. B 511 (2001) 265. BilicPLB2002 N. Bilić, G. B. Tupper, R. D. Viollier, Phys. Lett. B 535 (2002) 17. BentoPRD2002 M. C. Bento, O. Bertolami, A. A. Sen, Phys. Rev. D 66 (2002) 043507. Sengupta2023 R. Sengupta, P. Paul, B. C. Paul, M. Kalam, arXiv:2307.02602 (2023). AbdullahPRD2022 A. Abdullah, A. A. El-Zant and A. Ellithi, Phys. Rev. D 106 (2022) 083524. LiEPJP2020-1 X.-Q. Li, B. Chen, L.-l. Xing, Eur. Phys. J. Plus 135 (2020) 175. Li2024 X.-Q. Li, H.-P. Yan, X.-J. Yue, S.-W. Zhou, Q. Xu, J. Cosmol. Astropart. Phys. 05 (2024) 048. LiEPJP2022-2 X.-Q. Li, B. Chen, L.-L. Xing, Eur. Phys. J. Plus 137 (2022) 1167. LiAP2022 X.-Q. Li, B. Chen, L.-L. Xing, Ann. Phys. 446 (2022) 169125. Sekhmani2023 Y. Sekhmani, J. Rayimbaev, G. G. Luciano, R. Myrzakulov, D. J. Gogoi, Eur. Phys. J. C 84 (2024) 227. ShadowRTheta M. Fathi, J. R. Villanueva, G. Aguilar-Pérez, M. Cruz, arXiv:2406.05650 (2024). book S. Chandrasekhar, “The Mathematical Theory of Black Holes” (Oxford: Oxford University Press, 1998). HawkingT S. W. Hawking, Nature 248 (1974)30. AFilhoPLB2023 A. A. Araújo Filho, S. Zare, P. J. Porfírio, J. Kříž, H. Hassanabadi, Phys. Lett. B 838 (2023) 137744. PerlickPR2022 V. Perlick, O.Y. Tsupko, Phys. Rep. 947 (2022) 1. CapozzielloJCAP2023 S. Capozziello, S. Zare, D. F. Mota, H. Hassanabadi, J. Cosmol. Astropart. Phys. 05 (2023) 027. Capozziello S. Capozziello, S. Zare, and H. Hassanabadi. arXiv:2311.12896 RosaPRD2023 J. L. Rosa, C. F. B. Macedo, D. Rubiera-Garcia, Phys. Rev. D 108 (2023) 044021. PanahEPJC2024 B. Eslam Panah, S. Zare, and H. Hassanabadi, Eur. Phys. J. C 84 (2024) 259. EHTL1 K. Akiyama, et al., (Event Horizon Telescope), Astrophys. J. Lett. 875 (2019) L1. EHTL5 K. Akiyama, et al., (Event Horizon Telescope), Astrophys. J. Lett. 875 (2019) L5. EHTL6 K. Akiyama, et al., (Event Horizon Telescope), Astrophys. J. Lett. 875 (2019) L6. EHTL12 K. Akiyama, et al., (Event Horizon Telescope Collaboration), Astrophys. J. Lett. 930 (2022) L12. EHTL17 K. Akiyama, et al., (Event Horizon Telescope Collaboration), Astrophys. J. Lett. 930 (2022) L17. Kocherlakota P. Kocherlakota et al., Phys. Rev. D 103 (2021) 104047. Vagnozzi Vagnozzi et al., Classical Quantum Gravity 40 (2023) 165007. Zare S. Zare, L. M. Nieto, X.-H. Feng, S.-H. Dong, H. Hassanabadi, arXiv:2406.07300v1. 12 A. A. Araújo Filho, J. A. A. S. Reis, H. 
Hassanabadi, J. Cosmol. Astropart. Phys. 05 (2024) 029. gbf2 P. Boonserm, M. Visser, Phys. Rev. D 78 (2008) 101502(R). qnm2 R. A. Konoplya, D. Ovchinnikov, B. Ahmedov, Phys. Rev. D 108 (2023) 104054. CrispinoPRD2009 L. C. B. Crispino, S. R. Dolan, E. S. Oliveira., Phys. Rev. D 79 (2009) 064022. AnacletoPLB2020 M. A. Anacleto, F. A. Brito, J. A. V. Campos, E. Passos, Phys. Lett. B 803 (2020) 135334. qnm R. A. Konoplya, A. F. Zinhailo, J. Kunz, Z. Stuchlík, A. Zhidenko, J. Cosmol. Astropart. Phys. 10 (2022) 091. eik2 L. Balart, G. Panotopoulos,Á. Rincón, Fortschr. Phys. 71 (2023) 2300075. 11 S. K. Jha, arXiv:2404.15808 (2024). DA12 K. S. Virbhadra, Phys. Rev. D 109 (2024) 124004. evap2 C. Bambi, L. Modesto, S. Porey, L. Rachwał, J. Cosmol. Astropart. Phys. 09 (2017) 033.
http://arxiv.org/abs/2407.13625v1
20240718155937
Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls
[ "Aras Selvi", "Eleonora Kreacic", "Mohsen Ghassemi", "Vamsi Potluru", "Tucker Balch", "Manuela Veloso" ]
math.OC
[ "math.OC", "cs.LG" ]
Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls July 22, 2024 ======================================================== § ABSTRACT Empirical risk minimization often fails to provide robustness against adversarial attacks in test data, causing poor out-of-sample performance. Adversarially robust optimization (ARO) has thus emerged as the de facto standard for obtaining models that hedge against such attacks. However, while these models are robust against adversarial attacks, they tend to suffer severely from overfitting. To address this issue for logistic regression, we study the Wasserstein distributionally robust (DR) counterpart of ARO and show that this problem admits a tractable reformulation. Furthermore, we develop a framework to reduce the conservatism of this problem by utilizing an auxiliary dataset (e.g., synthetic, external, or out-of-domain data), whenever available, with instances independently sampled from a nonidentical but related ground truth. In particular, we intersect the ambiguity set of the DR problem with another Wasserstein ambiguity set that is built using the auxiliary dataset. We analyze the properties of the underlying optimization problem, develop efficient solution algorithms, and demonstrate that the proposed method consistently outperforms benchmark approaches on real-world datasets. § INTRODUCTION Supervised learning traditionally involves access to a training dataset whose instances are assumed to be independently sampled from a true data-generating distribution <cit.>. Optimizing an expected loss for the empirical distribution constructed from such a training set, also known as empirical risk minimization (ERM), enjoys several desirable properties in relatively generic settings, including convergence to the true risk minimization problem as the number of training samples increases <cit.>. In practice, however, data is finite, and ERM suffers from the “optimism bias” that is also known as overfitting <cit.> or the optimizer's curse <cit.>, which causes deteriorated out-of-sample performance. A popular paradigm to prevent this phenomenon is distributionally robust optimization (DRO) <cit.> which optimizes the expected loss for the worst-case distribution that resides in an ambiguity set constructed from the empirical distribution. Another actively studied real-world challenge causing poor out-of-sample performance of ERM is adversarial attacks, where an adversary perturbs the observed features in the testing or deployment phase <cit.>. For neural networks, the paradigm of adversarial training (AT) <cit.> is therefore designed to provide adversarial robustness by simulating the attacks during the training stage. Many successful variants of AT specialized to different domains, losses, and attacks have been proposed in the literature to achieve adversarial robustness without significantly deteriorating the performance on training sets <cit.>. While some works (e.g., <cit.>) examine adversarial robustness guarantees of various training algorithms, there is a recent stream of work (e.g., <cit.>) that studies properties of optimal solutions to the adversarially robust optimization (ARO) problems where one optimizes the empirical risk subject to worst-case adversarial attacks. Recently, it has been observed that adversarially robust (AR) models may suffer from severe overfitting (robust overfitting, <cit.>), that is, AR models are not DR. Indeed, it is observed that robust overfitting is even more severe than traditional overfitting <cit.>.
While some works address robust overfitting of AT through algorithmic adjustments <cit.>, a recent study <cit.> proves that robust overfitting is more severe than traditional overfitting via DRO theory. The authors of the latter work thus propose the simultaneous adoption of DR and AR. In this paper, we adopt a Wasserstein DRO approach to address robust overfitting in the ℓ_p-attack setting <cit.> for logistic regression. We study both the traditional setting with an empirical dataset and an extension that incorporates an auxiliary dataset whose instances are sampled from a nonidentical but related distribution. Examples of auxiliary data include synthetic data generated from a generative model (e.g., releasing portions of data under privacy constraints), data in the presence of distributional shifts (e.g., different time period/geographic region), noisy data (e.g., measurement errors), or out-of-domain data (e.g., different source); any distribution is applicable as long as the Wasserstein distance between its underlying data-generating distribution and the true data-generating distribution is known or can be estimated (formal setup in Section <ref>). We propose a distributionally and adversarially robust model, constructing its ambiguity set from empirical and auxiliary datasets. Specifically, we first develop a Wasserstein DR counterpart of ARO without auxiliary data, which already improves the benchmark ARO methods. Our primary contribution, however, is intersecting this empirical Wasserstein ambiguity set (ball) with an additional ball formed around the auxiliary data. This method mitigates conservatism in DRO by refining its ambiguity set. We analyze the statistical properties and complexities attributed to this problem, and develop efficient approximation algorithms. Figure <ref> illustrates the idea and Appendix <ref> contains notation. Our contributions are: * We show that ARO for logistic loss is equivalent to the ERM of a new loss function, which is convex and Lipschitz, allowing us to use recent Lipschitz DRO theory (cf. Section <ref>). * We thus formulate distributionally and adversarially robust logistic regression (LR) and provide an exact tractable convex optimization reformulation (cf. Section <ref>). * We utilize auxiliary data to reduce the conservatism of the aforementioned DRO problem in Section <ref> (cf. Figure <ref>). We prove that the resulting optimization problem is NP-hard and develop a tractable approximation. * We prove that Wasserstein finite sample guarantees are inherited by our optimization models and discuss how to set the radii of the Wasserstein balls (cf. Section <ref>). * Experiments on UCI datasets and MNIST/EMNIST datasets demonstrate that our approach achieves better out-of-sample performance than benchmark algorithms with and without adversarial attacks, and scales graciously in practical settings (cf. Section <ref>). § RELATED WORK Auxiliary data in ARO Despite the difference in motivation from ours, auxiliary data appears in the ARO literature. In particular, it is shown that additional unlabeled data sampled from the same <cit.> or different <cit.> data-generating distributions could improve adversarial robustness. <cit.> shows adversarial robustness guarantees can be certified even when AT is done on a synthetic dataset if its generator's distance to the true distribution can be quantified. <cit.> propose optimizing a weighted combination of ARO over empirical and synthetic datasets. 
We show that the latter approach is generalized by our model (cf. Proposition <ref>). DRO-ARO interactions In our work, we solve ARO for the worst-case data distribution residing in a type-1 Wasserstein ball around the empirical distribution, since the type-1 Wasserstein metric is arguably the most common choice in machine learning (ML) with Lipschitz losses <cit.>. In the literature, it is shown that the standard (non-DR) ARO is equivalent to the DRO of the original loss function with a type-∞ Wasserstein metric <cit.> (or a Lévy-Prokhorov metric <cit.>). Hence, our DR ARO approach can be interpreted as optimizing the logistic loss over the worst-case distribution whose 1-Wasserstein distance is bounded by a pre-specified radius from at least one distribution that resides in an ∞-Wasserstein ball around the empirical distribution. Conversely, <cit.> discusses that while DRO over Wasserstein balls is intractable for generic functions (e.g., neural networks), its Lagrange relaxation resembles ARO and thus AT yields a certain degree of (relaxed) distributional robustness; this introduces a DRO perspective to AT algorithms <cit.>. However, to the best of our knowledge, there have not been works optimizing a pre-specified level of Wasserstein distributional robustness (that hedges against overfitting, <cit.>) and adversarial robustness (that hedges against adversarial attacks, <cit.>) simultaneously. To our knowledge, the only work that considers the DR counterpart of ARO is <cit.> where the distributional ambiguity is modeled with φ-divergences and the prediction model is a neural network. Intersecting ambiguity sets in DRO Recent work started to explore the intersection of ambiguity sets for different contexts <cit.> or different metrics <cit.>. Our idea of intersecting Wasserstein balls is inspired by the “Surround, then Intersect” strategy <cit.> to train linear regression under sequential domain adaptation in a non-adversarial setting (see <cit.> and <cit.> for robustness in domain adaptation/transfer learning). The aforementioned work focuses on a case where the loss function is the squared loss, and the metric is a variant of the Wasserstein metric developed for the first and second distributional moments. Logistic Loss in DRO and ARO Our choice of LR aligns with the current directions and open questions in the relevant literature. In the ARO literature, there are recent theory developments on understanding the effect of auxiliary data (e.g., <cit.>) for squared and logistic loss functions. In the DRO literature, even in the absence of adversarial attacks, the aforementioned work <cit.> on the intersection of Wasserstein ambiguity sets is restricted to linear regression. The authors show that this problem admits a tractable convex optimization reformulation, and the proof relies on the properties of the squared loss. We contribute to the DRO literature for adversarial and non-adversarial settings because we show that such a problem would be NP-hard for the logistic loss (cf., Proposition <ref>), and develop specialized approximation techniques. Our problem recovers DR LR <cit.> as a special case in the absence of adversarial attacks and auxiliary data. The theoretical challenges posed by the logistic loss have been a significant focus in DRO literature, with extensions such as DR LR <cit.> to DRO Lipschitz ML <cit.> and mixed-feature DR LR <cit.> to mixed-feature DR Lipschitz ML <cit.>. 
§ PROBLEM SETTING AND PRELIMINARIES We consider a binary classification problem where an instance is modeled as (x, y) ∈Ξ := ℝ^n ×{-1, +1} and the labels depend on the features probabilistically with Prob[y |x] = [1 + exp(-y ·β^⊤x)]^-1, for some β∈ℝ^n; its associated loss is the logloss ℓ_β(x, y) := log( 1 + exp(-y ·β^⊤x)). Distributional ambiguity and Wasserstein balls Let 𝒫(Ξ) denote the set of probability distributions on Ξ. We model distributional ambiguity via the Wasserstein (Earth mover's) distance. The distance d(ξ, ξ') between two instances ξ = (x, y) ∈Ξ and ξ' = (x', y') ∈Ξ is d(ξ, ξ') = ‖x - x'‖_q + κ·y ≠ y', where κ > 0 controls the label weight and q > 0 specifies a rational norm on ℝ^n. The type-1 Wasserstein distance between distributions ℚ∈𝒫(Ξ) and ℚ' ∈𝒫(Ξ), with ground metric d(ξ, ξ') on Ξ, is defined as W(ℚ, ℚ') = Π∈𝒞(ℚ,ℚ')inf{∫_Ξ×Ξ d(ξ, ξ') Π(ξ, ξ') } where 𝒞(ℚ,ℚ') : = {Π∈𝒫(Ξ×Ξ):Π(ξ, Ξ) = ℚ(ξ), Π(Ξ, ξ') = ℚ'(ξ') }. For ε > 0, the Wasserstein ball around P∈𝒫(Ξ) is defined 𝔅_ε(ℙ) := {ℚ∈𝒫(Ξ) : W(ℚ, ℙ) ≤ε}. We next review several training paradigms, see Table <ref>. Empirical Risk Minimization Let ℙ^0 denote the true data-generating distribution. Ideally, one wants to minimize the expected loss over ℙ^0, or more precisely RM[ β∈ R^ninf 𝔼_ℙ^0 [ℓ_β(x, y)]. ] In practice, ℙ^0 is hardly ever known, and one thus resorts to the empirical distribution ℙ_N = 1/N∑_i ∈ [N]δ_ξ^i where {ξ^i = (x^i, y^i)}_i ∈ [N] are i.i.d. samples from ℙ^0 and δ_ξ denotes the Dirac distribution supported on ξ. The empirical risk minimization (ERM) problem is thus given by ERM[ β∈ R^ninf𝔼_ℙ_N [ℓ_β(x, y)] = β∈ R^ninf1/N∑_i ∈ [N]ℓ_β(x^i, y^i). ] Distributionally Robust Optimization As summarized in the introduction, DRO is motivated by the fact that in the finite-data setting, the distance between the true and empirical distributions is upper-bounded by some ϵ>0, that is, ℙ^0∈𝔅_ε(ℙ_N). The goal in DRO is to optimize the expected loss over the worst possible realization of a distribution residing in 𝔅_ε(ℙ_N): DRO[ β∈ R^ninf ℚ∈𝔅_ε(ℙ_N)sup 𝔼_ℚ [ℓ_β(x, y)]. ] We refer to <cit.> and <cit.> for the generalization guarantees and ML applications of <ref>. Adversarial Robustness The goal of adversarial robustness is to provide robustness against adversarial attacks <cit.>. An adversarial attack, in the widely studied ℓ_p-noise setting <cit.>, perturbs the features of the test instances (x, y) by adding additive noise z to x. The adversary chooses the noise vector z, subject to ‖z‖_p≤α, so as to maximize the loss ℓ_β(x+ z, y) associated with this perturbed test instance. Therefore, ARO solves the following optimization problem in the training stage to hedge against adversarial perturbations at the test stage: ARO[ β∈ R^ninf 𝔼_ℙ_N[z: ‖z‖_p ≤αsup{ℓ_β(x + z, y)}] . ] <ref> reduces to <ref> when α = 0. Note that <ref> is identical to feature robust training <cit.> which is not motivated by adversarial attacks, but the presence of noisy observations in the training set <cit.>. § WASSERSTEIN ADVERSARIALLY ROBUST OPTIMIZATION <ref> replaces the loss function of <ref> with the worst-case loss (with respect to adversarial attacks). Here we show that <ref> is equivalent to an ERM of a modified loss, which is convex and Lipschitz. Let ℓ^α_β(x, y) := log(1 + exp( - y·β^⊤x + α·‖β‖_p^⋆ )) denote the adversarial loss associated with the logloss, and L^α(z) := log(1 + exp(- z + α·‖β‖_p^⋆ )) its univariate counterpart. We have ℓ^α_β(x, y) = sup_z: ‖z‖_p ≤α{ℓ_β(x + z, y)} and so <ref> is identical to [ β∈ℝ^ninf 𝔼_ℙ_N [ℓ^α_β(x, y)]. 
] Moreover, Lip(L^α) = 1 for any α≥ 0. The proof is in Appendix <ref>. Proposition <ref> tells us that true expected loss under adversarial attacks is 𝔼_ℙ^0[ℓ_β^α(x, y)]. Therefore, instead of optimizing the empirical risk 𝔼_ℙ_N [ℓ_β(x, y)], <ref> optimizes the empirical adversarial risk 𝔼_ℙ_N[ℓ_β^α(x, y)]. This means that <ref> calibrates the loss function so that we train and test with the same loss _β(x, y). However, <ref> still optimizes this loss for the empirical distribution ℙ_N and is thus prone to overfitting due to the statistical error of estimating ℙ^0 with ℙ_N. To address overfitting in the adversarial setting (robust overfitting of <ref>), we derive a Wasserstein DR counterpart of <ref>. We start with the following assumption. We are given finite ε > 0 satisfying W(ℙ^0, ℙ_N) ≤ε (i.e., ℙ^0 ∈𝔅_ε(ℙ_N)). We discuss relaxing this assumption in Section <ref>. We now introduce the distributionally and adversarially robust logistic regression problem: DR-AROβ∈ℝ^ninfℚ∈𝔅_ε(ℙ_N)sup𝔼_ℚ [sup_z: ‖z‖_p ≤α{ℓ_β(x + z, y)} ]. The following result shows that, for a fixed ε, <ref> can be reformulated as a convex optimization problem. This is a direct corollary of Proposition <ref> and Theorem 14 (ii) of <cit.>; see Appendix <ref>. Problem <ref> admits the following tractable convex optimization reformulation: [ β, λ, sinf ελ + 1/N∑_i=1^N s_i; s.t. ℓ_β^α(x^i, y^i) ≤ s_i, ℓ_β^α(x^i, -y^i) - λκ≤ s_i ∀ i ∈ [N]; ‖β‖_q^⋆≤λ, β∈ℝ^n, λ≥ 0, s∈ℝ^N_+. ] The constraints of this problem are exponential cone representable (Appendix <ref>), and for q ∈{1,2,∞}, the yielding problem can be solved with the exponential cone solver of MOSEK <cit.> in polynomial time (with respect to their input size <cit.>). <ref> addresses the overfitting issue of <ref> by solving its distributionally robust counterpart. However, the DRO approach of considering the worst-case distribution within a ball around the empirical distribution can be overly conservative <cit.>. Next, we explore how employing auxiliary data can reduce this conservatism. § REDUCING CONSERVATISM OF <REF> VIA INTERSECTION OF WASSERSTEIN BALLS So far we have discussed the setting where we have access to an empirical distribution ℙ_N that is constructed from N i.i.d. samples of the true distribution ℙ^0. Suppose that we have an auxiliary distribution ℙ_N which is constructed from N i.i.d. samples {ξ^j = (x^j, y^j)}_j ∈ [N] of another distribution ℙ. In this section, we explore how auxiliary data can help us identify a subset of the Wasserstein ball 𝔅_ε(ℙ_N) in which ℙ^0 still resides. By shrinking the size of its ambiguity set, we expect to reduce the conservatism of <ref>. We start with the following assumption. We are given finite ε, ε > 0 such that W(ℙ^0, ℙ_N) ≤ε and W(ℙ^0, ℙ_N) ≤ε. We relax this assumption in Section <ref>. Given Assumption <ref>, we want to solve the revised problem: Inter-AROβ∈ℝ^ninfℚ∈𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N)sup𝔼_ℚ[ℓ^α_β (x, y)]. We first reformulate the intersected DR ARO problem (<ref>) as a semi-infinite optimization problem with finite variables and then provide a complexity result. <ref> admits the following reformulation. [ β, λ, λ, s, sinf ελ + ελ + 1/N∑_i=1^N s_i + 1/N∑_j=1^Ns_j; s.t. x∈ℝ^nsup{ℓ^α_β(x, l) - λ‖x^i - x‖_q - λ‖x^j - x‖_q}≤ s_i + κ(1 - l y^i)/2λ + s_j + κ(1 - l y^j)/2λ; ∀ i ∈ [N], j ∈ [N], l ∈{-1, 1}; β∈ℝ^n, λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+. ] The proof is in Appendix <ref>. 
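Before turning to the complexity of <ref>, the tractable reformulation of <ref> given above can be prototyped directly in a modeling language. The sketch below uses cvxpy with synthetic data, q = p = 2 (so both dual norms are the Euclidean norm), and illustrative values of ε, α, and κ; it is a minimal illustration rather than the authors' MOSEK implementation.
[language=Python, frame=single, breaklines=true, tabsize=1]
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, n = 50, 5
X = rng.normal(size=(N, n))
y = np.sign(X @ rng.normal(size=n) + 0.1 * rng.normal(size=N))

eps, alpha, kappa = 0.1, 0.05, 1.0   # Wasserstein radius, attack budget, label cost

beta = cp.Variable(n)
lam = cp.Variable(nonneg=True)
s = cp.Variable(N, nonneg=True)

margins = cp.multiply(y, X @ beta)   # y_i * beta^T x_i
adv = alpha * cp.norm(beta, 2)       # alpha * ||beta||_{p*} with p* = 2
constraints = [
    cp.logistic(-margins + adv) <= s,               # l^alpha_beta(x_i, y_i) <= s_i
    cp.logistic(margins + adv) - lam * kappa <= s,  # flipped-label constraint
    cp.norm(beta, 2) <= lam,                        # ||beta||_{q*} <= lambda, q* = 2
]
prob = cp.Problem(cp.Minimize(eps * lam + cp.sum(s) / N), constraints)
prob.solve()   # any exponential-cone-capable solver (e.g., ECOS, SCS, MOSEK)
print(prob.value, beta.value)

We now return to the intersected problem <ref>.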
Even though this problem recovers <ref> (hence admits tractable reformulations) when the radius ε of the second ball satisfies ε→∞, Proposition <ref> shows that it is NP-hard in the finite radius settings. We reformulate <ref> as an adjustable robust optimization problem <cit.>, and borrow tools from this literature to obtain the following result. <ref> is equivalent to an adjustable robust optimization problem with 𝒪(N·N) two-stage robust constraints, which is NP-hard even when N = N = 1. The proof is in Appendix <ref>. The adjustable robust optimization literature has developed a rich arsenal of relaxation techniques that can be leveraged for <ref>. We adopt the `static relaxation technique' <cit.> to restrict the feasible region of <ref> and obtain a tractable approximation. The following convex optimization problem is a feasible relaxation (safe approximation) of <ref>: Inter-ARO^⋆10em[ β, λ, λ, s, s, z_ij^+, z_ij^-inf ελ + ελ + 1/N∑_i=1^N s_i + 1/N∑_j=1^Ns_j; s.t. [ L^α(l·β^⊤x^i + z_ij^l⊤(x^j - x^i)) ≤ s_i + κ(1 - ly^i)2λ + s_j + κ(1 - ly^j)2λ, ‖ lβ - z^l_ij‖_q^⋆≤λ, ‖z^l_ij‖_q^⋆≤λ]; ∀ i ∈ [N], j ∈ [N], l ∈{ -1, 1 }; β∈ℝ^n, λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+, z^l_ij∈ℝ^n, ] where L^α(z) := log(1 + exp(- z + α·‖β‖_p^⋆)) is the univariate representation of ℓ_β^α. The proof is in Appendix <ref>. <ref> relaxes the NP-hard problem <ref> so that it becomes efficiently solvable, and it enjoys similar tractable formulations to <ref>. <ref> admits an exponential cone reformulation, analogously to Appendix <ref>. Recall that for ε large enough so that 𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N) = 𝔅_ε(ℙ_N), <ref> reduces to <ref>. The following corollary (proof in Appendix <ref>) shows that a similar desired property holds for the relaxed problem <ref>. That is, “not learning anything from auxiliary data” remains feasible: the static relaxation does not force learning from ℙ_N, it learns from auxiliary data only if the objective improves. Moreover, we show that as ε→∞, <ref> converges to <ref>. Feasibility of disregarding auxiliary data: Any feasible solution (β, λ, s) of <ref> gives a feasible solution (β, λ, λ, s, s, z^+_ij, z^-_ij) for <ref> with λ = 0, s = 0, z^+_ij = z^-_ij = 0. Convergence to <ref>: As ε→∞, the optimal value of <ref> converges to the optimal value of <ref>, with the same set of optimal β solutions. <ref> and Related Problems Recall that <ref> can simply ignore the auxiliary data once ε is set large enough, reducing this problem to <ref>. Moreover, notice that α = 0 reduces ℓ_β^α to ℓ_β, hence for α = 0 and ε = ∞ <ref> recovers the Wasserstein LR model of <cit.>. We next relate <ref> to the problems in the ARO literature that use auxiliary data {(x^j, y^j)}_j ∈ [N]. The works in this literature <cit.> propose solving the following for some w > 0: β∈ℝ^ninf 1/N + w N[ ∑_i ∈ [N]z^i ∈ℬ_p(α)sup{ℓ_β(x^i + z^i, y^i)} + w ∑_j ∈ [N]z^j ∈ℬ_p(α)sup{ℓ_β(x^j + z^j, y^j)}], where ℬ_p(α) := {z∈ℝ^n : ‖z‖_p ≤α}. We first observe that this resembles <ref>, with the empirical distribution ℙ_N being replaced with its mixture with ℙ_N: Problem (<ref>) is equivalent to: [ β∈ℝ^ninf 𝔼_ℚ_mix [ℓ^α_β(x, y)]; ] where ℚ_mix := λ·ℙ_N + (1-λ)·ℙ_N for λ = N/N + w N. The proof is in Appendix <ref>. Next, we give a condition on ε and ε to guarantee that the mixture distribution introduced in Proposition <ref> lives in 𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N), that is, the distribution ℚ_mix will be feasible in the sup problem of <ref>. 
For any λ∈ (0,1) and ℚ_mix := λ·ℙ_N + (1-λ)·ℙ_N, whenever ε + ε≥W(ℙ_N, ℙ_N) and ε/ε = λ/1 - λ are satisfied, we have ℚ_mix∈𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N). The proof is in Appendix <ref>. For λ = N/N + N, if the intersection 𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N) is nonempty, Proposition <ref> implies that a sufficient condition for this intersection to include the mixture ℚ_mix is ε / ε = N / N, which is intuitive since the radii of the Wasserstein ambiguity sets are typically chosen inversely proportional to the number of samples <cit.>. § SELECTING WASSERSTEIN RADII Our analyses thus far have assumed knowledge of DRO ball radii ε and ε (Assumptions <ref> and <ref>). These are unrealistic in most real-world scenarios. Here we discuss how to set ε and ε based on the data such that Problems <ref> and <ref> remain well-defined. We consider two settings. First we discuss the case where W(ℙ^0, ℙ) is known. Then, we discuss the most realistic scenario where this distance is unknown. To this end, we investigate the statistical properties of our distributionally and adversarially robust optimization models to be able to set ε and ε values. Choosing ϵ in <ref>. In order to relax Assumption <ref> in Problem <ref>, one needs to infer ε value from the empirical data so that ℙ^0 ∈𝔅_ε(ℙ_N) with a pre-specified level of confidence. The following theorem presents tight characterizations for ε so that the ball 𝔅_ε(ℙ_N) includes the true distribution ℙ^0 with arbitrarily high confidence, and shows that for an ε chosen in such manner, Problem <ref> is well-defined. The detailed statement and the proof are in Appendix <ref>. For light-tailed distribution ℙ^0 and ε≥𝒪(log(η^-1)/N)^1/n for η∈ (0,1), we have: (i) ℙ^0 ∈𝔅_ε(ℙ_N) with 1 - η confidence; (ii) <ref> overestimates true loss with 1-η confidence; (iii) <ref> is asymptotically consistent ℙ^0-a.s.; (iv) worst-case distributions for optimal solutions of <ref> are supported on at most N+1 outcomes. Choosing ϵ and ϵ' in <ref>. <ref> revises <ref> by intersecting 𝔅_ε(ℙ_N) with another ball 𝔅_ε(ℙ_N) centered at the auxiliary distribution. We need a nonempty intersection for <ref> to be well-defined. A sufficient condition follows from the triangle inequality: ε + ε≥W(ℙ_N, ℙ_N). Moreover, provided that ε≥W(ℙ_N, ℙ^0), a sufficient condition for 𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N) = 𝔅_ε(ℙ_N) is ε≥ε+W(ℙ_N, ℙ)+ W(ℙ_N, ℙ) (cf. Figure <ref>). While choosing such ε to reduce the size of the ambiguity set of <ref>, we want this intersection to include ℙ^0, assuming ε is set in light of Theorem <ref>. The auxiliary data ℙ_N is constructed from instances that are independently sampled from ℙ and thus Wasserstein finite sample statistics can estimate W(ℙ, ℙ_N). To have confidence guarantees on ℙ^0 ∈𝔅_ε(ℙ_N), however, we must additionally know W(ℙ^0, ℙ) which we use in the following result. Full statement of the theorem and its proof are in Appendix <ref>. For light-tailed ℙ^0 and ℙ, if ε≥𝒪(log(η_1^-1)/N)^1/n and ε≥W(ℙ^0, ℙ) + 𝒪(log(η_2^-1)/N)^1/n for η_1, η_2 ∈ (0,1) with η := η_1 + η_2 < 1, we have: (i) ℙ^0 ∈𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N) with 1 - η confidence; (ii) <ref> overestimates true loss with 1-η confidence. <ref> is not asymptotically consistent, given that N→∞ will let ε→W(ℙ^0, ℙ) due to the non-zero distance between the true distribution ℙ^0 and the auxiliary distribution ℙ. <ref> is thus not useful in asymptotic data regimes, which is not surprising given that we introduced it to reduce the conservatism of <ref> which by design arises in non-asymptotic settings. Knowledge of W(ℙ^0, ℙ). 
In the above results, we assumed that W(ℙ^0, ℙ) is known. However, this is challenging in most practical settings <cit.> and we estimate it via cross validation (as in the transfer learning and domain adaptation literature, <cit.>). For some special cases, we can use domain knowledge (e.g., the “Uber vs Lyft” example of <cit.>). For example, in a differential privacy context, a data holder shares a subset of opt-in data to form ℙ_N, and generates a privacy-preserving synthetic dataset from the rest. Due to challenges in synthetic data generation under privacy constraints, the synthetic distribution approximates the true distribution, resulting in a nonzero Wasserstein distance <cit.>. Using this distance will complete the above discussion. Another research direction relies on W(ℙ_N, ℙ) when it is known, especially when synthetic data generators are trained on the empirical dataset. By employing Wasserstein GANs, which minimize the Wasserstein-1 distance, the distance between the generated distribution and the training distribution is minimized. This ensures that the synthetic distribution remains within a small radius of the training distribution <cit.>. § EXPERIMENTS We conduct a series of experiments to test the proposed DR ARO models using empirical and auxiliary datasets. We use the following abbreviations: ERM and ARO stand for solving problems <ref> (i.e., minimization of the empirical logistic loss) and <ref> (i.e., adversarial training for logistic loss), respectively. ARO+Aux refers to solving problem (<ref>), that is, replacing the empirical distribution of <ref> with its mixture with auxiliary data. DRO+ARO is solving <ref>, which is the Wasserstein DR counterpart of <ref>. Finally, DRO+ARO+Aux refers to solving <ref>, which revises <ref> by intersecting its ambiguity set with a Wasserstein ball built using auxiliary data. Note that, ERM, ARO, and DRO+ARO are oblivious to auxiliary data. Finally, recall that DRO+ARO and DRO+ARO+Aux are the models that we propose. All Wasserstein radii of DR models, as well as the weight parameters of ARO+Aux are cross-validated. Implementation details are in Appendix <ref>. §.§ UCI datasets (auxiliary data is synthetic) We compare the out-of-sample error rates of each method on 5 UCI datasets for classification <cit.>. For each dataset, we run 10 simulations as follows: (i) Select 40% of the data as a test set (N_te∝ 0.4); (ii) Sample 25% of the remaining to form a training set (N ∝ 0.6 · 0.25); (iii) The rest (N∝ 0.6 · 0.75) is used to fit a synthetic generator Gaussian Copula from the SDV package <cit.>, to sample auxiliary data from. The mean errors on the test set are reported in Table <ref> for ℓ_2-attacks of strength α = 0.05. The best error is always achieved by DRO+ARO+Aux, followed by DRO+ARO, DRO+Aux, ARO, ERM, respectively. In Appendix <ref>, we report similar results for 5 more UCI datasets along with attack strengths α∈{0, 0.05, 0.2}, and share data preprocessing details and standard deviations. §.§ MNIST/EMNIST datasets (auxiliary data is out-of-domain) We use the MNIST <cit.> digits dataset to classify whether a digit is 1 or 7. For an auxiliary dataset, we use the EMNIST <cit.> digits dataset, as the authors of <cit.> summarize that the EMNIST dataset has additional samples “collected from high school students and pose a more challenging problems”. Since EMNIST digits include MNIST digits, we removed the latter from the EMNIST dataset. 
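For concreteness, the 1-versus-7 task construction just described can be sketched as follows. This is an illustration only, not the code behind Table <ref>: MNIST is pulled from OpenML via scikit-learn, and the EMNIST side is merely indicated, since its loading and the removal of MNIST duplicates are assumed to be handled separately.

import numpy as np
from sklearn.datasets import fetch_openml

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
mask = (y == "1") | (y == "7")
X_bin = X[mask] / 255.0                      # rescale pixel intensities to [0, 1]
y_bin = np.where(y[mask] == "1", 1, -1)      # labels in {-1, +1}
# The EMNIST digits would be filtered to the classes 1 and 7 in the same way and, as
# described above, purged of the instances that also appear in MNIST.
print(X_bin.shape)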
We simulated the following 20 times: (i) Sample 1,000 instances from the MNIST dataset as a training set; (ii) The remaining instances in the MNIST dataset are our test set; (iii) Sample 1,000 instances from the EMNIST dataset as an auxiliary dataset. Table <ref> reports the mean test errors under various adversarial attack regimes. The results are analogous to UCI experiments. §.§ Artificial experiments (auxiliary data is perturbed) We generate empirical and auxiliary datasets by controlling their data-generating distributions in line with the standard practice (more details in Appendix <ref>). We simulate 25 cases, each with N = 100 training, N = 200 auxiliary, and N_te = 10,000 test instances and n=100. The performance of benchmark models with varying ℓ_2-attacks is available in Figure <ref> (left). ERM provides the worst performance, followed by ARO, and our DRO+ARO+Aux model gives the best performance. The relationship between DRO+ARO and ARO+Aux is not monotonic: the latter works better in larger attack regimes, conforming to the robust overfitting phenomenon. Finally, Adv+DRO+Aux always performs the best. We conduct a similar simulation for datasets with n=100, and gradually increase N = N to report median (50%± 15% quantiles shaded) runtimes of each method (cf. Figure <ref>, left). The fastest methods is ARO, followed by ERM, ARO+Aux, DRO+ARO, and DRO+ARO+Aux. The slowest is DRO+ARO+Aux as expected, but the runtime still scales graciously. § CONCLUSIONS AND FUTURE WORK We formulate the distributionally robust counterpart of adversarially robust logistic regression. Additionally, we demonstrate how to effectively utilize appropriately curated auxiliary data (if available) to mitigate the inherent conservatism of distributional robustness. We illustrate the superiority of the proposed approach in terms of out-of-sample performance and confirm its scalability in practical settings. It would be natural to extend our results to more loss functions as is typical for theoretical DRO studies stemming from logistic regression. Moreover, the recent breakthroughs in the area of foundation models naturally pose the question of whether the ideas presented in this work apply to these models. For example, <cit.> uses a pre-trained language model (PLM) to generate synthetic pairs of text sequences and labels which are then used to train a downstream model. It would be interesting to adapt our ideas to the text domain to explore robustness in the presence of two PLMs. Disclaimer: This paper was prepared for informational purposes by the CDAO group of JPMorganChase and its affiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. abbrv § NOTATION Throughout the paper, bold lower case letters denote vectors, while standard lower case letters are reserved for scalars. A generic data instance is modeled as (x, y) ∈Ξ := ℝ^n ×{-1, +1}. 
For any p>0, ‖x‖_p denotes the rational norm (∑_i=1^n | x_i |^p)^1/p and ‖x‖_p^⋆ is its dual norm where 1/p + 1/p^⋆ = 1 with the convention of 1/1 + 1/∞ = 1. The set of probability distributions supported on Ξ is denoted by 𝒫(Ξ). The Dirac measure supported on ξ is denoted by δ_ξ. The logloss is defined as ℓ_β(x, y) = log(1+ exp(-y ·β^⊤x)) and its associated univariate loss is L(z) = log(1 + exp(-z)) so that L(y ·β^⊤x) = ℓ_β(x, y). The exponential cone is denoted by 𝒦_exp = cl({ω∈ℝ^3 : ω_1 ≥ω_2 ·exp(ω_3/ω_2), ω_1 > 0, ω_2 > 0 }) where cl is the closure operator. The Lipschitz modulus of a univariate function f is defined as Lip(f) := sup_z, z' ∈ℝ{|f(z) - f(z')|/|z-z'| : z ≠ z } whereas its effective domain is dom(f) = { z : f(z) < +∞}. For a function f:ℝ^n ↦ℝ, its convex conjugate is f^*(z) = sup_x∈ℝ^nz^⊤x - f(x). We reserve α≥ 0 for the radii of the norms of adversarial attacks on the features and ε≥ 0 for the radii of distributional ambiguity sets. § PROOFS §.§ Proof of Proposition <ref> For any β∈ℝ^n, with standard robust optimization arguments <cit.>, we can show that z: ‖z‖_p ≤αsup{ℓ_β(x + z, y)} z: ‖z‖_p ≤αsup{log(1 + exp(-y ·β^⊤ (x + z))) } log(1 + exp( z: ‖z‖_p ≤αsup{ -y ·β^⊤ (x + z) })) log(1 + exp( - y·β^⊤x + α·z: ‖z‖_p ≤ 1sup{ -y ·β^⊤z})) log(1 + exp( - y·β^⊤x + α·‖ -y ·β‖_p^⋆ )) log(1 + exp( - y·β^⊤x + α·‖β‖_p^⋆ )), where the first step follows from the definition of logloss, the second step follows from the fact that log and exp are increasing functions, the third step takes the constant terms out of the sup problem and exploits the fact that the optimal solution of maximizing a linear function will be at an extreme point of the ℓ_p ball, the fourth step uses the definition of dual norm, and finally the redundant -y∈{-1,+1} is omitted from the dual norm. We can therefore define the adversarial loss ℓ^α_β(x, y) := log(1 + exp( - y·β^⊤x + α·‖β‖_p^⋆ )) where α models the strength of the adversary, β is the decision vector, and (x, y) is an instance. Replacing sup_z: ‖z‖_p ≤α{ℓ_β(x + z, y)} in <ref> with ℓ^α_β(x, y) concludes the equivalence. Furthermore, to see Lip(L^α) = 1, firstly note that since L^α(z) = log(1 + exp(-z + α·‖β‖_p^⋆)) is differentiable everywhere in z and its gradient L^α' is bounded everywhere, we have that Lip(L^α) is equal to sup_z ∈ℝ{ |L^α'(z)| }. We thus have L^α'(z) = -exp(-z + α·‖β‖_p^⋆)1 + exp(-z + α·‖β‖_p^⋆) = -11 + exp(z - α·‖β‖_p^⋆)∈ (-1, 0) and |L^α'(z)| = [ 1 + exp(z - α·‖β‖_p^⋆) ]^-1⟶ 1 as z ⟶ -∞. §.§ Proof of Corollary <ref> Proposition <ref> lets us represent <ref> as the DR counterpart of empirical minimization of ℓ^α_β: [ βminimize ℚ∈𝔅_ε(ℙ_N)sup 𝔼_ℚ[ ℓ^α_β(x,y) ]; subject to β∈ℝ^n. ] Since the univariate loss L^α(z) := log(1+ exp(-z + α·‖β‖_p^⋆)) satisfying the identity L^α(⟨ y·x, β⟩) = ℓ^α_β(x, y) is Lipschitz continuous (cf. Proposition <ref>), Theorem 14 (ii) of <cit.> is immediately applicable. We can therefore rewrite (<ref>) as: [ β, λ, sminimize λ·ε + 1N∑_i∈[N] s_i ; subject to L^α(⟨ y^i ·x, β⟩ ) ≤ s_i ∀ i ∈ [N]; L^α(⟨ -y^i ·x, β⟩ ) - λ·κ≤ s_i ∀ i ∈ [N]; Lip(L^α) ·‖β‖_q^⋆≤λ ; β∈ℝ^n, λ≥ 0, s∈ℝ^N. ] Replacing Lip(L^α) = 1 and substituting the definition of L^α concludes the proof. §.§ Proof of Proposition <ref> We prove Proposition <ref> by constructing the optimization problem in its statement. We will thus dualize the inner sup problem of <ref> for fixed β. To this end, we present a sequence of reformulations to the inner problem and then exploit strong semi-infinite duality. 
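Before carrying these reformulations out, we record a quick numerical sanity check of the closed form for the adversarial loss ℓ^α_β obtained above. The sketch below uses p = 2 (so that the dual norm is again the ℓ_2-norm), random data and an arbitrary attack budget; it verifies that the maximising perturbation attains the closed-form value and that random feasible perturbations never exceed it.

import numpy as np

rng = np.random.default_rng(1)
n, alpha = 5, 0.3
beta, x, y = rng.normal(size=n), rng.normal(size=n), -1

closed = np.log1p(np.exp(-y * beta @ x + alpha * np.linalg.norm(beta)))

z_star = -y * alpha * beta / np.linalg.norm(beta)        # maximising perturbation
attained = np.log1p(np.exp(-y * beta @ (x + z_star)))

zs = rng.normal(size=(10_000, n))                        # random points of the alpha-ball
zs *= alpha * rng.uniform(size=(10_000, 1)) / np.linalg.norm(zs, axis=1, keepdims=True)
brute = np.log1p(np.exp(-y * ((x + zs) @ beta))).max()

print(closed, attained, brute)    # closed == attained, and brute never exceeds them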
By interchanging ξ = (x, y), we first rewrite the inner problem as [ ℚ, Π, Πmaximize ∫_ξ∈Ξℓ^α_β(ξ) ℚ(ξ) ; subject to ∫_ξ, ξ'∈Ξ^2 d(ξ, ξ') Π(ξ, ξ') ≤ε ; ∫_ξ∈ΞΠ(ξ, ξ') = ℙ_N(ξ') ∀ξ'∈Ξ; ∫_ξ'∈ΞΠ(ξ, ξ') = ℚ(ξ) ∀ξ∈Ξ; ∫_ξ, ξ'∈Ξ^2 d(ξ, ξ') Π(ξ, ξ') ≤ε ; ∫_ξ∈ΞΠ(ξ, ξ') = ℙ_N(ξ') ∀ξ'∈Ξ; ∫_ξ'∈ΞΠ(ξ, ξ') = ℚ(ξ) ∀ξ∈Ξ; ℚ∈𝒫(Ξ), Π∈𝒫(Ξ^2), Π∈𝒫(Ξ^2). ] Here, the first three constraints specify that ℚ and ℙ_N have a Wasserstein distance bounded by ε from each other, modeled via their coupling Π. The latter three constraints similarly specify that ℚ and ℙ_N are at most ε away from each other, modeled via their coupling Π. As ℚ lies in the intersection of two Wasserstein balls in <ref>, the marginal ℚ is shared between Π and Π. We can now substitute the third constraint into the objective and the last constraint and obtain: [ Π, Πmaximize ∫_ξ∈Ξℓ^α_β(ξ) ∫_ξ'∈ΞΠ(ξ, ξ') ; subject to ∫_ξ, ξ'∈Ξ^2 d(ξ, ξ') Π(ξ, ξ') ≤ε ; ∫_ξ∈ΞΠ(ξ, ξ') = ℙ_N(ξ') ∀ξ'∈Ξ; ∫_ξ, ξ'∈Ξ^2 d(ξ, ξ') Π(ξ, ξ') ≤ε ; ∫_ξ∈ΞΠ(ξ, ξ') = ℙ_N(ξ') ∀ξ'∈Ξ; ∫_ξ'∈ΞΠ(ξ, ξ') = ∫_ξ'∈ΞΠ(ξ, ξ') ∀ξ∈Ξ; Π∈𝒫(Ξ^2), Π∈𝒫(Ξ^2). ] Denoting by ℚ^i(ξ) := Π(ξ|ξ^i) the conditional distribution of Π upon the realization of ξ' = ξ^i and exploiting the fact that ℙ_N is a discrete distribution supported on the N data points {ξ^i}_i ∈ [N], we can use the marginalized representation Π(ξ,ξ') =1/N∑_i=1^N δ_ξ^i(ξ')ℚ^i(ξ). Similarly, we can introduce ℚ^i(ξ) := Π(ξ|ξ^i) for {ξ^i}_i ∈ [N] to exploit the marginalized representation Π(ξ,ξ') =1/N∑_j=1^Nδ_ξ^j(ξ')ℚ^j(ξ). By using this marginalization representation, we can use the following simplification for the objective function: ∫_ξ∈Ξℓ^α_β(ξ) ∫_ξ'∈ΞΠ(ξ, ξ') = 1/N∑_i=1^N ∫_ξ∈Ξℓ^α_β(ξ) ∫_ξ'∈Ξδ_ξ^i(ξ')ℚ^i(ξ) = 1/N∑_i=1^N ∫_ξ∈Ξℓ^α_β(ξ) ℚ^i(ξ). Applying analogous reformulations to the constraints leads to the following reformulation of the inner sup problem of <ref>: [ ℚ, ℚmaximize 1N∑_i=1^N ∫_ξ∈Ξℓ^α_β(ξ) ℚ^i(ξ) ; subject to 1N∑_i=1^N ∫_ξ∈Ξ d(ξ, ξ^i) ℚ^i (ξ) ≤ε ; 1N∑_j=1^N∫_ξ∈Ξ d(ξ, ξ^j) ℚ^j (ξ) ≤ε ; 1N∑_i=1^N ℚ^i(ξ) = 1N∑_j=1^Nℚ^j(ξ) ∀ξ∈Ξ; ℚ^i ∈𝒫(Ξ), ℚ^j∈𝒫(Ξ) ∀ i ∈ [N], ∀ j ∈ [N]. ] We now decompose each ℚ^i into two measures corresponding to y = ± 1, so that ℚ^i( (x, y)) = ℚ_+1^i(x) for y = +1 and ℚ^i( (x, y)) = ℚ_-1^i(x) for y = -1. We similarly represent each ℚ^j via ℚ_+1^j and ℚ_-1^j depending on y. Note that these new measures are not probability measures as they do not integrate to 1, but non-negative measures supported on ℝ^n (denoted ∈𝒫_+(ℝ^n)). We get: [ ℚ_± 1, ℚ_± 1maximize 1N∑_i=1^N ∫_x∈ℝ^n [ ℓ^α_β(x, + 1) ℚ_+1^i (x) + ℓ^α_β(x, -1)ℚ_-1^i(x) ] ; subject to 1N∑_i=1^N ∫_x∈ℝ^n [d( (x,+1), ξ^i) ℚ_+1^i(x) + d( (x,-1), ξ^i) ℚ_-1^i(x)] ≤ε ; 1N∑_j=1^N∫_x∈ℝ^n [d( (x,+1), ξ^j) ℚ_+1^j(x) + d( (x,-1), ξ^j) ℚ_-1^j(x)] ≤ε ; ∫_x∈ℝ^nℚ_+1^i(x) + ℚ_-1^i(x) = 1 ∀ i ∈ [N]; ∫_x∈ℝ^nℚ_+1^j(x) + ℚ_-1^j(x) = 1 ∀ j ∈ [N]; 1N∑_i=1^N ℚ^i_+1(x) = 1N∑_j=1^Nℚ^j_+1(x) ∀x∈ℝ^n; 1N∑_i=1^N ℚ^i_-1(x) = 1N∑_j=1^Nℚ^j_-1(x) ∀x∈ℝ^n; ℚ^i_± 1∈𝒫_+(ℝ^n), ℚ^j_± 1∈𝒫_+(ℝ^n) ∀ i ∈ [N], j ∈ [N]. 
] Next, we explicitly write the definition of the metric d(·, ·) in the first two constraints as well as use auxiliary measures 𝔸_± 1∈𝒫_+(ℝ^n) to break down the last two equality constraints: [ 𝔸_± 1, ℚ_± 1, ℚ_± 1maximize 1N∑_i=1^N ∫_x∈ℝ^n [ ℓ^α_β(x, + 1) ℚ_+1^i (x) + ℓ^α_β(x, -1)ℚ_-1^i(x) ] ; subject to 1N∫_x∈ℝ^n[ κ·∑_i∈ [N] : y^i = -1ℚ_+1^i(x) + κ·∑_i∈ [N] : y^i = +1ℚ_-1^i(x) +; ∑_i = 1^N ‖x - x^i ‖_q·[ ℚ_+1^i(x) + ℚ_-1^i(x) ] ] ≤ε ; 1N∫_x∈ℝ^n[ κ·∑_j∈ [N] : y^j = -1ℚ_+1^j(x) + κ·∑_j∈ [N] : y^j = +1ℚ_-1^j(x) +; ∑_j = 1^N‖x - x^j‖_q· [ ℚ_+1^j(x) + ℚ_-1^j(x) ] ] ≤ε ; ∫_x∈ℝ^nℚ_+1^i(x) + ℚ_-1^i(x) = 1 ∀ i ∈ [N]; ∫_x∈ℝ^nℚ_+1^j(x) + ℚ_-1^j(x) = 1 ∀ j ∈ [N]; 1N∑_i=1^N ℚ^i_+1(x) = 𝔸_+1(x) ∀x∈ℝ^n; 1N∑_j=1^Nℚ^j_+1(x) = 𝔸_+1(x) ∀x∈ℝ^n; 1N∑_i=1^N ℚ^i_-1(x) = 𝔸_-1(x) ∀x∈ℝ^n; 1N∑_j=1^Nℚ^j_-1(x) = 𝔸_-1(x) ∀x∈ℝ^n; 𝔸_± 1∈𝒫_+(ℝ^n), ℚ^i_± 1∈𝒫_+(ℝ^n), ℚ^j_± 1∈𝒫_+(ℝ^n) ∀ i ∈ [N], j ∈ [N]. ] The following semi-infinite optimization problem, obtained by standard algebraic duality, is a strong dual to the above problem since ε, ε > 0 <cit.>. [ λ, λ, s, s, p_± 1, p_± 1minimize 1N[ Nελ + Nελ + ∑_i=1^N s_i + ∑_j=1^Ns_j ] ; subject to κ1 - y^i2λ + λ‖x^i - x‖_q + s_i + p_+1(x)N≥ℓ^α_β(x, + 1) ∀ i ∈ [N] , ∀x∈ℝ^n; κ1 - y^j2λ + λ‖x^j - x‖_q + s_j + p_+1(x)N≥ 0 ∀ j ∈ [N] , ∀x∈ℝ^n; κ1 + y^i2λ + λ‖x^i - x‖_q + s_i + p_-1(x)N≥ℓ^α_β(x, - 1) ∀ i ∈ [N] , ∀x∈ℝ^n; κ1 + y^j2λ + λ‖x^j - x‖_q + s_j + p_-1(x)N≥ 0 ∀ j ∈ [N] , ∀x∈ℝ^n; p_+1(x) + p_+1(x) ≤ 0 ; p_-1(x) + p_-1(x) ≤ 0 ; λ∈ℝ_+, λ∈ℝ+, s∈ℝ^N, s∈ℝ^N ; p_± 1: ℝ^n ↦ℝ, p_± 1: ℝ^n ↦ℝ. ] To eliminate the (function) variables p_+1 and p_+1, we first summarize the constraints they appear p_+1(x) ≥ N ·[ℓ^α_β(x, + 1) - s_i - λ‖x^i - x‖_q - κ1 - y^i2λ] ∀ i ∈ [N], ∀x∈ℝ^n p_+1(x) ≥N·[ -s_j - λ‖x^j - x‖_q - κ1 - y^j2λ] ∀ j ∈ [N] , ∀x∈ℝ^n p_+1(x) + p_+1(x) ≤ 0 ∀x∈ℝ^n, and notice that this system is equivalent to the epigraph-based reformulation of the following constraint ℓ^α_β(x, + 1) - s_i - λ‖x^i - x‖_q - κ1 - y^i2λ + NN·[ -s_j- λ‖x^j - x‖_q - κ1 - y^j2λ] ≤ 0 ∀ i ∈ [N], ∀ j ∈ [N], ∀x∈ℝ^n. We can therefore eliminate p_+1 and p_+1. We can also eliminate p_-1 and p_-1 since we similarly have: p_-1(x) ≥ N ·[ℓ^α_β(x, - 1) - s_i - λ‖x^i - x‖_q - κ1 + y^i2λ] ∀ i ∈ [N], ∀x∈ℝ^n p_-1(x) ≥N·[ -s_j - λ‖x^j - x‖_q - κ1 + y^j2λ] ∀ j ∈ [N] , ∀x∈ℝ^n p_-1(x) + p_-1(x) ≤ 0 ∀x∈ℝ^n ℓ^α_β(x, - 1) - s_i - λ‖x^i - x‖_q - κ1 + y^i2λ + NN·[ -s_j- λ‖x^j - x‖_q - κ1 + y^j2λ] ≤ 0 ∀ i ∈ [N], ∀ j ∈ [N], ∀x∈ℝ^n. This trick of eliminating p_± 1, p_± 1 is due to the auxiliary distributions 𝔸_± 1 that we introduced; without them, the dual problem is substantially harder to work with. We therefore obtain the following reformulation of the dual problem [ λ, λ, s, sminimize 1N[ Nελ + Nελ + ∑_i=1^N s_i + ∑_j=1^Ns_j ] ; subject to x∈ℝ^nsup{ℓ^α_β(x, + 1) - λ‖x^i - x‖_q - NNλ‖x^j - x‖_q}≤; s_i + κ1 - y^i2λ + NN·[ s_j + κ1 - y^j2λ] ∀ i ∈ [N], ∀ j ∈ [N]; x∈ℝ^nsup{ℓ^α_β(x, - 1) - λ‖x^i - x‖_q - NNλ‖x^j - x‖_q}≤; s_i + κ1 + y^i2λ + NN·[ s_j + κ1 + y^j2λ] ∀ i ∈ [N], ∀ j ∈ [N]; λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+ ] where we replaced the ∀x∈ℝ^n with the worst case realizations by taking the suprema of the constraints over x. We also added non-negativity on the definition of s and s which is without loss of generality since this is implied by the first two constraints, which is due to the fact that in the primal reformulation the “integrates to 1” constraints (whose associated dual variables are s and s) can be written as [ ∫_x∈ℝ^nℚ_+1^i(x) + ℚ_-1^i(x) ≤ 1 ∀ i ∈ [N]; ∫_x∈ℝ^nℚ_+1^j(x) + ℚ_-1^j(x) ≤ 1 ∀ j ∈ [N] ] due to the objective pressure. 
Relabeling NNλ as λ and NNs_j as s_j simplifies the problem to: [ λ, λ, s, sminimize ελ + ελ + 1N∑_i=1^N s_i + 1N∑_i=1^Ns_j ; subject to x∈ℝ^nsup{ℓ^α_β(x, + 1) - λ‖x^i - x‖_q - λ‖x^j - x‖_q}≤; s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ ∀ i ∈ [N], ∀ j ∈ [N]; x∈ℝ^nsup{ℓ^α_β(x, - 1) - λ‖x^i - x‖_q - λ‖x^j - x‖_q}≤; s_i + κ1 + y^i2λ + s_j + κ1 + y^j2λ ∀ i ∈ [N], ∀ j ∈ [N]; λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+. ] Combining all the sup constraints with the help of an an auxiliary parameter l ∈{-1, 1} and replacing this problem with the inner problem of <ref> concludes the proof. §.§ Proof of Proposition <ref> We first present a technical lemma that will allow us to rewrite a specific type of difference of convex functions (DC) maximization problem that appears in the constraints of <ref>. Rewriting such DC maximization problems is one of the key steps in reformulating Wasserstein DRO problems, and our lemma is inspired from <cit.>, <cit.>, and <cit.> who reformulate maximizing the difference of a convex function and a norm. Our DRO problem <ref>, however, comprises two ambiguity sets, hence the DC term that we investigate will be the difference between a convex function and the sum of two norms. This requires a new analysis and we will see that <ref> is NP-hard due to this additional difficulty. Suppose that L: ℝ↦ℝ is a closed convex function, and ‖·‖_q is a norm. For vectors ω, a, a∈ℝ^n and scalars λ, λ > 0, we have: x∈ℝ^nsup{L(ω^⊤x) - λ‖a - x‖_q - λ‖a - x‖_q} = θ∈dom(L^*)sup - L^*(θ) + θ·ω^⊤a + θ·z∈ℝ^ninf{z^⊤ (a - a) : |θ|·‖ω - z‖_q^⋆≤λ, |θ|·‖z‖_q^⋆≤λ} We denote by f_ω(x) = ω^⊤x and by g the convex function g(x) = g_1(x) + g_2(x) where g_1(x) := λ‖a - x‖_q and g_2(x):= λ‖a - x‖_q, and reformulate the sup problem as x∈ℝ^nsup L(ω^⊤x) - g(x) = x∈ℝ^nsup (L ∘ f_ω) (x) - g(x) = z∈ℝ^nsup g^*(z) - (L ∘ f_ω)^*(z), where the first identity follows from the definition of composition and the second identity employs Toland's duality <cit.> to rewrite difference of convex functions optimization. By using infimal convolutions <cit.>, we can reformulate g^*: g^*(z) = z_1, z_2inf{g_1^*(z_1) + g_2^*(z_2) : z_1 + z_2 = z} = z_1, z_2inf{z_1^⊤a + z_2^⊤a : z_1 + z_2 = z, ‖z_1 ‖_q^⋆≤λ, ‖z_2 ‖_q^⋆≤λ}, where the second step uses the definitions of g_1^*(z_1) and g_2^*(z_2). Moreover, we show (L ∘ f_ω)^*(z) = sup_x∈ℝ^n z^⊤x - L(ω^⊤x) = sup_t∈ℝ, x∈ℝ^n{z^⊤x - L(t) : t = ω^⊤x} = inf_θ∈ℝ sup_t∈ℝ, x∈ℝ^nz^⊤x - L(t) - θ· (ω^⊤x - t) = inf_θ∈ℝ sup_t∈ℝ sup_x∈ℝ^n (z - θ·ω)^⊤x - L(t) + θ· t = inf_θ∈ℝ sup_t∈ℝ -L(t) + θ· t if θ·ω = z + ∞ otherwise. = inf_θ∈ℝ L^*(θ) if θ·ω = z + ∞ otherwise. = inf_θ∈dom(L^*){ L^*(θ) : θ·ω = z}, where the first identity follows from the definition of the convex conjugate, the second identity introduces an additional variable to make this an equality-constrained optimization problem, the third identity takes the Lagrange dual (which is a strong dual since the problem maximizes a concave objective with a single equality constraint), the fourth identity rearranges the expressions, the fifth identity exploits unboundedness of x, the sixth identity uses the definition of convex conjugates and the final identity replaces the feasible set θ∈ℝ with the domain of L^⋆ without loss of generality as this is an inf problem. 
Replacing the conjugates allows us to conclude that the maximization problem equals [ z∈ℝ^nsup g^*(z) + θ∈dom(L^*)sup{- L^*(θ) : θ·ω = z}; = z∈ℝ^n, θ∈dom(L^*)sup{ g^*(z) - L^*(θ) : θ·ω = z}; = θ∈dom(L^*)sup g^*(θ·ω) - L^*(θ); = θ∈dom(L^*)sup - L^*(θ) + z_1, z_2 ∈ℝ^ninf{z_1^⊤a + z_2^⊤a : z_1 + z_2 = θ·ω , ‖z_1 ‖_q^⋆≤λ, ‖z_2 ‖_q^⋆≤λ}; = θ∈dom(L^*)sup - L^*(θ) + θ·z_1, z_2 ∈ℝ^ninf{z_1^⊤a + z_2^⊤a : z_1 + z_2 = ω , |θ|·‖z_1 ‖_q^⋆≤λ, |θ|·‖z_2 ‖_q^⋆≤λ}; = θ∈dom(L^*)sup - L^*(θ) + θ·ω^⊤a + θ·z∈ℝ^ninf{z^⊤ (a - a) : |θ|·‖ω - z‖_q^⋆≤λ, |θ|·‖z‖_q^⋆≤λ}. ] Here, the first identity follows from writing the problem as a single maximization problem, the second identity follows from the equality constraint, the third identity follows from the definition of the conjugate g^*, the fourth identity is due to relabeling z_1 = θ·z_1 and z_2 = θ·z_2, and the fifth identity is due to a variable change (z_1 = ω - z_2 relabeled as z). DC maximization terms similar to the one dealt by Lemma <ref> appear on the left-hand side of the constraints of <ref> (cf. formulation in Proposition <ref>). These constraints would admit a tractable reformulation for the case without auxiliary data because the inf term in the reformulation presented in Lemma <ref> does not appear in such cases. To see this, eliminate the second norm (the one associated with auxiliary data) by taking λ = 0, which will cause the constraint |θ|·‖z‖_q^⋆≤λ to force z = 0, and the alternative formulation will thus be: θ∈dom(L^*)sup{ - L^*(θ) + θ·ω^⊤a} if sup_θ∈dom(L^*){|θ|}·‖z‖_q^⋆≤λ +∞ otherwise = L(ω^⊤a) if Lip(L) ·‖z‖_q^⋆≤λ +∞ otherwise where we used the fact that L = L^** and sup_θ∈dom(L)|θ| = Lip(L) since L is closed convex <cit.>. Hence, the DC maximization can be represented with a convex function with an additional convex inequality, making the constraints tractable for the case without auxiliary data. For the case with auxiliary data, however, the sup_θinf_z structure makes these constraints equivalent to two-stage robust constraints (with uncertain parameter θ and adjustable variable z), bringing an adjustable robust optimization (<cit.>) perspective to <ref>. By using the univariate representation ℓ_β^α(x, y) = L^α(y ·β^⊤x), <ref> can be written as [ β, λ, λ, s, sminimize ελ + ελ + 1N∑_j=1^N s_j + 1N∑_i=1^Ns_i ; subject to x∈ℝ^nsup{ L^α(β^⊤x) - λ‖x^i - x‖_q - λ‖x^j - x‖_q}≤; s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ ∀ i ∈ [N], ∀ j ∈ [N]; x∈ℝ^nsup{L^α(-β^⊤x) - λ‖x^i - x‖_q - λ‖x^j - x‖_q}≤; s_i + κ1 + y^i2λ + s_j + κ1 + y^j2λ ∀ i ∈ [N], ∀ j ∈ [N]; β∈ℝ^n, λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+, ] and applying Lemma <ref> to the left-hand side of the constraints gives: 2em[ β, λ, λ, s, sminimize ελ + ελ + 1N∑_j=1^N s_j + 1N∑_i=1^Ns_i ; subject to θ∈dom(L^*)sup - L^α *(θ) + θ·β^⊤x^i + θ·z∈ℝ^ninf{z^⊤ (x^j - x^i) : |θ|·‖β - z‖_q^⋆≤λ, |θ|·‖z‖_q^⋆≤λ}≤; s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ ∀ i ∈ [N], ∀ j ∈ [N] ; θ∈dom(L^*)sup - L^α *(θ) - θ·β^⊤x^i + θ·z∈ℝ^ninf{z^⊤ (x^j - x^i) : |θ|·‖ -β - z‖_q^⋆≤λ, |θ|·‖z‖_q^⋆≤λ}≤; s_i + κ1 + y^i2λ + s_j + κ1 + y^j2λ ∀ i ∈ [N], ∀ j ∈ [N] ; β∈ℝ^n, λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+. 
] Which, equivalently, can be written as the following problem with 2 N·N two-stage robust constraints: Inter-adjustable1.5em[ β, λ, λ, s, sminimize ελ + ελ + 1N∑_j=1^N s_j + 1N∑_i=1^Ns_i ; subject to [ ∀θ∈dom(L^*), ∃z∈ℝ^n : - L^α *(θ) + θ·β^⊤x^i + θ·z^⊤ (x^j - x^i) ≤ s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ |θ|·‖β - z‖_q^⋆≤λ |θ|·‖z‖_q^⋆≤λ]; ∀ i ∈ [N], ∀ j ∈ [N] ; [ ∀θ∈dom(L^*), ∃z∈ℝ^n : - L^α *(θ) - θ·β^⊤x^i + θ·z^⊤ (x^j - x^i) ≤ s_i + κ1 + y^i2λ + s_j + κ1 + y^j2λ |θ|·‖ -β - z‖_q^⋆≤λ |θ|·‖z‖_q^⋆≤λ]; ∀ i ∈ [N], ∀ j ∈ [N] ; β∈ℝ^n, λ≥ 0, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+. ] By using adjustable robust optimization theory, we show that this problem is NP-hard even in the simplest setting. To this end, take N = N = 1 as well as κ = 0; the formulation presented in Proposition <ref> reduces to: [ β, λ, λ,s, sminimize ελ + ελ + s + s; subject to x∈ℝ^nsup{ℓ^α_β(x, l) - λ‖x^1 - x‖_q - λ‖x^1 - x‖_q}≤ s_1 + s_1 ∀ l ∈{-1, 1}; β∈ℝ^n, λ≥ 0, λ≥ 0, s ≥ 0, s≥ 0. ] The worst case realization of l ∈{-1,1} will always make ℓ_β^α(x,l) = log(1 + exp(-l·β^⊤x + α·‖β‖_p^⋆)) equal to ς_β^α(x) = log(1 + exp(| l·β^⊤x| + α·‖β‖_p^⋆)), where ς inherits similar properties from ℓ: it is convex in β and its univariate representation S^α has the same Lipschitz constant with L^α. We can thus represent the above problem as [ β, λ, λ,s, sminimize ελ + ελ + s + s; subject to x∈ℝ^nsup{S^α(β^⊤x) - λ‖x^1 - x‖_q - λ‖x^1 - x‖_q}≤ s + s; β∈ℝ^n, λ≥ 0, λ≥ 0, s ≥ 0, s≥ 0. ] Substituting s + s into the objective (due to the objective pressure) allows us to reformulate the above problem as [ β, λ, λminimize ελ + ελ + x∈ℝ^nsup{S^α(β^⊤x) - λ‖x^1 - x‖_q - λ‖x^1 - x‖_q}; subject to β∈ℝ^n, λ≥ 0, λ≥ 0, ] and an application of Lemma <ref> leads us to the following reformulation: β∈ℝ^n λ≥ 0, λ≥ 0inf θ∈dom(S^*)sup inf_z∈ℝ^n{ελ + ελ - S^α*(θ) + θ·β^⊤x^1 + θ·z^⊤ (x^1 - x^1)_(1) : |θ|·‖β - z‖_q^⋆≤λ_(2), |θ|·‖z‖_q^⋆≤λ}. The objective term (1) has a product of the uncertain parameter θ and the adjustable variable z, and even when (2) is linear such as in the case of q = 1 the product of the uncertain parameter with both the decision variable β and the adjustable variable z still appear since: |θ|·‖β - z‖_∞≤λ -λ≤θβ - θz≤λ. This reduces problem (<ref>) to a generic two-stage robust optimization problem with random recourse <cit.> which is proven to be NP-hard even if S^α* was constant <cit.>. §.§ Proof of Theorem <ref> Consider the reformulation <ref> of <ref> that we introduced in the proof of Proposition <ref>. For any i ∈ [N] and j ∈ [N], the corresponding constraint in the first group of `adjustable robust' (∀, ∃) constraints will be: ∀θ∈dom(L^*), ∃z∈ℝ^n : - L^α *(θ) + θ·β^⊤x^i + θ·z^⊤ (x^j - x^i) ≤ s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ |θ|·‖β - z‖_q^⋆≤λ |θ|·‖z‖_q^⋆≤λ. By changing the order of ∀ and ∃, we obtain: ∃z∈ℝ^n, ∀θ∈dom(L^*) : - L^α *(θ) + θ·β^⊤x^i + θ·z^⊤ (x^j - x^i) ≤ s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ |θ|·‖β - z‖_q^⋆≤λ |θ|·‖z‖_q^⋆≤λ. Notice that this is a safe approximation, since any fixed z satisfying the latter system is a feasible static solution in the former system, meaning that for every realization of θ in the first system, the inner ∃z can always `play' the same z that is feasible in the latter system (hence the latter is named the static relaxation, <cit.>). In the relaxed system, we can drop ∀θ and keep its worst-case realization instead: ∃z∈ℝ^n : sup_θ∈dom(L^*){- L^α *(θ) + θ·β^⊤x^i + θ·z^⊤ (x^j - x^i)}≤ s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ sup_θ∈dom(L^*){|θ|}·‖β - z‖_q^⋆≤λ sup_θ∈dom(L^*){|θ|}·‖z‖_q^⋆≤λ. 
The term sup_θ∈dom(L^*){- L^α *(θ) + θ·β^⊤x^i + θ·z^⊤ (x^j - x^i)} is the definition of the biconjugate L^α**(β^⊤x^i + z^⊤ (x^j - x^i)). Since L^α is a closed convex function, we have L^α** = L^α <cit.>. Moreover, sup_θ∈dom(L^*){|θ|} is an alternative representation of the Lipschitz constant of the function L^α <cit.>, which is equal to 1 as we showed earlier. The adjustable robust constraint thus reduces to: ∃z∈ℝ^n : L^α(β^⊤x^i + z^⊤ (x^j - x^i)) ≤ s_i + κ1 - y^i2λ + s_j + κ1 - y^j2λ ‖β - z‖_q^⋆≤λ ‖z‖_q^⋆≤λ as a result of the static relaxation. This relaxed reformulation applies to all i ∈ [N] and j∈[N] as well as to the second group of adjustable robust constraints analogously. Replacing each constraint of <ref> with this system concludes the proof. §.§ Proof of Corollary <ref> To prove the first statement, take λ = 0 and observe the constraint ‖z^l_ij‖_q^⋆≤λ implies z^l_ij = 0 for all l ∈{-1, 1}, i ∈ [N], j ∈ [N]. The optimization problem can thus be written without those variables: [ β, λ, s, sminimize ελ + 1/N∑_i=1^N s_i + 1/N∑_j=1^Ns_j; subject to L^α(lβ^⊤x^i) ≤ s_i + κ1 - ly^i2λ + s_j ∀ l ∈{-1, 1}, ∀ i ∈ [N], ∀ j ∈ [N]; ‖β‖_q^⋆≤λ; β∈ℝ^n, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+. ] Notice that optimal solutions should satisfy s_j = s_j' for all j, j' ∈ [N]. To see this, assume for contradiction that ∃ j, j' ∈ [N] such that s_j < s_j'. If a constraint indexed with (l, i, j) for arbitrary l ∈{-1,1 } and i ∈ [N] is feasible, it means the consraint indexed with (l, i,j') cannot be tight given that these constraints are identical except for the s_j or s_j' appearing on the right hand side. Hence, such a solution cannot be optimal as this is a minimization problem, and updating s_j' as s_j preserves the feasibility of the problem while decreasing the objective value. We can thus use a single variable τ∈ℝ_+ and rewrite the problem as [ β, λ, s, sminimize ελ + 1/N∑_i=1^N (s_i + τ); subject to L^α(β^⊤x^i) ≤ s_i + κ1 - y^i2λ + τ ∀ i ∈ [N]; L^α(-β^⊤x^i) ≤ s_i + κ1 + y^i2λ + τ ∀ i ∈ [N]; ‖β‖_q^⋆≤λ; β∈ℝ^n, λ≥ 0, s∈ℝ^N_+, s∈ℝ^N_+, ] where we also eliminated the index l ∈{-1, 1} by writing the constraints explicitly. Since s_i and τ both appear as s_i + τ in this problem, we can use a variable change where we relabel s_i + τ as s_i (or, equivalently set τ = 0 without any optimality loss). Moreover, the constraints with index i ∈ [N] are L^α(β^⊤x^i) ≤ s_i + τ L^α(-β^⊤x^i) ≤ s_i + κλ + τ = L^α(y^i·β^⊤x^i) ≤ s_i + τ L^α(-y^i·β^⊤x^i) ≤ s_i + κλ + τ if y^i = 1, and similarly they are L^α(β^⊤x^i) ≤ s_i + κλ + τ L^α(-β^⊤x^i) ≤ s_i + τ = L^α(-y^i ·β^⊤x^i) ≤ s_i + κλ + τ L^α(y^i ·β^⊤x^i) ≤ s_i + τ if y^i = -1. Since these are identical, the problem can finally be written as [ β, λ, sminimize ελ + 1/N∑_i=1^N s_i; subject to log(1 + exp(-y^i ·β^⊤x^i + α·‖β‖_p^⋆ )) ≤ s_i ∀ i ∈ [N]; log(1 + exp(y^i ·β^⊤x^i + α·‖β‖_p^⋆ )) - λκ≤ s_i ∀ i ∈ [N]; ‖β‖_q^⋆≤λ; β∈ℝ^n, λ≥ 0, s∈ℝ^N_+, ] where we also used the definition of L^α. This problem is identical to <ref>, which means that feasible solutions of <ref> are feasible for <ref> if the additional variables (λ, s, z^l_ij) are set to zero, concluding the first statement of the corollary. The second statement is immediate since ε→∞ forces λ= 0 due to the term ελ in the objective of <ref>, and this proof shows in such a case <ref> reduces to <ref> (which is identical to <ref> when ε→∞ by definition). 
§.§ Proof of Proposition <ref> By standard linearity arguments and from the definition of ℚ_mix, we have 𝔼_ℚ_mix[z∈ℬ_p(α)sup{ℓ_β(x + z, y)}] ∫_(x, y)∈ℝ^n ×{-1, +1} z∈ℬ_p(α)sup{ℓ_β(x + z, y)}ℚ_mix((x, y)) N/N + wN∫_(x, y)∈ℝ^n ×{-1, +1} z∈ℬ_p(α)sup{ℓ_β(x + z, y)}ℙ_N((x, y)) + wN/N + wN∫_(x, y)∈ℝ^n ×{-1, +1} z∈ℬ_p(α)sup{ℓ_β(x + z, y)}ℙ_N((x, y)) N/N + wN·1N∑_i ∈ [N]z^i ∈ℬ_p(α)sup{ℓ_β(x^i + z^i, y^i)} + wN/N + wN·1N∑_j ∈ [N]z^j ∈ℬ_p(α)sup{ℓ_β(x^j + z^j, y^j)} 1N + wN[ ∑_i ∈ [N]z^i ∈ℬ_p(α)sup{ℓ_β(x^i + z^i, y^i)} + w ·∑_j ∈ [N]z^j ∈ℬ_p(α)sup{ℓ_β(x^j + z^j, y^j)}], which coincides with the objective function of (<ref>). The proof of Proposition <ref> shows 𝔼_ℚ_mix[z∈ℬ_p(α)sup{ℓ_β(x + z, y)}] = 𝔼_ℚ_mix [ℓ^α_β(x, y)] which concludes the proof. §.§ Proof of Proposition <ref> We first prove auxiliary results on mixture distributions. To this end, denote by 𝒞(ℚ, ℙ) ⊆𝒫(Ξ×Ξ) the set of couplings of the distributions ℚ∈𝒫(Ξ) and ℙ∈𝒫(Ξ). Let ℚ, ℙ^1, ℙ^2 ∈𝒫(Ξ) be probability distributions. If Π^1 ∈𝒞(ℚ, ℙ^1) and Π^2 ∈𝒞(ℚ, ℙ^2), then, λ·Π^1 + (1 - λ) ·Π^2 ∈𝒞(ℚ, λ·ℙ^1 + (1-λ) ·ℙ^2) for all λ∈ (0,1). Let Π = λ·Π^1 + (1 - λ) ·Π^2 and ℙ = λ·ℙ^1 + (1-λ) ·ℙ^2. To have Π∈𝒞(ℚ, ℙ) we need Π(ξ, Ξ) = ℚ(ξ) and Π(Ξ, ξ') = ℙ(ξ'). To this end, observe that Π(ξ, Ξ) = λ·Π^1(ξ, Ξ) + (1 - λ) ·Π^2(ξ, Ξ) = λ·ℚ + (1- λ)·ℚ = ℚ where the second identity uses the fact that Π^1 ∈𝒞(ℚ, ℙ^1). Similarly, we can show: Π(Ξ, ξ) = λ·Π^1(Ξ, ξ) + (1 - λ) ·Π^2(Ξ, ξ) = λ·ℙ^1 + (1- λ)·ℙ^2 = ℙ, which concludes the proof. We further prove the following intermediary result. Let ℚ, ℙ^1, ℙ^2 ∈𝒫(Ξ) and ℙ = λ·ℙ^1 + (1- λ) ·ℙ^2 for some λ∈ (0,1). We have: W(ℚ, ℙ) ≤λ·W(ℚ, ℙ^1) + (1- λ) ·W(ℚ, ℙ^2). The Wasserstein distance between ℚ, ℚ' ∈𝒫(Ξ) can be written as: W(ℚ, ℚ') = Π∈𝒞(ℚ, ℚ')min{∫_Ξ×Ξ d(ξ, ξ') Π(ξ, ξ') }, and since d is a feature-label metric (cf. Definition <ref>) the minimum is well-defined <cit.>. We name the optimal solutions to the above problem the optimal couplings. Let Π^1 be an optimal coupling of W(ℚ, ℙ^1) and let Π^2 be an optimal coupling of W(ℚ, ℙ^2) and define Π^c = λ·Π^1 + (1 - λ) ·Π^2. We have W(ℚ, ℙ) = Π∈𝒞(ℚ, ℙ)min{∫_Ξ×Ξ d(ξ, ξ') Π(ξ, ξ') } ≤∫_Ξ×Ξ d(ξ, ξ') Π^c(ξ, ξ') = λ·∫_Ξ×Ξ d(ξ, ξ') Π^1(ξ, ξ') + (1-λ) ·∫_Ξ×Ξ d(ξ, ξ') Π^2(ξ, ξ') = λ·W(ℚ, ℙ^1) + (1- λ) ·W(ℚ, ℙ^2), where the first identity uses the definition of the Wasserstein metric, the inequality is due to Lemma <ref> as Π^c is a feasible coupling (not necessarily optimal), the equality that follows uses the definition of Π^c and the linearity of integrals, and the final identity uses the fact that Π^1 and Π^2 were constructed to be the optimal couplings. We now prove the proposition (we refer to ℚ_mix in the statement of this lemma simply as ℚ). To prove ℚ∈𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N), it is sufficient to show that W(ℙ_N, ℚ) ≤ε and W(ℙ_N, ℚ) ≤ε jointly hold. By using Lemma <ref>, we can derive the following inequalities: W(ℙ_N, ℚ) ≤λ·W(ℙ_N, ℙ_N)_=0 + (1- λ) ·W(ℙ_N, ℙ_N) W(ℙ_N, ℚ) ≤λ·W(ℙ_N, ℙ_N) + (1-λ) ·W(ℙ_N,ℙ_N)_=0. Therefore, sufficient conditions on W(ℙ_N, ℚ) ≤ε and W(ℙ_N, ℚ) ≤ε would be: (1- λ) ·W(ℙ_N, ℙ_N) ≤ε λ·W(ℙ_N, ℙ_N) ≤ε. Moreover, given that ε + ε≥W(ℙ_N, ℙ_N), the sufficient conditions further simplify to (1- λ) ·ε≤λ·ε λ·ε≤ (1- λ)·ε. λ·ε = (1- λ)·ε, which is implied when λ1 - λ = εε, concluding the proof. §.§ Proof of Theorem <ref> Since each result in the statement of this theorem is abridged, we will present these results sequentially as separate results. 
We review the existing literature to characterize 𝔅_ε(ℙ_N), in a similar fashion with the results presented in <cit.> for the logistic loss, by revising them to the adversarial loss whenever necessary. The N-fold product distribution of ℙ^0 from which the training set ℙ_N is constructed is denoted below by [ℙ^0]^N. Assume there exist a > 1 and A > 0 such that 𝔼_ℙ^0[exp(‖ξ‖^a)] ≤ A for a norm ‖·‖ on ℝ^n. Then, there are constants c_1, c_2 > 0 that only depend on ℙ^0 through a, A, and n, such that [ℙ^0]^N( ℙ^0 ∈𝔅_ε(ℙ_N)) ≥ 1 - η holds for any confidence level η∈ (0,1) as long as the Wasserstein ball radius satisfies the following optimal characterization ε≥(log(c_1 / η)c_2 · N)^1 / max{n, 2} if N ≥log(c_1 / η)c_2 (log(c_1 / η)c_2 · N)^1 / a otherwise. The statement follows from Theorem 18 of <cit.>. The presented decay rate 𝒪(N^-1/n) of ε as N increases is optimal <cit.>. Now that we gave a confidence for the radius ε of 𝔅_ε(ℙ_N), we analyze the underlying optimization problems. Most of the theory is well-established for logistic loss function, and in the following we show that similar results follow for the adversarial loss function. For convenience, we state <ref> again by using the adversarial loss function as in the proof of Proposition <ref>: DR-ARO[ βminimize ℚ∈𝔅_ε(ℙ_N)sup 𝔼_ℚ [ℓ^α_β(x, y)]; subject to β∈ℝ^n. ] If the assumptions of Theorem <ref> are satisfied and ε is chosen as in the statement of Theorem <ref>, then [ℙ^0]^N ( 𝔼_ℙ^0 [ℓ^α_β^⋆(x, y)] ≤ℚ∈𝔅_ε(ℙ_N)sup𝔼_ℚ [ℓ^α_β^⋆(x, y)] ) ≥ 1- η holds for all η∈ (0,1) and all optimizers β^⋆ of <ref>. The statement follows from Theorem 19 of <cit.> given that ℓ^α_β is a finite-valued continuous loss function. Theorem <ref> states that the optimal value of <ref> overestimates the true loss with arbitrarily high confidence 1-η. Despite the desired overestimation of the true loss, we show that <ref> is still asymptotically consistent if we restrict the set of admissible β to a bounded set[Note that, this is without loss of generality given that we can normalize the decision boundary of linear classifiers.]. If we restrict the hypotheses β to a bounded set ℋ⊆ℝ^n, and parameterize ε as ε_N to show its dependency to the sample size, then, under the assumptions of Theorem <ref>, we have ℚ∈𝔅_ε_N(ℙ_N)sup𝔼_ℚ[ℓ^α_β^⋆(x,y)] N →∞⟶𝔼_ℙ^0[ℓ^α_β^⋆(x,y)] ℙ^0-almost surely, whenever ε_N is set as specified in Theorem <ref> along with its finite-sample confidence η_N, and they satisfy ∑_N ∈ℕη_N < ∞ and lim_N →∞ε_N = 0. If we show that there exists ξ^0 ∈Ξ and C > 0 such that ℓ^α_β(x, y) ≤ C(1 + d(ξ, ξ^0)) holds for all β∈ℋ and ξ∈Ξ (that is, the adversarial loss satisfies a growth condition), the statement will follow immediately from Theorem 20 of <cit.>. To see that the growth condition is satisfied, we first substitute the definition of ℓ^α_β and d explicitly, and note that we would like to show there exists ξ^0 ∈Ξ and C > 0 such that log(1 + exp(-y·β^⊤x + α·‖β‖_p^⋆)) ≤ C(1 + ‖x - x^0 ‖_q + κ·y ≠ y^0) holds for all β∈ℋ and ξ∈Ξ. We take ξ^0 = (0, y^0) and show that the right-hand side of the inequality can be lower bounded as: C(1 + ‖x - x^0 ‖_q + κ·y ≠ y^0) = C(1 + ‖x‖_q + κ·y ≠ y^0) ≥ C(1 + ‖x‖_q). 
Moreover, the left-hand side of the inequality can be upper bounded for any β∈ℋ⊆ [-M, M]^n (for some M > 0) and ξ = (x, y) ∈Ξ as: log(1 + exp(-y·β^⊤x + α·‖β‖_p^⋆)) ≤log(1+ exp(|β^⊤x| + α·‖β‖_p^⋆)) ≤log(2 ·exp(|β^⊤x| + α·‖β‖_p^⋆)) = log(2) + |β^⊤x| + α·‖β‖_p^⋆ ≤log(2) + sup_β∈ [-M, M]^n{|β^⊤x|} + α·sup_β∈ [-M, M]^n{‖β‖_p^⋆} = log(2) + M ·‖x‖_1 + M ·α ≤log(2) + M · n^(q-1)/q·‖x‖_1 + M ·α where the final inequality uses Hölder's inequality to bound the 1-norm with the q-norm. Thus, it suffices to show that we have log(2) + M · n^(q - 1)/q·‖x‖_1 + M ·α≤ C(1 + ‖x‖_q) ∀ξ∈Ξ, which is satisfied for any C ≥max{log(2) + M ·α, M· n^(q-1)/q}. This completes the proof by showing the growth condition is satisfied. So far, we reviewed tight characterizations for ε so that the ball 𝔅_ε(ℙ_N) includes the true distribution ℙ^0 with arbitrarily high confidence, proved that the DRO problem <ref> overestimates the true loss, while converging to the true problem asymptotically as the confidence 1-η increases and the radius ε decreases simultaneously. Finally, we discuss that for optimal solutions β^⋆ to <ref>, there are worst case distributions ℚ^⋆∈𝔅_ε(ℙ_N) of nature's problem that are supported on at most N+1 atoms. If we restrict the hypotheses β to a bounded set ℋ⊆ℝ^n, then there are distributions ℚ^⋆∈𝔅_ε(ℙ_N) that are supported on at most N+1 atoms and satisfy: 𝔼_ℚ^⋆ [ℓ^α_β(x, y)] = ℚ∈𝔅_ε(ℙ_N)sup𝔼_ℚ [ℓ^α_β(x, y)]. The proof follows from <cit.>. See the proof of <cit.> and the discussion that follows for insights and further analysis on these results presented. §.§ Proof of Theorem <ref> Firstly, since ℙ_N is constructed from i.i.d. samples of ℙ, we can overestimate the distance ε_1= W(ℙ_N, ℙ) analogously by applying Theorem <ref>, mutatis mutandis. This leads us to the following result where the joint (independent) N-fold product distribution of ℙ^0 and the N-fold product distribution of ℙ is denoted below by [ℙ^0 ×ℙ]^N ×N. Assume that there exist a > 1 and A > 0 such that 𝔼_ℙ^0[exp(‖ξ‖^a)] ≤ A, and there exist a > 1 and A > 0 such that 𝔼_ℙ[exp(‖ξ‖^a)] ≤A for a norm ‖·‖ on ℝ^n. Then, there are constants c_1, c_2 > 0 that only depends on ℙ^0 through a, A, and n, and constants c_1, c_2 > 0 that only depends on ℙ through a, A, and n such that [ℙ^0 ×ℙ]^N ×N( ℙ^0 ∈𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N)) ≥ 1 - η holds for any confidence level η∈ (0,1) as long as the Wasserstein ball radii satisfy the following characterization ε≥(log(c_1 / η_1)c_2 · N)^1 / max{n, 2} if N ≥log(c_1 / η_1)c_2 (log(c_1 / η_1)c_2 · N)^1 / a otherwise ε≥W(ℙ^0, ℙ) + (log(c_1 / η_2)c_2 ·N)^1 / max{n, 2} if N≥log(c_1 / η_2)c_2 (log(c_1 / η_2)c_2 ·N)^1 / a otherwise for some η_1, η_2 > 0 satisfying η_1 + η_2 = η. It immediately follows from Theorem <ref> that [ℙ^0]^N( ℙ^0 ∈𝔅_ε(ℙ_N)) ≥ 1 - η_1 holds. If we take ε_1 > 0 as ε_1 ≥(log(c_1 / η_2)c_2 ·N)^1 / max{n, 2} if N≥log(c_1 / η_2)c_2 (log(c_1 / η_2)c_2 ·N)^1 / a otherwise then, we similarly have [ℙ]^N( ℙ∈𝔅_ε_1(ℙ_N)) ≥ 1 - η_2. Since the following implication follows from the triangle inequality: ℙ∈𝔅_ε_1(ℙ_N) ℙ^0 ∈𝔅_ε_1 + W(ℙ^0, ℙ)(ℙ_N), we have that [ℙ]^N( ℙ^0 ∈𝔅_ε(ℙ_N)) ≥ 1 - η_2. These results, along with the facts that ℙ_N and ℙ_N are independently sampled from their true distributions, imply: [ℙ^0 ×ℙ]^N ×N( ℙ^0 ∉𝔅_ε(ℙ_N) ℙ^0 ∉𝔅_ε(ℙ_N)) ≤ [ℙ^0 ×ℙ]^N ×N( ℙ^0 ∉𝔅_ε(ℙ_N)) + [ℙ^0 ×ℙ]^N ×N(ℙ^0 ∉𝔅_ε(ℙ_N)) = [ℙ^0]^N( ℙ^0 ∉𝔅_ε(ℙ_N)) + [ℙ]^N(ℙ^0 ∉𝔅_ε(ℙ_N)) < η_1 + η_2 implying the desired result [ℙ^0 ×ℙ]^N ×N( ℙ^0 ∈𝔅_ε(ℙ_N) ∩𝔅_ε(ℙ_N)) ≥ 1 - η. 
The second statement immediately follows under the assumptions of Theorem <ref>: <ref> overestimates the true loss analogously as Theorem <ref> with an identical proof. § EXPONENTIAL CONE REPRESENTATION OF <REF>   For any i ∈ [N], the constraints of <ref> are log(1 + exp( - y^i·β^⊤x^i + α·‖β‖_p^⋆ )) ≤ s_i log(1 + exp( y^i·β^⊤x^i + α·‖β‖_p^⋆ )) - λ·κ≤ s_i, which, by using an auxiliary variable u, can be written as log(1 + exp( - y^i·β^⊤x^i + u)) ≤ s_i log(1 + exp( y^i·β^⊤x^i + u)) - λ·κ≤ s_i α·‖β‖_p^⋆≤ u. Following the conic modeling guidelines of <cit.>, for new variables v^+_i, w^+_i ∈ℝ, the first constraint can be written as v^+_i + w^+_i ≤ 1, (v^+_i, 1, [-u + y^i ·β^⊤x^i) - s_i] ∈𝒦_exp, (w^+_i, 1, -s_i) ∈𝒦_exp, by using the definition of the exponential cone 𝒦_exp. Similarly, for new variables v^-_i, w^-_i ∈ℝ, the second constraint can be written as v^-_i + w^-_i ≤ 1, (v^-_i, 1, [-u - y^i ·β^⊤x^i] - s_i - λ·κ) ∈𝒦_exp, (w^-_i, 1, -s_i - λ·κ) ∈𝒦_exp. Applying this for all i ∈ [N] concludes that the following is the conic formulation of <ref>: [ β, λ, s, u v^+, w^+, v^-, w^-minimize λ·ε + 1N∑_i∈[N] s_i ; subject to v^+_i + w^+_i ≤ 1 ∀ i ∈ [N]; (v^+_i, 1, [-u + y^i ·β^⊤x^i] - s_i) ∈𝒦_exp, (w^+_i, 1, -s_i) ∈𝒦_exp ∀ i ∈ [N]; v^-_i + w^-_i ≤ 1 ∀ i ∈ [N]; (v^-_i, 1, [-u - y^i ·β^⊤x^i] - s_i - λ·κ) ∈𝒦_exp, (w^-_i, 1, -s_i- λ·κ) ∈𝒦_exp ∀ i ∈ [N]; α·‖β‖_p^⋆≤ u; ‖β‖_q^⋆≤λ ; β∈ℝ^n, λ≥ 0, s∈ℝ^N, u ∈ℝ, v^+, w^+, v^-, w^-∈ℝ^N. ] § FURTHER DETAILS FOR NUMERICAL EXPERIMENTS All experiments are implemented in Julia <cit.> (MIT license) and executed on Intel Xeon 2.66GHz processors with 8GB memory in single-core mode. We use MOSEK 10.1 <cit.> to solve all exponential conic programs through JuMP <cit.>. The UCI datasets <cit.> we use (see Table <ref>) are subject to CC BY 4.0 license. MNIST is subject to CC BY-SA 3.0 and EMNIST to CC0 1.0 license. §.§ UCI experiments Preprocessing UCI datasets Although we reported the first 5 datasets in the main paper, we experiment on 10 UCI datasets <cit.> (cf. Table <ref>). We use Python3 for preprocessing these datasets. Classification problems with more than two classes are converted to binary classification problems (most frequent class/others). For all datasets, numerical features are standardized, the ordinal categorical features are left as they are, and the nominal categorical features are processed via one-hot encoding. As mentioned in the main paper, we obtain auxiliary (synthetic) datasets via SDV, which is also implemented in Python 3. Detailed misclassification results on the UCI datasets Table <ref> contains detailed results on the out-of-sample error rates of each method on 10 UCI datasets for classification. All parameters are 5-fold cross-validated: Wasserstein radii from the grid { 10^-6, 10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 0,1,2,5,10 } (10^-6,10^-5,2,5,10 are rarely selected, but we did not change our grid in order not to introduce a bias), κ from the grid {1, √(n), n} the weight parameter of ARO+Aux from grid {10^-6, 10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 0, 1}. We fix the norm defining the feature-label metric to the ℓ_1-norm, and test ℓ_2-attacks, but other choices with analogous results are also implemented. Finally, we demonstrate that our theory, especially DRO+ARO+Aux, contributes to the DRO literature even without adversarial attacks. In this case of α = 0, ERM and ARO would be equivalent, and DRO+ARO would reduce to the traditional DR LR model <cit.>. 
ARO+Aux would be interpreted as revising the empirical distribution of ERM to a mixture (mixture weight cross-validated) of the empirical and auxiliary distributions. DRO+ARO+Aux, on the other hand, can be interpreted as DRO over a carefully reduced ambiguity set (intersection of the empirical and auxiliary Wasserstein balls). The results are in Table <ref>. Analogous results follow as before (that is, DRO+ARO+Aux is the `winning' approach, DRO+ARO and ARO+Aux alternate for the `second' approach), with the exception of the dataset contraceptive, where ARO+Aux outperforms others. §.§ MNIST/EMNIST experiments Our setting is analogous to the UCI experiments. However, for auxiliary data, we use the EMNIST dataset. We used the MLDatasets package of Julia to prepare such auxiliary data. §.§ Artificial experiments Data generation We sample a `true' β from a unit ℓ_2-ball, and generate data as summarized in Algorithm <ref>. Such a dataset generation gives N instances from the same true data-generating distribution. In order to obtain N̂ auxiliary dataset instances, we perturb the probabilities p^i with standard random normal noise which is equivalent to sampling i.i.d. from a perturbed distribution. Testing is always done on true data, that is, the test set is sampled according to Algorithm <ref>. Strength of the attack and importance of auxiliary data In the main paper we discussed how the strength of an attack determines whether using auxiliary data in ARO (ARO+Aux) or considering distributional ambiguity (DRO+ARO) is more effective, and observed that unifying them to obtain DRO+ARO+Aux yields the best results in all attack regimes. Now we focus on the methods that rely on auxiliary data, namely ARO+Aux and DRO+ARO+Aux and explore the importance of auxiliary data ℙ_N in comparison to its empirical counterpart P_N. Table <ref> shows the average values of w for problem (<ref>) obtained via cross-validation. We see that the greater the attack strength is the more we should use the auxiliary data in ARO+Aux. The same relationship holds for the average of ε / ε obtained via cross-validation in <ref>, which means that the relative size of the Wasserstein ball built around the empirical distribution gets larger compared to the same ball around the auxiliary data, that is, ambiguity around the auxiliary data is smaller than the ambiguity around the empirical data. We highlight as a possible future research direction exploring when a larger attack per se implies the intersection will move towards the auxiliary data distribution. More results on scalability We further simulate 25 cases with an ℓ_2-attack strength of α = 0.2, N = 200 instances in the training dataset, N = 200 instances in the auxiliary dataset, and we vary the number of features n. We report the median (50%± 15% quantiles shaded) runtimes of each method in Figure <ref>. The fastest methods are ERM and ARO among which the faster one depends on n (as the adversarial loss includes a regularizer of β), followed by ARO+Aux, DRO+ARO, and DRO+ARO+Aux, respectively. DRO+ARO+Aux is the slowest, which is expected given that DRO+ARO is its special for large ε. The runtime however scales graciously. Finally, we focus further on DRO+ARO+Aux which solves problem <ref> with 𝒪(n · N ·N) variables and exponential cone constraints. For n = 1,000 and N = N = 10,000, we observe that the runtimes vary between 134 to 232 seconds across 25 simulations.
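Since Algorithm <ref> itself is not reproduced in the text above, the following sketch records one plausible reading of the data-generating procedure described in this appendix: a logistic model with a true β drawn from the unit ℓ_2-ball, with the auxiliary data perturbed on the logit scale so that the probabilities remain valid (the latter is our modelling choice for this illustration).

import numpy as np

rng = np.random.default_rng(0)
n, N, N_hat = 100, 100, 200

beta_true = rng.normal(size=n)
beta_true /= max(1.0, np.linalg.norm(beta_true))       # a point of the unit l2-ball

def sample(num, perturb=False):
    X = rng.normal(size=(num, n))
    logits = X @ beta_true
    if perturb:                                        # auxiliary data: perturbed law
        logits = logits + rng.normal(size=num)
    p = 1.0 / (1.0 + np.exp(-logits))                  # class-(+1) probabilities p^i
    y = np.where(rng.uniform(size=num) < p, 1, -1)
    return X, y

X, y = sample(N)                                       # empirical training data
X_hat, y_hat = sample(N_hat, perturb=True)             # auxiliary data

The test set is sampled in the same way as the empirical data, in line with the statement above that testing is always done on true data.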
http://arxiv.org/abs/2407.12526v1
20240717130918
Quadrupolar power radiation by a binary system in a hyperbolic encounter on de Sitter background
[ "Michael Blanc", "Philippe Jetzer", "Shubhanshu Tiwari" ]
gr-qc
[ "gr-qc" ]
ETH Zürich, Switzerland and Department of Physics, University of Zürich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Department of Physics, University of Zürich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Department of Physics, University of Zürich, Winterthurerstrasse 190, 8057 Zurich, Switzerland § ABSTRACT The present cosmological model and the surveys favor the universe with a small but positive cosmological constant Λ, which accounts for dark energy and causes an exponential expansion. This can have observational consequences in the current detection of gravitational waves, as most of the waveforms for gravitational radiation are computed assuming a flat (Minkowski) background. In this work, we compute gravitational radiation within the quadrupole approximation on positive Λ (de Sitter) background for a binary system interacting gravitationally through a hyperbolic encounter. We quantify the influence of the cosmological constant on the radiated energy as small corrections to the leading order Minkowski background results. The first order de Sitter background correction is of the order √(Λ), and is thus extremely small. Therefore, the cosmological constant influence on the gravitational radiation is negligible and may not be detected with the existing or planned gravitational wave detectors. § INTRODUCTION Routine detection of gravitational wave events by the current generation of ground-based detectors LIGO,Virgo, and KAGRA has opened the field of gravitational wave astronomy <cit.>. With the upcoming space-based detector LISA and the next generation of ground-based detectors, subtle and rare effects will affect the gravitational waveform. Currently, the gravitational waveforms employed in the data analysis of current generation of consider asymptotically flat (Minkowski) spacetime. Cosmological observations since the last few decades have hinted towards a small but positive and nonzero cosmological constant <cit.>, and hence the spacetime will be asymptotically de Sitter. The difference between asymptotically de Sitter and Minkowski spacetime on the gravitational waveforms should be computed to estimate if the bias is due to ignoring asymptotically de sitter spacetime. In this work we present this computation for the hyperbolic encounters of compact objects. Hyperbolic encounters are single scattering events where the majority of the energy is released near the point of closest approach <cit.>, under the form of gravitational waves. Every physical quantity characterising the hyperbolic encounter can be expressed in terms of only four variables, that are the impact parameter, the initial relative velocity and the masses of the bodies, if we neglect their spins. Several works have considered hyperbolic encounters: reference <cit.> determines the energy emitted by a hyperbolic encounter in a flat Minkowski universe, while references <cit.> and <cit.> investigate the mean power released by a binary system in a de Sitter universe, respectively, on circular and elliptic orbits. In this work, we extend those results to the case of a hyperbolic encounter in a de Sitter universe. The paper is organized as follows. First we briefly describe how to compute the quadrupolar radiation from hyperbolic trajectories in Minkowski space; and then we discuss the gravitational radiation in a de Sitter space. Lastly, we present the results for quadrupolar radiation from hyperbolic encounters in a de Sitter background. We use units in which c=1 and the metric signature (-,+,+,+). 
Regarding the index notation, greek letters denote spacetime indices and range from 0 to 3, whereas latin letters denote space indices and range from 1 to 3. § QUADRUPOLAR RADIATION BY A HYPERBOLIC ENCOUNTER ON A MINKOWSKIAN BACKGROUND The quadrupole formula resorts to the time averaging of a time-dependent quantity. In the case of a bounded system, such as a binary system on circular or elliptic orbits, it makes sense to average the power over a complete orbit since it yields the mean power of the continuous gravitational waves. In the case of unbounded orbits, such as a binary system with parabolic or hyperbolic trajectories, the approach is trickier. Indeed, it would not make sense to average the power because the wave emission is not periodic at all. Waves begin to be emitted as soon as both stars are sufficiently close to each other, and the wave intensity is maximal when both stars reach their periastron (closest approach between the two interacting stars). Without the time average, the quadrupole formula gives the instantaneous radiated power P(t) as a function of time. Since the interacting time is finite, a relevant way of quantifying the gravitational waves emitted by a hyperbolic encounter is to calculate the whole radiated energy generated by this encounter through power integration over the entire trajectory. In this section we briefly review the spontaneous radiated power and the total released energy in terms of the hyperbola eccentricity ϵ>1 and the periastron distance r_0 (another way to do it is to use instead the periastron angle ϕ_0, the initial relative velocity v_0 and the impact parameter b). For that purpose, we consider the system depicted in Figure <ref>. The differential equation for the inverse radius u=1/r is given by d^2u/dϕ^2+u=Gm/L^2 , (L being the angular momentum per unit mass, m=(m_1+m_2) and the reduced mass is μ = (m_1 m_2)/m) whose general solution is of the form u=Acosϕ+Bsinϕ+Gm/L^2 . The constant B is fixed by imposing a condition on the radial velocity at periastron (dot denotes time derivative) ṙ(ϕ=0)=0 ⇔ 0=u̇(ϕ=0)=[ϕ̇(Bcosϕ-Asinϕ)]_ϕ=0=B ϕ̇(ϕ=0) . As ϕ̇(ϕ=0)>0, we infer B=0. The constant A is set by the radial condition at periastron 1/r_0=u(ϕ=0)=A+Gm/L^2 ⇔ A=1/r_0-Gm/L^2, hence the radial distance reads r=1/(1/r_0-Gm/L^2)cosϕ+Gm/L^2=L^2/Gm/1+(L^2/Gmr_0-1)cosϕ , and gets the following form by defining the eccentricity as ϵ:=L^2/Gmr_0-1 ;     r=r_0(ϵ+1)/1+ϵcosϕ . The angular momentum L follows from the definition of the eccentricity, L=√(Gmr_0(ϵ+1)), and the time derivative of the angle ϕ can then be expressed as a function of ϕ only ϕ̇=L/r^2=√(Gm)/r_0^3/2(1+ϵ)^3/2(1+ϵcosϕ)^2 . Finally, both asymptotic angles ±ϕ_∞ are determined by imposing the radial condition at infinity: u(ϕ=±ϕ_∞)=0 ⇔ 1+ϵcos(±ϕ_∞)=0 ⇔ ϕ_∞=arccos(-1/ϵ) . We can now compute the radiated power in Minkowski space, where the quadrupole moment tensor is given by Einstein's formula. The time derivatives of moments are displayed in appendix <ref>. For the emitted power P we get: P=4μ^2m^3G^4 (1+ϵcosϕ)^4/15 r_0^5 (1+ϵ)^5[24+13 ϵ^2+48 ϵcosϕ+11ϵ^2cos(2ϕ)] . The total energy emitted in the form of gravitational waves during the hyperbolic encounter is found by integrating the emitted power over the entire trajectory Δ E=∫_-∞^∞ P dt=∫_-ϕ_∞^ϕ_∞P/ϕ̇ dϕ , where the relation (<ref>) has to be taken to get an integrand depending only on ϕ. 
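As a quick cross-check of this integral, the sketch below evaluates it by numerical quadrature for illustrative dimensionless values (G = c = 1, with arbitrary masses, periastron distance and eccentricity) and compares the result with the closed-form expression displayed next; the two agree to quadrature accuracy.

import numpy as np
from scipy.integrate import quad

G, m, mu, r0, e = 1.0, 2.0, 0.5, 1.0, 1.5
phi_inf = np.arccos(-1.0 / e)

def P(phi):
    c = np.cos(phi)
    return (4 * mu**2 * m**3 * G**4 * (1 + e * c)**4 / (15 * r0**5 * (1 + e)**5)
            * (24 + 13 * e**2 + 48 * e * c + 11 * e**2 * np.cos(2 * phi)))

def phidot(phi):
    return np.sqrt(G * m) / (r0**1.5 * (1 + e)**1.5) * (1 + e * np.cos(phi))**2

dE_num, _ = quad(lambda p: P(p) / phidot(p), -phi_inf, phi_inf)

dE_closed = (2 * G**3.5 * mu**2 * m**2.5 / (45 * r0**3.5 * (e + 1)**3.5)
             * (3 * np.arccos(-1 / e) * (96 + 292 * e**2 + 37 * e**4)
                + np.sqrt(e**2 - 1) * (602 + 673 * e**2)))

print(dE_num, dE_closed)    # identical up to the quadrature tolerance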
The total emitted energy expressed in terms of the eccentricity, the periastron distance and both total and reduced masses is thus given by Δ E=2G^7/2μ^2m^5/2/45r_0^7/2(ϵ+1)^7/2(3arccos(-1/ϵ)(96+292ϵ^2+37ϵ^4) +√(ϵ^2-1) (602+673ϵ^2)) . We can easily find the parabolic limit (ϵ→1), for which we get: Δ E=85π G^7/2μ^2m^5/2/12√(2) r_0^7/2 . § GRAVITATIONAL RADIATION IN A DE SITTER UNIVERSE §.§ De Sitter universe The de Sitter universe describes a flat universe (k=0) containing only dark energy, as given in terms of a positive cosmological constant Λ>0. In this particular model, the only non-vanishing energy density is the vacuum energy (or dark energy) density ρ_Λ:=Λ/8π G. In the de Sitter universe, the Friedmann equations become: ȧ^2/a^2=ä/a=Λ/3, ρ̇=0 , since the cosmological constant is time independent. Both equations involving the scale factor give the same solution: a(t)=e^Ht, H:=ȧ/a=√(Λ/3) , where H is the Hubble rate and quantifies the expansion of the de Sitter universe, whereas the cosmic time t∈[-∞,∞] is chosen such that the Bing-Bang corresponds to t=-∞, and today to t_0=0 such that a(t_0)=1. The de Sitter universe being flat, its metric therefore simply reads: ds^2=-dt^2+e^2Ht(dr^2+r^2dθ^2+r^2sinθ dϕ^2)=-dt^2+a(t)^2 d𝐱^2 . The de Sitter model therefore represents an empty universe expanding forever due to vacuum energy. §.§ Energy flux and de Sitter quadrupole formula Here we use the results given in Section 4 of <cit.> and Section III.A.1 of <cit.>. The expression for the power radiated by an isolated source in a de Sitter background is given by the de Sitter quadrupole formula: P=G/5<R^ijR_ij-1/3(R^l_l)^2>(t_ret) , where the brackets denote time average, R_ij=∂_t^3Q_ij-3H∂_t^2Q_ij+2H^2∂_tQ_ij+H∂_t^2Q_ij-H^2∂_tQ_ij is the radiation field, R^l_l:=δ^klR_kl, and we recall the energy and pressure quadrupole moments: Q_ij(t_ret) :=∫ a^3(t_ret)T_00(t_ret,𝐱)x_ix_j d^3𝐱 Q_ij(t_ret) :=∫ a^3(t_ret)η^klT_kl(t_ret,𝐱)x_ix_j d^3𝐱 , (η being the Minkowski metric). For H=0, corresponding to the flat and static universe, we get R_ij=⃛Q_ij and we recover the quadrupole formula of Einstein with Q_ij→ Q_ij-1/3Q^l_l δ_ij. T_μν is the energy-momentum tensor. As expected, the de Sitter quadrupole formula comprises an expansion in powers of H, whose 0-th order term is the Minkowskian one. An interesting property about waves generated on de Sitter spacetime is that the power carries information about the energy density and the pressure density, contrary to the Minkowskian case where only the energy density comes into play (to lowest post-Newtonian order) <cit.>. Let us recall the assumptions made to arrive at this final de Sitter quadrupole formula: i) the physical size of the source is much smaller than the cosmological horizon; ii) the bodies involved in the source have a velocity which is small compared to the speed of light; iii) and the source is only dynamically active for a finite time period. Note that the TT-tensor is the correct notion of transverse traceless tensors. However, as discussed in <cit.> in the context of the power radiated by a circular binary system on a de Sitter background, the result using the tt projection exactly matches with that of the calculation done in paper <cit.> using TT extraction. This indicates that for the energy flux computation by a circular binary system, TT versus tt does not matter in de Sitter spacetime. Whether this generalizes to elliptic and/or hyperbolic orbits still remains to be investigated. 
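Before specialising to the hyperbolic encounter, it is useful to attach numbers to the expansion rate H that controls the corrections computed below. The short sketch assumes the indicative observed value Λ ≈ 1.1×10^{-52} m^{-2}; the periastron distances are arbitrary illustrative choices.

import numpy as np

c = 2.998e8                      # m/s
Lam = 1.1e-52                    # m^-2, assumed cosmological constant
H = c * np.sqrt(Lam / 3.0)       # Hubble rate of the de Sitter background, ~ 1.8e-18 s^-1
h = H / c                        # in 1/m; the corresponding horizon scale 1/h is ~ 1.6e26 m
print(H, 1.0 / h)
for r0 in (1e9, 1e12):           # illustrative periastron distances in metres
    print(r0, h * r0)            # the dimensionless combination h*r0 entering the expansions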
§ QUADRUPOLAR RADIATION BY A HYPERBOLIC ENCOUNTER ON A DE SITTER BACKGROUND Assumptions With the de Sitter quadrupole formula (<ref>) we can now compute the radiated energy during the whole hyperbolic encounter. For this we will make the following assumptions: * The characteristic proper time scale of the encounter t_en and the expansion rate of the background are assumed to be such that the expansion of the universe can be neglected during the encounter: a≈const=1, Ht_en≪1. Thus, a static de Sitter universe is assumed during the encounter. Freedom in the normalisation of the scale factor a allows us to set it to unity. * The pressure of each body is negligible compared with its energy density, thus the radiation field reduces to R_ij≈∂_t^3Q_ij-3H∂_t^2Q_ij+2H^2∂_tQ_ij. * The relative physical separation R:=ar=a||𝐱_1-𝐱_2|| is such that the bodies are far apart compared to the Schwarzschild radius of either body at any time: 2Gm/c^2≪ R and 2Gμ/c^2≪ R. * Each body moves slowly compared to the speed of light: v/c ≪1. * The trajectory of the binary is well approximated by a hyperbolic orbit: orbit shrinking stemming from energy loss due to gravitational radiation and orbit expansion due to the universe expansion are both neglected. These assumptions can be interpreted as the Newtonian approximation which we use to describe the motion of the binary system. In the following, we denote by a capital R the physical distance while keeping a small r for the comoving distance; likewise, x̄^i=ax^i denotes the physical coordinates. Quadrupole moment According to references <cit.>, <cit.>, the time component of the source energy-momentum tensor and the mass density are linked by: T_00(𝐱,t) =a^2(t)ρ(𝐱,t) ρ(𝐱,t) =μ δ(𝐱̄-𝐱̄_*(t))=μ/a^3 δ(𝐱-𝐱_*(t)) , where 𝐱̄_*(t)=a(t) 𝐱_*(t) denotes the physical trajectory of the body of mass μ. The energy quadrupole moment of the binary system therefore reads: Q^ij=∫ a^3T_00 x^ix^j d^3𝐱=∫μ a^2δ(𝐱-𝐱_*) x^ix^j d^3𝐱=μ a^2 x^i_* x^j_*=μ x̄^i_* x̄^j_* , so that Q^ij= [ μ R^2cos^2ϕ μ R^2cosϕsinϕ; μ R^2cosϕsinϕ μ R^2sin^2ϕ ] =μ r^2 [ cos^2ϕ cosϕsinϕ; cosϕsinϕ sin^2ϕ ] , where we used the first among the above assumptions for the last equality. Power radiation In section <ref>, we derived the trajectory of the reduced mass and the time derivative of the polar angle ϕ, with r as given by eq. (<ref>) and ϕ̇ by eq. (<ref>). To get the radiated power we need the time derivatives of the quadrupole moments, which are given in Appendix <ref>. We write the total radiated power as: P = ∑_n=0^4 P_n H^n, where the P_n are given by P_0 =4μ^2m^3G^4 (1+ϵcosϕ)^4/15 r_0^5 (1+ϵ)^5[24+13 ϵ^2+48 ϵcosϕ+11ϵ^2cos(2ϕ)] , P_1 =4G^7/2μ^2m^5/2/5(1+ϵ)^7/2r_0^7/2(1+ϵcosϕ)^2sinϕ[18ϵ+13ϵ^3+40ϵ^2cosϕ+9ϵ^3cos(2ϕ)] , P_2 =2G^3m^2μ^2/15r_0^2(1+ϵ)^2(60+100ϵ^2+36ϵ^4+(180+113ϵ^2)ϵcosϕ+116ϵ^2cos(2ϕ)+19ϵ^3cos(3ϕ)) , P_3 =-32G^5/2μ^2m^3/2ϵ^2/5√(r_0(1+ϵ))·ϵ+cosϕ/1+ϵcosϕsinϕ , P_4 =4G^2r_0mμ^2(1+ϵ)/15(1+ϵcosϕ)^2[6+7ϵ^2+12ϵcosϕ-ϵ^2cos(2ϕ)] . It is straightforward to verify that in the Minkowski case (corresponding to H=0 and P=P_0) we recover the result given in eq. (<ref>). In Figure <ref> we plot the angular distribution of the various P_n for a given mass of the colliding black holes as an example. For P_0 the emission of gravitational waves is maximal at periastron (ϕ=0), when the two bodies reach their closest approach, and the values of P_0 are symmetric with respect to ϕ=0, decreasing as the two bodies move away from each other.
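The qualitative features of these curves can be reproduced with a few lines of code. The sketch below evaluates the P_n(ϕ) listed above in geometrized units (G=c=1) with purely illustrative parameter values (assumed masses, periastron distance and eccentricity); only the angular shapes are meaningful here.

```python
# Sketch of the angular distributions P_n(phi) quoted above.
# Geometrized units G = c = 1; m, mu, r0, eps are illustrative assumptions.
import numpy as np

G = 1.0
m, mu = 2.0, 0.5          # assumed total and reduced mass
r0, eps = 50.0, 1.3       # assumed periastron distance and eccentricity

def powers(phi):
    c1, s1 = np.cos(phi), np.sin(phi)
    c2, c3 = np.cos(2*phi), np.cos(3*phi)
    u = 1 + eps*c1
    P0 = 4*mu**2*m**3*G**4*u**4/(15*r0**5*(1+eps)**5) * (24 + 13*eps**2 + 48*eps*c1 + 11*eps**2*c2)
    P1 = 4*G**3.5*mu**2*m**2.5/(5*(1+eps)**3.5*r0**3.5) * u**2*s1 * (18*eps + 13*eps**3 + 40*eps**2*c1 + 9*eps**3*c2)
    P2 = 2*G**3*m**2*mu**2/(15*r0**2*(1+eps)**2) * (60 + 100*eps**2 + 36*eps**4 + (180 + 113*eps**2)*eps*c1 + 116*eps**2*c2 + 19*eps**3*c3)
    P3 = -32*G**2.5*mu**2*m**1.5*eps**2/(5*np.sqrt(r0*(1+eps))) * (eps + c1)/u * s1
    P4 = 4*G**2*r0*m*mu**2*(1+eps)/(15*u**2) * (6 + 7*eps**2 + 12*eps*c1 - eps**2*c2)
    return P0, P1, P2, P3, P4

phi = np.linspace(-0.999*np.arccos(-1/eps), 0.999*np.arccos(-1/eps), 401)
P0, P1, P2, P3, P4 = powers(phi)
mid = len(phi)//2
print(np.argmax(P0) == mid, P4[mid] < P4[0])  # P0 peaks at periastron, P4 near the bounds
```

In particular, this confirms that P_0 peaks at periastron while P_4 grows towards the horizon bounds, a point taken up next.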
The odd de Sitter power contributions (P_1 and P_3) do not contribute to the radiated energy, since they are odd with respect to the periastron and thus the corresponding integral over the whole trajectory vanishes. The plot of the leading de Sitter contribution P_2H^2 has the same shape as P_0, but is 29 orders of magnitude smaller. However, the still much smaller contribution P_4H^4 has a different shape when plotted. Indeed, the emitted power is minimal at periastron and maximal far away at the cosmological horizon, which is quite different from what one would expect in a scattering process. This illustrates a surprising and non-intuitive property of the de Sitter universe. Nonetheless, the contribution of P_4H^4 to the radiated energy is so small that it does not affect the total energy. As in the P_4H^4 case, the shape of P_3H^3 presents asymptotes that take larger values close to the horizon bounds; these are, however, still negligible compared to the leading de Sitter contributions, and, as mentioned above, the odd contributions do not enter the radiated energy since the integral vanishes. Note that the static de Sitter universe has a horizon located at r=h^-1. The angles corresponding to the horizon are the bounds between which the above functions for the power are defined (thus ϕ∈[ϕ_-,ϕ_+]) and correspond to the bounds of the integral giving the total radiated energy: 1/h=r_0(1+ϵ)/1+ϵcosϕ_± ⇔ ϕ_±=±arccos(hr_0(1+ϵ)-1/ϵ) . The radiated energy is given by Δ E=∫ P dt=∫_ϕ_-^ϕ_+P/ϕ̇ dϕ=∑_n=0^4 H^n∫_ϕ_-^ϕ_+P_n/ϕ̇ dϕ:=∑_n=0^4 Δ E_n H^n . As already mentioned, P_1 and P_3 are clearly odd with respect to ϕ=0, hence the respective integrals between ϕ_- and ϕ_+ vanish. For the different energy contributions we get Δ E_0 =G^7/2μ^2m^5/2/90c^5r_0^7/2(1+ϵ)^7/2[12ϕ_+(96+292ϵ^2+37ϵ^4)+24ϵ^2(71+12ϵ^2)sin(2ϕ_+)+368ϵ^3sin(3ϕ_+) +√((ϵ+1)(1-hr_0)(ϵ-1+hr_0+ϵ hr_0)) (4608+3504ϵ^2)+33ϵ^4sin(4ϕ_+)] , Δ E_2 =16G^5/2m^3/2μ^2/15c^5√(r_0)(√((1-hr_0)(ϵ-1+hr_0+ϵ hr_0)) (19+9ϵ-9/hr_0) +20ϕ_+/√(ϵ+1)+10√(ϵ-1) arctanh[√(ϵ-1/ϵ+1)tan(ϕ_+/2)]) , Δ E_4 =16G^3/2√(m)μ^2r_0^5/2/15 c^5(√(ϵ^2-(hr_0ϵ+hr_0-1)^2)/3h^3r_0^3(ϵ-1)^2√(ϵ+1)[4-2hr_0-11h^2r_0^2+2ϵ(hr_0-4) +ϵ^2(4+5h^2r_0^2)]-2ϵ^2-3/(ϵ-1)^5/2 arctanh[√(ϵ-1/ϵ+1)tan(ϕ_+/2)]) , where in the above formula we have inserted the speed of light c dependence explicitly and used h=H/c. The expressions in (<ref>) depend explicitly on the Hubble rate and also implicitly through ϕ_- and ϕ_+. Next, we expand the above expressions in powers of H (equivalently of h) so as to retain all terms up to H^2. We neglect higher-order terms as they are extremely small. Expansion of Δ E_0: For this leading term we use the following expansions: ϕ_+=arccos(hr_0(1+ϵ)-1/ϵ)=arccos(-1/ϵ)-hr_0√(ϵ+1/ϵ-1)+h^2r_0^2/2√(ϵ+1)/(ϵ-1)^3/2+𝒪(h^3) , √((1-hr_0)(ϵ-1+hr_0+ϵ hr_0))=√(ϵ-1)+hr_0/√(ϵ-1)-h^2r_0^2/2ϵ^2/(ϵ-1)^3/2+𝒪(h^3) , sinϕ_+=√(ϵ^2-1)/ϵ+hr_0/ϵ√(ϵ+1/ϵ-1)-h^2r_0^2ϵ^2/2√(ϵ+1)/(ϵ-1)^3/2+𝒪(h^3) , and similarly for sin(nϕ_+), n=2,3,4, whose expansions are given in appendix <ref>. By plugging these expansions into Δ E_0 of (<ref>) and gathering together the terms proportional to the same power of H (or h), we get the final expanded expression for Δ E_0: Δ E_0=2G^7/2μ^2m^5/2/45c^5r_0^7/2(ϵ+1)^7/2(3arccos(-1/ϵ)(96+292ϵ^2+37ϵ^4)+√(ϵ^2-1) (602+673ϵ^2))+𝒪(H^3) . Note that there are no terms proportional to H or H^2. As expected, this leading term corresponds to the Minkowski contribution already computed in (<ref>).
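As a cross-check of the expansions used in this step, the quoted small-h expansion of ϕ_+ can be verified with sympy; the sketch below assumes an illustrative eccentricity ϵ=3/2 and sets r_0=1 purely for the check.

```python
# Cross-check (sympy) of the small-h expansion of phi_+ quoted above,
# for an assumed eccentricity eps = 3/2 and r_0 = 1.
import sympy as sp

h = sp.symbols('h', positive=True)
r0, eps = 1, sp.Rational(3, 2)
phi_plus = sp.acos((h*r0*(1 + eps) - 1)/eps)
quoted = (sp.acos(-1/eps) - h*r0*sp.sqrt((eps + 1)/(eps - 1))
          + h**2*r0**2/2 * sp.sqrt(eps + 1)/(eps - 1)**sp.Rational(3, 2))

print(sp.simplify(sp.series(phi_plus, h, 0, 3).removeO() - quoted))  # -> 0
```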
Expansion of Δ E_2H^2: This term corresponds to the leading de Sitter contribution. The expansion of tan(ϕ_+/2) is given in appendix <ref>, equation (<ref>) so that we get: √(ϵ-1/ϵ+1)tan(ϕ_+/2)=1-ϵ/ϵ-1hr_0+ϵ^2/(ϵ-1)^2h^2r_0^2/2+𝒪(h^3) , and as next we have to expand the arctanh of this expression, for which we take the expansion for an argument around 1 as follows: arctanh(1-x)=-1/2ln(x/2)-x/4-x^2/16+𝒪(x^3) for 0<x≪1 . With the two above equations we get the desired expansion of the arctanh term: arctanh[√(ϵ-1/ϵ+1)tan(ϕ_+/2)]=-1/2ln(ϵ/ϵ-1hr_0/2+𝒪(h^2))-ϵ/ϵ-1hr_0/4+ϵ^2/(ϵ-1)^2h^2r_0^2/16+𝒪(h^3) . Finally, the quadratic contribution term can be simplified and rewritten as Δ E_2H^2 =48G^5/2μ^2m^3/2(ϵ-1)^3/2/5c^4r_0^3/2 H +16G^5/2μ^2m^3/2 H^2/15c^5√(r_0)(20arccos(-1/ϵ)/√(ϵ+1)-5√(ϵ-1) ln(ϵ/ϵ-1Hr_0/2c+𝒪(H^2))+28√(ϵ-1)) . Expansion of Δ E_4H^4: Although at first glance it is proportional to H^4, due to the H dependence in the limiting angles, we get terms proportional to H and H^2. Clearly these are very small terms. Indeed the expansion of the square root term gives: √(ϵ^2-(1-hr_0(ϵ+1))^2)=√(ϵ^2-1)+hr_0√(ϵ+1/ϵ-1)-h^2r_0^2/2ϵ^2√(ϵ+1)/(ϵ-1)^3/2+𝒪(h^3) . By plugging expansion (<ref>) into Δ E_4 in (<ref>), regrouping the terms according to their respective orders and keeping only terms up to quadratic order in H, we get: Δ E_4H^4 =64G^3/2μ^2√(m)/45c^2√(r_0)√(ϵ-1) H+32G^3/2μ^2√(mr_0)/15c^3√(ϵ-1) H^2+𝒪(H^3) . Final expansion: We can now write the total radiated energy up to H^2 terms by adding the contributions as given in (<ref>), (<ref>) and (<ref>). This way we get Δ E=ϵ_0+ϵ_1H-H^216G^5/2m^3/2μ^2√(ϵ-1)/3c^5√(r_0)ln[ϵ/ϵ-1Hr_0/2c+𝒪(H^2)]+ϵ_2H^2+𝒪(H^3) with ϵ_0=2G^7/2μ^2m^5/2/45c^5r_0^7/2(ϵ+1)^7/2(3arccos(-1/ϵ)(96+292ϵ^2+37ϵ^4)+√(ϵ^2-1) (602+673ϵ^2)) , which is the leading term corresponding to the Minkowski contribution already computed in (<ref>). The leading de Sitter contribution is given by: ϵ_1=48G^5/2μ^2m^3/2/5c^4r_0^3/2(ϵ-1)^3/2+64G^3/2μ^2√(m)/45c^2√(r_0)√(ϵ-1) . Finally, the leading quadratic H^2 contribution reads: ϵ_2 = 32G^3/2μ^2√(mr_0)/15c^3√(ϵ-1)+64G^5/2m^3/2μ^2/15c^5√(r_0)(7√(ϵ-1)+5/√(ϵ+1)arccos(-1/ϵ)) . (Terms proportional to H^2 lnH are also neglected.) Numerical application: To get a better idea of the magnitude of the various terms discussed above we consider as an example (given also in Fig. <ref>) an hyperbolic encounter of two supermassive black holes, which have the same mass m_1=m_2=0.5·10^7 M_⊙, with an impact parameter b=10 AU and an initial relative velocity v_0=10^7 m/s. The de Sitter universe is characterized by a cosmological constant whose value is Λ=1.1056·10^-52m^-2. In table <ref> we give the numerical values we get, using our example, for the above contributions to the total emitted energy (expressed in units of Joule). From this example we see that the leading de Sitter term (proportional to H) is extremely small as compared to the main Minkowski contribution. Higher terms are even smaller, and thus negligible. This conclusion also holds assuming other values for the masses, impact parameters, and velocities. Clearly, with present or planned detectors for gravitational waves such an effect is too small to be observable (see also Section 5 of <cit.> for a discussion on this issue). Parabolic limit: We discuss now the parabolic limit (ϵ→ 1). A detailed analysis shows that the exact parabolic limit is only reachable when H → 0, recovering thus the Minkowski spacetime. Note that we have ϵ>1 and ϵ→ 1 h=1+ϵcosϕ_±/r_0(1+ϵ)≈1-ϵ/r_0(1+ϵ)≈1-ϵ/2r_0 . 
However, as we are considering a positive Hubble rate together with an eccentricity close to but higher than one, we thus get for the Hubble rate H=hc=c(ϵ-1)/(2r_0) for ϵ→ 1 in the parabolic limit. As expected, the Hubble rate tends toward 0 as the orbit tends toward a parabola. It is now straightforward to perform the limit in the above expressions for the energy, so that we find that in the parabolic limit we get only the Minkowski expression lim_ϵ→1Δ E=85π G^7/2μ^2m^5/2/12√(2) c^5r_0^7/2 , which we found in eq. (<ref>). § CONCLUSION We used the de Sitter quadrupole formula expressing the power radiated in the form of gravitational waves by a gravitationally interacting system in an exponentially expanding universe for the particular case of a binary hyperbolic encounter. For the radiated energy we used an expansion in powers of the Hubble rate H. It turns out that the leading de Sitter contribution proportional to H is extremely small and thus not measurable with the present and future planned gravitational wave observatories and will not cause biases in the parameter estimation conducted with waveforms assuming an asymptotically flat background. We also checked that the parabolic limit is consistent with the value found in Minkowski space, since this limit indeed exists only within that framework. Here we did not take into account the shrinking and expansion of the trajectory due to the loss of gravitational radiation and the expansion of the universe during the encounter, respectively. It would be of interest to study the evolution of the orbital parameters due to these two factors. In this work, we have not discussed the implications that the de Sitter background has for the linear memory of the hyperbolic encounters; this can be found in <cit.>. It would be interesting to relate this study to a deeper understanding of the memory effect in a de Sitter background for hyperbolic encounters. § ACKNOWLEDGEMENTS PJ is supported by the Swiss Space Office, Bern, and ST is supported by the Swiss National Science Foundation Ambizione Grant Number PZ00P2-202204. § DERIVATIVES OF MOMENTS The computation of the radiated power requires the time derivatives of the components of the quadrupole moment tensor. In the Minkowski case, only the third time derivative is needed, whereas the de Sitter case involves the first, second, and third time derivatives. In this appendix, we give all the expressions of the time derivatives of the moments. §.§ Time derivatives Q̇_11 =-μ L sin(2ϕ)/1+ϵcosϕ Q̇_12 =μ L ϵcosϕ+cos(2ϕ)/1+ϵcosϕ Q̇_22 =2μ L ϵ+cosϕ/1+ϵcosϕsinϕ Q̈_11 =-Gμ m/2r_0(ϵ+1)(3ϵcosϕ+4cos(2ϕ)+ϵcos(3ϕ)) Q̈_12 =-Gμ m/r_0(ϵ+1)(4cosϕ+3ϵ+ϵcos(2ϕ))sinϕ Q̈_22 =Gμ m/2r_0(ϵ+1)(7ϵcosϕ+4cos(2ϕ)+4ϵ^2+ϵcos(3ϕ)) ⃛Q_11 =μ L^3/r_0^4(ϵ+1)^4(1+ϵcosϕ)^2(4+3ϵcosϕ)sin(2ϕ) ⃛Q_12 =-μ L^3/2r_0^4(ϵ+1)^4(1+ϵcosϕ)^2(5ϵcosϕ+8cos(2ϕ)+3ϵcos(3ϕ)) ⃛Q_22 =-μ L^3/r_0^4(ϵ+1)^4(1+ϵcosϕ)^2(8cosϕ+5ϵ+3ϵcos(2ϕ))sinϕ where L=√(Gmr_0(ϵ+1)) stands for the angular momentum per unit mass of the system, which is a conserved quantity. § TRIGONOMETRIC FUNCTIONS EXPANSIONS In this appendix, we provide the trigonometric relations needed for the expansion in powers of H=hc of the formulas for the total radiated energy. We used ϕ_+=arccos(hr_0(1+ϵ)-1/ϵ)=arccos(-1/ϵ)-hr_0√(ϵ+1/ϵ-1)+h^2r_0^2/2√(ϵ+1)/(ϵ-1)^3/2+𝒪(h^3) .
With the following expansion, sin(a+x)=sin a+cos a x-sin a x^2/2+𝒪(x^3) , we infer the expansion of sin(nϕ_+) for n≥0: sin(nϕ_+) =sin(narccos(-1/ϵ))-hr_0n√(ϵ+1/ϵ-1)cos(narccos(-1/ϵ)) +h^2r_0^2n/2√(ϵ+1)/(ϵ-1)^3/2[cos(narccos(-1/ϵ))-n√(ϵ^2-1)sin(narccos(-1/ϵ))] +𝒪(h^3) . This expansion has terms of the form sin(narccos(-1/ϵ)) and cos(narccos(-1/ϵ)) which can be simplified using trigonometric relations: sin(arccos(-1/ϵ)) =√(ϵ^2-1)/ϵ sin(2arccos(-1/ϵ)) =-2/ϵ^2√(ϵ^2-1) sin(3arccos(-1/ϵ)) =√(ϵ^2-1)(4/ϵ^3-1/ϵ) sin(4arccos(-1/ϵ)) =4√(ϵ^2-1)(1/ϵ^2-2/ϵ^4) cos(arccos(-1/ϵ)) =-1/ϵ cos(2arccos(-1/ϵ)) =2/ϵ^2-1 cos(3arccos(-1/ϵ)) =3/ϵ-4/ϵ^3 cos(4arccos(-1/ϵ)) =8/ϵ^4-8/ϵ^2+1 , which can be plugged into (<ref>) in order to obtain the desired results: sinϕ_+ =√(ϵ^2-1)/ϵ+hr_0/ϵ√(ϵ+1/ϵ-1)-h^2r_0^2ϵ^2/2√(ϵ+1)/(ϵ-1)^3/2+𝒪(h^3) sin(2ϕ_+) =-2/ϵ^2√(ϵ^2-1)-2hr_0√(ϵ+1/ϵ-1)(2/ϵ^2-1)+h^2r_0^2 3ϵ^2-2/ϵ^2(ϵ-1)^3/2√(ϵ+1)+𝒪(h^3) sin(3ϕ_+) =(4/ϵ^3-1/ϵ)√(ϵ^2-1)-3hr_0√(ϵ+1/ϵ-1)(3/ϵ-4/ϵ^3)+3/2h^2r_0^2√(ϵ+1)/ϵ^3(ϵ-1)^3/2(3ϵ^4-12ϵ^2+8)+𝒪(h^3) sin(4ϕ_+) =4(1/ϵ^2-2/ϵ^4)√(ϵ^2-1)-4hr_0√(ϵ+1/ϵ-1)(1+8/ϵ^4-8/ϵ^2) +h^2r_0^22/ϵ^4√(ϵ+1)/(ϵ-1)^3/2(-15ϵ^4+40ϵ^2-24)+𝒪(h^3) . We also needed the expansion of tan(ϕ_+/2), for which we again resort to the general expansion of the tangent function: tan(a+x)=tan a+x/cos^2a+tan a/cos^2ax^2+𝒪(x^3) . After some algebra, this leads us to: tanϕ_+/2=tanϕ_0/2-hr_0/2cos^2(ϕ_0/2)√(ϵ+1/ϵ-1)+h^2r_0^2/4cos^2(ϕ_0/2)√(ϵ+1)/(ϵ-1)^3/2(1+tan(ϕ_0/2)√(ϵ^2-1))+𝒪(h^3) . Furthermore, recalling that ϕ_0 is linked to ϵ via cos(ϕ_0)=-1/ϵ, we arrive at the useful equation: √(ϵ-1/ϵ+1)tan(ϕ_+/2)=1-ϵ/ϵ-1hr_0+ϵ^2/(ϵ-1)^2h^2r_0^2/2+𝒪(h^3) . § REFERENCES
http://arxiv.org/abs/2407.13503v1
20240718133504
$X_0(2900)$ and $χ_{c0}(3930)$ in process $B^+\to D^+ D^- K^+$
[ "Zuo-Ming Ding", "Qi Huang", "Jun He" ]
hep-ph
[ "hep-ph" ]
e1Corresponding author: junhe@njnu.edu.cn Department of Physics and Institute of Theoretical Physics, Nanjing Normal University, Nanjing 210097, China X_0(2900) and χ_c0(3930) in process B^+→ D^+ D^- K^+ Zuo-Ming Ding, Qi Huang Jun Hee1 Received: date / Revised version: date ===================================================== This study investigates the nature of the X_0(2900) and χ_c0(3930) based on experimental results of the process B^+→D^+ D^- K^+. We focus on the S-wave D^*-K^*+ and D_s^+D_s^- molecular states, which can be related to the X_0(2900) and χ_c0(3930), respectively. Using effective Lagrangians, we construct the potential kernel of the D^*-K^*+-D^-K^+ and D_s^+D_s^--D^+D^- interactions with a one-boson-exchange model, and determine the scattering amplitudes and their poles through a quasipotential Bethe-Salpeter equation approach. By incorporating the potential kernel into the three-body decay process B^+→D^+ D^- K^+, we evaluate the D^-K^+ and D^+D^- invariant mass spectra, as well as the Dalitz plot, with Monte Carlo simulation. A satisfactory fit to the D^-K^+ and D^+D^- invariant mass spectra is achieved after introducing additional Breit-Wigner resonances, the X_1(2900), ψ(3770), and χ_c2(3930) states. Prominent signals of the X_0(2900) and χ_c0(3930) states appear as peaks in the D^-K^+ and D^+D^- invariant mass spectra near 2900 and 3930 MeV, respectively. Clear event concentration of the X_0(2900) and χ_0(3930) states is evident as strips in the Dalitz plot. The results suggest that both X_0(2900) and χ_c0(3930) can be interpreted as molecular states, with the inclusion of X_1(2900) and χ_2(3930) necessary to describe structures in the regions near 2900 and 3930 MeV. § INTRODUCTION In 2020, the LHCb Collaboration observed two pairs of states, X_0,1(2900) and χ_c0,2(3930), in the D^-K^+ and D^+D^- invariant mass distributions of the B^+→D^+ D^- K^+ decay. The masses and widths of these states were determined by LHCb and are listed in Table <ref> <cit.>. Among these discovered structures, X_0(2900), also named T^c̅s̅0(2870)^0 <cit.>, is the first exotic candidate with four different flavors, attracting significant attention. Despite various theoretical interpretations regarding the nature of X_0(2900), such as the compact tetraquark <cit.> or triangle singularity <cit.>, the measured mass of X_0(2900) is close to the D^*K^* threshold, making the molecular picture appealing. In this context, the X_0(2900) state, as well as the related D̅^(*)K^(*) system, have been investigated within different frameworks, such as chiral effective field theory <cit.>, QCD sum rule methods <cit.>, one-boson-exchange model <cit.>, and effective Lagrangian approach <cit.>, to verify the molecular structure of X_0(2900) and explore other possible molecules. In our previous work <cit.>, we studied the D^*-K^*+ interaction using a quasipotential Bethe-Salpeter equation (qBSE) approach and constructed a one-boson-exchange potential with the help of heavy quark and chiral symmetries. An improved method was proposed in another of our previous works <cit.>, where we adopted hidden-gauge Lagrangians to construct the potential kernels, making the theoretical framework more self-consistent. The results of both studies suggest that X_0(2900) can be explained as a D^*-K^*+ molecular state with I(J^P)=0(0^+). In contrast to the X_0(2900) state, the χ_c0(3930) state has received relatively little attention until the discovery of the X(3960) state in the D_s^+D_s^- invariant mass spectra <cit.>. 
These two states have similar masses and widths, as well as the preferred J^PC = 0^++. Consequently, many investigations have proposed coupled-channel analyses to uncover the nature of the χ_c0(3930) and X(3960) states. It is common to interpret X(3960) and χ_c0(3930) as D_s^+D_s^--D^+D^- molecules <cit.>, given that the mass of the X(3960) state is close to the D_s^+D_s^- threshold. In our previous work, we employed the three-body decay B^+→(D_s^+D_s^- /D^+D^-)K^+ in a qBSE approach to investigate the X(3960)/χ_c0(3930) resonance structure observed by LHCb, assuming that the X(3960)/χ_c0(3930) are S-wave D_s^+D_s^- molecules <cit.>. Our results indicated that the X(3960) state can be well reproduced in the D_s^+D_s^- invariant mass spectrum, and the χ_c0(3930) state can be observed in the D^+D^- invariant mass spectrum, albeit with a very small width. As our previous model <cit.> only considered the χ_c0(3930), and the experimental results were not perfectly reproduced, it is plausible to suggest that the χ_c2(3930), which is also located near 3930 MeV but with J=2, may significantly contribute to the structure observed near 3930 MeV in the D^+D^- invariant mass spectrum. Further investigations are needed to substantiate this hypothesis and to understand the internal configurations of the χ_cJ states. Many theoretical studies have attempted to elucidate the nature of X_0(2900) and χ_c0(3930) using diverse methodologies and experimental inputs. However, these two states are often investigated separately. Despite the significance of resonance structures in both D_sD̅_s and charm-strange systems, only a few amplitude analyses of the decay process B^+→D^+ D^- K^+ have considered the combined contributions of both X_0(2900) and χ_c0(3930). In the present research, we aim to investigate the D^-K^+ and D^+D^- invariant mass spectra and the Dalitz plot of the B^+→ D^+ D^- K^+ process, taking into account the rescatterings associated with the molecular states corresponding to X_0(2900) and χ_c0(3930). Through a comparison with experimental data from LHCb, we will delve into the mechanism of the B^+→ D^+ D^- K^+ process and discuss the nature of X_0(2900) and χ_c0(3930) in this context. In the following section, we will outline the theoretical framework utilized to investigate the B^+→D^+ D^- K^+process. We will provide a comprehensive explanation of the mechanism, flavor wave functions, and Lagrangians involved in this process through the intermediate states X_0(2900) and χ_0(3930). The potential kernels for each process will be established and incorporated into the qBSE to calculate the invariant mass spectra and Dalitz plot. In section <ref>, we will analyze the invariant mass spectra of the B^+→D^+ D^- K^+ process, taking into account the rescatterings of the final D^-K^+ and D^+D^- channels, as well as additional Breit-Wigner resonances representing the X_1(2900), ψ(3770), and χ_c2(3930) states. The investigation will explore the impact of rescatterings and delve into the nature of the X_0(2900) and χ_c0(3930) resonances. The article will be concluded with a summary of the findings. § FORMALISM OF AMPLITUDE FOR B^+→D^+ D^- K^+ PROCESS In the current work, we will consider two rescattering processes for the three-body decay B^+→D^+ D^- K^+: D^*-K^*+-D^-K^+ and D_s^+D_s^--D^+D^-. These two two-body rescatterings can be described by the one-boson-exchange model, and the resulting scattering amplitudes will be incorporated into the three-body decay to obtain the total decay amplitude. 
§.§ Mechanism for D^*-K^*+-D^-K^+ rescattering The experimental data analysis suggests that a resonance structure X_0(2900) can be found in the D^- K^+ invariant mass spectrum of the process B^+→D^+ D^- K^+. In our model, the X_0(2900) state is interpreted as an S-wave D^*-K^*+ molecular state. The B meson is assumed to decay first into D^(*)-, K^(*)+, and D^+, as depicted in the blue circle in Fig. <ref>. The intermediate D^(*)-K^(*)+ state then undergoes a rescattering process, producing the final D^- and K^+ particles, as shown in the red circle in Fig. <ref>. When calculating the rescattering amplitude 𝒯, the D^*-K^*+ interaction and its coupling to D^- K^+ are considered. First, we need to deal with the direct vertex B^+ → D^+ D^(*)- K^(*)+, as shown in the blue full circle in Fig. <ref>. Following Ref. <cit.>, the amplitude of the three-body decay can be constrained by Lorentz invariance. For direct decay B^+→D^+ D^- K^+, the amplitude has the form A_B^+ → D^+ D^- K^+ = c_1, and for direct decay B^+ → D^+ D^*- K^*+, it takes the form A_B^+ → D^+ D^*- K^*+ = c_2 ϵ_D^*-·ϵ_K^*+, where ϵ_D^*- and ϵ_K^*+ are the polarization vectors of D^*- and K^*+, respectively. The constants c_1 and c_2 represent the coupling constants for the respective channels. The value of c_1 can be obtained by multiplying the branching fraction of B^+→D^+ D^- K^+ with the decay width of the B^+ meson. The branching fraction for B^+→D^+ D^- K^+ is ℬ_B^+ → D^+ D^- K^+ = (2.2 ± 0.7) × 10^-4 <cit.>. However, the branching fraction for B^+→D^+ D^*- K^*+ is not yet available. Therefore, we use c_1 = 5.397 × 10^-5, as mentioned in Ref. <cit.>, and treat c_2 as a free parameter to estimate the decay process due to the lack of experimental information. The determination of c_2 will be refined based on the experimental data for the process B^+ → D^+ D^(*)- K^(*)+. The next step is to construct the potential kernel for the rescattering process, as shown in Fig. <ref>, to find the pole in the complex energy plane within the qBSE approach and to calculate the invariant mass spectrum. The one-boson exchange model will be adopted, with light mesons π, η, η', ρ, and ω mediating the interaction between D^(*)- and K^(*)+ mesons. For the systems considered in this work, the couplings of exchanged light mesons to charmed mesons and strange mesons are required. Thus, the hidden-gauge Lagrangians with SU(4) symmetry are suitable for constructing the potential, which read <cit.> ℒ_𝒫𝒫𝒱 =-ig ⟨ V_μ[𝒫,∂^μ𝒫]⟩, ℒ_𝒱𝒱𝒫 =G'/√(2) ϵ^μναβ⟨∂_μ𝒱_ν∂_α𝒱_β𝒫⟩, ℒ_𝒱𝒱𝒱 =ig  ⟨ (𝒱_μ∂^ν𝒱^μ-∂^ν𝒱_μ𝒱^μ) 𝒱_ν⟩, with G'=3g'^2/4π^2f_π, g'=-G_𝒱m_ρ/√(2)f_π^2, G_𝒱≃ 55 MeV and f_π=93 MeV and the coupling constant g=M_𝒱/2f_π, M_𝒱≃ 800 MeV <cit.>. The 𝒫 and 𝒱 are the pseudoscalar and vector matrices under SU(4) symmetry as 𝒫 = ( [ √(3)π^0+√(2)η+η'/√(6) π^+ K^+ D̅^0; π^- -√(3)π^0+√(2)η+η'/√(6) K^0 D^-; K^- K̅^0 -η+√(2)η'/√(3) D_s^-; D^0 D^+ D_s^+ η_c; ]), and 𝒱 = ( [ ρ^0+ω/√(2) ρ^+ K^* + D̅^* 0; ρ^- -ρ^0+ω/√(2) K^* 0 D^* -; K^* - K̅^* 0 ϕ D_s^* -; D^* 0 D^* + D_s^* + J/ψ; ]) . §.§ Mechanism for D_s^+D_s^--D^+D^- rescattering In this work, we explore the χ_c0(3930) resonance structure in the D^+D^- invariant mass spectrum, which can be related to D_s^+D_s^--D^+D^- rescattering <cit.>. The Feynman diagram of such processes is illustrated in Fig. <ref>. The B^+ meson decays to D_(s)^+D_(s)^- and K^+ first, and the intermediate D_(s)^+D_(s)^- channel will be involved in the rescattering process, subsequently obtaining the final product D^+D^-.
The amplitude of direct B^+→D_(s)^+ D_(s)^- K^+, as shown as the blue full circle in Fig. <ref>, can be written as M_B^+→ D_(s)^+ D_(s)^- K^+ = c_3(c_1), where c_3= 6.027 × 10^-5 and c_1 = 5.397 × 10^-5 as mentioned in Ref. <cit.> and Sec. <ref>. For the S-wave isoscalar D_s^+D_s^- and D^+D^- states, the wave functions can be constructed as |X_DD̅^0⟩ =1/√(2)(|D^+D̅^-⟩+|D^0D̅^0⟩), |X_D_sD̅_s^0⟩=|D_s^-D_s^+⟩, The one-boson-exchange model is used to obtain the interaction between the heavy mesons. Contributions from vector mesons (𝕍=ρ and ω for D^+D^--D^+D^- interaction, 𝕍=ϕ for D_s^+D_s^–D_s^+D_s^- interaction, and 𝕍=K^* for D^+D^--D_s^+D_s^- interaction) and scalar meson (σ) exchange are considered. Different from the D^(*)-K^(*)+ channel, two charmed mesons are involved here. Hence, heavy quark symmetry is more suitable to describe the interaction. The heavy quark effective Lagrangian for heavy mesons interacting with light mesons reads <cit.>, ℒ_𝒫𝒫𝕍 = -√(2)βg_V𝒫^_b𝒫_a^† v·𝕍_ba +√(2)βg_V𝒫^†_a 𝒫^_b v·𝕍_ab, ℒ_𝒫𝒫σ = -2g_s𝒫^_b𝒫^†_bσ -2g_s𝒫^_b𝒫^†_bσ, where the velocity v should be replaced by i∂/2√(m_i m_f) with m_i,f being the mass of the initial or final heavy meson. 𝒫^(*)T = (D^(*)0, D^(*)+, D_s^(*)+). The 𝕍 denotes the SU(3) vector matrix, whose elements are the first 3×3 elements of the SU(4) vector matrix in Eq. (<ref>). The parameters involved here were determined in the literature as β=0.9, g_s=0.76, and g_V = 5.9 <cit.>. Besides, contribution from the J/ψ exchange is also considered in the current work since it is found important in the interaction between charmed and anticharmed mesons <cit.>. The Lagrangians are written with the help of heavy quark effective theory as <cit.>, L_D_(s)D̅_(s)J/ψ = ig_D_(s)D_(s)ψψ·D̅∂D, where the couplings are related to a single parameter g_2 as g_D_(s)D_(s)ψ/m_D= 2 g_2 √(m_ψ),with g_2=√(m_ψ)/(2m_Df_ψ) and f_ψ=405 MeV. §.§ Potential kernel With the above Lagrangians, the potential kernel in the one-boson-exchange model can be constructed using the standard Feynman rules, expressed as V_P,σ=I_iΓ_1Γ_2 P_P,σf_P,σ^2(q^2), V_V=I_iΓ_1μΓ_2ν P^μν_Vf_V^2(q^2), for pseudoscalar (P), scalar (σ), and vector (V) exchange, respectively. The Γ_1 and Γ_2 are for the upper and lower vertices of the one-boson-exchange Feynman diagram, respectively. The I_i is the flavor factor for certain meson exchange which can be derived using the Lagrangians in Eqs (<ref>), (<ref>) and (<ref>) and the matrices in Eqs. (<ref>) and (<ref>) <cit.>. The explicit values are I_π=-3/2, I_η=0, I_η'=1/2, I_ρ=-3/2 and I_ω=1/2 for D^*-K^*+-D^-K^+ rescattering, and I_ρ=3/2, I_ω=1/2, I_σ=I_J/ψ=I_ϕ=1 for D_s^+D_s^–D^+D^- rescattering, respectively. The propagators are defined as usual as P_P,σ= i/q^2-m_P,σ^2, P^μν_V=i-g^μν+q^μ q^ν/m^2_V/q^2-m_V^2, where q is the momentum of exchanged meson and m_V,P,σ represents the mass of the exchanged meson. We introduce a form factor f(q^2)=Λ_e^2/(q^2-Λ_e^2) with a cutoff Λ_e to compensate for the off-shell effect of the exchanged meson. This form factor type was also adopted in Ref. <cit.> for studying nucleon-nucleon scattering with spectator approximation, similar to the current work. It helps in avoiding overestimation of the contribution of J/ψ exchange in D_s^+D_s^–D^+D^- rescattering in the current work. §.§ Rescattering amplitudes in qBSE approach The amplitude of the rescattering process will be expressed within the qBSE approach. 
Note that the diagrams illustrated in Sec.<ref> and Sec.<ref> involve the same initial and final particles but different intermediate particles. Here, we present only the rescattering amplitude of the system mentioned in Sec.<ref>, where the final particles K^+, D^-, and D^+ are labeled as particles 1, 2, and 3, respectively. The rescattering amplitude for the system discussed in Sec.<ref> can be obtained analogously. Based on the potential of the interactions constructed above, the rescattering amplitude can be obtained using the qBSE approach <cit.>. After the partial-wave decomposition, the qBSE can be reduced to a 1-dimensional equation for scattering amplitude T^J^P with a spin-parity J^P as <cit.>, i T^J^P_λ'_1,λ'_2,λ_1,λ_2( p', p) =i V^J^P_λ'_1,λ'_2,λ_1,λ_2( p', p)+1/2∑_λ”_1,λ”_2∫ p”^2d p”/(2π)^3 · i V^J^P_λ'_1,λ'_2,λ”_1,λ”_2( p', p”) G_0( p”)i T^J^P_λ”_1,λ”_2,λ_1,λ_2( p”, p), where the indices λ'_1, λ'_2, λ”_1, λ”_2, λ_1, λ_2 represent the helicities of the two rescattering constituents for the final, intermediate, and initial particles 1 and 2, respectively. G_0( p”) is a reduced propagator written in the center-of-mass frame, with P=(M, 0) as G_0 =δ^+(p”^ 2_2-m_2^2)/p”^ 2_1-m_1^2 =δ^+(p”^0_2-E_2( p”))/2E_2( p”)[(W-E_2( p”))^2-E_1^2( p”)], where m_1,2 is the mass of particle 1 or 2. As required by the spectator approximation, the heavier particle (particle 2 here) is put on shell, with a four-momentum of p”^0_2=E_2( p”)=√( m_2^ 2+ p”^2). The corresponding four-momentum for the lighter particle (particle 1 here) p”^0_1 is then W-E_2( p”) with W being the center-of-mass energy of the system. Here and hereafter we define the value of the momentum p=| p|. And the momentum of particle 1 p”_1=- p” and the momentum of particle 2 p”_2= p”. The partial wave potential V_λ'_1,λ'_2,λ_1,λ_2^J^P can be obtained from the potential as i V_λ'_1,λ'_2,λ_1,λ_2^J^P( p', p) =2π∫ dcosθ [d^J_λ_21λ'_21(θ) i V_λ'_1λ'_2,λ_1λ_2( p', p) +η d^J_-λ_21λ'_21(θ) i V_λ'_1,λ'_2,-λ_1,-λ_2( p', p)], where η=PP_1P_2(-1)^J-J_1-J_2 with P and J being parity and spin for system and constituent 1 or 2. λ_21=λ_2-λ_1. The initial and final relative momenta are chosen as p'=(0,0, p') and p=( psinθ,0, pcosθ). The d^J_λ'λ(θ) is the Wigner d-matrix. An exponential regularization is also introduced as a form factor into the reduced propagator as G_0( p”)→ G_0( p”)e^-2(p”^2_2-m_2^2)^2/Λ_r^4 <cit.>. The cutoff parameter Λ_r and Λ_e can be chosen as different values, but have a similar effect on the result. For simplification, we set Λ_r=Λ_e in the current work. The amplitude T can be determined by discretizing the momenta p', p, and p” in the integral equation (<ref>) using Gauss quadrature with a weight w( p_i). After this discretization, the integral equation can be reformulated as a matrix equation  <cit.> T_ik =V_ik+∑_j=0^N V_ijG_jT_jk. Here, the propagator G is represented as a diagonal matrix: G_j>0 =w( p”_j) p”^2_j/(2π)^3G_0( p”_j), G_j=0 =-i p”_o/32π^2 W+∑_j [w( p_j)/(2π)^3 p”^2_o/2W( p”^2_j- p”^2_o)], where the on-shell momentum is given by p”_o=1/2W√([W^2-(m_1+m_2)^2][W^2-(m_1-m_2)^2]). To identify the pole of the rescattering amplitude in the energy complex plane, we seek the position where |1- VG|=0 with z=E_R + iΓ/2 corresponding to the total energy and width. 
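In practice this amounts to a linear-algebra problem: solve T=(1-VG)^-1V on the discretized grid and scan the determinant for zeros. The schematic Python sketch below illustrates only this algorithmic structure; the potential and propagator entries used here are toy placeholders (a separable attractive form on a Gauss-Legendre grid), not the physical D^*-K^*+ or D_s^+D_s^- interaction constructed above.

```python
# Schematic sketch of the discretized qBSE: T = V + V G T is solved by matrix
# inversion and poles correspond to zeros of det(1 - V G). The potential V and
# propagator weights G below are toy placeholders, not the physical input.
import numpy as np

def solve_T(V, G):
    """Solve (1 - V diag(G)) T = V for the amplitude matrix T."""
    return np.linalg.solve(np.eye(len(G)) - V @ np.diag(G), V)

def det_condition(V, G):
    """|1 - V G|; a pole of T corresponds to a zero of this determinant."""
    return np.linalg.det(np.eye(len(G)) - V @ np.diag(G))

# toy grid and separable attractive potential, in arbitrary units
nodes, weights = np.polynomial.legendre.leggauss(40)
p = 0.5*(nodes + 1.0)*2.0                       # momenta on (0, 2)
g = np.exp(-p**2)                               # toy vertex function
V = -20.0*np.outer(g, g)                        # toy attractive potential
G = weights*p**2/(2*np.pi)**3/(p**2 + 0.05)     # toy propagator weights

print(det_condition(V, G), solve_T(V, G)[0, 0])
```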
After incorporating the amplitudes of the direct decay and rescattering processes, the total amplitude of the process B^+→D^+ D^- K^+ with D^*-K^*+-D^-K^+ rescattering can be expressed in the center-of-mass frame of particles 1 and 2 as follows <cit.>: M(p_1,p_2,p_3) =∑_λ'_1,λ'_2∫d^4p'^cm_2/(2π)^4 T_λ'_1,λ'_2(p^cm_1,p^cm_2;p'^cm_1,p'^cm_2) · G_0(p'^cm_2) A_λ'_1,λ'_2(p'^cm_1,p'^cm_2,p_3^cm). Here, the helicities of the initial and final particles of the process B^+→ D^+ D^- K^+ have been omitted since they are all zero. The momenta with the superscript cm refer to those in the center-of-mass frame of particles 1 and 2. To analyze the direct decay amplitude A_λ'_1,λ'_2, a partial-wave expansion is required, similar to the expansion performed on the rescattering potential kernel V in Eq. (<ref>), as shown in Ref. <cit.>, A_λ'_1,λ'_2(p'^cm_1,p'^cm_2,p_3^cm) =∑_Jλ'_1λ'_2N_JD^J*_λ'_1,λ'_2( Ω^cm_2) A^J_λ'_1,λ^'_2( p'^cm_2,p_3^cm), where N_J is a normalization constant with the value of √((2J+1)/4π), and Ω_2^cm is the spherical angle of the momentum of particle 2. Hence, the partial-wave amplitude for J^P=0^+ is given by M^0^+(p_1,p_2,p_3) =1/2N_0∑_λ'_1λ'_2∫ p'^cm2_2d p'^cm_2/(2π)^3 i T^0^+_λ'_1,λ'_2( p'^cm_2, p^cm_2) · G_0( p'^cm_2) A^0^+_λ'_1,λ'_2( p'^cm_2,p_3^cm). § THE NUMERICAL RESULTS In this section, we will calculate the D^-K^+ and D^+D^- invariant mass spectra and generate the corresponding Dalitz plot using Monte Carlo simulation. Our analysis aims to compare these results with experimental data to investigate the nature of the X_0(2900) and χ_c0(3930) states. §.§ Invariant mass spectrum and Dalitz plot With the preparation in the previous section, we are able to calculate the decay amplitude of the process B^+→D^+ D^- K^+ with rescattering of D^*-K^*+-D^-K^+ with J^P=0^+ for the X_0(2900) resonance. The decay amplitude for D_s^+D_s^--D^+D^- rescattering with J^P=0^+, associated with the χ_c0(3930) resonance, can be obtained in the same way. In the current work, we focus on the roles played by the rescatterings and the relevant molecular states in the process B^+→ D^+ D^- K^+. However, it is important to note that if we only consider rescatterings related to the X_0(2900) and χ_c0(3930) resonances, the model amplitude may be too simplistic to accurately replicate the invariant mass spectra observed in experiments. To address this, we introduce Breit-Wigner resonances near 2900 MeV with J=1, near 3770 MeV with J=1, and near 3930 MeV with J=2 to account for the X_1(2900), ψ(3770) and χ_c2(3930) signals observed by the LHCb Collaboration. The amplitude model can be expressed as A(J)=a_J BW(M_ab) × T(Ω). Here a_J is a free parameter. The relativistic Breit-Wigner function BW(M_ab) is defined as BW(M_ab)=F_r F_D/(M_r^2-M_ab^2-i Γ_ab M_r), where F_r and F_D represent the Blatt-Weisskopf damping factors for the B meson and the resonance, respectively. M_r is the mass of the resonance, M_ab is the invariant mass, where ab denotes 12 or 23 depending on the system we choose, and Γ_ab is the mass-dependent width, which can be expressed as Γ_ab =Γ_r(p_ab/p_r)^(2J+1)(M_r/M_ab) F_r^2. p_ab =√((M_ab^2-m_a^2-m_b^2)^2-4 m_a^2 m_b^2)/(2 M_ab) , where Γ_r and J are the width and the spin of the resonance. The quantity p_ab is the momentum of either daughter in the ab rest frame, and p_r is the value of p_ab when M_ab=M_r. The exact expressions of the Blatt-Weisskopf factors <cit.> are given in Ref. <cit.>. The angular term T(Ω) is also given in Ref.
<cit.> and depends on the masses of the particles involved in the reaction as well as on the spin of the intermediate resonance. The parameters associated with the additional X_1(2900), ψ(3770) and χ_c2(3930) resonances are predetermined based on the experimental values <cit.>. The predetermined masses and widths are provided in Table <ref>. Additionally, the resonances resulting from the rescatterings correspond to poles in the complex energy plane. By adjusting the masses and widths to better match the invariant mass spectra, the values for the masses and widths of X_0(2900) and χ_c0(3930) are also determined and included in Table <ref>. Further discussion on these values will be provided later. Besides, a parameterized background contribution is introduced into the D^-K^+ invariant mass distribution from rescattering described in Sec. <ref>, which can be written as Â^bk(M_12)=e^d(iπ)c(M_12-M_min)^a(M_max-M_12)^b, where the parameters for the background are chosen as (a, b, c, d)=(4.0,0.5,5.6,0.5) to fit the experimental data. The total decay width incorporating all these contributions to the amplitude M(p_1,p_2,p_3) can be expressed as dΓ =(2π)^4/2M_B| M(p_1,p_2,p_3)|^2dΦ_3, where M_B are the mass of initial B^+ meson. In this work, the phase space dΦ_3 in Eq. (<ref>) is obtained using the GENEV code in FAWL, which employs the Monte Carlo method to generate events of the three-body final state. The phase space is defined as, R_3=(2 π)^5 d Φ_3=∏_i^3 d^3 p_i/2 E_iδ^4(∑_i^n p_i-P), where p_i and E_i provided by the code represent the momentum and energy of the final particle i. By simulating 5×10^5 events, the event distribution can be obtained, allowing for the visualization of the Dalitz plot and the invariant mass spectra with respect to m_D^-K^+ and m_D^+D^-. It is important to note that an overall scaling factor for the invariant mass spectra cannot be determined due to the absence of information on the total number of B^+ candidates, which was not provided by the LHCb collaboration. Therefore, it is necessary to scale the theoretical decay distribution to the experimental data to facilitate a comparison between theoretical predictions and experimental results. It is crucial to emphasize that this scaling factor renders only the relative values of the coupling constants c_1, c_2 and c_3 meaningful. Additionally, the cutoff parameter Λ_e is fine-tuned to best match the experimental data. Given that the interaction mechanisms differ between the D^*-K^*+-D^-K^+ rescattering and D_s^+D_s^–D^+D^- rescattering, the cutoff values Λ_e for two rescatterings are adjusted to 3.3 and 1.8 GeV, respectively, to ensure a good fit to the experimental data. §.§ D^-K^+ invariant mass spectrum and X_0,1(2900) As shown in Fig. <ref>, the LHCb data suggest an obvious resonance structure around 2900 MeV in the D^-K^+ invariant mass spectrum of the process B^+→D^+ D^- K^+. Such a structure is close to the D^*K^* threshold and was explained as D^*-K^*+ molecular states in the literature. As we can see from the blue dashed curve representing the contribution from D^*-K^*+-D^-K^+ rescattering in Fig. <ref>, an obvious peak is produced near 2885 MeV as expected, which can be related to the X_0(2900) resonance. However, a satisfactory fit of the LHCb's data cannot be obtained if the X_1(2900) is not included, as shown by the gray line in Fig. <ref>, because it seems narrow and located to the left compared to the experimental structure. 
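For orientation, the relativistic Breit-Wigner with mass-dependent width defined above can be sketched numerically as follows. The Blatt-Weisskopf factors are set to one for simplicity, and the resonance parameters are illustrative PDG-like values rather than the predetermined values of Table <ref>.

```python
# Minimal sketch of the relativistic Breit-Wigner with mass-dependent width
# defined above. F_r = F_D = 1 is assumed here; masses/widths are illustrative.
import numpy as np

def breakup_momentum(M, ma, mb):
    return np.sqrt((M**2 - ma**2 - mb**2)**2 - 4*ma**2*mb**2) / (2*M)

def bw(M, Mr, Gr, J, ma, mb, Fr=1.0, FD=1.0):
    p, pr = breakup_momentum(M, ma, mb), breakup_momentum(Mr, ma, mb)
    Gamma = Gr * (p/pr)**(2*J + 1) * (Mr/M) * Fr**2
    return Fr*FD / (Mr**2 - M**2 - 1j*Gamma*Mr)

mD = 1.86966                            # D^+ mass in GeV
M = np.linspace(3.75, 4.10, 500)        # D^+ D^- invariant mass window
lineshape = np.abs(bw(M, Mr=3.923, Gr=0.035, J=2, ma=mD, mb=mD))**2
print(M[np.argmax(lineshape)])          # peak close to the input resonance mass
```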
In the experimental article <cit.>, the structure observed near 2900 MeV is suggested to be formed by two states, X_0(2900) and X_1(2900), with X_0(2900) having a smaller mass and width. Thus, we introduce a Breit-Wigner resonance near 2900 MeV with J=1 to fit the X_1(2900) structure, which can be written as in Eqs. (<ref>)-(<ref>) with the predetermined mass and width listed in Table <ref>. A good description of the region near 2900 MeV is obtained after including the X_1(2900) contribution, together with reflections from the D^+D^- structures. The peak near 2900 MeV becomes wider and is in better agreement with the experimental result, as shown by the black curve. Thus, our result still supports the assumption of X_0(2900) as an S-wave D^*-K^*+ molecular state, and the X_1(2900) state is necessary to describe the structure near 2900 MeV as well. Two bumps can be observed around 3.0 to 3.5 GeV due to the contributions of the ψ(3770) and χ_c2(3930) states, which is consistent with the relevant data analysis of the LHCb Collaboration <cit.>. No clear peak structure but a background-like signal can be found from the orange dashed curve representing the χ_c0(3930) state, indicating that this state has no significant contribution to the D^-K^+ invariant mass spectrum in our model. §.§ D^+ D^- invariant mass spectrum and χ_c0(3930) In the D^+ D^- invariant mass spectrum of the process B^+→ D^+ D^- K^+, two prominent resonance structures are observed, around 3770 and 3930 MeV. In our previous study, we investigated the D_s^+ D_s^--D^+D^- rescattering within the D^+ D^- invariant mass spectrum to explore the origins of X(3960) and χ_c0(3930). Our findings indicated that the peak attributed to χ_c0(3930) was too narrow to fully explain the experimental structure around 3930 MeV <cit.>. Given that this structure is suggested to arise from two states, χ_c0(3930) and χ_c2(3930), it is reasonable to propose that χ_c2(3930) plays a crucial role in forming the resonance near 3930 MeV in the D^+ D^- invariant mass spectrum. Additionally, a distinct structure around 3770 MeV is identified as ψ(3770) with quantum numbers J^PC=1^-- in experimental reports. Therefore, in this study, Breit-Wigner resonances with J=2 near 3930 MeV and J=1 near 3770 MeV are introduced to model the χ_c2(3930) and ψ(3770) structures, respectively. These resonances are parameterized according to Eqs. (<ref>)-(<ref>) with the predetermined masses and widths listed in Table <ref>. The D^+D^- invariant mass spectrum for B^+→ D^+ D^- K^+ via the intermediate states χ_c0(3930), χ_c2(3930), ψ(3770), X_0(2900), and X_1(2900) is shown in Fig. <ref>. As shown by the orange dashed curve representing the contribution of the χ_c0(3930) state in Fig. <ref>, a significant but extremely narrow peak around 3930 MeV can be observed, which can be associated with the rescattering of the D_s^+ D_s^--D^+ D^- channels. After superimposing the Breit-Wigner resonance introduced to fit the χ_c2(3930) structure, the peak around 3930 MeV becomes wider, as the black curve shows, and the shape of the experimental structure can be fitted better than in our previous work. This result favors the assumption of the χ_c0(3930) as an S-wave D_s^+D_s^- molecular state, which, however, plays only a minor role in forming the structure around 3930 MeV in the D^+D^- invariant mass spectrum. Meanwhile, one can also find a peak near 3770 MeV, which is due to the Breit-Wigner resonance we introduced to fit the ψ(3770) structure.
We do not focus our attention on the structure in the higher energy region because the real mechanism (which is suggested by LHCb to include the contributions of the ψ(4040), ψ(4160), and ψ(4415), together with reflections from the D^-K^+ structures <cit.>) is much more complex than our model. No significant peak structure can be found in the blue dashed curve in Fig. <ref>, which indicates that X_0(2900) makes no particular contribution to the D^+D^- invariant mass spectrum. §.§ Dalitz Plot Above, the invariant mass spectra for the process B^+→D^+ D^- K^+ were presented. To provide a more explicit picture of this process, we present in Fig. <ref> the Dalitz plot in the invariant masses m_D^-K^+ and m_D^+D^- of the final particles in the molecular-state picture. When comparing our results in Fig. <ref> with the LHCb data (Figure 8 in Ref. <cit.>), we observe a striking similarity in the overall pattern. In the Dalitz plot, two prominent horizontal strips appear at m_D^+D^- around 3.77 and 3.93 GeV, indicating contributions from the ψ(3770) and χ_c0,2(3930) resonances, respectively. Additionally, a distinct vertical strip is visible at m_D^-K^+ around 2.90 GeV, which can be associated with the X_0,1(2900) states. The intermittency of these strips, observed in both our results and the experimental data from LHCb, provides additional insights. This pattern suggests that the resonances do not originate solely from scalar states, confirming the necessity of including the χ_c2(3930) and X_1(2900) states. § SUMMARY In this work, we studied the D^-K^+ and D^+D^- invariant mass spectra, as well as the Dalitz plot, for the B^+→D^+ D^- K^+ process, focusing on the D^*-K^*+-D^-K^+ and D_s^+D_s^--D^+D^- rescatterings. This analysis assumes that the resonances X_0(2900) and χ_c0(3930) are D^*-K^*+ and D_s^+D_s^- molecular states with spin parity 0^+, respectively. The theoretical results, calculated using the quasipotential Bethe-Salpeter equation approach, are compared with the experimental data from the LHCb Collaboration to understand the nature of X_0(2900) and χ_c0(3930). The D^-K^+ invariant mass spectrum for the decay process B^+→ D^+ D^- K^+ is examined. The D^*-K^*+ interaction produces a significant peak at 2886 MeV with a width of 62 MeV, identified as the X_0(2900) contribution suggested by LHCb. However, this peak is slightly sharper than the experimental structure. Since the structure is suggested to be formed by two states, X_0(2900) and X_1(2900), the latter is introduced into our model as a Breit-Wigner resonance near 2900 MeV with J=1. Including this state widens the structure, bringing the calculation into good agreement with the experimental D^-K^+ invariant mass spectrum. To describe the distribution in the higher energy region, contributions from ψ(3770) and χ_c2(3930) are needed. The D^+D^- invariant mass spectrum, which includes contributions from the ψ(3770), χ_c0(3930), and χ_c2(3930) states as suggested by LHCb, was also investigated. The χ_c0(3930), originating from the D_s^+D_s^- interaction, exhibits a very narrow peak and plays a smaller role in forming the experimentally observed peak near 3930 MeV compared to the χ_c2(3930), which is introduced as a Breit-Wigner resonance. Additionally, to simulate the ψ(3770) resonance, we introduced a Breit-Wigner resonance located at 3770 MeV with J=1. The event distribution in the D^+D^- invariant mass spectrum can be described by two sharp peaks corresponding to the ψ(3770) and the overlap of χ_c0(3930) and χ_c2(3930).
The distribution at higher energy regions requires contributions from more states, which is not the focus of the current work and therefore not considered. Additionally, the Dalitz plot in the m_D^-K^+ - m_D^+D^- plane for the process B^+→ D^+ D^- K^+ was presented. Obvious intermittent strips are observed at about 2.90 GeV at m_D^-K^+ and 3.77 and 3.93 GeV at m_D^+D^- in the Dalitz plot. Our calculations support the molecular interpretations for X_0(2900) and χ_c0(3930) states. However, it was found necessary to include both spin-0 and spin-1 states in the X_J(2900) region and both spin-0 and spin-2 states in the χ_cJ(3930) region. 10pt 23 LHCb:2020bls R. Aaij et al. [LHCb], “A model-independent study of resonant structure in B^+→ D^+D^-K^+ decays,” Phys. Rev. Lett. 125 (2020), 242001 LHCb:2020pxc R. Aaij et al. [LHCb], “Amplitude analysis of the B^+→ D^+D^-K^+ decay,” Phys. Rev. D 102 (2020), 112003 LHCb:2024vfz R. Aaij et al. [LHCb], “Observation of new charmonium(-like) states in B^+ → D^*± D^∓ K^+ decays,” arXiv:2406.03156 [hep-ex] Zhang:2020oze J. R. Zhang, “Open-charm tetraquark candidate: Note on X_0(2900),” Phys. Rev. D 103 (2021) no.5, 054019 Wang:2020xyc Z. G. Wang, “Analysis of the X_0(2900) as the scalar tetraquark state via the QCD sum rules,” Int. J. Mod. Phys. A 35 (2020) no.30, 2050187 He:2020jna X. G. He, W. Wang and R. Zhu, “Open-charm tetraquark X_c and open-bottom tetraquark X_b,” Eur. Phys. J. C 80 (2020) no.11, 1026 Wang:2020prk G. J. Wang, L. Meng, L. Y. Xiao, M. Oka and S. L. Zhu, “Mass spectrum and strong decays of tetraquark c̅s̅ qq states,” Eur. Phys. J. C 81 (2021) no.2, 188 Liu:2020orv X. H. Liu, M. J. Yan, H. W. Ke, G. Li and J. J. Xie, “Triangle singularity as the origin of X_0(2900) and X_1(2900) observed in B^+→ D^+ D^- K^+,” Eur. Phys. J. C 80 (2020) no.12, 1178 Burns:2020epm T. J. Burns and E. S. Swanson, “Kinematical cusp and resonance interpretations of the X(2900),” Phys. Lett. B 813 (2021), 136057 Molina:2020hde R. Molina and E. Oset, “Molecular picture for the X_0(2866) as a D^* K̅^* J^P=0^+ state and related 1^+,2^+ states,” Phys. Lett. B 811 (2020), 135870 [erratum: Phys. Lett. B 837 (2023), 137645] Chen:2020aos H. X. Chen, W. Chen, R. R. Dong and N. Su, “X_0(2900) and X_1(2900): Hadronic Molecules or Compact Tetraquarks,” Chin. Phys. Lett. 37 (2020) no.10, 101201 Agaev:2020nrc S. S. Agaev, K. Azizi and H. Sundu, “New scalar resonance X 0(2900) as a molecule: mass and width,” J. Phys. G 48 (2021) no.8, 085012 Mutuk:2020igv H. Mutuk, “Monte-Carlo based QCD sum rules analysis of X_0(2900) and X_1(2900),” J. Phys. G 48 (2021) no.5, 055007 Liu:2020nil M. Z. Liu, J. J. Xie and L. S. Geng, “X_0(2866) as a D^*K̅^* molecular state,” Phys. Rev. D 102 (2020) no.9, 091502 Xiao:2020ltm C. J. Xiao, D. Y. Chen, Y. B. Dong and G. W. Meng, “Study of the decays of S-wave D̅^∗ K^∗ hadronic molecules: The scalar X_0(2900) and its spin partners X_J(J=1,2),” Phys. Rev. D 103 (2021) no.3, 034004 He:2020btl J. He and D. Y. Chen, “Molecular picture for X_0(2900) and X_1(2900),” Chin. Phys. C 45 (2021) no.6, 063102 Kong:2021ohg S. Y. Kong, J. T. Zhu, D. Song and J. He, “Heavy-strange meson molecules and possible candidates D_s0^*(2317), D_s1(2460), and X_0(2900),” Phys. Rev. D 104 (2021) no.9, 094012 LHCb:2022aki R. Aaij et al. [LHCb], “Observation of a Resonant Structure near the Ds+Ds- Threshold in the B^+→ D_s^+D_s^-K^+ Decay,” Phys. Rev. Lett. 131 (2023) no.7, 071901 Bayar:2022dqa M. Bayar, A. Feijoo and E. 
Oset, “X(3960) seen in D_s^+D_s^- as the X(3930) state seen in D^+D^-,” Phys. Rev. D 107 (2023) no.3, 034007 Chen:2023eix Y. Chen, H. Chen, C. Meng, H. R. Qi and H. Q. Zheng, “On the nature of X(3960),” Eur. Phys. J. C 83 (2023) no.5, 381 Ding:2023yuo Z. m. Ding and J. He, “Combined analysis on nature of X(3960), χ_c0(3930), and X_0(4140),” Eur. Phys. J. C 83 (2023) no.9, 806 Braaten:2004fk E. Braaten, M. Kusunoki and S. Nussinov, Phys. Rev. Lett. 93 (2004), 162001 LHCb:2022dvn R. Aaij et al. [LHCb], “First observation of the B^+→ D_s^+D_s^-K^+ decay,” Phys. Rev. D 108 (2023), 034012 Bando:1984ej M. Bando, T. Kugo, S. Uehara, K. Yamawaki and T. Yanagida, “Is rho Meson a Dynamical Gauge Boson of Hidden Local Symmetry?,” Phys. Rev. Lett. 54 (1985), 1215 Bando:1987br M. Bando, T. Kugo and K. Yamawaki, “Nonlinear Realization and Hidden Local Symmetries,” Phys. Rept. 164 (1988), 217-314 Nagahiro:2008cv H. Nagahiro, L. Roca, A. Hosaka and E. Oset, “Hidden gauge formalism for the radiative decays of axial-vector mesons,” Phys. Rev. D 79 (2009), 014015 Cheng:1992xi H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin, T. M. Yan and H. L. Yu, “Chiral Lagrangians for radiative decays of heavy hadrons,” Phys. Rev. D 47 (1993), 1030-1042 Yan:1992gz T. M. Yan, H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin and H. L. Yu, “Heavy quark symmetry and chiral dynamics,” Phys. Rev. D 46 (1992), 1148-1164 [erratum: Phys. Rev. D 55 (1997), 5851] Wise:1992hn M. B. Wise, “Chiral perturbation theory for hadrons containing a heavy quark,” Phys. Rev. D 45 (1992) no.7, R2188 Burdman:1992gh G. Burdman and J. F. Donoghue, “Union of chiral and heavy quark symmetries,” Phys. Lett. B 280 (1992), 287-291 Casalbuoni:1996pg R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio and G. Nardulli, “Phenomenology of heavy meson chiral Lagrangians,” Phys. Rept. 281 (1997), 145-238 Falk:1992cx A. F. Falk and M. E. Luke, “Strong decays of excited heavy mesons in chiral perturbation theory,” Phys. Lett. B 292 (1992), 119-127 Isola:2003fh C. Isola, M. Ladisa, G. Nardulli and P. Santorelli, “Charming penguins in B→ K^* π, K(ρ, ω, ϕ) decays,” Phys. Rev. D 68 (2003), 114001 Liu:2009qhy X. Liu, Z. G. Luo, Y. R. Liu and S. L. Zhu, “X(3872) and Other Possible Heavy Molecular States,” Eur. Phys. J. C 61 (2009), 411-428 Chen:2019asm R. Chen, Z. F. Sun, X. Liu and S. L. Zhu, “Strong LHCb evidence supporting the existence of the hidden-charm molecular pentaquarks,” Phys. Rev. D 100 (2019) no.1, 011502 Oh:2000qr Y. s. Oh, T. Song and S. H. Lee, “J/ψ absorption by π and ρ mesons in meson exchange model with anomalous parity interactions,” Phys. Rev. C 63 (2001), 034901 He:2015mja J. He, “The Z_c(3900) as a resonance from the DD̅^* interaction,” Phys. Rev. D 92 (2015) no.3, 034004 Gross:2008ps F. Gross and A. Stadler, “Covariant spectator theory of np scattering: Phase shifts obtained from precision fits to data below 350-MeV,” Phys. Rev. C 78 (2008), 014005 He:2014nya J. He, “Study of the BB̅^*/DD̅^* bound states in a Bethe-Salpeter approach,” Phys. Rev. D 90 (2014) no.7, 076008 He:2017lhy J. He and D. Y. Chen, “Z_c(3900)/Z_c(3885) as a virtual state from π J/ψ-D̅^*D interaction,” Eur. Phys. J. C 78 (2018) no.2, 94 He:2015yva J. He, “Internal structures of the nucleon resonances N(1875) and N(2120),” Phys. Rev. C 91 (2015) no.1, 018201 He:2017aps J. He, “Nucleon resonances N(1875) and N(2100) as strange partners of LHCb pentaquarks,” Phys. Rev. D 95 (2017) no.7, 074031 Blatt:1952ije J. M. Blatt and V. F. 
Weisskopf, “Theoretical nuclear physics,” Springer, 1952, ISBN 978-0-471-08019-0 BaBar:2010wqe P. del Amo Sanchez et al. [BaBar], “Dalitz plot analysis of D_s^+ → K^+ K^- π^+,” Phys. Rev. D 83 (2011), 052001
http://arxiv.org/abs/2407.12968v1
20240717192548
Multi-Platform Framing Analysis: A Case Study of Kristiansand Quran Burning
[ "Anna-Katharina Jung", "Gautam Kishore Shahi", "Jennifer Fromm", "Kari Anne Røysland", "Kim Henrik Gronert" ]
cs.SI
[ "cs.SI" ]
Jung et al. University of Duisburg-Essen, Duisburg, Germany gautam.shahi@uni-due.deUniversity of Agder, Kristiansand, Norway Kristiansand Kommune, Kristiansand, Norway Multi-Platform Framing Analysis: A Case Study of Kristiansand Quran Burning Anna-Katharina Jung1 0000-0002-0905-4932 Gautam Kishore Shahi1 0000-0001-6168-0132 Jennifer Fromm1 Kari Anne Røysland2 Kim Henrik Gronert3 July 22, 2024 ====================================================================================================================================================== § ABSTRACT The framing of events in various media and discourse spaces is crucial in the era of misinformation and polarization. Many studies, however, are limited to specific media or networks, disregarding the importance of cross-platform diffusion. This study overcomes that limitation by conducting a multi-platform framing analysis on Twitter, YouTube, and traditional media analyzing the 2019 Koran burning in Kristiansand, Norway. It examines media and policy frames and uncovers network connections through shared URLs. The findings show that online news emphasizes the incident's legality, while social media focuses on its morality, with harsh hate speech prevalent in YouTube comments. Additionally, YouTube is identified as the most self-contained community, whereas Twitter is the most open to external inputs. § INTRODUCTION The appearance and rise of social media platforms restructured and diversified the process of information diffusion. While priorly, the dissemination of information was limited to traditional media outlets managed by gatekeeping journalists, nowadays, information can be produced and shared by everyone with online access <cit.>. As a result of this development, journalists have not only lost their major influence on the dissemination of content but also the sovereignty of interpretation. The classification of content is in the hands of every single online actor.The process of highlighting certain elements of a piece of content and promoting a particular understanding and interpretation is known as the concept of framing <cit.>. Although the media landscape is diverse, previous research on framing and information diffusion often focused only on one specific social media platform or traditional news outlets <cit.>. With our study, we aim to advance multi-platform framing of societal incidents. The case of the Quran burning in Kristiansand (Norway) is used as an example in our study to demonstrate the application and diffusion of media frames throughout different types of online media platforms. On 16th November 2019, Lars Thorsen - the leader of the Norwegian Anti-Islam group Stop the Islamification of Norway(SIAN), attempted to burn the Quran on the main square of Kristiansand. Several persons attacked Lars Thorsen to stop him from burning the Quran and were arrested by the police. About 300 people witnessed the incident, and shortly afterwards, videos of the Quran burning and the attacks circulated on the net. The incident was heavily discussed in both online news sources and social media. While many members of the Muslim community described the attackers as defenders of Islam, other actors rather expressed anti-Islamist views. These different media frames resulted in tensions within Norwegian society and considerable problems for the Norwegian government. 
We aim to answer the following research questions: RQ1: How was the Quran burning incident in Kristiansand framed on different social media platforms (Twitter (now X), YouTube) and on online news sites? RQ2: How is the diffusion between the different social media platforms and online news sites shaped? For this multi-platform framing analysis, we analyzed 1,136 tweets, 71 YouTube videos with 2031 comments, and 109 articles from online news sources. We distinguish between social media that allows any user to create and share content and online news sites where editorial teams retain control over the publication of articles and associated comments <cit.>. It should be noted that online news sites differ in the extent to which they follow the ethical code of practice for the press adopted by the Norwegian Press Association. The study is organized as follows. First, a short insight into the development and methodical approaches to the framing theory are presented, followed by a concise overview of multi-platform framing and cross-platform diffusion. Afterwards, the methodical approach is presented. Summarizing the case study, the approach to data collection and cleaning, such as the description of the coding procedure and code book. This is followed by the presentation of the results, giving an overview about the distribution of frames on the different platforms, such as the URL analysis. In the discussion section, connections to the state of the art presented and prior research are drawn. This discussion section is followed by a short overview about the limitations of the study and the ideas for future research. The study is rounded off by the presentation of the conclusion in which the main findings and the answers to the research questions are summarized. § STATE OF THE ART In this section, we describe framing theory, multi-platform framing and Cross platform information diffusion. §.§ Framing theory The theory of framing was originally created in the field of sociology in the 1950s and has been refined ever since by various disciplines ranging from psychology to political and media studies <cit.>. As well in the more technical field of information systems, the framing theory has been adapted to approach the analysis of stakeholder perspectives in a technological context, or to analyze communication in online environments <cit.> One of the most comprehensive definitions of framing was provided by the political scientist Robert Entman: “Framing essentially involves selection and salience. To frame is to select some aspects of a perceived reality and make them more salient in a communication text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation and/or treatment recommendation for the item described” ((<cit.>, p. 52). Thus, framing looks not only at the issues and topics covered by the media but also at the angles being taken. As diverse as the disciplines that adapted the framing theory are the methodological approaches that have been developed for its analysis. The operationalization of frames for empirical studies is complex and challenging, among other things, due to the huge amount of data that needs to be annotated (manually) and the influence of personal interpretation of media and its frames <cit.>. 
In prior research, authors divide the existing methodological approaches to frame analysis in deductive approaches (based on priorly formulated frames) and inductive approaches (based on frames derived from the specific data set in focus) <cit.>. While studies solely based on deductive frame categories are very systematic and can be easily replicated, they have the shortcoming that they might ignore important information within the data set if it does not fit the priorly defined frames <cit.>. In contrast to that, inductive approaches use the data and its context as a basis for the creation of frames and thus are much more sensitive to the peculiarities of specific research cases <cit.>. In our study, we applied a mixture of deductive and inductive approaches to the framing analysis. As a basis, we used the codebook for analysis of media frames within and across policy issues by <cit.> and modified it according to the cultural and thematic context. The codebook will be explained in more detail in the method section. §.§ Multi-platform framing While journalists and traditional media houses still have a huge impact on the shaping of public debates, the influence of online users not bound to news values and reporting standards has increased with the rise of social networks and hybrid media systems <cit.>. This development is described as networked gatekeeping and networked framing and was defined by <cit.> as the involvement of a diversity of online actors from many different backgrounds, including journalists, activists, non-elite media supporters and regular users, who gained prominence and attention in their network by effective communicative and social practices to spread their messages. Networked framing, like networked gatekeeping, stands for the prominence interpretations received via crowdsourcing actions <cit.>. Although <cit.> value the difference between gatekeeping and framing in social networks within their own research on the revolution in Egypt, they fully focus on the discourse on Twitter and thus miss out on giving an overview of the process on multiple platforms. As well studies incorporating the idea of networked publics and framing mainly limited their scope to one platform, as for example <cit.> who mapped the discussion on the Iranian presidential elections 2017 on Twitter <cit.>, depicting the different groups of online elites discussing the subject or previous research <cit.> which analyzed the discourse on migration during the Canadian elections 2019. Another recent example is the analysis of <cit.> who analyzed the hijacking and reframing of the MeToo hashtag by right-wing actors in a multi-national Twitter analysis. While <cit.> do involve different languages and thus different national communities on Twitter, they did disregard other media outlets than Twitter. One of the very few studies which pays attention to the research gap of multi-platform framing is <cit.>, which analyzed the discourse about the refugee crisis in the Finish online news media and social media <cit.>. In contrast to the present study, a computational topic and framing detection were applied, and a latent Dirichlet allocation algorithm was used. While the scope of our study is similar, the methodical approaches differ from each other. §.§ Cross platform information diffusion The sharing and presentation of frames within one network and across platform boarders can be as well described as a form of information diffusion. 
Information diffusion can be defined “as the process by which a piece of information (knowledge) is spread and reaches individuals through interactions” <cit.>. On social media, this process depends on individuals who spread information through retweets, shares and likes. The information created by verified accounts spreads faster than non-verified sources <cit.>. One stream of information diffusion research takes a micro perspective and aims to understand why individual users distribute information <cit.>. For example, authors in prior research <cit.> found that learning and social engagement are the most important motivations for content sharing <cit.>. Another stream of research examines the phenomenon from a macro perspective, focusing on predicting how information spreads through a social network. These studies often aim to assess and improve information diffusion models such as information cascade or threshold models <cit.>. To understand information diffusion, scholars highlighted the importance of analyzing the interplay of different actor and content characteristics <cit.>. Actor-centered studies demonstrated the power of opinion leaders in the information diffusion process <cit.> and distinguish between different roles such as information starters, information amplifiers, and information transmitters <cit.>. Other studies rather highlighted the impact of content characteristics such as emotionality <cit.> or the attachment of images and videos <cit.>. Notably, most previous studies focused on a single platform such as Twitter, thereby neglecting the reality of information diffusion, as is also the case for networked framing. In this regard, <cit.> already argued that social media users can share content from Facebook to Twitter and vice versa, pointing toward the vanishing boundaries between different social media platforms. A study proposed that sharing viral videos on alternative platforms might affect their popularity on the original platform <cit.>. Furthermore, scholars found a significant influence of mass media and external websites on information diffusion within a social media platform <cit.>. This phenomenon is also known as the spill-over effect among communication scientists <cit.>. The detection of spill-over effects represents a methodological challenge as it is difficult to find the origin of information in the online media sphere <cit.>. The authors suggested using a crawler to identify URLs linking to other online information sources. Jung et al. <cit.> built upon this methodological approach and demonstrated in a case study that online news sites referenced more frequently to information from Twitter than vice versa. With our study, we aim to extend the scarce research body on multi-platform networked framing and cross-platform diffusion by examining the occurrence and diffusion of frames within and across social media and online media. § RESEARCH DESIGN In this section, we explain the steps involved in performing the study. They are discussed below. §.§ Data Collection In this section, we explain the data collection from different platforms. For a holistic picture of the online discourse, Twitter and YouTube, as well as Norwegian newspapers, were chosen. While Twitter is known for (textual) breaking news content, YouTube is the most prominent platform for video content. Therefore, these two social networks were considered especially useful for this study. The time frame for the data collection was 12th November 2019 until 30th November 2019. 
A detailed explanation of each platform is given below. For all platforms, we decided that the keyword SIAN, as the organizer of the event, such as the keywords Arne Tumyr and Lars Thorsen, as the involved SIAN leaders, were relevant. Furthermore, the keyword koranbrenning was chosen as it was the most prevalent term to describe the event in the Norwegian news. Finally Kristiansand was chosen as a keyword, as the incident’s location. The results of the data collection confirmed that the keywords delivered relevant results, wherefore the keyword selection was not further adjusted. §.§.§ Twitter Data The Twitter data was gathered with a self-developed Python crawler, which connects to the Twitter Search Application Programming Interface (API) before the commercialization of Twitter data) and collects tweets using keywords SIAN, Arne Tumyr, Lars Thorsen, koranbrenning. The tool collected all Norwegian tweets, retweets, and replies that contained at least one of these keywords and were published from 12th November 2019 until 30th November 2019. We manually checked the search results for relevance and excluded 57 tweets that were not related to the Quran burning in Kristiansand, excluding tweets about another Quran burning that happened in Sweden or tweets about other activities of SIAN, Arne Tumyr, and Lars Thorsen. The final dataset included 2,267 tweets consisting of 865 original tweets, 1,131 retweets, 224 replies, and 47 commented retweets. The 1,131 retweets were excluded from analysis and treated as duplicates as they did not add new frames to the Twitter discourse. The final Twitter data set contained 1,136 tweets. §.§.§ News Paper Articles For tracking the news media, we used commercial software by M-brain (now Valona[https://valonaintelligence.com/]) to monitor media, ensuring comprehensive news coverage. It provided us with openly accessible content and those behind online paywalls. We used the search terms "koranbrenning" and "Kristiansand" to browse the content from the news media. M-brain searched all Norwegian media outlets and returned the articles which match the search terms. From the obtained results, we further filtered the news articles which contained any of the search terms "Lars Thorsen", "Arne Tumyr", or "SIAN". The final news media data set included 115 news articles that had been published between 12th November 2019 and 30th November 2019. After duplicates were deleted, 109 articles remained for the analysis. The data set included the article heading, URL to the news article, and date of publication, and we crawled the content of the news article. The media outlets can be categorized into three groups: State-owned and mainstream media, Online and Independent Media and Special Interest and Non-News Platforms. In addition, we also added to this classification who owned the media outlet and their affiliation to the Norwegian Press Organization, such additional notes if applicable. For a detailed overview please refer to table 2 in the Appendix. §.§.§ YouTube Videos and Comments For the YouTube analysis, we collected videos and comments related to the Quran Burning incident. To identify the relevant videos, we used the YouTube keyword search and conducted four different manual searches of the keywords “SIAN Norway,” “Arne Tumyr,” “Lars Thorsen,” and “Koranbrenning.” The relevancy was assessed by reading the title and description and watching the video. Some videos were excluded, for example, when the videos were related to a different Quran burning incident. 
Videos that occurred in multiple searches because they included several keywords were included only once. Overall, we excluded five duplicate videos. We identified 112 relevant videos in total. Out of them, 71 videos had comments. We crawled all 8,917 comments related to those videos. As the total number of comments was too extensive for a manual assessment, we applied a sampling approach to all videos with more than 100 comments. Ten videos had more than 100 comments, and 62 videos had less than 100 comments. For the sampling, we took all comments from the videos with less than 100 comments, and for videos that had more than 100 comments, we sampled 100 comments from each video. Overall, we got 2,041 comments from 71 videos in total. This approach provided us with a diverse data set of all comments from all videos. In contrast to the newspaper articles and tweets regarding the Collection of YouTube videos and comments, there was no language restriction to Norwegian and English, but all videos and comments have been analysed with help of the subtitles provided by YouTube and translation software whenever possible. §.§ Data Analysis and Preprocessing In this section, we describe the steps involved in the data preprocessing and analysis. We have applied a series of steps to clean our data set. We filtered the URLs from the text of tweets, YouTube comments, and news articles for data cleaning following the approach mentioned in <cit.>. After that, we manually identified the domain of the URLs to identify the link target platforms (i.e., Twitter, YouTube, online news sources, and others). The category others included links leading to other social media and online news sources that we have not included in our sample (e.g., Facebook, Instagram, religious websites). §.§ Frame Analysis As the Quran Burning incident in Kristiansand can be classified as a politically motivated event, we decided to use an existing codebook for the analysis of policy frames by <cit.>. The codebook has been developed and validated with a pilot study covering three major events in the US. We used the most recent version of the codebook, which was updated in 2016. The codebook was developed for the analysis of policy debates in the United States and consists of 15 frame dimensions. The 15 frame dimensions have been adapted according to the Norwegian context and the context of the Quran Burning incident. Each dimension was equipped with a short paragraph about the relevance of the Quran Burning case, possible keywords, and examples, which can be found in Table 1 in Appendix. As the data from Twitter, YouTube and newspaper articles confronted us with different conditions the codebook was slightly adapted for each medium, which will be explained along the description of the coding process. As a first step of the framing analysis, the coders intensively studied the codebook and then added relevant paragraphs, keywords, and examples. In the next step the coders have been provided with the data sets and their English translations. After reading the text, the coders needed to decide if the translation was understandable or if there was further clarification needed. In case there was a better translation of Norwegian data required, the Norwegian team members were involved. If the translation was understandable, coders needed to decide if the information was relevant for the respective case study. 
To understand the YouTube videos, the subtitle function was used if available, and the title and description were translated with the help of Google Translate. Afterwards, the relevance of the data was evaluated. The first relevant criterion was whether the tweet, video or comment was about or in connection with the Quran burning in Kristiansand. If this was not the case, it was not deemed to be relevant. Exceptions were made if the covered international incident was a reaction to the Quran Burning in Kristiansand. Messages which only contained an emoticon or random strings of characters were marked as irrelevant. The tweets of the Twitter data set have been sorted according to the respective tweet ID in order to identify the conversational threads to which they belonged. Each tweet, YouTube video, YouTube comment, or newspaper article received one primary frame. The frame could vary from the original tweet in case of commented retweets and replies. In case of more complex messages, especially regarding newspaper articles a secondary frame was chosen if it was not possible to reduce the main messages of the text to one primary frame. Each data set was coded by two coders. First of all, 20% of each data set was coded independently to check the intercoder reliability of each coding. The intercoder reliability was calculated in the form of Cohen’s Kappa coefficient which can be found in Table 4 of Appendix. If the result was satisfying, the coding process was continued. In case of deviations in the first coding round, a third coder was involved, and the majority rule has been applied. The rest of the data sets were divided between the first two coders. § RESULT In this section, result obtained from different analysis is provided. §.§ Framing Analysis With help of the code book we identified the different frames, which have been used in the discourse and media coverage about the Quran burning incident in Kristiansand. We have seen a distinct usage of frames in news articles, tweets and YouTube videos and comments. A detailed description of annotated frames is given in Table 3 in the Appendix. For the news articles, primary frames and secondary frames have been applied regularly, as the longer texts often did not allow being reduced to one single frame. For the coding of the YouTube comments the frame dimension "None" was applied more frequently than in comparison to the other media types. By having a closer look at this frame we realized that this was the case because many of the comments could not be translated for analysis and the automated translation had reached its limits. In the 109 coded news articles, the frame dimension Legality, Constitutionality & Jurisdiction was most frequently used, with 50 (45.9 %) references. That means that most of the news articles had legal issues as their main subject in the form of references to freedom of speech or constitutional issues. Very prevalent was the discussion if it was legal, according to the Norwegian constitution, to burn the Quran. One example of the use of the legality frame dimension is an article by the online news outlet document.no, with the title “It should not be allowed to burn holy books”. The article discussed if the decision by the Police Directorate to stop SIAN from burning a copy of the Quran, as a violation of section 185 of the Penal Code on the prohibition of hate speech, was well grounded. The second most applied frame dimension in the news article data set was the External Regulation and Reputation frame with 12%. 
The incident in Kristiansand led to direct reactions in the Muslim world, especially Pakistan. The articles in which this frame was used often discussed the implications of the Quran burning for the Norwegian telephone provider Telenor, which owns more than a quarter of the Pakistani cellular market [38]. The public service broadcaster NRK published, for example, an article with the title “Call for boycott of Norwegian companies after Quran burning”. The article described the reactions in the Pakistani community and media landscape to boycott Norwegian products and especially the Pakistani branch of the Norwegian telephone provider Telenor. In addition to that, articles with the External Regulation and Reputation frame described the political reactions to the incident in the form of an appointment of the Norwegian ambassador in Pakistan for a statement on the case by Prime Minister Imran Khan and the Ministry of Foreign Affairs. An example for that is the article “Norway is called on the carpet” (the Norwegian proverb to call someone to the carpet, can be translated with “to scold someone”) by the Norwegian daily newspaper Dagbladet. It discussed that the Pakistani Ministry of Foreign Affairs scolded Norway’s ambassador and expressed concern that a Quran was set on fire in Kristiansand. The frame dimension Security and Defense was with 10.1% also applied regularly in the news article data set. The articles, which contained this primary frame, discussed extensively that the Quran burning incident resulted in a concrete security threat for the Norwegian state. One outlet which used this frame is e.. the news site resett.no, which is a controversial online news site in Norway, known for its skepticism regarding immigration and Islam. Resett.no is not part of the Norwegian Press Organization due to the fact that the organization has deemed that they do not adhere to the Norwegian Press Codex. The article of resett.no using the Security and Defense frame has the title: “Threatened to kill after Quran burning: Kill him please!” It discussed that the burning of the Quran has created violent reactions both on social media, but also from Muslim communities in Norway. The article stated that several Muslim leaders in Agder warned that they will now report SIAN to the police. Besides that, the Security and Defense frame was applied in articles, which reported about protest actions, which were triggered by the incident. One example for that is the article “The Norwegian flag is burned in protest against the burning of the Quran ” by the local newspaper Fedrelandsvennen, which reported that the Norwegian flag was burned during demonstrations in different Muslim countries among others in the state of Karachi in Pakistan. The authors interpreted these incidents as a sign of a concrete threat against the Norwegian state to suffer an Islamist-motivated attack. Due to the complexity of the newspaper articles in several cases a secondary frame was chosen. The most common secondary frame was again Legality, Constitutionality & Jurisdiction with (32%), which underlines the importance of the legality frame in the news article data set. Besides that as well the Morality and Ethics (18%), Political Factors and Implications (16%) and Security and Defense (14%) frames played an important role as secondary frames. The article “Koran burning, blasphemy, freedom of speech and hate speech” is an example of an article with a primary and a secondary frame. 
The primary frame is Legality, Constitutionality & Jurisdiction and the secondary frame is Morality and Ethics. The article discusses both the legal interpretation of the incident and its morality, following the stance that not everything which is legal is as well morally defensible. §.§ Tweets Of the 1,136 analyzed tweets, 47 tweets were coded as irrelevant, which are 4% of the data set. Among the relevant tweets, the Morality and Ethics frame was with 28.1% the top frame. An anchor example for this category is: “I am not a true follower, but what if the Bible was burned? Would antifa violence responders react? I doubt.”. In addition to the Morality and Ethics frame the Political Factors and Implications frame dimension was used in 19.2% of the sample. There have been a lot of comments after the event which underlined a political motivation. Both lay people and politicians used the burning of the Quran to voice political views, coming from different political camps. Several accounts pointed out that they have the impression that SIAN received more attention, that it deserved according to their overall importance for the Norwegian political landscape. One anchor example for that is: “Given that the only Norwegian party of significance that is close to SIAN has just made a historically poor municipal election, SIAN should not imagine that they have any significant support.” The accounts, who advocated for SIAN often pointed out that SIAN is a relevant political group. In the following example SIAN is presented as a moderate, non violent group in comparison to political groups of the left political spectrum: “Then you should stop using the word extremist about SIAN. They have never resorted to violence, which your dear Communists in Red and Antifa constantly do”. Besides that, only the Legality frame reached a relevant number of tweets with 13.6%. Two examples for the usage of the Legality, Constitutionality & Jurisdiction frame are: “Penal Code §185 used against burning of the Quran? Seems like the law stretches quite a bit.” and “Norwegian scandal: Freedom of speech is something we play. The police had received a secret illegal order to stop "violation of the Qur’an" when the SIAN (Stand Islamization of Norway) demonstration in Kristiansand was brutally interrupted by local police.” For annoated tweets without none frame, around 16% (191 tweets) of all tweets referred to external media sources. 55 of those tweets have been published by media organizations themselves, the others were shared by mainly individual accounts and some organizations. As these tweets neutrally shared media links, with short article snippets instead the content of the shared media content was coded for the analysis. All other frames covered only in between 0.2% and 7.3% of the data set and are thus not described in more detail. A second frame added to less than 5% of the tweets and can thus be ignored. §.§ YouTube videos All YouTube videos that only showed the incident as a whole without commenting on it were coded as None if there was no frame included either in the title or description of the video. The Morality and Ethics frame was the most prevalent frame used in the YouTube videos, with 36.6%. However, a minor number of videos applied to the External Regulations & Reputation frame, which together account for 9.9% of the data set. The video in Figure 1 (a) was uploaded to YouTube by channel JHUNJHUNU PRIME TIME, joined YouTube on the 7th of November 2019, only a few days prior to the incident. 
Even if the account is supposed to give the impression of an official station due to the name and the chosen avatar, it can be assumed that it is not an official source but a channel run by a private person. Another indicator for that is that the channel description is incomplete. Another example for the use of the morality frame is (b) in Figure 1 a video of the leader of the Norwegian PEGIDA (Patriotic Europeans against the Islamization of the Occident). In his video with the title “SIAN’s Koran burning in Kristiansand; what does the quran say?“, which he prepared in reaction to the SIAN burning of the Quran in Kristiansand, he presents his interpretation of the five pillars of the Quran. This video is a good anchor example for videos which have been created by sympathizers of SIAN. The videos, which belonged to the External Regulation and Reputation frame, often showed and described protests, which came up in Muslim countries, especially Pakistan, after the incident. An anchor example can be the following video. This video (c) in Figure 1 was posted by the account Pinpoint Pakistan, which has a relatively professional appearance, such as JHUNJHUNU PRIME TIME. The channel contains more information about its scope and gives a contact email for questions. However, there is only a YouTube and Facebook Page with the name of this organization and no official media house under this name, which as well gives the impression that it is a media channel run by private persons. It needs to be doubted that the video thus has been produced by the channel owners themselves. §.§ YouTube comments Coding the YouTube comments, we used fewer frames than the other channels, pointing as well at the limitations of using a predefined codebook. The frames that were most used were Morality and Ethics, None and Other. The reason why Morality and Ethics was the most frequently used, was the reference to religion, e.g.: “Long live Allah.” Often the morality frame was also used to express beliefs in a hateful manner: “I pissed in koran and ur momz!”. In the frame Other we have put comments that did not fit in any other category, but were still relevant. Often there was shortly formulated support or dislike formulated in a very short manner like “Good” or “Nice” or “lionhearted” or from the SIAN supporters: “Thanks Lars”. Some have also been hate speech like: “Look at the red pig he looks like”. The reason why so many comments were coded as none, was because the content either did not make sense, or we did not understand the meaning of it. While some meaning might have been lost in translation, there have been comments including random combinations of letters. We also saw 5.2% of comments about Norway’s reputation, so we coded these External Regulation and Reputation. Examples are “Fuck Norway, boycott Norway”, “Norwegians are bastards” and “Shame on Norway”. This could have harmed Norway’s reputation and were often as well in a hate speech manner. §.§ URL Analysis We coded 888 tweets, 27 YouTube comments, and 68 news articles during the manual framing analysis, including reference hyperlinks from one of the other platforms. However, some tweets, comments, and articles included multiple URLs. Summing up, those that included more than one URL led to the total number of 988 URLs from tweets, 27 from YouTube comments, and 456 URLs from news articles from the above spillover links. 
If there were multiple URLs from one tweet, YouTube comment, or newspaper article to another platform, we split it into different URLs, so only a link is associated with each source and the target node. We further analyzed the domain of the URLs to get information about the spillover of hyperlinks. To identify cross-platform diffusion and spillover effects, we categorize the URLs into four categories based on domains: Twitter, YouTube, News Media, and others (which include other domains apart from the above three, like social media platforms Facebook and Instagram.). Figure 2 describes the number of different URLs shared from one platform to another. The figure shows the diffusion of URLs from each platform to four different categories: Twitter to news media, YouTube, Twitter and others. Overall, news articles contain multiple URLS within the articles, mainly referring to other news articles. On Twitter the tweet authors as well use references within the own newtork boundaries by referencing tweets, but as well share URLS of news media and YouTube. In contrast YouTube contains limited URLS, mainly staying within own network boundaries. Within our collected datasets, we analysed the spillover of URLs among news articles, Twitter and YouTube. We matched URLs of news articles to Twitter and YouTube. A spillover of URLs from news article to another is observed mainly on Twitter; around 26% unique news articles are posted 171 times on Twitter from domains such as utrop.no, resett.no, gjenstridig.no, nrk.no, dagladet.no, document.no, afternposten,vg.no, rights.no, nrk.no. At the same time, only two news articles are shared on YouTube comments, which are not explicitly from our datasets. For cross-platform, 17 YouTube videos are shared on Twitter. In contrast, among YouTube comments, only a few URLs are mentioned in comments, mainly on other YouTube videos and some other social media platforms such as Facebook and VK. § DISCUSSION This study offers insights about two main topics: differences of multi platform framing of a social incident on social media (Twitter and YouTube) and online news sites (RQ1) and cross-platform diffusion of topics and frames (RQ2). The content analysis of the online news site articles, tweets, and YouTube videos and comments resulted in a distinctive application of frame dimensions on the different platforms. Of the 14 main frame dimensions adapted from Boydstun et al. <cit.>, only four frame dimensions reached a threshold of at least 10% in one of the data sets, namely Legality, Morality, External relations, Political factors and the two dimensions None and Other. While the legality frame dominated the discourse on online news sites, the morality frame dominated the two social media data sets. We argue that the high dominance of the morality frame in the social media data sets is related to the higher level of personal communication on social media platforms, which allows each individual and organization to contribute to the discussion <cit.>. According to <cit.> moral intuition forms the very basis of any normative evaluation <cit.>. It is an unfiltered and non-reflected reaction if an incident should be classified as right or wrong <cit.>. We argue that the use of the morality and ethics frame might be more prevalent in social media as individual users are more likely to express their moral intuition than (professional) journalists. 
Further, we considered international sources in the YouTube data set too, wherefore here we also see reactions from people living in Muslim countries like Pakistan, feeling morally offended. The harshest formulation of personal opinions was present in the analyzed YouTube comments, which involved only the frame dimensions of morality and other. The other category was used particularly frequently for YouTube comments, including hate speech which could not be assigned to any of the existing frame dimensions. Furthermore, YouTube comments showed the highest level of polarization, particularly within the morality frame. This polarization is reflected by potential echo chambers of the Muslim supporters and right-wing supporters, predominantly encountering viewpoints that reinforce their existing beliefs, which might intensify polarization. The dominance of the morality frame, which provokes strong emotional responses, contributes to this phenomenon. Polarization can have broader societal impacts, including increased social tensions and greater distance between different groups or countries. The problem of misinformation, which is linked to highly polarized discourse environments, extends beyond false information to the intensive diffusion of specific frames and narratives that shape public beliefs and perceptions <cit.>. According to Starbird, these frames influence the evidence used in sensemaking processes and guide interpretations and the formation of public opinion. Our results suggest that the frames and narratives surrounding the Quran burning event may overshadow existing evidence, exacerbating societal and political polarization. However, the analysis of the Twitter comments revealed as well the limitations regarding the application of Boydstun’s codebook on YouTube comments. In further research, a more specific codebook should be developed for this medium and discourse space. The fact that that polarization was less prevalent on Twitter could potentially be explained by the different user groups of the platforms, or differently strict or successful countermeasures of the platforms regarding hate speech <cit.>. The deletion of hate speech should be practiced more strongly by YouTube. Not only at the video level but, above all, in the comments, where AI-supported approaches may be used. The perspective in the YouTube comments was very unified and contained mainly narratives like the attackers of SIAN are heroes, Islam is a religion that should be praised, and Norway should be reproached for not respecting and protecting Islam. We argue that due to the language barriers, as most of these comments have been originally published in Urdu or Arabic, there was little variation and counterspeech of different opinion camps. Research has shown that users do interact barely with textual posts in foreign languages <cit.>. While passive interactions in the form of likes happen, commenting takes place very rarely [43]. Therefore, we assume that Norwegian or English-speaking YouTube users have not engaged and reacted with the comments in Arabic or Urdu, which explains the non-existent counter speech. It also explains a different application and diffusion of frames within different user/language groups and shows that the idea of networked publics and the networked framing has certain boundaries <cit.>. 
However, the traditional media act as a bridge actor here, presenting the incidents of hatred from social media in other countries such as official reactions from countries such as Pakistan (External relations frame), which underlines the role of traditional news media in cross platform frame diffusion. In contrast to YouTube and the online news sources the discourse on Twitter was the most diverse according to the variety of applied frames, although the threshold of 10% was not exceeded for all frame dimensions. It indicates that Twitter might represent a more diverse discourse space compared to the other platforms. However, this should always be considered with the caveat that the topic itself may also have influenced how it was discussed on which platform. For further generalizations future studies are indispensable. Based on the Kristiansand incident, we perceived it as unexpected that meta-discourses under the frames of Fairness and Equality or Cultural Identity did not play a serious role in the data set. There was also no difference in the online news sources, which offer more space for societal discourses. Instead of an overall societal discussion about the role of Islam and Muslim groups in Norwegian society, the discussion was more focused on the personal moral classification of the conflict, or in online news sources on the legality of the committed act. This points to a potential question for future research to which extent social networks allow and foster meta-discourses or whether these are lost in the mass of personal statements. After analyzing the distribution of the different frames among the three analyzed platforms (Twitter, YouTube and online news sites), the analysis of URLs used in the data set was implemented to answer RQ2. First, we categorized the links originating from a platform according to their platform type: Twitter, YouTube, news articles, and others. The category others included all links that did not fit the three main categories and included i.e. references to Instagram, Facebook, and religious websites (e.g. islam.net). We observed that a significant amount of URLs diffusion is happening within the same platform for news articles and YouTube comments. While around 67% of all URLs in YouTube comments point to other YouTube content, 72% of all online news sources point to other online news sources while only 39% of URLs mentioned in tweets refer to other tweets. Around 47% of URLs mentioned in collected news articles refer to the same media house, which indicates that online news sources are more likely to share sources from their own company to strengthen their own economic goals of longer dwell time and clicks. Solely on Twitter, 52% URLs mentioned in tweets referes to news media, 3% to YouTube, 39% to other tweets and 8% to other platforms . This strengthens the ideas of <cit.>, who identified news media as influential drivers of diffusion in social networks. In contrast, the URLs of news sites in the Quran Burning case study only included roughly 8% of the URLs pointing to different social media sites, including Twitter and Facebook. This contradicts the findings of Jung (2018) <cit.>, who found out that news media were referencing Twitter more often than Twitter posts referred to news media. However, Jung (2018) <cit.> did not use a data set looking at one specific case study but a broader variation of topics. 
Although the diffusion and referencing within the respective platform boundaries are high, cross-platform diffusion makes up 25-60% of the sub datasets of this case study, strengthening the results of <cit.> and <cit.>, who both found an interdependence between the degree of diffusion and cross-platform spill-overs. While for the spill over of news articles to Twitter it could be identified by our coding, that in many cases shared news articles also led to a share of frames, as snippets of the articles formed as well part of the tweet, it remains unclear is if the other reference spill-overs which have been identified in the data sets do also directly lead to a frame spill-over, which could give insights into the importance of the different platforms and actors involved in the frame-setting and agenda-building process <cit.>. § LIMITATIONS AND FUTURE RESEARCH One major limitation of our keyword-based data collection approach is that some relevant data about the Quran burning incident in Kristiansand might have been missed, compromising completeness. Furthermore, the fact that an existing codebook was adapted, may be another methodical limitation. A codebook derived from the data set always has the strength of being more case-related and specific. While the Twitter data and YouTube videos could be easily coded with the media corpus codebook, We see the limitations of the codebook for YouTube comments and understand an adoption of it for YouTube comments as a potential future research endeavor. We admit that there is a certain language bias in relation to the collection of YouTube videos and comments in our analysis. However, we argue that videos are more accessible across languages compared to text alone, and AI-generated subtitles on YouTube were sufficiently accurate for our analysis. In addition to the limitations the study also stimulates many new ideas for further research. First, in a follow-up study, it would make sense to code not only the dataset collected in the first step, but also all the third-party content identified by the URL analysis on the three platforms. Through this snowball system, it would be possible to make clearer statements about frame spill-overs in the future. This further analysis could for example be supported by a network analysis to visualize the spill-overs in the different directions and could help to identify the main frame-setters. Furthermore, it would be interesting to investigate whether sockpuppeting or astroturfing took place to push certain frames (e.g. by following the approach presented in [44]. Furthermore, it would also be appropriate to focus on the questions raised in the discussion of this article on the topic of discourse diversity on different platforms and the possibility of meta-discourses in social networks. § CONCLUSION This study contributes to closing the research gap of networked framing on multiple platforms by analyzing how the socially and politically relevant incident of the Quran burning by the Anti-Islamic group SIAN in Kristiansand was framed on Twitter, YouTube and in online news sources. The analysis revealed that on social media the frame dimension of morality and ethics was highly dominant, while on online news sites the legality frame played the most relevant role. The higher use of the morality frame on social networks can be related to two main reasons: the higher number of personal statements in social networks and the involvement of users from Islamic states like Pakistan who were morally concerned by the incident. 
Another finding was the high appearance of hate speech in the form of the other frame dimension in the YouTube comments, which was not similarly present on other platforms. As most hate speech was published in Urdu and Arabic there was no counter-speech in English or Norwegian, underlining a lack of interaction between different language groups and ergo a limited diffusion of frames between different language groups. Which underlines the limits of the proposed concept of networked framing and gatekeeping by <cit.>. Although the internationality of the social media users was higher on YouTube, the frame dimension of External regulations and & Reputation was more relevant in online news sources. However, online news sources were highly dominated by the legality frame, discussing the legal classification of the incident in accordance with the Norwegian law, which did only play a minor role on social networking sites. The highest diversity of frame dimensions was applied by the Twitter community pointing at a more manifold discussion of the topic on this platform. However, meta-discourses on the fairness of the treatment of minority groups or a discussion on the cultural identity of Norwegian society in relation to the incident have been absent. Concerning the diffusion of content regarding the Quran Burning incident between different social media platforms and online news sites, it could be revealed that most diffusion takes place within the specific network boundaries. On Twitter, the highest amount of content generated outside the platform boundaries was shared. The most self-contained platform in this study was YouTube, which leads to the assumption that, at least for this case study, YouTube has a limited networked public solely looking at the content created by its own community. Cross-platform diffusion forms between a quarter to two thirds of the disseminated content, which underlines the interconnectedness and influence between the different discourse spaces and platforms. The study underscores the discursive power of social network users, who do not solely copy the frames of traditional media one-to-one but apply different and more multifaceted frames in their reflections. The greatest linkage was found between Twitter and online news sources for which we could identify not only a reference- but as well frame spill-over, as to a great extent frames of the news sources were spilled in form of text snippets and uncommented URLS into the Twitter community. § ACKNOWLEDGEMENTS The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 823866. § APPENDIX [pages=-]AppendixMisdoomRevision.pdf splncs04
http://arxiv.org/abs/2407.12974v1
20240717194135
False positives for gravitational lensing: the gravitational-wave perspective
[ "David Keitel" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "physics.data-an" ]
^1Departament de Física, Universitat de les Illes Balears, IAC3–IEEC, Crta. Valldemossa km 7.5, E-07122 Palma, Spain ^2University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth PO1 3FX, United Kingdom astrophysics, relativity David Keitel david.keitel@ligo.org 18 July 2024 – https://dcc.ligo.org/LIGO- § ABSTRACT For the first detection of a novel astrophysical phenomenon, scientific standards are particularly high. Especially in a multi-messenger context, there are also opportunity costs to follow-up observations on any detection claims. So in searching for the still elusive lensed gravitational waves, care needs to be taken in controlling false positives. In particular, many methods for identifying strong lensing rely on some form of parameter similarity or waveform consistency, which under rapidly growing catalog sizes can expose them to false positives from coincident but unlensed events if proper care is not taken. And searches for waveform deformations in all lensing regimes are subject to degeneracies we need to mitigate between lensing, intrinsic parameters, insufficiently modelled effects such as orbital eccentricity, or even deviations from general relativity. Robust lensing studies also require understanding and mitigating glitches and non-stationarities in the detector data. This article reviews sources of possible false positives (and their flip side: false negatives) in gravitational-wave lensing searches and the main approaches the community is pursuing to mitigate them. § INTRODUCTION Its sensitivity increasing with each observing run, the LIGO–Virgo–KAGRA (LVK) network of gravitational-wave (GW) detectors <cit.> is reaching deeper into the universe and opening up chances for new types of detections. One exciting opportunity are gravitationally lensed signals, with a diverse science case as exposed in other papers in this issue, e.g. <cit.>. At present, lensing effects are primarily of interest for [s] from compact binary coalescences. False positives for gravitational lensing: the gravitational-wave perspective David Keitel^1,2 July 22, 2024 ============================================================================= No lensed [s] have been detected so far <cit.>, and only a small fraction of events is expected to have noticeable lensing signatures – on the order of one in hundreds or thousands for the current detector generation <cit.>. So our problem is finding one of these rare lensed events in the rapidly growing catalogues of all transient detections <cit.>. This corresponds to the well-known basic dilemma of all hypothesis tests, as illustrated in <ref>: we want to achieve a decent detection probability (corresponding to a low false-dismissal or false-negative probability ) at limited false-alarm or false-positive probability . As a snapshot, as of writing of this article (2024-05-24) already 98 significant new candidates from the ongoing O4 observing run had been reported[<https://gracedb.ligo.org>], in addition to the 90 signals reported from O1–O3 <cit.>, With future sensitivity improvements <cit.>, the rate will grow further with accessible cosmic volume, i.e. the third power of detector noise curve improvements. Our key challenge for lensed [s] is thus finding needles in a haystack, while the haystack rapidly grows. This has to be met without the help of the most informative aspect of lensing studies in astronomy: image geometry, since sky localisation is far worse <cit.> than typical image sizes and separations. 
Instead, we can rely on the LVK detectors, with their Hz–kHz sensitive range, to provide high time resolution and to extract the source parameters encoded in the amplitude and phase evolution of the strain time series (the “waveforms”) that we can typically track across many cycles. This paper will discuss the various sources of in identifying lensed [s], and the solutions found so far or under active development in the lensing community. The discussion will be split into two cases: (i) identification of sets of strongly-lensed multiple images in the geometric regime; (ii) deformed waveforms from strongly lensed type II images and the beating patterns or wave-optics effects commonly referred to as “milli”- and “microlensing” of [s] <cit.>. § MULTIPLE IMAGES In the geometric optics regime, strong lensing produces multiple images of the same source with almost identical waveforms (up to magnification, time delays, and phasing changes that are only observable in the presence of strong higher modes <cit.>). So these can be identified by checking pairs (or higher multiples) of events for consistent sky location and intrinsic parameters. Actually, this is a triple hypothesis test: lensed images of a single source () vs. unrelated unlensed signals () vs. at least one of the candidates being just a noise fluctuation (). But most methods developed so far (posterior overlap <cit.>, machine-learning tools <cit.>, phase consistency <cit.> and joint parameter estimation <cit.>) typically focus on the binary vs test, assuming that the candidates have already been identified as astrophysical over terrestrial noise with sufficient confidence by standard searches <cit.>. This section will review the main sources of for identifying multiply imaged [s], and methods to mitigate them. §.§ False positives from pure noise masquerading as GWs? In a simplified workflow of detecting candidates and performing lensing hypothesis tests, as illustrated in <ref>, the most basic source of false positives is mistaking noise fluctuations for astrophysical signals. This can be considered as mostly under control given the long development and sophistication of search algorithms. In particular, significances assigned by matched-filter pipelines are not just based on [s] under a Gaussian noise assumption. Instead, ranking statistics incorporate a variety of information and vetoes on signal shape, data-quality information, etc. (e.g. <cit.>). The backgrounds used to assign significance are empirically estimated from the actual data. See appendix D of <cit.>, earlier GWTC papers<cit.>, and references therein. One caveat to consider is that the thresholds for inclusion in a GWTC are not actually extremely strict – GWTC-3 cut at and has an estimated noise contamination of ∼10–15% <cit.>. But in fact, several methods have been developed for sub-threshold searches, trying to dig out even fainter lensed images from below the GWTC thresholds that match other, stronger events. They reduce the covered parameter space, and hence the statistical trials factor, based on the target events <cit.>. To avoid false positives from pure noise, extra care needs to be taken in studying candidates from such searches from the data quality perspective. Additional confidence can also be gained from studying the consistency of lensing parameters for candidate image sets <cit.>. §.§ False positives from unlensed GWs masquerading as lensed pairs? 
Likely the main issue for pair identification are false alarms from coincidentally similar but physically independent events. While the chance of actual lensed events only grows linearly, the set of possible coincident pairs grows quadratically with catalog size, even for uniformly distributed parameters.[ Higher multiples will be revisited later.] In addition, it has become clear already that merging binary black holes cluster in some regions of mass and spin space <cit.>. (While for binaries including neutron stars, the numbers are still too low for detailed population studies.) But does the actually grow as rapidly as the number of all event pairs? The relevant quantity is the catalogue-level total: 1-(1-^per pair)^N_pairs. This problem has been studied quantitatively in <cit.>. A simplified approximation to real lensing pipelines was used in <cit.>, choosing an operating point of a per-pair of 10^-4 from checking overlap in mass, sky location and phase. From this, they found the global to rapidly approach unity beyond 100 detections, i.e. from O4 on. However, there are several ways out of this apparent dilemma. <cit.> had previously already demonstrated that time-delay priors can significantly decrease – this will be discussed in detail in a moment. Additionally, <cit.> pointed out that increasing thresholds and analysing triple or quad images (also see below) can reduce the scaling of with catalogue size. As discussed further below, both follow-up (in rare cases) or additional smoking-gun -only signatures of lensing would be very safe solutions. And even without such extra sources of confirmation, the actual lensing analysis pipelines can be much stricter in rejecting chance coincidences, reducing the per-pair – especially full joint parameter estimation <cit.> is more restrictive than overlap statistics, and more so when restricting to specific lens models <cit.>. See section <ref> for further discussion of these solutions. A crucial ingredient to reducing the steep scaling of the catalogue-level is to consider time delay priors. As demonstrated already by <cit.> for the posterior overlap method, multiplying its Bayes factor with another that compares the priors for the relatively short time delays from galaxy lenses against coincident unlensed pairs can reduce the per-pair from 10^-2 to 10^-5 at a fixed ≈0.8. Furthermore, <cit.> have pointed out that, at fixed per-pair , the catalogue-level growth (with increasing observing time at constant detection rate) is suppressed from quadratic to linear, as in single-event searches. As further pointed out by <cit.>, for cluster lenses with their larger allowed time delays the situation is more difficult, but the scaling can still be suppressed as long as the expected lensed time delays are typically shorter than the span of the whole catalogue. In summary, the per-pair can be reduced significantly by several considerations, and the scaling of overall false positives with catalogue size can also be controlled, at least for galaxy lenses, by considering astrophysical time-delay priors. §.§ False positives from extra noise making GWs look lensed? Combining the two previous concerns, it might happen that one or more real events are contaminated by additional noise transients to such a degree that they falsely resemble others and hence appear lensed. The impact in particular of large glitches on parameter estimation is known, and deglitching methods are crucial <cit.>. 
See for example GW170817 <cit.> (where clean subtraction was possible) and GW200129_065458 (where subtraction uncertainties are significant <cit.>). For pair identification, the situation is no different than in other areas of GW analysis and should not be a dominant additional source of false alarms. Sections <ref> and <ref> will revisit this for frequency-dependent single-event lensing signatures. §.§ False positives from waveform systematics? Understanding GW signals from compact binaries relies on waveform modelling <cit.>. The existing model families demonstrate strong agreement in the “vanilla” parameter-space regions (similar masses, low spins, quasi-circular), but open challenges remain for large mass ratios, high precessing spins, and orbital eccentricity. Imperfect waveforms can lead to shifted posteriors or missed modes, and hence to both false alarms and false dismissals in lensing pair identification. GW lensing studies so far have generally used the IMRPhenomXPHM model <cit.> or earlier versions from the same family. A first dedicated study <cit.> of the impact of waveform choice on lensing searches has covered the posterior overlap method <cit.>. After identifying that some problematic cases turned out to be due to sampler convergence issues in the GWTC releases rather than actual waveform systematics, no issues were identified on O1–O3 events. For O4 and beyond with increased sensitivity, the higher SNRs for some events and increased chances of detecting non-vanilla signals mean waveform systematics can become more important. On the other hand, continued waveform development will allow for better-constrained posteriors and hence reduce the lensing false alarms from coincident pairs. §.§ Improvements with higher multiples Moving on to solutions for reducing false alarms: as already discussed, the FAP from coincident unlensed pairs drops steeply when more than two images of the same source can be identified <cit.>. Triples and quads also allow for detailed checks of the time delays, phase differences, and their ordering against the known classification of possible configurations of lens geometries <cit.>, especially when combined with full joint parameter estimation and explicit lens model choices <cit.>. However, even if there are three or four reasonably bright images, we may not detect all of them. First, the detectors are simply not in observing state for non-negligible times, with duty factors of e.g. 53% for both LIGO detectors together or 83% for at least one of them online during O4a[ <https://gwosc.org/detector_status/O4a/> – for O4b Virgo has joined again and improved this]. Second, even slightly lower relative magnifications or changes in the detector noise level can push some images below the typical detection thresholds. Sub-threshold searches <cit.> can thus be particularly valuable, in particular when a promising candidate pair or triple is already known and can be used to narrow down the list of possible thirds and fourths further <cit.>. §.§ Improvements with smoking guns Another solution to any false-alarm versus false-dismissal balancing problem is to find additional, clear signatures of the effect searched for – smoking guns, or in this case maybe “smoking magnifying glasses”. One such case for GW lensing is mergers involving neutron stars.
Even without an electromagnetic counterpart, from GWs alone, the matter effects on the waveform (tidal deformability changes to the phasing) can break the degeneracy between nearby heavy sources and lensed far-away lighter sources <cit.>, because lensed cases would seem to contain heavy objects with the stronger tidal deformability of a lighter one. Tidal information is mainly encoded at higher frequencies than the optimum of current detectors, but ongoing sensitivity improvements will make such checks feasible for increasing numbers of low-mass candidates. For binary black holes, smoking guns are still possible to find, as will be discussed in <ref>. Strong-lensing false alarms can thus be reduced when we also find evidence on one or more of the images of the parity change in type-II images <cit.> or of combined strong plus microlensing <cit.>. §.§ Improvements with combined GW+EM observations In addition to the various GW-only methods of decreasing false alarms, a lensed electromagnetic counterpart could be considered a beyond-a-reasonable-doubt confirmation of candidate lensed GWs, but identifying such counterparts has its own challenges. This appears attractive for binary neutron stars like GW170817 <cit.>, where a counterpart can be expected from the exact same source <cit.>, but the depth of observations can be a limiting factor, as can the actual distinction of multiple images. Specific cases like lensed neutron star – black hole mixed binaries <cit.> and cases where the counterpart is a fast radio burst <cit.> have also been discussed in the literature. However, given the relative detection reaches and redshift-dependent merger rates <cit.>, lensing candidates involving neutron stars and with possible direct counterparts will remain much rarer than those from binary black holes. For those, the “counterpart” is only a host galaxy, with no temporally well-constrained signal to make up for the broad sky maps. Strongly-lensed signals are thus special in making such associations at least potentially feasible <cit.>: the improved sky localisation from joint analysis of multiple images could narrow down the list of possible hosts to a few. Then the right one can be identified through consistency of the properties of the possible lenses and of the lens profile reconstructed from the image set. If the lensing properties hint at a cluster lens, deep follow-up observations <cit.> can also probe the small number of such lens candidates even within much broader single-event sky maps. §.§ Improvements with better data and better analyses to the rescue To mitigate the various sources of false alarms discussed above even in the absence of GW-only or electromagnetic smoking guns, we can also consider the benefits of more sensitive or cleaner detector data and of further improvements to our analysis methods. On the detector side, the improved sensitivity of O4 and runs beyond it yields events with higher SNRs, which in turn yield narrower posteriors, and hence at least a subset of strong candidates can be identified with higher significance and lower FAP – with the caveat of waveform systematics. To maintain clean catalogues and parameter estimation, it is also crucial for glitch identification and mitigation <cit.> to keep up with instrumental improvements. Meanwhile, methods development for identifying lensed pairs (and higher multiples) is still ongoing.
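To make the simplest of these statistics concrete, the following is a rough sketch of a posterior-overlap estimate computed directly from posterior samples of two events. The Gaussian kernel density estimate, the flat prior, and the restriction to a two-dimensional toy parameter space are simplifying assumptions for illustration, not the actual pipeline implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def overlap_statistic(samples_1, samples_2, prior_volume):
    """Monte Carlo estimate of B = integral of p1(theta) p2(theta) / p_prior(theta),
    assuming a flat prior of total volume `prior_volume` over the chosen parameters.
    Sample arrays have shape (n_dim, n_samples), as expected by gaussian_kde."""
    kde_2 = gaussian_kde(samples_2)        # density estimate for the second event
    # E_{theta ~ p1}[ p2(theta) / p_prior(theta) ], averaged over event-1 samples
    return np.mean(kde_2(samples_1)) * prior_volume

# Toy usage: two-dimensional posteriors (e.g. chirp mass and one sky coordinate)
rng = np.random.default_rng(1)
post_a = rng.normal([30.0, 0.50], [1.0, 0.10], size=(5000, 2)).T
post_b = rng.normal([30.5, 0.55], [1.0, 0.10], size=(5000, 2)).T
print(overlap_statistic(post_a, post_b, prior_volume=100.0 * 2 * np.pi))
# Values well above 1 favour consistent parameters; values near 0 disfavour them.
```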
The initial candidate identification via posterior overlap <cit.> can be complemented with independent pipelines using machine learning <cit.> or phase information <cit.>, and candidates from sub-threshold searches can be submitted to various follow-up steps <cit.>. The final criterion is then generally taken to be full Bayesian joint parameter estimation <cit.>, which as discussed above can be significantly stricter than overlap-type analyses, and can be further improved through lens model considerations <cit.>. More quantitative work is needed to study false alarms under this full meta-pipeline of lensing searches, but it is clear that it can significantly improve the picture. Efforts are also ongoing to better include expectations from lens simulations and surveys on the time delay, magnification and phase distributions into initial candidate selection <cit.>. One open issue with these is that typically simple galaxy lenses are assumed, with the distributions for clusters more difficult to model. Hence, any low-FAP methods tailored to galaxy lenses may incur significant false dismissals for other lenses. Still, such approaches can be crucial tools for a high-purity sample of lensing candidates. § DEFORMED WAVEFORMS (SINGLE IMAGES) Leaving behind the case of multiple images from strong lensing, the main ways of identifying single lensed images make use of frequency-dependent deformations: (i) the parity change in strong-lensing type-II images (detectable in the presence of higher modes and precessing spins <cit.>); and (ii) wave-optics effects <cit.> or the beating patterns between overlapping short-time-delay images <cit.>, both of which happen for low-mass lenses and hence are commonly referred to as “micro”- and “millilensing”, despite their differences from the namesake phenomena. These effects are generally searched for with Bayesian evidence calculation (e.g. <cit.>). In this case, our haystack only grows linearly with the number of catalogue events. As discussed before, these signatures can also serve as additional smoking guns to confirm strongly lensed multiple images. However, they are themselves subtle to detect on signals with realistic SNRs and could potentially be confused with non-lensing effects if proper care is not taken. This section will now focus on the specific sources of false alarms in detecting such deformations themselves. For most of these, investigations have only started recently, so the overview will be quite brief and not yet quantitative; full studies with updated waveform models and analysis techniques will likely also discover ways of breaking these degeneracies at least partially. §.§ False positives from noise issues? As discussed for pair identification, deglitching <cit.> is crucial for robust parameter estimation on any event, and all the more so if we want to detect effects beyond standard waveforms. Mistaking noise effects for lensing is an example of the general “out-of-manifold” effect for matched-filtering based hypothesis tests: with the likelihood based on a Gaussian noise assumption, anything unusual in the data is likely to make a nested test of different signal hypotheses prefer the more complex hypotheses, which have greater freedom to fit the extra noise. Beyond mitigation by case-by-case noise characterisation and cleaning, extensive background studies with simulated signals in a variety of noisy data are crucial for setting robust thresholds on lensing Bayes factors. This can reduce false alarms, but at the obvious cost of a higher false-dismissal rate.
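As a toy illustration of such a background study (not any specific pipeline), the following sketch draws Bayes factors for simulated unlensed and lensed populations from placeholder distributions, sets a threshold at a target per-pair false-alarm probability, reads off the resulting efficiency, and evaluates the catalogue-level scaling 1-(1-FAP^per pair)^N_pairs discussed earlier; all the distribution shapes and numbers are arbitrary assumptions for illustration.

```python
import numpy as np

# Toy background study: the log-normal shapes below are placeholders for the
# distributions one would obtain from large-scale injection campaigns in real noise.
rng = np.random.default_rng(0)
log_bf_background = rng.normal(0.0, 1.0, size=100_000)  # unlensed / coincident pairs
log_bf_foreground = rng.normal(4.0, 1.5, size=10_000)   # simulated lensed pairs

target_fap = 1e-4                                        # desired per-pair FAP
threshold = np.quantile(log_bf_background, 1.0 - target_fap)
efficiency = np.mean(log_bf_foreground > threshold)      # fraction of lensed pairs kept
print(f"log-BF threshold {threshold:.2f} -> efficiency {efficiency:.2f}")

# Catalogue-level false-alarm probability implied by this per-pair operating point
n_events = 200
n_pairs = n_events * (n_events - 1) // 2
print("catalogue-level FAP:", 1.0 - (1.0 - target_fap) ** n_pairs)
```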
On the other hand, ongoing developments for statistically stronger and more model-informed analysis methods are also likely to increase the noise–signal separation. §.§ False positives from waveform systematics? The same “out-of-manifold” issue, but on the signal hypothesis side, manifests as imperfect underlying waveform models for unlensed signals increasing the chance for elevated lensing Bayes factors. This can be due to a simple lack of precision in the numerical-relativity calibration of the models, but becomes more prominent when entire physical effects are present in the real data but missing in the model. In particular, standard models have evolved from aligned-spin, dominant-mode-only models to “PHM” versions (including precessing spins and higher modes) <cit.>, but are still limited in their implementations of precession, and cover only quasi-circular binaries, with modelling of orbital eccentricity still in an early phase (e.g. <cit.>). Studies of waveform systematics in single-image lensing tests have started <cit.>, but more systematic work is still needed. §.§ False positives from degeneracies with spin precession? Even if we had perfect waveform models, e.g. including spin precession and microlensing together, there is a level of intrinsic degeneracy between the two effects. If we consider microlensing “for dummies”, the effect comes down to extra modulations (“wiggles”) in the strain time series. Phenomenologically, precession can cause similar modulations. This was first studied on a set of example cases by <cit.>, finding that the two effects indeed can look similar under realistic conditions but can be distinguished with full parameter estimation. A more systematic exploration of the full relevant parameter space has been started in <cit.> and is being further extended in ongoing work, which should allow narrowing down the regions of parameter space where degeneracies are significant. §.§ False positives from degeneracies with orbital eccentricity? Similarly to spin precession, orbital eccentricity can also cause extra waveform “wiggles” that can be mistaken for other effects. As a famous example, not lensing-related, the very short signal GW190521 <cit.> can be fit with a variety of modifications beyond a standard quasi-circular model, including with eccentricity (e.g. <cit.>). While several models now exist that incorporate eccentricity to some degree (e.g. <cit.>), much more work is needed for better calibration and for treating full spin effects at the same time. A first study on the possible degeneracy of eccentricity with microlensing <cit.> has found the risk of unlensed signals from eccentric binaries being mistaken for lensed signals from quasi-circular sources to be potentially significant, though eccentricities at the required levels are likely rare in the overall source population. §.§ False positives from (or for) violations of general relativity? Yet another possible source of “extra wiggles” could be deviations from general relativity, with ways to test for such violations in GW signals reviewed in <cit.>. However, this subsection can remain short, as it can be argued that lensing is less exotic than a new theory of gravity and hence that we should be more worried about lensing as a source of false alarms for those tests <cit.> rather than about such violations as false alarms for lensing. For completeness, first studies in that direction have already been performed for both type-II images and microlensing <cit.>.
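To make the “extra wiggles” discussed throughout this section concrete, the following is a minimal sketch of the frequency-domain amplification factor of an isolated point-mass lens in the geometric-optics limit, where the two micro-images with magnifications μ± and time delay Δt interfere and produce the characteristic beating pattern; the lens mass, impact parameter, and frequency grid are arbitrary illustration values, not tied to any event discussed here.

```python
import numpy as np

G_M_SUN_OVER_C3 = 4.925e-6  # G*M_sun/c^3 in seconds

def point_lens_geo_amplification(f, m_lens_z, y):
    """Geometric-optics amplification factor F(f) of an isolated point-mass lens
    with redshifted mass m_lens_z (solar masses) and impact parameter y: the two
    micro-images interfere and imprint a beating pattern on the strain."""
    sq = np.sqrt(y ** 2 + 4.0)
    mu_plus = 0.5 + (y ** 2 + 2.0) / (2.0 * y * sq)    # magnified (type-I) image
    mu_minus_abs = mu_plus - 1.0                       # |mu_-| for a point lens
    dt = 4.0 * G_M_SUN_OVER_C3 * m_lens_z * (
        y * sq / 2.0 + np.log((sq + y) / (sq - y)))    # time delay between images
    return (np.sqrt(mu_plus)
            - 1j * np.sqrt(mu_minus_abs) * np.exp(2j * np.pi * f * dt))

freqs = np.linspace(20.0, 512.0, 1000)                 # rough LVK band in Hz
F = point_lens_geo_amplification(freqs, m_lens_z=1e3, y=0.3)
print(np.abs(F).min(), np.abs(F).max())                # oscillating |F(f)|: the "wiggles"
```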
§ CONCLUSIONS In classical astronomy, there are nowadays many photos from which it is completely obvious that lensing is happening. By contrast, the reason to carefully consider risks of false positives in searches for GW lensing stems from the limited SNRs achievable with current detectors and the lack of geometric information, which is the richest observable in the electromagnetic sky. For GW lensing, care needs to be taken to extract the maximum information imprinted on the strain time series. The lensing community is thus focusing on developing robust methods for obtaining high-confidence results. To achieve these, we must study and control all imaginable sources of false alarms and pursue concrete mitigation strategies. As discussed in this brief review, searches for multiple strongly lensed images may primarily be susceptible to false alarms from the much larger number of unlensed signals forming coincidentally similar pairs. Additional possible issues include noise fluctuations and waveform systematics. For frequency-dependent deformations on single images, possible sources of false positives again include noise fluctuations and waveform systematics, as well as degeneracies with intrinsic compact-binary parameters such as spin precession and orbital eccentricity. Many of these challenges can be at least partially addressed by improvements to detector sensitivity, mitigation of noise issues, waveform modelling, and, in particular, dedicated lensing analysis techniques. Multi-image searches can profit enormously from using full joint Bayesian parameter estimation, identifying more than two images of the same source, and incorporating specific lens models to constrain time delays and image-set configurations. Through these avenues, the scaling of false alarms with the number of detections can be significantly reduced <cit.>, at least for galaxy lenses with their shorter time delays. For some candidates, the combination with additional smoking guns such as matter effects in binaries including neutron stars or the single-image signatures of type-II images or microlensing can yield additional evidence. For the latter, improved methods may be able to partially break degeneracies, and systematic studies can narrow down the relevant parameter-space regions and in turn increase confidence in candidates not falling into these. Multi-messenger lensing studies can provide further inputs to reduce lensing false alarms, both through population-level studies that allow for more restrictive analysis techniques and through counterpart searches for individual sources. Another important conclusion concerns background studies. While lensing hypothesis tests are typically formulated in a Bayesian way, we can still treat the resulting Bayes factors as frequentist detection statistics and calculate their background distribution through simulation studies, as done for the main LVC/LVK lensing studies so far <cit.>. To do this optimally, one needs to sample over all possible confounding factors – full population models, spin precession, orbital eccentricity, and the diverse features of real detector noise. For a proper statistical sampling, such studies easily become more expensive than the main analyses of actual detected events, but they are a crucial ingredient for robust lensing candidate identification. It is also important to consider the expected evolution of observational astronomy in the future.
As of the writing of this article, the ongoing O4 run has produced over a hundred new significant candidates, and might double that by its end. The O5 run is expected to produce daily detections <cit.>. This increased rate also corresponds to a deeper reach into the universe; thus, the chance for lensed signals to be in the data increases. To profit from this increase, the growth of false positives must be controlled through the approaches outlined above. KAGRA <cit.> and LIGO India <cit.> extending the global network will improve sky localisation <cit.>, reducing the risk of chance coincidences and improving the chances for multi-messenger studies. Many of the considerations reviewed here also apply to next-generation detectors with their truly cosmological reach, such as the ground-based Einstein Telescope <cit.> and Cosmic Explorer <cit.>, or LISA in space <cit.>, though their regime will be completely different: lensing will still be a relatively rare phenomenon, but will cease to be rare in absolute terms. Additionally, high SNRs and long observation times thanks to lower observable frequencies should help lensing identification. Thanks to the members of the LIGO–Virgo–KAGRA gravitational lensing group as well as the organisers and attendants of the Royal Society meeting on Multi-messenger Gravitational Lensing (Manchester, 11-12 March 2024) for many fruitful discussions. This work was supported by the Universitat de les Illes Balears (UIB); the Spanish Agencia Estatal de Investigación grants CNS2022-135440, PID2022-138626NB-I00, RED2022-134204-E, RED2022-134411-T, funded by MICIU/AEI/10.13039/501100011033, the European Union NextGenerationEU/PRTR, and the ERDF/EU; and the Comunitat Autònoma de les Illes Balears through the Direcció General de Recerca, Innovació I Transformació Digital with funds from the Tourist Stay Tax Law (PDR2020/11 - ITS2017-006) as well as through the Conselleria d'Economia, Hisenda i Innovació with grant numbers SINCO2022/6719 (European Union NextGenerationEU/PRTR-C17.I1) and SINCO2022/18146 (co-financed by the European Union and FEDER Operational Program 2021-2027 of the Balearic Islands). The Royal Society and the Institute of Cosmology and Gravitation (University of Portsmouth) have provided additional travel support. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. <ref> was generated using matplotlib <cit.>, numpy <cit.>, scipy <cit.>, and <https://github.com/ipython/xkcd-font/>. This paper has been assigned the document number https://dcc.ligo.org/LIGO-.
http://arxiv.org/abs/2407.13332v1
20240718093138
Joint OAM Multiplexing and OFDM in Sparse Multipath Environments
[ "Liping Liang", "Wenchi Cheng", "Wei Zhang", "Hailin Zhang" ]
eess.SP
[ "eess.SP" ]
Joint OAM Multiplexing and OFDM in Sparse Multipath Environments Liping Liang, Student Member, IEEE, Wenchi Cheng, Senior Member, IEEE, Wei Zhang, Fellow, IEEE, and Hailin Zhang, Member, IEEE .5Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. A part of this work was presented in IEEE Global Communications Conference, 2018 <cit.>. This work was supported in part by the National Natural Science Foundation of China under Grant 61771368, Young Elite Scientists Sponsorship Program By CAST under Grant 2016QNRC001, Doctoral Students' Short Term Study Abroad Scholarship Fund of Xidian University, and the Shenzhen Science & Innovation Fund under grant JCYJ20180507182451820. (Corresponding author: Wenchi Cheng) Liping Liang, Wenchi Cheng, and Hailin Zhang are with the State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, 710071, China (e-mails: lpliang@stu.xidian.edu.cn; wccheng@xidian.edu.cn; hlzhang@xidian.edu.cn). Wei Zhang is with College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China and School of Electrical Engineering and Telecommunications, the University of New South Wales, Sydney, Australia (email: weizhang@ieee.org). July 22, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== empty empty § ABSTRACT The emerging orbital angular momentum (OAM) based wireless communications are expected to be a high spectrum-efficiency communication paradigm to solve the growing transmission data rate and limited bandwidth problem. Academic researchers mainly concentrate on the OAM-based line-of-sight (LoS) communications. However, there exist some surroundings around the transceiver in most practical wireless communication scenarios, thus forming multipath transmission. In this paper, a hybrid orthogonal division multiplexing (HODM) scheme by using OAM multiplexing and orthogonal frequency division multiplexing (OFDM) in conjunction is proposed to achieve high-capacity wireless communications in sparse multipath environments, where the scatterers are sparse. 
We first build the OAM-based wireless channel in a LoS path and several reflection paths combined sparse multipath environments. We concentrate on less than or equal to three-time reflection paths because of the severe energy attenuation. The phase difference among the channel amplitude gains of the LoS and reflection paths, which is caused by the reflection paths, makes it difficult to decompose the OAM signals. We propose the phase difference compensation to handle this problem and then calculated the corresponding capacity in radio vortex wireless communications. Numerical results illustrate that the capacity of wireless communications by using our proposed HODM scheme can be drastically increased in sparse multipath environments. Sparse multipath, orbital angular momentum (OAM), channel model, phase difference, capacity. § INTRODUCTION Orbital angular momentum (OAM), different from the spin angular momentum of electromagnetic waves, is an interesting communication paradigm with high capacity and reliability in wireless communications <cit.>. When propagating along the same spatial axis, the beams with different integer OAM-modes, which are also referred to topological charges, are mutually orthogonal <cit.>. Thereby, multiple OAM-modes can be applied for several parallel data streams transmission without inter-mode interference theoretically. Hence, OAM is expected to be used for multiple users transmission where each user utilizes a different OAM-mode from others or for single user transmission where the user transmits data with OAM-modes multiplexing fashion. OAM-mode multiplexing has been demonstrated to reach the goal of high capacity in a four OAM-modes multiplexing microwave communication experiment <cit.>. Also, joint OAM-mode multiplexing and spatial multiplexing can achieve 16 Gbit/s line-of-sight (LoS) millimeter-wave communication at 1.8 meters transmission distance <cit.>. Uniform circle array (UCA) based OAM is embedded into massive multiple input multiple output (MIMO) to achieve multiplicative capacity in both OAM-mode domain and spatial domain <cit.>. In consequence, compatible with the frequency, code, spatial, and time, the emerging OAM-mode multiplexing provides the opportunities to significantly increase the capacity of wireless communications <cit.>. However, most existing researches mainly focus on studying OAM-mode multiplexing for LoS wireless communications <cit.>. In some wireless communication scenarios, such as residential environments, there are a small number of randomly distributed surroundings around the transceivers, such as walls, doors, and so on. Thus, the signals are transmitted in LoS, reflection, diffuse, and scattering combined sparse multipath environments. Since the power of signals after three-time reflections are relatively small <cit.>, we consider the sparse multipath environments with less than or equal to three-time reflection scenarios in this paper. Also, the comparison between OAM and MIMO is presented in <cit.>. Academic researchers have shown much interest in orthogonal frequency division multiplexing (OFDM) technique, which is used for high capacity, but also for anti-multipath in wireless communications <cit.>. Utilizing the anti-multipath of OFDM, the inter-symbol interference can be absolutely canceled in radio vortex wireless communications. Thus, OFDM has been applied into a variety of scenarios in the past few decades, such as MIMO with spatial multiplexing <cit.> and full duplex with double spectrum efficiency <cit.>. 
Some existing academic research has shown that OAM is compatible with the traditional OFDM to achieve extremely high capacity in wireless communications <cit.>. Experiments have demonstrated that signals carrying high-order OAM-modes are susceptible to multipath interference <cit.>. By using the extra OAM-mode domain, the capacity can reach 230 bit/s/Hz in OFDM communications <cit.>. A time-switched OFDM-OAM MIMO scheme was proposed to achieve high capacity of wireless communications while reducing computing complexity <cit.>. However, these works mainly focus on experimentally verifying the feasibility of joint OAM and OFDM to achieve high capacity of radio vortex wireless communications, and lack a theoretical analysis of OAM signal transmission and decomposition. Also, they assume that the OAM-based wireless channel model in sparse multipath environments is known and that the inter-mode interference caused by reflection paths does not exist. We previously proposed the joint OAM multiplexing and traditional OFDM scheme, also referred to as hybrid orthogonal division multiplexing (HODM), for anti-multipath transmission while significantly increasing the capacity in multipath environments, where we mainly focused on the signal transmission, detection, and capacity maximization of HODM communications <cit.>. Nevertheless, it is still challenging to use radio vortex wireless communications for sparse multipath transmission. Building on the previous work, this paper constructs a specific OAM-based wireless channel model and mitigates the inter-mode interference in sparse multipath environments. In this paper we propose a HODM scheme, thus not only enabling efficient OAM-based transmission in sparse multipath environments, but also drastically increasing the capacity of wireless communications. We summarize the contributions as follows: * We build the OAM-based wireless channel model in sparse multipath environments containing a LoS path and several reflection paths. * We propose the phase difference compensation and theoretically analyze how to detect and decompose OAM signals in sparse multipath environments. * We propose the HODM scheme to drastically increase the capacity of sparse multipath transmission in radio vortex wireless communications. The remainder of this paper is organized as follows. Section <ref> presents the HODM system model and builds the OAM-based wireless channel model. Section <ref> proposes the HODM scheme, proposes the phase difference compensation to mitigate the inter-mode interference caused by multipath, and derives the maximum capacity of HODM communications. The performance of our proposed HODM scheme is evaluated in Section <ref>. Conclusions are presented in Section <ref>. § SYSTEM MODEL AND CHANNEL MODEL In this section, we present the system model of our proposed HODM scheme and then build the OAM-based wireless channel model in sparse multipath environments containing a LoS path and several reflection paths. §.§ HODM System Model In this subsection, we investigate a HODM wireless communication system as depicted in Fig. <ref>, where a two-dimensional inverse fast Fourier transform (2D-IFFT) operator replaces the regular IFFT at the transmitter and a two-dimensional fast Fourier transform (2D-FFT) operator substitutes for the traditional FFT at the receiver, in comparison with the traditional OFDM communications. Also, the HODM system model adds a phase difference compensation to the receiver.
We consider M parallel subcarriers and N OAM-modes in HODM communications. As illustrated in Fig. <ref>, the signals first are converted from serial state to parallel state and then processed by 2D-IFFT operator at the transmitter. A UCA with N arrays equally spaced on the ring is applied to transmit the signals. To generate OAM-modes, the amplitudes of input signals for each array are same, but there is a consecutive phase difference from the first array to the last array for a given OAM-mode l (|l|≤ N/2) <cit.>. At the receiver, after removing cyclic prefix (CP) and compensating phase difference caused by multipath, 2D-FFT is used to decompose the signals. The carrier frequency modulated at the transmitter and demodulated at the receiver is assumed to be synchronized in this paper. In wireless communications, OAM and OFDM are used for mode multiplexing and frequency multiplexing, respectively, to achieve high capacity. OAM and frequency are two independent domains. Therefore, we can jointly use OAM and OFDM, referred to HODM, in wireless communications. Thus, HODM modulated signals can be transmitted with M subcarriers in the frequency domain and N OAM-modes in the mode domain simultaneously as shown in Fig. <ref>, where each mode-frequency pair is identified by the specified color and neighbor subcarriers overlap each other. Hence, the high-rate data stream is divided into MN parallel low-rate data streams for transmission in radio vortex wireless communications. Compared with the M subchannels in conventional OFDM communications, HODM communications increase to MN subchannels, thus significantly increasing the capacity of radio vortex wireless communications. §.§ UCA-Based OAM Channel Model In this paper, transmit signals are propagated in sparse multipath environments containing a LoS path, several reflection paths consisting of primary reflection paths, secondary reflection paths, and triple reflection paths. Figure <ref> shows the UCA-based OAM channel model, where the UCA based transmitter and receiver are coaxially aligned for simplicity. We assume that the reflection signals are caused by specular reflectors. Thus, the azimuthal angles of reflection signals can be considered to be the same with those of transmit signals. Referred to the traditional two-way channel model in wireless communications (<cit.>, Chapter 2) and our previously derived LoS channel model <cit.>, the OAM-based wireless channel model in sparse multipath environments can be derived in the following. We assume that there are N_nr primary reflection paths, N_nt secondary reflection paths, and N_ne triple reflection paths. For each reflection, we can obtain the location of specular transmit UCA according to each specular reflector, which is parallel to the line between the centers of transmit and receive UCAs. Also, D, r_1, r_2, and t are denoted by distance between the centers of transmit and receive UCAs, the radius of transmit UCA, the radius of receive UCA, and time variable, respectively. For the LoS path, the expression of channel amplitude gain, denoted by h_vn,LoS,t, from the n-th (0≤ n≤ N-1) transmit array to the v-th (0≤ v≤ N-1) receive array is h_vn,LoS,t=βλ/4 π d_vn e^-j2 π d_vn/λδ(t), where d_vn denotes the LoS distance between the n-th transmit array and the v-th receive array, β represents the attenuation constant, λ represents the carrier wavelength, and δ(t) represents the impulse response function with respect to t. 
The expression of d_vn can be expressed as follows: d_vn = √(D^2+r_1^2+r_2^2-2r_1r_2cos(2 π v/N-2 π n/N)) = √(D^2+r_1^2+r_2^2)√(1-2r_1r_2cos(2 π v/N-2 π n/N)/D^2+r_1^2+r_2^2). Based on Taylor series expansion and r_1, r_2≪ D, the second term on the right hand of Eq. (<ref>) is approximately expressed as follows: √(1-2r_1r_2cos(2 π v/N-2 π n/N)/D^2+r_1^2+r_2^2)≈ 1-r_1r_2cos(2 π v/N-2 π n/N)/D^2+r_1^2+r_2^2. Thus, using √(D^2+r_1^2+r_2^2) for amplitude and -r_1r_2cos(2 π v/N-2 π n/N)/√(D^2+r_1^2+r_2^2) for phase, we have h_vn,LoS,t=βλ e^-j 2 π√(D^2+r_1^2+r_2^2)/λ/4 π√(D^2+r_1^2+r_2^2) e^j 2 π r_1r_2cos(2 π v/N-2 π n/N)/λ√(D^2+r_1^2+r_2^2)δ(t). For the reflection paths, the reflection OAM signals carry the opposite modes after odd times reflections as compared with the initial LoS transmit OAM signals <cit.>. Whereas, the reflection signals after even times reflections carry the same OAM-modes with the initial transmit OAM signals. For primary reflection paths, we denote by d_nr the distance between the transmit UCA center and the specular reflector for the nr-th (nr=1,2,⋯,N_nr) primary reflection. Thus, we can derive the corresponding transmit distance d_vn,nr between the n-th transmit array and the v-th receive array for the nr-th primary reflection as Eq. (<ref>). The reflection coefficient is one of important parameters of reflection channels. For different polarization transmissions, the expressions of the reflection coefficient are different. Our proposed HODM scheme can be for all kinds of polarization transmissions. Taking vertical polarization transmission as an example, the reflection coefficient, denoted by R_vn,nr, regarding the n-th array to the v-th receive array transmission for the specular reflector with a distance of d_nr away from the transmit UCA center can be expressed as follows <cit.>: R_vn,nr=sinα_vn,nr - √(ε_nr-cos^2α_nr/ε_nr)/sinα_vn,nr + √(ε_nr-cos^2α_vn,nr/ ε_nr), where α_vn,nr is the complementary angle of reflection angle for the n-th array to the v-th receive array transmission and ε_nr represents the permittivity of the corresponding specular reflector, respectively, for the nr-th primary reflection path. Also, α_vn,nr is given by α_vn,nr = arcsin2d_nr-r_1cos(2π/Nn)-r_2cos(2π/Nv)/d_vn,nr. For simplicity, the average reflection coefficient, denoted by R_nr, for the nr-th reflection path with respect to all transmit and receive arrays can be obtained as follows: R_nr=1/N^2∑_n=0^N-1∑_v=0^N-1 R_vn,nr. We denote by c the speed of light. Due to r_1, r_2 < d_nr≪ D, we can express the second term on the right hand of Eq. (<ref>) as Taylor series. Then, √(D^2+r_1^2+r_2^2+4d_nr^2) is used for amplitude. Thus, the channel amplitude gain, denoted by h_vn,nr,t, for the n-th array to the v-th receive array transmission corresponding to the specular reflector with a distance of d_nr away from the transmit UCA center can be expressed as Eq. (<ref>), where τ_nr is the time delay and given by τ_nr=d_vn,nr-d_vn/c. For the secondary reflection paths, we can obtain the first specular transmit UCA. Then, based on the first specular transmit UCA and the second reflector, the second specular transmit UCA can also be located. We denote by d_nt^(1) and d_nt^(2) the distances between the transmit UCA center and the first specular reflector as well as the second specular reflector for the nt-th (nt=1,2,⋯, N_nt) reflection, respectively. The transmit distance, denoted by d_vn,nt, for the n-th array to the v-th receive array transmission can be calculated as Eq. 
(<ref>), where d_nt is given by d_nt=d_nt^(1)+d_nt^(2). To calculate the reflection coefficients of secondary reflection paths, we denote by ε_nt^(1) and ε_nt^(2) the permittivities of the first specular reflector and the second specular reflector, respectively. Since the specular reflectors are parallel to the line connected UCA centers of the transmitter and receiver, the reflection coefficient, denoted by R_vn,nt, for the n-th array to the v-th receive array transmission corresponding to the nt-th secondary reflection path can be approximately expressed as follows: R_vn,nt=R_vn,nt^(1) R_vn,nt^(2), where R_vn,nt^(1)= sinα_vn,nt - √(ε_nt^(1)-cos^2α_nt/ε_nt^(1))/sinα_vn,nt + √(ε_vn,nt^(1)-cos^2α_vn,nt/ ε_nt^(1)); R_vn,nt^(2)= sinα_vn,nt - √(ε_nt^(2)-cos^2α_nt/ε_nt^(2))/sinα_vn,nt + √(ε_vn,nt^(2)-cos^2α_vn,nt/ ε_nt^(2)). In Eq. (<ref>), α_vn,nt is expressed as α_vn,nt= arcsin2d_nt-r_1cos(2π/Nn) - r_2cos(2π/N v)/d_vn,nt. For simplicity, we can obtain the average reflection coefficient, denoted by R_nt, for the nt-th reflection path with respect to all transmit and receive arrays as follows: R_nt=1/N^2∑_n=0^N-1∑_v=0^N-1 R_vn,nt. Then, the channel amplitude gain, denoted by h_vn,nt,t, for the n-th array to the v-th receive array transmission corresponding to d_nt can be derived as Eq. (<ref>), where τ_nt is the corresponding time delay and expressed as τ_nt=d_vn,nt-d_vn/c. For the triple reflection paths, we denote by d_ne^(1), d_ne^(2), and d_ne^(3) the distances from the transmit UCA center to the first specular reflector, the second specular reflector, and the third specular reflector, respectively, for the ne-th (ne=1,2,⋯, N_ne) triple reflection. Thus, the transmit distance, denoted by d_vn,ne, for the n-th array to the v-th receive array transmission for the ne-th triple reflection path can be calculated by replacing d_nr by d_ne in Eq. (<ref>), where d_ne=d_ne^(1)+d_ne^(2)+d_ne^(3). Also, ε_ne^(1), ε_ne^(2), and ε_ne^(3) are denoted by the permittivities of the first specular reflector, the second specular reflector, and the third specular reflector, respectively. Thus, we can derive the corresponding average reflection coefficient, denoted by R_ne, corresponding to the ne-th triple reflection path as follows: R_ne = 1/N^2∑_n=0^N-1∑_v=0^N-1 R_vn,ne = 1/N^2∑_n=0^N-1∑_v=0^N-1 R_vn,ne^(1)R_vn,ne^(2)R_vn,ne^(3), where R_vn,ne^(1)= sinα_vn,ne - √(ε_ne^(1)-cos^2α_ne/ε_ne^(1))/sinα_vn,ne + √(ε_vn,ne^(1)-cos^2α_vn,ne/ ε_ne^(1)); R_vn,ne^(2)= sinα_vn,ne - √(ε_ne^(2)-cos^2α_ne/ε_ne^(2))/sinα_vn,ne + √(ε_vn,ne^(2)-cos^2α_vn,ne/ ε_ne^(2)); R_vn,ne^(3)= sinα_vn,ne - √(ε_ne^(3)-cos^2α_ne/ε_ne^(3))/sinα_vn,ne + √(ε_vn,ne^(3)-cos^2α_vn,ne/ ε_ne^(3)). In Eq. (<ref>), α_vn,ne is derived as follows: α_vn,ne= arcsin2d_ne-r_1cos(2π/Nn) - r_2cos(2π/N v)/d_vn,ne. Then, the channel amplitude gain, denoted by h_vn,ne,t, for the n-th array to the v-th receive array transmission corresponding to d_ne can be derived as Eq. (<ref>), where τ_ne=d_vn,ne-d_vn/c. In Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), the channel amplitude gains of the paths are related to v, n, λ, and t, where v and n can be considered as the sample points of continuous signals in the angular domain. In OFDM communications, the channel model is converted from the time domain to the frequency domain after FFT at the receiver. In OAM communications, the received signals and channels are in the mode domain after OAM-based FFT. 
Therefore, the channel model is in the frequency and mode domains after 2D-FFT at the receiver, which can be directly dedicated to the HODM system. § INTERFERENCE MITIGATION   In this section, we develop the HODM scheme, mitigate the interference, and then calculate the maximum capacity of wireless communications. We first present the expression of the transmit signals in sparse multipath environments and insert the CP into transmit signals to mitigate the inter-symbol and inter-carrier interference. Next, we propose phase difference compensation to mitigate the inter-mode interference caused by multipath. Then, we demodulate the received signals in HODM communications. Finally, the conventional water-filling algorithm is applied to allocate the power, thus achieving the maximum capacity in wireless communications. At the transmitter, the expression of the vortex signal, denoted by x_l,n(t), with respect to the l-th OAM-mode for the n-th array is presented as follows: x_l,n(t)=∑_m=0^M-1s_l,m e^j 2 π f_m te^j 2 π n/N l, where s_l,m is the transmit modulated signal for the (l,m)-th (0 ≤ m ≤ M-1) OAM-subcarrier block and f_m is the m-th subcarrier frequency. Clearly, we can regard x_l,n(t) as spatial sampling signal at the interval of e^j 2 π n/N. Thereby, we can model the continuous transmit signal, denoted by x_l(ϕ,t), of x_l,n(t) for the (l,m)-th OAM-subcarrier block as follows: x_l(ϕ,t)=∑_m=0^M-1s_l,m e^j 2 π f_m t e^j ϕ l, where we denote by ϕ∈ [0, 2π] the azimuthal angle. Then, we can obtain the emitted HODM-modulated signal, denoted by x(ϕ,t), corresponding to the whole OAM-modes within the transmit HODM duration as follows: x(ϕ,t) = ∑_l=⌊-N+2/2⌋^⌊N/2⌋x_l(ϕ,t) = ∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋ s_l,me^j ϕ l e^j 2π f_m t, 0 ≤ t ≤ T_s, where T_s denotes the transmit HODM duration and ⌊·⌋ represents the floor function. We sample the emitted HODM-modulated signal x(ϕ,t), which means t=kT_s/M (0 ≤ k ≤ M-1) in frequency domain and ϕ=2π n/N in spatial domain. Thus, we obtain X_n,k=∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋ s_l,m e^j 2 π n l/Ne^j 2 π m k/M, where X_n,k denotes the sampling signal in HODM communications. Clearly, the expression of the sampling HODM signal X_n,k is the typical 2D-IFFT with respect to the emitted signal s_l,m. Since there exists time delay of wireless channels caused by reflection paths, the CP is utilized to mitigate the interference. Thus, the expression of emitted signal, denoted by x̃(ϕ,t), with the insertion of CP is x̃(ϕ,t)=∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋ s_l,me^j ϕ l e^j 2π f_m t, -T_c≤ t ≤ T_s, where the CP duration T_c is larger than the maximum time delay, denoted by τ_max, of wireless channels and x̃(ϕ,t)=x(ϕ,t+T_s) when -T_c≤ t ≤ 0. Thus, the corresponding sampling signal, denoted by X_n,u (u=-M_c, ⋯, M-1), can be expressed as follows: X_n,u=∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋ s_l,m e^j 2 π n l/Ne^j 2 π u m/M, where M_c≥⌊Mτ_max/T_s⌋ is the sampling length of CP. The expression of the received HODM signal, denoted by y(ϕ,t), for the whole receive arrays at the receiver is presented as follows: y(ϕ,t)=h_LoS(ϕ,t)⊗x̃(ϕ,t)+∑_nr=1^N_nrh_nr(ϕ,t)⊗x̃^'(ϕ,t) +∑_nt=1^N_nt h_nt(ϕ,t)⊗x̃(ϕ,t) + ∑_ne=1^N_ne h_ne(ϕ,t)⊗x̃^'(ϕ,t) + W(ϕ,t), -T_c≤ t ≤ T_s, where ⊗ is the convolution operation, W(ϕ,t) denotes the received additive white Gaussian noise (AWGN) corresponding to the azimuthal angle ϕ, and h_i(ϕ,t) (i=nr,nt,ne,LoS) denotes the channel response of the i-th path. 
Also, x̃^'(ϕ,t) represents the primary and triple reflection signal, which is given by x̃^'(ϕ,t)=∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋ s_l,me^-j ϕ l e^j 2π f_m t, -T_c≤ t≤ T_s. Let t=kT_s/M and ϕ=2π v/N at the receiver. Thus, we have W_v,k=W(2π v/N,kT_s/M). Next, the M_c samples of CP are removed. The received sampling signal, denoted by Y_v,k, corresponding to y(ϕ,t) can be expressed as Eq. (<ref>), where X^'_n-(L_r-nr),k-(L_r-nr) =∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋ s_l,m e^-j 2 π [n-(L_r-nr)] l/Ne^j 2 π m[k-(L_r-nr)]/M; X_n-(L_t-nt),k-(L_t-nt) =∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋s_l,m e^j 2 π [n-(L_t-nt)] l/Ne^j 2 π m[k-(L_t-nt)]/M; X^'_n-(L_e-ne),k-(L_e-ne) =∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋s_l,m e^-j 2 π [n-(L_e-ne)] l/Ne^j 2 π m[k-(L_e-ne)]/M, L_r, L_t, and L_e are the maximum normalized values of τ_nr, τ_nt, and τ_ne, respectively, by the sample interval T_s/M. We denote by y_l,m the demodulated signal with respect to the (l,m)-th OAM-subcarrier block. Then, y_l,m is calculated as Eq. (<ref>), where w_l,m is given by w_l,m=1/MN∑_k=0^M-1∑_v=0^N-1W_v,ke^-j 2π vl/Ne^-j2π mk/M. Thereby, we have the following Theorem 1. Theorem 1: The channel amplitude gain, denoted by h_l,m,LoS, of the LoS path for the (l,m)-th OAM-subcarrier block is h_l,m,LoS=βλ_m N j^-l e^-j2π√(D^2+r_1^2+r_2^2)/λ_m/√(D^2+r_1^2+r_2^2) J_l(2π r_1 r_2/λ_m√(D^2+r_1^2+r_2^2)), where J_l(z)=j/2π∫_0^2π e^j(zcosψ-lψ) dψ is the l order Bessel function corresponding to the first kind and λ_m is the m-th carrier wavelength. See Appendix <ref>. Observing Eq. (<ref>), we have -r_1r_2cos(2π/Nv+2π/Nn)+2r_1d_nrcos(2π/Nn) = r_1√(r_2^2+4d_nr^2-4r_2d_nr^2cos(2π/Nv))cos(2π/Nn-θ), where θ = arctanr_2sin(2π v/N)/2d_nr-r_2cos(2π v/N). Then, the second term on the right hand of Eq. (<ref>) corresponding to the primary reflections can be calculated as Eq. (<ref>). Clearly, when m^'=m, the corresponding m-th subcarrier signal can be obtained. However, the corresponding l-th OAM-mode signal cannot be obtained due to e^-jarctan[r_2sin(2π v/N)]/ [2d_nr-r_2cos(2π v/N)]l^' and e^j[4π r_2 d_nrcos(2π v/N)] / (λ√(D^2+r_1^2+r_2^2+4d_nr^2)). The corresponding OAM signals for the secondary reflections and triple reflections also cannot be obtained. To decompose the OAM signal successfully, a compensation factor for reflection channel amplitude gains is needed. Comparing the expressions among h_vn,LoS,t, h_vn,nr,t, h_vn,nt,t, and h_vn,ne,t, we can find that the last three terms add an exponential factor based on the first term, respectively. Therefore, we set an exponential factor, denoted by h_vn,e, for the n-th array to the v-th receive array transmission to compensate the phase difference as follows: h_vn, e= e^j2π[2(-1)^μr_1d_maxcos(2π n/N)-2r_2d_maxcos(2π v/N)]/λ√(D^2+r_1^2+r_2^2+4d_max^2), where d_max is the maximum sum of distances from each specular transmit UCA center to the transmit UCA center among the all reflection paths and μ is one for primary as well as triple reflections and zero for secondary reflections. The exponential factor can be obtained by using ray tracing method. Hence, we can re-express Y_v,k as follows: Y_v,k = ∑_n=0^N-1h_vn,LoS,k X_n,k + ∑_nr=1^N_nr∑_n=0^N-1h_vn,nr,k h_vn,eX_n-(L_r-nr),k-(L_r-nr)^' + ∑_nt=1^N_nt∑_n=0^N-1h_vn,nt,k h_vn,eX_n-(L_t-nt),k-(L_t-nt) + ∑_ne=1^N_ne∑_n=0^N-1h_vn,ne,k h_vn,eX_n-(L_e-ne),k-(L_e-ne)^'+W_v,k. With the compensation, the decomposed signal of the primary reflections, secondary reflections, and triple reflections for each OAM-subcarrier block can be derived, respectively. 
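As a quick sanity check of this modulation and decomposition chain, the following minimal numpy sketch applies the 2D-IFFT generating the transmit samples X_{n,k} and the 2D-FFT demodulation over an idealized unit-gain, noiseless, delay-free channel, recovering the transmitted symbols s_{l,m} exactly; the normalization convention differs slightly from the paper (numpy places the 1/(MN) factor in the inverse transform), but the round trip is unaffected.

```python
import numpy as np

# Minimal HODM round-trip sketch over an idealized channel (unit gain, no
# delay, no noise): 2D-IFFT across OAM-modes l and subcarriers m at the
# transmitter, 2D-FFT across arrays n and time samples k at the receiver.
rng = np.random.default_rng(7)
N, M = 8, 16                                   # number of OAM-modes x subcarriers
s = (rng.choice([-1.0, 1.0], (N, M))
     + 1j * rng.choice([-1.0, 1.0], (N, M))) / np.sqrt(2)   # QPSK-like symbols s_{l,m}

X = np.fft.ifft2(s)        # transmit samples X_{n,k} fed to the N-element UCA
Y = X                      # idealized channel: received samples equal transmitted ones
s_hat = np.fft.fft2(Y)     # joint OAM-mode / subcarrier decomposition y_{l,m}

print(np.allclose(s_hat, s))   # True: the constellation is recovered exactly
```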
We denote by h_l,m,N_r, h_l,m,N_t, and h_l,m,N_e the channel amplitude gains of N_r primary reflections, N_t secondary reflections, and N_e triple reflections, respectively, for the (l,m)-th OAM-subcarrier block. Then, the Theorem 2 is obtained as follows. Theorem 2: For a given number of reflection paths, the channel amplitude gain of reflection paths, denoted by h_l,m,r, with respect to the (l,m)-th OAM-subcarrier block is presented as Eq. (<ref>). See Appendix <ref>. Therefore, y_l,m is re-expressed as follows: y_l,m = (h_l,m,LoS+h_l,m,r) s_l,m + w_l,m = h_l,m s_l,m + w_l,m, where h_l,m is the channel amplitude gain for the (l,m)-th OAM-subcarrier block in sparse multipath environments. In the following, (·)^T and (·)^-1 represent the transpose and inverse of a matrix, respectively. For the l-th OAM-mode, we denote by y_l=(y_l,0,y_l,1,⋯,y_l,M-1)^T and w_l=(w_l,0,w_l,1,⋯,w_l,M-1)^T the received signal vector and noise vector, respectively, with respect to the l-th OAM-mode. Assuming that ρΩ_l is the channel estimation error (CEE) for the l-th OAM-mode, where ρ≪ 1 represents the accuracy of channel estimation and Ω_l follows i.i.d. zero-mean Gaussian distribution <cit.>, we have ŝ_l=(H_l+ρΩ_l)^-1y_l, where ŝ_l and s_l=(s_l,0,s_l,1,⋯,s_l,M-1)^T denote the transmit signal estimation vector and the transmit signal vector, respectively, for the l-th OAM-mode. In addition, H_l=diag{h_l,0,h_l,1,⋯,h_l,M-1}. ρ=0 implies the perfect channel estimation. Using the linear part of Taylor series (<cit.>, Eq. (9)), we have ŝ_l = H_l^-1(I_M-ρΩ_lH_l)y_l = s_l+ H_l^-1w_l-ρH_l^-1Ω_ls_l-ρH_l^-1Ω_lH_l^-1w_l. The last two terms on the right hand of Eq. (<ref>) are the additional interference and noise under imperfect channel estimation. Thus, we have the noise, denoted by w_l, after zero-forcing detection as follows: w_l=H_l^-1w_l-ρH_l^-1Ω_ls_l-ρH_l^-1Ω_lH_l^-1w_l. The covariance of w_l, is derived as follows: 𝔼[w_lw_l^H]=𝔼[w_lw_l^H + ρ^2Ω_lH_l^-1w_lw_l^H(H_l^-1)^HΩ_l^H. . +ρ^2Ω_ls_ls_l^HΩ_l^H](H_l^HH_l)^-1, where (·)^H represents the conjugate transpose of a matrix. Observing Eq. (<ref>), we can find that the covariance of the noise in the present of CEE is larger than that under perfect channel estimation. The received signal-to-noise ratio (SNR), denoted by γ_l, with respect to the l-th OAM-mode can be expressed as follows: γ_l=𝔼[s_ls_l^H]H_l^HH_l/𝔼[w_lw_l^H +ρ^2Ω_ls_ls_l^HΩ_l^H+ρ^2Ω_lH_l^-1w_lw_l^H(H_l^-1)^HΩ_l^H]. The loss, denoted by Δ_l, between the perfect channel estimation and the imperfect channel estimation, for the l-th OAM-mode can be calculated as Eq. (<ref>). Clearly, with the increase of ρ, the channel SNR loss increases as shown in Eq. (<ref>). Therefore, the capacity under perfect channel estimation is higher than that under imperfect channel estimation. The maximum capacity, denoted by C, using the conventional water-filling power allocation scheme where the better channel is allocated more power and the worse channel is allocated less power is calculated as follows: C=𝔼_γ{∑_m=0^M-1∑_l=⌊-N+2/2⌋^⌊N/2⌋log_2[1+h_l,m^2/σ_l,m^2(1/μ^*-σ_l,m^2/h_l,m^2)^+]}, where 𝔼_γ(·) denotes the expectation regarding instantaneous received SNR, σ_l,m^2 is the variance of noise for the (l,m)-th OAM-subcarrier block, μ^* represents the optimal Lagrangian multiplier, and (·)^+= max{·, 0}. 
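As an aside, the water-filling allocation itself is straightforward to reproduce; the following small numpy sketch allocates power across parallel OAM-subcarrier blocks for one channel realization, with toy block gains that are not derived from the channel model above.

```python
import numpy as np

def water_filling(gains, noise_vars, total_power):
    """Classical water-filling over parallel blocks: p_i = (1/mu - sigma_i^2/|h_i|^2)^+
    with the allocated powers summing to total_power for this realization."""
    inv_snr = noise_vars / np.abs(gains) ** 2          # sigma_i^2 / |h_i|^2 per block
    order = np.argsort(inv_snr)
    inv_snr_sorted = inv_snr[order]
    power = np.zeros_like(inv_snr)
    for k in range(len(inv_snr), 0, -1):               # drop the worst blocks until feasible
        level = (total_power + inv_snr_sorted[:k].sum()) / k   # water level 1/mu*
        if level > inv_snr_sorted[k - 1]:
            power[order[:k]] = level - inv_snr_sorted[:k]
            break
    return power

# Toy usage: 16 subcarriers x 2 OAM-modes with arbitrary block gains
rng = np.random.default_rng(3)
h = rng.rayleigh(1.0, 32) * np.repeat([1.0, 0.1], 16)  # a strong mode and a weak mode
p = water_filling(h, noise_vars=np.ones(32), total_power=2.0)
capacity = np.sum(np.log2(1.0 + p * h ** 2))           # bit/s/Hz for this realization
print(round(p.sum(), 6), round(capacity, 2))
```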
To compare the capacity of HODM communications with that of conventional OFDM communications, we express the maximum capacity, denoted by C_OFDM, of the conventional OFDM communications with the water-filling power allocation scheme as follows: C_OFDM=𝔼_γ{∑_m=0^M-1log_2[1+h_m^2/σ_m^2( 1/ξ^*-σ_m^2/h_m^2)^+]}, where h_m and σ_m^2 denote the channel amplitude gain and the variance of the received noise, respectively, for the m-th subcarrier. Also, ξ^* represents the optimal Lagrangian multiplier. Comparing Eqs. (<ref>) and (<ref>), HODM communications achieve both OAM multiplexing and frequency multiplexing while OFDM communications only achieve frequency multiplexing. Thus, the capacity of the HODM scheme is higher than that of the OFDM scheme in wireless communications. § PERFORMANCE ANALYSIS In this section, numerical simulation results are presented to evaluate the performance of wireless communication with our developed HODM scheme in sparse multipath environments. Firstly, Section <ref> evaluates the channel amplitude gain of our developed HODM scheme. Secondly, Section <ref> depicts the optimal allocated power and the corresponding capacities before converging. Throughout the evaluations, we set all permittivities as 15, the first subcarrier frequency as 60 GHz, the bandwidth of each subcarrier as 5 MHz, the Rician shape parameter as 10 dB, and the average total transmit power as 2 W. In Section <ref>, the distance D is set as 3 m. §.§ Channel Amplitude Gain Figure <ref> shows the channel amplitude gains of different paths for OAM-mode 1, where we set a primary reflection path, a secondary reflection path, a triple reflection path, and N=M=8. Observing Fig. <ref>, the channel amplitude gain decreases as the distance D increases for all paths. This shows that the energy of the waves declines with increasing distance. Also, the channel amplitude gain of the reflection paths is much smaller than that of the LoS path. This result is caused by the specular reflectors, which lead to energy attenuation. In addition, the channel amplitude gain of the LoS path drops rapidly within a certain distance, which is due to the first-kind Bessel function. In the sparse multipath environments, the total number of transmission paths, denoted by L_p, is given as follows: L_p=1+N_r+N_t+N_e. Figure <ref> depicts the channel amplitude gain of OAM-mode 1 versus different numbers of OAM-modes and transmission paths, where L_p= 4, 7, and 13, respectively. Also, the number of narrowbands is selected as 8 and the number of OAM-modes is selected as 8, 16, and 32, respectively. Clearly, with the total number of OAM-modes increasing, the channel amplitude gain monotonically goes up. Channel amplitude gains are proportional to the total number of OAM-modes, which can be proved by Eqs. (<ref>), (<ref>), (<ref>), and (<ref>). Also, with the number of transmission paths increasing, the channel amplitude gains of the reflection paths increase, thus increasing the overall channel amplitude gains in wireless communications. Figure <ref> depicts the channel amplitude gains of different order OAM-modes, where the total number of OAM-modes and narrowbands is set as 8, respectively. Also, the number of paths is set as 4, 7, and 13, respectively. Obviously, the channel amplitude gain is higher with low order OAM-modes than with high order OAM-modes, as depicted in Fig. <ref>. Because the radii of the transmit UCA and receive UCA are far smaller than the distance D, the argument of the Bessel function is very small.
Thus, the value of the Bessel function is smaller for higher order OAM-modes. Hence, this result confirms that the OAM-based waves diverge faster with high order OAM-modes than with low order OAM-modes. Moreover, the channel amplitude gain decreases as the transmission distance increases. §.§ Power Allocation and Capacities Before Converging To analyze the impact of CEE on the performance of radio vortex wireless communications, we present the channel SNR loss versus the channel SNR in Fig. <ref>, where we set the accuracy of channel estimation ρ as 0.01, 0.05, 0.1, and 0.5, respectively. As shown in Fig. <ref>, the SNR loss increases as ρ increases, which can be proved by Eq. (<ref>). For example, the loss is 6.7 dB and 0.7 dB with respect to ρ=0.5 and ρ=0.01, respectively, at 10 dB channel SNR. Also, the loss increases as the channel SNR increases. Thus, extra power is needed to reach the expected channel SNR under imperfect channel estimation. In Fig. <ref>, the optimal power allocation schemes are plotted for different numbers of OAM-modes versus channel SNR before converging, where the number of paths is set as 4. Also, we set N = M = 8, 16, and 32, respectively. Observing Eq. (<ref>), we can see that the maximum capacity is obtained after the expectation operation with respect to the instantaneous received SNR. For any given channel state information, the sum of the allocated instantaneous power over OAM-modes/subcarriers is equal to the instantaneous total transmit power at the transmitter, but not necessarily to 2 watts. Thus, we can obtain the maximum instantaneous capacity of our proposed HODM scheme. Since a LoS path coexists with several reflection paths, the channels follow the Rician distribution with the average power constraint of 2 W. If the sum of the allocated power were always equal to 2 W in the whole SNR region, the obtained average capacity would not be maximum. To maximize the average capacity of our proposed HODM scheme in sparse multipath environments, the sum of the allocated power over modes/subcarriers at the transmitter is a Rician random variable. Since the channel amplitude gains with respect to high order OAM-modes are much smaller, they are allocated less power. Therefore, Fig. <ref> only shows the allocated power of OAM-modes 0 and 1. Clearly, the allocated power increases as the channel SNR increases. Moreover, the channel with OAM-mode 0 is allocated higher power than that with OAM-mode 1, because the channel amplitude gain of OAM-mode 0 is much larger than that of OAM-mode 1. Furthermore, since the channel amplitude gains of both OAM-modes 0 and 1 increase as the total number of OAM-modes increases, the optimal allocated power of OAM-mode 0 decreases in the whole SNR region and the allocated power of OAM-mode 1 increases in the low SNR region. As N increases, at a given high SNR the allocated power of OAM-mode 1 first increases because almost all the power is allocated to low order OAM-modes, and then decreases because a part of the power is allocated to other higher order OAM-modes. Figure <ref> presents the optimal power allocation schemes for different numbers of paths before converging, where the total number of OAM-modes and narrowbands is set as 16, respectively. In addition, the number of transmission paths is set as 4, 7, and 10, respectively. Observing Fig. <ref>, the increase of transmission paths leads to a reduction of the optimal allocated power. The reason is that the channel amplitude gain of all OAM-modes increases as the number of transmission paths rises.
For few paths, most of the power is allocated to the low order OAM-modes, while the higher order OAM-modes can be allocated more power when there are more paths. Hence, the high order OAM-modes can play an important role in the capacity with a large number of transmission paths before converging in wireless communications. Figure <ref> shows the maximum capacities obtained by the water-filling scheme with different numbers of narrowbands and OAM-modes in HODM communications, where L_p=4. Also, the number of narrowbands and OAM-modes is equal to 8, 16, and 32, respectively. Clearly, the achievable maximum capacities of our developed scheme monotonically go up with the growing channel SNR, as illustrated in Fig. <ref>. This is consistent with the optimal power allocation shown in Fig. <ref>. Moreover, increasing the number of OAM-modes and narrowbands increases the total number of independent and parallel transmission channels, thus increasing the maximum capacities in HODM communications. Furthermore, we can find that the maximum capacities do not increase proportionally with the number of OAM-modes due to the divergence of high order OAM-waves in radio vortex wireless communications. Thereby, converging OAM-waves to significantly increase the capacities remains a challenge for future work. In Fig. <ref>, we present the maximum capacities obtained by the conventional water-filling scheme in the HODM scheme with different numbers of paths, where the total number of narrowbands and OAM-modes is 16, respectively. The number of transmission paths is set to 1, 4, 7, and 10, respectively. One path implies that there is only a LoS path for signal transmission. Fig. <ref> illustrates that the maximum capacity goes up as the total number of transmission paths increases for a given number of narrowbands and OAM-modes. This is because the channel amplitude gains of the high order OAM-modes increase, resulting in more power assigned to these high order OAM-modes and less power assigned to the low order OAM-modes. Figure <ref> compares the capacities of our proposed HODM scheme and the conventional OFDM scheme versus the channel SNR, where the number of available subcarriers M is 16 and 32, respectively, in OFDM and HODM communications. Also, we set the number of available OAM-modes to N=16. As shown in Fig. <ref>, the capacities of both the conventional OFDM scheme and our developed HODM scheme increase as the channel SNR increases. Clearly, the capacity of our developed HODM scheme is higher than that of the conventional OFDM scheme when the number of subcarriers of the HODM scheme is the same as that of the OFDM scheme. This is because the OAM-modes bring an additional capacity increase in wireless communications. These results verify that our developed HODM scheme can be used to achieve higher capacity in wireless communications in sparse multipath environments. § CONCLUSIONS In this paper, a joint OAM multiplexing and OFDM scheme called HODM is proposed to achieve high capacity while resisting multipath interference for wireless communications in sparse multipath environments. Firstly, we built the OAM-based wireless channel model for sparse multipath transmission comprising a LoS path and several reflection paths, where the signals reflected four or more times are ignored. Secondly, we proposed to compensate the phase differences caused by the different path lengths to mitigate the inter-mode interference in HODM communications. 
Then, we introduced the conventional water-filling algorithm to maximize the capacity of our developed HODM scheme in radio vortex wireless communications. Numerical and theoretical results have been provided to verify the significant capacity increase achieved by our developed HODM scheme in sparse multipath environments. § PROOF OF THEOREM 1 The first term on the right-hand side of Eq. (<ref>), corresponding to the LoS path, can be derived as Eq. (<ref>). Thus, the channel amplitude gain of the LoS path with respect to the (l,m)-th OAM-subcarrier block can be obtained. § PROOF OF THEOREM 2 We assume that q^2=D^2+r_1^2+r_2^2. Since d_nr, d_max≪ D, and taking the phase compensation factor into account, for the nr-th primary reflection path from the n-th transmit array element to the v-th receive array element we have h_vn,e e^j2π[2r_1d_nrcos(2π n/N)+2r_2d_nrcos(2π v/N)]/λ√(D^2+r_1^2+r_2^2+4d_nr^2) = e^j2π/λ[r_2cos(2π v/N)+r_1cos(2π n/N)](2d_nr/√(q^2+ 4d_nr^2)-2d_max/√(q^2+4d_max^2)) ≈ 1. Thus, using the 2D-FFT algorithm, the demodulated signal corresponding to the second term on the right-hand side of Eq. (<ref>) is expressed as Eq. (<ref>), where (a) uses 2π n/N +2π v/N = ϕ^' + π and N →∞. Similar to the analysis of Eq. (<ref>), we can also obtain the corresponding demodulated signals of the secondary reflection paths and triple reflection paths as shown in Eqs. (<ref>) and (<ref>), respectively. Then, we can obtain the channel amplitude gain of the reflection paths for the (l,m)-th OAM-subcarrier block. Liping Liang received the B.S. degree in Electronic and Information Engineering from Jilin University, Changchun, China, in 2015. She is currently working towards the Ph.D. degree in communication and information systems at Xidian University, Xi'an, China. Her research interests focus on 5G wireless communications with emphasis on radio vortex wireless communications and anti-jamming communications. Wenchi Cheng (M'14-SM'18) received the B.S. and Ph.D. degrees in telecommunication engineering from Xidian University, Xi'an, China, in 2008 and 2013, respectively, where he is a Full Professor. He was a Visiting Scholar with the Networking and Information Systems Laboratory, Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA, from 2010 to 2011. His current research interests include B5G/6G wireless networks, emergency wireless communications, and orbital-angular-momentum based wireless communications. He has published more than 100 international journal and conference papers in the IEEE Journal on Selected Areas in Communications, IEEE magazines, IEEE Transactions, IEEE INFOCOM, GLOBECOM, and ICC, among others. He received the URSI Young Scientist Award (2019), the Young Elite Scientist Award of CAST, the Best Dissertation Award of the China Institute of Communications, the Best Paper Award at IEEE ICCC 2018, the Best Paper Award at IEEE WCSP 2019, and a Best Paper Nomination at IEEE GLOBECOM 2014. He has served or is serving as an Associate Editor for the IEEE Systems Journal, IEEE Communications Letters, and IEEE Wireless Communications Letters, the IoT Session Chair for the IEEE 5G Roadmap, the Wireless Communications Symposium Co-Chair for IEEE GLOBECOM 2020, the Publicity Chair for IEEE ICC 2019, the Next Generation Networks Symposium Chair for IEEE ICCC 2019, and the Workshop Chair for the IEEE ICC 2019/IEEE GLOBECOM 2019/INFOCOM 2020 Workshops on Intelligent Wireless Emergency Communications Networks. Wei Zhang (S'01-M'06-SM'11-F'15) received the Ph.D. 
degree in electronic engineering from the Chinese University of Hong Kong, Hong Kong, in 2005. He was a Research Fellow with the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong SAR, from 2006 to 2007. Currently, he is a Professor at the School of Electrical Engineering and Telecommunications, University of New South Wales (UNSW), Sydney, Australia. His research interests include UAV communications, mmWave communications, space information networks, and massive MIMO. He is the Editor-in-Chief of the Journal of Communications and Information Networks (JCIN). He also serves as the Chair of the IEEE Wireless Communications Technical Committee. He is a member of the Board of Governors of the IEEE Communications Society and a member of the Fellow Evaluation Committee of the IEEE Vehicular Technology Society. Hailin Zhang (M'97) received the B.S. and M.S. degrees from Northwestern Polytechnic University, Xi'an, China, in 1985 and 1988, respectively, and the Ph.D. degree from Xidian University, Xi'an, China, in 1991. In 1991, he joined the School of Telecommunications Engineering, Xidian University, where he is a Senior Professor and the Dean of the school. He is also currently the Director of the Key Laboratory in Wireless Communications sponsored by the China Ministry of Information Technology, a key member of the State Key Laboratory of Integrated Services Networks, one of the state government specially compensated scientists and engineers, a field leader in Telecommunications and Information Systems at Xidian University, and an Associate Director of the National 111 Project. Dr. Zhang's current research interests include key transmission technologies and standards for broadband wireless communications for 5G and 5G-beyond wireless access systems. He has published more than 150 papers in journals and conferences.
http://arxiv.org/abs/2407.13666v1
20240718164210
Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning
[ "Frederik Hoppe", "Claudio Mayrink Verdun", "Hannah Laus", "Felix Krahmer", "Holger Rauhut" ]
cs.LG
[ "cs.LG", "cs.IT", "eess.IV", "math.IT", "stat.AP", "stat.ML" ]
*: Equal contribution. Correspondence to . § ABSTRACT Uncertainty quantification (UQ) is a crucial but challenging task in many high-dimensional regression or learning problems to increase the confidence of a given predictor. We develop a new data-driven approach for UQ in regression that applies both to classical regression approaches such as the LASSO as well as to neural networks. One of the most notable UQ techniques is the debiased LASSO, which modifies the LASSO to allow for the construction of asymptotic confidence intervals by decomposing the estimation error into a Gaussian and an asymptotically vanishing bias component. However, in real-world problems with finite-dimensional data, the bias term is often too significant to be neglected, resulting in overly narrow confidence intervals. 
Our work rigorously addresses this issue and derives a data-driven adjustment that corrects the confidence intervals for a large class of predictors by estimating the means and variances of the bias terms from training data, exploiting high-dimensional concentration phenomena. This gives rise to non-asymptotic confidence intervals, which can help avoid overestimating uncertainty in critical applications such as MRI diagnosis. Importantly, our analysis extends beyond sparse regression to data-driven predictors like neural networks, enhancing the reliability of model-based deep learning. Our findings bridge the gap between established theory and the practical applicability of such debiased methods. § INTRODUCTION The past few years have witnessed remarkable advances in high-dimensional statistical models, inverse problems, and learning methods for solving them. In particular, we have seen a surge of new methodologies and algorithms that have revolutionized our ability to extract insights from complex, high-dimensional data <cit.>. Also, the theoretical underpinnings of the techniques in these fields have achieved tremendous success. However, the development of rigorous methods for quantifying uncertainty associated with their estimates, such as constructing confidence intervals for a given solution, has lagged behind, with much of the underlying theory remaining elusive. In high-dimensional statistics, for example, even for classical regularized estimators such as the LASSO <cit.>, it was shown that a closed-form characterization of the probability distribution of the estimator in simple terms is not possible, e.g., <cit.>. This, in turn, implies that it is very challenging to establish rigorous confidence intervals that would quantify the uncertainty of such estimated parameters. To overcome this, a series of papers <cit.> proposed and analyzed the debiased LASSO, also known as the desparsified LASSO, a procedure to fix the bias introduced by the ℓ_1 penalty in the LASSO; see <cit.> and <cit.> for a discussion on the bias induced by the ℓ_1 regularizer. The debiased estimator derived in the aforementioned works has established a principled framework for obtaining sharp confidence intervals for the LASSO, initiating a statistical inference approach with UQ guarantees for high-dimensional regression problems where the number of predictors significantly exceeds the number of observations. Recently, this estimator was also extended in several directions beyond ℓ_1-minimization which include, for example, deep unrolled algorithms <cit.> and it has been applied to fields like magnetic resonance image both with high-dimensional regression techniques as well as learning ones <cit.>; see the paragraph related works below. The idea of the debiased LASSO is that its estimation error, i.e., the difference between the debiased estimator and the ground truth, can be decomposed into a Gaussian and a remainder/bias component. It has been shown in certain cases that the ℓ_∞ norm of the remainder component vanishes with high probability, assuming an asymptotic setting, i.e., when the dimensions of the problem grow. In this case, the estimator is proven to be approximately Gaussian from which the confidence intervals are derived. However, in practice, one needs to be in a very high-dimensional regime with enough data for these assumptions to kick in. 
In many applications with a finite set of observations, the remainder term does not vanish; it can rather be substantially large, and the confidence intervals constructed solely based on the Gaussian component fail to account for the entire estimation error. Consequently, the derived confidence intervals are narrower, resulting in an overestimation of certainty. This issue is particularly problematic in applications where it is crucial to estimate the magnitude of a vector coefficient with a high degree of confidence, such as in medical imaging applications. Moreover, according to the standard theory of debiased estimators, the estimation of how small the remainer term is depends on how well one can quantify the ℓ_2 and ℓ_1 bounds for the corresponding biased estimator, e.g., the LASSO <cit.>. Although sharp oracle inequalities exist for such classical regression estimators, cf. related works, the same cannot be said about when unrolled algorithms are employed. For the latter, generalization bounds are usually not sharp or do not exist. In this paper, we tackle the challenge of constructing valid confidence intervals around debiased estimators used in high-dimensional regression. The key difficulty lies in accounting for the remainder term in the estimation error decomposition, which hinders the development of finite-sample confidence intervals. We propose a novel non-asymptotic theory that explicitly characterizes the remainder term, enabling us to construct reliable confidence intervals in the finite-sample regime. Furthermore, we extend our framework to quantify uncertainty for model-based neural networks used for solving inverse problems, which paves the way to a rigorous theory of data-driven UQ for modern deep learning techniques. We state an informal version of our main result, discussed in detail in Section <ref>. Let x^(1),, x^(l)∈ℂ^N be i.i.d. data. Let b^(i)=Ax^(i)+ε^(i) be a high-dimensional regression model with noise ε^(i)∼𝒞𝒩(0, σ^2 I_N × N). With the data, derive, for a significance level α, a confidence radius r_j(α) for a new sample's component x^(l+1)_j. Let (x̂^u)^(l+1)_j be the debiased estimator based on a (learned) high-dimensional regression estimator x̂^(i)_j. Then, it holds that ℙ( | (x̂^u)^(l+1)_j - x^(l+1)_j |≤ r_j(α) ) ≥ 1-α. Theorem <ref> has far-reaching implications that transcend the classical regularized high-dimensional regression setting. For example, it enables the establishment of rigorous confidence intervals for learning algorithms such as unrolled networks <cit.>. To our knowledge, obtaining rigorous UQ results for neural networks without relying on non-scalable Monte Carlo methods remains a challenging problem <cit.>. To address this and quantify uncertainty, our approach combines model-based prior knowledge with data-driven statistical techniques. The model-based component harnesses the Gaussian distribution of the noise to quantify the uncertainty arising from the noisy data itself. We note that the Gaussian assumption for the noise is not a limitation, and extensions to non-Gaussian distributions are also possible, as clarified by <cit.>. We make a Gaussian noise assumption here for the sake of clarity. Complementing this, the data-driven component is imperative for quantifying the uncertainty inherent in the estimator's performance. Moreover, our approach does not require any assumptions regarding the convergence or quality properties of the estimator. This flexibility enables the debiased method to apply to a wide range of estimators. 
Contributions. The key contributions in this work are threefold [ The code for our findings is available on GitHub : <https://github.com/frederikhoppe/UQ_high_dim_learning> ] * We solve the problem illustrated in Fig. <ref> by developing a non-asymptotic theory for constructing confidence intervals around the debiased LASSO estimator. Unlike existing approaches that rely on asymptotic arguments and ignore the remainder term, our finite-sample analysis explicitly accounts for the remainder, clarifying an important theoretical gap and providing rigorous guarantees without appealing to asymptotic regimes. * We establish a general framework that extends the debiasing techniques to model-based deep learning approaches for high-dimensional regression. Our results enable the principled measurement of uncertainty for estimators learned by neural networks, a capability crucial for reliable decision-making in safety-critical applications. We test our approach with state-of-the-art unrolled networks such as the It-Net <cit.>. * For real-world medical imaging tasks, we demonstrate that the remainder term in the debiased LASSO estimation error can be accurately modeled as a Gaussian distribution. Leveraging this finding, we derive Gaussian adjusted CIs that provide sharper uncertainty estimates than previous methods, enhancing the practical utility of debiased estimators in high-stakes medical domains. § BACKGROUND AND PROBLEM FORMULATION In numerous real-world applications, we encounter high-dimensional regression problems where the number of features far exceeds the number of observations. This scenario, known as high-dimensional regression, arises when we aim to estimate N features, described by x^0 ∈ℂ^N, from only m target measurements b ∈ℂ^m, where m ≪ N. Mathematically, this can be expressed as a linear model b = Ax^0 + ε, where A ∈ℂ^m × N is the measurement matrix and ε∼𝒞𝒩(0, σ^2 I_m × m) is additive Gaussian noise with variance σ^2. In the presence of sparsity, where the feature vector x^0 has only s non-zero entries (s ≪ N), a popular approach is to solve the LASSO, which gives an estimator x̂ obtained by solving the following ℓ_1-regularized optimization problem: min_x ∈ℂ^N1/2m‖ Ax - b ‖_2^2 + λ‖ x ‖_1. However, the LASSO estimator is known to exhibit a systematic bias, and its distribution is intractable, posing challenges for uncertainty quantification <cit.>. To address this limitation, debiasing techniques have been developed in recent years <cit.>. The debiased LASSO estimator, x̂^u, is defined as: x̂^u = x̂ + 1/m MA^*(b - Ax̂), where M is a correction matrix that could be chosen such that max_i,j∈{1,…,N}| (MΣ̂-I_N× N)_ij| is small. Here, Σ̂=A^*A/m. We refer to <cit.> for a more detailed description of how to choose M. Remarkably, the estimation error x̂^u - x^0 = MA^*ε/m + (MΣ̂ - I_N× N)(x^0-x̂) =: W + R can be decomposed into a Gaussian component W ∼𝒞𝒩(0, σ^2/mΣ̂) and a remainder term R that vanishes asymptotically with high probability <cit.>, assuming a Gaussian measurement matrix A. Such a result was extended to matrices associated with a bounded orthonormal system, like a subsampled Fourier matrix, allowing for extending the debiased LASSO to MRI <cit.>. The decomposition (<ref>) and the asymptotic behavior of R enable the construction of asymptotically valid CIs for the debiased LASSO estimate, providing principled UQ for high-dimensional sparse regression problems. 
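To make the debiasing step and the error decomposition concrete, the following small sketch simulates a real-valued instance (a simplification: real Gaussian A, M = I_N×N, scikit-learn's Lasso as the base estimator, and arbitrary illustrative dimensions and regularization; the paper's setting is complex-valued) and then computes x̂^u and the ratio ‖R‖_2/‖W‖_2.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, N, s, sigma = 300, 1000, 20, 1.0            # illustrative sizes, not the paper's settings
A = rng.standard_normal((m, N))                # real Gaussian design, A_ij ~ N(0, 1)
x0 = np.zeros(N)
x0[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x0 + sigma * rng.standard_normal(m)

# Biased base estimate: scikit-learn minimizes (1/(2m))||b - Ax||_2^2 + alpha*||x||_1.
x_hat = Lasso(alpha=2 * sigma * np.sqrt(np.log(N) / m),
              fit_intercept=False, max_iter=50000).fit(A, b).coef_

# Debiasing step with M = I_N: x_u = x_hat + (1/m) A^T (b - A x_hat).
x_u = x_hat + (A.T @ (b - A @ x_hat)) / m

# Error decomposition x_u - x0 = W + R with W = A^T eps / m and R = (Sigma_hat - I)(x0 - x_hat).
eps = b - A @ x0
W = (A.T @ eps) / m
R = (A.T @ A / m - np.eye(N)) @ (x0 - x_hat)
print("||R||_2 / ||W||_2 =", np.linalg.norm(R) / np.linalg.norm(W))
```

The printed ratio is the same diagnostic reported in the experiments; when it is not small, the asymptotic confidence intervals built from W alone are too narrow.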
However, in real-world applications involving finite data regimes, the remainder term can be significant, rendering the asymptotic confidence intervals imprecise or even misleading, as illustrated in Fig. <ref>. This issue is particularly pronounced in high-stakes domains like medical imaging, where reliable UQ is crucial for accurate diagnosis and treatment planning. Second, the debiasing techniques have thus far been restricted to estimators whose error is well quantifiable, leaving open the challenge of how they behave for deep learning architectures. In such cases, the behavior of the remainder term is largely unknown, precluding the direct application of existing debiasing methods and hindering the deployment of these methods in risk-sensitive applications. A prominent example for solving the LASSO problem in (<ref>) with an unrolled algorithm is the ISTA <cit.>: x^k+1 = 𝒮_λ((I_N× N - 1/μA^TA)x^k + 1/μA^Tb), k ≥ 0. Here, μ > 0 is a step-size parameter, and 𝒮_λ(x) is the soft-thresholding operator. The work <cit.> interpreted each ISTA iteration as a layer of a recurrent neural network (RNN). The Learned ISTA (LISTA) approach learns the parameters W_1^k, W_2^k, λ^k instead of using the fixed ISTA updates: x^k+1 = 𝒮_λ^k(W_2^k x^k + W_1^k b). In this formulation, LISTA unrolls K iterations into K layers, with learnable parameters (W^k, λ^k) per layer. The parameters are learned by minimizing the reconstruction error min_λ, W1/l∑_i=1^l ‖ x_i^k(λ, W, b^(i), x^(i))-x^(i)‖_2^2 on training data (x^(i),b^(i)). Unrolled neural networks like LISTA have shown promise as model-based deep learning solutions for inverse problems, leveraging domain knowledge for improved performance. Such iterative end-to-end network schemes provide state-of-the-art reconstructions for inverse problems <cit.>. Recently, the work <cit.> proposed a framework based on the debiasing step to derive confidence intervals specifically for the unrolled LISTA estimator. However, similar to the previously mentioned debiased LASSO literature, it only handles the asymptotic setting. Related Works. High-dimensional regression. High-dimensional regression and sparse recovery are by now a well-established theory, see <cit.>. In this context, several extensions of the LASSO have been proposed such as the elastic net <cit.>, the group LASSO <cit.>, the LASSO with a nuclear norm penalization <cit.>, and the Sorted L-One Penalized Estimation (SLOPE) <cit.>, which adapts the ℓ_1-norm to control the false discovery rate. In addition to convex penalty functions, concave penalties have been explored to address some limitations of the LASSO, e.g., the Smoothly Clipped Absolute Deviation (SCAD) penalty <cit.> and the Minimax Concave Penalty (MCP) <cit.>. Non-convex variants of the LASSO for ℓ_p-norm (p<1) minimization were also studied <cit.>, as well as noise-blind variants such as the square-root LASSO <cit.>. Scalable and fast algorithms for solving the LASSO and its variants include semi-smooth Newton methods <cit.> and IRLS <cit.>. LASSO theory. Several works have established oracle inequalities for the LASSO <cit.>. Another key theoretical result is the consistency of the LASSO in terms of variable selection. <cit.> and <cit.> established the consistency of the LASSO, while <cit.> analyzed the sparsity behavior of the LASSO when the design matrices satisfy the Restricted Isometry Property. Debiased estimators. 
After the first papers about the debiased LASSO <cit.>, some works have focused on improving its finite-sample performance and computational efficiency <cit.>. The size of the confidence intervals derived for the debiased LASSO has been proven to be sharp in the minimax sense <cit.>. Debiased estimators have been extended in several directions, e.g., <cit.>. Recently, <cit.> established asymptotic normality results for a debiased estimator of convex regularizers beyond the ℓ_1-norm. In the context of MR images, <cit.> explored a debiased estimator for inverse problems with a total variation regularizer. Debiased estimators have also been recently extended to unrolled estimators – see discussion in the next paragraph – in <cit.>. Algorithm unrolling and model-based deep learning for inverse problems. The idea of unfolding the iterative steps of classical algorithms into a deep neural network architecture dates back to <cit.>, which proposed the Learned ISTA (LISTA) to fast approximate the solution of sparse coding problems. Several works have extended and improved upon the original LISTA framework <cit.>. <cit.> proposed the Learned Primal-Dual algorithm, unrolling the primal-dual hybrid gradient method for tomographic reconstruction. <cit.> proposed the Deep Cascade of Convolutional Neural Networks (DC-CNN) for dynamic MRI reconstruction. <cit.> unfolded proximal gradient descent solvers to learn their parameters for 1D TV regularized problems. <cit.> introduced a general framework for algorithm unrolling. <cit.> developed MoDL, a model-based deep learning approach for MRI reconstruction that unrolls the ADMM algorithm. <cit.> proposed a proximal alternating direction network (PADNet) to unroll nonconvex optimization. See also the surveys for more information about unrolling and also the connection with physics-inspired methods <cit.>. <cit.> developed the It-Net, an unrolled proximal gradient descent scheme where the proximal operator is replaced by a U-Net. This scheme won the AAPM Challenge 2021 <cit.> whose goal was to identify the state-of-the-art in solving the CT inverse problem with data-driven techniques. A generalization of the previous paradigm is the learning to optimize framework that develops an optimization method by training, i.e., learning from its performance on sample problem <cit.>. Uncertainty Quantification. There have been a few attempts to quantify uncertainty on a pixel level for unrolled networks, e.g., <cit.>. However, such approaches are based on Bayesian networks and MC dropout <cit.>, which requires significant inference time paired with a loss of reconstruction performance since the dropout for UQ is a strong regularizer in the neural network. Unlike prior work, our contribution focuses on a scalable data-driven method that is easily implementable in the data reconstruction pipeline. § DATA-DRIVEN CONFIDENCE INTERVALS We now introduce our data-driven approach to correct the CIs. Instead of deriving asymptotic CIs from the decomposition x̂^u-x^0 = W + R, by assuming that R asymptotically vanishes, we utilize data (b^(i), x^(i))_i=1^l along with concentration techniques to estimate the size of the bias component R. We continue to leverage the prior knowledge of the Gaussian component W while extending the CIs' applicability to a broad class of estimators, including neural networks. 
Our method is summarized in Algorithm <ref>, where the data is used to estimate the radii of the CIs, and in Algorithm <ref>, which constructs the estimator around which the CIs are centered. The following main result proves the validity of our method. Let x^(1),, x^(l)∈ℂ^N be i.i.d. complex random variables representing ground truth data drawn from an unknown distribution ℚ. Suppose, that ε^(i)∼𝒞𝒩(0,σ^2 I_m× m) is noise in the high-dimensional models b^(i) = A x^(i) + ε^(i), where A∈ℂ^m× N, and independent of the x^(i)'s. Let X̂:ℂ^m→ℂ^N be a (learned) estimation function that maps the data b^(i) to x̂^(i), which is an estimate for x^(i). Set | R^(i)_j| = | e_j^T (MΣ̂-I_N× N)(x̂^(i)-x^(i))| for fixed A and M. For j=1,, N, we denote the true but unknown mean with μ_j=𝔼 [| R^(1)_j|] and the unknown variance with (σ_R^2)_j:=𝔼[(| R^(1)_j|-μ_j)^2], respectively. Let Ŝ_j = 1/l∑_i=1^l | R^(i)_j| be the unbiased sample mean estimator and (σ̂_R^2)_j = 1/l-1∑_i=1^l(| R^(i)_j| - Ŝ_j)^2 the unbiased variance estimator. Let α∈(0,1) and γ∈(0,1-1/lα). Furthermore, set the confidence regions for the sample x^(l+1)∼ℚ in the model b^(l+1) = Ax^(l+1) + ε^(l+1) as C_j(α) = { z ∈ℂ: | (x̂^u)^(l+1)_j - z|≤ r_j(α)} with radius r_j(α) = σ (MΣ̂M^*)_jj^1/2/√(m)√(log(1/γ_jα)) + c_l(α)· (σ̂_R)_j + Ŝ_j, c_l(α) := √(l^2-1/l^2(1-γ_j)α - l ). Then, it holds that ℙ(x^(l+1)_j ∈ C_j(α)) ≥ 1-α. Theorem <ref> achieves conservative confidence intervals that are proven to be valid, i.e., are proven to contain the true parameter with a probability of 1-α. Its main advantage is that there are no assumptions on the distribution ℚ (except that σ_R^2 exists), making it widely applicable. Hence, Theorem <ref> includes the worst-case distribution showing a way to quantify uncertainty even in such an ill-posed setting. Especially in medical imaging, such certainty guarantees are crucial for accurate diagnosis. The proof exploits the Gaussianity of the component W as well as an empirical version of Chebyshev's inequality, which is tight when there is no information on the underlying distribution. The detailed proof can be found in Appendix <ref>. For a thorough discussion on Theorem <ref> including practical simplifications, we refer to Appendix <ref>. More certainty comes with the price of larger confidence intervals. If there is additional information on the distribution of R, like the ability to be approximated by a Gaussian distribution, then the confidence intervals become tighter. This case, which includes relevant settings such as MRI, is discussed in Section <ref>. § CONFIDENCE INTERVALS FOR GAUSSIAN REMAINDERS Valid confidence intervals can be derived most straightforwardly when the distribution of the remainder term is known and easily characterized. In such cases, more informative distributional assumptions lead to potentially tighter confidence intervals compared to Theorem <ref>, which makes no assumptions about the remainder component. In this section, we derive non-asymptotic confidence intervals assuming the remainder term to be approximated by a Gaussian distribution. Let x̂^u∈ℂ^N be a debiased estimator for x∈ℂ^N with a remainder term R∼𝒞𝒩(0,Σ_R/m). Then, C_j(α)={ z ∈ℂ|| z-x̂^u_j|≤ r_j(α)} with radius r^G_j(α) = (σ^2(MΣ̂M^*)_jj+(Σ_R)_jj)^1/2/√(m)√(log(1/α)). is valid, i.e. ℙ( x_j ∈ C_j(α))≥ 1-α. For the proof, we refer to Appendix <ref>. In Appendix <ref>, we demonstrate empirically that the Gaussian assumption for the remainder term holds in a wide range of relevant practical settings. 
This validation enables the application of the proposed confidence intervals derived under this assumption. These confidence intervals strike a careful balance between non-asymptotic reliability, ensuring valid coverage even in finite-sample regimes, and tightness, providing informative and precise uncertainty estimates. By leveraging the Gaussian approximation, which becomes increasingly accurate in higher dimensions as illustrated in Figure <ref>, our framework offers a principled and computationally efficient approach to quantifying uncertainty in high-dimensional prediction problems. The variance of R can be estimated with the given data using, e.g., the unbiased estimator for the variance as in Theorem <ref>. § NUMERICAL EXPERIMENTS We evaluate the performance of our non-asymptotic confidence intervals through extensive numerical experiments across two settings: (i.) the classical debiased LASSO framework to contrast our non-asymptotic confidence intervals against the asymptotic ones. (ii.) the learned framework where we employ learned estimators, specifically the U-net <cit.> as well as the It-Net <cit.>, to reconstruct real-world MR images and quantify uncertainty. Our experiments demonstrate the importance of properly accounting for the remainder term in practical, non-asymptotic regimes. Each experiment follows the same structure: * Data Generation and Management: We fix the forward operator A and generate n>2 feature vectors x^(i)_i=1^n and noise vectors ε^(i)_i=1^n with ε^(i)∼𝒞𝒩(0,σ^2I_m× m). We obtain observations b^(i)_i=1^n via b^(i) = A x^(i) + ε^(i). We split the data (b^(i), x^(i))_i=1^n into an estimation dataset of size l and a test dataset of size k (l+k=n). If we learn an estimator, we further split the data into training, estimation, and test sets. * Reconstruction: Depending on the experiment, we obtain a reconstruction function X̂ in one of the following ways: for the classical LASSO setting, we use the LASSO; for the learned estimator experiment, we train a U-Net <cit.> or It-net <cit.> on the training data to serve as the reconstruction function X̂. * Estimation of Confidence Radii: We run Algorithm <ref> with A, X̂, M (that is chosen according to <cit.>), the estimation data (b^(i), x^(i))_i=1^l, and a predefined significance level α∈(0,1) to obtain radii r_j(α)_j=1^N. To construct the final confidence intervals, the radii need to be centered according to the debiased estimator. For every new measurement b, we run Algorithm <ref> to obtain tailored confidence intervals for the feature vector x corresponding to b. In addition, we compute the CI for the Gaussian adjustment based on Theorem <ref> using the estimation set to quantify the variance of R with the unbiased estimator for the variance as before. * Evaluation: We use the test dataset (b^(i), x^(i))_i=l+1^k to evaluate our adjustments. For each b^(i), we run Algorithm <ref> to obtain confidence intervals C_j^(i)(α)_j=1^N for x^(i). We estimate ℙ(x_j^(i)∈ C_j(α)) by h_j(α) = 1/k∑_i=l+1^k1_{x_j^(i)∈ C_j(α)} and average over all components h(α)=1/N∑_j=1^N h_j(α). Since performance on the support S is crucial, we define the hit rate on S as h_S^(i) = 1/|S|∑_j=1^N1_{x_j^(i)∈ C_j(α)} and average h_S(α) = 1/l∑_i=1^lh_S^(i). Note that the support may change with i. Moreover, we do the same for the CI based on the Gaussian-adjusted radii. 
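As a concrete illustration of steps 3 and 4, the sketch below computes the data-driven radii of Theorem <ref> for a fixed γ with M = I_N×N and then evaluates a hit rate on test data; the variable names, array shapes, and the choice γ = 0.5 are hypothetical placeholders rather than the released code.

```python
import numpy as np

def data_driven_radius(R_est, sigma, Sigma_hat_diag, m, alpha=0.05, gamma=0.5):
    """Radii of the data-driven theorem with M = I: Gaussian part + empirical-Chebyshev correction.
    Requires gamma < 1 - 1/(l*alpha) so that c_l is well-defined."""
    l = R_est.shape[0]                              # number of estimation samples
    S_hat = np.abs(R_est).mean(axis=0)              # componentwise sample mean of |R_j|
    sd_hat = np.abs(R_est).std(axis=0, ddof=1)      # componentwise sample std of |R_j|
    c_l = np.sqrt((l**2 - 1) / (l**2 * (1 - gamma) * alpha - l))
    r_W = sigma * np.sqrt(Sigma_hat_diag / m) * np.sqrt(np.log(1.0 / (gamma * alpha)))
    return r_W + c_l * sd_hat + S_hat

def hit_rate(x_u_test, x_test, radii):
    """Fraction of components whose true value falls inside the confidence circle."""
    return np.mean(np.abs(x_u_test - x_test) <= radii)

# Hypothetical usage: R_est has shape (l, N) with rows R^(i) = (M Sigma_hat - I)(x_hat^(i) - x^(i)).
# radii = data_driven_radius(R_est, sigma, np.real(np.diag(A.conj().T @ A / m)), m, alpha=0.05)
# print(hit_rate(x_u_new, x_new, radii))
```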
§.§ UQ for Classical Model-Based Regression We consider a setting aligned with the existing debiased LASSO literature, e.g., <cit.>, to demonstrate how our approach extends current UQ methods. The forward operator is a complex Gaussian matrix A∈ℂ^m× N with dimensions N=10000, m=0.6N, and A_ij∼𝒞𝒩(0,1). We generate n=750 features x^(i) that are s-sparse with s=0.1N by randomly selecting s distinct indices from 1,…,N and drawing the corresponding magnitudes from 𝒞𝒩(0,1). With relative noise ‖ε^(i)‖/‖ Ax^(i)‖≈ 0.2, we split the data (b^(i), x^(i))_i=1^n into l=500 estimation and k=250 test data points. For reconstruction, we solve the LASSO X̂(b):=argmin_x∈ℂ^N1/m‖ Ax-b‖_2^2 + λ‖ x‖_1 with λ = 10σ/√(m)(2+√(12log(N))), following <cit.>. With significance level α = 0.05, we run Algorithm <ref> to obtain confidence radii, choosing M=I_N× N <cit.> and exploiting the relaxation (<ref>). Averaged over the l estimation data points, the ℓ_2 and ℓ_∞ norm ratios are: ‖ R‖_2/‖ W‖_2 = 0.9993 and ‖ R‖_∞/‖ W‖_∞=1.1581. In the existing literature, the ℓ_∞ norm is typically the quantity shown to vanish for the remainder term, as it is relevant for pixel-wise confidence intervals. Here, the remainder term is of an order comparable to the Gaussian term and hence too significant to neglect when deriving confidence intervals. Evaluating on the remaining k=250 data points, the data-driven and Gaussian-adjusted averaged hit rates are h(0.05) = 1, h_S(0.05)=1 and h^G(0.05)=0.9691, h^G_S(0.05) = 0.8948, respectively. Neglecting the remainder term yields h^W(0.05) = 0.8692 and h^W_S(0.05) = 0.6783, which are substantially lower and violate the specified 0.05 significance level. Fig. <ref> presents confidence intervals of each type for one data point x^(i) and a detailed visualization of h_j(0.05), h_S^(i)(0.05), h_j^G(0.05), and (h^G_S)^(i)(0.05). Further experiments with different sparse regression settings, including subsampled Fourier matrices, are presented in Appendix <ref>. §.§ UQ for MRI Reconstruction with Neural Networks We extend the debiasing approach to model-based deep learning for MRI reconstruction using the U-Net and It-Net on single-coil knee images from the NYU fastMRI dataset [We obtained the data used for the experiments in this paper from the NYU fastMRI Initiative database (fastmri.med.nyu.edu) <cit.>. The data was only obtained from the NYU fastMRI investigators, who did not contribute any ideas, analysis, or writing to this paper. The list of the NYU fastMRI investigators can be found at fastmri.med.nyu.edu; it is subject to updates.] <cit.>. Here, the forward operator is the undersampled Fourier operator 𝒫ℱ∈ℂ^m× N with N=320× 320 and m=0.6N, where ℱ is the Fourier matrix and 𝒫 a radial mask, see Figure <ref>. The noise level σ is chosen such that the relative noise is approximately 0.1. The data is split into training (33370 slices), validation (5346 slices), estimation (1372 slices), and test (100 slices) datasets. We then train an It-Net <cit.> with 8 layers, a combination of the MS-SSIM <cit.> and ℓ_1 losses, and the Adam optimizer with learning rate 5e^-5 for 15 epochs to obtain our reconstruction function X̂. With significance level α = 0.1, we run Algorithm <ref> to construct confidence radii, choosing M=I_N× N <cit.> and exploiting the relaxation (<ref>). Averaged over the l estimation data points, we have ‖ R‖_2/‖ W‖_2 = 0.38 and ‖ R‖_∞/‖ W‖_∞= 0.49, which indicates that the remainder term is significant and cannot be neglected. 
Evaluating the test data, the averages of the data-driven adjustment hit rates are h(0.1) = 0.9999, h_S(0.1)=0.9998, and the averages of the Gaussian adjusted hit rates are h^G(0.1)=0.9752, h^G_S(0.1) = 0.98. Neglecting the remainder term, the hit rates of the asymptotic CIs are h^W(0.1) = 0.9502 and h^W_S(0.1) = 0.87. As in the sparse regression setting, they are significantly lower. Fig. <ref> presents confidence intervals based on the data-driven adjustment and the asymptotic confidence intervals for a region in one image x^(i). In addition, it contains a box plot showing the distribution of the hit rates based on the Gaussian adjustment and the asymptotic hit rates. More experiments for UQ for MRI reconstruction can be found in Appendix <ref> and Tables <ref> and <ref>. § FINAL REMARKS In this work, we proposed a data-driven uncertainty quantification method that derives non-asymptotic confidence intervals based on debiased estimators. Our approach corrects asymptotic confidence intervals by incorporating an estimate of the remainder component and has solid theoretical foundations. While the correction can be based on prior knowledge, e.g., a Gaussian distribution of the remainder term, we also derive CI based on a data-driven adjustment without further information. This data-driven nature enhances its applicability to a wide range of estimators, including model-based deep-learning techniques. We conducted experiments that confirm our theoretical findings, demonstrating that even in classical sparse regression settings, the remainder term is too significant to be neglected. Furthermore, we applied the proposed method to MRI, achieving significantly better rates on the image support. While our method corrects for the remainder term, larger remainder terms necessitate greater corrections, resulting in wider confidence intervals. Therefore, it is crucial to achieve a small remainder term to avoid excessively large confidence intervals. Additionally, the accuracy of our method depends on the quality of the estimates for the mean and variance of the remainder term, which improves with more available data. Additionally, the length of the intervals can be minimized over a larger parameter set, provided that more data is available. We leave as a future direction to study the sharpness of the proposed confidence intervals and radii for a given amount of data. Moreover, we would like to investigate how the length of the confidence intervals could be improved when estimating higher moments. We believe that our method is applicable to a wide variety of deep learning architectures, including vision transformers in MRI, e.g., <cit.>. Testing the generality of the method with state-of-the-art architectures for different problems would demonstrate its broad usefulness. § ACKNOWLEDGMENTS We gratefully acknowledge financial support with funds provided by the German Federal Ministry of Education and Research in the grant “SparseMRI3D+: Compressive Sensing und Quantifizierung von Unsicherheiten für die beschleunigte multiparametrische quantitative Magnetresonanztomografie (FZK 05M20WOA)”. apalike Supplementary material to the paper Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning. In this supplement to the paper, we present in Section <ref> a detailed discussion about aspects of the main result that are not mentioned in the main body of the paper. Moreover, Section <ref> presents the proof Theorem <ref> and Theorem <ref>. 
The former establishes data-driven confidence intervals, while the latter assumes the remainder component to be approximated by a Gaussian distribution. In Section <ref>, we confirm our theoretical findings with several numerical experiments for classical high-dimensional regression as well as model-based neural networks. In Section <ref>, we visualize the approximate Gaussian distribution of the remainder terms, demonstrating the applicability of Theorem <ref> in relevant settings. § FURTHER DISCUSSION OF MAIN RESULT Length of radius: To minimize the length of the radius, γ_j∈(0,1-1/lα) should be chosen as the minimizer of the problem min_γ_j∈(0,1-1/lα)σ (MΣ̂M^*)_jj^1/2/√(m)√(log(1/γ_jα)) + √(l^2-1/l^2(1-γ_j)α - l )· (σ̂_R)_j, In order to minimize over a large set for a given significance level α, a large number of data l is needed. For fixed estimates Ŝ_j and (σ̂_R^2)_j, more data leads to a potentially smaller confidence interval length. If we assume R_j=0, it follows that Ŝ_j = 0 and (σ̂_R^2)_j = 0. Then, γ = 1 is a valid choice, for which the function σ (MΣ̂M^*)_jj^1/2/√(m)√(log(1/γ_jα)) is well-defined and is minimized. In this case, the radius coincides with the asymptotic radius derived in <cit.> (except for that these works handle the real case) and the ones in <cit.> with M=I_N× N. In this sense, the asymptotic confidence intervals can be seen as a special case of the proposed method. The significance level α depends on γ_j and l to assure c_l( · ) to be well-defined. For a large dataset x^(1),,x^(l), i.e. if l is large, then it holds that lim_l→∞ c_l(α) = lim_l→∞√(1-1/l^2/(1-γ_j)α - 1/l) = 1/√((1-γ_j)α). Probabilistic discussion: The probability in (<ref>) is over the randomness of the noise as well as ℚ. The confidence circles C_j(α) consist of two random variables, the debiased estimator x̂^u_j and the radius r_j(α). The former depends on the random noise and potentially on training data, while the latter depends on the estimators Ŝ_j and σ̂_R, which in turn depend on both the noise and the data x^(1),, x^(l). A crucial requirement of applying the empirical version of Chebyshev's inequality <cit.> is the independence and identical distribution of the variables | R_j^(1)|,,| R_j^(l)|. Therefore, it is essential that the estimator function X̂ is independent of the data x^(1),,x^(l). To achieve this, we train the estimator function X̂ using a dataset that is independent of the data x^(1),,x^(l), used for estimating R^(1),, R^(l). However, the mean and variance of | R^(1)| and hence of | R^(i)| depend on the variance of the noise ε^(1), i.e. σ^2. Thus, different noise levels σ require a new estimation of the mean and variance of | R^(1)|. Throughout this paper, we assume the noise level to be fixed and known. The latter assumption is motivated by two factors. First, the size of the confidence intervals relies on σ. Given that the primary focus of this paper is to determine the size of the confidence intervals based on the remainder term R^(1), we seek to mitigate other influencing factors such as noise level estimation. Second, in domains like medical imaging, there is substantial knowledge about the noise level. For instance, the noise level in MRI can be directly measured from the scanner <cit.>. If the noise level is unknown, there are methods to estimate it. In the debiased LASSO literature, the most used method is the scaled LASSO <cit.>. Other methods for sparse regression, either in the LASSO context or more general for high-dimensional models, are <cit.>. 
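The one-dimensional minimization over γ_j discussed under "Length of radius" above has no closed-form solution, but it can be handled by a simple grid search over the admissible interval (0, 1-1/(lα)); the sketch below does this for a single component with purely illustrative values of σ, Σ̂_jj, m, l, and (σ̂_R)_j (none of these numbers are taken from the paper).

```python
import numpy as np

def radius_vs_gamma(gamma, alpha, sigma, Sigma_jj, m, l, sd_R):
    """Gamma-dependent part of the radius: Gaussian term + empirical-Chebyshev term."""
    c_l = np.sqrt((l**2 - 1) / (l**2 * (1 - gamma) * alpha - l))
    return sigma * np.sqrt(Sigma_jj / m) * np.sqrt(np.log(1.0 / (gamma * alpha))) + c_l * sd_R

# Assumed example numbers (purely illustrative):
alpha, sigma, Sigma_jj, m, l, sd_R = 0.05, 1.0, 1.0, 500, 1000, 0.02

gammas = np.linspace(1e-4, 1 - 1 / (l * alpha) - 1e-4, 2000)  # admissible range (0, 1 - 1/(l*alpha))
values = radius_vs_gamma(gammas, alpha, sigma, Sigma_jj, m, l, sd_R)
best = gammas[np.argmin(values)]
print(f"optimal gamma ≈ {best:.3f}, minimal gamma-dependent radius ≈ {values.min():.4f}")
```

The constant term Ŝ_j is omitted since it does not depend on γ_j; with more estimation data l, the admissible interval widens and the achievable radius shrinks, as discussed above.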
Relaxation of assumptions in practice: In practice, it is often the case, that | R^(i)_1| ,, | R^(i)_N| are identical distributed resulting in μ_1 = = μ_N and (σ_R^2)_1 = = (σ_R^2)_N. Although the proof requires independence of the | R_j^(i)|, there are cases when it might suffice to relax this assumption by estimating the mean and variance pixel-wise uniformly, i.e., Ŝ = 1/l· N∑_i=1^l∑_j=1^N R_j^(i) and σ̂_R^2 = 1/l· N -1∑_i=1^l∑_j=1^N (R_j^(i) - Ŝ)^2. In addition to saving computational resources, accuracy improves due to the higher number of samples. Furthermore, instead of solving the optimization problem (<ref>) for every j∈{1,, N}, it might be a good idea to choose γ_1==γ_N as the minimizer of min_γ_j∈(0,1-1/lα)σ∑_j=1^N(M Σ̂M^*)_jj^1/2/√(m)N√(log(1/γ_jα)) + c_l((1-γ_j)α)·1/N∑_j=1^N(σ̂_R)_j. Then, one γ can be used for computing the potentially different radii r_j(α). § PROOFS The statement x^(l+1)_j ∈ C_j(α) is equivalent to | (x̂^u)^(l+1)_j - x^(l+1)_j|≤ r_j(α). To prove (<ref>), we show that ℙ(| (x̂^u)^(l+1)_j - x^(l+1)_j|≥ r_j(α))≤α In the next step, we write the radius r(α) as the sum r(α) = r^W(α) + r^R(α). According to the decomposition (x̂^u)^(l+1)_j - x^(l+1)_j = W_j + R_j we obtain for fixed j∈{1,,N} ℙ(| (x̂^u_j)^(l+1) - x_j^(l+1)|≥ r^W_j(α) + r^R_j(α)) = ℙ(| W_j + R_j|≥ r^W_j(α) + r^R_j(α)) ≤ ℙ(| W_j| +| R_j|≥ r_j^W(α) + r_j^R(α)) ≤ℙ( | W_j|≥ r^W_j(α) ) + ℙ(| R_j|≥ r^R_j(α) ) where the last step follows from the pigeonhole principle. To estimate the first summand, we set r^W_j(α) := σ(MΣ̂M^*)_jj^1/2/√(m)√(log(1/γ_jα)). Since | W_j|∼Rice(0,σ(MΣ̂M^*)_jj^1/2/√(2m)) we obtain ℙ( | W_j|≥ r^W_j(α)) = 2m/σ^2Σ̂_jj∫_r^W_j(α)^∞ x exp( - x^2m/σ^2(MΣ̂M^*)_jj) dx = ∫_(r^W_j(α))^2m/σ^2(MΣ̂M^*)_jjexp(-u) du = exp( -(r^W_j(α))^2m/σ^2(MΣ̂M^*)_jj) = exp( -log(1/γ_jα) ) = γ_jα. For estimating the term ℙ(| R_j|≥ r^R_j(α) ), we set r^R_j(α) = c_l(α)· (σ̂_R)_j + Ŝ_j. This choice leads to ℙ(| R_j|≥ r^R_j(α) ) = ℙ(| R_j| - Ŝ_j ≥ r^R_j(α) - Ŝ_j ) ≤ℙ( || R_j| - Ŝ_j |≥ r^R_j(α) - Ŝ_j ) = ℙ( || R_j| - Ŝ_j |/(σ̂_R)_j≥r^R_j(α) - Ŝ_j/(σ̂_R)_j) = ℙ( || R_j| - Ŝ_j |/(σ̂_R)_j≥ c_l(α) ). Now, we apply an empirical version of Chebyshev's inequality <cit.>. This leads to ℙ( || R_j| - Ŝ_j |/(σ̂_R)_j≥ c_l(α) ) ≤min{ 1, 1/l+1⌊(l+1)(l^2-1 + l c_l(α)^2)/l^2 c_l(α)^2⌋} ≤min{ 1, l^2-1 + l c_l(α)^2/l^2 c_l(α)^2} = min{ 1, l^2-1 + l^2-1/l(1-γ_j)α - 1 /l(l^2-1)/l(1-γ_j)α -1} = min{ 1, 1 + 1/l(1-γ_j)α - 1 /l/l(1-γ_j)α -1} = min{ 1, (1-γ_j)α} = (1-γ_j)α, where we used in the last step, that (1-γ_j)α<α<1. To summarize, ℙ(| (x̂^u)^(l+1)_j - x^(l+1)_j|≥ r_j(α)) ≤ℙ( | W_j|≥ r_j^W(α)) + ℙ(| R_j|≥ r_j^R(α)) ≤γ_jα + (1-γ_j)α = α. Since W∼𝒞𝒩(0,σ^2/mMΣ̂M^*) and R∼𝒞𝒩(0,1/mΣ_R) the estimation error x̂^u-x^0=W+R follows again a multivariate normal distribution with zero mean and covariance matrix 1/m(σ^2MΣ̂M^* + Σ_R). By exploiting the Gaussian distribution, we obtain ℙ( | W_j + R_j| >r^G_j(α)) = 2m/σ^2(MΣ̂M^*)_jj + (Σ_R)_jj∫_r^G_j(α)^∞ x exp( - x^2m/σ^2(MΣ̂M^*)_jj + (Σ_R)_jj) dx = ∫_r^G_j(α)^2m/σ^2(MΣ̂M^*)_jj + (Σ_R)_jjexp(-u) du = exp( -r^G_j(α)^2m/σ^2(MΣ̂M^*)_jj+(Σ_R)_jj) Thus, we have ℙ(|x̂^u_j-x_j^*| >r^G_j(α))≤exp( -r^G_j(α)^2m/σ^2(MΣ̂M^*)_jj + (Σ_R)_jj), which needs to be equal to α>0. Therefore, r^G_j(α) = (σ(MΣ̂M_jj+(Σ_R)_jj)^1/2/√(m)√(log(1/α)). § FURTHER NUMERICAL EVALUATION To confirm our theoretical findings claiming that the incorporation of the bias component renders the confidence intervals more robust, we present additional numerical experiments here. 
UQ for Classical Model-Based Regression For the experiments described here, we use TFOCS <cit.>. Analogous to the experiment described in Section <ref>, we run further experiments in the classical sparse regression setting with both Gaussian and subsampled Fourier measurement matrices. The different settings, including the results, can be found in Table <ref>. The results show that the Gaussian adjustment of our proposed method significantly increases the hit rates, especially on the support, while moderately increasing the confidence interval length. Our data-driven adjustment achieves even better hit rates, but the confidence intervals are larger. In well-posed settings, such as the second column of Table <ref>, the hit rates h^W(0.05) based on asymptotic confidence intervals reach almost 95% overall; however, on the support, which contains the crucial features, the asymptotic hit rates fail. In particular, our corrections are essential in ill-posed regression problems such as the setting in the third Gaussian column. The hit rates for the asymptotic CIs and the corrected ones with Gaussian adjustment are visualized in more detail in Figure <ref>. UQ for MRI Reconstruction with Neural Networks In this section, we present more experiments for UQ for MRI reconstruction with neural networks. Our experimental settings, as well as our code for this experiment, are based on the paper and code [https://github.com/jmaces/robust-nets] by <cit.>. The dataset used for conducting the experiments is the fastMRI single-coil knee dataset; for documentation, see <cit.>. Table <ref> reports the results obtained by learning the reconstruction function X̂ using the It-Net with 8 layers, with 60 %, 40 %, and 30 % radial undersampling and for noise levels obtained by adding complex Gaussian noise with standard deviation σ=60 and σ=84, respectively. Similarly, Table <ref> shows the results obtained by the U-Net. In Figure <ref>, the asymptotic hit rates and the Gaussian adjusted ones for the 95 % confidence level are compared in a box plot for each experiment. All It-Nets and U-Nets are trained with a combination of the MS-SSIM loss <cit.> and the ℓ_1-loss and the Adam optimizer with a learning rate of 5e^-5, epsilon of 1e^-4, and weight decay parameter 1e^-5. The It-Nets were trained for 15 epochs, and the U-Nets were trained for 20 epochs, both with batch size 40. Every U-Net has 2 input and 2 output channels and 24 base channels, and encodes the image down to a size of 20 × 20 with at most 384 channels. The It-Net employs the U-Net in each layer as a residual network and has a data consistency part around each U-Net in every layer. Comparing the tables, the It-Net has, in general, better hit rates as well as a better R/W ratio than the U-Net due to its more accurate reconstruction. Further, the hit rates for all the pixels are higher than those obtained only on the support. For achieving reliable results in safety-critical applications, obtaining hit rates higher than the confidence level is crucial, especially on the support, i.e., on the non-zero pixels. Otherwise, one might achieve a certain confidence level overall but cannot trust the pixels of interest. The experiments were conducted using PyTorch 1.9 on a desktop with AMD EPYC 7F52 16-Core CPUs and NVIDIA A100 PCIe 40GB GPUs. The code for the experiments can be found in the supplementary material. The execution time of the code is around 5 hours for each It-Net, 2 hours for each U-Net, and around 30 minutes for the rest of each experiment. 
So, in total, this gives us a time of 48 hours for the MRI reconstruction experiments. The execution time for the classical model-based regression experiments takes 5 to 30 minutes each; therefore, in total, it is less than 3 hours. § DISTRIBUTION VISUALIZATION OF REMAINDER TERM In Figure <ref>, we present a series of histograms illustrating the empirical distribution of the remainder term's real part across all experimental settings in sparse regression conducted in this paper. These histograms provide evidence that the remainder term can be approximated by a Gaussian distribution, with the approximation becoming increasingly precise as the dimensionality increases. Across low-dimensional scenarios, the empirical distributions exhibit some deviations from the Gaussian form, but these discrepancies diminish as the dimensionality grows larger. In high-dimensional regimes, the empirical distributions demonstrate an exceptional degree of convergence to the Gaussian approximation. This close alignment lends strong support to the validity of the key assumption of Theorem <ref>, allowing a Gaussian adjustment to the confidence intervals. In Figure <ref> we present a series of histograms representing the empirical distribution of the remainder term's real part for the six different experimental settings, for the U-Net, conducted in this paper. Figure <ref> represents the histograms for the It-Net experiments. In most scenarios, the real part of the remainder term is Gaussian distributed with mean 0. The only exceptions are Figures <ref> and <ref>, which correspond to the U-Net experiments with 30 % undersampling.
http://arxiv.org/abs/2407.13132v1
20240718034216
LSD3K: A Benchmark for Smoke Removal from Laparoscopic Surgery Images
[ "Wenhui Chang", "Hongming Chen" ]
eess.IV
[ "eess.IV", "cs.CV" ]
LSD3K: A Benchmark for Smoke Removal from Laparoscopic Surgery Images Wenhui Chang, Hongming Chen College of Electronic Information Engineering, Shenyang Aerospace University ===================================================================== § ABSTRACT Smoke generated by surgical instruments during laparoscopic surgery can obscure the visual field, impairing surgeons' ability to perform operations accurately and safely. Thus, the smoke removal task for laparoscopic images is highly desirable. Although laparoscopic image desmoking has attracted the attention of researchers in recent years and several algorithms have emerged, the lack of publicly available high-quality benchmark datasets remains the main bottleneck hampering the progress of this task. To advance this field, we construct a new high-quality dataset for Laparoscopic Surgery image Desmoking, named LSD3K, consisting of 3,000 paired synthetic non-homogeneous smoke images. In this paper, we provide a dataset generation pipeline, which includes modeling smoke shape using Blender, collecting ground-truth images from the Cholec80 dataset, and random sampling of smoke masks. Based on the proposed benchmark, we further conducted a comprehensive evaluation of existing representative desmoking algorithms. The proposed dataset is publicly available at <https://drive.google.com/file/d/1v0U5_3S4nJpaUiP898Q0pc-MfEAtnbOq/view?usp=sharing>. § INTRODUCTION Image desmoking is a crucial research topic in minimally invasive surgery, aiming to enhance visual clarity during laparoscopic procedures by mitigating the obscuring effects of smoke generated by electrosurgical devices <cit.>. Recently, this field has received increasing attention as it addresses a persistent challenge in laparoscopy, improving surgical precision and minimizing potential risks associated with impaired visibility. Thus, the smoke removal task for laparoscopic images is highly desirable. In fact, laparoscopic image desmoking is more challenging than natural image dehazing, because the dynamic and unpredictable nature of laparoscopic smoke makes it inherently non-uniform and random compared to outdoor fog <cit.>. Revisiting the development of this field, researchers have primarily directed their focus to algorithm design, while relatively little attention has been paid to benchmark datasets. The lack of publicly available high-quality benchmark datasets is the main bottleneck hampering the progress of this task. Thus, it is urgent to construct a benchmark dataset and a baseline for addressing non-homogeneous smoke removal in laparoscopic surgery scenes. For data-driven methods, obtaining paired smoke and smoke-free images is not easily feasible, and constructing simulated paired smoke datasets is a costly and time-consuming task. Researchers often extract smoke-free images from publicly available surgical videos and then linearly overlay synthesized smoke as preprocessing for network training. In previous works concerning the synthesis of datasets for endoscopic procedures, there has not been a fully unified standard for selecting background images and synthesizing smoke. Medical datasets in particular not only consume valuable medical resources but also must meet high standards of accuracy and quantity for medical practice. Obtaining images with thousands of smoke masks of different densities is not an easy task. 
Given the requirement to meet medical practice standards, manually acquiring density masks and annotations for numerous image pairs in a real-world dataset seems impractical <cit.>. Synthetic datasets offer a straightforward and scalable alternative to manually annotating images. Given these practical challenges, there is a significantly increased demand for synthetic smoke datasets. To this end, we construct a new dataset called LSD3K and provide a dataset generation pipeline for laparoscopic image desmoking, consisting of 3,000 paired synthetic smoky images. Furthermore, based on this dataset, we conduct a comprehensive evaluation of several advanced image desmoking algorithms. These methods are assessed quantitatively and qualitatively using our new dataset. Our evaluation and analysis highlight the performance and limitations of existing methods and are intended to stimulate further research into more robust algorithms. The proposed LSD3K dataset is publicly available for research purposes. We believe that this work can provide new insights into medical image data synthesis. The rest of this paper is structured as follows. We review related work in the field of smoke removal from laparoscopic surgery images in Section <ref>. In Section <ref>, we provide a detailed description of the pipeline used to construct the dataset. In Section <ref>, we analyze the performance of existing algorithms on the benchmark. The discussion is presented in Section <ref>. Finally, concluding remarks are given in Section <ref>. § RELATED WORK To our knowledge, there have been few recent works on image-based smoke removal in laparoscopic settings <cit.>. At the same time, due to the specificity of minimally invasive surgical procedures, acquiring paired real surgical smoke datasets for deep learning is nearly impossible. The development of smoke removal algorithms is hindered by the difficulty of constructing large-scale simulated paired datasets. In this section, we focus on the generation of datasets specifically tailored for smoke removal methods applied to endoscopic images. Wang et al. <cit.> proposed an efficient variational smoke removal method for laparoscopic images. The performance of the proposed method was quantitatively and qualitatively evaluated using two publicly available real laparoscopic smoke datasets and one generated synthetic dataset. The real laparoscopic smoke datasets were obtained from the Hamlyn Centre laparoscopic/endoscopic video dataset page <cit.>. The synthetic dataset was generated by utilizing Perlin noise <cit.> to produce synthetic smoke, which was then linearly embedded into manually selected ground-truth smoke-free images. In <cit.>, the authors developed a novel generative collaborative learning approach called DesmokeGCN. The algorithm utilizes real laparoscopic images obtained from the Hamlyn Centre laparoscopic video dataset <cit.> and the Cholec80 dataset <cit.> as background images. Additionally, it employs the 3D rendering engine Blender for synthesizing non-uniform smoke. In <cit.>, Wang et al. further proposed a real-time smoke removal method based on Convolutional Neural Networks (CNNs). They manually selected 100 smoke-free images from the Hamlyn Centre laparoscopic video dataset <cit.> and used a dataset consisting of synthetic smoke images generated by Blender and Adobe Photoshop to train the network. In <cit.>, Zhou et al. proposed a new method named Dessmoke-CycleGAN.
Smoke and smoke-free images used in the experiments were captured from da Vinci surgical robot videos. Additionally, random smoke generated by Blender was linearly added to smoke-free images for training and testing purposes. Based on the above, there is an urgent need to construct a high-quality paired dataset to address non-uniform smoke removal in laparoscopic surgical scenes. § DATASET CONSTRUCTION While there are several large-scale real endoscopic surgery datasets available, they are limited by the constraints of actual surgical environments and lack diversity in smoke, rendering them unsuitable for training and testing deep learning networks. In this section, we provide a detailed overview of the synthesis process of LSD3K. §.§ Smoke Synthesis The realistic simulation of heterogeneous smoke is crucial for training and testing the developed models. Because endoscopes typically have a narrow field of view during surgical procedures, and because the smoke generated by procedures such as electrocautery and laser ablation is random and localized, the smoke is unrelated to scene depth. Traditional haze models <cit.> and Perlin noise functions <cit.> are not designed for image desmoking and cannot address the specific characteristics of smoke <cit.>. Furthermore, the dataset images generated by these methods overly simplify the distribution of smoke and lack the ability to express complex scenes. In particular, they do not consider the non-uniformity of smoke, which is a common distribution pattern of smoke during endoscopic surgery. Moreover, in laparoscopic images the light source is provided by unevenly distributed instruments, and the organ surfaces are not Lambertian surfaces <cit.>. To address this issue, we employ Blender, an open-source 3D creation software, to synthesize simulated endoscopic surgery smoke images for training. Modern rendering engines in Blender use sophisticated, physics-based built-in models, providing realistic and diverse smoke shapes and densities <cit.>. This effectively addresses the non-uniformity of smoke, and the advantages of such an approach are evident. Here, we provide a detailed description of the synthesis process. Smoke I_smoke can be defined as: I_smoke(x,y)=Blender(I_rand,D_rand,P_rand) where I_rand denotes smoke intensity, D_rand represents smoke density, and P_rand signifies the starting position of smoke. The Intensity I_rand stands for the amount of smoke-like solid particles transferred at a certain degree, the Density D_rand indicates the non-uniform diffusion of smoke-like solid particles within a specific volume, and the Position P_rand represents the general starting position of smoke within the image area. As the rendered graphics are color images, the smoke mask I_mask is derived from the brightness of the R, G, B channels of the rendered smoke I_smoke, which can be defined as: I_mask(x,y)=0.3*I_smoke(x,y)^R + 0.59*I_smoke(x,y)^G + 0.11*I_smoke(x,y)^B. By overlaying smoke of a given density, intensity, and position on a smoke-free image, a smoke image can be obtained: I_smoked-image(x,y)=I_smoke-free(x,y)+I_mask(x,y). The randomness in the rendering process helps avoid overfitting of the network and allows for the generation of a sufficient number of synthetic smoke images for training. These images incorporate smoke masks with various positions and smoke levels added using a 3D graphics engine.
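To make the compositing step in the equations above concrete, the following minimal NumPy sketch overlays a rendered smoke frame onto a smoke-free frame using the same luminance weights and additive model. The function name composite_smoke, the clipping to [0, 1], and the random arrays standing in for a clean background frame and a Blender render are illustrative assumptions for this sketch only, not part of the released pipeline.

import numpy as np

def composite_smoke(clean_rgb, smoke_rgb):
    # clean_rgb, smoke_rgb: float arrays in [0, 1] with shape (H, W, 3).
    # Luminance-style mask from the rendered smoke, I_mask = 0.3*R + 0.59*G + 0.11*B.
    r, g, b = smoke_rgb[..., 0], smoke_rgb[..., 1], smoke_rgb[..., 2]
    mask = 0.3 * r + 0.59 * g + 0.11 * b
    # Additive overlay, I_smoked = I_smoke_free + I_mask, clipped to the valid range.
    return np.clip(clean_rgb + mask[..., None], 0.0, 1.0)

# Toy usage with random data standing in for a clean frame and a Blender smoke render.
rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(480, 480, 3))
smoke = rng.uniform(0.0, 0.3, size=(480, 480, 3))
pair = (clean, composite_smoke(clean, smoke))   # (ground truth, degraded input)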
With the aid of a powerful rendering engine, we are able to synthesize an unlimited number of realistic images simulating surgical smoke for network training. The smoke density is graded from 0 to 6, with 0 defined as smoke-free and 6 representing the maximum smoke density in the generated smoke images. Figure <ref> provides a detailed illustration of the dataset generation process and the distribution of smoke density levels. §.§ Dataset Statistics To prepare the synthetic data, we obtain clear background images from the publicly available dataset C80 <cit.>, which comprises images from the Cholec80 dataset <cit.>. Cholec80 consists of 80 cholecystectomy videos performed by 13 surgeons. Among these, we utilize the variance of the Laplacian <cit.> for image selection, followed by a second round of manual inspection to ensure the absence of surgical smoke in the images and thus reliable ground truth. Finally, we collect 660 clear, smoke-free source images. Subsequently, we linearly add synthesized random smoke at six different densities (opacity levels). After confirmation, smoke of various densities, intensities, and positions is added, resulting in a diversified endoscopic surgery smoke dataset. In the end, we generate 3,000 pairs of images. We randomly select 200 pairs of synthesized smoke images for testing the network and 2,800 pairs for training. Additionally, to ease validation of the effectiveness of networks trained for smoke removal, we include 50 real endoscopic surgery smoke images in the test set. These real images are sourced from the publicly available dataset CholecT50 <cit.>. The resulting dataset is referred to as LSD3K. Furthermore, for experimental convenience, the resolution of all synthesized images in LSD3K is uniformly cropped to 480 × 480 pixels. In Figure <ref>, we provide a detailed presentation of the dataset synthesis process. §.§ User Study Next, we conduct a human subjective survey to assess the quality of the synthesized smoke images. We randomly select 20 synthesized smoke images from the LSD3K dataset as one group of samples, while the other group of 20 smoke images is sourced from the real endoscopic surgery dataset (CholecT50 <cit.>). We recruit 20 participants, including 10 volunteers from the medical surgery field and 10 volunteers from non-medical sectors. Each participant is shown 10 smoke images randomly selected from the two groups (with an equal number of images from each group). Then, employing a 5-point Likert scale (i.e., strongly agree, agree, borderline, disagree, strongly disagree), all participants are asked to assess the perceived realism of each image. In total, we receive 200 ratings for the two sets of samples. Despite the relatively small number of evaluators, we observe a strong consensus and minimal inter-rater differences in the ratings for the same paired comparisons, indicating that the ratings are reliable. To make the differences in ratings between the two sets of samples more intuitive, a radar chart of the rating levels is depicted in Figure <ref>. Although LSD3K still falls short of real smoke images in terms of "strongly agree" ratings, the overall rating distributions of the two groups nearly overlap. This suggests that LSD3K's visual realism approaches that of real smoke images. §.§ Application Surgical instrument segmentation is a crucial task that can significantly impact the outcomes of medical procedures <cit.>.
To investigate the necessity of smoke removal for downstream vision-based surgical applications, we further evaluate unprocessed smoke images. For surgical instrument tracking, we apply a popular instrument tracking network model (ResNet50 <cit.>) to assess the influence of smoke on visual images during the surgical process. Figure <ref> illustrates the visualization of instrument detection results for three pairs of synthetic smoke images in LSD3K. It can be observed that all smoke-containing images exhibit varying degrees of interference with detection accuracy, which is particularly evident in instances of dense smoke during surgical procedures. Detection accuracy of surgical instruments on smoke-free background images is relatively high, with good attention to the instruments in the visualized images. However, the generated smoke degrades semantic information, resulting in more areas of misclassification. Thus, the visualization results make it evident that smoke generated during endoscopic surgery can impact normal medical procedures, highlighting the necessity of a synthetic endoscopic smoke dataset. § ALGORITHM BENCHMARKING In this section, based on the newly proposed benchmark, we evaluate six representative algorithms: DCP<cit.>, AOD-Net<cit.>, GridDehazeNet<cit.>, Restormer<cit.>, Dehamer<cit.> and DehazeFormer<cit.>. To ensure a fair comparison, we utilize the officially released code of these methods. Each method is retrained for the LSD3K benchmark on servers equipped with NVIDIA RTX 4090 GPUs. §.§ Quantitative Evaluation Table <ref> presents the quantitative performance of the various algorithms on the LSD3K dataset. To assess the quality of the desmoked images, three quantitative evaluation metrics are used: PSNR, SSIM, and LPIPS<cit.>. From the results in Table <ref>, it is evident that GridDehazeNet<cit.> achieves the best quantitative smoke removal performance. However, its model complexity is relatively high compared to traditional CNN methods. Among the transformer-based desmoking algorithms, DehazeFormer<cit.> achieves the highest scores. To comprehensively balance the performance and efficiency of different algorithms, future research can delve deeper into strategies that maintain high performance while reducing model complexity, so as to meet the resource constraints of medical equipment. §.§ Qualitative Evaluation Figure <ref> illustrates the visual comparison of different baseline algorithms on our proposed benchmark. It is evident from the figure that the traditional image processing algorithm DCP<cit.> has limited effectiveness and leads to visual distortions. AOD-Net<cit.>, which is based on the atmospheric scattering model and treats haze as a uniform medium, also shows suboptimal desmoking results. In contrast, GridDehazeNet<cit.> and DehazeFormer<cit.> demonstrate the best visual performance, effectively removing haze while minimizing pixel distortion. § DISCUSSION For decades, with the success of deep learning algorithms, the research community in image processing and computer vision has been addressing general image dehazing and smoke removal tasks, ranging from recovering clear outdoor scenes affected by weather conditions to restoring surgical scenes. However, several challenges persist. In this task, collecting paired data is difficult, if not impractical.
In Section <ref>, we summarized that past researchers typically extracted smoke-free images from publicly available surgical videos and then linearly superimposed synthesized smoke as a preprocessing step for network training. However, these datasets suffer from domain gaps between synthetic smoke and real-world smoke, especially in some dense smoke images. To address current real-world challenges, there is a significantly increased demand for high-quality synthetic endoscopic smoke datasets. The novelty of this work lies in bringing smoke removal in surgical images into the realm of real-world applications, which holds greater practical significance. Training networks with synthesized smoke datasets addresses the deficiency of training data for medical applications and bridges the significant gap between simulation and reality. For instance, LSD3K draws backgrounds from various laparoscopic and endoscopic videos, exhibiting diverse image colors and tones. Smoke is rendered by a 3D rendering engine using random intensities, densities, textures, and positions. This addresses the challenging issue of deep learning's reliance on labor-intensive manual annotation of ground-truth training data, particularly for medical datasets where domain expertise is crucial for annotation. Additionally, LSD3K holds many potential applications in surgical human-machine interaction. § CONCLUSION In this paper, we have proposed a new high-quality dataset for smoke removal from laparoscopic surgery images. We provide a detailed overview of the synthesis process for the LSD3K dataset, including modeling smoke shape using Blender, collecting ground-truth images from the Cholec80 dataset, random sampling of smoke masks, etc. Based on the proposed dataset, we provide new insights into medical image data synthesis and call on researchers to further focus on this field and propose more robust algorithms.
http://arxiv.org/abs/2407.12221v1
20240717001320
Estimating invertible processes in Hilbert spaces, with applications to functional ARMA processes
[ "Sebastian Kühnert", "Gregory Rice", "Alexander Aue" ]
math.ST
[ "math.ST", "stat.TH", "47B38, 60G10, 62F12" ]
1] Sebastian Kühnert (sebastian.kuehnert@ruhr-uni-bochum.de) 2] Gregory Rice (grice@uwaterloo.ca) 3] Alexander Aue (aaue@ucdavis.edu) [1]Fakultät für Mathematik, Ruhr-Universität Bochum, Bochum, DE [2]Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, CN [3]Department of Statistics, University of California, Davis, Davis, US Estimating invertible processes in Hilbert spaces, with applications to functional ARMA processes July 16, 2024 ================================================================================================= § ABSTRACT Invertible processes naturally arise in many aspects of functional time series analysis, and consistent estimation of the infinite dimensional operators that define them is of interest. Asymptotic upper bounds for the estimation error of such operators for processes in the Hilbert space L^2[0,1] have been considered in recent years. This article adds to the theory in this area in several ways. We derive consistent estimates for the operators defining an invertible representation of a stationary process in a general separable Hilbert space under mild conditions that hold for many classes of functional time series. Moreover, based on these results, we derive consistency results with explicit rates for related operator estimates for Hilbert space-valued causal linear processes, as well as functional MA, AR and ARMA processes. MSC 2020 subject classifications: 47B38, 60G10, 62F12 Keywords: ARMA; functional time series; linear processes; invertible processes § INTRODUCTION Over the past three decades, the field of functional data analysis (fDA) has experienced substantial growth, likely driven by increasing interest in examining high-dimensional data that arises from continuous observations across various domains, such as time, space, and frequency; see <cit.> and <cit.> for text-book reviews of fDA. In many cases, functional data are collected over time. For instance, one might gather continuous observations of electricity prices in a specific region and transform them into daily electricity price curves, see <cit.>. These collections of functional data objects are commonly known as functional time series (fTS), and there has been progress in developing methods for their analysis and modeling in recent years. For an overview of fTS analysis (fTSA), we refer to <cit.> and <cit.>. A typical and general setting for fTSA is to assume that the data attain their values in a separable, infinite dimensional Hilbert space ℋ, which we consider throughout this paper. As is the case with scalar time series, the most often used models in fTSA are linear models. These include ℋ-valued autoregressive (AR) processes, often described as functional autoregressive (fAR) processes, and, to a lesser extent in the literature, ℋ-valued moving average (fMA) and fARMA processes. Under regularity conditions, such processes typically admit stationary solutions that may be represented as ℋ-valued linear and invertible processes. For testing stationarity of general fTS, we refer the reader to <cit.>, <cit.> and <cit.>, and the monograph <cit.> remains an excellent reference for linear processes in function spaces. Such processes also arise in the study of many non-linear fTS processes.
For instance, under regularity conditions, the solutions for the volatility process in ℋ-valued versions of the functional (generalized) autoregressive conditionally heteroscedastic models (f(G)ARCH), see <cit.>, <cit.> and <cit.>, can be represented as (weak) ℋ-valued linear and invertible processes; see e.g. <cit.>, where the processes attain values in the separable Hilbert space ℋ=L^2[0,1] of square Lebesgue-integrable functions with domain [0,1]. When it comes to estimating the infinite dimensional parameters defining such processes, early references such as <cit.> put forward consistent estimators of the operators defining fAR processes. Limit theorems for ℋ-valued linear processes can for instance be found in <cit.>, <cit.> and <cit.>, and in Banach spaces in <cit.>. Further, <cit.> and <cit.> approached the estimation of the operators in the linear and inverted representation pointwise, without stating explicit estimation rates. In <cit.>, the complete sequences of operators in the linear and inverted representation were estimated for invertible linear processes originating from fGARCH models with values in the specific Hilbert space L^2[0,1]. For fAR operators, <cit.> provided explicit rates under quite mild conditions, and <cit.> derived both exponential bounds and convergence rates based on sieve estimates. The MA and ARMA case, though, has received substantially less attention in the literature. However, <cit.> discussed estimating the fMA(1) operator under the quite strict condition that it commutes with the error process' covariance operator, and <cit.> recently derived consistent estimators of the operators defining an ℋ-valued ARMA process based on a two-step approach involving sequentially estimating the AR and MA components, and employing dimension reduction using an increasing number of principal components. The best achievable rate of estimation under fairly general conditions for the operator defining an fAR process of order one is obtained in <cit.>, who establish central limit theorems for such estimators. This article formulates consistency results with quantifiable rates for the operators defining a general, ℋ-valued linear and invertible process based on functional Yule-Walker equations and a Tychonoff-regularized estimator, without assuming that the processes stem from specific models. These results are established under a Sobolev type condition and the well-known and practical weak dependence concept for fTSA called L^p-m-approximability, introduced in <cit.>. These results are then applied to obtain consistency results for the operators defining fAR, fMA and fARMA processes for arbitrary AR and MA orders. The rest of the paper is organized as follows. Section <ref> contains our notation and basic definitions and features in functional time series analysis. Section <ref> establishes our operator estimates and asymptotic results in the infinite-dimensional setting. In Section <ref>, we demonstrate applications of these results to obtain consistent estimators of the operators in fARMA models. Section <ref> summarizes the paper, and the proofs of all technical and auxiliary finite-dimensional results are collected in Appendices <ref>–<ref>. Throughout this article, all asymptotic statements are meant for the sample size N→∞, unless otherwise noted. § PRELIMINARIES §.§ Notation The additive identity of linear spaces is denoted by 0, and the identity map by 𝕀. On Cartesian product spaces V^n, with n∈ℕ, scalar multiplication and addition are defined component-wise.
W let (ℬ, ·) denote a Banach space, and (ℋ, ⟨·, ·⟩), (ℋ_⋆, ⟨·, ·⟩_⋆) be Hilbert spaces with their respective norms and inner products. Then, ℬ^n is a Banach space and ℋ^n a Hilbert space, if they are endowed with the norm x^2 = ∑_i=1^nx_i^2 and the inner product ⟨ x ,y⟩ = ∑_i=1^n⟨ x_i,y_i⟩, respectively, where x (x_1, …, x_n)^⊤, y (y_1, …, y_n)^⊤. By ℒ_ℋ, ℋ_⋆, 𝒮_ℋ, ℋ_⋆⊊ℒ_ℋ, ℋ_⋆, and 𝒩_ℋ, ℋ_⋆⊊𝒮_ℋ, ℋ_⋆ we denote the spaces of linear, bounded; Hilbert-Schmidt (H-S); and nuclear (trace-class) operators Aℋ→ℋ_⋆, respectively, with operator norm ·_ℒ; H-S inner product ⟨·,·⟩_𝒮 and H-S norm ·_𝒮; and nuclear norm ·_𝒩, respectively, with 𝒯_ℋ𝒯_ℋ, ℋ for 𝒯∈{ℒ,𝒮,𝒩}. For a given complete orthonormal system (CONS) (a_j)_j∈ℕ of ℋ, ∐^a_n_a_mℋ→span{a_m, a_m+1, …, a_n}⊂ℋ, with n>m, denotes the projection operator onto the closed linear subspace spanned by a_m, …, a_n. All random variables are defined on a common probability space (Ω, 𝔄,ℙ). We write X_n = o_(c_n) and X_n=O_(c_n) (for n→∞) for X_n/c_n converging in probability and being stochastically bounded, respectively, for some sequence (c_n)_n∈ℕ⊂(0,∞). For p ∈ [1,∞), L^p_ℋ = L^p_ℋ(Ω, 𝔄,ℙ) is the space of (classes of) random variables X ∈ℋ with X^p < ∞, (X_k)_k∈ℤ is an L^p_ℋ-process if (X_k)_k∈ℤ⊂ L^p_ℋ, and centered if (X_k) = 0 for all k, with expectation understood as a Bochner-integral. §.§ Stationarity and lagged (cross-)covariance operators Let (ℋ, ⟨·,·⟩), (ℋ_⋆, ⟨·,·⟩_⋆) be separable Hilbert spaces with their respective inner products. The cross-covariance operator of X∈ L^2_ℋ and Y ∈ L^2_ℋ_⋆ is defined by 𝒞_X,Y[X-(X)]⊗[Y-(Y)], where x⊗ y⟨ x, ·⟩ y denotes the tensorial product of x∈ℋ, y∈ℋ_⋆, and the covariance operator of X by 𝒞_X = 𝒞_X,X. If X^2< ∞, 𝒞_X∈𝒩_ℋ is positive semi-definite and self-adjoint with 𝒞_X_𝒩 = X-(X)^2. Further, 𝒞_X,Y∈𝒩_ℋ,ℋ_⋆, for the adjoint holds 𝒞^∗_X,Y =𝒞_Y,X∈𝒩_ℋ_⋆,ℋ, and 𝒞_X,Y_𝒩≤X-(X)Y-(Y)_⋆. A process X=(X_k)_k∈ℤ⊂ℋ is (strictly) stationary if (X_t_1, …, X_t_n)d= (X_t_1 + h, …, X_t_n +h) for all h, t_1, …, t_n∈ℤ, n∈ℕ, and weakly stationary if it is an L^2_ℋ-process with (X_k) = μ for all k for some μ∈ℋ, and if 𝒞_X_k, X_ℓ = 𝒞_X_k+h, X_ℓ+h for all h,k,ℓ∈ℤ. We call (X_k)_k∈ℤ a white noise (WN) if (X_k)_k∈ℤ is a centered, weakly stationary process with X_k^2>0 for all k, and 𝒞_X_k, X_ℓ = 0 for k≠ l. Provided the process X⊂ L^2_ℋ is weakly stationary, the lag-h-covariance operators of X are defined by 𝒞^h_X𝒞_X_0,X_h, for h∈ℤ, where (𝒞^h_X)^∗ = 𝒞^-h_X. We call 𝒞_X𝒞^0_X the covariance operator of X. If two processes X=(X_k)_k∈ℤ⊂ L^2_ℋ, Y=(Y_k)_k∈ℤ⊂ L^2_ℋ' satisfy 𝒞_X_k, Y_ℓ = 𝒞_X_k+h, Y_ℓ+h for all h,k,ℓ, the lag-h-cross-covariance operators of X, Y are defined by 𝒞^h_X,Y𝒞_X_0,Y_h for h∈ℤ, where (𝒞^h_X,Y)^∗=𝒞^-h_Y,X. Further, we write 𝒟_X,Y𝒞^1_X,Y. § ESTIMATION OF THE OPERATORS IN ℋ-VALUED LINEAR AND INVERTIBLE PROCESSES This section demonstrates an estimation procedure for operators of invertible processes X =(X_k)_k∈ℤ⊂ℋ and derives asymptotic results. The use of these results for estimating the operators of invertible, causal linear processes is also discussed. Throughout we let ε = (ε_k)_k∈ℤ⊂ℋ be a stationary and ergodic ℋ-valued WN. §.§ Estimation in the inverted representation We call a centered process X =(X_k)_k∈ℤ⊂ℋ invertible if it is stationary and satisfies X_k = ∑^∞_j=1ψ_j(X_k-j) + ε_k, k∈ℤ, where (ψ_j)_j∈ℕ⊂ℒ_ℋ is a sequence of linear operators. Stationary processes satisfying (<ref>) arise in many different situations. Examples include invertible linear processes, which are discussed at length in Chapter 7 of <cit.>. 
This class of time series obviously also includes fAR processes, but also fMA and fARMA processes under natural conditions; see <cit.> and <cit.>. Moreover, estimating the fAR(MA) parameters derived from invertible processes is also beneficial when dealing with f(G)ARCH processes, as point-wise squared (point-wise) f(G)ARCH models are fAR(MA) models, see <cit.> and <cit.>. X⊂ L^2_ℋ is a centered, invertible process with representation (<ref>), and ε⊂ L^4_ℋ. The covariance operator 𝒞_εℋ→ℋ is injective. Injectivity of the covariance operator 𝒞_ε holds if and only if im(𝒞_ε) = ℋ, see <cit.>. It is also equivalent to the condition that there is no affine, closed, and proper subspace U⊊ℋ with (ε_0 ∈ U)=1. The goal of this section is to define, based on a sample X_1, …,X_N from X, consistent estimators of the operators (ψ_j)_j∈ℕ in (<ref>). The estimators we propose are based on estimates for lagged (cross-) covariance operators of processes X^[L]_k (X_k, X_k-1, …, X_k-L+1)^⊤ , k∈ℤ, taking value in the Cartesian product Hilbert space ℋ^L. We assume that the lag parameter L=L_N∈ℕ grows as a function of the sample size. We define the empirical version 𝒞̂_X^[L] of the covariance operator 𝒞_X^[L] through 𝒞̂_X^[L]1/N-L+1∑_k=L^N X^[L]_k⊗ X^[L]_k, L ≤ N. We also define 𝒟̂_X^[L], X, the empirical version of 𝒟_X^[L], X = 𝒞^1_X^[L], X∈𝒩_ℋ^L,ℋ, by 𝒟̂_X^[L], X1/N-L∑_k=L^N-1 X^[L]_k⊗ X_k+1, L<N. For all L, 𝒞̂_X^[L] and 𝒟̂_X^[L], X are unbiased estimators for 𝒞_X^[L] and 𝒟_X^[L], X, respectively. To derive consistency results for estimates of (ψ_j)_j ∈ℕ in (<ref>), we assume that these operator estimators have consistency rates for a growing L=L_N as follows. For some sequence L=L_N→∞ with L=o(N^1/3) holds 𝒞̂_X^[L] - 𝒞_X^[L]_𝒮 = O_(L^3/2N^-1/2), 𝒟̂_X^[L], X - 𝒟_X^[L], X_𝒮 = O_(LN^-1/2). (a) Assumption <ref> holds for centered, L^4_ℋ-m-approximable processes, see <cit.>, where the well-known concept L^p_ℋ-m-approximability was introduced by <cit.>. For p≥ 1, a process (X_k)_k∈ℤ⊂ L^p_ℋ is called L^p_ℋ-m-approximable if for each k it holds X_k = f(ϵ_k, ϵ_k-1, …) for some i.i.d. process (ϵ_k)_k∈ℤ⊂ℋ and a measurable function fℋ^∞→ℋ (i.e. (X_k)_k is causal (w.r.t. (ϵ_k)_k)) such that ∑_m=1^∞ ν_p(X_m- X^(m)_m) < ∞, where ν_p(·)(·^p)^1/p, and where X^(m)_k f(ϵ_k, ϵ_k-1, …, ϵ_k-m+1, ϵ^(m)_k-m, ϵ^(m)_k-m-1, …) for all k,m, with (ϵ^(m)_j)_j being independent copies of (ϵ_j)_j for all m. We would like to point out that Assumption <ref> can also be satisfied for π-dependent processes, where π-dependence is a recently introduced weak dependence concept for metric space-valued processes by <cit.>, as L^p-m-approximibility implies π-dependence. (b) Other centered processes that satisfy Assumption <ref> are those that fulfill the rather strict summability condition in <cit.>, and if they are i.i.d. with finite fourth moments, see, e.g. <cit.> for covariance operator estimation of i.i.d. processes. (c) It is worthwhile to note that Assumption <ref> can even be satisfied for not necessarily causal processes, for instance linear processes (LP) in ℋ, that is X_k = ∑_ℓ=-∞^∞ϕ_ℓ(ε_k-ℓ), with ∑^∞_ℓ=-∞ϕ_ℓ_ℒ < ∞. This is the case if the LP's operators fulfill the stronger condition ∑^∞_k=1∑^∞_ℓ=k[ϕ_ℓ_ℒ+ ϕ_-ℓ_ℒ] < ∞. Although Proposition 2.1 in <cit.> (for p=4) is stated for an i.i.d. error process (ε_j)_j⊂ L^4_ℋ, in fact, it even holds for our error process (ε_j)_j⊂ L^4_ℋ, which is only a strictly stationary, ergodic WN. 
Moreover, under Assumptions <ref>, <ref>, and N-L+1/L^2 𝒞̂_X^[L] - 𝒞_X^[L]^2_𝒮 ≤τ_X(0), L≤N, N-L/L 𝒟̂_X^[L], X - 𝒟_X^[L], X^2_𝒮 ≤τ_X(1), L < N, where 0 ≤τ_X(h) < ∞ is independent of the Cartesian power L, and depends only on h, the 4th moment of X, and the L^4-m-approximation errors through τ_X(h) (1+2h) ν^4_4(X_1) + 4 ν^3_4(X_1)∑_k>hν_4(X_k - X^(k)_k), h ∈{0,1}. We also make use of estimates of functional principal components. Before we do, we make several useful observations. The covariance operator of an LP can be described in terms of the linear operators in (<ref>) and error covariance operator 𝒞_ε through 𝒞_X = ∑^∞_i=0 ϕ_i𝒞_εϕ^∗_i. The following are then immediate consequences of Assumptions <ref>–<ref>. Now, let (λ_j, c_j)_j∈ℕ=(λ_j(L), c_j(L))_j∈ℕ, and (λ̂_j, ĉ_j)_j∈ℕ=(λ̂_j(L), ĉ_j(L))_j∈ℕ be the sequence of eigenvalue/eigenfunction pairs of the covariance operators 𝒞_X^[L], and 𝒞̂_X^[L]∈𝒩_ℋ^L, respectively. According to <cit.>, Lemma 4.2, it holds that sup_j∈ℕ|λ̂_j - λ_j| ≤𝒞̂_X^[L] - 𝒞_X^[L]_ℒ. Suitable estimators for the eigenfunctions c_j, which are unambiguous except for their sign, are ĉ'_j ⟨ĉ_j, c_j⟩ĉ_j, with (x) 1_[0, ∞)(x) - 1_(-∞, 0)(x), x ∈ℝ. As covariance operators are positive semi-definite, injectivity of 𝒞_X^[L]∈𝒩_ℋ^L after Proposition <ref> is equivalent to strict positivity. Hence, the eigenvalues satisfy λ_1 ≥λ_2 ≥⋯ > 0. If the eigenspaces associated to each eigenvalues λ_j are one-dimensional, according to <cit.>, Lemma 4.3, it holds that ĉ'_j - c_j≤2√(2)/α_j 𝒞̂_X^[L] - 𝒞_X^[L]_ℒ, where α_1λ_1 - λ_2, and α_jmin(λ_j-1 - λ_j,λ_j - λ_j+1) for j>1. Moreover, for all k∈ℕ it holds with Λ_k sup_1≤ j ≤ k(λ_j - λ_j+1)^-1 that sup_1≤ j ≤ kĉ'_j - c_j≤ 2√(2) Λ_k𝒞̂_X^[L] - 𝒞_X^[L]_ℒ. We note Λ_k = (λ_k - λ_k+1)^-1 if there is a convex function κℝ→ℝ with κ(j) = λ_j for all j. We remind the reader that below we will be taking L=L_N with L_N →∞, and Λ_k = Λ_k,N. The estimates for the operators in the inverted representation are derived based on the approximate Yule-Walker equation 𝒟_X^[L], X = Ψ_L𝒞_X^[L] + ∑_ℓ>L ψ_ℓ𝒞^1-ℓ_X^[L], X, where the process X^[L] = (X^[L]_k)_k∈ℤ⊂ℋ^L is defined in (<ref>), and the operator Ψ_L∈ℒ_ℋ^L,ℋ is given by Ψ_L (ψ_1 ⋯ ψ_L). Thereby, ∑_ℓ>Lψ_ℓ𝒞^1-ℓ_X^[L], X exists for all fixed L∈ℕ, as ∑_ℓ≥ 1ψ_ℓ𝒞^1-ℓ_X^[L], X converges in the operator norm topology under Assumption <ref>. Based on a sample X_1, …, X_N from X, we focus on estimating Ψ_L. This can be achieved by using Tychonoff regularization, see, e.g., <cit.>, with an estimator Ψ̂_L = Ψ̂_L(K,N) of the form Ψ̂_L𝒟̂_X^[L], X𝒞̂^†_X^[L]∐^ĉ_K_ĉ_1 = 𝒟̂_X^[L], X(𝒞̂_X^[L] + θ_N𝕀)^-1∐^ĉ_K_ĉ_1, where 𝕀ℋ^L →ℋ^L is the identity map, 𝒞̂_X^[L] and 𝒟̂_X^[L], X are the estimates in (<ref>)–(<ref>), K=K_N∈ℕ, (θ_N)_N ∈ℕ⊂(0,∞) are tuning parameter sequences satisfying K_N →∞, θ_N→ 0, and ĉ_1, …, ĉ_K are the eigenfunctions associated to the first K largest eigenvalues λ̂_1≥⋯≥λ̂_K of 𝒞̂_X^[L]. We are able to establish the consistency of Ψ̂_L under the following assumptions. The eigenvalues λ_j=λ_j(L) of 𝒞_X^[L] satisfy λ_j≠λ_j+1 for each L=L_N and j. Each of the operators in (<ref>) satisfy that ψ_i∈𝒮_ℋ, and ∑^∞_i=1ψ_i^2_𝒮 <∞. Assumption <ref>, which implies that each eigenspace of the covariance operator 𝒞_X^[L] is one-dimensional, seems to be strict. For example when X is an independent and identically distributed process the eigenvalues of 𝒞_X^[L]=diag(𝒞_X, …, 𝒞_X) have multiplicity at least L. 
However, for non-degenerate linear processes in which at least some of the operators ϕ_j in (<ref>) are non-zero, this condition is more plausible. This condition is weaker than many related works in which a specific form of the eigenvalue gaps f(j) = λ_j - λ_j-1 are prescribed; see for instance the equation 3.2 in <cit.>. The method we use to quantify the error between Ψ̂_L and Ψ_L entails first decomposing Ψ_L into a suitable finite dimensional representation and a remainder term, and then evaluating the error of Ψ̂_L in estimating each component. In order to quantify the estimation of the remainder term, we employ a Sobolev condition akin to that used in <cit.> to derive precise asymptotic results for the estimation errors of densities in the deconvolution setting. To state our Sobolev type condition, we let (μ_j, d_j)_j∈ℕ denote the eigenvalue/eigenfunction pairs corrsponding to the covariance operator 𝒞_X∈𝒩_ℋ. Then, (c_i⊗ d_j)_i,j∈ℕ = (c_i(L)⊗ d_j)_i,j∈ℕ is a CONS of 𝒮_ℋ^L, ℋ, so, if Ψ_L∈𝒮_ℋ^L,ℋ, which holds if the ψ_i are H-S operators, Ψ_L=∑_i∑_j⟨Ψ_L,c_i⊗ d_j⟩_𝒮(c_i⊗ d_j). For some β >0, the limit S_Ψ(β) lim_N →∞S_Ψ_L_N(β) < ∞ exists, where for each L=L_N∈ℕ holds, S_Ψ_L(β) = S_Ψ_L(β, (c_i⊗ d_j)_i,j) = ∑_i=1^∞∑_j=1^∞ ⟨Ψ_L, c_i⊗ d_j⟩^2_𝒮 (1 + i^2β + j^2β) < ∞. Let Assumptions <ref>–<ref> hold, and let K=K_N→∞ and θ_N→ 0 be further sequences with θ_N = o(λ_K) and also K^1+βλ^-1_KΛ_KL^2 = O(N^1/2), ∑_ℓ>Lψ_ℓ_ℒ = o(KΛ_KL^3/2N^-1/2), θ_N = o(KΛ_KL^2N^-1/2), where β>0 is defined in Assumption <ref>. Then, we have Ψ̂_L - Ψ_L_𝒮 = O_(K^-β). (a) Theorem <ref> improves on comparable results in the literature in several ways. In contrast to <cit.>, our result is stated for general, invertible linear processes without assuming that they originate from a specific model, and where the processes are allowed to attain values in arbitrary, separable Hilbert space rather than only in L^2[0,1]. Most notably, we analyze the complete operators and state explicit asymptotic upper bounds, rather than either focusing on finite-dimensional projections which is commonly done in operator estimation, see <cit.> and <cit.>, or omitting stating the rate when completely observing the operators, see <cit.> for fGARCH models, and <cit.> and <cit.> for invertible linear processes. Further, we measured the estimation errors even with the H-S- instead of the weaker operator norm as in <cit.> and <cit.>. The Sobolev type condition we utilized in our estimation, is, as far as we are aware, in the context of operator estimation only used for fGARCH models in <cit.>. Moreover, we refrained from a simplifying convexity assumption for the eigenvalues. (b) Although Λ_K is defined via the eigenvalues λ_K, we can only simplify the sequences' requirements in Theorem <ref> (and in other results) in certain situations. For example, if for each L there are convex functions κ_Lℝ→ℝ so that for the eigenvalues holds κ_L(j) = λ_j(L) for all j, and if λ_j(L) decays for j→∞ exponentially fast for each L, and if λ_j(L)=c_Le^-j for some constant c_L>0. Then, we obtain the explicit form Λ_K = (λ_K - λ_K+1)^-1=d_Lλ^-1_K, with d_L(c_L(1-e^-1))^-1>0. (c) By following the lines of the proof of Theorem <ref> (and all other asymptotic results), it seems like our result could have been even formulated in sense of L^2-convergence instead of stochastic boundedness with a certain rate, but our inequalities contain reciprocals of eigenvalue estimates which only converge in the stochastic sense. 
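Before turning to the causal representation, it may help to see the estimator Ψ̂_L of this subsection in a directly computable form. The sketch below works with curves observed on a common grid of d points, represents operators as plain matrices (so each ψ̂_j becomes a d × d block), and uses an ordinary eigendecomposition for the projection onto ĉ_1, …, ĉ_K. The flat grid discretisation (no quadrature weights or basis smoothing) and the function name are simplifying assumptions made for this illustration, not a description of how the estimator would be implemented in practice.

import numpy as np

def yule_walker_tychonoff(X, L, K, theta):
    """Tychonoff-regularised Yule-Walker estimate of Psi_L = (psi_1 ... psi_L).

    X     : (N, d) array; row k-1 holds the curve X_k evaluated on d grid points.
    L, K  : lag length L_N and number of retained eigendirections K_N.
    theta : regularisation parameter theta_N > 0.
    Returns a (d, L*d) matrix whose j-th (d x d) block discretises psi_hat_j.
    """
    N, d = X.shape
    # Stacked lag vectors X^[L]_k = (X_k, X_{k-1}, ..., X_{k-L+1}), k = L, ..., N.
    lagged = np.hstack([X[L - 1 - j: N - j] for j in range(L)])        # (N-L+1, L*d)
    C_hat = lagged.T @ lagged / lagged.shape[0]                        # empirical covariance of X^[L]
    D_hat = X[L:].T @ lagged[:-1] / (N - L)                            # empirical lag-1 cross-covariance
    # Projection onto the span of the K leading empirical eigenfunctions of C_hat.
    eigval, eigvec = np.linalg.eigh(C_hat)                             # eigenvalues in ascending order
    V_K = eigvec[:, ::-1][:, :K]
    proj = V_K @ V_K.T
    # Psi_hat_L = D_hat (C_hat + theta * I)^{-1} Proj, as in the definition above.
    return D_hat @ np.linalg.solve(C_hat + theta * np.eye(L * d), proj)

The d × d blocks of the returned matrix serve as the estimates ψ̂_1, …, ψ̂_L, and the arguments L, K and theta play the roles of the tuning sequences L_N, K_N and θ_N appearing in Theorem <ref>.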
§.§ Estimating the operators in the causal linear process representation We now suppose X=(X_k)_k∈ℤ⊂ℋ follows a causal LP, see (<ref>) for general LP's, so X_k = ε_k + ∑^∞_j=1 ϕ_j(ε_k-j), k∈ℤ, where (ϕ_j)_j∈ℕ⊂ℒ_ℋ are linear operators with ∑^∞_j=1ϕ_j_ℒ < ∞. Such a LP is invertible if it may be expressed as in (<ref>). The monograph <cit.> was devoted to this class of processes in its entirety. For general existence theorems of LPs and their invertibility, see Section 7 of this book. We now take up the estimation of the operators ϕ_i in (<ref>) of an invertible LP. Estimating these operators are of use in deriving prediction and confidence sets when applying ℋ-valued LP models. Relating the estimates from an inverted representation of X to its causal linear representation can be done using the following proposition. Let Assumptions <ref>–<ref> hold. Then (a) The covariance operator 𝒞_Xℋ→ℋ is injective. (b) The covariance operators 𝒞_X^[L]ℋ^L→ℋ^L are injective for all L. (c) If ∑^∞_ℓ=0A_ℓ(ε_i-ℓ) = ∑^∞_ℓ=0B_ℓ(ε_i-ℓ) for some i ∈ℕ, then, A_ℓ = B_ℓ for all ℓ∈ℕ. (d) If X is a causal linear process, and if ∑^∞_ℓ=0A_ℓ(X_i-ℓ) = ∑^∞_ℓ=0B_ℓ(X_i-ℓ) for some i ∈ℕ, then A_ℓ = B_ℓ for all ℓ∈{0,1,2, }. For an invertible LP X in Assumption <ref> such that X_k = ε_k + ∑^∞_i=1ψ_i(X_k-i), X_k-i=ε_k-i + ∑^∞_j=1ϕ_j(ε_k-i-j), for all k,i ∈ℤ, we have with ϕ_0𝕀 that X_k = ε_k + ∑^∞_i=1 [∑^i_j=1ψ_jϕ_i-j](ε_k-i), k∈ℤ. This representation, the linear representation (<ref>), Assumption <ref> and Proposition <ref> (c) yield ϕ_i = ∑^i_j=1ψ_jϕ_i-j, i ∈ℕ. Hence, with ψ̂_j =Ψ̂^(j)_L denoting the j'th component of the estimator Ψ̂_L in (<ref>) for Ψ_L=(ψ_1, …, ψ_L), a reasonable estimate for ϕ_i is ϕ̂_i = ϕ̂_i(L,K), defined iteratively with ϕ̂_0𝕀, and ϕ̂_i∑^i_j=1 ψ̂_jϕ̂_i-j, i ∈ℕ. By assuming that the operators satisfy a Sobolev condition, we also obtain the following consistency result for the complete operators in the linear representation. Under the assumptions of Theorem <ref> holds ϕ̂_i - ϕ_i_𝒮 = O_(K^-β), i∈ℕ. The novelties of Proposition <ref> are similar to those in Theorem <ref>, see Remark <ref> (a). Here, too, in the linear representation, the processes can attain their values in a general, separable Hilbert space, they are not constrained to stem from a certain type of time series, and we estimated the complete operators with stating an explicit asymptotic upper bound for the estimation errors in the H-S norm. Such a result is to the best of our knowledge new. However, consistent estimates for the operators in the linear representation, measured in the operator norm, and without convergence rate, can be found in <cit.> and <cit.>. The novelties for the estimation errors for the operators in the linear representation formulated in Propositions <ref>–<ref> are similarly as mentioned in Remark <ref> (a) for the operators in the inverted representation. The results are formulated for general separable Hilbert spaces based on a milder weak dependence condition. Furthermore, we formulated a statement for the finite-dimensional projections of the operators in the linear representation with explicit convergence rates, and for the finite-dimensional projections and the complete operators, we deduced precise constants being asymptotically not exceeded with probability 1. 
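As an implementation note, the recursion defining ϕ̂_i translates directly into a few lines of code. The sketch below reuses the d × d matrix discretisation of the previous sketch; treating ψ̂_j with j > L as zero once the recursion runs beyond the available blocks is a simplifying assumption of this illustration.

import numpy as np

def causal_operators(psi_blocks, n_terms):
    # psi_blocks[j-1] is the (d x d) matrix discretising psi_hat_j; blocks beyond the
    # available L are treated as zero. Returns [phi_hat_1, ..., phi_hat_n_terms] computed
    # via phi_hat_i = sum_{j=1}^{i} psi_hat_j phi_hat_{i-j}, with phi_hat_0 the identity.
    d = psi_blocks[0].shape[0]
    phi = [np.eye(d)]
    for i in range(1, n_terms + 1):
        acc = np.zeros((d, d))
        for j in range(1, min(i, len(psi_blocks)) + 1):
            acc += psi_blocks[j - 1] @ phi[i - j]
        phi.append(acc)
    return phi[1:]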
§ CONSEQUENCES FOR FARMA PROCESSES The asymptotic results for the operator estimators in the linear and in the invertible representations in Section <ref> can be applied to estimate the parameters of many linear functional time series processes with stationary and WN innovations, including fAR, fMA, and fARMA processes. Such processes are thoroughly discussed in <cit.>, where the focus is mainly on fAR processes. <cit.> establishes general sufficient conditions under which fARMA processes in Banach spaces have causal and invertible linear process representations as in equations (<ref>) and (<ref>). Further, <cit.> discusses prediction methods for fARMA(p,q) processes, and <cit.> derived consistent operator estimates for fMA(1) and fARMA(1,1) processes based on the Hannan–Rissanen method, and making using of functional principal component analysis. Herein, by making use of operator estimates in both the linear and inverted representations of such processes, we deduce consistent estimates for the operators defining fAR, fMA and fARMA processes, with explicit asymptotic upper bounds. §.§ fAR processes We start by demonstrating the implications of our general results to fAR processes. Formally, a centered process (X_k)_k∈ℤ⊂ℋ is a functional autoregressive process with order p∈ℕ (fAR(p)) if X_k= ε_k + ∑^p_i=1 α_i(X_k-i) a.s., k ∈ℤ, where (ε_k)_k∈ℤ⊂ℋ is a WN, and where α_i∈ℒ_ℋ are operators with α_p≠ 0. With A_p = (α_1 ⋯ α_p) ∈ ℒ_ℋ^p,ℋ, and Ψ_p in (<ref>), we have A_p = Ψ_p, and the exact Yule-Walker equation 𝒞^1_X^[p], X = A_p𝒞_X^[p]. Then, Â_p Ψ̂_p, with Ψ̂_p = Ψ̂_p,K being the estimator in (<ref>) for L=p, and some K∈ℕ, and with α̂_i Â^(i)_p,K, i ∈{1, …,p} being its components, we obtain the following corollary of Theorem <ref>. Let (X_k)_k ∈ℤ⊂ℋ be the fAR(p) process in (<ref>), and suppose the conditions of Theorem <ref> are satisfied. Then, we have α̂_i - α_i_𝒮 = O_(K^-β), i=1, …, p. To the best of our knowledge, the result in Corollary <ref> which states explicit asymptotic rates for the estimation errors for the operators of fAR processes in general, separable Hilbert space for any fAR order, and where the errors are only required to be stationary, ergodic white noises instead of i.i.d., is new. Explicit rates for fAR(1) processes with i.i.d. errors can also be found in <cit.>, where their results require a specific relationship between the lag-1-cross covariance to the covariance operator. Further, <cit.> states consistency results for the fAR(1) operator projected on a finite-dimensional sup-space, and also the complete operator without stating an explicit rate. §.§ fMA processes Formally, a centered process (X_k)_k∈ℤ⊂ℋ is a functional moving-average process with order q∈ℕ (fMA(q)) if X_k= ε_k + ∑^q_j=1 β_j(ε_k-j) a.s., k ∈ℤ, where (ε_k)_k∈ℤ⊂ℋ is a WN, and where β_j∈ℒ_ℋ are operators with β_q≠ 0. By putting β̂_j = ϕ̂_j, with ϕ̂_j in (<ref>), we get the following result. Let (X_k)_k∈ℤ⊂ℋ be the fMA(q) process in (<ref>) with q∈ℕ, and suppose the conditions of Theorem <ref> are satisfied. Then, β̂_j - β_j_𝒮 = O_(K^-β), j=1, …, q. That the asymptotic rate of the estimation errors for the fMA operators β_j is identical to the rate for the operators ϕ_j in the linear representation, is, because ϕ_j are estimated for each j in the same way, independently whether the series is infinite or not. 
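In implementation terms, the two corollaries above amount to reading off blocks of quantities that have already been computed: for fAR(p) the blocks of Ψ̂_p, and for fMA(q) the first q operators returned by the recursion. A brief illustration, reusing the hypothetical helpers yule_walker_tychonoff and causal_operators from the earlier sketches on toy data (all tuning values are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 30))     # 500 toy curves on a 30-point grid
d, p, q = 30, 2, 3

# fAR(p): with L = p, the d x d blocks of Psi_hat are the alpha_hat_i.
Psi_p = yule_walker_tychonoff(X, L=p, K=15, theta=1e-2)
alpha_hat = [Psi_p[:, i * d:(i + 1) * d] for i in range(p)]

# fMA(q): beta_hat_j = phi_hat_j for j <= q, obtained from the inverted representation.
Psi_L = yule_walker_tychonoff(X, L=20, K=15, theta=1e-2)
psi_blocks = [Psi_L[:, i * d:(i + 1) * d] for i in range(20)]
beta_hat = causal_operators(psi_blocks, n_terms=q)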
As far as we are aware, explicit asymptotic rates for the estimation errors for the operators of fMA processes in general, separable Hilbert spaces for any order as formulated in Corollary <ref> have not been stated in the literature so far. Nonetheless, <cit.> already estimated the fMA(1) operator under the very limiting condition that the fMA operator and the error processes' covariance operator commute. Moreover, for a simulation study for fMA(1) processes, see <cit.>. §.§ fARMA processes We finally turn to the estimation of fARMA processes. A centered process (X_k)_k∈ℤ⊂ℋ is called a functional autoregressive moving-average process with orders p,q∈ℕ (fARMA(p, q)) if X_k= ε_k + ∑^p_i=1 α_i(X_k-i)+∑^q_j=1 β_j(ε_k-j) a.s., k ∈ℤ, where (ε_k)_k∈ℤ⊂ℋ is a WN, and α_i, β_j ∈ℒ_ℋ are operators with α_p≠ 0, β_q ≠ 0. We combine the fARMA(p,q) representation of the process (X_k)_k∈ℤ in (<ref>) and its inverted representation (<ref>). We note that, as with multivariate ARMA processes, in general the operators α_i and β_j are not identifiable. Sufficient conditions for the identifiability of fARMA operators are put forward in equation (10) and Propositions 1-3 of <cit.>. <cit.> establishes conditions under which such processes may be represented using (<ref>) and (<ref>). We assume below that the fARMA models that we consider are identifiable and admit solutions that are ISLPs. §.§.§ fARMA(1,1) processes To describe the main idea, we first discuss estimation for fARMA(1,1) processes of the form X_k= ε_k + α_1(X_k-1)+ β_1(ε_k-1) a.s., k ∈ℤ. From this identity, Proposition <ref>(c,d) and ε_k-1 = X_k-1 - ∑^∞_j=1ψ_j(X_k-1-j) it follows with ψ_0=𝕀, ψ_i = α_1 + β_1, if i=1, -β_1ψ_i-1, if i > 1. Although we may use the estimator Ψ̂_L in (<ref>) to asymptotically estimate all operators ψ_i consistently, we cannot immediately obtain consistent estimates for the ARMA(1,1) operators, as ψ_1∈𝒮_ℋ does not have a bounded inverse. However, with ψ̂_i = Ψ̂^(i)_L being the i'th component of the estimator Ψ̂_L in (<ref>), equation (<ref>) suggests the following estimators α̂_1 = α̂_1(L,K,M) for α_1 and β̂_1 = β̂_1(L,K,M) for β_1 α̂_1ψ̂_1 - β̂_1, β̂_1 - ψ̂_2ψ̂^†_1 ∐^f̂_M_f̂_1 = - ψ̂_2ψ̂^∗_1(ψ̂_1ψ̂^∗_1 + γ_N𝕀)^-1∐^f̂_M_f̂_1 , where (K_N)_N, (L_N)_N, (M_N)_N ⊂ℕ, (γ_N)_N ∈ℕ⊂(0,∞) are tuning parameter sequences such that min{K_N, L_N, M_N}→∞, and γ_N → 0 as N→∞, f̂_1, …, f̂_M are the eigenfunctions associated to the eigenvalues ρ̂_1≥⋯≥ρ̂_M≥ 0 of ψ̂_1ψ̂^∗_1ℋ→ℋ, and f_1, …, f_M are the eigenfunctions associated to the eigenvalues ρ_1≥⋯≥ρ_M≥ 0 of ψ_1ψ^∗_1ℋ→ℋ. These estimators are consistent under the following assumptions. α_1, β_1∈𝒮_ℋ are H-S operators. The image of the operator ψ_1ℋ→ℋ lies dense. The eigenvalues of ψ_1ψ^∗_1ℋ→ℋ satisfy ρ_j≠ρ_j+1 for all j∈ℕ. The following condition, and the estimation results for the operators in the invertible representation also enable us to derive consistency results for the complete ARMA(1,1) operators. For some γ >0 holds ∑_i=1^∞∑_j=1^∞ ⟨β_1, f_i⊗ f_j⟩^2_𝒮(1 + i^2γ + j^2γ) < ∞. Let the assumptions of Theorem <ref>, Assumptions <ref>–<ref>, and M^1+γρ^-1_MP_M = O(K^β) and γ_N = o(ρ_MM^-γ) hold. Then, the fARMA(1,1) operators satisfy max{α̂_1 - α_1_𝒮, β̂_1 - β_1_𝒮} = O_(M^-γ). (a) Rather than using a Sobolev condition and Tychonoff regularization in Theorem <ref> twice, <cit.> derived consistent estimates for the fARMA(1,1) operators based on the Hannan–Rissanen method and multiple other technical conditions. 
(b) Assumptions <ref>, <ref>–<ref> are of the same type as the assumptions for the operators in the inverted representation in Theorem <ref>. The additional listed Assumption <ref> is needed so that identifiability of β_1 we estimate based on the identity ψ_2 = -β_1ψ_1 in (<ref>) is guaranteed. §.§.§ fARMA(p,q) processes The procedure for centered fARMA(p,q) processes (X_k)_k∈ℤ⊂ℋ in (<ref>) for arbitrary p,q is similar as for fARMA(1,1) processes. Similar to the transformations that led to (<ref>), we obtain for any p and q, with α_i = β_j = 0 for i>p, j>q, ∑^∞_i=1 ψ_i(X_k-i) = ∑^max(p,q)_i=1[α_i+β_i - ∑^i-1_j=1 β_jψ_i-j](X_k-i) - ∑^∞_i=max(p,q)+1[∑^q_j=1 β_jψ_i-j](X_k-i). Comparing the left and right hand sides of the above and applying Proposition <ref> (d), we see that ψ_i = α_i+β_i - ∑^i-1_j=1 β_jψ_i-j, if 1≤ i ≤max(p,q), -∑^q_j=1β_jψ_i-j, if i > max(p,q). We proceed to obtain estimates for β_i by using (<ref>) for i>max(p,q), and then subsequently obtaining estimates for the α_i. Since the right hand side of (<ref>) involves sums of compositions of the β_j with ψ_i-j for i>max(p,q), we cannot immediately retrieve estimates for β_j by applying Tychonoff regularized versions of the estimates ψ_i-j. For i=p+q, with B_q (β_1 ⋯ β_q) ∈ ℒ_ℋ^q,ℋ, Ψ'_[i] = Ψ'_[i](p,q) (ψ_p+q+i-1 ⋯ ψ_p+i) ∈ ℒ_ℋ^q,ℋ, i ≥ 0, the identity (<ref>) becomes ψ_p+q = -B_q Ψ'^ ⊤_[0]. Identifiability of B_q, and thus of β_1, …, β_q, are given if the image of Ψ'^ ⊤_[0]∈ ℒ_ℋ, ℋ^q lies dense. Identifiability of B_q can be established using the relationship Ψ'_[q] = -B_q ∏ , where ∏=∏(p,q)∈ℒ_ℋ^q is the operator-valued matrix defined by ∏ [ Ψ'_[q-1]; Ψ'_[q-2]; ⋮; Ψ'_[0]; ] = [ ψ_p+2q-2 ψ_p+2q-3 ⋯ ψ_p+q-1; ψ_p+2q-3 ψ_p+2q-4 ⋯ ψ_p+q-2; ⋮ ⋮ ⋯ ⋮; ψ_p+q-1 ψ_p+q-2 ⋯ ψ_p; ]. For Ψ'_[i] and ∏=∏(p,q), we use the estimates Ψ̂'_[i] = Ψ̂'_[i](L,K) and ∏̂ = ∏̂(p,q;L,K), respectively, defined by Ψ̂'_[i] (ψ̂_p+q+i-1 ⋯ ψ̂_p+i) ∈ ℒ_ℋ^q,ℋ, i ≥ 0, ∏̂ [ Ψ̂'_[q-1]; Ψ̂'_[q-2]; ⋮; Ψ̂'_[0]; ] = [ ψ̂_p+2q-2 ψ̂_p+2q-3 ⋯ ψ̂_p+q-1; ψ̂_p+2q-3 ψ̂_p+2q-4 ⋯ ψ̂_p+q-2; ⋮ ⋮ ⋯ ⋮; ψ̂_p+q-1 ψ̂_p+q-2 ⋯ ψ̂_p; ], where ψ̂_i =Ψ̂^(i)_L,K denotes the ith component of Ψ̂_L in (<ref>). Due to (<ref>), the operator-valued vectors Ψ'_[i], Ψ̂'_[i] and matrices ∏, ∏̂ are under Assumption <ref> elements of 𝒮_ℋ^q,ℋ and 𝒮_ℋ^q, respectively, thus ∏∏^∗, ∏̂ ∏̂^∗∈𝒩_ℋ^q are nuclear. As a result of (<ref>) and (<ref>), the estimators α̂_i=α̂_i(L,K,M) and β̂_j=β̂_j(L,K,M) for α_1 and β_1, respectively, are α̂_i ψ̂_i + B̂_[i]Ψ̂”'^ ⊤_[i], 1≤ i≤ p, β̂_j B̂^(j)_q, if 1≤ j ≤ q, 0, if j > q. In these definitions, B̂_q = B̂_q(L,K,M) stands for the estimator for B_q defined by B̂_q - Ψ̂'_[q] ∏̂^†∐^ĥ_M_ĥ_1 = - Ψ̂'_[q] ∏̂^∗(∏̂∏̂^∗ + γ_N𝕀)^-1∐^ĥ_M_ĥ_1 , where (K_N)_N, (L_N)_N, (M_N)_N ⊂ℕ, (γ_N)_N ∈ℕ⊂(0,∞) are sequences with min{K_N, L_N, M_N}→∞ and γ_N → 0. Further, ĥ_1, …, ĥ_M and h_1, …, h_M are the eigenfunctions associated to the eigenvalues ζ̂_1≥⋯≥ζ̂_M≥ 0 and ζ_1≥⋯≥ζ_M≥ 0 of ∏̂∏̂^∗ℋ^q →ℋ^q and ∏∏^∗ℋ^q →ℋ^q, respectively, and (h_i⊗ d_j)_i,j is a CONS of 𝒮_ℋ^q, ℋ, where d_j are the eigenfunctions of 𝒞_X. Further, we define B̂_[i]=B̂_[i](q;L,K), B_[i]=B_[i](q), Ψ̂”'_[i] = Ψ̂”'_[i](p,q;L,K) , Ψ”'_[i] = Ψ”'_[i](p,q)∈𝒮_ℋ^max(i,q),ℋ for any i,q by B̂_[i] B̂_i, if 1≤ i < q, B̂_q, if i ≥ q, and B_[i] B_i, if 1≤ i < q, B_q, if i ≥ q, Ψ̂”'_[i] Ψ̂”_[0], if 1≤ i ≤ q, Ψ̂”_[i-q], if i > q, and Ψ”'_[q] Ψ”_[0], if 1≤ i ≤ q, Ψ”_[i-q], if i > q, with Ψ̂”_[i] and Ψ̂”_[i] being identical to Ψ̂'_[i] in (<ref>) and Ψ'_[i] in (<ref>) for all i, respectively, except for a reversed sign in the first component. 
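Since the block notation above is somewhat heavy, a small sketch may clarify how the pieces fit together numerically. It assembles the discretised Π̂ and Ψ̂'_[q] from the ψ̂_i blocks (for instance those produced by the earlier Yule–Walker sketch), forms B̂_q through the regularised inversion, and then recovers the AR part by directly rearranging the recursion ψ_i = α_i + β_i - ∑_{j<i}β_jψ_{i-j}. This rearrangement is a simplification for illustration and not literally the estimator α̂_i via B̂_[i] and Ψ̂”'_[i] defined above; likewise the helper name farma_from_psi and the flat grid discretisation of operators as d × d matrices are assumptions of this sketch.

import numpy as np

def farma_from_psi(psi, p, q, M, gamma):
    # psi[i-1] is the (d x d) matrix discretising psi_hat_i; psi_1, ..., psi_{p+2q-1} are needed.
    # Returns (alpha_hats, beta_hats) as lists of (d x d) matrices.
    d = psi[0].shape[0]

    def blk(i):                      # 1-based access to psi_hat_i
        return psi[i - 1]

    # Pi has (r, c) block psi_{p+2q-r-c}; Psi'_[q] = (psi_{p+2q-1}, ..., psi_{p+q}).
    Pi = np.block([[blk(p + 2 * q - r - c) for c in range(1, q + 1)] for r in range(1, q + 1)])
    Psi_q = np.hstack([blk(p + 2 * q - c) for c in range(1, q + 1)])

    # B_hat_q = -Psi'_[q] Pi^* (Pi Pi^* + gamma I)^{-1}, projected on the M leading eigendirections.
    G = Pi @ Pi.T
    eigval, eigvec = np.linalg.eigh(G)
    V_M = eigvec[:, ::-1][:, :M]
    B_q = -Psi_q @ Pi.T @ np.linalg.solve(G + gamma * np.eye(q * d), V_M @ V_M.T)
    beta = [B_q[:, (j - 1) * d: j * d] for j in range(1, q + 1)]

    # AR part from alpha_i = psi_i - beta_i + sum_{j=1}^{i-1} beta_j psi_{i-j} (beta_j = 0 for j > q).
    alpha = []
    for i in range(1, p + 1):
        a = blk(i) - (beta[i - 1] if i <= q else np.zeros((d, d)))
        for j in range(1, min(i - 1, q) + 1):
            a = a + beta[j - 1] @ blk(i - j)
        alpha.append(a)
    return alpha, beta

For p = q = 1 this reduces to β̂_1 = -ψ̂_2ψ̂^∗_1(ψ̂_1ψ̂^∗_1 + γ_N𝕀)^{-1}∐^f̂_M_f̂_1 and α̂_1 = ψ̂_1 - β̂_1, i.e. the fARMA(1,1) estimators given above.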
α_1, …, α_p, β_1, …, β_q∈𝒮_ℋ are H-S operators. The image of the operator-valued matrix ∏ℋ^q→ℋ^q lies dense. The eigenvalues of ∏∏^∗ℋ^q →ℋ^q satisfy ζ_j≠ζ_j+1 for all j∈ℕ. The following Sobolev condition, which is similar to the one in the ARMA(1,1) case, enables us also to deduce a consistency result for the estimation errors for the complete ARMA(p,q) operators. For some γ >0 holds ∑_i=1^∞∑_j=1^∞ ⟨ B_q, h_i⊗ d_j⟩^2_𝒮 (1 + i^2γ + j^2γ) < ∞. Let the assumptions of Theorem <ref>, Assumptions <ref>–<ref>, and M^1+γζ^-1_MZ_M = O(K^β) and γ_N = o(ζ_MM^-γ) hold. Then, for the fARMA(p,q) operators holds max_1≤ i ≤ p, 1 ≤ j ≤ q{α̂_i - α_i_𝒮, β̂_j - β_j_𝒮} = O_(M^-γ). To the best of our knowledge, Theorem <ref> which states explicit asymptotic upper bounds for the estimation errors for all operators of fARMA(p,q) processes attaining values in arbitrary separable Hilbert spaces for arbitrary orders is an entirely new result in the literature. § CONCLUSION §.§ Summary This article establishes consistent Yule-Walker type estimates using Tychonoff-regularization for the operators of (functional) invertible linear processes satisfying Sobolev and mild weak dependence conditions in separable Hilbert spaces. Building on these results, we also establish consistent Yule-Walker estimates for the operators of functional AR, MA and ARMA processes with arbitrary orders. The estimates for the complete operators in all of these models are derived on the basis of asymptotic consistency results for finite-dimensional projections of a growing dimension, which are of use in their own right. Our results represent innovations in the current literature in numerous ways: The invertible linear and the functional AR, MA and ARMA processes are allowed to attain their values in general, separable Hilbert spaces, and we derive explicit asymptotic upper bounds for estimation errors of all operators in the inverted and linear representation of our invertible linear process, as well as for the AR, MA and ARMA processes with arbitrary orders. Further, the definition of all our models requires the errors only to be strictly stationary, ergodic white noises instead of independent and identically distributed, which is of use when studying solutions of non-linear function valued time series processes. §.§ A promising concept of invertibility It is worth to point out that <cit.> established a more general notion of invertibility as the series representation (<ref>) for real-valued processes. They called a real-valued process (X_k)_k invertible if (ε̂_k - ε_k)^2 → 0 for k→∞, with ε_k = X_k - f(X_k-1, X_k-2, …, X_k-p, ε_k-1, ε_k-2, …, ε_k-q), k ∈ℕ, ε̂_k = X_k - f(X_k-1, X_k-2, …, X_k-p, ε̂_k-1, ε̂_k-2, …, ε̂_k-q), k ∈ℕ, where ε̂_0, ε̂_-1, …, ε̂_1-q and X_0, X_-1, …, X_1-p are given for some p,q∈ℕ, where (ε_k)_k is i.i.d. and centered with finite second moments, and where f is a measurable function. In the case that the function f is unknown it was replaced by its estimate f̂ (if given). <cit.> generalized this concept of invertibility even further by using ε̂_0, ε̂_-1, … rather than letting time tend to infinity which enabled also the analysis of time-dependent processes. The more general concept of invertibility described above can also be established for processes in a general, separable Banach space ℬ endowed with the norm ·. 
Namely, we call a process (X_k)_k⊂ℬ mild Granger-Andersen invertible if ε̂_k_N - ε_k_N = o_(1), where N is the sample size (as usual), k_N→∞, (ε_k)_k is i.i.d., and ε̂_k_N and ε_k_N have the representations (<ref>)–(<ref>), respectively. This notion of invertibility is based on <cit.> but it is milder (hence its name), because finite second moments of the innovations are not required, the convergence is in a weaker sense (in probability rather than in the L^2-sense), and, for technical reasons, (k_n)_n can be explicitly chosen. It can be shown that invertibility in the common sense (<ref>) implies mild Granger-Andersen invertibility. Moreover, as mentioned in <cit.> for real-valued processes, there are also functional time series which are mild Granger-Andersen invertible but not invertible as in the sense (<ref>), and there are processes that are not invertible in both senses: Under certain conditions bilinear functional time series in a general, separable Hilbert space as stated in <cit.> can be mild Granger-Andersen invertible but not invertible in the sense (<ref>), and, e.g., the non-linear MA(1) process X_k = α_1(ε^2_k-1) + ε_k a.s. in L^2[0,1] can be strictly stationary but not invertible in both senses. §.§ Future research Although mild Granger-Andersen invertibility discussed in the previous section is useful in situations where invertibility in the sense (<ref>) does not hold, our proof technique generally cannot be used as one not necessarily can truncate a series, and let the number of summands go to infinity. As a result, the entire estimation method must be modified and appropriate assumptions must be made. Another research topic is the derivation of estimators and consistency rates in general Banach, possibly even in metric spaces. § ACKNOWLEDGEMENTS The authors would like to note that they did not receive any funds for this project. Furthermore, the majority of Sebastian Kühnert's work took place during his employment at University of California, Davis, and the remaining work at Ruhr University Bochum. apalike § SOME OPERATOR NORM (IN-)EQUALITIES In various proofs, we make use of the following operator norm (in-)equalities. Let (ℋ, ⟨·, ·⟩) be a Hilbert space with respective inner product, and let A_i, B_ij∈ℒ_ℋ and S_ij∈𝒮_ℋ be bounded and H-S operators, respectively, where i=1, …, m∈ℕ, j=1, …, n∈ℕ. Then, the following holds. (a) The operator-valued vector A(A_1 ⋯ A_m) satisfies A∈ℒ_ℋ^m, ℋ, with A_ℒ≤∑^m_i=1A_i_ℒ. (b) For the operator-valued matrix B (B_ij)_1≤ i≤ m, 1≤ j≤ n holds B∈ℒ_ℋ^m, ℋ^n, with B_ℒ≤∑^m_i=1(B_i1 ⋯ B_in)_ℒ. (c) For the operator-valued matrix S (S_ij)_1≤ i≤ m, 1≤ j≤ n holds S∈𝒮_ℋ^m, ℋ^n, with S^2_𝒮 = ∑^m_i=1∑^n_j=1S_ij^2_𝒮. (a) For any x(x_1, …, x_m)^⊤∈ℋ^m holds A(x) = ∑^m_i=1A_i(x_i)≤∑^m_i=1A_i_ℒx_i. Thus, as x_i≤x for all i, the assertion follows from the definition of the norm ·_ℒ. (b) Elementary conversions and the definition of our norms lead indeed to B_ℒ = (sup_x≤ 1∑^m_i=1(B_i1 ⋯ B_in)(x)^2 )^1/2 ≤∑^m_i=1sup_x≤ 1(B_i1 ⋯ B_in)(x) = ∑^m_i=1(B_i1 ⋯ B_in)_ℒ. (c) See <cit.>, Lemma 2.16 (b). * C^(1,1)_α_1, fin in Theorem <ref> is defined by C^(1,1)_α_1, fin ξ_X(M')+ξ_X(K) λ_M'KΛ_K/λ_KM'Λ_M' c(M'), with ξ_X(·) in (<ref>), and with c(M) which is for any M∈ℕ defined as c(M) ρ^-1/2_M + ψ_2_𝒮P_M[ 2ψ_1_ℒ ρ^-1_M(2√(2) M[ ψ_1_ℒ + ρ^1/2_M ] + ρ^1/2_M) + 1 ]. * C^(1,1)_β_1, fin in Theorem <ref> is with ξ_X(K) in (<ref>) and c(M) above defined by C^(1,1)_β_1, fin ξ_X(K) c(M). 
* C^(1,1)_α_i in Theorem <ref> is defined as C^(1,1)_α_i√(S_β_1(γ)) + 1_A_2·[ 1 + c_2 κ](√(S_Ψ(β)) + 1_A_1· c_1 ξ'_X), with ξ'_X in (<ref>), c_1, A_1 in Theorem <ref>, A_2 {(K_N)_N, (M_N)_N | M^1+γρ^-1_MP_M∼ c_2K^β}, and κ 4√(2) ψ_1^2_ℒψ_2_𝒮. * C^(1,1)_β_1 in Theorem <ref> is with ξ'_X in (<ref>), c_1, A_1 in Theorem <ref>, A_2, κ above, S_Ψ(β) from Assumption <ref> and S_β_1(γ) from Assumption <ref> defined by C^(1,1)_β_1√(S_β_1(γ)) + 1_A_2· c_2 κ(√(S_Ψ(β)) + 1_A_1· c_1 ξ'_X). * C^(p,q)_α_i, fin in Theorem <ref> is for any i=1, …, p defined by C^(p,q)_α_i, finξ_X(M')[ 1 + B_[i](M')_ℒ] + ξ_X(K) λ_M'KΛ_K/λ_KM'Λ_M' Ψ”'_[i]_ℒ d(M')), with ξ_X(·) in (<ref>), B_[i] in (<ref>), Ψ”'_[i] in (<ref>), and where d(M) = d(M,p,q) is with Ψ'_[q]∈𝒮_ℋ^q,ℋ in (<ref>) and ∏∈𝒮_ℋ^q in (<ref>) for any M, p, q ∈ℕ defined by d(M) ζ̂^-1/2_M + qΨ'_[q]_𝒮Z_M[ 2 ∏ _ℒ ζ̂^-1_M(2√(2) M[ ∏ _ℒ + ζ^1/2_M ] + ζ^1/2_M) + 1 ]. * C^(p,q)_B_q, fin in Theorem <ref> is with ξ_X(K) in (<ref>) and d(M) in (<ref>) defined by C^(p,q)_B_q, finξ_X(K) d(M). * C^(p,q)_α_i in Theorem <ref> is for any i=1, …, p defined as C^(p,q)_α_iΨ”'_[i]_ℒ√(S_B_q(γ)) + 1_A_2· c_2[1 + B_[i]_ℒ + Ψ”'_[i]_ℒ κ'](√(S_Ψ(β)) + 1_A_1· c_1ξ'_X), where ξ'_X is defined in (<ref>), with c_1, A_1 in Theorem <ref>, A_2 {(K_N)_N, (M_N)_N | M^1+γζ^-1_MZ_M∼ c_2K^β}, Ψ”'_[i] in (<ref>), and with κ' 4√(2) q ∏ ^2_ℒ Ψ'_[q]_𝒮 . * C^(p,q)_B_q in Theorem <ref> is with c_1, A_1 in Theorem <ref>, κ', A_2 and ξ'_X above, S_Φ(γ) from Assumption <ref> and S_B_q(γ) from Assumption <ref> defined as C^(p,q)_B_q√(S_B_q(γ)) + 1_A_2· c_2 κ' (√(S_Ψ(β)) + 1_A_1· c_1 ξ'_X). § PROOFS OF THE RESULTS IN SECTIONS <REF>–<REF> §.§ Proofs of results in Section <ref> (a) Due to (<ref>) holds for any x∈ℋ with x≠ 0, ⟨𝒞_X(x), x⟩ = ∑^∞_i=0⟨ϕ_i𝒞_εϕ^∗_i(x), x ⟩ = ∑^∞_i=0⟨𝒞^1/2_εϕ^∗_i(x), 𝒞^1/2_εϕ^∗_i(x)⟩ = ∑^∞_i=0𝒞^1/2_εϕ^∗_i(x)^2. Due to ϕ^∗_0=ϕ_0=𝕀, and as 𝒞^1/2_ε is injective after Assumption <ref>, we have 𝒞^1/2_εϕ^∗_i(x)>0. Thus, for any x∈ℋ with x≠ 0 holds ⟨𝒞_X(x), x⟩ > 0, in other words, 𝒞_X is a strictly positive operator which is as such injective. (b) Suppose by way of contradiction that there exists a v = (v_1, …,v_L)^⊤∈ℋ^L such that v 0, and 0=⟨𝒞_X^[L]( v), v⟩ =(∑_i=0^L-1⟨ X_-i, v_i ⟩). If this is so there exists an r ≤ L-1 so that v_r 0, and almost surely ⟨ X_-r, v_r ⟩ = -∑_i=r+1^L-1⟨ X_-i, v_i ⟩. Notice the right hand side of the above is measurable with respect to σ( ε_i, i ≤ -r-1). Due to the representation (<ref>), the left hand side of the above is equal to ∑_ℓ = 0^∞⟨ϕ_ℓ(ε_-r-ℓ), v_r ⟩, where ϕ_0= 𝕀. Multiplying each side of (<ref>) with ⟨ε_-r, v_r ⟩ and taking expectations gives that ⟨𝒞_ε(v_r), v_r ⟩ = 0, with v_r 0, which contradicts the injectivity of 𝒞_ε. (c) According to Fubini's theorem, and as linear operators commute with the expected value, we have for any i,j,k, ⟨ c_k, ε_i-j⟩∑_ℓ=0^∞ A_ℓ(ε_i-ℓ) = ∑_ℓ=0^∞A_ℓ( ⟨ c_k, ε_i-j⟩ε_i-ℓ). We note that ⟨ c_k, ε_i-j⟩ε_i-ℓ=0 when jℓ, and when j=ℓ, ⟨ c_k, ε_i-j⟩ε_i-j= ∑_r=0^∞⟨ c_k, ε_i-j⟩⟨ c_r, ε_i-j⟩ c_r = λ_k c_k after <cit.>, equation (1.36). Combining with (<ref>), we have that ⟨ c_k, ε_i-j⟩∑_ℓ=0^∞ A_ℓ(ε_i-ℓ) = λ_k A_j(c_k). Similarly, ⟨ c_k, ε_i-j⟩∑_ℓ=0^∞ B_ℓ(ε_i-ℓ) = λ_k B_j(c_k). Hence, due the hypothesis λ_k A_j(c_k) = λ_k B_j(c_k) for all j,k∈ℕ, and as C_ε is injective, λ_k >0, thus A_j(c_k) = B_j(c_k) for all j,k∈ℕ. Consequently, as (c_j)_j forms a CONS of ℋ, for all j holds indeed A_j = B_j. 
(d) We note that if X is an SLP, then Z_i = ∑^∞_ℓ=0A_ℓ(X_i-ℓ) is also an SLP, since Z_i = ∑^∞_ℓ=0A_ℓ(X_i-ℓ) = ∑^∞_ℓ=0D_ℓ(ε_i-ℓ), where D_ℓ = ∑_j=0^ℓ A_jϕ_ℓ - j. By hypothesis we also have that Z_i = ∑^∞_ℓ=0B_ℓ(X_i-ℓ) = ∑^∞_ℓ=0F_ℓ(ε_i-ℓ), where F_ℓ = ∑_j=0^ℓ B_jϕ_ℓ - j. According to part (c), F_ℓ = D_ℓ for all ℓ∈ℕ. Since ϕ_0 = 𝕀 we have then that A_0 = D_0 = F_0 = B_0. Now the result follows by induction. In several places, we make use of the following upper bounds derived from <cit.>. From the definition of X^[L] = (X^[L]_k)_k∈ℤ, and stationarity of X=(X_k)_k∈ℤ follows 𝒞_X^[L]_𝒩 = X^[L]_0^2 = LX_0^2. Further, due to the definition of X^[L], stationarity and Cauchy-Schwarz inequality, holds, 𝒞^h_X^[L]_𝒩≤X^[L]_0^2 = LX_0^2, h∈ℤ, 𝒞^h_X^[L], X_𝒩≤√(L)X_0^2, h∈ℤ. Moreover, the eigenvalues λ_k=λ_k(L) of 𝒞_X^[L] satisfy λ_1≥λ_2 ≥… > 0 after Proposition <ref> (b) and Assumption <ref>, and 𝒞_X^[L]_𝒩 = ∑^∞_j=1λ_j and (<ref>) yield λ_k < 1/k∑^k_j=1λ_j < k^-1LX_0^2, k,L ∈ℕ, and consequently, with Λ_k=Λ_k(L) in (<ref>), provided X_0^2> 0, we obtain Λ_k ≥λ^-1_k > kL^-1(X_0^2)^-1, k,L ∈ℕ. At first, towards proving Theorem <ref>, we focus on an idealized case in which Ψ_L is finite-dimensional, and may be diagonalized with respect to the CONS (c_i⊗ d_j)_i,j=(c_i(L)⊗ d_j)_i,j of 𝒮_ℋ^L, ℋ, where (λ_j, c_j)_j∈ℕ=(λ_j(L), c_j(L))_j∈ℕ and (μ_j, d_j)_j∈ℕ are the eigenpair sequences of the covariance operators 𝒞_X^[L]∈𝒩_ℋ^L and 𝒞_X∈𝒩_ℋ, respectively. Thus, if Ψ_L∈𝒮_ℋ^L,ℋ, which is given if all ψ_i are H-S operators, we have Ψ_L = ∑_i=1^∞∑_j=1^∞ ⟨Ψ_L, c_i⊗ d_j⟩_𝒮(c_i⊗ d_j). Hereinafter, we make use of this identity, but state some technnical assumptions first. For fixed K∈ℕ and almost all L∈ℕ, there are constants p_i,j,L∈ℝ so that Ψ_L = Ψ_L(K) = ∑_i=1^K∑_j=1^K p_i,j,L(c_i⊗ d_j). Let Assumptions <ref>–<ref>, <ref> hold. Further, let K∈ℕ be fixed, and suppose that ∑_ℓ>Lψ_ℓ_ℒ = o(Λ_KL^3/2N^-1/2), and also θ_N=o(λ_K) and θ_N = o(Λ_KL^2N^-1/2), where Λ_K=sup_1≤ j ≤ K(λ_j - λ_j+1)^-1. Then, it holds Ψ̂_L - Ψ_L(K)_𝒮 = O_(λ^-1_KΛ_KL^2N^-1/2), K∈ℕ. To reiterate, (λ_j, c_j)_j∈ℕ = (λ_j(L), c_j(L))_j∈ℕ, (λ̂_j, ĉ_j)_j∈ℕ = (λ̂_j(L), ĉ_j(L))_j∈ℕ and (μ_j, d_j)_j∈ℕ are the eigenpair sequences of the covariance operators 𝒞_X^[L], 𝒞̂_X^[L]∈𝒩_ℋ^L and 𝒞_X∈𝒩_ℋ, respectively, where X^[L]=(X^[L]_k)_k∈ℤ⊂ℋ^L, with L∈ℕ, is the process in (<ref>) being defined via our ISLP X=(X_k)_k∈ℤ in Assumption <ref>. Further, (c_i⊗ d_j)_i,j=(c_i(L)⊗ d_j)_i,j is a CONS of 𝒮_ℋ^L, ℋ, and due to Assumptions <ref>, <ref>, Ψ_L∈𝒮_ℋ^L,ℋ is an H-S operator which has for some K∈ℕ and almost all L∈ℕ the representation Ψ_L = Ψ_L(K) = ∑_i=1^K∑_j=1^K p_i,j,L(c_i⊗ d_j). Moreover, for the eigenvalues of 𝒞_X^[L] holds λ_1 > ⋯ > λ_K > 0 for all L after Assumptions <ref>–<ref>. Throughout, we write 𝒟̂, 𝒟 for 𝒟̂_X^[L],X=𝒞̂^1_X^[L],X, 𝒟_X^[L],X=𝒞^1_X^[L],X∈𝒮_ℋ^L,ℋ, and 𝒞̂, 𝒞 for 𝒞̂_X^[L], 𝒞_X^[L]∈𝒩_ℋ^L, respectively. From the definition of Ψ̂_L, due to 𝒞̂^†= (𝒞̂ + θ_N𝕀)^-1, the approximate Yule-Walker equation (<ref>), and 𝒞^𝒞𝒞^†, follows Ψ̂_L - Ψ_L = (𝒟̂ - 𝒟)𝒞̂^†∐^ĉ_K_ĉ_1 + 𝒟(𝒞̂^†∐^ĉ_K_ĉ_1 - 𝒞^†∐^c_K_c_1) + 𝒟𝒞^†∐^c_K_c_1 - Ψ_L = (𝒟̂ - 𝒟)𝒞̂^†∐^ĉ_K_ĉ_1 + 𝒟(𝒞̂^†∐^ĉ_K_ĉ_1 - 𝒞^†∐^c_K_c_1) + (∑_ℓ>Lψ_ℓ𝒞^1-ℓ_X^[L], X)𝒞^†∐^c_K_c_1 + Ψ_L(𝒞^∐^c_K_c_1 - 𝕀). Consequently, due to triangle inequality and operator-valued Hölder's inequality, we have Ψ̂_L - Ψ_L_𝒮 ≤𝒟̂ - 𝒟_𝒮𝒞̂^†∐^ĉ_K_ĉ_1_ℒ + 𝒟_𝒮𝒞̂^†∐^ĉ_K_ĉ_1 - 𝒞^†∐^c_K_c_1_ℒ + ∑_ℓ>Lψ_ℓ𝒞^1-ℓ_X^[L], X_𝒮𝒞^†∐^c_K_c_1_ℒ + Ψ_L(𝒞^∐^c_K_c_1 - 𝕀)_𝒮 . 
Thereby, according to the definition of 𝒞̂^†, 𝒞^† and the operator norm, for all K,L,N holds 𝒞̂^†∐^ĉ_K_ĉ_1_ℒ = (𝒞̂ + θ_N𝕀)^-1∐^ĉ_K_ĉ_1_ℒ = sup_1≤ j ≤ K(λ̂_j + θ_N)^-1 = (λ̂_K + θ_N)^-1ℓ̂_K,N , 𝒞^†∐^c_K_c_1_ℒ = (𝒞 + θ_N𝕀)^-1∐^c_K_c_1_ℒ = sup_1≤ j ≤ K(λ_j + θ_N)^-1 = (λ_K + θ_N)^-1ℓ_K,N . Further, due to (<ref>), 𝒟_𝒮≤√(L)X_0^2. From the definition of the operator norm, the projection operators and 𝒞̂^† and 𝒞^† follows with x̂_j ⟨ x, ĉ'_j⟩, x_j ⟨ x, c_j⟩, ℓ̂_j,N (λ̂_j + θ_N)^-1 and ℓ_j,N (λ_j + θ_N)^-1, 𝒞̂^†∐^ĉ_K_ĉ_1 - 𝒞^†∐^c_K_c_1_ℒ = sup_x≤ 1∑^∞_j=1x̂_j𝒞̂^†∐^ĉ_K_ĉ_1ĉ'_j - x_j𝒞^†∐^c_K_c_1c_j = sup_x≤ 1∑^K_j=1x̂_j𝒞̂^†(ĉ'_j) - x_j𝒞^†(c_j) ≤sup_x≤ 1∑^K_j=1x̂_jℓ̂_j,N(ĉ'_j - c_j) + sup_x≤ 1∑^K_j=1(x̂_jℓ̂_j,N - x_jℓ_j,N)(c_j) . For the first term in (<ref>) holds due to elementary conversions and (<ref>), sup_x≤ 1∑^K_j=1x̂_jℓ̂_j,N(ĉ'_j - c_j) ≤ K sup_1≤ j ≤ Kℓ̂_j,Nĉ'_j - c_j≤ 2√(2)Kℓ̂_K,NΛ_K𝒞̂ - 𝒞_ℒ . For the second term in (<ref>) holds due to similar conversions as above, sup_x≤ 1∑^K_j=1(x̂_jℓ̂_j,N - x_jℓ_j,N)(c_j) ≤sup_x≤ 1∑^K_j=1(x̂_j - x_j)ℓ̂_j,N(c_j) + sup_x≤ 1∑^K_j=1x_j(ℓ̂_j,N - ℓ_j,N)(c_j) ≤ Kℓ̂_K,Nsup_1≤ j ≤ Kĉ'_j - c_j + sup_1≤ j ≤ K(ℓ̂_j,N - ℓ_j,N)(c_j) ≤ 2√(2)Kℓ̂_K,NΛ_K𝒞̂ - 𝒞_ℒ + sup_1≤ j ≤ K|ℓ̂_j,N - ℓ_j,N|. Due to the definition of ℓ̂_j,N and ℓ_j,N, and (<ref>), we have sup_1≤ j ≤ K|ℓ̂_j,N - ℓ_j,N| ≤sup_1≤ j ≤ Kℓ̂_j,Nℓ_j,N|λ̂_j - λ_j| ≤ℓ̂_K,Nℓ_K,N𝒞̂ - 𝒞_ℒ. Hence, by combining (<ref>)–(<ref>), and due to ℓ_K,N≤λ^-1_K≤Λ_K, we obtain 𝒞̂^†∐^ĉ_K_ĉ_1 - 𝒞^†∐^c_K_c_1_ℒ≤𝒞̂ - 𝒞_ℒ ℓ̂_K,NΛ_K(4√(2)K + 1). Also, by using triangle inequality, operator-valued Hölder's inequality and (<ref>), we get for any L, ∑_ℓ>Lψ_ℓ𝒞^1-ℓ_X^[L], X_𝒮≤√(L)X_0^2∑_ℓ>Lψ_ℓ_ℒ, where the series exists for all L after Assumption <ref>. At last, after the definition of the H-S norm, with Ψ_L=Ψ_L(K) in (<ref>), c_i⊗ d_j, with c_i=c_i(L) and d_j being the eigenfunctions of 𝒞=𝒞_X^[L] and 𝒞_X, respectively, 𝒞^=𝒞(𝒞 + θ_N𝕀)^-1, with δ_jk being the Kronecker-Delta, so δ_jk = 1 if j=k, and δ_jk = 0 if j≠ k, with Ψ_L=Ψ_L(K), we get Ψ_L(𝒞^∐^c_K_c_1 - 𝕀)^2_𝒮 = ∑^∞_i=1Ψ_L( 1_[1,K](i)𝒞^ - 𝕀)(c_i)^2 = ∑^∞_i=1 ( 1_[1,K](i)λ_i(λ_i + θ_N)^-1 - 1)^2 ∑_k=1^K∑_ℓ=1^K p_k,ℓ,L(c_k⊗ d_ℓ)(c_i)^2 = ∑^K_i=1 (λ_i(λ_i + θ_N)^-1 - 1)^2 ∑_k=1^K∑_ℓ=1^K p_k,ℓ,Lδ_i,k d_ℓ^2 = θ^2_N ∑_k=1^K∑_ℓ=1^K (λ_k + θ_N)^-1p_k,ℓ,Ld_ℓ^2 ≤θ^2_Nℓ^2_K,N∑_k=1^K∑_ℓ=1^K p_k,ℓ,Ld_ℓ^2 = θ^2_Nℓ^2_K,NΨ_L(K)^2_𝒮 . Thereby, due to the definition of Ψ_L(K) and Ψ_L, and Assumption <ref>, we have lim_L→∞Ψ_L(K)^2_𝒮≤lim_L→∞Ψ_L^2_𝒮 = lim_L→∞∑^L_i=1ψ_i^2_𝒮 = ∑^∞_i=1ψ_i^2_𝒮 < ∞. Altogether, by plugging (<ref>)–(<ref>) and (<ref>)–(<ref>) in (<ref>), we obtain for all K, L, N, Ψ̂_L - Ψ_L(K)_𝒮 ≤ℓ̂_K,N𝒟̂ - 𝒟_𝒮 + √(L)X_0^2𝒞̂ - 𝒞_ℒ ℓ̂_K,NΛ_K(4√(2)K + 1) + ℓ_K,N√(L)X_0^2∑_ℓ>Lψ_ℓ_ℒ + θ_Nℓ_K,NΨ_L(K)_𝒮. Hence, due to (<ref>)–(<ref>), (λ̂_K+θ_N)^-1→λ^-1_K after λ̂_K→λ_K, θ_N = o(λ_K) and the continuous mapping theorem, and because of ∑_ℓ>Lψ_ℓ_ℒ = o(Λ_KL^3/2N^-1/2), θ_N = o(Λ_KL^2N^-1/2) and (<ref>), our claim is proven. To prove the convergence result for the estimators in the case of the complete operators, we use the following auxiliary result. Let C, Ĉ∈ℒ_ℋ be positive semi-definite, self-adjoint, compact operators with eigenvalues φ_1 > φ_2 > ⋯ > 0, and φ̂_1 ≥φ̂_2 ≥⋯≥ 0, respectively, where Ĉ denotes an estimate for C which is defined based on a sample with sample size N∈ℕ such that Ĉ-C_ℒ = O_(c_N), where (c_N)_N∈ℕ⊂(0,∞) is a sequence with c_N→ 0. Further, let (a_N)_N∈ℕ, (b_N)_N∈ℕ⊂ (0,∞) be sequences with a_N →∞ and b_N → 0 such that both b_N=o(φ_a_N) and c_N= o(φ_a_N). 
Then, (φ̂_a_N + b_N)^-1 = O_(φ^-1_a_N). Evidently, for any a_N holds |φ̂_a_N - φ_a_N| ≤Ĉ-C_ℒ, and thus |φ̂_a_N/φ_a_N - 1 | ≤Ĉ-C_ℒ/φ_a_N, where the right-hand side goes to zero because Ĉ-C_ℒ=O_(c_N) and c_N= o(φ_a_N). Hence, φ̂_a_N/φ_a_N→1, and (φ̂_a_N + b_N)/φ_a_N→1 as b_N=o(φ_a_N). Further, the continuous mapping theorem implies φ_a_N/( φ̂_a_N + b_N) →1 from which directly follows ( φ̂_a_N + b_N)^-1= O_(φ^-1_a_N). In this proof, we make use of the notation in the proof of Theorem <ref>. Firstly, Ψ_L(K) = ∑_i=1^K∑_j=1^K ⟨Ψ_L, c_i⊗ d_j⟩_𝒮(c_i⊗ d_j) and Assumption <ref> imply Ψ_L(K) - Ψ_L^2_𝒮 = max(i,j)>K∑^∞_i=1∑^∞_j=1⟨Ψ_L, c_i⊗ d_j⟩^2_𝒮≤ K^-2βS_Ψ_L(β), K,L∈ℕ. Further, due to arguments in the proof of Theorem <ref>, θ_N = o(λ_K) and K^1+βλ^-1_KΛ_KL^2 = O(N^1/2) which give ℓ̂_K,N=(λ̂_K + θ_N)^-1 = O_(λ^-1_K) after Lemma <ref>, and by using the triangle inequality, ∑_ℓ>Lψ_ℓ_ℒ = o(KΛ_KL^3/2N^-1/2), and θ_N = o(KΛ_KL^2N^-1/2), we get indeed Ψ̂_L - Ψ_L_𝒮 ≤Ψ̂_L - Ψ_L(K)_𝒮 + Ψ_L(K) - Ψ_L_𝒮 = O_(Kλ^-1_KΛ_KL^2N^-1/2) + O(K^-β) = O_(K^-β). We now turn towards proving Proposition <ref>. Under Assumption <ref> it follows for any ℓ and K and L sufficiently large that ψ_ℓ=ψ_ℓ(K)[Ψ_L(K)]^(ℓ) = ∑_i=1^K∑_j=1^K p_i,j,L(c_i⊗ d_j)^(ℓ) = ∑_i=1^K∑_j=1^K p_i,j,L(c^(ℓ)_i⊗ d_j). The finite-dimensional representation of the operators ϕ_i are not necessarily as simple. For instance for ϕ_2 holds due to ϕ_1=ψ_1 and ϕ_2 = ψ_1ϕ_1 under Assumption <ref> for sufficiently large L, ϕ_2(K) = ψ^2_1(K) = ∑_i=1^K∑_i'=1^K∑_j=1^K∑_j'=1^K p_i,j,L p_i',j',L(c^(ℓ)_i⊗ d_j)(c^(ℓ)_i'⊗ d_j')_= ⟨ c^(ℓ)_i, d_j'⟩ c^(ℓ)_i'⊗ d_j. Nevertheless, we can use the recursively defined finite-dimensional approximations ϕ_i = ϕ_i(K) ∑^i_j=1 ψ_j(K)ϕ_i-j(K), i∈ℕ, with ϕ_0(K)𝕀. Let the assumptions of Theorem <ref> hold. Then, ϕ̂_i - ϕ_i(K)_𝒮 = O_(λ^-1_KΛ_KL^2N^-1/2), i,K∈ℕ. With ϕ'_i ϕ_i(K)_ℒ, ψ̂'_jψ̂_j_ℒ for all i,j, ϕ̂_0 = ϕ_0(K)=𝕀, (<ref>), (<ref>)–(<ref>), elementary conversions and ψ̂_̂ĵ - ψ_j(K)_𝒮≤Ψ̂_L - Ψ_L(K)_𝒮, we obtain for any i∈ℕ, ϕ̂_i - ϕ_i(K)_𝒮 = ∑^i_j=1ψ̂_jϕ̂_i-j - ψ_j(K)ϕ_i-j(K)_𝒮 ≤∑^i_j=1ψ̂_j_ℒϕ̂_i-j - ϕ_i-j(K)_𝒮 + ϕ_i-j(K)_ℒψ̂_j - ψ_j(K)_𝒮 ≤∑^i-1_j=1ψ̂'_jϕ̂_i-j - ϕ_i-j(K)_𝒮 + ∑^i-1_j=0Ψ̂_L - Ψ_L(K)_𝒮 ϕ'_j. Since ϕ̂_̂1̂ - ϕ_1(K)_𝒮 = ψ̂_̂1̂ - ψ_1(K)_𝒮≤Ψ̂_L - Ψ_L(K)_𝒮 = O_(λ^-1_KΛ_KL^2N^-1/2) after Theorem <ref>, we obtain an asymptotic upper bound for ϕ̂_i - ϕ_i(K)_𝒮 by doing (i-1) iterations of the above inequality. By using ϕ'_0 = 1 and the convention that factors depending on j_k for some k in the multi-sums below are set 1 if ∑^0_j_k=1 is given, due to ϕ'_i = ϕ_i(K)_ℒ and ψ̂'_j = ψ̂_j_ℒ for all i,j, after Theorem <ref>, implying also ψ̂_j_ℒ = ψ_j(K)_ℒ + o_(1) for all j, with Θ(i,K) 1 + ∑^i-1_k=1∑^i-j_0-1_j_1=1⋯∑^i-j_k-1-1_j_k=1(∏^k-1_ℓ=1ψ_j_k(K)_ℒ)[ ϕ_j_k(K)_ℒ + ψ_j_k(K)_ℒ], it holds for each i,K, ϕ̂_i - ϕ_i(K)_𝒮 ≤Ψ̂_L - Ψ_L(K)_𝒮 ×[ ∑^i-1_j_1=0ϕ'_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=0ψ̂'_j_1ϕ'_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-3-1_j_i-2=1∑^i-j_i-2-1_j_i-1=0(∏^i-2_k=1ψ̂'_j_k) ϕ'_j_i-1] + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-1_k=1 ψ̂'_j_k) ϕ̂_i-∑^i-1_k=1j_k - ϕ_i-∑^i-1_k=1j_k(K)_𝒮 _= ψ̂_1 - ψ_1(K)_𝒮 ≤Ψ̂_L - Ψ_L(K)_𝒮 ×[ 1 + ∑^i-1_j_1=1 ϕ'_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1ψ̂'_j_1ϕ'_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-2_k=1 ψ̂'_j_k) ϕ'_j_i-1. . + ∑^i-1_j_1=1 ψ̂'_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1ψ̂'_j_1ψ̂'_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-1_k=1 ψ̂'_j_k)] = Ψ̂_L - Ψ_L(K)_𝒮[ Θ(i,K) + o_(1)]. Consequently, after Theorem <ref>, our claim is shown. 
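To make the recursion ϕ_i(K) = ∑_{j=1}^{i} ψ_j(K)ϕ_{i-j}(K) with ϕ_0(K) = 𝕀 concrete, the following minimal sketch evaluates it for finite-rank operators represented as K × K coordinate matrices with respect to a fixed basis. The dimension and the example operators ψ_1, ψ_2 are arbitrary illustrative choices and not objects from the paper.

```python
import numpy as np

def phi_from_psi(psi, order):
    """Evaluate phi_i = sum_{j=1}^{i} psi_j @ phi_{i-j}, phi_0 = identity,
    for operators given as K x K coordinate matrices.  `psi` is a dict
    {j: matrix}; missing lags are treated as the zero operator."""
    K = next(iter(psi.values())).shape[0]
    phi = {0: np.eye(K)}
    for i in range(1, order + 1):
        acc = np.zeros((K, K))
        for j in range(1, i + 1):
            acc += psi.get(j, np.zeros((K, K))) @ phi[i - j]
        phi[i] = acc
    return phi

# Toy example with K = 3 and two arbitrarily chosen psi operators.
rng = np.random.default_rng(0)
psi = {1: 0.4 * rng.standard_normal((3, 3)), 2: 0.2 * rng.standard_normal((3, 3))}
phi = phi_from_psi(psi, order=5)
# Sanity check of the recursion: phi_2 = psi_1 @ psi_1 + psi_2.
print(np.allclose(phi[2], psi[1] @ psi[1] + psi[2]))  # True
```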
By using the arguments in the proof of Proposition <ref>, with ϕ'_iϕ_j(K)_ℒ, ϕ”_iϕ_j_ℒ, ψ̂'_j ψ̂_j_ℒ and ψ'_jψ_j(K)_ℒ for all i,j,K, and ψ_j(K) - ψ_j_𝒮≤Ψ_L(K) - Ψ_L_𝒮 for L≥ j, we get for L≥ i, ϕ̂_i - ϕ_i_𝒮 ≤ϕ̂_i - ϕ_i(K)_𝒮 + ϕ_i(K) - ϕ_i_𝒮 ≤∑^i_j=1ψ̂_jϕ̂_i-j - ψ_j(K)ϕ_i-j(K)_𝒮 + ∑^i_j=1ψ_j(K)ϕ_i-j(K) - ψ_jϕ_i-j_𝒮 ≤∑^i-1_j=1 ψ̂'_jϕ̂_i-j - ϕ_i-j(K)_𝒮 + Ψ̂_L - Ψ_L(K)_𝒮∑^i-1_j=0 ϕ'_j + ∑^i-1_j=1 ψ'_jϕ_i-j(K) - ϕ_i-j_𝒮 + Ψ_L(K) - Ψ_L_𝒮∑^i-1_j=0 ϕ”_j. Following the lines in the proof of Proposition <ref>, ϕ'_i = ϕ_i(K)_ℒ→ϕ”_i = ϕ_i_ℒ for all i, and ψ̂'_j = ψ̂_j_ℒ = ψ”_j + o_(1) for all j after Theorem <ref>, where ψ”_j = ψ_j_ℒ, and ψ”_0=1, with Θ'(i) lim_K→∞Θ(i,K) = 1 + ∑^i-1_k=1∑^i-j_0-1_j_1=1⋯∑^i-j_k-1-1_j_k=1(∏^k-1_ℓ=1ψ_j_k_ℒ)[ ϕ_j_k_ℒ + ψ_j_k_ℒ], for each i, we obtain indeed ϕ̂_i - ϕ_i_𝒮 ≤Ψ̂_L - Ψ_L(K)_𝒮 ×[ 1 + ∑^i-1_j_1=1ϕ'_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1ψ̂'_j_1ϕ'_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-2_k=1ψ̂'_j_k) ϕ'_j_i-1. . + ∑^i-1_j_1=1ψ̂'_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1ψ̂'_j_1ψ̂'_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-1_k=1 ψ̂'_j_k)] + Ψ_L(K) - Ψ_L_𝒮 ×[ 1 + ∑^i-1_j_1=1ϕ”_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1ψ'_j_1ϕ”_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-2_k=1ψ'_j_k) ϕ”_j_i-1. . + ∑^i-1_j_1=1ψ”_j_1 + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1ψ”_j_1ψ”_j_2 + ⋯ + ∑^i-1_j_1=1∑^i-j_1-1_j_2=1⋯∑^i-j_i-2-1_j_i-1=1(∏^i-1_k=1 ψ”_j_k)] =O_(K^-β) Θ'(i) = O_(K^-β). §.§ Proofs of results in Section <ref> As already indicated, part (a) and (b) follow immediately from Theorems <ref>–<ref>, respectively, by putting L=p, and due to A_p = Ψ_p and Â_p,K = Ψ̂_p,K. The parts (a) and (b) result from Propositions <ref>, <ref>, respectively, by putting β_j = ϕ_j and β̂_j = ϕ̂_j. For fixed M∈ℕ, there are constants b_i,j,1∈ℝ so that β_1 = β_1(M) = ∑_i=1^M∑_j=1^M b_i,j,1(f_i⊗ f_j). To assume a finite-dimensional representation of α_1 is not necessary, as the Assumptions <ref>–<ref> entail together with (<ref>) that for α_1 holds with ψ_1(K) in (<ref>), α_1 = α_1(K,M)ψ_1(K) + β_1(M). Let (X_k)_k∈ℤ be the fARMA(1,1) process in (<ref>), let the assumptions of Theorem <ref> and Assumptions <ref>–<ref>, <ref> hold, and the sequence in (<ref>) satisfies γ_N = o(λ^-1_KΛ_KL^2N^-1/2). Then, for fixed K, M, and L=L_N→∞ in Theorem <ref> holds max{α̂_1 - α_1(K,M)_𝒮, β̂_1 - β_1(M)_𝒮} = O_(λ^-1_KΛ_KL^2N^-1/2). The conversions in this proof are similar to those in the proof of Theorem <ref>. We remind that (ρ_j, f_j)_j∈ℕ, (ρ̂_j, f̂_j)_j∈ℕ are the eigenpair sequences of ψ_1ψ^∗_1, ψ̂_1ψ̂^∗_1 ∈𝒩_ℋ, respectively, where ψ_i∈𝒮_ℋ are the H-S operators in the inverted representation (<ref>), and ψ̂_i = Ψ̂^(i)_L,K∈𝒮_ℋ its estimators, the ith components of Ψ̂_L in (<ref>). Similar as in the proofs above, we have sup_j∈ℕ|ρ̂_j - ρ_j| ≤ψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ. Further, the eigenvalues satisfy ρ_1 > ρ_2 > … > 0 due to Assumptions <ref>–<ref> and since ψ_1ψ^∗_1 is injective which is given as the image of ψ_1 lies dense. Thus, sup_1≤ j ≤ Mf̂'_j - f_j ≤ 2√(2)P_Mψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ, M ∈ℕ, where f̂'_j⟨f̂_j, f_j⟩f̂_j and P_M sup_1≤ j ≤ M(ρ_j - ρ_j+1)^-1. Thereby, due to triangle inequality and sub-multiplicativity of the operator norm holds, ψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ≤ψ̂_1 - ψ_1_ℒ( ψ_1_ℒ + ψ̂_1_ℒ). Moreover, (γ_N)_N ∈ℕ⊂(0,∞) is a sequence with γ_N → 0, and for β_1 holds β_1 = β_1(M) = ∑_i=1^M∑_j=1^Mb_i,j,1(f_i⊗ f_j) after Assumption <ref>. 
As β̂_1 - ψ̂_2ψ̂^†_1∐^f̂_M_f̂_1, where ψ̂^†_1 = ψ̂^∗_1(ψ̂_1ψ̂^∗_1 + γ_N𝕀)^-1, with ψ^†_1 ψ^∗_1(ψ_1ψ^∗_1 + γ_N𝕀)^-1, ψ^_1 ψ^∗_1ψ^†_1, and ψ_2 = -β_1ψ_1 from (<ref>), due to triangle and operator-valued Hölder's inequality holds β̂_1 - β_1_𝒮≤ψ̂_2 - ψ_2_𝒮ψ̂^†_1∐^f̂_M_f̂_1_ℒ + ψ_2_𝒮ψ̂^†_1∐^f̂_M_f̂_1 - ψ^†_1∐^f_M_f_1_ℒ + β_1(ψ^_1∐^f_M_f_1 - 𝕀)_𝒮 . Due to the definition of ψ̂^†_1 and the operator norm, A^2_ℒ = A^∗A_ℒ for any A∈ℒ_ℋ, as ψ̂_1ψ̂^∗_1 commutes with (ψ̂_1ψ̂^∗_1 + γ_N𝕀)^-1, the projection operator ∐^f̂_M_f̂_1 with ψ̂_1ψ̂^∗_1 and (ψ̂_1ψ̂^∗_1 + γ_N𝕀)^-1, and as (∐^f̂_M_f̂_1)^∗ = (∐^f̂_M_f̂_1)^2 = ∐^f̂_M_f̂_1, it holds ψ̂^†_1∐^f̂_M_f̂_1^2_ℒ = ∐^f̂_M_f̂_1(ψ̂^†_1)^∗ψ̂^†_1∐^f̂_M_f̂_1_ℒ = ψ̂_1ψ̂^∗_1(ψ̂_1ψ̂^∗_1 + γ_N𝕀)^-2∐^f̂_M_f̂_1_ℒ = sup_1≤ j ≤ Mρ̂_j/(ρ̂_j + γ_N)^2≤ (ρ̂_M + γ_N)^-1p̂_M,N. We get as in the proof of Theorem <ref>, from the definition of the projection operators, ψ̂^†_1 and ψ^†_1, with f̂'_j⟨ f_j, f̂_j⟩f̂_j, x̂_j ⟨ x, f̂'_j⟩, x_j ⟨ x, f_j⟩, p̂_j,N (ρ̂_j + γ_N)^-1 and p_j,N (ρ_j + γ_N)^-1 ψ̂^†_1∐^f̂_M_f̂_1 - ψ^†_1∐^f_M_f_1_ℒ = sup_x≤ 1∑^M_j=1 x̂_jp̂_j,Nψ̂^∗_1(f̂'_j) - x_jp̂_j,Nψ^∗_1(f_j) ≤sup_x≤ 1∑^M_j=1x̂_jp̂_j,Nψ̂^∗_1(f̂'_j - f_j) + (x̂_j - x_j)p̂_j,Nψ̂^∗_1(f_j) + x_j[p̂_j,Nψ̂^∗_1 - p_j,Nψ^∗_1](f_j) . Due to elementary conversions and (<ref>)–(<ref>) holds sup_x≤ 1∑^M_j=1x̂_jp̂_j,Nψ̂^∗_1(f̂'_j - f_j) ≤ Msup_1≤ j ≤ Mp̂_j,Nψ̂^∗_1(f̂'_j - f_j) ≤ 2√(2)Mp̂_M,NP_Mψ̂_1_ℒψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ . Further, due to similar conversions as above, sup_x≤ 1∑^M_j=1(x̂_j - x_j)p̂_j,Nψ̂^∗_1(f_j) ≤ Msup_1≤ j ≤ Mp̂_j,Nf̂'_j - f_jψ̂^∗_1(f_j) ≤ 2√(2)Mp̂_M,NP_Mψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒsup_1≤ j ≤ M( ψ^∗_1(f_j) + [ψ̂^∗_1 - ψ^∗_1](f_j)) ≤ 2√(2)Mp̂_M,NP_Mψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ( ρ^1/2_M + ψ̂_1 - ψ_1_ℒ). Moreover, we get sup_x≤ 1∑^M_j=1x_j[p̂_j,Nψ̂^∗_1 - p_j,Nψ^∗_1](f_j) = sup_1≤ j ≤ M[p̂_j,Nψ̂^∗_1 - p_j,Nψ^∗_1](f_j) ≤sup_1≤ j ≤ M(p̂_j,N - p_j,N)ψ̂^∗_1(f_j) + sup_1≤ j ≤ Mp_j,N[ψ̂^∗_1 - ψ^∗_1](f_j) ≤sup_1≤ j ≤ M|p̂_j,N - p_j,N|sup_1≤ j ≤ M( ψ^∗_1(f_j) + [ψ̂^∗_1 - ψ^∗_1](f_j)) + p_M,Nψ̂_1 - ψ_1_ℒ ≤ P_M[ p̂_M,Nψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ( ρ^1/2_M + ψ̂_1 - ψ_1_ℒ) + ψ̂_1 - ψ_1_ℒ], where we utilized p_M,N≤ρ^-1_M ≤ P_M. By combining (<ref>)–(<ref>), using (<ref>), and ψ̂_1 - ψ_1_ℒ= o_(1) and thus ψ̂_1_ℒ = ψ_1_ℒ + o_(1) after Theorem <ref>, for fixed K, M, and L=L_N→∞ holds ψ̂^†_1∐^f̂_M_f̂_1 - ψ^†_1∐^f_M_f_1_ℒ ≤ 2√(2)Mp̂_M,NP_Mψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ[ ψ̂_1_ℒ + ρ^1/2_M + ψ̂_1 - ψ_1_ℒ] + P_M[ p̂_M,Nψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ( ρ^1/2_M + ψ̂_1 - ψ_1_ℒ) + ψ̂_1 - ψ_1_ℒ] ≤ P_Mψ̂_1 - ψ_1_ℒ[ 2ψ_1_ℒ p̂_M,N(2√(2)M[ ψ_1_ℒ + ρ^1/2_M ] + ρ^1/2_M) + 1 + o_(1)]. Furthermore, similar as in the proof of Theorem <ref>, we get with β_1=β_1(M) in (<ref>), because f_j are the eigenfunctions of ψ_1ψ^∗_1, and ψ^_1 ψ_1ψ^∗_1(ψ_1ψ^∗_1 + γ_N𝕀)^-1, β_1(ψ^_1∐^f_M_f_1 - 𝕀)^2_𝒮 ≤γ^2_Np^2_M,N∑_k=1^M∑_ℓ=1^M b_k,ℓ,1f_ℓ^2 = γ^2_Np^2_M,Nβ_1(M)^2_𝒮. By plugging the inequalities (<ref>), (<ref>)–(<ref>) into (<ref>), and afterwards using that for all i holds ψ̂_i - ψ_i_ℒ≤ψ̂_i - ψ_i_𝒮≤Ψ̂_L - Ψ_L_𝒮, we get β̂_1 - β_1(M)_𝒮 ≤p̂^1/2_M,Nψ̂_2 - ψ_2_𝒮 + ψ_2_𝒮P_Mψ̂_1 - ψ_1_ℒ ×[ 2ψ_1_ℒ p̂_M,N(2√(2)M[ ψ_1_ℒ + ρ^1/2_M ] + ρ^1/2_M) + 1 + o_(1)] + γ_N p_M,Nβ_1(M)_𝒮 ≤Ψ̂_L - Ψ_L_𝒮(p̂^1/2_M,N + ψ_2_𝒮P_M[ 2ψ_1_ℒ p̂_M,N(2√(2)M[ ψ_1_ℒ + ρ^1/2_M ] + ρ^1/2_M) + 1 + o_(1)]) + γ_N p_M,Nβ_1(M)_𝒮 . Since K,M are fixed, L=L_N→∞, p̂^a_M,N⟶ρ^-a_M after (<ref>), (<ref>) and Theorem <ref> for any a≠ 0, and γ_N = o(λ^-1_KΛ_KL^2N^-1/2), we get β̂_1 - β_1(M)_𝒮=O_(λ^-1_KΛ_KL^2N^-1/2). 
Further, α̂_1=ψ̂_1 - β̂_1 and α_1(K,M) = ψ_1(K) - β_1(M) entail α̂_1 - α_1(K,M)_𝒮 ≤Ψ̂_L - Ψ_L(K)_𝒮 + β̂_1 - β_1(M)_𝒮 = O_(λ^-1_KΛ_KL^2N^-1/2). Hence, our claim is proven. By combining the inequalities in the proofs of Theorems <ref>, <ref>–<ref>, for min{K_N, L_N, M_N}→∞ holds because of p̂_M,N = O_(ρ^-1_M) according to Lemma <ref> with ψ̂_1ψ̂^∗_1 - ψ_1ψ^∗_1_ℒ=O_(K^-β), M^1+γρ^-1_MP_M = O(K^β) and γ_N = o(ρ_MM^-γ), and Assumption <ref> indeed β̂_1 - β_1_𝒮 =O_(M^-γ), thus α̂_1 - α_1_𝒮≤Ψ̂_L - Ψ_L_𝒮 + β̂_1 - β_1_𝒮=O_(M^-γ). For fixed M∈ℕ, there are constants b'_i,j,q∈ℝ so that B_q = B_q(M) = ∑_i=1^M∑_j=1^M b'_i,j,q(h_i⊗ d_j). Due to the identity (<ref>), which is equivalent to α_i = ψ_i + B_[i]Ψ”'^ ⊤_[i] for i=1, …, p, we get similar as in Section <ref>, a finite-dimensional representation of α_i via the finite-dimensional representations of the operators ψ_i and β_j. Namely, it holds α_i = α_i(K,M) ψ_i(K) + B_[i](M)Ψ”^ ⊤_[i](K), 1≤ i ≤ p. Let (X_k)_k∈ℤ be the fARMA(p,q) process in (<ref>), let the assumptions of Theorem <ref> and Assumptions <ref>–<ref> and <ref> hold, and let γ_N = o(λ^-1_KΛ_KL^2N^-1/2). Then, for fixed K,M, and L=L_N→∞ as in Theorem <ref>, it holds max_1≤ i ≤ p, 1≤ j ≤ q{α̂_i - α_i(K,M)_𝒮, β̂_j - β_j(M)_𝒮} = O_(λ^-1_KΛ_KL^2N^-1/2). This proof proceeds as the proof of Theorem <ref>. Herein, (ζ_j, h_j)_j∈ℕ, (ζ̂_j, ĥ_j)_j∈ℕ are the eigenpair sequences of ∏∏^∗, ∏̂∏̂^∗∈𝒩_ℋ^q, respectively, with ∏,∏̂∈𝒮_ℋ^q defined in (<ref>) and (<ref>), respectively, and where ∏∈𝒮_ℋ^q and ∏∏^∗∈𝒩_ℋ^q hold after Assumption <ref> and (<ref>). Further, sup_j∈ℕ|ζ̂_j - ζ_j|≤∏̂∏̂^∗ - ∏∏^∗_ℒ , with ζ_1 > ζ_2 > … > 0 after Assumptions <ref>–<ref>, and sup_1≤ j ≤ Mĥ'_j - h_j ≤ 2√(2)Z_M∏̂∏̂^∗ - ∏∏^∗_ℒ, M∈ℕ, with ĥ'_j⟨ĥ_j, h_j⟩ĥ_j and Z_Msup_1≤ j ≤ M(ζ_j - ζ_j+1)^-1. Due to the definition of the operator-valued vectors and matrices, and (<ref>), holds ∏̂ - ∏_ℒ ≤∑^q-1_i=0 Ψ̂'_[i] - Ψ'_[i]_ℒ . Consequently, as Ψ̂'_[i] - Ψ'_[i]_𝒮≤Ψ̂_L - Ψ_L_𝒮 for L≥ p+q+i-1, we obtain for L≥ p+2q-2, ∏̂∏̂^∗ - ∏∏^∗_ℒ ≤∏̂ - ∏_ℒ(∏_ℒ + ∏̂_ℒ) ≤ qΨ̂_L - Ψ_L_𝒮(∏_ℒ + ∏̂_ℒ). At first, we show the claimed consistency results for the operators β_j. For B̂_q = - Ψ̂'_[q]∏̂^†∐^ĥ_M_ĥ_1, with ∏̂^† = ∏̂^∗(∏̂∏̂^∗ + γ_N𝕀)^-1 and ∏^† = ∏^∗(∏∏^∗ + γ_N𝕀)^-1, where (γ_N)_N ∈ℕ⊂(0,∞) is a sequence with γ_N → 0, and with ∏^∏∏^† and Ψ'_[q] = -B_q∏ in (<ref>), holds B̂_q - B_q = (Ψ'_[q] - Ψ̂'_[q])∏̂^†∐^ĥ_M_ĥ_1 + Ψ'_[q](∏^†∐^h_M_h_1 - ∏̂^†∐^ĥ_M_ĥ_1) + B_q(∏^∐^h_M_h_1 - 𝕀). Subsequently, by using triangle inequality and operator-valued Hölder's inequality, the definition of the operators and the norms below, and ideas from the proof of Theorem <ref>, with B_q = B_q(M) in Assumption <ref> holds for sufficiently large L with ẑ_M,N (ζ̂_M + γ_N)^-1 and z_M,N (ζ_M + γ_N)^-1 B̂_q - B_q_𝒮 ≤ẑ^1/2_M,NΨ̂_L - Ψ_L_𝒮 + Ψ'_[q]_𝒮∏^†∐^h_M_h_1 - ∏̂^†∐^ĥ_M_ĥ_1_ℒ + γ_Nz_M,NB_q(M)_𝒮. Thereby, similar as in the proof of Theorem <ref>, by using (<ref>)–(<ref>) and ·_ℒ≤·_𝒮, we get ∏^†∐^h_M_h_1 - ∏̂^†∐^ĥ_M_ĥ_1_ℒ ≤ 2√(2)Mẑ_M,NZ_M∏̂∏̂^∗ - ∏∏^∗_ℒ(∏̂_ℒ + ζ^1/2_M + ∏̂ - ∏_ℒ) + ζ^-1_M[ ẑ_M,N∏̂∏̂^∗ - ∏∏^∗_ℒ( ζ^1/2_M + ∏̂ - ∏_ℒ) + ∏̂ - ∏_ℒ ] = qZ_MΨ̂_L - Ψ_L_𝒮[ 2∏_ℒẑ_M,N(2√(2)M[ ∏_ℒ + ζ^1/2_M] + ζ^1/2_M) + 1 + o_(1)] . By plugging (<ref>) into (<ref>), for fixed K,M, and L=L_N→∞, since ẑ^a_M,N⟶ζ^-a_M after (<ref>), (<ref>) and Theorem <ref> for any a≠ 0, and γ_N = o(λ_KΛ_KL^3/2N^-1/2), we get B̂_q - B_q(M)_𝒮 = O_(λ^-1_KΛ_KL^2N^-1/2), and subsequently β̂_j - β_j(M)_𝒮 = O_(λ^-1_KΛ_KL^2N^-1/2) for each j. 
Moreover, due to α̂_i = ψ̂_i + B̂_[i]Ψ̂”'^ ⊤_[i], α_i(K,M) = ψ_i(K) + B_[i](M)Ψ”'^ ⊤_[i](K), α_i = β_j=β̂_j=0 for i>p and j>q, and elementary conversions, we obtain for L≥ p+q+i-1 for any i,p,q α̂_i-α_i(K,M)_𝒮 ≤ψ̂_i - ψ_i(K)_𝒮 + B̂_[i](M)_ℒΨ̂”'_[i] - Ψ”'_[i](K)_𝒮 + Ψ”'_[i](K)_ℒB̂_[i] - B_[i](M)_𝒮 ≤ (1 + B̂_[i](M)_ℒ)Ψ̂_L - Ψ_L(K)_𝒮 + Ψ”'_[i](K)_ℒB̂_q - B_q(M)_𝒮. Hence, due to B̂_[i](M)_ℒ≤B_[i](M)_ℒ + B̂_[i] - B_[i](M)_ℒ and B̂_[i] - B_[i](M)_ℒ≤B̂_q - B_q(M)_ℒ, we have α̂_i-α_i(K,M)_𝒮=O_(λ^-1_KΛ_KL^3/2N^-1/2) for any i. Therefore, (<ref>) is shown. Based on the arguments in the proofs of Theorems <ref>, <ref>, Assumption <ref>, for min{K_N, L_N, M_N}→∞ holds after z_M,N = O_(ζ^-1_M) according to Lemma <ref> with ∏̂∏̂^∗ - ∏∏^∗_ℒ=O_(K^-β), M^1+γζ^-1_MZ_M = O(K^β) and γ_N = o(ζ_MM^-γ), that B̂_q - B_q_𝒮 = O_(M^-γ), so β̂_j - β_j_𝒮= O_(M^-γ) for each j. Further, due to α̂_i = ψ̂_i + B̂_[i]Ψ̂”'^ ⊤_[i] in (<ref>), α_i = ψ_i + B_[i]Ψ”'^ ⊤_[i], and similar conversions as in Theorem <ref> give for sufficiently large L for each i, α̂_i - α_i_𝒮 ≤ (1 + B_[i]_ℒ)Ψ̂_L - Ψ_L_𝒮 + Ψ”'_[i]_ℒB̂_q - B_q_𝒮 + o_(M^-γ) = O_(M^-γ).
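As a companion to the estimation procedure analyzed above, the following sketch illustrates the structure of the Tychonoff-regularized Yule-Walker estimator Ψ̂_L = 𝒟̂(𝒞̂ + θ_N𝕀)^{-1}∐^{ĉ_K}_{ĉ_1} in a purely finite-dimensional setting: curves are replaced by coordinate vectors in a fixed d-dimensional basis and the data are simulated from a stable AR(1). All sizes (d, L, K, N), the ridge level θ, and the data-generating process are illustrative assumptions; the code is a sketch of the estimator's structure, not the implementation used for the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, L, K, N, theta = 6, 3, 12, 2000, 1e-2   # illustrative sizes and ridge level

# Coordinates of a functional AR(1) in a fixed d-dimensional basis:
# X_k = A X_{k-1} + eps_k with an arbitrary stable matrix A.
A = 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)
X = np.zeros((N, d))
for k in range(1, N):
    X[k] = A @ X[k - 1] + rng.standard_normal(d)

# Stacked lag vectors Z_k = (X_k, ..., X_{k-L+1}) paired with X_{k+1}.
Z = np.hstack([X[L - 1 - j:N - 1 - j] for j in range(L)])   # shape (N-L, d*L)
Y = X[L:]                                                   # shape (N-L, d)

C_hat = Z.T @ Z / Z.shape[0]   # empirical covariance of the stacked lag vector
D_hat = Y.T @ Z / Z.shape[0]   # empirical lag-1 cross-covariance

# Tychonoff-regularized inverse combined with the projection onto the K
# leading empirical eigenvectors of C_hat (K and theta trade off bias/variance).
evals, evecs = np.linalg.eigh(C_hat)
V_K = evecs[:, np.argsort(evals)[::-1][:K]]
Psi_hat = D_hat @ np.linalg.inv(C_hat + theta * np.eye(d * L)) @ (V_K @ V_K.T)

psi_1_hat = Psi_hat[:, :d]   # first block: estimate of the AR(1) operator
print("relative error:", np.linalg.norm(psi_1_hat - A) / np.linalg.norm(A))
```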
http://arxiv.org/abs/2407.12769v1
20240717175002
Search for light dark matter with NEWS-G at the LSM using a methane target
[ "M. M. Arora", "L. Balogh", "C. Beaufort", "A. Brossard", "M. Chapellier", "J. Clarke", "E. C. Corcoran", "J. -M. Coquillat", "A. Dastgheibi-Fard", "Y. Deng", "D. Durnford", "C. Garrah", "G. Gerbier", "I. Giomataris", "G. Giroux", "P. Gorel", "M. Gros", "P. Gros", "O. Guillaudin", "E. W. Hoppe", "I. Katsioulas", "F. Kelly", "P. Knights", "P. Lautridou", "A. Makowski", "I. Manthos", "R. D. Martin", "J. Matthews", "H. M. McCallum", "H. Meadows", "L. Millins", "J. -F. Muraz", "T. Neep", "K. Nikolopoulos", "N. Panchal", "M. -C. Piro", "N. Rowe", "D. Santos", "G. Savvidis", "I. Savvidis", "D. Spathara", "F. Vazquez de Sola Fernandez", "R. Ward" ]
hep-ex
[ "hep-ex", "astro-ph.CO" ]
Department of Mechanical and Materials Engineering, Queen’s University, Kingston, Ontario K7L 3N6, Canada Department of Mechanical and Materials Engineering, Queen’s University, Kingston, Ontario K7L 3N6, Canada LPSC-LSM, Université Grenoble-Alpes, CNRS-IN2P3, Grenoble, 38026, France [now at ]TRIUMF, Vancouver, BC V6T 2A3, Canada Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada Chemistry & Chemical Engineering Department, Royal Military College of Canada, Kingston, Ontario K7K 7B4, Canada [e-mail:]jeanmarie.coquillat@queensu.ca Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada LPSC-LSM, Université Grenoble-Alpes, CNRS-IN2P3, Grenoble, 38026, France Department of Physics, University of Alberta, Edmonton, T6G 2E1, Canada [e-mail:]ddurnfor@ualberta.ca Department of Physics, University of Alberta, Edmonton, T6G 2E1, Canada Department of Physics, University of Alberta, Edmonton, T6G 2E1, Canada Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada SNOLAB, Lively, Ontario, P3Y 1N2, Canada IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada LPSC, Université Grenoble-Alpes, CNRS-IN2P3, Grenoble, 38026, France Pacific Northwest National Laboratory, Richland, Washington 99354, USA [now at ]European Spallation Source ESS ERIC (ESS), Lund, SE-221 00, Sweden School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK Chemistry & Chemical Engineering Department, Royal Military College of Canada, Kingston, Ontario K7K 7B4, Canada School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK SUBATECH, IMT-Atlantique/CNRS-IN2P3/Nantes University, Nantes, 44307, France Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK Institute for Experimental Physics, University of Hamburg, Hamburg, 22767, Germany Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK Chemistry & Chemical Engineering Department, Royal Military College of Canada, Kingston, Ontario K7K 7B4, Canada Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK [also at]Particle Physics Department, STFC Rutherford Appleton Laboratory, Chilton, Didcot, OX11 OQX, UK LPSC-LSM, Université Grenoble-Alpes, CNRS-IN2P3, Grenoble, 38026, France School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK Institute for Experimental Physics, University of Hamburg, Hamburg, 22767, Germany Department of 
Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada Department of Physics, University of Alberta, Edmonton, T6G 2E1, Canada Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada LPSC-LSM, Université Grenoble-Alpes, CNRS-IN2P3, Grenoble, 38026, France Department of Physics, Engineering Physics & Astronomy, Queen’s University, Kingston, Ontario, K7L 3N6, Canada Aristotle University of Thessaloniki, Thessaloniki, 54124 Greece School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK [e-mail:]14favd@queensu.ca [now at ]Nikhef (Nationaal instituut voor subatomaire fysica) SUBATECH, IMT-Atlantique/CNRS-IN2P3/Nantes University, Nantes, 44307, France School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK Institute for Experimental Physics, University of Hamburg, Hamburg, 22767, Germany NEWS-G Collaboration § ABSTRACT The NEWS-G direct detection experiment uses spherical proportional counters to search for light dark matter candidates. New results from a 10 day physics run with a 135 cm in diameter spherical proportional counter at the Laboratoire Souterrain de Modane are reported. The target consists of 114 g of methane, providing sensitivity to dark matter spin-dependent coupling to protons. New constraints are presented in the mass range 0.17 to 1.2 GeV/c^2, with a 90% confidence level cross-section upper limit of 30.9 pb for a mass of 0.76 GeV/c^2. Search for light dark matter with NEWS-G at the LSM using a methane target R. Ward July 22, 2024 ========================================================================== Astronomical and cosmological observations strongly suggest the existence of non-baryonic dark matter (DM) in our universe <cit.>. Theories beyond the Standard Model provide DM candidates in the form of non-relativistic, weakly interacting, massive particles (WIMPs) <cit.> constituting our galactic halo. The NEWS-G experiment employs spherical proportional counters (SPCs) filled with various gases to search for WIMP-like particles scattering off target nuclei, producing nuclear recoil energies of up to several keV. First constraints using this technology were obtained with a [60]cm in diameter SPC filled with a neon and methane mixture <cit.>. The new S140 detector <cit.> consists of a grounded 135 cm in diameter SPC constructed with low background Oxygen-Free High-Conductivity copper (C10100) featuring a 0.5 mm inner shield made of high-purity electroformed copper <cit.>. It is equipped <cit.> with a multi-anode sensor, “ACHINOS” <cit.>, developed to ensure both primary charge collection and high charge amplification capabilities. It is held at the center of the SPC with a support rod, also used to apply a high voltage on the anodes, and from which the signal is read out. The 11 anodes are grouped into two readout channels: one comprising the five nearest anodes to the support rod (“Near” anodes) and the other with the six farthest (“Far” anodes). A UV laser system and a gaseous ^37Ar radioactive source provide in situ calibration to characterize and monitor the detector response. This letter reports new DM results based on data taken during the commissioning of this detector at the Laboratoire Souterrain de Modane (LSM), before its installation at SNOLAB. The SPC was filled with 135 mbar of methane and operated with a water shield for approximately 10 days, for a total exposure of 1.12 kg· day. 
This hydrogen-rich target makes the experiment particularly sensitive to WIMP-like DM candidates with 𝒪(1 GeV/c^2) masses, giving access to spin-dependent (SD) DM-proton couplings. This region is favoured by recent theoretical models <cit.>. After an energy deposition within the SPC, ionization electrons drift towards the anodes, where they are multiplied by an avalanche process close to the anode. The measured signal from a single primary electron reaching the anode is a combination of the current induced by the avalanche ions as they drift towards the cathode, and the response of a charge-sensitive preamplifier. For events above ∼ 30 primary electrons, which includes some of the calibrations used in this work, the event amplitude, defined as the integral of the pulse, is used as an estimator of the event's energy. Additionally, for point-like energy depositions, the 10% to 90% risetime of the integrated pulse is proportional to the diffusion experienced by primary electrons. As this increases with the radial position of the event, selection cuts on the risetime are used to reject surface background events. For events with fewer than ∼ 10 primary electrons , the signals are processed <cit.> to obtain a series of delta impulses, each corresponding to the arrival of an ionization electron to the anodes, with varying amplitudes proportional to the size of each avalanche. The relatively large longitudinal diffusion of the electrons in methane results in 𝒪(100 μ s) spread in their arrival times. This allows for the identification of individual primary electrons in processed traces for low energy events, such as those generated by low-mass WIMP recoils. A peak-finding (PF) algorithm based on the ROOT TSpectrum Search method <cit.> is applied to estimate the number of electrons present in the waveform, their arrival times, and avalanche size. The number of observed peaks, which is linked to deposited energy, constitutes the first parameter on which the present analysis is based. Although this is connected to the number of primary electrons generated in the event, a fraction of them may be lost in baseline noise fluctuations, or electrons arriving in close temporal proximity might not be resolvable. The second parameter is the time separation between the first and the last peak. This parameter is used to statistically discriminate contributions from different background sources. The relative signal of the two readout channels is used as an event quality selection. Electron multiplication occurring near an anode induces also a smaller signal of opposite polarity to the other anodes, as expected by the Shockley-Ramo theorem <cit.> and discussed in Ref. <cit.>. The absence of this cross-channel signal for an event localised at one anode suggests it did not originate from ionization electron amplification in the detector, and hence was removed (“anti-spikes cut”). The overall principle of the analysis is based on the comparison of the time separation distributions for 2, 3, and 4 observed peaks in data, and the expected backgrounds and signal. Only events collected from the “Far” hemisphere, where the electric field is more homogeneous, are considered, defining the analysis fiducial volume. Following an α-decay on the surface of the detector, there is a transient increase in the rate of single-electron events. Therefore, events occurring within 5 s after every α-particle detection were also discarded. 
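To illustrate how the two analysis observables and the event selection described above fit together, the following sketch processes toy event records. The record format, the field names, and all numerical values are invented for illustration only; this is not the collaboration's analysis code.

```python
from dataclasses import dataclass

@dataclass
class Event:
    peak_times_us: list   # arrival times of the identified peaks (microseconds)
    channel: str          # "far" or "near" ACHINOS readout channel
    t_trigger_s: float    # event timestamp (seconds)

def analysis_observables(event):
    """The two observables used in the analysis: the number of observed peaks
    and the time separation between the first and the last peak."""
    n_peaks = len(event.peak_times_us)
    dt_us = max(event.peak_times_us) - min(event.peak_times_us) if n_peaks > 1 else 0.0
    return n_peaks, dt_us

def select(events, alpha_times_s, veto_s=5.0):
    """Keep far-channel events with 2 to 4 observed peaks that do not fall
    within `veto_s` seconds after a detected alpha particle."""
    kept = []
    for ev in events:
        n, dt = analysis_observables(ev)
        if ev.channel != "far" or not (2 <= n <= 4):
            continue
        if any(0.0 <= ev.t_trigger_s - ta < veto_s for ta in alpha_times_s):
            continue
        kept.append((n, dt))
    return kept

events = [Event([102.0, 180.5, 240.0], "far", 12.3),   # 3 peaks, 138 us apart
          Event([50.0], "far", 20.0)]                  # single peak: rejected
print(select(events, alpha_times_s=[5.0]))             # [(3, 138.0)]
```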
The collected dataset is separated into two subsets: 23% is the “test data” on which the analysis is tuned, and the remaining 77% is “DM search data” on which the search is performed. Extensive calibrations of the detector response were carried out for this physics campaign. A UV laser calibration system was implemented as described in Ref. <cit.>, producing a photodiode-tagged source of photoelectrons from the inner surface of the SPC vessel. The laser is coupled to an optical fibre which is fed into the active volume of the SPC, the bare end of which is directed at the far hemisphere of the SPC, from which photoelectrons are extracted. During data collection, the UV laser was operated at a high intensity to monitor changes in the detector response in real-time. This revealed that over the course of physics data collection, the detector gas gain was reduced by approximately 11%, due to deterioration of the gas quality during operation in sealed mode. This is modelled as a piece-wise linear function for the detector gain over time. When operated at low intensity, the UV laser can induce a signal dominated by single photoelectron events, which is used to model the detector response. The electron avalanche multiplication is modelled with the Polya distribution <cit.> with shape parameter θ, scaled to the mean gain of the detector ⟨ G ⟩. During the physics campaign, approximately 1 hour of calibration with the UV laser was obtained for each day of data-taking. The obtained estimated of the θ parameter is 0.125^+0.026_-0.023, indicating an approximately exponential distribution. Additionally, the UV laser was used to determine the detector trigger efficiency. An emulation of the online trigger algorithm was applied to the laser calibration data, and it was found that 64_-3^+4 % of single-electron and as much as 93_-1^+2 % of double-electron events fulfill the trigger requirement. For multiple-electron events close to the anodes, the increased electron pile-up probability increases the trigger efficiency. The performance of the PF method was also evaluated using laser data. On average, 62.5 ± 0.3 % of single electrons fulfill the amplitude threshold for peak detection, with a small time dependence as the detector gain decreased during the campaign. The minimum time separation required for the PF method to distinguish two electrons was calibrated with double-peak laser data; the observed rate drops off for events with very short time separations, which was fit with an error function with a threshold of 8.2±0.4 μ s. The PF parameters were chosen so as to minimize the probability of baseline noise generating a false peak, to avoid single-electron events being reconstructed as containing two peaks. A probability of 0.03% per search window was obtained, allowing this effect to be neglected. From these results, a model was developed to compute the probability that events with n primary electrons and expected diffusion time σ will be reconstructed as having k peaks. The resulting predictions for three and four-peak events were consistent with the respective laser calibration data. At the end of the physics campaign, ^37Ar was injected into the detector. Decaying via electron capture, ^37Ar produces low-energy electrons and X-rays uniformly throughout the detector volume. The dominant total energy depositions per event are 270 eV and 2.8 keV <cit.> with smaller contributions from different decay paths and partially escaped decays <cit.>. 
This calibration data is used to measure in situ the overall energy response, including the mean avalanche gain of all six far-channel anodes. The ionization yield W(E) of the detector for interactions induced by electrons and photons, collectively referred to as electronic recoils, is parametrized using the expression of Ref. <cit.>, with free parameters U for the asymptote, and W_0 as the corresponding high energy limiting value. Independent measurements of W_0 and U were performed[Paper in preparation] and used as a prior for the present calibration, leading to a result of W_0 = 30.0_-0.15^+0.14 eV and U = 15.70^+0.52_-0.34 eV. The statistical dispersion of ionization is controlled by the Fano factor <cit.>, whose value was obtained from the literature <cit.> as an exact number, given there were no uncertainties provided. This dispersion was modelled with a COM-Poisson distribution <cit.>. Primary electron losses through attachment are parameterized using this calibration data by assuming a survival probability varying linearly with radial position. The ^37Ar data was used to characterize electron diffusion as a function of radius, following an empirical relationship of σ(r) = σ_max (r/r_max)^α, with σ being the standard deviation of the electron arrival time, r the radius of the interaction, the subscript max indicating the value at the cathode surface (σ_max=123.5±1.1 μ s for physics runs), and α = 3.05±0.15. This diffusion model was combined with the PF performance model to generate Monte Carlo (MC) simulations of two, three, and four-peak events in ^37Ar data. The simulation was fine-tuned with the corresponding calibration data. The detector response to nuclear recoils is affected by the ionization quenching factor (QF), defined as the ratio of ionization energies released by a nuclear and an electronic recoil of the same kinetic energy. Two independent measures of the QF for hydrogen atoms in CH_4 were used in this analysis: a) A NEWS-G measurement with the COMIMAC facility was performed between 2 and 13 keV <cit.>; and b) a QF curve was estimated from W-value measurements in the energy range 510 eV to several hundreds of keV <cit.>. The latter was conservatively scaled down by 15% to be in agreement with the COMIMAC measurement. Below 510 eV, the extrapolation QF(E_K) = 0.428 + 0.224 ln(E_K) was used based on the scaled W-value curve. This approach is conservative compared to an extrapolation based on the widely used Lindhard-model <cit.> (see Fig. <ref>). The fiducial volume acceptance of the detector’s far channel for 2 through 15 electron events was determined as a function of their radial position using an MC electron drift simulation. COMSOL was used to model the electric field <cit.>, and gas property data was obtained from MAGBOLTZ <cit.>. For example, 2-electron (15-electron) events originating from the cathode surface of the SPC have a 73.83 ± 0.04 % (64.72 ± 0.05 %) probability of fulfilling the fiducialization cut. The fiducialization efficiency was validated against ^37Ar calibration data. The efficiencies of the additional event selection requirements, were also estimated. The α-particle cut results in a 14% dead-time. The “anti-spikes” cut removes 95% of events not associated with an electron avalanche while keeping 77% of ionization electrons. Removing tagged laser calibration events reduces the effective runtime by 0.48%. 
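The piecewise treatment of the quenching factor can be summarized in a short sketch. The low-energy extrapolation is the formula quoted above (the sketch assumes E_K is expressed in keV, which keeps the QF below one); above 510 eV the analysis uses the scaled W-value-based curve, which is not reproduced here, so a placeholder function stands in for it. The clipping at zero and all numerical choices are assumptions of the sketch.

```python
import math

def qf_low(e_kev):
    """Low-energy extrapolation QF(E_K) = 0.428 + 0.224 ln(E_K), applied below
    510 eV; E_K is assumed to be in keV and the result is clipped at zero."""
    return max(0.0, 0.428 + 0.224 * math.log(e_kev))

def ionization_energy(e_kev, qf_high=qf_low, conservative=False):
    """Electron-equivalent (ionization) energy of a nuclear recoil with kinetic
    energy e_kev.  Above 510 eV the scaled W-value-based QF curve would be used;
    `qf_high` is only a placeholder for it.  With conservative=True the QF is
    set to zero below 510 eV, as in the systematic variation described above."""
    if e_kev < 0.510:
        qf = 0.0 if conservative else qf_low(e_kev)
    else:
        qf = qf_high(e_kev)
    return qf * e_kev

for e in (0.2, 0.51, 2.0):   # recoil kinetic energies in keV
    print(e, round(ionization_energy(e), 4), round(ionization_energy(e, conservative=True), 4))
```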
After applying the previous selection criteria, a rate of 403 mHz of single-peak events was observed in the test data, compared to only 6.5 and 1.1 mHz of two and three-peak events, respectively. In the absence of an explanation for such a high single-electron event rate, it was decided to keep only events with two or more peaks to calculate WIMP exclusion limits. The combined detection and selection cut efficiency for volume events with up to 15 electrons for the physics conditions of this work are shown in Tab. <ref>. After data quality requirements, the test data contained 6.5, 1.1, and 0.78 mHz of 2, 3, and 4-peak events, respectively. The corresponding time separation distributions were inconsistent with the expectation for events happening in the detector volume, implying the presence of backgrounds. In preparation for fitting the data, the same MC simulations used for calibrations were adapted for WIMP recoils of different masses and three background sources, identifiable by their time separation distributions: contaminants on the internal surface of the detector, particle interactions in the gas volume, and accidental coincidences. For the WIMP elastic scattering recoil energy spectrum, the standard halo parametrization with ρ_0 = 0.3 GeV/c^2/cm^3, v_0 = 238 km/s, v_Earth = 232 km/s, and v_esc = 544 km/s <cit.> was used, together with the Helm form factor for the nuclear cross-section <cit.>, which is nearly identical to 1. The binding energy of hydrogen to CH^+_3 <cit.> was subtracted from the recoil kinetic energy. The rate of WIMP events in the three categories was scaled by the WIMP–proton spin-dependent cross-section, σ_SD_p, which is the parameter of interest. For surface and volume backgrounds, an underlying energy distribution of the form R(E) = A + Be^-E/C was assumed, where A, B, C are free parameters for surface and volume contributions separately. The A term represents backgrounds with a uniform energy distribution, such as from Compton interactions; the Be^-E/C term is a generic parametrization for rising background rates at low energies, as observed in multiple DM direct detection experiments <cit.> including the previous NEWS-G detector <cit.>. For surface events, the large diffusion time made overlapping electrons less frequent. This, in turn, made the time separation distribution of surface events insensitive to the underlying energy distribution, modifying only the relative rates of observed 2, 3, and 4-peak events. Conversely, for volume events, their shorter diffusion time increased the frequency of overlaps and hence the proportion of events with higher number of electrons reconstructed as only having 2 to 4 peaks. The last background considered was accidental coincidences of unrelated events within 523 μ s of the trigger time or false positives of the PF method due to baseline noise fluctuations. These were modelled with an MC assuming a uniform time distribution of peaks within the search window, and corroborated by comparing with the distribution of 2, 3, or 4-peak events after an α-particle, when the increased single-electron event rate produced high rates of coincidences. The rates for each of 2, 3, and 4-peak coincident events were left as free independent parameters. The 2, 3, and 4-peak data were jointly fit with these four components using the profile likelihood ratio (PLR) test statistic <cit.>. The fit results on the DM search data are shown in Table <ref>, with an overall background rate of a few mHz. 
The main contributions were from surface contamination and accidental coincidences, with a smaller contribution from volume background events. A constraint on the WIMP–proton spin-dependent cross section was obtained by profiling the likelihood ratio over the rates of the surface and accidental coincidence events; volume background events were fixed to zero due to their near-degeneracy with WIMP events. Pseudo datasets were simulated based on the best fit of the test data for various WIMP masses, with cross sections fixed to different values and scaled to the exposure of the DM search data. The distribution of the PLR from these datasets was used to obtain the threshold value of the test-statistic for the 90% confidence level (C.L.) cross-section exclusion limit. In Fig. <ref>, the fit result is shown for a WIMP with mass 0.76 GeV/c^2 and a σ_SD_p fixed at 30.9 pb (excluded at 90% C.L.). The resulting upper limit curve is shown in Fig. <ref>. Systematic uncertainties from the exposure or selection efficiencies were accounted for by taking their conservative values at 95% C.L., and were found to have a negligible effect. The effect of systematic uncertainties on the DM candidate-induced nuclear recoil ionization process was considered through variations of the W_0 and U parameters. Negligible effects were observed. The dominant systematic uncertainty was the choice of the QF extrapolation, where the nominal approach was replaced by assuming the QF becomes zero for energies below 510 eV. The curve in Fig. <ref> shows the worst-case scenario among all considered combinations. New constraints on spin-dependent DM interactions with protons are presented in the mass range 0.17 to 1.2 GeV/c^2, with a 90% confidence level cross section upper limit of 30.9 pb for a mass of 0.76 GeV/c^2. The help of the technical staff of the Laboratoire Souterrain de Modane is gratefully acknowledged. This research was undertaken, in part, thanks to funding from the Canada Excellence Research Chairs Program, the Canada Foundation for Innovation, the Arthur B. McDonald Canadian Astroparticle Physics Research Institute, Canada, the French National Research Agency (ANR-15-CE31-0008), and the Natural Sciences and Engineering Research Council of Canada. This project has received support from the European Union's Horizon 2020 research and innovation programme under grant agreements No. 841261 (DarkSphere), No. 845168 (neutronSPHERE), and No. 101026519 (GaGARin). Support from the U.K. Research and Innovation — Science and Technology Facilities Council (UKRI-STFC), through grants No. ST/V006339/1, No. ST/S000860/1, No. ST/W000652/1, No. ST/X005976/1, and No. ST/X508913/1, the UKRI Horizon Europe Underwriting scheme (GA101066657/Je-S EP/X022773/1), and the Royal Society International Exchanges Scheme (IES\R3\170121) is acknowledged. Support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy — EXC 2121 “Quantum Universe” — 390833306 is acknowledged.
http://arxiv.org/abs/2407.12433v1
20240717094126
Plausibly Deniable Content Discovery for Bitswap Using Random Walks
[ "Manuel Wedler", "Erik Daniel", "Florian Tschorsch" ]
cs.NI
[ "cs.NI" ]
Plausibly Deniable Content Discovery for Bitswap Using Random Walks Manuel Wedler Humboldt-University of Berlin manuel@wedler.dev Erik Daniel TU Dresden erik.daniel@tu-dresden.de Florian Tschorsch TU Dresden florian.tschorsch@tu-dresden.de July 22, 2024 ==================================================================================================================================================================================== § ABSTRACT Bitswap is the data exchange protocol for the content-addressed peer-to-peer overlay network IPFS. During content discovery, Bitswap reveals the interest of a peer in content to all neighbors, enabling the tracking of user interests. In our paper, we propose a modification of the Bitswap protocol, which enables source obfuscation using proxies for content discovery. The proxies are selected via a random-walk. Enabling content discovery through proxies introduces plausible deniability. We evaluate the protocol modification with a simulation. The protocol modification demonstrates enhanced privacy, while maintaining acceptable performance levels. P2P Networks, Overlay Networks, Privacy § INTRODUCTION Peer-to-peer (P2P) overlay networks can be used for decentralized data exchange, providing an alternative to centralized data storage. Due to the P2P architecture, anyone can join the network to exchange data. A prime example for such a network with many active peers is the InterPlanetary File System (IPFS) <cit.>. IPFS incorporates content-addressed exchange of data, using the Bitswap protocol for content retrieval. Bitswap operates on blocks, which are data chunks each identifiable by a self-verifiable content identifier (CID). The CID is used for content discovery and retrieval. For content discovery, Bitswap queries all neighbors for a CID. If this initial request yields no providers, Bitswap asks the Kademlia-based Distributed Hash Table (DHT) of IPFS for content providers. While sending the request to all neighbors improves fault tolerance and speed of the content discovery process <cit.>, it also reveals the interest to many unconcerned peers. It is possible for passive participants to observe large parts of request <cit.>, enabling profiling of peers. In this paper, our goal is to improve the privacy of content discovery. We therefore introduce RaWa-Bitswap, a random walk-based proxy approach. We propose to outsource content discovery to other peers, proxies, which perform the lookup on behalf of the peer. The proxy is chosen via a random walk, relaying the request over peers to the proxy. This obfuscates the original source of a request and provides each involved peer with plausible deniability. Any peer, requestor, relay, or proxy, can plausibly claim that the lookup itself is executed on behalf of another peer. Such a random walk is also used by Dandelion <cit.> and Dandelion++ <cit.> to improve the transaction privacy in a cryptocurrency. In Dandelion/Dandelion++, transactions are relayed through the P2P network via a random walk to a proxy which diffuses the transaction. In contrast to Dandelion, we use a random walk to select a proxy for content discovery to improve the privacy of Bitswap. More specifically, RaWa-Bitswap forwards requests on a random walk to a proxy. The proxy performs the normal Bitswap content discovery and returns information about content providers over the same path back to the requestor. The requester can use the information to directly retrieve content, revealing the peer's interest to only one content provider. 
We evaluate RaWa-Bitswap with a simulation and compare RaWa-Bitswap to the normal Bitswap in terms of privacy and performance properties. RaWa-Bitswap significantly enhances privacy for Bitswap users against passive adversaries, while preserving performance under the tested conditions. Therefore, RaWa-Bitswap shows reduced detection probabilities of individual peer's interests and an increased level of plausible deniability. Due to the limited forwarding of the request, the load on the network imposed by the method is significantly lower than similar network-level privacy-enhancements <cit.>. As our contribution, we propose a design of a random walk-based forwarding protocol, enhancing privacy under a request-response model. We provide a proof-of-concept (PoC) implementation of the proposed design based on Bitswap of the boxo library <cit.> v0.8.0. Furthermore, we evaluate the effect of the changed content discovery on privacy and performance. The remainder is structured as follows: In <Ref>, we present related work. <Ref> explains the functionality of RaWa-Bitswap and <Ref> explains details of the PoC. In <Ref>, we evaluate our approach using a simulation. <Ref> concludes the paper. § RELATED WORK The selection of a random node of the network to handle requests to obfuscate the source of the request is a common method. In combination with layer encryption, this can lead to unlinkability as in Tor <cit.>. A similar idea is used by Tarzan <cit.>, ShadowWalker <cit.>, and Torsk <cit.> where nodes are randomly selected from the P2P network. Although, the selection process of Torsk is vulnerable to denial-of-service attacks <cit.>. All these methods and proposals use rather a random node selection, where the selection process is completely controlled by the origin, than a random walk. Through such methods, the origin cannot deny that it executes a request. A random walk, where each node chooses the next hop by itself provides plausible deniability for all peers on the path. An early example is Crowds <cit.>, where requests are forwarded over a random walk to a proxy, which executes a web transaction on behalf of another user. The next hop of the random walk is chosen from all known peers in the network. In AP3 <cit.>, a message is forwarded over a random walk until one peer decides to send the request to the destination. Each hop is chosen based on the lookup of a random key in a DHT, which can be misused <cit.>. In Rumor Riding <cit.> and Garlic Cast <cit.>, two random walks are started. The random walk ends when both paths meet. Clover <cit.> uses a random walk to broadcast a message to the network. The next hop is chosen based on the connection type, a message is propagated only to peers with the same connection type. The authors distinct connections based on initiator of the connection: inbound or outbound connections. Dandelion <cit.> and Dandelion++<cit.> offer formal anonymity guarantees for message broadcasts. The random walk uses a privacy-subgraph for forwarding messages. The privacy of Dandelion and Dandelion++ was investigated by  <cit.>. Clover, Dandelion and Dandelion++ do not need an answer to the message. RaWa-Bitswap also uses a random walk to select the proxy. The random walk is based on the privacy-subgraph used in Dandelion/Dandelion++. Our method focuses on content discovery in Bitswap and requires the proxy to be able to route responses back, along the requests' random walks. 
In the context of research of IPFS, there exists various research investigating IPFS's network <cit.>, performance <cit.>, and its subcomponents <cit.>. A part of the research of IPFS, specifically focus on Bitswap.  <cit.> showed the privacy problems of Bitswap and its easy exploitation for monitoring peers interests. While  <cit.> mainly focus on performance improvement for Bitswap, their proposed method, forwarding of requests, also provides a privacy improvement. Forwarding already provides some plausible deniability, albeit easily circumvented through passive observations. An improvement to pure forwarding was proposed in <cit.>, where the authors introduce trickling, a diffusion spreading, with the aim to obfuscate the source. Forwarding and trickling, both introduce a considerable load on the network although lower in the latter. An alternative approach to improve the privacy in Bitswap was proposed in <cit.>. Instead of concealing the source of a request, the authors focus on concealing the content of a request. The concealment method can introduce costly computation due to the usage of cryptographic methods. RaWa-Bitswap makes a clear distinction between discovery and retrieval. The discovery uses a random walk, while the retrieval is performed through a direct connection. Gnutella 0.4 <cit.> makes a similar distinction where messages are broadcast through the network and a is routed back with information about content providers. In general, the broadcast of messages as used in Gnutella, forwarding <cit.>, or the trickling approach proposed in <cit.> impose a high load on the network. RaWa-Bitswap improves the privacy, while having a much smaller footprint on the overall network load. Although, since the dissemination of the request involves only a subset of peers compared to a broadcast, the retrieval time might be lower. Compared to the query obfuscation proposed in <cit.>, RaWa-Bitswap provides plausible deniability in interests themselves. Query obfuscation still exposes the interest in something, while in RaWa-Bitswap a request might be forwarded on behalf of other nodes. From a performance perspective the method is computation heavier and scales poorly with the number of stored blocks by a peer. RaWa-Bitswap's performance penalty only stems from network latency and the probabilistic length of the path. § CONTENT DISCOVERY WITH RANDOM WALKS IPFS allows the discovery and exchange of data over a P2P overlay network. The process is handled by the Bitswap protocol. In Bitswap, content is split into blocks, which are identified via an immutable identifier, the content identifier (CID). The CID consists of codec information and a multi-hash, which contains a hash, the digest length, and the used hash function. Multiple blocks belonging to a single structure, file or directory, are linked through a Merkle Directed Acyclic Graph (DAG). The Merkle DAG is constructed bottom-up, starting from the leaves up until a root block. The CID of the root block is the root CID. To be able to retrieve a file, the peer needs to know the root CID. Content is discovered by querying all neighbors for the CID or by using a fallback system a DHT. The discovery process reveals the interest in a CID to many unconcerned peers. Although, the CID is only a pseudonymous representation of content, it can be used to retrieve content, which reveals the interest in the specific data, and hence poses a privacy risk <cit.>. In the following, we first provide an overview of our general approach. 
Afterwards, we explain the functionality of the default Bitswap protocol before providing a more detailed explanation of RaWa-Bitswap, its subcomponents, parameters, and a discussion of some design decisions. For the terminology, we denote a peer interested in the content as requester, an intermediate peer relaying request due to the random walk as relay, a peer which conducts the lookup on behalf of a peer as proxy, and a peer in possession of the content as content provider. §.§ Overview In order to improve the privacy of Bitswap, it is necessary to conceal the interest or the origin of a request. Our privacy-enhancement aims to improve the privacy of the discovery process, by using a random peer of the network as a proxy. Forwarding the message through the random walk increases the difficulty for attackers to trace the origin of a message. By introducing message forwarding with random walks, each peer sending a message gains plausible deniability, since a message can be sent due to the random walk. One prime example for the utilization of random walks to increase the privacy in P2P networks is Dandelion <cit.>. The random walk of Dandelion and its follow-up Dandelion++ <cit.> are the base for our method, a Random-Walk-Bitswap (RaWa-Bitswap). In contrast to Dandelion/Dandelion++, which uses the random walk to broadcast a message, Bitswap also requires a response from the proxy to complete content retrieval. The main goal of RaWa-Bitswap is to enhance privacy of content requests in general. Therefore, the protection method mainly aims to protect against passive listeners and not active traffic manipulating adversaries, or targeted attacks. RaWa-Bitswap is split into four phases: privacy-phase, proxy-phase, return-phase, exchange-phase. <ref> provides a rough overview of our method. In the privacy phase the proxy is selected by utilizing a random walk. In a random walk, a neighboring peer is selected at random to which the message or request is relayed. The selected peer decides based on a probability to either become a proxy or to continue the walk. If the walk continues, the selection process is repeated and the relay randomly selects a new peer. Through this process the random walk eventually terminates at a random peer, which becomes the proxy. At the proxy the protocol enters the proxy-phase. In the proxy phase the proxy executes the normal Bitswap discovery by querying all neighbors. Once the proxy finds a content provider, RaWa-Bitswap enters the return-phase. In the return phase, the proxy returns information about the content provider(s) via the path of the random walk to the requestor. After the requester received a content provider, the exchange phase begins. In the exchange phase, the requester connects to a content provider and requests the data. §.§ Vanilla-Bitswap The default Bitswap protocol, which we denote here as Vanilla-Bitswap, uses different request and response types, which are wrapped in the same envelope, a Bitswap message. The requests are , , and , and the responses are , , and . Content retrieval follows a pattern using the requests. First, it is necessary to identify content providers. The content discovery is handled by the request. A peer sends to all directly connected peers, its neighbors, a message, which is answered with a in case the neighbor stores the block, or can be answered with a , in case the neighbor does not store the block. If a content provider is found, the block is requested with a , which the content provider answers with . 
Once the block is retrieved, all peers, which received a of the block, receive a . In summary, content is discovered by announcing the interest with a to all neighbors and satisfaction or disinterest in content is announced with . In case none of the neighbors can provide the content, Bitswap needs a fallback subsystem to find new peers which might store the content. IPFS uses a Kademlia-based DHT as the fallback subsystem to search for content providers, which is queried after a timer t_1 expires. §.§ RaWa-Bitswap In RaWa-Bitswap, a random walk is prepended to the discovery process of Vanilla-Swap, resulting in a proxy executing the discovery of content providers. The proxy executes only content discovery () and returns the discovered content providers back over the random path of the random walk. The requester executes the content retrieval () by itself by connecting to the provided content providers. The phases and message sequence with only one relay node is shown exemplary in <ref>. Since each peer probabilistically decides whether to relay or become a proxy, it is possible that the first peer after the requester already becomes the proxy. Respectively, it is also possible that there are more peers between requester and proxy. In the privacy-phase, instead of sending a to all neighbors, the peer starts a random walk. For the random walk, we introduce a new request type a , indicating the relay of the lookup. Therefore, the requester sends a request for the interested CID to a selected neighbor. Each peer relays the request with probability 1-p or transitions into the proxy phase with probability p. The relaying peers store the CID, the predecessor, and successor of a request. The proxy only stores the predecessor and CID of the request. In the proxy phase, the peer executes the Vanilla-Bitswap lookup, sending a to all neighbors and after a time-out doing a DHT lookup for the item. After the proxy discovered content provider(s), the return phase begins. In the return phase, a list with the content providers is sent back with a response along the path of the random walk to the original requester. The list contains only the peer identifier of the content provider. In case the proxy is aware of the contact address of a content provider, the information is also included in the list. Once the original peer receives the list of content providers, the exchange phase begins. During the exchange phase, the peer selects one content provider to connect to and requests the block with a . As a fallback solution for the content discovery, the requester has a fallback search timer u. After the timer times out, the requester performs a DHT lookup. Due to the dependence on other peers, there is an additional functionality to make the protocol more resilient to churn, joining and leaving of peers. Churn is a problem, since a relay may depart from the network before the discovery process is finished. If a relay departs the response of a proxy can no longer reach the requester. To reduce the impact of incomplete walks, the requester periodically re-transmits the request based on a timer t_0. A relay receiving the same CID of a predecessor, directly re-transmits the request to the same successor. If the successor left the network and is no longer reachable, the relay directly transitions into the proxy phase, instead of selecting a new peer from the privacy-subgraph. Therefore, a re-transmit results in the same path, although the path might be shorter, due to absent successor. 
This ensures that the path is complete and missing responses are a result of lacking content providers. §.§ Privacy-Subgraph The random walk serves two purposes, it selects a random peer from the network as a proxy, and provides plausible deniability for each peer during the selection process. The selection of the next peer needs some considerations. In RaWa-Bitswap, the next peer is selected based on a privacy-subgraph. The usage of a privacy-subgraph for selecting the next peer was introduced in Dandelion and Dandelion++ to increase request mixing. The aim is that requests from different requesters move along the same link(s), increasing the difficulty for adversaries to find the source of a request. The privacy-subgraph is a directed graph with a specific in- and out degree. In case of Dandelion the privacy-subgraph is a 2-reqular graph (1 successor, 1 predecessor). For Dandelion++, the privacy-subgraph was changed to a directed 4-regular graph (2 predecessors, 2 successors), allowing the peer to choose between two peers for the message relay. The construction of the privacy-subgraph in RaWa-Bitswap is based on the construction method of Dandelion++, with the difference of using a configurable out-dgree η. The construction is asynchronous and non-interactive. For each neighbor, a set of η neighbors are uniformly at random selected as successors. Therefore, a graph is generated in which each node has a set of η directed edges. The constructed graph is not an exact regular graph, but its expected node degree is d = 2η. Dandelion's privacy guarantees depend on a private privacy-subgraph, relaxed in Dandelion++ with the higher node degree, and investigated in <cit.>. The authors of <cit.> propose a Bayesian framework for evaluating the anonymity of P2P network schemes and applied it to Dandelion and Dandelion++. By using the entropy of potential request origins as an anonymity metric, the authors found that the anonymity of both protocols is limited. They discovered that increasing the node degree leads to better anonymity. Furthermore, the authors assume the peer's privacy-subgraph is known. In general, the privacy-subgraph can be learned by connecting to all peers and sending many requests to one peer. Based on the propagation of requests, the privacy-subgraph can be estimated. However, the frequent reconstruction of the privacy-subgraph in Dandelion/Dandelion++ is not considered in <cit.>. As a consequence, we periodically reconstruct the privacy-subgraph and make the out-degree configurable. The concrete value of the reconstruction timer r depends on the expected load at the peer and should be adjusted accordingly. As an initial value, we set r to 540 s as used in Dandelion. §.§ Parameter RaWa-Bitswap has some properties that can be adjusted with parameters. <Ref> provides an overview of all parameters used in RaWa-Bitswap. The value of these parameter needs to be considered for the privacy-utility trade-off of RaWa-Bitswap. In the following, we focus on p and the timers t_0, t_1, and u. The parameters η and r, relevant for the privacy-subgraph, have been discussed in the previous section. The value of the proxy transition probability p decides the approximate path length of the walk. In general, a lower value of p decreases performance, while increasing the privacy, due to the longer path. The probability X_p that a peer becomes a proxy within e hops with p can be calculated by: X_p(e) = 1-(1-p)^e. 
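To make the effect of p concrete, the short Python snippet below (not part of the RaWa-Bitswap PoC; the function names are ours) evaluates the formula X_p(e) = 1-(1-p)^e and returns the smallest number of hops covering a given fraction of walks; for p = 0.2 it reproduces the 90 %/11-hop figure quoted next.

```python
# Sketch: evaluating the geometric path-length model X_p(e) = 1 - (1 - p)^e.
# Names are illustrative and not taken from the implementation.

def prob_proxy_within(e: int, p: float) -> float:
    """Probability that the walk has reached a proxy within e hops."""
    return 1.0 - (1.0 - p) ** e

def percentile_path_length(p: float, q: float = 0.9) -> int:
    """Smallest number of hops e such that X_p(e) >= q."""
    e = 1
    while prob_proxy_within(e, p) < q:
        e += 1
    return e

if __name__ == "__main__":
    print(percentile_path_length(p=0.2))   # 11 hops cover 90 % of the walks
```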
This means with p=0.2, 90 % of the paths have a length ≤ 11 and with p=0.3, 90 % of the paths have a length ≤ 9. The authors of Dandelion++ evaluated that values for p ≤ 0.2 have only a limited impact on the privacy. Although higher values have a stronger impact on the privacy exposure, the improvement of the performance might justify higher values. The re-transmit timer t_0 mitigates the impact of churn on the lookup process. Lower values increase resilience against churn and increase the load on the network. The value depends on the network behavior. We propose to use t_0=1 s. The proxy fallback timer t_1 is the same timer as used in Vanilla-Bitswap for querying the fallback system, the DHT. In Vanilla-Bitswap, this timer is set to 1 s, at the time of writing. The requester fallback lookup timer u is an additional backup solution in case the proxy lookup takes too much time. The fallback lookup by the requester should be avoided to increase the privacy, but can speed up lookup times in case of long delays. The value of u should be at least higher than the fallback timer of the proxy t_1 and the re-transmit timer t_2. We propose to set u=2 · t_0, which is 2 s. §.§ Discussion In RaWa-Bitswap, the proxy only executes content discovery. The content providers are routed back to the requester. As an alternative, the proxy could also retrieve blocks, relaying the responses. A response is compared to a very large with kilobytes compared too few hundreds of bytes. This increases the load on the network and further increases the time of message transmission. The newly introduced could also be large for popular files with many content providers, however, in general, the size is also low. Excluding content retrieval reduces the load on the network and reduces RaWa-Bitswap's susceptibility to churn. The lower load reduces the round trip time of content discovery, which reduces the risk that an intermediate peer leaves the network before the content discovery is finished. Another aspect is that files can be divided into multiple blocks, which are likely to be located at the same peer. Once a content provider for the root block is discovered, the rest of the blocks may also be retrieved from the same peer. If the proxy only executes content discovery, the requester learns the content provider and can request followup blocks directly. Focusing only on content discovery also presents its own challenges. The interest in a CID is still revealed through requests. In RaWa-Bitswap, a request is exclusively sent by a requestor. Therefore, plausible deniability is lost as soon as the requester sends the request. This is deemed acceptable, as the protocol's objective is to ensure network-wide privacy, rather than focusing on the individual level. However, this behavior can be exploited to find the origin of a request. A peer receiving a immediately responds with a that contains only itself or a peer controlled by the attacker. As a result, the attacker will receive a request for the item from the original requester. In this case the attacker does not even need to store the block. The identification risk can be reduced, if the requester keeps the Vanilla-Bitswap behavior of first sending a request, simulating a proxy. This additional verification does not need to be sent to all neighbors but only to the newly connected content provider. In this case the attacker could only learn the requester, if it also stores the block. However, nothing prevents the attacker from also sending false responses. 
The method can be enhanced with alternative methods, which obfuscate the item of a request <cit.>, further reducing the risk. § IMPLEMENTATION DETAILS AND INTEGRATION RaWa-Bitswap requires some modifications to Bitswap as well as some new additions, , a requester session, a relay manager, a proxy session, and a forward graph manager. All newly introduced parameters are configurable when instantiating RaWa-Bitswap. A PoC implementation based on Bitswap of the library <cit.> v0.8.0 is publicly available. [https://github.com/manuelwedler/boxo] §.§.§ Bitswap Modifications In Bitswap, blocks below 1024 B are immediately sent in response to requests. This is removed from the RaWa-Bitswap implementation, since only a proxy sends requests and might not need the block. Furthermore, we added a new request and its respective response to the Vanilla-Bitswap messages: and . The indicates the relay of the lookup. The aim of the is to be able to distinguish between answering requests and forwarding requests. Although, this can also be accomplished by using a flag into requests, the new request type makes the protocol more comprehensible. The is the response to a , transmitting the outcome of the content discovery. Since a only contains the CID and implies that the sender stores a block, a new response type is necessary. The new request and response utilize the same Bitswap message envelope. §.§.§ Requestor Session The requester session is primarily the Vanilla-Bitswap session. The difference is that whenever a request would be sent to peers, instead a request for the CID is sent to one peer. Instead of waiting for responses, the session expects responses. The session also re-transmits requests in case no response is received after a time-out. §.§.§ Relay Manager The relay manager deals with the random relay of and the correct return of the response. The relay manager is based on the implementation of <cit.>. Upon receiving a new request, the peer stores predecessor and CID of the request. Afterwards, it starts a proxy session with probability p or becomes a relay with probability 1-p. As a relay, the peer stores the successor and relays the request. A relay, receiving a request from the same source for the same CID, forwards the request to the same successor. If a successor disconnected from the relay, the relay transitions into the proxy phase. In case of a request from a different source with the same CID, a peer is selected, which has not yet received a request for the CID from this relay, reducing relay loops. §.§.§ Proxy Session A proxy session searches and returns content providers to the relay manager. The proxy session primarily behaves as a Vanilla-Bitswap session. However, a proxy session only handles single CIDs, checks the local storage for the block, and does not send requests. §.§.§ Forward Graph Manager The structure is introduced into the package. The selects succeeding peers for the privacy-subgraph. The number of peers depends on a specified target out-degree. The selects peers from the privacy-subgraph according to a pre-configured strategy, defined in the package. § EVALUATION We evaluate RaWa-Bitswap's privacy gains and its impact on performance with a code simulation using Testground v0.6.0. [https://github.com/testground/testground (Accessed: 2024-02)] Testground is a testing platform that enables code execution with a simulated network. The scalability is constrained by the available hardware. 
For our simulations, we used an Intel Core i7-6700K CPU with 16 GiB RAM and a 64-bit Ubuntu 22.04.3 LTS system with Linux kernel 6.2.0-33-generic as the operating system. The code simulations are defined in a testplan, which is publicly available. [https://github.com/manuelwedler/boxo/tree/main/testplans/rawa-bitswap] §.§ Simulation Environment All simulations use the same principle to build a network consisting of 50 peers. Due to hardware constrains, networks with more peers were not simulated. Each peer establishes a connection to four randomly selected other peers. The selection chooses only peers, which are not connected to the peer. Therefore, a node has at least four outgoing connections and the node degree is on average eight. The links are configured with a latency of 100 ms, a 10 ms jitter, and a bandwidth of 1 MiB/s. Bitswap runs on top of a [https://github.com/libp2p/go-libp2p (Accessed: 2024-02)] host with a TCP and QUIC address. To simplify the simulation, we use a dummy DHT as the content routing subsystem. The dummy DHT knows all block locations, resolving all queries after a delay of 622 ms ± 10 %. This delay is based on the median duration of a DHT query in IPFS as determined in <cit.>. All simulations execute the same behavior. Each honest node generates and stores one unique, random block with a predetermined size. Afterwards, the honest nodes concurrently query Bitswap for the CID of a block from another node. A single CID might be queried multiple times, introducing some replication and the possibility for multiple content providers. In IPFS, blocks are initially available from a single source and replicated only upon request. Therefore, our simulations reflect the initial state of blocks. We run the simulations using different parameters. Each parameter combination is run 100 times. The parameters t_0=1 s, t_1=1 s, u=2 s, and r=540 s are fixed for all simulations. Due to the high value of r and the execution of only one retrieval round, the privacy-subgraph stays the same during one simulation run. For the privacy-subgraph, we evaluate the values of η=1 (Dandelion), η=2 (Dandelion++), and η=max, which means the privacy-subgraph consists of all neighbors. §.§ Privacy Evaluation We quantify the degree of network-wide privacy that RaWa-Bitswap achieves by introducing a classification problem. An adversary seeks to link any observed CID to a peer that is interested in the block. Therefore, the classification problem is the peers interested in CIDs. We quantify the success using precision and recall. Based on the acquired information, the adversary tries to link a CID to all honest peers. To determine network wide privacy, the individual precision and recall are averaged. The adversary knows all peers of the network, controlling a fraction of peers, denoted with α. While our method aims to improve the privacy against passive adversaries, we also show the privacy-enhancement against active adversaries. Therefore, we assume three adversaries with different capabilities: First-spy estimator (FSE), Exploiter (WFE), and subgraph-aware WFE (SAWFE). The FSE is the simplest adversary controlling only one node, α=0.02. However, the node of the FSE connects to all other peers. The FSE is a passive adversary, assuming the interest of a peer is the CID of the first request seen from the peer. In case a peer never became a proxy or never forwarded requests to the malicious node, a peer cannot be linked to a CID. 
The peers to which a CID could not be assigned, are linked to an observed CID at random. Observed CIDs are in this context any CID for which the FSE saw a request. The WFE exploits the vulnerability mentioned in <Ref>. This adversary tries to actively exploit RaWa-Bitswap to reveal the origin of requests. The aim of the WFE is to trigger a requester to send a request to any controlled node by claiming to be a content provider. The WFE controls 10 nodes (α=0.2) and each node establishes four connections to distinct honest nodes. By doing this, each honest node has at least one connection with a malicious node and the average node degree of the network remains eight. The WFE combines the data from all its controlled nodes. Every peer is associated with the CID of the first received from that peer. Similar to the FSE, in case no request is received, the peer gets assigned an observed CID at random. While the WFE produces a fake , it otherwise behaves like an honest node, forwarding requests and might even act as a proxy. While the FSE and WFE have no knowledge of the privacy-subgraph, the privacy-subgraph is provided to the SAWFE. In practice, this knowledge could be obtained by sending multiple requests as described in <Ref>. Otherwise, the SAWFE has the same capabilities as the WFE, , active, and α=0.2. The SAWFE also maps every peer to the CID of the first observed from this peer. The difference between SAWFE and WFE is the handling of peers for which no could be observed. All observed requests are observed and each CID is assigned to an unclassified direct predecessor. Due to RaWa-Bitswap's functionality, a predecessor of a proxy must be the requester. The resilience against predecessor attacks <cit.> depends accordingly on the privacy of the privacy-subgraph. <ref> shows the precision and recall for all three adversaries. In Vanilla-Bitswap, a FSE can with almost 100 % certainty determine the interest of all observed peers. RaWa-Bitswap reduces precision and recall of the prediction down to 40 % – 50 %. The effect of p on the precision and recall seems almost negligible. Considering η, a value of 1 or max produce almost the same results with 1 having slightly better results. For η=2, the values are better than Vanilla-Swap but overall the worst precision and recall of on average ≈ 60 %. Still, all values are much better compared to Vanilla-Bitswap. For the active adversaries the privacy gains are less significant. While the precision and recall is still lower than the values for the FSE in Vanilla-Bitswap, the precision and recall is now 60 % – 80 %. For η=1, p seems to have little effect on precision and recall, which might be due to the occurrence of loops which cut the paths short. For bigger η, a higher value of p seems beneficial to the privacy. Comparing the results for WFE and SAWFE, the SAWFE has similar precision values, although slightly worse than WFE. The recall of the SAWFE is also similar but slightly better than the WFE. In case of WFE and SAWFE, the behavior of the privacy seems to be counterintuitive. Longer paths should provide better privacy. This is probably due to the nature of the exploit. The exploit functions very well in case a malicious peer is encountered during the privacy phase, , the exploit is guaranteed to succeed. If a malicious peer is encountered during the proxy phase, there is a chance that an honest peer also sends a providing the proxy with multiple content providers. Therefore, the exploit might fail. 
Against WFE and SAWFE shorter paths should be beneficial, since the chance of encountering a malicious peer is lower. The difference between the WFE and SAWFE is also due to the exploit. The SAWFE has only an advantage in case the exploit fails and needs to guess the origin of the request. Due to the knowledge of the predecessor of a peer, a guess has a higher chance to succeed. In summary, RaWA-Bitswap shows clearly visible privacy improvements against passive adversaries and only a minor privacy improvement in case of the active adversary. Our active adversaries exploit a vulnerability of the protocol. The vulnerability can be mitigated by obfuscating the item of a request, making it harder for an adversary to use the exploit. However, it does not fully remove the threat. The exploit can be prevented by also using the proxy for content retrieval, although, this would increase network load. The negligible effect of η and p might be due to the small network. §.§ Performance Evaluation We determine the performance overhead of RaWa-Bitswap with a direct comparison to Vanilla-Bitswap. Similar to <cit.>, we use the time-to-first-block (TTFB) as the performance metric. TTFB is the time elapsing from (locally) sending the request to Bitswap until the first block is received. For the simulation, every content item is encapsulated in a single block of 1025 B and 150 KiB. While TTFB accordingly yields the content retrieval time, it is still reasonable to make statements on the content discovery. Especially for smaller blocks, the download time is negligible. <ref> shows the simulation results. <ref> shows the result for a comparatively small block and the time mainly covers content discovery. For Vanilla-Swap, we can see very low variations of the retrieval time with an average of around 2.2 s. RaWa-Bitswap shows larger variations, although, a similar average of around 2.2 s – 2.4 s between the different η and p. The large variations are due to the random nature of the walk. The number of relays varies with each request. While lower values for p should result in longer paths and therefore longer TTFB, it seems that the variation of p has almost no influence on the performance. The influence of η is slightly higher. Bigger values for η result in higher TTFB. This behavior might be a result of our comparatively small network and consequently higher chance of loops. The occurrence and influence of loops is further discussed in <Ref>. The general behavior is similar for larger blocks as shown in <ref> where a larger block is retrieved. The TTFB is in general higher, due to the increased amount of data that needs to be retrieved. For larger blocks the content discovery time has a lower influence on the whole retrieval time. In summary, the simulation shows that the impact of the random walk on the TTFB is rather small. §.§ Discussion The results concerning the behavior for different values of p, could be due to our limited number of nodes and the occurrence of loops. The privacy-subgraph construction only approximates a regular graph, incorporating loops due to nodes which choose each other as successors. In case of a loop, it is possible that a node can no longer choose a successor for the message, which results in the immediate transition into the proxy phase. This is especially notable for η=1, where each peer only has one successor. Loops reduce the impact of p, since the random walk finishes earlier. The occurrence of loops reduces with the size of the network. 
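To illustrate how loops shorten walks, the following Monte Carlo sketch builds a random privacy-subgraph with out-degree η and measures walk lengths. It is a simplified model of our protocol (a walk here never revisits a peer, which approximates the per-CID successor rule) and is not part of the Testground testplan; all names are ours.

```python
# Monte Carlo sketch (simplifying assumptions ours): estimate how often a walk
# on a small random privacy-subgraph ends because a relay has no fresh
# successor left, i.e. is cut short by a loop.

import random

def build_subgraph(n: int, eta: int) -> dict:
    """Each peer picks eta distinct successors uniformly at random."""
    return {v: random.sample([u for u in range(n) if u != v], eta)
            for v in range(n)}

def walk_length(succ: dict, p: float):
    """Number of hops of one walk and whether a loop forced the proxy transition."""
    current = random.randrange(len(succ))          # requester
    seen = {current}
    hops = 0
    while True:
        fresh = [u for u in succ[current] if u not in seen]
        if not fresh:                              # loop: forced proxy transition
            return hops, True
        current = random.choice(fresh)
        seen.add(current)
        hops += 1
        if random.random() < p:                    # this peer becomes the proxy
            return hops, False

if __name__ == "__main__":
    random.seed(0)
    g = build_subgraph(n=50, eta=1)
    runs = [walk_length(g, p=0.2) for _ in range(10_000)]
    mean_hops = sum(h for h, _ in runs) / len(runs)
    cut_rate = sum(c for _, c in runs) / len(runs)
    print(f"mean hops: {mean_hops:.2f}, walks cut short by loops: {cut_rate:.0%}")
```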
Loops can have an influence on privacy and performance. For privacy, shorter paths reduce the gained privacy. The FSE is less likely to be affected, since it is connected to all nodes anyway and therefore the path length has no significant influence on the discovery; its guess is only correct if the first chosen peer is the FSE node. Against the WFE and SAWFE, a shorter path reduces the probability of encountering a malicious peer. However, the SAWFE also benefits from a shorter path in case the exploit does not work: since the SAWFE knows the predecessors of a relay, it can more reliably guess the origin of a request. For performance, a shorter random walk means earlier completion of content discovery, which results in a shorter TTFB. As a result, the impact of RaWa-Bitswap on the performance of content retrieval might be larger in networks with more peers. While the small network limits the performance evaluation, it has only a low impact on the general privacy results. A passive observer can comparatively easily connect to many peers, achieving the same results as the FSE for Vanilla-Bitswap. Therefore, against an FSE, RaWa-Bitswap provides a significant improvement.

§ CONCLUSION

In this paper we presented RaWa-Bitswap. It improves privacy by prepending a privacy phase to the content discovery mechanism of Bitswap. The evaluation of RaWa-Bitswap shows enhanced network-wide privacy with only a low performance overhead compared to Vanilla-Bitswap. While our test setup with only 50 peers cannot capture all network dynamics, it shows promising results, even against active adversaries.

§ ACKNOWLEDGEMENTS

We thank Protocol Labs for funding parts of our research. Additionally, this work was supported by the German Federal Ministry of Education and Research (BMBF) and the Saxon State Ministry for Science, Culture and Tourism (SMWK) by funding the competence center ScaDS.AI.
http://arxiv.org/abs/2407.13189v1
20240718055730
Data-Driven Estimation of Conditional Expectations, Application to Optimal Stopping and Reinforcement Learning
[ "George V. Moustakides" ]
stat.ML
[ "stat.ML", "cs.LG", "60J20, 68T07" ]
Data-Driven Estimation of Conditional Expectations, Application to Optimal Stopping and Reinforcement Learning

George V. Moustakides, Emeritus Professor
Department of Electrical and Computer Engineering
University of Patras, Rion, GREECE
moustaki@upatras.gr
============================================================================================================================================================

§ ABSTRACT

When the underlying conditional density is known, conditional expectations can be computed analytically or numerically. When, however, such knowledge is not available and we are instead given a collection of training data, the goal of this work is to propose simple and purely data-driven means for directly estimating the desired conditional expectation. Because conditional expectations appear in the description of a number of stochastic optimization problems, with the corresponding optimal solution satisfying a system of nonlinear equations, we extend our data-driven method to cover such cases as well. We test our methodology by applying it to Optimal Stopping and Optimal Action Policy in Reinforcement Learning.

Data-driven estimation, Neural networks, Optimal Stopping, Reinforcement Learning.

§ INTRODUCTION

Conditional expectations appear in a multitude of stochastic optimization problems as part of defining the corresponding optimal solutions. Characteristic examples include Optimal Stopping, Reinforcement Learning, Optimal Control, etc. In all these cases the identification of the desired solution requires exact knowledge of the underlying data probability density, which in most practical applications is not available. Nowadays, with the availability of large volumes of data, one may wonder whether it is possible to develop data-driven methods that solve such problems based on ideas borrowed from Machine Learning. This is exactly the goal of our present work. In particular, we intend to offer computational means for identifying the desired solution by first introducing a method capable of estimating conditional expectations of known functions. Our technique will not be based on some form of initial probability density estimation but will directly identify the conditional expectation of interest. Since our intention is to employ Machine Learning methods, we recall that these techniques mostly employ neural networks which are trained with available data. The “design” of the networks is achieved through the solution of well-defined optimization problems involving expectations. With the help of the Law of Large Numbers, expectations are replaced by averages over the existing data, thus giving rise to data-driven techniques. For this reason, in the sequel, our main effort focuses on proposing such optimization problems that are suitable for training and on demonstrating that they indeed lead to the estimation of the functions of interest. Before proceeding with the technical part we would like to point out that the methodology we are going to introduce is an extension of results presented in <cit.>, which address the problem of likelihood ratio function estimation. This problem will also be revisited here in the light of the richer experience we have accumulated.

§ A GENERAL OPTIMIZATION PROBLEM

Let us begin our presentation by introducing a simple optimization problem which will serve as the basis for the final, data-driven counterpart.
Suppose that is a random vector and consider three scalar functions (X),(̱X),(̆X) with (X)>0 that depend on the vector X and two additional scalar functions ϕ(z),ψ(z) that depend on the scalar z. Fix (X),(̱X),ϕ(z),ψ(z) and for each (̆X) define the following average cost 𝖩(𝗎)=_[()ϕ(𝗎())+(̱)ψ(𝗎())], where _[·] denotes expectation with respect to . We are interested in identifying the function (X) that solves the following optimization problem min_(̆X)𝖩()̆=min_(̆X)_[()ϕ(𝗎())+(̱)ψ(𝗎())]. Solving (<ref>) can in general be challenging but if we limit ourselves to a particular class of functions ϕ(z),ψ(z) it is possible to come up with an explicit and straightforward answer. The theorem that follows specifies this class and the corresponding optimal solution. For real z let ω(z) be a strictly increasing scalar function and denote with its range of values. Select a second function ρ(z) which is strictly negative and define two additional scalar functions ϕ(z),ψ(z) through their derivatives ψ'(z)=ρ(z),     ϕ'(z)=-ω(z)ρ(z). If range((̱X)/(X))⊆ then the optimal solution (X) of the optimization problem defined in (<ref>) is unique and satisfies ω((X))=(̱X)/(X). Since the function (̆X) which we would like to optimize depends on X and also the cost is a result of averaging over , it turns out that the minimization can be performed point-wise by interchanging expectation and optimization min_(̆X)_[()ϕ(𝗎())+(̱)ψ(𝗎())] =_[()min_(̆){ϕ(𝗎())+(̊)ψ(𝗎())}], where for simplicity we denote (̊X)=(̱X)/(X). The above equality is true because by assumption (X)>0. We must emphasize that in general changing the order of expectation and minimization results in an inequality. However here because the minimization is with respect to the function (̆X) that depends only on the quantity that is averaged, it is straightforward to show that we actually enjoy exact equality. The minimization min_(̆X){ϕ((̆X))+(̊X)ψ((̆X))} can now be performed for each individual X (point-wise). Fixing X means that (̆X) becomes a scalar quantity $̆ while the ratio(̊X)becomes a scalar number$̊. Consequently for each ∈̊ we need to perform a minimization with respect to $̆ of the formmin_{ϕ()̆+ψ̊()̆}.Taking the derivative ofϕ()̆+ψ̊()̆with respect to$̆ and using the definition of the two functions from (<ref>) yields ϕ'()̆+ψ̊'()̆=(-̊ω()̆)ρ()̆. Because of the strict increase of ω(·) and the strict negativity of ρ(·) we conclude that the derivative is negative for $̆ satisfyingω()̆<$̊ and positive for ω()̆>$̊ which implies that for=̆satisfyingω()=$̊ we have a unique global minimum. Since this is true for every X we deduce that the optimal function (X) is such that ω((X))=(̊X)=(̱X)/(X). This concludes the proof. Because ω(z) is strictly increasing the set can be either the whole real line or a semi-infinite interval (a,∞), (-∞,a) or a finite interval (a,b). Instead of open we can have closed intervals as well. §.§ Versions of the Optimization Problem Selecting various pairs of functions (X),(̱X) in combination with the probability density of the random vector produces interesting and practically meaningful optimization problems. First we address the problem of likelihood ratio identification of two densities, which is considered in detail in <cit.>. §.§.§ Identification of Likelihood Ratios Let (X),(X) be two possible densities for the random vector and assume that (X)=0 when (X)=0 (to avoid unbounded ratios). 
For any scalar function (̆X) define the cost ()̆=_^[ϕ((̆))]+_^[ψ((̆))] where _^[·],_^[·] denote expectation with respect to under the densities (X),(X) respectively. Applying a simple change of measure and denoting with Ł(X)=(X)/(X) the likelihood ratio of the two densities we have ()̆=_^[ϕ((̆))+Ł()ψ((̆))] which is under the form of (<ref>). This suggests that for the minimization of (<ref>) we can write min_(̆X)()̆=min_(̆X){_^[ϕ((̆))]+_^[ψ((̆))]} =min_(̆X)_^[ϕ((̆))+Ł()ψ((̆))]. Application of Theorem <ref> produces as optimal solution the function (X) that satisfies ω((X))=Ł(X). Consequently we identify the likelihood ratio function of the two densities through an optimization involving expectations. §.§.§ Identification of Ratio of Conditional Densities We can extend the previous result to cover the likelihood ratio of conditional densities. Specifically let the pair of random vectors (,) be described by two possible joint densities (Y,X),(Y,X). Write (Y,X)=(Y|X)(X), (Y,X)=(Y|X)(X) and as above denote Ł(X)=(X)/(X) the likelihood ratio of the two marginals. For a scalar function (̆Y,X) consider the following cost ()̆=_,^[Ł()ϕ((̆,))]+_,^[ψ((̆,))]. Applying again a change of measure we can write ()̆=_,^[Ł()ϕ((̆,))+(,)/(,)ψ((̆,))], which, according to Theorem <ref> when minimized over (̆Y,X) will yield an optimal solution of the form ω((Y,X))=(Y,X)/(Y,X)1/Ł(X)=(Y|X)/(Y|X). Of course Ł(X) as we have seen in the previous case can be obtained by optimizing a cost of the form of (<ref>) which in turn can be used in the optimization of (<ref>) to identify the likelihood ratio of the conditional densities. §.§.§ Identification of Conditional Expectations Let us now address the main problem of interest. We must point out that the advantage of the method we are going to introduce is that we can estimate conditional expectations by solving optimization problems involving regular expectations. This is particularly useful from a practical point of view since as we will see in the next section, it is straightforward to solve such problems under a data-driven setup. Consider two scalar functions (̧Y) and (̣Y) with (̧Y)>0. For the pair of random vectors (,) define the conditional expectations (X)=_[(̧)|=X], (̱X)=_[(̣)|=X]. We clearly have (X)>0. For a scalar function (̆X) we now define the cost ()̆=_,[(̧)ϕ((̆))+(̣)ψ((̆))], where expectation is with respect to the pair (,). Note that even though the expectation is with respect to (,) the function we would like to optimize depends only on X. Using the tower property of expectation the cost can be rewritten as ()̆ =_[_[(̧)|]ϕ((̆))+_[(̣)|]ψ((̆))] =_[()ϕ((̆))+(̱)ψ((̆))]. Consequently, minimizing ()̆ defined in (<ref>) over (̆X), by application of Theorem <ref>, yields ω((X))=(̱X)/(X)=_[(̣)|=X]/_[(̧)|=X]. In other words it computes the ratio of the two conditional expectations. If we select (̧Y)=1 then ω((X))=_[(̣)|=X], namely we directly identify the conditional expectation of (̣) with respect to given =X as the solution of an optimization problem involving regular instead of conditional expectations. This result can be further extended by considering two different densities (Y,X),(Y,X) and for a function (̆X) define the cost ()̆=^_,[(̧)ϕ((̆))]+^_[(̣)ψ((̆))]. In order to specify the optimizer of ()̆ we observe that ()̆=^_[^_[(̧)|]ϕ((̆))]+^_[^_[(̣)|]ψ((̆))] =^_[^_[(̧)|]ϕ((̆))+Ł()^_[(̣)|]ψ((̆))]. where Ł(X)=(X)/(X) is the likelihood ratio of the two marginal densities. Applying Theorem <ref> we conclude that ()̆ is optimized by ω((X))=Ł(X)^_[(̣)|=X]/^_[(̧)|=X]. 
Note that the advantage of this approach is that we end up with a single function (X) that identifies the ratio of the two conditional expectations. An alternative idea would be to identify the two conditional expectations of the previous ratio separately through two different optimization problems which is clearly not as efficient. §.§ Examples of functions ω(z),ρ(z),ϕ(z),ψ(z) According to Theorem <ref>, a very important quantity in selecting the two functions ω(z),ρ(z) is the range of values of ω(z) which must cover the range of values of the ratio (̱X)/(X). The latter may not be exactly known but, as stated in the theorem, it is sufficient that is a superset of this range. We will provide examples for three different cases of namely =, =[a,∞) and =(a,b) with a<b. The first covers the case where the range is completely unknown while the second and third refer to cases with partially known range. For example if the function (̣Y) for which we would like to compute the conditional expectation is nonnegative or between a and b then the same property holds true for its conditional expectation. [A].  = 0.2cm [A1].  ω(z)=z and ρ(z)=-1, results in ϕ(z)=z^2/2,     ψ(z)=-z. This particular selection is the most popular in practice and it is known as the Mean Square Error (MSE) criterion. 0.2cm [A2].  ω(z)=sinh(z) and ρ(z)=-e^-0.5|z| results in ϕ(z) =(e^0.5|z|-1)+1/3(e^-1.5|z|-1), ψ(z) =2sign(z)(e^-0.5|z|-1). 0.2cm [A3].  ω(z)=sign(z)(e^|z|-1) and ρ(z)=-e^-0.5|z|, results in ϕ(z)=4cosh(0.5z),     ψ(z)=2sign(z)(e^-0.5|z|-1). 0.2cm [B].  =(a,∞) 0.2cm [B1].  ω(z)=a+e^z and ρ(z)=-1/1+e^z, results in ϕ(z)=alog(1+e^-z)+log(1+e^z),     ψ(z)=log(1+e^-z). This version when a=0 resembles the cross entropy method introduced in <cit.> for the design of GANs where they employ ϕ(z)=-log(1/z), ψ(z)=-log(1/1-z) with z∈(0,1). We can see that we propose the same functions but with z replaced in our approach by 1/1+e^z and z∈. 0.1cm [B2].  ω(z)=a+e^z and ρ(z)=-e^-0.5 z, results in ϕ(z)=-2ae^-0.5z+2e^0.5z,     ψ(z)=2e^-0.5z. Remark 1. We note that when [B1] or [B2] is applied in the problem of likelihood ratio identification with a=0 then ω((X))=e^(X)=(X)/(X) ⇒(X)=log((X)/(X)), suggesting that the optimal function (X) is equal to the log-likelihood ratio of the two densities which, in problems as hypothesis testing, is often more convenient to use than the likelihood ratio itself. In case the range is =(-∞,a) we simply consider the conditional expectation of the function -(̣Y) which will have a range of the form =(-a,∞). 0.2cm [C].  =(a,b) 0.2cm [C1].  ω(z)=a1/1+e^z+be^z/1+e^z and ρ(z)=-e^z/1+e^z, results in ϕ(z)=b-a/1+e^z+blog(1+e^z),     ψ(z)=-log(1+e^z). 0.2cm [C2].  ω(z)=a1/1+e^z+be^z/1+e^z and ρ(z)=-e^-z, results in ϕ(z)=(b-a)log(e^z/1+e^z)-ae^-z,     ψ(z)=e^-z. Function ω(z) reduces to the classical sigmoid for the (0,1) interval. 0.2cm Remark 2. It is straightforward to propose alternative combinations of ω(z),ρ(z) that satisfy the requirements of Theorem <ref>. We also need to point out that for any range of interest it is always possible to employ a pair which is designed for a wider range. For example we can apply [A1] (MSE) in the case of functions with range in an interval (a,b) instead of the suggested [C1], [C2]. In fact this is common practice in the literature. § DATA-DRIVEN ESTIMATION Solving the optimization problem defined in (<ref>) in order to identify the optimal function (X) requires knowledge of the underlying probability density of . 
Our goal in the analysis that follows is to relax this requirement and replace it with the existence of a number of realizations of . This constitutes the data-driven version of the problem. The first classical step we adopt in the direction of a data-driven approach is to replace the unknown function (̆X) with a parametric family (̆X,θ) involving a finite set of parameters θ. Of course we require this family to enjoy the universal approximation property, namely to have the ability to approximate arbitrarily close any sufficiently smooth function provided we select a large enough model. This property is guaranteed for neural networks (shallow or deep) according to <cit.> but it may be enjoyed by other parametric classes as well. In the sequel we limit ourselves to neural networks but similar conclusions can be claimed for any other such class. Let us now replace (̆X) with a neural network (̆X,θ), then the cost function in (<ref>) becomes 𝖩(θ)=_[()ϕ(𝗎(,θ))+(̱)ψ(𝗎(,θ))], depending only on the parameters of the network, while the corresponding optimization in (<ref>) is replaced by min_θ𝖩(θ)=min_θ_[()ϕ(𝗎(,θ))+(̱)ψ(𝗎(,θ))]. For sufficiently large neural network if θ_𝗈 is the optimizer of (<ref>) we expect the corresponding neural network (̆X,θ_𝗈) to satisfy (̆X,θ_𝗈)≈(X) with the latter being the optimizer of the original problem in (<ref>). In other words, with the optimal finite dimensional version of the problem defined in (<ref>) we approximate the optimal function solving its infinite dimensional counterpart in (<ref>). As in the original problem in (<ref>), the finite dimensional version (<ref>) is defined in terms of the pdf of . Therefore, let us now assume that we are under a data-driven setup. As we can see the cost in (<ref>) can be put under the general form (θ)=_[(,θ)], where (X,θ) is a scalar deterministic function and (<ref>) corresponds to solving the optimization problem min_θ(θ)=min_θ_[(,θ)], for the case where we have a set of realization {X_1,…,X_n} (training set) of which replaces the exact knowledge of the probability density function of . The obvious possibility is to evoke the classical Law of Large Numbers (LLN) and approximate the expectation by the following data-driven cost (θ)=1/n∑_i=1^n(X_i,θ)≈(θ). Cost (θ) is a completely known function of θ and can therefore be minimized with the help, for example, of the Gradient Descent (GD) iterative algorithm θ_t=θ_t-1-μ∑_i=1^n∇_θ(X_i,θ_t-1) where μ>0 is the step-size (learning rate) and ∇_θ(X,θ) is the gradient with respect to θ of the scalar function (X,θ). Note that we have absorbed in μ the division by the constant n. As we can see, the GD in each iteration requires the computation of the gradients of all realizations which could be computationally demanding especially when the training set is large. An alternative approach would be to employ the Stochastic Gradient Descent (SGD) algorithm which consists in applying the update θ_t=θ_t-1-μ∇_θ(X_t,θ_t-1) where we retain only a single gradient evaluated for a realization X_t from the training set. In each iteration we use a different data point and when all realization are exhausted (epoch) then we reuse them starting from the beginning of the training set (after possibly randomly shuffling the data). 
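As a minimal illustration of the loop structure just described, the sketch below implements the single-sample SGD pass with per-epoch reshuffling. The per-sample gradient grad_C is assumed to be supplied by the chosen parametric model; all names are ours and not part of any particular library.

```python
# Minimal sketch of the plain SGD loop: one data point per update and a
# reshuffle of the training set at the start of every epoch.

import numpy as np

def sgd(samples, theta0, grad_C, mu=1e-3, epochs=10, rng=None):
    """samples: training points X_1,...,X_n; grad_C(X, theta): per-sample gradient."""
    rng = rng or np.random.default_rng(0)
    theta = np.array(theta0, dtype=float)
    for _ in range(epochs):
        order = rng.permutation(len(samples))      # reshuffle each epoch
        for i in order:
            theta -= mu * grad_C(samples[i], theta)
    return theta
```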
There is also the version where in each iteration we employ micro-blocks of m elements from the training set and replace the single gradient of the classical version with the average of the gradients computed over the block (again division by m is absorbed in μ) θ_t=θ_t-1-μ∑_i=1^m∇_θ(X_(t-1)m+i,θ_t-1). The SGD is clearly computationally less demanding per iteration than the GD. Relating (<ref>) to the solution of the original problem defined in (<ref>) is not as straightforward as in (<ref>) where we simply call upon the LLN. A more sophisticated Stochastic Approximation theory <cit.> is required to demonstrate that the SGD version indeed provides the desired estimate. Furthermore, as it has been observed in practice, the behavior of the two algorithms (GD vs SGD) with respect to convergence speed and capability to avoid undesirable local minima can be quite different. §.§ Data-Driven Likelihood Ratio Estimation In the previous section when we discussed the problem of likelihood ratio identification we considered the existence of two densities (X),(X) and the need to minimize the performance measure in (<ref>) with respect to the unknown function (̆X). Following similar steps as the ones described above, we first replace (̆X) with a neural network (̆X,θ) then, instead of the two densities we assume existence of two datasets {X_1^,…,X_n_^} and {X_1^,…,X_n_^} sampled from (X),(X) respectively. Using the definitions in (<ref>) and denoting with ∇_θ(̆X,θ) the gradient with respect to θ of the neural network (̆X,θ) we can write for the GD iteration 1cmθ_t=θ_t-1+ μ{1/n_∑_i=1^n_ω((̆X_i^,θ_t-1))ρ((̆X_i^,θ_t-1))∇_θ(̆X_i^,θ_t-1) -1/n_∑_j=1^n_ρ((̆X_j^,θ_t-1))∇_θ(̆X_j^,θ_t-1) }.1cm For the SGD version we assume that the two datasets are randomly mixed with the samples retaining their labels (i.e. whether they come from or ). The samples are used one after the other and the update per iteration depends on the corresponding label of the sample. Specifically when X_t is from 𝗀 then θ_t=θ_t-1+ μ/n_ω((̆X_t,θ_t-1))ρ((̆X_t,θ_t-1))∇_θ(̆X_t,θ_t-1), whereas when X_t is from 𝖿 then θ_t=θ_t-1- μ/n_ρ((̆X_t,θ_t-1))∇_θ(̆X_t,θ_t-1). When n_=n_ we can simplify the SGD algorithm by not mixing the two datasets and by employing in each iteration one sample from each dataset as follows (division by n_=n_ is absorbed in μ) θ_t=θ_t-1+μ{ω((̆X_t^,θ_t-1))ρ((̆X_t^,θ_t-1))∇_θ(̆X_t^,θ_t-1) -ρ((̆X_t^,θ_t-1))∇_θ(̆X_t^,θ_t-1)}. Gradient type algorithms for estimating the likelihood ratio of conditional probability densities can be designed in a similar way. §.§ Data-Driven Estimation of Conditional Expectations We would like to emphasize once more that the advantage of the proposed methodology is the fact that we can estimate conditional expectations by solving optimization problems involving regular expectations. Let us recall the cost in (<ref>) and consider the simplified version with (̧Y)=1. For a function (̣Y) we are interested in estimating the conditional expectation _[(̣)|=X]. Following our usual practice, the function (̆X) is replaced by a neural network (̆X,θ) and the joint density (Y,X) by a collection of pairs {(Y_1,X_1),…,(Y_n,X_n)} sampled from it. The finite dimensional version of the cost function in (<ref>) then becomes (θ)=_,[ϕ((̆,θ))+(̣)ψ((̆,θ))], which suggests the following data-driven cost (θ)=1/n∑_i=1^n{ϕ((̆X_i,θ))+(̣Y_i)ψ((̆X_i,θ))} and the corresponding GD algorithm for its minimization θ_t=θ_t-1- μ∑_i=1^n {(̣Y_i)-ω((̆X_i,θ_t-1))}× ρ((̆X_i,θ_t-1))∇_θ(̆X_i,θ_t-1) . 
For the SGD we can write θ_t=θ_t-1-μ{(̣Y_t)-ω((̆X_t,θ_t-1))}× ρ((̆X_t,θ_t-1))∇_θ(̆X_t,θ_t-1). When θ_t converges to θ_𝗈, we expect that ω((̆X,θ_𝗈))≈_[(̣)|=X] without any knowledge of the joint or the conditional probability density of given . 0.2cm Remark 3. As we can see from (<ref>), (<ref>) for the updates we require the two initial functions ω(z),ρ(z) and not ϕ(z),ψ(z). The latter are needed only for the computation of the corresponding cost (θ_t) which can be used to monitor the stability and convergence of the iterations. In fact, in order to experience stable updates in (<ref>), (<ref>) we need to ensure that (̣Y)∈. In other words, the range of ω(z) must cover the range of values of (̣Y). We should mention that in (<ref>), (<ref>) it is very common to employ the ADAM version <cit.> where gradient elements are normalized by the square root of their running power. This idea establishes a more uniform convergence for the components of the parameter vector. Powers are estimated with the help of exponential windowing with a forgetting factor λ. The function ω(z) which is applied after the computation of the output of the neural network (̆X,θ_o) can be seen as an output activation function. We should however emphasize that the updates in (<ref>) or (<ref>) would have been different if we had considered ω(z) as part of the neural network from the start. Indeed the gradient with respect to the parameters, unlike in (<ref>) and (<ref>), would have also included the derivative ω'(z) which is now absent. §.§ Numerical Computation of Conditional Expectations In order to evaluate the proposed data-driven estimation method we would need to compare its results with the exact conditional expectation, for characteristic examples. Since it is not always possible to analytically compute the conditional expectation we would like to offer a computational technique based on simple numerical integration rules. For the pair (,) where and are scalars let (Y|X) be the conditional pdf and (Y|X) the corresponding conditional cdf. The conditional expectation _[(̣)|=X] for a known function (̣Y) can then be written as (̆X)=_[(̣)|=X] =∫(̣Y)(Y|X) dY=∫(̣Y) d(Y|X). If we sample Y and X over sufficiently large intervals at {Y_1,…,Y_n} and {X_1,…,X_m} respectively then we can generate the doubly indexed sequence (Y_j|X_i) and the two sequences (̣Y_j),(̆X_i) where j=1,…,n and i=1,…,m. We note that the values (̆X_i) are the samples of the conditional expectation we would like to determine. By averaging the forward and backward version of the rectangle rule in the second integral in (<ref>), the conditional expectation can enjoy the following approximation (̆X_i) ≈1/2∑_j=1^n-1(̣Y_j)[(Y_j+1|X_i)-(Y_j|X_i)] 1.5cm+1/2∑_j=2^n(̣Y_j)[(Y_j|X_i)-(Y_j-1|X_i)] =(̣Y_1)(Y_2|X_i)-(Y_1|X_i)/2   +(̣Y_2)(Y_3|X_i)-(Y_1|X_i)/2+⋯   +(̣Y_n-1)(Y_n|X_i)-(Y_n-2|X_i)/2   +(̣Y_n)(Y_n|X_i)-(Y_n-1|X_i)/2. We note that the first and last term in the last sum are different from the intermediate terms. The above formula can be conveniently rewritten as a matrix/vector product. Indeed if we define the vectors =[(̆X_1),…,(̆X_m)]^⊺, =[(̣Y_1),…,(̣Y_n)]^⊺ and the matrix of dimensions m× n with the i-th row of the matrix having the following elements ()_i1=0.5[(Y_2|X_i)-(Y_1|X_i)], ()_ij=0.5[(Y_j+1|X_i)-(Y_j-1|X_i)],j=2,…,n-1 and ()_in=0.5[(Y_n|X_i)-(Y_n-1|X_i)], then we can write ≈, suggesting that the product provides an approximation to the sampled values of the conditional expectation. 
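A direct transcription of this matrix/vector rule is sketched below; the names P, d, v and cond_cdf are chosen here for readability, with cond_cdf standing for the known conditional cdf. With m sampled X values and n sampled Y values, each conditional expectation then costs a single m × n matrix–vector product.

```python
# Sketch of the cdf-based numerical rule derived above.

import numpy as np

def weight_matrix(cond_cdf, Xs, Ys):
    """m x n matrix whose i-th row holds the averaged rectangle-rule weights."""
    F = np.array([[cond_cdf(y, x) for y in Ys] for x in Xs])   # F(Y_j | X_i)
    P = np.zeros_like(F)
    P[:, 0] = 0.5 * (F[:, 1] - F[:, 0])
    P[:, 1:-1] = 0.5 * (F[:, 2:] - F[:, :-2])
    P[:, -1] = 0.5 * (F[:, -1] - F[:, -2])
    return P

def conditional_expectation(cond_cdf, d, Xs, Ys):
    """Approximate E[d(Y) | X = X_i] for every sampled X_i."""
    P = weight_matrix(cond_cdf, Xs, Ys)
    return P @ np.array([d(y) for y in Ys])
```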
In case we have available the conditional pdf (Y|X) but it is not possible to compute analytically the conditional cdf (Y|X) we can use the classical version of the forward and backward rectangle rule applied to the first integral in (<ref>), namely (̆X_i)≈1/2∑_j=1^n-1(̣Y_j)(Y_j|X_i)(Y_j+1-Y_j) 3cm+1/2∑_j=2^n(̣Y_j)(Y_j|X_i)(Y_j-Y_j-1) =∑_j=1^n-11/2((̣Y_j)(Y_j|X_i) +(̣Y_j+1)(Y_j+1|X_i))(Y_j+1-Y_j), leading to the trapezoidal rule. This can also be combined to the more convenient matrix/vector product by properly redefining the matrix . As mentioned, the numerical technique will be used whenever it is impossible to obtain an analytic formula and will serve as a point of reference for the proposed data-driven method. We should of course keep in mind that the numerical method requires exact knowledge of the conditional pdf (cdf) whereas the data-driven technique we developed relies only on training data. Even though the numerical method is presented for the scalar case it is possible to extend it to accommodate random vectors ,. However, very quickly we realize that as we consider larger dimensions the amount of necessary computations increases dramatically with the method suffering from the “curse of dimensionality”. When we apply this numerical method we have to make sure that the intervals we sample are such that the conditional probability mass which is left outside the sampled Y-interval is negligible for all sampled values of X and so is the probability mass left outside the sampled X-interval. If this is not the case then changing the size of the intervals produces inconsistent and therefore questionable results. §.§ Examples Let us apply our idea to two examples. Consider , scalar random variables related through the equations: a) =sign() ^2+, and b) =[-1,1](+), where A(X) denotes the indicator function of the set A. In both cases is standard Gaussian while plays to role of noise which we assume to be zero-mean Gaussian with variance σ^2_=0.1. We are interested in estimating _[|=X] and it is straightforward to see that for a) we have _[|=X]=sign(X)X^2, whereas for b) _[|=X]=Φ(1-X/σ_)-Φ(-1-X/σ_), where Φ(x) is the cdf of the standard Gaussian. Here we have an analytic formula for both conditional expectations but even if we had used the numerical method proposed in Section <ref> the two results would have been indistinguishable. To test our data-driven method we generate n=200 random pairs (Y_i,X_i) and train a shallow neural network with a single hidden layer of size 50 and ReLU activations. To the obvious question why we select this specific size we can say that this is a fundamental problem in neural networks and currently there is no analytically trustworthy answer as to which is the appropriate network size and how must be related to the size of the training dataset. We apply the GD algorithm depicted in (<ref>) and adopt the ADAM version <cit.> that normalizes each gradient element with the square root of its running power. We select a step-size equal to μ=0.001 and exponential windowing with forgetting factor λ=0.99 for the power estimates. We run the algorithms for 2000 iterations which is sufficient for convergence as we could have verified if we had plotted the corresponding costs (θ_t). The parameter vectors θ_𝗈 we converge to, are used to estimate the conditional expectations as ω((̆X,θ_𝗈)) and by sampling the range [-2,2] of X uniformly we compare with the corresponding values of the exact formulas mentioned above. 
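For reference, the following short sketch generates training pairs for the two examples just introduced and evaluates the exact conditional expectations quoted above on the plotting grid; the random seed is arbitrary and the sample size follows the n = 200 used in the text.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, sigma_w = 200, np.sqrt(0.1)

X = rng.standard_normal(n)                 # standard Gaussian input
W = sigma_w * rng.standard_normal(n)       # zero-mean Gaussian noise, variance 0.1

Y_a = np.sign(X) * X**2 + W                              # example a)
Y_b = ((X + W >= -1) & (X + W <= 1)).astype(float)       # example b): indicator of [-1, 1]

# exact conditional expectations quoted in the text, on the plotting grid [-2, 2]
x = np.linspace(-2, 2, 201)
E_a = np.sign(x) * x**2
E_b = norm.cdf((1 - x) / sigma_w) - norm.cdf((-1 - x) / sigma_w)
```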
With Example a) since the range =, we test versions [A1], [A2], [A3] from Section <ref>. As we can see from Fig <ref>(a) here the classical MSE version [A1] has comparable performance with the other two alternatives and all three methods approximate sufficiently well the exact funtion. Let us now turn to Example b) where the range of and ω(z) is [0,1]. In this case we apply the classical MSE version [A1] and version [C1] with a=-0.01,b=1.01 which slightly overcovers . As we observe in Fig. <ref>(b) knowledge of the range and selection of the appropriate functions ω(z),ρ(z) may improve the estimation quality, dramatically. 0.1cm Remark 4. From the analytical computation of the conditional expectation it is clear that this function does not depend on the distribution of . Since our data pairs (Y_i,X_i) are sampled from the joint density, one may wonder how the marginal density of affects our estimates. Basically it is expected that the proposed methodology will make notable estimation errors for values of with small likelihood. To understand this fact, consider the extreme case where an interval of values of has zero probability of occurrence, then no samples from this interval can appear in the training set. It is therefore unrealistic to expect that our estimate will be accurate for such X values. A similar conclusion applies when the likelihood of the interval is small and we obtain only very few (or even no) samples from the interval in the training set. This fact might seem as a serious weakness of our method however making errors at points that never occur or at points that occur very rarely might not be so crucial from a practical point of view. § SYSTEM OF EQUATIONS INVOLVING CONDITIONAL EXPECTATIONS The estimation method we introduced can be employed to compute solutions of systems of equations expressed with the help of conditional expectations. Such systems occur in several well-known stochastic optimization problems. Characteristic examples constitute the problem of Optimal Stopping in Markov processes and the Optimal Action Policy in Reinforcement Learning which we consider in detail after introducing our general setup. Let ^j(Y,u^1,…,u^K), j=1,…,K be K deterministic and known scalar functions with u^1,…,u^K scalar variables. Suppose there are also K different conditional densities ^j(Y|X),j=1,…,K with Y and X of the same length. Define the following system of equations which we are interested in solving for the scalar functions ^1(X),…,^K(X) ^j(X)=_^j[^j(,^1(),…,^K())|=X], for j=1,…,K and where expectation is with respect to conditioned on =X using the conditional density ^j(Y|X). §.§ Numerical Solution We first apply the numerical method of Section <ref> to find a numerical solution when the K conditional densities are known. We select the same sufficiently large interval for Y and X which we sample at the same points Y_i=X_i,i=1,…,n. We must assure that what is left outside the interval has very small probability conditioned on every value X_i and this must be true for all K conditional densities. Call ^j=[^j(Y_1),…,^j(Y_n)]^⊺,j=1,…,K the sampled version of the K solution functions which we would like to determine. For each conditional density ^j(Y|X) define the matrix ^j as explained in Section <ref>. Finally, form K vectors ^̋j(^1,…,^K), j=1,…,K with their elements defined as follows (^̋j(^1,…,^K))_i=^j(Y_i^j,(^1)_i,…,(^K)_i), i=1,…,n, j=1,…,K, where ()_i denotes the ith element of the vector . 
In other words the ith element of ^̋j is equal to the function ^j evaluated at Y=Y_i^j with the variables u^j replaced by the ith elements of the vectors ^j. With these definitions it is clear that the sampled version of the equation in (<ref>) takes the following matrix/vector product form ^j=^j×^̋j(^1,…,^K), j=1,…,K. The previous system can be solved iteratively by iterating on the unknown vectors ^j_t=^j×^̋j(^1_t-1,…,^K_t-1), j=1,…,K. This will be the method we are going to apply to compute the numerical solution in the problems of Optimal Stopping and Reinforcement Learning when analytic formulas are impossible. §.§ Data-Driven Solution We are now considering the data-driven version and the corresponding solution of the system of equations. Key observation for solving the system in (<ref>) is that all functions of interest are defined in terms of conditional expectations and therefore, following our main idea, each conditional expectation can be estimated using the methodology we developed in Section <ref>. Our data-driven setup is as follows: We are given K datasets {(Y_1^j,X_1^j),…,(Y_n_j^j,X_n_j^j)}, j=1,…,K that replace the conditional densities ^j(Y|X),j=1,…,K. Since we are interested in computing the functions ^j(X),j=1,…,K, we approximate each function ^j(X) with a neural network (̆X,θ^j). Consider first the GD version for estimating the network parameters. Because of (<ref>) we need to apply (<ref>) with (̣Y)=_j(Y,^1(Y),…,^K(Y)), namely θ^j_t=θ^j_t-1-μ∑_i=1^n_j{_j(Y^j_i,^1(Y^j_i),…,^K(Y^j_i)) -ω((̆X^j_i,θ^j_t-1))}ρ((̆X^j_i,θ^j_t-1))∇_θ(̆X^j_i,θ^j_t-1). Unfortunately the previous formula is impossible to use for the parameter updates because the functions ^j(Y) on the right hand side are the ones we are actually attempting to estimate. We recall that our estimation method of conditional expectation results in a final approximation of the form ω((̆X,θ^j_𝗈))≈^j(X) consequently, due to (<ref>), ω((̆Y,θ^j_𝗈)) could replace ^j(Y) in the previous updates. But this is still problematic since the limits θ^j_𝗈 are not known in advance. We therefore propose at iteration t to employ the most recent estimate ω((̆Y,θ^j_t-1)) of ^j(Y). This selection clearly allows for computations and produces the following updates θ^j_t=θ^j_t-1- μ∑_i=1^n_j{_j(Y^j_i,ω((̆Y^j_i,θ^1_t-1)),…,ω((̆Y_i^j,θ^K_t-1))) -ω((̆X^j_i,θ^j_t-1))}ρ((̆X^j_i,θ^j_t-1))∇_θ(̆X^j_i,θ^j_t-1), for j=1,…,K. We note that in the iteration for θ^j_t we use the dataset corresponding to ^j(Y|X). We also observe that the updates are performed in parallel since each iteration involves the update of all K network parameter vectors θ^j_t,j=1,…,K. Of course it is not necessary to use the same ω(·),ρ(·) functions or the same neural network configuration when approximating the desired functions ^j(X). In case we prefer to employ the SGD then the K datasets must be (randomly) mixed with each pair retaining its label. Then at iteration t if we select to process the pair (Y_t,X_t) with label j we apply θ^j_t =θ^j_t-1-μ{_j(Y_t,ω((̆Y_t,θ^1_t-1)),…,ω((̆Y_t,θ^K_t-1)))      -ω((̆X_t,θ^j_t-1))}ρ((̆X_t,θ^j_t-1))∇_θ(̆X_t,θ^j_t-1) θ^ℓ_t =θ^ℓ_t-1,ℓ≠ j. The limiting values θ^j_𝗈, j=1,…,K provide the estimates ω((̆X,θ^j_𝗈))≈^j(X) which are approximations of the solution of the system of equations. This is the general data-driven approach and computational methodology we propose. To test its effectiveness, we apply it to Optimal Stopping and Reinforcement Learning. 
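The parallel updates above can be sketched compactly in Python under two simplifications that we state explicitly: the plain mean-square instance of the (ω, ρ) machinery is used, and each unknown function is modelled as a linear combination of fixed random ReLU features (shared across the K models) so that the parameter gradient is just the feature vector. As in the text, the unknown functions inside each G_j are replaced by their most recent estimates at every iteration; all function and argument names below are ours.

```python
import numpy as np

def fit_system(datasets, G_list, mu=1e-3, iters=5000, width=100, seed=0):
    """datasets[j] = (Y_j, X_j): pairs sampled under the j-th conditional density.
    G_list[j]   = callable G_j(y, u1, ..., uK), applied elementwise.
    Returns (theta, predict) with predict(theta[j], x) approximating the j-th unknown function."""
    rng = np.random.default_rng(seed)
    K = len(datasets)
    W = rng.standard_normal((width, 1))
    b = rng.standard_normal(width)
    phi = lambda x: np.maximum(W @ np.atleast_2d(x) + b[:, None], 0.0)   # (width, len(x))
    predict = lambda th, x: th @ phi(x)
    theta = [np.zeros(width) for _ in range(K)]

    for _ in range(iters):
        new_theta = []
        for j, (Yj, Xj) in enumerate(datasets):
            # substitute the most recent estimates for the unknowns appearing inside G_j
            u_at_Y = [predict(theta[l], Yj) for l in range(K)]
            target = G_list[j](Yj, *u_at_Y)                 # G_j(Y_i, u^1(Y_i), ..., u^K(Y_i))
            resid = target - predict(theta[j], Xj)          # ... minus u^j(X_i)
            new_theta.append(theta[j] + (mu / len(Yj)) * (phi(Xj) @ resid))
        theta = new_theta                                    # all K parameter vectors updated in parallel
    return theta, predict
```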
§.§ Markov Optimal Stopping Consider a homogeneous Markov process {_t} which is observed (sampled) sequentially, namely at each time t we observe a new point _t. We can decide to stop sampling at any time T which can either be deterministic or random. When the decision to stop at {T=t} is based on the available information {X_0,…,X_t} accumulated up to time t, then T is called a stopping time adapted to {_t}. We are interested in the minimization of the following exponentially discounted average cost over a stopping time T <cit.> (X)=inf_T[α^T(_T)+∑_t=0^T-1α^t(_t)|X_0=X], where α is the exponential factor, (·),(·) are known functions with (_t) expressing the cost of sampling at t and (_t) the cost of stopping at t. If we consider the infinite horizon version (no hard limit on T) then we know (see <cit.>, Page 70) that (X) satisfies the equation (X)=__1[min{(_1),(_1)+α(_1)}|X_0=X], while the stopping time T_𝗈 defined as T_𝗈=inf{t≥0:(X_t)≤(X_t)+α(X_t)}, delivers the optimal cost (X) and is therefore optimum. The general system in (<ref>) clearly covers equation (<ref>) for Optimal Stopping by selecting K=1 and ^1(Y,u^1)=min{(Y),(Y)+α u^1}. A numerical solution for this problem can be devised based on the method introduced in Section <ref> when the transition density is known and X is scalar. If we select a sufficiently large interval and sample it at {X_1,…,X_n} we generate the vector =[(X_1),…,(X_n)]^⊺ of samples from the unknown solution, the known vectors =[(X_1),…,(X_n)]^⊺, =[(X_1),…,(X_n)]^⊺ and, finally the matrix as detailed in Section <ref>. Then the sampled version of (<ref>) takes the form =×min{,+α} where the “min” is applied on an element-by-element basis on the two vectors and +α. This equation can be solved by iterating over as follows _t=×min{,+α_t-1},  _0=, and considering the limit of _t as approximating the sampled form of the desired solution. §.§.§ Example with AR(1) Process Let {_t} be a homogeneous AR(1) process of the form _t=0.9 _t-1+√(5)_t where {_t} is standard i.i.d. Gaussian noise sequence. The sampling cost is selected (X)=0.1 while the stopping cost (X) is depicted in Fig. <ref>(a) in gray. The exponential discount factor is set to α=1. Because of this selection we can prove that min_X(X)≤(X)≤max_X(X) which can define a possible range for the solution. Since solving (<ref>) analytically is impossible we apply the numerical method we detailed above. In particular we select the interval [-30,30] and sample it uniformly at 5000 points. We form the matrix and the vectors ,, and iterate over _t for 1000 times. In Fig. <ref>(a), as we said, we can see the stopping cost (X) in gray and the numerical solution elevated by the sampling cost (X) in black. In other words we plot + because, according to (<ref>), this sum must be compared with the stopping cost in order to decide optimally whether to stop or continue sampling at any time t. For our proposed data-driven estimation method, we generate 500 consecutive realizations of the AR(1) process and use them to train a shallow network with hidden layer of size 100 and ReLU activations. We apply the GD algorithm depicted in (<ref>) with K=1, step-size μ=0.001 and forgetting factor for the ADAM version equal to λ=0.99. We run [A1] and [C1] with =[0.2,1] (the lower and upper bound of (X)) for 2000 iterations. Version [A1] (MSE) is plotted with blue line whereas [C1] with red (both elevated by (X)). We realize again that knowledge of the range can result in better estimates. Observing Fig. 
<ref>(a) one may argue that the error in [C1] is more pronounced when X≥10. This is because, as we can see from Fig. <ref>(b) where we plot the 500 consecutive realizations of the Markov process, there are very few samples in the training set with such values. According to Remark 4, this is the reason why we may experience estimates with large errors. §.§ Optimal Action Policy in Reinforcement Learning A second perhaps more popular nowadays problem is the optimal action policy in Reinforcement Learning. Suppose {_t} is a Markov controlled process with the action taking discrete values in the set {1,2,…,K} and with each action corresponding to a different transition density. Denote with ^j[·] the conditional expectation with respect to the transition density of action j. With every state S assume there is a reward (S) where (·) is a known scalar deterministic function. We are looking for the best action policy that will result in maximal average reward over an infinite and exponentially discounted time horizon of the form (S) =max_j_0,j_1,…[(_1)+γ(_2)+γ^2(_3)+⋯}|_0=S] =max_j_0__1[(_1)+γ(_1)|_0=S] where 0≤γ<1 is the geometric discount factor. If at time t=0 we observe state _0=S and we decide in favor of action j and after this point we always use optimal action policy then let us call the resulting reward ^j(S). It can then be proved (see <cit.>, Eq. (4.2), Page 90) that these functions satisfy the following system of equations ^j(S)=__1^j[(_1)+γmax_1≤ℓ≤ K^ℓ(_1)|_0=S], where j=1,…,K. Also (S)=max_1≤ℓ≤ K^ℓ(S). The previous equality suggests that if we know the functions ^j(S), then at time t if we observe state _t=S_t the action j_t^𝗈 which guarantees optimal reward is j_t^𝗈=argmax_1≤ℓ≤ K^ℓ(S_t). The system of equations (<ref>) is a special case of the general system in (<ref>). Indeed, we need to select all functions to be the same, that is, ^j(·)=(·) with (Y,u^1,…,u^K)=(Y)+γmax_1≤ℓ≤ Ku^ℓ. It is only for simplicity we have considered a finite number of actions. Applying similar analysis we can accommodate continuous actions with the corresponding optimal rewards ^j(S) replaced by the function (S,a) and a denoting action with continuous value. As in the Markov optimal stopping problem, we can offer a numerical solution when the K transition densities ^j(S_t|S_t-1) are known and the state is scalar. Again we select a sufficiently large interval which we sample at the points {S_1,…,S_n}. This gives rise to the K vectors ^j=[^j(S_1),…,^j(S_n)]^⊺ which are the sampled form of the K solution functions. We also form the K matrices ^j using the corresponding transition densities as explained in Section <ref>. Finally we consider the single reward vector =[(S_1),…,(S_n)]^⊺. The system of equations (<ref>) under a sampled form becomes ^j=^j×{+γmax_1≤ℓ≤ K^ℓ}, j=1,…,K, where the “max” is taken on an element-by-element basis over the K vectors ^ℓ. The solution of this system of equations can be obtained by applying the following iterative scheme ^j_t=^j×{+γmax_1≤ℓ≤ K_t-1^ℓ},  _0^j=0, where j=1,…,K, with the limits (as t→∞) of the K vector sequences {_t^j}, j=1,…,K approximating the sampled version of the optimal functions ^j(S). Let us now apply this methodology to a specific example. §.§.§ Example with Two AR(1) Processes We present a simple example with K=2 actions. The corresponding Markov processes are both AR(1) and of the form 1) _t=0.8_t-1+1+_t, 2) _t=0.8_t-1-1+_t with {_t} i.i.d. standard Gaussians. 
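For the pair of AR(1) action models just defined (Gaussian transition kernels with means 0.8S + 1 and 0.8S − 1 and unit variance), the sampled value iteration given above can be sketched as follows. The matrices F^j are assembled from the conditional cdfs with the same rectangle rule used earlier for conditional expectations; the reward in the text's example is only given graphically, so a hypothetical smooth reward is substituted here, and the grid and iteration count are illustrative (the example discussed next in the text uses γ = 0.8 and the interval [−20, 20]).

```python
import numpy as np
from scipy.stats import norm

def cdf_matrix(Sg, mean_fn):
    """Rectangle-rule matrix for a conditional Gaussian kernel N(mean_fn(s), 1)."""
    C = norm.cdf(Sg[None, :] - mean_fn(Sg)[:, None])      # C[i, j] = F(S_j | S_i)
    F = np.zeros((len(Sg), len(Sg)))
    F[:, 0]    = 0.5 * (C[:, 1] - C[:, 0])
    F[:, 1:-1] = 0.5 * (C[:, 2:] - C[:, :-2])
    F[:, -1]   = 0.5 * (C[:, -1] - C[:, -2])
    return F

Sg = np.linspace(-20.0, 20.0, 1001)                 # sampled state interval
F1 = cdf_matrix(Sg, lambda s: 0.8 * s + 1.0)        # action 1: S' = 0.8 S + 1 + w
F2 = cdf_matrix(Sg, lambda s: 0.8 * s - 1.0)        # action 2: S' = 0.8 S - 1 + w

r = np.exp(-0.5 * Sg**2)        # hypothetical reward (the text's reward is given only in a figure)
gamma = 0.8                     # geometric discount factor of the text's example

v1 = np.zeros_like(Sg)          # sampled V^1, V^2, initialised at zero as in the text
v2 = np.zeros_like(Sg)
for _ in range(200):
    target = r + gamma * np.maximum(v1, v2)          # element-by-element maximum
    v1, v2 = F1 @ target, F2 @ target

best_action = 1 + (v2 > v1).astype(int)              # optimal action at each sampled state
```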
We consider a reward function (S) which is the same as the stopping cost of the previous example and depicted in Fig. <ref>(a) in gray. We also set the geometric discount factor equal to γ=0.8. For the numerical solution we select the interval [-20,20] which we sample at 5000 points and apply the iteration presented in (<ref>). The outcome can be seen in Fig. <ref>(a) with black and gray lines for ^1(S),^2(S) respectively. It is against these results that we need to compare the data-driven method. For the data-driven estimation we randomly generate a length n=1000 action sequence {a_t} with a_t∈{1,2} which is used to generate 1000 realizations for the Markov controlled process. If a_t=i then to go from S_t to S_t+1 we use the ith Markov model. This means that on average we have 500 points per model. We apply [A1] and [C1] and for the latter, since 0.2≤(S)≤1, we can show that 0.2/1-γ≤^j(X)≤1/1-γ suggesting that we can select the interval =[1,5] as our range. For both functions we select a shallow neural network with hidden layer of size 100 and ReLU activations. We run the algorithms for 2000 iterations using a step size μ=0.001 and forgetting factor for the power estimation λ=0.99 of the ADAM version. Again knowledge of the range and using it in [C1] can produce better estimates. From Fig. <ref>(b) by following the evolution of the costs, we conclude that both iteration methods are stable and converge successfully. §.§.§ Exploration vs Exploitation Reinforcement Learning has become very popular due to the possibility of Exploring and Exploiting. Assuming that we start with sufficient number of initial data from each action, we can use them as described above to make an initial estimate of the functions ^j(S). Then these estimates can be used to decide about the next actions which constitutes the Exploitation phase. If the initial data are not sufficient to guarantee an estimate of acceptable accuracy, or if the statistical behavior of the data changes with time then we must periodically make random decisions about the next action in order to activate possibilities that are not reachable by the existing action rule. These randomly generated data must be used to update the estimates of the functions ^j(S) and this is achieved mostly with the help of the SGD. This is known as the Exploration phase. Whether at each step we should explore or exploit can be decided using randomization. Specifically, at each time t with probability 1-ϵ we could exploit and with probability ϵ explore. An asymptotic analysis could provide the limiting behavior of a scheme of this form in the stationary case when there is no change in the statistical behavior of the data but also in the case where the data are (slowly) varying in time and the updates attempt to track the change. Such an analysis is already available for adaptive algorithms for classical FIR filters <cit.> and it would be extremely interesting if we were able to extend it to accommodate the neural network class. 11 MB G.V. Moustakides, K. Basioti, “Training neural networks for likelihood/density ratio estimation,” arXiv: 1911.00405, 2019. GAN I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Comm. ACM, vol. 63, no. 11, pp. 139–144, 2020. CYB G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Math. Contr. Sig. Syst., vol. 2, no. 4, pp. 303–314, 1989. HTW K. Hornik, M. Tinchcombe, H. 
White, “Multilayer feedforward networks are universal approximators,” Neural Net., vol. 2, pp. 359–366, 1989. ADAM D.P. Kingma and J.L. Ba, “ADAM: A method for stochastic optimization,” Proc. Intern. Conf. Learn. Repres., ICLR-2015. ANS A.N. Shiryaev, Optimal Stopping Rules, Springer, 1978. PS G. Peskir, A. Shiryaev, Optimal Stopping and Free-Boundary Problems, Birkhäuser, 2006. TNB A. Tartakovsky, I. Nikiforov and M. Basseville, Sequential Analysis: Hypothesis Testing and Change-Point Detection, Chapman & Hall, 2020. SB R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, Massachusetts, 4th printing, 2002. BMP A. Benveniste, M. Métivier and P. Priouret, Adaptive Algorithms and Stochastic Approximations, Springer, 1990. EF E. Eleftheriou, D.D. Falconer, “Tracking properties and steady-state performance of RLS adaptive filter algorithms,” IEEE Trans. ASSP, vol. 34, no. 5, pp. 1097–1110, 1986. NGW B. Ninness, J.C. Gómez and S. Weller, “Frequency domain analysis of adaptive tracking algorithms,” IFAC Proc., vol. 30, no. 11, pp. 1643–1648, 1997. § MATLAB CODE FOR CONDITIONAL EXPECTATION EXAMPLES Copy/Paste from the PDF file to the Matlab (or text) editor needs extra work. The symbols “ ^∧ ” and “ ^' ” used for power and transpose are transferred wrongly. In the editor you need to apply a global replace with the corresponding keyboard symbols. Run and for the first and second example appearing in Fig. <ref>(a) and (b). cemain1.m cemain2.m cea1.m cea2.m cea3.m cec1.m § MATLAB CODE FOR OPTIMAL STOPPING EXAMPLE Copy/Paste from the PDF file to the Matlab (or text) editor needs extra work. The symbols “ ^∧ ” and “ ^' ” used for power and transpose are transferred wrongly. In the editor you need to apply a global replace with the corresponding keyboard symbols. Run for the example appearing in Fig. <ref>(a) and (b). osmain.m osnum.m osa1.m osc1.m stopcost.m sampcost.m § MATLAB CODE FOR REINFORCEMENT LEARNING EXAMPLE Copy/Paste from the PDF file to the Matlab (or text) editor needs extra work. The symbols “ ^∧ ” and “ ^' ” used for power and transpose are transferred wrongly. In the editor you need to apply a global replace with the corresponding keyboard symbols. Run for the example appearing in Fig. <ref>(a) and (b). rlmain.m rlnum.m rla1.m rlc1.m reward.m
http://arxiv.org/abs/2407.12591v1
20240717141333
Theoretical study of the influence of the photosynthetic membrane on B800-B850 energy transfer within the peripheral light-harvesting complex LH2
[ "Chawntell Kulkarni", "Hallmann Óskar Gestsson", "Lorenzo Cupellini", "Benedetta Mennucci", "Alexandra Olaya-Castro" ]
physics.chem-ph
[ "physics.chem-ph" ]
§ ABSTRACT Photosynthetic organisms rely on a network of light-harvesting protein-pigment complexes to efficiently absorb sunlight and transfer excitation energy to reaction centre proteins where charge separation takes place. In photosynthetic purple bacteria, such protein-pigment complexes are embedded within the cell membrane, with the lipid composition known to affect the complex clustering, thereby impacting inter-complex excitation energy transfer. However, less is known about the impact of the lipid bilayer on the intra-complex excitation dynamics. Recent experiments have addressed this question by comparing photo-excitation dynamics in detergent-isolated light harvesting complex 2 (LH2) to LH2 complexes individually embedded in membrane discs closely emulating the biological environment. These studies have revealed important differences in spectra and intra-complex energy transfer rates. In this paper we use available quantum chemical and spectroscopy data to develop a complementary theoretical study on the excitonic structure and intra-complex energy transfer kinetics of the LH2 of photosynthetic purple bacteria Rhodoblastus (Rbl.) acidophilus (formerly Rhodopseudomonas acidophila) in two different conditions: the LH2 in a membrane environment and detergent-isolated LH2. We find that dark excitonic states, crucial for the B800-B850 energy transfer within the LH2, are more delocalised for the membrane model. By using both non-perturbative and generalised Förster calculations, we show that such increased quantum delocalisation results in a B800 to B850 transfer rate 30% faster than in the detergent-isolated complex, in agreement with experimental results. We identify the dominating energy transfer pathways in each environment and show how differences in the B800 to B850 transfer rate fundamentally arise from changes in the electronic properties of the LH2 when embedded in the membrane. Furthermore, by accounting for the quasi-static variations of electronic excitation energies in the LH2, we show that the broadening of the distribution of the B800-B850 transfer rates is affected by the lipid composition. We argue that such variation in broadening could be a signature of a speed-accuracy trade-off, commonly seen in biological process. Understanding the kinetics of energy transfer within photosynthetic light-harvesting complexes under conditions as close as possible to their biological environments will provide a deeper insight into the biological mechanisms affecting their function. Experiments have shown that for the LH2 complex of photosynthetic purple bacteria, the cell membrane environment can enhance the efficiency of the key energy transfer step within each complex compared to when the photosynthetic complex is isolated via chemical methods. We develop a comprehensive theoretical analysis that rationalises such experimental observations and provide insight into quantum features and microscopic energy transfer pathways that may be enhanced in the membrane environment and which underpin the increased energy transfer rates. § INTRODUCTION In purple non-sulphur bacteria, the initial steps of photosynthesis are carried out by a network of protein-pigment complexes which are embedded in the bacterial cell membrane<cit.>. 
The network is built up of two types of complexes: the light-harvesting complex 2 (LH2) and LH1 which are responsible for the absorption and transfer of incident solar energy and the reaction centre (RC) which accepts excitation energy from the LH1 to facilitate transmembrane charge separation where excitation energy is converted to chemical energy. Since the LH1 surrounds the RC, together they form the core light harvesting complex (LH1-RC). Each LH1-RC is surrounded by several LH2 complexes, forming clusters on the cell membrane <cit.>. Here we focus on the LH2 from the purple bacteria Rhodoblastus (Rbl.) acidophilus which is composed of nine subunits that are arranged in a cyclic C9 symmetry <cit.>. Each subunit consists of one αβ heterodimer formed from two peptides (α and β), that bind three bacteriochlorophyll a chromophores (BChl a) and one carotenoid. The Bchl a’s absorb light in the infrared region and are named according to the wavelength of light they approximately absorb at. Each subunit contains one B800 Bchl and two B850 BChl a's labelled α and β according to the peptide it is ligated to. Due to the cyclic arrangement of the subunits in the LH2, two concentric rings of chromophores are formed: the B800 ring which lies close to the inner cytoplasmic surface of the membrane and the B850 ring which lies close to the periplasmic surface. The transfer of excitation energy from chromophores in the B800 ring to the B850 ring is a key energy transfer pathway within the LH2 <cit.>. Experimental studies focused on understanding the fundamental steps in photosynthetic light harvesting have contributed a vast amount of information on the structure and function of LH2 <cit.>. Many of these studies isolate LH2 by solubilising it in detergent, removing it from its native environment in the photosynthetic membrane. The impact of the membrane on the energy transfer dynamics within LH2 remains an open question. Recently, experimental work has found differences in the spectra and energy transfer of detergent isolated LH2 and membrane embedded LH2 <cit.>. With the existing comprehensive knowledge on the energy transfer mechanism within detergent isolated LH2, we have a benchmark to perform a systematic study of how energy transfer may be altered when LH2 is embedded in its native membrane environment. The bacterial photosynthetic membrane is composed primarily of phospholipids with different species of purple non-sulfur bacteria having varying lipid compositions <cit.>. Lipids in the membrane mediate clustering of the LH2 complexes, with different lipid compositions resulting in different clustering tendencies <cit.>. It has been suggested that the difference in organisation of LH complexes can alter the efficiency of energy transfer from initial absorption by an LH2 complex to its arrival at the RC. Live cells or sections of the native membrane have been studied, but present difficulties due to the complex biological environment <cit.>. Since whole cells are highly scattering, spectral signals are disturbed when using spectroscopic methods. To circumvent this issue, after isolating the LH2 with detergents, researchers then reconstitute LH2 into an artificial membrane and perform experiments on these samples <cit.>. Initial studies comparing the spectroscopic properties of detergent solubilised and membrane reconstituted LH2 found little difference between the two, concluding that a single model should be sufficient to describe both scenarios <cit.>. 
In contrast, experiments comparing LH2 from Rhodobacter (R.) sphaeroides solubilised into detergent micelles to LH2 self-assembled into membrane vesicles found differences in the absorption spectra at room temperature <cit.>. In the membrane vesicles, the B850 band of LH2 was broader and red shifted by 1.1 nm and the Stokes shift between the absorption and fluorescence was greater in the membrane. Membrane vesicles typically contain multiple LH2 complexes which, through their intercomplex interactions, can add another environmental contribution to the dynamics of a single LH2 leading to broadening in its spectra. Therefore, to isolate the membrane's effect on the complex, a single LH2 embedded in a membrane is ideal. Ogren et al. embedded LH2 in a membrane nanodisk which allows a single complex to be separated and probed since each disk holds a single LH2 <cit.>. The absorption spectra of a single LH2 complex in the membrane nanodisk also exhibited a slightly redshifted B850 absorption peak compared to detergent solubulised LH2 and pump-probe measurements found the B800 to B850 transfer rate in the membrane nanodisk to be  30% faster (670 fs) than in detergent (875 fs). In this work, we conduct a theoretical study of the impact of the membrane environment on energy transfer within the LH2 of Rbl. acidophilus, to determine if the differences in spectra and energy transfer times observed experimentally in R. sphaeroides hold across alternate species of purple bacteria and how these differences can be mapped down to microscopic changes in the energy transfer pathways. Atomic level calcualtions for electronic and environmental parameters are currently only available for membrane embedded LH2 from Rbl. acidophilus <cit.>. However, like R. sphaeroides, it contains nine subunits with cyclic C9 symmetry and produces similar linear absorption spectra <cit.> such that its structure is commonly used to model R. sphaeroides <cit.>. Due to these structural and spectral similarities, we aim to see if the changes seen in the spectra and energy transfer times of R. sphaeroides can be expected in Rbl. acidophilus. We compare two models of LH2, one based on experimental spectra of detergent solubilized LH2 <cit.> and the other describing LH2 embedded in a 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC) membrane <cit.>. We use two different spectral densities to describe the detergent and membrane environment and calculate energy transfer rates within the LH2 using two different levels of theory: generalised Förster theory (GFT) <cit.> a perturbative method and hierarchical equations of motion (HEOM) a numerically exact method. Due to the disordered nature of biological systems, each complex is perturbed differently by its local environment creating slight variations in the electronic properties of each complex. Thus, we use many realisations of the electronic parameters to calculate intercomplex energy transfer rates and exciton properties and analyse the specific form of their statistical distribution to see if they reveal anything about the membrane’s influence on energy transfer dynamics within the LH2. We compare the exciton delocalisation for detergent isolated LH2 and membrane embedded LH2 using the inverse participation ratio as a measure. Using GFT and HEOM, we calculate the B800 to B850 energy transfer rate distribution for both models and consider the B800 and B850 exciton levels that form the dominating energy transfer pathways in each environment. 
§ METHODS §.§ Hamiltonian To model the LH2 complex, we divide the total system Hamiltonian into the system, the environment and the interaction between the two: Ĥ = Ĥ_S+Ĥ_B+Ĥ_SB. Here Ĥ_S represents the electronic degrees of freedom of the N chromophores within the LH2 and is given by a Frenkel exciton Hamiltonian <cit.>, where each chromophore site is treated as a two level system (we have ħ = 1 throughout), Ĥ_S = ∑_i^N E_i|i⟩⟨i| + ∑_i,j<i^NV_ij (|i⟩⟨j| + |j⟩⟨i|) , where |i⟩ is an excited state localised on site i. E_i=ϵ_i + λ_i is the transition energy from ground to excited state of site i termed the site energy and is the sum of the bare electronic energy in the absence of phonons and the reorganisation energy. λ_i = π^-1∫_0^∞ dω J_i(ω)/ω is the energy the bath must dissipate to relax to the new equilibrium in the excited state |i⟩ which can be obtained by integrating over the spectral density J_i(ω). The microscopic origin of λ_i is due to the excited state potential energy surface being displaced relative to the ground state <cit.>. V_ij is the electronic coupling between the Q_y transition dipole moments at sites i and j. We denote |α⟩, the eigenstates of Ĥ_s with energy E_α, i.e. Ĥ_s=e_α|α⟩, which are collective electronic states, or excitons, delocalised across all chromophores, i.e. |α⟩=∑_i C_i^α|i⟩. Site energies and nearest neighbour electronic couplings for the membrane and detergent Hamiltonian's are given in Table <ref>. For the detergent Hamiltonian, interchromophore electronic couplings are calculated using the dipole-dipole approximation, V_ij^dipole = C 𝐝̂_i·𝐝̂_j - 3(𝐫̂_ij·𝐝̂_i) (𝐫̂_ij·𝐝̂_j)/|r_ij|^3, where C is a constant accounting for the dipole strength, 𝐝̂_i is the transition dipole unit vector at site i, 𝐫̂_ij is the unit vector pointing from the position of site i to site j and r_ij is the distance between sites i and j. The site coordinates and transition dipole moments are taken from the crystal structure of LH2 from Rbl. acidophilus <cit.> and C is taken to be 230,000 Åcm^-1 for the B800 sites and 348,000 Åcm^-1 for the B850 sites, chosen to reproduce energies of the excitonic states. Additionally, these values of C produce couplings that agree with more sophisticated transition density cube methods used to determine electronic couplings in the LH2 <cit.>. For nearest neighbour electronic couplings in the B850 ring, the dipole-dipole approximation no longer holds due to the proximity of the chromophores, hence couplings were taken from literature where they are fitted to reproduce experimental spectra <cit.>. The electronic parameters for the membrane Hamiltonian were calculated using quantum chemical methods that account for the mutual polarisation between the lipid-protein environment and the chromophores <cit.>. Site energies and couplings are averaged over a trajectory of the LH2 in a lipid environment using molecular dynamics simulations. The site energies and nearest neighbour couplings of the B800 and B850 chromophores are taken from <cit.> and are given in Table <ref>. The environment, H_B, corresponds to the intermolecular vibrations of the chromophores along with the motion of the proteins and is modelled as a bath of quantised harmonic oscillators (vibrational modes), Ĥ_B=∑_i,kω_i,k(b̂_i,k^†b̂_i,k + 1/2) , where b_i,k^† and b_i,k are bosonic creation and annihilation operators of frequency modes ω_i,k satisfying commutation relations [b_i,k,b_j,k^'^†] = δ_i,jδ_k,k^' <cit.>. 
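As an aside on the electronic part of the model, the point-dipole coupling formula introduced above can be evaluated with a few lines of Python; positions are in Å, the prefactor C is the dipole-strength constant quoted in the text, and the geometry in the example call is invented purely for illustration (the actual inputs come from the crystal structure).

```python
import numpy as np

def dipole_coupling(pos_i, pos_j, d_i, d_j, C):
    """Point-dipole coupling V_ij of the text, in the units carried by C / length^3.

    pos_i, pos_j : chromophore positions (in Angstrom)
    d_i, d_j     : unit vectors along the Qy transition dipoles
    C            : dipole-strength prefactor (230,000 for B800, 348,000 for B850 in the text)
    """
    r = np.asarray(pos_j, dtype=float) - np.asarray(pos_i, dtype=float)
    dist = np.linalg.norm(r)
    r_hat = r / dist
    return C * (np.dot(d_i, d_j) - 3.0 * np.dot(r_hat, d_i) * np.dot(r_hat, d_j)) / dist**3

# made-up geometry, for illustration only (real inputs come from the crystal structure)
V = dipole_coupling([0.0, 0.0, 0.0], [21.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 0.96, 0.28], C=230_000.0)
```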
Each site is linearly coupled to an environment displacement mode such that the system-environment interaction is of the form Ĥ_SB = ∑_i,kg_i,k(b̂_i,k+b̂_i,k^†)|i⟩⟨i| = ∑_iB̂_i|i⟩⟨i|, where g_i,k is the interaction strength. Influence of the environment on the system dynamics may be described fully by the system-bath correlation function C_i(t) = ⟨B̂_i(t)B̂_j(0)⟩_B = 1/π∫_0^∞dω J_i(ω)((βω/2)cos(ω t) - isin(ω t)), where β = 1/k_BT. Within each band of LH2, we assume that local electronic-vibrational interactions are identical such that all sites are characterised by the same spectral density which takes the Drude-Lorentz form, J_i(ω) = 2λ_iγ_iω/ω^2 + γ_i^2, where γ_i is the cutoff frequency corresponding to the bath relaxation rate. For a Drude-Lorentz spectral density, the bath correlation function may be expressed as an exponential series <cit.> C_i(t) = ∑_k c_k,ie^-ν_k,it, where the coefficients and rates that enter the expansion are obtained using the Matsubara expansion method, c_0,i = λ_iγ_i((βγ_i/2) - i), ν_0,i = γ_i, c_k,i = 4λ_iγ_i/βν_k/ν_k^2 - γ_i^2 and ν_k,i = ν_k, where ν_k = 2π k/β are the Matsubara frequencies with k = 1, 2, 3…. The environmental parameters introduced here, λ_i and γ_i are given for membrane embedded and detergent isolated LH2 in Table <ref>. §.§ Static disorder In the previous section, fixed electronic parameters were given for the chromophore sites in the LH2. However, owing to the dynamic nature of the biological environment, slow conformational motions of the proteins lead to random shifts in the electronic parameters of the chromophores <cit.>. Stochastic fluctuations in the local environment of the chromophores create shifts in their site energies while changes in the orientation and position of the chromophore transition dipole moments which alter interchromophore couplings <cit.>. Since these changes are slow compared to energy transfer timescales, they can be accounted for by taking an ensemble average over many realisations of the electronic parameters. Single molecule spectroscopy has shown that static disorder is largely diagonal <cit.>. Therefore, we account for static disorder by adding an offset δ_i^r∈{δ_i}_r to the site energies of the system Hamiltonian in the chromophore site basis Ĥ_S^r = ∑_i^N (E_i + δ_i^r) |i⟩⟨i| + ∑_i,j<i^NV_ij (|i⟩⟨j| + |j⟩⟨i|), where r labels a particular realisation. Each δ_i^r is randomly sampled from a Gaussian distribution centred at zero, whose standard deviation, σ, corresponds to the level of static disorder. Hence, excitonic energies and exciton delocalisation are different for each realisation. Calculations of observables are averaged over many realisations of static disorder in order to account for its effects on the system. Static disorder for the B800 sites and B850 sites in detergent and membrane are given in Table <ref>. §.§ l1 Norm of Coherence Due to strong interchromophore electronic couplings in the B850 ring, an excitation in the ring manifests as a delocalised exciton state spread across multiple chromophore sites. In order to quantify the delocalisation of an exciton state |α⟩, we will use two measures: the l1 norm of coherence <cit.> and, the more known, participation ratio. This will allow us to analyse if different quantifiers of exciton delocalisation lead to the same conclusions. 
The l1 norm of coherence denoted as C_l1 <cit.> is a measure of coherence based on distance measures and represents the distance of the density matrix associated to ⟨α| i.e ρ̂^α=|α⟩⟨α| to the set of incoherent quantum states in the reference basis |i⟩. C_l1(ρ̂^α) is then given by C_l1(ρ̂^α)=∑_i,j≠ i |ρ̂_i,j^α|= ∑_i,j≠ i |C_i^α(C_j^α)^*| , where C_i^α=⟨i|α⟩ is the amplitude of the excited state of chromophore i in the exciton eigenstate |α⟩. Under incoherent processes, C_l1 does not increase and therefore it provides an appropriate quantifier of coherence <cit.>. A more common measure of exciton delocalisation is the inverse participation ratio (IPR) which is given by, IPR_α = 1/∑_i^N|C_i^α|^4, where C_i^α is as defined above. The IPR represents how many chromophores an exciton state |α⟩ is extended over. For example, for a localised exciton IPR = 1 while for a completely delocalised exciton IPR = N, where N is the number of chromophores in the ring. §.§ Hierarchical equations of motion hierarchical equations of motion In order to quantify energy transfer rates within the LH2, we apply the hierarchical equations of motion (HEOM) <cit.> to compute the quantum dynamics for the the full 27 site model of LH2 that includes both the B800 and B850 and interactions among them in order to predict linear spectra and estimate transfer rates. The HEOM can yield exact quantitative results for the electronic dynamics provided that system-environment correlation functions are represented by an exponential series expansion as in Eq. (<ref>). The HEOM is of the form ρ̇̂̇_n = (ℒ - Ξ - ∑_k,i n_k,iν_k,i)ρ̂_n - i∑_k,i(ℒ_k,i^-ρ̂_n_k,i^- + ℒ_k,i^+ρ̂_n_k,i^+), where n is a multi-index consisting of discrete integers n_k,i. An auxiliary density operator (ADO) ρ̂_n is said to belong the n-th tier of the hierarchy if ∑_k,i n_k,i = n. The reduced density matrix of the system is identified as ρ_0. The hierarchy in Eq. (<ref>) is formalized in terms of super-operators such that for an arbitrary system operator  we may write Â^× and Â^∘ which denote super-operators whose action onto a system space operator B̂ is given by Â^×B̂ = [Â, B̂] and Â^∘B̂ = {Â, B̂}. We have ℒ = -iĤ_S^×, ℒ_k,i^- = Re(c_k,i)n̂_i^× + iIm(c_k,i)n̂_i^∘, ℒ_k,i^+ = n̂_i^×. We truncate the hierarchy by setting all ADOs beyond a pre-set hierarchy tier to zero. The truncation tier L is simultaneously set to be large enough such that numerical results have converged, and small enough so that the simulation will run in a reasonable amount of time. The Matsubara series is truncated as well by approximating e^-ν_k t≈1/ν_kδ(t) for all k≥ M, where M is another pre-set threshold chosen similarly to L. These approximated terms for the series expansion are then described by the terminator term Ξ = ∑_m(2λ_m/βγ_m(1 - βγ_m/2(βγ_m/2)) - ∑_k=1^Mc_k,m/ν_k)n̂_m^×n̂_m^× <cit.>. We furthermore improve convergence of the HEOM results by applying the scaling procedure developed by Shi and co-workers <cit.>. §.§.§ Exact ring population dynamics and its fit to a Pauli master equation In order to estimate B800 to B850 energy transfer rates based on the HEOM dynamics, we take our initial state to be the Boltzmann distribution for the B800 eigenstates, i.e. ρ̂(0) = e^-βĤ_B800/Tr(e^-βĤ_B800), which is then propagated in time as per the HEOM in Eq. (<ref>). We define the total B800 population dynamics as P_B800(t)=∑_α∈ B800⟨α |ρ̂(t)|α⟩ with |α⟩ the exciton eigenstates of Ĥ_B800, and similarly for the total B850 population dynamics, P_B850(t). 
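A small sketch of the initial condition and the ring-population observables entering the HEOM calculation just described is given below; it assumes the 27-site single-excitation Hamiltonian is available as a NumPy array with the nine B800 sites indexed first (an ordering assumption of ours), and it does not reproduce the hierarchy propagation itself.

```python
import numpy as np

k_B_cm = 0.695                      # Boltzmann constant in cm^-1 / K
beta = 1.0 / (k_B_cm * 300.0)       # inverse temperature at 300 K

def initial_state_and_projectors(H, n_b800=9):
    """Thermal B800 initial state (embedded in the full one-exciton space) and the
    projectors whose expectation values give P_B800(t) and P_B850(t)."""
    N = H.shape[0]
    H800 = H[:n_b800, :n_b800]                         # B800 block of the Hamiltonian
    w, U = np.linalg.eigh(H800)                        # B800 exciton energies and coefficients
    pops = np.exp(-beta * (w - w.min()))               # Boltzmann weights (shift for stability)
    pops /= pops.sum()
    rho0 = np.zeros((N, N))
    rho0[:n_b800, :n_b800] = U @ np.diag(pops) @ U.T   # Boltzmann mixture of B800 excitons
    P800 = np.zeros((N, N))
    P800[:n_b800, :n_b800] = np.eye(n_b800)            # sum over B800 excitons = B800-site projector
    P850 = np.eye(N) - P800                            # remaining 18 B850 sites
    return rho0, P800, P850

# ring populations of a propagated density matrix rho_t:  np.trace(P800 @ rho_t), etc.
```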
To estimate the transfer rates from B800 to B850, once a steady state is reached, we fit P_B800 and P_B850 to a Pauli master equation of the form ∂_t[ P_B800; P_B850 ] = [ -k_down k_up; k_up -k_down ][ P_B800; P_B850 ], where k_up and k_down are uphill and downhill decay rates corresponding to the B850→ B800 and B800→ B850 transfer process, respectively. We can solve for P_B800 by using the fact that P_B800(t) + P_B850(t) = 1 such that the B800 population dynamics is of the form P_B800(t) = k_up + k_downe^-(k_up + k_down)t/k_up + k_down, where the k_down and k_up are numerically determined from a fit to HEOM-simulated population dynamics. This procedure allows estimation of rates that are qualitatively comparable to GFT rates but we do not expect a full quantitative agreement as we are effectively mapping the kinetics of transfer to a two state system, whereas GFT rates consider a multiple parallel processes of exciton to exciton transfer. We will indeed show the qualitative agreement between HEOM and GFT rates and therefore find that the results from the exact treatment support the insight gained from GFT. §.§.§ Linear spectra Linear absorption spectra are computed using α_A(ω) = Re[∑_p=x,y,z∫_0^∞dt ⟨μ̂_p(t)μ̂_p(0)|_⟩ρ_0e^iω t], where the initial state of the system is the ground state ρ_0 = |0⟩⟨0| and μ̂_p(t) is the Heisenberg picture dipole operator corresponding to the p-direction. The dipole operators are of the form μ̂_p = ∑_i d_i,p|i⟩⟨0| + h.c., where d_i,p is the p component of the the dipole at site i. Linear fluorescence spectra are computed using, I_D(ω) = Re[∑_p=x,y,z∫_0^∞dt ⟨μ̂_p(t)μ̂_p(0)|_⟩ρ_the^iω t], where the initial state of the system is the thermal steady state of the system. We determine ρ_th iteratively via the biconjugate gradient stabilized method <cit.> with an initial guess given by the Boltzmann distribution ρ(0)=e^-β H_B800/Tr(e^-β H_B800)⊕𝕀_B850, where 𝕀_B850 is the identity for the single excitation subspace of the B850 ring. §.§ Generalised Förster theory generalised Förster theory In addition to HEOM, we use GFT to calculate the B800 to B850 energy transfer rate. By doing so, we can confirm that our results hold qualitatively at different levels of theory and are not dependent on the approximations made in GFT. Additionally, GFT is a less computationally expensive method that allows the computation of more realisations of static disorder within a reasonable time frame. GFT describes exciton energy transfer from a donor aggregate to an acceptor aggregate that is weakly coupled to one another <cit.>. It is assumed that, within each aggregate, electronic couplings are strong such that an excitation forms a delocalised exciton state. In the LH2, the donor and acceptor aggregates correspond to the B800 and B850 rings. Strong interchromophore couplings in each ring allow for an excitation to be delocalised across the ring instead of being confined to a single chromophore site. To model B800 to B850 energy transfer, it is assumed that following an electronic transition in the B800 ring, thermal relaxation occurs on a shorter timescale than energy transfer, such that transfer to B850 occurs from a thermally populated B800 state. 
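Returning briefly to the rate estimate from the HEOM populations described above, the fit of P_B800(t) to the closed form of the two-state master equation can be done with a standard nonlinear least-squares routine; since the actual HEOM traces are not reproduced here, the sketch exercises the fit on a synthetic population curve with made-up rates and noise.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_b800_model(t, k_up, k_down):
    """Closed-form B800 population of the two-state Pauli master equation, with P_B800(0) = 1."""
    k = k_up + k_down
    return (k_up + k_down * np.exp(-k * t)) / k

# t_grid (ps) and p800 would come from the HEOM propagation; here a trace with known
# rates is faked purely to exercise the fit.
t_grid = np.linspace(0.0, 10.0, 400)
p800 = p_b800_model(t_grid, 0.1, 1.2) + 0.002 * np.random.default_rng(2).standard_normal(400)

(k_up_fit, k_down_fit), _ = curve_fit(p_b800_model, t_grid, p800, p0=(0.1, 1.0))
transfer_time_ps = 1.0 / k_down_fit        # estimated B800 -> B850 transfer time
```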
Thus, the B800 to B850 energy transfer rate is given by <cit.>: K_GFT = ∑_α,β P_αk_αβ, where α labels a donor exciton, β labels an acceptor exciton, P_α is the thermal population of the donor state and k is the exciton transfer rate from α to β, which is given by the product of the square magnitude of the exciton coupling and the exciton spectral overlap, k_αβ = |V_αβ|^2 O_αβ. V_αβ is given by <cit.>, V_αβ = ∑_i∈ D, j∈ A C_i^α C_j^β*V_ij, where C_i^α = ⟨i|α|$⟩ is the amplitude coefficient of site i in the donor exciton eigenstate.O_αβis the spectral overlap between the donor fluorescence line shapeD̃_α(ω)and acceptor absorption line shapeD_β(ω)given by, O_αβ(ω) = 1/2π∫_-∞^∞ dωD̃_α(ω) D_β(ω) . The form of the lineshape functions may be obtained using pertubative theories as given in the following section. §.§.§ Lineshape theory To determine the lineshapes we follow the method outlined by Renger <cit.> where the second order cumulant expansion is used to derive an equation of motion of the reduced system density matrix. This yields lineshape functions of the form D̃_α(ω) = 2Re∫_0^∞dt e^iω t e^-i(ω_α - λ_αα,αα)t - g_αα,αα^*(t) - t/τ_α, D_β(ω) = 2Re∫_0^∞dt e^iω t e^-i(ω_β - λ_ββ,ββ)t - g_ββ,ββ(t) - t/τ_β, for whichO_αβmay be written as: O_αβ(ω) = 2Re∫_0^∞dt e^iω_αβt e^-i(λ_αα,αα+λ_ββ,ββ)t × e^-(g_αα,αα(t)+g_ββ,ββ(t)) e^-(1/τ_α+1/τ_β)t, whereω_αis the energy of excitonα,λ_αβ,γδ = ∑_i(C_i^α)^*C_i^β(C_i^γ)^*C_i^δ λ_iis the exciton reorganisation energy,g_αβ,γδ(t) = ∑_i(C_i^α)^*C_i^β(C_i^γ)^*C_i^δ g_i(t)is the exciton line broadening function andτ_αis the lifetime of excitonα. The exciton lifetimes are approximated using modified Redfield theory as outlined in the supporting material.g_i(t)is the site line broadening function which, for the bath correlation function we consider (Eq. (<ref>)), may be written as g_i(t) = c_0,i/γ_i^2(e^-γ_it + γ_it-1) + ∑_k=1c_k,i/ν_k^2(e^-ν_kt + ν_kt-1), The Matsubara summation terms labelled bykare low temperature corrections to the exponential expansion of the bath correlation function. Since we are interested in the function of LH2 in a physiological environment, our calculations are at 300K where the Matsubara terms are less important, hence we we truncate the summation atk=1, as the correlation functionC_idoes not change when including higher order terms. Aside from computing energy transfer rates, the lineshapes in eqs. (<ref>) and (<ref>) are also used to compute linear spectra. The linear absorption and fluorescence spectra of the respective B800 and B850 rings can be obtained using their relationship toD_β(ω)andD̃_α(ω)<cit.>, α_A(ω) ∝∑_β|μ⃗_β|^2 D_β(ω), I_D(ω) ∝∑_α P_α|μ⃗_α|^2 D̃_α(ω), where|μ⃗_α|is the transition dipole strength of excitonαgiven by|μ⃗_α|^2 = |∑_iC_i^α μ⃗_i|^2andμ⃗_iis the transition dipole moment at chromophore site i. § RESULTS We begin by examining properties of the excitons that have been well documented by previous theoretical and experimental work on the LH2 and see how they are altered for LH2 embedded in a lipid membrane environment. Motivated by experiment, we focus on comparing POPC membrane LH2 to detergent LH2, but similar conclusions apply to DOPC membrane. We finally compare B800 to B850 transfer rates computed using GFT and HEOM in each environment and determine the main energy transfer pathways that contribute to the transfer rate to see how they change from membrane to detergent. §.§ Exciton energy vs. 
static disorder The B800 and B850 Hamiltonian's were diagonalised to obtain B800 exciton energies and B850 exciton energies respectively. In the absence of static disorder, the B800 exciton manifold consists of one low lying energy level, followed by four pairs of doubly degenerate levels. In the B850 manifold, the lower energy exciton levels have a similar structure consisting of one low energy level followed by four doubly degenerate levels. The higher energy levels, B850* <cit.>, consist of four doubly degenerate levels followed by a single highest energy level. Figure <ref>(a) and <ref>(b) gives the exciton energy levels of the B850 ring as a function of static disorder averaged over 10,000 realisations for membrane and detergent LH2 respectively. As static disorder increases the degeneracy of the exciton levels is lifted and the average energy levels begin to diverge. Following the inclusion of static disorder, the k quantum number for each eigenstate is no longer well defined. Here, we use k simply for ease of labelling, with negative k values referring to the lower energy level. §.§ Exciton transition dipole strength vs. static disorder Through the interaction ofμ⃗_αwith an electromagnetic field, an optical transition from the ground state to the excited state, or vice versa, is possible. Thus,|μ⃗_α|^2can tell us if a transition to a given exciton state is optically allowed, as it defines the strength of the interaction betweenμ⃗_αthe electromagnetic field. Figure <ref>(c) and <ref>(d) givesμ⃗_αfor the five lowest lying levels in the B850 ring for increasing static disorder averaged over 10,000 realisations for membrane and detergent LH2 respectively. Without accounting for static disorder the k= ±1states of the B850 ring are the only bright states, i.e. almost all of the transition dipole strength in the system is associated with them. As static disorder increases, the transition dipole strength is redistributed to neighbouring exciton states that are close in energy to k = ±1, namely k = 0, ±2. The k = ±1states still retain a majority of the transition dipole strength when accounting for static disorder, making them most important for energy transfer to the B850 ring via optical transitions. §.§ Exciton energy levels and dipole strengths at defined static disorder Figure <ref> shows the average positions of the B800 and B850 exciton energy levels calculated using 10,000 realisations of static disorder for membrane embedded LH2 and detergent isolated LH2. At higher levels of static disorder, energy levels diverge more, therefore we should expect a greater increase in the width of the average exciton manifold where there is high disorder. In the B850 ring static disorder is 50 cm^-1higher in membrane so the B850 manifold width increases more in membrane (27%) than in detergent (24%). In the B800 ring static disorder is 10 cm^-1greater in detergent LH2 and the B800 manifold width increases  3 times as much in detergent compared to in membrane, suggesting that the B800 ring is more sensitive to static disorder. Weak nearest neighbour electronic couplings in the B800 ring are less than the levels of static disorder in the ring. Changes in the B800 site energies are therefore comparable to the coupling strength between them making the excitonic structure more sensitive to static disorder. 
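The disorder-averaged quantities discussed here and in the following subsections can be reproduced schematically: sample Gaussian diagonal disorder, diagonalise, and accumulate exciton energies, transition dipole strengths, the IPR and the l1 norm of coherence defined earlier. The ring parameters and dipole geometry in the usage lines below are idealised placeholders, not the fitted membrane or detergent values quoted in the tables above.

```python
import numpy as np

def disorder_average(H0, mu, sigma, n_real=10_000, seed=0):
    """Average exciton energies, dipole strengths |sum_i C_i^a mu_i|^2, IPR and the
    l1 norm of coherence over Gaussian diagonal (site-energy) disorder of width sigma."""
    rng = np.random.default_rng(seed)
    N = H0.shape[0]
    E = np.zeros(N)
    D = np.zeros(N)
    ipr = np.zeros(N)
    cl1 = np.zeros(N)
    for _ in range(n_real):
        H = H0 + np.diag(sigma * rng.standard_normal(N))
        w, C = np.linalg.eigh(H)                            # columns C[:, a] are exciton coefficients
        E += w                                              # energies, ascending order
        D += np.linalg.norm(C.T @ mu, axis=1)**2            # |sum_i C_i^a mu_i|^2
        ipr += 1.0 / np.sum(C**4, axis=0)                   # inverse participation ratio
        cl1 += np.sum(np.abs(C), axis=0)**2 - 1.0           # sum_{i != j} |C_i^a C_j^a| for normalised states
    return E / n_real, D / n_real, ipr / n_real, cl1 / n_real

# illustrative use on an idealised 18-site dimerised ring (placeholder parameters):
N, E0, V1, V2 = 18, 12500.0, 300.0, 230.0
H0 = np.diag(np.full(N, E0))
for i in range(N):
    H0[i, (i + 1) % N] = H0[(i + 1) % N, i] = V1 if i % 2 == 0 else V2
angles = 2 * np.pi * np.arange(N) / N
mu = np.stack([-np.sin(angles), np.cos(angles), np.zeros(N)], axis=1)   # tangential unit dipoles
E_avg, D_avg, ipr_avg, cl1_avg = disorder_average(H0, mu, sigma=200.0, n_real=500)
```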
Due to the differences in the average excitonic structure in membrane and detergent LH2, the B800 excitons overlap spectrally with different B850 excitons in each environment, which impacts the key B800 to B850 energy transfer pathways. For the membrane, there is a greater overlap on average between the B800 states and the the dark B850* states, while for detergent the overlap is with lower energy B850 states. The energy transfer pathways that dominate the B800 to B850 transfer in each environment as determined by GFT is shown by the red arrows in Figure <ref>. Differences in energy transfer pathways can result in differences in the overall B800 to B850 transfer rate. §.§ Exciton delocalisation We can examine differences in the delocalisation of excitons in membrane and detergent LH2 by calculating theC_l1for excitons localised on each respective ring, where excitations are understood to be superpositions of excited states localised on single sites. We additionally calculate the IPR of the excitons and compare the two measure of delocalisation. At zero static disorder, excitons have the same IPR in all environments, apart from a small 6% increase in POPC membrane and DOPC membrane for four B850 levels, k=±4and k=±5relative to the same excitons in detergent. Noticeable differences start to emerge when the IPR is calculated at the level of static disorder expected in each ring. Table <ref> and <ref> givesC_l1and the IPR averaged over 10,000 realisations of static disorder for B850 and B800 excitons respectively. An overall decrease in the delocalisation is seen across all excitons. This is because static disorder creates random shifts in the electronic parameters of the chromophores such that their site energies are no longer identical, reducing the symmetry of the system which tends to localise the excitons. However in each environment, the localising effect of static disorder perturbs each exciton differently.C_l1is a proper measure of coherence based on distance measures, hence provides a more reliable value to compare the delocalisation of different states. For example, take the B850 states k = -3 and k = +2, the IPR is equal for these states, yetC_l1reveals that they have different delocalisation. For other states the IPR predicts different delocalisation whenC_l1shows that those states have identical delocalisation. Thus the IPR can be misleading when a comparison of state delocalisation is desired. Comparing the averageC_l1of the B800 excitons, (C_l1(ρ^α) = 1/N∑_α∈B850^N C_l1(ρ^α)), excitons are the least delocalised in detergent compared to the membrane environments, as expected due to the higher level of static disorder and weaker electronic couplings in the B800 ring in detergent. Comparing the averageC_l1for all B850 states in each environment we find a larger average delocalisation of states in POPC membrane than in detergent and DOPC membrane. This seems to arise primarily from the high energy dark states of the B850 ring having increased delocalisation as the lower energy bright states tend to have reduced delocalisation compared to detergent LH2. Figure <ref> comparesC_l1with increasing static disorder for three low energy and three high energy exciton states of the B850 ring in POPC membrane and detergent. We find that the order of the exciton delocalisation changes depending on the level of static disorder. Amongst the high energy levels in POPC membrane, the k = -6 level is more delocalised than the k = +5 level at static disorder below 200 cm^-1(Figure <ref>b). 
In POPC membrane, there is a reduction in delocalisation of some states in the low-energy manifold (k = 0 to k = +4) relative to the detergent states, as expected given that static disorder is greater in the B850 ring of the membrane. Some states in the high-energy manifold (k = -5 to k = 9) display increased delocalisation in POPC membrane, a result of stronger electronic couplings in the B850 ring, which results in the high- and low-energy B850 excitons having a more comparable delocalisation than in detergent. To quantify this, the difference between the average C_l1 of the high-energy manifold and the low-energy manifold is 4 for both membrane models and 6 for the detergent model. Thus in detergent LH2 there is a clear distinction in the delocalisation between the lower-energy exciton manifold and the high-energy manifold, which is less pronounced in membrane environments. These calculations suggest that the membrane tends to preserve the symmetry of the excitonic structure of the B850 ring by tuning the delocalisation of the high- and low-energy exciton manifolds, thereby enhancing a quantum feature of the system. Since an excitonic description is required to accurately predict energy transfer rates in LH2, changes in exciton delocalisation could manifest as changes in the energy transfer pathways of an excitation <cit.>, altering coherent dynamics in LH2 when embedded in the membrane. As the system evolves in time, exciton delocalisation can change due to environmental interactions <cit.>. More sophisticated measures can help verify whether these differences in delocalisation between detergent and membrane persist over the inter-ring energy transfer timescales.

§.§ Theoretical linear spectra

One of the key differences seen in experiments comparing detergent-isolated and membrane-embedded LH2 is the redshift of the B850 band in the linear absorption spectra of the LH2 <cit.>. In Figure <ref>(a) we have computed the B850 absorption and B800 fluorescence for membrane and detergent LH2 using the same lineshape theory that is used to compute energy transfer rates in GFT and HEOM. The redshift of the B850 absorption peak is obtained at both levels of theory, although it is exaggerated in the lineshape-theory spectra. While the lineshape spectra predict the same redshift for both POPC and DOPC membrane, HEOM is able to resolve differences between the two lipid environments. Isolated BChl a's absorb at 800 nm, but when they come together to form the B850 ring, interchromophore electronic interactions shift the 800 nm absorption peak to 850 nm <cit.>. Therefore, the observed shift in the B850 absorption spectrum between POPC membrane and detergent likely arises from stronger interchromophore couplings and consequently increased delocalisation of excitons in the membrane. In DOPC, the reduced redshift compared to POPC is possibly related to the intradimer B850 couplings being weaker than in detergent while the interdimer couplings are stronger. The redshift of the B850 band reduces the spectral overlap of the B800 fluorescence and B850 absorption bands, which would imply slower B800 to B850 energy transfer times in the membrane. Since measured energy transfer rates are larger in the membrane, this signals that the increased delocalisation of the B850 excitons compensates for the slightly reduced overlap.
Of the B850 excitons, the dark states show the greatest increase in delocalisation in the membrane, hence playing an important role in the energy transfer.

§.§ B800 to B850 transfer rate distribution

The B800 to B850 energy transfer rate was calculated for membrane-embedded LH2 and detergent-isolated LH2 using GFT (Eq. (<ref>)). 10,000 realisations of static disorder were used for each environment, and the distribution of the transfer rate over these realisations is shown in Figure <ref>(a). In qualitative agreement with experimental work, the B800 to B850 transfer rate in POPC membrane LH2 has a faster average of 1.34 ps^-1 (746 fs) compared to the rate in detergent, where the average is 1.08 ps^-1 (925 fs). To corroborate the rates obtained using GFT, estimates of the B800 to B850 energy transfer rates for 2000 realisations of disorder have been computed using HEOM following the procedure outlined in the HEOM theory section, and are shown as a histogram in Figure <ref>(b). In agreement with the GFT rates, the average energy transfer is found to be faster in POPC membrane, at 1.04 ps^-1 (962 fs), compared to detergent, at 0.83 ps^-1 (1.2 ps). Average rates obtained using HEOM are faster than those determined by GFT, which is likely a result of mapping the B800 to B850 transfer process to a one-step process, while GFT considers multiple simultaneous B800 to B850 exciton transfer processes. Despite this discrepancy, there is still qualitative agreement between the exact and perturbative rates, indicating that GFT is able to capture some differences between detergent and membrane LH2. To understand the microscopic origin of the increased energy transfer rate in membrane LH2, we have computed the B800 to B850 energy transfer rate for LH2 embedded in a membrane composed of a different lipid species, 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) <cit.>. In a DOPC membrane, the average B800 to B850 transfer rate is slower than in POPC membrane. This suggests that the lipid species can influence energy transfer rates within the LH2. There is evidence that changes in the lipid composition of the bilayer impact lateral pressure and electric field profiles, which leads to changes in the conformational equilibrium of membrane proteins <cit.>. Phospholipids in the bacterial membrane that hosts the LH2 could present lateral pressure and electric field profiles that differ from the detergent environment and that may alter the electronic structure, and therefore the function, of the LH2 <cit.>. Despite being a perturbative theory, GFT can still distinguish differences in B800 to B850 energy transfer within the LH2 in different lipid compositions. However, GFT rates become unreliable when a drastic change in environment from lipid to detergent is made. GFT predicts faster transfer in detergent than in DOPC membrane, contradicting experimental results that find faster transfer in a membrane environment. Meanwhile, HEOM-derived rates predict slower transfer in detergent, as expected from experiment. The discrepancy between the GFT and HEOM rates is likely linked to two approximations made in GFT that result in contributions from non-equilibrium and coherent effects being neglected. Firstly, GFT assumes the initial state is limited to being localised on the B800 ring and that the excitation then “hops” to a state localised on the B850 ring. HEOM instead allows for the initial state to evolve coherently from the B800 ring to the B850 ring, thus allowing for a state to be delocalised over both rings during transfer.
Secondly, GFT assumes the initial B800 state begins and remains in thermal equilibrium. The HEOM-derived rate accounts for non-equilibrium effects during transfer, as the interaction with the environment results in the initial state evolving and shifting out of equilibrium. Thus the non-equilibrium and coherent contributions to the B800 to B850 energy transfer rate are key to predicting differences in the LH2's function in detergent and membrane. The broader distribution of B800 to B850 energy transfer rates for membrane LH2 shows that the transfer rate varies more from complex to complex in an ensemble in membrane than in detergent. The larger standard deviation of the energy transfer rate in membrane implies that the energy transfer process is less precise. The negative relationship between the energy transfer rate and the standard deviation of the energy transfer rate for LH2 in different environments suggests the possibility of a speed-accuracy trade-off within the LH2 <cit.>. Trade-offs exist on a molecular level, with processes like protein synthesis prioritising speed over fidelity <cit.>. Such a trade-off is the result of the biological system possessing a trait that cannot increase without the decrease of another trait. For energy transfer in the LH2, the traits involved may be related to lipid properties determined by the lipid composition of the bacterial membrane. However, while a negative relationship is a prerequisite for a trade-off, it is not sufficient, and laboratory evolution experiments are required to identify whether a trade-off exists.

§.§.§ Dominating exciton transfer pathways

To understand the microscopic differences underlying the change in the B800 to B850 energy transfer rate from detergent to membrane, the dominating exciton energy transfer pathways were determined in each environment. We identify the important transfer pathways as those between excitons with the fastest average exciton transfer rate, as they have the greatest influence on the average B800 to B850 transfer rate. The exciton transfer rate (Eq. <ref>) was determined between all combinations of donor B800 and acceptor B850 excitons and averaged over 10,000 realisations of disorder. For each realisation, the exciton transfer rate is weighted by the thermal occupation of the donor state, in order to correctly weight its contribution to the average B800 to B850 transfer rate (Eq. (<ref>)). The important exciton energy transfer pathways for B800 to B850 transfer in each environment are given in Table <ref>. In all environments, the dominating energy transfer pathway is via dark B850 states, although the specific excitons are slightly different. This is due to differences in the spectral overlap of the excitons in each environment, which can be seen from the different relative positions of the B800 and B850 average energy levels in Figure <ref>. In the POPC membrane, the B800 levels overlap with higher-energy B850 states than in detergent. As a result, the important energy transfer pathways for B800 to B850 transfer are altered compared to detergent LH2. Despite the B800 to B850 transfer rate being slowest in DOPC membrane, the dominating pathway in DOPC membrane is faster than the pathway in detergent. We find that in detergent multiple pathways with moderate transfer rates (∼0.03 ps^-1) exist from low-energy B800 levels, while in DOPC membrane transfer from those levels is much slower (>0.01 ps^-1).
Thus GFT predicts the overall B800 to B850 rate to be faster in detergent than in DOPC due to the increased number of available pathways for an excitation to take. Previously, we showed that on average the delocalisation of B850 excitons is greater in membrane LH2. The delocalisation of the B850 excitons that are key energy acceptors in B800 to B850 energy transfer is of greater importance, as it allows us to assess whether the change in delocalisation is relevant to the change in energy transfer that we see in membrane LH2. We find that in the membrane the important B850 excitons are on average more delocalised (C_l1 = 12) than the equivalent in detergent (C_l1 = 11). This suggests that coherent dynamics in LH2 may be altered when embedded in the membrane. Further investigation would require the use of HEOM, as GFT does not provide information on coherent dynamics. The exciton transfer rate entering GFT depends on the exciton coupling strength squared and the spectral overlap between the exciton line shapes. Looking at how these properties change from detergent to membrane can help identify which specific differences in membrane contribute to an increased average transfer rate and broader distribution. The distribution of 10,000 realisations of the exciton transfer rate, the exciton coupling and the exciton spectral overlap was determined for the dominating pathway in each environment and is given in Figure <ref>(a-c), with average values listed in Table <ref>. For comparison, the same exciton properties are given in Figure <ref>(d-f) for the states D = 1 to k = -2, a pair of states that have a slow exciton transfer rate and are therefore considered non-dominating. Average values for the non-dominating excitons are given in Table <ref>. For the dominating excitons, while the average spectral overlap is comparable in all three environments, the average exciton coupling is strongest in POPC membrane. The exciton transfer rate scales with the exciton coupling such that the fastest transfer rate is between the POPC membrane donor and acceptor pair, indicating that the dominant energy transfer pathway is mostly dependent on the exciton coupling strength. The exciton coupling strength scales with the electronic coupling between nearest-neighbour B800 and B850 chromophore sites (Table <ref>), suggesting that stronger interchromophore electronic coupling is the main factor contributing to the faster energy transfer rate in membrane. Stronger electronic coupling between B800 and B850 chromophore transition dipole moments could arise from a change in the direction of the dipoles or a reduced distance between them. A comparison of the position coordinates of the B850 chromophores in each model shows that neighbouring B850 chromophores are slightly closer in the membrane LH2 model, such that stronger interchromophore electronic couplings would be expected. The exciton coupling is additionally dependent on the exciton delocalisation scaled by the electronic coupling (Eq. <ref>). Increased delocalisation of the excitons can contribute to the stronger interaction between excitons by spreading an excitation over a greater number of electronically interacting sites; however, the strength of the interaction between those sites is also important. Although the acceptor B850 exciton is delocalised similarly in DOPC membrane and in detergent, the donor B800 exciton is more delocalised and the electronic coupling between sites is stronger in DOPC membrane, resulting in a stronger exciton coupling and faster exciton transfer rate. This suggests that while the B850 ring is more delocalised than the B800 ring, the exciton delocalisation in B800 is still important for energy transfer.
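The pathway analysis described above — a rate for every donor–acceptor exciton pair, weighted by the thermal occupation of the donor — can be sketched schematically as follows. The Gaussian stand-in for the spectral overlap and all numerical values are assumptions for illustration only; they replace the GFT lineshape overlap and the couplings computed from the actual LH2 models.

```python
import numpy as np

KB_CM = 0.695          # Boltzmann constant in cm^-1 / K
T = 300.0              # temperature in K

def pathway_rates(e_donor, e_acceptor, v, sigma=200.0):
    """Schematic donor -> acceptor exciton rates: |V_DA|^2 times a normalised
    Gaussian stand-in for the spectral overlap (width sigma, arbitrary units)."""
    gap = e_donor[:, None] - e_acceptor[None, :]
    overlap = np.exp(-gap**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return np.abs(v) ** 2 * overlap

def weighted_pathways(e_donor, e_acceptor, v):
    """Boltzmann-weight each donor exciton; the largest entry of the returned
    matrix identifies the dominating donor -> acceptor channel."""
    w = np.exp(-(e_donor - e_donor.min()) / (KB_CM * T))
    w /= w.sum()
    return w[:, None] * pathway_rates(e_donor, e_acceptor, v)

# purely illustrative exciton energies (cm^-1) and a uniform coupling matrix
e_b800 = np.array([12350.0, 12420.0, 12480.0])
e_b850 = np.array([12050.0, 12300.0, 12460.0, 12700.0])
v = np.full((3, 4), 25.0)
k = weighted_pathways(e_b800, e_b850, v)
print(np.unravel_index(np.argmax(k), k.shape))   # (donor, acceptor) of the dominant pathway
```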
While the form of the distribution of the exciton coupling is similar in each environment, the distribution of the spectral overlap (Figure <ref>(c)) is where differences emerge. The spectral overlap distribution peaks sharply for the detergent but is broad and flat for POPC membrane and DOPC membrane, which contributes to a greater variation in the exciton transfer rate in membrane. At first, it may seem that this is a consequence of higher static disorder in the membrane, which produces random shifts in the relative positions of the donor and acceptor energy levels and hence results in a greater variation of the energy gap between donor and acceptor. However, the distribution of the donor-acceptor energy gap ω_αβ is similar in all three environments (Figure <ref>). Additionally, the B800 to B850 transfer rate computed using the same level of static disorder in each ring for detergent and membrane environments still produces a broader distribution and faster average rate for the POPC membrane, suggesting that higher levels of static disorder in the membrane are unlikely to be the cause of the broadened distribution (Figure <ref>). The exciton spectral overlap also depends on the exciton environmental parameters λ_αααα and g_αααα(t), which are determined by the exciton delocalisation scaled by the site environmental parameters λ_i and g_i via λ_αααα ∝ λ_i/IPR. We have computed the B800 to B850 energy transfer rate using the same environmental parameters for both membrane and detergent models and found that the broader distribution and faster rate in membrane LH2 still hold (Figure <ref>). This points to the change in electronic properties being of high importance to the changes in energy transfer in membrane LH2. For the dominant exciton transfer pathways, we see that the excitonic coupling is the key factor determining the exciton energy transfer rate. The excitonic coupling is dependent on both the electronic coupling between sites in the B800 ring and sites in the B850 ring and the delocalisation of the B800 and B850 excitons in question. A figure of merit that captures both these electronic properties is given by the inter-ring electronic coupling scaled by the geometric average delocalisation of the important B800 and B850 exciton pair, √(C_l1^B800 C_l1^B850) V_B800, α_2. While V_B800, α_2 provides information on the interactions between the rings, √(C_l1^B800 C_l1^B850) is a result of the site energies and electronic couplings within each ring. Table <ref> lists this figure of merit for the dominating pathways in each environment using both the IPR and C_l1 as a measure of delocalisation. The increase in the figure of merit correlates with the increase in the exciton transfer rate of the dominant exciton pairs from detergent to POPC membrane. It allows us to relate the change in transfer rate directly to the B800 to B850 interchromophore electronic couplings and delocalisation. For POPC membrane, although V_B800, α_2 is largest, the geometric average IPR of the B800 and B850 exciton pairs suggests that an increased delocalisation of the B850 and B800 excitons in POPC membrane also contributes to an increased exciton transfer rate.
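A minimal evaluation of this figure of merit is shown below; the C_l1 and coupling values are placeholders for illustration, not the entries of the table referenced in the text.

```python
import numpy as np

def figure_of_merit(c_l1_b800, c_l1_b850, v_inter):
    """sqrt(C_l1^B800 * C_l1^B850) * V for the dominating B800/B850 exciton pair."""
    return np.sqrt(c_l1_b800 * c_l1_b850) * v_inter

# placeholder values for detergent / DOPC / POPC, for illustration only
for name, c800, c850, v in [("detergent", 4.0, 11.0, 20.0),
                            ("DOPC membrane", 5.0, 11.0, 22.0),
                            ("POPC membrane", 5.0, 12.0, 25.0)]:
    print(name, round(figure_of_merit(c800, c850, v), 1))
```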
In the case of the non-dominant excitons, although the exciton coupling is stronger in POPC than DOPC membrane, the transfer rate is faster in DOPC. There is instead a correlation between the exciton transfer rate and the spectral overlap between the exciton line shape functions. The spectral overlap is dependent on the energy gap between the excitons and the width of the line shape, which is determined by the real part of g_α(t). g_α(t) is given by g_i(t) scaled by 1/IPR. In detergent, the B800 exciton is highly localised, with an IPR of 3, resulting in a broader line shape. Additionally, the energy gap between the donor and acceptor exciton is smallest in detergent, resulting in a greater spectral overlap. Thus we see that for the non-dominant pathways, the spectral overlap is the dominating factor in determining the exciton transfer rate. These results suggest that although there is an interplay between the exciton coupling strength and the spectral overlap when determining the exciton transfer rate, the dominating transfer pathways in B800 to B850 transfer depend strongly on the exciton coupling strength alone. Thus the real interplay is between the B800 to B850 inter-ring coupling and the delocalisation of excitons in each ring, two properties that can be traced back to the electronic properties of the LH2. Thus, changes in the electronic properties from detergent to membrane environments alter exciton energy transfer pathways, impacting the overall B800 to B850 transfer rate.

§ DISCUSSION

So far, knowledge of the structure and function of the LH2 complex has been gained mostly through investigating complexes isolated from their native environment in the photosynthetic membrane. Recent experimental studies have found that energy transfer within the complex is faster in a membrane environment that mimics the bacterial membrane <cit.>. Using two levels of theory, namely GFT and HEOM, to estimate B800 to B850 energy transfer rates, we have been able to show how faster energy transfer in membrane-embedded LH2 can be linked to changes in the electronic properties of the complex. In agreement with previous theoretical studies, we have identified that the dominating pathway an excitation takes from the B800 to the B850 ring is via the dark B850* states, and find this to be the case in both membrane and detergent environments <cit.>. We have shown how faster energy transfer in the membrane is the result of the increased delocalisation and stronger coupling of the excitons involved in the dominating pathways. Signatures of stronger electronic coupling in the B850 ring are additionally present in both experimental and theoretical linear spectra, which show a red shift in the B850 absorption, a change characteristic of stronger interchromophore electronic couplings <cit.>. Finally, we find a broader distribution of B800 to B850 energy transfer rates for an ensemble of 10,000 LH2s and suggest that a biological trade-off may be present that allows the LH2 to achieve faster average energy transfer in membrane by having a broad spread of energy transfer rates. We use both GFT and HEOM to determine the average B800 to B850 energy transfer rate in each environment and find a qualitative agreement between the two approaches, indicating a faster transfer rate in membrane LH2, in agreement with experimental pump-probe measurements <cit.>. The authors of those measurements suggest that lipid bilayer properties such as the lateral pressure profile of the membrane or a hydrophobic (mis)match may be the microscopic origin of the increased energy transfer rate in membrane.
The membrane lipid bilayer provides stability to the LH2 complex through lateral pressure, which is altered when in detergent or in varying membrane lipid compositions <cit.>. To assess the importance of lipid-protein interactions for the energy transfer within the LH2, we computed the average B800 to B850 transfer rate for LH2 embedded in two different lipids, POPC and DOPC. Both GFT and HEOM rates predict slower transfer in DOPC compared to POPC, indicating that changes in the lipid composition can result in changes in energy transfer. Understanding how the energy transfer within the LH2 changes as a function of the lipid properties could reveal how the complex achieves faster rates of energy transfer in a lipid environment. Although energy transfer is expected to be slowest in detergent, as predicted by the HEOM-derived rate, GFT predicts transfer to be slowest in DOPC. This discrepancy highlights the importance of non-perturbative frameworks in order to resolve and understand the differences in energy transfer kinetics and to rationalise experimental observations. We note that the estimated transfer times with HEOM are in general longer than with GFT. These quantitative differences result in part from the fact that in the HEOM approach we map the B800 to B850 transfer process onto the kinetics of a two-state system. Rigorous approaches to extracting more accurate transfer rates from a non-perturbative framework such as HEOM remain an open problem, which goes beyond the scope of the current manuscript and will be presented elsewhere. An advantage of GFT is that the underlying excitonic properties can be studied to pinpoint changes that could result in faster B800 to B850 energy transfer. The dominating transfer pathways in the LH2 indicate that faster exciton transfer in the membrane can be linked to stronger excitonic couplings, a direct result of both stronger interchromophore electronic couplings and increased delocalisation of excitations. Differences in lipid-protein and detergent-protein interactions could alter electronic couplings via perturbations in the geometry of the chromophores, as the protein scaffold has control over the position and orientation of the chromophores <cit.>. Methods more sophisticated than the dipole-dipole approximation are used to determine all the electronic couplings in the membrane LH2 model that account for screening due to the lipid environment, such that a closer packing of B850 chromophores may not be the sole reason for the stronger electronic couplings <cit.>. Accompanying the stronger interchromophore electronic couplings is the increased delocalisation of the excitons dominating B800 to B850 energy transfer in membrane. Two-dimensional spectroscopy measurements have found quantum beating in the fluorescence signals of the LH2 from R. acidophilus, a signature of quantum coherent dynamics <cit.>. Quantum beating signals arise as a result of constructive and destructive interference between different donor-to-single-acceptor pathways over time. Such interference becomes possible when excitons are highly delocalised such that many relaxation pathways are available. Theoretical studies of model exciton systems suggest that interference is important to achieve high energy-trapping efficiency, suggesting that coherent dynamics is important for efficient light harvesting.
Thus the increased delocalisation of excitons in membrane LH2 could impact the coherent dynamics within the complex. The dark B850* states seem to be an important energy acceptor for B800 to B850 energy transfer within the LH2 in both detergent and membrane environments. Previous theoretical studies have found that B800 to B850* energy transfer occurs faster (600–800 fs) than transfer to the lower-energy bright B850 exciton states (∼1 ps) <cit.>. Energy transfer pathways in the LH2 have been probed experimentally using two-dimensional spectroscopy, but it is difficult to detect a B800 to B850* signal since the third-order nonlinear response measured is proportional to the fourth power of the transition dipole moment, which in the case of B850* is negligible <cit.>. Fidler et al. suggest that the excitons involved in this energy transfer pathway may have parallel transition dipole moments, which would also prevent their detection. Despite its elusiveness, the presence of a fast B800 to B850* energy transfer pathway could explain the additional fast decay channel found when exciting LH2 at the blue end of the B800 band in hole-burning experiments <cit.>. A similar pathway has been suggested in the LH3 complex from R. acidophilus strain 7050, a low-light variant of R. acidophilus <cit.>. The LH3 has a similar nonameric structure to the LH2 but with the B850 band blue-shifted to 820 nm, suggesting that this mechanism may be shared across different variants of the complex. Studies on artificial light harvesting systems have shown that transfer of excitation energy from bright to dark states may be used to prevent re-emission, since the dark state cannot optically decay, thus increasing the efficiency of energy transfer in the system <cit.>. The dark B850* states may play a similar role in trapping absorbed solar energy by quickly moving excitation energy out of the B800 ring, where it would otherwise relax to low-energy B800 states that have a greater transition dipole strength. By calculating the B800 to B850 energy transfer rate for 10,000 realisations of static disorder, we also resolve the heterogeneity across an ensemble of LH2 complexes and can study the form of their distribution. We find a broader distribution of energy transfer rates in membrane-embedded LH2, suggesting that the energy transfer mechanism has a lower level of accuracy in the native environment due to the greater standard deviation of the transfer rates compared to detergent. A concept used to understand the relationship between different traits in a biological system is a trade-off, which can be identified by a negative relationship between two traits. The distribution of the transfer rate in membrane and detergent LH2 suggests that the complex sacrifices accuracy in the transfer rate for speed. The traits underlying a speed-accuracy trade-off would likely be related to properties of the LH2 that change from detergent to membrane. However, identifying a trade-off would require more thorough investigation and laboratory evolution experiments. In summary, we have shown that increased energy transfer efficiency within membrane-embedded LH2 can be traced back to altered energy transfer pathways and enhanced quantum delocalisation of excitations within the complex.
Further work towards understanding the biological interactions underlying such enhancements will not only provide a deeper understanding of the function of the LH2 but will also lead towards improved theoretical tools to study similar photosynthetic complexes. Currently there is a lack of comprehensive electronic and spectral density parameters available for membrane-embedded LH2, making such an investigation challenging. Additionally, to study the LH2 in its biological environment, having a non-perturbative framework that can yield excitation transfer rates is essential. The work presented here is a first step towards addressing these challenges and uncovering the role that the biological environment plays in the efficiency of these light harvesting complexes.

§ AUTHOR CONTRIBUTIONS

AOC designed and supervised the research. CK and HÓG carried out the simulations. LC and BM provided quantum chemical insight. CK wrote the first draft. All authors analysed the data, discussed the results, and contributed to the final version of the manuscript.

§ ACKNOWLEDGMENTS

We thank Gabriela Schlau-Cohen and Charlie Nation for insightful discussions. We gratefully acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC UK) Grant EP/T517793/1 and from the Gordon and Betty Moore Foundation Grant 8820. The authors acknowledge use of the UCL Myriad High Performance Computing Facility.

§ SUPPLEMENTARY MATERIAL

§.§ Modified Redfield theory

The energy transfer rates computed with GFT use lineshape functions that are derived perturbatively, as introduced in the GFT theory section. The exciton lifetime τ_α that enters the lineshape functions is given by τ_α = (1/2 ∑_α≠β k_αβ^MR)^-1, where k_αβ^MR is the energy transfer rate from a donor exciton α to all possible acceptor excitons β, given that they are localised on the same ring as the donor exciton. Since the interchromophore electronic couplings within each ring are strong, modified Redfield theory is used to obtain the intra-ring exciton transfer rates. By assuming that the electronic states of each ring are weakly coupled to their environment, H_SB is treated as a perturbation on the dynamics within each ring. The modified Redfield energy transfer rate between two excitons in the same ring is given by <cit.> k_αβ^MR = 2Re ∫_0^∞ dt e^-iω_αβ t e^-i(λ_αα,αα + λ_ββ,ββ)t e^-g_α(t) - g_β(t) e^2g_ββ,αα + 2iλ_ββ,αα × [g̈_βα,βα(t) - (ġ_βα,ββ(t) - ġ_βα,αα(t) + 2iλ_βα,ββ)^2], where the terms have been defined in the main text (see the GFT theory section).

§.§ Propagation of the dipole operator

Numerical computation of the absorption and fluorescence expressions given by Eqs. (<ref>) and (<ref>) using HEOM theory is achieved by rewriting the auto-correlation as ⟨μ̂_p(t)μ̂_p(0)⟩_ρ̂ = Tr(μ̂_p e^ℒt[μ̂_pρ̂]), where ℒ is the HEOM generator of dynamics and ρ̂ is the reduced system density matrix. The half-sided Fourier transform is then formally calculated to give ∫_0^∞ dt μ̂_p e^ℒt[μ̂_pρ̂] e^iω t = -μ̂_p (ℒ + iω)^-1[μ̂_pρ̂]. We numerically determine x̂_p,ω = (ℒ + iω)^-1[μ̂_pρ̂] by solving the linear system (ℒ + iω)[x̂_p,ω] = μ̂_pρ̂ using the BiCGSTAB Krylov subspace method <cit.>. This method of numerically computing spectra is more efficient than numerically Fourier transforming the dynamics as a result of the sparsity of the matrix representation of ℒ.

§.§ Finding the thermal state

The fluorescence spectrum in Eq. (<ref>) is computed by performing a trace with respect to the thermal state ρ̂_th, which satisfies the property ℒρ̂_th = 0.
In order to determine the thermal state, we solve this linear system using the BiCGSTAB method <cit.>, which is supplied with an initial guess given by the Boltzmann state e^-βĤ / Tr(e^-βĤ). Doing so guarantees that the solver will not yield the trivial zero-matrix solution, which of course does not represent a physical state.

§.§ B800 to B850 transfer rate distributions
http://arxiv.org/abs/2407.12480v1
20240717110551
Improvement of analysis for relaxation of fluctuations by the use of Gaussian process regression and extrapolation method
[ "Yuma Osada", "Yukiyasu Ozeki" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
osada.yum@gmail.com Department of Engineering Science, Graduate School of Informatics and Engineering, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan Department of Engineering Science, Graduate School of Informatics and Engineering, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan

§ ABSTRACT

The nonequilibrium relaxation (NER) method, which has been used to investigate equilibrium systems via their nonequilibrium behavior, has been widely applied to various models to estimate critical temperatures and critical exponents. Although the estimation of critical temperatures has become more reliable and reproducible, that of critical exponents raises concerns about the method's reliability. Therefore, we propose a more reliable and reproducible approach using Gaussian process regression. In addition, the present approach provides statistical error estimates through the bootstrap method, combined with the extrapolation method. Our estimation for the two-dimensional Ising model yielded β = 0.12504(6), γ = 1.7505(10), and ν = 1.0003(6), consistent with the exact values. The value z = 2.1669(9) is reliable because of the high accuracy of these exponents. We also obtained the critical exponents for the three-dimensional Ising model and found that they are close to those reported in a previous study. Thus, for systems undergoing second-order transitions, our approach improves the accuracy, reliability, and reproducibility of the NER analysis. Because the proposed approach requires only the relaxation of some observables from Monte Carlo simulations, its simplicity imparts it with significant potential.

Improvement of analysis for relaxation of fluctuations by the use of Gaussian process regression and extrapolation method Yukiyasu Ozeki July 22, 2024 =========================================================================================================================

§ INTRODUCTION

Critical exponents are essential for understanding critical universality. Various methods for estimating critical exponents have been developed, including nonequilibrium relaxation (NER) analysis, <cit.> finite-size scaling analysis, <cit.> conformal-bootstrap theory, <cit.> and the tensor-renormalization group method. <cit.> The NER method analyzes the properties of a system's equilibrium state on the basis of its nonequilibrium behavior. It has been applied to various systems. In particular, it has been used to analyze slow-relaxing systems, such as those undergoing critical slowing down. Examples include systems that undergo the Kosterlitz–Thouless <cit.> transition, <cit.> spin-glass systems, <cit.> and fully frustrated systems. <cit.> In addition, the scheme of the NER method has been applied to a transition in the percolation model. <cit.> Thus, the NER analysis is applicable to numerous models and contributes to research on phase transitions and critical phenomena. The difficulty in the NER analysis of fluctuations lies in differentiating the discrete, noisy values obtained from a simulation. In the NER analysis of fluctuations, critical exponents are derived from the slope of the data values.
For the NER method, in contrast to the well-established dynamical scaling analysis for critical temperatures, <cit.> a systematic analysis to determine critical exponents has not yet been developed. There are two primary issues with the conventional method used in this analysis. The first issue is the unstable nature of the slopes produced by the conventional method. Simple numerical differentiation is ineffective because of the noise present in the data. For the conventional method, efforts have been made to reduce the effect of noise by using a linear approximation to determine the slope. However, the instability still needs to be addressed and the reliability of the analysis must be determined. The second issue is the difficulty in determining the convergence behavior from discontinuous and non-monotonic values when extrapolating to estimate critical exponents. In the conventional method, the form of the slope is assumed to be a_1(1/t)^a_2 + a_3, which emphasizes the short-time behavior. For example, the interval t = [1, 10] corresponds to 1/t = [0.1, 1], which is 90% of the interval 1/t = [0, 1] (t = [1, ∞]). These drawbacks undermine the reliability of extrapolations to the thermodynamic limit as t →∞. To overcome these issues, we use Gaussian process regression to obtain a continuous slope. Selecting an appropriate extrapolation method based on this continuous slope is critical. This work aims to enhance the reliability and reproducibility of analyses by offering a systematic approach to the NER analysis of fluctuations. We demonstrate the utility of the present method using two systems. In the two-dimensional square Ising model, we validate our approach by comparing the critical exponents obtained at the exact critical temperature with their exact values. In the three-dimensional cubic Ising model, where the exact critical temperature remains unknown, we demonstrate the applicability of the present method near the critical temperature through comparisons with previous studies. The remainder of this paper is organized as follows: In Sect. II, we explain the NER method of fluctuations. In Sect. III, we propose an improved analysis and validate our approach at the exact critical temperature using the two-dimensional Ising model. In Sect. IV, we apply the present method to the three-dimensional Ising model and justify it by comparing the results with those obtained using the methods reported in previous studies. In Sect. V, we summarize the present study and the proposed method.

§ ESTIMATION OF CRITICAL EXPONENTS USING NONEQUILIBRIUM RELAXATION OF FLUCTUATIONS

Let us explain how to use NER data to estimate the critical exponents β, γ, ν, and z of systems that undergo a second-order transition. In the NER analysis, we simulate a large system considered to have no finite-size effects in the observed time interval. The simulation aims to calculate numerically the relaxations of some quantities, including the order parameter m(t) and its fluctuations, which asymptotically exhibit algebraic behavior with respect to time t at the critical temperature; <cit.> e.g., the relaxation of magnetization m(t) shows an algebraic behavior m(t) ∼ t^-β/(zν). To observe the asymptotic power clearly from the estimated numerical data up to a finite maximum time, the local exponent of magnetization λ_m, defined by λ_m(t) ≡ -∂log m(t)/∂log t → β/(zν), is useful.
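The sensitivity of naive numerical differentiation to statistical noise, which motivates the regression approach introduced below, can be seen in a short synthetic test. The exponent values are the two-dimensional Ising ones, used here only to generate a toy m(t); the noise level is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(10, 10_001, dtype=float)
beta, nu, z = 0.125, 1.0, 2.17                                         # 2D Ising values, for illustration
m = t ** (-beta / (z * nu)) * (1.0 + 1e-3 * rng.normal(size=t.size))   # "noisy" m(t)

# local exponent lambda_m(t) = -d log m / d log t by centred finite differences
log_t, log_m = np.log(t), np.log(m)
lam = -(log_m[2:] - log_m[:-2]) / (log_t[2:] - log_t[:-2])

print(lam[:5])            # even 0.1% noise makes the pointwise slope fluctuate wildly
print(beta / (z * nu))    # target asymptotic value, ~0.0576
```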
The relaxation of fluctuations, which asymptotically exhibits algebraic behavior with respect to time t at the critical temperature, is defined as χ(t) ≡ ⟨m(t)^2⟩ - ⟨m(t)⟩^2 ∼ t^γ/(zν), (t) ≡ ⟨m(t)e(t)⟩ - ⟨m(t)⟩⟨e(t)⟩ ∼ t^(1-β)/(zν), where ⟨·⟩ denotes the dynamical average and e(t) represents the internal energy per site. Note that we use variance fluctuations in the present paper instead of the dimensionless fluctuations that have been used in previous NER studies, <cit.> because they enable easier calculation of the errors of the fluctuations. The present method can be used for either variance fluctuations or dimensionless fluctuations, and the accuracy does not appear to change for either. Their local exponents are also defined as λ_χ(t) ≡ ∂logχ(t)/∂log t → γ/(zν), λ_(t) ≡ ∂log(t)/∂log t → (1-β)/(zν). From combinations of these local exponents, we can derive functions that asymptotically approach the critical exponents: β = lim_t→∞ β(t) = lim_t→∞ λ_m(t)/(λ_(t) + λ_m(t)), γ = lim_t→∞ γ(t) = lim_t→∞ λ_χ(t)/(λ_(t) + λ_m(t)), ν = lim_t→∞ ν(t) = lim_t→∞ (2λ_m(t) + λ_χ(t))/(d(λ_(t) + λ_m(t))), z = lim_t→∞ z(t) = lim_t→∞ 1/(ν(λ_(t) + λ_m(t))), where d denotes the dimension of the system. Consequently, we can estimate critical exponents by simulating m(t), χ(t), and (t) at the critical temperature, differentiating them on a double-logarithmic scale, and extrapolating their combinations.

§ IMPROVEMENT THROUGH GAUSSIAN PROCESS REGRESSION

§.§ Using Gaussian process regression

Let us first explain how to obtain the local exponents in <ref> by differentiating values on a double-logarithmic scale. In contrast to numerical differentiation, Gaussian process regression enables us to obtain analytic and continuous derivatives. <cit.> We here briefly explain this process. We aim to obtain the regression function for data points (X_i, Y_i, E_i), where E_i represents the error in Y_i for i = 1, …, N. We maximize the log-likelihood function by optimizing the hyperparameters θ. The log-likelihood function for a Gaussian process is defined as log L(θ) = -1/2 log|Σ(θ)| - 1/2 Y^⊤Σ(θ)^-1 Y - (N/2) log(2π), where Σ(θ) is the N×N variance–covariance matrix and |Σ| denotes the determinant of Σ. Element (i, j) of Σ is defined by Σ_ij(θ) = E_i^2 δ_ij + K(X_i, X_j, θ), where K(X_i, X_j, θ) is a kernel function. In Gaussian process regression, assuming all data points obey a multivariate Gaussian distribution, we can predict new points assumed to follow that distribution. Specifically, we can predict Y at X with the optimized hyperparameters θ̂ by Ŷ(X) = k^⊤Σ(θ̂)^-1 Y, where k = (k_i) and k_i(X) = K(X_i, X, θ̂). We can analytically obtain the derivative of Ŷ by ∂Ŷ(X)/∂X = (∂k/∂X)^⊤Σ(θ̂)^-1 Y. In the following discussion, we demonstrate how to estimate critical exponents using the two-dimensional square Ising model. The dynamical order parameter for this model, the magnetization m(t), is calculated as m(t) = N^-1∑_i s_i(t), starting from the all-aligned state. We analyze data pairs t_i versus y_i (= 1/m_i, χ_i, or _i) to apply Gaussian process regression for differentiation. In this paper, we use a composite kernel function consisting of a Gaussian kernel and a constant kernel, represented by K(X_i, X_j, θ) = θ_1^2 exp(-(X_i - X_j)^2/(2θ_2^2)) + θ_3^2, where θ_1, θ_2, and θ_3 are hyperparameters. This kernel is commonly used in Gaussian process regression. The Gaussian kernel function ensures smoothness and locality, whereas the constant kernel contributes to the global behavior. (Although we initially used the polynomial kernel function under the assumption that local exponents are monotonic, we realized that they are not monotonic after applying this approach.)
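A minimal numpy/scipy sketch of this regression step is given below. The optimizer, its initial values, and the toy data are assumptions for illustration; the predictive mean and its analytic derivative follow the expressions above.

```python
import numpy as np
from scipy.optimize import minimize

def kernel(x1, x2, th):
    """Gaussian plus constant kernel, K = th1^2 exp(-(x - x')^2 / (2 th2^2)) + th3^2."""
    d = x1[:, None] - x2[None, :]
    return th[0] ** 2 * np.exp(-d ** 2 / (2.0 * th[1] ** 2)) + th[2] ** 2

def neg_log_likelihood(th, X, Y, E):
    S = kernel(X, X, th) + np.diag(E ** 2)
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + Y @ np.linalg.solve(S, Y) + len(Y) * np.log(2.0 * np.pi))

def fit_and_predict(X, Y, E, x_star, th0=(1.0, 0.3, 0.1)):
    """Optimise the hyperparameters, then return Y_hat(x*) and dY_hat/dx*."""
    th = minimize(neg_log_likelihood, th0, args=(X, Y, E), method="Nelder-Mead").x
    alpha = np.linalg.solve(kernel(X, X, th) + np.diag(E ** 2), Y)
    k_star = kernel(x_star, X, th)
    dk = (th[0] ** 2 * np.exp(-(x_star[:, None] - X[None, :]) ** 2 / (2.0 * th[1] ** 2))
          * (X[None, :] - x_star[:, None]) / th[1] ** 2)   # derivative of the Gaussian part
    return k_star @ alpha, dk @ alpha

# toy check on smooth synthetic data: Y = 0.5 X^2, so dY/dX = X
rng = np.random.default_rng(0)
X = np.linspace(0.1, 1.0, 30)
Y = 0.5 * X ** 2 + 1e-3 * rng.normal(size=X.size)
E = np.full_like(X, 1e-3)
y_hat, dy_hat = fit_and_predict(X, Y, E, np.array([0.5]))
print(y_hat, dy_hat)      # expected to be close to 0.125 and 0.5
```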
For the stability of the regression, we convert the obtained data as X_i ≡ 1/(log(t_i) + c_x), Y_i ≡ 1/(c_y1 log(y_i) + c_y2), E_i ≡ c_y1 e_y_i/(y_i (c_y1 log(y_i) + c_y2)^2), with c_x ≡ 1 - min_i log(t_i), c_y1 ≡ (max_i log(t_i) - min_i log(t_i))/(max_i log(y_i) - min_i log(y_i)), c_y2 ≡ 1 - c_y1 min_i log(y_i), where e_y_i represents the error in y_i. Note that the condition 0 ≤ X_i, Y_i ≤ 1 is satisfied, corresponding to normalization in machine learning. This conversion remains invariant even if y_i is multiplied by a positive constant value, and it makes X_i and Y_i dimensionless quantities. In the thermodynamic limit, y →∞ as t →∞ must be observed at the critical temperature for each case of y = 1/m, χ, . Therefore, we include the data point (X_i, Y_i, E_i) = (0, 0, 0) for the regressions of each case. The local exponent at X = 1/(log(t) + c_x) is represented by <ref> as λ_y(t) = ∂log(y)/∂log(t) = (∂log(y)/∂Y)(∂X/∂log(t))(∂Y/∂X) = (X^2/(c_y1 Y^2)) ∂Y/∂X = (X^2/(c_y1 Ŷ(X)^2)) ∂Ŷ(X)/∂X, with ∂X/∂log(t) = -1/(log(t) + c_x)^2 = -X^2, ∂Y/∂log(y) = -c_y1/(c_y1 log(y) + c_y2)^2 = -c_y1 Y^2. Note that, in contrast to numerical differentiation, the present method enables us to obtain Y and ∂Y/∂X at t as Ŷ and ∂Ŷ/∂X, respectively. Therefore, we can predict the local exponents β(t), γ(t), ν(t), and z(t) from <ref>. Although we might be able to compute the exponent α(t) directly from the relaxation of the specific heat, <cit.> we did not calculate it in the present work because of its slow convergence. Estimating α(t) with the same accuracy as the other critical exponents in the same observation time has long been considered difficult. Of course, we can calculate α using the scaling relation.

§.§ Demonstration for the two-dimensional Ising model

Hereafter, measured temperatures are reported in units of J/k_B. We conducted simulations using the Metropolis algorithm on a 501 × 500 square lattice with skew boundary conditions at the critical temperature T = T_c = 2.26918531421. An observation consists of 10^4 Monte Carlo steps (MCSs), with statistical averaging over 10,137,600 independent samples. Initially, in NER analysis, we examine the size dependence. The overlapping error bars shown in <ref> indicate that the finite-size effect is negligible on a 501 × 500 lattice up to 10^4 MCSs. Thus, we applied the present method using simulations with the 501 × 500 lattice to estimate the local exponents. For the regression analysis, we extracted 100 data points at equal intervals of log(t) from the simulation data, ranging from t = 10 to t = 10,000. The results of the Gaussian process regression based on <ref> are shown in <ref>. <Ref> display the local exponents in the observed time interval, as estimated using <ref>. We plotted N = 1003 data points at equal intervals of t in these figures. In contrast to numerical differentiation, our approach predicts derivatives at specific times. Note that we predict and use values in the observed time interval 10 ≤ t ≤ 10,000 because predictions outside this interval are unstable. If we use an interval around (X, Y) = (0, 0) for the interpolation by regression, the value of X^2/Ŷ^2 in <ref> close to that interval is also unstable because of the small values in the fraction. Finally, we apply the extrapolation method, for systematicity and reproducibility, to estimate the critical exponents as t →∞.
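The variable transformation and the local-exponent formula above translate directly into code; a sketch, assuming the Gaussian-process prediction Ŷ(X) and its derivative are supplied (for example by the regression sketch in the previous section):

```python
import numpy as np

def transform(t, y, e_y):
    """Map (t_i, y_i, e_i) to (X_i, Y_i, E_i) so that 0 <= X_i, Y_i <= 1,
    returning also the constants c_x and c_y1 needed later."""
    lt, ly = np.log(t), np.log(y)
    c_x = 1.0 - lt.min()
    c_y1 = (lt.max() - lt.min()) / (ly.max() - ly.min())
    c_y2 = 1.0 - c_y1 * ly.min()
    X = 1.0 / (lt + c_x)
    Y = 1.0 / (c_y1 * ly + c_y2)
    E = c_y1 * e_y / (y * (c_y1 * ly + c_y2) ** 2)
    return X, Y, E, c_x, c_y1

def with_anchor(X, Y, E):
    """Append the t -> infinity anchor point (X, Y, E) = (0, 0, 0) used in the text."""
    return np.append(X, 0.0), np.append(Y, 0.0), np.append(E, 0.0)

def local_exponent(X, y_hat, dy_dx, c_y1):
    """lambda_y(t) = X^2 / (c_y1 * Y_hat(X)^2) * dY_hat/dX at the prediction points X."""
    return X ** 2 / (c_y1 * y_hat ** 2) * dy_dx
```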
Let us briefly explain the ε-algorithm, <cit.> which is an efficient implementation of the Shanks transformation. <cit.> The Shanks transformation is used to extrapolate sequences such as v_s = V + ∑_j=1^n a_j r_j^s, s = 0, 1, …, N - 1, where v_s is the original sequence, V is the limit (lim_s→∞ v_s = V), s is the label for the data, the a_j and r_j are nonzero constants with 1 > r_1 ≥ r_2 ≥ ⋯ ≥ r_n, and N denotes the number of data points used in an extrapolation. Briefly, it extrapolates sequences that converge exponentially. Because we assume that local exponents such as β(t) also converge exponentially with respect to t, as indicated by their time evolution in <ref>, we apply the ε-algorithm to the sequences obtained from the regressions. The ε-algorithm progresses as follows: 1/(ε_s-1, k+1 - ε_s, k) + 1/(ε_s+1, k-1 - ε_s, k) = 1/(ε_s-1, k - ε_s, k) + 1/(ε_s+1, k - ε_s, k), s, k = 0, 1, …, starting with ε_s, -1 = ∞ and ε_s, 0 = v_s for s = 1, …, N. We can obtain the convergence-accelerated sequence ε_s, k by applying the transformation k times, where the length is N - 2k. In the present study, we estimate the critical exponent by taking the median of the final values in the extrapolated sequences ε_s, k for k = 0, …, (N - 1)/2, with N = 1003 data points at equal intervals of t. Specifically, we calculate the limit V using V = median(ε_N-1-2k, k, k = 0, …, (N-1)/2) as the estimator of the critical exponent. Note that we opted to use the median and N = 1003 interpolated data points to mitigate the effect of outliers, given that the actual data sequence may not perfectly adhere to <ref> because of numerical errors. Because most of the final values shown in <ref> are close to the median, we regard the median as the limit, neglecting outliers. We obtain β = 0.12507…, γ = 1.7508…, ν = 1.0004…, and z = 2.1668…, which closely approximate the exact values β = 0.125, γ = 1.75, ν = 1 and are consistent with the previously reported result z = 2.1667(5). <cit.> Because the present method operates automatically, reproducibly, and reliably, we can easily apply the bootstrap method. <cit.> We created 100 bootstrap samples by resampling 100 data points, extracted at equal intervals of log(t), 100 times. We independently applied the present method to each bootstrap sample and estimated the mean and numerical error of the critical exponents across the bootstrap samples. Consequently, we estimated the critical exponents as β = 0.12504(6), γ = 1.7505(10), and ν = 1.0003(6), which are consistent with the exact values β = 0.125, γ = 1.75, and ν = 1. The value z = 2.1669(9) is reliable because of the high accuracy of these exponents. Our estimation of the dynamical exponent z = 2.1669(9) is consistent with that reported in a previous study (z = 2.1667(5)) <cit.> and is close to that reported in another previous study (z = 2.14(2)). <cit.> We also applied the present method to the same model over various time intervals. The results are shown in <ref>. As the upper limit of the time interval increases, the estimation accuracy for the critical exponents improves and the exponents become consistent with the exact values and with the results of the previous study. <cit.> These results validate the accuracy and reliability of the present method at the critical temperature and with a sufficient upper time limit. Data from a simple simulation of the relaxation of the appropriate quantities enable us to derive highly reliable critical exponents.
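A compact implementation of the extrapolation step — Wynn's ε-algorithm followed by the median over the final accelerated values — is sketched below. The toy sequence and the filtering of non-finite entries (which can appear once the table has converged to machine precision) are implementation choices for the sketch, not part of the published procedure.

```python
import numpy as np

def wynn_epsilon(v):
    """Wynn's epsilon algorithm. Returns the even columns of the epsilon table,
    i.e. the k-times Shanks-transformed sequences of length len(v) - 2k."""
    n = len(v)
    eps_prev = np.zeros(n + 1)              # auxiliary column of zeros
    eps_curr = np.asarray(v, dtype=float)   # column 0: the original sequence
    even_cols = [eps_curr.copy()]
    for col in range(1, n):
        with np.errstate(divide="ignore", invalid="ignore"):
            eps_next = eps_prev[1:len(eps_curr)] + 1.0 / np.diff(eps_curr)
        eps_prev, eps_curr = eps_curr, eps_next
        if col % 2 == 0:
            even_cols.append(eps_curr.copy())
    return even_cols

def estimate_limit(v):
    """Median of the last entry of each accelerated sequence, as in the text."""
    finals = [col[-1] for col in wynn_epsilon(v) if col.size and np.isfinite(col[-1])]
    return float(np.median(finals))

# toy check: a sum of two geometric corrections converging to 2
s = np.arange(20)
v = 2.0 + 0.8 ** s + 0.3 * 0.5 ** s
print(estimate_limit(v))      # expected to be very close to the true limit 2
```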
§.§ Advantage over the conventional method

Because we have numerically demonstrated that the present method is reliable, we also illustrate the improvement visually. In the conventional method, we compute λ_m from a linear approximation over sections of the data. The controllable conditions are the number of averaged points and the choice of sections. Therefore, we calculate λ_m at t = (t_l + t_r)/2 as λ_m((t_l + t_r)/2) = (slope of -log(m) over [t_l, t_r]), where the data points lie at equal intervals in t, both t_l and t_r are integers, and t_r - t_l + 1 is the number of data points used in the fit. In contrast to the conventional method, the controllable condition for the present method is the time interval used for the regression. <Ref> shows a plot comparing λ_m values obtained for several controllable conditions using the conventional method with those obtained using the improved method. The data points for both methods exhibit similar trends. Although the data points for the conventional method are sensitive to the number of averaged points, those for the present method are less affected by the choice of the regression interval. This plot indicates that the present method has improved reliability compared with the conventional method and overcomes discreteness.

§ APPLICATION TO THE THREE-DIMENSIONAL ISING MODEL

Because the analysis for the two-dimensional Ising model at the exact critical temperature was successful, we applied the proposed method to the three-dimensional cubic Ising model, whose exact transition temperature is unknown. We conducted analyses of this model at the temperature T = 1/0.2216547 = 4.51152174982078. This temperature was estimated in a previous study using the pinching estimation of the NER method. <cit.> Simulations were performed on a 201 × 201 × 200 cubic lattice with skew boundary conditions at T = 4.51152174982078. Observations consisted of 10^3 MCSs, with statistical averaging over 1,244,160 independent samples. The overlapping error bars shown in <ref> indicate that the finite-size effect is negligible on a 201 × 201 × 200 lattice up to 10^3 MCSs. Similar to the above analysis, we applied the proposed method to the three-dimensional Ising model. The regressions are shown in <ref>. <Ref> display the interpolations of the local exponents, each plotting 1003 data points at equal intervals of t. Next, we perform the bootstrap analysis. We created 100 bootstrap samples by resampling 100 data points, which were extracted from the simulation data at equal intervals of log(t), 100 times. We applied the present method to each bootstrap sample independently and estimated the mean and numerical error of the critical exponents across the bootstrap samples. Consequently, we estimated the critical exponents as β = 0.3252(2), γ = 1.2376(5), ν = 0.6293(3), and z = 2.0346(3). The results are shown in <ref>. Compared with the previous NER analysis, <cit.> our results show improved accuracy by achieving a more systematic method with less influence of human bias. Our results are close to those of other studies, and they would be improved if we used a more accurate transition temperature, a larger maximum observation time, or more samples. Nonetheless, because the present method estimates critical exponents close to the values found in previous studies, we consider it validated.

§ SUMMARY AND DISCUSSION

We have improved the analysis of fluctuations in the nonequilibrium relaxation (NER) method and have applied the present method to estimate critical exponents in both the two-dimensional square Ising model and the three-dimensional cubic Ising model.
The modifications include two significant advancements. First, we introduced Gaussian process regression and transformed the physical quantities using <ref>. This transformation enables us to use the data point at (X, Y) = (0, 0) corresponding to t →∞. As a result, we can reliably estimate the local exponents β(t), γ(t), ν(t), and z(t) from simulation data. Second, under the assumption that these exponents converge as described by <ref>, we extrapolated them using the ε-algorithm, which enabled systematic and reproducible extrapolation. The present method's automation and reproducibility reduce human bias, making it easier to apply the bootstrap method and provide statistical error estimates. Because the proposed method requires only the data of the relaxation of specific quantities from the Monte Carlo simulation, its simplicity imparts it with strong potential. The results of our analysis are promising. For the two-dimensional Ising model, we obtained the critical exponents β = 0.12504(6), γ = 1.7505(10), and ν = 1.0003(6), which are consistent with the exact values. We obtained a reliable z = 2.1669(9) because of the high accuracy of these exponents; this value is consistent with that reported by Nightingale and Blöte. <cit.> These results suggest that the present method is effective at the exact critical temperature. For the three-dimensional Ising model, we obtained the critical exponents β = 0.3252(2), γ = 1.2376(5), ν = 0.6293(3), and z = 2.0346(3), which are close to the values reported in previous studies <cit.> and improve the accuracy compared with that achieved in the previous NER analysis. <cit.> These results demonstrate the versatility of the present method and its potential applicability to various models. Although the results of the present method are not entirely consistent with those of prior studies, its significant advantage lies in its applicability to systems with slow relaxations, such as fully frustrated systems and those undergoing Kosterlitz–Thouless transitions. Therefore, the present method holds promise for analyzing difficult systems and contributing to research on the universality of critical phenomena.

§ ACKNOWLEDGMENTS

The authors are grateful to Kazuaki Murayama for his valuable support and comments. The authors are grateful to the Supercomputer Center at the Institute for Solid State Physics, University of Tokyo, for use of their facilities.
http://arxiv.org/abs/2407.13252v1
20240718080728
Unveiling Structural Memorization: Structural Membership Inference Attack for Text-to-Image Diffusion Models
[ "Qiao Li", "Xiaomeng Fu", "Xi Wang", "Jin Liu", "Xingyu Gao", "Jiao Dai", "Jizhong Han" ]
cs.CV
[ "cs.CV" ]
Institute of Information Engineering, Chinese Academy of Sciences Beijing China Institute of Information Engineering, Chinese Academy of Sciences Beijing China Institute of Microelectronics, Chinese Academy of Sciences Beijing China Institute of Information Engineering, Chinese Academy of Sciences Beijing China Institute of Microelectronics, Chinese Academy of Sciences Beijing China Institute of Information Engineering, Chinese Academy of Sciences Beijing China Institute of Information Engineering, Chinese Academy of Sciences Beijing China

§ ABSTRACT

With the rapid advancements of large-scale text-to-image diffusion models, various practical applications have emerged, bringing significant convenience to society. However, model developers may misuse the unauthorized data to train diffusion models. These data are at risk of being memorized by the models, thus potentially violating citizens' privacy rights. Therefore, in order to judge whether a specific image is utilized as a member of a model's training set, Membership Inference Attack (MIA) is proposed to serve as a tool for privacy protection. Current MIA methods predominantly utilize pixel-wise comparisons as distinguishing clues, considering the pixel-level memorization characteristic of diffusion models. However, it is practically impossible for text-to-image models to memorize all the pixel-level information in massive training sets. Therefore, we move to the more advanced structure-level memorization. Observations on the diffusion process show that the structures of members are better preserved compared to those of nonmembers, indicating that diffusion models possess the capability to remember the structures of member images from training sets. Drawing on these insights, we propose a simple yet effective MIA method tailored for text-to-image diffusion models. Extensive experimental results validate the efficacy of our approach. Compared to current pixel-level baselines, our approach not only achieves state-of-the-art performance but also demonstrates remarkable robustness against various distortions.

[500]Security and privacy Privacy protections [500]Computing methodologies Computer vision

Unveiling Structural Memorization: Structural Membership Inference Attack for Text-to-Image Diffusion Models Jizhong Han July 22, 2024 ============================================================================================================

§ INTRODUCTION

In recent years, large models, especially diffusion models <cit.> have shown superior generative performance and found extensive application across various fields.
Moreover, the advent of the text-to-image diffusion models <cit.> has facilitated the creation of high-quality, diverse text-conditional images. These models have significantly propelled the advancements of Artificial Intelligence Generated Content (AIGC). Nevertheless, the wide adoption of large models has raised various legal and ethical concerns, notably copyright issues <cit.>, consent <cit.> and ethics <cit.>. One of the pressing concerns is the unauthorized use of images for training models. This not only risks compromising the privacy of image owners but also poses copyright infringements, as models can realistically replicate copyrighted artworks based on training data. This is attributed to models' capacity for memorization, which means models can remember certain elements or even reproduce almost identical images from their training datasets. Under such circumstances, Membership Inference Attack (MIA) <cit.> serves as an approach to tackle the issue. Given a specific unauthorized image, the goal of MIA is to determine whether it is a member of the training set of a target model. The core of MIA is to ingeniously exploit the models' memorization of members to distinguish them from non-members. Recently, numerous Membership Inference Attack (MIA) methodologies <cit.> have been introduced for diffusion models. These methodologies, which rely on pixel-wise noise comparison, are designed to assess models' verbatim memorization of member images. However, we argue that it is practically impossible for large-scale text-to-image models to memorize all the pixel information, given that their training sets usually contain billions of images. For instance, the Stable Diffusion-v1-1 is trained on the LAION2B-en dataset, which contains around 2.32 billion text-image pairs. Hence, we attempt to capture more advanced memorization capabilities of large text-to-image diffusion models, specifically at the structure-level. To investigate the structure-level memorization, we first examine how a specific image is corrupted during the unidirectional diffusion process for better comprehension of image structural variations, and then explore whether this correlates with models' memorization. As illustrated in Figure <ref>, we iteratively employ noise to corrupt a specific image throughout the diffusion process. We then select various pairs of noisy images and compute the residuals between each pair. These residuals capture the change in image's corrupted parts. Our key observation is that diffusion models tend to corrupt the detailed features within the image in the initial diffusion stages, whereas the image structure is mostly preserved. Following this, the corruption extends to the overall structure of the image in the later diffusion stages. For instance, in Figure <ref>, the model primarily focuses on the detailed patterns of the hat in the very early stages. As the diffusion progresses, it then begins to address the structural aspects of the cat's fur. As for textual prompts in the text-to-image models, they primarily influence the overall structure and context of images in the later stages, while having minimal impacts in the early phases. Based on these findings, we delve deeper into the differences in structural corruption between members and nonmembers. We reveal that the structures of members are better preserved than those of nonmembers in the initial diffusion stages, as the diffusion models have memorized the structures of the members during training phases. 
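To give a concrete picture of this probing procedure, the following minimal sketch simulates the iterative forward corruption and the pairwise residuals on a stand-in image; it assumes NumPy, a linear variance schedule, and an image scaled to [-1, 1], and every name in it is illustrative rather than taken from the paper.

import numpy as np

def diffuse_trajectory(x0, betas, rng):
    # Iteratively corrupt x0 with q(x_t | x_{t-1}) and keep every intermediate state.
    traj = [x0]
    x = x0
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
        traj.append(x)
    return traj

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)            # assumed linear schedule
x0 = rng.uniform(-1.0, 1.0, size=(64, 64, 3))    # stand-in for a real training or hold-out image
trajectory = diffuse_trajectory(x0, betas, rng)

# Residuals between selected pairs of noisy images show which content is newly
# corrupted in each stage (detailed features early, overall structure later).
pairs = [(0, 100), (100, 300), (300, 600), (600, 1000)]
residuals = {pair: trajectory[pair[1]] - trajectory[pair[0]] for pair in pairs}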
In light of the aforementioned observations, we introduce a straightforward yet effective MIA approach for text-to-image diffusion models by comparing the structural similarity between the original image and its corrupted version. Overall, the merits of our approach mainly contain three aspects: 1) Structural difference between members and nonmembers reveals the diffusion models' memorization at the skeletal level, which is preferred by large models. 2) Comparing differences at the image level is more robust to various distortions, particularly additional noise, than methods that rely on noise comparison. 3) Our method exhibits robustness to textual prompts, rendering it highly effective for membership inference tasks on images which lack training textual prompts in real-world scenarios. We conduct a series of comprehensive experiments on both the Latent Diffusion Model and the Stable Diffusion under varying image resolutions. These experiments demonstrate the superior performance of our proposed method. Furthermore, we evaluate the robustness of our method under a range of practical distortions. Our findings confirm the resilience of our method. In addition, we examine the effect of diverse textual inputs on the efficacy of our method, as we can not obtain the ground-truth texts of images in training. Our results confirm that our method’s performance is robust to changes in textual inputs, providing valuable insights to the practical application of MIA. We summarize the contributions of this paper as follows: * Instead of pixel-level memorization, we delve into the advanced memorization capabilities of large diffusion models at the structure-level. Furthermore, we investigate the differences in the preservation of image structures between members and nonmembers during the diffusion process. * Drawing upon our findings, we propose a straightforward yet effective MIA method for text-to-image diffusion models by comparing the structural difference, which is more robust to various distortions. * We further verify that our method exhibits robustness to variations in textual prompts, enabling its application to images lacking training textual prompts in real-world scenarios. * Experimental results show that our method substantially outperforms existing MIA methods for text-to-image diffusion models, demonstrating its effectiveness. § RELATED WORK §.§ Membership Inference Attack As proposed by Shokri <cit.>, Membership Inference Attack (MIA) aims to infer whether a specific sample is a member of a target model's training set. MIA is categorized into two main tasks: white-box attack and black-box attack. White-box attack <cit.> presumes access to the internal structure and parameters of the target model, enabling a comprehensive analysis of the model's vulnerabilities. Conversely, black-box attack <cit.> operates solely through the model's observable inputs and outputs, posing a challenging yet more realistic scenario. Primarily, MIA is specifically targeted at classification models <cit.>. Subsequently, with the rapid development of generative models, an increasing number of MIA methods have begun to explore the vulnerabilities of such models, including VAE <cit.> and GAN <cit.>. For instance, LOGAN <cit.> is the first to adopt MIA to GAN in both white-box and black-box settings. It utilizes the outputs from the discriminator for inference in white-box scenario, while training a shadow GAN model in black-box scenario. Hilprecht et al. 
<cit.> proposes the Monte Carlo score and reconstruction loss, which can be used for attacking VAE. GAN-Leaks <cit.> also uses the Monte Carlo score for attacking GAN in black-box scenario. MIA for diffusion models. Recently, several MIA methods targeting diffusion models have emerged. The Naive Loss method <cit.> and PIA <cit.> both use the training loss of diffusion models as a metric for membership inference, specifically by comparing the added noise with the predicted noise. The key difference is that Naive Loss method employs random Gaussian noise, whereas PIA utilizes the diffusion model's output at time t=0 as the noise. SecMI <cit.> compares the distance between two adjacent noisy images, which are generated through the diffusion process and the denoising process respectively. Nevertheless, these methods all rely on pixel-wise noise prediction, which are suboptimal in larger models and are vulnerable to real-world perturbations. §.§ Diffusion Models Starting from Denoising Diffusion Probabilistic Model (DDPM) <cit.>, generative diffusion models have gained significant attention in recent times and achieved remarkable breakthrough across diverse applications <cit.>. The training goal of diffusion models is to learn the reverse denoising process of gradually transforming Gaussian noise into signal. Score-based generative models train a neural network to forecast the score function, enabling the generation of samples through Langevin Dynamics <cit.>. The sampling process can either be a Markov process like DDPM, or a non-Markov process, such as DDIM <cit.>. Non-Markov process like DDIM can be used to accelerate the generating process. Except for unconditional generation from pure noise, diffusion models have also been explored for conditional generation, such as text-guided image generation. The text-to-image model <cit.> incorporates an image encoder-decoder framework to efficiently conduct the diffusion and denoising process within a latent space. The encoder compresses the input sample into a latent representation, while the decoder reconstructs the latent sample back to pixel space. Classifier guidance <cit.> and classifier-free guidance <cit.> are both proposed for high-quality image generation conditioned on various textual prompts. §.§ Prior of Diffusion Generation Process Although diffusion models have demonstrated superior generation performance, elucidating the generation process poses significant challenges. Until now, several researches have tried to explore and analyze the generation process. Choi et al <cit.> have conducted experiments on measuring the LPIPS distance of two different images under various time steps. They conclude that diffusion models learn coarse features and structures when the Signal-to-Noise Ratio (SNR) is low, whereas they learn more subtle and imperceptible features as the SNR becomes higher. Based on their observations, Wang et al. <cit.> design an encoder to provide comparatively strong conditions for the diffusion model when the SNR is below 5e^-2 in the super-resolution image generation task. Likewise, Kwon et al. <cit.> also verify that modification of the generation process in the early denoising stage can achieve larger high-level semantic changes. Furthermore, Park et al. <cit.> conduct exponential sampling to carry out an analysis of the generation process. 
They conclude that in the early denoising stage, the diffusion models establish spatial information representing semantic structure, and then widen to the regional details of the elements in the later stage. § METHOD Given an image x_0, our goal is to infer whether x_0 belongs to the training set of a diffusion model ϵ_θ. Current methods mainly leverage pixel-level memorization. We argue that for large-scale models, the memorization mechanism extends beyond the pixel level to the structure level. To demonstrate this, we first explore the structural changes throughout the diffusion process. We find that the structural information is largely maintained in the initial steps, and the members' structures are better preserved because the diffusion models have seen the structures of members during the training process (Section <ref>). Based on this observation, we design a structure-level MIA for text-to-image diffusion models (Section <ref>). The overview of our proposed method is shown in Figure <ref>. §.§ Preliminaries Text-to-Image Diffusion Models. Distinct from other traditional generative models, diffusion models contain two processes: the diffusion (forward) process and the denoising (backward) process. During the diffusion process, diffusion models iteratively introduce Gaussian noise to the original image x_0 over a total of T steps: q(x_1:T|x_0)=∏_t=1^Tq(x_t|x_t-1) where: q(x_t|x_t-1)=𝒩(x_t;√(1-β_t)x_t-1,β_t 𝐈) and the variance schedule β_1, ..., β_T is predefined. As t approaches T, β_t becomes closer to 1. During the denoising process, diffusion models generate images through multiple denoising steps starting from Gaussian noise: p(x_0:T)=p(x_T)∏_t=1^Tp_θ(x_t-1|x_t) where: p_θ(x_t-1|x_t)=𝒩(x_t-1 ;μ_θ(x_t,t),Σ_θ(x_t,t)) and Σ_θ(x_t,t) is a constant depending on β_t, while μ_θ(x_t,t) is predicted by a neural network ϵ_θ as: μ_θ(x_t,t)=1/√(α_t)(x_t-β_t/√(1-α̅_t)ϵ_θ(x_t,t)) Under this formulation, in text-to-image diffusion models, we use classifier-free guidance <cit.> to guide the image generation by textual prompts y. The degree of text influence is controlled by adopting Eq.<ref> and adjusting the unconditional guidance scale γ: ϵ̃_θ(x_t|y)=ϵ_θ(x_t|∅)+γ·(ϵ_θ(x_t|y)-ϵ_θ(x_t|∅)) DDIM Inversion. To expedite the denoising process and ensure a unique output, deterministic DDIM sampling <cit.> has been introduced, thereby enabling a skip-step strategy. For the diffusion process, a simple inversion technique, named DDIM inversion, has been proposed for DDIM sampling. The inversion process in Eq.<ref> provides a deterministic transformation between an input image and its corrupted version: x_t+1=√(α_t+1)(x_t-√(1-α_t)ϵ_θ(x_t,t)/√(α_t))+√(1-α_t+1)ϵ_θ(x_t,t) We also give more mathematical details in Supplementary Materials. §.§ Structure Evolution in Diffusion Process To better capture the structure-level memorization of diffusion models, we first explore the changes in structural information throughout the diffusion process. Prior works <cit.> show that during image generation, diffusion models focus more on imperceptible details when the noise levels are minimal, while concentrating on high-level context when faced with high noise levels. Similarly, but in finer detail, we focus specifically on the changes in structural information of both members and non-members throughout the unidirectional diffusion process. We leverage the structural similarity (SSIM) <cit.> as a metric. During the diffusion process, the original image x_0 is gradually corrupted by noise.
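To make this corruption-and-compare step concrete, the following minimal sketch corrupts an image with the deterministic DDIM inversion of the equation above and then measures SSIM between the original and its corrupted version. For simplicity it works directly in pixel space with NumPy and scikit-image (the full method applies the same primitive in the model's latent space, as described below); the noise-prediction stand-in eps_model, the linear variance schedule, and all other names are illustrative assumptions rather than details from the paper.

import numpy as np
from skimage.metrics import structural_similarity as ssim  # assumes scikit-image >= 0.19

def make_alphas_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    # Cumulative products alpha_bar_t for an assumed linear beta schedule.
    betas = np.linspace(beta_start, beta_end, T + 1)
    return np.cumprod(1.0 - betas)

def ddim_invert(x0, eps_model, alphas_bar, t_end=100, interval=50):
    # Deterministically map x_t -> x_{t+interval} following the DDIM inversion
    # equation above; eps_model(x, t) stands in for the model's noise prediction.
    x = x0
    for t in range(0, t_end, interval):
        a_t, a_next = alphas_bar[t], alphas_bar[t + interval]
        eps = eps_model(x, t)
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps
    return x

def structure_score(x0, xt):
    # SSIM between the original image (in [0, 1]) and its corrupted version;
    # higher means the structure is better preserved.
    return ssim(x0, np.clip(xt, 0.0, 1.0), data_range=1.0, channel_axis=-1)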
A lower SSIM between x_0 and its corrupted version x_t indicates greater structural loss. We first explore the decrease rate (v) of SSIM throughout the whole diffusion process for both members and nonmembers: v(t)=(SSIM(x_0, x_t+Δ t)-SSIM(x_0, x_t))/Δ t Figure <ref> (a) depicts the average decrease rate over 500 members and 500 nonmembers. The rate of decrease in SSIM between original images and their corrupted versions initially increases and then decreases. More significantly, the decrease rates for members and nonmembers exhibit distinct behaviors. Nonmembers exhibit a higher rate of decrease when the diffusion timestep t ranges from 0 to approximately 100. This suggests that, for images that have been exposed to the diffusion models during training, their structures are more apt to be maintained in the early diffusion steps compared to images that are not included in the training set. However, as the images are further corrupted, the structural information is diluted by noise. The rate of decrease in structural similarity for members is even greater than that for non-members. Given the difference in decrease rate between members and nonmembers, we further assess the average SSIM difference (ΔSSIM) between the member set D_m and the hold-out set D_h: ΔSSIM(t)=1/|X_m|∑_x_0∈ X_mSSIM(x_0,x_t)-1/|X_h|∑_x_0∈ X_hSSIM(x_0,x_t) where X_m∼D_m, X_h∼D_h. Figure <ref> (b) depicts the average SSIM difference over 500 members and 500 nonmembers. The structural similarity for the member set is larger than that for the hold-out set in the first 800 diffusion steps. Besides, the difference in structural similarity between the member set and the hold-out set gradually increases during the first 100 diffusion steps, reaching a maximum at around step 100, which serves as an important clue for separating member set images from hold-out set images. These findings offer a foundation for our proposed straightforward MIA strategy. §.§ Structure-Based Membership Inference Attack Following the intuition above, we introduce a simple yet effective membership inference attack method for text-to-image diffusion models, centered on the structural similarity between the original image and its corrupted version. As shown in Figure <ref>, we input an image x_0 into the encoder of the text-to-image diffusion model, thereby obtaining its latent representation z_0. We also adopt the BLIP <cit.> model to extract a caption from the image as the textual prompt, since in practical applications, it is difficult to obtain the training-time texts corresponding to the images. Then we follow Eq. <ref> to perform DDIM inversion on z_0 in the latent space, and get the corrupted latent z_t. Subsequently, we utilize the decoder of the text-to-image diffusion model to transform z_t back to the pixel space and get x_t. The application of the encoder and decoder in the text-to-image model enables image-level comparison, facilitating the extraction of intricate structures without noise interference. By computing the structural similarity (SSIM) between x_0 and x_t, we obtain a membership score for x_0 and predict its membership as follows: x_0= member, if SSIM(x_0, x_t) > τ nonmember, if SSIM(x_0, x_t) ≤τ That is, we consider an image to be a member of the training set of the target model θ if SSIM(x_0, x_t) is larger than a threshold τ. § EXPERIMENTS §.§ Experimental Setup Target Models and Datasets.
We utilize two prominent text-to-image diffusion models: the Latent Diffusion Model and Stable Diffusion-v1-1, trained on the LAION-400M <cit.> and LAION2B-en <cit.> datasets, respectively. We conduct experiments on the two models without further fine-tuning or other modifications. For the datasets, the LAION-400M dataset comprises 400 million text-image pairs, while LAION2B-en, a subset of LAION-5B, contains approximately 2.32 billion English text-image pairs. These datasets, crawled from the Internet, are general and diversified. Additionally, we employ the COCO2017-Val dataset, which includes 5,000 images and is commonly adopted for model evaluation. Implementation Details. For both target models, we use the 5,000 images in COCO2017-Val as the hold-out set. As for member set selection, we randomly sample 5,000 images from the LAION-400M dataset as the member set for the Latent Diffusion Model, and we randomly sample 5,000 images from the LAION2B-en dataset as the member set for Stable Diffusion-v1-1. Our experiments are conducted across two image resolutions: 256x256 pixels and 512x512 pixels. Besides, we adopt DDIM inversion (Eq. <ref>) with an interval of 50 and incorporate noise addition twice during the forward diffusion process. Evaluation Metrics. In order to evaluate the performance of our proposed method, we adopt the widely used metrics <cit.>: Attack Success Rate (ASR), Area-Under-the-ROC-curve (AUC), Precision and Recall. We also follow the metrics used in <cit.>, including the True Positive Rate (TPR) when the False Positive Rate (FPR) is 1% (TPR@1%), and the True Positive Rate when the False Positive Rate is 0.1% (TPR@0.1%). (More details about experimental setups can be found in Supplementary Materials.) §.§ Comparison to Baselines We compare our method with three current MIA methods for diffusion models, including PIA <cit.>, SecMI <cit.>, and Naive Loss <cit.>. We leave the details of baselines in the Supplementary Materials. Evaluation on Latent Diffusion Model. Table <ref> shows the results on the Latent Diffusion Model. Compared to baselines, our method exhibits remarkable performance enhancements, particularly for images with resolution 512, where it surpasses all baselines in AUC, ASR, Precision, and Recall metrics. Notably, it achieves a 14.1% increase in AUC and a 12.2% increase in ASR compared to the next best method. For images with resolution 256, our method still outperforms baselines in AUC, ASR, and Recall, albeit with a marginal decrease in Precision. The ROC curve and log-scaled ROC curve are depicted in Figure <ref>. We also consider the TPR at very low FPR, i.e., 1% and 0.1% FPR, as shown in Table <ref>. Our method consistently outperforms the baselines in all assessments, underscoring its superiority in MIA performance. Particularly, its effectiveness significantly increases for images with resolution 512. This reveals large-scale models' structure-level memorization and highlights the potential of our method for more precise MIA on high-resolution images. Evaluation on Stable Diffusion. Table <ref> shows the results on Stable Diffusion-v1-1. Our method significantly exceeds baselines in AUC, ASR, and Recall. For images with resolution 512, it shows a 15.4% improvement in AUC and a 13.2% improvement in ASR over the nearest competitor. For images with resolution 256, our approach maintains a 7.3% higher AUC and a 5.5% higher ASR than the second-best method, despite a slight 1.4% reduction in Precision.
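For reference, the following sketch shows one way to compute the headline metrics used here (AUC, ASR, and TPR at a fixed low FPR) from raw membership scores; it assumes scikit-learn, arrays of SSIM-based scores with binary membership labels, and treats ASR as the best balanced accuracy over all thresholds, which is an assumed definition rather than one stated in the paper.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def mia_metrics(scores, labels, fpr_target=0.01):
    # scores: higher = more member-like (e.g., SSIM(x_0, x_t)); labels: 1 = member, 0 = nonmember.
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    asr = float(np.max(0.5 * (tpr + (1.0 - fpr))))        # assumed ASR definition (best balanced accuracy)
    tpr_at_low_fpr = float(tpr[fpr <= fpr_target].max())  # e.g., TPR@1%FPR
    return {"AUC": auc, "ASR": asr, "TPR@{:.1%}FPR".format(fpr_target): tpr_at_low_fpr}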
Besides, the ROC curve and log-scaled ROC curve are depicted in Figure <ref>. The TPR at 1% FPR and 0.1% FPR are illustrated in Table <ref>. The results consistently demonstrate our method's ability to produce high-confidence predictions across Stable Diffusion by leveraging large-scale models' structure-level memorization. §.§ Analysis of Total Timestep and Interval Total Timestep. To evaluate the impact of the hyper-parameter total diffusion timestep T, we vary T from 50 to 800, with a fixed interval (t_i=50). For instance, setting T=200 involves adding noise from t=0 to t=200 in 50-step increments, totaling 4 queries. Experiments are conducted using the Latent Diffusion Model on images with resolutions 512 and 256. Results are shown in Table <ref>. AUC and ASR metrics remain stable between T=50 and T=200, then begin to decrease from T=300, continuing to drop with further increases in T. Notably, at T=800, AUC falls below 50%. The outcomes align with our findings outlined in Section <ref>. With total diffusion timesteps under 300, the model maintains the structural integrity of member images more effectively, distinguishing them from non-member images. As T increases over 300, noise accumulation adversely affects the structures of both member and non-member images, thus reducing the attack effectiveness. Interval. To investigate the influence of the hyper-parameter interval t_i, we fixed the total timestep at 100 and varied t_i from 1 to 100. Using the Latent Diffusion Model, we conducted experiments on images with resolutions 512 and 256. As demonstrated in Table <ref>, there is minimal variation in AUC and ASR across different t_i settings, possibly due to our method's reliance on a deterministic diffusion process that eliminates random noise in each step. Thus, changes in t_i do not significantly affect the image's structural information. Nonetheless, a smaller t_i value within the fixed total timestep implies more queries and higher computational costs. Therefore, we opt for t_i=50 as a practical compromise. §.§ Robustness Evaluation In real-world scenarios, images undergo various distortions, like noise and brightness fluctuations, during transmission. Additionally, augmentation techniques are often applied to modify training data for large-scale diffusion models, leading to discrepancies between training images and their originals. This necessitates the robustness of our method to such variations. We evaluate our method's robustness using the Latent Diffusion Model on images with resolution 512. Four degradation techniques are applied to images: * Additional noise. Salt and pepper noise, which randomly corrupts 10% of the pixels in each image, is added to images. * Rotation. Images are rotated by 10 degrees counterclockwise around the geometric midpoint. * Saturation. The saturation levels of images are adjusted, either increased or decreased by 50%, with equal probability. * Brightness. The brightness levels of images are altered, either increased or decreased by 50%, with equal probability. Results are shown in Table <ref> and Table <ref>. It is evident that our method achieves the highest results in ASR, AUC and TPR at 1% FPR across all four types of distortions. Notably, our structure-level approach exhibits exceptional resilience against additional noise, whereas other baseline methods experience a significant decline in performance. This is attributed to their reliance on noise-level comparison, which renders them vulnerable to such disturbances.
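For reproducibility, the following sketch gives one possible implementation of the four degradations above, assuming Pillow and NumPy and an RGB input; the 10% corruption ratio, 10-degree rotation, and the 50% factors follow the description above, while the function names are illustrative.

import numpy as np
from PIL import Image, ImageEnhance

def salt_and_pepper(img, ratio=0.10, seed=0):
    # Randomly set `ratio` of the pixels to black or white.
    rng = np.random.default_rng(seed)
    arr = np.array(img)
    h, w = arr.shape[:2]
    n = int(ratio * h * w)
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    arr[ys, xs] = np.where(rng.random((n, 1)) < 0.5, 0, 255)
    return Image.fromarray(arr)

def rotate_ccw(img, degrees=10):
    # PIL rotates counterclockwise around the image center.
    return img.rotate(degrees)

def jitter(img, kind="brightness", seed=0):
    # Increase or decrease saturation/brightness by 50% with equal probability.
    factor = 1.5 if np.random.default_rng(seed).random() < 0.5 else 0.5
    enhancer = ImageEnhance.Color(img) if kind == "saturation" else ImageEnhance.Brightness(img)
    return enhancer.enhance(factor)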
Collectively, these experimental results underscore the superior stability and robustness of our method in effectively handling diverse distortions. §.§ Comparison to Backward Reconstruction All the baseline MIA methods for diffusion models involve both the forward diffusion process for noise introduction, and the backward denoising process for noise prediction. On the contrary, our method only leverages the forward diffusion process. We argue that during the initial diffusion process, as the structures of nonmember images are more severely corrupted than those of members, the structural differences between members and nonmembers widen significantly. Conversely, the denoising process, which acts as the inverse of the diffusion process, reconstructs both the corrupted member images and nonmember images to their original states, which reduces the image structural differences between members and nonmembers. To validate these findings, we use the comparison between the original images and their backward-reconstructed states for MIA, and contrast it with our method. We conducted experiments using the Latent Diffusion Model on images with resolutions 512 and 256. The results are illustrated in Table <ref>. We observe that employing backward reconstruction results in a decrease in AUC and ASR by roughly 3%, regardless of the image resolutions. The TPRs at 1% and 0.1% FPR also decrease to a large extent when using backward reconstruction. This reveals the superiority of our method in utilizing the unidirectional diffusion process for MIA, compared to other bidirectional methods. §.§ The Impact of Texts on Structural Similarity To evaluate the influence of texts on our method's performance, we delve into the impact of the unconditional guidance scale γ using the Latent Diffusion Model on images with resolutions 512 and 256. We use classifier-free guidance to guide the image generation by textual prompts. The degree of textual influence is controlled by adopting Eq. <ref> and adjusting γ. Specifically, setting γ to 0 renders the model entirely unconditional, while a setting of 1 makes it fully conditional, guided solely by text, forming the basis for our experiments. As γ increases, the influence of textual information becomes increasingly pronounced. Here we vary γ from 0 to 5. Results are shown in Table <ref>. All four metrics remain virtually unchanged across varying scale values, suggesting that textual information has minimal impact on structural similarity during the initial diffusion stage. The models preserve image structures well, irrespective of the presence of textual information, when the noise levels are minimal. To further explain this result, we conduct reconstruction experiments, where we corrupt an image in the diffusion process to a certain timestep T, and then restore it in the denoising process. We compare the structural similarities between original images and their reconstructions under three conditions: captions from the BLIP model, empty prompts, and unrelated texts. One of the results is shown in Figure <ref> (a) (b) (c). Notably, at T=200 (the noise level is low), reconstructions across all types of prompts are similar. However, as T increases to 500 and 800 (the noise levels are high), variations in reconstruction outcomes become pronounced. This indicates that the textual impact on image structure is minimal at low noise levels, where models prioritize detailed features.
Conversely, when faced with higher noise levels, where models focus more on overall structure, unrelated texts significantly influence the reconstruction results, underscoring the guiding role of textual information. We also plot a trend curve depicting how structural similarity between the original and reconstructed images changes with the total diffusion step T. As shown in Figure <ref>, when T is below 300, structural similarity remains consistent across different prompts. However, as T increases, structural similarity experiences the sharpest declines with an unrelated prompt. These results suggest that the impact of text on our method is minimal, since we only assess structural similarity in the initial diffusion phase, where the noise levels are minimal. § CONCLUSION In this paper, we explore the structure-level memorization of large-scale text-to-image diffusion models. We primarily investigate the corruption of image structures throughout the diffusion process. We further demonstrate that the structures of member images in the training set are better preserved than those of nonmembers in the initial diffusion stages, since models can memorize member images' structures during training. Drawing on these insights, we introduce a novel Membership Inference Attack (MIA) method for text-to-image diffusion models to judge whether an unauthorized image is utilized for training a diffusion model. Our proposed method assesses models' structure-level memorization. We evaluate our method on state-of-the-art text-to-image diffusion models, e.g., the Latent Diffusion Model and Stable Diffusion. Experimental results show that our method achieves higher ASR, AUC, TPR @ 1% FPR and TPR @ 0.1% FPR than all baselines. Besides, our method also exhibits greater robustness against diverse distortions and maintains efficacy across different textual prompts, underscoring its applicability in real-world contexts.
http://arxiv.org/abs/2407.12383v1
20240717080428
Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models
[ "Chao Gong", "Kai Chen", "Zhipeng Wei", "Jingjing Chen", "Yu-Gang Jiang" ]
cs.CV
[ "cs.CV" ]
Shanghai Key Lab of Intell. Info. Processing, School of Computer Science, Fudan University Shanghai Collaborative Innovation Center on Intelligent Visual Computing {cgong20,chenjingjing,ygj}@fudan.edu.cn, {kchen22,zpwei21}@m.fudan.edu.cn Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models Chao Gong^1,2⋆, Kai Chen^1,2⋆, Zhipeng Wei^1,2, Jingjing Chen^1,2†, Yu-Gang Jiang^1,2 (⋆ equal contributions; † corresponding author) July 22, 2024 § ABSTRACT Text-to-image models encounter safety issues, including concerns related to copyright and Not-Safe-For-Work (NSFW) content. Although several methods have been proposed for erasing inappropriate concepts from diffusion models, they often exhibit incomplete erasure, consume a lot of computing resources, and inadvertently damage generation ability. In this work, we introduce Reliable and Efficient Concept Erasure (RECE), a novel approach that modifies the model in 3 seconds without necessitating additional fine-tuning. Specifically, RECE efficiently leverages a closed-form solution to derive new target embeddings, which are capable of regenerating erased concepts within the unlearned model. To mitigate inappropriate content potentially represented by derived embeddings, RECE further aligns them with harmless concepts in cross-attention layers. The derivation and erasure of new representation embeddings are conducted iteratively to achieve a thorough erasure of inappropriate concepts. Besides, to preserve the model's generation ability, RECE introduces an additional regularization term during the derivation process, which minimizes the impact on unrelated concepts during the erasure process. All the processes above are in closed-form, guaranteeing extremely efficient erasure in only 3 seconds. Benchmarking against previous approaches, our method achieves more efficient and thorough erasure with minor damage to original generation ability and demonstrates enhanced robustness against red-teaming tools. Code is available at <https://github.com/CharlesGong12/RECE>. WARNING: This paper contains model outputs that may be offensive. § INTRODUCTION In recent years, large-scale text-to-image (T2I) diffusion models have exhibited remarkable capability in synthesizing photo-realistic images from text prompts <cit.>. The exceptional performance of T2I diffusion models is largely due to the vast amount of training data collected from the Internet, which enables the models to imitate a wide variety of concepts. Unfortunately, such powerful models can also be misused to generate copyright infringement and Not-Safe-For-Work (NSFW) image content when conditioned on inappropriate text prompts <cit.>. In particular, the open-source release of the Stable Diffusion (SD) T2I model has made advanced image generation technology widely accessible. To alleviate this safety concern, several recent research efforts have incorporated safety mechanisms into T2I diffusion models, filtering out inappropriate training data and retraining the model <cit.>, censoring model outputs with an NSFW safety checker <cit.>, and applying classifier-free guidance to steer the generation away from inappropriate concepts <cit.>.
However, these safety mechanisms either demand expensive computational resources and time <cit.> or can be easily circumvented by malicious users due to the public availability of code and model parameters in open-source scenario <cit.>. In response to the drawbacks mentioned above, an alternative is to erase inappropriate concepts from the T2I diffusion model <cit.>. Specifically, given an inappropriate concept described in the text prompt, the pre-trained T2I diffusion model’s parameters are fine-tuned to unlearn that concept so that the associated image content cannot be generated. Compared with previous security mechanisms, concept erasure neither requires training the entire model from scratch nor can be easily circumvented even in the case of open-source code. Despite promising progress in concept erasure, there exist several issues. On the one hand, most erasure methods require a high number of iterations to fine-tune considerable amounts of parameters <cit.>, which inevitably degrades the generation capability and consumes a lot of computing resources. Only a recent work called UCE <cit.> modifies model parameters without fine-tuning using a closed-form solution, ensuring the model maintains original generation capability when erasing concepts. On the other hand, almost all methods fail to sufficiently erase inappropriate concepts, leaving them vulnerable to problematic prompts found by the red-teaming of T2I diffusion models <cit.>. This results in the unlearned model being compelled to regenerate inappropriate images. Inspired by the idea of adversarial fine-tuning, we propose a Reliable and Efficient Concept Erasure (RECE) method to address the aforementioned challenge, which continually finds new embeddings of the erased concepts during fine-tuning and then enables the unlearned model to erase these new concept embeddings. To speed up the unlearning process, the RECE method builds upon the previous fast and efficient concept erasure method UCE <cit.>, which employs a closed-form editing to only modify the key and value projection matrices in cross-attention layers <cit.>. Similarly, the RECE method derives new embeddings that most effectively prompt the model to regenerate images of erased concepts, with a closed-form solution based on cross-attention output. Furthermore, a regularization term is introduced to preserve the image generation capability of the model by restricting the deviation of model parameters before and after modification. By editing the model and deriving embedding for multiple epochs, RECE enables the unlearned model to preserve the image generation ability of unerased concepts and robustly refrains the model from generating images with erased concept content. All the processes above are in closed-form, guaranteeing extremely efficient erasure in 3 seconds. Our major contributions are summarized as follows: • We present a novel concept erasure method - RECE that uses closed-form parameter editing and adversarial learning schemes for reliable and efficient concept erasing in only 3 seconds. • RECE sufficiently erases concepts by deriving new embeddings that enable the unlearned model to regenerate erased concepts. In addition, a regularization term is introduced to minimize the impact on the model's capabilities. • We conduct extensive experiments to validate the effectiveness of RECE for erasing unsafe contents, artistic styles and object classes. 
Additionally, we assess the robustness of RECE against three red-teaming tools and record fine-tuning durations to highlight the efficiency of RECE. § RELATED WORK §.§.§ T2I Diffusion Models with Safety Mechanisms. In response to the issue of generating inappropriate images in T2I diffusion models, several studies have explored solutions to address this concern. Briefly, existing research primarily falls into the following three distinct strategies: The first is filtering the training data and retraining the model <cit.>. However, retraining on curated datasets not only requires a substantial investment of computational resources but also still results in the generation of inappropriate content <cit.> and in performance degradation <cit.>. The second is censoring model output through safety checkers <cit.>, or exploiting classifier-free guidance to steer the latent codes away from inappropriate concepts during inference <cit.>. However, in the case of open-source code, pre-trained T2I diffusion model architectures and parameters are publicly available, so such post-hoc intervention strategies can be easily circumvented by malicious users <cit.>. The third is fine-tuning part of the parameters of the pre-trained T2I diffusion models to erase the model's representation capability of inappropriate concepts <cit.>. While fine-tuning has been considered an effective strategy to prevent the generation of inappropriate content, current methods consume a lot of computing time and can be easily bypassed by red-teaming tools for T2I diffusion models. §.§.§ Red-Teaming Tools for T2I Diffusion Models. With the recent popularity of AI, red-teaming has been applied to AI models to enhance model stability by probing functional vulnerabilities <cit.>. Recent works have also developed red-teaming tools for T2I diffusion models, which is a rarely explored field in AI red-teaming. For instance, Prompting4Debugging (P4D) <cit.> automatically finds the problematic prompts that would lead to inappropriate content by utilizing prompt engineering techniques and an auxiliary diffusion model without any safety mechanisms to assess the reliability of deployed safety mechanisms. Conversely, UnlearnDiff <cit.> does not depend on an auxiliary diffusion model; it leverages the inherent classification capabilities of diffusion models, thereby providing computational efficiency without sacrificing effectiveness. Both works share the main weakness of assuming white-box access to the target model. In response to this issue, Ring-A-Bell <cit.> is proposed as a model-agnostic framework capable of constructing attacks without prior knowledge of the target model. Specifically, Ring-A-Bell first performs concept extraction to obtain a holistic representation of inappropriate concepts. Subsequently, Ring-A-Bell automatically produces problematic prompts by leveraging the extracted concepts. § METHOD §.§ Preliminaries §.§.§ Text-to-Image (T2I) Diffusion Models In contemporary Text-to-Image (T2I) applications, diffusion models have become the preferred choice <cit.> since the progressive denoising process <cit.> empowers them with superior image synthesis ability <cit.>. To reduce computational complexity, T2I often adopts latent diffusion models <cit.>, which operate on the low-dimensional latent space of a pre-trained variational autoencoder (VAE) <cit.> and employ a U-Net generative network as the denoising architecture <cit.>.
To incorporate text conditioning into the image generation process, T2I encodes text by language models like CLIP <cit.> and integrates text embeddings into U-Net through cross-attention layers. Specifically, these layers employ a Query-Key-Value (QKV) structure <cit.> to represent the interactions between text and vision. For a given text embedding c_i, keys and values are generated as k_i = W_kc_i and v_i = W_vc_i. These keys compute an attention map by multiplying with the query q_i representing visual features, and then the cross-attention output is computed by attending over values v_i: 𝒪∝softmax(q_ik_i^T)v_i. §.§.§ Concept Erasing with Closed-form Solution There are existing erasure methods requiring fine-tuning, such as ESD, CA and SA <cit.>. However, such methods are relatively inefficient as they require thousands of fine-tuning steps. In contrast, UCE <cit.> is an efficient method which modifies the attention weights through a closed-form edit. UCE requires a "source" concept (e.g., "nudity") and a "destination" concept (e.g., the empty text " "). Let c_i represent the source embedding, c_i^* denote the corresponding destination embedding, set E denote concepts to erase, and set P denote concepts to preserve. Given a K/V projection matrix W^old (a concise notation for W_k^old and W_v^old), UCE seeks new weights W by editing concepts in E while preserving concepts in P. Specifically, the objective is to find weights such that the output Wc_i for c_i∈ E aligns with target values W^old c_i^* instead of the original W^old c_i. Meanwhile, to control parameter changes, outputs for c_j∈ P are preserved as W^oldc_j and an L2 regularization term is introduced: min_W∑_c_i∈ E||Wc_i-W^oldc_i^*||_2^2+λ_1∑_c_j∈ P||Wc_j-W^oldc_j||_2^2 +λ_2||W-W^old||_F^2, where λ_1 and λ_2 are scaling factors preserving the existing concepts. UCE <cit.> proves that this formula has a closed-form solution: W=W^old(∑_c_i∈ Ec_i^*c_i^T+λ_1∑_c_j∈ Pc_jc_j^T+λ_2 I)(∑_c_i∈ Ec_ic_i^T+λ_1∑_c_j∈ Pc_jc_j^T+λ_2 I)^-1. UCE directly assigns cross-attention KV matrices using the closed-form solution, eliminating the need for fine-tuning. This makes UCE significantly faster; hence we use UCE in our method. §.§ Reliable and Efficient Concept Erasure (RECE) While UCE <cit.> offers a fast solution for removing undesired concepts from T2I diffusion models, it can still produce undesired content, as illustrated in <ref>. This suggests an incomplete erasure of these concepts. To effectively eliminate such undesired concepts, we efficiently erase closed-form embeddings capable of regenerating erased concepts within the unlearned model. The derivation and erasure of embeddings are conducted iteratively to achieve a thorough erasure of inappropriate concepts, as shown in <ref>. [Figure: the columns show SD given the prompt "nudity", UCE given the prompt "nudity", UCE given our derived embedding, and UCE given our regularized embedding. When given the input prompt "nudity", SD generates images containing nudity content, while UCE generates unrelated images. When given our derived embedding and regularized embedding, UCE generates nude images again. Black bars are used for publication purposes.] §.§.§ Finding Target Contents Let us take "nudity" as an example.
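To make the closed-form edit above concrete, the following minimal sketch applies it to a single K or V projection matrix, taking "nudity" as the source concept and the empty text as the destination; it assumes pre-computed text embeddings and NumPy, and every function and variable name is chosen for illustration rather than taken from the UCE or RECE code.

import numpy as np

def uce_closed_form_edit(W_old, erase_pairs, preserve, lam1=0.1, lam2=0.1):
    # Closed-form UCE edit of one K or V projection matrix.
    # W_old:       (d_out, d_text) projection matrix.
    # erase_pairs: list of (c_i, c_i_star) source/destination embeddings, each of size d_text.
    # preserve:    list of embeddings c_j whose outputs should be kept.
    d = W_old.shape[1]
    num = lam2 * np.eye(d)   # sum c_i* c_i^T + lam1 * sum c_j c_j^T + lam2 * I
    den = lam2 * np.eye(d)   # sum c_i  c_i^T + lam1 * sum c_j c_j^T + lam2 * I
    for c_i, c_star in erase_pairs:
        num += np.outer(c_star, c_i)
        den += np.outer(c_i, c_i)
    for c_j in preserve:
        num += lam1 * np.outer(c_j, c_j)
        den += lam1 * np.outer(c_j, c_j)
    return W_old @ num @ np.linalg.inv(den)

# Illustrative usage with random stand-ins for the text embeddings of "nudity"
# (source) and " " (destination); real embeddings would come from the text encoder.
rng = np.random.default_rng(0)
d_text, d_out = 768, 320
W_old = rng.standard_normal((d_out, d_text))
c_nudity, c_empty = rng.standard_normal(d_text), rng.standard_normal(d_text)
W_new = uce_closed_form_edit(W_old, [(c_nudity, c_empty)], preserve=[])

In the actual model, this edit would be applied independently to every cross-attention key and value projection matrix.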
As depicted in the second column of <ref>, when directly providing the input prompt "nudity" to UCE models, only landscape or unrelated images are generated. This is because the word "nudity" has been aligned with the empty text " ". However, the erasure of UCE is incomplete. We can generate an adversarial prompt that enables UCE's model to produce images containing nudity content again, similar to those generated by SD when provided with the prompt "nudity". In this section, we introduce our method for deriving the new embedding in UCE's model, which guides UCE to generate nude images. As elaborated in <ref>, T2I introduces text embeddings into image generation through cross-attention layers, where the projection matrices W_k and W_v are used to transform text embeddings. Let W^old denote the projection matrices of the original U-Net before UCE's editing, W^new represent the projection matrices after UCE's editing, c denote the embedding of "nudity", and c^' signify our derived embedding. If we can find a c^' such that W^newc^' closely resembles W^oldc, then c^' can guide the edited model to generate nude images like how c guides the original model. More precisely, the objective function is formulated as follows: min_c^'∑_iW_i^newc^'-W_i^oldc_2^2, where W_i denotes K/V cross-attention matrices of U-Net. The solution c^' derived from <ref> can be viewed as the actual representation of c within the edited model. Evidently, <ref> represents a convex function with respect to c^', which possesses a unique global minimum. As derived in Appendix A, <ref> admits a closed-form solution: c^'=(∑_iW_i^new^TW_i^new)^-1(∑_iW_i^new^TW_i^old)c. Given that text conditioning works in the form of embedding in Stable Diffusion (SD), we can use our derived embedding as text conditioning. As illustrated in the third column of <ref>, this derived embedding effectively guides UCE's edited model to once again generate nude images, indicating its capacity to represent the concept of "nudity" within edited model. Thus it demonstrates that the erasure process of UCE remains incomplete. To address this issue, we further remove our derived embeddings c^' from UCE's model with <ref> to prevent the generation of nude images. §.§.§ Regularization Term We can further erase the concept of "nudity" by substituting c in <ref> with our derived embedding c^'. However, upon directly erasing c^', we observe a significant decline in the model's performance: it struggles to generate high-quality images for unrelated concepts as shown in <ref>. Hence, it becomes imperative to devise a method that retains the model's performance while erasing concepts. Let W^new1 denote the projection matrices after the last epoch's modification, and W^new2 denote the projection matrices after the current epoch. Partly, preserving the model's performance entails minimizing the impact on unrelated concepts. Consequently, we define our objective function as follows: min_W W^new2d-W^new1d_2^2, where d represents an unrelated concept's embedding. Note that W^new2 is directly influenced by c^', as it is derived after erasing c^' from W^new1. We can obtain a theorem about our objective and its proof is provided in Appendix B: If c^' is set to 0, <ref> achieves its global minimum of 0. Intuitively, if c_i in <ref> is set to zero, the coefficient matrices on the right side will become an identity matrix when multiplied. 
As a result, W will revert to being equivalent to W^old, exerting minimal influence on unrelated concepts due to the absence of further modifications to W. Therefore, we need to introduce a regularization term to the original objective <ref> to ensure that the obtained c^' is close to zero, thereby minimizing its influence on the model's performance: min_c^'∑_iW_i^newc^'-W_i^oldc_2^2+λc^'_2^2. As derived in Appendix A, this final objective function also possesses a unique global minimum solution: c^'=(λ I+∑_iW_i^new^TW_i^new)^-1(∑_iW_i^new^TW_i^old)c As illustrated in the fourth column of <ref>, this regularized embedding can also guide UCE's model to once again generate nude images, indicating its ability to represent the concept "nudity" within UCE's model. With the incorporation of our regularization term, we iteratively apply the erasure process to the refined embedding c^' using <ref> over multiple epochs. This ensures thorough concept erasure while safeguarding the overall performance of the model. The algorithm details are elaborated in <ref>. § EXPERIMENTS In this section, we present the results of our method for erasing inappropriate concepts and artistic styles. We also include the results of object removal in Appendix. We start with SD V1.4 as our base model. Following the implementation in <cit.>, we set λ_1 and λ_2 in <ref> to 0.1. For inappropriate concepts, we perform iterative erasure for 5 epochs and set λ in <ref> to 1e-1. For artistic style, we conduct erasure for 10 epochs and set λ to 1e-3. The baselines we will compare with are: SD V1.4 <cit.>, SD V2.1 <cit.>(Stable Diffusion pretrained on an NSFW filtered dataset), SLD <cit.>, ESD <cit.>, CA <cit.>, SA <cit.>, UCE <cit.>. As for SLD, ESD, SA and UCE, we adhere to the recommended configuration in their papers <cit.>. For CA <cit.>, we fine-tune the full weights of U-Net to erase unsafe contents and the cross-attention module to erase artistic styles, according to its documentation[<https://github.com/nupurkmr9/concept-ablation>]. §.§ Unsafe Content Removal §.§.§ Experimental Setup In this section, we assess the effectiveness of erasing unsafe concepts. We conduct experiments on the Inappropriate Image Prompts (I2P) dataset <cit.>. The I2P dataset includes various inappropriate prompts, such as violence, self-harm, sexual content, and shocking content. These prompts are collected from real-world, user-generated images based on the official SD. Our evaluation focuses on the erasure of nudity since it is a classical unsafe concept. For each model, we generate one image per prompt in the I2P dataset, resulting in a total of 4703 images. Nude body parts are detected using the Nudenet detector <cit.>, with the threshold set to 0.6. This threshold follows the default settings in I2P[<https://github.com/ml-research/i2p>]. To verify that the unlearned models can still generate normal images, we use COCO-30k <cit.> with its captions as prompts. COCO-30k is a dataset devoid of unsafe concepts, making it suitable for evaluating edited models' generation capabilities. We evaluate the models' image-text consistency based on CLIP-score <cit.>, and visual similarity against SD-generated images based on FID <cit.>. §.§.§ Removal Results As depicted in <ref>, our method yields the lowest number of nude body parts, while demonstrating impressive specificity in preserving normal content of COCO-30k. 
CA generates the second-fewest nude body parts but it exhibits poorer performance in terms of FID, indicating a poorer trade-off between generation ability and removal effectiveness. On the other hand, CA fine-tunes full weights and our RECE only fine-tunes cross-attention modules, which will be discussed in detail in <ref>. Notably, our method achieves a FID score closely comparable to the top-performing UCE and the second-best SLD, both of which generates a considerably higher number of nude body parts. This suggests that our method minimally impacts the generation of normal content while striving for better removal effectiveness. Additionally, most methods exhibits favorable CLIP-score results thus we consider the CLIP-score performance acceptable as long as it remains within a reasonable range. In open-sourced conditions, the inference guidance mechanism such as SLD <cit.> can be easily bypassed by deleting the corresponding code <cit.>. Large-scale model retraining on NSFW-filtered dataset demands considerable computational resources <cit.>, and even then, the model SD v2.1 may still generate nude images, as illustrated in <ref>. Qualitative results are illustrated in <ref>. In the first row, all erasing methods successfully generate non-nude images, but the results of CA and SA differ significantly from the original SD's. In the second row, all methods except for SD v2.1 avoid generating nude images. Among these, only UCE, SLD, and our method effectively capture the facial features. These findings demonstrate that our method effectively maintains unrelated concepts. §.§.§ Nudity Bias While our RECE demonstrates remarkable effectiveness in minimizing the generation of nude content, it exhibits limitations in erasing male nudity and similar limitations are also observed in other methods. We count the number of female and male nude body parts in the 4703 I2P images. As illustrated in <ref>, the nudity ratios between women and men almost decrease in every method, indicating an erasure bias on female information (excluding SA, which erases more sex-related concepts besides nudity <cit.>). We attribute this limitation to the inherent bias in the target concept "nudity" within SD, which tends to generate more female-oriented content. To investigate this bias, we randomly selected 20 seeds and employed SD V1.4 to generate 3 images per seed with the prompt "nudity", resulting in a total of 60 images. Surprisingly, almost all of these generated images depict female body part as presented in the last column of <ref>. Thus our derived embedding is also biased. Further improvements require an awareness of the biases inherent in the model while performing erasure. §.§ Artistic Style Removal §.§.§ Experimental Setup We conduct an evaluation to assess the efficacy of removing artistic styles to address copyright concerns. Following the datasets in ESD <cit.>, we use 20 prompts for each of 5 famous artists—Van Gogh, Pablo Picasso, Rembrandt, Andy Warhol and Caravaggio—and 5 modern artists—Kelly McKernan, Thomas Kinkade, Tyler Edlin, Kilian Eng and the series “Ajin: Demi-Human”, which have been reported to be imitated by SD <cit.>. To evaluate our RECE and all the aforementioned baselines, we erase the style of two artists: Van Gogh and Kelly McKernan. §.§.§ Removal Results We conducted an evaluation based on LPIPS scores <cit.> compared to the original SD, as detailed in <ref>. 
LPIPS evaluates the perceptual distance between image patches, where higher values indicate greater differences and lower values indicate more similarity. The LPIPS_e is calculated on the erased artist. A higher LPIPS_e value suggests a more effective style removal, and both ESD and our method demonstrate successful erasure of the target style. LPIPS_u is calculated on unerased artists. A lower LPIPS_u indicates a lesser impact on unrelated artists, where our method and UCE effectively maintains unrelated concepts. We also calculate the overall effectiveness by LPIPS_d=LPIPS_e-LPIPS_u, which is the difference between erased and unerased artists. Our method performs best in this regard. Qualitative results can be found in Appendix. §.§ Robustness Against Red-teaming Tools §.§.§ Experimental Setup To demonstrate the robustness of our RECE in safeguarding against various attack methods, we employ different red-teaming tools, including white-box methods such as P4D <cit.> and UnlearnDiff <cit.>, and the black-box method Ring-A-Bell <cit.>. For nudity, as provided by UnlearnDiff <cit.>, we use a set of 143 prompts selected from I2P, each with a nudity score (as determined by NudeNet) above 0.75, and employ the Nudenet detector with a threshold set to 0.45 for detecting inappropriate content. §.§.§ Results The attack success rates (ASR, %) are summarized in <ref>. Our method achieves the best robustness in average. In the case of the black-box attack Ring-A-Bell, our method achieves the lowest ASR at 13.38%, significantly outperforming other methods. In the case of the white-box attack, CA achieves the best performance, while our method performs either the best or very closely to the second-best. Although SA achieves a decent black-box result, it consumes substantial computation resources for generating 5000 prepared images <cit.>. CA modifies 100% of the U-Net parameters while our method only modifies K&V projection matrices, constituting a mere 2.23% of the U-Net. While UCE also modifies only 2.23% parameters, our method significantly outperforms UCE. This is attributed to our derived embeddings in <ref>. For artistic style removal, we provide qualitative examples in <ref>. The first image is generated by the original SD v1.4 without any attack, and the following images are under Ring-A-Bell's attack <cit.>. Recall that UCE is the second-best artistic style removal method as shown in <ref> but it falls short in robustness against red-teaming tools. Although our method employs a closed-form solution like UCE, it outperforms UCE in robustness. ESD, CA and our method perform similarly well. More results can be found in the appendix. §.§ Model Editing Duration To demonstrate the efficiency of different methods, we measured the percentage of parameter modification and editing duration on an RTX 3090 for each method, as shown in <ref>. We excluded SLD from the analysis since it operates at inference time rather than modifying the model's weights which can be easily bypassed under open-source conditions. Additionally, we don't include the duration of SA, as it involves generating 5000 images, calculating the Fisher Information Matrix and fine-tuning, which makes it exceptionally slow. Based on <ref> and <ref>, our method achieves the best concept erasure effect in an extremely short time of only three seconds. Our method (5 epochs) and UCE (1 epoch) modify the lowest percentage of parameters with a closed-form solution, resulting in the shortest editing durations. 
Despite similar durations, our method significantly outperforms UCE in removal effectiveness. Conversely, CA, ESD and SA modify a high percentage of parameters and take more time, yet achieve less impressive removal results. §.§ Ablation Study We conduct experiments to elucidate the impact of our derived embedding across different epochs and the effectiveness of the regularization term. §.§.§ Effect of Derived Embeddings among Epochs [Figure: images generated from the derived "nudity" embedding before each erasure epoch (Epoch 0 to Epoch 4). Caption: There is no nudity information from epoch 3 onward; hence, we select the checkpoint after epoch 2 to avoid damaging the model's ability.] We conduct an experiment to examine the impact of our derived embedding. We perform "nudity" erasure for 5 epochs using <ref>. In each epoch, we derive a distinct embedding that represents "nudity". Before the erasure of each epoch, we generate an image using the embedding to test its degree of nudity information, as presented in <ref>. Images from epoch 0 to epoch 2 contain nude body parts, indicating that our derived embeddings successfully reveal potential nudity information in the model. Specifically, we opt for the checkpoint after epoch 2, as images from epochs 3 to 4 lack nudity information. Indeed, erasing such "not so nude" embeddings in epochs 3-4 would impair the model's normal generation ability, which is an unworthy trade-off. §.§.§ Effect of Regularization Coefficient We conduct an experiment to assess the influence of our regularization term. Specifically, we select different regularization coefficients λ in <ref>, which divide the interval [0,1] into five parts. The results, presented in <ref>, include the CLIP-score and LPIPS against the original SD on the COCO-30k validation subset. As λ increases, the CLIP-score shows an upward trend while LPIPS and the difference between the new and old parameters show a downward trend. This indicates the role of the regularization term in preserving the model's ability on unerased content. Furthermore, we recorded the number of nude parts on the I2P benchmark, presented in the third column of <ref>. However, the number of nude parts does not strictly increase as λ increases, which is counterintuitive. Although the purpose of the regularization term in <ref> is to preserve the model's generation capability, preserving this capability does not necessarily weaken the erasure effect. § CONCLUSION In this paper, we propose a novel approach for reliably and efficiently erasing specific concepts from Text-to-Image (T2I) diffusion models. Our approach only modifies the cross-attention K&V matrices of the U-Net, constituting a mere 2.23% of parameters. While previous methods also edited cross-attention modules, they still exhibited the ability to generate inappropriate images. To tackle this challenge, we derive and erase new embeddings that can represent the target concepts within unlearned models. To mitigate the impact on unrelated concepts, a regularization term is introduced during the erasure process. All the above techniques are formulated in closed form, facilitating rapid editing. This enables the execution of "derive-erase" across multiple epochs, ensuring thorough and robust erasure. Extensive experiments were conducted to validate the effectiveness of our approach in erasing artistic styles, unsafe content and object classes. 
Furthermore, we recorded editing durations to underscore the efficiency of our method and evaluated the robustness against red-teaming tools. We believe our RECE has the potential to empower T2I providers in effectively removing undesired concepts, thereby fostering the development of a safer AI community. § ACKNOWLEDGEMENTS This project was supported by National Key R&D Program of China (No. 2021ZD0112804). § APPENDIX § DERIVING THE NEW EMBEDDING In this section, we present a detailed derivation of the closed-form new embedding. Let W^old denote the projection matrices of the original U-Net before UCE's editing, W^new represent the projection matrices after UCE's editing, c denote the embedding of "nudity", and c^' signify our derived embedding. If we can find a c^' such that W^new c^' closely resembles W^old c, then c^' can guide the edited model to generate nude images just as c guides the original model. Specifically, the objective function is formulated as follows: min_c^' ∑_i ‖W_i^new c^' - W_i^old c‖_2^2 + λ‖c^'‖_2^2, where the W_i denote the K/V cross-attention projection matrices of the U-Net, λ is a hyper-parameter and ‖c^'‖_2^2 is a regularization term whose role is justified in <ref> below. The square of the 2-norm is convex, and a linear transformation preserves convexity. Therefore, <ref> is a convex function possessing a unique global minimizer c^'. This solution can be obtained by setting the gradient of <ref> to zero: ∂ L/∂ c^' = ∑_i 2 W_i^new^T (W_i^new c^' - W_i^old c) + 2λ c^' = 0, (λ I + ∑_i W_i^new^T W_i^new) c^' = ∑_i W_i^new^T W_i^old c, c^' = (λ I + ∑_i W_i^new^T W_i^new)^-1 (∑_i W_i^new^T W_i^old) c. Here we obtain the new embedding c^', which can guide the edited model to generate nude images whereas the word "nudity" cannot. This indicates that c^' serves as the true representation of c within the edited model. § PROOF OF REGULARIZATION TERM In this section, we present a detailed proof that our regularization term protects the model's ability. Let W^new1 denote the projection matrices after the last epoch's modification, W^new2 denote the projection matrices after the current epoch, and d denote an unrelated concept's embedding. Partially preserving the model's performance entails minimizing the impact on unrelated concepts as much as possible. Consequently, we define our regularization objective function as follows: min_W^new2 ‖W^new2 d - W^new1 d‖_2^2. Let c_i denote the concepts to be erased, c_j the concepts to be preserved and c^' the derived embeddings. W^new2 is influenced by c^' because it is derived by erasing c^' from W^new1. Our proof uses the submultiplicative property of the matrix norm[<https://en.wikipedia.org/wiki/Matrix_norm>]: ‖Ax‖_2 ≤ ‖A‖_F ‖x‖_2, ‖AB‖_F ≤ ‖A‖_F ‖B‖_F, where A and B are matrices and x is a vector. Now, we aim to find the minimum of <ref>: F = ‖W^new2 d - W^new1 d‖_2^2 ≤ ‖W^new2 - W^new1‖_F^2 ‖d‖_2^2. Note that in the erasing process we have: W^new2 = W^new1 (∑_i∈E c_i^* c_i^'^T + ∑_j∈P c_j c_j^T)(∑_i∈E c_i^' c_i^'^T + ∑_j∈P c_j c_j^T)^-1. Let F_1 = ‖W^new2 - W^new1‖_F^2; then, according to the submultiplicative property, we have: F_1 = ‖W^new2 - W^new1‖_F^2 = ‖W^new1 (∑_i∈E c_i^* c_i^'^T + ∑_j∈P c_j c_j^T)(∑_i∈E c_i^' c_i^'^T + ∑_j∈P c_j c_j^T)^-1 - W^new1‖_F^2 ≤ ‖W^new1‖_F^2 ‖(∑ c_i^* c_i^'^T + ∑ c_j c_j^T)(∑ c_i^' c_i^'^T + ∑ c_j c_j^T)^-1 - I‖_F^2, where c_i^* denotes the corresponding destination embedding. 
Let us define: F_2 = ‖(∑ c_i^* c_i^'^T + ∑ c_j c_j^T)(∑ c_i^' c_i^'^T + ∑ c_j c_j^T)^-1 - I‖_F^2, U = ∑ c_i^' c_i^'^T + ∑ c_j c_j^T. Then we have: F_2 = ‖(∑ c_i^* c_i^'^T + ∑ c_j c_j^T) U^-1 - U U^-1‖_F^2 ≤ ‖∑ c_i^* c_i^'^T + ∑ c_j c_j^T - U‖_F^2 ‖U^-1‖_F^2. Considering only the first term: F_3 = ‖∑ c_i^* c_i^'^T + ∑ c_j c_j^T - U‖_F^2 = ‖(∑ c_i^* c_i^'^T + ∑ c_j c_j^T) - (∑ c_i^' c_i^'^T + ∑ c_j c_j^T)‖_F^2 = ‖∑ c_i^* c_i^'^T - ∑ c_i^' c_i^'^T‖_F^2 = ‖∑ (c_i^* - c_i^') c_i^'^T‖_F^2 ≤ (∑ ‖(c_i^* - c_i^') c_i^'^T‖_F)^2. If we set c^' = c^* or 0, then F_3 achieves its minimum of 0, and consequently <ref> also reaches its minimum of 0. Note that c^* represents the destination embedding in UCE, so it is possible that we do not know the destination concept used when UCE was applied and only have the parameters after UCE's erasure. Therefore, we use 0 as the argmin. In conclusion, the closer c^' is to 0, the less it affects the model's performance. § OBJECT REMOVAL In this section, we investigate the method's effectiveness in erasing entire object classes from the model. We focus our comparison on ESD and UCE, as these are the only methods that have conducted object removal experiments in their respective papers. Following the experimental setup of ESD, we conduct experiments on erasing Imagenette classes, a subset of ImageNet classes. We perform iterative erasure, validated by the accuracy of the erased and unerased classes, and set λ to 1e-1. For a given class to erase, if UCE's accuracy has already reached 0, we do not continue RECE's erasure. As shown in <ref>, our RECE exhibits superior erasure capability while minimizing interference with non-targeted classes. § EXTENDED INAPPROPRIATE CONTENT REMOVAL We also compare the models across larger categories of inappropriate classes. In the main text, we focus on erasing nudity to align with the experimental setup used in ESD's main paper, as nudity is a classical example of an inappropriate concept. In <ref>, we further demonstrate the efficacy of erasing multiple sensitive concepts from I2P, including “hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood”. We use the fine-tuned Q16 classifier[<https://github.com/YitingQu/unsafe-diffusion>], which more accurately detects general inappropriate concepts, to present the proportions of inappropriate content across the various categories in I2P. The results show that our method effectively erases these sensitive concepts. § QUALITATIVE RESULTS In this section, we present additional qualitative results. Images in the same row are generated with the same prompts and seeds. <ref> shows artistic paintings generated by SD and the different erased models. The prompts are "A depiction of a starry night over a quiet town, reminiscent of Van Gogh's famous painting", whose style should be erased. SLD struggles to remove Van Gogh's style, whereas the other methods succeed. <ref> shows images conditioned on prompts whose artistic style should not be erased. ESD and CA cause significant and unnecessary distortion to unerased artistic styles, whereas our method and UCE exert minimal influence. <ref> shows images generated by models in which Van Gogh's style has been erased, under attack by P4D and UnlearnDiff. The first column is generated by the original SD without attack. Both our method and UCE adopt closed-form solutions, but our method exhibits greater robustness. <ref> shows images conditioned on prompts related to nudity. Our method effectively removes nudity information while generating high-quality images. 
ESD and CA either retain sexual innuendo or produce blurred results. <ref> shows images conditioned on MSCOCO's captions. A good erasing method should produce well-aligned images for unerased concepts. ESD and CA fail to generate horses accurately. SLD struggles to generate the correct number of toothbrushes, and CA's result does not resemble toothbrushes.
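To make the closed-form operations in the appendix above concrete, here is a minimal numpy sketch of (i) the UCE-style edit of a projection matrix toward destination embeddings and (ii) the derived embedding c^' = (λI + ∑_i W_i^new^T W_i^new)^-1 (∑_i W_i^new^T W_i^old) c. The toy dimensions, the zero destination embedding, the pseudo-inverse and the λ value are illustrative assumptions; this is a sketch of the formulas as stated above, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, out_dim, lam = 8, 16, 1e-1        # toy sizes and a hypothetical lambda

# Toy stand-ins for the original K/V cross-attention projection matrices.
W_old = [rng.normal(size=(out_dim, emb_dim)) for _ in range(3)]
c = rng.normal(size=emb_dim)               # embedding of the target concept ("nudity")
c_star = np.zeros(emb_dim)                 # destination embedding (0 when unknown, see proof)
c_pres = [rng.normal(size=emb_dim) for _ in range(2)]   # embeddings to preserve

def uce_edit(W_prev, erase_pairs, preserved):
    """Closed-form edit W_new = W_prev (sum c* c'^T + sum c c^T)(sum c' c'^T + sum c c^T)^-1."""
    num = sum(np.outer(cs, ce) for ce, cs in erase_pairs) + sum(np.outer(p, p) for p in preserved)
    den = sum(np.outer(ce, ce) for ce, _ in erase_pairs) + sum(np.outer(p, p) for p in preserved)
    # pseudo-inverse keeps this rank-deficient toy example well defined
    return W_prev @ num @ np.linalg.pinv(den)

def derive_embedding(W_new, W_orig, c, lam):
    """Closed-form c' = (lam*I + sum W_new^T W_new)^-1 (sum W_new^T W_orig) c."""
    d = c.shape[0]
    A = lam * np.eye(d) + sum(w.T @ w for w in W_new)
    b = sum(wn.T @ wo for wn, wo in zip(W_new, W_orig)) @ c
    return np.linalg.solve(A, b)

# One "derive-erase" step: edit the matrices, then recover the concept's surrogate embedding.
W_new = [uce_edit(w, erase_pairs=[(c, c_star)], preserved=c_pres) for w in W_old]
c_prime = derive_embedding(W_new, W_old, c, lam)
print("||W_new c' - W_old c|| per matrix:",
      [float(np.linalg.norm(wn @ c_prime - wo @ c)) for wn, wo in zip(W_new, W_old)])
```

This corresponds to a single "derive-erase" epoch; the method described above repeats the same two steps for several epochs.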
http://arxiv.org/abs/2407.12177v1
20240716210551
Are Linear Regression Models White Box and Interpretable?
[ "Ahmed M Salih", "Yuhe Wang" ]
cs.LG
[ "cs.LG", "stat.ML" ]
§ ABSTRACT Explainable artificial intelligence (XAI) is a set of tools and algorithms that are applied to, or embedded in, machine learning models in order to understand and interpret those models. XAI is recommended especially for complex or advanced models, including deep neural networks, because such models are not interpretable from a human point of view. On the other hand, simple models, including linear regression, are easy to implement, have lower computational complexity, and produce output that is easy to visualize. The common notion in the literature is that simple models, including linear regression, are considered "white box" because they are more interpretable and easier to understand. This is based on the idea that linear regression models provide several favorable outputs, including the effect of each feature in the model and whether that effect on the model output is positive or negative. Moreover, the uncertainty of the model can be measured or estimated using the confidence interval. However, we argue that this perception is not accurate and that linear regression models are neither easy to interpret nor easy to understand when judged against common XAI metrics and the practical challenges they may face. These include linearity, local explanation, multicollinearity, covariates, normalization, uncertainty, feature contribution and fairness. Consequently, we recommend that so-called simple models be treated in the same way as complex models when it comes to explainability and interpretability. § INTRODUCTION Explainable artificial intelligence (XAI) emerged to help users understand how machine learning models work. It uses a set of tools and algorithms to convert complex models into a form that is more digestible from a human perspective <cit.>. XAI has several favorable aims and outputs, including feature attribution, fairness, uncertainty and sensitivity <cit.>. Machine learning models are commonly grouped into two classes: simple and complex models. Complex models include deep neural networks and convolutional neural networks, while simple models include linear regression and rule-based models. It is recommended that XAI be applied specifically to complex models because they are considered "black boxes" <cit.>. On the other hand, simple models are easy to implement, fast in terms of running time compared to complex models, and easy to visualize. 
Simple models including linear regression models (LRMs) are considered as self-explanatory and they are interpretable by their nature <cit.> <cit.> <cit.>. As opposed to "black box" models, LRMs are called "white box" models because they are understandable <cit.>. This notion is common in the literature based on the fact that simple models including LRMs are easy to interpret because they provide several outputs which helps to understand the internal mechanism of the model. For instance, it shows the effect size of each explanatory variable in the model toward the model prediction by providing the coefficient value. Moreover, the sign of the coefficient value helps to determine whether the effect positive or negative <cit.>. In addition, it shows the uncertainty of the estimated coefficient value by reporting the confidence interval of each independent variable. However, there are many challenges and obstacles that hinder to explain and interpret those models precisely when they are employed with real-life applications. This includes how to interpret the coefficient value when the variables are collinear, how to deal with the impacts of covariates and explain their effect with the current approaches to mitigate their impacts on model prediction. In addition, to what extent it is trustworthy to consider confidence interval as a proxy to measure the uncertainty of the model. Moreover, how to deal with the pre-processing steps including normalization and standardization which they make it more difficult to interpret the LRMs. The current paper discusses and sheds the light on some challenges related to explain LRMs. In the following sections and sub-sections, we will discuss some challenges related to interpret and explain LRMs. § LINEAR REGRESSION MODELS LRMs are the most common and the simplest method to reveal the association between an explanatory variable and a continues outcome <cit.>. The input data or variable is called the independent variable while the outcome/output is called the dependent variable. Equation <ref> shows a simple linear regression when there is one dependent variable and an outcome. y ≃β_0 + β X + ϵ where y is the output, β_0 is the intercept, β is the coefficient value (slope) of the explanatory variable X and ϵ is the error which represents the difference between the predicted value and the actual value. The β value represents the effect of one unit of X toward y. LRMs could involve more than one independent variable which is called multiple linear regression. Equation <ref> shows an example of a multiple linear regression model when the number of independent variables up to n where n could be any number. y ≃β_0 + β_1X_1+β_2X_2+ β_3X_3 + ...... + β_nX_n + ϵ In this case there are multiple β which each represents the effect/coefficient value of its own independent variable. The interpretation of the β represents the effect in one unit of the variable of interest toward the outcome while assuming/holding all other independent variables in the model constant <cit.>. LRMs are extended to predict categorical outcome such as sex, death and a disease <cit.> with classification models including logistic regression. In this case, the outcome of the model is a probability (between 0 and 1) from a sigmoid function indicating whether it belongs to a specific class or not <cit.>. From statistical point of view, LRMs are used to perform association between an independent variable or group of independent variables and an outcome. 
On the other hand, LRMs from machine learning point of view are used to predict an outcome using sets of independent variables. Comparing to advanced and complex models, LRMs are considered as white box and interpretable based on the perception that the internal mechanism of these models is understandable. § XAI XAI is a set of tools, approaches, methods or algorithms that help ends users to understand how the model works, how it is reached to a specific decision, what are the most informative variables in the model and to what degree the model is certain <cit.>. Such aims are very important to build trust and increase transparency to better implement AI models in real life applications. Moreover, it helps to improve model performance and mitigate bias effect in the model. There have been many XAI proposed in the recent years to explain and interpret AI models globally for all instances in the model or locally for a specific instance. In addition, XAI could be either model-agnostic which means can be applied to any model or model-specific indicating that can be applied to a specific model. Moreover, some XAI methods are proposed for a specific kind of data. For instance, Grad-CAM <cit.> and Integrated gradients <cit.> are proposed to explain AI models with image data; Accumulative Local Effect <cit.> and Partial Dependency Plot <cit.> are applied with tabular data while Shapley Additive Explanations <cit.> and Local Interpretable Model-agnostic Explanations <cit.> can be applied to both imaging and tabular data. XAI has been proposed and recommended to be applied to complex models including deep learning, conventional neural network and recurrent neural network. This is because these models are considered as "black boxy" because the internal mechanism of these model is not clear. On the other hand, simple models including LRMs are considered more interpretable and easier to understand. It is indeed the common notion in the literature that these models are self-explanatory with less recommendation to employ XAI models. § CHALLENGES OF INTERPRETING LRMS The following subsections discuss the most common challenges that end-users might face when interpret LRMs apart from which domain. §.§ Linearity assumption As their names imply, LRMs assume and enforce a linear association between the independent variables and the output. The assumption is based on the theory background behind these models. Figure <ref>shows the association could be linear or non-linear. The association between X and y is a positive linear association while it is non-linear between Z and y. It is important to understand the association between the input and the output in order to use an appropriate model and interpret it accordingly. In real-world applications the association between the input variables and output might be linear, monotonic or more complex. One of the significant aims of XAI is to reveal the kind of the association between the input data and the output. For example, the association between the number of reservations of booking rooms in a European touristy city and the temperature degree. Such association might not be linear and better represented by a U-shape. This is because the association between the number of reservations and temperature degree would increase to a point (mid of summer) and then turn around and decreases. 
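To see why a single linear coefficient can be misleading for such a U-shaped (or inverted-U) relationship, the short sketch below fits an ordinary least-squares line to synthetic data generated from a quadratic curve; the booking and temperature numbers are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic inverted-U relationship: bookings rise with temperature up to a peak,
# then fall again (illustrative numbers only).
temperature = rng.uniform(0, 30, size=200)
bookings = 100 + 12 * temperature - 0.4 * temperature**2 + rng.normal(0, 5, size=200)

# Straight-line (degree-1) least-squares fit: slope and intercept.
slope, intercept = np.polyfit(temperature, bookings, deg=1)
print(f"fitted linear slope: {slope:.2f}")   # comes out near zero for this symmetric U-shape

# A quadratic fit recovers the true curvature.
quad_coeffs = np.polyfit(temperature, bookings, deg=2)
print("quadratic coefficients:", np.round(quad_coeffs, 2))
```

Here the fitted straight-line slope comes out close to zero even though temperature strongly drives the outcome, so reporting that single coefficient as "the effect" would be misleading.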
To explain and interpret the LRMs with this kind of data, the end-users report the coefficient value as the effect size and the direction of the effect which is not accurate and does not reflect the actual association. §.§ Local explanation Machine learning models can be explained and interpreted either globally or locally. Globally indicates that to explain the model at global level considering all samples in the model. On the other hands, local explanation means explain the model locally for a specific instance in the model <cit.>. In other word, it means showing the effect of the explanatory variables for one sample. Figure <ref> shows a global and a local explanation. It shows the contribution of each feature toward any class at individual level and the probability for being class A or B. It is very important to show the explanation at individual level because the effect of each independent variable is averaged over all samples in the global explanation. The local explanation of an instance might be different from the explanation at global level. This kind of explanation is more vital in real life applications. For instance, a client would like to know what information in the mortgage application form made his/her application weak and eventually the mortgage was rejected. Unfortunately, LRMs are lack of such valuable property which is one of the most significant aims of XAI methods. LRMs within the context of machine learning only provides the coefficient value from the training datasets. This value is an average effect considering all samples in the model. As a result, LRMs cannot be considered as a white box within this context because it is not clear how it works at local level. §.§ Multicollinearity Multicollinearity is one of the common phenomena in statistical analysis when two or multiple independent variables in the model are highly correlated <cit.>. In other word, it indicates when some of explanatory variables are linear function of the others in the model. Figure <ref> shows that X1 and X2 are highly positively correlated (A). On the other hand, it shows (B) that there is no correlation or dependency between X5 and X6 which indicates the absence of collinearity. It is more evident in real-life applications especially in health care, biology and medicine. To interpret the LRMs, the coefficient value of each variable in the model is reported alongside the confidence interval as the impact on the dependent variable. It shows the effect size and the direction of the effect whether it is positive or negative toward the model prediction. Coefficient value is considered one of the main properties of LRMs that makes it interpretable. The interpretation of the coefficient value represents the effect of one unit of the independent variable of interest on the outcome while holding all other independent variables in the model constant <cit.>. Such interpretation might be correct when the independent variables are really independent (as the case between X5 and X6 in figure <ref>). However, in real life applications the independent variables are usually collinear and they change simultaneously. Accordingly, the classic interpretation of the coefficient value is not realistic and cannot be considered to explain and interpret the LRMs model. One might argue that features selection could be applied to select none-correlated features before feeding them to the model. Thereafter, the classic interpretation of the coefficient value accurately explains the model. 
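Whether two predictors are collinear, and how badly that destabilizes their coefficients, can be checked directly. The sketch below uses synthetic data (two nearly identical predictors plus one independent one) and computes variance inflation factors (VIFs), a standard collinearity diagnostic; the data and the plain-numpy implementation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Two nearly collinear predictors (e.g. two closely related clinical measurements)
# and one independent predictor; all values are synthetic.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)      # almost a copy of x1
x3 = rng.normal(size=n)
y = 2.0 * x1 + 0.0 * x2 + 1.0 * x3 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2, x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS coefficients (intercept, x1, x2, x3):", np.round(coef, 2))
# With collinear x1 and x2 only their combined effect is well determined, so
# "holding x2 constant while changing x1" is not a meaningful interpretation.

def vif(X_raw, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) from regressing it on the rest."""
    others = np.delete(X_raw, j, axis=1)
    A = np.column_stack([np.ones(len(others)), others])
    fitted = A @ np.linalg.lstsq(A, X_raw[:, j], rcond=None)[0]
    resid = X_raw[:, j] - fitted
    r2 = 1.0 - resid.var() / X_raw[:, j].var()
    return 1.0 / (1.0 - r2)

X_raw = np.column_stack([x1, x2, x3])
print("VIFs:", [round(vif(X_raw, j), 1) for j in range(3)])   # large VIFs flag collinearity
```

Large VIFs for x1 and x2 signal that their individual coefficients, and hence the usual "holding the others constant" reading, should not be trusted.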
In most cases especially in health care domain, researchers would like to include all the features in the model because each one has different clinical interpretation and might result in different recommendation. §.§ Covariates The covariates are group of variables or factors that affect both the independent and dependent variables in the model simultaneously <cit.>. As it is shown in figure <ref>, the Covariates have direct causal association to the model input and output at the same time. The covariates are different from a domain to another and might be related to characteristics of the samples or data acquisition. For example, sex, ethnicity and age are common covariates in healthcare applications. In addition, weight and height or body mass index are common covariates in cardiovascular diseases. To mitigate the effect of covariates, different approaches are proposed including regressing the covariates from the independent variables before feeding them to the model. In addition, some researches use the covariates directly in the model as independent variable. Another approach is to intentionally select cases and control using some matching methods such as propensity score based on the set of observed covariates <cit.>. When it comes to explain and interpret the LRMs, it is challenging on how to reveal the impacts of those factors. Let us consider a scenario where we want to predict a disease using electronic health records as independent variables. In such case, sex, ethnicity and age might be considered as observed covariates. When regressing the covariates from the independent variables, we embed the impact of the covariates in the independent variables. In such case, it is not possible to to explain how the model would behave when it is applied to only male or female or on other ethnicity. For instance, the effect of sex is regressed from independent variables. However, in some domains including cardiovascular and brain diseases male are more prone to experience those diseases than female <cit.> <cit.>. In this case, it is very vital to explain and interpret the model when switching between sexes. Similar issue will appear if we consider propensity score to match samples because we just naturalize the independent variables over the set of covariates. The last scenario is when including the covariates in the model alongside the independent variable. The classical interpretation of the coefficient value in the model makes it more difficult to interpret the model because it enforces holding the other independent variables in the model as constant. Consequently, it is indeed difficult to interpret the LRMs and consider the impact of covariates in the three discussed approaches. §.§ Data scaling Usually, the data are not in the same scale or they might not have similar distribution. Some of them might have a wide range while others are very tight. Those with wider range and higher values might dominant model decision. One of the most common step of data pre-processing in machine learning is to either normalize or standardize the data before fitting them in the model <cit.>. As it is explained in figure <ref>, normalization is the process of scaling the data to have same range which is usually between zero and one while standardization means convert the data to have zero mean and one unit standard deviation. It helps to improve model performance, decrease the running time of the model and allowing the model deals with all data equally. 
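The two transformations just described are each a one-line computation, and their effect on the fitted coefficient is easy to demonstrate. The synthetic blood-pressure example below is an illustrative assumption, not real data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Synthetic example: systolic blood pressure modelled from age in years.
age = rng.uniform(20, 80, size=n)
bp = 100 + 0.8 * age + rng.normal(scale=8, size=n)      # true effect: +0.8 mmHg per year

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

age_minmax = (age - age.min()) / (age.max() - age.min())   # normalization to [0, 1]
age_std = (age - age.mean()) / age.std()                   # standardization (z-score)

print(f"raw scale:      {ols_slope(age, bp):.2f} mmHg per year")
print(f"min-max scaled: {ols_slope(age_minmax, bp):.2f} mmHg per full observed range")
print(f"standardized:   {ols_slope(age_std, bp):.2f} mmHg per standard deviation")
```

The three slopes describe the same data, but only the first retains the original measurement unit.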
However, such preprocessing hinders the ability to explain and interpret the model in terms of the effect size per unit. For instance, normalization makes the features unitless. Consequently, the common interpretation of the coefficient value is no longer possible because the coefficient no longer represents a change per original unit. Similarly, the coefficient value obtained from standardized data represents the deviation from the center of the data, which is usually the mean value. In both cases, the end users will not be able to understand how a change in the independent variable leads to an effect on the outcome. The only way to interpret the model is then to compare the coefficient values of all independent variables to recognize those with larger and smaller effects. However, even this interpretation is not precise because it does not account for multicollinearity. §.§ Uncertainty One of the desirable aims of XAI is to show the uncertainty of the model when making a prediction <cit.>. Uncertainty helps end users decide whether or not to rely on a specific prediction. Moreover, it can be one of the cornerstones of trusting a model and employing it in daily-life applications. Uncertainty can be presented in many forms. For instance, in classification models the probability that a subject belongs to a specific class is a form of uncertainty. The confidence interval of an estimated coefficient value is another form of uncertainty. Figure <ref> shows the confidence intervals of two estimated coefficient values. The left part of the figure shows a confidence interval that is one unit (±) of standard deviation away from the mean, while the one on the right shows a confidence interval that is two units (±) of standard deviation away from the mean. The width of the confidence interval is considered a proxy for the precision of the estimated value. The smaller the confidence interval, the more certain the model is of the estimate; conversely, the larger the confidence interval, the less certain the model is. Although confidence intervals can be calculated for LRMs, the main question is how to specify a threshold that identifies good certainty. Moreover, the confidence interval has many drawbacks, as explained in <cit.>. Richard et al. show that there are many fallacies related to how the confidence interval is interpreted, including treating it as a measure of certainty <cit.>. They also show with practical examples that the width of the confidence interval represents neither an index of the precision of an estimate nor the plausibility of the estimated value. Accordingly, using the confidence interval as a measure of certainty to interpret and explain LRMs is not accurate and might be biased. §.§ Features contribution in classes Regression models are also used to perform binary classification, as is the case with logistic regression. The most common scenario is to model the outcome (cases vs. controls) using a set of independent variables <cit.>. It is vital to interpret the model for both cases, whichever class the outcome takes. For instance, figure <ref> gives an example of interpreting a model to reveal the contribution of the features in each class. It shows that some features contribute more in class A than in class B. This is very important for understanding the model and might help in feature engineering, or even earlier when collecting the data. Feature contribution is one of the most common outputs of XAI methods, especially with tabular data <cit.>. 
It shows the contribution of the features toward the prediction and their impacts in each class when modelling a classification task. Unfortunately, LRMs do not provide information or explain how the independent variables contribute in each class. For instance, they do not show which features contribute more significantly in class A or B. Moreover, such information cannot be shaped from LRMs neither at global level for all subjects nor at local level for a specific instance. Consequently, LRMs are lack of features attribution for each class which is one of the significant components of XAI outcome. §.§ Fairness There are many proxies proposed to measure the trustworthy of machine learning models including LRMs. Fairness one of the most common proxy that machine learning models should fulfil especially in health care and medicine. Machine learning algorithms including LRMs are usually applied to data that involve sub-groups. For instance, the data might include both sexes, multiple races or ethnicity, disabilities, education level, income, marital status or data collected from young and old participants. Figure <ref> shows that machine learning models should be fair toward these groups when making a prediction. In other word, the prediction should not be biased toward a specific group or class <cit.>. There has been some metrics proposed to measure the fairness of a model. For instance, the equal opportunity metric indicates that a model is fair if the true positive rate (TPR) is equal for subgroups in a binary classifier model. Equal opportunity is extended to equal odd by ensure that both TPR and false positive rate (FPR) should be equal in subgroups. <cit.>. LRMs by themselves are not fair and biased against minorities in the data. This phenomenon is intrinsically inherited in LRMs even if the data from the minorities included in the training step <cit.>. Furthermore, most of the proposed metrics to measure fairness of any model are suitable for classification models when true positive and false positive can be calculated. Accordingly, it is more challenging to measure fairness in LRMs. More importantly, it might be impossible to satisfy all the aspects of fairness and maximize the accuracy of the model at the same time <cit.>. § OPEN ISSUE AND RECOMMENDATIONS Although LRMs are used widely, easy to implement, has less computational complexity and does not require massive data pre-processing steps, they hold some challenges which make them difficult to explain and interpret. The common notion in the literature that LRMs are more interpretable and less accurate compared to complex models including deep neural network is not precise. In the aforementioned points, we showed that LRMs are quite hard to interpret, explain and they might have the same challenges to interpret advanced models. LRMs should be treated as complex models when it comes to explainability and interpretability. It is true that all the points we mentioned could be applied to all machine learning models and not limited to LRMs. However, we focused on LRMs to defy the common notion of interpretability related to LRMs. We believe the following recommendations should be considered to ensure a more accurate interpretation of LRMs. * LRMs should be treated equally to complex models when it comes to interpretability and explainability. This applies to explain the model at global and local level. * Coefficient value can be user to explain and interpret the effect size and direction of the input data when there is no collinearity. 
However, it is not a precise proxy for the effect size when LRMs are applied to collinear data or when the association is not linear. * Multicollinearity is one of the main obstacles to explaining any machine learning model, whether simple or complex. End users should consult the literature for possible solutions and suggestions to mitigate its effect <cit.>. * Covariates need to be considered carefully when interpreting their impact on the prediction outcome, by revealing their causal associations <cit.>. * The confidence interval may not be an accurate measure of the uncertainty of a model. More sophisticated approaches, including estimating a distribution rather than a single point, might be a better way to quantify certainty <cit.>. * Post-hoc XAI methods are indispensable for revealing the contribution of the features to each class in a classification task, because LRMs lack this property. * To the best of our knowledge, the issue of how to interpret the effect size after normalization and standardization has not been investigated before, nor have proper approaches been proposed. More research is required in this direction to make it possible to apply these pre-processing steps without losing the ability to interpret the effect size in its original unit. * LRMs are not inherently fair and might be biased against or toward a specific group in the model. Several approaches and metrics have been proposed to measure and improve fairness, specifically for LRMs <cit.> <cit.> <cit.> (see the sketch below). § ACKNOWLEDGMENTS AMS acknowledges support from The Leicester City Football Club (LCFC).
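As a concrete example of the group-fairness checks referred to in the recommendations above, the sketch below compares true and false positive rates across two groups of a binary classifier; the group labels, outcomes and the deliberately biased prediction rule are all synthetic and purely illustrative. Equal opportunity requires the TPRs to match across groups; equalized odds additionally requires the FPRs to match.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

group = rng.integers(0, 2, size=n)          # group membership (e.g. 0 = male, 1 = female)
y_true = rng.integers(0, 2, size=n)         # synthetic ground-truth labels
# A deliberately biased classifier: more likely to predict positive for group 1.
p_pos = 0.3 + 0.2 * group + 0.3 * y_true
y_pred = (rng.uniform(size=n) < p_pos).astype(int)

def rates(y_true, y_pred):
    tpr = y_pred[y_true == 1].mean()        # true positive rate
    fpr = y_pred[y_true == 0].mean()        # false positive rate
    return tpr, fpr

for g in (0, 1):
    tpr, fpr = rates(y_true[group == g], y_pred[group == g])
    print(f"group {g}: TPR={tpr:.2f}  FPR={fpr:.2f}")
# Matching TPRs across groups satisfies equal opportunity;
# matching both TPRs and FPRs satisfies equalized odds.
```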
http://arxiv.org/abs/2407.12392v1
20240717081246
Influence of different mutual friction models on two-way coupled quantized vortices and normal fluid in superfluid $^4$He
[ "Hiromichi Kobayashi", "Satoshi Yui", "Makoto Tsubota" ]
cond-mat.other
[ "cond-mat.other" ]
hkobayas@keio.jp ^1Department of Physics & Research and Education Center for Natural Sciences, Hiyoshi Campus, Keio University, 4-1-1 Hiyoshi, Kohoku-ku, Yokohama 223-8521, Japan^2Department of Physics, Osaka Metropolitan University, 3-3-138 Sugimoto, Sumiyoshi-ku, Osaka 558-8585, Japan ^3Nambu Yoichiro Institute of Theoretical and Experimental Physics (NITEP), Osaka Metropolitan University, Osaka 558-8585, Japan § ABSTRACT We study the influence of two mutual friction models on quantized vortices and normal fluid using two-way coupled simulations of superfluid ^4He. The normal fluid is affected by quantized vortices via mutual friction. A previous study [Y. Tang, et al. Nat. Commun. 14, 2941 (2023)] compared the time evolutions of the vortex ring radius and determined that the self-consistent two-way coupled mutual friction (S2W) model yielded better agreement with the experimental results than the two-way coupled mutual friction (2W) model whose model parameters were determined through experiments with rotating superfluid helium. In this study, we compare the two models in more detail in terms of the quantized vortex ring propagation, reconnection, and thermal counterflow. We found that the S2W model exhibits better results than the 2W model on the microscopic scale near a quantized vortex, such as during quantized vortex ring propagation and reconnection, although the S2W model requires a higher spatial resolution. For complex flows such as a thermal counterflow, the 2W model can be applied even to a low-resolution flow while maintaining the anisotropic normal fluid velocity fluctuations. In contrast, the 2W model predicts lower normal fluid velocity fluctuations than the S2W model. The two models show probability density functions with - 3 power-law tails for the normal fluid velocity fluctuations. Influence of different mutual friction models on two-way coupled quantized vortices and normal fluid in superfluid ^4He Makoto Tsubota^2, 3 July 22, 2024 ======================================================================================================================= § INTRODUCTION At temperatures below 2.17 K, superfluid ^4He is composed of an inviscid superfluid component and a viscous normal fluid component. This is known as the two-fluid model <cit.>. The circulation of the superfluid component is quantized and behaves as a quantized vortex line with a diameter of approximately 10^-8 cm. In the hydrodynamics of superfluid ^4He, the most extensively studied experimental phenomenon is the thermal counterflow <cit.>. In these experiments, two baths are filled with superfluid ^4He and connected via a channel. When one bath is heated, the normal fluid moves into the other bath through the channel. However, the superfluid moves in the opposite direction through the channel to satisfy the conservation of mass. As the heat flux increases, the relative velocity between the superfluid and normal fluid will also increases. When the relative velocity exceeds a critical value, the quantized vortices become tangled <cit.>. This phenomenon is known as quantum turbulence (QT). A quantized vortex interacts with a normal fluid through mutual friction in superfluid ^4He <cit.>. A model of mutual friction <cit.> was proposed based on experimental data for uniformly rotating superfluid helium <cit.>. 
This model has been used for one-way coupled simulations, where the velocity profile of the normal fluid is prescribed, and quantized vortices in QT are affected via the mutual friction between the superfluid and normal fluid <cit.>. Numerical simulations using the vortex filament model (VFM) with the full Biot-Savart law showed good agreement with the experimental results <cit.>. The mutual friction model has also been used in two-way coupled simulations, where the normal fluid is locally affected by the quantized vortices via mutual friction. The mutual friction model used in these two-way coupled simulations is referred to as the 2W model in this study. The 2W model has been used in two-way coupled simulations with solid boundaries <cit.>, producing the anomalous anisotropic velocity fluctuations of the normal fluid in counterflow experiments <cit.>. However, the undisturbed velocity away from the core of the quantized vortex is assumed to be the normal fluid velocity in the 2W model <cit.>. Two-way coupled simulations using mutual friction with a locally disturbed normal fluid velocity have also been conducted <cit.>. The concept of a self-consistent model <cit.> was proposed and applied to simple configurations such as a quantized vortex ring <cit.> and a straight quantized vortex <cit.>. The recent self-consistent model <cit.> was updated slightly to the present self-consistent two-way coupled mutual friction model <cit.>, which is referred to as the S2W model in this study. The S2W model adopts the theoretical friction force through a vortex line; consequently, no experimentally determined empirical parameters are required. The time evolution of a single vortex ring radius obtained using the 2W and S2W models was compared with the experimental results, and the S2W model showed better agreement with the experimental results than the 2W model <cit.>. However, it is necessary to compare the performance of the two models for flows such as a vortex ring propagation, reconnection, and thermal counterflow. Furthermore, monitoring the motion of solid hydrogen tracers in decaying QT has shown that the probability density function (PDF) of the superfluid velocity, v, is non-Gaussian with 1/v^3 power-law tails owing to the motion of the quantized vortex <cit.>. These tails were reproduced in the PDF of the superfluid velocity obtained with a one-way coupled simulation of the VFM <cit.> and the Gross-Pitaevskii equation in a turbulent atomic Bose-Einstein condensate <cit.>. It would be interesting to observe the influence of the superfluid velocity on the PDF of the normal fluid using two-way coupled simulations with the two models. In this study, two-way coupled simulations are conducted to examine the influence of the 2W and S2W models on the velocity fluctuations of a normal fluid. The remainder of this paper is organized as follows: In Section <ref>, the basic equations, mutual friction models, and numerical conditions are described. In Section <ref>, we present and discuss the numerical results for the vortex ring propagation, reconnection, and thermal counterflow. Finally, our conclusions are presented in Section <ref>. § BASIC EQUATIONS AND NUMERICAL CONDITIONS §.§ Basic equations For the superfluid, the VFM <cit.> is used to describe the equation of motion of the quantized vortices. Figure <ref> (a) presents a schematic of a vortex filament. The tangential vector s' is defined as a unit vector along the vortex line at point s. 
The equation of motion for point s is as follows: d s/dt=v_s+α s'×v_ns-α's'×(s'×v_ns), where v_s denotes the superfluid velocity, v_ns=v_n-v_s, v_n is the normal fluid velocity, and α and α' are the coefficients of mutual friction at a finite temperature for the 2W model <cit.>. The superfluid velocity at 0 K at position r is obtained from the induced velocity produced from segment ds_1 at position s_1 using the Biot-Savart law as follows: v_s(r)=κ/4π∫_ℒ(s_1-r)× ds_1/|s_1-r|^3+v_s,b+v_s,a, where κ denotes the quantum circulation, ℒ represents the integration along the vortex line, and v_s,b and v_s,a are the velocity induced from the boundaries and the uniform flow applied to the superfluid, respectively. The momentum equations for the normal fluid are described by the Navier-Stokes equations: ∂v_n/∂ t+(v_n·∇)v_n=-1/ρ_n∇ P +ν_n∇^2v_n+1/ρ_nF_ns, where the total density ρ is composed of the superfluid density ρ_s and normal fluid density ρ_n as ρ=ρ_s+ρ_n; P is the effective pressure, ν_n is the kinematic viscosity of the normal fluid, and F_ns denotes the mutual friction from the superfluid to the normal fluid. As shown in Fig. <ref> (b), the mutual friction is calculated from each segment Δξ along the integral path ℒ' in the normal fluid mesh Ω'(r) shown in Fig. <ref> (c) at position r as follows: F_ns=1/Ω'(r)∫_ℒ'(r)f(ξ)dξ, where f(ξ) is the local mutual friction with arc length ξ. §.§ 2W model In this study, we compare two models of mutual friction: the 2W model <cit.> and the S2W model <cit.>. First, the 2W model <cit.> is presented. The mutual friction from the superfluid to the normal fluid f_sn is described based on experimental results <cit.> as follows: f_sn=-αρ_s κs'×( s'×v_ns)-α' ρ_s κs'×v_ns. From f(ξ)=f_ns=-f_sn, the local mutual friction in Eq. (<ref>) can be obtained. f(ξ)=ρ_s κ[ αs'×(s'×v_ns)+α' s'×v_ns]. The mutual friction f_sn can be interpreted from another perspective. The vortex filament is affected by the Magnus force, f_M. f_M=ρ_s κs'×(ṡ-v_s), where ṡ=ds/dt. In the 2W model, the drag force f_D is modeled using the drag coefficients γ_0 and γ_0' <cit.>. f_D=-γ_0 s'×[ s'×(v_n-ṡ) ]+γ_0' s'×(v_n-ṡ). Note that γ_0 and γ_0' correspond to D and D', respectively, in reference <cit.>. In the 2W model, f_sn satisfies f_sn=f_D. Because the inertia of the quantized vortex is negligible, the equation of motion can be expressed as follows: f_M+f_D=0. By eliminating ṡ using Eq. (<ref>), we obtain the following mutual friction coefficients: α = ρ_s κγ_0/γ_0^2+(ρ_s κ - γ_0')^2, α' = γ_0^2-(ρ_s κ - γ_0')γ_0'/γ_0^2+(ρ_s κ - γ_0')^2. §.§ S2W model This section discusses the S2W model <cit.>. In this model, the drag force is modeled using the drag coefficient D as follows: f_D=-D s'×[ s'×(v_n-ṡ) ]. When the Reynolds number Re_vortex of the normal fluid based on the velocity induced by the vortex line is low (10^-5∼10^-4), the coefficient of the drag force from the vortex line is analytically determined as follows <cit.>: D=4 πρ_n ν_n/1/2-γ-ln( Re_vortex), Re_vortex=|v_n⊥-ṡ|a_0/4ν_n, where the subscript ⊥ denotes a component perpendicular to the vortex line, γ = 0.5772 is the Euler-Mascheroni constant, and a_0 represents the vortex core size. In this study, we set a_0 to 1.3×10^-10 m. In this model, the Iordanskii force <cit.> is taken into account: f_I=-ρ_n κs'×(v_n-ṡ). The equation of motion results in the following: f_M+f_D+f_I=0. The mutual friction in this model is f_sn=f_D+f_I. Based on a comparison with Eq. 
(<ref>), f_sn is modeled as follows: f_sn=f_D+f_I=-βρ_s κs'×( s'×v_ns)-β' ρ_s κs'×v_ns, where β and β' denote the coefficients of mutual friction in the S2W model. Note that to ensure β' is positive, the sign of β' is opposite that in reference <cit.>. The definition of β' is consistent with α' in Eq. (<ref>) in the 2W model. Comparing the two mutual friction models in Eq. (<ref>) and Eqs. (<ref>) and (<ref>), γ_0 and γ_0' correspond to D and -ρ_n κ, respectively. Substituting these into Eq. (<ref>), the following mutual friction coefficients are obtained: β = ρ_s κ D/D^2+(ρ_s κ + ρ_n κ)^2, β' = D^2+(ρ_s κ + ρ_n κ)ρ_n κ/D^2+(ρ_s κ + ρ_n κ)^2. Finally, we obtain the equation of motion for s in the S2W model, similar to Eq. (<ref>): ṡ=v_s⊥+βs'×v_ns-β' s'×( s'×v_ns). A summary and comparison of the 2W and S2W models is presented in the Appendix. Figure <ref> compares the coefficients of the 2W and S2W models as a function of the temperature, T. The total coefficient of the S2W model is larger than that of the 2W model, and the ratio is stable at approximately two for temperatures higher than 1.6 K, as shown in Fig. <ref> (a) and (b). The fractions of α, β, α', and β', i.e., α / √(α^2 + α'^2), β / √(β^2 + β'^2), α' / √(α^2 + α'^2), and β' / √(β^2 + β'^2), respectively, are shown in Fig. <ref> (c) and (d). α is dominant at all temperatures in the 2W model, whereas β decreases gradually with increasing temperature. In contrast, β' increases with increasing temperature. These differences affect the strength of the mutual friction around the quantized vortex, as discussed in Section <ref>. §.§ Numerical methods and conditions Time integration of Eqs. (<ref>) or (<ref>) is performed using the fourth-order accuracy Runge-Kutta method with Δ t= 0.0001 s. The spatial resolution of ξ between discrete points is set to 0.0008 cm < Δξ < 0.0024 cm. Two filaments are considered to be reconnected if they approach within ξ_min= 0.0008 cm <cit.>. Short filaments of less than 5Δξ_min are removed <cit.>. Equations (<ref>) for the normal fluid are discretized using the second-order accuracy finite difference method. The simplified Maker and Cell method <cit.> is used to couple the velocity and pressure, and the fast Fourier transform is used to solve the Poisson equation for the pressure. A temperature, T, of 1.9 K is considered. A box size of D_x=D_y=D_z= 1 mm is used, and the number of grid points for the normal fluid is set to N_x=N_y=N_z=120. Periodic conditions are adopted, and the uniform flow v_s,a=-ρ_n V_n /ρ_s based on the mass conservation law is applied to the thermal counterflow, where V_n is the mean velocity of the normal fluid. In this study, V_n of 2.5 mm/s and 5.0 mm/s in the x direction are considered. For the quantized vortex ring propagation, the initial radius is set to 0.02 cm. For the reconnection, two quantized vortices cross at a 90-degree angle, and the initial distance is set to 0.002 cm. Our implementation of the S2W model is then validated. Figure <ref> shows the time evolution of the radius of a quantized vortex ring using the S2W model for 1.7, 1.8, and 2.0 K. The initial radius is set to 7.6 × 10^-3 cm. The results show good agreement with the data presented in Ref. <cit.>. 
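Before turning to the results, the following small sketch evaluates the two coefficient sets defined above, α, α' for the 2W model and β, β' for the S2W model. The quantum of circulation κ ≈ 9.97×10^-4 cm^2/s and the Euler-Mascheroni constant are the only physical constants used; the densities, viscosity, drag parameters and relative speed passed in the example call are placeholder values in cgs units, not tabulated helium properties.

```python
import numpy as np

KAPPA = 9.97e-4          # quantum of circulation, cm^2/s (h / m_He4)
EULER_GAMMA = 0.5772     # Euler-Mascheroni constant in the S2W drag coefficient

def alpha_2w(rho_s, gamma0, gamma0p, kappa=KAPPA):
    """2W coefficients alpha, alpha' from the drag parameters gamma_0, gamma_0'."""
    den = gamma0**2 + (rho_s * kappa - gamma0p)**2
    alpha = rho_s * kappa * gamma0 / den
    alphap = (gamma0**2 - (rho_s * kappa - gamma0p) * gamma0p) / den
    return alpha, alphap

def beta_s2w(rho_s, rho_n, nu_n, rel_speed, a0=1.3e-8, kappa=KAPPA):
    """S2W coefficients beta, beta' built from the low-Reynolds-number drag D and the Iordanskii term."""
    re_vortex = rel_speed * a0 / (4.0 * nu_n)
    D = 4.0 * np.pi * rho_n * nu_n / (0.5 - EULER_GAMMA - np.log(re_vortex))
    den = D**2 + (rho_s * kappa + rho_n * kappa)**2
    beta = rho_s * kappa * D / den
    betap = (D**2 + (rho_s * kappa + rho_n * kappa) * rho_n * kappa) / den
    return beta, betap

# Placeholder inputs: actual densities, viscosity and drag parameters at a given
# temperature must be taken from tabulated helium data.
rho, rho_n_frac = 0.145, 0.42
rho_n, rho_s = rho * rho_n_frac, rho * (1.0 - rho_n_frac)
print("2W  (alpha, alpha'):", alpha_2w(rho_s, gamma0=2.0e-5, gamma0p=1.0e-6))
print("S2W (beta,  beta') :", beta_s2w(rho_s, rho_n, nu_n=2.0e-4, rel_speed=0.1))
```

With tabulated helium properties, the same functions can be used to reproduce the temperature dependence of the coefficients compared above.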
§ NUMERICAL RESULTS AND DISCUSSION §.§ Quantized vortex ring propagation Figure <ref> shows the quantized vortex ring propagation at 0.05 s for the 2W and S2W models; the quantized vortex is visualized in red and the normal fluid vortex tube is displayed in green using the second invariant of the velocity gradient tensor Q = 0.5 s^-2 <cit.>. The second invariant is defined as Q = (W_ij W_ij - S_ij S_ij)/2 using the velocity strain tensor S_ij=(∂ v_n,j/∂ x_i + ∂ v_n,i/∂ x_j)/2 and the vorticity tensor W_ij=(∂ v_n,j/∂ x_i - ∂ v_n,i/∂ x_j)/2. For the 2W model, a pair of rings of normal fluid vortex tubes are located inside and outside the quantized vortex ring in the radial direction. In the S2W model, the inner vortex tube remains slightly behind the quantized vortex ring, and the outer vortex tube propagates slightly ahead of the quantized vortex ring. This difference between the models has also been observed at 1.65 K <cit.>. This is due to the coefficients of mutual friction of α (α') and β (β'). The mutual friction to the vortex ring f_sn on the x-z plane at 0.05 s is shown in Fig. <ref>. The mutual friction in the 2W model acts opposite to the propagation direction of the ring. However, the mutual friction in the S2W model acts in a diagonal direction. These orientations of the mutual friction are consistent with those shown in Fig. <ref>(c) and (d). As the temperature increases, the direction of the mutual friction in the S2W model rotates from -x to z. The mutual friction in the S2W model is approximately 2.3 times stronger than that in the 2W model. This result is consistent with the coefficient ratio in Fig. <ref>(b). f_D is much weaker than f_I and acts inside the ring, whereas f_I acts outside the ring, where f_sn=f_D+f_I. Substituting Eq. (<ref>) into Eqs. (<ref>) and (<ref>) results in the following: f_D=-D [ (1-β') s'× (s'×v_ns)+βs'×v_ns] , f_I=ρ_n κ[ βs'× (s'×v_ns)-(1-β') s'×v_ns], where D=ρ_s κβ/β^2+(1-β')^2, ρ_n κ =ρ_s κ[-β^2+β'(1-β')]/β^2+(1-β')^2. It is worth noting that the mutual friction predicted by the S2W model should approach that predicted by the 2W model at the coarse-graining limit. If experiments are performed to investigate the location of normal fluid vortex tubes around a vortex ring, the accuracy of the S2W model will be improved. The time evolution of the kinetic energy per unit density of the normal fluid and superfluid is shown in Fig. <ref>. The kinetic energies of the normal fluid and superfluid are defined as follows: E_n/ρ_n = 1/2v_n^2, E_s/ρ_s = 1/2v_s^2 The energy of the superfluid is gradually decreased by transferring energy to the normal fluid via mutual friction. The superfluid vortex ring in the 2W model disappears at 1.0 s, whereas that in the S2W model annihilates at 0.9 s. The superfluid energy in the 2W model maintains a longer lifetime than that in the S2W model. This result is consistent with that at 1.65 K <cit.>. The normal fluid energy in the S2W model increases faster than that in the 2W model. After annihilation of the superfluid vortex ring, the normal fluid energy decreases owing to the viscosity of the normal fluid. Next, the PDFs are compared. Figure <ref> shows the PDFs of the velocity fluctuations during the vortex ring propagation. Superfluid velocity fluctuations are known to have strong non-Gaussian PDFs with - 3 power-law tails <cit.>, as shown in Fig. <ref> (c) and (d). The superfluid PDFs shown with solid lines correspond to the event immediately before annihilation of the quantized vortex rings. 
No dashed lines are shown because the quantized vortex ring disappears. Normal fluid velocity fluctuations are known to have Gaussian PDFs, although those velocity gradients yield non-Gaussian PDFs <cit.>. The PDFs of the normal fluid velocity fluctuations in the y (radial) direction have - 3 power-law tails, as shown by the fine solid line. The PDFs are affected by the superfluid fluctuations via mutual friction. The dashed lines indicate the normal fluid PDFs immediately after the annihilation of the quantized vortex rings. The PDFs with strong fluctuations are reduced because the superfluid vortex rings disappear. Figure <ref> (a) presents the normal fluid PDFs in the x (propagation) direction. The PDFs have - 3 power-law tails, which are affected by the superfluid fluctuations via mutual friction. However, the tails exist only in the propagation direction, i.e., in the positive x direction. The peaks of the PDFs are located at approximately - 0.5. This is due to the normal fluid backflow in the inner region of the quantized vortex ring. As shown in Fig. <ref>, the inner normal fluid vortex caused by the quantized vortex ring rotates through mutual friction and produces the backflow in the inside of the vortex ring. The backflow is rectified by the normal fluid contraction flows induced by mutual friction. Consequently, it is believed that almost no negative fluctuations occur. In terms of the PDFs, almost no difference between the models is observed. §.§ Reconnection of two quantized vortices The vortex tubes and quantized vortices after reconnection are shown in Fig. <ref>; the quantized vortex is shown in red, and the normal fluid vortex tube is depicted in green using the second invariant Q = 0.5 s^-2 of the velocity gradient tensor. For the S2W model, the spiral vortex tubes that emerge around the quantized vortex line have a stronger twist than those in the 2W model. Figure <ref> shows the time evolution of the kinetic energy per unit density of the normal fluid and superfluid during reconnection. Before reconnection, the two quantized vortices are twisted while approaching each other owing to the induced velocity from the other quantized vortex. The motion of the quantized vortices produces a weak vortex tube of normal fluid via mutual friction. Consequently, the energy of the superfluid is transferred to the normal fluid. An abrupt energy transfer from the superfluid to the normal fluid occurs at approximately 0.5 s. Because the S2W model has stronger mutual friction than the 2W model, the reconnection is delayed. The normal fluid in the S2W model has a higher energy than that in the 2W model owing to the stronger mutual friction. Figure <ref> shows the PDFs of the velocity fluctuations during reconnection. The PDFs of the normal fluid and superfluid exhibit - 3 power-law tails in all directions. This is due to the superfluid fluctuations. Although there is almost no difference in the superfluid PDFs between the two models, the normal fluid PDFs drawn with solid lines appear different, as shown in Fig. <ref> (a) and (b). The solid lines correspond to the time immediately before reconnection. The normal fluid fluctuations remain weak, as shown in Fig. <ref> because the influence of the mutual friction on the normal fluid is weak. Consequently, the PDFs do not yet exhibit - 3 power-law long tails, i.e., the PDFs show the results during a transient process. Nevertheless, the two models produce different PDFs. 
This is due to the difference in the location of the normal fluid vortex tubes around the quantized vortex, as shown in Fig. <ref>. Figure <ref> (a) shows the velocity fluctuations of the normal fluid in the x and y directions during vortex ring propagation. The velocity fluctuation is defined as the root-mean-square velocity √(v_n,i^2) (i=x, y). The normal fluid in the S2W model receives a greater amount of energy than that in the 2W model. The velocity fluctuation in the x direction is stronger than that in the y direction. This is a result of jet formation of the normal fluid due to the local mutual friction f_ns=-f_sn as shown in Fig. <ref>. Figure <ref> (b) shows the velocity fluctuations of the normal fluid in the x and y directions during reconnection. As an initial condition, two straight quantized vortex lines crossed at a 90-degree angle are set at a certain distance in the x (vertical) direction. Immediately after reconnection, the 2W model yields a stronger fluctuation in the x direction, whereas the S2W model yields a stronger fluctuation in the y direction. This is due to the location of the normal fluid vortex tubes produced around the quantized vortex line, as shown in Figs. <ref> and <ref>. §.§ Thermal counterflow This section considers the thermal counterflow. Figure <ref> shows a snapshot of the quantized vortices and normal fluid vortex tubes in a thermal counterflow. The vortex tubes are produced by mutual friction with quantized vortices. The S2W model yields a high density of vortex lines and strong vortex tubes. The vortex line density is defined as the vortex line length per unit volume. The time evolution of the vortex line density in the thermal counterflow is shown in Fig. <ref>. The density gradually increases at 2.5 mm/s, whereas it increases rapidly at 5.0 mm/s. Subsequently, the density reaches a statistically steady state. The S2W model yields a density that is approximately twice that of the 2W model. Figure <ref> shows the average vortex line density in the statistically steady state of Fig. <ref> as a function of the mean relative velocity, V_ns. The slope parameter γ = L^1/2/(V_ns-v_0) is presented in the figure, where v_0 is a small, adjustable parameter on the order of 1 cm s^-1. The value γ = 167 of 2W^* is the reference value in our previous study using the 2W model <cit.> and is consistent with the present result of γ = 160 obtained using the 2W model. The S2W model yields a higher γ = 187 than the 2W model, although the experimental value is γ∼ 130 s/cm^2 <cit.>. The Vinen equation <cit.> can be expressed as follows: dL/dt = A |V_ns| L^3/2 - B L^2, where A and B denote the coefficients of the generation and decay rates, respectively. In the steady state, L^1/2 = A/B |V_ns| is obtained. Here, A/B ∼γ. As shown in Fig. <ref>, the stronger mutual friction in the S2W model leads to a slower approach between two quantized vortices. Consequently, the decay rate becomes low, and thus the S2W model estimates a larger γ than the 2W model. The PDFs of the velocity fluctuations in the thermal counterflow in the statistically steady state are shown in Fig. <ref>. Two mean normal fluid velocities of 2.5 mm/s and 5.0 mm/s are examined. There is no difference in the direction of fluctuations in the PDFs of the superfluid. The PDFs of the normal fluid in the x and y directions exhibit Gaussian distributions with weak fluctuations. As shown in Fig. <ref> (b), strong fluctuations exhibit - 3 power-law tails. 
The S2W model yields stronger fluctuations than the 2W model owing to the stronger mutual friction. The mean normal fluid velocity has a weak influence on the intensity of the PDFs of the normal fluid. The PDFs in the x (streamwise) direction appear asymmetric in Fig. <ref> (a). The positive fluctuations exhibit sub-Gaussian distributions, whereas the negative fluctuations produce - 3 power-law tails. In this study, the normal fluid flows in the positive x direction, and the superfluid moves in the negative x direction during the counterflow. If quantized vortex rings are generated, they tend to propagate in the negative x direction. Consequently, long-tail PDFs are produced in the propagation direction of the quantized vortex rings, as shown in Fig. <ref> (a). Figure <ref> shows the normal fluid velocity fluctuations in the 2W and S2W models as a function of the mean normal fluid velocity in the thermal counterflow. At a low resolution, the 2W model reproduces the anisotropic fluctuations, whereas the S2W model produces fewer anisotropic fluctuations. At high resolution, the 2W model yields almost the same fluctuations as the low-resolution model, whereas the S2W model generates anisotropic fluctuations. The S2W model requires higher resolution and yields higher fluctuations than the 2W model. However, the S2W model predicts the normal fluid fluctuations better than the 2W model. § CONCLUSIONS We investigated the performance of two different mutual friction models, i.e., the 2W and S2W models, on quantized vortices and normal fluid using two-way coupled simulations of superfluid ^4He. In the quantized vortex ring propagation and reconnection, the normal fluid vortex tube induced by mutual friction is produced at slightly different locations around the quantized vortex in each model. The normal fluid velocity fluctuations in the S2W model are stronger than those in the 2W model, whereas the probability density functions produced by the two models show negligible differences. The S2W model is better suited for describing the normal fluid physics on a microscopic scale near a quantized vortex, such as during quantized vortex ring propagation and reconnection. For complex flows, such as a thermal counterflow, the 2W model can represent a low-resolution flow while maintaining anisotropic fluctuations. However, the S2W model requires higher resolution and yields higher fluctuations than the 2W model. The two-way coupled simulation with each model produces PDFs with - 3 power-law tails for the normal fluid velocity fluctuations. H. K. acknowledges the support from JSPS KAKENHI (Grant Number JP22H01403). S. Y. acknowledges the support from JSPS KAKENHI (Grant Number JP23K13063). M. T. was supported by JSPS KAKENHI (Grant Numbers JP22H05139 and JP23K03305). * § SUMMARY OF THE 2W AND S2W MODELS We summarize and compare the parameters, forces, and equations of motion of the 2W and S2W models in Table <ref>. 14 Tisza L. Tisza, Transport phenomena in helium II, Nature 141, 913 (1938). Landau L. D. Landau, The theory of superfluidity of helium II, J. Phys. USSR 5, 71 (1941), reprinted in I. M. Khalatnikov, An Introduction to the Theory of Superfluidity (Perseus Publishing, Cambridge, 2000). TC-exp C. J. Gorter and J. H. Mellink, On the irreversible processes in liquid helium II, Physica, 15, 285 (1949). TC-exp-review J. T. Tough, Superfluid turbulence, in Prog. in Low Temp. Phys., edited by D. F. Brewer (North-Holland, Amsterdam, 1982), Vol. 8, Chap. 3. Feynman R. P. 
Feynman, Application of quantum mechanics to liquid helium, in Progress in Low-Temperature Physics, edited by C. J. Gorter (North-Holland, Amsterdam, 1957), Vol. I, p. 17. MF1 W. F. Vinen, Mutual friction in a heat current in liquid helium II. I. Experiments on steady heat currents, Proc. R. Soc. Lond. A 240, 114 (1957). MF2 W. F. Vinen, Mutual friction in a heat current in liquid helium II. II. Experiments on transient effects, Proc. R. Soc. Lond. A 240, 128 (1957). MF3 W. F. Vinen, Mutual friction in a heat current in liquid helium II. III. Theory of the mutual friction, Proc. R. Soc. Lond. A 242, 493 (1957). MF4 W. F. Vinen, Mutual friction in a heat current in liquid helium II. IV. Critical heat currents in wide channels, Proc. R. Soc. Lond. A 243, 400 (1958). Hall-Vinen2 H. E. Hall and W. F. Vinen, The rotation of liquid helium II II. The theory of mutual friction in uniformly rotating helium II, Proc. R. Soc. Lond. A 238, 215 (1956). Schwarz-VFM-MF K. W. Schwarz, Turbulence in superfluid helium: Steady homogeneous counterflow, Phys. Rev. B 18, 245 (1978). Schwarz-VFM-wall K. W. Schwarz, Three-dimensional vortex dynamics in superfluid ^4He: Line-line and line-boundary interactions, Phys. Rev. B 31, 5782 (1985). Hall-Vinen1 H. E. Hall and W. F. Vinen, The rotation of liquid helium II I. Experiments on the propagation of second sound in uniformly rotating helium II, Proc. R. Soc. Lond. A 238, 204 (1956). rho-mu-B C. F. Barenghi, R. J. Donnelly, and W. F. Vinen, Friction on quantized vortices in helium II. A review, J. Low Temp. Phys. 52, 189 (1983). Schwarz-VFM-QT K. W. Schwarz, Three-dimensional vortex dynamics in superfluid: Homogeneous superfluid turbulence, Phys. Rev. B 38, 2398 (1988). Full-BS H. Adachi, S. Fujiyama, and M. Tsubota, Steady-state counterflow quantum turbulence: Simulation of vortex filaments using the full Biot-Savart law, Physical Review B, 81 104511 (2010). 1wayQT-Baggaley A. W. Baggaley, L. K. Sherwin, C. F. Barenghi, and Y. A. Sergeev, Thermally and mechanically driven quantum turbulence in helium II, Phys. Rev. B 86, 104501 (2012). 1wayQT-Lvov L. Kondaurova, V. L'vov, A. Pomyalov, and I. Procaccia, Structure of a quantum vortex tangle in ^4He counterflow turbulence, Phys. Rev. B 89, 014502 (2014). 1wayQT-Yui S. Yui and M. Tsubota, Counterflow quantum turbulence of He-II in a square channel: Numerical analysis with nonuniform flows of the normal fluid, Phys. Rev. B 91, 184504 (2015). 2way-channel D. Khomenko, P. Mishra, and A. Pomyalov, Coupled dynamics for superfluid ^4He in a channel, J. Low Temp. Phys. 187, 405 (2017). 2way-duct S. Yui, M. Tsubota, and H. Kobayashi, Three-dimensional coupled dynamics of the two-fluid model in superfluid: Deformed velocity profile of normal fluid in thermal counterflow, Phys. Rev. Lett. 120, 155301 (2018). Yui-vf S. Yui, H. Kobayashi, M. Tsubota, and W. Guo, Fully coupled two-fluid dynamics in superfluid ^4He: Anomalous anisotropic velocity fluctuations in counterflow, Phys. Rev. Lett. 124, 155301 (2020). Science-ring D. Kivotides, C. F. Barenghi, and D. C. Samuels, Triple vortex ring structure in superfluid helium II, Science, 290, 777 (2000). 2way-Kivotides-energy D. Kivotides, Relaxation of superfluid vortex bundles via energy transfer to the normal fluid, Phys. Rev. B 76, 054503 (2007). 2way-Kivotides-spectrum D. Kivotides, Spreading of superfluid vorticity clouds in normal fluid turbulence, J. Fluid Mech. 668, 58 (2011). Idoowu-self O. C. Idowu, D. Kivotides, C. F. Barenghi, and D. C. 
Samuels, Equation for self-consistent superfluid vortex line dynamics, J. Low Temp. Phys. 120, 269 (2000). Idoowu-self-straightline O. C. Idowu, A. Willis, C. F. Barenghi, and D. C. Samuels, Local normal fluid helium II flow due to mutual friction interaction with the superfluid, Phys. Rev. B 62, 3409 (2000). Kivotides-recent-self D. Kivotides, Superfluid helium-4 hydrodynamics with discrete topological defects, Phys. Rev. Fluids 3, 104701 (2018). Galantucci L. Galantucci, A. W. Baggaley, C. F. Barenghi, and G. Krstulovic, A new self-consistent approach of quantum turbulence in superfluid helium, Eur. Phys. J. Plus 135, 547 (2020). NC-ring Y. Tang, W. Guo, H. Kobayashi, S. Yui, M. Tsubota, and T. Kanai, Imaging quantized vortex rings in superfluid helium to evaluate quantum dissipation, Nat. Commun. 14, 2941 (2023). m3tail M. S. Paoletti, M. E. Fisher, K. R. Sreenivasan, and D. P. Lathrop, Velocity statistics distinguish quantum turbulence from classical turbulence, Phys. Rev. Lett 101, 154501 (2008). m3tail-Adachi H. Adachi and M. Tsubota, Numerical study of velocity statistics in steady counterflow quantum turbulence, Physical Review B, 83 132503 (2011). m3tail-GP A. C. White, C. F. Barenghi, N. P. Proukakis, A. J. Youd, and D. H. Wacks, Nonclassical velocity statistics in a turbulent atomic Bose-Einstein condensate, Physical Review Letters, 104 075301 (2010). drag-vortex-line S. Kaplun, Low Reynolds number flow past a circular cylinder, J. Math. Mech. 6, 595 (1957). Iordanskii-1 G. E. Volovik, Three nondissipative forces on a moving vortex line in superfluids and superconductors, JETP Lett. 62, 1 (1995). Iordanskii-2 L. Thompson and P. C. E. Stamp, Quantum dynamics of a Bose superfluid vortex, Phys. Rev. Lett. 108, 184501 (2012). removal-filament M. Tsubota, T. Araki, and S. K. Nemirovskii, Dynamics of vortex tangle without mutual friction in superfluid ^4He, Phys. Rev. B 62, 11751 (2000). MAC F. H. Harlow and J. E. Welch, Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface, Phys. Fluids 8, 2182 (1965). bundle-ring L. Galantucci, G. Krstulovic, and C. F. Barenghi, Friction-enhanced lifetime of bundled quantum vortices, Phys. Rev. Fluids 8, 014702 (2023). Q-value J. C. R. Hunt, A. A. Wray, and P. Moin, Eddies, streams, and convergence zones in turbulent flows, in Proceedings of the Summer Program 1988, Center for Turbulence Research, Stanford Univ. 88, 193 (1988). gamma1_1 J. T. Tough, Superfluid turbulence, in Prog. in Low Temp. Phys., edited by D. F. Brewer (North-Holland, Amsterdam, 1982), Vol. 8, Chap. 3. gamma1_2 R. K. Childers and J. T. Tough, Phys. Rev. B 13, 1040 (1976). exp-Gauss A. Noullez, G. Wallace, W. Lempert, R. B. Miles, and U. Frisch, Transverse velocity increments in turbulent flow using the RELIEF technique, J. Fluid Mech. 339, 287 (1997). DNS-Gauss A. Vincent and M. Meneguzzi, The spatial structure and statistical properties of homogeneous turbulence, J. Fluid Mech. 225, 1 (1991). DNS-Gauss2 T. Gotoh, D. Fukayama, and T. Nakano, Velocity field statistics in homogeneous steady turbulence obtained using a high-resolution direct numerical simulation, Phys. Fluids 14, 1065 (2002). avf B. Mastracci, S. Bao, W. Guo, and W. F. Vinen, Particle tracking velocimetry applied to thermal counterflow in superfluid ^4He: Motion of the normal fluid at small heat fluxes, Phys. Rev. Fluids 4, 083305 (2019).
http://arxiv.org/abs/2407.13356v1
20240718095848
On accelerated iterative schemes for anisotropic radiative transfer using residual minimization
[ "Riccardo Bardin", "Matthias Schlottbom" ]
math.NA
[ "math.NA", "cs.NA", "65F08, 65F10, 65N22, 65N30, 65N45" ]
§ ABSTRACT We consider the iterative solution of anisotropic radiative transfer problems using residual minimization over suitable subspaces. We show convergence of the resulting iteration using Hilbert space norms, which allows us to obtain algorithms that are robust with respect to finite dimensional realizations via Galerkin projections. We investigate in particular the behavior of the iterative scheme for discontinuous Galerkin discretizations in the angular variable in combination with subspaces that are derived from related diffusion problems. The performance of the resulting schemes is investigated in numerical examples for highly anisotropic scattering problems with heterogeneous parameters. anisotropic radiative transfer, iterative solution, nonlinear preconditioning, convergence 65F08, 65F10, 65N22, 65N30, 65N45 § INTRODUCTION The radiative transfer equation serves as a fundamental tool in predicting the interaction of electromagnetic radiation with matter, modelling scattering, absorption and emission. As such, it has a key role in many scientific and societal applications, including medical imaging and tumor treatment <cit.>, energy efficient generation of white light <cit.>, climate sciences <cit.>, geosciences <cit.>, and astrophysics <cit.>. The stationary monochromatic radiative transfer equation is an integro-differential equation of the form ß·∇ u(x,ß) + σ_t(x) u(x,ß) = σ_s(x)∫_S^d-1θ(ß·ß') u(x,ß') dß' + q(x,ß) for (x,ß)∈ D:=R× S^d-1, where the specific intensity u = u(x,ß) depends on the spatial coordinate x∈ R⊂ℝ^d (d=3 for most practical applications) and on the direction ß∈ S^d-1, with S^d-1 denoting the unit sphere in ℝ^d. The gradient appearing in <ref> is taken with respect to x only. The physical properties of the medium covered by R enter (<ref>) through the total attenuation (or transport) coefficient σ_t(x) := σ_a(x) + σ_s(x), which accounts for the absorption and scattering rates, respectively, and through the scattering kernel θ(ß·ß'), which describes the probability of scattering from direction ß' into direction ß. Internal sources of radiation are modeled by the function q(x,ß). We complement <ref> by non-homogeneous inflow boundary conditions u(x,ß) = q_∂(x,ß) for (x,ß) ∈∂D_- := { (x,ß)∈∂R× S^d-1: n(x)·ß < 0 }, with incoming intensity specified by q_∂. Here n(x) denotes the outward unit normal vector field at a point x∈∂R. We refer to <cit.> for further details on the derivation of the radiative transfer equation. If θ(ß·ß') = 1/|S^d-1|, scattering is called isotropic; otherwise anisotropic. §.§ Approach and contribution A common approach for showing well-posedness of <ref> is to prove convergence of the following iterative scheme: Given z_0, compute the solution z_k+1 to ß·∇ z_k+1 + σ_t z_k+1 = σ_s ∫_S^d-1θ(ß·ß') z_k dß' + q in D, with z_k+1 = q_∂ on ∂D_- for k≥ 0 <cit.>. Under the condition that ρ := sup_x σ_s(x)/σ_t(x) < 1 one obtains linear convergence of z_k towards u with rate ρ <cit.>; for more general conditions, see also <cit.>. Since in many applications mentioned above ρ≈ 1, the convergence of z_k to u is prohibitively slow.
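This contraction behavior is easy to reproduce numerically. The sketch below is our own illustration and not part of the original analysis: a diagonal matrix and an angular averaging matrix stand in for the discretized attenuation and scattering operators (the transport term is omitted), and the observed error reduction per step matches the rate ρ = sup σ_s/σ_t.

```python
# Schematic illustration of the source iteration in fixed-point form:
#   T z_{k+1} = S z_k + q,   i.e.   z_{k+1} = T^{-1}(S z_k + q),
# which converges linearly with rate rho = sup(sigma_s / sigma_t).
# The matrices below are generic stand-ins, not an actual transport discretization.
import numpy as np

rng = np.random.default_rng(0)
n = 50
sigma_t = rng.uniform(1.0, 2.0, n)      # total attenuation (stand-in values)
rho = 0.99                              # sup sigma_s/sigma_t, close to 1
sigma_s = rho * sigma_t                 # scattering close to total attenuation

T = np.diag(sigma_t)                    # attenuation only; transport omitted here
P = np.full((n, n), 1.0 / n)            # averaging kernel (row sums equal 1)
S = np.diag(sigma_s) @ P                # toy scattering operator

q = rng.uniform(0.5, 1.0, n)
z_exact = np.linalg.solve(T - S, q)

z = np.zeros(n)
errs = []
for k in range(200):
    z = np.linalg.solve(T, S @ z + q)   # one source-iteration step
    errs.append(np.linalg.norm(z - z_exact))

rates = [errs[k + 1] / errs[k] for k in range(len(errs) - 1) if errs[k] > 0]
print("observed contraction rate ~", np.mean(rates[5:50]), "vs rho =", rho)
```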
From a numerical point of view, <ref> serves as a starting point for constructive iterative solvers for discretizations of <ref>. In this paper we propose to accelerate the convergence of <ref> through residual minimization over suitable subspaces. Specifically, in analogy to <ref>, given u_k, we compute in a first step the solution u_k+1/2 to ß·∇ u_k+1/2 + σ_t u_k+1/2 = σ_s ∫_S^d-1θ(ß·ß^)u_k dß^ + q, in D, with u_k+1/2=q_$̣ onḌ_-. To proceed, let us introduce the residual r̃_k := r̃(u_k) := q - (ß·∇ u_k + σ_tu_k - σ_s∫_S^n-1θ(ß·ß^)u_k dß^), and the preconditioned residualr_k:=r(u_k)that is defined as the solution to ß·∇ r_k + σ_t r_k = r̃_k in D, r_k=0 on Ḍ_-. Using the weightedL^2-normv √(σ_t) v _L^2(D)we will show the following monotonicity result for the preconditioned residual. For k≥ 0 let u_k, u_k+1/2 be related by <ref> and denote r_k=r(u_k) and r_k+1/2=r(u_k+1/2) the respective preconditioned residuals defined in <ref>. Then it holds that r_k+1/2≤ρr_k. In view of the monotonicity of the residuals, we look for a correction to the intermediate iterateu_k+1/2by residual minimization, i.e., u_k+1/2^c v ∈ W_Nargminr(u_k+1/2+v), whereW_Nis a suitable finite-dimensional linear space of dimensionN, and set u_k+1 u_k+1/2 + u_k+1/2^c. Using the minimization property and the equivalence between the residual and the error, we will show our main convergence statement. For any initial guess u_0∈ L^2(D), the sequence {u_k} defined via <ref> and <ref> converges linearly to the solution u of <ref> with rate ρ, i.e., u - u_k≤^k/1-r_0, k≥ 0. The outlined approach serves as a blueprint for constructing discrete schemes. To do so, we will employ suitable (Galerkin) discretization schemes such that the monotonicity properties of the residuals are automatically guaranteed. In view of the inversion of the transport term in <ref>, we will particularly focus on discontinuous Galerkin discretizations inßconsidered in <cit.>, which allow for a straight-forward parallelization. Such discretizations inherit similar convergence properties as the iteration described above. In general, as shown numerically below, the bound in <ref> is too pessimistic, because it does not show the dependence onW_N. However, we highlight the flexibility in choosingW_N. For example,W_Nmay contain previous iterates, which allows to relate <ref>, <ref> to preconditioned GMRES methods or Anderson acceleration techniques, cf. <cit.>. Another example is to constructW_Nby solving low-dimensional diffusion problems, see <Ref> for details and <Ref> for the improved convergence behavior, where we particularly consider high-order diffusion problems for highly anisotropic scattering. This general framework can be related to existing work as discussed next. §.§ Related works The scheme <ref> has been combined with several discretization methods to obtain a practical solver for radiative transfer problems. These numerical schemes are typically local inß, such as the discrete ordinates method, also known asS_N-method. We refer to <cit.> for an overview of classical approaches and well-established references, and to <cit.> for more recent strategies. The main drawback of <ref> is the well-known slow convergence for≈1. Several approaches for the acceleration of <ref> have been proposed in the literature, cf. <cit.> for a discussion. As observed in <cit.>, <ref> is a preconditioned Richardson iteration for solving <ref>. One approach to obtain faster convergence is to employ more effective preconditioners. 
Among the most popular ones, we mention preconditioners that are based on solving (non-)linear diffusion problems, which is well-motivated by asymptotic analysis, see again, e.g., <cit.> for classical approaches, or <cit.> for more recent developments. The success of diffusion-based acceleration schemes hinges on so-called consistent discretization of <ref> and the corresponding diffusion problem <cit.>. In <cit.> consistent correction equations are obtained for two-dimensional problems with anisotropic scattering by using a modified interior penalty discontinuous Galerkin approximation for the diffusion problem. The corresponding acceleration scheme is, however, less effective for highly heterogeneous optical parameters. A discrete analysis of similar methods for high-order discontinuous Galerkin discretizations can be found in <cit.>. We refer also to <cit.> for the development of preconditioners for heterogeneous media. Instead of constructing special discretizations for the diffusion problems for each discretization scheme of <ref>, consistent discretizations can automatically be obtained by using subspace corrections of suitable Galerkin approximations of <ref><cit.>. In <cit.> isotropic scattering problems have been solved for a discrete ordinates-discontinuous Galerkin discretization using nonlinear diffusion problems and Anderson acceleration. The approach presented in <cit.> reduces the full transport problem to a nonlinear diffusion equation for the angular average only, which is, however, not possible for anisotropic scattering. The approach taken in <cit.> for anisotropic scattering problems employs a positive definite, self-adjoint second-order form of <ref>, which facilitates the convergence analysis, but it requires another iterative method to actually apply the resulting matrices. The approach taken here avoids such extra inner iterations at the expense of dealing with indefinite problems. The flexibility of our approach in constructing the spacesW_Nfor the residual minimization allows us to employ similar subspaces as in <cit.>, which have been shown to converge robustly for arbitrary meshes and for forward peaked scattering. To treat forward peaked scattering, for a one-dimensional radiative transfer equation, <cit.> applies nonlinear diffusion correction and Anderson acceleration, which minimizes the residual over a certain subspace, and is therefore conceptually close to our approach outlined above. Different to <cit.>, where a combination of theS_N-method with a finite difference method for the discretization ofhas been used and the corresponding minimizations are done in the Euclidean norm, our framework allows for general discretizations for multi-dimensional problems, such as arbitrary order (discontinuous) Galerkin schemes, to discretizeßand. Moreover, our framework allows to employ higher-order correction equations, similar to <cit.>, whose effectiveness becomes apparent in our numerical examples for highly forward peaked scattering. In addition, our Hilbert space approach, which is provably convergent, leads to algorithms that behave robustly under mesh refinements. A second approach for accelerating <ref> is to replace the preconditioned Richardson iterations by other Krylov space methods. For instance, <cit.> employs a GMRES method, which is preconditioned by solving a diffusion problem, to solve three-dimensional problems with isotropic scattering. 
To treat highly peaked forward scattering, <cit.> combines GMRES with an angular (in theßvariable) multigrid method to accelerate convergence; see also <cit.>, and <cit.> for a comparison of multilevel approaches. By appropriately choosingW_Nin <ref> the approach outlined in <Ref> can be related to a preconditioned GMRES method. Since the domainDhas dimension2d-1, building up a full Krylov space during GMRES iterations becomes prohibitive in terms of memory, and GMRES has to be restarted. Our numerical results, cf. also <cit.>, show that high-order diffusion corrections can lead to effective schemes with small memory requirements for highly forward peaked scattering. §.§ Outline The remainder of the manuscript is organized as follows. In <Ref> we introduce notation and basic assumptions on the optical parameters, and we present a weak formulation of <ref>, <ref>, which allows for a rigorous proof of <Ref> in <Ref>. In <Ref> we turn to the analysis of the minimization problem <ref> and prove <Ref>. In <Ref> we discuss a discretization strategy that implements the approach described in <Ref> such that our main convergence results remain true for the discrete systems. We discuss several choices ofW_Nthere. The practical performance of the proposed methodology for different choices of spacesW_Nin <ref> is investigated in <Ref>. § NOTATION AND PRELIMINARIES In the following we recall the main functional analytic framework and state the variational formulation of <ref>-<ref>, with a well-posedness result. §.§ Function spaces We denote withV_0:=L^2(D)the usual Hilbert space of square integrable functions on the domainD:=R×S^d-1, with inner product··and induced norm. In order to incorporate boundary conditions, we assume thatRhas a Lipschitz boundary and we denote withL^2(Ḍ_-; |ß·|)the space of weighted square-integrable functions on the inflow boundary, and with··_Ḍ_-the corresponding inner product. For smooth functionsv,w∈C^∞(D̅)we define vw_V_1 := vw + ß·∇ vß·∇ w+ |ß·|vw_Ḍ_-, and we denote withV_1the completion ofC^∞(D̅)with respect to the norm associated with <ref>: V_1 = { v∈ V_0: ß·∇ v ∈ V_0, v|_Ḍ_-∈ L^2(Ḍ_-;|ß·|)}. For functionsv,w∈V_1, we recall the following integration by parts formula, see, e.g., <cit.>, ß·∇ vw = - vß·∇ w + ß· vw_Ḍ_-. §.§ Optical parameters and data Standard assumptions for the source data areq ∈V_0andq_∈̣L^2(Ḍ_-; |ß·|). The optical parametersσ_sandσ_tare supposed to be positive and essentially bounded functions of. The mediumRis assumed to be absorbing, i.e., there existsc_0>0such thatσ_a ≥c_0a.e. inR. This hypothesis ensures that the ratio between the scattering rate and the total attenuation rate is strictly less than1, i.e.,:=σ_s/σ_t < 1. We assume that the phase functionθ:[-1,1]→is non-negative and normalized such that∫_S^d-1θ(ß·ß^) dß^=1for a.e.ß∈S^d-1. To ease the notation we introduce the operators,Θ:V_0→V_0such that ( v)(,ß) := σ_s()(Θ v)(,ß), (Θ v)(,ß):= ∫_S^d-1θ(ß·ß^) v(,ß^) dß^. We recall that,Θare self-adjoint bounded linear operators, with operator norms bounded byσ_s_∞and1, respectively, i.e.,‖Θv ‖_V_0 ≤‖v ‖_V_0, see, e.g., <cit.>. §.§ Even-odd splitting Forv∈V_0, we define its even and odd parts, identified by the superscripts"+"and"-", respectively, by v^±(,ß) := 12( v(, ß) ± v(, -ß) ). Accordingly, for any spaceVwe denote byV^± := { v^±:v∈V}the subspaces of even and odd functions ofV. In particular, anyV∈{V_0, V_1}has the orthogonal (with respect to the inner product ofV) decompositionV = V^+ ⊕V^-. 
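As a small illustration of this splitting (our own sketch; the direction sampling and the test function are arbitrary choices), the even and odd parts can be computed directly from point values at antipodal direction pairs, and they are orthogonal with respect to summation over such a symmetric set of directions:

```python
# Even/odd splitting of a function of the direction variable, assuming the
# directions come in antipodal pairs (s_j, -s_j); names and data are ours.
import numpy as np

rng = np.random.default_rng(1)
m = 64
s = rng.normal(size=(m, 3))
s /= np.linalg.norm(s, axis=1, keepdims=True)   # m random unit directions
s = np.vstack([s, -s])                          # add antipodes: 2m directions

def split_even_odd(v, s):
    # v[j] = v(s[j]); locate -s[j] and form v_pm(s) = (v(s) +/- v(-s)) / 2
    idx = np.array([np.argmin(np.linalg.norm(s + s[j], axis=1)) for j in range(len(s))])
    v_flip = v[idx]
    return 0.5 * (v + v_flip), 0.5 * (v - v_flip)

v = np.cos(3 * s[:, 0]) + s[:, 1] ** 3          # arbitrary test function of direction
v_even, v_odd = split_even_odd(v, s)

assert np.allclose(v_even + v_odd, v)           # the decomposition reproduces v
assert abs(np.dot(v_even, v_odd)) < 1e-10       # orthogonality on the symmetric point set
```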
Following <cit.>, a suitable space for the analysis of the radiative transfer equation is the space of mixed regularity W := V_1^+ ⊕ V_0^-, where only the even components have weak directional derivatives inV_0. §.§ Variational formulation Assuming thatuis a smooth solution to <ref>-<ref>, we use standard procedures to derive a weak formulation. Multiplying <ref> by a smooth test functionv, splitting the functions in their even and odd components, and using the integration by parts formula <ref> to handle the term(ß·∇u^-,v^+), we obtain the following variational principle, see <cit.> for details: findu∈Wsuch that for allv∈Wt(u,v) = s(u,v) + ℓ(v), with bilinear formst,s:W×W →, and linear formℓ:W→defined by t(u,v) := ß·∇ u^+v^- - u^-ß·∇ v^+ + σ_t uv + (|ß·| u^+,v^+)_Ḍ, s(u,v) := uv, ℓ(v) := qv - 2 (ß· q_,̣ v^+)_Ḍ_-. As shown in <cit.> the assumptions imposed in <Ref> imply that there exists a unique solution of <ref> satisfying u_W ≤ C(q + q__L^2(_-;|ß·|)), with constantC>0depending only onc_0andσ_t. Let us also recall from <cit.> that the odd part of the weak solutionu∈Wenjoysu^- ∈V_1, and thatusatisfies <ref> almost everywhere, and <ref> in the sense of traces. § CONTRACTION PROPERTIES OF THE SOURCE ITERATION Equipped with the notation from the previous section, we can write <ref> as follows: Givenu_k∈W, computeu_k+1/2∈Wsuch that t(u_k+1/2, v) = s(u_k, v) + ℓ(v), ∀ v∈ W. The following result is well-known, and we provide a proof for later reference. Let u_k, u_k+1/2∈ W be related via <ref>. Then it holds that u-u_k+1/2≤u-u_k, with = σ_s/σ_t_∞. Abbreviating e_k+1/2:=u-u_k+1/2 and e_k:=u-u_k, <ref> and <ref> imply that t(e_k+1/2,v) = s(e_k, v), ∀ v∈ W. Therefore, setting v=e_k+1/2 in <ref>, using the Cauchy-Schwarz inequality and recalling the boundedness property of the scattering operator, we obtain the estimates e_k+1/2^2 = (σ_t e_k+1/2,e_k+1/2) ≤ t(e_k+1/2,e_k+1/2) = s(e_k, e_k+1/2) = σ_s/σ_tΘ(√(σ_t)e_k)√(σ_t)e_k+1/2 ≤Θ(√(σ_t) e_k+1/2)√(σ_t) e_k≤e_k+1/2e_k, which concludes the proof. Along the lines of <ref> and <ref>, we define the residual operator:W→W^∗by (w)v := ℓ(v) - (t(w,v) - s(w,v)), ∀ v∈ W. Here,W^*denotes the dual space ofW, and⟨·,·⟩the corresponding duality pairing. The dual norm is defined byℓ_W^*:=sup_v_W=1ℓ(v). The preconditioned residual operator:W→Wis defined by solving the following transport problem without scattering, t((w), v) = (w)v, ∀ v ∈ W. Using the arguments used to analyze <ref> in <Ref>, one shows that the operatoris well-defined. The following result ensures that the corresponding preconditioned residuals decay monotonically, which verifies <Ref>. Let u_k, u_k+1/2∈ W be related via <ref>. Then it holds that (u_k+1/2)≤(u_k). Denoting r_k=(u_k) and using <ref>, we observe that t(r_k, v) = (u_k)v = ℓ(v) - (t(u_k, v) - s(u_k, v)) = t(e_k,v) - s(e_k, v) = t(e_k,v) - t(e_k+1/2, v) = t(u_k+1/2-u_k, v), for all v∈ W, i.e., r_k=u_k+1/2-u_k. Using a similar argument and <ref>, we further obtain that t(r_k+1/2, v) = (u_k+1/2)v = ℓ(v) - (t(u_k+1/2, v) - s(u_k+1/2, v) ) = t(e_k+1/2,v) - s(e_k+1/2, v) = s(e_k,v) - s(e_k+1/2, v) = s(u_k+1/2 - u_k, v) = s(r_k, v), for all v∈ W. In view of the proof of <Ref>, the proof is complete. The following result allows us to relate the error to the residual quantitatively. Let u∈ W be the solution to <ref>. Then the following estimate holds u-w≤1/1-(w) ∀ w∈ W. Let us introduce the linear operator _0:W→ W defined by _0 w:=(w)-L, where L∈ W is defined by the relation t(L,v)=ℓ(v), for v∈ W. 
We observe that for v∈ W the definition of implies that _0 v + v^2 ≤ t( _0 v + v,_0v + v) = s(v,_0 v +v)≤v_0 v +v. The triangle inequality thus implies that v≤v+_0v + _0v≤v+ _0 v, i.e., v≤_0 v/(1-) ∀ v∈ W. The assertion follows from <ref> with v=u-w and the observation that _0(u-w)=-(w). § RESIDUAL MINIMIZATION In view of <Ref> the difference between the weak solutionuof <ref> and any elementw∈Wis bounded from above by the norm of the residual(w)associated withw. Hence, if we can constructwsuch that(w)=0, thenu=w. <Ref> shows that the half-step <ref> reduces the norm of the residual by the factor. These observations motivate to modifyu_k+1/2such that the corresponding residual becomes smaller. In order to obtain a feasible minimization problem, letW_N⊂Wbe a subspace of finite dimensionN. We then compute the modificationu^c_k+1/2∈W_Nsuch that u^c_k+1/2 := w ∈ W_Nargmin(u_k+1/2 + w)^2. The new iterate of the scheme is then defined as u_k+1 u_k+1/2+ u^c_k+1/2, and the procedure can restart. The minimization problem in <ref> has a unique solution u_k+1/2^c∈ W_N. To obtain a necessary condition for a minimizer, observe that the directional derivative δ/δ w(u_k+1/2+w)[v] of w↦(u_k+1/2+w) in direction v is given by δ/δ w(u_k+1/2+w)[v] =_0v, where _0 has been defined in the proof of <Ref>. A necessary condition for a minimizer w^*∈ W_N is therefore (_0 w^*,_0 v)_σ_t = -((u_k+1/2),_0 v)_σ_t ∀ v∈ W_N. In view of <ref>, the minimizer u_k+1/2^c=w^* is unique, and hence exists, because W_N is finite dimensional. Proof of <Ref>. Using <Ref>, we obtain that (1-)e_k+1≤(u_k+1)≤ρ^k+1(u_0), which concludes the proof. § NUMERICAL REALIZATION The convergent iteration in infinite-dimensional Hilbert spaces described in the previous sections serves as a blueprint for constructing numerical methods. The variational character of the scheme allows to translate the infinite-dimensional iteration directly to a corresponding convergent iteration in finite-dimensional approximation spacesofW. §.§ Galerkin approximation To be specific, we employ fora construction as in <cit.>. Letandbe shape regular, quasi-uniform and conforming triangulations ofRandS, respectively. Here,h>0denotes a mesh-size parameter. In addition we require that-K^S∈for anyK^S∈in order to be able to properly handle even and odd functions. We then denote by^±the corresponding finite element spaces of even piecewise constant (+) and odd piecewise linear (–) functions associated with. Similarly, we denote by^±the finite element spaces consisting of piecewise constant (–) and continuous piecewise linear (+) functions associated with. Please note that we use the symbol±in^±for notational convenience and not to indicate whether a function ofis even or odd. Our considered approximation space is then defined as=^+⊗^+ + ^-⊗^-. The Galerkin approximation of <ref> reads: Findu_h∈such that t(u_h,v_h) = s(u_h,v_h) + ℓ(v_h) ∀ v_h∈. As shown in <cit.>, <ref> has a unique solutionu_h∈that is uniformly (inh) bounded byℓ_W^*. Moreover, there is a constantC>0independent of the discretization parameters such that u-u_h_W ≤ C inf_v_h∈u-v_h_W, i.e.,u_his a quasi-best approximation touin. §.§ Iterative scheme The discretization of <ref>, <ref> becomes: Givenu_h,k∈, computeu_h,k+1/2∈such that t(u_h,k+1/2, v_h) = s(u_h,k, v_h) + ℓ(v_h), ∀ v_h∈. 
The discretization of the preconditioned residual operator is:→defined via t(_h(w_h), v_h) =ℓ(v_h)-(t(w_h,v_h)-s(w_h,v_h)), ∀ v_h ∈, and the corresponding corrections are computed via the minimization problem u^c_h,k+1/2 := w_h ∈ W_h,Nargmin(u_h,k+1/2 + w_h)^2, where⊂. The new iterate of the discrete scheme is then defined accordingly by u_h,k+1 u_h,k+1/2+ u^c_h,k+1/2. Repeating the arguments of <Ref> and <Ref>, we obtain the following convergence statement. For any u_h,0∈, the sequence {u_h,k} defined by <ref>, <ref> converges linearly to the solution u_h of <ref>, i.e., u_h-u_h,k≤^k/1-(u_h,0). Moreover, the residuals converge monotonically, i.e., (u_h,k+1)≤(u_h,k). §.§ Formulation in terms of matrices Choosing basis functions for^±and^±allows to rewrite the iterationu_h,k↦u_h,k+1in corresponding coordinates. Denote{φ_i}_i=1^and{χ_j}_j=1^the usual basis functions with local support of^+and^-, i.e.,φ_ivanishes in all vertices ofexcept in theith one, whileχ_jvanishes in all elements inexcept in thejth one. Similarly, we denote{μ_k}the basis of^+such thatμ_kvanishes in all elements ofexcept inK^S_kand-K^S_k. Eventually, we denote by{ψ_l}_l=1^the basis of^-such thatψ_lvanishes in all vertices belonging toexcept for the verticesp_land-p_l. We may then write the even and odd parts ofu_has u_h^+ = ∑_i=1^∑_k=1^_i,k^+ φ_i μ_k, u_h^- = ∑_j=1^∑_l=1^_j,l^- χ_j ψ_k, and <ref> turns into the linear system = +≪. with matricesandhaving the following block structure, := [ + -Å^T; Å ], :=[ ; ], ≪:=[ ≪^+; ≪^- ]. The individual blocks are given as follows: := ⊗, := ⊗, := ⊗, := ⊗, Å := ∑_i=1^d _i⊗_i, := blkdiag(_1,…,_), with matrices ()_i,i' := ∫_R σ_t φ_i φ_i' d, ()_k,k' := ∫_S∫_S θ(ß·ß') μ_k(ß') μ_k'(ß) dß' dß ()_j,j' := ∫_R σ_t χ_j χ_j' d, ()_l,l' := ∫_S∫_S θ(ß·ß') ψ_l(ß') ψ_l'(ß) dß' dß (_n)_i,k := ∫_R φ̣_i/ṛ_nχ_k d, (_i)_l,k := ∫_S ß_i ψ_l μ_k dß (_k)_i,i' :=∫_Ṛφ_iφ_i'ω_k d, ω_k := ∫_S |ß·| (μ_k)^2 dß, ()_k,k' :=∫_S μ_k μ_k' dß, ()_l,l' :=∫_S ψ_l ψ_l' dß. The vectors≪^±are obtained from inserting basis functions into the linear functionalℓ. We mention that all matrices are sparse, exceptand, which can be applied efficiently using hierarchical matrix compression, see <cit.>– for moderate,dense linear algebra is efficient, too. In particular, the matricesandare diagonal and3×3block diagonal, respectively, i.e.,can be inverted efficiently. Using these matrices, <ref> turns into the linear system _k+1/2 = _k + ≪, which can be solved as described in <Ref>. Denote()the coordinate vector of(w). Then()is determined by solving ()=≪-(-). Hence, the operator_0discretizing_0becomes the mapping↦_0:=_0realized by _0 = -(-). The update is computed via the minimization <ref>, which becomes, cf. <ref>, (_0^N)^T _0^N ^* = - (_0^N)^T _k+1/2, where=blkdiag(^+,^-). Here, the matrix_0^Nis obtained as the solution to_0 _0^N = _h,N, where_h,Nis a matrix, whose columns correspond to the coordinates of a basis for. Depending on the conditioning of the matrix_h,N, the system in <ref> might be ill-conditioned. To stabilize the solution process, we compute the minimum-norm solution. The coordinate vector for the correctionu_h,k+1/2^cis then given by_k+1/2^c = _h,N ^*, which gives the following update formula for the coordinates of the new iterate _k+1= _k+1/2 + _k+1/2^c. The residual corresponding tou_h,k+1can be updated according to_k+1=_k+1/2+ _0^N ^*. <Ref> ensures that_kconverges linearly to the solutionof <ref>. Solving for _k+1/2 can be done in two steps. 
First, one may solve the symmetric positive definite system (Å^T (^-)^-1Å + ^+ + ) _k+1/2^+ = _k^+ + ≪^+ + Å^T (^-)^-1( ≪^- + _k^-). The system in <ref> is block diagonal with many sparse blocks of size × and can be solved in parallel, with straightforward parallelization over each element of . Second, one may then retrieve the odd part by solving the system ^- _k+1/2^- = _k^+ - Å_k+1/2^+ +≪^-, which can be accomplished with linear complexity due to the structure of . Assuming that the dimension of W_h,N is N, <ref> is a dense N× N system in general, which can be solved at negligible costs for small N. The assembly of the corresponding matrix, respectively _h,N, requires N applications of _0, which can be carried out as described in <Ref>. Hence, the computational cost for solving <ref> is comparable to N steps of a corresponding discretization of the unmodified iteration <ref>. Thus, the extra cost for the minimization by solving <ref> is justified as long as the iteration <ref>, <ref> converges faster than ^N+1. §.§ Choice of subspaces The previous consideration did not depend on a particular choice of the minimization spaceW_h,N. As discussed in <Ref>, a common choice for the considered radiative transfer problem is to use corrections derived from asymptotic analysis, i.e., related diffusion problems. As it has been observed in <cit.>, the corresponding discretizations of such diffusion problems can be understood as projections on constant functions inß. Such functions, in turn, can be interpreted as eigenvectors of the matrix. Indeed, the spherical harmonics are the eigenfunctions of the integral operator in <ref> and the lowest order spherical harmonic is constant. §.§.§ Constructing WhN using eigenfunctions of Thetae Let_k≠0solve the generalized eigenvalue problem _k^+ = γ_k_k^+, for someγ_k≥0, where we suppose that the eigenvalues are order non-increasingly, i.e.,γ_k≥γ_k+1. We denoteH_h,k^+∈^+the corresponding (even) functions, and define the space H_K^+ := span{ H_h,k^+: 1≤ k≤ K}. We further defineH_h,k,i^-∈^-by the relation (H_h,k,i^- ,ψ_l) =(ß_i H_h,k^+,ψ_l) for all l=1,…,, 1≤ i≤ d. andH_K^- =span{H_h,k,i^-: 1≤k≤K, 1≤i≤d}. Using these definitions, we can define the space Y_h,K := H_K^+⊗^+ + H_K^-⊗^-⊂ W_h. To derive a correction equation, we rewrite <ref> as follows t( u_h -u_h,k+1/2,v_h) = s( u_h -u_h,k+1/2,v_h)+ s(u_h,k+1/2-u_h,k,v_h) ∀ v_h∈. Similar to <cit.>, but see also <cit.>, we may expect to obtain a good approximation to the erroru_h-u_h,k+1/2by solving <ref> on the subspaceY_h,K, for certainK. We hence defineu_h,k+1/2^c∈Y_h,Kas the unique solution of t( u_h,k+1/2^c,v_h) = s( u_h,k+1/2^c,v_h)+ s(u_h,k+1/2-u_h,k,v_h) ∀ v_h∈ Y_h,K. By construction, the spaceY_h,Ksatisfies the compatibility conditionß·∇y_h^+∈Y_h,K^-=H_K^-⊗^-for anyy_h^+∈Y_h,K^+. Therefore, <ref> has a unique solution <cit.>. IfKis moderately small, <ref> can be solved efficiently, see <cit.> for a discussion. We then define W_h,N^c := span{ u_h,k+1/2^c}⊂. The dimension ofW_h,N^cisN=1, and <ref> can be carried out efficiently. Since <ref> is a saddle-point problem, the Galerkin projection <ref> may enlarge the error. This is in contrast to <cit.>, where <ref> was reformulated to a second-order form, which is symmetric and positive definite. In the latter situation, Galerkin projections correspond to best-approximations in the energy norm, and therefore do not enlarge the error. 
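The correction constructed above supplies a candidate basis for the minimization step <ref>. In matrix terms, the minimization itself reduces to a small least-squares problem; the sketch below is our own illustration of the normal equations and the minimum-norm solve described above, with small dense matrices and generic names standing in for the actual block operators.

```python
# Minimal dense sketch of the correction step: given stand-in matrices T
# (transport part, cheap to invert), S (scattering part), a weight matrix Sig,
# the current iterate x, its preconditioned residual r, and a basis matrix W
# whose columns span the correction space, minimize the weighted residual norm.
import numpy as np

def correction_step(T, S, Sig, x, r, W):
    R0 = -np.linalg.solve(T, (T - S) @ W)         # apply F0 to each basis vector
    A = R0.T @ Sig @ R0                           # normal equations (possibly ill-conditioned)
    b = -R0.T @ Sig @ r
    alpha = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm solution
    return x + W @ alpha, r + R0 @ alpha          # updated iterate and residual

# toy usage with random data
rng = np.random.default_rng(2)
n, N = 40, 3
T = np.diag(rng.uniform(1.0, 2.0, n))
S = 0.95 * T @ np.full((n, n), 1.0 / n)
Sig = T.copy()                                    # weight matrix (here the attenuation diagonal)
q = rng.uniform(size=n)
x = np.zeros(n)
r = np.linalg.solve(T, q - (T - S) @ x)           # preconditioned residual of x
W = rng.normal(size=(n, N))
x_new, r_new = correction_step(T, S, Sig, x, r, W)
print(np.linalg.norm(Sig**0.5 @ r_new) <= np.linalg.norm(Sig**0.5 @ r))   # True
```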
§.§.§ Enriched space By construction, the correctionu_h,k+1/2^ccomputed via solving the projected problem <ref> does in general not satisfy <ref>. We will also consider enriched versions ofW_h,N^cas follows. First, we may use the corrected even iterate to find[1]u_h,k+1/2^c,-∈^-by solving ( _k+1/2^- + [1]_k+1/2^c,-)= _k+1/2^- + ≪^- - Å (_k+1/2^+ +_k+1/2^c,+). The (block-)diagonal structure ofand, allows to invert= ⊗efficiently. Second, we may compute another correction as follows. Suppose thatu_h,k+1/2^+ +u_h,k+1/2^c,+is close to the even-partu_h^+of the solution to <ref>, then, for consistency reasons, we may expect that the solutionu_h,k+1/2^-+[2]u_h,k+1/2^c,-to the following system is a good approximation tou_h^-: ( _k+1/2^- + [2]_k+1/2^c,-) = ( _k+1/2^-+ [2]_k+1/2^c,-) + ≪^- - Å (_k+1/2^+ +_k+1/2^c,+). Computing[2]_k+1/2^c,-requires the inversion of-, which can be accomplished using a preconditioned conjugate gradient method as done in <cit.>. We then define the enriched space [1]W_h,N^c := span{ u_h,k+1/2^c,+,u_h,k+1/2^c,-,[1]u_h,k+1/2^c,-,[2]u_h,k+1/2^c,-}⊂, which can be employed in the minimization <ref>. The dimension of this space isN=4. §.§.§ Another enriched space: including previous iterates Since the minimization procedure is flexible in defining the correction spaceW_h,N, we may not only rely on minimizing the residual over functions obtained from Galerkin subspace projection. Borrowing ideas from GMRES, given an iterateu_h,k, we will also consider the space [1]W_h,N^c,m:= [1]W_h,N^c + span{u_h,j: k-m≤ j≤ k}, where also the previousmiterates are taken into account to construct the space for minimization. It is clear that ifm≥kall previous iterates are considered. In practice, memory limitations usually require to keepmsmall. § NUMERICAL EXPERIMENTS We will investigate the behavior of the iteration and the influence of the different subspaces for the residual minimization discussed in <Ref> by means of a checkerboard test problem <cit.>. Here, the spatial domain is given byR=(0,7)×(0,7), the inflow boundary condition is given byq_=̣0, and the internal source termqas well as the scattering and absorption parameter,σ_sandσ_a, respectively, are defined in <Ref>. Hence,=σ_s/σ_t_∞= 0.999here. We consider the Henyey-Greenstein scattering phase-function with anisotropy factor0 ≤g < 1, i.e., θ(ß·ß^) := 14π1-g^2[1-2g(ß·ß^) +g^2]^3/2. If not stated otherwise, the domainRis triangulated using100 352elements, i.e.,=50 625and=100 352. Moreover, we will employ1024elements on the half sphere, i.e.,=1024and=3072, which results in360 121 344degrees of freedom. Spherical integration is performed by using a high-order numerical quadrature. For a sketch of a corresponding polyhedral approximation of the sphere see <Ref>. The iterations are stopped as soon as_h(u_k)<10^-6. §.§ Minimization over W1 We investigate the performance of the residual minimization strategy when using the subspaceW_h,N^cdefined in <ref> for different anisotropy parametersg. Furthermore, we investigate the behavior on the parameterKin <ref>. Here, we chooseK=1,6,15such that it corresponds to the number of even spherical harmonics of degree at mostl=0,2,4, respectively. This choice is motivated by the observation that the2l+1spherical harmonics of degreelare eigenfunctions ofΘwith eigenvaluesg^l. The resulting number of iterations are displayed in <Ref>. 
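The eigenvalue property quoted here can be checked directly: expanding the Henyey-Greenstein kernel in Legendre polynomials yields the coefficient g^l in degree l. The following short verification is our own, and the value of g is only illustrative.

```python
# Numerical check that the Henyey-Greenstein kernel has eigenvalues g^l:
# for theta(mu) as given in the text, 2*pi * int_{-1}^{1} theta(mu) P_l(mu) dmu = g**l.
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

g = 0.7
theta = lambda mu: (1.0 / (4 * np.pi)) * (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5

for l in range(5):
    P_l = Legendre.basis(l)
    val, _ = quad(lambda mu: theta(mu) * P_l(mu), -1.0, 1.0)
    print(l, 2 * np.pi * val, g**l)   # the two numbers agree up to quadrature accuracy
```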
As can be seen from the decay of the residuals depicted in <Ref>, the minimization based approach is much faster than the plain source iteration, which converges linearly with rate=0.999. Additionally, we observe a consistent decay of the residual per iteration. Moreover, increasingKyields smaller iteration counts. Sincedim (W_h,N^c) = 1, the subspace correction computed here pays off if the contraction rate of the residuals is better thanρ^2= 0.998, cf. <Ref>. This is the case for all our experiments, as it is shown by the numbers in brackets in <ref>, indicating the maximum contraction rate during the corresponding iterations. Before testing the performance of the approach on the next subspace for minimization, let us discuss <Ref>, where we show the iteration counts for the choiceg=0.7andK=6, and for different angular and spatial grids obtained by successive refinements. We observe that the number of iterations varies only slightly upon mesh refinement, which we expect, because we derived the iteration from its infinite-dimensional counterpart. §.§ Minimization over tilde W1 As a second test case, we study how the residual minimization approach performs over the enriched subspace[1]W_h,N^cdefined in <ref>. As it can be seen from <Ref>, for moderate values of the anisotropy factorgthe improvement in the number of iterations is negligible with respect to minimization onW_h,N^c. However, for highly forward peaked scattering,g=0.99, which cause the iteration to be notably slower than the other cases, the improvement is more visible, even for low order corrections (K=1). <Ref> shows the convergence history of the residuals, and we observe a robust convergence behavior. Sincedim([1]W_h,N^c)=4, minimization over this subspace is useful if the contraction rate of the residuals stays belowρ^5 = 0.995, cf. <Ref>. Once again, our method achieves this requirement, as shown in brackets in <Ref>. §.§ Minimization over WN Exploiting Anderson-type acceleration techniques as described in <Ref> we observe a substantial reduction in the iteration count for all values ofgand already for moderatemand low-order corrections. Indeed, comparing <Ref> and <Ref>, where a history ofm=2iterates is taken into account for residual minimization and thusdim([1]W_h,N^c,2) = 6, we notice that already forK=1the number of iterations is roughly reduced by a factor of3for smallgand a factor of2forgclose to1. The numbers in <Ref>, wherem=4was chosen, are comparable to those in <Ref>. Thus, we prefer the choicem=2, because it requires less memory and fewer residual computations to setup <ref>. <Ref> and <Ref> show the convergence histories for minimizing the residuals. As before the decay is consistent, i.e. the residuals are converging linearly with a rate smaller thanρ, see also <Ref> and <Ref> for an upper bound on this rate. As for the previous cases, since herem=2corresponds toN=6andm=4toN=8, the subspace correction pays off if residual reduction per step is better thanρ^7 = 0.993, andρ^9 = 0.991, respectively, cf. <Ref> again. As shown in brackets in <Ref> and <Ref>, this is always the case for our experiment. § CONCLUSIONS AND DISCUSSION In this paper we have developed a generic and flexible strategy to accelerate the source iteration for the solution of anisotropic radiative transfer problems using residual minimization. We showed convergence of the resulting method for any choice of subspace employed in the residual minimization. 
The flexibility in choosing the subspace was used to exploit higher order diffusion corrections, which were shown to be effective for highly forward peaked scattering. Moreover, the numerical results confirmed that the required iteration counts depend on the discretization only mildly. We mention that our approach can be seen as a two-level scheme. We leave it to future work to extend it and to compare it to angular multilevel schemes. Moreover, the analysis of the precise approximation properties of the considered subspaces is also left to future research. We close by mentioning that the efficient and robust solution of the source problem <ref> is also relevant for solving eigenvalue problems, see, e.g., <cit.>. § ACKNOWLEDGEMENTS R.B. and M.S. acknowledge support by the Dutch Research Council (NWO) via grant OCENW.KLEIN.183.
http://arxiv.org/abs/2407.13725v1
20240718172508
Scalable Optimization for Locally Relevant Geo-Location Privacy
[ "Chenxi Qiu", "Ruiyao Liu", "Primal Pappachan", "Anna Squicciarini", "Xinpeng Xie" ]
cs.CR
[ "cs.CR" ]
University of North Texas Denton Texas USA chenxi.qiu@unt.edu University of North Texas Denton Texas USA ruiyaoliu@my.unt.edu Portland State University Portland Oregon USA ruiyaoliu@my.unt.edu Pennsylvania State University University Park Pennsylvania USA acs20@psu.edu University of North Texas Denton Texas USA xinpengxie@my.unt.edu § ABSTRACT Geo-obfuscation functions as a location privacy protection mechanism (LPPM), enabling mobile users to share obfuscated locations with servers instead of their exact locations. This technique protects users' location privacy during server-side data breaches since the obfuscation process is irreversible. To minimize the utility loss caused by data obfuscation, linear programming (LP) is widely used. However, LP can face a polynomial explosion in decision variables, making it impractical for large-scale geo-obfuscation applications. In this paper, we propose a new LPPM called Locally Relevant Geo-obfuscation (LR-Geo) to optimize geo-obfuscation using LP more efficiently. This is accomplished by restricting the geo-obfuscation calculations for each user to locally relevant (LR) locations near the user's actual location. To prevent LR locations from inadvertently revealing a user's true whereabouts, users compute the LP coefficients locally and upload only these coefficients to the server, rather than the LR locations themselves. The server then solves the LP problem using the provided coefficients. Additionally, we enhance the LP framework with an exponential obfuscation mechanism to ensure that the obfuscation distribution is indistinguishable across multiple users. By leveraging the constraint structure of the LP formulation, we apply Benders' decomposition to further boost computational efficiency. Our theoretical analysis confirms that, even though geo-obfuscation is calculated independently for each user, it still adheres to geo-indistinguishability constraints across multiple users with high probability. Finally, experimental results using a real-world dataset demonstrate that LR-Geo outperforms existing geo-obfuscation methods in terms of computational time, data utility, and privacy protection. § INTRODUCTION Among a variety of location privacy protection mechanisms (LPPMs), geo-obfuscation has become the preferred paradigm for protecting individual location privacy against server-side data breaches <cit.>. Geo-obfuscation allows mobile users to report obfuscated locations instead of their exact locations to servers in location-based services (LBS). As the obfuscation process is irreversible <cit.>, users' exact locations are well-protected even if the obfuscated locations are disclosed to attackers.
This is achieved by satisfying certain privacy criteria, such as geo-indistinguishability (Geo-Ind) <cit.>, which requires that, for any two locations geographically close, the probability distribution of their obfuscated locations should be sufficiently close so that it is difficult for an attacker to distinguish the two locations based on their obfuscated representations. = -1 Although geo-obfuscation provides a strong privacy guarantee for users' locations, the location errors introduced by obfuscation can negatively impact the quality of LBS. Many recent efforts <cit.> aim to address the quality issue caused by geo-obfuscation using linear programming (LP) <cit.>, of which the objective is to minimize the utility loss with the privacy criterion like Geo-Ind guaranteed. For the sake of computational tractability, the LP-based methods typically discretize the location field into a finite set of discrete locations. Its decision variables, represented as an obfuscation matrix, determine the probability distributions of obfuscated locations given each possible real location. Due to the intricate complexity of LP, generating the obfuscation matrix directly on users' mobile devices is not feasible. Instead, the matrix is calculated by a server, which optimizes the matrix before it is downloaded by the mobile devices <cit.>. Given that the server lacks knowledge of users' precise locations, the server typically considers every location within the target area when calculating the matrix, regardless of whether it is currently occupied by a user. After downloading the matrix, each user selects the specific row of the matrix that matches their actual location to determine the probability distribution of the obfuscated locations. Consequently, the LP formulation of the obfuscation matrix involves K^2 decision variables, where K denotes the number of discrete locations within the target region. This results in a significant challenge in accommodating a large array of locations. For instance, the inclusion of thousands of distinct locations within a modestly sized town escalates the number of decision variables into the millions <cit.>. As compared in Table <ref> in Section <ref> (Related Work), most current LP-based works limit the number of discrete locations K to up to 100. §.§ Our Work Motivations. The traditional LP-based geo-obfuscation methods (e.g., <cit.>) have a high computation overhead since the LP is formulated completely by the server side, which requires accounting for all locations within the target region. While, from each single user's perspective, the user engages only with the specific row that matches their actual location. Although this single row cannot be generated in isolation as it is linked to some other rows by “Geo-Ind”, such constraints are only enforced between the nearby locations <cit.>. This indicates that, if the LP can be formulated locally by each user, they only need to consider “locally relevant” locations so that the computational cost can be significantly reduced. In practical terms, when a user chooses an obfuscated location, the relevance of how another user 100 kilometers away selects their obfuscated location due to the Geo-Ind constraints is minimal. Motivated by the above observation, this paper introduces a new geo-obfuscation paradigm, termed Locally Relevant Geo-obfuscation (LR-Geo). 
The core idea of LR-Geo is to allow each user to formulate the LP by themselves by focusing exclusively on their Locally Relevant (LR), thereby streamlining the process of generating obfuscation matrices. Nevertheless, the development of LR-Geo presents several distinct challenges: §.§.§ Challenge 1: How to determine the LR location set? First, it is important to note that even a location far from a user's location can have an indirect impact on the user's obfuscation distribution since the distant location can have higher relevance to other locations closer to the user by the Geo-Ind constraints. Considering such a “multi-hop” influence of Geo-Ind is hard to circumvent while pursuing the globally optimal solution, our approach focuses on striking a balance between optimizing the obfuscation matrix and enhancing computational efficiency, achieved by selecting an appropriate LR location set. Specifically, we introduce a Geo-Ind graph to describe the Geo-Ind constraints between each nearby location pair, which also enables us to quantify the “multi-hop” impact of Geo-Ind constraints through the path distance between nodes in the graph (see Theorem <ref>). Using the Geo-Ind graph, we determine the LR location set for each user as the collection of locations whose path distance from the user's actual location does not surpass a predefined threshold. Following this, we formulate the LP of LR-Geo for each user to focus exclusively on their selected LR location set. §.§.§ Challenge 2: How to calculate LR-Geo? Despite having a relatively smaller LP size, the calculation of LR-Geo still needs to be migrated to the server since (i) the computational demands of LR-Geo remain relatively high for mobile devices, and (ii) LR-Geo's LP formulation involves assessing data utility for downstream decision-making, a task typically handled by the server rather than individual users <cit.>. However, each user needs to keep the LR location set hidden from the server, as these locations could potentially disclose the user's actual location. As a workaround, we enable each user to locally compute the coefficients of the LP formulation with server assistance and then upload these coefficients to the server. We demonstrate that the uploaded coefficients can be used by the server to solve the LP problems but cannot be reversed to unveil the LR location of the user (by examples in Section <ref> and experimental results in Fig. <ref> in Section <ref>). §.§.§ Challenge 3: How to guarantee Geo-Ind across multiple users? Given that each user conceals their LR location set from the server, formulating Geo-Ind constraints across users in LP becomes another challenge for the server. To address this, we enable the server to apply exponential distribution constraints on a selected subset of obfuscated locations for each user. We demonstrate that adhering to these constraints ensures that the chosen obfuscated locations meet Geo-Ind constraints across users even though their obfuscation is calculated in an independent manner (see Theorem <ref>). Moreover, our experimental findings in Fig. <ref> indicate that while unselected obfuscated locations do not theoretically guarantee Geo-Ind, they still possess a high probability (e.g. 98.04% on average) of meeting Geo-Ind constraints in practice. 
Additionally, by integrating the exponential mechanism with LP, the constraint matrix of LR-Geo follows a ladder block structure, making the problem well-suited to Benders' decomposition, which further improves the computation efficiency of solving LR-Geo. §.§.§ Experimental results Lastly, in our experiment, we assessed LR-Geo's performance by simulating its application to road map data sourced from Rome, Italy <cit.>. The results revealed that LR-Geo efficiently generates obfuscation matrices within 100 seconds for cases involving up to 1500 locations in the target area. This marks a substantial enhancement over existing LP-based geo-obfuscation techniques (as listed in Table <ref>), which can only handle up to 100 locations. Furthermore, our experimental results show that LR-Geo's obfuscation matrix not only adheres closely to the theoretical lower bound of expected cost, as established in Theorem <ref> and Theorem <ref>, but also outperforms contemporary benchmarks <cit.> in terms of time efficiency and cost-effectiveness. §.§.§ Contributions In summary, the contributions of this paper are summarized as follows: 1. We introduce LR-Geo, a new geo-obfuscation approach that significantly reduces the computational overhead of geo-obfuscation while maintaining a high level of optimality. 2. We develop a remote computing framework that allows for the offloading of LR-Geo computations to a server while preserving the privacy of each user's LR location set. 3. To achieve Geo-Ind across multiple users' obfuscation matrices, we integrate exponential distribution constraints within the LP computational framework. Given LR-Geo's constraint structure, we apply Benders' decomposition to enhance computational time efficiency. 4. Our experimental results demonstrate that LR-Geo not only approximates optimal solutions with considerably lower computational costs but also outperforms existing state-of-the-art methods in time efficiency and cost-effectiveness. The rest of the paper is organized as follows: The next section provides the preliminaries of geo-obfuscation. Section <ref> describes the motivation and Section <ref> designs the algorithm. Section <ref> evaluates the algorithm's performance. Section <ref> presents the related work and Section <ref> makes a conclusion. § PRELIMINARY In this section, we introduce the preliminary knowledge of geo-obfuscation, including its framework in LBS in Section <ref>, its privacy criteria Geo-Ind in Section <ref>, and the LP formulation in Section <ref>. The main notations used throughout this paper can be found in Table <ref> in Section <ref> in Appendix. §.§ Geo-Obfuscation in LBS We consider an LBS system composed of a server and a set of users, where users need to report their locations to the server to receive the desired services. Like <cit.>, we assume that the server is not malicious, but it might suffer from a passive attack where attackers can eavesdrop on the users' reported locations breached by the server. In this case, users can hide their exact locations from the server using geo-obfuscation mechanisms <cit.>. In general, a geo-obfuscation mechanism can be represented as a probabilistic function, of which the input and the output are the user's real location and obfuscated location, respectively. For the sake of computational tractability, many existing works like <cit.> approximate the users' location field to a discrete location set 𝒱 = {v_1, ..., v_K}. 
In this case, the obfuscation function can be represented as a stochastic obfuscation matrix 𝐙 = {z_i,k}_K× K, where each z_i,k denotes the probability of taking v_k as the obfuscated location given the actual location v_i. Besides hiding the users' actual location, the obfuscation matrix 𝐙 is designed to minimize the utility loss (or cost) caused by geo-obfuscation. As an example, in this paper, we focus on a category of LBS where a mobile user needs to physically travel to a specified destination to receive service (e.g., hotel/restaurant recommendations <cit.>) or implement a task (e.g., spatial crowdsourcing <cit.>). Typically, these LBS types strive to minimize travel expenses for users. Accordingly, we define the cost resulting from geo-obfuscation as the distortion between the estimated travel distances (using obfuscated locations) and the actual travel distances incurred by users. Note that our approach in this paper can be readily adapted in other LBS applications as long as the explicit relationship between cost and location obfuscation can be established. To calculate the traveling costs, global LBS information such as traffic conditions and destination distribution is needed. Since global information is hard to maintain by individuals, many existing works like <cit.> let the server manage the computation of the obfuscation matrix. Specifically, before reporting the location to the server, each privacy-aware user downloads the obfuscation matrix 𝐙 from the server. Given the current location v_i, the user finds the corresponding row 𝐳_i = [z_i,1, ..., z_i,K] in the obfuscation matrix, based on which the user then randomly selects an obfuscated location to report. In what follows, we call 𝐳_i the obfuscation vector of the location v_i. §.§ Geo-Indistinguishability Although the server takes charge of generating the obfuscation matrix, the users' exact locations are still hidden from the server since the obfuscated locations are selected in a probabilistic manner <cit.>. In particular, the obfuscation matrix 𝐙 is designed to satisfy the privacy criterion Geo-Ind, indicating that even if an attacker has obtained the users' reported (obfuscated) location and 𝐙 from the server, it is still hard for the attacker to distinguish the users' exact locations from the nearby locations. We use d_v_i, v_j to denote the Haversine distance (the angular distance on the surface of a sphere) between v_i and v_j. Given a threshold γ > 0, we call two locations v_i and v_j “neighboring locations” if their distance d_v_i, v_j≤γ. Geo-Ind is formally defined in Definition <ref> <cit.>: (Geo-Ind) An obfuscation matrix 𝐙 satisfies (ϵ, γ)-Geo-Ind if, for each pair of neighboring locations v_i, v_j ∈𝒱 with d_v_i, v_j≤γ, the following constraints are satisfied z_i,k - e^ϵ d_v_i, v_j z_j,k≤ 0,  ∀ v_k ∈𝒱, i.e., the probability distributions of the obfuscated locations of v_i and v_j are sufficiently close. Here, ϵ is called the privacy budget. Higher ϵ implies a lower privacy level. In what follows, we use ℰ = {(v_i, v_j)∈𝒱^2 | d_v_i, v_j≤γ} to denote the set of neighboring locations in 𝒱. In general, we call v_j a kth-order neighbor of v_i (denoted by v_j ∈𝒩^(k)_v_i), if the shortest-path distance (i.e., the number of edges in the shortest path) between v_j and v_i in the graph 𝒢 is k. We let 𝒱^(k)_v_i = ⋃_l=1^(k)𝒩^(l)_v_i include all v_i's 1st to kth neighbors. §.§ LP Problem Formulation Constraints. In addition to satisfying Geo-Ind in Equ. 
(<ref>), for every real location v_i, the total probability of its obfuscated locations should be equal to 1, ∑_k=1^K z_i,k = 1, ∀ v_i ∈𝒱. Objective function. Given the target location v_l, the real location v_i, and the obfuscated location v_k, we define the cost of LBS as the discrepancy between the estimated travel cost tc_v_i, v_l and the actual travel cost tc_v_k, v_l to reach v_l: δ_v_i,v_k,v_l = |tc_v_i, v_l - tc_v_k, v_l|. We assume that the server has the prior distribution of the target locations 𝐪 = [q_1, ..., q_K], where q_l (l = 1, ..., K) denotes the probability that a target's location is at v_l. The objective is to minimize the expected cost caused by the obfuscation matrix 𝐙: ℒ(𝐙) = ∑_i=1^K p_i∑_k=1^K ∑_l=1^K q_lδ_v_i,v_k,v_l z_i,k = ∑_i=1^K𝐜_i𝐳^⊤_i, where p_i (i = 1, ..., K) denotes the prior probability that a user's real location is at v_i, 𝐜_i = [c_v_i,v_1, ..., c_v_i,v_K] denotes the cost coefficients of 𝐳_i in the objective function, and each c_v_i, v_k is given by c_v_i, v_k = p_i ∑_l=1^K q_l δ_v_i,v_k,v_l  (i = 1, ..., K). Problem formulation. To satisfy the constraints of Geo-Ind (Equ. (<ref>)) and the probability unit measure (Equ. (<ref>)), and to minimize ℒ(𝐙) (Equ. (<ref>)), the problem of obfuscation matrix generation (OMG) can be formulated as the following LP problem: min ℒ(𝐙) = ∑_i=1^K𝐜_i𝐳^⊤_i, s.t. the Geo-Ind constraints in Equ. (<ref>) and the unit measure constraints in Equ. (<ref>). § MOTIVATIONS AND OBSERVATIONS Although the OMG problem outlined in Equ. (<ref>)(<ref>) can be solved using classical LP algorithms such as the simplex method <cit.>, it is hampered by high computational costs. The time complexity of an LP problem depends on the number of decision variables and the number of linear constraints <cit.>. In OMG, the decision matrix 𝐙 encompasses O(|𝒱|^2) decision variables, and it must adhere to the Geo-Ind constraints for every pair of neighboring locations in 𝒱, resulting in O(|ℰ||𝒱|) linear constraints. This substantial computational demand renders LP-based geo-obfuscation impractical for scenarios with a large number of locations. Therefore, enhancing the computational efficiency of solving LP-based geo-obfuscation is the primary goal of this paper. Observations. When a user at location v_i seeks to obfuscate their actual location, they use only the ith row 𝐳_i = [z_i,1, ..., z_i,K] instead of the entire matrix 𝐙. However, determining 𝐳_i in isolation is not feasible within OMG, as 𝐳_i is linked to other rows (obfuscation vectors) of 𝐙 by the Geo-Ind constraints. As depicted in Fig. <ref>, Geo-Ind directly connects the obfuscation vector 𝐳_i to another vector 𝐳_j if their corresponding locations v_i and v_j are neighbors. Additionally, a location v_l, even if distant from v_i, indirectly influences 𝐳_i through its neighborly relation with v_j. Considering that it is hard to circumvent such a “multi-hop” influence of Geo-Ind between locations while pursuing the globally optimal solution, we aim to balance the optimality of geo-obfuscation and computational efficiency by focusing on a selectively identified set of locations that exert a significant influence on v_i. This approach is based on the intuition that locations nearer to v_i have obfuscation vectors 𝐳_l with a more pronounced effect on 𝐳_i. The pivotal question then becomes how to quantify the extent of influence between 𝐳_i and other vectors, such as 𝐳_l. To this end, we introduce the concept of the Geo-Ind graph in Definition <ref>. We then detail the application of this graph to measure the Geo-Ind connection between 𝐳_i and 𝐳_l in Theorem <ref>.
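To make the above formulation concrete, the following minimal sketch (ours, not the implementation used in this paper) evaluates the expected cost ℒ(𝐙) and lists the Geo-Ind constraints that a candidate obfuscation matrix violates; the arrays dist, p, q, and delta are assumed inputs whose names are illustrative.

```python
import numpy as np

def geo_ind_violations(Z, dist, eps, gamma):
    """List the (i, j, k) triples violating z_{i,k} <= e^{eps * d_{i,j}} * z_{j,k}
    over all neighboring pairs (d_{i,j} <= gamma)."""
    K = Z.shape[0]
    out = []
    for i in range(K):
        for j in range(K):
            if i != j and dist[i, j] <= gamma:
                bad = np.where(Z[i] > np.exp(eps * dist[i, j]) * Z[j] + 1e-12)[0]
                out.extend((i, j, int(k)) for k in bad)
    return out

def expected_cost(Z, p, q, delta):
    """L(Z) with delta[i, k, l] = |tc(v_i, v_l) - tc(v_k, v_l)| precomputed."""
    c = np.einsum("i,ikl,l->ik", p, delta, q)   # cost coefficients c_{v_i, v_k}
    return float(np.sum(c * Z))
```

Note that validating a single row 𝐳_i already requires the rows of all of v_i's neighbors, which is precisely the coupling that the Geo-Ind graph below is designed to capture.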
(Geo-Ind Graph) Geo-Ind graph is defined as an undirected graph 𝒢 = (𝒱, ℰ) to describe the Geo-Ind constraints between locations within a set 𝒱. Here, 𝒱 represents the set of nodes, each corresponding to a distinct location, and ℰ denotes the set of edges. Each edge (v_i, v_j) ∈ℰ indicates that the locations v_i and v_j are neighbors (i.e. d_v_i, v_j≤γ) with the edge weight equal to the distance d_v_i, v_j between them. Consider two locations v_i and v_j are connected through at least one path in the Geo-Ind graph 𝒢. Let the path distance D_v_i,v_j represent the sum of weights of the edges forming the shortest path between v_i and v_j. Their probabilities of selecting location v_k as the obfuscated location is constrained by: z_i,k≤ e^ϵ D_v_i,v_j z_j,k. Detailed proof can be found in Section <ref> in Appendix. Theorem <ref> implies the extent to which a pair of obfuscation vectors, 𝐳_i and 𝐳_j, are linked through Geo-Ind constraints depends on the path distance D_v_i,v_j between their respective locations v_i and v_j in the Geo-Ind graph 𝒢. A higher path distance between locations implies a weaker linear constraint between their obfuscation vectors. In Fig. <ref>, we follow the example of Fig. <ref>, and check specifically how the obfuscation vector 𝐳_i is impacted by the decision vectors 𝐳_j and 𝐳_l according to the conclusion of Theorem <ref>. As Fig. <ref> shows, given the path distances D_v_i, v_j = 0.2km and D_v_i, v_l = 0.5km, and the privacy budget ϵ = 10.0km^-1, each entry of 𝐳_i follows the following linear constraints according to Theorem <ref>: z_i,k≤ e^ϵ D_v_i, v_j z_j,k⇒ z_i,k≤ e^2 z_j,k z_i,k≤ e^ϵ D_v_i, v_l z_l,k⇒ z_i,k≤ e^5z_l,k indicating that z_l,k enforces a weaker constraint on z_i,k compared to z_j,k. Given that z_i,k represents a probability measure and therefore cannot exceed 1, the condition z_i,k≤ e^5 z_l,k becomes irrelevant for instances where z_l,k is just marginally greater than 0 (when z_l,k≥ 0.0067). This is because the upper limit of z_i,k≤ 1 naturally satisfies the condition z_i,k≤ e^5 z_l,k under these circumstances. Conversely, the constraint z_i,k≤ e^2 z_j,k is more stringent and remains applicable unless z_j,k≥ 0.1353. As z_j,k increases beyond 0.1353, the condition z_i,k≤ 1 is adequate to fulfill the constraint of z_i,k≤ e^2 z_j,k. Overall, the insight obtained from Theorem <ref> and the example in Fig. <ref> lead us to focus on a set of “locally relevant locations” that are within a specified path distance threshold from v_i in the Geo-Ind graph. By focusing the LP problem on this narrowed-down LR location set, we can substantially decrease the computational demands associated with solving the LP problem. In the next section, we introduce the details of our method. § METHODOLOGY In this section, we present Locally Relevant Geo-Obfuscation (LR-Geo), detailing its main concepts and problem formulation in Section <ref>, the computational framework in Section <ref>–<ref>, the theoretical performance analysis in Section <ref>, and a discussion of potential threats in Section <ref>. §.§ Locally Relevant Geo-Obfuscation We consider a scenario where M users {1, ..., M} need to obfuscate their locations. Without loss of generality, we assume each user m is located at v_m (m = 1, ..., M). Therefore, each user m needs to use the mth row of the obfuscation matrix 𝐙, denoted by 𝐳_m = [z_m,1, ..., z_m,K], to determine the probability distribution of v_m's obfuscated locations. 
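The Geo-Ind graph and the path-distance bound of Theorem <ref> can be illustrated with a short sketch. The code below is ours and only a sketch: it assumes a list coords of (lat, lon) pairs and uses networkx to run Dijkstra's algorithm; for every location j reachable from i, it returns the factor e^ϵ D_v_i,v_j bounding z_i,k ≤ e^ϵ D_v_i,v_j z_j,k.

```python
import math
import networkx as nx

def haversine_km(a, b):
    """Haversine distance between two (lat, lon) pairs in degrees, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def geo_ind_graph(coords, gamma):
    """Undirected graph with an edge between every pair of locations whose
    haversine distance is at most the neighbor threshold gamma (km)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = haversine_km(coords[i], coords[j])
            if d <= gamma:
                G.add_edge(i, j, weight=d)
    return G

def multi_hop_bounds(G, i, eps):
    """For each location j reachable from i, return the factor e^{eps * D_ij}
    that bounds z_{i,k} <= e^{eps * D_ij} * z_{j,k}."""
    D = nx.single_source_dijkstra_path_length(G, i, weight="weight")
    return {j: math.exp(eps * dij) for j, dij in D.items() if j != i}
```

Locations with small path distances impose tight bounds on 𝐳_i, which is the intuition behind the LR location set introduced next.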
Inspired by the insights discussed in Section <ref>, the underlying concept of LR-Geo is to generate an obfuscation matrix focusing solely on the “locally relevant (LR) locations” surrounding each user's actual location v_m to reduce the computational overhead. According to Theorem <ref>, within the Geo-Ind graph 𝒢, locations with shorter path distances to v_m exhibit stronger connections of their obfuscation vectors to 𝐳_m through Geo-Ind constraints. Therefore, we identify the LR location set based on the path distance to v_m in the Geo-Ind graph, as described in Definition <ref>: (LR location set) The LR location set of v_m, denoted by 𝒩_m, is defined as the set of locations whose path distances to v_m are no greater than a predetermined threshold Γ: 𝒩_m = {v_j ∈𝒱 | D_m, j≤Γ}, where the constant Γ is called the LR distance threshold. Clearly, a lower Γ leaves fewer decision vectors to derive in the OMG problem, which mitigates the computation cost. On the other hand, a lower Γ might deviate the derived 𝐙 from its optimal values, even though the removed decision vectors are only weakly linked to 𝐳_m. Hence, the first question is Q1: How to find 𝒩_m with an appropriate Γ to reduce the obfuscation matrix computation cost while guaranteeing its optimality at an acceptable level? Moreover, since v_m is close to the center of its LR location set 𝒩_m, when migrating the calculation of 𝐙 to the server, 𝒩_m should be hidden from the server, as it might disclose v_m. Therefore, the second research question is Q2: How to migrate the obfuscation matrix computation to the server without disclosing the LR location sets? In Section <ref>, we introduce how to address Q1 and Q2. §.§ Local Geo-Indistinguishability Our goal is only to hide v_i from its neighbors, i.e., v_i is geo-indistinguishable from its neighbors. It is less relevant whether other locations in the region are geo-indistinguishable or not. Consider that, when a user at location v_i obfuscates his/her location, to satisfy Geo-Ind, the obfuscation distribution of v_i, ℙr(Y=v_k|X=v_i), should be sufficiently close to the obfuscation distribution of any of its neighbors v_j∈𝒩_m, ℙr(Y=v_k|X=v_j), i.e., e^-ϵ d_v_i, v_j≤ℙr(Y=v_k|X=v_i)/ℙr(Y=v_k|X=v_j)≤ e^ϵ d_v_i, v_j, so that it is hard for the attacker to distinguish v_i and v_j from their obfuscated (reported) locations. In the original definition of Geo-Ind, the Geo-Ind constraints are enforced for each pair of neighbors. However, from a single user's perspective, the user cares more about whether his/her location can be hidden in a certain range (i.e., from its neighbors), rather than whether other locations outside this range are geo-indistinguishable or not. By referring to the original definition of Geo-Ind (Definition <ref>), we formally define partial Geo-Ind as follows: An obfuscation matrix 𝐙 satisfies partial (ϵ, r, v_i)-Geo-Ind if v_i is geo-indistinguishable from any of its neighbors v_j ∈𝒩_m, i.e., z_i,k - e^ϵ d_v_i, v_j z_j,k≤ 0 and z_j,k - e^ϵ d_v_i, v_j z_i,k≤ 0. Our objective is to generate an obfuscation matrix for the user located at v_i, called a partial obfuscation matrix, that satisfies partial (ϵ, r, v_i)-Geo-Ind and minimizes the expected estimation error of traveling cost caused by obfuscation. §.§.§ LR Location Set Searching For each user m, the LR location set 𝒩_m can be found locally by the user.
Specifically, given the coordinates of the locations in 𝒱 and the neighbor threshold γ, the user first creates the Geo-Ind graph 𝒢 = (𝒱, ℰ) by checking whether the Haversine distance between each pair of locations is no higher than γ. The user then builds a shortest path tree rooted at v_m in 𝒢 using Dijkstra's algorithm <cit.>. The shortest path tree provides the path distance D_m, j between v_m and each v_j ∈𝒱, based on which the user then determines whether v_j should be included in the LR location set 𝒩_m based on Equ. (<ref>). Time complexity. To build 𝒢, the user needs to compare the Haversine distance between each pair of locations with γ. This process involves a total of O(|𝒱|^2) comparisons. The time complexity of building a shortest path tree using Dijkstra's algorithm is O(|𝒱|^2). Therefore, the time complexity of LR location set identification is O(|𝒱|^2 + |𝒱|^2) = O(|𝒱|^2). §.§.§ Obfuscation Range In the original OMG formulation (Equ. (<ref>)(<ref>)), the obfuscation range covers the entire location set 𝒱, even though many of these obfuscated locations receive zero probability assignments from the LP algorithm due to their high cost. To further reduce the computation cost, we limit the selection of obfuscated locations to a smaller range. Given the real location v_m, we consider its obfuscated location range as a circle 𝒞(v_m, r_obf) centered at v_m with radius r_obf. Then, the set of the obfuscated locations of v_m, denoted by 𝒪_m (𝒪_m⊆𝒱), can be calculated by 𝒪_m = {v_k∈𝒱 | d_v_m, v_k≤ r_obf}. For each obfuscated location v_k ∉𝒪_m, we assign a small value ξ to the probability z_i,k; we specify how ξ is determined in Equ. (<ref>) in Section <ref>. As Fig. <ref> shows, after deriving both 𝒩_m and 𝒪_m, the user only needs to download a submatrix of 𝐙, of which the rows and the columns cover 𝒩_m and 𝒪_m, respectively. §.§.§ Problem Formulation Given each user m's LR location set 𝒩_m, we define the user's LR obfuscation matrix as 𝐙_𝒩_m = {z^(m)_i,k}_𝒩_m×𝒱, which includes the obfuscation vectors of all the relevant locations in 𝒩_m. The cost caused by 𝐙_𝒩_m is defined by ℒ(𝐙_𝒩_m) = ∑_v_i ∈𝒩_m𝐜_i𝐳^(m)⊤_i, where 𝐜_i = [c_v_i, v_1, ..., c_v_i, v_K] are the cost coefficients (defined by Equ. (<ref>)) of the obfuscation vector 𝐳^(m)_i = [z^(m)_i,1, ..., z^(m)_i,K]. The objective of each user m is to minimize ℒ(𝐙_𝒩_m) while guaranteeing the Geo-Ind constraints among the obfuscation vectors of the relevant locations 𝒩_m and the probability unit measure constraint for each obfuscation vector 𝐳^(m)_i in ℒ(𝐙_𝒩_m). Given the LR location set 𝒩_m and the obfuscated location set 𝒪_m, we let each user m formulate the Locally Relevant Obfuscation Matrix Generation (LR-OMG) problem as the following LP problem:
min ℒ(𝐙_𝒩_m) = ∑_v_i ∈𝒩_m𝐜_i𝐳^(m)⊤_i
s.t. z^(m)_i,k / z^(m)_j,k≤ e^ϵ d_v_i, v_j, ∀ v_i, v_j ∈𝒩_m with d_v_i, v_j≤γ, ∀ v_k ∈𝒱,
∑_k z^(m)_i,k = 1, ∀ v_i ∈𝒩_m,
z^(m)_i,k = ξ, ∀ v_i ∈𝒩_m, ∀ v_k ∉𝒪_m.
§.§ Computation Framework Considering the limited computation capability of users, like most related works <cit.>, we migrate the computation load of LR-OMG to the server. Note that, for each user m, directly uploading the LR location set 𝒩_m and the obfuscated location set 𝒪_m to the server might cause additional privacy leakage, as both 𝒩_m and 𝒪_m can be leveraged to infer the user's real location v_m. As a solution shown in Fig. <ref>, we let each user m (m = 1, ..., M) upload the coefficients of the formulated LP problem (Equ.
(<ref>)–(<ref>)) to the server, including the distance matrix 𝐃_𝒩^2_m and the cost matrix 𝐂_𝒩_m, 𝒪_m, instead of 𝒩_m and 𝒪_m. (1) Distance matrix 𝐃_𝒩^2_m: The user m calculates the Haversine distance d_v_i, v_j between each pair of locations v_i, v_j ∈𝒩_m. Then, the user uploads the distance matrix 𝐃_𝒩^2_m = {d_v_i, v_j}_(v_i, v_j) ∈𝒩^2_m to the server, which uses each distance value d_v_i, v_j to establish the Geo-Ind constraints for each pair of decision variables z_i,k and z_j,k in Equ. (<ref>). Note that 𝐃_𝒩^2_m solely provides information about the relative positions of the locations within 𝒩_m without disclosing their specific coordinates. (2) Cost matrix 𝐂_𝒩_m, 𝒪_m = {c_v_i, v_k}_(v_i, v_k) ∈𝒩_m×𝒪_m includes the cost coefficients c_v_i, v_k caused by each obfuscated location v_k ∈𝒪_m given each location v_i ∈𝒩_m, from which the server can specify the objective function in Equ. (<ref>). Note that computing the cost coefficient c_v_i, v_k in Equ. (<ref>) requires knowledge of the target distribution 𝐪 = [q_1, ..., q_K]. However, retaining this information at the user's end presents challenges, primarily because the dynamically changing target distribution introduces additional communication costs. Moreover, privacy concerns, particularly regarding the confidentiality of targets (e.g., passengers in an Uber-like platform <cit.>), further complicate the maintenance of such information. To facilitate the estimation of c_v_i, v_k, as Fig. <ref> shows, we let each user download a cost reference table maintained by the server. This table associates each pair (v_i, v_k) with a value approximating the expected cost caused by v_k when the real location is v_i, all while keeping the actual target locations confidential. The process of constructing the cost reference table, ensuring the privacy of both users and targets, is elaborated in Section <ref>. In this section, we assume that users can accurately obtain each c_v_i, v_k using the cost reference table. Further analysis of the performance guarantee when utilizing the cost reference table is presented in Section <ref>. We also illustrate that the cost matrix cannot be reversed to unveil the LR locations of the user in Section <ref> and in the experiment in Fig. <ref>. §.§ Combination of the LP and Exponential Mechanisms Given the coefficient matrices 𝐃_𝒩^2_m and 𝐂_𝒩_m, 𝒪_m (m = 1, ..., M), the server can compute 𝐙_𝒩_m for each user. However, as each LR matrix is generated independently, the obfuscation vectors, such as 𝐳_i^(m) and 𝐳_j^(n) from different LR matrices 𝐙_𝒩_m and 𝐙_𝒩_n, may not satisfy the Geo-Ind constraints. Conversely, jointly deriving 𝐙_𝒩_1, ..., 𝐙_𝒩_M using only LP not only incurs high computational overhead but also necessitates the disclosure of 𝒩_m and 𝒪_m, which should be hidden from the server. As a solution, we incorporate the exponential geo-obfuscation mechanism into LR-Geo. Similar to <cit.>, we define an indicator matrix 𝐐 = {q_i,k}_(v_i, v_k)∈𝒱^2 to indicate whether the probability of the obfuscated location v_k needs to follow the exponential distribution when the real location is v_i. Specifically, if q_i,k = 1, we enforce the following constraint (exponential distribution) for the obfuscated location v_k ∈𝒱: z_i,k = y_k e^-ϵ d_v_i, v_k/2 if v_k ∈𝒪_m, and z_i,k = ξ = y_k e^-ϵ r_obf/2 if v_k ∉𝒪_m, where y_k ≥ 0 is a decision variable. In what follows, we let 𝐲 = [y_1, ..., y_K]. Note that in Equ. (<ref>) we have set z_i,k = ξ when v_k ∉𝒪_m, and here in Equ. (<ref>), we specify ξ = y_k e^-ϵ r_obf/2.
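To see why this parameterization is useful, the following sketch (ours) builds the constrained entries of two users' obfuscation vectors from a shared vector 𝐲 and checks numerically that every such pair satisfies the Geo-Ind inequality, anticipating the proposition stated below; planar toy coordinates and Euclidean distances are used for brevity, and the values shown are the constrained (unnormalized) entries only.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 10.0                                   # privacy budget (1/km)

# Planar stand-ins (km) for two real locations v_i, v_j of different users
# and for the candidate obfuscated locations v_k.
pts = rng.uniform(0.0, 5.0, size=(50, 2))
i, j = 0, 1

def d(a, b):
    return float(np.linalg.norm(pts[a] - pts[b]))

y = rng.uniform(0.1, 1.0, size=len(pts))     # shared decision variables y_k
z_i = np.array([y[k] * np.exp(-eps * d(i, k) / 2) for k in range(len(pts))])
z_j = np.array([y[k] * np.exp(-eps * d(j, k) / 2) for k in range(len(pts))])

# Triangle inequality: z_ik / z_jk = e^{eps (d_jk - d_ik) / 2}
#                                 <= e^{eps d_ij / 2} <= e^{eps d_ij}.
assert np.all(z_i <= np.exp(eps * d(i, j)) * z_j + 1e-12)
print("exponential-form entries satisfy Geo-Ind for all", len(pts), "locations")
```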
Like <cit.>, we adopt a heuristic strategy to determine the indicator matrix 𝐐. In particular, we assign q_i,k = 1 when the distance d_v_i, v_k exceeds r_exp, where r_exp, a predefined threshold, is less than or equal to the obfuscation range r_obf. This approach is based on the rationale of applying the exponential mechanism more extensively to obfuscated locations that are significantly distant from the actual location. Such locations are often associated with lower probability values, thereby minimizing their influence on the expected cost. Given that LP-based methods tend to yield lower costs, incorporating the exponential mechanism only for these distant locations limits its impact on the cost. However, our framework can accommodate alternative methods for determining 𝐐. Given any y_k ∈ℝ^+, if the constraints in Equ. (<ref>) are satisfied, then for each pair of z_i,k and z_j,k with q_i,k = q_j,k = 1, the Geo-Ind constraint z_i,k - e^ϵ d_v_i, v_j z_j,k≤ 0 is satisfied. The detailed proof can be found in the proof of Proposition 1 in <cit.>. To enable the server to establish the constraints of the exponential distribution in Equ. (<ref>), each user m computes the distance matrix 𝐃_𝒩_m, 𝒪_m = {d_v_i, v_k}_(v_i, v_k) ∈𝒩_m×𝒪_m and uploads the matrix to the server. It is important to note that 𝐃_𝒩_m, 𝒪_m contains only the relative distances between locations within 𝒩_m and 𝒪_m, and does not provide enough information to deduce the specific locations in either 𝒩_m or 𝒪_m. Problem formulation. After collecting the coefficient matrices 𝐃_𝒩^2_m, 𝐃_𝒩_m, 𝒪_m and 𝐂_𝒩_m, 𝒪_m (m = 1, ..., M) and adding the constraints of the exponential mechanism in Equ. (<ref>), we can formulate the following Central LR-Geo (CLR-Geo) problem at the server side: min ∑_m=1^Mℒ(𝐙_𝒩_m), s.t. the constraints in Equ. (<ref>)–(<ref>) and the exponential-distribution constraints in Equ. (<ref>) for each m = 1, ..., M. §.§ Benders' Decomposition to Enhance Computation Efficiency §.§.§ Problem reformulation of CLR-Geo We rewrite the objective function in Equ. (<ref>) as ∑_m=1^Mℒ(𝐙_𝒩_m) = ∑_m=1^M∑_v_i∈𝒩_m∑_v_k ∈𝒱 c_v_i, v_k z_i,k q_i,k + ∑_m=1^M∑_v_i∈𝒩_m∑_v_k ∈𝒱 c_v_i, v_k z_i,k (1-q_i,k) = ∑_k=1^K α_k y_k + ∑_m=1^M 𝐜'_𝒩_m𝐳'_𝒩_m, where each α_k = ∑_m=1^M∑_v_i∈𝒩_m q_i,k c_v_i, v_k e^-ϵ d_v_i, v_k/2 is a constant, and in 𝐜'_𝒩_m, each c'_i,k = c_v_i, v_k(1-q_i,k). We rewrite the constraints of Equ. (<ref>)(<ref>) and Equ. (<ref>) as 𝐀_𝒩_m𝐳'_𝒩_m + 𝐁_𝒩_m𝐳”_𝒩_m(𝐲) ≥𝐛_𝒩_m, where 𝐀_𝒩_m = [[ 𝐀^GeoI_𝒩_m; 𝐀^unit_𝒩_m; -𝐀^unit_𝒩_m ]], 𝐁_𝒩_m = [[ 𝐁^GeoI_𝒩_m; 𝐁^unit_𝒩_m; -𝐁^unit_𝒩_m ]], 𝐛_𝒩_m = [[ 𝐛^GeoI_𝒩_m; 𝐛^unit_𝒩_m; -𝐛^unit_𝒩_m ]], and
* 𝐳'_𝒩_m (resp. 𝐳”_𝒩_m(𝐲)) includes the obfuscation probabilities z_i,k not adhering to (resp. adhering to) the exponential mechanism, where v_i ∈𝒩_m (note that each entry in 𝐳”_𝒩_m(𝐲) follows Equ. (<ref>), so the vector is written as a function of 𝐲);
* 𝐀^GeoI_𝒩_m (resp. 𝐁^GeoI_𝒩_m) denotes the coefficient matrix of the Geo-Ind constraints for 𝐳'_𝒩_m (resp. 𝐳”_𝒩_m(𝐲));
* 𝐀^unit_𝒩_m (resp. 𝐁^unit_𝒩_m) denotes the coefficient matrix of the unit measure constraints for 𝐳'_𝒩_m (resp. 𝐳”_𝒩_m(𝐲));
* 𝐛^GeoI_𝒩_m and 𝐛^unit_𝒩_m are the right-hand-side coefficient vectors of the Geo-Ind constraints and the unit measure constraints, respectively.
As Fig. <ref> shows, the constraint matrix of the reformulated problem has a block ladder structure, making the problem well suited to Benders' decomposition (BD) <cit.>. Due to the limit of space, we list the detailed formulations of the coefficient matrices 𝐀^GeoI_𝒩_m, 𝐁^GeoI_𝒩_m, 𝐀^unit_𝒩_m, 𝐁^unit_𝒩_m and coefficient vectors 𝐛^GeoI_𝒩_m and 𝐛^unit_𝒩_m in Section <ref> in Appendix.
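A minimal sketch (ours; the helper names are illustrative) of the client-side computation of the two distance matrices mentioned above, 𝐃_𝒩^2_m for the Geo-Ind constraints and 𝐃_𝒩_m, 𝒪_m for the exponential-distribution constraints; only relative distances are produced, so neither matrix reveals the coordinates of 𝒩_m or 𝒪_m.

```python
import numpy as np

def haversine_matrix(A, B, radius_km=6371.0):
    """Pairwise haversine distances (km) between rows of A and rows of B,
    each given as (lat, lon) in degrees."""
    lat1, lon1 = np.radians(A[:, :1]), np.radians(A[:, 1:2])
    lat2, lon2 = np.radians(B[:, 0]), np.radians(B[:, 1])
    h = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * np.arcsin(np.sqrt(h))

def upload_distance_matrices(coords, N_m, O_m):
    """coords: K x 2 (lat, lon) array held on-device; N_m, O_m: index lists.
    Returns D_{N_m^2} (Geo-Ind constraints) and D_{N_m, O_m} (exponential
    constraints); only relative distances are exposed to the server."""
    P_N = np.asarray(coords)[np.asarray(N_m)]
    P_O = np.asarray(coords)[np.asarray(O_m)]
    return haversine_matrix(P_N, P_N), haversine_matrix(P_N, P_O)
```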
§.§.§ Algorithm description Benders' decomposition is composed of two stages, * Stage 1: A Master Program (MP) to derive {y_1, ..., y_K}, * Stage 2: A set of subproblems Sub_m (m = 1, ..., M), where each Sub_m aims to derive 𝐳'_𝒩_m. Stage 1: Master program. The MP derives y_1, ..., y_K and replaces each cost 𝐜'_𝒩_m𝐳'_𝒩_m by a single decision variable w_m, i.e., w_m = 𝐜'_𝒩_m𝐳'_𝒩_m. The MP is formulated as the following LP problem min ∑_k=1^K α_k y_k + ∑_m=1^M w_m s.t. ℋ: y_k ≥ 0,  k = 1,..., K. where each cut in ℋ is a linear inequality of the decision variables y_1, ..., y_K, w_1, ..., w_M. According to the central LR-Geo formulated in Equ. (<ref>)–(<ref>), each w_m is given by w_m = min{ℒ'(𝐙_𝒩_m)|.}. Since the MP doesn't know the optimal values of 𝐙_𝒩_m, instead of using Equ. (<ref>), it “guesses” the value of w_m based the cut set ℋ. In the subsequent Stage 2, each Sub_m verifies whether the “guessed” value of w_m is feasible and achieves the minimum data cost as defined in Equ. (<ref>); if not, Sub_m proposes the addition of a new cut to be included in ℋ, thereby guiding the MP to refine w_m during the next iteration. In the following, we use {y_1, ..., y_K, w_1, ..., w_M} to represent the optimal solution of the MP. Stage 2: Subproblems. After the MP derives its optimal solution {y_1, ..., y_K, w_1, ..., w_M} in Stage 1, each Sub_m validates whether w_m has achieved the minimum data cost, w_m = min{𝐜'_𝒩_m𝐳'_𝒩_m|𝐀_𝒩_m𝐳'_𝒩_m≥𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲).}. of which the dual problem can be formulated as the following LP problem: max (𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲))^⊤𝐮_𝒩_m s.t. 𝐀_𝒩_m^⊤𝐮_𝒩_m≤𝐜'_𝒩_m, 𝐮_𝒩_m≥0. There are three cases of the dual problem: Case 1: The optimal objective value is unbounded: By weak duality <cit.>, 𝐲 does not satisfy 𝐀_𝒩_m𝐳'_𝒩_m≥𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲) for any 𝐳'_𝒩_m≥0. Since the dual problem is unbounded, there exists an extreme ray 𝐮̃_𝒩_m subject to 𝐀_𝒩_m^⊤𝐮̃_𝒩_m≤0 and (𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲))^⊤𝐮̃_𝒩_m > 0. To ensure that 𝐮̃_𝒩_m won't be an extreme ray in the next iteration, Sub_m suggests a new cut h (feasibility cut) to the MP: h: (𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲))^⊤𝐮̃_𝒩_m≤0. Case 2: The optimal objective value is bounded with the solution 𝐮_𝒩_m: By weak duality, the optimal value of the dual problem is equal to the optimal value of w_l constrained on the choice of 𝐲. In this case, Sub_m checks whether w_m < (𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲))^⊤𝐮_𝒩_m. If yes, then w_m < min{𝐜'_𝒩_m𝐳'_𝒩_m|𝐀_𝒩_m𝐳'_𝒩_m≥𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲).}, meaning that w_m derived by the MP is lower than the minimum cost. Therefore, Sub_m suggests a new cut h:  w_m ≥(𝐛_𝒩_m - 𝐁_𝒩_m𝐳”_𝒩_m(𝐲))^⊤𝐮_𝒩_m to the MP to improve w_m in the next iteration. Case 3: There is no feasible solution: By weak duality, the primal problem either has no feasible/unbounded solution. The algorithm terminates. After adding the new cuts (from all the subproblems) to the cut set ℋ, the BD moves to the next iteration by recalculating the MP and obtaining updated {y_1, ..., y_K, w_1, ..., w_M}. As Stage 1 and Stage 2 are repeated over iterations, the MP collects more cuts from the subproblems, converging the solution {y_1, ..., y_K, w_1, ..., w_M} to the optimal. (Upper and lower bounds of CLR-Geo's optimal) <cit.> (1) The optimal solution of the MP (Equ. (<ref>) – (<ref>)) offers a lower bound of the optimal solution of the original CLR-Geo (Equ. (<ref>)–(<ref>)) (as the MP relaxes the constraints). (2) The solution of the subproblems (Equ. 
(<ref>)-(<ref>)), if it exists, combined with the solution of the MP, provides an upper bound of the optimal solution of the CLR-Geo (since together they form a feasible solution of CLR-Geo). [Figure: An example of Benders' convergence.] The optimal solution of the CLR-Geo necessarily resides within the interval demarcated by the upper and lower bounds delineated in Proposition <ref>. The narrower this interval, the nearer the solution derived from Benders' decomposition is to the optimal solution. Fig. <ref> illustrates the evolution of these bounds throughout the iterative process. Considering the prolonged convergence tail, we opt to conclude the algorithm once the discrepancy between the upper and lower bounds diminishes to less than a specified margin ξ (e.g., we set ξ = 0.01km in our experiment in Section <ref>). Let 𝐙_i denote the obfuscation distribution of v_i calculated over the whole location set 𝒱. We quantify the optimality loss of the vector 𝐳_i calculated by the LR geo-obfuscation as the deviation of 𝐳_i from 𝐙_i: δ(𝐳_i, 𝐙_i) = ∑_k=1^K |z_i,k - Z_i,k|. In particular, δ(𝐳_i, 𝐙_i) = 0 implies that 𝐳_i and 𝐙_i are identical and hence there is no optimality loss. We use linear search <cit.> to find an appropriate value of Γ such that δ(𝐳_i, 𝐙_i) can be maintained at an acceptable level; for example, δ(𝐳_i, 𝐙_i)≤ 0.07 in our experiment (Section <ref>). We let 𝐙̂^* = [𝐳̂^*_1, ..., 𝐳̂^*_K] and 𝐙 =[𝐙_1, ..., 𝐙_K] denote the optimal solutions of the LR OMG and the original OMG, respectively. For theoretical interest, we prove in Theorem <ref> that ∑_k=1^K𝐜̂_k𝐳̂^*_k provides a lower bound of the minimum cost Δ(𝐙), which helps us check how closely the derived obfuscation can approach the optimum (e.g., a comparison between our solution and this lower bound can be found in Fig. <ref>(b) in Section <ref>): ∑_k=1^K𝐜̂_k𝐳̂^*_k≤Δ(𝐙) (the proof can be found in Appendix). §.§ Cost Matrix Estimation Using Cost Reference Table In Sections <ref> and <ref>, we assumed that each user m knows the cost matrix 𝐂_𝒩_m, 𝒪_m. We now relax this assumption and elucidate the methodology by which users, with the assistance of the server, can estimate 𝐂_𝒩_m, 𝒪_m. According to Equ. (<ref>), the calculation of each c_v_i, v_k requires
* the coordinates of the locations in 𝒩_m and 𝒪_m (to derive the cost error δ_v_i,v_k,v_l), which are known by the user but unknown by the server;
* the coordinates of the target locations (to derive the cost error δ_v_i,v_k,v_l) and the targets' prior distribution 𝐪, which are known by the server but unknown by the user.
Since both the server and the user possess only partial information required to compute 𝐜_k, a “cooperative” approach is employed to calculate 𝐜_k through the exchange of intermediate values between the two parties. Throughout the calculation of 𝐜_k, the server must remain unaware of 𝒩_m and 𝒪_m to protect privacy. Since the server has the global information, including the traveling cost between any pair of locations and the target distribution, we let the server generate a cost reference table to assist the user in estimating 𝐜_k. §.§.§ Cost reference table The server constructs a discrete set of locations 𝒱̂, which is sufficiently dense to ensure that for any given location pair (v_j, v_k) from the sets 𝒩_m and 𝒪_m, respectively, users can identify a corresponding pair (v̂_j, v̂_k) within 𝒱̂ that closely approximates (v_j, v_k). This approximation is then used to estimate cost coefficients.
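A sketch (ours) of how a user can combine this snapping step with the tabulated β values to obtain the estimated cost matrix 𝐂̂_𝒩_m, 𝒪_m; the estimator it applies is the one given in Equ. (<ref>) just below, and planar distances are used for the snapping step for brevity.

```python
import numpy as np

def estimate_cost_matrix(coords, N_m, O_m, ref_coords, beta, p):
    """coords: user-side K x 2 coordinates; ref_coords: |V_hat| x 2 reference
    points of the table; beta: |V_hat| x |V_hat| tabulated expected costs;
    p: prior over real locations.  Each entry follows
    c_hat = p_i * (beta[i_hat, k_hat] + d(v_i, v_i_hat) + d(v_k, v_k_hat))."""
    ref = np.asarray(ref_coords, dtype=float)

    def snap(pt):
        # Nearest reference point and the corresponding snapping distance.
        dists = np.linalg.norm(ref - pt, axis=1)
        idx = int(np.argmin(dists))
        return idx, float(dists[idx])

    C_hat = np.zeros((len(N_m), len(O_m)))
    for a, i in enumerate(N_m):
        i_hat, d_i = snap(np.asarray(coords[i], dtype=float))
        for b, k in enumerate(O_m):
            k_hat, d_k = snap(np.asarray(coords[k], dtype=float))
            C_hat[a, b] = p[i] * (beta[i_hat, k_hat] + d_i + d_k)
    return C_hat
```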
To establish 𝒱̂, the server employs a grid map to discretize the location field, such that locations within the same grid cell are indistinguishable. For specific applications involving constrained user mobility, such as vehicles' mobility, the server might alternatively divide the road network into segments, treating each as a distinct location <cit.>. Given the superior computational resources available to servers compared to those of users, 𝒱̂ can afford a finer granularity of location discretization than that of 𝒩_m and 𝒪_m. Table format. As Fig. <ref> shows, each row of the cost reference table includes the coordinates of an “approximated” real location v̂_i, an “approximated” obfuscated location v̂_k, and the expected cost β_i,k given the real and the obfuscated locations v̂_i and v̂_k, respectively. The expectation in β_i,k is taken over all possible target locations: β_i,k = ∑_j=1^Q q_jδ_v̂_i, v̂_k, v̂_j, where q_j is the probability that the target's nearest location in 𝒱̂ is v̂_j. In what follows, we use 𝐂̂_𝒩_m, 𝒪_m = {ĉ_v_i,v_k}_(v_i, v_k) ∈𝒩_m×𝒪_m to denote the cost matrix estimated by the cost reference table. When a user estimates each ĉ_v_i,v_k in 𝐂̂_𝒩_m, 𝒪_m, the user first finds the nearest locations of v_i and v_k in 𝒱̂, denoted by v̂_i and v̂_k respectively, and then calculates the estimated cost coefficient ĉ_v_i,v_k by ĉ_v_i,v_k = p_i (β_i,k + d_v_i, v̂_i + d_v_k, v̂_k), which gives an upper bound of the real c_v_i, v_k (see Lemma <ref> in Appendix). Time complexity of 𝐂̂_𝒩_m, 𝒪_m's estimation. The cost estimation traverses the |𝒩_m||𝒪_m| pairs of locations in 𝒩_m×𝒪_m, and for each location pair, it needs to find their closest locations in 𝒱̂, taking O(|𝒱̂|) comparisons. Therefore, the time complexity of cost estimation is O(|𝒩_m||𝒪_m||𝒱̂|). [Figure: Potential attack on the cost reference table.] §.§.§ Cost reference table generation To generate a cost reference table, the server first creates a weighted directed graph 𝒢̃ = (𝒱̂, ℰ̂) to describe the traveling cost between locations in the discrete location set 𝒱̂, where each pair of adjacent locations v̂_i and v̂_j is connected by an edge ê_i,j∈ℰ̂. Each ê_i,j∈ℰ̂ is assigned a weight d_v̂_i,v̂_j, reflecting the traveling cost from v̂_i to v̂_j. The server then builds the SP tree rooted at each target location v̂_j ∈𝒱̂ in 𝒢̃, based on which it calculates the shortest path distance D_v̂_i, v̂_j (resp. D_v̂_k, v̂_j) from each location v̂_i (resp. v̂_k) to v̂_j, and derives δ_v̂_i,v̂_k,v̂_j using Equ. (<ref>). Finally, the server calculates β_i,k for each (v̂_i,v̂_k) using the cost errors δ_v̂_i,v̂_k,v̂_j. Time complexity of cost reference table generation. The construction of each SP tree can be achieved by Dijkstra's algorithm, which has a time complexity of O(|𝒱̂|^2) <cit.>. For each designated target location, the server is required to generate an SP tree, culminating in a collective computational effort of O(|𝒱̂|^3) operations. Furthermore, the computation of β_i,k is invoked |𝒱̂|^2 times, with each instance necessitating O(|𝒱̂|) operations, rendering its time complexity O(|𝒱̂|^3). Consequently, the overall time complexity associated with the creation of the table is O(|𝒱̂|^3+|𝒱̂|^3) = O(|𝒱̂|^3), indicating a significant computational demand. §.§.§ Cost reference table size reduction We can further reduce the location set 𝒱̂ while guaranteeing the accuracy of the cost coefficient 𝐜̂_k estimation.
To achieve this, we define 𝒱̂ in a circular region, referred to as 𝒞_cr, so that 𝒱̂ consists of locations within this circle. Consequently, the accuracy of the 𝐜̂_k estimation can be guaranteed if 𝒞_cr encompasses both 𝒩_m and 𝒪_m. Note that if 𝒞_cr covers 𝒩_m and 𝒪_m only at a minimum level, the range of 𝒩_m and 𝒪_m can possibly be disclosed to the server. Therefore, instead of only covering 𝒩_m and 𝒪_m, we allow the user to request a larger 𝒞_cr. Initially, the user randomly selects a location v_a from the LR location set 𝒩_m following a uniform distribution. Subsequently, the user reports a circle 𝒞(v_a, max{2Γ, Γ+r_obf}) to the server as the requested range of the cost reference table, with v_a serving as the center. The circle 𝒞(v_a, max{2Γ, Γ+r_obf}) covers all the locations in both 𝒩_m and 𝒪_m. First, since d_v_i, v_j≤ D_v_i, v_j and D_v_i, v_j≤Γ, ∀ v_j∈𝒩_m, we obtain that d_v_i, v_j≤Γ, ∀ v_j ∈𝒩_m. Also, according to Equ. (<ref>), we have d_v_i, v_j≤ r_obf, ∀ v_j ∈𝒪_m, which implies that d_v_i, v_j≤max{Γ, r_obf}, ∀ v_j ∈𝒩_m∪𝒪_m. In addition, d_v_i, v_a≤Γ because v_a is selected from the LR location set 𝒩_m. Then, according to the triangle inequality, d_v_a, v_j≤ d_v_i, v_a + d_v_i, v_j≤Γ + max{Γ, r_obf} = max{2Γ, Γ + r_obf} for each v_j ∈𝒩_m∪𝒪_m, indicating that 𝒞(v_a, max{2Γ, Γ+r_obf}) covers both 𝒩_m and 𝒪_m. Moreover, according to the requested range 𝒞(v_a, max{2Γ, Γ+r_obf}) and how the user selects the location v_a, the server can only infer that the user's real location is in the circle 𝒞(v_a, Γ), where Γ > γ, indicating that the user's location is well hidden from the server. §.§ Performance Analysis In this section, we provide the theoretical analysis of the performance of the LR-Geo solution (Equ. (<ref>)–(<ref>)), including the privacy guarantee in Theorem <ref> and the lower and upper bounds of the expected cost in Theorem <ref> and Theorem <ref>, respectively. The detailed proofs of these theorems can be found in Section <ref> in Appendix. §.§.§ Privacy guarantee We first prove that the obfuscated locations adhering to the exponential distribution constraints (Equ. (<ref>)) meet the Geo-Ind constraints across users even though their geo-obfuscation is calculated in a relatively independent manner. (Privacy guarantee) Given two locations, v_i from user n's LR location set 𝒩_n and v_j from user m's LR location set 𝒩_m, if both locations satisfy the condition q_i,k = q_j,k = 1 (indicating that z^(n)_i,k and z^(m)_j,k satisfy the exponential distribution constraints), then their obfuscation distributions still satisfy the (ϵ, γ)-Geo-Ind constraints, z^(n)_i,k - e^ϵ d_v_i, v_j z^(m)_j,k≤ 0, ∀ v_k ∈𝒱. Note that (ϵ, γ)-Geo-Ind is not guaranteed between z^(n)_i,k and z^(m)_j,k if one of them does not follow the exponential mechanism. However, our experimental findings in Fig. <ref> demonstrate that the unselected obfuscated locations still possess a high probability (98.04% on average) of meeting the Geo-Ind constraints in practice. §.§.§ Lower bound and upper bound of the expected cost Given the LR location sets 𝒩_m, we formulate the following relaxed LR-Geo problem: min ∑_m=1^Mℒ(𝐙_𝒩_m), s.t. the constraints in Equ. (<ref>)–(<ref>). (Upper bound of the minimum expected cost) Using the estimated cost coefficient ĉ_v_i,v_k in Equ. (<ref>), the solution of the CLR-Geo problem in Equ. (<ref>)–(<ref>) offers an upper bound of the minimum expected cost. Next, we define another cost estimation c̃_v_i,v_k by c̃_v_i,v_k = p_i (β_i,k - d_v_i, v̂_i - d_v_k, v̂_k), which gives a lower bound of the real c_v_i, v_k (see Lemma <ref> in Appendix).
(Lower bound of the minimum expected cost) Using the estimated cost in Equ. (<ref>), the solution of the relaxed LR-Geo problem in Equ. (<ref>)–(<ref>) offers a lower bound of the minimum expected cost. §.§ Discussion of Potential Inference Using the Estimated Cost Matrix In this part, we illustrate that it is hard to infer the locations in 𝒩_m and 𝒪_m using the estimated cost matrix 𝐂̂_𝒩_m, 𝒪_m. Fig. <ref>(a) gives an example, where the user calculates the cost coefficient ĉ_v_i,v_k = β_i,k + d_v_i, v̂_i + d_v_k, v̂_k = 197 + 1 + 2 = 200m (step 1), and uploads ĉ_v_i,v_k to the server (step 2). After receiving ĉ_v_i,v_k, a potential attack is to search the cost reference table for the β values whose corresponding cost coefficient, as estimated by a user, could possibly be 200m according to Equ. (<ref>). Note that both d_v_i, v̂_i and d_v_k, v̂_k in Equ. (<ref>) are unknown to the server, but the server can derive the maximum possible value of d_v_i, v̂_i and d_v_k, v̂_k, denoted by δ_max (e.g., δ_max = 28.28 in Fig. <ref>(a)), based on the distribution of 𝒱̂. In this case, the server can derive that the matched β value is in the interval [ĉ_v_i,v_k-δ_max, ĉ_v_i,v_k+δ_max] = [171.72, 228.28] (step 3), which might cover other β values in the cost reference table, like 205m and 182m in Fig. <ref>(a). In this case, the attacker cannot identify which β in the interval is the true β_i,k, and the more β values fall in the interval, the more difficult it is for the attacker to find the true β_i,k. Fig. <ref>(b) gives another example of how many β values can possibly match an estimated cost coefficient using real-world map information (more details can be found in our experiment in Section <ref>). In this example, the server creates a cost reference table covering 900 locations in 𝒱̂ using a grid map with each cell size equal to 100m. The maximum distance from a location in 𝒩_m and 𝒪_m to its nearest location in 𝒱̂ is 70.7m. Given an estimated cost coefficient ĉ_A = 400m, its corresponding β_A is in the interval [400m-70.7m, 400m+70.7m], and 76 β values fall in this interval. On average, each cost coefficient is matched by 83.13 rows of the cost reference table. More comprehensive experimental results can be found in Fig. <ref> in Section <ref>. § PERFORMANCE EVALUATION In this section, we conduct a simulation using real-world map information to evaluate the performance of LR-Geo in terms of computation efficiency, privacy, and cost, in comparison with several benchmarks <cit.>. Specifically, we focus on the application of vehicular spatial crowdsourcing <cit.>, such as Uber-like platforms <cit.>, where vehicles need to physically travel to a designated location to complete a task. We first introduce the settings of the experiment in Section <ref>, and then evaluate the performance of the different geo-obfuscation methods in Sections <ref>–<ref>. §.§ Settings §.§.§ Dataset We selected the city of Rome, Italy as the target region (specifically, the bounding area with coordinate (lat=41.66, lon=12.24) as the south-west corner and coordinate (lat=42.10, lon=12.81) as the north-east corner). The road map information of the target region, including both the node set and the edge set, is retrieved from OpenStreetMap <cit.>. We assume a uniform distribution of targets and consider that vehicles' mobility is constrained by the road network.
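For orientation, the sketch below (ours) lays an illustrative grid of discrete locations over the Rome bounding box given above; it is only a stand-in for the road-network-based discretization actually used in the experiments.

```python
import numpy as np

# Bounding box of the target region (Rome, Italy) as given above.
LAT_MIN, LON_MIN = 41.66, 12.24
LAT_MAX, LON_MAX = 42.10, 12.81

def grid_locations(n_per_side):
    """Return an (n_per_side^2) x 2 array of (lat, lon) grid points covering
    the bounding box; an illustrative stand-in for the road-network nodes."""
    lats = np.linspace(LAT_MIN, LAT_MAX, n_per_side)
    lons = np.linspace(LON_MIN, LON_MAX, n_per_side)
    return np.array([(la, lo) for la in lats for lo in lons])

V = grid_locations(20)      # K = 400 discrete locations
print(V.shape)
```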
§.§.§ Benchmarks We compare LR-Geo with the following benchmarks, which are all based on Geo-Ind: (1) LP-based geo-obfuscation (labeled as “LP”) <cit.>: LP considers the network-constrained mobility features of the vehicles and employs the LP formulated in Equ. (<ref>)(<ref>) to minimize the expected cost. (2) Laplacian noise (labeled as “Laplace”) <cit.>: Laplace adds a polar Laplacian noise ϕ to the real location, i.e., v_i + ϕ, and approximates it by the closest discrete location v_k = argmin_v∈𝒱 d_v, v_i+ϕ. (3) Exponential mechanism (labeled as “ExpMech”) <cit.>: In ExpMech, the probability distribution of the obfuscated location of each real location v_i follows a polar Laplace distribution z_i,k∝ e^-ϵ c_v_i, v_k/2. (4) “ConstOPTMech” or “ConstOPT” <cit.>: Like our approach, ConstOPT applies the exponential distribution constraint to a subset of the obfuscation probabilities and uses LP to optimize the remaining obfuscation probabilities, to balance the utility and scalability of the data perturbation method. §.§.§ Metrics We measure the following metrics to evaluate the performance of our method and the benchmarks: (i) Computation time, which is defined as the amount of time needed to calculate an obfuscation matrix. The experiments are performed on a desktop with a 13th Gen Intel Core i7 processor (16 cores). We used the Matlab LP toolbox, with the algorithm in <cit.>, to solve the LP. (ii) Expected cost ℒ(𝐙): ℒ(𝐙) is defined in Equ. (<ref>), meaning the expected estimation error of traveling cost caused by 𝐙. (iii) Geo-Ind violation (GV) ratio, which is defined as the ratio of the number of violated Geo-Ind constraints to the total number of Geo-Ind constraints checked over neighboring location pairs. The GV ratio reflects how well the derived obfuscation matrix achieves Geo-Ind. In the following experiment, by default, we set ϵ to 10.0km^-1, the cell size of the cost reference table to 0.1km, and the LR distance threshold Γ to 20km. §.§ Computation Efficiency In this part, we evaluate the computation time of our approach. §.§.§ Comparison with the benchmarks Table <ref> compares the computation times of LR-Geo against the four benchmark methods, where the number of locations K equals 100, 200, 300, and 400, respectively. The table reveals that while LR-Geo incurs marginally higher computation times than Laplace and ExpMech, it significantly outperforms both LP and ConstOPT in terms of efficiency. Specifically, at K = 300, LR-Geo demonstrates a remarkable reduction in computation time, showing a decrease of 99.69% and 97.84% compared to LP and ConstOPT, respectively. For both LP and ConstOPT, computation times exceed the 1800-second threshold when K ≥ 300. This enhanced efficiency of LR-Geo is due to its strategic approach of confining the set of locations under consideration to the LR locations only. Conversely, the alternative LP-based methods evaluate every location within the targeted area, resulting in substantial computational overhead. In addition, both Laplace and ExpMech attain slightly lower computation times than LR-Geo. This efficiency stems from their methodology of selecting obfuscated locations based on predefined probability distributions (the Laplacian and exponential distributions, respectively), bypassing the need for LP, which in turn reduces the computation overhead. However, a notable drawback of these two methods is their lack of an accurate estimation of the cost caused by geo-obfuscation. This oversight results in an increased cost associated with geo-obfuscation, as the chosen obfuscated locations may lead to high traveling distances to the designated locations.
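Under our reading of the GV ratio defined above (violated Geo-Ind constraints over all checked constraints between neighboring locations), the metric can be measured with a short sketch such as the following (ours), where rows maps each user's real location index to the obfuscation vector computed for it and planar distances are used for brevity.

```python
import numpy as np

def gv_ratio(rows, coords, eps, gamma):
    """rows: dict mapping a location index to its obfuscation vector (one row
    per user); coords: K x 2 planar coordinates (km).  Returns violated/checked."""
    idx = list(rows)
    checked = violated = 0
    for a, i in enumerate(idx):
        for j in idx[a + 1:]:
            d = float(np.linalg.norm(coords[i] - coords[j]))
            if d > gamma:
                continue                       # only neighboring pairs count
            bound = np.exp(eps * d)
            zi, zj = np.asarray(rows[i]), np.asarray(rows[j])
            checked += 2 * len(zi)
            violated += int(np.sum(zi > bound * zj)) + int(np.sum(zj > bound * zi))
    return violated / checked if checked else 0.0
```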
§.§.§ Scalability Table <ref> illustrates that the computation time for all algorithms escalates as the size of the location set K increases. Notably, even when K reaches 300, the average computation time for LR-Geo remains comparatively low, at approximately 33 seconds. We expanded our examination of K across a wider spectrum, from 100 to 1500, and charted the computation times of LR-Geo in Fig. <ref>(a). This figure reveals that the computation time for LR-Geo escalates in tandem with an increase in K, reaching approximately 110 seconds at K = 1500. Moreover, Fig. <ref>(b) presents the computation times for LR-Geo as the number of users varies from 1 to 10. As expected, there is a noticeable rise in computation time corresponding to an increase in the number of users. This trend is attributed to the framework of Benders' decomposition (introduced in Section <ref>), where the server is tasked with generating a subproblem for each user. The increase in the number of subproblems heightens the probability of encountering at least one subproblem that fails to converge quickly to the optimum, thereby prolonging the convergence time. §.§ Cost Measurement In this part, we evaluate the expected cost of our approach. §.§.§ Comparison with the benchmarks Table <ref> compares the expected costs incurred by the various algorithms for K = 100, 200, 300, 400. It is observed that LR-Geo significantly reduces the expected cost compared to Laplace and ExpMech. Specifically, LR-Geo's expected cost is, on average, 58.67% and 55.43% lower than that of Laplace and ExpMech, respectively. This is attributed to Laplace and ExpMech's reliance on Laplacian/exponential distributions for selecting obfuscated locations, which fail to accurately reflect the mobility constraints of vehicles within the road network, thereby elevating the cost. Furthermore, LR-Geo's cost performance is nearly on par with ConstOPT's for K=100,200, yet its cost is higher than LP's at K=100. Although LP is designed to achieve the global minimum cost by evaluating all potential locations within the target area, this advantage is negated by its extensive computational requirements. As indicated in Table <ref>, LP struggles to compute obfuscation matrices within the 1800-second limit, highlighting a critical trade-off between cost efficiency and computational feasibility. §.§.§ Comparison with the theoretical bounds To assess how closely LR-Geo can approach the optimum, we calculate a lower bound for the expected cost by solving the relaxed version of LR-Geo in Equ. (<ref>)–(<ref>), with the findings presented in Table <ref>. Here, we introduce the approximation ratio, defined as the quotient of the expected cost derived from LR-Geo over the calculated lower bound. A smaller approximation ratio indicates a closer proximity of LR-Geo's solution to the optimum. The results in the table indicate that, on average, the approximation ratio for the expected cost of LR-Geo stands at 1.125, 1.094, 1.279, and 1.120 for K = 100, 200, 300, 400, respectively. It is important to recognize that LR-Geo does not attain the optimal solution, since it operates with a constrained set of locations (the LR locations) rather than the entire location set. Furthermore, LR-Geo does not utilize exact cost coefficients; instead, it estimates these coefficients using a cost reference table.
Thus, it is interesting to test how the LR-Geo's approximation ratio is impacted by (i) the selection of the LR locations, determined by the parameter Γ, i.e., the LR distance threshold, and (ii) the accuracy of the cost coefficient estimation, determined by the cell size of the grid map of the cost reference table. Fig. <ref>(a) shows the variation in the approximation ratio of LR-Geo as Γ increases from 10km to 50km. As defined in Equ. (<ref>), Γ influences the size of the LR location set 𝒩_m, with a higher Γ resulting in a larger 𝒩_m. The figure indicates that the approximation ratio experiences a more pronounced decrease (averaging 4.14%) as Γ is increased from 10km to 20km. However, the decrease becomes marginal (only 2.05%) when Γ is further expanded from 20km to 50km. This observation suggests that enhancing Γ contributes to the optimality of the obfuscation matrices, yet beyond a certain threshold (20km in this instance), additional increases in Γ yield negligible improvements. Fig. <ref>(b) shows the approximation ratio of LR-Geo as the cell size increases from 0.05km to 0.25km. As expected, the approximation ratio escalates with the increase in cell size, indicating that finer granularity in the location's representation within the cost reference table allows LR-Geo to more closely approximate the optimal solution. Specifically, the approximation ratio remains relatively stable and low for cell sizes up to 0.15km. Beyond this point, particularly when the cell size surpasses 0.175km, the ratio sees a marked increase. This trend underscores the importance of maintaining a cell size at or below 0.15km to optimize cost efficiency. §.§ Privacy Measure In LR-Geo, the computation of obfuscation matrices for each user is performed independently. While the obfuscation probabilities that conform to the constraints of the exponential distribution (in Equ. (<ref>)) meet the Geo-Ind privacy criterion, as substantiated by Theorem <ref>, the remaining obfuscation probabilities do not guarantee Geo-Ind privacy. In this part, we examine the GV ratio as defined in Equ. (<ref>). Fig. <ref> shows the GV ratios for varying numbers of users. The figure reveals that the GV ratio remains exceptionally low, with a maximum of only 0.16%, demonstrating that, in practice, the Geo-Ind constraints are exceedingly likely to be met across the various obfuscation matrices tailored for different users. Finally, we investigate the potential risk associated with the upload of cost matrices, a concern discussed in Section <ref>. We simulate a scenario where a user uploads 100 cost matrices. We analyze, for each cost coefficient, the number of rows in the cost reference table that can be mapped to that coefficient. Intuitively, a greater number of rows mapped to a specific uploaded coefficient suggests a broader range of potential real and obfuscated location pairs, thereby diminishing the risk of LR location set disclosure (noting that the real location is within the LR location set). Fig. <ref> displays the number of rows mapped to the uploaded cost coefficients for various grid cell sizes. As anticipated, the quantity of rows corresponding to a given coefficient increases with the increase of the cell size, indicating an increase in ambiguity and a reduced risk of location inference. The figure also underscores the difficulty of deducing the real location from the uploaded cost coefficient, as, on average, each coefficient is matched by 83.13 rows, providing a significant degree of location privacy. 
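The row-matching argument above can be reproduced with a few lines (ours); given an uploaded coefficient and the bound δ_max, the sketch counts how many tabulated β values could have produced it.

```python
import numpy as np

def matching_rows(beta_values, c_hat, delta_max):
    """beta_values: 1-D array of tabulated beta entries; returns how many of
    them fall inside [c_hat - delta_max, c_hat + delta_max]."""
    b = np.asarray(beta_values, dtype=float)
    return int(np.sum((b >= c_hat - delta_max) & (b <= c_hat + delta_max)))

# Toy numbers mirroring the earlier example: beta values 197, 205, and 182 all
# match c_hat = 200 when delta_max = 28.28, so the true beta cannot be singled out.
print(matching_rows([197.0, 205.0, 182.0, 400.0], c_hat=200.0, delta_max=28.28))
```

The more rows that match an uploaded coefficient, the larger the set of candidate real/obfuscated location pairs consistent with it, which is exactly the ambiguity quantified in Fig. <ref>.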
§ RELATED WORKS The study of location privacy began nearly two decades ago with Gruteser and Grunwald's pioneering work <cit.>, where they introduced the concept of location k-anonymity. This idea has since evolved to include l-diversity, which ensures a user's location is indistinguishable from l-1 other locations <cit.>. However, the l-diversity model simplifies the threat landscape by assuming all alternative locations are equally probable as the user's actual location from an attacker's perspective. This assumption renders it susceptible to a range of sophisticated inference attacks <cit.>. In recent years, Andrés et al. <cit.> introduced a more applicable privacy criterion, Geo-Ind, grounded in the established concept of differential privacy (DP). Following this work, a large body of location obfuscation strategies have been proposed, e.g., <cit.>. Andrés et al., in their seminal work, not only proposed the Geo-Ind concept but also devised a method for achieving it by perturbing the actual location using a polar Laplacian distribution. Furthermore, as geo-obfuscation naturally introduces errors in the reported locations, thereby impacting the quality of LBS, a critical challenge addressed by several studies involves balancing the trade-off between service quality and privacy. For instance, within the constraints of Geo-Ind, Bordenabe et al. <cit.> developed an optimization framework for geo-obfuscation aimed at minimizing individual user costs. Chatzikokolakis et al. <cit.> introduced a concept of privacy mass for points of interest, determining the Geo-Ind privacy budget ϵ for a location based on the local characteristics of each area. Wang et al. <cit.> addressed the collective cost incurred by users, proposing a privacy-preserving target assignment algorithm to reduce the total travel expense. The majority of existing works in geo-obfuscation employ an LP framework, which generally necessitates O(|𝒱|^2) decision variables and O(|𝒱||ℰ|) linear constraints <cit.>, making the LP approach computationally intensive and challenging to implement on a large-scale LBS. Table <ref> compares the related geo-obfuscation methods in different categories, including Laplacian noise (“Lap.”), the exponential mechanism (“Exp.”), and LP-based methods (“LP”). As the table indicates, the computational complexity of LP restricts most geo-obfuscation studies to handling at most 100 discrete locations. However, recent advancements <cit.> have expanded the capability of processing secret datasets to approximately 300 records by leveraging Dantzig-Wolfe decomposition and column generation techniques. These studies primarily target LP models with Geo-Ind constraints applied across all pairs of secret records, facilitating the initialization process for column generation but are less applicable to broader geo-obfuscation challenges that only necessitate constraints for adjacent locations. Other innovative approaches, such as <cit.>, combine LP with the exponential mechanism to improve scalability, though this may lead to compromises in solution optimality. Given the time-sensitive natures of many LBS applications, existing geo-obfuscation methodologies are constrained to either low spatial resolution over large areas (for instance, <cit.> focuses on city-scale regions, discretizing the location field into a grid where each cell measures 766m by 766m), or to high resolution within smaller areas (as in <cit.>, which examines a small town with location points sampled every 500 square meters). 
Compared to those existing works, LR-Geo introduced in this paper substantially lowers computational costs while maintaining a degree of optimality. This advancement facilitates the application of geo-obfuscation in large-scale LBS applications, enabling more accurate representations of locations. § CONCLUSIONS We proposed to reduce the computation cost of the geo-obfuscation calculation by shrinking its range to a set of more relevant locations. Considering that the reduced geo-obfuscation range can possibly disclose the user's real location, we designed a remote computing strategy to migrate the geo-obfuscation calculation to the server without disclosing the location set covered by geo-obfuscation. The experimental results have demonstrated the superiority of our method in terms of privacy, service quality, and time efficiency, with the comparison of the selected benchmarks. = -1 We envision several promising directions to continue this research. Firstly, this paper considers a homogeneous mobility model, where a single cost reference table graph is sufficient to describe users' traveling costs. In reality, the users might be heterogeneous, e.g., a mixture of pedestrians and vehicles, and even a single user's mobility can possibly switch between different models. Then, how to model the mobility features of heterogeneous users using multiple cost reference table graphs is another problem to address. Moreover, considering the diverse privacy/utility preferences of users, we will design geo-obfuscation strategies that allow users to customize their privacy budgets. Then, how to design policies to incentivize users to balance individual benefits and the collective benefit of all users is an important problem to address. 10 Yu-NDSS2017 L. Yu, L. Liu, and C. Pu. Dynamic differential location privacy with personalized error bounds. In Proc. of IEEE NDSS, 2017. Bakken-IEEESP2004 D.E. Bakken, R. Rarameswaran, D.M. Blough, A.A. Franz, and T.J. Palmer. Data obfuscation: anonymity and desensitization of usable data sets. IEEE Security & Privacy, 2(6):34–41, 2004. Andres-CCS2013 M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In Proc. of ACM CCS, pages 901–914, 2013. Mendes-PETS2020 Ricardo Mendes, Mariana Cunha, and Joao Vilela. Impact of frequency of location reports on the privacy level of geo-indistinguishability. PoPETS, 2020:379–396, 04 2020. Simon-EuroSP2019 Simon Oya, Carmela Troncoso, and Fernando Pérez-González. Rethinking location privacy for unknown mobility behaviors. In 2019 IEEE EuroS&P, pages 416–431, 2019. Wang-WWW2017 L. Wang, D. Yang, X. Han, T. Wang, D. Zhang, and X. Ma. Location privacy-preserving task allocation for mobile crowdsensing with differential geo-obfuscation. In Proc. of ACM WWW, pages 627–636, 2017. Shokri-TOPS2017 Reza Shokri, George Theodorakopoulos, and Carmela Troncoso. Privacy games along location traces: A game-theoretic framework for optimizing location privacy. ACM Trans. Priv. Secur., 19(4), dec 2016. Shokri-CCS2012 R. Shokri, G. Theodorakopoulos, C. Troncoso, J. Hubaux, and J. L. Boudec. Protecting location privacy: Optimal strategy against localization attacks. In Proc. of ACM CCS, pages 617–627, 2012. Xiao-CCS2015 Yonghui Xiao and Li Xiong. Protecting locations with differential privacy under temporal correlations. In Proc. of the 22nd ACM SIGSAC Conference on Computer and Communications Security, oct 2015. Fawaz-CCS2014 K. Fawaz and K. G. Shin. 
Location privacy protection for smartphone users. In Proc. of ACM CCS, pages 239–250. ACM, 2014. Qiu-CIKM2020 C. Qiu, A. C. Squicciarini, Z. Li, C. Pang, and L. Yan. Time-efficient geo-obfuscation to protect worker location privacy over road networks in spatial crowdsourcing. In Proc. of ACM CIKM, 2020. Qiu-TMC2020 C. Qiu, A. C. Squicciarini, C. Pang, N. Wang, and B. Wu. Location privacy protection in vehicle-based spatial crowdsourcing via geo-indistinguishability. IEEE TMC, pages 1–1, 2020. Qiu-SIGSPATIAL2022 C. Qiu, L. Yan, A. Squicciarini, J. Zhao, C. Xu, and P. Pappachan. Trafficadaptor: An adaptive obfuscation strategy for vehicle location privacy against vehicle traffic flow aware attacks. In Proc. of ACM SIGSPATIAL, 2022. Al-Dhubhani-PETS2017 Raed Al-Dhubhani and Jonathan M. Cazalas. An adaptive geo-indistinguishability mechanism for continuous lbs queries. Wirel. Netw., 24(8):3221–3239, nov 2018. Wang-CIDM2016 Leye Wang, Daqing Zhang, Dingqi Yang, Brian Y. Lim, and Xiaojuan Ma. Differential location privacy for sparse mobile crowdsensing. In 2016 IEEE ICDM, pages 1257–1262, 2016. Linear Nonlinear Frederick S. Hillier. Linear and Nonlinear Programming. Stanford University, 2008. roma-taxi-20140717 Lorenzo Bracciale, Marco Bonola, Pierpaolo Loreti, Giuseppe Bianchi, Raul Amici, and Antonello Rabuffi. CRAWDAD dataset roma/taxi (v. 2014-07-17). Downloaded from <https://crawdad.org/roma/taxi/20140717>, July 2014. ImolaUAI2022 Jacob Imola, Shiva Kasiviswanathan, Stephen White, Abhinav Aggarwal, and Nathanael Teissier. Balancing utility and scalability in metric differential privacy. In Proc. of UAI 2022, 2022. Pappachan-EDBT2023 P. Pappachan, C. Qiu, A. Squicciarini, and V. Manjunath. User customizable and robust geo-indistinguishability for location privacy. In Proc. of International Conference on Extending Database Technology (EDBT), 2023. Qiu-EDBT2024 Chenxi Qiu, *Sourabh Yadav, Yuede Ji, Anna Squicciarini, Ramanamurthy Dantu, Juanjuan Zhao, and Chengzhong Xu. Fine-grained geo-obfuscation to protect workers' location privacy in time-sensitive spatial crowdsourcing. In Proceedings of 27th International Conference on Extending Database Technology (EDBT), 2024. yelp Yelp. <https://www.yelp.com/>, 2020. Accessed: 2020-04-07. waze Waze. <https://www.waze.com/>, 2019. Accessed: 2019-07-22. openstreetmap openstreetmap. <https://www.openstreetmap.org/>, 2020. Accessed: 2020-04-07. Algorithm Harsh Bhasin. Algorithms: Design and Analysis. Oxford Univ Press, 2015. Rahmaniani-EJOR2017 Ragheb Rahmaniani, Teodor Gabriel Crainic, Michel Gendreau, and Walter Rei. The benders decomposition algorithm: A literature review. European Journal of Operational Research, 259(3):801–817, 2017. Uber Uber. <https://www.uber.com/>, 2022. Accessed in October 2022. matlab MATLAB. <https://www.mathworks.com/products/matlab.html>, 2019. Accessed: 2019-07-22. Gruteser-MobiSys2003 M. Gruteser and D. Grunwald. Anonymous usage of location-based services through spatial and temporal cloaking. In Proc. of ACM MobiSys, 2003. Bordenabe-CCS2014 N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi. Optimal geo-indistinguishable mechanisms for location privacy. In Proc. of ACM CCS, pages 251–262, 2014. Chatzikokolakis-PoPETs2015 Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Marco Stronati. Constructing elastic distinguishability metrics for location privacy. PoPETs, 2015:156–170, 2015. Shokri-PoPETs2015 Reza Shokri. Privacy games: Optimal user-centric data obfuscation. Proc. 
on Privacy Enhancing Technologies, 2015(2):299 – 315, 2015. § APPENDIX § MATH NOTATIONS §.§ Detailed Notations in Benders' Decomposition * The coefficient matrices [𝐀_𝒩_m^GeoI, 𝐁_𝒩_m^GeoI] includes the Geo-Ind constraints between the obfuscation vectors of the locations in 𝒩_m: [𝐀_𝒩_m^GeoI, 𝐁_𝒩_m^GeoI] = [[ ⋱ ⋯ ⋯ ⋯ ⋰; ⋯ 1 ⋯ -e^ϵ d_v_i,v_j ⋯; ⋯ -e^ϵ d_v_i,v_j ⋯ 1 ⋯; ⋰ ⋯ ⋯ ⋯ ⋱; ]] [ }[ ∀ v_i, v_j ∈𝒩_m; d_v_i, v_j≤γ ].; ] * [𝐀_𝒩_m^unit, 𝐁_𝒩_m^unit] includes |𝒩_m| rows, where each row corresponds to the unit measure constraint of the obfuscation vector 𝐳_i of location v_i ∈𝒩_m. * 𝐛_𝒩_m^GeoI is an all-zeros vector, which corresponds to the right-hand side coefficients of the constraint matrix [𝐀_𝒩_m^GeoI, 𝐁_𝒩_m^GeoI] in the LP formulation. * 𝐛_𝒩_m^unit is an all-ones vector, which corresponds to the right-hand side coefficients of the constraint matrix [𝐀_𝒩_m^unit, 𝐁_𝒩_m^unit] in the LP formulation. ∑_m=1^Mℒ(𝐙_𝒩_m) = ∑_m=1^M∑_v_i∈𝒩_mℒ(𝐳^a_i) + ∑_m=1^M∑_v_i∈𝒩_mℒ(𝐳^b_i) = ∑_m=1^M∑_v_i∈𝒩_m∑_v_k ∈ I(v_i) c_v_i, v_ky_k e^-ϵ d_v_i, v_k/2 + ∑_m=1^M∑_v_i∈𝒩_mℒ(𝐳^b_i) = ∑_k=1^K ∑_m=1^M∑_v_i∈𝒩_m1_v_k ∈ I(v_i) c_v_i, v_k e^-ϵ d_v_i, v_k/2_ y_k + ∑_m=1^M∑_v_i∈𝒩_mℒ(𝐳^b_i) § OMITTED PROOFS §.§ Proof of Theorem <ref> We let {v_i, v_l_1, v_l_2, ..., v_l_n-1, v_l_n, v_j} represent the sequence of locations in the shortest path between v_i and v_j. Therefore, D_v_i, v_j = d_v_i, v_l_1 + ∑_m=1^n-1 d_v_l_m, v_l_m+1 + d_v_l_n, v_j. Since each pair of adjacent locations is geo-indistinguishable, for each v_k ∈𝒱, we have z_i,k/z_l_1,k≤ e^ϵ d_v_i, v_l_1, z_l_m,k/z_l_m+1,k≤ e^ϵ d_v_l_m, v_l_m+1 (m = 1, ..., n-1), z_l_n,k/z_j,k≤ e^ϵ d_v_l_n, v_j, from which we can derive that z_i,k/z_j,k = z_i,k/z_l_1,k∏_m=1^n-1z_l_m,k/z_l_m+1,kz_l_n,k/z_j,k ≤ e^ϵ d_v_i, v_l_1∏_m=1^n-1 e^ϵ d_v_l_m, v_l_m+1 e^ϵ d_v_l_n, v_j = e^ϵ(d_v_i, v_l_1 + ∑_m=1^n-1 d_v_l_m, v_l_m+1 + d_v_l_n, v_j) = e^ϵ D_v_i, v_j. The proof is completed. §.§ Proof of Proposition <ref> First, since the Haversine distance between v_m and v_j should be no larger than their path distance in the Geo-Ind graph, i.e., d_v_m, v_j≤ D_v_m, v_j. According to the definition of LR location set in Equ. (<ref>), ∀ v_j∈𝒩_m D_v_m, v_j≤Γ. Based on Equ. (<ref>) and Equ. (<ref>), we obtain that d_v_m, v_j≤Γ, ∀ v_j∈𝒩_m. According to Equ. (<ref>), we have d_v_m, v_j≤ r_obf, ∀ v_j ∈𝒪_m. According to Equ. (<ref>) and Equ. (<ref>), we have d_v_m, v_j≤max{Γ, r_obf},  ∀ v_j ∈𝒩_m∪𝒪_m. d_v_m, v_a≤Γ because v_a is selected within the LR location set 𝒩_m. Then, according to the triangle inequality, d_v_a, v_j ≤ d_v_m, v_a + d_v_m, v_j ≤ Γ + max{Γ, r_obf} = max{2Γ, Γ + r_obf}, for each v_j ∈𝒩_m∪𝒪_m, indicating that 𝒞(v_a, max{2Γ, Γ+r_obf}) covers both 𝒩_m and 𝒪_m. §.§ Proof of Theorem <ref> We prove it by considering the following three cases: Case 1: v_k is within the obfuscation range of both v_i and v_j, i.e., v_k ∈𝒪_i ∩𝒪_j. Then, z^(n)_i,k and z^(m)_j,k satisfy the constraint Equ. (<ref>): z^(n)_i,k = y_k e^-ϵ d_v_i, v_k/2, z^(m)_j,k = y_k e^-ϵd_v_j, v_k/2,  ∀ v_k implying that z^(n)_i,k - z^(m)_j,k e^ϵ d_v_i, v_j = y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ d_v_j, v_k/2 e^ϵ d_v_i, v_j = y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ (d_v_j, v_k-d_v_i, v_j)/2 e^ϵ d_v_i, v_j/2 ≤ y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ d_v_i, v_k/2 e^ϵ d_v_i, v_j/2  = y_k e^-ϵ d_v_i, v_k/2(1 - e^ϵ d_v_i, v_j/2) ≤ 0 Case 2: v_k is outside of the obfuscation range of either v_i or v_j. 
Without loss of generality, we consider the case v_k ∈𝒪_i and v_k ∉𝒪_j (meaning r_obf < d_v_j, v_k), indicating that z^(n)_i,k = y_k e^-ϵ d_v_i, v_k/2 and z^(m)_j,k = y_k e^-ϵ r_obf/2. Therefore, z^(n)_i,k - z^(m)_j,k e^ϵ d_v_i, v_j = y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ r_obf/2 e^ϵ d_v_i, v_j = y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ (r_obf-d_v_i, v_j)/2 e^ϵ d_v_i, v_j/2 < y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ (d_v_j, v_k-d_v_i, v_j)/2 e^ϵ d_v_i, v_j/2  ≤ y_k e^-ϵ d_v_i, v_k/2 - y_k e^-ϵ d_v_i, v_k/2 e^ϵ d_v_i, v_j/2  = y_k e^-ϵ d_v_i, v_k/2(1 - e^ϵ d_v_i, v_j/2) ≤ 0 Case 3: v_k is outside of the obfuscation range of both v_i and v_j, i.e., v_k ∉𝒪_i ∪𝒪_j. In this case, z^(n)_i,k = z^(m)_j,k = y_k e^-ϵ r_obf/2, and it is trivial to prove that z^(n)_i,k - e^ϵ d_v_i, v_j z^(m)_j,k≤ 0, since e^ϵ d_v_i, v_j≥ 1. The proof is completed. §.§ Proof of Theorem <ref> Before proving Theorem <ref>, we first introduce the following lemma: The actual cost c_v_i, v_k between location v_i and v_k is upper bounded by the estimated cost ĉ_v_i,v_k. The detailed proof of this lemma can be found in Section <ref>. Let 𝐙̂_𝒩_m = {ẑ^(m)_i,k}_(v_i,v_k)∈𝒩_m ×𝒪_m denote the optimal solution of the CLR-Geo problem in Equ. (<ref>)–(<ref>) using the estimated cost matrix 𝐂̂_𝒩_m, 𝒪_m (m = 1, ..., M). Then, for each user m, the minimum expected cost calculated by the CLR-Geo problem is given by ℒ(𝐙̂_𝒩_m) = ∑_v_i∈𝒩_m∑_v_k∈𝒪_mĉ_v_i,v_kẑ^(m)_i,k ≥ ∑_v_i∈𝒩_m∑_v_k∈𝒪_m c_v_i,v_kẑ^(m)_i,k  ≥ ∑_v_i∈𝒩_m∑_v_k∈𝒪_m c_v_i,v_kz^(m)*_i,k_ where 𝐙_𝒩_m^* = {z^(m)*_i,k}_(v_i,v_k)∈𝒩_m ×𝒪_m denote user m's optimal obfuscation matrix that achieves the minimum cost. The proof is completed. §.§ Proof of Lemma <ref> According to c_v_i, v_k's definition (Equ. (<ref>)), c_v_i, v_k = p_i ∑_j=1^Q q_j |d_v_i, v_j - d_v_k,v_j| = p_i ∑_v_j ∈𝒬' q_j (d_v_i, v_j - d_v_k,v_j) + p_i ∑_v_j ∈𝒬” q_j (d_v_k,v_j - d_v_i, v_j) ≤ p_i ∑_v_j∈𝒬' q_j ((d_v̂_i,v_j + d_v_i, v̂_i)_≥ d_v_i, v_j - (d_v̂_k,v_j - d_v_k,v̂_k)_≤ d_v_j,v_k) + p_i ∑_v_j∈𝒬” q_j ((d_v̂_k,v_j + d_v_k,v̂_k)_≥ d_v_j,v_k - (d_v̂_i,v_j - d_v_i, v̂_i)_≤ d_v_i, v_j) = p_i ∑_j=1^Q q_j |d_v̂_i,v_j - d_v̂_k,v_j| + p_i ∑_j=1^Q q_j (d_v_i, v̂_i + d_v_k,v̂_k) = p_i β_i,k - p_i (d_v_i, v̂_i + d_v_k,v̂_k) = ĉ_v_i,v_k. c_v_i, v_k = p_i ∑_j=1^Q q_j |d_v_i, v_j - d_v_k,v_j| ≥ p_i ∑_j=1^Q q_j (d_v_i, v_j - d_v_k,v_j) ≥ p_i ∑_j=1^Q q_j ((d_v̂_i,v_j - d_v_i, v̂_i)_≤ d_v_i, v_j - (d_v̂_k,v_j + d_v_k,v̂_k)_≥ d_v_j,v_k) = p_i ∑_j=1^Q q_j (d_v̂_i,v_j - d_v̂_k,v_j) - p_i ∑_j=1^Q q_j (d_v_i, v̂_i + d_v_k,v̂_k) = p_i β_i,k - p_i (d_v_i, v̂_i + d_v_k,v̂_k) = ĉ_v_i,v_k. §.§ Proof of Theorem <ref> Before proving Theorem <ref>, we first introduce the following lemma: The actual cost c_v_i, v_k between location v_i and v_k is lower bounded by the estimated cost c̃_v_i,v_k. The detailed proof of this lemma can be found in Section <ref>. Let 𝐙̃_𝒩_m = {z̃^(m)_i,k}_(v_i,v_k)∈𝒩_m ×𝒪_m denote the optimal solution of the relaxed LR-Geo problem in Equ. (<ref>)–(<ref>) using the estimated cost matrix 𝐂̃_𝒩_m, 𝒪_m (m = 1, ..., M). Then, for each user m, the minimum expected cost calculated by the relaxed LR-Geo problem is given by ℒ(𝐙̃_𝒩_m) = ∑_v_i∈𝒩_m∑_v_k∈𝒪_mc̃_v_i,v_kz̃^(m)_i,k ≤ ∑_v_i∈𝒩_m∑_v_k∈𝒪_mc̃_v_i,v_kz^(m)*_i,k  ≤ ∑_v_i∈𝒩_m∑_v_k∈𝒪_m c_v_i,v_kz^(m)*_i,k_ where 𝐙_𝒩_m^* = {z^(m)*_i,k}_(v_i,v_k)∈𝒩_m ×𝒪_m denote user m's optimal obfuscation matrix that achieves the minimum cost. The proof is completed. §.§ Proof of Lemma <ref> According to c_v_i, v_k's definition (Equ. 
(<ref>)), c_v_i, v_k = p_i ∑_j=1^Q q_j |d_v_i, v_j - d_v_k,v_j| ≥ p_i ∑_j=1^Q q_j (d_v_i, v_j - d_v_k,v_j) ≥ p_i ∑_j=1^Q q_j ((d_v̂_i,v_j - d_v_i, v̂_i)_≤ d_v_i, v_j - (d_v̂_k,v_j + d_v_k,v̂_k)_≥ d_v_j,v_k) = p_i ∑_j=1^Q q_j (d_v̂_i,v_j - d_v̂_k,v_j) - p_i ∑_j=1^Q q_j (d_v_i, v̂_i + d_v_k,v̂_k) = p_i β_i,k - p_i (d_v_i, v̂_i + d_v_k,v̂_k) = c̃_v_i,v_k.
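To complement the case analysis in the proof of Theorem <ref> above, the toy sketch below builds obfuscation probabilities of the exponential form z_i,k = y_k e^-ϵ min(d_v_i, v_k, r_obf)/2 and numerically spot-checks the Geo-Ind inequality between all pairs of real locations. The coordinates, the multipliers y_k, and the parameter values are invented purely for illustration (Euclidean distances are used so that the triangle inequality assumed in the proofs holds); they are not taken from the experimental setup.

```python
import numpy as np

def exponential_obfuscation(y, d_real_to_obf, eps, r_obf):
    """z[i, k] = y[k] * exp(-eps * min(d(v_i, v_k), r_obf) / 2)."""
    capped = np.minimum(d_real_to_obf, r_obf)  # outside the obfuscation range, fall back to r_obf
    return y[None, :] * np.exp(-eps * capped / 2.0)

# Toy example with Euclidean distances so that the triangle inequality used in the proofs holds.
rng = np.random.default_rng(0)
real_pts, obf_pts = rng.uniform(0, 4, (5, 2)), rng.uniform(0, 4, (8, 2))
d_ro = np.linalg.norm(real_pts[:, None, :] - obf_pts[None, :, :], axis=-1)   # real-to-obfuscated
d_rr = np.linalg.norm(real_pts[:, None, :] - real_pts[None, :, :], axis=-1)  # real-to-real

Z = exponential_obfuscation(rng.uniform(0.1, 1.0, 8), d_ro, eps=0.8, r_obf=2.5)
geo_ind_ok = all(
    np.all(Z[i] <= np.exp(0.8 * d_rr[i, j]) * Z[j] + 1e-12)
    for i in range(5) for j in range(5)
)
print(geo_ind_ok)  # expected: True, matching Cases 1-3 of the proof
```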
http://arxiv.org/abs/2407.12463v1
20240717102851
Progressive Proxy Anchor Propagation for Unsupervised Semantic Segmentation
[ "Hyun Seok Seong", "WonJun Moon", "SuBeen Lee", "Jae-Pil Heo" ]
cs.CV
[ "cs.CV" ]
Progressive Proxy Anchor Propagation H.S. Seong et al. Sungkyunkwan University Progressive Proxy Anchor Propagation for Unsupervised Semantic Segmentation Hyun Seok Seong0000-0002-7952-2017 WonJun Moon0000-0003-2805-0926 SuBeen Lee0009-0005-1470-1160 Jae-Pil HeoCorresponding author0000-0001-9684-7641 July 22, 2024 ======================================================================================================================================================= § ABSTRACT The labor-intensive labeling for semantic segmentation has spurred the emergence of Unsupervised Semantic Segmentation. Recent studies utilize patch-wise contrastive learning based on features from image-level self-supervised pretrained models. However, relying solely on similarity-based supervision from image-level pretrained models often leads to unreliable guidance due to insufficient patch-level semantic representations. To address this, we propose a Progressive Proxy Anchor Propagation (PPAP) strategy. This method gradually identifies more trustworthy positives for each anchor by relocating its proxy to regions densely populated with semantically similar samples. Specifically, we initially establish a tight boundary to gather a few reliable positive samples around each anchor. Then, considering the distribution of positive samples, we relocate the proxy anchor towards areas with a higher concentration of positives and adjust the positiveness boundary based on the propagation degree of the proxy anchor. Moreover, to account for ambiguous regions where positive and negative samples may coexist near the positiveness boundary, we introduce an instance-wise ambiguous zone. Samples within these zones are excluded from the negative set, further enhancing the reliability of the negative set. Our state-of-the-art performances on various datasets validate the effectiveness of the proposed method for Unsupervised Semantic Segmentation. Our code is available at https://github.com/hynnsk/PPAPhttps://github.com/hynnsk/PPAP. § INTRODUCTION Semantic Segmentation plays a vital role in various fields, including robotics and autonomous driving <cit.>. With the abundant data available in media, developing high-quality semantic segmentation models has become feasible <cit.>, though this has also increased the demand for extensive human annotations. Likewise, the increasing burden on human labor has spurred the emergence of Unsupervised Semantic Segmentation (USS) <cit.>. The main challenge in USS stems from the lack of supervision to train the model. To overcome this, prior works <cit.> suggested first learning the image-level representation space and then leveraging this knowledge to develop the ability of pixel-level understanding. Likewise, utilizing self-supervised pretrained models <cit.> to provide supervision in USS became mainstream. By employing these foundation models, previous techniques have demonstrated promising results, particularly by learning the relationship among image patches in the dataset <cit.>. Yet, we notice that existing methods still encounter challenges in discovering trustworthy relationships between patches. For instance, HP <cit.>, the latest USS technique based on contrastive learning, utilized k-th nearest neighbor of each anchor to determine an appropriate boundary for positive set selection. While discretizing samples with such a boundary provides an intuitive basis, it often leads to unreliable supervision. 
This is because they exclusively rely on the similarity metric on a per-patch basis within an imperfect embedding learned in an unsupervised manner at the image-level <cit.>. Consequently, as illustrated in Fig. <ref>, this approach may cause anchors to gather false positives (FP) [The italic `false positives (FP)' and `true positives (TP)' <cit.> represent samples that incorrectly and correctly included in the set, respectively, throughout the paper.] in the positive set especially when they are located in data-sparse areas or near the semantic boundaries while encouraging anchors in dense regions to repel FP in the negative set. Specifically, in Fig. <ref>, we present a quantitative comparison between HP and our method, focusing on the number and precision of collected positives. As observed, we note that only 77.5% of the positives identified by HP match the ground truth labels, even though the number of positives is insufficient. Precision further decreases for instances in data-sparse regions; samples with a small number of gathered positives exhibit even lower precision (bottom 10% samples retain only 33.81% precision on discovered positives). This indicates the inclusion of a substantial number of FP in the positive set, thereby attracting semantically dissimilar samples and leading to unstable learning. To mitigate these issues, we propose a Progressive Proxy Anchor Propagation (PPAP) strategy to deal with the vulnerability of the per-patch-based similarity metric in an image-level pretrained embedding space. Our goal is to establish a reliable proxy anchor by considering the data distribution surrounding each anchor, thereby gathering patches with more trustworthy positive and negative relationships minimizing ambiguity. This approach can also obtain a larger number of training guidance as it enhances the precision of gathered relationships. Specifically, to discover the position of the proxy anchor, we begin by defining a tight boundary around each anchor to construct a small, reliable positive set. The rationale behind establishing a tight boundary is rooted in the observation that samples within closely adjacent regions are highly likely to share similar semantics even within the image-level pretrained embedding space. Subsequently, we iteratively undertake the following two steps to enlarge a trustworthy positive set per anchor: 1) Re-define the position of a proxy anchor based on the distribution of identified positive samples, 2) Lower the similarity threshold for the positiveness criterion, i.e., expand the boundary, according to the reliability of the new proxy anchor position, and gather the updated positive set. Likewise, by discovering the samples with similar semantics and moving the proxy anchor towards the center point of such samples, we expect the assembly of trustworthy positives. This strategy enables collecting a large number of positive patches with high precision, as shown in Fig <ref>. Still, a positiveness boundary might not be a perfect measure to detect all the positive samples. In other words, there exists a degree of ambiguity around the boundary, where both positive and negative instances might coexist. To address this, we expand the original binary relationship categorization of samples, i.e., positive and negative, for contrastive learning into tri-partite groups, i.e., positive, negative, and ambiguous. The size of the ambiguous set is determined based on the reliability of the relocated proxy anchor. 
Consequently, while utilizing the positive and negative sets in contrastive learning, we disregard the ambiguous set, as including FP in the negative set often disrupts the stable training <cit.>. Overall, our contributions are summarized as follows: * We propose Progressive Proxy Anchor Propagation (PPAP), which systematically gathers trustworthy positive samples for each anchor by progressively analyzing the distribution of the positive samples. * We establish an ambiguity-excluded negative set based on the propagated proxy anchor, defining a semantically ambiguous zone for each anchor. This approach effectively eliminates potential FP in the negative set. * The efficacy of our trustworthy contrastive learning is validated by achieving new state-of-the-art performances across diverse datasets. § RELATED WORK §.§ Unsupervised Semantic Segmentation Semantic segmentation aims to classify the semantics of individual pixels within an image <cit.>. In recent years, the integration of transformers into semantic segmentation has emerged as a promising research direction <cit.>. However, achieving pixel-wise supervision requires extensive human labor. The necessity of learning semantic segmentation without supervision has become apparent in recent literature <cit.>. Earlier trials <cit.> learned to maintain consistent semantics across the paired features. In contrast, recent techniques <cit.> have employed Vision Transformer (ViT) models trained in a self-supervised manner as backbone networks to transfer knowledge to the segmentation head. For instance, transFGU <cit.> grouped target datasets based on prior knowledge and generated pseudo-labels to train the segmentation model. STEGO <cit.> tried to maintain the patch relationships in the segmentation head by distilling feature correspondences to segmentation correspondences. HP <cit.>, on the other hand, identified hidden positives using the k-th nearest neighbor criterion to guide the contrastive objective. Our goal aligns with previous works in seeking pseudo-supervision by considering patch relationships. However, the key difference lies in our approach of considering data distribution to find trustworthy pseudo-supervision within the imperfect embedding space. §.§ Self-supervised Representation Learning Self-supervised representation learning has long been spotlighted for its effectiveness in providing a decent initialization point for various downstream tasks <cit.>. There are several prevalent approaches in this domain, including pretext tasks which learn the representation by reconstructing the original input from augmented images <cit.>, relation-based approaches <cit.> and masked-modeling approaches <cit.>. While masked-modeling approaches excel at preserving local context, they are often less efficient in learning discriminative representations <cit.>. Therefore, relation-based approaches <cit.>, particularly DINO <cit.>, are popularly employed in the realm of USS <cit.>. Although the features from DINO are powerful in describing semantics for the whole image, their direct use for semantic segmentation proves effective due to the model being trained at the image-level, as shown in Fig <ref>. In this regard, we have developed algorithms to complement the representation of DINO for the promotion of such features to reflect pixel-level semantics. § METHOD §.§ Background and Overview Recently, it has become mainstream to utilize the positive relationships among patches in training for Unsupervised Semantic Segmentation (USS) <cit.>. 
They exploited the patch-wise embeddings from a pretrained foundation model. However, we claim that they heavily relied on the similarity measured in an imperfect embedding space for inferring patch-level training guidance. Instead, in this paper, we suggest the importance of considering the data distribution; since not all anchors are highly likely to be densely surrounded by semantically similar patch features in the embedding space, we aim to search for better spots to gather a sufficient number of trustworthy positive and negative samples. The architecture of our method is illustrated in Fig. <ref>. Following the recent works <cit.>, our goal is to learn an appropriate projection function for the features extracted from a pretrained model suitable for the USS task. To achieve this, we define two streams using the pretrained ViT; for the first stream we keep all the blocks frozen to provide reliable supervision, and for the other stream we finetune the last block for adapting features to the semantic segmentation task. Given a mini-batch of images {𝐱_b}_b=1^B, the former stream computes pairs of B× H× W patch features 𝐟_i ∈ℝ^D, where H× W is the number of patch features for an image and D stands for the dimension of the embedding space. On the other hand, the latter stream, producing projected patch features 𝐳_i ∈ℝ^D, is finetuned with the gathered positive and negative sets. Specifically, the process begins by determining the positive set 𝒫_i with 𝐟_i. Through the iterative process of positive gathering and proxy anchor relocation, we construct a trustworthy positive set. Afterward, we determine the ambiguity-excluded negative set to train 𝐳_i. In the following sections, we discuss the Progressive Proxy Anchor Propagation strategy to obtain the trustworthy positive and ambiguity-excluded negative sets in Sec. <ref> and Sec. <ref>, respectively. §.§ Progressive Proxy Anchor Propagation Collecting a sufficient amount of trustworthy pseudo-supervision is a cumbersome task but crucial for the performance in USS <cit.>. To this end, we propose a Progressive Proxy Anchor Propagation algorithm to identify a reliable region for each anchor, where samples semantically similar to the anchor are densely located, as described in Fig. <ref>. The propagation process begins by forming an initial positive set comprising samples that are highly adjacent to each anchor. Subsequently, the algorithm employs an iterative process composed of the following two steps: 1) relocate the proxy anchor towards more densely populated regions identified from the distribution of gathered positives, and 2) identify positive samples around the proxy anchor according to the expanded boundary. Note that the proxy anchor (i.e., the relocated anchor) provides a positive collection criterion on behalf of the anchor, with a boundary for positive collection proportional to the reliability of the proxy anchor's new position, which is measured by the propagation degree of the proxy anchor. The proxy anchor position is considered more reliable if it does not move significantly, suggesting it is already surrounded by samples with similar semantics. This enables each anchor to gather numerous trustworthy positive samples.
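A minimal sketch of this two-step loop is given below; the precise criteria and update rules are formalised in the equations that follow. The sketch assumes L2-normalised patch features f of shape (N, D), so that a dot product plays the role of the cosine similarity, and re-normalising the relocated proxy is an implementation choice made here for illustration rather than a detail prescribed by the method.

```python
import torch
import torch.nn.functional as F

def ppap_positive_sets(f, phi0=0.55, sigma_pos=3.0, T=2):
    """Sketch of Progressive Proxy Anchor Propagation for one mini-batch of patch features."""
    f = F.normalize(f, dim=1)                             # (N, D); dot product = cosine similarity
    n = f.size(0)
    phi = torch.full((n,), phi0, dtype=f.dtype, device=f.device)  # per-anchor positiveness criterion Phi
    v = f.clone()                                         # proxy anchors start at the anchors (v^0 = f)
    pos = (f @ f.t()) > phi.unsqueeze(1)                  # initial positive sets P^0
    for _ in range(T):
        # Step 1: relocate each proxy anchor to the mean of its current positives.
        counts = pos.sum(dim=1, keepdim=True).clamp(min=1)
        v_new = F.normalize((pos.to(f.dtype) @ f) / counts, dim=1)
        # Step 2: lower the criterion by (1 - cos(v^{t-1}, v^t)) / sigma_pos, i.e. expand the boundary.
        phi = phi - (1.0 - (v * v_new).sum(dim=1)) / sigma_pos
        v = v_new
        pos = (v @ f.t()) > phi.unsqueeze(1)              # positive sets P^t gathered around the proxy
    return pos, v, phi
```

Here Φ^0, σ_pos, and T are hyperparameters; their effect is examined in the ablation study.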
Specifically, the initial positive set 𝒫_i^0 of a given anchor 𝐟_i is obtained by applying the initial positiveness criterion Φ^0 to gather as below: 𝒫_i^0 = {j |𝐟_i·𝐟_j > Φ_i^0, j ∈ℬ}, ∀_i Φ_i^0=Φ^0, where ℬ denotes the set containing all patch features within the mini-batch, i.e., |ℬ|=B× H× W, and (·) refers to the similarity measure between two vectors (typically the cosine similarity). Here, the criterion Φ^0 is to decide whether all other patch features in the mini-batch are positive or not, based on the similarity threshold. This initial threshold is set to be big enough to make a tight criterion and is shared across all anchors. Such a tight boundary from a large initial criterion is for stable proxy anchor relocation since the samples with close proximity are more likely to be semantically similar. Then, to propagate the proxy anchor toward the positives-dominant region, we derive the new proxy anchor position by averaging the collected positive set. This way, we take the distribution of the gathered positives into account. Formally, out of total T steps of relocation, we present the t-th relocated position of an anchor 𝐟_i as 𝐯^t_i, as follows: 𝐯_i^t = 1/|𝒫^t-1_i|∑_j ∈𝒫^t-1_i𝐟_j. To account for using the average points of the positive set, we posit that the close vicinity of the proxy anchor is highly likely to retain the same semantic. Thus, we claim that the center point of the gathered positives will move the proxy anchor closer to a dense region populated with semantically similar samples. Furthermore, the positiveness criterion Φ_i^t should be reliability-adaptively adjusted. We determine the reliability of 𝐯_i^t based on the similarity to the previous proxy anchor position 𝐯_i^t-1 since we assume that the proxy anchor point is converged to the center point of semantically similar patches if the scope of the propagation in a single step is limited (i.e., high similarity between 𝐯_i^t-1 and 𝐯_i^t). With such intuition, we revise the criterion Φ_i^t to be loosened when there is high reliability (𝐯_i^t and 𝐯_i^t-1 are in close proximity) on the position 𝐯_i^t as follows: Φ_i^t = Φ_i^t-1 - (1 - (𝐯_i^t-1·𝐯_i^t))/σ_pos, where σ_pos is a coefficient used to prevent excessive reduction of the criterion. Note that 𝐯_i^0=𝐟_i. Consequently, the positive set at iteration t is expressed with the revised criterion Φ_i^t with the new proxy anchor point 𝐯_i^t as follows: 𝒫_i^t = {j |𝐯_i^t ·𝐟_j > Φ_i^t, j ∈ℬ}. The process above is iteratively performed (Eq. <ref> - Eq. <ref>) for T times to discover a reliable zone to sample the positives 𝒫_i^T. §.§ Ambiguity-excluded Negative Set Along with the importance of gathering trustworthy positive sets for contrastive learning, preserving the reliability of the negative sets is another important factor <cit.>. Accordingly, we utilize the propagated proxy anchor 𝐯_i^T as the base to compose the negative set to prevent the conflict to the positive set 𝒫_i^T. However, we point out the presence of an ambiguous zone for each anchor where positives and negatives are intermixed, making it unclear to categorize them exactly on one side. When the samples in such a zone are considered negatives, the model may face unwanted repulsion. Derived from such motivation, we additionally define an ambiguous set for each anchor that is neither included in the positive set nor the negative set, thereby excluding them from the learning process. 
The method to establish the ambiguous set is similar to the process for positive set sampling: we update the criterion over the T steps of the proxy anchor propagation in an anchor-dependent manner and define the set according to this criterion. However, the key difference lies in how we determine the initial ambiguity criterion Ψ^0. Unlike the boundary Φ for positive selection, we set Ψ to a small value at the initial step to serve as a loose boundary, since the vicinal areas of an initial anchor 𝐟_i might not be reliable. This criterion is then progressively raised in the subsequent steps. Given the initial anchor point 𝐯_i^0 as 𝐟_i and the t-th propagated proxy anchor point 𝐯_i^t through proxy anchor propagation (Eq. <ref>), we progressively adjust the ambiguity criterion by: Ψ_i^t = Ψ_i^t-1 + (1 - (𝐯_i^t-1·𝐯_i^t))/σ_amb, where σ_amb is a coefficient to prevent excessive increase of the criterion and ∀_iΨ_i^0=Ψ^0. Through t steps, Ψ_i^t tightens the boundary if the position of the proxy anchor becomes densely surrounded by the positives. In other words, if the relocated proxy anchor is positioned in a densely populated area, there is less probability of having semantically alike samples outside the positive sampling region formed with Eq. <ref>. After determining the ambiguity criterion through T steps, we then proceed to define the ambiguous set. Using the T-th relocated proxy anchor 𝐯_i^T, the ambiguous set 𝒜_i for the anchor 𝐟_i is defined as follows: 𝒜_i = {j | (𝐯_i^T ·𝐟_j > Ψ_i^T) ∧ (𝐯_i^T ·𝐟_j < Φ_i^T), j ∈ℬ}. Finally, the negative set 𝒩_i is organized as follows: 𝒩_i = {j | (j ∉𝒫_i) ∧ (j ∉𝒜_i), j ∈ℬ}. §.§ Training Objective Following existing works <cit.> in USS, we utilize a contrastive learning objective <cit.>. With the aim of distinguishing the semantically similar positive set 𝒫_i^T from the dissimilar negative set 𝒩_i, the objective is expressed as: L^con_i = -1/|𝒫^T_i|∑_p ∈𝒫^T_ilogexp(𝐳_i ·𝐳_p / τ)/∑_n ∈ (𝒩_i ∪𝒫_i^T)exp(𝐳_i ·𝐳_n / τ), where τ is a temperature parameter, and 𝒫^T_i and 𝒩_i denote the positive and negative sets for the i-th anchor, respectively. § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets. Following previous protocols <cit.>, we evaluate our method on the COCO-stuff <cit.>, Cityscapes <cit.>, Potsdam-3, and ImageNet-S <cit.> datasets. Further details can be found in the Appendix. COCO-stuff is a dataset for scene understanding tasks, e.g., semantic segmentation, detection, and image captioning, that consists of 172 classes. Among them, the COCO-stuff benchmark for USS utilizes 27 classes. Cityscapes is another large-scale dataset for scene understanding that consists of 30 classes captured across 50 different cities. Similarly to COCO-stuff, 27 subclasses are used for the benchmark. In addition, Potsdam-3 contains satellite images that are divided into 3 classes. Lastly, ImageNet-S is a large-scale dataset which has 1.2 million training images with 919 semantic classes. §.§.§ Evaluation Protocols. For the COCO-stuff, Cityscapes, and Potsdam-3 datasets, we adopt two evaluation methods: clustering (unsupervised) and linear probe <cit.>. Clustering evaluates the alignment between the prediction and the ground truth with the Hungarian matching algorithm. On the other hand, the linear probe utilizes an additional fully connected layer for classification. For both evaluations, we apply a post-processing step using a Conditional Random Field (CRF) <cit.> to refine the predictions. Accuracy (Acc.) and mean Intersection over Union (mIoU) are used to measure the performances.
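For concreteness, the following is an illustrative sketch of the clustering evaluation described above, in which predicted cluster indices are matched to ground-truth classes with the Hungarian algorithm before the pixel accuracy is computed. It assumes flattened integer label arrays sharing a common number of classes and is not the exact evaluation code used to produce the reported numbers; mIoU can be computed analogously from the remapped predictions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def unsupervised_accuracy(pred, gt, n_classes):
    """Match predicted cluster ids to ground-truth classes with the Hungarian algorithm,
    then report pixel accuracy under the best one-to-one assignment."""
    pred, gt = pred.ravel(), gt.ravel()
    confusion = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(confusion, (pred, gt), 1)            # confusion[c_pred, c_gt] = pixel count
    row, col = linear_sum_assignment(confusion, maximize=True)
    mapping = dict(zip(row, col))                  # predicted cluster id -> matched class id
    remapped = np.vectorize(mapping.get)(pred)
    return float(np.mean(remapped == gt))
```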
For the evaluation on the ImageNet-S dataset, we adopt mIoU with distance matching i.e., the k-nearest neighbors classifier with k=10, following the evaluation protocol from PASS <cit.>. §.§ Experimental Results §.§.§ Quantitative Result. We compare the performances of our method with various baselines <cit.>. In Tab. <ref>, we observe that the recent works with the ViT backbone outperform the other ones, and among them, our approach demonstrates state-of-the-art performances across all metrics. In particular, our PPAP, equipped only with the sampling strategies for both the positive and negative samples, exceeds HP <cit.> that utilizes contrastive learning also with locality learning in a task-specific perspective. In terms of the backbone, we achieve greater improvements with the ViT-S/16, which uses larger-sized patch features. We attribute these results to the robust property of our PPAP. Unlike PPAP, other methods are shown to yield better results with the small patch size (ViT-S/8 backbone) since the patches with the larger size are more likely to include a mixture of semantics. Yet, our proposed PPAP is guided to search for semantically similar patches to learn its prototypical proxy point and even disregard the patches that retain an ambiguous relationship with the given anchor. As a result, PPAP achieves promising results by a margin up to 15.41% and 9.05% compared to HP <cit.> in unsupervised Acc. and mIoU, respectively. Performance comparison on the Cityscapes dataset is displayed in Tab. <ref>. Similar to the results on the COCO-stuff dataset, our proposed PPAP achieves new state-of-the-art results except in one case. These results further verify the applicability of our components to different datasets. PPAP also shows improvement on the Potsdam-3 as shown in Tab. <ref>, but the difference is modest due to the dataset's limited three distinct semantic classes, well clustered by pretrained ViT features. However, our method demonstrates greater enhancements on datasets with more and less distinct semantic classes, not as effectively distinguished by the pretrained backbones. On the ImageNet-S dataset, PPAP significantly outperforms existing methods, PASS <cit.> and HP <cit.>, as shown in Tab. <ref>. We also conduct experiments on the subsets ImageNet-S_300 and ImageNet-S_50, which contain 300 and 50 classes, respectively. The superior performance of PPAP on these datasets further demonstrates its scalability compared to existing methods. Our PPAP algorithm shares similarities with k-means clustering in that both methods iteratively find and relocate points through averaging. However, there are three critical differences: 1) PPAP aims to discover patch-wise proxy anchor rather than having instances within a cluster share a single proxy. 2) PPAP enables a stable initial relocation process by using only the very close nearest neighbors of an anchor. 3) PPAP accurately determines the positiveness and negativeness of each anchor based on the degree of its relocation. In contrast, simply applying k-means clustering and assigning positive relationships to all features within each cluster leads to high recall but low precision in the positive set. For example, we observed that the precision of positives with k-means is only 22%, even when the number of classes is given as a prior. In comparison, PPAP achieves a precision of 72% on the COCO-stuff dataset using the DINO pretrained ViT-S/16 model without class prior. §.§.§ Qualitative Result. 
We display our quantitative results in comparison to STEGO <cit.> and HP <cit.> in <ref>. Considering the complexity of the scenes, we plot simple to complicated scenes in order from left to right. To be brief, our PPAP shows consistent results to have fewer mispredicted pixels compared to the baselines. Particularly, our proposed method is robust to pixel-wise noises because each anchor is progressively propagated to search for reliable points that address the vulnerability of per-sample basis inference in the imperfect embedding space learned in an unsupervised manner at the image-level. §.§ Ablation Study We provide ablation studies to evaluate the individual components and key hyperparameters. The primary components under study are: 1) Trustworthy Positive Set (TPS) obtained based on PPAP and 2) Ambiguity-excluded Negative Set (ANS). Additionally, we examined the impact of hyperparameters, specifically: 1) coefficients σ_pos and σ_amg to regulate the criteria and 2) initial criteria Φ^0 and Ψ^0. §.§.§ Varying Components of PPAP. We perform an ablation study to assess the individual contributions of each component, which are presented in Tab. <ref>. These experiments are performed on the COCO-stuff dataset with ViT-S/16 backbone. For the baseline, we train the model using contrastive loss based on positives determined with the initial positiveness criterion 𝒫^0 and negatives comprising all remaining samples. Incorporating the TPS yields improvements of 9.36% in unsupervised accuracy and 9.17% in mIoU. For the experiment on the third row, the ambiguous set for contrastive loss is defined after T-step propagation, while the positive set is defined with the initial positiveness criterion 𝒫^0. This brings 13.03% and 9.6% enhancements over the baseline. These results verify that both components significantly contribute to performance enhancement. Consequently, with both components combined, we observe a notable overall improvement of 28.11% in accuracy and 15.72% in mIoU. §.§.§ Varying σ_pos and σ_amb. In Fig <ref> and <ref>, we carry out an ablation study on the coefficient σ, which is crucial for controlling the degree of reduction and increase in positiveness and ambiguity criteria, respectively. A smaller σ_pos leads to a more intensive reduction in the positiveness criterion, while a bigger σ_pos results in a more gradual reduction. Such property is reflected in the outcomes presented in Fig <ref>. A smaller σ_pos tends to lower the performance due to FP erroneously included in the positive set. On the other hand, moderately larger σ_pos helps to mitigate the aforementioned problem. σ_amb follows the same principle: a smaller value results in a more substantial increase of the criterion, escalating the possibility of erroneously repelling FP in the negative set. And a bigger value leads to a more moderate increase. A proper σ can prevent both the excessive changes in the criteria. We note that it is advisable to choose values around 3 to ensure stability. §.§.§ Varying Φ^0 and Ψ^0. Ablation studies for varying Φ^0 and Ψ^0 are shown in Fig. <ref> and <ref>. These parameters act as the initial criteria for selecting positive and ambiguous samples and serve as key hyperparameters in our method. For Φ^0, as samples in close vicinity to the anchor are highly likely to share the same semantics, setting its initial value low incurs the existence of FP in the positive set. 
Still, setting the Φ^0 too high results in selecting only a few positives that the anchor relocation is implemented only within its very close proximity. Regarding Ψ^0, setting Ψ^0 too small significantly reduces the size of the negative set and leads to a shortage of hard negative samples in the negative set, while too large Ψ^0 increases FP in the negative set. We found that values of 0.55 for Φ^0 and 0.15 for Ψ^0 consistently perform well across all datasets and models. Note that we also found that higher Φ^0 requires more propagation steps (i.e., bigger T). §.§ Trustworthiness of Positive and Negative Sets. If excessive FP and FN are present in contrastive loss, the model may face undesired attraction and repulsion, respectively. We contend that mitigating these issues can enhance the trustworthiness of contrastive learning. To verify the robust trustworthiness of our method, previously, we conducted a precision comparison of the positive sets between ours and HP <cit.> on Fig. <ref>. Notably, ours not only collects more positive samples than HP but also achieves a higher precision (i.e., ratio of TP). For further demonstration, we present the ratios of TP in the positive set (𝒫) and FP in the negative set (𝒩) for three datasets in Tab. <ref>. As shown, ours got a higher ratio of TP in 𝒫 and lower ratio of FP in 𝒩 than HP, even with a much larger size of the positive set. For example, we can observe that Ours got 23× more positives with a higher ratio of TP on Cityscapes dataset with ViT-S/8 backbone. Likewise, we ensure the trustworthiness of contrastive learning under more reliable positive and negative sets. §.§ Visualization of Patches in Positive Set We illustrate how HP <cit.> and our proposed PPAP organize the positive samples for each anchor through visualizations in Fig. <ref>. As depicted throughout the visualizations, there is a tendency for our method to collect more positive patches compared to the baseline. On top of that, we can also observe that positive samples that are semantically identical are progressively obtained through the propagation steps in (a), (b), and (c). Lastly, along with the numerically measured difference in the precision of the positive sets, we find that falsely detected positives by the baseline in (d) are not considered positive in ours. To account for such a phenomenon, we claim that the presence of ambiguous zones enables the filtering of the hard negative samples. § CONCLUSION To tackle the challenge of USS, previous approaches have primarily relied on exploiting patch relationships to guide the training process. In this work, we extend this mainstream to ensure the reliability of the gathered guidance. Specifically, we consider the data distribution around the anchor to identify densely crowded regions containing samples with similar semantics. By relocating the proxy anchor to these regions, we expect it to be surrounded by trustworthy positives, creating a large positive set with high precision. In addition, we address instance-wise ambiguous zones where samples with similar and dissimilar semantics coexist. By excluding samples from these regions during training, we aim to eliminate FP in the negative set, preventing unstable training. Our state-of-the-art results verify the importance of ensuring the reliability of the supervision in USS. Acknowledgements. This work was supported in part by MSIT&KNPA/KIPoT (Police Lab 2.0, No. 210121M06), MSIT/IITP (No. 
2022-0-00680, 2019-0-00421, 2020-0-01821, RS-2024-00437102), and SEMES-SKKU collaboration funded by SEMES. § LIMITATIONS. The training guidance derived from patch-wise representation still has limitations in capturing intricate pixel-level details, especially along object edges. This issue becomes more pronounced with larger patch sizes, as ViT-S/16 exhibits lower mIoU compared to ViT-S/8 in this regard. § INFERENCE We introduce the overall flow of our model at the inference stage in Fig. <ref>. During training, we finetune the last block of the Vision Transformer (ViT) and the projection head which outputs the projected vector 𝐳. While the projected vector 𝐳 is used for training, we use the feature from the backbone for inference, as done in <cit.>. § IMPLEMENTATION DETAILS In line with existing works <cit.>, we utilize a DINO-pretrained ViT as our backbone network. The embedding dimension of the projection layer (D) is 384 for ViT-small and 768 for ViT-base. We set T to 2, 3, and 1 for the experiments on COCO-stuff, Cityscapes, and Potsdam-3 datasets, respectively. Tab. <ref> shows the hyperparameter set that yields the best performance for each dataset and backbone. To reduce the training cost, we use only a random subset of the patch features <cit.>. Specifically, we use 1/4 of all patch features for the ViT-S/16 backbone and 1/16 for the other backbones. For the experiments on the ImageNet-S dataset, we use the same hyperparameter settings as for the COCO-stuff dataset. § ADDITIONAL STUDY §.§ Study on Effects to Class Frequency In dense prediction tasks, the variation in the frequency between different semantics is a very natural phenomenon. However, this typically leads to the problem of long-tailed data distribution, which triggers a large performance gap between classes of high and low frequencies <cit.>. To further analyze the strength of our proposed method, we compare with HP <cit.> from the perspective of class frequency. Specifically, we divide the classes in the COCO-stuff dataset into three groups, i.e., Few, Medium, and Many, according to the data frequency and measure the performances for each group. Results are reported in Tab. <ref>. As shown, we find that our method is particularly notable in learning classes with fewer samples compared to the baseline. We attribute this result to the fact that the number and the precision of the gathered positives are similar across samples, as shown in Fig. 1 of the main paper. §.§ Different Pretrained Backbones. The results in Tab. <ref> demonstrate consistent performance improvements with various backbones pretrained in a self-supervised manner (e.g., iBoT <cit.>, SelfPatch <cit.>). We observed that backbones trained with inter-image relationships consistently enhance performance. However, models such as MAE <cit.> struggle to preserve globally shared semantics in each patch feature across all images (e.g., 4.3% of U.mIoU for MAE alone) due to their lack of inter-image relationship modeling <cit.>. §.§ Contribution of CRF Conditional Random Field (CRF) utilizes pixel position and RGB color information to smooth the predicted label of each pixel across its neighboring pixels, thereby effectively enhancing the performance of semantic segmentation <cit.>. Following the previous works <cit.>, we incorporate CRF as a post-processing step in our method. Tab. <ref> presents a performance comparison between HP <cit.> and our method, both with and without the application of CRF.
While the use of CRF leads to performance boosts in all experiments, we highlight the superiority of our PPAP: even without CRF, it outperforms HP with CRF. §.§ Qualitative Results without CRF We provide visualizations comparing the predictions of our proposed PPAP with those of existing methods, i.e., STEGO <cit.> and HP <cit.>, without CRF. The results are shown in Fig. <ref> and <ref>. As can be seen, existing methods tend to be prone to noise, whereas our method remains robust against pixel-wise noise even without applying CRF.
http://arxiv.org/abs/2407.12432v1
20240717094016
Validation of the static forward Grad-Shafranov equilibrium solvers in FreeGSNKE and Fiesta using EFIT++ reconstructions from MAST-U
[ "K. Pentland", "N. C. Amorisco", "O. El-Zobaidi", "S. Etches", "A. Agnello", "G. K. Holt", "C. Vincent", "J. Buchanan", "S. J. P. Pamela", "G. McArdle", "L. Kogan", "G. Cunningham" ]
physics.plasm-ph
[ "physics.plasm-ph", "physics.comp-ph" ]
1 UKAEA: United Kingdom Atomic Energy Authority, Culham Campus, Abingdon, Oxfordshire, OX14 3DB, United Kingdom (kamran.pentland@ukaea.uk); 2 Hartree: STFC Hartree Centre, Sci-Tech Daresbury, Keckwick Lane, Daresbury, Warrington, WA4 4AD, United Kingdom § ABSTRACT A key aspect in the modelling of magnetohydrodynamic (MHD) equilibria in tokamak devices is having access to fast, accurate, and stable numerical simulation methods. There is an increasing demand for reliable methods that can be used to develop traditional or machine learning-based shape control feedback systems, optimise scenario designs, and integrate with other plasma edge or transport modelling codes. To handle such applications, these codes need to be flexible and, more importantly, they need to have been validated against both analytically known and real-world tokamak equilibria to ensure they are consistent and credible. In this paper, we are interested in solving the static forward Grad–Shafranov (GS) problem for free-boundary MHD equilibria. Our focus is on the validation of the static forward solver in the Python-based equilibrium code FreeGSNKE by solving equilibria from magnetics-only EFIT reconstructions of MAST-U shots. In addition, we also validate FreeGSNKE against equilibria simulated using the well-established MATLAB-based equilibrium code Fiesta. To do this, we develop a computational pipeline that allows one to load the same (a)symmetric MAST-U machine description into each solver, specify the required inputs (active/passive conductor currents, plasma profiles and coefficients, etc.) from EFIT, and solve the GS equation for all available time slices across a shot. For a number of different MAST-U shots, we demonstrate that both FreeGSNKE and Fiesta can successfully reproduce various poloidal flux quantities and shape targets (e.g. midplane radii, magnetic axes, separatrices, X-points, and strikepoints) in agreement with EFIT calculations to a very high degree of accuracy. We also provide public access to the code/data required to load the MAST-U machine description in FreeGSNKE/Fiesta and reproduce the equilibria in the shots shown. Keywords: MHD equilibria, Grad–Shafranov, FreeGSNKE, Fiesta, EFIT, MAST-U § INTRODUCTION Developing fast and accurate numerical methods for simulating the magnetohydrodynamic (MHD) equilibrium of a magnetically-confined plasma is a crucial element in the design and operation of existing and future tokamak devices. These methods are used extensively to analyse different plasma scenarios, shapes, and stability, in addition to playing a critical role in the operation and optimisation of control and real-time feedback systems. In this paper, we focus on the free-boundary static forward MHD equilibrium problem, which involves solving the Grad–Shafranov (GS) equation for a toroidally symmetric, isotropic plasma equilibrium (see <ref> for further details). A vast array of numerical codes exist for solving this problem; however, our focus will be on two in particular: FreeGSNKE and Fiesta (more details to follow in <ref>).
The aim of this work is to: * validate that the static forward solver in FreeGSNKE can reproduce the equilibria obtained by a magnetics-only EFIT reconstruction on the MAST-U tokamak (in addition to those produced by Fiesta). * compare poloidal flux quantities, shape control measures (e.g. midplane radii, magnetic axes, and separatrix positions), and other targets (e.g. X-points and strikepoints) from both solvers for a number of physically different MAST-U shots, using EFIT as the reference solution. To enable an accurate and valid comparison of the results, we need to ensure that both FreeGSNKE and Fiesta are set up using the same set of input quantities as used by EFIT. Firstly, this will require a consistent description of the MAST-U machine that includes the active coils, passive structures, and the wall/limiter. We require details of their positions, orientations, windings, and polarity. Secondly, for each equilibrium, we need to appropriately assign values for both the active/passive conductor currents and plasma profiles parameters as calculated by EFIT. In <ref>, we provide more details on these input quantities and highlight differences between each code implementation. Carrying out robust validation of static GS solvers[The validation of FreeGSNKE's dynamic (evolutive) solver will not feature here and will be addressed in future work.], against both analytic solutions and real-world tokamak plasmas, is critical for users that require consistent and reliable equilibrium calculations. We should stress that while the reference EFIT equilibria are obtained as equilibrium reconstructions, and are therefore the result of a fitting procedure from experimental measurements on the MAST-U tokamak, here we do not perform the same fitting procedure in FreeGSNKE or Fiesta. We instead use the coil currents and plasma profiles parameters fitted by EFIT as inputs to the static forward GS problems in FreeGSNKE and Fiesta. Given a consistent set of inputs across all codes, we will demonstrate that all three return quantitatively equivalent equilibria. By making the scripts required to do so publicly available, we hope to establish a common validation benchmark for other static forward GS equilibrium codes. The rest of this paper will be structured as follows. In <ref>, we provide an introduction to the three solvers FreeGSNKE, Fiesta, and EFIT, briefly discussing their capabilities and prior usage in different areas of tokamak equilibrium modelling. In <ref>, we outline the free-boundary static forward GS problem and note differences between the FreeGSNKE and Fiesta solution methods. Following this, we supply a more detailed description of the MAST-U machine and other more specific inputs required by each solver in <ref>. In <ref>, we present our numerical experiments, focusing on two different MAST-U shots, one featuring a conventional divertor configuration and the other a Super-X <cit.>. We begin by comparing FreeGSNKE and Fiesta, ensuring that we understand any key differences between the codes and how these differences may filter through when comparing with EFIT. After this, we begin to assess the differences between equilibria (and other shape targets) from the solvers and those reconstructed from the diagnostics via EFIT. We find excellent agreement between all quantities assessed and highlight the accuracy of both FreeGSNKE and Fiesta. Finally in <ref>, we discuss the implication of these results and close with a few suggestions for avenues of future work. 
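Although the solver-specific details are deferred to the following sections, several of the shape targets compared in this paper (the magnetic axis and X-points, and hence the separatrix and strikepoints derived from them) come from critical points of the poloidal flux map. The self-contained sketch below, which is not code from FreeGSNKE, Fiesta, or EFIT, illustrates one crude way such critical points could be located and classified on a rectangular (R, Z) grid; an actual solver would refine these grid-level estimates further.

```python
import numpy as np

def find_critical_points(psi, R, Z):
    """Locate grid cells where grad(psi) ~ 0 and classify them via the Hessian determinant:
    det(H) > 0 -> O-point (e.g. magnetic axis), det(H) < 0 -> X-point (saddle).
    Assumes psi has shape (len(R), len(Z)) with R, Z the 1D grid coordinates."""
    dpsi_dR, dpsi_dZ = np.gradient(psi, R, Z, edge_order=2)
    d2R = np.gradient(dpsi_dR, R, axis=0, edge_order=2)
    d2Z = np.gradient(dpsi_dZ, Z, axis=1, edge_order=2)
    dRZ = np.gradient(dpsi_dR, Z, axis=1, edge_order=2)
    grad_mag = np.hypot(dpsi_dR, dpsi_dZ)
    tol = 1e-3 * grad_mag.max()                      # crude tolerance for "vanishing" gradient
    o_points, x_points = [], []
    for i in range(1, len(R) - 1):
        for j in range(1, len(Z) - 1):
            window = grad_mag[i - 1:i + 2, j - 1:j + 2]
            # keep interior local minima of |grad psi| that fall below the tolerance
            if grad_mag[i, j] <= window.min() and grad_mag[i, j] < tol:
                det_H = d2R[i, j] * d2Z[i, j] - dRZ[i, j] ** 2
                (o_points if det_H > 0 else x_points).append((R[i], Z[j]))
    return o_points, x_points
```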
§ THE SOLVERS In this section, we give a short introduction to the codes described in this paper and briefly discuss their capabilities. §.§ FreeGSNKE FreeGSNKE is a Python-based, finite difference, dynamic free-boundary toroidal plasma equilibrium solver developed by <cit.> and built as an extension of the publicly available FreeGS code <cit.>. FreeGS features an inverse solver to perform static constrained equilibrium analyses, where user-specified controls (e.g. isoflux/X-point locations and coil currents) are used to determine a set of coil currents to achieve a desired equilibrium. This type of analysis is used in designing and controlling different types of plasma configurations prior to experimentation—a brief introduction to constrained analysis can be found in <cit.>[Sec. II.7.]. FreeGS also features a static forward solver, where coil currents are instead fixed by the user and the corresponding plasma equilibrium is calculated. Both solvers in FreeGS use Picard iterations to solve the forward and inverse GS problems. They have been used, in conjunction with other equilibrium codes, extensively in recent years for the design of various tokamaks. To our knowledge, FreeGS has aided the design of SPARC <cit.>, KSTAR <cit.>, WEST <cit.>, Thailand Tokamak-1 <cit.>, and MANTA <cit.>. It was also used in the design of ARC <cit.>, with further work on DIII-D EFIT reconstructed equilibria, and to design COMPASS-U <cit.>, alongside Freebie <cit.> and Fiesta, and to help develop the BLUEPRINT framework <cit.>. FreeGSNKE inherits the FreeGS inverse solver and introduces an upgraded static forward solver, that uses a Newton–Krylov method (see e.g. <cit.>), to overcome the well-known numerical instability affecting Picard iterations. Also introduced is a solver for the evolutive (dynamic) equilibrium problem, also based on the Newton–Krylov method. In the dynamic problem, Poynting's theorem is enforced on the plasma, coupling the circuit equations (that govern currents in the active coils/passive structures) and the GS equation itself <cit.>. These features, as well as the widespread validation and use of the underlying FreeGS code, make FreeGSNKE a particularly versatile tool for studying the shape and control of plasma equilibria. Its compatibility with other Python libraries, especially those with machine learning capabilities, facilitate its future development and integration with other plasma modelling codes. For example, FreeGSNKE has been used to emulate scenario and control design in a MAST-U-like tokamak by <cit.>, where their objective was to emulate flux quantities and shape targets (some of which we calculate here) based on a training library of input plasma profile parameters and active conductor currents. The static forward solver in FreeGSNKE has been validated against analytic solutions of the GS equation and against the original iterative solver implemented in FreeGS <cit.>. We now go a step further by implementing the full MAST-U machine description and validating the static solver against EFIT reconstructions. §.§ Fiesta Fiesta is a free-boundary static equilibrium solver written in MATLAB and developed by <cit.>. In addition to being able to carry out forward and inverse equilibrium calculations, it is also capable of linearised dynamic modelling using the RZIp rigid plasma framework <cit.>. 
It has been used to inform design choices and carry out equilibrium analyses on a number of tokamaks including JET, DIII-D, NSTX, TCV, MAST(-U) <cit.>, MEDUSA-CR <cit.>, COMPASS-U <cit.>, SMART <cit.>, STEP <cit.>, and EU-DEMO <cit.>. Having already been used to simulate MAST(-U) and other tokamak equilibria, we run Fiesta alongside FreeGSNKE to demonstrate they both return quantitatively equivalent equilibria given the same set of input data from EFIT. This cross-validation process should also help identify and explain any differences between the two different implementations. §.§ EFIT EFIT, first proposed by <cit.>, is a computational method that is widely used as a first port of call to “fit” (or reconstruct) the plasma equilibrium in a tokamak using diagnostic measurement data as constraints. These measurements come from diagnostics such as poloidal flux loops, pickup coils, Rogowski coils, motional Stark effect (MSE), and Thomson scattering systems, which are strategically located at key locations around the tokamak. Written in Fortran, EFIT it is used primarily for post-shot equilibrium reconstruction and has been implemented on a number of different tokamak devices (see below). Our focus is on EFIT, a substantial re-write in which the original EFIT code has been wrapped in a C driver to handle data flow, which in turn is wrapped in a highly configurable Python layer for input and output checking. It is currently in use on the MAST-U tokamak <cit.> and was previously deployed on JET <cit.>. We note that EFIT is run routinely for all MAST-U plasma shots using magnetic diagnostic data only and, if available, MSE data to improve the accuracy of core profiles. In addition to this, EFIT is also set up to use Thomson scattering data if required <cit.>. To solve the inverse problem, EFIT requires descriptions of the plasma pressure and toroidal current profiles which are typically expressed using basis functions (whose coefficients are to be adjusted during the fitting process). Next, the linearised GS equation is solved using an initial guess for the poloidal flux. The feasibility of the calculated flux with respect to the diagnostic measurement data is then measured by solving a linearised least-squares minimisation problem. During this process, the variable parameters such as the conductor currents and profile coefficients are adjusted to improve the fit. This iterative process repeats until the conductor currents, profile coefficients, and poloidal flux, together, return a valid solution to the GS equation at the required tolerance. For more technical details, refer to <cit.>, <cit.>, and <cit.>. Different versions of EFIT, each with their own configurations and modifications, have been used for equilibrium reconstruction on a vast array of tokamak devices. Without providing an exhaustive list, it has been deployed on JET <cit.>, MAST(-U) <cit.>, EAST <cit.>, DIII-D <cit.>, START <cit.>, KSTAR <cit.>, NSTX <cit.>, and ITER <cit.>. Given its history of widespread use on many different tokamak devices, we use EFIT as a source of trusted reference equilibria, against which to compare those produced by FreeGSNKE and Fiesta. § THE STATIC FORWARD GRAD–SHAFRANOV PROBLEM In this paper, we are interested in solving the Grad–Shafranov (GS) equation Δ^*ψ = -μ_0 R ( J_p + J_c)_= J_ϕ, (R,Z) ∈Ω, in the cylindrical coordinate system (R, ϕ, Z) for the poloidal flux ψ(R,Z)[Note that some numerical solvers (e.g. Fiesta) define ψ using the Weber whereas some (e.g. 
FreeGSNKE and EFIT) define it using the Weber/2π. ] <cit.>. Note here that μ_0 represents magnetic permeability in a vacuum and Δ^* R ∂_R R^-1∂_R + ∂_ZZ is a linear elliptic operator. The toroidal current density J_ϕ(ψ, R, Z) J_p(ψ, R, Z) + J_c(R, Z) contains a contribution from both the plasma J_p and any toroidally symmetric conducting metal structures external to the plasma J_c (e.g. active poloidal field coils and passive structures around the tokamak). The total poloidal flux ψψ_p + ψ_c is also made up of a plasma ψ_p and external conductor ψ_c contribution. We wish to solve (<ref>) over a two-dimensional computational domain ΩΩ_p∪Ω_p' where Ω_p represents the plasma region[The boundary of Ω_p is defined as the closed (R,Z) contour in Ω that passes through the X-point closest to the magnetic axis (see closed red contour in <ref>).] and Ω_p' is its complement. The plasma current density, non-zero only within the plasma region Ω_p, takes the form J_p(ψ, R, Z) = R ∂ p/∂ψ + 1/μ_0 R F ∂ F/∂ψ, (R,Z) ∈Ω_p, where p p(ψ) is the isotropic plasma pressure profile and F F(ψ) = R B_ϕ is the toroidal current profile (B_ϕ is the azimuthal component of the magnetic field). The particular choice of profile functions used in J_p will be discussed in <ref>. The current density generated by N_c external conductors is given by J_c(R, Z) = ∑_j=1^N_cI_j^c(R,Z)/A_j^c, (R,Z) ∈Ω, I_j^c(R,Z) = I_j^c if (R,Z) ∈Ω_j^c, 0 elsewhere, where Ω_j^c, I_j^c, and A_j^c are the domain region, current, and cross-sectional area of the jth conductor, respectively. Note that external conductors can lie inside Ω as well as outside of it. To complete the free-boundary problem, an appropriate Dirichlet boundary condition must also be specified on the domain boundary ∂Ω—which we will now discuss. The dependence of J_p on ψ makes (<ref>) a nonlinear elliptic partial differential equation. §.§ Solving the problem Here, we briefly outline the steps typically carried out when numerically[Analytic solutions to the GS problem do exist in limited cases—see <cit.> for some examples.] solving the static free-boundary (forward) GS problem <cit.>. For more specific details on how each of the solvers do this in practice, we refer the reader to the respective code documentation. Before solving, we assume that a number of input parameters have already been provided by the user including: a machine (tokamak) description, conducting structure (active coil and passive structure) currents, and plasma profile functions (and parameters). More details on the specific inputs required for generating free-boundary equilibria on MAST-U with each of the codes will be described in <ref>. §.§.§ Step one Denote the total flux by ψ^(n)(R,Z), where n = 0,1,… is the iteration number, and generate an appropriate guess ψ^(0) to initialise the solver[Note that ψ_c^(n) is known exactly (it is given by the second term in (<ref>)) and so we only require an initial guess for ψ^(0)_p.]. §.§.§ Step two Calculate the values of the flux on the computational boundary ∂Ω (i.e. the Dirichlet boundary condition) using ψ^(n)|_∂Ω = ∫_Ω_p G(R,Z;R',Z') J_p(ψ^(n), R',Z') dR' dZ' + ∑_j=1^N_c1/A_j^c∫_Ω_j^c G(R,Z;R',Z') I_j^c(R',Z') dR' dZ', where the first and second terms are the contributions from ψ^(n)_p and ψ^(n)_c on the boundary, respectively. G is a Green's function for the operator Δ^* containing elliptic integrals of the first and second kind—it can be calculated by solving (<ref>) with ψ_c alone (see <cit.>[Chp. 4.6.3]). To calculate (<ref>), the plasma domain Ω_p (i.e. 
the area contained within the last closed flux surface) needs to be identified—see <cit.>[Sec. 5] for how to do this. Once found, the integral itself can be calculated a number of different ways, for example, using von Hagenow's method <cit.>[Chp. 4.6.4]. §.§.§ Step three To solve the nonlinear problem, both EFIT and Fiesta use Picard iterations <cit.>, where the n-th iteration consists of calculating the total flux ψ^(n+1) according to Δ^* ψ^(n+1) = -μ_0 R J_ϕ(ψ^(n), R, Z), (R,Z) ∈Ω, together with boundary condition (<ref>). In a finite difference implementation this requires spatially discretising the elliptic operator Δ^*. For example, FreeGSNKE uses fourth-order accurate finite differences while Fiesta uses a second-order accurate (fast) discrete sine transform. §.§.§ Step four Check whether or not the solution meets a pre-specified tolerance, e.g. a relative difference such as max | ψ^(n+1) - ψ^(n) |/max(ψ^(n)) - min(ψ^(n)) < ε. If so, we stop the iterations, otherwise we continue. Both FreeGSNKE and Fiesta are set to use the same relative tolerance ε = 16 and while FreeGSNKE uses the criterion in (<ref>), we should note that Fiesta uses a slightly different relative criterion based on values of J_p at successive iterations instead of ψ. This should make little difference to the comparison. §.§.§ Comments Picard iterations are very effective at tackling inverse GS problems, which is the primary use case for EFIT and Fiesta. However, it is well-known that these iterations are unstable when applied to forward GS problems. This manifests itself in the form of vertically unstable equilbria that artificially move between successive Picard iterations <cit.>. This arises as a result of a combination of known physical instabilities in highly-elongated plasmas and mathematical features of the Picard method itself (which stem from a combination of steep gradients in the nonlinear function and a poor initial guess to the solution). Newton-based methods can overcome this instability (see e.g. <cit.>). FreeGSNKE implements a Jacobian-free Newton–Krylov method (see <cit.>[App. 1] for further details). This is used to solve directly for the roots, ψ, of Δ^* ψ + μ_0 R J_ϕ(ψ, R, Z) = 0, (R, Z) ∈Ω. As with the Picard iteration, solving this problem still requires an appropriate initial guess and the calculation of the (nonlinear) boundary condition (<ref>). § INPUT PARAMETERS (FOR MAST-U) To solve the forward problem, we need to ensure that the inputs to both FreeGSNKE and Fiesta are consistent with those used by EFIT on MAST-U. We require: * an accurate and representative MAST-U machine description containing the: * active poloidal field coils. * passive structures. * limiter/wall structure. * the fitted values of the coil currents of both active coil and passive structures. * the functional form chosen for the plasma profile functions and the corresponding fitted parameter values. * any additional parameters specific to either FreeGSNKE or Fiesta. In the following sections, we outline how these inputs are configured for each of the codes . §.§ Machine description The following machine description had already been implemented in both EFIT and Fiesta and has now been set up in FreeGSNKE. We note here that numerical experiments (in <ref>) across all codes are simulated on a 65 × 65 computational grid on Ω = [0.06, 2.0] × [-2.2, 2.2], as this is the resolution EFIT is run at during MAST-U reconstructions. 
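Before turning to the individual conductors, the two solution strategies discussed in the previous section can be made concrete. The sketch below contrasts a bare Picard loop, using the relative stopping rule quoted above, with a Jacobian-free Newton–Krylov solve of the root-finding form of the problem. The helper callables solve_linear_gs (invert the discretised elliptic operator for a given toroidal current density, with the boundary condition applied), j_phi, and gs_residual are placeholders the reader must supply; none of this is the actual FreeGSNKE or Fiesta implementation.

import numpy as np
from scipy.optimize import newton_krylov

def picard_solve(psi0, solve_linear_gs, j_phi, tol=1e-6, max_iter=200):
    """Fixed-point (Picard) iteration for the static forward GS problem.
    solve_linear_gs(j) must return psi satisfying Delta* psi = -mu0 R j with the
    boundary values of the current iterate imposed; j_phi(psi) evaluates the
    toroidal current density. The initial guess is assumed to be non-constant."""
    psi = np.asarray(psi0, dtype=float).copy()
    for _ in range(max_iter):
        psi_new = solve_linear_gs(j_phi(psi))
        rel = np.max(np.abs(psi_new - psi)) / np.ptp(psi)   # relative stopping rule from the text
        psi = psi_new
        if rel < tol:
            return psi
    raise RuntimeError("Picard iteration did not converge")

def newton_krylov_solve(psi0, gs_residual, tol=1e-6):
    """Jacobian-free Newton-Krylov applied to F(psi) = Delta* psi + mu0 R J_phi(psi) = 0."""
    return newton_krylov(gs_residual, psi0, f_tol=tol)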
§.§.§ Active coils MAST-U contains 12 active poloidal field coils whose voltages can be varied for shaping and controlling the plasma <cit.>. In <ref>, we display a poloidal cross-section of the machine (as is implemented in all three codes) with an example equilibrium from a MAST-U shot (all simulated and plotted using FreeGSNKE). The solenoid, named P1 on MAST-U, generates plasma current and a poloidal magnetic field while P4/P5/PC (the latter of which is not currently connected to the machine) are used for core radial position and shape control. P6 is used for core vertical control, D1/D2/D3/PX for X-point positioning and divertor leg control, and DP for further X-point positioning and flux expansion. Coils D5 and D6/D7 are used for Super-X leg radius and expansion control, respectively. All active coils (except the solenoid) have an upper (labelled in <ref>) and lower component (not labelled) that are wired together in the same circuit. All upper and lower coils are wired in series, except for the P6 coil, whose upper and lower components are connected in anti-series so that it can be used for vertical plasma control. Each coil consists of a number of filaments/windings (plotted as small blue rectangles on the right hand side of <ref>) each with their own central position (R,Z), width and height (dR,dZ), polarity (+1 in series, -1 in anti-series), and current multiplier factor (used for the solenoid only). For the scope of the poloidal field, individual windings are modelled as infinitesimally thin toroidal filaments in both FreeGSNKE and Fiesta. Each filament also features its own resistivity value, however, this is not used here where we only deal with static equilibria. We should note that when EFIT fits the active coil currents to diagnostic data, it does not treat the upper and lower windings of the same active coil as being linked in series. Instead, current values in the upper and lower windings are measured using independent Rogowski coils[The active coil currents are approximated using the difference between measured internal (coil only) and external (coil plus coil case) Rogowski coil currents <cit.> and are then fit (alongside all other quantities of interest). The same process is used to fit coil case currents—see <ref>.] and are therefore fit to slightly different values. The relative difference between the upper/lower coil current values is very small and so using this configuration makes little difference to equilibria generated by EFIT. We refer to this configuration as having non-symmetric (or up-down independent) coil currents. For both FreeGSNKE and Fiesta, we have the option to model the pairs of active coils as either symmetric (connected in series/anti-series as they are in the real MAST-U machine) or non-symmetric (as in EFIT). All of the experiments presented in <ref> are carried out using the non-symmetric coil setup so that we can recreate the EFIT configuration as closely as possible. §.§.§ Passive structures Both the active coils and the plasma itself induce significant eddy currents in the toroidally continuous conducting structures within MAST-U <cit.>. This is especially the case in spherical tokamak devices, due to the close proximity of passive structures to the plasma core and active coils. These currents significantly impact the plasma shape and position, making their inclusion in the modelling process essential to obtaining accurate equilibrium simulations. 
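Both the active-coil windings described above and the refined passive structures discussed below are ultimately handled as collections of thin toroidal filaments. A minimal sketch of such a conductor description is given here; the class and field names are illustrative only and do not correspond to the actual machine-description objects in FreeGSNKE or Fiesta.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Filament:
    R: float                 # radial position of the winding centre [m]
    Z: float                 # vertical position [m]
    dR: float                # winding width [m]
    dZ: float                # winding height [m]
    polarity: int = 1        # +1 in series, -1 in anti-series (e.g. lower P6)
    multiplier: float = 1.0  # current multiplier (used for the solenoid only)

@dataclass
class Coil:
    name: str
    filaments: List[Filament] = field(default_factory=list)
    current: float = 0.0     # circuit current [A]

    def filament_currents(self) -> List[float]:
        # Current actually carried by each individual winding.
        return [self.current * f.polarity * f.multiplier for f in self.filaments]

def assign_nonsymmetric(coils: List[Coil], fitted_currents: Dict[str, float]) -> None:
    """EFIT-like (up-down independent) assignment: upper and lower coils take the
    independently fitted current values; a symmetric assignment would instead
    average the corresponding upper/lower values with the appropriate polarity."""
    for coil in coils:
        coil.current = fitted_currents[coil.name]   # e.g. keys like "D1_upper", "D1_lower"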
The complete MAST-U machine description includes a total of 150 passive structures, making up the vessel, centre column, support structures, gas baffles, coil cases, etc. This number excludes a few structures that are not included in the EFIT model. These include the graphite tiles (which do not carry much current) and the cryopump (which contains a toroidal break to prevent large toroidal currents flowing around the machine). Each passive structure is represented by a parallelogram in the poloidal plane, defined by its central position (R,Z), width and height (dR,dZ) and two angles, (θ_1, θ_2)[θ_1 is the angle between the horizontal and the base edge of the parallelogram while θ_2 is the angle between the horizontal and right hand edge (i.e. θ_1 = θ_2 = 0 defines a rectangle).]. Such parallelograms can be seen on the right hand side of <ref>. Given that all three codes require these parallelogram structures to simulate equilibria in the (R,Z) plane, we should note that this passive structure model is a reduced axisymmetric representation of the true three-dimensional MAST-U vessel which contains toroidal breaks for vessel ports among other things—see <cit.>[Fig. 12] for a depiction of the full 3D model. Both FreeGSNKE and Fiesta model the poloidal field associated with each passive structure by uniformly distributing its current density over the poloidal cross-section. This can be done by “refining” (i.e. subdividing) each passive structure into individual filaments. We revisit this in <ref> when we discuss how to assign currents to the passive structures. §.§.§ Limiter/wall structure The purpose of the limiter/wall in FreeGSNKE and Fiesta is to confine the boundary of the plasma. In all three codes it is described by 98 pairs of (R,Z) coordinates that form the closed polygonal shape seen in <ref> (enclosing the flux contours). The plasma core is forced to reside within the limiter region, with the last closed flux surface being either fully contained in this region or tangent to its polygonal edge. §.§ Assigning currents §.§.§ Active coils As mentioned before, both FreeGSNKE and Fiesta have the option to use symmetric or non-symmetric (independent) active coil current assignments. To set the coil currents in the non-symmetric setting, we assign the individually calculated upper/lower coil currents from EFIT directly to the corresponding coils in FreeGSNKE and Fiesta without modification. If we were to use the symmetric coil setting, however, each of the 12 active coils in FreeGSNKE and Fiesta require a single current value. To set each one, we could, for example, take the average of the corresponding upper and lower coil currents from EFIT, making sure that the correct polarity of each current is also assigned. §.§.§ Passive structures Due to the different ways they are modelled in EFIT, care needs to be taken when assigning the fitted passive structure currents to the 150 structures defined in Fiesta and FreeGSNKE. For example, current values for each coil case are fit by EFIT explicitly (using the Rogowski coil measurements mentioned before), which makes it easy to assign them directly in both FreeGSNKE and Fiesta. Other passive structure currents in the vessel, centre column, gas baffles, and support structures, are not, however, measured (and therefore fit) directly. 
To reduce the degrees of freedom in EFIT, these passive structures are modelled in groups, each referring to a single current value[An electromagnetic induction model is used to calculate current values to adopt as priors in the fit—refer to <cit.> and <cit.> for more details.], thereby reducing the computational runtime and avoiding some issues created by having too much freedom in the distribution of current around the machine <cit.>[Sec. 3]. In total, in MAST-U there are 20 groups: 14 for the vacuum vessel, 2 for the gas baffles, 2 for passive stabilisation plates, and 2 for divertor coil supports <cit.>. To set the correct current for each structure in a group, we follow <cit.>[Sec. 3] and distribute the group current proportionally to each structure based on its fraction of the total cross-sectional area within the group. As mentioned before, both FreeGSNKE and Fiesta have the option to “refine” the 150 passive structures into sets of filaments for improved electromagnetic modelling. This involves taking each parallelogram structure, dividing its area (or length) into filaments of approximately the same size, and then evenly distributing the structure current amongst them uniformly. The density of such filaments over the poloidal section of each structure can be adjusted as desired in FreeGSNKE and Fiesta[ In our numerical experiments, FreeGSNKE uses 7,304 refined filaments with cross-sectional areas ranging from 0.03cm2 to 2.4cm2 (median area is 0.13cm2). In Fiesta, the refinement is carried out slightly differently and uses 7,030 refined filaments, with areas ranging from 0.16cm2 to 0.62cm2 (median 0.24cm2).]. §.§ Profile functions To complete the set of input parameters for the static forward GS problem we need consistent plasma current density profile functions across the codes. In a magnetics-only EFIT reconstruction on MAST-U <cit.>, the pressure and toroidal current profiles in (<ref>) are defined using the following polynomials, sometimes referred to as the “Lao profiles” as first introduced by <cit.> in the original EFIT code: ∂ p/∂ψ̃ = ∑_i=0^n_pα_i ψ̃^i - α̅ψ̃^n_p + 1∑_i=0^n_pα_i, F ∂ F/∂ψ̃ = ∑_i=0^n_Fβ_i ψ̃^i - β̅ψ̃^n_F + 1∑_i=0^n_Fβ_i, with coefficients α_i, β_i ∈. Note here that ψ̃ = ψ - ψ_a/ψ_b - ψ_a∈ [0,1], is the normalised poloidal flux where ψ_a = ψ(R_m, Z_m) and ψ_b = ψ(R_X, Z_X) are the values of the flux on the magnetic axis and plasma boundary, respectively. To avoid over-fitting and solution degeneracy problems, EFIT uses lower-order polynomials (n_p = n_F = 1) for the magnetics-only reconstructions we validate against in <ref>. The logical parameters α̅ = β̅ = 1 are set to enforce homogeneous Dirichlet boundary conditions on the plasma boundary (i.e. p'(ψ̃=1) = FF'(ψ̃=1) = 0). Neumman boundary conditions (on the profile derivatives) can also be enforced if required—see <cit.>[Sec. 2]. For EFIT reconstructions that use both magnetics and the MSE diagnostic, spline representations of the profiles are also available—see <cit.>. Here, we assign values of the coefficients α_i and β_i as determined by EFIT and proceed to normalise the profile functions using the value of the total plasma current I_p fitted by EFIT. This step is nominally redundant but represents an additional check that ensures the profile functions set in FreeGSNKE and Fiesta are exactly the same as in EFIT. §.§ Other parameters and code specifics Here, we detail a few other parameters that need to be set in order to run the forward solvers in both Fiesta and FreeGSNKE. 
Firstly, in Fiesta, we need to specify a object that mitigates the vertical instability that manifests itself via the Picard solver. One option (via the object) is to monitor the vertical plasma position during the solve and modify the P6 coil current(s) (i.e. the radial field it produces) to correct the position error. Modifying the P6 coil current would, however, defy the purpose of the comparison with the EFIT equilibria. To keep the P6 coil current(s) (and all other inputs currents fixed), we therefore opt to use the object instead—this uses a variation of the method presented by <cit.>. This object introduces a second (outer) nonlinear solver loop. The inner loop solves for the equilibrium (via the Picard iterations) with respect to a specified magnetic axis location by adding `synthetic' radial and vertical magnetic fields. The outer loop then minimises these synthetic fields using a gradient search method, returning an optimal solution for the magnetic axis position, and therefore the equilibrium. This additional outer loop drastically slows Fiesta compared to the setting, however, we believe that it is necessary for a direct comparison with FreeGSNKE and EFIT. Secondly, both Fiesta and FreeGSNKE require a prescription for the toroidal field. This is provided through a parameter called in Fiesta, specifying the total current in the central toroidal field conductor bundle, and through the value of f_vac RB_tor in FreeGSNKE. We note, however, that the toroidal field does not affect the equilibrium calculations themselves. Calculation, for example, of safety factors and beta values would be affected, but we do not consider them here. For completeness, in FreeGSNKE we set f_vac using the EFIT sourced value, and in Fiesta we use = 5e6f_vac. Finally, for the simulations in <ref>, Fiesta occasionally struggles to converge for a number of time slices in the two shots shown. This could be due to a combination of the nonlinearity of the GS equation, a poor initial guess for ψ_p (or J_p), and perhaps the instability of the Picard solver. To remedy this, the results we present here are obtained by providing Fiesta with the J_p field calculated by EFIT, as an initial guess in the Fiesta forward solve—this rectified the non-convergence issues in almost all cases. § NUMERICAL EXPERIMENTS: FREEGSNKE VS. FIESTA VS. EFIT In this section, we compare the equilbria and related shape targets simulated by all three codes across two different MAST-U shots: one with a conventional divertor configuration and one with a Super-X configuration. We should reiterate that, although we consider EFIT to be our reference solver, we have no actual ground truth equilibria. Therefore, in our comparisons, we measure differences between the equilibria produced by the various codes rather than errors. We start by briefly outlining the steps taken to obtain these results. First, we select the MAST-U shot that we wish to simulate in FreeGSNKE/Fiesta and store[We should state here that we do not (re-)run EFIT, we simply extract existing data generated by a post-shot reconstruction stored in the MAST-U database. Data accessed 14/06/24.] the corresponding EFIT data that we require for each time slice. This includes the inputs described in <ref> and the poloidal flux/shape target output data that we wish to compare to after we have run FreeGSNKE and Fiesta. 
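To round off the description of the inputs, the low-order polynomial ("Lao") profiles and the resulting plasma current density can be written out explicitly. The sketch below is illustrative only: the constant Jacobian between the flux and the normalised flux, together with the overall amplitude, is left to the subsequent rescaling against the EFIT total plasma current I_p, and the restriction of J_p to the plasma region (via a mask) is not shown.

import numpy as np

MU0 = 4e-7 * np.pi

def pprime(psi_n, alpha, alpha_bar=1.0):
    """dp/dpsi_n in the 'Lao' polynomial form; vanishes at psi_n = 1 when alpha_bar = 1."""
    alpha = np.asarray(alpha, dtype=float)
    n_p = len(alpha) - 1
    poly = sum(a * psi_n ** i for i, a in enumerate(alpha))
    return poly - alpha_bar * psi_n ** (n_p + 1) * alpha.sum()

def ffprime(psi_n, beta, beta_bar=1.0):
    """F dF/dpsi_n, same polynomial form as pprime."""
    beta = np.asarray(beta, dtype=float)
    n_f = len(beta) - 1
    poly = sum(b * psi_n ** i for i, b in enumerate(beta))
    return poly - beta_bar * psi_n ** (n_f + 1) * beta.sum()

def plasma_current_density(psi_n, R, alpha, beta):
    """J_p = R p' + F F' / (mu0 R), up to the constant d(psi_n)/d(psi) factor that is
    absorbed when the profiles are rescaled to match the EFIT plasma current I_p."""
    return R * pprime(psi_n, alpha) + ffprime(psi_n, beta) / (MU0 * R)

# Magnetics-only MAST-U reconstructions use n_p = n_F = 1, i.e. alpha = [a0, a1], beta = [b0, b1].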
After building/loading the machine description in both FreeGSNKE and Fiesta, we then solve the forward GS problem at each time slice sequentially, starting at the first time step for which EFIT produces a valid GS equilibrium[During some of the ramp-up and ramp-down of the plasma, EFIT may struggle to converge to a valid GS equilibrium or produce spurious fits (e.g. on the plasma profile coefficients or passive structure currents). In these cases, we exclude these time slices from the comparison.]. For FreeGSNKE, we initialise each simulation with a default initial guess for the plasma flux ψ_p—this is obtained automatically by requiring the presence of an O-point in the total flux within the limiter region. As mentioned before, to ensure convergence, Fiesta is initialised using the J_p field calculated by EFIT. While this initialisation is already very close to the desired reference GS solution produced by EFIT, a comparison with the equilibrium on which Fiesta eventually converges is still informative of the code's performance. §.§ MAST-U shot 45425: Conventional divertor We first simulate MAST-U shot 45425, which has a flat-top plasma current of approximately 750kA, a double-null shape, and a conventional divertor configuration. The plasma is heated using two neutral beam injection systems delivering a total power of approximately 2.5MW. The plasma is in H-mode confinement for the majority of the shot. §.§.§ Single time slice Before analysing the entire shot, we wish to briefly discuss and compare a few minor differences between the flux quantities produced by each code for a single time slice of the shot (t = 0.7s). We begin by comparing FreeGSNKE and Fiesta without EFIT in <ref>. The left panel displays the ψ contours from FreeGSNKE and an almost perfect overlap of the separatrices from both codes. We break this down in the centre and right panels. The right panel shows the magnitude of differences in ψ_c, the plasma flux generated by active coils and passive structures. It can be seen that differences are generally small, at a level of ≲ 24, compared to a total flux that spans a range max(ψ) - min(ψ) ∼ 1. These differences appear co-localised with the vessel's passive structures, suggesting they are driven by implementation details in the methods used by either code to distribute the passive structure currents over their poloidal sections[We also explicitly checked any differences in the flux contribution from the active coils alone and found them to be 𝒪(10^-15).]. The central panel shows differences in the plasma flux ψ_p. The largest differences are localised in the top-right and bottom-right edge pixels of the discretised domain. We attribute this to implementation details in the way Fiesta imposes the boundary conditions (see also the right panel of <ref>, where the same discrepancy is visible again). The remaining differences are at a level of ≲ 53. We believe these are largely due to implementation differences in the routines that identify the last closed flux surface of the plasma, rather than being due to the nonlinear solvers themselves. In fact, we find that a comparison between the plasma current density distributions J_p calculated by FreeGSNKE and Fiesta for the same total flux ψ results in differences of the same order of magnitude. In <ref>, we compare differences in the total flux ψ between all three codes[We should note that EFIT did not produce a breakdown of ψ into ψ_p and ψ_c, making the comparison of plasma and conductor flux contributions more difficult.]. 
The left panel is similar to the one seen in <ref> (centre), with differences between FreeGSNKE and Fiesta dominated by differences in ψ_p as just discussed. Similarly to the differences between FreeGSNKE and Fiesta, the differences between FreeGSNKE/Fiesta and EFIT (shown in the centre/right panels, respectively) are largest close to the plasma outer edge, and qualitatively analogous (if not slightly different in magnitude). It is worth highlighting explicitly that the mismatches shown in <ref> and <ref> are, nominally, beyond the relative tolerance used in both FreeGSNKE and Fiesta—recall this was ε = 16. However, as already mentioned: i) differences in the implementation of the passive structures and ii) differences in routines that build the plasma core mask between the three codes at hand, are responsible for introducing mismatches with similar orders of magnitude to those we are seeing. Besides, as hinted by the left panel in <ref> and as we show in the following results, this level of difference has a negligible impact on the shape control targets (and therefore for most practical modelling purposes). §.§.§ Entire shot Over the course of the entire shot, the differences in ψ (for both codes) remain at the levels seen in <ref>. For FreeGSNKE, the median differences in ψ_a and ψ_b over the shot are 13 and 64, respectively. For Fiesta, the corresponding differences are on average 33 and 13, respectively. In <ref>, we plot the differences in the magnetic axis (R_m, Z_m), the midplane inner radius R_in, and the midplane outer radius R_out (recall <ref>). Typical differences in the inner/outer radii are of the order of millimetres for FreeGSNKE, with only marginally higher values for Fiesta. With respect to the magnetic axis, differences in both codes vs. EFIT track one another to sub-centimetre precision as well. There are, however, some isolated times during the initial phase of the ramp up where the differences between Fiesta and EFIT are significantly higher (see similar differences in later plots) than in later time slices. While FreeGSNKE has found (precisely) the EFIT GS equilibria during these early slices, we suspect that Fiesta may have simply found another set of (equally valid) GS equilibria. The physical difference in these (limited, not yet diverted) equilibria can be seen more clearly in <ref> at t=0.10s. While we do not have a conclusive reason for the presence of multiple GS equilibria here (given the same input parameters), identifying under what conditions these equilibria co-exist may be worth further numerical investigation <cit.>. In top panel of <ref>, we plot the evolution of the separatrices produced by all three codes over the lifetime of the shot, observing an excellent match in the majority of shot times. The metric in the lower panel of <ref> is calculated by first finding 360 (R,Z) points on the core separatrix generated by each code, where each point is equally spaced in the geometric poloidal angle centred on the magnetic axis (of EFIT). The largest distance between corresponding points on the FreeGSNKE/Fiesta separatrix and the EFIT separatrix is then measured. This maximum distance is below 3.5cm in 95% of cases for FreeGSNKE equilibria (in 96% of cases the median distance is below 1cm). For the case of Fiesta, the maximum distance is 4.2cm for the same quantile (1.13cm for the median distance for the same quantile of 96% of cases). 
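For reference, the separatrix metric used above can be reproduced with a short routine: both core separatrices are resampled at 360 equally spaced geometric poloidal angles about the EFIT magnetic axis and the largest distance between corresponding points is reported. The angular interpolation assumes the core boundary is star-shaped about the axis, which is a simplification, and the function names below are ours rather than those of the production scripts.

import numpy as np

def resample_by_angle(boundary_RZ, axis_RZ, n_angles=360):
    """Resample a closed (R, Z) contour at equally spaced poloidal angles about axis_RZ."""
    dR = boundary_RZ[:, 0] - axis_RZ[0]
    dZ = boundary_RZ[:, 1] - axis_RZ[1]
    theta = np.mod(np.arctan2(dZ, dR), 2.0 * np.pi)
    rho = np.hypot(dR, dZ)
    order = np.argsort(theta)
    theta_q = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rho_q = np.interp(theta_q, theta[order], rho[order], period=2.0 * np.pi)
    return np.column_stack([axis_RZ[0] + rho_q * np.cos(theta_q),
                            axis_RZ[1] + rho_q * np.sin(theta_q)])

def max_separatrix_distance(sep_code, sep_efit, axis_efit, n_angles=360):
    """Largest pointwise distance between two separatrices resampled at the same angles."""
    a = resample_by_angle(np.asarray(sep_code), np.asarray(axis_efit), n_angles)
    b = resample_by_angle(np.asarray(sep_efit), np.asarray(axis_efit), n_angles)
    return float(np.max(np.linalg.norm(a - b, axis=1)))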
Any gaps in the time series (and in later plots) are where Fiesta failed to converge to an equilibrium given the tolerance ε—these time slices have been excluded when calculating the quantiles mentioned above. We would note that during these times slices (where the equilibrium is in a limiter-type configuration), Fiesta can converge when using the object mentioned in <ref>. In <ref> we monitor the evolution of a strikepoint along the lower divertor tiles, again, observing an excellent match from both codes. The difference in (R_s,Z_s) compared to EFIT is less than a centimetre for most of the shot—very early times being the exception in the case of Fiesta. Note that while the difference in Z_s is not explicitly shown, it looks almost exactly the same as that of R_s. In <ref>, we plot the difference in the lower core chamber X-point over the shot. As we did for the separatrix calculations, we identify all X-points for each code's equilibria using FreeGSNKE's build-in functionality. This was because Fiesta and EFIT would inconsistently return only a single X-point, sometimes in the lower core chamber, sometime the upper. There appears to be a small vertical bias towards higher X-points by a few millimetres in both Fiesta and FreeGSNKE—also visible in some of the upper panels of <ref>. Despite this slight bias, both codes are accurate with respect to EFIT to within half a centimetre for 98% and 97% of the shot, respectively (100% are within 1cm). The runtimes, both per time slice and cumulatively across the shot, are displayed in <ref>. Median runtimes are 6.7 per forward solve for Fiesta, 0.09 for FreeGSNKE. Fiesta is significantly hindered by the use of the object which demands a second nonlinear solver loop to stabilise the Picard iterations[However, if one is willing to allow the P6 coil current(s) be modified, Fiesta can run much faster using the object instead—the median runtime was around 0.22 per forward solve in this case.]. FreeGSNKE makes use of the faster and more stable convergence of the Newton–Krylov method, as well as using just-in-time compilation for some core routines. As mentioned before, all equilibria were simulated sequentially with both codes. However, given the independence of each time slice, nothing prevents an embarrassingly parallel implementation—though this was not an objective in this paper. §.§ MAST-U shot 45292: Super-X divertor Here, we focus on MAST-U shot 45292, which has a flat-top plasma current of approximately 750kA, a double-null shape, and a Super-X divertor configuration. The plasma is Ohmically heated, i.e. there is no neutral beam heating, and remains in the L-mode confinement regime throughout the shot. As we did for the previous shot, we plot the evolution of the separatrices from all three codes over time in <ref>, again discerning a very good agreement between all of them (including qualitatively on the divertor legs). Given the Super-X configuration, the upper divertor strikepoint now evolves across the tiles. We can see, in <ref>, good agreement with differences in R_s and Z_s remaining at similar levels (though marginally different). Differences in the poloidal flux quantities remain at the same levels as seen in the conventional divertor shot while the shape targets again match to sub-centimetre precision—see <ref>. The upper core chamber X-points from FreeGSNKE and Fiesta are again accurate to within half a centimetre for 92% and 87% of time slices shown (100% within 1cm), respectively (see <ref>). 
Runtimes for both simulations were almost identical to those seen in the previous experiment and are therefore not shown again. To see the additional results not shown here for the Super-X shot, refer to the code repository. § CONCLUSIONS In this paper, we have demonstrated that the static forward GS solvers (see <ref>) in FreeGSNKE and Fiesta can accurately reproduce equilibria generated by magnetics-only EFIT reconstructions on MAST-U. To achieve this, we began (<ref>) by outlining which features of the MAST-U machine would be included in the forward solver machine descriptions, using those that most closely matched the one by EFIT. We highlighted the capability of both codes being able to model the active poloidal field coils as either up/down symmetric or asymmetric, noting that EFIT uses the asymmetric setting. In addition, both codes have the option to refine passive structures into smaller filaments in order to distribute the induced current in them across their surface areas for better electromagnetic modelling. Following this, we set the conductor currents and prescribed appropriate plasma current density profiles—in this case the polynomial-based “Lao” profiles whose coefficients are determined by EFIT. In addition to some other code-specific parameters, we then used this computational pipeline to begin simulating the MAST-U equilibria. The poloidal flux quantities and shape targets generated by the FreeGSNKE and Fiesta simulations in <ref> show excellent agreement with the corresponding quantities from EFIT for both MAST-U shots tested. More specifically, separatrices from both codes match those of EFIT, with the largest distances between the core separatrices found to be on the order of centimetres. Strikepoints, X-points, magnetic axes, and inner/outer midplane radii differences between the codes were simulated to sub-centimetre precision. The static GS solver in FreeGSNKE has now been validated against both analytic equilibria (see <cit.>) and experimental reconstructions from MAST-U (this paper). It has also been shown to produce numerically equivalent equilbria to the Fiesta code, which itself has been validated on experimentally reconstructed equilibria from a number of different tokamak devices (refer back to <ref>). Given its user-friendly Python interface, we hope that this work will enable the more widespread adoption of FreeGSNKE for real-time plasma control and optimisation studies (including those with a machine learning element) and assist in the design of future fusion power plants. Furthermore, we hope that the code and datasets made available with this paper will be of use in validation studies for other equilibrium modelling codes. Some avenues for future work include validating the dynamic forward GS solver in FreeGSNKE using real-world plasma reconstructions, incorporating more complex/unconstrained plasma profile functions, and making use of data assimilation techniques to carry out probabilistic (uncertainty-aware) equilibrium reconstruction. § ACKNOWLEDGEMENTS The authors would like to thank Stephen Dixon and Oliver Bardsley (UKAEA) for some very helpful discussions around the Fiesta code and also Ben Dudson (LLNL) for pointing us in the direction of a number of very useful FreeGS references. This work was part-funded by the FARSCAPE project, a collaboration between UKAEA and the UKRI-STFC Hartree Centre, and by the EPSRC Energy Programme (grant number EP/W006839/1). 
For the purpose of open access, the author(s) has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. To obtain further information, please contact PublicationsManager@ukaea.uk. § DATA AVAILABILITY The code scripts and data used in this paper will be made available in due course. § DECLARATIONS The authors have no conflicts of interest to declare.
http://arxiv.org/abs/2407.13669v1
20240718164416
Projection-based model-order reduction for unstructured meshes with graph autoencoders
[ "Liam K. Magargal", "Parisa Khodabakhshi", "Steven N. Rodriguez", "Justin W. Jaworski", "John G. Michopoulos" ]
cs.CE
[ "cs.CE" ]
fancy 0.50.1 1]Liam K. Magargal 1]Parisa Khodabakhshi PAK322@lehigh.edu 2]Steven N. Rodriguez 3]Justin W. Jaworski 2]John G. Michopoulos [1]organization=Department of Mechanical Engineering and Mechanics, Lehigh University, city=Bethlehem, state=PA, country=United States [2]organization=Computational Multiphysics Systems Laboratory, U. S. Naval Research Laboratory, city=Washington, state=DC, country=United States [3]organization=Kevin T. Crofton Department of Aerospace and Ocean Engineering, Virginia Tech, city=Blacksburg, state=VA, country=United States § ABSTRACT This paper presents a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) on advection-dominated flows modeled by unstructured meshes. The autoencoder is coupled with the time integration scheme from a traditional deep least-squares Petrov-Galerkin projection and provides the first deployment of a graph autoencoder into a PMOR framework. The presented graph autoencoder is constructed with a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework provides improved flexibility over traditional CNN-based autoencoders because it is extendable to unstructured meshes. To highlight the capabilities of the proposed framework, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' equation problem with a structured mesh and demonstrate the flexibility of GD-LSPG by deploying it to a two-dimensional Euler equations model that uses an unstructured mesh. The proposed framework provides considerable improvement in accuracy for very low-dimensional latent spaces in comparison with traditional affine projections. Projection-based model-order reduction Deep LSPG Geometric deep learning Graph autoencoders Unstructured mesh 0.50.1 § INTRODUCTION Methods in computational mechanics aim to simulate complex physical phenomena via numerical methods. Specifically, approximate solutions are sought by spatially and temporally discretizing the governing model equations of a physical system <cit.>. In practice, the spatial and temporal resolution must be refined to obtain a sufficiently detailed solution, often resulting in very high-dimensional systems that incur high computational costs. In this context, we define this high-dimensional system as a full-order model (FOM). Projection-based model-order reduction (PMOR) is a class of approximation methods that aims to reduce the computational cost associated with deploying methods in computational mechanics for many-query tasks, such as design optimization, uncertainty quantification, real-time rendering, etc., while preserving sufficient accuracy of the solution <cit.>. PMOR achieves cost savings by projecting the original high-dimensional computational model (known in this context as the full-order model) onto a precomputed low-dimensional latent space which is computed using data recovered from the FOM in an offline stage. Then, in an online stage, PMOR projects the model equations of the FOM onto a low-dimensional latent space (defined by the dimensionality reduction performed in the offline stage), thereby reducing the operational count complexity of the model and achieving cost savings. 
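As a concrete (affine) instance of this offline/online split, the snippet below builds a rank-M POD basis from a matrix of saved FOM snapshots via a truncated singular value decomposition; the snapshot matrix, the reduced dimension M, and the variable names are placeholders rather than anything prescribed by the paper.

import numpy as np

def pod_basis(snapshots, M):
    """Leading M left singular vectors of the snapshot matrix (columns are FOM states in R^N)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :M]

# Offline: collect snapshots X_snap (shape N x n_snapshots) from the FOM, then V = pod_basis(X_snap, M).
# Online: approximate the state as x ~= V @ x_hat, with x_hat in R^M, and evolve x_hat instead of x.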
In this study, we are primarily interested in PMOR approaches for methods in computational mechanics that employ unstructured meshes, such as the finite volume method (FVM) and the finite element method. Here, we focus on the FVM, but most of our analysis and methods can be applied to the unstructured finite element method as well. The FVM has widespread use in the realms of science and engineering for its ability to conduct high-fidelity simulations of complex physical phenomena <cit.>. By discretizing the integral forms of governing physical equations, the FVM enables the computation of approximate solutions while preserving conservation laws. Spatial discretization of a domain is typically achieved using one of two main mesh types: structured and unstructured meshes. Structured meshes employ a periodic, grid-like structure to discretize the domain. Conversely, unstructured meshes do not require a grid-like structure and allow mesh components to be arbitrarily ordered <cit.>. This departure from the grid-like structure grants unstructured meshes a remarkable advantage in representing complex geometries more conveniently, setting them apart from structured meshes. As a result, unstructured meshes are often favored for their ability to handle intricate geometrical configurations with greater ease and accuracy. Often in engineering applications, obtaining a sufficiently accurate solution via the FVM can incur a computational cost that is prohibitively large. This issue becomes exacerbated in situations where several queries must be made to the FVM simulation. In the many-query settings, PMOR could drastically alleviate the cost of modeling dynamical systems with FVM simulations. Traditionally, PMOR methods rely on projecting the solution onto a low-dimensional latent space via a subspace approximation, such as proper orthogonal decomposition (POD) <cit.>, rational interpolation <cit.>, or balanced truncation <cit.>. Although affine latent spaces have been leveraged extensively to achieve cost savings for a wide variety of linear and nonlinear models, PMOR procedures employing affine solution manifolds struggle to accurately model advection-dominated flows, which are characterized by sharp gradients and moving shocks and boundaries. Such models exhibit a slowly-decaying Kolmogorov n-width. The Kolmogorov n-width serves as a measure for the error introduced by approximating the solution to a partial differential equation (PDE) with a linear subspace of dimension n <cit.>. When the decay of the Kolmogorov n-width is slow, the affine latent space used to approximate the solution must be constructed with a high dimension, leading to marginal model reduction. As a result, a great amount of effort has been made to develop reduced-order models (ROMs) for advection-dominated flows, such as adaptive reduced basis schemes <cit.>, segmentation of the domain into multiple reduced-order bases <cit.>, quadratic manifolds <cit.>, and modified POD bases <cit.>. PMOR has been studied extensively for linear systems <cit.>. However, for nonlinear dynamical systems, the projection of a system onto a low-dimensional subspace often fails to reduce the computational cost, because through the basic application of projection-based approaches the operational count complexity associated with computing the projection of the nonlinear term scales with the dimension of the original high-dimensional system. 
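To make the cost issue explicit, consider a Galerkin-reduced velocity built from a POD-type basis matrix V with N rows and M columns: every evaluation of the reduced right-hand side still requires one full N-dimensional evaluation of the nonlinear velocity f, as the sketch below (with a generic user-supplied f) shows.

import numpy as np

def reduced_velocity(x_hat, V, f, t):
    """Galerkin-reduced velocity V^T f(V x_hat, t).

    x_hat lives in R^M, but the inner call to f acts on the lifted N-dimensional
    state V @ x_hat, so the operation count still scales with N. Hyper-reduction
    replaces this full evaluation with one restricted to sampled entries of f.
    """
    x_full = V @ x_hat           # O(N M) lift to the full space
    return V.T @ f(x_full, t)    # full-order evaluation of f, then O(N M) projection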
As a remedy, some studies employ hyper-reduction methods which introduce an additional layer of approximation to the model, where the nonlinear terms are computed for a selection of sample points and used to update the corresponding low-dimensional states. Such methods include the discrete empirical interpolation method <cit.>, Gauss-Newton with approximated tensors method <cit.>, and energy conserving sampling and weighing method <cit.>. Alternatively, operator inference aims to bypass the need for introducing an additional layer of approximation via a hyper-reduction scheme by instead learning low-dimensional operators from a regression problem <cit.>. Furthermore, some methods have coupled the operator inference framework with a lifting transformation suited for nonlinear problems with general nonlinearity by introducing a change of variables to obtain a polynomial form of the model equations <cit.>. Recently, machine learning has been adopted to overcome the limitations of traditional model reduction when applied to advection-dominated flows with slowly-decaying Kolmogorov n-widths. Historically, autoencoders have been developed to compress and reconstruct input information, such as images <cit.>, but recently autoencoders have been leveraged in engineering applications. Specifically, the model reduction community has used autoencoders to identify a nonlinear mapping between the high-dimensional system and a low-dimensional latent space <cit.>. Once an autoencoder is trained, the mapping is leveraged to perform online time integration using one of two main classes of approaches. The first class involves training a neural network to approximate the low-dimensional update of the solution at each time step <cit.> akin to neural ordinary differential equations (ODEs) <cit.>, residual networks <cit.>, and physics-informed neural networks <cit.>. The second class, and the approach that we adopt in this paper, aims to project the governing equations of the semi-discrete high-dimensional dynamical system onto the low-dimensional latent space using the autoencoder, thereby embedding the physics into the model reduction procedure <cit.>. A common method used across both classes of machine learning-based ROMs is the convolutional neural network (CNN), which is used to construct low-dimensional solution manifolds <cit.>. Because CNNs require inputs to be formulated in a grid structure, the direct application of CNNs to unstructured meshes for the purpose of model reduction is currently untenable. In recent years, graph neural networks (GNNs) have been developed to extract information of interest to the user from sets of unstructured and relational data <cit.>, making them an appropriate method to generate low-dimensional embeddings of models that use unstructured meshes. To date, GNNs have been used to perform dimensional compression with the objective of interpretable latent spaces <cit.> and used to perform dimensional compression to quickly approximate solutions of parameterized PDEs using a learned operator <cit.>. The method outlined in this paper, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), builds off of the approaches found in <cit.> to apply graph autoencoders directly into the time integration scheme deployed by the deep least-squares Petrov-Galerkin (dLSPG) framework, a PMOR scheme that leverages CNN-based autoencoders to perform dimensional compression and time integration <cit.>. The paper is organized in the following manner. 
Section 2 describes the background and preliminaries of the GD-LSPG framework, which includes the FOM and its corresponding residual minimization scheme, a general formulation of performing nonlinear dimension reduction via autoencoders, and a brief overview of graph theory. Section 3 presents the graph autoencoder deployed in GD-LSPG. Section 4 outlines the time-discrete residual minimizing time integration scheme based on the traditional dLSPG projection. Section 5 presents the results of numerical experiments. Specifically, we apply GD-LSPG to the benchmark one-dimensional (1D) Burgers' equation model using a structured mesh and a two-dimensional (2D) Euler equations model deploying an unstructured mesh. Finally, Section 6 presents conclusions and discusses avenues for future work. § BACKGROUND AND PRELIMINARIES This section presents the background and preliminaries for the GD-LSPG method. Specifically, Section <ref> introduces the first-order PDE and residual-minimizing time integration scheme on which we develop the GD-LSPG method. Section <ref> provides a general introduction to performing model reduction and PMOR with an autoencoder, along with some of the current limitations of autoencoders in the literature. Finally, Section <ref> presents the basics of graph theory to the reader. §.§ Full-order model Consider a system of n_q ∈ℕ PDEs where n_q depends on the number of state variables. Using a mesh to spatially discretize the physical domain into N_c ∈ℕ points, the semi-discrete system of the FOM is described by a system of time-continuous ODEs: d𝐱/dt = 𝐟(𝐱, t; μ), 𝐱(0; μ) = 𝐱^0 (μ), where 𝐱∈ℝ^N is the semi-discrete state vector, N = n_q N_c denotes the dimension of the FOM, μ∈𝒟 denotes the parameters, and 𝐟: ℝ^N× (0, T_f] ×𝒟→ℝ^N is the semi-discretized velocity function. To approximate the time evolution of the state vector, 𝐱, from the system of ODEs, we use the general form in (<ref>), 𝐫: ( ξ; μ) ↦α_0 ξ + ∑_i=1^τα_i 𝐱^n+1-i + 𝐩(ξ, t; μ, Δ t ) + ∑_i=1^τ𝐪(𝐱^n+1-i, t; μ, Δ t ), in which the value of the state vector, 𝐱^n+1, at time step (n+1)∈ℕ is determined by minimizing the time-discrete residual 𝐫: ℝ^N ×𝒟→ℝ^N given the state vector at a number of previous time steps, where 𝐩: ℝ^N× (0, T_f] ×𝒟×ℝ_+ →ℝ^N and 𝐪: ℝ^N× (0, T_f] ×𝒟×ℝ_+ →ℝ^N denote functions defined by the time integration scheme. We note that the time integration scheme is implicit in cases where 𝐩≠0. In (<ref>), α_i ∈ℝ, i=0,1,…,τ, are constants defined by the time integration scheme, ξ∈ℝ^N is the sought-after solution of the minimization scheme for the state vector at the (n+1)^th time step, the superscript n+1-i denotes the value of the variable at time step n+1-i ∈ℕ, Δ t ∈ℝ_+ denotes the time step size, and τ∈ℕ is the number of time steps associated with the time integration scheme. The state vector at the (n+1)^th time step, 𝐱^n+1, is defined as the solution of the minimization problem, 𝐱^n+1 = ξ∈ℝ^N||𝐫( ξ; μ) || _2, n=0,⋯,N_t-1. where N_t∈ℕ denotes the final time step, and the time step is chosen to be fixed, i.e., Δ t_n = Δ t, and t_n=nΔ t. With an appropriate selection of coefficients α_i and functions 𝐩 and 𝐪, the general formulation of (<ref>) will cover forward and backward Euler schemes, as well as Runge-Kutta schemes. §.§ Nonlinear dimension reduction via autoencoders A wide variety of nonlinear mappings have been adopted in the literature in recent years to obtain a low-dimensional latent space for PMOR on nonlinear problems. 
A common approach is to leverage autoencoders to approximate a mapping between the high-dimensional system and the low-dimensional latent space <cit.>. Autoencoders are a type of deep learning architecture in which the basic idea is to perform dimensional compression on a data set, in our case a FVM state vector, down to a latent space with an encoder, 𝐄𝐧𝐜: 𝐱↦𝐱̂ with 𝐄𝐧𝐜: ℝ^N →ℝ^M, and to reconstruct the data set by decoding the latent space with a decoder, 𝐃𝐞𝐜: 𝐱̂↦𝐱 with 𝐃𝐞𝐜: ℝ^M →ℝ^N, where M≪ N. The former is a nonlinear mapping from the high-dimensional state vector, 𝐱, to the low-dimensional latent representation, 𝐱̂, and the latter is a nonlinear mapping from the low-dimensional embedding to the high-dimensional state vector. The encoder and decoder are constructed by a series of layers in which each layer applies a set of predefined functions to the output of the previous layer. The nonlinearity associated with the mapping is introduced through an appropriate selection of functions. General forms of the encoder and decoder, consisting of n_h∈ℕ and n_g∈ℕ layers, respectively, are, 𝐄𝐧𝐜: (𝐱; θ) ↦𝐡_n_h(· ; Θ_n_h) ∘𝐡_n_h-1(· ; Θ_n_h-1) ∘…∘𝐡_2(· ; Θ_2)∘𝐡_1(𝐱 ; Θ_1), 𝐃𝐞𝐜: (𝐱̂; ω) ↦𝐠_n_g(·; Ω_n_g) ∘𝐠_n_g-1(·; Ω_n_g-1) ∘…∘𝐠_2(·; Ω_2)∘𝐠_1(𝐱̂; Ω_1), where 𝐡_i(· ; Θ_i), i=1,…,n_h and 𝐠_i(· ; Ω_i), i=1,…,n_g denote the function(s) acting on the input of the i^th layer of the encoder and decoder networks, respectively, (or equivalently the output of the corresponding (i-1)^th layer). As will be explained later in Sections <ref> and <ref>, some layers encompass a number of functions, depending on their objective, which will collectively form 𝐡_i(· ; Θ_i) or 𝐠_i(· ; Ω_i). In (<ref>) and (<ref>), Θ_i, i = 1,…,n_h and Ω_i, i = 1,…,n_g, denote the weights and biases of the i^th layer of the encoder and decoder networks, respectively. The set of all the weights and biases of the autoencoder, i.e., θ:={Θ_1,…,Θ_n_h} and ω:={Ω_1,…,Ω_n_g}, are trained to minimize an appropriately defined error norm between the input to the encoder and the output of the decoder. In this manuscript, we use an equal number of layers for the encoder and decoder, i.e., n_h=n_g=n_ℓ. Due to their remarkable ability to filter grid-based information, across extensive amounts of literature in PMOR <cit.>, CNNs have been heavily relied upon as the backbone for developing autoencoder architectures. However, CNNs are inherently dependent upon the domain inputs being formulated as a structured grid, meaning that PMOR methods leveraging CNNs are not readily applicable to unstructured meshes (see Figure <ref>). Our proposed method overcomes the need for structured meshes to perform PMOR with a nonlinear projection using autoencoders. Our goal is to generalize autoencoder-based PMOR methods such that both structured and unstructured meshes can be inputs to the autoencoder (see Figure <ref>). Given that unstructured meshes are commonly used in engineering applications to represent complex geometry, our approach can be widely extended to applications with arbitrary topology. Our proposed architecture follows an outline that is similar to graph U-nets <cit.>, multiscale graph autoencoders <cit.>, and graph convolutional autoencoders for parameterized PDEs <cit.>, wherein a hierarchy of graphs is generated, each with fewer nodes than the previous level. §.§ Graph theory Extensive reading on graph theory can be found in the works of Hamilton <cit.> and Battaglia et al. 
<cit.>, but a brief overview is provided in this section to provide sufficient background for our graph autoencoder architecture. A graph is a tuple 𝒢 = {𝒱, ℰ}, where 𝒱 denotes the node set, |𝒱| denotes the number of nodes in the graph, and ℰ denotes the edge set, which is chosen to represent user-prescribed relationships between the nodes in the node set. Depending on the application, the graph (and the associated node set and edge set) can be used to represent a wide variety of concepts. For example, molecules can be modeled as a graph by representing atoms as nodes and bonds as edges <cit.>, while social networks can be modeled as a graph by representing people as nodes and friendships as edges <cit.>. The adjacency matrix, 𝐀 = [a_ij] ∈ℝ^|𝒱|×|𝒱|, is another way to represent the edge set of a graph. Consider the case where the nodes are indexed by a number, i=1,⋯,|𝒱|. If nodes i and j in the graph are connected via an edge, i.e., if for i,j ∈𝒱, we have (i,j) ∈ℰ, the corresponding entry in the adjacency matrix is a_ij=1. Otherwise, we have a_ij=0. In this manuscript, we consider exclusively undirected graphs, meaning for any edge in the graph, (i,j) ∈ℰ, we also have (j,i) ∈ℰ. With this formulation, our adjacency matrix will be symmetric. A visualization of the construction of the adjacency matrix for a given graph is found in Figure <ref>. A node feature matrix, 𝐗∈ℝ^|𝒱|× N_F, can be utilized to prescribe feature information to the node set of the graph, where the i^th row of 𝐗 denotes the node feature vector of the i^th node in the graph, and N_F∈ℕ denotes the number of features prescribed to each node. § DIMENSION REDUCTION VIA GRAPH AUTOENCODERS In this section, we develop the specifics of the graph autoencoder used in GD-LSPG. Again, we emphasize that although we study GD-LSPG in the context of the FVM, it remains applicable to a variety of numerical methods in computational mechanics, such as the finite element method. Upon spatial discretization of the physical domain, each finite volume cell is represented by a node (which is different from the vertices of the cell). We take the node set 𝒱 to represent the cells in the discretized domain, i.e., |𝒱| = N_c. To emulate the manner in which CNNs filter information from neighboring grid points in the spatial discretization, we take the edge set, ℰ, to connect the node representation of cells within a user-defined radius of each other, i.e., ℰ = 𝐑𝐚𝐝𝐢𝐮𝐬_𝐆𝐫𝐚𝐩𝐡(𝐏𝐨𝐬,r) = {∀ (j,k): j, k ∈𝒱, ||𝐏𝐨𝐬_j - 𝐏𝐨𝐬_k ||≤ r }, where 𝐏𝐨𝐬∈ℝ^N_c × n_d is the matrix denoting the spatial positions of the node-representation of the cells in the FVM discretization, taken as the cell-centroids (i.e., for triangular mesh elements, the average position of the vertices) of the finite volume cells. Row j of the matrix (i.e., 𝐏𝐨𝐬_j) denotes the position of node j∈𝒱, n_d ∈ℕ denotes the spatial dimensionality of the modeled problem, j and k denote the indices of the corresponding nodes in the node set, r∈ℝ denotes the user-defined radius, and ||·|| : ℝ^n_d→ℝ_+ denotes the Euclidean norm. The adjacency matrix is used to represent the edge set in a matrix format. The feature matrix, 𝐗∈ℝ^N_c× n_q is a matrix with the number of rows equal to the number of cells in the FVM discretization, i.e., N_c, and the number of columns equal to the number of state variables in the governing PDE, n_q. In other words, the feature matrix 𝐗 is the matrix version of the state vector 𝐱∈ℝ^N (with N=n_q N_c) introduced in Section <ref>. 
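To make these definitions concrete, the short NumPy sketch below builds the edge set of (<ref>) and the corresponding adjacency matrix from a toy set of cell centroids, and reshapes a state vector into the node feature matrix. The function names (radius_graph, matricize, vectorize) and the cell-major ordering of the state vector are our own illustrative assumptions, not the implementation used in our experiments.

import numpy as np

def radius_graph(pos, r):
    """Edge set {(j, k): ||pos_j - pos_k|| <= r, j != k} and the adjacency matrix."""
    n_c = pos.shape[0]
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = ((dists <= r) & ~np.eye(n_c, dtype=bool)).astype(float)
    edges = np.argwhere(adj > 0)              # each row is an undirected edge (j, k)
    return edges, adj

def matricize(x, n_c, n_q):
    """State vector x in R^(n_q*N_c) -> node feature matrix X in R^(N_c x n_q) (cell-major ordering assumed)."""
    return x.reshape(n_c, n_q)

def vectorize(X):
    """Inverse map: node feature matrix back to the state vector."""
    return X.reshape(-1)

# toy example: 5 cell centroids of a 2D mesh and a state with n_q = 2 variables per cell
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.5]])
edges, adj = radius_graph(pos, r=1.2)
x = np.arange(10, dtype=float)                # N = n_q * N_c = 2 * 5
X = matricize(x, n_c=5, n_q=2)
assert np.allclose(vectorize(X), x)
print(adj)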
As a result, the formulation has a direct mapping between the state vector 𝐱 and the node feature matrix 𝐗 (𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞: 𝐱↦𝐗, with 𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞: ℝ^N_c n_q→ℝ^N_c × n_q) and a direct mapping between the node feature matrix 𝐗 and the state vector 𝐱 (𝐕𝐞𝐜𝐭𝐨𝐫𝐢𝐳𝐞: 𝐗↦𝐱, with 𝐕𝐞𝐜𝐭𝐨𝐫𝐢𝐳𝐞: ℝ^N_c × n_q→ℝ^N_c n_q). Once a graph representation of a solution state is formulated, it can be encoded to a latent representation with a graph autoencoder following the general form of (<ref>)-(<ref>). In the subsequent sections, we present the graph autoencoder used in the GD-LSPG framework and the specifics of the architecture of the encoder and the decoder. First, Section <ref> presents a hierarchical spectral clustering algorithm used by the autoencoder to generate a hierarchy of reduced graphs to emulate the compressive abilities of CNNs. Next, Section <ref> details the encoder architecture and its deployment of the hierarchy of reduced graphs to create a low-dimensional embedding of the input graph. Then, Section <ref> details the decoder architecture and its deployment of the hierarchy of reduced graphs in reverse order to reconstruct the original input graph from its latent representation. In our graph autoencoder, we include an additional layer with no trainable parameters for preprocessing and postprocessing in the encoder (Section <ref>) and the decoder (Section <ref>), respectively. Finally, Section <ref> presents the training strategy deployed to optimize the training parameters of the encoder and decoder. Figure <ref> provides a visual representation of the graph autoencoder deployed in GD-LSPG, with n_ℓ=3 for demonstration purposes. §.§ Generating a hierarchy of reduced graphs with spectral clustering To compute a hierarchy of reduced graphs for the autoencoder used in GD-LSPG, at each level in the hierarchy, we aim to partition the graph into a pre-defined number of non-overlapping sets of strongly connected nodes. We then use the partitions to aggregate each cluster of nodes together into a single node at the next layer of the hierarchy, thereby reducing the number of nodes in the graph and the total dimension of the graph. We consider the case where the encoder and the decoder each have n_ℓ∈ℕ layers. As will be presented in the subsequent sections, the encoder and decoder both have a fully-connected/MLP layer along with n_ℓ-1 layers with compressed graphs. Therefore, in this section, we aim to produce a hierarchy of reduced graphs composed of n_ℓ-1 reduced graphs that result in a hierarchy of n_ℓ graphs, including the input graph of the discretized FOM (i.e., graph 0). The graphs in the encoder and the decoder will have the same topology but with reverse ordering. This means that the i^th graph in the hierarchy of the graphs of the encoder, i=0,⋯,n_ℓ-1, will be equivalent to the (n_ℓ-i-1)^th graph of the decoder (refer to Figure <ref>). In other words, the first graph of the decoder is the (n_ℓ-1)^th (final) graph of the encoder, and the final graph of the decoder is the zeroth (original) graph of the encoder. Hence, we focus on building the hierarchy of the graphs for the encoder. This task can be achieved by minimizing the number of `broken' edges in the graph topology of the (i-1)^th layer to form the clusters for the i^th layer's graph. The graph representation of the FOM, 𝒢^0=(𝒱^0,ℰ^0) are given from (<ref>) using the discretized mesh of the FOM. 
In addition, the number of nodes in the layers i=1,⋯,n_ℓ-1, i.e., {|𝒱^1|,|𝒱^2|,⋯,|𝒱^n_ℓ-1|}, along with the radius used in (<ref>), i.e., { r^0, r^1, …, r^n_ℓ-1}, are prescribed a priori, dictating the amount of reduction performed and the number of edges at each layer in the hierarchy. At layer i∈{1,⋯,n_ℓ-1} of the encoder, we aim to reduce the number of nodes from |𝒱^i-1| in layer i-1 to |𝒱^i| in layer i with |𝒱^i| < |𝒱^i-1|. This action is first carried out by forming |𝒱^i| clusters, i.e., 𝒜_1^i-1,𝒜_2^i-1,⋯,𝒜_|𝒱^i|^i-1 with the following conditions, |𝒜_j^i-1|≥ 1, j=1,⋯,|𝒱^i| 𝒜_j^i-1⊂𝒱^i-1, j=1,⋯,|𝒱^i| 𝒜_j^i-1∩𝒜_k^i-1=∅, j≠ k, j,k=1,⋯,|𝒱^i| 𝒜_1^i-1∪𝒜_2^i-1∪⋯∪𝒜_|𝒱^i|^i-1=𝒱^i-1, which ensures that all clusters are a non-empty subset of the node set of layer i-1, the intersection of any two distinct clusters is the empty set, and the union of all clusters is equal to the node set of layer i-1. In (<ref>), |·| denotes the cardinality of the set. For given clusters 𝒜_1^i-1,𝒜_2^i-1,⋯,𝒜_|𝒱^i|^i-1, we evaluate the function, 𝐑𝐚𝐭𝐢𝐨𝐂𝐮𝐭: ( 𝒱^i-1 , ℰ^i-1) ↦1/2∑_k=1^|𝒱^i||∀(u,v) ∈ℰ^i-1 : u ∈𝒜_k^i-1, v ∈𝒜̅_k^i-1|/|𝒜_k^i-1|, that tends to measure the number of broken edges for the given cluster choice, where ℰ^i-1 denotes the edge set of the graph at the (i-1)^th level in the hierarchy, (u,v) represents any existing edge in ℰ^i-1 connecting nodes u and v, 𝒜_k^i-1⊂𝒱^i-1 denotes a subset of nodes in the graph at the (i-1)^th level in the hierarchy and 𝒜̅_k^i-1=𝒱^i-1\𝒜_k^i-1 denotes the complement of the set 𝒜_k^i-1 at the same level. The number of distinct ways we can choose |𝒱^i| clusters from |𝒱^i-1| nodes while satisfying conditions of (<ref>) is determined from the Stirling number of the second kind <cit.>, S(n,k)=1/k!∑_j=0^k(-1)^j kj(k-j)^n with n=|𝒱^i-1| and k=|𝒱^i|. The optimal cluster is the one with the minimum value for the function 𝐑𝐚𝐭𝐢𝐨𝐂𝐮𝐭 from (<ref>). However, this minimization problem is NP-hard <cit.>. As discussed in Von Luxburg <cit.>, spectral clustering introduces a relaxation on the minimization problem to eliminate its discrete nature. The departure from a discrete set allows the user to perform an eigenvalue analysis on the graph to generate the clusters appropriately. The spectral clustering algorithm from Hamilton <cit.> is leveraged in this study and can be found in Algorithm <ref>, where 𝐏𝐨𝐬^i ∈ℝ^|𝒱^i |× n_d denotes the matrix of spatial coordinates for the graph at the i^th level in the hierarchy, 𝐀^i is the adjacency matrix of the i^th layer in the hierarchy generated by the edge index, ℰ^i, of layer i, previously defined in (<ref>), r^i ∈ℝ_+ denotes a user-prescribed radius to be used in (<ref>), 𝐒^i∈ℝ^|𝒱^i|×|𝒱^i+1| denotes the assignment matrix of the i^th layer of the hierarchy which is used to assign each node in layer i to a cluster in layer i+1, and thus a portion of a single node at the layer i+1 in the hierarchy. The assignment matrix is used to cluster and decrease the number of nodes in the graph at each step in the hierarchy of reduced graphs. In the algorithm, 𝐃^i ∈ℝ^|𝒱^i|×|𝒱^i| is the diagonal degree matrix representing the number of edges connected to each node in the i^th layer of the hierarchy, 𝐋^i = 𝐃^i - 𝐀^i is the Laplacian of the graph associated with the i^th layer in the hierarchy, 𝐁^i ∈ℝ^|𝒱^i|×|𝒱^i+1| denotes the spectral node feature matrix formed by the |𝒱^i+1| smallest eigenvectors of 𝐋^i, excluding the smallest. According to <cit.>, the smallest eigenvalue of the unnormalized Laplacian is simply zero and can therefore be neglected. 
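The sketch below illustrates one level of this construction with NumPy and scikit-learn: it forms the unnormalized Laplacian, keeps the |𝒱^i+1| smallest nontrivial eigenvectors, clusters them with k-means, and assembles a mean-aggregating assignment matrix together with the coarse node positions (cluster means); the coordinate rescaling discussed below is omitted. The use of scikit-learn's k-means and the function names are assumptions made for illustration and may differ from the details of Algorithm <ref>.

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_coarsen(adj, pos, n_coarse, seed=0):
    """One hierarchy level: cluster |V^i| nodes into n_coarse = |V^{i+1}| groups."""
    deg = np.diag(adj.sum(axis=1))             # degree matrix D^i
    lap = deg - adj                             # unnormalized Laplacian L^i = D^i - A^i
    evals, evecs = eigh(lap)                    # eigenpairs in ascending order
    B = evecs[:, 1:n_coarse + 1]                # spectral features, trivial (zero) mode skipped
    labels = KMeans(n_clusters=n_coarse, n_init=10, random_state=seed).fit_predict(B)

    n_fine = adj.shape[0]
    S = np.zeros((n_fine, n_coarse))
    for k in range(n_coarse):
        members = labels == k
        S[members, k] = 1.0 / members.sum()     # columns of S average over each cluster
    pos_coarse = S.T @ pos                      # coarse positions as cluster means (rescaling omitted)
    return S, labels, pos_coarse

# toy usage: coarsen a 6-node path graph into 2 clusters
A = np.diag(np.ones(5), 1); A = A + A.T
S, labels, pos_c = spectral_coarsen(A, np.arange(6, dtype=float)[:, None], n_coarse=2)
print(labels, pos_c.ravel())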
As dimensional compression is performed in the hierarchy of reduced graphs, the nodes of the graphs deeper in the hierarchy tend to become closer together due to the nature of the positions of each node being computed based on the arithmetic mean of their corresponding cluster in the previous layer. To avoid the natural accumulation of the nodes to a smaller spatial domain, a rescaling operator, i.e., 𝐑𝐞𝐬𝐜𝐚𝐥𝐞: ℝ^|𝒱^i+1|× n_d→ℝ^|𝒱^i+1|× n_d is used at each layer to rescale 𝐏𝐨𝐬^i+1 such that the maximum and minimum values of the coordinates match that of the previous layer in the hierarchy. This algorithm is visually represented in Figure <ref>. The construction of the hierarchy of reduced graphs is performed in the offline stage given the original mesh, number of layers, the user-prescribed values for the number of nodes in the graphs of each layer, i.e., |𝒱^i| for i=1,⋯,n_ℓ-1, and the user-prescribed values for radii used in (<ref>), i.e., r^0, r^1, …, r^n_ℓ-1. While the hierarchy of graphs will be utilized in the encoder and decoder, the architecture of the encoder and decoder does not influence the spectral clustering step. §.§ Encoder architecture The encoder architecture deploys the hierarchy of reduced graphs computed via the procedure from Section <ref>. The encoder consists of layers i=0,…,n_ℓ. The zeroth layer (i=0), outlined in Section <ref>, is a preprocessing layer that tailors the input data 𝐱 to the form suited for the graph autoencoder. Layers i=1,…,n_ℓ-1, outlined in Section <ref>, leverage the hierarchy of reduced graphs from Section <ref> to perform message passing and pooling (MPP) operations that reduce the dimension of the system. The final layer of the encoder (i=n_ℓ), as outlined in Section <ref>, utilizes an MLP to arrive at the low-dimensional embedding 𝐱̂. §.§.§ Preprocessing – Layer 0 The preprocessing layer of the encoder (i=0) encompasses two operators, 𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞 and 𝐒𝐜𝐚𝐥𝐞, acting on the input vector 𝐱. For a FOM with n_q state variables, the 𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞 operator is used to convert 𝐱∈ℝ^n_q N_c to the node feature matrix 𝐗∈ℝ^N_c× n_q in which each column of the matrix represents the nodal values of one state variable. If the FOM consists of only one state variable (i.e., n_q=1), 𝐗=𝐱, and the 𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞 operator will be the identity operator. As defined in (<ref>), the 𝐒𝐜𝐚𝐥𝐞 operator acts on the resulting node feature matrix to improve the numerical stability of training, as is commonly performed in the literature <cit.>, 𝐒𝐜𝐚𝐥𝐞: 𝐗_ij^0 ↦𝐗_ij^0 - 𝒳^min_j/𝒳^max_j - 𝒳^min_j, i=1,…,N_c, j=1,…,n_q where 𝐒𝐜𝐚𝐥𝐞: ℝ→ [0,1] is an element-wise scaling operator acting on the elements of 𝐗, and 𝒳_j^max, 𝒳_j^min∈ℝ denote the maximum and minimum values, respectively, of the j^th feature (i.e., j^th column of matrix 𝐗) in the solution states used to train the autoencoder, which are determined and stored before training begins. The resulting form of the preprocessing layer is 𝐡_0: (𝐱; Θ_0 ) ↦𝐒𝐜𝐚𝐥𝐞( ·) ∘𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞( 𝐱), where 𝐡_0: ℝ^N_c n_q→ℝ^N_c × n_q, and Θ_0 = ∅ is the empty set, as the preprocessing layer does not have trainable weights and biases. §.§.§ Message passing and pooling (MPP) – Layers 1,…,n_ℓ-1 The MPP layer consists of two processes, where each relies upon the hierarchy of reduced graphs computed in Section <ref>. The first operation is a message passing operation, wherein nodes connected by an edge exchange information with each other to obtain information about nearby nodes. The optimal information exchange is obtained from training the autoencoder. 
In the encoder, the message passing operation in layer i increases the number of features associated with each node from N_F^i-1∈ℕ to N_F^i∈ℕ. We take our message passing operation to be a mean aggregation SAGEConv from Hamilton et al. <cit.>, which applies updates to each node based on the arithmetic mean of its neighbors' features, i.e., 𝐌𝐏^i_enc: (𝐗^i-1; Θ_i) ↦σ( 𝐗^i-1_j 𝐖^i_1 + (mean_n∈𝒦^i-1(j)𝐗^i-1_n ) 𝐖^i_2), j=1,…,|𝒱^i-1|, with 𝐌𝐏^i_enc: ℝ^|𝒱^i-1|× N_F^i-1×ℝ^N_F^i-1× N_F^i×ℝ^N_F^i-1× N_F^i→ℝ^|𝒱^i-1|× N_F^i, where 𝐗^i-1∈ℝ^|𝒱^i-1|× N_F^i-1 denotes the input node feature matrix to the i^th layer, the subscripts j and n denote the j^th and n^th rows of 𝐗^i-1, 𝐖^i_1, 𝐖^i_2∈ℝ^N_F^i-1× N_F^i denote the weights with Θ_i = {𝐖^i_1, 𝐖^i_2} denoting the set of weights for the i^th MPP layer, 𝒦^i-1(j) denotes the set of nodes connected to node j based on the adjacency matrix 𝐀^i-1, where j∈ℕ denotes the j^th node in the graph at layer i-1, and σ: ℝ→ℝ denotes the element-wise activation function, chosen here to be the exponential linear unit (ELU) due to its continuously differentiable property <cit.>. The SAGEConv function described in (<ref>), includes a loop over all nodes j ∈𝒱^i-1, where for each node, the j^th row of the output 𝐗̅^i-1 of the message passing operation is calculated. The output of (<ref>) has the same number of rows as its input, 𝐗^i-1, but can have a different number of features (i.e., N_F^i not necessarily equal to N_F^i-1). The next step of the MPP layer is a pooling operation. In the pooling operation, the assignment matrices from Section <ref> are used to reduce the number of nodes in a graph. By construction, the assignment matrices are equivalent to an arithmetic mean operation. As a result, we use them to compute the arithmetic mean feature vector of each cluster to get 𝐗^i, i.e., Pool^i : (𝐗̅^i-1) ↦(𝐒^i-1)^T 𝐗̅^i-1, with 𝐏𝐨𝐨𝐥^i: ℝ^|𝒱^i-1|× N_F^i→ℝ^|𝒱^i|× N_F^i, where 𝐗̅^i-1∈ℝ^|𝒱^i-1|× N_F^i denotes the output of the message passing operation, 𝐌𝐏^i_enc, and 𝐒^i-1∈ℝ^|𝒱^i-1|×|𝒱^i| is the assignment matrix precomputed by the spectral clustering algorithm in Section <ref>. The full MPP layer takes the form, 𝐡_i: ( 𝐗^i-1; Θ_i ) ↦𝐏𝐨𝐨𝐥^i( ·) ∘𝐌𝐏^i_enc( 𝐗^i-1 ; Θ_i ), with 𝐡_i: ℝ^|𝒱^i-1|× N_F^i-1×ℝ^N_F^i-1× N_F^i×ℝ^N_F^i-1× N_F^i→ℝ^|𝒱^i |× N_F^i. Hence, the MPP layer, as visually represented in Figure <ref>, decreases the number of nodes in a given graph and increases the number of features associated with each node. §.§.§ Fully-connected layer: compression – Layer n_ℓ In the final layer of the encoder (i=n_ℓ), we first flatten the input matrix 𝐗^n_ℓ-1∈ℝ^|𝒱^n_ℓ-1|× N_F^n_ℓ-1 to a vector-representation, i.e., 𝐅𝐥𝐚𝐭𝐭𝐞𝐧: 𝐗^n_ℓ-1↦𝐱̅^n_ℓ-1, where 𝐱̅^n_ℓ-1∈ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1. Here, we note that the 𝐅𝐥𝐚𝐭𝐭𝐞𝐧 operator is similar to the 𝐕𝐞𝐜𝐭𝐨𝐫𝐢𝐳𝐞 operator, but with dimensions different than the node feature matrix of the full-order solution. Next, a fully-connected/MLP layer is applied to the flattened state to compress it to a low-dimensional vector representation, i.e., 𝐌𝐋𝐏_enc: ( 𝐱̅^n_ℓ-1; Θ_n_ℓ) ↦𝐖^n_ℓ𝐱̅^n_ℓ-1 + 𝐛^n_ℓ, with 𝐌𝐋𝐏_enc: ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1×ℝ^M ×|𝒱^n_ℓ-1| N_F^n_ℓ-1×ℝ^M→ℝ^M, where 𝐖^n_ℓ∈ℝ^M ×|𝒱^n_ℓ-1| N_F^n_ℓ-1 and 𝐛^n_ℓ∈ℝ^M denote the weights and biases, respectively, with Θ_n_ℓ = {𝐖^n_ℓ, 𝐛^n_ℓ}. The final layer of the encoder architecture takes the form 𝐡_n_ℓ: ( 𝐗^n_ℓ-1; Θ_n_ℓ) ↦𝐌𝐋𝐏_enc( · ; Θ_n_ℓ) ∘𝐅𝐥𝐚𝐭𝐭𝐞𝐧(𝐗^n_ℓ-1), with 𝐡_n_ℓ: ℝ^|𝒱^n_ℓ-1|× N_F^n_ℓ-1×ℝ^M ×|𝒱^n_ℓ-1| N_F^n_ℓ-1×ℝ^M→ℝ^M. 
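Putting the encoder operations together, the dense NumPy sketch below illustrates the mean-aggregation message passing of (<ref>), the assignment-matrix pooling of (<ref>), and the final flatten-plus-affine compression to the latent vector. It uses randomly initialized weights, a dense adjacency matrix, and our own function names; it is a simplified illustration rather than the PyTorch-Geometric implementation used in our experiments.

import numpy as np

def elu(z):
    return np.where(z > 0, z, np.exp(z) - 1.0)

def message_pass(X, adj, W1, W2):
    """Mean-aggregation SAGE-style update: elu(X W1 + mean_over_neighbors(X) W2)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                          # isolated nodes keep their own features only
    X_nbr = (adj @ X) / deg                      # row j: mean of features of the neighbors of node j
    return elu(X @ W1 + X_nbr @ W2)

def pool(X_bar, S):
    """Cluster-mean pooling with the precomputed assignment matrix S."""
    return S.T @ X_bar

def compress(X_last, W, b):
    """Flatten the last graph's features and map to the latent vector x_hat in R^M."""
    return W @ X_last.reshape(-1) + b

# toy shapes: 6 nodes, 2 -> 4 features, pooled to 3 nodes, latent dimension M = 2
rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.4).astype(float); adj = np.maximum(adj, adj.T); np.fill_diagonal(adj, 0)
X = rng.standard_normal((6, 2))
S = np.repeat(np.eye(3) / 2.0, 2, axis=0)        # two nodes per cluster, mean-aggregating columns
X_bar = message_pass(X, adj, rng.standard_normal((2, 4)), rng.standard_normal((2, 4)))
x_hat = compress(pool(X_bar, S), rng.standard_normal((2, 3 * 4)), rng.standard_normal(2))
print(x_hat.shape)                               # (2,)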
The output of this layer, 𝐱̂∈ℝ^M, is the low-dimensional latent representation of the solution state. §.§ Decoder architecture Much like the encoder, the decoder architecture deploys the hierarchy of reduced graphs from Section <ref>. The decoder consists of layers i=0,…,n_ℓ. The zeroth layer (i=0), outlined in Section <ref>, utilizes an MLP to reconstruct a small graph from the low-dimensional latent representation, 𝐱̂. Layers i=1,…,n_ℓ-1, outlined in Section <ref>, leverage the hierarchy of reduced graphs from Section <ref> in reverse order to perform unpooling and message passing (UMP) to increase the dimension of the system. The final layer of the decoder (i=n_ℓ), outlined in section <ref>, is a postprocessing layer that restructures the output graph into a state vector for deployment in the time integration scheme. §.§.§ Fully-connected layer: expansion – Layer 0 The zeroth layer of the decoder (i=0) entails two functions. It first applies a fully-connected/MLP layer to the latent representation, 𝐌𝐋𝐏_dec: ( 𝐱̂; Ω_0 ) ↦σ( W^0𝐱̂ + b^0) , with 𝐌𝐋𝐏_dec: ℝ^M ×ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1× M ×ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1→ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1, where W^0∈ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1× M and b^0∈ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1 denote the weights and biases of the MLP layer of the decoder, respectively, with Ω_0 = {W^0, b^0}, and σ: ℝ→ℝ denotes the element-wise activation function, chosen here to be the ELU activation function <cit.>. An unflattening operator is then applied to the output of the fully-connected layer, 𝐲̅^0, to generate a node feature matrix corresponding to the n_ℓ-1 graph in the hierarchy of reduced graphs, 𝐔𝐧𝐟𝐥𝐚𝐭𝐭𝐞𝐧: 𝐲̅^0 ↦𝐘^0, with 𝐔𝐧𝐟𝐥𝐚𝐭𝐭𝐞𝐧: ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1→ℝ^|𝒱^n_ℓ-1|× N_F^n_ℓ-1, where 𝐲̅^0 ∈ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1 denotes the output of 𝐌𝐋𝐏_dec and 𝐘^0 ∈ℝ^|𝒱^n_ℓ-1|× N_F^n_ℓ-1. We note that the unflattening operator is similar to the 𝐌𝐚𝐭𝐫𝐢𝐜𝐢𝐳𝐞 operator introduced previously but applied to a vector with a size different from the full-order state vector. Ultimately, the first layer of the decoder takes the form 𝐠_0: (𝐱̂; Ω_0) ↦𝐔𝐧𝐟𝐥𝐚𝐭𝐭𝐞𝐧( ·) ∘𝐌𝐋𝐏_dec( 𝐱̂; Ω_0 ), where 𝐠_0: ℝ^M×ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1× M ×ℝ^|𝒱^n_ℓ-1| N_F^n_ℓ-1→ℝ^|𝒱^n_ℓ-1|× N_F^n_ℓ-1. §.§.§ Unpooling and message passing (UMP) – Layers 1,…,n_ℓ-1 The next layers in the decoder architecture (i=1,...,n_ℓ-1) consist of UMP layers. The first step in a UMP layer is to perform an unpooling operation, wherein nodes are reintroduced to the graph, and their feature vectors are interpolated. In layer i of the decoder with i=1,⋯,n_ℓ-1, the unpooling operation receives a graph of layer n_ℓ-i as an input and outputs a graph of layer n_ℓ-i-1 in the hierarchy of n_ℓ-1 graphs of the encoder. For example in Figure <ref> with n_ℓ=3, the input and output to the second layer of the decoder (i=2) are the graphs of 1st layer (n_ℓ-i=1) and the zeroth layer (n_ℓ-i-1=0) of the hierarchy of graphs in the encoder, respectively. For ease of notation, we introduce î=n_ℓ-i as a counter used to denote the hierarchy of reduced graphs in the opposite order as the encoder. 
In the unpooling operation of layer i of the decoder, a node's features of graph î are interpolated using the k-nearest neighbors of the node features of graph î-1, 𝐔𝐧𝐩𝐨𝐨𝐥^i : 𝐘^i-1↦∑_n∈𝒩^î-1(j)𝐰( 𝐏𝐨𝐬^î-1_j, 𝐏𝐨𝐬^î_n ) 𝐘_n^i-1/∑_n∈𝒩^î-1(j)𝐰(𝐏𝐨𝐬^î-1_j, 𝐏𝐨𝐬^î_n ), j=1,⋯,|𝒱^î-1| where, 𝐰 : (𝐏𝐨𝐬^î-1_j,𝐏𝐨𝐬^î_n ) ↦1/||𝐏𝐨𝐬^î-1_j - 𝐏𝐨𝐬^î_n ||, with 𝐔𝐧𝐩𝐨𝐨𝐥^i: ℝ^|𝒱^î|× N_F^î→ℝ^|𝒱^î-1|× N_F^î, 𝒩^î-1(j) is the k-nearest neighbors in 𝒱^î of the j^th node in 𝒱^î-1, with k∈ℕ denoting the number of nearest neighbors used for interpolation. 𝐏𝐨𝐬^î-1_j ∈ℝ^n_d is the spatial position of the j^th node at the (î-1)^th layer of the hierarchy of reduced graphs, 𝐏𝐨𝐬^î_n ∈ℝ^n_d is the spatial position of the n^th node in the î^th layer in the hierarchy of reduced graphs, 𝐰: ℝ^n_d×ℝ^n_d→ℝ_+ denotes the spatial interpolation function, and ||·|| : ℝ^n_d→ℝ_+ denotes the Euclidean norm. Much like the SAGEConv function (<ref>), the unpooling of (<ref>) is performed by looping over all nodes, j ∈𝒱^î-1, to compute the rows j=1,⋯,|𝒱^î-1| of the output of the unpooling operation, 𝐘̅^i-1∈ℝ^|𝒱^î-1|× N_F^î. Next, a message passing operation is applied to the outputs of the unpooling operation, 𝐌𝐏^i_dec: ( 𝐘̅^i-1; Ω_i) ↦σ(𝐘̅^i-1_j W^i_1 + (mean_n∈𝒦^î-1(j)𝐘̅^i-1_n ) W^i_2), j=1,⋯,|𝒱^î-1|, with 𝐌𝐏^i_dec: ℝ^|𝒱^î-1|× N_F^î×ℝ^N_F^î× N_F^î-1×ℝ^N_F^î× N_F^î-1→ℝ^|𝒱^î-1|× N_F^î-1, where W^i_1 , W^i_2∈ℝ^N_F^î× N_F^î-1 denote the weights with Ω_i = {W^i_1, W^i_2} denoting the set of weights for the i^th UMP layer, 𝒦^î-1(j) denotes the set of nodes connected to node j in the graph of (î-1)^th layer based on the adjacency matrix 𝐀^î-1, where the subscripts j and n denote the j^th and n^th nodes, respectively, and σ: ℝ→ℝ denotes the element-wise activation function, chosen here to be the ELU activation function <cit.>. According to (<ref>), the output of 𝐌𝐏^i_dec is determined in a row-wise manner. Ultimately, the UMP layer takes the form, 𝐠_i: ( 𝐘^i-1; Ω_i ) ↦𝐌𝐏^i_dec( · ; Ω_i ) ∘𝐔𝐧𝐩𝐨𝐨𝐥^i( 𝐘^i-1), with 𝐠_i: ℝ^|𝒱^î|× N_F^î×ℝ^N_F^î× N_F^î-1×ℝ^N_F^î× N_F^î-1→ℝ^|𝒱^î-1|× N_F^î-1. Hence, the UMP layer increases the number of nodes in a given graph and decreases the number of features associated with each node, and it gives the node feature matrix 𝐘^i as the output. The UMP layer is visually represented in Figure <ref>. §.§.§ Postprocessing – Layer n_ℓ To represent the output of the decoder as a state vector for appropriate deployment in the time integration scheme, a postprocessing step is applied as the final layer (i=n_ℓ). First, the 𝐈𝐧𝐯𝐒𝐜𝐚𝐥𝐞 operator is applied to invert the original 𝐒𝐜𝐚𝐥𝐞 operation, 𝐈𝐧𝐯𝐒𝐜𝐚𝐥𝐞: 𝐘_ij^n_ℓ-1↦𝐘_ij^n_ℓ-1( 𝒳^max_j - 𝒳^min_j) + 𝒳^min_j, where 𝐈𝐧𝐯𝐒𝐜𝐚𝐥𝐞: ℝ→ℝ is an element-wise scaling operator. Next, a 𝐕𝐞𝐜𝐭𝐨𝐫𝐢𝐳𝐞 operator is applied to reshape the output of the decoder to a state vector. Ultimately, the postprocessing step takes the form, 𝐠_n_ℓ: (𝐘^n_ℓ-1; Ω_n_ℓ) ↦𝐕𝐞𝐜𝐭𝐨𝐫𝐢𝐳𝐞( ·) ∘𝐈𝐧𝐯𝐒𝐜𝐚𝐥𝐞( 𝐘^n_ℓ-1), where 𝐠_n_ℓ: ℝ^N_c × n_q→ℝ^N_c n_q, and Ω_n_ℓ = ∅ is the empty set, as there are no trainable parameters in the postprocessing layer. The output of the decoder 𝐱̃∈ℝ^N is a reconstruction of the original state vector 𝐱. §.§ Training the autoencoder and regularization The only components of the autoencoder that require training are the message passing operations and fully-connected/MLP layers with θ={Θ_1, Θ_2, ⋯, Θ_n_ℓ}, ω={Ω_0, Ω_1, ⋯, Ω_n_ℓ-1} as trainable parameters. 
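A minimal sketch of the unpooling step of (<ref>) is given below: each fine-graph node's features are interpolated from its k nearest coarse-graph nodes with inverse-distance weights, after which a message passing update analogous to the encoder's would be applied. The dense formulation, the small constant guarding against zero distances, and the function names are our own simplifications.

import numpy as np

def unpool(Y_coarse, pos_coarse, pos_fine, k=3, eps=1e-12):
    """Inverse-distance kNN interpolation of coarse-graph features onto fine-graph nodes."""
    n_fine = pos_fine.shape[0]
    Y_fine = np.zeros((n_fine, Y_coarse.shape[1]))
    for j in range(n_fine):
        d = np.linalg.norm(pos_coarse - pos_fine[j], axis=1)
        nbrs = np.argsort(d)[:k]                 # k nearest coarse nodes of fine node j
        w = 1.0 / (d[nbrs] + eps)                # inverse-distance weights w(pos_j, pos_n)
        Y_fine[j] = (w[:, None] * Y_coarse[nbrs]).sum(axis=0) / w.sum()
    return Y_fine

# toy example: 3 coarse nodes with 4 features interpolated onto 6 fine nodes
rng = np.random.default_rng(0)
Y_fine = unpool(rng.standard_normal((3, 4)), rng.random((3, 2)), rng.random((6, 2)), k=2)
print(Y_fine.shape)                              # (6, 4)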
To train these parameters, we adopt the same loss function as Lee and Carlberg <cit.>, namely the sum of squared L^2-norms of the reconstruction error over the training set, ℒ = ∑_i=1^N_train||𝐱^i - 𝐃𝐞𝐜( ·) ∘𝐄𝐧𝐜( 𝐱^i ) || ^2 _2, where 𝐱^i ∈ℝ^N is the i^th solution state in the training set and N_train∈ℕ denotes the total number of solution states generated by the FOM. As is common with neural ODEs <cit.>, we empirically observed that our decoder architecture is prone to generating an ill-conditioned system during time integration (the procedure of which is outlined in Section <ref>). As a remedy, we adopt the regularization strategy from Josias and Brink <cit.>, where a regularization term, taken to be the ratio of the maximum and minimum singular values of the Jacobian of the decoder, is added to (<ref>) during training. Hence, our training minimizes: ℒ = ∑_i=1^N_train||𝐱^i - 𝐃𝐞𝐜( ·) ∘𝐄𝐧𝐜( 𝐱^i ) || ^2 _2 + λ/N_train∑_i=1^N_train( σ_max^i/σ_min^i - 1 )^2, where λ∈ℝ is a user-prescribed regularization parameter, and σ_max^i ∈ℝ and σ_min^i ∈ℝ denote the maximum and minimum singular values of the Jacobian of the decoder, respectively, evaluated at the i^th solution state in the training set. § TIME INTEGRATION In the literature, time stepping for ROMs using autoencoders has been achieved by a variety of methods, including training neural networks to compute time updates <cit.> and projecting the governing equations of the FOM onto a nonlinear manifold <cit.>. For GD-LSPG, we adopt a strategy similar to that of Lee and Carlberg <cit.>, due to its ability to perform time-discrete residual minimization. GD-LSPG leverages the graph autoencoder to project the governing equations onto a low-dimensional latent space, thus performing time integration on the latent state variable. To illustrate, we set the initial condition of the low-dimensional state vector to be the encoding of the initial condition of the high-dimensional system, i.e., 𝐱̂(0;μ) = 𝐄𝐧𝐜(𝐱(0;μ)), and approximate the full-order state vector of the solution of the system, (<ref>), to be 𝐱̃(t; μ) ≈𝐃𝐞𝐜(𝐱̂(t ; μ)), where 𝐱̃:ℝ_+ ×𝒟→ℝ^N denotes the predicted solution state. Next, we substitute (<ref>) into (<ref>) to obtain the following minimization problem: 𝐱̂(t; μ) = arg min_{ξ̂∈ℝ^M} ||𝐫( 𝐃𝐞𝐜(ξ̂(t ; μ)) ) || ^2_2, where ξ̂∈ℝ^M is the sought-after low-dimensional solution state at time t. Numerically obtaining this solution is well studied in the literature <cit.> and is achieved with an iterative solver. At each iteration of time step n, the update 𝐱̂^n(j+1) = 𝐱̂^n(j) - β^(j)((Ψ( 𝐱̂^n(j); μ))^T Ψ( 𝐱̂^n(j); μ))^-1(Ψ( 𝐱̂^n(j); μ))^T 𝐫(𝐃𝐞𝐜(𝐱̂^n(j)); μ), is performed, where the superscript n(j) denotes the j^th iteration of the n^th time step, β^(j)∈ℝ_+ is the step size chosen to satisfy Wolfe conditions <cit.>, and the test basis matrix, Ψ:ℝ^M ×𝒟→ℝ^N × M, is defined to be: Ψ: (ξ̂; μ) ↦(∂𝐫/∂𝐱|_𝐃𝐞𝐜(ξ̂(t ; μ))) ( d𝐃𝐞𝐜/dξ̂|_ξ̂(t ; μ)). The initial guess at each time step is chosen to be 𝐱̂^n(0) = 𝐱̂^n-1, where 𝐱̂^n-1 denotes the converged solution from the previous time step, n-1. The solution is updated iteratively until the L^2-norm of the reduced-state residual for the current iteration falls below a user-prescribed fraction κ∈ [0, 1] of that of the initial guess at the time step n=1 (which is equal to the initial condition): Convergence criterion : ||𝐫̂(𝐱̂^n(j); μ) ||_2/||𝐫̂(𝐱̂^0; μ) ||_2≤κ, where the reduced-state residual 𝐫̂ is obtained from the projection of the residual, 𝐫̂: (ξ̂; μ) ↦(Ψ( ξ̂; μ))^T 𝐫(𝐃𝐞𝐜(ξ̂); μ).
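To summarize the online procedure, the sketch below advances the latent state by one time step with the update (<ref>), approximating both the decoder Jacobian and the residual Jacobian by finite differences (as is done in the experiments reported later). The residual and decoder are supplied as callables; the fixed iteration count and constant step size are simplifications of the adaptive step-size and convergence checks described above.

import numpy as np

def fd_jacobian(f, z, h=0.01):
    """Forward-difference Jacobian of a vector-valued map f at z."""
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        zp = z.copy(); zp[i] += h
        J[:, i] = (f(zp) - f0) / h
    return J

def gd_lspg_step(residual, decoder, xhat0, n_iter=10, beta=1.0, h=0.01):
    """One time step of GD-LSPG: Gauss-Newton iterations on || r(Dec(xhat)) ||_2."""
    xhat = xhat0.copy()
    for _ in range(n_iter):
        x = decoder(xhat)
        J_dec = fd_jacobian(decoder, xhat, h)    # dDec/dxhat
        J_res = fd_jacobian(residual, x, h)      # dr/dx evaluated at Dec(xhat)
        Psi = J_res @ J_dec                      # test basis (N x M)
        rhat = Psi.T @ residual(x)               # projected residual
        dx = np.linalg.solve(Psi.T @ Psi, rhat)  # normal-equations solve
        xhat = xhat - beta * dx
    return xhat

# toy usage with a linear "decoder" and an affine residual
dec = lambda z: np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) @ z
res = lambda x: x - np.array([1.0, 2.0, 3.0])
print(gd_lspg_step(res, dec, np.zeros(2)))       # approximately [1., 2.]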
§ NUMERICAL EXPERIMENTS To evaluate the efficiency and accuracy of the GD-LSPG method, we employ two test problems. First, to provide a baseline for comparison to the rest of the literature, we use a commonly studied 1D Burgers' equation model using a structured mesh <cit.>. This allows us to benchmark the accuracy of GD-LSPG with PMOR methods that deploy CNN-based autoencoders. Second, we deploy GD-LSPG to a model for the 2D Euler equations using an unstructured mesh resulting in a Riemann Problem <cit.> to demonstrate GD-LSPG's ability to extend to unstructured meshes where CNN-based autoencoders have been inapplicable. All autoencoders are trained with PyTorch <cit.> and PyTorch-Geometric <cit.>. A detailed description of the employed autoencoder architectures and choice of hyperparameters for both examples are provided in <ref>. To train the models and therefore minimize (<ref>), the Adam optimizer <cit.> is deployed to perform stochastic gradient descent with an adaptive learning rate. In this section, we use three performance metrics to assess accuracy. First, reconstruction error is used to assess the autoencoder's ability to encode and decode a precomputed solution, Autoencoder reconstruction error = √(∑_n=1^N_t||𝐱^n ( μ) - 𝐃𝐞𝐜∘𝐄𝐧𝐜(𝐱^n ( μ) ) ||_2^2)/√(∑_n=1^N_t||𝐱^n ( μ) ||_2^2), where 𝐱^n(μ) is the full-order solution at the n^th time step. Second, the POD reconstruction error is used to assess an affine latent space's ability to project and reconstruct a precomputed solution, POD reconstruction error = √(∑_n=1^N_t||( 𝐈 - ΦΦ^T ) 𝐱^n ( μ) ||_2^2)/√(∑_n=1^N_t||𝐱^n ( μ) ||_2^2), where Φ∈ℝ^N× M is the matrix of reduced basis vectors from an affine POD approximation constructed based on the method of snapshots (see <ref>). Finally, the state prediction error is used to assess the accuracy of the ROM obtained from different methods in predicting the full-order solution, State prediction error = √(∑_n=1^N_t||𝐱^n ( μ) - 𝐱̃^n ( μ) ||_2^2)/√(∑_n=1^N_t||𝐱^n ( μ) ||_2^2). §.§ One-dimensional Burgers' equation Using the same numerical experiment as Rewieński <cit.> and Lee and Carlberg <cit.>, we benchmark GD-LSPG's ability to perform PMOR upon an advection-driven problem. The governing equation for the 1D inviscid Burgers' equation, a common benchmark problem for shock propagation, is as follows. ∂ w(x,t;μ)/∂ t + ∂ f (w(x,t;μ))/∂ x = 0.02e^μ_2 x, ∀ x∈ (0,L), ∀ t ∈ (0,T] w(0,t;μ) = μ_1, ∀ t ∈ (0,T] w(x,0; μ) = 1, ∀ x ∈ (0,L), where f(w) = 0.5w^2, x∈ℝ denotes spatial position, t∈ℝ_+ denotes time, L∈ℝ denotes the length of the 1D physical domain, and T ∈ℝ_+ denotes the final time. The finite volume method is deployed by dividing the spatial domain into 256 equally-sized cells over a domain of length L=100, lending to a structured finite volume mesh. A backward-Euler time-integration scheme is employed, which corresponds to the cell-wise equations at the i^th cell, p_i: (ξ, t; μ, Δ t) ↦Δ t/2Δ x( ( w_i^n+1)^2 - ( w_i-1^n+1)^2 ) - 0.02e^μ_2 x_i, with q_i=0, k=1, α_0=1, and α_1=-1 when rearranged in the form of (<ref>). Note that since Burgers' equation involves only one state variable, i.e., 𝐩_i and 𝐪_i are scalar variables. In (<ref>), Δ x ∈ℝ_+ is the length of each cell in the uniform 1D mesh, x_i ∈ℝ is the coordinate of the center of the i^th cell in the mesh, and ξ = (w_1^n+1, w_2^n+1, …, w_N_c^n+1)^T is the sought-after state solution at the (n+1)^th time step. 
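For concreteness, a direct NumPy translation of this cell-wise residual might look like the sketch below; it evaluates r(ξ) = ξ - x^n + p(ξ) for the backward-Euler discretization above, with the inflow boundary handled by assigning the left neighbor of the first cell the value μ_1 (an assumption about the boundary treatment) and with the source term entering exactly as printed in p_i. It is illustrative only, not the solver used to generate the training data.

import numpy as np

def burgers_residual(xi, x_prev, mu, dt, dx, x_centers):
    """Time-discrete residual of the 1D inviscid Burgers' FOM with backward Euler."""
    mu1, mu2 = mu
    w_left = np.concatenate(([mu1], xi[:-1]))     # upwind neighbor; inflow boundary value at x = 0
    flux_term = dt / (2.0 * dx) * (xi**2 - w_left**2)
    source = 0.02 * np.exp(mu2 * x_centers)       # source contribution as written in p_i above
    return xi - x_prev + flux_term - source

# toy check: 8 cells on a domain of length L = 100
n_c, L = 8, 100.0
dx = L / n_c
x_centers = (np.arange(n_c) + 0.5) * dx
r = burgers_residual(np.ones(n_c), np.ones(n_c), (4.3, 0.021), dt=0.07, dx=dx, x_centers=x_centers)
print(r)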
The time integration scheme uses a constant time step size Δ t = .07 and a final time T=35 for a total of 501 time steps per solution including the initial value. To train both the CNN-based autoencoder and the graph autoencoder, as well as obtain an affine POD basis, the solution to the FOM is computed for a total of 80 parameter scenarios with the parameters μ = (μ_1 = 4.25 + ( 1.25/9) i, μ_2 = .015 + ( .015/7)j), for i=0,…,9 and j=0,…,7. Once trained, the autoencoders are deployed in an online setting to perform time integration for their respective ROMs. The Jacobian of the decoder is approximated using a finite difference scheme at both the offline training stage and the online prediction stage (refer to <ref> for further details). Empirically, we find that setting the user-defined tolerance, i.e., κ in (<ref>), to be 10^-3 for GD-LSPG and dLSPG, and 10^-4 for POD-LSPG to be sufficient to achieve dependably accurate and converging solutions. For GD-LSPG, we employ an adaptive step size that starts at β^(0) = 0.75 for each time step and reduces by 5% every 10 iterations. Similarly for dLSPG, at each time step, we begin with β^(0)=1.0 and reduce by 5% every 10 iterations. For POD-LSPG, we simply take β^(j)=1.0 for all iterations. Figure <ref> depicts the solution state at various time steps for two test parameter set realizations not seen in the training set and two latent space dimensions of M=5, 10. Additionally, the POD reconstruction errors from (<ref>), and autoencoder reconstruction errors from (<ref>) for the CNN-based autoencoder, inspired by <cit.>, and the reconstruction errors from the graph autoencoder can be found in Figure <ref>. Using the state prediction error of (<ref>), we compare GD-LSPG to the traditional affine POD-based least-squares Petrov-Galerkin (POD-LSPG) projection <cit.>, as well as dLSPG, which leverages a CNN-based autoencoder <cit.>. We emphasize that the reconstruction errors for the graph autoencoder are more than an order of magnitude smaller than that of the affine POD approximation for latent space dimensions 3 to 10. Likewise, the state prediction errors of GD-LSPG are roughly an order of magnitude lower than that of POD-LSPG for latent space dimensions 4 to 10. This outcome is due to the fact that an affine subspace is not well-suited for such nonlinear problems. Benchmarking the graph autoencoder with the traditional CNN-based autoencoder, we find that the graph autoencoder's reconstruction errors and state prediction errors to be less than an order of magnitude greater than those of the CNN-based autoencoder for the vast majority of latent space dimensions. This comparison implies that, while GD-LSPG gains adaptability and is applicable to unstructured meshes, it does not perform as well as CNN-based dLSPG for the Burgers' equation with a structured mesh. Still, we emphasize that, qualitatively, GD-LSPG is able to model the advection-dominated shock behavior in a manner similar to traditional CNN-based dLSPG, where traditional affine POD-LSPG tends to fail. Figure <ref> depicts the difference between the ROM prediction of the full-order state vector and the FOM results with space and time. It can be seen that both dLSPG and GD-LSPG provide an improved ability to model the shock behavior of (<ref>) over POD-LSPG. Additionally, it is apparent that the main source of error for GD-LSPG is a slight phase lag between the ground truth location of the shock and GD-LSPG's prediction of the shock location. 
Hence, on a structured mesh, GD-LSPG provides an improvement over traditional affine POD-LSPG <cit.> in a manner comparable to that of dLSPG <cit.>. To assess the computational cost of the GD-LSPG method and compare it to POD-LSPG and dLSPG, we provide an analysis of the 1D Burgers' equation model. We found the online ROMs to be much more stable when deployed on a central processing unit (CPU) as opposed to a graphics processing unit (GPU). We believe this to be due to the numerical roundoff error introduced in the finite difference Jacobian, but leave it as an area of future investigation. As a result, all operations in this section are performed in PyTorch using a single Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz ICE LAKE core. Figure <ref> reports the times associated with various components of the time integration procedure. We present the time to get 𝐫^n(k) from (<ref>) and evaluate its Jacobian, time to get the Jacobian of the decoder, time to check the convergence criterion (<ref>)-(<ref>), time to decode to the high-dimensional space (<ref>), the time to compute Ψ, Ψ^T Ψ, and Ψ^T 𝐫^n(k), and time to update the low-dimensional solution state (<ref>). We emphasize that none of these methods employ a hyper-reduction scheme. Consequently, none of them achieve cost-savings with respect to the FOM. §.§ Two-dimensional Euler equations with unstructured mesh resulting in a Riemann problem In our second numerical experiment, we consider the FVM using an unstructured mesh to solve the two-dimensional Euler equations. We provide a brief overview of the important concepts in this section but the reader may consult <cit.> for further reading on Riemann solvers for the Euler equations. We begin with the two-dimensional Euler equations in the form of hyperbolic PDEs: ∂𝐔/∂ t + ∂𝐅/∂ x + ∂𝐆/∂ y = 0, 𝐔 = [ ρ; ρ u; ρ v; ρ E ], 𝐅 = [ ρ u; ρ u^2 + P; ρ u v; ρ u H ], 𝐆 = [ ρ v; ρ u v; ρ v^2 + P; ρ v H ], where ρ∈ℝ_+ denotes density, u ∈ℝ and v ∈ℝ denote velocities in the x and y directions, respectively, P ∈ℝ_+ denotes pressure, E=1/γ-1p/ρ+1/2(u^2+v^2) ∈ℝ_+ and H = γ/γ-1p/ρ+1/2(u^2+v^2) ∈ℝ_+ denote specific total energy and enthalpy, respectively, and γ∈ℝ_+ is the specific heat ratio. We integrate (<ref>) over a control volume, Γ, and apply the divergence theorem to get the form, ∫_Γd/dt𝐔dV + ∫_∂Γ𝐇·𝐧̂dA = 0, where dA∈ℝ_+ denotes the differential surface area of a control volume, dV∈ℝ_+ denotes the differential volume of a control volume, ∂Γ denotes the surface of the control volume, 𝐇 = 𝐅î + 𝐆ĵ, 𝐧̂ = n_x î + n_y ĵ denotes the outward facing unit-normal vector from the control volume, where î and ĵ denote the Cartesian unit vectors of x and y respectively, and n_x∈ [-1,1] and n_y ∈ [-1,1] denote the components of 𝐧̂ decomposed in the x and y directions. An approximate solution for (<ref>) is obtained by first spatially discretizing the domain, where the surface integral term is approximated by obtaining the numerical flux passing over the cell faces in the unstructured mesh. The numerical flux is computed using a Riemann solver designed to resolve the computationally difficult nature of the hyperbolic Euler equations. In this numerical experiment, we choose a Rotated Roe, Harten, Lax, and van Leer (R-RHLL) flux from <cit.> coupled with a two-stage Runge-Kutta time integration scheme <cit.> to generate a time series solution. 
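For reference, the conservative state and flux vectors of (<ref>) can be assembled from the primitive variables as in the short sketch below, using the standard polytropic-gas expressions for E and H; this is an illustration only and is independent of the R-RHLL flux implementation.

import numpy as np

def euler_state_and_fluxes(rho, u, v, P, gamma=1.4):
    """Conservative state U and flux vectors F, G of the 2D Euler equations."""
    E = P / ((gamma - 1.0) * rho) + 0.5 * (u**2 + v**2)           # specific total energy
    H = gamma * P / ((gamma - 1.0) * rho) + 0.5 * (u**2 + v**2)   # specific total enthalpy
    U = np.array([rho, rho * u, rho * v, rho * E])
    F = np.array([rho * u, rho * u**2 + P, rho * u * v, rho * u * H])
    G = np.array([rho * v, rho * u * v, rho * v**2 + P, rho * v * H])
    return U, F, G

print(euler_state_and_fluxes(1.0, -1.2, -0.3, 1.0))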
The resulting scheme for a single finite volume cell takes the form 𝐔^n+1_i = 𝐔^n_i - Δ t ∑_j∈ℳ(i)Φ_ij(𝐔^n_i - Δ t ∑_j∈ℳ(i)Φ_ij( 𝐔^n_i, 𝐔^n_j ), 𝐔^n_j - Δ t ∑_k∈ℳ(j)Φ_jk( 𝐔^n_j, 𝐔^n_k ) ), where 𝐔_i^n, 𝐔_j^n, 𝐔_k^n∈ℝ^4 denote the state vector of the i^th, j^th, and k^th cells at the n^th time step, respectively, ℳ(i) denotes the set of neighboring cells of the i^th cell (i.e., sharing an interface), Φ_ij: ℝ^4 ×ℝ^4 →ℝ^4 and Φ_jk: ℝ^4 ×ℝ^4 →ℝ^4 denote the functions that compute the R-RHLL flux at the interface between the i^th– j^th and j^th– k^th cells, respectively. Therefore, (<ref>) can be written in the residual-minimization cell-wise form of (<ref>) at the i^th cell with 𝐪_i: (𝐔^n, t^n; μ, Δ t) ↦Δ t ∑_j∈ℳ(i)Φ_ij(𝐔^n_i - Δ t ∑_j∈ℳ(i)Φ_ij( 𝐔^n_i, 𝐔^n_j ), 𝐔^n_j - Δ t ∑_k∈ℳ(j)Φ_jk( 𝐔^n_j, 𝐔^n_k ) ), and 𝐩_i=0, α_0=1, α_1=-1, τ=1, and ξ = (𝐔_1^n+1,𝐔_2^n+1,…,𝐔_N_c^n+1)^T. We note that the minimization problem associated with the residual of the FOM (<ref>) is solved explicitly in the two-stage Runge-Kutta scheme. However, when the two-stage Runge-Kutta scheme is deployed in GD-LSPG, minimizing the projection of the residual onto the low-dimensional latent space according to (<ref>) is performed implicitly using the iterative solver described in Section <ref>. We solve (<ref>)-(<ref>) on the domain x ∈ [0, 1], y ∈ [0, 1] with inflow/outflow boundary conditions that are computed via the fluxes of the cells along each boundary. The initial conditions are defined by dividing the domain into quadrants, where a different state is defined in each quadrant (see Figure <ref>). In this experiment, we define the quadrants as a parameterized version of configuration G from <cit.> (or configuration 15 from <cit.>), i.e., ρ_1 = 1.0, ρ_3 = 0.8, u_1 = u_3 = u_4 = μ_u, v_1 = v_2 = v_3 = μ_v, P_1 = 1.0, P_2 = P_3 = P_4 = 0.4, where μ_u, μ_v∈ℝ are model's parameters, i.e., μ = ( μ_u, μ_v ), and the remaining variables (ρ_2, ρ_4, u_2, v_4) are defined by the Rankine-Hugoniot relations and the relations for a polytropic gas. Specifically, the rarefaction wave yields the conditions, ρ_2 = ρ_1 ( P_2/P_1) ^ 1/γ, u_2 = u_1 + 2 √(γ)/γ-1( √(P_2/ρ_2) - √(P_1/ρ_1)), and the shock wave yields the conditions, ρ_4 = ρ_1 (P_4/P_1 + γ-1/γ+1/1+γ-1/γ+1P_4/P_1), v_4 = v_1 + √((P_4-P_1)(ρ_4-ρ_1)/ρ_4 ρ_1). Our model leverages Numba's just-in-time compiler <cit.> to compile the code efficiently. We generate an unstructured mesh with 4328 finite volume cells using Gmsh <cit.>. A sample solution with the deployed unstructured mesh is presented in Figure <ref>. We perform a parametric study by varying the initial velocity in the top right quadrant via μ = (μ_u=-1.2-0.2i, μ_v=-0.3-0.1j), with i=0,…,4 and j=0,…,4, resulting in solutions to 25 different parameter sets. We take Δ t = 0.001 and T_f=0.3, therefore collecting 301 snapshots for each parameter set including the initial conditions. The solutions from the parametric study were used as training data to train the autoencoder as outlined in <ref>. For two test parameter sets not seen during training, μ = (μ_u=-1.3, μ_v=-0.65) and μ = (μ_u=-1.9, μ_v=-0.35), the solution states at t=0.15 and t=0.3 can be found in Figure <ref>. We approximate the Jacobian of the decoder using a finite difference scheme at both the offline training stage and online prediction stage (refer to <ref> for further details). For POD-LSPG, we set the tolerance κ in (<ref>) to be 10^-4 and the step size β^(j) to be 1.0 for all steps. 
For GD-LSPG, we set the tolerance κ in (<ref>) to be 10^-3 and employ an adaptive step size scheme with β^(0) to be 1.0 reducing by 10% every 10 iterations. Like before, we compare POD projection errors from (<ref>) with the autoencoder reconstruction error from (<ref>) in Figure <ref>. Additionally, Figure <ref> compares the state prediction error of POD-LSPG to GD-LSPG (obtained from (<ref>) for both) with a varying dimension of the low-dimensional latent space, while Figure <ref> presents the local error of several solution states generated by POD-LSPG and GD-LSPG. Again, we note that both qualitatively and quantitatively, the graph autoencoder and its deployment in GD-LSPG is more accurately able to model the moving rarefaction waves, shock waves, and contact waves than the affine POD projection and POD-LSPG. The traditional POD-LSPG solutions are far more diffusive than the solution to the FOM and, as a result, fail to model the advection-driven behavior present in this solution. § CONCLUSIONS AND FUTURE WORK In this paper, we present GD-LSPG, a PMOR method that leverages a graph autoencoder architecture to perform PMOR on unstructured meshes; a setting where traditional CNN-based dLSPG is not directly applicable. The graph autoencoder is constructed by first generating a hierarchy of reduced graphs to emulate the compressive capabilities of CNNs. Next, message passing operations are trained for each layer in the hierarchy of reduced graphs to emulate the filtering capabilities of CNNs. In an online stage, the graph autoencoder is coupled with the same time integration scheme deployed by dLSPG to perform time integration and generate solutions for parameter sets not seen during training. To benchmark the accuracy of GD-LSPG against other methods, we compare directly the solutions for a 1D Burgers' equation problem with a structured mesh generated by GD-LSPG to those generated by the CNN-based dLSPG <cit.> and POD-LSPG <cit.> frameworks. The results of this study find that, while GD-LSPG does not perform as well as traditional CNN-based dLSPG for structured meshes, GD-LSPG still provides significant improvement over the affine POD-LSPG framework. To demonstrate the flexibility of GD-LSPG in a setting where dLSPG is not directly applicable, we test GD-LSPG on a 2D Euler equations model deploying an unstructured mesh and find that, for small latent spaces, GD-LSPG significantly outperforms the POD-LSPG solution in terms of accuracy. One possible future direction is to develop a hyper-reduction scheme to achieve cost savings. In GD-LSPG's current formulation, the high-dimensional residual must be computed and projected onto the low-dimensional latent space, forcing the operational count complexity to scale on the dimension of the FOM. A hyper-reduction scheme would overcome this limitation by sampling and generating a sparse representation of the high-dimensional residual, thereby eliminating an operational count complexity that scales on the dimension of the FOM. To date, hyper-reduction has been achieved for dLSPG using shallow decoders <cit.>. However, GD-LSPG uses a deep decoder. As a result, we leave this as an open area of investigation. This paper aims primarily to investigate GD-LSPG in the context of advection-dominated flows modeled by unstructured meshes for simple geometries to provide a baseline and proof of concept. 
As a result, another open area of interest is to apply GD-LSPG to domains with more complicated geometries, such as airfoils and nozzles, where unstructured meshes can be advantageous. § ACKNOWLEDGEMENTS L. K. Magargal is supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. L. K. Magargal and J. W. Jaworski acknowledge the financial support of the Department of Energy under grant DE-EE0008964. S. N. Rodriguez and J. G. Michopoulos acknowledge the support of the Office of Naval Research through U.S. Naval Research Laboratory core funding. J. W. Jaworski acknowledges the partial support of the National Science Foundation under CAREER award 1846852. § ARCHITECTURE DETAILS AND TRAINING The autoencoders are trained with the strategies outlined in this section. All models are trained on an NVIDIA L40S GPU. We initialize all weights and biases using Xavier initialization <cit.>. Furthermore, all finite difference Jacobians are approximated using a step size of 0.01. To train the graph autoencoder in the 1D Burgers' equation model, we first perform a training/validation split, where 4000 solution states are stored for validation, and the remaining 36,080 solution states are used for training. We train the model for 1000 epochs where, at each epoch, the training set is passed through the autoencoder in batches of 20. Loss is evaluated using (<ref>) with a regularization parameter λ=0.001. The Adam optimizer <cit.> is deployed with a learning rate of 10^-4. The exact details of the autoencoder architecture are provided in Tables <ref> and <ref>. Activation functions are taken to be ELU <cit.>. Empirically, we noticed that the architecture struggles to model the solution state near the boundaries, especially when the shock approaches the boundary. We believe that this challenge is related to the nonlocality of our graph autoencoder, as the boundary nodes do not receive adequate and appropriate information from the space outside of the domain. This behavior is often found across the field of nonlocal modeling, including peridynamics <cit.> and smoothed-particle hydrodynamics <cit.>. We leave this as an open area for future investigation, but for now, we present a simple procedure for including padding in a graph autoencoder. This procedure appends 30 nodes to the left side of the domain and sets their features to be the value of the left boundary condition, i.e., μ_1. Along the right boundary, we find that solving Burgers' equation with the finite volume solver for 30 finite volume cells to the right of the right boundary and prescribing the computed velocity values to the features of the padding nodes is appropriate. Our decoder reconstructs the solution for the nodes in the physical domain as well as those in the padding zones but only computes the loss with respect to the nodes in the physical domain of the problem. During the hierarchical spectral clustering algorithm, radius r^i for layer i is chosen such that (<ref>) gives roughly 7 edges for each node, i.e., r^i = (x_right - x_left) (7/2|𝒱^i |), where x_right∈ℝ is the position of the rightmost padding node in 𝒱^0 and x_left∈ℝ is the position of the leftmost padding node in 𝒱^0. To train the CNN-based autoencoders deployed in the 1D Burgers' equation model, we first perform a training/validation split, where 4000 solution states are stored for validation, and the remaining 36,080 solution states are used for training. 
We train the model for 1000 epochs where, at each epoch, the training set is passed through the autoencoder in batches of 20. Loss is evaluated using (<ref>) with a regularization parameter λ=0 (i.e., no Jacobian regularization) and stochastic gradient descent is performed using the Adam optimizer <cit.> to update the weights and biases at each epoch. The learning rate is chosen to be 10^-4. The exact details of the autoencoders are provided in Tables <ref> and <ref>. Activation functions are taken to be ELU <cit.>. A kernel size of 25 is chosen at each layer, where half-padding is used. In the decoder, the transposed convolution layers are given an output padding of 1. To train the graph autoencoders used in the 2D Euler equations model, we first perform a training/validation split, where 525 solution states are stored for validation, and the remaining 7000 solution states are used for training. We train the model for 5000 epochs where, at each epoch, the training set is passed through the autoencoder in batches of 20. Loss is evaluated using (<ref>) with a regularization parameter λ=0.001 and stochastic gradient descent is performed using the Adam optimizer <cit.> to update the weights and biases at each epoch. The learning rate is chosen to be 10^-4. Activation functions are taken to be ELU <cit.>. To compute the hierarchy of reduced graphs, at each layer, (<ref>) uses a radius that aims for 9 edges for each node, i.e., r^i = √(9/π|𝒱^i|). Exact details of the graph autoencoder architecture are in Tables <ref> and <ref>. We found stacking message passing operations to be beneficial to the accuracy of the graph autoencoder. In the MPP layers, we perform message passing operations multiple times before pooling. In the UMP layers, multiple message passing operations are performed after unpooling. The number of message passing operations is represented in Tables <ref>, <ref>, <ref>, and <ref> under the “# of MP operations” column. Lastly, the last message passing operation of the decoder does not have an activation function associated with it, as the range of the ELU activation function does not match the domain of the state variables. §.§ Details of the graph autoencoder for 1D Burgers' equation model §.§ Details of the CNN-based autoencoder for 1D Burgers' equation model §.§ Details of the graph autoencoder for 2D Riemann Problem § PROPER ORTHOGONAL DECOMPOSITION To compute the set of orthonormal POD basis vectors, we use the method of snapshots <cit.> in which a snapshot matrix of the time history of the FOM solutions is generated, 𝐗^POD = [𝐱^1,…,𝐱^n_train] ∈ℝ^N × N_train, where N_train∈ℕ is the number of training snapshots, 𝐱^i is the solution state vector of snapshot i, and N is the dimension of the state vector. Next, singular value decomposition is performed on the snapshot matrix, 𝐗^POD: 𝐗^POD = 𝐕Σ𝐔^T where 𝐕 = [ 𝐯_1,…,𝐯_N]∈ℝ^N × N is a matrix of N orthonormal vectors which represent the POD modes in the order of decreasing singular values, Σ = diag(σ_1,…,σ_N)∈ℝ^N × N is the diagonal matrix of singular values ordered as σ_1 ≥…≥σ_N, and 𝐔 = [𝐮_1,…,𝐮_n_s]∈ℝ^N × N_train provides information about the time dynamics. The POD basis is created by truncating the first M left singular vectors of the snapshot matrix, i.e., Φ = [𝐯_1,…,𝐯_M ]∈ℝ^N × M, which is made up of M orthonormal vectors that describe the dominant mode shapes of the system. 
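A compact sketch of this construction is given below: it computes the truncated POD basis Φ and the relative projection error of (<ref>) from a snapshot matrix using NumPy's SVD. It is an illustrative reimplementation; the basis used for POD-LSPG in our experiments may differ in implementation details.

import numpy as np

def pod_basis(snapshots, M):
    """Truncated POD basis from the snapshot matrix X_POD in R^(N x N_train)."""
    V, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
    return V[:, :M], sigma                        # Phi in R^(N x M) and the singular values

def pod_reconstruction_error(snapshots, Phi):
    """Relative projection error ||(I - Phi Phi^T) X||_F / ||X||_F over the snapshots."""
    proj = Phi @ (Phi.T @ snapshots)
    return np.linalg.norm(snapshots - proj) / np.linalg.norm(snapshots)

# toy usage: 50 snapshots of a 200-dimensional state, basis of dimension M = 5
rng = np.random.default_rng(0)
X_pod = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))   # rank-5 snapshot data
Phi, _ = pod_basis(X_pod, M=5)
print(pod_reconstruction_error(X_pod, Phi))       # ~ 0 for rank-5 snapshots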
The POD basis vectors are both optimal in the L^2 sense and orthonormal, making the POD basis a common choice for deployment in the context of PMOR <cit.>.
A Novel Dependency Framework for Enhancing Discourse Data Analysis
[ "Kun Sun", "Rong Wang" ]
§ ABSTRACT The development of different theories of discourse structure has led to the establishment of discourse corpora based on these theories. However, the existence of discourse corpora established on different theoretical bases creates challenges when it comes to exploring them in a consistent and cohesive way. This study has as its primary focus the conversion of PDTB annotations into dependency structures. It employs refined BERT-based discourse parsers to test the validity of the dependency data derived from the PDTB-style corpora in English, Chinese, and several other languages. By converting both PDTB and RST annotations for the same texts into dependencies, this study also applies “dependency distance” metrics to examine the correlation between RST dependencies and PDTB dependencies in English. The results show that the PDTB dependency data is valid and that there is a strong correlation between the two types of dependency distance. This study presents a comprehensive approach for analyzing and evaluating discourse corpora by employing discourse dependencies to achieve unified analysis. By applying dependency representations, we can extract data from PDTB, RST, and SDRT corpora in a coherent and unified manner. Moreover, the cross-linguistic validation establishes the framework's generalizability beyond English. The establishment of this comprehensive dependency framework overcomes limitations of existing discourse corpora, supporting a diverse range of algorithms and facilitating further studies in computational discourse analysis and language sciences. Keywords: dependency parsing, conversion, dependency distance, BERT-based parser, unified framework § INTRODUCTION Discourse or text generally has multiple clauses or sentences. The parts of discourse are interrelated and form a coherent whole that clearly expresses a meaning. Discourse in the wider sense underlies disciplines such as law, religion, politics, science amongst others. Discourse structure, like syntax, concerns the ways in which discourse units are brought together to form a coherent discourse. Discourse structure mostly concerns the logical and semantic interrelations of discourse units (or elementary discourse units, EDUs). Discourse relation is the semantic or logical meaning of the connections between discourse units. It is a central concern in discourse structure research. The structure of discourse has already been extensively investigated from theoretical, experimental and computational perspectives. The processing of discourse structure (e.g., discourse parsing) is of significant importance to linguistic research, particularly in the field of Natural Language Processing (NLP) <cit.>. Researchers have formulated numerous theories that interpret discourse structure, showcasing a wide range of perspectives on the interplay between discourse units and textual coherence. For example, RST (Rhetorical Structure Theory, <cit.>) is one influential approach in this field <cit.>. It describes textual coherence using the rhetorical relations between EDUs and postulates a hierarchical tree structure. By contrast, D-LTAG theory (a lexicalized Tree Adjoining Grammar for discourse) holds that discourse relations can be lexicalized <cit.>.
Based on the fact that discourse connectives signal discourse relations (e.g., “because, although, when”), this theory treats two discourse units as linked by a connective, which means that one discourse unit and the connective together constitute a dependency relation. In the PDTB, relations signaled by discourse connectives can thus be treated as a local constituency structure over discourse units. Another influential theory, SDRT (Segmented Discourse Representation Theory, <cit.>), combines dynamic semantics with a discourse structure defined via rhetorical relations between segments. These approaches to discourse structure thus reflect their different areas of focus. Not only have many different theories of discourse structure been developed, but discourse corpora have also been established based on these theories <cit.>. In light of their great influence, corpora in these annotation styles have been created for a number of languages. However, this creates a problem for researchers: the very fact that these corpora have been established on different theoretical bases makes it difficult to explore them in a consistent and unified way. Because of the theoretical distinctions that underlie these annotation schemes, it is difficult to explore different discourse corpora in a consistent fashion, even though it is possible to find some mappings between them <cit.> and even though some attempts have been made to annotate different styles of discourse structure on the same texts <cit.>. These attempts did not, however, discover a more general structure that can be used to represent discourse, which makes applying these approaches more generally difficult. In order to deal with these difficulties, we want to find a more general structure or framework for representing the available discourse corpora and so enable more algorithms to process the data. We also need a framework for describing and explaining discourse structure quantitatively. The establishment of a comprehensive framework or structure can allow a diverse range of well-developed algorithms to explore unified data, and this in turn can facilitate further studies in the field. Recent studies have shown that a rhetorical structure can be converted into dependency representations <cit.>. SDRT relations have likewise been studied with respect to their conversion into dependency relations <cit.>. We suppose that PDTB relations can also be converted into dependency representations. All this indicates that we can employ dependency representations to generalize discourse relations in different discourse corpora if we seek their largest common denominator, and apply these discourse theories more widely. The primary objective of the current study is to address the challenges posed by the existence of multiple discourse corpora based on different theoretical frameworks. Specifically, the present study aims to: 1) Convert PDTB Annotations into Dependency Structures: Develop a method to transform the PDTB annotations into dependency representations, thereby enabling a more unified approach to discourse analysis. 2) Validate Dependency Data: Employ updated BERT-based discourse parsers to test the validity of the dependency data derived from PDTB-style corpora across multiple languages. 3) Examine Correlations: Analyze the correlation between RST dependencies and PDTB dependencies using dependency distance metrics.
§ RELATED WORK Previous theories and corpora reflect the many and different perspectives on textual coherence. This section describes the key characteristics on the discourse corpora followed by RST, D-LTAG, and SDRT. RST is an influential approach in discourse structure studies that describes textual coherence using the rhetorical relations between EDUs and postulates a hierarchical tree structure. The RST is a constituency-based theory, which means that discourse units combine with discourse relations to form recursively larger units and these ultimately form a global document. RST corpora (such as the RST discourse treebank (RST-DT), <cit.>) help enormously in analyzing discourse structure quantitatively and in the automatic processing of texts. What is characteristic of RST annotations is that each discourse unit has to be related to the overall discourse structure and represented using a (rhetorical) tree structure. A second essential characteristic of the RST-DT is the assignment of nuclearity: EDUs are characterized as nuclei or satellites. The nucleus is the most central part of a relation in the text (the node where the arrow point is located is the nucleus in Fig.<ref>), while the satellites support the nucleus. A discourse relation can be either mono-nuclear (e.g.,4-5 in Fig.<ref>) or multi-nuclear (e.g.,7-8 in Fig.<ref>). All RST relation types can be classified into one of 16 classes. For instance, “cause”, “result” and “consequence” belong to the class of “CAUSE”. The other influential discourse corpus, the Penn Discourse Treebank (PDTB 3.0, <cit.>), annotates English discourse structures following the D-LTAG. But because the D-LTAG looks at each relation individually and disregards most of the surrounding structures, it does not take into account the global structure of the text. This is the approach used in shallow discourse parsing. The PDTB uses a discourse connective as a head for forming dependency concerning local discourse structure. The PDTB3.0 distinguishes 43 relation labels with annotations. These labels are organized in a hierarchy consisting of three levels (i.e., sense, class, type). To sum up, RST uses a rhetorical relation tree to describe global discourse structure, while D-LTAG (i.e., PDTB) uses shallow discourse parsing. Moreover, several discourse corpora have been established following the SDRT, such as the Parallel Meaning Bank <cit.>, and the STAC corpus <cit.>. As mentioned in introduction, it is possible to find some mappings between them. Further, attempts have been made to annotate different styles of discourse structure using the same texts <cit.>. The methods of mapping the discourse relations of different frameworks were presented by <cit.>. The main approach is to use the texts that were annotated under different frameworks. <cit.> focus on mapping between explicit PDTB discourse connectives and RST rhetorical relations by using the Potsdam Commentary Corpus <cit.>, which contains both RST and PDTB annotations in German. <cit.> found that 76% RST relations are mapped with PDTB ones in English, which is similar to the finding of <cit.> in German. <cit.> formulated a set of core relations that are shared by existing frameworks but are open and extensible in use, with the outcome being ISO-DR-Core <cit.>. However, these attempts did not find a more general (data) structure that can be used to represent different types of discourse structure. There is a lack of consensus in a number of studies with respect to how to represent discourse structure. 
Yet nonetheless, we have found that several types of discourse relations can be converted into discourse dependency. For instance, there has been some successes in converting RST relations into dependency relations, as detailed in numerous studies (; ; ). SDRT relations have also been converted into dependency (; ). Inspired by recent studies, we convert PDTB relations into dependency data. The discourse dependency structure can be taken as a common structure that the other structures can be converted into, given that PDTB can also be converted into dependency structure. In this way, we can use dependency as a unified framework for describing and explaining discourse structure formally, computationally and quantitatively. The dependency approach has been explored over a long period of time in linguistics, and dependency parsing has been widely used in computational linguistics (; ; ). Dependency-based annotation has been adopted to establish discourse corpora.(; ). § METHODS §.§ Dependency parsing §.§.§ PDTB converted into dependency representations A considerable amount of studies have successfully explored the conversion of RST- style and SDRT-style annotations into dependency representations. Currently two algorithms were used in this undertaking (; ). We focus on how to convert PDTB relations into dependency representations. Using a discourse connective, a dependency between two arguments (i.e., two discourse units or EDUs) in the PDTB can be treated as the semantic relation between two EDUs in practice because people tend to be concerned with the relation between two EDUs in the PDTB. <cit.> used this to convert Chinese PDTB relations into dependency tree. <cit.> did not make use of the PDTB third-level annotation information to help in distinguishing the head and dependent in dependency. However, converting PDTB relations into a global dependency tree actually violates the original purpose of shallow or local parsing in the PDTB. That is why we will not convert PDTB structure into a dependency tree. We adopt the dependency structure to preserve the original PDTB information and characteristics to the maximum extent possible. In most cases, two adjacent discourse units are connected by a simple semantic relation. However, PDTB can also form some complicated structures and these are very similar to the local (syntactic) constituency structure (; ). As is well known, an RST tree represents a global constituency structure. However, it is possible to convert an RST constituency tree into a dependency tree. Similarly, the local constituency structure of PDTB can also be transformed into a local dependency structure. Dependency grammar holds that the grammatical relation of binary relations comprise these dependency structures, and an individual dependency relation consist of a head and a dependent(or head vs. subordinate, head vs. governor) (; ). Distinguishing the head and dependent in a discourse relation is of crucial importance when dependency parsing is applied. Although the PDTB annotations do not provide us with explicit information on the head or dependent, we can still use the existing PDTB annotation information to distinguish which EDU is the head or dependent. The PDTB annotation system is a hierarchical and it has three-class annotations, that is, (see Fig. <ref> in the Appendix A.2). From the annotation information on type, we can obtain the knowledge that its corresponding annotations on sense and class are symmetric or asymmetric. 
For example, “” is an asymmetric discourse relation. The reason for this is that when the second argument (the second EDU) is a conditional clause, this indicates that the first discourse unit includes more important information, that is, the first discourse unit is the “head” and the second is “dependent(subordinate)”. Evidently, the dependency between a connective and its arguments differs fundamentally from the dependency relation between two arguments (i.e., two discourse units). The concept of “head” in dependency refers to a discourse unit with the more important information. In contrast, the “head” is the discourse connective in PDTB. Although the identical term is used, they are distinct in their meanings. In contrast, some relations are symmetric. For example, when a discourse relation is annotated with “similarity”, we suppose that two discourse units may be equally important. According to <cit.> and <cit.>, we believe that when the third-class tags with “” could provide the information on “asymmetric” EDUs. The remaining annotations are regarded as symmetric. We will augment our manuscript with additional sentences to highlight these distinctions. Among 22 classes in the PDTB 3.0., 10 classes annotations contain asymmetric information. Table <ref> is a summary of asymmetric tags used in PDTB3.0. Asymmetric annotations can provide us with adequate information on how to distinguish which discourse unit is the head or which one is dependent in a discourse relation. By contrast, the remaining 12 PDTB tag classes (we merely used 7 of 12 tags in dependency parsing) are taken as symmetric, that is, when two discourse units form a relation that belongs to these 12 classes, the two discourse units are treated as equally important, which is similar to mutiple-nuclear structure in the RST-DT. At this moment, we treat the second discourse unit as the “head” when the two discourse units form symmetric structure or multi-nuclear relation. Such annotations can be found in Chinese (TED-CDB, <cit.>) and TED Multilingual Discourse Bank (TED-MDB, translated parallel corpus among English, Polish, German, Russian, European Portuguese, and Turkish) <cit.>. Using the two strategies, we can successfully convert PDTB discourse relations into dependency representations. The following section discusses the conversion procedure in greater detail. §.§.§ Example We used a typical PDTB example to illustrate how to convert PDTB annotations in local dependency representations. The example, includes complicated structures and can represent the majority cases in the PDTB (also see <cit.>). To save the space, we put this complicated example and its annotations to the Appendix (A.1). PDTB3.0 annotations contain a large amount of information on asymmetric structure as shown in Table <ref>. For instance, in the example of textttWSJ_0618, node 9 (i.e., the 9^th discourse unit) and node 10 (i.e., the 10^th discourse unit) form condition relation with “Arg2-as-cond”. This suggests that node 10 is a conditional clause and the node 9 is head. Further, the node 8 forms a concession relation (“Arg2-as-denier”) with the integrated unit construed by node 9 and node 10. This means that node 8 is head and the constituency unit construed by node 9 and node 10 is the subordinate. However, node 9 is the head for the combination of node 9 and node 10. This way, we can see that node 8 is the head and its subordinate is the node 9. We visualized all of these, shown in Fig. <ref>. 
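Before continuing with the rest of the example, the head-assignment decision just applied can be written down as a small lookup (a sketch in Python: the tag-to-head mapping is filled in only for the asymmetric subtypes that are spelled out in the running text, the complete inventory being given in Table 1):

```python
# Head/dependent assignment for one PDTB relation, following the two strategies above.
SYMMETRIC_SENSES = {"synchronous", "asynchronous", "cause", "contrast",
                    "similarity", "conjunction", "disjunction"}

# Asymmetric third-level tags -> which argument is the head, as stated in the text.
ASYMMETRIC_HEAD = {
    "Arg2-as-cond":   "arg1",  # Arg2 is a conditional clause, so Arg1 is the head
    "Arg2-as-denier": "arg1",  # concession: Arg1 is the head
    "Arg2-as-excpt":  "arg1",  # exception: Arg1 is the head
    "Arg2-as-goal":   "arg2",  # purpose: Arg2 is the head (as with nodes 16-17 below)
}

def head_and_dependent(arg1_id, arg2_id, sense, subtype=None):
    """Return (head, dependent) unit ids for a relation between two discourse units."""
    if subtype in ASYMMETRIC_HEAD:
        head = arg1_id if ASYMMETRIC_HEAD[subtype] == "arg1" else arg2_id
    elif sense.lower() in SYMMETRIC_SENSES:
        head = arg2_id  # symmetric: second unit is the head, like multi-nuclear RST relations
    else:
        head = arg2_id  # the remaining tag classes are also treated as symmetric in this study
    dependent = arg1_id if head == arg2_id else arg2_id
    return head, dependent

# Nodes 9 and 10 of WSJ_0618: condition annotated "Arg2-as-cond" -> node 9 is the head.
print(head_and_dependent(9, 10, "condition", "Arg2-as-cond"))  # (9, 10)
```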
Still using the example of , node 16 and node 17 forms puprose relation (“Arg2-as-goal”), and node 17 is the head in this relationship. Note node 15 forms a cause (or expansion) relation with the combination of node 16 and node 17. As we know, “cause” is a symmetric type, and we therefore adopt the rule that the second node is the “head” when the nodes form a symmetric structure. Node 17 is the head in the constituent consisting of node 16 and node 17. This way, node 17 is the head and node 15 is dependent. More complicatedly, node 14 forms the exception relation. “Arg2-as-excpt” with the macro-structure composed by node 15, node 16, and node 17. In Table <ref>, “Arg1” is the head but “arg2” is dependent, node 14 is the head, and node 17 is dependent. The conversion result is shown in Table <ref>. [The conversion code for depedency from PDTB is available at: <https://github.com/fivehills/discourse-corpora-resources>] Ultimately, using the discourse distance equation, we can obtain the local discourse distance for this text: (1+1+1+1+1+1+1+1+3+2+1)/11=1.27. We will discuss the calculation of local discourse distance in the following section. §.§ Discourse distance & the variation of dependency distance We can analyze discourse dependency data by applying dependency grammar algorithms. In a dependency relation, the linear distance between a head and dependent can potentially be utilized to provide a measure for assessing the depth in sentence processing (; ; ). This means that dependency parsing and dependency distance algorithms are helpful in quantitatively investigating the connection between discourse relations and discourse units. There is a linear distance that runs between any given “head” node and any given “dependent” node. We can obtain PDTB’s “discourse distance” from each linear depedency distance in a text. The calculation of “discourse distance” in what follows uses the dependency distance algorithm. <cit.> used the term dependency distance, and calculated the mean dependency distance (MDD) of a sentence or a text, using the following one formula: MMD(RST text)=1/n-1∑_i=1^n|DD_i| In equation (1), n is the number of discourse units in the text and DDi is the dependency distance of the i-th dependency link of the text. By contrast, PDTB local dependency does not contain the root, meaning that subtracting 1, is necessary, as shown in equation (2). With regard to calculating PDTB dependency distance, the number of discourse units is based on the actual number of participants as heads or dependents rather than the number of all discourse units in a text. For instance, in terms of PDTB annotations, includes 11 discourse units which are assigned with heads or dependents. The number of discourse units involving discourse dependency is 11. Computing PDTB dependency distance is shown in equation (2). MMD(PDTB text)=1/real n∑_i=1^n|DD_i| The “discourse distance” of global dependencies in Table <ref> (the top panel) is “3.1”. The “discourse distance” of local dependencies in Table <ref> (the bottom panel) is “1.13”. Moreover, we introduce standard variation (SD) of the data on dependency distances for a text because SD can tell how dispersed the dependency distances are. A small or low SD would indicate that many of the dependency distances are clustered tightly around the mean and tend to be processed easily by humans. 
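The two discourse-distance equations, and the standard deviation introduced just above, amount to only a few lines of code (a sketch: the distance of a link is taken to be the absolute difference between the linear positions of the head and the dependent):

```python
from statistics import mean, stdev

def link_distances(links):
    """links: (head_position, dependent_position) pairs for the dependencies of one text."""
    return [abs(h - d) for h, d in links]

def mdd_global(links, n_units):
    """Equation (1): RST-style global MDD, dividing by n - 1 (the ROOT link is not counted)."""
    return sum(link_distances(links)) / (n_units - 1)

def mdd_local(links):
    """Equation (2): PDTB-style local MDD, dividing by the number of discourse units that
    actually participate in some dependency (there is no ROOT in the local data)."""
    participating = {unit for link in links for unit in link}
    return sum(link_distances(links)) / len(participating)

# The eleven link distances of WSJ_0618 listed above (eleven participating units):
wsj_0618_distances = [1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 1]
print(round(mean(wsj_0618_distances), 2))  # 1.27, the local discourse distance quoted above
# stdev() over the same list gives the dispersion measure (SD) discussed next.
```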
Using SD in statistics, we can calculate the SD of dependency distances for the top panel in Table <ref>, which is “2.28” and the SD of dependency distances at the bottom panel in Table <ref> is “0.35”. Another example (Fig. <ref>) illustrates how annotations for the same text by RST and PDTB can be converted into global dependency and local dependency respectively. <cit.> and <cit.> both developed algorithms for converting RST discourse representations into dependency structures. Their discourse dependency framework is adopted from a syntactic dependency with words replaced by EDUs. Fig.<ref> (Panel A) illustrates how the RST tree of Fig.<ref> is converted into a dependency tree by adopting the method of <cit.>. In Fig.<ref> (Panel A), binary discourse relations are represented by dominant EDU (“head/governor”) to subordinate EDU (“dependent”). Table <ref> (Top Panel) shows that the dependency representations from Fig.<ref> (Panel A) can be treated as network data. Dependencies reflect the global and rhetorical relations in RST. Although a dependency that is based on a discourse connective as head in PDTB is different to the RST relation dependency, such a dependency in PDTB can be treated as the relation dependency between two EDUs in practice because people tend to be concerned with the relation between two EDUs in the PDTB. Panel B in Fig. <ref> presents the PDTB annotation for Fig. <ref>. We can convert Panel B into Panel C, as shown in Fig.<ref>. In Panel C of Fig. <ref>, the EDUs in PDTB are connected with each other, and they can be treated as (local) dependencies. Following Equation (2), the “discourse distance” of global dependencies in Table <ref> (the top panel) is (2+1+1+1+3+4+5+6+7+1)/(11-1) = “3.1”. The data on the dependency containing “0” in the nucleus (i.e., a ROOT relation) is not computed in our algorithm. The “discourse distance” of local dependencies in Table <ref> (the bottom panel) is (2+1+1+1+1+1+1+1)/(8) = “1.13” (the real number of discourse units involving in dependency is 8). When there are a number of texts, we can obtain the average discourse distance for these texts through calculating the mean of all values of discourse distance in these texts. We converted all WSJ texts with both RST and PDTB annotations (375 texts) into global dependencies (from RST) and local dependencies (from PDTB) respectively by using the aforementioned methods. The following section will use two types of dependency data to carry out experiments in order to verify their validity. § EXPERIMENTS To expand on the evaluation and validation of the converted dependency data, we employed several modified SOTA discourse parsers to test the validity of the dependency representations derived from PDTB-style corpora. The experimental setup involved using datasets from English, Chinese, and several other languages for training and testing. We applied specific pre-processing steps to prepare the data and tasked the discourse parsers with relation identification and argument extraction. The evaluation metrics included F1 score, precision and recall, exact match accuracy, and labeled attachment score (LAS) for dependency parsing. Our cross-linguistic validation extended beyond English and Chinese, addressing any challenges or adaptations required for different languages and providing comparative results across them. The following details how we implemented these experiments and validated the converted data in two sections. 
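As one concrete illustration of the last of these metrics, the (un)labeled attachment scores over converted discourse dependencies can be computed roughly as follows (a sketch: the gold and predicted values are invented for illustration, and the exact scoring scripts used with the parsers are not reproduced here):

```python
def attachment_scores(gold, predicted):
    """gold / predicted: dicts mapping a dependent unit id to (head id, relation label).
    Returns (UAS, LAS) computed over the units present in the gold annotation."""
    uas_hits = las_hits = 0
    for dep, (g_head, g_rel) in gold.items():
        p_head, p_rel = predicted.get(dep, (None, None))
        if p_head == g_head:
            uas_hits += 1
            if p_rel == g_rel:
                las_hits += 1
    return uas_hits / len(gold), las_hits / len(gold)

# Toy example in the spirit of WSJ_0618 (unit 10 depends on 9, unit 9 depends on 8):
gold = {10: (9, "condition"), 9: (8, "concession")}
pred = {10: (9, "condition"), 9: (11, "concession")}
print(attachment_scores(gold, pred))  # (0.5, 0.5)
```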
§.§ Parser evaluations on PDTB dependency representations across languages The methods adopted were tested experimentally by using the local discourse dependency data converted from the PDTB. In order to make a comparison with the RST-DT texts, we experimented on the 375 WSJ texts with both RST and PDTB annotations. The 375 WSJ texts with PDTB annotations were converted into local dependency representations. The training part of the corpus is composed of 303 texts, while the test part consists of 36 texts and 36 texts were respectively taken as the development part and test part. Meanwhile, similar methods were applied in the different PDTB- style corpora of other languages (Chinese, TED-CDB, <cit.>; TED-MDB, <cit.>), making PDTB relations into become local discourse dependencies. However, these corpora have not corresponding RST ones. Two discourse dependency parsers were refined to parse the PDTB dependency data in the present study. <cit.> modified two state-of-art discourse dependency parsers. The first parser is Arc-Factored Model which combines the BERT-based biaffine attention model <cit.> and Hierarchical Eisner Algorithm <cit.> (abbreived as “NISHIDA22-ARC-MOD”). The second parser, Stack-pointer discourse dependency parser, uses a BERT-based pointer network (“NISHIDA22-STP-MOD”). After modifications and refinements, the two discourse parsers focus solely on parsing discourse dependency relations, but without considering the dependency tree task. For instance, unlike traditional dependency or RST parsing tasks, we need not to build an RST tree, which makes our parsing tasks easier. The number of PDTB dependency relations tags is 19 (i.e., 18 second-level tags + EntRel). 7 tags are not required to recognize the difference between head or dependent (i.e., “synchronous”, “asynchronous”, “cause”, “contrast”, “similarity”, “conjunction”, “disjunction”), but the remaining 10 are required (shown in Table <ref>). We modified the two parsers to parse the PDTB dependency data to account for this. The following details some hyper-parameters we applied in using the refined dependency parsers. When parsing English dependency data, we employed specific hyperparameters as follows: the dimensionality of MLPs within the (ARC) and the shift-reduce model (STR) were set at 120 and 150, respectively. To optimize the model, we used AdamW and Adam optimizers. These optimizers were directed towards refining the transformer's parameters (θ_bert) and the task-specific parameters (θ_task), respectively, aligning with the methodology of SpanBERT <cit.>. In the initial phase, we trained the foundational models on the labeled source dataset with the following hyperparameters: batch size of 1, a learning rate (lr) of 2 × 10^-5 for θ_bert, lr of 1 × 10^-4 for θ_task, and 3,000 warmup steps. Subsequently, we implemented a singular bootstrapping approach, specifically co-training, but without utilizing other bootstrapping methods. During this process, the model's hyperparameters were configured as follows: batch size of 1, lr of 2 × 10^-6 for θ_bert, lr of 1 × 10^-5 for θ_task, and 6,000 warmup steps. The training duration for all methodologies extended to a maximum of 30 epochs. We also integrated an early stopping mechanism, suspending training when validation LAS exhibited no improvement for a consecutive span of 10 epochs. We adopted a similar strategy for parsing the Chinese dependency data. 
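Before turning to the Chinese configuration, the English recipe just described can be summarized schematically (a sketch: the hyperparameter values are copied from the text, while run_epoch and validate_las are placeholders for the parser-specific training and validation routines):

```python
# Schematic of the English fine-tuning setup; the NISHIDA22-* parser internals are omitted.
CONFIG_EN = {
    "mlp_dim": {"ARC": 120, "STR": 150},
    "optimizer": {"theta_bert": "AdamW", "theta_task": "Adam"},
    "initial_phase": {"batch_size": 1, "lr_bert": 2e-5, "lr_task": 1e-4, "warmup_steps": 3000},
    "co_training":   {"batch_size": 1, "lr_bert": 2e-6, "lr_task": 1e-5, "warmup_steps": 6000},
    "max_epochs": 30,
    "patience": 10,  # epochs without validation-LAS improvement before stopping
}

def train_with_early_stopping(run_epoch, validate_las, cfg=CONFIG_EN):
    """run_epoch() performs one training epoch; validate_las() returns the validation LAS."""
    best_las, best_epoch = float("-inf"), -1
    for epoch in range(cfg["max_epochs"]):
        run_epoch()
        las = validate_las()
        if las > best_las:
            best_las, best_epoch = las, epoch
        elif epoch - best_epoch >= cfg["patience"]:
            break  # no improvement for ten consecutive epochs
    return best_las
```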
The dimensionality of MLPs within the arc-factored model (ARC) and the shift-reduce model (STR) remained consistent, being set at 80 and 95, respectively. The optimization approach employed AdamW and Adam optimizers. These were employed to refine the parameters of the Chinese BERT model (θ_Chinese-BERT) and the task-specific parameters (θ_task). Specifically, we configured the hyperparameters as follows: batch size of 1, a learning rate (lr) of 2 × 10^-4 for θ_Chinese-BERT, lr of 1 × 10^-3 for θ_task, and 2000 warmup steps. Similar to the English case, a single bootstrapping approach (co-training) was applied, excluding alternative methods. During this phase, the model's hyperparameters were set as follows: batch size of 1, lr of 2 × 10^-5 for θ_Chinese-BERT, lr of 1 × 10^-3 for θ_task, and 5,000 warmup steps. Training was conducted for a maximum of 30 epochs and early stopping was enacted when validation LAS did not exhibit any improvement for a consecutive span of 10 epochs. Moreover, our strategies involved the utilization of “multilingual BERT” <cit.> in relation to the data dependencies of the MDB in six languages. Due to the limited scale of the dataset in each language, we made the necessary adjustments to the hyperparameters accordingly. Note that the final results for all six languages were derived from the mean of their respective outcomes. After using the refined parsers on the converted data, we report the result using the , as shown in Table <ref>. This result shows that two SOTA refined dependency discourse parsers are able to analyze the relations in PDTB dependency data. The same two parsers have performed a little better in analyzing PDTB dependency relations as compared with the performance on RST data. The reason for this is that PDTB dependency data do not require forming a tree structure and they just need to construe local dependencies, thus making the parsing tasks become simpler. Considering the stable performance by the two parsers, we have evidence supporting these thesis that PDTB dependency data can be automatically analyzed and that this will be useful in computational analysis. §.§ The correlation between mean/SD discourse distance of RST and PDTB At present, only English texts have both RST and PDTB3.0 annotations, which means correlation analyses are restricted to this language alone. We used the same 375 texts with both RST and PDTB annotations (English) to extract their global dependency and the local dependency respectively. Note that we applied the algorithms of <cit.> and <cit.> to extract RST dependencies respectively. After that, we applied the discourse distance algorithms to calculate the indexes of discourse distance for the global dependency (two types, <cit.> and <cit.>) and local dependency for each text. After obtaining the discourse distance for the global dependency and local dependency of each text, we used Pearson's correlation to test the relationship between discourse distance of global dependency and that of local dependency for these 375 texts. The result shows that the correlation between mean discourse distance of global dependency by <cit.> and that of local dependency reaches 82.69%(p-value < 1.26e-12, df = 374), and the correlation between discourse distance of global dependency by <cit.> and that of local dependency reaches 79.23%(p-value < 1.26e-12, df = 374). The correlation of SD of discourse distance and global dependency is 81.26% (p-value < 1.57e-10, d f= 374). 
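The correlation analysis itself is a standard Pearson test over the 375 per-text values; a minimal sketch (with placeholder arrays standing in for the real per-text discourse distances) is:

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder arrays stand in for the per-text discourse distances of the 375 WSJ texts.
rng = np.random.default_rng(0)
mdd_rst_global = rng.normal(3.0, 0.5, size=375)                          # global MDD per text
mdd_pdtb_local = 0.4 * mdd_rst_global + rng.normal(0.2, 0.1, size=375)   # local MDD per text

r, p = pearsonr(mdd_rst_global, mdd_pdtb_local)
print(f"Pearson r = {r:.4f}, p = {p:.3g}")  # reported: r ≈ 0.83 and 0.79 for the two RST conversions
```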
These correlation values show that two types of discourse distance are closely corelated. This result is basically consistent with the finding from <cit.> that there is 76% mapping between RST and PDTB relations. Further, the correlation result indicates that the PDTB dependency data is valid and can be used for different types of computations. It also suggests that both types of discourse distance can be used for measuring textual complexity and quantifying other linguistic explorations (; ). The quantitative results from the parsing experiments are comprehensive, presenting performance scores for different relation types and comparing parser performance on original PDTB annotations versus converted dependency structures. We conducted statistical significance tests to validate any observed improvements. An in-depth revealed common challenges encountered during the parsing experiments, providing insights into areas where the conversion process might need refinement. We also compared the results of parsing the converted dependency data to a baseline of parsing the original PDTB annotations, demonstrating the value of the conversion process. Additionally, we discussed observations regarding the scalability and computational efficiency of parsing the converted dependency structures compared to the original annotations. These detailed evaluations and validations strengthen our claims about the validity and usefulness of the converted dependency data for discourse analysis across different frameworks and languages. § DISCUSSION As discussed by <cit.> and <cit.>, some information is lost in dependency conversions for RST discourse trees. However, the most important information can be retained. PDTB-dependency conversion seems to lose much less than the conversion between RST and dependency. The primary loss is the non-inclusion of information on discourse connectives. The other potential loss is that after some local constituency structure becomes dependencies, the inner connection is lost, as was discussed in the RST constituency conversion. However, PDTB-dependency conversion also has great benefits. For example, the implicit information on the head and dependent become more explicit in PDTB dependency representation. PDTB dependency representations enable additional computational possibilities. The present study reports the success of conversion for dependency from PDTB and this supports the validity of the PDTB dependency data. We claim that the dependency format can be successfully derived from RST, SRDT and PDTB corpora. However, we must be aware of the fact that the three types of dependency representations are still different. RST dependencies represent global connections and form a global dependency tree. SRDT dependencies can still basically represent global coherence, but they cannot form a dependency tree. PDTB dependencies just represent local coherence. Despite this, the three types of dependencies are closely related with each other, and can complement each other. For instance, global (RST) discourse dependencies and local (PDTB) discourse dependencies could work together to better characterize the discourse structure. The core of (UD) lies in its utilization of syntactic dependency relations, while also incorporating a substantial amount of morphological and syntactic information. This contrasts with traditional dependency grammar and dependency corpora (; ). We expect that discourse dependencies in mutiple languages could play a similar role as UD. 
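One way to let these three kinds of dependencies coexist in a single resource is a flat record format along the following lines (a hypothetical sketch: neither the field names nor the scope labels are prescribed by any of the corpora discussed above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscourseDependency:
    doc_id: str
    dependent: int           # linear index of the dependent discourse unit
    head: Optional[int]      # None for the ROOT of a global (RST-style) tree
    relation: str            # framework-specific relation label
    framework: str           # "RST", "SDRT", or "PDTB"
    scope: str               # "global-tree", "global-graph", or "local"

# Two local (PDTB) rows and one global (RST) row for the same, invented document:
rows = [
    DiscourseDependency("doc01", 10, 9, "condition", "PDTB", "local"),
    DiscourseDependency("doc01", 9, 8, "concession", "PDTB", "local"),
    DiscourseDependency("doc01", 2, None, "ROOT", "RST", "global-tree"),
]
```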
When these discourse corpora annotations are converted into dependency representations the possible benefits are unlimited (see the Appendix A.4). The use of dependency structures offers several key advantages for unifying different discourse frameworks and enabling more consistent analysis across corpora. Dependency representations provide a common format that can be applied across various discourse theories while preserving the original information from each framework. They offer flexibility in handling both simple and complex discourse relations, enabling more consistent quantitative analysis through metrics like "dependency distance." Computationally, dependency parsing leverages existing algorithms and tools from syntactic analysis. The approach has demonstrated cross-linguistic applicability, facilitating consistent discourse analysis across multiple languages. By converting different frameworks into a common dependency representation, researchers can more easily compare annotations from different theoretical perspectives, potentially bridging gaps between theories. Additionally, unified dependency representations can provide larger, more consistent datasets for training machine learning models in discourse parsing and analysis. Moreover, this dependency-based discourse structure framework can serve as a form of prompting for SOTA Large Language Models (LLMs). By incorporating this framework, LLMs can enhance their comprehension of discourse and textual structures, potentially leading to improved performance across various NLP tasks. Overall, this approach allows researchers to overcome challenges posed by multiple discourse theories and annotation schemes, opening up new possibilities for computational discourse analysis and cross-framework studies. While our study demonstrates the potential of the dependency framework, there are limitations that warrant further investigation. The conversion process may not capture all nuances of the original annotations, particularly for complex discourse phenomena, and future work should focus on refining the conversion algorithms to address these limitations. Additionally, the current study focused primarily on PDTB and RST, and extending the framework to incorporate other discourse theories and annotation schemes would further validate its universality. More extensive cross-linguistic studies are also needed to fully explore the framework's applicability across a wider range of languages and discourse types. Future research directions could include developing new discourse parsing algorithms that directly employ the unified dependency representation, investigating the relationship between discourse dependency structures and other linguistic phenomena such as coreference or lexical cohesion, and exploring the application of the framework to discourse-level tasks in NLP, such as text summarization or coherence evaluation. § CONCLUSION This study introduces a groundbreaking dependency framework that unifies discourse data analysis across diverse theoretical approaches. Our key findings demonstrate the successful conversion of PDTB annotations into dependency structures, preserving original information while enabling more unified analysis. The validity of this converted data has been confirmed through extensive testing using state-of-the-art discourse parsers across multiple languages. Notably, we discovered a strong correlation between RST and PDTB dependencies, suggesting underlying structural similarities between these frameworks. 
Furthermore, the cross-linguistic applicability of our dependency approach has been validated across English, Chinese, and several other languages, underscoring its versatility and robustness. The significance of this work for discourse analysis is substantial, providing a unified framework that overcomes previous limitations in comparing and integrating diverse discourse corpora. This dependency representation enables more consistent quantitative analysis of discourse relations, opening new avenues for computational discourse studies. By bridging different discourse frameworks, our approach facilitates a more comprehensive understanding of discourse structure, potentially leading to new theoretical insights. Additionally, the unified dependency format creates larger, more consistent datasets for training machine learning models, potentially advancing automated discourse parsing and analysis. The cross-linguistic validity of this framework enhances its potential for comparative discourse studies across languages, laying the groundwork for future advancements in both theoretical and computational approaches to discourse analysis. In short, this novel dependency framework represents a significant step forward, offering a powerful tool for researchers to explore discourse structure more comprehensively and consistently across different theoretical perspectives and languages, promising to deepen our understanding of how language creates meaning at the discourse level. apalike § APPENDIX §.§ PDTB Example and annotations The following is the original text of . Note that the ordinal numbers are added following the PDTB annotations. The head of the nation's largest car-dealers group is telling dealers to "just say no"(1) when auto makers pressure them to stockpile cars on their lots(2). In an open letter that will run today in the trade journal Automotive News, Ron Tonkin, president of the National Car Dealers Association, says dealers should cut their inventories to no more than half the level traditionally considered desirable. Mr. Tonkin, who has been feuding with the Big Three (3) since he took office earlier this year(4), said that with half of the nation's dealers losing money (5) or breaking event (6) it was time for "emergency action."(7) U.S. car dealers had an average of 59 days' supply of cars in their lots at the end of September, according to Ward's Automotive Reports (8). But Mr. Tonkin said dealers should slash stocks to between 15 and 30 days (9) to reduce the costs of financing inventory(10). His message is getting a chilly reception in Detroit, where the Big Three auto makers are already being forced to close plants because of soft sales and reduced dealer orders (11). Even before Mr. Tonkin's broadside, some large dealers said they were cutting inventories.(12) Ford Motor Co. and Chrysler Corp. representatives criticized Mr. Tonkin's plan as unworkable (13). It "is going to sound neat to the dealer (14) except when his 15-day car supply doesn't include the bright red one (15) that the lady wants to buy (16) and she goes up the street to buy one,"(17) a Chrysler spokesman said. The following are the PDTB annotations for . §.§ The PDTB annotation system §.§ RST, PDTB, SRDT and discourse dependency corpora in different languages (Tables 5-8) [More information on these discourse corpora can be seen at <https://github.com/fivehills/discourse-corpora-resources/blob/main/discourse.md>] §.§ Applications of discourse dependencies The following discusses the benefits of this approach. 
1) A unified method can be applied to extract discourse corpora, which is the objective of the current study. 2) Converting data into dependency format allows for the adoption of diverse dependency analysis algorithms, enabling a deeper exploration of important issues such as textual complexity, linear features in discourse structure, and language efficiency. Dependency data overcome the limitations set by the original corpora structures and offer more extensive algorithmic support, fostering connections with various fields in linguistics and computational linguistics. 3) Discourse dependency representations derived from these discourse corpora can be seen as network data. By providing a visual representation of the network data, we can better observe the global/local topological connections of discourse units, as shown in the sketch below. 4) The dependency algorithms make it possible to carry out typological language investigations at the textual level (see the discourse corpora resources in different languages shown in Tables 5, 6, 7 & 8 in the Appendix A.3). The role of discourse dependency is expected to be similar to that of syntactic dependencies, which allow exploration using a wide range of algorithms. Additionally, multilingual discourse dependency data could play a role similar to that of UD for linguistic research and computational linguistics, and could contribute to language typology studies and discourse semantic parsing, much as UD has done (; ; ; ). 5) Studies of syntactic dependency have achieved great successes in theoretical linguistics, computational linguistics, cross-lingual studies, and cognitive studies (; ; ). Dependencies at the textual level can draw on, adapt, and stand in contrast to this research on syntactic dependencies. In this way, research on discourse dependency can unify different levels of linguistics and different areas of the language sciences, and it has the potential to make further contributions to the language sciences.
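To illustrate point 3), a text's discourse dependencies can be loaded into a directed graph and inspected with standard network measures (a sketch: the edge list is invented, and networkx is only one of several graph libraries that could be used):

```python
import networkx as nx

# (dependent, head, relation) triples for one text; indices and labels are illustrative only.
edges = [
    (1, 2, "condition"),
    (3, 2, "elaboration"),
    (4, 3, "cause"),
    (5, 4, "contrast"),
]

g = nx.DiGraph()
for dependent, head, relation in edges:
    g.add_edge(dependent, head, relation=relation)  # edges point from dependent to head

print("discourse units:", g.number_of_nodes())
print("dependencies:", g.number_of_edges())
print("graph density:", nx.density(g))
# Heads with the most dependents (largest in-degree, since edges point dependent -> head):
print(sorted(g.in_degree(), key=lambda pair: -pair[1])[:3])
```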
http://arxiv.org/abs/2407.12164v1
20240716204025
Subject-driven Text-to-Image Generation via Preference-based Reinforcement Learning
[ "Yanting Miao", "William Loh", "Suraj Kothawade", "Pascal Poupart", "Abdullah Rashwan", "Yeqing Li" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Neural Passage Quality Estimation for Static Pruning Sean MacAvaney July 16, 2024 ==================================================== § ABSTRACT Text-to-image generative models have recently attracted considerable interest, enabling the synthesis of high-quality images from textual prompts. However, these models often lack the capability to generate specific subjects from given reference images or to synthesize novel renditions under varying conditions. Methods like DreamBooth and Subject-driven Text-to-Image (SuTI) have made significant progress in this area. Yet, both approaches primarily focus on enhancing similarity to reference images and require expensive setups, often overlooking the need for efficient training and avoiding overfitting to the reference images. In this work, we present the λ-Harmonic reward function, which provides a reliable reward signal and enables early stopping for faster training and effective regularization. By combining the Bradley-Terry preference model, the λ-Harmonic reward function also provides preference labels for subject-driven generation tasks. We propose Reward Preference Optimization (RPO), which offers a simpler setup (requiring only 3% of the negative samples used by DreamBooth) and fewer gradient steps for fine-tuning. Unlike most existing methods, our approach does not require training a text encoder or optimizing text embeddings and achieves text-image alignment by fine-tuning only the U-Net component. Empirically, λ-Harmonic proves to be a reliable approach for model selection in subject-driven generation tasks. Based on preference labels and early stopping validation from the λ-Harmonic reward function, our algorithm achieves a state-of-the-art CLIP-I score of 0.833 and a CLIP-T score of 0.314 on DreamBench. § INTRODUCTION In the evolving field of generative AI, text-to-image diffusion models <cit.> have demonstrated remarkable abilities in rendering scenes that are both imaginative and contextually appropriate. However, these models often struggle with tasks that require the portrayal of specific subjects within text prompts. For instance, if provided with a photo of your cat, current diffusion models are unable to generate an image of your cat situated in the castle of your childhood dreams. This challenge necessitates a deep understanding of subject identity. Consequently, subject-driven text-to-image generation has attracted considerable interest within the community. Chen et al. <cit.> have noted that this task requires complex transformations of reference images. Additionally, Ruiz et al. <cit.> have highlighted that detailed and descriptive prompts about specific objects can lead to varied appearances in subjects. Thus, traditional image editing approaches and existing text-to-image models are ill-suited for subject-driven tasks. Current subject-driven text-to-image generation methods are less expressive and expensive. Textual Inversion <cit.> performs poorly due to the limited expressiveness of frozen diffusion models. Imagic <cit.> is both time-consuming and resource-intensive during the fine-tuning phase. It requires text-embedding optimization for each prompt, fine-tuning of diffusion models, and interpolation between optimized and target prompts. The training process is complex and slow. These text-based methods require 30 to 70 minutes to fine-tune their models, which is not scalable for real applications. SuTI <cit.> proposes an in-context learning method for subject-driven tasks. 
However, SuTI demands half a million expert models for each different subject, making it prohibitively expensive. Although SuTI can perform in-context learning during inference, the setup of expert models remains costly. DreamBooth <cit.> provides a simpler method for handling subject-driven tasks. Nevertheless, DreamBooth requires approximately 1000 negative samples and 1000 gradient steps, and also needs fine-tuning of the text encoder to achieve state-of-the-art performance. Therefore, it is worthwhile to explore more efficient training methods: the setup should be as simple as possible, the efficient training should not include multiple optimization phases; second, by learning text-to-image alignment using only the UNet component without text-embedding for each prompt or text-encoder optimization; third, by providing a model-selection approach that enables early stopping for faster evaluation and regularization. In this paper, we propose a λ-Harmonic reward function that enables early stopping and accelerate training. In addition, we incorporate the Bradley-Terry preference model to generate preference labels and utilize preference-based reinforcement learning algorithms to finetune pre-trained diffusion models and achieve text-to-image alignment without optimizing any text-encoder or text-embedding. The whole finetuning process including setup, training, validation, and model saving only takes 5 to 20 minutes on Cloud TPU V4. Our method, Reward Preference Optimization (RPO), only requires a few input reference images and the finetuned diffusion model can generate images that preserve the identity of a specific subject and align well with textual prompts (Figure <ref>). To show the effectiveness of our λ-Harmonic reward function, we evaluate RPO on diverse subjects and text prompts on DreamBench <cit.> and we report the DINO and CLIP-I/CLIP-T of RPO's generated images on this benchmark and compare them with existing methods. Surprisingly, our method requires a simple setup (3% of DreamBooth configuration) and with a small number of gradient steps, but the experimental results outperform or match SOTA. In summary, our contributions are as follows: * We introduce the λ-Harmonic reward function, which permits early-stopping to alleviate overfitting in subject-driven generation tasks and to accelerate the finetuning process. * By combining the λ-Harmonic reward function and a preference model, we present RPO, which only require a cheap setup but still can provide high quality results. * We evaluate RPO and show the effectiveness of the λ-Harmonic function with diverse subjects and various prompts on DreamBench. We achieve results comparable to SOTA. § RELATED WORKS Ruiz et al. <cit.> formulated a class of problems called subject-driven generation, which refers to preserving the appearance of a subject contextualized in different settings. DreamBooth <cit.> solves the issue of preserving the subject by binding it in textual space with a unique identifier for the subject in the reference images, and simultaneously generating diverse backgrounds by leveraging prior class-specific information previously learned. A related work that could possibly perform the same task is textual inversion <cit.>. However, its original objective is to produce a modification of the subject or property marked by a unique token in the text. While it can be used to preserve the subject and change the background or setting, the performance is underwhelming compared to DreamBooth in various metrics <cit.>. 
The prevalent issue in DreamBooth and textual inversion is the training times <cit.> since gradient-based optimization has to be performed on their respective models for each subject. Subject-driven text-to-image generator (SuTI) by <cit.> aims to alleviate this issue by employing apprenticeship learning. By scraping millions of images online, many expert models are trained for each cluster which allows the apprentice to learn quickly from the experts <cit.>. However, this is an incredibly intensive task with massive computational overhead during training time. In the field of natural language processing, direct preference optimization has found great success in large language models (LLM) <cit.>. By bypassing reinforcement learning from human feedback and directly maximizing likelihoods using preference data, LLMs benefit from more stable training and reduced dependency on an external reward model. Subsequently, this inspired Diffusion-DPO by <cit.> which applies a similar technique onto the domain of diffusion models. However, this relies on a preference labelled dataset, which can be expensive to collect or not publicly available for legal reasons. Fortunately, there are reward models that can serve as functional substitutes such as CLIP <cit.> and ALIGN <cit.>. ALIGN has a dual encoder architecture that was trained on a large dataset. The encoders can produce text and image embeddings, which allows us to obtain pairwise similarity scores by computing cosine similarity. There are also diffusion modeleling techniques that can leverage reward models. An example is denoising diffusion policy optimization (DDPO) by Black et al. <cit.> that uses a policy gradient reinforcement learning method to encourage generations that leads to higher rewards. § PRELIMINARY In this section, we introduce notations and some key concepts about text-to-image diffusion models and reinforcement learning. Text-to-Image Diffusion Models. Diffusion models <cit.> are a family of latent variable models of the form (_0) = ∫_(_0:T) d_1:T, where the _1, …, _T are noised latent variables of the same dimensionality as the input data _0 ∼ q(_0). The diffusion or forward process is often a Markov chain that gradually adds Gaussian noise to the input data and each intermediate sample _t can be written as _t = √(α_t)_0 + √(1 - α_t)_t, for all t ∈{1, …, T}, where α_t refers to the variance schedule and _t ∼(0, ). Given a conditioning tensor , the core premise of text-to-image diffusion models is to use a neural network _(_t, , t) that iteratively refines the current noised sample _t to obtain the previous step sample _t - 1, This network can be trained by optimizing a simple denoising objective function, which is the time coefficient weighted mean squared error: __0, , t, _t[ω(t) ‖_(_t, , t) - _t ‖^2_2], where t is uniformly sampled from {1, …, T} and ω(t) can be simplified as 1 according to <cit.>. Reinforcement Learning and Diffusion DPO Reinforcement Learning for diffusion models <cit.> aims to solve the following optimization problem: __0:T∼(_0:T|)[∑_t=1^T R(_t, _t-1, ) - β_KL((_t - 1|_t, ) (_t - 1|_t, ))], where β is a hyperparameter controlling the KL-divergence between the finetune model and the pre-trained base model . 
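Concretely, the forward-process sample and the simplified denoising objective above (with ω(t) = 1) can be sketched in a few lines of PyTorch-style code; eps_model is a placeholder for the conditional U-Net ε_θ rather than the paper's actual implementation:

```python
import torch

def simple_denoising_loss(eps_model, x0, cond, alpha_bar):
    """x0: clean latents of shape (B, ...); alpha_bar: length-T tensor holding the
    variance-schedule coefficients used in the forward process above."""
    B, T = x0.shape[0], alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)        # t ~ Uniform{1, ..., T}
    a = alpha_bar[t].view(B, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps           # forward-process sample
    return ((eps_model(x_t, cond, t) - eps) ** 2).mean()   # omega(t) = 1
```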
In Equation (14) from Diffusion-DPO <cit.>, the optimal can be approximated by minimizing the negative log-likelihood: _^+_0, ^-_0, t, ^+_t, ^-_t [-logσ(β(‖_base(^+_t, , t) - ^+_t ‖^2_2 - ‖_(^+_t, , t) - ^+ ‖^2_2 - (‖_base(^-_t, , t) - ^-_t ‖^2_2 - ‖_(^-_t, , t) - ^- ‖^2_2) ))], where {^+_t}_t=0^T represents the preference trajectory, i.e., r(^+_0, ) > r(^-_0, ), and ^+ and ^- are independent samples from a Gaussian distribution. A detailed description is given in Appendix <ref>. Additional notations. We use and to represent the reference image and generated image, respectively. denotes the set of reference images, and is the set of generated images. (≻) represents the probability that is more preferred than . § METHOD We present our λ-Harmonic reward function that provides reward signals for subject-driven tasks to reduce the learned model to be overfitted to the reference images. Based on this reward function, we use Bradley-Terry model to sample preference labels and a preference algorithm to finetune the diffusion model by optimizing both the similarity loss and the preference loss. §.§ Reward Preference Optimization In contrast to other fine-tuning applications <cit.>, there is no human feedback in the subject-driven text-to-image generation task. The model only receives a few reference images and a prompt with a specific subject. Hence, we first propose the λ-Harmonic reward function that can leverage the ALIGN model <cit.> to provide valid feedback based on the generated image fidelity: similarity to the given reference images and faithfulness to the text prompts. λ-Harmonic Reward Function. The normalized ALIGN-I and ALIGN-T scores can be denoted as ALIGN-I(, ) 1/||∑_∈CosSim(f_(), f_()) + 1/2 ALIGN-T(, ) CosSim(f_(), g_()) + 1/2, where f_() is the image feature extractor and g_() is the text encoder in the ALIGN model. Given a reference image set , the λ-Harmonic reward function can be defined by a weighted harmonic mean of the ALIGN-I and ALIGN-T scores, r(, ; λ, ) 1λALIGN-I(, ) + 1 - λALIGN-T(, ). Compared to the arithmetic mean, there are two advantages to using the harmonic mean: (1) according to AM-GM-HM inequalities <cit.>, the harmonic mean is a lower bound of the arithmetic mean and maximizing this “pessimistic” reward can also improve the arithmetic mean of ALIGN-I and ALIGN-T scores; (2) the harmonic mean is more sensitive to the smaller of the two scores, i.e., a larger reward is only achieved when both scores are relatively large. For a simple example, consider λ = 0.5. If there are two images, and , where the first image achieves an ALIGN-I score of 0.9 and an ALIGN-T score of 0.01, and the second image receives an ALIGN-I score of 0.7 and an ALIGN-T score of 0.21, we may prefer the second image because it has high similarity to the reference images and is faithful to the text prompts. However, using the arithmetic mean would assign both images the same reward of 0.455. In contrast, the harmonic mean would assign the first image a reward of 0.020 and the second image a reward of 0.323, aligning with our preferences. During training, we set λ_train = 0, which means the reward model will focus solely on text-to-image alignment because the objective function consists only of a loss for image-to-image alignment. Note that we set λ_val to a different value for validation, which evaluates the fidelity of the subject and faithfulness of the prompt. Details can be found in Section <ref>. Dataset. 
The set of images for subject-driven generative tasks can usually be represented as = ∪, where is the image set generated by the base model. DreamBooth <cit.> requires two different prompts, and , for each subject during the training phase. For example, is and is , where is a unique token to specify the subject. DreamBooth then uses to generate for prior preservation loss. Typically, the size of the set of generated images is around 1000, i.e., || = 1000, which is time-consuming and space-intensive in real applications. However, the diffusion model can only maximize the similarity score and still receives a high reward based on this uninformative prompt . Our method aims to balance the trade-off between similarity and faithfulness. Thus, for efficiency, we introduce 8 novel training prompts, , which can be pre-specified or generated by other Large Language Models[SuTI <cit.> utilizes PaLM <cit.> to generate unseen prompts during training]. These training prompts include artistic style transfer, re-contextualization, and accessorization. The full list of training prompts is provided in the supplementary material. For example, can be . We feed these training prompts to the base model and generate 4 images for each training prompt, i.e., || = 32. Once we obtain reward signals, we adopt the Bradley-Terry model <cit.> to generate preference labels. In particular, given a tuple (, , ), we sample preference labels y from the following probability model: (≻) exp(r(, ; λ, ))/exp(r(, ; λ, )) + exp(r(, ; λ, )). Learning. The learning objective function consist of two parts — similarity loss and preference loss. The similarity loss is designed to minimize the KL divergence between the distribution of reference images and the learned distribution (), which is equivalent to minimizing: _sim() _, , t, _ref[‖_(_ref, t, , t) - _ref‖^2_2], t ∼{1, …, T}, _ref∼(0, ). The preference loss aims to capture the preference signals and fit the preference model, Eq. (<ref>). Therefore, we use the binary cross-entropy as the objective function for preference loss, combining DPO objective function Eq. <ref> the loss function can be written as the following function: _pre() _, , , y, t, , [ y logσ(βℓ_(, , , y, t, , )) + (1 - y) logσ(- βℓ_(, , , y, t, , ) )], where ℓ_(, , , y, t, , ) ‖(, , t) - ‖^2_2 - ‖_(, , t) - ‖^2_2 -(‖(, , t) - ‖^2_2 - ‖_(, , t) - ‖^2_2), and t ∼{1, …, T} and , ∼(0, ). Combining these two loss functions together, the objective function for finetuning is written as () = _sim() + _pre() Figure <ref> presents an overview of the training method, which includes the base model generated samples, the align reward model, and the preference loss. Note that _pre serves as a regularizer for approximating the text-to-image alignment policy. Conversely, DreamBooth <cit.> adopts _KL(p_base(_t - 1|_t, ) | (_t - 1|_t, )) as its regularizer, which cannot guarantee faithfulness to the text-prompt. Based on this loss function and preference model, we only need a few hundred iterations and a small set size of || to achieve results that are comparable to, or even better than, the state of the art. The fine-tuning process, which includes generating images, training, and validation, takes about 5 to 20 minutes on a single Google Cloud Platform TPUv4-8 (32GB) for Stable Diffusion. § EXPERIMENTS In this section, we present the experimental results demonstrated by RPO. We investigate three primary questions. 
First, can our algorithm learn to generate images that are faithful both to the reference images and to the textual prompts, according to preference labels? Second, if RPO can generate high-quality images, which part is the key component of RPO: the reference loss or the early stopping by the λ-Harmonic reward function? Third, how do different λ_val values affect validation and cause performance differences for RPO? We refer readers to Appendix <ref> for details on the experimental setup, Appendix <ref> for the skill set of RPO, Appendix <ref> for the limitations of the RPO algorithm, and Appendix <ref> for future work involving RPO. §.§ Dataset and Evaluation DreamBench. In this work, we use the DreamBench dataset proposed by DreamBooth <cit.>. This dataset contains 30 different subject images including backpacks, sneakers, boots, cats, dogs, and toy, etc. DreamBench also provides 25 various prompt templates for each subject and these prompts are requiring the learned models to have such abilities: re-contextualization, accessorization, property modification, and attribute editing. Evaluation Metrics. We follow DreamBooth <cit.> and SuTI <cit.> to report DINO <cit.> and CLIP-I <cit.> for evaluating image-to-image similarity score and CLIP-T <cit.> for evaluating the text-to-image similarity score. We also use our λ-Harmonic reward as a evaluation metric for the overall fidelity and the default value of λ = 0.3. For evaluation, we follow DreamBooth <cit.> and SuTI <cit.> to generate 4 images per prompt, 3000 images in total, which provides robust evaluation. Baseline algorithms. DreamBooth <cit.>: This algorithm requires approximately || = 1000 and 1000 gradient steps to finetune the UNet and text-encoder components. SuTI <cit.>: A pre-trained method that requires half a million expert models and introduces cross-attention layers into the original diffusion models. Textual Inversion <cit.>: A text-based method that optimizes the text embedding but freezes the diffusion models. Re-Imagen <cit.>: An information retrieval-based algorithm that modifies the backbone network architectures and introduces cross-attention layers into the original diffusion models. §.§ Comparisons Quantitative Comparisons. We begin by addressing the first question. We use a quantitative evaluation to compare RPO with other existing methods on three metrics (DINO, CLIP-I, CLIP-T) in DreamBench to validate the effectiveness of RPO. The experimental results on DreamBench is shown in Table <ref>. We observe that RPO can perform better or on par with SuTI and DreamBooth on all three metrics. Compared to DreamBooth, RPO only requires 3% of the negative samples, but RPO can outperform DreamBooth on the CLIP-I and CLIP-T scores by 3% given the same backbone. Our method outperforms all baseline algorithms in the CLIP-T score, establishing a new SOTA result. This demonstrates that RPO, by solely optimizing UNet through preference labels from the λ-Harmonic reward function, can generate images that are faithful to the input prompts. Similarly, our CLIP-I score is also the highest, which indicates that RPO can generate images that preserve the subject's visual features. In terms of the DINO score, our method is almost the same as DreamBooth when using the same backbone. 
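For reference, the λ-Harmonic reward used both for the training-time preference labels and for the overall-fidelity metric above can be sketched together with the Bradley-Terry preference probability; the sketch assumes the normalized ALIGN-I and ALIGN-T scores have already been computed and reproduces the worked example with λ = 0.5:

```python
import math

def normalized(cosine_similarity):
    """Map a cosine similarity in [-1, 1] onto the (0, 1] range used by ALIGN-I / ALIGN-T."""
    return (cosine_similarity + 1.0) / 2.0

def lambda_harmonic(align_i, align_t, lam):
    """Weighted harmonic mean of the image-alignment and text-alignment scores."""
    return 1.0 / (lam / align_i + (1.0 - lam) / align_t)

def prefer_first(r1, r2):
    """Bradley-Terry probability that the first image is preferred over the second."""
    return math.exp(r1) / (math.exp(r1) + math.exp(r2))

# Worked example from the method section, lambda = 0.5:
r1 = lambda_harmonic(0.90, 0.01, 0.5)  # high similarity, poor prompt faithfulness
r2 = lambda_harmonic(0.70, 0.21, 0.5)  # reasonable on both criteria
print(round(r1, 3), round(r2, 3))      # 0.02 0.323 (the arithmetic mean would give 0.455 for both)
print(round(prefer_first(r2, r1), 3))  # ~0.575: the balanced image is more likely to be preferred
```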
We conjecture that the reason RPO achieves higher CLIP scores and lower DINO score is that the λ-Harmonic reward function prefers to select images that are semantically similar to the textual prompt, which may result in the loss of some unique features in the pixel space. Qualitative Comparisons. We use the same prompt as SuTI <cit.>, and the generated images are shown in Figure <ref>. RPO generates images that are faithful to both reference images and textual prompts. We noticed a grammatical mistake in the first prompt used by SuTI <cit.>; it should be . This incorrect prompt caused the RPO-trained model to become confused during the inference phase. However, RPO still preserves the unique appearance of the bowl. For instance, while the text on the bowl is incorrect or blurred in the SuTI and DreamBooth results, RPO accurately retains the words Bon Appetit from the reference bowl images. We also observed that RPO is the only method that generates thin-neck vases. Although existing methods can produce images highly faithful to the reference images, they may not align as well with the textual prompts. We also provide an example in Appendix <ref> that shows how RPO can handle the failure case observed in DreamBooth and SuTI. §.§ Ablation Study and Method Analysis Preference Loss and λ-Harmonic Ablation. We investigate the second primary question through an ablation study. Two regularization components are introduced into RPO: reference loss as a regularizer and early stopping by λ_val-Harmonic reward function. Consequently, we compare four methods: (1) Pure _sim, which only minimizes the image-to-image similarity loss _sim; (2) _ref w/o early stopping, which employs _ref as a regularizer but omits early stopping by λ_val-Harmonic reward function; (3) Early stopping w/o _ref, which uses λ_val-Harmonic reward function as a regularization method but excludes _ref; (4) Full RPO, which utilizes both _ref and early stopping by the λ_val-Harmonic reward function. We choose the default value λ_val = 0.3 in this ablation study. Table <ref> lists the evaluation results of these four methods on DreamBench. We observe that without early stopping, _ref can still prevent overfitting to the reference images and improve text-to-image alignment, though the regularization effect is weak. Specifically, the 0.3-Harmonic only shows a marginal improvement of 0.003 over pure _sim and 0.001 over early stopping without _ref. The early stopping facilitated by the λ_val-Harmonic reward function plays a crucial role in counteracting overfitting, helping the diffusion models retain the ability to generate high-quality images aligned with textual prompts. To provide a deeper understanding of the λ-Harmonic reward validation, we present two examples from during training in Figure <ref>, covering both objects and live subjects. We found that the model tends to overfit at a very early stage, i.e., within 200 gradient steps, where λ-Harmonic can provide correct reward signals for the generated images. For the backpack subject, the generated image receives a low reward at gradient step 80 due to its lack of fidelity to the reference images. However, at gradient step 400, the image is overfitted to the reference images, and the model fails to align well with the input text, resulting in another low reward. λ-Harmonic assigns a high reward to images that are faithful to both the reference image and textual prompts. Impact of λ_val. 
We examine the third primary question by selecting different λ_val values from the set {0.3, 0.5, 0.7} as the validation parameters for the λ-Harmonic reward. According to Equation <ref>, we believe that as λ_val increases, the λ-Harmonic reward function will give higher weight to the image-to-image similarity score. This will make the generated images more closely resemble the reference images, however, there is also a risk of overfitting. Table  <ref> shows us the results of three different λ_val values on DreamBench. As we expected, a larger λ_val makes the images better preserve the characteristics of the reference images, but it also reduces the text-to-image alignment score. Figure <ref> shows us an example. In this example, different λ_val values lead to different outcomes due to varying strengths of regularization. A smaller λ_val = 0.3 can generate more varied results, but seems somewhat off from the reference images. λ_val = 0.5 preserves some characteristics beyond the original subject, such as the sofa, but also maintains alignment between text and image. However, when λ_val = 0.7 is chosen as an excessively large value, the model actually overfits to the reference images, ignoring the prompts. We have more comparisons in Appendix <ref>. § CONCLUSION We introduce the λ-Harmonic reward function to derive preference labels and employ RPO to finetune the diffusion model for subject-driven text-to-image generation tasks. Additionally, the λ-Harmonic reward function serves as a validation method, enabling early stopping to mitigate overfitting to reference images and speeding up the finetuning process. § ACKNOWLEDGEMENT We thank Shixin Luo and Hongliang Fei for providing constructive feedback. This work is supported by a Google grant and with Cloud TPUs from Google's TPU Research Cloud (TRC). plainnat § APPENDIX §.§ Background Reinforcement Learning. In Reinforcement Learning (RL), the environment can be formalized as a Markov Decision Process (MDP). An MDP is defined by a tuple (, , P, R, ρ_0, T), where is the state space, is the action space, P is the transition function, R is the reward function, ρ_0 is the distribution over initial states, and T is the time horizon. At each timestep t, the agent observes a state _t and selects an action _t according to a policy π(_t | _t), and obtains a reward R(_t, _t), and then transit to a next state _t + 1∼ P(_t + 1 | _t, _t). As the agent interacts with the MDP, it produces a sequence of states and actions, which is denoted as a trajectory = (_0, _0, _1, _1, …, _T - 1, _T - 1). The RL objective is to maximize the expected value of cumulative reward over the trajectories sample from its policy: _∼ p^π()[∑_t=0^T - 1 R(_t, _t)] Diffusion MDP We formalize the denoising process as the following Diffusion MDP: _T - t = (_t, ), _T - t = _t - 1, π_(_T - t|_T - t) = (_t - 1|_t, ), ρ_0 = p() ×(0, ), R(_T - t, _T - t) = R(_t, _t-1, )= r(_0, ) if t = 1, 0 otherwise where r(_0, ) can be a reward signal for the denoised image. The transition kernel is deterministic, i.e., P(_T - t + 1|_T - t, _T - t) = (_T - t, ) = (_t - 1, ). For brevity, the trajectory is defined by (_0, _1, …, _T). Hence, the trajectory distribution for given diffusion models can be denoted as the joint distribution (_0:T|). 
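To make the bookkeeping of this diffusion MDP explicit, the following minimal sketch rolls out one trajectory with the sparse terminal reward defined above. The denoiser and the reward function are placeholder stand-ins for the actual diffusion model and image/text reward signal, so the code only illustrates the formulation and is not the implementation used in this work.

import numpy as np

def rollout_diffusion_mdp(denoise_step, reward_fn, prompt, T=50, dim=16, seed=0):
    """Roll out the diffusion MDP described above: states are (x_t, prompt),
    actions are the denoised samples x_{t-1}, and the reward is sparse --
    zero everywhere except at the final denoising step, where it equals
    r(x_0, prompt)."""
    rng = np.random.default_rng(seed)
    x_t = rng.standard_normal(dim)               # x_T ~ N(0, I), the initial state
    trajectory, rewards = [x_t.copy()], []
    for t in range(T, 0, -1):
        x_prev = denoise_step(x_t, prompt, t)    # action a = x_{t-1} ~ pi(.|x_t, c)
        r = reward_fn(x_prev, prompt) if t == 1 else 0.0   # sparse terminal reward
        trajectory.append(x_prev.copy())
        rewards.append(r)
        x_t = x_prev
    return trajectory, rewards

# Placeholder denoiser and reward, standing in for the actual diffusion model
# and the image/text reward signal (both are assumptions for illustration only).
dummy_denoise = lambda x, c, t: 0.9 * x
dummy_reward = lambda x0, c: -float(np.linalg.norm(x0))

traj, rews = rollout_diffusion_mdp(dummy_denoise, dummy_reward, prompt="a photo of sks dog")
print(len(traj), sum(rews))

Because only the last transition carries a nonzero reward, the cumulative return of a trajectory reduces to the reward of the fully denoised sample, which is exactly what the finetuning objective below maximizes.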
In particular, the RL objective function for finetuning diffusion models can be re-written as the following optimization problem: __0:T∼(_0:T|)[∑_t=1^T R(_t, _t-1, ) - β_KL((_t - 1|_t, ) (_t - 1|_t, ))], where β is a hyperparameter controlling the KL-divergence between the finetune model and the base model . This constraint prevents the learned model from losing generation diversity and falling into 'mode collapse' due to a single high cumulative reward result. In practice, this KL-divergence has become a standard constraint in large language model finetuning <cit.>. Given preference labels and following the Direct Preference Optimization (DPO) framework <cit.>, we can approximate the optimal policy p^* by minimizing the upper bound: _^+_0, ^-_0, t, ^+_t, ^-_t [-logσ(β(‖_base(^+_t, , t) - ^+_t ‖^2_2 - ‖_(^+_t, , t) - ^+ ‖^2_2 - (‖_base(^-_t, , t) - ^-_t ‖^2_2 - ‖_(^-_t, , t) - ^- ‖^2_2) ))], where {^+_t}_t=0^T represents the preference trajectory, i.e., r(^+_0, ) > r(^-_0, ), and ^+ and ^- are independent samples from Gaussian distribution. The detailed derivation can be found in <cit.>. §.§ Experimental Details Training Prompts. We collect 8 training prompts: 6 re-contextualization, 1 property modification and 1 artistic style transfer for objects. 5 re-contextualization, 1 attribute editing, 1 artistic style transfer and 1 accessorization for live subjects. The trainig prompts are shown in Figure <ref> Experimental Setup During training we use λ_train = 0 for the λ-Harmonic reward function to generate the preference labels. We evaluate the model performance by λ_val-Harmonic per 40 gradient steps during training time and save the checkpoint that achieve the highest validation reward. Table <ref> lists the common hyperparameters used in the generating skill set and the λ_val used in the default setting. §.§ Additional Comparisons We observe RPO that can be faithful to both reference images and the input prompt. To investigate whether RPO can provide better quality than DreamBooth and SuTI, we follow SuTI paper and pick the robot toy as an example to demonstrate the effectiveness of RPO (Figure <ref>). In this example, DreamBooth is faithful to the reference image but it does not provide a good text-to-image alignment. SuTI provides an result that is fidelty to textual prompt. However, SuTI lacks fidelity to the reference image, i.e., the robot should stand with its wheels instead of legs. <cit.> use DreamBooth to finetune SuTI (Dream-SuTI) further to solve this failure case. Instead, RPO can generate an image not only faithful to the reference images but also align well with the input prompts. We have also added more samples for comparison of different λ_val values (see Figure <ref>). We find that λ_val = 0.5 encourages the learned model to retain output diversity while still aligning with textual prompts. However, the generated images invariably contain a sofa, which is unrelated to the subject images. This occurs because every training image includes a sofa. A large λ_val weakens the regularization strength and leads to overfitting. Nevertheless, a small value of λ_val can potentially eliminate background bias. We highlight that this small λ_val not only encourages diversity but also mitigates background bias in identity preservation and enables the model to focus on the subject. §.§ Limitations Figure <ref> illustrates some failure examples of RPO. The first issue is context-appearance entanglement. 
In Figure <ref>, the learned model correctly understands the keyword blue house; however, the appearance of the subject is also altered by this context, e.g., the color of the backpack has changed and a house pattern appears on the backpack. The second issue is incorrect contextual integration. We conjecture that this failure is due to the rarity of the textual prompt. For instance, imagining a cross between a chow chow and a tiger is challenging, even for humans. Third, although RPO provides regularization, it still cannot guarantee the avoidance of overfitting. As shown in Figure <ref>, this may be because the visual appearance of sand and bed sheets is, to some extent, similar, which leads to overfitting in the model. §.§ Future Work The overfitting failure case suggests a direction for future work: can online RL improve regularization and avoid overfitting? A second direction involves implementing a LoRA version of RPO and comparing it to LoRA DreamBooth. Last but not least, we aim to identify or construct open-source, subject-driven datasets for comparison. Currently, DreamBench is the only open-source dataset available to us for evaluating model performance. A larger dataset that includes more diverse subjects should be created to verify the effectiveness of different algorithms. §.§ Broader Impacts Generative image models are inherently subject to criticism on issues of privacy, security, and ethics in the presence of nefarious actors. However, the core of this paper remains a purely academic effort to extend the boundaries of generative models. The societal consequences of democratizing such powerful generative models are discussed more thoroughly in other papers; for example, Bird et al. <cit.> outline the classes of risks in text-to-image models in greater detail, and we refer readers to such work. Nevertheless, we play our part in managing such risks by avoiding the use of identifiable parts of humans in the reference sets. §.§ Skill Set The skill set of the RPO-trained model is varied and includes re-contextualization (Figure <ref>), artistic style generation (Figure <ref>), expression modification (Figure <ref>), subject accessorization (Figure <ref>), color editing (Figure <ref>), multi-view rendering (Figure <ref>), and novel hybrid synthesis (Figure <ref>).
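As a self-contained illustration of the Bradley-Terry preference labeling used in the method section, the sketch below samples binary labels from pairs of reward scores. The λ-Harmonic scores themselves are treated as a given black-box scoring function, and all numeric values and names are illustrative placeholders rather than the actual training pipeline.

import numpy as np

def bradley_terry_label(r_i, r_j, rng):
    """Sample a binary preference label for a pair of images with reward scores
    r_i and r_j, following the Bradley-Terry model used in the method section:
    P(x_i preferred over x_j) = exp(r_i) / (exp(r_i) + exp(r_j))."""
    p_i = 1.0 / (1.0 + np.exp(r_j - r_i))    # numerically stable logistic form
    return int(rng.random() < p_i)           # label 1 means x_i is preferred

def label_pairs(rewards_ref, rewards_gen, seed=0):
    """Build preference labels from two lists of reward scores, e.g. lambda-Harmonic
    scores of reference images and of base-model generations (the scoring function
    itself is assumed to be available and is not re-implemented here)."""
    rng = np.random.default_rng(seed)
    labels = []
    for r_ref in rewards_ref:
        for r_gen in rewards_gen:
            labels.append(bradley_terry_label(r_ref, r_gen, rng))
    return labels

# Toy usage with made-up reward scores (placeholders, not real model outputs).
print(label_pairs([2.1, 1.8], [0.5, 0.9, 0.4]))

The sampled labels are what the preference loss of the learning objective is fit against; everything else in the pipeline (image generation, reward scoring, finetuning) is unchanged.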
http://arxiv.org/abs/2407.12741v1
20240717170020
Comparing Federated Stochastic Gradient Descent and Federated Averaging for Predicting Hospital Length of Stay
[ "Mehmet Yigit Balik" ]
cs.LG
[ "cs.LG" ]
Mehmet Yigit Balik, Aalto University, Espoo, Finland
http://arxiv.org/abs/2407.13668v1
20240718164351
Carrollian Partition Functions and the Flat Limit of AdS
[ "Per Kraus", "Richard M. Myers" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2407.13526v1
20240718135910
Discussion: Effective and Interpretable Outcome Prediction by Training Sparse Mixtures of Linear Experts
[ "Francesco Folino", "Luigi Pontieri", "Pietro Sabatino" ]
cs.LG
[ "cs.LG" ]
2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Proceedings of the 1st International Workshop on Explainable Knowledge Aware Process Intelligence, June 20–22, 2024, Roccella Jonica, Italy [1] [1] This paper summarizes results presented at workshop ML4PM 2023, associated with conference ICPM 2023, October 23-27, 2023, Rome, Italy, and published in <cit.>. 1]Francesco Folino[ email=francesco.folino@icar.cnr.it ] 1]Luigi Pontieri[ email=luigi.pontieri@icar.cnr.it ] [1] 1]Pietro Sabatino[ email=pietro.sabatino@icar.cnr.it ] [1]Institute for High Performance Computing and Networking (ICAR-CNR), via P. Bucci 8/9C, 87036 Rende (CS), Italy [1]Corresponding author. § ABSTRACT Process Outcome Prediction entails predicting a discrete property of an unfinished process instance from its partial trace. High-capacity outcome predictors discovered with ensemble and deep learning methods have been shown to achieve top accuracy performances, but they suffer from a lack of transparency. Aligning with recent efforts to learn inherently interpretable outcome predictors, we propose to train a sparse Mixture-of-Experts where both the “gate” and “expert” sub-nets are Logistic Regressors. This ensemble-like model is trained end-to-end while automatically selecting a subset of input features in each sub-net, as an alternative to the common approach of performing a global feature selection step prior to model training. Test results on benchmark logs confirmed the validity and efficacy of this approach. Process Mining Machine Learning XAI Discussion: Effective and Interpretable Outcome Prediction by Training Sparse Mixtures of Linear Experts [ July 22, 2024 ======================================================================================================== § INTRODUCTION (Process) Outcome Prediction problem <cit.> refers to the problem of predicting the outcome of an unfinished process instance, based on its associated prefix trace (i.e., the partial sequence of events available for the instance). Recently, different supervised learning approaches to this problem were proposed, which allow for discovering an outcome prediction model from labeled traces. Outstanding performances in terms of prediction accuracy have been achieved by big ensembles of decision rules/trees discovered with random forest or gradient boosting algorithms <cit.>, and Deep Neural Networks (DNNs) <cit.>. However, the approximation power of these models comes at the cost of an opaque decision logic, which makes them unfit for settings where explainable predictions and interpretable predictors are required. The call for transparent outcome prediction first originated several proposals relying model-agnostic post-hoc explanation methods <cit.> or explanation-friendly DNN-oriented solutions <cit.>. Due to widespread concerns on the reliability of attention and post-hoc attribution-based explanations  <cit.>, the discovery of inherently-interpretable outcome predictors <cit.> was proposed of late. 
Two alternative kinds of interpretable models, both leveraging Logistic Regression (LR) models as a building block, were exploited in <cit.> to discover outcome predictors from flattened log traces: (i) the Logit Leaf Model (LLM), a sort of decision tree where each leaf hosts an LR sub-model; and (ii) the Generalized Logistic Rule Model (GLRM), where a single LR model is built upon the original features and novel features, defined as conjunctive rules over subsets of the former and derived via column generation. These kinds of models were both shown to improve plain LR predictors by capturing some non-linear input-output dependencies. An approach leveraging a neural implementation of fuzzy logic, named FOX, was proposed in <cit.> to extract easy-to-interpret IF-THEN outcome-prediction rules, each of which contains a fuzzy set for each (flattened) trace feature and a membership score for each outcome class. Unfortunately, as observed in <cit.> and discussed in Section <ref>, these LLM-based and GLRM-based methods may return cumbersome models/rules that hardly enable a clear and complete understanding of the predictor's behavior. On the other hand, while the global feature selection and (3-way) feature binning of FOX <cit.> help control the size of the prediction rules, they risk causing some information/accuracy loss. We next describe an approach to outcome prediction that builds an ensemble of LR models by training a Mixture of Experts (MoE) neural net. This net consists of multiple “experts” (specialized outcome predictors) and a sparse “gate” module devoted to routing each data instance to one of the experts. For the sake of interpretability, both the gate and the experts take the form of LR classifiers (i.e., one-layer neural nets with linear activations). Differently from <cit.>, the user is allowed to control the complexity of the model by fixing the maximum number kTop of features that the gate and each expert sub-net can use and the maximum number m of experts. Instead of preliminarily performing a global feature selection step as in <cit.> and <cit.>, the neural net is first trained using all the data features, and then the less important learned parameters are pruned out in a “feature-based” way. This allows different experts to use different subsets of the input features when making their predictions. § PROPOSED OUTCOME-PREDICTION MODEL AND TRAINING ALGORITHM A Mixture of Experts (MoE) is a neural net that implements both the gate and the local predictors (“experts”) through the composition of smaller interconnected sub-nets. In particular, in classical (dense) MoEs <cit.>, once provided with an input data vector x, the gate is a feed-forward sub-net that computes a vector of (softmax-ed) weights, one for each expert, estimating how competent they are in making a prediction for x. The overall prediction M(x) is obtained by linearly combining all the experts' predictions for x according to the competency weights returned by the gate. In Sparse MoEs <cit.>, this formulation is adapted to activate only a given number k of experts, selected as those assigned the top-k competency weights for x by the gate. Model The outcome-prediction neural-network model proposed in our approach, named , can be regarded as a tuple = _g,_1,…,_m where _E is the sub-net consisting of all its experts _1, …, _m, and _g is the gate sub-net.
This model as a whole encodes a function f: ℝ^d → [0,1] defined as follows: f(x) ≜_k(x) such that k = max_k ∈{1,…,m}_g(x)[k] and _g(x)[k] is the k-th component of the probability vector returned by the gate sub-net _g when applied to x. Thus, for any novel input instance x, a decision mechanism is applied to the output of the gate, which transforms it into an “argmax”-like weight vector where all the entries are zeroed but the one corresponding to the expert which received the the highest competency score (which is turned into 1); this makes the gate implement a hard-selection mechanism. For the sake of interpretability, the following design choices are taken: (i) each expert _i, for i ∈ [1..m] is implemented as one-layer feed-forward nets with linear activation functions, followed by a standard sigmoid transformation: (ii) the gate _g is implemented as a one-layer feed-forward network with linear activation functions followed by a softmax normalization layer. Training algorithm: The proposed training algorithm, named , takes two auxiliary arguments as input: the desired number m of expert sub-nets and the maximal number kTop ∈ℕ∪{} of input features per gate/expert sub-net (where means that no actual upper bound is fixed for the latter number). The algorithm performs four main steps: (1) A instance M is created, according to the chosen number m of experts, and initialized randomly. (2) M is trained end-to-end using an batch-based SGD-like optimization procedure (using different learning rates for the gate and the experts and a variant of the loss function proposed in <cit.>, favouring expert specialization and competency weights' skeweness). (3) M is optimized again with the same procedure but keeping the expert sub-nets frozen, to fine-tune the gate one only. (4) Feature-wise parameter pruning is performed on both the gate and experts to make all these LR-like sub-nets base their predictions on kTop data features at most. The loss function utilized in the training algorithm combines an accuracy term alike the one proposed in <cit.> (favoring expert specialization) with a regularization term summing up the absolute values of all the model parameters. The influence of this regularization term can be controlled via weighting factor λ_R. The last step of the algorithm consists in applying an ad hoc, magnitude-based, structured parameter pruning procedure to both the gate _g and all experts _1, …, _m. In this procedure, each parameter block gathers the weights of the connections reachable from a distinct input neuron, and all the parameters that do not belong to any of kTop blocks are eventually zeroed. This corresponds to making all the sub-nets _g, _1, …, _m to only rely on kTop input features. § EXPERIMENTS Algorithm was tested against dataset extracted from the benchmark log BPIC 2011 and Sepsis, obtained by making each prefix trace undergo the aggregation encoding after extending them all with timestamp-derived temporal features (e.g., weekday, hour, etc.). Each dataset was partitioned into training, validation, and test sets by using the same 80%-20%-20% temporal split as in <cit.>. The accuracy of each discovered model was evaluated by computing the AUC score for all test prefixes containing at least two events, as done in <cit.>. Algorithm was using 100 epochs in both the end-to-end training and in the gate fine-tuning steps, without performing any post-pruning training (for the sake of efficiency). 
The number m of experts was fixed to 6 empirically (after trying several values in [2, …, 16]), since this choice ensured a good accuracy-vs-simplicity trade-off. Different configurations were tested instead for hyperparameter (from 0.1 to 0.6) and kTop (from 2 to 8). Details on the setting of other parameters (e.g., batch sizes, learning rates can be found in <cit.>. As terms of comparison, we considered state-of-the-art outcome-prediction methods FOX <cit.> and GLRM <cit.>, and a baseline method, denoted as 1-LR, that discovers a single LR model —the latter was simulated by running Algorithm with m=1 and kTop=ALL. Prediction accuracy results Table <ref> reports the AUC scores obtained by the 6-expert models. Notably, often outperforms the baseline 1-LR in different kTop≠ configurations over some datasets, namely , , . For the remaining datasets, there is always at least one kTop configuration where performs better than the baseline, when combined with feature selection. In particular, on average, achieves an AUC improvement of more than 20% over 1-LR, with peaks reaching beyond 80%. This confirms that training multiple local LR outcome predictors usually improves the performance of training a single LR model on all the data features (as done by 1-LR). In addition, always surpasses state-of-the-art methods FOX and GLRM on all the datasets but and , where some of them perform as well as . Generally, seems to perform worse when using very few features than when trained with a slightly larger feature set (namely, kTop=6,8). However, on dataset , manages to achieve outstanding AUC scores even when using just two (resp. four) features. The compelling AUC results obtained by the proposed approach using less than 9 input features per sub-model provides some empirical evidence of its ability to support the discovery of more compact outcome predictors and easier-to-interpret prediction explanations compared to the state-of-the-art method FOX, as discussed below. Model/explanation complexity Generally, the lower the description complexity of a prediction model, the easier to interpret it and its predictions. In the cases of and baseline 1-LR this complexity can be computed by counting the non-zero parameters appearing in the respective LR (sub-)models, while the complexity of FOX model is the number of conditions appearing in its fuzzy rules. When applied to the pre-filtered datasets , , …, , , …, (containing 4, 7, 6, 2, 5, 4 and 6 data features, respectively), FOX finds a model consisting of 81, 2187, 729, 9, 243, 81 and 729 fuzzy rules <cit.>, respectively. Thus, the complexity of these FOX models range from 18 to 15309. Qualitative results: an example of discovered Figure <ref> shows the input features and associated weights that are employed by the six LR experts discovered when running algorithm with kTop=4 on dataset , for which the outcome-prediction task is meant to estimate the probability that an in-treatment patient will leave the hospital with the prevalent release type (i.e., `Release A'). Only 20 of the 86 data features are used by the experts in total, but the specific feature subset of the experts differ appreciably from one another. In a sense, this means that the experts learned different input-output mappings (capturing different contex-dependent process-outcome use cases). 
For instance, in predicting class 1, Expert 0 attributes a positive influence to `Activity_IV Liquid' and negative influence to `Activity_Release B' (a specific discharge type), `mean_hour', and `org:group_other' (a hospital group). Expert 1 attributes instead a positive influence to `DiagnosticArtAstrup' (arterial blood gas measurement) and `org:group_W' and negative influence to both `Activity_Release B' and `org:group­_G'. Analogous interpretations can be extracted from the remaining expert models, which also focus on specific activities and hospital groups. § DISCUSSION AND CONCLUSION The experimental results presented above confirm that the models discovered by exhibit a compelling trade-off between the accuracy and explainability of the their outcome predictions. This descends from both the modularity and conditional-computation nature of models (where only one specific expert is chosen to make a prediction), and from the possibility to control the complexity of both the model and of their explanations (via hyperparameters m and kTop). In particular, by focus on a small number of feature importance scores, the used is allowed to easily inspect and assess the internal decision logic of the model and get simple, faithful explanations for its predictions. Interesting directions of future work are: (i) converting LR-like sub-models returned by into logic rules, (ii) tuning hyper-parameters kTop and m automatically; (iii) evaluating the relevance of s' explanations through user studies.
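To make the hard-selection prediction rule and the feature-wise kTop pruning described above concrete, the following is a minimal numpy sketch with random stand-in parameters; it is only an illustration of those two mechanisms, not the implementation evaluated in the experiments.

import numpy as np

def predict_moe(x, gate_W, gate_b, experts_W, experts_b):
    """Hard-gated prediction of the mixture of logistic-regression experts:
    the gate scores each expert, only the top-scoring expert is activated,
    and its sigmoid output is returned as the outcome probability."""
    gate_scores = gate_W @ x + gate_b            # one linear score per expert
    k = int(np.argmax(gate_scores))              # hard selection of a single expert
    logit = experts_W[k] @ x + experts_b[k]      # chosen LR expert
    return k, 1.0 / (1.0 + np.exp(-logit))

def prune_k_top(W, k_top):
    """Feature-wise magnitude pruning: keep only the k_top input features whose
    weight blocks (columns) have the largest L1 norm, zeroing all other weights."""
    importance = np.abs(W).sum(axis=0)           # one importance score per input feature
    keep = np.argsort(importance)[-k_top:]
    mask = np.zeros(W.shape[1])
    mask[keep] = 1.0
    return W * mask

# Toy example: 3 experts, 5 encoded trace features (random stand-in parameters).
rng = np.random.default_rng(0)
gate_W, gate_b = rng.standard_normal((3, 5)), np.zeros(3)
experts_W, experts_b = rng.standard_normal((3, 5)), np.zeros(3)
gate_W = prune_k_top(gate_W, k_top=2)            # the same pruning applies to each expert
x = rng.standard_normal(5)
print(predict_moe(x, gate_W, gate_b, experts_W, experts_b))

Because each sub-net is a pruned linear model, the few surviving weights per expert are exactly the feature-importance scores a user would inspect, which is what keeps the ensemble interpretable.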
http://arxiv.org/abs/2407.12453v1
20240717100232
Estimating Reaction Barriers with Deep Reinforcement Learning
[ "Adittya Pal" ]
cs.LG
[ "cs.LG", "physics.comp-ph", "J.2" ]
§ ABSTRACT Stable states in complex systems correspond to local minima on the associated potential energy surface. Transitions between these local minima govern the dynamics of such systems. Precisely determining the transition pathways in complex and high-dimensional systems is challenging because these transitions are rare events, and isolating the relevant species in experiments is difficult. Most of the time, the system remains near a local minimum, with rare, large fluctuations leading to transitions between minima. The probability of such transitions decreases exponentially with the height of the energy barrier, making the system's dynamics highly sensitive to the calculated energy barriers. This work aims to formulate the problem of finding the minimum energy barrier between two stable states in the system's state space as a cost-minimization problem. We propose solving this problem using reinforcement learning algorithms. The exploratory nature of reinforcement learning agents enables efficient sampling and determination of the minimum energy barrier for transitions. § INTRODUCTION There are multiple sequential decision-making processes one comes across in the world, such as control of robots, autonomous driving, and so on. Instead of constructing an algorithm from the bottom up for an agent to solve these tasks, it would be much easier if we could specify the environment and the state in which the task is considered solved, and let the agent learn a policy that solves the task. Reinforcement learning attempts to address this problem. It is a hands-off approach that provides a feature vector representing the environment and a reward for the actions the agent takes. The objective of the agent is to learn the sequence of steps that maximizes the sum of returns. One widespread example of a sequential decision-making process where reinforcement learning is utilized is solving mazes <cit.>. The agent, a maze runner, selects a sequence of actions that might have long-term consequences. Since the consequences of immediate actions might be delayed, the agent must evaluate the actions it chooses and learn to select actions that solve the maze. Particularly, in the case of mazes, it might be relevant to sacrifice immediate rewards for possibly larger rewards in the long term. This is the exploitation-exploration trade-off, where the agent has to learn to choose between leveraging its current knowledge to maximize its current gains or further increasing its knowledge for some possibly larger reward in the long term, possibly at the expense of short-term rewards. The process of learning by an agent while solving a maze is illustrated in Figure <ref>. GridWorld is an environment for reinforcement learning that mimics a maze <cit.>.
The agent is placed at the start position in a maze with blocked cells, and the agent tries to reach a stop position with the minimum number of steps possible. One might note an analogy of a maze runner with an agent negotiating the potential energy landscape of a transition event for a system along the saddle point with the minimum height. The start state and the stop state are energy minima on the potential energy surface, separated by an energy barrier for the transition. The agent would have to perform a series of perturbations to the system to take it from one minimum (the start state) to another (the end state) through the located saddle point. As in the maze-solving problem, the agent tries to identify the pathway with the minimum energy barrier. If the number of steps is considered the cost incurred in a normal maze, it is the energy along the pathway that is the cost for the transition event. A comparison is attempted in Figure <ref>. The problem of locating the minimum energy barrier for a transition has applications in physical phase transitions, synthesis plans for materials, activation energies for chemical reactions, and the conformational changes in biomolecules that lead to reactions inside cells. In most of these scenarios, the dynamics are governed by the kinetics of the system (rather than the thermodynamics) because the thermal energy of the system is much smaller than the energy barrier of the transition. This leads to the system spending most of its time around the minima, and some random large fluctuations in the system lead to a transition. This is precisely why transition events are rare and difficult to isolate and characterize with experimental methods. Moreover, these ultra-fast techniques can be applied to only a limited number of systems. Because transition events are rare, sampling them using Monte Carlo methods requires long simulation times, making them inefficient <cit.>. To sample the regions of the potential energy surface around the saddle point adequately, a large number of samples have to be drawn. Previous work has been done to identify the saddle point and determine the height of the transition barrier—transition path sampling <cit.>, nudged elastic band <cit.>, growing string method <cit.>, to name a few—which use ideas from gradient descent. However, even for comparatively simple reactions, these methods are not always guaranteed to find the path with the energy barrier that is a global minimum because the initial guess for the pathway might be wrong and lead to a local minimum. With the advent of deep learning and the use of neural nets as function approximators for complex mappings, there has been increased interest in the use of machine learning <cit.> to either guess the configuration of the saddle point along the pathway (whose energy can then be determined by standard ab initio methods) or directly determine the height of the energy barrier given the two endpoints of the transition. Graph neural networks <cit.>, generative adversarial networks <cit.>, gated recurrent neural networks <cit.>, transformers <cit.>, machine-learned potentials <cit.>, and so on, have been used to optimize the pathway for such transitions. Noting the superficial similarities between solving a maze and determining the transition pathway with the lowest energy barrier, we propose to use standard and tested deep reinforcement learning algorithms used to solve mazes in an attempt to solve the problem of finding minimum energy pathways. 
The problem is formulated as a min-cost optimization problem in the state space of the system. We use this formulation to determine the barrier height of the optimal pathway in the Mueller-Brown potential. Neural nets are used as the actor and critic function approximators, and a randomly perturbed policy is used to facilitate exploration of the potential energy surface by the agent. Delayed policy updates and target policy averaging are used to stabilize the learning, especially during the first few epochs, which are crucial to the optimal performance of the agent. Section 2 describes the methods used to formulate the problem as a Markov decision process and the algorithm used to solve it. Section 3 elaborates on the experiments where the formulated method is used to determine the barrier height of a transition on the Mueller-Brown potential. Section 4 contains a short discussion of the work in the context of other similar studies and the conclusions drawn from this work. § METHODS To solve the problem of finding a pathway with the lowest energy barrier for a transition using reinforcement learning, one has to model it as a Markov decision process. Any Markov decision process consists of (state, action, next state) tuples. In this case, the agent starts at the initial state (a local minimum) and perturbs the system (the action) to reach a new state. Since the initial state was an energy minimum, the current state will have higher energy. However, as in many sequential control problems, the reward is delayed. A series of perturbations that lead to states with higher energies might enable the agent to climb out of the local minimum into another one containing the final state. By defining a suitable reward function and allowing the agent to explore the potential energy surface, it is expected that the agent will learn a path from the initial to the final state that maximizes the rewards. If the reward function is defined properly, it should correspond to the pathway with the lowest energy barrier for the transition. Once the problem is formulated as a Markov decision process, it can be solved by some reinforcement learning algorithm. Twin Delayed Deep Deterministic Policy Gradient (TD3) <cit.> is a good start because it prevents the overestimation of the state value function, which often leads to the agent exploiting the errors in the value function and learning a sub-optimal policy. Soft Actor Critic (SAC) <cit.> tries to blend the deterministic policy gradient with a stochastic policy optimization, promoting exploration by the agent. In practice, using a stochastic policy to tune exploration often accelerates the agent's learning. §.§ Markov Decision Process The Markov decision process is defined on: * a state space 𝒮, consisting of states s ∈ℝ^d, where d is the dimensionality of the system, chosen to be the number of degrees of freedom in the system. * a continuous action space 𝒜, where each action Δ s ∈ℝ^d: |(Δ s)_i| ≤ 1 is normalized, and the action is scaled using an appropriate scaling factor λ. At a state s^(k), the agent takes an action Δ s^(k). Since the action is considered a perturbation to the current state of the system, the next state s^(k+1) is determined from the current state s^(k) as s^(k+1) = s^(k) + λ·Δ s^(k). To determine the minimum energy barrier for a transition, the reward for an action taking the agent to state s^(k+1) from state s^(k) is chosen to be the negative of the energy of the next state, -E(s^(k+1)). 
The negation makes maximizing the sum of rewards collected by the reinforcement learning agent in an episode equivalent to minimizing the sum of energies along the pathway for the transition. The reward acts as immediate feedback to the agent for taking an action in a particular state. However, what is important is the long-term reward, captured by the sum of the rewards over the entire episode, leading the agent to identify a transition pathway with a low sum of energies at all intermediate steps. Since both the state space and action space are continuous, an actor-critic based method, specifically the soft actor-critic (SAC), is used. Additionally, since the state space is continuous, the episode is deemed to have terminated when the difference between the current state and the target state is smaller than some tolerance, x ∈ℝ^d : |x - x_t| < δ for some small δ. Otherwise, it would be extremely unlikely that the agent would land exactly at the coordinates of the final state after taking some action. §.§ Algorithm SAC, an off-policy learning algorithm with entropy regularization, is used to solve the formulated Markov Decision process because the inherent stochasticity in its policy facilitates exploration by the agent. The algorithm learns a behavior policy π_θ and two critic Q-functions, which are neural nets with parameters ϕ_1 and ϕ_2 (line 1 of Algorithm <ref>). The agent chooses an action a^(k)≡Δ s^(k) to take when at state s^(k) following the policy π_θ (line 8). The returns from the state s^(k) when acting according to the policy π is the discounted sum of rewards collected from that step onwards till the end of the episode: R_t = - ∑_i = t^T γ^i-t,E(s^(i)). The objective of the reinforcement learning agent is to determine the policy π^* that maximizes the returns, R_t, for states s ∈𝒮. This is done by defining a state-action value function, Q(s^(i), a^(i)), which gives an estimate of the expected returns if action a^(i) is taken by the agent when at state s^(i): Q(s^(i), a^(i)) = 𝔼[R_t : s_t = s^(i), a_t = a^(i)]. Since the objective is to maximize the sum of the returns, the action-value function can be recursively defined as Q(s^(i), a^(i)) = - E(s^(i+1)) + max_a^(i+1)∈𝒜 Q(s^(i+1), a^(i+1)) which is implemented in line 14 of Algorithm <ref>. A replay buffer with a sufficiently large capacity is employed to increase the probability that independent and identically distributed samples are used to update the actor and two critic networks. The replay buffer (in line 3) is modeled as a deque where the first samples to be enqueued (which are the oldest) are also dequeued first, once the replay buffer has reached its capacity and new samples have to be added. Since an off-policy algorithm is used, the critic net parameters are updated by sampling a mini-batch from the replay buffer at each update step (line 13). Stochastic gradient descent is used to train the actor and the two critic nets. The entropy coefficient α is adjusted over the course of training to encourage the agent to explore more when required and exploit its knowledge at other times (line 18) <cit.>. However, we also borrow some elements from the TD3 algorithm <cit.> to improve the learning of the agent, namely delayed policy updates and target policy smoothing. The critic Q-nets are updated more frequently than the actor and the target Q-nets to allow the critic to learn faster and provide more precise estimates of the returns from the current state. 
To address the problem of instability in the learning, especially in the first few episodes while training the agent, target critic nets are used. Initially, the critic nets are duplicated (line 2) and subsequently soft updates of these target nets are carried out after an interval of a certain number of steps (line 19). This provides more precise estimates for the state-action value function while computing the returns for a particular state in line 14. To encourage the agent to explore the potential energy surface, clipped noise is added to the action chosen by the actor net (line 9). This also makes it difficult for the actor to exploit imprecise Q-net estimates during the beginning of the training. The changes to the SAC algorithm, borrowed from TD3, are highlighted in blue in the pseudocode of Algorithm <ref>. The parameters used in the particular implementation of the algorithm are listed in Table <ref>. § EXPERIMENTS The proposed algorithm is applied to determine the pathway with the minimum energy barrier on the M"uller–Brown potential energy surface <cit.>. The M"uller–Brown potential has been used to benchmark the performance of several algorithms that determine the minimum energy pathways, such as the molecular growing string method <cit.>, Gaussian process regression for nudged elastic bands <cit.>, and accelerated molecular dynamics <cit.>. Therefore, it is also used in this work to demonstrate the applicability of the proposed method. A custom environment <cit.> was created following the gymnasium interface (inheriting from the ) to model the problem as a Markov Decision Process to be solved by a reinforcement learning pipeline. The values for the parameters used in Algorithm <ref> are listed in Table <ref>. §.§ Results The M"uller–Brown potential is characterized by the following potential: V(x, y) = ∑_i=0^3 W_i·exp[a_i(x - x_i)^2 + b_i(x - x_i)(y - y_i) + c_i(y - y_i)^2 ] where W = (-200, -100, -170, 15), a = (-1, -1, -6.5, 0.7), b = (0, 0, 11, 0.6), c = (-10, -10, -6.5, 0.7), x = (1, 0, -0.5, -1), and y = (0, 0.5, 1.5, 1). The potential energy surface for the system is plotted in Figure <ref>, and the local minima are at (-0.558, 1.442), (0.623, 0.028), and (-0.050, 0.467) with the value of V(x, y) being -146.7, -108.2, and -80.8, respectively. The RL agent was trained to locate a path on this surface from (0.623, 0.028) with a random step (with zero mean and a standard deviation of 0.1) added to it as the initial state to (-0.558, 1.442) as the terminal state, with the minimum energy barrier. The first random step was chosen to avoid the same starting point in each training iteration of the agent, so it learns a more generalized policy. Some of the parameters for the Markov Decision Process to model this potential are given in Table <ref>. In Figure <ref>, an ensemble of paths generated by the trained RL agent with the starting points slightly perturbed from (0.623, 0.028) by noise added from 𝒩(0, 0.1) is plotted on the energy surface. The energy profiles along the generated trajectories are plotted in Figure <ref> aligned by the maximum of the profiles (and not by the start of the trajectories) for better visualization. The predicted energy barrier for the transition of interest is -40.36 ± 0.21. One can see that the agent learns to predict the path with the correct minimum energy barrier, albeit the energy barrier estimated by the agent is a little higher than the optimal analytical solution (-40.665). 
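For readers who wish to reproduce the surface, the potential and the parameter values quoted above translate directly into a few lines of code. The snippet below is a plain transcription of the stated formula (not the authors' implementation), and the printed values can be checked against the three minima listed above.

import numpy as np

# Parameters of the Mueller-Brown potential as listed above.
W  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([  -1.0,   -1.0,   -6.5,  0.7])
b  = np.array([   0.0,    0.0,   11.0,  0.6])
c  = np.array([ -10.0,  -10.0,   -6.5,  0.7])
x0 = np.array([   1.0,    0.0,   -0.5, -1.0])
y0 = np.array([   0.0,    0.5,    1.5,  1.0])

def mueller_brown(x, y):
    """V(x, y) = sum_i W_i * exp(a_i (x-x_i)^2 + b_i (x-x_i)(y-y_i) + c_i (y-y_i)^2)."""
    dx, dy = x - x0, y - y0
    return float(np.sum(W * np.exp(a * dx**2 + b * dx * dy + c * dy**2)))

# Sanity check at the three local minima quoted in the text.
for (mx, my) in [(-0.558, 1.442), (0.623, 0.028), (-0.050, 0.467)]:
    print((mx, my), round(mueller_brown(mx, my), 1))   # approx. -146.7, -108.2, -80.8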
However, the result demonstrates that reinforcement learning algorithms can be used to locate the minimum energy barrier for transitions between stable states in complex systems. The paths suggested by the trained agent cluster around the minimum energy path and pass through the vicinity of the actual saddle point representing the energy barrier. However, there still seems to be some way to go to improve the sampling densities around the saddle point, which determines the barrier height, to avoid overestimating it. §.§ Ablation Studies Several modifications were made to the standard SAC algorithm to be used in this particular case (highlighted in blue in Algorithm <ref>). Studies were performed to understand the contribution of each individual component to the working of the algorithm in this particular environment by comparing the performance of the algorithm with different hyperparameters for a component. Each modification and its contribution to the overall learning of the agent are described in the following sections. The mean and the standard deviation of the returns from the last 100 training steps for each modification to the existing algorithm are listed in Table <ref> to compare the performance of the agents. The modification that leads to the highest returns is highlighted. §.§.§ Target Policy Smoothing Injecting random noise (with a standard deviation σ) into the action used in the environment (in line 9 of Algorithm <ref>) encourages the agent to explore, while adding noise to the actions used to calculate the targets (in line 14 of Algorithm <ref>) acts as a regularizer, forcing the agent to generalize over similar actions. In the early stages of training, the critic Q-nets can assign inaccurate values to some state-action pairs, and the addition of noise prevents the actor from rote learning these actions based on incorrect feedback. On the other hand, to avoid the actor taking a too random action, the action is clipped by some maximum value for the noise (as done in lines 9 and 14 of Algorithm <ref>). The effect of adding noise to spread the state-action value over a range of actions is plotted in Figure <ref>. Adding noise leads to the agent learning a policy with less variance in the early learning stages and a more consistent performance. §.§.§ Delayed Policy Updates Delaying the updates for the actor nets and the target Q-nets (in lines 17 and 19 of Algorithm <ref>) allows the critic Q-nets to update more frequently and learn at a faster rate, so that they can provide a reasonable estimate of the value for a state-action pair before it is used to guide the policy learned by the actor net. The parameters of the critic Q-nets might often change abruptly early on while learning, undoing whatever the agent had learned (catastrophic failure). Therefore, delayed updates of the actor net allow it to use more stable state-action values from the critic nets to guide the policy learned by it. The effect of varying intervals of delay for the actor update on the learning of the agent is plotted in Figure <ref>. Updating the actor net for every update of the critic nets led to a policy with a high variance (blue plot). Delaying the update of the actor net to once every 2 updates of the critic resulted in the agent learning a policy that provided higher returns but still had a high variance (green plot). 
Delaying the update of the actor further (once every 4 and 8 updates of the critic net plotted as the red and magenta curves, respectively) further improved the performance of the agent. One can notice the lower variance in the policy of the agent during the early stages (first 200 episodes of the magenta curve) for the agent which updates the actor net and target critic nets once every 8 updates of the critic nets. However, delaying the updates for too long intervals would cripple the learning of the actor. The performance of the agent suffers when the update of the actor is delayed to once every 16 updates of the critic nets (yellow curve) and the agent fails to learn when the update of the actor net is further delayed to once every 32 updates of the critic nets (cyan curve). §.§.§ Tuning the Entropy Coefficient The entropy coefficient α can be tuned as the agent learns (as done in line 18 of Algorithm <ref>), which overcomes the problem of finding the optimal value for the hyperparameter α <cit.>. Moreover, simply fixing α to a single value might lead to a poor solution because the agent learns a policy over time: it should still explore regions where it has not learned the optimal action, but the policy should not change much in regions already explored by the agent that have higher returns. In Figure <ref>, the effect of the variation of the hyperparameter α on the learning of the agent is compared. As can be seen, a tunable α allows the agent to learn steadily, encouraging it to explore more in the earlier episodes and exploiting the returns from these explored regions in the latter episodes, resulting in a more stable learning curve (blue curve). A too low value of α, such as 10^-3 or 10^-1, makes the algorithm more deterministic (TD3-like), which leads to sub-optimal performance and the agent being stuck in a local minimum (plotted as green and red curves, respectively). An α value of 0.1 has comparable performance to the tunable α, but the learning curve is less stable and there are abrupt changes in the policy function (magenta curve). The original implementation of SAC suggested 0.2 as a fixed value for α, which leads to a learning curve resulting in a policy with high variance (yellow curve). A too high value of α, such as 0.5, makes the algorithm more stochastic (REINFORCE-like), which also leads to sub-optimal learning (cyan curve). § DISCUSSIONS AND CONCLUSION Advancements in reinforcement learning algorithms based on the state-action value function have led to their application in diverse sequential control tasks such as Atari games, autonomous driving, and robot movement control. This project formulated the problem of finding the minimum energy barrier for a transition between two local minima as a cost minimization problem, solved using a reinforcement learning setup with neural networks as function approximators for the actor and critics. A stochastic policy was employed to facilitate exploration by the agent, further perturbed by random noise. Target networks, delayed updates of the actor, and a replay buffer were used to stabilize the learning process for the reinforcement learning agent. While the proposed framework samples the region around the saddle point sufficiently, providing a good estimate of the energy barrier for the transition, there is definitely scope for improvement. As future work, the method could be applied to more realistic systems. 
Previous works in determining transition pathways using deep learning or reinforcement learning techniques include formulating the problem as a shooting game solved using deep reinforcement learning <cit.>. Additionally, in <cit.>, this problem is formulated as a stochastic optimal control problem, where neural network policies learn a controlled and optimized stochastic process to sample the transition pathway using machine learning techniques. Stochastic diffusion models have been used to model elementary reactions and generate the structure of the transition state, preserving the required physical symmetries in the process <cit.>. Furthermore, the problem of finding transition pathways was recast into a finite-time horizon control optimization problem using the variational principle and solved using reinforcement learning in <cit.>. Recent work <cit.> used an actor-critic reinforcement learning framework to optimize molecular structures and calculate minimum energy pathways for two reactions. This work differs from previous efforts by providing a much simpler formulation of the problem, using the energy of the state directly as the reward while searching for transition pathways with the minimum energy barrier. One of the main advantages of this method is that, unlike traditional methods such as the nudged elastic band or the growing string method, it does not require an initial guess for the transition pathway. Traditional methods use energy gradient information along the pathway to iteratively improve to a pathway with better energetics. However, the success of these methods depends on the initial guess for the pathway, which might be stuck in a local minimum, as shown in Figure <ref>, leading to a sub-optimal solution. This could result in an overestimate of the energy barrier and subsequently an underestimate of the probability for the transition to occur, leading to imperfect modeling of the system. On the other hand, the use of a stochastic policy in a reinforcement learning setup avoids this problem, increasing the chances of finding a better estimate of the transition barrier as the agent explores the state space. However, as a trade-off for the simple model and generic approach, the agent learns slowly and requires a large number of environment interactions.
http://arxiv.org/abs/2407.13586v1
20240718152644
Complexity and speed of semi-algebraic multi-persistence
[ "Arindam Banerjee", "Saugata Basu" ]
math.AT
[ "math.AT", "Primary 14F25, 55N31, Secondary 68W30" ]
Complexity and speed of semi-algebraic multi-persistence] Complexity and speed of semi-algebraic multi-persistence Department of Mathematics, IIT Kharagpur, Kharagpur, India. 123.arindam@gmail.com Department of Mathematics, Purdue University, West Lafayette, IN 47906, U.S.A. sbasu@math.purdue.edu Primary 14F25, 55N31; Secondary 68W30 Banerjee was partially supported by SERB Start Up Research Grant, IIT Kharagpur Faculty Start Up Research Grant and CPDA of IIT Kharagpur. Basu was partially supported by NSF grant CCF-1910441. § ABSTRACT Let be a real closed field, S ⊂^n a closed and bounded semi-algebraic set and 𝐟 = (f_1,…,f_p):S →^p a continuous semi-algebraic map. We study the poset module structure in homology induced by the simultaneous filtrations of S by the sub-level sets of the functions f_i from an algorithmic and quantitative point of view. For fixed dimensional homology we prove a singly exponential upper bound on the complexity of these modules which are encoded as certain semi-algebraically constructible functions on ^p ×^p. We also deduce for semi-algebraic filtrations of bounded complexity, upper bounds on the number of equivalence classes of finite poset modules that such a filtration induces – establishing a tight analogy with a well-known graph theoretical result on the “speed” of algebraically defined graphs. [ Saugata Basu July 22, 2024 ================= § INTRODUCTION Persistent homology theory is now a well established sub-field of applied topology <cit.>. Initially persistent homology modules were associated to filtrations of spaces by the sub-level sets of single functions. More recently, simultaneous filtrations by multiple functions have been studied driven by applications <cit.>. More generally, persistent homology theory over arbitrary posets have been developed extensively in <cit.>. Study of persistent homology restricted to tame spaces – such as the category of semi-algebraic sets and maps – is of much recent origin <cit.> – and is the topic of this paper. Let be a real closed field and an ordered domain contained in which we fix for the rest of the paper. We study in this paper semi-algebraic multi-persistence modules induced by filtrations of the sublevels sets of continuous semi-algebraic maps 𝐟:S →^p from a quantitative, as well algorithmic, point of view. We address the following problems. * We consider the problem of defining a data-structure (which turns to be a semi-algebraically constructible function on ^p ×^p), so that given any finite subset of ^p, the structure of the induced submodule on the corresponding finite poset can be read off efficiently (i.e. without any further computation involving S and 𝐟). * We also consider the problem of designing an efficient algorithm for computing the constructible function mentioned above. * Finally, we consider the problem of understanding the whole class of finite poset modules obtainable from semi-algebraic multi-filtrations of bounded complexity. Following the spirit of a similar notion in graph theory <cit.>, we introduce the notion of “speed” of semi-algebraically defined families of finite poset modules, which counts the number of non-equivalent poset modules on finite posets belonging to the family. We consider the problem of proving upper bounds on the speed of semi-algebraic families of poset modules mirroring the corresponding result in graph theory <cit.>. §.§ Main Results We prove results in all three directions mentioned above. 
We prove upper bounds on the the complexity (Theorem <ref>) and the speed of semi-algebraic multi-persistence modules (Theorems <ref> and <ref>) – the notion of complexity and speed pertaining to semi-algebraic persistence theory are new, and we define them below. Finally, we describe an algorithm and prove upper bounds on its complexity for computing these modules (Theorem <ref>). In order to get to the main results as quickly as possible, we first state these in the next three subsections introducing the necessary notation and background information. In the next section (Section <ref>) we discuss motivation behind studying these questions, the significance of our results, and connections with prior work in both algorithmic semi-algebraic geometry and persistent homology theory. Finally, we prove our results in Sections <ref> and <ref>. We assume only basic familiarity with homology theory and some categorical definitions, but do not assume any specialized knowledge of persistent homology theory or semi-algebraic geometry (other than the definition of real closed fields and semi-algebraic sets and maps) in what follows. §.§ Complexity of semi-algebraic multi-persistence modules In order to state our results we need to recall some basic definitions. Let 𝐤 be a field fixed for the rest of the paper. We denote by _𝐤 the category of 𝐤-vector spaces and linear maps. Given a partially ordered set (poset) (P,≼), a poset module over P is a functor: 𝐏: (P,≼) →_𝐤, where we consider (P,≼) as a poset category – namely, the category whose objects are elements p ∈ P, and the set of morphisms _(P,≼ )(p,p') is empty if p ⋠p' and a singleton set, which will denote by “p≼ p'”, otherwise. We say that two poset modules 𝐏,𝐏':P →_𝐤 are isomomorphic if there exists a natural transformations F,G as depicted below which are inverses of each other: P @/^1pc/[rr]^𝐏@/_1pc/[rr] _𝐏' F ⇑ _𝐤, P @/^1pc/[rr]^𝐏@/_1pc/[rr] _𝐏' ⇓ G _𝐤. We will deal mostly with the poset (^p,≼), where the partial order ≼ is defined coordinate-wise i.e. (a_1,…,a_p) =𝐚≼𝐛 = (b_1,…,b_p) if and only if a_i ≤ b_i, 1 ≤ i ≤ p. For any poset (P,≼), we will denote by (P) = {(p,p') | p ≼ p'}. In the case P = ^p, we will identify (^p) with the semi-algebraic subset of ^p ×^p = ^2 p, defined by the inequalities Y_i - Y_i' ≤ 0, 1 ≤ i ≤ p, where (Y_1,…,Y_p,Y_1',…,Y_p') are the coordinates with respect to the standard basis of ^2 p. A key role in what follows will be played by semi-algebraically constructible functions whose definition we recall below. A semi-algebraically constructible function F:S →𝐤, where S is a semi-algebraic set, is a 𝐤-linear combination of the characteristic functions of some finite number of semi-algebraic subsets of S. More generally, we say that a function F = (f_1,…,f_M):S →𝐤^M is semi-algebraically constructible, if each f_i is semi-algebraically constructible. Clearly, to each semi-algebraically constructible function F:S →𝐤^M, there exists a finite partition of S into semi-algebraic subsets on which F is constant. We will say that such a partition is subordinate to F. Clearly, if a partition is subordinate to F, and a finer partition will also be subordinate to F. We will make use of semi-algebraic partitions of a special kind which we introduce below. [Sign conditions, realization, (·)] For a finite subset 𝒫⊂[X_1,…,X_n] we will call any element of {0,1,-1}^𝒫 a sign condition on 𝒫. We will denote by (σ) = {∈^n | (P()) = σ(P), P ∈𝒫}. 
We denote by (𝒫) = {σ∈{0,1,-1}^𝒫|(σ) ≠∅}, and call (𝒫) the set of realizable sign conditions of 𝒫. We will denote (σ) the set of semi-algebraically connected components of (σ), and denote (𝒫) = ⋃_σ∈(𝒫)(σ). (Note that ( C )_C ∈(𝒫) is a finite partition of ^n into non-empty locally closed semi-algebraic sets.) For a finite subset 𝒫⊂[X_1,…,X_n], we will denote (𝒫) = max_P ∈𝒫(P). We now define a notion of complexity of a semi-algebraically constructible function in terms of the “complexity” of partitions subordinate to it. We will say that a semi-algebraically constructible function F: S →𝐤^M, S ⊂^n has complexity ≤ D, if there exists 𝒫⊂[X_1,…,X_n], such that the partition ( C )_C ∈(𝒫), C ⊂ S is subordinate to F, and (𝒫) ·(𝒫) ≤ D. An alternative definition of the complexity of F would be to say that complexity of a semi-algebraically constructible function F is ≤ D if there exists 𝒫⊂[X_1,…,X_n] such that the partition ( (σ))_σ∈(𝒫), (σ) ⊂ S is subordinate to F, and (𝒫) ·(𝒫) ≤ D. However, the semi-algebraically connected components of a 𝒫-semi-algebraic set, where 𝒫⊂[X_1,…,X_n]_≤ d, (𝒫) ≤ s, are all 𝒬-semi-algebraic sets, for some 𝒬⊂[X_1,…,X_n], with (𝒬) ≤ s^n d^O(n^4) and (𝒬) ≤ d^O(n^3) <cit.>. So if the complexity of F is bounded by D according to Definition <ref>, then it is bounded by D^n^O(1) by this alternative definition – and so the asymptotic bounds in this paper, such as in Theorem <ref>, is unchanged if we use this alternative definition. Semi-algebraically constructible functions have been studied from the point of view computational complexity theory earlier in <cit.>. In that paper, the complexity (in terms of formula size) of a semi-algebraically constructible function is also defined in terms of subordinate partitions <cit.>. However, the definition in <cit.> is a bit more nuanced than the definitions given above, since it is aimed at developing a broader theory of complexity classes of constructible functions (and sheaves) which is not our aim here. So we do not repeat it here but mention that the upper bound in Theorem <ref> would apply even with the definition given in <cit.>. We now arrive at one of the central objects of interest in this paper. We say that a poset module 𝐏: (^p,≼) →_𝐤, is semi-algebraically constructible, if there exists K ≥ 0, and a semi-algebraically constructible function F: (^p) →𝐤^K × K, such that 𝐏 is isomorphic (see Definition <ref>) to the poset module 𝐏:(^p,≼) →_𝐤, defined by: 𝐏() = 𝐤^𝐏(), 𝐏(≼') = L__𝐏('),𝐏()^K(F(,')), where for 0 ≤ p,q ≤ K, _p,q^K: 𝐤^K × K→𝐤^p × q, is the map that extracts the p × q matrix out of an K × K matrix by extracting the first p rows and the first q columns, and for M ∈𝐤^p × q, L_M: 𝐤^q →𝐤^p denotes the homomorphism ↦ M ·. We will say that F is associated to 𝐏, and that 𝐏 has complexity ≤ D, if the semi-algebraically constructible function F has complexity ≤ D (see Definition <ref>). We finally arrive at the notion of poset modules induced by semi-algebraic multi-filtrations. Let S⊂^n be a closed and bounded semi-algebraic set and 𝐟: S →^p be a continuous semi-algebraic map. For ∈^p, we denote by S_𝐟≼ = {x ∈ S |𝐟(x) ≼}. For ℓ≥ 0, we denote by 𝐏_S,𝐟,ℓ: (^p,≼) →_𝐤 the poset module (which we call the ℓ-th multi-persistence module of the filtration of S by 𝐟) defined by 𝐏_S,𝐟,ℓ() = _i(S_𝐟≼), 𝐏_S,𝐟,ℓ(≼') = ι_,',ℓ, where _ℓ(·) denotes the ℓ-th homology group with coefficients in 𝐤, and ι_,',ℓ:_i(S_𝐟≼) →_i(S_𝐟≼') is the homomorphism induced by the inclusion S_𝐟≼↪ S_𝐟≼'. 
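Before turning to concrete statements, the following small numerical sketch may help fix the objects in the definition above. It is our own illustration and not the algorithm developed in this paper: for a planar “tent”-shaped set S filtered by 𝐟 = (X_1,X_2), it estimates the dimension of the 0-th homology of the sublevel sets S_𝐟≼𝐚 (i.e. their number of connected components) by rasterizing them on a pixel grid; the choice of S, the grid resolution, and the helper name dim_H0_sublevel are all ours.

```python
# A minimal numerical sketch (ours; not the algorithm of this paper): count the
# connected components of the sublevel sets of a "tent"-shaped planar set S
# filtered by f = (X_1, X_2), by rasterizing S on a grid.  The tent is chosen so
# that the count actually varies with the parameter a = (a_1, a_2).
import numpy as np
from scipy import ndimage

def dim_H0_sublevel(a1, a2, res=600):
    """dim H_0 of {x in S : x_1 <= a1, x_2 <= a2}, estimated on a res x res grid."""
    xs = np.linspace(-1.2, 1.2, res)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    tent = (np.abs(X) <= 1.0) & (Y >= 0.0) & (np.abs(Y - (1.0 - np.abs(X))) <= 0.1)
    mask = tent & (X <= a1) & (Y <= a2)
    _, num_components = ndimage.label(mask)   # 4-connectivity
    return num_components

# The count is constant on large semi-algebraic regions of the parameter plane:
for a in [(-2.0, 0.5), (0.0, 0.5), (2.0, 0.5), (2.0, 2.0)]:
    print(a, dim_H0_sublevel(*a))   # expected: 0, 1, 2, 1
```

The fact that the answer only changes when the parameter 𝐚 crosses finitely many semi-algebraic walls is precisely the kind of structure that the semi-algebraically constructible functions introduced above are designed to record.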
The following simple example illustrates Definitions <ref> and <ref>. Let n=2,p = 2,ℓ = 0, and S ⊂^2 be the closed unit disk defined by X_1^2 + X_2^2 -1 ≤ 0, and f_1 = X_1,f_2 = X_2. Let D = S ∪ (_≥ 0^2 + (-1,0)) ∪ (_≥ 0^2 + (0,-1)) (see Figure <ref>), and let F: (^2) →𝐤 be the constructible function defined as follows: F(_1,_2) = 1, = 0, . It is easy to check that F is associated to the poset module 𝐏_S,𝐟,0 in this example. Moreover, the partition (C)_C∈(𝒟), C ⊂(^2) is subordinate to F, where 𝒟 = { Y_1^2 + Y_2^2 -1, Y_1+1, Y_2+1, Y_1'^2 + Y_2'^2 -1, Y_1'+1, Y_2' +1, Y_1 - Y_1',Y_2 - Y_2'}. It follows from Definitions <ref> , <ref> and <ref>, that the complexity of F, and hence also the complexity of 𝐏_S,𝐟,0, is ≤(𝒟)·(𝒟) = 8 · 2 = 16. Our first theorem (Theorem <ref> below) states that the modules 𝐏_S,𝐟,ℓ are all semi-algebraically constructible, with singly exponentially bounded complexity (see Definition <ref>). In order to make the statement about the complexity precise we need one more notation. [Realizations, 𝒫-, 𝒫-closed semi-algebraic sets] For any finite set of polynomials 𝒫⊂ [ X_1 , … ,X_n ], we call any quantifier-free first order formula ϕ with atoms, P =0, P < 0, P>0, P ∈𝒫, to be a 𝒫-formula. We call the realization of ϕ: (ϕ) := {𝐱∈^n |ϕ (𝐱)} a 𝒫-semi-algebraic subset of ^n. We say that a quantifier-free formula ϕ is closed if it is a formula in disjunctive normal form with no negations, and with atoms of the form P ≥ 0, P ≤ 0, where P ∈[X_1,…,X_n]. If the set of polynomials appearing in a closed formula is contained in a finite set 𝒫, we will call such a formula a 𝒫-closed formula, and we call the realization, (ϕ), a 𝒫-closed semi-algebraic set. Similarly, if 𝒬⊂[X_1,…,X_n,Y_1,…,Y_p] we a call map ^n →^p to be a 𝒬-semi-algebraic (resp. 𝒬-closed semi-algebraic) map if graph(f) ⊂^n ×^p is a 𝒬-semi-algebraic (resp. 𝒬-closed semi-algebraic) set. We can now state our first quantitative result. Let S ⊂^n be a bounded 𝒫-closed semi-algebraic set, and 𝐟:S →^p be a 𝒬-closed continuous semi-algebraic map. Then, for each ℓ≥ 0, the poset module 𝐏_S,𝐟,ℓ is semi-algebraically constructible with complexity bounded by (s d)^ ( p n) ^O(ℓ), where s = (𝒫) + (𝒬), and d = max((𝒫),(𝒬)). Note that the complexity in Theorem <ref> is bounded singly exponentially for every fixed ℓ≥ 0. We will discuss the significance of Theorem <ref> and its connections with prior results (particularly, results in <cit.> where the case p=1 is treated) later in Subsection <ref>. At this point we just note that the singly exponential bound on the complexity of the semi-algebraically constructible module 𝐏_S,𝐟,ℓ implies that given any finite subset T ⊂^p, the poset module 𝐏_S,𝐟,ℓ_T, obtained by restricting the functor 𝐏_S,𝐟,ℓ to T, can be described completely with complexity (s d)^ (p n) ^O(ℓ)· ((T))^2. Once this description is obtained different invariants of the finite poset module 𝐏_S,𝐟,ℓ_T, can be computed using algorithms that have been developed for that purpose recently (see for example, <cit.>). This computation will not involve any operations in the ring of coefficients of the polynomials in 𝒫 and 𝒬, but only operations on elements of the field 𝐤. §.§ Speed of semi-algebraic multi-persistence modules Our next result involves the notion of “speed” of semi-algebraic multi-persistence modules that we introduce in this paper. In order to motivate it we recall an important notion from graph theory that plays a guiding role. §.§.§ Speed of semi-algebraic graphs Suppose F ⊂^p ×^p be a fixed semi-algebraic relation (i.e. 
a semi-algebraic subset of ^p ×^p). A labelled (directed) graph (V,E ⊂ V × V), with V = {1,…,N}, is called an F-graph if there exists (_1,…,_N) ∈ (^p)^N, such that (i,j) ∈ E ⇔ (_i,_j) ∈ F. Families of F-graphs, where F is a semi-algebraic relation, have been studied in graph theory from several perspectives such as Ramsey theory, the Zarankiewicz problem, etc. (see for example, <cit.>). We will be concerned with another very well-studied property, called “speed”, of such families. The function G_F:ℕ→ℕ, defined by letting G_F(N) be the number of distinct F-graphs with vertex set {1,…,N}, is called the speed of the family of graphs defined by F. The following theorem (slightly simplified) is a key result in the theory of semi-algebraic graphs. <cit.> For every semi-algebraic relation F ⊂^p ×^p, G_F(N) ≤ N^(1 + o_F(N))p N (where the o_F(N) term depends on F). [Theorem 1.2 in <cit.> is slightly more general in that it gives a bound on the number of edge-labellings taking values in some fixed finite set Λ. The theorem stated here is the special case when Λ={0,1}.] There is obviously a compelling analogy between semi-algebraic families of graphs and semi-algebraically defined families of multi-persistence modules – which leads naturally to the question of whether there is an analog of the above theorem for semi-algebraically defined families of multi-persistence modules as well. We answer this question positively in this paper. First, we need to specify when to call two poset modules equivalent. We will say that two poset modules, 𝐏': (P',≼') →_𝐤, 𝐏”: (P”,≼”) →_𝐤, are strongly equivalent if (P',≼') = (P”,≼”) and the two functors 𝐏',𝐏” are isomorphic (see Definition <ref>). We will say that two poset modules, 𝐏': (P',≼') →_𝐤, 𝐏”: (P”,≼”) →_𝐤, are weakly equivalent if there is a poset isomorphism ϕ: (P',≼') → (P”,≼”), such that the two poset modules 𝐏',𝐏”∘ϕ: P' →_𝐤 are strongly equivalent. Clearly, strong equivalence implies weak equivalence but not conversely. Suppose S is a closed and bounded semi-algebraic set and 𝐟: S →^p a continuous semi-algebraic map. For ℓ≥ 0, and any finite tuple T = (𝐭_1,…,𝐭_N) ∈ (^p)^N, we denote by ≼_T the partial order on {1,…,N} defined by j ≼_T j' ⇔𝐭_j≼𝐭_j', and we denote by 𝐏_S,𝐟,T,ℓ: ({1,…,N}, ≼_T )→_𝐤 the poset module defined by 𝐏_S,𝐟,T,ℓ(j) = _ℓ(S_𝐟≼𝐭_j). Let S ⊂^n be a 𝒫-closed bounded semi-algebraic set and 𝐟:S →^p a 𝒬-closed continuous semi-algebraic map. Then for every ℓ≥ 0 and N > 0, the number of strong (and hence also weak) equivalence classes amongst the poset modules 𝐏_S,𝐟,T,ℓ, T ∈ (^p)^N is bounded by ∑_j=1^pNC · N^2j· C^ p N = N^(1 +o(N))p N, where C = (sd)^(n p)^O(ℓ), s = (𝒫) + (𝒬), and d = max((𝒫),(𝒬)). As in the preceding theorem, the o(N) term depends on S and 𝐟 – more precisely, on s,d,n,ℓ, and p. (Tightness.) The bound in (<ref>) is quite tight. Taking S = [0,1]^N, p=1, f = X and ℓ =0, it is clear that the number of strong equivalence classes of 𝐏_S,f,T,0, for T ∈ [0,1]^N with pairwise distinct coordinates, is N! = N^(1 - o(1))N, since different orders of the coordinates of T will give rise to distinct posets on {1,…,N} (i.e. ≼_T = ≼_T' if and only if T and T' are ordered identically). The N! distinct posets are, however, all isomorphic. So the corresponding modules all belong to one weak equivalence class. However, taking S = [0,1]^2, p=2, 𝐟 = (X_1,X_2) and ℓ =0, it is not too difficult to show that the number of weak equivalence classes of 𝐏_S,𝐟,T,0, for T ∈ (^2)^N is at least 2^N-1.
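The combinatorial object counted in the theorem above is easy to experiment with. The following sketch is purely illustrative (the sampling parameters and the helper names induced_poset and sample_distinct_posets are ours and play no role in the proofs): it builds the induced poset ≼_T for random tuples T of points in the plane and counts how many distinct labelled posets occur.

```python
# A small sketch (our own illustration, with arbitrarily chosen parameters):
# for a tuple T of N points in R^p, build the induced poset <=_T as the set of
# pairs (i, j) with T[i] coordinatewise <= T[j], and empirically count how many
# distinct labelled posets arise from random tuples -- the quantity whose growth
# in N is controlled by the speed bounds discussed above.
import itertools
import random

def induced_poset(T):
    """Frozen set of pairs (i, j) with T[i] coordinatewise <= T[j]."""
    return frozenset(
        (i, j)
        for i, j in itertools.product(range(len(T)), repeat=2)
        if all(a <= b for a, b in zip(T[i], T[j]))
    )

def sample_distinct_posets(N, p, trials=2000, seed=0):
    rng = random.Random(seed)
    seen = {induced_poset([tuple(rng.random() for _ in range(p)) for _ in range(N)])
            for _ in range(trials)}
    return len(seen)

for N in (3, 4, 5):
    print(N, sample_distinct_posets(N, p=2))
```

The empirical counts give a feeling for the quantity bounded in the theorem, though of course they say nothing about the module structure carried by the poset.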
The number of distinct poset structures on a set with N elements is known asymptotically and is equal to 2^N^2/4 + o(N^2/4) <cit.>, while the number of these realized as induced poset on an N element subset of ^p for any fixed p is at most N^(1 + o(1))p N (consequence of Theorem <ref>). It is instructive to compare Theorem <ref> with its graph-theoretic counterpart stated earlier. The family of F-graphs where F is a fixed semi-algebraic relation can include graphs with cycles, while the underlying graph of a poset is acyclic. There are obviously fewer acyclic labelled directed graphs on N vertices (or equivalently distinct poset structures) than all possible labelled directed graphs on the same set of vertices. On the other hand in the poset module case each directed edge is labelled by a vector space homomorphism. We show later (see Example <ref>) that the choice of these homomorphisms can make the number of strong (or even weak) equivalence classes of poset modules for a fixed finite poset and a fixed upper bound on the dimensions of the vector spaces allowed to appear, can be infinite even for very simple posets and allowed dimensions ≤ 2. We also have the following uniform version of Theorem <ref>. While asymptotically the bound is same, the dependence on the parameters s,d,n,p is worse in the following theorem compared to Theorem <ref>. Let s,d,n,p > 0 and ℓ≥ 0 be fixed. Then the number of strong equivalence classes amongst posets of the form 𝐏_S,𝐟,T,ℓ, S is a bounded 𝒫-closed semi-algebraic set, and 𝐟:S →^p a 𝒬-closed continuous semi-algebraic maps, for some 𝒫⊂[X_1,…,X_n]_≤ d, 𝒬⊂[X_1,…,X_n,Y_1,…,Y_p]_≤ d, satisfying (𝒫) + (𝒬) ≤ s, and T ∈ (^p)^N, is bounded by N^(1+ o(N)) p N. §.§ Algorithm for computing semi-algebraic multi-persistence modules Theorems <ref>, <ref> and <ref> will follow from an algorithmic result that we prove in this paper. §.§.§ Model of computation and definition of complexity There are several models of computation that one can consider while dealing with semi-algebraic sets (and also several notions of what constitutes an algorithm). If the real closed field =ℝ, and = ℤ, one can consider these algorithmic problems in the classical Turing model and measure the bit complexity of the algorithms. In this paper, we will follow the book <cit.> and take a more general approach valid over arbitrary real closed fields. In the particular case, when = ℤ, our method will yield bit-complexity bounds. The precise notion of complexity that we use is defined in Definition <ref> below. We will use the following notion of “complexity” in this paper. We follow the same definition as used in the book <cit.>. In our algorithms we will usually take as input quantifier-free first order formulas whose terms are polynomials with coefficients belonging to an ordered domain contained in a real closed field . By complexity of an algorithm we will mean the number of arithmetic operations and comparisons in the domain . If = ℝ, then the complexity of our algorithm will agree with the Blum-Shub-Smale notion of real number complexity <cit.> [ In case =, it is possible to deduce the bit-complexity of our algorithms in terms of the bit-sizes of the coefficients of the input polynomials, and this will agree with the classical (Turing) notion of complexity. We do not state the bit complexity separately in our algorithms, but note that it is always bounded by a polynomial in the bit-size of the input times the complexity upper bound stated in the paper. ]. 
We are now in a position to state the main algorithmic result. There exists an algorithm that takes as input: * finite subsets 𝒫⊂[X_1,…,X_n], 𝒬⊂[X_1,…,X_n,Y_1,…,Y_p]; * a 𝒫-closed formula Φ with (Φ) denoted by S; * a 𝒬-closed formula Ψ, with (Ψ) = graph(𝐟), where 𝐟:S →^p a 𝒬-closed continuous semi-algebraic map; * ℓ≥ 0, and produces as output: * finite subsets 𝒞⊂[Y_1,…,Y_p,Y_1',…,Y_p'], 𝒟⊂[Y_1,…,Y_p]; * for each D ∈(𝒟), such that D ⊂(^p): * there exists unique C_1(D), C_2(D) ∈(𝒞), such that D ⊂ C_1(D) × C_2(D); * for each i, 0 ≤ i ≤ℓ, a homomorphism ϕ_D,i:_i(Δ_C_1(D)) →_i(Δ_C_2(D)). The homomorphisms ϕ_D,i satisfy the following properties: * for each 𝐚∈^p, and 0 ≤ i ≤ℓ, there is an isomorphism ψ_𝐚,i: _i(S_𝐟≼𝐚) →_i(Δ_C(𝐚)), where C(𝐚) ∈(𝒞) is the unique element of (𝒞) such that 𝐚∈ C(𝐚); * for each D ∈(𝒟), and (𝐚,𝐛) ∈ D ∩(^p), the following diagram commutes: _i(S_𝐟≼𝐚) [r]^ι_𝐚,𝐛,i[d]^ψ_𝐚,i _i(S_𝐟≼𝐛) [d]^ψ_𝐛,i _i(Δ_C_1(D) = C(𝐚)) [r]^ϕ_k,i _i(Δ_C_2(D) = C(𝐛)). The cardinalities and the degrees of the polynomials contained in them of 𝒞,𝒟, as well as the complexity of this algorithm are bounded by (s d)^( p n)^O(ℓ), where s = (𝒫) + (𝒬), and d = max((𝒫),(𝒬)). Also, the sizes of the simplicial complexes Δ_C, C ∈(𝒞), are bounded by (s d)^n^O(ℓ). We obtain as a corollary to Theorem <ref> the following algorithmic result that given the description of a closed and bounded semi-algebraic set S, and a continuous semi-algebraic map 𝐟:S →^p as input, as well as a finite tuple of points T ∈ (^p)^N, produces an explicit desciption of the poset modules 𝐏_S,𝐟,T,i (see Definition <ref>). Since we will also need to specify points in ^p which need not belong to , we introduce the following representations of such points (see <cit.> for more details). The following definitions are adapted from <cit.>. We begin with the representations of elements of (which are algebraic over ) as roots of polynomials in [X] with a given Thom encoding (cf. Definition <ref> below). For P ∈[X] we will denote by (P) = (P,P',…,P^((P))) the list of derivatives of P. We will call a pair τ = (P, σ) with σ∈{0,1,-1}^ (P), the Thom encoding of x ∈ [It is a consequence of the well-known Thom's lemma, that the Thom encoding uniquely characterizes a root in of a polynomial in [X] (see for example, <cit.>). ], if σ (P) = 0 and σ(P^(i)) = (P^(i)(x)) for 0 ≤ i ≤(P). A real univariate representation u in ^p is a pair (F,σ) where F = (f,g_0, g_1,…,g_p) ∈[T], with f(T),g_0(T) co-prime, and σ a Thom encoding of a real root of f(T) ∈[T]. We denote by (u) ∈^p the point (g_1 ((τ) )/g_0 ((τ) ) , … , g_p ((τ ))/g_0 ((τ))) ∈^p, where τ = (f,σ) (notice that τ is a Thom encoding), and call (u) the point associated to u. There exists an algorithm that takes as input: * finite subsets 𝒫⊂[X_1,…,X_n], 𝒬⊂[X_1,…,X_n,Y_1,…,Y_p]; * a 𝒫-closed formula Φ with (Φ) denoted by S; * a 𝒬-closed formula Ψ, with (Ψ) = graph(𝐟), where 𝐟:S →^p a 𝒬-closed continuous semi-algebraic map; * ℓ≥ 0; * a finite tuple 𝒯 of real univariate representations with the corresponding tuple of associated points T ⊂^p; and produces as output: For each i, 0 ≤ i ≤ℓ, * for each 𝐭∈ T, n_i,𝐭≥ 0; * for each pair 𝐩 = (𝐭_1,𝐭_2) ∈ T × T ∩(^p), a matrix M_i,𝐩∈ k^n_i,𝐭_2× n_i,𝐭_1, such that 𝐏_S,𝐟,T,i is isomorphic to the poset module 𝐏_i: (T,≼|_T) →_𝐤, defined by 𝐏_i(𝐭) = 𝐤^n_i,𝐭, 𝐏_i(𝐭_1 ≼𝐭_2) = L_M_i,𝐩, (where we denote for any M ∈𝐤^r × s, L_M ∈_𝐤(𝐤^s,𝐤^r) the linear map whose matrix with respect to the standard bases of 𝐤^r,𝐤^s is M). 
The complexity of this algorithm is bounded by N^2 · (s d)^(p n)^O(ℓ), where s = (𝒫) + (𝒬), and d is the maximum of max((𝒫),(𝒬)) and the degrees of the polynomials in the real univariate representations 𝒯. In particular, for fixed s,d,n,p,ℓ the complexity is bounded by O(N^2). § BACKGROUND, SIGNIFICANCE AND PRIOR WORK §.§ Algorithmic semi-algebraic geometry The algorithmic problem of computing topological invariants of a given semi-algebraic set described by a quantifier-free formula, whose atoms are of the form P > 0, P ∈[X_1,…,X_k], is very well studied. These include the problems of deciding emptiness, computing the number of semi-algebraically connected components as well as the higher Betti numbers, the Euler-Poincaré characteristic etc. <cit.>. More recently, in <cit.>, the authors give algorithms for computing a newer topological invariant – namely, the persistent homology barcodes of a filtration of a semi-algebraic set by the sub-level sets of a given continuous semi-algebraic function. It is a generalization of this latter result, together with some important quantitative consequences, that is the main topic of the current paper. §.§ Complexity – singly vs doubly exponential Complexity of algorithms for computing topological invariants will play a key role in our results. We briefly survey the state of the art in the area of algorithmic complexity of computing topological invariants of semi-algebraic sets. These algorithms fall into two classes. §.§.§ Doubly exponential First recall that closed and bounded semi-algebraic subsets S ⊂^n are semi-algebraically triangulable – and moreover, given a description of the semi-algebraic set by a quantifier-free formula, such a triangulation can be effectively computed with complexity (measured in terms of the number s of polynomials appearing in the description and their maximum degree d) which is doubly exponential in n – more precisely (sd)^2^O(n). Together with standard algorithms of linear algebra, this gives an algorithm for computing all the Betti numbers [Here and everywhere else in the paper all homology groups considered are with coefficients in a fixed field 𝐤.] of a given semi-algebraic set with doubly exponential complexity (we define “complexity of algorithms” more precisely later in Definition <ref>). Computing Betti numbers of semi-algebraic sets using the above method is not sensitive to the dimension of the homology – the doubly exponential complexity persists even if we are interested in computing only the small Betti numbers, even, for example, the zero-th Betti number, which is just the number of semi-algebraically connected components. §.§.§ Singly exponential Classical bounds coming from Morse theory <cit.> give singly exponential bounds on the Betti numbers of semi-algebraic sets. Additionally, it is known that the problem of computing the zero-th Betti number of S (i.e. the number of semi-algebraically connected components of S) (using “roadmap” algorithms <cit.>), as well as the problem of computing the Euler-Poincaré characteristic of S (using Morse theory <cit.>), both admit singly exponential complexity algorithms with complexities bounded by (O(sd))^n^O(1). This led to a search for algorithms with similar singly exponential complexity bounds for computing the higher Betti numbers as well. The current state of the art is that for each fixed ℓ≥ 0, there exist algorithms for computing the first ℓ Betti numbers of semi-algebraic sets with complexity bounded by (sd)^n^O(ℓ) <cit.>.
However, the best complexity bound for an algorithm for computing all the (possibly non-zero) Betti numbers of a semi-algebraic set in ^n is doubly exponential in n (i.e. with complexity (sd)^2^O(n)) <cit.>. More recently algorithms with singly exponential complexity has been obtained for several other problems in semi-algebraic geometry: for computing a semi-algebraic basis of the first homology group of a semi-algebraic set <cit.>, for computing the homology functor on diagrams of semi-algebraic sets for fixed dimensional homology groups <cit.>, and most relevant to this paper – computing the low dimensional barcodes of filtrations of semi-algebraic sets by semi-algebraic functions <cit.>. The importance of developing singly exponential complexity algorithms is not restricted just to computational considerations. Efforts towards developing such algorithms often lead to uncovering of interesting mathematical properties of semi-algebraic sets which are of independent interest (see Remark <ref>). Algorithms for computing the Betti numbers of semi-algebraic sets have also been developed in other (numerical) models of computations <cit.> with precise complexity estimates. However, in these results the complexity depends on the condition number of the input, and the algorithm will fail for ill-conditioned input when the condition number becomes infinite. Such algorithms are also quite distinct from the point of view of techniques from the kind of algorithms we consider in the current paper, which are supposed to work for all inputs (and all ordered rings of coefficients, including non-Archimedean ones) and with uniform complexity upper bounds (i.e. depending only on the number and degrees of the input polynomials, and independent of their coefficients). So these approaches are not comparable. There has not been any attempt to extend the numerical algorithms mentioned above for computing Betti numbers to computing persistent homology of semi-algebraic filtrations. Such an extension would necessarily involve defining a good notion of a “condition number of a semi-algebraic filtration” which has not been attempted to the best of our knowledge. Moreover, note that numerical algorithms typically do not produce quantitative mathematical results – for example, Theorems <ref> and <ref> in the current paper, as well as many other such results (see Remark <ref>), as byproducts. §.§ Semi-algebraic persistence: the case p=1 Given a closed and bounded semi-algebraic set S ⊂^n and a continuous semi-algebraic function f:S → and ℓ≥ 0, the functor 𝐏_S,f,ℓ is characterized up to equivalence by a finite set of intervals (a,b) and associated multiplicities μ_a,b > 0. The set of these intervals and their multiplicities is usually called the ℓ-dimensional “barcode” of the persistent module 𝐏_S,f,ℓ, and for each fixed ℓ, a singly exponential algorithm for computing the barcode of the i-dimensional persistent module for an arbitrary semi-algebraic filtration is given in <cit.>. We will not define barcodes for continuous semi-algebraic filtrations precisely here since they will not play an important role in the current paper (we refer the interested reader to <cit.> for an elementary exposition). The main result of <cit.> can reformulated to give a particular case of Theorem <ref>. 
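For readers who have not encountered barcodes, the following minimal sketch shows the ℓ = 0 case for p = 1 in the simplest discrete setting: sublevel-set persistence of a piecewise-linear function on a path, computed by the standard union-find sweep. It is the usual discrete computation, offered only to fix the notion of intervals with multiplicities; it is not the semi-algebraic algorithm discussed above, and the sample values are an arbitrary choice.

```python
# A minimal sketch of what a 0-dimensional barcode records, for p = 1:
# sublevel-set persistence of a piecewise-linear function on a path graph,
# computed by the standard union-find sweep ("elder rule").
import math

def h0_barcode(values):
    """List of (birth, death) pairs; death = math.inf for the class that never dies."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, bars = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, values[i]          # a new component is born
        for j in (i - 1, i + 1):                    # neighbours already swept in?
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the younger component (larger birth value) dies here
                    young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                    if birth[young] < values[i]:    # skip zero-length bars
                        bars.append((birth[young], values[i]))
                    parent[young] = old
    bars.append((min(values), math.inf))            # the surviving component
    return sorted(bars)

vals = [0.0, 2.0, 1.0, 3.0, 0.5, 2.5]
print(h0_barcode(vals))   # [(0.0, inf), (0.5, 3.0), (1.0, 2.0)]
```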
Note that in the special case with p=1, the partition underlying the corresponding semi-algebraically constructible function can be taken to be of a particularly simple kind – namely, a partition of () into intersection of rectangles in × with (). This result was not stated in terms of constructible functions in <cit.>. §.§ Significance of Theorems <ref> and <ref> and Corollary <ref> Semi-algebraic filtrations appear in many natural applications (see for example, the applications discussed in <cit.> and <cit.> and more recently in <cit.>). It is important in these applications to have explicit algorithms for obtaining a complete description of the poset module structure and the reduction to the finite sub-modules. The definition of complexity of the poset module associated to a semi-algebraic multi-filtration is an important measure of the inherent complexity of this module, and its algorithmic significance is explained in the next paragraph. The singly exponential upper bound on this complexity proved in Theorem <ref> is reminiscent of singly exponential upper bounds on the Betti numbers of semi-algebraic sets <cit.>. Similarly, the singly exponential upper bound on the algorithmic complexity of describing the poset module structure in Theorem <ref>, extends to the case p >1, the similar result in the case p=1 proved in <cit.> (as well as singly exponential algorithms for computing Betti numbers of emi-algebraic sets in the non-persistent setting <cit.>). It should be noted that the extension to the case p > 1 is not an obvious one – since the partial order ≼ on ^p is not a total order when p > 1, and as a result the resulting constructible function on (^p), associated to a filtration, is forced to have a more complicated structure. On the other hand, dealing with multi-filtrations is quite important in practice <cit.>. Given a semi-algebraic multi-filtration (which is a poset module on the infinite poset ^p), it becomes an important problem to describe the structure of the induced sub-poset-module structure on finite subposets of ^p. Corollary <ref> implies that this structure can be computed with complexity only quadratic in the size of the finite poset (assuming that the complexity of the given filtration to be constant). It is to be noted that after one application of the algorithm in Theorem <ref>, for any given tuple of points T ∈ (^p)^N, the computation of the induced poset module 𝐏_S,𝐓,T,ℓ involves computations only in ^p rather in the original ambient space ^n. This is important since in practice p is usually much smaller than n. §.§ Significance of Theorems <ref> and <ref> §.§.§ Finite vs infinite Observe that in the case of (labelled directed) graphs, the number of distinct graphs with vertex set {1,…,N} (with possible self loops but no multiple edges) is clearly finite and equal to 2^N^2. For finite persistence modules, i.e. functors 𝐏 from a finite poset to _𝐤, the situation is a bit nuanced. Suppose that we fix a finite poset P, and restrict the functor 𝐏:P →_𝐤 to only those for which 𝐏(a) ≤ M, a ∈ P, for some fixed M ≥ 0. Now if P is linearly ordered, then the isomorphism class of 𝐏:P →_𝐤 is determined by the ranks of the various 𝐏(a≼ b), and since these ranks can take values only between 0 and M, the number of isomorphism classes of functors 𝐏: P →_𝐤 satisfying (<ref>) is clearly finite. However, this finiteness statement is not true for more general poset modules. Consider the following example. Let 𝐤 be an infinite field. 
Consider the poset P with Hasse diagram: v w [urr] x[ur] y [ul] z [ull] Let a,b ∈𝐤 not both zero, and consider the poset module 𝐏_a,b:P →_𝐤 with 𝐏_a,b(v) = 𝐤^2, 𝐏_a,b(w) = 𝐏_a,b(x) = 𝐏_a,b(y) = 𝐏_a,b(z) = 𝐤, 𝐏_a,b( w → v) = [1,0]^t, 𝐏_a,b(x → v) = [0,1]^t, 𝐏_a,b(y → v) = [1,1]^t, 𝐏_a,b(z → v) = [a,b]^t (where the right hand side denotes the matrices of the corresponding elements of _𝐤(𝐤,𝐤^2) with respect to the standard bases). If [a:b] ≠ [c:d] as points of ℙ^1_𝐤, then the poset modules 𝐏_a,b, 𝐏_c,d are not isomorphic. Thus, there are infinitely many distinct isomorphic classes of poset modules 𝐏:P →_𝐤, satisfying 𝐏(v) = 𝐤^2, 𝐏(w) = 𝐏(x) = 𝐏(y) = 𝐏(z) = 𝐤. Let F:𝐏_a,b→𝐏_c,d be an equivalence, and let τ = F(v) ∈(𝐤^2). Then τ must carry the lines spanned by (1,0)^t, (0,1)^t and (1,1)^t to themselves, and so must be a multiple of the identity. So the line spanned by (a,b)^t gets mapped to the line spanned by (c,d)^t by τ if and only if they are equal i.e. [a:b] = [c:d]. It follows from Example <ref> that the number of strong (or even weak) equivalence classes of poset modules on a fixed poset can be infinitely large even if we restrict the dimensions of the vector spaces to ≤ 2. Thus, even the finiteness of the bound in Theorems <ref> and <ref> on the speed of semi-algebraic multi-persistence is not obvious except in the case p=1. Finally, unlike in the case where p=1, where we have a classification of all semi-algebraic persistence modules using the fundamental theorem of finitely generated 𝐤[X]-modules (see for instance, <cit.>), the situation in the case p>1 is much more complicated, since there is no nice classification of irreducible modules in these cases. There is a large body of work aimed towards obtaining computable invariants in the case of multi-parameter persistent modules over finite posets <cit.>. In this context it is important to know that finite poset modules arising from semi-algebraic multi-filtrations are a very special subclass amongst all poset modules on the same poset. In analogy with graph theory (<cit.>, it might be interesting to prove special properties of this sub-class. We do not pursue this problem in this paper. We note here that often in algorithmic semi-algebraic geometry, analysing the complexity of a new algorithm, produces quantitative mathematical results which are of independent interest For example, this approach has been used to prove a quantitative curve selection lemma <cit.>, bounds on the radius of a ball guaranteed to intersect every connected component of a given semi-algebraic set <cit.>, and more recently effective bounds on the Łojasiewicz's exponent <cit.> amongst other such results. In the same vein, Theorems <ref> and  <ref> of this paper can be viewed as consequences of the main algorithmic result – namely, Theorem <ref> and its corollary, Corollary <ref>. §.§ Prior work Persistent homology is a central object in the emerging field of topological data analysis <cit.>, but has also found applications in diverse areas of mathematics and computations as well (see for example <cit.>). One can associate persistent homology groups to any filtration of topological spaces, and they generalize ordinary homology groups of a space X – which correspond to the trivial (i.e. constant) filtration on X. Much of the earlier work on the topic mentioned above are in the case of one parameter persistence. 
More recently, study of multi-parameter persistence modules and their decomposition into irreducible sub-modules of special types have become a very important object of research and is a very active area of research. The study of semi-algebraic persistent homology using as tools traditional algorithms in real algebraic geometry is more recent and originates in <cit.>, where the case of 1-parameter persistence in the semi-algebraic setting was investigated from an algorithmic point of view. A more abstract approach, valid for even sub-analytic sets and filtrations is taken in <cit.>, where the theory of constructible sheaves play a crucial role. The results in these papers are very fundamental but are not quantitative/algorithmic as in the main theorems of the current paper. § PROOFS OF THEOREMS <REF> AND <REF> §.§ Algorithmic Preliminaries §.§.§ Algorithm for enumerating (𝒫) One basic algorithm with singly exponential complexity is an algorithm that given a finite set 𝒫⊂[X_1,…,X_n] as input computes a finite set of points guaranteed to intersect every C ∈(𝒫) (see for example <cit.>). This algorithm in conjunction with an algorithm (see for example <cit.>) for computing roadmaps of semi-algebraic sets, gives an algorithm for computing exactly one point in every C ∈(𝒫) – and thus also enumerate the elements of (𝒫). Taking into account the complexity of <cit.>) and <cit.>), the complexity of the resulting algorithm is bounded by (sd)^n^O(1), where s = (𝒫) and d = (𝒫). We are going to use this algorithm for enumerating the elements of (𝒫) and computing exactly one point in each C ∈(𝒫) implicitly without mentioning in the description of the more complicated algorithms that we describe later. §.§.§ Parametrized versions of algorithms in real algebraic geometry Many algorithms in real algebraic geometry (see for example <cit.>) have the following form. They take as input a finite set 𝒫 of polynomials with coefficients in an ordered domain contained in a real closed field , and also a 𝒫-formula, and produces as output discrete objects (for example, a list of sign conditions on a set of polynomials, or a simplicial complex etc. or some topological invariant like the Betti numbers of the 𝒫-semi-algebraic set specified in the input etc.), as well as some algebraically defined objects, such as set of points represented by real univariate representations, or a set of semi-algebraic curves represented by parametrized real univariate representations, or even more generally semi-algebraic maps whose graphs are described by formulas. We will refer to these parts as discrete and the algebraic parts of the output. Each step of the algorithm consists of performing an arithmetic operation in D or testing the sign of an element of . We will need to use parametrized versions of three different algorithms. By parametrized version of a given algorithm 𝐀 with parameters Y = (Y_1,…,Y_p) we mean an algorithm that takes as input a finite set 𝒫 of polynomials with coefficients in [Y_1,…,Y_p] (instead of ). The output of the algorithm consists of two parts. 
* A finite set ℱ⊂[Y_1,…,Y_p], having the property that for each C ∈(ℱ), the discrete part of the output of the Algorithm 𝐀, with input 𝒫(,·), is the same for all ∈ C; * For each C ∈(ℱ): * the discrete part of the output of Algorithm 𝐀 with input 𝒫(,·) some (or all) ∈ C (so this is constant as varies over C); * the (varying) algebraic part of the output of Algorithm 𝐀 parametrized by Y, given by formula(s), ψ_C(Y) so that for all ∈ C, the output of Algorithm 𝐀 with input 𝒫(,·) is obtained by specializing Y to i.e. is equal to ψ_C(). [Basic example 1: _X] Let 𝒫⊂[Y_1,…,Y_p,X]_≤ d and ϕ(Y_1,…,Y_p,X) be a 𝒫-closed formula so that (ϕ(,X) is bounded for all ∈^p. We consider the problem of parametrized triangulation of (ϕ(,X)) ⊂ as varies over Y. Let ℱ = _X(𝒫) ⊂[Y_1,…,Y_p] where _X(𝒫) is defined in <cit.>. The for each C ∈(ℱ), there exists continuous semi-algebraic functions ξ_1,…,ξ_M_C: C →, with ξ_1 < … < ξ_M_C, and such that for each ∈ C, ξ_1()< ⋯ < ξ_M_C() are the ordered set of real roots of all the polynomials in 𝒫(,X) which are not identically 0 on C <cit.>. Thus, one obtains a triangulation (i.e. closed intervals and their end-points) of (ϕ(,X)) which has the same combinatorial structure for all ∈ C. It follows from the definition of ℱ that (ℱ) ≤ s d^O(1), and (ℱ) ≤ d^O(1). Finally, the complexity of the parametrized algorithm for triangulating a semi-algebraic subset of , is bounded by (s d)^O(p). §.§.§ Parametrized algorithm for computing semi-algebraic triangulation The technique described in Example <ref> can be extended iteratively (see <cit.>) to obtain an algorithm for triangulating closed and bounded semi-algebraic sets in higher dimensions. More precisely, we will use a parametrized version of the following algorithm. The parametrized version of Algorithm <ref> will output a finite subset ℱ⊂[Y_1,…,Y_p], with (ℱ) ≤ (sd)^2^O(n), (ℱ) ≤ d^2^O(n), as well as a finite set of polynomials 𝒬⊂[Y_1,…,Y_p, X_1,…,X_n], and for each C ∈(ℱ), * a finite simplicial complex K_C, and subcomplexes K_C,1,…,K_C,N⊂ K_C; * a 𝒬-formula, ψ_C, such that (ψ_C(,X) is the graph of a semi-algebraic homeomorphism h_C:|K_C| →⋃_i=1^N (ϕ_i(,X)), which restricts to a semi-algebraic homeomorphisms h_C,i = h|_|K_C|: |K_C,i| →(ϕ_i(,X)). The complexity of this parametrized algorithm is bounded by (sd)^2^O(n) p^O(1). The parametrized version (with parameters Y = (Y_1,…,Y_p)) is obtained by following the same algorithm as the unparametrized case with n+p variables, applying the _X_i operator successively for i=n,n-1,…,1 and taking ℱ⊂[Y_1,…,Y_p] the resulting family of polynomials. In order to obtain a triangulation one needs to make also a linear change in the X-coordinates which we are overlooking here (<cit.> for details). The complexity of the algorithm is obtained by tracking the degree and the cardinality of the sets of polynomials obtained after applying the _X_i operators, and these square at each step leading to a doubly exponential (in n) complexity. [Basic Example 2: _X] The method of applying the _X operator explained in Example <ref> one variable at a time leads to doubly exponential complexity. There is a more efficient algorithm which is the basis of all singly exponential algorithm, which is based on the “critical point method” that is used to eliminate a block of variables at one time. Given 𝒫⊂[Y_1,…,Y_p,X_1,…,X_n]_≤ d, there exists a finite set ℱ := _X(𝒫) ⊂[Y_1,…,Y_p] (see <cit.> for a precise definition) such that for each C ∈(ℱ), the set (𝒫(,X)) stay invariant as varies of C <cit.>. 
Moreover, (ℱ) ≤ s^n+1 d^O(n), (ℱ) ≤ d^O(n). While the property ensured by _X is weaker than that in the case of repeated application of _X, it is still the basis of (the parametrized versions) of several singly exponential complexity algorithms that we will use later (Algorithms <ref> and <ref>). §.§.§ Complexity of the parametrized version of an algorithm The algorithms that we will use in parametrized form all share a common property that allows one to bound the complexity of the parametrized version directly from the complexity of the unparametrized version, given the number of parameters, and their degrees in the input formula. Suppose the complexity of Algorithm 𝐀 is bounded by F(s,d,n), where s is the cardinality of the input set of polynomials, d the maximum degree of these polynomials, and n the number of variables. Moreover suppose that the degrees of all intermediate polynomials computed by the algorithm is bounded by D(s,d,n). In the algorithms we consider F(s,d,n) is also a bound on all possible polynomials in the coefficients in the input that can appear in the execution of the algorithm. Moreover, the degrees of these polynomials in the input coefficients is bounded by D(s,d,n). A naive way of obtaining a parametrized version of Algorithm 𝐀 is to first compute a set ℱ of polynomials in the parameter consisting of all the polynomials that can appear in the execution of the algorithm. Then the cardinality of ℱ is bounded by F(s,d,n) and the degrees of the polynomials in ℱ is bounded by d' D(s,d,n), where d' is a bound on the degrees in the parameters of the polynomials in the input. In the following we will assume that d' = d, D(s,d,n) ≥ d. We now compute (ℱ) using Algorithm 13.1 in <cit.> (Computing Realizable Sign Conditions) and then for each σ∈(ℱ), we can follow the steps of Algorithm 𝐀 making all sign decisions compatible with the sign condition σ. Note that the cardinality of (ℱ) is bounded by (F(s,d,n) D(s,d,n))^O(p) which is also a bound on the complexity of computing it. Thus, the complexity of the parametrized algorithm is also boundde by (F(s,d,n) · D(s,d,n))^O(p). §.§ Outline of the proof In order to explain the idea behind the proof of Theorem <ref> better, we first describe an algorithm that satisfies the requirements of Theorem <ref>, except that its complexity is bounded by (sd)^2^O(n)p^O(1) which is doubly exponential – rather than (sd)^( p n)^O(ℓ) as in Theorem <ref>. The doubly exponential complexity arises from using a semi-algebraic triangulation algorithm which has inherently doubly exponential complexity (but is easier to understand). Later we will replace the triangulation by a weaker construction – namely, parametrized algorithm for simplicial replacement ( Algorithm <ref>) which is sufficient for our purposes and has the right complexity, but more difficult to visualize. We also need the following basic facts from algebraic topology. Given a semi-algebraic triangulation h: |Δ| → S, where S is a closed and bounded semi-algebraic set, we denote by (h) the covering of S by the images of the closed simplices of |Δ|. Given a cover C of a closed and bounded semi-algebraic set S, we denote by (C), the nerve complex of C. We say that C is a good cover if each element of C is semi-algebraically contractible. In particular, if h: |Δ| → S is a semi-algebraic triangulation then, (h) is a good cover. The following proposition is classical (see for example <cit.>). If C is a good cover of S, then there is a canonical isomorphism ψ_C: _*((C)) →_*(S). 
One immediate corollary is the following. There is a canonical isomorphism ψ_(h): _*(((h))) →_*(S). Follows from Definitions <ref>and <ref>, and Proposition  <ref>. If C' ⊂ C are two good covers of a closed and bounded semi-algebraic set S, then (C') ⊂(C), and the induced map _*((C')) →_*((C)) is an isomorphism. We have the following commutative diagram _*((C')) [rr][rd]^ψ_C' _*((C)) [ld]^ψ_C _*(S) where the ψ_C',ψ_C are isomorphisms using Proposition <ref>. This implies that the horizontal arrow is an isomorphism as well. Given two semi-algebraic triangulations h_1: |Δ_1| → S, h_2: |Δ_2| → S, we say that h_2 is a refinement of h_1, if the image by h_2 of every closed simplex of |Δ_2| is contained in the image by h_1 of some some closed simplex of Δ_1. Suppose h_1,h_2 are two triangulations of a closed and bounded semi-algebraic set S such that h_2 is a refinement of h_1. Then, (h_1) ∪(h_2) is s good cover of S, and the following diagram, where the vertical arrow is induced by inclusion is commutative and all arrows are isomorphisms: ((h_1) ∪(h_2)) [rrrr]^ Ψ_(h_1) ∪(h_2) _*(S) ((h_1)) [u] [rrrru]_Ψ_(h_1) Follows from Propositions <ref> and <ref>. §.§.§ Algorithm for semi-algebraic multi-persistence using effective semi-algebraic triangulation We first outline the main idea. We first obtain a semi-algebraic partition of ^p, and over each element (say C) of the partition, a simplicial complex K_C, such that for all ∈ C, S_ is semi-algebraically homeomorphic to |K_C| (we denote by |·| the geometric realization of a simplicial complex as a polyhedron in some ^N). This first step uses parametrized version of Algorithm <ref>(Semi-algebraic triangulation) and is implemented in Line <ref>. We work with the (ℓ+1)-skeleton of the nerve complex of the induced closed covering – and we denote this simplicial complex by Δ_C (Line <ref>). There is thus (using Proposition <ref>) a canonical isomorphism ψ_C,,i:_i(S_) →_i(Δ_C) for all ∈ C and 0 ≤ i ≤ℓ. For (,') ∈ C × C' ∩(^p), there is does not exist any canonically defined homomorphism ϕ_C,C',i: _i(Δ_C) →_i(Δ_C') such that the following diagram commutes for every (,') ∈ C × C' ∩(^p): _i(S_) [rr]^ι_,',i[d]^ψ_,i _i(S_') [d]^ψ_',i _i(Δ_C) [rr]^ϕ_C,C',i _i(Δ_C') In order to obtain homomorphisms between _*(C) and _*(Δ_C') making the above diagram to commute, we further partition semi-algebraically C × C' ∩(^p). Our tool is again the parametrized version of Algorithm <ref>. Using it we obtain parametrically with (,') ∈ C × C' ∩(^p) as parameter, refinement of the triangulations K_C,K_C'. This leads to a further partition of C × C' ∩(^p), so that in each part D of this partition, and (,') ∈ D, we have a triangulation of S_' which is a refinement of the triangulation of C' obtained in the previous step, and also its restriction to C is a refinement of the triangulation obtained in the previous step. Denoting the (ℓ+1)-skeleton of the nerve complex of the cover induced by this new triangulation Δ_D, we have that both Δ_C and Δ_C' are in a natural way sub-complexes of Δ_D, and there are canonical isomorphisms (using Proposition <ref>), _i(S_) →(Δ_D^0), _i(S_') →(Δ_D) (where Δ_D^0 ⊂Δ_D is the subcomplex corresponding to the inclusion S_⊂ S_'). This step is implemented in Line <ref> of the algorithm. 
Notice, that now we also have canonically defined isomorphisms, _i(Δ_C) →_i(Δ_D^0), _i(Δ_C') →_i(Δ_D), induced by the inclusion of a subcomplex in another, and also a canonically defined homomorphism, _i(Δ_D^0) →_i(Δ_D) (also induced by inclusion) that depends on D but is independent of the choice of (,') ∈ D. In short we have, for all (,') ∈ D, a commutative diagram where all solid arrows are canonically defined: _i(S_) [rr]^ι_,',i[d]^≅@/_2pc/[dd]_ψ_,i _i(S_')[d]_≅@/^2pc/[dd]^ψ_',i _i(Δ_D^0) [rr][d]^≅ _i(Δ_D) [d]_≅ _i(Δ_C) @.>[rr]^ϕ_C,C',D,i _i(Δ_C') The dotted arrow is then uniquely defined as the one which makes the whole diagram commute – and we define it to be the homomorphism ϕ_C,C',D,i (Line <ref>). Notice that for all ≼' ≼”, with ∈ C, ' ∈ C', ”∈ C”, and (,') ∈ D, (',”) ∈ D', (',”) ∈ D” we have the commutative diagram: _i(S_) [rr]^ι_,',i[d]^ψ_@/^2pc/[rrrr]^ι_,”,i _i(S_') [rr]^ι_',”,i[d]^ψ_' _i(S_”) [d]^ψ_” _i(Δ_C) [rr]^ϕ_C,C',D,i@/_2pc/[rrrr]^ϕ_C,C”,D”,i _i(Δ_C') [rr]^ϕ_C',C”,D',i _i(Δ_C”) These are the key steps. The rest of the steps of the algorithm consists of using standard algorithms from linear algebra to compute bases of the various _i(Δ_C), _i(Δ_C') _i(Δ_D^0), _i(Δ_D), and the matrices with respect to these bases of the maps between these spaces induced by refinements and inclusions (the solid arrows in the bottom square of the commutative diagram  (<ref>)), and finally the matrices of the maps ϕ_C,C',D,i (shown by the dotted arrow in (<ref>)). The complexity of the above algorithm is dominated by the complexity of the calls to the parametrized version of the semi-algebraic triangulation algorithm. Each such call costs (s d)^2^O(n)p^O(1) – and which is also asymptotically an upper bound on the total complexity. Notice that the upper bound on the complexity is doubly exponential in n and independent of ℓ. We now describe more formally the algorithm outlined above. The sets 𝒞,𝒟, and the homomorphisms ψ_C,,i, ϕ_D,i computed by algorithm described above satisfy the conditions (<ref>) and (<ref>) of Theorem <ref>. The complexity of the algorithm is bounded by (sd)^2^O(n)p^O(1), where s = (𝒫) + (𝒬), and d = max((𝒫),(𝒬)). The claims on the conditions (<ref>) and (<ref>) follow from the commutativity of the diagram  (<ref>), which in turn follows from the definitions of the homomorphisms ψ_,i and ϕ_C,C',D,i and the correctness of the parametrized version of Algorithm <ref>. The complexity bound follows from the complexity of Algorithm <ref> given in (<ref>). §.§ Proofs of Theorems <ref>, <ref> and Corollary <ref> As noted above the bound on the complexity of Algorithm <ref> is doubly exponential in n (and independent of ℓ). We will now describe a more refined version of Algorithm <ref> whose complexity for any fixed ℓ is only singly exponential in n. The main new ingredient is to use a different algorithm – namely, a parametrized version of the simplicial replacement algorithm which is described in <cit.> instead of the parametrized version of the semi-algebraic triangulation algorithm. The main advantage of using the simplicial replacement algorithm is that its complexity for any fixed ℓ is only singly exponential in n (unlike the semi-algebraic triangulation algorithm). In order to apply the simplicial replacement algorithm we first rely on another algorithm – namely an algorithm for computing a cover by contractible semi-algebraic sets described in <cit.> (Algorithm 1). 
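Both the triangulation-based outline above and the simplicial replacement approach rest on the same principle: replace the space by the nerve of a cover by contractible pieces and read low-dimensional homology from the nerve (the good cover property discussed above). The following sketch is our own toy illustration of that principle, not an implementation of either algorithm: it covers the unit circle by three closed arcs, builds the nerve, and recovers the Betti numbers b_0 = b_1 = 1 from boundary-matrix ranks. The arcs, the sampling of the circle, and the helper boundary_rank are chosen only for the example.

```python
# Toy illustration of the nerve principle (ours; not the algorithms of this paper):
# cover the unit circle by three closed arcs -- each contractible -- build the
# nerve of the cover, and read off b_0 = b_1 = 1 from boundary-matrix ranks.
import itertools
import numpy as np

# Arcs given as angle intervals (degrees); consecutive arcs overlap, no triple overlap.
arcs = [(0, 140), (120, 260), (240, 380)]
angles = np.arange(0, 360, 0.25)  # sample points on the circle

def in_arc(theta, arc):
    lo, hi = arc
    return (lo <= theta <= hi) or (lo <= theta + 360 <= hi)

members = [np.array([in_arc(t, a) for t in angles]) for a in arcs]

# Nerve: a subset J of arcs spans a simplex iff the arcs in J have a common point.
simplices = {k: [] for k in range(3)}
for k in range(3):
    for J in itertools.combinations(range(3), k + 1):
        if np.logical_and.reduce([members[j] for j in J]).any():
            simplices[k].append(J)

def boundary_rank(k):
    rows, cols = simplices[k - 1], simplices[k]
    if not rows or not cols:
        return 0
    D = np.zeros((len(rows), len(cols)))
    for c, s in enumerate(cols):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]
            D[rows.index(face), c] = (-1) ** i
    return np.linalg.matrix_rank(D)

b0 = len(simplices[0]) - boundary_rank(1)
b1 = len(simplices[1]) - boundary_rank(1) - boundary_rank(2)
print(b0, b1)   # expected: 1 1
```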
§.§.§ Parametrized algorithm for computing cover by contractible sets To a first approximation this algorithm takes as input the description of a closed and bounded semi-algebraic set and produces as output descriptions of closed semi-algebraic sets, whose union is the given set and each of whom is semi-algebraically contractible. However, for technical reasons (explained in detail in <cit.>) the algorithm only succeeds in producing a cover by semi-algebraically contractible sets of an infinitesimally larger set than the given one – which nonetheless has the same homotopy type. In order to describe more precisely the output of this algorithm because of this unfortunate complication we need a technical detour. Real closed extensions and Puiseux series. We will need some properties of Puiseux series with coefficients in a real closed field. We refer the reader to <cit.> for further details. For a real closed field we denote by ⟨⟩ the real closed field of algebraic Puiseux series in with coefficients in . We use the notation ⟨_1, …, _m⟩ to denote the real closed field ⟨_1⟩⟨_2⟩⋯⟨_m⟩. Note that in the unique ordering of the field ⟨_1, …, _m⟩, 0< _m≪_m-1≪⋯≪_1≪ 1. For elements x ∈⟨⟩ which are bounded over we denote by lim_ x to be the image in under the usual map that sets to 0 in the Puiseux series x. If ' is a real closed extension of a real closed field , and S ⊂^k is a semi-algebraic set defined by a first-order formula with coefficients in , then we will denote by (S, ') ⊂'^k the semi-algebraic subset of '^k defined by the same formula. It is well known that (S, ') does not depend on the choice of the formula defining S <cit.>. Suppose is a real closed field, and let X ⊂^k be a closed and bounded semi-algebraic subset, and X^+ ⊂^k be a semi-algebraic subset bounded over . Let for t ∈, t >0, X^+_t⊂^k denote the semi-algebraic subset obtained by replacing in the formula defining X^+ by t, and it is clear that for 0 < t ≪ 1, X^+_t does not depend on the formula chosen. We say that X^+ is monotonically decreasing to X, and denote X^+ ↘ X if the following conditions are satisfied. * for all 0 < t < t' ≪ 1, X^+_t⊂X^+_t'; * ⋂_t > 0X^+_t = X; or equivalently lim_ X^+ = X. More generally, if X ⊂^k be a closed and bounded semi-algebraic subset, and X^+ ⊂_1,…,_m^k a semi-algebraic subset bounded over , we will say X^+ ↘ X if and only if X^+_m+1 = X^+ ↘ X^+_m, X^+_m ↘ X^+_m-1, …, X^+_2↘ X^+_1 = X, where for i=1,…, m, X^+_i = lim__i X^+_i+1. The following lemma will be useful later. Let X ⊂^k be a closed and bounded semi-algebraic subset, and X^+ ⊂_1,…,_m^k a semi-algebraic subset bounded over , such that X^+ ↘ X. Then, (X,_1,…,_m) is semi-algebraic deformation retract of X^+. See proof of Lemma 16.17 in <cit.>. The unparametrized version of the algorithm for computing a cover by contractible sets, takes as input a closed formula ϕ, such that (ϕ) is bounded, and produces as output a tuple Φ = (ϕ_1,…,ϕ_M) of closed formulas with coefficients in [], such that (ϕ_j) ⊂^n, j ∈ [1,M] are semi-algebraically contractible, and (ϕ) = ↘⋃_j (ϕ_j). We list the input and output and complexity of the parametrized version. The complexity statement follows from an analysis of the complexity of the unparametrized version <cit.>, and also a careful analysis of <cit.> that it relies on, which in turn relies on the _X operator whose properties were discussed earlier (Example <ref>). 
§.§.§ Parametrized simplicial replacement algorithm In order to describe the input and output of the simplicial replacement algorithm we need a few preliminary definitions. In the following we will restrict ourselves to the category of closed and bounded semi-algebraic sets and semi-algebraic continuous maps between them. We say that a semi-algebraic continuous map f:X → Y between two closed and bounded semi-algebraic sets is a semi-algebraic homological ℓ-equivalence, if the induced homomorphisms between the homology groups f_*:_i(X) →_i(Y) are isomorphisms for 0 ≤ i ≤ℓ. Note that our definition of semi-algebraic homological ℓ-equivalence deviates a little from the standard one which requires that homomorphisms between the homology groups f_*: _i(X) →_i(Y) are isomorphisms for 0 ≤ i ≤ℓ-1, and only an epimorphism for i=ℓ. An ℓ-equivalence in our sense is an ℓ-equivalence in the traditional sense. The relation of semi-algebraic homological ℓ-equivalence as defined above is not an equivalence relation since it is not symmetric. In order to make it symmetric one needs to “formally invert” semi-algebraic homological ℓ-equivalences. We will say that X is semi-algebraically homologically ℓ-equivalent to Y (denoted X ∼_ℓ Y), if and only if there exists closed and bounded semi-algebraic sets, X=X_0,X_1,…,X_n=Y and semi-algebraic homological ℓ-equivalences f_1,…,f_n as shown below: X_1 [ld]_f_1[rd]^f_2 X_3[ld]_f_3[rd]^f_4 ⋯ ⋯ X_n-1[ld]_f_n-1[rd]^f_n X_0 X_2 ⋯ ⋯ X_n . It is clear that ∼_ℓ is an equivalence relation. We now extend Definition <ref> to semi-algebraic continuous maps between closed and bounded semi-algebraic sets. Let f_1:X_1 → Y_1, f_2:X_2 → Y_2 be continuous semi-algebraic maps between closed and bounded semi-allgebraic sets. A semi-algebraic homological ℓ-equivalence from f_1 to f_2 is then a pair ϕ = (ϕ^(1),ϕ^(2)) where ϕ^(1):X_1 → X_2, ϕ^(2):Y_1 → Y_2 are semi-algebraic homological ℓ-equivalences, and such that f_2 ∘ϕ_1 = ϕ_2 ∘ f_1. We will say that a semi-algebraic map f is semi-algebraically homologically ℓ-equivalent to a semi-algebraic map g (denoted as before by f ∼_ℓ g), if and only if there exists semi-algebraic continuous maps f=f_0,f_1,…,f_n=g between closed and bounded semi-algebraic sets, and semi-algebraic homological ℓ-equivalences ϕ_1,…,ϕ_n as shown below: f_1 [ld]_ϕ_1[rd]^ϕ_2 f_3[ld]_ϕ_3[rd]^ϕ_4 ⋯ ⋯ f_n-1[ld]_ϕ_n-1[rd]^ϕ_n f_0 f_2 ⋯ ⋯ f_n . It is clear that ∼_ℓ is an equivalence relation. A diagram of closed and bounded semi-algebraic sets is a functor, X:J →, from a small category J to the category of closed and bounded semi-algebraic sets and continuous semi-algebraic maps between them. We extend Definition <ref> to diagrams of closed and bounded semi-algebraic sets. We denote by _ the category of closed and bounded semi-algebraic subsets of ^n, n >0 and continuous semi-algebraic maps between them. Let J be a small category, and X,Y: J →_ be two functors. We say a natural transformation f:X → Y is an semi-algebraic homological ℓ-equivalence, if the induced maps, f(j)_*: _i(X(j)) →_i(Y(j)) are isomorphisms for all j ∈ J and 0 ≤ i ≤ℓ. We will say that a diagram X:J →_ is ℓ-equivalent to the diagram Y:J →_ (denoted as before by X ∼_ℓ Y), if and only if there exists diagrams X=X_0,X_1,…,X_n=Y:J →_ and semi-algebraic homological ℓ-equivalences f_1,…,f_n as shown below: X_1 [ld]_f_1[rd]^f_2 X_3[ld]_f_3[rd]^f_4 ⋯ ⋯ X_n-1[ld]_f_n-1[rd]^f_n X_0 X_2 ⋯ ⋯ X_n . It is clear that ∼_ℓ is an equivalence relation. One particular diagram will be important in what follows. 
[Diagram of various unions of a finite number of subspaces] Let J be a finite set, A a closed and bounded semi-algebraic set, and 𝒜 = (A_j)_j ∈ J a tuple of closed and bounded semi-algebraic subsets of A indexed by J. For any subset J' ⊂ J, we denote 𝒜^J' = ⋃_j' ∈ J' A_j', 𝒜_J' = ⋂_j' ∈ J' A_j', We consider 2^J as a category whose objects are elements of 2^J, and whose only morphisms are given by: 2^J(J',J”) = ∅ J' ⊄J”, 2^J(J',J”) = {ι_J',J”} J' ⊂ J”. We denote by ^J(𝒜):2^J →_ the functor (or the diagram) defined by ^J(𝒜)(J') = 𝒜^J', J' ∈ 2^J, and ^J(𝒜)(ι_J',J”) is the inclusion map 𝒜^J'↪𝒜^J”. §.§.§ Parametrized algorithm for simplicial replacement The original (i.e. unparametrized) algorithm takes as input a tuple of closed formulas, Φ = (ϕ_1,…,ϕ_M), such that the realizations, (ϕ_j) ⊂^n, j ∈ J = [0,M] are all semi-algebraically contractible, and m ≥ 0. It produces as output a simplicial complex Δ = Δ^J, having a subcomplex Δ^J' for each J' ⊂ J, with Δ^J'⊂Δ^J” whenever J' ⊂ J”⊂ J, and such that the diagram of inclusions (|Δ^J'| ↪ |Δ^J”|)_J' ⊂ J”⊂ J is homologically m-equivalent to the diagram of inclusions ((Φ^J') ↪(Φ^J”))_J' ⊂ J”⊂ J where for J ' ⊂ J, Φ^J' = ⋁_j ∈ J'ϕ_j. We will need a parametrized version of the above algorithm which we will call the parametrized algorithm for simplicial replacement. We describe the input, output and the complexity of this algorithm below. The complexity of the parametrized version follows from analysing the complexity analysis of the unparametrized version in <cit.>, which has a recursive structure of depth O(ℓ). At each level of the recursion, there are calls to (the unparametrized version of) Algorithm <ref>, on certain intersections of sets computed in the previous steps of the recursion. The complexity bound of the parametrized version now follows using the complexity bound, (sd)^( p n)^O(1), of Algorithm <ref> (parametrized algorithm for computing cover by contractible sets), noting that the depth of the recursion in Algorithm <ref> is O(ℓ), instead of using that of the the unparametrized version Algorithm <ref> as is done in <cit.>. We omit the details since they are quite tedious. We now return to the proof of Theorem <ref>. We will prove the theorem by describing an algorithm with input and output as specified by the theorem and then proving its correctness and upper bounds on the complexity. The following algorithm will avoid using the semi-algebraic triangulation algorithm (and its inherently doubly exponential complexity). Instead, we will use the parametrized algorithm for simplicial replacement (Algorithm <ref>) described above. The correctness of the algorithm follows from the correctness of Algorithms <ref> and <ref>, and the same arguments as in the proof correctness of Algorithm <ref>. The complexity bound follows from the complexity of Algorithms <ref> and <ref>. Follows from the proof of correctness and complexity analysis of Algorithm <ref>. Follows immediately from Theorem <ref>. We first call Algorithm <ref> with 𝒫,𝒬,Φ,Ψ,ℓ as input, and then enumerate the elements of (𝒟) by computing a unique point 𝐝_D(represented by a real univariate representation) in each D ∈𝒟. It then suffices to check for each pair (𝐭,𝐭') ∈ T× T ∩(^p), the unique D ∈(𝒟) such that (𝐭,𝐭') ∈ D or equivalently if (𝐭,𝐭') and 𝐝_D belong to the same element of (𝒟). This can be decided using the uniform roadmap algorithm <cit.>. 
The complexity bound follows from the complexity of Algorithm <ref> and that of the uniform road algorithm which has singly exponential complexity. § PROOFS OF THEOREMS <REF> AND <REF> In the proofs of Theorems <ref> and <ref> we will need the following basic result from real algebraic geometry giving an upper bound on the sum of the (zero-th) Betti numbers of the realizations of all realizable sign conditions of a finite set of polynomials. <cit.> Let 𝒫⊂[X_1,…,X_n]_≤ d with (𝒫) = s. Then, ((𝒫)) ≤∑_j=1^nsj 4^j d(2d-1)^n-1. Using Theorem <ref> we have that the poset module 𝐏_S,𝐟,ℓ is semi-algebraically constructible (recall Definition <ref>). Thus, there exist M > 0, and a semi-algebraically constructible function F:(^p) →𝐤^M × M associated to 𝐏_S,𝐟,ℓ. Moreover, since by Theorem <ref>, the complexity of 𝐏_S,𝐟,ℓ is bounded by (sd)^(np)^O(ℓ), we can assume that there exists a finite set of polynomials, 𝒟⊂[Y_1,…,Y_p,Y_1',…,Y_p], such that the partition (D)_D ∈(𝒟), D ⊂(^p) is subordinate to F, with C' := max((𝒟),(𝒟)) ≤ (sd)^(np)^O(ℓ). Now let 𝒟⊂[Y_1,1,…,Y_1,p,…, Y_N,1,…,Y_N,p] be defined as follows. 𝒟 := ⋃_1 ≤ i,j ≤ N{P(Y_i,1,…,Y_i,p, Y_j,1,…,Y_j,p) | P ∈𝒟}. Then, (𝒟) ≤ C' · N^2, (𝒟) ≤ C', and the number of variables in the polynomials in 𝒟 is p N. It follows from the fact that F is associated to the poset module 𝐏_S,𝐟,ℓ, and that the partition (D)_D ∈(𝒟) is subordinate to F, that for each T = (_1,…,_N) ∈ (^p)^N, the strong equivalence class of the finite poset module 𝐏_S,𝐟,T,ℓ is determined by the map {(i,j) |_i ≼_j}→(𝒟), (i,j) ↦ D(_i,_j), where D(_i,_j) is the unique element of (𝒟) containing (_i,_j). Let D_T ∈(𝒟) such that (_1,…,_N) ∈D_T. Denote by π_i,j: (^p)^N →^p ×^p the projection map on to the (i,j)-th coordinate (tuples). Then, for each i,j with _i ≼_j, π_ij(D(T)) = D(_i,_j). Thus, the strong equivalence class of 𝐏_S,𝐟,T,ℓ is determined by D_T, and hence the number of possibilities for the strong equivalence class of 𝐏_S,𝐟,T,ℓ is bounded by ((𝒟)). Using Theorem <ref> and (<ref>) we obtain ((𝒟)) ≤ ∑_j=1^pNC' N^2j· 4^j · C'(2C'-1)^ p N-1 ≤ ∑_j=1^pNC N^2j· C^ p N with C = 8 C' = (sd)^(np)^O(ℓ) (using (<ref>)). In order to get the asymptotic upper bound observe that ∑_j=1^pNC N^2j· C^ p N ≤ (p N) ·C N^2p N· C^p N ≤ (p N)·(e C N^2/p N)^p N· C^p N ≤ (p N) ·(e C^2/p)^p N· N^p N = (N^o(1) pN) · N^p N = N^(1+o(1))p N, where in the second step we have used the inequality mk≤(em/k)^k valid for all m,k with 0 ≤ k ≤ m. For s,d,n > 0, let M(s,d,n) := s ×n + dd, denote the number of monomials in s polynomials in n variables of degree d. Let (A_i,A_i')_1 ≤ i ≤ s', (B_i,B_i')_1 ≤ i ≤ s” denote Boolean variables, with s' + s”= s, and let Φ(A_1,A_1'…,A_s',A_s''), Ψ(B_1,B_1',…,B_s”,B_s”') denote two Boolean formulas. Notice that there as most 2^2^2s many non-equivalent Boolean formulas in 2s Boolean indeterminates. Now given, 𝒫 = (P_1,…,P_s') ∈ ([X_1,…,X_n])^s', and 𝒬 = (Q_1,…,Q_s”) ∈ ([Y_1,…,Y_p,Y_1',…,Y_p'])^s”, we will denote by Φ(𝒫) (resp. Ψ(𝒬)) the formulas obtained by substituting in Φ (resp. Ψ), A_i by P_i ≥ 0, A_i' by P_i ≤ 0, (resp. B_i by Q_i ≥ 0, B_i' by Q_i ≤ 0). For 𝒫∈ ([X_1,…,X_n]_≤ d)^s' and 𝒬∈ ([Y_1,…,Y_p,X_1,…,X_n]_≤ d)^s”, we will identify (𝒫,𝒬) with the point in ^M(s',d,n)×^M(s”,d,n+p) whose coordinates give the coefficients of the polynomials in 𝒫,𝒬. We will denote the vector of coefficients of 𝒫 by A and those of 𝒬 by B, for A = 𝐚, B = 𝐛, we will denote by 𝒫_𝐚,𝒬_𝐛 the corresponding tuples of polynomials having these coefficients. For a pair. 
(Φ,Ψ), of Boolean formulas, (𝒫,𝒬) ∈^M(s',d,n)×^M(s”,d,n+p), we denote by S_Φ(𝒫) = (Φ(𝒫)), and by 𝐟_Ψ(𝒬), the semi-algebraic map ^n →^p, such that graph(𝐟) = (Ψ(𝒬)). Treating the coefficient vectors A,B of 𝒫,𝒬 in the input of Algorithm <ref> as parameters, and using a parametrized version of Algorithm <ref> (see discussion in Subsection <ref>), we obtain a finite set of polynomials ℋ⊂[A,B], and for each H ∈(ℋ), finite sets 𝒞_H ⊂[A,B,Y_1,…,Y_p], 𝒟_H ⊂[A,B,Y_1,…,Y_p,Y_1',…,Y_p'], simplicial complexes Δ_H,C,i, and homomorphisms ϕ_H,D,ℓ, such that 𝒞_H(𝐚,𝐛) ⊂[Y_1,…,Y_p], 𝒟_H(𝐚,𝐛) ⊂[Y_1,…,Y_p,Y_1',…,Y_p'], and Δ_H,C,i, ϕ_H,D,ℓ, is the output of Algorithm <ref> with input 𝒫_𝐚,𝒬_𝐛, Φ(𝒫_𝐚),Ψ(𝒬_𝐛), ℓ for all (𝐚,𝐛) ∈ H. It now follows using the same argument and notation as in the proof of Theorem <ref>, that the number of strong equivalence classes amongst the poset modules (𝐏_S_Φ(𝒫_𝐚), 𝐟_Ψ(𝒬_𝐛),T,ℓ)_(𝐚,𝐛) ∈^M(s',d,n)×^M(s”,d,n+p), T ∈ (^p)^N is bounded by ((ℋ∪⋃_H ∈(ℋ)𝒟_H)), where 𝒟_H := ⋃_1 ≤ i,j ≤ N{P(Y_i,1,…,Y_i,p, Y_j,1,…,Y_j,p) | P ∈𝒟_H }. Note that C := (⋃_H ∈(ℋ)𝒟_H), C' := (ℋ), K := (ℋ∪⋃_H ∈(ℋ)𝒟_H), are all bounded in terms of s,d,n,ℓ,p but independent of N. Also note that the number of variables in the polynomials ℋ equals M, and that in the polynomials in ⋃_H ∈(ℋ)𝒟_H equals p N + M, where M =M(s',d,n) + M(s”,d,n+p). Now using using Theorem <ref> we obtain that ((ℋ∪⋃_H ∈(ℋ)𝒟_H)) ≤ ∑_j=1^p N + MC N^2 + C'p N + M 4^j K(2K-1)^p N + M-1 = N^(1 + o(1))N. Finally the theorem follows from the fact there at most ∑_s' + s” =s 2^2^2s'× 2^2^2s” pairs of Boolean formulas (Φ,Ψ) to consider. amsplain
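As a purely numerical aside (ours, not part of the original argument), the short Python sketch below evaluates the bound on realizable sign conditions quoted at the beginning of this section, ∑_{j=1}^{n} C(s,j) 4^j d (2d-1)^{n-1}, and checks the elementary estimate C(m,k) ≤ (em/k)^k used in the asymptotic step of the proofs above; the helper names are ours.

from math import comb, e

def sign_condition_bound(s, d, n):
    # Upper bound sum_{j=1}^{n} C(s, j) * 4^j * d * (2d - 1)^(n - 1) on the sum of the
    # zero-th Betti numbers of all realizable sign conditions of s polynomials of
    # degree at most d in n variables.
    return sum(comb(s, j) * 4**j * d * (2 * d - 1) ** (n - 1) for j in range(1, n + 1))

def binom_upper(m, k):
    # Elementary estimate C(m, k) <= (e * m / k)^k invoked in the asymptotic estimate.
    return (e * m / k) ** k

print(sign_condition_bound(s=4, d=2, n=3))   # bound for a small instance: 4 quadrics in 3 variables
print(comb(20, 5), "<=", binom_upper(20, 5)) # sanity check of the binomial estimate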
http://arxiv.org/abs/2407.12115v1
20240716185405
Shape-morphing membranes augment the performance of oscillating foil energy harvesting turbines
[ "Ilan M. L. Upfal", "Yuanhang Zhu", "Eric Handy-Cardenas", "Kenneth Breuer" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
[To whom correspondence should be addressed: ]iupfal@mit.edu [Current address: ]University of California, Riverside Center for Fluid Mechanics, School of Engineering, Brown University 184 Hope St, Providence, RI 02912 § ABSTRACT Oscillating foil turbines (OFTs) can be used to produce power from rivers and tides by synchronizing their heaving motion with the strong lift force of vortices shed at their leading edge. Prior work has shown that compliant membrane OFTs, which passively camber, exhibit enhanced leading edge vortex (LEV) stability and improved lift and power compared with rigid foil OFTs for specific kinematics. This work seeks to understand a) the performance of compliant membrane OFTs over their full kinematic parameter space and b) separate the roles of membrane camber and extensibility in LEV stabilization. We characterize the performance of a compliant membrane OFT over a wide range of kinematic parameters through prescribed motion experiments in a free-surface water flume. The optimal frequency of the compliant membrane OFT is found to be lower than that of a rigid foil OFT due to the enhanced LEV stability of the membrane. The lift and power of compliant and inextensible membrane foils are then compared to determine whether camber alone is effective for LEV stabilization or if extensibility plays an important stabilizing role. The deformation of the compliant membrane OFT is measured using laser imaging. We observe that the role of extensibility changes for different angles of attack. At low angles of attack, membrane deformation is consistent through the half cycle coinciding with similar performance to the inextensible foil. At higher angles of attack, the compliant foil has a larger deformation and dynamically decambers corresponding with delayed stall and enhanced lift and power. Shape-morphing membranes augment the performance of oscillating foil energy harvesting turbines Kenneth Breuer July 22, 2024 =============================================================================================== § INTRODUCTION While renewable energy deployment has grown dramatically, vast clean energy resources remain unharnessed in river and tidal flows <cit.>. These flows are highly predictable, therefore tidal energy and run-of-the-river power may reduce the ancillary service requirements of more variable renewable energy sources such as solar and wind power <cit.>. Horizontal axis rotary turbines (HARTs) are the most mature commercial technology for harnessing tidal power; however, they suffer from high maintenance costs, poor suitability to shallow flows, and high tip speeds that can harm aquatic life. HARTs also significantly decline in performance outside of design conditions and in array configurations <cit.>. §.§ Oscillating Foil Turbines An alternate method for harvesting energy from river and tidal is the oscillating foil turbine (OFT) <cit.>. The OFT consists of a foil with two degrees of freedom, heaving translation and pitching rotation. The OFT produces power by synchronizing its heaving motion with the strong lift force of vortices shed at their leading edge. A significant distance is required between HARTs for the flow speed deficit to recover. The OFT does not require this recovery distance due to the unique wake structure of the turbines <cit.>. Instead of producing a simple velocity deficit wake, OFTs shed strong LEVs which can be exploited or avoided by downstream foils to enhance the system performance of turbine arrays <cit.>. 
OFTs also provide the additional benefits of a rectangular extraction plane more suitable to shallow flows such as rivers and tidal channels, and reduced disturbance to aquatic life due to their lower tip speed <cit.>.
§.§ Oscillating Foil Turbine Kinematics
The kinematics of OFT motion can be characterized by four parameters: pitching amplitude: θ_0, reduced heaving amplitude: h^* = h_0/c, reduced frequency: f^* = f c / U_∞, and phase delay between the heaving and pitching motions: ϕ. Here, c is the foil chord length, h_0 is the heaving amplitude, f is the oscillation frequency and U_∞ is the free stream velocity. The energy harvesting performance of OFTs depends strongly on these parameters and has been studied extensively via simulation, water flume experiments, and field experiments <cit.>. A phase delay of ϕ = 90^∘ has been shown to maximize turbine efficiency <cit.>. The heaving and pitching profiles are commonly sinusoidal; however, Su et al. investigated the performance of non-sinusoidal kinematics and found that trapezoidal pitching could improve performance by up to 50% over sinusoidal pitching <cit.>. Kinsey and Dumas observed formation and shedding of vortices at the leading edge of rigid foil OFTs. Leading edge vortices (LEVs) create a suction force on the foil, enhancing power extraction <cit.>. LEVs which shed just as the foil reaches the top or bottom of the stroke generally have the greatest strength since they are attached longest. The kinematics with the greatest LEV strength were found to yield the highest energy harvesting efficiencies. LEV shedding at the top and bottom of the stroke can also aid the pitch reversal of the foil. An optimal efficiency of 35% was identified by Kinsey and Dumas at h^* = 1, θ_o = 75^∘, and f^* = 0.15. This optimal reduced frequency was confirmed and explained by Zhu as corresponding to the most unstable wake mode of the oscillating foil <cit.>. Kinsey and Dumas found the effective angle of attack at mid stroke, α_T/4 = θ_o - tan^-1 [ḣ(t = T/4) / U_∞ ], where ḣ(t) is the heaving velocity, to be an excellent predictor of OFT performance <cit.>. Kim et al. found that OFT efficiency is highest for 30^∘ < α_T/4 < 40^∘ and 0.09 ≤ f^* ≤ 0.17 <cit.>. Furthermore, the efficiency curves over this frequency range were found to collapse well with respect to α_T/4, in agreement with Kinsey and Dumas. Ribeiro et al. focused on the vortex structures in the wake and identified three regimes of operation based on α_T/4 and vortex formation: (i) the shear layer regime (0 < α_T/4 ≤ 11^∘) in which there is no separation at the leading edge, and only small vortices forming in the shear layer behind the foil; (ii) a leading edge vortex (LEV) regime (11^∘ < α_T/4 < 29^∘) in which a strong primary LEV is formed, and finally (iii) the leading edge vortex and trailing edge vortex (LEV + TEV) regime (29^∘ < α_T/4) in which an additional vortex is formed at the trailing edge of the foil <cit.>. As in previous studies, the efficiency of the OFT was found to increase with α_T/4 up to α_T/4 ≈ 29^∘, at which point a maximum efficiency was achieved and the efficiency began to decrease with α_T/4.
§.§ Compliant and Inextensible Membrane Wings
The performance of OFTs can be improved by introducing a camber to the foil. The OFTs discussed thus far used rigid foils which must be symmetric and thus cannot have a camber. However, using compliant membrane foils the camber can change between the upstroke and downstroke.
Compliant membrane wings are used by flying mammals such as bats and have enhanced lift and a delayed, softer transition to stall <cit.>. Mathai et al. showed that an OFT utilizing a compliant membrane foil can yield an improvement in lift coefficient of up to 300% and an improvement in power extraction of up to 160% by cambering in the flow as well as stabilizing the LEV <cit.>. In the context of energy harvesting, compliant membrane foils offer several appealing features. Since OFTs naturally prefer thin wings with a sharp leading edge <cit.>, the membranes have ideal geometric qualities. In addition, their light weight and low bending stiffness provide minimal inertial penalty during pitch reversal. With this in mind the current study aims to extend the work of Mathai et al. with two main goals: first to characterize a wider parameter range, and second to understand the different contributions of camber and extensibility to energy harvesting performance. In this manuscript we present results of experiments using three OFT configurations (Fig. <ref>): an elastic membrane that can stretch in response to the hydrodynamic forces generated during the cycle (“C”); an inextensible membrane with zero slack that cannot camber (“I-0”), and lastly a membrane with 5% slack that can adopt a beneficial camber during the upstroke and downstroke (“I-5”). We measure the membrane shape as well as the power extracted over a range of operating conditions (flow speed, frequency, pitch angle, and heave amplitude) and compare the results with the performance of similar rigid foils. § MATERIALS AND METHODS §.§ Water Flume Facility Experiments were conducted in a free surface water flume at Brown University, (test section width: 0.8 m, depth: 0.53 m, and length = 4.0 m). The freestream velocity, U_∞, was set to 28 cm/s for the compliant wing parameter sweep and 32 cm/s for the compliant and inextensible wing experiments, measured using an acoustic doppler velocimeter (Vectrino, Nortek Inc.). The membrane hydrofoil was held from above, supported by a rigid frame (Figure <ref>a,b) consisting of a 5 mm diameter steel rod, bent into a U-shape with the legs, 300 mm long and 75 mm apart, defining the wing span, b, and chord, c respectively. The base of the U defined the wing tip while the ends of each leg were inserted into a rigid support bar connected to the force transducer and motion carriage positioned above the waterline. The membrane was glued to two steel tubes, outer-diameter: 6.35 mm, using epoxy (Masterbond MS 153). The tubes slid onto the support frame, thus maintaining a fixed chord length, but allowing free rotation at the leading and trailing edges (LE and TE). Circular end plates were mounted onto the frame above and below the membrane to minimize three-dimensional flow effects. The frame was mounted on an ATI 9105-TIF-Delta-IP65 six axis force transducer which was used to measure the forces and torques acting on the foil. The entire system was supported by a two-axis heave/pitch system which prescribed the wing kinematics. A servo motor (Parker SM233AE) controlled the pitching axis motion, while a linear motor (Aerotech BLM-142-A-AC-H-S-5000) drove the heaving motion. Optical encoders were used to record the realized heave and pitch trajectories (US Digital E3-2500-250-IE-D-D-1, and US Digital E3-2500 respectively). 
§.§ Membrane materials For the inextensible foil experiments, a thin mylar sheet (100 microns) was glued to the LE and TE tube sections with a slack ratio s = l/c of 1 and 1.05, where l is the membrane sheet length. The compliant membrane material was fabricated in-house by casting a thin silicone membrane sheet using a mass ratio of 50% Mold Star Series: Platinum Silicone Rubber Part A, 25% Mold Star Series 16 Fast: Platinum Silicone Rubber Part B, and 25% Mold Star Series 15 Slow: Platinum Silicone Rubber Part B, with a solvent component of BJB Enterprises TC-5005 Part C added equivalent in mass to 40% of the silicone mixture. The addition of the solvent reduces the elastic modulus of the membrane to a desired value. Once mixed thoroughly, the silicone solution was degassed to remove all air bubbles which can act as points of failure in the membrane or change the material properties. The solution was poured onto a clean glass surface and spread using an adjustable wet film applicator (Mitutoyo) at a wet thickness of 750 μm. The film was allowed to dry thoroughly at room temperature for 36 hours (cured thickness of h_m = 500 ± 20 μm), and then laser cut into a rectangular sheet section for use as the test membrane. Some membrane samples were also cut into “dog-bone”-shaped samples and mounted in an uniaxial tensile testing machine (Instron 5942) with which the Young's modulus of the material was determined for the quasi-linear stress-strain region: 1 ≤λ≤ 2.4, where the stretch, λ, is the ratio of the membrane length, l, to its initial length, l_i. The Young's modulus of the silicone membrane was determined to be E = 150 kPa ± 5 kPa. The ratio of elastic stress, Eh_m, to the inertial stress, ρ U_∞^2 c yields the nondimensional Aeroelastic number, Ae = Eh_m / 1/2ρ U_∞^2 c which is important in characterizing the strength of the fluid-structure interaction <cit.>. For the fabricated membrane and the described testing conditions, Ae ≈ 25 and 20 for the compliant parameter sweep and the compliant-inextensible comparison experiments respectively. Once fabricated, the compliant membrane was mounted to the support frame, as described above, so that it had negligible initial stretch, λ_o = 1. §.§ Kinematics The first series of experiments conducted covered a broad range of the kinematic operating space of the compliant membrane foil, varying the pitching amplitude in 8 increments of 10^∘ from 15^∘ to 85^∘, the frequency in 8 increments of 0.075 Hz from 0.125 Hz to 0.65 Hz (f^* = 0.037 - 0.195), and heaving amplitude in 4 increments of 0.25c from 0.75c to 1.5c. Sinusoidal kinematics were chosen for these experiments for comparison with related work on rigid foil OFTs. The reduced (nondimensional) heaving amplitude is defined as h^* = h_o/c and the reduced frequency is given by f^* = fc/U_∞. The sinusoidal profiles for heaving, h(t), and pitching, θ(t), are given by h(t) = h_o cos (2π f t), and θ(t) = θ_o cos (2π f t + ϕ), where h_o and θ_o are the heaving and pitching amplitudes respectively; f is the frequency of oscillation, and ϕ is the phase shift between pitching and heaving cycles which was held at 90^∘ in all trials. A second series of experiments compared the performance of compliant and inextensible foils. Two foils with inextensible membranes were tested (s = 1 and 1.05), and compared with the elastic membrane (λ_o = 1, Ae ≈ 20). All wings were tested at a single frequency, f^* = 0.04 and constant heaving amplitude: h_o/c = 1.2; the pitching amplitude varied between θ_o = 18^∘ - 57^∘. 
These combinations were chosen so that the effective angle of attack at mid-stroke, α_T/4, varied between 15^∘ and 45^∘. Prior work on the compliant membrane OFT has studied 0 < α_T/4 < 15^∘. By studying a range of α_T/4 spanning from the domain of prior work to higher α_T/4, we hope to both validate our results against prior work and gain an understanding of compliant and inextensible membrane OFT performance with increased α_T/4. Rigid OFTs obtain a maximum efficiency at α_T/4 = 30^∘ - 40^∘; therefore, it is of interest to see how the compliant membrane OFT performs under these kinematics. Non-sinusoidal kinematics (trapezoidal pitching and triangular heaving) were chosen for these experiments for ease of comparison with prior work on compliant membrane OFTs <cit.>. Following Su et al., the non-sinusoidal kinematics are conveniently defined by a single parameter, β, which modulates a cosine curve from trapezoidal (for positive values of β) to triangular (for negative values of β) <cit.>. Note that the equations used by Su et al. have been phase-adjusted for consistency with the rest of the present study:
h(t) = h_o sin^-1(-β cos(2π f t)) / sin^-1(-β)   for -1 ≤ β < 0,
h(t) = h_o cos(2π f t)   for β = 0,
h(t) = h_o tanh[β cos(2π f t)] / tanh(β)   for 0 < β,
and
θ(t) = θ_o sin^-1[-β cos(2π f t + ϕ)] / sin^-1(-β)   for -1 ≤ β < 0,
θ(t) = θ_o cos(2π f t + ϕ)   for β = 0,
θ(t) = θ_o tanh[β cos(2π f t + ϕ)] / tanh(β)   for 0 < β.
§.§ Measurement procedures
At each operating condition, data was acquired over 30 cycles with the first and last three cycles discarded to eliminate the startup and stopping transients. Two metrics are used to characterize the energy harvesting performance of the membrane hydrofoil turbine: the coefficient of power, C_p, and the Betz efficiency, η. The coefficient of power, C_p, is calculated from the sum of the cycle-averaged coefficients of heaving and pitching power, normalized by the dynamic pressure and the wing area: C_p = (⟨F ·ḣ⟩ + ⟨τ·θ̇⟩) / (1/2 ρ U_∞^3 b c). Here, F is the lift force (perpendicular to the flow), and τ is the pitching moment. ḣ and θ̇ are the heaving and pitching velocities, respectively. The Betz efficiency, η, is the power normalized by the swept area of the oscillating foil: η = (⟨F ·ḣ⟩ + ⟨τ·θ̇⟩) / (1/2 ρ U_∞^3 A_s), where A_s is the swept area. Note that A_s is generally not the same as b h_o, due to the pitch angle of the foil.
§ RESULTS AND DISCUSSION
§.§ Kinematic parameter sweep
The energy harvesting performance of the compliant membrane OFT is evaluated first using the Betz efficiency, η, and second using the power coefficient, C_p. Figure <ref> shows the Betz efficiency, η, plotted with respect to the reduced frequency, f^*, and pitching amplitude, θ_o, for four different heave amplitudes, h^*_o. The efficiency is convex with respect to all parameters tested, indicating that a true optimum was found. An optimal efficiency of 31.7 ± 0.8 % occurs at h^* = 1.00, f^* = 0.11, and θ_o = 65^∘ (Figure <ref>, top right panel). The map of the Betz efficiency for the compliant membrane OFT (Figure <ref>) closely resembles analogous maps for rigid foils <cit.>, although key differences exist. Notably, the optimal efficiency of the compliant membrane OFT occurs at a significantly lower reduced frequency, f^* = 0.11, than has been found for a rigid foil OFT, f^* = 0.15 <cit.>.
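To make the kinematic definitions and performance metrics above concrete, the short Python sketch below is our own illustration (it is not code from the study; the helper names, the water density of 1000 kg/m^3, and the sample parameter values are ours). It implements the β-modulated heave and pitch profiles, the effective angle of attack at mid-stroke for sinusoidal kinematics (using the magnitude of the heave velocity at t = T/4), and the cycle-averaged power coefficient from sampled force and velocity traces. Evaluated at the reported optimum, h^* = 1.00, f^* = 0.11, θ_o = 65^∘, it gives α_T/4 ≈ 30^∘, consistent with the 30^∘ to 40^∘ range discussed next.

import numpy as np

def heave(t, h0, f, beta=0.0):
    # Heave profile h(t): beta < 0 gives triangular, beta = 0 sinusoidal, beta > 0 trapezoidal-like motion.
    c = np.cos(2 * np.pi * f * t)
    if beta == 0.0:
        return h0 * c
    if beta < 0.0:
        return h0 * np.arcsin(-beta * c) / np.arcsin(-beta)
    return h0 * np.tanh(beta * c) / np.tanh(beta)

def pitch(t, theta0, f, phi=np.pi / 2, beta=0.0):
    # Pitch profile theta(t), leading the heave by the phase phi (90 degrees here).
    c = np.cos(2 * np.pi * f * t + phi)
    if beta == 0.0:
        return theta0 * c
    if beta < 0.0:
        return theta0 * np.arcsin(-beta * c) / np.arcsin(-beta)
    return theta0 * np.tanh(beta * c) / np.tanh(beta)

def alpha_T4(theta0_deg, f_star, h_star):
    # Effective angle of attack at mid-stroke for sinusoidal kinematics,
    # alpha_T/4 = theta_o - atan(2*pi*f_star*h_star), using the heave-speed magnitude at t = T/4.
    return theta0_deg - np.degrees(np.arctan(2 * np.pi * f_star * h_star))

def power_coefficient(F, hdot, tau, thetadot, U, b, c, rho=1000.0):
    # Cycle-averaged C_p = (<F*hdot> + <tau*thetadot>) / (0.5*rho*U^3*b*c),
    # assuming the traces are sampled uniformly over whole cycles.
    return (np.mean(F * hdot) + np.mean(tau * thetadot)) / (0.5 * rho * U**3 * b * c)

print(alpha_T4(65.0, 0.11, 1.00))  # approximately 30 degrees at the reported optimum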
At the large angles of attack of the optimum (30^∘ < α_T/4 < 40^∘) the OFT is operating in the dynamic stall regime <cit.> in which a leading edge vortex (LEV) forms on the suction surface of the wing <cit.>, and the performance of the OFT is strongly dependent on the synchronization of the growth and shedding of the LEV with the pitch reversal of the wing <cit.>. In general, the LEV increases the lift force, enhancing the foil efficiency. However, at low frequencies, the LEV sheds before the pitch reversal takes place, while at high frequencies, the pitch reversal occurs before the LEV has had time to act, resulting in a drop in the lift force which depresses the efficiency. At a “sweet spot”, in this case f^* ∼ 0.1, the contributions of LEV growth and shedding are balanced, making an optimal contribution to the OFT efficiency. Since prior studies have shown that the optimal frequency coincides with the synchronization of vortex shedding and pitch reversal, we expect to see a lower optimal frequency for the compliant membrane OFT because the leading edge vortex is more stable on the compliant membrane wing compared to the rigid wing <cit.>. Therefore, the OFT must oscillate more slowly in order to synchronize with the delayed vortex shedding. Another common trend between the rigid and compliant foil OFT efficiency maps is the steep gradient up from the feathering limit, which coincides with the α_T/4 gradient. To elucidate the relationship between energy harvesting performance and the effective angle of attack at mid-stroke, α_T/4, the efficiency, η, was re-plotted with respect to the reduced frequency, f^*, and α_T/4 (Figure <ref>a) for the h^* = 1.00 case. Shown this way, it is clear that for α_T/4 below ∼ 10^∘, the efficiency is a strong function of α_T/4 and is largely independent of f^*. For α_T/4 greater than 10^∘, the efficiency becomes increasingly frequency dependent. For α_T/4 < 10^∘, we do not expect a leading edge vortex to be generated on the foil <cit.>. For these cases, increasing α_T/4 increases the lift force, L <cit.>, which in turn increases the efficiency, η. For α_T/4 > 10^∘ we begin to see a convex relationship between efficiency and frequency, with a maximum efficiency at approximately f^* = 0.1. This frequency dependence coincides with a transition to the dynamic stall regime, evident in the force measurements and consistent with results of Ribeiro et al. <cit.>. This frequency dependence will be further discussed shortly, but we first focus on the α_T/4 dependence of η. More insight into the dependence of performance on α_T/4 can be gained from the lift-vs-time profiles for a range of α_T/4. A sample of C_L vs t/T is shown in Figure <ref>b for cases of f^* = 0.11. The energy harvested by the heaving of the foil is simply the integral of the product of the lift force, C_L, and the heaving velocity, ḣ, plotted in dashed red (Figure <ref>b). In the α_T/4 = 32^∘ and α_T/4 = 42^∘ cases shown in Figure <ref>b, the instantaneous power, |F(t) ·ḣ(t)|, is maximized due to the large overlap between the force and heave velocity profiles relative to the other cases. The compliant membrane OFT was found to have the same optimal α_T/4 range as rigid foil OFTs based on previous work <cit.>. In the α_T/4 = 12^∘ case (blue line) we observe a delay between the pitch reversal, which occurs at t/T = 0.5, and the sign reversal of the lift force, which only occurs at t/T ≈ 0.65.
Prior work <cit.> and laser deformation measurements (presented in the following section of this paper) reveal that as a compliant membrane wing rotates from a positive to negative angle of attack, the wing retains its positive camber through the pitch reversal, suddenly “snapping through” only after the wing has reached a threshold angle of attack. We believe this delayed snap-through behavior to be the cause for the delay in the change of sign of the lift force following pitch reversal. As α_T/4 increases in Figure <ref>b, this delay reduces and the point at which the lift force changes sign from negative to positive following the pitch reversal occurs earlier. In order to change α_T/4 between trials, only the pitching amplitude, θ_o, was varied. An increase in θ_o coincides with an increase in the speed of pitch reversal such that the wing achieves the threshold snap-through angle earlier in the cycle. This trend is confirmed in the laser imaging of the membrane shape (presented in the following section). In the α_T/4 = 12^∘ case, the foil achieves C_L ≈ 2 at t/T ≈ 0.7 and remains close to this value until the following pitch reversal despite small oscillations in the lift force. Similar oscillations were observed by Mathai et al. <cit.> and found to coincide with oscillations in the membrane deformation. Such oscillations are observed in the laser measurements presented in the following section. In the higher α_T/4 lift profiles presented in Figure <ref>b, the foil transiently achieves C_L > 3. We observe that the lift force grows quickly after the pitch reversal at t/T = 0.5. This high transient force is associated with the rapid growth of a leading edge vortex in this regime <cit.>. At the highest value of α_T/4 tested, this trend is broken and the wing experiences a slower growth in lift force, only exceeding C_L = 2 at t/T ≈ 0.8. At this very high pitch angle and high pitching velocity, which is well beyond the known rigid foil optimum, this effect might be due to the LEV detaching too early, before it has had an opportunity to sufficiently grow. However, more detailed examination of the flow field, which is beyond the scope of the current work, will be needed to fully explain this observation. For the three highest α_T/4 cases (Fig. <ref>b, orange, purple and green lines), the maximum C_L occurs near t/T = 0.9, while for the two lower α_T/4 cases, the maximum occurs earlier, after which the lift drops off until the foil turns over. The α_T/4 = 42^∘ case has the highest initial peak amplitude and overall maximum C_L amplitude. Interestingly, for the higher α_T/4 cases, the coefficient of lift continues to increase until the foil begins to turn over. As mentioned earlier, prior work has identified the synchronization of LEV shedding at t/T = 0, 0.5 to be an important factor in optimal kinematics. These results support this hypothesis, since the lift drops off just before pitch reversal in the α_T/4 = 32^∘, 42^∘ cases (highest performing), suggesting that LEV shedding is occurring at that point in the cycle <cit.>. The coefficient of power, C_p (Figure <ref>), was found to be convex in f^* and θ_o but increased monotonically with h^*. A maximal power coefficient of 0.98 ± 0.03 was achieved at h^* = 1.50, f^* = 0.105, and θ_o = 75^∘ (Figure <ref>, bottom right panel). The power map of the OFT looks similar to the efficiency map in terms of general shape in Figures <ref> and <ref> respectively. Note that the optimal frequency decreases slightly as the heave amplitude increases.
As h^* increases, the heave velocity, effective angle of attack, and effective leading edge velocity all increase, resulting in a faster LEV growth. In order to account for this, and to time the pitch reversal with the LEV separation (as discussed in the previous section), the optimal frequency is reduced. While the efficiency is maximized at h^* = 1, the coefficient of power continues to increase with heaving amplitude up to the highest value tested. This indicates that the most power is generated from a given cross section of the flow at h^* = 1, but more power can be generated per foil by simply oscillating over a larger cross section of the flow. In a practical setting where sea or river bed space is abundant, minimizing equipment costs is most important and maximizing power generation per foil may be the primary design consideration. We should note that at larger heave values, we may expect that blockage effects in the test section will enhance the OFT performance <cit.>. However, in this study, the minimum gap between the foil and the walls is 3.9 chord lengths when h^* = 1.50; therefore, while some improvement is expected, we expect it to be small. In addition, the comparisons between different foils discussed in this work, as well as comparisons with the results of Kim et al. <cit.>, which were performed in the same facility, are consistent.
§.§ Comparison of Compliant and Inextensible Membrane Hydrofoils
§.§.§ Membrane kinematics
The vital difference between previous studies of oscillating foil turbines using a rigid foil and the current work is the ability of the membrane foil to (i) adopt camber during the power stroke, and (ii) stretch in response to the hydrodynamic forces acting on the wing. The dynamic stretch, λ, of the compliant membrane is shown in Figure <ref>, plotted over one half cycle (t/T = 0.5 … 1.0) for four values of α_T/4. The pitching profile, θ / θ_o, is superimposed for reference. The initial stretch is one, and as expected, the membrane stretches due to the fluid loading and the stretch increases with α_T/4. In all cases, the measurements reveal a local peak in the membrane stretch between t/T = 0.55 and 0.6, followed by a transient oscillation. This initial loading shock and oscillation is associated with the elastic vibration of the membrane when it encounters hydrodynamic loading following pitch reversal. The membrane stretch behavior differs significantly as the effective angle of attack at mid-stroke, α_T/4, increases from 14^∘ to 44^∘, and two distinct behaviors can be identified. In the smallest pitching angle case, despite the initial transient, the membrane stretch remains centered around a constant value of λ≈ 1.05. At the highest pitching amplitude case of α_T/4 = 44^∘, there is a steady growth in the stretch until it reaches 16% at t/T ≈ 0.7, followed by a quick decline to a constant value of about 10%. In summary, we observe the overlay of a vibrational excitation of the elastic membrane with a second phenomenon, which will be explored further as it relates to differences in leading edge vortex formation and shedding between the α_T/4 cases.
§.§.§ Power and lift comparison
Based on these camber measurements, we compare the performance of three foils: the compliant membrane foil (C), an inextensible membrane foil with zero slack (I-0), and an inextensible membrane foil with 5% slack (I-5), which has a camber comparable to the α_T/4 = 14^∘ and 24^∘ cases shown in Fig. <ref>. The power coefficients for these three foils are shown in Figure <ref> over a range of α_T/4.
Both foils that allow for camber - C and I-5 - outperform the inextensible foil, I-0, over all values of α_T/4. We observe that the performance of the C and I-5 foils is closely matched at the low α_T/4 cases: 14^∘ and 24^∘. These conditions correspond to the cases for which the mean stretch of the C foil is close to the prescribed 5% slack of the I-5 foil (Figure <ref>). At the higher values of α_T/4 = 34^∘ and 44^∘, the compliant foil significantly outperforms the I-5 foil, and achieves values of λ ranging between 1.1 and 1.15, larger than the slack prescribed for the I-5 foil. This improved performance can be understood by comparing the lift-vs-time of these three wings at both low and high values of α_T/4 (Figure <ref>). For the case of α_T/4 = 14^∘, we notice that for the majority of the cycle the C and I-5 wings display closely matched lift profiles despite the elastic oscillations of the C foil observed in the stretch measurements. Both foils exhibit a rapid rise in lift (t/T ∼ 0.6) caused by the growth of the LEV and, at this low value of α_T/4, the vortex remains attached and C_L is sustained at its peak value over the duration of the heave cycle. One subtle difference between the C and I-5 lift profiles is their pitch reversal response. The C wing displays a smooth transition from negative to positive lift at t/T ≈ 0.55, associated with the gradual decambering and recambering of the wing as it goes from negative to positive angle of attack. In contrast, the lift profile of the I-5 wing exhibits a “stutter” - a delayed lift response followed by a rapid reversal, associated with the inextensible membrane sheet maintaining its negative camber at small positive angles of attack before suddenly snapping through to positive camber. Switching to the lift profiles for the highest pitching amplitude, α_T/4 = 44^∘, we see the same stutter in the lift of the inextensible wing as the pitch angle rises (earlier, now at t/T ∼ 0.52, because the pitch reversal is faster). In contrast to the lower α_T/4 case, we see that all three wings begin to stall at t/T ∼ 0.65. The inextensible membrane without slack is the first to stall, followed by the inextensible wing with slack, agreeing with previous observations that uncambered and inextensible wings exhibit the sharpest stall behavior <cit.>. This delayed stall, which occurs at peak pitch angle and, more importantly, at peak heave velocity, contributes to the increased energy harvesting achieved by wings with camber (Figure <ref>). The two inextensible wings also achieve lower minima in C_L of 1.1 and 1.4, followed by the growth of a second peak, while the compliant wing smoothly transitions to a second stable lift coefficient, C_L ≈ 1.75. These trends in the lift-vs-time are completely consistent with the stretch measurements (Figure <ref>). The sharp decambering of the compliant membrane wing in the α_T/4 = 44^∘ case observed in the stretch measurements (Figure <ref>) coincides with the initiation of stall at t/T = 0.65. In summary, at lower angles of attack, α_T/4 = 14^∘, 24^∘, the compliant membrane wing exhibits a roughly constant deformation similar to the shape of the inextensible membrane wing, while at higher angles of attack, α_T/4 = 34^∘, 44^∘, the membrane wing exhibits increased deformation and a stabilizing feedback behavior.
§ CONCLUSIONS
The kinematic parameter space of the oscillating foil turbine was studied utilizing a compliant membrane foil.
The optimum efficiency was found to occur at a lower reduced frequency than previously reported for rigid hydrofoils, f^* = 0.11 and 0.15 respectively. Given Kinsey and Dumas's findings that optimal kinematics are associated with vortex shedding at the pitch reversal, we expect increased leading edge vortex stability to result in later shedding and a lower optimal oscillation frequency <cit.>. In order to separate the roles which elasticity and dynamic shape-morphing play in leading-edge vortex stability on membrane wings, the energy harvesting performance of a compliant membrane wing, an inextensible membrane with slack (s = 1.05) and an inextensible membrane without slack (s = 1.00) were compared. Two distinct regimes of operation of the membrane foil OFTs were identified, the constant and dynamic camber regimes. In the constant camber regime which occurred for α_T/4≤ 24^∘, the compliant membrane achieved an equilibrium deformation quickly following pitch reversal. The performance of the inextensible membrane with similar slack achieved similar overall lift and power performance, although in this regime, the compliant membrane exhibited a smoother lift transition at the pitch reversal. For α_T/4≥ 34^∘, the compliant membrane exhibited increased deformation and decambered in response to stall, resulting in a softer stall than the inextensible membrane with slack. Thus, at low angles of attack, the LEV stability is primarily due to camber and the elasticity of the compliant membrane does not appear to present a significant benefit over the inextensible wing with slack. In contrast, at higher angles of attack the elasticity of the compliant membrane may enhance the LEV stability by decambering in response to stall. This later regime is critical to take into consideration as the optimal kinematics for this turbine occur exactly at this range of α_T/4 = 30^∘ to 45^∘. While the decambering of the compliant wing resulted in a softer stall behavior than the inextensible wing with a slack ratio of 1.05, other slack ratios could potentially perform better. Since increased α_T/4 enlarges the gap between the feeding shear layer and the top of the wing, a larger wing camber may be necessary to stabilize the leading edge vortex. As the membrane stretch measurements indicate, the deformation of the compliant membrane wing increases with α_T/4. The larger camber of the compliant wing at these higher α_T/4 reduces the gap between the wing and the separation shear layer which may also play a role in the compliant wing's improved performance at large α_T/4. Since the camber of the compliant membrane wing exceeded the stretch of the inextensible wing it is challenging to draw a direct comparison in this case. Further study into inextensible wings with larger slack ratios is necessary to better understand the LEV dynamics in this regime. Our results indicate that the optimal efficiency of the compliant membrane OFT is comparable to that which has been previously observed for a rigid membrane OFT. However, the use of even more compliant membrane foils with larger deformation may yield higher efficiencies. § ACKNOWLEDGEMENTS This work was supported by the National Science Foundation, CBET Award 1921359, and the Halpin award (to IU) from Brown University.
http://arxiv.org/abs/2407.13012v1
20240717210618
CUAOA: A Novel CUDA-Accelerated Simulation Framework for the QAOA
[ "Jonas Stein", "Jonas Blenninger", "David Bucher", "Josef Peter Eder", "Elif Çetiner", "Maximilian Zorn", "Claudia Linnhoff-Popien" ]
quant-ph
[ "quant-ph", "cs.ET" ]
CUAOA: A Novel CUDA-Accelerated Simulation Framework for the QAOA Jonas Stein0000-0001-5727-9151 LMU Munich, Germany Aqarios GmbH, Germany jonas.stein@ifi.lmu.de Jonas Blenninger0009-0004-5382-7113 LMU Munich, Germany Aqarios GmbH, Germany jonas.blenninger@aqarios.com David Bucher0009-0002-0764-9606 Aqarios GmbH, Germany david.bucher@aqarios.com Peter J. Eder0009-0006-3244-875X Siemens AG, Munich, Germany peter-josef.eder@siemens.com Elif Çetiner LMU Munich, Germany elif.cetiner@tum.de Maximilian Zorn0009-0006-2750-7495 LMU Munich, Germany maximilian.zorn@ifi.lmu.de Claudia Linnhoff-Popien0000-0001-6284-9286 LMU Munich, Germany linnhoff@ifi.lmu.de July 22, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== BSTcontrol § ABSTRACT The Quantum Approximate Optimization Algorithm (QAOA) is a prominent quantum algorithm designed to find approximate solutions to combinatorial optimization problems, which are challenging for classical computers. In the current era, where quantum hardware is constrained by noise and limited qubit availability, simulating the QAOA remains essential for research. However, existing state-of-the-art simulation frameworks suffer from long execution times or lack comprehensive functionality, usability, and versatility, often requiring users to implement essential features themselves. Additionally, these frameworks are primarily restricted to Python, limiting their use in safer and faster languages like Rust, which offer, e.g., advanced parallelization capabilities. In this paper, we develop a GPU accelerated QAOA simulation framework utilizing the NVIDIA CUDA toolkit. This framework offers a complete interface for QAOA simulations, enabling the calculation of (exact) expectation values, direct access to the statevector, fast sampling, and high-performance optimization methods using an advanced state-of-the-art gradient calculation technique. The framework is designed for use in Python and Rust, providing flexibility for integration into a wide range of applications, including those requiring fast algorithm implementations leveraging QAOA at its core. The new framework's performance is rigorously benchmarked on the MaxCut problem and compared against the current state-of-the-art general-purpose quantum circuit simulation frameworks Qiskit and Pennylane as well as the specialized QAOA simulation tool QOKit. Our evaluation shows that our approach outperforms the existing state-of-the-art solutions in terms of runtime up to multiple orders of magnitude. Our implementation is publicly available at <https://github.com/JFLXB/cuaoa> and Zenodo <cit.>. Quantum Computing, Quantum Optimization, QAOA, Quantum Circuit Simulation, CUDA, HPC § INTRODUCTION One of the most promising approaches for quantum advantage in the domain of optimization problems is the Quantum Approximate Optimization Algorithm (QAOA) <cit.>. 
As a parameterized version of the Quantum Adiabatic Algorithm <cit.>, the QAOA utilizes a quantum phenomenon, i.e., the Adiabatic Theorem <cit.>, to solve optimization problems approximatively. In consequence of many results about improved scaling performance of the QAOA for important optimization problems compared to classical state-of-the-art solvers <cit.>, much scientific effort goes into achieving large scale numerical simulations to explore possible quantum advantages empirically <cit.>. The current standard for large-scale, noise-free quantum circuit simulations relies on efficient matrix multiplication via GPUs and more specifically the cuStateVec SDK <cit.>, which allows for a translation of circuit instructions written in Qiskit <cit.>, Pennylane<cit.>, or other SDKs, to CUDA, the toolkit for instructing computations on NVIDIA GPUs <cit.>. Due to its specific quantum circuit structure, the runtime complexity of simulating a standard p-depth, n-qubit, X-mixer-based, QAOA circuit is 𝒪(pn2^n) instead of the 𝒪(p4^n) for general p-depth quantum circuits <cit.>. As we expect no further speedup for the simulation of the X-mixer (cf. <cit.>), and to allow for arbitrary mixers (cf. <cit.>), we focus on speedup gains based on the diagonal structure of the cost operator (cf. <cit.>). In the current state-of-the-art QAOA-simulator QOKit <cit.>, Lykov et al. exploit this diagonal structure of the cost operator by precomputing the costs for all possible solutions in parallel and then using these for the cost unitary application as well as the expectation value calculation. However, QOKit is written in Python and hence cannot use CUDA natively, hindering itself from achieving the fastest possible computation. What is more, the two key features besides the precomputation of the cost Hamiltonian are not executable in its provided code due to missing implementation: the cuStateVec simulator and the gradient computation <cit.>. Resolving the shortcomings of QOKit, we develop a CUDA-based QAOA simulator (CUAOA) that is optimized to execute all practically relevant operations for QAOA-evaluation in native CUDA, oriented towards a single-GPU setting. To ensure compatibility with the expected end-user programming language Python (e.g., via PyO3 <cit.>) while allowing for direct CUDA access, we employ Rust with integrated C/C++ modules (via Foreign Function Interface (FFI)). Analog to the current state of the art, we use cuStateVec for all computations that do not concern the cost unitary, e.g., the mixer application and the sampling process. Allowing for significant speedups compared to current state-of-the-art QAOA simulators, we propose CUDA-native implementations of * the precomputation of the cost Hamiltonian, * the application of the cost unitary, * the calculation of the expectation value, and * the gradient computation. The remainder of this paper is structured as follows. In  <Ref>, we outline preliminaries on the simulation of the QAOA, the adjoint differentiation method, and CUDA. In <Ref>, CUAOA is presented and subsequently evaluated in <Ref>. Finally, we conclude our findings in <Ref>. § BACKGROUND In this section, we provide preliminaries on the QAOA, a method to efficiently compute gradients in classically simulated quantum circuits and CUDA. 
§.§ The Quantum Approximate Optimization Algorithm Given a combinatorial optimization problem by an objective function f:{ 0,1}^n→ℝ, the QAOA conducts the following steps to approximate the optimal solution <cit.>: * Mapping the objective values onto the eigenvalues of the diagonal cost Hamiltonian H_C=∑_x f(x)|x⟩⟨x|. * Preparing a system in the ground state of the mixer Hamiltonian, i.e., usually |+⟩^⊗ n for H_M=-∑_i=1^n σ_i^x. * Simulating the time evolution exp(i∫_0^T H_s(t)dt) approximatively, where H_s(t)=(1 - s(t))H_M + s(t)H_C governs the adiabatic evolution and a bijective s:[0,T]→[0,1] increases monotonically for given T>0. * Measuring the resulting state and remapping it to its corresponding solution of the objective function f. To simulate the time evolution of H_s in a quantum circuit, a discretization into p∈ℕ Hamiltonians H_s(1/T),...,H_s(T) via Trotterization is carried out, amounting the following unitary operation forming the QAOA: U(β, γ) = U_M(β_p) U_C(γ_p) … U_M(β_1) U_C(γ_1)H^⊗ n, where β_i and γ_i control the speed of the time evolution and U_M(β_i) = e^-iβ_i H_M, U_C(γ_i) = e^-iγ_i H_C, s.t. U(β, γ) approaches adiabatic time evolution for p→∞, and constant speed, i.e., β_i = 1-i/p, and γ_i = i/p <cit.>. §.§ Adjoint Differentiation The adjoint differentiation method <cit.> exploits the possibility to clone statevectors in classical quantum circuit simulators to yield a runtime of 𝒪(P) instead of the generally employed parameter shift rule which has complexity 𝒪(P· m) <cit.>, where P is the number of possibly parameterized layers in the quantum circuit and m is the number of parameters. These complexities state the query complexity of simulating the application of a layer of gates, which is generally 𝒪(4^n) for an n-qubit quantum register. In the following we assume that each circuit layer U_i has exactly one parameter θ_i, which is the case for the standard form of QAOA. For details on adaptations necessary for this approach to work for more general circuit layers involving repeated parameters and multiple parameters per layer see Ref. <cit.>. Notably, these increase the runtime complexity by constant factors. The adjoint differentiation method exploits the hermiticy of the partial derivative of the measurement operator M, i.e., ∂M/∂θ_i= ⟨0|U_1^†…∂ U_i^†/∂θ_i… U_P^† M U_P … U_i … U_1 |0⟩ + ⟨0|U_1^†… U_i^†… U_P^† M U_P …∂ U_i/∂θ_i… U_1 |0⟩ = 2 ℜ(⟨0|U_1^†… U_i^†… U_P^† M U_P …∂ U_i/∂θ_i… U_1 |0⟩), which can be written as ∇_θ_iM= 2 ℜ(b_i∂ U_i/∂θ_ik_i), where ⟨b_i|⟨0|U_1^†… U^†_i… U_P^† M U_P … U_i+1 and |k_i⟩ U_i-1… U_1 |0⟩ can be computed recursively via ⟨b_i+1|=⟨b_i|U_i+1^† and |k_i+1⟩=U_i|k_i⟩. Due to this recursive nature, it takes 𝒪(P) layer executions to calculate ∇_θ_1M and then 𝒪(1) layer executions for all other partial derivatives yielding the stated overall runtime complexity of 𝒪(P). §.§ CUDA The Compute Unified Device Architecture (CUDA) is a parallel computing platform and application programming interface (API) model that enables developers to directly access NVIDIA Graphics Processing Units (GPUs) <cit.>. The main advantage of GPUs over CPUs lies in the ability to execute computations in a massively parallelized manner. This is especially relevant for applications like matrix multiplication, which can be divided into many smaller, independent computations. CUDA is an extension of the C/C++ programming languages that has a hierarchical structure centered around threads. 
Calls from the CPU to CUDA are done via kernel functions, known as kernels, that run on the GPU. These kernels are executed by a grid of thread blocks, with each block containing multiple threads. This hierarchical arrangement enables efficient use of the GPU’s resources and provides precise control over each thread's behavior. Threads within the same block can share data through shared memory, which is unique to each block and generally faster than global memory. Communication between threads of different blocks occurs through global memory. <cit.> § RELATED WORK While multiple frameworks exist that are aimed at providing QAOA circuit implementations of various versions of the QAOA (e.g., OpenQAOA <cit.> and JuliaQAOA <cit.>), only QOKit <cit.> is aimed at—and capable of—achieving a significant speedup through GPU usage. Therefore we focus on QOKit in the remainder of this section. QOKit is targeted towards simulating the QAOA involving large amounts of qubits and offers a multi-GPU approach that shares information about the statevector via OpenMPI <cit.>. The two key features of QOKit are a parallelized computation and application of the cost operator and an algorithm to apply the X-mixer in time 𝒪(n2^n). At the time of this paper being published, QOKit is limited by significant shortcomings in their published code, i.e., missing support of the cuStateVec circuit simulator as well as missing gradient calculation, which both are key components for achieving the shortest runtimes possible. Furthermore, QOKit is limited by its implementation being carried out in Python, which manifests in that their proposed mixer unitary, as well as their OpenMPI-based parallelization perform worse compared to plain cuStateVec <cit.>. Also, for extracting information about the statevector, e.g., for sampling, the probabilities of the statevector are copied to the CPU, which displays a significant runtime bottleneck. Based on the results presented in Ref. <cit.>, the only clear speedup that QOKit provides over a plain cuStateVec-based implementation appears to be the efficient precomputation of the cost operator and its application in diagonal form. The precomputation is based on the polynomial representation of the objective function f(s)=∑_k=1^L w_k ∏_i∈ t_ks_i, where s∈{ -1,1}^n and 𝒯{(w_1,t_1),...,(w_L,t_L)} defines the polynomial terms through the indices of the involved variables t_k⊆{ i| 1≤ i ≤ n} and their associated weight w_k∈ℝ. For the computation of f(s) of all possible inputs s, an array of zeros is allocated on the GPU and then a GPU kernel iterating over all terms in 𝒯 is applied in parallel for each entry in the array. The value of each term is calculated using bitwise-XOR and population count operations to determine the sign of ∏_i∈ t_ks_i. The application of the cost unitary is executed by an element wise product of the statevector with exp(-iγ_i f(s)). § METHODOLOGY Aiming to exploit the diagonal structure of the cost operator in the QAOA, we now propose CUAOA, a CUDA-accelerated, single-GPU quantum circuit simulator for the QAOA that annihilates the shortcomings of QOKit. To offer the same convenience to end-users as QOKit—a Python interface—while enabling seamless CUDA-operability, the core module of CUAOA is written in Rust. This allows access to CUDA-instructions written in C/C++ via FFI and the integration from within Python via libraries like PyO3 <cit.>. 
This also yields the significant advantage of running the exact same program logic much faster compared to a Python implementation, because Rust is a compiled language, i.e., the code is compiled directly to machine code before execution. Starting from a baseline of the state of the art in circuit-agnostic noise-free quantum circuit simulation, i.e., cuStateVec, for our QAOA simulation, we now show how every circuit simulation component that involves a cost operator can be implemented more efficiently via a CUDA-based implementation of the operator in its diagonal form. §.§ Cost function representation In standard QAOA simulation, the cost operator is represented by quantum gates modelling the cost function. However, as the cost operator is diagonal in most practical applications (which is a result of basis encoding), the application of the cost operator in a classical circuit simulation can be carried out directly through a multiplication with a diagonal matrix, which can be fully parallelized on GPUs. Improving on the cost function computation of QOKit <cit.>, we work with a 0-1 based function representation that reduces the number of necessary additions significantly: Encoding the indices of the n binary variables using one-hot encoding, we can represent a polynomial objective function as f(x)=∑_k=1^L w_k (x ⇔ x ∧_b [∑_v_i∈ t_k2^i]_2), where each term t_k consists of variable indices v_i∈[n] and ⇔ denotes the logical equivalency that yields 1 or 0, respectively. As the binary string [∑_v_i∈ t_k2^i]_2 can easily be computed through bitwise logical AND operations (denoted ∧_b) of the one-hot representation of each term t_k, and as only the entries for which the bitstring x is nonzero have to be considered for any given x, a large number of terms can be ignored in the computation. This shortcut is not exploited in QOKit, which iterates over all terms. Thus, our approach can be significantly faster for polynomials that have a large number of small-degree terms, which are the norm in practice. For problems exhibiting symmetries (e.g., the MaxCut problem), many cost values equal each other, such that additional speedups could be gained. However, we refrain from such optimizations to maintain problem agnosticity. §.§ QAOA circuit simulation The CUAOA starts by allocating memory for the statevector as well as the cost Hamiltonian and stores the pointers referencing the respective GPU memory. In addition, a CUDA stream is created and its reference is stored in the handle, which allows multiple kernels associated with different streams to be executed in parallel on the same GPU. Further, the handle for interactions with the cuStateVec library is initialized with the handle's stream and subsequently stored. Memory for other variables is not allocated upon the handle's initialization, but only later when it is actually needed, to reduce memory usage. The statevector is initialized as an array of the CUDA double type for complex numbers in parallel for each entry. As all evaluations are carried out for the standard form of QAOA with an X-Mixer, we directly initialize all values of the array to 1/√(2^n). The cost Hamiltonian is initialized as an array of double precision entries and the value of each entry is computed via <Ref> in parallel. The cost unitary is applied to the statevector |ψ⟩ exploiting Euler's formula exp(iθ) = cosθ + isinθ through ψ_i ↦[cos(-γ_i f(x)) + i sin(-γ_i f(x))] ·ψ_i. 
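As a CPU reference for the two operations just described (the precomputation of the diagonal cost values and the element-wise phase application), the following NumPy sketch uses the bitmask encoding of the terms t_k. Names are illustrative, and the 0-1 semantics assumed here is that a term contributes w_k exactly when all of its variables are set in x.

import numpy as np

def cost_table(n, terms):
    # terms: list of (w_k, mask_k) with mask_k = sum(2**v for v in t_k)
    x = np.arange(2 ** n, dtype=np.int64)
    f = np.zeros(2 ** n)
    for w, mask in terms:
        f += w * ((x & mask) == mask)   # 1 where every variable of the term is 1
    return f

def apply_cost_unitary(psi, f, gamma):
    # psi_i <- exp(-i * gamma * f(x_i)) * psi_i, trivially parallel per entry
    return np.exp(-1j * gamma * f) * psi

On the GPU this per-term loop runs inside a kernel launched once per amplitude, which is what allows the shortcut of skipping terms whose variables are not set in the given x.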
The variational parameter γ_i is passed as a double-precision input to this operation and the CUDA built-in function for multiplication is used. Note that while cuStateVec also offers a function to directly apply a diagonal matrix[Namely .], it does not support the in-place multiplication of the γ_i, which would have to be done through another kernel, leading to resource inefficiency and thus justifying the approach stated above. To apply the mixer unitary, the corresponding cuStateVec functionality is used. As we only used the X-mixer in our evaluation, this reduces to the application of an R_x(-2β_i) gate for every qubit. §.§ Gradient computation To compute the gradient of a QAOA circuit with respect to its variational parameters γ and β, we employ the adjoint differentiation technique outlined in <Ref>, as it is the state of the art for gradient calculation in classical circuit simulation. For the QAOA, another simplification in the gradient calculation of each layer arises from the well-known identity ∂/∂ t e^tA=Ae^tA, which implies that ∂/∂γ_i e^-iγ_iH_C=-iH_Ce^-iγ_iH_C and ∂/∂β_i e^-iβ_iH_M=-iH_Me^-iβ_iH_M. While the application of H_C is trivial, the application of H_M reduces to a layer of X-gates for all of our evaluation runs, as we only consider the standard X-mixer. The uncomputation needed for each gradient calculation step described in <Ref> also simplifies based on ∂/∂ t e^tA=Ae^tA, as only -iH_M and -iH_C respectively have to be uncomputed. As both operators are Hermitian (i.e., H_M=H^†_M and H_C=H^†_C), this uncomputation can be done by applying iH_M and iH_C respectively. In our implementation we get rid of the introduced imaginary number i by switching from the real to the imaginary part (cf. <Ref>). §.§ Retrieving Results from the GPU Arguably the most important output of a QAOA simulator is the expectation value ⟨ψ|H_C|ψ⟩=∑_i=1^2^n f(x_i)|ψ_i|^2. To compute this sum, we calculate f(x_i)|ψ_i|^2 for all i based on the resulting QAOA statevector |ψ⟩ and the cost operator, and store the result in a new array of doubles. Then we calculate the sum of all components of this array by breaking it down into a tree-like hierarchical structure where, at each level, pairs of elements are added in parallel, amounting to a total of log_2(2^n)=n sequential steps. To sample from the statevector, we use the sampling functionality offered in cuStateVec. This has the big advantage that the statevector does not have to be copied to the CPU, thus avoiding any memory-transfer bottlenecks. In addition to the sampled solution bitstring, we also output the respective objective value, as it is stored on the GPU anyway, and thus save additional computational effort for the user. What is more, our implementation also supports the edge case of exporting the complete statevector from GPU to CPU. § EVALUATION To evaluate the performance of CUAOA, we examine its runtime for the full circuit execution with regard to outputting the expectation value as well as sampling, and also its performance in parameter training using gradient-based methods. In alignment with QOKit's evaluation <cit.>, we consider the MaxCut problem with three types of graphs ranging from 6 to 29 vertices: (1) random graphs generated based on the Erdős-Rényi G(n,p) model <cit.> with 25%, 50%, and 75% connectivity, (2) random 3-regular graphs, and (3) complete graphs. Generating five instances per vertex count and graph type (considering each Erdős-Rényi connectivity as its own type), this results in a dataset of 444 graphs. 
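To connect this benchmark setup with the cost-function representation introduced earlier, the following sketch generates the three graph families with networkx and converts each MaxCut instance into the (weight, bitmask) term list assumed above. Sizes, seeds and the helper name are illustrative, and the sign convention (minimising versus maximising the cut) is left open.

import networkx as nx

def maxcut_terms(graph):
    # x_i XOR x_j = x_i + x_j - 2*x_i*x_j, so each edge contributes three 0-1 terms
    terms = []
    for i, j in graph.edges():
        terms += [(1.0, 1 << i), (1.0, 1 << j), (-2.0, (1 << i) | (1 << j))]
    return terms

n = 12  # illustrative size; the benchmark ranges from 6 to 29 vertices
instances = [nx.erdos_renyi_graph(n, p, seed=0) for p in (0.25, 0.50, 0.75)]
instances += [nx.random_regular_graph(3, n, seed=0), nx.complete_graph(n)]
term_lists = [maxcut_terms(g) for g in instances]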
As baselines, we employ the current state-of-the-art HPC QAOA simulator QOKit as well as standard QAOA implementations in Qiskit and Pennylane. For Qiskit and Pennylane, cuStateVec is used to run the experiments on a GPU. The circuit simulation for QOKit is based on numba (which translates Python code into machine code upon compilation using Just-in-Time compilation and can natively be run on GPUs <cit.>), as the cuStateVec variant of QOKit is not implemented in the currently available version of their code. While this limits the comparability of our results to the results published for QOKit, our methodology shows that all of our modifications to the QAOA simulation yield theory-proven improvements over QOKit, even when cuStateVec was executable for QOKit. All experiments are run on a high-end consumer-grade system running EndeavourOS Linux x86_64 with Linux Kernel version 6.8.7-arch1-1, 64GB of RAM, an AMD Ryzen 7 3700X CPU (16 cores @ 3.600 GHz), and an NVIDIA GeForce RTX 3090 GPU. All executions are started from within a Python script. §.§ Runtime of a single QAOA circuit simulation To examine the runtime of a plain QAOA circuit execution closing with a measurement of the expectation value, we compare CUAOA with all three baselines (QOKit, Qiskit, and Pennylane) in <Ref>. For Pennylane, a memory allocation error occurred for problem instances exceeding 26 vertices, resulting in only 391 graphs being run successfully. Aside from this technical detail, we can observe that CUAOA performs best for all runs, even outperforming the state-of-the-art baselines by orders of magnitude in all small to medium-sized problem instances. It becomes evident that the effect of exponential runtime scaling only starts to manifest at around 16 qubits for CUAOA. In-line with theoretical considerations, the application of the mixer unitary is eventually the biggest bottleneck for the runtime, as can be seen by comparing <Ref> with the results from sampling-based QAOA runs displayed in <Ref>, where the slope of the runtimes of all simulators become identical for problems above 20 qubits. The main reason that prevents further speedups is the sequential application of all mixer gates. Since every single mixer gate application modifies the memory of the entire statevector, parallelization is impossible. Finally, the fact that the runtime of CUAOA is up to an order of magnitude better than QOKit's for problem instances below 20 qubits is necessarily the consequence of our CUDA-native implementation, as well as the optimized computations we introduced in <Ref>. For problem instances beyond 20 qubits, this reduced overhead apparently marginalizes, leading to roughly equal runtimes, but with CUAOA still outperforming QOKit. To evaluate the runtime of sampling bitstrings from the resulting statevector of the QAOA, <Ref> displays the results for a QAOA run with p=6 and 1024 shots. To allow for better comparison to QOKit, whose available implementation at the time this article is published does not support sampling, we implemented a minimal-effort Python script. For this, we utilize QOKit's functionality to extract the probabilities of the statevector to the CPU. Subsequently, we use the standard Python random number generator to sample from the array containing the associated cumulative probabilities (which was computed using the function of numpy). The results of CUAOA mirror those of the expectation value, showing the high degree of efficiency of both, i.e., not exceeding the runtime of the mixer application. 
While Qiskit performs quite well, Pennylane is significantly worse here than in the expectation-value runs. The reasons for this are somewhat unclear, but indicate different implementations for sampling, especially for smaller circuit sizes. As expected, our CPU-based implementation of sampling for QOKit is hardly competitive as soon as the dimensionality of the statevector increases. §.§ Parameter Training Examining gradient-based parameter training, we now study the runtime of gradient computation for all parameters. As neither QOKit nor the cuStateVec-based Qiskit backend supports gradient computation in the implementations available at the time this paper is published, our experiment is restricted to a comparison of Pennylane and CUAOA. As the Pennylane implementation also uses the adjoint method, this is a fair comparison to our QAOA-aware enhanced version of the adjoint method. As in earlier evaluation runs, Pennylane again fails to execute graphs beyond 26 vertices, resulting in only 391 successfully executed problem instances. <Ref> clearly shows that CUAOA outperforms Pennylane by multiple orders of magnitude for problem instances up to 18 qubits, being roughly 100 times faster in almost all runs. For a larger number of qubits, this gap closes to about a 10-fold speedup, with a significant runtime increase at around 18 qubits, analogous to what was observed in the plain circuit evaluation in <Ref>. Lastly, we also provide an implementation of the gradient-based versions of the optimizers L-BFGS (natively) <cit.> and BFGS (through scipy) <cit.>. Additional experiments (not displayed here for brevity) show that CUAOA is still up to two orders of magnitude faster than Pennylane when using the same optimizer (BFGS), essentially mirroring the results of <Ref>. § CONCLUSION In this paper, we proposed a classical high-performance CUDA-based QAOA circuit simulator (CUAOA) oriented towards single-GPU usage. By exploiting speedups enabled through the diagonal structure of the cost operator at multiple stages of the QAOA simulation, i.e., (1) the computation and application of the cost operator, (2) the computation of the expectation value, and (3) providing a QAOA-specialized gradient computation method based on adjoint differentiation, our proposed implementation of the CUAOA outperformed the state-of-the-art QAOA simulator QOKit by an order of magnitude (i.e., a 10-fold speedup) for small to medium-sized problem instances. For large-scale problem instances above 20 qubits, our approach also performed better than QOKit but equally suffers from the dominating runtime of the mixer operator. Notably, CUAOA offers significantly more functionality than QOKit for the key applications of (1) sampling from the statevector and (2) a GPU-based gradient computation. Further, our gradient computation runs about two orders of magnitude faster than the respective state-of-the-art approach. In conclusion, our approach can be regarded as the new state of the art for single-GPU QAOA simulation, as it outperformed all baselines up to multiple orders of magnitude in a representative evaluation. In future work, our approach could be extended towards multi-GPU scenarios, which would require mostly additional implementation while relying on the same theoretical insights. 
Further, one could natively implement constraint-preserving mixers by directly reducing the search space, which could significantly reduce numerical simulation runtime for heavily constrained problems. § ACKNOWLEDGMENT This paper was partially funded by the German Federal Ministry for Economic Affairs and Climate Action through the funding program "Quantum Computing – Applications for the industry" based on the allowance "Development of digital technologies" (contract number: 01MQ22008A) and through the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.
http://arxiv.org/abs/2407.12469v1
20240717104619
Inverse participation ratio and entanglement of edge states in HgTe quantum wells in a finite strip geometry
[ "Manuel Calixto", "Octavio Castaños" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "quant-ph" ]
calixto@ugr.es Department of Applied Mathematics, University of Granada, Fuentenueva s/n, 18071 Granada, Spain; Institute Carlos I for Theoretical and Computational Physics (iC1), Fuentenueva s/n, 18071 Granada, Spain ocasta@nucleares.unam.mx Institute of Nuclear Sciences, National Autonomous University of Mexico, Apdo. Postal 70-543, 04510, CDMX, Mexico § ABSTRACT Localization and entanglement properties of edge states of HgTe quantum wells in a finite strip geometry of width L are studied under quantum information concepts such as: 1) the inverse participation ratio (IPR), which measures localization, and 2) entropies of the reduced density matrix (RDM) for the spin sector, which measure quantum correlations due to the spin-orbit coupling (SOC). Qualitative and quantitative information on the edge-state energies and wavefunctions is extracted from analytic and numerical Hamiltonian diagonalization approaches. The previously observed exponential decay of the energy gap with L and its modulations are confirmed, and nontrivial consequences of the strip width and Rashba terms on the charge conductance are also reviewed. Analysis of the structure of the edge-state wave functions in terms of spin, momentum k_x in the x-direction and position y evidences the spin polarization structure of edge states at the boundaries. An IPR analysis reveals that the valence edge states show maximum localization on the boundaries for certain values of the momenta k_x in the vicinity of the Γ point. The edge-state wave packets involve fewer and fewer momenta as we approach the boundaries y=0,L (and, for some of them, also the center y=L/2) of the strip. A study of the RDM of the spin sector of edge states provides complementary information on the structure of spin probabilities in (k_x,y) space, giving the clear location of extremal values. The purity and entropies of the RDM inform on the regions (k_x,y) where the spin sector is more or less entangled with the rest of the system, due to SOC. 03.65.Vf, 03.65.Pm Inverse participation ratio and entanglement of edge states in HgTe quantum wells in a finite strip geometry Octavio Castaños July 22, 2024 ============================================================================================================ § INTRODUCTION The spin Hall (SH) effects are associated with relativistic spin-orbit couplings in which electric currents can generate spin currents or vice versa. See, e.g., <cit.>, and references therein, for a review of the SH effects and their development towards spintronic devices. The SH effect can be intrinsic (due to the structure of the electronic bands) or extrinsic (due to scattering processes), both emerging naturally from the formalism of the anomalous spin Hall (ASH) effect, which generates asymmetric deflections of charge carriers depending on their spin direction <cit.>. The so-called intrinsic spin Hall effect (ISH) combined with the quantum Hall effect (QH) led to the prediction and subsequent experimental verification of the quantum spin Hall (QSH) effect. The QSH state is a non-trivial topological state of quantum matter which is invariant under time reversal transformations (see e.g. <cit.> for a review). It has an energy gap in the bulk, but it has edge states with different spins moving in opposite directions, that is, counter-propagating modes at each edge. These spin currents flow without dissipation on macroscopic scales. 
Mathematically motivated by an earlier model of Haldane <cit.>, graphene was proposed by Kane & Mele as a two dimensional (2D) Dirac material to exhibit this effect <cit.>, however the spin currents were too small to be measurable. Another proposal made by Bernevig-Hughes-Zhang (BHZ) <cit.>, considering the mercury telluride-cadmium telluride semiconductor quantum wells (QW), was successful and this new QSH state of matter and spin polarization phenomena were experimentally confirmed through the observation of ballistic edge channels <cit.> and by electrical detection <cit.>. The intrinsic QSH effect can be switched on and off and tuned into resonance through the manipulation of the QW width, or the bias electric field across the QW <cit.>. Since these pioneering studies, many low-dimensional quantum spintronic devices based on the spin-polarized transport in HgTe/CdTe QWs, and other non-magnetic semiconductors, have been proposed (see e.g. <cit.>). For example, other QWs exhibiting a similar behavior to HgTe/CdTe are the so-called type-II semiconductors made from InAs/GaSb/AlSb, which have been studied in <cit.>, where they suggest to use this system to construct a QSH field effect transistor (FET). The QSH phenomenom was extended to 3D topological insulators (TI); see <cit.> for text books and <cit.> for standard reviews on TI. In this case, surface states arise with high conductivity properties, like the alloy Bi_x Sb_1-x, which exhibits 2D conducting surface states. Effective Hamiltonian models have been proposed to describe this surface states of 3D TI <cit.>. To study the finite size effects on edge states in the TI phase, there are two procedures in the current literature. On the one hand, the tight-binding method is used in the works about the QSH edge-states by <cit.>. On the other hand, an analytic procedure of the effective BHZ model Hamiltonian for the case of a finite strip geometry was given in <cit.>; here the expressions of the wave functions of the edge states are determined in analytic form. Due to the finite size of the sample, the good quantum number k_y (the wavevector component in the finite strip direction) is replaced by a complex number λ leading to localization properties of wavefunctions at the boundaries and, as a consequence, to the coupling interaction between the edge states, thus producing an energy gap. In this paper, we tackle the problem of finite size effects in the HgTe/CdTe semiconductors, including spin-orbit effects due to bulk- and structure-inversion asymmetries (resp. BIA and SIA). This problem has also been investigated in <cit.>, where they use the tight-binding method to determine and exponential decay of the energy gap (with oscillations) with the strip width L and to prove that this gap is not localized at the Γ point of the first Brillouin zone. This energy gap is also affected by an external perpendicular electric field, which tunes the Rashba (SIA) term of the Hamiltonian model. We confirm this behavior for a more general BIA term including extra electron and hole couplings preserving time reversal symmetry. We also pursue the identification of topological order through quantum information (QI) measures and concepts like entropy and entanglement. These tools have played an important role in the general understanding of quantum phase transitions. Indeed, entanglement is at the heart of the interplay between quantum information and quantum phases of matter (see e.g., <cit.>). 
Signatures of topological phase transitions in higher Landau levels of HgTe/CdTe quantum wells without SOC from an information theory perspective have been reported in <cit.>. Other localization measures, like the inverse participation ratio (IPR), have given useful information about the topological phase transition in 2D Dirac materials like silicene <cit.>. This paper analyzes the structure of edge states in HgTe QWs with SOC under QI concepts like IPR and entanglement entropy, which turn out to be an interesting “microscope” to reveal details of their internal structure. The organization of the paper is as follows. In Sec. <ref> we briefly discuss the structure of the HgTe QW Hamiltonian model and its topological phases. In Sec. <ref> we approach the analysis of edge states in a finite strip geometry of width L from two different perspectives: either looking for analytic localized eigenvectors of the low-energy Hamiltonian, or by numerically solving the tight-binding model after a lattice regularization. The first approach gives us a deeper understanding of the qualitative and internal structure of edge states, but we shall rather follow the second approach to extract quantitative information, firstly about the spectrum and the dependence of the energy gap on the strip width L and the Rashba coupling ξ, and its non-trivial consequences on the charge conductance of edge states and its potential use in the design of a QSH field effect transistor. In Sec. <ref> we take a closer look at the localization properties of edge states as a function of the spin (s=± 1), the momentum wave vector in the x-direction [k_x∈(-π/a,π/a), with a the lattice constant] and the position y∈[0,L] between the strip boundaries y=0,L. This study sheds light on the spin polarization structure of edge states at the boundaries. The spreading of edge states in momentum (k_x) and position (y) space is analyzed through an important quantum information (and statistical, in general) concept called “inverse participation ratio” (IPR). Finally, in Sec. <ref> we use the reduced density matrix (RDM) of the spin subsystem to analyze spin up/down and spin transfer probability densities of edge states as a function of the momentum k_x and position y, paying special attention to extremal values. This analysis also sheds light on the spin polarization structure of edge states. The purity of the RDM (or equivalently, the linear entropy) also gives us information about the degree of entanglement between spin and band (electron-hole) sectors. Extremal entanglement values occur for special values of the position y and momentum k_x. Other alternative correlation measures are also analyzed, all of them giving equivalent results. Finally, Sec. <ref> is devoted to conclusions. § MODEL HAMILTONIAN Following standard references like <cit.>, edge states in HgTe/CdTe QWs are described by the following 2D four-band effective Dirac Hamiltonian. The original BHZ Hamiltonian is H_BHZ = (σ_0+σ_z)/2 ⊗ h_+1 + (σ_0-σ_z)/2 ⊗ h_-1, with h_s(k) = ϵ_0(k)σ_0 + d_s(k)·σ, s=± 1, where σ=(σ_x,σ_y,σ_z) are Pauli matrices together with the 2× 2 identity matrix σ_0 and k=(k_x,k_y) is the wavevector. The spin s=± 1, 2× 2 matrix Hamiltonians h_s(k) are related by h_-1(k)=h_+1^*(-k) (time reversed) and they admit an expansion around the center Γ of the first Brillouin zone (FBZ) given by <cit.> ϵ_0(k)=γ-δ k^2, d_s(k)=(α s k_x, -α k_y, μ-β k^2), where α, β, γ, δ and μ are material parameters that depend on the HgTe QW geometry, in particular on the HgTe layer thickness ℓ. 
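As a numerical illustration of the Bloch Hamiltonian just defined, the following NumPy sketch assembles the 4×4 matrix H_BHZ(k) in the basis ordering used later in the text, {|↑,E⟩, |↑,H⟩, |↓,E⟩, |↓,H⟩} (spin ⊗ band). The parameter dictionary is a placeholder for the material values of Table <ref>, and the spin-orbit terms introduced below can be added as further Kronecker products in exactly the same way.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_s(kx, ky, s, p):
    # h_s(k) = eps0(k) s0 + d_s(k).sigma, acting on the (E, H) band space
    k2 = kx ** 2 + ky ** 2
    eps0 = p["gamma"] - p["delta"] * k2
    dx, dy, dz = p["alpha"] * s * kx, -p["alpha"] * ky, p["mu"] - p["beta"] * k2
    return eps0 * s0 + dx * sx + dy * sy + dz * sz

def H_BHZ(kx, ky, p):
    # spin-block structure (s0+sz)/2 (x) h_{+1} + (s0-sz)/2 (x) h_{-1}
    return (np.kron((s0 + sz) / 2, h_s(kx, ky, +1, p))
            + np.kron((s0 - sz) / 2, h_s(kx, ky, -1, p)))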
The parameter γ can be disregarded and we shall set it equal to zero in the following. In Table <ref> we provide these material parameters for a HgTe layer thickness ℓ=7 nm. We shall use these values throughout the manuscript unless otherwise stated. Edge states are topologically protected by the time reversal symmetry Θ = - i (σ_y ⊗σ_0) K, where K means complex conjugation. The sign(μ)=sign(ℓ_c-ℓ) of the mass or gap parameter μ, for a given HgTe layer thickness ℓ, differentiates between band insulator (ℓ<ℓ_c) and topological insulator (ℓ>ℓ_c) phases, with ℓ_c≃ 6.3 nm the critical thickness. The QSH phase is associated with a discrete ℤ_2 topological invariant <cit.>. Actually, the Thouless-Kohmoto-Nightingale-Nijs (TKNN) formula provides the Chern-Pontryagin number 𝒞_s = (1/2π)∫∫_FBZ d^2k (∂ d̂_s(k)/∂ k_x × ∂ d̂_s(k)/∂ k_y)· d̂_s(k), with d̂_s = d_s/|d_s|, which gives 𝒞_s=s[sign(μ)+sign(β)], so that the system undergoes a topological phase transition (TPT) from normal (ℓ<ℓ_c or μ/β<0) to inverted (ℓ>ℓ_c or μ/β>0) regimes at the critical HgTe layer thickness ℓ_c. Now we shall introduce spin-orbit coupling (SOC) that connects the spin blocks h_± 1. It is given by the Hamiltonian H_SOC = H_BIA+H_SIA, H_BIA(k) = Δ_z(σ_y⊗σ_y) + (Δ_e/2)(k_xσ_x-k_yσ_y)⊗(σ_0+σ_z) + (Δ_h/2)(k_xσ_x+k_yσ_y)⊗(σ_0-σ_z), H_SIA(k) = (ξ/2)(k_xσ_x+k_yσ_y)⊗(σ_0+σ_z). The spin-orbit interaction comprises a bulk inversion asymmetry (BIA) term and a structural inversion asymmetry (SIA) term, the latter manifesting as a k-linear Rashba term proportional to ξ for the electron band (see e.g. <cit.>); a finite Rashba term of this type in HgTe QWs requires the presence of a non-zero electric field ℰ_z in the z direction, so that ξ∝ eℰ_z, with e the electric charge. We shall set ℰ_z=1 mV/nm throughout the manuscript, except for the discussion of the variation of the charge conductance with ξ towards the end of Sec. <ref> and Fig. <ref>. The spin-orbit interaction H_SOC will be responsible for the entanglement between spin blocks of H_BHZ in the total Hamiltonian H=H_BHZ+H_SOC. Notice that we are arranging Hamiltonian basis states as 4-spinor column vectors of the form Ψ=(ψ_↑ E,ψ_↑ H,ψ_↓ E, ψ_↓ H)^T, where ↑,↓ refer to the spin degree of freedom s=± 1 and E,H denote the electron and hole bands, respectively. The introduction of H_SOC preserves the time reversal symmetry of the total Hamiltonian H and therefore does not affect the topological stability of the nontrivial insulator phase already discussed for H_BHZ. We shall set ℓ=7 nm and we shall analyze the topological insulator phase for the material parameters given in Table <ref>. To enhance some physical behavior due to finite size effects, we shall occasionally consider other values of Δ_z, which will be noted in due course. § ENERGY GAP FOR EDGE STATES IN A FINITE STRIP GEOMETRY In order to extract qualitative and quantitative information on edge states, we shall report on two different but complementary approaches to the solution of the Hamiltonian eigenvalue problem. §.§ Analytic approach to the solution of the effective continuous 4-band model Following Ref. <cit.> (see also <cit.> for 3D Bi_2Se_3 films grown on a SiC substrate), the general solution for edge states in a finite strip geometry can be derived analytically as follows. We choose the boundaries of the sample to be perpendicular to the y-axis. Four-spinor states Ψ(y)=Ψ_λ e^λ y localized at the edges y=± L/2 are proposed as solutions to the Schrödinger equation H(k,-∂_y)Ψ(y)=EΨ(y), obtained by replacing k_x→ k and k_y→ -∂_y. 
To have nontrivial solutions, the eighth-degree secular polynomial equation det[H(k,-λ)-E]=0 in λ must be satisfied, which gives eight different roots λ_j=λ_j(k,E), j=1,…, 8 and eight independent 4-spinor eigenvectors Ψ_j(y). Their explicit expressions are too long to be given here. Imposing open boundary conditions Ψ(y=± L/2)=0 on a general solution Ψ(y)=∑_j=1^8 q_jΨ_λ_je^λ_j y with coefficients q_j, and demanding a nontrivial solution for them, one finally arrives at the transcendental equation Q(k,E)=det[ Ψ_λ_1e^λ_1 L/2 … Ψ_λ_8e^λ_8 L/2; Ψ_λ_1e^-λ_1 L/2 … Ψ_λ_8e^-λ_8 L/2 ]=0, i.e., the determinant of an 8× 8 matrix. Solving Q(k,E)=0 for E gives the dispersion relation E(k) for edge states. Due to the exponential dependence of the proposed solution Ψ(y)=Ψ_λ e^λ y, the real part of λ represents the inverse localization length of the edge states. The dominant value of λ_j is the one with the largest real part. As proved in Ref. <cit.>, the energy gap E_g shows an exponential decay with L. Ref. <cit.> confirms the exponential decay of E_g with the strip width L but observes an oscillatory behavior coming from the imaginary part of λ and the fact that the gap closes outside the Γ point. In the next section we shall rather follow a numerical approach and we shall be able to give a more quantitative analysis of the behavior of edge states and their energies. §.§ Lattice regularization and numerical diagonalization of the tight-binding model The general solution for both bulk and edge states can be obtained through a lattice regularization of the continuum model, just replacing k_x,y→ a^-1sin(k_x,ya), k_x,y^2→ 2a^-2(1-cos(k_x,ya)), in the Hamiltonian H(k) in (<ref>), with a the lattice constant (we shall eventually set a=2 nm). Then, the Brillouin zone (BZ) is k∈(-π/a,π/a)× (-π/a,π/a). Following the general procedure of Refs. <cit.>, one Fourier transforms k_y in the total Hamiltonian ℋ=∫_BZ dk H(k) c^†_k c_k by substituting the annihilation (viz. creation) operators c_k=1/L∑_n=0^N e^i k_y y_n c_k,n, y_n=na, N=L/a, to obtain the tight-binding model Hamiltonian ℋ=∑_k,n [ℰ(k)c_k,n^† c_k,n+𝒯c_k,n^† c_k,n+1+𝒯^† c_k,n+1^† c_k,n], in (discrete) position y and momentum k=k_x spaces. Here we are considering a space discretization of the finite strip with y_n=na, n=0,…, N=L/a. The 4× 4 matrix ℰ(k) results from eliminating all terms depending on k_y in the regularized total Hamiltonian H(k). Those terms then contribute to the matrix 𝒯= ( [ β+δ/a^2 -α/2 a -Δ _e+i ξ/a 0; α/2 a δ-β/a^2 0 Δ _h/a; Δ _e-i ξ/a 0 β+δ/a^2 -α/2 a; 0 -Δ _h/a α/2 a δ -β/a^2; ]). The matrix Hamiltonian ℋ is of size 4N=4L/a and is numerically diagonalized. The Hamiltonian spectrum is composed of both bulk and edge states. Figure <ref> shows the energy spectrum E(k) for L=100 and L=400 nm as a function of the wavevector component k=k_x in the vicinity of the Γ point. Bulk conduction/valence (c/v) energy levels E_c/v are plotted in red/blue color while the four edge energy levels, whose 4-spinor states will be denoted by {Ψ_1c,Ψ_1v,Ψ_2c,Ψ_2v}, are plotted in black color, solid for {Ψ_1c,Ψ_1v} and dashed for {Ψ_2c,Ψ_2v}. Notice that Ψ_1 and Ψ_2 are nearly degenerate for conduction and valence bands, but the energy E_1c is a bit lower than E_2c and E_1v is slightly higher than E_2v, so that the energy gap is determined by E_g=min_k[E_1c(k)-E_1v(k)], with k∈(-π/a,π/a). Indeed, due to the finite size L of the strip, edge states on the two sides of it, y=0 and y=L, couple together and create the gap E_g mentioned above. As we already anticipated in Sec. 
<ref>, this gap shows an exponential decay with modulations/oscillations as a function of the strip width L, as shown in Figure <ref> (red dots). We have chosen Δ_z=10 meV this time for computational convenience, for which gap oscillations occur for smaller values of L (smaller Hamiltonian matrix sizes and fewer computational resources are required). Sudden gap drops occur at the critical strip widths L_c≃ 100 and L_c≃ 220 nm. The exponential decay is captured by the gap E_g^Γ=[E_1c(0)-E_1v(0)] at the Γ point (black points). A fit of nine values of E_g^Γ at L=100,…, 500, in steps of Δ L=50, provides the expression E_g^Γ(L)≃ e^(2.991 - 0.019 L) with determination coefficient R^2>0.999. These gap oscillations have nontrivial consequences for the charge conductance of the edge states, given by the Landauer-Büttiker formula G = 1/(e^(E_g/2-μ_F)/k_BT+1) - 1/(e^(-E_g/2-μ_F)/k_BT+1) + 1, in units of 2e^2/h. In Fig. <ref> we plot the charge conductance as a function of the chemical potential μ_F and the width L of the strip at temperature T=3 K, for the energy gaps E_g (left panel) and E_g^Γ (right panel). Sudden gap drops at the critical strip widths L_c≃ 100 and L_c≃ 220 nm yield maximum charge conductance regardless of the value of μ_F. This phenomenon does not occur for E_g^Γ. Gap drops also occur when varying the Rashba term ξ=15.6|eℰ_z| by applying a perpendicular electric field ℰ_z, as shown in Fig. <ref>. For a strip width of L=200 nm, the gap drops down to E_g≃ 0.01 meV for an electric field of |ℰ_z|=22.4 mV/nm (that is, ξ≃ 350 meV nm), and the charge conductance rises to G≃ 0.9. As suggested by <cit.>, if it is possible to have two independent control gates, one for the SIA and the other to change the Fermi energy level, then the variation of the charge conductance as a function of the chemical potential μ_F would be useful to design a QSH field effect transistor. § EDGE STATES LOCALIZATION PROPERTIES We now proceed to analyze the localization properties of the four edge states {Ψ_1c,Ψ_1v,Ψ_2c,Ψ_2v}, both in position (y) and momentum (k=k_x) space, each one of them taking the form given in (<ref>). Let us firstly consider probability densities |Ψ(k,y)|^2 = |ψ_↑ E(k,y)|^2+|ψ_↑ H(k,y)|^2 + |ψ_↓ E(k,y)|^2+ |ψ_↓ H(k,y)|^2, and normalize them according to ∫_0^L dy |Ψ(k,y)|^2=1. In Fig. <ref> we represent the probability densities |Ψ(k,y)|^2 of the four edge states as a function of y for several values of the momentum k (varying curve thickness). They turn out to be symmetric in k, that is, |Ψ_c,v(k,y)|^2=|Ψ_c,v(-k,y)|^2, so that we take k∈[0,π/a) for these plots. Valence band states are more localized at the boundaries y=0,L than conduction band states (approximately by a factor of four). Maximum localization at the edges for valence states occurs at k≃± 0.21 nm^-1 (see also later in Fig. <ref>), while for conduction states it occurs at k=0. A separate study of the four probability density components (<ref>) of the 4-spinor is shown in Fig. <ref>. Note that, although |Ψ_c,v(k,y)|^2 does not depend on the sign of k, each component of Ψ does. Spin down valence and spin up conduction component states are localized at y=L for k<0 and at y=0 for k>0, whereas spin up valence and spin down conduction component states are localized at y=0 for k>0 and at y=L for k<0. Therefore, there is a symmetry in sign(k s) (the helicity, with s=± 1), which is a reflection of the already known spin polarization of the QSH edge states, experimentally observed in <cit.>. 
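The Landauer-Büttiker expression quoted above is straightforward to evaluate numerically; a minimal Python sketch (assuming E_g, μ_F and k_BT in the same energy units, here meV, so that k_B ≈ 8.617 × 10^-2 meV/K) reads:

import numpy as np

def conductance(E_g, mu_F, T):
    # G in units of 2e^2/h; two Fermi factors for the gapped edge branches plus 1
    kB = 8.617333262e-2                       # meV/K (assumed energy unit)
    fermi = lambda E: 1.0 / (np.exp(E / (kB * T)) + 1.0)
    return fermi(E_g / 2 - mu_F) - fermi(-E_g / 2 - mu_F) + 1.0

For μ_F inside the gap and T → 0 this gives G → 0, while a vanishing gap (as at the critical widths L_c) gives G → 1, consistent with the behavior described above.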
For k=0, the probability density components show a more balanced behavior in position space. Another useful measure of localization, used in multiple contexts, is the inverse participation ratio (IPR). It measures the spreading of the expansion of a normalized vector |ψ⟩=∑_n=1^N p_n|n⟩ in a given basis {|n⟩, n=1,…,N}. It is defined as IPR_ψ=∑_n=1^N |p_n|^4, so that IPR_ψ=1/N for an equally weighted superposition |p_n|=1/√(N) and IPR_ψ=1 for p_n=δ_n,n_0. For the case of a free particle in a box y∈[0,L], the wave function ψ_m(y)=√(2/L)sin(mπ y/L), normalized according to ∫_0^Ldy|ψ_m(y)|^2=1, has an IPR=∫_0^Ldy|ψ_m(y)|^4=3/(2L), which is the lowest expected value of the IPR in our problem. For example, for a strip width of L=100 nm, we have IPR=0.015. A measure of the spreading of a 4-spinor in position space for each value of the momentum k is given by IPR_Ψ(k)=∫_0^L dy |Ψ(k,y)|^4, where now we understand |Ψ|^4=|ψ_↑ E|^4+|ψ_↑ H|^4+|ψ_↓ E|^4+ |ψ_↓ H|^4. Fig. <ref> displays IPR_Ψ(k) for the four edge states. Both valence edge states, Ψ_1v and Ψ_2v, show maximum localization in position space at k≃± 0.21 nm^-1 (mentioned above) while conduction states are more delocalized in space for all values of k∈(-π/a,π/a). Finally, we analyze the spreading of the expansion of edge states in momentum space k for a given position y. To do that, now we have to normalize 4-spinors as ∫_-π/a^π/a dk |Ψ(k,y)|^2=1 and define IPR_Ψ(y)=∫_-π/a^π/a dk |Ψ(k,y)|^4. Fig. <ref> shows that edge states involve fewer momenta at y=0,L (higher IPR), since momentum is localized around k=± 0.21 nm^-1, as was mentioned before. In the case of conduction Ψ_2c and valence Ψ_2v, the corresponding edge states also involve fewer momenta (higher IPR) at the center of the strip y=L/2. The IPR concept is related to the purity of a density matrix, which measures the degree of entanglement of a given physical state. In the next section we study entanglement properties of our edge states. § SPIN PROBABILITIES AND SPIN-BAND ENTANGLEMENT MEASURES In order to compute quantum correlations in our system, we shall use two different entanglement measures. §.§ Reduced density matrix, spin probabilities and linear entropy Let ρ=|Ψ⟩⟨Ψ| be the 4× 4 density matrix corresponding to a normalized 4-spinor state (<ref>). Denoting by Ψ(k,y) the column 4-spinor as a function of position y and momentum k, the 4× 4 density matrix at (k,y) acquires the form ρ(k,y)=Ψ(k,y)Ψ^†(k,y)/Ψ^†(k,y)Ψ(k,y), where we are normalizing by the scalar quantity Ψ^†(k,y)Ψ(k,y)=|Ψ(k,y)|^2 in (<ref>) in order to have Tr(ρ(k,y))=1 at each point (k,y). The 16 density matrix entries ρ_ij, i,j=1,2,3,4 are referenced to the basis |1⟩ = |↑⟩⊗|E⟩, |2⟩=|↑⟩⊗|H⟩, |3⟩ = |↓⟩⊗|E⟩, |4⟩=|↓⟩⊗|H⟩. The reduced density matrix (RDM) ϱ of the spin subsystem is obtained by taking the partial trace over the band sector, ϱ=Tr_EH(ρ)=( [ ρ_11+ρ_22 ρ_13+ρ_24; ρ_31+ρ_42 ρ_33+ρ_44; ]). The diagonal components of the RDM, ϱ_11=ρ_11+ρ_22=P_Ψ(↑), ϱ_22=ρ_33+ρ_44=P_Ψ(↓), represent the probabilities of finding the electron with spin up or down, respectively, whereas the modulus of the off-diagonal elements, |ϱ_12|=|ϱ_21|=|ρ_13+ρ_24|=P_Ψ(↑→↓), represents the spin transfer probability amplitude (also called coherence in quantum information jargon). In Fig. <ref> we plot the probabilities P_Ψ(↑) and P_Ψ(↓) for the first conduction Ψ_1c and valence Ψ_1v edge states as a function of (k,y). Lighter colors represent higher probability zones. 
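The construction of the spin RDM and the quantities extracted from it can be summarised in a few lines of NumPy; the sketch below assumes the basis ordering |1⟩=|↑,E⟩, …, |4⟩=|↓,H⟩ given above, and the function names are chosen for illustration only.

import numpy as np

def spin_rdm(Psi):
    # 2x2 RDM of the spin sector: trace out the band (E, H) index
    Psi = np.asarray(Psi, dtype=complex)
    rho = np.outer(Psi, Psi.conj()) / np.vdot(Psi, Psi).real
    rho = rho.reshape(2, 2, 2, 2)            # indices (spin, band, spin', band')
    return np.einsum('abcb->ac', rho)

def spin_quantities(Psi):
    q = spin_rdm(Psi)
    P_up, P_down = q[0, 0].real, q[1, 1].real     # P(up), P(down)
    P_flip = abs(q[0, 1])                         # spin-transfer amplitude |rho_12|
    S_lin = 1.0 - np.trace(q @ q).real            # linear entropy 1 - Tr(rho^2)
    return P_up, P_down, P_flip, S_lin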
Probability densities are unbalanced at the boundaries y=0,L depending on the propagation direction given by the sign of k. This is a reflection of the existence of counterpropagating modes of opposite spin at the edges. Note that these probabilities are invariant under the sign of the helicity sign(k s), with s=± 1 the spin, for each value of y. This is again a reflection of the experimental confirmation in <cit.> that the transport in the edge channels is spin polarized. In Fig. <ref> we plot spin transfer probability amplitudes for Ψ_1c and Ψ_1v as a function of k and y for a strip width of L=100 nm. The maximum probability P_1c^max.(↑→↓)≃ 1/2 is attained at the center of the strip y=L/2 for k≃± 1.54 nm^-1, and the minimum probability P_1c^min.(↑→↓)=0.003 is attained at y=34 and y=66 nm for k≃± 0.1 nm^-1. These extrema are quite flat, as can be perceived in Fig. <ref>. Analogously, for the first valence edge state, there is a quite flat zone of maximum probability P_1v^max.(↑→↓)≃ 1/2 around y=L/2 and k≃± 1.4 nm^-1, and of minimum probability P_1v^min.(↑→↓)=0.02 around y=22 and y=78 nm for k≃± 0.1 nm^-1. We now analyze the spin-band quantum correlations by means of the linear entropy, which is defined through the purity Tr(ϱ^2) as S=1-Tr(ϱ^2). Maximum entanglement means S_max=1/2 for a 2× 2 RDM ϱ, whereas pure states have S=0. In Fig. <ref> we show the linear entropies S_i(k,y) of the four edge states Ψ_i, with i∈{1c,1v,2c,2v}, as a function of (k,y) for a strip width of L=100 nm. The entropy is symmetric in k and y, and we shall only show half of the interval in momentum space (that is k∈(0,π/a)). For Ψ_1c and Ψ_2c, the maximum entanglement S≃ 1/2 occurs at y=L/2 and k≃ 0.11 nm^-1. For Ψ_1v, the maximum entanglement S≃ 0.38 occurs at y=L/2 and k≃ 0.3 nm^-1, whereas for Ψ_2v, the maximum entanglement S≃ 0.33 occurs at y≃ 32 and y≃ 68 nm and k≃ 0.36 nm^-1. §.§ Schlienz & Mahler entanglement measure We shall also briefly discuss another related entanglement measure in the field of quantum information, namely the one proposed by Schlienz & Mahler <cit.> for a bipartite system of an arbitrary number of levels D (“quDits”). In our case, D=2 and the qubit-qubit system refers to the spin up-down and band E-H sectors. The entanglement measure is defined as follows. The 4× 4 density matrix ρ is now written in terms of the 16 generators of the unitary group U(4), which can be written as tensor products of Pauli matrices like in (<ref>) and (<ref>). More precisely ρ = 1/4σ_0 ⊗σ_0+ 1/4∑_k=1^3 (λ^(1)_k σ_k ⊗σ_0+ λ^(2)_k σ_0 ⊗σ_k) + 1/4∑_k, j C^(1,2)_kjσ_k ⊗σ_j , with λ^(1) = Tr(ρ σ⊗σ_0) , λ^(2) = Tr(ρ σ_0⊗σ ), C^(1,2)_kj = Tr(ρ σ_k ⊗σ_j). The vectors λ^(1) and λ^(2) denote the Bloch coherence vectors of the first qubit (spin up-down) and the second qubit (band E-H) and the 3 × 3 matrix C^(1,2) accounts for qubit-qubit (spin-band) correlations. The RDM on the spin sector is ρ^(1)=Tr_2(ρ)=1/2σ_0+1/2∑_k=1^3λ^(1)_kσ_k, and analogously for the band sector ρ^(2). Comparing ρ with the direct product ρ^(1)⊗ρ^(2), the difference comes from a 3× 3 entanglement matrix M with components M_jk = C^(1,2)_jk - λ^(1)_j λ^(2)_k , j,k = 1,2,3. Based on M, Ref. <cit.> introduces a measure of “qubit-qubit” (spin-band) entanglement given by the parameter B_Ψ = (1/3)Tr(M^T M). The parameter B is bounded by 0≤ B ≤ 1 and carries information about spin up and down correlations. 
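The Schlienz-Mahler parameter B is equally direct to compute from a 4×4 density matrix; in the sketch below (illustrative names) the first tensor factor is the spin and the second the band, as in the decomposition above.

import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
s0 = np.eye(2, dtype=complex)

def schlienz_mahler_B(rho):
    # Bloch vectors of spin and band, correlation matrix C, and B = Tr(M^T M)/3
    lam1 = np.array([np.trace(rho @ np.kron(s, s0)).real for s in pauli])
    lam2 = np.array([np.trace(rho @ np.kron(s0, s)).real for s in pauli])
    C = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in pauli] for si in pauli])
    M = C - np.outer(lam1, lam2)
    return np.trace(M.T @ M) / 3.0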
The results for B provide an equivalent behavior to the linear entropy in Figure <ref>, except for a scaling factor. § CONCLUSIONS We have used QI theory concepts like IPR, RDM and entanglement entropies as an interesting “microscope” to reveal details of the internal structure of HgTe QW edge states with SOC (induced by the bulk and structural inversion asymmetries) in a finite strip geometry of width L. To do this, we have considered a four-band Hamiltonian describing the low-energy effective theory. Quantitative information on the edge-state energies and wavefunctions is extracted from a numerical Hamiltonian diagonalization approach, which is complemented by an analytic (more qualitative) view. We corroborate previous results on the intriguing oscillatory dependence of the energy gap with L, this time for a more general SOC, with sudden gap drops for critical strip widths L_c. The non-trivial consequences of the Rashba term on the charge conductance are also reviewed, with a possible design of a QSH FET. The spin polarization structure of edge states in position y∈[0,L] and momentum k_x∈(-π/a,π/a) has also been evidenced by using probability density and IPR plots. The IPR analysis reveals that, in general, edge-state wave packets involve fewer and fewer momenta as we approach the boundaries y=0,L of the strip, with maximum localization for certain values of the momenta ± k_x in the vicinity of the Γ point. Complementary information on the structure of spin polarization of edge states in (k_x,y) space is extracted from the RDM for the spin subsystem. Contour plots of the RDM entries show the extremal values of spin up and down and spin transfer probabilities in (k_x,y) space. Also, entropies of the RDM inform on regions in (k_x,y) space where the spin sector is highly entangled with the rest of the system, due to spin-orbit coupling. The behavior of the quantum correlations does not seem to depend on the particular entanglement measure used. § ACKNOWLEDGMENTS We thank the support of Spanish MICIU through the project PID2022-138144NB-I00. OC is on sabbatical leave at Granada University, Spain. OC thanks support from the program PASPA from DGAPA-UNAM.
http://arxiv.org/abs/2407.13638v1
20240718161247
A Comparative Study on Automatic Coding of Medical Letters with Explainability
[ "Jamie Glen", "Lifeng Han", "Paul Rayson", "Goran Nenadic" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT This study aims to explore the implementation of Natural Language Processing (NLP) and machine learning (ML) techniques to automate the coding of medical letters with visualised explainability and light-weighted local computer settings. Currently in clinical settings, coding is a manual process that involves assigning codes to each condition, procedure, and medication in a patient's paperwork (e.g., 56265001 heart disease using SNOMED CT code). There is preliminary research on automatic coding in this field using state-of-the-art ML models; however, due to the complexity and size of the models, real-world deployment has not been achieved. To further facilitate the possibility of automatic coding practice, we explore some solutions in a local computer setting; in addition, we explore the function of explainability for transparency of AI models. We used the publicly available MIMIC-III database and the HAN/HLAN network models for ICD code prediction purposes. We also experimented with the mapping between ICD and SNOMED CT knowledge bases. In our experiments, the models provided useful information for 97.98% of codes. The result of this investigation can shed some light on implementing automatic clinical coding in practice, such as in hospital settings, on the local computers used by clinicians; project page: <https://github.com/Glenj01/Medical-Coding>. § INTRODUCTION The coding of medical letters is currently something that is completed manually in advanced healthcare systems such as those of the UK and the US [NHS UK <https://www.nhs.uk/>]. It involves professionals reviewing the paperwork for a patient's hospital visit or appointment and assigning specific codes to the conditions, diseases, procedures, and medications in the letters. This study aims to examine the potential automation of this process using Natural Language Processing (NLP) and Machine-Learning (ML) techniques, to create a prototype that could be used alongside the coders to speed up the coding process and to explore whether such a system could be integrated into real practice. Clinical codes are used to remove ambiguity in the language of the letters, provide easily generated statistics, give a standardised way to represent medical concepts and allow the NHS's Electronic Health Record (EHR) system to process and store the codes more easily <cit.>. Also, in the case of private healthcare providers, coding can make it easier to keep track of billing [<https://www.ashfordstpeters.nhs.uk/clinical-coding>]. 
To do this, the coder takes a medical letter as input, which can be anything from a prescription request to a hospital discharge summary, and outputs potential codes from a designated terminology and/or classification system. The NHS ‘fundamental information standard’ is the “Systemised Nomenclature of Medicine – Clinical Terms” (aka SNOMED-CT) terminology system, which uses ‘concepts’ to represent clinical thoughts. Each concept is paired with a ‘Concept Id’ – a unique numerical identifier e.g., 56265001 heart disease (disorder) - which is then arranged by relationships into hierarchies from the general to the more detailed <cit.>. It is worth noting that SNOMED is not the only system used for coding. The other system relevant to this work is the International Classification of Diseases (ICD), specifically ICD-9 [<https://www.cdc.gov/nchs/icd/icd9cm.htm>]. This was the official system used to code diagnoses and procedures in the US. While SNOMED is a terminology system that has a comprehensive scope, covering every illness, event, symptom, procedure, test, organism, substance, and medicine, ICD is a classification system with a scope of just classifying diagnoses and procedures. In the NHS UK, coding is a significant issue because it takes time, energy, and resources away from an already underfunded and overworked system. There have been efforts to solve this by having dedicated clinical coding departments in larger hospitals [<https://www.stepintothenhs.nhs.uk/careers/clinical-coder>]; however, in most smaller practices, it is still the medical professionals who will do the coding. It takes the average coder 7-8 minutes to code each case, and a dedicated department of 25-30 coders usually codes more than 20,000 cases monthly <cit.>. Even so, there is almost always a backlog of cases to be coded, which has been known to extend over a year. It is estimated that AI applications in the healthcare industry have the potential to free up 1.944 million hours each year for healthcare professionals, with the biggest cut being taken from AI in virtual health assistance (such as automated medical coding) at 1.145 million hours <cit.>. Clinical coding is such a challenging task due to two main concerns. The first is that the classification systems are complex and dynamic. The international edition of SNOMED contains 352,567 concepts [Five Step Briefing, SNOMED international <https://www.snomed.org/five-step-briefing>], and while it should be noted that not all of these are diagnoses, finding the correct code can be challenging. The other issue is that there is no consistent structure in the documents to be coded. They can be notational, lengthy, and incomplete in addition to being full of abbreviations and symbols. Since all the coding is done manually, the human factor must also be considered. A study by burns2012systematic_12 found that the median accuracy of coders under evaluation was 83.2%. It should be noted that this was with an interquartile range of 67.3% - 92.1%, which further proves the issue of inconsistency with human coding. This paper explores the potential of replacing the time-consuming process of manually coding letters with a program that automatically assigns codes to letters in a local computer setup. 
In the following sections, this paper will explore the background of automated medical coding, explain the implementation choices and issues encountered with this investigation, review the testing methods and results, and conclude by discussing the implications of these findings and the potential future of medical coding. § BACKGROUNDS AND RELATED WORK The background session will be presented in two sections. The first section, pre-neural networks, will focus on the early attempts at automated medical coding, how they worked, and the reasons why none of them were implemented in the real world. The second section, the introduction of neural networks, will follow the development from recurrent neural networks to transformer-based attention networks. We will explore the methodology and results of each one and conclude with the platform on which the chosen model is based. §.§ Pre-Neural Methods Most papers regarding general healthcare NLP can be divided into two topics: text classification and information extraction <cit.>. Classification can be split into three versions, each getting more complex: binary classification, where an instance is in one of two distinct categories (e.g., smoker or non-smoker); multi-class classification, where there are multiple categories, but an instance can still only be assigned to one class (e.g., current smoker, former smoker, non-smoker); and multi-label text classification [<https://huggingface.co/blog/Valerii-Knowledgator/multi-label-classification>]. This involves instances that can be associated with several different labels/categories simultaneously, such as discharge letters, in which each letter always contains multiple conditions. Automated medical coding is often identified as a multi-label text classification problem; however, some older attempts still utilise information extraction or a combination of methods from both topics. The first attempts at automated clinical coding were from around 1970, such as this 1973 study by dinwoodie1973automatic_15 that utilises a ‘fruit machine’ methodology. This entails representing each significant word of a diagnosis with an associated code number and, like a fruit machine in a pub, the code is correct when a common code number appears for all words in the diagnosis. While this study returns impressive results with a correct coding rate of over 95%, this is only done with a small collection of pre-coded morbidity data from 16 doctors around Scotland. Thus, the project will not scale up to the complex real-world scenario. No real progress was then made for the next few decades. A 2010 literature review on clinical coding <cit.> evaluated the results of 113 studies, the earliest being the above 1973 study, and concluded that while the systems hold promise, there has been no clear trend of improvement over time. Another interesting trend from this review is that, while no improvements had been made, researchers' interest was increasing, as all but 4 of the studies found were published after 1994. Examples of attempted innovation from this period include a study from farkas2008automatic_17rule focusing on rule-based automated radiology report coding. It uses a variation of multi-label classification that treats the assignment of each label as a separate task, as opposed to treating valid sets of labels as a single class. It then builds a rule-based expert system that operates on if-then codes through the ICD hierarchy. 
It uses decision trees (which recursively classify the data through conditions, similar to the rule-based system used to classify codes) to predict false positives, which occur when the model incorrectly predicts a positive outcome. It then uses a maximum entropy classifier to tackle false negatives, calculating each token's probability of a false negative. Both the decision tree and max entropy classifier worked to increase the micro-averaged F_β=1 scores by around 4%, to 87.92%. While these rule-based solutions are very accurate for the specific types of documents they examine, they will not generalise well to new problems since they are domain-specific. For them to be feasible for real-world use, the rules would need to be extended to tens of thousands of codes and would require a substantial investment of time and expertise to be executed properly. Statistical approaches were also attempted, such as the initial attempts from mullenbach-etal-2018-explainable_CAML, which utilised logistic regression (LR), and perotte2014diagnosis_28, which made use of Support Vector Machines (SVM). However, the results on the full MIMIC database (shown in Figure <ref>) indicated that they were also infeasible. Therefore, a different method had to be attempted: deep learning and neural networks. §.§ Neural Networks and Attentions The general approach of deep learning in neural networks aims to learn a complex function from the training data that maps the information in the text to an appropriate set of medical codes <cit.>. Before any deep learning is completed, the common first step in these projects - aside from preprocessing - is to produce word embeddings for each token. Each embedding is a semantically meaningful mathematical representation, usually a vector, of the token designed so that tokens with similar meanings have similar vectors <cit.>. To compare the meaning of two words, one calculates the cosine similarity of their corresponding vectors. The most common method for doing this is ‘word2vec’, which operates on the assumption that words with similar meanings tend to occur in similar contexts. It uses either a continuous bag of words (CBOW) model that predicts the target words based on the context words (words surrounding the target word) or a skip-gram that predicts the context words based on the target words <cit.>, both of which are examples of single-layer neural networks. A more advanced version of word2vec that strays from the standard embedding practice of one vector per word/token/document is the development of bidirectional encoder representations from transformers (BERT) <cit.>. These are massive pre-trained language models that are too resource-intensive to be trained from scratch in most circumstances; however, models trained on a general corpus can be fine-tuned to meet specific needs (such as clinical text mining through transfer learning <cit.>). Unfortunately, due to their size and complexity, they are not currently feasible to be trained on larger datasets without significant modification. The first successful deep learning attempts utilised recurrent neural networks (RNNs), with a focus on two specific types: Gated Recurrent Units (GRUs) and Long Short-Term Memory Networks (LSTMs). The project <cit.> constructs an RNN with a single layer consisting of 20 time steps; with each time step, a normalised vector representing a patient note is submitted in time-sequential order (oldest to most recent). 
The activation (threshold) function is tanh, a mathematical operation applied to the weighted sum of inputs and biases in each neuron that introduces non-linearity into the network. There is a dropout rate of 0.1 that is applied to prevent overfitting during training, and a learning rate of 0.001 is used to determine how much the weights of the network are updated during each training iteration. Finally, the model uses a sigmoid output with a cross-entropy loss, the sigmoid normalising each neuron's output to a value between 0 and 1. GRUs are implemented as recurrent units, where each unit contains a reset gate and an update gate, which allow the GRU to regulate the flow of information and selectively update its hidden state. They are computationally more efficient; however, they may be outperformed by LSTMs in tasks requiring long-range dependencies. LSTMs are built like GRUs but using three gates instead of two: an input gate, a forget gate, and an output gate. They are more powerful due to their additional gates and memory cells that allow them to better preserve information over time. Convolutional neural networks (CNNs) consist of convolution layers and pooling layers and are mainly used for image and video processing; however, if the text is manipulated and processed correctly, they can be very effective for text processing. For example, one of the most successful studies into automated medical coding is the 2018 project Convolutional Attention for Multi-Label Classification (CAML) <cit.>, which utilised a CNN but swapped the pooling layer for an attention mechanism. This attention mechanism is applied to the data to identify relevant portions of the document for each code prediction, allowing it to selectively focus on and assign higher importance to the relevant words and phrases <cit.>. Using attention mechanisms in this way also allows for enhanced interpretability. It provides insights into which parts of the document the model made its predictions from, instead of the input just being put through a function as with previous methods. With attention come transformer-based networks; while attention networks are not exclusively transformer-based, transformers are exclusively attention-based <cit.>. They rely solely on self-attention mechanisms, processing the entire input sequence in parallel. This makes them more efficient for handling long sequences and allows for faster training and inference than more sequential models like RNNs. Transformers also allow for multi-head attention, an extension of self-attention in which several attention heads process the input in parallel, enabling transformers to capture different aspects of the input data and to model more complex relationships and patterns. This has recently been introduced into automated medical coding and, as demonstrated with HiLAT <cit.>, it is already promising. However, due to the computational complexity of such a model, it has only been tested on the limited MIMIC-III-50 dataset. The table shown in Figure <ref> demonstrates automated coding techniques' slow but consistent progress. The highlighted segments represent the top performers in their respective categories. The transformer-based HiLAT model outperforms every other model in every metric when tested on the MIMIC-III-50 database.
On the other hand, the CNN + attention-based model of CAML does the same when tested against all the models on the MIMIC-III Full database, while it is also the only model that can provide a level of explainability to its answers. These results indicate that an attention-based model is the preferred choice due to its superior results and its ability to provide explainability for its answers. §.§ The MIMIC-III Dataset In Clinical NLP, the first resource is the MIMIC-III dataset, which is the only publicly available mainstream English dataset with enough data to perform proper training. Additionally, most models that attempt to solve the automatic coding problem use this dataset. MIMIC-III <cit.> is a large, freely available database comprising de-identified health-related data associated with over forty thousand patients who stayed in critical care units of the Beth Israel Deaconess Medical Centre between 2001 and 2012 [<https://mimic.mit.edu/docs/iii/>]. The database is freely available to researchers worldwide, provided they have become a credentialed user of PhysioNet <cit.> and completed the required ‘Data or Specimens Only Research’ CITI training [<https://physionet.org/content/mimiciii/view-required-training/1.4/>] (or another recognized course in protecting human research participants that includes HIPAA requirements). All data in the MIMIC database has been deidentified per HIPAA (Health Insurance Portability and Accountability Act) standards. This ensures that all 18 listed identifying data elements, such as names, telephone numbers, and addresses, are removed. The only elements not removed are dates, which are shifted in a random but consistent manner to preserve intervals. Therefore, all dates occur between 2100 and 2200, but the time of day, day of the week, and approximate seasonality have been conserved. MIMIC is a relational database consisting of 26 tables containing different forms of data, from the patient’s clinical notes in NOTEEVENTS to extremely granular data such as the hourly documentation of patients’ heart rates. This makes it a vast and complex database to work with; however, since we are only using the database for its clinical notes, only five tables are required: * NOTEEVENTS – Deidentified notes, including nursing and physician notes, ECG reports, imaging reports, and discharge summaries. * DIAGNOSES_ICD - Hospital-assigned diagnoses, coded using the International Statistical Classification of Diseases and Related Health Problems (ICD) system. * PROCEDURES_ICD - Patient procedures, coded using the International Statistical Classification of Diseases and Related Health Problems (ICD) system. * D_ICD_DIAGNOSES - Dictionary of International Statistical Classification of Diseases and Related Health Problems (ICD) codes relating to diagnoses. * D_ICD_PROCEDURES - Dictionary of International Statistical Classification of Diseases and Related Health Problems (ICD) codes relating to procedures. This still leaves a lot of unnecessary data. For example, the NOTEEVENTS table contains CHARTTIME, CHARTDATE, and STORETIME, which are the time and date a note was charted and the time it was stored in the system. The notes in NOTEEVENTS vary in usefulness and format, with the type of note indicated in the DESCRIPTION column. Since the medical coding projects that use MIMIC unanimously choose the discharge summaries, as they contain the most potential codes per letter (15.9 labels per document), we removed all the other types of notes.
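A minimal pandas sketch of the kind of filtering and joining this involves is shown below (the actual pipeline is described next); the file names and the column used for filtering follow the schema described above, and the output file name is an assumption.

import pandas as pd

# Keep only discharge summaries from the free-text notes table.
notes = pd.read_csv("NOTEEVENTS.csv", usecols=["SUBJECT_ID", "HADM_ID", "DESCRIPTION", "TEXT"])
discharge = notes[notes["DESCRIPTION"] == "Discharge Summary"]

# Concatenate the diagnosis and procedure code tables into one table of codes.
diag = pd.read_csv("DIAGNOSES_ICD.csv", usecols=["HADM_ID", "ICD9_CODE"])
proc = pd.read_csv("PROCEDURES_ICD.csv", usecols=["HADM_ID", "ICD9_CODE"])
all_codes = pd.concat([diag, proc], ignore_index=True).dropna(subset=["ICD9_CODE"])

# One semicolon-separated label string per admission, joined onto the note text.
labels = (all_codes.groupby("HADM_ID")["ICD9_CODE"]
          .apply(lambda codes: ";".join(str(c) for c in codes))
          .reset_index()
          .rename(columns={"ICD9_CODE": "LABELS"}))
notes_labeled = discharge.merge(labels, on="HADM_ID", how="inner")
notes_labeled[["SUBJECT_ID", "HADM_ID", "TEXT", "LABELS"]].to_csv("notes_labeled.csv", index=False)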
This was done by creating a new table that copied each line as long as the DESCRIPTION = ‘Discharge Summary’. The next step is to combine the data in separate tables into one table for easier access. Another note on MIMIC is about its most popular subset, MIMIC-III-50, which contains only the notes and codes of the top 50 most frequently occurring codes (Table <ref>). First occurring in CAML <cit.>, MIMIC-III-50 is often used as a proof-of-concept database for automatic medical coding projects due to it being significantly smaller (8,067 documents compared to 47,724) and with fewer labels (5.7 compared to 15.9 for MIMIC full), which means it takes less time and computational resources to train against. Projects like HiLAT <cit.> that face challenges in accessing the necessary computing power for training their models have utilised the MIMIC-III-50 dataset to train on and achieve state-of-the-art results. The only issue with using MIMIC-III-50 is that, as Figure <ref> demonstrates, it doesn’t give the same opportunity to test models against a long tail distribution. A database that follows a long tail distribution is one where there are many data points that are not well-represented, and the majority of occurrences are concentrated around a few values at the “head” of the distribution <cit.>. This accurately describes the MIMIC-III-Full database, where the top 105 codes make up 50% of the total labels in the set, and there are 3,110 labels that have fewer than 5 examples <cit.>, with 203 codes not appearing in any discharge summaries at all. Handling the long tail distribution of MIMIC is one of the key challenges that will need to be addressed by the potential models to be deployed. § MODEL SELECTIONS We have selected three potential models, and in this section each model will be evaluated, reviewing its results, methodology, and suitability for the study's needs, concluding with the chosen model. §.§ Problem Formalisation Before each selected model is evaluated, the problem needs to be formally defined. Take 𝒳 as the collection of clinical notes and 𝒴 as the full set of labels (ICD-9 codes). Each instance x_d ∈ 𝒳 is the word sequence of a document, d, and is associated with a label set y_d ⊆ 𝒴, where each y_d can be represented as a |𝒴|-dimensional multi-hot vector (a vector where multiple elements can have a value of 1, indicating multiple features/categories are present at the same time), y_d = [y_d1, y_d2, ..., y_d|𝒴|], with y_dl∈{0, 1}, where y_dl = 1 indicates that the l’th label has been assigned to the d’th instance and 0 indicates irrelevance <cit.>. From this, the task of the models is to learn a complex function f: 𝒳→𝒴 from the training set. All the chosen models use the same loss function, binary cross-entropy, and optimise it with L2 normalisation using the Adam (Adaptive Moment Estimation) optimiser <cit.>. Loss functions are used in neural networks as a measure of how well the network's predictions match the true values of the training data, with binary cross-entropy loss measuring the dissimilarity between the true binary labels and the predicted probabilities of the model. In the context of these models, L2 normalisation is used to avoid overfitting, which occurs when the model fits a particular dataset so closely that it fails to generalise to new, unseen data. To prevent this, penalty terms proportional to the magnitude of the weights (Euclidean norm) are added, which penalise overly specific mappings and encourage the model to learn simpler, more generalised weight configurations.
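To make this objective concrete, the sketch below builds a multi-hot target vector and evaluates the sigmoid binary cross-entropy loss, with the L2 penalty applied through the optimiser's weight decay; the label set, dimensions, and penalty strength are illustrative placeholders rather than the settings of any of the models discussed here.

import torch
import torch.nn as nn

label_index = {"401.9": 0, "427.31": 1, "428.0": 2}       # toy code-to-position map
num_labels = len(label_index)                              # |Y| in the notation above

def multi_hot(codes):
    """Multi-hot target y_d: y_dl = 1 if label l is assigned to document d, else 0."""
    y = torch.zeros(num_labels)
    for code in codes:
        if code in label_index:
            y[label_index[code]] = 1.0
    return y

model = nn.Linear(128, num_labels)                         # stand-in for the function f: X -> Y
doc_features = torch.randn(1, 128)                         # stand-in document representation
target = multi_hot({"427.31", "428.0"}).unsqueeze(0)

loss_fn = nn.BCEWithLogitsLoss()                           # sigmoid + binary cross-entropy
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)  # L2 penalty

loss = loss_fn(model(doc_features), target)
loss.backward()
optimiser.step()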
The Adam optimiser is a popular optimisation algorithm used to update the parameters of a neural network to minimise the loss function during training. §.§ Model-1: Convolutional Attention for Multi-Label Classification (CAML) CAML <cit.> (Figure <ref>), as already mentioned in the background section, utilises a CNN-based architecture but swaps the traditional pooling layer for an attention mechanism. The model starts by horizontally concatenating pretrained word embeddings into a matrix, X. A sliding-window approach, as is standard in CNNs, is then applied to this matrix, computing a convolution over each window and resulting in the matrix H. Next, the model applies a per-label attention mechanism. For each label, l, the product of H with a label-specific attention vector is computed, and the result of this is passed through a SoftMax operator that essentially reduces the input values to the range [0,1] while ensuring that they sum up to 1 so they can be used as probabilities. This SoftMax operator returns the distribution over locations in the document in the form of attention vector α. This attention vector is then used to compute vector representations for each label, v_l. Finally, a probability is computed for label l using another linear layer and sigmoid transformation to obtain the final label predictions y_l. This normalisation process ensures that the probability of the label is normalised independently rather than normalising the probability distribution over all labels like the SoftMax operator does. §.§ Model-2: Hierarchical Label Attention Network (HLAN) The HLAN model <cit.> is built around providing explainability for its results, and consists of an embedding layer, the HLAN layers, and a prediction layer. The embedding layer converts each token in the sentence into a continuous vector, where the word embedding algorithm word2vec returns the vector of word embeddings x_di. The HLAN makes extended use of Gated Recurrent Units (GRU) to capture long-term dependencies. The GRU unit processes tokens one by one, generating a new hidden state for each token. At each hidden state, the GRU considers the previous tokens using a reset gate and an update gate. The GRU method implemented is known as Bi-GRU because it reads the sequence both forwards and backwards, concatenating the states at each step, to create a more complete representation. The label-wise word-level attention mechanism contains a context matrix, V_w, where each row, V_wl, is the context vector for the corresponding label y_l. The attention score is calculated as a SoftMax function of the dot-product similarity between the vector representation of the hidden layers from the Bi-GRU and the context vector for the same label. The sentence representation matrix C_s is computed as the weighted average of all hidden state vectors h^i for the label y^i. The label-wise sentence-level attention mechanism is computed in much the same way, outputting sentence-level attention scores and the document representation matrix C_d. The prediction layer then utilises a label-wise dot-product projection with logistic sigmoid activation to model the probabilities of each label for each document. Finally, the binary cross entropy loss function is optimised with L2 normalisation and the Adam optimiser.
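The label-wise attention shared by CAML and HLAN can be sketched schematically as follows; the dimensions are arbitrary and the token representations H would come from the CNN or Bi-GRU described above, so this is an illustration of the mechanism rather than either model's exact implementation.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

T, d, L = 200, 64, 50                 # tokens, hidden size, number of labels (arbitrary)
rng = np.random.default_rng(0)
H = rng.normal(size=(T, d))           # per-token representations from the encoder
U = rng.normal(size=(L, d))           # one context/attention vector per label

V = np.zeros((L, d))                  # label-specific document representations
alphas = np.zeros((L, T))             # attention distributions, usable for visualisation
for l in range(L):
    alphas[l] = softmax(H @ U[l])     # distribution over token positions for label l
    V[l] = alphas[l] @ H              # attention-weighted average of token vectors

# Each V[l] is then scored by a per-label linear layer plus a sigmoid (omitted here).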
The HLAN has an extra label embedding initialisation (denoted as +LE) that can be implemented in place of the normal embedding layer and functions by leveraging the complex semantic relations (how different elements are related to each other in terms of their meanings) among the ICD codes. The embedding works off the idea that, for two correlated labels, one would expect the prediction of one label to impact the other for some notes; this is represented by giving each label representation corresponding weights. The HLAN model was based on the HAN model <cit.>, where the only difference between the two is that at the sentence and document level, HLAN utilises contextual matrices, whereas HAN uses contextual vectors. This means that while HLAN is more individually label-oriented, HAN still produces an attention visualisation for the whole document, and its results are only slightly worse while reducing the computational complexity of training the model. The HAN model <cit.> was originally proposed as “Hierarchical Attention Networks for Document Classification”. §.§ Model-3: Multi-Hop Label-wise Attention (MHLAT) Much like HLAN, MHLAT <cit.> comprises three main components: an input/encoder layer, an MHLAT layer, and a decoder layer (Figure <ref>). It also utilises the same label-wise attention mechanism; however, that is where the similarities end. In the encoding layer, MHLAT first splits the text into chunks with 512 tokens per chunk. It then adopts the general-domain pre-trained XLNet <cit.> (similar to BERT but less computationally expensive), which is further trained on MIMIC and then applied to every chunk. The representations of the chunks are then concatenated to form a global representation of the input text, H. While both HLAN and MHLAT apply label-wise attention through multiple passes, HLAN uses multiple Bi-GRUs that increase the scope each time, whereas MHLAT presents a ‘multi-hop’ approach. Initially, the label-wise attention is derived from matrices of the tokens of the input sentence from the encoder, followed by a ‘fusion’ operation that combines label-specific representations and label embeddings. A hop function is then defined that iteratively updates context information and label embeddings, which is then repeated. The decoding layer implements an independent linear layer for computing the label score and utilises the same binary cross entropy loss function as the other models. §.§ Model summaries Going purely by the results (given in Figure <ref>), the MHLAT model returns state-of-the-art performance compared to the others in every metric it reports. However, it is worth noting that the model, despite being attention-based, does not factor any type of explainability into its design. As mentioned in the motivations, we want to explore some level of interpretability of coding models; otherwise, the professionals (clinicians) using them would have no way to verify the results and build trust. Looking at the results of the remaining models, it is clear that HLAN performs better than CAML, which in turn performs better than HAN. However, the objective of the project was to prioritise explainability in the results, which made HLAN/HAN the ideal choice: despite a slight reduction in performance on the MIMIC Full dataset, the enhanced interpretability of its answers justifies its use, especially in domains such as medical coding where transparency and understanding of the models’ decisions are crucial.
§ CODING WITH EXPLAINABILITY The goal of this study is to develop a program that fulfils the investigation aims, namely producing SNOMED codes and visualisations, and that can then be used to evaluate how a comparable system could be implemented in a real setting, such as NHS UK. The program was implemented in Python 3.8 using the TensorFlow framework and leverages the HAN model to predict ICD codes, converts these codes to SNOMED, and provides visualised attention scores for each document. §.§ Data Processing and ICD Coding The preprocessing (Figure <ref>) takes three of the tables from MIMIC described in Section <ref>, NOTEEVENTS, PROCEDURES_ICD, and DIAGNOSES_ICD, and combines them into one table, notes_labeled, with the schema SUBJECT_ID, HADM_ID, TEXT, LABELS where: * SUBJECT_ID – identifier unique to a patient, found in NOTEEVENTS. * HADM_ID – identifier unique to a hospital stay, found in NOTEEVENTS. * TEXT – The free text of the document. There can be multiple documents with the same HADM_ID. Found in NOTEEVENTS. * LABELS – ICD-9 labels professionally assigned and stored in sequence order in either DIAGNOSES_ICD or PROCEDURES_ICD, depending on whether they were diagnoses or procedures. This is accomplished by first concatenating both _ICD tables into one table of codes, ALL_CODES. In the next step it preprocesses the raw TEXT from NOTEEVENTS, removing tokens that contain no alphabetic characters (i.e., removing 500 but not 500mg), removing white space, and lowercasing all tokens. The processed text is stored in the disch_full table, which is then joined on the HADM_ID of each line to the ALL_CODES table to form the notes_labeled table. The code then generates the MIMIC_III_50 database by iterating through the notes_labeled file, counting the occurrences of each code, and saving the HADM_IDs to 50_hadm_ids and the codes to TOP_50_CODES. Both the standard notes_labeled and the dev_50 tables are split 90/10 into train/test respectively and stored in the train/test versions of their tables. When attempting to train the HLAN model on the full MIMIC dataset, the system that it was being trained on (our local PC) did not have sufficient memory; therefore, the HAN model <cit.> was used instead. This model did not need to be trained as the pretrained model could be downloaded from the GitHub repository. The result is a working model that takes a text document as input and outputs an attention visualisation in Excel and a list of predicted codes in the console. §.§ Entity Linking to SNOMED Now with a working model, the next step is to map the ICD codes to SNOMED (Figure <ref>). The map [<https://www.nlm.nih.gov/research/umls/mapping_projects/icd9cm_to_snomedct.html>] was originally created for the Unified Medical Language System (UMLS) to facilitate the translation of legacy data still coded in ICD-9 to SNOMED CT codes. Therefore, it is perfect for the project's needs. It does contain multiple columns of data that are not required, mainly usage statistics; however, these can simply be ignored. The most recent (202212) release of the map was implemented by UMLS and is split into two tab-delimited value files with the same file structure: one for one-to-one mappings and one for one-to-many mappings. The one-to-one mapping contains 7,596 mappings (64.1% of ICD-9 codes), with each line in the file being a separate mapping. For example, the ICD code 427.31 (Atrial Fibrillation) maps directly to the SNOMED code 49436004 (Atrial Fibrillation (disorder)).
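A minimal sketch of loading this one-to-one file and looking up a predicted code is shown below; the file name and column headers are assumptions about the UMLS release rather than verified values, so they may need adjusting to the actual download.

import csv

def load_one_to_one(path):
    """Read the tab-delimited UMLS map into {ICD-9 code: (SNOMED concept id, fully specified name)}."""
    mapping = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            mapping[row["ICD_CODE"]] = (row["SNOMED_CID"], row["SNOMED_FSN"])
    return mapping

one_to_one = load_one_to_one("ICD9CM_SNOMED_MAP_1TO1.txt")
# Expected to print something along the lines of ('49436004', 'Atrial fibrillation (disorder)').
print(one_to_one.get("427.31"))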
The one-to-many file contains 3,495 mappings (29.5% of ICD-9 codes), with the mapping being one ICD code to multiple SNOMED codes. The file is set out as one-to-one maps, with the one ICD code being repeated for each of the many SNOMED codes, for example: * 719.46 – Pain in joint, lower leg | 202489000 – Tibiofibular joint pain * 719.46 – Pain in joint, lower leg | 239733006 – Anterior knee pain * 719.46 – Pain in joint, lower leg | 299372009 – Tenderness of knee joint This was implemented by first loading the one-to-one map into a dictionary, then iterating through the predicted_codes list. At each iteration (new ICD code) the program checks to see if the ICD code is in the one-to-one map. If it is, the associated SNOMED code and FSN (fully specified name) are outputted; if not, the one-to-many map is loaded as a dictionary. The program searches for the ICD code in the one-to-many dictionary, and if found, it outputs all the SNOMED codes related to the ICD code. This is done so that even if the program cannot find a direct mapping, it can at least provide the user with potential options. If an ICD code cannot be found in any mappings, the system will print the ICD code description from either D_ICD_DIAGNOSES or D_ICD_PROCEDURES. There are only a few cases, approximately 6.4% of the ICD codes, where there are no mappings available. This usually occurs with catch-all NEC (not elsewhere classified) ICD codes, such as 480.8 - Pneumonia due to other virus not elsewhere classified, for which SNOMED has no alternative mappings available. After all these steps, the project now takes notes as input through a text document, processes them using the HAN model, and calculates the attention levels of the ICD codes. The program then converts the ICD codes into SNOMED codes with as many 1-to-1 mappings as it can find, outputting that to the console (Figure <ref>). Finally, the attention visualisation is exported into Excel (Figure <ref>) which shows each word in the file and highlights it in a shade of blue. The deeper the blue highlight, the greater the weight that word had when calculating the ICD codes. The visualisation displayed in Figure <ref> is split up halfway down for ease of viewing. In reality, the left-hand side of the upper picture and the right-hand side of the lower picture are joined next to each other. §.§ Evaluations Setups The experiments are evaluated in two ways – first, the model is tested against the standard testing scores of micro/macro F1 and precision. Second, the implementation of SNOMED mapping is also considered, calculating the percentage of codes it can predict/give options for. To accurately test the model, data had to be gathered by running the model against MIMIC discharge summaries from the test files. This was accomplished by randomly selecting 100 notes from the test_full file (refer to sample size and model confidence by gladkoff-etal-2022-measuring). We then ran each set of notes through the model and put it through a program that returned the true and false positives, as well as the false negatives from the results by comparing the labels generated by the model to the true labels in the file, where: * True Positives – when the model predicts a label, and it is correct. * False Positives – when the model predicts a label, but it is incorrect. * False Negatives – when the model doesn’t predict a label even though there is a correct label. 
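The comparison itself reduces to simple set operations per document, as the sketch below illustrates; it mirrors the definitions above rather than reproducing the exact evaluation script.

def count_outcomes(predicted, gold):
    """True positives, false positives and false negatives for one document's label sets."""
    predicted, gold = set(predicted), set(gold)
    return len(predicted & gold), len(predicted - gold), len(gold - predicted)

tp, fp, fn = count_outcomes(predicted={"401.9", "427.31"}, gold={"427.31", "584.9"})
print(tp, fp, fn)   # 1 1 1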
Now that these values were generated, the model was tested against the same metrics that have been used in all the models previously. * Recall - measures how often a model correctly identifies positive instances (true positives) from all the actual positive samples in the dataset [<https://www.evidentlyai.com/classification-metrics/accuracy-precision-recall>] and is calculated by dividing the number of true positives by the number of positive instances (true positives + false negatives). * Precision – measures how often a model correctly predicts the positive class, calculated by dividing the number of correct positive predictions (true positives) by the total number of instances the model picked as positive (both true and false positives). The precision results from earlier models were with P@5, P@8, or P@15, which means measuring the proportion of relevant items within the top 5, 8, or 15 items retrieved by the system. * F1 Score – Calculated as the harmonic mean of the precision and recall scores, therefore, encouraging similar values for both precision and recall. The more the precision and recall deviate from each other, the worse the score. * Macro F1 score - is an average of the F1 scores obtained, representing the average performance of the model across all classes (each class having the same weight). * Micro F1 score - computes a global average F1 score by counting the sums of the true positives, false negatives, and false positives and then putting those into the normal F1 equation. It essentially computes the proportion of correctly classified observations out of all observations (each token having the same weight). Aside from gathering these results, the other data collected was that of the SNOMED scores. This was gathered when running the same tests to find the other values, and each returned SNOMED score could be grouped into one of 4 categories: * 1-to-1 – The ICD to SNOMED code was a one-to-one match * 1-to-M – The ICD to SNOMED code was a one-to-many match * No Map – No ICD to SNOMED map was found. * No DESC – There was no description found associated with the ICD codes in the D_DIAGNOSES_ICD MIMIC file. This was a rare valid return due to the formatting of the D_DIAGNOSES_ICD file. §.§ Evaluation Results §.§.§ ICD Coding Evaluation For ICD coding evaluations, the first 20 documents tested were listed in Figure <ref>, with the full list in Appendix. The combined results of all the tests (Table <ref>) were then calculated, returning the macro F1 as 0.041 (compared to 0.036 from previous HAN tests) and the micro F1 as 0.403 (compared to 0.407 from previous HAN tests). The similarity to the previous results demonstrates that the model was functioning as intended, so although the results weren’t state of the art, they were what was expected. The same can be said for precision, which We calculated using the first 15 values returned, otherwise known as P@15 (the same as previous tests), to get a precision of 0.599 (compared to 0.613). While these results aren’t the same as the previous HAN model testing, this is to be expected as only 100 documents were tested. This means that if there were outliers, they had a greater effect on the overall results, and the more documents that were tested, the closer to the actual values the results will become. 
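For reference, these metrics can be computed from the accumulated counts as in the following sketch, which follows the definitions given above rather than any particular evaluation library.

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def micro_macro_f1(per_label_counts):
    """per_label_counts: list of (tp, fp, fn) tuples, one entry per label."""
    macro = sum(f1(*c) for c in per_label_counts) / len(per_label_counts)
    tp, fp, fn = (sum(c[i] for c in per_label_counts) for i in range(3))
    return f1(tp, fp, fn), macro          # (micro F1, macro F1)

def precision_at_k(ranked_codes, gold, k=15):
    """P@k: proportion of the k highest-scored codes that appear in the gold label set."""
    return sum(code in set(gold) for code in ranked_codes[:k]) / k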
§.§.§ SNOMED Mapping Evaluation Regarding the SNOMED mapping, from the individual results (shown in Figure <ref>), each row was summed, with 100 subtracted from the No DESC value to ensure that the error of the program producing a No DESC result at the end of each document was not considered in the total. From this, a 1-to-1 map is displayed 52.91% of the time, and a 1-to-many map is displayed 13.88%, which means the program successfully mapped to SNOMED on 66.79% of attempts. The unexpected result in this situation is the significant amount of ‘no maps’ returned. This is due to differing versions of ICD-9 codes utilised, as MIMIC uses the standard ICD-9 coding, but the mapping uses ICD-9-CM, the clinical modification used for morbidity coding. This means that there will be codes in one version that are not featured in the other, and unfortunately, there is not much that can be done to resolve this aside from creating a new mapping. Even when returning a ‘no map’, the program still returns the description of the ICD code which is useful information for the user. Therefore, this implementation returns a useful response for 97.98% of attempted codes. § CONCLUSIONS AND FUTURE WORK This study aimed to compare existing coding methods and produce a model that automatically assigns labels to medical texts and gives an explainable outcome, to explore how this investigation can be implemented in real practice, e.g. NHS UK. High ethical standards were maintained during the project considering the field of study. As outcomes, the model does automatically assign labels to the medical texts utilising a pre-trained HAN model that emphasises interpretability in its outcomes, producing a document explaining how it reached its decisions. The project also explores the potential of integrating a similar system into a real setting, utilising mappings to SNOMED as well as having a medical professional give feedback throughout the development of the system and evaluate the results of the final program (Appendix for human evaluations). Regarding future works specifically for real applications, we believe that for a project like this to be viable, a new dataset needs to be created that more accurately represents the data the model is going to come across. Using discharge summaries from MIMIC to train the model and then expecting it to perform on completely different data is infeasible; no matter how complex the model is and how good it gets at zero-shot learning, etc., it will only ever be good at modelling data that is similar to the data it's trained against. Making a new database would also eliminate the need to map between coding standards, as making a new database specifically for use cases, e.g. NHS UK, means it can be mapped to SNOMED by default. Another direction is that we can deploy some SOTA medication and treatment extraction tools for richer annotation of clinical data, such as recent work by Belkadi2023etal_PLM4clinicalNER,Tu2023etal_MedTem. From a more general perspective, automated medical coding as a problem seems to be advancing towards transformer-based solutions in both the full modelling like MHLAT and word embeddings with BERT. This technology shows definite promise with its results against MIMIC-III-50, with its only limit being the computational feasibility of training such a complex model. 
§ LIMITATIONS After our first meeting, the external stakeholder created a simplified mock-up of the NHS Electronic Health Record (EHR) system to store patient information [<https://github.com/furbrain/SimpleEHR>]. The system integrated the SNOMED codes into the EHR utilizing the SNOMED terminology service Hermes [<https://github.com/wardle/hermes> Hermes : terminology tools, library and microservice.]. Since one of the objectives of the project was to demonstrate how it could be implemented into the wider NHS system, creating a mock-up of the EHR was deemed a good starting point. Unfortunately, there were issues getting Hermes (more specifically the Hermes docker file) to function on a Windows PC, but these issues did not persist on the university virtual machines (VMs); therefore, the project was moved to the Linux-based VMs. Doing this had its own problems, as we no longer had permissions to ‘sudo install’ any of the Python libraries required to run Hermes. To solve this, a custom text-based VM had to be created with all the permissions needed to run Hermes. There were access problems regarding this VM with incorrect SSH keys, but once this was fixed, a Hermes terminology server was successfully set up on the VM. Gaining access to MIMIC-III required the completion of two CITI training modules: Data or Specimens Only Research, and Conflicts of Interest (both in the Appendix). After this, our PhysioNet account (PhysioNet is a repository of medical data and is where MIMIC is available to download) became credentialed and, therefore, gained access to the full MIMIC dataset. Unfortunately, the custom VM did not have enough space for the full MIMIC dataset. Therefore, the dataset had to be downloaded onto our personal Windows PC without the working Hermes server and the project restarted from there. From here preprocessing could begin to make MIMIC and the HLAN compatible. § AUTHOR CONTRIBUTIONS § ACKNOWLEDGEMENTS We thank the external stakeholder (a local GP) for the support, feedback, and human evaluation during this project. LH and GN are grateful for the grant “Integrating hospital outpatient letters into the healthcare data space” (EP/V047949/1; funder: UKRI/EPSRC). § APPENDIX § STUDY CONTEXT This paper explores the potential of replacing the time-consuming process of manually coding letters with a program that automatically assigns codes to letters. For the program to be of any value to its intended users, the external stakeholder (who is a local GP and has an interest in programming) stated that the output should be explainable. This would allow the users to verify the results if unsure and increase the trust between them and the system. The stakeholder also stated that ideally the system would be easily implemented into the wider NHS systems, so the system can store and link the codes and letters to the patients they are about. This would allow the program to utilise previous letters about the patient to aid with the coding. Due to the program being oriented around the inherently personal topic of healthcare, ethics approval to gain access to the resources required would always be important. We had to gain access to MIMIC-III (Medical Information Mart for Intensive Care), which is a free database comprised of deidentified healthcare data, as well as the UK and US versions of SNOMED-CT and access to the UMLS ICD-9 to SNOMED-CT maps from the NIH.
The MIMIC database had to be pre-processed to train the HLAN (Hierarchical Label Attention Network) system that generated the ICD-9 label predictions. These label predictions had to be mapped to SNOMED-CT terminology codes, and the label predictions exported in a user-friendly and readable manner. The external stakeholder will evaluate this, and tests will be created to validate the results already generated by the HLAN and see if mapping to SNOMED affects them. The following training was conducted for good practice: * CITI training [<https://physionet.org/about/citi-course/>]: Collaborative Institutional Training Initiative (CITI Program) * Massachusetts Institute of Technology affiliates * Curriculum group: Human Research * Course Learner Group: Data or Specimens Only Research § HUMAN EVALUATION INSIGHTS The second method of our evaluation is to allow the stakeholder to try to code some example real-world letters. To evaluate this program, we will collect the results of the program coding those letters, as well as the stakeholder's verbal feedback on how this would fit within the NHS. To complete the stakeholder evaluation, the external stakeholder prepared six example letters containing a mix of common and uncommon diseases/procedures that they would come across in their everyday work. The letters included sections designed to test the system, such as the example letter below signed by ‘Dr xxx xxx’: Dear Dr xxx, Thank you for sending xxx to me. I agree that I think she has quite bad psoriasis; I will refer her for phototherapy. Yours Sincerely, Dr xxx xxx The letters were processed with the model, and the predicted codes and their attention maps were shown to the stakeholder (the other letters are contained in Appendix <ref>). Unfortunately, the results on almost all the letters were disappointing. With the letter above, the correct codes would be 9104002—psoriasis and either 31394004—light therapy, which is the parent to all forms of phototherapy, or 428545002—phototherapy of skin as the more specific result. The model returned the results and attention map shown in Figure <ref>. With these results, not only were the predicted codes incorrect, but the attention maps were also wrong and dropped words. This did not happen with any of the MIMIC discharge summaries, which, even when the codes were wrong, at least specified where in the letter the codes were found (as demonstrated in Figure <ref>). There was one letter where the result was correct; the letter stated, ‘I reviewed xxx following his PCA - this has indeed shown a MI which is clearly causing LVF, as evidenced by his raised BNP. We will proceed to a CABG’, where, in this case, LVF = left ventricular failure and CABG = coronary artery bypass graft. The model returned 42343007 - congestive heart failure, which the external stakeholder identified as a perfect match for LVF, and the procedure ‘continuous invasive mechanical ventilation for less than 96 consecutive hours’, which, although oddly specific, does occur during a CABG. Since using the pre-prepared letters didn’t give the system a chance to demonstrate how it returns the codes, the external stakeholder was also given the codes returned from a MIMIC discharge summary (Figure <ref>) that showed codes with direct and indirect SNOMED mappings.
Regarding this, they stated that with a good enough accuracy of coding, the solution would genuinely be useful for medical coding, with their only critique being that when there is no direct mapping, usually the least specific (parent in the hierarchy – in the example in Figure <ref> that would be 55822004 - Hyperlipidaemia) should be used. From these results conclusions can be made looking at the issues from two angles. The first is that, despite the best efforts of the model, it has succumbed to overfitting with the MIMIC discharge summaries, leading to it not properly functioning when given data that doesn’t resemble said discharge summaries. The other conclusion is that the MIMIC database simply isn’t representative enough of what this project aims to code. The model is only trained using discharge summaries, which are long and detailed documents, but more importantly, they only contain diseases/procedures that would require hospitalisation. This also explains why the model successfully predicted heart failure – a serious condition that presumably would have been included in multiple discharge summaries – but didn’t detect the other letters (included in Appendix <ref>) about less serious diseases such as ear infection, headaches, and psoriasis. A note on this conclusion is that the final letter that describes ‘Waldenström’s Macroglobulinemia’ – a rare form of blood cancer - returned no mappings despite it being something with potential for hospitalisation. This was still the case when we changed it to its other well-known name, lymphoplasmacytic lymphoma. Finally, the stakeholder stated that another thing to be added to make it truly useful would be that it implements the whole of the SNOMED terminology, not just the diagnoses and procedures. Using MIMIC data, the models can only be trained on ICD-9 codes, which as described earlier only contain diagnoses and procedures. SNOMED also has hierarchies for medicines, tests, organisms, and substances that also need coding. § IMPLEMENTATION DETAILS Implementing the HAN model came with surprisingly few difficulties considering its complexity and the previous issues with everything in the project so far. It required Python 3.8 instead of 3.6 and TensorFlow 1 instead of PyTorch like CAML. A note on TensorFlow 1 - The only version available for download is TensorFlow 1.15, deprecated from TensorFlow 2.0.0 and installed through the TensorFlow Hub onto an Anaconda (conda) virtual environment. To preprocess the data so that it is in the format expected for the HLAN model to train/test, it requires the same preprocessing as CAML. There were some issues running this as some of the Python libraries, more specifically the versions of NumPy, SciPy, and Scikit-Learn in the requirements list, kept throwing errors about each other’s versions on installation. This was fixed by doing a clean install of Python 3.6 in a virtual environment, and this virtual environment was where the CAML preprocessing script was run [<https://github.com/jamesmullenbach/caml-mimic>]. In this virtual environment there were problems running Jupyter Notebook, but to fix this, the code was copied from the notebook into a regular Python file that did what the notebook would have done, just without the visualisation. Since a deprecated installation of pandas was installed due to python versioning differences, each time a new line of combined codes and processed text was added, a new blank line was also added that made the program throw errors. 
This was sorted by running the clean_notes program that removed all blank lines. The model was then used by running the runTest.py file with the existing code blocks already set up for MIMIC-III. § FULL EVALUATION RESULTS The full evaluation results are listed in Figure <ref> and <ref>. § EXAMPLE LETTERS FROM STAKEHOLDER AND RESULTS Letter 1: “ Dear xx xxx, I saw xxx today in clinic. I think he has chronic otitis media. I have inserted some grommets, which should hopefully improve his hearing. Yours Sincerely, xx xxx ” ⇒ Letter 1 (anonymized) result is shown in Figure <ref>. The prediction results for ICD code is ‘proc code 38.93’ (Venous catheterization), prediction 427.31 = atrial fibrillation. Letter 2: “ Dear xx xxx, Thank you for sending xxx to me. I agree that I think she has quite bad psoriasis; I will refer her for phototherapy. Yours Sincerely, xx xxx xxx ” ⇒ Letter 2 (anonymized) result is shown in Figure <ref>. The prediction result SNOMED mapping for ICD CODE 244.9 [<https://www.findacode.com/icd-9/244-9-hypothyroidism-primary-nos-icd-9-code.html>] is 40930008, which is Hypothyroidism (disorder) [<https://www.findacode.com/snomed/40930008–hypothyroidism.html>]. ICD code V45.01 is cardiac pacemaker in situ [<https://www.findacode.com/icd-9/v45-01-postsurgical-state-cardiac-pacemaker-icd-9-code.html>]. Letter 3: “ Dear xx xxx, I reviewed xxx following his PCA - this has indeed shown a MI which is clearly causing LVF, as evidenced by his raised BNP. We will proceed to a CABG xx xxx xxx ” ⇒ Letter 3 (anonymized) result is shown in Figure <ref>. It predicted SNOMED mapping 42343007, which is congestive heart failure (disorder) [<https://bioportal.bioontology.org/ontologies/SNOMEDCT?p=classes conceptid=42343007>]. ICD code 96.71 is “continuous invasive mechanical ventilation for less than 96 consecutive hours” [<https://www.findacode.com/icd-9/96-71-continuous-mechanical-ventilation-less-than-96-icd-9-procedure-code.html>]. Dear xxx xxx, I saw xxx today, he has clearly developed Waldenstroms Macroglubulinaemia, which is unusual given his Tay-Sach's disease. I will start him on chemotherapy shortly. Best Wishes, xxx xxx xxx xxx ⇒ No codes found.
arXiv:2407.13424v1 [cond-mat.stat-mech, cond-mat.quant-gas, quant-ph], 18 July 2024
Apparent delay of the Kibble-Zurek mechanism in quenched open systems
Roy D. Jara Jr. and Jayson G. Cosme
rjara@nip.upd.edu.ph, National Institute of Physics, University of the Philippines, Diliman, Quezon City 1101, Philippines. jcosme@nip.upd.edu.ph, National Institute of Physics, University of the Philippines, Diliman, Quezon City 1101, Philippines. § ABSTRACT We report a new intermediate regime in the quench time, τ_q, separating the usual validity of the Kibble-Zurek mechanism (KZM) and its breakdown for rapid quenches in open systems under finite quench protocols. It manifests in the power-law scaling of the transition time with τ_q as the system appears to enter the adiabatic regime, even though the ramp is already terminated and the final quench value is held constant. This intermediate regime, which we dub the delayed KZM, emerges due to the dissipation preventing the system from freezing in the impulse regime. This results in a large delay between the actual time the system undergoes a phase transition and the time inferred from a threshold-based criterion for the order parameter, as done in most experiments. We demonstrate using the open Dicke model and its one-dimensional lattice version that this phenomenon is a generic feature of open systems that can be mapped onto an effective coupled oscillator model. We also show that the phenomenon becomes more prominent near criticality, and its effects on the transition time measurement can be further exacerbated by large threshold values for an order parameter. Due to this, we propose an alternative to the threshold-based criterion which uses spatio-temporal information, such as the system's defect number, for identifying the transition time. § INTRODUCTION Initially formulated to describe the evolution of topological defects in the early universe <cit.>, the KZM has been successful in describing the dependence of the defect number and duration of a continuous phase transition on the quench timescale, τ_q <cit.>. In particular, the theory has been tested in multiple platforms, ranging from atomic Bose-Einstein condensates <cit.> and spin systems <cit.> to Rydberg atom set-ups <cit.> and trapped-ion systems <cit.>. It has also been tested in dissipative quantum systems <cit.>, and has recently been extended to include generic non-equilibrium systems <cit.>. Under the standard KZM, a generic closed system with a continuous phase transition has a diverging relaxation time, τ, and correlation length, ξ, as it approaches its critical point, λ_c. In particular, one expects τ and ξ to scale as τ∝ |ε|^-vz and ξ∝ |ε|^-v <cit.>, respectively, where ε = ( λ - λ_c) / λ_c is the reduced distance of the control parameter, λ, from the critical point, while v and z are the static and dynamic critical exponents, respectively. It is then expected that if the system is linearly quenched via a ramp protocol, ε = t / τ_q, the system will become frozen near λ_c due to τ diverging. This motivates the introduction of the Adiabatic-Impulse (AI) approximation, where the system's dynamics are classified into two regimes <cit.>. Far from λ_c, the system is in an adiabatic regime, in which its macroscopic quantities adiabatically follow the quench. Near λ_c, the system enters the impulse regime, wherein all relevant observables remain frozen even after passing λ_c.
It only re-enters the adiabatic regime and transitions to a new phase after some finite time referred to as the freeze-out time, t̂, has passed <cit.>. This occurs after the system reaches the AI crossover point, ε(t̂), setting t̂∼τ( ε(t̂) ) <cit.>. The KZM predicts that, due to the scaling of τ, t̂ and ε(t̂) must follow the scaling laws <cit.> t̂∝τ_q^vz/(1 + vz), ε(t̂) ∝τ_q^-1/(1 + vz). While the standard KZM has been successful in explaining the dynamics of continuously quenched systems, studies on systems with finite quenches have shown that the mechanism breaks down if the quench terminates quickly at a certain value, ε_f <cit.>. In particular, t̂ and ε(t̂) saturate at a finite value as τ_q→ 0, with ε(t̂) = ε_f, and thus t̂∼τ(ε_f) <cit.>. In Ref. <cit.>, this breakdown of the KZM is predicted to occur at some critical quench time τ_q, c = t̂_fast / ε_f, where t̂_fast is the saturation value of t̂ in the sudden quench limit, τ_q→ 0. Measuring the exact values of t̂ and ε(t̂) is, unlike defect counting, a non-trivial task due to the limitations in detecting the exact time a system re-enters the adiabatic regime. As such, it is common to employ a threshold criterion and measure instead the transition time, t̂_th, which is the time it takes for an order parameter to reach a given threshold after passing λ_c. The crossover point at the transition time, ε(t̂_th), is similarly defined. For a sufficiently small threshold value, it is assumed that t̂_th is a good approximation for t̂. While this method is successful in showing the power-law scaling of t̂_th and ε(t̂_th) as a function of τ_q <cit.>, it remains unclear whether the inherent deviation between t̂ and t̂_th leads to significant effects on the scaling of the KZM quantities for generic quench protocols. In this work, we report an intermediate regime separating the breakdown and validity of the KZM that appears in open systems under the finite quench protocol depicted in Fig. <ref>(a). In this regime, the transition time follows the power-law scaling predicted by the KZM even though the system appears to relax after the quench has terminated, as illustrated in Fig. <ref>(b). As we will show later, this regime manifests precisely due to the dissipation exacerbating the deviation between the freeze-out time and the transition time, leading to a delay in the detection of the phase transition, as schematically represented in Fig. <ref>(b). We demonstrate using the open Dicke model (DM) <cit.> and its one-dimensional lattice extension, the open Dicke lattice model (DLM) <cit.>, that the range of τ_q where we observe this "delayed" KZM is a generic feature of open systems with finite dissipation strength, κ. We also show that the delayed KZM is more prominent near criticality and that its signatures become more significant for large threshold values for an order parameter. Thus, our work highlights subtleties of the KZM in open systems in finite quench scenarios relevant to experiments. The paper is structured as follows. In Sec. <ref>, we introduce a minimal system that can exhibit the delayed KZM when quenched at intermediate values of τ_q. By deriving an effective potential for the minimal system, we show that the phenomenon is due to a relaxation mechanism induced by the dissipation of the system. Then, using the open DM and open DLM as a test bed, we demonstrate in Sec. <ref> that the delayed KZM is a generic feature of open systems under a finite quench and that the phenomenon becomes more prominent near criticality. In Sec.
<ref>, we explore how the deviations brought by the delayed KZM can be further exacerbated with large thresholds for order parameters and propose an alternative method for measuring the transition time beyond the threshold-based criterion. We provide a summary and possible extensions of our work in Sec. <ref>. § DELAYED KZM: THEORY Consider a generic open system with a continuous phase transition that is described by the Lindblad master equation <cit.>, ∂_tρ̂ = -i [ Ĥ( ε(t) ) / ħ, ρ̂] + 𝒟ρ̂, where 𝒟ρ̂ = ∑_ℓκ_ℓ( 2 L̂_ℓρ̂L̂_ℓ^† - {L̂^†_ℓL̂_ℓ, ρ̂}) is the dissipator, and Ĥ(ε(t)) is the time-dependent Hamiltonian of the system. The system undergoes a phase transition from a normal phase (NP), in which the global symmetry of the system is preserved, to a symmetry-broken phase via a finite quench protocol, ε(t) = t / τ_q for t_i≤ t ≤ε_fτ_q and ε(t) = ε_f for ε_fτ_q < t ≤ t_f, where t_i = -τ_q and t_f are the initial and final time of the quench. In the following, we demonstrate that if the system can be approximated as or mapped onto an effective coupled oscillator system (COS), with at least one dissipative channel, as sketched in Fig. <ref>(a), then we should observe a finite range of τ_q where the deviation between t̂ and t̂_th becomes significant enough that we get a contradictory behaviour between the scaling of the transition time and crossover point. To observe the dynamics of the systems considered in this work, we will use a mean-field approach and assume that for any operators,  and B̂, < ÂB̂> ≈< Â> < B̂>. This allows us to treat any operators as complex numbers and use the notation A ≡< Â>. We numerically integrate the systems' mean-field equations in Appendix <ref> using a standard 4th-order Runge-Kutta algorithm with a time step of ωΔt = 0.01, where ω is a frequency associated with the dissipative channel, as we will show later. The Hamiltonian of the COS is Ĥ^COS/ħ = ωâ^†â + ω_0b̂^†b̂ + λ(t) ( â^† + â) ( b̂ + b̂^†) , where ω and ω_0 are the transition frequencies associated with the bosonic modes â and b̂, respectively, and λ(t) is the coupling strength between the two modes. The â-mode is subject to dissipation, which is captured in the master equation by the dissipator 𝒟ρ̂ = κ(2 âρ̂â^† - {â^†â, ρ̂}). The COS has extensive applications in multiple settings, including cavity-magnon systems <cit.>, atom-cavity systems <cit.>, and spin systems <cit.>. The open COS has two phases: the NP, which corresponds to a steady state with a = b = 0, and an unbounded state where both modes exponentially diverge as t →∞ <cit.>. When nonlinearity is present, the unbounded state can be associated with a symmetry-broken phase, in which the modes choose a new steady state depending on their initial values. These two states are separated by the critical point <cit.> λ_c = 1/2√(ω_0/ω( κ^2 + ω^2) ) . To show that the COS is a minimal model that can exhibit the KZM and its subsequent breakdown at small τ_q, we consider its dynamics as it transitions from the NP to the unbounded state. We do this by initializing the system near the steady state of the NP, a_0=-b_0=0.01. We then apply the quench protocol in Eq. (<ref>) to the COS and track the dynamics of the occupation number of the â-mode, |a|^2. We finally determine t̂_th by identifying the time it takes for |a|^2 to reach the threshold value, |a|^2_th, after the ramp passes ε(t = 0)=0. The crossover point at the transition time is then inferred back from t̂_th using Eq. (<ref>). We present in Fig.
<ref>(b) the scaling of t̂_th and ε(t̂_th) as a function of τ_q. We can observe that for large τ_q, or slow quench, all relevant quantities follow the power-law scaling predicted by the KZM. As we decrease τ_q, ε(t̂_th) begins to saturate at a larger critical quench time, τ_q, c^*, than t̂_th, as indicated by the solid line in Fig. <ref>(b). Finally, as τ_q→ 0, t̂_th approaches a constant value after passing another critical quench time, τ_q, c, denoted in Fig. <ref>(b) as a dashed line. Note that the fluctuations in the scaling of t̂_th and ε(t̂_th) can be attributed to the mean-field approach, which neglects any quantum fluctuation in the system's dynamics. The scaling behaviour of t̂_th and ε(t̂_th) implies that within the range τ_q, c < τ_q≤τ_q, c^*, there exists an intermediate regime between the true breakdown and the validity of the KZM, wherein the KZM remains valid even though the system appears to relax well after the quench has terminated. As shown in Fig. <ref>(c), this intermediate regime vanishes as κ→ 0, highlighting that this is a dissipation-induced effect. We can understand this apparent contradiction between the scaling of the t̂_th and ε(t̂_th) by looking at the dynamics of |a|^2 as the ramp crosses over ε = 0. In Fig. <ref>(a), we present an exemplary dynamics of |a|^2 in the logarithmic scale for the regime τ_q, c < τ_q≤τ_q, c^*. Notice that before the system enters the unbounded state, |a|^2 first exponentially decays towards its steady state, indicating that the system does not freeze in the impulse regime. This dynamics is reminiscent of systems relaxing towards the global minimum of their energy surface due to dissipation, as sketched in Fig. <ref>(b). We can further establish this connection by obtaining the potential surface of the COS, which we can do by substituting the pseudo-position and momentum operators for the â- mode x̂ = 1/√(2ω)(â^† + â), p̂_x = i√(ω/2)(â^† - â), and the b̂-mode, ŷ = 1/√(2ω_0)(b̂^† + b̂), p̂_y = √(ω_0/2)( b̂^† - b̂), back to Eq. (<ref>). Note that for the remainder of this section, we set ħ = 1 for brevity. With this substitution, the COS Hamiltonian becomes Ĥ^COS = p̂_x^2/2 + p̂_y^2/2 + V̂(x̂, ŷ), where V̂(x̂, ŷ) = 1/2ω^2x̂^2 + 1/2ω_0^2ŷ^2 + 2 √(ωω_0)λx̂ŷ is the effective potential of the COS in the closed system limit, κ = 0. In this limit, the potential surface has a global minimum at x̂ = ŷ = 0 when λ < λ_c = √(ωω_0) / 2, as shown in Fig. <ref>(b). It then loses its global minimum when λ = λ_c as sketched in Fig. <ref>(c). Finally, the global minimum becomes a saddle point when λ > λ_c, as shown in Fig. <ref>(d). Note that in the presence of dissipation, the COS effective potential only becomes modified such that the critical point becomes Eq. <ref>, while the structure of the potential surface remains the same due to Eq. (<ref>) being quadratic. With the above picture, we can now interpret the relaxation mechanism observed in Fig. <ref>(a) as follows. Suppose that we initialize our system such that λ < λ_c and the initial states of â and b̂ modes are close to the global minimum of V̂. In the mean-field level, if κ = 0, we can expect that the system will oscillate around the global minimum of V̂ as we increase λ using the finite ramp protocol defined in Eq. (<ref>), together with the modification of the COS potential surface. As we cross λ_c, the global minimum of V̂ becomes a saddle point. 
As such, any deviation of the initial state from the origin would eventually push the system towards either the positive-x̂ and negative-ŷ direction or vice versa, signalling the spontaneous symmetry breaking of the system. In the presence of dissipation, however, the system can still relax to the global minimum before the quench reaches λ_c for sufficiently large quench time scales τ_q > τ_q, c. As a result, the slow deformation of the effective potential allows the system to remain near a = b = 0 even after passing the critical point where the potential loses its global minimum. The nudge from the system's initial state eventually pushes the system towards a new minimum as the quench progresses, signalling the phase transition. This approach towards the new minimum, however, only becomes detectable when |a|^2 reaches |a|_th^2, which occurs only after the linear ramp has terminated. Thus, we observe the saturation of the crossover point at ε_f even though t̂_th follows the predicted scaling of the KZM, which hints that the system entered the adiabatic regime within the duration of the ramp. Note that the relaxation mechanism is not present in the closed limit, as hinted by the regime vanishing in Fig. <ref>(c) as κ→ 0. The delay between t̂ and t̂_th at finite τ_q motivates us to call this phenomenon the delayed KZM. In the next section, we will show that the delayed KZM is a generic feature of open systems that can be mapped onto an effective COS. Moreover, we will demonstrate that not only is the delayed KZM induced purely by dissipation, but it also becomes more prominent when the system is quenched near criticality, ε_f≈ 0. § DELAYED KZM IN OPEN SYSTEMS §.§ Signatures of the delayed KZM We now test whether the delayed KZM is a generic feature of open systems by considering two fully-connected systems: the open DM, schematically represented in Fig. <ref>(a), and its one-dimensional lattice version, the open DLM, as shown in Fig. <ref>(b). Both systems are described by the master equation in Eq. (<ref>), with the Hamiltonian of the open DM being <cit.> Ĥ^DM/ħ = ωâ^†â + ω_0Ŝ^z + 2λ(t)/√(N)( â + â^†) Ŝ^x, while the Hamiltonian of its M-site lattice version with periodic boundary conditions takes the form <cit.> Ĥ^DLM/ħ = 1/ħ∑_ℓ^MĤ^DM_ℓ - J ∑_<i, j >^M( â^†_iâ_j + â^†_jâ_i) . The open DM has the same dissipator as the open COS, while the dissipator of the open DLM is 𝒟ρ̂ = κ∑_ℓ^M( 2â_ℓρ̂â_ℓ^† - {â_ℓ^†â_ℓ, ρ̂}) <cit.>. The open DM describes the dynamics of N two-level systems, represented by the collective spin operators Ŝ^x, y, z, coupled to a dissipative bosonic mode, â, which in cavity-QED experiments corresponds to a photonic mode <cit.>. In both systems, ω and ω_0 are the bosonic and spin transition frequencies, respectively, and λ is the spin-boson coupling, while J represents the nearest-neighbour interaction in the open DLM. In equilibrium, the open DM has two phases: the NP and the superradiant phase (SR) <cit.>. The NP is characterized by a fully polarised collective spin along the -z direction, i.e., S^z = -N/2, and a vanishing total occupation number, |a|^2 = 0. Meanwhile, the SR phase is associated with the ℤ_2 symmetry breaking of the system, leading to a nonzero S^x and |a|^2, with S^x (a) picking a random sign (phase) from the two degenerate steady states of the system <cit.>. The two phases are separated by the same critical point as the open COS <cit.>. Under a finite quench, however, the open DM exhibits non-trivial dynamics as it transitions from the NP to the SR phase. We present in Figs.
<ref>(c) to <ref>(e) an example of the dynamics of the total occupation number, |a|^2, the phase of the bosonic mode, φ, and S^x of the open DM for ωτ_q = 1000. We initialized the system near the steady state of the NP, where the initial value of the bosonic mode is a=0.01, while the collective spin operators are S^x(t_i) = N/2δ, S^y(t_i) =0, S^z(t_i)= -N/2√(1 - δ^2), where δ is a perturbation set to δ = 0.01. We can observe that when the system is in the NP, the occupation number approaches the NP steady state, a=0, which is consistent with the behaviour predicted by the potential-surface interpretation of the phase transition in an open system described in Sec. <ref>. In addition, the phase of the bosonic mode oscillates from -π to π, while S^x remains close to S^x=0. Once ε(t)>0 for t>0, the system enters the SR phase, which results in φ spontaneously admitting a finite value as |a|^2 starts to grow exponentially until t = ε_fτ_q, where the ramp terminates. At that point, |a|^2 finally saturates at the steady state of the SR phase. Meanwhile, the transition of S^x from its behaviour in the NP to that of the SR phase only becomes prominent at a later time. We expand further on the implications of the behaviour of these order parameters in Sec. <ref>. As for the open DLM, for small values of J, the interaction between the open DMs modifies λ_c into a critical line <cit.>, λ_c = 1/2√(ω_0 (ω - 2J) ( 1 + κ^2/(ω - 2J)^2)). Moreover, suppose that we drive the open DLM from the NP to the SR phase using a finite quench after initializing it near the steady state of the NP. Specifically, we initialize the collective spins at S^x, y, z_ℓ = S^x, y, z(t = t_i), while the bosonic modes are initialized in the vacuum state, which can be represented as a complex Gaussian variable a_ℓ = 1/2( η_ℓ^R + iη_ℓ^I), where η_ℓ^R, I are random numbers sampled from a Gaussian distribution satisfying < η_ℓ^i> = 0 and < η_ℓ^iη_m^j> = δ_i, jδ_ℓ, m for i, j = R, I <cit.>. Then, as the system enters the SR phase, each site can independently pick between the two degenerate steady states available, allowing for the formation of domains and point defects, the number of which depends on the correlation length of the system. We present in Figs. <ref>(f) to <ref>(h) examples of the spatio-temporal dynamics of |a_ℓ|^2, φ_ℓ, and S^x_ℓ of the open DLM after a finite quench towards the SR phase. We can observe that the point defects can manifest either as dips in the occupation number, phase slips in the spatial profile of φ_ℓ, or domain walls in S^x_ℓ. Note that the defect number N_d follows the predicted KZM power-law scaling with τ_q, which we demonstrate in Appendix <ref>. Since the notion of topological defects is well-defined in the open DLM, it serves as a good test bed for the delayed KZM in systems with short-range interactions. This is in addition to the open DM, which has been experimentally shown to exhibit signatures of the KZM <cit.> despite the open question of its non-equilibrium universality class <cit.>. We now present in Figs. <ref>(i) and <ref>(j) the scaling of t̂_th and ε(t̂_th) as a function of τ_q for the open DM and the open DLM, respectively. As for the COS, t̂_th for both systems is inferred from the total occupation number, which for the open DLM is explicitly defined as |a|^2 = ∑_ℓ |a_ℓ|^2. Notice that both systems exhibit the signatures of the delayed KZM, where t̂_th continues with its KZM power-law scaling while ε(t̂_th) saturates for intermediate values of τ_q.
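The vacuum-noise initialization and the defect counting used for the open DLM can be summarized in a short sketch. The counting conventions below (phase-jump threshold, sign changes of S^x_ℓ) are illustrative, since the exact criteria behind the figures discussed above are not specified here.

import numpy as np

rng = np.random.default_rng(0)
M = 128                                        # number of lattice sites (illustrative)

# Vacuum-noise initialization of the bosonic modes, a_l = (eta_R + i*eta_I)/2,
# with zero-mean, unit-variance Gaussian noise in each quadrature.
a = 0.5 * (rng.normal(size=M) + 1j * rng.normal(size=M))

def count_phase_slips(a_sites, jump=0.5 * np.pi):
    """Count phase slips as large jumps of phi_l = arg(a_l) along a periodic chain."""
    phi = np.angle(a_sites)
    dphi = np.diff(np.concatenate([phi, phi[:1]]))     # include the periodic bond
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi      # wrap differences to (-pi, pi]
    return int(np.sum(np.abs(dphi) > jump))

def count_domain_walls(Sx_sites):
    """Count domain walls as sign changes of S^x_l between neighbouring sites (periodic chain)."""
    s = np.sign(Sx_sites)
    return int(np.sum(s != np.roll(s, 1)))

# Synthetic post-quench profile with two domains on the periodic chain -> two walls:
Sx = np.where(np.arange(M) < M // 2, 1.0, -1.0) + 0.1 * rng.normal(size=M)
print(count_phase_slips(a), count_domain_walls(Sx))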
Both systems also exhibit the closing of the boundaries of the delayed KZM regime as we decrease κ. We can understand the emergence of the delayed KZM in these two systems by noting that the open DM can be mapped exactly onto the COS in the thermodynamic limit, N →∞. We can do this by applying the approximate Holstein-Primakoff representation (HPR) <cit.>, Ŝ^z = b̂^†b̂ - N/2, Ŝ^- = √(N)( √(1 - b̂^†b̂/N)) b̂≈√(N)b̂, to Eq. (<ref>) to reduce it to the COS Hamiltonian in Eq. (<ref>) up to a constant term. Meanwhile, we can transform the open DLM into a set of COS in the thermodynamic limit by first substituting the approximate HPR of the collective spins into Eq. (<ref>), noting that Ŝ^z, ±→Ŝ^z, ±_ℓ and b̂→b̂_ℓ <cit.>. This leads to a Hamiltonian of the form Ĥ^DLM/ħ≈1/ħ∑_ℓ^MĤ_ℓ^COS - J ∑_<i, j >( â_i^†â_j + â_j^†â_i). We then perform a discrete Fourier transform, â_k = 1/√(M)∑_ℓe^ikℓâ_ℓ, b̂_k = 1/√(M)∑_ℓ e^ikℓb̂_ℓ, on Eq. (<ref>) to obtain an effective Hamiltonian, Ĥ^DLM/ħ≈1/ħ∑_kĤ^OM_k, where Ĥ^OM_k/ħ = ω_kâ_k^†â_k + ω_0b̂_k^†b̂_k + λ( â_k^†b̂_k + â_-kb̂_k + h.c.) is the Hamiltonian of each uncoupled oscillator at the momentum mode k and ω_k = ω - 2Jcos(k). In this form, we can easily observe that the open DLM has a similar structure to the COS, with the similarity being more apparent at the zero-momentum mode, Ĥ^OM_0/ħ= ( ω - 2J )â_0^†â_0 + ω_0b̂_0^†b̂_0 + λ(â_0^† + â_0)(b̂_0^† + b̂_0). These results show that the signatures of the delayed KZM can appear not only in the open DM but also in the open DLM, where both short-range interactions between the sites and multiple degenerate steady states are present in the system. As such, we confirm that the delayed KZM is a generic feature of open systems under a finite quench that can be mapped onto a COS, regardless of the interactions present in the system. Having established the generality of the delayed KZM in open systems, we now explore in greater detail its dissipative and near-critical nature in Sec. <ref>. §.§ Dissipative and critical nature of the delayed KZM In Sec. <ref>, we have claimed that dissipation is responsible for the relaxation mechanism that leads to the emergence of the delayed KZM. This is also corroborated by the disappearance of the delayed KZM regime in the closed limit, implying that the phenomenon appears only at finite dissipation strength, κ. We now explicitly demonstrate that this claim holds for generic open systems by calculating the decay rate of the total occupation number, γ_d, as the system approaches the critical point. We will then identify how γ_d scales with ε_f and κ. For the rest of this section, we will only consider the open DLM, although our results should apply to the COS and the open DM as well. To determine γ_d for the open DLM at a given κ and ε_f, we calculate the slope of the best-fit line of the logarithm of |a|^2 within the time interval [-0.75 τ_q, 0]. The chosen time window is arbitrary, but it ensures that γ_d is inferred while the system is in the impulse regime. We show in Fig. <ref>(a) the dependence of γ_d on the quench time. We can observe that γ_d is constant for large values of τ_q. As we decrease τ_q, however, γ_d begins to fluctuate and eventually decreases to a much lower value.
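The decay-rate extraction is a simple linear fit of ln|a|^2 over the stated window. A minimal implementation, assuming the total occupation |a(t)|^2 from the mean-field integration is available on a uniform time grid, could read:

import numpy as np

def decay_rate(t, n_tot, tau_q, window=(-0.75, 0.0)):
    """Slope of the best-fit line of ln|a|^2 within [window[0]*tau_q, window[1]*tau_q].

    t     : 1D array of times from the mean-field integration
    n_tot : 1D array with the total occupation |a|^2 at those times
    """
    mask = (t >= window[0] * tau_q) & (t <= window[1] * tau_q) & (n_tot > 0)
    slope, _ = np.polyfit(t[mask], np.log(n_tot[mask]), 1)
    return slope          # gamma_d; negative while |a|^2 decays towards the NP steady state

# Synthetic check: |a|^2 = exp(-0.2*(t + 1000)) over t in [-1000, 0] gives gamma_d = -0.2.
t = np.linspace(-1000.0, 0.0, 10001)
print(decay_rate(t, np.exp(-0.2 * (t + 1000.0)), tau_q=1000.0))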
We attribute the deviation of γ_d from its constant value to the errors incurred in the best-fit line of ln|a|^2 for small values of τ_q. In particular, since we only considered a simulation time step of ωΔt = 0.01, the small time window for these values of τ_q leads to smaller sets of data points for |a|^2, resulting in an overall poorer fit. For this reason, we only used the data points from ωτ_q = 10^2 to ωτ_q = 10^4 in calculating the average value of the decay rate over τ_q, γ̅_d. We now present in Figs. <ref>(b) and <ref>(c) the behaviour of γ̅_d as a function of ε_f and κ, respectively. We can observe that γ̅_d remains constant for all values of ε_f, implying that the average decay rate of |a|^2 is independent of the quench protocol used in the system. Meanwhile, γ̅_d has an inverse relationship with κ, demonstrating that the relaxation mechanism responsible for the delayed KZM is indeed a direct result of dissipation allowing the initial occupation number in the dissipative bosonic mode to leak out of the system while it remains in the impulse regime. For completeness, we check the linear dependence of γ̅_d on κ by fitting a line to it and calculating the square of its Pearson correlation coefficient, R^2. By doing this, we obtain R^2 = 0.9996, which indicates an excellent fit between the best-fit line and the data points. Given that ε_f does not alter the behaviour of the decay rate of |a|^2, it is natural to ask whether varying ε_f has any significant effect on the scaling of t̂_th and ε(t̂_th), and on the signatures of the delayed KZM. We answer the first question in Figs. <ref>(a) and <ref>(b), where we show the scaling of t̂_th and ε(t̂_th), respectively, with τ_q for different values of ε_f. We can see that varying ε_f does not significantly change the scaling of the KZM quantities considered. However, τ_q, c^*, shown as solid lines in Fig. <ref>(b), increases significantly as we decrease ε_f. This modification of τ_q, c^* becomes more apparent in Fig. <ref>(c), where we show the scaling of τ_q, c and τ_q, c^* as a function of ε_f. Notice that both quantities are inversely proportional to ε_f, with τ_q, c^* dropping faster than τ_q, c as ε_f→∞. As a result, the delayed KZM regime vanishes for large ε_f, highlighting that its signatures become more apparent for strongly dissipative systems quenched near criticality. We finally note that τ_q, c^* follows a power-law scaling, as evidenced by the power-law fit curve shown in Fig. <ref>(c). In particular, since τ_q, c^* becomes the true critical quench time separating the breakdown and validity of the KZM at large ε_f, we expect that it should follow the power-law scaling <cit.> τ_q, c^*∝ε_f^-(vz + 1 ), which we show to be the case in Appendix <ref>. So far, we have shown that the presence of the delayed KZM leads to a significant deviation between the true freeze-out time, t̂, and the transition time, t̂_th. Given that the delayed KZM becomes more prominent near criticality at strong dissipation, we now address in the next section how the threshold-based criterion for determining t̂_th contributes to this deviation and whether a more accurate method can be used to measure t̂. § TRANSITION TIME MEASUREMENT The threshold value used to determine the transition time plays a role in the delay between t̂ and t̂_th. In particular, we can expect a longer delay for larger |a|^2_th since the system's order parameter has to reach a larger threshold value before being detected.
This intuition prompts the question of whether decreasing the threshold value has any effect on the scaling of t̂_th and ε(t̂_th), and whether it can suppress the deviation brought by the delayed KZM, and thus its signatures. We answer the first question in Figs. <ref>(a) and <ref>(b), where we present the scaling of t̂_th and ε(t̂_th), respectively. For this part, while we only consider the open DM, the results should apply to the COS and the open DLM as well. We can observe that the scaling of t̂_th and ε(t̂_th) does not change significantly as we increase the threshold value. In particular, while t̂_th is only shifted by a constant as |a|^2_th increases, both KZM quantities considered eventually collapse onto a single scaling as τ_q→∞. As for the boundaries of the delayed KZM regime, we can observe in Fig. <ref>(c) that the gap between τ_q, c and τ_q, c^* widens as we increase |a|^2_th, implying that the delayed KZM becomes more prominent at large |a|^2_th. We can understand the widening of the delayed KZM regime for large |a|^2_th by noting that in an ideal set-up where t̂ can be accurately identified, the gap between τ_q, c and τ_q, c^* vanishes, and thus, following the prediction in Ref. <cit.>, τ_q, c = τ_q, c^* = t̂ / ε_f. Since for any threshold-based criterion τ_q, c^* = t̂_th /ε_f, while τ_q, c≠τ_q, c^* for large κ and small ε_f, we have τ_q, c^* - τ_q, c = ( t̂_th - t̂)/ε_f. Let us assume that within the time interval [t̂, ε_fτ_q], the total occupation grows exponentially, |a|^2∝exp(γ_gt), where γ_g is the growth rate of the total occupation number. This assumption is supported by Fig. <ref>(c), where |a|^2 of the open DM grows exponentially from its minimum value to its saturation value. With this assumption, we can infer that t̂_th∝ln|a|^2_th / γ_g and t̂∝ln|a|^2_min / γ_g, where |a|^2_min is the minimum value of the total occupation number. Thus, τ_q, c^* - τ_q, c∝( ln|a|^2_th - ln|a|^2_min)/(ε_fγ_g), which implies that we can suppress the signatures of the delayed KZM by setting |a|^2_th close to |a|^2_min. Now, determining an optimal threshold value that suppresses the signatures of the delayed KZM may be difficult to achieve, as it requires prior knowledge of |a|^2_min for arbitrary τ_q. This problem motivates the question of whether an alternative method can be used to infer t̂ without relying on any threshold-based criterion. As we have hinted at with the dynamics of the phase of the â-mode of the open DM shown in Fig. <ref>(d), we can do this by choosing an appropriate order parameter that rapidly reaches its steady state once the system undergoes the phase transition. In the case of the open DM, this order parameter corresponds to the boson mode's phase, φ. We demonstrate this method further in Figs. <ref>(b) and <ref>(c), where we show that for systems with nonzero spatial dimension, such as the open DLM, we can use the phase information of the â_ℓ-modes to extract t̂. As presented in Fig. <ref>(d), we can do this by determining the time at which either the defect number, N_d, or the site-averaged phase begins to saturate. At the COS level, the t̂ inferred with this method corresponds to the moment the system picks the new minimum it will fall into, signalling the phase transition.
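A possible implementation of this criterion, with an illustrative saturation tolerance and averaging window (neither of which is specified in the text), is sketched below; the same routine can be fed either the site-averaged phase or the recorded defect number N_d.

import numpy as np

def saturation_time(t, signal, rel_tol=0.02, tail=20):
    """Earliest time after which 'signal' stays within rel_tol of its final value.

    The final value is estimated from the last 'tail' samples; the tolerance and
    tail length are illustrative choices.
    """
    final = np.mean(signal[-tail:])
    settled = np.abs(signal - final) <= rel_tol * max(abs(final), 1e-12)
    for i in range(len(signal)):
        if settled[i:].all():
            return t[i]
    return np.nan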
Thus, we expect that if the system's phase information is available in an experimental set-up, such as in Ref. <cit.>, then it can serve as a more sensitive tool for detecting phase transitions compared to threshold-based order parameters that depend on the mode occupations. § SUMMARY AND DISCUSSION In this work, we extend the Kibble-Zurek mechanism to open systems under a finite quench and report an intermediate regime separating the breakdown and validity of the KZM at fast and slow quench timescales, respectively. This novel regime manifests as a continuation of the transition time's KZM power-law scaling at values of τ_q for which the system appears to relax only after the quench has terminated. As we have shown using a coupled oscillator system, this phenomenon results from the system's relaxation towards the global minimum of its potential due to dissipation. This mechanism effectively hides the system's crossover to the adiabatic regime, only to be revealed once the system reaches the arbitrary threshold of the order parameter. Using the open DM and the open DLM, we have also demonstrated that the delayed KZM is a generic feature of open systems under finite quenches that can be mapped onto a coupled oscillator system. Furthermore, we have shown that the signatures of the delayed KZM, specifically the size of the quench interval where the delayed KZM regime is observed, become more prominent for small values of ε_f, highlighting the dissipative and near-critical nature of this phenomenon. We have discussed the implications of the delayed KZM in the context of the threshold-based criterion typically used in experiments to measure the transition time and proposed an alternative method to measure t̂. Our proposed method relies only on the spatio-temporal information of an appropriate order parameter, such as the defect number and the phase information of the system's bosonic modes, thus providing a more sensitive tool for detecting phase transitions. Our results extend the notion of the KZM to dissipative systems with finite quench protocols beyond the limits of slow and rapid quenches. They also provide a framework for understanding how the manifestation of the KZM can be altered in experimental protocols, wherein limitations in measuring the true adiabatic-impulse (AI) crossover become more relevant. Since our results are all at the mean-field level, a natural extension of our work is to verify whether the delayed KZM would survive in the presence of quantum fluctuations. It would also be interesting to test the signatures of the delayed KZM in the quantum regime of the open DM and the open DLM, and to further explore their universality classes beyond the mean-field level. These extensions can be readily pursued on multiple platforms, including, but not limited to, cavity-QED set-ups <cit.>, nitrogen-vacancy centre ensembles <cit.>, cavity-magnon systems <cit.>, and photonic crystals <cit.>. § ACKNOWLEDGEMENT This work was funded by the UP System Balik PhD Program (OVPAA-BPhD-2021-04) and the DOST-SEI Accelerated Science and Technology Human Resource Development Program. § MEAN-FIELD EQUATIONS OF THE CONSIDERED SYSTEMS To obtain the mean-field equations of the systems considered in the main text, we consider the master equation for the expectation value of an arbitrary operator, Ô, ∂_t< Ô> = i < [ Ĥ/ħ, Ô] > + <𝒟Ô>, where Ĥ is the system's Hamiltonian, and 𝒟Ô = ∑_ℓκ_ℓ( 2L̂_ℓ^†ÔL̂_ℓ - {L̂^†_ℓL̂_ℓ, Ô}) is the dissipator, with L̂_ℓ being the jump operators. We will also let A = < Â> for notational convenience. Using this master equation, the mean-field equations of the open COS are ∂_t a = -i [ ω a + λ( b + b^*) ] - κ a, ∂_t b = -i [ ω_0 b + λ( a + a^*) ].
As for the open Dicke model, its mean-field equations are ∂_t a = -i ( ω a + 2λ/√(N) S^x) - κ a, ∂_t S^x = - ω_0S^y, ∂_t S^y = ω_0 S^x - 2λ/√(N)( a^* + a ) S^z, ∂_t S^z = 2λ/√(N)( a^* + a ) S^y. Note that for both the open COS and the open DM, the jump operator is given by L̂_ℓ = L̂ = â. Finally, the mean-field equations of the open DLM for the jump operators L̂_ℓ = â_ℓ are ∂_t a_ℓ = -i [ ω a_ℓ + 2λ/√(N) S^x_ℓ - J ( a_ℓ-1 + a_ℓ+1) ] - κ a_ℓ, ∂_t S^x_ℓ = - ω_0 S^y_ℓ, ∂_t S^y_ℓ = ω_0S^x_ℓ - 2λ/√(N)( a_ℓ + a^*_ℓ)S^z_ℓ, ∂_t S^z_ℓ = 2λ/√(N)( a_ℓ + a^*_ℓ)S^y_ℓ. § KZM EXPONENTS OF THE OPEN DICKE LATTICE MODEL One of the key predictions of the KZM is the power-law scaling of the defect number as a function of τ_q <cit.>, N_d∝τ_q^-(D - d)v/(1 + vz), where D and d are the dimensions of the system and of the topological defects, respectively. To demonstrate that the open DLM satisfies the predicted KZM scaling for N_d, we present in Fig. <ref> the number of phase slips present in the system for a given τ_q and ε_f. We can observe that for large τ_q, N_d follows a power-law scaling behaviour with τ_q, emphasizing that the system indeed follows the KZM at slow quenches, which is consistent with the behaviour of t̂ and ε(t̂) shown in Figs. <ref>(a) and <ref>(b). Notice, however, that as we approach τ_q, c, marked by the vertical dashed lines, N_d starts to fluctuate, with the fluctuations becoming more significant as ε_f→ 0. This behaviour is akin to the pre-saturation regime observed for closed systems under finite quench protocols <cit.>. Whether this regime persists in the presence of quantum fluctuations remains an open question. We finally observe the saturation of N_d as τ_q→ 0, signifying the breakdown of the KZM for small values of τ_q. Since we have shown that the KZM quantities t̂_th, ε(t̂_th), and N_d follow the predicted KZM scaling, for completeness, we now estimate the critical exponents of the open DLM from the power-law exponents of these quantities. We do this by assuming that t̂_th and N_d follow generic power-law scalings, t̂_th∝τ_q^α, N_d∝τ_q^β, within the quench time interval between ωτ_q = ωτ_q, c and ωτ_q = 10^4. From these equations, we can infer from Eq. (<ref>) and Eq. (<ref>) that α and β are related to the critical exponents v and z by the relations v = α/|β|, z = |β|/(1 - α), vz = α/(1 - α). We present in Figs. <ref>(a) and <ref>(b) the estimated vz as a function of ε_f and κ, respectively, for both the threshold-based transition time and the t̂ obtained from the dynamics of N_d, as described in Sec. <ref> of the main text. We can see that the threshold-based vz remains relatively constant for all values of ε_f, while the defect-based vz appears to converge to the critical exponent of the Ising universality <cit.>, the universality class of the single Dicke model <cit.>. We further check whether the two values of vz are consistent with one another by calculating vz as well from the scaling of τ_q, c^* with ε_f, which is given by Eq. (<ref>). We show this in Fig. <ref>(a) as a solid line, with the grey regions corresponding to the uncertainty due to fitting errors. Notice that the value of vz from the defect-based method is consistent with the one obtained from τ_q, c^* for large values of ε_f, while it becomes more consistent with the threshold-based vz for small ε_f. With this picture, the threshold-based vz and τ_q, c^* can be interpreted as the upper and lower bounds for the uncertainty of the open DLM's critical exponents at the mean-field level, respectively.
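The exponent extraction described here amounts to two log-log fits over the window between ωτ_q, c and ωτ_q = 10^4, followed by the quoted relations. A minimal sketch, assuming the arrays (τ_q, t̂_th, N_d) from the quench scan are available and strictly positive in that window, is:

import numpy as np

def kzm_exponents(tau_q, t_th, N_d, tau_q_min, tau_q_max=1.0e4):
    """Fit alpha and beta on a log-log scale and convert them to v, z, and vz
    using the relations quoted above: v = alpha/|beta|, z = |beta|/(1 - alpha),
    vz = alpha/(1 - alpha)."""
    sel = (tau_q >= tau_q_min) & (tau_q <= tau_q_max)
    alpha, _ = np.polyfit(np.log(tau_q[sel]), np.log(t_th[sel]), 1)
    beta, _ = np.polyfit(np.log(tau_q[sel]), np.log(N_d[sel]), 1)
    return {"alpha": alpha, "beta": beta,
            "v": alpha / abs(beta), "z": abs(beta) / (1.0 - alpha),
            "vz": alpha / (1.0 - alpha)}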
As for the behaviour of the threshold-based and defect-based vz as a function of κ, we can see in Fig. <ref>(b) that both values of vz decrease as κ→ 0. In particular, vz for both cases approaches the experimental value of vz for the open Dicke model at κ = 1.0ω <cit.>. This result implies that dissipation modifies the critical exponents of the system, which is consistent with the predictions in Refs. <cit.> and <cit.>. Without any specific analytical prediction of how κ modifies the effective critical exponents of the system, we cannot assign a universality class for the open DLM that would apply to arbitrary dissipation strength. Given this limitation, we restrict the calculation of the critical exponents to κ = 0.1ω. We show in Figs. <ref>(c) and <ref>(d) the values of v and z, respectively, for both the defect-based and threshold-based methods. We can observe that the value of v for both cases deviates strongly from the static critical exponent of the Ising universality, which is v = 1 <cit.>. Meanwhile, the value of z for the threshold-based criterion converges to z ∼ 2.183, which is the dynamic critical exponent of the Ising universality class <cit.>. As we previously mentioned, we can attribute the deviations of v and z to the dissipation-induced modification of the critical exponents. The accumulated errors in the scaling exponents of N_d, t̂_th, and t̂_N_d due to the fitting errors may also amplify the deviations of the critical exponents from their expected values. Determining which effect is more significant for the values of v and z requires understanding the dynamics of the open DLM beyond the mean-field level.
http://arxiv.org/abs/2407.13551v1
20240718142518
Decoding the interaction mediators from landscape-induced spatial patterns
[ "E. H. Colombo", "L. Defaveri", "C. Anteneodo" ]
q-bio.PE
[ "q-bio.PE", "cond-mat.stat-mech" ]
Corresponding author: e.colombo@hzdr.de Center for Advanced Systems Understanding, Untermarkt 20, 02826 Görlitz Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden Department of Physics, Bar Ilan University, Ramat-Gan 52900, Israel Department of Physics, PUC-Rio, Rua Marquês de São Vicente 225, 22451-900 Gávea, Rio de Janeiro, Brazil National Institute of Science and Technology for Complex Systems, 22290-180, Rio de Janeiro, Brazil § ABSTRACT Interactions between organisms are mediated by an intricate network of physico-chemical substances and other organisms. Understanding the dynamics of mediators and how they shape the population spatial distribution is key to predicting ecological outcomes and how they would be transformed by changes in environmental constraints. However, due to the inherent complexity involved, this task is often unfeasible from both the empirical and theoretical perspectives. In this paper, we make progress in addressing this central issue, creating a bridge that provides a two-way connection between the features of the ensemble of underlying mediators and the wrinkles in the population density induced by a landscape defect (or spatial perturbation). The bridge is constructed by applying the Feynman-Vernon decomposition, which disentangles the influences among the focal population and the mediators in a compact way. This is achieved through an interaction kernel, which effectively incorporates the mediators' degrees of freedom, explaining the emergence of nonlocal influence between individuals, otherwise an ad hoc assumption in modeling population dynamics. Concrete examples are worked out and reveal the complexity behind a possible top-down inference procedure. Decoding the interaction mediators from landscape-induced spatial patterns C. Anteneodo July 22, 2024 ============================================================================ § INTRODUCTION Organisms, at the population level, often exhibit spatial patterns. These can emerge spontaneously from the interactions <cit.> or be forced by environmental stresses <cit.>, such as landscape defects that abruptly shift environmental conditions <cit.>. Across scales, these patterns have been shown to control key ecological outcomes <cit.>, for example, enhancing population stability and resilience, as observed in vegetation cover <cit.> and mussel beds <cit.>. Along with their ecological relevance, the complex structure of these patterns can encode information about the underlying dynamics responsible for mediating the interactions between individuals (activators and inhibitors), which is typically hidden from remote sensing approaches. Decoding this complex structure could then reveal valuable information about the mechanisms behind pattern formation <cit.>. This could, at low cost, boost the information acquired from satellite images, which are capable of generating abundant spatio-temporal data, e.g., clearly showing the distribution of vegetation <cit.>, but likely miss other actors that indirectly shape plant-plant interactions, especially those below ground (e.g., water, nitrogen, phosphorus, potassium, etc.) <cit.>. A similar situation can occur at sea, where phytoplankton is easily tracked <cit.>, but zooplankton, viruses and other organisms that affect phytoplankton-phytoplankton interactions remain invisible, thereby requiring expensive large-scale collaborations to gather in-situ samples <cit.>.
Nevertheless, a precise connection between patterns (large scales) and the underlying microscopic dynamics (small scales) is often not feasible far from equilibrium, due to the presence of strong nonlinearities <cit.>. In this work, we investigate a scenario where the effects of nonlinearities are attenuated, under a near-equilibrium scenario, opening the possibility of establishing such a connection. Namely, we exploit the wrinkles in the population distribution induced by a landscape defect to reveal information about the set of interaction mediators from large-scale observations. This is because, defects, by exciting a wide range of scales, put in evidence the difference in response to each scale <cit.>, which can then be associated to the dynamics of a specific mediator <cit.>. To access this information, we apply the Feynman-Vernon decomposition <cit.> that disentangles the influences among the focal population and the mediators in a compact way. The practical outcome is the construction of a two-way bridge between the emergent distance-dependent interactions and the network of mediators behind it (Sec. <ref>). For concrete examples, in Sec. <ref>, starting from an observed induced pattern (see Fig. <ref>a), we obtain the effective distance-dependent interaction kernel, and reciprocally from it we recover information about the mediators. Final remarks can be found in Sec. <ref>. § INDUCED STATES We are interested in scenarios where the population density is spatially uniform when the environment is homogeneous, i.e., patterns are not spontaneously formed. But, the occurrence of defects can induce spatial variability of the population density, in such a way that spatial scales, which are hidden in the absence of defects, would become revealed <cit.>. In Fig. <ref>, we pictorially represent this scenario inspired by the vegetation spatial distribution close to a termite mount <cit.>. The starting point is an explicit mathematical description of the nonlinear dynamics of the state vector ψ=[u,v_1,…,v_N], whose components represent the spatial distribution, in one dimension, of the focal population (u(x,t)) and of the N mediators (v_i(x,t)) <cit.>. A general model for this dynamics is represented by Eq. (<ref>). See Appendix <ref> for mathematical details. This description accounts for general density-dependent rates of diffusion and growth, as well as for the influence of an external constraint, q(x), which represents the environment heterogeneity that affects the focal population, and can induce patterns, as depicted in Fig. <ref>a for the case of an abrupt change from nonviable (orange) to viable (blue) regions. The set of differential equations that describe the dynamics is then linearized around the homogeneous steady state ψ^⋆=[u^⋆,v_1^⋆,v_2^⋆, …, v_N^⋆], by considering ψ = ψ^⋆+ ϵ, where ϵ = [ϵ_0,ϵ_1,ϵ_2,⋯ ] is a small deviation. Then, the linearized model, up to first order in ϵ and q, is given by ∂_t ϵ_0 = D_0∇^2 ϵ_0 + Ω_00 ϵ_0 + ∑_i=1^NΩ_0i ϵ_i + q(x) , ∂_t ϵ_i = D_i∇^2 ϵ_i + Ω_ii ϵ_i + ∑_j=0 j≠ i^NΩ_ij ϵ_j , . The coefficients Ω_ij set the extent to which j produces i and results from the linearization of the reaction rates (Appendix <ref>). Moreover, we assume Ω_ii<0 for the stability of the uncoupled system. The coefficients D_i, with i≥ 0, set the diffusion rates. Particular forms of the interaction matrix with elements Ω_ij are provided in Fig. <ref>b: all-to-all, star and linear chain structures. 
Considering that the system (<ref>-<ref>) is intrinsically stable (no spontaneous pattern formation occurs), the defect-induced wrinkles in the focal population density distribution will achieve a stationary form at long times (see Fig. <ref>a). For shallow defects, we find an expression for the wrinkles induced in the focal population density, ϵ_0(x) (see details in Appendix <ref>). In Fourier space (flagged by the hat symbol), we obtain ϵ_0(k) = Q(k)/[-det R(k)] , where Q(k) is the transformed external forcing, and R is the matrix given by the reaction and diffusion rates, with elements R_ij= Ω_ij- δ_ij D_i k^2. The Fourier inversion of Eq. (<ref>) provides the shape of the induced pattern in real space, namely ϵ_0(x) = ∑_j=1^N+1 c_j e^(i k_j - 1/ℓ_j) x, where (as becomes clear from the residue integration used to calculate the inverse Fourier transform of Eq. (<ref>)) the oscillation parameters k_j (wavenumber) and 1/ℓ_j (inverse of the decay length) are the absolute values of the real and imaginary parts of the j-th zero of det R <cit.>, and the constant coefficients c_j depend on the perturbation Q, which is assumed to be non-periodic (i.e., it does not add any characteristic mode by itself). However, with this straightforward approach, any connection between the matrix R and the features of the wrinkles (wavenumber and decay length) is impractical: det R is a polynomial of degree N+1 in k^2 with coefficients given by complicated combinations of diffusion and reaction rates (Appendix <ref>). Then, the general solution (<ref>) does not help clarify how the mediators control the spatial pattern, nor does it help interpret what the pattern could tell us about the mediators. § DISENTANGLING THE IMPACT OF MEDIATORS In order to disentangle the contribution of different sources to Eq. (<ref>), we first identify in Eqs. (<ref>)-(<ref>) the influence of the focal population on the ensemble of mediators by defining the vector Ω^out, with components Ω^out_i = Ω_i0, and, reversely, the mediators' feedback on the focal population, Ω^in, with components Ω^in_i = Ω_0i. Moreover, we define the core of the matrix Ω, that is, the submatrix where the first row and first column have been eliminated, which sets the coupling between mediators. To separate the different contributions, we proceed to diagonalize, in Fourier space, the N× N core matrix associated with the mediators, following, in essence, the Feynman-Vernon decomposition <cit.> (for details see Appendix <ref>). The result allows us to express the shape of the induced states of the focal population as ϵ_0(k) = q(k)/[ D_0 k^2 - Ω_00 + 𝒢(k)] , with 𝒢(k) ≡ -∑_i=1^N A_i/(λ_i^2 + k^2) , where the weights A_i and spatial frequencies λ_i (for i=1,…,N) encapsulate the information about the mediators, with A_i ≡ (Ω^in P)_i (P^-1 D^-1Ω^out)_i . Here D is the diagonal matrix of the diffusivities of the mediators, with elements δ_ijD_i, for 1≤ i ≤ N, and P is the transformation matrix that diagonalizes the product of D^-1 and the core of Ω, which carries the information on the diffusion coefficients and the mediator-mediator couplings. P has columns given by the eigenvectors of this product, whose eigenvalues are -λ_i^2, such that λ_i has units of inverse length. (In the current context, we are restricted to the case where the collective effect of the mediators produces real and negative eigenvalues). For instance, for the decoupled case, these eigenvalues are the ratios between minus the decay rates and the respective diffusion coefficients, that is -λ_i^2=Ω_ii/D_i.
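As a numerical cross-check of this decomposition, the sketch below builds a small illustrative system (a star network with N = 2 uncoupled inhibitors, parameters not taken from the text), computes ϵ_0(k) directly from -R^-1 q, and compares it with the disentangled form q(k)/[D_0 k^2 - Ω_00 + 𝒢(k)] obtained from the eigenvalues and weights of the mediator block; the two expressions coincide.

import numpy as np

# Focal population (index 0) plus N = 2 mediators; star network with
# Omega_ii = -1, Omega_0i = -1/2, Omega_i0 = 1 (illustrative values).
N = 2
Omega = np.array([[0.0, -0.5, -0.5],
                  [1.0, -1.0,  0.0],
                  [1.0,  0.0, -1.0]])
Dvec = np.array([1e-3, 1.0, 4.0])                  # diffusivities D_0, D_1, D_2 (illustrative)

def eps0_direct(k, qhat=1.0):
    """Component 0 of -R^{-1} q, with R = Omega - D k^2 and forcing only on the focal species."""
    R = Omega - np.diag(Dvec) * k**2
    q = np.zeros(N + 1)
    q[0] = qhat
    return -np.linalg.solve(R, q)[0]

def eps0_disentangled(k, qhat=1.0):
    """Same quantity from G(k) = -sum_i A_i/(lambda_i^2 + k^2)."""
    W = Omega[1:, 1:]                              # core of Omega (mediator-mediator block)
    Dm_inv = np.diag(1.0 / Dvec[1:])
    evals, P = np.linalg.eig(Dm_inv @ W)           # eigenvalues are -lambda_i^2 (real and negative here)
    A = (Omega[0, 1:] @ P) * (np.linalg.inv(P) @ Dm_inv @ Omega[1:, 0])
    G = -np.sum(A / (k**2 - evals))                # k^2 - evals = lambda_i^2 + k^2
    return qhat / (Dvec[0] * k**2 - Omega[0, 0] + G)

for k in (0.3, 1.0, 3.0):
    print(k, eps0_direct(k), eps0_disentangled(k))  # identical values from both routes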
Note that Eqs. (<ref>) and (<ref>) must coincide, but in the latter the different components that control the induced state are neatly separated, while they are intermingled in det R (see Appendix <ref>). Importantly, the decomposition allows us to obtain features of the ensemble of mediators, re-writing Eq. (<ref>) as 𝒢(k) = ∫_0^∞ J(λ)/(λ^2 + k^2) dλ , identifying J(λ) = -∑_i=1^N A_i δ(λ - λ_i) , which stores the introduced spatial frequencies. Taking the inverse Fourier transform of Eq. (<ref>), we obtain 𝒢(x) = ∫_0^∞ J(λ)/(2 λ) e^- λ |x| dλ , where the symmetry (± x) in 𝒢 is a consequence of the (isotropic) diffusion of the mediators. We identify 𝒢(x) as an interaction kernel, which arises when compressing the degrees of freedom associated with the mediators. In real space, it acts through a convolution term, ∫_-∞^+∞𝒢(x-x') ϵ_0(x') dx', which couples changes in the density at x with the density at x', spatially extending the interactions (in the absence of mediators, the interaction is local). This result explains the emergence of a nonlocal influence between individuals, an ingredient often proposed on an ad hoc basis to model population dynamics <cit.>. Finally, using the exponential form that arises from the linear propagation-decay dynamics of the mediators, we can write 𝒢(x) by using the Laplace transform to help synthesize the two-way connection between the effective nonlocal interaction and the characteristic scales of the ensemble of mediators, namely 𝒢(x) = ℒ{ J (λ)/(2λ) }(|x|) and J(λ) = 2λℒ^-1{𝒢(x) }(λ) , where the direct transformation goes from λ→ |x| and the inverse from |x| →λ. § EXAMPLES AND DISCUSSION A large class of models, after linearization, falls within the general structure of Eqs. (<ref>-<ref>). When the spatially uniform solution is stable in a homogeneous environment, patterns do not emerge spontaneously but can be induced by the occurrence of spatial disturbances. In the following examples, we discuss cases related to vegetation dynamics, which are paradigmatic of pattern formation studies. First, we will follow a direct path, deriving the interaction kernel, 𝒢, from the mediator dynamics, i.e., the spectral density J, under different scenarios (Eq. (<ref>)). Second, attempting an inference procedure, we extract the interaction kernel from an observed pattern (using Eq. <ref>) and apply Eq. (<ref>) to reveal the dynamics of the mediators, through J. Activating mediator—Let us consider the specific scenario of vegetation patterns in semi-arid regions. These systems have received continued attention due to their importance for ecosystem tolerance to dryness and sensitivity to climate change <cit.>. In this context, water is the main resource and therefore acts as the main activating mediator of the vegetation-cover dynamics <cit.>. Diverse models have emerged in the literature (see recent and past broad reviews <cit.>). In Appendix <ref>, we solve an extended version of the Klausmeier model <cit.> applied to flat landscapes <cit.>. There, as plants consume water, they affect the water concentration not only at the consumption site but also in their vicinity due to the spatial dynamics of water. The resulting interaction kernel is derived and found to have the exponential form 𝒢(x)∝exp(-λ|x|), where the inverse length λ depends on the water parameters and the precipitation rate. Inhibiting mediator—Plant-plant interactions can also be mediated by substances that negatively affect plant growth.
For example, seagrass meadows (which live underwater) have their interaction strongly mediated by sulfide which has been shown to shape their spatial arrangements <cit.>. This scenario falls within the structure depicted by Case I, in Fig. <ref>a. Dots correspond to the numerical simulations of the system of partial differential equations (<ref>), at long times, and the solid line to the prediction provided by Eq. (<ref>). Since, Eq. (<ref>) is only valid in the limit of small perturbations, we considered the superposition of complex exponential functions but performed curve fitting for the amplitude and phase of the oscillations. This helps highlight that outside the limit of small perturbations our calculation still correctly predicts the frequency and decay of the induced spatial oscillations (see <cit.> for detailed discussion). Results for activating mediators will be similar, since both types of mediators lead to an induced state driven by the same mathematical structure, namely Eq. (<ref>). Multiple mediators (star network)— Systems with many uncoupled mediators (star network in Fig. <ref>b), for the case of inhibitors, have been analyzed in Appendix <ref>. In particular, we worked the numerical simulation for the case with three inhibiting mediators (Case II), presented in Fig. <ref>a. As for Case I (one inhibiting mediator), dots correspond to the numerical simulation of the system (<ref>) and the solid line to the prediction by Eq. (<ref>). In both cases, for simplicity, we assumed that we know the reaction rates (all equal to one in the example), and that we do not have information about water diffusion coefficient, such that the kernel has the form, 𝒢 (x) ∝1/N∑_i=1^Nexp(-|x|/D_i). Interestingly, the superposition of exponential functions, can emulate a non-exponential profile, such as a power law <cit.>, providing an unique signature of the star structure. Multiple mediators (linear chain)—Another scenario which is immediately tractable is when Ω represents a cascade structure that forms a linear chain. In this case, there is a hierarchy of populations, such that every signal i has its production influenced only by the population (i-1), hence Ω_ij∝δ_j,i-1, and the focal population is affected by the last (Nth) element of the cascade (similar to the linear chain in Fig. <ref>b, except that it is closed in this example). The interaction kernel that results from our calculations (Appendix <ref>) is given by a product of the Green functions of the mediators, 𝒢 = -Ω_0NΠ_j=1^N[ Ω_j,j-1 G_j ]. Then, we can use the fact that G is exponential (given by Eq. (<ref>)) to rewrite the interaction kernel as 𝒢(x) ∝exp[-∑_j=1^N λ_j |x|] = exp[- N ⟨λ⟩ |x|]. In this case, we can have access to the characteristic scale averaged over all mediators. The inference problem— In the previous examples, we obtained 𝒢 knowing the mediator dynamics, J. In the following, we discuss how to obtain the mediator dynamics, J, from pattern observation. We assume that, while observing an induced pattern, ϵ_0(x), we know a priori the landscape defect, q(x), and the population parameters, D_0 and Ω_00. The dynamics of the mediators can then be partially recovered by first extracting the interaction kernel from Eq. (<ref>) and, then, applying Eq. (<ref>) to access the spectral density J. The spectral density provides the characteristic spatial frequencies, λ associated to mediator dynamics. For the case of one mediator (Case I, for example, in Fig. 
<ref>a), the spectral density, J, is a single delta-function, indicating the presence of only one characteristic scale. For multiple mediators (Case II, for example, in Fig. <ref>a), many delta-function appear, indicating more than one characteristic scale. The cascade case has an ambiguity, because it leads, as in the case of one mediator, to a single characteristic scale (the average spatial scale across mediators). Supplementing additional information, the characteristic scale can ultimately provide the parameter values. Depending on the interaction network (Fig. <ref>b), the spatial frequencies can provide more or less significant insights about the parameters of the mediators. For instance, complex vegetation models have three equations, accounting for vegetation, roots, surface and underground water dynamics. In this case, as the dynamics of surface and underground water are coupled, due to infiltration, the system structure is actually a combination of the previous examples. Consequently, the extraction of a specific mediator parameter is not trivial, as the eigenvalues will be a mixture of the mediators' parameters. Hence, although Eq. (<ref>) is an advance, containing coarse-grained information, it is still limited. The complete Ω matrix can not be recovered of course, since there is a strong degeneracy as 𝒢(k) can be expanded in as N-order polynomial (Eq. <ref>), but the interaction matrix R has N^2 elements. In these cases, more details about the system need to be known to allow the pinpointing of the values of, for example, the groundwater diffusion coefficient, from the observation of the patterns. In any case, Sec. <ref> decomposes the explicit solution of the system Eq. (<ref>), focusing on coarse-grained features of the dynamics (A_i and λ_i), facilitating our understanding of how the elements of the matrices Ω and D shape the induced patterns. § FINAL REMARKS Induced states can occur in nature close to habitat boundaries, where there is a change in environmental conditions. We take advantage of the fact that the forcing of the system can reveal information about the underlying dynamics that mediates the population interactions. As a consequence, there is an opportunity for theoretical approaches to assist remote sensing initiatives, boosting the amount of information acquired from the surface (induced) patterns, e.g., captured in satellite images. We applied our results having in mind the case of vegetation patterns where underground dynamics play a crucial role. Roots, toxic substances, water, and termites directly and indirectly mediate plant-plant interactions, but access to their dynamics is not trivial, thereby rare in previous studies <cit.>. Beyond the vegetation context, small-scale synthetic experiments are also interesting opportunities as designs can be tuned to target specific “underground” features. For instance, in Ref. <cit.>, spatial patterns were interpreted as a signature of an unknown component of interaction between bacteria and advances in synthetic experimental populations <cit.> can create scenarios to validate whether and how much information can be recovered just from pattern observation. In general, our results show the benefits of exploiting naturally formed disturbances or artificial ones to help us validate and improve theoretical models. 
Therefore, it would be interesting to look for concrete opportunities and specialize our calculations to the corresponding scenario, e.g., extending the results to two dimensions and being calibrating the parameters with realistic values. § ACKNOWLEDGMENTS We are thankful to Emilio Hernández-García and Ricardo Martínez-García for discussions and critical reading of the manuscript. § FUNDING EHC acknowledges partial funding by the Center of Advanced Systems Understanding (CASUS) which is financed by Germany’s Federal Ministry of Education and Research (BMBF). CA acknowledge partial financial support by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)-Brazil (311435/2020-3) and Fundação de Amparo à Pesquisa do Estado de Rio de Janeiro (FAPERJ)-Brazil (CNE E-26/201.109/2021). § EXPLICIT APPROACH The starting (nonlinear) model for the densities of the focal population and mediators is given by ∂_t u(x,t) = D_0(u,{v_i})∇^2 u + f_0(u,{v_i}) + h(x)u, ∂_t v_i(x,t) = D_i(u,{v_i})∇^2 v_i + f_i(u,{v_i}) , with i > 0, where h(x) contains the heterogeneity of the environment, D_i is the diffusion coefficient of mediator i, and the functions f_i represent the rates of the interaction (reaction) processes. Let us define the state vector ψ(x,t)=[u,v_1,…,v_N] and linearize the system (<ref>) around the homogeneous steady state ψ^⋆, by considering ψ = ψ^⋆+ ϵ, where ϵ is a small deviation from the uniform solution. Up to first order in ϵ and h, we obtain Eq. (<ref>), with Ω_ij = ∂ f_i/∂ϵ_j|_ψ^⋆, where ∂ f_i/∂ϵ_j|_ψ^⋆ span the Jacobian of f, computed at ψ^⋆, which is obtained from f_j(u^⋆,{v_i^⋆})=0, ∀ j. The Fourier transformed linearized equation reads ∂_t ϵ(k,t) = R(k^2) ϵ(k) + q(k) , where hat means Fourier transform, being k the wavenumber, R(k^2)=Ω-Dk^2 is a (N+1)× (N+1) matrix where D is the diagonal matrix of diffusion coefficients. The vector q is an scaled version of the perturbation, q(k)≡[h( k)u^⋆, 0,0, …,0]_N+1. As a consequence, the stationary distribution of the population in Fourier space is given by ϵ( k) = - R^-1q = [adj( R) q]/-R , from which we extract the component associated to the focal species ϵ_0(k) = Q(k)/-R , where we have identified the numerator with a Fourier-transformed external forcing Q(k)=[adj( R) q(k)]_0. On the other hand, note that R = [R' -k^2I] ∏_i= 0^N D_i, where R'_ij= Ω_ij/D_i. Hence it is clear that R is a 2(N+1)-degree polynomial in k, which is proportional to the characteristic polynomial of the matrix R', [R'-λ I], once made the identification λ =k^2. Then, from (<ref>), we have R = r_0 + r_2k^2 +… + r_2(N+1)k^2(N+1), where the explicit form of the coefficients r_2n is given by Eq. (<ref>). To obtain these coefficients, note that for a matrix A of order M (with non zero eigenvalues), using that ( exp A)=exp( trA ), we have [A-λ I] = (-1)^Mλ^Mexp( tr[log(1-A/λ)] ) = (-1)^Mλ^Mexp( - tr∑_n=1^∞ (A/λ)^n/n) =∑_j=0^N a_j λ^j, where a_M=(-1)^M and, for 0<n≤ M, a_M-n=(-1)^M-n/n! T_1 n-1 0 … T_2 T_1 n-2 … ⋮ ⋮ ⋱ ⋮ T_n-1 T_n-2 … 1 T_n T_n-1 … t_1 , with T_n= trA^n=∑_i λ_i^n. Then, the coefficients of the polynomial that represents R in Eq. (<ref>) are given by r_2n= a_n ∏_i= 0^N D_i, with the coefficients a_n calculated for A=R' and M=N+1. § DISENTANGLED REPRESENTATION OF INDUCED PATTERNS In Eqs. 
(<ref>)-(<ref>), we first identify the influence of the focal population on the ensemble of mediators by defining the vector _i = Ω_i0 , and, reversely, the mediators' feedback on the focal population, _i =Ω_0i. Then, the core, of the matrix Ω, where the first row and first column have been eliminated, sets the coupling between mediators. As a consequence, the stationary form of Eqs. (<ref>)-(<ref>) can be written as D_0∂_x^2 ϵ_0(x) + Ω_00ϵ_0(x) + ∑_i=1^N_iϵ_i(x) + q(x) =0 , D_i∂_x^2 ϵ_i(x) + _i ϵ_0 (x) + ∑_j=1^N_ijϵ_j(x)=0 , where i=1,2,…,N. Fourier transforming Eq. (<ref>), we have D_ik^2ϵ^ m_i -∑_j=1^N_ijϵ^ m_j = ϵ_0 _i , where we have defined the N-dimensional vector ϵ^ m≡ϵ^ m(k), whose components are the deviations of the mediators from their uniform steady states (that is, it is a reduced form of the vector ϵ, where the first element ϵ_0) has been removed. Defining the diagonal matrix D of elements δ_ijD_i, with 1≤ i ≤ N, we rewrite Eq. (<ref>) in matrix form as ( k^2 I_N -D^-1 )ϵ^ m = ϵ_0 D^-1 , where I_N is the N× N identity matrix, and recalling that is a matrix, and are vectors, while ϵ_0 is a scalar. Then, from Eq. (<ref>), we obtain the solution ϵ^ m(k) = ϵ_0(k) [k^2 I_N - D^-1]^-1 D^-1 = ϵ_0(k) [ P Λ(k) P^-1] D^-1 , where the matrix P has columns given by the eigenvectors of D^-1 and Λ(k) is a diagonal matrix with elements Λ_ii(k)=[k^2+λ_i^2]^-1, where -λ_i^2 are the eigenvalues of D^-1 (that must be negative for the stability of the system), which combine the impact of diffusion and mediator-mediator coupling. Let us develop the sum of the contributions of the mediators appearing in Eq. (<ref>), using Eq. (<ref>). That is ∑_i=1^N_iϵ_i(x) = ·ϵ^ m(k) = ϵ_0(k)[ P] Λ(k) [P^-1 D^-1] = ϵ_0(k)∑_i=1^NA_i/λ_i^2+k^2 , which is a sum of Lorentzian functions, where A_i ≡ ( P)_i (P^-1 D^-1)_i weights the impact of each eigenvalue. The weights depend on the eigenvectors of the core matrix (contained in P) and hence on how they reflect the feedback loop between the focal population and the mediators (i.e., and ), moderated by the diffusivities (contained in D). Finally, into the Fourier transformed Eq. (<ref>), [D_0 k^2 - Ω_00]ϵ_0(k) + ·ϵ^ m(k) = q(k) , we plug Eq. (<ref>), extracting the effective representation of the induced states of the focal population, [ D_0 k^2 - Ω_00 + 𝒢(k) ] ϵ_0(k) = q(k) , where we identify 𝒢(k) ≡∑_i=1^N-A_i/λ_i^2 + k^2 as the interaction kernel that arises when compressing the degrees of freedom from the mediators. This is because, in real space, Eq. (<ref>) becomes [ D_0∂_x^2 + Ω_00 - 𝒢∗ ] ϵ_0(x) = -q(x) , and the kernel generates a convolution term, 𝒢∗ϵ_0= ∫_-∞^+∞𝒢(x-x') ϵ_0(x') dx', which couples changes in densities at x with the densities at x', spatially-extending interactions. § EXAMPLES §.§ Activator networks We consider a structure that is common in models that describe vegetation in semi-arid regions, where water-type mediators act as activators <cit.>. As example, for the simplest case of one mediator, a typical structure is ∂_t u(x,t) = D_0∂_xx u + u^2v - u + h(x) , ∂_t v(x,t) = ∂_xx v + R -v u^2 - v, where R is the precipitation rate. For this system, a vegetated steady state (u^⋆=R/2+√(R^2/4-1)), and v^⋆=1/u^⋆) is stable for any R>2. The interaction matrix is Ω= c|cΩ_00 = c|c -1 u^⋆^2 -2 -1-u^⋆^2 , then A_1= =-u^⋆^2, the single eigenvalue is -λ^2=1+u^⋆^2, and Eq. (<ref>) becomes 𝒢(k) = 2u^⋆^2/1+u^⋆^2+k^2 , implying exponential kernel 𝒢(x)∼exp(-√(1+u^⋆^2)|x|). 
§.§ Inhibitory networks (case studies) We analyze a type of inhibitory network for which numerical examples are depicted in Fig. <ref>a, where agreement between the explicit and effective descriptions is observed. In all cases we consider an heterogeneous scenario with a semi-infinite habitat condition, where, for x<0, h→-∞, mimicking harmful conditions, and for 0<x<L, h=0. The system size, L, is taken to be as large as possible to account for the spatial relaxation of the induced states. The evolution equations for N inhibitors are of the form ∂_t u(x,t) = D_0∂_xx u + u - u(v_1 +⋯ + v_N) + h(x) , ∂_t v_i(x,t) = D_i∂_xx v_i -v_i + u , The homogeneous steady state in the viable habitat is given by u^⋆=v_i^⋆=1/N for any i. Moreover, the (linearized) network is of star type (without mediator-mediator interactions), given by Ω= c|cΩ_00 = c|rrrr 0 -1/N -1/N ⋯ -1/N 1 -1 0 ⋯ 0 1 0 -1 ⋯ 0 ⋮ ⋮ 0 ⋱ 0 1 0 0 ⋯ -1 . For all i≥ 1, _i=Ω_0i=-1/N, _i=Ω_i0=1, Ω_00=0, and the core is minus the unitary matrix (since Ω_ii=-1), so that D^-1 is a diagonal N× N matrix with elements -1/D_i (which are the eigenvalues -λ_i^2). Moreover, in this case A_i=-1/(N D_i), hence, the interaction kernel in Eq. (<ref>) becomes 𝒢(k) = 1/N∑_i=1^ND_i/1+ D_i^2k^2 , and J(λ) = ∑_i=1^N1/ND_iδ(λ-λ_i) . Then, the spectral density J recovers the distribution of values of the diffusivities in the original systems. The case studies depicted in Fig <ref> correspond to particular values of N , where we set D_0=10^-3. * Case I: A single inhibitor (N=1). The dynamics of the focal population and the inhibitor are explicitly described by ∂_t u(x,t) = D_0∂_xx u + u - vu +h(x)u, ∂_t v(x,t) = ∂_xx v -v + u. * Case II: three inhibitors (N=3) The evolution equations explicitly are ∂_t u(x,t) = D_0∂_xx u + u - (v_1 + v_2 + v_3)u + h(x) , ∂_t v_1(x,t) = ∂_xx v_1 -v_1 + u , ∂_t v_2(x,t) = 4 ∂_xx v_2 -v_2 + u ∂_t v_3(x,t) = 1/4∂_xx v_3 -v_3 + u . It is worth noting that we can reach the same result for the kernel from a Green-function approach. In fact note that each term of Eq. (<ref>) corresponds to the Fourier transform of the Green function for the operator (D_i ∇^2+ Ω_ii), which in one dimension is G_i(x) = 1/2√(|Ω_ii|D_i)exp(-|x| √(|Ω_ii|/D_i)). Then, the solution of ( D_i ∇^2+ Ω_ii)ϵ_i = -Ω_i0ϵ_0 can be written as ϵ_i (x) = Ω_i0∫_-∞^∞ G_i(x-x') ϵ_0(x') dx' ≡Ω_i0 G_i∗ϵ_0, where ∗ represents a convolution. Plugging this solution into the linearized stationary equation for ϵ_0, Eq. (<ref>), we arrive to a single closed equation for ϵ_0, namely (D_0∇^2 + Ω_00 -𝒢∗) ϵ_0 + h(x)u^⋆ = 0 , with 𝒢(x) = -∑_i=1^NΩ_0iΩ_i0 G_i(x). In the present example, for which Ω_i0=1, Ω_0i=-1/N, and Ω_ii=-1 for all i>1, we have 𝒢(x) = ∑_i=1^NG_i(x)/N, whose Fourier transform recovers Eq. (<ref>). §.§ Cascade of mediators Considering the linearized system ∂_t ϵ_0 = D_0∇^2 ϵ_0 + Ω_00ϵ_0 + Ω_0N ϵ_N + q(x) , ∂_t ϵ_i = D_i ∇^2 ϵ_i + Ω_iiϵ_i + Ω_i i-1 ϵ_i-1, for i≥ 1 , therefore, the interaction network is Ω= c|c 0 = c|ccccΩ_00 0 0 ⋯ Ω_0N Ω_10 Ω_11 0 ⋯ 0 0 Ω_21 Ω_22 ⋯ 0 ⋮ 0 0 ⋱ 0 0 0 0 Ω_N N-1 Ω_NN . Note that the core is a lower triangular matrix (bi-diagonal), then D^-1 is also triangular, hence the eigenvalues coincide with the diagonal elements D_i^-1Ω_ii, which are real and negative. 
For a clear example, let us consider N=3, which is enough to visualize the recurrent structure of P, the matrix whose columns are the eigenvectors, namely P = [ (Ω'_11 - Ω'_22)( Ω'_11 - Ω'_33 )/Ω'_32Ω'_21 0 0; ( Ω'_11 - Ω'_33 )/Ω'_32 ( Ω'_22 - Ω'_33 )/Ω'_32 0; 1 1 1 ] , where we have defined Ω'_ij=Ω_ij/D_i, and its inverse is P^-1 = [ Ω'_32Ω'_21/( Ω'_11 - Ω'_22) ( Ω'_11 - Ω'_33) 0 0; Ω'_32Ω'_21/( Ω'_22 - Ω'_11) ( Ω'_22 - Ω'_33) Ω'_32/Ω'_22 - Ω'_33 0; Ω'_32Ω'_21/(Ω'_33 - Ω'_11) ( Ω'_33 - Ω'_22) Ω'_32/Ω'_33 - Ω'_22 1 ] , then, we have (Ω^in P )_i = Ω_0N for all i, and (P^-1 D^-1Ω^out)_i = Ω_10/D_1(P^-1)_i1= Ω_10/D_1∏_j=1^N-1Ω_j+1 j/D_j+1/∏_j ≠ i^N ( Ω_ii/D_i - Ω_jj/D_j) , hence, from Eq. (<ref>), A_i = Ω_10Ω_0N/D_1∏_j=1^N-1Ω_j+1 j/D_j+1/∏_j ≠ i^N ( Ω_ii/D_i - Ω_jj/D_j) . Finally, Eq. (<ref>) becomes 𝒢(k) = ∑_i=1^N-A_i/ k^2+-Ω_ii/D_i = -Ω_0N∏_j=1^NΩ_j j-1/∏_j=1^N[ D_i (k^2 + -Ω_ii/D_i)] , where we used Lagrange interpolation decomposition R(x)/Q(x)=∑_i=1^N R(x_i)/(x-x_i)Q'(x_i), setting R(x)=1, Q(x)=Π_i=1^N(x-Ω'_ii) and x=k^2. From the Green-function perspective, Eq. (<ref>) can be written as 𝒢(k) = -Ω_0NΠ_j=1^N [ Ω_j,j-1 G_j(k) ] , where the Green function G_j is the same defined in Eq. (<ref>). In fact, in the present problem, the stationary form of Eq. (<ref>) for ϵ_i is (Ω_ii + D_i ∇^2) ϵ_i = -Ω_i i-1ϵ_i-1 . Using the Green function G_i of the operator on the left-hand side, given by Eq. (<ref>), we can write ϵ_i explicitly as ϵ_i = Ω_i,i-1 G_i ∗ϵ_i-1 . The first two solutions are ϵ_1 = Ω_1,0 G_1∗ϵ_0, and ϵ_2 = Ω_2,1 G_2 ∗ϵ_1 = Ω_2,1Ω_1,0 G_2 ∗ (G_1 ∗ϵ_0), and so on recursively, such that the i-th term is ϵ_i ( x) = Π_j=1^i [ Ω_j,j-1 G_j ∗] ϵ_0 (x) . Substituting the solution for ϵ_N into Eq. (<ref>) for ϵ_0, we obtain the closed equation for ϵ_0 ( D_0∇^2 + Ω_00 - 𝒢∗ )ϵ_0 + q(x) = 0 , where, from Eq. (<ref>), the interaction kernel 𝒢 for the cascade is 𝒢(x) = -Ω_0NΠ_j=1^NΩ_j,j-1 [G_1∗ G_2∗⋯∗ G_N](x) , whose Fourier-transform recovers Eq. (<ref>).
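Both closed forms obtained in this section, the sum of single-mediator Green functions for the star-type inhibitory network and the product of Green functions for the cascade, can be cross-checked against the direct matrix elimination. The short numpy sketch below is an illustration added here; the sign conventions follow the derivation above, and the cascade coefficients are arbitrary test values, not numbers taken from the text.

```python
import numpy as np

def kernel_matrix(k, w_in, w_out, core, D):
    """Effective kernel: -w_in (k^2 I - D^-1 core)^-1 D^-1 w_out (assumed sign convention)."""
    Dinv = np.diag(1.0 / D)
    return -w_in @ np.linalg.solve(k**2 * np.eye(len(D)) - Dinv @ core, Dinv @ w_out)

ks = [0.0, 0.7, 2.0]

# (a) Star-type inhibitory network of Case II: N = 3, D = (1, 4, 1/4), Omega_ii = -1.
D = np.array([1.0, 4.0, 0.25])
w_in, w_out, core = -np.ones(3) / 3, np.ones(3), -np.eye(3)
for k in ks:
    green = np.sum(-w_in * w_out / (D * k**2 + 1.0))      # -sum_i Omega_0i Omega_i0 G_i(k)
    print("star   ", k, kernel_matrix(k, w_in, w_out, core, D), green)

# (b) Cascade with N = 3 mediators (illustrative coefficients, chosen for the test only).
D = np.array([1.0, 2.0, 0.5])
diag = np.array([-1.0, -2.0, -0.5])                       # Omega_ii < 0 for stability
sub = np.array([0.7, -1.3, 0.9])                          # Omega_10, Omega_21, Omega_32
Omega_0N = -0.4
core = np.diag(diag) + np.diag(sub[1:], -1)               # lower bi-diagonal core
w_in = np.array([0.0, 0.0, Omega_0N])
w_out = np.array([sub[0], 0.0, 0.0])
for k in ks:
    green = -Omega_0N * np.prod(sub / (D * k**2 - diag))  # -Omega_0N prod_j Omega_j,j-1 G_j(k)
    print("cascade", k, kernel_matrix(k, w_in, w_out, core, D), green)
```

In both cases the two columns coincide, reflecting that the compressed description depends on the mediators only through their Green functions and couplings.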
http://arxiv.org/abs/2407.12242v3
20240717011827
Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model
[ "Fanxu Meng", "Xiangzhen Zhou" ]
quant-ph
[ "quant-ph" ]
Beamforming Design for Secure MC-NOMA Empowered ISAC Systems with an Active Eve This work was supported by the Beijing Natural Science Foundation under Grant L222004 and the Young Backbone Teacher Support Plan of Beijing Information Science & Technology University under Grant YBT 202419. Zhongqing Wu^*, Xuehua Li^*, Yuanxin Cai^*, Weijie Yuan† *Key Laboratory of Information and Communication Systems, Ministry of Information Industry, Beijing Information Science and Technology University, Beijing, China †Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China Email: {zhongqing.wu, lixuehua, cai_yuanxin}@bistu.edu.cn, yuanwj@sustech.edu.cn July 22, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Quantum computing presents a compelling prospect for revolutionizing the field of combinatorial optimization, in virtue of the unique attributes of quantum mechanics such as superposition and entanglement. The Quantum Approximate Optimization Algorithm (QAOA), which is a variational hybrid quantum-classical algorithm, stands out as leading proposals to efficiently solve the Max-Cut problem, a representative example of combinatorial optimization. However, its promised advantages strongly rely on parameters initialization strategy, a critical aspect due to the non-convex and complex optimization landscapes characterized by low-quality local minima issues. Therefore, in this work, we formulate the problem of finding good initial parameters as a generative task in which the generative machine learning model, specifically the denoising diffusion probabilistic model (DDPM), is trained to generate high-performing initial parameters for QAOA. The diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones. Experiments with various sized Max-Cut problem instances demonstrate that our diffusion process consistently enhances QAOA’s effectiveness compared to random parameters initialization. Moreover, our framework indicates the capacity of training on small, classically simulatable problem instances, aiming at extrapolating to larger instances to reduce quantum computational resource overhead. § INTRODUCTION Quantum computing is one of the major transformative technologies and presents an entirely new computational paradigm that has potential to achieve an exponential or polynomial advantage for classically intractable problems encompassing machine learning <cit.>, molecular dynamics <cit.>, and combinatorial optimization <cit.>, etc. Although quantum computing is still in the noisy intermediate-scale quantum (NISQ) era <cit.> where quantum hardware is characterized by limited qubit numbers, inherent system noise, significant quantum gates error, and constrained qubit topology, the community have dedicated to exploring quantum algorithms tailored to NISQ machines. 
Among these efforts, Quantum Approximate Optimization Algorithm (QAOA) <cit.>, revolving around the use of Parameterized Quantum Circuit (PQC), emerges as the foremost proposal and has exhibited their superiority in addressing complex and NP-Hard combinatorial optimization problems like Max-Cut <cit.>. QAOA combines a parameterized quantum state evolution that is performed on a NISQ device, with a classical optimizer that is used to find optimal parameters. Furthermore, QAOA advance spurs the exploration of practical applications such as circuit layout designs <cit.>, wireless communication <cit.>, finance <cit.> and so on. Although QAOA shows immense practical significance in many fields, optimizing QAOA parameters poses a significant challenges <cit.> because the optimization objective is nonconvex with low-quality nondegenerate local minima of the cost landscape hindering the trainability of the algorithm. Parameters optimization focus on evolving the random initial parameters to the specific ones that can perform well on given tasks. Many approaches have been proposed for QAOA parameter optimization including gradient-based and derivative-free methods <cit.>. However, these methods usually require many measurement runs and consequently remain resource-expensive computations. Further, parameters initialization has been recognized as a crucial pipeline to promote the convergence to a potential solution within the parameter space, thereby facilitating more effective optimization without resorting to resource-expensive computations <cit.>. Therefore, we impose that it is essential and inevitable to explore a high-quality parameter initialization strategy. Diffusion model <cit.>, a success generative machine learning method, has shown the significant advantage in the image and visual generation. Recently, the denoising diffusion model has been utilized to generate quantum circuits for unitary compilation and entanglement generation tasks <cit.>. Taking a closer look at optimizing QAOA parameters and diffusion models, we are aware of the commonalities between diffusion-based image generation and QAOA parameters optimization process in the following aspects (illustrated in Figure <ref> introduced by <cit.>): ( 1 ) both the reverse process of diffusion model and QAOA parameters optimization can be regarded as transitions from random noise/initialization to specific distributions. ( 2 ) high-performing QAOA parameters and high-quality images can be degraded into simple distributions, e.g., Gaussian distributions, by iteratively adding noise. To this end, motivated by the foregoing similarity, we delve into the synergy between the use of the diffusion model and QAOA parameters initialization where the trained model can explore the evolution from random parameters to high-performing parameters. We posit that once the model is trained, it can efficiently learn the distribution of high-performing parameters, and excel at generating a good initialization to speed up the optimization convergence. This innovative approach paves the way for harnessing the capabilities of both the generative machine learning model and quantum computing to address some of the most formidable challenges in computer science. The main contributions of our work are summarized as follows. * A denoising diffusion probabilistic model is constructed and trained to generate high-performing initial parameters for QAOA. 
The proposed approach leverages the similarity between diffusion-based image and QAOA's initial parameters generations, and strengths of both quantum computing and generative machine learning model. * Extensive evaluations on various Max-Cut problem instances using the Xanadu Pennylane <cit.> quantum circuit simulator show that our proposed scheme notably outperforms the random initialization strategy. Once the model is trained, the parameters generated can be leveraged to efficiently find high-quality initial parameters for unseen test instances the sizes of which are even larger than those in the training dataset. § PRELIMINARIES §.§ Quantum Computing Quantum mechanics operate within the Hilbert space ℋ, which is isomorphic to the complex Euclidean space ℂ. Dirac notation is used to denote quantum states, and a pure quantum state is defined by a column vector | ·⟩ (named `ket') with the unit length. The mathematical expression of the n-qubit pure state is denoted as | ψ⟩ = ∑_j=1^2^nα _j | j ⟩ where ∑_j=1^2^n |α _j|^2= 1 and | j ⟩ , j=1,2,… ,2^n stands for the computational basis states. The quantum operations on qubits are quantum gates, as unitary matrices, transforming a quantum state to another and preserving the norm of the quantum state. The basic quantum gates can be split into two groups, single-qubit gates and two-qubit gates. The commonly used single-qubit gates are Pauli gates (σ _x, σ _y, σ _z, and I) and corresponding rotation gates, and Hadamard gate (H), which can respectively be denoted as the following unitary matrices, σ _x =[ 0 1; 1 0 ] σ _y =[ 0 -i; i 0 ] σ _z=[ 1 0; 0 -1 ] I=[ 1 0; 0 1 ] H=1/√(2)[ 1 1; 1 -1 ] R_x ( θ ) =[ cosθ/2 -isinθ/2; -isinθ/2 cosθ/2 ] R_y ( θ ) =[ cosθ/2 -sinθ/2; sinθ/2 cosθ/2 ] R_z ( θ ) =[ e^-iθ/2 0; 0 e^iθ/2 ] The notable two-qubit gate is the CNOT gate as follows CNOT=[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ] §.§ Quantum Approximate Optimization Algorithm for Max-Cut The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm paradigm developed to address combinatorial optimization tasks on NISQ devices. It utilizes the problem-dependent parameterized quantum circuit involving alternate Hamiltonian and Mixer layers to prepare and evolve quantum states that encode potential solutions to the optimization problem at hand. By adjusting variational parameters and optimizing them using classical optimization techniques, QAOA aims to find near-optimal solutions efficiently, as illustrated in Figure <ref>. QAOA is particularly suited for the Max-Cut problem and leverages the expressive power of quantum computation to explore large solution spaces more effectively than its classical counterparts. Its efficacy lies in classical initialization and optimization of the parameters, aiming at exploring parameter space to achieve good approximation solutions. In QAOA, the binary assignment combinatorial optimization is first encoded in a cost Hamiltonian H_c by the mapping between n classical binary decision variables s_i∈{ 0,1 } , i=1,2,… ,n and the eigenvalues of the quantum Pauli σ _z, where the ground energy eigenstate of H_c corresponds to the solution of the combinatorial optimization problem. Second, the transverse field mixer Hamiltonian is constructed as H_m = ∑_i=1^nσ _x^i. Then, the uniform superposition initial state | + ⟩ ^⊗ n is prepared by performing Hadamard gates on all qubits with an all-zeroes state. 
Next, a variational quantum state | ψ ( γ _1,… ,γ _p ,β _1,… ,β _p ) ⟩ is prepared by employing alternate Hamiltonian and Mixer layers e^-iγ _i H_c and e^-iβ _i H_m, i=1,2,… ,p as follows | ψ ( γ _1,… ,γ _p ,β _1,… ,β _p ) ⟩=e^-iβ_pH_me^-iγ _pH_c⋯ e^-iβ_1H_me^-iγ _1H_c | + ⟩ ^⊗ n Finally, the selected classical optimizer is applied to vary parameters γ_i and β_i, i=1,2,… ,p to minimize the cost function C ( γ _1,… , γ _p, β _1,,… ,β _p ) =⟨ψ ( γ _1,… ,γ _p ,β _1,… ,β _p ) | H_c | ψ ( γ _1,… ,γ _p ,β _1,… ,β _p ) ⟩ For the Max-Cut problem, given a graph G=(V,E) where V is the set of nodes and E is the set of edges, the goal of Max-Cut is to partition the set of nodes V into two disjoint subsets such that the total weight of edges connecting the two subsets is maximized. The mathematical expression of the Max-Cut is formulated as follows s⃗min∑_ ( i,j )∈ E ^ -w_ijs_is_j where s_k∈{ -1,1 } and the weight w_ij of an edge (i,j) is set to 1 for the unweighted Max-Cut problem. To apply QAOA to the Max-Cut, the above objective can be encoded in the following problem Hamiltonian by mapping binary variables s_k onto the eigenvalues of the Pauli σ _z ∑_ ( i,j )∈ E ^ -w_ijσ _z^iσ _z^j Thus, minimizing the objective of the Max-Cut is equivalently transformed into obtaining the ground state energy of the problem Hamiltonian. §.§ Denoising Diffusion Probabilistic Model The diffusion model is a class of generative models which can be traced back to non-equilibrium thermodynamics <cit.>. The model aims at progressively removing noise from inputs and generating clear images. The representative work, denoising diffusion probabilistic model (DDPM) <cit.>, refines the diffusion model with a training paradigm characterized by forward and reverse processes in a multi-step chain indexed by timesteps. The forward process, dubbed the diffusion process, is characterized as progressive adding noise as well as the denoising process is regarded as a gradual eliminating noise process. In the diffusion process, given an original sample x_0, Gaussian noise is progressively added for T steps and further the noisy samples x_1, x_2, …, x_T are obtained. Based on the reparameterization rule, the mathematical expression of this process can be written as follows <cit.>, x_t = √(α̅ _t) x_0 + √(1-α̅ _t)ε or q ( x_t|x_0 )=𝒩 ( x_t; √(α̅ _t) x_0, ( 1-α̅ _t )ϵ ^2 ) where α̅ _1 ,α̅ _2,… ,α̅ _T are a series of constants decreasing with t increasing and 𝒩 is the gaussian distribution. In the denoising process, beginning with the timestep T, taking a sample x_t and the current timestep t as inputs, the noise prediction network in the model can evaluate the noise and eliminate it from x_t to generate a new sample x_t-1. The denoising process repeats the process until the original sample is restored. Mathematically, the denoising process can be formulated as follows <cit.> 𝐱_t-1∼1/√(α_t)(𝐱_t-1-α_t/√(1-α̅_t)ϵ_θ(𝐱_t, t)) where ϵ_θ(𝐱_t, t) is the noise prediction network and α̅ _t=α_t…α_1. § METHODOLOGOY Due to the lack of publicly available data sets, we first construct the dataset consisting of various Max-Cut problems and the approximate optimal parameters of QAOA. Then, in the later part, we detailed the diffusion-based initial parameters generation for QAOA. §.§ Dataset Generation We first construct synthetic regular graphs comprising 3500 instances with node sizes N ranging from 4 to 8, and the probability p of having an edge between two nodes varying between 0.3 and 0.75, as illustrated in Figure <ref>. 
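For reference, the QAOA energy that is optimized for each such instance can be evaluated with a small statevector simulation. The sketch below is an illustration added here (it is not the authors' PennyLane code); the 4-node graph and the angles are arbitrary, and the diagonal cost Hamiltonian is written as the sum of sigma_z^i sigma_z^j over the edges, a common convention under which its ground states are the maximum cuts (the overall sign in the convention displayed above may differ).

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]     # illustrative 4-node instance
n = 4

# Diagonal of H_c = sum_{(i,j) in E} z_i z_j, with z_k = +/-1 read off the bit string.
z = np.array([[1 - 2 * ((b >> i) & 1) for i in range(n)] for b in range(2**n)])
Hc = np.sum([z[:, i] * z[:, j] for i, j in edges], axis=0)

X = np.array([[0, 1], [1, 0]], dtype=complex)

def apply_mixer(state, beta):
    """Apply e^{-i beta H_m} = prod_k e^{-i beta X_k}, one qubit at a time."""
    Rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(Rx, np.moveaxis(psi, q, 0), axes=1), 0, q)
    return psi.reshape(-1)

def qaoa_energy(gammas, betas):
    psi = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)    # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * Hc) * psi                     # e^{-i gamma H_c} is a diagonal phase
        psi = apply_mixer(psi, b)
    return np.real(np.vdot(psi, Hc * psi))                   # <psi| H_c |psi>

print("maximum cut (brute force):", (len(edges) - Hc.min()) // 2)
print("ground energy of H_c     :", Hc.min())
print("p=3 QAOA energy, arbitrary angles:", qaoa_energy([0.4, 0.7, 0.3], [0.6, 0.2, 0.5]))
```

A classical optimizer would then adjust the six angles to push this energy toward the ground energy, which is exactly the per-instance optimization described next.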
Next, each of the above graphs is inputted into QAOA for the Max-cut problem. Further, to maintain the consistency of the dimension of the diffusion model input and output, we fixed the layers of QAOA to 3, which contains 6 parameters γ⃗ = (γ_1, γ_2, γ_3 ) and β⃗ = (β_1, β_2, β_3 ). The QAOA starts with random initial parameters of γ⃗ and β⃗, and performs optimization process for 500 iterations. However, since the inherently complex optimization landscape of the QAOA algorithm <cit.>, the final parameters optimized may have a relatively large gap compared to optimal ones. Therefore, we devise the multi-start strategy to find a parameter setting which is the closest to optimal values. The parameters corresponding to the lowest cost function are recorded as the input of the proposed diffusion model. §.§ Model Preparation In the pursuit of enhancing the application of QAOA to the max-cut problem, the proposed approach involves the integration of the DDPM model and QAOA parameters initialization. We first reshape the training data into the tensor representation. Next, as illustrated in Figure <ref>, the model is built with two processes, parameters diffusion and generation. In the parameters diffusion process, to train the model, we corrupt the samples in the training set. Then, the model learns to predict the noise of the present step, aiming at subtracting it from the input sample. The noise prediction model is constructed consists of the encoding module and the step-embedding module. As illustrated in Figure <ref>, the encoding module involving 4 full-connected layers with ReLU nonlinearities to encode the input parameters, and the step-embedding module is equipped with 3 embedding layers to iteratively transform the time-step into the latent representation. Specifically, adapting from the work <cit.>, the training procedure of the noise prediction network ϵ_θ(𝐱_t, t) is characterized in Algorithm <ref> where 𝒩(0, 𝐈) represents Gaussian noise distribution, √(α̅_t)𝐱_0+√(1-α̅_t)ϵ is the noisy sample in t time-step and ϵ_θ(√(α̅_t)𝐱_0+√(1-α̅_t)ϵ, t) predicts the present noise. And the parameters generation/sampling is characterized in Algorithm <ref> § EXPERIMENT In this section, we conduct experiments to validate the effectiveness of diffusion-based QAOA initial parameters generation. §.§ Experiment Setup Implementation Detail. For the noise prediction model, we set the input dimension of the encoder module to 6 and the input dimension of the step-embedding module to 100. The model is trained for 100 epochs with batch size being 50. Furthermore, we use the Adam optimizer to optimize the model with the learning rate being lr=10^-3, and neithor weighted decay nor learning rate scheduler are used. Dataset and Baseline Model. We first randomly generate 50 test graphs for the Max-Cut problem with different node numbers and edge probabilities to demonstrate the improvement of the attainable lowest energy/cost achieved by the diffusion-based parameters initialization over the random initialization baseline. We proceed to iteratively optimize parameters for 100 steps after the above initialization strategies. The results are presented in Figure. <ref>, where the orange line is the attainable lowest energy for the proposed initialization strategy, and the blue line for random initialization. The results show that the proposed initialization most outperforms the baseline and attains lower energies up to 1.8x. 
The average improvements for different node sizes compared to the baseline are listed in Table <ref>. For clarity, we randomly generate 5 graphs with various node sizes to show the convergence of the proposed work against the baseline, as illustrated in Figure <ref>. Moreover, we randomly generate 8 graphs varying node sizes from 9 to 16 for the Max-Cut problem and demonstrate that the proposed initialization strategy can be effectively extrapolated to larger instances and enhance QAOA’s effectiveness compared to the baseline, as illustrated in Figure <ref> § CONCLUSION In conclusion, delving into the commonalities between the diffusion-based generation process and QAOA's parameters training, we present a significant advancement in combining the Denoising Diffusion Probalistic Model (DDPM) with the Quantum Approximate Optimization Algorithm (QAOA), specifically targeting the Max-Cut problem. To show the potential of the DDPM in enhancing the initialization process of QAOA parameters, we conduct extensive experiments. The results demonstrate that the integration of the DDPM with QAOA can improve the attainable lowest energy of QAOA compared to the baseline, Moreover, we also show that the proposed scheme has the capacity to generalize the larger Max-Cut problem instances beyond the training data regime, consequently, reduce quantum computational resource overhead. This research establishes the foundation for future explorations in quantum computing, highlighting the diffusion model as a crucial instrument in this rapidly evolving field. § FUTURE WORKS The current work offers multiple extensions. On the one hand, we may generalize QAOA parameters initialization to the partial or entire parameters initialization in the quantum neural networks. Moreover, we expect that the parameter sampling from the distribution learning by the diffusion model may mitigate the problem of the barren plateau in the PQCs training. On the other hand, perhaps the more complex diffusion model such as U-Net's cross-attention maps can future improve our method. § ACKNOWLEDGEMENT This work is supported by the Jiangsu Funding Program for Excellent Postdoctoral Talent No.2022ZB139. unsrt
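As a companion to the methodology described above, the following PyTorch sketch (added here; it is not the authors' released code) builds a noise-prediction network of roughly the stated shape and runs one DDPM training step on a batch of 6-dimensional QAOA parameter vectors. The encoder input dimension of 6, a timestep range of 100 (read here as the step-embedding input dimension mentioned in the experiment setup), the Adam optimizer with learning rate 10^-3, and the batch size of 50 follow the text; the hidden width, the linear noise schedule, and the way the step embedding is injected into the encoder are assumptions.

```python
import torch
import torch.nn as nn

PARAM_DIM, T, HIDDEN = 6, 100, 64    # 6 QAOA parameters; T interprets the stated dimension; HIDDEN is assumed

class NoisePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # encoding module: 4 fully connected layers with ReLU non-linearities, input dim 6
        self.fc_in = nn.Linear(PARAM_DIM, HIDDEN)
        self.fc_mid = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
                                    nn.Linear(HIDDEN, HIDDEN), nn.ReLU())
        self.fc_out = nn.Linear(HIDDEN, PARAM_DIM)
        # step-embedding module: three layers turning the timestep into a latent vector
        self.step_embed = nn.Sequential(nn.Embedding(T, HIDDEN),
                                        nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
                                        nn.Linear(HIDDEN, HIDDEN))

    def forward(self, x_t, t):
        # how the step embedding enters the encoder is an assumption: here it is
        # added to the first hidden representation of the noisy parameters
        h = torch.relu(self.fc_in(x_t)) + self.step_embed(t)
        return self.fc_out(self.fc_mid(h))

betas = torch.linspace(1e-4, 0.02, T)            # assumed noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # \bar{alpha}_t

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x0):
    """One DDPM step: corrupt x0 to x_t and train the network to predict the injected noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    loss = ((model(x_t, t) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(train_step(torch.rand(50, PARAM_DIM)))     # batch of 50, as in the experiment setup
```

Sampling new initial parameters then amounts to running the trained denoiser backwards from Gaussian noise, as in the parameter generation procedure characterized above.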
http://arxiv.org/abs/2407.12654v1
20240717153731
Sampling with a Black Box: Faster Parameterized Approximation Algorithms for Vertex Deletion Problems
[ "Barış Can Esmer", "Ariel Kulik" ]
cs.DS
[ "cs.DS" ]
figures/ claimproof claimClaimClaims claimClaimClaims algorithmAlgorithmAlgorithms algorithmAlgorithmAlgorithms ALC@uniqueLineLines ALC@uniqueLineLines propertiesenumerate1 [properties,1]label=P*, align=left, left=0pt, itemindent=* propertiesipropertyproperties Configuration: Input: Output Return theoremTheorem[section] lemma[theorem]Lemma definition[theorem]Definition claim[theorem]Claim problemProblem corollary[theorem]Corollary observation[theorem]Observation proofsketch 1]Barış Can EsmerThe author is part of Saarbrücken Graduate School of Computer Science, Germany. 2]Ariel KulikThis project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 852780-ERC (SUBMODULAR). [1]CISPA Helmholtz Center for Information Security, Saarbrücken, Germany. [2]Computer Science Department, Technion, Haifa, Israel. Sampling with a Black Box: Faster Parameterized Approximation Algorithms for Vertex Deletion Problems [ ===================================================================================================== § ABSTRACT In this paper we introduce Sampling with a Black Box, a generic technique for the design of parameterized approximation algorithms for vertex deletion problems (e.g., , , etc.). The technique relies on two components: * A Sampling Step. A polynomial time randomized algorithm which given a graph G returns a random vertex v such that the optimum of G∖{v} is smaller by 1 than the optimum of G with some prescribed probability q. We show such algorithms exists for multiple vertex deletion problems. * A Black Box algorithm which is either an exact parameterized algorithm or a polynomial time approximation algorithm. Our technique combines these two components together. The sampling step is applied iteratively to remove vertices from the input graph, and then the solution is extended using the black box algorithm. The process is repeated sufficiently many times so that the target approximation ratio is attained with a constant probability. The main novelty of our work lies in the analysis of the framework and the optimization of the parameters it uses. We use the technique to derive parameterized approximation algorithm for several vertex deletion problems, including , d and ℓ. In particular, for every approximation ratio 1<β<2, we attain a parameterized β-approximation for which is faster than the parameterized β-approximation of [Jana, Lokshtanov, Mandal, Rai and Saurabh, MFCS 23']. Furthermore, our algorithms are always faster than the algorithms attained using Fidelity Preserving Transformations [Fellows, Kulik, Rosamond, and Shachnai, JCSS 18']. empty § INTRODUCTION A vast body of research has been dedicated to basic vertex deletion problems such as , 3 and . In these problems, the objective is to delete a minimum cardinality set of vertices from the input (hyper-)graph so that the remaining (hyper-)graph satisfies a specific property (edge-free, cycle-free, etc.). As many of these problems are NP-hard, multiple algorithmic results focus on either polynomial time approximations or exact parameterized algorithms. In between these two classes of algorithms lies the class of parameterized approximation algorithms. These algorithms aim to provide approximation ratios which cannot be attained in polynomial time. They operate within a parameterized running time, which is faster than the exact, parameterized state-of-the-art. 
In this paper we explore how existing exact parameterized algorithms and polynomial time approximation algorithms can be used together with sampling steps to derive efficient parameterized approximation algorithms. Informally, a sampling step with success probability q∈ (0,1) is a polynomial time algorithm which, given an input graph G=(V,E), returns a random vertex v∈ V. The vertex v should satisfy, with probability q or more, that removing v from G reduces its optimum (i.e., the number of vertices one needs to remove from the graph for it to satisfy the property) by 1. As we show in this paper, such algorithms can be easily obtained for various vertex deletion problems. Our technique, Sampling with a Black Box, applies the sampling step t times and subsequently uses the existing parameterized/approximation algorithms to complete the solution. The whole process is executed sufficiently many times so that a β-approximate solution is found with a constant probability. The main novelty of our work lies in the analysis of the framework, involving tail bounds for binomial distribution and optimization of the parameters used by the technique. Sampling with a Black-Box is applicable to a wide collection of vertex deletion problems. We show sampling steps exist for every vertex deletion problems, for which the property can be described by a finite set of forbidden vertex induced hypergraphs, such as d, ℓ and . We further provide sampling steps for () and , where the set of forbidden hypergraphs is infinite. Moreover, even though our setting doesn't explicitly allow it, the results developed in this paper also apply to some problems in the directed graph setting. In particular, we show that there exists a sampling step for . Thus, the technique is applicable for each of these problems. We compare Sampling with a Black Box to existing benchmarks. * In <cit.>, Jana, Lokshtanov, Mandal, Rai, and Saurabh developed a parameterized β-approximation for () for every 1<β<2. The objective in is to remove a minimum number of vertices from an undirected graph, so the remaining graph does not contain cycles. Their approach relies on the same ingredients as ours: utilize the state of art parameterized and approximation algorithms in conjunction with a variant of the well known randomized branching rule of Becker et al. <cit.>. Similar to <cit.>, we utilize the randomized branching rule of <cit.> to derive a sampling step with success probability 1/4. We use Sampling with a Blackbox together with this sampling step to attain a parameterized β-approximation for , for every 1<β<2. Though we use the same core principles, we attain a faster running time for every approximation ratio between 1 and 2 – see comparison in <Ref>. The improved running time stems from our tighter analysis and careful selection of parameters, which also provides flexibility in the design of the sampling steps. For example, to attain a parameterized 1.1-approximation for , the authors of <cit.> use ideas from <cit.> to derive a simple algorithm which randomly picks an edge in the graph G such that at least one of its endpoints is in a minimum solution with probability 1/2. If the selected edge satisfies this property we refer to it as correct. In their scheme, every time an edge is picked, both its endpoints are added to the solution, reducing the optimum by one while increasing the solution size by two, assuming the picked edge is correct. The analysis in <cit.> only considers the case in which the picked edges are always correct. 
This allows for a simple analysis and renders the parameter selection straightforward. Keeping our focus on a 1.1-approximation, the algorithm has to pick 0.1· k edges before it invokes the state-of-art parameterized algorithm with the parameter k'=0.9· k. The success probability of the above – the probability that all picked edges are correct and therefore a 1.1-approximate solution is attained - is 0.5^0.1· k, and the running time is c^0.9· k· n^(n), where c=2.7 is the running time of the best known exact parameterized algorithm for  <cit.>. This leads to a running time of 2^0.1· k· c^0.9· k≈ 2.62^k. A major limiting factor in the analysis of <cit.> is the focus on the event in which all picked edges are correct. For example, to attain a parameterized 1.1-approximation one can consider picking 0.09k edges (and add both endpoints to the solution), and then invoke the exact parameterized algorithm with k'=0.92k. Now, this procedure finds a 1.1-approximate solution if 0.08k (or more) of the picked edges are correct. A careful analysis shows that the probability of such event is ≈ 0.708^0.09· k≈ 0.969^k. By repeating this procedure 1/0.969^k times, a 1.1-approximate solution is found with a constant probability. The overall running time is (c/0.969)^k· n^(1)≈ 2.57^k· n^(1), which is already an improvement over <cit.>. The above example illustrates the power of a more flexible analysis and a careful selection of parameters such as the number of sampled edges. Furthermore, our approach allows for more powerful sampling steps- instead of picking both endpoints of the selected edges, we can select one at random. Intuitively, the benefit of sampling one vertex at a time is that it increases the variance of the number vertices selected from the optimum, therefore raising the probability of the rare event the analysis is focused on. This change leads to a further improvement to the running time. Overall, we attained a parameterized 1.1-approximation for in time 2.483^k· n^(1). * In <cit.>, Fellows, Kulik, Rosamond and Shachnai provided a generic technique, called Fidelity Preserving Transformations, which can be applied for every vertex deletion problem in which the property can be described by a finite set of forbidden vertex induced subgraphs. Assuming the maximum number of vertices in a forbidden induced subgraph is η, and the problem has an exact parameterized c^k· n^(1) algorithm, the technique of <cit.> yields a β-approximation in time c^η-β/η-1· k · n^(1).[The result in <cit.> was stated for and 3, but can be easily generalized to the stated result.] We prove that the running time we obtain for every problem in the class is always faster than the running time attained by <cit.>, for every 1<β<η. <Ref> provides a comparison between the running times we attain and those of <cit.> for 3, in which the objective is to remove vertices from a graph so the remaining graph does not have a path of 3 vertices. §.§ Related Work The field of parameterized approximation aims to derive approximation algorithms which run in parameterized running time. In the classic setting, the considered problem is one which is not expected to have an exact parameterized algorithm, and further has a hardness of approximation lower bound which indicates that a polynomial time α-approximation is unlikely to exist, for some α>1. 
In such a setting, the objective is to derive a β-approximation algorithm that runs in parameterized running time for β<α, or to show that such an algorithm cannot exist (subject to common complexity assumptions). In recent years, there has been a surge of breakthrough results in the field, providing both algorithms (e.g., <cit.>) and hardness of approximation results (e.g., <cit.>). Within the broad field of parameterized approximations, a subset of works consider problems which are in FPT. For problems in FPT, there is always a parameterized β-approximation algorithm for all β≥ 1, as the exact algorithm is also an approximation algorithm. Therefore, the main focus of these works is on the trade-off between approximation and running time. A prominent approach to attain such parameterized-approximation algorithms relies on using existing exact parameterized algorithms as black-box. In <cit.> the authors showed that an exact parameterized c^k· n^(1) algorithm for implies a parameterized β-approximation in time c^(2-β )k. The same running time has been attained for by Fellows et al. <cit.> through a more generic framework which can be applied to additional problems. Other works focused on directly designing parameterized approximation algorithms. In <cit.> Brankovic and Fernau provided parameterized approximation algorithms for and 3. The designed algorithms are branching algorithms which involve branching rules aimed for approximation and interleave approximative reduction rules. The works provide, among others, a parameterized 1.5-approximation for in time 1.0883^k· n^(1) and a parameterized 2-approximation for 3 in time 1.29^k· n^(1). In <cit.>, Kulik and Shachnai showed that the use of randomized branching can lead to significantly faster parameterized-approximation algorithms. In particular, they attained a parameterized 1.5-approximation for in time 1.017^k · n^(1) and a parameterized 2-approximation for 3 in time 1.0659^k· n^(1). The idea in randomized branching is that the algorithm picks one of the branching options at random. Subsequently, a good approximation is attained as long as the randomly picked options are not too far from the correct options. The branching rules used by <cit.> are fairly involved in comparison to the sampling steps used by this paper and their analysis required a non-trivial mathematical machinery. While <cit.> showed that randomized branching is a powerful technique for the design of parameterized-approximation algorithms, it has only been applied to a limited set of problems. In <cit.>, the authors provided applications of the technique for and 3. Furthermore, for approximation ratios close to 1, the randomized-branching algorithms are inferior to the exact algorithms in terms of running times (or to the combination of those with <cit.>). A central goal of this paper is to overcome some of these difficulties: Sampling with a Black Box harnesses the power of randomized branching, it is applicable for a wide set of problems, and always improves upon the running times attained using the best known exact algorithms in conjunction with <cit.>. As already mentioned, in <cit.>, the authors developed parameterized approximation algorithms for using the combination of randomized branching and existing algorithms as black-box. Their work indeed served as a motivation for this paper. We point out that the approach in <cit.> is restricted for and the resulting running times are inferior to ours. 
The concept of randomly sampling a partial solution, which is subsequently extended using a parameterized algorithm, is central to the monotone local search technique of Fomin, Gaspers, Lokshtanov and Saurabh <cit.>. Later variants of this technique are designed to obtain exponential time approximation algorithms <cit.>. They also use the same basic argument, which states that sampling a partial solution not too far from the optimum suffices to attain an approximate solution with high probability. Organization Section <Ref> gives several standard definitions used throughout the paper. In <Ref> we formally state the results of the paper. <Ref> lists applications of the technique for several problems. Additional applications are given in <Ref>. <Ref> provide our main algorithm together with its proof of correctness. In <Ref> we provide a simpler formula for the running time Sampling with a Black Box for several cases. In <Ref> we derive sampling steps for several problems. <Ref> provides the proof that the running time of Sampling with a Black Box is faster than Fidelity Preserving Transformations. Finally, we discuss our results in <Ref>. For the sake of presentation, we include the proofs of some claims in the appendix. § PRELIMINARIES In this section, we present several standard concepts and notations used throughout this paper. Graph Notations. Given an hypergraph G=(V,E) we use V(G)=V and E(G)=E to denote the sets of vertices and hyperedges of the graph (respectively). Vertex Induced Subhypergraphs and Vertex Deletion. Given a a hypergraph G=(V,E) and a subset of vertices U⊆ V , the vertex induced subhypergraph of G and U is the hypergraph G[U] = (U,E') where E' {e ∈ E | e ⊆ U}. We also define the vertex deletion of U from G by G∖ U = G[V∖ U]. For a single vertex u∈ V(G) we use the shorthand G∖ v = G∖{v}. Hypergraph Properties. A hypergraph property is a set Π of hypergraphs such that for every G∈Π and G' isomorphic to G it holds that G'∈Π as well. A hypergraph property Π is called hereditary if for every G∈Π and U⊆ V(G) it holds that G[U]∈Π as well. Furthermore, we say that Π is polynomial-time decidable if given a hypergraph G we can decide in polynomial-time whether G ∈Π or not. We assume that all hypergraph properties discussed in this paper are hereditary and polynomial-time decidable; that is by saying the Π is a hypergraph property we also mean that it is hereditary and polynomial-time decidable. A closed set of hypergraph. We say that a set of hypergraph is closed if every vertex induced subhypergraph of a graph in is also in . That is, for every G∈ and S⊆ V(G) it also holds that G[S] ∈. Kullback-Leibler Divergence Given two number a,b∈ [0,1], the Kullback-Leibler divergence of a and b is ab = a·ln(a/b)+(1-a)·ln(1-a/1-b). We follow that standard convention that 0·ln 0 =0·ln0/x =0, which implies 1b= 1·ln( 1/b) + (1-1) ·ln(1-1/1-b) = -ln(b). § OUR RESULTS In this section we formally state our results, starting with some formal definitions. The definitions of vertex deletion problems and parameterized approximation algorithms are given in <Ref>. <Ref> provides the definition of sampling steps. Then, we state our main result in <Ref>. The comparison of our results to those of <cit.> is finally given in <Ref>. §.§ Vertex Deletion Problems The focus of this paper is the class of problems. For any hypergraph property Π and a closed set of hypergraphs 𝒢, the input for ( for short) is a hypergraph G ∈𝒢. A solution is a subset of vertices S⊆ G(V) such that G∖ S∈Π. 
We use _Π(G) = { S⊆ G(V) | G∖ S∈Π} to denote the set of all solutions. The objective is to find a smallest cardinality solution S ∈_Π(G). In the decision version of the problem, we are given a hypergraph G ∈𝒢, an integer k ≥ 0 and we are asked whether there exists a set S ∈_Π(G) such that S≤ k. The family of problems include many well known problems. Some notable examples are: * . In this case 𝒢 corresponds to graphs, i.e. hypergraphs with edge cardinality exactly 2. The hypergraph property Π consists of all edgeless graphs. * on graphs of degree at most 3. Similar to the case above, in this case 𝒢 corresponds to graphs of degree at most 3. Π again consists of all edgeless graphs. * d. Similar to Vertex Cover, in this case 𝒢 corresponds to hypergraphs with edge cardinality exactly d. Π also consists of all edgeless hypergraphs. * . In this case 𝒢 is the set of all graphs and Π is the set of graphs that have no cycles. We use _,Π(G) to denote the size of an optimal solution for G∈ with respect to the problem. That is, _,Π(G) = min{S | S ∈_Π(G) }. If and Π are known by context we use instead of _,Π. Our goal is to develop parameterized approximation algorithms for problems, where the parameter is the solution size. Such algorithms return a solution of size α· k or less (with probability at least 1/2) if the optimum of the instance is at most k Let Π be a hypergraph property and 𝒢 be a closed set of hypergraphs. An algorithm 𝒜 is a randomized parameterized α-approximation algorithm for if it takes a graph G ∈𝒢 and an integer k ≥ 0 as input, and returns a solution S∈_Π(G) which satisfies the following. * If _,Π(G) ≤ k, then ( S≤α· k ) ≥1/2. Moreover, the running time of 𝒜 is f(k) · n^(1) for some function f. We note that the above definition of parameterized approximation algorithms captures the classic definition of exact parameterized algorithms (α=1) and polynomial time approximation algorithms (f(k) = O(1)) as special cases. Sampling with a Black-Box converts a randomized parameterized α-approximation algorithm 𝒜 which runs in time c^k· n^(1) to a randomized parameterized β-approximation algorithm ℬ which runs in time d^k· n^(1). The conversion relies on a simple problem dependent sampling step. In all our application the algorithm 𝒜 is either the state-of-art exact parameterized algorithm (i.e., α=1), or the state-of-art polynomial time algorithm (i.e., c=1) for the problem. §.§ Sampling Steps We exploit inherent properties of a specific problem to come up with basic sampling strategies. From a high-level perspective, a sampling step is a polynomial-time algorithm that takes as input a hypergraph G ∈𝒢∖Π and returns a vertex v ∈ V(G), such that the size of the optimal solution of G∖ v decreases by 1, with a prescribed probability. Let be a closed set of hypergraph, Π be a hypergraph property and q∈ (0,1). A sampling step with success probability q is a polynomial time randomized algorithm ℛ that takes as input a hypergraph G ∈𝒢∖Π and returns a vertex v ∈ V(G) such that ( _Π(G ∖ v) ≤_Π(G) - 1 ) ≥ q. For example, consider , which is a problem where 𝒢 is the set of all graphs and Π consists of all edgeless graphs. The following is a very simple sampling step for with success probability 1/2: pick an arbitrary edge and return each of its endpoints with probability 1/2. This algorithm clearly runs in polynomial time. Moreover, for each S of G and for each edge e, at least one of the endpoints of e belongs to S. Therefore, its a sampling step with success probability q=1/2 for . 
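The Vertex Cover sampling step just described is short enough to state as code. The snippet below is an illustration added here; it also shows the effect of iterating the step on a toy 4-cycle, removing each sampled vertex until the remaining graph is edge-free (whether the resulting cover is small is exactly what the success probability controls).

```python
import random

def vc_sampling_step(edges):
    """Sampling step for Vertex Cover with success probability 1/2: pick an arbitrary
    edge and return one of its endpoints uniformly at random (every vertex cover
    contains at least one endpoint of every edge)."""
    u, v = edges[0]
    return random.choice((u, v))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # toy instance: a 4-cycle
solution = []
while edges:
    v = vc_sampling_step(edges)
    solution.append(v)
    edges = [e for e in edges if v not in e]
print("vertex cover found:", solution)
```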
We provide sampling steps for the general case where Π is defined by a finite set of forbidden subgraphs, for and for . On their own, sampling steps can be easily used to derive parameterized approximation algorithms. To obtain a β-approximation, one may use the sampling step β· k times, where each execution returns a vertex v which is removed from the graph and added to the solution S. After β· k steps, with some probability P, the set S of returned vertices is indeed a solution. Thus, by repeating this multiple sampling step for 1/P times, one gets a β-approximate solution with probability 1/2. By carefully tracing the probability in the above argument, we show that P≈(exp(β·1/βq) )^-k, as demonstrated by the following lemma.[Recall ·· stands for the Kullback-Leibler divergence which has been defined in <Ref>.] Let 𝒢 be a closed set of hypergraphs and Π be an hypergraph property such that there is a sampling step with success probability q for . Then for every 1≤β≤1/q there is a randomized parameterized β-approximation for which runs in time d^k· n^(1) where d=exp(β·1/βq). The proof of <ref> can be found in <ref>. Observe that exp(1/q·1/(1/q)q)= 1 for every q∈ (0,1). Thus, <Ref> provides a polynomial time 1/q-approximation algorithm for . This justifies the restriction of the lemma to β≤1/q. While <Ref> provides a parameterized β-approximation algorithm for essentially any β, the resulting algorithms are often clearly far from optimal. For example, for () we provide a sampling step with success probability 1/4. Thus, by <Ref> we get a parameterized 1.1-approximation for which runs in time ≈ 2.944^k · n^(1). However, has an exact parameterized algorithm which runs in time 2.7^k· n^(1). That is, the running time of the approximation algorithm is slower than that of the exact algorithm. Our goal is to combine the sampling step with the exact algorithm to attain improved running time in such cases. §.§ Sampling with a Black-Box Our main theorem states the following: a sampling step with success probability q, together with a parameterized α-approximation algorithm 𝒜 which runs in time c^k· n^(1), can be used to obtain a β-approximation algorithm ℬ. By <Ref>, the sampling step can be used to obtain a randomized parameterized α-approximation which runs in time ( exp(α·1/αq))^k· n^(1). Our assumption is that 𝒜 is at least as fast as the algorithm provided by <Ref>, hence we only consider the case in which c≤exp(α·1/αq). We use the following functions to express the running time of ℬ. Define two function (α,c,q) and (α,c,q) as the unique numbers (α,c,q)∈(1,α] and (α,c,q)∈[α,∞) which satisfy 1/α1/(α,c,q) = 1/α1/(α,c,q)= 1/αq - ln(c)/α. We write = (α,c,q) and = (α,c,q) if the values of α, c and q are clear from the context. The following lemma provides conditions which guarantee that and are well defined in certain domains. lemmasdeltawelldefined For every c≥ 1, 0<q<1 and α≥ 1 such that c≤exp(α·1/αq), the value of (α,c,q) is well defined. Furthermore, if α>1, then (α,c,q) is also well defined. The proof of <ref> can be found in <Ref>. We note that 1/α1/x is monotone in the domains x∈ (1,α] and x∈ [α,∞). Hence the values of (α,c,q) and (α,c,q) can be easily evaluated to arbitrary precision using a simple binary search. We use and to express the running time of ℬ. 
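Both remarks above can be checked numerically. The sketch below (an illustration added here) first evaluates the base exp(beta * D(1/beta || q)) of the lemma for the Feedback Vertex Set example (beta = 1.1, q = 1/4), reproducing the 2.944 figure, and then evaluates the two quantities defined above by the binary search suggested in the text; as a test case it takes alpha = 2, c = 1 and q = 1/4, i.e. a polynomial-time 2-approximation used as the black box together with the Feedback Vertex Set sampling step.

```python
import math

def dkl(a, b):
    """Bernoulli Kullback-Leibler divergence D(a||b), with the convention 0 ln 0 = 0."""
    t = lambda x, y: 0.0 if x == 0 else x * math.log(x / y)
    return t(a, b) + t(1 - a, 1 - b)

# Base of the lemma for FVS with beta = 1.1 and q = 1/4 (the ~2.944 quoted above).
print("exp(beta*D(1/beta||q)) =", math.exp(1.1 * dkl(1 / 1.1, 0.25)))

def thresholds(alpha, c, q, iters=200):
    """The two quantities defined above: the solutions x of D(1/alpha || 1/x) =
    D(1/alpha || q) - ln(c)/alpha in (1, alpha] and in [alpha, inf), found by binary
    search using the monotonicity of the left-hand side on each range (alpha > 1 assumed)."""
    target = dkl(1 / alpha, q) - math.log(c) / alpha
    g = lambda x: dkl(1 / alpha, 1 / x)

    lo, hi = 1.0, alpha                       # g decreases from +inf to 0 on (1, alpha]
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > target else (lo, mid)
    small = hi

    lo, hi = alpha, 2 * alpha                 # g increases from 0 on [alpha, inf)
    while g(hi) < target:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < target else (lo, mid)
    return small, hi

# Black box: polynomial-time 2-approximation (alpha = 2, c = 1); FVS sampling step (q = 1/4).
print(thresholds(2.0, 1.0, 0.25))             # comes out as (4/3, 4)
```

The smaller of the two values equals 1/(1-q) = 4/3 here, which also matches the threshold appearing in the closed form for alpha = 2 and c = 1 stated below.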
For every β,α≥ 1, 0< q ≤ 1 and c≥ 1 such that c≤exp(α·1/αq), we define c ·exp(·1/q- ln(c) / - α·( β - α) ) if (α, c, q) > β≥α c ·exp(·1/q- ln(c) / - α·( β - α) ) if (α, c, q) < β≤α exp(β·1/βq) otherwise As we can compute and , it follows we can also compute to arbitrary precision. Recall that <Ref> provides a polynomial time 1/q-approximation algorithm for . Consequently, we restrict our consideration to α and β values that are less than or equal to 1/q. Our main technical result is the following. [Sampling with a Black-Box]theoremsamplingblackbox Let be a closed set of hypergraphs and Π be a hypergraph property. Assume the following: * There is a sampling step with success probability q∈(0,1) for . * There is a randomized parameterized α-approximation algorithm for which runs in time c^k· n^(1) for some c ≥ 1, 1 ≤α≤1/q and c≤exp(α·1/αq). Then, for every 1≤β≤1/q, there is a randomized parameterized β-approximation for which runs in time ( )^k· n^(1). The proof of <Ref> is given in <Ref>. For example, by <Ref>, we can use the sampling step with success probability 1/4 for , together with the exact parameterized algorithm for the problem which runs in time 2.7^k· n^(1) <cit.>, to get a randomized parameterized 1.1-approximation algorithm with running time 2.49^k · n^(1). The algorithm achieves a better running time than that of the exact parameterized algorithm, as well as the running time which can be attain solely by the sampling step, i.e. 2.944^k· n^(1) (<Ref>). We note that this running time is also superior to the running time of 2.62^k· n^(1) for the same approximation ratio given in <cit.>. The running time of the algorithm generated by <Ref> (i.e., the value of ) can always be computed efficiently, though this computation requires a binary search for the evaluation of and . For the special cases of α =1 and well as (α=2 and c=1) we provide a closed form expression for . [simple formula for α=1]theoremsimpslaphaone For every 0 < q < 1, 1 ≤β≤1/q and c ≥ 1 such that c ≤exp(1·1/1q) = 1/q it holds that [1,β,c,q] = c ·( 1 - c · q/1 - q)^β - 1 if 1 ≤β < 1/q · c exp( β·1/βq) if 1/q · c≤β≤1/q [simple formula for α =2 and c=1]theoremsimplealphatwo Let 0 < q ≤1/2 and 1 ≤β≤ 2. Then it holds that [2,β,1,q] = exp( β·1/βq) if 1 ≤β≤1/1 - q ( q/1 - q)^β - 2 if 1/1 - q < β≤ 2 Both <Ref> follow from a closed form formula which we can attain for or  for the specific values of α and c considered in the theorems. The proof of <Ref> is given in <Ref>. §.§ Comparison to Fidelity Preserving Transformations We compare our technique, Sampling with a Black-Box, with the technique of <cit.> for problems in which Π is defined by a finite set of forbidden vertex induced hypergraphs. Let Ω = {F_1, …, F_ℓ} be a finite set of hypergraphs for ℓ > 0. Then Π^Ω is the hypergraph property where a hypergraph G belongs to Π^Ω if and only if there is no vertex induced subhypergraph X of G such that X is isomorphic to F_i, for some 1 ≤ i ≤ℓ. We note that Π^Ω is always hereditary and polynomial-time decidable. We also note the the family of graphs properties defined by a finite set of forbidden hypergraph suffices to define many fundamental graph problems such as , d, ℓ and . For a set of hypergraphs Ω = {F_1, …, F_ℓ}, define η(Ω) max_1 ≤ i ≤ℓV( F_i ), the maximal number of vertices of a hypergraph in Ω. The following result has been (implicitly) given in <cit.>. Let Ω be a finite set of hypergraphs and let be a closed set of hypergraphs. 
Furthermore, assume there is an randomized exact parameterized c^k · n^(1) algorithms for [, Π^Ω]. Then for every 1≤β≤η(Ω) there is a randomized parameterized β-approximation algorithm for [, Π^Ω] which runs in time c^η(Ω)-β/η(Ω)-1· k· n^(1). There is a simple and generic way to design a sampling step for Π^Ω. The sampling step finds a set of vertices S⊆ V(G) of the input graph such that G[V] is isomorphic for a graph in Ω, and returns a vertex from S uniformly at random. This leads to the following lemma. Let Ω be a finite set of hypergraphs and let be a closed set of hypergraphs. There is a sampling step for [𝒢, Π^Ω] with success probability 1/η(Ω). A formal proof for <Ref> is given in <Ref>. Together with <Ref> the lemma implies the following. Let Ω be a finite set of hypergraphs and be a closed set of hypergraphs. Furthermore, assume there is a randomized exact parameterized c^k · n^(1) algorithms for [, Π^Ω]. Then for every 1≤β≤η(Ω) there is a randomized parameterized β-approximation algorithm for [, Π^Ω] which runs in time ([1,β,c,1/η(Ω)] ) ^k· n^(1). Observe that <Ref> and <Ref> only differ in the resulting running time. The following lemma implies that for every 1<β<η(Ω) the running time of <Ref>, the running time of Sampling with a Black-Box, is always strictly better than the running time of <Ref>, the result of <cit.>. lemmafidelitycomparison For every η∈ℕ such that η≥ 2, 1<c<η and 1<β <η it holds that [1,β,c,1/η] < c^η-β/η-1. The proof of <Ref> is given in <ref>. § APPLICATIONS In this section we will describe some problems to which our results can be applied. For each problem, we utilize sampling steps to obtain parameterized approximation algorithms. We also compare the running time of our algorithm with a benchmark whenever applicable. §.§ Recall that given a graph G and integer k, the problem asks whether there exists a set S ⊆ V(G) of size at most k such that G ∖ S is acyclic. can also be described as a problem, where 𝒢 is the set of all graphs and Π is the set of graphs that have no cycles. First, we demonstrate that there exists a sampling step for . has a sampling step with success probability 1/4. The proof of <ref> can be found in <ref>. The sampling step presented in <ref> begins by removing vertices of degree at most 1. Then, a vertex is sampled from the remaining vertices, where the sampling probability for each vertex is proportional to its degree in the remaining graph. In <cit.>, the authors present an FPT algorithm which runs in time 2.7^k· n^(1) (i.e., in the terminology of this paper, α = 1, c = 2.7). Moreover, also has a 2-approximation algorithm that runs in polynomial time (i.e., α = 2, c = 1) <cit.>. In the following, we demonstrate how the sampling step, together with the existing algorithms, is used to develop a new approximation algorithm with a better running time. For each 1 ≤β≤ 2, has a β-approximation algorithm which runs in time d^k· n^(1) where d = 2.7 ·( 1.3/3) ^β - 1 if 1 ≤β < 1.402 (1/3)^β - 2 if 1.402 ≤β≤ 2 . Let 𝒜_1 denote the FPT algorithm from <cit.> and 𝒜_2 denote the 2-approximation algorithm from <cit.> that runs in polynomial time. Note that when we consider β-approximation algorithms, we can focus on the values of β in the range 1 < β≤ 2 because of 𝒜_2, as for larger values of β we immediately get a β-approximation algorithm that runs in polynomial time. By using 𝒜_1 and <ref>, the first β-approximation algorithm we obtain has the running time d^k· n^(1) where d = 2.7 ·(0.433) ^β - 1 if 1 ≤β < 1.481 exp( β·1/β1/4) if 1.481 ≤β≤ 2. 
Similarly, by using 𝒜_2 and <ref>, the second β-approximation algorithm we obtain has the running time d^k· n^(1) where d = exp( β·1/β1/4) if 1 ≤β < 1.333 (1/3)^β - 2 if 1.333 ≤β≤ 2 . Note that for each 1 < β, we can choose the algorithm with the faster running time out of (<ref>) and (<ref>). Therefore we compare the base of the exponents in the running time and pick the smallest one. After a straightforward calculation, one can observe that for 1 ≤β < ln( 13/9) /ln( 1.3 ) ≈ 1.402, 2.7 ·( 1.3/3) ^β - 1 is the smallest number. Similarly, for 1.402 ≤β≤ 2, the smallest number becomes (1/3)^β - 2. Therefore, for each 1 < β < 2, we obtain an algorithm with a a running time of d^k· n^(1) where d = 2.7 ·( 1.3/3) ^β - 1 if 1 ≤β < 1.402 (1/3)^β - 2 if 1.402 ≤β≤ 2 . In <cit.>, the authors present a β-approximation algorithm for each 1 < β≤ 2. It can be visually (see <ref>) and numerically (see <ref>) verified that our algorithm demonstrates a strictly better running-time . §.§ Given a graph G and integer k, the () problem ask whether there exists a set S ⊆ V(G) of size at most k such that G ∖ S has pathwidth at most 1. Initially, we demonstrate that there exists a sampling step for . has a sampling step with probability 1/7. The proof of <ref> can be found in <ref>. The sampling step for is very similar to that for , with a slight modification. Similar to , can be described by a set of forbidden subgraphs. Moreover, the set of forbidden subgraphs for includes one additional graph with 7 vertices. Therefore has a sampling step with probability 1/7, instead of 1/4 as in the case of . To the best of our knowledge, parameterized approximation algorithms have not been studied for . However, there exists an FPT algorithm for with running time 3.888^k· n^(1) (α = 1, c = 3.888) <cit.>. Next, we demonstrate how combining the aforementioned sampling step with a parameterized algorithm yields a new approximation algorithm. Refer to <ref> for the corresponding running time. For each 1 ≤β≤ 7, has a β-approximation algorithm which runs in time d^k· n^(1) where d = 3.888 ·(0.519)^β - 1 if 1 ≤β≤ 1.8 exp( β·1/β1/7) if 1.8 < β < 7. Let 𝒜 be the FPT algorithm from <cit.>, with running time 3.888^k· n^(1). By using 𝒜 and <ref>, we obtain a β-approximation algorithm with running time d^k· n^(1) where d = 3.888 ·(0.519)^β - 1 if 1 ≤β≤ 1.8 exp( β·1/β1/7) if 1.8 < β < 7. §.§ (𝒢,Π)-Vertex Deletion for a finite set of forbidden sub-hypergraphs There are many problems that can be described as problems in which Π is defined by a finite set Ω of forbidden vertex induced hypergraphs. For each of those problems, by <ref> there exists a sampling step with success probability 1/η, where η is the maximum number of vertices of a hypergraph in Ω. In the following, we will demonstrate how we can obtain parameterized approximation algorithms for such problems. For the sake of presentation, we will focus on a specific problem called 3. Given a graph G, a subset of vertices S ⊆ V(G) is called an if every path of length ℓ contains a vertex from S. The ℓ problem asks whether there exists an of size at most k where k is the parameter <cit.>. ℓ can be described as a where 𝒢 is the set of graphs and Π is the set of graphs with maximum path length at most ℓ - 1. Alternatively, let F be a path with ℓ vertices where we define Ω{F} and η(Ω) ℓ. It holds that ℓ is equivalent to [𝒢, Π^Ω]. Therefore, by <ref>, there is a sampling step for ℓ with success probability 1/ℓ. In the following, we will consider ℓ = 3, i.e. the problem 3. 
There exists an FPT algorithm for that runs in time 1.708^k· n^(1) (α = 1, c = 1.708) <cit.>. Moreover, there is also a 2-approximation algorithm that runs in polynomial time. (α = 2, c = 1) <cit.>. For each 1 < β < 2, 3 has a β-approximation algorithm which runs in time d^k· n^(1) where d = 1.708 ·(0.644)^β - 1 if 1 ≤β < 1.6143 (0.5)^β - 2 if 1.6143 ≤β≤ 2 Let 𝒜_1 denote the FPT algorithm from <cit.> and 𝒜_2 denote the 2-approximation algorithm from <cit.> that runs in polynomial time. Because of 𝒜_2, as in the case of , we can focus on the values of β in the range 1 ≤β≤ 2. By using 𝒜_1 and <ref>, it holds that for each β > 1 there exists a β-approximation that runs in time d^k· n^(1) where d = 1.708 ·(0.644)^β - 1 if 1 ≤β < 1.752 exp( β·1/β1/3) if 1.752 ≤β≤ 2. By using 𝒜_2 together with <ref>, it holds that for every 1 < β < 2 there exists a β-approximation algorithm that runs in time d^k· n^(1) where d = exp( β·1/β1/3) if 1 < β < 1.5 (0.5)^β - 2 if 1.5 ≤β < 2. By taking the minimum of the running times in (<ref>) and (<ref>), the base of exponent becomes d = 1.708 ·(0.644)^β - 1 if 1 < β < 1.6143 (0.5)^β - 2 if 1.6143 ≤β < 2 In <cit.>, for each 1 ≤β≤ 2, the authors present a β-approximation algorithm for 3 with running time 1.708^3-β/2· k · n^(1). As can be visually (see <ref>) or numerically (see <ref>) verified, our algorithm has a strictly better running time for all values of 1 < β≤ 2 . § SAMPLING WITH A BLACK BOX In this section we present our main technique, Sampling with a Black Box, and prove <Ref>. The technique is designed using three main components, enabling a modular analysis of each part. We use the notion of δprT to abstract the outcome of iteratively executing a sampling step. We use this abstract notion in which combines the (δ,p,r,T)-procedure together with the black box parameterized α-approximation algorithm. On its own, only attains a β-approximate solution with a low probability. Our main algorithm, , executes multiple times to get a β-approximate solution with a constant probability. The defined algorithm depends on a parameter δ, for which we find the optimal value. We start with the formal definition of a δprT. As already mentioned, a procedure serves as an abstraction of iterative use of a sampling step. It returns a vertex set S ⊆ V(G) with certain properties related to the size of S and the value of (G∖ S). Let Π be a hypergraph property and 𝒢 be a closed set of hypergraphs. For all δ≥ 1, r ≥ 0, 0 < p ≤ 1 and T≥ 0, a δprT for (𝒢,Π) is a polynomial time randomized algorithm that takes as input a hypergraph G ∈𝒢, an integer t ≥ 0 and returns a set S ⊆ V(G) with the following properties: * It holds that S≤δ· t. * Suppose that _,Π(G) ≤ k for some k ≥ 0. If t ≥ T, then with probability at least p^t/(t+1)^r it holds that G ∖ S has a solution of size at most max( 0, k -t ), i.e. ( _,Π(G∖ S) ≤max( 0, k -t ) ) ≥p^t/(t+1)^r. Additionally, we use the notation δp to refer to a δprT for some constants r,T ≥ 0. Observe that if there is a δp for , then we can make use of it to obtain a δ approximation as follows. Suppose G has a solution of size k. Then, by setting t = k in <Ref>, it holds that with probability at least p^k/(k + 1)^r, G∖ S has a solution of size 0, i.e. G∖ S ∈Π . By our assumption, we can check in polynomial time whether a graph belongs to the property Π. By <Ref> it holds that S≤δ· k. Additionally, we can repeat this algorithm p^-k· n^(1) times to obtain a δ-approximate solution constant probability. We summarize these insights in the following observation. 
If there is a δp for , then there is parameterized δ-approximation algorithm for with running time (1 / p)^k· n^(1). In the remainder of <ref>, we fix the values of 0 < q ≤ 1, 1 ≤α≤1/q, 1 ≤β≤1/q and 1 ≤ c ≤exp( α·1/αq) to specific numbers, unless specified explicitly. For notational simplicity, we omit α, β, and c from the subscript of functions dependent on these variables. Moreover, let Π be a fixed polynomial-time decidable hypergraph property, 𝒢 be a closed set of hypergraphs and be an α-approximation algorithm for the problem with running time c^k· n^(1). Let us also define the set of values δ can take, given α and β. For α, β≥ 1 such that α≠β, we define the set as [ β, ∞ ) if β > α [1,β] if β < α. The next component in our technique is , given in <Ref>. The algorithm is configured with a δp δp. Given an hypergraph G and an integer t in the input, the algorithm invokes the procedure δp which returns a random set S ⊆ V(G) of size at most δ· t, and then runs the parameterized α-approximation algorithm on the remaining hypergraph G ∖ S. The idea is to hope that G ∖ S has a solution of size at most β· k - δ· t/α. Note that the parameter for the α-approximation algorithm is also β· k - δ· t/α, which ensures that the approximation algorithm returns a set of size at most β· k - δ· t, with high probability. In the event that this holds, by adding S to the returned set, we obtain a solution with a size of at most β· k. Our main algorithm, given in <Ref>, begins by selecting a value for t^*. This value ensures the following: if the set S in <ref> contains at least t^* many elements from a solution, then G ∖ S has a solution of size at most β· k - δ· t^*/α. Note that this further implies that the set returned by <ref> has size at most β· k. Furthermore, <ref> utilizes <ref> and executes it multiple times to ensure a β-approximate solution is attained with a constant probability. Let 0 < p ≤ 1, δ∈ and a δprT δp for (𝒢, Π). Then <ref> is a randomized parameterized β-approximation algorithm for with running time f( δ, p ) ^k· n^(1) where f(δ,p) is given by f(δ, p) exp( (δ - β) ·ln(c) + (β - α) ·ln( 1/p) /δ - α). The proof of <ref> can be found in <ref>. <Ref> relies on the existence of a δp, however, insofar we did not show how to design one. We generate a δp from a sampling step (<Ref>) via a simple algorithm which iteratively invokes the sampling step. The pseudo-code of the algorithm is given in <Ref>. Define ϕ(δ, q) exp( -δ·1/δq). The following lemma state that <Ref> is indeed a δp for p = ϕ(δ,q). Let 0 < q ≤ 1 and ℛ be a sampling step for with success probability q. Then, for any 1 ≤δ≤1/q, <ref> is a δϕ( δ, q ) for . The proof of <ref> can be found in <ref>. <ref> implies that for δ = 1/q, there exists a 1/q1 for . Observe that this serves as a δ1 for δ > 1/q. Therefore, we make the following observation. Let 0 < q ≤ 1 and ℛ be a sampling step for with success probability q. Then, for any δ > 1/q, there is a δ1 for . Furthermore, we can now prove <Ref> using <Ref> together with <Ref>. Assume there is a sampling step with success probability q for , and let 1≤β≤1/q. Then by <Ref> there is a βϕ(β,q) for . Hence, by <Ref> there is a parameterized β-approximation for which runs in time d^k· n^(1) where d=1/ϕ(β,q) = exp(β·1/βq). Using the procedure from <Ref> together with <Ref> (<Ref>) we get the following results. Suppose that: * There is a sampling step with success probability q∈(0,1) for . * There is a randomized parameterized α-approximation algorithm for which runs in time c^k· n^(1). 
Then, there is a randomized parameterized β-approximation algorithm for with running time (min_δ∈∩ [1, 1/q]f̃ (δ, q))^k· n^(1) where f̃(δ ,q) is defined as f̃ (δ, q) f(δ,ϕ(δ,q) )=exp( (δ - β) ·ln(c) + (β - α) ·ln( 1/ϕ(δ, q)) /δ - α). <ref> provides a β approximation algorithms whose running time is a solution for an optimization problem. The final step towards the proof of <Ref> is to solve this optimization problem. It holds that min_δ∈∩ [1, 1/q]f̃ (δ, q) = . The proofs of <ref> can be found in <ref>. We now have everything to proceed with the proof of <ref>. The claim follows immediately from <ref>. §.§ Converting Procedures to Approximation Algorithms In this section we will prove <ref>. To accomplish this, we will examine some properties of <ref>. First, we will show that <ref> indeed returns a solution W ∈_Π(G). Subsequently, we will establish that with a certain probability, the set returned has size at most β· k. Equipped with these results, we will proceed with the proof of <ref>. Let G ∈𝒢 be a hypergraph, and 0 ≤ k ≤ n, T ≤ t ≤β/δ· k be integers. Let W denote the set returned by (G,k,t), then it holds that W ∈_Π(G). Let S be as defined in <ref>. Since is a parameterized α-approximation algorithm, by <Ref>, Y is a solution of G ∖ S, i.e. Y ∈_Π(G ∖ S). Equivalently, it holds that ( G ∖ S ) ∖ Y ∈Π. Since G ∖( Y ∪ S ) = (( G ∖ S ) ∖ Y) ∈Π, it also holds that W = (Y ∪ S) ∈_Π(G). Let t^* be as defined in <ref>, then it holds that max(0, β - α/δ - α· k - 1) ≤ t^* ≤β - α/δ - α· k + 1. Moreover, it also holds that t^* ≤β/δ· k. First, notice that for δ∈, it holds that either δ > β > α or δ < β < α. Hence, the term β - α/δ - α is always positive. By definition of t^*, we also have max(0, β - α/δ - α· k - 1) ≤⌊β - α/δ -α· k ⌋≤ t^* ≤⌈β - α/δ -α· k ⌉ < β - α/δ -α· k + 1. If β > α, then we have t^* = ⌊β - α/δ -α· k ⌋≤β - α/δ -α· k < β/δ· k where the last inequality is true because α < β < δ. Similarly, if β < α, then δ < β as well and we have t^* = ⌈α - β/α - δ· k ⌉ < ⌈α - β/α - β· k ⌉ = k < β/δ· k. Let G ∈𝒢 be a hypergraph, 0 ≤ k ≤ n be an integer and let t^* be as defined in <ref> such that t^* ≥ T. Moreover, let Z be the set returned by (G,k,t^*). If _,Π(G) ≤ k, then Z≤β· k with probability at least p^t^*/2 · (t^* + 1)^r. Suppose _,Π(G) ≤ k. Observe that <ref> implies 0 ≤ t^* ≤β/δ· k. We have k - t^* = β/α· k + ( 1 - β/α) · k -δ· t^*/α - ( t^* - δ· t^*/α) = β· k - δ· t^*/α + ( 1 - β/α) · k + t^* ·(δ - α/α). It holds that t^* ·δ - α/α≤ -( 1 - β/α)· k. Recall that for (δ) ∈, it holds that either δ > β > α or δ < β < α. If β > α, we have that δ - α/α > 0 and t^* = ⌊β - α/δ -α· k ⌋≤β - α/δ -α· k. Hence, t^* ·δ - α/α≤β - α/δ -α· k ·δ - α/α = β - α/α· k = -( 1 - β/α)· k . Similarly, if β < α, then we have that δ - α/α < 0 and t^* = ⌈β - α/δ -α· k ⌉≥β - α/δ -α· k. Therefore, we also get t^* ·δ - α/α≤β - α/δ -α· k ·δ - α/α = β - α/α· k = -( 1 - β/α)· k. <ref> implies that k - t^* = β· k - δ· t^*/α + ( 1 - β/α) · k + t^* ·(δ - α/α) ≤ β· k - δ· t^*/α + ( 1 - β/α) · k - ( 1 - β/α) · k = β· k - δ· t^*/α. Finally, since t^* ≤β/δ· k, it also holds that 0 ≤β· k - δ· t^*/α. Therefore we get max( 0, k - t^* ) ≤β· k - δ· t^*/α. Now let S be as defined in <ref>. According to <Ref> and (<ref>), with probability at least p^t^*/(t^* + 1)^r, it holds that G ∖ S has a solution of size at most max( 0, k - t^* ) ≤β· k - δ· t^*/α. Note that in this scenario, the set returned by (G ∖ S, β· k - δ· t^*/α), i.e. Y, satisfies Y≤α·β· k - δ· t^*/α = β· k - δ· t^* with probability at least 1/2 by <ref>. 
Moreover, since Z = Y ∪ S and S≤δ· t it holds that Y≤β· k - δ· t^* Z≤β· k. Therefore we have ( Z≤β· k ) ≥( Y≤β· k - δ· t^* ) ≥( Y≤β· k - δ· t^* | G ∖ S has a solution of size at mostmax( 0, k - t^* ) ) ·( G ∖ S has a solution of size at mostmax( 0, k - t^* ) ) ≥1/2·p^t^*/(t^* + 1)^r where the first inequality follows from (<ref>) and the last one follows from (<ref>). Now we are ready to prove <ref>. In order to show that <ref> is a randomized parameterized β-approximation algorithm for problem, we need to prove that it satisfies <ref>. To that end, let S denote the set returned by <ref> and assume that _,Π(G) ≤ k. If t^* < T, by <ref> in <ref>, the set S returned by the algorithm satisfies S ∈_Π(G) and S≤ k < β· k with probability 1. In this case <ref> satisfies <ref>. If t^* ≥ T, consider an iteration i of the for loop in <ref> in <ref>, for 1 ≤ i ≤ 2 · p^-t^*· (t^* + 1)^r. Let Y_i denote the set returned by (G, k, t^*) in iteration i. Recall that t^* ≤β/δ· k, therefore <ref> implies that Y_i ∈_Π(G). On the other hand, by <ref>, it holds that Y_i≤β· k with probability at least p^t^*/2 · (t^* + 1)^r. Therefore we get (S > β· k) = ( Y_i > β· k for 1 ≤ i ≤ 2 · p^-t^*· (t^* + 1)^r ) = ( 1 - p^t^*/2 · (t^* + 1)^r) ^2 · p^-t^*· (t^* + 1)^r ≤ e^-1 where the second equality holds since each iteration of the for loop is independent. Finally, (<ref>) implies that ( S≤β· k ) ≥(1 - 1/e) ≥1/2. Now let us consider the running time of the algorithm. The running time of the algorithm is c^β· k - δ· t^*/α· p^-t^*· n^(1). Observe that if t^* ≥ T, each execution of takes time c^β· k - δ· t^*/α· n^(1). This is because the algorithm executes the δprT δp and the parameterized approximation algorithm, where the former has polynomial running time and the latter has a running time of c^β· k - δ· t^*/α· n^(1). Since the number of iterations is 2 · p^-t^*· (t^* + 1)^r, the total running time becomes c^β· k - δ· t^*/α· p^-t^*· (t^* + 1)^r· n^(1) = c^β· k - δ· t^*/α· p^-t^*· n^(1) since t^* ≤⌈β - α/δ - α· k ⌉ = (k) = (n). Now suppose t^* < T and observe that k = (1) because t^* < T = (1) and k ≤( t^* + 1 )·δ - α/β - α = (1). Since the algorithm goes over all sets W ⊆ V(G) of size at most k, the running time is at most n^(k) = n^(1). Therefore we can conclude that the running time of the algorithm is upper bounded by c^β· k - δ· t^*/α· p^-t^*· n^(1). Since t^* ≥β - α/δ - α· k - 1, by <ref>, we have β· k - δ· t^*/α ≤β· k - δ·(β - α/δ - α· k - 1)/α = β· k/α - (β - α) ·δ· k/(δ - α) ·α + δ/α. Therefore it holds that c^β· k - δ· t^*/α· p^-t^* = exp( ( β· k - δ· t^*/α) ·ln(c) + t^* ·ln( 1/p) ) ≤exp( ( β· k/α - (β - α) ·δ· k/(δ - α) ·α + δ/α) ·ln(c) + ( β - α/δ - α· k + 1 ) ·ln( 1/p) ) = exp( ( β· k · (δ - α) - (β - α) ·δ· k/(δ - α) ·α) ·ln(c) + ( β - α/δ - α· k ) ·ln( 1/p) ) · c^δ/α·1/p = exp( ( α· k · (δ - β)/α· (δ - α)) ·ln(c) + ( β - α/δ - α· k ) ·ln( 1/p) )· c^δ/α·1/p = exp( δ - β/δ - α· k·ln(c) + β - α/δ - α· k ·ln( 1/p) )· c^δ/α·1/p = exp( (δ - β) ·ln(c) + (β - α) ·ln( 1/p) /δ - α)^k· c^δ/α·1/p, where the inequality follows from (<ref>) and <ref>. Therefore, by <ref> and (<ref>), the running time of the algorithm is f(δ, p)^k· n^(1). §.§ Converting Sampling Steps to Procedures In this section, we will prove <ref> by developing several auxiliary lemmas. Let 0 < q ≤ 1, 1 ≤δ≤1/q and ℛ be a sampling step for , with success probability q. Consider <ref> with these parameters. We will demonstrate that there exists integers r and T such that <ref> is a δϕ( δ, q ) rT. 
To that end, we will need to show that given a hypergraph G ∈𝒢 and t ≥ 0 as input, <ref> runs in polynomial time and satisfies <ref>, as defined in <ref>. Note that neither the running time of the algorithm nor <ref> depend on the values of r and T. Therefore, irrespective of the values of r and T, we will show that <ref> runs in polynomial time and that <ref> holds. Then, we will show that there exists r and T for which <ref> holds, implying the truth of <ref>. <ref> runs in polynomial time. Since ℛ is a sampling step for , it runs in polynomial time. Moreover, after n steps, G becomes empty and therefore belongs to Π, as Π is hereditary and includes the empty graph. Therefore, the number of iterations of the while loop in <ref> is at most n. Finally, membership to Π can be tested in polynomial time since Π is polynomial-time decidable. Therefore, the whole algorithm runs in polynomial time. <ref> satisfies <ref> in <ref>. Observe that the while loop in <ref> runs at most δ· t times. Moreover, in each iteration of the for loop, the size of S increases by at most 1. Therefore the claim follows. Next, we demonstrate a simple feature of hereditary hypergraph properties. Intuitively, removing a vertex from a hypergraph does not increase the size of the optimal solution. The proof of <ref> can be found in <ref>. lemmaheredoptdecrease Let Π be a hereditary hypergraph property and G be a hypergraph. For any v ∈ V(G), it holds that 0 ≤_Π(G) - _Π(G ∖ v ) ≤ 1. Let ξ_1, …, ξ_n be i.i.d. binary random variables and ν∈ (0,1] such that ( ξ_i = 1 ) ≥ν for all 1 ≤ i ≤ n. The following inequality can be shown using standard arguments (∑_i = 1^nξ_i ≥ t) ≥exp(- n/t·t/nν) · n^(1). In the following lemma, we prove a similar statement in our setting where the i.i.d. assumption is dropped. Its proof can be found in <ref>. lemmaprobmainresult Let δ, t ≥ 1 be integers, ν∈ (0,1] be a real number and ξ_1, …, ξ_⌊δ· t ⌋∈{0,1} be random variables such that ( ξ_j =1 | ξ_1 = x_1,…,ξ_j-1 = x_j - 1)≥ν for all 1 ≤ j ≤⌊δ· t ⌋ and ( x_1, …, x_j-1) ∈{0,1}^j-1. Then, there exist integers r and T that depend on δ, such that for t ≥ T it holds that ( ∑_j = 1^δ· tξ_j ≥ t) ≥( δ· t + 1 ) ^-r·exp( -δ·1/δν)^t. Given a hypergraph G ∈𝒢 and t ≥ 0 as input, let ℓ≤δ· t be the number of iterations of the while loop in <ref>. Let G_0 G and for 1 ≤ i ≤ℓ, let G_i denote the hypergraph G at the end of the i'th iteration. Similarly, let v_i denote the vertex v at the end of the i'th iteration, i.e. v_i = ℛ( G_i -1). For ℓ + 1 ≤ i ≤δ· t, we let G_i G_i - 1 and v_i v_i - 1. Furthermore, we define the random variables Z_0 0 and Z_i = ( G_i - 1) - ( G_i ) if 1 ≤ i ≤ℓ 1 if ℓ < i ≤δ· t for 1 ≤ i ≤δ· t. Intuitively, for 1 ≤ i ≤ℓ, Z_i measures the decrease in the optimal solution size, from G_i - 1 to G_i. Note that, by definition, for i ≤ℓ it holds that G_i = G_i - 1∖{v} for some v ∈ V( G_i - 1). Therefore, we have Z_i ∈{0,1} by <ref>. For 1 ≤ j ≤δ· t and (x_1, …, x_j -1) ∈{0,1}^j-1 it holds that ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1) ≥ q. By the law of total probability, it holds that ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1) = ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1, j ≤ℓ) ·(j ≤ℓ) + ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1, j > ℓ) ·(j > ℓ). We have ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1, j ≤ℓ) ≥ q because j ≤ℓ implies that G_j = G_j - 1∖ v where v = ℛ( G_j - 1). Since ℛ is a sampling step for Π with success probability q, the inequality follows. 
Similarly, we have ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1, j > ℓ) = 1, which holds because j > ℓ implies that Z_j = 1 with probability 1, by definition. Therefore, by (<ref>), (<ref>) and (<ref>) ( Z_j = 1 | Z_1 = x_1, …, Z_j- 1 = x_j- 1) ≥ q. In the following lemma, we establish a lower bound on the probability that the graph returned by the algorithm, i.e. G_δ· t, has a solution of size at most max( 0, k - t ). This lower bound is equal to the probability that the sum of Z_i for i from 1 to δ· t exceeds t. It holds that ( G_δ· t has a solution of size at most max( 0, k - t ) ) ≥(∑_i =1^δ· t Z_i ≥ t). Let A be the event that G_δ· t has a solution of size at most max( 0, k - t ), and let B be the event that ∑_i =1^δ· t Z_i ≥ t. Finally, let C be the event that ℓ = δ· t. In the following, we say that an event X implies another event Y if X ⊆ Y. Let us write B = ( B ∩ C ) ∪( B ∩C). It holds that (B ∩C) ⊆C⊆ A, because if ℓ < δ· t, then G_δ· t = G_ℓ∈Π and therefore G_δ· t has a solution of size 0. On the other hand, B ∩ C implies that ℓ = δ· t and ∑_i =1^δ· t Z_i ≥ t. In this case, for each 1 ≤ i ≤δ· t, it holds that i ≤δ· t = ℓ and Z_i = ( G_i - 1) - ( G_i ). Moreover, (∑_i =1^δ· t Z_i) = ∑_i =1^δ· t( G_i - 1) - ( G_i ) = ( G_0 ) - ( G_δ· t) which implies that ( G_δ· t) = ( G_0 ) - (∑_i =1^δ· t Z_i) ≤ k - (∑_i =1^δ· t Z_i) ≤ k - t ≤max( 0, k - t ) . Therefore, (B ∩ C) ⊆ A and it holds that B ⊆ A. Hence we get ( G_δ· t has a solution of size at most max( 0, k - t ) ) = (A) ≥(B) = (∑_i =1^δ· t Z_i ≥ t). There exists r,T ≥ 0 such that <ref> satisfies <ref> in <ref>. Suppose that _,Π(G) ≤ k for some k ≥ 0 and let S be the set returned by <ref>. In order to prove the <ref> holds, we will show that there exists integers r,T ≥ 0 such that for t ≥ T, it holds that (G ∖ S) = G_⌊δ· t ⌋ has a solution of size at most max( 0, k -t ), with probability at least (ϕ(δ,q))^t/(t+1)^r. By <ref> it holds that ( G_δ· t has a solution of size at most max( 0, k - t ) ) ≥(∑_i =1^δ· t Z_i ≥ t). In light of (<ref>), our goal is to establish a lower bound for the probability that ∑_i =1^δ· t Z_i ≥ t. By <ref>, there exist r and T that depend on δ such that for all t ≥ T, we have (∑_i =1^δ· t Z_i ≥ t) ≥( δ· t + 1 ) ^-r·exp( -δ·1/δq)^t. Finally, we let T' = max( δ, T ) and r' = 2 · r. Then, for all t ≥ T', it holds that ( δ· t + 1 )^-r = ( δ· t + 1 )^-r'/2≥( δ· t + δ)^-r'/2 = δ^-r'/2·( t + 1 )^-r'/2≥( t + 1 )^-r' where the last inequality holds because t ≥ T' ≥δ - 1. By <ref>, (<ref>), (<ref>) and (<ref>), for all t ≥ T' we have ( G_δ· t has a solution of size at most max( 0, k - t ) ) ≥(∑_i =1^δ· t Z_i ≥ t) ≥( δ· t + 1 )^-r·exp( -δ·1/δq)^t ≥( t + 1 )^-r'·exp( -δ·1/δq)^t = (t + 1)^-r'·( ϕ(δ, q))^t, which implies that <ref> holds for T' and r'. Finally, the proof of <ref> simply follows from <ref>. §.§ Which procedure to choose? Optimizing δ. In this section, we prove <ref>. We first give the proof of <ref>. Then, with the aim of proving <ref>, we analyze functions that arise during our analysis of the running time of the algorithm. Observe that <ref> and <ref> together give a randomized parameterized β-approximation algorithm for whose running time depends on δ. More specifically, for δ > 1/q, this running is time equal to exp( (δ - β) ·ln(c) + (β - α) ·ln( 1 ) /δ - α)^k· n^(1) = c^δ - β/δ - α· k· n^(1). Since δ - β/δ - α = 1 - β - α/δ - α is increasing for δ∈, the running time also increases for δ > 1/q. Therefore it doesn't make sense to consider values of δ > 1/q. 
This is why the range of δ is constrained to be less than or equal to 1/q in <ref>. For each δ∈ such that 1 ≤δ≤1/q, by <Ref> there is a δϕ(δ,q) for . Therefore, by <ref>, there is a randomized parameterized β-approximation algorithm for with running time f(δ, ϕ(δ,q))^k· n^(1)=f̃(δ ,q )^k· n^(1). In particular, one can consider all possible δ∈(∩ [1,1/q]) and choose the δ that minimizes the running time. Therefore, there is a randomized parameterized β-approximation algorithm for which runs in time (min_δ∈∩ [1,1/q]f̃(δ,q))^k · n^(1). Our next goal is to prove <ref>. For fixed 0 < q ≤ 1, let us define h_q(δ) ln(f̃(δ,q)) = (δ - β) ·ln(c) + (β - α) ·ln( 1/ϕ(δ, q)) /δ - α = δ - β/δ - α·ln(c) + β - α/δ - α·ln( 1/ϕ(δ,q)) We omit the subscript and write h(δ) instead of h_q(δ) whenever the values are clear from the context. Note that minimizing h is equivalent to minimizing f̃ as ln is a monotone increasing function, in other words min_δ∈∩ [1,1/q]f̃(δ,q) = exp(min_δ∈∩ [1,1/q] h_q(δ)). Also observe that for any fixed δ∈, h(δ) is a convex combination of ln(c) and ln( 1/ϕ(δ,q)). To make use of this property, let us define the following function which is linear in the variable x m_δ,q(x) δ - x/δ - α·ln(c) + x - α/δ - α·ln( 1/ϕ(δ,q)) = ln(c) + s_q(δ) · (x - α) where s_q(δ) ln( 1/ϕ(δ,q)) - ln(c)/δ - α. Moreover, we have h_q(δ) = m_δ,q(β), and using this equivalence, the running time for an x approximation, for a fixed δ, can be stated as d^k· n^(1) where d = exp( m_δ,q(x) ). In the following we will demonstrate that for 0 < q < 1, the value of δ that minimizes h_q(δ) (equivalently, f̃(δ, q)) and can be found by analyzing s_q(δ). Let 0 < q < 1. It holds that min_δ∈∩ [1,1/q] h_q(δ) = ln(c) + (β - α) ·(min_δ∈∩ [1,1/q] s_q(δ)) if β > α ln(c) + (β - α) ·(max_δ∈∩ [1,1/q] s_q(δ)) if β < α. Observe that h_q(δ) = ln(c) + s_q(δ) · (β - α) which follows from (<ref>) and (<ref>). Assume that β > α. Since α, β and c are independent of δ and (β -α) > 0, the value of δ that minimizes h_q(δ) is the one for which the value of s_q(δ) is minimized. The case of β < α follows in the same way by noting that in this case (β - α) < 0. <ref> states that depending on the values of α and β, minimizing h_q(δ) is equivalent to either minimizing or maximizing s_q(δ). Therefore, in what follows, we will study the analytical properties of s_q(δ). lemmasderivalternative Let 0 < q ≤ 1. For δ∈ (1,∞)∖α, define Γ_q(δ) -α·1/αq + α·1/α1/δ + ln(c) It holds that (∂/∂ δ s_q(δ)) = (Γ_q(δ)). Moreover, it also holds that ∂/∂ δ s_q(δ) = 0 if and only if Γ_q(δ) = 0. The proof of <ref> can be found in <ref>. The following lemma describes the behavior of the function s_q(δ) over specific intervals. It demonstrates that s_q(δ) is unimodal over intervals, meaning that it exhibits a strictly decreasing trend followed by a strictly increasing trend, or vice versa, depending on the interval. lemmasqunimodal The function s_q(δ) is strictly decreasing for α≤δ≤ and strictly increasing for ≤δ≤1/q. Moreover, if α > 1, then the function s_q(δ) is strictly increasing for 1 ≤δ≤ and strictly decreasing for ≤δ≤α. The proof of <ref> can be found in <ref>. With these definitions and results, we are now able to calculate the minimum (or maximum) of s_q(δ) over δ∈∩ [1,1/q]. In <ref>, we separately consider the two complementary cases based on whether α is greater than β or not. Let 0 < q ≤ 1 such that c≤exp(α·1/αq). Suppose that 1/q > β > α and β <. Then ( min_δ∈∩ [1,1/q] s_q(δ) ) = s_q(). We first show that α≤≤1/q. By definition, it holds that ≥α. 
To prove that ≤1/q, suppose for a contradiction that > 1/q, i.e. 1/ < q. Since 1/ < q < 1/α, it holds that 1/αq - ln(c)/α = 1/α1/ > 1/αq, where the equality follows from the definition of . Therefore we arrive at a contradiction. By <Ref> it holds that s_q(δ) is decreasing in [α, ] and increasing in [,1/q], therefore ( min_δ∈∩ [1,1/q] s_q(δ) ) = s_q(). Next, we state the analogue of <ref> for the β > α case. The proof <ref>, which is nearly identical to the proof of <ref>, can be found in <ref>. lemmamaxsbetasmalleralpha Let (α, β) ∈, 0 < q ≤ 1, and c ≥ 1 such that c≤exp(α·1/αq). Suppose that 1 < β < α < 1/q and β >. Then ( max_δ∈∩ [1,1/q] s_q(δ) ) = s_q(). Finally, we present the proof of <ref>. First, let us assume that β≥α. By (<ref>), it holds that min_δ∈∩ [1,1/q]f̃(δ,q) = exp(min_δ∈∩ [1,1/q] h_q(δ)) = exp(ln(c) + (β - α) ·(min_δ∈∩ [1,1/q] s_q(δ))). where the second equality follows from <ref>. For > β, we have exp(ln(c) + (β - α) ·(min_δ∈∩ [1,1/q] s_q(δ))) = exp(ln(c) + (β - α) ·(s_q())) = c ·exp(ln( 1/ϕ( , q ) ) - ln(c) / - α·( β - α) ) = c ·exp(·1/q - ln(c) / - α·( β - α) ) = where the first equality follows from <ref>, and the second and third equalities follow from the definition of the functions s_q(δ) and ϕ(δ, q) respectively. If β≥, then it holds that ∩ [1,1/q] ⊂ [, 1/q]. Therefore, by <ref>, the function s_q(δ) is strictly increasing over ∩ [1,1/q]. Hence, we get exp(ln(c) + (β - α) ·(min_δ∈∩ [1,1/q] s_q(δ))) = exp(ln(c) + (β - α) ·(s_q(β))) = exp(ln(c) + ln( 1/ϕ( β, q ) ) - ln(c) /β - α·( β - α) ) = exp(ln(c) +β·1/βq - ln(c) /β - α·( β - α) ) = exp(β·1/βq) = . Therefore the claim holds for β≥α. The proof for the case β < α is nearly identical to the one above and is left to the reader. § PROPERTIES OF AND In this section we show properties of the functions and which has been defined in (<ref>). We first prove <Ref> which shows the functions are well defined. Then, we provide a closed formula for and for some special cases. We also use the closed formulas to prove <ref> which provide a closed formula for in the these special cases. * Note that c≤exp(α·1/αq) implies that 1/αq - ln(c)/α≥ 0. Furthermore, observe that 1/αx is a differentiable, non-negative convex function of x and has a global minimizer at 1/α. Therefore, 1/αx is strictly decreasing for x ≤1/α and strictly increasing for x ≥1/α. Let us now consider the range I_1 = [α, ∞). If δ∈ I_1, then 1/δ≤1/α, consequently the function 1/α1/δ is a strictly increasing function of δ for δ∈ I_1. Additionally, it holds that lim_x → 11/αx = lim_x → 01/αx = ∞. Hence, there exists a unique value of δ∈ I_1 such that 1/α1/δ = 1/αq - ln(c)/α≥ 0, which implies that is well-defined. Similarly, let's consider the case where α > 1 and define the interval I_2=(1,α]. In this case, the function 1/α1/δ is strictly decreasing for δ∈ I_2. By (<ref>), it holds that there exists a unique value of δ∈ I_2 such that <ref> holds. Next, we first give a closed formula for in case α=1. Let α = 1, 0 < q ≤ 1 and c ≥ 1 such that c ≤1/q. Then we have (α, c, q) = 1/q · c. Since 1≤ c≤1/q it holds that 1/c· q≥ 1 = α. Furthermore, 1/α1/1/c· q = -ln(q· c) = -ln(q)-ln(c) = 1/αq -ln(c). The first and last equalities holds as α=1 and due to (<ref>). Therefore, by the definition of in (<ref>) we have (α, c, q) = 1/q · c. <ref> is a simple consequence of <Ref>. * Note that by <ref>, we have that =(1,c,q) = 1/q · c. By (<ref>), if β < 1/q · c, then [1,β,c,q] = c ·exp(·1/q- ln(c) / - 1·( β - 1 ) ) = c ·exp(1/c· q·c· qq- ln(c) /1/cq - 1·( β - 1 ) ). 
By the formula of Kullback-Leibler divergence we have, 1/c· q·c· qq- ln(c) = c· q /c· q·ln( c· q/q)+ 1-c· q/c· q ·ln(1-c· q/1-q)-ln(c) = ( 1/c· q- 1)·ln(1-c· q/1-q). By (<ref>) and (<ref>) we have [1,β,c,q] = c ·exp(1/c· q·c· qq- ln(c) /1/cq - 1·( β - 1 ) ) = c ·exp(( 1/c· q- 1)·ln(1-c· q/1-q) /1/cq - 1·( β - 1 ) ) = c·( 1-c· q/1-q)^β-1. On the other hand, if β≥1/q · c, again by (<ref>) it follows that [1,β,c,q] = exp(β·1/βq). Therefore, the theorem follows. Finally, we also provide a closed formula for in case α=2 and c=1. Here, we rely on the fact that 1/2x is symmetric around 1/2. Let α = 2, 0 < q ≤1/α = 1/2 and c = 1. Then we have (α,c,q) = 1/1 - q. We first observe that 1/1-q > 1 since q>0 and 1/1-q≤1/1-1/2≤ 2=α as q≤1/2. Thus, 1/1-q∈ (1,α]. Furthermore, 1/α1/1/1-q = 1/21-q = 1/2q = 1/αq -ln(c)/α, the second equality holds as 1/2x =1/2·ln(1/2x) + 1/2·ln(1/2· (1-x)) is symmetric around x=1/2, and the last equality holds as ln(c)=ln(1)=0. Thus, we have (α,c,q)=1/1-q by its definition in (<ref>). * By <Ref> it holds that =(2,1,q) = 1/1 - q by <ref>. In case β>1/1-q, by (<ref>) it holds that [2,β,1,q] = 1 ·exp(·1/q- ln(1) / - 2·( β - 2 ) ) = exp(1/1-q·1-qq/1/1-q - 2·( β - 2 ) ) = exp(1-q/1-q·ln(1-q/q)+ q/1-q·ln( q/1-q)/1/1-q - 2·( β - 2 ) ) =exp((1/1-q-2) ln( q/1-q) /1/1-q - 2·( β - 2 ) ) = (1/1-q)^β-2. In case 1 ≤β≤1/1 - q, again by (<ref>) it follows that [2,β,1,q] = exp(β·1/βq). Therefore,the theorem holds. § SAMPLING STEPS In this section we provide sampling steps used by our applications. <Ref> give the sampling steps for and . In <Ref> we give the generic sampling step for problems in which Π is defined by a finite set of forbidden sub-hypergraphs. §.§ Feedback Vertex Set In this section we will prove <ref>, that is we provide a sampling step for () with success probability 1/4. Let 𝒢 denote the set of graphs and let denote the set of graphs without cycles. Observe that is a hereditary hypergraph property. For an input graph G, the problem asks whether there exists a set S ⊆ V(G) of size k such that G ∖ S ∈, i.e. G ∖ S is acyclic. The sampling step for is given in <Ref>. It starts by iteratively removing vertices of degree at most 1, as described in <ref>. Given an input graph G, we refer to the resulting graph from this procedure as G. It is evident that the sets of cycles in G and G are equal because a vertex v ∈ V(G) with degree at most 1 cannot be part of a cycle. Therefore, it holds that _(G) = _(G). After computing G), the sampling step defines a weight for every vertex. The weight of a vertex v of degree 2 (in G) is zero, and the weight of every other vertex is its degree. The algorithm returns a random vertex, so the probability of every vertex to be returned is proportional to is weight. The algorithm also handles a corner case which occurs if G contains a cycle of vertices of degree 2 by sampling a random vertex from this cycle. The proof of the next lemma is an adjustment of the arguments in <cit.> (see also <cit.>). <ref> is a sampling step for with success probability 1/4. <Ref> is an immediate consequence of <Ref>. Initially, let us demonstrate that the procedure operates within polynomial time. Computing the G and determining whether the graph has a connected component with maximum degree 2 takes linear time. Similarly, computing the values w(v) for each vertex v ∈ V(G) also takes linear time. Therefore, the entire algorithm runs in linear time. By (<ref>), we can, without loss of generality, assume that G has no vertices of degree less than 2. 
Similarly, we can also assume that each component of G has at least one vertex of degree at least 3, because otherwise the algorithm returns a vertex v at <ref> which satisfies _(G ∖ v) ≤_(G) - 1 with probability 1. There exists a minimum feedback vertex set of G such that each vertex in it has degree at least 3. Let B be a minimum feedback vertex set. Since G does not contain any vertices of degree less than 2, B also does not contain any vertices of degree less than 2. Suppose B contains a vertex v of degree exactly 2. We claim that there is always a vertex u ∈ V(G) of degree at least 3 such that ( B ∖{v}) ∪{u} is also a minimum feedback vertex set. Observe that this is enough to prove the claim, as we can apply this step repeatedly as long as B contains a vertex of degree 2. Since v is in a connected component of which contains a vertex of degree 3 or more, there is a path from v to u, such that every vertex on the path expect u is of degree 2. Thus, every cycle C in G that contains v also contains u. Therefore, ( B ∖{v}) ∪{u} is also a minimum feedback vertex set of G. Let B be a minimum feedback vertex set of G such that each vertex has degree at least 3. In the following, for a set X ⊆ V(G), we let w(X) ∑_x ∈ X w(x). The following claim argues that the weight of B is large. It holds that w(B) ≥w( V(G) ∖ B )/3. Define R = V(G) ∖ B. We have w(R) = ∑_v ∈ R (v) ≥ 3(v) = ( ∑_v ∈ R(v) ) - ( ∑_v ∈ R (v) = 2(v) ) = ( ∑_v ∈ R(v) ) - 2 ·{v ∈ R |(v) = 2} = ( ∑_v ∈ R(v) ) - 2 ·R_2 where R_2 {v ∈ R |(v) = 2}. Observe that ( ∑_v ∈ R(v) ) is equal to the number of edges between the vertices in B and R, plus twice the number of edges between vertices in R. Therefore we get w(R) = E(B,R) + 2 ·E(G[R]) - 2 ·R_2 ≤E(B,R) + 2 ·R - 2 ·R_2 = E(B,R) + 2 ·R_≥ 3 where R_≥ 3{v ∈ R |(v) ≥ 3} and E(B,R) = {(u,v) | u ∈ B, v ∈ R, (u,v) ∈ E(G)}. Furthermore, the inequality holds because G[R] is a forest by definition. Since each v ∈ R_≥ 3 contributes at least 3 to w(R), we have 3·R_≥ 3≤ w(R) and w(R) ≤E(B,R) + 2 ·w(R)/3 which implies that w(R)/3≤E(B,R). Finally, since w(B) ≥E(B,R), it holds that w(B) ≥E(B,R)≥w(R)/3. <ref> implies that ( v ∈ B ) = w(B)/W = w(B)/w(B) + w( V(G) ∖ B ) ≥w(B)/w(B) + 3· w(B) = 1/4. Finally, since v ∈ B implies _(G ∖ v) ≤_(G) - 1, we get (_(G ∖ v) ≤_(G) - 1) ≥( v ∈ B ) ≥1/4. §.§ Pathwidth One Vertex Deletion Let 𝒢 denote the set of graphs, and let denote the set of graphs with pathwidth at most 1. Given a graph G ∈𝒢 as input, the () problem asks for a set S ⊆ V(G) such that G ∖ S belongs to , i.e., G ∖ S has pathwidth at most 1. Let T_2 be the graph with 7 vertices, where we take three paths with 3 vertices each, and identify one of the degree 1 vertices of each path (see <ref>). Let Ω denote the set of all cycle graphs together with the graph T_2. An alternative characterization of is the following: a graph G belongs to if and only if G has no subgraph isomorphic to a graph in Ω <cit.>. Note that the set of forbidden subgraphs in the case of is Ω∖{T_2}, hence, it is natural to adapt the sampling step for to . Our sampling step for , given in <ref> i, first checks whether G has a subgraph isomorphic to T_2. If this is the case, is samples a random vertex of this subgraph. If not, then G should have a subgraph isomorphic to cycle and the sampling step for is used. The following lemma show that <ref> holds. <ref> is a sampling step for with success probability 1/7. First, let us show that <ref> runs in polynomial time. 
Checking whether G has a subgraph Z isomorphic to T_2 can be done by going over all subgraphs of G of size 7, which takes polynomial time. Moreover, <ref> runs in polynomial time by <ref>. Define the partition (𝒢_1,𝒢_2) of ( 𝒢∖) where 𝒢_1 {G ∈( 𝒢∖) | G has a subgraph isomorphic to T_2 }. and 𝒢_2 (( 𝒢∖) ∖𝒢_1). First assume that G ∈𝒢_1 and let Z be the subgraph of G which is isomorphic to T_2. Then, the algorithm returns a vertex v ∈ V(Z) sampled uniformly at random, at <ref> of <ref>. Now let S ∈_(G), and observe that v ∈ S implies that _(G ∖ v) ≤(_(G) - 1). Moreover, (V(Z) ∩ S) ≠∅ because otherwise S is not a solution. Therefore ( _(G ∖ v) ≤_(G) - 1 ) ≥( v ∈ S ) = V(Z) ∩ S/V(Z)≥1/7. Now assume that G ∈𝒢_2. By the alternative characterization of , it follows that G contains a subgraph isomorphic to a graph in Ω∖{T_2}. Therefore, for G ∈𝒢_2, it holds that G∖ S ∈ G∖ S ∈. Therefore the problems [𝒢_2, ] and [𝒢_2, ] are equivalent. Moreover, <ref> is a sampling step for [𝒢_2, ] with success probability 1/4, and by definition it returns a vertex v such that ( _(G ∖ v) ≤_(G) - 1 ) ≥1/4. Therefore, by (<ref>) and (<ref>), <ref> is a sampling step for with a success probability of 1/7. §.§ (𝒢,Π)-Vertex Deletion for a finite set of forbidden sub-hypergraphs We are left to prove <Ref>. That is, we describe a sampling step for [𝒢, Π^Ω] where Π^Ω is described by a finite set of forbidden subhypergraphs (<Ref>). In the remainder of this section, let 𝒢 be a fixed, closed set of hypergraphs, and let Ω = {F_1, …, F_ℓ} be a fixed finite set of hypergraph. Recall η(Ω) max_1 ≤ i ≤ℓV( F_i ). The idea in the sampling step for [𝒢, Π^Ω] is very simple, if a hypergraph G ∈𝒢 does not belong to Π^Ω, then G should have a subhypergraph Z isomorphic to F_i for some 1 ≤ i ≤ℓ. Moreover, any solution S ∈_Π^Ω(G) should contain a vertex from Z, otherwise (G ∖ S) ∉Π^Ω. We combine these ideas in <ref>. <ref> is a sampling step for [𝒢, Π^Ω] with success probability 1/η(Ω). We note that <ref> follows immediately from <Ref>. For each 1 ≤ i ≤ℓ, the algorithm goes over all subsets of V(G) of size F_i, which takes time at most n^(F_i) = n^( η(Ω)) = n^(1). Checking whether G[Z] is isomorphic to F_i takes constant time since F_i is constant. Hence all in all, the algorithm runs in polynomial time. Moreover, the algorithm always returns a vertex because G ∈( 𝒢∖Π^Ω) and there exists Z ⊆ V(G) such that Z is isomorphic to F_i for some F_i ∈Π. Now let v be the output of the algorithm and Z ⊆ V(G) be the set v is sampled from. Consider S ∈_Π^Ω(G). Observe that if v ∈ S, then S ∖{v} is a solution for G ∖ v. Therefore, (v ∈ S) ≤( _Π(G ∖ v) ≤_Π(G) - 1 ). Next, observe that (Z ∩ S) ≠∅, because otherwise we would have (G∖ S) ∉Π^Ω because Z is isomorphic to F_i. Since S ∈_Π^Ω(G), this implies that (Z ∩ S) ≠∅. Since v ∈ Z is sampled uniformly, we get ( v ∈ S ) = S ∩ Z/Z≥1/Z≥1/η(Ω). Finally, by (<ref>) and (<ref>), it holds that ( _Π(G ∖ v) ≤_Π(G) - 1 ) ≥1/η(Ω). § ADDITIONAL APPLICATIONS In this section, following <ref>, we present additional applications of our results. §.§ 3 Observe that 3 is equivalent to where 𝒢 is the set of all hypergraphs with edge cardinality 3 and Π is the set of all edgeless hypergraphs. Moreover, let F be a hypergraph with a single edge of cardinality 3 and define Ω{F} such that η( Ω) = 3. Furthermore, let Π^Ω be as in <ref>. Note that is also equivalent to [𝒢, Π^Ω], because for G ∈𝒢, it holds that G ∈Π if and only if G doesn't have an edge, i.e. there is no vertex induced subhypergraph of G isomorphic to F. 
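To make the generic sampling step of <ref> concrete for this instantiation, the following is a minimal Python sketch (ours, for illustration only; the function and variable names are not taken from the paper's pseudo-code). Since the only forbidden sub-hypergraph is a single edge on three vertices, the step simply locates any hyperedge whose vertices are all still present and returns one of its three vertices uniformly at random.

import random
from typing import Hashable, Optional, Sequence, Set

def sampling_step_3hs(vertices: Set[Hashable],
                      edges: Sequence[Sequence[Hashable]]) -> Optional[Hashable]:
    """One sampling step for 3-Hitting Set viewed as a vertex deletion problem.

    A hyperedge all of whose endpoints are still present is a witness isomorphic
    to the forbidden sub-hypergraph F; return one of its vertices uniformly at
    random, or None if no witness remains (the hypergraph already satisfies Pi).
    """
    for edge in edges:
        if all(v in vertices for v in edge):   # witness of the forbidden structure
            return random.choice(list(edge))   # uniform vertex of the witness
    return None

# Tiny example: repeatedly sample and delete until the property holds,
# loosely mirroring the iterative procedure of the previous section.
if __name__ == "__main__":
    verts = {"a", "b", "c", "d", "e"}
    hyperedges = [("a", "b", "c"), ("b", "d", "e")]
    while (v := sampling_step_3hs(verts, hyperedges)) is not None:
        verts.discard(v)
        print("deleted", v)

Any hitting set must intersect the chosen witness edge, so the returned vertex belongs to a fixed optimal solution with probability at least 1/3, which is precisely the success probability invoked next.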
By <ref>, there is a sampling step for 3 with success probability 1/3. We will utilize the FPT algorithm from <cit.> which runs in time 2.076^k· n^(1). There is a randomized parameterized β-approximation algorithm for 3 with running time d^k· n^(1) where we have d = 2.076 ·(0.462)^β - 1 if 1 < β < 1.445 exp( β·1/β1/3) if 1.445 ≤β < 3. The lemma follows from the FPT algorithm of <cit.> (α = 1, c = 2.076) and <ref> by setting q = 1/3. In <ref>, we compare our results with those from <cit.>, <cit.>, and <cit.>. §.§ 4 Recall the definition of ℓ from <ref>. According to <ref>, there is a sampling step for 4 with a success probability 1/4. Moreover, there exists an FPT algorithm that runs in time 2.138^k· n^(1) (α = 1, c = 2.138) <cit.>, and a 3-approximation algorithm that runs in polynomial time (α = 3, c = 1) <cit.>. There is a randomized parameterized β-approximation algorithm for 4 with running time d^k· n^(1) where we have d = 2.138 ·(0.621) ^β - 1 if 1 < β≤ 1.871 exp( β·1/β1/4) if 1.871 < β≤ 2.357 exp( 2.357 · 0.424 0.25 / 0.643 · (3 - β) ) if 2.357 < β≤ 3 Let 𝒜_1 denote the FPT algorithm from <cit.>, which runs in 2.138^k· n^(1) time (α = 1, c = 2.138). Similarly, let 𝒜_2 denote the 3-approximation algorithm from <cit.> that runs in polynomial time (i.e., α = 3, c = 1). Note that it suffices to consider β≤ 3 in the following, because for β > 3, 𝒜_2 serves as a polynomial-time β approximation algorithm. By using 𝒜_1 and <ref>, the first β-approximation algorithm we obtain has the running time d^k· n^(1) where d = 2.138 ·(0.621) ^β - 1 if 1 < β < 1.871 exp( β·1/β1/4) if 1.871 ≤β≤ 3. On the other hand, we have that ( 3,1,1/4) = 2.357. Therefore, by using 𝒜_2 and <ref>, for every 1 < β≤ 3 there exists a parameterized β-approximation algorithm which runs in time [3, β, 1, 1/4]^k· n^(1) where [3, β, 1, 1/4] = exp( 2.357 · 0.424 0.25 / 0.643 · (3 - β) ) if 2.357 < β≤ 3 exp( β·1/β1/4) if 1 < β≤ 2.357 . The lemma follows by selecting the smaller value between (<ref>) and (<ref>) for each 1 < β < 3. See <ref> for a plot of d in <ref>, depending on the approximation ratio β. §.§ In this section, we demonstrate how our techniques extend to directed graph problems. Recall that in the problem, we are given a tournament graph G and we would like to find a set of vertices S such that G ∖ S doesn't have any directed cycles. Although our technique normally applies to hypergraphs, we adapt it for directed graphs by defining 𝒢 as the set of all tournament graphs. Similarly, we define the graph property Π to consist of all tournament graphs that are cycle free. Note that a tournament is acyclic if and only if it contains no directed triangle. It is not hard to show that our results for graph properties, with a finite set of forbidden graphs, also apply in this setting. We omit the technical details. By <ref>, there is a sampling step for with success probability 1/3. There is also a 2-approximation algorithm that runs in polynomial time <cit.> (α = 2, c = 1). Moreover, there is an FPT algorithm with running time 1.618^k· n^(1) (α = 1, c = 1.618) <cit.>. There is a randomized parameterized β-approximation algorithm for with running time d^k· n^(1) where we have d = 1.618 ·(0.691) ^β - 1 if 1 < β≤ 1.854 0.5^β - 2 if 1.854 < β≤ 2 Let 𝒜_1 denote the FPT algorithm from <cit.>, which runs in time 1.618^k· n^(1) (α = 1, c = 1.618). Similarly, let 𝒜_2 denote the 2-approximation algorithm from <cit.> that runs in polynomial time (i.e., α = 2, c = 1). Note that it suffices to consider β≤ 2 because of 𝒜_2. 
By using 𝒜_1 and <ref>, the first β-approximation algorithm we obtain has the running time d^k· n^(1) where d = 1.618 ·(0.691) ^β - 1 if 1 < β < 1.854 exp( β·1/β1/3) if 1.854 ≤β < 2. By using 𝒜_2 and <ref>, for every 1 < β≤ 2 there exists a parameterized β-approximation algorithm which runs in time d^k· n^(1) where d = exp( β·1/β1/3) if 1 < β≤ 1.5 0.5^β - 2 if 1.5 < β≤ 2 The lemma follows by selecting the smaller value between (<ref>) and (<ref>) for each 1 < β≤ 2. See <ref> for a plot of d in <ref>, depending on the approximation ratio β. §.§ on Graphs with Maximal Degree 3 is the restriction of the problem to graphs with maximum degree 3. It can be expressed as a problem, where 𝒢 corresponds to graphs with maximum degree 3 and the hypergraph property Π consists of all edgeless graphs. Note that Π can be described by the forbidden subgraph K_2, which is an edge that consists of two vertices. Therefore, by <ref> there exists a sampling step with success probability 1/2. In <cit.>, the authors present a polynomial time approximation algorithm for any approximation ratio arbitrarily close to 7/6. For simplicity, we will assume that a 7/6-approximation algorithm exists (α = 7/6, c = 1). Note that when we consider β-approximation algorithms, we can focus on the values of β in the range 1 < β≤7/6 because of 𝒜_2. Moreover, there exists an FPT algorithm with running time 1.1616^k· n^(1) <cit.> (α = 1, c = 1.1616). There is a randomized parameterized β-approximation algorithm for with running time d^k· n^(1) where we have d = 1.1616 ·(0.8384) ^β - 1 if 1 < β≤ 1.136 exp( 1.008 · 0.992 0.5 / 0.158 · (1.166 - β) ) if 1.136 < β≤ 1.166 Let 𝒜_1 denote the FPT algorithm from <cit.>, which runs in 1.1616^k· n^(1) time (α = 1, c = 1.1616). Similarly, let 𝒜_2 denote the 7/6-approximation algorithm from <cit.> that runs in polynomial time (i.e., α = 7/6, c = 1). Because of 𝒜_2, we only consider β≤7/6≈ 1.166. By using 𝒜_1 and <ref>, the first β-approximation algorithm we obtain has the running time d^k· n^(1) where d = 1.1616 ·(0.8384) ^β - 1 if 1 < β < 1.722 exp( β·1/β1/4) if 1.722 ≤β < 2. On the other hand, we have that ( 7/6 ,1,1/2) = 1.008. Therefore, by using 𝒜_2 and <ref>, for every 1 < β≤ 2 there exists a parameterized β-approximation algorithm which runs in time [7/6, β, 1, 1/2]^k· n^(1) where [7/6, β, 1, 1/2] = exp( β·1/β1/2) if 1 < β≤ 1.008 exp( 1.008 · 0.992 0.5 / 0.158 · (1.166 - β) ) if 1.008 < β≤ 1.166 See <ref> for a plot of d in <ref>, depending on the approximation ratio β. § COMPARISON TO FIDELITY PRESERVING TRANSFORMATIONS In this section we will prove <ref> which implies that Sampling with a Black Box provides better running time than Fidelity Preserving Transformations. Here we state the lemma once again for completeness. * The statement in <ref> is equivalent to ln([1,β,c,1/η]) < η-β/η - 1·ln(c). Observe that for α = 1, (<ref>) becomes -ln( 1/(1,c,1/η)) = -ln( 1/η) - ln(c), which is equivalent to (1,c,1/η) = η/c. Also recall that we assume c≤exp(α·1/α1/η) = -ln( 1/η) = η. In the following we will consider the two cases β < and β≥. We will demonstrate that in both cases (<ref>) holds. Let β < = η/c. Then (<ref>) holds. By substituting (<ref>) in the definition of [1,β,c,1/η], we get ln([1,β,c,1/η]) = ln(c) + β - 1/η/c - 1·(η/c·c/η1/η - ln(c)). Furthermore, by the definition of the Kullback-Leibler divergence, we get that η/c·c/η1/η = η/c·c/η·ln( c/η·η) + η/c·( 1 - c/η) ·ln( 1 - c/η/1 - 1/η) = ln(c) + η - c/c·ln( η - c/η - 1). 
Recall that we have α = 1 < β < η/c, therefore it holds that c < η. Furthermore, by (<ref>) and (<ref>), ln([1,β,c,1/η]) = ln(c) + β - 1/η/c - 1·η - c/c·ln( η - c/η - 1) = ln(c) + (β - 1) ·ln( η - c/η - 1). Next, observe that c - 1/η -c > 0 since 1 < c < η. Therefore, by using the fact that ln(1 + x) ≥x/1 + x for x > -1, we obtain ln( η- 1/η - c) = ln( 1 + c - 1/η - c) ≥c -1/η - c/1 + c-1/η -c = c-1/η- 1 > ln(c)/η - 1 where the last inequality holds because ln(x) < x - 1 for x > 1. Finally, by (<ref>) and (<ref>), we get that ln([1,β,c,1/η]) = ln(c) - (β - 1) ·ln( η - 1/η - c) < ln(c) - β - 1/η - 1·ln(c) = η - β/η - 1·ln(c) and (<ref>) holds. Let β≥ = η/c. Then (<ref>) holds. By definition of [1,β,c,1/η], we have ln([1,β,c,1/η]) = β·1/β1/η = β·1/β·ln( η/β) + β·( 1 - 1/β) ·ln( 1 - 1/β/1 - 1/η) = ln( η/β) + (β - 1) ·ln( β - 1/η - 1·η/β). Next, we will first demonstrate that ln( β-1/η- 1) < η/η - 1·ln( β/η). Define the function ν(x) η/η - 1·ln( x/η) - ln( x-1/η - 1), and observe that ν(η) = 0. Moreover, by standard calculations, we have that ν'(x) = x - η/x· (x - 1) · (η - 1). Note that ν'(x) < 0 for x < η, i.e. the function ν(x) is decreasing for x < η. All in all, this implies that ν(β) > 0 for all β < η, and therefore (<ref>) holds. Moreover, we have (β - 1) ·ln( β - 1/η - 1·η/β) = (β - 1) ·ln( β - 1/η - 1) + (β - 1) ·ln( η/β) (<ref>)< (β - 1) ·η/η - 1·ln( β/η) + (β - 1) ·ln( η/β) = ( (1-β)·η/η - 1 + β - 1 )·ln( η/β) = (η - β·η + β·η - β - η + 1/η - 1) ·ln( η/β) = ( 1 - β/η - 1) ·ln( η/β) By (<ref>) and (<ref>), it holds that ln([1,β,c,1/η]) = ln( η/β) + (β - 1) ·ln( β - 1/η - 1·η/β) < ln( η/β) + 1- β/η - 1·ln( η/β) = ( η - β/η - 1) ·ln( η/β) ≤( η - β/η - 1) ·ln(c) where the last step holds because n/β≤ c by assumption. Therefore (<ref>) holds. By <ref> we conclude that (<ref>) holds. Therefore <ref> holds as well. § DISCUSSION In this paper we presented Sampling with a Black Box, a simple and generic technique for the design of parameterized approximation algorithms for vertex deletion problems. The technique relies on sampling steps, polynomial time algorithms which return a random vertex whose removal reduces the optimum by one, with some success probability q. The technique combines the sampling step with existing parameterized and approximation algorithms to derive efficient parameterized approximation algorithms. We provide application for various problems, such as , ℓ, d and . We point out two directions for follow up works: * While Sampling with a Black Box provides faster parameterized approximation algorithms for multiple problems, it does not provide a significant improvement for problems which has been extensively studied from this angle, such as and 3 <cit.>. This can be potentially improved by replacing sampling steps with a more generic procedure which can apply randomized branching rules <cit.>. Initial research suggests this could result in better running times for several problems. . * Our results are focused on unweighted vertex deletion problems. Vertex deletion problems can be naturally generalized for the weighted setting, in which each vertex v∈ V(G) has a weight w(v) and the objective is to find a set S⊆ V(G) of minimum weight ∑_v∈ S w(v) such that G∖ S satisfies the property Π. In particular, parameterized approximation algorithms for the special case of has been recently considered in <cit.>. It would be interesting to adjust Sampling with a Black Box to the weighted setting. For example, such a result may improve the running times of <cit.> for . 
In <cit.> the authors applied a rounding procedure over weights in order to utilize the (approximate) monotone local search technique of <cit.> in a weighted setting. Intuitively, a similar approach may also be useful for Sampling with a Black Box. In this paper we designed exponential time parameterized approximation algorithms for vertex deletion problems. For many of the considered problems, such as and 3, it is known that, assuming the Exponential Time Hypothesis (ETH), there is no sub-exponential time parameterized (exact) algorithms (see, e.g. <cit.>). However, it is less clear whether the considered problems admit sub-exponential time parameterized approximation algorithms for approximation ratios close to 1. The existence of strictly sub-exponential parameterized approximation algorithms for for certain approximation ratio has been rules out, assuming ETH, in <cit.>. However, we are not aware of a result which rules out a c^o(k)· n^(1) parameterized (1+)-approximation for (or other vertex deletion problem) for a some constant >0. It would be interesting to explore whether recent tools in parameterized inapproximability, possibly together with the stronger Gap Exponential Time Hypothesis (GAP-ETH) <cit.>, can lead to such a result. plain § PROBLEM DEFINITIONS In this section we will give formal definitions of the problems mentioned in this paper. Recall that a feedback vertex set of a graph G is a subset of its vertices S ⊆ V(G) such that G ∖ S is acyclic. ()A graph G, an integer k.Does G have a feedback vertex set of size at most k?𝒢 is the set of all graphs, Π is the set of all graphs that have no cycles. ()A graph G, an integer k.Is there a set of vertices S ⊆ V(G) of size at most k such that G∖ S has pathwidth at most 1?𝒢 is the set of all graphs and Π is the set of all graphs with pathwidth at most 1 (i.e. the set of caterpillar graphs). ℓA graph G, an integer k.Is there a set of vertices S ⊆ V(G) of size at most k such that every path of length ℓ contains a vertex from S?𝒢 is the set of all graph and Π is the set of graphs with maximum path length ℓ - 1. dA universe U, a set system 𝒮 over U where each set S ∈𝒮 has size at most d, an integer k.Is there a set W ⊆ U of size at most k such that W ∩ S ≠∅ for all S ∈𝒮?𝒢 is the set of all hypergraphs with edge cardinality 3 and Π is the set of all edgeless hypergraphs. A tournament graph G, an integer k.Is there a set of vertices S ⊆ G of size at most k such that G ∖ S has no directed cycles?𝒢 is the set of all tournament graphs and Π is the set of all tournaments that are cycle free. A graph G with maximum degree 3, an integer kIs there a set of vertices S ⊆ V(G) of size at most k such that such that every edge of G has at least one vertex from S?𝒢 is the set of all graphs with maximum degree 3 and Π is the set of all edgeless graphs. § PROBABILISTIC CONCEPTS Let xy denote the Kullback-Leibler divergence between two Bernoulli distributions with parameters x and y, i.e. xy x·ln( x/y) + (1-x) ·ln( 1 - x/1 - y) = x ·ln(x/1 - x·1 - y/y) + ln( 1 - x/1 - y). In our analysis of procedures, we need the following technical result which is a special case of Theorem 11.1.4 in <cit.>. For any 0 ≤ p ≤ 1 and integer x ≥ 1, let (x,p) denote the binomial random variable with success probability p and number of trials x. For any 0 ≤ y ≤ x, it holds that ( (x,p) ≥ y ) ≥ (x + 1)^-2·exp( -x ·y/xp). Moreover, we also need the following lemma, which we use to lower bound the tail probability of the sum of certain random variables. 
lemmaproblowerbound Let X_1,…, X_n∈{0,1} be random variables, let p∈ [0,1] and assume that ( X_j =1 | X_1 = x_1,…,X_j-1 = x_j - 1)≥ p for all j∈ [n] and ( x_1, …, x_j-1) ∈{0,1}^j-1. Then, for every w∈ℝ it holds that ( ∑_j=1^n X_j ≥ w) ≥((n,p) ≥ w). Let Y_1,…, Y_n be n independent Bernoulli random variables with (Y_j=1) = p for every j∈[n]. Also, define Q_ℓ = ∑_j=ℓ^n Y_ℓ and S_ℓ = ∑_j=ℓ^n X_ℓ for every ℓ∈ [n+1]. By definition, Q_n+1=S_n+1=0. Furthermore, the distribution of Q_1 is (n,p). For every ℓ∈ [n+1] and w∈ℝ it holds that (S_ℓ≥ w | _ℓ-1) ≥(Q_ℓ≥ w). We prove the claim by reverse induction over the value of ℓ. Base case: Let ℓ =n+1. Then S_ℓ=0=Q_ℓ. Therefore (S_ℓ≥ w | _ℓ-1) = (Q_ℓ≥ w). Induction step: assume the induction hypothesis holds for ℓ+1∈ [n+1]∖{1}. Let w∈ℝ. Then, (S_ℓ≥ w | _ℓ-1 ) = [ _X_ℓ=1·_S_ℓ+1≥ w-1+_X_ℓ=0·_S_ℓ+1≥ w | _ℓ-1] = [ _X_ℓ=1·[ _S_ℓ+1≥ w-1 | _ℓ]+_X_ℓ=0·[ _S_ℓ+1≥ w | _ℓ] | _ℓ-1] ≥ [ _X_ℓ=1·(Q_ℓ+1≥ w-1 )+_X_ℓ=0·(Q_ℓ+1≥ w) | _ℓ-1] = (X_ℓ=1 | _ℓ-1) ·(Q_ℓ+1≥ w-1 ) + (X_ℓ=0 | _ℓ-1)·(Q_ℓ+1≥ w), where the second equality follows from the tower property and the inequality holds by the induction hypothesis. By (<ref>) we have, (S_ℓ≥ w | _ℓ-1 ) = (X_ℓ=1 | _ℓ-1) ·(Q_ℓ+1≥ w-1 ) + (X_ℓ=0 | _ℓ-1)·(Q_ℓ+1≥ w) ≥ p·(Q_ℓ+1≥ w-1 ) + (1-p)·(Q_ℓ+1≥ w) = (Y_ℓ=1) ·(Q_ℓ+1≥ w-1 | Y_ℓ=1) + (Y_ℓ=0) ·(Q_ℓ+1≥ w | Y_ℓ=0) = (Q_ℓ≥ w). The first inequality holds as (X_ℓ=1 | _ℓ-1) ≥ p and (Q_ℓ+1≥ w-1 ) ≥(Q_ℓ+1≥ w). The second equality holds a (Y_ℓ=1)=p and since Y_ℓ and Q_ℓ+1 are independent. Thus, we proved the induction hypothesis holds for ℓ and completed the proof. By <Ref>, for every w∈ℝ it holds that (∑_j=1^n X_n ≥ w ) = (S_1 ≥ w) ≥ (Q_1 ≥ w) = ((n,p) ≥ w). We can combine <ref> to obtain the following result. * If δ = 1, observe that ( ∑_j = 1^δ· tξ_j ≥ t) = ( ∑_j = 1^tξ_j ≥ t) ≥( ( t, ν) ≥ t )by <ref> = ( ( t, ν) = t ) = ν^t = exp(ln( ν) )^t = exp(-δ·1/δν)^t where the last step holds because 1ν = ln( 1/ν). Now suppose that δ > 1 and let T_δ= 1/δ-1. For t ≥ T_δ we have ( ∑_j = 1^δ· tξ_j ≥ t) ≥( ( δ· t, ν) ≥ t )by <ref> ≥( ( δ· t, ν) = t ) ≥( δ· t + 1 ) ^-2·exp( - δ· t·t/δ· tν), where the last step follows from <ref>. There exists a constant r > 0 that depends on δ such that for each t ≥ T_δ it holds that ( δ· t + 1 ) ^-2·exp( - δ· t·t/δ· tν) ≥( δ· t + 1 ) ^-r·exp( - δ· t ·1/δν). Let us define h(x) 1/x·xν. Then we have ( δ· t + 1 )^-2·exp( - δ· t·t/δ· tν) = ( δ· t + 1 )^-2·exp( -t · h( t/δ· t) ). Since h is a differentiable function on ( 1/δ, t/δ· t), we can approximate the value of h( t/δ· t) by h( t/δ· t) = h( 1/δ) + ( t/δ· t - 1/δ) · h'(w) for some w ∈( 1/δ, t/δ· t), using Mean Value Theorem. Note that t/δ· t - 1/δ = δ· t - δ· t/δ·δ· t ≤1/δ·δ· t < 1/δ·( δ· t - 1 ) < 1/δ· t < δ - 1/δ, where (<ref>) and hold because t ≥ T_δ≥1/δ - 1, which implies that δ· t - t ≥ 1 δ· t - 1 ≥ t. By (<ref>), the value of h'(w) is upper bounded by C_δmax_w'∈(1/δ, δ-1/δ) h'(w'). Note that C_δ only depends on δ hence it is a constant. Finally, by (<ref>) and (<ref>), it holds that t · h( t/δ· t) = t · h( 1/δ) + t ·( t/δ· t - 1/δ) · h'(w) < t · h( 1/δ) + 1/δ· C_δ. Therefore, exp( - δ· t·t/δ· tν) = exp( -t · h( t/δ· t) ) (<ref>)≥exp(-t · h( 1/δ) ) ·exp( -1/δ· C_δ) ≥exp(-δ· t ·1/δν) ·( δ· T_δ + 1 )^-r+2 where r > 2 is a large enough constant that depends on δ such that exp( 1/δ· C_δ) ≤( δ· T_δ + 1 )^r - 2. Finally, by <ref> and (<ref>), there exist r,T_δ > 0 that depend on δ such that for t ≥ T_δ we have ( ∑_j = 1^δ· tξ_j ≥ t) ≥( δ· t + 1 ) ^-r·exp( - δ· t ·1/δν). 
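As an aside, the binomial tail lower bound stated at the beginning of this appendix is easy to sanity-check numerically. The following short Python script (ours, purely illustrative and not part of the formal development) verifies the inequality P(Bin(n,p) >= y) >= (n+1)^(-2) * exp(-n * KL(y/n || p)) on a small instance.

import math

def bernoulli_kl(a: float, b: float) -> float:
    """Kullback-Leibler divergence between Bernoulli(a) and Bernoulli(b), with 0*log 0 = 0."""
    out = 0.0
    if a > 0:
        out += a * math.log(a / b)
    if a < 1:
        out += (1 - a) * math.log((1 - a) / (1 - b))
    return out

def binom_tail(n: int, p: float, y: int) -> float:
    """Exact P(Bin(n, p) >= y) by direct summation."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(y, n + 1))

if __name__ == "__main__":
    n, p = 40, 0.25
    for y in range(1, n + 1):
        assert binom_tail(n, p, y) >= (n + 1) ** (-2) * math.exp(-n * bernoulli_kl(y / n, p))
    print("lower bound verified for n =", n, "and p =", p)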
§ TECHNICAL CLAIMS It holds that ∂/∂ δ (ln( 1/ϕ(δ,q))) = ln( 1 - 1/δ/1 - q) and ∂/∂ δ s_q(δ) = (α - 1) ·ln( 1 -q/1 - 1/δ) + ln( δ· q ) + ln(c)/(δ - α)^2. It holds that ∂/∂ a ab= ln(a/1 - a·1 - b/b). Therefore, using the product rule for the derivative, we get ∂/∂ δ (ln( 1/ϕ(δ,q))) = ∂/∂ δ ( δ·1/δq) = 1/δq + δ·ln( 1/δ/1 - 1/δ·1-q/q) ·( -1/δ^2) by (<ref>) = 1/δ·ln( 1/δ/q·1 - q/1 - 1/δ) + ln( 1 - 1/δ/1 - q) -1/δ·ln( 1/δ/1 - 1/δ·1 - q/q) = ln( 1 - 1/δ/1 - q), therefore (<ref>) holds. Similarly, by using the quotient rule for the derivative, we get ∂/∂ δ s_q(δ) = ∂/∂ δ (ln( 1/ϕ(δ,q)) - ln(c)/δ - α) = ( ∂/∂ δ ln( 1/ϕ(δ,q))) · (δ - α) - ( ln( 1/ϕ(δ,q)) - ln(c) )/(δ - α)^2 = (δ - α) ·ln( 1 - 1/δ/1 - q) - ln( 1/ϕ(δ,q)) + ln(c)/(δ- α)^2. by (<ref>) Using the definition of ϕ(δ,q), we further have ∂/∂ δ s_q(δ) = (δ - α) ·ln( 1 - 1/δ/1 - q) - δ·1/δq + ln(c)/(δ- α)^2 = (δ - α) ·ln( 1 - 1/δ/1 - q) - δ·( 1/δ·ln( 1/δ· q) + ( 1 - 1/δ)·ln( 1 - 1/δ/1 - q) ) + ln(c)/(δ- α)^2 = (δ - α - δ + 1) ·ln( 1 - 1/δ/1 -q) - ln( 1/δ· q) + ln(c)/(δ - α)^2 = (α - 1) ·ln( 1 -q/1 - 1/δ) + ln( δ· q ) + ln(c)/(δ - α)^2. Next, we state an equivalence which will be used frequently in the following section. It holds that exp( ln(c) + (β - α) · s_q(β) ) = exp( β·1/βq). The proof simply follows by substituting: exp( ln(c) + (β - α) · s_q(β) ) = exp( ln(c) + (β - α) ·ln( 1/ϕ(δ,q)) -ln(c)/(β - α)) = exp(ln( 1/ϕ(δ,q))) = exp( β·1/βq). § OMITTED PROOFS * Let A ∈_Π(G) such that A = _Π(G). It holds that (G ∖ v) ∖(A ∖{v}) = G ∖(A ∪{v}) ∈Π because Π is hereditary and G ∖(A ∪{v}) is a vertex induced subhypergraph of G ∖ A, which belongs to Π by the definition of A. Therefore, A ∖{v} is a solution for G ∖ v and _Π(G ∖ v ) ≤ A ∖{v} ≤_Π(G). Similarly, let X ∈_Π(G ∖ v) such that X = _Π( G ∖ v ). We have G ∖( X ∪{v}) = (( G ∖ v ) ∖ X )∈Π by definition of X. Therefore, ( X ∪{v}) ∈_Π( G ) and we have _Π( G ) ≤X ∪{v}≤X + 1 ≤_Π( G ∖ v ) + 1. Finally, (<ref>) and (<ref>) together imply the lemma. Now, we provide the previously omitted proof of <ref>. For the sake of completeness, we restate the lemma below. * By <ref>, the sign of the derivative of s_q(δ), i.e. (∂/∂ δ s_q(δ)), agrees with the sign of Γ_q(δ). Observe that the only term in Γ_q(δ) that depends on δ is α·1/α1/δ. Consider the values of δ such that δ≥α, which implies that 1/δ≤1/α. Since 1/αx is a strictly decreasing function for x ≤1/α, and 1/δ is a strictly decreasing function of δ, it follows that 1/α1/δ is a strictly increasing function of δ for δ≥α. Furthermore, observe that Γ_q() = 0. Therefore, (∂/∂ δ s_q(δ)) = ( Γ_q(δ) ) < 0 for α≤δ <. Similarly, (∂/∂ δ s_q(δ)) = ( Γ_q(δ) ) > 0 for < δ≤1/q. Therefore, s_q(δ) is strictly decreasing for α≤δ≤ and strictly increasing for ≤δ≤1/q. Now, assume that α > 1, and consider the values of δ such that δ≤α, which implies that 1/δ≥1/α. Since 1/αx is a strictly increasing function of x for x ≥1/α, it holds that 1/α1/δ is a strictly decreasing function of δ for δ≥α. We also have that Γ_q() = 0. Therefore, it holds that (∂/∂ δ s_q(δ)) = ( Γ_q(δ) ) >0 for 1 ≤δ < and (∂/∂ δ s_q(δ)) = ( Γ_q(δ) ) < 0 for < δ≤α. Therefore, s_q(δ) is a strictly increasing function for 1 ≤δ≤ and a strictly decreasing function for ≤δ≤α. Therefore the lemma holds. Next we present the missing proof of <ref>. For completeness, we again state the lemma here. * By the conditions of the lemma it holds that ∈∩ [1,1/q]. By <Ref> the function s_q(δ) is increasing in [1,] and decreasing in [, α]. Therefore, max_δ∈∩ [1,1/q] s_q(δ) = s_q(). Let us now give the omitted proof of <ref>. 
* Since (δ - α)^2 > 0, by (<ref>) it holds that ∂/∂ δ s_α,c,q(δ) = 0 if and only if (α - 1) ·ln( 1 -q/1 - 1/δ) + ln( δ· q ) + ln(c) = 0. Moreover, (∂/∂ δ s_α,c,q(δ)) = ((α - 1) ·ln( 1 -q/1 - 1/δ) + ln( δ· q ) + ln(c)). where we let Ψ denote Ψ (α - 1) ·ln( 1 -q/1 - 1/δ) + ln( δ· q ) + ln(c) for for the sake of presenting the following material. We have Ψ = (α - 1) ·ln( 1 -q/1 - 1/δ) + ln( δ· q ) + ln(c) = -(α - 1) ·ln( 1 - 1/δ/1 - q) -ln(1/δ/q) + ln(c) = -α·(( 1 - 1/α)·ln( 1 - 1/δ/1 - q) + 1/α·ln(1/δ/q) ) + ln(c) = -α·( 1/α·ln( 1/δ/q·1 - q/1 - 1/δ) + ln( 1 - 1/δ/1 - q) ) + ln(c). Next, by dividing and multiplying the term inside the logarithm by the same value, we get Ψ = -α·[1/α·ln(1/α/1 - 1/α·1 - q/q·1 - 1/α/1/α·1/δ/1 - 1/δ) + ln(1 - 1/α/1 - q·1 - 1/δ/1 - 1/α)] + ln(c) = -α·[1/α·ln(1/α/1 - 1/α·1 - q/q) + ln(1 - 1/α/1 - q)] - α·[ 1/α·ln(1 - 1/α/1/α·1/δ/1 - 1/δ) + ln( 1 - 1/δ/1 - 1/α)] + ln(c) = -α·1/αq + α·1/α1/δ + ln(c) where the last step follows from (<ref>). Finally, the lemma holds by (<ref>), (<ref>) and (<ref>).
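To close this appendix, the optimization behind the running-time bound is easy to reproduce numerically. The following Python sketch (ours, purely illustrative; all names are our own) minimizes the exponent of f̃(δ, q) by grid search over δ ∈ [β, 1/q] for the case β > α and compares the result with the closed formula derived above for α = 1. The parameter values are the 3-Hitting Set numbers (α = 1, c = 2.076, q = 1/3), for which both computations should report, for instance, a base of roughly 1.78 at β = 1.2.

import math

def kl(a: float, b: float) -> float:
    """Kullback-Leibler divergence between Bernoulli(a) and Bernoulli(b)."""
    out = 0.0
    if a > 0:
        out += a * math.log(a / b)
    if a < 1:
        out += (1 - a) * math.log((1 - a) / (1 - b))
    return out

def log_inv_phi(delta: float, q: float) -> float:
    """ln(1 / phi(delta, q)) = delta * KL(1/delta || q)."""
    return delta * kl(1.0 / delta, q)

def base_grid_search(alpha: float, beta: float, c: float, q: float,
                     steps: int = 100_000) -> float:
    """Grid-search min of f~(delta, q) over delta in [beta, 1/q], assuming alpha < beta <= 1/q."""
    lo, hi = beta, 1.0 / q
    best = float("inf")
    for i in range(steps + 1):
        d = lo + (hi - lo) * i / steps
        val = ((d - beta) * math.log(c) + (beta - alpha) * log_inv_phi(d, q)) / (d - alpha)
        best = min(best, val)
    return math.exp(best)

def base_closed_form_alpha_one(beta: float, c: float, q: float) -> float:
    """Closed formula for alpha = 1, where the optimal delta equals 1/(c*q)."""
    if beta < 1.0 / (c * q):
        return c * ((1 - c * q) / (1 - q)) ** (beta - 1)
    return math.exp(beta * kl(1.0 / beta, q))

if __name__ == "__main__":
    alpha, c, q = 1.0, 2.076, 1.0 / 3.0          # 3-Hitting Set setting
    for beta in (1.2, 1.445, 1.8, 2.5):
        print(beta,
              round(base_grid_search(alpha, beta, c, q), 4),
              round(base_closed_form_alpha_one(beta, c, q), 4))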
http://arxiv.org/abs/2407.12354v1
20240717071408
Invertible Neural Warp for NeRF
[ "Shin-Fang Chng", "Ravi Garg", "Hemanth Saratchandran", "Simon Lucey" ]
cs.CV
[ "cs.CV" ]
Shin-Fang et al. Adelaide University Australian Institute for Machine Learning shinfang.chng@adelaide.edu.au <https://sfchng.github.io/ineurowarping-github.io/> Invertible Neural Warp for NeRF Shin-Fang ChngRavi Garg Hemanth Saratchandran Simon Lucey July 22, 2024 =============================================================== § ABSTRACT This paper tackles the simultaneous optimization of pose and Neural Radiance Fields (NeRF). Departing from the conventional practice of using explicit global representations for camera pose, we propose a novel overparameterized representation that models camera poses as learnable rigid warp functions. We establish that modeling the rigid warps must be tightly coupled with constraints and regularization imposed. Specifically, we highlight the critical importance of enforcing invertibility when learning rigid warp functions via neural network and propose the use of an Invertible Neural Network (INN) coupled with a geometry-informed constraint for this purpose. We present results on synthetic and real-world datasets, and demonstrate that our approach outperforms existing baselines in terms of pose estimation and high-fidelity reconstruction due to enhanced optimization convergence. § INTRODUCTION NeRF <cit.> has recently emerged as a compelling approach for synthesizing photorealistic images from novel views. NeRF employs a multi-layer perceptron (MLP) to model a volumetric representation of a 3D scene. It operates by minimizing the photometric loss, which is the discrepancy between rendered images and actual images. NeRF's ability to reconstruct high-fidelity signals, coupled with its memory efficiency, has propelled its adoption across a wide array of applications <cit.>, demonstrating its significant impact and versatility. One of the primary challenges with NeRF is the requirement for precisely known camera poses for each captured image. To address this challenge, several approaches such as BARF <cit.>, NeRFmm <cit.>, and GARF <cit.> have been developed. These methods facilitate the simultaneous optimization of the NeRF and the camera poses, using a compact, six-dimensional vector to represent the camera poses efficiently. However, this compact parameterization, while prevalent in contemporary structure from motion (SfM) literature <cit.>, has been shown to struggle with poor basin of convergence when solved simultaneously with a NeRF <cit.>. Drawing wisdom from machine learning, where overparameterization has been recognized as a catalyst for enhanced optimization convergence in modern deep neural networks <cit.>, this paper explores the potential of pose overparameterization for simultaneous pose and neural field estimation. Our approach: In traditional NeRF setups, accurately known extrinsic camera pose, comprising of a global rotation and translation for each image, are used to explicitly map pixel coordinates and the camera center to determine the viewing rays in a global world coordinate system <cit.>. Following the warping operation, the colors and volume densities along each ray in the world coordinate space is manipulated individually through a photometric loss function. In this paper, we explore scenarios where camera poses are not known. Specifically, we propose using a neural network to model the rigid warp function of ray. 
While it may seem counterintuitive to replace a succinct pose function with a more complex MLP, we argue that the enhanced convergence properties of such overparameterization <cit.> – in conjunction with the right constraint and prior – outweigh the increased functional flexibility. Additionally, we highlight the critical role of enforcing invertibility when learning rigid warps using an MLP. [Our use of invertibility strictly adheres to the well-established mathematical definition. Let f be a function whose domain is 𝒳 and codomain is 𝒴. f is invertible iff there exists a function g from 𝒴 to 𝒳 such that g(f(x))=x ∀ x ∈𝒳 and f(g(y))=y ∀ y ∈𝒴 <cit.>. We use bijective and invertible interchangeably throughout our paper. ] To achieve an approximate bijective solution, one remedy is to use an auxillary network to represent the backward warp; however this will introduce computational overhead. To this end, we propose explicitly modeling inversions in the neural network architectures, formally learning an Invertible Neural Network (INN). Our results demonstrate that opting for an architecture that is explicitly invertible is more effective for jointly optimizing both the pose and radiance field, outperforming existing strong baselines <cit.>. Notably, our INN-based approach achieves an improvement of over 50% in pose accuracy when compared to the standard SE3 parameterization <cit.>. § RELATED WORKS §.§ Joint NeRF and pose estimation Despite NeRF demonstrating compelling results in novel view synthesis, NeRF requires accurate camera poses. The differentiable nature of volume rendering used in NeRF facilitates the backpropagation through the scene representation to update the camera poses. NeRFmm <cit.> demonstrates the possibility of optimizing camera poses within the NeRF framework. BARF <cit.> introduces a coarse-to-fine positional encoding scheduling to improve the joint optimization of NeRF and camera poses, and remains a widely adopted method. GARF <cit.> and SiNeRF <cit.> advocate for leveraging the smoothness inherent in non-traditional activations to mitigate the noisy gradients due to high frequencies in positional embeddings. In contrast, NoPe-NeRF <cit.> uses monocular depth prior as a geometry prior to constrain the relative poses. SPARF <cit.> and SCNeRF <cit.> demonstrate that using keypoint matches or dense correspondence can constrain the relative pose estimates with ray-to-ray correspondence losses. Park  <cit.> proposes a preconditioning strategy to enhance camera pose optimization. DBARF <cit.> proposes using low-frequency feature maps to address the joint optimization problem for generalizable NeRF. Bian  <cit.> proposes a pose residual field which learn the pose corrections to refine the initial camera pose for neural surface reconstruction. Closest to our approach is a very recent work L2G by Chen  <cit.>, which tackles the camera pose representation using an overparameterization strategy. However, we achieve overparameterization in different manner. Unlike L2G which learn an MLP to predict rigid SE(3) transformations, we propose using an MLP to model the rigid warp function between the pixel and the ray space. Our work argues that while overparameterization can be achieved in different manner, it is tightly coupled with the regularization and constraints imposed. In our case, invertibility of warps becomes an essential constraint. 
§.§ Overparameterization Overparameterization in deep learning involves employing models with a substantially greater number of parameters than the quantity of training data. Recent research <cit.> demonstrates that neural networks, when overparameterized, can effectively generalize to new, unseen data, noting that an increase in parameters often correlates with a decrease in test error. Moreover, <cit.> revealed that this ability to generalize does not necessarily require explicit regularization, suggesting that the optimization process of overparameterized neural networks inherently prefers solutions that are more likely to generalize well. Additionally, <cit.> illustrates that overparameterized networks are capable of consistently finding a global minimum through gradient-based optimization methods. These insights highlight the remarkable ability of overparameterized neural networks to deliver accurate predictions on unseen data, emphasizing their robustness and efficacy in generalizing beyond the training dataset. §.§ Invertible Neural Networks for Deformation Fields Recent advancements in Invertible Neural Networks (INNs) <cit.> have broadened their use in the field of 3D deformation. These networks are particularly useful in modeling homeomorphic deformation, where the mapping between any two frames is bijective and continuous. Because of this capability, they are now being used in various areas, particularly for modeling deformation in the spatial <cit.> and temporal <cit.> domains. § METHODOLOGY We define the mathematical notations for the camera operations and the joint camera pose estimation in <ref>. Further, we outline our approach in <ref>. §.§ Bundle-Adjust NeRF Preliminaries §.§.§ Camera pose We consider a set of T input images {ℐ_t}_t=1^T taken by a camera with intrinsic matrix K ∈ℝ^3 × 3. For each camera t corresponding to these images, BARF-style approaches <cit.> define its camera-to-world (C2W) pose as P_t = (𝐑_t, 𝐭_t) ∈ SE(3), where 𝐑_t ∈ SO(3) and 𝐭_t ∈ℝ^3. §.§.§ Camera projection For any vector 𝐱∈ℝ^l of dimension l, we define its homogeneous representation 𝐱̅∈ℝ^l+1 as 𝐱̅ = [𝐱^T, 1]. We define π as the camera projection operator, which maps a 3D point in the camera coordinate frame, denoted as 𝐱^(C)∈ℝ^3, to a corresponding 2D pixel coordinate 𝐮∈ℝ^2. π^-1 denotes the camera backprojection that maps a pixel coordinate 𝐮 and depth z to a 3D point 𝐱^(C) in the camera coordinate frame, e.g., π(𝐱^(C)) ≅ K𝐱^(C) and π^-1(𝐮, z) = z K^-1𝐮̅. We use ^(C) and ^(W) to denote quantities defined in the camera and world coordinate systems, respectively. §.§.§ NeRF NeRF represents the volumetric field of a 3D scene as f(γ(𝐱), γ(𝐝)) → (𝐜,σ), which maps a 3D location 𝐱∈ℝ^3 and a viewing direction 𝐝 to an RGB color 𝐜∈ℝ^3 and volume density σ∈ℝ. γ: ℝ^3 →ℝ^3+6L is the positional embedding function with L frequency bases <cit.>. This function is parameterized using an MLP as f_Θ_rgb. Given T input images {ℐ_t}_t=1^T with corresponding camera poses {P_t}_t=1^T, NeRF is optimized by minimizing the photometric loss ℒ_rgb between the synthesized images Î and the original images ℐ as min_Θ_rgb ∑_t=1^T∑_𝐮∈ℝ^2 ‖Î(𝐮, P_t; Θ_rgb) - ℐ_t(𝐮)‖_2^2. §.§.§ Volume rendering For simplicity, let's start by assuming the rendering operation of NeRF operates in the camera coordinate system. We will generalize this later. Each pixel coordinate determines a viewing direction 𝐝 in the camera coordinate system, whose origin is the camera center of projection 𝐨^(C).
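To make these operators concrete, the following is a minimal sketch of the projection, backprojection, and per-pixel ray construction described above, assuming a standard pinhole model; the intrinsics, pose, pixel, and depth values are illustrative placeholders rather than anything used in the paper.

```python
import numpy as np

def project(x_cam, K):
    """pi: map a 3D point in camera coordinates to a 2D pixel coordinate."""
    uv_h = K @ x_cam                      # homogeneous pixel coordinate (defined up to scale)
    return uv_h[:2] / uv_h[2]

def backproject(u, z, K):
    """pi^-1: map a pixel coordinate u and depth z to a 3D point in camera coordinates."""
    u_bar = np.array([u[0], u[1], 1.0])   # homogeneous representation of the pixel
    return z * np.linalg.inv(K) @ u_bar

def ray_points(u, depths, K, R, t):
    """Sample 3D points along the ray through pixel u and map them to world coordinates
    with a camera-to-world pose P = (R, t)."""
    pts_cam = np.stack([backproject(u, z, K) for z in depths])  # (M, 3); o^(C) is the origin
    return pts_cam @ R.T + t              # rigid transform of each point to the world frame

# Illustrative intrinsics and pose (identity pose, arbitrary focal length and pixel).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = ray_points(np.array([100.0, 150.0]), np.linspace(2.0, 6.0, 8), K, R, t)
print(pts.shape)  # (8, 3)
```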
We can define a 3D point along the camera ray associated with 𝐮 sampled at depth z_i,u as 𝐫^(C)(z_i,u) = 𝐨^(C) + z_i,u K^-1𝐮̅. [This can be succinctly written as 𝐫^(C)(z_i,u) = z_i,u𝐝 as 𝐨^(C) is [0,0,0]^T in camera coordinate space.] To render the colour Î(𝐮) at pixel coordinate 𝐮, we sample M discrete depth values along the ray between a near bound z_n and a far bound z_f. For each sampled value, we query the NeRF f_Θ_rgb to obtain the corresponding radiance fields. The output from NeRF is then aggregated to render the RGB colour as ℐ̂(𝐮) = ∫_z_n^z_f T(𝐮, z) σ(𝐫(z)) 𝐜(𝐫(z)) δ z, where T(𝐮, z) = exp( -∫_z_n^z σ(𝐫(z')) δ z' ) denotes the accumulated transmittance value along the ray. We refer readers to <cit.> for more details of the volume rendering operation. In practice, <ref> is approximated using M points sampled along the ray at depths {z_i}_i=1^M, which produces radiance field outputs {𝐲_i}_i=1^M. Denoting the ray compositing function in <ref> as g(.): ℝ^4M→ℝ^3, we can rewrite ℐ̃(𝐮) = g( {𝐲_i}_i=1^M). Finally, given a camera pose P, we can then transform the ray 𝐫^(C) to the world coordinates to obtain 𝐫^(W) through a 3D rigid transformation 𝒯. The rendered image is then obtained as Î(𝐮, P) = g( { f(𝒯( 𝐫^(C)(z_i), P); Θ_rgb) }_i=1^M ). §.§.§ Joint optimization of pose and NeRF Prior works <cit.> demonstrate that it is feasible to optimize both camera pose and NeRF by minimizing ℒ_rgb in <ref>. This is achieved by considering P as optimizable parameters, see <ref> for its parameterization. Consequently, the ray 𝐫 is now dependent on the camera pose. Mathematically, this joint optimization can be rewritten as min_P, Θ_rgb ∑_t=1^T∑_𝐮∈ℝ^2 ‖Î( G( 𝐫^(C)(z, 𝐮̅); P ); Θ_rgb) - ℐ_t(𝐮) ‖_2^2, where G(.; P) = 𝒯(.; P) denotes the pose-dependent rigid warp of the ray. §.§ Invertible Neural Warp for Ray Transform We propose to overparameterize P using an Invertible Neural Network (INN). There exist two options for parameterization: (i) use a separate INN for each camera P_t; (ii) use a single INN that is shared across all frames, coupled with a learnable code that is unique to frame t. Drawing inspiration from the dynamic NeRF methods used for representing deformation fields <cit.> and also considering parameter efficiency, we have chosen to pursue the latter strategy for our proposed pose overparameterization – a single, globally shared neural network Θ_𝒲 across all frames, coupled with an optimizable per-frame latent code Φ_t ∈ℝ^D, see supp. (Sec. C) for the comparison of using multiple INNs versus a single INN. Consequently, we can rewrite G(.) in <ref> as h( 𝐫^(C); Θ_𝒲, Φ_t), where h(.): ℝ^3+D→ℝ^3. <ref> presents our approach. In our approach, we model each pixel in the camera coordinate system 𝐱_i,t^(C) as an individual ray. Our proposed INN is designed for transforming these rays from camera coordinates to world coordinates. Specifically, our proposed INN takes in the pixel coordinates 𝐱_i,t^(C) and camera center 𝐨_t^(C), both defined in the coordinate frame of camera t, coupled with the frame-dependent latent code, and outputs their corresponding equivalents 𝐱_i,t^(W) and camera center 𝐨_t^(W) in the world coordinates. §.§.§ Rigidity prior In our formulation, each pixel is represented as an individual ray within the camera coordinate system. This inherently relaxes the rigidity constraint. As a result, the output from the INN does not necessarily conform to a global rigid motion.
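The global rigid motion mentioned here can be recovered in closed form from camera-world point correspondences. Below is a minimal SVD-based (Kabsch-style) sketch of such a rigid registration, shown for equally weighted, noise-free correspondences and without scale estimation; the Umeyama-based solver actually referenced later in the paper may differ in detail.

```python
import numpy as np

def fit_rigid(x_cam, x_world):
    """Closed-form least-squares rigid registration (Kabsch-style):
    find R, t minimizing sum_i || x_world_i - (R @ x_cam_i + t) ||^2.
    x_cam, x_world: (N, 3) arrays of corresponding points."""
    mu_c, mu_w = x_cam.mean(axis=0), x_world.mean(axis=0)
    H = (x_cam - mu_c).T @ (x_world - mu_w)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ S @ U.T
    t = mu_w - R @ mu_c
    return R, t

# Sanity check on synthetic correspondences related by a known rigid motion.
rng = np.random.default_rng(0)
x_cam = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
x_world = x_cam @ R_true.T + t_true
R_est, t_est = fit_rigid(x_cam, x_world)
assert np.allclose(R_est, R_true, atol=1e-8) and np.allclose(t_est, t_true, atol=1e-8)
```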
Given known camera-world correspondences (𝐱_i,t^(C), 𝐱_i,t^(W)), we can solve a closed-form rigid registration problem to determine a global pose, which can be integrated into our optimization problem as a rigidity prior ℒ_rigid = min_T^* ∑_i=1^L ‖𝐱_i,t^(C) - T^* ∘𝐱_i,t^(W)‖_2^2. §.§.§ Final optimization problem We solve our final optimization problem as min_Φ_t, Θ_𝒲, Θ_rgb ∑_t=1^T∑_𝐮∈ℝ^2 ‖Î( h( 𝐫^(C); Θ_𝒲, Φ_t); Θ_rgb) - ℐ_t(𝐮) ‖_2^2 + λℒ_rigid. §.§ Advantages of INN for Overparameterizing Rigid Ray Warps BARF-based approaches parameterize the camera pose P of each frame using an SE(3) representation (see <ref>), which guarantees that 𝒯 is a bijective mapping. Therefore, when overparameterizing camera poses, it is crucial that the neural network adheres to the bijection property, because this one-to-one correspondence ensures that there is a unique output in the world space for every point in the camera space. As we will demonstrate in <ref>, simply applying a rudimentary strategy (denoted as Naive) when overparameterizing the rigid warps of rays (camera-to-world) with an MLP often does not achieve convergence, see <ref> and <ref>. Consequently, to attain this bijective property using an MLP, it is necessary to introduce an auxiliary network to model the backward warps (Implicit-Invertible MLP). While effective, it presents a significant drawback: it results in a twofold rise in the computational complexity due to the existence of the backward network to enforce the self-consistency. To mitigate this substantial increase in computational demands inherent in the modified MLP approach, we propose the use of INNs to parameterize these bijections. An INN implements the bijective mapping by composing affine transformations into several blocks. Within each block, the input coordinates are divided into two segments; the first part remains constant and is used to parameterize the transformation that is applied to the second part <cit.> (an illustrative sketch of one such coupling block is given after the conclusion). Besides their inherent invertibility, INNs also offer the advantage of being homeomorphic, which potentially facilitates a more flexible optimization trajectory that is less susceptible to suboptimal local minima, see <ref>. § EXPERIMENTS §.§ Baselines We compare our approach with two representative methods in pose-NeRF joint optimization: the standard global SE3-approach BARF <cit.>, and the overparameterized representation L2G <cit.>. For all experiments, we use the original implementations including their default settings for coarse-to-fine scheduling, architecture and hyperparameters, see supp. (Sec. A) for more details. Additionally, in our 2D planar experiments, we include a comparison with two other variants, the Naive (MLP) and the Implicit-Invertible MLP, to execute the ray transform. This comparison is specifically designed to highlight the significance of invertibility when employing MLPs for executing the ray transformation. §.§.§ Local-to-global (L2G) <cit.> L2G uses an MLP to predict a rigid SE3 transformation for each ray. These predicted transformation parameters are then used to analytically estimate the transformed coordinates in the world space. §.§.§ Naive This is the simplest version of our baseline that uses one primary network, denoted as h_fwd: (𝐱^(C), Φ_t) →𝐱^(W), to learn the forward mapping, which takes in the coordinates from the camera space (coupled with a per-frame latent code) and outputs the corresponding coordinates in the world space. §.§.§ Implicit-Invertible MLP We use two networks h_fwd and h_bwd to enforce approximate invertibility.
Alongside primary network h_fwd, we use a secondary network h_bwd :(𝐱^(W), Φ_t) →𝐱̂^(C) to invert the outputs from the primary network h_fwd. To minimize deviations from bijections, we introduce a regularization term ℒ_implicit as 𝐱^(C) - 𝐱̂^(C)_2^2 into the optimization problem <ref>. §.§.§ Explicit-Invertible INN (Ours) We explicitly model inversions in the neural network architecture by formally learning an INN. We have chosen to utilize architecture proposed by NDR-INN <cit.>, see supp. (Sec. A.1) for the architecture details that we use for all our experiments. §.§ 2D Planar Neural Image Alignment Following BARF <cit.>, we learn a 2D neural image field, for creating a homography-based panoramic image from N patches cropped from the original image, each generated with random homography perturbations. Specifically, we learn a 2D coordinate network f(Θ_rgb) to render the stitched image. Each pixel in the N training patch is warped using the estimated homography H to create the rendered image. We choose the “cat” image from ImageNet <cit.>. We initialize patch warps as identity and fix the gauge freedom by anchoring the first warp to align the neural image to the original image <cit.>. We randomly generate 20 different homography instances, with scale-noise parameter 0.1 and 0.2 for homography and translation, respectively. We solve <ref> for the homography using a Direct Linear Transform (DLT) solver [<https://github.com/kornia/kornia>]. §.§.§ Experiment settings. We evaluate our proposed method against BARF <cit.>, and our three overparameterized network variants: Naive, Implicit- and Explicit-Invertible MLPs, as detailed in <ref>. For both naive and Implicit-Invertible MLPs, we utilized a Leaky-ReLU MLP with five 256-dimensional hidden units, and a 16-dimensional latent code to represent the frame-dependent embeddings ϕ_t. We also follow the default coarse-to-fine scheduling established by BARF. In the case of Implicit-Invertible MLP, we use the same architecture both h_fwd and h_bwd. We use the Adam optimizer <cit.> to optimize for both the network weights Θ_rgb and Θ_𝒲. We set the learning rate for both Θ_rgb and Θ_𝒲 at 1 × 10^-3, with both decaying exponentially to 1 × 10^-4 and 1 × 10^-5, respectively. We set the weighting term for ℒ_rigid for both overparameterized MLPs to 1 × 10^2 and the consistency term ℒ_implicit for Implicit-Invertible MLP to 1 × 10^1. §.§.§ Robustness to noise perturbations  <ref> analyzes the robustness of BARF and our approach under different noise perturbations across 20 different homography instances. Each run began with the initialization of the homography using the “groundtruth”, and noise perturbations were gradually introduced, ranging from 0 to 0.3 to the translation component of the homography.  <ref> indicates that our representation exhibits a higher tolerance to noise compared to BARF. §.§.§ Results.  <ref> summarizes the statistical results from 20 runs, where we report the warp error and patch reconstruction error. We quantify the warp error in terms of corner error, defined as the L2 distance between the groundtruth corner position and estimated corner position, and PSNR as the metric to assess the reconstruction quality. We used 5-pixel threshold to define the success convergence. <ref> presents a qualitative result for a homography instance. 
By enforcing approximate invertibility, the Implicit-Invertible MLP demonstrates a significantly higher rate of successful convergence compared to the naive version of MLP, with success rates improved by 65%. We further show that by explicitly injecting invertibility into the architecture (Ours), the success rate is increased to 75%. This result reinforces that invertibility is crucial when learning rigid warp functions via overparameterization, and using an architecture that guarantees bijective property is effective in ensuring the pose converge to an optimal solution during joint optimization in practice. Henceforth, we will focus exclusively on using our INN-based approach for subsequent results. For completeness, we also compare with another setup, where each frame is parameterized with a neural network, see supp. (Sec. B.1). Interestingly, apart from the parameter efficiency, we find that using a single global neural network for all frames is sufficient to converge to a good pose solutions. We hypothesized that the difference may be attributed to the benefits of gradient sharing in the shared neural network setup. This result is aligned with the findings by Bian  <cit.>. Consequently, we adhere to the design where we use one single INN shared across all the frames, coupled with a frame-specific latent code for the rest of our experiments. §.§ Neural Radiance Fields (NeRF) In this section, we compare our proposed representation with BARF <cit.> and L2G <cit.>. We assume known intrinsics for all methods. We perform our experiments on both the LLFF <cit.> (<ref>), DTU <cit.> (<ref>) as well as Blender datasets in supp. (Sec. B2). We solve <ref> for the global rigid SE3 transformation using Umeyama algorithm [<https://github.com/naver/roma>]. §.§.§ Evaluation metrics. For pose estimation, we report the accuracy of the poses after globally aligning the optimized poses to the groundtruth <cit.>. [For our proposed method, we evaluate the estimated global poses <ref>.] We assess view synthesis using PSNR, SSIM and LPIPS. A standard procedure in view synthesis evaluation involves performing test-time photometric optimization on the trained models. This additional step is intended to factor out the pose errors, which may otherwise compromise the quality of the synthesized views <cit.>. This process is akin to a pose refinement method which minimises the photometric error on the synthesized image while keeping the trained NeRF model fixed. However, it is important to recognize that this pose correction may not accurately represent the initial accuracy of the methods in terms of pose estimation for view synthesis. Therefore, we opt to report the view synthesis quality both before and after the pose refinement step. On DTU, we extend our evaluation to include comparisons of rendered depth with ground-truth depth using mean depth absolute error, as well as reconstruction accuracy using Chamfer distance. For the Chamfer evaluation, we utilize the optimized poses estimated from all methods and employ a neural surface reconstruction algorithm called Voxurf <cit.> for geometry reconstruction. Further details on pose alignment and metrics computations in the supp. (Sec. A). §.§ Forward Facing Scenes: LLFF §.§.§ Experiment settings. The standard LLFF benchmark dataset <cit.> consists of eight real-world, forward-facing scenes captured using hand-held cameras. Following <cit.>, we initialize all camera poses with the identity transformations for all the methods. 
We employ the same training and testing split as in BARF <cit.>. We use the evaluation metrics described in <ref>. §.§.§ Implementation details. We train all methods for 200k iterations and randomly sample 2048 pixel rays at each optimization step <cit.>. We train without hierarchical sampling. We set the learning rate for Θ_rgb to 1 × 10^-3, decaying to 3 × 10^-4, and that for Θ_𝒲 to 5 × 10^-4, decaying to 1 × 10^-6. We also follow the default coarse-to-fine scheduling by BARF <cit.>, see supp. for full details. §.§.§ Results. Our approach achieves a substantial reduction of rotation errors (70% vs. BARF and 35% vs. L2G) and translation errors (50% vs. BARF and 20% vs. L2G). Additionally, the superior performance of both our method and L2G over BARF further highlights the merits of overparameterization for simultaneous pose and neural field estimation. This significant improvement in camera pose accuracy directly enhances the performance of view synthesis, as evident in <ref>. In particular, scenes such as trex and leaves show a significant improvement. <ref> illustrates the absolute error between the original image and the rendered image on these two scenes. Our approach presents the lowest misalignment error, as indicated by the darker areas in the error map. It is important to note, however, that while L2G and our approach appear comparable after test-time optimization, this refinement procedure has mitigated the pose estimation noise to improve PSNR, as discussed in <ref>. For more qualitative results and ablation studies, please refer to supp. (Sec. B). §.§ Homeomorphism perspective: A qualitative analysis We present a qualitative analysis of single-view pose estimation that sheds light on the empirical effectiveness of our approach compared to the L2G method <cit.>. Our method leverages the concept of an INN, which predicts homeomorphisms, that is, continuous, invertible transformations not limited to the rigid motions of the SE(3) group. Unlike the L2G method, which constrains pose estimation within the rigid bounds of SE(3), our INN-based approach embraces a broader spectrum of transformations. This grants the optimization process a higher degree of flexibility, allowing for a diverse range of optimization paths and facilitating a smoother trajectory towards the solution. To empirically validate our hypothesis, we conducted experiments using a trained NeRF model to estimate the camera pose relative to a 3D scene, aiming to minimize the photometric error between NeRF-rendered and actual observed images. Despite starting from the same initial pose (off by 20°), and employing a random sampling of 2048 rays per iteration for all methods, our INN approach outperformed the L2G method. The L2G method often converged to suboptimal poses, whereas the INN method achieved accurate pose estimation, as evidenced in <ref> where the NeRF-rendered images are well-aligned with the groundtruth. This success can be attributed to the INN's ability to predict a general homeomorphism, a transformation more general than a rigid transformation, which offers a significant advantage in navigating the optimization landscape more effectively and avoiding suboptimal local minima. §.§ 360° Scenes: DTU §.§.§ Experimental settings. We evaluated on 14 test scenes from DTU <cit.>. Following <cit.>, we synthetically perturb the ground-truth camera poses with 15% additive Gaussian noise, which corresponds to an average rotation and translation error of 15 and 70, respectively.
For a fair comparison, we used the same initialization for all methods. We refer the readers to supp. (Sec. B.5) for the results with Colmap initialization. §.§.§ Implementation details. As BARF <cit.> and L2G <cit.> have not been tested on the DTU dataset, and given that DTU encompasses 360° scenes similar to the original Blender dataset, we adopted the original hyperparameters used by the authors for training BARF and L2G on the Blender dataset. Following Bian <cit.>, we multiply the output of their local warp network by a small factor, which is α=0.01, for L2G <cit.>. For our approach, we set the learning rate for Θ_rgb to 1×10^-3, decaying to 1×10^-4, and that for Θ_𝒲 to start from 5×10^-4 and decay to 1×10^-8. We use the default coarse-to-fine scheduling by BARF <cit.>. §.§.§ Results. As demonstrated in <ref>, we outperform all baselines by a considerable margin across the majority of the sequences in pose accuracy. Overall, our approach achieves approximately a 50% improvement in rotation and a 60% improvement in translation over BARF. When compared to L2G, our method shows a 70% increase in accuracy for both rotation and translation. Additionally, our approach also consistently surpasses both baselines in geometry evaluation, as evidenced by the depth and reconstruction errors in <ref> and the qualitative results in <ref>. For the quantitative results for novel view synthesis and additional qualitative results, we refer the readers to supp. (Sec. B). § CONCLUSION In this paper, we examine the benefits of overparameterizing poses via an MLP in the joint optimization task of camera pose and NeRF. We establish that invertibility is a crucial property. We further show that using an Invertible Neural Network, inherently equipped with a guaranteed bijection property, significantly improves the convergence in pose optimization compared to existing representative methods. Acknowledgement We thank Chee-Kheng (CK) Chng for insightful discussions and technical feedback.
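The coupling structure referred to in the methodology (each block keeps one part of the coordinates fixed and uses it, together with the per-frame latent code, to parameterize an affine map of the remaining part) can be illustrated with a short sketch. The block below is illustrative only: the scale and shift functions are toy random single-layer maps rather than the NDR-INN architecture actually used in the experiments, and the latent-code dimension is a placeholder.

```python
import numpy as np

class AffineCoupling:
    """One affine coupling block: split x into (x1, x2), keep x1 fixed,
    and transform x2 with a scale/shift predicted from x1 and a latent code.
    The inverse is exact by construction."""

    def __init__(self, dim, code_dim, hidden=32, rng=None):
        rng = rng or np.random.default_rng(0)
        self.d1 = dim // 2                      # size of the part that stays fixed
        d_in, d_out = self.d1 + code_dim, dim - self.d1
        # Toy one-layer "networks" for scale and shift (placeholders for real MLPs).
        self.W1 = 0.1 * rng.normal(size=(d_in, hidden))
        self.Ws = 0.1 * rng.normal(size=(hidden, d_out))
        self.Wt = 0.1 * rng.normal(size=(hidden, d_out))

    def _scale_shift(self, x1, code):
        h = np.tanh(np.concatenate([x1, code]) @ self.W1)
        return np.tanh(h @ self.Ws), h @ self.Wt   # bounded log-scale, free shift

    def forward(self, x, code):
        x1, x2 = x[:self.d1], x[self.d1:]
        s, t = self._scale_shift(x1, code)
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y, code):
        y1, y2 = y[:self.d1], y[self.d1:]
        s, t = self._scale_shift(y1, code)         # same s, t because y1 == x1
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])

# Round-trip check: inverse(forward(x)) recovers x up to floating-point error.
block = AffineCoupling(dim=3, code_dim=16)
x, code = np.array([0.2, -0.7, 1.5]), np.zeros(16)
assert np.allclose(block.inverse(block.forward(x, code), code), x)
```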
http://arxiv.org/abs/2407.12097v1
20240716180539
Radio afterglows from tidal disruption events: An unbiased sample from ASKAP RACS
[ "Akash Anumarlapudi", "Dougal Dobie", "David L. Kaplan", "Tara Murphy", "Assaf Horesh", "Emil Lenc", "Laura N. Driessen", "Stefan W. Duchesne", "Ms. Hannah Dykaar", "Bryan M. Gaensler", "Timothy J. Galvin", "J. A. Grundy", "George Heald", "Aidan Hotan", "Minh Huynh", "James Leung", "David McConnell", "Vanessa A. Moss", "Joshua Pritchard", "Wasim Raja", "Kovi Rose", "Gregory R. Sivakoff", "Yuanming Wang", "Ziteng Wang", "Mark Wieringa", "M. T. Whiting" ]
astro-ph.HE
[ "astro-ph.HE" ]
Akash Anumarlapudi aakash@uwm.edu 0000-0002-8935-9882]Akash Anumarlapudi Department of Physics, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA 0000-0003-0699-7019]Dougal Dobie Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122, Australia ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), Hawthorn, Victoria, Australia 0000-0001-6295-2881]David L. Kaplan Department of Physics, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA 0000-0002-2686-438X]Tara Murphy Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW, 2006, Australia ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), Hawthorn, Victoria, Australia 0000-0002-5936-1156]Assaf Horesh Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, 91904, Israel 0000-0002-9994-1593]Emil Lenc CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia 0000-0002-4405-3273]Laura Driessen Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW, 2006, Australia 0000-0002-3846-0315]Stefan W. Duchesne CSIRO Space and Astronomy, PO Box 1130, Bentley, WA, 6102, Australia 0009-0008-6396-0849]Hannah Dykaar Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada David A. Dunlap Department of Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada 0000-0002-3382-9558]B. M. Gaensler Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada David A. Dunlap Department of Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada 0000-0002-2801-766X]Timothy J. Galvin CSIRO Space and Astronomy, PO Box 1130, Bentley, WA, 6102, Australia International Centre for Radio Astronomy Research - Curtin University, 1 Turner Avenue, Bentley, WA 6102, Australia 0000-0002-4440-8046]Joe Grundy CSIRO Space and Astronomy, PO Box 1130, Bentley, WA, 6102, Australia International Centre for Radio Astronomy Research - Curtin University, 1 Turner Avenue, Bentley, WA 6102, Australia 0000-0002-2155-6054]George Heald CSIRO Space and Astronomy, PO Box 1130, Bentley, WA, 6102, Australia 0000-0001-7464-8801]Aidan W. Hotan CSIRO Space and Astronomy, PO Box 1130, Bentley, WA, 6102, Australia 0000-0002-8314-9753]Minh Huynh CSIRO Space and Astronomy, PO Box 1130, Bentley, WA, 6102, Australia 0000-0002-9415-3766]James K. Leung Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, 91904, Israel 0000-0002-2819-9977]David McConnell CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia 0000-0002-3005-9738]Vanessa A. 
Moss CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW, 2006, Australia 0000-0003-1575-5249]Joshua Pritchard Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW, 2006, Australia CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), Hawthorn, Victoria, Australia CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia 0000-0002-7329-3209]Kovi Rose Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW, 2006, Australia CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia 0000-0001-6682-916X]Gregory Sivakoff Department of Physics, University of Alberta, CCIS 4-181, Edmonton AB T6G 2E1, Canada. 0000-0003-0203-1196]Yuanming Wang Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122, Australia ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), Hawthorn, Victoria, Australia 0000-0002-2066-9823]Ziteng Wang International Centre for Radio Astronomy Research - Curtin University, 1 Turner Avenue, Bentley, WA 6102, Australia CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia 0000-0003-1160-2077]Matthew T. Whiting CSIRO Space and Astronomy, PO Box 76, Epping, NSW, 1710, Australia § ABSTRACT Late-time (∼ year) radio follow-up of optically-discovered tidal disruption events (TDEs) is increasingly resulting in detections at radio wavelengths, and there is growing evidence for this late-time radio activity to be common to the broad class of sub-relativistic TDEs. Detailed studies of some of these TDEs at radio wavelengths are also challenging the existing models for radio emission. Using all-sky multi-epoch data from the Australian Square Kilometre Array Pathfinder (ASKAP), taken as a part of the Rapid ASKAP Continuum Survey (RACS), we searched for radio counterparts to a sample of optically-discovered TDEs. We detected late-time emission at RACS frequencies (742-1032 MHz) in five TDEs, reporting the independent discovery of radio emission from TDE AT2019ahk and extending the time baseline out to almost 3000 days for some events. Overall, we find that at least 22^+15_-11% of the population of optically-discovered TDEs has detectable radio emission in the RACS survey, while also noting that the true fraction can be higher given the limited cadence (2 epochs separated by ∼ 3 years) of the survey. Finally, we project that the ongoing higher-cadence (∼ 2 months) ASKAP Variable and Slow Transients (VAST) survey can detect ∼ 20 TDEs in its operational span (4 yrs), given the current rate from optical surveys. § INTRODUCTION The discovery of tidal disruption events <cit.> thus far was initially dominated by X-ray surveys <cit.> and then by optical/ultraviolet (O/UV) surveys in more recent times <cit.>. At O/UV wavelengths, emission from TDEs has a characteristic blue continuum with hydrogen and/or helium emission lines[This is true for sun-like stars. In general, the spectral signature depends on the composition of the disrupted star.] and can be accurately modeled as a black body with temperatures peaking near UV wavelengths <cit.>. Radio emission from TDEs, expected from the interaction of nascent jets or outflows, was initially detected only in a handful of TDEs and this initial sample was dominated by TDEs that were discovered at higher energies. 
It was estimated by <cit.> that not all TDEs result in radio detections, with only ∼ 20% of them being radio bright. The distribution of radio luminosities from this initial crop of TDEs indicated a dichotomy at radio wavelengths where the luminosity differed by 2–3 orders of magnitude <cit.>. The more luminous events resulted from relativistic jetted TDEs in which the radio luminosity exceeded 10^40 erg/s, while the less luminous events were from TDEs with sub-relativistic outflows where the isotropic radio luminosity were around 10^38 ergs/s <cit.>. Shock-accelerated relativistic electrons produce radio emission from TDEs via the synchrotron mechanism <cit.>. This can be due to external shocks driven by jets/outflows or unbound stellar debris into the circumnuclear medium <cit.> or due to internal shocks within the jet <cit.>. By modeling the spectral and temporal evolution of the emission, one can estimate the jet/outflow properties, particularly the velocity of the ejecta, their launch time relative to the optical flare, and the energy injected into the CNM <cit.>. Continuous monitoring of events in which early time (∼ days to weeks after the optical flare) radio emission was detected, like Swift J1644+57 <cit.> and ASASSN-14li <cit.>, demonstrated that the radio emission can be very long-lived, until ∼ years after the disruption. However, there are TDEs like ASASSN-15oi and AT2018hyz, in which early-time radio observations resulted in null detections, yet continued monitoring of these events until late time (∼ months to years after the optical flare) resulted in radio detections <cit.>. This can be either due to a delay in the ejection of the outflow <cit.> or due to the viewing effects of an off-axis observer looking at a relativistic jet <cit.>. In addition, <cit.> found a radio re-brightening in ASASSN-15oi, ∼ 4 years after the initial optical discovery. <cit.> and <cit.> showed that the radio light curve in both these events showed a rise/decline that is steeper than any of the current predictions. More recently, studying late-time radio activity in TDEs using a sample of 23 TDEs, <cit.> showed that the launch of outflow can be delayed, by as much as ∼ 700 days, which raises the question of whether the phenomenon of delayed ejection is common in TDEs and whether the current models are adequate for describing the observed emission in TDEs like these. While large samples of TDEs are coming from ongoing optical surveys <cit.>, the discovery space is expanding. Recent studies like those of <cit.> and <cit.> have discovered TDEs at infrared (IR) wavelengths using dust echoes from TDEs. Using the first two epochs of the Very Large Array Sky Survey [VLASS;][]vlass, <cit.> produced an independent sample of six radio TDEs that are optically bright. A few TDEs in this sample showed lower blackbody temperatures (T_bb) and luminosities (L_bb) compared to the optically discovered TDEs, indicating TDEs occurring in dust-obscured environments and adding to the sample of radio-first TDE discoveries <cit.>. Such independent TDE discoveries from highly dust-obscured regions at radio/IR wavelengths can help constrain the true rate of TDEs and resolve the tension between the observed rate and the expected rate from theoretical predictions <cit.>. Using the first three years of data from the Zwicky Transient Facility <cit.>, <cit.> estimated a volumetric rate of 3.1^+0.6_-1.0× 10^-7 Mpc^-3 yr^-1 TDEs (L_bb > 10^43 erg/s). 
Comparing the rate of thermal TDEs to Swift J1644-like X-ray events <cit.> and AT2020cmc-like optical events <cit.>, the relative rate of jetted TDEs is estimated to be less than one percent of the thermal TDEs. This implies that the observed rate of thermal plus jetted TDEs is still lower than the current theoretical prediction by an order of magnitude <cit.>. All-sky radio surveys can be an extremely useful resource in discovering radio afterglows serendipitously. However, multi-epoch data can be crucial to separate emission related to the TDE to emission from any active galactic nucleus (AGN) which may be present. In particular, high cadence surveys like the Australian SKA Pathfinder Variable and Slow Transients survey <cit.>, can be very fruitful in getting a well-sampled light curve[VAST has a cadence of 2 weeks–2 months depending on the sky position.] for a larger sample of TDEs where dedicated follow-up of every individual event may not be possible/practical <cit.>. Motivated by this, we used the data from the Rapid ASKAP Continuum Survey (RACS; ), a multi-epoch all-sky survey (see Table <ref> for survey details) to search for radio emission from TDEs discovered at higher energies (O/UV/X-ray). We then studied the prospects of finding radio TDEs in the VAST survey by projecting the rates estimated from the fraction of TDEs that are radio bright in the RACS survey. An alternate approach of discovering TDEs by modeling the radio light curve evolution using existing models <cit.> is used by <cit.> to independently discover TDE candidates at radio wavelengths. Our approach is different from the untargeted and model-dependent search of <cit.>, yet complementary since we find afterglows from TDEs like ASASSN-15oi, AT2018hyz etc, in which the observed radio emission can not be easily explained by the existing models. Unlike dedicated follow-up campaigns that extensively monitor a given sample of TDEs <cit.>, our approach is different, in that we study the prospects of discovering TDEs serendipitously in all-sky surveys, and hence our data are sparser. We focus instead on the nature of the TDEs we detect at lower observing frequencies, their rates, and the implications and expectations for the VAST survey. Our article is structured as follows: in Section <ref>, we detail our observations, surveys used in this study, and our data reduction methods. In Section <ref> we discuss our sample selection technique. We present our detections in Section <ref> and describe the properties of the individual candidates in Sections <ref> through <ref>. Finally, we discuss the implications of our detections in Section <ref> and projections for future surveys like VAST in Section <ref>, before concluding in Section <ref>. Throughout this work, we use the <cit.> model of cosmology, with H_0 = 67.4 km Mpc^-1 s^-1. § OBSERVATIONS AND DATA ANALYSIS §.§ Rapid ASKAP Continuum Survey (RACS) The primary data set used in this work comes from all-sky 887.5 MHz radio observations taken as a part of RACS — . has been conducted at two separate epochs thus far, separated by ∼ 3 years. In addition, RACS has also been conducted at two other frequencies, as single (so far) epoch surveys — <cit.> and 1655 MHz; (in prep.), data from which we have used to study the behavior of the TDEs that we detected in . Details of each of these surveys are provided in Table <ref>. Observations for all of the RACS surveys were carried out between March 2019 and April 2022. 
Data were processed using standard techniques recommended for ASKAP data <cit.>, using the ASKAPsoft package <cit.>, to generate both the images and the noise maps. A more detailed description of reduction techniques is provided by <cit.>. In this paper, we only used the total intensity (Stokes I) maps.

Survey details of all the different surveys used as a part of this article:

Survey | Center frequency (MHz) | Bandwidth (MHz) | Sky coverage | Integration time | Median noise (mJy/beam) | Angular resolution | Observations | Instrument | Reference
RACS-low (epoch 1) | 887.5 | 288 | -90° < δ < +41° | 15 min | 0.25 | ∼15″ | ∼ March 2019 | ASKAP | <cit.>
RACS-low (epoch 2) | 887.5 | 288 | -90° < δ < +51° | 15 min | 0.19 | ∼15″ | ∼ March 2022 | ASKAP | in prep.
RACS-mid | 1367.5 | 144 | -90° < δ < +49° | 15 min | 0.20 | ∼10″ | ∼ January 2021 | ASKAP | <cit.>
RACS-high | 1655.5 | 200 | -90° < δ < +48° | 15 min | 0.19 | ∼8″ | ∼ December 2021 | ASKAP | in prep.
VLASS | 3000 | 2000 | -40° < δ < +90° | 5 s | 0.12 | 2.5″ | (a) | VLA | <cit.>

(a) The first two epochs of VLASS were completed roughly in 2019 and 2021, and the third observing run is currently ongoing.

§.§ Variable and Slow Transients Survey (VAST) VAST <cit.> is a radio survey that will image almost one-quarter of the entire sky repeatedly for 4 years. VAST is divided between the Galactic and extra-galactic sky, with the Galactic sky being observed with a cadence of roughly 2 weeks and the extra-galactic sky with a cadence of roughly 2 months. VAST pilot surveys <cit.> were carried out in between the two RACS-low epochs, and the main VAST survey[<https://www.vast-survey.org/Survey/>] began its operation in December 2022. For the TDEs that we detected in the RACS-low dataset, we augmented the RACS data with data from the VAST survey if the transient falls inside the VAST footprint. The survey parameters of VAST are similar to those of the RACS-low survey (see Table <ref>), except for a 12 min integration time per field in VAST compared to a 15 min observation in RACS. A more detailed description of the pilot and the full surveys is provided by <cit.>. §.§ VLA Sky Survey (VLASS) In addition to the RACS and VAST survey data, we also made use of the VLA Sky Survey <cit.>. VLASS is an all-sky survey[North of -40° declination.] spanning 2-4 GHz and plans to scan the entire sky at three different epochs with a cadence of roughly 32 months between the epochs. The first two epochs have been completed and the third epoch is underway. For the TDEs detected in the RACS-low data, we used the VLASS quick-look images[<https://archive-new.nrao.edu/vlass/quicklook/>] <cit.> to measure the flux density at 3 GHz. §.§ Search methodology We selected all the TDEs from the Transient Name Server (TNS)[<https://www.wis-tns.org/>] that were spectroscopically classified as TDEs, as well as those that were optically discovered in all-sky surveys like the ZTF and the All-Sky Automated Survey for Supernovae <cit.>, which resulted in 63 events <cit.>. We then discarded 13 events that are outside the RACS-low epoch 1 footprint, as well as those events where the optical discovery occurred after epoch 2, leaving 43 events in our sample. We examined the total intensity (Stokes I) sky maps to look for radio emission at the TDE positions. Radio emission in TDEs can be observable ∼years after the initial disruption[The radio emission can persist for ∼ years after the optical flare <cit.> in a few TDEs, but is only observable at late times <cit.> in a few others.] <cit.> and hence we restricted our cross-match to spatial coincidence, relaxing any constraint on the temporal coincidence as long as the TDE was discovered before the second epoch.
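The spatial cross-match just described can be reproduced in a few lines with astropy; the sketch below uses placeholder TDE and radio-source positions together with the 5 arcsec matching radius quoted below, and is only meant to illustrate the procedure.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# Illustrative inputs: optical TDE positions and radio catalogue positions (degrees).
tde_coords = SkyCoord(ra=[150.012, 201.365] * u.deg, dec=[-12.5, 33.1] * u.deg)
racs_coords = SkyCoord(ra=[150.0121, 23.4, 201.3652] * u.deg,
                       dec=[-12.5001, 5.0, 33.1001] * u.deg)

# Nearest-neighbour match of each TDE to the radio catalogue,
# keeping only spatial coincidence (no constraint on time).
idx, sep2d, _ = tde_coords.match_to_catalog_sky(racs_coords)
matched = sep2d < 5 * u.arcsec   # search radius of twice the positional accuracy

for i, (j, sep, ok) in enumerate(zip(idx, sep2d, matched)):
    if ok:
        print(f"TDE {i} -> radio source {j} (separation {sep.to(u.arcsec):.2f})")
```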
The positional accuracy for the ASKAP data is 2.5 arcsec[This is including the systematic component of the offset <cit.>.] and hence we used twice this as our search radius, 5 arcsec, when astrometrically crossmatching the TDEs. This resulted in 11 TDEs for which we detected coincident radio emission in RACS-low. However, only 5 of these events showed significant variability in their light curve between the two epochs. The remaining 6 events did not show any significant evolution between the epochs, which made it difficult to rule out underlying host galaxy/host AGN emission (see Section <ref> for more details). In the five TDEs with coincident variable radio emission, the emission lasted ∼ years after the initial optical outburst, with the longest-lived radio TDE lasting ∼ 8 yrs. Our detections add to the sample of TDEs reported by <cit.>, where late-time radio emission is seen. However, only one TDE (AT2018hyz) is common between our sample and <cit.>. Table <ref> gives the flux density measurements for all these events. For all the TDEs that are in the RACS footprint but resulted in non-detections, we provide upper limits (3-σ) on the radio flux density and radio luminosity in Table <ref>. § INDIVIDUAL TIDAL DISRUPTION EVENTS Given the nature of this study, our light curves are sparser than dedicated campaigns like those of <cit.> or <cit.>. We therefore make simplifying assumptions about the spectral and temporal properties of the observed emission to estimate the source properties. We modeled the late-time radio spectrum as a broken power-law with the break frequency corresponding to the synchrotron self-absorption (SSA) frequency (ν_ssa), adapted from <cit.> to join the power-laws smoothly <cit.>. We modeled the temporal evolution of the light curve using <cit.>: a rising power-law when the emission is optically thick, smoothly joined by a declining power-law when the emission becomes optically thin. To infer source parameters, we assume that the energy stored in magnetic fields is similar to the energy of the relativistic electrons (equipartition; <cit.>). Since the time scale of our radio detections is ≈ year(s), unless we see evidence for on-axis jets (radio luminosity consistent with Swift J1644 or AT2020cmc-like events) or off-axis relativistic jets (characterized by a steep rise time), we assume that the bulk Lorentz factor is close to 1 (Newtonian case). We assume that roughly 10% of the energy in heavy particles is used to accelerate the electrons to relativistic speeds (ϵ_e ≈ 0.1). Assuming a power-law seed electron energy distribution N(E) dE = A E^-p dE, with p being the index, we infer the emission radius (R_eq) and the total equipartition energy following <cit.>[Since the peak frequency corresponds to ν_ssa, we correct the total equipartition energy by accounting for the radiation emitted at ν_m.]. We caution that the outflow geometry of sub/non-relativistic outflows can be quasi-spherical or asymmetrical, in which case the filling factors can differ, but as noted by <cit.>, the estimated source properties are relatively insensitive to these. Hence, in this work, we assume that the geometry is nearly spherical. Further, we assume that the observed radio emission arises from a thin shell of expanding outflow (of width ≈ 0.1R, where R is the radius; e.g., <cit.>) and is spherically symmetric. For such cases, the areal and volume filling factors f_A and f_V <cit.> are 1 and 0.36, respectively.
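To make the spectral and temporal forms above concrete, the sketch below implements a generic smoothly broken power law that can serve either as the SSA spectrum (optically thick index 5/2 below the break, optically thin index -(p-1)/2 above it) or as a rising/declining light-curve model. The indices, smoothness parameter, and normalizations in the example call are placeholders, and the cited works should be consulted for the exact conventions adopted in the fits.

```python
import numpy as np

def smooth_broken_power_law(x, x_break, f_peak, a1, a2, s=1.0):
    """Smoothly broken power law: ~x**a1 well below the break and ~x**a2 well above it.
    Larger s gives a sharper transition."""
    return f_peak * ((x / x_break) ** (-s * a1)
                     + (x / x_break) ** (-s * a2)) ** (-1.0 / s)

def ssa_spectrum(nu_ghz, nu_peak_ghz, f_peak_mjy, p=2.5, s=1.0):
    """Late-time radio spectrum: optically thick (nu^{5/2}) below the SSA break,
    optically thin (nu^{-(p-1)/2}) above it."""
    return smooth_broken_power_law(nu_ghz, nu_peak_ghz, f_peak_mjy,
                                   a1=2.5, a2=-(p - 1.0) / 2.0, s=s)

def light_curve(t_days, t_peak, f_peak_mjy, rise=2.0, decline=-3.0, s=1.0):
    """Single-frequency light curve: a rising power law joined smoothly to a decline."""
    return smooth_broken_power_law(t_days, t_peak, f_peak_mjy,
                                   a1=rise, a2=decline, s=s)

# Illustrative evaluation (values are placeholders, not fits to any event).
nu = np.array([0.8875, 1.3675, 3.0])                 # GHz
print(ssa_spectrum(nu, nu_peak_ghz=0.9, f_peak_mjy=12.0, p=2.5))
print(light_curve(np.array([500.0, 2500.0, 3500.0]), t_peak=2500.0, f_peak_mjy=12.0))
```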
§.§ ASASSN-15oi After an initial non-detection at radio wavelengths (up to ∼ 6 months), <cit.> reported the discovery of a radio counterpart to ASASSN-15oi <cit.> that rose steeply (∼ t^4). This was followed by a steep fall (steeper than t^-3) that became shallower at late times (see Figure <ref>). <cit.> noted that such steep rise and fall times could not be explained by a standard forward shock and CNM interaction model. <cit.> also reported a very late time re-brightening (∼1000 days later) in the VLASS epoch 1 data. We detected very late time re-brightening in data and the lightcurve continued to rise (roughly as ∼ t^2) and peaked ∼ 2500 days after the optical flare (see Figure <ref>). This very late time re-brightening was replicated in the and data as well. VAST observations for this transient revealed that the emission started to decline steeply (∼ t^-3) following the peak. This very late time decline is similar to the behavior that <cit.> reported following the initial radio peak. As <cit.> points out, the changes in various decline rates of emission could point to changes in the CNM density profile or a structured jet. However, it is difficult to reconcile such steep rise and fall times with the existing afterglow models. Using the second epoch of VLASS observations, we find that the 3 GHz light curve is declining, roughly following a t^-1 decline (see Figure <ref>). This is in contrast to the rising 887.5 MHz light curve during the same period, which suggests that the emission at 3 GHz was optically thin during this period and that at 887.5 MHz was optically thick. This can be explained by the peak frequency gradually transitioning to lower frequencies at late times, a trend that is expected and was also observed by <cit.> during the initial radio observations. This is also consistent with our 887.5 MHz observations, which revealed a turnover indicative of emission transitioning from optically thick to thin at >3000 days. The , VLASS epoch 2, and the epoch 2 data are separated by ∼ 75 days, and under the assumption that the spectral evolution during this time frame is minimal (given the active cycle of >4 years), we found that the spectrum at this epoch (∼ 2400 d after the event) is well fit by a power law (with the spectra index, α=-0.75± 0.2, where S_ν∝ν^α). We assumed that the self-absorption frequency is closer to the RACS observing frequency (887.5 MHz)[The radio emission being optically thick at 887.5 MHz at this time (which continues until ∼ 2800 days after the disruption) and thin at 3 GHz partially supports this assumption.], without attempting a physical model for the origin of this[<cit.> found that the initial radio spectrum showed large deviation from the SSA spectrum in the self-absorbed part, but might be consistent with free-free absorption.], and estimate the electron distribution index p=2.5± 0.2. Given the peak frequency ν_ p≈ 887.5 MHz, the peak flux density F_ν, p=12.2 mJy, and p=2.5, we derive a lower limit of R_ eq≈ 6× 10^17 cm, on the emission radius and E_ eq≈ 1× 10^50 erg on the total energy. §.§ AT2019ahk / ASASSN-19bt AT2019ahk was discovered as an optical transient by <cit.>. We report an independent radio discovery of this event in RACS data at all three frequencies (see Figure <ref>), where we saw a rising transient over 3 years. <cit.> reports archival radio detection of AT2019ahk roughly 4 years before the disruption and estimates underlying host galaxy emission to follow F_ν, host = 0.439 (ν/2.1)^-1 mJy. 
Combining the RACS data with contemporaneous data from <cit.>, we see that the 0.8–0.9 GHz light curve is still rising ≈ 1500 days after the event, but the 1.6 GHz light curve started to decline. This hints that emission at 1.6 GHz has transitioned to an optically thin regime, but emission at lower frequencies is still optically thick. Hence, the SSA frequency is very close to the frequency at ≈ 1100 days, consistent with the peak frequency estimated by <cit.>. Using p ≈ 2.7 (using existing literature, e.g., and also consistent with ), we estimate the equipartition emission radius for ν_ p=1.655 GHz, F_ν, p=6.4 mJy to be ≈ 1 × 10^17 cm and total energy to be ≈ 7 × 10^48 erg at δ t=1100 days. §.§ AT2019azh Using multi-frequency observations of multiple epochs, <cit.> modeled the radio spectrum of AT2019azh to find a free expansion of the ejecta that showed signs of deceleration post ∼ 450 days of the disruption. <cit.>, on the other hand, modeled the 15.5 GHz lightcurve and found evidence for two emission components (see Figure <ref>), which led the authors to propose a state transition similar to the ones observed in X-ray binaries. We found this TDE in the data as a slowly rising source, increasing by a factor of ∼ 2 between the two epochs. We also detected this source in the and data sets. Using the data and the data from <cit.>, we modeled the 1.4 GHz lightcurve reasonably well by a two-component model similar to <cit.>. Figure <ref> shows the full light curve for this event where the similarity can be seen between the shapes of the 15.5 GHz and 1.4 GHz light curves, although the rise and fall times at these frequencies are different. At 1.4 GHz, the two components rose to a peak at ∼ 300 and 520 days respectively, slower than the 15.5 GHz light curve that took 130 days and 360 to rise. This is broadly consistent with the underlying model <cit.> where the emission at different frequencies is self-similar but the emission at lower frequencies has longer rise times. However, the very late time (≳ 3 yrs) relative behavior between the and the data is puzzling. <cit.> noted that the peak frequency at late times was <1 GHz, which meant that the spectrum above this should be a declining one. But the observed flux density in is higher than the model-predicted flux density in by a factor that is roughly consistent with the SSA mechanism (where S_ν∝ν^5/2). This might be indicative of the peak frequency increasing to higher frequencies at late times, something that <cit.> observed in another event, AT2018hyz, indicative of late-time source activity. The second epoch detection postdates this, however the lack of continued coverage through very late times makes it difficult to distinguish whether this consistent with the initial decline, or is a signature of very late time rebrightening, as hinted by the data. The and VLASS observations are separated by ≈ 14 days; under the assumptions that i) this interval is much shorter than the evolutionary timescale of the radio emission, and ii) the peak frequency rose, but to a value lower than the observing frequency of . We estimated the electron distribution index to be p=3.2± 0.4 (at Δ t=1030 days), consistent with the electron distribution of <cit.> at late times (849 days). This seems to hint that the emission we see at very late times might still be coming from the same family of electrons. 
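The electron distribution indices quoted in these sections follow from two-point spectral indices between (nearly) contemporaneous measurements, under the assumption that both bands lie on the optically thin part of the spectrum where F_ν ∝ ν^(-(p-1)/2). A minimal sketch with placeholder flux densities (not the measured values) is given below.

```python
import numpy as np

def electron_index(flux1_mjy, nu1_ghz, flux2_mjy, nu2_ghz):
    """Two-point optically thin spectral index alpha (F_nu ~ nu^alpha)
    and the implied electron energy index p = 1 - 2*alpha."""
    alpha = np.log(flux2_mjy / flux1_mjy) / np.log(nu2_ghz / nu1_ghz)
    return alpha, 1.0 - 2.0 * alpha

# Placeholder flux densities at the two observing bands (0.8875 and 3 GHz).
alpha, p = electron_index(flux1_mjy=3.0, nu1_ghz=0.8875, flux2_mjy=0.8, nu2_ghz=3.0)
print(f"alpha = {alpha:.2f}, p = {p:.2f}")
```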
§.§ AT2018hyz AT2018hyz was first detected at radio wavelengths ∼2.5 years after the optical outburst <cit.>, and showed an unusually steep rise (∼ t^4-6) at most of the observed frequencies (1.3-19 GHz) <cit.>. <cit.> noted that the light curve at lower frequencies (≲ 3 GHz) began to decline (see Figure <ref>) at the end of their observing campaign (∼ 1250 d past optical outburst). Modeling the spectrum at multiple epochs, <cit.> also found that the peak frequency increased roughly from 1.5 GHz to 3 GHz at late times. However, following the off-axis jet model proposed by <cit.>, <cit.> showed that the observed radio emission in AT2018hyz is also consistent with late-time emission from a narrow jet (∼ 7) as viewed by an off-axis observer (∼ 42). Upon finding this source in data, we looked at the detailed VAST light curve and found no discernible radio emission until late times and a very steep rise at late times (∼ t^4 rise; see Figure <ref>), both of which were consistent with <cit.> and <cit.>. We also found that the 887.5 MHz emission continued to rise until our final observation (Δ t=1700 days)[Using the data from our latest observation at 1757 days, we find a hint of a turn-over in the 887.5 MHz lightcurve, but we need additional data to robustly confirm this.]. However, given the steep rise of this particular transient and the gap between epoch 2 and the VAST full survey data, we cannot rule out a decline seen by <cit.> at frequencies below 3 GHz, followed by a rebrightening at 887.5 MHz instead of a single brightening episode. We then investigated the sudden jump in the peak frequency from 1.5 GHz to 3 GHz reported by <cit.> <cit.>. At day 1251, <cit.> found that the peak frequency is 1.5 GHz but the data used in this fit were all at frequencies >1.12 GHz, where the self-absorbed part of the spectrum might not have been well captured. Combining the 887.5 MHz RACS data from day 1263 with the data from day 1251, we found that the peak frequency rose to 1.9 GHz, as opposed to 1.5 GHz. At this epoch, we also find that the absorption part of the spectrum is more or less consistent (S_ν∼ν^2.7) with what is expected from the SSA mechanism (S_ν∝ν^5/2). This rise in the peak frequency to roughly 3 GHz at day 1282 might be explained by this gradual increase in the peak frequency rather than a sudden shift, something similar to what we found in AT2019azh (see <ref>). Using the latest epoch of VLASS data, we found that the 3 GHz emission also rose from an early non-detection as t^4 (see Figure <ref>), consistent with the RACS/VAST data, to a remarkably bright 16.5 mJy. This is consistent with the very late-time brightening of this transient in radio. §.§ AT2019qiz AT2019qiz <cit.> has received comparatively very little follow-up at radio wavelengths, with <cit.> presenting the initial radio detections that indicated a rising transient at multiple frequencies but with no robust analysis presented (see Figure <ref>). We found this transient in data brightening from a non-detection in epoch 1 to a flux density level of ∼ 1 mJy in the second epoch of , consistent with and . This suggests that this source might be very slowly evolving or that it may be steadily emitting at higher flux density levels. The VAST full survey data resulted in a non-detection, which indicated that the flux density variation was <30% of the mean (see Figure <ref>). 
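The rise and decline indices quoted in this section (for example, the ∼ t^4 rise of AT2018hyz, or the ∼ t^2.5 early rise discussed below for AT2019qiz) can be estimated with a simple linear least-squares fit in log-log space. The following Python sketch is illustrative only; the epochs and flux densities in the example are placeholder values, not measurements from this work.

import numpy as np

def powerlaw_index(t_days, flux_mjy):
    # Fit log10(F) = alpha * log10(t) + b and return the temporal index alpha.
    alpha, b = np.polyfit(np.log10(t_days), np.log10(flux_mjy), 1)
    return alpha

t = np.array([700.0, 1000.0, 1250.0, 1550.0])   # days since optical discovery (placeholder)
f = np.array([0.6, 2.4, 5.8, 14.0])             # flux density in mJy (placeholder)
print(powerlaw_index(t, f))                     # ~4, i.e. a steep t^4-like rise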
We also inspected the VLASS epoch 1 image, which predated the optical disruption time, and did not find a detection, placing a 3σ upper limit of 0.36 mJy on the persistent emission at 3 GHz. However, the transient rose to persistent levels of 1 mJy in the later VLASS epochs (see Figure <ref>). Motivated by this behavior, we wanted to see if the early-time behavior was consistent with an afterglow or if it was different, in which case it might provide clues to the nature of the underlying emission. Using the data from <cit.>, we found that the initial rise at different frequencies seems to be consistent with t^2.5 at both 17 and 9 GHz. This t^2.5 increase was also consistent with the non-detection of this transient at 5.5 GHz at early times. The spectrum at ∼ 75d seemed to be inconsistent with a synchrotron self-absorption spectrum (S_ν∝ν^5/2), so we tried to model the break frequency as the minimum frequency (ν_m) instead of the self-absorption frequency <cit.>. Here the two rising power-law spectral indices are +2 (Rayleigh-Jeans tail) at frequencies below the break and +1/3 at frequencies above the break frequency. We found a reasonable fit to the spectrum in this case (see bottom panel of Figure <ref>), with the break frequency around ∼ 9 GHz. This indicates that at early times the emission is consistent with an afterglow. We then tried to reconcile this with the late-time radio observations from RACS and VLASS. The lack of late-time evolution likely rules out the scenario where the late-time activity is still dominated by the emission powered by the CNM interaction. It is also possible that there was prior nuclear activity (possibly from an AGN) in this galaxy, which becomes visible once the transient fades away. The non-detection in VLASS and RACS data before the optical discovery makes this unlikely[In particular, the AGN flux density variation with respect to the VLASS non-detection in epoch 1 has to be at least a factor of ∼ 5-6.], but it cannot be ruled out entirely. Although the RACS and VLASS data are not temporally simultaneous, if we assume that the source is persistent and non-variable, they suggest that the spectrum might be flat at late times. It might be possible that a jet was launched at early times and we are looking directly into the emission from the jet at late times, which could explain the flat spectrum. If this were the case, then it might be an interesting situation in which late-time emission from the jet was directly seen and would add to the small sample of jetted radio TDEs, but given the sparsity of the data, it cannot be firmly established. §.§ Steady Radio Sources: Probable AGN/Host Galaxy Emission In addition to the candidates where a rising/declining behavior is clearly seen, there are cases where the light curve showed little variation or was consistent with a non-varying source (the underlying host galaxy or AGN). An AGN may be intrinsically variable, or variable due to external effects like scintillation <cit.>. In both cases, if the flux density was consistent with a steady source within error bars between the two RACS epochs, we considered it to result from underlying AGN activity (clearly this is a conservative assumption, as we could be averaging over peaks or declines given our sparse sampling). Below we note such examples (see Figure <ref>). We cross-matched the TDEs in our sample with the WISE catalog <cit.> to look for AGN signatures. Figure <ref> shows the identified counterparts on a WISE color-color plot <cit.>.
We used a color difference of WISE band 1 (3.4 μm) - WISE band 2 (4.6 μm) > 0.8 <cit.> to classify an object as an AGN. * AT2020nov was detected in both epochs of with no significant evolution between them, and also in and (see Figure <ref>). The first observation pre-dated the optical outburst by ∼400 days. We looked at the VLASS images and found that the same behavior was replicated at 3 GHz. Recently, <cit.> also reported AT2020nov as probably dominated by an AGN in their study, with a non-evolving light curve at 6 GHz. The lack of variability in the observed data seems to indicate that the radio emission is likely coming from the AGN activity itself. Exploiting the non-variability of this source at different frequencies, we estimated the spectral index (see Figure <ref>), assuming a power-law spectrum S_ν∝ν^α for the AGN emission. Using the data from RACS, VLASS and <cit.>, we find α=-0.64±0.04, consistent with a typical AGN spectrum <cit.>. * AT 2022dsb was discovered by <cit.> on 2022 March 01 and had a radio detection reported by <cit.> roughly 20 days later, but the transient nature of this source was not confirmed. epoch 1 had a 4σ pre-discovery measurement which points to an underlying AGN[There is a WISE counterpart within 1 of this position, but the WISE colors were not sufficiently conclusive to claim an AGN.] or host galaxy emission, a conclusion strengthened by the detection in epoch 2, which showed little variability roughly 60 days after the optical outburst. There was also a pre-discovery measurement in . and VLASS data resulted in upper limits (3σ; 0.6 mJy and 0.36 mJy, respectively). Based on these detections and upper limits, we estimated the spectral index to be α=-0.7±0.3, typical of AGN. * AT 2022bdw <cit.>: No radio detection from this source has been reported so far, but we found pre-discovery detections in epoch 1 and data. Comparing these with epoch 2, which was after the optical outburst, we found that the flux density level was consistent with a non-varying source, either the host AGN or host galaxy emission[This was one of the few fields that was observed twice as a part of epoch 2, separated by 45 days, and the flux density was consistent with a non-varying source to within 2σ.]. No emission was found in or VLASS data, and using these we estimate the spectral index of the background emission to be α=-0.8±0.3, again typical of AGN. * AT 2021qxv: AT 2021qxv <cit.> was observed as a part of the VAST survey in addition to the RACS survey. However, no strong detection has been found in any of the RACS/VAST data except for a detection in epoch 2. showed a weak 3σ detection at this location, with , VLASS and FIRST data resulting in null detections, and hence we conclude that the epoch 2 detection that we see is probably coming from an underlying AGN[WISE colors point to a probable AGN.] with a spectral index steeper than α=-1.1. * AT 2020zso: AT 2020zso was discovered by <cit.>; a very weak radio detection of 22±7 μJy at 15 GHz was reported roughly 1 month later by <cit.>, but following this, a null detection was made with the uGMRT <cit.> at the central frequencies of 0.65 and 1.26 GHz (upper limits of 46.6 μJy and 51.2 μJy). No strong detections were made in RACS/VAST data except for a single detection in epoch 2. VLASS data contains a 5σ detection in epoch 3.1, but the VAST observation that followed this resulted in non-detections, so we cannot conclusively establish any late time transient activity from this source.
It might be possible that the transient might take longer to rise at lower frequencies <cit.> in which case future data from the VAST full survey will be very useful to check this. However, with the current data, we cannot rule out AGN variability. * ASASSN-14li: ASASSN-14li <cit.> showed late-time fading that continued until ∼ 600 days in some bands <cit.>. We found radio detections at this position in both epochs, but consistent with a steady flux density level. This source was also detected in , , and VLASS, with no sign of evolution in the latter. Comparing the archival FIRST measurement 2.68±0.15 mJy at 1.4 GHz with the observations indicate a ∼ 40% decrease in the flux density, indicating that the transient possibly faded away and we are looking at the variability from an AGN. Using RACS, and VLASS data, we find the spectral index α=-0.95±0.14. § DISCUSSION §.§ On the nature of detections Understanding the sample biases in all-sky searches is important in estimating the rates and expectations for future surveys. In particular, understanding if our radio-detected sample of TDEs forms an unbiased representation of the underlying optical population becomes important for future projections. Figure <ref> shows the optical properties of the TDEs (blackbody luminosity vs temperature, as estimated from the optical data) that resulted in radio detections in the survey. Comparing the radio detections in optically discovered TDEs using the sample for this study and from <cit.>, we do not see preferential occupation of radio-detected TDEs in this phase space. We do see that there are no radio detections of TDEs with both high temperatures and luminosities (top right corner of the plot), but that can be attributed to the redshift because we do not expect detectable radio emission[This is under the expectation of detecting radio emission from a sub-relativistic outflow with ν L_ν≈ 10^38erg/s.] (given the current sensitivity limits of surveys like RACS/VLASS) from that sub-population. In the sub-sample of optical TDEs from which radio emission can be detected (z ≲ 0.1), our sample, as well as the sample from <cit.>, is not biased towards certain classes of optical TDEs which suggests that the late-time detection of radio emission in TDEs might not be coming from a particular population of TDEs, but is a common feature of sub-relativistic TDEs in general. We then compare our estimates of the emission radius and minimum energy injected into the outflow with archival studies, under the equipartition situation (see Figure <ref>). We caution the reader that a strict comparison would need accurate modeling of the outflow expansion properties (linear/accelerating/decelerating) to compare estimates from different times. We hence restrict the sample to those that show late-time radio emission. Using the sample from <cit.> and <cit.>, we find that our estimates for the emission radius and the energy injected are consistent with those reported in the literature. <cit.> did an untargeted search for TDEs using the first two epochs of VLASS data, and independently discovered radio-first detections of optically bright TDEs. These are shown as the green scatter in Figure <ref>. While some of these seem to be consistent with the population of optically selected radio TDEs, <cit.> suggests that some radio-discovered optically bright TDEs can have lower black body temperatures and luminosities which partly can be due to TDEs occuring in dusty environments. 
Data from the RACS survey, but also more importantly from the VAST survey, which has a cadence of ∼ 2 months, should be very useful in conducting such untargeted searches. §.§ Projections for the VAST survey One of the important questions for an all-sky survey like RACS is the detection efficiency. Figure <ref> shows the radio luminosity at 887.5 MHz of the TDEs detected in the RACS survey compared to those of the population. We see that all of these detections have ν L_ν≈ 10^38 erg/s. We caution that comparison between light curves from our sample at 887.5 MHz and archival light curves at 6 GHz can be non-trivial and that the inferred radio luminosity ν L_ν can have a frequency dependence if L_ν does not exactly scale as ν^-1. Thus, if the spectral index is steeper than -1, the radio luminosity estimated from RACS will be an overestimate relative to the population (in Figure <ref>), and if it is shallower than -1, it will be an underestimate. Despite this, if we assume ν L_ν≈ 10^38 erg/s to be a typical estimate at 887.5 MHz, then given the sensitivity of the RACS survey (RMS noise of 0.25 mJy/beam), the survey should be complete out to z=0.075 (d_L=350 Mpc). We then look at the total population of potentially detectable TDEs. We require that they i) are in the RACS footprint (declination <41°), ii) occurred before the epoch 2 (April 2022), and iii) are within z=0.075, which results in 23 TDEs. Out of these, we detect 5 candidates where we are most likely seeing the afterglow and as many as 6 other events where we might be seeing the host AGN. Counting the 5 detections yields a (90% confidence; estimated using ) detection rate of 22^+15_-11%. This is slightly more than, but consistent within errors with, <cit.>, and slightly less than <cit.>, who find late time radio activity in as much as 40% of optically selected TDEs. It is worth mentioning here that, unlike a targeted search <cit.>, where continuous monitoring is done after an initial detection, our results are based on observations roughly separated by 3 yrs, and hence we are completely insensitive to TDEs that rose and declined within this period, or to some of the most recent TDEs (that happened within a year of epoch 2), which are still rising but are currently below our sensitivity threshold. Hence this detection efficiency can be considered a conservative lower limit for future efforts: a survey with a longer duration and finer time sampling would be able to detect more sources at the same sensitivity threshold. Recently <cit.> performed a comprehensive late-time follow-up of a sample of 23 TDEs and found radio emission lasting on timescales of ∼ a year in roughly 50% of the TDEs. Using the first three years of optical data from ZTF, <cit.> constrained the volumetric rate of optical TDEs to be 3.1^+0.6_-1.0× 10^-7 Mpc^-3 yr^-1. If we assume that as many as 50% of optical TDEs (following ) are capable of producing detectable late-time radio emission, then the current constraints on the rate of optical TDEs imply a rate of 1.5× 10^-7 Mpc^-3 yr^-1 for optically-selected, radio-emitting TDEs. This rate, coupled with the sensitivity of the VAST survey (RMS noise of 0.25 mJy/beam), its footprint (roughly a quarter of the total sky), and the survey lifetime (4 yrs), implies that VAST should be able to detect ∼ 20 optically selected radio TDEs over the full survey.
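The ∼ 20-event projection can be reproduced with an order-of-magnitude estimate. The sketch below is illustrative only: it assumes a Euclidean volume out to the quoted completeness distance of d_L ≈ 350 Mpc, the radio-emitting TDE rate of 1.5 × 10^-7 Mpc^-3 yr^-1 derived above, a sky fraction of one quarter, and a 4-yr survey span; it ignores cosmological corrections and the exact detection threshold, so the result should only be read as a rough consistency check.

import numpy as np

rate = 1.5e-7          # radio-emitting TDE rate (Mpc^-3 yr^-1), as derived above
d_max = 350.0          # approximate completeness distance in Mpc
sky_fraction = 0.25    # VAST footprint, roughly a quarter of the sky
t_survey = 4.0         # survey lifetime in years

volume = 4.0 / 3.0 * np.pi * d_max ** 3          # ~1.8e8 Mpc^3
n_detect = rate * volume * sky_fraction * t_survey
print(n_detect)   # ~27, broadly consistent with the ~20 events quoted above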
TDEs can also occur in highly dust-obscured environments, in which emission can be better studied at lower frequencies like the infrared where the emission can be powered by dust echoes <cit.> and at radio wavelengths, that need ambient material for the outflows/jets to interact with. The rate of the radio-bright optically-quiet TDEs is highly uncertain currently, particularly due to the lack of such studies. Hence the above-mentioned sample of ∼ 20 TDEs can only be considered a lower limit on the detectable sample, given the current optical rate. §.§ RACS/VAST as a sub-GHz reference map Radio emission from TDEs can last ∼ years, and hence obtaining a robust host spectrum in the absence of one might imply that we need to wait for years before the transient fades away and the host galaxy dominates again. RACS, and in particular VAST, can be tremendously helpful in this respect since it provides a low-frequency (where the emission is brighter) reference image, that can be used to study the long-term variability (or lack thereof) of the host galaxy pre-explosion. To illustrate this, we provide the example of AT 2023clx where we can look for the pre-explosion radio emission using RACS data. AT 2023clx was discovered by <cit.> on 2023 February 22, well after epoch 2. A radio detection was reported 4 days later by <cit.> consistent with the position of the optical transient. We found a persistent source in both epochs of data at the optical location and, using non-detections in VLASS, we constrain the radio spectrum of the host to be steeper than -1.35. For future observing campaigns that aim to do dedicated follow-up of these TDEs, RACS data can be very useful in estimating the level of host contamination. With the availability of VAST full survey data, not only radio first discoveries can be made, but also a well-sampled light curve with a cadence of 2 months leading up to the optical outburst, can be obtained. § CONCLUSIONS We conducted an untargeted search for radio emission in optically selected TDEs using data from the RACS survey, which resulted in 5 TDEs where the light curve showed significant evolution. For each of these TDEs, we modeled the evolution to show that the radio evolution at late times can undergo rebrightening and can be complex. We found that late-time activity can be quite common at radio wavelengths in sub-relativistic TDEs, adding to the sample presented by <cit.> who reached similar conclusions from targeted searches. Our search was based on the variability of the source over a timescale of roughly 3 years, which makes us insensitive to TDEs that evolve on timescales smaller than this, and we estimate the rate of optical TDEs in which late-time radio emission can be observed to be 22^+15_-11%. Using the current optical rates, we estimate a conservative lower limit on the number of TDEs that can be detected in the VAST survey to be ∼ 20 over its survey span (4 years). We thank an anonymous referee for helpful comments. AA and DLK are supported by NSF grant AST-1816492. Parts of this research were conducted by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), project number CE170100004. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. AH is grateful for the support by the the United States-Israel Binational Science Foundation (BSF grant 2020203) and by the Sir Zelman Cowen Universities Fund. 
This research was supported by the Israel Science Foundation (grant No. 1679/23). HD acknowledges support from the Walter C. Sumner Memorial Fellowship and the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Postgraduate Scholarship. JP is supported by Australian Government Research Training Program Scholarships. KR thanks the LSST-DA Data Science Fellowship Program, which is funded by LSST-DA, the Brinson Foundation, and the Moore Foundation; Their participation in the program has benefited this work. GRS is supported by NSERC Discovery Grant RGPIN-2021-0400. This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. The Australian SKA Pathfinder is part of the Australia Telescope National Facility (<https://ror.org/05qajvd42>) which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. The establishment of ASKAP, the Murchison Radio-astronomy Observatory, and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. lccllrcccc Radio properties of the TDEs found in data set. Name RA DEC Discovery (UT) z δt RACS low RACS mid RACS high VLASS (days) (mJy) (mJy) (mJy) (mJy) ASASSN-15oi 2015-08-14 20h39m09.1s -30d45m21s 0.02 1355.1 4.21±0.27 1417.3 8.95±1.35 1961.3 10.52±0.19 2350.2 7.45±0.2 2377.7 4.95±0.76 2424.0 12.2±0.15 2860.9 7.28±0.17 2882.8 7.42±0.27 2938.6 6.51±0.2 AT2019ahk 2019-01-29T21:50:24 07h00m11.5s -66d02m24s 0.026 97.5 1.24±0.33 738.6 4.69±0.21 1090.7 6.4±0.2 1172.5 5.87±0.29 AT2019azh 2019-02-22T00:28:48 08h13m16.9s 22d38m54s 0.022 51.9 <0.58 59.4 1.27±0.35 682.7 3.88±0.19 1025.3 1.22±0.22 1039.8 2.38±0.21 1127.5 2.83±0.2 AT2018hyz 2018-11-06T15:21:36 10h06m50.9s 01d41m34s 0.046 -304.2 <0.39 172.9 <0.69 294.4 <0.78 357.2 <0.81 358.2 <0.75 408.2 <0.73 430.1 <0.75 431.1 <0.85 436.1 <0.82 437.1 <0.68 438.1 <0.61 591.7 <0.67 661.5 <0.94 676.2 0.54±0.17 795.2 0.96±0.21 988.6 1.23±0.2 1018.5 1.07±0.21 1158.2 4.85±0.22 1240.0 3.58±0.16 1553.8 16.67±2.5 1679.8 8.12±0.18 1701.7 8.43±0.19 1757.5 7.27±0.53 AT2019qiz 2019-09-19T11:59:43 04h46m37.9s -10d13m35s 0.015 -617.3 <0.36 -142.2 <1.55 396.9 1.1±0.26 490.0 1.33±0.17 841.1 1.25±0.18 938.8 1.19±0.18 1282.5 0.92±0.21 1387.6 <9.31 1445.4 <2.11 AT2020zso 2020-11-12T03:36:05.003 22h22m17.1s -07d16m00s 0.061 -1080.0 0.99±0.2 -563.2 <0.79 -442.4 <0.91 -379.6 <0.74 -378.7 <0.79 -305.9 <0.79 -304.9 <0.89 -298.8 <0.71 -297.8 <0.71 -145.3 <0.72 -124.6 <0.54 -74.5 <0.85 46.2 <0.58 252.7 <0.67 284.5 <0.87 442.1 <0.62 538.8 0.91±0.17 794.8 1.49±0.29 962.7 <1.07 967.7 <0.86 1025.6 <0.83 AT2021qxv 2021-05-10T10:50:52.800 15h18m59.3s -03d11m45s 0.183 -748.0 <0.59 -741.9 <1.32 -79.5 <0.57 216.2 <0.41 245.6 <0.6 331.3 1.66±0.38 764.2 <1.43 774.1 <1.15 787.1 <1.36 843.9 <1.56 AT2022bdw 2022-01-31T09:37:26.400 08h25m10.4s 18d34m57s 0.038 -1023.4 <0.36 -1014.9 1.04±0.25 -384.7 0.63±0.18 -75.0 <0.35 -30.6 <0.6 56.1 0.91±0.21 100.0 1.39±0.19 AT2022dsb 2022-03-01T13:40:47 15h42m21.7s -22d40m14s 0.023 -1475.1 <0.46 -1041.0 1.29±0.3 -482.8 <0.48 -363.7 0.56±0.16 -47.5 <0.65 41.2 0.76±0.21 ASASSN-14li 2014-11-22T00:00:00 12h48m15.2s 17d46m26s 0.021 1602.4 0.58±0.15 1614.5 2.88±0.28 2230.0 1.54±0.19 2552.5 0.77±0.17 2615.9 1.3±0.18 2681.7 2.39±0.2 ASKAP l|r|cc|cc Upper limits (3-σ) on the 
radio emission from the RACS/VAST survey for the sample of TDEs that are in RACS footprint but resulted in non-detections. Name δt 2cFlux limit 2cLuminosity limita RACS VAST RACS VAST (days) (mJy) (mJy) (ergs/s) (ergs/s) AT2016fnl 1121 0.8 4.6× 10^36 AT2016fnl 2047 0.6 3.5× 10^36 AT2018dyb 466 2.1 1.5× 10^37 AT2018dyb 1370 2.4 1.6× 10^37 AT2018fyk 419 0.6 5.0× 10^37 AT2018fyk 517 0.7 5.4× 10^37 AT2018fyk 518 0.5 4.4× 10^37 AT2018fyk 520 0.5 4.2× 10^37 AT2018fyk 521 0.5 4.3× 10^37 AT2018fyk 727 0.6 4.9× 10^37 AT2018fyk 1024 0.5 3.8× 10^37 AT2018fyk 1049 0.4 3.7× 10^37 AT2018fyk 1080 0.5 3.7× 10^37 AT2018fyk 1334 0.4 3.3× 10^37 AT2018fyk 1762 0.7 5.4× 10^37 AT2018fyk 1763 0.5 3.9× 10^37 AT2018hco 355 1.1 2.2× 10^38 AT2018hco 1282 0.9 1.6× 10^38 AT2018iih 320 0.7 9.1× 10^38 AT2018iih 1241 0.5 6.2× 10^38 AT2018lna 271 0.8 1.6× 10^38 AT2018lna 1193 0.6 1.2× 10^38 AT2018zr 571 0.7 7.8× 10^37 AT2018zr 1491 0.6 6.5× 10^37 AT2019bhf 245 1.7 6.1× 10^38 AT2019bhf 1147 0.8 2.9× 10^38 AT2019dsg 191 0.9 5.6× 10^37 AT2019dsg 1095 0.6 3.6× 10^37 AT2019gte 139 0.8 1.5× 10^38 AT2019gte 252 0.8 1.5× 10^38 AT2019gte 252 1.0 1.7× 10^38 AT2019gte 254 1.0 1.8× 10^38 AT2019gte 255 0.7 1.2× 10^38 AT2019gte 255 0.9 1.5× 10^38 AT2019gte 462 0.8 1.4× 10^38 AT2019gte 760 0.6 1.1× 10^38 AT2019gte 783 0.7 1.1× 10^38 AT2019gte 814 0.7 1.2× 10^38 AT2019gte 1037 0.6 1.0× 10^38 AT2019gte 1484 0.6 1.1× 10^38 AT2019gte 1497 0.9 1.6× 10^38 AT2019lwu 86 0.8 2.8× 10^38 AT2019lwu 197 0.9 3.0× 10^38 AT2019lwu 198 0.9 2.9× 10^38 AT2019lwu 200 0.8 2.8× 10^38 AT2019lwu 200 0.9 3.0× 10^38 AT2019lwu 201 0.7 2.3× 10^38 AT2019lwu 201 0.7 2.5× 10^38 AT2019lwu 407 0.9 3.0× 10^38 AT2019lwu 706 0.8 2.7× 10^38 AT2019lwu 730 0.8 2.7× 10^38 AT2019lwu 760 0.7 2.5× 10^38 AT2019lwu 1001 0.6 1.9× 10^38 AT2019lwu 1444 0.7 2.5× 10^38 AT2019lwu 1444 0.8 2.7× 10^38 AT2019vcb -52 0.8 1.4× 10^38 AT2019vcb 868 0.5 1.0× 10^38 AT2020acka -421 3.3 1.2× 10^40 AT2020acka 479 1.3 4.5× 10^39 AT2020neh -247 0.9 7.8× 10^37 AT2020neh 654 0.6 5.3× 10^37 AT2020pj -79 0.6 6.4× 10^37 AT2020pj 822 0.5 5.2× 10^37 AT2020vwl -361 0.7 2.0× 10^37 AT2020vwl 540 0.5 1.4× 10^37 AT2021ack -455 1.1 4.7× 10^38 AT2021ack 447 1.0 4.5× 10^38 AT2021ack 484 0.8 3.7× 10^38 AT2021ack 885 1.1 5.1× 10^38 AT2021ack 905 1.3 5.8× 10^38 AT2021axu -481 0.6 6.2× 10^38 AT2021axu 439 0.5 5.0× 10^38 AT2021blz -472 0.7 3.3× 10^37 AT2021blz 445 0.5 2.4× 10^37 AT2021blz 886 0.5 2.3× 10^37 AT2021ehb -487 0.8 5.3× 10^36 AT2021ehb 397 0.7 4.9× 10^36 AT2021gje -504 1.0 4.3× 10^39 AT2021gje 380 0.6 2.5× 10^39 AT2021jjm -540 1.3 7.8× 10^38 AT2021jjm 365 0.7 4.1× 10^38 AT2021jsg -471 0.6 2.3× 10^38 AT2021jsg 449 0.5 2.0× 10^38 AT2021lo -446 1.1 6.4× 10^38 AT2021lo 460 0.7 3.9× 10^38 AT2021lo 894 0.7 3.9× 10^38 AT2021lo 913 0.7 4.3× 10^38 AT2021mhg -595 0.8 9.4× 10^37 AT2021mhg 331 0.6 7.4× 10^37 AT2021uqv -674 0.8 2.2× 10^38 AT2021uqv 253 0.9 2.4× 10^38 AT2021uvz -656 0.8 6.6× 10^38 AT2021uvz 265 0.6 5.2× 10^38 AT2021yte -719 0.8 5.1× 10^37 AT2021yte 218 0.6 3.7× 10^37 AT2021yte 234 0.5 3.4× 10^37 AT2021yzv -655 0.9 2.3× 10^39 AT2021yzv 228 0.8 2.0× 10^39 AT2022adm -829 1.6 1.4× 10^38 AT2022adm 72 0.7 6.3× 10^37 AT2022arb -833 1.0 8.7× 10^37 AT2022arb -725 1.0 8.8× 10^37 AT2022arb -721 1.1 9.6× 10^37 AT2022arb -721 1.2 1.0× 10^38 AT2022arb -720 0.9 8.1× 10^37 AT2022arb -720 1.3 1.2× 10^38 AT2022arb -719 4.0 3.5× 10^38 AT2022arb -718 0.8 7.4× 10^37 AT2022arb -718 1.2 1.1× 10^38 AT2022arb -703 3.9 3.4× 10^38 AT2022arb -512 3.8 3.4× 10^38 AT2022arb -213 0.9 7.9× 10^37 AT2022arb -190 0.7 6.5× 10^37 AT2022arb 
-160 0.9 7.5× 10^37 AT2022arb 66 0.6 5.4× 10^37 AT2022arb 504 0.7 6.0× 10^37 AT2022arb 524 0.8 7.5× 10^37 AT2022czy -843 0.9 2.7× 10^38 AT2022czy 39 0.7 1.9× 10^38 AT2022dyt -885 0.7 8.3× 10^37 AT2022dyt 33 0.6 7.0× 10^37 AT2022exr -887 0.7 1.6× 10^38 AT2022exr 14 0.6 1.2× 10^38 aRadio luminosity is estimated as ν L_ν and we assume the emission to be spherical which results in the inclusion of 4π factor. aasjournal
http://arxiv.org/abs/2407.13488v1
20240718130855
Similarity over Factuality: Are we making progress on multimodal out-of-context misinformation detection?
[ "Stefanos-Iordanis Papadopoulos", "Christos Koutlis", "Symeon Papadopoulos", "Panagiotis C. Petrantonakis" ]
cs.CV
[ "cs.CV", "cs.MM" ]
[ [ Received: date / Accepted: date =================================== § ABSTRACT Out-of-context (OOC) misinformation poses a significant challenge in multimodal fact-checking, where images are paired with texts that misrepresent their original context to support false narratives. Recent research in evidence-based OOC detection has seen a trend towards increasingly complex architectures, incorporating Transformers, foundation models, and large language models. In this study, we introduce a simple yet robust baseline, which assesses MUltimodal SimilaritiEs (MUSE), specifically the similarity between image-text pairs and external image and text evidence. Our results demonstrate that MUSE, when used with conventional classifiers like Decision Tree, Random Forest, and Multilayer Perceptron, can compete with and even surpass the state-of-the-art on the NewsCLIPpings and VERITE datasets. Furthermore, integrating MUSE in our proposed “Attentive Intermediate Transformer Representations” (AITR) significantly improved performance, by 3.3% and 7.5% on NewsCLIPpings and VERITE, respectively. Nevertheless, the success of MUSE, relying on surface-level patterns and shortcuts, without examining factuality and logical inconsistencies, raises critical questions about how we define the task, construct datasets, collect external evidence and overall, how we assess progress in the field. We release our code at: <https://github.com/stevejpapad/outcontext-misinfo-progress>. § INTRODUCTION In recent decades, we have witnessed the proliferation of new types of misinformation, beyond fake news <cit.> and manipulated images <cit.>, including AI-generated “DeepFakes” <cit.> and misinformation that spans multiple modalities such as images and texts<cit.>. In an effort to assist the work of human fact-checkers, researchers have been leveraging the power of deep learning to automate certain aspects of the fact-checking process <cit.>, such as claim and stance detection, evidence and fact-check retrieval, and verdict prediction, among others<cit.>. In this study, we focus on multimodal fact-checking, specifically targeting evidence-based out-of-context (OOC) detection, a topic that has recently gained significant attention from researchers. OOC misinformation involves the presentation of images with captions that distort or misrepresent their original context<cit.>. Due to the lack of large-scale, annotated datasets for OOC detection, researchers have turned to algorithmic generation of OOC datasets <cit.> which have been used to train numerous methods for OOC detection <cit.>, some of which leverage external information or evidence to further enhance detection accuracy <cit.>. Overall, there is a trend towards increasingly complex architectures for OOC detection, including the integration of Transformers and memory networks <cit.>, fine-tuning foundational vision-language models <cit.>, incorporating modules for detecting relevant evidence <cit.> and leveraging instruction tuning and large language models <cit.>, which generally translate to marginal improvements in performance. We develop a simple yet robust baseline that leverages MUultimodal SimilaritiEs (MUSE), specifically CLIP-based <cit.> similarities between image-text pairs under verification and across external image and text evidence. 
Our findings show that training machine learning classifiers, such as Decision Tree, Random Forest and Multi-layer Perceptron with MUSE can compete and even outperform much more complex architectures on NewsCLIPpings <cit.> and VERITE <cit.> by up to 4.8%. Furthermore, integrating MUSE within complex architectures, such as our proposed “Attentive Intermediate Transformer Representations” (AITR) can further improve performance, by 3.3% on NewsCLIPpings and 7.5% on VERITE, over the state-of-the-art (SotA). Nevertheless, our analysis reveals that the models primarily rely on shortcuts and heuristics based on surface-level patterns rather than identifying logical or factual inconsistencies. For instance, as illustrated in Fig.<ref>, given a Truthful image-text pair, we use the text to retrieve image evidence and the image to retrieve text evidence from the web. Due to the popularity of the NewsCLIPpings' sources (USA Today, The Washington Post, BBC, and The Guardian) search engines often retrieve the exact same or highly related images and texts as those under verification, which, after re-ranking, are selected as the likely evidence. This results in a high `image-to-evidence image' (0.907) `text-to-evidence text' (0.597) similarities. In contrast, given the OOC image, we retrieve unrelated text evidence, leading to significantly lower similarity scores. Consequently, a model can learn to rely on simple heuristics such as, if the image-text pair exhibits significant similarity both internally and with the retrieved (and re-ranked) evidence, then the pair is likely truthful; otherwise, it is OOC. Furthermore, we show that these models yield high performance only within a limited definition of OOC misinformation, where legitimate images are paired with otherwise truthful texts from different contexts. In contrast, their performance deteriorates when dealing with `miscaptioned images,' where images are de-contextualised by introducing falsehoods in their captions i.e. by altering named entities such as people, dates, or locations. These findings raise critical questions about how realistic and robust the current frameworks are, how we define the task, create datasets, collect external evidence, and, more broadly, how we assess progress in OOC detection and multimodal fact-checking. In summary, we recommend future research to: 1) avoid training and evaluating methods solely on algorithmically created OOC datasets; 2) incorporate annotated evaluation benchmarks; 3) broaden the definition of OOC to include miscaptioned images, named entity manipulations, and other types of de-contextualization, and 4) to expand training datasets and collect external evidence accordingly. § RELATED WORK Out-Of-Context (OOC) detection, also known as image re-purposing, multimodal mismatching, or “CheapFakes”, involves pairing legitimate, non-manipulated, images with texts that misrepresent their context. Due to the lack of manually annotated and large-scale datasets, initial attempts to model out-of-context misinformation relied on randomly re-sampling image-text pairs <cit.>, while more sophisticated methods now rely on hard negative sampling, creating out-of-context pairs that maintain semantic similarity <cit.>. In turn, multiple methods have been proposed for OOC detection that cross-examine and attempt to identify inconsistencies within the image-text pair without leveraging external evidence <cit.>. 
Another strand of research has focused on constructing multimodal misinformation datasets through weak annotations <cit.> or named entity manipulations <cit.> but, to the best of our knowledge, such datasets have not yet been enhanced with external evidence and used for multimodal fact-checking. Nevertheless, professional fact-checkers[<https://www.factcheck.org/our-process>] rarely rely solely on internal inconsistencies between modalities and instead collect relevant external information, or evidence, that support or refute the claim under verification <cit.>. Furthermore, prior studies on evidence-based OOC detection demonstrate significant performance improvements when leveraging external information <cit.>. Specifically, Abdelnabi et al. <cit.> enhanced the NewsCLIPpings dataset <cit.> by collecting external evidence (See Section <ref>) and developed the Consistency Checking Network (CCN) which examines image-to-image and text-to-text consistency using attention-based memory networks, that employ ResNet152 for images and BERT for texts, as well as a fine-tuned CLIP (ViT B/32) for additional multimodal features. The Stance Extraction Network (SEN) employs the same encoders as CCN but enhances performance by semantically clustering external evidence to determine their stance toward the claim. It also integrates the co-occurrence of named entities between the text and textual evidence <cit.>. The Explainable and Context-Enhanced Network (ECENet) combines a coarse- and fine-grained attention network leveraging ResNet50, BERT and CLIP ViT-B/32 for multimodal feature extraction along with textual and visual entities <cit.>. SNIFFER examines the “internal consistency” of image-text pairs and their “external consistency” with evidence with the use of a large language model, InstructBLIP, that is first fine-tuned for news captioning and then for OOC detection, utilizing GPT-4 to generate instructions that primarily focus on named entities while the Google Entity Detection API is used for extracting visual entities <cit.>. Finally, the Relevant Evidence Detection Directed Transformer (RED-DOT) utilizes evidence re-ranking, element-wise modality fusion, guided attention and a Transformer encoder optimized with multi-task learning to predict the weakly annotated relevance of retrieved evidence <cit.>. On the whole, there is a noticeable trend toward increasing architectural complexity which typically translates into limited improvements in performance. In this study, we show how simple machine learning approaches can compete and even surpass complex SotA methods by simply leveraging multimodal similarities, which raises critical questions on how we define the task, collect data, external evidence and how we access progress in the field. § METHODOLOGY §.§ Problem Formulation We define the task of evidence-based out-of-context detection as follows: given dataset (I^v_i, T^v_i, I^e_i, T^e_i, y_i)_i=1^N where I^v_i, T^v_i represent the image-text pair under verification, I^e_i, T^e_i image and textual external information, or evidence, retrieved for the pair and y_i ∈{0,1} is the pair's ground-truth label, being either truthful (0) or out-of-context (1), the objective is to train classifier f: (ℐ^v, 𝒯^v, ℐ^e, 𝒯^e) →ŷ^v. 
§.§ Multimodal Similarities As shown in Fig.<ref>, given feature extractor F(·) and extracted features F_I^v, F_T^v, F_I^e, F_T^e, we use cosine similarity s to calculate the Multimodal Similarities (MUSE) vector S^v/e between s(F_I^v, F_T^v) image text pairs, s(F_I^v, F_I^e) image to image evidence, s(F_T^v, F_I^e) text to image evidence, s(F_I^v, F_T^e) image to text evidence, s(F_T^v, F_T^e) text to text evidence and s(F_I^e, F_T^e) image evidence to text evidence. Afterwards, the S^v/e vectors are used to train a machine learning classification such as Decision Tree (DT), Random Forest (RF) and Multi-layer Perceptron (MLP), denoted as MUSE-DT/RF/MLP, respectively, or are integrated within the “Attentive Intermediate Transformer Representations” (AITR) network. §.§ Attentive Intermediate Transformer Representations Attentive Intermediate Transformer Representations (AITR) attempts to model how human fact-checkers may iterate multiple times over the claim and collected evidence during verification, drawing various inferences and interpretations at each pass, exploring both general and fine-grained aspects and finally reassessing the entire process while assigning different weights to different aspects at each stage of analysis. As shown in Fig.<ref>, AITR utilizes a stack of n Transformer encoder layers E(·) = [E_1, E_2, ⋯, E_n] with h = [h_1, h_2, ⋯, h_n] number of multi-head attention enabling both stable attention (e.g., h = [8,8,8,8]) and granular attention, ranging from general to fine-grained (e.g., h = [1,2,4,8]) or from fine-grained to general (e.g., h = [8,4,2,1]). Given initial input: x_0 = [C_0;F^v;F^e;S^v/e] where C_0 is a learnable classification token, F^v represents element-wise modality fusion <cit.> defined as F^v = [F_I^v;F_T^v;F_I^v+F_T^v;F_I^v-F_T^v;F_I^v*F_T^v], F_e = [F_I^e; F_T^e] and “;” denoting concatenation, intermediate Transformer outputs are given by: x_i = E_i(x_i-1) for i ∈{1, 2, …, n} From each intermediate output, we extract the processed classification tokens 𝒞 = [C_1, C_2, …, C_n] and apply the scaled-dot product self-attention mechanism: 𝒞_a = softmax(Q · K^T/√(d)) · V with fully connected layers Q = W_q ·𝒞, K = W_k ·𝒞, V = W_v ·𝒞 and W_q, W_k, W_v ∈ℝ^d × d. Afterwards, we use average pooling to calculate 𝒞_p = 1/n∑_i=1^n𝒞_a[:, i, :] and a final classification layer to predict ŷ^v=W_1·GELU(W_0·𝒞_p) with W_0∈ℝ^1 × d and W_1∈ℝ^d × 1. § EXPERIMENTAL SETUP §.§ Datasets We utilize the NewsCLIPpings Merged/Balanced dataset, comprising 85,360 samples in total <cit.>, 42,680 “Pristine” or truthful ℐ^v, 𝒯^v pairs sourced from credible news sources -as provided by the VisualNews dataset <cit.>- and 42,680 algorithmically created OOC pairs. Specifically, OOC pairs are generated by mismatching the initial image or text with another, utilizing semantic similarities, either CLIP text-to-image or text-to-text similarities, SBERT-WK for text-to-text person mismatching, and ResNet Place for scene mismatching. Furthermore, we utilize the VERITE evaluation benchmark <cit.> comprising 1,000 annotated samples, 338 truthful pairs, 338 miscaptioned images and 324 out-of-context pairs. §.§ External Evidence For NewsCLIPpings, we use the external evidence ℐ^e, 𝒯^e as provided by <cit.> comprising up to 19 text evidence and up to 10 image evidence for each I^v_i, T^v_i pair, collected via Google API; totaling to 146,032 and 736,731 textual and image evidence, respectively. 
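As a concrete illustration of how compact the MUSE representation is, the sketch below computes the six similarity features from precomputed CLIP embeddings and trains a standard scikit-learn classifier on them. This is a simplified illustration rather than the authors' released implementation; the feature arrays are assumed to have been extracted beforehand (e.g., with a CLIP ViT-L/14 encoder) and are represented here by random placeholders.

import numpy as np
from sklearn.neural_network import MLPClassifier

def cos(a, b):
    # Row-wise cosine similarity between two feature matrices.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def muse_features(img_v, txt_v, img_e, txt_e):
    # The six MUSE similarities described above.
    return np.stack([
        cos(img_v, txt_v),   # image-text pair
        cos(img_v, img_e),   # image to image evidence
        cos(txt_v, img_e),   # text to image evidence
        cos(img_v, txt_e),   # image to text evidence
        cos(txt_v, txt_e),   # text to text evidence
        cos(img_e, txt_e),   # image evidence to text evidence
    ], axis=1)

# Placeholder CLIP features (n samples, d dimensions) and labels (0 truthful, 1 OOC).
n, d = 1000, 768
rng = np.random.default_rng(0)
img_v, txt_v, img_e, txt_e = (rng.normal(size=(n, d)) for _ in range(4))
y = rng.integers(0, 2, size=n)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(muse_features(img_v, txt_v, img_e, txt_e), y)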
Specifically, the authors employ cross-modal retrieval, namely the text T^v_i is used to retrieve potentially relevant image evidence I^e_i and image I^v_i to retrieve potentially relevant textual evidence T^e_i. We use the same Training, Validation and Testing sets as prior works to ensure comparability. For VERITE, we employ the external evidence as provided by <cit.>. Instead of utilizing all provided evidence as in <cit.>, we follow <cit.>, in re-ranking the external evidence based on CLIP <cit.> intra-modal similarities (image-to-image evidence, text-to-text evidence). We only select the top-1 items, as leveraging additional items was shown to degrade performance by introducing less relevant and noisy information into the detection model. §.§ Backbone Encoder Following <cit.> we use the pre-trained CLIP ViT B/32 and ViT L/14 <cit.> as the backbone encoders in order to extract visual F_I^v, F_I^e∈R^d× 1 and textual features F_T^v, F_T^e∈R^d× 1 with dimensionality d=512 or d=768 for CLIP ViT B/32 and L/14, respectively. Unless stated otherwise, we employ L/14 while using B/32 only for comparability purposes with some older works. We use the “openai” version of the models as provided by OpenCLIP[<https://github.com/mlfoundations/open_clip>]. §.§ Evaluation Protocol We train each model on the NewsCLIPpings train set, tune the models' hyper-parameters on the validation set and report the best version's accuracy on the NewsCLIPpings test set and unless stated otherwise, as in Table <ref>, we report the “True vs OOC” accuracy for VERITE. To ensure comparability with <cit.> on VERITE, we report the mean “out-of-distribution cross-validation" (OOD-CV) accuracy for VERITE in Table <ref>. Specifically, we validate and checkpoint a model on a single VERITE fold (k=3) while evaluating its performance on the other folds. We then retrieve the model version (hyper-parameter combination) that achieved the highest mean validation score and report its mean performance of the testing folds. §.§ Implementation Details We train AITR for a maximum of 50 epochs, with early stopping and check-pointing set at 10 epochs to prevent overfitting. The AdamW optimizer is utilized with ϵ=1e-8 and weight decay=0.01. We employ a batch size of 512 and a transformer dropout rate of 0.1 During hyperparameter tuning, we explore learning rates lr ∈{1e-4, 5e-5}, transformer feed-forward layer dimension z∈{256, 1024, 2048} and for h we try the following values [4,4,4,4], [8, 8, 8, 8], [1,2,4,8], [8,4,2,1]. In the ablation experiments that do not leverage intermediate transformer representations, we exclude the h= [1,2,4,8] and [8,4,2,1] configurations. To ensure reproducibility of our experiments, we use a constant random seed of 0 for PyTorch, Python Random, and NumPy. § EXPERIMENTAL RESULTS §.§ Ablation and Comparative Studies Table <ref> presents the ablation study results for AITR which consistently achieves the highest performance among all ablation configurations, underscoring the importance of each component. Specifically, substituting the attention mechanism with max pooling or weighted pooling leads to a notable reduction in performance across both datasets. Similarly, using the default transformer encoder (Pooling = None) without leveraging intermediate representations lowers performance. Notably, the most critical component of AITR appears to be MUSE, as removing it significantly deteriorates the model's performance, especially on VERITE, in both AITR and the default Transformer encoder. 
The best AITR performance was achieved with h=[1, 2, 4, 8], z=2048 and learning rate 5e-5. In comparison with the current SotA, as shown in Table <ref>, MUSE-MLP competes with and even outperforms much more complex architectures on NewsCLIPpings. Specifically, MUSE-MLP (90%) performs similarly to RED-DOT (90.3%) while surpassing SNIFFER (88.4%), ECENet (87.7%), SEN (87.1%) and CCN (84.7%). Notably, MUSE-MLP also significantly outperforms RED-DOT on VERITE, with a +4.8% relative improvement. Furthermore, integrating MUSE within AITR significantly outperforms the SotA on NewsCLIPpings by +3.3% and on VERITE by +7.5%. While this study primarily focuses on evidence-based approaches, we may also note that MUSE-MLP with s(F_I^v, F_T^v) and no external evidence achieves 80.7% on NewsCLIPpings, as seen in Table <ref>, and thus outperforms complex and resource-intensive architectures such as Self-Supervised Distilled Learning <cit.> (71%), which uses a fully fine-tuned CLIP ResNet50 backbone on NewsCLIPpings, and the Detector Transformer <cit.> (77.1%), and even competes against RED-DOT without evidence (81.7%) <cit.>. §.§ Similarity Importance Furthermore, we examine the contribution of each similarity measure within S^v/e. Table <ref> illustrates the performance and feature importance of the Decision Tree and Random Forest classifiers. We observe that both classifiers put the highest emphasis on the image-text pair similarity s(F_I^v, F_T^v), followed by image to image evidence s(F_I^v, F_I^e) and text to text evidence s(F_T^v, F_T^e). Table <ref> demonstrates an ablation of the MUSE-MLP classifier on NewsCLIPpings (N) and VERITE (V) while excluding certain similarities. We observe that employing S^v/e with all 6 similarity measures consistently achieves the highest overall accuracy (N=89.86, V=80.54) on both datasets. Therefore, each similarity measure contributes to some extent to the overall performance. Nevertheless, among single-similarity experiments, we observe that s(F_T^v, F_I^e) and s(F_I^v, F_T^e) yield near-random performance, while the image-text pair similarity s(F_I^v, F_T^v) yields the highest performance (N=80.69, V=70.89), followed by image to image evidence s(F_I^v, F_I^e) (N=79.86, V=68.02) and then text to text evidence s(F_T^v, F_T^e) (N=71.83, V=52.19), where performance, especially on VERITE, drops significantly. Similarly, removing the image-text pair similarity s(F_I^v, F_T^v) results in a notable drop in performance, with N=85.64% and V=69.68%. Again, similar to the Random Forest and Decision Tree classifiers, it is s(F_I^v, F_T^v) that has the highest contribution, followed by s(F_I^v, F_I^e) and s(F_T^v, F_T^e). §.§ Performance with Limited Data As shown in Fig. <ref>, MUSE-MLP maintains high performance on both datasets when using only 25% of the NewsCLIPpings training set. Notably, MUSE-RF maintains high performance even when trained with 1% of the training set, which translates to only 710 samples. Surprisingly, even with 0.1% and 0.05% of the dataset, or 71 and 36 samples, respectively, the performance of MUSE-RF does not completely deteriorate. This means that the patterns that MUSE-RF relies on are simple enough that they can be learned even from a few tens or hundreds of samples. §.§ Pattern Analysis By examining Fig.<ref>, illustrating the distributions of the 6 similarity measures in NewsCLIPpings, we observe clear differences between the Truthful and OOC distributions, primarily on s(F_I^v, F_T^v), s(F_I^v, F_I^e), s(F_T^v, F_T^e) and s(F_I^e, F_T^e).
Indicatively, the median values of s(F_I^v, F_T^v), s(F_I^v, F_I^e), s(F_T^v, F_T^e) are 0.27, 0.91, 0.63 for Truthful pairs and 0.19, 0.69, 0.32 for OOC pairs. In contrast, s(F_T^v, F_I^e) and s(F_I^v, F_T^e) demonstrate mostly overlapping distributions between the True and OOC classes, which explains why they result in near-random performance in the single-similarity experiments of Table <ref>. In Fig. <ref> we observe that the similarity distributions of VERITE exhibit relatively similar “True vs OOC” distributions to NewsCLIPpings in terms of s(F_I^v, F_T^v) and s(F_I^v, F_I^e), but not s(F_T^v, F_T^e), which has mostly overlapping distributions. Indicatively, the median values of s(F_I^v, F_T^v), s(F_I^v, F_I^e), s(F_T^v, F_T^e) are 0.31, 0.83, 0.32 for Truthful pairs and 0.24, 0.69, 0.28 for OOC pairs. Importantly, we also observe that the “True vs Miscaptioned” distributions are overlapping on s(F_I^v, F_I^e) and that the s(F_T^v, F_T^e) similarities of the “Miscaptioned” class are skewed towards higher similarity, with median values of 0.29, 0.82 and 0.46 for s(F_I^v, F_T^v), s(F_I^v, F_I^e) and s(F_T^v, F_T^e), respectively, thus inverting the pattern found in NewsCLIPpings. As a result, as seen in Table <ref>, while MUSE and AITR exhibit high performance on VERITE in terms of “True vs OOC”, their performance completely degrades on the “True vs Miscaptioned” evaluation. § DISCUSSION Overall, the experimental results indicate that while our methods surpass the SotA, they primarily rely on shortcuts and simple heuristics rather than detecting logical and factual inconsistencies. This raises critical questions about the realism and robustness of the current OOC detection framework, as well as how we define the task and collect data and external information. As discussed in Section <ref>, our proposed methods, MUSE and AITR, reach high accuracy scores on NewsCLIPpings, surpassing the SotA. It is important to note that OOC samples in NewsCLIPpings are generated by misaligning the original, truthful image-text pairs with other semantically similar images or texts, based on similarities computed from CLIP, ResNet and S-BERT features. Consequently, the truthful pairs tend to exhibit relatively higher cross-modal similarity, while OOC pairs demonstrate lower similarity, as seen in Fig.<ref>. By relying on this simple relation, MUSE-MLP achieved a high accuracy of 81% without incorporating any external information. Integrating multimodal similarities with external evidence increased the detection accuracy to 90-93%. To understand this result, it is essential to consider the role of the evidence retrieval process. Following <cit.>, external evidence is gathered through cross-modal retrieval, where the image I^v is used to retrieve text evidence T^e and the text T^v is used to retrieve image evidence I^e. Afterwards, we re-rank the retrieved items based on intra-modal similarity, meaning image-to-image and text-to-text comparisons. Considering that the original truthful image-text pairs in NewsCLIPpings are sourced from VisualNews, which in turn collected pairs from four mainstream sources (USA Today, The Washington Post, BBC, and The Guardian), it is highly likely that during the evidence collection process the same source article, or a highly related one, is retrieved. These conditions contribute significantly to the high accuracy observed.
For instance, as illustrated in Fig.<ref> and discussed in Section <ref>, the Truthful pair exhibits very high s(F_I^v, F_I^e) (0.907) and relatively high s(F_T^v, F_T^e) (0.597) similarity scores with the retrieved evidence, while the OOC sample exhibits significantly lower scores. Although relying on such heuristics leads to high performance on the NewsCLIPpings dataset, the performance on the annotated OOC samples of VERITE is more limited, particularly for the OOC class (70%), which is the primary focus of this task. In terms of the “True vs OOC” evaluation on VERITE, our methods consistently outperform the SotA, though they display lower accuracy compared to NewsCLIPpings, achieving scores around 80-82%. Additionally, there is a notable imbalance, with higher accuracy for the Truthful class (90-92%) compared to the OOC class (70%). More importantly, as discussed in relation to Fig.<ref>, MUSE and AITR cannot generalize to `Miscaptioned' samples of VERITE. This is because miscaptioned images, as defined by Snopes and Reuters, typically involve images and texts that are highly related, but with some key aspect being misrepresented in the text, such as a person, date, or event. CLIP features and similarities do not capture the subtle linguistic differences necessary to detect such cases. Nevertheless, there is certainly room for further improving OOC detection. Firstly, we recommend that future research in this field not only utilize algorithmically generated misinformation (e.g., NewsCLIPpings) but also incorporate annotated evaluation benchmarks such as VERITE. Additionally, it is crucial to implement evaluation tests and analyses that demonstrate the models' reliance on factuality and their ability to detect logical inconsistencies, rather than merely exploiting shortcuts and simple heuristics. Furthermore, we find the current working definition of OOC to be rather limiting, as it focuses solely on truthful texts combined with mismatched (out-of-context) images. This definition may not fully capture the complexity of real-world OOC misinformation, where the texts themselves often contain falsehoods[ "Miscaptioned: photographs and videos that are "real" (i.e., not the product, partially or wholly, of digital manipulation) but are nonetheless misleading because they are accompanied by explanatory material that falsely describes their origin, context, and/or meaning." <https://www.snopes.com/fact-check/rating/miscaptioned>]. We recommend that future research in the field of automated fact-checking and evidence-based OOC detection expand its methods and training datasets to also include `miscaptioned images' <cit.>, which encompass cases where an image is decontextualized but key aspects of the image, such as the person, date, or event, are misrepresented within the text. To this end, weakly annotated datasets such as Fakeddit <cit.> and algorithmically created datasets based on named-entity manipulations, such as MEIR, TamperedNews and CHASMA <cit.>, can prove useful if they are augmented with external evidence and combined with existing OOC datasets such as NewsCLIPpings. Finally, we recommend that future researchers consider the problem of "leaked evidence" while collecting external information from the web <cit.>. § CONCLUSIONS In this study, we address the challenge of out-of-context (OOC) detection by leveraging multimodal similarities (MUSE) between image-text pairs and external image and text evidence.
Our results indicate that MUSE, even when used with conventional machine learning classifiers, can compete against complex architectures and even outperform the SotA on the NewsCLIPpings and VERITE datasets. Furthermore, integrating MUSE within our proposed “Attentive Intermediate Transformer Representations” (AITR) yielded further improvements in performance. However, we discovered that these models predominantly rely on shortcuts and simple heuristics for OOC detection rather than assessing factuality. Additionally, we found that these models excel only under a narrow definition of OOC misinformation, while their performance deteriorates under other types of de-contextualization. These findings raise critical questions about the current direction of the field, including the definition of OOC misinformation, dataset construction, and evidence collection, and we discuss potential future directions to address these challenges. § ACKNOWLEDGMENTS This work is partially funded by the project “vera.ai: VERification Assisted by Artificial Intelligence” under grant agreement no. 101070093.
http://arxiv.org/abs/2407.12303v1
20240717035115
Optical pumping through the Liouvillian skin effect
[ "De-Huan Cai", "Wei Yi", "Chen-Xiao Dong" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas" ]
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China wyiz@ustc.edu.cn CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China Anhui Province Key Laboratory of Quantum Network, University of Science and Technology of China, Hefei, 230026, China CAS Center For Excellence in Quantum Information and Quantum Physics, Hefei 230026, China Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China cxdong@hfnl.cn Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China § ABSTRACT The Liouvillian skin effect describes the boundary affinity of Liouvillian eigenmodes that originates from the intrinsic non-Hermiticity of the Liouvillian superoperators. Dynamically, it manifests as directional flow in the transient dynamics, and the accumulation of population near open boundaries at long times. Intriguingly, similar dynamic phenomena exist in the well-known process of optical pumping, where the system is driven into a desired state (or a dark-state subspace) through the interplay of dissipation and optical drive. In this work, we show that typical optical pumping processes can indeed be understood in terms of the Liouvillian skin effect. By studying the Liouvillian spectra under different boundary conditions, we reveal that the Liouvillian spectra of the driven-dissipative pumping process sensitively depend on the boundary conditions in the state space, a signature that lies at the origin of the Liouvillian skin effect. Such a connection provides insights and practical means for designing efficient optical-pumping schemes through engineering Liouvillian gaps under the open-boundary condition. Based on these understandings, we show that the efficiency of a typical side-band cooling scheme for trapped ions can be dramatically enhanced by introducing counterintuitive dissipative channels. Our results provide a useful perspective for optical pumping, with interesting implications for state preparation and cooling. 67.85.Lm, 03.75.Ss, 05.30.Fk Optical pumping through the Liouvillian skin effect Chen-Xiao Dong July 22, 2024 =================================================== § INTRODUCTION Optical pumping is a fundamentally important technique in the study of atomic, molecular, and optical physics <cit.>. Originally developed to achieve the population inversion necessary for lasing <cit.>, it has become the standard practice to cyclically pump atoms to a given quantum state <cit.>, often with a well-defined magnetic quantum number. More generally, through the ingenious design of optical drive and dissipation, a quantum open system can be driven into a desired steady state (or a desired dark-state subspace) at long times <cit.>. Such general optical pumping processes are widely used for state preparation and cooling <cit.>, and offer promising paradigms for quantum simulation with atoms <cit.>. Phenomenologically, a typical optical pumping process manifests two salient features: the directional flow of population in the state space, and the long-time population accumulation in the final steady state, which, given its dark-state nature, can be considered as a boundary in the state space. Intriguingly, these features also manifest in systems with the non-Hermitian skin effect, a phenomenon that has attracted extensive interest in recent years <cit.>. 
The non-Hermitian skin effect describes the accumulation of eigenstates near the boundaries of certain non-Hermitian systems <cit.>. It derives from the instability of the eigenvalue problems of non-Hermitian matrices to boundary perturbations, and has a profound impact on the band and spectral topologies <cit.>, as well as the bulk dynamics <cit.>. Experimentally, the non-Hermitian skin effect and its various manifestations have been observed in classical systems with gain and/or loss <cit.>, and in the conditional dynamics of quantum open systems subject to post selection <cit.>. But the non-Hermitian skin effect also arises in the full-fledged quantum dynamics governed by the Lindblad master equation, wherein the Liouvillian superoperator can be mapped to a non-Hermitian matrix in an enlarged Hilbert space. Alternatively, under the master equation, the single-particle correlation evolves according to a non-Hermitian damping matrix <cit.>. The corresponding non-Hermitian skin effect in quantum open systems, dubbed the Liouvillian skin effect <cit.>, hosts chiral damping and directional bulk flow in the transient dynamics, as well as various boundary-sensitive long-time behaviors, such as the time scale at which the steady state is approached, and the boundary affinity of steady-state population <cit.>. While the Liouvillian skin effect has yet to be explicitly demonstrated in experiments, the resemblance of its dynamic consequences to those of optical pumping strongly suggests an intimate, if not direct, connection between them. In this work, we show that typical optical pumping processes can indeed be understood in terms of the Liouvillian skin effect of the underlying quantum master equation. As illustrated in Fig. <ref>(a), we focus on a generic optical pumping setup, where a series of otherwise independent quantum-state sectors (labeled by l) are connected by directional dissipation. The quantum states within each sector are coupled by coherent optical fields, and may be subject to additional incoherent dissipative processes in between. A discrete translational symmetry in l is possible, but not necessary. Typical examples of such a general setup include the simplest optical pumping process in a three-level system, and the side-band cooling in trapped ions. In these examples, an open boundary condition (OBC) is naturally present, with the final state of the pumping process forming an open boundary. However, for the sake of discussion, a formal periodic boundary condition (PBC) can also be enforced by connecting the left-most and right-most sectors [as illustrated in Fig. <ref>(a)]. We take a typical side-band cooling configuration as an example, and study the Liouvillian spectra of the system. We find that the eigenspectra sensitively depend on the boundary conditions, a signature that lies at the origin of the Liouvillian (or non-Hermitian) skin effect. The existence of the Liouvillian skin effect is further confirmed by the directional bulk flow and the accumulation of the steady-state population at the open boundary, both of which are also natural consequences of the side-band cooling (or optical pumping) setup. Such a connection provides insights into the further design of efficient optical pumping schemes. Specifically, since the time for the system to reach the steady state is determined by the Liouvillian gap, the efficiency of the optical pumping process can be enhanced by engineering larger Liouvillian gaps. 
Through analytic and numerical analyses, we identify the condition to maximize the Liouvillian gap of our system, which is surprisingly achieved by introducing dissipative processes that are opposite in direction to the bulk flow. Our work is organized as follows. In Sec. II, we introduce the model that we consider, demonstrate the dynamic signatures of the Liouvillian skin effect, and discuss its connection with optical pumping. In Sec. III, we discuss the origin of the Liouvillian skin effect through analytic and numerical characterization of the Liouvillian spectrum. In Sec. IV, we show how the efficiency of the optical pumping process can be enhanced by optimizing the Liouvillian gap. We summarize in Sec. V. § LIOUVILLIAN SKIN EFFECT IN OPTICAL PUMPING As illustrated in Fig. <ref>(b), we consider a concrete example of the general optical pumping process, where external light fields couple transitions from the ground to the excited states, and, aided by dissipative processes, eventually pump the system to a given steady state. Specifically, a set of ground states with energy intervals {ω_l} (l=1,2,...) is labeled as {|g,l⟩=|n=2l-1⟩}, and the corresponding excited states are labeled as {|e,l⟩=|n=2l⟩}. The Rabi frequencies of the coherent optical couplings are {Ω_l}, and γ_0 and γ_1 are the decay rates from an excited state to different states in the ground-state manifold. Physically, l can label magnetic quantum numbers in the ground- and excited-state hyperfine manifolds <cit.>, in which case the scheme in Fig. <ref>(b) corresponds to a typical optical pumping for state preparation. Alternatively, l can label phonon side bands in trapped ions, in which case Fig. <ref>(b) depicts side-band cooling <cit.>. Regardless of the physical correspondence, the time evolution of the density matrix under the couplings of Fig. <ref>(b) is determined by the Lindblad master equation (we take ħ=1) <cit.> dρ/dt = -i[H,ρ] + ∑_l,p(2L_l,pρ L^†_l,p- {L^†_l,pL_l,p,ρ}) ≡ℒ(ρ). Here the coherent Hamiltonian reads H = ∑_l[(∑_j=1^lω_j)(|g,l+1⟩⟨ g,l+1| + |e,l+1⟩⟨ e,l+1|)] + ∑_lΩ_l(|e,l⟩⟨ g,l+1| + H.c.), and the quantum jump operators are L_l,p=0=√(γ_0)|g,l+1⟩⟨ e,l|, L_l,p=1=√(γ_1)|g,l⟩⟨ e,l|. We denote the Hilbert-space dimension of the system as N, with n_max=2l_max=N. Then the right and left eigenmodes of the Liouvillian superoperator ℒ, defined in an N^2-dimensional extended Hilbert space, are given by ℒ(ρ_μ^R) = λ_μρ_μ^R, ℒ^†(ρ_μ^L) = λ_μ^∗ρ_μ^L, with μ=1,2,3,...,N^2. The right and left eigenmodes are normalized as √(⟨ρ^R_μ|ρ^R_μ⟩)=√(⟨ρ^L_μ|ρ^L_μ⟩)=1, and are orthogonal to each other (⟨ρ^L_μ|ρ^R_ν⟩=0) when their eigenvalues are different (λ_μ≠λ_ν). In particular, the eigenmodes of ℒ with vanishing eigenvalues are the steady states of the system, with ℒ(ρ_ss) =0. It follows that the density matrix of the initial state can be expanded as ρ_ini=∑_μ=1^N^2c_μρ_μ^R, where c_μ=⟨ρ^L_μ|ρ_ini⟩/⟨ρ^L_μ|ρ^R_μ⟩ according to the completeness condition ∑_μ|ρ^R_μ⟩⟨ρ^L_μ|/⟨ρ^L_μ|ρ^R_μ⟩=1. Thus, the time evolution of the density matrix can be written as ρ(t)=∑_μ=1^N^2c_μe^λ_μtρ_μ^R. Note that the real parts of the eigenvalues of the excited eigenmodes (those that are not steady states) must be negative to ensure that their contributions in Eq. <ref> become exponentially small after a long enough time evolution, as the system approaches the steady states. Here we set ρ_ss=ρ^R_μ=1, and assume that all eigenvalues are indexed in descending order according to their real parts: 0=λ_1>Re[λ_2]≥Re[λ_3]...≥Re[λ_N^2]. 
Equation <ref> can then be rewritten as ρ(t)=ρ_ss + ∑_μ=2^N^2c_μe^λ_μtρ_μ^R. Importantly, the Liouvillian gap is defined as Δ=|Re[λ_2]|, which describes the asymptotic decay rate of the system toward the steady states at long times <cit.>. We first consider the simple case with ω_l = 0, Ω_l = Ω, and γ_0 =0. It follows that Hamiltonian (<ref>) is simplified to H=∑_lΩ(|e,l⟩⟨ g,l+1| + H.c.), and only a single quantum jump process exists for each pair of ground and excited states, given by L_l,1. In Hamiltonian (<ref>), states with the smallest and largest n indices are not coupled. This corresponds to an OBC in the state space. By contrast, one may consider an artificial PBC, where all states are cyclically coupled. Such a PBC is achieved by adding the term (Ω|e,l_max⟩⟨ g,1| +H.c.) to Eq. (<ref>), where l_max is the maximum l. Although the PBC is unphysical, it offers insights into the setup as we detail below. Alternatively, one may consider the state label n as lattice sites along a synthetic dimension. Different boundary conditions in the synthetic dimension then directly correspond to boundary conditions in the state space. With these understandings, we now study the Liouvillian spectrum and dynamic evolution of the master equation, under different boundary conditions. As depicted in Fig. <ref>(a), the eigenvalues of the Liouvillian superoperator ℒ under the PBC form a closed loop on the complex plane, enclosing those under the OBC. This is reminiscent of the spectral topology of non-Hermitian Hamiltonians with the skin effect, and is a prominent signature of the Liouvillian skin effect. In either case, the drastic difference in the eigenspectrum under different boundary conditions originates from the instability of non-Hermitian matrices to boundary perturbations. Fig. <ref>(b) shows the density-matrix elements ρ_nm of the steady state under the OBC. Here the density-matrix element is defined as ρ_nm=⟨ n|ρ^R_μ=1|m⟩. The steady state is indeed localized in |g,l=1⟩, corresponding to an open boundary. The corresponding steady state under the PBC is shown in Fig. <ref>(c), where uniform distributions in l are observed for both the ground and excited states. A closer look reveals that, in the steady state under the PBC, the majority of the population is in the ground state. Another drastic distinction between the Liouvillian spectra under the OBC and PBC is the Liouvillian gap. As shown in Fig. <ref>(d), the Liouvillian gap Δ tends to zero as the size of the system increases under the PBC. By contrast, the gap is independent of the system size under the OBC. A finite Liouvillian gap implies that the density matrix in Eq. (<ref>) converges exponentially fast to the steady state at long times, whereas a vanishing Liouvillian gap implies an algebraic convergence, such that the relaxation time diverges for Δ→ 0 <cit.>. Taking the size of the system as N=60 in Fig. <ref>(e) and (f), we evolve the system according to Eq. (<ref>), while setting the initial state to |g,15⟩=|n=29⟩. Under the OBC, the occupation rapidly flows toward the boundary and eventually evolves to the steady state as shown in Fig. <ref>(b). This is the dynamic signature of the Liouvillian skin effect. In the context of optical pumping, such a directional flow is the underlying mechanism for state preparation and cooling. For instance, in trapped ions, the index l corresponds to the phonon modes. 
The coherent optical drives are implemented by side-band couplings, and the directional flow toward l=0 corresponds to cooling of the external ion motion. The timescale or efficiency of the cooling process is then determined by the Liouvillian gap under the OBC. Under the PBC, since the Liouvillian gap is much smaller, the time it takes to relax to the steady state is much longer, and diverges in the thermodynamic limit. More generally, we have ω_l ≠ 0, and state-dependent Ω_l (but still with γ_0 =0). The Liouvillian spectrum under the PBC no longer encloses the one under OBC, but they remain different, as shown in Fig. <ref>. The Liouvillian skin effect persists, and the steady state under the OBC remains the same as that in Fig. <ref>(b). The long-time evolution of the system generates a directional flow, similar to the results shown in Fig. <ref>(e), and the relaxation time depends on the Liouvillian gap. In the next section, we will illustrate the structure of the Liouvillian spectrum and the origin of the Liouvillian skin effect for the general case through analytic methods. § ANALYTIC STUDY OF THE LIOUVILLIAN In this section, we analytically solve the spectrum of the Liouvillian superoperator to elucidate the origin of the Liouvillian skin effect described in the previous section. First, we rearrange the Lindblad equation (<ref>) into dρ/dt=-i(H_effρ-ρ H_eff^†)+∑_l,p2L_l,pρ L^†_l,p, where the effective non-Hermitian Hamiltonian is H_eff=H-i ∑_l,pL^†_l,p L_l,p. We observe that the effective non-Hermitian Hamiltonian for the setup in Fig. <ref> is block-diagonal. This is because both the coherent Hamiltonian H and the terms -i ∑_l,pL^†_l,p L_l,p are block-diagonal with respect to the subsystems shown in Fig. <ref>(a). We hence denote H_eff=H_1⊕ H_2⊕⋯⊕ H_m, where m represents the number of subsystems and each H_i represents an individual subsystem with the Hilbert-space dimension n_i, with ∑_j=1^m n_j = N. For our model in Fig. <ref>(b), we find that the effective Hamiltonian is composed of two single-level systems with on-site energies 0 and ∑_l=1^N/2-1ω_l-iγ_1, and a series of (N/2-1) two-level subsystems each described by the Hamiltonian H_j= (∑_l=1^jω_l)|g,j+1⟩⟨ g,j+1| + (∑_l=1^j-1ω_l-iγ_1) |e,j⟩⟨ e,j| + Ω_j(|e,j⟩⟨ g,j+1| + |g,j+1⟩⟨ e,j| ), where j=1,2,⋯,N/2-1. Under the PBC, we observe that all N/2 subsystems in the effective Hamiltonian are two-level systems, given by the Hamiltonian in Eq. (<ref>), but with H_N/2= (∑_l=1^N/2-1ω_l-iγ_1) |e,N/2⟩⟨ e,N/2| + Ω_N/2(|e,N/2⟩⟨ g,1| + |g,1⟩⟨ e,N/2| ). Additionally, we observe that the contribution from the recycling terms ∑_l,p2L_l,pρ L^†_l,p exists either between two adjacent subsystems, or within an individual subsystem (defined as ℒ_0 below). Hence the overall Liouvillian superoperator is also block-diagonal in its matrix form, as illustrated in Fig. <ref>(a). The large block with intra-block recycling-term contribution is given by the Liouvillian ℒ_0=-i∑^m_j=1(H_j⊗ℐ_n_j-ℐ_n_j⊗ H_j)+∑_l,p2L_l,pρ L^†_l,p, with the dimension ∑^m_j=1n_j^2. Other blocks are given by ℒ_lj=-i(H_j⊗ℐ_n_l-ℐ_n_j⊗ H_l), with dimensions n_l n_j, where l,j=1,2,⋯,N/2-1 and l≠ j. Here ℐ_n is the identity matrix with dimension n. Due to the block-diagonal structure of the Liouvillian, its eigenspectrum is analytically solvable by diagonalizing ℒ_0 and ℒ_lj, respectively. Specifically, in our model, since the dimensions of H_j are less than or equal to 2, the dimension of any given ℒ_lj is less than or equal to 4. Moreover, ℒ_0 has a special structure that is easy to diagonalize. 
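To make the boundary-condition dependence discussed above concrete, the following minimal numpy sketch (our own illustration, not the authors' code) builds the vectorized Liouvillian of the master equation quoted in Sec. II for the simplified chain (ω_l=0, Ω_l=Ω, γ_0=0) and compares its spectrum under OBC and PBC. The column-stacking convention vec(AρB)=(B^T⊗A)vec(ρ) and all helper names are our choices; the absolute scale of the gap depends on the damping convention used in the master equation, so only the qualitative trends should be read off.

import numpy as np

def liouvillian(H, jump_ops):
    """Matrix form of L(rho) = -i[H, rho] + sum_k (2 L_k rho L_k^+ - {L_k^+ L_k, rho}),
    using the column-stacking convention vec(A rho B) = (B^T kron A) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    Lmat = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for Lk in jump_ops:
        LdL = Lk.conj().T @ Lk
        Lmat += 2 * np.kron(Lk.conj(), Lk) - np.kron(I, LdL) - np.kron(LdL.T, I)
    return Lmat

def pumping_chain(N, Omega, gamma1, pbc=False):
    """Simplified chain of the main text: |g,l> = |2l-1>, |e,l> = |2l> (1-based labels),
    omega_l = 0, Omega_l = Omega, gamma_0 = 0 (only the L_{l,1} decay channels)."""
    H = np.zeros((N, N), dtype=complex)
    for l in range(1, N // 2):               # Omega (|e,l><g,l+1| + h.c.)
        H[2 * l - 1, 2 * l] = H[2 * l, 2 * l - 1] = Omega
    if pbc:                                  # artificial wrap-around coupling |e,l_max><g,1|
        H[N - 1, 0] = H[0, N - 1] = Omega
    jumps = []
    for l in range(1, N // 2 + 1):           # L_{l,1} = sqrt(gamma_1) |g,l><e,l|
        Lk = np.zeros((N, N), dtype=complex)
        Lk[2 * l - 2, 2 * l - 1] = np.sqrt(gamma1)
        jumps.append(Lk)
    return H, jumps

def gap(H, jumps):
    re = np.sort(np.linalg.eigvals(liouvillian(H, jumps)).real)[::-1]
    return -re[1]                            # |Re(lambda_2)|; re[0] ~ 0 is the steady state

Omega, gamma1 = 1.0, 1.0
for N in (8, 16, 24, 32):
    print(f"N={N:2d}  gap OBC = {gap(*pumping_chain(N, Omega, gamma1)):.4f}"
          f"   gap PBC = {gap(*pumping_chain(N, Omega, gamma1, pbc=True)):.4f}")
# Expected trend (cf. the discussion of Fig. 2(d)): the OBC gap does not change with N,
# while the PBC gap shrinks as the chain grows and closes in the large-N limit.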
We first study the case with OBC. In this case, we observe that the dissipation between two adjacent subsystems is directional, which makes ℒ_0 a block upper-triangular matrix, as illustrated in Fig. <ref>(b). The eigenspectrum of ℒ_0 is then the union of the spectra of the diagonal blocks. In the presence of translational symmetry with ω_l = 0 and Ω_l = Ω, the diagonal blocks of ℒ_0 are invariant with increasing system size. Consequently, the Liouvillian gap remains constant as the system size changes, consistent with discussions in the previous section. Furthermore, due to the block upper-triangular structure of ℒ_0, some eigenvectors from ℒ_0 are localized within the subsystems near the boundaries of the entire state space. As we detail in the Appendix, such a localization persists even as the translational symmetry is broken (for general values of ω_l and/or Ω_l). Under the PBC, when ω_l = 0 and Ω_l = Ω, due to the translational symmetry, ℒ_0 forms a block-circulant matrix, illustrated in Fig. <ref>(c). We can thus visualize it as a four-band non-Hermitian one-dimensional lattice model with PBC. The eigenspectrum is analytically solvable, and we find that the Liouvillian gap approaches zero when the system size tends to infinity. (More details are shown in the Appendix). Therefore, the Liouvillian skin effect observed in the previous section mathematically originates from the difference in ℒ_0 under different boundary conditions. Physically, the Liouvillian skin effect in our system arises from the divisibility of the effective Hamiltonian and the non-reciprocal recycling terms. This phenomenon is analogous to the non-Hermitian skin effect observed in non-Hermitian lattice models. Finally, we remark that our discussions here can be generalized to generic optical pumping setups illustrated in Fig. 1(a). § DESIGNING EFFICIENT PUMPING SCHEME In this section, we show that the pumping scheme in Fig. <ref>(b) can be optimized based on the understandings above. Here we set ω_l = ω and Ω_l = Ω to simplify discussions, but our results qualitatively hold for schemes without the translational symmetry. The latter can be important for side-band cooling in trapped ions in the Lamb-Dicke regime, where the coupling strength between different side bands scales as √(n) <cit.>. In our system, any initial state evolves towards a steady state. To quantify the damping dynamics, we calculate the particle-number deviation from that of the steady state, defined as ñ(t)=Tr[ρ(t)-ρ(t→∞)]. As shown in Fig. <ref>(a), the damping of ñ(t) depends on the initial state and the Liouvillian gap. With the same initial states, the damping dynamics accelerate when the Liouvillian gap increases. Next, we explore the relationship between the system parameters and the Liouvillian gap. As discussed earlier, when γ_0=0, the spectrum of our system is independent of the system size, as illustrated in Fig. <ref>(b). Generally, in the experiments, the energy offset ω is smaller than the Rabi frequency Ω. Specifically, when ω=0, the Liouvillian gap increases with Ω/γ_1 when Ω/γ_1<1/4, reaching a maximum of γ_1/4 when Ω/γ_1>1/4, as illustrated in Fig. <ref>(c). Furthermore, if ω≠0, the Liouvillian gap consistently decreases with increasing ω. As a result, the maximum possible Liouvillian gap is γ_1/4 when γ_0=0, which is consistent with previous studies <cit.>. In the following, we aim to further increase the Liouvillian gap by introducing new decay channels. 
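The parameter dependence described above can be checked with a small variant of the previous sketch: since the OBC spectrum with γ_0=0 does not depend on the chain length, a short chain suffices to scan the gap over Ω and ω. Again, this is our own illustrative code (not the authors'), built literally from the master equation quoted in Sec. II, so the absolute numbers depend on that damping convention; the point is the qualitative behavior.

import numpy as np

def liouvillian_obc(N, Omega, omega, gamma1):
    """Vectorized Liouvillian of the OBC chain with gamma_0 = 0 and uniform
    omega_l = omega, Omega_l = Omega; |g,l> = |2l-1>, |e,l> = |2l> (1-based labels)."""
    H = np.zeros((N, N), dtype=complex)
    for l in range(1, N // 2 + 1):           # on-site energies (l-1)*omega
        H[2 * l - 2, 2 * l - 2] = H[2 * l - 1, 2 * l - 1] = (l - 1) * omega
    for l in range(1, N // 2):               # side-band couplings |e,l> <-> |g,l+1>
        H[2 * l - 1, 2 * l] = H[2 * l, 2 * l - 1] = Omega
    I = np.eye(N)
    Lmat = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for l in range(1, N // 2 + 1):           # L_{l,1} = sqrt(gamma_1) |g,l><e,l|
        Lk = np.zeros((N, N), dtype=complex)
        Lk[2 * l - 2, 2 * l - 1] = np.sqrt(gamma1)
        LdL = Lk.conj().T @ Lk
        Lmat += 2 * np.kron(Lk.conj(), Lk) - np.kron(I, LdL) - np.kron(LdL.T, I)
    return Lmat

def gap(Lmat):
    re = np.sort(np.linalg.eigvals(Lmat).real)[::-1]
    return -re[1]

gamma1 = 1.0
for omega in (0.0, 0.5):
    scan = {Om: round(gap(liouvillian_obc(8, Om, omega, gamma1)), 4)
            for Om in (0.1, 0.25, 0.5, 1.0, 2.0)}
    print(f"omega = {omega}:", scan)
# Trends to look for (cf. the discussion of Fig. 4): at fixed omega the gap grows with
# Omega and then saturates at a value set by gamma_1, and increasing omega tends to
# reduce the gap, which is why a strong drive and small energy offsets favor fast pumping.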
We first introduce an additional decay term given by the jump operator L_l,p=2=√(γ_2)|g,l⟩⟨ e,l+1|, which enhances the dissipation in the direction of the steady state. While such a term does not change the discussion on the Liouvillian superoperator under OBC, it contributes to an increased decay rate within the subsystem, effectively transforming γ_1 to γ_1+γ_2 in Eq. (<ref>). Consequently, the maximum Liouvillian gap becomes γ_1/4+γ_2/4, and the pumping efficiency is enhanced. Likewise, we can introduce longer-distance decay terms to similar effect. Alternatively, we consider the decay term L_l,0, leading to transitions within each subsystem. As illustrated in Fig. <ref>(b), the direction of the dissipation is opposite to that of the directional flow toward the steady state. From numerical calculations, we identify two distinct behaviors of the Liouvillian gap when varying γ_0. First, the Liouvillian gap monotonically decreases to 0 with increasing γ_0, shown as dashed lines in Fig. <ref>(a)(b). Second, the Liouvillian gap increases to a maximum value before decreasing to 0, shown as solid lines in Fig. <ref>(a)(b). Here we use ∂Δ/∂γ_0|_γ_0=0 to differentiate the parameter regimes for these different behaviors, as shown in Fig. <ref>(c). When ∂Δ/∂γ_0|_γ_0=0<0, the Liouvillian gap monotonically decreases with increasing γ_0; otherwise, the Liouvillian gap increases to a maximum value before decreasing to 0, resulting in a larger Liouvillian gap for appropriate values of γ_0 compared to the case where γ_0=0. We then numerically calculate the maximum Liouvillian gap for different ω and Ω, as shown in Fig. <ref>(d)(e). In general, the maximum Liouvillian gap increases with larger Ω and smaller ω. The optimal decay rate γ_0,max for achieving the maximum Liouvillian gap shows intricate behavior in conjunction with other parameters. Introducing the decay term L_l,0 yields a potential maximum Liouvillian gap of γ_1/2, achievable under the parameters Ω→∞, ω=0, and γ_1=γ_0. § SUMMARY To summarize, we show that typical optical pumping processes can be understood from the perspective of the Liouvillian skin effect. We confirm this understanding through the Liouvillian eigenspectrum and open-system dynamics for a concrete optical pumping setup involving coherent optical drives and directional dissipation. We further illustrate that such an understanding provides means to optimize the pumping efficiency. Our results are helpful for state preparation and cooling in quantum simulation and computation where optical pumping is inevitable. This work is supported by the National Natural Science Foundation of China (Grant No. 12374479), and by the Innovation Program for Quantum Science and Technology (Grant Nos. 2021ZD0301200, 2021ZD0301904). § THE BLOCK UPPER-TRIANGULAR MATRIX Here we discuss the eigenvalue problem of a block upper-triangular matrix M with M=[ A_1,1 A_1,2 A_1,3 ⋯ A_1,m; 0 A_2,2 A_2,3 ⋯ A_2,m; 0 0 A_3,3 ⋯ ⋮; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ A_m,m; ], where A_i,j are matrices with dimensions n_i× n_j, respectively. We observe that the spectrum of the block upper-triangular matrix M is the union of the spectra of the diagonal blocks A_i,i. In the following, we prove it by induction. First, the conclusion obviously holds for m=1. Then, assuming the statement is valid for m=l, we will show below that it also holds for m=l+1. To simplify discussions, we set M_l =[ A_1,1 A_1,2 A_1,3 ⋯ A_1,l; 0 A_2,2 A_2,3 ⋯ A_2,l; 0 0 A_3,3 ⋯ ⋮; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ A_l,l; ], C_l =[ A_1,l+1; A_2,l+1; ⋮; A_l,l+1; ]. 
For m=l+1, we have M_l+1=[ M_l C_l; 0 A_l+1,l+1 ]. Any eigenvalue of M_l is thus also an eigenvalue of M_l+1. Specifically, for any given eigenvalue α and the corresponding eigenvector |ψ⟩ of M_l, we have [ M_l C_l; 0 A_l+1,l+1 ][ |ψ⟩; 0 ]=[ M_l|ψ⟩; 0 ]=α[ |ψ⟩; 0 ]. We then show that any eigenvalue of A_l+1,l+1 is also an eigenvalue of M_l+1. For that purpose, we focus on a given eigenvalue a^l+1 and the corresponding eigenstate |ψ^l+1⟩ of A_l+1,l+1. If a^l+1 is also an eigenvalue of M_l, with the corresponding eigenstate |ϕ_0⟩, we have [ M_l C_l; 0 A_l+1,l+1 ][ |ϕ_0⟩; 0 ] =[ M_l|ϕ_0⟩; 0 ] =a^l+1[ |ϕ_0⟩; 0 ]. Otherwise, we have [ M_l C_l; 0 A_l+1,l+1 ][ |ϕ⟩; |ψ^l+1⟩ ] =[ M_l|ϕ⟩+C_l|ψ^l+1⟩; A_l+1,l+1|ψ^l+1⟩ ] =[ M_l|ϕ⟩+C_l|ψ^l+1⟩; a^l+1|ψ^l+1⟩ ], where |ϕ⟩ is an unknown state. We set M_l|ϕ⟩-a^l+1|ϕ⟩=(M_l-a^l+1ℐ)|ϕ⟩=-C_l|ψ^l+1⟩, where ℐ is an identity matrix. Since a^l+1 is not an eigenvalue of M_l, (M_l-a^l+1ℐ) is invertible, and Eq. <ref> has a unique solution. In other words, we can always find |ϕ⟩ such that Eq. <ref> is satisfied, yielding a right eigenstate of M_l+1. In summary, the eigenvalues of M_l and A_l+1,l+1 are eigenvalues of M_l+1. In other words, our statement is also valid for m=l+1. We have therefore proved our statement by induction, that the spectrum of the block upper-triangular matrix M is the union of the spectra of the diagonal blocks A_i,i. Furthermore, we notice that the right eigenstates of the block upper-triangular matrix are usually localized in the Hilbert space. Here we provide a simple explanation. We set M_k=[ A_1,1 A_1,2 A_1,3 ⋯ A_1,k; 0 A_2,2 A_2,3 ⋯ A_2,k; 0 0 A_3,3 ⋯ ⋮; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ A_k,k; ], D_k=[ A_1,l+1 ⋯ A_1,l+1; A_2,l+1 ⋯ A_1,l+1; ⋮; A_l,l+1 ⋯ A_1,l+1; ], M'_k=[ A_k+1,k+1 A_k+1,k+2 A_k+1,k+3 ⋯ A_k+1,m; 0 A_k+2,k+2 A_k+2,k+3 ⋯ A_k+2,m; 0 0 A_k+3,k+3 ⋯ ⋮; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ A_m,m; ]. The entire matrix then reads M_l=[ M_k D_k; 0 M'_k ]. Following the previous discussion, for any k, the eigenvalues of M_k are also eigenvalues of M_l, and the corresponding right eigenstates vanish in the subspace of M'_k. Therefore, these right eigenstates are localized within the subspace of M_k. § LIOUVILLIAN GAP Here we provide analytic expressions for the Liouvillian gap in the main text. Under the OBC, when γ_0=0, ω_l = 0 and Ω_l=Ω, the Liouvillian gap follows Δ_OBC=1/4(γ_1-√(γ_1^2-16Ω^2)), for Ω/γ_1<1/4, 1/4γ_1, for Ω/γ_1≥1/4. If we consider ω_l=ω, the Liouvillian gap becomes Δ=1/4(γ_1-Im[√(γ_1^2-4ω^2-4iωγ_1-16Ω^2) ]). Under the PBC, when ω_l = 0, Ω_l = Ω and γ_0=0, we regard ℒ_0 as a four-band, one-dimensional lattice. Due to the lattice translational symmetry, its Hamiltonian can be written in the k space as H_4=[ -γ_1 iΩ -iΩ γ_1 e^ik; iΩ -1/2γ_1 0 -iΩ; -iΩ 0 -1/2γ_1 iΩ; 0 -iΩ iΩ 0 ]. Here k=mπ/N(m=1,2,⋯,N/2) is the lattice momentum. The Liouvillian gap can be calculated from the spectrum of Eq. <ref>. We then calculate the Liouvillian gap after introducing the decay term L_l,0, under the OBC and with ω_l = ω and Ω_l = Ω. In order to derive the Liouvillian gap, we need to calculate the spectrum of each diagonal block in ℒ_0, as well as the blocks ℒ_lj. When ω≠0, the expression of the Liouvillian gap is extremely complicated. However, we notice that the Liouvillian gap consistently decreases with increasing energy interval ω. 
Thus, we calculate the Liouvillian gap for ω=0 for an upper bound, which is given by Δ=1/4(γ_0+γ_1-√((γ_0+γ_1)^2-16Ω^2)), for 4Ω<γ_0+γ_1,527 γ _0^2+575 γ _1^2+1166 γ _0 γ _1>9216 Ω ^2, 1/4(γ_0+γ_1), for 4Ω≥γ_0+γ_1,64Ω^2(γ_1-γ_0)>3(γ_1+γ_0)^3, γ _0+γ _1/2 -√(72 γ _0 Ω ^2+1/3√(46656 γ _0^2 Ω ^4+(-3 γ _0^2-3 γ _1^2-6 γ _0 γ _2+48 Ω ^2)^3))/2 3^2/3 -γ _0^2+γ _1^2+2 γ _0 γ _1-16 Ω ^2/2 √(3)√(72 γ _0 Ω ^2+1/3√(46656 γ _0^2 Ω ^4+(-3 γ _0^2-3 γ _1^2-6 γ _0 γ _1+48 Ω ^2)^3)), otherwise. According to Eq. <ref>, for certain Ω and γ_1 , we have γ_1,max= 0, for 4Ω<γ_1, 4Ω-γ_1, for 4Ω≥γ_1≥7/2Ω, -4/3√(√(81 γ _1^2 Ω ^4+64 Ω ^6)-9 γ _1 Ω ^2)+16 Ω ^2/3 √(√(81 γ _1^2 Ω ^4+64 Ω ^6)-9 γ _1 Ω ^2)-γ _1, for γ_1<7/2Ω. The maximum Liouvillian gap is therefore Δ_max(γ_1,Ω)=1/4(γ_1-√(γ_1^2-16Ω^2)), for 4Ω<γ_1, Ω, for 4Ω≥γ_1≥7/2Ω, -1/3√(√(81 γ _1^2 Ω ^4+64 Ω ^6)-9 γ _1 Ω ^2)+4 Ω ^2/3 √(√(81 γ _1^2 Ω ^4+64 Ω ^6)-9 γ _1 Ω ^2), for γ_1<7/2Ω. When Ω→∞, the maximum possible Liouvillian gap is γ_1/2. 99 Franzen W. Franzen and A. G. Emslie, Phys. Rev. 108, 1453 (1957). Kastler C. Cohen-Tannoudji, and A. Kastler, Pro. Opt. 5, 1 (1966). Happer1 W. Happer, Rev. Mod. Phys. 44, 169 (1972). Happer2 W. Happer and B. S. Mathur, Phys. Rev. 163, 12 (1967). Happer3 W. Happer, E. A. Miron, S. Schaefer, D. Schreiber, W. A. van Wijngaarden, X. Zeng, Phys. Rev. A 29, 3092 (1984). Walker T. G. Walker, and W. Happer, Rev. Mod. Phys. 69, 629 (1997). Appelt S. Appelt, A. Ben-Amar Baranga, C. J. Erickson, M. V. Romalis, A. R. Young, and W. Happer, Phys. Rev. A 58, 1412 (1998). Han J. Han, M.C. Heaven, Opt. Lett. 37, 2157 (2012). Zare R. E. Drullinger, R. N. Zare, J. Chem. Phys. 51, 5532 (1969). Broyer M. Broyer, G. Gouedard, J.C. Lehmann, J. Vigue, Adv. Atom. Mole. Phys. 12, 165 (1976). Viteau M. Viteau, A. Chotia, M. Allegrini, N. Bouloufa, O. Dulieu, D. Comparat, and P. Pillet, Science 321, 232 (2008). Balling L. C. Balling, R. J. Hanson, and F. M. Pipkin, Phys. Rev. 133, A607 (1964). Olsen B. A. Olsen, B. Patton, Y.-Y. Jau, and W. Happer, Phys. Rev. A 84, 063410 (2011). Pitz G. A. Pitz, and M. D. Anderson, Appl. Phys. Rev. 4, 041101 (2017). Jau Y.-Y. Jau, E. Miron, A. B. Post, N. N. Kuzma, and W. Happer, Phys. Rev. Lett. 93, 160802 (2004). Weber E. W. Weber, Phys. Rep. 32, 123 (1977). Zoller1 S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler,and P. Zoller, Nat. Phys. 4, 878 (2008). Zoller2 S. Diehl, E. Rico, M. A. Baranov, and P. Zoller, Nat. Phys. 7, 971 (2011). Zoller3 K. Stannige, P. Rabl, and P. Zoller, New J. Phys. 14, 063014 (2012). Zoller4 A. Tomadin, S. Diehl, and P. Zoller, Phys. Rev. A 83, 013611 (2011). Zhang S. Zhang, J.-Q. Zhang, W. Wu, W.-S. Bao and C. Guo, New J. Phys. 23, 023018 (2021). Lin Z. Lin, Y. Lin, and W. Yi, Phys. Rev. A 106, 063112 (2022). Wang Z. Wang, Y. Lu, Y. Peng, R. Qi, Y. Wang, and J. Jie, Phys. Rev. B 108, 054313 (2023). Wineland1 D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King and D. M. Meekhof, J. Res. Natl Inst. Stand. Technol. 103, 259 (1998). Wineland2 F. Diedrich, J. Bergquist, W. M. Itano, and D. Wineland, Phys. Rev. Lett. 62, 403 (1989). Wineland3 C. Monroe, D. Meekhof, B. King, S. R. Jefferts, W. M. Itano, D. J. Wineland, and P. Gould, Phys. Rev. Lett. 75, 4011 (1995). Wineland4 Ch. Roos, Th. Zeiger, H. Rohde, H. C. Nägerl, J. Eschner, D. Leibfried, F. Schmidt-Kaler, and R. Blatt, Phys. Rev. Lett. 83, 4713 (1999). Wineland D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Rev. Mod. Phys. 75, 281 (2003). nhsetheory1S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018). 
nhsetheory2S. Yao, F. Song, and Z. Wang, Phys. Rev. Lett. 121, 136802 (2018). nhsetheory3 F. Song, S. Yao, and Z. Wang, Phys. Rev. Lett. 123, 246801 (2019). nhsetheory4F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Phys. Rev. Lett. 121,026808 (2018). nhsetheory5K. Yokomizo and S. Murakami, Phys. Rev. Lett. 123, 066404 (2019). nhsetheory6C. H. Lee and R. Thomale, Phys. Rev. B 99, 201103(R)(2019). nhsetheory7T.-S. Deng and W. Yi, Phys. Rev. B 100, 035102 (2019). nhsetheory8S. Longhi, Phys. Rev. Research 1, 023013 (2019). nhsetheory9T. Li, J.-Z. Sun, Y.-S. Zhang, and W. Yi, Phys. Rev. Research 3, 023022 (2021). nhsetheory10S. Longhi, Phys. Rev. B 102, 201103(R) (2020). nhsetheory11K. Zhang, Z. Yang, and C. Fang, Phys. Rev. Lett. 125, 126402 (2020). nhsetheory12N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, M. Phys. Rev. Lett. 124, 086801 (2020). nhsetheory13 H. Li, H. Wu, W. Zheng, and W. Yi, Phys. Rev. Research 5, 033173 (2023) nhsetheory14H.-Y. Wang, F. Song, and Z. Wang, Phys. Rev. X 14, 021011 (2024). NHSEdynamic1 S. Longhi, Phys. Rev. B 105, 245143 (2022). NHSEdynamic2 S. Longhi, and E. Pinotti, Phys. Rev. B 106, 094205 (2022). NHSEdynamic3 S. Guo, C. Dong, F. Zhang, J. Hu, and Z. Yang, Phys. Rev. A 106, L061302 (2022). NHSEdynamic4 F. Song, S. Yao, and Z. Wang, Phys. Rev. Lett. 123, 170401 (2019). NHSEdynamic5 K. Wang, T. Li, L. Xiao, Y. Han, W. Yi, and P. Xue, Phys. Rev. Lett. 127, 270602 (2021). nhseexperiment2T. Helbig, T. Hofmann, S. Imhof, M. Abdelghany, T. Kiessling, L. W. Molenkamp, C. H. Lee, A. Szameit, M. Greiter and R. Thomale Nat. Phys. 16, 747(2020). nhseexperiment4 A. Ghatak, M. Brandenbourger, J. Wezel, and C. Coulais, PNAS 117(47), 29561 (2020). nhseexperiment7 L. Palacios, S. Tchoumakov, M. Guix, I. Pagonabarraga, S. Sánchez, and A. Grushin, Nat. Commun. 12, 4691 (2021). nhseexperiment8 X. Zhang, Y. Tian, J. Jiang, M. Lu, and Y. Chen, Nat. Commun. 12, 5377 (2021). nhseexperiment9 D. Zou, T. Chen, W. He, J. Bao, C. Lee, H. Sun, and X. Zhang, Nat. Commun. 12, 7201 (2021). nhseexperiment1 L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Nat. Phys. 16, 761(2020). nhseexperiment5 L. Xiao, T. Deng, K. Wang, Z. Wang, W. Yi, and P. Xue, Phys. Rev. Lett. 126, 230402(2021). nhseexperiment12 Z. Gu, H. Gao, H. Xue, J. Li, Z. Su, and J. Zhu, Nat. Commun. 13, 7668 (2022). Li T. Li, Y. S. Zhang, and W. Yi, Phys. Rev. B 105, 125111 (2022). lse1 T. Haga, M. Nakagawa, R. Hamazaki, and M. Ueda, Phys. Rev. Lett. 127, 070402 (2021). lse2 F. Yang, Q. Jiang, and E. Bergholtz, Phys. Rev. Research 4, 023160 (2022). lse4 S. Hamanaka, K. Yamamoto, and T. Yoshida, Phys. Rev. B 108, 155114 (2023). lse5 S. Begg, and R. Hanai, Phys. Rev. Lett. 132, 120401 (2024). lse7 X. Feng, and S. Chen, Phys. Rev. B 109, 014313 (2024). section1 G. Lindblad, Commun. Math. Phys. 48, 119 (1976). section2 V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys. 17, 821 (1976). section3 F. Minganti, A. Biella, N. Bartolo, and C. Ciuti, Phys. Rev. A 98, 042118(2018). section4 Z. Cai and T. Barthel, Phys. Rev. Lett. 111, 150403(2013).
http://arxiv.org/abs/2407.13108v1
20240718023639
UCIP: A Universal Framework for Compressed Image Super-Resolution using Dynamic Prompt
[ "Xin Li", "Bingchen Li", "Yeying Jin", "Cuiling Lan", "Hanxin Zhu", "Yulin Ren", "Zhibo Chen" ]
cs.CV
[ "cs.CV" ]
UCIP X. Li et al. University of Science and Technology of China National University of Singapore Microsoft Research Asia {xin.li, chenzhibo}@ustc.edu.cn, {lbc31415926, hanxinzhu, renyulin}@mail.ustc.edu.cn, jinyeying@u.nus.edu, culan@microsoft.com UCIP: A Universal Framework for Compressed Image Super-Resolution using Dynamic Prompt Xin Li10000-0002-6352-6523† Bingchen Li10009-0001-9990-7790† Yeying Jin20000-0001-7818-9534 Cuiling Lan30000-0001-9145-9957 Hanxin Zhu10009-0006-3524-0364 Yulin Ren10009-0006-4815-7973 Zhibo Chen10000-0002-8525-5066 July 22, 2024 =============================================================================================================================================================================================================================== † Equal Contribution. § ABSTRACT Compressed Image Super-resolution (CSR) aims to simultaneously super-resolve the compressed images and tackle the challenging hybrid distortions caused by compression. However, existing works on CSR usually focus on a single compression codec, i.e., JPEG, ignoring the diverse traditional or learning-based codecs in practical applications, e.g., HEVC, VVC, HIFIC, etc. In this work, we propose the first universal CSR framework, dubbed UCIP, with dynamic prompt learning, intending to jointly support the CSR distortions of any compression codecs/modes. Particularly, an efficient dynamic prompt strategy is proposed to mine the content/spatial-aware task-adaptive contextual information for the universal CSR task, using only a small number of prompts with spatial size 1×1. To simplify contextual information mining, we introduce the novel MLP-like framework backbone for our UCIP by adapting the Active Token Mixer (ATM) to CSR tasks for the first time, where the global information modeling is performed only in the horizontal and vertical directions with offset prediction. We also build an all-in-one benchmark dataset for the CSR task by collecting datasets with 6 popular and diverse traditional and learning-based codecs, including JPEG, HEVC, VVC, HIFIC, etc., resulting in 23 common degradations. Extensive experiments have shown the consistent and excellent performance of our UCIP on universal CSR tasks. The project can be found at <https://lixinustc.github.io/UCIP.github.io> § INTRODUCTION In recent years, we have witnessed the significant development of Deep Neural Networks (DNNs) in image super-resolution (SR) <cit.>, where the image is degraded with low-resolution artifacts. However, in the practical scenario, due to the limitation of storage and bandwidth, collected images are also inevitably compressed with traditional image codecs, such as JPEG <cit.> and BPG <cit.>. Accordingly, compressed image super-resolution (CSR) is proposed as an advanced task, which closely matches the needs of industrial and everyday applications. In general, the low-quality images in CSR are jointly degraded with compression artifacts, e.g., block artifacts, ringing effects, and low-resolution artifacts. The severe and heterogeneous degradation poses significant challenges and high requirements for CSR backbones. Moreover, in real applications, the compression codecs usually vary across different platforms, which urgently calls for a universal CSR model. There are some pioneering works <cit.> attempting to remove this hard degradation by improving the representation ability. The representative strategy is to design the CSR backbone with the Transformer, which benefits from the self-attention module. 
For instance, Swin2SR <cit.> introduces the enhanced Swin Transformer <cit.> (i.e., SwinV2) to boost the restoration capability of the CSR backbone. HST <cit.> utilizes the hierarchical backbone to extract multi-scale representations for CSR. Although transformer-based backbones have revealed strong recovery capability in CSR, the high computational cost of the transformer hinders its application and training optimization <cit.>. Recently, the Multi-layer perceptron (MLP) has demonstrated its potential to achieve the trade-off between computational cost and global dependency modeling in classification <cit.>, benefiting from its efficient and effective token-mixing strategies. Inspired by this, the first MLP-based framework in image processing, MAXIM <cit.>, is proposed, where the image tokens interact in global and local manners with multi-axis MLPs, respectively. However, the above works only focus on removing a single type of distortion, and thus lack the universality required for CSR tasks. In this work, we propose the first universal framework, dubbed UCIP, for CSR tasks with our dynamic prompt strategy based on an MLP-like module. It is noteworthy that the optimal contextual information obtained with the CSR network tends to vary with the content/spatial position and degradation type, which calls for content-aware, task-adaptive contextual information modeling capability. To achieve this, existing prompt-based IR <cit.> methods have attempted to set multiple prompts of image size, which lack adaptability to various input sizes and lead to higher computational cost. In contrast, our dynamic prompt strategy not only achieves content-aware task-adaptive modulation but also offers broader applicability. Concretely, we propose the Dynamic Prompt generation Module (DPM), where a group of prompts with the size of 1×1× C_p is set and C_p is the channel dimension. Then spatial-wise composable coefficients H × W × C_p are generated with the distorted images, which guide the cooperation of these prompt bases to form a dynamic prompt of image size, thereby providing content/spatial- and task-adaptive modulation capability. Based on the powerful DPM, we can achieve the universal CSR framework by incorporating it into existing CSR backbones. However, in the commonly used Transformer backbone, contextual information modeling is achieved with the costly attention module, where any two tokens are required to interact. In contrast, an active token mixer (ATM) <cit.> has been proposed for the MLP-like backbone to reduce the computational cost by implicitly achieving contextual information modeling in the horizontal and vertical directions with offset generation. However, no work has explored the potential of this backbone on low-level vision tasks. Inspired by this, we propose the dynamic prompt-guided token mixer block (PTMB) by fusing the advantages of our DPM and ATM, where our DPM can guide the contextual information modeling process of the ATM by modulating the offset prediction and token mixer. Notably, contextual modeling only in the horizontal and vertical directions, as in ATMs, lacks sufficient local information utilization. Consequently, we add a local branch to PTMB with one 3 × 3 convolution. Based on PTMB, our UCIP can achieve efficient and excellent universal compressed image super-resolution for different codecs/modes. To build the benchmark dataset for universal CSR tasks, we collected the datasets with 6 representative image codecs, including 3 traditional codecs and 3 learning-based codecs. 
Concretely, the traditional codecs consist of JPEG <cit.>, the all-intra mode of HEVC <cit.>, and VVC <cit.>. For learning-based codecs, to ensure the diversity of degradations, we select 3 codecs with different optimization objectives, i.e., PSNR-oriented, SSIM-oriented, and GAN-based codecs. In this way, our database can cover the prominent compression types in recent industry and research fields. We have compared our UCIP and reproduced state-of-the-art methods on this benchmark, which showcases the superiority and robustness of our UCIP. The contributions of this paper are listed as follows: * We propose the first universal framework, i.e., UCIP, for CSR tasks with our dynamic prompt strategy, intending to achieve the “all-in-one” for the CSR degradations with different codecs/modes. * We propose the dynamic prompt-guided token mixer block (PTMB) by fusing the advantages of our proposed dynamic prompt generation module (DPM) and the revised active token mixer (ATM), as the basic block for UCIP. * We propose the first dataset benchmark for universal CSR tasks by collecting datasets with 6 prominent traditional and learning-based codecs, consisting of multiple compression degrees. This ensures the diversity of degradations in the benchmark dataset, making it a reliable benchmark for evaluating different CSR methods. * Extensive experiments on our universal CSR benchmark dataset have revealed the effectiveness of our proposed UCIP, which outperforms the recent state-of-the-art transformer-based methods with lower computational costs. § RELATED WORKS §.§ Compressed Image Super-resolution Compressed Image Super-resolution aims to tackle complicated hybrid distortions, including compression artifacts and low-resolution artifacts <cit.>. The first challenge for this task was held at AIM2022 <cit.>, where the image is first downsampled with the bicubic operation and then compressed with a JPEG codec. To solve this hard degradation, some works <cit.> seek to utilize the Transformer-based architecture as their backbone. For instance, Swin2SR <cit.> eliminates the training instability and the requirements for large data for CSR by incorporating the Swin Transformer V2 into SwinIR <cit.>. HST <cit.> utilizes the multi-scale information flow and pre-training strategy <cit.> to enhance the restoration process with a hierarchical Swin Transformer. To further fuse the advantages of convolution and transformer, Qin et al. <cit.> propose a dual-branch network, which achieves the consecutive interaction between the convolution branch and the transformer branch. In contrast, to achieve the trade-off between performance and computational cost, we aim to explore one efficient and effective framework for the universal CSR problem. §.§ MLP-like Models As an alternative to Transformers and Convolutional Neural Networks (CNNs), MLP-like models <cit.> have attracted great attention for their concise architectures. Typically, the noticeable success of MLP-like models stems from the well-designed token-mixing strategies <cit.>. The pioneering works MLP-Mixer <cit.> and ResMLP <cit.> adopt two types of MLP layers, i.e., the channel-mixing MLP and the token-mixing MLP, which are responsible for channel and spatial information interaction, respectively. To simplify the token-mixing MLP, Hou et al. <cit.> and Tang et al. <cit.> decompose the token-mixing MLP into horizontal and vertical token-mixing MLPs. Subsequently, As-MLP <cit.> introduces a two-axis token shift across different channels to achieve global token mixing. 
There are also several works that adopt hand-crafted windows to enlarge the receptive field for better spatial token mixing, e.g., WaveMLP <cit.> and MorphMLP <cit.>. However, the token-mixing strategies in the above methods are fixed and thus lack flexibility and adaptability for different contents. To overcome this, ATM <cit.> is proposed to achieve active token selection and mixing in each channel. Based on the progress of the above MLP-like models, MAXIM <cit.> is the first work to introduce the MLP-like model in low-level processing. However, the potential of MLP-like models is yet to be explored, as a restoration model not only requires long-range token mixing but also demands efficient local feature extraction. §.§ Prompt Learning In the field of Natural Language Processing (NLP), prompt learning has emerged as a pivotal technique, particularly with the advent of transformer-based pre-trained models such as GPT <cit.> and BERT <cit.>. Prompt learning involves providing models with specific textual cues that guide their processing of subsequent input, which helps models quickly adapt to unseen tasks or applications. This approach has proven instrumental in directing models toward task-specific outputs without necessitating extensive retraining or fine-tuning. Following this success in NLP, some researchers have adopted prompt learning for vision tasks <cit.>. Among them, PromptIR <cit.> is the first to explore the low-level restoration model with prompts to facilitate multi-task learning <cit.>. Prompts here act as a small set of learnable parameters which interact with image features during training, providing task-specific guidance. Therefore, the prompts should be as dynamic as possible to adapt to various degradation tasks and different pixel distributions. § METHODS In this section, we first clarify the principle and construction of our dynamic prompt generation module in Sec. <ref>, and then describe how to achieve the basic block of our UCIP, i.e., the dynamic prompt-guided token mixer block, in Sec. <ref>. Finally, we depict the whole framework of our UCIP in Sec. <ref>. §.§ Dynamic Prompt Generation Module As stated in Sec. <ref>, the universal CSR tasks entail content/spatial- and task-adaptive modulation. An intuitive strategy is to set one prompt with the image size for each task individually or fuse them adaptively. However, this brings severe parameter costs as the number of tasks or the image size increases <cit.>. To mitigate this, we propose the dynamic prompt strategy, and design the corresponding dynamic prompt generation module (DPM), intending to exploit only a small number of prompts of size 1×1× C_p and achieve content/spatial- and task-adaptive modulation through their cooperation. To this end, we decouple the large dynamic prompt with the size of H× W × C_p into two smaller matrices, i.e., the coefficients 𝐰_𝐈 with the size of H× W × D and D basic prompts with the size of 1 × 1 × C_p. We can understand that for each spatial position {i, j}, there is one group of coefficients w_I(i, j) to combine the D basic prompts, thereby being content/spatial-adaptive. To let the dynamic prompt perceive the task information, we generate the coefficients directly from the features of the input image, thereby being task-adaptive and suitable for any input size. 
Our implementation has two advantages: 1) no extra operations are needed to adjust the spatial size of prompts, and thus the guidance information from prompts is explicit and accurate; 2) our prompts have fewer parameters and are more computationally friendly compared to previous methods <cit.>. [Figure: The architecture of DPM.] To dynamically aggregate content/spatial-aware task-adaptive contextual information, we introduce a small number of basic dynamic kernels into the generation process of our prompt. Moreover, our design maintains adaptability to arbitrary input resolutions. The overall architecture of DPM is shown in Fig. <ref>, where the learnable basic prompts P_I∈ℝ^D × 1 × 1 × C_P are set. Here, D and C_P are the number of basic prompts and the channel dimension of the prompts, respectively. To generate dynamic prompt coefficients from input features F_X∈ℝ^H × W × C, an MLP layer is applied to extract the degradation prior and transform the channel dimension from C to the number of basic prompts D. Then, the softmax operation is exploited to generate the composable coefficients w_I∈ℝ^D × H × W × 1 for the basic prompts. Based on the inversion of the above dynamic prompt decomposition, we can obtain the dynamic prompt as: w_I = Softmax(MLP(F_X)), P = ∑^D(w_I⊙P_I) §.§ Prompt-guided Token Mixer Block §.§.§ Prompt-guided token mixer module After obtaining the dynamic prompt, we can exploit it to guide the restoration network for universal CSR tasks. Recently, the Active Token Mixer (ATM) <cit.> has gained great success in high-level vision tasks due to its well-designed token-mixing strategy. In contrast to the transformer architecture, where contextual information modeling is performed through interactions between any two tokens, ATM utilizes deformable convolution to predict the offsets of the most relevant tokens, achieving implicit contextual information modeling in the horizontal and vertical directions with offset generation. Inspired by this, we propose the Dynamic Prompt-guided Token Mixer Module, dubbed PTMM, by exploiting the dynamic prompt generated by DPM to guide the prediction of the offsets of the most informative tokens for contextual modeling. Concretely, PTMM leverages deformable convolutions and offsets to adaptively fuse tokens across the horizontal and vertical axes, regardless of the diverse degradations. However, as mentioned in <cit.>, MLP-like modules exhibit diminished efficacy in the extraction of local relevance, which is essential for compressed super-resolution tasks. Therefore, we introduce a depthwise convolution around the target pixel to achieve local information extraction. As shown in Fig. <ref>(b), PTMM first extracts vertical and horizontal representative offsets 𝐎^V, 𝐎^H by two sets of fully connected layers. To incorporate task-adaptive information during offset generation, we concatenate the dynamic prompt generated by DPM with the input features F_X as the condition: 𝐎^{V,H} = FC_{V,H}(Concat([F_X, P])) Then, we use the offsets to recompose features along one certain axis into a new token 𝐱̃^{V,H} by the deformable convolution for information fusion (i.e., token mixing). In addition, we adopt a depthwise convolution to achieve local information extraction: 𝐱̃^L = Conv_3× 3(F_X) After we obtain these three tokens 𝐱̃^{V,H,L}, we adaptively mix them with learned weights, formulated as F_𝐱̃=α^V ⊙𝐱̃^V+α^H ⊙𝐱̃^H+α^L ⊙𝐱̃^L where ⊙ denotes element-wise multiplication. 
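As an illustration of the dynamic prompt generation just described, the following PyTorch-style sketch implements the two DPM equations above (softmax coefficients predicted from the input features, followed by a weighted sum of the 1×1 basic prompts). It is our own minimal reading of the module, not the released code: the 1×1 convolution stands in for the per-pixel MLP, and the channel sizes in the usage example are illustrative choices rather than the paper's settings.

import torch
import torch.nn as nn

class DPM(nn.Module):
    """Minimal sketch of the Dynamic Prompt generation Module: D learnable 1x1xC_p
    basic prompts are combined per pixel with softmax coefficients predicted from
    the input feature map."""
    def __init__(self, in_channels: int, prompt_dim: int, num_prompts: int = 8):
        super().__init__()
        # D basic prompts of spatial size 1x1 (stored as a D x C_p matrix)
        self.base_prompts = nn.Parameter(torch.randn(num_prompts, prompt_dim))
        # 1x1 conv acting as the per-pixel MLP that maps C channels to D coefficients
        self.to_coeff = nn.Conv2d(in_channels, num_prompts, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> coefficients w_I: (B, D, H, W), softmax over D
        w = torch.softmax(self.to_coeff(feat), dim=1)
        # per-pixel weighted sum of the basic prompts: (B, D, H, W) x (D, C_p) -> (B, C_p, H, W)
        prompt = torch.einsum('bdhw,dc->bchw', w, self.base_prompts)
        return prompt

# usage sketch with illustrative sizes: a 64x64 feature map with 180 channels
if __name__ == "__main__":
    dpm = DPM(in_channels=180, prompt_dim=64, num_prompts=8)
    x = torch.randn(2, 180, 64, 64)
    print(dpm(x).shape)   # torch.Size([2, 64, 64, 64]); a spatially adaptive prompt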
α^{V, H, L}∈ ℝ^C are learned from the summation 𝐱̃^Σ of 𝐱̃^{V, H, L} with weights W^{V, H, L}∈ℝ^C × C, where C denotes the channel dimension: [α^V, α^H, α^L]=σ([W^V ·𝐱̃^Σ, W^H ·𝐱̃^Σ, W^L ·𝐱̃^Σ]), Here, σ(·) is a softmax function for normalizing each channel separately. To further incorporate the task prior for our UCIP, we modulate the mixed features F_𝐱̃ using the aforementioned dynamic prompt P through a SPADE block <cit.> to obtain the output features of the PTMM, which is shown in Fig. <ref>. §.§.§ Discussions The two most relevant MLP-like methods are MAXIM <cit.> and ActiveMLP <cit.>. The differences between MAXIM and our UCIP are as follows: MAXIM is designed only for a specific task, and its cross-gating block and dense connections result in severe computational costs. The differences between ActiveMLP and our UCIP are as follows: ActiveMLP is designed for classification and focuses more on global information extraction, lacking local perception. Compared with them, our UCIP introduces a simple MLP-based architecture and the dynamic prompt for low-level vision, which is more applicable to universal CSR than the above methods. §.§.§ Overall pipeline To improve the modeling capability of PTMB, we connect N PTMMs successively. It is worth noting that, to balance the performance of the model and the computational cost, we share the prompt P across all PTMMs within a single PTMB. With respect to offsets, we generate new offsets every two PTMMs. The whole process of PTMB can be formulated as: P = DPM(F_X, P_I), F_X_i+1 = PTMM(P, F_X_i) where F_X_i is the input feature of the i^th PTMM. §.§ Overall Framework As shown in Fig. <ref>, we build our UCIP following the popular pipeline of compressed super-resolution backbones, which is composed of shallow feature extraction, deep feature restoration, and HR reconstruction modules. Given a low-resolution input image X_LR∈ℝ^H × W × 3, UCIP first extracts the shallow features F_X∈ℝ^H × W × C using a patch-embedding layer, where H, W are the spatial dimensions of features. Then, we pass F_X through several PTMBs to recursively remove the compression artifacts and generate the restored features F_X_r. Finally, following <cit.>, we use a series of convolution layers and nearest interpolation operations to obtain the final high-resolution output X_HR, which can be represented as: X_HR =Conv(Conv(Conv(F_X + F_X_r)↑_× 2)↑_× 2) §.§ Our UCSR Dataset To facilitate current and future research in CSR, we propose the first benchmark dataset for universal CSR, dubbed the UCSR dataset, which considers not only traditional compression methods but also learning-based compression methods. We consider 6 types of compression codecs, including the 3 most representative traditional codecs JPEG <cit.>, HM <cit.>, VTM <cit.>, and 3 open-sourced learning-based codecs Cheng_PSNR <cit.>, Cheng_SSIM <cit.> (abbreviated as C_PSNR and C_SSIM in the following paper), HIFIC <cit.>. These three learning-based codecs are PSNR-oriented and SSIM-oriented variants from <cit.> and perceptual-oriented GAN-based codecs from <cit.>, respectively. To cover the prominent compression types in real scenarios, we consider four different compression qualities for each codec, except for HIFIC, since only the weights for three bitrate points are released. To generate the training dataset, we choose the popular DF2K <cit.>, which contains 3450 high-quality images. Each image is downsampled by a scale factor of 4 using the MATLAB bicubic algorithm (the JPEG branch of the resulting degradation pipeline is sketched below). 
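As a rough illustration of this degradation pipeline, the following sketch generates x4 bicubic low-resolution images and compresses them with JPEG at the quality factors enumerated next. The paths are hypothetical, PIL stands in for the MATLAB bicubic routine, and the other codecs (HM, VTM, and the learned codecs) require their own encoders and are not shown.

from pathlib import Path
from PIL import Image

# Hypothetical locations; the actual UCSR data layout is not reproduced here.
HR_DIR, OUT_DIR = Path("DF2K/HR"), Path("UCSR/jpeg")
SCALE = 4
JPEG_QUALITIES = (10, 20, 30, 40)   # smaller value = poorer quality, see the list below

for hr_path in sorted(HR_DIR.glob("*.png")):
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % SCALE, h - h % SCALE))   # crop to a multiple of the scale
    # x4 bicubic downsampling (PIL bicubic as a stand-in for the MATLAB routine)
    lr = hr.resize((hr.size[0] // SCALE, hr.size[1] // SCALE), resample=Image.BICUBIC)
    for q in JPEG_QUALITIES:
        out = OUT_DIR / f"q{q}" / f"{hr_path.stem}.jpg"
        out.parent.mkdir(parents=True, exist_ok=True)
        lr.save(out, format="JPEG", quality=q)   # compressed LR input for CSR training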
Then, we compress the downsampled images with six different compression algorithms to yield the training dataset of all competitive methods and our UCIP. The quality factors we use for the different codecs are as follows: (i) [10, 20, 30, 40] for JPEG, where a smaller value means poorer image quality. (ii) [32, 37, 42, 47] for HM and VTM, where the value denotes the quantization parameter (QP), and a larger value means poorer quality. (iii) [1, 2, 3, 4] for C_PSNR and C_SSIM, where a smaller value indicates poorer quality. We adopt the implementation in the popular open-source compression toolbox CompressAI <cit.>. (iv) [`low', `med', `high'] for HIFIC, where `low' indicates the poorest image quality. We use the PyTorch implementation <cit.> to compress images. All the methods are trained from scratch on our proposed benchmarks. We adopt the same process to generate the evaluation datasets based on five commonly used benchmarks: Set5 <cit.>, Set14 <cit.>, BSD100 <cit.>, Urban100 <cit.> and Manga109 <cit.>. § EXPERIMENTS Our objective is to develop an MLP-like model that caters to a wide range of compressed image super-resolution tasks. Thus, we evaluate our UCIP on six different CSR tasks, including three traditional compression codecs: JPEG <cit.>, HM <cit.>, VTM <cit.>; and three learning-based compression codecs: C_PSNR <cit.>, C_SSIM <cit.>, HIFIC <cit.>. §.§.§ Implementation details We train our UCIP from scratch in an end-to-end manner. We employ an Adam optimizer with an initial learning rate of 3e-4. The learning rate is halved after 200k iterations, and the total number of iterations is set to 400k. The network is optimized with the L1 loss. During training, we randomly crop the degraded low-resolution images into patches of size 64 × 64, with the corresponding 256 × 256 patches for their high-resolution counterparts. Following previous works, random horizontal and vertical flips are utilized to augment the training data. The total batch size is set to 32. For our baseline model, we use 6 PTMBs for UCIP and 6 PTMMs for each PTMB. §.§.§ Training details To ensure fair comparisons, we train all the competitive methods following their officially released code on our proposed CSR training dataset with the same batch size. The performance is evaluated under the same number of training iterations. §.§ Comparisons with State-of-the-arts We evaluate UCIP against six state-of-the-art models on our CSR benchmark, which is composed of five commonly adopted datasets: Set5 <cit.>, Set14 <cit.>, BSD100 <cit.>, Urban100 <cit.> and Manga109 <cit.>. The compared models include the fully-convolutional network RRDB <cit.>, the transformer-based image restoration model SwinIR <cit.> and its upgraded version Swin2SR <cit.>, the MLP-like model MAXIM <cit.>, and two multi-task models AIRNet <cit.> and PromptIR <cit.>. We add the HR reconstruction module to the last three models, enabling them to perform super-resolution tasks. All compared methods are trained from scratch with our proposed UCSR dataset for fair comparisons. As demonstrated in Table <ref> and Table <ref>, our UCIP outperforms all other methods on almost all codecs and compression qualities. Particularly, UCIP achieves a PSNR gain of up to 0.45dB against PromptIR <cit.> with only one-third the number of parameters. Another intriguing observation is that the gains provided by UCIP become more significant as the compression ratio decreases. We attribute this to the preservation of more high-frequency information at milder compression levels. 
The abundance of high-frequency details further enhances the capability of the PTMM to extract globally informative tokens, thus leading to better performance. As illustrated in Fig. <ref>, UCIP leverages the implicit guidance of the dynamic prompt to recover more textural details while avoiding the generation of artifacts. Specifically, as observed in the first row, our model recovers the clearest texture of the monarch. Besides, in the second and final rows, images reconstructed by our method exhibit clearer edges and fewer distorted lines. For the third row, our method successfully removes compression artifacts, while other methods suffer from blocky and blurry outputs. We attribute these performances to the generation of the dynamic prompt and the fusion of global tokens with local features. It is noteworthy that, although we do not specifically tailor prompts for the various compression qualities within a given codec, experimental evidence suggests that our dynamic prompt not only possesses task-specific adaptability but is also capable of handling different degrees of distortion. As shown in Fig. <ref>, our method maintains robust image restoration capabilities across three levels of compression quality (e.g., it always recovers the straight lines on the right side of the image). §.§ Prompt Tuning for UCIP Prompt learning can be utilized in two popular ways: (i) one is to utilize prompt learning for multi-task learning, e.g., PromptIR <cit.>, ProRes <cit.>, and PIP <cit.> in low-level vision, which requires training the whole model from scratch; (ii) the other is prompt tuning, which requires a strong baseline model and aims to optimize only a small portion of the parameters for downstream tasks. Notably, in the CSR field, no pre-trained baseline models covering multiple types of compression artifacts exist, which prevented us from studying prompt tuning from the beginning. We thus build the first Universal CSR framework and the corresponding dataset following the first way, in line with existing prompt learning works in low-level vision <cit.>. However, training a model from scratch is time-consuming. To further explore the potential of our proposed UCIP in prompt tuning, we choose two unseen codecs, including one traditional codec, WebP <cit.>, and one learning-based codec, ELIC <cit.>, to fine-tune UCIP. In Tab. <ref>, we explore four ways of fine-tuning: i) directly evaluating the pre-trained model without fine-tuning; ii) pre-training without the prompt, then adding the prompt and training only the prompt parameters on new tasks; iii) pre-training without the prompt, then adding the prompt and fine-tuning the full model on new tasks; iv) pre-training with the prompt, then fine-tuning only the prompt parameters on new tasks. All the experiments are conducted under the same settings with the same training iterations. As shown in Tab. <ref>, comparing ii) and iii), tuning only the prompt achieves performance on the ELIC codec comparable to tuning the full model. Comparing iii) and iv), tuning only the prompt based on UCIP achieves comparable and even better performance than tuning the full model after adding the prompt. The experimental results indicate that our proposed UCIP can serve as a strong baseline model in the CSR field, which will also benefit prompt tuning for new codecs in future work. §.§ Ablation Studies §.§.§ The effects of dynamic prompt To validate the effectiveness of our DPM, we conduct experiments on different prompt designs. The results are shown in Table <ref>.
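To make the compared prompt designs concrete, the sketch below contrasts a fixed (input-independent) prompt with a dynamic prompt that mixes a few basic prompt kernels using input-conditioned softmax weights, in the spirit of the DPM described above. This is a hypothetical, minimal PyTorch illustration, not the actual UCIP code: the module names, prompt spatial size, pooling, and the single-MLP weighting are assumptions.

```python
import torch
import torch.nn as nn

class FixedPrompt(nn.Module):
    """A single learned prompt shared by all inputs (the 'fixed prompt' baseline)."""
    def __init__(self, channels: int, size: int = 16):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(1, channels, size, size))

    def forward(self, feat):                                  # feat: (B, C, H, W)
        b, _, h, w = feat.shape
        return nn.functional.interpolate(self.prompt.expand(b, -1, -1, -1),
                                         size=(h, w), mode="bilinear")

class DynamicPrompt(nn.Module):
    """Input-conditioned mixture of a few basic prompt kernels (dynamic prompt)."""
    def __init__(self, channels: int, num_kernels: int = 8, size: int = 16):
        super().__init__()
        self.kernels = nn.Parameter(torch.randn(num_kernels, channels, size, size))
        self.to_weights = nn.Linear(channels, num_kernels)    # a single MLP layer

    def forward(self, feat):                                  # feat: (B, C, H, W)
        b, _, h, w = feat.shape
        pooled = feat.mean(dim=(2, 3))                        # (B, C) global context
        w_k = torch.softmax(self.to_weights(pooled), dim=-1)  # (B, K) mixing weights
        prompt = torch.einsum("bk,kchw->bchw", w_k, self.kernels)
        return nn.functional.interpolate(prompt, size=(h, w), mode="bilinear")
```

Swapping `DynamicPrompt` for `FixedPrompt`, or removing the prompt entirely, corresponds to the alternative designs examined in this ablation.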
Specifically, without the dynamic prompt, UCIP is unable to perform task-wise informative token selection. Moreover, the use of fixed prompts may even impair the performance of UCIP, as they could provide incorrect guidance during the token mixing process. Compared to PromptIR <cit.>, our DPM utilizes very few parameters to achieve spatially adaptive, task-aware modulation with only a few basic dynamic prompt kernels, thereby achieving a PSNR gain of up to 0.22dB. §.§.§ The effects of local feature extraction As demonstrated in Sec. <ref>, local feature extraction is essential for the model to aggregate useful local information with the content/spatial-aware task-adaptive contextual information. To validate this point, we conduct an ablation which replaces the local convolution with an identity module. As shown in Table <ref>, the PSNR drops by about 0.1dB without local feature extraction, which indicates that incorporating global tokens with local features is necessary for CSR tasks. §.§.§ The effects of the number of dynamic prompts To mine the content/spatial-aware task-adaptive contextual information for the universal CSR task, we introduce the dynamic prompt. In this part, we investigate the optimal number of dynamic prompts. As demonstrated in Table <ref>, when the number of dynamic prompts is small, there is a noticeable constraint on the dynamic capacity of the prompts for spatial content interpretation and degradation handling. As the number incrementally increases, the observed performance gap narrows, falling below our expectations. We attribute this to inadequate weighting from the input image features, primarily due to the constrained capability of a single MLP layer. To strike a balance between performance and computational efficiency, we choose 8 as the number of dynamic prompts. § CONCLUSION In this paper, we present UCIP, the first universal Compressed Image Super-resolution model, which leverages a novel dynamic prompt design within a multi-layer perceptron (MLP)-like framework. Distinct from existing CSR works that focus on the single compression codec JPEG, UCIP effectively addresses hybrid distortions across a spectrum of codecs. By utilizing the prompt-guided token mixer block (PTMB), it dynamically identifies and refines the content/spatial-aware task-adaptive contextual information, optimizing for different tasks and distortions. Our extensive experiments on the proposed comprehensive UCSR benchmarks confirm that UCIP not only achieves state-of-the-art performance but also demonstrates remarkable versatility and applicability. In future work, we will further exploit the potential of UCIP and improve both objective and subjective performance on the UCSR benchmarks. § ACKNOWLEDGEMENT This work was supported in part by NSFC under Grants 623B2098, 62021001, and 62371434. This work was mainly completed before March 2024. § APPENDIX Section <ref> illustrates the distribution of offsets from different PTMMs and codecs. Section <ref> presents more qualitative results on various compression codecs and qualities. § DISTRIBUTION OF OFFSETS We investigate the learned distributions of offsets via histogram visualization of the offsets from different Prompt-guided Token Mixer Modules (PTMMs). The subscripts i and j of PTMM_i_j denote the offsets of the j^th PTMM from the i^th PTMB. We have the following observations: 1) As the depth increases, the learned offsets first expand to a larger range and then shrink to a smaller range.
This hints that the model learns to extract local information for the query token at shallow layers. In the middle layers, the model leverages the offsets to aggregate global information and perform better token mixing. At the last layers, the distortions contained in the image features have mostly been removed; therefore, the model focuses again on local information to refine the query tokens for reconstruction. 2) The distribution of offsets from the middle layers differs among various codecs. We attribute this to the guidance from task-specific prompts. Since the distortion varies among different codecs, the visualization of the learned offsets validates that our prompts are capable of providing adaptive guidance against various distortions, thus leading to better performance on the CSR tasks <cit.>. 3) The offsets expand to a wider range for learning-based codecs compared to traditional codecs. We believe this is because the distortion introduced by learning-based codecs is more challenging to eliminate compared to that from traditional codecs, necessitating broader ranges of offsets to extract useful information for the query tokens. § MORE VISUAL RESULTS We provide more visual comparisons between our UCIP and state-of-the-art methods on different codecs and on different compression qualities within a single codec. UCIP shows clearer textures and fewer artifacts in the super-resolved images, indicating that our prompts and offsets are adaptive and robust against various degradations.
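As a side note on how the offset distributions discussed in the previous section could be visualized, a short sketch is given below. The tensor names and the way offsets are collected are assumptions, and the data here are synthetic, purely to make the snippet runnable; it is not the authors' plotting code.

```python
import matplotlib.pyplot as plt
import numpy as np

# offsets_by_module: label such as "PTMM_3_2" (2nd PTMM of the 3rd PTMB) -> 1-D array
# of learned offset values gathered for one codec. Faked here for illustration.
rng = np.random.default_rng(0)
offsets_by_module = {f"PTMM_{i}_{j}": rng.normal(0, 1 + i, 2048)
                     for i in range(1, 4) for j in (1, 2)}

fig, axes = plt.subplots(2, 3, figsize=(9, 5), sharex=True)
for ax, (name, vals) in zip(axes.ravel(), offsets_by_module.items()):
    ax.hist(vals, bins=50, color="steelblue")
    ax.set_title(name)
    ax.set_xlabel("offset value")
fig.tight_layout()
fig.savefig("offset_histograms.png", dpi=200)
```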
http://arxiv.org/abs/2407.13136v1
20240718034746
The single-particle spectral function of the extended Peierls-Hubbard model at half-filling and quarter-filling
[ "Ren-He Xu", "Hantao Lu", "Takami Tohyama", "Can Shao" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Department of Applied Physics & MIIT Key Laboratory of Semiconductor Microstructure and Quantum Sensing, Nanjing University of Science and Technology, Nanjing 210094, China School of Physical Science and Technology, Lanzhou University, Lanzhou 730000, China Lanzhou Center for Theoretical Physics & Key Laboratory of Theoretical Physics of Gansu Province, Lanzhou University, Lanzhou 730000, China Department of Applied Physics, Tokyo University of Science, Tokyo 125-8585, Japan shaocan@njust.edu.cn Department of Applied Physics & MIIT Key Laboratory of Semiconductor Microstructure and Quantum Sensing, Nanjing University of Science and Technology, Nanjing 210094, China § ABSTRACT By utilizing the twisted boundary conditions in the exact diagonalization method, we investigate the single-particle spectral function of the extended Peierls-Hubbard model at both half-filling and quarter filling. In one-dimensional (1D) interacting systems, the spin-charge separation can typically be identified in the single-particle spectral function by observing the distinct spinon and holon bands. At half filling, starting from the pure 1D Hubbard model with the on-site interaction U=10, we observe that the band structure indicative of the spin-charge separation gradually transitions to four individual bands as the Peierls instability δ increases. At U=10 and δ=0.2 where the spin-charge separation is still observable, increasing the nearest-neighbor interaction V can drive the system to a charge-density-wave (CDW) state when V≳ U/2, without the obeservation of spinon and holon bands. At quarter-filling, on the other hand, the ground state of Peierls-Hubbard model manifests an antiferromagnetic Mott insulator in units of dimers. Increasing U results in only a very small gap in the single-particle spectrum because even for U=+∞, with the model transforming into a noninteracting half-filled dimerized tight-binding model, its gap determined by the Peierls instability δ remains small. Conversely, increasing V can effectively open the single-particle gap and make the spinon and holon bands more prominent. The single-particle spectral function of the extended Peierls-Hubbard model at half-filling and quarter-filling Can Shao July 22, 2024 =============================================================================================================== § INTRODUCTION Distinct from the Fermi liquid theory that describes the low-energy physics of many-particle systems based on the quasiparticle language, the low-energy charge and spin excitations of one-dimensional (1D) interacting systems belong to another paradigm referred to as the Tomonaga-Luttinger liquid (TLL) <cit.>. It is predicted by TLL that interactions between fermions in 1D systems can induce two separated collective excitations of electrons, known as `spinons' and `holons'. This phenomenon is well-known as spin-charge separation and has been widely investigated in experiments<cit.>. Among these experiments, the angle-resolved photoelectron spectroscopy (ARPES) provides significant clues to its existence, with a direct distinction of the spinon and holon bands<cit.>. Theoretical simulations of ARPES in 1D interacting systems are typically based on the single-particle spectral function of 1D t-J and Hubbard models, which align well with the experimental results for spinon and holon bands <cit.>. 
Another 1D interacting system, the Peierls-Hubbard model with both on-site interaction U and bond dimerization δ, has also attracted significant attention due to the potential formation of a topological phase. In this paper, we focus on the single-particle spectral function of this model, incorporating the nearest-neighbor interaction V, and refer to it as the extended Peierls-Hubbard model. At half-filling, its ground-state phase diagram in the parameter space of (U, V) with explicit bond dimerization δ=0.2 has been studied, revealing a phase transition from the Peierls insulator (PI) to the charge-density-wave (CDW) state <cit.>. Different tricritical points are given by Ref. <cit.> using a perturbative approach and by Ref. <cit.> using the density-matrix renormalization group method, although both propose a continuous transition in the weak-coupling regime and a first-order transition in the strong-coupling regime. At quarter-filling, on the other hand, it is suggested that the ground state of the Peierls-Hubbard model is an antiferromagnetic Mott insulator in dimer units <cit.> and is relevant for describing certain charge-transfer salts <cit.>. For the extended Peierls-Hubbard model at quarter-filling, the main features of the optical conductivity spectrum are found to be determined by the dimerization and the nearest-neighbor repulsion V <cit.>. In this paper, we present the results of the single-particle spectral function in the 1D extended Peierls-Hubbard model at both half-filling and quarter-filling, using the exact diagonalization (ED) method with twisted boundary conditions. At half-filling, we observe the evolution of the spectral function as the dimerization strength δ is increased from the pure Hubbard model with U=10. We find the formation of a hybridization gap and the suppression of spin-charge separation, characterized by the disappearance of holon and doublon bands. When keeping U=10 and δ=0.2, increasing V slightly decreases the single-particle gap until the system transforms into the CDW state, where the single-particle gap rapidly increases and the band structure of spin-charge separation is not observable. At quarter-filling, we enlarge the interaction U starting from a pure dimerized chain and observe a splitting of the energy band at the Fermi level, indicating the formation of an antiferromagnetic Mott insulator in dimer units. The single-particle gap remains very small even for a large interaction U; however, the interaction V rapidly enlarges the gap, signifying that the gap size is primarily determined by the nearest-neighbor interaction. Additionally, the band structure of spin-charge separation becomes more prominent as V increases. The rest of the paper is organized as follows: In Sec. <ref>, we introduce the model and the method used to obtain the single-particle spectral function. The analysis of our results at both half-filling and quarter-filling is presented in Sec. <ref>, and a conclusion is given in Sec. <ref>. § MODEL AND MEASUREMENT We write the 1D extended Peierls-Hubbard model as H=H_k+H_I, with the kinetic term H_k=-∑_i,σ(t_h(1+δ(-1)^i) c^†_i,σ c_i+1,σ+H.c.) and the interaction term H_I=U∑_in_i,↑n_i,↓+V∑_i(n_i-1)(n_i+1-1). Here c^†_i,σ (c_i,σ) is the creation (annihilation) operator of an electron at site i with spin σ. The number operator n_i,σ=c^†_i,σc_i,σ and n_i=n_i,↑+n_i,↓. The hopping constants within and between dimers are t_h(1+δ) and t_h(1-δ), respectively. U (V) represents the on-site (nearest-neighbor) Coulomb interaction.
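As an aside, the following is a minimal exact-diagonalization sketch of this Hamiltonian for a very small open chain, written in plain NumPy. It is not the code used in this work: the system size, open boundary condition, parameter values, and the site convention for the alternating bond are illustrative assumptions (the actual calculations use L=12 or L=16 with twisted boundary conditions and the Lanczos method, as described below).

```python
import numpy as np
from itertools import combinations

# Illustrative parameters only.
L, t_h, delta, U, V = 6, 1.0, 0.2, 10.0, 0.0
Nup = Ndn = L // 2                       # half filling

def fock_states(n):
    """All bitmasks with n occupied sites out of L."""
    return [sum(1 << i for i in occ_sites) for occ_sites in combinations(range(L), n)]

up, dn = fock_states(Nup), fock_states(Ndn)
idx = {(u, d): k for k, (u, d) in enumerate([(u, d) for u in up for d in dn])}
occ = lambda s, i: (s >> i) & 1

H = np.zeros((len(idx), len(idx)))
for (u, d), k in idx.items():
    # interaction term: U n_up n_dn + V (n_i - 1)(n_{i+1} - 1)
    H[k, k] += U * sum(occ(u, i) * occ(d, i) for i in range(L))
    H[k, k] += V * sum((occ(u, i) + occ(d, i) - 1) * (occ(u, i + 1) + occ(d, i + 1) - 1)
                       for i in range(L - 1))
    # kinetic term: alternating hopping t_h (1 + delta (-1)^i) on bond (i, i+1)
    for i in range(L - 1):
        t = t_h * (1.0 + delta * (-1) ** i)
        for s, other, is_up in ((u, d, True), (d, u, False)):
            if occ(s, i) != occ(s, i + 1):            # exactly one of the two sites occupied
                s2 = s ^ (1 << i) ^ (1 << (i + 1))    # hop (no fermionic sign for NN bonds, OBC)
                k2 = idx[(s2, other)] if is_up else idx[(other, s2)]
                H[k2, k] += -t                        # h.c. term is added when k2 is visited

evals, evecs = np.linalg.eigh(H)
print("ground-state energy per site:", evals[0] / L)
```

From the eigenpairs, the spectral functions defined below could in principle be assembled by evaluating matrix elements of c and c^† between the ground state and the states with one particle added or removed, with a Lorentzian of width η replacing the delta functions.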
The inclusion of the -1 in the V term accounts for the positioning of the Fermi level precisely at the interface between the electron-addition and electron-removal bands. We focus on the model at half-filling and quarter-filling, and study the single-particle spectral function at zero temperature. In the main text, the lattice size L for the half-filling case is set to 12, while for quarter-filling we set L=16. It is important to note that the number of unit cells (or dimers) is L/2. Combining the standard periodic boundary condition (PBC) and twisted boundary conditions (TBCs) in the ED method, the single-particle spectral function I(k,ω) can be computed at more momenta <cit.>. To incorporate TBCs, we introduce the following substitution into Eq.(<ref>): t_h(1+δ(-1)^i)c^†_i,σc_i+1,σ+H.c. ⟶ e^iκ/2t_h(1+δ(-1)^i)c^†_i,σc_i+1,σ+H.c., and we use the fraction κ/2 to reflect the fact that the distance between neighbouring sites is half the distance between neighbouring dimers, which are essentially the unit cells. The Hamiltonian subsequently becomes κ-dependent (H^κ), as do its energy eigenvalues E^κ_m and eigenstates Ψ_m^κ. Here Ψ_0^κ and E^κ_0 are the ground state and the corresponding ground-state energy, respectively. Under these conditions, the single-particle spectral function can be expressed as follows: I(k,ω)=I_+(k,ω)+I_-(k,ω), with the electron-addition spectral function I_+(k,ω)= ∑_m,σ|⟨Ψ_m^κ|c_k_0,σ^†|Ψ_0^κ⟩|^2 δ(ω-(E_m^κ-E_0^κ)-μ_κ) and the electron-removal spectral function I_-(k,ω)= ∑_m,σ|⟨Ψ_m^κ|c_k_0,σ|Ψ_0^κ⟩|^2 δ(ω+(E_m^κ-E_0^κ)-μ_κ). Here k=k_0+κ, with k_0 the allowed momenta under the standard PBC, i.e., k_0=2π l/(L/2) (l=0, 1, …, L/2-1). c_k_0,σ and c^†_k_0,σ are the Fourier transforms of c_i,σ and c^†_i,σ, respectively, with the site index restricted to i=2l+1, i.e., one site per dimer. Here, κ can be fine-tuned to obtain I(k,ω) with high momentum resolution. The chemical potential μ_κ is set to be one half of the energy difference between the first ionization and affinity states of the system <cit.>. Eq. (<ref>) and Eq. (<ref>) are solved using the standard Lanczos technique with a spectral broadening factor η=0.2. § RESULTS OF THE SINGLE-PARTICLE SPECTRAL FUNCTION §.§ Results at half-filling For the extended Peierls-Hubbard model at half-filling with U=10 and V=0, we present the single-particle spectral function I(k,ω) for δ=0.0, δ=0.1, δ=0.2, δ=0.3, δ=0.4 and δ=0.5 in Figs. <ref>(a), <ref>(b), <ref>(c), <ref>(d), <ref>(e) and <ref>(f), respectively. The Fermi level (denoted by the dashed white line) is located in the middle between the lower and upper bands. In Fig. <ref>(a) with δ=0.0, the system is actually the 1D Hubbard model without dimerization. However, to compare with the results for finite δ, we calculate the single-particle spectral function in units of dimers (treating two neighboring sites as a unit cell) so that the first Brillouin zone shrinks to half of its original size. For more details on their difference, refer to the single-particle spectral functions of the 1D Hubbard model in Appendix <ref>. Spin-charge separation can be identified from the single-particle spectral function, and a schematic view of the spinon and holon bands can also be found in Appendix <ref>. As δ increases in Fig. <ref>, the single-particle gap between the upper and lower bands increases slightly. On the other hand, two extra gaps emerge and widen at k=0 and ω≈±6, resulting in the holon band becoming ill-defined. These are hybridization gaps induced by the enhancement of the bond dimerization.
The interlaced stripes develop into four individual bands, and the spinon and holon branches gradually merge and become indistinguishable. This can be attributed to the fact that the combined effect of the Coulomb repulsion and the Peierls instability substantially localizes the electrons and thus suppresses the charge excitations, i.e., the holons. Setting U=10 and δ=0.2, where the spinon and holon branches can still be observed in Fig. <ref>(c), we now introduce the nearest-neighbor interaction V; the corresponding single-particle spectral functions are shown in Figs. <ref>(a), <ref>(b), <ref>(c), and <ref>(d) with V=2.0, V=4.0, V=5.0, and V=6.0, respectively. Note that the range of ω in Fig. <ref> is [-10,10], while in Fig. <ref> it is [-8,8]. According to the phase diagram based on the infinite density matrix renormalization group method in Ref. <cit.>, the nearest-neighbor interaction V can drive the system from the Peierls insulator (PI) to the charge-density-wave (CDW) state when V≳ U/2, under the condition of strong on-site interaction U. For the single-particle spectral function, increasing V slightly decreases the single-particle gap in the PI phase, as shown in Figs. <ref>(a), <ref>(b) and <ref>(c), while it rapidly increases the single-particle gap in the CDW phase (see Fig. <ref>(d)). These features are similar to the results for the single-particle spectrum of the 1D extended Hubbard model <cit.>. In addition, the spinon and holon branches remain identifiable when V=2 and V=4 in the PI phase, but they become difficult to distinguish when V=5, which is around the critical point. When V=6 in the CDW phase, the band structure of spin-charge separation disappears and a four-band structure can be observed. This four-band structure also resembles the single-particle spectrum of the CDW state in the 1D extended Hubbard model <cit.>. We speculate that the formation of the CDW state not only localizes the holons but also suppresses the spin excitations, i.e., the spinons. §.§ Results at quarter-filling At half-filling, four individual bands (two below and two above the Fermi level) in the single-particle spectral function are induced by the combined effect of the interaction U and the dimerization δ. This observation motivates an examination of the system at quarter-filling. We present the single-particle spectral function I(k,ω) for the quarter-filled Peierls-Hubbard model with L=16, δ=0.5, V=0, and U=0, 2, 4, 6, 8, 10 in Figs. <ref>(a), <ref>(b), <ref>(c), <ref>(d), <ref>(e), <ref>(f), respectively. In Fig. <ref>(a), the standard energy bands of the noninteracting tight-binding model on the dimerized chain can be observed, with the Fermi level (the white dashed line) positioned in the middle of the lower band due to the quarter filling. As U increases, the lower band splits into two bands at the Fermi level, resulting in a three-band structure. When U≥4 in Figs. <ref>(c), <ref>(d), <ref>(e), <ref>(f), the two newly generated bands around the Fermi level resemble the single-particle spectrum of the 1D half-filled Hubbard model on an 8-site chain; see Fig. <ref>(b) in Appendix <ref>. The holon and spinon bands can be observed, although not very clearly. In addition, the two newly generated bands are invariant under the symmetry transformations ω→-ω and k→ k+π, which is consistent with the half-filled Hubbard model rather than the half-filled Peierls-Hubbard model.
This can be understood from the fact that our quarter-filled Peierls-Hubbard model exhibits an antiferromagnetic Mott state in units of dimers, which is topologically trivial <cit.>, akin to the antiferromagnetic state of the Hubbard model in units of sites. However, unlike the single-particle spectral function of the Hubbard chain, the gap size of the quarter-filled Peierls-Hubbard model remains very small compared to the value of U. This is because even for U→∞, the quarter-filled Peierls-Hubbard model transforms into the noninteracting half-filled tight-binding model on a dimerized chain. The single-particle spectral function of the latter model is identical to Fig. <ref>(a), but with the Fermi level positioned in the middle between the two bands. We find that the single-particle gap in Fig. <ref>(a), determined by the parameter δ, remains small. Setting U=10, we then present the single-particle spectral function I(k,ω) of the quarter-filled extended Peierls-Hubbard model with V=1, 2, 3, 4, 5 and 6 in Figs. <ref>(a), <ref>(b), <ref>(c), <ref>(d), <ref>(e) and <ref>(f), respectively. Note that the range of the color bars in Figs. <ref>(a), <ref>(b), <ref>(c) and <ref>(d) is [0.0, 0.6], while that in Figs. <ref>(e) and <ref>(f) is [0.0, 0.4]. This setting makes the spectra in Figs. <ref>(e) and <ref>(f) more prominent. Notably, the nearest-neighbour interaction V effectively increases the single-particle gap. This feature confirms the DMRG results in Ref. <cit.> that the main features of the optical spectrum are determined by the dimerization and the nearest-neighbor repulsion. Additionally, as V increases, the bands around the Fermi level more closely resemble the single-particle spectrum of the 1D half-filled Hubbard model on the 8-site chain (see Appendix <ref>), with four “stripes" easily distinguishable, especially for the band below the Fermi level. We speculate that the interaction V further enhances the dimerization of the electrons and is beneficial to the robustness of the antiferromagnetic Mott state in units of dimers. § CONCLUSION To summarize, employing twisted boundary conditions in the exact diagonalization method, we presented the single-particle spectral function with high momentum resolution for the 1D extended Peierls-Hubbard model at half-filling and quarter-filling. Starting with a half-filled Hubbard chain, we observe that increasing the dimerization strength δ leads to the emergence of a hybridization gap and the disappearance of the spinon and holon bands. Increasing V slightly decreases the single-particle gap in the Peierls insulator phase and rapidly increases it in the charge-density-wave phase. At quarter-filling, the ground state of the 1D extended Peierls-Hubbard model with V=0 is an antiferromagnetic Mott insulator in dimer units. The single-particle gap is small even for large U, while the interaction V rapidly widens the gap and makes the bands more similar to those of the 1D half-filled Hubbard model with half the lattice size. C. S. acknowledges support from the National Natural Science Foundation of China (NSFC; Grant No. 12104229) and the Fundamental Research Funds for the Central Universities (Grant No. 30922010803). T. T. is partly supported by the Japan Society for the Promotion of Science, KAKENHI (Grant No. 24K00560) from the Ministry of Education, Culture, Sports, Science, and Technology, Japan. H. L. acknowledges support from the National Natural Science Foundation of China (NSFC; Grants No. 11474136, No. 11674139, and No.
11834005) and the Fundamental Research Funds for the Central Universities. 0.1in § I(K,Ω) OF THE HALF-FILLED 1D HUBBARD MODEL In Fig. <ref>(a), we present the single-particle spectral function for the 1D Hubbard model with lattice size L=12 in the original brillouin zone. We find that if the spectrum within k∈[-π, -π/2] and k∈[π/2, π] is folded into k∈[-π/2,π/2], it becomes identical to the spectrum in Fig. <ref>(a), but with the rang of k extended to [-π,π]. As a result, the lower and upper bands in Fig. <ref>(a) are invariant under the symmetric transformation of ω↔-ω for the Peierls-Hubbard model. However, for the Hubbard model in Fig. <ref>(a), they are invariant when the two symmetric transformations ω↔-ω and k↔ k+π are satisfied. From both Fig. <ref>(a) and Fig. <ref>(a), we can observe some striped bands due to the finite-size effect. We then present the single-particle spectral function of Hubbard model with lattice size L=8 and L=6 in Fig. <ref>(b) and Fig. <ref>(c). Compared to Fig. <ref>(a) with 6 interlaced stripes in a band, number of the interlaced stripes in Fig. <ref>(b) and Fig. <ref>(c) are reduced to 4 and 3, respectively. The results with L=10 and L=14 can be found in Ref. <cit.> and these stripes will develop into the spinon and holon branches as a result of the spin-charge separation in 1D interacting systems. A schematic view of the spinon and holon bands in thermodynamic limit are shown in Fig. <ref>(d). 36 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Tomonaga(1950)]Tomonaga50 author author S.-I. Tomonaga, 10.1143/ptp/5.4.544 journal journal Prog. Theor. Phys. volume 5, pages 544 (year 1950)NoStop [Luttinger(1963)]Luttinger63 author author J. M. Luttinger, 10.1063/1.1704046 journal journal J. Math. Phys. volume 4, pages 1154 (year 1963)NoStop [Haldane(1981)]Haldane_1981 author author F. D. M. Haldane, 10.1088/0022-3719/14/19/010 journal journal J. Phys. C: Solid State Phys. volume 14, pages 2585 (year 1981)NoStop [Voit(1993)]Voit_1993 author author J. Voit, 10.1088/0953-8984/5/44/020 journal journal J. Phys.: Condens. Matter volume 5, pages 8305 (year 1993)NoStop [Kim et al.(1996)Kim, Matsuura, Shen, Motoyama, Eisaki, Uchida, Tohyama, and Maekawa]Kim96 author author C. Kim, author A. Y. Matsuura, author Z.-X. Shen, author N. Motoyama, author H. Eisaki, author S. Uchida, author T. Tohyama, and author S. Maekawa, 10.1103/PhysRevLett.77.4054 journal journal Phys. Rev. Lett. volume 77, pages 4054 (year 1996)NoStop [Kim et al.(1997)Kim, Shen, Motoyama, Eisaki, Uchida, Tohyama, and Maekawa]Kim97 author author C. Kim, author Z.-X. Shen, author N. Motoyama, author H. Eisaki, author S. Uchida, author T. Tohyama, and author S. Maekawa, 10.1103/PhysRevB.56.15589 journal journal Phys. Rev. B volume 56, pages 15589 (year 1997)NoStop [Fujisawa et al.(1999)Fujisawa, Yokoya, Takahashi, Miyasaka, Kibune, and Takagi]Fujisawa99 author author H. Fujisawa, author T. Yokoya, author T. Takahashi, author S. Miyasaka, author M. Kibune, and author H. Takagi, 10.1103/PhysRevB.59.7358 journal journal Phys. Rev. B volume 59, pages 7358 (year 1999)NoStop [Segovia et al.(1999)Segovia, Purdie, Hengsberger, and Baer]Segovia99 author author P. Segovia, author D. Purdie, author M. Hengsberger, and author Y. 
Baer, 10.1038/990052 journal journal Nature volume 402, pages 504 (year 1999)NoStop [Auslaender et al.(2002)Auslaender, Yacoby, de Picciotto, Baldwin, Pfeiffer, and West]Auslaender02 author author O. M. Auslaender, author A. Yacoby, author R. de Picciotto, author K. W. Baldwin, author L. N. Pfeiffer, and author K. W. West, 10.1126/science.1066266 journal journal Science volume 295, pages 825 (year 2002)NoStop [Auslaender et al.(2005)Auslaender, Steinberg, Yacoby, Tserkovnyak, Halperin, Baldwin, Pfeiffer, and West]Auslaender05 author author O. M. Auslaender, author H. Steinberg, author A. Yacoby, author Y. Tserkovnyak, author B. I. Halperin, author K. W. Baldwin, author L. N. Pfeiffer, and author K. W. West, 10.1126/science.1107821 journal journal Science volume 308, pages 88 (year 2005)NoStop [Jompol et al.(2009)Jompol, Ford, Griffiths, Farrer, Jones, Anderson, Ritchie, Silk, and Schofield]Jompol09 author author Y. Jompol, author C. J. B. Ford, author J. P. Griffiths, author I. Farrer, author G. A. C. Jones, author D. Anderson, author D. A. Ritchie, author T. W. Silk, and author A. J. Schofield, 10.1126/science.1171769 journal journal Science volume 325, pages 597 (year 2009)NoStop [Senaratne et al.(2022)Senaratne, Cavazos-Cavazos, Wang, He, Chang, Kafle, Pu, Guan, and Hulet]Senaratne22 author author R. Senaratne, author D. Cavazos-Cavazos, author S. Wang, author F. He, author Y.-T. Chang, author A. Kafle, author H. Pu, author X.-W. Guan, and author R. G. Hulet, 10.1126/science.abn1719 journal journal Science volume 376, pages 1305 (year 2022)NoStop [Claessen et al.(2002)Claessen, Sing, Schwingenschlögl, Blaha, Dressel, and Jacobsen]Claessen02 author author R. Claessen, author M. Sing, author U. Schwingenschlögl, author P. Blaha, author M. Dressel, and author C. S. Jacobsen, 10.1103/PhysRevLett.88.096402 journal journal Phys. Rev. Lett. volume 88, pages 096402 (year 2002)NoStop [Sing et al.(2003)Sing, Schwingenschlögl, Claessen, Blaha, Carmelo, Martelo, Sacramento, Dressel, and Jacobsen]Sing03 author author M. Sing, author U. Schwingenschlögl, author R. Claessen, author P. Blaha, author J. M. P. Carmelo, author L. M. Martelo, author P. D. Sacramento, author M. Dressel, and author C. S. Jacobsen, 10.1103/PhysRevB.68.125111 journal journal Phys. Rev. B volume 68, pages 125111 (year 2003)NoStop [Kim et al.(2006)Kim, Koh, Rotenberg, Oh, Eisaki, Motoyama, Uchida, Tohyama, Maekawa, Shen, and Kim]Kim2006 author author B. J. Kim, author H. Koh, author E. Rotenberg, author S. J. Oh, author H. Eisaki, author N. Motoyama, author S. Uchida, author T. Tohyama, author S. Maekawa, author Z. X. Shen, and author C. Kim, 10.1038/nphys316 journal journal Nature Physics volume 2, pages 397 (year 2006)NoStop [Sorella and Parola(1992)]Sorella92 author author S. Sorella and author A. Parola, 10.1088/0953-8984/4/13/020 journal journal Journal of Physics: Condensed Matter volume 4, pages 3589 (year 1992)NoStop [Penc et al.(1995)Penc, Mila, and Shiba]Penc95 author author K. Penc, author F. Mila, and author H. Shiba, 10.1103/PhysRevLett.75.894 journal journal Phys. Rev. Lett. volume 75, pages 894 (year 1995)NoStop [Penc et al.(1996)Penc, Hallberg, Mila, and Shiba]Penc96 author author K. Penc, author K. Hallberg, author F. Mila, and author H. Shiba, 10.1103/PhysRevLett.77.1390 journal journal Phys. Rev. Lett. volume 77, pages 1390 (year 1996)NoStop [Penc et al.(1997)Penc, Hallberg, Mila, and Shiba]Penc97 author author K. Penc, author K. Hallberg, author F. Mila, and author H. 
Shiba, 10.1103/PhysRevB.55.15475 journal journal Phys. Rev. B volume 55, pages 15475 (year 1997)NoStop [Tohyama and Maekawa(1998)]TOHYAMA98 author author T. Tohyama and author S. Maekawa, https://doi.org/10.1016/S0022-3697(98)00126-7 journal journal Journal of Physics and Chemistry of Solids volume 59, pages 1864 (year 1998)NoStop [Aichhorn et al.(2004)Aichhorn, Evertz, von der Linden, and Potthoff]Aichhorn04 author author M. Aichhorn, author H. G. Evertz, author W. von der Linden, and author M. Potthoff, 10.1103/PhysRevB.70.235107 journal journal Phys. Rev. B volume 70, pages 235107 (year 2004)NoStop [Shao et al.(2020)Shao, Tohyama, Luo, and Lu]Shao20 author author C. Shao, author T. Tohyama, author H.-G. Luo, and author H. Lu, 10.1103/PhysRevB.101.045128 journal journal Phys. Rev. B volume 101, pages 045128 (year 2020)NoStop [Tsuchiizu and Furusaki(2004)]Tsuchiizu04 author author M. Tsuchiizu and author A. Furusaki, 10.1103/PhysRevB.69.035103 journal journal Phys. Rev. B volume 69, pages 035103 (year 2004)NoStop [Ejima et al.(2016)Ejima, Essler, Lange, and Fehske]Ejima16 author author S. Ejima, author F. H. L. Essler, author F. Lange, and author H. Fehske, 10.1103/PhysRevB.93.235118 journal journal Phys. Rev. B volume 93, pages 235118 (year 2016)NoStop [Le et al.(2020)Le, Fisher, Curson, and Ginossar]Le2020 author author N. H. Le, author A. J. Fisher, author N. J. Curson, and author E. Ginossar, 10.1038/s41534-020-0253-9 journal journal npj Quantum Information volume 6, pages 24 (year 2020)NoStop [Pedron et al.(1994)Pedron, Bozio, Meneghetti, and Pecile]Pedron94 author author D. Pedron, author R. Bozio, author M. Meneghetti, and author C. Pecile, 10.1103/PhysRevB.49.10893 journal journal Phys. Rev. B volume 49, pages 10893 (year 1994)NoStop [Nishimoto et al.(2000)Nishimoto, Takahashi, and Ohta]Nishimoto2000 author author S. Nishimoto, author M. Takahashi, and author Y. Ohta, 10.1143/jpsj.69.1594 journal journal Journal of the Physical Society of Japan volume 69, pages 1594 (year 2000)NoStop [Shibata et al.(2001)Shibata, Nishimoto, and Ohta]Shibata01 author author Y. Shibata, author S. Nishimoto, and author Y. Ohta, 10.1103/PhysRevB.64.235107 journal journal Phys. Rev. B volume 64, pages 235107 (year 2001)NoStop [Tsuchiizu et al.(2001)Tsuchiizu, Yoshioka, and Suzumura]Tsuchiizu01 author author M. Tsuchiizu, author H. Yoshioka, and author Y. Suzumura, 10.1143/JPSJ.70.1460 journal journal Journal of the Physical Society of Japan volume 70, pages 1460 (year 2001)NoStop [Penc and Mila(1994)]Penc94 author author K. Penc and author F. Mila, 10.1103/PhysRevB.50.11429 journal journal Phys. Rev. B volume 50, pages 11429 (year 1994)NoStop [Mila(1995)]Mila95 author author F. Mila, 10.1103/PhysRevB.52.4788 journal journal Phys. Rev. B volume 52, pages 4788 (year 1995)NoStop [Favand and Mila(1996)]Favand96 author author J. Favand and author F. Mila, 10.1103/PhysRevB.54.10425 journal journal Phys. Rev. B volume 54, pages 10425 (year 1996)NoStop [Benthien and Jeckelmann(2005)]Benthien2005 author author H. Benthien and author E. Jeckelmann, 10.1140/epjb/e2005-00128-1 journal journal The European Physical Journal B - Condensed Matter and Complex Systems volume 44, pages 287 (year 2005)NoStop [Tsutsui et al.(1996)Tsutsui, Ohta, Eder, Maekawa, Dagotto, and Riera]Tsutsui96 author author K. Tsutsui, author Y. Ohta, author R. Eder, author S. Maekawa, author E. Dagotto, and author J. Riera, 10.1103/PhysRevLett.76.279 journal journal Phys. Rev. Lett. 
volume 76, pages 279 (year 1996)NoStop [Tohyama(2004)]Tohyama04 author author T. Tohyama, 10.1103/PhysRevB.70.174517 journal journal Phys. Rev. B volume 70, pages 174517 (year 2004)NoStop [Su et al.(2023)Su, Lu, Lu, and Shao]Su_2023 author author Y.-G. Su, author R. Lu, author H. Lu, and author C. Shao, 10.1088/1361-6455/acc49b journal journal Journal of Physics B: Atomic, Molecular and Optical Physics volume 56, pages 085101 (year 2023)NoStop
http://arxiv.org/abs/2407.13011v1
20240717210416
Measurement-device agnostic quantum tomography
[ "Robert Stárek", "Martin Bielak", "Miroslav Ježek" ]
quant-ph
[ "quant-ph" ]
starek@optics.upol.cz Department of Optics, Faculty of Science, Palacký University, 17. listopadu 12, 77146 Olomouc, Czechia § ABSTRACT Characterization of quantum states and devices is paramount to quantum science and technology. The characterization consists of individual measurements, which are required to be precisely known. A mismatch between actual and assumed constituent measurements limits the accuracy of this characterization. Here, we show that such a mismatch introduces reconstruction artifacts in quantum state tomography. We use these artifacts to detect and quantify the mismatch and gain information about the actual measurement operators. It consequently allows the mitigation of systematic errors in quantum measurement and state preparation. Measurement-device agnostic quantum tomography Miroslav Ježek Received -; accepted - ============================================== Introduction - The measurements play a central role in quantum science and technology <cit.>. However, the implemented measurements often differ from the ideal ones, leading to systematic errors. One can compensate for these systematic errors with the knowledge of the actual measurements. We investigate this problem from the point of view of quantum tomography, one of the most valuable tools in quantum science <cit.>. It allows the complete characterization of prepared states and implemented quantum circuits, assessing and certifying their quality. Quantum state tomography consists of a suitable set of known measurements on the investigated objects. One can reconstruct the state from the collection of measurement outcomes, the tomogram. Imperfect realization of these measurements causes a mismatch between the assumed and the realized measurement operators. Such a mismatch leads to the wrong interpretation of the acquired tomogram and manifests as a reconstruction artifact. The true measurement operators can be fully characterized with measurement tomography <cit.>, which relies on perfect knowledge of input probe quantum states. Such a requirement is difficult to meet in the experiments. A self-consistent tomography framework has been introduced to loosen this condition <cit.>. The framework suggests the existence of a self-calibrating state class, which could reveal, when measured, the initially unknown information about the states and the measurement device. The cases of photon-number-resolving measurement and quantum homodyne tomography have been discussed. Another approach demonstrated for polarization states of light infers a single unknown parameter of the measurement device using maximum likelihood estimation <cit.>. Finally, assuming the sparsity of the parameter vector and the low rank of the measured density matrix, the dependence on the measurement device can be relaxed <cit.>. In this Letter, we show that an ensemble of single-qubit states of equal purity distributed quasi-uniformly on the Bloch sphere forms a class of self-calibrating states. This class is convenient in experiments where states are naturally prepared with quasi-equal purity, e.g., in photonics and trapped ions. Advantageously, the precise control, characterization, or even sparsity of such self-calibrating probe states is not required. We show a novel approach to simultaneously calibrating quantum measurements and state preparation. To this end we exploit artifacts in quantum state reconstruction performed on the self-calibrating probe states. 
In the multi-qubit setting, we can use the method in a fully scalable way to mitigate systematic errors on each local qubit measurement, effectively mitigating errors in the whole system. The method also applies to quantum device tomography thanks to the state-process duality <cit.>, which allows the quantum processes to be treated as states. Our method is not limited to the determination of an initially unknown parameter shared among all measurement operators in the tomographic scheme; it allows the determination of parameters related to each measurement operator individually. The presented scheme provides the characterization of quantum states independently of measurement device imperfections. General concepts are introduced using numerical simulations, and we support our findings with a photonic experiment. Measurement operator mismatch artifact - In quantum state tomography, many copies of an initially unknown state, described by density operator ρ̂, are subject to a series of measurements described by measurement operators π̂_j. These measurement operators should be tomographically complete, and their choice influences the performance of the tomography <cit.>. The resulting collection of measurement outcomes forms the tomogram. The tomogram and the measurement operators are then input into an algorithm, which outputs the density operator. If the actual measurement operators π̂'_j differ from those input into the reconstruction method, artifacts appear. We use the maximum-likelihood method <cit.>, but in Section S4 of the Supplementary Material, we demonstrate that the effect is not limited to the particular reconstruction method. The disagreement between actual and assumed measurement operators causes inconsistency in the tomogram, which results in state-dependent purity loss. Let us demonstrate the behavior in the simplest scenario of pure probe single-qubit states and projective measurements onto the eigenstates of Pauli operators, called Pauli tomography. The measurement operator is realized by two rotation angles, corresponding to rotation about the y- and z-axis and subsequent projection onto computational basis state |0⟩⟨0|. Throughout the article, we use the computational basis {|0⟩, |1⟩}. The following operators describe the constituent measurements π̂_j = R̂_z^†(ϕ_j)R̂_y^†(θ_j)|0⟩⟨ 0 | R̂_y(θ_j)R̂_z(ϕ_j), where R̂_j(α) = exp(iσ̂_j α/2) is the rotation generated by the Pauli operator. Then, the corresponding detection probability is given by the Born rule, p_j = Tr[ρ̂π̂_j]. The angles θ_j and ϕ_j for Pauli tomography are provided in Table S2 of the Supplemental Material. Assume that due to experimental imperfections, the actual angles θ_j' = θ_j + δ_j, ϕ_j' = ϕ_j + ϵ_j, i.e., we introduce systematic additive errors for each measurement operator. The error vector δ⃗ = {δ_j, ϵ_j} parameterizes the measurement operators π̂'_j = π̂_j(δ⃗). We numerically simulate tomograms using these true measurement operators for many pure probe states covering the Bloch sphere quasi-uniformly. Then, we reconstruct them assuming the original unperturbed measurement operators {π̂_j}. This mismatch causes the state-dependent purity modulation, shown in Figure <ref>. We quantify the purity modulation in the ensemble of reconstructed states as Δ P = max P - min P, where P = Tr(ρ̂^2) is the purity of the reconstructed state. The modulation Δ P scales with the magnitude of parameters δ⃗ and thus quantifies the severity of the measurement operator mismatch. 
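The following is a small self-contained numerical sketch of this effect. It is illustrative only: the quasi-uniform probe sampling, the standard Pauli settings, the 10-degree error scale, and the simple renormalized RρR maximum-likelihood iteration are our assumptions, not the exact procedure or code of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(theta, phi):
    """Projector onto R_z(phi)^dag R_y(theta)^dag |0>, cf. the definition of pi_j."""
    ket = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    return np.outer(ket, ket.conj())

# Standard Pauli-tomography settings (assumed): projectors onto |0>,|1>,|+>,|->,|+i>,|-i>.
angles = [(0.0, 0.0), (np.pi, 0.0), (np.pi/2, 0.0),
          (np.pi/2, np.pi), (np.pi/2, np.pi/2), (np.pi/2, -np.pi/2)]
errors = rng.normal(0.0, np.deg2rad(10.0), size=(6, 2))     # "true" unknown (delta_j, eps_j)
Pi_assumed = [proj(t, p) for t, p in angles]
Pi_true = [proj(t + d, p + e) for (t, p), (d, e) in zip(angles, errors)]

def maxlik(freqs, Pi, iters=500):
    """Renormalized RrhoR iteration (the ideal Pauli set sums to 3x the identity)."""
    rho = I2 / 2
    for _ in range(iters):
        probs = [max(np.real(np.trace(rho @ P)), 1e-12) for P in Pi]
        R = sum(f / p * P for f, p, P in zip(freqs, probs, Pi))
        rho = R @ rho @ R
        rho = (rho + rho.conj().T) / 2
        rho /= np.real(np.trace(rho))
    return rho

purities = []
for _ in range(30):                                  # quasi-uniform pure probe states
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    rho_in = 0.5 * (I2 + v[0] * sx + v[1] * sy + v[2] * sz)
    tomogram = [np.real(np.trace(rho_in @ P)) for P in Pi_true]   # noiseless Born probabilities
    rho_rec = maxlik(tomogram, Pi_assumed)           # reconstruct with the *assumed* operators
    purities.append(np.real(np.trace(rho_rec @ rho_rec)))

print("Delta P =", max(purities) - min(purities))    # state-dependent purity modulation
```

Re-running the reconstruction with `Pi_true` instead of `Pi_assumed` makes the modulation vanish, which is precisely the signal exploited below to recover the error vector.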
The error vector δ⃗ determines at which state the artifact manifests maximally. We will leverage these features to discover the vector δ and, consequently, the true measurement operators. Artifact mitigation - We minimize the artifact severity, i.e., purity modulation (<ref>), by varying the measurement operators used to reconstruct the probe states. After recording all tomograms of the probe states we used, we optimize our assumption of δ⃗ which specifies the measurement operators used in the reconstruction algorithm. Once the assumed parameters match the actual parameters, the artifact is minimized. Since the perturbed measurement operators no longer add the identity operator, we must properly renormalize them in the reconstruction <cit.>. Due to the state dependency, the Bloch sphere must be sampled densely enough to detect the mismatch. The probe states should cover the Bloch sphere quasi-uniformly and possess uniform purity; otherwise, their full control is not required. This feature is advantageous in practice because state preparation is never perfect in real-world implementation. It distinguishes our approach from the schemes that rely on perfect knowledge of the probe states, like quantum measurement tomography <cit.>. Generally, the optimization landscape has multiple local minima; therefore, global optimization must be performed. Here, we use the NOMAD implementation of the MADS algorithm <cit.>. The resulting optimum δ⃗ is not unique because any set of measurement operators {Ûπ̂_j Û^†}, where Û is a fixed unitary operation, also minimizes the artifact. Each of these satisfactory sets corresponds to different parameters δ⃗. These unitary-equivalent solutions could be interpreted as unitary transformations of probe states. Albeit this ambiguity, such a calibration is already applicable in schemes invariant to local unitaries such as random measurement <cit.> or entanglement certification <cit.>, where only relative orientations of π̂_j operators are important. Moreover, the unitary operation can be easily determined by additional measurement of two known and non-orthogonal probe states. Since the artifact is state-dependent, the probe sampling strategy is important for minimization. The sampling with too few probe states could lead to false error parameters δ⃗_f. We used 30 quasi-uniformly distributed probe states. Such a choice provided a reliable determination of error parameters. We discuss the effects of the number of probe states in detail in the Supplementary material. To test the method numerically, we first randomly generate ground-truth error parameters δ⃗. Each element is normally distributed with zero mean value and standard deviation of 10 deg. Then, we produce tomograms corresponding to pure probe states ρ_k = |ψ_k⟩⟨ψ_k|. For simplicity, we exclude statistical noise from our simulations. Parameters δ⃗' assumed in the reconstruction are optimized to minimize the purity modulation Δ P in the set of reconstructed probe states {ρ̂_k }_k=1, …, 30, concluding the calibration step. To verify the correctness of the optimization result, we use the ground truth δ⃗ again to generate tomograms of pure test states ξ̂ = |ξ_k⟩⟨ξ_k|, which quasi-uniformly covers the Bloch sphere in 108 points. We reconstruct these states using the optimized parameters δ⃗ obtained in the calibration step and calculate the purity of the reconstructions. As a further test, we assume knowledge of two non-orthogonal probe states and use their reconstructions to determine corrective unitary operation Ŵ. 
We apply Ŵ to all reconstructed test states ξ̂_k to eliminate the unitary-related ambiguity. Then, we check their fidelity with the expected ideal test state F_k = ⟨ξ_k | ρ̂_k | ξ_k⟩. We characterize the test-state ensemble with the purity modulation and the lowest fidelity min_k F_k to provide a worst-case estimate. We repeated the whole numerical simulation 100 times, randomly choosing a new ground truth for the error parameters δ in each run. The details about the optimizer settings are provided in Section S1 of the Supplementary Material. When we compare the optimized results to the reference, i.e. the states reconstructed under the assumption of a null error vector, the purity modulation decreases from Δ P = (8 +3 -3) · 10^-2 to Δ P = (1.0 +0.8 -1.0)·10^-3. The uncertainty interval spans from the 0.158 to the 0.842 quantile, corresponding to ± one standard deviation, and is used due to the skewed distribution of the results. Most importantly, the infidelity decreased from (3 +1 -2)· 10^-2 to (3 +5 -3)·10^-4, which represents a two-orders-of-magnitude improvement in accuracy. This improvement is crucial for reaching error levels low enough for quantum error correction schemes. The lower the Δ P achieved for the probe-state ensemble, the better the results we obtained for the test ensemble. These results illustrate the feasibility of such optimization and its ability to reliably determine the true measurement operators. It is important to stress that the pure probe states and projective (pure) measurements were chosen here for simplicity in introducing the measurement-device agnostic framework. However, the method is fully applicable in real cases of nonunity purity (as in the following experimental demonstration) and generic positive-operator-valued measurements. In the multi-qubit scenario, the tomography is usually done using local measurements. In that case, we use the presented method for each local measurement device. The artifact severity scales worst as P_min, n = ∏_i=1^n P_min,i for n-qubit state tomography, where P_min,i is the corresponding minimal purity observed at the i-th qubit. If the input state |ψ_i⟩ at the i-th qubit suffers from the artifact the worst, then the product state ⊗_i|ψ_i⟩ is the n-qubit state with the greatest manifestation of the artifact. In such a case, the single-qubit improvements multiply and lead to significantly better overall results in multi-qubit scenarios. State preparation - The state preparation typically suffers from imperfections too. Here, we use the symmetry in the Born rule to formally exchange state preparation and measurement, allowing the previously introduced method to correct state preparation. Usually, the qubit is initialized in state |0⟩, then turned into state |ψ_j⟩ = Û_j|0⟩ by unitary evolution Û_j. The tomographic projection is also typically realized by some unitary evolution V̂_k and subsequent projection, i.e., |π_k⟩ = V̂_k^†|0⟩. The state tomogram of the probe state |ψ_j⟩ consists of measured probabilities |⟨π̂_k | ψ_j⟩|^2 = |⟨ 0 | V̂_k Û_j |0⟩|^2. The discussed procedure obtains information related to the projections, i.e., about V̂_k. Physical reversal of the process consists of preparing states V̂_k |0⟩ and projecting them onto states Û_j^†|0⟩, obtaining tomogram elements p_kj∝ |⟨ 0 | Û_j V̂_k |0⟩|^2. We can analyze these tomograms to improve our knowledge of the measurement operator, i.e., Û_j^†|0⟩⟨ 0| Û_j, related to state preparation in the non-reversed situation.
The physical reversal can be realized in the experiment by performing projections onto the original probe states in the tomographic device and preparing states corresponding to the original tomographical projectors instead of the original probes. Either way, we gain information about Û_j, increasing the accuracy of state preparation. Special case: multiplicative error - An important subclass of the problem is the case of multiplicative measurement operator error θ_j' = (1 + δ)θ_j, ϕ_j' = (1 + ϵ)ϕ_j. This case is relevant when we sequentially switch the unitary operations that precede a fixed projector to perform the tomographic measurements. These multiplicative errors could be interpreted as Bloch sphere under- or over-rotation. This approach to quantum state tomography has been reported in various platforms, including optics <cit.>, color-centers in solids <cit.>, trapped ions <cit.>, neutral atoms <cit.>, or superconducting qubits <cit.>, making this problem subclass highly relevant. Because parameters (δ, ϵ) are shared among all projection operators, the optimum search simplifies from 12 parameters to just two. In Section S2 of the Supplementary material, we show that with just eight probe states, the optimization landscape possesses a single minimum, and therefore standard gradient-descent optimization is applicable. The advantage is better performance when compared to global optimization and also the smaller probe state set. Another advantage is that the global optimum is now unique, and the method unambiguously finds the true set {π̂_j} even when there is an unknown unitary operation between preparation and tomographic measurements. In Section S5 of the Supplementary material, we show a similar type of multiplicative error, where we optimize six parameters that describe the dependence of the optical phase on applied voltage in on-chip photonic circuits <cit.>. Experiment: polarization state tomography - We experimentally demonstrate the method in the waveplate-based polarization state tomography scenario, an example of a situation with multiplicative errors. Quarter-wave and half-wave plates transform the initial horizontal polarization of the photons, preparing the probe states with a uniform purity. The angular positions of the waveplates determine the resulting prepared state. The tomography consists of projective measurements. First, the analyzed state undergoes unitary evolution provided by a half- and quarter-wave plate and is subsequently projected using a horizontally oriented linear polarizer. The action of the waveplate could be seen as the rotation of the Bloch sphere around axis (sin(2α), 0, cos(2α)) with rotation angle Γ, where α determines the angular position of the waveplate and Γ its retardance. The retardance usually slightly deviates from its nominal value. These deviations manifest as under- or over-rotation of the Bloch sphere. We parameterize the measurement operator by precisely controlled waveplate angular positions x, y, and half- and quarter-wave plate retardance deviations δ and ϵ, respectively. These deviations are initially unknown to us, and we will use the presented method to estimate their values. To demonstrate the tolerance of the method to a unitary operation between preparation and tomographic measurement, we add a quarter-wave plate with its fast axis oriented at -12.5 deg relative to the horizontal direction. We use eight probe states and perform the Pauli tomography for each. 
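For concreteness, the waveplate model just described can be written down in a few lines. The sketch below is ours, not the authors' code; the ordering of the plates and the nominal retardances (π for the half-wave plate, π/2 for the quarter-wave plate) are assumptions consistent with the text, and the sign convention of the rotation is arbitrary.

```python
import numpy as np

I2 = np.eye(2)
sig = (np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex))

def waveplate(alpha, gamma):
    """Rotation of the Bloch (Poincare) sphere by angle gamma about (sin 2a, 0, cos 2a)."""
    n = (np.sin(2 * alpha), 0.0, np.cos(2 * alpha))
    n_dot_sigma = sum(ni * si for ni, si in zip(n, sig))
    return np.cos(gamma / 2) * I2 - 1j * np.sin(gamma / 2) * n_dot_sigma

def measurement_projector(x, y, delta=0.0, eps=0.0):
    """Projector realized by an HWP at angle x, a QWP at angle y, and a horizontal polarizer.

    delta and eps are the (initially unknown) retardance deviations of the
    half- and quarter-wave plate, respectively.
    """
    U = waveplate(y, np.pi / 2 + eps) @ waveplate(x, np.pi + delta)
    ket = U.conj().T @ np.array([1.0, 0.0])      # |pi> = U^dag |H>
    return np.outer(ket, ket.conj())
```

With the eight probe-state tomograms in hand, the calibration then amounts to minimizing the purity modulation ΔP of the reconstructed probe ensemble over the two parameters (δ, ε), e.g. with a standard optimizer, in the same spirit as the single-qubit simulation sketched earlier.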
Initially, we assume zero deviations, δ = ϵ = 0, and observe the purity modulation Δ P = 0.06. Then, we vary parameters δ, ϵ and update the measurement operators employed in the reconstruction to minimize the purity modulation. The optimal δ and ϵ are close to the true values because the reconstruction artifact is significantly reduced to Δ P = 0.01. We found the optimal parameters δ = 5.5 deg, ϵ = -1.5 deg. We then reverse the process to determine retardance deviations δ̃, ϵ̃ of the waveplates used in state preparation. We prepare the six eigenstates of Pauli operators, project each onto eight states corresponding to the original probe state set, and treat this data as before. We minimize the apparent purity modulation to find the optimal parameters δ̃ = 4.5 deg and ϵ̃ = -3.6 deg, reducing the purity modulation from 0.07 to 0.01. We verified the accuracy of the determined parameters using the following steps. We removed the central quarter-wave plate and then prepared an ensemble of test states, depicted as empty circles in Figure <ref>. As a reference, we first assumed perfect waveplates. The minimum fidelity of the reconstructed state to the target states was F_min = 0.967, and the average fidelity was F̅ = 0.992. It indicates imprecise preparation. The purity modulation, P̅ = 0.990 and P_min = 0.938, indicates reconstruction artifacts. When we used our knowledge of δ and ϵ in the reconstruction, the average purity increased to P̅ = 0.995 and the minimum purity increased to 0.982. The information gained in the calibration clearly mitigated the reconstruction artifact. As a next step, we use this information to improve state preparation and measurement control. We calculated the new angular position of waveplates, considering the discovered retardance deviations, to implement the improved preparation and projection and repeated the experiment, obtaining F_min = 0.991, F̅ = 0.997, P̅ = 0.995, and P_min = 0.983. The results shown in Figure <ref> indicate improved quality of state preparation and tomographic measurements. The small remaining artifacts are present due to the limited precision of the learned retardance deviations and also the repeatability of the waveplate rotation. Further, in Section S4 of the Supplementary material, we show the use of the presented method for classical rotating waveplate polarimetry <cit.>. Conclusion - The mismatch between actual measurement operators and their theoretical counterparts assumed in reconstructing quantum states or devices introduces reconstruction artifacts. We showed how to leverage these artifacts to reveal the actual measurement operators and mitigate systematic errors in the tomography and state preparation. We experimentally applied this method to Pauli tomography of polarization-encoded photonic qubits and numerically analyzed several other measurement schemes. The proposed method makes the tomography independent of the measurement device. The main advantage is that the method does not require perfect knowledge or control of the probe states, which is virtually impossible to obtain. The method can calibrate already existing tomographic apparatus without the need for individual characterization of its constituent components. This is particularly useful in the case where the experimental setup is monolithic and individual components cannot be calibrated individually, e.g. in the case of integrated circuits. One can also calibrate the state preparation with the reversed version of the method. 
Furthermore, the presented approach applies locally to individual parties of a larger multiparty system; i.e., it is fully scalable. In summary, we developed a framework for accurate and scalable calibration of quantum measurement devices and of quantum state preparation, which is critical in any application of quantum science and technology. The application of the reported measurement-device-agnostic quantum measurement goes beyond full quantum tomography. The same method will find use in mitigating measurement errors in various approximate and more scalable approaches such as Monte-Carlo sampling <cit.>, permutationally invariant tomography <cit.>, compressed sensing <cit.>, etc. The method can also improve the accuracy of building and characterizing photonic and quantum circuits in complex media <cit.>. Finally, the reported approach also applies to experimental realizations of entanglement certification and quantification <cit.>. The supporting data for this article are available from the Zenodo repository <cit.>. Acknowledgments - We acknowledge the support of the Czech Science Foundation under grant No. 21-18545S. M. B. acknowledges the support of Palacký University under grant No. IGA-PrF-2024-008. We acknowledge the use of cluster computing resources provided by the Department of Optics, Palacký University Olomouc. We thank J. Provazník for maintaining the cluster and providing support.
http://arxiv.org/abs/2407.13020v1
20240717212036
A hidden AGN powering bright [O III] nebulae in a protocluster core at $z=4.5$ revealed by JWST
[ "M. Solimano", "J. González-López", "M. Aravena", "B. Alcalde Pampliega", "R. J. Assef", "M. Béthermin", "M. Boquien", "S. Bovino", "C. M. Casey", "P. Cassata", "E. da Cunha", "R. L. Davies", "I. De Looze", "X. Ding", "T. Díaz-Santos", "A. L. Faisst", "A. Ferrara", "D. B. Fisher", "N. M. Förster-Schreiber", "S. Fujimoto", "M. Ginolfi", "C. Gruppioni", "L. Guaita", "N. Hathi", "R. Herrera-Camus", "E. Ibar", "H. Inami", "G. C. Jones", "A. M. Koekemoer", "L. Lee", "J. Li", "D. Liu", "Z. Liu", "J. Molina", "P. Ogle", "A. C. Posses", "F. Pozzi", "M. Relaño", "D. A. Riechers", "M. Romano", "J. Spilker", "N. Sulzenauer", "K. Telikova", "L. Vallini", "K. G. C. Vasan", "S. Veilleux", "D. Vergani", "V. Villanueva", "W. Wang", "L. Yan", "G. Zamorani" ]
astro-ph.GA
[ "astro-ph.GA" ]
Instituto de Estudios Astrofísicos, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejército Libertador 441, Santiago, Chile [Código Postal 8370191] manuel.solimano@mail.udp.cl Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Santiago 7820436, Chile Las Campanas Observatory, Carnegie Institution of Washington, Raúl Bitrán 1200, La Serena, Chile Université de Strasbourg, CNRS, Observatoire astronomique de Strasbourg, UMR 7550, 67000 Strasbourg, France Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, 06000, Nice, France Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160, Concepción, Chile Chemistry Department, Sapienza University of Rome, P.le A. Moro, 00185 Rome, Italy INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125, Firenze, Italy The University of Texas at Austin, 2515 Speedway Blvd Stop C1400, Austin, TX 78712, USA Cosmic Dawn Center (DAWN), Denmark Dipartimento di Fisica e Astronomia, Università di Padova, Vicolo dell’Osservatorio, 3, 35122 Padova, Italy INAF Osservatorio Astronomico di Padova, vicolo dell’Osservatorio 5, 35122 Padova, Italy International Centre for Radio Astronomy Research (ICRAR), The University of Western Australia, M468, 35 Stirling Highway, Crawley, WA 6009, Australia Centre for Astrophysics and Supercomputing, Swinburne Univ. of Technology, PO Box 218, Hawthorn, VIC, 3122, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia Sterrenkundig Observatorium, Ghent University, Krijgslaan 281-S9, B-9000 Ghent, Belgium Department of Physics & Astronomy, University College London, Gower Street, London WC1E 6BT, UK Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo, Kashiwa, Japan 277-8583 (Kavli IPMU, WPI) Institute of Astrophysics, Foundation for Research and Technology-Hellas (FORTH), Heraklion, 70013, Greece Chinese Academy of Sciences South America Center for Astronomy (CASSACA), National Astronomical Observatories, CAS, Beijing, 100101, PR China Caltech/IPAC, MS 314-6, 1200 E. California Blvd. Pasadena, CA 91125, USA Scuola Normale Superiore, Piazza dei Cavalieri 7, I-50126 Pisa, Italy Dipartimento di Fisica e Astronomia, Università di Firenze, via G. Sansone 1, 50019 Sesto Fiorentino, Firenze, Italy INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via Gobetti 93/3, 40129 Bologna, Italy Universidad Andrés Bello, Facultad de Ciencias Exactas, Departamento de Física, Instituto de Astrofísica, Fernandez Concha 700, Las Condes, Santiago RM, Chile Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA Instituto de Física y Astronomía, Universidad de Valparaíso, Avda. 
Gran Bretaña 1111, Valparaíso, Chile Hiroshima Astrophysical Science Center, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK Max-Planck-Institut für extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching Purple Mountain Observatory, Chinese Academy of Sciences, 10 Yuanhua Road, Nanjing 210023, China Center for Data-Driven Discovery, Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Dipartimento di Fisica e Astronomia, Università di Bologna, via Gobetti 93/2, 40129, Bologna, Italy Dept. Fisica Teorica y del Cosmos, Universidad de Granada, Granada, Spain Instituto Universitario Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain I. Physikalisches Institut, Universität zu Köln, Zülpicher Strasse 77, 50937 Köln, Germany National Centre for Nuclear Research, ul. Pasteura 7, 02-093 Warsaw, Poland Department of Physics and Astronomy and George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, 4242 TAMU, College Station, TX 77843-4242, US Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, Bonn, D-53121, Germany University of California, Davis, 1 Shields Ave., Davis, CA 95616, USA Department of Astronomy and Joint Space-Science Institute, University of Maryland, College Park, MD 20742, USA Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstr. 12-14, 69120 Heidelberg, Germany Caltech Optical Observatories, California Institute of Technology, Pasadena, CA 91125 ESO Vitacura, Alonso de Córdova 3107,Vitacura, Casilla 19001, Santiago de Chile, Chile We present new JWST/NIRSpec IFU observations of the J1000+0234 system at z=4.54, the dense core of a galaxy protocluster hosting a massive, dusty star forming galaxy (DSFG) with a low luminosity radio counterpart. The new data reveals two extended, high equivalent width (EW_0>1000) nebulae at each side of the DSFG disk along its minor axis (namely O3-N and O3-S). On one hand, O3-N’s spectrum shows a prominent FWHM∼1300 broad and blueshifted component, suggesting an outflow origin. On the other hand, O3-S stretches over 8.6, and has a velocity gradient that spans 800, but no evidence of a broad component. Both sources, however, seem to be powered at least partially by an active galactic nucleus (AGN), so we classify them as extended emission-line regions (EELRs). The strongest evidence comes from the detection of the high-ionization Nev3427 line toward O3-N, which paired with the non-detection of hard X-rays implies an obscuring column density above the Compton-thick regime. In O3-S, the [Ne v] line is not detected, but we measure a HeII4687/=0.25, well above the expectation for star formation. We interpret this as O3-S being externally irradiated by the AGN, akin to the famous Hanny’s Voorwerp object in the local Universe. In addition, more classical line ratio diagnostics (e.g., /vs [N ii]/) put the DSFG itself in the AGN region of the diagrams, and hence the most probable host of the AGN. These results showcase the ability of JWST of unveiling highly obscured AGN at high redshifts. Extended in Solimano et al. A hidden AGN powering bright nebulae in a protocluster core at z=4.5 revealed by JWST M. Solimano1 J. González-López2, 3 M. 
Aravena1 B. Alcalde Pampliega1,46 R. J. Assef1 M. Béthermin4,5 M. Boquien6 S. Bovino7, 8, 9 C. M. Casey10, 11 P. Cassata12,13 E. da Cunha14 R. L. Davies15,16 I. De Looze17,18 X. Ding19 T. Díaz-Santos20,21 A. L. Faisst22 A. Ferrara23 D. B. Fisher15,16 N. M. Förster-Schreiber30 S. Fujimoto10 M. Ginolfi24,9 C. Gruppioni25 L. Guaita26 N. Hathi27 R. Herrera-Camus7 E. Ibar28 H. Inami29 G. C. Jones30 A. M. Koekemoer27 L. L. Lee31 J. Li14 D. Liu32 Z. Liu19,33,34 J. Molina28 P. Ogle27 A. C. Posses1 F. Pozzi34 M. Relaño36,37 D. A. Riechers38 M. Romano39, 13 J. Spilker40 N. Sulzenauer41 K. Telikova1 L. Vallini25 K. Vasan G. C.42 S. Veilleux43 D. Vergani25 V. Villanueva7 W. Wang44 L. Yan45 G. Zamorani25 Received -; accepted - =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION In the current paradigm of galaxy formation, the densest structures form in the most massive halos at high redshift (z>2), at the junctures of cosmic web filaments of galaxies and neutral gas. These structures are known as protoclusters, since they eventually evolve into massive galaxy clusters at z<1 <cit.>. Protocluster cores are sites where active star formation, supermassive black hole (SMBH) accretion, and dynamical interactions trigger powerful feedback processes at large scales <cit.>. These cores can harbor dozens of galaxies within ≲100 <cit.>, with several of them hosting active galactic nuclei (AGN) and/or starbursts, leading to dramatic effects on the surrounding gas in the form of outflows, shocks, tidal debris, and ionized nebulae. Protocluster cores undergoing their most rapid phase of growth are commonly (albeit not always) signaled by either a luminous quasar <cit.>, a high-redshift radio galaxy <cit.>, and/or a single or multiple submillimeter-bright dusty star-forming galaxies <cit.>. These sources are often embedded in giant Lyman-α () nebulae around them, reaching in some cases scales of hundreds of kiloparsecs <cit.>. The gas in protocluster cores is known to be multi-phase, and hence the extended emission is not restricted to . Recent detections of extended CO, [C i] and emission imply the existence of cold gas reservoirs tracing widespread star formation and accretion <cit.>. Similarly, the detection of -, -, and -emitting ionized nebulae, typically trace outflows and photoionization by AGN <cit.>. nebulae are particularly common around HzRGs, where kinetic feedback also plays a role, as suggested by their alignment to the radio jets <cit.>. The James Webb Space Telescope (JWST) is becoming an important tool to understand ionized nebulae within protocluster cores, since it has opened access to the diagnostic-rich rest-frame optical spectrum at z>3. 
The Near InfraRed Spectrograph's Integral Field Unit (NIRSpec IFU), in particular, has allowed the community to identify and characterize extended nebulae around quasars <cit.>, HzRGs <cit.>, and dusty star-forming galaxies <cit.> in protoclusters or dense groups at high-z. In this letter, we present NIRSpec IFU observations of J1000+0234, a well-known overdense region in the COSMOS field at z=4.54 <cit.> hosting a massive DSFG <cit.>, and a luminous Lyman-break galaxy (LBG, M_UV≈-24.2; ) called CRISTAL-01a (hereafter C01), within the inner 20 kpc. This system is surrounded by a L_Lyα≈4e43 erg s^-1 blob and a handful of lower-mass emitters <cit.>, thus resembling the core of a protocluster. Two puzzling observations make this an interesting case study: firstly, the DSFG was detected at radio frequencies with L_1.4 GHz = (5.1±1.2)×10^24 W Hz^-1 <cit.>, possibly attributed to an AGN, yet without an X-ray counterpart. And secondly, <cit.> found a 15 kpc-long plume of [C II] line emission toward the system using deep ALMA observations, indicating a dynamically complex system, although its physical origin remains unclear. The observations presented here reveal additional features that will bring us closer to having a full picture of J1000+0234. Throughout the paper, we assume a flat ΛCDM cosmology with H_0=70 km s^-1 Mpc^-1 and Ω_m,0=0.3. At z=4.54 the physical scale is 6.578 kpc arcsec^-1 (this conversion is checked in the short snippet at the end of this section). § OBSERVATIONS AND DATA REDUCTION §.§ JWST/NIRCam data Multiband NIRCam imaging data of the system comprise a total of six broadband filters. Images using the F115W, F150W, F277W, and F444W filters were taken as part of the public COSMOS-Web survey <cit.> using integration times of 515 s per filter at the position of J1000+0234, while the F200W and F356W bands were observed for 1074 s as part of GO-4265 (PI: González-López). At the redshift of our source, the F277W and F356W filters cover the [O iii]+Hβ and Hα emission lines, respectively. We reduced these data using the CRAB.Toolkit.JWST[<https://github.com/1054/Crab.Toolkit.JWST>] wrapper of the JWST pipeline (version 1.10.0, pmap=1075) with highly optimized parameters. In addition, we follow <cit.> for 1/f noise mitigation, apply background subtraction via the skymatch method of the standard pipeline, remove wisp artifacts using published templates <cit.>, and finally align our images to the COSMOS2020 catalog <cit.>. The combined images are drizzled to a common grid with a pixel size of 0.02″. §.§ JWST/NIRSpec data In this work, we use JWST/NIRSpec IFU data from programs GO-3045 (PI: Faisst) and GO-4265 (PI: González-López) that target the system with the G235M (1.7<λ<3.2 μm, R∼1000) and G395H (2.9<λ<5.3 μm, R∼2700) gratings, respectively. The G235M dataset was taken using two 1080 s dithered exposures with overlap at the location of C01. The G395H dataset was set up as a two-tile mosaic covering both C01 and the plume reported by <cit.>. Each tile was observed for 5974 seconds. The data were reduced with the standard JWST pipeline (version 1.12.5, pmap=1234) plus some additional tweaks. Briefly, we followed the scripts provided by <cit.>[Available at <https://zenodo.org/doi/10.5281/zenodo.10737011>] but implemented improved snowball removal in Stage 1, and additional bad-pixel flagging after Stage 1. Also, we switched on the outlier rejection step in Stage 3, and turned off the master background subtraction. Instead, background subtraction was performed as a post-processing step, together with stripe mitigation and astrometric alignment to NIRCam. A more detailed description of the reduction will be presented elsewhere (Fujimoto et al., in prep.).
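As a quick sanity check of the adopted cosmology, the angular-to-physical conversion quoted above can be reproduced with astropy; this is only a convenience snippet under the stated cosmological parameters, not part of the paper's reduction.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # flat LCDM adopted in the text
z = 4.54
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(f"physical scale at z = {z}: {scale:.3f}")   # ~6.58 kpc / arcsec
```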
§ RESULTS & ANALYSIS The NIRCam images reveal significant emission from several sources that were faint in previous Hubble Space Telescope (HST) imaging <cit.>. For example, the DSFG disk and nucleus starlight are now clearly detected in the long-wavelength filters. Interestingly, two other sources dominate the emission in the F277W and F356W filters (appearing green in Fig. <ref>), indicating the possibility of high equivalent width [O iii], Hβ, and Hα emission lines. The first of these sources is just 0.5″ north of the DSFG (hence O3-N), at the same location as an HST source <cit.>. The other is located south of the DSFG (hence O3-S); it has a projected extent of 1.3″ = 8.55 kpc, and has extremely faint HST magnitudes (m_F125W≈26 AB). JWST/NIRSpec observations confirmed the presence of strong emission at the locations of O3-N (EW_0 = 1780±80 Å) and O3-S (EW_0 = 5100±1000 Å)[The equivalent width values presented here consider only the λ5008 line of the doublet.] and, more importantly, at the same redshift as the DSFG, therefore confirming their physical association (see middle panel of Fig. <ref>). The nebulae also seem to be co-spatial with the 3 GHz radio detections, but they are offset from the peak surface brightness (SB, see right panel of Fig. <ref>). In the following subsections, we use apertures to extract and explore the spectroscopic properties of the two nebulae. The labeled apertures in Fig. <ref> were defined manually based on the RGB NIRCam image and the [O iii] map. For the DSFG we use an aperture significantly smaller than the full extent of the source to avoid contamination from O3-N. We also define two sub-apertures within the nebulae that either enclose the peak of emission (O3-N-core) or maximize the S/N of the He ii line (O3-S-HeII). §.§ Morphology and kinematics The middle panel in Fig. <ref> shows the distribution of [O iii] λ5008 SB around the system. As expected, significant emission is detected in O3-N and O3-S, but also in C01 and the DSFG. Moreover, the global emission seems to be spatially extended, and low-SB emission connects nearly all the objects in the scene. Figure <ref> features the velocity field and velocity dispersion maps of the [O iii] emission line in the system. The DSFG and O3-N share similar velocities, with an offset of ≈500 km s^-1 with respect to C01. In turn, O3-S shows a large velocity gradient north-to-south, with a velocity span of almost 800 km s^-1 from end to end. If we were to interpret such a gradient as a signature of virialized rotation, a back-of-the-envelope calculation would yield a dynamical mass on the order of Rv^2/G = (4.3 kpc)(400 km s^-1)^2/G ≈ 1.6×10^11 M_⊙. This number is comparable to the dynamical mass of the DSFG <cit.>, but since O3-S lacks significant stellar or dust emission, we deem it unlikely that O3-S is a massive rotator. Instead, O3-S could be tidal debris from an ongoing interaction between the members of the system. In particular, the presence of a low-SB bridge between O3-S and C01, together with matching line-of-sight velocities at the southern end of both sources, already hints at a tidal origin. Further discussion of this scenario will be presented in Sec. <ref>. Additionally, the velocity dispersion map of the [O iii] emission shows a fairly uniform structure at ∼200 km s^-1 in most of the system except for O3-N. The velocity dispersion in O3-N reaches 500 km s^-1, indicating a higher dynamical mass, increased turbulence, or additional kinematic components. §.§ Broad velocity component in O3-N Inspection of the [O iii] and Hα line profiles in the O3-N aperture revealed the presence of broad velocity wings.
To characterize this additional kinematic component, we fitted single and double Gaussians plus a constant continuum level as detailed in Appendix <ref>. The results of our fits are shown in Figure <ref>. The double Gaussian model is preferred over the single one based on its higher Bayesian evidence score, and lower Akaike Information Criterion <cit.> and Bayesian Information Criterion <cit.> scores. The spectra of the DSFG and O3-S also show secondary velocity components, but they likely arise from the large velocity gradients (cf. Fig. <ref>) contained within the apertures. The broad component of O3-N displays a full width at half maximum (FWHM) of 1266_-47^+36, and is blueshifted by 158+-24 from the central velocity of the narrow component. Such a profile of the line (broad and blueshifted), typically points to the existence of strong ionized outflows projected into the line of sight. shows an even broader but less blueshifted profile. Notably, since O3-N sits at the base of the plume (see left panel of Fig. <ref>) and has a broad component at the same velocity (v_0≈150) as the corresponding line, the outflow scenario proposed by <cit.> emerges as a natural explanation. A detailed assessment of this possibility will be presented in a forthcoming paper. §.§ Strong line diagnostics and high-ionization spectra: evidence for an obscured AGN We measure all line fluxes and errors using <cit.> as detailed in Appendix <ref>. From these we compute the five line ratios presented in Fig. <ref>. The bottom panels of Fig. <ref> show three diagrams displaying the R3=Oiii5008/ratio against three different line ratios, namely N2=Nii6583/<cit.>, S2=Sii67166731/, and O1=Oi6302/<cit.>. We also show in Fig. <ref> the He2-N2 diagram <cit.>, featuring He2=Heii4686/vs N2. In all diagrams, only C01 appears to be consistent with pure star-formation (SF), while the rest can be explained at least partially by AGN excitation. On the other side, DSFG is the only source showing AGN-like ratios in all diagrams. The presence of an AGN was already hinted by <cit.> based on the radio detection, and the Heii1640/and Civ1551/ratios. Additional support to this idea comes from the detection of the Nev3427 line toward O3-N-core (see Fig. <ref>), since [Ne v] requires photons with E>97.11. Such high energies are most easily attainable with AGN activity, either in the form of photoionization or fast shocks <cit.>. In O3-N-core, we measure a Nev3427 flux of 2.25+-0.33e18 and a Balmer decrement of Hα/Hβ=4.52+-0.16. Assuming case B recombination with an intrinsic ratio of Hα/Hβ=2.86 and a <cit.> attenuation law, we derive A_V=1.58+-0.16 mag. We thus infer a reddening corrected Nev3427 luminosity of 4.8+-1.2e42. Recalling that was not detected in X-rays with a luminosity upper limit of L_2-10keV< 1.25e43 <cit.>, we obtain L_2-10keV/L_[Ne V]≲ 3. According to a study of local Seyfert galaxies <cit.>, such a low upper limit implies a very large obscuring column density, well into the Compton-thick regime (n_H > e24). In O3-S and O3-S-HeII, at the other side of the DSFG, we do not detect Nev3427, but only Heii4686 (see Fig. <ref>). Due to the lower ionization energy of helium (E>54.42), this line is not as clean an indicator of AGN as the [Ne v] line, and in fact can be excited by X-ray binaries <cit.>, Wolf-Rayet stars <cit.>, and shocks <cit.>, among others. Yet its location on the He2-N2 diagram is well above the SF boundary line. 
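The attenuation correction quoted above (A_V ≈ 1.6 mag from the observed Hα/Hβ = 4.52, a case-B intrinsic ratio of 2.86, and a Calzetti attenuation curve) follows from a standard calculation; the snippet below is a minimal sketch of that calculation, not the authors' pipeline.

```python
import numpy as np

def k_calzetti(lam_um):
    """Calzetti (2000) attenuation curve k(lambda) with R_V = 4.05."""
    if lam_um >= 0.63:
        return 2.659 * (-1.857 + 1.040 / lam_um) + 4.05
    return 2.659 * (-2.156 + 1.509 / lam_um
                    - 0.198 / lam_um**2 + 0.011 / lam_um**3) + 4.05

R_obs, R_int = 4.52, 2.86                         # observed and intrinsic Ha/Hb
k_Ha, k_Hb = k_calzetti(0.6563), k_calzetti(0.4861)
EBV = 2.5 / (k_Hb - k_Ha) * np.log10(R_obs / R_int)
A_V = 4.05 * EBV                                  # R_V = 4.05 for Calzetti
print(f"E(B-V) = {EBV:.2f} mag, A_V = {A_V:.2f} mag")   # A_V ~ 1.58 mag
```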
§ DISCUSSION We have found in the previous section that the two strongest nebulae in the system are likely related to (obscured) AGN activity. Here, we put forward a scenario that explains the observed emission. First, we assume that an AGN resides at the very center of the DSFG. This is motivated by the fact the DSFG occupies the AGN loci of all the diagnostic diagrams we have considered (see Sec. <ref>). In addition, given the M_BH-M_* relation <cit.>, the DSFG is the most likely to host a massive SMBH, and thus an AGN. Moreover, its location between the two radio detections makes it the potential launching site of a jet, as proposed by <cit.>. In this picture, O3-N traces an extended emission-line region <cit.> and outflow driven by the AGN, as evidenced by the line ratios and broad velocity component, respectively. Regarding O3-S, one could presume it represents the bipolar counterpart of O3-N (e.g., the receding side of the outflow). However, the observed kinematics, morphology, and spectral properties suggest otherwise. In particular, the large velocity gradient, narrower line width, lower surface brightness, lower metallicity (see Appendix <ref>), and more elongated structure make O3-S fundamentally different from O3-N. As suggested in Sec. <ref>, O3-S is unlikely to be a separate galaxy with M_dyn≈e11, but rather a stream of tidal debris. The connection (both spatial and spectral) between C01 and O3-S then suggests that the gas might have been tidally stripped from C01. This is supported by O3-S having a very similar oxygen abundance to the southern clump of C01 (C01-SW, see Fig. <ref>). Finally, due to the high ionization implied by the strong and He ii lines, and its location along the jet axis, we propose O3-S is being externally illuminated by the AGN. In other words, O3-S is a EELR analog to the famous Hanny's Voorwerp <cit.>. The Voorwerp is characterized by extended, high equivalent width emission at a large projected distance from the galaxy IC 2497. The leading explanation for the nature of the Voorwerp is that a portion of an otherwise-invisible gas tidal tail was exposed to ionizing radiation from the now-faded AGN in the center of IC 2497. Moreover, the escape path of the ionizing photons was carved by a past jet, as evidenced by the detection of a faint radio relic <cit.>. A similar situation could be at place in , although we cannot say whether the AGN in the DSFG is currently switched off or simply obscured along the line of sight. The radio luminosity in , however, is ∼100 times lower than the power of typical radio-selected HzRGs <cit.>, but at the same time ∼500× higher than the Voorwerp <cit.>. Hence the observed emission is more likely explained by an active (albeit weak) jet, rather than a relic. § SUMMARY AND CONCLUSIONS We have presented the discovery and characterization of two bright nebulae, O3-N and O3-S, at the core of the protocluster. Using JWST/NIRCam and JWST/NIRSpec, we have characterized the morpho-kinematic structure of the nebulae, as well as their potential sources of ionization. Our results can be summarized as follows: * O3-N, the brightest nebula in the system, shows a broad and blueshifted velocity component with FWHM≳1200, as measured in both and lines. We interpret this as evidence of ionized outflows, with a potential link to the CII plume. * While fainter than O3-N, O3-S is more extended and shows an elongated but irregular morphology. 
Moreover, the resolved velocity field reveals a 800 gradient roughly aligned with its major axis, but without a peaked velocity dispersion profile. Also, the lack of emission from stars, cold gas or dust from O3-S disfavor its identification as a massive rotating galaxy. Instead, given the low SB bridge between O3-S and C01, plus their similar oxygen abundances, we deem more likely that O3-S is a tidal feature stemming from C01. * Nebular line ratio diagrams suggest at least some degree of AGN ionization in all sources considered in this paper (except for C01). The DSFG, in particular, shows line ratios consistent with AGN in all the diagrams considered. * O3-N-core also shows a significant detection of the high-ionization Nev3427 line (E>97.1), an almost univocal tracer of AGN activity. Paired with the non-detection of rest-frame hard X-rays, we derive L_X/L_[Ne V] < 3, which implies Compton-thick levels of nuclear obscuration. We propose a scenario where both nebulae are EELRs powered by an AGN deeply buried within the DSFG. While O3-N shows a prominent outflow, O3-S belongs to a tidal tail of C01. This scenario makes O3-S a plausible high-z analog of Hanny's Voorwerp, a residual ionized nebula excited by a faded AGN. These results highlight the power of JWST at uncovering AGN feedback at high redshifts. This work is based in part on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with programs JWST-GO-01727, JWST-GO-0345, and JWST-GO-04265. We also thank Mingyu Li for useful discussions and the My Filter tool https://doi.org/10.5281/zenodo.1021020110.5281/zenodo.10210201. M. S. was financially supported by Becas-ANID scholarship #21221511. M. S., S. B., M. A., R. J. A., J. G-L., M. Boquien, and V. V. all acknowledge support from ANID BASAL project FB210003. M. R. acknowledges support from the Narodowe Centrum Nauki (UMO-2020/38/E/ST9/00077) and support from the Foundation for Polish Science (FNP) under the program START 063.2023. E. I. acknowledges funding by ANID FONDECYT Regular 1221846. M. Boquien gratefully acknowledges support from the FONDECYT regular grant 1211000. This work was supported by the French government through the France 2030 investment plan managed by the National Research Agency (ANR), as part of the Initiative of Excellence of Université Côte d’Azur under reference number ANR-15-IDEX-01. G. C. J. acknowledges funding from the “FirstGalaxies” Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 78905). R. J. A. was supported by FONDECYT grant number 1231718. H.I. acknowledges support from JSPS KAKENHI Grant Number JP21H01129 and the Ito Foundation for Promotion of Science. R. L. D is supported by the Australian Research Council through the Discovery Early Career Researcher Award (DECRA) Fellowship DE240100136 funded by the Australian Government. aa § LINE FITTING WITH PPXF We use the template-fitting software <cit.> to simultaneously model the continuum and the emission lines of our aperture-extracted spectra in the full wavelength range covered by a single grating. We start with the G395H grating since it provides better spectral resolution than G235M. 
For each spectrum, we perform fits with one and two velocity components for the emission lines, but a single component for the stars, which is tied to the narrow gas component. We include the following lines in the fit: He i λ5877, [O i] λλ6303,6365, [N ii] λλ6550,6585, H i λ6565 (Hα), [S ii] λλ6718,6732, He i λ7065, [Ar iii] λ7138, and [S iii] λ9071, where the [O i] and [N ii] doublet ratios have been fixed to their theoretical values. The continuum is fitted against a grid of Stellar Population Synthesis (SPS) spectra computed with v3.2 <cit.>, but restricted to ages younger than the age of the Universe at z=4.54. In most cases, the continuum has S/N≲ 1 per resolution element and no stellar absorption features can be identified, hence we refrain from interpreting any of the SPS output parameters. We find, nevertheless, that these templates provide a good representation of the continuum slope, and naturally incorporate the stellar absorption correction for the Balmer emission lines, even though this correction always stays below 1%. We then compute the AIC and BIC scores of both single- and double-component fits, and require the score difference to be larger than five to keep the double-component fit as the preferred model. This criterion is only met for O3-N and O3-N-core. Next, we model the G235M spectrum using the velocity and velocity dispersion best-fit values from the G395H fit as starting values. Here, we fit the continuum with the same libraries as above, and include the following list of emission lines: [Ne v] λ3427, [O ii] λλ3727,3730, [Ne iii] λ3870, [Ne iii] λ3969, [O iii] λ4364, He ii λ4687, [O iii] λλ4960,5008, and the Balmer series from H i λ3799 (H10) to H i λ4863 (Hβ). The relative intensities of the [O iii] doublet are fixed to their theoretical ratio. Throughout the paper, we use the line fluxes and uncertainties measured by pPXF to compute line ratios. The values are presented in Table <ref> and represent model fluxes from the single-Gaussian-component fits, except for O3-N and O3-N-core, where we use the sum of the narrow and broad components. This is because in most lines the broad component has too low a S/N to provide a meaningful ratio on its own. We also quote fluxes and ratios without correction for reddening unless otherwise noted. § ADAPTIVE MOMENT MAPS The velocity field and velocity dispersion maps shown in Fig. <ref> were created following <cit.>, with a spatial and spectral Gaussian convolution kernel applied to the continuum-subtracted cube. The spatial kernel has σ=1 spaxel, whereas the spectral kernel has σ=σ_LSF at the wavelength of the line. The moments are created by masking out all the voxels with S/N<3 in the convolved cube. § DOUBLE GAUSSIAN FITTING In this appendix we describe the method used to fit the line profiles presented in Sec. <ref>. In the case of [O iii], we model both lines in the doublet simultaneously but tie their wavelengths and amplitudes to the expected ratios (e.g., [O iii] λ5008/λ4960 = 2.98, ). For Hα, we also fit the [N ii] doublet with the λ6585/λ6550 ratio fixed to 2.8. We set up the models within the probabilistic programming framework PyAutoFit <cit.>, and use the Dynesty <cit.> backend to sample the posterior probability distribution and estimate the Bayesian evidence log(Z). The width of the line spread function (LSF) is taken from the dispersion curves available in the JWST documentation[NIRSpec Dispersers and Filters: <https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters#NIRSpecDispersersandFilters-DispersioncurvesfortheNIRSpecdispersers>].
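As an illustration of the model selection used in the appendices above (keep the two-component fit only when it improves both AIC and BIC by more than five), the sketch below fits one and two Gaussian velocity components to a synthetic line profile; the toy profile, noise level, and the use of scipy's curve_fit instead of PyAutoFit/dynesty are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sig):
    return amp * np.exp(-0.5 * ((v - v0) / sig) ** 2)

def one_comp(v, a1, v1, s1, c):
    return gauss(v, a1, v1, s1) + c

def two_comp(v, a1, v1, s1, a2, v2, s2, c):
    return gauss(v, a1, v1, s1) + gauss(v, a2, v2, s2) + c

def info_criteria(model, popt, v, flux, err):
    # AIC and BIC up to additive constants, using chi^2 as -2 ln L
    chi2 = np.sum(((flux - model(v, *popt)) / err) ** 2)
    k, n = len(popt), len(v)
    return chi2 + 2 * k, chi2 + k * np.log(n)

# toy spectrum: a narrow core plus a broad, slightly blueshifted wing
rng = np.random.default_rng(0)
v = np.linspace(-3000, 3000, 200)                 # velocity axis (km/s)
err = 0.05 * np.ones_like(v)
flux = two_comp(v, 1.0, 0.0, 120.0, 0.3, -160.0, 540.0, 0.0) \
       + rng.normal(0, 0.05, v.size)

p1, _ = curve_fit(one_comp, v, flux, p0=[1, 0, 200, 0], sigma=err)
p2, _ = curve_fit(two_comp, v, flux, p0=[1, 0, 150, 0.2, -200, 600, 0], sigma=err)
aic1, bic1 = info_criteria(one_comp, p1, v, flux, err)
aic2, bic2 = info_criteria(two_comp, p2, v, flux, err)
print(f"dAIC = {aic1 - aic2:.1f}, dBIC = {bic1 - bic2:.1f} (>5 favours two components)")
```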
§ OXYGEN ABUNDANCE We measure gas-phase oxygen abundances in the spectra of the different apertures using the indirect indicator proposed by <cit.>. This indicator is calibrated as 12 + log(O/H) = 8.77 + log([N ii]/[S ii]) + 0.264 log([N ii]/Hα) (a short numerical sketch of this calibration is given at the end of the appendices). We chose this indicator because it uses lines from a single grating/filter combination (G395H), thus avoiding possible systematic effects from the combination of the two datasets, and because it is fairly robust to dust attenuation effects (which are significant at least in the case of the DSFG). The main caveat is that it relies on the assumption of a specific relation between the N/O and O/H abundances. Figure <ref> shows the values obtained for all the apertures considered in this paper, including dedicated apertures for the two clumps C01-SW and C01-NE. As expected, the DSFG shows the highest (even supersolar) oxygen abundance. The rest of the apertures are distributed all across the abundance scale, with O3-S showing the lowest abundance. We also report a large (∼0.8 dex) difference between C01-SW and C01-NE, with the latter dominating the integrated value (C01-total). § FAINT LINES In this appendix we plot spectral cutouts of O3-N-core (Fig. <ref>) and O3-S-HeII (Fig. <ref>) around two high-ionization lines (He ii λ4687 and [Ne v] λ3427) plus the [O iii] λ4364 auroral line.
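For convenience, the abundance calibration quoted in the oxygen-abundance appendix can be wrapped in a one-line helper; the line fluxes in the example call are placeholders for illustration only, not measurements from this paper.

```python
import numpy as np

def oxygen_abundance(f_nii, f_sii, f_halpha):
    """12 + log(O/H) = 8.77 + log([N II]/[S II]) + 0.264 log([N II]/Halpha)."""
    return 8.77 + np.log10(f_nii / f_sii) + 0.264 * np.log10(f_nii / f_halpha)

# illustrative fluxes only (arbitrary but common units); only the ratios matter
print(f"12 + log(O/H) = {oxygen_abundance(1.0, 0.7, 3.2):.2f}")
```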
http://arxiv.org/abs/2407.12302v2
20240717035017
Superluminous supernovae
[ "Takashi J. Moriya" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO", "astro-ph.SR" ]
CHAPTER: SUPERLUMINOUS SUPERNOVAE Takashi J. Moriya^1,2,3 [1] National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan [2] Graduate Institute for Advanced Studies, SOKENDAI, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan [3] Monash University, School of Physics and Astronomy, Clayton, Victoria 3800, Australia Glossary: Circumstellar matter Materials surrounding supernova progenitors. Luminosity function Peak luminosity distribution of supernovae. Magnetar Strongly magnetized, rapidly rotating neutron star. Type I SNe Supernovae without spectroscopic features of hydrogen. Type Ib SNe show helium features, while Type Ic SNe show neither hydrogen nor helium features. Type II SNe Supernovae with spectroscopic features of hydrogen. Type IIn/Ibn/Icn SNe Supernovae with narrow emission features indicating the existence of dense circumstellar matter. Nomenclature: CCSN Core-collapse Supernova CSM Circumstellar Matter FBOT Fast Blue Optical Transient FRB Fast Radio Burst GRB Gamma-Ray Burst LSST Legacy Survey of Space and Time PS1 Pan-STARRS1 PTF Palomar Transient Factory SLSN Superluminous Supernova SN Supernova ZAMS Zero-Age Main Sequence ZTF Zwicky Transient Facility § ABSTRACT Superluminous supernovae (SLSNe) are a population of supernovae (SNe) whose peak luminosities are much larger than those of canonical SNe. Although SLSNe were at first defined simply by their peak luminosity, it is currently recognized that they show rich spectroscopic diversity including hydrogen-poor (Type I) and hydrogen-rich (Type II) subtypes. The exact mechanisms making SLSNe luminous are still not fully understood, but there are four major suggested luminosity sources (radioactive decay of ^56Ni, circumstellar interaction, magnetar spin-down, and fallback accretion). We provide an overview of the observational properties of SLSNe and the major theoretical models for them. Future transient surveys are expected to discover SLSNe at high redshifts, which will provide critical information for revealing their nature. Key Points * SLSNe are a class of SNe that become more luminous than around -20 mag in the optical. Broadly, SLSNe have two spectroscopic types: hydrogen-poor (Type I) and hydrogen-rich (Type II). * Hydrogen-poor (Type I) SLSNe are characterized by O ii absorption features around the peak luminosity. Their luminosity sources and progenitors are still debated. * Most hydrogen-rich (Type II) SLSNe have narrow hydrogen emission features (Type IIn) indicating the existence of dense CSM. Thus, they are powered by the interaction between SN ejecta and dense, massive (≳ 5 M_⊙) CSM. The origin of such a CSM is still not clear. § INTRODUCTION Superluminous supernovae (SLSNe) are a class of SNe that become more luminous than other kinds of SNe at their peak luminosity. The existence of a population of such luminous SNe was not recognized until the 2000s. The first glimpse was found in SN 1999as <cit.>, but its nature remained unclear for a long time. SLSNe started to be discovered frequently when unbiased SN surveys began to be conducted in the 2000s. The first well-observed SLSNe include SN 2005ap <cit.>, SN 2006gy <cit.>, and SCP06F6 <cit.>. Some mysterious SNe whose origin was unclear when they were discovered were later identified as SLSNe <cit.>.
Several hundred SLSNe have been identified so far <cit.>. Several review papers on SLSNe are available for further reading <cit.>. § DEFINITION SLSNe have intrinsically higher luminosities than other SNe. SLSNe were initially defined as SNe with peak luminosities brighter than -21 mag in the optical (see for an early review). This luminosity cut of -21 mag is about 10 times brighter than the peak luminosities of commonly observed SNe. However, as the number of SN discoveries increased, it was recognized that there are many SNe with peak luminosities fainter than -21 mag showing spectroscopic characteristics similar to those of SNe with peak luminosities brighter than -21 mag. In other words, it was recognized that SNe showing the characteristic spectroscopic features of SLSNe do not necessarily obey a clear magnitude cut. For example, the peak luminosity of SNe with hydrogen-free (Type I) SLSN spectroscopic features can be as faint as around -20 mag, as discussed in Section <ref>. In addition, some rapidly evolving SNe like the so-called fast blue optical transients (FBOTs) exceed -21 mag in a short time (≲ 3 days) after explosion, but they have different spectroscopic features from canonical SLSNe and they are usually not referred to as SLSNe <cit.>. Thus, at least for Type I SLSNe, SLSNe are defined as a population of luminous SNe with characteristic spectroscopic features rather than as SNe that exceed a certain luminosity cut. The observed number of other SLSNe such as Type II SLSNe (Section <ref>) is still too limited to characterize them by their spectroscopic features. In such cases, SNe more luminous than around -20 mag are naively called SLSNe following the luminosity range of Type I SLSNe. § OBSERVATIONAL PROPERTIES As in the case of other SNe, SLSNe can be broadly classified into two spectroscopic classes based on the presence or absence of hydrogen features in their spectra. Type I SLSNe are SLSNe without hydrogen features and Type II SLSNe are SLSNe with hydrogen features. We summarize their observational properties in this section. §.§ Type I SLSNe Type I SLSNe have been actively observed by many transient surveys. Summaries of Type I SLSN samples from major transient surveys so far can be found in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. <cit.> also provides a summary of Type I SLSN properties. §.§.§ Spectroscopic properties Type I SLSNe are characterized by their O ii absorption features in the wavelength range between 3000 Å and 5000 Å observed around their luminosity peak (, Figure <ref>). No other prominent spectroscopic features appear in optical spectra around the luminosity peak. Thermal excitation is suggested to be sufficient to excite oxygen to form the O ii features if the photospheric temperature is around 14,000-16,000 K <cit.>. Thus, the O ii features can present diverse temporal evolution and strength depending on the photospheric temperature evolution of Type I SLSNe <cit.>. Most Type I SLSNe do not show helium features and thus they are Type Ic SNe. Only a few SLSNe are found to have helium (Type Ib SN) features so far, and their peak luminosities are rather faint (around -20 mag in the optical, ). Photospheric velocities estimated from Fe ii lines exceed 10,000 km s^-1 at around the luminosity peak <cit.>. The temporal evolution of the photospheric velocities is overall found to be similar to that of Type Ic-BL SNe <cit.>. The rest-frame ultraviolet spectra around the light-curve peak of Type I SLSNe are diverse <cit.>.
Ultraviolet spectra below 3000 Å have attenuation due to metal absorption, but the amount of attenuation varies. Strong absorptions in the ultraviolet spectra are likely formed by combinations of the absorptions of several metal lines <cit.>. As the photospheric temperature declines, optical spectra starts to show line features of diverse elements as in other Type I SNe and they evolve to nebular phases. In some cases (e.g., SN 2007bi, ), strong Ca ii emissions start to appear on top of the photospheric spectral features. The origin of these early strong Ca ii emissions is still unclear. Around 10 Type I SLSNe are observed until the entire ejecta become transparent (so-called the “nebular” phase). In the nebular phases, Type I SLSNe are found to have similar spectral features to those of Type Ic-BL <cit.>. They show strong, broad emission lines of O ii and Ca ii, for example. This fact indicates that the physical conditions at the central regions of Type I SLSNe and Type Ic-BL SNe are likely similar. Another notable characteristic of Type I SLSN is the emergence of hydrogen emission at late phases observed in a few Type I SLSNe <cit.>. The late-phase hydrogen emission is likely to indicate the existence of detached hydrogen-rich dense circumstellar matter (CSM) surrounding the progenitors, although it may also originate from hydrogen stripped from the companion star of the progenitors <cit.>. The late-time emergence of hydrogen emission lines is also observed in a couple of Type Ibc SNe <cit.>. Even if no clear interaction signatures are found, the existence of relatively dense CSM surrounding Type I SLSNe is sometimes imprinted in their spectra <cit.>. §.§.§ Light-curve properties The luminosity evolution of Type I SLSNe shows diversity. Their peak luminosity ranges from -20 mag to -23 mag in optical or 3× 10^43 erg s^-1 to 8× 10^44 erg s^-1 in bolometric (e.g., ). The peak luminosity distribution of Type I SLSNe is consistent with an extrapolation from the lower luminosity Type I SNe including Type Ibc and Type Ic-BL SNe (Figure <ref>). In other words, Type I SLSNe do not likely make a separate population in hydrogen-free SNe but they are higher luminosity extension of hydrogen-free SNe. Intermediate SNe are briefly discussed in Section <ref>. The mean rise time of Type I SLSNe is estimated to be 41.9±17.8 days in the g band with the 1σ dispersion in the recent ZTF sample study by <cit.>. The rise time is much shorter than the mean rise time of Type Ibc SNe (∼ 20 days, e.g., ). However, there are Type I SLSNe with the rise time as short as about 10 days <cit.>. To the other extreme, some Type I SLSNe have a very long rise time exceeding 100 days (e.g., PS1-14bj, ; SN 2018ibb, ). These extreme cases are, however, found to be rare. The rise time and the decline time of Type I SLSNe are positively correlated, i.e., slowly rising Type I SLSNe tend to decline slowly. There has been a suggestion that Type I SLSNe can be divided into two populations of slowly evolving Type I SLSNe and rapidly evolving Type I SLSNe <cit.>. It has also been suggested that Type I SLSNe may have a relation between light-curve decline rate and peak luminosity as in Type Ia SNe and they could be a potential standardizable candle <cit.>. The luminosity evolution of Type I SLSNe is not merely characterized by a simple rise and fall. First, it is known that some Type I SLSNe show a precursor before the major luminosity increase. 
The precursor was first identified in SN 2006oz by <cit.>, which showed a precursor luminosity “bump” before the major luminosity increase. Subsequently, LSQ14bdq <cit.> and DES14X3taz <cit.> are found to have a clear precursor bump. The precursor bump lasts for about 10 days. No spectrum during the bump has been obtained so far, but the color during the bump indicates that the bump should have a very hot spectrum <cit.>. Recent studies on a large number of Type I SLSNe indicate that such a precursor luminosity bump is not a ubiquitous feature of Type I SLSNe and only a fraction (40% or less) of Type I SLSNe show the precursor bump <cit.>. In one case of SN 2018bsz, the precursor was found to have a gradual increase in luminosity without showing the temporal luminosity decline before the major luminosity increase (). In some cases, the early bumps may not be prominent and they may be observed as early flux excess <cit.>. After the luminosity peak, a significant fraction of Type I SLSNe show undulations in their light curves <cit.>. Even in Type I SLSNe without clear undulations, there may often exist an underlying flux excess in their light curves <cit.>. In some clear cases, we can even observe a secondary luminosity peak in light curves <cit.>, which are also sometimes observed in Type Ic SNe that have lower luminosity than Type I SLSNe <cit.>. Optical color and spectral line features do not change significantly during the undulations. Only a couple of Type I SLSNe have light-curve information beyond 1000 days after explosion <cit.>. The light-curve decline rates at the very late phase are found to be diverse even among these cases. The late-phase luminosity evolution might provide an important clue in their luminosity source as discussed in Section <ref>. §.§.§ Polarimetric properties Polarimetric properties of Type I SLSNe have been investigated to constrain their ejecta geometry (see for a summary of Type I SLSNe with polarimetric observations). No significant polarization is observed in many cases, which indicates that ejecta are not far from spherical symmetry. However, polarization is indeed observed in some cases. For Type I SLSNe with significant polarization, it is often found that the polarization degree increases with time. This indicates that the asphericity of the ejecta in Type I SLSNe increases with time. During the period when the significant polarization degree is observed, some signatures of the CSM interaction is also identified in many cases <cit.>. While most polarimetric observations have been conducted to measure linear polarization, PS17bek is the only Type I SLSN with circular polarimetric observations <cit.>. However, no significant circular polarization is identified in PS17bek. §.§.§ X-ray properties Type I SLSNe have been extensively observed in X-ray, but they have been detected only in a couple of cases <cit.>. The first case is SCP06F6 <cit.>. It was detected in 0.2–2.0 keV by XMM-Newton at around 150 days after the discovery, and its corresponding rest-frame luminosity is ≃ 10^45 erg s^-1. This X-ray luminosity is three orders of magnitude higher than those observed in other SNe <cit.>, making SCP06F6 the most luminous SN observed in X-ray. SCP06F6 was observed again 3 months after the detection, but it was not detected (<2.5× 10^44 erg s^-1, ). 
Many other Type I SLSNe were observed in similar epochs with sufficient depths to detect them if they are as bright as SCP06F6, but SCP06F6 remains to be the only detection with such a high luminosity among Type I SLSNe (, Figure <ref>). PTF12dam is another case of the X-ray detection from a Type I SLSN <cit.>. It was detected by the Chandra X-ray Observatory at around the optical luminosity peak, and its X-ray luminosity was ∼ 2× 10^40 erg s^-1 in 0.3-10 keV. The X-ray luminosity is, however, consistent with the expected diffuse X-ray luminosity from the underlying star forming activities. Thus, it is possible that a significant fraction of the observed X-ray luminosity is not from PTF12dam itself. Finally, a potential Type I SLSN ASASSN-15lh <cit.> was detected in X-ray <cit.>, but its nature as a Type I SLSN has been debated and questioned <cit.>. §.§.§ Mid-infrared properties Some observations of Type I SLSNe in mid-infrared are currently available. SN 2018bsz, a nearby Type I SLSN, was observed by Spitzer and it was detected in 3.6 μ m and 4.5 μ m at around 400 days and 550 days after the r band luminosity peak in the rest frame <cit.>. It was also detected by WISE in 3.4 μ m and 4.6 μ m at around 250 days and 400 days after the r band luminosity peak. <cit.> investigated WISE data, and found some additional Type I SLSNe with mid-infrared detections. The mid-infrared detection indicates the existence of dust in Type I SLSNe created before and/or after their explosions. §.§.§ Radio properties Radio information of Type I SLSNe is suggested to play a key role in constraining their powering mechanism (Section <ref>), and radio follow-up observations of Type I SLSNe have been conducted actively (see for a current summary; Figure <ref>). A couple of Type I SLSNe have been detected in radio. Type I SLSN PTF10hgi was the first case of the radio detection from Type I SLSNe (e.g., ). It was detected in 1-20 GHz from around 7.5 years after the explosion and its radio luminosity was ≃ 10^28 erg s^-1 Hz^-1 at 6 GHz. SN 2017ens <cit.> is a SN that reached the peak luminosity of -21.1 mag in the g band and its luminosity is consistent with being a Type I SLSN. Although it did not show the characteristic O ii features of Type I SLSNe, they might have been missed because of infrequent spectroscopic observations. SN 2017ens first had broad features similar to those of Type Ic-BL at around the peak luminosity. However, it showed Type IIn SN features after 160 days since the luminosity peak which indicate that the ejecta started to interact with a detached hydrogen-rich dense CSM. SN 2017ens was detected in 3-10 GHz from around 3.3 years after explosion, and its radio luminosity was ≃ 10^28 erg s^-1 Hz^-1 at 6 GHz <cit.>. Given the presence of the late-phase CSM interaction signatures in optical spectra, the radio luminosity may originate from the CSM interaction. §.§.§ Gamma-ray properties Possible gamma-ray detection from SN 2017egm by the Fermi satellite is reported by <cit.>. The gamma-ray (500 MeV-500 GeV) was detected at 100-150 days after the discovery. §.§.§ Association with gamma-ray bursts A potential Type I SLSN, SN 2011kl, was associated with ultra-long GRB 111209A <cit.>. The spectrum of SN 2011kl was not good enough to identify their spectral features, but it reached -20 mag at the peak luminosity. Long GRB 140506A was associated with a possible luminous blue SN component which might indicate a potential association between normal long GRBs and SLSNe <cit.>. 
However, no spectrum was obtained for the potential SN component. Association between GRBs and SLSNe has not been fully investigated and further observational studies are required. §.§.§ Host environments Type I SLSNe are observed to prefer low-metallicity environments <cit.>. Most Type I SLSNe are observed below around 0.5 Z_⊙ <cit.>, although some SLSNe such as SN 2017egm are exceptionally found in high metallicity environments that are similar to typical core-collapse SNe <cit.>. The host galaxies of Type I SLSNe tend to be low stellar-mass galaxies <cit.>. They also tend to have high star-formation efficiencies of around 10^-9-10^-7 yr^-1 (e.g., ; Figure <ref>). This might indicate that Type I SLSNe are explosions of very massive stars that occur immediately after the star formation, although simply associating high star-formation efficiencies to massive progenitors has been questioned <cit.>. Some Type I SLSNe are also found to be associated with dense molecular clouds with active star formation <cit.>. Some host galaxies of Type I SLSNe are interacting galaxies <cit.> or compact dwarf irregular galaxies with extremely strong emission lines likely experiencing active star formations <cit.>. Type I SLSNe are found to have a tendency to explode further away from the host galaxy center than any other types of SNe as well as long GRBs <cit.>. The similarities between the host galaxies of Type I SLSNe and long GRBs have been explored in many studies <cit.>. While some differences among them are pointed out <cit.>, their host galaxies are rather similar without much statistical differences <cit.>. The host galaxies of fast radio bursts (FRBs) are often found to be different from those of Type I SLSNe, although the host galaxies of repeating FRBs might have some similarities <cit.>. §.§.§ Transitional SNe Type I SNe with the light-curve peak magnitude range of ∼ -20 mag - ∼ -19 mag in optical show diverse spectroscopic features <cit.>. Some of them show hot spectra similar to Type I SLSNe while others show cool spectra with more absorption features similar to Type I SNe. Some Type I SNe in this luminosity range have relatively slow photospheric velocities than those of Type I SLSNe and Type I SNe that may indicate an existence of interesting intermediate population in this luminosity range <cit.>. §.§ Type IIn and Type II SLSNe While more than 100 hydrogen-poor (Type I) SLSNe have been found so far, the number of observed hydrogen-rich SLSNe is still far less (of the order of 10). Therefore, the spectroscopic features that characterize hydrogen-rich SLSNe have not been carefully investigated, and hydrogen-rich SLSNe are still classified based on their peak luminosities. Following the luminosity range of Type I SLSNe, SNe with hydrogen features having the peak magnitude brighter than around -20 mag in optical are often referred as hydrogen-rich (Type IIn or Type II) SLSNe. Most hydrogen-rich SLSNe show narrow hydrogen lines in their spectra and they are called Type IIn SNe. The prototype and the most studied SN of this class is SN 2006gy <cit.>. The luminosity and luminosity evolution of Type IIn SNe are diverse. The most luminous Type IIn SLSNe reaches to around -22.5 mag in optical (e.g., SN 2008fz, ; SN 2016aps, ) and these bright Type IIn SLSNe tend to have round light-curve shapes around the luminosity peak. As the peak luminosity of Type IIn SLSNe becomes smaller, their light-curve evolution tends to decline with a power law (e.g., SN 2010jl, ). 
However, some Type IIn SLSNe are known to evolve quite fast (e.g., SN 2003ma, ), while others evolve very slowly (e.g., SN 2015da, ) regardless of their high luminosity. It is not clear if there is a separate population of Type IIn SLSNe or they consist of the most luminous end of a continuous Type IIn SN population <cit.>. Some Type IIn SLSNe emit more than 5× 10^51 erg and the explosion inside such events are clearly distinct from other Type IIn SNe <cit.>. Some very luminous transients having spectra that are consistent with Type IIn SNe appear near the center of AGNs <cit.>. Because spectra of some AGNs and Type IIn SNe are known to have similar narrow emission features, it is sometimes difficult to distinguish if they are Type IIn SLSNe originating from stellar explosions or certain activities of AGNs <cit.>. Some tidal disruption events may also be confused with Type IIn SLSNe <cit.>. We note that SN 2006gy appeared in an X-ray bright galaxy (NGC 1260, ) which might be an AGN and it may have been identified as a nuclear transient if it appeared at high redshifts. Type IIn SLSNe have been often observed in infrared wavelengths and they are found to have observational features of dusts <cit.>. For example, SN 2006gy was bright in infrared for a long time <cit.>. SN 2010jl is also well observed in infrared with dust signatures <cit.>. Not all hydrogen-rich SLSNe are Type IIn SLSNe. Several hydrogen-rich SLSNe without narrow hydrogen emission lines are known and they are referred as Type II SLSNe <cit.>. SN 2008es is the first SLSN that was identified as a Type II SLSN <cit.>. The number of observed Type II SLSNe is still small <cit.> and the characteristic properties of Type II SLSNe that distinguish them from less luminous Type II SNe are still not fully understood. Host environments of Type IIn SLSNe are more diverse than those of Type I SLSNe. They can appear in higher metallicity environments than Type I SLSNe <cit.>. Their host galaxies can have higher metallicities and larger stellar masses than Type I SLSNe as well. They might tend to originate from lower metallicity and lower luminosity galaxies than typical core-collapse SNe <cit.>. The host environments of Type II SLSNe are found to be not so different from those of Type IIn SLSNe <cit.>, but the number of Type II SLSNe is still too small to make a proper comparison. Some studies suggest that Type II SLSN environments are similar to those of Type I SLSNe <cit.>. § EVENT RATES The current event rate estimates for SLSNe are summarized in Figure <ref>. The event rates of SLSNe were first studied by <cit.>. Based on SLSNe discovered by ROTSE-IIIb, they estimated the Type I SLSN rate at z≃ 0.17 as 32^+77_-26h_71^3 Gpc^-3 yr^-1 and the hydrogen-rich SLSN rate as 151^+151_-82h_71^3 Gpc^-3 yr^-1 at z≃ 0.15. The hydrogen-rich SLSN rate from <cit.> includes both Type IIn and Type II SLSNe discovered by ROTSE-IIIb. <cit.> estimated the Type I SLSN rate at 0.3≲ z ≲ 1.4 as (3-8)× 10^-3% of the core-collapse SN rate. <cit.> estimated the Type I SLSN rate at z≃ 1.13 as 91^+76_-36h_70^3 Gpc^-3 yr^-1 based on the Type I SLSN sample from Supernova Legacy Survey. <cit.> estimated Type I SLSN rate at z≲ 0.2 as 35^+25_-13h_70^3 Gpc^-3 yr^-1 based on the PTF SLSN sample. Using the public Type I SLSN data from PS1, <cit.> estimated the Type I SLSN event rate of 40 h_70^3 Gpc^-3 yr^-1 at z≃ 0.89 with an unknown error. No updated event rates for Type IIn and Type II SLSNe have been obtained after <cit.>. 
There are some estimates for the total event rates of SLSNe based on photometric samples of SLSNe. For example, based on the fact that no SLSNe were discovered during the Supernova Diversity and Rate Evolution (SUDARE) survey, the total SLSN rate was constrained to be less than 900h_70^3 Gpc^-3 yr^-1 at z≃ 0.5 <cit.>. <cit.> estimated the SLSN rate at z≃ 2-4 to be ∼ 400h_71^3 Gpc^-3 yr^-1 using two SLSNe at z=2.05 and 3.90 discovered by the photometric SLSN search in the CFHT archival data. The high-redshift SN survey with Subaru/Hyper Suprime-Cam (HSC) provided the SLSN event rates of ∼ 900^+900_-500h_70^3 Gpc^-3 yr^-1 at z≃ 2, ∼ 400^+900_-300h_70^3 Gpc^-3 yr^-1 at z≃ 3, and ∼ 500^+1200_-400h_70^3 Gpc^-3 yr^-1 at z≃ 4 <cit.>. § LUMINOSITY SOURCES AND PROGENITORS We provide a brief overview of possible luminosity sources that make SLSNe so bright. For more details of the major proposed luminosity sources, we refer to <cit.>. For each possible luminosity source, we also discuss possible progenitors that can realize the conditions required to have the luminosity source. §.§ Radioactive decay The energy released by the radioactive decay of ^56Ni synthesized during the SN explosion is a standard powering mechanism of SNe <cit.>. Especially, the early luminosity peak of stripped-envelope SNe is mainly powered by the ^56Ni decay <cit.>. A simple approach extending the standard SN powering mechanism to SLSNe is to have a larger amount of ^56Ni from the explosive nucleosynthesis. The more ^56Ni is synthesized, the more radioactive energy is available to make SNe brighter. The synthesized ^56Ni decays to ^56Co with a decay time of 8.76±0.01 days <cit.> and then ^56Co decays to ^56Fe with a decay time of 111.42±0.04 days <cit.>. The total available energy from this radioactive decay is L_^56Ni-decay=[6.48exp(-t/8.76 days)+1.44exp(-t/111.42 days)]M_^56Ni/M_⊙10^43 erg s^-1, where M_^56Ni is the mass of ^56Ni synthesized at the explosion ( with the updated physical values from <http://www.nndc.bnl.gov/chart>). The energy from the nuclear decay is mostly released as gamma-rays having the energy of the order of MeV. Because these gamma-rays may not necessarily be absorbed by the SN ejecta, the actual available energy to power the SN luminosity is less than the total decay energy in Equation (<ref>). In order to have a rough estimate for the amount of ^56Ni that is required to account for the luminosity of SLSNe, we can use the rise time and peak bolometric luminosity of SLSNe. <cit.> analytically showed that the peak luminosity of a SN is the same as the luminosity input at the time of the peak luminosity (so-called "Arnett-law"). If we take the average rise time of 41.9 days for Type I SLSNe, for example, the ^56Ni mass required to explain their peak luminosities above 3× 10^43 erg s^-1 is M_^56Ni≳ 3 M_⊙ (Equation <ref>). This ^56Ni mass is much higher than the ^56Ni mass estimated for typical SNe (M_^56Ni≲ 0.3 M_⊙, e.g., ). Even broad-lined Type Ic SNe that are among the most energetic core-collapse SNe are mostly estimated to have M_^56Ni≲ 1 M_⊙ <cit.>. Synthesizing M_^56Ni≳ 3 M_⊙ is a challenge in the standard core-collapse SN explosion models. The maximum amount of ^56Ni that can be synthesized by core-collapse SNe is estimated to be around 10 M_⊙, which requires the explosion energy of 10^53 erg <cit.>. Some relatively low-luminosity SLSNe that require M_^56Ni≲ 10 M_⊙ could be consistent with such energetic core-collapse SNe <cit.>. 
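As a rough illustration (not part of the original analysis), the required ^56Ni mass can be reproduced from Equation (<ref>) and Arnett's rule with a few lines of Python; full gamma-ray trapping is assumed, and the rise time and peak luminosity are the representative values quoted above:

import numpy as np

def ni56_decay_luminosity(t_days, m_ni_msun):
    """Total decay power of the 56Ni -> 56Co -> 56Fe chain in erg/s
    (Equation above), assuming full gamma-ray trapping."""
    return (6.48 * np.exp(-t_days / 8.76)
            + 1.44 * np.exp(-t_days / 111.42)) * m_ni_msun * 1e43

def required_ni56_mass(t_rise_days, l_peak_erg_s):
    """Arnett's rule: the peak luminosity equals the instantaneous decay
    power, so the required 56Ni mass scales linearly with L_peak."""
    return l_peak_erg_s / ni56_decay_luminosity(t_rise_days, 1.0)

print(required_ni56_mass(41.9, 3e43))  # ~2.9 Msun, consistent with the >~3 Msun quoted above

Because gamma-ray escape is neglected, this is a lower limit on the ^56Ni mass actually needed.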
A significant fraction of SLSNe, however, require M_^56Ni≳ 10 M_⊙ to explain their peak luminosity. Pair-instability SNe (PISNe, ), on the other hand, are predicted to synthesize up to around 70 M_⊙ of ^56Ni <cit.>. PISNe are predicted explosions of very massive stars with the helium core mass between ∼ 65 M_⊙ and ∼ 135 M_⊙, although the exact mass range depends on uncertainties in, e.g., nuclear reaction rates <cit.>. This helium core mass corresponds to the zero-age main sequence (ZAMS) mass between ∼ 140 M_⊙ and ∼260 M_⊙ when mass loss and rotation are ignored <cit.>. If the rotation is significant, the ZAMS mass of PISN progenitors can be as low as 65 M_⊙ through chemically homogeneous evolution <cit.>. Stellar mergers are also an important path to form massive stars ending up with PISNe <cit.>. Because PISN progenitors must retain their massive cores until the instability is triggered, PISNe are not expected to occur frequently above a certain metallicity, where strong mass loss removes too much mass <cit.>. This is consistent with the fact that Type I SLSNe prefer low metallicity environments. However, the exact metallicity cut is uncertain <cit.>, and it is also possible to suppress mass loss in high metallicity environments <cit.>. The predicted PISN event rates in the local Universe are also lower than the event rates of Type I SLSNe <cit.>. Predicted PISN light curves evolve slowly <cit.> and they are often consistent with those of slowly evolving SLSNe. In many rapidly evolving SLSNe, the required amount of ^56Ni to explain their luminosity becomes more than the estimated ejecta mass <cit.>. Therefore, rapidly evolving SLSNe are generally not likely powered by the radioactive decay of ^56Ni, although mixing of synthesized ^56Ni in the ejecta may make the luminosity evolution of PISNe faster <cit.>. However, no significant mixing is predicted in multi-dimensional simulations of PISNe <cit.>. The nuclear decay of ^56Ni as a possible power source of SLSNe is therefore mainly discussed for slowly evolving SLSNe whose light-curve decline rates are consistent with the decay rate of ^56Ni→^56Co→^56Fe. Even if the light-curve evolution is consistent with PISNe, their spectral features are often found to be inconsistent with those predicted for PISNe <cit.>. In particular, a large amount of ^56Fe is expected to exist in late phases from the nuclear decay of ^56Ni, but no strong ^56Fe absorption or emission is usually observed in SLSNe. Currently, only SN 2018ibb is found to match most of the predicted PISN properties, but its late-phase spectra show a blue flux excess which might be caused by additional CSM interaction <cit.>. It is difficult to explain the precursors and the light-curve bumps and undulations occurring on different timescales solely by the nuclear decay energy input, which is governed by the fixed nuclear decay timescales. Thus, an additional luminosity source such as CSM interaction is required to explain the whole luminosity evolution of such SLSNe. §.§ Circumstellar interaction The collision of SN ejecta with their CSM can efficiently convert the kinetic energy of the ejecta to radiation, especially when the CSM has a mass comparable to that of the SN ejecta. When the CSM density is high enough, the collision forms a strong radiative shock. The emission first emerges mostly in X-rays, but the X-rays can be immediately absorbed through free-free absorption when the CSM density is high. The post-shock region can then cool to around 10^4 K and emit photons in the optical <cit.>. 
The unshocked CSM can be optically thick and photons will diffuse out through the dense CSM. This diffusion process can make the light curves of interaction-powered SNe broad. The diffusion time can vary depending on the CSM density and radius. Therefore, the CSM interaction model can explain SLSNe of various timescales. Type IIn SNe show clear signatures of the CSM interaction in their spectra. Thus, Type IIn SLSNe are considered to be powered by the CSM interaction as in the case of the lower luminosity Type IIn SNe <cit.>. Compared to low-luminosity Type IIn SNe, Type IIn SLSNe are estimated to have higher explosion energies or more massive CSM because of their higher luminosity. The total radiated energy of Type IIn SLSNe is of the order of 10^51 erg, which is a typical SN explosion energy. Thus, if the kinetic energy of the SN ejecta can be efficiently converted to radiation through the CSM interaction, it is possible to explain the huge luminosities observed in Type IIn SLSNe. The total CSM mass around Type IIn SLSNe is estimated to be ≳ 5 M_⊙ (, Figure <ref>). We note that the most luminous Type IIn SLSNe emit more than 10^52 erg in total and the inner explosion sometimes needs to be energetic <cit.>. Because the observational signatures of Type IIn SLSNe are dominated by the CSM interaction, it is difficult to identify the nature of the explosions inside. The progenitors of Type IIn SNe are known to be massive (≳ 50 M_⊙) luminous blue variables (LBVs, ) and low-mass (≃ 10 M_⊙) red supergiants <cit.>. Among them, some mass eruptions from LBVs such as the Great Eruption of η Carinae <cit.> are known to form CSM as massive as 10 M_⊙ <cit.>, although their mass-loss mechanisms are not well understood. If LBVs explode immediately after they experience such an eruptive mass loss, they could be observed as Type IIn SLSNe. LBVs are considered to be massive stars with ZAMS masses above around 40 M_⊙ <cit.>, but they may also originate from stellar mergers of less massive stars <cit.>. Even if massive stars are not in the LBV phase, they may experience strong mass loss triggered by the strong convection at the innermost layers of massive stars (, but see also ). A phase transition from nuclear matter to the quark-gluon plasma at the center of massive stars after the core collapse is also suggested to result in Type IIn SLSNe <cit.>. Another mechanism to form massive CSM is common-envelope mass ejection <cit.>. Unstable mass transfer in binary systems with massive stars can lead to a common-envelope phase. Although the exact outcome of the common-envelope phase is uncertain, one possible consequence is ejection of the massive stellar envelope. If the common-envelope mass ejection occurs shortly before the explosion of massive stars, the SN ejecta can collide with the massive ejected envelope and can be observed as Type IIn SLSNe. Extensive mass loss from massive stars can also be related to the pulsational pair-instability <cit.>. Massive stars slightly below the mass range of PISNe (Section <ref>) can still become dynamically unstable and eject a part of their mass to form a massive CSM. Several mass ejections triggered by this instability can occur sequentially, and the ejected shells can collide with each other to make Type IIn SLSNe <cit.>. Transients of this kind, triggered by the pulsational pair-instability, are called pulsational pair-instability SNe (PPISNe). It is also possible that the pulsational pair-instability is followed by a core-collapse SN. 
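To see why a CSM mass comparable to the ejecta mass makes the conversion efficient, one can treat the ejecta-CSM collision as a perfectly inelastic collision and assume that all dissipated kinetic energy is radiated. This is only an order-of-magnitude sketch with illustrative (assumed) ejecta properties, not the detailed radiation-hydrodynamics modeling cited above:

import numpy as np

MSUN = 1.989e33  # g

def radiated_energy_inelastic(m_ej_msun, e_kin_erg, m_csm_msun):
    """Upper limit on the radiated energy when SN ejecta sweep up a massive
    CSM shell: momentum is conserved and the dissipated kinetic energy is
    assumed to be fully radiated."""
    m_ej, m_csm = m_ej_msun * MSUN, m_csm_msun * MSUN
    v_ej = np.sqrt(2.0 * e_kin_erg / m_ej)      # characteristic ejecta velocity
    v_shell = m_ej * v_ej / (m_ej + m_csm)      # post-collision shell velocity
    return e_kin_erg - 0.5 * (m_ej + m_csm) * v_shell**2

# Assumed example: 10 Msun of ejecta carrying 10^51 erg hitting a 10 Msun CSM
print(radiated_energy_inelastic(10.0, 1e51, 10.0))  # 5e50 erg, i.e. half of E_kin

For equal ejecta and CSM masses half of the kinetic energy is dissipated, which is why a canonical 10^51 erg explosion can supply the ∼10^51 erg radiated by Type IIn SLSNe.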
If such a core-collapse SN occurs after the pulsational pair-instability, its ejecta can collide with the dense CSM formed by the preceding pulses and be observed as a Type IIn SLSN. As in the case of PISN progenitors, PPISN progenitors can originate from stellar mergers <cit.>. It has also been suggested that some Type IIn SLSNe are related to explosions of white dwarfs (Type Ia SNe). The late-phase spectrum of SN 2006gy was suggested to be similar to that of a Type Ia SN, and its luminosity evolution is also suggested to be consistent with the explosion of a Type Ia SN within a hydrogen-rich CSM with a mass of ≃ 10 M_⊙ <cit.>. A major question for this scenario is how to form such a massive hydrogen-rich CSM around Type Ia SN progenitors. Some Type Ia SNe are known to show signatures of hydrogen-rich CSM (so-called Type Ia-CSM), but their CSM mass is estimated to be of the order of 0.1 M_⊙ <cit.>. A rare evolutionary path involving common-envelope mass ejection shortly before Type Ia SN explosions may be able to realize such a massive CSM around Type Ia SNe <cit.>. We have discussed Type IIn SLSNe so far. Although the other types of SLSNe do not show clear signatures of the CSM interaction in their spectra in early phases, the CSM interaction has been considered to be their possible luminosity source. For example, the light curves of Type I SLSNe can be reproduced by the interaction between SN ejecta and a massive hydrogen-poor CSM <cit.>. The precursor bump observed in Type I SLSNe may also be explained by the existence of a massive CSM <cit.>. Such a massive hydrogen-poor CSM can be formed by the pulsational pair-instability of massive hydrogen-poor stars <cit.>, and such massive hydrogen-poor stars can be formed through chemically homogeneous evolution, for example <cit.>. Several Type I SLSNe have been suggested to be hydrogen-poor PPISNe <cit.>. If Type I SLSNe are mainly powered by the CSM interaction, a major remaining question is why we do not see clear interaction signatures in their spectra. There are hydrogen-poor SNe, known as Type Ibn <cit.> and Type Icn SNe <cit.>, that show clear CSM interaction signatures in their spectra. Thus, we may expect similar spectroscopic signatures if Type I SLSNe are mainly powered by the CSM interaction, although the required CSM mass and density for Type I SLSNe are expected to be higher. More theoretical investigations on the expected spectroscopic features of hydrogen-poor interaction-powered SNe are required. Even if the major luminosity source of Type I and Type II SLSNe is not the CSM interaction, it is very likely that their properties are often partially affected by the CSM interaction. The luminosity undulations observed in Type I SLSNe can be explained as an additional effect caused by the CSM interaction. Similarly, multiple luminosity bumps in Type I SLSNe could be related to the existence of multiple dense CSM components <cit.>. Because many Type I SLSNe show the CSM interaction signatures at late phases, it is possible that some effects of the CSM interaction start to appear at earlier epochs, after the major luminosity source stops providing energy. §.§ Spin down of strongly magnetized neutron stars (“magnetars”) A neutron star may remain at the center of an exploding massive star after a core-collapse SN explosion. If the neutron star has rotation and poloidal magnetic fields (in other words, if the neutron star becomes a “pulsar”), the rotational energy should be released electromagnetically, mostly as Poynting-flux dominated outflows <cit.>. 
The spin-down power that can be released in this form can be expressed as L_spin-down=(l-1)E_p/t_p(1+t/t_p)^-l, where E_p is the initial rotational energy of the neutron star and t_p is the spin-down timescale. The temporal index l is determined by the braking index and it is often assumed to be l=2, i.e., the magnetar spin-down is dominated by dipole radiation. A fraction of this spin-down energy can be thermalized to power the SN luminosity. The idea that pulsars may be able to power SN explosions and SN luminosity appeared shortly after the discovery of pulsars <cit.>. However, the pulsar-powering mechanism for typical core-collapse SNe was not found to match their observations. Later, <cit.> proposed that the pulsar power can be an extra energy source to illuminate SNe in order to explain a peculiar light-curve behavior of SN 2005bf <cit.>. <cit.> applied this idea of powering SNe with pulsars to SLSNe. They demonstrated that if the pulsar has an initial rotational period of ∼ 1 ms with a dipole magnetic field of ∼ 10^14 G, it can reproduce the light curves of Type I SLSNe. Such strongly-magnetized, rapidly-rotating pulsars powering SLSNe are often referred to as "magnetars." This magnetar scenario to power the SLSN luminosity is currently the most popular scenario to explain SLSNe without clear CSM interaction signatures. Many studies applied the magnetar model to fit the light-curve evolution of Type I SLSNe to constrain the properties of magnetars powering Type I SLSNe <cit.>. The magnetar model can explain the early-phase light curves of both slowly-evolving and rapidly-evolving Type I SLSNe well. Systematic fitting of the magnetar-powered model to the Type I SLSN sample from ZTF obtained an average initial spin period of 2.65^+2.58_-0.68 ms and a dipole magnetic field strength of 0.98^+0.98_-0.63× 10^14 G with the 1σ range <cit.>. This fitting provides an average ejecta mass of 5.03^+4.01_-2.39 M_⊙ and an average kinetic energy of 2.13^+1.89_-0.96× 10^51 erg with the 1σ range <cit.>. The distributions of the estimated parameters and their correlations can be found in Figure <ref>. Under the assumption of the magnetar spin-down model, a possible correlation between SN ejecta mass and initial spin periods is found <cit.>. The observed complexity in the light curves of Type I SLSNe is also suggested to be explained by the magnetar scenario. The precursor bump can be explained by the breakout of a strong shock formed by the central energy injection <cit.>. The late-phase light curves powered by magnetar spin down are strongly affected by the uncertain thermalization efficiencies of the spin-down energy <cit.>. In order to distinguish magnetar spin-down from other energy sources such as the ^56Ni radioactive decay, light curves need to be followed for around 1000 days <cit.>, but SLSNe with such long-term observations are still limited <cit.>. The late-phase light curve undulations can be related to temporal changes in magnetar activities <cit.>. Spectroscopic properties are also found to be consistent with the magnetar model <cit.>. In particular, the ultraviolet brightness in early phases as well as the slow evolution of the photospheric velocity is consistent with the properties predicted by the magnetar-powered models. The aspherical nature observed in some Type I SLSNe can also be explained by the magnetar scenario because the spin-down energy injection inevitably occurs in an aspherical form. 
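For concreteness, the spin-down input given by the formula above can be evaluated for the average ZTF parameters quoted in this subsection. The conversion from (P, B) to (E_p, t_p) below uses the commonly adopted vacuum-dipole relations with a fiducial neutron-star moment of inertia of 10^45 g cm^2 and radius of 10^6 cm; these are assumptions made for illustration rather than the exact conventions of the fitting codes cited:

import numpy as np

I_NS, R_NS, C_LIGHT = 1.0e45, 1.0e6, 3.0e10  # g cm^2, cm, cm/s (fiducial values)

def magnetar_spindown(t_s, p_ms, b14, l=2):
    """Spin-down power L(t) = (l-1) E_p/t_p (1+t/t_p)^-l for an initial
    period p_ms [ms] and dipole field b14 [10^14 G]; l=2 corresponds to
    vacuum dipole braking. Returns (L [erg/s], E_p [erg], t_p [s])."""
    omega = 2.0 * np.pi / (p_ms * 1.0e-3)
    e_p = 0.5 * I_NS * omega**2                       # initial rotational energy
    t_p = 6.0 * I_NS * C_LIGHT**3 / ((b14 * 1e14)**2 * R_NS**6 * omega**2)
    lum = (l - 1.0) * e_p / t_p * (1.0 + t_s / t_p)**(-l)
    return lum, e_p, t_p

lum0, e_p, t_p = magnetar_spindown(0.0, 2.65, 0.98)   # ZTF sample averages
print(e_p, t_p / 86400.0)   # ~2.8e51 erg of rotational energy, t_p of ~35 days

Only a fraction of this power is thermalized in the ejecta, so the observed luminosity also depends on the uncertain trapping efficiency discussed in the text.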
Jet emergence from magnetars is also expected <cit.>. The magnetar energy injection at the central region of the SN ejecta is predicted to result in several interesting characteristic features in SLSNe. For example, the extra strong energy input at the innermost layers of the SN ejecta can lead to strong mixing within the ejecta and make the ejecta density structure flatter than in other SNe <cit.>. Such a flat density structure may affect spectral formation in SLSNe, although its consequences have not been studied in detail. Another characteristic prediction is that high-energy and X-ray emissions can be observed at late phases after the central nebular regions formed by the spin down of the central magnetar become transparent <cit.>. In addition, magnetar-powered SLSNe can become bright in radio because of the pulsar wind nebula formed by the central magnetar. However, X-ray and radio emissions from Type I SLSNe are often not consistent with simple predictions <cit.>. The thermalization process in the magnetar spin down is not well understood and more investigations are required to predict the expected observational properties <cit.>. Possible gamma-ray emission from SN 2017egm is suggested to be consistent with the magnetar spin-down model <cit.>. If the central magnetar is so massive that rotational support is required to sustain it against collapse, the magnetar would eventually collapse to a black hole as it spins down and leave some observational consequences <cit.>. Finally, FRBs might be associated with SLSNe if they are both powered by magnetars <cit.>, although the host galaxy properties of SLSNe and FRBs are different (Section <ref>). The progenitors of magnetar-powered SLSNe should have rapid rotation. Rapidly rotating Type I SLSN progenitors could be realized through chemically homogeneous evolution <cit.>, possibly in binary systems. The strong magnetic field may exist at the time of core collapse or be amplified during the core collapse. Stellar mergers may also be responsible for the magnetic field amplification <cit.>. We note that some Type I SLSNe showing late-phase CSM interaction features require an additional CSM component ejected from the progenitor in addition to the rapid rotation and strong magnetic field. Some Type I SLSNe show late-phase hydrogen-rich emission that requires hydrogen-rich mass loss from the progenitors shortly before their explosions. §.§ Black hole accretion A black hole, instead of a neutron star, can be formed after the terminal collapse of a massive star. Accretion towards the black hole can launch a jet or a disk wind outflow that can be a luminosity source of SNe. If a massive star collapses directly to the black hole on a free-fall timescale, the accretion timescale would be too short to power SLSNe. However, if the outer layers of the SN ejecta are first ejected and then fall back onto the black hole, a long-lasting mass accretion towards the central black hole can be realized. Such a fallback accretion is suggested to be a potential power source for SLSNe <cit.>. In a simplified picture, the luminosity input from the accretion can be expressed as L_accretion=ε_accṀ_acc c^2, where ε_acc is the thermalization efficiency of the accretion and Ṁ_acc is the accretion rate onto the black hole. The thermalization efficiency is quite uncertain, but it can be reasonably assumed to be ε_acc∼ 10^-3 <cit.>. When the fallback accretion is dominant, Ṁ_acc becomes proportional to t^-5/3. 
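A similarly minimal sketch of the fallback-powered input combines L_accretion=ε_accṀ_acc c^2 with the late-time scaling Ṁ_acc∝ t^-5/3; the normalization below (an assumed 0.1 M_⊙ s^-1 at t_0 = 100 s) is purely illustrative and not a value taken from the studies cited:

MSUN, C_LIGHT = 1.989e33, 3.0e10  # g, cm/s

def fallback_luminosity(t_s, mdot0_g_s, t0_s, eps_acc=1e-3):
    """Accretion power L = eps_acc * Mdot * c^2 with the late-time fallback
    scaling Mdot = Mdot0 * (t/t0)^(-5/3), valid for t > t0."""
    mdot = mdot0_g_s * (t_s / t0_s)**(-5.0 / 3.0)
    return eps_acc * mdot * C_LIGHT**2

print(fallback_luminosity(1.0e6, 0.1 * MSUN, 1.0e2))  # ~4e43 erg/s at t ~ 12 days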
<cit.> systematically investigated the properties of the fallback accretion required to account for Type I SLSNe. Although their light curves can be fitted well by the fallback accretion scenario, the mass required to accrete onto the black hole is often found to be very large (Figure <ref>). §.§ Other proposed mechanisms It is still possible that some unrecognized mechanisms lead to at least some fraction of the observed SLSN populations. For example, axion-instability SNe might be related to SLSNe <cit.>. A latent energy released by the phase transition from neutron stars to quark stars is suggested to power SLSNe <cit.>. Other unconsidered energy sources may also play an important role. § OUTLOOK As we face the era of large-scale transient surveys in various wavelengths, the number of SLSN discoveries is expected to keep increasing. For example, the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) is expected to discover ∼ 10^4 SLSNe per year, although only a limited fraction among them will have sufficient light-curve information to reconstruct their physical properties <cit.>. An unexplored frontier in SLSN discoveries is at high redshifts. Only a couple of SLSNe have been observed at z≳ 2 so far <cit.>. SLSNe at high redshifts will allow us to explore SLSN properties in, e.g., low-metallicity environments, which will provide essential information to uncover their progenitors. In addition, the event rates of PISNe are predicted to be higher at higher redshifts <cit.> and we expect to discover many PISNe if we search for them at high redshifts. Exploring high-redshift SLSNe requires near-infrared transient surveys. Fortunately, several wide-field, sensitive near-infrared imaging instruments will be available in the coming years and they will allow us to explore SLSNe at high redshifts <cit.>. For example, Euclid has started its operation with successful SN discoveries <cit.>. Euclid is expected to discover dozens of SLSNe and PISNe up to z∼ 4 <cit.>. The Nancy Grace Roman Space Telescope, which is currently planned to be launched in 2026, can realize time domain surveys that allow us to discover SLSNe beyond z∼ 6 <cit.>. While its field of view is small, the James Webb Space Telescope may also potentially discover high-redshift SLSNe <cit.>. SLSNe at high redshifts may be used for distance measurements <cit.> as well as light sources to explore the interstellar media in the distant galaxies that hosted SLSNe <cit.>. [Acknowledgments] TJM is supported by the Grants-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (JP24K00682, JP24H01824, JP21H04997, JP24H00002, JP24H00027, JP24K00668) and the Australian Research Council (ARC) through the ARC's Discovery Projects funding scheme (project DP240101786).
http://arxiv.org/abs/2407.12932v1
20240717180506
Efficient and accurate force replay in cosmological-baryonic simulations
[ "Arpit Arora", "Robyn Sanderson", "Christopher Regan", "Nicolás Garavito-Camargo", "Emily Bregou", "Nondh Panithanpaisal", "Andrew Wetzel", "Emily C. Cunningham", "Sarah R. Loebman", "Adriana Dropulic", "Nora Shipp" ]
astro-ph.GA
[ "astro-ph.GA" ]
Corresponding author: Arpit Arora (arora125@sas.upenn.edu)
Arpit Arora (ORCID 0000-0002-8354-7356)
Robyn Sanderson (ORCID 0000-0003-3939-3297)
Nicolás Garavito-Camargo (ORCID 0000-0001-7107-1744)
Emily Bregou (ORCID 0000-0003-3792-8665)
Nondh Panithanpaisal (ORCID 0000-0001-5214-8822), Carnegie Observatories, 813 Santa Barbara St, Pasadena, CA 91101, USA; TAPIR, California Institute of Technology, Pasadena, CA 91125, USA
Andrew Wetzel (ORCID 0000-0003-0603-8942), Department of Physics & Astronomy, University of California, Davis, CA 95616, USA
Emily C. Cunningham (ORCID 0000-0002-6993-0826), NASA Hubble Fellow, Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY 10027, USA
Sarah R. Loebman (ORCID 0000-0003-3217-5967), Department of Physics, University of California, Merced, 5200 Lake Road, Merced, CA 95343, USA
Adriana Dropulic (ORCID 0000-0002-7352-6252), Department of Physics, Princeton University, Princeton, NJ 08544, USA
Nora Shipp (ORCID 0000-0003-2497-091X), Department of Astronomy, University of Washington, Seattle, WA 98195, USA; McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
§ ABSTRACT We construct time-evolving gravitational potential models for a Milky Way-mass galaxy from the FIRE-2 suite of cosmological-baryonic simulations using basis function expansions. These models capture the angular variation with spherical harmonics for the halo and azimuthal harmonics for the disk, and the radial or meridional plane variation with splines. We fit low-order expansions (4 angular/harmonic terms) to the galaxy's potential for each snapshot, spaced roughly 25 Myr apart, over the last 4 Gyr of its evolution, then extract the forces at discrete times and interpolate them between adjacent snapshots for forward orbit integration. Our method reconstructs the forces felt by simulation particles with high fidelity, with 95% of both stars and dark matter, outside of self-gravitating subhalos, exhibiting errors ≤4% in both the disk and the halo. Imposing symmetry on the model systematically increases these errors, particularly for disk particles, which show greater sensitivity to imposed symmetries. The majority of orbits recovered using the models exhibit positional errors ≤10% for 2-3 orbital periods, with higher errors for orbits that spend more time near the galactic center. Approximate integrals of motion are retrieved with high accuracy even with a larger potential sampling interval of 200 Myr. After 4 Gyr of integration, 43% and 70% of orbits have total energy and angular momentum errors within 10%, respectively. Consequently, there is higher reliability in orbital shape parameters such as pericenters and apocenters, with errors ∼10% even after multiple orbital periods. These techniques have diverse applications, including studying satellite disruption in cosmological contexts. § INTRODUCTION In the Cold Dark Matter (CDM) paradigm, dark matter (DM) halos grow hierarchically by accreting mass inside the cosmic web. The structure of these DM halos in CDM has been found to follow `universal' power law density and potential profiles in DM-only simulations <cit.>, with some variations in baryonic simulations <cit.>. These halo profiles have enabled the study of the properties of many physical phenomena in the Universe, such as gravitational lenses and the internal dynamics of stars within a halo. As the halos assemble over time, their internal structure evolves, such as their concentration <cit.> and shape <cit.>. 
Furthermore, the evolution of the density profile is even more complex in simulations that include baryonic matter, as baryonic processes such as gas cooling, star formation, and feedback mechanisms can effectively add and redistribute the mass in the system <cit.>. Additionally, halos can undergo significant satellite mergers, such as those involving the LMC <cit.> and progenitor of the Sagittarius stream <cit.> in the Milky Way (MW), which break symmetry and induce disequilibrium in both the halo <cit.> and the disk <cit.>. Traditional orbit modeling techniques utilize static, symmetric halo models, which provide a simplified and computationally efficient means of analyzing orbits. Generally, they decompose the MW into three or more components – the bulge, the disk, and the halo – and model the contribution of each component separately. The bulge and the halo are generally represented by the Navarro–Frenk–White potential model (spherical or flattened) <cit.>, whereas the Miyamoto-Nagai potential model is commonly assumed for the disk <cit.>. The parameters of these potential models are then fit to the available dynamical data of the MW <cit.> and used to backward integrate the orbits of dwarf galaxies <cit.>, globular clusters <cit.>, or stellar streams <cit.> using commonly available galactic dynamics tool such as <cit.>, <cit.>, and <cit.>. Sometimes, these models include mass growth of the halo <cit.> and specific effects of perturbing bodies, such as the LMC <cit.>. Recent studies have highlighted the limitations of static models in recovering orbital parameters within acceptable errors. By comparing integrated orbits with exact orbits in Milky Way-mass cosmological simulations <cit.>, and to time-evolving potential models <cit.>, researchers have observed significant discrepancies. <cit.> noted an error of roughly 80% for orbit shape parameters such as minimum pericenter and apocenter distances between recovered and true orbits of satellite dwarf galaxies integrated in a time-static model. Similarly, <cit.> found high errors in their integration scheme, which accounted for the mass growth of the potential but maintained a fixed shape. They quantified that the orbital parameter errors were comparable to those caused by a 30% uncertainty in the host mass. Additionally, they observed that modeling a recent massive accretion event, such as the LMC, using a combination of two spherical parametric potentials, led to substantial errors in the recovered orbital parameters. These findings emphasize that capturing the time-dependent evolution of halo and its response to massive mergers is particularly important when reconstructing the dynamics of the objects that reside in them, such as satellite galaxies, globular clusters, stellar streams, and stars <cit.>. The time-dependent evolution of the halo structure can be captured by using Basis Function Expansions (BFE) models, which can accurately describe any arbitrary density field <cit.>. <cit.> showed that by applying a Hernquist BFE (often referred to in the literature as a self-consistent field expansion) to every snapshot, spaced roughly 25 Myr, from a cosmological simulation of a DM halo and then interpolating the coefficients of the expansion in time allows substantial improvement in the orbital properties of a halo. 
Similarly, <cit.> showed high-fidelity orbit reconstruction in DM-only cosmological simulations of MW mass galaxies using a compact BFE representation for the time-evolving halo potential, employing spherical harmonics for angular variation and bi-orthonormal basis functions or splines for radial variation. In this paper, we extend this methodology to model fully zoomed baryonic-cosmological simulations of MW mass galaxies from the Latte suite of the FIRE-2 project <cit.>. Unlike <cit.>, where only DM was considered, our simulations include stars and gas, necessitating adaptation of the BFE method with an azimuthal harmonic expansion to account for the baryonic component's flattened shape. Additionally, we utilize a fiducial temporal cadence with snapshots spaced at approximately 25 Myr intervals, and explore the effect of larger sampling intervals (up to 500 Myr) on the quality of recovered orbits. We demonstrate that sufficiently high fidelity halo orbit reconstruction can be achieved with our fiducial cadence, even though the potential in the inner regions can change much more rapidly. We assess the efficacy of this modified potential model in reconstructing orbits at a range of radii. We fit BFE to the potential at a series of discrete snapshots. Then, we approximate the time dependence by calculating a force/acceleration on a star at a given point in time as a linear interpolation of the forces in the two time-adjacent snapshots for orbit recovery. The paper is organized as follows: In Sec. <ref>, we provide a brief overview of the simulation (<ref>), potential modeling techniques (<ref>), constraints on recovered particle forces in the potential model (<ref>), and describe our sample of selected halo stars for orbit integration (<ref>). In Sec. <ref>, we statistically quantify the quality of reconstructed orbits based on recovered 3D positions (<ref>), approximate integrals of motion, and orbital parameters such as pericenter and apocenter distances (<ref>). We also evaluate the dependence of orbit quality on the sampling interval for potential models (<ref>). In Sec. <ref>, we demonstrate an application of the model by simulating the tidal disruption of a dwarf satellite. We discuss our findings and conclusions in Sec. <ref>. § METHODS In this section, we detail our approach to reconstructing halo star orbits in a simulation of a MW-mass galaxy. We describe the galaxy simulation used (Sec. <ref>), the BFE-based potential model fits to the simulation (Sec. <ref>) and their force reconstruction for a sample of DM and star particles from the parent halo at present day (Sec. <ref>). We also describe how we select stellar orbits from the parent halo for reconstruction and integrate them in a cosmological setting (Sec. <ref>). §.§ Simulations and coordinate system We utilize a cosmological zoomed-in baryonic simulation of MW-mass galaxies from the Latte suite <cit.> of the Feedback In Realistic Environments (FIRE) project.[This simulation is publicly available <cit.> at <http://flathub.flatironinstitute.org/fire>.] This simulation employs the FIRE-2 physics model <cit.> and is consistent with the ΛCDM cosmology from Planck <cit.>: Ω_Λ=0.728, Ω_matter=0.272, Ω_baryon=0.0455, h=0.702, σ_8=0.807, and n_s=0.961 <cit.>. The galaxy has a total mass of about 1.2 × 10^12 M_⊙ with a total stellar mass of 7 × 10^10 M_⊙ at present day, and initial star and gas particle masses of m_b = 7100 M_⊙, and DM particle mass m_DM = 35000 M_⊙. 
The high particle resolution enables phase-space structures in the interstellar medium to be resolved, facilitating the collapse of gas into well-resolved giant molecular clouds. Snapshots are saved approximately every 25 Myr over the last 7 Gyr of the simulation; the most massive satellite merger in the last 6 Gyr has a total mass ratio of 1:45 relative to the MW <cit.>. <cit.> demonstrated that reconstructed orbits in a DM-only simulation, with potential models sampled over intervals of 10 Myr and 40 Myr, yielded essentially identical results. Our sampling interval is sufficiently high to ensure high-fidelity orbit reconstructions for halo star orbits. We also show that high fidelity can be achieved with a lower snapshot save rate of about 100 Myr. Additionally, the frequent snapshot intervals allow for the tracking of stellar orbits. These zoomed-in cosmological simulations are run in an arbitrary box frame with a non-zero total momentum. Initially, we recenter the simulation onto the host galaxy frame using the iterative shrinking spheres method <cit.> to find a center-of-mass (COM) at each time step using star particles. This halo center in comoving coordinates is defined as x⃗_COM(t) ≡r⃗_COM(t)/a(t), where a(t) is the cosmological scale factor and r⃗_COM(t) is the physical COM position. Subsequently, we rotate all snapshots of the simulation to align the galactic disk with the XY plane at the present day–the principal axes. We define the host-centered rotation in the principal axes as the galactocentric frame. <cit.> and <cit.> assessed the validity of this approach, considering the constancy of the disk's angular momentum over the past 7 Gyr of the simulations and the effectiveness of the potential modeling techniques in such systems. In this simulation, the disk plane rotates by about 20 degrees over 7 Gyr up to the present day. This fixed rotation approximation also eliminates non-inertial forces associated with time-varying rotation (see Appendix <ref>). While mergers with massive satellites such as the LMC <cit.> can affect the orientation of the disk <cit.>, our choice of simulation, which has a quiescent merger history over the last 6 Gyr, ensures that the disk's orientation remains stable. However, careful consideration in modeling the disk is required if the orientation changes rapidly (see Appendix <ref>). Force reconstruction for orbit integration must consider the acceleration of the comoving galactic center as a fictitious force within the galactocentric frame. Incorporating the non-inertial frame of the expanding cosmological background introduces another force. The potential Φ(r⃗,t) is modeled in the physical coordinates r⃗ of the non-inertial frame centered instantaneously on the galaxy. However, we define our equations of motion for the comoving positions x⃗≡r⃗/a(t) and the peculiar velocities u⃗≡ a(t) ẋ⃗̇. In these coordinates, the force acting on each particle for a fixed orientation of the disk is computed as (see Appendix <ref>): du⃗/dt = -∇⃗_r⃗Φ(r⃗,t) - ȧ(t)/a(t) u⃗ - du⃗_COM/dt It is useful to evaluate the relative contributions of the terms in Equation <ref>. For a typical system, we have orbital velocities ∼ 100s of km s^-1 and a Hubble parameter ȧ/a ∼ 1/14000 Myr^-1, so the second term on the right is of order 0.01 km s^-1 Myr^-1. 
Meanwhile, a typical value for ∇⃗Φ at 30 kpc is roughly 1.5 km s^-1 Myr^-1, and u̇_COM is typically around 0.3-1 km s^-1 Myr^-1 <cit.>, so the contribution of the expanding background is relatively small for an isolated MW-mass halo, although it can be an order of magnitude larger for systems evolving in a Local Group environment, such as the MW-M31 system. Following the approach of <cit.>, we approximate u̇⃗̇_COM using second derivatives of smooth cubic spline fits to the halo's COM trajectory in comoving Cartesian coordinates. §.§ Potential Models We employ the time-evolving low-order multipole potential (TEMP) model introduced in <cit.>, fitted using a combination of BFE on the host density at each time step without assuming any symmetry conditions, constructed using <cit.>. These expansions are formulated as separable functions of the 3D radial distance (r) or the cylindrical radius and height (R and Z), and the angular dependence, represented by orthogonal functions. To model the DM halo and hot gas (T_gas≥ 10^4.5 K), we use a spherical harmonic expansion in spherical coordinates to model the angular dependence (θ, ϕ), while the radial dependence (r) in the density is captured by evaluating the expansion coefficients on 25 logarithmically-spaced 1D radial grid nodes, interpolated using quintic splines <cit.>. The potential is written as Φ_halo(r, θ, ϕ) = ∑_ℓ=0^ℓ_max∑_m=-ℓ^ℓΦ_ℓ m(r) Y_ℓ^m (θ, ϕ) We model the flattened stellar and cold gas (T_gas≤ 10^4.5 K) component using a Fourier harmonic expansion in cylindrical coordinates (R, ϕ, Z), where the expansion coefficients are computed on a 2D meridional plane (R, Z) with 25 and 40 grid nodes in R and Z, respectively. The potential is written as Φ_disk (R, ϕ, Z) = ∑_m=0^m_maxΦ_m (R, Z) e^i m ϕ The methodology for fitting these models and computing the expansion coefficients (Φ_ℓ m(r) and Φ_m (R, Z)) is detailed in <cit.>, and specifically applied to our simulations in <cit.>. These BFE adequately capture deformations in the disk <cit.> and halo <cit.> resulting from galactic evolution and satellite mergers. BFE have proven effective in reproducing orbits, even in the presence of massive satellites, in both idealized <cit.> and cosmological simulations <cit.>. §.§ Force reconstruction Fig. <ref> shows the position-space distribution of the DM particles within 50 kpc of the galactic center at the present day, where colors represent the force residual between the reconstructed and true force, defined as F⃗_res = F⃗_reconstructed/F⃗_true -1 in each direction. 
Despite utilizing a low pole order of 4, the forces are reconstructed with under 10% error for all the particles, with over 95% of particles exhibiting less than 4% error in reconstruction. Notably, discrepancies and higher errors (≥ 5%) are predominantly observed in the baryonic disk, stemming from structures that are challenging to model with a low-order expansion, like spiral arms. With our focus on reproducing halo orbits in this work, we anticipate these errors in the inner regions of the galaxy to be negligible for most of our reconstructions. However, these errors can introduce biases in the reconstructed orbits, particularly for stars that spend a majority of their orbit in the inner regions (within 15 kpc of the galactic center). Moreover, the inner regions are susceptible to much more rapid changes in the potential compared to the halo, which our 25 Myr temporal cadence does not adequately capture. While limitations imposed by temporal cadence are noteworthy, we consider the bias in reconstructed forces to be a far more critical issue. Even higher azimuthal harmonic expansions, such as increasing to a pole order of 10, fail to address this primary concern. Table <ref> lists the mean (μ) and standard deviation (σ) of force residuals for DM particles within 50 kpc from the galactic center across Cartesian directions (𝐅_𝐫𝐞𝐬, 𝐗, 𝐅_𝐫𝐞𝐬, 𝐘 and 𝐅_𝐫𝐞𝐬, 𝐙), along with the residual on the absolute force magnitude (𝐅_𝐫𝐞𝐬, 𝐭𝐨𝐭), for increasing pole orders up to 10. The μ and σ remain consistent at the 0.01% level across increasing pole orders, suggesting negligible improvement in the residual distribution. Given the computational complexity of higher-order terms in the azimuthal harmonic expansion and the lack of significant enhancements in the particle-by-particle forces, we adhere to maintaining (ℓ, m)_max = 4 for our TEMP model. This decision is also supported by previous findings, as illustrated in Fig. 7 of <cit.>, where only minor improvement (about ∼ 0.1%) in orbit reconstruction is observed for pole orders greater than 4. §.§.§ Reconstructions under imposed model symmetry We explore the impact of imposing different symmetry conditions on our TEMP model, with (ℓ, m)_max = 4 on the reconstructed force for both the DM and star particles. Such symmetry assumptions are widely used to fit parameterized potential models to the observational data. Additionally, imposing certain symmetries allows us to compute crucial integrals of motion, such as actions <cit.>. These symmetry conditions are imposed by setting certain coefficients in the spherical (ℓ, m) and azimuthal harmonic (m) expansions to zero, effectively reducing the number of terms in the model. Table <ref> specifies the poles that remain non-zero under the imposed symmetries. We explore three different symmetries for both the halo and the disk: no symmetry (n), axisymmetry (a), and triaxial symmetry (t), excluding the trivial spherical symmetry for the halo. Exploring all possible combinations of imposed symmetries across the halo and disk results in a total of nine different setups. We perform force reconstruction for all the DM particles and stars within the parent halo within 50 kpc of the galactic center at present day. The DM particles are primarily located in the halo and stars are predominant in the disk. The residual distributions obtained from these reconstructions serve as a proxy for the adequacy of our halo and disk models, demonstrating how specific symmetries impact the overall model fidelity. The Violin plots in Fig. 
<ref> show the distributions of the residual between reconstructed and true force magnitudes for both DM (green) and stars (orange) for the nine models. Each model is denoted by a combination of symmetry conditions on the halo and the disk, as indicated on the x-axis in the format (imposed halo symmetry, imposed disk symmetry), with reference to Table <ref>. The dotted lines represent the 25th and 75th percentiles of the distributions, while the dashed line represents the median. Each violin plot details the error distribution, where the majority of the particles consistently exhibit errors within 10% across all models, with the best performance typically observed in the no-symmetry model setup. Specifically, assuming no symmetry on the halo model with any symmetry assumption on the disk, or conversely, no symmetry on the disk with any constraints on the halo model, yields satisfactory results, with most residuals within 2% for the DM and star forces, respectively. Imposing symmetry conditions generally increases the σ and μ of the reconstructed forces, with the increase depending on the specific symmetry applied and the particle type. Symmetry conditions on the halo, such as axisymmetry or triaxiality, continue to produce satisfactory results for DM particles, with errors generally remaining within the 5% range. Intriguingly, ensuring accurate modeling of the disk without imposing any symmetry constraints often leads to improved accuracy in the reconstructed forces even within the halo region. However, imposing symmetries on the disk leads to higher errors in the reconstructed forces, particularly exaggerating the tails of these distributions and producing a higher occurrence of errors exceeding 10%. In Appendix <ref> (see Fig. <ref>), we present the residual distributions of forces along each Cartesian axis, exhibiting similar trends as observed in Fig. <ref>. In summary, our findings highlight that for modeling halo orbits, strong constraints can be imposed on the disk potential while still achieving accurate orbital reconstructions, even under tight symmetry conditions like axisymmetry. In contrast, for disk orbits, it is imperative to accurately model the disk, while simpler models for the halo can be sufficient. Furthermore, symmetry assumptions could introduce significant biases, particularly when computing integrals of motion for disk orbits. §.§ Selection of stars and orbit integration We select a sample of about 3000 stars with tracked halo-like orbits and integrate them forward in time for approximately 4 Gyr in the TEMP model with no symmetries imposed, described in Sec. <ref>. Stars in our sample are: * not associated with any halo except the MW at the selection time (T ≈ 9 Gyr) and have orbital periods less than 3 Gyr. * formed at least 30 kpc away from the galactic center, i.e., not in the MW disk. * have galactocentric distances never exceeding 200 kpc, and within 100 kpc at the present day. To identify stars not associated with any subhalos, we use the ROCKSTAR halo finder <cit.> to identify DM subhalos and assign stars associated with each subhalo <cit.>. We note that our star selection process may include a few stars (approximately 5%) showing indications of being bound to dwarf satellites and/or stream progenitors based on their phase-space distribution and angular momentum along the Z axis. This is a function of the tolerances chosen for determining the boundedness of stars to subhalos; additionally, particles can be energetically unbound but still associated with subhalos. 
Despite this, we choose to retain these stars in our analysis, hereby referred to as bound stars, acknowledging a similar challenge encountered in observational data where determining the gravitational binding of an object can be difficult. These bound stars are highlighted in gray whenever shown, and any reported statistics in this study exclude their contributions. We employ a leapfrog algorithm <cit.> to integrate sample orbits over the last 4 Gyr of the simulation using our time-dependent potential model. The models are saved at discrete time points, so we calculate the force experienced by a test particle at any given time using linear interpolation between forces from adjacent snapshots. Each particle orbit employs a time-stepping scheme tailored to its trajectory. Initially with a time step of Δ t=1 Myr, the step is scaled such that changes in velocity at each step are small relative to the velocity magnitude. This ensures resolution of pericenters for each orbit. Fig. <ref> plots the real (solid white) and reconstructed (dashed cyan) trajectory of a sample star integrated for approximately 3.8 Gyr to the present day, in the XZ plane. Both the trajectories start from the same point and the final positions at present day are marked with a star. Overall, the orbit exhibits a close match, indicating a strong agreement between the real and reconstructed paths. § ACCURACY OF RECONSTRUCTED ORBITS In this section, we quantify the accuracy of reconstructed orbits for the sample of ∼3000 stars described in Sec. <ref> with our TEMP model consisting of a spherical harmonic expansion for the DM halo and azimuthal expansion for the galactic disk (see Sec. <ref>). We quantify the spatial and temporal dependence of relative error between true and reconstructed positions for each star (Sec. <ref>). We also compare integrals of motion such as the total energy and angular momentum (Sec. <ref>), which are robust proxies for quality of our reconstructed orbits and statistically quantify “failure” modes based on a 100% error in recovering total angular momentum for an orbit. §.§ Relative position error metric We evaluate the TEMP model's performance in reconstructing orbits by measuring the relative position error for each orbit trajectory at time t as Δ r/r(t) = ||r⃗_reconstructed(t) - r⃗_true(t)||/||r⃗_true(t)||. Here, r⃗_true(t) and r⃗_reconstructed(t) represent the true and reconstructed 3D positions of each particle at time t in physical coordinates. This metric quantifies trajectory errors for each particle, explicitly considering orbit phase. While <cit.> employ a similar metric, they base it on the error relative to the time-averaged radius of the orbit. This measure tends to underestimate errors at pericenter, where a small phase error can lead to a large position or velocity error, and overestimate them at apocenter, where a large error in phase maps to a much smaller position or velocity error. Our metric directly compares reconstructed and true positions at each time step, offering a more accurate evaluation of trajectory fidelity throughout the orbit. We report this metric at the final time step as Δ r/r. Additionally, we normalize times based on the orbit period, computed using a fast Fourier transform of the particle's true trajectory to identify the dominant frequency. Fig. <ref> shows a selection of 5 other randomly chosen orbits, arranged by increasing orbital period. 
The trajectories are presented in Cartesian planes across the first three columns, with the final column illustrating the distance from the center as a function of time. Upon visual examination, the majority of orbits are reproduced with a reasonable level of accuracy. However, orbits with multiple orbital periods and an average orbital distance within 20 kpc exhibit higher errors, exceeding 10%. Additionally, deviations in other trajectories may result from interactions with substructures that are not adequately resolved with our low order expansion (ℓ, m)_max = 4. It has been suggested that subhalos on the order of a few kiloparsecs in size would require a significantly higher harmonic order <cit.>. §.§.§ Phase-space dependence at the final timestep Fig. <ref> illustrates the phase-space distribution of selected stars for integration (column 1), depicting their distributions in the XZ plane (row 1) and the total distance from the center and radial velocity plane (row 2) in the live simulation (column 2) and their reconstructed distributions (column 3) using the TEMP model after 3.8 Gyr of orbit integration. The last column exhibits the spatial dependence in the error metric, representing the median value of the relative orbit error (Δ r/r) defined by eq. <ref> at the final time step for the stars in the initial sample. In general, the orbit reconstruction demonstrates consistency, with errors typically below 15% after 3.8 Gyr of integration, and no significant angular dependence at different distances in the error metric. However, there is a systematic issue with reconstruction accuracy in the inner regions of the galaxy (within 15 kpc), likely due to the stronger influence of the disk, which is consistent with the errors in reconstructed forces in these regions (Fig. <ref>). Additionally, no significant trends are observed with radial velocity. Furthermore, it is important to note the presence of a small satellite in our sample, approximately 40 kpc away with a radial velocity of 100 km s^-1. This satellite, while not associated with any subhalos in the halo catalog, appears to have bound stars in the sample. Additionally, an unbound stellar stream with a small bound progenitor is identified in our sample. As the simulation progresses, the satellite undergoes tidal disruption and phase mixing, resulting in a median orbit error in our reconstruction of roughly order 1, attributed to the TEMP model's ignorance of self-gravity in the system. However, high-fidelity orbit reconstruction is observed for streams with errors below 5% along the stream track, except for the bound progenitor, which again exhibits an error of order 1. This emphasizes the critical role of self-gravity and the stripping times of stars. Ignoring these factors results in biased orbits. Fig. <ref> plots Δ r/r at the final time step of integration as a function of integration time in periods passed for each orbit, color-coded by the true average distance from the host during the integration time. Also, the bound stars are shown with gray markers and majority of them show large errors. Approximately 70% of orbits retain phase-space information after the integration time, with Δ r/r≤ 1 . Comparing exact errors, only 13% of the recovered orbits have errors less than 10% (see Table. <ref> for more statistics). Most of the orbits with higher relative errors (Δ r/r≥ 1) are integrated for a prolonged period (over 10 periods) and are situated close to the center of the host (within 15 kpc). 
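Before turning to the statistics, the forward-integration machinery described above (kick-drift-kick leapfrog with accelerations linearly interpolated in time between the potential models of the two bracketing snapshots, together with the phase-aware error metric Δ r/r(t)) can be sketched as follows. The acceleration callables are placeholders standing in for the fitted BFE models, and the fixed time step omits the adaptive refinement used for the actual orbits:

import numpy as np

def interp_accel(x, t, t0, t1, accel0, accel1):
    """Linear-in-time interpolation of the acceleration between the potential
    models fitted at the two snapshots (t0, t1) bracketing time t."""
    w = (t - t0) / (t1 - t0)
    return (1.0 - w) * accel0(x) + w * accel1(x)

def leapfrog(x, v, t, t_end, dt, t0, t1, accel0, accel1):
    """Kick-drift-kick leapfrog in the interpolated, time-evolving potential.
    accel0/accel1 are placeholder callables returning the acceleration at
    position x for the bracketing snapshot models."""
    xs, ts = [x.copy()], [t]
    a = interp_accel(x, t, t0, t1, accel0, accel1)
    while t < t_end:
        v_half = v + 0.5 * dt * a          # half kick
        x = x + dt * v_half                # drift
        t = t + dt
        a = interp_accel(x, t, t0, t1, accel0, accel1)
        v = v_half + 0.5 * dt * a          # half kick
        xs.append(x.copy())
        ts.append(t)
    return np.array(ts), np.array(xs), v

def relative_position_error(x_rec, x_true):
    """Phase-aware metric Delta r / r (t): error relative to the true position
    at each time step, not to a time-averaged orbital radius."""
    return (np.linalg.norm(x_rec - x_true, axis=-1)
            / np.linalg.norm(x_true, axis=-1))

In a full reconstruction one steps through successive snapshot pairs and rescales dt so that the velocity change per step stays small relative to the velocity magnitude, as described above.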
§.§.§ Temporal dependence in the error metric In addition to the error metric at the final time step, it's important to compare how the metric evolves over time to test the stability and reliability of our reconstruction approach at different time steps. This temporal perspective allows us to evaluate the long-term behavior of orbits and identify any potential sources of bias or inaccuracies that may arise over extended periods of integration. Fig. <ref> depicts the temporal evolution of relative position error (Δ r/r (t)) as a function of total number of periods passed for each orbit on the left, organized into rows based on average orbital distance from the center. The top row includes orbits between 0-30 kpc, the middle row 30-60 kpc, and the bottom row ≥ 60 kpc. The histograms on the right show the Δ r/r (t) for all orbits at each time step, color-coded by the distance cuts. The error metric trajectories and distributions for bound stars are also shown in gray. Most of the orbital errors (approximately 90%) remain within 10% for up to 2 periods across all distance cuts, indicating consistent performance of our method in capturing orbital dynamics. Notably, instances where Δ r/r (t) ≥ 1 predominantly occur for stars closer to the center after 5-6 periods have passed. Additionally, about 80% of the outer orbits (average orbital distance ≥ 30 kpc) exhibit Δ r/r (t) ≤ 0.1. Other significant errors are observed in orbits associated with the bound satellite and stream progenitor (marked with gray lines) identified in the sample (refer to Fig. <ref>). Interestingly a notable trend emerges wherein orbits approaching their pericentric passage exhibit higher Δ r/r (t), whereas lower errors are observed at apocenter. This trend is particularly prominent in the outermost halo (≥ 60 kpc), where the Δ r/r (t) trajectory predominantly displays a triangular wave pattern. The error peaks near pericenter and valleys at apocenter due to the smallest division factor in our metric at pericenter, amplifying the Δ r/r (t). Additionally, the same phase error in the orbital plane magnifies the positional error into a larger discrepancy at pericenter compared to apocenter. Moreover, the faster tangential velocity at pericenter contributes to increased velocity errors, while slower tangential velocities at apocenter allow the phase to synchronize between reconstruction and true trajectories. <cit.> also observed similar trends, with increased errors in pericenter reconstruction of satellite orbits, where they compared the true and reconstructed pericenter position neglecting the phase error. While higher positional errors occur at pericentric passage, in a summary statistic, this discrepancy may not be significant. Most orbits spend more time near apocenter, so for an arbitrary selected final time step, most stars will have lower positional errors. However, this discrepancy would bias positional errors if one is examining only resonant orbits that reach pericenter at the same time. §.§.§ Pericenter and apocenter comparison <cit.> used the MW-mass galaxies from the Latte suite of baryonic-cosmological simulations <cit.> and beginning at present day, backward integrated the center-of-mass (COM) positions and velocities of luminous satellites orbiting the main host. They employed a static MW-mass potential and ignored dynamical friction. They showed that recovering the first pericentric and apocentric distance through orbit reconstruction has a 20-40% uncertainty with higher uncertainties in pericentric distance. 
Similarly, <cit.> used a controlled MW host with no massive mergers from the Elvis suite of DM-only simulations <cit.> and backward integrated subhalo COM positions and velocities, accounting for the mass growth of the main halo while keeping the shape of the potential fixed. They also used a prescription for the dynamical friction experienced by the satellites. They reported the fraction of satellite orbits that have less than 30% error in the pericentric and apocentric distances. They found that 70% of the satellites were below this threshold for their first pericentric distances, and only 55% for subsequent pericentric distances, while 90% of first apocenters and 75% of subsequent apocenters had errors below 30%. Motivated by these findings, we backtrack, from the present day to 4 Gyr ago, the COM positions and velocities of luminous satellites (M_⋆ > 0) that are within the virial radius of the main halo and have a total mass less than 10^10 at the present day. We then forward integrate to the present day in our TEMP model, ignoring dynamical friction (similar to <cit.>). Many of these satellites were much more massive 4 Gyr ago. Fig. <ref> compares the reconstructed and true pericenter distances (left column) and apocenter distances (right column) for the first (top row) and last (bottom row) pericentric passages, respectively. Unbound star (solid circles) and luminous satellite (diamonds) orbits are color-coded based on their orbital period, with gray markers indicating bound stars within our selection sample. The inset in each panel shows the distribution of the percentage error in each reconstructed property. At first passage, most orbits closely align with the 1-1 line in both pericenter and apocenter, exhibiting errors within 10% (insets in top row). However, higher errors are typically noticeable nearer to the galactic center, along with errors originating from the satellite and stream progenitor in both pericenter and apocenter distances (gray scatter points), as highlighted in Fig. <ref>, with notable discrepancies particularly evident at pericenter and apocenter distances around 20 kpc and 80 kpc, respectively. As the simulation progresses, pericenter distances exhibit greater variability compared to apocenter distances by the final passage (insets in bottom row). Nevertheless, errors for orbits with pericenters beyond 30 kpc remain within the 10% threshold. Similar errors in pericenter and apocenter distances are noted in <cit.>. Accurate reconstruction of pericenter and apocenter distances is crucial, as they are fundamental properties widely used in studying small-scale structure formation and disruption within a galaxy <cit.>. Luminous satellite orbits follow similar trends to unbound stars. We recover 85% and 98% of pericentric and apocentric distances, respectively, for luminous satellites to errors within 10%. These recovery rates are overall better than those of static potential models that only account for the mass growth. Subsequent pericenters are harder to recover, but we still achieve 70% recovery of subsequent apocenters within 10%. The reconstructed satellite orbits overestimate the last pericenter due to the absence of a dynamical friction prescription in our model. In summary, our reconstruction method demonstrates overall success, maintaining errors below 15% after 3.8 Gyr of integration, with approximately 70% of orbits exhibiting a relative error below 1. 
However, accuracy issues arise within the inner galaxy regions, likely due to the disk's stronger influence and challenges in reconstructing orbits of bound substructures. Notably, the stable, quiescent merger history in this simulation ensures that the potential undergoes minimal non-adiabatic changes, contributing to the robustness of the reconstruction process. The temporal evolution analysis reveals that most orbital errors remain within 10% for up to 2 periods across all regions, but deteriorate after 5-6 periods for orbits within 30 kpc. We notably observe higher positional errors at pericentric passages, consistent with prior studies. Our reconstruction method generally accurately reproduces orbits up to the initial pericentric and apocentric passages. However, errors in reconstructed pericenter tend to increase for orbits with pericentric distances closer to the galactic center, while the apocenters are generally recovered more accurately <cit.>. §.§ Reproducing integrals of motion To evaluate the quality of our orbit reconstructions, we analyze approximate integrals of motion: energy and total angular momentum (J_total). While energy is not strictly conserved due to the time-dependent potential and tends to grow adiabatically, the total angular momentum remains approximately conserved over time <cit.>. The position-based error, particularly dependent on a particle's radial distance from the halo center, can vary rapidly along an orbit <cit.>. Fig. <ref> plots the comparison between reconstructed and true final energy (left panel) and final total angular momentum (J_total) (right panel). The top left insets in both panels show the distributions of initial (green), final from simulation (black), and final from reconstruction (blue) for both energy and J_total. Notably, J_total exhibits less scatter and tighter 1-to-1 correspondence with a correlation coefficient of almost 0.99 compared to energy, which has a correlation coefficient 0.71. We find that 87% of orbits have final energy errors below 100%, with 75% below 50%, 61% below 25%, and 43% below 10% (see Table <ref>). In contrast, total angular momentum (J_total) errors are lower, with 98% of orbits having errors below 100%, 93% below 50%, 85% below 25%, and 70% below 10%. This indicates that J_total is more robustly conserved, reflecting the relative stability of angular momentum in our dynamic models. The histograms show the presence of bound substructure in the initial energy distribution (over 200 particles in a single bin), which affects the conservation accuracy. Our correlation between true and reconstructed final energy is slightly weaker compared to that reported in <cit.>. This disparity can be attributed to the presence of a live baryonic disk actively forming stars, coupled with realistic feedback models, resulting in a larger non-conservation of energy. We perform a Kolmogorov-Smirnov (KS) test to assess whether the reconstructed and true distributions of final energy and J_total come from the same distribution. The resulting p-value for energy and J_total is 0.21 and 0.35, suggesting no significant differences between either of the distributions. Additionally, the correlation coefficient between true and reconstructed angular momentum along the Z-axis is 0.995, with a KS test p-value of 0.5, further supporting the consistency between the reconstructed and true distributions. 
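A minimal sketch of how these comparisons can be assembled is given below; it is not the paper's pipeline. We assume energies are computed as E = |v|^2/2 + Φ(r) with Φ supplied by the fitted potential model (a generic callable here), J_total = |r × v|, and we use NumPy's correlation coefficient and SciPy's two-sample KS test; all function and variable names are ours.

```python
import numpy as np
from scipy.stats import ks_2samp

def total_angular_momentum(pos, vel):
    """|J_total| = |r x v| per star; pos and vel have shape (n_stars, 3)."""
    return np.linalg.norm(np.cross(pos, vel), axis=1)

def total_energy(pos, vel, potential):
    """E = |v|^2/2 + Phi(r) per star; `potential` is any callable returning Phi
    at the given positions (a stand-in here for the fitted TEMP model)."""
    return 0.5 * np.sum(vel**2, axis=1) + potential(pos)

def compare_integrals(true_w, rec_w, potential):
    """Correlation coefficient, two-sample KS p-value and fraction of relative
    errors below 10% for E and J_total; true_w and rec_w are dicts holding the
    'pos' and 'vel' arrays of the final phase-space coordinates."""
    quantities = {
        "E": lambda w: total_energy(w["pos"], w["vel"], potential),
        "J_total": lambda w: total_angular_momentum(w["pos"], w["vel"]),
    }
    diagnostics = {}
    for name, q in quantities.items():
        q_true, q_rec = q(true_w), q(rec_w)
        rel_err = np.abs(q_rec - q_true) / np.abs(q_true)
        diagnostics[name] = {
            "corr": float(np.corrcoef(q_true, q_rec)[0, 1]),
            "ks_pvalue": float(ks_2samp(q_true, q_rec).pvalue),
            "frac_below_10pct": float(np.mean(rel_err < 0.1)),
        }
    return diagnostics
```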
§.§ Instantaneous failure based on total angular momentum The high fidelity of J_total in our orbit reconstructions, compared to reconstructed positional error (see Fig. <ref>) and energy (see Fig. <ref>) motivates establishing a criterion for instantaneous failure in orbit reconstruction. We propose defining failure time as the moment when the instantaneous error in J_total between reconstructed and true values exceeds 100%. This criterion provides a robust measure to pinpoint instances where orbit recovery becomes unattainable due to the complete loss of phase-space information <cit.>. Fig. <ref> illustrates failure time plotted against integration time in number of periods for orbits that fail based on the aforementioned criteria, with only approximately 10% of orbits failing. Surprisingly, the distribution of failure times seems to be independent of integration time. However, one might expect that orbits integrated for longer periods would exhibit a higher likelihood of failure. The top panel illustrates the distribution of integration times for orbits that failed (red) and those that never fail (green), with each bin scaled to represent the fraction of stars. Remarkably, the success rate remains consistently high at around 90%, while the failure rate remains low at approximately 10%, regardless of integration time. The side panel plots the distribution of failure times (red), integration times for orbits that never fail (green), and all integration times (black), revealing no discernible pattern in failure time distribution. This observation suggests that neither longer nor shorter integration times significantly affect the likelihood of failure. The independence suggests failure to reconstruct a specific orbit is not due to time dependence of the global potential, but rather to more localized factors like subhalo interactions altering angular momentum or orbits passing through the baryonic disk. Prolonged integration spanning multiple periods can be effective in preserving angular momentum-dependent properties, such as the shape of orbits, despite the high errors in reconstructed positions. This underscores the robustness of orbit reconstruction techniques in capturing essential dynamical features over extended integration times. §.§ Dependence on the sampling interval The simulation snapshots in FIRE-2 simulations are saved rather frequently—approximately every 25 Myr—while this isn't usually the case for baryonic-cosmological simulation suites. To evaluate the dependence of orbit quality on the temporal cadence of the available snapshots, we re-integrate our sample of selected stars while sampling the potential model less frequently compared to our fiducial time interval of 25 Myr for approximately 4 Gyr (to present day). While one can integrate each orbit for a fixed number of orbital periods, which would decrease the overall errors, our fixed time approach is more representative of integration needs for a statistical ensemble of orbits. Fig. <ref> plots the fraction of stars with errors below specific error thresholds (different colors) in recovered properties: total angular momentum (J_tot), total energy (E_tot), and the positional error metric (Δ r/r from eq. <ref>) as a function of sampling interval. An eightfold increase in the snapshot spacing (to 200 Myr) does not significantly affect the recovery errors for J_tot and E_tot. 
The fraction of stars with well-recovered orbits only starts to decrease significantly at a sampling interval of 500 Myr, a twentyfold increase from our fiducial sampling. The positional error metric (Δ r/r) is more sensitive to the sampling interval. Stars show a noticeable deterioration in positional accuracy at a sampling interval of 100 Myr. This finding is consistent with DM-only results from <cit.>, although we caution that it depends strongly on the period of the orbits to be reconstructed. Our sample population consists of halo-like orbits with relatively long periods, while dynamical times in the disk and bulge of our simulations can be well below our snapshot frequency; we would expect only orbit-averaged properties to be stable in that case. Indeed, part of the reason for higher fractional errors in position is due to our choice of integrating over a fixed time—which could be a factor of 100 in orbital periods for different stars. For instance, a star in the outer halo with a period of 2 Gyr is only integrated for 2 periods, while a star in the inner region with a period of 0.2 Gyr is integrated for 20 periods. Moreover, due to expected density profiles, there are more stars with shorter periods. In fact, 60% of stars have orbital periods ≤ 0.8 Gyr (see Fig. <ref>). § SIMULATING STREAM FORMATION In this section, we demonstrate an application of the TEMP models: forming realistic stellar streams. Stellar streams are formed when stars in a globular cluster or a dwarf galaxy are tidally stripped by a more massive host galaxy <cit.>, making their structure and evolution highly sensitive to the host galaxy's potential and mass profiles <cit.>. Therefore, a successful potential model of the host should accurately replicate the observed evolution and structure of stellar streams. We use our TEMP model to reconstruct an example stellar stream with M_⋆ = 10^7 at present day formed by tidal disruption of a dwarf satellite, identified by <cit.> in . We begin tracking tidal disruption at the “stream formation time” for this progenitor galaxy, defined as in <cit.> to be the first time that the tidal deformation of the progenitor, measured by its principal axis ratio, exceeds a threshold value. For our example progenitor this occurs 6.5 Gyr after the Big Bang, which we will call T'=0 Gyr, representing the start of orbit integration. We assign unique stripping times (T_strip) to each star associated with the satellite by tracking when it crosses twice the time-evolving virial radius of the progenitor satellite: |r⃗_* - r⃗_sat| > 2R_200m^prog(t). Once a star reaches this distance, we consider it to be unaffected by the self gravity of its progenitor galaxy, which is not included in the TEMP model. We then begin orbit integration of the stripped star in the host potential up to the present day. The duration of orbit integration differs for each star, but does not exceed 7.2 Gyr for those stars that are assigned as stripped at or before T'=0 Gyr. Once we have our potential models, these integrations are computationally inexpensive, allowing us to resimulate tidal stream formation at high resolution in a few minutes, without the cost of re-running the entire simulation. Fig. 
<ref> shows the XZ plane (top half) and phase-space (radial velocity versus distance from the center, bottom half) distribution and temporal evolution (arranged in columns–increasing time from left to right) of this dwarf satellite in both the live simulation (first and third row) and TEMP model (second and fourth row) over a period of 7.2 Gyr to present day. The stars are color-coded by their stripping times from the progenitor starting from T'=0 Gyr. Simulated stream structure (position-space; top 2 rows) and kinematics (phase-space; bottom 2 rows) align closely with the real stream, demonstrating the model's effectiveness in modeling tidal disruption and evolution. However, minor discrepancies in both position-space and phase-space are noted after 5.8 Gyr near the outer tails (further than 60 kpc from the center). Similar features are noted for other streams across the Latte suite <cit.>. § SUMMARY AND DISCUSSION In this paper, we assess the effectiveness of the time-evolving potential (TEMP) model, fit to a zoomed baryonic-cosmological simulation of a MW-mass galaxy from the FIRE-2 suite <cit.>, introduced by <cit.> in recovering particle forces and halo star orbits for a ∼4 Gyr integration. The TEMP model incorporates a spherical harmonic expansion for the halo and azimuthal harmonic expansion for the disk, with a maximum pole order of 4. We recover individual particle forces to high accuracy (Sec. <ref>), with 68% of errors within 1% and about 95% of particles exhibiting less than 4% error in force reconstruction at the present day (Fig. <ref>). We observe negligible improvement in recovered forces with increasing pole orders beyond 4, as evident from the relatively constant means and standard deviations of the force error distribution (Table <ref>). Similar minor improvements for reconstructed orbits were noted for pole orders beyond 4 in <cit.> and other previous works, which used DM-only simulations. The largest force errors produced by the model are localized near the galactic center, and are due to complex small-scale structures, such as spiral arms in the baryonic disk components. Additionally, errors arise from interactions with a few subhalos that are not resolved in our smooth density field, yet still massive enough to affect orbits. These errors can bias predictions of orbits for stars that spend the majority of their time in the inner regions (≤ 10-15 kpc, ∼ 1-1.5× radius enclosing 90% stellar mass). However, they have minimal impact on halo-like orbits, as these stars move very quickly near pericenter. Consequently, the increased error in acceleration during the shorter time spent in the inner region has a negligible effect on their overall velocities. We also imposed various symmetry conditions on the BFE-based potential model at present day and evaluated the recovered forces. We show that for halo particles, imposing even the most restrictive symmetries, such as axisymmetry, on the disk potential still yields accurate force reconstructions. For disk orbits, however, it is crucial to accurately model the disk, while simpler halo models can suffice (see Fig. <ref>, and <ref>). Symmetry assumptions can introduce significant biases if applied without regard for orbit type, particularly in computing integrals of motion for disk orbits. Overall, we achieve high fidelity in orbit recovery using the TEMP model, demonstrating its effectiveness statistically (see Fig. <ref>). 
However, the fidelity in individual orbit reconstruction depends on the specific question being addressed. For example, accurately determining individual orbits in terms of exact positions, velocities, and times is most challenging. Using an error metric based on recovering exact positions, we show that 3D positions can still be recovered to within 10% accuracy over 2-3 orbital periods (see Fig. <ref>), though errors are higher for orbits closer to the galactic center. Notably, 70% of orbits have total positional errors below 100% after multiple periods (1–20) of integration (see Fig. <ref>), indicating that while some errors exist, a substantial portion of the orbits can maintain reasonably accurate positional information. Therefore, while single orbits may not hold much meaning due to inherent uncertainties, a statistical ensemble of orbits can still be effectively utilized to understand the overall dynamical behavior of the system (see Fig. <ref>). This method can be particularly successful for studying halo orbits and their precise positions, such as those of the Sagittarius dwarf satellite and its tidal stream, where the larger spatial extent and statistical sample allow for more accurate orbit recovery <cit.>. Conversely, orbits within the inner regions, such as those associated with the MW bar and disk, will be highly biased. Integrals of motion, such as total energy and angular momentum, can be recovered with much higher accuracy. We find that 87% of orbits have recovered energy errors below 100%, and 98% of orbits have recovered angular momentum errors under 100% (see Fig. <ref>). These quantities change appreciably over time for many stars, indicating that our model accurately predicts their variations as the galaxy evolves. Moreover, these integrals are recovered to high fidelity even with a larger potential sampling interval of 200 Myr compared to our fiducial snapshot spacing of 25 Myr. Additionally, orbital properties that directly depend on energy and angular momentum, such as the shape of the orbits—including pericenter and apocenter distances—are well reconstructed to within 10% accuracy even after multiple passages (see Fig. <ref> and Table <ref>), with apocenter distances being more accurate than pericenter distances. Interestingly, the instances where total angular momentum errors exceed 100% do not show any significant dependence on the orbital period or the average distance from the galactic center (see Fig. <ref>), given that total angular momentum is approximately conserved for adiabatic changes in the potential <cit.>. This implies that stream- and potential-modeling techniques based on integrals of motion should be more robust to biases induced by phase-space errors than those that compare model predictions for positions and velocities. The high accuracy in recovering approximate integrals of motion also leads to exceptionally reliable modeling of orbital planes. This reliability is crucial for studying the planes of satellite galaxies, as dwarf galaxies in the MW appear to lie in a plane that approximately follows the Magellanic Stream, a phenomenon observed in other galaxies as well <cit.>. Accurate modeling of orbital planes enhances our understanding of these satellite planes, which have important implications for the formation and evolution of galaxies and their satellite systems, as well as for dark matter <cit.>. 
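As an illustration of the last point, the recovery of orbital planes can be quantified by the angle between the true and reconstructed orbital poles (the direction of L = r × v). The short sketch below is a hypothetical diagnostic of ours, not a measurement reported in the paper; names are illustrative.

```python
import numpy as np

def orbital_pole(pos, vel):
    """Unit vector along L = r x v, averaged over the stored orbit samples.
    pos, vel: arrays of shape (n_steps, 3)."""
    L = np.cross(pos, vel).mean(axis=0)
    return L / np.linalg.norm(L)

def pole_misalignment_deg(pos_true, vel_true, pos_rec, vel_rec):
    """Angle (degrees) between the true and reconstructed orbital poles,
    a simple proxy for how well the orbital plane is recovered."""
    n_true = orbital_pole(pos_true, vel_true)
    n_rec = orbital_pole(pos_rec, vel_rec)
    cosang = np.clip(np.dot(n_true, n_rec), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```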
For these accuracy statistics, we have focused on a simulation that had no massive mergers (M_sat≤ 2 × 10^10 ) during the integration time, so all of our errors can be treated as lower bounds. However, an obvious question arises: “How big of a merger can this model accommodate before it breaks?” The success of these integration techniques largely depends on how well the potential model can describe the deforming halo along with the merging satellite. <cit.> used a metric to measure how well action-space coherence was preserved, which is slightly different from, but akin to, the conservation metrics in this paper. They found that these potential modeling techniques could effectively describe mergers with mass ratios of approximately 1:15 (total mass of halo: total mass of satellite at the time of pericentric passage), including mergers similar to the Sagittarius/SMC-mass satellites, in m12f (note Fig. 3 in <cit.> showing tidal debris of reconstructed orbits to high accuracy). However, the model breaks down for highly radial, massive mergers with a mass ratio of about 1:8 with the first pericentric passage very close to the center of the host, as seen in m12w. Later, <cit.> and <cit.> showed that stellar stream orbits can be reproduced fairly well with errors within 10-20% for halos with mergers of LMC-like orbit and mass, with a mass ratio of roughly 1:10. Additionally, <cit.> noted that a statistical ensemble of integrated stellar streams are reproduced with 20% errors in position space (two folds higher compared to this work) after 2 orbital periods in m12b. The TEMP-based orbit reconstruction methods find extensive application in zoomed cosmological simulations. Foundational work focused on dark matter only simulations to examine subhalo evolution and disruption <cit.>, simulate stellar streams in a smoothed self-consistent field potential to analyze morphological differences between smooth and lumpy potentials <cit.>, and explore the effects of time-dependent potentials on MW satellite orbits <cit.>. We expand the utility of these techniques to a fully baryonic-cosmological zoomed simulation, demonstrating their broader applicability. The models and orbit reconstruction techniques presented here have proven effective in various applications, including studying the time-evolution of stellar streams in action-space <cit.>, and position-space <cit.>, and deriving orbital parameters of disrupting satellites <cit.>, stellar streams <cit.>, and stars in the disk <cit.>. Additionally, <cit.> applied these models to inject and integrate synthetic stream orbits in a halo undergoing a merger with an LMC-mass satellite, while <cit.> used them to increase the particle resolution of merging dwarf galaxies. Furthermore, <cit.> integrated known progenitors of dwarf galaxy streams in Latte suite to measure the impact of the LMC on their orbits. These applications underscore the versatility and robustness of time-evolving BFE-based models in capturing complex dynamics within cosmological contexts, providing crucial insights into the formation and evolution of galaxies and their substructures. AA and RES acknowledge support from the Research Corporation through the Scialog Fellows program on Time Domain Astronomy, from NSF grants AST-2007232 and AST-2307787, and from NASA grant 19-ATP19-0068. RES is supported in part by a Sloan Fellowship. AW received support from: NSF via CAREER award AST-2045928 and grant AST-2107772; NASA ATP grant 80NSSC20K0513; HST grant GO-16273 from STScI. 
SL acknowledges support from NSF grant AST-2109234 and HST grant AR-16624 from STScI. ECC acknowledges support for this work provided by NASA through the NASA Hubble Fellowship Program grant HST-HF2-51502 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. NS is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2303841. This research is part of the Frontera computing project at the Texas Advanced Computing Center (TACC). Frontera is made possible by National Science Foundation award OAC-1818253. Simulations in this project were run using Early Science Allocation 1923870, and analyzed using computing resources supported by the Scientific Computing Core at the Flatiron Institute. This work used additional computational resources of the University of Texas at Austin and TACC, the NASA Advanced Supercomputing (NAS) Division and the NASA Center for Climate Simulation (NCCS), and the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. FIRE-2 simulations are publicly available <cit.> at <http://flathub.flatironinstitute.org/fire>. Additional FIRE simulation data is available at <https://fire.northwestern.edu/data>. A public version of the Gizmo code is available at <http://www.tapir.caltech.edu/ phopkins/Site/GIZMO.html>. IPython <cit.>, Matplotlib <cit.>, Numpy <cit.>, Gizmo Analysis <cit.>, Agama <cit.>, Rockstar <cit.>, Halo Analysis <cit.>, CMasher <cit.>. aasjournal § NON-INERTIAL FORCES ON A COSMOLOGICAL BACKGROUND §.§ Definitions We wish to compute the contributions to the equation of motion that arise by transforming from the global inertial frame of the simulation volume into a time-varying galactocentric frame in which the potential Φ of the simulated galaxy is modeled. This transformation can be written as a Lorentz transformation, Λ(u⃗(t),θ⃗(t)), where u⃗(t) is the instantaneous velocity of the galaxy's center of mass in the global inertial frame of the simulation box, and θ = n̂(t)θ(t) is the rotation from the coordinates in the simulation box to the instantaneous frame aligned with the galactic disk, written as a rotation by angle θ around the normal unit vector n̂. The rotation matrix is related by R^I_ J=δ^I_ Jcosθ+n^In_J(1-cosθ) +ϵ^I_ JKn^Ksinθ , where we have used Einstein summation convention (repeated indices are summed). All these quantities are functions of the time t (i.e. time in the simulation box). This transformation takes place on an expanding cosmological background in which the galaxy is evolving. In conformal Newtonian gauge (useful for this situation) the metric is ds^2 = a^2(η) { -[1+2Φ(x⃗)] dη^2 + [1-2Φ(x⃗) ] dx⃗^2 }, where η is the conformal time (related to the proper time τ by a dη = dτ), a(η) is the scale factor and Φ(x⃗) is the galactic potential defined in the comoving coordinates of the frame of the simulation box. Formally Φ is also a function of time but for the purposes of the Lorentz transformation we can treat it as instantaneously static; we ignore the time-dependence of Φ in what follows. The transformation from the global inertial coordinate frame of the simulation box to the locally orthonormal galactocentric frame of the model can be described by an object called the tetrad, which relates the basis vectors in the two frames: e_μ(x)=e^A_ μ(x) e_A(x) . 
The Greek indices hereafter indicate components in the coordinate (simulation-box) basis, while the capital Latin letter indices indicate components in the orthonormal basis where the potential is modeled and the orbit integrations are to be carried out. The tetrad enables one to translate coordinate vector components into orthonormal vector components, for example p^A=e^A_ μ p^μ . The reverse transformation is described by the inverse tetrad ẽ^ν_ B, defined by e^A_ μ ẽ^μ_B=δ^A_ B , ẽ^μ_A e^A_ ν=δ^μ_ ν , so that e.g. p^μ=ẽ^μ_A p^A. The metric itself is formed from the tetrad: _μν=η_AB e^A_ μ e^B_ ν . If the potential is weak (Φ/c^2 ≪ 1) then we can factor the metric into the tetrad: e^A_ μ = a(η)( [ (1+Φ) 0 0 0; 0 (1-Φ) 0 0; 0 0 (1-Φ) 0; 0 0 0 (1-Φ); ]) and its inverse: e^A_ μ = 1/a(η)( [ (1-Φ) 0 0 0; 0 (1+Φ) 0 0; 0 0 (1+ Φ) 0; 0 0 0 (1+Φ); ]) . §.§ Equation of motion The equation of motion (equivalent to Newton's second law, F⃗=dp⃗/dt) of a particle in this system is the geodesic equation, dp^A/dτ=-1/m^A_μ Bp^μ p^B , where τ is the proper time and the Lorentz connection (also known as spin connection or Ricci rotation coefficients) is computed from the tetrad using a version of the first Cartan structure equation (a fundamental equation of manifold geometry): ω^A_ABμ = e^ν_B∂_[νe^A_μ] + η^ACη_BDe^α_C∂_[μe^D_α] + η^ACη_DFe^α_Ce^ν_Be^F_μ∂_[νe^D_α] The brackets around lower indices are shorthand for the antisymmetric part: ∂_[νe^A_μ]≡( ∂_ν e^A_μ - ∂_μ e^A_ν) In this version of the equation of motion one sees two types of momenta (velocities): the coordinate momentum p^μ≡ m v^μ, which is related to the comoving velocity in the simulation frame by a factor of the inertial mass m, and the momentum in the galactocentric frame, p^B ≡ E v^B which is instead proportional to the energy E. Just like positions, the two are related by the tetrad: p^A=me^A_ μ(dx^μ/dτ). The tetrad also relates the coordinate time τ on the left-hand side of Equation <ref> and the time t in the galactocentric frame; these are related by the tetrad such that t = a(τ)(1-Φ)τ. For Φ/c^2 ≪ 1 this reduces to t=a(τ) τ. By using the identities above, one can rearrange the equation to give the equation of motion for the 3-momentum, almost entirely in terms of quantities defined in the galactocentric frame: dp^I/dt = - E ω^I_μ B v^μ v^B. The subscripts IJK… are used to indicate spatial components of the four-vectors (i.e. normal three-dimensional positions and velocities) in the orthonormal frame (i.e. one that has orthogonal unit vectors, like global Cartesian coordinates), while ijk… indicate 3D positions and velocities in the coordinate frame (where the unit vectors point along the different coordinate directions and might not be orthogonal, since mass “bends” the coordinate grid according to the equivalence principle). The inertia of a particle is given by its energy E, which since our velocities are all slow compared to c is approximately equivalent to the rest mass m and is therefore constant in time. Pulling this factor out on both sides of the equation above leaves us with dv^I/dt = - ω^I_μ B v^μ v^B. §.§ Fields and transformations The terms in the equation of motion (Equation <ref>) can be decomposed into a form similar to the equation of motion for a particle in an electromagnetic field. 
This is done by separating the components of the connection by the number of powers of v they contain and whether they are symmetric or antisymmetric: dv^I/dt=g^I+M^I_ jv^j+ϵ^I_ KL (^K+N^K_ jv^j)v^L, where we have defined ^I_t0=^0_tI=-g^I , ^I_tJ=ϵ^I_ JK^K , ^I_j0=^0_jI=-M^I_ j , ^I_jK=ϵ^I_ KLN^L_ j . The object ϵ^I_ JK is the totally antisymmetric matrix or Levi-Civita pseudotensor as found in the component definition of the cross product, ( A× B)^I ≡ϵ^I_ JK A^J B^K. This formalism is useful for our purposes since these fields transform in particular ways under the time-varying Lorentz transformation we need to arrive in the galactocentric frame at each timestep. This is because although the Lorentz connection is a tensor (specifically a one-form) under arbitrary coordinate transformations,[this is an extension of the idea that a vector always points in the same direction no matter what coordinate system you use to describe its components] it's a tensor only for global Lorentz transformations, not for time-varying or spatially-varying ones: ^A_μ B→^A_μ B=Λ^A_ C (Λ^-1)^D_ B ^C_μ D-(Λ^-1)^C_ B ∂_μΛ^A_ C . The terms that are proportional to partial derivatives of Λ are nonzero if the boost velocity u⃗ or the rotation angle θ⃗ is a function of time or space. In particular, since the transformation (both the velocity shift and the rotation) varies in time but not space, the following table gives the transformation rules that lead to the equation of motion in the galactocentric frame, in the limit that u≪ c: This formalism thus allows us to account for both the transformation to the galactocentric frame varying with time and the expanding cosmological background. §.§ Equation of motion in the galactocentric frame We can use Equation <ref> and the definitions in Equations <ref> to calculate the fields for our metric and tetrad. Then we can use the table to find the equation of motion in the frame of integration. The fields in the simulation frame are: g^I = -∂^I Φ Ω^I = 0 M^I_ j = -(1-2Φ)ȧ/aδ^I_ j N^I_ j = 0 When transformed to the galactocentric frame, we find g̅^I = -R^I_ J∂^J Φ - 2u^I(u⃗·∇⃗Φ) - ∂_t u^I Ω̅^I = (u⃗×∇⃗Φ)^I - (u⃗×∂_t u⃗ )^I -n^I∂_tθ - sinθ∂_tn^I + (1-cosθ) ϵ^I_ JK n^J ∂_tn^K M̅^I_ j = -(1-2Φ)ȧ/a R^I_ Kδ^K_ j N̅^I_ j = (1-2Φ)ȧ/aϵ^I_ JKδ^J_j u^K Inserting these terms into the equation of motion gives us dv^I/dt = -∂^I Φ - 2u^I(u⃗·∇⃗Φ) - ∂_t u^I - [ (u⃗×∇⃗Φ) ×v⃗]^I - [(u⃗×∂_t u⃗ ) ×v⃗]^I -∂_t θ(n̂×v⃗)^I - sinθ (∂_tn̂×v⃗)^I + (1-cosθ)[(n̂×∂_t n̂) ×v⃗] -(1-2Φ)ȧ/a v^I + (1-2Φ)ȧ/a[v^2 u^I -v^I (u⃗·v⃗) ]. which includes the “non-inertial forces”—terms from transforming from the expanding cosmological box (the simulation frame) into a local inertial frame that varies with time (the galactocentric frame). The terms should all be evaluated in the galactocentric frame. The last term on line 1 of the above equation is the “force" from accelerating the reference frame, and the first term on line 3 is the Coriolis force from rotating the reference frame. The other terms may be fairly small to zero given that u/c ≪ 1, v/c ≪ 1 and Φ/c^2 ≪ 1. Also note the terms involving the Hubble parameter ȧ/a. Re-introducing appropriate factors of c on both sides gives dv^I/dt = -∂^I Φ - ∂_t u^I -∂_t θ(n̂×v⃗)^I - sinθ (∂_tn̂×v⃗)^I + (1-cosθ)[(n̂×∂_t n̂) ×v⃗] - 2u^I(u⃗·∇⃗Φ)/c^2 - [ (u⃗×∇⃗Φ) ×v⃗]^I/c^2 - [(u⃗×∂_t u⃗ ) ×v⃗]^I/c^2 -(1-2Φ/c^2)ȧ/a v^I + (1-2Φ/c^2)ȧ/a[(v/c)^2 u^I -v^I (u⃗·v⃗)/c^2 ]. 
This implies that for v ≪ c the EOM reduces to dv^I/dt = -∂^I Φ - ∂_t u^I -ȧ/a v^I -∂_t θ(n̂×v⃗)^I - sinθ (∂_tn̂×v⃗)^I + (1-cosθ)[(n̂×∂_t n̂) ×v⃗] where the first line is terms from acceleration and second from rotation of the galactocentric frame. In this work we fix the rotation of the galactocentric frame, so the terms on the second line vanish, leaving the terms on the first line. Equation <ref> is then obtained by identifying dv^I/dt →F⃗ (the force per unit mass in the galactocentric frame), -∂^I Φ→∇⃗Φ (the gravitational field in the galactocentric frame), and ∂_t u^I → du⃗/dt (the peculiar velocity of the galactic center). It is useful to consider the size of terms on the second line of this equation, induced by changing the orientation of the galactic frame in time, compared to the size of the second term on the first line, which is induced by tracking the motion of the galactic center of mass in space. It is sometimes necessary to re-orient the galactocentric frame over the course of a simulation in order to keep the azimuthal harmonic expansion coordinates lined up with the disk symmetry axis, in which case this rotation induces fictitious forces and the terms on the second line will be nonzero. Typical center-of-mass accelerations are of order 0.3 km/s/Myr <cit.>. In the simulation used in this work the disk plane rotates by about 20 degrees over 7 Gyr up to the present day, so ∂_t θ is ∼ 10^-4 rad Myr^-1. For typical orbital velocities of ∼ 100 km s^-1, the first term on the second line of Equation <ref> is thus about 0.01 km s^-1 Myr^-1. However, during a merger the disk can change its orientation far more rapidly than this, which will both necessitate rotation of the galactocentric frame and increase the size of these terms. § FORCE RESIDUALS UNDER DIFFERENT SYMMETRIES ACROSS THE CARTESIAN AXES Fig. <ref> presents the violin plots showcasing the residual distributions of reconstructed forces compared to true forces across different Cartesian axes for DM (green) and stars (orange) within 50 kpc of the galactic center. Each row corresponds to a specific Cartesian axis: X (top row), Y (middle row), and Z (bottom row). These distributions depict the impact of various symmetry conditions imposed on the halo and the disk, as denoted by the x-axis notation, with detailed symmetries listed in Table <ref>. The dotted lines in the plots denote the 25th and 75th quartiles, while the solid line marks the 50th quartile. The residual distributions across different Cartesian axes, as depicted in Fig. <ref>, echo the trends observed in the total force residual distributions discussed in Fig. <ref> (Sec. <ref>). Generally, all distributions exhibit larger standard deviations (σ) compared to the residuals of force magnitudes. However, models with constrained symmetries tend to display wider tails across all axes, along with a non-zero mean (μ) along the X and Y axes. Notably, the residual distribution along the Z axis shows a higher degree of symmetry, with mean values close to zero and lower standard deviations for both DM and star distributions across all models. This could stem from disk alignment in the XY plane, with the disk angular momentum aligned with Z-axis. Conversely, the distribution along the Y axis exhibits a bimodal pattern, suggesting a higher likelihood of systematic biases introduced by symmetry assumptions. Interestingly, this bimodal behavior is not observed in a different simulation from the Latte Suite <cit.> (not included).
http://arxiv.org/abs/2407.12625v1
20240717145238
Serendipity discrete complexes with enhanced regularity
[ "Daniele Di Pietro", "Marien Hanot", "Marwa Salah" ]
math.NA
[ "math.NA", "cs.NA", "65N30, 65N99, 65N12, 35Q60" ]
1]Daniele A. Di Pietro 2]Marien Hanot 1]Marwa Salah [1]IMAG, Univ Montpellier, CNRS, Montpellier, France, daniele.di-pietro@umontpellier.fr, marwa.salah@umontpellier.fr [2]University of Edinburgh, Edinburgh, United Kingdom, mhanot@ed.ac.uk Serendipity discrete complexes with enhanced regularity [ ======================================================= § ABSTRACT In this work we address the problem of finding serendipity versions of approximate de Rham complexes with enhanced regularity. The starting point is a new abstract construction of general scope which, given three complexes linked by extension and reduction maps, generates a fourth complex with cohomology isomorphic to the former three. This construction is used to devise new serendipity versions of rot-rot and Stokes complexes derived in the Discrete de Rham spirit. Key words. Discrete de Rham method, compatible discretizations, serendipity, rot-rot complex, Stokes complex MSC2010. 65N30, 65N99, 65N12, 35Q60 § INTRODUCTION In this work we address the question of finding serendipity versions of discrete de Rham complexes with enhanced regularity. The starting point is a new construction of general scope which, given three complexes connected by extension and reduction maps in the spirit of <cit.>, generates a fourth complex with cohomology isomorphic to the former three. In the context of finite elements, the word “serendipity” refers to the possibility, on certain element geometries, to discard some internal degrees of freedom (DOFs) without modifying the approximation properties of the underlying space; see, e.g., <cit.> for recent developments in the context of the approximation of Hilbert complexes. In the context of arbitrary-order polyhedral methods, serendipity techniques were first developed in <cit.> to build a reduced version of the nodal (H^1-conforming) virtual space. Similar ideas had been previously followed in <cit.> to reduce the number of element DOFs in the framework of discontinuous Galerkin methods and in <cit.> to eliminate element DOFs in hybrid finite volume methods; see also <cit.> on this subject. When applying serendipity techniques to a discrete complex rather than a single space, one must make sure that the elimination of DOFs does not alter its homological properties. Compatible serendipity techniques to reduce the number of face DOFs in virtual element discretizations of the de Rham complex have been developed in <cit.>, where a direct proof of local exactness properties was provided. A variation of the discrete complex in the previous reference has been recently proposed in <cit.>, where links with Discrete de Rham (DDR) methods have also been established. A systematic approach to serendipity for polyhedral approximations of discrete complexes, including the elimination of both element and face DOFs, has been recently proposed in <cit.> and applied to the DDR complex of <cit.> (see also <cit.> for preliminary developments and <cit.> for an extension to differential forms). In practical applications, Hilbert complexes different from (but typically linked to <cit.>) the de Rham complex are often relevant. Examples include: the rot-rot complex, which naturally arises when considering quad-rot problems; the Stokes complex, relevant for incompressible flow problems; the div-div complex, appearing in the modeling of thin plates. Discretizations of such complexes in the DDR spirit have been recently proposed in <cit.>, <cit.>, and <cit.>, respectively. 
To this date, however, the literature on serendipity techniques for advanced Hilbert complexes is extremely limited. An example in the context of polyhedral methods is provided by <cit.>, where a serendipity version of the DDR div-div complex is proposed and studied. The goal of the present work is to fill this gap by proposing a general construction that makes it possible to derive in a systematic way a serendipity version of an advanced discrete complex whenever a serendipity version of the underlying de Rham complex is available. The construction is applied to the derivation and study of discrete versions of the discrete rot-rot and Stokes complexes of <cit.>. The rest of this work is organized as follows. In Section <ref> we present the abstract construction. The discrete de Rham complex of <cit.> along with its serendipity version of <cit.> are briefly recalled in Section <ref>. Serendipity versions of the rot-rot complex of <cit.> and of the Stokes complex of <cit.> are derived and studied in Section <ref> and <ref>, respectively. Section <ref> also contains numerical experiments comparing the performance of the serendipity and original rot-rot complexes on a quad-rot problem. § AN ABSTRACT FRAMEWORK FOR SERENDIPITY COMPLEXES WITH ENHANCED REGULARITY In this section we present an abstract framework that, given three complexes linked by suitable reduction and extension operators, allows one to construct a fourth complex with cohomology isomorphic to the others. The application that we have in mind is the construction of serendipity versions of the de Rham complex with enhanced regularity. §.§ Setting We consider the situation depicted in the following diagram, involving three complexes (W_i,∂_i)_i, (W_i,∂_i)_i, and (V_i,d_i)_i : [xscale=2.5, yscale=1.75, baseline=(Wi.base)] (SWi) at (0,1) W_i; (SWi1) at (1,1) W_i+1; (Wi) at (0,0) W_i; (Wi1) at (1,0) W_i+1; (Vi) at (0,-1) V_i; (Vi1) at (1,-1) V_i+1; [->] (SWi) to node[above, font=]∂_i (SWi1); [->] (Wi) to node[above, font=]∂_i (Wi1); [->] (Vi) to node[above, font=]d_i (Vi1); [->,dashed] (-1,-1) – (Vi); [->,dashed] (-1,0) – (Wi); [->,dashed] (-1,1) – (SWi); [->,dashed] (SWi1) – (2,1); [->,dashed] (Wi1) – (2,0); [->,dashed] (Vi1) – (2,-1); [->,dashed] (Wi) to [bend left=20] node[left, font=] Wi(SWi); [->,dashed] (Wi1) to [bend left=20] node[left, font=] Wi+1(SWi1); [->] (SWi) to [bend left=20] node[right, font=] Wi(Wi); [->] (SWi1) to [bend left=20] node[right, font=] Wi+1(Wi1); [->,dashed] (Vi) to [bend left=20] node[left, font=] i(Wi); [->,dashed] (Vi1) to [bend left=20] node[left, font=] i+1(Wi1); [->] (Wi) to [bend left=20] node[right, font=] i(Vi); [->] (Wi1) to [bend left=20] node[right, font=] i+1(Vi1); The complexes (W_i,∂_i)_i and (W_i,∂_i)_i are linked by linear extension and reduction operators WiW_i → W_i and Wi W_i →W_i that meet the following assumption. [Properties of Wi and Wi] It holds: * (WiWi)_|∂_i =_∂_i. * (Wi+1Wi+1-_Wi+1)(∂_i+1) ⊂(∂_i). * Wi+1∂_i=∂_iWi and Wi+1∂_i= ∂_i Wi. By <cit.>, Assumption <ref> guarantees the cohomologies of the complexes (W_i,∂_i)_i and (W_i,∂_i)_i are isomorphic. Additionally, the upper diagram in (<ref>) is commutative and we have: ∂_i=Wi+1∂_iWi. Examples of complexes (W_i,∂_i)_i and (W_i,∂_i)_i and of the corresponding reduction and extension operators that match Assumption <ref> are provided by the two- and three-dimensional discrete de Rham complexes (<ref>) and (<ref>) below and their serendipity versions recalled in Section <ref>. 
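Although the framework is stated abstractly, in a practical implementation the spaces in diagram (<ref>) are finite-dimensional and the differentials, reductions and extensions are matrices. The following sketch, which is ours and not part of the paper, shows one possible numerical verification of the three conditions of Assumption <ref> in that setting; the tolerance and the least-squares test for the inclusion in the image of the differential are illustrative choices.

```python
import numpy as np
from scipy.linalg import null_space

def in_image(A, b, tol=1e-10):
    """True if b lies in the column space of A (least-squares residual test)."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b) <= tol * max(1.0, np.linalg.norm(b))

def check_assumption_1(dW_i, dhatW_i, dhatW_ip1, E_i, E_ip1, R_i, R_ip1, tol=1e-10):
    """Numerical check of Assumption 1 for matrix representations:
    dW_i: W_i -> W_{i+1}, dhatW_i: hatW_i -> hatW_{i+1}, dhatW_ip1: hatW_{i+1} -> hatW_{i+2},
    E_i: W_i -> hatW_i (extension), R_i: hatW_i -> W_i (reduction)."""
    # (i)  R_i E_i = Id on ker(d_i) of the W complex
    K = null_space(dW_i)
    cond_i = np.allclose(R_i @ (E_i @ K), K, atol=tol)
    # (ii) (E_{i+1} R_{i+1} - Id)(ker of dhat_{i+1}) contained in Im(dhat_i)
    Khat = null_space(dhatW_ip1)
    defect = E_ip1 @ (R_ip1 @ Khat) - Khat
    cond_ii = all(in_image(dhatW_i, defect[:, j], tol) for j in range(defect.shape[1]))
    # (iii) commutation with the differentials
    cond_iii = (np.allclose(R_ip1 @ dhatW_i, dW_i @ R_i, atol=tol)
                and np.allclose(E_ip1 @ dW_i, dhatW_i @ E_i, atol=tol))
    return cond_i, cond_ii, cond_iii
```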
In the applications of Sections <ref> and <ref> , (V_i,d_i)_i is an extended version of (W_i,∂_i)_i with enhanced regularity, which is linked to (W_i, ∂_i)_i by the linear extension and reduction operators i W_i→ V_i and i V_i→ W_i. [Properties of i and i] It holds: * ii =_W_i. * (i+1i+1-_V_i+1)( d_i+1) ⊂(d_i). * i+1 d_i=∂_ii and i+1∂_i= d_i i. Notice that property <ref> is stricter than <ref> since it requires i to be a left inverse of i on the entire space W_i and not only on ∂_i. Accounting for this remark and invoking again <cit.>, it is easy to see that the cohomologies of (V_i, d_i)_i and (W_i, ∂_i)_i are isomorphic. As noticed above, the latter is, in turn, isomorphic to the cohomology of (W_i,∂_i)_i. The complex (V_i, d_i)_i can be illustrated by the discrete rot-rot complex (<ref>) or the discrete Stokes complex (<ref>), respectively discussed in Sections <ref> and <ref> below. Assume <ref> and let C_ii. Then, we have the following direct decomposition: V_i = iW_i ⊕ C_i. Under assumption <ref>, this decomposition is compatible with d_i, in the sense that d_i iW_i ⊂i+1W_i+1 and d_i C_i ⊂ C_i+1. By <ref>, i is surjective and i is injective. As a consequence of the latter property, |W_i|=|iW_i|, where | · | denotes here the dimension of a vector space. By the rank-nullity theorem, we can also write |C_i|= |V_i| - |(i)| =|V_i|-|W_i|, where the conclusion follows from the surjectivity of i. Thus, |C_i| + |iW_i|=|V_i|-|W_i|+|W_i|=|V_i|, and this gives V_i=iW_i + C_i, thus proving (<ref>). Let us now prove that the sum in the above expression is direct. To this purpose, let v∈iW_i ∩ C_i. Since v ∈ C_i, i v =0. Since v ∈iW_i, on the other hand, v can be written as iv_w for some v_w ∈ W_i, so iiv_w=0. By <ref>, v_w=0, so v=i0=0 (since i is linear). As a result, iW_i ∩ C_i={0}. Now, i+1 d_i C_i <ref>= ∂_ii C_i (<ref>)= 0, giving that d_i C_i ⊂ C_i+1. On the other hand, d_i i W_i <ref> = i+1∂_i W_i, hence d_i i W_i ⊂i+1 W_i+1. This concludes the proof of (<ref>). §.§ Construction of a serendipity complex with enhanced regularity The goal of this section is to construct a new complex (V_i,d_i) with operators ViV_i→ V_i, Vi V_i→V_i, iW_i→V_i, iV_i→W_i, that verify conditions similar to the ones in Assumptions <ref> and <ref>, so that (V_i,d_i) has the same cohomology as the three other complexes. 
The construction is illustrated in the following diagram: [xscale=2,yscale=2,baseline=(Middle.base)] (SVi) at (0,0,0) V_i; (SVi1) at (2,0,0) V_i+1; (Vi) at (0,0,-2) V_i; (Vi1) at (2,0,-2) V_i+1; (SWi) at (0,2,0) W_i; (SWi1) at (2,2,0) W_i+1; (Wi) at (0,2,-2) W_i; (Wi1) at (2,2,-2) W_i+1; (Middle) at (0,1,0) ; [->,>=latex] (SVi) – (SVi1) node[midway, above, font=] d_i; [->,>=latex] (Vi) – (Vi1) node[midway, above, font=] d_i; [->,>=latex] (SWi) – (SWi1) node[midway, above, font=] ∂_i; [->,>=latex] (Wi) – (Wi1) node[midway, above, font=] ∂_i; [->,>=latex] (SVi)to [bend right=20] node[pos=0.8, right, font=] Vi (Vi); [->,dashed] (Vi)to [bend right=20] node[pos=0.5, left, font=] Vi (SVi); [->,>=latex] (SVi1)to [bend right=20] node[pos=0.7, right, font=] Vi+1 (Vi1); [->,dashed] (Vi1)to [bend right=20] node[pos=0.5, left, font=] Vi+1 (SVi1); [->,>=latex] (SWi)to [bend right=20] node[pos=0.8, right, font=] Wi (Wi); [->,dashed] (Wi)to [bend right=20] node[pos=0.5, left, font=] Wi (SWi); [->,>=latex] (SWi1)to [bend right=20] node[pos=0.7, right, font=] Wi+1 (Wi1); [->,dashed] (Wi1)to [bend right=20] node[pos=0.5, left, font=] Wi+1 (SWi1); [->,>=latex] (Wi) to [bend left=15] node[pos=0.6, right,font=] i (Vi); [->,dashed] (Vi) to [bend left=15] node[pos=0.4, left,font=] i (Wi); [->,>=latex] (Wi1) to [bend left=15] node[pos=0.6, right,font=] i+1 (Vi1); [->,dashed] (Vi1) to [bend left=15] node[pos=0.4, left,font=] i+1 (Wi1); [->,>=latex] (SWi) to [bend left=15] node[pos=0.4, right,font=] i (SVi); [->,dashed] (SVi) to [bend left=15] node[pos=0.6, left,font=] i (SWi); [->,>=latex] (SWi1) to [bend left=15] node[pos=0.4, right,font=] i+1 (SVi1); [->,dashed] (SVi1) to [bend left=15] node[pos=0.6, left,font=] i+1 (SWi1); [->,dashed] (-1,0,0) – (SVi); [->,dashed] (-1,2,0) – (SWi); [->,dashed] (-1,0,-2) – (Vi); [->,dashed] (-1,2,-2) – (Wi); [->,dashed] (SVi1) – (3,0,0); [->,dashed] (SWi1) – (3,2,0); [->,dashed] (Vi1) – (3,0,-2); [->,dashed] (Wi1) – (3,2,-2); By Lemma <ref>, a generic element v ∈ V_i can be written as v = iv_w + v_c with (v_w, v_c) ∈ W_i × C_i. We introduce the projector Π_C_i onto C_i such that, for any v = i v_w + v_c, Π_C_i v v_c. Notice that, by definition, Π_C_ii = 0. In addition, using the compatibility expressed by (<ref>), Π_C_i+1 d_i v = d_i Π_C_iv, as can be checked writing Π_C_i+1 d_i v = Π_C_i+1 d_i (iv_w + v_c) = Π_C_i+1 (d_i iv_w + d_i v_c) (<ref>)= d_i v_c (<ref>)= d_i Π_C_iv. The spaces and differential of the new complex are respectively given by V_i{v = (v_w,v_c)v_w∈W_i and v_c∈ C_i}, and d_iv (∂_iv_w, d_iv_c) for all v= (v_w,v_c) ∈V_i. The operators iW_i→V_i, iV_i→W_i, ViV_i→ V_i, and Vi V_i→V_i relating this new complex to (W_i,∂_i)_i and (V_i,d_i)_i, respectively, are defined as follows: i v_w (v_w,0)   for all  v_w ∈W_i, i v v_w  for all  v =(v_w,v_c) ∈V_i, Vi v iWiv_w+v_c  for all  v =(v_w,v_c) ∈V_i, Vi v (Wiiv,Π_C_i v)  for all  v ∈V_i. Under Assumptions <ref> and <ref>, the operators defined by (<ref>) satisfy the following relations: R_W_iℛ_i = ℛ_iR_V_i, ℰ_i R_W_i = R_V_i ℰ_i, E_W_iℛ_i = ℛ_i E_V_i, ℰ_i E_W_i = E_V_i ℰ_i, ∂_ii = i+1d_i. (i) Proof of (<ref>). For all v ∈ V_i, we have iViv (<ref>)=i(Wiiv,Π_C_i v)(<ref>)= Wiiv. (ii) Proof of (<ref>). For all v_w ∈ W_i, it holds Viiv_w (<ref>)= (Wiiiv_w,Π_C_iiv_w) <ref>, (<ref>)= (Wiv_w,0) (<ref>)=iWiv_w. (iii) Proof of (<ref>). 
For all v = (v_w,v_c) ∈V_i, we have: Wiiv (<ref>)= Wiv_w <ref>= iiWiv_w + iv_c = i(iWiv_w + v_c) (<ref>)= iViv, where we have additionally used the fact that v_c ∈ C_i to add iv_c = 0 in the right-hand side of the second equality and the linearity of i in the third equality. (iv) Proof of (<ref>). For all v_w ∈W_i, we can write Viiv_w (<ref>)= Vi(v_w,0) (<ref>)= iWiv_w . (v) Proof of (<ref>). For all v =(v_w,v_c) ∈V_i, we have: ∂_iiv (<ref>)= ∂_i v_w (<ref>)= i+1(∂_i v_w, d_i v_c) (<ref>)= i+1d_iv. Under Assumptions <ref> and <ref>, the operators Vi and Vi satisfy the following properties: (ViVi)_|d_i = _d_i, (Vi+1Vi+1-_V_i+1)( d_i+1) ⊂(d_i), Vi+1 d_i=d_iVi and Vi+1d_i=d_iVi. (i) Proof of (<ref>). Let v=(v_w,v_c) ∈d_i. We have ViVi(v_w,v_c) (<ref>) = Vi(iWiv_w+v_c) (<ref>) = ( WiiiWiv_w, Π_C_i (iWiv_w+v_c)) <ref>, (<ref>) = (WiWiv_w,v_c) <ref> = (v_w,v_c), where we have used the linearity of i along with iv_c = 0 (since v_c ∈ C_i) in the second equality, while the use of <ref> in the fourth equality is possible since v_w ∈∂_i, as can be checked writing ∂_iv_w (<ref>)= ∂_iiv(<ref>)= i+1d_i v = 0, the conclusion being a consequence of v∈d_i and the linearity of i+1. (ii) Proof of (<ref>). Let v (<ref>)= i+1 v_w + v_c ∈ d_i+1 with (v_w, v_c) ∈ W_i+1× C_i+1. We write Vi+1Vi+1 v - v = Vi+1Vi+1(i+1 v_w+ v_c)-(i+1 v_w+ v_c) (<ref>) = Vi+1(Wi+1i+1(i+1 v_w+v_c),Π_C_i(i+1 v_w+ v_c)) - (i+1 v_w+ v_c) (<ref>) = Vi+1(Wi+1i+1i+1 v_w,v_c) - (i+1 v_w+ v_c) <ref> = Vi+1(Wi+1 v_w, v_c) - (i+1 v_w+ v_c) (<ref>) = i+1Wi+1Wi+1 v_w+ v_c- (i+1 v_w+ v_c) = i+1 (Wi+1Wi+1 v_w- v_w), where, in the third equality, we have additionally used the fact that i+1 v_c = 0 since v_c ∈ C_i+1. We next notice that i+1v = i+1(i+1v_w+v_c) = i+1i+1v_w<ref>=v_w. This implies, in turn, ∂_i+1v_w=∂_i+1i+1v<ref>=i+2d_i+1v=i+20=0 since v∈ d_i+1 and i+2 is linear by definition, giving that v_w∈∂_i. We can therefore use Assumption <ref> on Wi+1Wi+1 v_w- v_w in (<ref>) to infer the existence of q∈ W_i such that Vi+1Vi+1 v - v =i+1∂_i q <ref>= d_i i q ∈(d_i). (iii) Proof of (<ref>). For all v ∈ V_i, we have Vi+1 d_i v (<ref>) = ( Wi+1i+1 d_i v, Π_C_i+1d_i v ) <ref> = ( Wi+1∂_i i v ,Π_C_i+1d_i v) <ref>, (<ref>) = ( ∂_i Wiiv, d_i Π_C_i v ) (<ref>) = d_i (Wii v , Π_C_i v ) (<ref>) =d_i Vi v. For all v=(v_w,v_c) ∈V_i, on the other hand, we have: Vi+1d_i v(<ref>) = Vi+1 (∂_i v_w, d_iv_c ) (<ref>) = i+1Wi+1∂_i v_w+ d_i v_c <ref>, <ref> = d_i iWiv_w+ d_i v_c (<ref>) = d_i Vi (v_w, v_c), where the conclusion additionally uses the linearity of d_i. Under Assumptions <ref> and <ref>, the cohomologies of all the complexes in diagram (<ref>) are isomorphic. Theorem <ref> gives all the properties needed to invoke <cit.> and prove that the cohomology of the complex (V_i,d_i)_i is isomorphic to that of (V_i,d_i)_i. The latter is, on the other hand, isomorphic to both the cohomologies of (W_i, ∂_i) and (W_i,∂_i)_i (see Remark <ref>). § THE DISCRETE DE RHAM COMPLEX AND ITS SERENDIPITY VERSION In this section we recall the Discrete De Rham (DDR) complex of <cit.> and its serendipity version (SDDR) of <cit.>. These complexes will respectively play the role of (W_i,∂_i)_i and (W_i,∂_i)_i in (<ref>) for the applications of the following sections. We only give a brief overview of the construction for the sake of conciseness and refer to <cit.> for additional details. 
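Before recalling these spaces, we note that, at the matrix level of an actual implementation, the construction of Section <ref> is easy to assemble once bases are fixed: C_i is the kernel of the reduction from V_i to W_i, the projector on C_i is Id minus extension composed with reduction, and the new differential acts blockwise. The sketch below is ours and only meant to make this block structure concrete; the names and the use of orthonormal null-space bases are illustrative choices.

```python
import numpy as np
from scipy.linalg import null_space, block_diag

def project_onto_C(v, calE_i, calR_i):
    """Projector on C_i: v - calE_i calR_i v (uses calR_i calE_i = Id and calR_i = 0 on C_i)."""
    return v - calE_i @ (calR_i @ v)

def hat_V_differential(dhatW_i, dV_i, calR_i, calR_ip1):
    """Matrix of the new differential on hatV_i = hatW_i x C_i, acting blockwise
    as (dhatW_i on the hatW component, d_i restricted to C_i on the complement).
    dhatW_i: matrix of the hatW differential, dV_i: matrix of d_i,
    calR_i, calR_ip1: matrices of the V -> W reductions; C_i = ker(calR_i)."""
    C_i, C_ip1 = null_space(calR_i), null_space(calR_ip1)
    # d_i maps C_i into C_{i+1} (by the Lemma above), so express d_i|_{C_i} in the C bases
    D_C, *_ = np.linalg.lstsq(C_ip1, dV_i @ C_i, rcond=None)
    return block_diag(dhatW_i, D_C)
```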
§.§ Local polynomial spaces and L^2-orthogonal projectors For a polytope T_d embedded in ℝ^n with n ≥ d and an integer ℓ≥ 0, we denote by ℓ(T_d) the space spanned by the restriction to T_d of n-variate polynomials. Introducing the boldface notation for the space of tangential polynomials ℓ(T_d) ℓ(T_d; ^d) for d ∈{ 2, 3}, the following direct decompositions hold (see, e.g., <cit.>): ℓ(T_2) = ℓ(T_2) ⊕ℓ(T_2) with ℓ(T_2)_T_2ℓ+1(T_2) and ℓ(T_2)(x-x_T_2)^⊥ℓ-1(T_2), where _T_2 denotes the tangential gradient when T_2 is embedded in ^3 and v^⊥ is obtained rotating v by π/2, ℓ(T_3) = ℓ(T_3) ⊕ℓ(T_3) with ℓ(T_3)ℓ+1(T_3) and ℓ(T_3)(x-x_T_3)×ℓ-1(T_3), and, for d ∈{ 2, 3 }, ℓ(T_d) = ℓ(T_d) ⊕ℓ(T_d) with ℓ(T_d)_T_dℓ+1(T_d) and ℓ(T_d)(x-x_T_d)ℓ-1(T_d), where _T_2_T_2^⊥ and _T_3. We extend the above notations to negative exponents ℓ by setting all the spaces appearing in the decompositions equal to the trivial vector space. Given a polynomial (sub)space 𝒳^ℓ(T_d), the corresponding L^2-orthogonal projector is denoted by π_𝒳,T_d^ℓ. Boldface font will be used when the elements of 𝒳^ℓ(T_d) are vector-valued, and, for X∈{R, G}, ℓT_d denotes the L^2-orthogonal projector on X^ c,ℓ(T_d). §.§ The two-dimensional discrete de Rham complex §.§.§ Spaces Given a two-dimensional polygonal mesh ℳ_h, we denote by ℳ_0,h, ℳ_1,h and ℳ_2,h, respectively, the set of vertices T_0, edges T_1, and elements T_2 of the mesh. Let k ≥ 0 be a given polynomial degree and, for all T_2 ∈ℳ_2,h, n_T_2 and s_T_2 two integers ≥ -1 that we collect in the vectors n= ( n_T_2 )_T_2∈ℳ_2,h and s=( s_T_2 )_T_2∈ℳ_2,h. The boldface notation is dropped when the values in n and s are all equal. We define the following discrete counterparts of H^1(Ω), Ω, and L^2(Ω): n,kh{[t] q_h =( (q_T_2)_T_2∈ℳ_2,h,(q_T_1)_T_1∈ℳ_1,h, (q_T_0)_T_0∈ℳ_0,h) q_T_2∈n_T_2(T_2) for all T_2∈ℳ_2,h, q_T_1∈k-1(T_1) for all T_1∈ℳ_1,h, q_T_0∈ for all T_0∈ℳ_0,h}, s,kh{[t] v_w,h =( (v_R,T_2,v_R,T_2^)_T_2∈ℳ_2,h, (v_T_1)_T_1∈ℳ_1,h) v_R,T_2∈k-1(T_2) and v_R,T_2^∈s_T_2(T_2) for all T_2∈ℳ_2,h, and v_T_1∈k(T_1) for all T_1∈ℳ_1,h}, khk(ℳ_2,h), where k(ℳ_2,h) denotes the space of broken polynomials on ℳ_2,h of total degree ≤ k. The restriction of n,kh to an element T_d, d ∈{ 1, 2}, is obtained collecting the components on T_d and its boundary and is denoted by n,kT_d. Similar conventions are used for the restriction of the spaces that will appear in the rest of the paper as well as their elements. §.§.§ Discrete vector calculus operators For any edge T_1∈ℳ_1,T_2 and any q_T_1∈k-1,kT_1, the edge gradient q_T_1 is defined as the derivative along T_1 of the function q_T_1∈k+1(T_1) such that q_T_1(x_T_0) = q_T_0 for any vertex T_0 of T_1 of coordinates x_T_0 and k-1T_1q_T_1 = q_T_1. We next define the gradient :k-1,kT_2→k(T_2) and the scalar two-dimensional potential :k-1,kT_2→k+1(T_2) on T_2 such that, for all q_T_2∈k-1,kT_2, ∫_T_2q_T_2·v = -∫_T_2 q_T_2_Fv + ∑_T_1∈ℳ_1,T_2ω_T_2T_1∫_T_1q_T_1 (v·_T_2T_1) ∀v∈k(T_2), ∫_T_2q_T_2_T_2v = -∫_T_2q_T_2·v + ∑_T_1∈ℳ_1,T_2ω_T_2T_1∫_T_1q_T_1 (v·_T_2T_1) ∀v∈k+2(T_2), where _T_2T_1 is a unit normal vector to T_1 lying in the plane of T_2 and ω_T_2 T_1 the orientation of T_1 relative to T_2 such that ω_T_2T_1_T_2T_1 points out of T_2. 
The two-dimensional scalar rotor :k,kT_2→k(T_2) and the corresponding vector potential :k,kT_2→k(T_2) (which can be interpreted as a tangential component when T_2 is the face of a polyhedron) are such that, for all v_T_2∈k,kT_2, ∫_T_2v_T_2 r = ∫_T_2v_R,T_2·_T_2 r - ∑_T_1∈ℳ_1,T_2ω_T_2T_1∫_T_1 v_T_1 r ∀ r∈k(T_2), ∫_T_2v_T_2·(_T_2 r + w) = ∫_T_2v_T_2 r + ∑_T_1∈ℳ_1,T_2ω_T_2T_1∫_T_1 v_T_1 r + ∫_T_2v_R,T_2^·w ∀ (r,w)∈k+1(T_2)×k(T_2). We will also need the two-dimensional vector rotor C^k_T_2:k,kT_2→k(T_2) such that ∫_T_2C^k_T_2v_T_2·w = ∫_T_2 v_T_2w + ∑_T_1∈ℳ_1,T_2ω_T_2T_1∫_T_1 (v_T_1·_T_2T_1) (w·_T_1) ∀w∈k(T_2). §.§.§ DDR complex The two-dimensional DDR complex of degree k reads [xscale=2,baseline=(Xgrad.base)] at (-1,0) DDR2d:; (Xgrad) at (0,0) k-1,kh; (Xrot) at (1.5,0) k,kh; (WL2) at (3,0) kh,; [->,>=latex] (Xgrad) – (Xrot) node[midway, above, font=]h; [->,>=latex] (Xrot) – (WL2) node[midway, above, font=]h; where the discrete global gradient h and curl h are such that, for all (q_h, v_h) ∈k-1,kh×k,kh, hq_h( (k-1T_2q_T_2,kT_2q_T_2)_T_2∈ℳ_2,h, ( q_T_1 )_T_1∈ℳ_1,h), ( hv_h )_| T_2v_T_2 for all T_2∈ℳ_2,h. §.§ The three-dimensional discrete de Rham complex §.§.§ Spaces Let us now consider a three-dimensional mesh ℳ_h, with ℳ_0,h, ℳ_1,h, ℳ_2,h, and ℳ_3,h denoting, respectively, the set of vertices T_0, edges T_1, faces T_2, and elements T_3. Given four vectors of integers ≥ -1 m(m_T_3)_T_3∈ℳ_3,h, n(n_T_2)_T_2∈ℳ_2,h, p(p_T_3)_T_3∈ℳ_3,h, and s(s_T_2)_T_2∈ℳ_3,h, we define the following discrete counterparts of H^1(Ω), Ω, Ω, and L^2(Ω): m,n,kh{[t] q_w,h =( (q_T_3)_T_3∈ℳ_3,h,(q_T_2)_T_2∈ℳ_2,h,(q_T_1)_T_1∈ℳ_1,h, (q_T_0)_T_0∈ℳ_0,h) q_T_3∈m_T_3(T_3)for all T_3 ∈ℳ_3,h, q_T_2∈n_T_2(T_2) for all T_2 ∈ℳ_2,h, q_T_1∈k-1(T_1) for all T_1∈ℳ_1,h, and q_T_0∈ for all T_0∈ℳ_0,h}, p,s,kh{[t] v_w,h =( (v_R,T_3,v_R,T_3^)_T_3∈ℳ_3,h, (v_R,T_2,v_R,T_2^)_T_2∈ℳ_2,h, (v_T_1)_T_1∈ℳ_1,h) v_R,T_3∈k-1(T_3) and v_R,T_3^∈p_T_3(T_3) for all T_3∈ℳ_3,h, v_R,T_2∈k-1(T_2) and v_R,T_2^∈s_T_2(T_2) for all T_2∈ℳ_2,h, and v_T_1∈k(T_1) for all T_1∈ℳ_1,h}, h{[t] w_w,h =((w_G,T_3,w_G,T_3^)_T_3∈ℳ_3,h, (w_T_2)_T_2∈ℳ_2,h) w_G,T_3∈k-1(T_3) and w_G,T_3^∈k(T_3) for all T_3∈ℳ_3,h, and w_T_2∈k(T_2) for all T_2∈ℳ_2,T_3}, and khk(ℳ_3,h). When the values in m, n, p and s are all equal, where we drop the boldface notation. With a little abuse in notation, for the discrete gradient operator defined by (<ref>) below as well as for the tail space kh, we use the same symbols as for the DDR2d sequence: all ambiguity will be removed by the context. §.§.§ Discrete vector calculus operators The element gradient :k-1,k-1,kT_3→k(T_3), the element curl :k,k,kT_3→k(T_3), and the element divergence :T_3→k(T_3) are respectively defined such that, for all q_T_3∈k-1,k-1,kT_3, all v_T_3∈k,k,kT_3, and all w_T_3∈T_3, ∫_T_3q_T_3·v = -∫_T_3 q_T_3v + ∑_T_2∈ℳ_2,T_3ω_T_3T_2∫_T_2q_T_2 (v·_T_2) ∀v∈k(T_3), ∫_T_3v_T_3·z = ∫_T_3v_R,T_3·z + ∑_T_2∈ℳ_2,T_3ω_T_3T_2∫_T_2v_T_2·(z×_T_2) ∀z∈k(T_3), ∫_T_3w_T_3 q = -∫_T_3w_G,T_3· q + ∑_T_2∈ℳ_2,T_3ω_T_3T_2∫_T_2 w_T_2 q ∀ q∈k(T_3), where _T_2 is a unit normal vector to T_2 and ω_T_3T_2 is the orientation of T_2 relative to T_3 such that ω_T_3 T_2_T_2 points out of T_3. 
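The element operators just defined are discrete counterparts of the continuous gradient, curl and divergence, and the sequence assembled from them in the next subsection is designed to reproduce, at the discrete level, the identities curl grad = 0 and div curl = 0 that hold at the continuous level. A quick symbolic sanity check of these continuous identities (ours, with the standard component-wise conventions):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
q = sp.Function("q")(x, y, z)                               # arbitrary smooth scalar field
v = [sp.Function(n)(x, y, z) for n in ("v1", "v2", "v3")]   # arbitrary smooth vector field

grad_q = [sp.diff(q, s) for s in (x, y, z)]

def curl(w):
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]

print([sp.simplify(c) for c in curl(grad_q)])                               # [0, 0, 0]
print(sp.simplify(sum(sp.diff(c, s) for c, s in zip(curl(v), (x, y, z)))))  # 0
```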
§.§.§ DDR complex The global three-dimensional DDR complex of degree k is [xscale=2,baseline=(Xgrad.base)] at (-1,0) DDR3d:; (Xgrad) at (0,0) k-1,k-1,kh; (Xcurl) at (1.5,0) k,k,kh; (Xdiv) at (3,0) h; (WL2) at (4.5,0) kh,; [->,>=latex] (Xgrad) – (Xcurl) node[midway, above, font=]h; [->,>=latex] (Xcurl) – (Xdiv) node[midway, above, font=]h; [->,>=latex] (Xdiv) – (WL2) node[midway, above, font=]h; where the operators h, h and h are obtained projecting the element and face operators onto the component spaces: For all (q_h,v_h,w_h)∈k-1,k-1,kh×k,k,kh×h, hq_h[t] ( (k-1T_3q_T_3,kT_3q_T_3)_T_3∈ℳ_3,h, ( k-1T_2q_T_2,kT_2q_T_2 )_T_2∈ℳ_2,h, ( q_T_1 )_T_1∈ℳ_1,h), hv_h( (k-1T_3v_T_3,kT_3v_T_3)_T_3∈ℳ_3,h, ( v_T_2 )_T_2∈ℳ_2,h), ( hw_h )_| T_3w_T_3 for all T_3∈ℳ_3,h. §.§ Serendipity spaces We now introduce the two- and three-dimensional Serendipity Discrete de Rham (SDDR) complexes that will play the role of (W_i,∂_i)_i in the applications considered in Sections <ref> and <ref> below. For each T_d ∈ℳ_d,h, d ∈{2, 3}, we select η_T_d≥ 2 faces/edges that are not pairwise aligned and such that T_d lies entirely on one side of the plane/line spanned by each of those faces/edges and the regularity assumption detailed in <cit.> are satisfied. We then set ℓ_T_d k + 1 - η_T_d. These integers are collected in the vector ℓ_d ( ℓ_T_d )_T_d ∈ℳ_d,h. The serendipity version of the spaces in (<ref>) and (<ref>) are, respectively, 3kh ℓ_2,kh, h ℓ_2+1,kh, kh ℓ_3,ℓ_2,kh, h ℓ_3 +1,ℓ_2 +1,kh. In these spaces, the degree of certain polynomial components inside faces and elements for which η_T_d > 2 is lower than in the non-serendipity spaces defined in Sections <ref> and <ref>, the more so the larger η_T_d. §.§ Extension and reduction maps between the two-dimensional DDR and SDDR complexes Following <cit.>, for a polygon T_2 it is possible to define serendipity gradient and rotor operators T_2:kT_2→k(T_2) and T_2:T_2→k(T_2) that satisfy the following properties: T_2T_2 q = _T_2q ∀ q ∈k+1(T_2), T_2T_2v=v ∀v∈k(T_2), where T_2 and T_2 are the standard DDR interpolators on kT_2 and T_2, obtained collecting L^2-orthogonal projections on the component spaces. The role of the serendipity operators is to reconstruct polynomials fields inside T_2 from the polynomial components of the serendipity spaces. In order to define two-dimensional extension maps, we need an operator T_2:kT_2→k-1(T_2) that satisfies a formal integration by parts with the serendipity gradient: For all w∈k(T_2), ∫_FT_2q_T_2_T_2w = - ∫_T_2T_2q_T_2·w + ∑_T_i∈ℳ_1,T_2ω_T_2 T_1∫_T_1q_T_1 (w·_T_2T_1). The extension operators h:kh→k-1,kh and h:h→k,kh are defined by hq_h( (T_2q_T_2)_T_2 ∈ℳ_2,h, (q_T_1)_T_1 ∈ℳ_1,h, (q_T_0)_T_0 ∈ℳ_0,h) ∀q_h∈kh, hv_h( (v_R,T_2, kT_2T_2v_T_2)_T_2∈ℳ_2,h, (v_T_1)_T_1∈ℳ_1,h) ∀v_h∈h, while the reduction operators h:k-1,kh→kh and h:k,kh→h are such that hq_h( (ℓ_T_2T_2q_T_2)_T_2 ∈ℳ_2,h, (q_T_1)_T_1 ∈ℳ_1,h, (q_T_0)_T_0 ∈ℳ_0,h) ∀q_h∈k-1,kh, hv_h( (v_R,T_2, ℓ_T_2+1T_2v_R,T_2^)_T_2 ∈ℳ_2,h, (v_T_1)_T_1∈ℳ_1,T_2) ∀v_h∈k,kh. 
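The DOF reduction provided by the serendipity spaces can be quantified face by face. The helper below is our own sketch, not from the paper: it compares the face-attached components of the standard and serendipity gradient and rotor spaces for ℓ_F = k + 1 - η_F, assuming the dimension counts dim P^ℓ(F) = (ℓ+1)(ℓ+2)/2, dim R^ℓ(F) = dim P^{ℓ+1}(F) - 1 and dim R^{c,ℓ}(F) = dim P^{ℓ-1}(F), which follow from the direct decompositions recalled above.

```python
from math import comb

def dim_P(l, d=2):
    """Dimension of polynomials of total degree <= l on a d-dimensional cell (0 if l < 0)."""
    return comb(l + d, d) if l >= 0 else 0

def face_dofs(k, eta_F):
    """Face-attached DOFs, standard vs serendipity, for the 2D gradient and rotor spaces."""
    l_F = k + 1 - eta_F                          # serendipity degree attached to the face
    grad_std = dim_P(k - 1)                      # q_F in P^{k-1}(F)
    grad_ser = dim_P(l_F)                        # q_F in P^{l_F}(F)
    rot_std = (dim_P(k) - 1) + dim_P(k - 1)      # R^{k-1}(F) and R^{c,k}(F) components
    rot_ser = (dim_P(k) - 1) + dim_P(l_F)        # R^{k-1}(F) and R^{c,l_F+1}(F) components
    return {"grad": (grad_std, grad_ser), "rot": (rot_std, rot_ser)}

for k in (1, 2, 3):
    for eta in (3, 4, 6):                        # eta_F = number of selected edges of F
        print(f"k={k}, eta_F={eta}:", face_dofs(k, eta))
```

With η_F = 6, for example, the serendipity face component of the gradient space is empty whenever k ≤ 4, consistent with the remark that the saving grows with η_{T_d}.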
The complexes (W_i,∂_i)_i and (W_i,∂_i)_i along with the corresponding extension and reduction maps that will be used in the application of Section <ref> are summarized in the following diagram: [xscale=2.5, yscale=1.25, baseline=(WL2.base)] at (-1,1) DDR2d:; at (-1,-1) SDDR2d:; (Xgrad) at (0,1) k-1,kh; (Xrot) at (1.5,1) k,kh; (WL2) at (3,0) kh; [->,>=latex] (Xgrad) – (Xrot) node[midway, above, font=]h; [->,>=latex] (Xrot) – (WL2) node[midway, above, font=]h; (SXgrad) at (0,-1) kh; (SXrot) at (1.5,-1) h; [->,>=latex] (SXgrad) – (SXrot) node[midway, above, font=]h; [->,>=latex] (SXrot) – (WL2) node[midway, below, font=]h; [->,>=latex] (SXgrad) to [bend right=10] node[midway, right, font=] h (Xgrad) ; [->,>=latex,dashed] (Xgrad) to [bend right=10] node[midway, left, font=] h (SXgrad) ; [->,>=latex] (SXrot) to [bend right=10] node[midway, right, font=] h (Xrot) ; [->,>=latex,dashed] (Xrot) to [bend right=10] node[midway, left, font=] h (SXrot) ; where h and h are given by (<ref>). §.§ Extension and reduction maps between the three-dimensional DDR and SDDR complexes Now, taking a polyhedron T_3 and following again <cit.>, it is possible to define serendipity gradient and curl operators T_3:kT_3→k(T_3) and T_3:T_3→k(T_3) that satisfy the following properties: T_3T_3 q=_T_3q ∀ q ∈k+1(T_3), T_3T_3v=v ∀v∈k(T_3), where T_3 and T_3 are the standard DDR interpolators on kh and h obtained collecting L^2-orthogonal projection on the component spaces. We also define T:kT_3→k-1(T_3) such that, for all w∈k(T_3), ∫_T_3Tq_T_3w = - ∫_T_3T_3q_T·w + ∑_T_2∈ℳ_T_2∈T_3ω_T_3T_2∫_T_2q_T_2 (w·_T_2), T_3:k-1,k-1,kT_3→ℓ_T_3(T_3), such that, for all w∈ℓ_T_3+1(T_3), ∫_T_3T_3q_T_3w =-∫_T_3q_T_3·w + ∑_T_2∈ℳ_2,T_3ω_T_3 T_2∫_T_2T_2T_2q_T_2 (w·_T_2), and T_3:T_3→k-1(T_3) such that, for all w∈k(T_3), ∫_T_3T_3v_T_3·w = ∫_T_3v_T_3·w - ∑_T_2∈ℳ_2,T_3ω_T_3T_2∫_T_2T_2T_2v_T_2· (w×_T_2). where , , , and , are respectively defined by (<ref>), (<ref>), (<ref>), and (<ref>). The extension operators h:kh→k-1,k-1,kh and h:h→k,k,kh are such that, for all q_h∈kh and all v_h∈h, hq_h( (T_3q_T_3)_T_3∈ℳ_3,h, (T_2q_T_2)_T_2∈ℳ_2,h, (q_T_1)_T_1 ∈ℳ_1,h, (q_T_0)_T_0 ∈ℳ_0,h), hv_h( (v_R,T_3, kT_3T_3v_T_3)_T_3∈ℳ_3,h, (v_R,T_2, kT_2T_2v_T_2)_T_2∈ℳ_2,h, (v_T_1)_T_1∈ℳ_1,h), while the reduction operators are h:k-1,k-1,kh→kh and h:k,k,kh→h such that, for all q_h∈k-1,k-1,kh and all v_h∈k,k,kh, hq_h( (T_3q_T_3)_T_3∈ℳ_3,h, (ℓ_T_2T_2q_T_2)_T_2∈ℳ_2,h, (q_T_1)_T_1 ∈ℳ_1,h, (q_T_0)_T_0 ∈ℳ_0,h), hv_h( (T_3v_T_3, ℓ_T_3+1T_3v_R,T_3^)_T_3∈ℳ_3,h, (v_R,T_2,ℓ_T_2+1T_2v_R,T_2^)_T_2∈ℳ_2,h, (v_T_1)_T_1∈ℳ_1,h). 
The complexes (W_i,∂_i)_i and (W_i,∂_i)_i for the application of Section <ref> along with the corresponding extension and reduction maps are summarized in the following diagram: [xscale=2.5, yscale=1.25, baseline=(WL2.base)] at (-1,1) DDR3d:; at (-1,-1) SDDR3d:; (Xgrad) at (0,1) k-1,k-1,kh; (Xcurl) at (1.5,1) k,k,kh; (Xdiv) at (3,0) h; (WL2) at (4.5,0) kh; [->,>=latex] (Xgrad) – (Xcurl) node[midway, above, font=]h; [->,>=latex] (Xcurl) – (Xdiv) node[midway, above, font=]h; [->,>=latex] (Xdiv) – (WL2) node[midway, above, font=]h; (SXgrad) at (0,-1) kh; (SXcurl) at (1.5,-1) h; [->,>=latex] (SXgrad) – (SXcurl) node[midway, above, font=]h; [->,>=latex] (SXcurl) – (Xdiv) node[midway, below, font=]h; [->,>=latex] (SXgrad) to [bend right=10] node[midway, right, font=] h (Xgrad) ; [->,>=latex,dashed] (Xgrad) to [bend right=10] node[midway, left, font=] h (SXgrad) ; [->,>=latex] (SXcurl) to [bend right=10] node[midway, right, font=] h (Xcurl) ; [->,>=latex,dashed] (Xcurl) to [bend right=10] node[midway, left, font=] h (SXcurl) ; where h and h are given by (<ref>). §.§ Cohomology of the serendipity DDR complexes We recall the following result from <cit.> (see, in particular, Lemmas 22 and 26 therein). The two- and three-dimensional DDR and SDDR complexes, together with their extension and reduction operators, satisfy Assumption <ref>. In particular, this implies that both the cohomologies of the SDDR and DDR complexes are isomorphic to the cohomology of the corresponding continuous de Rham complex. § A SERENDIPITY ROT-ROT COMPLEX We now turn to the first application of the general construction considering the following smoother variant of the two-dimensional de Rham complex: [xscale=2,baseline=(H1head.base)] (H1head) at (0,0) H^1(Ω); (Hrotrot) at (1.5,0) Ω; (H1tail) at (3,0) H^1(Ω),; [->,>=latex] (H1head) – (Hrotrot) node[midway, above, font=]; [->,>=latex] (Hrotrot) – (H1tail) node[midway, above, font=]; where Ω⊂ℝ^2 is a polygonal domain and, for a smooth enough vector-valued field v, vv^⊥. 
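As in the de Rham case, the key structural property behind this sequence is that the composition of two consecutive operators vanishes, here the two-dimensional identity rot grad = 0. A short symbolic check (ours; the convention rot v = ∂_x v_2 - ∂_y v_1 is assumed, the sign convention being immaterial for the conclusion):

```python
import sympy as sp

x, y = sp.symbols("x y")
q = sp.Function("q")(x, y)                 # arbitrary smooth scalar field

grad_q = (sp.diff(q, x), sp.diff(q, y))
rot_grad_q = sp.diff(grad_q[1], x) - sp.diff(grad_q[0], y)
print(sp.simplify(rot_grad_q))             # 0: the rot-rot sequence is indeed a complex
```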
Diagram (<ref>) specialized to the present case becomes [xscale=2, yscale=2.5] at (1,0,0) Srot-rot:; at (1,0,-2) rot-rot:; at (1,2,0) SDDR2d:; at (1,2,-2) DDR2d:; (SVser) at (2,0,0) h; (SSigmaser) at (4,0,0) h; (SW) at (6,0,-1) h; (SV) at (2,0,-2) h; (SSigma) at (4,0,-2) h; (SXGrad) at (2,2,0) kh; (SXRot) at (4,2,0) h; (Polyk) at (6,2,-1) kh; (XGrad) at (2,2,-2) k-1,kh; (XRot) at (4,2,-2) k,kh; [->,>=latex] (SVser) – (SSigmaser) node[midway, below,font=] h; [->,>=latex] (SSigmaser) – (SW) node[midway, below,font=] h; [->,>=latex] (SV) – (SSigma) node[midway, above,font=] h; [->,>=latex] (SSigma) – (SW) node[midway, above,font=] h; [->,>=latex] (SXGrad) – (SXRot) node[midway, below,font=] h; [->,>=latex] (SXRot) – (Polyk) node[midway, below,font=] h; [->,>=latex] (XGrad) – (XRot) node[midway, above,font=] h; [->,>=latex] (XRot) – (Polyk) node[midway, above,font=] h; [->,>=latex] (SVser) to [bend right=20] node[midway, right,font=] h (SV) ; [->,dashed] (SV) to [bend right=20] node[midway, left,font=] h (SVser); [->,>=latex] (SSigmaser) to [bend right=20] node[midway, right,font=] h (SSigma) ; [->,dashed] (SSigma) to [bend right=20] node[midway, left, font=] h (SSigmaser); [->,>=latex,dashed] (XGrad) to [bend right=20] node[midway, left, font=] h (SXGrad); [->] (SXGrad) to [bend right=20] node[midway, right, font=] h (XGrad); [->,>=latex, dashed] (XRot) to [bend right=20] node[midway, left, font=] h (SXRot); [->] (SXRot) to [bend right=20] node[midway, right, font=] h (XRot); [<->] (XGrad) – (SV) node[pos=0.8, right,font=] Id; [->] (SXRot) to [bend left=20] node[pos=0.4, right,font=] (SSigmaser); [->] (XRot) to [bend left=20] node[pos=0.6, right, font=] (SSigma); [->] (Polyk) to [bend left=20] node[pos=0.6, right, font=] h (SW); [->,dashed] (SW) to [bend left=20] node[pos=0.4, left, font=] h (Polyk); [->,dashed] (SSigma) to [bend left=20] node[pos=0.4, left,font=] (XRot); [->,dashed] (SSigmaser) to [bend left=20] node[pos=0.6, left,font=] (SXRot); [<->] (SXGrad) – (SVser) node[pos=0.4, right,font=] Id; The top horizontal portion of the above diagram corresponds to (<ref>). In the rest of this section we will provide a precise definition of the other spaces and operators that appear in it and, using the abstract framework of Section <ref>, show that all the complexes involved have isomorphic cohomologies. §.§ Discrete rot-rot complex A discrete counterpart of the complex (<ref>) was developed in <cit.>. We briefly recall its construction here. We define the discrete head H^1(Ω), Ω, and tail H^1(Ω) spaces as follows: hk-1,kh, hk,kh×(_T_1∈ℳ_1,hk-1(T_1)×^ℳ_0,h), hk,kh. The discrete gradient and rotor are respectively such that, for all q_h∈h and all v_h=(v_w,h, v_,h)∈h, h q_h ( hq_h,0), hv_h ( hv_w,h, v_,h). The discrete counterpart of (<ref>) is then given by: [xscale=2,baseline=(H1head.base)] at (-1,0) rot-rot:; (H1head) at (0,0) h; (Xrotrot) at (1.5,0) h; (H1tail) at (3,0) h.; [->,>=latex] (H1head) – (Xrotrot) node[midway, above, font=]h; [->,>=latex] (Xrotrot) – (H1tail) node[midway, above, font=]h; §.§ Extension and reduction maps between the two-dimensional DDR and rot-rot complexes In order to apply the construction of Definition <ref> to define and characterize a serendipity version of this complex, we need extension and reduction maps between the two-dimensional DDR complex (<ref>) and the discrete rot-rot complex (<ref>). 
Noticing that h=kh×(_T_1∈ℳ_1,hk-1(T_1)×^ℳ_0,h), the spaces k,kh and kh inject respectively into h and h trough the extension map such that, for all v_w,h∈k,kh and all q_h∈kh, v_w,h( v_w,h, 0) and hq_h ( q_h,0). We also define the reduction map such that, for all v_h=(v_w,h,v_,h)∈h and all q_h=(q_h,q_,h)∈h, v_h v_w,h and hq_h q_h. The decomposition of Lemma <ref> clearly holds by definition, so we have h=k,kh⊕ and h=hkh⊕h. The maps defined by (<ref>) and (<ref>) satisfy Assumption <ref>, i.e., * For all v_w,h∈k,kh and all q_h∈kh, v_w,h = v_w,h and hhq_h = q_h. * For all v_h=(v_w,h,v_,h)∈h, v_h-v_h∈(h). * For all q_h∈h, all v_h∈h, and all v_w,h∈k,kh, it holds hq_h = h q_h, h q_h = hq_h, hh v_h = h v_h, h h v_w,h = hv_w,h. It then follows from Remark <ref> that the two-dimensional DDR complex (<ref>) and the rot-rot complex (<ref>) have isomorphic cohomologies. (i) Proof of (<ref>). For all v_w,h∈k,kh, v_w,h(<ref>)=(v_w,h,0)(<ref>)=v_w,h and, for all q_h∈kh, hhq_h(<ref>)=h(q_h,0)(<ref>)=q_h. (ii) Proof of (<ref>). Let v_h∈h. Using the definition (<ref>) of h, we obtain that v_h=(v_w,h,0), so v_h-v_h=0=h0. (iii) Proof of (<ref>). For all q_h∈h, we have hq_h (<ref>)= (hq_h,0) (<ref>)= hq_h and hq_h (<ref>)= (hq_h,0) (<ref>)= hq_h. (vi) Proof of (<ref>). For all v_h=(v_w,h,v_,h)∈h, hh(v_w,h,v_,h) (<ref>)= h(hv_w,h,v_,h) (<ref>)= hv_w,h(<ref>)= hv_h and, for all v_w,h∈k,kh, hhv_w,h(<ref>)= (hv_w,h,0) (<ref>)= h(v_w,h,0) (<ref>)= hv_w,h. §.§ Serendipity rot-rot complex and homological properties Lemma <ref> and Theorem <ref> ensure that the SDDR and rot-rot complexes satisfy Assumptions <ref> and <ref>. We are now in a position to apply the construction (<ref>) to the rot-rot complex in order to derive its serendipity version and characterize its cohomology. §.§.§ Serendipity spaces and operators Recalling (<ref>), the serendipity version of spaces h and h can be written as follows: hkh hh×≅h×( _T_1∈ℳ_1,hk-1(T_1)×^ℳ_0,h). Accounting for the isomorphism in (<ref>), we write a generic element v_h of h as v_h=(v_w,h,v_,h) with v_w,h∈h and v_,h such that (0,v_,h)∈. We define the extension of h into h according to (<ref>): v_w,h( v_w,h, 0). The reduction is given by (<ref>): (v_w,h,v_,h) v_w,h. The reduction operators h:h→h and h:h→h are defined using (<ref>) and accounting for the isomorphism (<ref>): For all q_h∈h and all v_h∈h, hq_h hq_h and hv_h (hv_h,v_,h), with h and h respectively defined according to (<ref>) and (<ref>). Finally, using (<ref>), the extension operators h:h→h and h:h→h are such that, for all q_h∈h and all (v_w,h,v_,h)∈h, hq_h hq_h and hv_h hv_w,h + (0,v_,h), with h and h respectively defined according to (<ref>) and (<ref>). Using (<ref>), the serendipity discrete differential operators are such that, for all (q_h,v_h)∈h×h: hq_h (hq_h,0), hv_h (hv_w,h,h(0,v_,h))(<ref>), (<ref>)=(hv_w,h,v_,h). §.§.§ Serendipity rot-rot complex and isomorphism in cohomology The serendipity rot-rot complex is given by: [xscale=2, baseline=(H1head.base)] at (-1,0) Srot-rot:; (H1head) at (0,0) h; (Xrotrot) at (1.5,0) h; (H1tail) at (3,0) h.; [->,>=latex] (H1head) – (Xrotrot) node[midway, above, font=]h; [->,>=latex] (Xrotrot) – (H1tail) node[midway, above, font=]h; All the complexes in the diagram (<ref>) have cohomologies that are isomorphic to the cohomology of the continuous de Rham complex. Lemma <ref> and Theorem <ref> ensure that Assumptions <ref> and <ref> are satisfied. 
We can therefore invoke Corollary <ref> to infer that the cohomology of the Srot-rot complex (<ref>) is isomorphic to the cohomology of the rot-rot complex (<ref>), of the DDR2d complex (<ref>), and, therefore, of the continuous de Rham complex. §.§ Numerical examples In order to show the effect of serendipity DOF reduction, we consider the quad-rot problem of <cit.> and compare the results obtained using the original and serendipity spaces in terms of error versus dimension of the linear system (after elimination of Dirichlet DOFs). The errors are defined as the difference between the solution of the numerical scheme and the interpolate of the exact solution. Specifically, denoting respectively by (u_h, p_h) and (u_h, p_h) the numerical solutions obtained using standard and serendipity spaces, we set 3e_h u_h - hu ε_h p_h - h p, e_h u_h - hu ε_h p_h - h p, where h, h, h, and h respectively denote the interpolators on h, h, h, and h. The errors are measured by L^2-like operator norms defined in the spirit of <cit.> and, consistently with <cit.>, respectively denoted by V,h· for h and h and Σ,h· for h and h (we do not distinguish the notation for the norms on the standard and serendipity spaces, as they have formally the same expression and the exact meaning is made clear by the argument). On the latter spaces, we additionally consider the norm ,h·, an L^2-like norm of the discrete rot-rot operator defined as in <cit.>. The problem data, meshes, and polynomial degrees are exactly the same as in the above reference, so we do not repeat these details here, while the number of edges η_T_1 for each edge T_1 ∈ℳ_h,1 is chosen the same way as in <cit.>. The various error measures displayed in Figures <ref>–<ref> show that a given precision is invariably obtained with fewer DOFs using serendipity spaces, the more so the higher the degree. A comparison in terms of error versus meshsize h, not reported here for the sake of conciseness, shows that the serendipity and non-serendipity schemes yield essentially the same solution for a given mesh and polynomial degree, with visible differences only for the pressure errors V,hε_h, V,hε_h, Σ,hhε_h, and Σ,hhε_h for k=3. § A SERENDIPITY STOKES COMPLEX In this section we discuss a second application of the general construction considering the three-dimensional Stokes complex, another smoother variant of the three-dimensional de Rham complex. Let Ω⊂ℝ^3 be a polyhedral domain. The Stokes complex reads: [xscale=2, baseline=(H2.base)] (H2) at (0,0) H^2(Ω); (H1curl) at (1.5,0) H^1(;Ω); (H1) at (3,0) H^1(Ω); (L2) at (4.5,0) L^2(Ω).; [->,>=latex] (H2) – (H1curl) node[midway, above, font=]; [->,>=latex] (H1curl) – (H1) node[midway, above, font=]; [->,>=latex] (H1) – (L2) node[midway, above, font=]; Diagram (<ref>) specialized to the present case becomes ! 
[xscale=2,yscale=2.5] at (1,0,-2) Stokes:; (SGr) at (2,0,-2) h; (SCr) at (4,0,-2) h; (SDr) at (6,0,-1) h; at (1,0,0) SStokes:; (SSGr) at (2,0,0) h; (SSCr) at (4,0,0) h; at (1,2,0) SDDR3d:; (SXGrad) at (2,2,0) kh; (SXCurl) at (4,2,0) h; (Xdiv) at (6,2,-1) h; (Polyk) at (8,1,-1) kh; at (1,2,-2) DDR3d:; (XGrad) at (2,2,-2) k-1,k-1,kh; (XCurl) at (4,2,-2) k,k,kh; [->,>=latex] (SSGr) – (SSCr) node[midway, below,font=] h; [->,>=latex] (SSCr) – (SDr) node[midway, below,font=] h; [->,>=latex] (SGr) – (SCr) node[midway, above,font=] h; [->,>=latex] (SCr) – (SDr) node[midway, above,font=] h; [->,>=latex] (SDr) – (Polyk) node[midway, below,font=] h; [->,>=latex] (SXGrad) – (SXCurl) node[midway, below,font=] h; [->,>=latex] (SXCurl) – (Xdiv) node[midway, below,font=] h; [->,>=latex] (XGrad) – (XCurl) node[midway, above,font=] h; [->,>=latex] (XCurl) – (Xdiv) node[midway, above,font=] h; [->,>=latex] (Xdiv) – (Polyk) node[midway, above,font=] h; [->,>=latex] (SSGr) to [bend right=20] node[midway, right, font=] h (SGr) ; [->,dashed] (SGr) to [bend right=20] node[midway, left, font=] h (SSGr); [->,>=latex] (SSCr) to [bend right=20] node[midway, right, font=] E_V,,h (SCr) ; [->,dashed] (SCr) to [bend right=20] node[midway, left, font=] R_V,,h (SSCr); [->,>=latex] (SXGrad) to [bend right=20] node[midway, right, font=] h (XGrad); [->,dashed] (XGrad) to [bend right=20] node[midway, left, font=] h (SXGrad); [->,>=latex] (SXCurl) to [bend right=20] node[midway, right, font=] h (XCurl); [->,dashed] (XCurl) to [bend right=20] node[midway, left, font=] h (SXCurl); [->] (XCurl) to [bend left=20] node[pos=0.6, right,font=] (SCr); [->,dashed] (SCr) to [bend left=20] node[pos=0.4, left,font=] (XCurl); [->] (XGrad) to [bend left=20] node[pos=0.6, right,font=] (SGr); [->,dashed] (SGr) to [bend left=20] node[pos=0.4, left,font=] (XGrad); [->] (Xdiv) to [bend left=20] node[pos=0.6, right,font=] (SDr); [->,dashed] (SDr) to [bend left=20] node[pos=0.4, left,font=] (Xdiv); [->] (SXCurl) to [bend left=20] node[pos=0.4, right,font=] (SSCr); [->,dashed] (SSCr) to [bend left=20] node[pos=0.6, left,font=] (SXCurl); [->] (SXGrad) to [bend left=20] node[pos=0.4, right,font=] (SSGr); [->,dashed] (SSGr) to [bend left=20] node[pos=0.6, left,font=] (SXGrad); The top horizontal portion of this diagram corresponds to (<ref>). In the rest of this section we will provide precise definitions of the remaining spaces and operators involved in the construction. §.§ Discrete Stokes complex We will start by giving a brief overview of the construction of a discrete counterpart of the complex (<ref>) developed in <cit.>. §.§.§ Discrete spaces For each edge T_1 ∈ℳ_1,h, we will need the following space spanned by vector-valued polynomial functions that are normal to T_1: k{ p_1 n_1 + p_2 n_2 p_1, p_2 ∈k(T_1) }, where n_1 and n_2 are two arbitrary orthogonal unit vectors normal to T_1. The discrete counterparts of the spaces H^2(Ω), H^1(;Ω), H^1(Ω), and L^2(Ω) read: hk-1,kh×,h, hk,kh×,h, hh×,h where the additional components with respect to the standard three-dimensional DDR spaces are given by ,h _T_2∈ℳ_2,hk-1(T_2) ×_T_1∈ℳ_1,hk×^3ℳ_0,h, ,h [t] _T_2∈ℳ_2,h( k-1(T_2)×k(T_2)×k(T_2) ) ×_T_1∈ℳ_1,h( k+1(T_1;^3)×k)×( ^3ℳ_0,h)^2, ,h _T_2∈ℳ_2,h( k(T_2)×k(T_2) ) ×_T_1∈ℳ_1,h𝒫^k+3(T_1;^3), where, to write h, we have decomposed the space 𝒫^k+2(ℳ_1,h;^3) in <cit.> as _T_1∈ℳ_1,h(k(T_1)×k)×^3ℳ_0,h and 𝒫^m(T_1;^3) denotes the space of vector-valued functions over T_1 whose components are in m(T_1) and are continuous on T_1. 
§.§.§ Discrete gradient Let q_h=(q_w,h,q_,h)∈h with q_,h( (G_q,T_2)_T_2∈ℳ_2,h, (G_q,T_1)_T_1∈ℳ_1,h, (G_q,T_0)_T_0∈ℳ_0,h) ∈,h, where G_q,T_2, G_q,T_1, and G_q,T_0 have, respectively, the meaning of a normal gradient to the face T_2, a normal gradient to the edge T_1, and a full gradient at the vertex T_0. The DDR discrete gradient is completed to map from h to h by adding the following component: d_,,h^kq_,h( (G_q,T_2, kT_2RG^k_T_2q_,T_2, kT_2RG^k_T_2q_,T_2)_T_2∈ℳ_2,h, (G_q,T_1,v_T_1'×t_T_1)_T_1∈ℳ_1,h, (G_q,T_0,0)_T_0∈ℳ_0,h) ∈,h, where q_,T_2 is the restriction of q_,h to the elements neighbooring T_2, RG^k_T_2 is the rotor of the normal gradient defined by ∫_T_2RG^k_T_2q_T_2·w = - ∫_T_2 G_q,T_2w - ∑_T_1 ∈ℳ_1,T_2ω_T_2T_1∫_T_1 (G_q,T_1·_T_2 ) (w·_T_1) ∀w∈k(T_2), and v_T_1' is the derivative along the edge T_1 of the function v_T_1 such that π^k_𝒫,T_1v_T_1 = G_q,T_1 and for all T_0∈ℳ_0,T_1, v_T_1(x_T_0) = G_q,T_0. The discrete gradient h : h→h is then given by h q_h( hq_w,h,d_,,h^kq_,h). §.§.§ Discrete curl For v_h=(v_w,h,v_,h)∈h, the component v_,h is given by v_,h((v_T_2,R_v,G,T_2,R^c_v,G,T_2)_T_2∈ℳ_2,h, (R_v,T_1,v_n,T_1)_T_1∈ℳ_1,h, (v_T_0, R_v,T_0)_T_0∈ℳ_0,h) ∈,h, where v_T_2, (R_v,G,T_2,R^c_v,G,T_2), R_v,T_1, and (v_T_0R_v,T_0) have, respectively, the meaning of the normal flux accross the face T_2, the normal gradient of the tangential components to the face T_2, the tangential component of the curl plus the normal gradient of the tangential component to the edge T_1, and the value of the function and of its curl at the vertex T_0. The discrete curl in the DDR complex (<ref>) is completed by adding the following component in order to obtain a map from h to h: d_,,h^kv_,h( (kT_2C^k_T_2v_,T_2,R_v,G,T_2, kT_2C^k_T_2v_,T_2,R^c_v,G,T_2)_T_2∈ℳ_2,h, (C^k_T_1v_,T_1)_T_1∈ℳ_1,h) ∈,h, where v_,T_2 is the restriction of v_,h to the elements sharing T_2, v_,T_1 the restriction of v_,h to the elements sharing T_1, C^k_T_2 is the face curl defined in (<ref>), and C^k_T_1 is such that C^k_T_1v_,T_1 (x_T_0)= R_v,T_0 and k+1T_1C^k_T_1v_,T_1= R_v,T_1- v_n,T_1'  × _T_1, with v_n,T_1 such that kT_1v_n,T_1 = v_n,T_1 and for all T_0∈ℳ_0,T_1, v_n,T_1(x_T_0) = v_T_0. The discrete curl is then given by h v_h ( hv_w,h,d_,,h^kv_,h). §.§.§ Discrete divergence The discrete divergence is nothing but the original DDR divergence defined by (<ref>) but with domain h instead of h: For all w_h=(w_w,h,w_,h)∈h, h w_h hw_w,h. §.§.§ Discrete Stokes complex The discrete counterpart of the Stokes complex (<ref>) which appears at the bottom and back of diagram (<ref>) is given by: [xscale=2, baseline=(H2.base)] at (-1,0) Stokes:; (H2) at (0,0) h; (H1curl) at (1.5,0) h; (H1) at (3,0) h; (L2) at (4.5,0) kh.; [->,>=latex] (H2) – (H1curl) node[midway, above, font=]h; [->,>=latex] (H1curl) – (H1) node[midway, above, font=]h; [->,>=latex] (H1) – (L2) node[midway, above, font=]h; §.§ Extension and reduction maps between the three-dimensional DDR and Stokes complexes We next define extension and reduction operators between the three-dimensional DDR complex (<ref>) and the discrete Stokes complex (<ref>) that satisfy Assumption <ref>. The proof is similar to that of Theorem <ref> and is omitted for the sake of brevity. It follows once again from Remark <ref> that (<ref>) and (<ref>) have isomorphic cohomologies. The extension operators are such that: For all q_w,h∈k-1,k-1,kh, all v_w,h∈k,k,kh, and all w_w,h∈h, q_w,h( q_w,h,0), v_w,h( v_w,h,0), w_w,h( w_w,h,0). 
The reduction map is such that, for all q_h=(q_w,h,q_,h)∈h, all v_h=(v_w,h,v_,h)∈h, and all w_h=(w_w,h,w_,h)∈h, q_h q_w,h, v_h v_w,h, w_h w_w,h. For future reference, we note the following isomorphisms, which are a direct consequence of the above definitions: ≅,h and ≅,h. §.§ Serendipity Stokes complex and homological properties Applying the construction of Section <ref> to the Stokes complex and recalling the isomorphisms (<ref>), we obtain the following serendipity version of the spaces h and h: hkh×,h, hh×,h, where kh and h are the serendipity DDR spaces defined by (<ref>). We write generic elements q_h of h and v_h of h respectively as q_h=(q_w,h,q_,h) and v_h=(v_w,h,v_,h) with q_w,h∈kh, v_w,h∈h, and q_,h∈,h and v_,h∈,h. According to (<ref>), we define the extensions of the SDDR spaces into serendipity Stokes spaces as follows: For all q_w,h∈kh and all v_w,h∈h, q_w,h( q_w,h, 0) and v_w,h( v_w,h, 0). The reduction map between the SStokes and the SDDR complexes is given by (<ref>): For all (q_w,h, q_,h) ∈h and all (v_w,h,v_,h) ∈h, (q_w,h,q_,h) q_w,h and (v_w,h,v_,h) v_w,h. By (<ref>), the reduction map from the Stokes to the SStokes complexes are given by: For all q_h = (q_w,h, q_,h)∈h and all v_h = (v_w,h, v_,h) ∈h, R_V,,hq_h (hq_h, q_,h) and R_V,,hv_h ( hv_h, v_,h). The extension operators from the SStokes to the Stokes complexes are defined according to (<ref>): For all q_h∈h and all v_h∈h, E_V,,hq_h hq_w,h+(0,q_,h), E_V,,hv_h hv_w,h + (0,v_,h). Using (<ref>), the serendipity discrete differential operators are such that, for all (q_h,v_h)∈h×h, hq_h (hq_h,h (0,q_,h))(<ref>),(<ref>)=(hq_h,d_,,h^k q_,h), hv_h (hv_w,h,h(0,v_,h))(<ref>),(<ref>)=(hv_w,h,d_,,h^k v_,h). This completes the definition of the serendipity Stokes complex corresponding to the bottom front complex in diagram (<ref>). The following theorem can be proved using arguments similar to Theorem <ref>. The details are omitted for the sake of brevity. All the complexes in the diagram (<ref>) have cohomologies that are isomorphic to the cohomology of the continuous de Rham complex. § ACKNOWLEDGEMENTS Funded by the European Union (ERC Synergy, NEMESIS, project number 101115663). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
http://arxiv.org/abs/2407.12380v1
20240717075816
PCQ: Emotion Recognition in Speech via Progressive Channel Querying
[ "Xincheng Wang", "Liejun Wang", "Yinfeng Yu", "Xinxin Jiao" ]
eess.AS
[ "eess.AS", "cs.SD" ]
Xincheng Wang et al. ^1School of Computer Science and Technology, Xinjiang University, China wljxju,yuyinfeng@xju.edu.cn PCQ: Emotion Recognition in Speech via Progressive Channel Querying Xincheng Wang^1 Liejun Wang^1() Yinfeng Yu^1() Xinxin Jiao^1 July 22, 2024 =================================================================== § ABSTRACT In human-computer interaction (HCI), Speech Emotion Recognition (SER) is a key technology for understanding human intentions and emotions. Traditional SER methods struggle to effectively capture the long-term temporal correlations and dynamic variations in complex emotional expressions. To overcome these limitations, we introduce the PCQ method, a pioneering approach for SER via Progressive Channel Querying. This method can drill down layer by layer in the channel dimension through the channel query technique to achieve dynamic modeling of long-term contextual information of emotions. This multi-level analysis gives the PCQ method an edge in capturing the nuances of human emotions. Experimental results show that our model improves the weighted average (WA) accuracy by 3.98% and 3.45% and the unweighted average (UA) accuracy by 5.67% and 5.83% on the IEMOCAP and EMODB emotion recognition datasets, respectively, significantly exceeding the baseline levels. § INTRODUCTION Emotion recognition technology identifies and understands an individual's emotional state by analyzing multiple modalities <cit.>, including speech, facial expressions, body movements, and language. This technology is essential in some fields, such as face forgery detection <cit.> and health monitoring <cit.>. Especially in specific scenarios, like call centres and telemedicine, speech becomes the preferred or the only feasible modality for emotion recognition due to other modalities' unavailability or practical limitations, such as text and images. This modal dependence highlights speech emotion recognition's unique value and challenges in specific applications. Given this, this paper investigates unimodal speech emotion recognition. Much attention has been paid to the temporal nature of speech signals <cit.>, a crucial property for understanding and processing speech data. Our research focuses on discrete speech emotion recognition (DSER) <cit.>, i.e., in this framework, we assume that each sentence expresses a single emotion. However, the expression of human emotions is a dynamic process: emotions are not fully revealed instantly but gradually revealed over time. This dynamism means that although our task is to identify the dominant emotion in each sentence, we still need to pay attention to temporal variations in the speech signal to capture subtle clues about how the emotion develops over time. The temporal character of emotional expression exacerbates the difficulty of accurately modelling the dynamics of emotional information in speech. To address this challenge, current research approaches face several problems. Firstly, although the Transformer-based approach <cit.> enables global modelling, it may not be applicable when dealing with relatively small unimodal speech emotion recognition data. Therefore, current research in SER focuses mainly on capturing contextual emotional information using self-attention or cross-attention mechanisms. Hu et al. in <cit.> applied different cross-attention modules to a joint speech emotion recognition network and achieved good results in a speaker-independent setting. In <cit.>, Jiao et al.
proposed a Hierarchical Cooperative Attention method, which combines features extracted by HuBERT with spectrogram features to enhance the accuracy of speech emotion recognition. In addition, Xu et al. proposed a head fusion strategy in <cit.>, which solves the limitation that the multi-head attention mechanism can only focus on a single information point and makes the model able to concentrate on multiple important information points at the same time, which in turn optimizes the model performance. While these attention methods can achieve global attention to some extent, they inevitably require more parameters. Meanwhile, Convolutional Neural Networks (CNNs) are also prevalent in the research of speech emotion recognition. For example, Zhao et al. in <cit.> combined an LSTM network and a full convolutional network (FCN), the latter extracting features by concatenating multiple 2D convolutional layers. Aftab et al. in <cit.> designed a lightweight FCN to extract features in depth by increasing the number of convolutional layers, and Mekruksavanich et al. in <cit.> explored the possibility of applying a 1D convolutional method to Thai sentiment recognition. Although these multilayer convolutional network strategies are effective in feature extraction, they usually focus only on the output of the last layer of the CNN, which may ignore the fine-grained information in the shallow layers and limits the model's ability to capture long-term contextual information. To address the shortcomings in the existing methods mentioned above, our study proposes a novel approach to speech emotion recognition based on the progressive channel querying technique. By combining the dual perspectives of speech and spectrogram and introducing the channel query module, this method can model speech emotion signals dynamically in the channel dimension in a progressive manner. Our main contributions are as follows: * We designed a Multilayer Lightweight CNN (MLCNN) branch to extract feature outputs from different layers. Further, we introduce the overall framework of the PCQ network. This framework merges the MLCNN branch, using spectrogram as input, with the WavLM pre-trained model branch, using original speech as input. * We designed a Channel Semantic Query (CSQ) module for querying and integrating semantically similar sentiment features from neighbouring layers of an MLCNN in the channel dimension. This module exploits the temporal properties of speech signals to dynamically model speech emotion, thereby capturing the temporal variation of emotional information in the channel dimension. We achieve step-by-step acquisition of emotional information by integrating multiple CSQ modules in the PCQ framework. * Our method achieves good results on the IEMOCAP dataset and the EMODB dataset. § PROPOSED METHOD This section provides a comprehensive overview of our proposed approach, as shown in Fig. 1. Section 2.1 describes the structure of the sub-network MLCNN. Section 2.2 describes the WavLM pre-trained model and the processing of its output features. Section 2.3 introduces the CSQ module. Section 2.4 outlines the overall network structure of PCQ. §.§ MLCNN Branch As shown in Figure 1(B), the MLCNN we developed consists of four layers with {16, 32, 48, 64} channels per layer. The layer setting of the MLCNN is explained in detail, together with ablation experiments, in Section 5. As the network depth increases, the MLCNN will output deeper semantic features layer by layer.
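A minimal PyTorch-style skeleton of this branch is sketched below. It reflects our own reading of Figure 1(B), not code from the paper: four layers with 16/32/48/64 channels, each exposing its output so that deeper semantic features are available layer by layer. The entry convolutions, strides and input layout are assumptions, and the PDC block (described next in the text) is only passed in as a component.

```python
import torch
import torch.nn as nn

class MLCNN(nn.Module):
    """Sketch of the multilayer lightweight CNN branch: four layers with 16/32/48/64
    channels, returning the per-layer outputs x1..x4 used later for channel querying."""

    def __init__(self, in_channels=1, pdc_block=None):
        super().__init__()
        widths = [16, 32, 48, 64]
        layers, prev = [], in_channels
        for w in widths:
            # Channel change and downsampling are assumptions; the excerpt does not fully
            # specify them. pdc_block builds the PDC block described in the text.
            entry = nn.Conv2d(prev, w, kernel_size=3, stride=2, padding=1)
            pdc = pdc_block(w) if pdc_block is not None else nn.Identity()
            layers.append(nn.Sequential(entry, pdc))
            prev = w
        self.layers = nn.ModuleList(layers)

    def forward(self, spec):            # spec: (batch, 1, freq, time) spectrogram
        feats, x = [], spec
        for layer in self.layers:
            x = layer(x)
            feats.append(x)
        return feats                    # [x1, x2, x3, x4]

print([f.shape for f in MLCNN()(torch.randn(2, 1, 200, 300))])
```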
Specifically, the network contains four layers whose outputs are x_1,x_2,x_3,x_4, and the feature of the mth pair of adjacent layers is denoted as f_m. Thus, we define:f_1=[x_1,x_2 ], f_2=[x_2,x_3 ], f_3=[x_3,x_4 ], where the [· ,· ] symbols are used to denote the selected two neighboring layer features. In addition, each layer contains a Parameter-efficient Depth Convolution (PDC) block, which consists of two point convolutions, a depth convolution, and a channel attention block, as shown in Figure 1(C). The input feature is χ∈R^C× W× H, where C denotes the number of channels, W is the width, H is the height, and the cth channel of the input feature map is denoted as χ _c. For each feature map, the global average pooled feature Gap(χ) is obtained by computing the average of all its elements. Gap(χ ) is then fed into a fully connected layer Υ and processed by a sigmoid function δ to learn the importance of the sentiment information in each channel and generate the channel weights ω accordingly. Then, by multiplying the weights ω with the input feature χ, we obtain the output feature y. In Eq.(1), (χ _c)_i,j is the element with position (i,j) on the cth channel. 𝐺𝑎𝑝 (χ ) = 1/W× H ∑_i=1^W∑_j=1^H (χ _c)_i,j , ω = δ(Υ(Gap(χ))), y = χ⊗ω. The attentional mechanism therefore enables the model to understand and emphasize key emotional information in each channel more accurately. The PDC module adopts an architecture that sequentially connects dot convolution, deep convolution, channel attention mechanism, and dot convolution. This design not only significantly improves the performance of the model in capturing emotionally relevant information, but also effectively reduces noise interference. The PDC module maintains an equal number of input and output channels, denoted as C. Under equal conditions, traditional 3x3 convolutions require 9C^2 parameters, while the parameter count of the PDC module is (16/3)C^2 + 18C. From the analysis, it is found that the number of parameters of the PDC module will be lower than the traditional 3x3 convolution when C > 4.90. The relevant parameter count comparisons are detailed in Table 3 in Section 4. In addition, in Table 4 in Section 4, we also show that the PDC module improves in terms of accuracy compared to the traditional 3x3 convolution. §.§ WavLM Branch As shown in (A) of Fig. 1, the WavLM pre-trained model, serving as an encoder within a branching network that takes speech signals as inputs, effectively models long-term contextual sequences. Second, in the work <cit.>, Zhao et al. revealed the layer-to-layer variability in the pre-trained model: the part closer to the output layer tends to contain rich task-specific information, whereas the underlying layers closer to the input capture more generic features. These bottom layers capture a wide range of features, while the top layer focuses on high-level abstract features that are closely related to a specific task. In this study, we utilized the WavLM pre-trained model for the speech emotion recognition task. Compared to the bottom layer of the model, the top layer of WavLM is more focused on modeling speech emotion information. Therefore, we selected features from the final output layer of the WavLM model and enhanced the network's overall recognition of emotions through multiple average pooling operations. §.§ CSQ Module In Figure 2, we design the Channel Semantic Query (CSQ) Module. 
The CSQ module takes three inputs, with the first two being shallow-level features X_l ∈ℝ^C_l × H_l × W_l and deep-level features X_h∈ℝ^C_h× H_h× W_h. To aggregate the semantically similar information on these two different feature scales, we first employ convolution and bilinear interpolation to resize the deep-level speech features to match the size of the shallow-level speech features, C_h=C_l, H_h=H_l, W_h=W_l. Next, we evenly divide the channels of the two features into four groups (Group = 4) for positional encoding. We introduce a channel query token Q which queries and aggregates channel features with identical positional encodings, creating ϖ^i∈ℝ^(Group/2+ 1)× H_l× W_l, i=[1,2,3,4]. As shown in Eq. (4), in each ϖ^i, Q efficiently synthesizes similar emotional information of the same positional encoding, utilizing its pre-trained knowledge. In high-dimensional channels, deeper semantic information exists. To further extract the deeper semantic feature η ^i∈ℝ^(Group/2+ 1)× H_l× W_l, for four blocks from different channel dimensions, we use 3x3 dilation convolution with dilation rates of d_i = [7,5,2,1]. Based on the correlation between emotion and time, we concatenate η ^i to obtain ŷ∈ℝ^C_l× H_l× W_l. Next, a convolution operation highlights the emotional information in ŷ. ϖ^i = Concat(X_l^i, X_h^i, Q). η^i = Conv_3×3^d_i (ϖ^i). §.§ The Proposed Overall Framework (PCQ) In Figure 1(A), PCQ is composed of three main components: the MLCNN sub-branch, the WavLM pre-training branch, and the CSQ module. Firstly, spectrogram features are input into the MLCNN, as illustrated in Figure 1(B). The MLCNN produces three sets of outputs, namely, f_1, f_2, and f_3. These outputs are then sequentially used as inputs for the three independent CSQ modules. After the speech signal passes through the WavLM encoder, it first obtains the channel query token Q_1 with channel number 1. Then, Q_1 undergoes two adaptive pooling operations to obtain the other two channel query tokens, Q_2 and Q_3. These tokens are pre-trained features with global sentiment information. Next, we need to use this pre-training information to assist the PCQ network in achieving global sentiment perception. Therefore, Q_1, Q_2, and Q_3 are input as query tokens to the three CSQ modules sequentially. Next, the CSQ modules will sequentially generate three progressive features z_1, z_2, and z_3, which are different degrees of perception of global emotion information. Q_1 obtains its self-attention features through an attention mechanism weighted by x_4. z_j =CSQ(f_j , Q_j), j =1,2,3. Q_4 =x_4⊗ Q_1. To enhance feature fusion, the Gap (Global Average Pooling) operation is applied to Q_4, z_1, z_2, z_3 and x_4, as shown in Eq. (8). These fused features are then fed into a classifier comprising multiple linear layers for emotion prediction, as shown in Eq. (9). y_fusion = Concat(Gap(z_1), Gap(z_2), Gap(z_3), Gap(x_4), Gap(Q_1)) ŷ = Classifier(y_fusion) § EXPERIMENT §.§ Datasets IEMOCAP : This dataset is an English corpus. In total, it contains about 12 hours of audiovisual data, of which the audio data has been widely used in automatic emotion recognition research. We identify four main emotions: anger, sadness, happiness and neutrality. Considering the unbalanced distribution of emotion categories in the dataset, we merge "happy" and "excited" into "happy".
EMODB : This dataset is a German language speech library recorded by 10 participants (5 males and 5 females) covering seven different emotional expressions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The entire database contains about 535 audio clips, each ranging from 1 to 10 seconds long, providing a rich sound sample for emotion recognition studies. §.§ Experimental Setup In this study, we sampled the raw audio signal at 16 kHz and segmented it into 3-second segments, with underfilled segments using zero padding. The final prediction is based on the judgement of all the segments. Through a series of 40 ms Hamming windows, we generate spectrogram features. Each window was discrete Fourier transformed (DFT) as a frame to obtain an 800-point frequency domain signal, and the first 200 points were taken as input features. In this way, we obtained a spectrogram of 300x200 size corresponding to each audio clip. To evaluate the model performance, we use both Weighted Accuracy (WA) and Unweighted Accuracy (UA) metrics and use 10-fold cross-validation to ensure that the results are reliable. Our emotion classification task uses a cross-entropy loss function. Our system is implemented in PyTorch. The batch sizes are 16 and 32 for the IEMOCAP dataset and EMODB dataset, respectively. The early stop is 20 epochs. We use the AdamW optimiser, and the learning rate is 1e-5. All experiments were conducted on an NVIDIA 4090 24G GPU. § EXPERIMENTAL RESULTS AND ANALYSIS Fig. 3 shows the performance of MLCNN with the number of layers 2, 3 and 4 on the IEMOCAP dataset. The results show that both performance metrics improve significantly with the increasing number of layers; in particular, when the model reaches 4 layers, both WA and UA reach their highest values, 75.62% and 76.15%, respectively. This trend demonstrates that increasing network depth effectively enhances model accuracy. However, to ensure comparability with the baseline AlexNet network, we decided against further increasing the number of layers, opting instead for the same network depth to facilitate effective model comparison under similar conditions. Therefore, a 4-layer MLCNN network was used for all subsequent experiments. §.§ Results and Comparisons Table 1 shows the performance of our proposed PCQ network compared to the three-branch network used in the baseline study described in [22] on the IEMOCAP dataset. Our method achieves significant improvements of 3.98% and 3.45% in terms of WA and UA metrics, respectively. In addition, Table 1 also shows some current state-of-the-art speech emotion recognition models. Compared to these models, our PCQ network exhibits superior performance. On the EMODB dataset, as shown in Table 2, the PCQ network achieved high accuracy. Compared with the baseline network [20], PCQ improved 5.67% and 5.83% on WA and UA, respectively, further demonstrating the effectiveness and applicability of the methodology. In the upper part of Table 3, we compare in detail the total number of parameters of the benchmark network used on the IEMOCAP dataset with our newly designed PCQ network. From the data, the number of parameters of the PCQ network is reduced by 43.98% compared to the benchmark network. In the lower part of Table 3, we compare the branching networks designed based on spectrograms. Compared to the baseline AlexNet, our designed MLCNN substantially reduces the number of parameters, which fully demonstrates the lightweight nature of our proposed PCQ and MLCNN networks.
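To make the lightweight design concrete, the following hedged PyTorch sketch implements a PDC-style block along the lines of Section 2.1: point convolution, depthwise 3x3 convolution, channel attention as in Eqs. (1)-(3), point convolution. The internal channel width is not given in the excerpt, so the hidden ratio of 2 is our assumption, and the parameter count of this sketch will not reproduce the paper's (16/3)C^2 + 18C formula exactly; the crossover against a standard 3x3 convolution at C = 54/11 ≈ 4.9, however, follows directly from that formula. This block can be plugged into the MLCNN skeleton shown earlier through its pdc_block argument.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Eqs. (1)-(3): global average pooling, one fully connected layer, sigmoid, reweighting."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                                   # x: (B, C, H, W)
        w = torch.sigmoid(self.fc(x.mean(dim=(2, 3))))      # Gap -> FC -> sigmoid
        return x * w[:, :, None, None]

class PDCBlock(nn.Module):
    """Sketch of a parameter-efficient depthwise-convolution block:
    point conv -> depthwise 3x3 conv -> channel attention -> point conv."""
    def __init__(self, channels, hidden_ratio=2):           # hidden_ratio is an assumption
        super().__init__()
        hidden = hidden_ratio * channels
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden, bias=False),
            ChannelAttention(hidden),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        return self.block(x)

# Parameter-count crossover stated in the paper: (16/3)C^2 + 18C < 9C^2 once C > 54/11 ~ 4.9.
for C in (4, 5, 16, 64):
    print(C, 9 * C**2, round(16 / 3 * C**2 + 18 * C, 1))
```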
Table 4 details the results of the ablation experiments for each component of the PCQ network on the IEMOCAP dataset. Replacing the PDC module with the 3x3 Conv2d resulted in a reduction of WA and UA by 0.42% and 0.34%, respectively, despite an increase in the number of network parameters. This clearly demonstrates the advantages of the PDC module in terms of lightweight and efficiency. Without the CSQ module, the WA and UA of the network decreased by 0.97% and 1.42%, re-spectively. As shown in the third-to-last row of Table 4, without the CSQ module and the WavLM branch, the training accuracy of the network is significantly reduced. While the WA and UA when using Wav2Vec 2.0 as the pre-training encoder increase by 0.14% and 0.47% respectively compared to the baseline two-branch network, they are still 5.43% and 4.38% lower compared to our PCQ network. This further demon-strates that both our CSQ model and the WavLM pre-training model are indispensable key components in the PCQ backbone network. As shown in Fig. 4, the t-SNE visual-isation results on the IEMOCAP dataset show that the PCQ method with the integrated CSQ module has clearer classification boundaries than the network without the CSQ module. Furthermore, in Fig. 5, the normalised confusion matrix shows that the PCQ method significantly improves the recognition of "happy" and "neutral" emotions on the IEMOCAP dataset. § CONCLUSION In this study, we propose a new framework for speech emotion recognition named Progressive Channel Querying(PCQ). The method mainly queries and integrates similar sentiment features in the channel dimension through the CSQ (Channel Semantic Query) module. Applying the CSQ module at different layers in the PCQ framework enables a gradual enhancement of the understanding of the sentiment information, thus allowing the model to acquire sentiment features in a progressive manner. Experimental results on the IEMOCAP dataset and the EMODB dataset show that our method achieves significant improvements on the SER task compared to existing techniques. SER tasks are used in a particular scenario. For future research, we will work on multi-scene SER research, i.e., multimodal emotion recognition, where the input is speech, image, text, and other modalities. Acknowledgements. The following projects jointly supported this work: the Tianshan Excellence Program Project of Xinjiang Uygur Autonomous Region, China (2022TSYCLJ0036), the Central Government Guides Local Science and Technology Development Fund Projects (ZYYD2022C19), and the National Natural Science Foundation of China (62303259), the Graduate Student Research and Innovation Program in the Xinjiang Uygur Autonomous Region (XJ2024G089). splncs04
http://arxiv.org/abs/2407.13153v1
20240718044201
Preset-Voice Matching for Privacy Regulated Speech-to-Speech Translation Systems
[ "Daniel Platnick", "Bishoy Abdelnour", "Eamon Earl", "Rahul Kumar", "Zahra Rezaei", "Thomas Tsangaris", "Faraj Lagum" ]
cs.CL
[ "cs.CL", "cs.CR", "cs.LG", "cs.SD", "eess.AS" ]
Preset-Voice Matching for Privacy Regulated Speech-to-Speech Translation Systems Daniel Platnick11,2, Bishoy Abdelnour11, Eamon Earl11, Rahul Kumar1, Zahra Rezaei1, Thomas Tsangaris1, Faraj Lagum1 1Vosyn, Etobicoke, Canada 2Vector Institute, Toronto, Canada , 1These authors contributed equally and share co-first authorship. 2Accepted to the ACL PrivateNLP 2024 Workshop. July 22, 2024 ========================================================================================================================================================================================================================================================================================================================= § ABSTRACT In recent years, there has been increased demand for speech-to-speech translation (S2ST) systems in industry settings. Although successfully commercialized, cloning-based S2ST systems expose their distributors to liabilities when misused by individuals and can infringe on personality rights when exploited by media organizations. This work proposes a regulated S2ST framework called Preset-Voice Matching (PVM). PVM removes cross-lingual voice cloning in S2ST by first matching the input voice to a similar prior consenting speaker voice in the target-language. With this separation, PVM avoids cloning the input speaker, ensuring PVM systems comply with regulations and reduce risk of misuse. Our results demonstrate PVM can significantly improve S2ST system run-time in multi-speaker settings and the naturalness of S2ST synthesized speech. To our knowledge, PVM is the first explicitly regulated S2ST framework leveraging similarly-matched preset-voices for dynamic S2ST tasks. § INTRODUCTION Progress in deep learning and voice cloning technology has enhanced public access to robust AI-driven voice cloning systems. These systems can help solve complicated speech-to-speech translation (S2ST) tasks like automated video dubbing (auto-dubbing) by generating audio deepfakes <cit.>. Cloning systems are desirable for dynamic speech tasks because they can generate a clone from an input voice given an audio sample as short as a few seconds <cit.>. Currently, voice cloning technology is highly unregulated and can be harmful if misused or commercialized irresponsibly <cit.>. As voice cloning systems can clone an arbitrary voice and do not require permission, they raise several privacy concerns <cit.>. Risks related to voice cloning technology include lack of informed consent, biometric privacy, and the spread of misinformation through deepfakes <cit.>. Robust regulations are necessary to mitigate these risks, protect individual rights, and prevent misuse <cit.>. The risks of unregulated voice cloning technologies are compounded by a high demand for voice cloning-based products. Pressure to capitalize on a newly budding market of cloning-based products can lead businesses to emphasize speed over careful and tested development. Since voice cloning technology is so new, regulatory measures are required and in the process of being implemented, but not yet fully in place. Given these challenges, it is crucial to integrate privacy regulations into AI-powered voice cloning systems <cit.>. To address the need for regulated voice cloning technology, we propose Preset-Voice Matching (PVM), a regulated S2ST framework. 
PVM bakes regulatory precautions into the S2ST process by removing the explicit objective function of cloning an unknown input speaker’s voice, and instead cloning a similar preset-voice of a consenting speaker. PVM can be easily installed on top of existing cascaded S2ST pipelines, improving regulatory compliance. We find this process also decreases system run-time in multi-speaker auto-dubbing scenarios and improves speaker naturalness relative to state-of-the-art voice cloning systems when translating across our tested languages. The intention of this paper is to put forward a regulated PVM S2ST framework that is robust against legislative changes and future liability concerns. We demonstrate PVM is desirable for S2ST over current benchmark voice cloning frameworks due to its inherent safety, lower run-time in multi-speaker scenarios, and enhanced speaker naturalness. We show this by providing and testing a PVM algorithm which we call GEMO-Match. We hope this work inspires others to develop and tune the framework for different high-performance environments. Our main contributions are as follows: * We propose PVM, a novel privacy-regulated S2ST framework which leverages consented preset-voices to clone a preset-voice similar to the input voice. * We provide a gender-emotion based PVM algorithm, GEMO-Match, and use it to demonstrate PVM in multilingual settings. * We empirically analyze GEMO-Match in terms of robustness, multilingual capability, and run-time, on two speech emotion datasets and discuss the implications of our system. * We create and provide a Combined Gender-Dependent Dataset (CGDD), which combines various benchmark speech-emotion datasets for training future gender-dependent PVM algorithms. The rest of this paper is organized as follows. Background information is provided in Section <ref>. Our PVM framework and GEMO-Match algorithm are detailed in sections <ref> and <ref>. Relevant datasets are described in Section <ref>. Section <ref> explains our experimental setup as well as the techniques, algorithms, and parameters used in the study. Section <ref> includes experimental results and analysis. We discuss potential future work towards PVM and conclude the paper in sections <ref> and <ref>. We address PVM limitations in Section <ref>. § BACKGROUND INFORMATION Speech-to-speech translation (S2ST) is typically achieved by direct translation or cascaded approaches <cit.>. Direct translation approaches use speech and linguistic encoder/decoders <cit.> to directly translate speech signals from one language to another. Cascading architectures split S2ST into three sub-tasks, using separate but connected speech-to-text (STT), text-to-text (TTT), and text-to-speech (TTS) modules <cit.>. Cascading architectures have been the traditional method for S2ST. Two common approaches for synthesizing speech from text are concatenative and parametric TTS. Concatenative TTS combines pre-recorded clips from a database to form a final speech output <cit.>. Parametric TTS attempts to model and predict speech variations given text and a reference voice <cit.>. Parametric deep learning methods have shown ubiquitous success spanning various industries from computer vision to text synthesis <cit.>. As deep neural network (DNN) based TTS methods can lead to natural and expressive synthesized voices, they are desirable for many speech tasks <cit.>. Wavenet is a benchmark DNN-based TTS model <cit.>. 
Since its creation, there have been many advancements in sequence-to-sequence TTS models trained to produce human-like speech <cit.>. Wavenet performs speech synthesis by training on a set of human voices, conditioning on their unique speaker ID to generate natural-sounding utterances in the voice of a selected speaker <cit.>. Recently, there have been models which aim to extend this behavior by cloning voices unseen in training, resulting in zero-shot voice cloning <cit.>. Cross-lingual voice cloning is difficult due to complexities in discriminating between language-specific and speaker-specific features within a singular waveform, and mapping these features across different languages <cit.>. Additionally, training robust multilingual speech generation models requires vast amounts of processed language and speech data in multiple languages with a variety of utterances and speakers. The performance of these models depends on the data they are trained on <cit.>. Preset-voice TTS methods generate speech from stored options of preset or pre-recorded voices. Preset-voice methods are typically used in static or repetitive systems which do not require dynamic adaptive functionality. Examples include pre-programmed transit operator dispatch messages, medical alert systems in healthcare, and emergency flight announcements <cit.>. Due to the static nature of current preset-voice methods, they have not previously been used for dynamic S2ST tasks like auto-dubbing. Such dynamic tasks require modelling different speakers across languages based on incoming media data <cit.>. In addition to providing a regulated PVM framework, this work aims to extend the application of preset-voice TTS methods to more dynamic settings. § PRESET-VOICE MATCHING FRAMEWORK This section explains our privacy regulated Preset-Voice Matching (PVM) framework. PVM bakes privacy regulations into the S2ST process by cloning a similar and prior consenting preset-voice, instead of the voice originally input to the S2ST system. The PVM framework connects to cascading S2ST architectures, performing additional computations alongside the STT, TTT, and TTS modules. The PVM framework consists of 3 sub-modules. Module 1, the Similarity Feature Extraction module, extracts features from the inputted voice. It then uses the extracted features to match the input voice to the most similar preset-voice from the Preset-Voice Library. Module 2, the Preset-Voice Library, contains a collection of consented target-language preset-voices, partitioned by discrete feature codes depending on the PVM implementation. Module 3, the TTS Module, generates TTS in the target-language using the matched preset-voice from the Preset-Voice Library. We describe these 3 modules below in detail. §.§ Feature Extraction and Voice Matching The Similarity Feature Extraction module extracts meaningful features from the input voice. These features are used to determine the most similar consented preset-voice in the target-language from our preset-voice library. This module takes in speech signals as input and outputs similarity feature encodings (gender-emotion pair combinations in the case of GEMO-Match) to match a consented similar preset-voice. §.§ Target-language Preset-Voice Libraries Module 2, the Preset-Voice Library, contains a collection of preset-voices in desired target-languages. The Preset-Voice Library acts as a feature codebook, informing the mapping between feature encodings and target-language preset-voice samples. 
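As a concrete illustration of this codebook role, the sketch below indexes consented preset-voice samples by a discrete (language, gender, emotion) code. The file paths, speaker identifiers, and neutral-voice fallback rule are illustrative assumptions rather than details of the released system.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# A feature code as produced by the Similarity Feature Extraction module.
# The (language, gender, emotion) structure mirrors GEMO-Match; other PVM
# implementations may use different or finer-grained codes.
FeatureCode = Tuple[str, str, str]  # e.g. ("de", "female", "angry")

@dataclass(frozen=True)
class PresetVoice:
    speaker_id: str   # identifier of the consenting speaker
    wav_path: str     # path to the stored reference audio

class PresetVoiceLibrary:
    """Maps discrete feature codes to consented target-language preset-voices."""

    def __init__(self, entries: Dict[FeatureCode, PresetVoice]):
        self._entries = entries

    def lookup(self, language: str, gender: str, emotion: str) -> PresetVoice:
        # Illustrative fallback: if the requested emotion is not stocked,
        # return a neutral voice of the same language and gender.
        key = (language, gender, emotion)
        if key not in self._entries:
            key = (language, gender, "neutral")
        return self._entries[key]

# Hypothetical library covering the two target-languages used in this paper.
library = PresetVoiceLibrary({
    ("fr", "female", "angry"):   PresetVoice("cafe_f01", "presets/fr/f_angry.wav"),
    ("fr", "female", "neutral"): PresetVoice("cafe_f01", "presets/fr/f_neutral.wav"),
    ("de", "male", "sad"):       PresetVoice("emodb_m03", "presets/de/m_sad.wav"),
    ("de", "male", "neutral"):   PresetVoice("emodb_m03", "presets/de/m_neutral.wav"),
})
matched = library.lookup("de", "male", "sad")
```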
This module takes in a feature code as input, and outputs a matched consenting speaker preset-voice sample. §.§ Text-to-Speech with Matched Preset-Voice As input, the TTS Module takes in the matched consented preset-voice and target-language text (from an auxiliary TTT module). The TTS Module outputs a clone of the most similar preset-voice in a desired language relative to the features extracted in the Similarity Feature Extraction module. Any voice cloning TTS model supporting the desired target-languages can be used in the TTS Module. Therefore, PVM is a general framework and is easily modifiable for many industry settings. § GEMO-MATCH ALGORITHM In this section we describe GEMO-Match, an example PVM framework implementation. Following a similar notion to <cit.>, GEMO-Match employs a hierarchical gender-dependent emotion classifier architecture trained with a gender-dependent training method. The process of splitting gender and emotion in emotion classification simplifies the emotion classification problem. As GEMO-Match is a PVM framework, it contains the 3 PVM modules: the Similarity Feature Extraction module, the Preset-Voice Library, and the TTS Module. These modules and their process are described below. §.§ GEMO-Match Modules The GEMO-Match Similarity Feature Extraction module contains 3 classifiers in two stages. The first stage contains the gender classifier, and the second stage includes both the male-emotion classifier, and the female-emotion classifier. The Similarity Feature Extraction classifiers are trained in the source language (English). In GEMO-Match, the Preset-Voice Library contains previously consenting speakers in desired target-languages for a given S2ST task. The Preset-Voice Library partitions target-language preset-voices by language, gender, and emotion. The number of target-languages supported by GEMO-Match depends on the ability to gather preset-voices in each desired target-language. The Preset-Voice Library in our provided implementation includes two target-languages, French and German. Therefore, the GEMO-Match implementation can translate from English to either French or German. The GEMO-Match TTS Module performs TTS. The TTS Module is straightforward and performs TTS given a matched preset-voice and a text prompt in the desired target-language. We implement GEMO-Match with two distinct TTS models, discussed in <ref> and <ref>. §.§ GEMO-Match Algorithm Flow First, source language speech is input to the Similarity Feature Extraction module. The gender classifier then classifies the input voice as male or female. Next, given the gender classification result, the source speech is input to the corresponding gender-dependent emotion classifier. The appropriate gender-dependent emotion classifier will then classify the source language speech as happy, angry, sad, disgust, or neutral. The two-stage classifier output pair is then concatenated (i.e., Female - Sad). The resulting concatenation is used alongside the intended target-language to query the most similar preset-voice in the Preset-Voice Library. Finally, the feature-matched preset-voice is passed alongside a text prompt to the voice cloning TTS model. This algorithm assumes that the intended target-language will be an input to the system. The performance of GEMO-Match depends primarily on the robustness of the Similarity Feature Extraction classifiers. § DATASET DESCRIPTIONS In this section, we describe the datasets used to test our framework. 
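Before turning to the datasets, the sketch below condenses the GEMO-Match flow described above into code. The classifier and TTS interfaces are hypothetical placeholders rather than the released implementation, and the library object follows the Preset-Voice Library sketch given earlier.

```python
def gemo_match_s2st(source_audio, target_text, target_language,
                    gender_clf, emotion_clfs, library, tts_model):
    """Hierarchical matching: gender -> gender-dependent emotion ->
    (language, gender, emotion) library query -> monolingual TTS."""
    # emotion_clfs is assumed to be {"male": male_emotion_clf, "female": female_emotion_clf}.
    gender = gender_clf.predict(source_audio)              # "male" or "female"
    emotion = emotion_clfs[gender].predict(source_audio)   # e.g. "sad"
    preset = library.lookup(target_language, gender, emotion)
    # The TTS model clones the *consented* preset-voice, never the input voice.
    return tts_model.synthesize(text=target_text,
                                speaker_wav=preset.wav_path,
                                language=target_language)
```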
We experimented with two speech-emotion datasets: the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) <cit.>, and the Combined Gender-Dependent Dataset (CGDD), which we curated by combining four benchmark speech datasets. To ensure compatibility with our gender-emotion based GEMO-Match algorithm, we split the RAVDESS dataset by gender and relabeled it with gender-emotion pairs. Further details on RAVDESS and CGDD are outlined in <ref> and <ref>.

§.§ RAVDESS Dataset

RAVDESS is a benchmark emotional speech dataset containing 1440 audio files of 24 professional actors (12 female and 12 male) with the emotions calm, happy, sad, angry, fearful, surprise, and disgust <cit.>. As GEMO-Match requires consistent labeling across source and target-language data, we focus on a subset of five common emotions: happy, angry, sad, disgust, and calm (neutral). Each speech sample was originally provided in two intensities, normal and strong. We filtered the speech files to include only strong intensities, as the emotion is more apparent in those samples. After filtering, the RAVDESS subset contains a total of 5 speech recordings per actor per emotion.

§.§ Combined Gender-Dependent Dataset

Training a robust gender-emotion classifier requires numerous samples of speakers from various demographics, speaking a variety of utterances with different emotional intensities. We found that many available speech-emotion datasets have limited variance with regard to at least one of these features. To help facilitate gender-dependent training research, we provide the Combined Gender-Dependent Dataset (CGDD), made by combining four benchmark emotional speech datasets: RAVDESS, CREMA-D, SAVEE, and TESS <cit.>. The RAVDESS dataset is explained in section <ref>. CREMA-D comprises 7,442 audio recordings from 91 actors (48 male and 43 female), with ages ranging from 20 to 74. The SAVEE database includes four male English speakers aged between 27 and 31, totaling 480 files. The TESS database contains two female speakers, one aged 26 and the other aged 64, with a total of 2800 files.

The CGDD dataset is processed for gender-dependent training, useful for hierarchical emotion detection algorithms like GEMO-Match. We further processed the audio based on pitch frequency and loudness to obtain a higher-quality dataset. As pitch and loudness are crucial attributes of speech, we filtered the data to ensure the files fall within a range suitable for speech recognition <cit.>. Additionally, we used RMS loudness to eliminate excessively quiet or loud files. The best quality was found with a pitch frequency range of 75 Hz to 3000 Hz. We removed audio samples with RMS loudness below -23 dBFS or above -20 dBFS.

§.§ Data Pre-processing

We processed the RAVDESS and CGDD datasets to be compatible with the hierarchical gender-dependent emotion classification architecture of the GEMO-Match Similarity Feature Extraction module. For both datasets, we partitioned the speech signal files by gender and further organized them into five gender-emotion directories. We then converted the speech signals to mel-spectrograms using the Fast Fourier Transform. Next, the mel-spectrograms were converted to an image representation (PNG format) to be processed by a pre-trained ResNet50 model initialized with ImageNet weights <cit.>. Our data pre-processing methodology is similar to the procedures outlined in <cit.>. The Python library Librosa was used to convert the speech signal files to mel-spectrogram signals.
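A minimal sketch of this curation and pre-processing step is given below. The sampling rate, FFT size, hop length, and mel resolution are illustrative assumptions; only the 75 Hz to 3000 Hz pitch band and the -23/-20 dBFS RMS gates are taken from the description above.

```python
import numpy as np
import librosa
import matplotlib.pyplot as plt

def rms_dbfs(y: np.ndarray) -> float:
    """RMS level relative to full scale, assuming float audio in [-1, 1]."""
    rms = np.sqrt(np.mean(np.square(y))) + 1e-12
    return 20.0 * np.log10(rms)

def keep_clip(y: np.ndarray, sr: int) -> bool:
    """Loudness and pitch gates used when curating CGDD (thresholds from the text)."""
    level = rms_dbfs(y)
    if level < -23.0 or level > -20.0:
        return False
    # Require voiced frames with fundamental frequency inside the 75-3000 Hz band.
    _, voiced_flag, _ = librosa.pyin(y, fmin=75.0, fmax=3000.0, sr=sr)
    return bool(np.any(voiced_flag))

def audio_to_mel_png(in_wav: str, out_png: str, sr: int = 16000) -> None:
    """Convert a speech file to a mel-spectrogram image for the ResNet50 input."""
    y, sr = librosa.load(in_wav, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    plt.imsave(out_png, mel_db, origin="lower", cmap="magma")
```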
§ EXPERIMENTAL SETUP This section details the setup of each experiment, which show additional strengths of the PVM framework, beyond its inherent regulatory benefits. We demonstrate the effectiveness of PVM for S2ST with GEMO-Match in terms of robustness, multilingual capability, and run-time. Our experiments were run on a single Tesla T4 GPU with 40 cores. We discuss each experiment in detail below. §.§ GEMO-Match Robustness For this test, we assess the robustness of GEMO-Match. The performance of GEMO-Match depends on the three Similarity Feature Extraction classifiers. We fine-tuned and evaluated these classifiers on the RAVDESS and our CGDD dataset in terms of accuracy and precision. Each classifier was implemented as a ResNet50 previously pre-trained on ImageNet. The results of the six classifiers are shown in tables <ref> and <ref>. The same approach was used to train each ResNet50. The gender classifiers were trained for 20 epochs, while the male-emotion and female-emotion classifiers required 30 epochs to converge. Each emotion classifier was trained using a dynamic learning rate schedule: 0.01 for the first 20 epochs, reduced to 0.001 for the remaining 10. We used the Adam optimizer, and the Pytorch ImageDataGenerator function for data augmentation <cit.>. The classifiers were trained using a batch size of 32 and a train-test-validation split of 60-20-20. The models were optimized using categorical cross entropy as the loss function, incorporating batch normalization and dropout layers for regularization. The activation functions used were ReLU for internal layers and softmax for the output layer. §.§ GEMO-Match Multilingualism We test GEMO-Match in terms of speaker naturalness on the task of translating English speech into French and German speech. GEMO-Match is implemented within a cascaded S2ST system using SeamlessM4T for TTT, and XTTS as the TTS module <cit.>. XTTS is a state-of-the-art TTS model which supports zero-shot voice cloning across 17 languages. Instead of performing STT, we provide ground truth source-language (English) text directly to the TTT model (SeamlessM4T) to measure the isolated performance of GEMO-Match across multiple languages. We measured speaker naturalness using the standard metric Non-intrusive Objective Speech Quality Assessment (NISQA) <cit.>. We show PVM algorithms lead to higher naturalness in S2ST outputted speech by alleviating the need to perform cross-lingual voice cloning. We compare two cases of S2ST. The first case is when XTTS performs cross-lingual cloning from an English voice input to the target-languages German and French. In the second case, GEMO-Match performs the cross-lingual matching, allowing XTTS to run monolingual TTS given the matched target-language voice as input. The French and German preset-voices used in this experiment are sourced from the CAFE, and EmoDB datasets respectively <cit.>. For each target-language in both experimental pipelines, we used 150 English text transcriptions from the CREMA-D dataset alongside emotive English audios from RAVDESS as input <cit.>. We ensured that our RAVDESS audios had an average NISQA (3.54) similar to the preset-voices in our target-languages. For additional context, we included the average preset-voice NISQA scores for both target-languages in Table <ref>. §.§ GEMO-Match Run-time We compared the run-time of GEMO-Match to state-of-the-art TTS models VALL-E X, XTTS, SeamlessM4T, and OpenVoice, as shown in Figure <ref> <cit.>. 
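The comparison itself reduces to a simple wall-clock harness; a minimal sketch follows, where each entry in the systems dictionary is a hypothetical callable wrapping one of the evaluated pipelines. The exact models, inputs, and protocol are described next.

```python
import time
import statistics
from typing import Callable, Dict, List, Tuple

def average_runtime(synthesize: Callable[[str, str], object],
                    samples: List[Tuple[str, str]]) -> float:
    """Average wall-clock seconds per (audio_path, transcript) input."""
    timings = []
    for audio_path, transcript in samples:
        start = time.perf_counter()
        synthesize(audio_path, transcript)
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)

def compare_systems(systems: Dict[str, Callable[[str, str], object]],
                    samples: List[Tuple[str, str]]) -> Dict[str, float]:
    # e.g. systems = {"GEMO-Match + StyleTTS2": ..., "OpenVoice": ..., "XTTS": ...}
    return {name: average_runtime(fn, samples) for name, fn in systems.items()}
```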
The gender, male-emotion, and female-emotion classifiers were implemented using the same lightweight ResNet50 models as in <ref>. Each model was given 10 identical utterances with their respective transcriptions, and average inference run-times were calculated. The inputs were each 15 seconds and varied in tone, emotion, pacing, and vocabulary. We compared PVM (using GEMO-Match) with OpenVoice as they are both cascaded TTS frameworks that decouple voice-cloning from voice synthesis. OpenVoice uses a variation of VITS for TTS in its open-source implementation <cit.>. For consistent comparisons with OpenVoice, we use StyleTTS2 for TTS with GEMO-Match <cit.>. StyleTTS2 and VITS are both styling-based models and display similar run-times. StyleTTS2 is a monolingual TTS model, and we use it to show the run-time benefits of PVM removing cross-lingual voice cloning in cascaded S2ST systems. Figure <ref> compares GEMO-Match with the OpenVoice framework in terms of run-time scaling in multi-speaker scenarios. We plotted the number of times each system must re-run auxiliary modules while performing TTS over time in multi-speaker instances. The plots were generated using Python. § EXPERIMENTAL RESULTS AND ANALYSIS In this section, we discuss and analyze our experimental results. Section <ref> describes the results of the GEMO-Match robustness experiment, contained in tables <ref> and <ref>. Next, section <ref> provides an analysis on the results in Table <ref>. Section <ref> then highlights our run-time experiment results. §.§ GEMO-Match Robustness Results Tables <ref> and <ref> show the precision and accuracy of the Similarity Feature Extraction module classifiers. Testing GEMO-Match on RAVDESS across emotions, the Male-Emotion Classifier performs best on happy, angry, and neutral, which have precision scores of 78%, 78%, and 80%, respectively. The Female-Emotion Classifier performs well on angry and neutral, achieving 100% and 90% precision, respectively. We find GEMO-Match overfits to certain gender-emotion classes when trained on RAVDESS. This is prevalent in the Female-Emotion Classifier performance, as it classifies angry emotions with perfect precision, but classifies sad and disgust with 40% precision. As illustrated in Table <ref>, GEMO-Match generalizes more consistently across emotions when trained on CGDD compared to RAVDESS. In the cases of both datasets shown in Table <ref>, GEMO-Match tends to classify angry and neutral effectively. The improvements in generalization described in Table <ref> when using CGDD instead of RAVDESS showcases that some benchmarks are currently lacking variation. CGDD can remedy this, as it has higher variance compared to RAVDESS, comprising of multiple benchmark datasets as described in section <ref>. Table <ref> shows the accuracy of GEMO-Match on RAVDESS and CGDD. The GEMO-Match gender classifier scored 94% accuracy on the RAVDESS dataset, and 97% on CGDD. The best GEMO-Match emotion classifier results are found when training and testing on CGDD, which results in 63% accuracy for the Male-Emotion Classifier and 71% for the Female-Emotion Classifier. Therefore, our proposed CGDD dataset can improve model generalization compared to benchmark datasets like RAVDESS. §.§ GEMO-Match Multilingual Results The results in Table <ref> show PVM implementations can significantly improve the output naturalness of S2ST systems by enabling monolingual TTS within S2ST. We find this trend holds across the two tested languages, French and German. 
When XTTS performs cross-lingual TTS from English to German, NISQA values decrease from 3.54 (English) to 3.41 (German). Similarly, when XTTS cross-lingually clones from English to French, the input-output NISQA values are 3.54 and 3.54, respectively. Overall, XTTS either maintained or degraded the input naturalness when performing cross-lingual cloning in our experiments. We find XTTS performs much better in a monolingual setting, which can significantly enhance S2ST quality. The average NISQA score when XTTS cloned from German preset-voices to German outputs increased from 3.47 to 3.69. The same increase is seen with French, though to a lesser degree. For our tested language pairs, GEMO-Match consistently improves output naturalness by allowing S2ST pipelines to clone in a monolingual context while maintaining cross-lingual behavior. §.§ GEMO-Match Run-time Results The run-time results of different TTS approaches are shown in Figure <ref>. VALL-E X and XTTS, deep multilingual voice cloning models, are slowest on average. SeamlessM4T offers multilingualism in multiple modalities, but does not clone voices, and has significantly lower runtime than the aforementioned models. This underscores additional complexities inherent to achieving speech translation and voice cloning in a single embedding space. In our experiments, the lowest run-times were achieved by our PVM implementation (GEMO-Match with StyleTTS2) and OpenVoice. Both of these frameworks are not strictly limited to a specific TTS module for processing. As such, the runtime of their auxiliary, decoupled systems are noted separately in Figure <ref>. OpenVoice uses the post-processing tone extractor described in <cit.>, and PVM uses GEMO-Match. For these isolated auxiliary modules, we achieved an average runtime of 0.52 for OpenVoice and 0.61 seconds for GEMO-Match. Figure <ref> compares these auxiliary modules under sequential inference on long multi-speaker inputs. For this comparison, we focus on the run-time of the entire S2ST system. Figure <ref> shows that GEMO-Match need only run when a new speaker is presented in the input, while OpenVoice must always post-process the TTS output to achieve the desired result. Therefore, PVM offers favourable scaling properties, making it desirable for many commercial use-cases. § FUTURE WORK PVM is a general framework for regulated S2ST that can be integrated into pre-existing cascaded S2ST pipelines. The performance of PVM is directly dependent on the quality of the individual swappable components of the pipeline. Consequently, the efficacy of any PVM implementation is expected to increase with general advancements in TTS technology. There are many ways to improve the PVM framework, and we propose some ideas for future work. For future work, we propose a cascaded voice cloning TTS system which uses an initial vocal encoder with learned weights to extract and compress relevant features from the input voice. The system would perform the classical cloning tasks on this encoded voice in a downstream, decoupled TTS model. This would allow voices to be stored in the Preset-Voice Library in their encoded formats rather than speech signals, likely decreasing run-time complexity. Using a cascaded learning process, the TTS module would learn to effectively clone and only synthesize voices encoded by the vocal encoder. During distribution of the system, the vocal encoder would not be published. 
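A minimal sketch of this proposed interface is given below. As this is future work, every class and method shown is hypothetical; the point is only that, once the encoder is withheld, synthesis is confined to pre-encoded, consented preset-voices.

```python
import numpy as np
from typing import Dict, Tuple

FeatureCode = Tuple[str, str, str]  # e.g. (language, gender, emotion)

class PrivateVoiceEncoder:
    """Trained offline and withheld at distribution time, so third parties
    cannot encode (and therefore cannot synthesize) arbitrary new voices."""
    def encode(self, wav_path: str) -> np.ndarray:
        raise NotImplementedError  # learned feature extractor / compressor

class EncodedPresetLibrary:
    """Preset-voices are shipped only as pre-computed encoder embeddings."""
    def __init__(self, embeddings: Dict[FeatureCode, np.ndarray]):
        self._embeddings = embeddings
    def lookup(self, code: FeatureCode) -> np.ndarray:
        return self._embeddings[code]

class EmbeddingConditionedTTS:
    """Trained to clone and synthesize only voices produced by the encoder."""
    def synthesize(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray:
        raise NotImplementedError
```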
In this way, the system could not be used to clone a voice outside of the pre-encoded preset-voices in the Preset-Voice Library. GEMO-Match uses classifiers which depend on labeled data. This dependency motivates the development of alternative PVM instances capable of voice-matching without relying on labeled data. We posit that learned encodings can be used, akin to self-supervised learning mechanisms employed by transformer architectures, to extract robust internal representations of speech inputs <cit.>. This would require a new training pipeline with an objective function for maximizing speaker similarity between the input voice and the matched voice. The resulting PVM system could use latent feature representations to perform voice matching, and training would not require labeled speech datasets. § CONCLUSION We proposed Preset-Voice Matching, a novel framework that bakes regulatory precautions into the S2ST process. PVM achieves this by removing the explicit objective of cloning an unknown input speaker’s voice, and instead cloning a similar preset-voice of a consenting speaker. This paradigm is extensible to a variety of industry settings to regulate the behavior of S2ST systems. Quantitative experiments show PVM is a desirable framework compared to the tested benchmarks in terms of run-time and naturalness of multilingual translation output. Additionally, we provided CGDD, a gender-dependent speech-emotion dataset. We then showed CGDD leads to better model generalization and robustness in terms of accuracy and precision compared to the benchmark RAVDESS dataset. We hope this work inspires others to create more privacy regulated S2ST systems using the PVM framework. § PVM LIMITATIONS In this section, we discuss the limitations of GEMO-Match and the PVM framework. GEMO-Match requires training 3 unique classifiers for every source-language supported by the system. Specifically, the three Feature Extraction Module classifiers need to be trained on language specific emotional speech datasets processed into 3 versions: the entire dataset labeled by gender, and two subsets containing the gender-dependent labeled data. Gathering and processing data as described for each desired source-language may be complicated depending on data availability. We acknowledge that the three features language, gender, and emotion alone are inadequate to fully capture the breadth of speaker variance across human speech. There are scenarios which demand more closely matched consented speakers in terms of vocal characteristics of the input speaker. GEMO-Match has strong limitations in this respect, which necessitates systems with more granularity in terms of speech feature extraction than what is offered by GEMO-Match. Additionally, PVM makes no attempt to mimic background ambience or environmental noise in the inputted audios, as it loses this information when matching to a preset-voice. This is a drawback of PVM, as maintaining background audio noise information is highly important in some settings. However, many modern S2ST systems denoise input audio to improve model performance, and add the noise back during post-processing. PVM would not be limited in such an environment, and can ensure high-quality voice inputs to the TTS module by always mapping to high-quality consenting speaker audios. Lastly, we consider the drawback of error propagation in the PVM framework, inherent to cascaded architectures with separate modules. 
Ultimately, using a set of separate modules introduces additional points of failure, causing inaccuracies which are passed to downstream tasks. § APPENDIX §.§ Industry Applications In this section, we include some examples of cases where PVM can be applied to industry settings. APIs are a common avenue for controlled public access to ML models and pipelines. These access points are commonly subjected to adversarial attacks, where imperceptible artefacts are injected into inputs to produce undesirable results. In the PVM framework, the audio input by our user is not directly passed to the TTS model, and is only matched to a consented speaker using feature similarity. This limits the scope of poor results that could be triggered by an adversarial user by negating direct access to the TTS model. Additionally, propagating audio input data from a genuine user through fewer modules in the pipeline limits opportunities for sensitive bio-metric data to be extracted by malicious third parties. Ultimately, removing direct control over synthesis of the input voice prevents bad actors from cloning a non-consenting speaker for nefarious goals. We also consider how PVM can be extended to help regulate open-source models. As mentioned in Section <ref>, an autoencoder could be applied to derive robust latent space representations of the preset-voices. Matching based on similarity would still occur on the raw preset-voice audios, but their corresponding preset encodings would be passed as input to the voice cloning TTS model. The encoder/decoder models would not be published alongside the rest of the system. As the TTS model would have only been trained on the latent embeddings, the published system could not be hijacked to clone non-consenting voices. In content localization systems, media content is leased by distributing platforms, while rights to the reproduction of the likenesses of individuals present in the content is not readily available. Not only can PVM secure these systems in the manners mentioned above, but its regulated application can help bring this budding market to life by efficiently producing translated content in only the voices of consenting speakers. We believe PVM provides feasibility to the commercialization of such systems while being robust against future industry regulations. We hope these examples give insight into the vast extensibility of the PVM framework. § ACKNOWLEDGEMENTS We thank the anonymous reviewers for their feedback on this work. We also thank Joy Christian, Chao-Lin Chen, Sina Pordanesh, Akash Lakhani, Kaivil Brahmbhatt, and Darshan Sarkale of Vosyn's PVM team for their contributions on the early stages of this work. IEEEtran